Q: SQL file encoding in Visual Studio When we create stored procedures in Visual Studio (with the extension .sql) the file encoding is automatically set to 'Unicode (UTF-8 with signature) - Codepage 65001'. This causes our DBAs problems, as their scripts don't like it. We have to manually go in and change them to 'Western European (Windows) - Codepage 1252'. This only seems to have been happening since we upgraded to VS 2008 from 2005. Can anyone explain what's going on and how to stop it?
A: To summarise the link provided by Codeslayer (in case the page URL changes)... Change the SQL file templates to ANSI encoding by opening them with Notepad and then saving them as ANSI files. You can do the same for files already created. The templates live under:
\Common7\Tools\Templates\Database Project Items
You've just made our DBAs very happy!
A: I think somebody faced a similar problem like yours and had the following workaround, which is posted at http://connect.microsoft.com/VisualStudio/feedback/Workaround.aspx?FeedbackID=319830
A: For Visual Studio 2010, there is another set of files you need to update: C:\Program Files (x86)\Microsoft Visual Studio 10.0\VSTSDB\Extensions\SqlServer\Items
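If there are many templates or already-created scripts to convert, the Notepad step can be done in bulk. The following is only a sketch, not part of the original answers: it assumes Windows-1252 is the "ANSI" code page you want, and the folder path is hypothetical, so point it at your own install.

using System.IO;
using System.Text;

class StripUtf8Bom
{
    static void Main()
    {
        // Hypothetical path -- adjust to your VS install or script folder.
        string dir = @"C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\Templates\Database Project Items";

        // Characters outside code page 1252 would be replaced with '?',
        // which should not matter for plain T-SQL templates.
        Encoding ansi = Encoding.GetEncoding(1252);

        foreach (string file in Directory.GetFiles(dir, "*.sql", SearchOption.AllDirectories))
        {
            // ReadAllText detects the UTF-8 signature when reading;
            // writing back with a code-page encoding drops the BOM.
            string text = File.ReadAllText(file);
            File.WriteAllText(file, text, ansi);
        }
    }
}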
{ "language": "en", "url": "https://stackoverflow.com/questions/45426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: NCover, TypeMock and MSTest Has anyone got NCover, TypeMock and MSTest to work together? And if so, how? I've had 2 or 3 serious tries at this now and just can't get it to work. I'm using MSTest 9, NCover 2.1 and TypeMock 4.1. Ideally I would like to run them from an MSBuild task. Cheers Mat
A: Well, it's a bit late, but here is the answer for future generations...
A few key points:
* In older versions of Typemock (like 4.1) you need an enterprise license in order to run Typemock with NCover. In the current version all licenses have the same feature list.
* In order to run Typemock with other profilers you need to use the link feature of Typemock. In your case you can do it with the Typemock MSBuild task.
* You need to run MSTest with the /noisolation argument. This prevents MSTest from spawning the VSTestHost.exe process that would otherwise run your tests, which breaks the environment variables the profilers need in order to work.
In the example below I'm running the tests in Tests.dll and asking for a coverage report on ClassLibrary.dll:
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="RunTests" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="c:\Program Files\Typemock\Isolator\5.2\TypeMock.MSBuild.Tasks" />
  <PropertyGroup>
    <NCOVER>"E:\src\TypeMock\Build\Binaries\NCover\NCover 2.0\NCover.Console.exe"</NCOVER>
    <MSTest>"C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe"</MSTest>
  </PropertyGroup>
  <Target Name="Test">
    <TypeMockStart Target="2.0" Link="NCover2.0"/>
    <Exec ContinueOnError="true" Command="$(NCOVER) //a ClassLibrary $(MSTest) /noisolation /testcontainer:E:\src\TestNcover3\MSBuildTest\bin\Debug\Tests.dll" />
    <TypeMockStop/>
  </Target>
</Project>
{ "language": "en", "url": "https://stackoverflow.com/questions/45431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Determining members of local groups via C# I wondered whether anybody knows how to obtain membership of local groups on a remote server programmatically via C#. Would this require administrator permissions? And if so, is there any way to confirm the currently logged-in user's membership (or not) of these groups?
A: Howto: (Almost) Everything In Active Directory via C# is very helpful and also includes instructions on how to iterate AD members in a group.

public ArrayList Groups(string userDn, bool recursive)
{
    ArrayList groupMemberships = new ArrayList();
    return AttributeValuesMultiString("memberOf", userDn, groupMemberships, recursive);
}

You will also need this function:

public ArrayList AttributeValuesMultiString(string attributeName, string objectDn, ArrayList valuesCollection, bool recursive)
{
    DirectoryEntry ent = new DirectoryEntry(objectDn);
    PropertyValueCollection ValueCollection = ent.Properties[attributeName];
    IEnumerator en = ValueCollection.GetEnumerator();
    while (en.MoveNext())
    {
        if (en.Current != null)
        {
            if (!valuesCollection.Contains(en.Current.ToString()))
            {
                valuesCollection.Add(en.Current.ToString());
                if (recursive)
                {
                    AttributeValuesMultiString(attributeName, "LDAP://" + en.Current.ToString(), valuesCollection, true);
                }
            }
        }
    }
    ent.Close();
    ent.Dispose();
    return valuesCollection;
}

If you do not want to use this AD method, you could use the info in this article, but it uses unmanaged code: http://www.codeproject.com/KB/cs/groupandmembers.aspx
A: It appears there is a new assembly in .NET 3.5 called System.DirectoryServices.AccountManagement which gives a cleaner implementation than System.DirectoryServices. Dominick Baier blogs about a couple of simple operations, including checking membership of a group:

public static bool IsUserInGroup(string username, string groupname, ContextType type)
{
    PrincipalContext context = new PrincipalContext(type);

    UserPrincipal user = UserPrincipal.FindByIdentity(
        context, IdentityType.SamAccountName, username);

    GroupPrincipal group = GroupPrincipal.FindByIdentity(
        context, groupname);

    return user.IsMemberOf(group);
}

I think I will use this approach. Thanks for the suggestions though! :-)
A: Perhaps this is something that can be done via WMI?
A: I asked a similar question, and ended up writing an answer which used WMI to enum the group members. I had real problems with authentication in the System.DirectoryServices.AccountManagement stuff. YMMV, of course.
A: I'd be curious if System.DirectoryServices.AccountManagement is fully managed. I've used System.DirectoryServices.ActiveDirectory, which is a wrapper for COM interop, and it has led to many headaches...
A: This may possibly help. I had to develop an app where we wanted to authenticate against Active Directory and also examine the group strings that the user is in. For a couple of reasons we didn't want to use Windows authentication, but rather have our own forms-based authentication. I developed the routine below to firstly authenticate the user, and secondly examine all the groups that the user belongs to. Perhaps it may help. The routine uses LogonUser to authenticate, then gets the list of numerical GUID-like group IDs (SIDs) for that user, and translates each one to a human-readable form. Hope this helps; I had to synthesise this approach from a variety of different Google searches.

private int validateUserActiveDirectory()
{
    IntPtr token = IntPtr.Zero;
    int DBgroupLevel = 0;

    // make sure you're yourself -- recommended at msdn http://support.microsoft.com/kb/248187
    RevertToSelf();

    if (LogonUser(txtUserName.Value, propDomain, txtUserPass.Text, LOGON32_LOGON_NETWORK, LOGON32_PROVIDER_DEFAULT, token) != 0)
    {
        // ImpersonateLoggedOnUser not required for us -- we are not doing impersonated stuff, but leave it here for completeness.
        //ImpersonateLoggedOnUser(token);

        // do impersonated stuff
        // end impersonated stuff

        // ensure that we are the original user
        CloseHandle(token);
        RevertToSelf();

        System.Security.Principal.IdentityReferenceCollection groups = Context.Request.LogonUserIdentity.Groups;
        IdentityReference translatedGroup = default(IdentityReference);
        foreach (IdentityReference g in groups)
        {
            translatedGroup = g.Translate(typeof(NTAccount));
            if (translatedGroup.Value.ToLower().Contains("desired group"))
            {
                inDBGroup = true;
                return 1;
            }
        }
        return 0; // authenticated, but not a member of the desired group
    }
    else
    {
        return 0;
    }
}
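For the WMI route suggested above, here is a minimal sketch using System.Management. The machine and group names are placeholders, and remote WMI queries of this kind generally need an account with administrative rights on the target server, so this does not remove the permissions question.

using System;
using System.Management;

class LocalGroupMembers
{
    static void Main()
    {
        string machine = "REMOTESERVER";  // placeholder
        string group = "Administrators";  // placeholder

        var scope = new ManagementScope(@"\\" + machine + @"\root\cimv2");
        scope.Connect();

        // Win32_GroupUser is the association class linking a group
        // (GroupComponent) to its members (PartComponent).
        var query = new SelectQuery(
            "SELECT PartComponent FROM Win32_GroupUser WHERE GroupComponent=" +
            "\"Win32_Group.Domain='" + machine + "',Name='" + group + "'\"");

        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject member in searcher.Get())
            {
                // PartComponent is a WMI path such as
                // \\HOST\root\cimv2:Win32_UserAccount.Domain="HOST",Name="bob"
                Console.WriteLine(member["PartComponent"]);
            }
        }
    }
}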
{ "language": "en", "url": "https://stackoverflow.com/questions/45437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: MS SQL FTI - searching on "n*" returns numbers This seems like odd behaviour from SQL's full-text index. FTI stores numbers in its index with an "NN" prefix, so "123" is saved as "NN123". Now when a user searches for words beginning with N (i.e. contains "n*") they also get all the numbers. So:
select [TextField] from [MyTable] where contains([TextField], '"n*"')
Returns:
MyTable.TextField
--------------------------------------------------
This text contains the word navigator
This text is nice
This text only has 123, and shouldn't be returned
Is there a good way to exclude that last row? Is there a consistent workaround for this?
Those extra "" are needed to make the wildcard token work:
select [TextField] from [MyTable] where contains([TextField], 'n*')
would search for the literal n* - and there aren't any.
--return rows with the word text
select [TextField] from [MyTable] where contains([TextField], 'text')
--return rows with the word tex*
select [TextField] from [MyTable] where contains([TextField], 'tex*')
--return rows with words that begin tex...
select [TextField] from [MyTable] where contains([TextField], '"tex*"')
A: There are a couple of ways to handle this, though neither is really all that great. First, add a column to your table that flags whether TextField is really a number. If you could do that and filter on it, you would have the most performant version. If that's not an option, then you will need to add a further filter: AND TextField NOT LIKE 'NN%[0-9]%'. The downside is that this would filter out 'NN12NOO', but that may be an edge case not represented by your data.
{ "language": "en", "url": "https://stackoverflow.com/questions/45438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ICalendar and event updates not working in Outlook I'm generating ICalendar (.ics) files. Using the UID and SEQUENCE fields I can update existing events in Google Calendar and in Windows Calendar, BUT NOT in MS Outlook 2007 - it just creates a second event. How do I get them to work for Outlook? Thanks Tom
A: I got a hold of Tom Carter, the asker. He had a working example with a request followed by a cancellation. What I had wrong was that my METHOD was inside my VEVENT when it should have been outside. So here is a working update!
Original:
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//WA//FRWEB//EN
METHOD:REQUEST
BEGIN:VEVENT
UID:FRICAL201
SEQUENCE:0
DTSTAMP:20081108T151809Z
ORGANIZER:[email protected]
DTSTART:20081109T121200
SUMMARY:11/9/2008 12:12:00 PM TRIP FROM JFK AIRPORT (JFK)
LOCATION:JFK AIRPORT (JFK)
END:VEVENT
END:VCALENDAR
Update:
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//WA//FRWEB//EN
METHOD:REQUEST
BEGIN:VEVENT
UID:FRICAL201
SEQUENCE:1
DTSTAMP:20081108T161809Z
ORGANIZER:[email protected]
DTSTART:20081109T121300
SUMMARY:11/9/2008 12:13:00 PM TRIP FROM JFK AIRPORT (JFK)
LOCATION:JFK AIRPORT (JFK)
END:VEVENT
END:VCALENDAR
All I did was add the request method (in the correct spot!) and an organizer.
A: I am using Outlook 2003 (from reading the posts, 2007 appears to behave in the same way) and you need to clearly distinguish between the behaviour of the explicit file import of an ics file and the implicit import when an ics file is 'double clicked'. On the Outlook menu File / Import and Export... Outlook will load as many VEVENT entries as are in the file, and no amount of changing UID:, SEQUENCE: or DTSTAMP: values changes this; i.e. if you change any data and re-import it you just get a duplicate set of entries. If you double click on an ics file it processes the first VEVENT entry only. However, it does recognise the UID and, if the DTSTAMP: is later (the SEQUENCE can be the same but not lower), you will be prompted and it will update the event in your calendar.
BEGIN:VCALENDAR
VERSION:2.0
PRODID:www.membership-services.net
METHOD:REQUEST
BEGIN:VEVENT
DTSTART:20090126T210000
DTEND:20090126T220000
SUMMARY:Avondale - Thameside Away Game vs Croydon
LOCATION:Whitgift School
DESCRIPTION:http://maps.google.co.uk/maps?f=q&hl=en&geocode=&q=CR2+6YT
UID:AWPC_8
SEQUENCE:0
DTSTAMP:20090123T112600
END:VEVENT
BEGIN:VEVENT
DTSTART:20090202T213000
DTEND:20090202T223000
SUMMARY:Avondale - Thameside Home Game vs Orcas
LOCATION:Putney
DESCRIPTION:http://maps.google.co.uk/maps?f=q&source=s_q&hl=en&ie=UTF8&ll=51.4635,-0.2285&spn=0.005,0.009613&t=h&z=17&iwloc=lyrftr:w2t.90,0x48760f04a04b1801:0x49ebf12503a5d5a9,51.463459,-0.228674
UID:AWPC_10
SEQUENCE:0
DTSTAMP:20090123T112600
END:VEVENT
END:VCALENDAR
A: Add this to your ICS file:
X-WR-RELCALID:MyCal123
where MyCal123 is a unique identifier for your calendar. By adding this line to an ICS file the entire calendar gets updated (after a prompt in Outlook). You don't even need to change the DTSTAMP or SEQUENCE or ORGANIZER, and METHOD:PUBLISH is fine for the update. Just update the event details, double-click the ICS and the calendar will update. Note that this also works fine if you have published the calendar and provided a URL for people to view it. They just need to hit refresh after about 2 mins and they will also get the update. Thanks to David Bjørnhart for pointing this out: ICal import creates a new calendar when opening the ics file.
A: I've continued to do some testing and have now managed to get Outlook to update and cancel events based on the .ics file.
Outlook in fact seems to respond to the rules defined in RFC 2446. In summary, you have to specify METHOD:REQUEST and ORGANIZER:xxxxxxxx in addition to UID: and SEQUENCE:. For a cancellation you have to specify METHOD:CANCEL.
Request/Update Example:
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//SYFADIS//PORTAIL FORMATION//FR
METHOD:REQUEST
BEGIN:VEVENT
UID:[email protected]
SEQUENCE:5
DTSTAMP:20081106T154911Z
ORGANIZER:[email protected]
DTSTART:20081113T164907
DTEND:20081115T170000
SUMMARY:TestTraining
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
Cancel Example:
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//SYFADIS//PORTAIL FORMATION//FR
METHOD:CANCEL
BEGIN:VEVENT
UID:[email protected]
SEQUENCE:7
DTSTAMP:20081106T154916Z
ORGANIZER:[email protected]
DTSTART:20081113T164907
SUMMARY:TestTraining
STATUS:CANCELLED
END:VEVENT
END:VCALENDAR
A: I'm using Entourage, so this may not match up exactly with the behavior you're seeing, but I hope it helps. Using the iCalendar from your reply, Entourage wouldn't even import the data. Using a known-good file, I got it to import, then successfully update. Comparing the two files, the only structural differences are as follows:
* My known-good doesn't have a VERSION element
* My known-good doesn't have a PRODID element
* My known-good doesn't have a STATUS element
* My known-good doesn't have a SEQUENCE element
Since Microsoft's support for open standards tends to lag, I'd suggest trying without the VERSION info.
{ "language": "en", "url": "https://stackoverflow.com/questions/45453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Suggest some good MVC framework in perl Can you suggest some good MVC frameworks for Perl? One I am aware of is Catalyst. The need is to be able to expose services on the Perl infrastructure which can be called by Java/.NET applications seamlessly.
A: Another alternative besides the ones already mentioned is Continuity; however, it is (as the name is meant to imply) continuation-based rather than MVC in the typical sense. Still, it's worth mentioning because it is one of the better Perl web frameworks. That said, I like Catalyst much better than any of the alternatives. And it's still getting better all the time! The downside of that is that current preferred coding approaches continue to evolve at a fairly hurried clip - but for the last couple of versions, there has been strong emphasis on API compatibility, so the burden is now mostly mental rather than administrative. The upcoming port of the internals to Moose in particular is poised to provide some excellent benefits. But the biggest argument in favour of Catalyst, IMO, is the Chained dispatch type. I have seen nothing like it in all of web-framework-dom, and it is a most excellent tool to keep your code as DRY as possible. This couples well with another great thing that Catalyst provides, namely uri_for - a method which takes a controller and a bunch of arguments and then constructs a URI that would dispatch to that place, which it returns. Together, these facilities mean that you can structure your URI space any way you deem right, yet at the same time can structure your controllers to avoid duplication of logic, and keep templates independent of the URI structure. It's just brilliant.
A: Seconding comments made by others: Catalyst (which more or less forked from Maypole) is by far and away the most complete and robust of them. There is a book by Jonathan Rockway that will certainly help you come to grips with it. In addition to the 'Chained' dispatch type, the :Regex (and :LocalRegex) dispatch methods provide enormous flexibility. The latest app we've built here supports a lot of disparate-looking URLs through just a handful of subs using :LocalRegex. I also particularly like the fact that you are not limited to a particular templating language or database. The mailing list (and the book) both have a preference for Template::Toolkit (as do I), and DBIx::Class (we continue to use Class::DBI), but you can use pretty much anything you like. Catalyst is marvelously agnostic that way. Don't be put off by the fact Catalyst seems to require half of CPAN as dependencies. Once you get it up and running, it is a well-oiled machine. It has reached a level of maturity now that once you come to grips with it, you find it 'fades into the background'. You spend your time solving business needs, not fighting with the tools you use. It does what it says on the tin. Catalyst++
A: Been playing with Squatting the last few days and I have to say it looks very promising and has been fun to use. It's a micro web framework (or web microframework ;-) and is heavily influenced by Camping, which is written in Ruby. NB. Squatting (& Camping) don't have model components baked into the framework. Here are the author's comments on models... "Models? The whole world is your model. ;-) I've always been ambivalent about defining policy here. Use whatever works for you"
A: There is also CGI::Application, which is more like the guts of a framework. It helps a person to write basic CGIs and glue bits onto it to make it as custom as they like. So you can have it use hardly any modules, or just about every one under the sun.
A: I'll tell you right now that Catalyst has by far the best reputation amongst Perl developers in terms of a rapid application development MVC framework. In terms of "pure" MVC I'm not sure there are even that many "mature" or at least production-ready alternatives. If Catalyst doesn't seem right to you, then you could build upon the lightweight framework CGI::Application to suit your needs, or take a look at some of the lesser-known MVC frameworks like PageKit and Maypole.
A: Catalyst is the way to go. There is also Jifty, but (last time I looked) it had terrible documentation.
A: For your problem I would take a look into Jifty::Plugin::REST, which allows access to models and actions using various formats. Let me just say that Jifty doesn't have terrible documentation. However, most of the included documentation is API documentation, but there is a very low-noise mailing list which has useful tips and links to applications. The wiki at http://jifty.org/ is another resource which has useful bits. If your goal is to make a video store (my favorite benchmark for 4GLs and CRUD frameworks) in an afternoon, it's really worth a look!
A: If you are already aware of Catalyst, then I recommend focusing on it. It is mature, well-documented, and has a very large user base, community, and collection of plug-ins.
A: Since this old thread popped up, I will mention two exciting new additions to the Perl MVC world:
* Dancer (CPAN), which is heavily influenced by Ruby's Sinatra, known for being very lightweight
* Mojolicious (CPAN), which is written by the original developer of Catalyst to use what he learned there; it has no non-core dependencies, with very modern builtins (HTML5/CSS3/WebSockets, JSON/XML parsers, its own UserAgent/templating engine)
(N.B. I have used Mojolicious more than Dancer, and as such if I missed some features of Dancer that I listed for Mojolicious then I apologize in advance)
A: Another option is Gantry, which when used in conjunction with the BigTop module can reduce the time it takes to build simple CRUD sites.
A: There is also Clearpress, which I can recommend as a useful database-backed application framework. It needs fewer dependencies than Catalyst. We have written a few large applications with it, and I run a badminton ladder website using it.
A: I have built some applications with Kelp; it's easy to learn and very helpful.
{ "language": "en", "url": "https://stackoverflow.com/questions/45470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Postback events from within DataView I'm presenting information from a DataTable on my page and would like to add some sorting functionality which goes a bit beyond a straightforward column sort. As such I have been trying to place LinkButtons in the HeaderItems of my GridView which post back to functions that change session information before reloading the page. Clicking my links DOES cause a post-back, but they don't seem to generate any OnClick events, as my OnClick functions don't get executed. I have AutoEventWireup set to true, and if I move the links out of the GridView they work fine. I've got around the problem by creating regular anchors, appending queries to their hrefs and checking for them at page load, but I'd prefer C# to be doing the grunt work. Any ideas? Update: To clarify, the IDs of the controls match their OnClick function names.
A: You're on the right track, but try working with the Command Name/Argument of the LinkButton. Try something like this: in the HeaderTemplate of the TemplateField, add a LinkButton and set the CommandName and CommandArgument:

<HeaderTemplate>
  <asp:LinkButton ID="LinkButton1" runat="server" CommandName="sort" CommandArgument="Products" Text='<%# Bind("ProductName") %>' />
</HeaderTemplate>

Next, set the RowCommand event of the GridView:

protected void GridView1_RowCommand(object sender, GridViewCommandEventArgs e)
{
    if (e.CommandName == "sort")
    {
        //Now sort by e.CommandArgument
    }
}

This way, you have a lot of control over your LinkButtons and you don't need to do much work to keep track of them.
A: Two things to keep in mind when using events on dynamically generated controls in ASP.NET:
* Firstly, the controls should ideally be created in the Page.Init event handler. This is to ensure that the controls have already been created before the event handling code is run.
* Secondly, you must assign the same value to the control's ID property, so that the event handler code knows that that was the control that should handle the event.
A: You can specify the method to call when the link is clicked:

<HeaderTemplate>
  <asp:LinkButton ID="lnkHdr1" Text="Hdr1" OnCommand="lnkHdr1_OnCommand" CommandArgument="Hdr1" runat="server"></asp:LinkButton>
</HeaderTemplate>

The code-behind:

protected void lnkHdr1_OnCommand(object sender, CommandEventArgs e)
{
    // e.CommandArgument
}
{ "language": "en", "url": "https://stackoverflow.com/questions/45475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to do streaming read of a large XML file in C# 3.5 How can you do a streaming read on a large XML file that contains an xs:sequence just below the root element, without loading the whole file into an XDocument instance in memory?
A: Going with a SAX-style element parser and the XmlTextReader class created with XmlReader.Create would be a good idea, yes. Here's a slightly-modified code example from CodeGuru:

void ParseURL(string strUrl)
{
    try
    {
        using (var reader = XmlReader.Create(strUrl))
        {
            while (reader.Read())
            {
                switch (reader.NodeType)
                {
                    case XmlNodeType.Element:
                        var attributes = new Hashtable();
                        var strURI = reader.NamespaceURI;
                        var strName = reader.Name;
                        if (reader.HasAttributes)
                        {
                            for (int i = 0; i < reader.AttributeCount; i++)
                            {
                                reader.MoveToAttribute(i);
                                attributes.Add(reader.Name, reader.Value);
                            }
                        }
                        StartElement(strURI, strName, strName, attributes);
                        break;
                    //
                    //you can handle other cases here
                    //
                    //case XmlNodeType.EndElement:
                    //    // Todo
                    //case XmlNodeType.Text:
                    //    // Todo
                    default:
                        break;
                }
            }
        }
    }
    catch (XmlException e)
    {
        Console.WriteLine("error occurred: " + e.Message);
    }
}

A: I can't add a comment, since I just signed up, but the code sample posted by Hirvox and currently selected as the answer has a bug in it. It should not have the new statement when using the static Create method.
Current: using (var reader = new XmlReader.Create(strUrl))
Fixed: using (var reader = XmlReader.Create(strUrl))
A: I think it's not possible if you want to use the object model (i.e. XElement\XDocument) to query XML. Obviously, you can't build an XML object tree without reading enough data. However, you can use the XmlReader class. The XmlReader class reads XML data from a stream or file. It provides non-cached, forward-only, read-only access to XML data.
A: Here is a howto: http://support.microsoft.com/kb/301228/en-us. Just remember that you should not use XmlTextReader but instead XmlReader in conjunction with XmlReader.Create.
A: I'm confused by the mention of the "xs:sequence" - this is an XML Schema element. Are you trying to open a large XML Schema file? Are you opening a large XML file that is based on that schema? Or are you trying to open a large XML file and validate it at the same time? None of these situations should give you a problem using the standard XmlReader (or XmlValidatingReader). Reading XML with XmlReader: http://msdn.microsoft.com/en-us/library/9d83k261(VS.80).aspx
A: That code sample tries to turn XmlReader-style code into SAX-style code - if you're writing code from scratch I'd just use XmlReader as it was intended - Pull, not Push.
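To illustrate that pull model, here is a minimal sketch that combines XmlReader with XElement, so each child of the sequence is materialised one at a time while the rest of the file stays unread. The file name and the repeating element name "item" are placeholders for whatever your schema actually uses:

using System;
using System.Collections.Generic;
using System.Xml;
using System.Xml.Linq;

class XmlStreaming
{
    // Yields one element at a time; only the current element is in memory.
    static IEnumerable<XElement> StreamElements(string path, string name)
    {
        using (XmlReader reader = XmlReader.Create(path))
        {
            reader.MoveToContent();
            while (!reader.EOF)
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == name)
                {
                    // XNode.ReadFrom consumes the element and advances the
                    // reader past it, so don't call Read() here as well.
                    yield return (XElement)XNode.ReadFrom(reader);
                }
                else
                {
                    reader.Read();
                }
            }
        }
    }

    static void Main()
    {
        foreach (XElement item in StreamElements("large.xml", "item"))
        {
            Console.WriteLine(item.Name);
        }
    }
}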
{ "language": "en", "url": "https://stackoverflow.com/questions/45481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Are there conventions for function names when using the Perl Test::More? Are there conventions for function names when using the Perl Test::More or Test::Simple modules? I'm specifically asking about the names of functions that are used to set up a test environment before the test and to tear down the environment after successful completion of the test(s). cheers, Rob
A: If you are looking for more XUnit-style testing, check out Test::Class. It provides the Test(setup) and Test(teardown) attributes for methods that, well, set up and tear down your environment. It also gives you a much nicer way of dealing with plans (you can provide one for each test method individually, so the counting is much less fiddly) and lets you inherit tests via test class hierarchies.
A: I don't think there are any such conventions out there. The only way you can do it is perhaps to use BEGIN/END blocks, if the resources are to be used over the whole file. This solves the problem. The general approach I take is to put related tests in one code block and then initialize the variables/resources there. You can perhaps keep an easy count of how many tests you have for each function. Something like...

BEGIN {
    # If you want to set some global db setting/file setting/INC changes etc
}

# Tests functionality 1...
{
    # have fun ....
}

# Tests functionality 2...
{
    # have more fun ....
}

END {
    # Clean up the BEGIN changes
}

On another note, you may want to read this for testing in Perl: http://perlandmac.blogspot.com/2007/08/using-perl-testsimple-and-testmore.html
A: I do not think there is an official set of conventions, so I would recommend looking at the examples at http://perldoc.perl.org/Test/More.html and seeing how they write their tests.
A: Thanks Espo. I've had a look at the relevant perldocs, but there's no real convention regarding the setup and teardown aspects. Not like the xUnit series of tests. Thanks for the answer Jagmal, but I'm not sure about using the BEGIN and END blocks for the setup and teardown, as you are not making clear what you are doing by the names. There's also the obvious problem of only having one setup run and one teardown run per test, i.e. per each .t file. I've had a quick look at Test::Most and it looks really interesting, especially the explain function. Thanks Matt. Hmmm. Just thinking further about using the BEGIN and END blocks, I'm thinking if I decrease the granularity of the tests so that there is only one setup and one teardown needed, then this would be a good solution. cheers, Rob
A: We use Test::More extensively for our unit tests, as a lot (most) of our data processing scripts are written in Perl. We don't have a specific convention for the function names, but rather do something like Jagmal suggests, namely breaking the tests up into smaller chunks and initializing locally. In our case each subtest is encapsulated in a separate function within the test script. On top of this we have a framework that allows us to run all the subtests (the full unit test) or call individual subtests or sets of subtests, to allow for running just the ones we're working on at the moment.
A: The first convention I'd suggest is ditching Test::More for Test::Most.
A: Perl testing scripts aren't special or magic in any way. As such, they can contain the exact same things that any other Perl script can. You can name routines anything you want, and call them before, after, and intertwingled with, your tests. You can have any amount of initialization code before any tests, any amount of cleanup code after tests, and any amount of any other code mixed in with tests. This all assumes that you're talking about CPAN-style t/*.t test scripts. I think you are, but I can manage to read your question as one about extending test harnesses, if I squint just right.
A: If you are open to getting into acceptance testing as well, like Ruby's Cucumber, take a look at this small example: http://github.com/kesor/p5-cucumber. It uses Test::More and a Cucumber style of acceptance testing.
{ "language": "en", "url": "https://stackoverflow.com/questions/45485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: MySQL Error 1093 - Can't specify target table for update in FROM clause I have a table story_category in my database with corrupt entries. The following query returns the corrupt entries:

SELECT *
FROM story_category
WHERE category_id NOT IN (
    SELECT DISTINCT category.id
    FROM category
    INNER JOIN story_category ON category_id=category.id);

I tried to delete them by executing:

DELETE FROM story_category
WHERE category_id NOT IN (
    SELECT DISTINCT category.id
    FROM category
    INNER JOIN story_category ON category_id=category.id);

But I get the following error: #1093 - You can't specify target table 'story_category' for update in FROM clause. How can I overcome this?
A: If you can't do

UPDATE table SET a=value WHERE x IN (SELECT x FROM table WHERE condition);

because it is the same table, you can trick it and do:

UPDATE table SET a=value WHERE x IN (SELECT * FROM (SELECT x FROM table WHERE condition) as t);

[update or delete or whatever]
A: Update: This answer covers the general error classification. For a more specific answer about how to best handle the OP's exact query, please see other answers to this question.
In MySQL, you can't modify the same table which you use in the SELECT part. This behaviour is documented at: http://dev.mysql.com/doc/refman/5.6/en/update.html
Maybe you can just join the table to itself. If the logic is simple enough to re-shape the query, lose the subquery and join the table to itself, employing appropriate selection criteria. This will cause MySQL to see the table as two different things, allowing destructive changes to go ahead.

UPDATE tbl AS a
INNER JOIN tbl AS b ON ....
SET a.col = b.col

Alternatively, try nesting the subquery deeper into a from clause... If you absolutely need the subquery, there's a workaround, but it's ugly for several reasons, including performance:

UPDATE tbl SET col = (
  SELECT ... FROM (SELECT.... FROM) AS x);

The nested subquery in the FROM clause creates an implicit temporary table, so it doesn't count as the same table you're updating.
... but watch out for the query optimiser. However, beware that from MySQL 5.7.6 and onward, the optimiser may optimise out the subquery and still give you the error. Luckily, the optimizer_switch variable can be used to switch off this behaviour, although I couldn't recommend doing this as anything more than a short-term fix, or for small one-off tasks.

SET optimizer_switch = 'derived_merge=off';

Thanks to Peter V. Mørch for this advice in the comments. The example technique was from Baron Schwartz, originally published at Nabble, paraphrased and extended here.
A: The simplest way to do this is to use a table alias when you are referring to the parent query table inside the subquery.
Example:

insert into xxx_tab (trans_id) values ((select max(trans_id)+1 from xxx_tab));

Change it to:

insert into xxx_tab (trans_id) values ((select max(P.trans_id)+1 from xxx_tab P));

A: NexusRex provided a very good solution for deleting with a join from the same table. If you do this:

DELETE FROM story_category
WHERE category_id NOT IN (
        SELECT DISTINCT category.id AS cid
        FROM category
        INNER JOIN story_category ON category_id=category.id
)

you are going to get an error. But if you wrap the condition in one more select:

DELETE FROM story_category
WHERE category_id NOT IN (
    SELECT cid FROM (
        SELECT DISTINCT category.id AS cid
        FROM category
        INNER JOIN story_category ON category_id=category.id
    ) AS c
)

it would do the right thing!!
Explanation: The query optimizer does a derived merge optimization for the first query (which causes it to fail with the error), but the second query doesn't qualify for the derived merge optimization. Hence the optimizer is forced to execute the subquery first.
A: You could insert the desired rows' ids into a temp table and then delete all the rows that are found in that table, which may be what @Cheekysoft meant by doing it in two steps.
A:

DELETE FROM story_category
WHERE category_id NOT IN (
    SELECT cid FROM (
        SELECT DISTINCT category.id AS cid
        FROM category
        INNER JOIN story_category ON category_id=category.id
    ) AS c
)

A: According to the MySQL UPDATE syntax linked by @CheekySoft, it says right at the bottom: Currently, you cannot update a table and select from the same table in a subquery. I guess you are deleting from story_category while still selecting from it in the subquery.
A: Try saving the result of the Select statement in a separate variable and then use that for the delete query.
A: Try this:

DELETE FROM story_category
WHERE category_id NOT IN (
    SELECT DISTINCT category.id
    FROM category
    INNER JOIN (SELECT * FROM story_category) sc ON sc.category_id=category.id
);

A: For the specific query the OP is trying to achieve, the ideal and most efficient way to do this is NOT to use a subquery at all. Here are the LEFT JOIN versions of the OP's two queries:

SELECT s.*
FROM story_category s
LEFT JOIN category c ON c.id=s.category_id
WHERE c.id IS NULL;

Note: DELETE s restricts delete operations to the story_category table. Documentation

DELETE s
FROM story_category s
LEFT JOIN category c ON c.id=s.category_id
WHERE c.id IS NULL;

A: This is what I did for updating a Priority column value by 1 if it is >=1 in a table, using in its WHERE clause a subquery on the same table to make sure that at least one row contains Priority=1 (because that was the condition to be checked while performing the update):

UPDATE My_Table
SET Priority=Priority + 1
WHERE Priority >= 1
AND (SELECT TRUE FROM (SELECT * FROM My_Table WHERE Priority=1 LIMIT 1) as t);

I know it's a bit ugly, but it does work fine.
A: Recently I had to update records in the same table. I did it like below:

UPDATE skills AS s, (SELECT id FROM skills WHERE type = 'Programming') AS p
SET s.type = 'Development'
WHERE s.id = p.id;

A: The inner join in your subquery is unnecessary. It looks like you want to delete the entries in story_category where the category_id is not in the category table. Instead of this:

DELETE FROM story_category
WHERE category_id NOT IN (
    SELECT DISTINCT category.id
    FROM category
    INNER JOIN story_category ON category_id=category.id);

Do this:

DELETE FROM story_category
WHERE category_id NOT IN (
    SELECT DISTINCT category.id
    FROM category);

A: If something does not work when coming through the front door, then take the back door:

drop table if exists apples;
create table if not exists apples(variety char(10) primary key, price int);
insert into apples values('fuji', 5), ('gala', 6);

drop table if exists apples_new;
create table if not exists apples_new like apples;
insert into apples_new select * from apples;

update apples_new
set price = (select price from apples where variety = 'gala')
where variety = 'fuji';

rename table apples to apples_orig;
rename table apples_new to apples;
drop table apples_orig;

It's fast. The bigger the data, the better.
A: How about this query? Hope it helps:

DELETE story_category
FROM story_category
LEFT JOIN (SELECT id FROM category) cat ON story_category.category_id = cat.id
WHERE cat.id IS NULL;

A: As far as I can tell, you want to delete rows in story_category that do not exist in category. Here is your original query to identify the rows to delete:

SELECT *
FROM story_category
WHERE category_id NOT IN (
    SELECT DISTINCT category.id
    FROM category
    INNER JOIN story_category ON category_id=category.id
);

Combining NOT IN with a subquery that JOINs the original table seems unnecessarily convoluted. This can be expressed in a more straightforward manner with not exists and a correlated subquery:

select sc.*
from story_category sc
where not exists (select 1 from category c where c.id = sc.category_id);

Now it is easy to turn this into a delete statement:

delete from story_category
where not exists (select 1 from category c where c.id = story_category.category_id);

This query would run on any MySQL version, as well as in most other databases that I know.
Demo on DB Fiddle:

-- set-up
create table story_category(category_id int);
create table category (id int);
insert into story_category values (1), (2), (3), (4), (5);
insert into category values (4), (5), (6), (7);

-- your original query to identify offending rows
SELECT * FROM story_category WHERE category_id NOT IN (
    SELECT DISTINCT category.id FROM category
    INNER JOIN story_category ON category_id=category.id);

| category_id |
| ----------: |
|           1 |
|           2 |
|           3 |

-- a functionally-equivalent, simpler query for this
select sc.* from story_category sc
where not exists (select 1 from category c where c.id = sc.category_id);

| category_id |
| ----------: |
|           1 |
|           2 |
|           3 |

-- the delete query
delete from story_category
where not exists (select 1 from category c where c.id = story_category.category_id);

-- outcome
select * from story_category;

| category_id |
| ----------: |
|           4 |
|           5 |
{ "language": "en", "url": "https://stackoverflow.com/questions/45494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "799" }
Q: Wacom tablet Python interface If possible I want to catch pressure-sensitive input from a Wacom tablet in Python. Are there any Python libraries available that can do this?
A: You could perhaps take a look at the software described here. It is a GNOME applet, written in Python. From the web site: "The gnome wacom applet is a small gnome panel applet that shows how much pressure is being applied to your wacom tablet by the current device. Clicking on the panel icon brings up a dialog allowing you to select a different device and check what pressure and tilt information is being received from it. This dialog also contains a small drawing test area to give your pen a quick test." Google is your friend.
A: Use PySide (wrapper for Qt)'s QTabletEvent: http://www.pyside.org/docs/pyside/PySide/QtGui/QTabletEvent.html#PySide.QtGui.QTabletEvent
A: For Mac OS X: https://bitbucket.org/AnomalousUnderdog/pythonmactabletlib
A small Python library to allow Python scripts to access pen tablet input data in Mac OS X. The library exists as plain C code compiled as a dynamic library/shared object. It interfaces with Mac OS X's API to get data on pen tablet input. Then, Python scripts can use ctypes to get the data. Send me a message if you have any problems with it.
A: Pressure data is available in PyGObject to access Gtk+ 3 on multiple platforms, though "Windows users may still want to keep using PyGTK until more convenient installers are published." [citation] Motion event objects generated by pressure-sensitive devices will carry pressure data.
{ "language": "en", "url": "https://stackoverflow.com/questions/45500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Is there a Python library for generating .ico files? I'm looking to create favicon.ico files programmatically from Python, but PIL only has support for reading ico files.
A: Perhaps the following would work:
* Generate your icon image using PIL
* Convert the image to .ico format using the python interface to ImageMagick, PythonMagick
I have not tried this approach. The ImageMagick convert command line program was able to convert a .png file to .ico format, so at least ImageMagick supports the .ico format.
A: According to Wikipedia modern browsers can handle favicons in PNG format, so maybe you could just generate that? Alternatively the ICO article describes the format...
A: You can use Pillow:

from PIL import Image

filename = r'logo.png'
img = Image.open(filename)
img.save('logo.ico')

Optionally, you may specify the icon sizes you want:

icon_sizes = [(16, 16), (32, 32), (48, 48), (64, 64)]
img.save('logo.ico', sizes=icon_sizes)

The Pillow docs say that by default it will generate sizes [(16, 16), (24, 24), (32, 32), (48, 48), (64, 64), (128, 128), (255, 255)] and any size bigger than the original size or 255 will be ignored. Yes, it is in the Read-only section of the docs, but it works to some extent.
A: Although this question is rather old, it's a prominent search result for using Python to convert PNG files to ICO, so I thought I'd add my two cents. If all you want is a favicon, Douglas Leeder's answer seems perfectly fine to me. If you have one high-resolution PNG file of your logo and want to convert it to an ICO file, the answer of Ronan Paixão is probably the easiest way to go. But an ICO file can contain multiple images, intended for different resolutions, and I often found myself in the situation of wanting to have fine-grained control over these different resolutions, to avoid unfortunate anti-aliasing effects, which means that I want to provide each image resolution individually. Which means that I want to convert not a single, but multiple PNG files into a single ICO file. As far as I can see, the Pillow package doesn't provide this capability. Fortunately, a modern ICO file can contain multiple PNG files inside, so the task boils down to the simple challenge of writing some header entries. Depending on the situation, I wrote two functions, the first one basically the solution of Ronan Paixão, while the second one provides the functionality to join several PNG files into one ICO file:

from pathlib import Path
from PIL import Image

def bake_one_big_png_to_ico(sourcefile, targetfile, sizes=None):
    """Converts one big PNG into one ICO file.

    args:
        sourcefile (str): Pathname of a PNG file.
        targetfile (str): Pathname of the resulting ICO file.
        sizes (list of int): Requested sizes of the resulting
            icon file, defaults to [16, 32, 48].

    Use this function if you have one big, square PNG file
    and don't care about fine-tuning individual icon sizes.

    Example::

        sourcefile = "Path/to/high_resolution_logo_512x512.png"
        targetfile = "Path/to/logo.ico"
        sizes = [16, 24, 32, 48, 256]
        bake_one_big_png_to_ico(sourcefile, targetfile, sizes)
    """
    if sizes is None:
        sizes = [16, 32, 48]
    icon_sizes = [(x, x) for x in sizes]
    Image.open(sourcefile).save(targetfile, sizes=icon_sizes)

def bake_several_pngs_to_ico(sourcefiles, targetfile):
    """Converts several PNG files into one ICO file.

    args:
        sourcefiles (list of str): A list of pathnames of PNG files.
        targetfile (str): Pathname of the resulting ICO file.

    Use this function if you want to have fine-grained control over
    the resulting icon file, providing each possible icon resolution
    individually.

    Example::

        sourcefiles = [
            "Path/to/logo_16x16.png",
            "Path/to/logo_32x32.png",
            "Path/to/logo_48x48.png"
        ]
        targetfile = "Path/to/logo.ico"
        bake_several_pngs_to_ico(sourcefiles, targetfile)
    """
    # Write the global header
    number_of_sources = len(sourcefiles)
    data = bytes((0, 0, 1, 0, number_of_sources, 0))
    offset = 6 + number_of_sources * 16

    # Write the header entries for each individual image
    for sourcefile in sourcefiles:
        img = Image.open(sourcefile)
        data += bytes((img.width, img.height, 0, 0, 1, 0, 32, 0))
        bytesize = Path(sourcefile).stat().st_size
        data += bytesize.to_bytes(4, byteorder="little")
        data += offset.to_bytes(4, byteorder="little")
        offset += bytesize

    # Write the individual image data
    for sourcefile in sourcefiles:
        data += Path(sourcefile).read_bytes()

    # Save the icon file
    Path(targetfile).write_bytes(data)

The code presupposes that your PNG files are 32-bit-per-pixel RGBA images. Otherwise, the number 32 in the above code would have to be changed and should be replaced with some Pillow image sniffing.
A: If you have imageio (probably the best library for reading/writing images in Python), you can use it:

import imageio

img = imageio.imread('logo.png')
imageio.imwrite('logo.ico', img)

Install is as easy as pip install imageio
A: I don't know if this applies for all cases, but on WinXP an .ico can be a bmp of size 16x16, 32x32 or 64x64. Just change the extension to ico from bmp and you're ready to go.
A: I was trying to batch-convert my logo into multiple sizes for my Python-based app packaged with fbs, and based on the above answers I ended up using the code below. It worked perfectly; even the generated Icon.ico shows up in Windows. In Linux the one Icon.ico shows as compressed, but I'm not too worried about that, as it's only going to be used in Windows.

from PIL import Image

icon_sizes = [(16, 16), (24, 24), (32, 32), (48, 48), (64, 64), (128, 128), (255, 255)]
image = Image.open('/some/path/to/logo-python/logo.png')
fileoutpath = '/some/path/to/logo-python/'

for size in icon_sizes:
    print(size[0])
    fileoutname = fileoutpath + str(size[0]) + ".png"
    new_image = image.resize(size)
    new_image.save(fileoutname)

new_logo_ico_filename = fileoutpath + "Icon.ico"
new_logo_ico = image.resize((128, 128))
new_logo_ico.save(new_logo_ico_filename, format="ICO", quality=90)
{ "language": "en", "url": "https://stackoverflow.com/questions/45507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: How can I call a .NET DLL from an Inno Setup script? I want to call a function from a .NET DLL (coded in C#) from an Inno Setup script. I have:
* marked the Register for COM interop option in the project properties,
* changed the ComVisible setting in the AssemblyInfo.cs file,
* added these lines to the ISS script:
[Files]
Source: c:\temp\1\MyDLL.dll; Flags: dontcopy
[Code]
function MyFunction(): string; external 'MyFunction@files:MyDLL.dll stdcall setuponly';
But I still get the following error:
Runtime Error (at -1:0): Cannot Import dll:C:\DOCUME~1\foo\LOCALS~1\Temp\is-LRL3E.tmp\MyDLL.dll.
What am I doing wrong?
A: Try this way:

Var
  obj: Variant;
  va: MyVariableType;
Begin
  //Starting
  ExtractTemporaryFile('MyDll.dll');
  RegisterServer(False, ExpandConstant('{tmp}\MyDll.dll'), False);
  obj := CreateOleObject('MyDll.MyClass');
  //Using
  va := obj.MyFunction();
  //Finishing
  UnregisterServer(False, ExpandConstant('{tmp}\MyDll.dll'), False);
  DeleteFile('{tmp}\MyDll.dll');
End;

Good luck!
A: Oops, my bad, it's been too long since I've read Pascal! So, if you need to get the value, then there are a couple of possibilities:
* Write the functionality in C/C++ and export the function; that's definitely supported.
* Use a managed C++ DLL to shim to your .NET DLL, and expose the call as a C interface point (this should work, but it's getting messy).
* Use an .exe to store the result of your code in a .INI file, the registry, or a temp file, and read the result in the setup code section (this is now properly nasty).
When I last worked with InnoSetup it didn't support your scenario directly (calling .NET code from setup).
A: I read a little bit more about it - now I can see the difference between importing a C-style function and creating an OLE object. Something like this would work for me:

[Code]
procedure MyFunction();
var
  oleObject: Variant;
begin
  oleObject := CreateOleObject('MyDLL.MyDLL');
  MsgBox(oleObject.MyFunction, mbInformation, mb_Ok);
end;

but it requires registering the DLL file. I guess I will have to create a command-line application to call the functions from the DLL.
A: Use the Unmanaged Exports library to export a function from a C# assembly, in a way that it can be called in Inno Setup.
* Implement a static method in a C# class library
* Add the Unmanaged Exports NuGet package to your project
* Set Platform target of your project to x86
* Add the DllExport attribute to your method
* If needed, define a marshaling for the function arguments (particularly marshaling of string arguments has to be defined)
* Build

using RGiesecke.DllExport;
using System.Runtime.InteropServices;
using System.Text.RegularExpressions;

namespace MyNetDll
{
    public class MyFunctions
    {
        [DllExport(CallingConvention = CallingConvention.StdCall)]
        public static bool RegexMatch(
            [MarshalAs(UnmanagedType.LPWStr)]string pattern,
            [MarshalAs(UnmanagedType.LPWStr)]string input)
        {
            return Regex.Match(input, pattern).Success;
        }
    }
}

On the Inno Setup side:

[Files]
Source: "MyNetDll.dll"; Flags: dontcopy
[Code]
function RegexMatch(Pattern: string; Input: string): Boolean;
  external 'RegexMatch@files:MyNetDll.dll stdcall';

And now you can use your function like this:

if RegexMatch('[0-9]+', '123456789') then
begin
  Log('Matched');
end
  else
begin
  Log('Not matched');
end;

See also:
* Returning a string from a C# DLL with Unmanaged Exports to Inno Setup script
* Inno Setup - External .NET DLL with dependencies
A: You're trying to import a C-style function from your .NET DLL - this doesn't really have anything to do with COM interop. COM interop allows you to activate your .NET objects as COM objects; it doesn't expose them as C/C++ exported functions/types. If your function doesn't need to return any data, why not make a simple .exe that calls your function and just run that from your setup? Also: see the innosetup support newsgroups, where you might get better support.
A: A .NET DLL can best be called from any other programming language by exposing it as a COM object. Take a look at this example: http://support.microsoft.com/kb/828736. It shows how to call a ".NET DLL" from "unmanaged C++". You can replace the "unmanaged C++" with any other programming language that can be used as a COM client.
A: Try using delayload; it is used for a DLL that may not exist at runtime. This solves the problem. For example:

[Files]
Source: odbccp32.dll; Flags: dontcopy
[Code]
procedure SQLConfigDataSource(hwndParent: Integer; Frequest: Integer; LpszDriver: String; lpszAttributes: String);
  external 'SQLConfigDataSource@files:odbccp32.dll stdcall delayload';
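For the .exe workaround mentioned in one of the answers above, the helper can be very small. This is a hypothetical sketch (the file name and the GetValue logic are placeholders); the setup script would run it with Exec and read the file back with LoadStringFromFile in the [Code] section:

using System;
using System.IO;

class SetupHelper
{
    // Writes the result of some .NET-only logic to a file that the
    // Inno Setup [Code] section can read back after Exec returns.
    static int Main(string[] args)
    {
        string resultFile = args.Length > 0
            ? args[0]
            : Path.Combine(Path.GetTempPath(), "setup_result.txt");

        File.WriteAllText(resultFile, GetValue());
        return 0;
    }

    // Stand-in for whatever your .NET DLL actually computes.
    static string GetValue()
    {
        return Environment.MachineName;
    }
}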
{ "language": "en", "url": "https://stackoverflow.com/questions/45510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Simplest way to change listview and treeview colours I'm trying to find a simple way to change the colour of the text and background in listview and treeview controls in WTL or plain Win32 code. I really don't want to have to implement full owner drawing for these controls, simply change the colours used. I want to make sure that the images are still drawn with proper transparency. Any suggestions?
A: Have a look at the following macros:
ListView_SetBkColor
ListView_SetTextColor
TreeView_SetBkColor
TreeView_SetTextColor
A: There are also appropriate methods of the CListViewCtrl and CTreeViewCtrl wrapper classes:
* GetBkColor
* SetBkColor
A: You may also want to take a look at WTL's CCustomDraw::OnItemPrePaint (that's if you need to control the drawing of individual items). A good article that describes this process is here.
A: It's been a while since I've used the Win32 API directly, but I believe that if you handle the WM_ERASEBKGND message for your control, you can use FillRect() in your handler to paint the background using whatever color you like.
{ "language": "en", "url": "https://stackoverflow.com/questions/45528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: ASPSmartUpload v3.2 I have the unfortunate task of fixing a legacy ASP site that was migrated to Windows Server 2003, and I know nothing about ASP. The main issue appears to be with ASPSmartUpload, of which I have version 2.3 installed. According to http://foradvice.net/smart_upload_faq.htm:
FAQ: does aspSmartUpload work on Windows 2003 server? Yes, last versions of aspSmartUpload work fine on the Windows 2003 server. If you upgrade your OS and used an old version of aspSmartUpload, you have to download and setup aspSmartUpload 3.2+.
Of course, aspsmart.com doesn't respond and any Google result for "aspsmartupload 3.2" points to the dead link. The latest version I can find anywhere is v3.0, on some dodgy-looking DLL download site. What is the best way to resolve this, or can anyone provide a working link to version 3.2 of ASPSmartUpload? Thanks!
A: We searched for quite a while before finding these DLLs. Here is the link for the ASPSMARTUPLOAD.DLL Usage page that tells you how to install it, and a link for downloading version 3.3. From what I understand, Windows Server 2008 requires version 3.2 or higher, but we couldn't find version 3.2. I believe version 3.3 will work the same. One other item: we had to get the msvbvm50.dll VB runtimes for this to work on Windows Server 2008. Once these files were registered and the server was restarted, we got past this issue.
A: Fortunately I have a copy of the original v3.3 distribution. I've shared it here.
{ "language": "en", "url": "https://stackoverflow.com/questions/45534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Get month and year from a datetime in SQL Server 2005 I need the month+year from the datetime in SQL Server, like 'Jan 2008'. I'm grouping the query by month, year. I've searched and found functions like datepart, convert, etc., but none of them seem useful for this. Am I missing something here? Is there a function for this?
A: If you mean you want them back as a string, in that format:

SELECT CONVERT(CHAR(4), date_of_birth, 100) + CONVERT(CHAR(4), date_of_birth, 120)
FROM customers

Here are the other format options.
A: Beginning with SQL Server 2012, you can use:

SELECT FORMAT(@date, 'yyyyMM')

A: How about this?

Select DateName(Month, getDate()) + ' ' + DateName(Year, getDate())

A: That format doesn't exist. You need to do a combination of two things:

select convert(varchar(4), getdate(), 100) + convert(varchar(4), year(getdate()))

A: (Month(Created) + ',' + Year(Created)) AS Date
A: The best way to do that is with:

dateadd(month, datediff(month, 0, your_date), 0)

It will keep your datetime type.
A: cast(cast(sq.QuotaDate as date) as varchar(7)) gives the "2006-04" format.
A: The question is about SQL Server 2005; many of the answers here are for later versions of SQL Server.

select convert(varchar(7), getdate(), 20) --Typical output 2015-04

SQL Server 2005 does not have the date function, which was introduced in SQL Server 2008.
A: This returns the full month name, a hyphen, and the full year, e.g. March-2017:

CONCAT(DATENAME(mm, GetDate()), '-', DATEPART(yy, GetDate()))

A:

select datepart(month,getdate()) -- integer (1,2,3...)
      ,datepart(year,getdate())  -- integer
      ,datename(month,getdate()) -- string ('September',...)

A: Use:

select datepart(mm,getdate()) --to get month value
select datename(mm,getdate()) --to get name of month

A: In SQL Server 2012, the below can be used:

select FORMAT(getdate(), 'MMM yyyy')

This gives exactly "Jun 2016".
A: Funny, I was just playing around writing this same query out in SQL Server and then LINQ.

SELECT DATENAME(mm, article.Created) AS Month, DATENAME(yyyy, article.Created) AS Year, COUNT(*) AS Total
FROM Articles AS article
GROUP BY DATENAME(mm, article.Created), DATENAME(yyyy, article.Created)
ORDER BY Month, Year DESC

It produces the following output (example):

Month   | Year | Total
January | 2009 | 2

A: I had the same problem and after looking around I found this:

SELECT DATENAME(yyyy, date) AS year
FROM Income
GROUP BY DATENAME(yyyy, date)

It's working great!
A: Converting the date to the first of the month allows you to Group By and Order By a single attribute, and it's faster in my experience.

declare @mytable table(mydate datetime)
declare @date datetime
set @date = '19000101'
while @date < getdate() begin
    insert into @mytable values(@date)
    set @date = dateadd(day,1,@date)
end

select count(*) total_records from @mytable

select dateadd(month,datediff(month,0,mydate),0) first_of_the_month, count(*) cnt
from @mytable
group by dateadd(month,datediff(month,0,mydate),0)

A: ---Lalmuni Demos---
create table Users
(
    userid int, date_of_birth date
)
---insert values---
insert into Users values(4,'9/10/1991')

select DATEDIFF(year,date_of_birth, getdate()) - (CASE WHEN (DATEADD(year, DATEDIFF(year,date_of_birth, getdate()),date_of_birth)) > getdate() THEN 1 ELSE 0 END) as Years,
MONTH(getdate() - (DATEADD(year, DATEDIFF(year, date_of_birth, getdate()), date_of_birth))) - 1 as Months,
DAY(getdate() - (DATEADD(year, DATEDIFF(year,date_of_birth, getdate()), date_of_birth))) - 1 as Days
from Users

A: Yes, you can use datename(month,intime) to get the month in text.
A: ,datename(month,(od.SHIP_DATE)) as MONTH_ Answer: MONTH_ January January September October December October September A: It works great. DECLARE @pYear VARCHAR(4) DECLARE @pMonth VARCHAR(2) DECLARE @pDay VARCHAR(2) SET @pYear = RIGHT(CONVERT(CHAR(10), GETDATE(), 101), 4) SET @pMonth = LEFT(CONVERT(CHAR(10), GETDATE(), 101), 2) SET @pDay = SUBSTRING(CONVERT(CHAR(10), GETDATE(), 101), 4,2) SELECT @pYear,@pMonth,@pDay A: The following works perfectly! I just used it, try it out. date_format(date,'%Y-%c') (note: DATE_FORMAT is MySQL syntax and will not work in SQL Server)
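As a worked illustration combining the answers above into the grouping the question actually asks for (hedged: the Orders table and OrderDate column are hypothetical), the first-of-the-month trick keeps GROUP BY and ORDER BY on one chronological expression while DATENAME builds the 'Jan 2008' label, and it runs on SQL Server 2005:
SELECT LEFT(DATENAME(month, DATEADD(month, DATEDIFF(month, 0, OrderDate), 0)), 3)
       + ' ' + DATENAME(year, DATEADD(month, DATEDIFF(month, 0, OrderDate), 0)) AS MonthYear,
       COUNT(*) AS Total
FROM Orders
GROUP BY DATEADD(month, DATEDIFF(month, 0, OrderDate), 0)   -- first of the month
ORDER BY DATEADD(month, DATEDIFF(month, 0, OrderDate), 0)   -- sorts chronologically
Unlike grouping on the month and year names separately, this sorts chronologically rather than alphabetically.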
{ "language": "en", "url": "https://stackoverflow.com/questions/45535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "95" }
Q: How to know whether a window with a given title is already open in Tk? I’ve written a little python script that just pops up a message box containing the text passed on the command line. I want to pop it up only when the window —resulting from a previous call— is not open. from Tkinter import * import sys import tkMessageBox root = Tk() root.withdraw() # TODO not if a window with this title exists tkMessageBox.showinfo("Key you!", " ".join(sys.argv[1:])) Any idea how to check that? A: I believe you want: if 'normal' != root.state(): tkMessageBox.showinfo("Key you!", " ".join(sys.argv[1:])) A: The previous answer works according to the code you have provided. You say it does not work because the answerer complies with the "sois bête et discipliné" rule in that he did not add root.mainloop() to his code, since your question does not either. By adding the latter line, for some reason caused by the event loop, you should test the exact string "withdrawn" as follows: import tkinter as tk from tkinter import messagebox import sys root = tk.Tk() root.withdraw() if 'withdrawn' != root.state(): messagebox.showinfo("Key you!", sys.argv[1:]) root.mainloop() Note: do not run this code, otherwise your Terminal session will hang up. To circumvent this discomfort, you will have to reset the window state using either root.state("normal"), which will cause the message box to disappear as if a click on the Ok button occurred, or root.iconify(), through which you can stop the Terminal session from hanging by right clicking on the tkinter icon appearing on your OS taskbar.
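For completeness, a minimal, untested sketch (Python 2 Tkinter to match the question's imports) of scanning for an already-open Toplevel by title within a single process; note it cannot see windows created by a previous, separate invocation of the script, since each process gets its own Tk instance:
from Tkinter import Tk, Toplevel

def window_with_title_exists(root, title):
    # walk the widget tree and compare each Toplevel's current title
    for child in root.winfo_children():
        if isinstance(child, Toplevel) and child.title() == title:
            return True
    return False
For the cross-invocation case the state has to live outside Tk entirely, e.g. a lock file keyed on the title.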
{ "language": "en", "url": "https://stackoverflow.com/questions/45540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Add horizontal scroll to asp.net listbox control How can I add horizontal scroll capabilities to the asp.net listbox control? A: Example to add horizontal scroll: <asp:ListBox ID="List" runat="server" Height="320px" Width="100%" style="overflow-x:auto;" SelectionMode="Multiple"> </asp:ListBox> CSS3 overflow-x Property: http://www.w3schools.com/cssref/css3_pr_overflow-x.asp A: If you really, really need it, one idea would be to create a custom ListBox class whose HTML looks like this: sets the width of the SELECT to that of your widest value's width (the max width of the scrollbar, for example). Now wrap that SELECT inside of a DIV of the 'constrained' size and let it scroll on overflow. Here's a quick example starting down those lines, here's the type of HTML you want spit out by a control: <div style="width:200px; height:100px; overflow:auto;"> <SELECT size="4"> <OPTION Value="1">blahblahblahblahblahblahblahblahblahblah blahblah</OPTION> <OPTION Value="2">2</OPTION> <OPTION Value="3">3</OPTION> <OPTION Value="4">4</OPTION> </SELECT> </div> So in essence I'd recommend creating a composite custom control for this, which renders this HTML. They're pretty easy to make; Google the terms 'composite control asp.net'. The toughest part will be matching up the div dimensions to those of the select box, to make the scrollbars work/line up properly. That's why it's kinda tricky. Source Also, take a look at this: Automatically Add/Hide Horizontal Scroll bar in the ListBox control EDIT: Make sure you have enough height to include the scroll bar height or else you'll get the vertical scroll bar on both controls. A: We can put this list box inside a DIV and set the style for the DIV to overflow, which will automatically show the scroll bar whenever necessary. Your aspx page has the following DIV: <div id='hello' style="Z-INDEX: 102; LEFT: 13px; OVERFLOW: auto; WIDTH: 247px; POSITION: absolute; TOP: 62px; HEIGHT: 134px" > Put your asp:listbox inside the DIV definition. In the Page_Load function, you need to define the width and height of the list box properly, so that it won't overflow the DIV. private void Page_Load(object sender, System.EventArgs e) { if (!IsPostBack) { int nItem = Convert.ToInt32(ListBox1.Items.Count * 17); ListBox1.Height = nItem; ListBox1.Width = 800; } } Code and solution available at http://www.codeproject.com/KB/custom-controls/HorizontalListBox.aspx A: If you are doing it only for display purposes, you can do it another way by using a TextBox with the multiline property, appending the text with newlines as such: List<Yourclass> result = null; result = Objname.getResult(Parameter1, Parameter2); foreach (Yourclass res in result) { txtBoxUser.Text += res.Fieldname1.ToString(); txtBoxUser.Text += "\r\n" + res.Fieldname2.ToString(); txtBoxUser.Text += "\n\n"; } Hence you will get a multiline textbox view with all your data arranged in a good format, as in the code above (new lines and all). It will also wrap your text if it exceeds the width of your textbox. You need not bother about the scrollbars either; you will get only a vertical scroll bar here, since all the results are wrapped as per the behaviour of the textbox.
{ "language": "en", "url": "https://stackoverflow.com/questions/45545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I return a 403 Forbidden in Spring MVC? I want my controller to return the right HTTP response code when the user lacks permission to view a particular page. A: Use ResponseStatusException: @GetMapping("/demo") public String demo(){ if (forbidden){ throw new ResponseStatusException(HttpStatus.FORBIDDEN); } return "demo"; } A: Use this: response.setStatus(403). A: You can also just throw new org.springframework.security.access.AccessDeniedException("403 returned"); This returns a 403 in the response header. A: Create an Exception annotated with @ResponseStatus, e.g. like this: @ResponseStatus(HttpStatus.FORBIDDEN) public class ForbiddenException extends RuntimeException { } Now just throw that Exception in your handler method and the response will have status 403. A: Quickie If you are using plain JSP views (as is most common), then simply add <% response.setStatus( 403 ); %> somewhere in your view file. At the top is a nice place. Detail In MVC, I would always set this in the view, and in most cases with Spring-MVC, use the SimpleMappingExceptionResolver to present the correct view in response to a thrown runtime Exception. For example: create and throw a PermissionDeniedException in your controller or service layer and have the exception resolver point to a view file permissionDenied.jsp. This view file sets the 403 status and shows the user an appropriate message. In your Spring bean XML file: <bean id="exceptionResolver" class="org.springframework.web.servlet.handler.SimpleMappingExceptionResolver"> <property name="exceptionMappings"> <props> <prop key="PermissionDeniedException"> rescues/permissionDenied </prop> ... set other exception/view mappings as <prop>s here ... </props> </property> <property name="defaultErrorView" value="rescues/general" /> </bean> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView" /> <property name="prefix" value="/WEB-INF/views/" /> <property name="suffix" value=".jsp" /> </bean> If you need to implement a user login mechanism, take a look at Spring Security (formerly Acegi Security). A: Using an ExceptionResolver is a great way to go, but if you just want this to be view-independent, you could certainly make a call to response.sendError(HttpServletResponse.SC_FORBIDDEN, "AdditionalInformationIfAvailable"); in your Controller.
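On more recent Spring versions, if you want the mapping in one place instead of per-view, an @ControllerAdvice with @ExceptionHandler is another option; a minimal sketch, assuming the same custom PermissionDeniedException and a view name matching the resolver setup above:
@ControllerAdvice
public class SecurityExceptionAdvice {

    @ExceptionHandler(PermissionDeniedException.class)
    @ResponseStatus(HttpStatus.FORBIDDEN)
    public String handlePermissionDenied() {
        // view name is an assumption; adjust to your view resolver setup
        return "rescues/permissionDenied";
    }
}
This keeps the controllers free of status-code plumbing while still returning a 403 with a friendly page.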
{ "language": "en", "url": "https://stackoverflow.com/questions/45546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: How to get browser IP or hostname? I have a web application that should behave differently for internal users than external ones. The web application is available over the Internet, and therefore obviously to the internal users as well. All the users are anonymous, not authenticated, but the page should render differently for internal users than external. What I'm doing in my code is use Request.UserHostName and then Dns.GetHostEntry. The result is then compared to a setting in my web.config (that holds something like *.mydomain.local). If the comparison gives a positive result, then I render the HTML that the internal user should see; otherwise I render the HTML the external user should see. However, my problem is that I don't always get the expected value from Request.UserHostName. On the development site I get the IP number (?) of the machine running the browser, but on the customer site I don't get the IP number of the user machine; I get some other IP number. The browsers don't have any proxies set or anything like that. Should I be using something else than Request.UserHostName? A: I recommend using IP addresses as well. I'm dealing with this exact same situation setting up an authentication system right now as well, and the conditions described by Epso and Robin M are exactly what is happening. External users coming to the site give me their actual IP address, while all internal users provide the IP of the gateway machine (router) onto the private subnet the webservers sit on. To deal with it I just check for that one IP. If I get the IP of the gateway, I provide the internal access. If I get anything else they get the external one, which requires additional authentication in my case. In yours, it would just mean a different interface. A: Try Request.UserHostAddress, which returns the client's IP address. Assuming your internal network uses IP addresses reserved for LANs, it should be relatively simple to check if an IP is internal or external. A: There might be a firewall that is doing some sort of NAT, to enable inside clients to use the external dns-name to reach the server. Is the IP number you get on the customer site the same as the external customer-server IP? In that case you can hard code for that one IP address. All internal computers behind that firewall will appear to have the same IP address and you can classify them as "internal". A: It looks like you're being returned a public facing IP Address. Get the user to go to http://www.myipaddress.com . If this is the same as the IP Address returned to your software, then this is definitely the case. The only solution I can see to get around this is to either get them to connect to the machine holding the asp.net application via a VPN, or to use some other kind of authentication. The latter is probably the best option. A: It does sound like there is a proxy between users and the server on the customer site (it doesn't need to be configured in the browser). It may be an internal or external proxy depending on your network configuration. I would avoid using the UserHostName for what is effectively authentication, as it is presented by the browser during the request and would be easy to spoof. IP address would be much more effective, as it's difficult to spoof an IP address in a TCP/IP connection (and maintain the connection). It's still weak authentication but may be sufficient in this scenario.
Even if you are using IP address, if there's a NAT proxy between client and server, you may have to accept that anything coming through that proxy is trusted (I'm assuming that external/untrusted clients don't come through that proxy). If that isn't acceptable, you're back to other methods of authentication. Rather than requiring a logon or VPN connection, you might consider a permanent cookie or client certificates and only give those to internal clients, but you would need some way of delivering those to the client. You could certainly deliver a permanent cookie based on a one-time logon. Cookies can be spoofed in a similar way to the UserHostName; however, you've got a better opportunity to create a cookie value that is less guessable than a domain name.
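If you do go the IP-address route discussed above, a rough C# sketch of classifying the RFC 1918 private ranges (plus loopback); this is a sketch only: it ignores IPv6 and, per the caveats above, treats anything arriving through an internal NAT/proxy as internal:
private static bool IsInternalAddress(string ip)
{
    byte[] b = System.Net.IPAddress.Parse(ip).GetAddressBytes();
    if (b.Length != 4) return false; // IPv6 not handled in this sketch
    return b[0] == 10                                   // 10.0.0.0/8
        || (b[0] == 172 && b[1] >= 16 && b[1] <= 31)    // 172.16.0.0/12
        || (b[0] == 192 && b[1] == 168)                 // 192.168.0.0/16
        || b[0] == 127;                                 // loopback
}
// usage: bool internalUser = IsInternalAddress(Request.UserHostAddress);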
{ "language": "en", "url": "https://stackoverflow.com/questions/45553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: fprintf returns success but can't write to an existing file In my code fprintf returns successfully, returning the number of bytes written to the stream, but in the actual file the string I wrote is not there. A: The output is probably just buffered. Try closing the file using fclose() or call fflush() on the stream to force the string to the file.
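A minimal sketch of the pattern (the file name is hypothetical); checking both fflush and fclose matters, because a buffered write error may only surface at either point:
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("out.txt", "w");
    if (fp == NULL) return 1;
    fprintf(fp, "hello\n");      /* may only reach an in-memory buffer */
    if (fflush(fp) != 0)         /* push buffered data to the file now */
        perror("fflush");
    if (fclose(fp) != 0)         /* fclose flushes too; check it as well */
        perror("fclose");
    return 0;
}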
{ "language": "en", "url": "https://stackoverflow.com/questions/45571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the best method of getting Int32 from first four bytes of GUID? I'm just wondering if a better solution exists for this. BitConverter.ToInt32(sample_guid.ToByteArray(), 0) A: I don't think there's a better solution than this. A: I don't know if it's better, but it is easier to read: Int32.Parse(sample_guid.ToString("N").Substring(0, 8), System.Globalization.NumberStyles.HexNumber); I'm a junior developer, admittedly, but the above reads easier to me than a byte conversion, and on a modern computer it would run indistinguishably quickly. A: Dunno about a better solution, but I hope you don't intend to use that Int32 as a random (or unique) value. You cannot depend on any sub part of a Guid to be unique. A Guid is assumed to be unique only in its entirety.
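For what it's worth, the two approaches should agree on a little-endian platform, because Guid.ToByteArray() emits the first GUID group in little-endian byte order and BitConverter reassembles it into the same 32-bit value the hex string shows; a quick sketch of that claim, worth verifying on your own platform:
Guid g = Guid.NewGuid();
int fromBytes = BitConverter.ToInt32(g.ToByteArray(), 0);
int fromHex = Int32.Parse(g.ToString("N").Substring(0, 8),
    System.Globalization.NumberStyles.HexNumber);
Console.WriteLine(fromBytes == fromHex); // expected: True on little-endian systems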
{ "language": "en", "url": "https://stackoverflow.com/questions/45572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Syntax highlighting for html markup disappears in Visual Studio 2008 This happened to me in Visual Studio 2008 pre and post 2008 sp1 on more than one computer and to someone else I know, so it can't be an isolated incident. Seemingly at random, every so often I lose all syntax highlighting in my aspx page (the html) so that Visual Studio now looks like a really expensive version of notepad. Does anyone know why this happens? Better yet, anyone know how to fix it? A: You have basically experienced Visual Studio giving up. It gives up for many reasons; usually the page is too complicated to highlight, which tends to mean there are too many syntax errors. The highlighting is essentially done with some very complicated and intelligent RegEx statements; however, if Visual Studio is unable to apply them it just displays black text. One thing to try is to fix all the syntax issues, if any. By the way, this "giving up" happens in most IDEs. I have seen it happen once or twice in Eclipse too. A: It happened to me after an uninstall of Visual Studio using a removal tool at http://msdn.microsoft.com/en-us/vstudio/bb968856.aspx I had to run this before upgrading SQL Server Management Studio to the 2008 version; syntax highlighting has since disappeared because of package loading failures. I haven't found the fix yet, so if anybody has an idea... A: Try Ctrl-K, Ctrl-D (reformat document). This will usually restore your syntax coloring. If it doesn't, it should tell you where it got confused (e.g. Couldn't reformat due to line 123). A: I've followed the instructions from Andrea but had to include the following procedure: Andrea's instructions: Enter VS2008, click on Tools/Options Check "Show all settings" option Choose Environment/International Settings Change the language combo box. Close VS2008. ...then... After closing Visual Studio, run this command from a command prompt with admin permission: devenv /resetskippkgs Reference: http://forums.asp.net/t/1413383.aspx A: When this happens to me, I let it sit there for a minute. It will usually bring the syntax highlighting back. A: I had the same problem. Installation of DPack solved this issue. A: Enter VS2008, click on Tools/Options Check "Show all settings" option Choose Environment/International Settings Change the language combo box. Close VS2008.
{ "language": "en", "url": "https://stackoverflow.com/questions/45577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I convince GroovyShell to maintain state over eval() calls? I'm trying to use Groovy to create an interactive scripting / macro mode for my application. The application is OSGi and much of the information the scripts may need is not known up front. I figured I could use GroovyShell and call eval() multiple times, continually appending to the namespace as OSGi bundles are loaded. GroovyShell maintains variable state over multiple eval calls, but not class definitions or methods. goal: Create a base class during startup. As OSGi bundles load, create derived classes as needed. A: I am not sure what you mean about declared classes not existing between evals; the following two scripts work as expected when evaled one after another: class C {{println 'hi'}} new C() ... new C() However methods become bound to the class that declared them, and GroovyShell creates a new class for each instance. If you do not need the return value of any of the scripts and they are truly scripts (not classes with main methods) you can attach the following to the end of every evaluated script. Class klass = this.getClass() this.getMetaClass().getMethods().each { if (it.declaringClass.cachedClass == klass) { binding[it.name] = this.&"$it.name" } } If you depend on the return value you can hand-manage the evaluation and run the script as part of your parsing (warning, untested code follows, for illustrative uses only)... String scriptText = ... Script script = shell.parse(scriptText) def returnValue = script.run() Class klass = script.getClass() script.getMetaClass().getMethods().each { if (it.declaringClass.cachedClass == klass) { shell.context[it.name] = script.&"$it.name" } } // do whatever with returnValue... There is one last caveat I am sure you are aware of. Statically typed variables are not kept between evals as they are not stored in the binding. So in the previous script the variable 'klass' will not be kept between script invocations and will disappear. To rectify that simply remove the type declarations on the first use of all variables; that means they will be read from and written to the binding. A: Ended up injecting code before each script compilation. The end goal is that the user-written script has a domain-specific language available for use. A: This might be what you are looking for? From Groovy in Action def binding = new Binding(x: 6, y: 4) def shell = new GroovyShell(binding) def expression = '''f = x * y''' shell.evaluate(expression) assert binding.getVariable("f") == 24 An appropriate use of Binding will allow you to maintain state?
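A small, untested illustration of that Binding-based idea in practice: a closure stored in the binding survives between evaluate() calls, unlike a declared method, so it can stand in for one:
def shell = new GroovyShell()
// assigning without a type declaration stores the closure in the shell's binding
shell.evaluate('greet = { name -> "hello, " + name }')
assert shell.evaluate('greet("osgi")') == 'hello, osgi'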
{ "language": "en", "url": "https://stackoverflow.com/questions/45582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there a way to perform a "Refresh Dependencies" in a setup project outside VS2008? I have a solution with several projects. One of them is a setup project. If you expand the setup project in the Solution Explorer, you see a Detected Dependencies node. If you right click on it, you get a menu item called Refresh Dependencies. This refreshes any dependencies based on the files included in the setup. I am asking if I can execute this action outside Visual Studio, using either devenv.com or MSBuild. I want this because I am using CruiseControl.NET for continuous integration and in some solutions I found that the setup output is missing some dependencies because of the way I automatically build the projects. Update: It turned out that my setup is not very friendly to how Setup projects work in Visual Studio. I ended up using Post Build Events in order to create the whole application structure ready to just be copied to a computer and work out of the box. I am not using setup projects in Visual Studio anymore, unless I really have to. A: Record or create a macro: Option Strict Off Option Explicit Off Imports System Imports EnvDTE Imports EnvDTE80 Imports EnvDTE90 Imports System.Diagnostics Public Module RefreshDependencies Sub TemporaryMacro() DTE.ActiveWindow.Object.GetItem("Project\Setup1\Setup1").Select(vsUISelectionType.vsUISelectionTypeSelect) DTE.ExecuteCommand("Build.RefreshDependencies") End Sub End Module Then just call the macro in the command line: devenv /command "Macros.MyMacros.RefreshDependencies C:\MyProjects\MyApp\"
{ "language": "en", "url": "https://stackoverflow.com/questions/45593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: An issue with the jquery dialog when using the themeroller css The demos for the jquery ui dialog all use the "flora" theme. I wanted a customized theme, so I used the themeroller to generate a css file. When I used it, everything seemed to be working fine, but later I found that I can't control any input element contained in the dialog (i.e., can't type into a text field, can't check checkboxes). Further investigation revealed that this happens if I set the dialog attribute "modal" to true. This doesn't happen when I use the flora theme. Here is the js file: topMenu = { init: function(){ $("#my_button").bind("click", function(){ $("#SERVICE03_DLG").dialog("open"); $("#something").focus(); }); $("#SERVICE03_DLG").dialog({ autoOpen: false, modal: true, resizable: false, title: "my title", overlay: { opacity: 0.5, background: "black" }, buttons: { "OK": function() { alert("hi!"); }, "cancel": function() { $(this).dialog("close"); } }, close: function(){ $("#something").val(""); } }); } } $(document).ready(topMenu.init); Here is the html that uses the flora theme: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=Shift_JIS"> <title>sample</title> <script src="jquery-1.2.6.min.js" language="JavaScript"></script> <link rel="stylesheet" href="flora/flora.all.css" type="text/css"> <script src="jquery-ui-personalized-1.5.2.min.js" language="JavaScript"></script> <script src="TopMenu.js" language="JavaScript"></script> </head> <body> <input type="button" value="click me!" id="my_button"> <div id="SERVICE03_DLG" class="flora">please enter something<br><br> <label for="something">something:</label>&nbsp;<input name="something" id="something" type="text" maxlength="20" size="24"> </div> </body> </html> Here is the html that uses the downloaded themeroller theme: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=Shift_JIS"> <title>sample</title> <script src="jquery-1.2.6.min.js" language="JavaScript"></script> <link rel="stylesheet" href="jquery-ui-themeroller.css" type="text/css"> <script src="jquery-ui-personalized-1.5.2.min.js" language="JavaScript"></script> <script src="TopMenu.js" language="JavaScript"></script> </head> <body> <input type="button" value="click me!" id="my_button"> <div id="SERVICE03_DLG" class="ui-dialog">please enter something<br><br> <label for="something">something:</label>&nbsp;<input name="something" id="something" type="text" maxlength="20" size="24"> </div> </body> </html> As you can see, only the referenced css file and class names are different. Anybody have a clue as to what could be wrong? @David: I tried it, and it doesn't seem to work (neither in FF nor IE). I tried inline css: style="z-index:5000" and I've also tried it referencing an external css file: #SERVICE03_DLG{z-index:5000;} But neither of these work. Am I missing something in what you suggested? Edit: Solved by brostbeef! Since I was originally using flora, I had mistakenly assumed that I had to specify a class attribute. Turns out, this is only true when you actually use the flora theme (as in the samples). If you use the customized theme, specifying a class attribute causes that strange behaviour. A: I think it is because you have the classes different.
<div id="SERVICE03_DLG" class="flora"> (flora) <div id="SERVICE03_DLG" class="ui-dialog"> (custom) Even with the flora theme, you would still use the ui-dialog class to define it as a dialog. I've done modals before and I've never even defined a class in the tag. jQueryUI should take care of that for you. Try getting rid of the class attribute or using the "ui-dialog" class. A: After playing with this in Firebug, if you add a z-index attribute greater than 1004 to your default div, id of "SERVICE03_DLG", then it will work. I'd give it something extremely high, like 5000, just to be sure. I'm not sure what it is in the themeroller CSS that causes this. They've probably changed or neglected the position attribute of the target div that it turns into a dialog. A: I tried implementing a themeroller theme with a dialog and tabs and it turns out that the themeroller CSS doesn't work with official jQuery! Especially for dialog and tabs, they modified the element classes from the official jquery ones. See here: http://filamentgroup.com/lab/introducing_themeroller_design_download_custom_themes_for_jquery_ui/ A user's comment: 3) the generated theme that I downloaded seems to be incomplete - when I attempt to use it my tabs (which work with the flora theme, code identical to the documentation example) do not get styled as tabs Having run into 3 I thought I was stuck and would have to revert to using "flora"… I have since discovered by reading the source code of the "demo" file that if I adjust my html and give the <li> items I'm using for my tabs the "ui-tabs-nav-item" class then it will work. The theme generated by themeroller is thus unfortunately incomplete. If the tabs stuff is incomplete, it makes me wonder what else is incomplete. It was rather frustrating. :( followed by the themeroller developer's comment: 3) We'll take a look at that. You're right that those classes should be added by the plugin. For now though, it probably wouldn't hurt much to just add them to your markup so you can use themeroller themes. We'll check it out, though. I think our selectors could be based off of the parent ui-tabs selector instead, but I think we were trying not to use elements in our selectors. Consider it on the to-do list A: Man, this is a good one. I've tried doing a bunch of things on these two pages. Have you tried just leaving the CSS out altogether and trying both pages then? I used Firebug to remove the CSS from the header on both pages, and the input still worked on one and not on the other - but, I'm inclined to believe that Firebug doesn't completely remove the CSS from the rendering, and you'll get different results if you actually remove it from the code. I also found that you can paste text into the text box using the mouse - it just won't accept keyboard input. There doesn't seem to be any event handler on it that would interfere with this, though.
{ "language": "en", "url": "https://stackoverflow.com/questions/45600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Why doesn't C# support implied generic types on class constructors? C# doesn't require you to specify a generic type parameter if the compiler can infer it, for instance: List<int> myInts = new List<int> {0,1,1, 2,3,5,8,13,21,34,55,89,144,233,377, 610,987,1597,2584,4181,6765}; //this statement is clunky List<string> myStrings = myInts. Select<int,string>( i => i.ToString() ). ToList<string>(); //the type is inferred from the lambda expression //the compiler knows that it's taking an int and //returning a string List<string> myStrings = myInts. Select( i => i.ToString() ). ToList(); This is needed for anonymous types where you don't know what the type parameter would be (in intellisense it shows up as 'a) because it's added by the compiler. Class-level type parameters don't let you do this: //sample generic class public class GenericDemo<T> { public GenericDemo ( T value ) { GenericTypedProperty = value; } public T GenericTypedProperty {get; set;} } //why can't I do: int anIntValue = 4181; var item = new GenericDemo( anIntValue ); //type inference fails //however I can create a wrapper like this: public static GenericDemo<T> Create<T> ( T value ) { return new GenericDemo<T> ( value ); } //then this works - type inference on the method compiles var item = Create( anIntValue ); Why doesn't C# support this class level generic type inference? A: Actually, your question isn't bad. I've been toying with a generic programming language for the last few years and although I've never gotten around to actually developing it (and probably never will), I've thought a lot about generic type inference and one of my top priorities has always been to allow the construction of classes without having to specify the generic type. C# simply lacks the set of rules to make this possible. I think the developers never saw the necessity to include this. Actually, the following code would be very near to your proposition and solve the problem. All C# needs is added syntax support. class Foo<T> { public Foo(T x) { … } } // Notice: non-generic class overload. Possible in C#! class Foo { public static Foo<T> ctor<T>(T x) { return new Foo<T>(x); } } var x = Foo.ctor(42); Since this code actually works, we've shown that the problem is not one of semantics but simply one of lacking support. I guess I have to take back my previous posting. ;-) A: Thanks Konrad, that's a good response (+1), but just to expand on it. Let's pretend that C# has an explicit constructor function: //your example var x = new Foo( 1 ); //becomes var x = Foo.ctor( 1 ); //your problem is valid because this would be var x = Foo<T>.ctor<int>( 1 ); //and T can't be inferred You're quite right that the first constructor can't be inferred. Now let's go back to the class class Foo<T> { //<T> can't mean anything else in this context public Foo(T x) { } } //this would now throw an exception unless the //typeparam matches the parameter var x = Foo<int>.ctor( 1 ); //so why wouldn't this work? var x = Foo.ctor( 1 ); Of course, if I add your constructor back in (with its alternate type) we have an ambiguous call - exactly as if a normal method overload couldn't be resolved. A: Why doesn't C# support this class level generic type inference? Because they're generally ambiguous. By contrast, type inference is trivial for function calls (if all types appear in arguments). But in the case of constructor calls (glorified functions, for the sake of discussion), the compiler has to resolve multiple levels at the same time.
One level is the class level and the other is the constructor arguments level. I believe solving this is algorithmically non-trivial. Intuitively, I'd say it's even NP-complete. To illustrate an extreme case where resolution is impossible, imagine the following class and tell me what the compiler should do: class Foo<T> { public Foo<U>(U x) { } } var x = new Foo(1);
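As an aside, the factory-method workaround from the question is exactly the convention the BCL later adopted; for example, .NET 4's Tuple.Create exists so callers get the inference the constructor cannot provide:
// var t = new Tuple<int, string>(1, "one"); // constructor needs explicit type arguments
var t = Tuple.Create(1, "one");              // the static factory method infers them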
{ "language": "en", "url": "https://stackoverflow.com/questions/45604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: Cascading deletes in PostgreSQL I have a database with a few dozen tables interlinked with foreign keys. Under normal circumstances, I want the default ON DELETE RESTRICT behavior for those constraints. But when trying to share a snapshot of the database with a consultant, I needed to remove some sensitive data. I wish that my memory of a DELETE FROM Table CASCADE command hadn't been pure hallucination. What I ended up doing was dumping the database, writing a script to process the dump by adding ON DELETE CASCADE clauses to all the foreign key constraints, restoring from that, performing my deletes, dumping again, removing the ON DELETE CASCADE, and finally restoring again. That was easier than writing the deletion query I'd have needed to do this in SQL -- removing whole slices of the database isn't a normal operation, so the schema isn't exactly adapted to it. Does anyone have a better solution for the next time something like this comes up? A: You do not need to dump and restore. You should be able to just drop the constraint, rebuild it with cascade, do your deletes, drop it again, and then rebuild it with restrict. CREATE TABLE "header" ( header_id serial NOT NULL, CONSTRAINT header_pkey PRIMARY KEY (header_id) ); CREATE TABLE detail ( header_id integer, stuff text, CONSTRAINT detail_header_id_fkey FOREIGN KEY (header_id) REFERENCES "header" (header_id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION ); insert into header values(1); insert into detail values(1,'stuff'); delete from header where header_id=1; alter table detail drop constraint detail_header_id_fkey; alter table detail add constraint detail_header_id_fkey FOREIGN KEY (header_id) REFERENCES "header" (header_id) on delete cascade; delete from header where header_id=1; alter table detail add constraint detail_header_id_fkey FOREIGN KEY (header_id) REFERENCES "header" (header_id) on delete restrict; A: You could create the foreign key constraints as DEFERRABLE. Then you would be able to temporarily disable them while you scrub the data and re-enable them when you are done. Have a look at this question. A: TRUNCATE table CASCADE; I'm a Postgres novice, so I'm not sure what the trade-off is for TRUNCATE vs. DROP. A: You may want to look into using schemas with PostgreSQL. I've done this in past projects to allow different groups of people or developers to have their own data. Then you can use your scripts to create multiple copies of your database for just such situations. A: @Tony: No, schemas can be useful, and indeed, we use them to partition data in our database. But I'm talking about trying to scrub sensitive data before letting a consultant have a copy of the db. I want that data gone. A: I don't think you'd need to process the dump file like that. Do a streaming dump/restore, and process that. Something like: createdb -h scratchserver scratchdb createdb -h scratchserver sanitizeddb pg_dump -h liveserver livedb --schema-only | psql -h scratchserver sanitizeddb pg_dump -h scratchserver sanitizeddb | sed -e "s/RESTRICT/CASCADE/" | psql -h scratchserver scratchdb pg_dump -h liveserver livedb --data-only | psql -h scratchserver scratchdb psql -h scratchserver scratchdb -f delete-sensitive.sql pg_dump -h scratchserver scratchdb --data-only | psql -h scratchserver sanitizeddb pg_dump -Fc -Z9 -h scratchserver sanitizeddb > sanitizeddb.pgdump where you store all your DELETE SQL in delete-sensitive.sql.
The sanitizeddb database/steps can be removed if you don't mind the consultant getting a db with CASCADE foreign keys instead of RESTRICT foreign keys. There might also be better ways depending on how often you need to do this, how big the database is, and what percentage of data is sensitive, but I can't think of a simpler way to do it once or twice for a reasonably sized database. You'd need a different database after all, so unless you already have a Slony cluster, you can't avoid the dump/restore cycle, which might be time consuming. A: TRUNCATE just removes the data from the table and leaves the structure
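On reasonably recent PostgreSQL (8.3 and later) there is one more option worth knowing: disabling trigger-based constraint enforcement for just the scrubbing session via session_replication_role. This is a sketch, not a recommendation: it needs superuser rights, it skips every ORIGIN-mode trigger (FK checks included), and the table name is hypothetical:
-- in the session doing the scrub; FK triggers do not fire in 'replica' mode
SET session_replication_role = replica;
DELETE FROM sensitive_table WHERE created < '2008-01-01';
SET session_replication_role = DEFAULT;
Note this can leave orphaned child rows behind, which is sometimes exactly what you want for a redacted snapshot and sometimes not.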
{ "language": "en", "url": "https://stackoverflow.com/questions/45611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Javascript collection of DOM objects - why can't I reverse with Array.reverse()? What could be the problem with reversing the array of DOM objects as in the following code: var imagesArr = new Array(); imagesArr = document.getElementById("myDivHolderId").getElementsByTagName("img"); imagesArr.reverse(); In Firefox 3, when I call the reverse() method the script stops executing and shows the following error in the console of the Web Developer Toolbar: imagesArr.reverse is not a function The imagesArr variable can be iterated through with a for loop and elements like imagesArr[i] can be accessed, so why is it not seen as an array when calling the reverse() method? A: This problem can actually be solved easily with the array spread operator. let elements = document.querySelectorAll('button'); elements = [...elements]; console.log(elements) // Before reverse elements = elements.reverse(); // Now the reverse function will work console.log(elements) // After reverse <html> <body> <button>button1</button> <button>button2</button> <button>button3</button> <button>button4</button> <button>button5</button> </body> </html> A: getElementsByTagName() returns a NodeList instead of an Array. You need to convert the NodeList to an array, then reverse it. var imagesArr = [].slice.call(document.getElementById("myDivHolderId").getElementsByTagName("img"), 0).reverse(); A: I know this question is old but I think it needs a bit of clarification, as some of the answers here are outdated because W3C changed the definition, and consequently the return value, of these methods getElementsByTagName() and getElementsByClassName() These methods as of the time of writing this answer return an object - empty or not - of type HTMLCollection and not NodeList. It's like the difference between the properties children, which returns an object of type HTMLCollection since it's only composed of elements and excludes text or comment nodes, and childNodes, which returns an object of type NodeList since it could contain other node types like text and comments as well. Note: I'd go on a tangent here and express my lack of insight on why the querySelectorAll() method currently returns a NodeList and not an HTMLCollection, since it exclusively works on element nodes in the document and nothing else. Probably it has something to do with potential coverage of other node types in the future and they went for a more future-proof solution, who knows really? :) EDIT: I think I got the rationale behind this decision to opt for a NodeList and not an HTMLCollection for querySelectorAll(). Since they constructed HTMLCollection to be exclusively and entirely live, and since this method doesn't need this live functionality, they decided for a NodeList implementation instead to best serve its purpose economically and efficiently. A: Because getElementsByTagName actually returns a NodeList structure. It has similar array-like indexing properties for syntactic convenience, but it is not an array. For example, the set of entries is actually constantly being dynamically updated - if you add a new img tag under myDivHolderId, it will automatically appear in imagesArr. See http://www.w3.org/TR/DOM-Level-2-Core/core.html#ID-536297177 for more.
var listNodes = document.getElementById("myDivHolderId").getElementsByTagName("img"); var arrayNodes = Array.prototype.slice.call(listNodes, 0); arrayNodes.reverse(); In order to change the position, you will have to remove the DOM nodes and add them all again at the right position. Array.prototype.slice.call(arrayLike, 0) is a great way to convert an array-like to an array, but if you are using a JavaScript library, it may actually provide an even better/faster way to do it. For example, jQuery has $.makeArray(arrayLike). Note that you generally cannot use mutating Array methods such as reverse directly on the NodeList (e.g. Array.prototype.reverse.call(listNodes)), because a NodeList's indexed properties are read-only. A: Your first line is irrelevant, since it doesn't coerce the assignment to the variable; JavaScript works the other way. imagesArr is not of type Array(); it's of whatever the return type of getElementsByTagName("img") is. In this case, it's an HtmlCollection in Firefox 3. The only members on this object are the indexers and length. In order to work in reverse, just iterate backwards. A: This worked for me: I did a reverse for loop and allocated the nodes to an array var Slides = document.getElementById("slideshow").querySelectorAll('li'); var TempArr = []; for (var x = Slides.length; x--;) { TempArr.push(Slides[x]); } Slides = TempArr;
{ "language": "en", "url": "https://stackoverflow.com/questions/45613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: How do you deal with polymorphism in a database? Example I have Person, SpecialPerson, and User. Person and SpecialPerson are just people - they don't have a user name or password on a site, but they are stored in a database for record keeping. User has all of the same data as Person and potentially SpecialPerson, along with a user name and password as they are registered with the site. How would you address this problem? Would you have a Person table which stores all data common to a person and use a key to look up their data in SpecialPerson (if they are a special person) and User (if they are a user) and vice-versa? A: What I'm going to say here is going to send database architects into conniptions, but here goes: Consider a database view as the equivalent of an interface definition. And a table is the equivalent of a class. So in your example, all 3 person classes will implement the IPerson interface. So you have 3 tables - one for each of 'User', 'Person' and 'SpecialPerson'. Then have a view 'PersonView' or whatever that selects the common properties (as defined by your 'interface') from all 3 tables into the single view. Use a 'PersonType' column in this view to store the actual type of the person being stored. So when you're running a query that can operate on any type of person, just query the PersonView view. A: Take a look at Martin Fowler's Patterns of Enterprise Application Architecture: * *Single Table Inheritance: When mapping to a relational database, we try to minimize the joins that can quickly mount up when processing an inheritance structure in multiple tables. Single Table Inheritance maps all fields of all classes of an inheritance structure into a single table. *Class Table Inheritance: You want database structures that map clearly to the objects and allow links anywhere in the inheritance structure. Class Table Inheritance supports this by using one database table per class in the inheritance structure. *Concrete Table Inheritance: Thinking of tables from an object instance point of view, a sensible route is to take each object in memory and map it to a single database row. This implies Concrete Table Inheritance, where there's a table for each concrete class in the inheritance hierarchy. A: If the User, Person and Special person all have the same foreign keys, then I would have a single table. Add a column called Type which is constrained to be User, Person or Special Person. Then based on the value of Type have constraints on the other optional columns. For the object code it doesn't make much difference if you have a single table or multiple tables to represent polymorphism. However if you have to run SQL against the database, it's much easier if the polymorphism is captured in a single table...provided the foreign keys for the subtypes are the same. A: There are three basic strategies for handling inheritance in a relational database, and a number of more complex/bespoke alternatives depending on your exact needs. * *Table per class hierarchy. One table for the whole hierarchy. *Table per subclass. A separate table is created for every subclass with a 0-1 association between the subclassed tables. *Table per concrete class. A single table is created for every concrete class. Each of these approaches raises its own issues about normalization, data access code, and data storage, although my personal preference is to use table per subclass unless there's a specific performance or structural reason to go with one of the alternatives.
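Circling back to the view-as-interface suggestion in the first answer, a rough SQL sketch of what that 'PersonView' could look like over three concrete tables (column names are assumed):
CREATE VIEW PersonView AS
SELECT personid, name, address, 'Person' AS PersonType FROM Person
UNION ALL
SELECT personid, name, address, 'SpecialPerson' FROM SpecialPerson
UNION ALL
SELECT personid, name, address, 'User' FROM [User]
Queries that only need the common 'interface' columns hit the view; the PersonType discriminator tells you which table to join for the rest.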
A: At the risk of being an 'architecture astronaut' here, I would be more inclined to go with separate tables for the subclasses. Have the primary key of the subclass tables also be a foreign key linking back to the supertype. The main reason for doing it this way is that it then becomes much more logically consistent and you do not end up with a lot of fields that are NULL and nonsensical for that particular record. This method also makes it much easier to add extra fields to the subtypes as you iterate your design process. This does add the downside of adding JOINs to your queries, which can impact performance, but I almost always go with an ideal design first, and then look to optimise later if it proves to be necessary. The few times I have gone the 'optimal' way first I have almost always regretted it later. So my design would be something like PERSON (personid, name, address, phone, ...) SPECIALPERSON (personid REFERENCES PERSON(personid), extra fields...) USER (personid REFERENCES PERSON(personid), username, encryptedpassword, extra fields...) You could also create VIEWs later on that aggregate the supertype and the subtype, if that is necessary. The one flaw in this approach is if you find yourself heavily searching for the subtypes associated with a particular supertype. There is no easy answer to this off the top of my head; you could track it programmatically if necessary, or else run some global queries and cache the results. It will really depend on the application. A: This might not be what the OP meant to ask, but I thought I might throw this in here. I recently had a unique case of db polymorphism in a project. We had between 60 and 120 possible classes, each with its own set of 30 to 40 unique attributes, and about 10 - 12 common attributes on all the classes. We decided to go the SQL-XML route and ended up with a single table. Something like: PERSON (personid, persontype, name, address, phone, XMLOtherProperties) containing all common properties as columns and then a big XML property bag. The ORM layer was then responsible for reading/writing the respective properties from the XMLOtherProperties. A bit like: public string StrangeProperty { get { return XMLPropertyBag["StrangeProperty"];} set { XMLPropertyBag["StrangeProperty"]= value;} } (we ended up mapping the xml column as a Hashtable rather than an XML doc, but you can use whatever suits your DAL best) It's not going to win any design awards, but it will work if you have a large (or unknown) number of possible classes. And in SQL2005 you can still use XPATH in your SQL queries to select rows based on some property that is stored as XML.. it's just a small performance penalty to take in. A: There are generally three ways of mapping object inheritance to database tables. You can make one big table with all the fields from all the objects, with a special field for the type. This is fast but wastes space, although modern databases save space by not storing empty fields. And if you're only looking for all users in the table, with every type of person in it, things can get slow. Not all or-mappers support this. You can make different tables for all the different child classes, with all of the tables containing the base-class fields. This is OK from a performance perspective, but not from a maintenance perspective: every time your base class changes, all the tables change. You can also make a table per class, like you suggested. This way you need joins to get all the data, so it's less performant.
I think it's the cleanest solution. What you want to use depends of course on your situation. None of the solutions is perfect, so you have to weigh the pros and cons. A: I'd say that, depending on what differentiates Person and Special Person, you probably don't want polymorphism for this task. I'd create a User table, a Person table that has a nullable foreign key field to User (i.e., the Person can be a User, but does not have to be). Then I would make a SpecialPerson table which relates to the Person table with any extra fields in it. If a record is present in SpecialPerson for a given Person.ID, he/she/it is a special person. A: In our company we deal with polymorphism by combining all the fields in one table, and it's the worst: no referential integrity can be enforced and the model is very difficult to understand. I would recommend against that approach for sure. I would go with Table per subclass and avoid the performance hit by using an ORM that can skip joining with all the subclass tables by building the query on the fly based on type. The aforementioned strategy works for single-record pulls, but for bulk updates or selects you can't avoid it. A: This is an older post, but I thought I'd weigh in from a conceptual, procedural and performance standpoint. The first question I would ask is about the relationship between person, specialperson, and user, and whether it's possible for someone to be both a specialperson and a user simultaneously. Or, any other of the 4 possible combinations (class a + b, class b + c, class a + c, or a + b + c). If this class is stored as a value in a type field and would therefore collapse these combinations, and that collapse is unacceptable, then a secondary table would be required, allowing for a one-to-many relationship. I've learned you don't judge that until you evaluate the usage and the cost of losing your combination information. The other factor that makes me lean toward a single table is your description of the scenario. User is the only entity with a username (say varchar(30)) and password (say varchar(32)). If the common fields' possible length is an average 20 characters per 20 fields, then your column size increase is 62 over 400, or about 15% - 10 years ago this would have been more costly than it is with modern RDBMS systems, especially with a field type like varchar (e.g. for MySQL) available. And, if security is of concern to you, it might be advantageous to have a secondary one-to-one table called credentials (user_id, username, password). This table would be invoked in a JOIN contextually, at say time of login, but structurally separate from just "anyone" in the main table. And, a LEFT JOIN is available for queries that might want to consider "registered users". My main consideration for years is still to consider the object's significance (and therefore possible evolution) outside the DB and in the real world. In this case, all types of persons have beating hearts (I hope), and may also have hierarchical relationships to one another; so, in the back of my mind, even if not now, we may need to store such relationships by another method. That's not explicitly related to your question here, but it is another example of the expression of an object's relationship. And by now (7 years later) you should have good insight into how your decision worked anyway :) A: Yes, I would also consider a TypeID along with a PersonType table if it is possible there will be more types. However, if there are only 3, that shouldn't be necessary.
A: In the past I've done it exactly as you suggest -- have a Person table for common stuff, then SpecialPerson linked for the derived class. However, I'm re-thinking that, as Linq2Sql wants to have a field in the same table to indicate the difference. I haven't looked at the entity model too much, though -- pretty sure that allows the other method. A: Personally, I would store all of these different user classes in a single table. You can then either have a field which stores a 'Type' value, or you can imply what type of person you're dealing with by which fields are filled in. For example, if UserID is NULL, then this record isn't a User. You could link out to other tables using a one-to-one-or-none type of join, but then in every query you'll be adding extra joins. The first method is also supported by LINQ-to-SQL if you decide to go down that route (they call it 'Table Per Hierarchy' or 'TPH').
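To make the single-table ('Table Per Hierarchy') option concrete, a hedged sketch in T-SQL with assumed column names, using a discriminator plus CHECK constraints so the optional fields stay honest:
CREATE TABLE Person (
    PersonID   int IDENTITY PRIMARY KEY,
    PersonType varchar(20) NOT NULL
        CHECK (PersonType IN ('Person', 'SpecialPerson', 'User')),
    Name       varchar(100) NOT NULL,
    UserName   varchar(30) NULL,  -- meaningful only when PersonType = 'User'
    Password   varchar(32) NULL,
    CHECK (PersonType = 'User' OR (UserName IS NULL AND Password IS NULL))
)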
{ "language": "en", "url": "https://stackoverflow.com/questions/45621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: Sockets and Processes in Java In Java, what would the best way be to have a constantly listening port open, and still send upon receipt of a packet. I am not particularly savvy with network programming at the moment, so the tutorials I have found on the net aren't particularly helpful. Would it make sense to have the listening socket as a ServerSocket and run it in a separate thread to the socket I'm using to send data to the server? On a loosely related note: does anyone know if, when programming simply for Java in NetBeans and then exporting it for use on a BlackBerry (using a plugin), the sockets would still work? A: As for connecting to a BlackBerry, this is problematic since in most cases the BlackBerry won't have a public IP address and will instead be behind a WAP gateway or wireless provider access point server. RIM provides the Mobile Data Server (MDS) to get around this and provide "Push" data, which uses ServerSocket semantics on the BlackBerry. The MDS is available with the BlackBerry Enterprise Server (BES) and the Unite Server. Once set up, data can be sent to a particular unit via the MDS using the HTTP protocol. There is an excellent description of the Push protocol here with LAMP source code. The parameter PORT=7874 in pushout.pl connects to the BlackBerry Browser Push server socket. By changing that parameter the payload can be sent to an arbitrary port where your own ServerSocket is accepting connections. A: If your socket code has to run on a BlackBerry, you cannot use standard Java sockets. You have to use the J2ME Connector.open API for creating both types of sockets (those that initiate connections from the BlackBerry, and those that listen for connections/pushes on the BlackBerry). Have a look at the examples that come with RIM's JDE. A: If you can afford the threading, try this (keep in mind I've left out some details like exception handling and playing nice with threads). You may want to look into SocketChannels and/or NIO async sockets / selectors. This should get you started. boolean finished = false; int port = 10000; ServerSocket server = new ServerSocket(port); while (!finished) { // This will block until a connection is made Socket s = server.accept(); // Spawn off some thread (or use a thread pool) to handle this socket // Server will continue to listen } A: I'd need to go back to the basics for this one too. I'd recommend O'Reilly's excellent Java in a Nutshell, which includes code examples for just such a case (available online as well). See Chapter 7 for a pretty good overview of the decisions you'd want to make early on.
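To flesh out the "spawn off some thread (or use a thread pool)" comment in the answer above, a minimal sketch using java.util.concurrent (so Java 5+); the per-connection handling is a placeholder:
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Listener {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        ServerSocket server = new ServerSocket(10000);
        while (true) {
            final Socket s = server.accept();   // blocks until a client connects
            pool.execute(new Runnable() {
                public void run() {
                    // read from / write to s here, then close it
                }
            });
        }
    }
}
The listener keeps accepting while each connection is serviced on a pool thread, which also matches the question's idea of sending and listening concurrently.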
{ "language": "en", "url": "https://stackoverflow.com/questions/45623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I change the default author for accessing a local SVN repository? I use TortoiseSVN to access a file-based local repo. In all my commits the author is my Windows login name. Is it possible to use a different name? I know how to change the author after a commit, but how do I change it before? Installing Apache/svnserve is not an option. A: Yes, it's possible. TortoiseSVN and the svn command line client share the same settings location in your profile folder. So you may simply checkout one version using svn.exe: $ svn co --username different_user_name file:///C:/path/to/your/repo ... and Subversion will happily replace the associated username for that repository. New commits from TortoiseSVN will then always use that username, no matter with what program you make the new checkouts. The procedure should work with TortoiseSVN 1.5.5. If it doesn't, try emptying svn's authentication cache (%APPDATA%\Subversion\auth\svn.username) first. A: Another possible workaround (but I am NOT advocating it) is to use a client-side hook script before commit, in order to change the username. Hook scripts are discussed in the Subversion book, and local hooks are discussed in "Client Side Hook Scripts" in TortoiseSVN help. A: As far as I know, TortoiseSVN does not offer any way to do this. Presumably it's not seen as a big issue, since file based access is not practical for multi-user scenarios, and for single-user the author-name is of lesser importance. A possible workaround would be to create another Windows username with the author name you want and connect with this. A: I've never hosted svn on a Windows machine, so this is a shot in the dark. You might be able to create a new Windows user and specify that user when browsing, checking out, committing, etc. Let's say you want to make changes as msznajder. Create a user with that name in Windows, then try browsing the repository using TortoiseSVN's Repo-browser and specify the username in the URL - something like file:///msznajder@localhost/some/file/path. A: I suggest setting a post-commit hook in your repo's hooks folder: just copy post-commit.tmpl to post-commit.bat (or .sh if on Linux, with the #!/bin/bash preamble), empty it, and give it the following content (this assumes SlikSvn or similar is installed, i.e. command-line svn access is possible): svn propset svn:author --revprop -r HEAD <author> file:///<path-to-repo>
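One caveat worth adding to the propset approach above: Subversion rejects revision-property changes (svn:author included) until the repository's pre-revprop-change hook exists and exits 0. On Windows, a minimal hooks\pre-revprop-change.bat that allows all changes could be just:
@echo off
exit 0
Leaving it this permissive is a sketch for a local, single-user repository only; a real hook should at least restrict which properties may change.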
{ "language": "en", "url": "https://stackoverflow.com/questions/45624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Using ASP.NET AJAX PageMethods and Validators I have a basic CRUD form that uses PageMethods to update the user details; however, the Validators don't fire. I think I need to manually initialize the validators and check whether the validation has passed in my JavaScript save method. Any ideas on how to do this?

A: Ok so I finally solved this: you need to call Page_ClientValidate() in your Save JavaScript method and, if it returns true, continue with the save. Page_ClientValidate() runs the client-side validators. See the code below:

function Save() {
    var clientValidationPassed = Page_ClientValidate();
    if (clientValidationPassed) {
        // Save data
        PageMethods.SaveUser(UserName, Role, SaveCustomerRequestComplete, RequestError);
        $find('editPopupExtender').hide();
    } else {
        // Do nothing, as the client validation messages are now displayed
    }
    return false;
}

A: What are you using for development? VS 2008 supposedly has better JS debugging; I haven't tried it yet. For AJAX you can use the Sys.Debug object.

A: If you use Firefox, you can use the FireBug plugin. It has great JavaScript debugging support.
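For completeness, here is a minimal sketch of how a Save function like the one above might be wired up so the normal postback is suppressed and client validation runs first. The control ID is made up; also note that Page_ClientValidate() optionally accepts a validation group name if the page uses validation groups.

<asp:Button ID="btnSave" runat="server" Text="Save" OnClientClick="return Save();" />
<%-- To validate only one group: Page_ClientValidate('EditUserGroup') --%>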
{ "language": "en", "url": "https://stackoverflow.com/questions/45626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you detect/avoid Memory leaks in your (Unmanaged) code? In unmanaged C/C++ code, what are the best practices to detect memory leaks? And what coding guidelines help avoid them? (As if it's that simple ;) We have used a bit of a silly way in the past: having a counter increment for every memory allocation call and decrement while freeing. At the end of the program, the counter value should be zero. I know this is not a great way and there are a few catches. (For instance, if you are freeing memory which was allocated by a platform API call, your allocation count will not exactly match your freeing count. Of course, then we incremented the counter when calling API calls that allocated memory.) I am expecting your experiences, suggestions and maybe some references to tools which simplify this.

A: If your C/C++ code is portable to *nix, few things are better than Valgrind.

A: Microsoft VC++ in debug mode shows memory leaks, although it doesn't show where your leaks are. If you are using C++ you can always avoid using new explicitly: you have vector, string, auto_ptr (pre C++11; replaced by unique_ptr in C++11), unique_ptr (C++11) and shared_ptr (C++11) in your arsenal. When new is unavoidable, try to hide it in a constructor (and hide delete in a destructor); the same works for 3rd party APIs.

A: Visual Leak Detector is a very good tool, although it does not support the calls on VC9 runtimes (MSVCR90D.DLL for example).

A: If you are using Visual Studio, Microsoft provides some useful functions for detecting and debugging memory leaks. I would start with this article: https://msdn.microsoft.com/en-us/library/x98tx3cf(v=vs.140).aspx Here is the quick summary of those articles. First, include these headers:

#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

Then you need to call this when your program exits:

_CrtDumpMemoryLeaks();

Alternatively, if your program does not exit in the same place every time, you can call this at the start of your program:

_CrtSetDbgFlag ( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );

Now when the program exits, all the allocations that were not freed will be printed in the Output Window along with the file they were allocated in and the allocation occurrence. This strategy works for most programs. However, it becomes difficult or impossible in certain cases. Using third-party libraries that do some initialization on startup may cause other objects to appear in the memory dump and can make tracking down your leaks difficult. Also, if any of your classes have members with the same name as any of the memory allocation routines (such as malloc), the CRT debug macros will cause problems. There are other techniques explained in the MSDN link referenced above that could be used as well.

A: There are various replacement "malloc" libraries out there that will allow you to call a function at the end, and it will tell you about all the unfreed memory, and in many cases, who malloced (or new'ed) it in the first place.

A: If you're using MS VC++, I can highly recommend this free tool from CodeProject: leakfinder by Jochen Kalmbach. You simply add the class to your project, and call InitAllocCheck(ACOutput_XML) before, and DeInitAllocCheck() after, the code you want to check for leaks. Once you've built and run the code, Jochen provides a neat GUI tool where you can load the resulting .xmlleaks file and navigate through the call stack where each leak was generated to hunt down the offending line of code. Rational's (now owned by IBM) PurifyPlus illustrates leaks in a similar fashion, but I find the leakfinder tool actually easier to use, with the bonus of it not costing several thousand dollars!
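Pulling the CRT debug pieces above together, here is a minimal sketch of a deliberately leaking program instrumented for leak reporting. This is only an illustration of the calls described in the MSDN article, not production code.

// Compile as a Debug build with MSVC; the leak report appears in the
// debugger's Output Window when the program exits.
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

int main(void)
{
    // Ask the CRT to dump leaks automatically at program exit.
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);

    char *leaked = (char *)malloc(42);   /* never freed: shows up in the report */
    (void)leaked;

    /* With _CRTDBG_MAP_ALLOC defined, the report includes file/line info. */
    _CrtDumpMemoryLeaks();
    return 0;
}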
A: In C++: use RAII. Smart pointers like std::unique_ptr, std::shared_ptr and std::weak_ptr are your friends.

A: Never used it myself, but my C friends tell me Purify.

A: If you're using Visual Studio it might be worth looking at Bounds Checker. It's not free, but it's been incredibly helpful in finding leaks in my code. It doesn't just do memory leaks either, but also GDI resource leaks, WinAPI usage errors, and other stuff. It'll even show you where the leaked memory was initialized, making it much easier to track down the leak.

A: As a C++ developer, here are some simple guidelines: * Use pointers only when absolutely necessary. * If you need a pointer, double-check whether a smart pointer is a possibility. * Use the GRASP Creator pattern. As for the detection of memory leaks, personally I've always used Visual Leak Detector and find it to be very useful.

A: Are you counting the allocs and frees by interposing your own versions of the allocation functions, which record the calls and then pass the call on to the real function? This is the only way you can keep track of calls originating from code that you haven't written. Have a look at the man page for ld.so (or ld.so.1 on some systems). Also Google LD_PRELOAD and you'll find some interesting articles explaining the technique over on www.itworld.com.
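As a rough illustration of that LD_PRELOAD interposition technique on Linux (a sketch only: a real interposer must guard against re-entrancy, since fprintf and even dlsym can themselves allocate, and it should interpose free, calloc and realloc as well):

// trace.c -- build with:  gcc -shared -fPIC trace.c -o libtrace.so -ldl
// run with:               LD_PRELOAD=./libtrace.so ./your_program
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stddef.h>

static void *(*real_malloc)(size_t) = NULL;

void *malloc(size_t size)
{
    if (real_malloc == NULL)   /* look up the "real" libc malloc once */
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

    void *p = real_malloc(size);
    fprintf(stderr, "malloc(%zu) = %p\n", size, p);  /* record the call */
    return p;
}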
A: I think that there is no easy answer to this question. How you might really approach this depends on your requirements. Do you need a cross-platform solution? Are you using new/delete or malloc/free (or both)? Are you really looking for just "leaks" or do you want better protection, such as detecting buffer overruns (or underruns)? If you are working on the Windows side, the MS debug runtime libraries have some basic debug detection functionality, and as another has already pointed out, there are several wrappers that can be included in your source to help with leak detection. Finding a package that can work with both new/delete and malloc/free obviously gives you more flexibility. I don't know enough about the Unix side to provide help, although again, others have. But beyond just leak detection, there is the notion of detecting memory corruption via buffer overruns (or underruns). This type of debug functionality is, I think, more difficult than plain leak detection. It is also further complicated if you are working with C++ objects, because polymorphic classes can be deleted in varying ways, causing trickiness in determining the true base pointer that is being deleted. I know of no good "free" system that does decent protection for overruns. We have written a system (cross-platform) and found it to be pretty challenging.

A: I'd like to offer something I've used at times in the past: a rudimentary leak checker which is source level and fairly automatic. I'm giving this away for three reasons: * You might find it useful. * Though it's a bit crufty, I don't let that embarrass me. * Even though it's tied to some Win32 hooks, that should be easy to alleviate. There are things of which you must be careful when using it: don't do anything that needs to lean on new in the underlying code, beware of the warnings about cases it might miss at the top of leakcheck.cpp, and realize that if you turn on (and fix any issues with) the code that does image dumps, you may generate a huge file. The design is meant to allow you to turn the checker on and off without recompiling everything that includes its header. Include leakcheck.h where you want to track checking and rebuild once. Thereafter, compile leakcheck.cpp with or without LEAKCHECK #define'd and then relink to turn it on and off. Including unleakcheck.h will turn it off locally in a file. Two macros are provided: CLEARALLOCINFO() will avoid reporting the same file and line inappropriately when you traverse allocating code that didn't include leakcheck.h. ALLOCFENCE() just drops a line in the generated report without doing any allocation. Again, please realize that I haven't used this in a while and you may have to work with it a bit. I'm dropping it in to illustrate the idea. If there turns out to be sufficient interest, I'd be willing to work up an example, updating the code in the process, and replace the contents of the following URL with something nicer that includes a decently syntax-colored listing. You can find it here: http://www.cse.ucsd.edu/~tkammeye/leakcheck.html

A: For Linux: Try Google Perftools. There are a lot of tools that do similar alloc/free counting; the pros of Google Perftools: * Quite fast (in comparison to valgrind: very fast) * Comes with a nice graphical display of results * Has other useful capabilities: CPU profiling, memory-usage profiling...

A: The best defense against leaks is a program structure which minimizes the use of malloc. This is not only good from a programming perspective, but also improves performance and maintainability. I'm not talking about using other things in place of malloc, but in terms of re-using objects and keeping very explicit tabs on all objects being passed around, rather than allocating willy-nilly like one often gets used to in languages with garbage collectors like Java. For example, a program I work on has a bunch of frame objects representing image data. Each frame object has sub-data, which the frame's destructor frees. The program keeps a list of all frames that are allocated, and when it needs a new one, checks a list of unused frame objects to see if it can re-use an existing one rather than allocate a new one. On shutdown, it just iterates through the list, freeing everything.

A: I would recommend using Memory Validator from Software Verify. This tool proved invaluable in helping me track down memory leaks and improve the memory management of the applications I am working on. A very complete and fast tool.

A: I've been using DevStudio for far too many years now and it always amazes me just how many programmers don't know about the memory analysis tools that are available in the debug run-time libraries. Here are a few links to get started with: Tracking Heap Allocation Requests - specifically the section on Unique Allocation Request Numbers, _CrtSetDbgFlag, _CrtSetBreakAlloc. Of course, if you're not using DevStudio then this won't be particularly helpful.

A: I'm amazed no one mentioned DebugDiag for Windows. It works on release builds, and even at the customer site. (You just need to keep your release version PDBs, and configure DebugDiag to use the Microsoft public symbol server.)

A: Working on Motorola's cell phone operating system, we hijacked the memory allocation library to observe all memory allocations. It helped to find a lot of problems with memory allocations.
Since prevention is better than curing, I would recommend using a static analysis tool like Klocwork or PC-Lint.

A: At least for MS VC++, the C Runtime library has several functions that I've found helpful in the past. Check the MSDN help for the _Crt* functions.

A: Paul Nettle's mmgr is a long-time favourite tool of mine. You include mmgr.h in your source files, define TEST_MEMORY, and it delivers a text file full of memory problems that occurred during a run of your app.

A: General coding guidelines: * Resources should be deallocated at the same "layer" (function/class/library) where they are allocated. * If this is not possible, try to use some automatic deallocation (boost shared pointer...).

A: Memory debugging tools are worth their weight in gold, but over the years I've found that two simple ideas can be used to prevent most memory/resource leaks from being coded in the first place. * Write the release code immediately after writing the acquisition code for the resources you want to allocate. With this method it's harder to "forget", and in some sense it forces one to seriously think about the lifecycle of resources up front, instead of as an aside. * Use return as sparingly as possible. What is allocated should only be freed in one place if possible. The conditional path between acquisition of a resource and its release should be designed to be as simple and obvious as possible.

A: At the top of this list (when I read it) was valgrind. Valgrind is excellent if you are able to reproduce the leak on a test system. I've used it with great success. What if you've just noticed that the production system is leaking right now and you have no idea how to reproduce it in test? Some evidence of what's wrong is captured in the state of that production system, and it might be enough to provide an insight on where the problem is so you can reproduce it. That's where Monte Carlo sampling comes into the picture. Read Raymond Chen's blog article, "The poor man's way of identifying memory leaks", and then check out my implementation (assumes Linux, tested only on x86 and x86-64): http://github.com/tialaramex/leakdice/tree/master

A: Valgrind is a nice option for Linux. Under Mac OS X, you can enable the MallocDebug library, which has several options for debugging memory allocation problems (see the malloc manpage; the "ENVIRONMENT" section has the relevant details). The OS X SDK also includes a tool called MallocDebug (usually installed in /Developer/Applications/Performance Tools/) that can help you to monitor usage and leaks.

A: Detect: Debug CRT. Avoid: smart pointers, Boehm GC.

A: A nice malloc, calloc and realloc replacement is rmdebug; it's pretty simple to use. It is much faster than valgrind, so you can test your code extensively. Of course it has some downsides: once you have found a leak you probably still need to use valgrind to find where the leak appears, and you can only test mallocs that you do directly. If a lib leaks because you use it wrong, rmdebug won't find it. http://www.hexco.de/rmdebug/

A: Most memory profilers slow my large complex Windows application to the point where the results are useless. There is one tool that works well for finding leaks in my application: UMDH - http://msdn.microsoft.com/en-us/library/ff560206%28VS.85%29.aspx

A: Mtrace appears to be the standard built-in one for Linux.
The steps are:
* Set up the environment variable MALLOC_TRACE in bash:

MALLOC_TRACE=/tmp/mtrace.dat
export MALLOC_TRACE;

* Add #include <mcheck.h> to the top of your main source file.
* Add mtrace(); at the start of main and muntrace(); at the bottom (before the return statement).
* Compile your program with the -g switch for debug information.
* Run your program.
* Display leak info with: mtrace your_prog_exe_name /tmp/mtrace.dat

(I had to install the mtrace perl script first on my Fedora system, with yum install glibc-utils.)
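A minimal end-to-end sketch of those steps (file names are arbitrary):

/* leaky.c -- compile with:  gcc -g leaky.c -o leaky
 * then run:
 *   export MALLOC_TRACE=/tmp/mtrace.dat
 *   ./leaky
 *   mtrace ./leaky /tmp/mtrace.dat    # reports the unfreed block
 */
#include <mcheck.h>
#include <stdlib.h>

int main(void)
{
    mtrace();                  /* start logging malloc/free to $MALLOC_TRACE */
    void *leak = malloc(16);   /* deliberately never freed */
    (void)leak;
    muntrace();                /* stop logging */
    return 0;
}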
{ "language": "en", "url": "https://stackoverflow.com/questions/45627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "127" }
Q: How do I write SELECT FROM myTable WHERE id IN (SELECT...) in Linq? How do you rewrite this in Linq?

SELECT Id, Name FROM TableA WHERE TableA.Id IN (SELECT xx FROM TableB INNER JOIN TableC ....)

So in plain English, I want to select Id and Name from TableA where TableA's Id is in a result set from a second query.

A: LINQ supports IN in the form of Contains. Think "collection.Contains(id)" instead of "id IN (collection)".

from a in TableA
where (
    from b in TableB
    join c in TableC on b.id equals c.id
    select b.id
).Contains(a.Id)
select new { a.Id, a.Name }

See also this blog post.

A: from a in TableA
where (from b in TableB
       join c in TableC on b.id equals c.id
       where ..
       select b.id)
      .Contains(a.Id)
select new { a.Id, a.Name }

A: There is no out-of-the-box support for IN in LINQ. You need to join the two queries.
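For reference, the same query in method syntax (a sketch; the Id and Name property names are assumed from the question, and the inner query stays composable so a provider such as LINQ to SQL can translate the whole thing into a single SQL statement):

var innerIds = TableB.Join(TableC, b => b.Id, c => c.Id, (b, c) => b.Id);

var result = TableA
    .Where(a => innerIds.Contains(a.Id))    // translates to WHERE ... IN (...)
    .Select(a => new { a.Id, a.Name });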
{ "language": "en", "url": "https://stackoverflow.com/questions/45634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Library or algorithm to explode an alphanumeric range I was wondering if there is an open source library or algorithm that can expand a non-numeric range. For example, if you have 1A to 9A you should get 1A, 2A, 3A, 4A, 5A, 6A, 7A, 8A, 9A. I've tried Googling for this and the best I could come up with were regexes that would expand numerics with dashes (1-3 becoming 1,2,3).

A: As noted by others, it would be useful to be more specific. I don't think you can expect there to be a library that will generate ranges according to any arbitrary order on strings you can come up with. If you can simply define what the successor of any given string is, then the solution is quite easy. That is, if you have a successor function S on strings (e.g. with S('3A') = '4A'), then something like the following can be used:

s = initial_string
while s != final_string do
    output s
    s = S(s)
output s

Something I have used in the past to generate all strings of a given length l and with a given range b to e of characters is the following piece of (pseudo-)code. It can be easily adapted to a wide range of variations.

// initialise s with b at every position
for i in [0..l) do
    s[i] = b
done = false
while not done do
    output s
    j = 0
    // if s[j] is e, reset it to b and "add carry"
    while j < l and s[j] == e do
        s[j] = b
        j = j + 1
    if j == l then
        done = true
    if not done then
        s[j] = s[j] + 1

For example, to start at a specific string you need only change the initialisation. To set the end you only need to change the behaviour of the inner while loop to handle position l separately (limiting it to the character at that position in the end string and, once reached, decrementing l).

A: I was trying to leave it somewhat open because the number of possibilities is staggering. I believe this is one of those questions that could not be answered 100% here without going through a lot of technical detail about what is considered a "good" or "bad" range. I'm just trying to find a jumping-off point for ideas on how other people have tackled this problem. I was hoping that someone wrote a blog post explaining how they went about solving this problem or created a whole library to handle this.

A: I would say the first step in the solution will be to define how characters and numbers interact and form a sequence. The given example isn't clear, as you would at least assume it to run 1A, 1B .... 8Y, 8Z, 9A - that's assuming your input is restricted to a decimal followed by a single character. If you can define a continuous sequence for characters and decimals, then it will simply be a matter of some recursion/looping to generate part of that sequence. For example, you could assume that each character in the input is one of (1-9A-Z); you could easily make that continuous by taking the decimal ASCII value of the alpha characters and subtracting 55, in effect giving you the range (1-35).

A: If we assume that the start and end ranges will follow the same alternating pattern, and limit the range of digits to 0-9 and A-Z, we can think of each group of digits as a component in a multi-dimensional coordinate. For example, 1A would correspond to the two-dimensional coordinate (1,A) (which is what Excel uses to label its two-dimensional grid of rows and columns); whereas AA1BB2 would be a four-dimensional coordinate (AA,1,BB,2). Because each component is independent, to expand the range between two coordinates we just return all combinations of the expansion of each component. Below is a quick implementation I cooked up this afternoon.
It works for an arbitrary number of alternations of normal and alphabetic numbers, and handles large alphabetic ranges (i.e. from AB to CDE, not just AB to CD). Note: This is intended as a rough draft of an actual implementation (I'm taking off tomorrow, so it is even less polished than usual ;). All the usual caveats regarding error handling, robustness, (readability ;), etc, apply.

IEnumerable<string> ExpandRange( string start, string end )
{
    // Split coordinates into component parts.
    string[] startParts = GetRangeParts( start );
    string[] endParts = GetRangeParts( end );

    // Expand range between parts
    // (i.e. 1->3 becomes 1,2,3; A->C becomes A,B,C).
    int length = startParts.Length;
    int[] lengths = new int[length];
    string[][] expandedParts = new string[length][];
    for( int i = 0; i < length; ++i )
    {
        expandedParts[i] = ExpandRangeParts( startParts[i], endParts[i] );
        lengths[i] = expandedParts[i].Length;
    }

    // Return all combinations of expanded parts.
    int[] indexes = new int[length];
    do
    {
        var sb = new StringBuilder( );
        for( int i = 0; i < length; ++i )
        {
            int partIndex = indexes[i];
            sb.Append( expandedParts[i][partIndex] );
        }
        yield return sb.ToString( );
    } while( IncrementIndexes( indexes, lengths ) );
}

readonly Regex RangeRegex = new Regex( "([0-9]*)([A-Z]*)" );

string[] GetRangeParts( string range )
{
    // Match all alternating digit-letter components of coordinate.
    var matches = RangeRegex.Matches( range );
    var parts = from match in matches.Cast<Match>( )
                from matchGroup in match.Groups.Cast<Group>( ).Skip( 1 )
                let value = matchGroup.Value
                where value.Length > 0
                select value;
    return parts.ToArray( );
}

string[] ExpandRangeParts( string startPart, string endPart )
{
    int start, end;
    Func<int, string> toString;

    bool isNumeric = char.IsDigit( startPart, 0 );
    if( isNumeric )
    {
        // Parse regular integers directly.
        start = int.Parse( startPart );
        end = int.Parse( endPart );
        toString = ( i ) => i.ToString( );
    }
    else
    {
        // Convert alphabetic numbers to integers for expansion,
        // then convert back for display.
        start = AlphaNumberToInt( startPart );
        end = AlphaNumberToInt( endPart );
        toString = IntToAlphaNumber;
    }

    int count = end - start + 1;
    return Enumerable.Range( start, count )
        .Select( toString )
        .Where( s => s.Length > 0 )
        .ToArray( );
}

bool IncrementIndexes( int[] indexes, int[] lengths )
{
    // Increment indexes from right to left (i.e. Arabic numeral order).
    bool carry = true;
    for( int i = lengths.Length; carry && i > 0; --i )
    {
        int index = i - 1;
        int incrementedValue = (indexes[index] + 1) % lengths[index];
        indexes[index] = incrementedValue;
        carry = (incrementedValue == 0);
    }
    return !carry;
}

// Alphabetic numbers are 1-based (i.e. A = 1, AA = 11, etc, mod base-26).
const char AlphaDigitZero = (char)('A' - 1);
const int AlphaNumberBase = 'Z' - AlphaDigitZero + 1;

int AlphaNumberToInt( string number )
{
    int sum = 0;
    int place = 1;
    foreach( char c in number.Cast<char>( ).Reverse( ) )
    {
        int digit = c - AlphaDigitZero;
        sum += digit * place;
        place *= AlphaNumberBase;
    }
    return sum;
}

string IntToAlphaNumber( int number )
{
    List<char> digits = new List<char>( );
    while( number > 0 )
    {
        int digit = number % AlphaNumberBase;
        if( digit == 0 )  // Compensate for 1-based alphabetic numbers.
            return "";

        char c = (char)(AlphaDigitZero + digit);
        digits.Add( c );
        number /= AlphaNumberBase;
    }

    digits.Reverse( );
    return new string( digits.ToArray( ) );
}
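For illustration, a sketch of calling the method above with the question's example. The methods would need to live inside a class, with using directives for System, System.Linq, System.Text, System.Text.RegularExpressions and System.Collections.Generic.

foreach( string s in ExpandRange( "1A", "9A" ) )
    Console.WriteLine( s );   // prints 1A, 2A, 3A, ... 9A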
{ "language": "en", "url": "https://stackoverflow.com/questions/45642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Common Files in Visual Studio Solution Many times I have seen Visual Studio solutions which have multiple projects that share source files. These common source files are usually out in a common directory, and in the Solution Explorer their icon shows up with a link arrow in the bottom left. However, any time I try to add a source file to the project that is outside of that project's main directory, it just automatically copies it into the directory, so that I no longer have a shared copy. I found that I can get around this by manually opening the project file in a text editor and modifying the path to something like "../../../Common/Source.cs", but this is more of a hack than I would like. Is there a setting or something I can change that will allow me to do this from within the IDE?

A: Thanks @aku! I knew this could be done, but I didn't know how to do it from Visual Studio. It shows up as a shortcut to the file, and the csproj file generates the resulting XML like this:

<Compile Include="..\CommonAssemblyInfo.cs">
  <Link>CommonAssemblyInfo.cs</Link>
</Compile>

I've seen this technique commonly used for common AssemblyInfo files to keep a consistent version.

A: Right-click on a project and select Add -> Existing Item -> Add as Link (press the small arrow on the Add button).
{ "language": "en", "url": "https://stackoverflow.com/questions/45650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: SQL: How to get the id of values I just INSERTed? I inserted some values into a table. There is a column whose value is auto-generated. In the next statement of my code, I want to retrieve this value. Can you tell me how to do it the right way?

A: If you are working with Oracle:

INSERT INTO Table (Fields....) VALUES (Values...) RETURNING (List of Fields...) INTO (variables...)

example:

INSERT INTO PERSON (NAME) VALUES ('JACK') RETURNING ID_PERSON INTO vIdPerson

or if you are calling from... Java with a CallableStatement (sorry, it's my field):

INSERT INTO PERSON (NAME) VALUES ('JACK') RETURNING ID_PERSON INTO ?

and declaring an output parameter for the statement.

A: There's no standard way to do it (just as there is no standard way to create auto-incrementing IDs). Here are two ways to do it in PostgreSQL. Assume this is your table:

CREATE TABLE mytable (
    id SERIAL PRIMARY KEY,
    lastname VARCHAR NOT NULL,
    firstname VARCHAR
);

You can do it in two statements as long as they're consecutive statements in the same connection (this will be safe in PHP with connection pooling because PHP doesn't give the connection back to the pool until your script is done):

INSERT INTO mytable (lastname, firstname) VALUES ('Washington', 'George');
SELECT lastval();

lastval() gives you the last auto-generated sequence value used in the current connection. The other way is to use PostgreSQL's RETURNING clause on the INSERT statement:

INSERT INTO mytable (lastname) VALUES ('Cher') RETURNING id;

This form returns a result set just like a SELECT statement, and is also handy for returning any kind of calculated default value.

A: @@IDENTITY is not scope-safe and will get you back the id from another table if you have an insert trigger on the original table; always use SCOPE_IDENTITY().

A: For SQL 2005: Assuming the following table definition:

CREATE TABLE [dbo].[Test](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [somevalue] [nchar](10) NULL,
)

You can use the following:

INSERT INTO Test(somevalue)
OUTPUT INSERTED.ID
VALUES('asdfasdf')

Which will return the value of the ID column.

A: From the site I found out the following things:

SQL SERVER – @@IDENTITY vs SCOPE_IDENTITY() vs IDENT_CURRENT – Retrieve Last Inserted Identity of Record, March 25, 2007, by pinaldave

SELECT @@IDENTITY
It returns the last IDENTITY value produced on a connection, regardless of the table that produced the value, and regardless of the scope of the statement that produced the value. @@IDENTITY will return the last identity value entered into a table in your current session. While @@IDENTITY is limited to the current session, it is not limited to the current scope. If you have a trigger on a table that causes an identity to be created in another table, you will get the identity that was created last, even if it was the trigger that created it.
SELECT SCOPE_IDENTITY()
It returns the last IDENTITY value produced on a connection and by a statement in the same scope, regardless of the table that produced the value. SCOPE_IDENTITY(), like @@IDENTITY, will return the last identity value created in the current session, but it will also limit it to your current scope. In other words, it will return the last identity value that you explicitly created, rather than any identity that was created by a trigger or a user-defined function.

SELECT IDENT_CURRENT('tablename')
It returns the last IDENTITY value produced in a table, regardless of the connection that created the value, and regardless of the scope of the statement that produced the value. IDENT_CURRENT is not limited by scope and session; it is limited to a specified table. IDENT_CURRENT returns the identity value generated for a specific table in any session and any scope.

A: This is how I do my stored procedures for MSSQL with an autogenerated ID:

CREATE PROCEDURE [dbo].[InsertProducts]
    @id INT = NULL OUT,
    @name VARCHAR(150) = NULL,
    @desc VARCHAR(250) = NULL
AS
    INSERT INTO dbo.Products (Name, Description)
    VALUES (@name, @desc)

    SET @id = SCOPE_IDENTITY();

A: Remember that @@IDENTITY returns the most recently created identity for your current connection, not necessarily the identity for the recently added row in a table. You should always use SCOPE_IDENTITY() to return the identity of the recently added row.

A: What database are you using? As far as I'm aware, there is no database-agnostic method for doing this.

A: This is how I've done it using parameterized commands.

MSSQL:

INSERT INTO MyTable (Field1, Field2) VALUES (@Value1, @Value2);
SELECT SCOPE_IDENTITY();

MySQL:

INSERT INTO MyTable (Field1, Field2) VALUES (?Value1, ?Value2);
SELECT LAST_INSERT_ID();

A: sql = "INSERT INTO MyTable (Name) VALUES (@Name);" +
      "SELECT CAST(scope_identity() AS int)";
SqlCommand cmd = new SqlCommand(sql, conn);
int newId = (int)cmd.ExecuteScalar();

A: MS SQL Server: this is a good solution even if you are inserting multiple rows:

Declare @tblInsertedId table (Id int not null)

INSERT INTO Test ([Title], [Text])
OUTPUT inserted.Id INTO @tblInsertedId (Id)
SELECT [Title], [Text] FROM AnotherTable

select Id from @tblInsertedId

A: This works very nicely in SQL 2005:

DECLARE @inserted_ids TABLE ([id] INT);

INSERT INTO [dbo].[some_table] ([col1],[col2],[col3],[col4],[col5],[col6])
OUTPUT INSERTED.[id] INTO @inserted_ids
VALUES (@col1,@col2,@col3,@col4,@col5,@col6)

It has the benefit of returning all the IDs if your INSERT statement inserts multiple rows.

A: If you're using PHP and MySQL you can use the mysql_insert_id() function, which will tell you the ID of the item you just inserted. But without your language and DBMS, I'm just shooting in the dark here.

A: Again no language-agnostic response, but in Java it goes like this:

Connection conn = Database.getCurrent().getConnection();

PreparedStatement ps = conn.prepareStatement(insertSql, Statement.RETURN_GENERATED_KEYS);
try {
    ps.executeUpdate();
    ResultSet rs = ps.getGeneratedKeys();
    rs.next();
    long primaryKey = rs.getLong(1);
} finally {
    ps.close();
}

A: Rob's answer would be the most vendor-agnostic, but if you're using MySQL the safer and correct choice would be the built-in LAST_INSERT_ID() function.

A: SELECT SCOPE_IDENTITY() AS Id

There is also @@IDENTITY, but if you have a trigger, it will return the results of something that happened during the trigger, whereas SCOPE_IDENTITY() respects your scope.

A: * Insert the row with a known guid.
* Fetch the autoId field with this guid.

This should work with any kind of database.

A: An environment-based Oracle solution:

CREATE OR REPLACE PACKAGE LAST AS
    ID NUMBER;
    FUNCTION IDENT RETURN NUMBER;
END;
/

CREATE OR REPLACE PACKAGE BODY LAST AS
    FUNCTION IDENT RETURN NUMBER IS
    BEGIN
        RETURN ID;
    END;
END;
/

CREATE TABLE Test (
    TestID INTEGER,
    Field1 int,
    Field2 int
)

CREATE SEQUENCE Test_seq
/

CREATE OR REPLACE TRIGGER Test_itrig
BEFORE INSERT ON Test
FOR EACH ROW
DECLARE
    seq_val number;
BEGIN
    IF :new.TestID IS NULL THEN
        SELECT Test_seq.nextval INTO seq_val FROM DUAL;
        :new.TestID := seq_val;
        Last.ID := seq_val;
    END IF;
END;
/

To get the next identity value:

SELECT LAST.IDENT FROM DUAL

A: In Transact-SQL, you can use the OUTPUT clause to achieve that:

INSERT INTO my_table(col1,col2,col3)
OUTPUT INSERTED.id
VALUES('col1Value','col2Value','col3Value')

FYI: http://msdn.microsoft.com/en-us/library/ms177564.aspx

A: Simplest answer: command.ExecuteScalar() by default returns the first column.

Return Value
Type: System.Object
The first column of the first row in the result set, or a null reference (Nothing in Visual Basic) if the result set is empty. Returns a maximum of 2033 characters.

Copied from MSDN.
{ "language": "en", "url": "https://stackoverflow.com/questions/45651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81" }
Q: Has anyone connected BizTalk with QuickBooks? We use QuickBooks for financial management, and feed it from a variety of sources. I now need to hook it up to BizTalk, and I'd hate to reinvent the wheel. I've done searches, and as far as I can tell there's no QuickBooks adapter for BizTalk. Does anyone know of anything that'll do the job, preferably something that doesn't suck? Doesn't the QB SDK require that QuickBooks be running on the client machine? Is there any way around it?

A: QuickBooks talks .NET quite easily. You'll need the QuickBooks SDK 7.0 and a copy of Visual Studio .NET, but after that it's very easy to do anything with QuickBooks.

Imports QBFC7Lib

Sub AttachToDB()
    If isAttachedtoQB Then Exit Sub
    Lasterror = "Unknown QuickBooks Error"
    Try
        QbSession = New QBSessionManager
        QbSession.OpenConnection("", "Your Company Name")
        QbSession.BeginSession("", ENOpenMode.omDontCare)
        MsgReq = QbSession.CreateMsgSetRequest("UK", 6, 0)
        MsgReq.Attributes.OnError = ENRqOnError.roeStop
        Lasterror = ""
        isAttachedtoQB = True
    Catch e As Exception
        If Not QbSession Is Nothing Then
            QbSession.CloseConnection()
            QbSession = Nothing
        End If
        isAttachedtoQB = False
        Lasterror = "QuickBooks Connection Error. - " + e.Message + "."
    End Try
End Sub

See http://developer.intuit.com/ for more information.

A: If you do build the integration code using .NET, you may want to consider leveraging the WCF Line-of-Business SDK: http://www.microsoft.com/biztalk/technologies/wcflobadaptersdk.mspx It's not a BizTalk-only technology, despite its categorization. The SDK is designed to make it easier to create a WCF channel to a LOB application, which can be consumed from almost any other platform.

A: Unfortunately it does. It also asks you to authorise any application you've built (at least once). I don't know any way around it.

A: The QB SDK does not require that QuickBooks be running on the client machine. It does require that QuickBooks is installed on the client machine. You can access QuickBooks company files even if QuickBooks is not running, though. Have a look through the SDK docs. Additionally, when QuickBooks first prompts you to authorize the application, you need to make sure to tell it to allow access to the company file even when QuickBooks and the company file aren't open.
{ "language": "en", "url": "https://stackoverflow.com/questions/45653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I retrieve IPIEHTMLDocument2 interface on IE Mobile I wrote an ActiveX plugin for IE7 which implements IObjectWithSite besides some other necessary interfaces (note: no IOleClient). This interface is queried and called by IE7. During the SetSite() call I retrieve a pointer to IE7's site interface which I can use to retrieve the IHTMLDocument2 interface using the following approach:

IUnknown *site = pUnkSite; /* retrieved from IE7 during SetSite() call */
IServiceProvider *sp = NULL;
IHTMLWindow2 *win = NULL;
IHTMLDocument2 *doc = NULL;

if(site) {
    site->QueryInterface(IID_IServiceProvider, (void **)&sp);
    if(sp) {
        sp->QueryService(IID_IHTMLWindow2, IID_IHTMLWindow2, (void **)&win);
        if(win) {
            win->get_document(&doc);
        }
    }
}
if(doc) {
    /* found */
}

I tried a similar approach on PIE as well using the following code; however, even the IPIEHTMLWindow2 interface cannot be acquired, so I'm stuck:

IUnknown *site = pUnkSite; /* retrieved from PIE during SetSite() call */
IPIEHTMLWindow2 *win = NULL;
IPIEHTMLDocument1 *tmp = NULL;
IPIEHTMLDocument2 *doc = NULL;

if(site) {
    site->QueryInterface(__uuidof(*win), (void **)&win);
    if(win) { /* never the case */
        win->get_document(&tmp);
        if(tmp) {
            tmp->QueryInterface(__uuidof(*doc), (void **)&doc);
        }
    }
}
if(doc) {
    /* found */
}

Using the IServiceProvider interface doesn't work either; I already tested this. Any ideas?

A: I found the following code in the Google Gears code, here. I copied the functions I think you need here. The one you need is at the bottom (GetHtmlWindow2), but the other two are needed as well. Hopefully I didn't miss anything, but if I did, the stuff you need is probably at the link.

#ifdef WINCE
// We can't get IWebBrowser2 for WinCE.
#else
HRESULT ActiveXUtils::GetWebBrowser2(IUnknown *site, IWebBrowser2 **browser2) {
  CComQIPtr<IServiceProvider> service_provider = site;
  if (!service_provider) {
    return E_FAIL;
  }
  return service_provider->QueryService(SID_SWebBrowserApp, IID_IWebBrowser2,
                                        reinterpret_cast<void**>(browser2));
}
#endif

HRESULT ActiveXUtils::GetHtmlDocument2(IUnknown *site, IHTMLDocument2 **document2) {
  HRESULT hr;
#ifdef WINCE
  // Follow path Window2 -> Window -> Document -> Document2
  CComPtr<IPIEHTMLWindow2> window2;
  hr = GetHtmlWindow2(site, &window2);
  if (FAILED(hr) || !window2) {
    return false;
  }
  CComQIPtr<IPIEHTMLWindow> window = window2;
  CComPtr<IHTMLDocument> document;
  hr = window->get_document(&document);
  if (FAILED(hr) || !document) {
    return E_FAIL;
  }
  return document->QueryInterface(__uuidof(*document2),
                                  reinterpret_cast<void**>(document2));
#else
  CComPtr<IWebBrowser2> web_browser2;
  hr = GetWebBrowser2(site, &web_browser2);
  if (FAILED(hr) || !web_browser2) {
    return E_FAIL;
  }
  CComPtr<IDispatch> doc_dispatch;
  hr = web_browser2->get_Document(&doc_dispatch);
  if (FAILED(hr) || !doc_dispatch) {
    return E_FAIL;
  }
  return doc_dispatch->QueryInterface(document2);
#endif
}

HRESULT ActiveXUtils::GetHtmlWindow2(IUnknown *site,
#ifdef WINCE
                                     IPIEHTMLWindow2 **window2) {
  // site is javascript IDispatch pointer.
  return site->QueryInterface(__uuidof(*window2),
                              reinterpret_cast<void**>(window2));
#else
                                     IHTMLWindow2 **window2) {
  CComPtr<IHTMLDocument2> html_document2;
  // To hook an event on a page's window object, follow the path
  // IWebBrowser2->document->parentWindow->IHTMLWindow2
  HRESULT hr = GetHtmlDocument2(site, &html_document2);
  if (FAILED(hr) || !html_document2) {
    return E_FAIL;
  }
  return html_document2->get_parentWindow(window2);
#endif
}

A: Well, I was aware of the Gears code already.
The mechanism Gears uses is based on a workaround: the Gears loader performs an explicit method call into the Gears plugin to set the window object, and that is then used as the site interface instead of the IUnknown provided by IE Mobile in the SetSite call. Judging by the Gears code, the Google engineers were aware of the same problem I'm asking about and came up with the workaround I described. However, I believe there must be another, more "official" way of dealing with this issue, since explicitly setting the site on an ActiveX control/plugin isn't very great. I'm going to ask the MS IE Mobile team directly now and will keep you informed once I get a solution. It might be a bug in IE Mobile, which is the most likely thing I can imagine, but who knows... But thanks anyway for your response ;))
{ "language": "en", "url": "https://stackoverflow.com/questions/45658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there a limit with the number of SSL connections? Is there a limit on the number of SSL connections? We are trying to connect through SSL with 2000 sessions. We have tried it a couple of times, but it always dies at the 1062nd. Is there a limit?

A: Your operating system will have a limit on the number of open files. If you are on Linux, ulimit -a will show your various limits. I imagine yours is set to 1024, and some of the sessions just happened to have closed already, allowing the figure of 1062 (this last bit is a guess).

A: Yes, everything has a limit. As far as I'm aware, there is no inherent limit with "SSL" .. it is, after all, just a protocol. But there is a limited amount of memory, ports and CPU on the machine you are connected to, the machine you connect from, and every single one in between. The actual server you are connected to may have an arbitrary limit set too. This question doesn't have enough information to answer beyond "YES".

A: SSL itself doesn't have any limitations, but there are some practical limits you may be running into: * SSL connections require more resources on both ends of the connection, so you may be hitting some built-in server limit. * TCP/IP uses a 16-bit port number to identify connections, only some of which (around 16,000) are used for dynamic client connections. This would limit the number of active connections a single client could make to the same server. * On Linux, each process has a maximum number of file descriptors that it can have open, and each network connection uses one file descriptor. I imagine Windows has a similar limit.
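To check and raise the descriptor limit suggested above (a sketch; the value 4096 is arbitrary, and raising the hard limit may require root or an edit to /etc/security/limits.conf):

$ ulimit -n          # show the current open-file limit (often 1024)
$ ulimit -n 4096     # raise the soft limit for this shell and its children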
{ "language": "en", "url": "https://stackoverflow.com/questions/45686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Visual Studio Add-in not going away Ok, so I demo'd Refactor Pro and ReSharper. I'm more comfortable with ReSharper, so that's what I bought. When I uninstalled Refactor Pro I thought everything was breezy. However, now when I open Visual Studio I get:

The Add-in 'DevExpress Tools' failed to load or caused an exception. Would you like to remove this Add-in? If you choose yes, the file it was loaded from, 'C:\ProgramData\Application Data\Microsoft\MSEnvShared\Addins\DevExpressToolsOrcas.Addin', will be renamed.

I hit yes, then get:

"Could not rename Add-in file 'C:\ProgramData\Application Data\Microsoft\MSEnvShared\Addins\DevExpressToolsOrcas.Addin'"

This happens every time. I went to that location and there is no folder by that name. I searched for that file and found nothing. Anyone experience a clingy add-in?

A: I had the same issue with the VS.NET 2005 version and I'm not sure it is related. It was a registry problem, and when I contacted the people from DevExpress they sent me a clean-up tool. You can try to see if there is another clean-up tool for 2008, or search the registry for the file name and remove it manually.

A: This page has instructions on how to manually remove a Visual Studio add-in: http://www.mztools.com/articles/2006/mz2006018.aspx
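One more avenue that may be worth trying (a hedged suggestion, not from the thread): Visual Studio supports a /ResetAddin command-line switch that makes the IDE forget a misbehaving add-in. The ProgID below is only a guess; the mztools article above explains how to find the real one for the DevExpress add-in.

devenv.exe /ResetAddin DevExpressToolsOrcas.Connect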
{ "language": "en", "url": "https://stackoverflow.com/questions/45695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to compile a .NET application to native code? Let's say I want to run a .NET application on a machine where the .NET framework is not available; is there any way to compile the application to native code?

A: You can! However you're restricted to .NET 1.1 (no generics for you): Mono Ahead-Of-Time compilation (AOT). However, this means compiling is really native, so you'll no longer be able to deploy one single bytecode assembly; you'll need one per platform. It was originally designed because there's no .NET or Mono for iPhone, so that's how they made MonoTouch.

A: You can use NativeAOT (part of .NET 7, formerly CoreRT). This technology is a bit limiting, since you cannot rely on reflection too much, but overall you can compile a range of applications with it: web apps, WinForms apps, console apps. As of .NET 7 Preview 5 you can just add

<PropertyGroup>
  <PublishAot>true</PublishAot>
</PropertyGroup>

If you want to use daily builds of NativeAOT you need to add the following lines to your project file

<ItemGroup>
  <PackageReference Include="Microsoft.DotNet.ILCompiler" Version="8.0.0-*" />
</ItemGroup>

and add the dotnet8 NuGet feed to your nuget.config, like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!--To inherit the global NuGet package sources remove the <clear/> line below -->
    <clear />
    <add key="dotnet-public" value="https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/index.json" />
    <add key="dotnet8" value="https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet8/nuget/v3/index.json" />
  </packageSources>
</configuration>

You can target .NET 6 apps, and with the ILTrim improvements in .NET 6, more and more code will be ready for native compilation. For simple applications you can try BFlat, which may give you even better results.

A: You can do this using the new precompilation technology called .NET Native. Check it out here: http://msdn.microsoft.com/en-US/vstudio/dotnetnative Currently it is only available for Windows Store Apps. It performs single-component linking, so .NET Framework libraries are statically linked into your app. Everything is compiled to native, and IL assemblies are no longer deployed. Apps do not run against the CLR but a stripped-down, optimized runtime called the Managed Runtime (Mrt.dll). As stated above, NGEN used a mixed compilation model and relied on IL and JIT for dynamic scenarios. .NET Native does not utilise JIT, but it does support various dynamic scenarios. Code authors need to use Runtime Directives to provide hints to the .NET Native compiler on the dynamic scenarios they wish to support.

A: You can use ngen.exe to generate a native image, but you still have to distribute the original non-native code as well, and it still needs the framework installed on the target machine. Which doesn't solve your problem, really.

A: Microsoft has an article describing how you can Compile MSIL to Native Code. You can use Ngen. The Native Image Generator (Ngen.exe) is a tool that improves the performance of managed applications. Ngen.exe creates native images, which are files containing compiled processor-specific machine code, and installs them into the native image cache on the local computer. The runtime can use native images from the cache instead of using the just-in-time (JIT) compiler to compile the original assembly. Unfortunately, you still need the libraries from the framework in order to run your program. There's no feature that I know of with the MS .NET framework SDK that allows you to compile all the required files into a single executable.
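As a quick illustration of the Ngen answers above (run from a Visual Studio developer command prompt; the assembly name is a placeholder, and the target machine still needs the matching .NET Framework installed):

ngen install MyApp.exe      # compile and cache a native image
ngen display MyApp          # show the native images in the cache
ngen uninstall MyApp.exe    # remove it again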
A: As some of the other answers here have mentioned, you can use the .NET Native tool to compile your app to native machine code. Unlike those answers, however, I will explain how to do it.

Steps:
* Install the dotnet CLI (command-line interface) tool, which is part of the new .NET Core toolchain. We'll use this to compile our app; you can find a good article about it here.
* Open up a shell prompt and cd to the directory of your app.
* Type this:

dotnet compile --native

That's it! When you're done, your app will be compiled down to a single binary. It'll be a standalone executable; no PDBs, assemblies, or config files included (hooray!). Alternatively, if you want an even faster program, you can run this:

dotnet compile --native --cpp

That will optimize your program using the C++ code generator (as opposed to RyuJIT), so your app is even more optimized for AOT scenarios. You can find more info on this at the dotnet CLI GitHub repo.

A: 2019 Answer: Use dotnet/corert. It can compile .NET Core projects into standalone .exe files. No dependencies (except for system libraries like kernel32.dll). I bet this is exactly what the OP needs. From its GitHub home page: The CoreRT compiler can compile a managed .NET Core application into a native (architecture-specific) single-file executable that is easy to deploy. It can also produce standalone dynamic or static libraries that can be consumed by applications written in other programming languages.

A: RemoteSoft makes a tool that compiles a .NET application into a package that can be run without .NET installed. I don't have any experience with it: RemoteSoft Salamander

A: I have tested several of them, and at this moment the only one that supports .NET 3.5 and also has a great virtualization stack is Xenocode Postbuild. With ngen you still need to have the .NET framework installed, but using a tool like this all your managed code is compiled into native code, so you can deploy it to machines without the framework present.

A: Microsoft has announced its .NET Native Preview, which will allow .NET applications to run without having the framework installed. Take a look: http://blogs.msdn.com/b/dotnet/archive/2014/04/02/announcing-net-native-preview.aspx FAQ: http://msdn.microsoft.com/en-US/vstudio/dn642499.aspx You can download Microsoft .NET Native for VS2013 from here: http://msdn.microsoft.com/en-US/vstudio/dotnetnative

A: Yes, using Ngen, the Native Image Generator. There are, however, a number of things you need to be aware of: * You still need the CLR to run your executable. * The CLR will not dynamically optimize your assemblies based on the environment they are run in (e.g. 486 vs. 586 vs. 686, etc.). All in all, it's only worth using Ngen if you need to reduce the startup time of your application.

A: The nature of .NET is to be able to install apps that have been compiled to MSIL; then, either by JIT or Ngen, the MSIL is compiled to native code and stored locally in a cache. It was never intended to generate a true native .exe that can be run independently of the .NET framework. Maybe there's some hack that does this, but it doesn't sound safe to me. There are too many dynamics that require the framework, such as dynamic assembly loading, MSIL code generation, etc.
A: The main reason to compile to native is to protect your code; otherwise deploying the compiled MSIL is like deploying your source code to the client's machine. NGEN compiles to native but you also need to deploy the IL code; its purpose is just to reduce startup time, so it doesn't help here. CoreRT is an alpha version and works only with simple hello-world-type apps. .NET Core compiles into single executable files, but these are not native exes either; such a file is just a zipped archive of IL code which is unzipped into a temp folder at run time. My simple question for Microsoft is: if RyuJIT can compile IL into native code on the fly, why can't the same IL be compiled ahead of time (AOT)?

A: Looks like .NET Core RT is a workable solution; soon all apps will move to .NET Core:

https://www.codeproject.com/Articles/5262251/Generate-Native-Executable-from-NET-Core-3-1-Proje?msg=5753507#xx5753507xx
https://learn.microsoft.com/en-us/archive/msdn-magazine/2018/november/net-core-publishing-options-with-net-core

Not tested; something similar may be possible with the old Windows .NET SDK.

A: I think it's not possible. You will need to distribute the .NET FW as well. If you want to compile a .NET app to native code, use the NGen tool.

A: Try this (http://www.dotnetnative.online/) to compile a .NET-compiled exe into a native exe. I tried this; it's new but good.
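Tying the NativeAOT answer earlier in this thread to a concrete command line (a sketch; the runtime identifier depends on your target platform, and the project needs <PublishAot>true</PublishAot> as shown above):

dotnet publish -c Release -r win-x64
# the self-contained native executable lands under
# bin/Release/<target-framework>/win-x64/publish/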
{ "language": "en", "url": "https://stackoverflow.com/questions/45702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "95" }
Q: How would you architect a desktop application in C# 3.0 I've created a simple desktop application in C# 3.0 to learn some C#, WPF and .NET 3.5. My application essentially reads data from a CSV file and stores it in a SQL Server CE database. I use SqlMetal to generate the ORM code for the database. My first iteration of this app is ugly as hell and I'm in the process of refactoring it. Which brings me to my question: how would you architect a desktop database app in C#? What are the best practices? Do you create a database abstraction layer (DAL) which uses the SqlMetal-generated code? Or is the generated code enough of an abstraction? If you use the DAL pattern, do you make it a singleton or a static member? Do you use the Model-View-ViewModel pattern together with the DAL pattern? Apologies if this seems like a long, open-ended question, but I have been giving this a lot of thought recently. I see a lot of examples on how to architect an enterprise n-tier app in C#, but not that many on architecting standalone desktop apps.

A: I would start with the Composite Application Guidance for WPF (cough PRISM cough) from Microsoft's P&P team. With the download comes a great reference application that is the starting point for most of my WPF development today. The DotNetRocks crew just interviewed Glenn Block and Brian Noyes about this, if you're interested in hearing more from them. Even better, Prism is not nearly as heavy as the CAB was, if you're familiar at all with that from the WinForms days.

A: The answer is "it depends", as always. A few things to think about: You may want to make this fat-client app a web app (for example) at some point. If so, you should be sure to keep separation between the business layer (and below) and the presentation. The simplest way to do this is to be sure all calls to the business logic go through an interface of some kind. A more complex way is to implement a full MVC setup. Another thing you may consider is making the data access layer independent of the business logic and user interface. By this I mean that all calls from business logic into the DAL should be generic "get me this data" rather than "get me this data from SQL" or, even worse, "run this SQL statement". In this way, you can replace your DAL with one that accesses a different database, XML files, or even something icky like flat files. In short, separation of concerns. This allows you to grow in the future by adding a different UI, segmenting all three areas into their own tier, or changing the relevant technology.

A: Before architecting anything you should define the requirements for your app. It's a common error of beginner developers - starting to write code before thinking about how it should work. My advice would be to try to describe some features of your application. It will help you get a feel for how it should be implemented. As for useful learning resources, I would highly recommend you take a look at CompositeWPF; it's a project designed specifically to teach developers best practices of desktop app development.

A: I'd start with Jeremy Miller's Build Your Own CAB series. I was an early CAB adopter. I learned a lot from digging into that technology and reading all the .NET blogs about application architecture. But recently I had a chance to start a new project, and instead of using CAB I went with StructureMap & NHibernate and borrowed some of the patterns that Jeremy uses (in particular, his way of handling event aggregation).
The result was a really simplified, hand-tooled framework that does everything I need, and I love working with it. As to the specifics of your question: I use a Repository for data access. I initially wrote some ADO.NET code and used data readers and mapped my objects. But that got old real fast, so I grabbed NHibernate and was really pleased. The repositories use NHibernate for data access, and my data access needs are pretty simple in this particular app. I have a service layer (exposed via WCF, duplex channels) that utilizes the repositories. My app is basically client-server with real-time updating (and I know your question was just about clients, but I would use the same technologies and patterns). On the client side I utilize MVP with StructureMap for IoC and some very simple event aggregation strategies for cross-class communication. I code to interfaces for just about everything. The only other thing I did was borrow from the CAB the idea of a flexible "Workspace" for dynamically displaying views. I wrote my own Workspace interface though, and implemented my own DeckWorkspace and TableWorkspace for use in my app (these were really simple things to write). A lot of my decisions in this most recent application were the result of experience and pain I felt using other frameworks and tools. I made different decisions this time around. Maybe the only way to really understand how to architect an application is to feel the pain of doing it wrong beforehand.

A: I would say yes, it could easily be scaled down to smaller applications. There is a learning curve to getting started, but honestly, it helped me understand WPF better than attempting to start from scratch. After starting a project with CompositeWPF and then starting another project without it, I found myself attempting to duplicate features of CompositeWPF on my own because I missed those features! :)
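To make the Repository idea above concrete, here is a minimal C# sketch of the kind of abstraction being described. The entity and member names are invented for illustration; in the setup described, an NHibernate-backed class would implement the interface, and StructureMap would inject it into the presenter.

using System.Collections.Generic;

public class Track { /* placeholder entity */ }

public interface IRepository<T>
{
    T GetById(int id);
    IList<T> GetAll();
    void Save(T entity);
    void Delete(T entity);
}

// An MVP presenter depends only on the abstraction, so the data store
// and the UI can each be swapped or mocked independently.
public class TrackListPresenter
{
    private readonly IRepository<Track> _tracks;

    public TrackListPresenter(IRepository<Track> tracks)
    {
        _tracks = tracks;
    }
}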
{ "language": "en", "url": "https://stackoverflow.com/questions/45705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to do multi-column sorting on a Visual Basic 6 ListView? I am working in Visual Basic 6 and need to sort by multiple columns in a ListView. For example, sorting a list of music tracks by artist, then album, then track number. As far as I know, VB6 does not support this out of the box. Here are the suggestions I have already heard: * *Sort the data in a SQL table first and display the data in the resulting order *Sort the data in an ADO recordset object in memory *Sort by the primary column and then perform a sort algorithm on the items, moving them around into the correct positions manually Does anyone have experience with multiple-column sorting in VB6 who could lend advice? A: I would create a hidden column in the ListView that concatenates those three columns and sort by that. A: You can try sorting using the Windows API and callbacks: Link Alternatively, you could try switching to a vbAccelerator ListView; I highly recommend it.
{ "language": "en", "url": "https://stackoverflow.com/questions/45716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What path should I pass as an AssemblyPath parameter to the Publish.GacRemove function? I want to use the Publish.GacRemove function to remove an assembly from the GAC. However, I don't understand what path I should pass as an argument. Should it be a path to the original DLL (what if I removed it after installing it in the GAC?) or the path to the assembly in the GAC? UPDATE: I finally used these API wrappers. A: I am using GacInstall to publish my assemblies; however, once they are installed into the GAC, I sometimes delete my ‘temporary’ copy of the assemblies. And then, if I ever want to uninstall the assemblies from the GAC, I no longer have the files at the original path. This is causing a problem since I cannot seem to get the GacRemove method to uninstall the assemblies unless I keep the original files. Conclusion: Yes, you need to specify the path to the original DLL. (And try not to move/delete it later.) If you delete it, try to copy the file from the GAC to your original path and you should be able to uninstall it using GacRemove. A: I am not exactly sure about it, but I believe GacRemove should do the same thing as gacutil /u. So, it should be the path of your DLL. However, it doesn't have to be the same DLL file. A copy of the original should suffice, since what counts is the identity of the DLL.
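For reference, here is a minimal sketch of the install/remove round trip with the Publish class. It needs a reference to System.EnterpriseServices.dll, and the assembly path shown is a hypothetical example.

using System.EnterpriseServices.Internal;

class GacExample
{
    static void Main()
    {
        // Path to the original DLL on disk (hypothetical location).
        string assemblyPath = @"C:\builds\MyLibrary.dll";

        Publish publisher = new Publish();
        publisher.GacInstall(assemblyPath);

        // Later: pass the same path that was used to install.
        publisher.GacRemove(assemblyPath);
    }
}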
{ "language": "en", "url": "https://stackoverflow.com/questions/45729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I extract a part of a XAML object graph via LINQ to XML? I have an object graph serialized to XAML. A rough sample of what it looks like is: <MyObject xmlns.... > <MyObject.TheCollection> <PolymorphicObjectOne .../> <HiImPolymorphic ... /> </MyObject.TheCollection> </MyObject> I want to use LINQ to XML in order to extract the serialized objects within the TheCollection. Note: MyObject may be named differently at runtime; I'm interested in any object that implements the same interface, which has a public collection called TheCollection that contains types of IPolymorphicLol. The only things I know at runtime are the depth at which I will find the collection and that the collection element is named *.TheCollection. Everything else will change. The XML will be retrieved from a database using LINQ; if I could combine both queries so that instead of getting the entire serialized graph and then extracting the collection objects I would just get back the collection, that would be sweet. A: Will, It is not possible to find out whether an object implements some interface by looking at XAML. With the given constraints you can find any XML element that has a child named *.TheCollection. You can use the following code, which returns all elements that have a child element whose name ends with .TheCollection: static IEnumerable<XElement> FindElement(XElement root) { foreach (var element in root.Elements()) { if (element.Name.LocalName.EndsWith(".TheCollection")) { yield return element.Parent; } foreach (var subElement in FindElement(element)) { yield return subElement; } } } To make sure that the object represented by this element implements some interface, you need to read metadata from your assemblies. I would recommend the Mono.Cecil framework to analyze types in your assemblies without using reflection. A: @aku Yes, I know that XAML doesn't include any indication of base types or interfaces. But I do know the interface of the root objects, and the interface that the collection holds, at compile time. The serialized graphs are stored in a SQL database as XML, and we're using LINQ to retrieve them as XElements. Currently, along with your solution, we are limited to deserializing the graphs, iterating through them, pulling out the objects we want from the collection, removing all references to them from their parents, and then disposing of the parents. It's all very kludgy. I was hoping for a single-stroke solution; something along the lines of an XPath, but inline with our LINQ to SQL query, that returns just the elements we're looking for...
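As a side note, the recursive search above can also be expressed with the built-in Descendants axis, selecting the serialized children directly. This is a minimal sketch; serializedXaml is a hypothetical string holding the graph.

using System.Linq;
using System.Xml.Linq;

XDocument doc = XDocument.Parse(serializedXaml);

// Find every *.TheCollection element, then flatten out its child
// elements, which are the serialized polymorphic objects.
var serializedObjects =
    from collection in doc.Descendants()
    where collection.Name.LocalName.EndsWith(".TheCollection")
    from child in collection.Elements()
    select child;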
{ "language": "en", "url": "https://stackoverflow.com/questions/45732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: apache mod_proxy error os10060 and returning 503? Can't get to my site. Apache gives the following error message: [Fri Sep 05 08:47:42 2008] [error] (OS 10060)A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. : proxy: HTTP: attempt to connect to 10.10.10.1:80 (10.10.10.1) failed A: Can you connect to the proxied host (10.10.10.1) directly? Is it functioning normally? A: http://www.checkupdown.com/status/E503.html Your Web server is effectively 'closed for repair'. It is still functioning minimally because it can at least respond with a 503 status code, but full service is impossible i.e. your Web site is simply unavailable. There are a myriad possible reasons for this, but generally it is because of some human intervention by the operators of your Web server machine. You can usually expect that someone is working on the problem, and normal service will resume as soon as possible. You need to restart the web server, then figure out why it shut itself down.
{ "language": "en", "url": "https://stackoverflow.com/questions/45736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Customize the SharePoint add list column page I have defined a custom SharePoint list for special attributes related to a software application inventory and installed it as a feature. I also want to group these attributes in categories. How could I change the SharePoint page that allows the user to add a column to a list, so that when the user adds a column to my custom list type (column = attribute) he'll have a dropdown to choose the category? A: From what I understand, you want to add a choice column data type that's already prepopulated so that users can then add it to their own content types? Have a look here; this is probably what you want to do: http://www.sharethispoint.com/archive/2006/08/07/23.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/45741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Code Profiling in Visual Studio 2005 I have a Visual Studio 2005 Solution workspace which in turn has 8 projects included in it. I want to profile the complete code (all the projects) and get some measure about the absolute cycles taken by each function to execute, or at least percentage cycle consumptions. I checked out help for VS 2005, and also the project settings options, but could not find any pointers on how to get the profile info. Any help regarding this would be beneficial. -AD. A: If your application is not particularly processor intensive, Red Gate's ANTS Profiler is a good choice - the line-by-line stats can come in quite handy, and the whole product is clean and well-designed. If your app needs a lot of CPU to operate normally, however, most of the .NET profilers on the market won't be able to handle it. The only two that I have ever found that will work for a really heavy-weight application are JetBrains dotTrace and YourKit. The two are very similar, which is not surprising, given that YourKit seems to have been started by a former JetBrains employee. I personally prefer dotTrace, but that may just be because that is what I used first, and there has never been any good reason to switch. I have tested ANTS, AQTime, DevPartner, GlowCode, Borland OptimizeIt and Intel VTune, and all of them have too much overhead to handle a demanding application. (VTune is a possible exception, but it is so horribly complex to configure and use that I was never able to figure out exactly what it could handle. It is also very expensive.) A: I guess the inbuilt profiler of Visual Studio 2005 comes only with the Developer Edition and Team Edition. I have a Professional edition which, it seems, does not have the inbuilt profiler tool. -AD A: I've used both the profiler in Compuware’s DevPartner (I like to still call it “TrueTime”) and Rational's Quantify. I always liked Quantify better, but as I've moved between companies DevPartner is usually already the “standard”. Both are expensive, but they (seem to) add so much value that any commercial shop should have no problem investing in some seats. Quantify didn’t require special rebuilds of the project – which was GREAT. It also crashed less (that’s not saying much, it had its own issues). DevPartner also tended to break as each new version of Visual Studio was released (maybe this is better now?). Buy the yearly maintenance agreement if you go this way. That said, I’ve often just written a class that remembers the time at construction and spits out the elapsed time (to a log file) in its destructor. I used QueryPerformanceCounter. I’d stick this class at the top of the function I’d want to time. You could get fancy with making it a macro, use the preprocessor to include this class only under a special build… A: I recommend the EQATEC profiler, whose site also includes a tracer. It's free and easy to use. A: We use DevPartner with Visual Studio 2005. It gives you performance analysis of the specific projects in your solution you want to look at. We also use it for memory management analysis, and error analysis. It's a commercial tool, so it's not free. A: Red Gate's profiler is great for this. A: I use the JetBrains profiler; it is very easy to use and performs very well too. A: If your app needs a lot of CPU to operate normally, however, most of the .NET profilers on the market won't be able to handle it.
I have used a trial version of Red Gate's ANTS profiler on an optimizing algorithm that normally uses up to 100% CPU on single-core machines, and though slow, it managed to get through and report all I needed to know. An extremely helpful tool. I wonder what kind of algorithms you have run on the ANTS profiler. Has anyone used the VS profiler?
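The timing-class trick mentioned above carries over to managed code as well. Here is a minimal C# sketch using Stopwatch, with IDisposable standing in for the C++ destructor; the class name and usage are hypothetical.

using System;
using System.Diagnostics;

public sealed class ScopedTimer : IDisposable
{
    private readonly string label;
    private readonly Stopwatch watch = Stopwatch.StartNew();

    public ScopedTimer(string label) { this.label = label; }

    public void Dispose()
    {
        // Report the elapsed time when the scope ends.
        watch.Stop();
        Console.WriteLine("{0}: {1} ms", label, watch.ElapsedMilliseconds);
    }
}

// Usage: wrap the code you want to time.
// using (new ScopedTimer("LoadCustomers"))
// {
//     LoadCustomers();
// }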
{ "language": "en", "url": "https://stackoverflow.com/questions/45744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Managing/Using libraries with Debug builds vs Release builds I'm curious about everyone's practices when it comes to using or distributing libraries for an application that you write. First of all, when developing your application do you link the debug or release version of the libraries? (For when you run your application in debug mode) Then when you run your app in release mode just before deploying, which build of the libraries do you use? How do you perform the switch between your debug and release version of the libraries? Do you do it manually, do you use macros, or whatever else is it that you do? A: I would first determine what requirements are needed from the library: * *Debug/Release *Unicode support *And so on.. With that determined you can then create configurations for each combination required by yourself or other library users. When compiling and linking it is very important that you keep the libraries and executable consistent with respect to the configurations used, i.e. don't mix release & debug when linking. I know on the Windows/VS platform this can cause subtle memory issues if debug & release libs are mixed within an executable. As Brian has mentioned, in Visual Studio it's best to use the Configuration Manager to set up how you want each configuration you require to be built. For example our projects require the following configurations to be available depending on the executable being built. * *Debug+Unicode *Debug+ASCII *Release+Unicode *Release+ASCII The users of this particular project use the Configuration Manager to match their executable requirements with the project's available configurations. Regarding the use of macros, they are used extensively in implementing compile-time decisions for requirements like whether the debug or release version of a function is to be linked. If you're using VS you can view the pre-processor definitions attribute to see how the various macros are defined, e.g. _DEBUG _RELEASE; this is how the configuration controls what's compiled. What platform are you using to compile/link your projects? EDIT: Expanding on your updated comment.. If the Configuration Manager option is not available to you then I recommend using the following properties from the project: * *Linker->Additional Library Directories or Linker->Input Use the macro $(ConfigurationName) to link with the appropriate library configuration e.g. Debug/Release. $(ProjectDir)\..\third-party-prj\$(ConfigurationName)\third-party.lib * *Build Events or Custom Build Step configuration property Copy the required library file(s) from the dependent project prior to (or after) the build occurring. xcopy $(ProjectDir)\..\third-party-prj\$(ConfigurationName)\third-party.dll $(IntDir) The macro $(ProjectDir) will be substituted for the current project's location and causes the operation to occur relative to the current project. The macro $(ConfigurationName) will be substituted for the currently selected configuration (default is Debug or Release) which allows the correct items to be copied depending on what configuration is being built currently. If you use a regular naming convention for your project configurations it will help, as you can use the $(ConfigurationName) macro, otherwise you can simply use a fixed string. A: I use VS. The way I do it is to pull in the libraries I need through the references of the project. Which basically just says in what folder to look for a specific library at project load time.
I develop my libraries to be as project-independent and reusable as possible. Therefore they are all projects of their own. So for the libraries that I need for a specific project, I create a "3rdParty" or "libs" folder at the same level as my "src" folder in my svn folder tree. I tend to only use release libraries, but when I hit some unknown issue and want to switch to debug, I manually copy a debug version of the files into the "lib" folder and reload the project. I am unsure whether I should be keeping both debug and release versions in my svn tree. Although since they are projects of their own, keeping them in the svn tree of another project doesn't feel right. They can be built again without a hitch at any moment. And then I wanted to find a way of making the switch more...hmmm...well, basically automatic if you will, but that's not quite what I mean. It just feels that switching the files manually between release and debug isn't right. Maybe I haven't found it yet, but what I would like is an option that would do something like: For library "stack.dll" look in "......\3rdParty\" for release and "......\3rdPartyD\" for debug. Is there anything that does something like that? I don't know. What do you suggest? Remember libraries are external projects. Their built files are totally elsewhere. In fact, think of it as having to check out another project, build it, and copy the built library if you want another copy. How would you set that up?
{ "language": "en", "url": "https://stackoverflow.com/questions/45769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: C# Dynamic Event Subscription How would you dynamically subscribe to a C# event so that given an Object instance and a String name containing the name of the event, you subscribe to that event and do something (write to the console for example) when that event has been fired? It would seem using Reflection this isn't possible and I would like to avoid having to use Reflection.Emit if possible, as this currently (to me) seems like the only way of doing it. /EDIT: I do not know the signature of the delegate needed for the event, this is the core of the problem /EDIT 2: Although delegate contravariance seems like a good plan, I can not make the assumption necessary to use this solution A: It's not a completely general solution, but if all your events are of the form void Foo(object o, T args), where T derives from EventArgs, then you can use delegate contravariance to get away with it. Like this (where the signature of KeyDown is not the same as that of Click): public Form1() { Button b = new Button(); TextBox tb = new TextBox(); this.Controls.Add(b); this.Controls.Add(tb); WireUp(b, "Click", "Clickbutton"); WireUp(tb, "KeyDown", "Clickbutton"); } void WireUp(object o, string eventname, string methodname) { EventInfo ei = o.GetType().GetEvent(eventname); MethodInfo mi = this.GetType().GetMethod(methodname, BindingFlags.Public | BindingFlags.Instance | BindingFlags.NonPublic); Delegate del = Delegate.CreateDelegate(ei.EventHandlerType, this, mi); ei.AddEventHandler(o, del); } void Clickbutton(object sender, System.EventArgs e) { MessageBox.Show("hello!"); } A: It is possible to subscribe to an event using Reflection var o = new SomeObjectWithEvent(); o.GetType().GetEvent("SomeEvent").AddEventHandler(...); http://msdn.microsoft.com/en-us/library/system.reflection.eventinfo.addeventhandler.aspx Now here is the problem you are going to have to solve: the delegates required for each event handler will have different signatures. You are going to have to find a way to create these methods dynamically, which probably means Reflection.Emit, or you are going to have to limit yourself to a certain delegate so that you can handle it with compiled code. Hope this helps. A: You can compile expression trees to use void methods without any arguments as event handlers for events of any type. To accommodate other event handler types, you have to map the event handler's parameters to the events somehow.
using System; using System.Linq; using System.Linq.Expressions; using System.Reflection; class ExampleEventArgs : EventArgs { public int IntArg {get; set;} } class EventRaiser { public event EventHandler SomethingHappened; public event EventHandler<ExampleEventArgs> SomethingHappenedWithArg; public void RaiseEvents() { if (SomethingHappened!=null) SomethingHappened(this, EventArgs.Empty); if (SomethingHappenedWithArg!=null) { SomethingHappenedWithArg(this, new ExampleEventArgs{IntArg = 5}); } } } class Handler { public void HandleEvent() { Console.WriteLine("Handler.HandleEvent() called.");} public void HandleEventWithArg(int arg) { Console.WriteLine("Arg: {0}",arg); } } static class EventProxy { //void delegates with no parameters static public Delegate Create(EventInfo evt, Action d) { var handlerType = evt.EventHandlerType; var eventParams = handlerType.GetMethod("Invoke").GetParameters(); //lambda: (object x0, EventArgs x1) => d() var parameters = eventParams.Select(p=>Expression.Parameter(p.ParameterType,"x")); var body = Expression.Call(Expression.Constant(d),d.GetType().GetMethod("Invoke")); var lambda = Expression.Lambda(body,parameters.ToArray()); return Delegate.CreateDelegate(handlerType, lambda.Compile(), "Invoke", false); } //void delegate with one parameter static public Delegate Create<T>(EventInfo evt, Action<T> d) { var handlerType = evt.EventHandlerType; var eventParams = handlerType.GetMethod("Invoke").GetParameters(); //lambda: (object x0, ExampleEventArgs x1) => d(x1.IntArg) var parameters = eventParams.Select(p=>Expression.Parameter(p.ParameterType,"x")).ToArray(); var arg = getArgExpression(parameters[1], typeof(T)); var body = Expression.Call(Expression.Constant(d),d.GetType().GetMethod("Invoke"), arg); var lambda = Expression.Lambda(body,parameters); return Delegate.CreateDelegate(handlerType, lambda.Compile(), "Invoke", false); } //returns an expression that represents an argument to be passed to the delegate static Expression getArgExpression(ParameterExpression eventArgs, Type handlerArgType) { if (eventArgs.Type==typeof(ExampleEventArgs) && handlerArgType==typeof(int)) { //"x1.IntArg" var memberInfo = eventArgs.Type.GetMember("IntArg")[0]; return Expression.MakeMemberAccess(eventArgs,memberInfo); } throw new NotSupportedException(eventArgs+"->"+handlerArgType); } } static class Test { public static void Main() { var raiser = new EventRaiser(); var handler = new Handler(); //void delegate with no parameters string eventName = "SomethingHappened"; var eventinfo = raiser.GetType().GetEvent(eventName); eventinfo.AddEventHandler(raiser,EventProxy.Create(eventinfo,handler.HandleEvent)); //void delegate with one parameter string eventName2 = "SomethingHappenedWithArg"; var eventInfo2 = raiser.GetType().GetEvent(eventName2); eventInfo2.AddEventHandler(raiser,EventProxy.Create<int>(eventInfo2,handler.HandleEventWithArg)); //or even just: eventinfo.AddEventHandler(raiser,EventProxy.Create(eventinfo,()=>Console.WriteLine("!"))); eventInfo2.AddEventHandler(raiser,EventProxy.Create<int>(eventInfo2,i=>Console.WriteLine(i+"!"))); raiser.RaiseEvents(); } } A: public TestForm() { Button b = new Button(); this.Controls.Add(b); MethodInfo method = typeof(TestForm).GetMethod("Clickbutton", BindingFlags.NonPublic | BindingFlags.Instance); Type type = typeof(EventHandler); Delegate handler = Delegate.CreateDelegate(type, this, method); EventInfo eventInfo = b.GetType().GetEvent("Click"); eventInfo.AddEventHandler(b, handler); } void Clickbutton(object sender, System.EventArgs e) {
// Code here } A: Try LinFu--it has a universal event handler that lets you bind to any event at runtime. For example, here's how you can bind a handler to the Click event of a dynamic button: // Note: The CustomDelegate signature is defined as: // public delegate object CustomDelegate(params object[] args); CustomDelegate handler = delegate { Console.WriteLine("Button Clicked!"); return null; }; Button myButton = new Button(); // Connect the handler to the event EventBinder.BindToEvent("Click", myButton, handler); LinFu lets you bind your handlers to any event, regardless of the delegate signature. Enjoy! You can find it here: http://www.codeproject.com/KB/cs/LinFuPart3.aspx A: I recently wrote a series of blog posts describing unit testing events, and one of the techniques I discuss describes dynamic event subscription. I used reflection and MSIL (code emitting) for the dynamic aspects, but this is all wrapped up nicely. Using the DynamicEvent class, events can be subscribed to dynamically like so: EventPublisher publisher = new EventPublisher(); foreach (EventInfo eventInfo in publisher.GetType().GetEvents()) { DynamicEvent.Subscribe(eventInfo, publisher, (sender, e, eventName) => { Console.WriteLine("Event raised: " + eventName); }); } One of the features of the pattern I implemented was that it injects the event name into the call to the event handler so you know which event has been raised. Very useful for unit testing. The blog article is quite lengthy as it is describing an event unit testing technique, but full source code and tests are provided, and a detailed description of how dynamic event subscription was implemented is detailed in the last post. http://gojisoft.com/blog/2010/04/22/event-sequence-unit-testing-part-1/ A: What you want can be achieved using dependency injection. For example, the Microsoft Composite UI application block does exactly what you described. A: This method adds to an event a dynamic handler that calls a method OnRaised, passing the event parameters as an object array: void Subscribe(object source, EventInfo ev) { var eventParams = ev.EventHandlerType.GetMethod("Invoke").GetParameters().Select(p => Expression.Parameter(p.ParameterType)).ToArray(); var eventHandler = Expression.Lambda(ev.EventHandlerType, Expression.Call( instance: Expression.Constant(this), method: typeof(EventSubscriber).GetMethod(nameof(OnRaised), BindingFlags.NonPublic | BindingFlags.Instance), arg0: Expression.Constant(ev.Name), arg1: Expression.NewArrayInit(typeof(object), eventParams.Select(p => Expression.Convert(p, typeof(object))))), eventParams); ev.AddEventHandler(source, eventHandler.Compile()); } OnRaised has this signature: void OnRaised(string name, object[] parameters); A: Do you mean something like: //reflect out the method to fire as a delegate EventHandler eventDelegate = ( EventHandler ) Delegate.CreateDelegate( typeof( EventHandler ), //type of event delegate objectWithEventSubscriber, //instance of the object with the matching method eventSubscriberMethodName, //the name of the method true ); This doesn't do the subscription, but will give you the method to call. Edit: The post was clarified after this answer; my example won't help if you don't know the type. However, all events in .NET should follow the default event pattern, so as long as you've followed it this will work with the basic EventHandler.
{ "language": "en", "url": "https://stackoverflow.com/questions/45779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Automate Deployment for Web Applications? My team is currently trying to automate the deployment of our .NET and PHP web applications. We want to streamline deployments, and to avoid the hassle and many of the headaches caused by doing it manually. We require a solution that will enable us to: - Compile the application - Version the application with the SVN version number - Backup the existing site - Deploy to a web farm All our apps are source controlled using SVN and our .NET apps use CruiseControl. We have been trying to use MSBuild and NAnt deployment scripts with limited success. We have also used Capistrano in the past, but wish to avoid using Ruby if possible. Are there any other deployment tools out there that would help us? A: Thank you all for your kind suggestions. We checked them all out, but after careful consideration we decided to roll our own with a combination of CruiseControl, NAnt, MSBuild and MSDeploy. This article has some great information: Integrating MSBuild with CruiseControl.NET Here's roughly how our solution works: * *Developers build the 'debug' version of the app and run unit tests, then check in to SVN. *CruiseControl sees the updates and calls our build script... * *Runs any new migrations on the build database *Replaces the config files with the build server config *Builds the 'debug' configuration of the app *Runs all unit and integration tests *Builds the 'deploy' configuration of the app * *Versions the DLLs with the current major/minor version and SVN revision, e.g. 1.2.0.423 *Moves this new build to a 'release' folder on our build server *Removes unneeded files *Updates IIS on the build server if required Then when we have verified everything is ready to go up to live/staging we run another script to: * *Run migrations on live/staging server *MSDeploy: archive current live/staging site *MSDeploy: sync site from build to live/staging It wasn't pretty getting to this stage, but it's mostly working like a charm now :D I'm going to try and keep this answer updated as we make changes to our process, as there seem to be several similar questions on SA now. A: I have used Visual Build Pro for years. It's quite slick and easy to use and has many standard operations (like the ones you mentioned) built in. A: I use Puppet, Makefiles to build RPMs and Bamboo to do this for me. My system doesn't directly apply, and I'm not too familiar with the Windows world, but there are some transferable patterns. My make setup allows me to build RPMs for everything (PHP libs, PHP websites, Perl modules, C apps, etc.) that makes up my app. This can be called manually, or through Bamboo. I transfer these RPMs into a yum repo and Puppet handles making sure the latest (or correct) versions of software are installed in the cluster. Could you automate building software packages into MSIs? I think Puppet can manage installation of software packages and versions in Windows. A: I use msdeploy for this. It works perfectly. About Ant: for the .NET platform we have NAnt, and you can use it in combination with MSDeploy; you can call MSDeploy from your NAnt script. Edited: Just to make things clear: you can do everything with msdeploy. Using NAnt is not a requirement. A: Rather than using xcopy we managed to use the -source:dirpath command with UNC addresses to the servers with msdeploy.
The key was the ignoreAcls=true and removing calls to username and password in the msdeploy string: msdeploy -verb:sync -source:dirpath=\\build\e$\app -dest:dirpath=\\live\d$\app,ignoreAcls=true The example deploys the site from our build server's E drive to the D drive on our live server. There are some security considerations with exposing shares or this level of disk access on a live server. We're currently looking into using a limited-access shared folder. We then pipe this output to a log file which is then moved to the backup archive for reference. The log file records which files were moved and when. Continuing the example above with the output pipe command: ... > E:\archive\msdeploy.log A: No one mentioned Final Builder http://www.finalbuilder.com. It's on par with Visual Build Pro, with a good GUI for creating automated build and deployment harnesses. A: Fabric. Seems small, simple, procedural. Written in Python, since Ruby is a no-no (why?). A: Check out Setup Factory (from Indigo Rose). It's pretty robust in what it can do. It uses the Windows Installer API. It can probably do what you need. A: The only reason NAnt should exist is so that you have a framework similar to Ant in which you can write Tasks using the .NET languages. If you don't need a pure .NET developer to write custom Tasks, I can't see any reason you can't use Ant. Just because you write your application in a .NET language doesn't mean you have to use a .NET build tool.
{ "language": "en", "url": "https://stackoverflow.com/questions/45783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: How can I fork a background process from a Perl CGI script on Windows? I've had some trouble forking off processes from a Perl CGI script when running on Windows. The main issue seems to be that 'fork' is emulated when running on Windows, and doesn't actually seem to create a new process (just another thread in the current one). This means that web servers (like IIS) which are waiting for the process to finish continue waiting until the 'background' process finishes. Is there a way of forking off a background process from a CGI script under Windows? Even better, is there a single function I can call which will do this in a cross-platform way? (And just to make life extra difficult, I'd really like a good way to redirect the forked process's output to a file at the same time). A: If you want to do this in a platform-independent way, Proc::Background is probably the best way. A: Use Win32::Process->Create with the DETACHED_PROCESS parameter A: perlfork: Perl provides a fork() keyword that corresponds to the Unix system call of the same name. On most Unix-like platforms where the fork() system call is available, Perl's fork() simply calls it. On some platforms such as Windows where the fork() system call is not available, Perl can be built to emulate fork() at the interpreter level. While the emulation is designed to be as compatible as possible with the real fork() at the level of the Perl program, there are certain important differences that stem from the fact that all the pseudo child "processes" created this way live in the same real process as far as the operating system is concerned. A: I've found real problems with fork() on Windows, especially when dealing with Win32 Objects in Perl. Thus, if it's going to be Windows-specific, I'd really recommend you look at the Thread library within Perl. I use this to good effect accepting more than one connection at a time on websites using IIS, and then using even more threads to execute different scripts all at once. A: This question is very old, and the accepted answer is correct. However, I just got this to work, and figured I'd add some more detail about how to accomplish it for anyone who needs it. The following code exists in a very large Perl CGI script. This particular subroutine creates tickets in multiple ticketing systems, then uses the returned ticket numbers to make an automated call via Twilio services. The call takes a while, and I didn't want the CGI users to have to wait until the call ended to see the output from their request. To that end, I did the following: (All the CGI code before this point is standard stuff; it calls the needed subroutine, and then:) my $randnum = int(rand(100000)); my $callcmd = $progdir_path . "/aoff-caller.pl --uniqueid $uuid --region $region --ticketid $ticketid"; my $daemon = Proc::Daemon->new( work_dir => $progdir_path, child_STDOUT => $tmpdir_path . '/stdout.txt', child_STDERR => $tmpdir_path . '/stderr.txt', pid_file => $tmpdir_path . '/' . $randnum . '-pid.txt', exec_command => $callcmd, ); my $pid = $daemon->Init(); exit 0; (kill the CGI at the appropriate place) I am sure that the random number generated and attached to the pid is overkill, but I have no interest in creating issues that are extremely easily avoided. Hopefully this helps someone looking to do the same sort of thing. Remember to add use Proc::Daemon at the top of your script, mirror the code, alter the paths and names to match your program, and you should be good to go.
{ "language": "en", "url": "https://stackoverflow.com/questions/45792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Using IIS6, how can I place files in a sub-folder but have them served as if they were in the root? Our ASP.NET 3.5 website running on IIS 6 has two teams that are adding content: * *Development team adding code. *Business team adding simple web pages. For sanity and organization, we would like for the business team to add their web pages to a sub-folder in the project: Root: for pages of development team Content: for pages of business team But we would like for users to be able to navigate to the business team content without having to append "Content" in their URLs, as described below: Root: Default.aspx (Available at: www.oursite.com/default.aspx) Content: Popcorn.aspx (Available at: www.oursite.com/popcorn.aspx) Is there a way we can accomplish this without making a config entry in an ISAPI rewrite tool for every one of these pages? A: Since the extensions will be ASPX, ASP.NET will pick up the request... you can write an HttpModule that checks for pages that yield a 404 and then checks the subfolder also. If you know that all pages with a certain format will be coming from that folder, then you can just rewrite the URL in ASP.NET (either in Global.asax or an HttpModule). A: I don't have any way to test this right now, but I think you can use the -f flag on RewriteCond to check if a file exists, in either directory. RewriteCond %{REQUEST_FILENAME} !-f RewriteCond Content/%{REQUEST_FILENAME} -f RewriteRule (.*) Content/$1 Something like that might do what you're after, too.
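To flesh out the HttpModule suggestion in the first answer, here is a minimal sketch: if the requested path does not exist in the root but the same file exists under /Content, the request is rewritten there. The folder name and wiring are assumptions for illustration, not tested production configuration.

using System.IO;
using System.Web;

public class ContentFolderModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += delegate
        {
            HttpContext ctx = app.Context;
            string rootPath = ctx.Server.MapPath(ctx.Request.Path);
            if (!File.Exists(rootPath))
            {
                string contentUrl = "/Content" + ctx.Request.Path;
                if (File.Exists(ctx.Server.MapPath(contentUrl)))
                {
                    // Serve the page from the sub-folder transparently.
                    ctx.RewritePath(contentUrl);
                }
            }
        };
    }

    public void Dispose() { }
}

// Registered in web.config under system.web/httpModules.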
{ "language": "en", "url": "https://stackoverflow.com/questions/45796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Service to make an audio podcast from a video one? * *Video podcast *??? *Audio-only mp3 player I'm looking for somewhere which will extract audio from video, but instead of a single file, for an ongoing video podcast. I would most like a website which would suck in the RSS and spit out an RSS (I'm thinking of something like FeedBurner), though I would settle for something on my own machine. If it must be on my machine, it should be quick, transparent, and automatic when I download each episode. What would you use? Edit: I'm on an Ubuntu 8.04 machine; so running ffmpeg is no problem; however, I'm looking for automation and feed awareness. Here's my use case: I want to listen to lectures at Google Video, or Structure and Interpretation of Computer Programs. These videos come out fairly often, so anything that needs to be done manually will also be done fairly often. Here's one approach I'd thought of: * *download the RSS *parse the RSS for enclosures, *download the enclosures, keeping track of what has already been downloaded previously *transcode the files, but not the ones done already *reconstruct an RSS with the audio files, remembering to change the metadata. *schedule to be run periodically *point podcatcher at new RSS feed. I also liked the approach of gPodder of using a post-download script. I wish the Lazy Web still worked. A: You could automate this using the open source command line tool ffmpeg. Parse the RSS to get the video files, fetch them over the net if needed, then spit each one out to a command line like this: ffmpeg -i episode1.mov -ab 128000 episode1.mp3 The -ab switch sets the output bit rate to 128 kbits/s on the audio file; adjust as needed. Once you have the audio files you can reconstruct the RSS feed to link to the audio files if so desired. A: How to extract audio from video to MP3: http://www.dvdvideosoft.com/guides/dvd/extract-audio-from-video-to-mp3.htm How to Convert a Video Podcast to Audio Only: http://www.legalandrew.com/2007/03/10/how-to-convert-a-video-podcast-to-audio-only/ A: When you edit your video, doesn't your editor provide you an option to split out the audio? A: What platform is your own machine? What format is the video podcast? You could possibly get Handbrake to do this (Windows, Linux and Mac); I don't know if it's scriptable at all, but I think it can be used to separate audio and video. edit: There is a command-line interface for Handbrake, but it appears I was wrong about it accepting non-DVD input.
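For anyone wanting to script the download-and-transcode loop from the question in code rather than shell, here is a minimal sketch (in C#, to match the other examples in this document; it runs the same ffmpeg command). The feed URL, file handling, and the already-done check are assumptions for illustration.

using System;
using System.Diagnostics;
using System.IO;
using System.Net;
using System.Xml;

class PodcastTranscoder
{
    static void Main()
    {
        string feedUrl = "http://example.com/videocast.rss"; // hypothetical feed
        XmlDocument feed = new XmlDocument();
        feed.Load(feedUrl);

        foreach (XmlElement enclosure in feed.SelectNodes("//enclosure[@url]"))
        {
            string url = enclosure.GetAttribute("url");
            string video = Path.GetFileName(new Uri(url).LocalPath);
            string audio = Path.ChangeExtension(video, ".mp3");
            if (File.Exists(audio)) continue; // skip episodes done already

            new WebClient().DownloadFile(url, video);
            Process.Start("ffmpeg",
                String.Format("-i \"{0}\" -ab 128000 \"{1}\"", video, audio))
                .WaitForExit();
        }
        // Rewriting the RSS to point at the .mp3 files is left out of this sketch.
    }
}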
{ "language": "en", "url": "https://stackoverflow.com/questions/45803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: WPF Get Element(s) under mouse Is there a way with WPF to get an array of elements under the mouse on a MouseMove event? A: You can also try using the Mouse.DirectlyOver property to get the top-most element that is under the mouse. A: From "WPF Unleashed", page 383: Visual hit testing can inform you about all Visuals that intersect a location, [...] you must use [...] the [VisualTreeHelper.]HitTest method that accepts a HitTestResultCallback delegate. Before this version of HitTest returns, the delegate is invoked once for each relevant Visual, starting from the topmost and ending at the bottommost. The signature of such a callback is HitTestResultBehavior Callback(HitTestResult result) and it has to return HitTestResultBehavior.Continue to receive further hits, as shown below (from the linked page on MSDN): // Return the result of the hit test to the callback. public HitTestResultBehavior MyHitTestResult(HitTestResult result) { // Add the hit test result to the list that will be processed after the enumeration. hitResultsList.Add(result.VisualHit); // Set the behavior to return visuals at all z-order levels. return HitTestResultBehavior.Continue; } For further information, please consult the MSDN documentation for VisualTreeHelper.HitTest. A: Can you use VisualTreeHelper.HitTest? http://lukieb.blogspot.com/2008/07/visualtreehelperhittest.html
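Pulling the pieces above together, here is a minimal sketch that calls VisualTreeHelper.HitTest from a MouseMove handler and collects every visual under the cursor. It assumes the code lives in a Window (or some other Visual); the names are illustrative.

using System.Collections.Generic;
using System.Windows;
using System.Windows.Input;
using System.Windows.Media;

// Inside a Window subclass; wire up with: this.MouseMove += OnMouseMove;
List<DependencyObject> hitResultsList = new List<DependencyObject>();

void OnMouseMove(object sender, MouseEventArgs e)
{
    hitResultsList.Clear();
    Point pt = e.GetPosition(this);
    VisualTreeHelper.HitTest(this, null,
        new HitTestResultCallback(MyHitTestResult),
        new PointHitTestParameters(pt));
    // hitResultsList now holds every visual under the mouse, topmost first.
}

HitTestResultBehavior MyHitTestResult(HitTestResult result)
{
    hitResultsList.Add(result.VisualHit);
    return HitTestResultBehavior.Continue;
}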
{ "language": "en", "url": "https://stackoverflow.com/questions/45813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: Counting number of views for a page ignoring search engines? I notice that StackOverflow has a views count for each question and that these view numbers are fairly low and accurate. I have a similar thing on one of my sites. It basically logs a "hit" whenever the page is loaded in the backend code. Unfortunately it also does this for search engine hits, giving bloated and inaccurate numbers. I guess one way to not count a robot would be to do the view counting with an AJAX call once the page has loaded, but I'm sure there are other, better ways to ignore search engines in your hit counters whilst still letting them in to crawl your site. Do you know any? A: An AJAX call will do it, but usually search engines will not load images, JavaScript or CSS files, so it may be easier to include one of those files in the page, and pass the URL of the page you want to log a request against as a parameter in the file request. For example, in the page... http://www.example.com/example.html You might include in the head section <link href="empty.css?log=example.html" rel="stylesheet" type="text/css" /> And have your server side log the request, then return an empty css file. The same approach would apply to JavaScript or an image file, though in all cases you'll want to look carefully at what caching might take place. Another option would be to eliminate the search engines based on their user agent. There's a big list of possible user agents at http://user-agents.org/ to get you started. Of course, you could go the other way, and only count requests from things you know are web browsers (covering IE, Firefox, Safari, Opera and this newfangled Chrome thing would get you 99% of the way there). Even easier would be to use a log analytics tool like awstats or a service like Google Analytics, both of which have already solved this problem. A: To solve this problem I implemented a simple filter that would look at the User-Agent header in the HTTP request and compare it to a list of known robots. I got the robot list from www.robotstxt.org. It's downloadable in a simple text format that can easily be parsed to auto-generate the "blacklist". A: You don't really need to use AJAX, just use JavaScript to add an iframe off screen. KEEP IT SIMPLE <script type="text/javascript"> document.write('<iframe src="myLogScript.php" style="visibility:hidden" width="1" height="1" frameborder="0"></iframe>'); </script> A: An extension to Matt Sheppard's answer might be something like the following: <script type="text/javascript"> var thePg=window.location.pathname; var theSite=window.location.hostname; var theImage=new Image; theImage.src="/test/hitcounter.php?pg=" + thePg + "&site=" + theSite; </script> which can be plugged into a page header or footer template without needing to substitute the page name server-side. Note that if you include the query string (window.location.search), a robust version of this should encode the string to prevent evildoers from crafting page requests that exploit vulnerabilities based on weird stuff in URLs. The nice thing about this vs. a regular <img> tag or <iframe> is that the user won't see a red x if there is a problem with the hitcounter script. In some cases, it's also important to know the URL that was seen by the browser, before rewrites, etc. that happen server-side, and this gives you that. If you want it both ways, then add another parameter server-side that inserts that version of the page name into the query string as well.
An example of the log files from a test of this page: 10.1.1.17 - - [13/Sep/2008:22:21:00 -0400] "GET /test/testpage.html HTTP/1.1" 200 306 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.16) Gecko/20080702 Firefox/2.0.0.16" 10.1.1.17 - - [13/Sep/2008:22:21:00 -0400] "GET /test/hitcounter.php?pg=/test/testpage.html&site=www.home.***.com HTTP/1.1" 301 - "http://www.home.***.com/test/testpage.html" "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.16) Gecko/20080702 Firefox/2.0.0.16" A: The reason Stack Overflow has accurate view counts is that it only counts each view/user once. Third-party hit counter (and web statistics) applications often filter out search engines and display them in a separate window/tab/section. A: You are either going to have to do what you said in your question with AJAX, or exclude User-Agent strings that are known search engines. The only sure way to stop bots is with AJAX.
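Here is a minimal C# sketch of the User-Agent filtering approach described above, for an ASP.NET page. The substring list is a tiny illustrative sample (a real one would be generated from a list like the one at robotstxt.org), and LogPageView is a hypothetical method.

using System;
using System.Web;

public static class RobotFilter
{
    // Illustrative sample only; generate the real list from a robot database.
    static readonly string[] RobotFragments =
        { "googlebot", "msnbot", "slurp", "crawler", "spider" };

    public static bool IsRobot(HttpRequest request)
    {
        string agent = request.UserAgent;
        if (String.IsNullOrEmpty(agent)) return true; // real browsers send one
        agent = agent.ToLowerInvariant();
        foreach (string fragment in RobotFragments)
        {
            if (agent.Contains(fragment)) return true;
        }
        return false;
    }
}

// In Page_Load:
// if (!RobotFilter.IsRobot(Request)) LogPageView(Request.Path);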
{ "language": "en", "url": "https://stackoverflow.com/questions/45824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do you automatically set the focus to a textbox when a web page loads? How do you automatically set the focus to a textbox when a web page loads? Is there an HTML tag to do it or does it have to be done via JavaScript? A: IMHO, the 'cleanest' way to select the first visible, enabled text field on the page is to use jQuery and do something like this: $(document).ready(function() { $('input:text[value=""]:visible:enabled:first').focus(); }); Hope that helps... Thanks... A: <html> <head> <script language="javascript" type="text/javascript"> function SetFocus(InputID) { document.getElementById(InputID).focus(); } </script> </head> <body onload="SetFocus('Box2')"> <input id="Box1" size="30" /><br/> <input id="Box2" size="30" /> </body> </html> A: As general advice, I would recommend not stealing the focus from the address bar. (Jeff already talked about that.) A web page can take some time to load, which means that your focus change can occur some long time after the user typed the page URL. Then he could have changed his mind and gone back to typing the URL while your page is still loading, stealing the focus to put it in your textbox. That's the one and only reason that made me remove Google as my start page. Of course, if you control the network (local network) or if the focus change is to solve an important usability issue, forget all I just said :) A: If you're using jQuery: $(function() { $("#Box1").focus(); }); or Prototype: Event.observe(window, 'load', function() { $("Box1").focus(); }); or plain JavaScript: window.onload = function() { document.getElementById("Box1").focus(); }; though keep in mind that this will replace other onload handlers, so look up addLoadEvent() in Google for a safe way to append onload handlers rather than replacing them. A: I had a slightly different problem. I wanted autofocus, but wanted the placeholder text to remain, cross-browser. Some browsers would hide the placeholder text as soon as the field focused, some would keep it. I had to either get placeholders staying cross-browser, which has weird side effects, or stop using autofocus. So I listened for the first key typed against the body tag, and redirected that key into the target input field. Then all the event handlers involved get killed off to keep things clean. var urlInput = $('#Url'); function bodyFirstKey(ev) { $('body').off('keydown', bodyFirstKey); urlInput.off('focus', urlInputFirstFocus); if (ev.target == document.body) { urlInput.focus(); if (!ev.ctrlKey && !ev.metaKey && !ev.altKey) { urlInput.val(ev.key); return false; } } }; function urlInputFirstFocus() { $('body').off('keydown', bodyFirstKey); urlInput.off('focus', urlInputFirstFocus); }; $('body').keydown(bodyFirstKey); urlInput.focus(urlInputFirstFocus); https://jsfiddle.net/b9chris/qLrrb93w/ A: It is possible to set autofocus on input elements <input type="text" class="b_calle" id="b_calle" placeholder="Buscar por nombre de calle" autofocus="autofocus"> A: You need to use JavaScript: <BODY onLoad="document.getElementById('myButton').focus();"> @Ben notes that you should not add event handlers like this. While that is another question, he recommends that you use this function: function addLoadEvent(func) { var oldonload = window.onload; if (typeof window.onload != 'function') { window.onload = func; } else { window.onload = function() { if (oldonload) { oldonload(); } func(); } } } And then put a call to addLoadEvent on your page and reference a function that sets the focus to your desired textbox.
A: Simply write autofocus in the textfield. This is simple and it works like this: <input name="abc" autofocus> Hope this helps. A: You can do it easily by using jQuery in this way: <script type="text/javascript"> $(document).ready(function () { $("#myTextBoxId").focus(); }); </script> by calling this function in $(document).ready(). It means this function will execute when the DOM is ready. For more information about the READY function, refer to: http://api.jquery.com/ready/ A: Using plain vanilla HTML and JavaScript: <input type='text' id='txtMyInputBox' /> <script language='javascript' type='text/javascript'> function SetFocus() { // safety check, make sure it's a post-1999 browser if (!document.getElementById) { return; } var txtMyInputBoxElement = document.getElementById("txtMyInputBox"); if (txtMyInputBoxElement != null) { txtMyInputBoxElement.focus(); } } SetFocus(); </script> For those out there using the .NET Framework and ASP.NET 2.0 or above, it's trivial. If you are using older versions of the framework, you'd need to write some JavaScript similar to the above. In your OnLoad handler (generally page_load if you are using the stock page template supplied with Visual Studio) you can use: C# protected void PageLoad(object sender, EventArgs e) { Page.SetFocus(txtMyInputBox); } VB.NET Protected Sub PageLoad(sender as Object, e as EventArgs) Page.SetFocus(txtMyInputBox) End Sub (* Note I removed the underscore character from the function name that is generally Page_Load since in a code block it refused to render properly! I could not see in the markup documentation how to get underscores to render unescaped.) Hope this helps. A: In HTML there's an autofocus attribute available on all form fields. There's a good tutorial on it in Dive Into HTML5. Unfortunately it's currently not supported by IE versions less than 10. To use the HTML5 attribute and fall back to a JS option: <input id="my-input" autofocus="autofocus" /> <script> if (!("autofocus" in document.createElement("input"))) { document.getElementById("my-input").focus(); } </script> No jQuery, onload or event handlers are required, because the JS is below the HTML element. Edit: another advantage is that it works with JavaScript off in some browsers and you can remove the JavaScript when you don't want to support older browsers. Edit 2: Firefox 4 now supports the autofocus attribute, just leaving IE without support. A: Adjusted my answer from Dave1010 above as the JavaScript backup didn't work for me. <input type="text" id="my-input" /> And the JavaScript backup check: if (!(document.getElementById("my-input").hasAttribute("autofocus"))) { document.getElementById("my-input").focus(); } A: Use the code below; it works for me: jQuery("[id$='hfSpecialty_ids']").focus() A: If you are using ASP.NET then you can use yourControlName.Focus() in the code on the server, which will add appropriate JavaScript into the page. Other server-side frameworks may have an equivalent method.
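As a worked version of that last ASP.NET answer, here is a minimal code-behind sketch; TextBox1 is a hypothetical control declared in the page markup.

using System;
using System.Web.UI;

public partial class LoginPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // Emits the client-side script that focuses the control on load.
            TextBox1.Focus();
        }
    }
}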
{ "language": "en", "url": "https://stackoverflow.com/questions/45827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "175" }
Q: How many app.config files are you allowed to have per AppDomain? I'm hoping there's a way to avoid custom configuration files if an application runs in a single AppDomain. A: From Suzanne Cook's .NET CLR Notes: App.Config Files: By default, the app config file of the default appdomain is in the process exe’s directory and named the same as the process exe + ".config". Also, note that a web.config file is an app.config - ASP.NET sets that as the config file for your appdomain. To change the config file, set an AppDomainSetup.ConfigurationFile to the new location and pass that AppDomainSetup to your call to AppDomain.CreateDomain(). Then, run all of the code requiring that application config from within that new appdomain. Note, though, that you won’t be able to choose the CLR version by setting the ConfigurationFile – at that point, a CLR will already be running, and there can only be one per process. Application configuration files are per appdomain. So, you can set a ‘dll config’ by using the method above, but that means that it will be used for the entire appdomain, and it only gets one.
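To illustrate the AppDomainSetup approach from those notes, here is a minimal sketch that points a second AppDomain at its own config file. The paths and names are hypothetical.

using System;

class ConfigPerDomain
{
    static void PrintConfigPath()
    {
        Console.WriteLine(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
    }

    static void Main()
    {
        AppDomainSetup setup = new AppDomainSetup();
        setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;
        setup.ConfigurationFile = @"C:\app\custom.config";

        AppDomain domain = AppDomain.CreateDomain("CustomConfigDomain", null, setup);
        // Code run inside 'domain' reads its settings from custom.config.
        domain.DoCallBack(new CrossAppDomainDelegate(PrintConfigPath));
        AppDomain.Unload(domain);
    }
}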
{ "language": "en", "url": "https://stackoverflow.com/questions/45838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Web Design for Google Chrome What, if any, considerations (HTML, CSS, JavaScript) should you take when designing for Google Chrome? A: Chrome uses WebKit, the same engine as is used by Safari, OmniWeb, iCab and more. Just code everything based on the standards and verify in each browser. A: I think first and foremost you should focus on using HTML and scripting that follows the standards. After you have that running, file a bug report, then make the browser-specific tweaks. If Chrome is worth a flip you shouldn't have to tweak things for it. A: The same ones you'd take for Safari, as they share the same rendering engine (with a slight version mismatch). A: I'm sure filing a bug report really helps with all those IE rendering issues! Realistically, you need to test your application in each browser; no browser follows the W3C standards 100%, so ultimately you can't rely on the standards alone. You need to test everything you do in any browser you wish to support. As has been mentioned, Google Chrome has the same rendering engine as Safari/iPhone/etc., WebKit, which passes Acid3, so there should be minimal issues if you follow the standards. But don't rely on it. Google Chrome currently uses a slightly older version of WebKit than Safari. I'm sure they'll eventually be on the same version at some point, but unfortunately any new browser becomes just another browser to test in. A: Are you designing specifically for Chrome, or do you want to make sure your pages work well with Chrome? Assuming it's the latter, then just use the same design considerations you'd use for any browser. If applicable, keep in mind that many phones and video game consoles have web browsers now. Chrome uses a new JavaScript engine, so you'll have to test your JavaScript using Chrome as well as Safari. The HTML and CSS may render pretty much the same, but they use different JavaScript engines.
{ "language": "en", "url": "https://stackoverflow.com/questions/45846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I get js2-mode to use spaces instead of tabs in Emacs? I am using js2-mode to edit JavaScript in Emacs, but I can't seem to get it to stop using tabs instead of spaces for indentation. My other modes work fine, just having issues w/ js2. A: On my copy of GNU Emacs 24.2.1, setting: (setq-default indent-tabs-mode nil) in .emacs is not sufficient for javascript mode, presumably because the setting is somehow being overridden in a per-buffer context. The following change is sufficient: (custom-set-variables ;; custom-set-variables was added by Custom. ;; If you edit it by hand, you could mess it up, so be careful. ;; Your init file should contain only one such instance. ;; If there is more than one, they won't work right. '(indent-tabs-mode nil)) A: Do you have (setq-default indent-tabs-mode nil) in your .emacs? It works fine for me in Emacs 23.0.60.1 when I do that. js2-mode uses the standard Emacs function indent-to, which respects indent-tabs-mode, to do its indenting. A: Add this to your .emacs file somewhere after you load js2-mode: (setq js2-mode-hook '(lambda () (progn (set-variable 'indent-tabs-mode nil))))
{ "language": "en", "url": "https://stackoverflow.com/questions/45861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Can you disable the back button in a JFace wizard? I'm writing a wizard for an Eclipse RCP application. After doing some processing on a file and taking some user input, I don't want to let the user go back to make changes. At this point they must either accept or reject the changes they are about to make to the system. What I can't seem to find is a method call that lets me override the buttons that display or the user's ability to hit the back button. I'd prefer that it not be there or at least be disabled. Has anyone found a way to do this using the JFace Wizard and WizardPage? Usability-wise, am I breaking wizard conventions? Should I consider a different approach to the problem? A: Expanding on jodonell's answer: Disabling the back button is harder than it should be, due to non-intuitive behavior in the default implementation of WizardPage.getPreviousPage(). You can call setPreviousPage( null ), and getPreviousPage() still returns the previous page. You need to override the implementation of getPreviousPage() in order to disable the back button: public abstract class MyWizardPage extends WizardPage { private boolean backButtonEnabled = true; public void setBackButtonEnabled(boolean enabled) { backButtonEnabled = enabled; getContainer().updateButtons(); } @Override public IWizardPage getPreviousPage() { if (!backButtonEnabled) { return null; } return super.getPreviousPage(); } } See my blog post for a few more JFace wizard tips and tricks: http://nsawadsky.blogspot.com/2011/07/jface-wizard-tips-and-tricks.html A: From a UI perspective this seems rather bad. Your users are going to get frustrated if they make a mistake and want to go back and correct it and you don't let them. I think it would be much better to change the application to allow going back rather than looking for ways to prevent it. A: You can return null from the getPreviousPage() method in your wizard page implementation. A: There is no way to do this using standard JFace wizard APIs. My team accomplished this by writing a custom WizardDialog. We did this on an Eclipse RCP application and not on an eclipse plugin. Disabling the back button is breaking convention, but our business analysts really wanted the functionality.
{ "language": "en", "url": "https://stackoverflow.com/questions/45865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: MySQL Partitioning / Sharding / Splitting - which way to go? We have an InnoDB database that is about 70 GB and we expect it to grow to several hundred GB in the next 2 to 3 years. About 60 % of the data belong to a single table. Currently the database is working quite well as we have a server with 64 GB of RAM, so almost the whole database fits into memory, but we’re concerned about the future when the amount of data will be considerably larger. Right now we’re considering some way of splitting up the tables (especially the one that accounts for the biggest part of the data) and I’m now wondering, what would be the best way to do it.
The options I’m currently aware of are
* Using MySQL Partitioning that comes with version 5.1
* Using some kind of third party library that encapsulates the partitioning of the data (like hibernate shards)
* Implementing it ourselves inside our application
Our application is built on J2EE and EJB 2.1 (hopefully we’re switching to EJB 3 some day). What would you suggest?

EDIT (2011-02-11): Just an update: Currently the size of the database is 380 GB, the data size of our "big" table is 220 GB and the size of its index is 36 GB. So while the whole table does not fit in memory any more, the index does. The system is still performing fine (still on the same hardware) and we're still thinking about partitioning the data.

EDIT (2014-06-04): One more update: The size of the whole database is 1.5 TB, the size of our "big" table is 1.1 TB. We upgraded our server to a 4 processor machine (Intel Xeon E7450) with 128 GB RAM. The system is still performing fine. What we're planning to do next is putting our big table on a separate database server (we've already done the necessary changes in our software) while simultaneously upgrading to new hardware with 256 GB RAM. This setup is supposed to last for two years. Then we will either have to finally start implementing a sharding solution or just buy servers with 1 TB of RAM which should keep us going for some time.

EDIT (2016-01-18): We have since put our big table in its own database on a separate server. Currently the size of this database is about 1.9 TB, the size of the other database (with all tables except for the "big" one) is 1.1 TB.
Current Hardware setup:
* HP ProLiant DL 580
* 4 x Intel(R) Xeon(R) CPU E7- 4830
* 256 GB RAM
Performance is fine with this setup.

A: If you think you're going to be IO/memory bound, I don't think partitioning is going to be helpful. As usual, benchmarking first will help you figure out the best direction. If you don't have spare servers with 64GB of memory kicking around, you can always ask your vendor for a 'demo unit'. I would lean towards sharding if you don't expect 1 query aggregate reporting. I'm assuming you'd shard the whole database and not just your big table: it's best to keep entire entities together. Well, if your model splits nicely, anyway.

A: This is a great example of what MySQL partitioning can do in a real-life example of huge data flows: http://web.archive.org/web/20101125025320/http://www.tritux.com/blog/2010/11/19/partitioning-mysql-database-with-high-load-solutions/11/1 Hoping it will be helpful for your case.

A: You will definitely start to run into issues on that 42 GB table once it no longer fits in memory. In fact, as soon as it does not fit in memory anymore, performance will degrade extremely quickly. One way to test is to put that table on another machine with less RAM and see how poor it performs.

"First of all, it doesn't matter as much splitting out tables unless you also move some of the tables to a separate physical volume."

This is incorrect. Partitioning (either through the feature in MySQL 5.1, or the same thing using MERGE tables) can provide significant performance benefits even if the tables are on the same drive. As an example, let's say that you are running SELECT queries on your big table using a date range. If the table is whole, the query will be forced to scan through the entire table (and at that size, even using indexes can be slow). The advantage of partitioning is that your queries will only run on the partitions where it is absolutely necessary. If each partition is 1 GB in size and your query only needs to access 5 partitions in order to fulfill itself, the combined 5 GB table is a lot easier for MySQL to deal with than a monster 42 GB version. One thing you need to ask yourself is how you are querying the data. If there is a chance that your queries will only need to access certain chunks of data (i.e. a date range or ID range), partitioning of some kind will prove beneficial. I've heard that there is still some bugginess with MySQL 5.1 partitioning, particularly related to MySQL choosing the correct key. MERGE tables can provide the same functionality, although they require slightly more overhead. Hope that helps...good luck!

A: A while back at a Microsoft ArcReady event, I saw a presentation on scaling patterns that might be useful to you. You can view the slides for it online.

A: I would go for MariaDB InnoDB + Partitions (either by key or by date, depending on your queries). I did this and now I don't have any Database problems anymore. MySQL can be replaced with MariaDB in seconds...all the database files stay the same.

A: First of all, it doesn't matter as much splitting out tables unless you also move some of the tables to a separate physical volume. Secondly, it's not necessarily the table with the largest physical size that you want to move. You may have a much smaller table that gets more activity, while your big table remains fairly constant or only appends data. Whatever you do, don't implement it yourselves. Let the database system handle it.

A: What does the big table do? If you're going to split it, you've got a few options:
- Split it using the database system (don't know much about that)
- Split it by row.
- Split it by column.

Splitting it by row would only be possible if your data can be separated easily into chunks. e.g. Something like Basecamp has multiple accounts which are completely separate. You could keep 50% of the accounts in one table and 50% in a different table on a different machine. Splitting by Column is good for situations where the row size contains large text fields or BLOBS. If you've got a table with (for example) a user image and a huge block of text, you could farm the image into a completely different table. (on a different machine) You break normalisation here, but I don't think it would cause too many problems.

A: You would probably want to split that large table eventually. You'll probably want to put it on a separate hard disk, before thinking of a second server. Doing it with MySQL is the most convenient option. If it is capable, then go for it. BUT everything depends on how your database is being used, really. Statistics.
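To make the range-partitioning idea discussed earlier in this thread concrete, a sketch of what it could look like in MySQL 5.1 syntax (the table and column names are invented for illustration; also note that in 5.1 every unique key, including the primary key, must include the partitioning column):

ALTER TABLE big_table
  PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p2008 VALUES LESS THAN (TO_DAYS('2009-01-01')),
    PARTITION p2009 VALUES LESS THAN (TO_DAYS('2010-01-01')),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
  );

A query constrained on created_at then only touches the partitions covering that date range (partition pruning), which is exactly the 5 GB-instead-of-42 GB effect described above.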
{ "language": "en", "url": "https://stackoverflow.com/questions/45879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: View TFS checkin history through merges? In TFS when you merge branch A to branch B and checkin, you get a single changeset on B (typically with a comment like "merged A->B"). This means B doesn't have any of the checkin history from A. So if someone created a new file on branch A, you can't tell who created it from branch B. And if someone updated a file on A, you can't tell who did the update from branch B.
Is there any way to see this kind of detailed changeset history across branches? Some kind of power toy, or third party tool, or anything?
Update: The TFS Power Toy tfpt history /followbranches tool does not "expand merges," it only "expands branches" and therefore doesn't solve this problem.

A: Right now 'tf merges' and 'tf merges /f:detailed' provide the most complete merge tracking information. However, they are command-line only. And the only 3rd party tool I know of that attempts to provide a GUI is TFS Sidekicks.
This gets a lot easier in TFS 2010. See screenshots at:
* http://blogs.msdn.com/mitrik/archive/2009/06/08/first-class-branches.aspx
* http://msdn.microsoft.com/en-us/library/dd405662(VS.100).aspx
* http://msdn.microsoft.com/en-us/library/dd465202(VS.100).aspx
* http://blogs.msdn.com/bharry/archive/2008/01/16/new-features-to-understand-branching-merging.aspx (old prototype, has changed somewhat since then)

A: TFS 2010 will include support for this. Brian Harry talks about it in this presentation. You will now be able to see where a change originated and who made it after the change has been merged to a different branch.

A: TFS SideKicks is another good tool for supplementing TFS default tools.

A: The TFS 2008 power toys does come with the tf history /followbranches command. But that command doesn't expand merges. All it does is show you the change set history from A before branch B was created. What it doesn't show you is what change sets were merged from A -> B after the branch was created. In other words, what I want to see is all the change sets that were made on a source branch and then applied to a target branch as part of a merge operation.

A: I think you would find TFS Sidekicks helpful, especially the history area:
History Sidekick application pane provides the following features:
* View version control tree with files and folders (similar to Source Control Explorer)
* Search item (file or folder) by name and select found item in version control tree
* View selected item history either for all users or filtered by user
* Export history list to CSV file
* Compare file versions selected in history
* View selected item properties and pending changes
* View selected item branches tree and selected branch properties
* View selected item merge history; it is possible to view separately all merges performed with selected item as a merge target (merges to) or with selected item as a source (merges from)
* Compare merge target and source file versions in history
* View selected item merge candidates in a tree view; it is possible to select single merge source from the list
* Compare merge candidate file version with latest version of target file
* View selected item labels either for all users or filtered by user; the information displayed includes item version in label
* Compare file versions between two labels
* View changeset details supported in all lists containing changesets
Team Foundation Sidekicks

A: Might want to try the TFS Follow branch History tool: http://www.codeplex.com/TFSBranchHistory

A: The "TFS Branched History" plugin exists at the Microsoft Gallery: http://visualstudiogallery.msdn.microsoft.com/7d4f37b6-f9a4-44c6-b0a0-994956538a44
The plugin inserts a "Branched History" button into the context menu of Source Control Explorer (TFS). The button icon is a clock, like the standard "History" icon, but with a blue arrow. If you click "Branched History", a new window opens and its Path property is set to the current Source Control Explorer path. Click "Run query" to get results on the "History" tab. From the context menu you can open the standard Changeset Details and Compare File (Folder) dialogs.
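For reference, an invocation of the command-line option mentioned above could look like this (the server paths are placeholders; /format:detailed is the long form of /f:detailed):

tf merges $/MyProject/branches/A $/MyProject/trunk /recursive /format:detailed

Each row pairs a contributing changeset on the source branch with the merge changeset that carried it into the target.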
{ "language": "en", "url": "https://stackoverflow.com/questions/45882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: What is the most efficient way to sort an Html Select's Options by value, while preserving the currently selected item? I have jQuery but I'm not sure if it has any built-in sorting helpers. I could make a 2d array of each item's text, value, and selected properties, but I don't think that javascript's built in Array.sort() would work correctly.

A: Well, in IE6 it seems to sort on the nested array's [0] item:

function sortSelect(selectToSort) {
    var arrOptions = [];

    for (var i = 0; i < selectToSort.options.length; i++) {
        arrOptions[i] = [];
        arrOptions[i][0] = selectToSort.options[i].value;
        arrOptions[i][1] = selectToSort.options[i].text;
        arrOptions[i][2] = selectToSort.options[i].selected;
    }

    arrOptions.sort();

    for (var i = 0; i < selectToSort.options.length; i++) {
        selectToSort.options[i].value = arrOptions[i][0];
        selectToSort.options[i].text = arrOptions[i][1];
        selectToSort.options[i].selected = arrOptions[i][2];
    }
}

I'll see if this works in other browsers... Edit: it works in Firefox too, woo hoo! Is there an easier way than this though? Is there some method built into javascript or jQuery that sorts selects that I am missing, or is this the best way?

A: There's a closed jQuery ticket for a sort that should work, but just wasn't included in the core.

jQuery.fn.sort = function() {
    return this.pushStack( [].sort.apply( this, arguments ), []);
};

Referenced from a Google Groups thread, I think you just pass in a function that is used to sort, like so:

function sortSelect(selectToSort) {
    jQuery(selectToSort.options).sort(function(a,b){
        return a.value > b.value ? 1 : -1;
    });
}

Hope it helps!

A: This is a better solution. Declare a global function to jQuery:

$.fn.sortSelect = function() {
    var op = this.children("option");
    op.sort(function(a, b) {
        return a.text > b.text ? 1 : -1;
    })
    return this.empty().append(op);
}

And call the function from the code.

$("#my_select").sortSelect();

A: Modified Tom's answer above slightly so that it actually modifies the contents of the select box to be sorted, rather than just returning the sorted elements.

$('#your_select_box').sort_select_box();

jQuery function:

$.fn.sort_select_box = function(){
    // Get options from select box
    var my_options = $("#" + this.attr('id') + ' option');
    // sort alphabetically
    my_options.sort(function(a,b) {
        if (a.text > b.text) return 1;
        else if (a.text < b.text) return -1;
        else return 0
    })
    //replace with sorted my_options;
    $(this).empty().append( my_options );
    // clearing any selections
    $("#"+this.attr('id')+" option").attr('selected', false);
}

A: Array.sort() defaults to converting each element to a string, and comparing those values. So ["value", "text", "selected"] gets sorted as "value, text, selected". Which will probably work fine, most of the time. If you do want to sort on value alone, or interpret value as a number, then you can pass a comparison function into sort():

arrOptions.sort(function(a,b) { return new Number(a[0]) - new Number(b[0]); });

A: I've just wrapped Mark's idea in a jQuery function:

$('#your_select_box').sort_select_box();

jQuery function:

$.fn.sort_select_box = function(){
    var my_options = $("#" + this.attr('id') + ' option');
    my_options.sort(function(a,b) {
        if (a.text > b.text) return 1;
        else if (a.text < b.text) return -1;
        else return 0
    })
    return my_options;
}

A: Extract options into a temporary array, sort, then rebuild the list:

var my_options = $("#my_select option");
var selected = $("#my_select").val();

my_options.sort(function(a,b) {
    if (a.text > b.text) return 1;
    if (a.text < b.text) return -1;
    return 0
})

$("#my_select").empty().append( my_options );
$("#my_select").val(selected);

Mozilla's sort documentation (specifically the compareFunction) and Wikipedia's Sorting Algorithm page are relevant. If you want to make the sort case insensitive, replace text with text.toLowerCase(). The sort function shown above illustrates how to sort. Sorting non-English languages accurately can be complex (see the Unicode collation algorithm). Using localeCompare in the sort function is a good solution, eg:

my_options.sort(function(a,b) { return a.text.localeCompare(b.text); });

A: The solution I mentioned in my comment to @Juan Perez:

$.fn.sortOptions = function(){
    $(this).each(function(){
        var op = $(this).children("option");
        op.sort(function(a, b) {
            return a.text > b.text ? 1 : -1;
        })
        return $(this).empty().append(op);
    });
}

Usage:

$("select").sortOptions();

This can still be improved on, but I didn't need to add any more bells or whistles :)

A: Remember: if you want to use a context selector, just concatenating the ID will not work:

$.fn.sort_select_box = function(){
    var my_options = $("option", $(this));
    my_options.sort(function(a,b) {
        if (a.text > b.text) return 1;
        else if (a.text < b.text) return -1;
        else return 0
    });
    $(this).empty().append(my_options);
}

// Using:
$("select#ProdutoFornecedorId", $($context)).sort_select_box();

A: A bit late but for what it's worth I've implemented a more complex function you can include generically. It has a few options for varied output. It can also recurse into <OPTGROUP> tags based on their label.

$.fn.sortSelect = function(options){
    const OPTIONS_DEFAULT = {
        recursive: true,    // Recurse into <optgroup>
        reverse: false,     // Reverse order
        useValues: false,   // Use values instead of text for <option> (<optgroup> is always label based)
        blankFirst: true,   // Force placeholder <option> with empty value first, ignores reverse
    }
    if (typeof options != "object" || null === options) {
        options = OPTIONS_DEFAULT;
    }

    var sortOptions = function($root, $node, options){
        if ($node.length != 1) {
            return false;
        }
        if ($node[0].tagName != "SELECT" && $node[0].tagName != "OPTGROUP") {
            return false;
        }

        if (options.recursive) {
            $node.children('optgroup').each(function(k, v){
                return sortOptions($root, $(v), options);
            });
        }

        var $options = $node.children('option, optgroup');
        var $optionsSorted = $options.sort(function(a, b){
            if (options.blankFirst) {
                if (a.tagName == "OPTION" && a.value == "") {
                    return -1;
                }
                if (b.tagName == "OPTION" && b.value == "") {
                    return 1;
                }
            }

            var textA = (a.tagName == "OPTION" ? (options.useValues ? a.value : a.text) : a.label);
            var textB = (b.tagName == "OPTION" ? (options.useValues ? b.value : b.text) : b.label);

            if (textA > textB) {
                return options.reverse ? -1 : 1;
            }
            if (textA < textB) {
                return options.reverse ? 1 : -1;
            }
            return 0;
        });

        $options.remove();
        $optionsSorted.appendTo($node);
        return true;
    };

    var selected = $(this).val();
    var sorted = sortOptions($(this), $(this), {...OPTIONS_DEFAULT, ...options});
    $(this).val(selected);
    return sorted;
};

You can then call the sortSelect() function on any <SELECT> tag, or just a single <OPTGROUP> to only sort a group's options. Example:

$('select').sortSelect();

Reverse order using the "reverse" option:

$('select').sortSelect({ reverse: true });

You could apply this to all selects automatically, perhaps only if they include an important class (e.g. "js-sort") with this:

$('select.js-sort').each(function(k, v){
    $(v).sortSelect();
});

A: Seems jQuery still is not particularly helpful for sorting options in an html select element. Here is some plain JavaScript code for sorting options:

function sortOptionsByText(a,b) {
    // I keep an empty value option on top, b.value comparison to 0 might not be necessary if empty value is always on top...
    if (a.value.length==0 || (b.value.length>0 && a.text <= b.text)) return -1; // no sort: a, b
    return 1; // sort switches places: b, a
}

function sortOptionsByValue(a,b) {
    if (a.value <= b.value) return -1; // a, b
    return 1; // b, a
}

function clearChildren(elem) {
    if (elem) {
        while (elem.firstChild) {
            elem.removeChild(elem.firstChild);
        }
    }
}

function sortSelectElem(sel,byText) {
    const val=sel.value;
    const tmp=[...sel.options];
    tmp.sort(byText?sortOptionsByText:sortOptionsByValue);
    clearChildren(sel);
    sel.append(...tmp);
    sel.value=val;
}

RACE: <select id="list" size="6">
    <option value="">--PICK ONE--</option>
    <option value="1">HUMANOID</option>
    <option value="2">AMPHIBIAN</option>
    <option value="3">REPTILE</option>
    <option value="4">INSECTOID</option>
</select><br>
<button type="button" onclick="sortSelectElem(document.getElementById('list'));">SORT LIST BY VALUE</button><br>
<button type="button" onclick="sortSelectElem(document.getElementById('list'),true);">SORT LIST BY TEXT</button>

A: With jQuery this worked for me in Chrome in order to sort a select made from database unsorted elements:

$(document).ready(function(){
    var $list = $("#my_select");
    var selected = $("#my_select").val(); //save selected value
    $list.children().detach().sort(function(a, b) {
        return $(a).text().localeCompare($(b).text());
    }).appendTo($list); //do the sorting locale for latin chars
    $("#my_select").val(selected); //select previous selected value
});
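For readers on current browsers, the whole job, including preserving the selection, collapses to a few lines of plain JavaScript (a sketch; the element id is a placeholder, and the numeric flag is only wanted if values should compare as numbers):

const sel = document.getElementById('my_select');
const selected = sel.value; // remember the current selection
[...sel.options]
    .sort((a, b) => a.value.localeCompare(b.value, undefined, { numeric: true }))
    .forEach(opt => sel.add(opt)); // re-adding an existing option moves it
sel.value = selected; // restore the selection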
{ "language": "en", "url": "https://stackoverflow.com/questions/45888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86" }
Q: Opening a non-standard URL in a Cocoa app In an application that I'm writing I have some code like this:

NSWorkspace* ws = [NSWorkspace sharedWorkspace];
NSString* myurl = @"http://www.somewebsite.com/method?a=%d";
NSURL* url = [NSURL URLWithString:myurl];
[ws openURL:url];

The main difference being that myurl comes from somewhere outside my control. Note the %d in the URL which isn't entirely correct and means that URLWithString fails, returning nil.
What is the "correct" way of handling this? Do I need to parse the string and properly encode the arguments? Or is there some clever method in Cocoa that does all the hard work for me?

A: I'm not sure if this is exactly what you're looking for, but there is a method in NSString that will sanitize a URL: stringByAddingPercentEscapesUsingEncoding:

A: I think the behaviour here is correct, because %d is not a valid component of a URL (% is the escape, but expects two hex characters to follow it). You can't just URL encode the URL as given to you, because that would encode the /s and ?s as well, which you don't want. So, the question is, what's the correct behaviour here? Perhaps you would want it to be turned into...

http://www.somewebsite.com/method?a=%25d

(i.e. the % is encoded to the encoded version of % in a URL, so when method gets the input, it sees a as being set to %d)
I don't think there's any library function which will do that sort of thing for you, since there's no 'correct' way to do it. About the only correct thing you can do is return an error message saying the URL you were given is invalid (just as URLWithString does). If you wanted to try to handle the input, I guess you would need to search the URL for any % symbols which are not immediately followed by two hex characters, and then replace the % with %25 in that case. That should be quite possible with a regular expression, though I suspect there may be some additional complexities if your URLs start containing encoded versions of characters outside the ASCII character set.

A: Unfortunately you need to be smarter than what is provided by Apple: stringByAddingPercentEscapesUsingEncoding: will escape all invalid URL characters, so that "http://foo.com/hey%20dude/", which is valid, becomes "http://foo.com/hey%2520dude/", which is not what we want. According to the Apple documentation: http://developer.apple.com/library/mac/documentation/CoreFOundation/Reference/CFURLRef/Reference/reference.html#//apple_ref/c/func/CFURLCreateStringByAddingPercentEscapes
I made an NSURL category which does the right thing and works with odd strings such as ones with partial encoding (i.e. "http://foo.com/hey dude/i%20do%20it/"). Here is the code:

@interface NSURL (SmartEncoding)
+ (NSURL *)smartURLWithString:(NSString *)str;
@end

@implementation NSURL (SmartEncoding)
+ (NSURL *)smartURLWithString:(NSString *)str
{
    CFStringRef preprocessed = CFURLCreateStringByReplacingPercentEscapesUsingEncoding(NULL, (CFStringRef)str, CFSTR(""), kCFStringEncodingUTF8);
    if (!preprocessed)
        preprocessed = CFURLCreateStringByReplacingPercentEscapesUsingEncoding(NULL, (CFStringRef)str, CFSTR(""), kCFStringEncodingASCII);
    if (!preprocessed)
        return [NSURL URLWithString:str];

    CFStringRef sanitized = CFURLCreateStringByAddingPercentEscapes(NULL, preprocessed, NULL, NULL, kCFStringEncodingUTF8);
    CFRelease(preprocessed);
    NSURL *result = (NSURL *)CFURLCreateWithString(NULL, sanitized, NULL);
    CFRelease(sanitized);
    return [result autorelease];
}
@end

It works fine with UTF-8 encoded strings and ASCII ones.
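Tying this back to the original snippet, usage of that category would look something like this (a sketch; it adds nothing beyond the code above):

NSString* myurl = @"http://www.somewebsite.com/method?a=%d";
NSURL* url = [NSURL smartURLWithString:myurl];
if (url) {
    [[NSWorkspace sharedWorkspace] openURL:url];
}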
{ "language": "en", "url": "https://stackoverflow.com/questions/45898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you add an image? Situation: I have a simple XML document that contains image information. I need to transform it into HTML. However, I can't see where the open tag is and when I use the XSL code below, it shows the following error message: "Cannot write an attribute node when no element start tag is open."

XML content:

<root>
  <HeaderText>
    <HeaderText>Dan Testing</HeaderText>
  </HeaderText>
  <Image>
    <img width="100" height="100" alt="FPO lady" src="/uploadedImages/temp_photo_small.jpg"/>
  </Image>
  <BodyText>
    <p>This is a test of the body text<br /></p>
  </BodyText>
  <ShowLinkArrow>false</ShowLinkArrow>
</root>

XSL code:

<xsl:stylesheet version="1.0" extension-element-prefixes="msxsl"
    exclude-result-prefixes="msxsl js dl"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:js="urn:custom-javascript"
    xmlns:msxsl="urn:schemas-microsoft-com:xslt"
    xmlns:dl="urn:datalist">
  <xsl:output method="xml" version="1.0" omit-xml-declaration="yes" indent="yes" encoding="utf-8"/>
  <xsl:template match="/" xml:space="preserve">
    <img>
      <xsl:attribute name="width"> 100 </xsl:attribute>
      <xsl:attribute name="height"> 100 </xsl:attribute>
      <xsl:attribute name="class"> CalloutRightPhoto </xsl:attribute>
      <xsl:attribute name="src"> <xsl:copy-of select="/root/Image/node()"/> </xsl:attribute>
    </img>
  </xsl:template>
</xsl:stylesheet>

A: Shouldn't that be: <xsl:value-of select="/root/Image/img/@src"/> ? It looks like you are trying to copy the entire Image/img node to the attribute @src

A: In order to add attributes, XSL wants

<xsl:element name="img"> (attributes) </xsl:element>

instead of just

<img> (attributes) </img>

Although, yes, if you're just copying the element as-is, you don't need any of that.

A: Never mind -- I'm an idiot. I just needed <xsl:value-of select="/root/Image/node()"/>

A: Just to clarify the problem here - the error is in the following bit of code:

<xsl:attribute name="src">
  <xsl:copy-of select="/root/Image/node()"/>
</xsl:attribute>

The instruction xsl:copy-of takes a node or node-set and makes a copy of it - outputting a node or node-set. However an attribute cannot contain a node, only a textual value, so xsl:value-of would be a possible solution (as this returns the textual value of a node or nodeset). A MUCH shorter solution (and perhaps more elegant) would be the following:

<img width="100" height="100" src="{/root/Image/node()}" class="CalloutRightPhoto"/>

The use of the {} in the attribute is called an Attribute Value Template, and can contain any XPath expression. Note, the same XPath can be used here as you have used in the xsl:copy-of, as it knows to take the textual value when used in an Attribute Value Template.

A: The other option to try is a straightforward

<img width="100" height="100" src="/root/Image/image.jpeg" class="CalloutRightPhoto"/>

i.e. without {} but instead giving the direct image path
{ "language": "en", "url": "https://stackoverflow.com/questions/45904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Tips and tricks for working with Microsoft Visual Studio solutions and projects After answering this question I thought it would be nice to collect some tips & tricks for working with MSVS solutions and projects. Here is my list:

* How to avoid saving new projects automatically to reduce garbage in the file system.
Uncheck Tools->Options->Projects and Solutions->Save new projects when created

* How to add a common file to multiple projects without copying it to the project's directory.
Right click on a project, select Add->Existing Item->Add as link (press on the small arrow on the Add button)

* How to add a project to a solution without including it in the build process.
Right click on the solution, select Add->New solution folder. Right click on the created folder, select Add->Add existing project

* How to edit a project file from Visual Studio?
Right click on the project and select Unload Project, right click on the unloaded project and select Edit. Or install Power Commands and select Edit Project File

* How to group files in the project tree (like auto-generated files for WinForms controls).
Open the project file for editing. Change

<Compile Include="MainFile.cs" />
<Compile Include="SecondaryFile.cs" />

To

<Compile Include="SecondaryFile.cs">
  <DependentUpon>MainFile.cs</DependentUpon>
</Compile>

Do you have anything else to add?

A: I'm a huge fan of using msbuild to build my solutions with the /m option so that it builds using multiple cores. It can drastically decrease your build time. Scott Hanselman posted on how to add it to your tools list at http://www.hanselman.com/blog/HackParallelMSBuildsFromWithinTheVisualStudioIDE.aspx. I usually just run 'msbuild /m' from the command prompt or PowerShell, though.
Another tip that is sometimes useful is taking advantage of the pre- and post-build events to add additional logic before or after a build. To see these, go to the Properties for a Project, click on the Compile tab, and then choose "Build Events..."

A: I love debugging with the Multiple startup projects option

A: I like changing the default location that new projects are saved to: Tools->Options (select the Projects and Solutions tab). This "tab" has all sorts of goodness. Not just the ability to change the default locations and avoid saving new projects automatically, but other nice things as well. For example:
Track Active Item - Selects the file in the solution explorer when you change windows.
Show Output window when build starts - Toggle to show or not. I like it on; your mileage will vary.

A: Using the command window to quickly open files in your solution:
* Bring up the Command Window (CTRL-ALT-A)
* Type open <filename>
I create an alias for open by executing the following at the Command Window: alias o open. Visual Studio will remember the alias from then on, so I only ever need to type o <filename>. It even searches database projects unlike some third-party plugins!
Unfortunately, there is a bug in the filename completion when searching for nested files. A simple workaround is to type the beginning of the filename, hit the ESC key and then type the rest of the name. To search for login.aspx.cs, type login.aspx, hit ESC and then type .cs.

A: First rule of working with Visual Studio:
* Install ReSharper

A: I have a tip regarding the "Track Active Item" option mentioned above, for when working with big projects. It's posted here: Forcing the Solution Explorer to select the file in the editor in visual studio 2005
{ "language": "en", "url": "https://stackoverflow.com/questions/45908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: MenuStrip Error My users are having an intermittent error when using a Windows Forms application built in VB.NET 3.5. Apparently when they click on the form and the form re-paints, a red 'X' will be painted over the MenuStrip control and the app will crash with the following error. Has anyone seen this before? Can someone point me in the right direction?

System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
   at System.Collections.ArrayList.get_Item(Int32 index)
   at System.Windows.Forms.ToolStripItemCollection.get_Item(Int32 index)
   at System.Windows.Forms.ToolStrip.OnPaint(PaintEventArgs e)
   at System.Windows.Forms.Control.PaintWithErrorHandling(PaintEventArgs e, Int16 layer, Boolean disposeEventArgs)
   at System.Windows.Forms.Control.WmPaint(Message& m)
   at System.Windows.Forms.Control.WndProc(Message& m)
   at System.Windows.Forms.ScrollableControl.WndProc(Message& m)
   at System.Windows.Forms.ToolStrip.WndProc(Message& m)
   at System.Windows.Forms.MenuStrip.WndProc(Message& m)
   at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
   at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
   at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

A: Are you adding items to this strip dynamically?

A: You will have to find where in the code this is happening, but it is being caused by an integer variable being used to access your dynamic menu. Before you use the menu, use an if statement to make sure the index is between 0 and the size of the collection - 1. Also, place a break point where you create the variable and step through the code watching what happens to it. Also, a code sample of how you are using the dynamic menu would help.

A: While looking through the code, I discovered that the menu is being cleared and reloaded whenever the form data is being refreshed. The menu only needs to be loaded once, when the form is initially loaded. I think that the menu may be getting cleared while the form is in the process of being painted. Do you think that this may be true?

A: Thanks to all of you that helped to point me in the right direction. I made a change to only clear/add the menu when the form is loaded, so I shouldn't see this error again when the form is painting.
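For completeness, the bounds check suggested above could look like this in VB.NET (a sketch; the index variable and menu name are placeholders for whatever the application actually uses):

' Guard any index-based access into the MenuStrip's item collection.
If index >= 0 AndAlso index < MyMenuStrip.Items.Count Then
    Dim item As ToolStripItem = MyMenuStrip.Items(index)
    ' ... work with item ...
End If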
{ "language": "en", "url": "https://stackoverflow.com/questions/45924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Branch / merge management in Subversion 1.5 I've used subversion for a while, and used the svnmerge.py script for a while in my old job to manage merges between branches. I believe Subversion 1.5 is now out, and was supposed to have some branch / merge management system integrated with it.
So, can someone give me a quick overview of how to track merges in a branch with Subversion 1.5? Are there any pitfalls with using this support? Do I need to ensure all the team upgrades to 1.5 before using this support?

A: Usage
Merge tracking is managed by the client and stored in a property (svn:mergeinfo). To use merge tracking you just merge as usual but without the revision range:

svn merge trunkURL

The client will take care of reading the properties to see what revision(s) need to be merged in and then update the properties with the newly-merged revision(s). Here is a pretty basic overview of the process.

Pitfalls, etc.
I personally haven't run into any problems with merge tracking, but my usage of the feature has been pretty light.

Upgrading
There are two upgrades you'll need to do to get merge tracking:
* Server: Your server must be running 1.5 to get merge tracking support.
* Client: You can use a 1.x client against a 1.5 server, but you won't get merge tracking. Just upgrade everyone.

A: An addition to Chris's post: You may also have to upgrade the repository itself, if you run into "Retrieval of mergeinfo unsupported" messages. The command to run on the server to do this is

svnadmin upgrade REPOS_PATH

where REPOS_PATH is the local path to your repository, of course.

A: If you've built your repository with the svnmerge.py script you can upgrade your repository to use svn native merge tracking with the XXX command.
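For a concrete picture, a tracked merge session looks like this (the repository URL is a placeholder):

cd branch-working-copy
svn merge http://svn.example.com/repo/trunk    # no revision range needed
svn commit -m "Merged trunk into branch"
svn propget svn:mergeinfo .                    # inspect the recorded merge info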
{ "language": "en", "url": "https://stackoverflow.com/questions/45941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: php execute a background process I need to execute a directory copy upon a user action, but the directories are quite large, so I would like to be able to perform such an action without the user being aware of the time it takes for the copy to complete. Any suggestions would be much appreciated.

A: Well, I found a faster and easier version to use:

shell_exec('screen -dmS $name_of_screen $command');

and it works.

A: Here is a function to launch a background process in PHP. Finally created one that actually works on Windows too, after a lot of reading and testing different approaches and parameters.

function LaunchBackgroundProcess($command){
    // Run command asynchronously (in a separate thread)
    if(PHP_OS=='WINNT' || PHP_OS=='WIN32' || PHP_OS=='Windows'){
        // Windows
        $command = 'start "" '. $command;
    } else {
        // Linux/UNIX
        $command = $command .' > /dev/null &';
    }
    $handle = popen($command, 'r');
    if($handle!==false){
        pclose($handle);
        return true;
    } else {
        return false;
    }
}

Note 1: On Windows, do not use the /B parameter as suggested elsewhere. It forces the process to run in the same console window as the start command itself, resulting in the process being processed synchronously. To run the process in a separate thread (asynchronously), do not use /B.
Note 2: The empty double quotes after start "" are required if the command is a quoted path. The start command interprets the first quoted parameter as the window title.

A: Can you arrange to fork off a separate process, and then run your copy in the background? It's been a while since I did any PHP, but the function pcntl-fork looks promising.

A: Use this function to run your program in the background. It is cross-platform and fully customizable.

<?php
function startBackgroundProcess(
    $command,
    $stdin = null,
    $redirectStdout = null,
    $redirectStderr = null,
    $cwd = null,
    $env = null,
    $other_options = null
) {
    $descriptorspec = array(
        1 => is_string($redirectStdout) ? array('file', $redirectStdout, 'w') : array('pipe', 'w'),
        2 => is_string($redirectStderr) ? array('file', $redirectStderr, 'w') : array('pipe', 'w'),
    );
    if (is_string($stdin)) {
        $descriptorspec[0] = array('pipe', 'r');
    }
    $proc = proc_open($command, $descriptorspec, $pipes, $cwd, $env, $other_options);
    if (!is_resource($proc)) {
        throw new \Exception("Failed to start background process by command: $command");
    }
    if (is_string($stdin)) {
        fwrite($pipes[0], $stdin);
        fclose($pipes[0]);
    }
    if (!is_string($redirectStdout)) {
        fclose($pipes[1]);
    }
    if (!is_string($redirectStderr)) {
        fclose($pipes[2]);
    }
    return $proc;
}

Note that after the command has started, this function by default closes the stdin and stdout of the running process. You can redirect process output into some file via the $redirectStdout and $redirectStderr arguments.

Note for Windows users: You cannot redirect stdout/stderr to nul in the following manner:

startBackgroundProcess('ping yandex.com', null, 'nul', 'nul');

However, you can do this:

startBackgroundProcess('ping yandex.com >nul 2>&1');

Notes for *nix users:
1) Use the exec shell command if you want to get the actual PID:

$proc = startBackgroundProcess('exec ping yandex.com -c 15', null, '/dev/null', '/dev/null');
print_r(proc_get_status($proc));

2) Use the $stdin argument if you want to pass some data to the input of your program:

startBackgroundProcess('cat > input.txt', "Hello world!\n");

A: Assuming this is running on a Linux machine, I've always handled it like this:

exec(sprintf("%s > %s 2>&1 & echo $! >> %s", $cmd, $outputfile, $pidfile));

This launches the command $cmd, redirects the command output to $outputfile, and writes the process id to $pidfile. That lets you easily monitor what the process is doing and if it's still running.

function isRunning($pid){
    try{
        $result = shell_exec(sprintf("ps %d", $pid));
        if( count(preg_split("/\n/", $result)) > 2){
            return true;
        }
    }catch(Exception $e){}

    return false;
}

A: You might try a queuing system like Resque. You can then generate a job that processes the information and quickly return with the "processing" image. With this approach you won't know when it is finished, though. This solution is intended for larger scale applications, where you don't want your front machines to do the heavy lifting, so they can process user requests. Therefore it might or might not work with physical data like files and folders, but for processing more complicated logic or other asynchronous tasks (i.e. new registration mails) it is nice to have and very scalable.

A: A working solution for both Windows and Linux. Find more on my GitHub page.

function run_process($cmd,$outputFile = '/dev/null', $append = false){
    $pid=0;
    if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') { //'This is a server using Windows!';
        $cmd = 'wmic process call create "'.$cmd.'" | find "ProcessId"';
        $handle = popen("start /B ". $cmd, "r");
        $read = fread($handle, 200); //Read the output
        $pid=substr($read,strpos($read,'=')+1);
        $pid=substr($pid,0,strpos($pid,';') );
        $pid = (int)$pid;
        pclose($handle); //Close
    }else{
        $pid = (int)shell_exec(sprintf('%s %s %s 2>&1 & echo $!', $cmd, ($append) ? '>>' : '>', $outputFile));
    }
    return $pid;
}

function is_process_running($pid){
    if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') { //'This is a server using Windows!';
        //tasklist /FI "PID eq 6480"
        $result = shell_exec('tasklist /FI "PID eq '.$pid.'"' );
        if (count(preg_split("/\n/", $result)) > 0 && !preg_match('/No tasks/', $result)) {
            return true;
        }
    }else{
        $result = shell_exec(sprintf('ps %d 2>&1', $pid));
        if (count(preg_split("/\n/", $result)) > 2 && !preg_match('/ERROR: Process ID out of range/', $result)) {
            return true;
        }
    }
    return false;
}

function stop_process($pid){
    if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') { //'This is a server using Windows!';
        $result = shell_exec('taskkill /PID '.$pid );
        if (count(preg_split("/\n/", $result)) > 0 && !preg_match('/No tasks/', $result)) {
            return true;
        }
    }else{
        $result = shell_exec(sprintf('kill %d 2>&1', $pid));
        if (!preg_match('/No such process/', $result)) {
            return true;
        }
    }
}

A: Thanks to this answer: A perfect tool to run a background process would be the Symfony Process Component, which is based on the proc_* functions, but it's much easier to use. See its documentation for more information.

A: Write the process as a server-side script in whatever language (php/bash/perl/etc) is handy and then call it from the process control functions in your php script. The function probably detects if standard io is used as the output stream and if it is then that will set the return value... if not then it ends:

proc_close( proc_open( "./command --foo=1 &", array(), $foo ) );

I tested this quickly from the command line using "sleep 25s" as the command and it worked like a charm. (Answer found here)

A: You might want to try appending this to your command:

>/dev/null 2>/dev/null &

eg.

shell_exec('service named reload >/dev/null 2>/dev/null &');

A: I'd just like to add a very simple example for testing this functionality on Windows:

Create the following two files and save them to a web directory:

foreground.php:

<?php
ini_set("display_errors",1);
error_reporting(E_ALL);

echo "<pre>loading page</pre>";

function run_background_process()
{
    file_put_contents("testprocesses.php","foreground start time = " . time() . "\n");
    echo "<pre>  foreground start time = " . time() . "</pre>";
    // output from the command must be redirected to a file or another output stream
    // http://ca.php.net/manual/en/function.exec.php
    exec("php background.php > testoutput.php 2>&1 & echo $!", $output);
    echo "<pre>  foreground end time = " . time() . "</pre>";
    file_put_contents("testprocesses.php","foreground end time = " . time() . "\n", FILE_APPEND);
    return $output;
}

echo "<pre>calling run_background_process</pre>";
$output = run_background_process();
echo "<pre>output = ";
print_r($output);
echo "</pre>";
echo "<pre>end of page</pre>";
?>

background.php:

<?
file_put_contents("testprocesses.php","background start time = " . time() . "\n", FILE_APPEND);
sleep(10);
file_put_contents("testprocesses.php","background end time = " . time() . "\n", FILE_APPEND);
?>

Give IUSR permission to write to the directory in which you created the above files.
Give IUSR permission to READ and EXECUTE C:\Windows\System32\cmd.exe
Hit foreground.php from a web browser.

The following should be rendered to the browser w/the current timestamps and local resource # in the output array:

loading page
calling run_background_process
  foreground start time = 1266003600
  foreground end time = 1266003600
output = Array
(
    [0] => 15010
)
end of page

You should see testoutput.php in the same directory as the above files were saved, and it should be empty.
You should see testprocesses.php in the same directory as the above files were saved, and it should contain the following text w/the current timestamps:

foreground start time = 1266003600
foreground end time = 1266003600
background start time = 1266003600
background end time = 1266003610

A: If you need to just do something in the background without the PHP page waiting for it to complete, you could use another (background) PHP script that is "invoked" with the wget command. This background PHP script will be executed with privileges, of course, as any other PHP script on your system. Here is an example on Windows using wget from the gnuwin32 packages.

The background code (file test-proc-bg.php) as an example...

sleep(5); // some delay
file_put_contents('test.txt', date('Y-m-d/H:i:s.u')); // writes time in a file

The foreground script, the one invoking...

$proc_command = "wget.exe http://localhost/test-proc-bg.php -q -O - -b";
$proc = popen($proc_command, "r");
pclose($proc);

You must use the popen/pclose for this to work properly.

The wget options:
-q keeps wget quiet.
-O - outputs to stdout.
-b runs wget in the background.

A: Instead of initiating a background process, what about creating a trigger file and having a scheduler like cron or autosys periodically execute a script that looks for and acts on the trigger files? The triggers could contain instructions or even raw commands (better yet, just make it a shell script).
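A sketch of that trigger-file pattern (all paths and the cron schedule are invented for illustration):

<?php
// enqueue.php -- called from the user action: drop a job file and return at once
file_put_contents(
    '/var/spool/myapp/' . uniqid('copy_', true) . '.job',
    json_encode(array('src' => $srcDir, 'dst' => $dstDir))
);

// worker.php -- run from cron, e.g.: * * * * * php /path/to/worker.php
foreach (glob('/var/spool/myapp/*.job') as $job) {
    $spec = json_decode(file_get_contents($job), true);
    shell_exec(sprintf('cp -R %s %s',
        escapeshellarg($spec['src']), escapeshellarg($spec['dst'])));
    unlink($job); // done; remove the trigger
}
?>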
A: If using PHP there is a much easier way to do this using pcntl_fork (a minimal sketch appears at the end of this thread): http://www.php.net/manual/en/function.pcntl-fork.php

A: I am heavily using fastcgi_finish_request() in combination with a closure and register_shutdown_function().

$message = 'job executed';
$backgroundJob = function() use ($message) {
    //do some work here
    echo $message;
};

Then register this closure to be executed before shutdown:

register_shutdown_function($backgroundJob);

Finally, when the response has been sent to the client, you can close the connection to the client and continue working with the PHP process:

fastcgi_finish_request();

The closure will be executed after fastcgi_finish_request. The $message will not be visible at any time. And you can register as many closures as you want, but take care about script execution time. This will only work if PHP is running as a FastCGI module (was that right?!)

A: If you are looking to execute a background process via PHP, pipe the command's output to /dev/null and add & to the end of the command:

exec("bg_process > /dev/null &");

Note that you can not utilize the $output parameter of exec() or else PHP will hang (probably until the process completes).

A: PHP scripting is not like other desktop application development languages. In desktop application languages we can set daemon threads to run a background process, but in PHP a process only exists while a user's page request is being served. However, it is possible to set up a background job using the server's cron facility to run a PHP script.

A: For those of us using Windows, look at this:

Reference: http://php.net/manual/en/function.exec.php#43917

I too wrestled with getting a program to run in the background in Windows while the script continues to execute. This method, unlike the other solutions, allows you to start any program minimized, maximized, or with no window at all. llbra@phpbrasil's solution does work but it sometimes produces an unwanted window on the desktop when you really want the task to run hidden.

Start Notepad.exe minimized in the background:

<?php
$WshShell = new COM("WScript.Shell");
$oExec = $WshShell->Run("notepad.exe", 7, false);
?>

Start a shell command invisible in the background:

<?php
$WshShell = new COM("WScript.Shell");
$oExec = $WshShell->Run("cmd /C dir /S %windir%", 0, false);
?>

Start MSPaint maximized and wait for you to close it before continuing the script:

<?php
$WshShell = new COM("WScript.Shell");
$oExec = $WshShell->Run("mspaint.exe", 3, true);
?>

For more info on the Run() method go to: http://msdn.microsoft.com/library/en-us/script56/html/wsMthRun.asp

Edited URL: Go to https://technet.microsoft.com/en-us/library/ee156605.aspx instead as the link above no longer exists.

A: New answer to an old question. Using this library, the following code would spawn an asynchronous/parallel PHPThread to do background work.
* Must have pcntl, posix, and socket extensions
* Designed for/tested in CLI mode.

EZ code sample:

function threadproc($thread, $param) {
    echo "\tI'm a PHPThread.  In this example, I was given only one parameter: \"". print_r($param, true) ."\" to work with, but I can accept as many as you'd like!\n";
    for ($i = 0; $i < 10; $i++) {
        usleep(1000000);
        echo "\tPHPThread working, very busy...\n";
    }
    return "I'm a return value!";
}

$thread_id = phpthread_create($thread, array(), "threadproc", null, array("123456"));

echo "I'm the main thread doing very important work!\n";
for ($n = 0; $n < 5; $n++) {
    usleep(1000000);
    echo "Main thread...working!\n";
}

echo "\nMain thread done working.  Waiting on our PHPThread...\n";
phpthread_join($thread_id, $retval);
echo "\n\nOur PHPThread returned: " . print_r($retval, true) . "!\n";

A: From the official PHP documentation (php.net):

<?php
function execInBackground($cmd) {
    if (substr(php_uname(), 0, 7) == "Windows"){
        pclose(popen("start /B ". $cmd, "r"));
    }
    else {
        exec($cmd . " > /dev/null &");
    }
}
?>

A: I know it is a 100 year old post, but anyway, thought it might be useful to someone. You can put an invisible image somewhere on the page pointing to the url that needs to run in the background, like this:

<img src="run-in-background.php" border="0" alt="" width="1" height="1" />
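Since the pcntl_fork answer earlier in this thread gives no code, here is the promised minimal sketch (CLI-only; pcntl is generally unavailable or unsafe under web server SAPIs, and $src/$dst are placeholders):

<?php
$pid = pcntl_fork();
if ($pid == -1) {
    die("could not fork\n");
} elseif ($pid == 0) {
    // child: do the slow directory copy, then exit
    shell_exec(sprintf('cp -R %s %s',
        escapeshellarg($src), escapeshellarg($dst)));
    exit(0);
}
// parent: continues immediately; reap the child elsewhere with pcntl_wait()
?>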
{ "language": "en", "url": "https://stackoverflow.com/questions/45953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "285" }
Q: Analyzing MIPS binaries: is there a Python library for parsing binary data? I'm working on a utility which needs to resolve hex addresses to a symbolic function name and source code line number within a binary. The utility will run on Linux on x86, though the binaries it analyzes will be for a MIPS-based embedded system. The MIPS binaries are in ELF format, using DWARF for the symbolic debugging information. I'm currently planning to fork objdump, passing in a list of hex addresses and parsing the output to get function names and source line numbers. I have compiled an objdump with support for MIPS binaries, and it is working.
I'd prefer to have a package allowing me to look things up natively from the Python code without forking another process. I can find no mention of libdwarf, libelf, or libbfd on python.org, nor any mention of python on dwarfstd.org. Is there a suitable module available somewhere?

A: You might be interested in the DWARF library from pydevtools:

>>> from bintools.dwarf import DWARF
>>> dwarf = DWARF('test/test')
>>> dwarf.get_loc_by_addr(0x8048475)
('/home/emilmont/Workspace/dbg/test/main.c', 36, 0)

A: Please check pyelftools - a new pure Python library meant to do this.

A: You should give Construct a try. It is very useful to parse binary data into python objects. There is even an example for the ELF32 file format.

A: I don't know of any, but if all else fails you could use ctypes to directly use libdwarf, libelf or libbfd.

A: I've been developing a DWARF parser using Construct. Currently fairly rough, and parsing is slow. But I thought I should at least let you know. It may suit your needs, with a bit of work. I've got the code in Mercurial, hosted at bitbucket:
* http://bitbucket.org/cmcqueen1975/pythondwarf/
* http://bitbucket.org/cmcqueen1975/construct/ (necessary modifications to Construct library)
Construct is a very interesting library. DWARF is a complex format (as I'm discovering) and pushes Construct to its limits I think.

A: hachoir is another library for parsing binary data
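To give the pyelftools suggestion some substance, an address-to-line lookup could look roughly like this (a sketch along the lines of the library's own dwarf_decode_address example; file-table indices are 1-based for DWARF versions before 5):

from elftools.elf.elffile import ELFFile

def addr_to_source_line(path, address):
    with open(path, 'rb') as f:
        dwarf = ELFFile(f).get_dwarf_info()
        for cu in dwarf.iter_CUs():
            lineprog = dwarf.line_program_for_CU(cu)
            prev = None
            for entry in lineprog.get_entries():
                if entry.state is None:  # bookkeeping rows carry no state
                    continue
                if prev and prev.address <= address < entry.state.address:
                    fname = lineprog['file_entry'][prev.file - 1].name
                    return fname, prev.line
                prev = None if entry.state.end_sequence else entry.state
    return None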
{ "language": "en", "url": "https://stackoverflow.com/questions/45954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to multiply 10 to an "Integer" object in Java? How do I multiply an Integer object by 10 and get back an Integer object? I am looking for the neatest way of doing this.
I would probably do it this way: get the int from the Integer object, multiply it with the other int and create another Integer object with this int value. Code will be something like...

integerObj = new Integer(integerObj.intValue() * 10);

But, I saw code where the author is doing it this way: get the String from the Integer object, concatenate "0" at the end and then get the Integer object back by using Integer.parseInt. The code is something like this:

String s = integerObj + "0";
integerObj = Integer.parseInt(s);

Is there any merit in doing it either way? And what would be the most efficient/neatest way in general and in this case?

A: The string approach is amusing, but almost certainly a bad way to do it. Getting the int value of an Integer, and creating a new one, will be very fast, whereas parseInt would be fairly expensive to call. Overall, I'd agree with your original approach (which, as others have pointed out, can be done without so much clutter if you have autoboxing as introduced in Java 5).

A: The problem with the second way is the way Strings are handled in Java:
* "0" is converted into a constant String object at compile time.
* Each time this code is called, s is constructed as a new String object, and javac converts that code to String s = new StringBuilder().append(integerObj.toString()).append("0").toString() (StringBuffer for older versions). Even if you use the same integerObj, i.e., String s1 = integerObj + "0"; String s2 = integerObj + "0"; (s1 == s2) would be false, while s1.equals(s2) would be true.
* Integer.parseInt internally calls new Integer() anyway, because Integer is immutable.
BTW, autoboxing/unboxing is internally the same as the first method.

A: With Java 5's autoboxing, you can simply do:

Integer a = new Integer(2); // or even just Integer a = 2;
a *= 10;
System.out.println(a);

A: Keep away from the second approach; the best bet would be autoboxing if you're using Java 1.5. Anything earlier, and your first example would be best.

A: The solution using the String method is not so good for a variety of reasons. Some are aesthetic reasons, others are practical. On a practical front, more objects get created by the String version than the more normal form (as you have expressed in your first example). On an aesthetic note, I think that the second version obscures the intent of the code, and that is nearly as important as getting it to produce the result you want.

A: toolkit's answer above is correct and the best way, but it doesn't give a full explanation of what is happening. Assuming Java 5 or later:

Integer a = new Integer(2); // or even just Integer a = 2;
a *= 10;
System.out.println(a); // will output 20

What you need to know is that this is the exact same as doing:

Integer a = new Integer(2); // or even just Integer a = 2;
a = a.intValue() * 10;
System.out.println(a.intValue()); // will output 20

By performing the operation (in this case *=) on the object 'a', you are not changing the int value inside the 'a' object, but actually assigning a new object to 'a'. This is because 'a' gets auto-unboxed in order to perform the multiplication, and then the result of the multiplication gets auto-boxed and assigned to 'a'. Integer is an immutable object. (All wrapper classes are immutable.) Take for example this piece of code:

static void test() {
    Integer i = new Integer(10);
    System.out.println("StartingMemory: " + System.identityHashCode(i));
    changeInteger(i);
    System.out.println("Step1: " + i);
    changeInteger(++i);
    System.out.println("Step2: " + i.intValue());
    System.out.println("MiddleMemory: " + System.identityHashCode(i));
}

static void changeInteger(Integer i) {
    System.out.println("ChangeStartMemory: " + System.identityHashCode(i));
    System.out.println("ChangeStartValue: " + i);
    i++;
    System.out.println("ChangeEnd: " + i);
    System.out.println("ChangeEndMemory: " + System.identityHashCode(i));
}

The output will be:

StartingMemory: 1373539035
ChangeStartMemory: 1373539035
ChangeStartValue: 10
ChangeEnd: 11
ChangeEndMemory: 190331520
Step1: 10
ChangeStartMemory: 190331520
ChangeStartValue: 11
ChangeEnd: 12
ChangeEndMemory: 1298706257
Step2: 11
MiddleMemory: 190331520

You can see the memory address for 'i' is changing (your memory addresses will be different). Now let's do a little test with reflection; add this onto the end of the test() method:

System.out.println("MiddleMemory: " + System.identityHashCode(i));
try {
    final Field f = i.getClass().getDeclaredField("value");
    f.setAccessible(true);
    f.setInt(i, 15);
    System.out.println("Step3: " + i.intValue());
    System.out.println("EndingMemory: " + System.identityHashCode(i));
} catch (final Exception e) {
    e.printStackTrace();
}

The additional output will be:

MiddleMemory: 190331520
Step3: 15
EndingMemory: 190331520

You can see that the memory address for 'i' did not change, even though we changed its value using reflection. (DO NOT USE REFLECTION THIS WAY IN REAL LIFE!!)
{ "language": "en", "url": "https://stackoverflow.com/questions/45964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: mmap() vs. reading blocks I'm working on a program that will be processing files that could potentially be 100GB or more in size. The files contain sets of variable-length records. I've got a first implementation up and running and am now looking towards improving performance, particularly at doing I/O more efficiently since the input file gets scanned many times. Is there a rule of thumb for using mmap() versus reading in blocks via C++'s fstream library? What I'd like to do is read large blocks from disk into a buffer, process complete records from the buffer, and then read more. The mmap() code could potentially get very messy since mmap'd blocks need to lie on page-sized boundaries (my understanding) and records could potentially lie across page boundaries. With fstreams, I can just seek to the start of a record and begin reading again, since we're not limited to reading blocks that lie on page-sized boundaries. How can I decide between these two options without actually writing up a complete implementation first? Any rules of thumb (e.g., mmap() is 2x faster) or simple tests? A: There are lots of good answers here already that cover many of the salient points, so I'll just add a couple of issues I didn't see addressed directly above. That is, this answer shouldn't be considered a comprehensive treatment of the pros and cons, but rather an addendum to other answers here. mmap seems like magic Taking the case where the file is already fully cached [1] as the baseline [2], mmap might seem pretty much like magic: * *mmap only requires 1 system call to (potentially) map the entire file, after which no more system calls are needed. *mmap doesn't require a copy of the file data from kernel to user-space. *mmap allows you to access the file "as memory", including processing it with whatever advanced tricks you can do against memory, such as compiler auto-vectorization, SIMD intrinsics, prefetching, optimized in-memory parsing routines, OpenMP, etc. In the case that the file is already in the cache, it seems impossible to beat: you just directly access the kernel page cache as memory and it can't get faster than that. Well, it can. mmap is not actually magic because... mmap still does per-page work A primary hidden cost of mmap vs read(2) (which is really the comparable OS-level syscall for reading blocks) is that with mmap you'll need to do "some work" for every 4K page accessed in a new mapping, even though it might be hidden by the page-fault mechanism. For example, a typical implementation that just mmaps the entire file will need to fault in 100 GB / 4K = 25 million pages to read a 100 GB file. Now, these will be minor faults, but 25 million page faults is still not going to be super fast. The cost of a minor fault is probably in the 100s of nanos in the best case. mmap relies heavily on TLB performance Now, you can pass MAP_POPULATE to mmap to tell it to set up all the page tables before returning, so there should be no page faults while accessing it. Now, this has the little problem that it also reads the entire file into RAM, which is going to blow up if you try to map a 100GB file - but let's ignore that for now [3]. The kernel needs to do per-page work to set up these page tables (shows up as kernel time). This ends up being a major cost in the mmap approach, and it's proportional to the file size (i.e., it doesn't get relatively less important as the file size grows) [4].
Finally, even in user-space accessing such a mapping isn't exactly free (compared to large memory buffers not originating from a file-based mmap) - even once the page tables are set up, each access to a new page is going to, conceptually, incur a TLB miss. Since mmapping a file means using the page cache and its 4K pages, you again incur this cost 25 million times for a 100GB file. Now, the actual cost of these TLB misses depends heavily on at least the following aspects of your hardware: (a) how many 4K TLB entries you have and how the rest of the translation caching performs, (b) how well hardware prefetch deals with the TLB - e.g., can prefetch trigger a page walk? - and (c) how fast and how parallel the page walking hardware is. On modern high-end x86 Intel processors, the page walking hardware is in general very strong: there are at least 2 parallel page walkers, a page walk can occur concurrently with continued execution, and hardware prefetching can trigger a page walk. So the TLB impact on a streaming read load is fairly low - and such a load will often perform similarly regardless of the page size. Other hardware is usually much worse, however! read() avoids these pitfalls The read() syscall, which is what generally underlies the "block read" type calls offered e.g., in C, C++ and other languages, has one primary disadvantage that everyone is well aware of: * *Every read() call of N bytes must copy N bytes from kernel to user space. On the other hand, it avoids most of the costs above - you don't need to map 25 million 4K pages into user space. You can usually malloc a single small buffer in user space, and re-use that repeatedly for all your read calls. On the kernel side, there is almost no issue with 4K pages or TLB misses because all of RAM is usually linearly mapped using a few very large pages (e.g., 1 GB pages on x86), so the underlying pages in the page cache are covered very efficiently in kernel space. So basically you have the following comparison to determine which is faster for a single read of a large file: Is the extra per-page work implied by the mmap approach more costly than the per-byte work of copying file contents from kernel to user space implied by using read()? On many systems, they are actually approximately balanced. Note that each one scales with completely different attributes of the hardware and OS stack. In particular, the mmap approach becomes relatively faster when: * *The OS has fast minor-fault handling and especially minor-fault bulking optimizations such as fault-around. *The OS has a good MAP_POPULATE implementation which can efficiently process large maps in cases where, for example, the underlying pages are contiguous in physical memory. *The hardware has strong page translation performance, such as large TLBs, fast second-level TLBs, fast and parallel page-walkers, good prefetch interaction with translation and so on. ... while the read() approach becomes relatively faster when: * *The read() syscall has good copy performance. E.g., good copy_to_user performance on the kernel side. *The kernel has an efficient (relative to userland) way to map memory, e.g., using only a few large pages with hardware support. *The kernel has fast syscalls and a way to keep kernel TLB entries around across syscalls. The hardware factors above vary wildly across different platforms, even within the same family (e.g., within x86 generations and especially market segments) and definitely across architectures (e.g., ARM vs x86 vs PPC).
The OS factors keep changing as well, with various improvements on both sides causing a large jump in the relative speed for one approach or the other. A recent list includes: * *Addition of fault-around, described above, which really helps the mmap case without MAP_POPULATE. *Addition of fast-path copy_to_user methods in arch/x86/lib/copy_user_64.S, e.g., using REP MOVQ when it is fast, which really helps the read() case. Update after Spectre and Meltdown The mitigations for the Spectre and Meltdown vulnerabilities considerably increased the cost of a system call. On the systems I've measured, the cost of a "do nothing" system call (which is an estimate of the pure overhead of the system call, apart from any actual work done by the call) went from about 100 ns on a typical modern Linux system to about 700 ns. Furthermore, depending on your system, the page-table isolation fix specifically for Meltdown can have additional downstream effects apart from the direct system call cost due to the need to reload TLB entries. All of this is a relative disadvantage for read()-based methods as compared to mmap-based methods, since read() methods must make one system call for each "buffer size" worth of data. You can't arbitrarily increase the buffer size to amortize this cost, since using large buffers usually performs worse: you exceed the L1 size and hence are constantly suffering cache misses. On the other hand, with mmap, you can map in a large region of memory with MAP_POPULATE and then access it efficiently, at the cost of only a single system call. [1] This more-or-less also includes the case where the file wasn't fully cached to start with, but where the OS read-ahead is good enough to make it appear so (i.e., the page is usually cached by the time you want it). This is a subtle issue though because the way read-ahead works is often quite different between mmap and read calls, and can be further adjusted by "advise" calls as described in [2]. [2] ... because if the file is not cached, your behavior is going to be completely dominated by IO concerns, including how sympathetic your access pattern is to the underlying hardware - and all your effort should be in ensuring such access is as sympathetic as possible, e.g. via use of madvise or fadvise calls (and whatever application-level changes you can make to improve access patterns). [3] You could get around that, for example, by sequentially mmaping in windows of a smaller size, say 100 MB. [4] In fact, it turns out the MAP_POPULATE approach is (at least on some hardware/OS combinations) only slightly faster than not using it, probably because the kernel is using fault-around - so the actual number of minor faults is reduced by a factor of 16 or so. A: I'm sorry Ben Collins lost his sliding-windows mmap source code. That'd be nice to have in Boost. Yes, mapping the file is much faster. You're essentially using the OS virtual memory subsystem to associate memory-to-disk and vice versa. Think about it this way: if the OS kernel developers could make it faster, they would. Because doing so makes just about everything faster: databases, boot times, program load times, et cetera. The sliding window approach really isn't that difficult, as multiple contiguous pages can be mapped at once. So the size of the record doesn't matter so long as the largest of any single record will fit into memory. The important thing is managing the book-keeping. If a record doesn't begin on a getpagesize() boundary, your mapping has to begin on the previous page.
The length of the region mapped extends from the first byte of the record (rounded down if necessary to the nearest multiple of getpagesize()) to the last byte of the record (rounded up to the nearest multiple of getpagesize()). When you're finished processing a record, you can munmap() it, and move on to the next. This all works just fine under Windows too, using CreateFileMapping() and MapViewOfFile() (and GetSystemInfo() to get SYSTEM_INFO.dwAllocationGranularity --- not SYSTEM_INFO.dwPageSize). A: The main performance cost is going to be disk i/o. "mmap()" is certainly quicker than istream, but the difference might not be noticeable because the disk i/o will dominate your run-times. I tried Ben Collins's code fragment (see above/below) to test his assertion that "mmap() is way faster" and found no measurable difference. See my comments on his answer. I would certainly not recommend separately mmap'ing each record in turn unless your "records" are huge - that would be horribly slow, requiring 2 system calls for each record and possibly losing the page out of the disk-memory cache..... In your case I think mmap(), istream and the low-level open()/read() calls will all be about the same. I would recommend mmap() in these cases: * *There is random access (not sequential) within the file, AND *the whole thing fits comfortably in memory OR there is locality-of-reference within the file so that certain pages can be mapped in and other pages mapped out. That way the operating system uses the available RAM to maximum benefit. *OR if multiple processes are reading/working on the same file, then mmap() is fantastic because the processes all share the same physical pages. (btw - I love mmap()/MapViewOfFile()). A: mmap should be faster, but I don't know how much. It very much depends on your code. If you use mmap it's best to mmap the whole file at once; that will make your life a lot easier. One potential problem is that if your file is bigger than 4GB (or in practice the limit is lower, often 2GB) you will need a 64-bit architecture. So if you're using a 32-bit environment, you probably don't want to use it. Having said that, there may be a better route to improving performance. You said the input file gets scanned many times; if you can read it out in one pass and then be done with it, that could potentially be much faster. A: mmap is way faster. You might write a simple benchmark to prove it to yourself: char data[0x1000]; std::ifstream in("file.bin"); while (in) { in.read(data, 0x1000); // do something with data } versus: const int file_size=something; const int page_size=0x1000; int off=0; void *data; int fd = open("filename.bin", O_RDONLY); while (off < file_size) { data = mmap(NULL, page_size, PROT_READ, MAP_PRIVATE, fd, off); // do stuff with data munmap(data, page_size); off += page_size; } Clearly, I'm leaving out details (like how to determine when you reach the end of the file in the event that your file isn't a multiple of page_size, for instance), but it really shouldn't be much more complicated than this. If you can, you might try to break up your data into multiple files that can be mmap()-ed in whole instead of in part (much simpler). A couple of months ago I had a half-baked implementation of a sliding-window mmap()-ed stream class for boost_iostreams, but nobody cared and I got busy with other stuff.
Most unfortunately, I deleted an archive of old unfinished projects a few weeks ago, and that was one of the victims :-( Update: I should also add the caveat that this benchmark would look quite different in Windows because Microsoft implemented a nifty file cache that does most of what you would do with mmap in the first place. I.e., for frequently-accessed files, you could just do std::ifstream.read() and it would be as fast as mmap, because the file cache would have already done a memory-mapping for you, and it's transparent. Final Update: Look, people: across a lot of different platform combinations of OS and standard libraries and disks and memory hierarchies, I can't say for certain that the system call mmap, viewed as a black box, will always always always be substantially faster than read. That wasn't exactly my intent, even if my words could be construed that way. Ultimately, my point was that memory-mapped i/o is generally faster than byte-based i/o; this is still true. If you find experimentally that there's no difference between the two, then the only explanation that seems reasonable to me is that your platform implements memory-mapping under the covers in a way that is advantageous to the performance of calls to read. The only way to be absolutely certain that you're using memory-mapped i/o in a portable way is to use mmap. If you don't care about portability and you can rely on the particular characteristics of your target platforms, then using read may be suitable without measurably sacrificing any performance. Edit to clean up answer list: @jbl: the sliding window mmap sounds interesting. Can you say a little more about it? Sure - I was writing a C++ library for Git (a libgit++, if you will), and I ran into a similar problem to this: I needed to be able to open large (very large) files and not have performance be a total dog (as it would be with std::fstream). Boost::Iostreams already has a mapped_file Source, but the problem was that it was mmapping whole files, which limits you to 2^(wordsize). On 32-bit machines, 4GB isn't big enough. It's not unreasonable to expect to have .pack files in Git that become much larger than that, so I needed to read the file in chunks without resorting to regular file i/o. Under the covers of Boost::Iostreams, I implemented a Source, which is more or less another view of the interaction between std::streambuf and std::istream. You could also try a similar approach by just inheriting std::filebuf into a mapped_filebuf and similarly, inheriting std::fstream into a mapped_fstream. It's the interaction between the two that's difficult to get right. Boost::Iostreams has some of the work done for you, and it also provides hooks for filters and chains, so I thought it would be more useful to implement it that way. A: Perhaps you should pre-process the files, so each record is in a separate file (or at least that each file is an mmap-able size). Also, could you do all of the processing steps for each record before moving onto the next one? Maybe that would avoid some of the IO overhead? A: I agree that mmap'd file I/O is going to be faster, but while you're benchmarking the code, shouldn't the counterexample be somewhat optimized?
Ben Collins wrote: char data[0x1000]; std::ifstream in("file.bin"); while (in) { in.read(data, 0x1000); // do something with data } I would suggest also trying: char data[0x1000]; std::ifstream ifile("file.bin"); std::istream in(ifile.rdbuf()); while (in) { in.read(data, 0x1000); // do something with data } And beyond that, you might also try making the buffer size the same size as one page of virtual memory, in case 0x1000 is not the size of one page of virtual memory on your machine... IMHO mmap'd file I/O still wins, but this should make things closer. A: I remember mapping a huge file containing a tree structure into memory years ago. I was amazed by the speed compared to normal de-serialization, which involves a lot of work in memory, like allocating tree nodes and setting pointers. So in fact I was comparing a single call to mmap (or its counterpart on Windows) against many (MANY) calls to operator new and constructor calls. For that kind of task, mmap is unbeatable compared to de-serialization. Of course one should look into Boost's relocatable pointers for this. A: I was trying to find the final word on mmap / read performance on Linux and I came across a nice post (link) on the Linux kernel mailing list. It's from 2000, so there have been many improvements to IO and virtual memory in the kernel since then, but it nicely explains the reason why mmap or read might be faster or slower. * *A call to mmap has more overhead than read (just like epoll has more overhead than poll, which has more overhead than read). Changing virtual memory mappings is a quite expensive operation on some processors for the same reasons that switching between different processes is expensive. *The IO system can already use the disk cache, so if you read a file, you'll hit the cache or miss it no matter what method you use. However, * *Memory maps are generally faster for random access, especially if your access patterns are sparse and unpredictable. *Memory maps allow you to keep using pages from the cache until you are done. This means that if you use a file heavily for a long period of time, then close it and reopen it, the pages will still be cached. With read, your file may have been flushed from the cache ages ago. This does not apply if you use a file and immediately discard it. (If you try to mlock pages just to keep them in cache, you are trying to outsmart the disk cache and this kind of foolery rarely helps system performance.) *Reading a file directly is very simple and fast. The discussion of mmap/read reminds me of two other performance discussions: * *Some Java programmers were shocked to discover that nonblocking I/O is often slower than blocking I/O, which made perfect sense if you know that nonblocking I/O requires making more syscalls. *Some other network programmers were shocked to learn that epoll is often slower than poll, which makes perfect sense if you know that managing epoll requires making more syscalls. Conclusion: Use memory maps if you access data randomly, keep it around for a long time, or if you know you can share it with other processes (MAP_SHARED isn't very interesting if there is no actual sharing). Read files normally if you access data sequentially or discard it after reading. And if either method makes your program less complex, do that. For many real-world cases there's no sure way to show one is faster without testing your actual application and NOT a benchmark.
(Sorry for necro'ing this question, but I was looking for an answer and this question kept coming up at the top of Google results.) A: This sounds like a good use-case for multi-threading... I'd think you could pretty easily set up one thread to be reading data while the other(s) process it. That may be a way to dramatically increase the perceived performance. Just a thought. A: To my mind, using mmap() "just" unburdens the developer from having to write their own caching code. In a simple "read through file exactly once" case, this isn't going to be hard (although as mlbrock points out you still save the memory copy into process space), but if you're going back and forth in the file or skipping bits and so forth, I believe the kernel developers have probably done a better job implementing caching than I can... A: I think the greatest thing about mmap is the potential for asynchronous reading with: addr1 = NULL; while( size_left > 0 ) { r = min(MMAP_SIZE, size_left); addr2 = mmap(NULL, r, PROT_READ, MAP_FLAGS, fd, pos); if (addr1 != NULL) { /* process mmap from prev cycle */ feed_data(ctx, addr1, MMAP_SIZE); munmap(addr1, MMAP_SIZE); } addr1 = addr2; size_left -= r; pos += r; } feed_data(ctx, addr1, r); munmap(addr1, r); Problem is that I can't find the right MAP_FLAGS to give a hint that this memory should be synced from file asap. I hope that MAP_POPULATE gives the right hint for mmap (i.e., it will not try to load all contents before returning from the call, but will do that asynchronously, overlapping with feed_data). At least it gives better results with this flag, even though the manual states that it does nothing without MAP_PRIVATE since 2.6.23.
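The madvise/fadvise "advise" calls mentioned in footnote [2] above are easy to demonstrate. Here is a minimal sketch (Linux/POSIX assumed; the file name is a placeholder and error handling is omitted for brevity):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("records.bin", O_RDONLY);   /* placeholder file name */
        struct stat st;
        fstat(fd, &st);

        /* Hint for the read() path: tells the kernel to read ahead aggressively. */
        posix_fadvise(fd, 0, st.st_size, POSIX_FADV_SEQUENTIAL);

        /* Hint for the mmap path: the same idea, applied to the mapping itself. */
        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        madvise(p, st.st_size, MADV_SEQUENTIAL);

        /* ... scan the mapping here ... */

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }

Whether these hints help is workload- and kernel-dependent, so treat them as something to measure, not a guaranteed win.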
{ "language": "en", "url": "https://stackoverflow.com/questions/45972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "232" }
Q: Meaning/cause of RPC Exception 'No interfaces have been exported.' We have a fairly standard client/server application built using MS RPC. Both client and server are implemented in C++. The client establishes a session to the server, then makes repeated calls to it over a period of time before finally closing the session. Periodically, however, especially under heavy load conditions, we are seeing an RPC exception show up with code 1754: RPC_S_NOTHING_TO_EXPORT. It appears that this happens in the middle of a session. The user is logged on for a while, making successful calls, then one of the calls inexplicably returns this error. As far as we can tell, the server receives no indication that anything went wrong - and it definitely doesn't see the call the client made. The error code appears to have permanent implications, as well. Having the client retry the connection doesn't work, either. However, if the user has multiple user sessions active simultaneously between the same client and server, the other connections are unaffected. In essence, I have two questions: * *Does anyone know what RPC_S_NOTHING_TO_EXPORT means? The MSDN documentation simply says: "No interfaces have been exported." ... Huh? The session was working fine for numerous instances of the same call up until this point... *Does anyone have any ideas as to how to identify the real problem? Note: Capturing network traffic is something we would rather avoid, if possible, as the problem is sporadic enough that we would likely go through multiple gigabytes of traffic before running into an occurrence. A: Capturing network traffic would be one of the best ways to tackle this issue. If you can't do that, could you dump the client process and debug with WinDBG or Visual Studio? Perhaps compare a dump when operating normally versus in the error state?
{ "language": "en", "url": "https://stackoverflow.com/questions/45977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Choosing a folder with .NET 3.5 In a C# .NET 3.5 app (a mix of WinForms and WPF) I want to let the user select a folder to import a load of data from. At the moment, it's using System.Windows.Forms.FolderBrowserDialog but that's a bit lame. Mainly because you can't type the path into it (so you need to map a network drive, instead of typing a UNC path). I'd like something more like the System.Windows.Forms.OpenFileDialog, but for folders instead of files. What can I use instead? A WinForms or WPF solution is fine, but I'd prefer not to PInvoke into the Windows API if I can avoid it. A: Don't create it yourself! It's been done. You can use FolderBrowserDialogEx - a re-usable derivative of the built-in FolderBrowserDialog. This one allows you to type in a path, even a UNC path. You can also browse for computers or printers with it. Works just like the built-in FBD, but ... better. Full Source code. Free. MS-Public license. Code to use it: var dlg1 = new Ionic.Utils.FolderBrowserDialogEx(); dlg1.Description = "Select a folder to extract to:"; dlg1.ShowNewFolderButton = true; dlg1.ShowEditBox = true; //dlg1.NewStyle = false; dlg1.SelectedPath = txtExtractDirectory.Text; dlg1.ShowFullPathInEditBox = true; dlg1.RootFolder = System.Environment.SpecialFolder.MyComputer; // Show the FolderBrowserDialog. DialogResult result = dlg1.ShowDialog(); if (result == DialogResult.OK) { txtExtractDirectory.Text = dlg1.SelectedPath; } A: Unfortunately there are no dialogs other than FolderBrowserDialog for folder selection. You need to create this dialog yourself or use PInvoke. A: So far, based on the lack of responses to my identical question, I'd assume the answer is to roll your own dialog from scratch. I've seen things here and there about subclassing the common dialogs from VB6 and I think this might be part of the solution, but I've never seen anything about modifying what the dialog thinks it's selecting. It'd be possible through .NET via PInvoke and some other tricks, but I have yet to see code that does it. I know it's possible and it's not Vista-specific because Visual Studio has done it since VS 2003. Here's hoping someone answers either yours or mine! A: After hours of searching for a similar solution I found this answer by leetNightShade to a working solution. There are three things I believe make this solution much better than all the others. * *It is simple to use. It only requires you include two files (which can be combined to one anyway) in your project. *It falls back to the standard FolderBrowserDialog when used on XP or older systems. *The author grants permission to use the code for any purpose you deem fit. There’s no license as such as you are free to take and do with the code what you will. Download the code here.
{ "language": "en", "url": "https://stackoverflow.com/questions/45988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: What's the easiest way to convert Wiki markup to HTML? I'm building a website that requires very basic markup capabilities. I can't use any 3rd party plugins, so I just need a simple way to convert markup to HTML. I might have a total of 3 tags that I'll allow. What is the best way to convert ==Heading== to <h2>Heading</h2>, or --bold-- to <b>bold</b>? Can this be done simply with Regex, or does somebody have a simple function? I'm writing this in C#, but examples from other languages would probably work. A: It's not really a simple problem, because if you're going to display things back to the user, you'll need to also sanitise the input to ensure you don't create any cross-site scripting vulnerabilities. That said, you could probably do something pretty simple as you describe most easily with a regular expression replacement. For example, replace the pattern ==([^=]*)== with <h2>\1</h2> A: This really depends on the Wiki syntax you're using, as there are several different ones. Obviously the wiki software has this functionality somewhere; if you can't find a software package that does this for you, you could start looking for the relevant code in your wiki software. A: Maybe this is what you need. This page is a compilation of links, descriptions, and status reports of the various alternative MediaWiki parsers — that is, programs and projects, other than MediaWiki itself, which are able or intended to translate MediaWiki's text markup syntax into something else.
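For the regular-expression route suggested above, a minimal C# sketch might look like the following. The three-tag vocabulary is taken from the question, and the HtmlEncode call is one simple way to address the cross-site-scripting concern raised in the first answer (it assumes a reference to System.Web):

    using System.Text.RegularExpressions;
    using System.Web;

    static string WikiToHtml(string markup)
    {
        // Encode first so user input can't inject raw HTML.
        string html = HttpUtility.HtmlEncode(markup);

        // ==Heading== -> <h2>Heading</h2>
        html = Regex.Replace(html, @"==([^=]+)==", "<h2>$1</h2>");

        // --bold-- -> <b>bold</b>
        html = Regex.Replace(html, @"--([^-]+)--", "<b>$1</b>");

        return html;
    }

This is only a sketch: it doesn't handle unbalanced markers or nesting, which is where hand-rolled wiki parsers usually get complicated.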
{ "language": "en", "url": "https://stackoverflow.com/questions/45991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to change "Generate Method Stub" to throw NotImplementedException in VS? How can I change default Generate Method Stub behavior in Visaul Studio to generate method with body throw new NotImplementedException(); instead of throw new Exception("The method or operation is not implemented."); A: Taken from: http://blogs.msdn.com/ansonh/archive/2005/12/08/501763.aspx Visual Studio 2005 supports targeting the 1.0 version of the compact framework. In order to keep the size of the compact framework small, it does not include all of the same types that exist in the desktop framework. One of the types that is not included is NotImplementedException. You can change the generated code by editing the code snippet file: C:\Program Files\Microsoft Visual Studio 8\VC#\Snippets\1033\Refactoring\MethodStub.snippet and changing the Declarations section to the following: <Declarations> <Literal Editable="true"> <ID>signature</ID> <Default>signature</Default> </Literal> <Literal> <ID>Exception</ID> <Function>SimpleTypeName(global::System.NotImplementedException)</Function> </Literal> </Declarations> A: There's another reason: FxCop catches instances of anybody throwing 'Exception' and flags it, but throwing instances of 'NotImplementedException' is acceptable. I actually like the default behavior, because it does have this differentiation. NotImplementedException is not a temporary exception to be thrown while you're working your way through your code. It implies "I mean it, I'm really not going to implement this thing". If you leave the codegen the way it is, it's easy for you to differentiate in the code the "I will come back to this later" bits from "I've decided not to do this" bits.
{ "language": "en", "url": "https://stackoverflow.com/questions/46003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do you implement resource "edit" forms in a RESTful way? We are trying to implement a REST API for an application we have now. We want to expose read/write capabilities for various resources using the REST API. How do we implement the "form" part of this? I get how to expose "read" of our data by creating RESTful URLs that essentially function as method calls and return the data: GET /restapi/myobject?param=object-id-maybe ...and an XML document representing some data structure is returned. Fine. But, normally, in a web application, an "edit" would involve two requests: one to load the current version of the resource and populate the form with that data, and one to post the modified data back. But I don't get how you would do the same thing with the HTTP methods that REST is sort of mapped to. It's a PUT, right? Can someone explain this? (Additional consideration: The UI would be primarily done with AJAX) -- Update: That definitely helps. But I am still a bit confused about the server side. Obviously, I am not simply dealing with files here. On the server, the code that answers the requests should be filtering the request method to determine what to do with it? Is that the "switch" between reads and writes? A: I think you need to separate data services from web UI. When providing data services, a RESTful system is entirely appropriate, including the use of verbs that browsers can't support (like PUT and DELETE). When describing a UI, I think most people confuse "RESTful" with "nice, predictable URLs". I wouldn't be all that worried about a purely RESTful URL syntax when you're describing web UI. A: If you're submitting the data via plain HTML, you're restricted to doing a POST-based form. The URI that the POST request is sent to should not be the URI for the resource being modified. You should either POST to a collection resource that ADDs a newly created resource each time (with the URI for the new resource in the Location header and a 201 status code) or POST to an updater resource that updates a resource with a supplied URI in the request's content (or custom header). If you're using an XmlHttpRequest object, you can set the method to PUT and submit the data to the resource's URI. This can also work with empty forms if the server supplies a valid URI for the yet-nonexistent resource. The first PUT would create the resource (returning 201). Subsequent PUTs will either do nothing if it's the same data or modify the existing resource (in either case a 200 is returned unless an error occurs). A: There are many different alternatives you can use. A good solution is provided at the microformats wiki and has also been referenced by the RESTful JSON crew. As close as you can get to a standard, really. Operate on a Record GET /people/1 return the first record DELETE /people/1 destroy the first record POST /people/1?_method=DELETE alias for DELETE, to compensate for browser limitations GET /people/1/edit return a form to edit the first record PUT /people/1 submit fields for updating the first record POST /people/1?_method=PUT alias for PUT, to compensate for browser limitations A: The load should just be a normal GET request, and the saving of new data should be a POST to the URL which currently has the data... For example, load the current data from http://www.example.com/record/matt-s-example and then, change the data, and POST back to the same URL with the new data. A PUT request could be used when creating a new record (i.e.
PUT the data at a URL which doesn't currently exist), but in practice just POSTing is probably a better approach to get started with.
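Regarding the update in the question: yes, on the server the HTTP verb is exactly the "switch" between reads and writes. A rough sketch of what that dispatch might look like in an ASP.NET IHttpHandler (LoadResource, SaveResource, DeleteResource and WriteResource are hypothetical helpers standing in for your data layer):

    public void ProcessRequest(HttpContext context)
    {
        string id = context.Request.QueryString["id"]; // however you identify the resource

        switch (context.Request.HttpMethod)
        {
            case "GET":     // read: return the current representation
                WriteResource(context.Response, LoadResource(id));
                break;
            case "PUT":     // write: replace the resource with the request body
                SaveResource(id, context.Request.InputStream);
                break;
            case "DELETE":
                DeleteResource(id);
                break;
            default:
                context.Response.StatusCode = 405; // Method Not Allowed
                break;
        }
    }

The "edit form" itself is then just a GET that the client uses to populate its UI, followed by a PUT (or POST) carrying the modified data.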
{ "language": "en", "url": "https://stackoverflow.com/questions/46004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Self Updating What's the best way to terminate a program and then run additional code from the program that's being terminated? For example, what would be the best way for a program to update itself? A: You have a couple options: You could use another application .exe to do the auto update. This is probably the best method. You can also rename a program's exe while it is running. This allows you to fetch the new file from an update server and put it in place. On the program's next startup it will be using the new .exe. You can then delete the renamed file on startup. A: It'd be really helpful to know what language we're talking about here. I'm sure I could give you some really great tips for doing this in PowerBuilder or Cobol, but that might not really be what you're after! If you're talking Java, however, then you could use a shutdown hook - works great for me. A: Another thing to consider is that most of the "major" apps I've been using (FileZilla, Paint.NET, etc.) have their updaters uninstall the previous version of the app and then do a fresh install of the new version of the application. I understand this won't work for really large applications, but this does seem to be a "preferred" process for small-to-medium-size applications. A: I don't know of a way to do it without a second program that the primary program launches prior to shutting down. Program 2 downloads and installs the changes and then relaunches the primary program. A: We did something like this in our previous app. We captured the termination of the program (in .NET 2.0) from either the X or the close button, and then kicked off a background update process that the user didn't see. It would check the server (client-server app) for an update, and if there was one available, it would download in the background using BITS. Then the next time the application opened, it would realize that there was a new version (we set a flag) and would pop up a message alerting the user to the new version, and a button to click if they wanted to view the new features added to this version. A: It makes it easier if you have a secondary app that runs to do the updates. You would execute the "updater" app, and then inside of it wait for the other process to exit. If you need access to the regular app's DLLs and such but they also need updating, you can run the updater from a secondary location with already-updated DLLs so that they are not in use in the original location. A: If you're writing a .NET application, you might consider using ClickOnce. If you need quite a bit of customization, you might look elsewhere. We have an external process that performs updating for us. When it finds an update, it downloads it to a secondary folder and then waits for the main application to exit. On exit, it replaces all of the current files. The primary process just kicks the update process off every 4 hours. Because the update process will wait for the exit of the primary app, the primary app doesn't have to do any special processing other than start the update application. This is a side issue, but if you're considering writing your own update process, I would encourage you to look into using compression of some sort to (1) save on download and (2) provide one file to pull from an update server. Hope that makes sense!
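For the rename trick described in the first answer, a minimal C# sketch might look like this (the update URL is a placeholder, and a real implementation needs error handling, version checks, and some integrity verification of the download):

    using System.IO;
    using System.Net;

    static void SwapInUpdate(string updateUrl)
    {
        string exePath = System.Reflection.Assembly.GetEntryAssembly().Location;
        string oldPath = exePath + ".old";

        File.Delete(oldPath);         // clean up leftovers from a previous update
        File.Move(exePath, oldPath);  // a running exe can't be overwritten, but it can be renamed

        using (WebClient client = new WebClient())
        {
            client.DownloadFile(updateUrl, exePath);
        }
        // The new exe is picked up on the next startup; delete the .old file then.
    }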
{ "language": "en", "url": "https://stackoverflow.com/questions/46013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Data Validation Design Patterns If I have a collection of database tables (in an Access file, for example) and need to validate each table in this collection against a rule set that has both common rules across all tables as well as individual rules specific to one or a subset of tables, can someone recommend a good design pattern to look into? Specifically, I would like to avoid code similar to: void Main() { ValidateTable1(); ValidateTable2(); ValidateTable3(); } private void ValidateTable1() { //Table1 validation code goes here } private void ValidateTable2() { //Table2 validation code goes here } private void ValidateTable3() { //Table3 validation code goes here } Also, I've decided to use log4net to log all of the errors and warnings, so that each method can be declared void and doesn't need to return anything. Is this a good idea, or would it be better to create some sort of ValidationException that catches all exceptions and stores them in a List<ValidationException> before printing them all out at the end? I did find this, which looks like it may work, but I'm hoping to actually find some code samples to work off of. Any suggestions? Has anyone done something similar in the past? For some background, the program will be written in either C# or VB.NET and the tables will more than likely be stored in either Access or SQL Server CE. A: I'd return some type of ValidationSummary for each one... or an IList, depending on how you want to structure it. You could also opt to do some magic like this: using(var validation = new ValidationScope()) { ValidateTable1(); ValidateTable2(); ValidateTable3(); if(validation.HasErrors) { MessageBox.Show(validation.ValidationSummary); return; } DoSomethingElse(); } Then the ValidateTable would just reach into the current scope, like this: ValidationScope.Current.AddError("col1", "Col1 should not be NULL"); something to that effect. A: Two approaches: * *CSLA where anonymous methods on business objects are used for validation. *Read JP Boodhoo's blog where he has implemented a rules engine and has very detailed posts and sample code published. You can also see him at work on a dnrTV episode that's well worth watching.
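For reference, here is a minimal sketch of what the ValidationScope "magic" from the first answer might look like; the thread-static current-scope idea is an assumption about the intended implementation, not code from the original poster:

    using System;
    using System.Collections.Generic;

    public class ValidationScope : IDisposable
    {
        [ThreadStatic] private static ValidationScope current;
        private readonly List<string> errors = new List<string>();

        public ValidationScope() { current = this; }

        public static ValidationScope Current { get { return current; } }
        public bool HasErrors { get { return errors.Count > 0; } }
        public string ValidationSummary
        {
            get { return string.Join(Environment.NewLine, errors.ToArray()); }
        }

        public void AddError(string column, string message)
        {
            errors.Add(column + ": " + message);
        }

        public void Dispose() { current = null; }
    }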
A: I would try with a combination of the Factory and Visitor patterns: using System; using System.Collections.Generic; namespace Example2 { interface IVisitor { void Visit(Table1 table1); void Visit(Table2 table2); } interface IVisitable { void Accept(IVisitor visitor); } interface ILog { void Verbose(string message); void Debug(string message); void Info(string message); void Error(string message); void Fatal(string message); } class Error { public string Message { get; set; } } class Table1 : IVisitable { public int Id { get; set; } public string Data { get; set; } private IList<Table2> InnerElements { get; } = new List<Table2>(); public void Accept(IVisitor visitor) { visitor.Visit(this); foreach(var innerElement in InnerElements) visitor.Visit(innerElement); } } class Table2 : IVisitable { public int Id { get; set; } public int Data { get; set; } public void Accept(IVisitor visitor) { visitor.Visit(this); } } class Validator : IVisitor { private readonly ILog log; private readonly IRuleSet<Table1> table1Rules; private readonly IRuleSet<Table2> table2Rules; public Validator(ILog log, IRuleSet<Table1> table1Rules, IRuleSet<Table2> table2Rules) { this.log = log; this.table1Rules = table1Rules; this.table2Rules = table2Rules; } public void Visit(Table1 table1) { IEnumerable<Error> errors = table1Rules.EnforceOn(table1); foreach (var error in errors) log.Error(error.Message); } public void Visit(Table2 table2) { IEnumerable<Error> errors = table2Rules.EnforceOn(table2); foreach (var error in errors) log.Error(error.Message); } } class RuleSets { private readonly IRuleSetFactory factory; public RuleSets(IRuleSetFactory factory) { this.factory = factory; } public IRuleSet<Table1> RulesForTable1 => factory.For<Table1>() .AddRule(o => string.IsNullOrEmpty(o.Data), "Data1 is null or empty") .AddRule(o => o.Data.Length < 10, "Data1 is too short") .AddRule(o => o.Data.Length > 26, "Data1 is too long"); public IRuleSet<Table2> RulesForTable2 => factory.For<Table2>() .AddRule(o => o.Data < 0, "Data2 is negative") .AddRule(o => o.Data > 10, "Data2 is too big"); } interface IRuleSetFactory { IRuleSet<T> For<T>(); } interface IRuleSet<T> { IEnumerable<Error> EnforceOn(T obj); IRuleSet<T> AddRule(Func<T, bool> rule, string description); } class ConsoleLogger : ILog { public void Verbose(string message) { Console.WriteLine(message); } public void Debug(string message) { Console.WriteLine(message); } public void Info(string message) { Console.WriteLine(message); } public void Error(string message) { Console.WriteLine("ERROR: " + message); } public void Fatal(string message) { Console.WriteLine("FATAL: " + message); } } class SimpleRules : IRuleSetFactory { public IRuleSet<T> For<T>() { return new SimpleRuleSet<T>(); } } class SimpleRuleSet<T> : IRuleSet<T> { private readonly List<KeyValuePair<Func<T, bool>, string>> rules = new List<KeyValuePair<Func<T, bool>, string>>(); public IRuleSet<T> AddRule(Func<T, bool> rule, string description) { rules.Add(new KeyValuePair<Func<T, bool>, string>(rule, description)); return this; } public IEnumerable<Error> EnforceOn(T obj) { foreach (var rule in rules) if (rule.Key(obj)) yield return new Error { Message = rule.Value }; } } class Program { static void Main() { new Program().Run(); } void Run() { var log = new ConsoleLogger(); var factory = new SimpleRules(); var rules = new RuleSets(factory); var validator = new Validator(log, rules.RulesForTable1, rules.RulesForTable2); var toValidate = new List<IVisitable>(); toValidate.Add(new Table1 { Data = "short" }); toValidate.Add(new Table2 { Data = 42 }); foreach (var validatable in toValidate) validatable.Accept(validator); } } } A: Just an update on this: I decided to go with the Decorator pattern. That is, I have one 'generic' table class that implements an IValidateableTable interface (which contains a validate() method). Then, I created several validation decorators (that also implement IValidateableTable) which I can wrap around each table that I'm trying to validate. So, the code ends up looking like this: IValidateableTable table1 = new GenericTable(myDataSet); table1 = new NonNullNonEmptyColumnValidator(table1, "ColumnA"); table1 = new ColumnValueValidator(table1, "ColumnB", "ExpectedValue"); Then, all I need to do is call table1.Validate() which unwinds through the decorators calling all of the needed validations. So far, it seems to work really well, though I am still open to suggestions. A: I think you are really talking about a concept called constraints in the world of databases.
Constraints are how a database guarantees the integrity of the data it contains. It makes much more sense to put this sort of logic in the database, rather than the application (even Access offers rudimentary forms of constraints, such as requiring uniqueness of values in a column, or values from a list, etc.). Input validation (of individual fields) is of course a different matter, and any application should still perform that (to provide nice feedback to the user in case of problems), even if the DB has well-defined constraints on the table columns.
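To make the constraint idea concrete, here is roughly what those rules look like declared in SQL (table and column names are invented for the example):

    CREATE TABLE BatchEntry (
        EntryId     INT           NOT NULL PRIMARY KEY,
        ReferenceNo VARCHAR(20)   NOT NULL UNIQUE,              -- uniqueness in a column
        Amount      DECIMAL(12,2) NOT NULL CHECK (Amount <> 0), -- simple value rule
        Status      VARCHAR(10)   NOT NULL
                    CHECK (Status IN ('OPEN', 'POSTED'))        -- values from a list
    );

The database then rejects any insert or update that violates these rules, regardless of which application (or bug) produced the data.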
{ "language": "en", "url": "https://stackoverflow.com/questions/46029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: C# Force Form Focus So, I did search Google and SO prior to asking this question. Basically I have a DLL that has a form compiled into it. The form will be used to display information to the screen. Eventually it will be asynchronous and expose a lot of customization in the dll. For now I just want it to display properly. The problem that I am having is that I use the dll by loading it in a PowerShell session. So when I try to display the form and get it to come to the top and have focus, it has no problem with displaying over all the other apps, but I can't for the life of me get it to display over the PowerShell window. Here is the code that I am currently using to try and get it to display. I am sure that the majority of it won't be required once I figure it out; this just represents all the things that I found via Google. class Blah { [DllImport("user32.dll", EntryPoint = "SystemParametersInfo")] public static extern bool SystemParametersInfo(uint uiAction, uint uiParam, uint pvParam, uint fWinIni); [DllImport("user32.dll", EntryPoint = "SetForegroundWindow")] public static extern bool SetForegroundWindow(IntPtr hWnd); [DllImport("User32.dll", EntryPoint = "ShowWindowAsync")] private static extern bool ShowWindowAsync(IntPtr hWnd, int cmdShow); private const int WS_SHOWNORMAL = 1; public void ShowMessage(string msg) { MessageForm msgFrm = new MessageForm(); msgFrm.lblMessage.Text = "FOO"; msgFrm.ShowDialog(); msgFrm.BringToFront(); msgFrm.TopMost = true; msgFrm.Activate(); SystemParametersInfo((uint)0x2001, 0, 0, 0x0002 | 0x0001); ShowWindowAsync(msgFrm.Handle, WS_SHOWNORMAL); SetForegroundWindow(msgFrm.Handle); SystemParametersInfo((uint)0x2001, 200000, 200000, 0x0002 | 0x0001); } } As I say, I'm sure that most of that is either not needed or even flat-out wrong; I just wanted to show the things that I had tried. Also, as I mentioned, I plan to have this be asynchronously displayed at some point, which I suspect will wind up requiring a separate thread. Would splitting the form out into its own thread make it easier to cause it to get focus over the PowerShell session? @Joel, thanks for the info. Here is what I tried based on your suggestion: msgFrm.ShowDialog(); msgFrm.BringToFront(); msgFrm.Focus(); Application.DoEvents(); The form still comes up under the PowerShell session. I'll proceed with working out the threading. I've spawned threads before but never where the parent thread needed to talk to the child thread, so we'll see how it goes. Thanks for all the ideas so far, folks. OK, threading it took care of the problem. @Quarrelsome, I did try both of those. Neither (nor both together) worked. I am curious as to what is evil about using threading? I am not using Application.Run and I have yet to have a problem. I am using a mediator class that both the parent thread and the child thread have access to. In that object I am using a ReaderWriterLock to lock one property that represents the message that I want displayed on the form that the child thread creates. The parent locks the property then writes what should be displayed. The child thread locks the property and reads what it should change the label on the form to. The child has to do this on a polling interval (I default it to 500ms) which I'm not really happy about, but I could not find an event-driven way to let the child thread know that the property had changed, so I'm stuck with polling. A: Doesn't ShowDialog() have different window behavior than just Show()?
What if you tried: msgFrm.Show(); msgFrm.BringToFront(); msgFrm.Focus(); A: TopMost = true; .Activate() ? Either of those any good? Splitting it out into its own thread is a bit evil as it won't work properly if you don't call it with Application.Run, and that will swallow up the thread. In the worst-case scenario I guess you could separate it out into a different process and communicate via the disk or WCF. A: The following solution should meet your requirements: * *Assembly can be loaded into PowerShell and main class instantiated *When the ShowMessage method on this instance is called, a new window is shown and activated *If you call ShowMessage multiple times, this same window updates its title text and is activated *To stop using the window, call the Dispose method Step 1: Let's create a temporary working directory (you can naturally use your own dir) (powershell.exe) mkdir C:\TEMP\PshWindow cd C:\TEMP\PshWindow Step 2: Now let's define the class that we will be interacting with in PowerShell: // file 'InfoProvider.cs' in C:\TEMP\PshWindow using System; using System.Threading; using System.Windows.Forms; namespace PshWindow { public sealed class InfoProvider : IDisposable { public void Dispose() { GC.SuppressFinalize(this); lock (this._sync) { if (!this._disposed) { this._disposed = true; if (null != this._worker) { if (null != this._form) { this._form.Invoke(new Action(() => this._form.Close())); } this._worker.Join(); this._form = null; this._worker = null; } } } } public void ShowMessage(string msg) { lock (this._sync) { // make sure worker is up and running if (this._disposed) { throw new ObjectDisposedException("InfoProvider"); } if (null == this._worker) { this._worker = new Thread(() => (this._form = new MyForm(this._sync)).ShowDialog()) { IsBackground = true }; this._worker.Start(); while (this._form == null || !this._form.Created) { Monitor.Wait(this._sync); } } // update the text this._form.Invoke(new Action(delegate { this._form.Text = msg; this._form.Activate(); })); } } private bool _disposed; private Form _form; private Thread _worker; private readonly object _sync = new object(); } } As well as the Form that will be shown: // file 'MyForm.cs' in C:\TEMP\PshWindow using System; using System.Drawing; using System.Threading; using System.Windows.Forms; namespace PshWindow { internal sealed class MyForm : Form { public MyForm(object sync) { this._sync = sync; this.BackColor = Color.LightGreen; this.Width = 200; this.Height = 80; this.FormBorderStyle = FormBorderStyle.SizableToolWindow; } protected override void OnShown(EventArgs e) { base.OnShown(e); this.TopMost = true; lock (this._sync) { Monitor.PulseAll(this._sync); } } private readonly object _sync; } } Step 3: Let's compile the assembly... (powershell.exe) csc /out:PshWindow.dll /target:library InfoProvider.cs MyForm.cs Step 4: ... and load the assembly in PowerShell to have fun with it: (powershell.exe) [System.Reflection.Assembly]::LoadFile('C:\TEMP\PshWindow\PshWindow.dll') $a = New-Object PshWindow.InfoProvider $a.ShowMessage('Hello, world') A green-ish window with the title 'Hello, world' should now pop up and be active. If you reactivate the PowerShell window and enter: $a.ShowMessage('Stack overflow') The window's title should change to 'Stack overflow' and the window should be active again. To stop working with our window, dispose the object: $a.Dispose() This solution works as expected in both Windows XP SP3, x86 and Windows Vista SP1, x64.
If there are questions about how this solution works I can update this entry with a detailed discussion. For now I'm hoping the code is self-explanatory. A: I also had trouble activating and bringing a window to the foreground. Here is the code that eventually worked for me. I'm not sure if it will solve your problem. Basically, call ShowWindow() then SetForegroundWindow(). using System.Diagnostics; using System.Runtime.InteropServices; // Sets the window to be foreground [DllImport("User32")] private static extern int SetForegroundWindow(IntPtr hwnd); // Activate or minimize a window [DllImportAttribute("User32.DLL")] private static extern bool ShowWindow(IntPtr hWnd, int nCmdShow); private const int SW_SHOW = 5; private const int SW_MINIMIZE = 6; private const int SW_RESTORE = 9; private void ActivateApplication(string briefAppName) { Process[] procList = Process.GetProcessesByName(briefAppName); if (procList.Length > 0) { ShowWindow(procList[0].MainWindowHandle, SW_RESTORE); SetForegroundWindow(procList[0].MainWindowHandle); } } A: Here is some code that I've used on one form or another for a few years. There are a few gotchas to making a window in another app pop up. Once you have the window handle, do this: if (IsIconic(hWnd)) ShowWindowAsync(hWnd, SW_RESTORE); ShowWindowAsync(hWnd, SW_SHOW); SetForegroundWindow(hWnd); // Code from Karl E. Peterson, www.mvps.org/vb/sample.htm // Converted to Delphi by Ray Lischner // Published in The Delphi Magazine 55, page 16 // Converted to C# by Kevin Gale IntPtr foregroundWindow = GetForegroundWindow(); IntPtr Dummy = IntPtr.Zero; uint foregroundThreadId = GetWindowThreadProcessId(foregroundWindow, Dummy); uint thisThreadId = GetWindowThreadProcessId(hWnd, Dummy); if (AttachThreadInput(thisThreadId, foregroundThreadId, true)) { BringWindowToTop(hWnd); // IE 5.5 related hack SetForegroundWindow(hWnd); AttachThreadInput(thisThreadId, foregroundThreadId, false); } if (GetForegroundWindow() != hWnd) { // Code by Daniel P. Stasinski // Converted to C# by Kevin Gale IntPtr Timeout = IntPtr.Zero; SystemParametersInfo(SPI_GETFOREGROUNDLOCKTIMEOUT, 0, Timeout, 0); SystemParametersInfo(SPI_SETFOREGROUNDLOCKTIMEOUT, 0, Dummy, SPIF_SENDCHANGE); BringWindowToTop(hWnd); // IE 5.5 related hack SetForegroundWindow(hWnd); SystemParametersInfo(SPI_SETFOREGROUNDLOCKTIMEOUT, 0, Timeout, SPIF_SENDCHANGE); } I won't post the whole unit since it does other things that aren't relevant, but here are the constants and imports for the above code.
//Win32 API calls necessary to raise an unowned process's main window [DllImport("user32.dll")] private static extern bool SetForegroundWindow(IntPtr hWnd); [DllImport("user32.dll")] private static extern bool ShowWindowAsync(IntPtr hWnd, int nCmdShow); [DllImport("user32.dll")] private static extern bool IsIconic(IntPtr hWnd); [DllImport("user32.dll", SetLastError = true)] private static extern bool SystemParametersInfo(uint uiAction, uint uiParam, IntPtr pvParam, uint fWinIni); [DllImport("user32.dll", SetLastError = true)] private static extern uint GetWindowThreadProcessId(IntPtr hWnd, IntPtr lpdwProcessId); [DllImport("user32.dll")] private static extern IntPtr GetForegroundWindow(); [DllImport("user32.dll")] private static extern bool AttachThreadInput(uint idAttach, uint idAttachTo, bool fAttach); [DllImport("user32.dll")] static extern bool BringWindowToTop(IntPtr hWnd); [DllImport("user32.dll")] private static extern int GetWindowText(IntPtr hWnd, StringBuilder lpString, Int32 nMaxCount); [DllImport("user32.dll")] private static extern int GetWindowThreadProcessId(IntPtr hWnd, ref Int32 lpdwProcessId); [DllImport("User32.dll")] public static extern IntPtr GetParent(IntPtr hWnd); private const int SW_HIDE = 0; private const int SW_SHOWNORMAL = 1; private const int SW_NORMAL = 1; private const int SW_SHOWMINIMIZED = 2; private const int SW_SHOWMAXIMIZED = 3; private const int SW_MAXIMIZE = 3; private const int SW_SHOWNOACTIVATE = 4; private const int SW_SHOW = 5; private const int SW_MINIMIZE = 6; private const int SW_SHOWMINNOACTIVE = 7; private const int SW_SHOWNA = 8; private const int SW_RESTORE = 9; private const int SW_SHOWDEFAULT = 10; private const int SW_MAX = 10; private const uint SPI_GETFOREGROUNDLOCKTIMEOUT = 0x2000; private const uint SPI_SETFOREGROUNDLOCKTIMEOUT = 0x2001; private const int SPIF_SENDCHANGE = 0x2; A: Huge thanks, people. I think I've made it a bit shorter; here's what I put on a separate thread, and it seems to be working OK. private static void StatusChecking() { IntPtr iActiveForm = IntPtr.Zero, iCurrentActiveApp = IntPtr.Zero; Int32 iMyProcID = Process.GetCurrentProcess().Id, iCurrentProcID = 0; IntPtr iTmp = (IntPtr)1; while (bIsRunning) { try { Thread.Sleep(45); if (Form.ActiveForm != null) { iActiveForm = Form.ActiveForm.Handle; } iTmp = GetForegroundWindow(); if (iTmp == IntPtr.Zero) continue; GetWindowThreadProcessId(iTmp, ref iCurrentProcID); if (iCurrentProcID == 0) { iCurrentProcID = 1; continue; } if (iCurrentProcID != iMyProcID) { SystemParametersInfo(SPI_GETFOREGROUNDLOCKTIMEOUT, 0, IntPtr.Zero, 0); SystemParametersInfo(SPI_SETFOREGROUNDLOCKTIMEOUT, 0, IntPtr.Zero, SPIF_SENDCHANGE); BringWindowToTop(iActiveForm); SetForegroundWindow(iActiveForm); } else iActiveForm = iTmp; } catch (Exception ex) { Definitions.UnhandledExceptionHandler(ex, 103106); } } } I don't bother repasting the definitions... A: You shouldn't need to import any win32 functions for this. If .Focus() isn't enough, the form should also have a .BringToFront() method you can use. If that fails, you can set its .TopMost property to true. You don't want to leave it true forever, so then call Application.DoEvents so the form can process that message and set it back to false. A: Don't you just want the dialog to be a child of the calling form? To do that you'll need to pass in the calling window and use the ShowDialog( IWin32Window owner ) method.
{ "language": "en", "url": "https://stackoverflow.com/questions/46030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How do I lock certain SQL rows while running a process on them? My work has a financial application, written in VB.NET with SQL, that several users can be working on at the same time. At some point, one user might decide to Post the batch of entries that they (and possibly other people) are currently working on. Obviously, I no longer want any other users to add, edit, or delete entries in that batch after the Post process has been initiated. I have already seen that I can lock all data by opening the SQL transaction the moment the Post process starts, but the process can be fairly lengthy and I would prefer not to have the Transaction open for the several minutes it might take to complete the function. Is there a way to lock just the records that I know need to be operated on from VB.NET code? A: If you are using Oracle you would SELECT FOR UPDATE on the rows you are locking. Here is an example: SELECT address1 , city, country FROM location FOR UPDATE; A: You probably want to set an isolation level for the entire transaction rather than using with (rowlock) on specific tables. Look at this page: http://msdn.microsoft.com/en-us/library/ms173763.aspx Specifically, search within it for 'row lock', and I think you'll find that READ COMMITTED or REPEATABLE READ are what you want. READ COMMITTED is the SQL Server default. If READ COMMITTED doesn't seem strong enough to you, then go for REPEATABLE READ. Update: After reading one of your follow-up posts, you definitely want repeatable read. That will hold the lock until you either commit or rollback the transaction. A: add with (rowlock) to your SQL query SQL Server Performance article EDIT: ok, I misunderstood the question. What you want is transaction isolation. +1 to Joel :) A: Wrap it in a tran and use a HOLDLOCK + UPDLOCK in the SELECT. Example: begin tran select * from SomeTable (holdlock,updlock) where .... processing here commit
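For completeness, here is a rough sketch of what the transaction-plus-lock-hints approach above might look like from the .NET side. This is illustrative only: the Entries table, BatchId column, and method names are assumptions, not from the original question (and you would call it the same way from VB.NET).

using System.Data;
using System.Data.SqlClient;

class BatchPoster
{
    // Sketch: hold UPDLOCK + HOLDLOCK on one batch's rows until the Post
    // process commits, so other users can't add/edit/delete them meanwhile.
    public static void PostBatch(string connectionString, int batchId)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlTransaction tran = conn.BeginTransaction(IsolationLevel.RepeatableRead))
            {
                SqlCommand cmd = new SqlCommand(
                    "SELECT EntryId FROM Entries WITH (UPDLOCK, HOLDLOCK) WHERE BatchId = @batchId",
                    conn, tran);
                cmd.Parameters.Add(new SqlParameter("@batchId", batchId));
                cmd.ExecuteReader().Close(); // the selected rows are now locked

                // ... run the lengthy Post process here, inside the transaction ...

                tran.Commit(); // locks are released on commit (or rollback)
            }
        }
    }
}

Note that this still keeps a transaction open for the duration of the Post, which is exactly the trade-off the question wants to avoid; the lock-free alternative would be an application-level "this batch is posting" flag column instead of database locks.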
{ "language": "en", "url": "https://stackoverflow.com/questions/46034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Redirect from domain name to a dotted quad hosted box I have a php server that is running my domain name. For testing purposes I am running an ASP.NET site on a dotted quad IP. I am hoping to link them together via either PHP or some kind of DNS/.htaccess voodoo. So if I go to www.mydomain.com/test it redirects (but keeps the URL www.mydomain.com/test in the browser's address bar) and the pages are served by the dotted quad IP ASP.NET box. A: Instead of pointing www.yourdomain.com/test at your test server, why not use test.yourdomain.com? Assuming you have access to the DNS records for yourdomain.com, you should just need to create an A record mapping test.yourdomain.com to your test server's IP address. A: It is quite possible, if I understand what you're getting at. You have a PHP server with your domain pointing to it. You also have a separate ASP.NET server that only has an IP address associated with it, no domain. Is there any drawback to simply pointing your domain name to your ASP.NET box? A: The easiest way is to make www.mydomain.com/test serve an HTML file which has a single frame with the plain IP address. However, this means that the URL in the (awesome) address bar always stays exactly the same, even if you click a link on the displayed page. (You can avoid this by adding target=_top in the href, but this would require some modifications to your "asp.net".) The only other way I can think of is to make www.mydomain.com act as a proxy. That is, at /test it has a script or something that gets the page from your "asp.net" and forwards it to the client. A: You can do this with a proxy, but I think Will Harris's answer is the best - use a subdomain. Much simpler, and it'll get rid of issues with relative links as well. A: I agree that the sub-domain idea is the best, but if for some reason it doesn't work for you, you could also have the php page at /test proxy requests to a URL at the dotted quad machine (using fopen to access the dotted quad URL).
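For the ".htaccess voodoo" option specifically, here is a sketch under the assumption that the PHP box runs Apache with mod_rewrite and mod_proxy enabled (many shared hosts don't allow this). A rewrite rule with the proxy flag keeps www.mydomain.com/test in the address bar while the dotted quad box serves the pages:

# .htaccess at the root of www.mydomain.com
RewriteEngine On
# [P] proxies the request via mod_proxy instead of redirecting the browser
RewriteRule ^test/(.*)$ http://1.2.3.4/$1 [P,L]

Replace 1.2.3.4 with your actual dotted quad. Relative links on the proxied pages will resolve under /test, but any absolute links the ASP.NET pages emit back to the bare IP would leak the real address.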
{ "language": "en", "url": "https://stackoverflow.com/questions/46074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is the best approach for (client-side) disabling of a submit button? Details: * *Only disable after user clicks the submit button, but before the posting back to the server *ASP.NET Webforms (.NET 1.1) *Prefer jQuery (if any library at all) *Must be enabled if form reloads (i.e. credit card failed) This isn't a necessity that I do this, but if there is a simple way to do it without having to change too much, I'll do it. (i.e. if there isn't a simple solution, I probably won't do it, so don't worry about digging too deep) A: You could do something like this: $('form').submit(function() { $(this) .find(":submit,:image") // get all the submit buttons .attr({ disabled : 'disabled' }) // disable them .end() // go back to this form .submit(function() { // change the onsubmit to always reject. return false; }) ; }); Benefits of this: * *It will work with all your forms, with all methods of submission: * *clicking a submit element *pressing enter, or *calling form.submit() from some other code *It will disable all submit elements: * *<input type="submit"/> *<button type="submit"></button> *<input type="image" /> *it's really short. A: I'm guessing that you don't want them to hit the submit button more than once while the submit is processing. My approach has been to just hide the button entirely and display some sort of status indicator (animated gif, etc) instead. Here's a very contrived example (it's technically in prototype but I think a jquery version would be very similar): <html> <head> <script type="text/javascript" src="include/js/prototype.js"></script> <script type="text/javascript"> function handleSubmit() { $('submit').hide(); $('progressWheel').show(); return true; } </script> </head> <body> <img src="include/images/progress-wheel_lg.gif" id="progressWheel" style="display:none;"/> <input type="submit" name="submit" id="submit" value="Submit" onclick="handleSubmit();"/> </body> </html> A: For all submit buttons, via JQuery, it'd be: $('input[type=submit]').click(function() { this.disabled = true; }); Or it might be more useful to do so on form submission: $('form').submit(function() { $('input[type=submit]', this).attr("disabled","disabled"); }); But I think we could give a better answer to your question if we knew a bit more about the context. If this is an ajax request, then you'll need to make sure you enable submit buttons again on either success or failure. If this is a standard HTTP form submission (aside from disabling the button with javascript) and you're doing this to safeguard against multiple submissions of the same form, then you ought to have some sort of control in the code that deals with the submitted data, because disabling a button with javascript might not prevent multiple submissions. A: in JQuery: $('#SubmitButtonID').click(function() { this.disabled = true; }); A: One thing to be aware of is that you should not disable the button before the form is submitted. If you disable the button using javascript in the OnClick event you may lose the form submit. So I would suggest you hide the button using javascript by placing an image above it or by moving the button out of the visible range. That should allow the form submit to proceed normally. A: There are three ways to submit a form that should be covered. Use both David McLaughlin's and Jimmy's suggestions. One will disable the submit button form element while the other disables the basic HTML form submit. For the third, these won't disable Javascript from doing a form.submit().
The OnSubmit="return false" method only applies when a user clicks the submit button or presses Enter in an input form element. Client-side scripting will need to be handled as well. A: How about this? Code behind: btnContinue3.Attributes.Item("onclick") = "disableSubmit()" Javascript: function disableSubmit() { document.getElementById('btnContinue3').onclick = function() { alert('Please only click once on the submit button!'); return false; }; } This doesn't solve the problem of what happens if the postback times out, but other than that it worked for me.
{ "language": "en", "url": "https://stackoverflow.com/questions/46079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do you move a file? I'm using TortoiseSVN against the SourceForge SVN repository. I'd like to move a file from one folder to another in order to maintain its revision history. Is this possible? If so, how do you do it? (My current strategy has been to copy the file into the new folder and check it in and then delete the file from the current folder.) A: Subversion has native support for moving files. svn move SOURCE DESTINATION See the online help (svn help move) for more information. A: With TortoiseSVN I just move the file on disk. When I come to commit my changes I select the missing file and the new one and select "Repair move" from the right click menu: This means I can let my IDE move files around and use its refactoring tools without losing history. A: Cut the file via the operating system context menu as you usually do, then instead of doing a regular paste, right-click to bring up the context menu, then choose TortoiseSVN -> Paste (make sure you commit from root to include both old and new files in the commit). A: I think in the SVN browser in TortoiseSVN you can just drag it from one place to another. A: In TortoiseSVN right-click somewhere and go TortoiseSVN > Repo Browser to open the repository. All you then have to do is drag and drop the file from one folder to where you want it. It'll ask you to add a commit message and it defaults it to "Moved file/folder remotely" A: Check out section 5.14.2. Moving files and folders (or check out "move" in the Index of the help) of the TortoiseSVN help. You do a move via right-dragging. It also mentions that you need to commit from the parent folder to make it "one" revision. This works for doing the change in a working copy. (Note that the SVN items in the following image will only show up if the destination folder has already been added to the repository.) You can also do the move via the Repo Browser (section 5.23. The Repository Browser of the help). A: Using TortoiseSVN I just right-click and drag the folder from one location to another. When you release the right-click you'll have the option to "SVN Move Version File." However, I believe that SVN doesn't do anything "fancy" there, but simply deletes the file in the previous location and Adds it to the new location. A: Since you're using Tortoise you may want to check out this link on LosTechies. It should be almost exactly what you are looking for. http://www.lostechies.com/blogs/joshua_lockwood/archive/2007/09/12/subversion-tip-of-the-day-moving-files.aspx A: For TortoiseSVN client: * *Select the files you wish to move, *Right click and drag the files to the folder you wish to move them to, *A window will popup after you release the drag and drop on a folder. Click "SVN Move versioned item(s) here", *After you click the above the commit window message box will appear. Enter a message and submit. Now you are done. A: If I'm not wrong, starting from version 1.5 SVN can track moved files/folders. In TortoiseSVN you can move a file via drag & drop. A: It may also be called "rename" by Tortoise, but svn move is the command in the barebones svn client. A: Transferring a file using TortoiseSVN: Step 1: Select the files you want to move, then right-click and drag them to the destination folder. A window will pop up after you release the drag; follow the instructions it gives. Step 2: After you click the move option, commit the change. A: If you are moving folders via Repository Browser, then there is no Move option on right-click; the only way is to drag and drop.
{ "language": "en", "url": "https://stackoverflow.com/questions/46080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "222" }
Q: Remote debugging an NT application from XP with Visual Studio 6.0 Is it possible to remote-debug a Visual C++ 6.0 application running on a Windows NT machine from a developer workstation running Windows XP? If so, is there a procedure written up somewhere? A: Take a look at this article. Also this may be helpful although you don't mention which version of the IDE you're using. A: Yes -- you can also use a newer version of Visual Studio. As long as you have the PDB file for the target application it doesn't matter what version it was built with (well, VS6 might not understand a newer PDB, but backwards should be fine). The remote debugging experience on newer VS versions is a lot smoother than old versions in my experience. It is also easier to set up if you can arrange things so that you are attaching to an existing process that you have started manually rather than kicking off the process (avoid a lot of the path setup).
{ "language": "en", "url": "https://stackoverflow.com/questions/46084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What are the names given to these 2 LINQ expressions I'm trying to find the correct names for these 2 "types" of coding expressions in LINQ so that I can refer to them correctly. I want to say that the first is called "Fluent Style"? var selectVar = arrayVar.Select( (a,i) => new { Line = a }); var selectVar = from s in arrayVar select new { Line = s }; A: * *First - calling an extension method. This style of coding is called "fluent interface" as you mentioned. *The second method is called a language-integrated query A: The first isn't even really LINQ, it's a lambda expression, with an anonymous type created. (a) => new { blah = a } The second is a LINQ query filling an on-the-fly class that has a property Line. There is no hashrocket operator in this one, so this one is just plain old linq. A: The name of the second form is "query comprehension syntax", which the compiler translates into the first form.
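To make that last point concrete, here is a small self-contained sketch (not from the question itself) showing that the compiler turns the query-comprehension form into the same Select call, so both spellings yield identical sequences:

using System;
using System.Linq;

class LinqSyntaxDemo
{
    static void Main()
    {
        string[] arrayVar = { "first", "second" };

        // Fluent / extension-method (lambda) style
        var fluent = arrayVar.Select(a => new { Line = a });

        // Query-comprehension style; compiled into the Select call above
        var query = from s in arrayVar select new { Line = s };

        Console.WriteLine(fluent.SequenceEqual(query)); // prints True
    }
}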
{ "language": "en", "url": "https://stackoverflow.com/questions/46096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a list of browser conditionals for use when including stylesheets? I've seen people doing things like this in their HTML: <!--[if IE]> <link rel="stylesheet" href="ie.css" type="text/css" /> <![endif]--> Does this work across all modern browsers and is there a list of browser types that will work with that kind of if statement? Edit: Thanks Ross. Interesting to find out about gt, lt, gte, & lte. A: If you can use Javascript, there are several options: navigator.appName navigator.appVersion link Or something more robust by using a library such as jQuery. Finally, you could use the BrowserDetect object from QuirksMode. Once you have the browser name and version, you can then insert HTML to link to a style sheet or include other tags. A: Conditional comments are purely for IE (version 5 and later). The official Microsoft documentation is here. If you are going to use them the best strategy is to conditionally include external stylesheets or javascript files after your normal includes. This means that on IE your IE-specific code will override everything else. On any other browser the code will be treated as comments and ignored by the parser. A: This works across all browsers because anything except IE sees <!--IGNORED COMMENT-->. Only IE reads the comment if it contains a conditional clause. Have a look at this article. You can also specify which version of IE. For example: <!--[if IE 8]> <link rel="stylesheet" type="text/css" href="ie8.css" /> <![endif]--> A: Further to Ross' answer, you can only target the Internet Explorer rendering engine with conditional comments; there is no similar construct for other browsers. For example, you can't write conditional comments that target Firefox, but are ignored by Internet Explorer. The way I achieve the same effect as your example above is to sniff the user agent string. I then deliver a suitable CSS file for that browser. This isn't perfect because sometimes people change their user-agent string for compatibility. The other way to target different browsers is to utilise browser-specific hacks. These are particularly nasty because they usually rely on bugs in the browser and bugs are liable to be fixed! User-agent sniffing is the best all-round solution in my opinion.
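For reference, since no answer shows them: the version operators mentioned in the question's edit (gt, lt, gte, lte) look like this. A small illustrative example; the stylesheet names are placeholders:

<!--[if lt IE 7]>
  <link rel="stylesheet" href="ie6-and-below.css" type="text/css" />
<![endif]-->
<!--[if gte IE 8]>
  <link rel="stylesheet" href="ie8-and-up.css" type="text/css" />
<![endif]-->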
{ "language": "en", "url": "https://stackoverflow.com/questions/46124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Can XPath match on parts of an element's name? I want to do this: //*fu which returns all nodes whose name ends in fu, such as <tarfu /> and <snafu />, but not <fubar /> A: This answer is for XPath 1.0 where there is no equivalent of the XPath 2.0 standard ends-with() function. The following XPath 1.0 expression selects all elements in the XML document whose names end with the string "fu": //*[substring(name(),string-length(name())-1) = 'fu'] A: Do something like: //*[ends-with(name(), 'fu')] For a good XPath reference, check out W3Schools. A: I struggled with Dimitre Novatchev's answer; it wouldn't return matches for me. The XPath has to account for the fact that "fu" has length 2, so it's clearer to use string-length('fu') to determine how much to take with substring. For those who aren't able to get results with his answer and require a solution for XPath 1.0: //*[substring(name(), string-length(name()) - string-length('fu') +1) = 'fu'] This finds elements ending with "fu", or //*[substring(name(), string-length(name()) - string-length('Position') +1) = 'Position'] This finds elements ending with "Position"
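For what it's worth, here is a small hypothetical sketch of using the XPath 1.0 expression above from .NET with System.Xml; the input file name doc.xml is just an assumption:

using System;
using System.Xml;

class XPathSuffixDemo
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.Load("doc.xml"); // hypothetical input document

        // Select every element whose name ends in "fu" (e.g. <tarfu/>, <snafu/>)
        XmlNodeList hits = doc.SelectNodes(
            "//*[substring(name(), string-length(name()) - 1) = 'fu']");

        foreach (XmlNode node in hits)
            Console.WriteLine(node.Name);
    }
}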
{ "language": "en", "url": "https://stackoverflow.com/questions/46125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: How do I group in memory lists? I have a list of Foo. Foo has properties Bar and Lum. Some Foos have identical values for Bar. How can I use lambda/linq to group my Foos by Bar so I can iterate over each grouping's Lums? A: var q = from x in list group x by x.Bar into g select g; foreach (var group in q) { Console.WriteLine("Group " + group.Key); foreach (var item in group) { Console.WriteLine(item.Bar); } } A: Deeno, Enjoy: var foos = new List<Foo> { new Foo{Bar = 1,Lum = 1}, new Foo{Bar = 1,Lum = 2}, new Foo{Bar = 2,Lum = 3}, }; // Using language integrated queries: var q = from foo in foos group foo by foo.Bar into groupedFoos let lums = from fooGroup in groupedFoos select fooGroup.Lum select new { Bar = groupedFoos.Key, Lums = lums }; // Using lambdas var q = foos.GroupBy(x => x.Bar). Select(y => new {Bar = y.Key, Lums = y.Select(z => z.Lum)}); foreach (var group in q) { Console.WriteLine("Lums for Bar#" + group.Bar); foreach (var lum in group.Lums) { Console.WriteLine(lum); } } To learn more about LINQ read 101 LINQ Samples
{ "language": "en", "url": "https://stackoverflow.com/questions/46130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: File format for generating dynamic reports in applications We generate dynamic reports in all of our business web applications written for .Net and J2EE. On the server side we use ActiveReports.Net and JasperReports to generate the reports. We then export them to PDF to send down to the browser. Our clients all use Adobe Reader. We have endless problems with the different versions of Adobe Reader and how they are set up on the client. What file format/readers are others using for their dynamic reports? We need something that allows for precise layout as many of the reports are forms that are printed with data from our systems. HTML is not expressive enough. A: I've used SQL Reporting Services for this purpose. You can design a report template in Visual Studio or generate the XML for the report on the fly in code. You can then have SSRS export the report to about 10 different formats and send it to the client, including PDF, Excel, HTML, etc. You can also write your own plugin to export to your own format. Crystal Reports has a similar product that's more expensive but has a better report designer. A: I've always had the most success using PDFs to accomplish this. I can't think of a more universally acceptable format that does what you are trying to do. Rather than looking for another format, perhaps it would be better to try to understand how to overcome the problems that you are experiencing with Acrobat on the client side. Can you provide some more information on the types of problems that you are experiencing with Acrobat? A: I know of only 3 (4) possible viewers (formats) for reporting in the browser: * *PDF *Flash *Java *(Silverlight) For all 3 there are reporting solutions. Silverlight is too new and I do not know of a solution for it. You can test how Flash and Java work in your intranet and then look for a matching reporting solution. I think PDF should cause the fewest problems if you use the newest readers. The old readers have many bad bugs.
{ "language": "en", "url": "https://stackoverflow.com/questions/46133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best approach to moving a preexisting project from Flash 7/AS2 to Flex/AS3? I have a large codebase that targeted Flash 7, with a lot of AS2 classes. I'm hoping that I'll be able to use Flex for any new projects, but a lot of new stuff in our roadmap is additions to the old code. The syntax for AS2 and AS3 is generally the same, so I'm starting to wonder how hard it would be to port the current codebase to Flex/AS3. I know all the UI-related stuff would be iffy (currently the UI is generated at runtime with a lot of createEmptyMovieClip() and attachMovie() stuff), but the UI and controller/model stuff is mostly separated. Has anyone tried porting a large codebase of AS2 code to AS3? How difficult is it? What kinds of pitfalls did you run into? Any recommendations for approaches to doing this kind of project? A: Some notable problems I saw when attempting to convert a large number of AS2 classes to AS3: Package naming: class your.package.YourClass { } becomes package your.package { class YourClass { } } Imports are required: You must explicitly import any outside classes used -- referring to them by their fully qualified name is no longer enough. Interface methods can't be labelled 'public': This makes total sense, but AS2 will let you do it, so if you have any they'll need to be removed. Explicit 'override' keyword: Any functions that override a parent class function must be declared with the override keyword, much like C#. Along the same lines, if you have interfaces that extend other interfaces and redeclare functions, those overrides must be removed (again, as with public, this notation didn't make sense anyway but AS2 let you do it). All the Flash builtin stuff changed: You alluded to this above, but it's now flash.display.MovieClip instead of just MovieClip, for example. There are a lot of specifics in this category, and I didn't get far enough to find them all, but there's going to be a lot of annoyance here. Conclusion: I didn't get to work on this conversion to the point of success, but I was able in a matter of hours to write a quick C# tool that handled every aspect of this except the override keyword. Automating the imports can be tricky -- in my case the packages we use all start with a few root-level packages so they're easy to detect. A: First off, I hope you're not using eval() in your projects, since there is no equivalent in AS3. One of the things I would do is go through Adobe's migration guide (which is basically just an itemized list of what has changed) item by item and try to figure out if each item can be changed via a simple search and replace operation (possibly using a regex) or whether it's easier to just manually edit the occurrences to correspond to AS3. Probably in a lot of cases (especially if, as you said, the amount of code to be migrated is quite high) you'll be best off scripting the changes (i.e. using regex search & replace) and manually fixing any border cases where the automated changes have failed. Be prepared to set some time aside for a bit of debugging and running through some test cases as well. Also, as others have already mentioned, trying to combine AS2 SWFs with AS3 SWFs is not a good idea and doesn't really even work, so you'll definitely have to migrate all of the code in one project at once.
A: Here are some additional references for moving from AS2 to AS3: Grant Skinner's Introductory AS3 Workshop slide deck http://gskinner.com/talks/as3workshop/ Lee Brimelow : 6 Reasons to learn ActionScript 3 http://www.adobe.com/devnet/actionscript/articles/six_reasons_as3.html Colin Moock : Essential ActionScript 3 (considered the "bible" for ActionScript developers): http://www.amazon.com/Essential-ActionScript-3-0/dp/0596526946 mike chambers [email protected] A: My experience has been that the best way to migrate to AS3 is in two phases - first structurally, and second syntactically. First, do rounds of refactoring where you stay in AS2, but get as close to AS3 architecture as you can. Naturally this includes moving all your frame scripts and #include scripts into packages and classes, but you can do more subtle things like changing all your event listeners and dispatchers to follow the AS3 flow (using static class properties for event types, and registering by method rather than by object). You'll also want to get rid of all your "built-in" events (such as onEnterFrame), and you'll want to take a close look at nontrivial mouse interaction (such as dragging) and keyboard interaction (such as detecting whether a key is pressed). This phase can be done incrementally. The second phase is to convert from AS2 to AS3 - changing "_x" to "x", and all the APIs, and so on. This can't be done incrementally, you have to just do as much as you can in one fell swoop and then start fixing all the compile errors. For this reason, the more you can do in the first phase, the more pain you avoid in the second phase. This process has worked for me on a reasonably large project, but I should note that the first phase requires a solid understanding of how AS3 is structured. If you're new to AS3, then you'll probably need to try building some of the functionality you'll need to be porting. For example, if your legacy code uses dragging and drop targets, you'll want to try implementing that in AS3 to understand how your code will have to change structurally. If you then refactor your AS2 with that in mind, the final syntax changes should go smoothly. The biggest pitfalls for me were the parts that involved a lot of attaching, duplicating and moving MovieClips, changing their depths, and so on. All that stuff can't really be rearchitected to look like AS3; you have to just mash it all into the newer way of thinking and then start fixing the bugs. One final note - I really wouldn't worry about stuff like import and override statements, at least not to the point of automating it. If you miss any, it will be caught by the compiler. But if you miss structural problems, you'll have a lot more pain.
Mxml just provides a quick way to build the UI (etc) but if you want to do it yourself with actionscript there's nothing stopping you (this would also probably be easier if you already have that UI in as2 code). Flex (Builder) is just a quick way to do stuff you may not want to do yourself, such as building the UI and binding data but essentially it's just creating a part of the .swf for you -- there's no magic to it ;)
{ "language": "en", "url": "https://stackoverflow.com/questions/46136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What are the Java regular expressions for matching IPv4 and IPv6 strings? Looking for a string to pass to String#matches(String) that will match IPv4, and another to match IPv6. A: Another good option for processing IPs is to use Java's classes Inet4Address and Inet6Address, which can be useful in a number of ways, one of which is to determine the validity of the IP address. I know this doesn't answer the question directly, but just thought it's worth mentioning. A: Here's a regex to match IPv4 addresses: \b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b You'll need to escape the backslashes when you specify it as a string literal in Java: "\\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\b" A: public static final String IPV4_REGEX = "\\A(25[0-5]|2[0-4]\\d|[0-1]?\\d?\\d)(\\.(25[0-5]|2[0-4]\\d|[0-1]?\\d?\\d)){3}\\z"; public static final String IPV6_HEX4DECCOMPRESSED_REGEX = "\\A((?:[0-9A-Fa-f]{1,4}(?::[0-9A-Fa-f]{1,4})*)?)::((?:[0-9A-Fa-f]{1,4}:)*)(25[0-5]|2[0-4]\\d|[0-1]?\\d?\\d)(\\.(25[0-5]|2[0-4]\\d|[0-1]?\\d?\\d)){3}\\z"; public static final String IPV6_6HEX4DEC_REGEX = "\\A((?:[0-9A-Fa-f]{1,4}:){6,6})(25[0-5]|2[0-4]\\d|[0-1]?\\d?\\d)(\\.(25[0-5]|2[0-4]\\d|[0-1]?\\d?\\d)){3}\\z"; public static final String IPV6_HEXCOMPRESSED_REGEX = "\\A((?:[0-9A-Fa-f]{1,4}(?::[0-9A-Fa-f]{1,4})*)?)::((?:[0-9A-Fa-f]{1,4}(?::[0-9A-Fa-f]{1,4})*)?)\\z"; public static final String IPV6_REGEX = "\\A(?:[0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}\\z"; Got these from some blog. Someone good w/ regexes should be able to come up with a single regex for all IPv6 address types. Actually, I guess you could have a single regex that matches both IPv4 and IPv6. A: package com.capgemini.basics; import java.util.*; import java.util.regex.Matcher; import java.util.regex.Pattern; import java.util.regex.PatternSyntaxException; public class Main { private static Pattern VALID_IPV4_PATTERN = null; private static Pattern VALID_IPV6_PATTERN1 = null; private static Pattern VALID_IPV6_PATTERN2 = null; private static final String ipv4Pattern = "(([01]?\\d\\d?|2[0-4]\\d|25[0-5])\\.){3}([01]?\\d\\d?|2[0-4]\\d|25[0-5])"; private static final String ipv6Pattern1 = "([0-9a-f]{1,4}:){7}([0-9a-f]){1,4}"; private static final String ipv6Pattern2 = "^((?:[0-9A-Fa-f]{1,4}(?::[0-9A-Fa-f]{1,4})*)?)::((?:[0-9A-Fa-f]{1,4}(?::[0-9A-Fa-f]{1,4})*)?)$"; static { try { VALID_IPV4_PATTERN = Pattern.compile(ipv4Pattern, Pattern.CASE_INSENSITIVE); VALID_IPV6_PATTERN1 = Pattern.compile(ipv6Pattern1, Pattern.CASE_INSENSITIVE); VALID_IPV6_PATTERN2 = Pattern.compile(ipv6Pattern2, Pattern.CASE_INSENSITIVE); } catch (PatternSyntaxException e) { System.out.println("Neither"); } } public static List<String> validateAddresses(List<String> ipAddress) { final List<String> validity= new ArrayList<String>(); int len = ipAddress.size(); for(int i=0; i<len; i++){ Matcher m1 = Main.VALID_IPV4_PATTERN.matcher(ipAddress.get(i)); Matcher m12 = Main.VALID_IPV6_PATTERN1.matcher(ipAddress.get(i)); Matcher m22 = Main.VALID_IPV6_PATTERN2.matcher(ipAddress.get(i)); if (m1.matches()) { validity.add("IPv4"); } else if(m12.matches() || m22.matches()){ validity.add("IPv6"); } else{ validity.add("Neither"); } } return validity; } public static void main(String[] args) { final List<String> IPAddress = new ArrayList<String>(); //Test Case 0 /*IPAddress.add("121.18.19.20"); IPAddress.add("0.12.12.34"); IPAddress.add("121.234.12.12"); IPAddress.add("23.45.12.56"); IPAddress.add("0.1.2.3");*/ //Test Case 1 /*IPAddress.add("2001:0db8:0000:0000:0000:ff00:0042:8329"); IPAddress.add("2001:0db8:0:0:0:ff00:42:8329"); IPAddress.add("::1"); IPAddress.add("2001:0db8::ff00:42:8329"); IPAddress.add("0000:0000:0000:0000:0000:0000:0000:0001");*/ //Test Case 2 /*IPAddress.add("000.012.234.23"); IPAddress.add("666.666.23.23"); IPAddress.add(".213.123.23.32"); IPAddress.add("23.45.22.32."); IPAddress.add("272:2624:235e:3bc2:c46d:682:5d46:638g"); IPAddress.add("1:22:333:4444");*/ final List<String> result = validateAddresses(IPAddress); for (int i=0; i<result.size(); i++) System.out.println(result.get(i)+" "); } } A: The regex allows the use of leading zeros in the IPv4 parts. Some Unix and Mac distros convert those segments into octals. I suggest using 25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d as an IPv4 segment. A: Regexes for ipv6 can get really tricky when you consider addresses with embedded ipv4 and addresses that are compressed. The open-source IPAddress Java library will validate all standard representations of IPv6 and IPv4 and also supports prefix-length (and validation of such). Disclaimer: I am the project manager of that library. Code example: try { IPAddressString str = new IPAddressString("::1"); IPAddress addr = str.toAddress(); } catch(AddressStringException e) { //e.getMessage has validation error }
{ "language": "en", "url": "https://stackoverflow.com/questions/46146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Why are all links red in Chrome and Safari? I have just started using Google Chrome, and noticed that in parts of our site all the links on the page are bright red. They should be black with a dotted underline. Is there some gotcha in WebKit rendering that turns all links red regardless of the style? A: Are all of the resources that you're linking to present at the locations where your page is seeking them? (Verify this by actually checking.) I've also had an issue when checking an app in Safari where I was attempting to pull a file that wasn't there and I had very similar output to yours (red links). EDIT: Adding Developingchris's find to the answer, since it explains it so well: k, found it. If any of your stylesheets is missing or pathed incorrectly, it throws a 404. If your 404 page has inline styles, they get respected via the "alternate sheets" rule in webkit. Thus, the red links on the "yellow screen of death" are causing my problem in overlap. A: k, found it. If any of your stylesheets is missing or pathed incorrectly, it throws a 404. If your 404 page has inline styles, they get respected via the "alternate sheets" rule in webkit. Thus, the red links on the "yellow screen of death" are causing my problem in overlap. A: Chrome has a bug where it obeys alternate stylesheets. Do you have an alternate stylesheet that makes links red? A: That explains the problem I had with my app--it was a Rails app, which also has 404 pages with red applied to some of the styles. Makes a whole lot more sense now than it did back then. Too bad you can't accept your own answer! A: You can use a JavaScript console like Firebug to find out where the color comes from A: Have you set a :visited setting in your stylesheet? A: Run your CSS file and your HTML file through the w3c validators. I had a similar problem when testing an application in Safari. The problem was in my code.
{ "language": "en", "url": "https://stackoverflow.com/questions/46147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Best way to manage generated code in an automated build? In my automated NAnt build we have a step that generates a lot of code off of the database (using SubSonic) and the code is separated into folders that match the schema name in the database. For example:
/generated-code
  /dbo
    SomeTable.cs
    OtherTable.cs
  /abc
    Customer.cs
    Order.cs
The schema names are there to isolate the generated classes that an app will need. For example, there is an ABC app that will pull in the generated code from this central folder. I'm doing that on a pre-build event, like this:
del /F /Q $(ProjectDir)Entities\generated\*.cs
copy $(ProjectDir)..\..\generated-code\abc\*.cs $(ProjectDir)Entities\generated\*.cs
So on every build, the NAnt script runs the generator which puts all the code into a central holding place, then it kicks off the solution build... which includes pre-build events for each of the projects that need their generated classes. So here's the friction I'm seeing: 1) Each new app needs to set up this pre-build event. It kind of sucks to have to do this. 2) In our build server we don't generate code, so I actually have an IF $(ConfigurationName) == "Debug" before each of those commands, so it doesn't happen for release builds 3) Sometimes the commands fail, which fails our local build. It will fail if: - there is no generated code yet (just setting up a new project, no database yet) - there is no existing code in the directory (first build) usually these are minor fixes and we've just hacked our way to getting a new project or a new machine up and running with the build, but it's preventing me from my 1-click-build Nirvana. So I'd like to hear suggestions on how to improve this so it's a bit more durable. Maybe move the copying of the code into the application folders into the NAnt script? This seems kind of backwards to me, but I'm willing to listen to arguments for it. OK, fire away :) A: How often does your DB schema change? Wouldn't it be possible to generate the database-related files on demand (e.g. when the schema changes) and then check them into your code repository? If your database schema doesn't change, you can also package the compiled *.cs classes and distribute the archive to other projects. A: We have two projects in our solution that are built completely out of generated code. Basically, we run the code generator .exe as a post-build step for another project and along with generating the code, it automates the active instance of visual studio to make sure that the generated project is in the solution, that it has all of the generated code files, and that they are checked out/added to TFS as necessary. It very rarely flakes out during the VS automation stage, and we have to run it "by hand" but that's usually only if you have several instances of VS open with >1 instance of the solution open and it can't figure out which one it's supposed to automate. Our solution and process are such that the generation should always be done and correct before our auto-build gets to it, so this approach might not work for you. A: Yeah I'd like to take VS out of the equation so that a build from VS is just simply compiling the code and references. I can manage the NAnt script... I'm just wondering if people have advice around having 1 NAnt script, or possibly one for each project which can push the code into the projects rather than being pulled. This does mean that you have to opt-in to generate code.
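To sketch the "push from the NAnt script" idea (illustrative only; the target and property names are assumptions, not from the original build): one copy target per app after generation could replace the per-project pre-build events, and failonerror="false" tolerates the not-yet-generated cases that currently break local builds:

<target name="push-generated-abc" depends="generate-code">
  <!-- Push the ABC app's generated classes into its project folder -->
  <copy todir="${abc.project.dir}/Entities/generated" overwrite="true" failonerror="false">
    <fileset basedir="${generated.code.dir}/abc">
      <include name="*.cs" />
    </fileset>
  </copy>
</target>

This also removes the Debug/Release special-casing, since the build server simply wouldn't invoke these push targets.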
{ "language": "en", "url": "https://stackoverflow.com/questions/46149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I validate an email address in JavaScript? I'd like to check if the user input is an email address in JavaScript, before sending it to a server or attempting to send an email to it, to prevent the most basic mistyping. How could I achieve this? A: JavaScript can match a regular expression: emailAddress.match( / some_regex /); Here's an RFC 2822 regular expression for emails: ^((?>[a-zA-Z\d!#$%&'*+\-/=?^_`{|}~]+\x20*|"((?=[\x01-\x7f])[^"\\]|\\[\x01-\x7f])*"\x20*)*(?<angle><))?((?!\.)(?>\.?[a-zA-Z\d!#$%&'*+\-/=?^_`{|}~]+)+|"((?=[\x01-\x7f])[^"\\]|\\[\x01-\x7f])*")@(((?!-)[a-zA-Z\d\-]+(?<!-)\.)+[a-zA-Z]{2,}|\[(((?(?<!\[)\.)(25[0-5]|2[0-4]\d|[01]?\d?\d)){4}|[a-zA-Z\d\-]*[a-zA-Z\d]:((?=[\x01-\x7f])[^\\\[\]]|\\[\x01-\x7f])+)\])(?(angle)>)$ A: Most of the answers here are not linter-friendly; it's a mess! Some of them are also outdated! After spending a lot of time on this, I decided to use an external library named email-validator, install it easily via npm, for example, and import/require it in your own project: https://www.npmjs.com/package/email-validator //NodeJs const validator = require("email-validator"); validator.validate("[email protected]"); // true //TypeScript/JavaScript import * as EmailValidator from 'email-validator'; EmailValidator.validate("[email protected]"); // true A: Just for completeness, here you have another RFC 2822 compliant regex The official standard is known as RFC 2822. It describes the syntax that valid email addresses must adhere to. You can (but you shouldn't — read on) implement it with this regular expression: (?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\]) (...) We get a more practical implementation of RFC 2822 if we omit the syntax using double quotes and square brackets. It will still match 99.99% of all email addresses in actual use today. [a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])? A further change you could make is to allow any two-letter country code top level domain, and only specific generic top level domains. This regex filters dummy email addresses like [email protected]. You will need to update it as new top-level domains are added. [a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+(?:[A-Z]{2}|com|org|net|gov|mil|biz|info|mobi|name|aero|jobs|museum)\b So even when following official standards, there are still trade-offs to be made. Don't blindly copy regular expressions from online libraries or discussion forums. Always test them on your own data and with your own applications. Emphasis mine A: All email addresses contain an 'at' (i.e. @) symbol. Test that necessary condition: email.includes('@') Or, if you need to support IE/older browsers: email.indexOf('@') > 0 Don't bother with anything more complicated. Even if you could perfectly determine whether an email is RFC-syntactically valid, that wouldn't tell you whether it belongs to the person who supplied it. That's what really matters. To test that, send a validation message.
A: Correct validation of email address in compliance with the RFCs is not something that can be achieved with a one-liner regular expression. An article with the best solution I've found in PHP is What is a valid email address?. Obviously, it has been ported to Java. I think the function is too complex to be ported and used in JavaScript. JavaScript/node.js port: https://www.npmjs.com/package/email-addresses. A good practice is to validate your data on the client, but double-check the validation on the server. With this in mind, you can simply check whether a string looks like a valid email address on the client and perform the strict check on the server. Here's the JavaScript function I use to check if a string looks like a valid mail address: function looksLikeMail(str) { var lastAtPos = str.lastIndexOf('@'); var lastDotPos = str.lastIndexOf('.'); return (lastAtPos < lastDotPos && lastAtPos > 0 && str.indexOf('@@') == -1 && lastDotPos > 2 && (str.length - lastDotPos) > 2); } Explanation: * *lastAtPos < lastDotPos: Last @ should be before last . since @ cannot be part of server name (as far as I know). *lastAtPos > 0: There should be something (the email username) before the last @. *str.indexOf('@@') == -1: There should be no @@ in the address. Even if @ appears as the last character in email username, it has to be quoted so " would be between that @ and the last @ in the address. *lastDotPos > 2: There should be at least three characters before the last dot, for example [email protected]. *(str.length - lastDotPos) > 2: There should be enough characters after the last dot to form a two-character domain. I'm not sure if the brackets are necessary. A: This is the correct RFC822 version. function checkEmail(emailAddress) { var sQtext = '[^\\x0d\\x22\\x5c\\x80-\\xff]'; var sDtext = '[^\\x0d\\x5b-\\x5d\\x80-\\xff]'; var sAtom = '[^\\x00-\\x20\\x22\\x28\\x29\\x2c\\x2e\\x3a-\\x3c\\x3e\\x40\\x5b-\\x5d\\x7f-\\xff]+'; var sQuotedPair = '\\x5c[\\x00-\\x7f]'; var sDomainLiteral = '\\x5b(' + sDtext + '|' + sQuotedPair + ')*\\x5d'; var sQuotedString = '\\x22(' + sQtext + '|' + sQuotedPair + ')*\\x22'; var sDomain_ref = sAtom; var sSubDomain = '(' + sDomain_ref + '|' + sDomainLiteral + ')'; var sWord = '(' + sAtom + '|' + sQuotedString + ')'; var sDomain = sSubDomain + '(\\x2e' + sSubDomain + ')*'; var sLocalPart = sWord + '(\\x2e' + sWord + ')*'; var sAddrSpec = sLocalPart + '\\x40' + sDomain; // complete RFC822 email address spec var sValidEmail = '^' + sAddrSpec + '$'; // as whole string var reValidEmail = new RegExp(sValidEmail); return reValidEmail.test(emailAddress); } A: This was stolen from http://codesnippets.joyent.com/posts/show/1917 email = $('email'); filter = /^([a-zA-Z0-9_\.\-])+\@(([a-zA-Z0-9\-])+\.)+([a-zA-Z0-9]{2,4})+$/; if (filter.test(email.value)) { // Yay! valid return true; } else {return false;} A: Do this: ^([a-zA-Z0-9!#$%&'*+\/=?^_`{|}~-]+(?:\.[a-zA-Z0-9!#$%&'*+\/=?^_`{|}~-]+)*@(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]*[a-zA-Z0-9])?\.)+[a-zA-Z0-9](?:[a-zA-Z0-9-]*[a-zA-Z0-9])?)$ It's based on RFC 2822 Test it at https://regex101.com/r/857lzc/1 Often when storing email addresses in the database I make them lowercase and, in practice, regexs can usually be marked case insensitive. In those cases this is slightly shorter: [a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])? Here's an example of it being used in JavaScript (with the case insensitive flag i at the end). 
var emailCheck=/^[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$/i; console.log( emailCheck.test('[email protected]') ); Note: Technically some emails can include quotes in the section before the @ symbol with escape characters inside the quotes (so your email user can be obnoxious and contain stuff like @ and "..." as long as it's written in quotes). NOBODY DOES THIS EVER! It's obsolete. But, it IS included in the true RFC 2822 standard and omitted here. Note 2: The beginning of an email (before the @ sign) can be case sensitive (via the spec). However, anyone with a case-sensitive email is probably used to having issues, and, in practice, case insensitive is a safe assumption. More info: Are email addresses case sensitive? More info: http://www.regular-expressions.info/email.html A: The regular expression provided by Microsoft within ASP.NET MVC is /^[\w-]+(\.[\w-]+)*@([a-z0-9-]+(\.[a-z0-9-]+)*?\.[a-z]{2,6}|(\d{1,3}\.){3}\d{1,3})(:\d{4})?$/ Which I post here in case it's flawed - though it's always been perfect for my needs. A: I prefer to keep it simple and keep my users happy. I also prefer code which is easy to understand. RegEx is not. function isValidEmail(value) { const atLocation = value.lastIndexOf("@"); const dotLocation = value.lastIndexOf("."); return ( atLocation > 0 && dotLocation > atLocation + 1 && dotLocation < value.length - 1 ); }; * *Get the location of the last "@" and the last "." *Make sure the "@" is not the first char (there is something before it) *Make sure the "." is after the "@" and that there is at least one char between them *Make sure there is at least a single char after the "." Will this allow invalid email addresses to pass? Sure, but I don't think you need much more for a good user experience that allows you to enable/disable a button, display an error message, etc. You only know for sure that an email address is valid when you attempt to send an email to that address. A: Using regular expressions is probably the best way. You can see a bunch of tests here (taken from chromium) const validateEmail = (email) => { return String(email) .toLowerCase() .match( /^(([^<>()[\]\\.,;:\s@"]+(\.[^<>()[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/ ); }; Here's the example of a regular expression that accepts unicode: const re = /^(([^<>()[\]\.,;:\s@\"]+(\.[^<>()[\]\.,;:\s@\"]+)*)|(\".+\"))@(([^<>()[\]\.,;:\s@\"]+\.)+[^<>()[\]\.,;:\s@\"]{2,})$/i; But keep in mind that one should not rely only upon JavaScript validation. JavaScript can easily be disabled. This should be validated on the server side as well. 
Here's an example of the above in action: const validateEmail = (email) => { return email.match( /^(([^<>()[\]\\.,;:\s@\"]+(\.[^<>()[\]\\.,;:\s@\"]+)*)|(\".+\"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/ ); }; const validate = () => { const $result = $('#result'); const email = $('#email').val(); $result.text(''); if (validateEmail(email)) { $result.text(email + ' is valid :)'); $result.css('color', 'green'); } else { $result.text(email + ' is not valid :('); $result.css('color', 'red'); } return false; } $('#email').on('input', validate); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <label for="email">Enter an email address: </label> <input id="email" /> <h2 id="result"></h2> A: Sectrean's solution works great, but it was failing my linter. So I added some escapes: function validateEmail(email){ var re = /^(([^<>()\[\]\\.,;:\s@\"]+(\.[^<>()\[\]\\.,;:\s@\"]+)*)|(\".+\"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/; return re.test(email); } A: The best practice is to either use the HTML5 built-in email input type: <input type="email" name="email"> or to recognize the common email syntax (an @ and a .) in the string, as given below: ^[a-zA-Z0-9_\-.]+@[a-zA-Z0-9\-]+\.[a-zA-Z0-9\-.]+$ Note that invalid emails can still match this regex; it's almost impossible to catch them all, but this will improve the situation a little. A: This is a JavaScript translation of the validation suggested by the official Rails guide used by thousands of websites: /^([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})$/i Relatively simple but tests against most common errors. Tested on a dataset of thousands of emails and it had zero false negatives/positives. Example usage: const emailRegex = /^([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})$/i; emailRegex.test('[email protected]'); // true // Multi-word domains emailRegex.test('[email protected]'); // true emailRegex.test('[email protected]'); // true // Valid special characters emailRegex.test('unusual+but+valid+email1900=/!#$%&\'*+-/=?^_`.{|}[email protected]') // true // Trailing dots emailRegex.test('[email protected].'); // false // No domain emailRegex.test('email@example'); // false // Leading space emailRegex.test(' [email protected]'); // false // Trailing space emailRegex.test('[email protected] '); // false // Incorrect domains emailRegex.test('email@example,com '); // false // Other invalid emails emailRegex.test('invalid.email.com') // false emailRegex.test('invalid@[email protected]') // false emailRegex.test('[email protected]') // false A: I really wanted to solve this problem, so I modified the email validation regular expression above * *Original /^(([^<>()\[\]\\.,;:\s@\"]+(\.[^<>()\[\]\\.,;:\s@\"]+)*)|(\".+\"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/ *Modified /^(([^<>()\[\]\.,;:\s@\"]+(\.[^<>()\[\]\.,;:\s@\"]+)*)|(\".+\"))@(([^<>()\.,;\s@\"]+\.{0,1})+[^<>()\.,;:\s@\"]{2,})$/ to pass the examples in Wikipedia Email Address. And you can see the result here. A: Wow, there is a lot of complexity here. If all you want to do is just catch the most obvious syntax errors, I would do something like this: ^\S+@\S+$ It usually catches the most obvious errors that the user makes and assures that the form is mostly right, which is what JavaScript validation is all about. EDIT: We can also check for '.'
in the email using /^\S+@\S+\.\S+$/ A: Here is a function I use for front-end email validation. (The Regular Expression came from parsley.js) <!DOCTYPE html> <html> <head> <title>Our Company</title> <style> .form-style { color: #ccc; } </style> </head> <body> <h1>Email Validation Form Example</h1> <input type="text" name="email" id="emailInput" class="form-style"> <script> function validateEmail(emailAddress) { var regularExpression = /^((([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+(\.([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+)*)|((\x22)((((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(([\x01-\x08\x0b\x0c\x0e-\x1f\x7f]|\x21|[\x23-\x5b]|[\x5d-\x7e]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(\\([\x01-\x09\x0b\x0c\x0d-\x7f]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))))*(((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(\x22)))@((([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.)+(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))){2,6}$/i; return regularExpression.test(emailAddress); } function showEmailValidationState(event) { if (validateEmail(event.target.value)) { document.getElementById("emailInput").style.color = 'black'; } } document.getElementById("emailInput").addEventListener("keyup", showEmailValidationState); </script> </body> </html> A: The following regex validates that: * *No special characters come before the @ *(-) and (.) are not together after the @ *No special characters come after the @ *At least 2 characters come before the @ *The email length is less than 128 characters function validateEmail(email) { var chrbeforAt = email.substr(0, email.indexOf('@')); if (!($.trim(email).length > 127)) { if (chrbeforAt.length >= 2) { var re = /^(([^<>()[\]{}'^?\\.,!|//#%*-+=&;:\s@\"]+(\.[^<>()[\]\\.,;:\s@\"]+)*)|(\".+\"))@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?/; //var re = /[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?/; return re.test(email); } else { return false; } } else { return false; } } A: Use the regular expression: /^[a-z][a-zA-Z0-9_.]*(\.[a-zA-Z][a-zA-Z0-9_.]*)?@[a-z][a-zA-Z-0-9]*\.[a-z]+(\.[a-z]+)?$/ Example: function validateEmail(email) { var re = /^[a-z][a-zA-Z0-9_.]*(\.[a-zA-Z][a-zA-Z0-9_.]*)?@[a-z][a-zA-Z-0-9]*\.[a-z]+(\.[a-z]+)?$/; return re.test(email); } It should allow only @ , . , _ A: You can also try var string = "[email protected]" var exp = /(\w(=?@)\w+\.{1}[a-zA-Z]{2,})/i alert(exp.test(string)) A: You can use this regex (from w3resource (*not related to W3C)): /^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,3})+$/.test(emailValue) If you use Node you can use this in the back-end as well as the front-end. I don't know other back-end languages so I cannot evaluate for other use cases. A: If you get the error "Using regular expressions is security-sensitive", then here is what you are looking for. This solution is free from "Regular expression Denial of Service" (ReDoS). Regex to validate emails without ReDoS: /^[a-z0-9](?!.*?[^\na-z0-9]{2})[^\s@]+@[^\s@]+\.[^\s@]+[a-z0-9]$/ Please let me know if this solution works for you. Thanks.
A: There's something you have to understand the second you decide to use a regular expression to validate emails: It's probably not a good idea. Once you have come to terms with that, there are many implementations out there that can get you halfway there; this article sums them up nicely. In short, however, the only way to be absolutely, positively sure that what the user entered is in fact an email is to actually send an email and see what happens. Other than that it's all just guesses. A: Simply check whether the entered email address is valid or not using HTML. <input type="email"/> There isn't any need to write a function for validation. A: You should not use regular expressions to validate an input string to check if it's an email. It's too complicated and would not cover all the cases. Now since you can only cover 90% of the cases, write something like: function isPossiblyValidEmail(txt) { return txt.length > 5 && txt.indexOf('@')>0; } You can refine it. For instance, 'aaa@' is valid. But overall you get the gist. And don't get carried away... A simple 90% solution is better than a 100% solution that does not work. The world needs simpler code... A: Wikipedia standard mail syntax: https://en.wikipedia.org/wiki/Email_address#Examples https://fr.wikipedia.org/wiki/Adresse_%C3%A9lectronique#Syntaxe_exacte Function: function validMail(mail) { return /^(([^<>()\[\]\.,;:\s@\"]+(\.[^<>()\[\]\.,;:\s@\"]+)*)|(\".+\"))@(([^<>()\.,;\s@\"]+\.{0,1})+([^<>()\.,;:\s@\"]{2,}|[\d\.]+))$/.test(mail); } Valid emails: validMail('[email protected]') // Return true validMail('[email protected].') // Return true validMail('[email protected]') // Return true validMail('user@localserver') // Return true validMail('[email protected]') // Return true validMail('user+mailbox/[email protected]') // Return true validMail('"very.(),:;<>[]\".VERY.\"very@\\ \"very\".unusual"@strange.example.com') // Return true validMail('!#$%&\'*+-/=?^_`.{|}[email protected]') // Return true validMail('"()<>[]:,;@\\\"!#$%&\'-/=?^_`{}| ~.a"@example.org') // Return true validMail('"Abc@def"@example.com') // Return true validMail('"Fred Bloggs"@example.com') // Return true validMail('"Joe.\\Blow"@example.com') // Return true validMail('Loïc.Accentué@voilà.fr') // Return true validMail('" "@example.org') // Return true validMail('user@[IPv6:2001:DB8::1]') // Return true Invalid emails: validMail('Abc.example.com') // Return false validMail('A@b@[email protected]') // Return false validMail('a"b(c)d,e:f;g<h>i[j\k][email protected]') // Return false validMail('just"not"[email protected]') // Return false validMail('this is"not\[email protected]') // Return false validMail('this\ still\"not\\[email protected]') // Return false validMail('[email protected]') // Return false validMail('[email protected]') // Return false See this test: https://regex101.com/r/LHJ9gU/1 A: HTML5 itself has email validation. If your browser supports HTML5 then you can use the following code. <form> <label>Email Address <input type="email" placeholder="[email protected]" required> </label> <input type="submit"> </form> jsFiddle link From the HTML5 spec: A valid e-mail address is a string that matches the email production of the following ABNF, the character set for which is Unicode. email = 1*( atext / "." ) "@" label *( "."
label ) label = let-dig [ [ ldh-str ] let-dig ] ; limited to a length of 63 characters by RFC 1034 section 3.5 atext = < as defined in RFC 5322 section 3.2.3 > let-dig = < as defined in RFC 1034 section 3.5 > ldh-str = < as defined in RFC 1034 section 3.5 > This requirement is a willful violation of RFC 5322, which defines a syntax for e-mail addresses that is simultaneously too strict (before the "@" character), too vague (after the "@" character), and too lax (allowing comments, whitespace characters, and quoted strings in manners unfamiliar to most users) to be of practical use here. The following JavaScript- and Perl-compatible regular expression is an implementation of the above definition. /^[a-zA-Z0-9.!#$%&'*+\/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$/ A: If you're using Closure you can use the built-in goog.format.EmailAddress type: http://docs.closure-library.googlecode.com/git/class_goog_format_EmailAddress.html For example: goog.format.EmailAddress.isValidAddrSpec("[email protected]") Note that by reading the source (linked above) you can see the comments state that IDN are not supported and that it only aims to cover most addresses: // This is a fairly naive implementation, but it covers 99% of use cases. // For more details, see http://en.wikipedia.org/wiki/Email_address#Syntax // TODO(mariakhomenko): we should also be handling i18n domain names as per // http://en.wikipedia.org/wiki/Internationalized_domain_name A: <pre> **The personal_info part contains the following ASCII characters. 1.Uppercase (A-Z) and lowercase (a-z) English letters. 2.Digits (0-9). 3.Characters ! # $ % & ' * + - / = ? ^ _ ` { | } ~ 4.Character . ( period, dot or fullstop) provided that it is not the first or last character and it will not come one after the other.** </pre> *Example of valid email id* <pre> [email protected] [email protected] [email protected] [email protected] [email protected] </pre> <pre> xxxx.ourearth.com [@ is not present] [email protected] [ tld (Top Level domain) can not start with dot "." ] @you.me.net [ No character before @ ] [email protected] [ ".b" is not a valid tld ] [email protected] [ tld can not start with dot "." ] [email protected] [ an email should not be start with "." ] xxxxx()*@gmail.com [ here the regular expression only allows character, digit, underscore and dash ] [email protected] [double dots are not allowed </pre> **javascript mail code** function ValidateEmail(inputText) { var mailformat = /^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,3})+$/; if(inputText.value.match(mailformat)) { document.form1.text1.focus(); return true; } else { alert("You have entered an invalid email address!"); document.form1.text1.focus(); return false; } } A: the best one :D (RFC-friendly & no error "too complex") : function isMail(mail) { pattuser = /^([A-Z0-9_%+\-!#$&'*\/=?^`{|}~]+\.?)*[A-Z0-9_%+\-!#$&'*\/=?^`{|}~]+$/i; pattdomain = /^([A-Z0-9-]+\.?)*[A-Z0-9-]+(\.[A-Z]{2,9})+$/i; tab = mail.split("@"); if (tab.length != 2) return false; return (pattuser.test(tab[0]) && pattdomain.test(tab[1])); } A: If you are using ng-pattern and material this does the job. vm.validateEmail = '([a-zA-Z0-9_.]{1,})((@[a-zA-Z]{2,})[\\\.]([a-zA-Z]{2}|[a-zA-Z]{3}))'; A: Here is the recommended Regex pattern for HTML5 on MDN: Browsers that support the email input type automatically provide validation to ensure that only text that matches the standard format for Internet e-mail addresses is entered into the input box. 
Browsers that implement the specification should be using an algorithm equivalent to the following regular expression: /^[a-zA-Z0-9.!#$%&'*+\/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$/ https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/email#Validation A: The answer to this question (if you stood back and really thought about it) is that you can't really prevent a user from mistyping their e-mail. Everybody is providing answers that match all the possible letters, but nobody has really taken into account the full myriad of characters that can be used. I refer you to this post and the answer explaining the number of characters that can be accepted:- Remove invalid characters from e-mail You can't predict whether the letters are the exact way the user intended them to be, which is the most common mistake... missing a letter out or typing the wrong letter. Ultimately, no matter what you do in Javascript, you will always need your backend script to check whether the e-mail was sent successfully, and it will probably have a validation process in place too. So if you want to ensure it is some sort of e-mail address and not their username, you only really need to check that there's an @ symbol in there and at least 1 dot, and leave all the rest to your backend code. I have provided a very basic function that demonstrates a delayed check for real-time input and an instant check. Javascript is acting as a first checkpoint to validate the basics, to save posting invalid content and annoying the user. It will always assume the bigger check is at the backend though. // Very simple, non-library dependent client-side e-mail validator var emailv = { // Timeout handler for checkDelay() to: null, // The core function that takes a string and validates it check : function(em){ // Check 1 - The split ensures there's only one @ var c1 = em.split('@').length == 2; // Check 2 - Must be at least 1 dot too var c2 = em.indexOf('.') > 0; // Check 3 - ensures there's always something after a @ or dot var c3 = !(em.slice(-1)=="@"||em.slice(-1)=="."); return (c1&&c2&&c3); // If all TRUE, great. }, // Shortcut to quickly check any text input by dom id checkById : function(inputId){ var d = document.getElementById(inputId); return d?emailv.check(d.value):false; }, // Check delay for checking on real-time inputs checkDelay: function(em){ clearTimeout(emailv.to); emailv.to = setTimeout("emailv.checkDelayP2('"+em+"')",1000); }, // Part two of Check delay checkDelayP2: function(em){ if(emailv.check(em)){ // Javascript filter says it seems okay // For the sake of this demo, pretend we are now making a background // check to see if the e-mail is taken. We tell the user to wait.. emailv.status("Wait.."); // Pretend the background check took 2 seconds // and said the e-mail was available setTimeout("emailv.status('OK')",2000); } else { // Javascript says it's bad, mmmkay?
emailv.status("BAD E-mail"); } }, status : function(s){ document.getElementById('emailstatus').innerHTML=s; } } <h2>1 of 2 - Delayed check</h2> <ul> <li><strong>Waits until the user has stopped typing</strong></li> <li>Useful for when you want to then send e-mail to background database to check if it exists.</li> <li>Allows them 1 idle second before checking the e-mail address.</li> </ul> <input type="text" size="50" id="youremail" name="youremail" onkeyup="emailv.checkDelay(this.value)" onpaste="emailv.checkDelay(this.value)" /> <span id='emailstatus'></span> <h2>2 of 2 - Instant Check</h2> <input type="text" size="50" id="youremail2" name="youremail2" /> <a href="Javascript:void(0)" onclick="alert(emailv.checkById('youremail2')?'Seems okay to me':'This e-mail is dodgy')">Check e-mail</a> It would be better to avoid stopping the user entering a valid e-mail than to apply so many restrictions that it becomes complicated. This is just my opinion! A: Regex updated! try this let val = '[email protected]'; if(/^[a-z0-9][a-z0-9-_\.]+@([a-z]|[a-z0-9]?[a-z0-9-]+[a-z0-9])\.[a-z0-9]{2,10}(?:\.[a-z]{2,10})?$/.test(val)) { console.log('passed'); } typscript version complete // export const emailValid = (val:string):boolean => /^[a-z0-9][a-z0-9-_\.]+@([a-z]|[a-z0-9]?[a-z0-9-]+[a-z0-9])\.[a-z0-9]{2,10}(?:\.[a-z]{2,10})?$/.test(val); more info https://git.io/vhEfc A: It's hard to get an email validator 100% correct. The only real way to get it correct would be to send a test email to the account. That said, there are a few basic checks that can help make sure that you're getting something reasonable. Some things to improve: Instead of new RegExp, just try writing the regexp out like this: if (reg.test(/@/)) Second, check to make sure that a period comes after the @ sign, and make sure that there are characters between the @s and periods. A: This is how node-validator does it: /^(?:[\w\!\#\$\%\&\'\*\+\-\/\=\?\^\`\{\|\}\~]+\.)*[\w\!\#\$\%\&\'\*\+\-\/\=\?\^\`\{\|\}\~]+@(?:(?:(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-](?!\.)){0,61}[a-zA-Z0-9]?\.)+[a-zA-Z0-9](?:[a-zA-Z0-9\-](?!$)){0,61}[a-zA-Z0-9]?)|(?:\[(?:(?:[01]?\d{1,2}|2[0-4]\d|25[0-5])\.){3}(?:[01]?\d{1,2}|2[0-4]\d|25[0-5])\]))$/ A: I have found this to be the best solution: /^[^\s@]+@[^\s@]+\.[^\s@]+$/ It allows the following formats: 1. [email protected] 2. [email protected] 3. [email protected] 4. [email protected] 9. #!$%&'*+-/=?^_`{}|[email protected] 6. "()[]:,;@\\\"!#$%&'*+-/=?^_`{}| ~.a"@example.org 7. " "@example.org (space between the quotes) 8. üñîçøðé@example.com (Unicode characters in local part) 9. üñîçøðé@üñîçøðé.com (Unicode characters in domain part) 10. Pelé@example.com (Latin) 11. δοκιμή@παράδειγμα.δοκιμή (Greek) 12. 我買@屋企.香港 (Chinese) 13. 甲斐@黒川.日本 (Japanese) 14. чебурашка@ящик-с-апельсинами.рф (Cyrillic) It's clearly versatile and allows the all-important international characters, while still enforcing the basic [email protected] format. It will block spaces which are technically allowed by RFC, but they are so rare that I'm happy to do this. A: Following Regex validations: * *No spacial characters before @ *(-) and (.) 
should not appear together after the @ *No special characters after the @ *At least 2 characters must come before the @ *Email length should be less than 128 characters function validateEmail(email) { var chrbeforAt = email.substr(0, email.indexOf('@')); if (!($.trim(email).length > 127)) { if (chrbeforAt.length >= 2) { var re = /^(([^<>()[\]{}'^?\\.,!|//#%*-+=&;:\s@\"]+(\.[^<>()[\]\\.,;:\s@\"]+)*)|(\".+\"))@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?/; return re.test(email); } else { return false; } } else { return false; } } A: Whoever is using @pvl's solution and wants it to pass ESLint's prefer-template rule, here's a version where I used template literals instead of string concatenation. validateEmail(email) { let sQtext = '[^\\x0d\\x22\\x5c\\x80-\\xff]'; let sDtext = '[^\\x0d\\x5b-\\x5d\\x80-\\xff]'; let sAtom = '[^\\x00-\\x20\\x22\\x28\\x29\\x2c\\x2e\\x3a-\\x3c\\x3e\\x40\\x5b-\\x5d\\x7f-\\xff]+'; let sQuotedPair = '\\x5c[\\x00-\\x7f]'; let sDomainLiteral = `\\x5b(${sDtext}|${sQuotedPair})*\\x5d`; let sQuotedString = `\\x22(${sQtext}|${sQuotedPair})*\\x22`; let sDomainRef = sAtom; let sSubDomain = `(${sDomainRef}|${sDomainLiteral})`; let sWord = `(${sAtom}|${sQuotedString})`; let sDomain = `${sSubDomain}(\\x2e${sSubDomain})*`; let sLocalPart = `${sWord}(\\x2e${sWord})*`; let sAddrSpec = `${sLocalPart}\\x40${sDomain}`; // complete RFC822 email address spec let sValidEmail = `^${sAddrSpec}$`; // as whole string let reValidEmail = new RegExp(sValidEmail); return reValidEmail.test(email); } A: In Node.js you can also use the validator module. Install the library with npm install validator and use it simply like this: var validator = require('validator'); validator.isEmail('[email protected]'); //=> true A: There are some complex regexes written here that also work. I tested this one and it works too: [a-zA-Z0-9._]+[@]+[a-zA-Z0-9]+[.]+[a-zA-Z]{2,6} Please test it here: http://www.regextester.com/?fam=97334 Hope this helps. A: How about creating a function which tests any string against the email pattern using a regular expression in JavaScript? As we know, email addresses can be quite different in different regions (in the UK and Australia they usually end with .co.uk or .com.au), so I tried to cover those as well, and also check that the value passed to the function is actually a string, something like this: var isEmail = function(str) { return typeof str==='string' && /^[\w+\d+._]+\@[\w+\d+_+]+\.[\w+\d+._]{2,8}$/.test(str); } and check whether it's an email like below: isEmail('[email protected]'); //true isEmail('[email protected]'); //true isEmail('[email protected]'); //true isEmail('[email protected]'); //true isEmail('[email protected]'); //true isEmail('[email protected]'); //true isEmail('[email protected]'); //true isEmail('[email protected]#sswzazaaaa'); //false isEmail('[email protected]'); //false A: ES6 sample const validateEmail=(email)=> /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/.test(email); A: Here's a simple regex that would just check for the basic format of an email, e.g., [email protected]: \S+@\S+\.\S+ A: This question is more difficult to answer than it seems at first sight. There were loads of people around the world looking for "the regex to rule them all", but the truth is that there are tons of email providers. What's the problem? Well, "a_z%@gmail.com" cannot exist, but an address like that may exist through another provider, e.g. "[email protected]". Why?
According to the RFC: https://en.wikipedia.org/wiki/Email_address#RFC_specification. I'll take an excerpt to make it easier to read: The local-part of the email address may use any of these ASCII characters: - uppercase and lowercase Latin letters A to Z and a to z; - digits 0 to 9; - special characters !#$%&'*+-/=?^_`{|}~; - dot ., provided that it is not the first or last character unless quoted, and provided also that it does not appear consecutively unless quoted (e.g. [email protected] is not allowed but "John..Doe"@example.com is allowed);[6] Note that some mail servers wildcard local parts, typically the characters following a plus and less often the characters following a minus, so fred+bah@domain and fred+foo@domain might end up in the same inbox as fred+@domain or even as fred@domain. This can be useful for tagging emails for sorting, see below, and for spam control. Braces { and } are also used in that fashion, although less often. - space and "(),:;<>@[\] characters are allowed with restrictions (they are only allowed inside a quoted string, as described in the paragraph below, and in addition, a backslash or double-quote must be preceded by a backslash); - comments are allowed with parentheses at either end of the local-part; e.g. john.smith(comment)@example.com and (comment)[email protected] are both equivalent to [email protected]. So, I can own an email address like this: A__z/J0hn.sm{it!}[email protected] If you try this address, I bet it will fail with all or most of the regexes posted across the net. But remember, this address follows the RFC rules, so it's perfectly valid. Imagine my frustration at not being able to register on any site that checked with those regexes!! The only one who can really validate an email address is the provider of that email address. How should you deal with this, then? In almost all cases it doesn't matter if a user enters an invalid e-mail. You can rely on the HTML5 input type="email", which runs close to the RFC and has little chance of failing. HTML5 input type="email" info: https://www.w3.org/TR/2012/WD-html-markup-20121011/input.email.html For example, this is an RFC-valid email: "very.(),:;<>[]\".VERY.\"very@\\ \"very\".unusual"@strange.example.com But the HTML5 validation will tell you, for example, that the text before the @ must not contain " or () chars, which is actually incorrect. Anyway, you should handle this by accepting the email address and sending an email message to that address, with a code/link the user must visit to confirm validity. A good practice while doing this is the "enter your e-mail again" input, to avoid user typing errors. If this is not enough for you, add a pre-submit modal window with the title "Is this your current e-mail?", then the address entered by the user inside an h2 tag (you know, to show clearly which e-mail they entered), then a "Yes, submit" button. A: General email regex (RFC 5322 Official Standard): https://emailregex.com/ JavaScript: /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/ A: Use the browser/runtime to handle parsing the input by prepending a protocol and passing it to the URL API, trapping any errors and checking the resulting username and hostname properties of the result. It will handle basically all transformations and possibilities (punycode of character sets, etc.). This only establishes that the input is parsable, not that it is valid; that is only possible by checking whether the destination machine receives messages for that alias.
This provides a close (imo reasonable) guess though, and can be expanded to be more specific and realistic if you're comfortable both maintaining it and risking invalid rejections. (Note it doesn't attempt to address IPv4 or IPv6 addresses, simply the broad range of customer-facing scenarios using a domain.) function validEmail(email=''){ var $0, url, isValid = false, emailPatternInput = /^[^@]{1,64}@[^@]{4,253}$/, emailPatternUrl = /^[^@]{1,64}@[a-z][a-z0-9\.-]{3,252}$/i; email = email.trim(); try{ url = new URL('http://'+email); $0 = `${url.username}@${url.hostname}`; isValid = emailPatternInput.test( email ); if(!isValid) throw 'invalid email pattern on input:' + email; isValid = emailPatternUrl.test( $0 ); if(!isValid) throw 'invalid email pattern on url:' + $0; console.log(`email looks legit "${email}" checking url-parts: "${$0 === email ? '-SAME-':$0}"`); }catch(err){ console.error(`probably not an email address: "${email}"`, err); }; return isValid; } ['user+this@はじめよう.みんな', 'stuff@things', '[email protected]', 'Jean+Franç[email protected]','هيا@יאללה', '试@例子.测试.مثال.آزمایشی', 'not@@really', 'no'].forEach(email=>console.log(validEmail(email), email)); This is both the simplest and most generally permissive example I can come up with. Please edit it in cases where it can be made more accurate while maintaining its simplicity and reasonably permissive validity. Also see the MDN URL docs (URL, window.URL) and Node.js for URL APIs. A: Use the URL interface in JavaScript to parse the address in the minimum practical expected format user@host, then check that it looks reasonable. Next, send a message to it and see if that works (for example, require the recipient to validate a one-time token via the address). Note that this handles punycode and internationalization, as shown in the samples below. https://developer.mozilla.org/en-US/docs/Web/API/URL An example with simple tests: function validEmail(input=''){ const emailPatternInput = /^[^@]{1,64}@[^@]{4,253}$/, emailPatternUrl = /^[^@]{1,64}@[a-z][a-z0-9\.-]{3,252}$/i; let email, url, valid = false, error, same = false; try{ email = input.trim(); // handles punycode, etc using browser's own maintained implementation url = new URL('http://'+email); let urlderived = `${url.username}@${url.hostname}`; same = urlderived === email; valid = emailPatternInput.test( email ); if(!valid) throw new Error('invalid email pattern on input:' + email); valid = emailPatternUrl.test( urlderived ); if(!valid) throw new Error('invalid email pattern on url:' + urlderived); }catch(err){ error = err; }; return {email, url, same, valid, error}; } [ 'user+this@はじめよう.みんな' , '[email protected]' , 'stuff@things' , '[email protected]' , 'Jean+Franç[email protected]','هيا@יאללה' , '试@例子.测试.مثال.آزمایشی' , 'not@@really' , 'no' ].forEach(email=>console.log(validEmail(email), email)); A: A solution that does not check the existence of the TLD is incomplete. Almost all answers to this question suggest using a regex to validate email addresses. I think a regex is only good for rudimentary validation. It seems that checking the validity of email addresses is actually two separate problems: 1- Validation of email format: Making sure the email complies with the format and pattern of emails in RFC 5322 and that the TLD actually exists. A list of all valid TLDs can be found here. For example, although the address [email protected] will pass the regex, it is not a valid email, because ccc is not a top-level domain according to IANA.
2- Making sure the email actually exists: For doing this, the only option is to send the users an email. A: Use this code inside your validator function: var emailID = document.forms["formName"]["form element id"].value; atpos = emailID.indexOf("@"); dotpos = emailID.lastIndexOf("."); if (atpos < 1 || ( dotpos - atpos < 2 )) { alert("Please enter correct email ID") return false; } Else you can use jQuery. Inside rules define: eMailId: { required: true, email: true } A: In modern browsers you can build on top of @Sushil's answer with pure JavaScript and the DOM: function validateEmail(value) { var input = document.createElement('input'); input.type = 'email'; input.required = true; input.value = value; return typeof input.checkValidity === 'function' ? input.checkValidity() : /\S+@\S+\.\S+/.test(value); } I've put together an example in the fiddle http://jsfiddle.net/boldewyn/2b6d5/. Combined with feature detection and the bare-bones validation from Squirtle's Answer, it frees you from the regular expression massacre and does not bork on old browsers. A: In contrast to squirtle, here is a complex solution, but it does a mighty fine job of validating emails properly: function isEmail(email) { return /^((([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+(\.([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+)*)|((\x22)((((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(([\x01-\x08\x0b\x0c\x0e-\x1f\x7f]|\x21|[\x23-\x5b]|[\x5d-\x7e]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(\\([\x01-\x09\x0b\x0c\x0d-\x7f]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))))*(((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(\x22)))@((([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.)+(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))$/i.test(email); } Use like so: if (isEmail('[email protected]')){ console.log('This is email is valid'); } A: var testresults function checkemail() { var str = document.validation.emailcheck.value var filter = /^([\w-]+(?:\.[\w-]+)*)@((?:[\w-]+\.)*\w[\w-]{0,66})\.([a-z]{2,6}(?:\.[a-z]{2})?)$/i if (filter.test(str)) testresults = true else { alert("Please input a valid email address!") testresults = false } return (testresults) } function checkbae() { if (document.layers || document.getElementById || document.all) return checkemail() else return true } <form name="validation" onSubmit="return checkbae()"> Please input a valid email address:<br /> <input type="text" size=18 name="emailcheck"> <input type="submit" value="Submit"> </form> A: My knowledge of regular expressions is not that good. That's why I check the general syntax with a simple regular expression first and check more specific options with other functions afterwards. This may not be not the best technical solution, but this way I'm way more flexible and faster. The most common errors I've come across are spaces (especially at the beginning and end) and occasionally a double dot. 
function check_email(val){ if(!val.match(/\S+@\S+\.\S+/)){ // Jaymon's / Squirtle's solution // Do something return false; } if( val.indexOf(' ')!=-1 || val.indexOf('..')!=-1){ // Do something return false; } return true; } check_email('check@thiscom'); // Returns false check_email('[email protected]'); // Returns false check_email(' [email protected]'); // Returns false check_email('[email protected]'); // Returns true A: Regex for validating email address [a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])+ A: I've slightly modified Jaymon's answer for people who want really simple validation in the form of: [email protected] The regular expression: /^\S+@\S+\.\S+$/ To prevent matching multiple @ signs: /^[^\s@]+@[^\s@]+\.[^\s@]+$/ The above regexes match the whole string; remove the leading ^ and trailing $ if you want to match anywhere in the string. The example below matches anywhere in the string. If you do want to match the whole string, you may want to trim() the string first. Example JavaScript function: function validateEmail(email) { var re = /\S+@\S+\.\S+/; return re.test(email); } console.log(validateEmail('my email is [email protected]')); // true console.log(validateEmail('my email is anystring@anystring .any')); // false A: Here is a very good discussion about using regular expressions to validate email addresses; "Comparing E-mail Address Validating Regular Expressions" Here is the current top expression, which is JavaScript compatible, for reference purposes: /^[-a-z0-9~!$%^&*_=+}{\'?]+(\.[-a-z0-9~!$%^&*_=+}{\'?]+)*@([a-z0-9_][-a-z0-9_]*(\.[-a-z0-9_]+)*\.(aero|arpa|biz|com|coop|edu|gov|info|int|mil|museum|name|net|org|pro|travel|mobi|[a-z][a-z])|([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}))(:[0-9]{1,5})?$/i A: Apparently, that's it: /^([\w\!\#$\%\&\'\*\+\-\/\=\?\^\`{\|\}\~]+\.)*[\w\!\#$\%\&\'\*\+\-\/\=\?\^\`{\|\}\~]+@((((([a-z0-9]{1}[a-z0-9\-]{0,62}[a-z0-9]{1})|[a-z])\.)+[a-z]{2,6})|(\d{1,3}\.){3}\d{1,3}(\:\d{1,5})?)$/i Taken from http://fightingforalostcause.net/misc/2006/compare-email-regex.php on Oct 1 '10. But, of course, that's ignoring internationalization. A: I was looking for a regex in JS that passes all of these email address test cases: * *[email protected] Valid email *[email protected] Email contains dot in the address field *[email protected] Email contains dot with subdomain *[email protected] Plus sign is considered valid character *[email protected] Domain is valid IP address *email@[192.0.2.123] Square bracket around IP address is considered valid *“email”@example.com Quotes around email are considered valid *[email protected] Digits in address are valid *[email protected] Dash in domain name is valid *[email protected] Underscore in the address field is valid *[email protected] .name is valid Top Level Domain name *[email protected] Dot in Top Level Domain name also considered valid (using co.jp as example here) *[email protected] Dash in address field is valid Here we go: http://regexr.com/3f07j OR regex: Regex = /(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@[*[a-zA-Z0-9-]+.[a-zA-Z0-9-.]+]*/ A: Wow, there are a lot of answers that contain slightly different regular expressions. I've tried many, got different results, and had a variety of different issues with all of them. For UI validation, I'm good with the most basic check of looking for an @ sign.
It's important to note that I always do server-side validation with a standard "validate email" message that contains a unique link for the user to confirm their email address. if (email.indexOf('@') > 0) I have purposely compared with > 0, even though indexOf is zero-based, as it also ensures there is at least one character before the @. A: In my case, I wanted to avoid ~ and #, which is why I have used another solution: function validEmail(email){ const regex = /^((?!\.)[\w\-_.]*[^.])(@\w+)(\.\w+(\.\w+)?[^.\W])$/; return regex.test(email); } function validEmail(email){ const regex = /^((?!\.)[\w\-_.]*[^.])(@\w+)(\.\w+(\.\w+)?[^.\W])$/; return regex.test(email); } const emails = [ '[email protected]', '[email protected]', '[email protected]', 'pio_#[email protected]', 'pio_pio@#factory.com', '[email protected]#om', '[email protected]*om', 'pio^[email protected]' ] for(const email of emails){ document.write(email+' : '+validEmail(email)+'</br>'); } A: If you are using AngularJS, just add type="email" to the input element: https://docs.angularjs.org/api/ng/input/input%5Bemail%5D In case there is no input element, it can be created dynamically: var isEmail = $compile('<input ng-model="m" type="email">')($rootScope.$new()). controller('ngModel').$validators["email"]; if (isEmail('[email protected]')) { console.log('valid'); } A: I know it's not regex, but anyway... This is an example with Node and the npm package email-existence; it is the ultimate check of whether the email exists and is in the right form :) This will ping the email address; if it gets no response it will return false, otherwise true. function doesEmailExist(email) { var emailExistence = require('email-existence'); return emailExistence.check(email,function (err,status) { if (status) { return status; } else { throw new Error('Email does not exist'); } }); } A: If you want to use jQuery and want a modern approach, then use jQuery input mask with validation. http://bseth99.github.io/projects/jquery-ui/5-jquery-masks.html A demo of how simple the jQuery input mask is here: http://codepen.io/anon/pen/gpRyBp Example of a simple input mask for a date, for example (NOT full validation): <input id="date" type="text" placeholder="YYYY-MM-DD"/> and the script: $("#date").mask("9999-99-99",{placeholder:"YYYY-MM-DD"}); A: I'd like to add a short note about non-ASCII characters. Rnevius's (and co.) solution is brilliant, but it allows Cyrillic, Japanese, emoticons and other Unicode symbols which may be restricted by some servers. The code below will print true even though it contains the UTF-8 character Ё. console.log (/^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/.test ('Ё@example.org')) In my case all non-ASCII symbols are prohibited, so I have modified the original expression to exclude all characters above U+007F: /^(([^\u0080-\uffff<>()\[\]\\.,;:\s@"]+(\.[^\u0080-\uffff<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/ Maybe this will help someone to prevent undesired behaviour.
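As a quick sanity check of the ASCII-only variant above, here is a minimal sketch; the helper name isAsciiEmail is made up for illustration, and the pattern is copied verbatim from the answer:
function isAsciiEmail(email) {
  // Same expression as above: everything over U+007F is excluded.
  var re = /^(([^\u0080-\uffff<>()\[\]\\.,;:\s@"]+(\.[^\u0080-\uffff<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
  return re.test(email);
}
console.log(isAsciiEmail('john.doe@example.com')); // true: plain ASCII address
console.log(isAsciiEmail('Ё@example.org')); // false: Ё (U+0401) is above U+007F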
A: <input type="email" class="form-control" required="required" placeholder="Email Address" name="Email" id="Email" autocomplete="Email"> <button class="btn-1 shadow-0 full-width" type="button" id="register">Register account</button> $("#register").click(function(){ var rea = /^[a-zA-Z0-9.!#$%&'*+/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$/; var Email = $("#Email").val(); var x = rea.test(Email); if (!x) { alert('Type Your valid Email'); return false; } </script> A: Here is a solution that works and includes validation/notification fuctionality in a form: You can run it at this link JAVASCRIPT (function() { 'use strict'; window.addEventListener('load', function() { var form = document.getElementById('needs-validation'); form.addEventListener('submit', function(event) { if (form.checkValidity() === false) { event.preventDefault(); } form.classList.add('was-validated'); event.preventDefault(); }, false); }, false); })(); HTML <p class='title'> <b>Email validation</b> <hr size="30px;"> </p> <br> <form id="needs-validation" novalidate> <p class='form_text'>Try it out!</p> <div class="form-row"> <div class="col-12"> <input type="email" class="form-control" placeholder="Email Address" required> <div class="invalid-feedback"> Please enter a valid email address. </div> </div> <div class="row"> <div class="col-12"> <button type="submit" class="btn btn-default btn-block">Sign up now </button> </div> </div> </form> A: I wrote a JavaScript email validator which is fully compatile with PHP's filter_var($value, FILTER_VALIDATE_EMAIL) implementation. https://github.com/mpyw/FILTER_VALIDATE_EMAIL.js import validateEmail from 'filter-validate-email' const value = '...' const result = validateEmail(value) is equivalent to: <?php $value = '...'; $result = (bool)filter_var($value, FILTER_VALIDATE_EMAIL, FILTER_FLAG_EMAIL_UNICODE); A: Here's how I do it. I'm using match() to check for the standard email pattern and I'm adding a class to the input text to notify the user accordingly. Hope that helps! $(document).ready(function(){ $('#submit').on('click', function(){ var email = $('#email').val(); var pat = /^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,3})+$/; if (email.match(pat)){ $('#email') .addClass('input-valid'); return false; } else { $('#email') .addClass('input-error') .val(''); return false; } }); }); .input-error { border: 1px solid red; color: red; } .input-valid { border: 1px solid green; color: green; } <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <form> <input type="text" id="email" placeholder="[email protected]" class=""> <input type="submit" id="submit" value="Send"/> </form> A: If you want something a human can read and maintain, I would recommend Masala Parser (I'm one of the creators of it). import {C,Streams} from '@masala/parser' const illegalCharset = ' @\u00A0\n\t'; const extendedIllegalCharset = illegalCharset + '.'; // Assume '[email protected]' export function simpleEmail() { return C.charNotIn(illegalCharset).rep() // 'nicolas' .then(C.char('@')) .then(subDns()) //'internal.masala.co.' .then(C.charNotIn(extendedIllegalCharset).rep()) //'uk' .eos(); // Must be end of the char stream } // [email protected] => extract 'internal.masala.co.' 
function subDns() { return C.charNotIn(extendedIllegalCharset).rep().then(C.char('.')).rep() } function validateEmail(email:string) { console.log(email + ': ' + (simpleEmail().parse(Streams.ofString(email)).isAccepted())); } validateEmail('[email protected]'); // True validateEmail('nz@co.'); // False, trailing "." If you want to accept the ultimate ugly email version, you can add in quotes in the first part: function inQuote() { return C.char('"') .then(C.notChar('"').rep()) .then(C.char('"')) } function allEmail() { return inQuote().or(C.charNotIn(illegalCharset)) .rep() // repeat (inQuote or anyCharacter) .then(C.char('@')) .then(subDns()) .then(C.charNotIn(extendedIllegalCharset).rep()) .eos() // Must be end of the character stream // Create a structure .map(function (characters) { return ({ email: characters.join('') }); }); } '"nicolas""love-quotes"@masala.co.uk' is officially valid, but should it be in your system? At least with Masala, you give yourself a chance to understand it. And so for the next year, colleague. A: These will work with the most-used email providers (they match exactly the rules of each one). Gmail /^[a-z]((?!\.\.)([a-z\.])){4,28}[a-z0-9]@gmail.com$/i Yahoo /^[a-z]((?!\.\.)([\w\.])){3,30}[\w]@yahoo.com$/i Outlook/Hotmail /[a-z]((?!\.\.)([\w\.])){0,62}[\w]@(outlook.com|hotmail.com)$/i A: // Html form calls the function name at the submit button <form name="form1" action="#"> <input type='text' name='text1'/> <input type="submit" name="submit" value="Submit" onclick="ValidateEmail(document.form1.text1)"/> </form> // Write the function name ValidateEmail below <script> function ValidateEmail(inputText) { var mailformat = /^[a-zA-Z0-9.!#$%&'*+/=?^_`{|}~-]+@[a-zA-Z0-9-]+(?:\.[a-zA-Z0-9-]+)*$/; if(inputText.value.match(mailformat)) { alert("Valid email address!"); document.form1.text1.focus(); return true; } else { alert("You have entered an invalid email address!"); document.form1.text1.focus(); return false; } } </script> A: Yet another perfect regexp for email validation /^([^\s\@])+\@(([^\s\@\.])+\.)+([^\s\.]{2,})+$/ You can test it here https://regex101.com/r/FV3pUI/2 A: // Try this regular expression as an ES6 function const emailValidate = (email) => { const regexp= /^[\w.%+-]+@[\w.-]+\.[\w]{2,6}$/; return regexp.test(email); } A: One of my coworkers shared this regex with me. I like it a lot. function isValidEmailAddress (email) { var validEmail = false; if (email) { email = email.trim().toLowerCase(); var pattern = /^[\w-']+(\.[\w-']+)*@([a-zA-Z0-9]+[a-zA-Z0-9-]+(\.[a-zA-Z0-9-]+)*?\.[a-zA-Z]{2,6}|(\d{1,3}\.){3}\d{1,3})(:\d{4})?$/; validEmail = pattern.exec(email); } return validEmail; } if (typeof String.prototype.trim !== 'function') { String.prototype.trim = function() { return this.replace(/^\s+|\s+$/g, ''); }; } A: \b[a-z][\w\d_\.]+@\w+\.[a-z]{2}[a-z]?\.?[a-z]{0,2}\s It allows: [email protected] [email protected] [email protected] [email protected] A: If you define your regular expression as a string, then all backslashes need to be escaped, so instead of '\w' you should have '\\w'. Alternatively, define it as a regular expression: var pattern = /^\w+@[a-zA-Z_]+?\.[a-zA-Z]{2,3}$/; A: Here is my version of an email validator. This code is done with object-oriented programming and realized as a class with static methods. You will find two versions of the validator: strict (EmailValidator.validate) and kind (EmailValidator.validateKind). The first throws an error if an email is invalid and returns the email otherwise. The second returns a Boolean value that says whether an email is valid.
I prefer the strict version in most of the cases. export class EmailValidator { /** * @param {string} email * @return {string} * @throws {Error} */ static validate(email) { email = this.prepareEmail(email); const isValid = this.validateKind(email); if (isValid) return email; throw new Error(`Got invalid email: ${email}.`); } /** * @param {string} email * @return {boolean} */ static validateKind(email) { email = this.prepareEmail(email); const regex = this.getRegex(); return regex.test(email); } /** * @return {RegExp} * @private */ static getRegex() { return /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/; } /** * @param {string} email * @return {string} * @private */ static prepareEmail(email) { return String(email).toLowerCase(); } } To validate an email you can follow these ways: // First way. try { EmailValidator.validate('[email protected]'); } catch (e) { console.error(e.message); } // Second way. const email = '[email protected]'; const isValid = EmailValidator.validateKind(email); if (isValid) console.log(`Email is valid: ${email}.`); else console.log(`Email is invalid: ${email}.`); A: I am using this function /** * @param {*} email */ export const validateEmail = email => { return new RegExp(/[\w-]+@([\w-]+\.)+[\w-]+/gm).test(email); };
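A side note on that last pattern: it is unanchored, so it accepts any string that merely contains an email-like substring, and the g and m flags add nothing when a fresh RegExp is built on every call. A tightened sketch of the same idea, assuming you want to validate the whole input rather than find a substring:
// Anchored variant of the pattern above; the g/m flags are dropped because
// a single full-string test does not need them.
export const validateEmail = email => /^[\w-]+@([\w-]+\.)+[\w-]+$/.test(email);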
{ "language": "en", "url": "https://stackoverflow.com/questions/46155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5478" }
Q: What could cause Run-time error 1012 Error accessing application data directories A friend of mine has a problem :). There is an application written in Visual Basic 6.0 (not by him). One of the users reported that when it ran on Windows 2000 and tried to scan folders on disk, it raised a box with the message: Run-time error 1012 Error accessing application data directories We couldn't google anything about it and didn't find anything about runtime error 1012 in the VB6 help files. My guess was that VB calls some old API function which returns a folder to which the app has no access (private, encrypted, or belonging to another user, while the app is run by a user without the needed privileges). But we could not reproduce this (on Windows XP Professional). Has anyone met a bug like this in the past? A: Error 1012 is rather generically ERROR_CANT_READ. See this Microsoft list, but it also implies it refers to the registry. You could try running SysInternals Process Monitor to look for failing file/registry operations by the process.
{ "language": "en", "url": "https://stackoverflow.com/questions/46156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Getting an int representation of a String I am looking for a way to create an int/long representation of an arbitrary alpha-numeric String. Hash codes won't do it, because I can't afford hash collisions, i.e. the representation must be unique and repeatable. The numeric representation will be used to perform efficient (hopefully) compares. The creation of the numeric key will take some time, but it only has to happen once, whereas I need to perform vast numbers of comparisons with it - which will hopefully be much faster than comparing the raw Strings. Any other ideas on faster String comparison will be most appreciated too... A: How long are the strings? If they are very short, then a unique ID can be generated by considering the characters as digits in base 36 (26 + 10) that form an n-digit number, where n is the length of the string. On the other hand, if the strings are short enough to allow this, direct comparison won't be an issue anyway. Otherwise you'll have to generate a collision-free hash, and this can only be done when the complete problem space is known in advance (i.e. if you know all strings that can possibly occur). You will want to have a look at perfect hashing, although the only feasible algorithm I know of for finding a perfect hash function is probabilistic, so collisions are still theoretically possible. There might be other ways to find such a function. Knuth called this a “rather amusing … puzzle” in TAoCP but he doesn't give an algorithm either. In general, you give way too little information to find an algorithm that doesn't require probing the whole problem space in some manner. This does invariably mean that the problem has exponential running time, but it could be solved using machine-learning heuristics. I'm not sure if this is advisable in your case. A: Perhaps: String y = "oiu291981u39u192u3198u389u28u389u"; BigInteger bi = new BigInteger(y, 36); System.out.println(bi); A: At the end of the day, a single alphanumeric character has at least 36 possible values. If you include punctuation, lower case, etc., then you can easily pass 72 possible values. A non-colliding number that allows you to quickly compare strings would necessarily grow exponentially with the length of the string. So you first must decide on the longest string you are expecting to compare. Assuming it's N characters in length, and assuming you ONLY need uppercase letters and the numerals 0-9, then you need an integer representation that can be as high as 36^N. For a string of length 25 (a common name field) you end up needing a binary number with 130 bits. If you compose that into 32 bit numbers, you'll need 4. Then you can compare each number (four integer compares should take no time, compared to walking the string). I would recommend a big number library, but for this specialized case I'm pretty sure you can write your own and get better performance. If you want to handle 72 possible values per character (uppercase, lowercase, numerals, punctuation...) and you need 10 characters, then you'll need 62 bits - two 32 bit integers (or one 64 bit if you're on a system that supports 64 bit computing). If, however, you are not able to restrict the characters in the string (i.e., each could be any of 256 possible byte values) and you can't bound the size of the string, then comparing the strings directly is the only way to go, but there's a shortcut.
Cast the pointer of the string to a 32 bit unsigned integer array, and compare the string 4 bytes at a time (or 64 bits/8 bytes at a time on a 64 bit processor). This means that a 100 character string only requires 25 compares maximum to find which is greater. You may need to re-define the character set (and convert the strings) so that the characters with higher precedence are assigned values closer to 0, and lower precedence values closer to 255 (or vice versa, depending on how you are comparing them). Good luck! -Adam A: As long as it's a hash function, be it String.hashCode(), MD5 or SHA1, collision is unavoidable unless you have a fixed limit on the string's length. It is mathematically impossible to have a one-to-one mapping from an infinite set into a finite set. Stepping back, is collision avoidance absolutely necessary? A: Unless your string is limited in length, you can't avoid collisions. There are 4294967296 possible values for an integer (2^32). If you have a string of more than 4 ASCII characters, or more than two unicode characters, then there are more possible string values than possible integer values. You can't have a unique integer value for every possible 5 character string. Long values have more possible values, but they would only provide a unique value for every possible string of up to 8 ASCII characters. Hash codes are useful as a two step process: first see if the hash codes match, then check the whole string. For most strings that don't match, you only need to do the first step, and it's really fast. A: Can't you just start with a hash code, and if the hash codes match, do a character by character comparison? A: A few questions in the beginning: * *Did you test that simple string comparison is too slow? *What does the comparison look like ('ABC' == 'abc' or 'ABC' != 'abc')? *How many strings do you have to compare? *How many comparisons do you have to do? *What do your strings look like (length, letter case)? As far as I remember, a String in Java is an object, and two identical strings can point to the same object. So maybe it would be enough to compare the objects (string comparison is probably already implemented this way). If that doesn't help, you can try a Pascal-style string representation, where the first element is the length; if your strings have varying lengths this should save some CPU time. A: How long are your strings? Unless you choose an int representation that's longer than the string, collisions will always be possible no matter what conversion you're using. So if you're using a 32 bit integer, you can only uniquely represent strings of up to 4 bytes. A: How big are your strings? Arbitrarily long strings cannot be compressed into a 32/64 bit format. A: If you don't want collisions, try something insane like SHA-512. I can't guarantee there won't be collisions, but I don't think any have been found yet. A: Assuming "alphanumeric" means letters and numbers, you could treat each letter/number as a base-36 digit. Unfortunately, large strings will cause the number to grow rapidly and you'd have to resort to big integers, which are hardly efficient. If your strings are usually different when you make the comparison (i.e. searching for a specific string) the hash might be your best option. Once you get a potential hit, you can do the string comparison to be sure. A well-designed hash will make collisions exceedingly rare. A: It would seem that an MD5 hash would work fine. The risk of a hash collision would be extremely low.
Depending on the length of your string, a hash that generates an int/long would run into max value problems very quickly. Why don't you do something like 1stChar + (10 x 2ndChar) + 100 x (3rdChar) ...., where you use the simple integer value of each character, i.e. a = 1, b = 2 etc., or just the integer value if it's not a letter. This will give a unique value for each string, even for 2 strings that are just the same letters in a different order. Of course it gets more complicated if you need to worry about Unicode rather than just ASCII, and the numbers could get large if you need to use long strings. Are the standard Java string comparison functions definitely not efficient enough? A: String length may vary, but let's say 10 characters for now. In that case, in order to guarantee uniqueness you'd have to use some sort of big integer representation. I doubt that doing comparisons on big integers would be substantially faster than doing string comparisons in the first place. I'll second what others have said here: use some sort of hash, then in the event of a hash match check the original strings to weed out any collisions. In any case, if your strings are around 10 characters, I doubt that comparing, say, a bunch of 32 bit hashes will be all that much faster than direct string comparisons. I think you have to ask yourself if it's really worth the additional complexity.
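To make the hash-then-verify idea concrete, here is a minimal sketch. The thread is about Java, but the pattern is language-agnostic; this sketch uses JavaScript, FNV-1a is an arbitrary illustrative choice of 32-bit hash (in Java you would typically just use String.hashCode()), and the function names are made up:
function fnv1a(str) {
  // 32-bit FNV-1a over the UTF-16 code units; any cheap hash works here.
  var h = 0x811c9dc5;
  for (var i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}
function fastEquals(a, hashA, b, hashB) {
  if (hashA !== hashB) return false; // different hashes: strings certainly differ
  return a === b; // same hash: confirm with a full compare, collisions are possible
}
var s1 = 'ABC123', s2 = 'ABC124';
console.log(fastEquals(s1, fnv1a(s1), s2, fnv1a(s2))); // false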
{ "language": "en", "url": "https://stackoverflow.com/questions/46160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }