Q: Is it feasible to support multiple applications of the same type that are all written in different languages? As much as we would all like to say it is a benefit to programmers to be language agnostic, is it really feasible to support multiple enterprise Web applications of the same type all written in different languages? Think about how complicated a CMS or e-commerce system can be -- now imagine supporting three different CMS platforms all written in different languages. I would hate to be known as a .NET or Java or PHP shop, but I also don't want to be the vendor who says they can support a solution they have never worked with, upsetting a client who wonders why we can't get something done right on time. Can anyone speak from experience on this? Does your company usually just suck it up and try to learn a new platform on the fly? Do you bill for the time spent getting up to speed, or eat those costs? A: I think it all depends on who your clients are and what they expect. Knowing about different technologies is good, but really, when you're hired by someone, they expect you to know what you are doing. Personally, I would much rather be known for doing a really good job with a certain type of technology and, when hired, getting the job done well. If you go after every contract without regard to what your core competencies are, you aren't going to succeed. You'll anger the people who do hire you and make mistakes, and you'll potentially miss opportunities where you can really shine. Sometimes you have to make compromises to pay the bills, but if you aren't careful, it can bite you in the end. The large consulting firms I've worked with throw resources at it and hope they don't anger too many people. They mainly do this because they know that the people who work with the consultants and get angry when the job doesn't get done aren't the ones making the decisions to keep them hired. Some of them (not all, I know, but some definitely) don't care if they screw up, because they ultimately know they can convince the VPs and SVPs to keep them around. A: To be honest, I think you tend to see this kind of thing happen over time, no matter how disciplined the organization is. It's natural for new methodologies to come bundled in the form of new libraries, frameworks, or even languages. Keep in mind that a .NET shop may well have been an ASP/VB shop at one time. They'll probably still maintain older systems for clients, because there's little benefit to rewriting everything from scratch. I'm not sure anyone has the luxury to keep everything "the same," because language issues are minor compared to library or framework issues -- especially the ones you build yourself.
{ "language": "en", "url": "https://stackoverflow.com/questions/59436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you check what version of SQL Server for a database using TSQL? Is there a system stored procedure to get the version #? A: There is another extended Stored Procedure which can be used to see the Version info: exec [master].sys.[xp_msver] A: CREATE FUNCTION dbo.UFN_GET_SQL_SEVER_VERSION ( ) RETURNS sysname AS BEGIN DECLARE @ServerVersion sysname, @ProductVersion sysname, @ProductLevel sysname, @Edition sysname; SELECT @ProductVersion = CONVERT(sysname, SERVERPROPERTY('ProductVersion')), @ProductLevel = CONVERT(sysname, SERVERPROPERTY('ProductLevel')), @Edition = CONVERT(sysname, SERVERPROPERTY ('Edition')); --see: http://support2.microsoft.com/kb/321185 SELECT @ServerVersion = CASE WHEN @ProductVersion LIKE '8.00.%' THEN 'Microsoft SQL Server 2000' WHEN @ProductVersion LIKE '9.00.%' THEN 'Microsoft SQL Server 2005' WHEN @ProductVersion LIKE '10.00.%' THEN 'Microsoft SQL Server 2008' WHEN @ProductVersion LIKE '10.50.%' THEN 'Microsoft SQL Server 2008 R2' WHEN @ProductVersion LIKE '11.0%' THEN 'Microsoft SQL Server 2012' WHEN @ProductVersion LIKE '12.0%' THEN 'Microsoft SQL Server 2014' END RETURN @ServerVersion + N' ('+@ProductLevel + N'), ' + @Edition + ' - ' + @ProductVersion; END GO A: SELECT @@VERSION A: Here's a bit of script I use for testing if a server is 2005 or later declare @isSqlServer2005 bit select @isSqlServer2005 = case when CONVERT(int, SUBSTRING(CONVERT(varchar(15), SERVERPROPERTY('productversion')), 0, CHARINDEX('.', CONVERT(varchar(15), SERVERPROPERTY('productversion'))))) < 9 then 0 else 1 end select @isSqlServer2005 Note : updated from original answer (see comment) A: I know this is an older post but I updated the code found in the link (which is dead as of 2013-12-03) mentioned in the answer posted by Matt Rogish: DECLARE @ver nvarchar(128) SET @ver = CAST(serverproperty('ProductVersion') AS nvarchar) SET @ver = SUBSTRING(@ver, 1, CHARINDEX('.', @ver) - 1) IF ( @ver = '7' ) SELECT 'SQL Server 7' ELSE IF ( @ver = '8' ) SELECT 'SQL Server 2000' ELSE IF ( @ver = '9' ) SELECT 'SQL Server 2005' ELSE IF ( @ver = '10' ) SELECT 'SQL Server 2008/2008 R2' ELSE IF ( @ver = '11' ) SELECT 'SQL Server 2012' ELSE IF ( @ver = '12' ) SELECT 'SQL Server 2014' ELSE IF ( @ver = '13' ) SELECT 'SQL Server 2016' ELSE IF ( @ver = '14' ) SELECT 'SQL Server 2017' ELSE SELECT 'Unsupported SQL Server Version' A: Try SELECT @@VERSION or for SQL Server 2000 and above the following is easier to parse :) SELECT SERVERPROPERTY('productversion') , SERVERPROPERTY('productlevel') , SERVERPROPERTY('edition') From: http://support.microsoft.com/kb/321185 A: The KB article linked in Joe's post is great for determining which service packs have been installed for any version. Along those same lines, this KB article maps version numbers to specific hotfixes and cumulative updates, but it only applies to SQL05 SP2 and up. A: For SQL Server 2000 and above, I prefer the following parsing of Joe's answer: declare @sqlVers numeric(4,2) select @sqlVers = left(cast(serverproperty('productversion') as varchar), 4) Gives results as follows: Result Server Version 8.00 SQL 2000 9.00 SQL 2005 10.00 SQL 2008 10.50 SQL 2008R2 11.00 SQL 2012 12.00 SQL 2014 Basic list of version numbers here, or exhaustive list from Microsoft here. 
A: Try this: if (SELECT LEFT(CAST(SERVERPROPERTY('productversion') as varchar), 2)) = '10' BEGIN A: SELECT @@SERVERNAME AS ServerName, CASE WHEN LEFT(CAST(serverproperty('productversion') as char), 1) = 9 THEN '2005' WHEN LEFT(CAST(serverproperty('productversion') as char), 2) = 10 THEN '2008' WHEN LEFT(CAST(serverproperty('productversion') as char), 2) = 11 THEN '2012' END AS MajorVersion, SERVERPROPERTY ('productlevel') AS MinorVersion, SERVERPROPERTY('productversion') AS FullVersion, SERVERPROPERTY ('edition') AS Edition A: Getting only the major SQL Server version in a single select: SELECT SUBSTRING(ver, 1, CHARINDEX('.', ver) - 1) FROM (SELECT CAST(serverproperty('ProductVersion') AS nvarchar) ver) as t Returns 8 for SQL 2000, 9 for SQL 2005 and so on (tested up to 2012). A: If all you want is the major version for T-SQL reasons, the following gives you the year of the SQL Server version for 2000 or later. SELECT left(ltrim(replace(@@Version,'Microsoft SQL Server','')),4) This code gracefully handles the extra spaces and tabs for various versions of SQL Server. A: Try SELECT @@MICROSOFTVERSION / 0x01000000 AS MajorVersionNumber For more information see: Querying for version/edition info A: select substring(@@version,0,charindex(convert(varchar,SERVERPROPERTY('productversion')) ,@@version)+len(convert(varchar,SERVERPROPERTY('productversion')))) A: Try this: SELECT @@VERSION[server], SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY ('edition') A: Try this: SELECT 'the sqlserver is ' + substring(@@VERSION, 21, 5) AS [sql version]
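A related sketch, not taken from the answers above: if you need the same information from client code rather than from a query window, the SERVERPROPERTY values can be read over ADO.NET. This is a minimal, hedged example; the connection string is a placeholder and the classic System.Data.SqlClient provider is assumed.

```csharp
// Minimal sketch: reading the same SERVERPROPERTY values from C# over ADO.NET.
// The connection string is a placeholder -- adjust it for your environment.
using System;
using System.Data.SqlClient;

class ServerVersionProbe
{
    static void Main()
    {
        const string connectionString = "Server=.;Database=master;Integrated Security=true";
        const string sql =
            "SELECT CONVERT(nvarchar(128), SERVERPROPERTY('ProductVersion')), " +
            "       CONVERT(nvarchar(128), SERVERPROPERTY('ProductLevel')), " +
            "       CONVERT(nvarchar(128), SERVERPROPERTY('Edition'))";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                if (reader.Read())
                {
                    string productVersion = reader.GetString(0); // e.g. "10.50.2500.0"
                    string productLevel   = reader.GetString(1); // e.g. "SP1"
                    string edition        = reader.GetString(2); // e.g. "Standard Edition (64-bit)"

                    // The first dotted component is the major version (8 = 2000, 9 = 2005, ...).
                    int major = int.Parse(productVersion.Split('.')[0]);
                    Console.WriteLine($"{productVersion} ({productLevel}), {edition}, major = {major}");
                }
            }
        }
    }
}
```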
{ "language": "en", "url": "https://stackoverflow.com/questions/59444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "130" }
Q: Creating a Silverlight DataTemplate in code How do I create a Silverlight data template in code? I've seen plenty of examples for WPF, but nothing for Silverlight. Edit: Here's the code I'm now using for this, based on the answer from Santiago below. public DataTemplate Create(Type type) { return (DataTemplate)XamlReader.Load( @"<DataTemplate xmlns=""http://schemas.microsoft.com/client/2007""> <" + type.Name + @" Text=""{Binding " + ShowColumn + @"}""/> </DataTemplate>" ); } This works really nicely and allows me to change the binding on the fly. A: Yes -- Silverlight 4 still lags behind the current versions of WPF here. You can add a template as a resource; for example, I added a DataTemplate to the merged ResourceDictionary in Application.xaml. In XAML, any element whose type implements IDictionary lets you use the x:Key attribute, like this: <ResourceDictionary> <DataTemplate x:Key="TextBoxEditTemplate"> <SomeUserControl x:Name="myOwnControl" /> </DataTemplate> </ResourceDictionary> You can then reach the template in code via Application.Current.Resources["TextBoxEditTemplate"]. On the other hand, some methods for finding members of that template will not work: DataTemplate itself does not implement IDictionary, so you cannot put x:Key on items inside the template (such as myOwnControl in the example), and FindName works only in WPF. Without XAML, the current Silverlight has some restrictions on creating fully dynamic code-behind DataTemplates, even though this works in WPF. For now the best solution is to build a XAML string for the DataTemplate and assign values to its elements there. We created our own user controls exposing the needed properties as dependency properties, and if your object does not already inherit from a control (i.e. it is not a MyControl : UserControl), you can derive it from DependencyObject so it can still be used from the template. A: Although you cannot programmatically create it, you can load it from a XAML string in code like this: public static DataTemplate Create(Type type) { return (DataTemplate) XamlReader.Load( @"<DataTemplate xmlns=""http://schemas.microsoft.com/client/2007""> <" + type.Name + @"/> </DataTemplate>" ); } The snippet above creates a data template containing a single control, which may be a user control with the contents you need. A: A citation from MSDN: The XAML usage that defines the content for creating a data template is not exposed as a settable property. It is special behavior built into the XAML processing of a DataTemplate object element. A: I had a few problems with this code, getting "element not found" exceptions. Just for reference, it was that I needed my namespace included in the DataTemplate... private DataTemplate Create(Type type) { string xaml = @"<DataTemplate xmlns=""http://schemas.microsoft.com/client/2007"" xmlns:controls=""clr-namespace:" + type.Namespace + @";assembly=" + type.Namespace + @"""> <controls:" + type.Name + @"/></DataTemplate>"; return (DataTemplate)XamlReader.Load(xaml); }
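To tie the accepted approach together, here is a hedged usage sketch: a small factory that builds the template from a XAML string (as in the answers above) and hands it to a list control. The TemplateFactory name, the TextBlock element and the "DisplayName" binding path are illustrative choices, not part of the original question.

```csharp
// Sketch: build a DataTemplate from a XAML string (Silverlight's XamlReader.Load takes
// the markup as a string) and assign it to a list control as its ItemTemplate.
using System.Windows;
using System.Windows.Markup;

public static class TemplateFactory
{
    // Builds a template that renders one bound TextBlock per item.
    public static DataTemplate CreateTextTemplate(string bindingPath)
    {
        string xaml =
            @"<DataTemplate xmlns=""http://schemas.microsoft.com/client/2007"">" +
            @"<TextBlock Text=""{Binding " + bindingPath + @"}"" />" +
            @"</DataTemplate>";
        return (DataTemplate)XamlReader.Load(xaml);
    }
}

// Typical usage from a page or user control (names are placeholders):
//   myListBox.ItemTemplate = TemplateFactory.CreateTextTemplate("DisplayName");
//   myListBox.ItemsSource  = people;
```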
{ "language": "en", "url": "https://stackoverflow.com/questions/59451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: How do I make custom MenuHeaders in WPF with accelerators? I'd like to make some custom MenuHeaders in WPF so I can have (for example), an icon and text in a menu item. Normally using MenuItems, if you populate the Header field with straight text, you can add an accelerator by using an underscore. eg, _File However, if I wanted to put in a UserControl, I believe this function would break, how would I do something similar to the following? <Menu> <MenuItem> <MenuItem.Header> <UserControl> <Image Source="..." /> <Label Text="_Open" /> </UserControl> </MenuItem.Header> </MenuItem> ... A: I think the Icon property fits your needs. However to answer the original question, it is possible to retain the Accelerator functionality when you compose the content of your menuitem. If you have nested content in a MenuItem you need to define the AccessText property explicitly like in the first one below. When you use the inline form, this is automagically taken care of. <Menu> <MenuItem> <MenuItem.Header> <StackPanel Orientation="Horizontal"> <Image Source="Images/Open.ico" /> <AccessText>_Open..</AccessText> </StackPanel> </MenuItem.Header> </MenuItem> <MenuItem Header="_Close" /> </Menu> A: The problem is you placed the image inside of the content of the MenuHeader which means that you'll lose the accelerator key. If you're just trying to have an image in the menu header, do the following. <MenuItem Header="_Open"> <MenuItem.Icon> <Image Source="images/Open.png"/> </MenuItem.Icon> </MenuItem> If you want to customize the look and feel even further, modify the controltemplate and style for the menu. From experience, styling the menus and menuitems are much more difficult then styling the other WPF controls. A: First thought, you would think that the Icon property can only contain an image. But it can actually contain anything! I discovered this by accident when I programmatically tried to set the Image property directly to a string with the path to an image. The result was that it did not show the image, but the actual text of the path! Then I discovered that I had to create an Image element first and set that to the Icon property. This lead me to think that the Image property was just any content container that is located in the icon area at the left in the menu, and I was right. I tried to put a button there, and it worked! This is showing a button with the text "i" in the Icon area of the menu item. When you click on the button, the Button_Click event is triggered (the LanguageMenu_Click is NOT triggered when you click the button). <MenuItem Name="LanguageMenu" Header="_Language" Click="LanguageMenu_Click"> <MenuItem.Icon> <Button Click="Button_Click">i</Button> </MenuItem.Icon> </MenuItem> This leads to an alternative to not have to make an image for the icon, but use text with a symbol font instead to display a simple "icon". The following example uses the Wingdings font which contains a floppydisk symbol. This symbol in the font is mapped to the charachter <, which has special meaning in XAML, so we have to use the encoded version &lt; instead. This works like a dream! The following shows a floppydisk symbol as an icon on the menu item: <MenuItem Name="mnuFileSave" Header="Save" Command="ApplicationCommands.Save"> <MenuItem.Icon> <Label VerticalAlignment="Center" HorizontalAlignment="Center" FontFamily="Wingdings">&lt;</Label> </MenuItem.Icon> </MenuItem> A: @a7an: Ah, I didn't notice the Icon property before. That's a good start. 
However, specifically I wanted to add an extra 'button' to some MenuItems so I could have a 'Pin' feature (see the recently loaded Documents list in Office 2007 for the feature idea). Since there needs to be code as well, would I need to subclass the control and add the code for the button? (Not afraid of messing with the MenuItem template; I've already had to do it once and I'd do it again if I had to! ;) )
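Below is a rough code-behind sketch of that "pin button inside a menu item" idea, building on the AccessText trick from the first answer. It is an assumption-laden illustration rather than a finished control: the factory name, the handler wiring and the plain "Pin" button content are placeholders, and a real implementation would more likely restyle the MenuItem template.

```csharp
// Sketch: compose a MenuItem header in code with an AccessText (so the _O accelerator
// still works) plus an extra "pin" Button, roughly like the Office 2007 recent-files list.
using System.Windows;
using System.Windows.Controls;

public static class PinnableMenuItemFactory
{
    public static MenuItem Create(string accessText, RoutedEventHandler onPinClicked)
    {
        var header = new StackPanel { Orientation = Orientation.Horizontal };

        // AccessText keeps the underscore accelerator working inside composed content.
        header.Children.Add(new AccessText { Text = accessText });

        var pinButton = new Button
        {
            Content = "Pin",                     // placeholder; an Image would work as well
            Margin = new Thickness(8, 0, 0, 0),
            Padding = new Thickness(4, 0, 4, 0)
        };
        pinButton.Click += (sender, e) =>
        {
            onPinClicked(sender, e);
            e.Handled = true;                    // keep the click from also invoking the MenuItem
        };
        header.Children.Add(pinButton);

        return new MenuItem { Header = header };
    }
}

// Usage (illustrative): menu.Items.Add(PinnableMenuItemFactory.Create("_Open recent.txt", OnPin));
```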
{ "language": "en", "url": "https://stackoverflow.com/questions/59456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why do all methods in the Google Analytics tracking code start with an underscore? Prefixing variable and method names with an underscore is a common convention for marking things as private. Why does all the methods on the page tracker class in the Google Analytics tracking code (ga.js) start with an underscore, even the ones that are clearly public, like _getTracker and _trackPageView? A: Because Google can't be bothered to follow the Module Pattern and therefore they don't want accidental collisions in the global namespace? A: Just in case you have a getTracker() function in your own code, or similar. In other words, to avoid naming conflicts with the page's javascript code, probably. @Theo: Didn't realize (ie, not read carefully enough) they were methods. Then maybe to encourage caution or discourage use? Dunno, really. A: I've always read this like so: If the property/method is prefixed with an underscore, it is for some "internal" workings. Therefore if you are about to use/call/alter this property/method, you had better darn well know what you are doing, and or expect it to possibly be renamed/removed in a future release.
{ "language": "en", "url": "https://stackoverflow.com/questions/59462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: In Emacs, how can I add a website like 'Stackoverflow' to my webjump hotlist? By default the webjump hotlist has the following, which I use quite often: M-x webjump RET Google M-x webjump RET Wikipedia How can I add 'Stackoverflow' to my list? A: A general tip for looking up answers to questions like this one: * Look up the help for a relevant function, e.g. C-h f webjump * In the top line of the help buffer, hit RET on the filename in which the function is defined. This will take you to the function definition. * M-< to jump to the beginning of the buffer. * Read through the documentation for the file. Typically (and in this case) this will include information on how to configure the feature. A: You will need to find where you are setting your webjump-sites variable. This is probably your .emacs file. Then you'll need to add a pair to that alist as follows: ("stackoverflow" . "www.stackoverflow.com") A full example of what to put in your .emacs would be as follows: (setq webjump-sites (append '(("stackoverflow" . "www.stackoverflow.com")) webjump-sample-sites)) A: Here's some example code in a webjump.el file on a site run by Apple: ;; (require 'webjump) ;; (global-set-key "\C-cj" 'webjump) ;; (setq webjump-sites ;; (append '( ;; ("My Home Page" . "www.someisp.net/users/joebobjr/") ;; ("Pop's Site" . "www.joebob-and-son.com/") ;; ) ;; webjump-sample-sites))
{ "language": "en", "url": "https://stackoverflow.com/questions/59465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can VS be configured to automatically remove blank line(s) after text is cut? Is there a way (or shortcut) to tell VS 2008 not to leave a blank line behind when text is cut from a line? For example:

Before:
Some Text here
This gets cut
Some Code there

After:
Some Text here

Some Code there

What I want:
Some Text here
Some Code there

PS: I don't want to select the whole line or something like this... only the text I want to cut. A: Unless I misunderstood you: just place the cursor on the line you want to cut (no selection) and press Ctrl + X. That cuts the line (leaving no blanks) and puts the text in the clipboard. (Tested in MS VC# 2008 Express with no additional settings I'm aware of.) Is that what you want? A: Shift+Delete also works. Select a line and hit Shift+Delete; it will remove the line and place that line in your clipboard. A: Don't select anything, just hit Ctrl+X when the cursor is on the line.
{ "language": "en", "url": "https://stackoverflow.com/questions/59472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Optimize Windows Form Load Time I have a Windows Form that takes quite a bit of time to load initially. However, each subsequent request to load the Form doesn't take as long. Is there a way to optimize a Form's load time? A: You need to find out where the time is going before you can optimise it. Don't just ngen it without finding that out first, as if the problem is loading a 150MB background bitmap resource then you won't have done anything useful at all with ngen. You should disregard all specific advice or hunches about optimisation which arise without any measurements being made. A: You can use ngen. I also use this tip to reduce the memory footprint on startup. The Native Image Generator (Ngen.exe) is a tool that improves the performance of managed applications. Ngen.exe creates native images, which are files containing compiled processor-specific machine code, and installs them into the native image cache on the local computer. The runtime can use native images from the cache instead of using the just-in-time (JIT) compiler to compile the original assembly.
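One concrete pattern that often falls out of such measurements (a sketch under assumptions, not a universal fix): keep the constructor and Load handler cheap, show the form, and defer the slow work to the Shown event on a background task. LoadExpensiveData and PopulateGrid below are hypothetical placeholders, and the async/await form assumes .NET 4.5 or later.

```csharp
// Sketch: make the window appear immediately and do the heavy lifting afterwards.
using System;
using System.Threading.Tasks;
using System.Windows.Forms;

public class MainForm : Form
{
    public MainForm()
    {
        // Keep construction cheap: create controls only, no database or file I/O here.
        Text = "Fast-start form";
        Shown += OnShown;
    }

    private async void OnShown(object sender, EventArgs e)
    {
        UseWaitCursor = true;
        try
        {
            // Run the slow part off the UI thread, then come back to update controls.
            var data = await Task.Run(() => LoadExpensiveData());
            PopulateGrid(data);
        }
        finally
        {
            UseWaitCursor = false;
        }
    }

    // Hypothetical placeholders for whatever the profiler says is actually slow.
    private object LoadExpensiveData() => new object();
    private void PopulateGrid(object data) { /* bind the result to controls here */ }
}
```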
{ "language": "en", "url": "https://stackoverflow.com/questions/59479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What's the better database design: more tables or more columns? A former coworker insisted that a database with more tables with fewer columns each is better than one with fewer tables with more columns each. For example rather than a customer table with name, address, city, state, zip, etc. columns, you would have a name table, an address table, a city table, etc. He argued this design was more efficient and flexible. Perhaps it is more flexible, but I am not qualified to comment on its efficiency. Even if it is more efficient, I think those gains may be outweighed by the added complexity. So, are there any significant benefits to more tables with fewer columns over fewer tables with more columns? A: I have a few fairly simple rules of thumb I follow when designing databases, which I think can be used to help make decisions like this.... * *Favor normalization. Denormalization is a form of optimization, with all the requisite tradeoffs, and as such it should be approached with a YAGNI attitude. *Make sure that client code referencing the database is decoupled enough from the schema that reworking it doesn't necessitate a major redesign of the client(s). *Don't be afraid to denormalize when it provides a clear benefit to performance or query complexity. *Use views or downstream tables to implement denormalization rather than denormalizing the core of the schema, when data volume and usage scenarios allow for it. The usual result of these rules is that the initial design will favor tables over columns, with a focus on eliminating redundancy. As the project progresses and denormalization points are identified, the overall structure will evolve toward a balance that compromises with limited redundancy and column proliferation in exchange for other valuable benefits. A: It depends on your database flavor. MS SQL Server, for example, tends to prefer narrower tables. That's also the more 'normalized' approach. Other engines might prefer it the other way around. Mainframes tend to fall in that category. A: A fully normalized design (i.e, "More Tables") is more flexible, easier to maintain, and avoids duplication of data, which means your data integrity is going to be a lot easier to enforce. Those are powerful reasons to normalize. I would choose to normalize first, and then only denormalize specific tables after you saw that performance was becoming an issue. My experience is that in the real world, you won't reach the point where denormalization is necessary, even with very large data sets. A: Each table should only include columns that pertain to the entity that's uniquely identified by the primary key. If all the columns in the database are all attributes of the same entity, then you'd only need one table with all the columns. If any of the columns may be null, though, you would need to put each nullable column into its own table with a foreign key to the main table in order to normalize it. This is a common scenario, so for a cleaner design, you're likley to be adding more tables than columns to existing tables. Also, by adding these optional attributes to their own table, they would no longer need to allow nulls and you avoid a slew of NULL-related issues. A: The multi-table database is a lot more flexible if any of these one to one relationships may become one to many or many to many in the future. For example, if you need to store multiple addresses for some customers, it's a lot easier if you have a customer table and an address table. 
I can't really see a situation where you might need to duplicate some parts of an address but not others, so separate address, city, state, and zip tables may be a bit over the top. A: Like everything else: it depends. There is no hard and fast rule regarding column count vs table count. If your customers need to have multiple addresses, then a separate table for that makes sense. If you have a really good reason to normalize the City column into its own table, then that can go, too, but I haven't seen that before because it's a free form field (usually). A table-heavy, normalized design is efficient in terms of space and looks "textbook-good" but can get extremely complex. It looks nice until you have to do 12 joins to get a customer's name and address. These designs are not automatically fantastic in terms of the performance that matters most: queries. Avoid complexity if possible. For example, if a customer can have only two addresses (not arbitrarily many), then it might make sense to just keep them all in a single table (CustomerID, Name, ShipToAddress, BillingAddress, ShipToCity, BillingCity, etc.). Here's Jeff's post on the topic. A: There are advantages to having tables with fewer columns, but you also need to look at your scenario above and answer these questions: Will the customer be allowed to have more than one address? If not, then a separate table for address is not necessary. If so, then a separate table becomes helpful because you can easily add more addresses as needed down the road, whereas it is more difficult to add more columns to the table. A: It doesn't sound so much like a question about tables/columns, but about normalization. In some situations having a high degree of normalization ("more tables" in this case) is good and clean, but it typically takes a high number of JOINs to get relevant results. And with a large enough dataset, this can bog down performance. Jeff wrote a little about it regarding the design of StackOverflow. See also the post Jeff links to by Dare Obasanjo. A: I would argue in favor of more tables, but only up to a certain point. Using your example, if you separated your user's information into two tables, say USERS and ADDRESS, this gives you the flexibility to have multiple addresses per user. One obvious application of this is a user who has separate billing and shipping addresses. The argument in favor of having a separate CITY table would be that you only have to store each city's name once, then refer to it when you need it. That does reduce duplication, but in this example I think it's overkill. It may be more space efficient, but you'll pay the price in joins when you select data from your database. A: I would consider normalizing as the first step, so cities, counties, states, and countries would be better as separate columns... the power of the SQL language, together with today's DBMSes, allows you to group your data later if you need to view it in some other, non-normalized way. When the system is being developed, you might consider denormalizing some part if you see that as an improvement. A: I think balance is in order in this case. If it makes sense to put a column in a table, then put it in the table; if it doesn't, then don't. Your coworker's approach would definitely help to normalize the database, but that might not be very useful if you have to join 50 tables together to get the information you need. I guess what my answer would be is: use your best judgement.
A: There are many sides to this, but from an application efficiency perspective more tables can be more efficient at times. If you have a few tables with a bunch of columns, every time the db has to do an operation it has a chance of taking a lock, and more data is made unavailable for the duration of the lock. If locks get escalated to pages and tables (well, hopefully not tables :) ) you can see how this can slow down the system. A: Hmm. I think it's a wash and depends on your particular design model. Definitely factor entities that have more than a few fields out into their own table, or entities whose makeup will likely change as your application's requirements change (for instance, I'd factor out address anyway, since it has so many fields, but I'd especially do it if you thought there was any chance you'd need to handle foreign country addresses, which can be of a different form. The same with phone numbers). That said, once you've got it working, keep an eye on performance. If you've spun an entity out that requires you to do large, expensive joins, maybe it becomes a better design decision to spin that table back into the original. A: When you design your database, you should stay as close as possible to the meaning of the data, NOT to your application's needs! A good database design should stand for over 20 years without a change. A customer could have multiple addresses; that's the reality. If you decide that your application is limited to one address for the first release, that concerns the design of your application, not the data! It's better to have multiple tables instead of multiple columns, and to use views if you want to simplify your queries. Most of the time, when you have a performance issue with a database, it's about network performance (chains of queries each returning one row, fetching columns you don't need, etc.), not about the complexity of your query. A: There are huge benefits to queries using as few columns as possible. But the table itself can have a large number. Jeff says something on this as well. Basically, make sure that you don't ask for more than you need when doing a query - performance of queries is directly related to the number of columns you ask for. A: I think you have to look at the kind of data you're storing before you make that decision. Having an address table is great, but only if the likelihood of multiple people sharing the same address is high. If every person had a different address, keeping that data in a different table just introduces unnecessary joins. I don't see the benefit of having a city table unless cities in and of themselves are entities you care about in your application, or if you want to limit the number of cities available to your users. The bottom line is that decisions like this have to take the application itself into consideration before you start shooting for efficiency. IMO. A: First, normalize your tables. This ensures you avoid redundant data, giving you fewer rows of data to scan, which improves your queries. Then, if you run into a point where the normalized tables you are joining are causing the query to take too long to process (expensive join clause), denormalize where more appropriate. A: Good to see so many inspiring and well-based answers. My answer would be (unfortunately): it depends. Two cases: * If you create a data model that is to be used for many years and thus possibly has to adapt to many future changes: go for more tables and fewer rows and pretty strict normalization. * In other cases you can choose between more tables with fewer rows or fewer tables with more rows.
Especially for people relatively new to the subject, this last approach can be more intuitive and easier to comprehend. The same is true for choosing between the object-oriented approach and other options.
{ "language": "en", "url": "https://stackoverflow.com/questions/59482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "95" }
Q: Confused by gdb print ptr vs print "%s" 1167 ptr = (void*)getcwd(cwd, MAX_PATH_LENGTH-1); (gdb) n 1168 if (!ptr) { (gdb) print ptr $1 = 0xbff2d96c "/media/MMC-SD/partition1/aaaaaaaaaaa" (gdb) print &cwd $2 = (char (*)[3500]) 0xbff2d96c (gdb) print strlen(cwd) $3 = 36 (gdb) print "%s",cwd $4 = "/media/MMC-SD/partition1/aaaaaaaaaaa", '\0' <repeats 912 times>, "��O�001\000\000\000\000��027\000\000\000�3����EL鷠3�000��027\000\000\000\000\000\000\000\027\000\000\000\000��/�027\000\000\000�3����N����\230���鷠3�000��027\000\000\000\000\000\000\000��000\000\000\000\001\000\000\000��M鷠3����\000\000\000\000.\231�027��w\005\b\001\000"... (gdb) print "%s", ptr $5 = 0xbff2d96c "/media/MMC-SD/partition1/aaaaaaaaaaa" (gdb) Quit Why is ptr printing the string correctly but cwd not; this also affects the program and it crashes if I try to use the cwd... [edit: turns out that crash was caused by a stupid buffer overflow on this var... grr...not gdb, but the print question was still valid] A: I agree with mweerden. Trying something I believe is similar to your code, I get: (gdb) print cwd $1 = "/media", '\0' <repeats 782 times>, "\016���" ... (gdb) print (char*) cwd $2 = 0xbfc8eb84 "/media" from gdb, so it seems that since cwd was defined as char cwd[3500], gdb prints the entire array, while if you tell gdb to interpret it as a char*, it will work as you expect. If your application crashes, I would assume it is because of something else. A: The reason that cwd is printed differently in gdb is because gdb knows that ptr is a char * (I guess) and that cwd is an array of length 3500 (as shown in your output). So when printing ptr it prints the pointer value (and as a service also the string it points to) and when printing cwd it prints the whole array. I don't see any reason why using cwd instead of ptr would lead to problems, but I would need to see some code to be sure. A: That ptr is displayed as nicely-formatted string and cwd as "byte buffer" is probably specific to gdb. In any case it shouldn't affect your application; according to man 3 getcwd, ptr should point to cwd (or it should be NULL if an error occurred). Can you use ptr without crashing the program? A: What type is cwd? The above code snippet doesn't tell us that. It could be that ptr being a void* is treated differently by gdb.
{ "language": "en", "url": "https://stackoverflow.com/questions/59483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Convert this delegate to an anonymous method or lambda I am new to all the anonymous features and need some help. I have gotten the following to work: public void FakeSaveWithMessage(Transaction t) { t.Message = "I drink goats blood"; } public delegate void FakeSave(Transaction t); public void SampleTestFunction() { Expect.Call(delegate { _dao.Save(t); }).Do(new FakeSave(FakeSaveWithMessage)); } But this is totally ugly and I would like to have the inside of the Do to be an anonymous method or even a lambda if it is possible. I tried: Expect.Call(delegate { _dao.Save(t); }).Do(delegate(Transaction t2) { t2.Message = "I drink goats blood"; }); and Expect.Call(delegate { _dao.Save(t); }).Do(delegate { t.Message = "I drink goats blood"; }); but these give me Cannot convert anonymous method to type 'System.Delegate' because it is not a delegate type** compile errors. What am I doing wrong? Because of what Mark Ingram posted, seems like the best answer, though nobody's explicitly said it, is to do this: public delegate void FakeSave(Transaction t); Expect.Call(delegate { _dao.Save(t); }).Do( new FakeSave(delegate(Transaction t2) { t.Message = expected_msg; })); A: What Mark said. The problem is that Do takes a Delegate parameter. The compiler can't convert the anonymous methods to Delegate, only a "delegate type" i.e. a concrete type derived from Delegate. If that Do function had took Action<>, Action<,> ... etc. overloads, you wouldn't need the cast. A: That's a well known error message. Check the link below for a more detailed discussion. http://staceyw1.wordpress.com/2007/12/22/they-are-anonymous-methods-not-anonymous-delegates/ Basically you just need to put a cast in front of your anonymous delegate (your lambda expression). In case the link ever goes down, here is a copy of the post: They are Anonymous Methods, not Anonymous Delegates. Posted on December 22, 2007 by staceyw1 It is not just a talking point because we want to be difficult. It helps us reason about what exactly is going on. To be clear, there is *no such thing as an anonymous delegate. They don’t exist (not yet). They are "Anonymous Methods" – period. It matters in how we think of them and how we talk about them. Lets take a look at the anonymous method statement "delegate() {…}". This is actually two different operations and when we think of it this way, we will never be confused again. The first thing the compiler does is create the anonymous method under the covers using the inferred delegate signature as the method signature. It is not correct to say the method is "unnamed" because it does have a name and the compiler assigns it. It is just hidden from normal view. The next thing it does is create a delegate object of the required type to wrap the method. This is called delegate inference and can be the source of this confusion. For this to work, the compiler must be able to figure out (i.e. infer) what delegate type it will create. It has to be a known concrete type. Let write some code to see why. private void MyMethod() { } Does not compile: 1) Delegate d = delegate() { }; // Cannot convert anonymous method to type ‘System.Delegate’ because it is not a delegate type 2) Delegate d2 = MyMethod; // Cannot convert method group ‘MyMethod’ to non-delegate type ‘System.Delegate’ 3) Delegate d3 = (WaitCallback)MyMethod; // No overload for ‘MyMethod’ matches delegate ‘System.Threading.WaitCallback’ Line 1 does not compile because the compiler can not infer any delegate type. 
It can plainly see the signature we desire, but there is no concrete delegate type the compiler can see. It could create an anonymous type of type delegate for us, but it does not work like that. Line 2 does not compile for a similar reason. Even though the compiler knows the method signature, we are not giving it a delegate type and it is not just going to pick one that would happen to work (not what side effects that could have). Line 3 does not work because we purposely mismatched the method signature with a delegate having a different signature (as WaitCallback takes and object). Compiles: 4) Delegate d4 = (MethodInvoker)MyMethod; // Works because we cast to a delegate type of the same signature. 5) Delegate d5 = (Action)delegate { }; // Works for same reason as d4. 6) Action d6 = MyMethod; // Delegate inference at work here. New Action delegate is created and assigned. In contrast, these work. Line 1 works because we tell the compiler what delegate type to use and they match, so it works. Line 5 works for the same reason. Note we used the special form of "delegate" without the parens. The compiler infers the method signature from the cast and creates the anonymous method with the same signature as the inferred delegate type. Line 6 works because the MyMethod() and Action use same signature. I hope this helps. Also see: http://msdn.microsoft.com/msdnmag/issues/04/05/C20/ A: The problem is not with your delegate definition, it's that the parameter of the Do() method is of type System.Delegate, and the compiler generated delegate type (FakeSave) does not implicitly convert to System.Delegate. Try adding a cast in front of your anonymous delegate: Expect.Call(delegate { _dao.Save(t); }).Do((Delegate)delegate { t.Message = "I drink goats blood"; }); A: Try something like: Expect.Call(delegate { _dao.Save(t); }).Do(new EventHandler(delegate(Transaction t2) { t2.CheckInInfo.CheckInMessage = "I drink goats blood"; })); Note the added EventHandler around the delegate. EDIT: might not work since the function signatures of EventHandler and the delegate are not the same... The solution you added to the bottom of your question may be the only way. Alternately, you could create a generic delegate type: public delegate void UnitTestingDelegate<T>(T thing); So that the delegate is not Transaction specific.
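A self-contained sketch of the underlying compiler rule may help; the Do method below is a stand-in for any API whose parameter is the abstract System.Delegate type (as Rhino Mocks' Do is in the question), and the Transaction class is reduced to the one property used here. The point is that a bare anonymous method or lambda cannot convert to System.Delegate; it must first be given a concrete delegate type such as a named delegate or Action<T>.

```csharp
// Why the "cannot convert ... to type 'System.Delegate'" error appears, and two casts that fix it.
using System;

public class Transaction
{
    public string Message { get; set; }
}

public delegate void FakeSave(Transaction t);

public static class Demo
{
    // Stand-in for an API (like Rhino Mocks' Do) that accepts the abstract Delegate type.
    static void Do(Delegate d) => Console.WriteLine("got a " + d.GetType().Name);

    public static void Main()
    {
        // Do(t => { });  // error: cannot convert lambda to 'Delegate' -- not a delegate type

        // OK: wrap the lambda in a named delegate type ...
        Do(new FakeSave(t => t.Message = "I drink goats blood"));

        // ... or cast it to a concrete framework delegate type.
        Do((Action<Transaction>)(t => t.Message = "I drink goats blood"));
    }
}
```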
{ "language": "en", "url": "https://stackoverflow.com/questions/59515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Synchronisation algorithms Are there any good references for synchronisation algorithms? I'm interested in algorithms that synchronize the following kinds of data between multiple users: * *Calendars *Documents *Lists and outlines I'm not just looking for synchronization of contents of directories a la rsync; I am interested in merging the data within individual files. A: SyncML is a standard for data synchronization of things normally associated with someone's personal organizer. Nokia and Motorola were both using it heavily a few years ago, but I don't know its current state. iCalendar is a calendar synchronization format specification, and CalDAV is an implementation of iCalendar atop the WebDAV protocol. Google searches for iCal or iCalendar will likely turn up the iCal application supplied by Apple as part of Mac OS X. Keep looking down the list of results until you see something which looks like a protocol. A: There's a high-level and very broad overview of all synchronisation algorithms in Optimistic Replication (by Yasushi Saito and Marc Shapiro, PDF format). A: I would have thought that looking at any of the open-source source code control applications would give you the right idea - merging changes between files is exactly what they do... A: I believe Delta Compression is what you are looking for.
{ "language": "en", "url": "https://stackoverflow.com/questions/59521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What's a good program to record video or screencast being played on screen? What's a good program to record videos or screencasts being played on screen? A: I've had success with Camtasia / CamStudio. Check out http://showmedo.com for tutorials. It's a little recursive though - it would be better to capture it from source. A: On Windows, you have Camtasia (commercial), CamStudio (GPL), and FRAPS (commercial with free trial). FRAPS will definitely do your job; it was designed to capture videos of 3D games. Camtasia might, as it has a low-level custom codec (TechSmith Capture Codec). CamStudio probably won't, at least not smoothly. CamStudio has issues on Vista, I don't know about FRAPS, and Camtasia is fine on Vista. On the Mac, try ScreenFlow; their example video makes it clear it can capture live video streams. On Linux you'll be in a bit of trouble. If you recompile ffmpeg you might get it recording video. I think recordmydesktop won't do the job. A: If you're using Vista, the latest version of FRAPS might also do what you need. I haven't tried it for that, though, just games. A: DemoCreator could be a good option. It is easy to use, and there are powerful editing features with it. http://www.sameshow.com/demo-creator.html
{ "language": "en", "url": "https://stackoverflow.com/questions/59525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Service Oriented Architecture: How would you define it Service Oriented Architecture seems to be more and more of a hot quote these days, but after asking around the office I have found that I seem to get many different definitions for it. How would you guys define SOA? What would you consider the official definition? A: Wikipedia: "A SOA is a software architecture that uses loosely coupled software services to support the requirements of business processes and software users. Resources on a network in an SOA enviroment are made available as independent services that can be accessed without knowledge of their underlying platform implementation." SOA is not that new, but it has potential to achieve some amazing things. But the organization has to be ready for it: the business has to think in processes and that's the big problem A: I'd go with: Defining a series of stateless, client agnostic business operations created to be leveraged in multiple applications. A: An SOA design includes components (i.e., services) that can be used by code regardless of implementation (i.e., any OS or langauge). A single instance of a service may also be used by multiple applications, whereas, e.g., a DLL would have to be duplicated for each app and require the same implementation technology as the linking application. Services in an SOA design are usually implemented as interoperable web services. A: As Martin Fowler says, it means different things to different people. His article on the topic is pretty good although it isn't quite a definition. http://martinfowler.com/bliki/ServiceOrientedAmbiguity.html It may explain, the difficulty coming up with a concrete definition. A: There isn't an official definition as Ryan mentioned eariler. However, I find Thomas Erl's view of the whole service-orientation quite well-structured and relevant. Here is the definition of SOA from his SOA Glossary (more): Service-oriented architecture represents an architectural model that aims to enhance the agility and cost-effectiveness of an enterprise while reducing the overall burden of IT on an organization. Thomas Erl is the author of many SOA titles most of them receiving endorsement from SOA vendors including IBM, Oracle, and Microsoft. The nice thing about his books is that they are as SOA vendor independent as possible. It means you learn more about service-orientation itself and less about some vendor's middleware that supports SOA. A: I agree with all of the people that point you to Fowler on this. Basically it runs like this: service oriented architecture got a reputation as being good, so anything that people want to be associated with good they call SOA. In reality it has a lot of downsides and can create a Service Oriented Gridlock or Dependency Oriented Architecture. Here's my go at a definition: Service Oriented Architecture is a systems integration and code reuse approach where applications are dependent on connecting to services provided by other running applications across the network. This is distinct from component architectures, where software components are shared statically between applications in the form of libraries or SDKs, for example. A: A clarification here - "Service Oriented Architecture is a systems integration and code reuse approach where applications are dependent on connecting to services provided by other running applications across the network." I have a scenario where two j2ee applications have been integrated using event driven messaging. 
Here the above phrases about systems integration and connecting to services provided by other running applications across the network hold true. Can I call this SOA? The following principles hold here: 1) statelessness, 2) message orientation - loosely coupled, in fact decoupled, 3) extensibility. However, the following do not apply: 1) platform independence - neither of the applications being integrated has been designed to work on a different platform; 2) the applications are plain J2EE applications which have not been designed with all the SOA concepts in mind. A: I attempted to define SOA in one of my blog posts. Here's an excerpt... For years it's been standard practice to separate functionality into functions, classes, and modules. The idea has always been that these smaller, highly specialized components are easier to share and maintain than monolithic blocks of code. Functionally, SOA is not much different. The goals are the same - reusability and easy maintenance. The biggest difference - in the case of a web service SOA - is that the shared library included in your application is replaced with an HTTP call. A: Here's a definition for you: SOA - Software Over Architected. The inclusion of a pointless, over-bloated functional interface framework called an architecture in a pretty web site with a 3D graphic folder flying from one side to the other, where "dir /s > a.txt | ftp -s:upload.ftp" did the job. Software components are not bricks, cannot be generalised by common functional patterns, and architecture emerges in the enterprise from good practice, not good design. Software isn't architected, it's engineered. SCRUM ON!
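For readers who want something concrete to hang the definitions on, here is a toy C# sketch of the "stateless, coarse-grained, message-based operation" idea several answers circle around. It is an illustration only, not a reference implementation or anyone's official definition, and every name in it is invented for the example.

```csharp
// Toy sketch: the caller exchanges self-describing messages with a service contract and
// never touches the implementation's classes or database directly.
using System.Collections.Generic;

// Messages carry everything the operation needs -- no server-side session state.
public sealed class PlaceOrderRequest
{
    public string CustomerId { get; set; }
    public List<OrderLine> Lines { get; set; } = new List<OrderLine>();
}

public sealed class OrderLine
{
    public string Sku { get; set; }
    public int Quantity { get; set; }
}

public sealed class PlaceOrderResponse
{
    public string OrderNumber { get; set; }
    public bool Accepted { get; set; }
}

// The contract is the only thing clients depend on; it could be exposed over HTTP,
// a message queue, or in-process without the callers having to change.
public interface IOrderService
{
    PlaceOrderResponse PlaceOrder(PlaceOrderRequest request);
}
```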
{ "language": "en", "url": "https://stackoverflow.com/questions/59537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: SQL Recursive Tables I have the following tables, the groups table which contains hierarchically ordered groups and group_member which stores which groups a user belongs to. groups --------- id parent_id name group_member --------- id group_id user_id ID PARENT_ID NAME --------------------------- 1 NULL Cerebra 2 1 CATS 3 2 CATS 2.0 4 1 Cerepedia 5 4 Cerepedia 2.0 6 1 CMS ID GROUP_ID USER_ID --------------------------- 1 1 3 2 1 4 3 1 5 4 2 7 5 2 6 6 4 6 7 5 12 8 4 9 9 1 10 I want to retrieve the visible groups for a given user. That it is to say groups a user belongs to and children of these groups. For example, with the above data: USER VISIBLE_GROUPS 9 4, 5 3 1,2,4,5,6 12 5 I am getting these values using recursion and several database queries. But I would like to know if it is possible to do this with a single SQL query to improve my app performance. I am using MySQL. A: Two things come to mind: 1 - You can repeatedly outer-join the table to itself to recursively walk up your tree, as in: SELECT * FROM MY_GROUPS MG1 ,MY_GROUPS MG2 ,MY_GROUPS MG3 ,MY_GROUPS MG4 ,MY_GROUPS MG5 ,MY_GROUP_MEMBERS MGM WHERE MG1.PARENT_ID = MG2.UNIQID (+) AND MG1.UNIQID = MGM.GROUP_ID (+) AND MG2.PARENT_ID = MG3.UNIQID (+) AND MG3.PARENT_ID = MG4.UNIQID (+) AND MG4.PARENT_ID = MG5.UNIQID (+) AND MGM.USER_ID = 9 That's gonna give you results like this: UNIQID PARENT_ID NAME UNIQID_1 PARENT_ID_1 NAME_1 UNIQID_2 PARENT_ID_2 NAME_2 UNIQID_3 PARENT_ID_3 NAME_3 UNIQID_4 PARENT_ID_4 NAME_4 UNIQID_5 GROUP_ID USER_ID 4 2 Cerepedia 2 1 CATS 1 null Cerebra null null null null null null 8 4 9 The limit here is that you must add a new join for each "level" you want to walk up the tree. If your tree has less than, say, 20 levels, then you could probably get away with it by creating a view that showed 20 levels from every user. 2 - The only other approach that I know of is to create a recursive database function, and call that from code. You'll still have some lookup overhead that way (i.e., your # of queries will still be equal to the # of levels you are walking on the tree), but overall it should be faster since it's all taking place within the database. I'm not sure about MySql, but in Oracle, such a function would be similar to this one (you'll have to change the table and field names; I'm just copying something I did in the past): CREATE OR REPLACE FUNCTION GoUpLevel(WO_ID INTEGER, UPLEVEL INTEGER) RETURN INTEGER IS BEGIN DECLARE iResult INTEGER; iParent INTEGER; BEGIN IF UPLEVEL <= 0 THEN iResult := WO_ID; ELSE SELECT PARENT_ID INTO iParent FROM WOTREE WHERE ID = WO_ID; iResult := GoUpLevel(iParent,UPLEVEL-1); --recursive END; RETURN iResult; EXCEPTION WHEN NO_DATA_FOUND THEN RETURN NULL; END; END GoUpLevel; / A: Joe Cleko's books "SQL for Smarties" and "Trees and Hierarchies in SQL for Smarties" describe methods that avoid recursion entirely, by using nested sets. That complicates the updating, but makes other queries (that would normally need recursion) comparatively straightforward. There are some examples in this article written by Joe back in 1996. A: I don't think that this can be accomplished without using recursion. You can accomplish it with with a single stored procedure using mySQL, but recursion is not allowed in stored procedures by default. This article has information about how to enable recursion. I'm not certain about how much impact this would have on performance verses the multiple query approach. 
mySQL may do some optimization of stored procedures, but otherwise I would expect the performance to be similar. A: Didn't know if you had a Users table, so I get the list via the User_IDs stored in the Group_Member table... SELECT GroupUsers.User_ID, ( SELECT STUFF((SELECT ',' + Cast(Group_ID As Varchar(10)) FROM Group_Member Member (nolock) WHERE Member.User_ID=GroupUsers.User_ID FOR XML PATH('')),1,1,'') ) As Groups FROM (SELECT User_ID FROM Group_Member GROUP BY User_ID) GroupUsers That returns: User_ID Groups 3 1 4 1 5 1 6 2,4 7 2 9 4 10 1 12 5 Which seems right according to the data in your table, but doesn't match up with your expected value list (e.g. User 9 is only in one group in your table data but you show it in the results as belonging to two). EDIT: Dang. Just noticed that you're using MySQL. My solution was for SQL Server. Sorry. -- Kevin Fairchild A: A similar question was already raised. Here is my answer (a bit edited): I am not sure I understand your question correctly, but this could work: My take on trees in SQL. The linked post describes a method of storing a tree in a database -- PostgreSQL in that case -- but the method is clear enough, so it can be adopted easily for any database. With this method you can easily update all the nodes that depend on a modified node K with about N simple SELECT queries, where N is the distance of K from the root node. Good luck! A: I don't remember which SO question I found the link under, but this article on sitepoint.com (second page) shows another way of storing hierarchical trees in a table that makes it easy to find all child nodes, or the path to the top, things like that. Good explanation with example code. PS. Newish to StackOverflow: is the above OK as an answer, or should it really have been a comment on the question, since it's just a pointer to a different solution (not exactly answering the question itself)? A: There's no way to do this in the SQL standard, but you can usually find vendor-specific extensions, e.g., CONNECT BY in Oracle. UPDATE: As the comments point out, this was added in SQL 99.
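As a counterpart to the SQL-side answers, here is a hedged C# sketch of the in-application approach the question describes (recursion with several queries): load the whole groups table once, since it is usually small, and expand the user's memberships in memory instead of issuing one query per tree level. The class and method names are invented; the column names follow the question's schema.

```csharp
// Sketch: compute a user's visible groups in memory from one load of the `groups` table.
using System.Collections.Generic;
using System.Linq;

public sealed class Group
{
    public int Id { get; set; }
    public int? ParentId { get; set; }
    public string Name { get; set; }
}

public static class GroupVisibility
{
    // allGroups: every row of `groups`; memberGroupIds: group_id values for one user.
    public static ISet<int> VisibleGroups(IEnumerable<Group> allGroups, IEnumerable<int> memberGroupIds)
    {
        // Index children by parent so each expansion step is a dictionary lookup.
        var childrenByParent = allGroups
            .Where(g => g.ParentId.HasValue)
            .ToLookup(g => g.ParentId.Value, g => g.Id);

        var visible = new HashSet<int>();
        var pending = new Stack<int>(memberGroupIds);

        while (pending.Count > 0)
        {
            int current = pending.Pop();
            if (!visible.Add(current)) continue;       // already expanded
            foreach (int child in childrenByParent[current])
                pending.Push(child);
        }
        return visible;
    }
}

// Example with the sample data: user 9 belongs to group 4, so the result is { 4, 5 }.
```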
{ "language": "en", "url": "https://stackoverflow.com/questions/59544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What tools exist to convert a Delphi 7 application to C# and the .Net framework? I maintain an old PC-only application written in Delphi 7. Although Delphi has served me very well in the past, I now only use it for this one application and find my skills with the language diminishing. Its syntax is too different from my 'day job' languages of Java/Ruby, so it takes me longer to get into the groove of writing new code; plus it is so old I have not used many interfaces, so the code is not managed, which seems ancient to me now! Many of my users are moving to Vista, which may run the app in compatibility mode or may have GPF problems depending on how their PC is configured by their IT department, so I have to do some maintenance on the application. I'm wondering if I should jump to a more familiar stack. Is there an automated tool that will do the legwork of converting the code base to C#, leaving me to concentrate on the conversion of any non-standard components? I'm using an embedded database component called AbsoluteDatabase, which is BDE compatible and uses standard SQL throughout, and a native Delphi HTML browser component which can be swapped out with something from the Microsoft world. How good are these conversion tools? A: There has been a scientific report of a successful transformation of a 1.5 million line Delphi project to C# by John Brant. He wrote a Delphi parser, a C# generator and lots of transformation rules on the AST. Gradually extending the set of rules, doing a daily build, lots of unit tests, and some rewriting of difficult Delphi parts allowed him, with a team of four that included some of the original developers with deep Delphi & C# knowledge, to migrate the software in 18 months. John Brant being the original developer of the Refactoring Browser and the SmaCC compiler construction kit, you are unlikely to be able to go that fast. A: I am not aware of any automated tools for making that conversion. Personally I would suggest you stick with Delphi, maybe just upgrade to a new version. I have seen a couple of code DOMs that attempt to convert from Delphi to C#, but that doesn't address the library issue. CodeGear (formerly Borland) has a tool for going from C# to Delphi that works OK. I would assume the tools that go the other direction will work the same (requiring a lot of editing).
Here is a Swedish tool that works on the same CodeDOM principle to go from Delphi to C# (and a number of other languages). There are others, I just can't find them right now. Another option would be to upgrade to a more recent version of Delphi for .NET and port your code to .NET that way. Once you get it working in Delphi for .NET (which will be pretty easy, except for the embedded DB, unless they have a .NET version) you can use .NET Reflector and File Disassembler to reverse the IL to C#. You will still be using the VCL, but you can use C# instead of Object Pascal. Another similar solution would be to port it to Oxygene by RemObjects. I believe they have a Delphi Win32 migration path to WinForms. Then use .NET Reflector and File Disassembler to reverse the IL to C#. In short, no easy answers. Language migration is easier than library migration. A lot of it depends on what 3rd party components you used (beyond AbsoluteDatabase) and if you made any Windows API calls directly in your application. Another completely different option would be to look for an offshore team to maintain the application. They can probably do so cheaply. You could find someone domestically, but it would cost you more no doubt (although with the sagging dollar and poor job market you never know . . . ) Good luck!
{ "language": "en", "url": "https://stackoverflow.com/questions/59547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Native Tongue as Default Language For an Application When downloading both Firefox and Chrome, I've noticed that the default version I got was in my native tongue of Hebrew. I personally don't like my applications in Hebrew, since I'm used to the English UI conventions embedded in me since long ago by: * *The lack of choice: Most programs don't offer interfaces in multiple languages and when they do, those languages are usually English and the developer's native tongue. *Programming languages which are almost completely bound to the English language. My question then is this: * *If you translate your applications, would you limit the UI to the user's native tongue or give them the choice by enabling more than one language pack by default? *Which language would your application default to (which is interesting mostly if you only install one language pack with your application)? And also generally I'd like to know how much value you put into translating your applications as a whole. A: I've helped develop an application that was used by Dutch, English, Spanish and Portuguese speaking users. Because the application was installed from CD, we just added all the language packs, mostly because it saved us a lot of work not having to maintain 4 different versions. If your application is distributed from a website and you have to support more than only 4 languages, I can imagine you don't want to let everyone download every language pack. But only distributing the native languages of people downloading the application seems a bit restrictive. Most people I know actually like their software in English. So at least adding the English language to all the versions makes sense. A: I've never written an application for use by a large number of people, and never for anyone that didn't use English as their language, but if I did, I would probably take a route that installs all available language packs at install (unless the user did a custom install, where I would allow them to choose language packs) and then switch between languages as an option inside the program. If I had to only choose one language, I would choose English if I was doing all of the work, or the native language of the users if I had a translator. A: When writing an application for multilingual use, I use Microsoft's Best Practices for Developing World-Ready Applications, which includes retrieving the current CultureInfo from the OS and using that as the default language pack. A: I usually try to ship products with all available sets of localized resources. Upon a user's first launch of the product, the UI is presented in the localization most closely matching the OS on their machine. Once within the app, the user has the option of switching the UI to one of the other available localizations. I think it is very important to provide localizations that match one's target markets. Most "normal" people (not software developers!) prefer by far to have a UI in their native language.
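Following up on the CultureInfo suggestion above, here is a minimal C# sketch of picking the UI culture from the OS and falling back to English when the application does not ship resources for it. The Supported list and the fallback choice are illustrative assumptions, not something taken from the answers themselves.

using System;
using System.Globalization;
using System.Threading;

static class UiCultureSelector
{
    // Hypothetical set of cultures this application actually ships resources for.
    static readonly string[] Supported = { "en", "he", "nl", "es", "pt" };

    public static void Apply()
    {
        // Culture the OS reports for the current user's UI language.
        CultureInfo osCulture = CultureInfo.CurrentUICulture;

        bool shipped = Array.IndexOf(Supported, osCulture.TwoLetterISOLanguageName) >= 0;

        // Default to the OS language when we ship it; otherwise fall back to English.
        CultureInfo chosen = shipped ? osCulture : CultureInfo.GetCultureInfo("en");

        Thread.CurrentThread.CurrentUICulture = chosen;
        // ResourceManager lookups made after this point resolve against 'chosen',
        // falling back to the neutral resources when a localized string is missing.
    }
}

Exposing the chosen culture through an in-app settings option then gives users the language switch the last answer recommends.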
{ "language": "en", "url": "https://stackoverflow.com/questions/59549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: HTML to Markdown with Java is there an easy way to transform HTML into markdown with JAVA? I am currently using the Java MarkdownJ library to transform markdown to html. import com.petebevin.markdown.MarkdownProcessor; ... public static String getHTML(String markdown) { MarkdownProcessor markdown_processor = new MarkdownProcessor(); return markdown_processor.markdown(markdown); } public static String getMarkdown(String html) { /* TODO Ask stackoverflow */ } A: I am working on the same issue, and experimenting with a couple different techniques. The answer above could work. You could use the jTidy library to do the initial cleanup work and convert from HTML to XHTML. You use the XSLT stylesheet linked above. Unfortunately there is no library that has a one-stop function to do this in Java. You could try using the Python script html2text with Jython, but I haven't yet tried this! A: There is a Java Library called flexmark which has such a feature. Maven Dependency: <dependency> <groupId>com.vladsch.flexmark</groupId> <artifactId>flexmark-html2md-converter</artifactId> <version>0.64.0</version> </dependency> Using the class com.vladsch.flexmark.html2md.converter.FlexmarkHtmlConverter you can convert an HTML String to a Markdown String in one line like this: String md = FlexmarkHtmlConverter.builder().build().convert(html); A: if you are using WMD editor and want to get the markdown code on the server side, just use these options before loading the wmd.js script: wmd_options = { // format sent to the server. can also be "HTML" output: "Markdown", // line wrapping length for lists, blockquotes, etc. lineLength: 40, // toolbar buttons. Undo and redo get appended automatically. buttons: "bold italic | link blockquote code image | ol ul heading hr", // option to automatically add WMD to the first textarea found. autostart: true }; A: There is a great library for JS called Turndown, you can try it online here. It works for htmls that the accepted answer errors out. I needed it for Java (as the question), so I ported it. The library for Java is called CopyDown, it has the same test suite as Turndown and I've tried it with real examples that the accepted answer was throwing errors. To install with gradle: dependencies { compile 'io.github.furstenheim:copy_down:1.0' } Then to use it: CopyDown converter = new CopyDown(); String myHtml = "<h1>Some title</h1><div>Some html<p>Another paragraph</p></div>"; String markdown = converter.convert(myHtml); System.out.println(markdown); > Some title\n==========\n\nSome html\n\nAnother paragraph\n PS. It has MIT license A: There is a Haskell library called pandoc that can convert between most markup formats. Although it is not a Java library, it can be used through its CLI in Java. You can get and install the latest version from here. Read the getting started guides here. var command = "pandoc --to=markdown_strict --output=result.md input.html"; var pandoc = new ProcessBuilder() .command(command.split(" ")) .directory(new File(".")) // Working directory .start(); pandoc.waitFor(); // The output result.md will be created in the working directory This tool can also be used in GitHub Actions workflows.
{ "language": "en", "url": "https://stackoverflow.com/questions/59557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: Most effective form of CAPTCHA? Of all the forms of CAPTCHA available, which one is the "least crackable" while remaining fairly human readable? A: I believe that CAPTCHA is dying. If someone really wants to break it, it will be broken. I read (somewhere, don't remember where) about a site that gave you free porn in exchange for answering CAPTCHAs so they can be rendered obsolete by bots. So, why bother? A: If you're a small enough site, no one would bother. If you're still looking for a CAPTCHA, I like tEABAG_3D by the OCR Research Team. It's complicated to break and uses your 3D vision. Plus, it's being developed by people who break CAPTCHAs for fun. A: If you're just looking for a captcha to prevent spammers from bombing your blog, the best option is something simple but unique. For example, ask to write the word "Cat" into a box. The advantage of this is that no targeted captcha-breaker was developed for this solution, and your small blog isn't important enough for someone to actually develop one. I've used such a captcha on my blog with some success for a couple of years now. A: I believe that CAPTCHA is dying. If someone really wants to break it, it will be broken. I read (somewhere, don't remember where) about a site that gave you free porn in exchange for answering CAPTCHAs so they can be rendered obsolete by bots. So, why bother? Anyone who really wants to break this padlock can use a pair of bolt cutters, so why bother with the lock? Anyone who really wants to steal this car can drive up with a tow truck, so why bother locking my car? Anyone who really wants to open this safe can cut it open with an oxyacetylene torch, so why bother putting things in the safe? Because using the padlock, locking your car, putting valuables in a safe, and using a CAPTCHA weeds out a large spectrum of relatively unsophisticated or unmotivated attackers. The fact that it doesn't stop sophisticated, highly motivated attackers doesn't mean that it doesn't work at all. Using a CAPTCHA isn't going to stop all spammers, but it's going to tremendously reduce the amount that requires filtering or manual intervention. Heck, look at the lame CAPTCHA that Jeff uses on his blog. Even a wimpy barrier like that still provides a lot of protection. A: This information is hard to really know because I believe a CAPTCHA gets broken long before anybody knows about it. There is economic incentive for those that break them to keep it quiet. I used to work with a guy whose job revolved mostly around breaking CAPTCHAs, and I can tell you the one giving them fits currently is reCAPTCHA. Now, does that mean it will stay that way forever? Call me skeptical. A: I wonder if a CAPTCHA mechanism that uses a collage made of pictures and asks the human to type what he sees in the collage image will be much more crack-proof than the text and number image one. Imagine that the mechanism stitches pictures of a cat, cup and car into a collage image and expects the human visitor to tick (checkboxes) cat, cup, and car. How long do you think it will take hackers and crackers to come up with an algorithm to crack the mechanism (i.e. extract image elements from the collage and recognize the object depicted by each picture)? A: If you wanted, you could try out the Microsoft Research project Asirra: http://research.microsoft.com/asirra/ A: CAPTCHAs, I believe, should start being considered heavily when designing the UX. They're slow, cumbersome, and a very poor user experience. They are useful, don't get me wrong, but perhaps you should look into designing a honeypot.
A honeypot is created by adding a hidden field at the bottom of the form. Because spam bots will fill in all the fields on the page blindly, you can do a check: If honeypotfield <> Empty Then "No Spam TY" Else //Proceed with the form End If This works until there is a specifically designed spambot for your site, so they can choose to fill out selected input fields. For more information: http://haacked.com/archive/2007/09/11/honeypot-captcha.aspx/ A: I agree with Thomas. Captcha is on its way out. But if you must use it, reCAPTCHA is a pretty good provider with a simple API. A: As far as I know, Google's one is the best that there is. It hasn't been broken by computer programs yet. What I know the crackers have been doing is copying the image and then sending it to many phishing websites where humans solve them to enter those websites. A: It doesn't matter if captchas are broken or not now -- there are Indian firms that do nothing but process captchas. I'm with the rest of the group in saying that Captchas are on their way out. A: Here is a cool link to create a CAPTCHA: http://www.codeproject.com/aspnet/CaptchaImage.asp A: Just.. don't.. There are several reasons use of captcha is not advised. http://www.interfacegeek.com/dont-ever-use-captchas/ A: I use uniqpin.com - it's easy to use and not annoying for users. So, bots can recognise text, but can't recognise an image. A: Death by Captcha can solve any regular CAPTCHA (including reCAPTCHA), but not the Speedcoin Cryptocurrency Captcha. Death by Captcha - http://deathbycaptcha.com Speedcoin Captcha - http://speedcoin.co/info/captcha/Speedcoin_Captcha.html
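To make the honeypot answer above concrete, here is a minimal C# sketch of the server-side check. The field name "website" and the dictionary-based form access are assumptions for illustration; the real field just needs to be hidden with CSS (not type="hidden"), so browsers leave it empty while naive bots fill it in.

using System.Collections.Generic;

static class HoneypotCheck
{
    // Hypothetical name of the trap field rendered in the form markup.
    const string TrapField = "website";

    // Returns true when the posted form looks like it came from a bot,
    // i.e. the field no human should ever see contains a value.
    public static bool LooksLikeBot(IDictionary<string, string> postedForm)
    {
        string value;
        return postedForm.TryGetValue(TrapField, out value)
               && !string.IsNullOrEmpty(value);
    }
}

When LooksLikeBot returns true, the usual choice is to silently discard the submission rather than show an error, so the bot gets no useful feedback.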
{ "language": "en", "url": "https://stackoverflow.com/questions/59560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: lock keyword in C# I understand the main function of the lock key word from MSDN lock Statement (C# Reference) The lock keyword marks a statement block as a critical section by obtaining the mutual-exclusion lock for a given object, executing a statement, and then releasing the lock. When should the lock be used? For instance it makes sense with multi-threaded applications because it protects the data. But is it necessary when the application does not spin off any other threads? Is there performance issues with using lock? I have just inherited an application that is using lock everywhere, and it is single threaded and I want to know should I leave them in, are they even necessary? Please note this is more of a general knowledge question, the application speed is fine, I want to know if that is a good design pattern to follow in the future or should this be avoided unless absolutely needed. A: All the answers here seem right: locks' usefulness is to block threads from acessing locked code concurrently. However, there are many subtleties in this field, one of which is that locked blocks of code are automatically marked as critical regions by the Common Language Runtime. The effect of code being marked as critical is that, if the entire region cannot be entirely executed, the runtime may consider that your entire Application Domain is potentially jeopardized and, therefore, unload it from memory. To quote MSDN: For example, consider a task that attempts to allocate memory while holding a lock. If the memory allocation fails, aborting the current task is not sufficient to ensure stability of the AppDomain, because there can be other tasks in the domain waiting for the same lock. If the current task is terminated, other tasks could be deadlocked. Therefore, even though your application is single-threaded, this may be a hazard for you. Consider that one method in a locked block throws an exception that is eventually not handled within the block. Even if the exception is dealt as it bubbles up through the call stack, your critical region of code didn't finish normally. And who knows how the CLR will react? For more info, read this article on the perils of Thread.Abort(). A: Bear in mind that there might be reasons why your application is not as single-threaded as you think. Async I/O in .NET may well call-back on a pool thread, for example, as do some of the various timer classes (not the Windows Forms Timer, though). A: When should the lock be used? A lock should be used to protect shared resources in multithreaded code. Not for anything else. But is it necessary when the application does not spin off any other threads? Absolutely not. It's just a time waster. However do be sure that you're not implicitly using system threads. For example if you use asynchronous I/O you may receive callbacks from a random thread, not your original thread. Is there performance issues with using lock? Yes. They're not very big in a single-threaded application, but why make calls you don't need? ...if that is a good design pattern to follow in the future[?] Locking everything willy-nilly is a terrible design pattern. If your code is cluttered with random locking and then you do decide to use a background thread for some work, you're likely to run into deadlocks. Sharing a resource between multiple threads requires careful design, and the more you can isolate the tricky part, the better. A: Generally speaking if your application is single threaded, you're not going to get much use out of the lock statement. 
Not knowing your application exactly, I don't know if they're useful or not - but I suspect not. Further, if your application is using lock everywhere I don't know that I would feel all that confident about it working in a multi-threaded environment anyways - did the original developer actually know how to develop multi-threaded code, or did they just add lock statements everywhere in the vague hope that that would do the trick? A: lock should be used around the code that modifies shared state, state that is modified by other threads concurrently, and those other threads must take the same lock. A lock is actually a memory access serializer; the threads (that take the lock) will wait on the lock to enter until the current thread exits the lock, so memory access is serialized. To answer your question, lock is not needed in a single-threaded application, and it does have performance side effects, because locks in C# are based on kernel sync objects and every lock you take creates a transition to kernel mode from user mode. If you're interested in multithreading performance, a good place to start is the MSDN threading guidelines. A: You can have performance issues with locking variables, but normally, you'd construct your code to minimize the lengths of time that are spent inside a 'locked' block of code. As far as removing the locks goes, it'll depend on what exactly the code is doing. Even though it's single threaded, if your object is implemented as a Singleton, it's possible that you'll have multiple clients using an instance of it (in memory, on a server) at the same time. A: Yes, there will be some performance penalty when using lock but it is generally negligible enough to not matter. Using locks (or any other mutual-exclusion statement or construct) is generally only needed in multi-threaded scenarios where multiple threads (either of your own making or from your caller) have the opportunity to interact with the object and change the underlying state or data maintained. For example, if you have a collection that can be accessed by multiple threads, you don't want one thread changing the contents of that collection by removing an item while another thread is trying to read it. A: Lock(token) is only used to mark one or more blocks of code that should not run simultaneously in multiple threads. If your application is single-threaded, it's protecting against a condition that can't exist. And locking does invoke a performance hit, adding instructions to check for simultaneous access before code is executed. It should only be used where necessary. A: See the question about 'Mutex' in C#. And then look at these two questions regarding use of the 'lock(Object)' statement specifically. A: There is no point in having locks in the app if there is only one thread and yes, it is a performance hit, although it does take a fair number of calls for that hit to stack up into something significant.
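For reference, here is a minimal C# sketch of the multi-threaded scenario the answers describe: shared state mutated from several threads, with lock making the increments safe. It is illustrative only; the counter and the thread and iteration counts are arbitrary.

using System;
using System.Threading;

class Counter
{
    private readonly object _sync = new object();
    private int _count;

    public void Increment()
    {
        // Without the lock, two threads can read the same _count value and
        // both write back count + 1, silently losing increments.
        lock (_sync)
        {
            _count++;
        }
    }

    public int Count
    {
        get { lock (_sync) { return _count; } }
    }
}

class Program
{
    static void Main()
    {
        var counter = new Counter();
        var threads = new Thread[4];

        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(delegate()
            {
                for (int j = 0; j < 100000; j++) counter.Increment();
            });
            threads[i].Start();
        }

        foreach (Thread t in threads) t.Join();

        // Prints 400000 with the lock in place; typically less without it.
        Console.WriteLine(counter.Count);
    }
}

In a genuinely single-threaded application like the one described in the question, the same code without any lock behaves identically, which is why the answers suggest the existing locks add nothing but a small cost.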
{ "language": "en", "url": "https://stackoverflow.com/questions/59590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: VBScript conditional short-circuiting workaround I have a large classic ASP app that I have to maintain, and I repeatedly find myself thwarted by the lack of short-circuit evaluation capability. E.g., VBScript won't let you get away with: if not isNull(Rs("myField")) and Rs("myField") <> 0 then ... ...because if Rs("myField") is null, you get an error in the second condition, comparing null to 0. So I'll typically end up doing this instead: dim myField if isNull(Rs("myField")) then myField = 0 else myField = Rs("myField") end if if myField <> 0 then ... Obviously, the verboseness is pretty appalling. Looking around this large code base, the best workaround I've found is to use a function the original programmer wrote, called TernaryOp, which basically grafts in ternary operator-like functionality, but I'm still stuck using a temporary variable that would not be necessary in a more full-featured language. Is there a better way? Some super-secret way that short-circuiting really does exist in VBScript? A: Maybe not the best way, but it certainly works... Also, if you are in vb6 or .net, you can have different methods that cast to proper type too. if cint( getVal( rs("blah"), "" ) )<> 0 then 'do something end if function getVal( v, replacementVal ) if v is nothing then getVal = replacementVal else getVal = v end if end function A: I always used Select Case statements to short circuit logic in VB. Something like.. Select Case True Case isNull(Rs("myField")) myField = 0 Case (Rs("myField") <> 0) myField = Rs("myField") Case Else myField = -1 End Select My syntax may be off, been a while. If the first case pops, everything else is ignored. A: If you write it as two inline IF statements, you can achieve short-circuiting: if not isNull(Rs("myField")) then if Rs("myField") <> 0 then ... But your then action must appear on the same line as well. If you need multiple statements after then, you can separate them with : or move your code to a subroutine that you can call. For example: if not isNull(Rs("myField")) then if Rs("myField") <> 0 then x = 1 : y = 2 Or if not isNull(Rs("myField")) then if Rs("myField") <> 0 then DoSomething(Rs("myField")) A: Or perhaps I got the wrong end of the question. Did you mean something like iIf() in VB? This works for me: myField = returnIf(isNothing(rs("myField")), 0, rs("myField")) where returnIf() is a function like so: function returnIf(uExpression, uTrue, uFalse) if (uExpression = true) then returnIf = uTrue else returnIf = uFalse : end if end function A: Nested IFs (only slightly less verbose): if not isNull(Rs("myField")) Then if Rs("myField") <> 0 then A: Yeah it's not the best solution but what we use is something like this function ReplaceNull(s) if IsNull(s) or s = "" then ReplaceNull = "&nbsp;" else ReplaceNull = s end if end function A: Would that there were, my friend -- TernaryOp is your only hope. A: Two options come to mind: 1) use len() or lenb() to discover if there is any data in the variable: if not lenb(rs("myField"))=0 then... 2) use a function that returns a boolean: if not isNothing(rs("myField")) then... where isNothing() is a function like so: function isNothing(vInput) isNothing = false : vInput = trim(vInput) if vartype(vInput)=0 or isEmpty(vInput) or isNull(vInput) or lenb(vInput)=0 then isNothing = true : end if end function A: You may be able to just use Else to catch nulls, ""s, etc. 
If UCase(Rs("myField")) = "THING" then 'Do Things elseif UCase(Rs("myField")) = "STUFF" then 'Do Other Stuff else 'Invalid data, such as a NULL, "", etc. 'Throw an error, do nothing, or default action End If I've tested this in my code and it's currently working. Might not be right for everyone's situation though.
{ "language": "en", "url": "https://stackoverflow.com/questions/59599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Detecting application hangs with ActiveX controls in .Net I am working on upgrades to a screen scraping application. We are using an ActiveX control to scrape screens out of an IBM mainframe. The mainframe program often hangs and crashes the ActiveX control, causing our application to crash. We don't have access to the mainframe or the ActiveX source code. We are not going to write our own ActiveX control. What is the best way to encapsulate an ActiveX control to detect application hangs with the control so we can kill the process and restart with code? Should I create 2 separate applications? One as a controller that checks on the other and kills/restarts the process when it hangs? Would they have to be on separate app domains? Is it possible to have two programs communicate with each other even if they are on separate app domains? A: You can start an executable with System.Diagnostics.Process.Start(). This returns a Process object with a Responding property that you can use to check periodically if the process is still active. You'll need two separate applications to do this though. And the application you're monitoring needs to have a main window, because the monitoring works by checking if the application still processes messages from the main window's message queue. This is the same way Windows knows to add "Not Responding" to a window title.
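Here is a minimal C# sketch of the controller/watchdog approach described above, using Process.Responding. "ScraperHost.exe" and the five-second poll interval are placeholders, and the hosted process must own a visible main window for Responding to mean anything, as the answer notes.

using System.Diagnostics;
using System.Threading;

class ScraperWatchdog
{
    static void Main()
    {
        Process scraper = Process.Start("ScraperHost.exe");

        while (true)
        {
            Thread.Sleep(5000);

            if (scraper.HasExited)
            {
                // The hosted ActiveX control took the process down: restart it.
                scraper = Process.Start("ScraperHost.exe");
                continue;
            }

            scraper.Refresh(); // re-read the cached process information

            if (!scraper.Responding)
            {
                // Main window stopped pumping messages: kill and restart.
                scraper.Kill();
                scraper.WaitForExit();
                scraper = Process.Start("ScraperHost.exe");
            }
        }
    }
}

Since these are separate processes rather than separate AppDomains, ordinary interprocess mechanisms (files, sockets, .NET remoting) cover the communication the question asks about.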
{ "language": "en", "url": "https://stackoverflow.com/questions/59622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I find if my particular computer is going to have problems when I install Linux? The IT lady just gave me a laptop to keep! I've always wanted to have a Linux install to play with, so the first thing I did was search Stack Overflow for Linux distro suggestions and found it here. However they also mention that you should search around to see if anyone's had any problems with your drivers and that distro. Now all I know is that this is a Toshiba Tecra A5 - I haven't even booted it up yet, but when I do, how should I go about researching whether the drivers are compatible with Ubuntu or whatever I choose to use? Should I just be googling Ubuntu+DriverName or are there better resources? A: Many distros, Ubuntu included, have a "live" mode. You download the .iso image, burn the CD, and then boot from the CD. The OS will run directly off the CD without installing anything. It will run slowly, because it's reading from the CD, but it should give you the opportunity to test your hardware. A: You can try Linux-On-Laptops. A quick search shows this Tecra A5. You can also download a LiveCD version, which will tell you if you can get most of your hardware working easily. If the LiveCD works, you're good. If it doesn't, you can just pop it out of the CD-ROM drive. No harm done, and you can look at other options. A: Personally, I wouldn't worry about it... if you do dual boot (which I recommend), then you can easily fall back to Windows (or whatever is installed). I have set up 3 laptops with Linux, 2 Ubuntu and 1 Fedora 8. 2 of them had issues with the wireless network card, and 2 had issues with the video card (1 was an easier problem to fix). In both cases, I was able to solve the problem with enough googling... the Ubuntu forums are particularly good at having resolutions to problems you may face. So... it may be some work to resolve issues you have, but with enough effort, you should be able to overcome the problems (sometimes the effort may be high, by the way... it took me about a week of idle attempts to solve the first wireless card issue). A: I like Linux on Laptops: you pick a notebook, and they recommend the best distro for it. The data is sometimes slightly dated, so I recommend getting the newest distro of the brand they recommend. For your notebook they recommend Ubuntu. The other option is just to try a live CD. Many distros have them, including Ubuntu. A: I would look in (at least) these two places: http://www.linuxcompatible.org/compatibility.html http://www.linux-drivers.org/ A: Most Linux distros have Live CDs that let you run the OS before actually installing it. That is how I found the Linux distribution that would run on my laptop (Ubuntu). If your laptop is older and you're afraid it won't be able to handle a modern Linux desktop, look to distros like Xubuntu, Slax, Damn Small Linux, Puppy Linux, etc., as they ship with desktops that aren't as resource intensive.
{ "language": "en", "url": "https://stackoverflow.com/questions/59627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: AJAX Partial Page Load? I have a results page (you get there after submitting your search query elsewhere) with a whole bunch of gridviews for different types of data objects. Obviously, some of the queries take longer than the others. How can I make each gridview render as soon as it has the data it needs? This has been tricky for me because it must work on a postback as well as a page load. Also, the object data sources just fire automatically on page load/postback; I'm not calling any methods programmatically to get the data. Will I have to change this? A: @Gareth Jenkins The page will execute all of the queries before returning even the first update panel, so he won't save any time there. The trick to do this is to move each of your complex gridviews into a user control; in the user control, get rid of the ObjectDataSource crap and do your binding in the code-behind. Write your bind code so that it only binds in this situation: if (this.IsPostBack && ScriptManager.GetCurrent(this.Page).IsInAsyncPostBack) Then, in the page, programmatically refresh the update panel using JavaScript once the page has loaded, and you'll get each individual gridview rendering once it's ready. A: Could you put the DataGrids inside panels that have their visibility set to false, then call a client-side JavaScript function from the body's onload event that calls a server-side function that sets the visibility of the panels to true? If you combined this with an asp:UpdateProgress control and wrapped the whole thing in an UpdatePanel, you should get something close to what you're looking for - especially if you rigged the js function called in onload to only show one panel and call a return function that showed the next, etc.
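A minimal code-behind sketch of the first answer's approach, assuming each slow GridView lives in its own user control inside an UpdatePanel. The control names and LoadSlowData method are hypothetical; the point is binding only during the async postback, via ScriptManager.GetCurrent.

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Code-behind for a hypothetical user control wrapping one slow GridView.
public partial class SlowGridControl : UserControl
{
    protected GridView SlowGrid; // wired up to the GridView declared in the .ascx markup

    protected void Page_Load(object sender, EventArgs e)
    {
        ScriptManager sm = ScriptManager.GetCurrent(Page);

        // Skip the expensive query on the initial render; bind only when the
        // client-side script triggers the async refresh of this UpdatePanel.
        if (IsPostBack && sm != null && sm.IsInAsyncPostBack)
        {
            SlowGrid.DataSource = LoadSlowData(); // hypothetical long-running query
            SlowGrid.DataBind();
        }
    }

    private System.Data.DataTable LoadSlowData()
    {
        // Placeholder for the query this control owns.
        return new System.Data.DataTable();
    }
}

Each control then fills in as its own UpdatePanel refresh completes, which is what makes the grids appear one by one instead of waiting for the slowest query.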
{ "language": "en", "url": "https://stackoverflow.com/questions/59628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: App does not run with VS 2008 SP1 DLLs, previous version works with RTM versions Since our switch from Visual Studio 6 to Visual Studio 2008, we've been using the MFC90.dll and msvc[pr]90.dlls along with the manifest files in a private side-by-side configuration so as to not worry about versions or installing them to the system. Pre-SP1, this was working fine (and still works fine on our developer machines). Now that we've done some testing post-SP1 I've been pulling my hair out since yesterday morning. First off, our NSIS installer script pulls the dlls and manifest files from the redist folder. These were no longer correct, as the app still links to the RTM version. So I added the define for _BIND_TO_CURRENT_VCLIBS_VERSION=1 to all of our projects so that they will use the SP1 DLLs in the redist folder (or subsequent ones as new service packs come out). It took me hours to find this. I've double checked the generated manifest files in the intermediate files folder from the compilation, and they correctly list the 9.0.30729.1 SP1 versions. I've double and triple checked depends on a clean machine: it all links to the local dlls with no errors. Running the app still gets the following error: The application failed to initialize properly (0xc0150002). Click on OK to terminate the application. None of the searches I've done on google or microsoft have come up with anything that relates to my specific issues (but there are hits back to 2005 with this error message). Any one had any similar problem with SP1? Options: * *Find the problem and fix it so it works as it should (preferred) *Install the redist *dig out the old RTM dlls and manifest files and remove the #define to use the current ones. (I've got them in an earlier installer build, since Microsoft blasts them out of your redist folder!) Edit: I've tried re-building with the define turned off (link to RTM dlls), and that works as long as the RTM dlls are installed in the folder. If the SP1 dlls are dropped in, it gets the following error: c:\Program Files\...\...\X.exe This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem. Has no-one else had to deal with this issue? Edit: Just for grins, I downloaded and ran the vcredist_x86.exe for VS2008SP1 on my test machine. It works. With the SP1 DLLs. And my RTM linked app. But NOT in a private side-by-side distribution that worked pre-SP1. A: I have battled this problem myself last week and consider myself somewhat of an expert now ;) I'm 99% sure that not all dlls and static libraries were recompiled with the SP1 version. You need to put #define _BIND_TO_CURRENT_MFC_VERSION 1 #define _BIND_TO_CURRENT_CRT_VERSION 1 into every project you're using. For every project of a real-world size, it's very easy to forget some small lib that wasn't recompiled. There are more flags that define what versions to bind to; it's documented on http://msdn.microsoft.com/en-us/library/cc664727%28v=vs.90%29.aspx . As an alternative to the lines above, you can also put #define _BIND_TO_CURRENT_VCLIBS_VERSION 1 which will bind to the latest version of all VC libs (CRT, MFC, ATL, OpenMP). Then, check what the embedded manifest says. Download XM Resource Editor: http://www.wilsonc.demon.co.uk/d10resourceeditor.htm. Open every dll and exe in your solution. Look under 'XP Theme Manifest'. Check that the 'version' attribute on the right-hand side is '9.0.30729.1'. 
If it's '9.0.21022', some static library is pulling in the manifest for the old version. What I found is that in many cases, both versions were included in the manifest. This means that some libraries use the sp1 version and others don't. A great way to debug which libraries don't have the preprocessor directives set: temporarily modify your platform headers so that compilation stops when it tries to embed the old manifest. Open C:\Program Files\Microsoft Visual Studio 9.0\VC\crt\include\crtassem.h. Search for the '21022' string. In that define, put something invalid (change 'define' to 'blehbleh' or so). This way, when you're compiling a project where the _BIND_TO_CURRENT_CRT_VERSION preprocessor flag is not set, your compilation will stop and you'll know that you need to add them or made sure that it's applied everywhere. Also make sure to use Dependency Walker so that you know what dlls are being pulled in. It's easiest to install a fresh Windows XP copy with no updates (only SP2) on a virtual machine. This way you know for sure that there is nothing in the SxS folder that is being used instead of the side-by-side dlls that you supplied. A: I just remembered another trick that I used to find out which static libraries were ill-behaving: 'grep' through the static libraries for the string '21022'. HOWEVER, don't use the 'normal' grep tools like wingrep because they won't show you these strings (they think it's a binary file and look for the raw, non-unicode string). Use the 'strings' utility from the resource kit (now in the Russinovich site I think). That one will grep through binaries ok. So you let this 'strings' go through your whole source tree and you'll see the binary files (dlls and static libraries) that contain references to the wrong manifest (or to the manifest with the wrong version in it). A: Another nice tool for viewing exe and dll manifests is Manifest View, which fittingly enough will not run on a clean install of XP, because it depends on 9.0.21022. A: For your third option, you can probably find the DLLs and manifests for the 9.0.21022 version in the C:\WINDOWS\WinSxS directory on your dev machine. If you can, then you can setup your own redist directory and install those files with your app. Alternatively, you can use the 9.0.30729.1 ones supplied with Visual Studio and forge the manifest you install with your app to report that it supplies the 9.0.21022 DLLs, and not 9.0.30729.1. The runtime linker doesn't seem to mind. See this blog, which has been immensely helpful for solving these problems, for more information. Both workarounds fixed the problems I had with deploying the DLLs as private assemblies with VS2008 Express. Roel's answer is the way to go for your first option ("fix it right"), but if you depend on a library that depends on 9.0.21022 (and your manifest therefore lists both versions), then the third option may be the only way to go if you don't want to run vcredist_x86.exe. A: To understand the problem, I think it is important to realize that there are four version numbers involved: * *(A) The version of the VC header files to which the .exe is compiled. *(B) The version of the manifest file that is embedded in the resources section of that .exe. By default, this manifest file is automatically generated by Visual Studio. *(C) The version of the VC .DLLs (part of the side-by-side assembly) you copy in the same directory as the .exe. *(D) The version of the VC manifest files (part of the side-by-side assembly) you copy in the same directory as the .exe. 
There are two versions of the VC 2008 DLL's in the running: * *v1: 9.0.21022.8 *v2: 9.0.30729.4148 For clarity, I'll use the v1/v2 notation. The following table shows a number of possible situations: Situation | .exe (A) | embedded manifest (B) | VC DLLs (C) | VC manifests (D) ----------------------------------------------------------------------------- 1 | v2 | v1 | v1 | v1 2 | v2 | v1 | v2 | v2 3 | v2 | v1 | v2 | v1 4 | v2 | v2 | v2 | v2 The results of these situations when running the .exe on a clean Vista SP1 installation are: * *Situation 1: a popup is shown, saying: "The procedure entry point XYZXYZ could not be located in the dynamic link library". *Situation 2: nothing seems to happen when running the .exe, but the following event is logged in Windows' "Event Viewer / Application log": Activation context generation failed for "C:\Path\file.exe".Error in manifest or policy file "C:\Path\Microsoft.VC90.CRT.MANIFEST" on line 4. Component identity found in manifest does not match the identity of the component requested. Reference is Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8". Definition is Microsoft *Situation 3: everything seems to work fine. This is remicles2's solution. *Situation 4: this is how it should be done. Regrettably, as Roel indicates, it can be rather hard to implement. Now, my situation (and I think it is the same as crashmstr's) is nr 1. The problem is that Visual Studio for one reason or another generates client code (A) for v2, but for one reason or another, generates a v1 manifest file (B). I have no idea where version (A) can be configured. Note that this whole explanation is still in the context of private assemblies. Update: finally I start to understand what is going on. Apparently, Visual Studio generates client code (A) for v2 by default, contrary to what I've read on some Microsoft blogs. The _BIND_TO_CURRENT_VCLIBS_VERSION flag only selects the version in the generated manifest file (B), but this version will be ignored when running the application. Conclusion An .exe that is compiled by Visual Studio 2008 links to the newest versions of the VC90 DLLs by default. You can use the _BIND_TO_CURRENT_VCLIBS_VERSION flag to control which version of the VC90 libraries will be generated in the manifest file. This indeed avoids situation 2 where you get the error message "manifest does not match the identity of the component requested". It also explains why situation 3 works fine, as even without the _BIND_TO_CURRENT_VCLIBS_VERSION flag the application is linked to the newest versions of the VC DLLs. The situation is even weirder with public side-by-side assemblies, where vcredist was run, putting the VC 9.0 DLLs in the Windows SxS directory. Even if the .exe's manifest file states that the old versions of the DLLs should be used (this is the case when the _BIND_TO_CURRENT_VCLIBS_VERSION flag is not set), Windows ignores this version number by default! Instead, Windows will use a newer version if present on the system, except when an "application configuration file" is used. Am I the only one who thinks this is confusing? So in summary: * *For private assemblies, use the _BIND_TO_CURRENT_VCLIBS_VERSION flag in the .exe's project and all dependent .lib projects. *For public assemblies, this is not required, as Windows will automatically select the correct version of the .DLLs from the SxS directory.
{ "language": "en", "url": "https://stackoverflow.com/questions/59635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Determine Installed Compact Frameworks (and SP) Version What's the best way to determine which version of the .NET Compact Framework (including Service Packs) is installed on a device through a .NET application? A: Based upon Scott's links, the information about the current CF version can be found in the registry of the device at: HKEY_LOCAL_MACHINE\Software\Microsoft\.NETCompactFramework Versions for CF 2.0 are:
Compact Framework 2.0 RTM - 2.0.5238.00
Compact Framework 2.0 SP1 - 2.0.6129.00
Compact Framework 2.0 SP2 - 2.0.7045.00
A: Neil Cowburn maintains a fairly good list of all version numbers on his blog. As of right now the list looks like this:
Version     Release
----------  ------------------
1.0.2268.0  1.0 RTM
1.0.3111.0  1.0 SP1
1.0.3226.0  1.0 SP2 (Recalled)
1.0.3227.0  1.0 SP2 Beta
1.0.3316.0  1.0 SP2 RTM
1.0.4177.0  1.0 SP3 Beta
1.0.4292.0  1.0 SP3 RTM
2.0.4037.0  2.0 May CTP
2.0.4135.0  2.0 Beta 1
2.0.4317.0  2.0 November CTP
2.0.4278.0  2.0 December CTP
2.0.5056.0  2.0 Beta 2
2.0.5238.0  2.0 RTM
2.0.6103.0  2.0 SP1 Beta
2.0.6129.0  2.0 SP1 RTM
2.0.7045.0  2.0 SP2 RTM
3.5.7066.0  3.5 Beta 1
3.5.7121.0  3.5 Beta 2
3.5.7283.0  3.5 RTM
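A small C# sketch of reading that information from a device, on the assumption (per the first answer) that installed CF versions show up under that registry key, and that the desktop-style Microsoft.Win32.Registry API available in CF 2.0 and later is usable. Environment.Version gives the build of the runtime the code is actually running on, which can be matched against the table above.

using System;
using Microsoft.Win32;

static class CompactFrameworkVersion
{
    public static void Report()
    {
        // Build number of the runtime currently executing this code,
        // e.g. 2.0.7045.0 would correspond to CF 2.0 SP2 in the table above.
        Console.WriteLine("Running runtime: " + Environment.Version);

        RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"Software\Microsoft\.NETCompactFramework");

        if (key == null)
        {
            Console.WriteLine("No .NETCompactFramework key found.");
            return;
        }

        try
        {
            // Assumption: each installed version is registered as a value name here.
            foreach (string version in key.GetValueNames())
            {
                Console.WriteLine("Installed: " + version);
            }
        }
        finally
        {
            key.Close();
        }
    }
}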
{ "language": "en", "url": "https://stackoverflow.com/questions/59642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Storing multiple arrays in Python I am writing a program to simulate the actual polling data companies like Gallup or Rasmussen publish daily: www.gallup.com and www.rassmussenreports.com I'm using a brute force method, where the computer generates some random daily polling data and then calculates three-day averages to see if the average of the random data matches the pollsters' numbers. (Most companies' poll numbers are three-day averages.) Currently, it works well for one iteration, but my goal is to have it produce the most common simulation that matches the average polling data. I could then change the code to run anywhere from 1 to 1000 iterations. And this is my problem. At the end of the test I have an array in a single variable that looks something like this: [40.1, 39.4, 56.7, 60.0, 20.0 ..... 19.0] The program currently produces one array for each correct simulation. I can store each array in a single variable, but I then have to have a program that could generate 1 to 1000 variables depending on how many iterations I requested!? How do I avoid this? I know there is an intelligent way of doing this that doesn't require the program to generate variables to store arrays depending on how many simulations I want. Code testing for McCain: import random x = 0 mctest = [] while x < 5: test = round(100*random.random()) mctest.append(test) x = x + 1 mctestavg = (mctest[0] + mctest[1] + mctest[2])/3 #mcavg is real data if mctestavg == mcavg[2]: mcwork = mctest How do I repeat without creating multiple mcwork vars? A: Are you talking about doing this? >>> a = [ ['a', 'b'], ['c', 'd'] ] >>> a[1] ['c', 'd'] >>> a[1][1] 'd' A: Would something like this work? from random import randint mcworks = [] for n in xrange(NUM_ITERATIONS): mctest = [randint(0, 100) for i in xrange(5)] if sum(mctest[:3])/3 == mcavg[2]: mcworks.append(mctest) # mcavg is real data In the end, you are left with a list of valid mctest lists. What I changed: * *Used a list comprehension to build the data instead of a for loop *Used random.randint to get random integers *Used slices and sum to calculate the average of the first three items *(To answer your actual question :-) ) Put the results in a list mcworks, instead of creating a new variable for every iteration A: Lists in Python can contain any type of object -- if I understand the question correctly, will a list of lists do the job? Something like this (assuming you have a function generate_poll_data() which creates your data): data = [] for i in xrange(num_iterations): data.append(generate_poll_data()) Then, data[n] will be the list of data from the (n-1)th run. A: Since you are thinking in variables, you might prefer a dictionary over a list of lists: data = {} data['a'] = [generate_poll_data()] data['b'] = [generate_poll_data()] etc. A: I would strongly consider using NumPy to do this. You get efficient N-dimensional arrays that you can quickly and easily process. A: A neat way to do it is to use a list of lists in combination with Pandas. Then you are able to create a 3-day rolling average. This makes it easy to search through the results by just adding the real ones as another column, and using the loc function to find which ones match.
from random import randint
import pandas as pd

rand_vals = [randint(0, 100) for i in range(5)]
df = pd.DataFrame(data=rand_vals, columns=['generated data'])
df['3 day avg'] = df['generated data'].rolling(3).mean()
df['mcavg'] = mcavg  # the list of real data

# Extract the resulting list of values
res = df.loc[df['3 day avg'] == df['mcavg']]['3 day avg'].values

This is also neat if you intend to use the same random values for different polls/persons; just add another column with their real values and perform the same search for them.
{ "language": "en", "url": "https://stackoverflow.com/questions/59648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Default integer type in ASP.NET from a stored procedure I have a web page that I have hooked up to a stored procedure. In this SQL data source, I have a parameter that I'm passing back to the stored procedure of type int. ASP.NET seems to want to default to int32, but the number won't get higher than 6. Is it ok to override the ASP.NET default and put in 16 or will there be a conflict somewhere down the road? specification: the database field has a length of 4 and precision of 10, if that makes a difference in the answer. A: If you force it to be for example a byte and the number is over 255 you run the risk of a casting error (and an exception will be thrown). However if you know it not going to be higher than 6 it should not be a problem. If it was me, I would just use it as a normal int, I am not sure you save much if anything other than a few bytes by making it a byte. The risk of the exception being thrown is too high and you would lose all benefits by making it smaller. A: Stick with int32. That's what vb's "Integer" and SQL's INT is, anyway. You will not gain any significant performance improvement by using a tinyint/byte or an short/int16 instead of int/int32. In fact, the headaches you might run into in the future caused by all the casting you might have to do for objects that expect int32s will drive you crazy. A: When you say the DB field has a length of 4, that means 4 bytes, which is equivalent to an Int32 (4 bytes = 32 bits). That's why your column is being returned as an int32. There are different integer datatypes in SQL Server -- if you are sure the number won't get higher than 6, you should declare the column in the database as a "tinyint", which uses a single byte and can hold values from 0 to 255. Then the SQL data source should convert it to a "byte" datatype, which will be fine for your purposes. CLR "byte" == SQL "tinyint" (1 byte) CLR "Short" (or int16) == SQL "smallint" (2 bytes) CLR "int32" == SQL "int" EDIT: just because you can do something, doesn't mean you should -- I agree with Michael Haren, the development headache of managing these less common datatypes outweighs the small performance gain you would get, unless you are dealing with very high-performance software (in which case, why would you be using ASP.NET?) A: You're not saving much if anything by using an Int16 on the ASP side. It still has to load it into a 32-bit register eventually. A: FYI, the CLR maps int to Int32 internally anyways. A: Use whatever your SQL Server stored procedure has defined. If it's an int in SQL Server, then use Int32 in .NET. smallint in SQL is int16. Otherwise, SQL Server will just upconvert it automatically, or throw an error if it needs to be downconverted.
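To illustrate the mapping the answers spell out, here is a small C# sketch reading the three SQL Server integer types with a SqlDataReader; the connection string, table, and column names are placeholders for illustration only.

using System;
using System.Data.SqlClient;

class IntegerTypeMapping
{
    static void Main()
    {
        // Placeholder connection string and query.
        using (SqlConnection conn = new SqlConnection("Server=.;Database=Demo;Integrated Security=true"))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT TinyIntCol, SmallIntCol, IntCol FROM SomeTable", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    byte tiny   = reader.GetByte(0);   // SQL tinyint  -> System.Byte
                    short small = reader.GetInt16(1);  // SQL smallint -> System.Int16
                    int normal  = reader.GetInt32(2);  // SQL int      -> System.Int32

                    Console.WriteLine("{0} {1} {2}", tiny, small, normal);
                }
            }
        }
    }
}

If the column stays a 4-byte SQL int, as the question's length-4/precision-10 field suggests, GetInt32 and an Int32 parameter are the natural fit, which matches the answers' recommendation.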
{ "language": "en", "url": "https://stackoverflow.com/questions/59651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Getting at the Listbox's ItemContainer when data binding Is there a way to get at the ItemContaner of a selected item in a listbox? In Silverlight 2.0 Beta 1 I could, but the container is hidden in Beta 2 of Silverlight 2.0. I'm trying to resize the listbox item when it is unselected to a specific size and when selected to a variable size. I also want to get the relative position of the selected item for animations. Growing to a variable size and getting the relative pasition is why i need to get to the listbox item. I should clarify i'm not adding items to the listbox explicitly. I am using data binding in xaml and DataTemplates. What I have trouble accessing is the ItemContainer of the selected item's DataTemplate. A: There is a way to obtain the Panel containing the item's UIElement and the mapping of items to UIElements. You have to inherit from ListBox (this actually works for any ItemsControl) and override PrepareContainerForItemOverride: protected override void PrepareContainerForItemOverride(DependencyObject element, object item) { base.PrepareContainerForItemOverride(element, item); var el = element as FrameworkElement; if (el != null) { // here is the elements's panel: _itemsHost = el.Parent as Panel; // item is original item inserted in Items or ItemsSource // we can save the mapping between items and FrameworElements: _elementMapping[item] = el; } } This is kind of hackish, but it works just fine. A: If you are adding non-UI elements to the listbox (such as strings or non-UI data objects), then this is probably pretty difficult. However if you wrap your items in some sort of FrameworkElement-derived object before adding them to the listbox, you can use TransformToVisual to get the relative size and use Height and Width to set the size of the item. In general you can wrap your objects in a ContentControl like the following. Instead of: _ListBox.Items.Add(obj0); _ListBox.Items.Add(obj1); Do this: _ListBox.Items.Add(new ContentControl { Content = obj0 }); _ListBox.Items.Add(new ContentControl { Content = obj1 }); Now when you get _ListBox.SelectedItem you can cast it to ContentControl and set the size and get the relative position. If you need the original object, simply get the value of the item's Content property. A: It appears that you can use relative binding to get at the Item Container from the ItemTemplate. <TextBlock YourTargetProperty="{Binding RelativeSource={RelativeSource FindAncestor,AncestorType={x:Type ListBoxItem}}, Mode=OneWay, Path=YourSourceProperty}" /> I found this solution here. A: Update for silverlight 5. <ListBox ItemsSource="{Binding Properties}"> <ListBox.ItemTemplate> <DataTemplate> <TextBlock Text="{Binding IsSelected, RelativeSource={RelativeSource AncestorType=ListBoxItem}}" /> </DataTemplate> </ListBox.ItemTemplate> RelativeSource AncestorType is now supported, making this much easier.
{ "language": "en", "url": "https://stackoverflow.com/questions/59653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to setup a Rails integration test for XML methods? Given a controller method like: def show @model = Model.find(params[:id]) respond_to do |format| format.html # show.html.erb format.xml { render :xml => model } end end What's the best way to write an integration test that asserts that the return has the expected XML? A: This is the idiomatic way of testing the xml response from a controller. class ProductsControllerTest < ActionController::TestCase def test_should_get_index_formatted_for_xml @request.env['HTTP_ACCEPT'] = 'application/xml' get :index assert_response :success end end A: The answer from ntalbott shows a get action. The post action is a little trickier; if you want to send the new object as an XML message, and have the XML attributes show up in the params hash in the controller, you have to get the headers right. Here's an example (Rails 2.3.x): class TruckTest < ActionController::IntegrationTest def test_new_truck paint_color = 'blue' fuzzy_dice_count = 2 truck = Truck.new({:paint_color => paint_color, :fuzzy_dice_count => fuzzy_dice_count}) @headers ||= {} @headers['HTTP_ACCEPT'] = @headers['CONTENT_TYPE'] = 'application/xml' post '/trucks.xml', truck.to_xml, @headers #puts @response.body assert_select 'truck>paint_color', paint_color assert_select 'truck>fuzzy_dice_count', fuzzy_dice_count.to_s end end You can see here that the 2nd argument to post doesn't have to be a parameters hash; it can be a string (containing XML), if the headers are right. The 3rd argument, @headers, is the part that took me a lot of research to figure out. (Note also the use of to_s when comparing an integer value in assert_select.) A: A combination of using the format and assert_select in an integration test works great: class ProductsTest < ActionController::IntegrationTest def test_contents_of_xml get '/index/1.xml' assert_select 'product name', /widget/ end end For more details check out assert_select in the Rails docs. A: These 2 answers are great, except that my results include the datetime fields, which are gong to be different in most circumstances, so the assert_equal fails. It appears that I will need to process the include @response.body using an XML parser, and then compare the individual fields, the number of elements, etc. Or is there an easier way? A: Set the request objects accept header: @request.accept = 'text/xml' # or 'application/xml' I forget which Then you can assert the response body is equal to what you were expecting assert_equal '<some>xml</some>', @response.body
{ "language": "en", "url": "https://stackoverflow.com/questions/59655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Why overwrite a file more than once to securely delete all traces of a file? Erasing programs such as Eraser recommend overwriting data maybe 36 times. As I understand it all data is stored on a hard drive as 1s or 0s. If an overwrite of random 1s and 0s is carried out once over the whole file then why isn't that enough to remove all traces of the original file? A: Daniel Feenberg (an economist at the private National Bureau of Economic Research) claims that the chances of overwritten data being recovered from a modern hard drive amount to "urban legend": Can Intelligence Agencies Read Overwritten Data? So theoretically overwriting the file once with zeroes would be sufficent. A: In conventional terms, when a one is written to disk the media records a one, and when a zero is written the media records a zero. However the actual effect is closer to obtaining a 0.95 when a zero is overwritten with a one, and a 1.05 when a one is overwritten with a one. Normal disk circuitry is set up so that both these values are read as ones, but using specialised circuitry it is possible to work out what previous "layers" contained. The recovery of at least one or two layers of overwritten data isn't too hard to perform by reading the signal from the analog head electronics with a high-quality digital sampling oscilloscope, downloading the sampled waveform to a PC, and analysing it in software to recover the previously recorded signal. What the software does is generate an "ideal" read signal and subtract it from what was actually read, leaving as the difference the remnant of the previous signal. Since the analog circuitry in a commercial hard drive is nowhere near the quality of the circuitry in the oscilloscope used to sample the signal, the ability exists to recover a lot of extra information which isn't exploited by the hard drive electronics (although with newer channel coding techniques such as PRML (explained further on) which require extensive amounts of signal processing, the use of simple tools such as an oscilloscope to directly recover the data is no longer possible) http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html A: A hard drive bit which used to be a 0, and is then changed to a '1', has a slightly weaker magnetic field than one which used to be a 1 and was then written to 1 again. With sensitive equipment the previous contents of each bit can be discerned with a reasonable degree of accuracy, by measuring the slight variances in strength. The result won't be exactly correct and there will be errors, but a good portion of the previous contents can be retrieved. By the time you've scribbled over the bits 35 times, it is effectively impossible to discern what used to be there. Edit: A modern analysis shows that a single overwritten bit can be recovered with only 56% accuracy. Trying to recover an entire byte is only accurate 0.97% of the time. So I was just repeating an urban legend. Overwriting multiple times might have been necessary when working with floppy disks or some other medium, but hard disks do not need it. A: Imagine a sector of data on the physical disk. Within this sector is a magnetic pattern (a strip) which encodes the bits of data stored in the sector. This pattern is written by a write head which is more or less stationary while the disk rotates beneath it. 
Now, in order for your hard drive to function properly as a data storage device each time a new magnetic pattern strip is written to a sector it has to reset the magnetic pattern in that sector enough to be readable later. However, it doesn't have to completely erase all evidence of the previous magnetic pattern, it just has to be good enough (and with the amount of error correction used today good enough doesn't have to be all that good). Consider that the write head will not always take the same track as the previous pass over a given sector (it could be skewed a little to the left or the right, it could pass over the sector at a slight angle one way or the other due to vibration, etc.) What you get is a series of layers of magnetic patterns, with the strongest pattern corresponding to the last data write. With the right instrumentation it may be possible to read this layering of patterns with enough detail to be able to determine some of the data in older layers. It helps that the data is digital, because once you have extracted the data for a given layer you can determine exactly the magnetic pattern that would have been used to write it to disk and subtract that from the readings (and then do so on the next layer, and the next). A: The reason why you want this is not harddisks, but SSDs. They remap clusters without telling the OS or filesystem drivers. This is done for wear-leveling purposes. So, the chances are quite high that the 0 bit written goes to a different place than the previous 1. Removing the SSD controller and reading the raw flash chips is well within the reach of even corporate espionage. But with 36 full disk overwrites, the wear leveling will likely have cycled through all spare blocks a few times. A: What we're looking at here is called "data remanence." In fact, most of the technologies that overwrite repeatedly are (harmlessly) doing more than what's actually necessary. There have been attempts to recover data from disks that have had data overwritten and with the exception of a few lab cases, there are really no examples of such a technique being successful. When we talk about recovery methods, primarily you will see magnetic force microscopy as the silver bullet to get around a casual overwrite but even this has no recorded successes and can be quashed in any case by writing a good pattern of binary data across the region on your magnetic media (as opposed to simple 0000000000s). Lastly, the 36 (actually 35) overwrites that you are referring to are recognized as dated and unnecessary today as the technique (known as the Gutmann method) was designed to accommodate the various - and usually unknown to the user - encoding methods used in technologies like RLL and MFM which you're not likely to run into anyhow. Even the US government guidelines state the one overwrite is sufficient to delete data, though for administrative purposes they do not consider this acceptable for "sanitization". The suggested reason for this disparity is that "bad" sectors can be marked bad by the disk hardware and not properly overwritten when the time comes to do the overwrite, therefore leaving the possibility open that visual inspection of the disk will be able to recover these regions. In the end - writing with a 1010101010101010 or fairly random pattern is enough to erase data to the point that known techniques cannot recover it. A: "Data Remanence" There's a pretty good set of references regarding possible attacks and their actual feasibility on Wikipedia. 
There are DoD and NIST standards and recommendations cited there too. Bottom line, it's possible but becoming ever-harder to recover overwritten data from magnetic media. Nonetheless, some (US-government) standards still require at least multiple overwrites. Meanwhile, device internals continue to become more complex, and, even after overwriting, a drive or solid-state device may have copies in unexpected (think about bad block handling or flash wear leveling (see Peter Gutmann). So the truly worried still destroy drives. A: I've always wondered why the possibility that the file was previously stored in a different physical location on the disk isn't considered. For example, if a defrag has just occurred there could easily be a copy of the file that's easily recoverable somewhere else on the disk. A: Here's a Gutmann erasing implementation I put together. It uses the cryptographic random number generator to produce a strong block of random data. public static void DeleteGutmann(string fileName) { var fi = new FileInfo(fileName); if (!fi.Exists) { return; } const int GutmannPasses = 35; var gutmanns = new byte[GutmannPasses][]; for (var i = 0; i < gutmanns.Length; i++) { if ((i == 14) || (i == 19) || (i == 25) || (i == 26) || (i == 27)) { continue; } gutmanns[i] = new byte[fi.Length]; } using (var rnd = new RNGCryptoServiceProvider()) { for (var i = 0L; i < 4; i++) { rnd.GetBytes(gutmanns[i]); rnd.GetBytes(gutmanns[31 + i]); } } for (var i = 0L; i < fi.Length;) { gutmanns[4][i] = 0x55; gutmanns[5][i] = 0xAA; gutmanns[6][i] = 0x92; gutmanns[7][i] = 0x49; gutmanns[8][i] = 0x24; gutmanns[10][i] = 0x11; gutmanns[11][i] = 0x22; gutmanns[12][i] = 0x33; gutmanns[13][i] = 0x44; gutmanns[15][i] = 0x66; gutmanns[16][i] = 0x77; gutmanns[17][i] = 0x88; gutmanns[18][i] = 0x99; gutmanns[20][i] = 0xBB; gutmanns[21][i] = 0xCC; gutmanns[22][i] = 0xDD; gutmanns[23][i] = 0xEE; gutmanns[24][i] = 0xFF; gutmanns[28][i] = 0x6D; gutmanns[29][i] = 0xB6; gutmanns[30][i++] = 0xDB; if (i >= fi.Length) { continue; } gutmanns[4][i] = 0x55; gutmanns[5][i] = 0xAA; gutmanns[6][i] = 0x49; gutmanns[7][i] = 0x24; gutmanns[8][i] = 0x92; gutmanns[10][i] = 0x11; gutmanns[11][i] = 0x22; gutmanns[12][i] = 0x33; gutmanns[13][i] = 0x44; gutmanns[15][i] = 0x66; gutmanns[16][i] = 0x77; gutmanns[17][i] = 0x88; gutmanns[18][i] = 0x99; gutmanns[20][i] = 0xBB; gutmanns[21][i] = 0xCC; gutmanns[22][i] = 0xDD; gutmanns[23][i] = 0xEE; gutmanns[24][i] = 0xFF; gutmanns[28][i] = 0xB6; gutmanns[29][i] = 0xDB; gutmanns[30][i++] = 0x6D; if (i >= fi.Length) { continue; } gutmanns[4][i] = 0x55; gutmanns[5][i] = 0xAA; gutmanns[6][i] = 0x24; gutmanns[7][i] = 0x92; gutmanns[8][i] = 0x49; gutmanns[10][i] = 0x11; gutmanns[11][i] = 0x22; gutmanns[12][i] = 0x33; gutmanns[13][i] = 0x44; gutmanns[15][i] = 0x66; gutmanns[16][i] = 0x77; gutmanns[17][i] = 0x88; gutmanns[18][i] = 0x99; gutmanns[20][i] = 0xBB; gutmanns[21][i] = 0xCC; gutmanns[22][i] = 0xDD; gutmanns[23][i] = 0xEE; gutmanns[24][i] = 0xFF; gutmanns[28][i] = 0xDB; gutmanns[29][i] = 0x6D; gutmanns[30][i++] = 0xB6; } gutmanns[14] = gutmanns[4]; gutmanns[19] = gutmanns[5]; gutmanns[25] = gutmanns[6]; gutmanns[26] = gutmanns[7]; gutmanns[27] = gutmanns[8]; Stream s; try { s = new FileStream( fi.FullName, FileMode.Open, FileAccess.Write, FileShare.None, (int)fi.Length, FileOptions.DeleteOnClose | FileOptions.RandomAccess | FileOptions.WriteThrough); } catch (UnauthorizedAccessException) { return; } catch (IOException) { return; } using (s) { if (!s.CanSeek || !s.CanWrite) { return; } for (var i = 0L; i < 
gutmanns.Length; i++) { s.Seek(0, SeekOrigin.Begin); s.Write(gutmanns[i], 0, gutmanns[i].Length); s.Flush(); } } } A: There are "disk repair" type applications and services that can still read data off a hard drive even after it's been formatted, so simply overwriting with random 1s and 0s one time isn't sufficient if you really need to securely erase something. I would say that for the average user, this is more than sufficient, but if you are in a high-security environment (government, military, etc.) then you need a much higher level of "delete" that can pretty effectively guarantee that no data will be recoverable from the drive. A: The United States has requirements put out regarding the erasure of sensitive information (i.e. Top Secret info) is to destroy the drive. Basically the drives were put into a machine with a huge magnet and would also physically destroy the drive for disposal. This is because there is a possibility of reading information on a drive, even being overwritten many times. A: See this: Guttman's paper A: Just invert the bits so that 1's are written to all 0's and 0's are written to all 1's then zero it all out that should get rid of any variable in the magnetic field and only takes 2 passes.
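Since several answers above argue that a single pass is enough on modern magnetic drives, here is a minimal single-pass counterpart to the Gutmann implementation shown earlier. It is only a sketch: the buffer size is arbitrary, error handling is minimal, and, as the SSD and defragmentation answers point out, overwriting in place guarantees nothing about remapped flash blocks or older copies elsewhere on the disk.

using System;
using System.IO;
using System.Security.Cryptography;

public static class SimpleWipe
{
    // Overwrite the file once with cryptographically random bytes, then delete it.
    public static void OverwriteOnce(string fileName)
    {
        var info = new FileInfo(fileName);
        if (!info.Exists) return;

        using (var rng = new RNGCryptoServiceProvider())
        using (var stream = new FileStream(fileName, FileMode.Open, FileAccess.Write,
                                           FileShare.None, 4096, FileOptions.WriteThrough))
        {
            var buffer = new byte[4096];
            long remaining = info.Length;
            while (remaining > 0)
            {
                rng.GetBytes(buffer);
                int count = (int)Math.Min(buffer.Length, remaining);
                stream.Write(buffer, 0, count);
                remaining -= count;
            }
            stream.Flush();
        }
        File.Delete(fileName);
    }
}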
{ "language": "en", "url": "https://stackoverflow.com/questions/59656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: What are the use cases for selecting CHAR over VARCHAR in SQL? I realize that CHAR is recommended if all my values are fixed-width. But, so what? Why not just pick VARCHAR for all text fields just to be safe. A: There are performance benefits, but here is one that has not been mentioned: row migration. With char, you reserve the entire space in advance.So let's says you have a char(1000), and you store 10 characters, you will use up all 1000 charaters of space. In a varchar2(1000), you will only use 10 characters. The problem comes when you modify the data. Let's say you update the column to now contain 900 characters. It is possible that the space to expand the varchar is not available in the current block. In that case, the DB engine must migrate the row to another block, and make a pointer in the original block to the new row in the new block. To read this data, the DB engine will now have to read 2 blocks. No one can equivocally say that varchar or char are better. There is a space for time tradeoff, and consideration of whether the data will be updated, especially if there is a good chance that it will grow. A: There is a difference between early performance optimization and using a best practice type of rule. If you are creating new tables where you will always have a fixed length field, it makes sense to use CHAR, you should be using it in that case. This isn't early optimization, but rather implementing a rule of thumb (or best practice). i.e. - If you have a 2 letter state field, use CHAR(2). If you have a field with the actual state names, use VARCHAR. A: I would choose varchar unless the column stores fixed value like US state code -- which is always 2 chars long and the list of valid US states code doesn't change often :). In every other case, even like storing hashed password (which is fixed length), I would choose varchar. Why -- char type column is always fulfilled with spaces, which makes for column my_column defined as char(5) with value 'ABC' inside comparation: my_column = 'ABC' -- my_column stores 'ABC ' value which is different then 'ABC' false. This feature could lead to many irritating bugs during development and makes testing harder. A: If you're working with me and you're working with Oracle, I would probably make you use varchar in almost every circumstance. The assumption that char uses less processing power than varchar may be true...for now...but database engines get better over time and this sort of general rule has the making of a future "myth". Another thing: I have never seen a performance problem because someone decided to go with varchar. You will make much better use of your time writing good code (fewer calls to the database) and efficient SQL (how do indexes work, how does the optimizer make decisions, why is exists faster than in usually...). Final thought: I have seen all sorts of problems with use of CHAR, people looking for '' when they should be looking for ' ', or people looking for 'FOO' when they should be looking for 'FOO (bunch of spaces here)', or people not trimming the trailing blanks, or bugs with Powerbuilder adding up to 2000 blanks to the value it returns from an Oracle procedure. A: CHAR takes up less storage space than VARCHAR if all your data values in that field are the same length. Now perhaps in 2009 a 800GB database is the same for all intents and purposes as a 810GB if you converted the VARCHARs to CHARs, but for short strings (1 or 2 characters), CHAR is still a industry "best practice" I would say. 
Now if you look at the wide variety of data types most databases provide even for integers alone (bit, tiny, int, bigint), there ARE reasons to choose one over the other. Simply choosing bigint every time is actually being a bit ignorant of the purposes and uses of the field. If a field simply represents a persons age in years, a bigint is overkill. Now it's not necessarily "wrong", but it's not efficient. But its an interesting argument, and as databases improve over time, it could be argued CHAR vs VARCHAR does get less relevant. A: I would NEVER use chars. I’ve had this debate with many people and they always bring up the tired cliché that char is faster. Well I say, how much faster? What are we talking about here, milliseconds, seconds and if so how many? You’re telling me because someone claims its a few milliseconds faster, we should introduce tons of hard to fix bugs into the system? So here are some issues you will run into: Every field will be padded, so you end up with code forever that has RTRIMS everywhere. This is also a huge disk space waste for the longer fields. Now let’s say you have the quintessential example of a char field of just one character but the field is optional. If somebody passes an empty string to that field it becomes one space. So when another application/process queries it, they get one single space, if they don’t use rtrim. We’ve had xml documents, files and other programs, display just one space, in optional fields and break things. So now you have to ensure that you’re passing nulls and not empty string, to the char field. But that’s NOT the correct use of null. Here is the use of null. Lets say you get a file from a vendor Name|Gender|City Bob||Los Angeles If gender is not specified than you enter Bob, empty string and Los Angeles into the table. Now lets say you get the file and its format changes and gender is no longer included but was in the past. Name|City Bob|Seattle Well now since gender is not included, I would use null. Varchars support this without issues. Char on the other hand is different. You always have to send null. If you ever send empty string, you will end up with a field that has spaces in it. I could go on and on with all the bugs I’ve had to fix from chars and in about 20 years of development. A: I stand by Jim McKeeth's comment. Also, indexing and full table scans are faster if your table has only CHAR columns. Basically the optimizer will be able to predict how big each record is if it only has CHAR columns, while it needs to check the size value of every VARCHAR column. Besides if you update a VARCHAR column to a size larger than its previous content you may force the database to rebuild its indexes (because you forced the database to physically move the record on disk). While with CHAR columns that'll never happen. But you probably won't care about the performance hit unless your table is huge. Remember Djikstra's wise words. Early performance optimization is the root of all evil. A: Many people have pointed out that if you know the exact length of the value using CHAR has some benefits. But while storing US states as CHAR(2) is great today, when you get the message from sales that 'We have just made our first sale to Australia', you are in a world of pain. I always send to overestimate how long I think fields will need to be rather than making an 'exact' guess to cover for future events. VARCHAR will give me more flexibility in this area. A: The general rule is to pick CHAR if all rows will have close to the same length. 
Pick VARCHAR (or NVARCHAR) when the length varies significantly. CHAR may also be a bit faster because all the rows are of the same length. It varies by DB implementation, but generally, VARCHAR (or NVARCHAR) uses one or two more bytes of storage (for length or termination) in addition to the actual data. So (assuming you are using a one-byte character set) storing the word "FooBar" * *CHAR(6) = 6 bytes (no overhead) *VARCHAR(100) = 8 bytes (2 bytes of overhead) *CHAR(10) = 10 bytes (4 bytes of waste) The bottom line is CHAR can be faster and more space-efficient for data of relatively the same length (within two characters length difference). Note: Microsoft SQL has 2 bytes of overhead for a VARCHAR. This may vary from DB to DB, but generally, there is at least 1 byte of overhead needed to indicate length or EOL on a VARCHAR. As was pointed out by Gaven in the comments: Things change when it comes to multi-byte characters sets, and is a is case where VARCHAR becomes a much better choice. A note about the declared length of the VARCHAR: Because it stores the length of the actual content, then you don't waste unused length. So storing 6 characters in VARCHAR(6), VARCHAR(100), or VARCHAR(MAX) uses the same amount of storage. Read more about the differences when using VARCHAR(MAX). You declare a maximum size in VARCHAR to limit how much is stored. In the comments AlwaysLearning pointed out that the Microsoft Transact-SQL docs seem to say the opposite. I would suggest that is an error or at least the docs are unclear. A: In addition to performance benefits, CHAR can be used to indicate that all values should be the same length, e.g., a column for U.S. state abbreviations. A: I think in your case there is probably no reason to not pick Varchar. It gives you flexibility and as has been mentioned by a number of respondants, performance is such now that except in very specific circumstances us meer mortals (as opposed to Google DBA's) will not notice the difference. An interesting thing worth noting when it comes to DB Types is the sqlite (a popular mini database with pretty impressive performance) puts everything into the database as a string and types on the fly. I always use VarChar and usually make it much bigger than I might strickly need. Eg. 50 for Firstname, as you say why not just to be safe. A: It's the classic space versus performance tradeoff. In MS SQL 2005, Varchar (or NVarchar for lanuagues requiring two bytes per character ie Chinese) are variable length. If you add to the row after it has been written to the hard disk it will locate the data in a non-contigious location to the original row and lead to fragmentation of your data files. This will affect performance. So, if space is not an issue then Char are better for performance but if you want to keep the database size down then varchars are better. A: Fragmentation. Char reserves space and VarChar does not. Page split can be required to accommodate update to varchar. A: Char is a little bit faster, so if you have a column that you KNOW will be a certain length, use char. For example, storing (M)ale/(F)emale/(U)nknown for gender, or 2 characters for a US state. A: Does NChar or Char perform better that their var alternatives? Great question. The simple answer is yes in certain situations. Let's see if this can be explained. 
Obviously we all know that if I create a table with a column of varchar(255) (let's call this column myColumn) and insert a million rows but put only a few characters into myColumn for each row, the table will be much smaller (overall number of data pages needed by the storage engine) than if I had created myColumn as char(255). Anytime I do an operation (DML) on that table and request alot of rows, it will be faster when myColumn is varchar because I don't have to move around all those "extra" spaces at the end. Move, as in when SQL Server does internal sorts such as during a distinct or union operation, or if it chooses a merge during it's query plan, etc. Move could also mean the time it takes to get the data from the server to my local pc or to another computer or wherever it is going to be consumed. But there is some overhead in using varchar. SQL Server has to use a two byte indicator (overhead) to, on each row, to know how many bytes that particular row's myColumn has in it. It's not the extra 2 bytes that presents the problem, it's the having to "decode" the length of the data in myColumn on every row. In my experiences it makes the most sense to use char instead of varchar on columns that will be joined to in queries. For example the primary key of a table, or some other column that will be indexed. CustomerNumber on a demographic table, or CodeID on a decode table, or perhaps OrderNumber on an order table. By using char, the query engine can more quickly perform the join because it can do straight pointer arithmetic (deterministically) rather than having to move it's pointers a variable amount of bytes as it reads the pages. I know I might have lost you on that last sentence. Joins in SQL Server are based around the idea of "predicates." A predicate is a condition. For example myColumn = 1, or OrderNumber < 500. So if SQL Server is performing a DML statement, and the predicates, or "keys" being joined on are a fixed length (char), the query engine doesn't have to do as much work to match rows from one table to rows from another table. It won't have to find out how long the data is in the row and then walk down the string to find the end. All that takes time. Now bear in mind this can easily be poorly implemented. I have seen char used for primary key fields in online systems. The width must be kept small i.e. char(15) or something reasonable. And it works best in online systems because you are usually only retrieving or upserting a small number of rows, so having to "rtrim" those trailing spaces you'll get in the result set is a trivial task as opposed to having to join millions of rows from one table to millions of rows on another table. Another reason CHAR makes sense over varchar on online systems is that it reduces page splits. By using char, you are essentially "reserving" (and wasting) that space so if a user comes along later and puts more data into that column SQL has already allocated space for it and in it goes. Another reason to use CHAR is similar to the second reason. If a programmer or user does a "batch" update to millions of rows, adding some sentence to a note field for example, you won't get a call from your DBA in the middle of the night wondering why their drives are full. In other words, it leads to more predictable growth of the size of a database. So those are 3 ways an online (OLTP) system can benefit from char over varchar. 
I hardly ever use char in a warehouse/analysis/OLAP scenario because usually you have SO much data that all those char columns can add up to lots of wasted space. Keep in mind that char can make your database much larger, but most backup tools have data compression, so your backups tend to be about the same size as if you had used varchar. For example LiteSpeed or RedGate SQL Backup. Another use is in views created for exporting data to a fixed width file. Let's say I have to export some data to a flat file to be read by a mainframe. It is fixed width (not delimited). I like to store the data in my "staging" table as varchar (thus consuming less space on my database) and then use a view to CAST everything to its char equivalent, with the length corresponding to the fixed width for that column. For example: create table tblStagingTable ( pkID BIGINT IDENTITY(1,1), CustomerFirstName varchar(30), CustomerLastName varchar(30), CustomerCityStateZip varchar(100), CustomerCurrentBalance money ) insert into tblStagingTable (CustomerFirstName, CustomerLastName, CustomerCityStateZip, CustomerCurrentBalance) values ('Joe', 'Blow', '123 Main St Washington, MD 12345', 123.45) create view vwStagingTable AS SELECT CustomerFirstName = CAST(CustomerFirstName as CHAR(30)), CustomerLastName = CAST(CustomerLastName as CHAR(30)), CustomerCityStateZip = CAST(CustomerCityStateZip as CHAR(100)), CustomerCurrentBalance = CAST(CAST(CustomerCurrentBalance as NUMERIC(9,2)) AS CHAR(10)) FROM tblStagingTable SELECT * from vwStagingTable This is cool because internally my data takes up less space because it's using varchar. But when I use DTS or SSIS or even just a cut and paste from SSMS to Notepad, I can use the view and get the right number of trailing spaces. In DTS we used to have a feature - damn, I forget - I think it was called "suggest columns" or something. In SSIS you can't do that anymore; you have to tediously define the flat file connection manager. But since you have your view set up, SSIS can know the width of each column and it can save a lot of time when building your data flow tasks. So bottom line... use varchar. There are a very small number of reasons to use char, and they are mostly about performance. If you have a system with hundreds of millions of rows you will see a noticeable difference if the predicates are deterministic (char), but for most systems using char is simply wasting space. Hope that helps. Jeff A: There is some small processing overhead in calculating the actual needed size for a column value and allocating the space for a Varchar, so if you are definitely sure how long the value will always be, it is better to use Char and avoid the hit. A: when using varchar values SQL Server needs an additional 2 bytes per row to store some info about that column whereas if you use char it doesn't need that so unless you A: Using CHAR (NCHAR) and VARCHAR (NVARCHAR) brings differences in the ways the database server stores the data. The first one introduces trailing blanks; I have encountered problems when using it with the LIKE operator in SQL Server functions. So I have to make it safe by using VARCHAR (NVARCHAR) all the time. For example, if we have a table TEST(ID INT, Status CHAR(1)), and you write a function to list all the records with some specific value like the following: CREATE FUNCTION List(@Status AS CHAR(1) = '') RETURNS TABLE AS RETURN SELECT * FROM TEST WHERE Status LIKE '%' + @Status + '%' In this function we expect that when we put the default parameter the function will return all the rows, but in fact it does not.
Changing the @Status data type to VARCHAR will fix the issue. A: In some SQL databases, VARCHAR will be padded out to its maximum size in order to optimize the offsets; this is to speed up full table scans and indexes. Because of this, you do not get any space savings by using a VARCHAR(200) compared to a CHAR(200).
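To make the padding difference concrete, here is a small illustrative T-SQL snippet; the table and values are made up for the example, and the exact per-row overhead of varchar is engine-specific, as discussed above.

-- Hypothetical demo table: the same text stored in a CHAR and a VARCHAR column
CREATE TABLE dbo.PaddingDemo
(
    FixedCol    CHAR(10),
    VariableCol VARCHAR(10)
);

INSERT INTO dbo.PaddingDemo (FixedCol, VariableCol) VALUES ('ABC', 'ABC');

-- CHAR is padded to its declared length; VARCHAR stores only what was inserted
SELECT
    DATALENGTH(FixedCol)    AS FixedBytes,     -- 10
    DATALENGTH(VariableCol) AS VariableBytes   -- 3
FROM dbo.PaddingDemo;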
{ "language": "en", "url": "https://stackoverflow.com/questions/59667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "283" }
Q: How to get rid of `deprecated conversion from string constant to ‘char*’` warnings in GCC I'm working on an exceedingly large codebase, and recently upgraded to GCC 4.3, which now triggers this warning: warning: deprecated conversion from string constant to ‘char*’ Obviously, the correct way to fix this is to find every declaration like char *s = "constant string"; or function call like: void foo(char *s); foo("constant string"); and make them const char pointers. However, that would mean touching 564 files, minimum, which is not a task I wish to perform at this point in time. The problem right now is that I'm running with -Werror, so I need some way to stifle these warnings. How can I do that? A: I had a similar problem, and I solved it like this: #include <string.h> extern void foo(char* m); int main() { // warning: deprecated conversion from string constant to ‘char*’ //foo("Hello"); // no more warning char msg[] = "Hello"; foo(msg); } I did not have access to foo in order to adapt it to accept const char*, which would be a better solution because foo did not change m. A: Test string is const string. So you can solve like this: char str[] = "Test string"; or: const char* str = "Test string"; printf(str); A: Check out GCC's Diagnostic Pragma support, and the list of -W warning options. For GCC, you can use #pragma warning directives like explained here. A: Any functions into which you pass string literals "I am a string literal" should use char const * as the type instead of char*. If you're going to fix something, fix it right. Explanation: You can not use string literals to initialise strings that will be modified, because they are of type const char*. Casting away the constness to later modify them is undefined behaviour, so you have to copy your const char* strings char by char into dynamically allocated char* strings in order to modify them. Example: #include <iostream> void print(char* ch); void print(const char* ch) { std::cout<<ch; } int main() { print("Hello"); return 0; } A: If it's an active code base, you might still want to upgrade the code base. Of course, performing the changes manually isn't feasible but I believe that this problem could be solved once and for all by one single sed command. I haven't tried it, though, so take the following with a grain of salt. find . -exec sed -E -i .backup -n \ -e 's/char\s*\*\s*(\w+)\s*= "/char const* \1 = "/g' {} \; This might not find all places (even not considering function calls) but it would alleviate the problem and make it possible to perform the few remaining changes manually. A: Just use type casting: (char*) "test" A: Here is how to do it inline in a file, so you don't have to modify your Makefile. // gets rid of annoying "deprecated conversion from string constant blah blah" warning #pragma GCC diagnostic ignored "-Wwrite-strings" You can then later... #pragma GCC diagnostic pop A: I can't use the compiler switch. So I have turned this: char *setf = tigetstr("setf"); to this: char *setf = tigetstr((char *)"setf"); A: Replace char *str = "hello"; with char *str = (char*)"hello"; or if you are calling in function: foo("hello"); replace this with foo((char*) "hello"); A: I believe passing -Wno-write-strings to GCC will suppress this warning. A: Do typecasting from constant string to char pointer i.e. 
char *s = (char *) "constant string"; A: Instead of: void foo(char *s); foo("constant string"); This works: void foo(const char s[]); foo("constant string"); A: In C++, use the const_cast as like below char* str = const_cast<char*>("Test string"); A: In C++, replace: char *str = "hello"; with: std::string str ("hello"); And if you want to compare it: str.compare("HALLO"); A: I don't understand how to apply your solution :( – kalmanIsAGameChanger Working with an Arduino sketch, I had a function causing my warnings. Original function: char StrContains(char *str, char *sfind) To stop the warnings, I added the const in front of the char *str and the char *sfind. Modified: char StrContains(const char *str, const char *sfind). All warnings went away. A: Use the -Wno-deprecated option to ignore deprecated warning messages. A: You can also create a writable string from a string constant by calling strdup(). For instance, this code generates a warning: putenv("DEBUG=1"); However, the following code does not (it makes a copy of the string on the heap before passing it to putenv): putenv(strdup("DEBUG=1")); In this case (and perhaps in most others) turning off the warning is a bad idea -- it's there for a reason. The other alternative (making all strings writable by default) is potentially inefficient. Listen to what the compiler is telling you! A: Just use the -w option for g++. Example: g++ -w -o simple.o simple.cpp -lpthread Remember this doesn't avoid deprecation. Rather, it prevents showing warning message on the terminal. Now if you really want to avoid deprecation, use the const keyword like this: const char* s = "constant string"; A: Picking from here and there, here comes this solution. This compiles clean. const char * timeServer[] = { "pool.ntp.org" }; // 0 - Worldwide #define WHICH_NTP 0 // Which NTP server name to use. ... sendNTPpacket(const_cast<char*>(timeServer[WHICH_NTP])); // send an NTP packet to a server ... void sendNTPpacket(char* address) { code } I know there's only one item in the timeServer array. But there could be more. The rest were commented out for now to save memory. A: While passing string constants to functions, write it as: void setpart(const char name[]); setpart("Hello"); Instead of const char name[], you could also write const char \*name. It worked for me to remove this error: [Warning] deprecated conversion from string constant to 'char*' [-Wwrite-strings] A: Re shindow's "answer": PyTypeObject PyDict_Type= { ... PyTypeObject PyDict_Type= { PyObject_HEAD_INIT(&PyType_Type), "dict", dict_print, 0, 0 }; Watch the name field. Using gcc, it compiles without warning, but in g++ it will. I don't know why. In gcc (Compiling C), -Wno-write-strings is active by default. In g++ (Compiling C++), -Wwrite-strings is active by default This is why there is a different behaviour. For us, using macros of Boost_python generates such warnings. So we use -Wno-write-strings when compiling C++ since we always use -Werror. A: The problem right now is that I'm running with -Werror This is your real problem, IMO. You can try some automated ways of moving from (char *) to (const char *) but I would put money on them not just working. You will have to have a human involved for at least some of the work. For the short term, just ignore the warning (but IMO leave it on, or it'll never get fixed) and just remove the -Werror. 
A: See this situation: typedef struct tagPyTypeObject { PyObject_HEAD; char *name; PrintFun print; AddFun add; HashFun hash; } PyTypeObject; PyTypeObject PyDict_Type= { PyObject_HEAD_INIT(&PyType_Type), "dict", dict_print, 0, 0 }; Watch the name field. Using gcc, it compiles without warning, but in g++ it will. I don't know why.
{ "language": "en", "url": "https://stackoverflow.com/questions/59670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "425" }
Q: WSDualHttpBinding for duplex callbacks Would using WSDualHttpBinding for duplex callbacks work in real-world scenarios? Say, I have a .NET application that uses a random port, would the service be able to resolve the client's base addresses and port for callbacks? A: A complete answer to your question depends on the "real-world scenario" being either an Intranet or an Internet scenario. Although WSDualHttpBinding works in both scenarios there are specifics to be aware of: Intranet WSDualHttpBinding will work with your .NET application using a preconfigured custom port in an Intranet scenario and "Yes" the service will be able to resolve the client's base addresses and port for callbacks: exactly how is explained below. The reason it's explained below is that WSDualHttpBinding is primarily designed to be used over the Internet. Duplex callbacks in an Intranet scenario, when you can use WCF on both client and server, are best achieved by using NetTcpBinding or NetNamedPipeBinding. These bindings use TCP and IPC respectively as transport (rather than HTTP) and a custom binary encoding, which is why WCF is required on both sides. For callbacks to the client the same channel used to connect to the Service via the Binding is re-used, without requiring a new port to be opened. Internet In an Internet scenario valid HTTP requests and responses only travel in one direction; HTTP is designed as a one-way protocol. When using the WSDualHttpBinding, WCF therefore creates a separate HTTP channel for callbacks. In answer to your second question: the destination address for this callback to the client is composed of the client machine hostname and port 80 by default. If the client is a development machine, for example, and has IIS installed, port 80 will be exclusively reserved in some scenarios, which will cause conflicts with your prototype application. This is what this blog post presents a solution for and what the ClientBaseAddress property is designed to help with. Regardless of which port you go with - the default or a custom one - you must ensure all firewalls and routers on both sides are configured correctly to allow both the outgoing channel and the separate callback channel to be established. A .NET application can also denote a Silverlight application. Because a Silverlight application running in a browser cannot accept new incoming HTTP connections, WSDualHttpBinding with its separate back channel will not work. Hence PollingDuplexHttpBinding was introduced in Silverlight 2; it can be thought of as a clever 'trick' to get around the fact that HTTP is unidirectional by keeping the request channel open for a long time (long polling) and using it as a back channel for calls back to the client. This has a number of implications on both the client and server side, particularly relevant to scaling; for more detail please see this post from my blog. With an idea of your particular "real-world scenario" and your use-cases, hopefully this will help you work out the correct binding to use for duplex callbacks. A: If it's an application behind a firewall, theoretically yes. It depends on what you mean by "real world"; if by that you mean "high performance", perhaps NetTcpBinding is a better approach.
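To make the pieces above concrete, here is a minimal duplex sketch in C#. The contract, service address, and port are invented for illustration and are not from the original question; the detail worth noting is ClientBaseAddress, which pins the separate callback channel to a known port so firewalls can be configured for it.

using System;
using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IProgressCallback))]
public interface IJobService
{
    [OperationContract(IsOneWay = true)]
    void StartJob(string jobName);
}

public interface IProgressCallback
{
    [OperationContract(IsOneWay = true)]
    void ReportProgress(int percent);
}

public class JobClient
{
    public static IJobService Connect(IProgressCallback callback)
    {
        var binding = new WSDualHttpBinding();

        // The callback channel is a second, inbound HTTP connection;
        // pin it to a known port instead of relying on the default (port 80).
        binding.ClientBaseAddress = new Uri("http://localhost:8001/client/");

        var factory = new DuplexChannelFactory<IJobService>(
            new InstanceContext(callback),
            binding,
            new EndpointAddress("http://server/JobService"));

        return factory.CreateChannel();
    }
}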
{ "language": "en", "url": "https://stackoverflow.com/questions/59677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: In a LotusScript Agent how do you get the name of the current server? In a LotusScript Agent that is being run via WebQueryOpen, how do you get the name of the current server? A: Set s = New NotesSession Set db = s.CurrentDatabase If db.Server <> "" Then Set sName = New NotesName(db.Server) Else Set sName = New NotesName(s.Username) End If A: The sample code already provided is good but I also do it this way and just get the hierarchical name of the server: Set s = New NotesSession Set db = s.CurrentDatabase If db.Server <> "" Then Set sName = New NotesName(db.Server) Else Set sName = New NotesName(s.Username) End If ServerName = sName.Abbreviated A: Gary's answer is the most appropriate. You can actually identify the server name using hierarchical syntax to. dim session as new notesSession dim strCurrServer as string dim nmServer as notesName strCurrServer = session.currentagent.servername ' this bit is optional set nmServer = new notesName(strCurrServer) ' then you can do stuff like this print nmServer.Abbreviated That would be the fastest (dirtiest?) way to get the server name from the webquery open agent. The notesName class is a handy object for dealing with hierarchical names link text A: 'initialize event of a WebQueryOpen agent Dim s As New notessession Dim servername As String servername = s.UserName
{ "language": "en", "url": "https://stackoverflow.com/questions/59680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: .NET visual components I really like DevX components, but they are pretty expensive. Does anyone know of free equivalents, or a website where I can look for free visual components for .NET? A: Check out the free Krypton Toolkit from Component Factory. A: I also found that DevExpress offers some free components. A: I second that. Krypton all the way. Some of their controls actually outperform the corresponding Telerik controls, too.
{ "language": "en", "url": "https://stackoverflow.com/questions/59684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Haskell list difference operator in F# Is there an equivalent operator to Haskell's list difference operator \\ in F#? A: Nope... Just write it and make it an infix operator --using the set of special characters. Backslash (\) is not in the list below, so it will not work as an infix operator. See the manual: infix-op := or || & && <OP >OP $OP = |OP &OP ^OP :: -OP +OP *OP /OP %OP **OP prefix-op := !OP ?OP ~OP -OP +OP % %% & && A: Was bounced, yet I believe it is worth to write here the implementation of ( /-/ ) (the F# version of Haskell's \\): let flip f x y = f y x let rec delete x = function | [] -> [] | h :: t when x = h -> t | h :: t -> h :: delete x t let inline ( /-/ ) xs ys = List.fold (flip delete) xs ys This will operate as Haskell's \\, so that (xs @ ys) /-/ xs = ys. For example: (7 :: [1 .. 5] @ [5 .. 11]) /-/ [4 .. 7] evaluates into [1; 2; 3; 5; 7; 8; 9; 10; 11]. A: Filter items from the set of the subtrahend: let ( /-/ ) xs ys = let ySet = set ys let notInYSet x = not <| Set.contains x ySet List.filter notInYSet xs A: I'm using this: let (/-/) l1 l2 = List.filter (fun i -> not <| List.exists ((=) i) l2) l1 If anyone sees a problem, let me know. Is for lists, so there could be duplicates in the result. For example: [1;1;2] /-/ [2;3] would be eq to [1;1] A: Assuming you really want conventional set difference rather than the weird ordered-but-unsorted multiset subtraction that Haskell apparently provides, just convert the lists to sets using the built-in set function and then use the built-in - operator to compute the set difference: set xs - set ys For example: > set [1..5] - set [2..4];; val it : Set<int> = seq [1; 5]
{ "language": "en", "url": "https://stackoverflow.com/questions/59711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can I check for IsPostBack in JavaScript? I need to run a JavaScript function onLoad(), but only do it if the page loaded the first time (i.e. is not the result of a postback). Basically, I need to check for IsPostBack in JavaScript. Thank you. A: Server-side, write: if(IsPostBack) { // NOTE: the following uses an overload of RegisterClientScriptBlock() // that will surround our string with the needed script tags ClientScript.RegisterClientScriptBlock(GetType(), "IsPostBack", "var isPostBack = true;", true); } Then, in your script which runs for the onLoad, check for the existence of that variable: if(isPostBack) { // do your thing } You don't really need to set the variable otherwise, like Jonathan's solution. The client-side if statement will work fine because the "isPostBack" variable will be undefined, which evaluates as false in that if statement. A: The solution didn't work for me, I had to adapt it: protected void Page_Load(object sender, EventArgs e) { string script; if (IsPostBack) { script = "var isPostBack = true;"; } else { script = "var isPostBack = false;"; } Page.ClientScript.RegisterStartupScript(GetType(), "IsPostBack", script, true); } Hope this helps. A: There is an even easier way that does not involve writing anything in the code behind: Just add this line to your javascript: if(<%=(Not Page.IsPostBack).ToString().ToLower()%>){//Your JavaScript goodies here} or if(<%=(Page.IsPostBack).ToString().ToLower()%>){//Your JavaScript goodies here} A: You could put a hidden input on the page, and after the page loads, give it a value. Then you can check that field, if it was in the post data, it's a postback, otherwise it is not. There were two solutions that used server side code (ASP.NET specific) posted as responses. I think it is worth pointing out that this solution is technology agnostic since it uses client side features only, which are available in all major browsers. A: Try this, in this JS we can check if it is post back or not and accordingly do operations in the respective loops. window.onload = isPostBack; function isPostBack() { if (!document.getElementById('clientSideIsPostBack')) { return false; } if (document.getElementById('clientSideIsPostBack').value == 'Y') { ***// DO ALL POST BACK RELATED WORK HERE*** return true; } else { ***// DO ALL INITIAL LOAD RELATED WORK HERE*** return false; } } A: hi try the following ... function pageLoad (sender, args) { alert (args._isPartialLoad); } the result is a Boolean A: You can create a hidden textbox with a value of 0. Put the onLoad() code in a if block that checks to make sure the hidden text box value is 0. if it is execute the code and set the textbox value to 1. A: Here is one way (put this in Page_Load): if (this.IsPostBack) { Page.ClientScript.RegisterStartupScript(this.GetType(),"PostbackKey","<script type='text/javascript'>var isPostBack = true;</script>"); } Then just check that variable in the JS. A: Lots of options here. For a pure JS solution, have your page submit to itself, but with additional URL parameter (mypage.html?postback=true) - you can then get the page url with window.location.href, and parse that using a split or regex to look for your variable. The much easier one, assuming you sending back to some sort of scripting language to proces the page (php/perl/asp/cf et. 
al), is to have them echo a line of JavaScript in the page setting a variable: <html> <?php if ($_POST['myVar']) { //postback echo '<script>var postingBack = true;</script>'; //Do other processing } else { echo '<script>var postingBack = false;</script>'; } ?> <script> function myLoader() { if (postingBack == false) { //Do stuff } } </script> <body onLoad="myLoader();"> ... A: Create a global variable and assign it the value: <script> var isPostBack = <%=Convert.ToString(Page.IsPostBack).ToLower()%>; </script> Then you can reference it from elsewhere.
{ "language": "en", "url": "https://stackoverflow.com/questions/59719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Finding network alias in .net Is there a way in .net 2.0 to discover the network alias for the machine that my code is running on? Specifically, if my workgroup sees my machine as //jekkedev01, how do I retrieve that name programmatically? A: Since you can have multiple network interfaces, each of which can have multiple IPs, and any single IP can have multiple names that can resolve to it, there may be more than one. If you want to know all the names by which your DNS server knows your machine, you can loop through them all like this: public ArrayList GetAllDnsNames() { ArrayList names = new ArrayList(); IPHostEntry host; //check each Network Interface foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces()) { //check each IP address claimed by this Network Interface foreach (UnicastIPAddressInformation i in nic.GetIPProperties().UnicastAddresses) { //get the DNS host entry for this IP address host = System.Net.Dns.GetHostEntry(i.Address.ToString()); if (!names.Contains(host.HostName)) { names.Add(host.HostName); } //check each alias, adding each to the list foreach (string s in host.Aliases) { if (!names.Contains(s)) { names.Add(s); } } } } //add "simple" host name - above loop returns fully qualified domain names (FQDNs) //but this method returns just the machine name without domain information names.Add(System.Net.Dns.GetHostName()); return names; } A: If you need the computer description, it is stored in registry: * *key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters *value name: srvcomment *data type: REG_SZ (string) AFAIK it has nothing to do with any domain server, or with the network the PC is attached to. For anything related to the network, I am using the following: * *NETBIOS name: System.Environment.MachineName *host name: System.Net.Dns.GetHostName() *DNS name: System.Net.Dns.GetHostEntry("LocalHost").HostName If the PC has multiple NETBIOS names, I do not know any other method but to group the names based on the IP address they resolve to, and even this is not reliable if the PC has multiple network interfaces. A: I'm not a .NET programmer, but the System.Net.DNS.GetHostEntry method looks like what you need. It returns an instance of the IPHostEntry class which contains the Aliases property. A: Use the System.Environment class. It has a property for retrieving the machine name, which is retrieved from the NetBios. Unless I am misunderstanding your question. A: or My.Computer.Name
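To go with the registry answer above, here is a short C# sketch that reads the computer description. This is only an illustration - the value may be empty or absent, and reading HKLM may require sufficient privileges.

using Microsoft.Win32;

public static class ComputerInfo
{
    // Reads the "computer description" shown in Windows, or null if it is not set.
    public static string GetDescription()
    {
        const string keyPath =
            @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters";

        return Registry.GetValue(keyPath, "srvcomment", null) as string;
    }
}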
{ "language": "en", "url": "https://stackoverflow.com/questions/59726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Coding Dojo with IE and SSL My application is using Dojo 1.1.1 on an SSL-only website. It is currently taking advantage of dijit.ProgressBar and a dijit.form.DateTextBox. Everything works fabulous in Firefox 2 & 3, but as soon as I try the same scripts in IE7 the results are an annoying Security Information dialog: This page contains both secure and non-secure items. Do you want to display the non-secure items? I have scrutinized the page for any non-HTTPS reference to no avail. It appears to be something specific to dojo.js. There use to be an iframe glitch where the src was set to nothing, but this appears to be fixed now (on review of the source). Anyone else having this problem? What are the best-practices for getting Dojo to play well with IE on an SSL-only web server? A: After reviewing the JavaScript sourcecode for Dijit, I thought it was likely the error results from an "insecure" refrence to a dynamically generated IFRAME. Note there are two versions of the script file, the uncompressed represents the original source (dijit.js.uncompressed.js) and the standard (dijit.js) has been compressed for optimal transfer time. Since the uncompressed version is the most readable, I will describe my solution based on that. At line #1023, an IFRAME is rendered in JavaScript: if(dojo.isIE){ var html="<iframe src='javascript:\"\"'" + " style='position: absolute; left: 0px; top: 0px;" + "z-index: -1; filter:Alpha(Opacity=\"0\");'>"; iframe = dojo.doc.createElement(html); }else{... What's the problem? IE doesn't know if the src for the IFRAME is "secure" - so I replaced it with the following: if(dojo.isIE){ var html="<iframe src='javascript:void(0);'" + " style='position: absolute; left: 0px; top: 0px;" + "z-index: -1; filter:Alpha(Opacity=\"0\");'>"; iframe = dojo.doc.createElement(html); }else{... This is the most common problem with JavaScript toolkits and SSL in IE. Since IFRAME's are used as shims due to poor overlay support for DIV's, this problem is extremely prevalent. My first 5-10 page reloads are fine, but then the security error starts popping up again. How is this possible? The same page is "secure" for 5 reloads and then it is selected by IE as "insecure" when loaded the 6th time. As it turns out, there is also a background image being set in the onload event for dijit.wai (line #1325). This reads something like this; div.style.cssText = 'border: 1px solid;' + 'border-color:red green;' + 'position: absolute;' + 'height: 5px;' + 'top: -999px;' + 'background-image: url("' + dojo.moduleUrl("dojo", "resources/blank.gif") + '");'; This won't work because the background-image tag doesn't include HTTPs. Despite the fact that the location is relative, IE7 doesn't know if it's secure so the warning is posed. In this particular instance, this CSS is used to test for Accessibility (A11y) in Dojo. Since this is not something my application will support and since there are other general buggy issues with this method, I opted to remove everything in the onload() for dijit.wai. All is good! No sporadic security problems with the page loads. A: If your page is loading files from a non-https URL Firefox should tell you the same thing. Instead of an error the lock symbol at the bottom (in the status bar) should be crossed out. Are you sure that is not the case? If you see the symbol, click on it and check which files are "unsecure". A: If you're using CDN you can include all modules by HTTPS as seen here. 
<script type="text/javascript"> djConfig = { modulePaths: { "dojo": "https://ajax.googleapis.com/ajax/libs/dojo/1.3.2/dojo", "dijit": "https://ajax.googleapis.com/ajax/libs/dojo/1.3.2/dijit", "dojox": "https://ajax.googleapis.com/ajax/libs/dojo/1.3.2/dojox" } }; </script> <script src="https://ajax.googleapis.com/ajax/libs/dojo/1.3.2/dojo/dojo.xd.js" type="text/javascript"></script> You can test with various versions if you want. Currently the most recent is 1.6.1
{ "language": "en", "url": "https://stackoverflow.com/questions/59734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Recover corrupt zip or gzip files? The most common method for corrupting compressed files is to inadvertently do an ASCII-mode FTP transfer, which causes a many-to-one trashing of CR and/or LF characters. Obviously, there is information loss, and the best way to fix this problem is to transfer again, in FTP binary mode. However, if the original is lost, and it's important, how recoverable is the data? [Actually, I already know what I think is the best answer (it's very difficult but sometimes possible - I'll post more later), and the common non-answers (lots of off-the-shelf programs for repairing CRCs without repairing data), but I thought it would be interesting to try out this question during the stackoverflow beta period, and see if anyone else has gone down the successful-recovery path or discovered tools I don't know about.] A: From Bukys Software Approximately 1 in 256 bytes is known to be corrupted, and the corruption is known to occur only in bytes with the value '\012'. So the byte error rate is 1/256 (0.39% of input), and 2/256 bytes (0.78% of input) are suspect. But since only three bits per smashed byte are affected, the bit error rate is only 3/(256*8): 0.15% is bad, 0.29% is suspect. ... An error in the compressed input disrupts the decompression process for all subsequent bytes...The fact that the decompressed output is recognizably bad so quickly is cause for hope -- a search for the correct answer can identify wrong answers quickly. Ultimately, several techniques were combined to successfully extract reasonable data from these files: * *Domain-specific parsing of fields and quoted strings *Machine learning from previous data with low probability of damage *Tolerance for file damage due to other causes (e.g. disk full while logging) *Lookahead for guiding the search along the highest-probability paths These techniques identify 75% of the necessary repairs with certainty, and the remainder are explored highest-probability-first, so that plausible reconstructions are identified immediately. A: You could try writing a little script to replace all of the CRs with CRLFs (assuming the direction of trashing was CRLF to CR), swapping them randomly per block until you had the correct crc. Assuming that the data wasn't particularly large, I guess that might not use all of your CPU until the heat death of the universe to complete. As there is definite information loss, I don't know that there is a better way. Loss in the CR to CRLF direction might be slightly easier to roll back.
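To sketch the brute-force idea from the last answer: gzip stores a CRC-32 and length of the original data, so each candidate repair can be validated simply by attempting to decompress it. The Python below is purely illustrative - it checks a single candidate and shows one repair step (re-inserting an LF after a bare CR); enumerating every combination of damaged line endings is still exponential, which is the heat-death-of-the-universe caveat above.

import gzip
import io
import zlib

def candidate_is_valid(data):
    """Return True if `data` decompresses cleanly as a gzip stream.

    GzipFile verifies the stored CRC-32 and length on read, so a
    successful read means the candidate repair is self-consistent.
    """
    try:
        with gzip.GzipFile(fileobj=io.BytesIO(data)) as fh:
            fh.read()
        return True
    except (OSError, EOFError, zlib.error):
        return False

def restore_lf_after_cr(damaged, cr_index):
    # Re-insert the LF that an ASCII-mode transfer collapsed, after the CR at cr_index.
    return damaged[:cr_index + 1] + b"\n" + damaged[cr_index + 1:]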
{ "language": "en", "url": "https://stackoverflow.com/questions/59735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: So, if CAPTCHA's on the way out, what comes next? With text-recognition improving and CAPTCHA-breakers using Mechanical Turks to break otherwise unbreakable keys, what's the next technology to keep scripts from spam-botting a site that relies on user input? A: I like the concept of an 'Invisible Captcha'. Phil Haack details one implementation here. This banks on the fact that bots, spiders, and crawlers don't implement javascript engines. This too could change in the near future. A: For now, reputation systems are harder to beat. The community sites of the near future will need to rely on its higher-ranking members to remove the spam. The trend for spam is to become continually more indistinguishable from legitimate content, and for each new generation of mechanical filters to die of innefectiveness like overused antibiotics. Even reputation systems will become useless as the spammers start maintaining sock-puppet farms to create their own high-ranking members, and when the community fights back the spammers will feed the churn of sock-puppets as if it was just another cost of doing business. If you're going to build a site that takes user content, you'll either need to subscribe to the treadmill of neverending CAPTCHA-successors, or find a way to remove the incentive to spam your site in the first place. A: Image recognition rather than text recognition. A: I am a fan of limiting logins by using a credit card or cell phone SMS (like Craigslist and Gmail). These methods don't cost much (<$1), but can be highly effective in keeping spam accounts under control. However, this is tricky on a site like SO because one of the founding goals is to have minimum friction and allow anonymous users to contribute. I guess that's where the throttling and voting comes into play. A: Robots are quite hard to defeat. On one website I was involved with, we didn't even use Captcha - just a field labelled "Leave this field blank". Robots always failed that really simple test. The bigger problem is mass-human solving. There are lots of implementations whereby users solve screen-scraped captchas in return for something, like videos or images (you know what I mean). This means that there's a real human solving the captcha, so emotive, facial and more complex patterns are meaningless. Multi-step processes will discourage this behaviour, but at the cost of making things harder for genuine visitors, which is sad when we're all trying to design websites that are more usable. A: The bar will keep being raised with problems that computers are bad at and humans are good at. Something like recognising emotions in a human face is something humans are particularly good at. Another option could be along the lines of differentiating between disgusting or nice. It's totally subjective, but humans tend to hate rotten food, open wounds, poo, etc. A: Negative turing test. Have used this for over a year on WordPress, IP.Board and MediaWiki sites, and have absolutely zero spam. The only catch: you have to think of a question/answer combination that's neither common (otherwise, bots will adapt) nor too domain-specific (otherwise, potential users might not know the answer). A: There's a new tool in town - Captcha 2.0, which was developed by an Israeli online security start-up and is specifically designed to detect and fail captcha farms. 
You can check it out and try it for free at http://www.siteblackbox.com/captchaService.php Raz A: Typically, for a site with resources of any value to protect, you need a 3-pronged approach: * *Throttle responses from authenticated users only, disallow anonymous posts. *Minimize (not prevent) the few trash posts from authenticated users - e.g. reputation-based. *Use server-side heuristic logic to identify spam-like behavior, or better non-human-like behavior. Of course, a human moderator can also help, but then you have other problems - namely, flooding (or even drowning) the moderator, and some sites prefer the openness... A: Captcha based on Cryptocurrency - http://speedcoin.co/info/captcha/Speedcoin_Captcha.html A: The most fundamental tool to keep people from spambotting a user input site is the "nofollow" tag on links. Most comment-spammers are interested in Google juice rather than actually having their stuff seen, so nofollow removes the incentive.
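To make the hidden-field honeypot and the question/answer "negative Turing test" from the answers above concrete, here is a small Python sketch (illustration only, not from the original answers). The form is assumed to arrive as a plain dict of submitted fields, and the field names and the sample question are invented.
def looks_like_bot(form, known_answers):
    # Honeypot: a field that humans never see (hidden via CSS) must come back empty.
    if form.get("leave_this_blank", "").strip():
        return True
    # Negative Turing test: a simple site-specific question a human can answer.
    answer = form.get("challenge_answer", "").strip().lower()
    expected = known_answers.get(form.get("challenge_question", ""), set())
    return answer not in expected

# Example usage with one hand-written question/answer pair.
answers = {"What colour is a stop sign?": {"red"}}
submission = {"challenge_question": "What colour is a stop sign?",
              "challenge_answer": "Red",
              "leave_this_blank": ""}
print(looks_like_bot(submission, answers))  # False -> treat as a human
As the answers above point out, this only filters dumb bots; paid human solvers and adaptive scripts will get past it, which is why reputation systems and throttling still matter.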
{ "language": "en", "url": "https://stackoverflow.com/questions/59736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Number of possible combinations How many possible combinations of the variables a,b,c,d,e are possible if I know that: a+b+c+d+e = 500 and that they are all integers and >= 0, so I know they are finite. A: The answer to your question is 2656615626. Here's the code that generates the answer: public static long getNumCombinations( int summands, int sum ) { if ( summands <= 1 ) return 1; long combos = 0; for ( int a = 0 ; a <= sum ; a++ ) combos += getNumCombinations( summands-1, sum-a ); return combos; } In your case, summands is 5 and sum is 500. Note that this code is slow. If you need speed, cache the results from summand,sum pairs. I'm assuming you want numbers >=0. If you want >0, replace the loop initialization with a = 1 and the loop condition with a < sum. I'm also assuming you want permutations (e.g. 1+2+3+4+5 plus 2+1+3+4+5 etc). You could change the for-loop if you wanted a >= b >= c >= d >= e. A: I solved this problem for my dad a couple months ago...extend for your use. These tend to be one time problems so I didn't go for the most reusable... a+b+c+d = sum i = number of combinations for (a=0;a<=sum;a++) { for (b = 0; b <= (sum - a); b++) { for (c = 0; c <= (sum - a - b); c++) { //d = sum - a - b - c; i++ } } } A: This would actually be a good question to ask on an interview as it is simple enough that you could write up on a white board, but complex enough that it might trip someone up if they don't think carefully enough about it. Also, you can also for two different answers which cause the implementation to be quite different. Order Matters If the order matters then any solution needs to allow for zero to appear for any of the variables; thus, the most straight forward solution would be as follows: public class Combos { public static void main() { long counter = 0; for (int a = 0; a <= 500; a++) { for (int b = 0; b <= (500 - a); b++) { for (int c = 0; c <= (500 - a - b); c++) { for (int d = 0; d <= (500 - a - b - c); d++) { counter++; } } } } System.out.println(counter); } } Which returns 2656615626. Order Does Not Matter If the order does not matter then the solution is not that much harder as you just need to make sure that zero isn't possible unless sum has already been found. public class Combos { public static void main() { long counter = 0; for (int a = 1; a <= 500; a++) { for (int b = (a != 500) ? 1 : 0; b <= (500 - a); b++) { for (int c = (a + b != 500) ? 1 : 0; c <= (500 - a - b); c++) { for (int d = (a + b + c != 500) ? 1 : 0; d <= (500 - a - b - c); d++) { counter++; } } } } System.out.println(counter); } } Which returns 2573155876. A: @Torlack, @Jason Cohen: Recursion is a bad idea here, because there are "overlapping subproblems." I.e., If you choose a as 1 and b as 2, then you have 3 variables left that should add up to 497; you arrive at the same subproblem by choosing a as 2 and b as 1. (The number of such coincidences explodes as the numbers grow.) The traditional way to attack such a problem is dynamic programming: build a table bottom-up of the solutions to the sub-problems (starting with "how many combinations of 1 variable add up to 0?") then building up through iteration (the solution to "how many combinations of n variables add up to k?" is the sum of the solutions to "how many combinations of n-1 variables add up to j?" with 0 <= j <= k). 
public static long getCombos( int n, int sum ) { // tab[i][j] is how many combinations of (i+1) vars add up to j long[][] tab = new long[n][sum+1]; // # of combos of 1 var for any sum is 1 for( int j=0; j < tab[0].length; ++j ) { tab[0][j] = 1; } for( int i=1; i < tab.length; ++i ) { for( int j=0; j < tab[i].length; ++j ) { // # combos of (i+1) vars adding up to j is the sum of the # // of combos of i vars adding up to k, for all 0 <= k <= j // (choosing i vars forces the choice of the (i+1)st). tab[i][j] = 0; for( int k=0; k <= j; ++k ) { tab[i][j] += tab[i-1][k]; } } } return tab[n-1][sum]; } $ time java Combos 2656615626 real 0m0.151s user 0m0.120s sys 0m0.012s A: One way of looking at the problem is as follows: First, a can be any value from 0 to 500. Then it follows that b+c+d+e = 500-a. This reduces the problem by one variable. Recurse until done. For example, if a is 500, then b+c+d+e=0, which means that for the case of a = 500, there is only one combination of values for b, c, d and e. If a is 300, then b+c+d+e=200, which is in fact the same problem as the original problem, just reduced by one variable. Note: As Chris points out, this is a horrible way of actually trying to solve the problem. A: If they are real numbers then infinite ... otherwise it is a bit trickier. (OK, for any computer representation of a real number there would be a finite count ... but it would be big!) A: It has a general formula: if a + b + c + d = N, then the number of non-negative integral solutions will be C(N + number_of_variables - 1, N). A: @Chris Conway's answer is correct. I have tested with simple code that is suitable for smaller sums. long counter = 0; int sum=25; for (int a = 0; a <= sum; a++) { for (int b = 0; b <= sum ; b++) { for (int c = 0; c <= sum; c++) { for (int d = 0; d <= sum; d++) { for (int e = 0; e <= sum; e++) { if ((a+b+c+d+e)==sum) counter=counter+1L; } } } } } System.out.println("counter e "+counter); A: The answer in math is 504!/(500! * 4!). Formally, for x1+x2+...+xk=n, the number of combinations of nonnegative numbers x1,...,xk is the binomial coefficient: a (k-1)-combination out of a set containing (n+k-1) elements. The intuition is to choose (k-1) points from (n+k-1) points and use the number of points between two chosen points to represent a number in x1,...,xk. Sorry about the poor math editing for my first time answering Stack Overflow. A: Including negatives? Infinite. Including only positives? In this case they wouldn't be called "integers", but "naturals", instead. In this case... I can't really solve this, I wish I could, but my math is too rusty. There is probably some crazy integral way to solve this. I can give some pointers for the math-skilled around. With x being the end result, the range of a would be from 0 to x, the range of b would be from 0 to (x - a), the range of c would be from 0 to (x - a - b), and so forth until e. The answer is the sum of all those possibilities. I am trying to find some more direct formula on Google, but I am really low on my Google-Fu today...
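As a quick cross-check of the closed form quoted above (the "stars and bars" binomial coefficient) - this snippet is not part of the original answers - the count can be evaluated directly in Python and agrees with the brute-force loops:
from math import comb  # Python 3.8+

def compositions(total, parts):
    """Number of ordered non-negative integer solutions of x1 + ... + x_parts = total."""
    return comb(total + parts - 1, parts - 1)

print(compositions(500, 5))  # 2656615626, the value the looping answers report
print(compositions(25, 5))   # 23751, what the five nested loops above count for sum=25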
{ "language": "en", "url": "https://stackoverflow.com/questions/59743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Disable Specific Keys in IE 6 I need to disable specific keys (Ctrl and Backspace) in Internet Explorer 6. Is there a registry hack to do this. It has to be IE6. Thanks. Long Edit: @apandit: Whoops. I need to more specific about the backspace thing. When I say disable backspace, I mean disable the ability for Backspace to mimic the Back browser button. In IE, pressing Backspace when the focus is not in a text entry field is equivalent to pressing Back (browsing to the previous page). As for the Ctrl key. There are some pages which have links which create new IE windows. I have the popup blocker turned on, which block this. But, Ctrl clicking result in the new window being launched. This is for a kiosk application, which is currently a web based application. Clients do not have the funds at this time to make their site kiosk friendly. Things like URL filtering and disabling the URL entry field is already done. Thanks. A: For what purpose do you need this? Because disabling the backspace would be hell for typing urls or emails, etc. We could recommend other workarounds if we knew the problem better. EDIT 1: This website seems to have some information as to how it's done. I can't verify it currently, but I'll look into it: http://www.ozzu.com/programming-forum/disable-key-and-back-t44867.html Edit 2: This website has some key codes: http://www.advscheduler.com/docs/manual/type_sendkeys.html It seems BACKSPACE is 08. EDIT 3: Found some more code for blocking, check this out: <script type="text/javascript">var sType = "keypress";</script> <!--[if IE]> <script type="text/javascript">sType = "keydown";</script> <![endif]--> <script type="text/javascript"> fIntercept = function(e) { // alert(e.keyCode); e = e || event.e; if (e.keyCode == 116) { // When F5 is pressed fCancel(e); } else if (e.ctrlKey && (e.keyCode == 0 || e.keyCode == 82)) { // When ctrl is pressed with R fCancel(e); } }; fCancel = function(e) { if (e.preventDefault) { e.stopPropagation(); e.preventDefault(); } else { e.keyCode = 0; e.returnValue = false; e.cancelBubble = true; } return false; }; fAddEvent = function(obj, type, fn) { if (obj.addEventListener) { obj.addEventListener(type, fn, false); } else { obj['e'+type+fn] = fn; obj[type+fn] = function() { obj['e'+type+fn](window.event); } obj.attachEvent('on'+type, obj[type+fn]); } }; fAddEvent(document, sType, fIntercept); </script> Ok, now you should have all you need to do it. To disable backspace, the keycode is 08. You can probably just use the code I posted with slight modifications only... :\ Try it out and see if it's what you needed. (I hope you know how to use Javascript.) A: You can't do it from a web page. One of the main purposes of a web browser is to protect users from the internet. They define a very specific set of things that web sites can do, and disabling buttons isn't in the list. On the other hand, if you're a network admin and just want to mess with your users, you might be able to do it via some desktop software. But I wouldn't hold my breath. A: I'm using this jQuery solution (tested on ie6 and firefox 3.6): $(document).keydown(function(e) { var tag = e.target.tagName; var ro = e.target.readOnly; var type = e.target.type; var tags = { INPUT : '', TEXTAREA : '' }; if (e.keyCode == 8) {// backspace if (!(tag in tags && !ro && /text/.test(type))) { e.stopPropagation(); e.preventDefault(); } } }); hope it helps someone
{ "language": "en", "url": "https://stackoverflow.com/questions/59761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you get JavaScript/jQuery Intellisense Working in Visual Studio 2008? I thought jQuery Intellisense was supposed to be improved with SP1. I even downloaded an annotated version of jQuery 1.2.6, but intellisense will not work in a separate jscript file. I have the jQuery library referenced first on my web page in the <head> tag. Am I doing anything wrong? A: At the top of your external JavaScript file, add the following: /// <reference path="jQuery.js"/> Make sure the path is correct, relative to the file's position in the folder structure, etc. Also, any references need to be at the top of the file, before any other text, including comments - literally, the very first thing in the file. Hopefully future version of Visual Studio will work regardless of where it is in the file, or maybe they will do something altogether different... Once you have done that and saved the file, hit Ctrl + Shift + J to force Visual Studio to update Intellisense. A: You'll want to look at this link: http://blogs.ipona.com/james/archive/2008/02/15/JQuery-IntelliSense-in-Visual-Studio-2008.aspx UPDATE: There is a new HotFix for Visual Studio 2008 and a new jQuery Intellisense Documentation file that brings full jQuery Intellisense to VS'08. Below are links to get these two: http://blogs.msdn.com/webdevtools/archive/2008/11/07/hotfix-to-enable-vsdoc-js-intellisense-doc-files-is-now-available.aspx http://blogs.msdn.com/webdevtools/archive/2008/10/28/rich-intellisense-for-jquery.aspx A: For inline JavaScript, use: /// <reference path="~\js\jquery-vsdoc.js"/> Note the back slashes. This will not work: /// <reference path="~/js/jquery-vsdoc.js"/> A: You shouldn't need to actually reference the "-vsdoc" version. If you put the jquery-1.2.6-vsdoc.js in the same directory as jquery-1.2.6.js then Visual Studio will know to covert a jquery-1.2.6.js reference to jquery-1.2.6-vsdoc.js. I think that will actually work for any file. Hmmm... that gives a good workaround for another question on this site... Edit: This feature only works with VS2008 Service Pack 1. A: If you are including the annotated jQuery file in your source solely for intellisense, I recommend leveraging preprocessor directives to remove it from your view when you compile. Ala: <% #if (false) %> <!-- This block is here for jquery intellisense only. It will be removed by the compiler! --> <script type="text/javascript" src="Scripts/jquery-1.3.2-vsdoc.js"></script> <% #endif %> Then later in your code you can really reference jQuery. This is handy when using the Google AJAX Libraries API, because you get all the benefits Google provides you, plus intellisense. Here is a sample of using the Libraries API: <script type="text/javascript" src="http://www.google.com/jsapi"></script> <script type="text/javascript"> google.load("jquery", "1.3.2", { uncompressed: false }); </script> A: There is an officially supported jQuery documentation JavaScript file for Visual Studio 2008. This file is only an interim fix until Microsoft releases a hotfix that will more adequately address the issue. Embedded in ASPX: <% if (false) { %> <script src="jquery-1.2.6-vsdoc.js" type="text/javascript"></script> <% } %> Embedded in JavaScript: /// <reference path="jquery-1.2.6-vsdoc.js" /> Pick it up here: jquery-1.2.6-vsdoc.js References: * *Rich Intellisense for jQuery *Scott Hanselman - ASP.NET and jQuery A: Make sure you're not using a minimized jQuery file. Use Ctrl + Shift + J to make it work after adding JavaScript files to the project. 
A: jQuery Intellisense in Visual Studio 2008 A: If you want to pick up the Intellisense file from the Microsoft CDN you can use: /// <reference path="http://ajax.microsoft.com/ajax/jQuery/jquery-1.4.1-vsdoc.js" />
{ "language": "en", "url": "https://stackoverflow.com/questions/59766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "93" }
Q: Debugging with FF3 in VS2008 I am using Firefox 3 to debug my ASP.NET applications in Visual Studio 2008. How can I configure either FF3 or VS2008 so that when I 'x' out of Firefox I don't have to hit the stop debugging button in Visual Studio? (The behavior you get with IE) A: My solution to this has been to manually attach the debugger to the relevant browser and the aspnet_wp process. When I'm finished, I simply detach all. A: Extending upon Raithlin's suggestion, Ctrl+Alt+P is a useful shortcut to bring up the Attach to Process window. A: I have the same thing. I assume you're working with Cassini (the integrated web server). I've yet to find an answer to that (I just go back to VS and press Shift+F5 to stop the debugger), but I can tell you that if you check the "Edit and Continue" box in the project's properties (web tab), your web server will stop and restart whenever you run your application. It doesn't solve the whole of the problem, but it suffices for me.
{ "language": "en", "url": "https://stackoverflow.com/questions/59768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I get a custom application name and starting window name in Visual C# 2008 using WPF? I'm using Microsoft Visual C# 2008 and am creating WPF applications. If you create a new solution and pick the WPF application template it lets you provide a single string to name the solution. It then automatically turns that string into a base project name and a namespace, using underscores instead of spaces. It also generates a class that inherits from the application class named App and a starting window with a Grid control in it named Window1. I want to customize pretty much everything. What's the simplest method of renaming App, Window1, and the starting namespace which won't corrupt the solution? A: Follow these steps: * *Rename the application and window .xaml files in the Solution Explorer. *Edit the application's .xaml (App.xaml originally) so that the StartupUri points to the new name of the starting window; the line to change looks like this: StartupUri="Window1.xaml" *Edit the original window's .cs code-behind so that Window1 becomes the new window's name. *Use the rename drop-down that appears after the new window name to apply the changed name everywhere else. *Edit the title of the window.
{ "language": "en", "url": "https://stackoverflow.com/questions/59786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you generate and analyze a thread dump from a running JBoss instance? A: There is a JBoss-specific method that is slightly more user-friendly: http://community.jboss.org/wiki/GenerateAThreadDumpWithTheJMXConsole This is especially useful when you don't have direct access to the host machine (which "kill" would require). A: http://java.sun.com/developer/technicalArticles/Programming/Stacktrace/ ... "On UNIX platforms you can send a signal to a program by using the kill command. This is the quit signal, which is handled by the JVM. For example, on Solaris you can use the command kill -QUIT process_id, where process_id is the process number of your Java program. Alternatively you can enter the key sequence <ctrl>\ in the window where the Java program was started. Sending this signal instructs a signal handler in the JVM to recursively print out all the information on the threads and monitors inside the JVM." ... "Determining the Thread States You will see many different threads in many different states in a snapshot from a JVM stack trace. The key used is: R Running or runnable thread S Suspended thread CW Thread waiting on a condition variable MW Thread waiting on a monitor lock MS Thread suspended waiting on a monitor lock" A: The stacktrace app found here is also useful, especially on Windows machines when the Java app is not started from the command line. A: Thread.getAllStackTraces() (since Java 1.5) A: Two options: OPTION 1 Generate a thread dump using the JMX Console In order to generate a thread dump: * *Open the JMX Console (for example: http://localhost:8080) *Navigate to the jboss.system:type=ServerInfo mbean (hint: you can probably just CTRL-F and enter type=ServerInfo in the dialog box) *Click on the link for the Server Info mbean. *Navigate to the bottom where it says listThreadDump *Click it and get your thread dump Notes: If you are using Internet Explorer you should use File > Save As to save the output instead of copying the data to a text editor. For some reason when you copy the text from Internet Explorer the line breaks are not copied and all of the output ends up on a single line. OPTION 2 Generate a thread dump using Twiddle Alternatively you can use twiddle to execute the listThreadDump() method and pipe the returned HTML directly to a file. Use this command line: <JBOSS_HOME>/bin/twiddle invoke "jboss.system:type=ServerInfo" listThreadDump > threads.html A: Sometimes JBoss locks up so much that even the JMX console doesn't respond. In such a case, use kill -3 on Linux and SendSignal on Windows. A: The https://community.jboss.org/wiki/ThreadDumpJSP page features a standalone, self-contained threaddump.war that can be used without JMX.
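Once a dump has been saved to a file - for example the threads.html produced by the twiddle command above, or kill -3 output captured from the console log - a few lines of Python can summarize it (this helper is not from the original answers). It assumes the HotSpot-style "java.lang.Thread.State:" lines; older JVMs that use the R/S/CW/MW/MS key quoted above would need a different pattern.
import re
import sys
from collections import Counter

STATE_RE = re.compile(r"java\.lang\.Thread\.State:\s+(\S+)")

def summarize(dump_text):
    """Tally thread states in a saved thread dump."""
    return Counter(STATE_RE.findall(dump_text))

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as handle:
        counts = summarize(handle.read())
    for state, count in counts.most_common():
        print(f"{count:5d}  {state}")  # many BLOCKED threads usually points at lock contention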
{ "language": "en", "url": "https://stackoverflow.com/questions/59787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Best way to transfer an xml to SQL Server? I have been listening to the podcast blog for a while; I hope I don't break this. The question is this: I have to insert an XML document into a database. This will be for already defined tables and fields. So what is the best way to accomplish this? So far I am leaning toward a programmatic approach. I have been seeing various options; one is Data Transfer Objects (DTO), and in SQL Server there is sp_xml_preparedocument, which can be used to transfer the XML into an object and work with it through code. I am using C# and SQL Server 2005. The fields are not XML fields, they are the usual SQL datatypes. A: In an attempt to try and help, we may need some clarification. Maybe by restating the problem you can let us know if this is what you're asking: How can one import existing XML into a SQL 2005 database, without relying on the built-in xml type? A fairly straightforward solution that you already mentioned is sp_xml_preparedocument, combined with openxml. Hopefully the following example illustrates the correct usage. For a more complete example check out the MSDN docs on Using OPENXML. declare @XmlDocumentHandle int declare @XmlDocument nvarchar(1000) set @XmlDocument = N'<ROOT> <Customer> <FirstName>Will</FirstName> <LastName>Smith</LastName> </Customer> </ROOT>' -- Create temp table to insert data into create table #Customer ( FirstName varchar(20), LastName varchar(20) ) -- Create an internal representation of the XML document. exec sp_xml_preparedocument @XmlDocumentHandle output, @XmlDocument -- Insert using openxml allows us to read the structure insert into #Customer select FirstName = XmlFirstName, LastName = XmlLastName from openxml ( @XmlDocumentHandle, '/ROOT/Customer',2 ) with ( XmlFirstName varchar(20) 'FirstName', XmlLastName varchar(20) 'LastName' ) where ( XmlFirstName = 'Will' and XmlLastName = 'Smith' ) -- Cleanup xml document exec sp_xml_removedocument @XmlDocumentHandle -- Show the data select * from #Customer -- Drop tmp table drop table #Customer If you have an xml file and are using C#, then defining a stored procedure that does something like the above and then passing the entire xml file contents to the stored procedure as a string should give you a fairly straightforward way of importing xml into your existing table(s). A: If your XML conforms to a particular XSD schema, you can look into using the "xsd.exe" command line tool to generate C# object classes that you can bind the XML to, and then form your insert statements using the properties of those objects: MSDN XSD Doc A: Peruse this document and it will give you the options: MSDN: XML Options in Microsoft SQL Server 2005 A: You may want to use XSLT to transform your XML into SQL statements... i.e. <xml type="user"> <data>1</data> <data>2</data> </xml> Then the XSLT would look like <xsl:template match="xml"> INSERT INTO <xsl:value-of select="@type" /> (data1, data2) VALUES ( '<xsl:value-of select="data[1]" />', '<xsl:value-of select="data[2]" />'); </xsl:template> The match statement most likely won't be the root node, but hopefully you get the idea. You may also need to wrap the non-xsl:value-of parts in xsl:text to prevent extra characters from being dumped into the query. And you'd have to make sure the output of the XSLT was text. That said, you could get a list of SQL statements that you could run through the DB, or you could use XSLT to output a T-SQL statement that you could load as a stored procedure.
{ "language": "en", "url": "https://stackoverflow.com/questions/59790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Good tips for a Technical presentation I am planning to give a Technical presentation for a product we are building. Intended audience is Technical developers. So, most of the time, I will be debugging trough the code in Visual Studio, performance analysis, some architecture review etc. I have read couple of blogs on font sizes to use, templates to use on Visual Studio, presentation tools, among other very useful tips. What I am looking specifically for is how to keep the session interesting without making it a dry code walkthrough? How to avoid making people fall asleep? Would be great to hear some stories.. Update1: Nice youtube clip on zoomit. Glue Audience To Your Presentation With Zoomit. Update2: New post from Scott Hanselman after his PDC talk - Tips for Preparing for a Technical Presentation A: FYI, that Hanselman article has an update (your link is from 2003). A: Use stories. Even with code examples, have a backstory: here's why someone is doing this. To increase audience participation, ask for examples of X where X is something you know you can demo, then phrase the walk-through in those terms. Or maybe you have war stories about how it was different or how it normally takes longer or whatever. I find people identify with such things, then as you give your examples they're mentally tracking it back to their own experience. A: I recommend Scott Hanselman's post (previously mentioned). I've written up a post with some tips, mostly for selfish reasons - I review it every time before I give a technical presentation: Tips for a Technical Presentation If you're using a console prompt, make sure the font is readable and that your paths are preset when possible. Take 15 minutes to install and learn to use ZoomIt, so your audience can clearly see what you're showing off. If you have to ask if they can see something, you've already failed. Probably most important is to have separate Visual Studio settings pre-configured with big, readable fonts. A: One of the best pieces of advice I ever got for doing demos is to just plain record them in advance and play back the video, narrating live. Then the unexpected stuff happens in private and you get as many stabs at it as you need. You still usually need some environment to use as a reference for questions, but for the presentation bit, recording it in advance (and rehearsing your narration over the video) pretty much guarantees you can be at the top of your game. I also like to put small jokes into the slides and that recorded video that make it seem like the person who made the slides is commenting on the live proceedings or that someone else is actually running the slides. Often, I make absolutely no reference at all to the joke in the slide. For instance, in my most recent demo presentation, I had a slide with the text "ASP.NET MVC" centered that I was talking over about how I was using the framework. In a smaller font, I had the text "Catchy name, huh?". When I did that demo live, that slide got a chuckle. It's not stand-up worthy by any stretch of the imagination, but we're often presenting some pretty dry stuff and every little bit helps. Similarly, I've included slides that are just plain snarky comments from the offscreen guy about what I'm planning to say. So, I'll say, "The codebase for this project needed a little help", while the slide behind me said "It was a pile of spaghetti with 3 meatballs, actually" and a plate of spaghetti as the slide background. 
Again, with no comment from me and just moving on to the next slide as though I didn't even see it actually made it funnier. That can also be a help if you don't have the best comedic timing by taking the pressure off while still adding some levity. Anyway, what it really comes down to is that I've been doing most of my demo/presentation work just like I would if it was a screencast and then substituting the live version of me (pausing the video as appropriate if things go off the rails) for the audio when I give it in front of an audience. Of course, you can then easily make the real presentation available afterward for those who want it. For the slides, I generally go out of my way to not say the exact words on the screen more often than not. A: If you are showing code that was prepared for you then make sure you can get it to work. I know this is an obvious one but I was just at a conference where 4 out of 5 speakers had code issues. Telling me it is 'cool' or even 'really cool' when it doesn't work is a tough sell. A: You should read Mark Jason Dominus excellent presentaton on public speaking: Conference Presentation Judo A: The #1 rule for me is: Don't try to show too much. It's easy to live with a chunk of code for a couple of weeks and think, "Damn, when I show 'em this they are gonna freak out!" Even during your private rehearsals you feel good about things. But once in front of an audience, the complexity of your code is multiplied by the square of the number of audience members. (It becomes exponentially harder to explain code for each audience member added!) What seemed so simple and direct privately quickly turns into a giant bowl of spaghetti that under pressure even you don't understand. Don't try to show production code (well factored and well partitioned), make simple inline examples that convey your core message. My rule #1 could be construed, by the cynical, as don't overestimate you audience. As an optimist, I see it as don't overestimate your ability to explain your code! rp A: Since it sounds like you are doing a live presentation, where you will be working with real systems and not just charts (PPT, Impress, whatever) make sure it is all working just before you start. It never fails, if I don't try it just before I start talking, it doesn't work how I expected it to. Especially with demos. (I'm doing one on Tuesday so I can relate.) The other thing that helps is simply to practice, practice, practice. Especially if you can do it in the exact environment you will be presenting in. That way you get a feel for where you need to be so as not to block the view for your listeners as well as any other technical gotchas there might be with regards to the room setup or systems. A: Put interesting comments in the code. // This better not fail during my next presentation, stupid @#$@#%$ code. Don't talk about them, let them be found by the audience. -Adam A: This is something that was explained to me, and I think it is very useful. You may want to consider not going to slide heavy at the beginning. You want to show your listeners something (obviously probably not the code) up front that will keep them on the edge of their seats wanting to learn about how to do what you just showed them. A: I've recently started to use Mind Mapping tools for presentations and found that it goes over very well. http://en.wikipedia.org/wiki/Mind_map Basically, I find people just zone out the second you start to go into details with a presentation. 
Conveying the information with a mind map (at least in my experience), provides a much easier way for the information to be conveyed and tied together. The key is presenting the information in stages (ie, your high-level ideas first, then in more detail, one at a time). The mind-mapping tools basically let you expand your map, as the audience watches and your present more and more detailed information. Doing it this way lets your audience gradually absorb the data in smaller stages, which tends to aid retention. Check out FreeMind for a free tool to play with. Mind Manager is a paid product, but is much more polished and fluent. A: Keep your "visual representation" simple and standard. If you're on Vista hide your desktop icons and use one of the default wallpapers. Keep your Visual Studio settings (especially toolbars) as standard and "out of the box" as possible. The more customizations you show in your environment the more likely people are going to focus on those rather than your content. Keep the content on your slides as consisce as possible. Remember, you're speaking to (and in the best scenario, with) your audience so the slides should serve as discussion points. If you want to include more details, put them in the slide notes. This is especially good if you make the slide decks available afterwards. If someone asks you a question and you don't know the answer, don't be afraid to say you don't know. It's always better than trying to guess at what you think the answer should be. Also, if you are using Vista be sure to put it in "presentation mode". PowerPoint also has a similar mode, so be sure to use it as well - you have the slide show on one monitor (the projector) and a smaller view of the slide, plus notes and a timer on your laptop monitor. A: Have you heard of Pecha-Kucha? The idea behind Pecha Kucha is to keep presentations concise, the interest level up and to have many presenters sharing their ideas within the course of one night. Therefore the 20x20 Pecha Kucha format was created: each presenter is allowed a slideshow of 20 images, each shown for 20 seconds. This results in a total presentation time of 6 minutes 40 seconds on a stage before the next presenter is up Now, i am not sure if that short duration could be ok for a product demonstration. But you can try to get some nice ideas from the concept, such as to be concise and keep to the point, effective time, space management etc.. A: Besides some software like Mind Manager to show your architecture, you make find a screen recorder as a presentation tool to illustrate your technical task. DemoCreator would be something nice to make video of your onscreen activity. And you can add more callout to make the process easier to understand. A: If you use slides at all, follow Guy Kawasaki's 10/20/30 rule: * *No more than 10 slides *No more than 20 minutes spent on slides *No less than 30 point type on slides -Adam
{ "language": "en", "url": "https://stackoverflow.com/questions/59793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: GWT context.xml in shell mode I'm trying to get the GWTShell mode to load my context.xml file in which my database is described. The only usable info can be found here, but this doesn't seem to work for the context.xml part. A: I'm using Eclipse with Cypal Studio (previously called Googlipse). If there is any other better plugin for Eclipse please recommend it. As the Shell mode uses a Tomcat instance, which is the same target server we are using in the final deployment, it should be possible to achieve (or fake) a similar behaviour. A: As of version 1.4, I have been running all my server side code, in my container of choice (Glassfish) and hooking up the GWTShell to that. Are you using Netbeans, Eclipse or something else? The Netbeans plugin gwt4nb does this for you out of the box, you just have to start your web project in debug mode. I'm sure the GWT plugin for Eclipse does the same thing. I realise this doesn't directly answer your question -> but my question is, is there a reason you're trying to get GWT to pick up your database settings and not just running your project as normal instead. I find this much better and robust way of running the GWTShell. Edit: Sorry I don't really use Eclipse, so I can't help you with plugins for it. I find Netbeans far superior for J2EE/web type projects. It's a bit slower, but far more functional. The plugin for that is called 'GWT4NB', it's free and it will set up your ant script in such a way that you just have to right-click on your web project and choose debug. I can understand if you don't want to switch IDEs though.
{ "language": "en", "url": "https://stackoverflow.com/questions/59806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: MS Access ADP Autonumber I am getting the following error in an MS Access ADP when trying to add a record on a form linked to a MS SQL Server 2000 table: Run-time error '31004': The value of an (AutoNumber) field cannot be retrived prior to being saved. Please save the record that contains the (AutoNumber) field prior to performing this action. Note: "retrieved" is actually spelled wrong in the error. Does anyone know what this means? I've done a web search and was only able to find the answer at a certain site that only experts have access to. A: First of all, if you are going to look at experts-exchange - do it in Firefox; you'll see the unblocked answers at the bottom of the page. Second, do you have a subform on that form that's using the autonumber/key field on the master form? Do you require the data that's on that subform to be saved (i.e., having its own key) before the main form is saved? You could be into a deadlock of A and B requiring each other to be saved first. Other than that, you must somehow be accessing that autonumber field when you are saving it. The best I can suggest is to step through the code line by line. A: Are you trying to assign the value of an Identity field to a variable or something else before you have saved the record? For whatever reason, your app is trying to read the value of the identity field before the record has been saved, and it is the save that generates that identity value. In other words, no value exists for the AutoNumber field until the row is saved. I think we'd need to see more code or know more about the steps that lead up to this error to resolve it in more detail. A: You should have added some lines of code to show us how you're managing your data and what you are doing exactly. But I am suspecting an issue related to a recordset update. Can you identify when the autonumber value is created? Is it available in a control on a form? Can you add a control to display this value to check how it is generated when adding a new record? Is the underlying recordset properly updated? Can you add something like me.recordset.update on some form events? I would try the OnCurrent one ...
{ "language": "en", "url": "https://stackoverflow.com/questions/59809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: MapPoint 2009 Load Performance I'm having some problems integrating MS MapPoint 2009 into my WinForms .Net 2.0 application in C#. I've added the ActiveX MapPoint control onto a form and have no problems getting it to display a maps and locations; my concern is the time it takes to load a map once it is created. The tests on my development machine have shown the average load time to be between 3 and 5 seconds, during which the application is totally locked. While this isn't totally unacceptable, it's an awfully long time to lose control of the application. Also, because the GUI thread is locked, I cannot show a loading dialog or something to mask the load time. The line that hangs is this: (where axMappointControl1 is the MapPoint control) axMappointControl1.NewMap(MapPoint.GeoMapRegion.geoMapNorthAmerica); I've tried executing the NewMap method on another thread but the GUI thread still ends up being blocked. My questions are: * *What can I do to speed up MapPoint when it loads? *Is there any way to load MapPoint so that it won't block the GUI thread? Any help is greatly appreciated. A: According to these threads at mapforums.com the slowness of ActiveX startup is well known and unavoidable (though the question of threading to help with GUI responsiveness is still open. One thing suggested was to abandon the ActiveX version in favor of the MapPoint.Application object instead. Hope that helps. A: Yes the Application version runs on its own thread - so this should be a quicker alternative - easier to do your own stuff whilst it is starting up. However, MapPoint 2010 tends to take a few seconds to start up when started by a user. I would create a temporary GUI thread and use this to display a splash screen during start up and/or do any thread-safe initialisation that you need to do. All calls to a MapPoint instance (or ActiveX control) must be from the same thread that create the MapPoint control or application object.
{ "language": "en", "url": "https://stackoverflow.com/questions/59816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I create a custom type in PowerShell for my scripts to use? I would like to be able to define and use a custom type in some of my PowerShell scripts. For example, let's pretend I had a need for an object that had the following structure: Contact { string First string Last string Phone } How would I go about creating this so that I could use it in function like the following: function PrintContact { param( [Contact]$contact ) "Customer Name is " + $contact.First + " " + $contact.Last "Customer Phone is " + $contact.Phone } Is something like this possible, or even recommended in PowerShell? A: Steven Murawski's answer is great, however I like the shorter (or rather just the neater select-object instead of using add-member syntax): function New-Person() { param ($FirstName, $LastName, $Phone) $person = new-object PSObject | select-object First, Last, Phone $person.First = $FirstName $person.Last = $LastName $person.Phone = $Phone return $person } A: Surprised no one mentioned this simple option (vs 3 or later) for creating custom objects: [PSCustomObject]@{ First = $First Last = $Last Phone = $Phone } The type will be PSCustomObject, not an actual custom type though. But it is probably the easiest way to create a custom object. A: Creating custom types can be done in PowerShell. Kirk Munro actually has two great posts that detail the process thoroughly. * *Naming Custom Objects *Defining Default Properties for Custom Objects The book Windows PowerShell In Action by Manning also has a code sample for creating a domain specific language to create custom types. The book is excellent all around, so I really recommend it. If you are just looking for a quick way to do the above, you could create a function to create the custom object like function New-Person() { param ($FirstName, $LastName, $Phone) $person = new-object PSObject $person | add-member -type NoteProperty -Name First -Value $FirstName $person | add-member -type NoteProperty -Name Last -Value $LastName $person | add-member -type NoteProperty -Name Phone -Value $Phone return $person } A: There is the concept of PSObject and Add-Member that you could use. $contact = New-Object PSObject $contact | Add-Member -memberType NoteProperty -name "First" -value "John" $contact | Add-Member -memberType NoteProperty -name "Last" -value "Doe" $contact | Add-Member -memberType NoteProperty -name "Phone" -value "123-4567" This outputs like: [8] » $contact First Last Phone ----- ---- ----- John Doe 123-4567 The other alternative (that I'm aware of) is to define a type in C#/VB.NET and load that assembly into PowerShell for use directly. This behavior is definitely encouraged because it allows other scripts or sections of your script work with an actual object. A: Here is the hard path to create custom types and store them in a collection. 
$Collection = @() $Object = New-Object -TypeName PSObject $Object.PsObject.TypeNames.Add('MyCustomType.Contact.Detail') Add-Member -InputObject $Object -memberType NoteProperty -name "First" -value "John" Add-Member -InputObject $Object -memberType NoteProperty -name "Last" -value "Doe" Add-Member -InputObject $Object -memberType NoteProperty -name "Phone" -value "123-4567" $Collection += $Object $Object = New-Object -TypeName PSObject $Object.PsObject.TypeNames.Add('MyCustomType.Contact.Detail') Add-Member -InputObject $Object -memberType NoteProperty -name "First" -value "Jeanne" Add-Member -InputObject $Object -memberType NoteProperty -name "Last" -value "Doe" Add-Member -InputObject $Object -memberType NoteProperty -name "Phone" -value "765-4321" $Collection += $Object Write-Output -InputObject $Collection A: This is the shortcut method: $myPerson = "" | Select-Object First,Last,Phone A: Prior to PowerShell 3 PowerShell's Extensible Type System didn't originally let you create concrete types you can test against the way you did in your parameter. If you don't need that test, you're fine with any of the other methods mentioned above. If you want an actual type that you can cast to or type-check with, as in your example script ... it cannot be done without writing it in C# or VB.NET and compiling. In PowerShell 2, you can use the "Add-Type" command to do it quite simply: add-type @" public struct contact { public string First; public string Last; public string Phone; } "@ Historical Note: In PowerShell 1 it was even harder. You had to manually use CodeDom; there is a very old new-struct function script on PoshCode.org which will help. Your example becomes: New-Struct Contact @{ First=[string]; Last=[string]; Phone=[string]; } Using Add-Type or New-Struct will let you actually test the class in your param([Contact]$contact) and make new ones using $contact = new-object Contact and so on... In PowerShell 3 If you don't need a "real" class that you can cast to, you don't have to use the Add-Member way that Steven and others have demonstrated above. 
Since PowerShell 2 you could use the -Property parameter for New-Object: $Contact = New-Object PSObject -Property @{ First=""; Last=""; Phone="" } And in PowerShell 3, we got the ability to use the PSCustomObject accelerator to add a TypeName: [PSCustomObject]@{ PSTypeName = "Contact" First = $First Last = $Last Phone = $Phone } You're still only getting a single object, so you should make a New-Contact function to make sure that every object comes out the same, but you can now easily verify a parameter "is" one of those type by decorating a parameter with the PSTypeName attribute: function PrintContact { param( [PSTypeName("Contact")]$contact ) "Customer Name is " + $contact.First + " " + $contact.Last "Customer Phone is " + $contact.Phone } In PowerShell 5 In PowerShell 5 everything changes, and we finally got class and enum as language keywords for defining types (there's no struct but that's ok): class Contact { # Optionally, add attributes to prevent invalid values [ValidateNotNullOrEmpty()][string]$First [ValidateNotNullOrEmpty()][string]$Last [ValidateNotNullOrEmpty()][string]$Phone # optionally, have a constructor to # force properties to be set: Contact($First, $Last, $Phone) { $this.First = $First $this.Last = $Last $this.Phone = $Phone } } We also got a new way to create objects without using New-Object: [Contact]::new() -- in fact, if you kept your class simple and don't define a constructor, you can create objects by casting a hashtable (although without a constructor, there would be no way to enforce that all properties must be set): class Contact { # Optionally, add attributes to prevent invalid values [ValidateNotNullOrEmpty()][string]$First [ValidateNotNullOrEmpty()][string]$Last [ValidateNotNullOrEmpty()][string]$Phone } $C = [Contact]@{ First = "Joel" Last = "Bennett" } A: Here's one more option, which uses a similar idea to the PSTypeName solution mentioned by Jaykul (and thus also requires PSv3 or above). Example * *Create a TypeName.Types.ps1xml file defining your type. E.g. 
Person.Types.ps1xml: <?xml version="1.0" encoding="utf-8" ?> <Types> <Type> <Name>StackOverflow.Example.Person</Name> <Members> <ScriptMethod> <Name>Initialize</Name> <Script> Param ( [Parameter(Mandatory = $true)] [string]$GivenName , [Parameter(Mandatory = $true)] [string]$Surname ) $this | Add-Member -MemberType 'NoteProperty' -Name 'GivenName' -Value $GivenName $this | Add-Member -MemberType 'NoteProperty' -Name 'Surname' -Value $Surname </Script> </ScriptMethod> <ScriptMethod> <Name>SetGivenName</Name> <Script> Param ( [Parameter(Mandatory = $true)] [string]$GivenName ) $this | Add-Member -MemberType 'NoteProperty' -Name 'GivenName' -Value $GivenName -Force </Script> </ScriptMethod> <ScriptProperty> <Name>FullName</Name> <GetScriptBlock>'{0} {1}' -f $this.GivenName, $this.Surname</GetScriptBlock> </ScriptProperty> <!-- include properties under here if we don't want them to be visible by default <MemberSet> <Name>PSStandardMembers</Name> <Members> </Members> </MemberSet> --> </Members> </Type> </Types> *Import your type: Update-TypeData -AppendPath .\Person.Types.ps1xml *Create an object of your custom type: $p = [PSCustomType]@{PSTypeName='StackOverflow.Example.Person'} *Initialise your type using the script method you defined in the XML: $p.Initialize('Anne', 'Droid') *Look at it; you'll see all properties defined: $p | Format-Table -AutoSize *Type calling a mutator to update a property's value: $p.SetGivenName('Dan') *Look at it again to see the updated value: $p | Format-Table -AutoSize Explanation * *The PS1XML file allows you to define custom properties on types. *It is not restricted to .net types as the documentation implies; so you can put what you like in '/Types/Type/Name' any object created with a matching 'PSTypeName' will inherit the members defined for this type. *Members added through PS1XML or Add-Member are restricted to NoteProperty, AliasProperty, ScriptProperty, CodeProperty, ScriptMethod, and CodeMethod (or PropertySet/MemberSet; though those are subject to the same restrictions). All of these properties are read only. *By defining a ScriptMethod we can cheat the above restriction. E.g. We can define a method (e.g. Initialize) which creates new properties, setting their values for us; thus ensuring our object has all the properties we need for our other scripts to work. *We can use this same trick to allow the properties to be updatable (albeit via method rather than direct assignment), as shown in the example's SetGivenName. This approach isn't ideal for all scenarios; but is useful for adding class-like behaviors to custom types / can be used in conjunction with other methods mentioned in the other answers. E.g. in the real world I'd probably only define the FullName property in the PS1XML, then use a function to create the object with the required values, like so: More Info Take a look at the documentation, or the OOTB type file Get-Content $PSHome\types.ps1xml for inspiration. # have something like this defined in my script so we only try to import the definition once. # the surrounding if statement may be useful if we're dot sourcing the script in an existing # session / running in ISE / something like that if (!(Get-TypeData 'StackOverflow.Example.Person')) { Update-TypeData '.\Person.Types.ps1xml' } # have a function to create my objects with all required parameters # creating them from the hash table means they're PROPERties; i.e. 
updatable without calling a # setter method (note: recall I said above that in this scenario I'd remove their definition # from the PS1XML) function New-SOPerson { [CmdletBinding()] [OutputType('StackOverflow.Example.Person')] Param ( [Parameter(Mandatory)] [string]$GivenName , [Parameter(Mandatory)] [string]$Surname ) ([PSCustomObject][Ordered]@{ PSTypeName = 'StackOverflow.Example.Person' GivenName = $GivenName Surname = $Surname }) } # then use my new function to generate the new object $p = New-SOPerson -GivenName 'Simon' -Surname 'Borg' # and thanks to the type magic... FullName exists :) Write-Information "$($p.FullName) was created successfully!" -InformationAction Continue
{ "language": "en", "url": "https://stackoverflow.com/questions/59819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "105" }
Q: How to retrieve an element from a set without removing it? Suppose the following: >>> s = set([1, 2, 3]) How do I get a value (any value) out of s without doing s.pop()? I want to leave the item in the set until I am sure I can remove it - something I can only be sure of after an asynchronous call to another host. Quick and dirty: >>> elem = s.pop() >>> s.add(elem) But do you know of a better way? Ideally in constant time. A: Two options that don't require copying the whole set: for e in s: break # e is now an element from s Or... e = next(iter(s)) But in general, sets don't support indexing or slicing. A: tl;dr for first_item in muh_set: break remains the optimal approach in Python 3.x. Curse you, Guido. y u do this Welcome to yet another set of Python 3.x timings, extrapolated from wr.'s excellent Python 2.x-specific response. Unlike AChampion's equally helpful Python 3.x-specific response, the timings below also time outlier solutions suggested above – including: * *list(s)[0], John's novel sequence-based solution. *random.sample(s, 1), dF.'s eclectic RNG-based solution. Code Snippets for Great Joy Turn on, tune in, time it: from timeit import Timer stats = [ "for i in range(1000): \n\tfor x in s: \n\t\tbreak", "for i in range(1000): next(iter(s))", "for i in range(1000): s.add(s.pop())", "for i in range(1000): list(s)[0]", "for i in range(1000): random.sample(s, 1)", ] for stat in stats: t = Timer(stat, setup="import random\ns=set(range(100))") try: print("Time for %s:\t %f"%(stat, t.timeit(number=1000))) except: t.print_exc() Quickly Obsoleted Timeless Timings Behold! Ordered by fastest to slowest snippets: $ ./test_get.py Time for for i in range(1000): for x in s: break: 0.249871 Time for for i in range(1000): next(iter(s)): 0.526266 Time for for i in range(1000): s.add(s.pop()): 0.658832 Time for for i in range(1000): list(s)[0]: 4.117106 Time for for i in range(1000): random.sample(s, 1): 21.851104 Faceplants for the Whole Family Unsurprisingly, manual iteration remains at least twice as fast as the next fastest solution. Although the gap has decreased from the Bad Old Python 2.x days (in which manual iteration was at least four times as fast), it disappoints the PEP 20 zealot in me that the most verbose solution is the best. At least converting a set into a list just to extract the first element of the set is as horrible as expected. Thank Guido, may his light continue to guide us. Surprisingly, the RNG-based solution is absolutely horrible. List conversion is bad, but random really takes the awful-sauce cake. So much for the Random Number God. I just wish the amorphous They would PEP up a set.get_first() method for us already. If you're reading this, They: "Please. Do something." A: I use a utility function I wrote. Its name is somewhat misleading because it kind of implies it might be a random item or something like that. def anyitem(iterable): try: return iter(iterable).next() except StopIteration: return None A: Following @wr. 
post, I get similar results (for Python3.5) from timeit import * stats = ["for i in range(1000): next(iter(s))", "for i in range(1000): \n\tfor x in s: \n\t\tbreak", "for i in range(1000): s.add(s.pop())"] for stat in stats: t = Timer(stat, setup="s=set(range(100000))") try: print("Time for %s:\t %f"%(stat, t.timeit(number=1000))) except: t.print_exc() Output: Time for for i in range(1000): next(iter(s)): 0.205888 Time for for i in range(1000): for x in s: break: 0.083397 Time for for i in range(1000): s.add(s.pop()): 0.226570 However, when changing the underlying set (e.g. call to remove()) things go badly for the iterable examples (for, iter): from timeit import * stats = ["while s:\n\ta = next(iter(s))\n\ts.remove(a)", "while s:\n\tfor x in s: break\n\ts.remove(x)", "while s:\n\tx=s.pop()\n\ts.add(x)\n\ts.remove(x)"] for stat in stats: t = Timer(stat, setup="s=set(range(100000))") try: print("Time for %s:\t %f"%(stat, t.timeit(number=1000))) except: t.print_exc() Results in: Time for while s: a = next(iter(s)) s.remove(a): 2.938494 Time for while s: for x in s: break s.remove(x): 2.728367 Time for while s: x=s.pop() s.add(x) s.remove(x): 0.030272 A: To provide some timing figures behind the different approaches, consider the following code. The get() is my custom addition to Python's setobject.c, being just a pop() without removing the element. from timeit import * stats = ["for i in xrange(1000): iter(s).next() ", "for i in xrange(1000): \n\tfor x in s: \n\t\tbreak", "for i in xrange(1000): s.add(s.pop()) ", "for i in xrange(1000): s.get() "] for stat in stats: t = Timer(stat, setup="s=set(range(100))") try: print "Time for %s:\t %f"%(stat, t.timeit(number=1000)) except: t.print_exc() The output is: $ ./test_get.py Time for for i in xrange(1000): iter(s).next() : 0.433080 Time for for i in xrange(1000): for x in s: break: 0.148695 Time for for i in xrange(1000): s.add(s.pop()) : 0.317418 Time for for i in xrange(1000): s.get() : 0.146673 This means that the for/break solution is the fastest (sometimes faster than the custom get() solution). A: What I usually do for small collections is to create kind of parser/converter method like this def convertSetToList(setName): return list(setName) Then I can use the new list and access by index number userFields = convertSetToList(user) name = request.json[userFields[0]] As a list you will have all the other methods that you may need to work with A: Since you want a random element, this will also work: >>> import random >>> s = set([1,2,3]) >>> random.sample(s, 1) [2] The documentation doesn't seem to mention performance of random.sample. From a really quick empirical test with a huge list and a huge set, it seems to be constant time for a list but not for the set. Also, iteration over a set isn't random; the order is undefined but predictable: >>> list(set(range(10))) == range(10) True If randomness is important and you need a bunch of elements in constant time (large sets), I'd use random.sample and convert to a list first: >>> lst = list(s) # once, O(len(s))? ... >>> e = random.sample(lst, 1)[0] # constant time A: Yet another way in Python 3: next(iter(s)) or s.__iter__().__next__() A: You can unpack the values to access the elements: s = set([1, 2, 3]) v1, v2, v3 = s print(v1,v2,v3) #1 2 3 A: Least code would be: >>> s = set([1, 2, 3]) >>> list(s)[0] 1 Obviously this would create a new list which contains each member of the set, so not great if your set is very large. 
A: I wondered how the functions will perform for different sets, so I did a benchmark: from random import sample def ForLoop(s): for e in s: break return e def IterNext(s): return next(iter(s)) def ListIndex(s): return list(s)[0] def PopAdd(s): e = s.pop() s.add(e) return e def RandomSample(s): return sample(s, 1) def SetUnpacking(s): e, *_ = s return e from simple_benchmark import benchmark b = benchmark([ForLoop, IterNext, ListIndex, PopAdd, RandomSample, SetUnpacking], {2**i: set(range(2**i)) for i in range(1, 20)}, argument_name='set size', function_aliases={first: 'First'}) b.plot() This plot clearly shows that some approaches (RandomSample, SetUnpacking and ListIndex) depend on the size of the set and should be avoided in the general case (at least if performance might be important). As already shown by the other answers the fastest way is ForLoop. However as long as one of the constant time approaches is used the performance difference will be negligible. iteration_utilities (Disclaimer: I'm the author) contains a convenience function for this use-case: first: >>> from iteration_utilities import first >>> first({1,2,3,4}) 1 I also included it in the benchmark above. It can compete with the other two "fast" solutions but the difference isn't much either way. A: Seemingly the most compact (6 symbols) though very slow way to get a set element (made possible by PEP 3132): e,*_=s With Python 3.5+ you can also use this 7-symbol expression (thanks to PEP 448): [*s][0] Both options are roughly 1000 times slower on my machine than the for-loop method. A: I f you want just the first element try this: b = (a-set()).pop() A: Another option is to use a dictionary with values you don't care about. E.g., poor_man_set = {} poor_man_set[1] = None poor_man_set[2] = None poor_man_set[3] = None ... You can treat the keys as a set except that they're just an array: keys = poor_man_set.keys() print "Some key = %s" % keys[0] A side effect of this choice is that your code will be backwards compatible with older, pre-set versions of Python. It's maybe not the best answer but it's another option. Edit: You can even do something like this to hide the fact that you used a dict instead of an array or set: poor_man_set = {} poor_man_set[1] = None poor_man_set[2] = None poor_man_set[3] = None poor_man_set = poor_man_set.keys() A: How about s.copy().pop()? I haven't timed it, but it should work and it's simple. It works best for small sets however, as it copies the whole set.
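To tie this back to the original question (peek at an element, confirm with another host, and only then remove it), here is a minimal illustrative sketch; check_with_host is a made-up placeholder for the asynchronous call, not something from the answers above.

import asyncio

async def check_with_host(item):
    await asyncio.sleep(0)    # stand-in for the real network round trip
    return True               # pretend the remote host approved removal

async def drain(s):
    while s:
        item = next(iter(s))          # peek without removing
        if await check_with_host(item):
            s.discard(item)           # safe even if the item is already gone

asyncio.run(drain({1, 2, 3}))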
{ "language": "en", "url": "https://stackoverflow.com/questions/59825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "636" }
Q: Trouble with selected RadioButtonList value on postback (VB.NET) I have a radio button list on a page that is used to configure products. When the page loads the first time, the first list of options is displayed. You select one of them, then click a "Next Step" button; the page posts back and shows a new radio button list for step 2. Now if I click a "Previous Step" button I can easily get the previous list of options to display, but I cannot for some reason get one of the radio buttons to be selected. I can easily bring back the value I need: right after building the radio button list I have a step that just says radiobuttonlist.selected = "somevalue", depending on whatever the user chose when they completed the first step the first time. When I debug I see that the value is correct and is being applied, but when the page is displayed the radio button is not selected. I have noticed that when I click my "Previous" button while debugging, the following steps occur: the Page.Load handler runs and the code inside my If Not Page.IsPostBack block does NOT run, which is correct. Then the handler for the button I just clicked runs. But then after that the Page.Load handler runs again, and this time the code in the If Not Page.IsPostBack block DOES run... Is that normal, for the Page.Load handler to run twice like that on a postback? I think it may have something to do with why my value is not being applied. A: It sounds like you are causing a redirect to happen. You aren't by chance doing a Response.Redirect in order to "get back" to the original page? That would cause the behavior you describe: you would first get the postback from the Previous button to leave the page, and then you would get a fresh request (IsPostBack = False) as the page reloads. A: Try setting AutoEventWireup = False in the page. A: I DID have a Response.Redirect I was using, but I removed it. I suppose I can do a thorough check to make sure I didn't have another one anywhere. I will try the AutoEventWireup property as well. A: Alright, well, it looks like the AutoEventWireup property was already set to False before I even started, so I don't think that was it. I didn't see any other redirects happening anywhere... I guess the search continues. A: Have you tried setting the selected value during the page's PreRender phase instead of Page_Load? A: Try this: yourRadioButtonList.Items.FindByValue(yourSavedValue).Selected = True A: I had the same issue. Like Thunder3 mentioned, I did a redirect back to the page and was calling a method on Page_Load to set the RadioButtonList selected value, but the selected value was not applied to the RadioButtonList. I solved the issue by calling the method on Page_Init instead. A: One probable reason, and the one I faced recently, is that a RadioButtonList distinguishes its items by value rather than by ID, so if duplicate values exist this issue will be observed. The link below provides a detailed explanation: RadioButtonList selected item does not stick on postback
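Pulling the suggestions above together (rebuild the list early in the life cycle, then re-select via Items.FindByValue), a minimal VB.NET sketch might look like the following. BuildStepOneList and the Session key are hypothetical names, not from the original post.

' Rebuild the list and re-select the saved value early in the page life cycle.
Protected Sub Page_Init(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Init
    BuildStepOneList()   ' hypothetical helper that repopulates RadioButtonList1
    Dim saved As String = TryCast(Session("StepOneChoice"), String)
    If saved IsNot Nothing Then
        Dim item As ListItem = RadioButtonList1.Items.FindByValue(saved)
        If item IsNot Nothing Then item.Selected = True
    End If
End Sub

Checking the result of FindByValue for Nothing also guards against the duplicate-value problem mentioned in the last answer.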
{ "language": "en", "url": "https://stackoverflow.com/questions/59829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Getting Java exception: java.net.MalformedURLException: no protocol I am currently calling the following line of code: java.net.URL connection_url = new java.net.URL("http://<ip address>:<port>/path"); and I get the exception above when it executes. Any ideas as to why this is happening? A: As a side note, you should be using URI, because Java's URL class is screwed up (the equals method, I believe). A: That URL string looks like it's invalid. Are you sure it's not supposed to be 'http://path'? Or are the server and port blank? A: Your code works perfectly fine for me: public static void main(String[] args) { try { java.net.URL connection_url = new java.net.URL("http://:/path"); System.out.println("Instantiated new URL: " + connection_url); } catch (MalformedURLException e) { e.printStackTrace(); } } Instantiated new URL: http://:/path Are you sure you have the right line of code? A: I have also had the same exception, but in my case the URL I was trying to execute had a space appended. After removing the space it worked fine for me. Check that the URL does not have any trailing spaces in your case. A: I had the same error and it was resolved as follows: the JAR files (JFree) that I had added a few days earlier had become corrupted and were causing this error. I downloaded the same files again from the net and it worked fine for me.
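For what it's worth, here is a small defensive sketch (plain Java; the address is a made-up example value). Trimming the string and checking for a scheme up front turns the vague "no protocol" failure into an immediate, readable error.

import java.net.MalformedURLException;
import java.net.URL;

public class UrlCheck {
    // Fail early with a clear message when the scheme is missing or the value is padded.
    static URL toHttpUrl(String raw) throws MalformedURLException {
        String cleaned = (raw == null) ? "" : raw.trim();
        if (!cleaned.startsWith("http://") && !cleaned.startsWith("https://")) {
            throw new MalformedURLException("no protocol in: '" + cleaned + "'");
        }
        return new URL(cleaned);
    }

    public static void main(String[] args) throws MalformedURLException {
        System.out.println(toHttpUrl("  http://192.0.2.1:8080/path  "));  // example value only
    }
}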
{ "language": "en", "url": "https://stackoverflow.com/questions/59832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Free/Cheap ASP.NET Component Libraries Earlier today someone asked about free/cheap component libraries for WinForms. I'm interested in the same thing, but for ASP.NET. There are some great commercial libraries like Telerik and DevExpress, but are there any great free/cheap alternatives? A: I am not sure what you consider cheap, but the Peter Blum ASP.NET controls are a good buy. peterblum.com A: I would suggest that you can get free components at sites like CodeProject, but you are going to end up sinking a ton of time separating the wheat from the chaff and figuring out how to use them, and since they are not a unified library, the APIs and integration will be a new learning curve for each different control. So I would recommend, unless you are a hobbyist with a lot of time on your hands, that it is without a doubt worth the money for your company to buy something from Peter Blum, DevExpress, Infragistics, Telerik, or one of the other vendors who will provide support and documentation. A: I'm enjoying the obout.com control suite for ASP.NET - $200 buys you the whole suite and they seem to be actively developing it. The Grid and TreeView are very strong in this suite, and I've been impressed with the support. A: Check out the ASP.NET Control Gallery. "The Control Gallery is a directory of over 900 controls and components to use in your own applications. You will find everything from simple controls to full e-commerce components." Not all of them are free, but it is definitely worth a look. A: Try these free custom web controls: Excentrics World .NET
{ "language": "en", "url": "https://stackoverflow.com/questions/59834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I check if a directory exists or not in a Bash shell script? What command checks if a directory exists or not within a Bash shell script? A: if [ -d "$DIRECTORY" ]; then # Will enter here if $DIRECTORY exists fi This is not completely true... If you want to go to that directory, you also need to have the execute rights on the directory. Maybe you need to have write rights as well. Therefore: if [ -d "$DIRECTORY" ] && [ -x "$DIRECTORY" ] ; then # ... to go to that directory (even if DIRECTORY is a link) cd $DIRECTORY pwd fi if [ -d "$DIRECTORY" ] && [ -w "$DIRECTORY" ] ; then # ... to go to that directory and write something there (even if DIRECTORY is a link) cd $DIRECTORY touch foobar fi A: In kind of a ternary form, [ -d "$directory" ] && echo "exist" || echo "not exist" And with test: test -d "$directory" && echo "exist" || echo "not exist" A: The ls command in conjunction with -l (long listing) option returns attributes information about files and directories. In particular the first character of ls -l output it is usually a d or a - (dash). In case of a d the one listed is a directory for sure. The following command in just one line will tell you if the given ISDIR variable contains a path to a directory or not: [[ $(ls -ld "$ISDIR" | cut -c1) == 'd' ]] && echo "YES, $ISDIR is a directory." || echo "Sorry, $ISDIR is not a directory" Practical usage: [claudio@nowhere ~]$ ISDIR="$HOME/Music" [claudio@nowhere ~]$ ls -ld "$ISDIR" drwxr-xr-x. 2 claudio claudio 4096 Aug 23 00:02 /home/claudio/Music [claudio@nowhere ~]$ [[ $(ls -ld "$ISDIR" | cut -c1) == 'd' ]] && echo "YES, $ISDIR is a directory." || echo "Sorry, $ISDIR is not a directory" YES, /home/claudio/Music is a directory. [claudio@nowhere ~]$ touch "empty file.txt" [claudio@nowhere ~]$ ISDIR="$HOME/empty file.txt" [claudio@nowhere ~]$ [[ $(ls -ld "$ISDIR" | cut -c1) == 'd' ]] && echo "YES, $ISDIR is a directory." || echo "Sorry, $ISDIR is not a directoy" Sorry, /home/claudio/empty file.txt is not a directory A: file="foo" if [[ -e "$file" ]]; then echo "File Exists"; fi; A: There are great solutions out there, but ultimately every script will fail if you're not in the right directory. So code like this: if [ -d "$LINK_OR_DIR" ]; then if [ -L "$LINK_OR_DIR" ]; then # It is a symlink! # Symbolic link specific commands go here rm "$LINK_OR_DIR" else # It's a directory! # Directory command goes here rmdir "$LINK_OR_DIR" fi fi will execute successfully only if at the moment of execution you're in a directory that has a subdirectory that you happen to check for. I understand the initial question like this: to verify if a directory exists irrespective of the user's position in the file system. So using the command 'find' might do the trick: dir=" " echo "Input directory name to search for:" read dir find $HOME -name $dir -type d This solution is good because it allows the use of wildcards, a useful feature when searching for files/directories. The only problem is that, if the searched directory doesn't exist, the 'find' command will print nothing to standard output (not an elegant solution for my taste) and will have nonetheless a zero exit. Maybe someone could improve on this. A: The below find can be used, find . -type d -name dirname -prune -print A: One Liner: [[ -d $Directory ]] && echo true A: Or for something completely useless: [ -d . ] || echo "No" A: Here's a very pragmatic idiom: (cd $dir) || return # Is this a directory, # and do we have access? 
I typically wrap it in a function: can_use_as_dir() { (cd ${1:?pathname expected}) || return } Or: assert_dir_access() { (cd ${1:?pathname expected}) || exit } The nice thing about this approach is that I do not have to think of a good error message. cd will give me a standard one line message to standard error already. It will also give more information than I will be able to provide. By performing the cd inside a subshell ( ... ), the command does not affect the current directory of the caller. If the directory exists, this subshell and the function are just a no-op. Next is the argument that we pass to cd: ${1:?pathname expected}. This is a more elaborate form of parameter substitution which is explained in more detail below. Tl;dr: If the string passed into this function is empty, we again exit from the subshell ( ... ) and return from the function with the given error message. Quoting from the ksh93 man page: ${parameter:?word} If parameter is set and is non-null then substitute its value; otherwise, print word and exit from the shell (if not interactive). If word is omitted then a standard message is printed. and If the colon : is omitted from the above expressions, then the shell only checks whether parameter is set or not. The phrasing here is peculiar to the shell documentation, as word may refer to any reasonable string, including whitespace. In this particular case, I know that the standard error message 1: parameter not set is not sufficient, so I zoom in on the type of value that we expect here - the pathname of a directory. A philosophical note: The shell is not an object oriented language, so the message says pathname, not directory. At this level, I'd rather keep it simple - the arguments to a function are just strings. A: Always wrap variables in double quotes when referencing them in a Bash script. if [ -d "$DIRECTORY" ]; then # Will enter here if $DIRECTORY exists, even if it contains spaces fi Kids these days put spaces and lots of other funny characters in their directory names. (Spaces! Back in my day, we didn't have no fancy spaces!) One day, one of those kids will run your script with $DIRECTORY set to "My M0viez" and your script will blow up. You don't want that. So use double quotes. A: (1) [ -d Piyush_Drv1 ] && echo ""Exists"" || echo "Not Exists" (2) [ `find . -type d -name Piyush_Drv1 -print | wc -l` -eq 1 ] && echo Exists || echo "Not Exists" (3) [[ -d run_dir && ! -L run_dir ]] && echo Exists || echo "Not Exists" If an issue is found with one of the approaches provided above: With the ls command; the cases when a directory does not exists - an error message is shown [[ `ls -ld SAMPLE_DIR| grep ^d | wc -l` -eq 1 ]] && echo exists || not exists -ksh: not: not found [No such file or directory] A: Use the file program. Considering all directories are also files in Linux, issuing the following command would suffice: file $directory_name Checking a nonexistent file: file blah Output: cannot open 'blah' (No such file or directory) Checking an existing directory: file bluh Output: bluh: directory A: To check if a directory exists: if [ -d "$DIRECTORY" ]; then echo "$DIRECTORY does exist." fi To check if a directory does not exist: if [ ! -d "$DIRECTORY" ]; then echo "$DIRECTORY does not exist." fi However, as Jon Ericson points out, subsequent commands may not work as intended if you do not take into account that a symbolic link to a directory will also pass this check. E.g. 
running this: ln -s "$ACTUAL_DIR" "$SYMLINK" if [ -d "$SYMLINK" ]; then rmdir "$SYMLINK" fi Will produce the error message: rmdir: failed to remove `symlink': Not a directory So symbolic links may have to be treated differently, if subsequent commands expect directories: if [ -d "$LINK_OR_DIR" ]; then if [ -L "$LINK_OR_DIR" ]; then # It is a symlink! # Symbolic link specific commands go here. rm "$LINK_OR_DIR" else # It's a directory! # Directory command goes here. rmdir "$LINK_OR_DIR" fi fi Take particular note of the double-quotes used to wrap the variables. The reason for this is explained by 8jean in another answer. If the variables contain spaces or other unusual characters it will probably cause the script to fail. A: If you want to check if a directory exists, regardless if it's a real directory or a symlink, use this: ls $DIR if [ $? != 0 ]; then echo "Directory $DIR already exists!" exit 1; fi echo "Directory $DIR does not exist..." Explanation: The "ls" command gives an error "ls: /x: No such file or directory" if the directory or symlink does not exist, and also sets the return code, which you can retrieve via "$?", to non-null (normally "1"). Be sure that you check the return code directly after calling "ls". A: if [ -d "$Directory" -a -w "$Directory" ] then #Statements fi The above code checks if the directory exists and if it is writable. A: More features using find * *Check existence of the folder within sub-directories: found=`find -type d -name "myDirectory"` if [ -n "$found" ] then # The variable 'found' contains the full path where "myDirectory" is. # It may contain several lines if there are several folders named "myDirectory". fi *Check existence of one or several folders based on a pattern within the current directory: found=`find -maxdepth 1 -type d -name "my*"` if [ -n "$found" ] then # The variable 'found' contains the full path where folders "my*" have been found. fi *Both combinations. In the following example, it checks the existence of the folder in the current directory: found=`find -maxdepth 1 -type d -name "myDirectory"` if [ -n "$found" ] then # The variable 'found' is not empty => "myDirectory"` exists. fi A: DIRECTORY=/tmp if [ -d "$DIRECTORY" ]; then echo "Exists" fi Try online A: From script file myScript.sh: if [ -d /home/ec2-user/apache-tomcat-8.5.5/webapps/Gene\ Directory ]; then echo "Directory exists!" echo "Great" fi Or if [ -d '/home/ec2-user/apache-tomcat-8.5.5/webapps/Gene Directory' ]; then echo "Directory exists!" echo "Great" fi A: Git Bash + Dropbox + Windows: None of the other solutions worked for my Dropbox folder, which was weird because I can Git push to a Dropbox symbolic path. #!/bin/bash dbox="~/Dropbox/" result=0 prv=$(pwd) && eval "cd $dbox" && result=1 && cd "$prv" echo $result read -p "Press Enter To Continue:" You'll probably want to know how to successfully navigate to Dropbox from Bash as well. So here is the script in its entirety. https://pastebin.com/QF2Exmpn A: Just as an alternative to the '[ -d ]' and '[ -h ]' options, you can make use of stat to obtain the file type and parse it. #! 
/bin/bash MY_DIR=$1 NODE_TYPE=$(stat -c '%F' ${MY_DIR} 2>/dev/null) case "${NODE_TYPE}" in "directory") echo $MY_DIR;; "symbolic link") echo $(readlink $MY_DIR);; "") echo "$MY_DIR does not exist";; *) echo "$NODE_TYPE is unsupported";; esac exit 0 Test data: $ mkdir tmp $ ln -s tmp derp $ touch a.txt $ ./dir.sh tmp tmp $ ./dir.sh derp tmp $ ./dir.sh a.txt regular file is unsupported $ ./dir.sh god god does not exist A: Note the -d test can produce some surprising results: $ ln -s tmp/ t $ if [ -d t ]; then rmdir t; fi rmdir: directory "t": Path component not a directory File under: "When is a directory not a directory?" The answer: "When it's a symlink to a directory." A slightly more thorough test: if [ -d t ]; then if [ -L t ]; then rm t else rmdir t fi fi You can find more information in the Bash manual on Bash conditional expressions and the [ builtin command and the [[ compound commmand. A: I find the double-bracket version of test makes writing logic tests more natural: if [[ -d "${DIRECTORY}" && ! -L "${DIRECTORY}" ]] ; then echo "It's a bona-fide directory" fi A: Actually, you should use several tools to get a bulletproof approach: DIR_PATH=`readlink -f "${the_stuff_you_test}"` # Get rid of symlinks and get abs path if [[ -d "${DIR_PATH}" ]] ; Then # Now you're testing echo "It's a dir"; fi There isn't any need to worry about spaces and special characters as long as you use "${}". Note that [[]] is not as portable as [], but since most people work with modern versions of Bash (since after all, most people don't even work with command line :-p), the benefit is greater than the trouble. A: Have you considered just doing whatever you want to do in the if rather than looking before you leap? I.e., if you want to check for the existence of a directory before you enter it, try just doing this: if pushd /path/you/want/to/enter; then # Commands you want to run in this directory popd fi If the path you give to pushd exists, you'll enter it and it'll exit with 0, which means the then portion of the statement will execute. If it doesn't exist, nothing will happen (other than some output saying the directory doesn't exist, which is probably a helpful side-effect anyways for debugging). It seems better than this, which requires repeating yourself: if [ -d /path/you/want/to/enter ]; then pushd /path/you/want/to/enter # Commands you want to run in this directory popd fi The same thing works with cd, mv, rm, etc... if you try them on files that don't exist, they'll exit with an error and print a message saying it doesn't exist, and your then block will be skipped. If you try them on files that do exist, the command will execute and exit with a status of 0, allowing your then block to execute. A: Shorter form: # if $DIR is a directory, then print yes [ -d "$DIR" ] && echo "Yes" A: [[ -d "$DIR" && ! -L "$DIR" ]] && echo "It's a directory and not a symbolic link" N.B: Quoting variables is a good practice. 
Explanation: * *-d: check if it's a directory *-L: check if it's a symbolic link A: To check more than one directory use this code: if [ -d "$DIRECTORY1" ] && [ -d "$DIRECTORY2" ] then # Things to do fi A: Check if the directory exists, else make one: [ -d "$DIRECTORY" ] || mkdir $DIRECTORY A: [ -d ~/Desktop/TEMPORAL/ ] && echo "DIRECTORY EXISTS" || echo "DIRECTORY DOES NOT EXIST" A: * *A simple script to test if a directory or file is present or not: if [ -d /home/ram/dir ] # For file "if [ -f /home/rama/file ]" then echo "dir present" else echo "dir not present" fi *A simple script to check whether the directory is present or not: mkdir tempdir # If you want to check file use touch instead of mkdir ret=$? if [ "$ret" == "0" ] then echo "dir present" else echo "dir not present" fi The above scripts will check if the directory is present or not $? if the last command is a success it returns "0", else a non-zero value. Suppose tempdir is already present. Then mkdir tempdir will give an error like below: mkdir: cannot create directory ‘tempdir’: File exists A: To check if a directory exists you can use a simple if structure like this: if [ -d directory/path to a directory ] ; then # Things to do else #if needed #also: elif [new condition] # Things to do fi You can also do it in the negative: if [ ! -d directory/path to a directory ] ; then # Things to do when not an existing directory Note: Be careful. Leave empty spaces on either side of both opening and closing braces. With the same syntax you can use: -e: any kind of archive -f: file -h: symbolic link -r: readable file -w: writable file -x: executable file -s: file size greater than zero A: Using the -e check will check for files and this includes directories. if [ -e ${FILE_PATH_AND_NAME} ] then echo "The file or directory exists." fi A: This answer wrapped up as a shell script Examples $ is_dir ~ YES $ is_dir /tmp YES $ is_dir ~/bin YES $ mkdir '/tmp/test me' $ is_dir '/tmp/test me' YES $ is_dir /asdf/asdf NO # Example of calling it in another script DIR=~/mydata if [ $(is_dir $DIR) == "NO" ] then echo "Folder doesnt exist: $DIR"; exit; fi is_dir function show_help() { IT=$(CAT <<EOF usage: DIR output: YES or NO, depending on whether or not the directory exists. ) echo "$IT" exit } if [ "$1" == "help" ] then show_help fi if [ -z "$1" ] then show_help fi DIR=$1 if [ -d $DIR ]; then echo "YES"; exit; fi echo "NO"; A: You can use test -d (see man test). -d file True if file exists and is a directory. For example: test -d "/etc" && echo Exists || echo Does not exist Note: The test command is same as conditional expression [ (see: man [), so it's portable across shell scripts. [ - This is a synonym for the test builtin, but the last argument must, be a literal ], to match the opening [. For possible options or further help, check: * *help [ *help test *man test or man [ A: As per Jonathan's comment: If you want to create the directory and it does not exist yet, then the simplest technique is to use mkdir -p which creates the directory — and any missing directories up the path — and does not fail if the directory already exists, so you can do it all at once with: mkdir -p /some/directory/you/want/to/exist || exit 1
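As a closing sketch (illustrative only; the path is just an example), combining the existence test, the mkdir -p fallback, and careful quoting into one small script:

#!/usr/bin/env bash
workdir="${1:-/tmp/mywork}"          # example default path

if [ ! -d "$workdir" ]; then
    echo "'$workdir' does not exist, creating it" >&2
    mkdir -p -- "$workdir" || exit 1
fi

cd -- "$workdir" || exit 1
echo "now working in $(pwd)"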
{ "language": "en", "url": "https://stackoverflow.com/questions/59838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4290" }
Q: Bug in LINQ Contains statement - is there a fix or workaround? I found a bug in the Contains statement in LINQ (not sure if it is really in LINQ or LINQ to SQL) and want to know if anyone else has seen this and if there is a fix or workaround. If the query source you do the Contains with has more than 10 items in it, it does not pass the items correctly to the SQL query. It is hard to explain what it does; an example will show it best. If you look at the raw query, the parameters look like this: @P0 = 'aaa' @P1 = 'bbb' @P2 = 'ccc' ... [@P3 through @P9] @P10 = '111' @P11 = '222' ... [@P12 through @P19] @P20 = 'sss' ... [@P21 through @P99] @P100 = 'qqq' When the values are passed into the final query (all parameters resolved), it has resolved the parameters as if these were the values passed: @P0 = 'aaa' @P1 = 'bbb' @P2 = 'ccc' ... @P10 = 'bbb'0 @P11 = 'bbb'1 ... @P20 = 'ccc'0 ... @P100 = 'bbb'00 So it looks like the parameter resolution looks only at the first digit after the @P, resolves that, and then appends whatever is left of the parameter name. At least that is what the SQL Server Query Visualizer plugin for Visual Studio shows the query doing. Really strange. So if anyone has advice, please share. Thanks! Update: I have rewritten the original LINQ statement so that I now use a join instead of the Contains, but I would still like to know if there is a way around this issue. A: The more I look at it, and after running more tests, I'm thinking the bug may be in the SQL Server Query Visualizer plugin for Visual Studio, not actually in LINQ to SQL itself. So it is not nearly as bad a situation as I thought - the query will return the right results, but you can't trust what the Visualizer is showing. Not great, but better than what I thought was going on. A: Try actually looking at the output from your DataContext before you pass judgement. The DataContext.Log property will give you the generated SQL.
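If you want to see the SQL that LINQ to SQL actually sends (rather than what the visualizer displays), a small sketch along these lines can help; MyDataContext, Items and Code are placeholder names, not from the original question.

using System;
using System.Collections.Generic;
using System.Linq;

// ... inside some method:
using (var db = new MyDataContext())            // placeholder context type
{
    db.Log = Console.Out;                       // DataContext.Log is a TextWriter property

    var ids = new List<string> { "aaa", "bbb", "ccc" };
    var rows = db.Items                         // placeholder table
                 .Where(i => ids.Contains(i.Code))
                 .ToList();                     // generated SQL and parameter values print to the console
}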
{ "language": "en", "url": "https://stackoverflow.com/questions/59840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I create a spring bean for a Java double primitive? I'd like to create a spring bean that holds the value of a double. Something like: <bean id="doubleValue" value="3.7"/> A: It's also worth noting that depending on your need defining your own bean may not be the best bet for you. <util:constant static-field="org.example.Constants.FOO"/> is a good way to access a constant value stored in a class and default binders also work very well for conversions e.g. <bean class="Foo" p:doubleValue="123.00"/> I've found myself replacing many of my beans in this manner, coupled with a properties file defining my values (for reuse purposes). What used to look like this <bean id="d1" class="java.lang.Double"> <constructor-arg value="3.7"/> </bean> <bean id="foo" class="Foo"> <property name="doubleVal" ref="d1"/> </bean> gets refactored into this: <bean id="propertyFile" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer" p:location="classpath:my.properties" /> <bean id="foo" class="Foo" p:doubleVal="${d1}"/> A: Declare it like this: <bean id="doubleValue" class="java.lang.Double"> <constructor-arg index="0" value="3.7"/> </bean> And use like this: <bean id="someOtherBean" ...> <property name="value" ref="doubleValue"/> </bean> A: Why don't you just use a Double? any reason? A: Spring 2.5+ You can define bean like this in Java config: @Configuration public class BeanConfig { @Bean public Double doubleBean(){ return new Double(3.7); } } You can use this bean like this in your program: @Autowired Double doubleBean; public void printDouble(){ System.out.println(doubleBean); //sample usage }
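As a variation on the Java-config answer above (a sketch only; the property name and file are assumptions, not from the question), the double can come from a properties file instead of being hard-coded:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.core.env.Environment;

@Configuration
@PropertySource("classpath:my.properties")   // assumed to contain: rate.value=3.7
public class RateConfig {

    @Autowired
    private Environment env;

    @Bean
    public Double rateBean() {
        // Falls back to 3.7 if the property is missing.
        return env.getProperty("rate.value", Double.class, 3.7);
    }
}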
{ "language": "en", "url": "https://stackoverflow.com/questions/59850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Should I use a dedicated network channel between the database and the application server? Should I use a dedicated network channel between the database and the application server? ...or... Connecting both in the switch along with all other computer nodes makes no diference at all? The matter is performance! A: It all depends on the throughput needs of your application. If you absolutely need the lowest latency possible, then it would make sense to optimize the routes. Aside from hugely scalable software, I would argue that this is rarely needed and you can just connect everything in a generic fashion. A: It depends on your non-functional requirements. Assuming the NICs are running at the same rate, keeping the database traffic away from the front-end traffic can only be a good thing from a bandwidth perspective - if bandwidth is an issue. Far more significant is that security is improved by keeping the front-side and data-sides on different networks as the only way to gain direct access to the database is to compromise the application server. A: Using the shared switch could give increased latency, especially if the switch is busy. Also, you may be able to hook up a faster dedicated network channel (e.g. gigabit ethernet, if your switch is 100Mbit). Whether any of this is worth doing or not depends on your application though. You may also want to use a dedicated channel for increased security (making your database server less accessible).
{ "language": "en", "url": "https://stackoverflow.com/questions/59857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: When testing your MVC-based UI, how much of the test setup do you make common? I'm trying to test a simple WebForms (ASP.NET) based UI, and I follow the MVP pattern to make my UI more testable. As I follow the TDD methodology for backend algorithms, I find that there are some unit test refactorings that happen in the spirit of the DRY principle (Don't Repeat Yourself). As I try to apply this to the UI using Rhino Mocks to verify my interactions, I see many commonalities in the controller tests when setting up the view or model expectations. My question is: how far do you typically take this refactoring, if at all? I'm curious to see how other TDDers test their MVC/MVP based UIs. A: I would not refactor tests like standard code. Tests start to become more obscure as you refactor things into common base classes, helper methods, etc. Tests should be sufficiently clear on their own. DRY is not a test concern. That said, there are many plumbing things that are commonly done, and those should be abstracted away. A: I use MVP, and in my tests I try to apply most of the refactoring I would in standard code. It normally doesn't work quite as well on the tests, due to the slight variations needed to test different scenarios, but within parts there can be commonality, and when possible I do consolidate. This eases the changes needed later as the project evolves; just like in your standard code, it is easier to change one place instead of 20. A: I prefer to treat unit tests as pure functional programs, to avoid having to test them. If an operation is common enough between tests, then I would evaluate it for the standard codebase, but even then I'd avoid refactoring the tests, because I tend to have lots of them, especially for GUI-driven business logic. A: I use Selenium for functional testing, and I'm using JUnit to test my controllers. I'll mock out services or resources used by the controller and test to see what URI the controller is redirecting to, etc. The only thing I'm not really testing at this point is the views, but I have employed functional testing to compensate.
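As an illustration of the "abstract the plumbing, keep the intent visible" advice, a hedged C# sketch (the MVP types are hypothetical; Rhino Mocks 3.5 AAA syntax is assumed):

using System.Collections.Generic;
using NUnit.Framework;
using Rhino.Mocks;

[TestFixture]
public class ProductControllerTests
{
    private IProductView _view;              // hypothetical MVP view interface
    private ProductController _controller;   // hypothetical presenter/controller

    [SetUp]
    public void CreateControllerWithMockedView()
    {
        // Shared plumbing: every test gets a fresh mocked view and controller.
        _view = MockRepository.GenerateMock<IProductView>();
        _controller = new ProductController(_view);
    }

    [Test]
    public void Load_PushesProductsToTheView()
    {
        _controller.Load();

        // The intent of each test stays in the test itself.
        _view.AssertWasCalled(v => v.ShowProducts(Arg<IEnumerable<Product>>.Is.Anything));
    }
}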
{ "language": "en", "url": "https://stackoverflow.com/questions/59859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Are Stored Procedures more efficient, in general, than inline statements on modern RDBMS's? Conventional wisdom states that stored procedures are always faster. So, since they're always faster, use them ALL THE TIME. I am pretty sure this is grounded in some historical context where this was once the case. Now, I'm not advocating that Stored Procs are not needed, but I want to know in what cases stored procedures are necessary in modern databases such as MySQL, SQL Server, Oracle, or <Insert_your_DB_here>. Is it overkill to have ALL access through stored procedures? A: It's a debate that rages on and on (for instance, here). It's as easy to write bad stored procedures as it is to write bad data access logic in your app. My preference is for Stored Procs, but that's because I'm typically working with very large and complex apps in an enterprise environment where there are dedicated DBAs who are responsible for keeping the database servers running sweetly. In other situations, I'm happy enough for data access technologies such as LINQ to take care of the optimisation. Pure performance isn't the only consideration, though. Aspects such as security and configuration management are typically at least as important. Edit: While Frans Bouma's article is indeed verbose, it misses the point with regard to security by a mile. The fact that it's 5 years old doesn't help its relevance, either. A: There is no noticeable speed difference for stored procedures vs parameterized or prepared queries on most modern databases, because the database will also cache execution plans for those queries. Note that a parameterized query is not the same as ad hoc sql. The main reason imo to still favor stored procedures today has more to do with security. If you use stored procedures exclusively, you can disable INSERT, SELECT, UPDATE, DELETE, ALTER, DROP, and CREATE etc permissions for your application's user, only leaving it with EXECUTE. This provides a little extra protection against 2nd order sql injection. Parameterized queries only protect against 1st order injection. A: Obviously, actual performance ought to be measured in individual cases, not assumed. But even in cases where performance is hampered by a stored procedure, there are good reasons to use them: * *Application developers aren't always the best SQL coders. Stored procedures hides SQL from the application. *Stored procedures automatically use bind variables. Application developers often avoid bind variables because they seem like unneeded code and show little benefit in small test systems. Later on, the failure to use bind variables can throttle RDBMS performance. *Stored procedures create a layer of indirection that might be useful later on. It's possible to change implementation details (including table structure) on the database side without touching application code. *The exercise of creating stored procedures can be useful for documenting all database interactions for a system. And it's easier to update the documentation when things change. That said, I usually stick raw SQL in my applications so that I can control it myself. It depends on your development team and philosophy. A: The one topic that no one has yet mentioned as a benefit of stored procedures is security. If you build the application exclusively with data access via stored procedures, you can lockdown the database so the ONLY access is via those stored procedures. 
Therefore, even if someone gets a database ID and password, they will be limited in what they can see or do against that database. A: In 2007 I was on a project where we used MS SQL Server via an ORM. We had two big, growing tables which took up to 7-8 seconds of load time on the SQL Server. After making two large stored SQL procedures and optimizing them from the query planner, each DB load time got down to less than 20 milliseconds, so clearly there are still efficiency reasons to use stored SQL procedures. Having said that, we found out that the most important benefits of stored procedures were the added ease of maintenance, security, data integrity, and the decoupling of business logic from the middleware logic, with all middleware logic benefiting from reuse of the two procedures. Our ORM vendor made the usual claim that firing off many small SQL queries was going to be more efficient than fetching large, joined data sets. Our experience (to our surprise) showed something else. This may of course vary between machines, networks, operating systems, SQL servers, application frameworks, ORM frameworks, and language implementations, so measure any benefit you THINK you may get from doing something else. It wasn't until we benchmarked that we discovered the problem was between the ORM and the database taking all the load. A: Reasons for using stored procedures: * *Reduce network traffic -- you have to send the SQL statement across the network. With sprocs, you can execute SQL in batches, which is also more efficient. *Caching query plan -- the first time the sproc is executed, SQL Server creates an execution plan, which is cached for reuse. This is particularly performant for small queries run frequently. *Ability to use output parameters -- if you send inline SQL that returns one row, you can only get back a recordset. With sprocs you can get them back as output parameters, which is considerably faster. *Permissions -- when you send inline SQL, you have to grant permissions on the table(s) to the user, which grants much more access than merely granting permission to execute a sproc. *Separation of logic -- remove the SQL-generating code and segregate it in the database. *Ability to edit without recompiling -- this can be controversial. You can edit the SQL in a sproc without having to recompile the application. *Find where a table is used -- with sprocs, if you want to find all SQL statements referencing a particular table, you can export the sproc code and search it. This is much easier than trying to find it in code. *Optimization -- it's easier for a DBA to optimize the SQL and tune the database when sprocs are used. It's easier to find missing indexes and such. *SQL injection attacks -- properly written inline SQL can defend against attacks, but sprocs are better for this protection. A: I prefer to use SPs when it makes sense to use them. In SQL Server, anyway, there is no performance advantage to SPs over a parameterized query. However, at my current job my boss mentioned that we are forced to use SPs because our customers require them. They feel that they are more secure. I have not been here long enough to see whether we are implementing role-based security, but I have a feeling we do. So the customers' feelings trump all other arguments in this case. A: NOTE that this is a general look at stored procedures, not tied to a specific DBMS. Some DBMS (and even different versions of the same DBMS!)
may operate contrary to this, so you'll want to double-check with your target DBMS before assuming all of this still holds. I've been a Sybase ASE, MySQL, and SQL Server DBA on-and off since for almost a decade (along with application development in C, PHP, PL/SQL, C#.NET, and Ruby). So, I have no particular axe to grind in this (sometimes) holy war. The historical performance benefit of stored procs have generally been from the following (in no particular order): * *Pre-parsed SQL *Pre-generated query execution plan *Reduced network latency *Potential cache benefits Pre-parsed SQL -- similar benefits to compiled vs. interpreted code, except on a very micro level. Still an advantage? Not very noticeable at all on the modern CPU, but if you are sending a single SQL statement that is VERY large eleventy-billion times a second, the parsing overhead can add up. Pre-generated query execution plan. If you have many JOINs the permutations can grow quite unmanageable (modern optimizers have limits and cut-offs for performance reasons). It is not unknown for very complicated SQL to have distinct, measurable (I've seen a complicated query take 10+ seconds just to generate a plan, before we tweaked the DBMS) latencies due to the optimizer trying to figure out the "near best" execution plan. Stored procedures will, generally, store this in memory so you can avoid this overhead. Still an advantage? Most DBMS' (the latest editions) will cache the query plans for INDIVIDUAL SQL statements, greatly reducing the performance differential between stored procs and ad hoc SQL. There are some caveats and cases in which this isn't the case, so you'll need to test on your target DBMS. Also, more and more DBMS allow you to provide optimizer path plans (abstract query plans) to significantly reduce optimization time (for both ad hoc and stored procedure SQL!!). WARNING Cached query plans are not a performance panacea. Occasionally the query plan that is generated is sub-optimal. For example, if you send SELECT * FROM table WHERE id BETWEEN 1 AND 99999999, the DBMS may select a full-table scan instead of an index scan because you're grabbing every row in the table (so sayeth the statistics). If this is the cached version, then you can get poor performance when you later send SELECT * FROM table WHERE id BETWEEN 1 AND 2. The reasoning behind this is outside the scope of this posting, but for further reading see: http://www.microsoft.com/technet/prodtechnol/sql/2005/frcqupln.mspx and http://msdn.microsoft.com/en-us/library/ms181055.aspx and http://www.simple-talk.com/sql/performance/execution-plan-basics/ "In summary, they determined that supplying anything other than the common values when a compile or recompile was performed resulted in the optimizer compiling and caching the query plan for that particular value. Yet, when that query plan was reused for subsequent executions of the same query for the common values (‘M’, ‘R’, or ‘T’), it resulted in sub-optimal performance. This sub-optimal performance problem existed until the query was recompiled. At that point, based on the @P1 parameter value supplied, the query might or might not have a performance problem." Reduced network latency A) If you are running the same SQL over and over -- and the SQL adds up to many KB of code -- replacing that with a simple "exec foobar" can really add up. B) Stored procs can be used to move procedural code into the DBMS. 
This saves shuffling large amounts of data off to the client only to have it send a trickle of info back (or none at all!). Analogous to doing a JOIN in the DBMS vs. in your code (everyone's favorite WTF!) Still an advantage? A) Modern 1Gb (and 10Gb and up!) Ethernet really make this negligible. B) Depends on how saturated your network is -- why shove several megabytes of data back and forth for no good reason? Potential cache benefits Performing server-side transforms of data can potentially be faster if you have sufficient memory on the DBMS and the data you need is in memory of the server. Still an advantage? Unless your app has shared memory access to DBMS data, the edge will always be to stored procs. Of course, no discussion of Stored Procedure optimization would be complete without a discussion of parameterized and ad hoc SQL. Parameterized / Prepared SQL Kind of a cross between stored procedures and ad hoc SQL, they are embedded SQL statements in a host language that uses "parameters" for query values, e.g.: SELECT .. FROM yourtable WHERE foo = ? AND bar = ? These provide a more generalized version of a query that modern-day optimizers can use to cache (and re-use) the query execution plan, resulting in much of the performance benefit of stored procedures. Ad Hoc SQL Just open a console window to your DBMS and type in a SQL statement. In the past, these were the "worst" performers (on average) since the DBMS had no way of pre-optimizing the queries as in the parameterized/stored proc method. Still a disadvantage? Not necessarily. Most DBMS have the ability to "abstract" ad hoc SQL into parameterized versions -- thus more or less negating the difference between the two. Some do this implicitly or must be enabled with a command setting (SQL server: http://msdn.microsoft.com/en-us/library/ms175037.aspx , Oracle: http://www.praetoriate.com/oracle_tips_cursor_sharing.htm). Lessons learned? Moore's law continues to march on and DBMS optimizers, with every release, get more sophisticated. Sure, you can place every single silly teeny SQL statement inside a stored proc, but just know that the programmers working on optimizers are very smart and are continually looking for ways to improve performance. Eventually (if it's not here already) ad hoc SQL performance will become indistinguishable (on average!) from stored procedure performance, so any sort of massive stored procedure use ** solely for "performance reasons"** sure sounds like premature optimization to me. Anyway, I think if you avoid the edge cases and have fairly vanilla SQL, you won't notice a difference between ad hoc and stored procedures. A: In many cases, stored procedures are actually slower because they're more genaralized. While stored procedures can be highly tuned, in my experience there's enough development and institutional friction that they're left in place once they work, so stored procedures often tend to return a lot of columns "just in case" - because you don't want to deploy a new stored procedure every time you change your application. An OR/M, on the other hand, only requests the columns the application is using, which cuts down on network traffic, unnecessary joins, etc. A: Read Frans Bouma's excellent post (if a bit biased) on that. A: To me one advantage of stored procedures is to be host language agnostic: you can switch from a C, Python, PHP or whatever application to another programming language without rewriting your code. 
In addition, some features like bulk operations improve really performance and are not easily available (not at all?) in host languages. A: I don't know that they are faster. I like using ORM for data access (to not re-invent the wheel) but I realize that's not always a viable option. Frans Bouma has a good article on this subject : http://weblogs.asp.net/fbouma/archive/2003/11/18/38178.aspx A: All I can speak to is SQL server. In that platform, stored procedures are lovely because the server stores the execution plan, which in most cases speeds up performance a good bit. I say "in most cases", because if the SP has widely varying paths of execution you might get suboptimal performance. However, even in those cases, some enlightened refactoring of the SPs can speed things up. A: Using stored procedures for CRUD operations is probably overkill, but it will depend on the tools be used and your own preferences (or requirements). I prefer inline SQL, but I make sure to use parameterized queries to prevent SQL injection attacks. I keep a print out of this xkcd comic as a reminder of what can go wrong if you are not careful. Stored procedures can have real performance benefits when you are working with multiple sets of data to return a single set of data. It's usually more efficient to process sets of data in the stored procedure than sending them over the wire to be processed at the client end. A: Realising this is a bit off-topic to the question, but if you are using a lot of stored procedures, make sure there is a consistent way to put them under some sort of source control (e.g., subversion or git) and be able to migrate updates from your development system to the test system to the production system. When this is done by hand, with no way to easily audit what code is where, this quickly becomes a nightmare. A: Stored procs are great for cases where the SQL code is run frequently because the database stores it tokenized in memory. If you repeatedly ran the same code outside of a stored proc, you will likey incur a performance hit from the database reparsing the same code over and over. I typically frequently called code as a stored proc or as a SqlCommand (.NET) object and execute as many times as needed. A: Yes, they are faster most of time. SQL composition is a huge performance tuning area too. If I am doing a back office type app I may skip them but anything production facing I use them for sure for all the reasons others spoke too...namely security. A: IMHO... Restricting "C_UD" operations to stored procedures can keep the data integrity logic in one place. This can also be done by restricting"C_UD" operations to a single middle ware layer. Read operations can be provided to the application so they can join only the tables / columns they need. A: Stored procedures can also be used instead of parameterized queries (or ad-hoc queries) for some other advantages too : * *If you need to correct something (a sort order etc.) you don't need to recompile your app *You could deny access to all tables for that user account, grant access only to stored procedures and route all access through stored procedures. This way you can have custom validation of all input much more flexible than table constraints. A: Reduced network traffic -- SP are generally worse then Dynamic SQL. Because people don't create a new SP for every select, if you need just one column you are told use the SP that has the columns they need and ignore the rest. 
Get an extra column and any network savings you had just went away. You also tend to have a lot of client-side filtering when SPs are used. Caching -- MS SQL does not treat them any differently; it hasn't since MS SQL 2000 (it may have been 7, but I don't remember). Permissions -- Not a problem, since almost everything I do is web-based or has some middle application tier that does all the database access. The only software I work with that has direct client-to-database access are 3rd-party products that are designed for users to have direct access and are built around giving users permissions. And yes, the MS SQL permission security model SUCKS!!! (I have not spent time on 2008 yet.) As a final part to this, I would like to see a survey of how many people are still doing direct client/server programming vs. web and middle application server programming; and if they are doing large projects, why no ORM? Separation -- People would question why you are putting business logic outside of the middle tier. Also, if you are looking to separate data-handling code, there are ways of doing that without putting it in the database. Ability to edit -- What, you have no testing and version control to worry about? Also, this is only a problem with client/server; in the web world it is not a problem. Find the table -- Only if you can identify the SPs that use it; I will stick with the tools of the version control system, Agent Ransack, or Visual Studio to find it. Optimization -- Your DBA should be using the tools of the database to find the queries that need optimization. The database can tell the DBA which statements are taking up the most time and resources, and they can fix things from there. For complex SQL statements the programmers should be told to talk to the DBA; for simple selects, don't worry about it. SQL injection attacks -- SPs offer no better protection. The only thing that gets them the nod is that most SP material teaches using parameters, whereas most dynamic SQL examples ignore parameters.
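To make the parameterized-vs-stored-procedure point concrete, a small illustrative T-SQL sketch (the table and column names are made up); on a modern SQL Server both forms get a cached plan:

-- Stored procedure form
CREATE PROCEDURE dbo.GetCustomersByRegion
    @Region nvarchar(50)
AS
BEGIN
    SELECT CustomerID, Name
    FROM dbo.Customers
    WHERE Region = @Region;
END
GO

-- Parameterized ad hoc equivalent (what a client library sends via sp_executesql)
EXEC sp_executesql
    N'SELECT CustomerID, Name FROM dbo.Customers WHERE Region = @Region',
    N'@Region nvarchar(50)',
    @Region = N'West';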
{ "language": "en", "url": "https://stackoverflow.com/questions/59880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "120" }
Q: Best method to obfuscate or secure .NET assemblies I'm looking for a technique or tool which we can use to obfuscate or somehow secure our compiled C# code. The goal is not user/data security, but to hinder reverse engineering of some of the technology in our software. This is not for use on the web, but for a desktop application. So, do you know of any tools available to do this type of thing? (They need not be free.) What kind of performance implications do they have, if any? Does this have any negative side effects when using a debugger during development? We log stack traces of problems in the field. How would obfuscation affect this? A: This is a pretty good list of obfuscators from the Visual Studio Marketplace: Obfuscators * *ArmDot *Crypto Obfuscator *Demeanor for .NET *DeployLX CodeVeil *Dotfuscator .NET Obfuscator *Semantic Designs: C# Source Code Obfuscator *Smartassembly *Spices.Net *Xenocode Postbuild 2006 *.NET Reactor I have not observed any performance issues when obfuscating my code. If you're just sending text-based stack traces, you might have a problem translating the method names. A: There are tools that also 'deobfuscate' obfuscated DLLs - I'd suggest turning the piece that needs to be protected into an unmanaged component. A: http://xheo.com/products/code-protection Done the job for me in the past. A: You are wasting your time going down that path. If you have code that you don't want anyone to see, you need to keep it behind closed doors. For example, only execute that code on your own server using a web service interface. Obfuscating your code only deters the most casual of people. As the video game industry learned a long time ago, no code is safe from cracking.
{ "language": "en", "url": "https://stackoverflow.com/questions/59893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: How do I get the directory where a Bash script is located from within the script itself? How do I get the path of the directory in which a Bash script is located, inside that script? I want to use a Bash script as a launcher for another application. I want to change the working directory to the one where the Bash script is located, so I can operate on the files in that directory, like so: $ ./application A: Try the following cross-compatible solution: CWD="$(cd -P -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd -P)" As the commands such as realpath or readlink could be not available (depending on the operating system). Note: In Bash, it's recommended to use ${BASH_SOURCE[0]} instead of $0, otherwise path can break when sourcing the file (source/.). Alternatively you can try the following function in Bash: realpath () { [[ $1 = /* ]] && echo "$1" || echo "$PWD/${1#./}" } This function takes one argument. If argument has already absolute path, print it as it is, otherwise print $PWD variable + filename argument (without ./ prefix). Related: * *How can I set the current working directory to the directory of the script in Bash? *Bash script absolute path with OS X *Reliable way for a Bash script to get the full path to itself A: I believe I've got this one. I'm late to the party, but I think some will appreciate it being here if they come across this thread. The comments should explain: #!/bin/sh # dash bash ksh # !zsh (issues). G. Nixon, 12/2013. Public domain. ## 'linkread' or 'fullpath' or (you choose) is a little tool to recursively ## dereference symbolic links (ala 'readlink') until the originating file ## is found. This is effectively the same function provided in stdlib.h as ## 'realpath' and on the command line in GNU 'readlink -f'. ## Neither of these tools, however, are particularly accessible on the many ## systems that do not have the GNU implementation of readlink, nor ship ## with a system compiler (not to mention the requisite knowledge of C). ## This script is written with portability and (to the extent possible, speed) ## in mind, hence the use of printf for echo and case statements where they ## can be substituded for test, though I've had to scale back a bit on that. ## It is (to the best of my knowledge) written in standard POSIX shell, and ## has been tested with bash-as-bin-sh, dash, and ksh93. zsh seems to have ## issues with it, though I'm not sure why; so probably best to avoid for now. ## Particularly useful (in fact, the reason I wrote this) is the fact that ## it can be used within a shell script to find the path of the script itself. ## (I am sure the shell knows this already; but most likely for the sake of ## security it is not made readily available. The implementation of "$0" ## specificies that the $0 must be the location of **last** symbolic link in ## a chain, or wherever it resides in the path.) This can be used for some ## ...interesting things, like self-duplicating and self-modifiying scripts. ## Currently supported are three errors: whether the file specified exists ## (ala ENOENT), whether its target exists/is accessible; and the special ## case of when a sybolic link references itself "foo -> foo": a common error ## for beginners, since 'ln' does not produce an error if the order of link ## and target are reversed on the command line. (See POSIX signal ELOOP.) ## It would probably be rather simple to write to use this as a basis for ## a pure shell implementation of the 'symlinks' util included with Linux. 
## As an aside, the amount of code below **completely** belies the amount ## effort it took to get this right -- but I guess that's coding for you. ##===-------------------------------------------------------------------===## for argv; do :; done # Last parameter on command line, for options parsing. ## Error messages. Use functions so that we can sub in when the error occurs. recurses(){ printf "Self-referential:\n\t$argv ->\n\t$argv\n" ;} dangling(){ printf "Broken symlink:\n\t$argv ->\n\t"$(readlink "$argv")"\n" ;} errnoent(){ printf "No such file: "$@"\n" ;} # Borrow a horrible signal name. # Probably best not to install as 'pathfull', if you can avoid it. pathfull(){ cd "$(dirname "$@")"; link="$(readlink "$(basename "$@")")" ## 'test and 'ls' report different status for bad symlinks, so we use this. if [ ! -e "$@" ]; then if $(ls -d "$@" 2>/dev/null) 2>/dev/null; then errnoent 1>&2; exit 1; elif [ ! -e "$@" -a "$link" = "$@" ]; then recurses 1>&2; exit 1; elif [ ! -e "$@" ] && [ ! -z "$link" ]; then dangling 1>&2; exit 1; fi fi ## Not a link, but there might be one in the path, so 'cd' and 'pwd'. if [ -z "$link" ]; then if [ "$(dirname "$@" | cut -c1)" = '/' ]; then printf "$@\n"; exit 0; else printf "$(pwd)/$(basename "$@")\n"; fi; exit 0 fi ## Walk the symlinks back to the origin. Calls itself recursivly as needed. while [ "$link" ]; do cd "$(dirname "$link")"; newlink="$(readlink "$(basename "$link")")" case "$newlink" in "$link") dangling 1>&2 && exit 1 ;; '') printf "$(pwd)/$(basename "$link")\n"; exit 0 ;; *) link="$newlink" && pathfull "$link" ;; esac done printf "$(pwd)/$(basename "$newlink")\n" } ## Demo. Install somewhere deep in the filesystem, then symlink somewhere ## else, symlink again (maybe with a different name) elsewhere, and link ## back into the directory you started in (or something.) The absolute path ## of the script will always be reported in the usage, along with "$0". if [ -z "$argv" ]; then scriptname="$(pathfull "$0")" # Yay ANSI l33t codes! Fancy. printf "\n\033[3mfrom/as: \033[4m$0\033[0m\n\n\033[1mUSAGE:\033[0m " printf "\033[4m$scriptname\033[24m [ link | file | dir ]\n\n " printf "Recursive readlink for the authoritative file, symlink after " printf "symlink.\n\n\n \033[4m$scriptname\033[24m\n\n " printf " From within an invocation of a script, locate the script's " printf "own file\n (no matter where it has been linked or " printf "from where it is being called).\n\n" else pathfull "$@" fi A: Hmm, if in the path, basename and dirname are just not going to cut it and walking the path is hard (what if the parent didn't export PATH?!). However, the shell has to have an open handle to its script, and in Bash the handle is #255. SELF=`readlink /proc/$$/fd/255` works for me. A: The best compact solution in my view would be: "$( cd "$( echo "${BASH_SOURCE[0]%/*}" )"; pwd )" There is no reliance on anything other than Bash. The use of dirname, readlink and basename will eventually lead to compatibility issues, so they are best avoided if at all possible. A: #!/usr/bin/env bash SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd ) is a useful one-liner which will give you the full directory name of the script no matter where it is being called from. It will work as long as the last component of the path used to find the script is not a symlink (directory links are OK). 
If you also want to resolve any links to the script itself, you need a multi-line solution: #!/usr/bin/env bash SOURCE=${BASH_SOURCE[0]} while [ -L "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink DIR=$( cd -P "$( dirname "$SOURCE" )" >/dev/null 2>&1 && pwd ) SOURCE=$(readlink "$SOURCE") [[ $SOURCE != /* ]] && SOURCE=$DIR/$SOURCE # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located done DIR=$( cd -P "$( dirname "$SOURCE" )" >/dev/null 2>&1 && pwd ) This last one will work with any combination of aliases, source, bash -c, symlinks, etc. Beware: if you cd to a different directory before running this snippet, the result may be incorrect! Also, watch out for $CDPATH gotchas, and stderr output side effects if the user has smartly overridden cd to redirect output to stderr instead (including escape sequences, such as when calling update_terminal_cwd >&2 on Mac). Adding >/dev/null 2>&1 at the end of your cd command will take care of both possibilities. To understand how it works, try running this more verbose form: #!/usr/bin/env bash SOURCE=${BASH_SOURCE[0]} while [ -L "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink TARGET=$(readlink "$SOURCE") if [[ $TARGET == /* ]]; then echo "SOURCE '$SOURCE' is an absolute symlink to '$TARGET'" SOURCE=$TARGET else DIR=$( dirname "$SOURCE" ) echo "SOURCE '$SOURCE' is a relative symlink to '$TARGET' (relative to '$DIR')" SOURCE=$DIR/$TARGET # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located fi done echo "SOURCE is '$SOURCE'" RDIR=$( dirname "$SOURCE" ) DIR=$( cd -P "$( dirname "$SOURCE" )" >/dev/null 2>&1 && pwd ) if [ "$DIR" != "$RDIR" ]; then echo "DIR '$RDIR' resolves to '$DIR'" fi echo "DIR is '$DIR'" And it will print something like: SOURCE './scriptdir.sh' is a relative symlink to 'sym2/scriptdir.sh' (relative to '.') SOURCE is './sym2/scriptdir.sh' DIR './sym2' resolves to '/home/ubuntu/dotfiles/fo fo/real/real1/real2' DIR is '/home/ubuntu/dotfiles/fo fo/real/real1/real2' A: pwd can be used to find the current working directory, and dirname to find the directory of a particular file (command that was run, is $0, so dirname $0 should give you the directory of the current script). However, dirname gives precisely the directory portion of the filename, which more likely than not is going to be relative to the current working directory. If your script needs to change directory for some reason, then the output from dirname becomes meaningless. I suggest the following: #!/usr/bin/env bash reldir="$( dirname -- "$0"; )"; cd "$reldir"; directory="$( pwd; )"; echo "Directory is ${directory}"; This way, you get an absolute, rather than a relative directory. Since the script will be run in a separate Bash instance, there isn't any need to restore the working directory afterwards, but if you do want to change back in your script for some reason, you can easily assign the value of pwd to a variable before you change directory, for future use. Although just cd "$( dirname -- "$0"; )"; solves the specific scenario in the question, I find having the absolute path to more more useful generally. A: You can do that just combining the script name ($0) with realpath and/or dirname. It works for Bash and Shell. 
#!/usr/bin/env bash RELATIVE_PATH="${0}" RELATIVE_DIR_PATH="$(dirname "${0}")" FULL_DIR_PATH="$(realpath "${0}" | xargs dirname)" FULL_PATH="$(realpath "${0}")" echo "RELATIVE_PATH->${RELATIVE_PATH}<-" echo "RELATIVE_DIR_PATH->${RELATIVE_DIR_PATH}<-" echo "FULL_DIR_PATH->${FULL_DIR_PATH}<-" echo "FULL_PATH->${FULL_PATH}<-" The output will be something like this: # RELATIVE_PATH->./bin/startup.sh<- # RELATIVE_DIR_PATH->./bin<- # FULL_DIR_PATH->/opt/my_app/bin<- # FULL_PATH->/opt/my_app/bin/startup.sh<- $0 is the name of the script itself 4.4. Special Variable Types An example: LozanoMatheus/get_script_paths.sh A: The dirname command is the most basic, simply parsing the path up to the filename off of the $0 (script name) variable: dirname -- "$0"; But, as matt b pointed out, the path returned is different depending on how the script is called. pwd doesn't do the job because that only tells you what the current directory is, not what directory the script resides in. Additionally, if a symbolic link to a script is executed, you're going to get a (probably relative) path to where the link resides, not the actual script. Some others have mentioned the readlink command, but at its simplest, you can use: dirname -- "$( readlink -f -- "$0"; )"; readlink will resolve the script path to an absolute path from the root of the filesystem. So, any paths containing single or double dots, tildes and/or symbolic links will be resolved to a full path. Here's a script demonstrating each of these, whatdir.sh: #!/usr/bin/env bash echo "pwd: `pwd`" echo "\$0: $0" echo "basename: `basename -- "$0"`" echo "dirname: `dirname -- "$0"`" echo "dirname/readlink: $( dirname -- "$( readlink -f -- "$0"; )"; )" Running this script in my home dir, using a relative path: >>>$ ./whatdir.sh pwd: /Users/phatblat $0: ./whatdir.sh basename: whatdir.sh dirname: . dirname/readlink: /Users/phatblat Again, but using the full path to the script: >>>$ /Users/phatblat/whatdir.sh pwd: /Users/phatblat $0: /Users/phatblat/whatdir.sh basename: whatdir.sh dirname: /Users/phatblat dirname/readlink: /Users/phatblat Now changing directories: >>>$ cd /tmp >>>$ ~/whatdir.sh pwd: /tmp $0: /Users/phatblat/whatdir.sh basename: whatdir.sh dirname: /Users/phatblat dirname/readlink: /Users/phatblat And finally using a symbolic link to execute the script: >>>$ ln -s ~/whatdir.sh whatdirlink.sh >>>$ ./whatdirlink.sh pwd: /tmp $0: ./whatdirlink.sh basename: whatdirlink.sh dirname: . dirname/readlink: /Users/phatblat There is however one case where this doesn't work, when the script is sourced (instead of executed) in bash: >>>$ cd /tmp >>>$ . ~/whatdir.sh pwd: /tmp $0: bash basename: bash dirname: . dirname/readlink: /tmp A: This is, annoyingly, the only one-liner I've found that works on both Linux and macOS when the executable script is a symlink: SCRIPT_DIR=$(python -c "import os; print(os.path.dirname(os.path.realpath('${BASH_SOURCE[0]}')))") or, similarly, using python3 pathlib module: SCRIPT_DIR=$(python3 -c "from pathlib import Path; print(Path('${BASH_SOURCE[0]}').resolve().parent)") Tested on Linux and macOS and compared to other solutions in this gist: https://gist.github.com/ptc-mrucci/61772387878ed53a6c717d51a21d9371 A: None of these other answers worked for a Bash script launched by Finder in OS X. I ended up using: SCRIPT_LOC="`ps -p $$ | sed /PID/d | sed s:.*/Network/:/Network/: | sed s:.*/Volumes/:/Volumes/:`" It is not pretty, but it gets the job done. 
A: Use a combination of readlink to canonicalize the name (with a bonus of following it back to its source if it is a symlink) and dirname to extract the directory name: script="`readlink -f "${BASH_SOURCE[0]}"`" dir="`dirname "$script"`" A: This worked for me when the other answers here did not: thisScriptPath=`realpath $0` thisDirPath=`dirname $thisScriptPath` echo $thisDirPath A: The top response does not work in all cases. I had problems with the BASH_SOURCE-plus-cd approach on both freshly installed and older Ubuntu 16.04 (Xenial Xerus) systems when invoking the shell script by means of "sh my_script.sh", so I tried out something different that, as of now, runs quite smoothly for my purposes. The approach is a bit more compact in the script and feels far less cryptic. This alternative uses the external applications 'realpath' and 'dirname' from the coreutils package. (Okay, not everyone likes the overhead of invoking extra processes, but compared with the multi-line scripting needed to resolve the real file, solving it with a single external binary isn't bad either.) So here is one example of this alternative for querying the true absolute path to a given file: PATH_TO_SCRIPT=`realpath -s $0` PATH_TO_SCRIPT_DIR=`dirname $PATH_TO_SCRIPT` But preferably you should use this evolved version, which also supports paths with spaces (or maybe even some other special characters): PATH_TO_SCRIPT=`realpath -s "$0"` PATH_TO_SCRIPT_DIR=`dirname "$PATH_TO_SCRIPT"` Indeed, if you don't need the value of the PATH_TO_SCRIPT variable, you can merge this two-liner into a single line, but there is little reason to spend the effort on that. A: None of the current solutions work if there are any newlines at the end of the directory name; they will be stripped by the command substitution. To work around this you can append a non-newline character inside the command substitution and then strip just that character off: dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd && echo x)" dir="${dir%x}" This protects against two very common situations: accidents and sabotage. A script shouldn't fail in unpredictable ways just because someone, somewhere, did a mkdir $'\n'. A: SCRIPT_DIR=$( cd ${0%/*} && pwd -P ) A: This gets the current working directory on Mac OS X v10.6.6 (Snow Leopard): DIR=$(cd "$(dirname "$0")"; pwd) A: I don't think this is as easy as others have made it out to be. pwd doesn't work, as the current directory is not necessarily the directory with the script. $0 doesn't always have the information either. Consider the following three ways to invoke a script: ./script /usr/bin/script script In the first and third ways $0 doesn't have the full path information. In the second and third, pwd does not work. The only way to get the directory in the third way would be to run through the path and find the file with the correct match. Basically the code would have to redo what the OS does. One way to do what you are asking would be to just hardcode the data in the /usr/share directory, and reference it by its full path. Data shouldn't be in the /usr/bin directory anyway, so this is probably the thing to do. A: This is the only way I've found to tell reliably: SCRIPT_DIR=$(dirname $(cd "$(dirname "$BASH_SOURCE")"; pwd)) A: I usually use: dirname $(which $BASH_SOURCE) A: $(dirname "$(readlink -f "$BASH_SOURCE")") A: $0 is not a reliable way to get the current script path.
For example, this is my .xprofile: #!/bin/bash echo "$0 $1 $2" echo "${BASH_SOURCE[0]}" # $dir/my_script.sh & cd /tmp && ~/.xprofile && source ~/.xprofile /home/puchuu/.xprofile /home/puchuu/.xprofile -bash /home/puchuu/.xprofile So please use BASH_SOURCE instead. A: Here's a command that works under either Bash or zsh, and whether executed stand-alone or sourced: [ -n "$ZSH_VERSION" ] && this_dir=$(dirname "${(%):-%x}") \ || this_dir=$(dirname "${BASH_SOURCE[0]:-$0}") How it works The zsh current file expansion: ${(%):-%x} ${(%):-%x} in zsh expands to the path of the currently-executing file. The fallback substitution operator :- You know already that ${...} substitutes variables inside of strings. You might not know that certain operations are possible (in both Bash and zsh) on the variables during substitution, like the fallback expansion operator :-: % x=ok % echo "${x}" ok % echo "${x:-fallback}" ok % x= % echo "${x:-fallback}" fallback % y=yvalue % echo "${x:-$y}" yvalue The %x prompt escape code Next, we'll introduce prompt escape codes, a zsh-only feature. In zsh, %x will expand to the path of the file, but normally this is only when doing expansion for prompt strings. To enable those codes in our substitution, we can add a (%) flag before the variable name: % cat apath/test.sh fpath=%x echo "${(%)fpath}" % source apath/test.sh apath/test.sh % cd apath % source test.sh test.sh An unlikely match: the percent escape and the fallback What we have so far works, but it would be tidier to avoid creating the extra fpath variable. Instead of putting %x in fpath, we can use :- and put %x in the fallback string: % cat test.sh echo "${(%):-%x}" % source test.sh test.sh Note that we normally would put a variable name between (%) and :-, but we left it blank. The variable with a blank name can't be declared or set, so the fallback is always triggered. Finishing up: what about print -P %x? Now we almost have the directory of our script. We could have used print -P %x to get the same file path with fewer hacks, but in our case, where we need to pass it as an argument to dirname, that would have required the overhead of a starting a new subshell: % cat apath/test.sh dirname "$(print -P %x)" # $(...) runs a command in a new process dirname "${(%):-%x}" % source apath/test.sh apath apath It turns out that the hacky way is both more performant and succinct. A: One advantage of this method is that it doesn't involve anything outside Bash itself and does not fork any subshell neither. First, use pattern substitution to replace anything not starting with / (i.e., a relative path) with $PWD/. Since we use a substitution to match the first character of $0, we also have to append it back (${0:0:1} in the substitution). Now we have a full path to the script; we can get the directory by removing the last / and anything the follows (i.e., the script name). That directory can then be used in cd or as a prefix to other paths relative to your script. #!/bin/bash BIN=${0/#[!\/]/"$PWD/${0:0:1}"} DIR=${BIN%/*} cd "$DIR" If your script may be sourced rather than executed, you can of course replace $0 with ${BASH_SOURCE[0]}, such as: BIN=${BASH_SOURCE[0]/#[!\/]/"$PWD/${BASH_SOURCE[0]:0:1}"} This will work for executable scripts too. It's longer, but more polyvalent. A: Most answers either don't handle files which are symlinked via a relative path, aren't one-liners or don't handle BSD (Mac). 
A solution which does all three is: HERE=$(cd "$(dirname "$BASH_SOURCE")"; cd -P "$(dirname "$(readlink "$BASH_SOURCE" || echo .)")"; pwd) First, cd to bash's conception of the script's directory. Then readlink the file to see if it is a symlink (relative or otherwise), and if so, cd to that directory. If not, cd to the current directory (necessary to keep things a one-liner). Then echo the current directory via pwd. You could add -- to the arguments of cd and readlink to avoid issues of directories named like options, but I don't bother for most purposes. You can see the full explanation with illustrations here: https://www.binaryphile.com/bash/2020/01/12/determining-the-location-of-your-script-in-bash.html A: This is Linux specific, but you could use: SELF=$(readlink /proc/$$/fd/255) A: Here is a POSIX compliant one-liner: SCRIPT_PATH=`dirname "$0"`; SCRIPT_PATH=`eval "cd \"$SCRIPT_PATH\" && pwd"` # test echo $SCRIPT_PATH A: The shortest and most elegant way to do this is: #!/bin/bash DIRECTORY=$(cd `dirname $0` && pwd) echo $DIRECTORY This would work on all platforms and is super clean. More details can be found in "Which directory is that bash script in?". A: Summary: FULL_PATH_TO_SCRIPT="$(realpath "${BASH_SOURCE[-1]}")" # OR, if you do NOT need it to work for **sourced** scripts too: # FULL_PATH_TO_SCRIPT="$(realpath "$0")" # OR, depending on which path you want, in case of nested `source` calls # FULL_PATH_TO_SCRIPT="$(realpath "${BASH_SOURCE[0]}")" # OR, add `-s` to NOT expand symlinks in the path: # FULL_PATH_TO_SCRIPT="$(realpath -s "${BASH_SOURCE[-1]}")" SCRIPT_DIRECTORY="$(dirname "$FULL_PATH_TO_SCRIPT")" SCRIPT_FILENAME="$(basename "$FULL_PATH_TO_SCRIPT")" Details: How to obtain the full file path, full directory, and base filename of any script being run OR sourced... ...even when the called script is called from within another bash function or script, or when nested sourcing is being used! For many cases, all you need to acquire is the full path to the script you just called. This can be easily accomplished using realpath. Note that realpath is part of GNU coreutils. If you don't have it already installed (it comes default on Ubuntu), you can install it with sudo apt update && sudo apt install coreutils. get_script_path.sh (for the latest version of this script, see get_script_path.sh in my eRCaGuy_hello_world repo): #!/bin/bash # A. Obtain the full path, and expand (walk down) symbolic links # A.1. `"$0"` works only if the file is **run**, but NOT if it is **sourced**. # FULL_PATH_TO_SCRIPT="$(realpath "$0")" # A.2. `"${BASH_SOURCE[-1]}"` works whether the file is sourced OR run, and even # if the script is called from within another bash function! # NB: if `"${BASH_SOURCE[-1]}"` doesn't give you quite what you want, use # `"${BASH_SOURCE[0]}"` instead in order to get the first element from the array. FULL_PATH_TO_SCRIPT="$(realpath "${BASH_SOURCE[-1]}")" # B.1. `"$0"` works only if the file is **run**, but NOT if it is **sourced**. # FULL_PATH_TO_SCRIPT_KEEP_SYMLINKS="$(realpath -s "$0")" # B.2. `"${BASH_SOURCE[-1]}"` works whether the file is sourced OR run, and even # if the script is called from within another bash function! # NB: if `"${BASH_SOURCE[-1]}"` doesn't give you quite what you want, use # `"${BASH_SOURCE[0]}"` instead in order to get the first element from the array. 
FULL_PATH_TO_SCRIPT_KEEP_SYMLINKS="$(realpath -s "${BASH_SOURCE[-1]}")" # You can then also get the full path to the directory, and the base # filename, like this: SCRIPT_DIRECTORY="$(dirname "$FULL_PATH_TO_SCRIPT")" SCRIPT_FILENAME="$(basename "$FULL_PATH_TO_SCRIPT")" # Now print it all out echo "FULL_PATH_TO_SCRIPT = \"$FULL_PATH_TO_SCRIPT\"" echo "SCRIPT_DIRECTORY = \"$SCRIPT_DIRECTORY\"" echo "SCRIPT_FILENAME = \"$SCRIPT_FILENAME\"" IMPORTANT note on nested source calls: if "${BASH_SOURCE[-1]}" above doesn't give you quite what you want, try using "${BASH_SOURCE[0]}" instead. The first (0) index gives you the first entry in the array, and the last (-1) index gives you the last last entry in the array. Depending on what it is you're after, you may actually want the first entry. I discovered this to be the case when I sourced ~/.bashrc with . ~/.bashrc, which sourced ~/.bash_aliases with . ~/.bash_aliases, and I wanted the realpath (with expanded symlinks) to the ~/.bash_aliases file, NOT to the ~/.bashrc file. Since these are nested source calls, using "${BASH_SOURCE[0]}" gave me what I wanted: the expanded path to ~/.bash_aliases! Using "${BASH_SOURCE[-1]}", however, gave me what I did not want: the expanded path to ~/.bashrc. Example command and output: * *Running the script: ~/GS/dev/eRCaGuy_hello_world/bash$ ./get_script_path.sh FULL_PATH_TO_SCRIPT = "/home/gabriel/GS/dev/eRCaGuy_hello_world/bash/get_script_path.sh" SCRIPT_DIRECTORY = "/home/gabriel/GS/dev/eRCaGuy_hello_world/bash" SCRIPT_FILENAME = "get_script_path.sh" *Sourcing the script with . get_script_path.sh or source get_script_path.sh (the result is the exact same as above because I used "${BASH_SOURCE[-1]}" in the script instead of "$0"): ~/GS/dev/eRCaGuy_hello_world/bash$ . get_script_path.sh FULL_PATH_TO_SCRIPT = "/home/gabriel/GS/dev/eRCaGuy_hello_world/bash/get_script_path.sh" SCRIPT_DIRECTORY = "/home/gabriel/GS/dev/eRCaGuy_hello_world/bash" SCRIPT_FILENAME = "get_script_path.sh" If you use "$0" in the script instead of "${BASH_SOURCE[-1]}", you'll get the same output as above when running the script, but this undesired output instead when sourcing the script: ~/GS/dev/eRCaGuy_hello_world/bash$ . get_script_path.sh FULL_PATH_TO_SCRIPT = "/bin/bash" SCRIPT_DIRECTORY = "/bin" SCRIPT_FILENAME = "bash" And, apparently if you use "$BASH_SOURCE" instead of "${BASH_SOURCE[-1]}", it will not work if the script is called from within another bash function. So, using "${BASH_SOURCE[-1]}" is therefore the best way to do it, as it solves both of these problems! See the references below. Difference between realpath and realpath -s: Note that realpath also successfully walks down symbolic links to determine and point to their targets rather than pointing to the symbolic link. If you do NOT want this behavior (sometimes I don't), then add -s to the realpath command above, making that line look like this instead: # Obtain the full path, but do NOT expand (walk down) symbolic links; in # other words: **keep** the symlinks as part of the path! FULL_PATH_TO_SCRIPT="$(realpath -s "${BASH_SOURCE[-1]}")" This way, symbolic links are NOT expanded. Rather, they are left as-is, as symbolic links in the full path. The code above is now part of my eRCaGuy_hello_world repo in this file here: bash/get_script_path.sh. Reference and run this file for full examples both with and withOUT symlinks in the paths. See the bottom of the file for example output in both cases. 
References: * *How to retrieve absolute path given relative *taught me about the BASH_SOURCE variable: Unix & Linux: determining path to sourced shell script *taught me that BASH_SOURCE is actually an array, and we want the last element from it for it to work as expected inside a function (hence why I used "${BASH_SOURCE[-1]}" in my code here): Unix & Linux: determining path to sourced shell script *man bash --> search for BASH_SOURCE: BASH_SOURCE An array variable whose members are the source filenames where the corresponding shell function names in the FUNCNAME array variable are defined. The shell function ${FUNCNAME[$i]} is defined in the file ${BASH_SOURCE[$i]} and called from ${BASH_SOURCE[$i+1]}. See also: * *[my answer] Unix & Linux: determining path to sourced shell script A: pushd . > '/dev/null'; SCRIPT_PATH="${BASH_SOURCE[0]:-$0}"; while [ -h "$SCRIPT_PATH" ]; do cd "$( dirname -- "$SCRIPT_PATH"; )"; SCRIPT_PATH="$( readlink -f -- "$SCRIPT_PATH"; )"; done cd "$( dirname -- "$SCRIPT_PATH"; )" > '/dev/null'; SCRIPT_PATH="$( pwd; )"; popd > '/dev/null'; It works for all versions, including * *when called via multiple depth soft link, *when the file it *when script called by command "source" aka . (dot) operator. *when arg $0 is modified from caller. *"./script" *"/full/path/to/script" *"/some/path/../../another/path/script" *"./some/folder/script" Alternatively, if the Bash script itself is a relative symlink you want to follow it and return the full path of the linked-to script: pushd . > '/dev/null'; SCRIPT_PATH="${BASH_SOURCE[0]:-$0}"; while [ -h "$SCRIPT_PATH" ]; do cd "$( dirname -- "$SCRIPT_PATH"; )"; SCRIPT_PATH="$( readlink -f -- "$SCRIPT_PATH"; )"; done cd "$( dirname -- "$SCRIPT_PATH"; )" > '/dev/null'; SCRIPT_PATH="$( pwd; )"; popd > '/dev/null'; SCRIPT_PATH is given in full path, no matter how it is called. Just make sure you locate this at start of the script. A: #!/bin/sh PRG="$0" # need this for relative symlinks while [ -h "$PRG" ] ; do PRG=`readlink "$PRG"` done scriptdir=`dirname "$PRG"` A: This solution applies only to Bash. Note that the commonly supplied answer ${BASH_SOURCE[0]} won't work if you try to find the path from within a function. I've found this line to always work, regardless of whether the file is being sourced or run as a script. dirname ${BASH_SOURCE[${#BASH_SOURCE[@]} - 1]} If you want to follow symlinks use readlink on the path you get above, recursively or non-recursively. Here's a script to try it out and compare it to other proposed solutions. Invoke it as source test1/test2/test_script.sh or bash test1/test2/test_script.sh. # # Location: test1/test2/test_script.sh # echo $0 echo $_ echo ${BASH_SOURCE} echo ${BASH_SOURCE[${#BASH_SOURCE[@]} - 1]} cur_file="${BASH_SOURCE[${#BASH_SOURCE[@]} - 1]}" cur_dir="$(dirname "${cur_file}")" source "${cur_dir}/func_def.sh" function test_within_func_inside { echo ${BASH_SOURCE} echo ${BASH_SOURCE[${#BASH_SOURCE[@]} - 1]} } echo "Testing within function inside" test_within_func_inside echo "Testing within function outside" test_within_func_outside # # Location: test1/test2/func_def.sh # function test_within_func_outside { echo ${BASH_SOURCE} echo ${BASH_SOURCE[${#BASH_SOURCE[@]} - 1]} } The reason the one-liner works is explained by the use of the BASH_SOURCE environment variable and its associated FUNCNAME. BASH_SOURCE An array variable whose members are the source filenames where the corresponding shell function names in the FUNCNAME array variable are defined. 
The shell function ${FUNCNAME[$i]} is defined in the file ${BASH_SOURCE[$i]} and called from ${BASH_SOURCE[$i+1]}. FUNCNAME An array variable containing the names of all shell functions currently in the execution call stack. The element with index 0 is the name of any currently-executing shell function. The bottom-most element (the one with the highest index) is "main". This variable exists only when a shell function is executing. Assignments to FUNCNAME doesn't have any effect and return an error status. If FUNCNAME is unset, it loses its special properties, even if it is subsequently reset. This variable can be used with BASH_LINENO and BASH_SOURCE. Each element of FUNCNAME has corresponding elements in BASH_LINENO and BASH_SOURCE to describe the call stack. For instance, ${FUNCNAME[$i]} was called from the file ${BASH_SOURCE[$i+1]} at line number ${BASH_LINENO[$i]}. The caller builtin displays the current call stack using this information. [Source: Bash manual] A: This is how I work it on my scripts: pathvar="$( cd "$( dirname $0 )" && pwd )" This will tell you which directory the Launcher (current script) is being executed from. A: If your Bash script is a symlink, then this is the way to do it: #!/usr/bin/env bash dirn="$(dirname "$0")" rl="$(readlink "$0")"; exec_dir="$(dirname $(dirname "$rl"))"; my_path="$dirn/$exec_dir"; X="$(cd $(dirname ${my_path}) && pwd)/$(basename ${my_path})" X is the directory that contains your Bash script (the original file, not the symlink). I swear to God this works, and it is the only way I know of doing this properly. A: The following will return the current directory of the script * *works if it's sourced, or not sourced *works if run in the current directory, or some other directory. *works if relative directories are used. *works with bash, not sure of other shells. /tmp/a/b/c $ . ./test.sh /tmp/a/b/c /tmp/a/b/c $ . /tmp/a/b/c/test.sh /tmp/a/b/c /tmp/a/b/c $ ./test.sh /tmp/a/b/c /tmp/a/b/c $ /tmp/a/b/c/test.sh /tmp/a/b/c /tmp/a/b/c $ cd ~ $ . /tmp/a/b/c/test.sh /tmp/a/b/c ~ $ . ../../tmp/a/b/c/test.sh /tmp/a/b/c ~ $ /tmp/a/b/c/test.sh /tmp/a/b/c ~ $ ../../tmp/a/b/c/test.sh /tmp/a/b/c test.sh #!/usr/bin/env bash # snagged from: https://stackoverflow.com/a/51264222/26510 function toAbsPath { local target target="$1" if [ "$target" == "." ]; then echo "$(pwd)" elif [ "$target" == ".." ]; then echo "$(dirname "$(pwd)")" else echo "$(cd "$(dirname "$1")"; pwd)/$(basename "$1")" fi } function getScriptDir(){ local SOURCED local RESULT (return 0 2>/dev/null) && SOURCED=1 || SOURCED=0 if [ "$SOURCED" == "1" ] then RESULT=$(dirname "$1") else RESULT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" fi toAbsPath "$RESULT" } SCRIPT_DIR=$(getScriptDir "$0") echo "$SCRIPT_DIR" A: Python was mentioned a few times. Here is the JavaScript (i.e., Node.js) alternative: baseDirRelative=$(dirname "$0") baseDir=$(node -e "console.log(require('path').resolve('$baseDirRelative'))") # Get absolute path using Node.js echo $baseDir A: I tried the followings with 3 different executions. echo $(realpath $_) . application # /correct/path/to/dir or /path/to/temporary_dir bash application # /path/to/bash /PATH/TO/application # /correct/path/to/dir echo $(realpath $(dirname $0)) . application # failed with `realpath: missing operand` bash application # /correct/path/to/dir /PATH/TO/application # /correct/path/to/dir echo $(realpath $BASH_SOURCE) $BASH_SOURCE is basically the same with ${BASH_SOURCE[0]}. . 
application # /correct/path/to/dir bash application # /correct/path/to/dir /PATH/TO/application # /correct/path/to/dir Only $(realpath $BASH_SOURCE) seems to be reliable. A: Yet another variant: SELF=$(SELF=$(dirname "$0") && bash -c "cd \"$SELF\" && pwd") echo "$SELF" This works on macOS as well, determines the canonical path, and does not change the current directory. A: I tried all of these and none worked. One was very close, but it had a tiny bug that broke it badly; they forgot to wrap the path in quotation marks. Also a lot of people assume you're running the script from a shell, so they forget when you open a new script it defaults to your home. Try this directory on for size: /var/No one/Thought/About Spaces Being/In a Directory/Name/And Here's your file.text This gets it right regardless how or where you run it: #!/bin/bash echo "pwd: `pwd`" echo "\$0: $0" echo "basename: `basename "$0"`" echo "dirname: `dirname "$0"`" So to make it actually useful, here's how to change to the directory of the running script: cd "`dirname "$0"`" A: Try using: real=$(realpath "$(dirname "$0")") A: Here is the simple, correct way: actual_path=$(readlink -f "${BASH_SOURCE[0]}") script_dir=$(dirname "$actual_path") Explanation: * *${BASH_SOURCE[0]} - the full path to the script. The value of this will be correct even when the script is being sourced, e.g. source <(echo 'echo $0') prints bash, while replacing it with ${BASH_SOURCE[0]} will print the full path of the script. (Of course, this assumes you're OK taking a dependency on Bash.) *readlink -f - Recursively resolves any symlinks in the specified path. This is a GNU extension, and not available on (for example) BSD systems. If you're running a Mac, you can use Homebrew to install GNU coreutils and supplant this with greadlink -f. *And of course dirname gets the parent directory of the path. A: This is a slight revision to the solution e-satis and 3bcdnlklvc04a pointed out in their answer: SCRIPT_DIR='' pushd "$(dirname "$(readlink -f "$BASH_SOURCE")")" > /dev/null && { SCRIPT_DIR="$PWD" popd > /dev/null } This should still work in all the cases they listed. This will prevent popd after a failed pushd. Thanks to konsolebox. A: I would use something like this: # Retrieve the full pathname of the called script scriptPath=$(which $0) # Check whether the path is a link or not if [ -L $scriptPath ]; then # It is a link then retrieve the target path and get the directory name sourceDir=$(dirname $(readlink -f $scriptPath)) else # Otherwise just get the directory name of the script path sourceDir=$(dirname $scriptPath) fi A: For systems having GNU coreutils readlink (for example, Linux): $(readlink -f "$(dirname "$0")") There's no need to use BASH_SOURCE when $0 contains the script filename. A: $_ is worth mentioning as an alternative to $0. If you're running a script from Bash, the accepted answer can be shortened to: DIR="$( dirname "$_" )" Note that this has to be the first statement in your script. A: You can use $BASH_SOURCE: #!/usr/bin/env bash scriptdir="$( dirname -- "$BASH_SOURCE"; )"; Note that you need to use #!/bin/bash and not #!/bin/sh since it's a Bash extension. A: This works in Bash 3.2: path="$( dirname "$( which "$0" )" )" If you have a ~/bin directory in your $PATH, you have A inside this directory. It sources the script ~/bin/lib/B. You know where the included script is relative to the original one, in the lib subdirectory, but not where it is relative to the user's current directory. 
This is solved by the following (inside A): source "$( dirname "$( which "$0" )" )/lib/B" It doesn't matter where the user is or how he/she calls the script. This will always work. A: These are short ways to get script information: Folders and files: Script: "/tmp/src dir/test.sh" Calling folder: "/tmp/src dir/other" Using these commands: echo Script-Dir : `dirname "$(realpath $0)"` echo Script-Dir : $( cd ${0%/*} && pwd -P ) echo Script-Dir : $(dirname "$(readlink -f "$0")") echo echo Script-Name : `basename "$(realpath $0)"` echo Script-Name : `basename $0` echo echo Script-Dir-Relative : `dirname "$BASH_SOURCE"` echo Script-Dir-Relative : `dirname $0` echo echo Calling-Dir : `pwd` And I got this output: Script-Dir : /tmp/src dir Script-Dir : /tmp/src dir Script-Dir : /tmp/src dir Script-Name : test.sh Script-Name : test.sh Script-Dir-Relative : .. Script-Dir-Relative : .. Calling-Dir : /tmp/src dir/other Also see: https://pastebin.com/J8KjxrPF A: Here is an easy-to-remember script: DIR="$( dirname -- "${BASH_SOURCE[0]}"; )"; # Get the directory name DIR="$( realpath -e -- "$DIR"; )"; # Resolve its full path if need be A: Short answer: "`dirname -- "$0";`" or (preferably): "$( dirname -- "$0"; )" A: Use dirname "$0": #!/usr/bin/env bash echo "The script you are running has basename $( basename -- "$0"; ), dirname $( dirname -- "$0"; )"; echo "The present working directory is $( pwd; )"; Using pwd alone will not work if you are not running the script from the directory it is contained in. [matt@server1 ~]$ pwd /home/matt [matt@server1 ~]$ ./test2.sh The script you are running has basename test2.sh, dirname . The present working directory is /home/matt [matt@server1 ~]$ cd /tmp [matt@server1 tmp]$ ~/test2.sh The script you are running has basename test2.sh, dirname /home/matt The present working directory is /tmp A: This should do it: DIR="$(dirname "$(realpath "$0")")" This works with symlinks and spaces in path. Please see the man pages for dirname and realpath. Please add a comment on how to support MacOS. I'm sorry I can verify it. A: I've compared many of the answers given, and came up with some more compact solutions. These seem to handle all of the crazy edge cases that arise from your favorite combination of: * *Absolute paths or relative paths *File and directory soft links *Invocation as script, bash script, bash -c script, source script, or . script *Spaces, tabs, newlines, Unicode, etc. in directories and/or filename *Filenames beginning with a hyphen If you're running from Linux, it seems that using the proc handle is the best solution to locate the fully resolved source of the currently running script (in an interactive session, the link points to the respective /dev/pts/X): resolved="$(readlink /proc/$$/fd/255 && echo X)" && resolved="${resolved%$'\nX'}" This has a small bit of ugliness to it, but the fix is compact and easy to understand. We aren't using bash primitives only, but I'm okay with that because readlink simplifies the task considerably. The echo X adds an X to the end of the variable string so that any trailing whitespace in the filename doesn't get eaten, and the parameter substitution ${VAR%X} at the end of the line gets rid of the X. Because readlink adds a newline of its own (which would normally be eaten in the command substitution if not for our previous trickery), we have to get rid of that, too. 
This is most easily accomplished using the $'' quoting scheme, which lets us use escape sequences such as \n to represent newlines (this is also how you can easily make deviously named directories and files). The above should cover your needs for locating the currently running script on Linux, but if you don't have the proc filesystem at your disposal, or if you're trying to locate the fully resolved path of some other file, then maybe you'll find the below code helpful. It's only a slight modification from the above one-liner. If you're playing around with strange directory/filenames, checking the output with both ls and readlink is informative, as ls will output "simplified" paths, substituting ? for things like newlines. absolute_path=$(readlink -e -- "${BASH_SOURCE[0]}" && echo x) && absolute_path=${absolute_path%?x} dir=$(dirname -- "$absolute_path" && echo x) && dir=${dir%?x} file=$(basename -- "$absolute_path" && echo x) && file=${file%?x} ls -l -- "$dir/$file" printf '$absolute_path: "%s"\n' "$absolute_path" A: I usually do: LIBDIR=$(dirname "$(readlink -f "$(type -P $0 || echo $0)")") source $LIBDIR/lib.sh A: Here is a pure Bash solution $ cat a.sh BASENAME=${BASH_SOURCE/*\/} DIRNAME=${BASH_SOURCE%$BASENAME}. echo $DIRNAME $ a.sh /usr/local/bin/. $ ./a.sh ./. $ . a.sh /usr/local/bin/. $ /usr/local/bin/a.sh /usr/local/bin/. A: Here's an excerpt from my answer to shell script: check directory name and convert to lowercase in which I demonstrate not only how to solve this problem with very basic POSIX-specified utilities, I also address how to very simply store the function's results in a returned variable... ...Well, as you can see, with some help, I hit upon a pretty simple and very powerful solution: I can pass the function a sort of messenger variable and dereference any explicit use of the resulting function's argument's $1 name with eval as necessary, and, upon the function routine's completion, I use eval and a backslashed quoting trick to assign my messenger variable the value I desire without ever having to know its name. In full disclosure, ... (I found the messenger variable portion of this) and at Rich's sh tricks and I have also excerpted the relevant portion of his page below my own answer's excerpt. ... EXCERPT: ... Though not strictly POSIX yet, realpath is a GNU core application since 2012. Full disclosure: never heard of it before I noticed it in the info coreutils TOC and immediately thought of [the linked] question, but using the following function as demonstrated should reliably, (soon POSIXLY?), and, I hope, efficiently provide its caller with an absolutely sourced $0: % _abs_0() { > o1="${1%%/*}"; ${o1:="${1}"}; ${o1:=`realpath -s "${1}"`}; eval "$1=\${o1}"; > } % _abs_0 ${abs0:="${0}"} ; printf %s\\n "${abs0}" /no/more/dots/in/your/path2.sh It may be worth highlighting that this solution uses POSIX parameter expansion to first check if the path actually needs expanding and resolving at all before attempting to do so. This should return an absolutely sourced $0via a messenger variable (with the notable exception that it will preserve symlinks) as efficiently as I could imagine it could be done whether or not the path is already absolute. ... 
(minor edit: before finding realpath in the docs, I had at least pared down my version of (the version below) not to depend on the time field (as it does in the first ps command), but, fair warning, after testing some I'm less convinced ps is fully reliable in its command path expansion capacity) On the other hand, you could do this: ps ww -fp $$ | grep -Eo '/[^:]*'"${0#*/}" eval "abs0=${`ps ww -fp $$ | grep -Eo ' /'`#?}" ... And from Rich's sh tricks: ... Returning strings from a shell function As can be seen from the above pitfall of command substitution, standard output is not a good avenue for shell functions to return strings to their caller, unless the output is in a format where trailing newlines are insignificant. Certainly such practice is not acceptable for functions meant to deal with arbitrary strings. So, what can be done? Try this: func () { body here eval "$1=\${foo}" } Of course, ${foo} could be replaced by any sort of substitution. The key trick here is the eval line and the use of escaping. The “$1” is expanded when the argument to eval is constructed by the main command parser. But the “${foo}” is not expanded at this stage, because the “$” has been quoted. Instead, it’s expanded when eval evaluates its argument. If it’s not clear why this is important, consider how the following would be bad: foo='hello ; rm -rf /' dest=bar eval "$dest=$foo" But of course the following version is perfectly safe: foo='hello ; rm -rf /' dest=bar eval "$dest=\$foo" Note that in the original example, “$1” was used to allow the caller to pass the destination variable name as an argument the function. If your function needs to use the shift command, for instance to handle the remaining arguments as “$@”, then it may be useful to save the value of “$1” in a temporary variable at the beginning of the function. A: The below stores the script's directory path in the dir variable. (It also tries to support being executed under Cygwin in Windows.) And at last it runs the my-sample-app executable with all arguments passed to this script using "$@": #!/usr/bin/env sh dir=$(cd "${0%[/\\]*}" > /dev/null && pwd) if [ -d /proc/cygdrive ]; then case "$(uname -s)" in CYGWIN*|MINGW32*|MSYS*|MINGW*) # We are under Windows, so translate path to Windows format. dir=$(cygpath -m "$dir"); ;; esac fi # Runs the executable which is beside this script "${dir}/my-sample-app" "$@" A: I think the simplest answer is a parameter expansion of the original variable: #!/usr/bin/env bash DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" echo "opt1; original answer: $DIR" echo '' echo "opt2; simple answer : ${BASH_SOURCE[0]%/*}" It should produce output like: $ /var/tmp/test.sh opt1; original answer: /var/tmp opt2; simple answer : /var/tmp The variable/parameter expansion ${BASH_SOURCE[0]%/*}" seems much easier to maintain. A: If not sourced by parent script and not symlinked, $0 is enough: script_path="$0" If sourced by parent script and not symlinked, use $BASH_SOURCE or ${BASH_SOURCE[0]}: script_path="$BASH_SOURCE" If symlinked, use $BASH_SOURCE with realpath or readlink -f to get the real file path: script_path="$(realpath "$BASH_SOURCE")" In addition, realpath or readlink -f returns the absolute path. To get the directory of the script, use dirname: script_directory="$(dirname "$script_path")" Note * *For MacOS, get an alternative of realpath or readlink -f here or here. *To make the code compatible with shells other than Bash, use the ${var-string} parameter expansion. 
Example: Make it compatible with Zsh. A: You can get the source directory of a Bash script from within the script itself on follow short way: script_path=$(dirname "$(readlink -f "$0")")"/" echo "$script_path" Sample output: /home/username/desktop/ A: Keep it simple. #!/usr/bin/env bash sourceDir=`pwd` echo $sourceDir A: The chosen answer works very well. I'm posting my solution for anyone looking for shorter alternatives that still addresses sourcing, executing, full paths, relative paths, and symlinks. Finally, this will work on macOS, given that it cannot be assumed that GNU's coreutils' version of readlink is available. The gotcha is that it's not using Bash, but it is easy to use in a Bash script. While the OP did not place any constraints on the language of the solution, it's probably best that most have stayed within the Bash world. This is just an alternative, and possibly an unpopular one. PHP is available on macOS by default, and installed on a number of other platforms, though not necessarily by default. I realize this is a shortcoming, but I'll leave this here for any people coming from search engines, anyway. export SOURCE_DIRECTORY="$(php -r 'echo dirname(realpath($argv[1]));' -- "${BASH_SOURCE[0]}")" A: I want to make sure that the script is running in its directory. So cd $(dirname $(which $0) ) After this, if you really want to know where the you are running then run the command below. DIR=$(/usr/bin/pwd) A: This one-liner works on Cygwin even if the script has been called from Windows with bash -c <script>: set mydir="$(cygpath "$(dirname "$0")")" A: There is no 100% portable and reliable way to request a path to a current script directory. Especially between different backends like Cygwin, MinGW, MSYS, Linux, etc. This issue was not properly and completely resolved in Bash for ages. For example, this could not be resolved if you want to request the path after the source command to make nested inclusion of another Bash script which is in turn use the same source command to include another Bash script and so on. In case of the source command, I suggest to replace the source command with something like this: function include() { if [[ -n "$CURRENT_SCRIPT_DIR" ]]; then local dir_path=... get directory from `CURRENT_SCRIPT_DIR/$1`, depends if $1 is absolute path or relative ... local include_file_path=... else local dir_path=... request the directory from the "$1" argument using one of answered here methods... local include_file_path=... fi ... push $CURRENT_SCRIPT_DIR in to stack ... export CURRENT_SCRIPT_DIR=... export current script directory using $dir_path ... source "$include_file_path" ... pop $CURRENT_SCRIPT_DIR from stack ... } From now on, the use of include(...) is based on previous CURRENT_SCRIPT_DIR in your script. This only works when you can replace all source commands by include command. If you can't, then you have no choice. At least until developers of the Bash interpreter make an explicit command to request the current running script directory path. My own closest implementation to this: https://sourceforge.net/p/tacklelib/tacklelib/HEAD/tree/trunk/bash/tacklelib/bash_tacklelib https://github.com/andry81/tacklelib/tree/trunk/bash/tacklelib/bash_tacklelib (search for the tkl_include function) A: This is what I crafted throughout the years to use as a header on my Bash scripts: ## BASE BRAIN - Get where you're from and who you are. MYPID=$$ ORIGINAL_DIR="$(pwd)" # This is not a hot air balloon ride.. 
fa="$0" # First Assumption ta= # Temporary Assumption wa= # Weighed Assumption while true; do [ "${fa:0:1}" = "/" ] && wa=$0 && break [ "${fa:0:2}" = "./" ] && ta="${ORIGINAL_DIR}/${fa:2}" && [ -e "$ta" ] && wa="$ta" && break ta="${ORIGINAL_DIR}/${fa}" && [ -e "$ta" ] && wa="$ta" && break done SW="$wa" SWDIR="$(dirname "$wa")" SWBIN="$(basename "$wa")" unset ta fa wa ( [ ! -e "$SWDIR/$SWBIN" ] || [ -z "$SW" ] ) && echo "I could not find my way around :( possible bug in the TOP script" && exit 1 At this point, your variables SW, SWDIR, and SWBIN contain what you need. A: Based on this answer, I suggest the clarified version that gets SCRIPT_HOME as the containing folder of any currently-running Bash script: s=${BASH_SOURCE[0]} ; s=`dirname $s` ; SCRIPT_HOME=`cd $s ; pwd` echo $SCRIPT_HOME A: I want to comment on the previous answer up there (How can I get the source directory of a Bash script from within the script itself?), but don't have enough reputation to do that. I found a solution for this two years ago on Apple's documentation site: https://developer.apple.com/library/archive/documentation/OpenSource/Conceptual/ShellScripting/AdvancedTechniques/AdvancedTechniques.html. And I stuck to this method afterwards. It cannot handle soft link, but otherwise works pretty well for me. I'm posting it here for any who needs it and as a request for comment. #!/bin/sh # Get an absolute path for the poem.txt file. POEM="$PWD/../poem.txt" # Get an absolute path for the script file. SCRIPT="$(which $0)" if [ "x$(echo $SCRIPT | grep '^\/')" = "x" ] ; then SCRIPT="$PWD/$SCRIPT" fi As shown by the code, after you get the absolute path of the script, then you can use the dirname command to get the path of the directory. A: function getScriptAbsoluteDir { # fold>> # @description used to get the script path # @param $1 the script $0 parameter local script_invoke_path="$1" local cwd=`pwd` # absolute path ? if so, the first character is a / if test "x${script_invoke_path:0:1}" = 'x/' then RESULT=`dirname "$script_invoke_path"` else RESULT=`dirname "$cwd/$script_invoke_path"` fi } # <<fold A: I usually include the following at the top of my scripts which works in the majority of cases: [ "$(dirname $0)" = '.' ] && SOURCE_DIR=$(pwd) || SOURCE_DIR=$(dirname $0); ls -l $0 | grep -q ^l && SOURCE_DIR=$(ls -l $0 | awk '{print $NF}'); The first line assigns source based on the value of pwd if run from the current path or dirname if called from elsewhere. The second line examines the path to see if it is a symlink and if so, updates SOURCE_DIR to the location of the link itself. There are probably better solutions out there, but this is the cleanest I've managed to come up with myself. A: Try something like this: function get_realpath() { if [[ -f "$1" ]] then # The file *must* exist if cd "$(echo "${1%/*}")" &>/dev/null then # The file *may* not be local. # The exception is ./file.ext # tTry 'cd .; cd -;' *works!* local tmppwd="$PWD" cd - &>/dev/null else # file *must* be local local tmppwd="$PWD" fi else # The file *cannot* exist return 1 # Failure fi # Reassemble realpath echo "$tmppwd"/"${1##*/}" return 0 # Success } function get_dirname(){ local realpath="$(get_realpath "$1")" if (( $? )) # True when non-zero. then return $? # Failure fi echo "${realpath%/*}" return 0 # Success } # Then from the top level: get_dirname './script.sh' # Or within a script: get_dirname "$0" # Can even test the outcome! if (( $? )) # True when non-zero. 
then exit 1 # Failure fi These functions and related tools are part of our product that has been made available to the community for free and can be found at GitHub as realpath-lib. It's simple, clean and well documented (great for learning), pure Bash and has no dependencies. Good for cross-platform use too. So for the above example, within a script you could simply: source '/path/to/realpath-lib' get_dirname "$0" if (( $? )) # True when non-zero. then exit 1 # Failure fi A: cur_dir=`old=\`pwd\`; cd \`dirname $0\`; echo \`pwd\`; cd $old;` A: No forks (besides subshell) and can handle "alien" pathname forms like those with newlines as some would claim: IFS= read -rd '' DIR < <([[ $BASH_SOURCE != */* ]] || cd "${BASH_SOURCE%/*}/" >&- && echo -n "$PWD") A: The key part is that I am reducing the scope of the problem: I forbid indirect execution of the script via the path (as in /bin/sh [script path relative to path component]). This can be detected because $0 will be a relative path which does not resolve to any file relative to the current folder. I believe that direct execution using the #! mechanism always results in an absolute $0, including when the script is found on the path. I also require that the pathname and any pathnames along a chain of symbolic links only contain a reasonable subset of characters, notably not \n, >, * or ?. This is required for the parsing logic. There are a few more implicit expectations which I will not go into (look at this answer), and I do not attempt to handle deliberate sabotage of $0 (so consider any security implications). I expect this to work on almost any Unix-like system with a Bourne-like /bin/sh. #!/bin/sh ( path="${0}" while test -n "${path}"; do # Make sure we have at least one slash and no leading dash. expr "${path}" : / > /dev/null || path="./${path}" # Filter out bad characters in the path name. expr "${path}" : ".*[*?<>\\]" > /dev/null && exit 1 # Catch embedded new-lines and non-existing (or path-relative) files. # $0 should always be absolute when scripts are invoked through "#!". test "`ls -l -d "${path}" 2> /dev/null | wc -l`" -eq 1 || exit 1 # Change to the folder containing the file to resolve relative links. folder=`expr "${path}" : "\(.*/\)[^/][^/]*/*$"` || exit 1 path=`expr "x\`ls -l -d "${path}"\`" : "[^>]* -> \(.*\)"` cd "${folder}" # If the last path was not a link then we are in the target folder. test -n "${path}" || pwd done ) A: Look at the test at bottom with weird directory names. To change the working directory to the one where the Bash script is located, you should try this simple, tested and verified with shellcheck solution: #!/bin/bash -- cd "$(dirname "${0}")"/. 
|| exit 2 The test: $ ls application $ mkdir "$(printf "\1\2\3\4\5\6\7\10\11\12\13\14\15\16\17\20\21\22\23\24\25\26\27\30\31\32\33\34\35\36\37\40\41\42\43\44\45\46\47testdir" "")" $ mv application *testdir $ ln -s *testdir "$(printf "\1\2\3\4\5\6\7\10\11\12\13\14\15\16\17\20\21\22\23\24\25\26\27\30\31\32\33\34\35\36\37\40\41\42\43\44\45\46\47symlink" "")" $ ls -lb total 4 lrwxrwxrwx 1 jay stacko 46 Mar 30 20:44 \001\002\003\004\005\006\a\b\t\n\v\f\r\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037\ !"#$%&'symlink -> \001\002\003\004\005\006\a\b\t\n\v\f\r\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037\ !"#$%&'testdir drwxr-xr-x 2 jay stacko 4096 Mar 30 20:44 \001\002\003\004\005\006\a\b\t\n\v\f\r\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037\ !"#$%&'testdir $ *testdir/application && printf "SUCCESS\n" "" SUCCESS $ *symlink/application && printf "SUCCESS\n" "" SUCCESS
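To tie the thread back to the original launcher use case, here is a minimal sketch, assuming Bash and assuming the program to launch really is named application and sits next to the launcher script (both names come from the question; everything else is illustrative):

#!/usr/bin/env bash
# Resolve the directory containing this script. This handles being called via a
# relative or absolute path; a symlinked script itself would still need one of the
# readlink/realpath loops shown in the answers above.
script_dir="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd -P)" || exit 1
cd -- "$script_dir" || exit 1
# Start the real program from its own directory, forwarding any arguments.
exec ./application "$@"

Using exec is optional; it simply replaces the launcher shell with the application instead of leaving an extra process around.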
{ "language": "en", "url": "https://stackoverflow.com/questions/59895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6014" }
Q: How do I stop an effect in jQuery? I have a page that uses $(id).show("highlight", {}, 2000); to highlight an element when I start an Ajax request, which might fail, so I want to use something like $(id).show("highlight", {color: "#FF0000"}, 2000); in the error handler. The problem is that if the first highlight hasn't finished, the second is placed in a queue and won't run until the first is done. Hence the question: Can I somehow stop the first effect? A: .stop(true,true) will freeze the effect, so if it's invisible at the time then it remains invisible. This could be a problem if you are using the pulsate effect. $('#identifier').effect("pulsate", {times:5}, 1000); To get around this I added $('#identifier').stop(true, true).effect("pulsate", { times: 1 }, 1); A: I listed this as a comment for the accepted answer, but I thought it would be a good idea to post it as a standalone answer, as it seems to be helping some people having problems with .stop(). FYI - I was looking for this answer as well (trying to stop a Pulsate Effect), but I did have a .stop() in my code. After reviewing the docs, I needed .stop(true, true). A: From the jQuery docs: http://docs.jquery.com/Effects/stop Stop the currently-running animation on the matched elements.... When .stop() is called on an element, the currently-running animation (if any) is immediately stopped. If, for instance, an element is being hidden with .slideUp() when .stop() is called, the element will now still be displayed, but will be a fraction of its previous height. Callback functions are not called. If more than one animation method is called on the same element, the later animations are placed in the effects queue for the element. These animations will not begin until the first one completes. When .stop() is called, the next animation in the queue begins immediately. If the clearQueue parameter is provided with a value of true, then the rest of the animations in the queue are removed and never run. If the jumpToEnd argument is provided with a value of true, the current animation stops, but the element is immediately given its target values for each CSS property. In our above .slideUp() example, the element would be immediately hidden. The callback function is then immediately called, if provided... A: In my case, the code below did not work and left the opacity value in place: $('#identifier').stop(true, true).effect("pulsate", { times: 1 }, 1); For me, just clearing the opacity worked: $('#identifier').stop(true, true).css('opacity','');
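To connect the answers above back to the original scenario (a highlight still running when the Ajax error handler fires), here is a minimal sketch using the same calls as the question; $(id) stands for whatever selector the page already uses:

// In the Ajax error handler: clear the effect queue and jump the running
// highlight to its end state, then start the red error highlight right away.
$(id).stop(true, true);
$(id).show("highlight", { color: "#FF0000" }, 2000);

The two true arguments are what keep the error highlight from being queued behind the first effect, per the documentation quoted above.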
{ "language": "en", "url": "https://stackoverflow.com/questions/59896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: SQL Server 2005 Encryption, asp.net and stored procedures I need to write a web application using SQL Server 2005, asp.net, and ado.net. Much of the user data stored in this application must be encrypted (read HIPAA).
In the past, for projects that required encryption, I encrypted/decrypted in the application code. However, this was generally for encrypting passwords or credit card information, so only a handful of columns in a couple of tables. For this application, far more columns in several tables need to be encrypted, so I suspect pushing the encryption responsibilities into the data layer will perform better, especially given SQL Server 2005's native support for several encryption types. (I could be convinced otherwise if anyone has real, empirical evidence.)
I've consulted BOL, and I'm fairly adept at using Google. So I don't want links to online articles or MSDN documentation (it's likely I've already read them).
One approach I've wrapped my head around so far is to use a symmetric key which is opened using a certificate. So the one-time setup steps are (performed by a DBA in theory):

*Create a Master Key
*Backup the Master Key to a file, burn to CD and store off site.
*Open the Master Key and create a certificate.
*Backup the certificate to a file, burn to CD and store off site.
*Create the Symmetric key with encryption algorithm of choice using the certificate.

Then any time a stored procedure (or a human user via Management Studio) needs to access encrypted data, you have to first open the symmetric key, execute any tsql statements or batches, and then close the symmetric key.
Then, as far as the asp.net application is concerned, and in my case the application code's data access layer, the data encryption is entirely transparent.
So my questions are:

*Do I want to open, execute tsql statements/batches, and then close the symmetric key all within the sproc? The danger I see is: what if something goes wrong with the tsql execution, and execution never reaches the statement that closes the key? I assume this means the key will remain open until SQL Server kills the SPID the sproc executed on.
*Should I instead consider making three database calls for any given procedure I need to execute (only when encryption is necessary)? One database call to open the key, a second call to execute the sproc, and a third call to close the key. (Each call wrapped in its own try/catch block in order to maximize the odds that an open key ultimately is closed.)
*Any considerations if I need to use client-side transactions (meaning my code is the client, and initiates a transaction, executes several sprocs, and then commits the transaction assuming success)?

A: 1) Look into using TRY..CATCH in SQL 2005. Unfortunately there is no FINALLY, so you'll have to handle both the success and error cases individually.
2) Not necessary if (1) handles the cleanup.
3) There isn't really a difference between client and server transactions with SQL Server. Connection.BeginTransaction() more or less executes "BEGIN TRANSACTION" on the server (and System.Transactions/TransactionScope does the same, until it's promoted to a distributed transaction). As for concerns with opening/closing the key multiple times inside a transaction, I don't know of any issues to be aware of.
A: I'm a big fan of option 3.
Pretend for a minute you were going to set up transaction infrastructure anyway, where:

*Whenever a call to the datastore was about to be made, if an existing transaction hadn't been started then one was created.
*If a transaction is already in place, then calls to the data store hook into that transaction. This is often useful for business rules that are raised by save/going-to-the-database events. I.e., if you had a rule that whenever you sold a widget you needed to update a WidgetAudit table, you'd probably want to wrap the widget audit insert call in the same transaction as the one telling the datastore a widget has been sold.
*Whenever the original caller to the datastore (from step 1) is finished, it commits or rolls back the transaction, which affects all the database actions that happened during its call (using a try/catch/finally).

Once this type of transactioning is created, it becomes simple to tack on an open-key call at the beginning (when the transaction opens) and a close-key call at the end (just before the transaction ends). Making "calls" to the datastore isn't nearly as expensive as opening a connection to the database. It's really things like SqlConnection.Open() that burn resources (even if ADO.NET is pooling them for you).
If you want an example of this type of code, I would consider looking at NetTiers. It has quite an elegant solution for the transactioning we just described (assuming you don't already have something in mind).
Just 2 cents. Good luck.
A:

*You can use @@error to see if any errors occurred during the call to a sproc in SQL.
*No, too complicated.
*You can, but I prefer to use transactions in SQL Server itself.
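To make question 1 concrete, here is a rough sketch of the open/query/close pattern inside a single sproc with TRY..CATCH; the key, certificate, table and column names are all made up:
CREATE PROCEDURE dbo.GetPatientName
    @PatientId int
AS
BEGIN
    BEGIN TRY
        OPEN SYMMETRIC KEY PatientDataKey DECRYPTION BY CERTIFICATE PatientDataCert;

        SELECT CONVERT(nvarchar(100), DecryptByKey(FirstNameEncrypted)) AS FirstName
        FROM dbo.Patients
        WHERE PatientId = @PatientId;

        CLOSE SYMMETRIC KEY PatientDataKey;
    END TRY
    BEGIN CATCH
        -- No FINALLY in T-SQL, so close the key here as well if it is still open.
        IF EXISTS (SELECT * FROM sys.openkeys WHERE key_name = 'PatientDataKey')
            CLOSE SYMMETRIC KEY PatientDataKey;

        DECLARE @msg nvarchar(2048);
        SELECT @msg = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
    END CATCH
END
Keeping the open/close pair inside the sproc means a failed statement can't leave the key open for the rest of the session; whether that pair lives in each sproc or in a wrapper around the transaction is the same trade-off discussed above.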
{ "language": "en", "url": "https://stackoverflow.com/questions/59926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: National holiday web service Is there a public/government web service that I can call to find out what the national holidays are for a given year? (For the US and/or any country in the world.) Edit: Does anybody have a set of formulas to calculate US holidays? (C# would be my language of choice if there is a choice.) A: There's a web service at http://www.holidaywebservice.com which will provide dates of holidays for the USA, Republic of Ireland, England and Scotland. They also sell a DLL and source code. As for details of algorithms, you could do worse than check out the excellent Calendrical Calculations book (third edition), which is a really fascinating read for all matters calendrical, and includes sample LISP code for their calendar algorithms. A: There are online calendars you can subscribe to. For example, Google provides US Holidays: ICAL HTML A: There are tons of similar information that really should be provided by government web services. It would certainly save a lot of money and errors in the long run if the U.S. Government could provide information like this through web services. Heck, even having it in a downloadable, parseable format would be a big step in the right direction. I ran across this question while looking for a way to ensure an application skipped all U.S. Federal holidays in working days calculations. The best .gov source I found is: Operating Status Schedules from OPM This has the data we need through 2020, but we'll have to type it into our own tables. A: No one gives that up for free (any country in the world? Get real). The best source is Copp Clark (I'm unaffiliated). They provide all holidays for all countries broken down by financial market, currency, etc. A: You can try http://kayaposoft.com/enrico/. Enrico Service is a free service providing public holidays for several countries including US. Public holidays for the countries like US or Germany are provided separately for each state. You can use either web service or json to get public holidays from Enrico. A: Some parsing may be required, and it's not 100% complete, but you can use wikipedia. A: For the method/assembly to figure out US Holidays, basically just figure out all the major holidays and the "formula" that they use. For the ones that never change, like Christmas, it's easy - December 25th. For the ones that do change somewhat, there's usually a formula - like the third Monday in February being Presidents Day. You can just have the method figure this out for a given year. This won't work for holidays without any particular pattern (i.e., some committee decides what the date is every year) but for all the major ones there's easily discernible formulas. This would actually be a great candidate for Test Driven Design. You will know all of the major holiday dates for a particular year, so you should be able to feed that year into the method and get the right answers. A: I am looking for something similar PL/SQL based. I found jollyday (sourceforge). It is java maybe you can use it with ikvm from c#. Sadly I was not able to load the java api into my oracle rdbms ... so ... if you came across a pure C or PL/SQL solution, please let me know :-) Cheers Chris
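On the "formulas to calculate US holidays" part of the question: most of the floating US holidays are just "the Nth (or last) given weekday of a month", which is easy to compute directly. A rough C# sketch (the helper names are mine, and observed dates for fixed holidays such as July 4th are not handled here):
// Nth occurrence of a weekday in a month, e.g. the 3rd Monday in February.
static DateTime NthWeekdayOfMonth(int year, int month, DayOfWeek day, int n)
{
    DateTime first = new DateTime(year, month, 1);
    int offset = ((int)day - (int)first.DayOfWeek + 7) % 7;
    return first.AddDays(offset + 7 * (n - 1));
}

// Last occurrence of a weekday in a month, e.g. the last Monday in May.
static DateTime LastWeekdayOfMonth(int year, int month, DayOfWeek day)
{
    DateTime last = new DateTime(year, month, DateTime.DaysInMonth(year, month));
    int offset = ((int)last.DayOfWeek - (int)day + 7) % 7;
    return last.AddDays(-offset);
}

// Examples:
// DateTime presidentsDay = NthWeekdayOfMonth(2009, 2, DayOfWeek.Monday, 3);    // 2009-02-16
// DateTime thanksgiving  = NthWeekdayOfMonth(2009, 11, DayOfWeek.Thursday, 4); // 2009-11-26
// DateTime memorialDay   = LastWeekdayOfMonth(2009, 5, DayOfWeek.Monday);      // 2009-05-25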
{ "language": "en", "url": "https://stackoverflow.com/questions/59934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Slowing down the playback of an audio file without changing its pitch? I am working on an application for college music majors. A feature I am considering is slowing down music playback without changing its pitch. I have seen this done in commercial software, but cannot find any libraries or open source apps that do anything like this.

*Are there libraries out there?
*How could this be done from scratch from various file formats?

Note: I am working in Java but am not opposed to changing languages.
A: I use soundstretch to speed up podcasts, which works quite well; I haven't tried it on music though.
A: This site explains how it's done in the physical world: http://www.wendycarlos.com/other/Eltro-1967/index.html
I don't know how you would emulate that in software though... I'll keep looking
A: One way to do it would be to double the sampling rate without changing the sampling rate of your source. (Low quality example, but easy to implement. Note: You can also decrease the sampling rate as well). Check out any math related to phase vocoders.
Another common method is to create an array of FFT bins that store data for scheduled intervals of your sound. Then you can choose how quickly to iterate through the bins, and you can re-synthesize that audio data for as long as you choose, thus enabling you to stretch out one short segment of your sound for as long as you like.
A: Timestretching is quite hard. The more you slow down or speed up the sound, the more artifacts you get. If you want to know what they sound like, listen to "The Rockafeller Skank" by Fat Boy Slim.
There are a lot of ways to do it that all have their own strengths and weaknesses. The math can get really complex. That's why there are so many proprietary algorithms.
This page explains things a bit more clearly than I can and links to the Dirac library. http://www.dspdimension.com/admin/time-pitch-overview/
I found this link for Java code to do pitch shifting/timestretching http://www.adetorres.com/keychanger/KeyChangerReadme.html
A: Audacity does it out of the box and it's free. There are several plug-ins for mp3 players as well that are free. Apparently it's pretty easy to do with an mp3 since it's already coded in the frequency domain.
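To make the "stretch the frames out" idea a bit more concrete, here is a deliberately naive overlap-add sketch in Java. It assumes mono floating-point PCM in a double[] and it will produce audible artifacts; real implementations (SOLA/WSOLA, phase vocoders, Dirac, soundstretch) add phase or waveform alignment on top of this:
// Very rough overlap-add (OLA) time stretch. factor > 1.0 slows playback
// down (the output gets longer) without resampling the frames themselves.
public static double[] stretch(double[] samples, double factor) {
    int frame = 2048;                 // frame size
    int synthesisHop = frame / 2;     // 50% overlap in the output
    int analysisHop = Math.max(1, (int) Math.round(synthesisHop / factor));

    double[] window = new double[frame];
    for (int i = 0; i < frame; i++) { // Hann window so overlapping frames blend
        window[i] = 0.5 - 0.5 * Math.cos(2.0 * Math.PI * i / (frame - 1));
    }

    double[] out = new double[(int) Math.ceil(samples.length * factor) + frame];
    int inPos = 0;
    int outPos = 0;
    while (inPos + frame <= samples.length && outPos + frame <= out.length) {
        for (int i = 0; i < frame; i++) {
            out[outPos + i] += samples[inPos + i] * window[i];
        }
        inPos += analysisHop;         // walk through the input slowly...
        outPos += synthesisHop;       // ...while writing output at full rate
    }
    return out;
}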
{ "language": "en", "url": "https://stackoverflow.com/questions/59936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What is the purpose of a Data Access Layer? I started a project a long time ago and created a Data Access Layer project in my solution but have never developed anything in it. What is the purpose of a data access layer? Are there any good sources that I could learn more about the Data Access Layer? A: A data access layer follows the idea of "separation of concerns" whereby all of the logic required for your business logic to interact with your data layer (database) is isolated to a single set of classes (layer). This allows you to more easily change the backend physical data storage technology (move from XML files to a database, or from SQL Server to Oracle or MySQL, for example) without having a large impact (and if done right having zero impact) to your business logic. There are a lot of tools that will help you build your data layer. If you search for the phrase "object relational mapper" or "ORM" you should find some more detailed information. A: Data access layers make a lot of sense when many different parts of your application need to access data the same way. It also makes sense when you need access the same data in many different ways. For example, how word processors can read many different file types and silently convert them into the application's internal format. Keep in mind that a DAL can also be very counter productive. If you are building a system where data access performance is critical, separating it from the business logic can make some vital optimizations impossible. A: The DAL should abstract your database from the rest of your project -- basically, there should be no SQL in any code other than the DAL, and only the DAL should know the structure of the database. The purpose is mainly to insulate the rest of your app from database changes, and to make it easier to extend and support your app because you will always know where to go to modify database-interaction code. A: In two words: Loose Coupling To keep the code you use to pull data from your data store (database, flat files, web services, whatever) separate from business logic and presentation code. This way, if you have to change data stores, you don't end up rewriting the whole thing. These days, various ORM frameworks are kind of blending the DAL with other layers. This typically makes development easier, but changing data stores can be painful. To be fair, changing data stores like that is pretty uncommon. A: There are two primary purposes of a Data Access Layer * *Abstract the actual database engine or other data store, such that your applications can switch from using say Oracle to using MS SQL server *Abstract the logical data model such that your Business Layer is decoupled from this knowledge and is agnostic of it. Giving you the ability to modify the logical data model without impacting the business layer Most answers here have provided the first reason. In my mind it is the second that is far more important. Essentially your business layer should not be aware of the logical data model that is in use. Today with ORMs and Linq #2 seems to go out the window and people tend to forget (or are not able to see the fine lines that do and should exist) about #2. Essentially, to get a good understanding of the purpose and function of a Data Layer, you need to see things from the Business Layer's perspective, keeping in mind that the Business layer should be agnostic of the logical data model of your data store. 
So each time the business layer needs data, for example, it should ask for the data it needs in a simple way that is agnostic of the logical data model. So it would make a call into the Data Access Layer such as:
GetOrdersForCustomer(42)
And it gets back exactly the data it needs, without being aware of which tables store this information or what relationships exist, etc.
I've written an article on my blog that goes into more detail. The Purpose and function of a Data Access Layer
A: The purpose is to abstract out the database access details that other parts of your application need not be concerned about.
A: A data access layer is used to abstract away the storage and retrieval of data from its representation. You can read more about this kind of abstraction in 1994's Design Patterns
A: The purpose is to abstract the data storage and retrieval mechanism from data usage and manipulation. Benefits:

*Underlying storage can change (switch from Oracle to MSSQL for example), and you need a way to localize those changes
*Schema changes - see above
*You want a way to run disconnected from your db (demo mode): Add file serialization/deserialization to the DAL

A: I recommend you read up here: http://msdn.microsoft.com/en-us/practices/default.aspx
Using a DAL will help you isolate your data access from your presentation and business logic. I use it a lot so that I can easily swap out (through reflection and dynamically loading assemblies) data providers.
Read up, lots of good info there. Also, look into the Data Access Block if you are planning on using .NET. It can be a big help.
A: Something which hasn't been brought up that I thought I'd add is that having a DAL allows you to improve the security of your system. For instance, the DB and DAL could run on server(s) inaccessible to the public while the business logic can run on a public-facing server, such that the public server can't run raw SQL on the DB. This could help mitigate a lot of damage should the public server be compromised.
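As a small, hypothetical sketch of that kind of boundary in C# (the names and ADO.NET details are placeholders, not a prescribed design):
// A simple order DTO plus the data-access boundary the business layer sees.
public class Order
{
    public int OrderId;
    public DateTime OrderDate;
}

public interface IOrderData
{
    IList<Order> GetOrdersForCustomer(int customerId);
}

// One concrete implementation using System.Data.SqlClient. Because callers
// depend only on IOrderData, this class could be swapped for an Oracle,
// mock, or web-service-backed version without touching business logic.
public class SqlOrderData : IOrderData
{
    private readonly string connectionString;

    public SqlOrderData(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public IList<Order> GetOrdersForCustomer(int customerId)
    {
        List<Order> orders = new List<Order>();
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT OrderId, OrderDate FROM Orders WHERE CustomerId = @CustomerId",
            connection))
        {
            command.Parameters.AddWithValue("@CustomerId", customerId);
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Order order = new Order();
                    order.OrderId = reader.GetInt32(0);
                    order.OrderDate = reader.GetDateTime(1);
                    orders.Add(order);
                }
            }
        }
        return orders;
    }
}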
{ "language": "en", "url": "https://stackoverflow.com/questions/59942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Accessing .NET Web Service securely from Flex 3 We can successfully consume a .NET 2.0 web service from a Flex/AS3 application. Aside from SSL, how else can we make the security more robust (i.e., authentication)?
A: You can leverage ASP.NET's built-in session management by decorating your webmethods with <EnableSession()>
Then, inside your method, you can check that the user still has a valid session.
A: If you're talking about securing the information going over the wire, you can use Web Service Extensions (WSE) to encrypt the body of the SOAP message so that you don't have to secure the channel. This way the message can get passed around through more than one endpoint (i.e. it can get forwarded) and you don't need multiple https certs.
If you're talking about authentication, then you could do forms auth with either a password in the body or in the SOAP headers (once again, either encrypt the body or the channel). Or one of the easiest ways to secure a web service (if it's an internal set of services) is to have IIS do it: turn on NTLM and do authentication there. You can do authorization later on in the pipeline with an HttpModule that checks people's credentials against the code they're trying to call.
A: Consider using WebOrb to communicate with your service. Here is some information on WebOrb's authentication mechanism. There is also an article on Adobe's developer site on using WebOrb and .NET for authentication.
A: You should be able to use asp.net's authentication (such as forms authentication) without much extra effort. Securing an asmx file is just like securing an aspx file.
There's a ton of information on forms authentication out there, just search for 'asp.net forms authentication'
A: If you are using Microsoft technologies, you could build a little ASP.NET/C# application that would ask for credentials before redirecting to the correct swf. That way you could restrict access and have different swf files depending on the user.
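A minimal C# sketch of the EnableSession idea from the first answer; the method, helper and session-key names here are invented:
[WebMethod(EnableSession = true)]
public bool Login(string userName, string password)
{
    // ValidateCredentials and LookupUserId are placeholders for your own checks.
    if (!ValidateCredentials(userName, password))
        return false;

    Session["UserId"] = LookupUserId(userName);
    return true;
}

[WebMethod(EnableSession = true)]
public string GetAccountSummary()
{
    // Reject calls that don't carry an authenticated session.
    if (Session["UserId"] == null)
        throw new SoapException("Not authenticated.", SoapException.ClientFaultCode);

    return LoadSummaryFor((int)Session["UserId"]);  // LoadSummaryFor is a placeholder
}
Since the Flex application runs in the browser, its HTTP requests will usually carry the ASP.NET session cookie along automatically, which is what makes this approach workable from AS3.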
{ "language": "en", "url": "https://stackoverflow.com/questions/59945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: ASP Server variable not working on local IIS I'm working on a simple ASP.NET page (handler, actually) where I check the value of the LOGON_USER server variable. This works using Visual Studio's built-in web server and it works in other sites deployed to the live intranet site. But it doesn't work on the IIS instance on my local XP machine. How can I fix it, or what's going on if I can't?
A: What authentication do you have enabled in IIS? Anonymous, Basic, Digest, Integrated Windows?
Sounds to me like anonymous access is enabled/allowed, and nothing else. This would mean that LOGON_USER is not populated.
When you access your local IIS, try using http://127.0.0.1, in particular if you use IE. IE will recognize "localhost" as being in your local trusted zone and will automatically pass your XP login credentials through when Integrated Windows auth is enabled.
A: In addition to Jon's answer, IIRC even if you have Integrated Authentication enabled, if Anonymous Authentication is enabled it will take precedence...
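For reference, a quick way to see what the handler is actually receiving once authentication is configured (a small sketch, assuming an IHttpHandler):
public void ProcessRequest(HttpContext context)
{
    // An empty LOGON_USER almost always means the request came in anonymously.
    string logonUser = context.Request.ServerVariables["LOGON_USER"];
    if (String.IsNullOrEmpty(logonUser) && context.User != null)
    {
        logonUser = context.User.Identity.Name;  // also empty for anonymous requests
    }
    context.Response.Write("LOGON_USER: " + logonUser);
}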
{ "language": "en", "url": "https://stackoverflow.com/questions/59951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: WPF - Programmatic Binding on a BitmapEffect I would like to be able to programmatically bind some data to the dependency properties on a BitmapEffect. With a FrameworkElement like TextBlock there is a SetBinding method where you can programmatically do these bindings like: myTextBlock.SetBinding(TextBlock.TextProperty, new Binding("SomeProperty")); And I know you can do it in straight XAML (as seen below) <TextBlock Width="Auto" Text="Some Content" x:Name="MyTextBlock" TextWrapping="Wrap" > <TextBlock.BitmapEffect> <BitmapEffectGroup> <OuterGlowBitmapEffect x:Name="MyGlow" GlowColor="White" GlowSize="{Binding Path=MyValue}" /> </BitmapEffectGroup> </TextBlock.BitmapEffect> </TextBlock> But I can't figure out how to accomplish this with C# because BitmapEffect doesn't have a SetBinding method. I've tried: myTextBlock.SetBinding(OuterGlowBitmapEffect.GlowSize, new Binding("SomeProperty") { Source = someObject }); But it doesn't work. A: You can use BindingOperation.SetBinding: Binding newBinding = new Binding(); newBinding.ElementName = "SomeObject"; newBinding.Path = new PropertyPath(SomeObjectType.SomeProperty); BindingOperations.SetBinding(MyGlow, OuterGlowBitmapEffect.GlowSizeProperty, newBinding); I think that should do what you want.
{ "language": "en", "url": "https://stackoverflow.com/questions/59958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Old-school SQL DB access versus ORM (NHibernate, EF, et al). Who wins? I've been successful with writing my own SQL access code with a combination of stored procedures and parameterized queries and a little wrapper library I've written to minimize the ADO.NET grunge. This has all worked very well for me in the past and I've been pretty productive with it.
I'm heading into a new project--should I put my old school stuff behind me and dig into an ORM-based solution? (I know there are vast high-concept differences between NHibernate and EF--I don't want to get into that here. For the sake of argument, let's even lump LINQ with the old-school alternatives.) I'm looking for advice on the real-world application of ORM type stuff against what I know (and know pretty well).
Old-school ADO.NET code or ORM? I'm sure there is a curve--does the curve have an ROI that makes things worthwhile? I'm anxious and willing to learn, but do have a deadline.
A: I find that LINQ to SQL is much, much faster when I'm prototyping code. It just blows away any other method when I need something now.
But there is a cost. Compared to hand-rolled stored procs, LINQ is slow. Especially if you aren't very careful, as seemingly minor changes can suddenly turn a single query into 1+N queries.
My recommendation: Use LINQ to SQL at first, then switch to procs if you aren't getting the performance you need.
A: A good question but a very controversial topic. This blog post from Frans Bouma from a few years back citing the pros of dynamic SQL (implying ORMs) over stored procedures sparked quite the fiery flame war.
A: There was a great discussion on this topic at DevTeach in Montreal. If you go to this URL: http://www.dotnetrocks.com/default.aspx?showNum=240 you will be able to hear two experts in the field (Ted Neward and Oren Eini) discuss the pros and cons of each approach. Probably the best answer you will find on a subject that has no real definitive answer.
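As a hypothetical illustration of that 1+N point with LINQ to SQL (db and the entity names are made up; DataLoadOptions lives in System.Data.Linq):
// Lazy loading: one query for the customers, then another query per customer
// the first time c.Orders is touched - the "1+N" pattern.
foreach (Customer c in db.Customers)
{
    Console.WriteLine("{0}: {1} orders", c.Name, c.Orders.Count);
}

// Eager loading the association up front avoids the per-customer queries.
// LoadOptions must be assigned before the context executes its first query.
DataLoadOptions loadOptions = new DataLoadOptions();
loadOptions.LoadWith<Customer>(c => c.Orders);
db.LoadOptions = loadOptions;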
{ "language": "en", "url": "https://stackoverflow.com/questions/59972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Implementing large system changes If you're familiar with the phrase "build one to throw away", well, we seem to have done that; we're reaching the limits of version 1 of our online app. It's time to clean things up by:

*Re-organizing code and UI
*Unifying UI processes
*Adding more functionality
*Building for the future
*Modifying our database structure to handle all of the above

What's the best way to make this transition happen?
We want to avoid throwing all of our users over to a new system (once it's finished) ... they'd freak out and we couldn't handle the call load. Our users run the gamut, from technically proficient used-to-write-software types, to those that don't know what HTML is.
Should we start a new "installation" of our system and move users over to it gradually after we ensure this new design sufficiently solves enough of the problems with version 1?
Should we (somehow) change each module of our system incrementally, and phase the changes in? This may be difficult because the database layout will change, resulting in having to tweak the "core code" and the code for several surrounding modules.
Is it common to have a set of trusted, patient, "beta tester" clients using a cutting-edge version of an app? (The goal here would be to get feedback and test for bugs on a new system)
Any other advice? First-hand experience?
A: The answer, I'm afraid, is it depends. It depends on the kind of application and the kind of users you have. Without knowing what the system is and the scope of the changes in the version, it is difficult to offer an answer.
That said, there are some rules of thumb.
Firstly, avoid the big-bang launch. Any launch of a system is going to have problems. The industry is littered with projects where people thought the big-bang launch was a great idea, only for teething problems to bring the launch to its knees. Cuil was a recent high-profile casualty of the big-bang launch. In order to make the teething problems manageable, you need to work with small numbers of users initially, then slowly ratchet up the number of users.
Secondly, the thing that you absolutely, positively must do is put the user first. The user should have to do the least amount of work possible to use V2 of the system. The ideal amount of work would be zero. This means that if you choose to slowly migrate users from one system to the other, you are responsible for making sure all their data and settings are migrated. For example, don't do anything stupid like telling the user they must use V1 for all records before 12/09/2008 and V2 for all records after. The point of releasing V2 should be making the users' lives easier, not making them needlessly more difficult.
Thirdly, have a beta program. This applies even for intranet applications. Developing an application is much like the Newton-Raphson method for finding the root of a polynomial. You make a guess of what the user wants, you deliver it to the user, the user provides feedback, and slowly but surely each iteration takes you closer to the solution to the problem. A beta program will help you find the root much faster than just foisting new versions on to people without time for them to comment on the changes. Betas help get your users on board earlier and make them feel included in the process; the importance of which I cannot stress enough.
A: We just finished plopping a brand new CRM system on our users, and let me tell you it was a TERRIBLE idea to do it that way: It was extremely painful for my team and for our customers.
I'd go through every possible means to do gradual releases, even if it means doing more work. You'll be grateful because you won't have to go through heroic efforts to get everything moved, and your customers will appreciate the ability to get introduced to the product a bit at a time.
Hope that helps!
A: I agree with Esteban that gradual releases are best. It's like remodeling a house: getting it all over with at once seems like a good idea initially, but it means you have to plan everything upfront, hire a bunch of contractors and move out. Then something changes in the plan or a contractor disappears, and all that time you hoped to save is gone. Meanwhile, gradual change gives everyone a chance to stop and think between steps. Sometimes you can avoid later changes when earlier changes work out better than you planned.
I work on a system that had a huge scaling problem. We made a list of all the changes we thought we'd need and prioritized them by probable impact. Then we started making one change at a time. About half-way through the list, we found we'd solved the scaling problem. I still have the list, but I may never need to finish it. I'm free to add features and solve other problems.
Of course, there are times when it's best to bite the bullet and tear the whole thing down. But that's a lot less common than people tend to believe. And for critical operational systems, the "tear-down" decision can be fatal. Look at the big government projects that everyone agrees have to be brought into the modern computing era, but can't be because some vital service would be lost. If the philosophy had been gradual change, maybe they would have been modernized one piece at a time.
A: It sounds like incremental re-architecture should be your agile buzz-phrase of choice. I've never done it on a web application, but I have been through some fairly radical client application changes that were done incrementally.
If you invest a little bit of time up front to make sure that pieces of work are sequenced in a fairly sensible way, it can work well. A small investment in good refactoring aids will be very helpful if you don't have them already. I can personally recommend JetBrains ReSharper if you are using .NET, and if you are Java-based I believe IntelliJ IDEA includes similar functionality.
{ "language": "en", "url": "https://stackoverflow.com/questions/59974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: XMLSerialization in C# I have a simple type that explicitly implements an interface.
public interface IMessageHeader
{
    string FromAddress { get; set; }
    string ToAddress { get; set; }
}

[Serializable]
public class MessageHeader : IMessageHeader
{
    private string from;
    private string to;

    [XmlAttribute("From")]
    string IMessageHeader.FromAddress
    {
        get { return this.from; }
        set { this.from = value; }
    }

    [XmlAttribute("To")]
    string IMessageHeader.ToAddress
    {
        get { return this.to; }
        set { this.to = value; }
    }
}
Is there a way to Serialize and Deserialize objects of type IMessageHeader? I got the following error when I tried: "Cannot serialize interface IMessageHeader"
A: You cannot serialize IMessageHeader because you can't do Activator.CreateInstance(typeof(IMessageHeader)), which is what serialization is going to do under the covers. You need a concrete type.
You can do typeof(MessageHeader), or you could, say, have an instance of MessageHeader and do XmlSerializer serializer = new XmlSerializer(instance.GetType())
A: No, because the serializer needs a concrete class that it can instantiate. Given the following code:
XmlSerializer ser = new XmlSerializer(typeof(IMessageHeader));
IMessageHeader header = (IMessageHeader)ser.Deserialize(data);
What class does the serializer create to return from Deserialize()?
In theory it's possible to serialize/deserialize an interface, just not with XmlSerializer.
A: Try adding IXmlSerializable to your IMessageHeader declaration, although I don't think that will work. From what I recall, the .NET XML serializer only works for concrete classes that have a default constructor.
A: The issue stems from the fact that you can't deserialize an interface but need to instantiate a concrete class. The XmlInclude attribute can be used to tell the serializer what concrete classes implement the interface.
A: You can create an abstract base class that implements IMessageHeader and also inherits MarshalByRefObject
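Pulling the answers together, a small sketch (using System.Xml.Serialization assumed): keep the interface, but give the properties public, implicit implementations so XmlSerializer - which only looks at public read/write members - can see them, and serialize via the concrete type:
[Serializable]
public class MessageHeader : IMessageHeader
{
    private string from;
    private string to;

    [XmlAttribute("From")]
    public string FromAddress
    {
        get { return this.from; }
        set { this.from = value; }
    }

    [XmlAttribute("To")]
    public string ToAddress
    {
        get { return this.to; }
        set { this.to = value; }
    }
}

// Serializing through the concrete type rather than the interface:
MessageHeader header = new MessageHeader();
header.FromAddress = "alice@example.com";
header.ToAddress = "bob@example.com";

XmlSerializer serializer = new XmlSerializer(typeof(MessageHeader));
serializer.Serialize(Console.Out, header);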
{ "language": "en", "url": "https://stackoverflow.com/questions/59986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: C++ inheritance and member function pointers In C++, can member function pointers be used to point to derived (or even base) class members? EDIT: Perhaps an example will help. Suppose we have a hierarchy of three classes X, Y, Z in order of inheritance. Y therefore has a base class X and a derived class Z. Now we can define a member function pointer p for class Y. This is written as: void (Y::*p)(); (For simplicity, I'll assume we're only interested in functions with the signature void f() ) This pointer p can now be used to point to member functions of class Y. This question (two questions, really) is then: * *Can p be used to point to a function in the derived class Z? *Can p be used to point to a function in the base class X? A: The critical issue with pointers to members is that they can be applied to any reference or pointer to a class of the correct type. This means that because Z is derived from Y a pointer (or reference) of type pointer (or reference) to Y may actually point (or refer) to the base class sub-object of Z or any other class derived from Y. void (Y::*p)() = &Z::z_fn; // illegal This means that anything assigned to a pointer to member of Y must actually work with any Y. If it was allowed to point to a member of Z (that wasn't a member of Y) then it would be possible to call a member function of Z on some thing that wasn't actually a Z. On the other hand, any pointer to member of Y also points the member of Z (inheritance means that Z has all the attributes and methods of its base) is it is legal to convert a pointer to member of Y to a pointer to member of Z. This is inherently safe. void (Y::*p)() = &Y::y_fn; void (Z::*q)() = p; // legal and safe A: C++03 std, §4.11 2 Pointer to member conversions: An rvalue of type “pointer to member of B of type cv T,” where B is a class type, can be converted to an rvalue of type “pointer to member of D of type cv T,” where D is a derived class (clause 10) of B. If B is an inaccessible (clause 11), ambiguous (10.2) or virtual (10.1) base class of D, a program that necessitates this conversion is ill-formed. The result of the conversion refers to the same member as the pointer to member before the conversion took place, but it refers to the base class member as if it were a member of the derived class. The result refers to the member in D’s instance of B. Since the result has type “pointer to member of D of type cv T,” it can be dereferenced with a D object. The result is the same as if the pointer to member of B were dereferenced with the B sub-object of D. The null member pointer value is converted to the null member pointer value of the destination type. 52) 52)The rule for conversion of pointers to members (from pointer to member of base to pointer to member of derived) appears inverted compared to the rule for pointers to objects (from pointer to derived to pointer to base) (4.10, clause 10). This inversion is necessary to ensure type safety. Note that a pointer to member is not a pointer to object or a pointer to function and the rules for conversions of such pointers do not apply to pointers to members. In particular, a pointer to member cannot be converted to a void*. In short, you can convert a pointer to a member of an accessible, non-virtual base class to a pointer to a member of a derived class as long as the member isn't ambiguous. 
class A { public: void foo(); }; class B : public A {}; class C { public: void bar(); }; class D { public: void baz(); }; class E : public A, public B, private C, public virtual D { public: typedef void (E::*member)(); }; class F:public E { public: void bam(); }; ... int main() { E::member mbr; mbr = &A::foo; // invalid: ambiguous; E's A or B's A? mbr = &C::bar; // invalid: C is private mbr = &D::baz; // invalid: D is virtual mbr = &F::bam; // invalid: conversion isn't defined by the standard ... Conversion in the other direction (via static_cast) is governed by § 5.2.9 9: An rvalue of type "pointer to member of D of type cv1 T" can be converted to an rvalue of type "pointer to member of B of type cv2 T", where B is a base class (clause 10 class.derived) of D, if a valid standard conversion from "pointer to member of B of type T" to "pointer to member of D of type T" exists (4.11 conv.mem), and cv2 is the same cv-qualification as, or greater cv-qualification than, cv1.11) The null member pointer value (4.11 conv.mem) is converted to the null member pointer value of the destination type. If class B contains the original member, or is a base or derived class of the class containing the original member, the resulting pointer to member points to the original member. Otherwise, the result of the cast is undefined. [Note: although class B need not contain the original member, the dynamic type of the object on which the pointer to member is dereferenced must contain the original member; see 5.5 expr.mptr.oper.] 11) Function types (including those used in pointer to member function types) are never cv-qualified; see 8.3.5 dcl.fct. In short, you can convert from a derived D::* to a base B::* if you can convert from a B::* to a D::*, though you can only use the B::* on objects that are of type D or are descended from D. A: You might want to check out this article Member Function Pointers and the Fastest Possible C++ Delegates The short answer seems to be yes, in some cases. A: I'm not 100% sure what you are asking, but here is an example that works with virtual functions: #include <iostream> using namespace std; class A { public: virtual void foo() { cout << "A::foo\n"; } }; class B : public A { public: virtual void foo() { cout << "B::foo\n"; } }; int main() { void (A::*bar)() = &A::foo; (A().*bar)(); (B().*bar)(); return 0; } A: I believe so. Since the function pointer uses the signature to identify itself, the base/derived behavior would rely on whatever object you called it on. A: My experimentation revealed the following: Warning - this might be undefined behaviour. It would be helpful if someone could provide a definitive reference. * *This worked, but required a cast when assigning the derived member function to p. *This also worked, but required extra casts when dereferencing p. If we're feeling really ambitious we could ask if p can be used to point to member functions of unrelated classes. I didn't try it, but the FastDelegate page linked in dagorym's answer suggests it's possible. In conclusion, I'll try to avoid using member function pointers in this way. Passages like the following don't inspire confidence: Casting between member function pointers is an extremely murky area. During the standardization of C++, there was a lot of discussion about whether you should be able to cast a member function pointer from one class to a member function pointer of a base or derived class, and whether you could cast between unrelated classes. 
By the time the standards committee made up their mind, different compiler vendors had already made implementation decisions which had locked them into different answers to these questions. [FastDelegate article] A: Assume that we have class X, class Y : public X, and class Z : public Y You should be able to assign methods for both X, Y to pointers of type void (Y::*p)() but not methods for Z. To see why consider the following: void (Y::*p)() = &Z::func; // we pretend this is legal Y * y = new Y; // clearly legal (y->*p)(); // okay, follows the rules, but what would this mean? By allowing that assignment we permit the invocation of a method for Z on a Y object which could lead to who knows what. You can make it all work by casting the pointers but that is not safe or guaranteed to work. A: Here is an example of what works. You can override a method in derived class, and another method of base class that uses pointer to this overridden method indeed calls the derived class's method. #include <iostream> #include <string> using namespace std; class A { public: virtual void traverse(string arg) { find(&A::visit, arg); } protected: virtual void find(void (A::*method)(string arg), string arg) { (this->*method)(arg); } virtual void visit(string arg) { cout << "A::visit, arg:" << arg << endl; } }; class B : public A { protected: virtual void visit(string arg) { cout << "B::visit, arg:" << arg << endl; } }; int main() { A a; B b; a.traverse("one"); b.traverse("two"); return 0; }
{ "language": "en", "url": "https://stackoverflow.com/questions/60000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: How to build large/busy RSS feed I've been playing with RSS feeds this week, and for my next trick I want to build one for our internal application log. We have a centralized database table that our myriad batch and intranet apps use for posting log messages. I want to create an RSS feed off of this table, but I'm not sure how to handle the volume- there could be hundreds of entries per day even on a normal day. An exceptional make-you-want-to-quit kind of day might see a few thousand. Any thoughts? A: I would make the feed a static file (you can easily serve thousands of these), regenerated periodically. Then you have a much broader choice, because it doesn't have to run below second, it can run even minutes. And users still get perfect download speed and reasonable update speed. A: If you are building a system with notifications that must not be missed, then a pub-sub mechanism (using XMPP, one of the other protocols supported by ApacheMQ, or something similar) will be more suitable that a syndication mechanism. You need some measure of coupling between the system that is generating the notifications and ones that are consuming them, to ensure that consumers don't miss notifications. (You can do this using RSS or Atom as a transport format, but it's probably not a common use case; you'd need to vary the notifications shown based on the consumer and which notifications it has previously seen.) A: I'd split up the feeds as much as possible and let users recombine them as desired. If I were doing it I'd probably think about using Django and the syndication framework. Django's models could probably handle representing the data structure of the tables you care about. You could have a URL that catches everything, like: r'/rss/(?(\w*?)/)+' (I think that might work, but I can't test it now so it might not be perfect). That way you could use URLs like (edited to cancel the auto-linking of example URLs): * *http:// feedserver/rss/batch-file-output/ *http:// feedserver/rss/support-tickets/ *http:// feedserver/rss/batch-file-output/support-tickets/ (both of the first two combined into one) Then in the view: def get_batch_file_messages(): # Grab all the recent batch files messages here. # Maybe cache the result and only regenerate every so often. # Other feed functions here. feed_mapping = { 'batch-file-output': get_batch_file_messages, } def rss(request, *args): items_to_display = [] for feed in args: items_to_display += feed_mapping[feed]() # Processing/returning the feed. Having individual, chainable feeds means that users can subscribe to one feed at a time, or merge the ones they care about into one larger feed. Whatever's easier for them to read, they can do. A: Without knowing your application, I can't offer specific advice. That said, it's common in these sorts of systems to have a level of severity. You could have a query string parameter that you tack on to the end of the URL that specifies the severity. If set to "DEBUG" you would see every event, no matter how trivial. If you set it to "FATAL" you'd only see the events that that were "System Failure" in magnitude. If there are still too many events, you may want to sub-divide your events in to some sort of category system. Again, I would have this as a query string parameter. You can then have multiple RSS feeds for the various categories and severities. This should allow you to tune the level of alerts you get an acceptable level. A: Okay, I decided how I'm gonna handle this. 
I'm using the timestamp field for each column and grouping by day. It takes a little bit of SQL-fu to make it happen since of course there's a full timestamp there and I need to be semi-intelligent about how I pick the log message to show from within the group, but it's not too bad. Further, I'm building it to let you select which application to monitor, and then showing every message (max 50) from a specific day. That gets me down to something reasonable. I'm still hoping for a good answer to the more generic question: "How do you syndicate many important messages, where missing a message could be a problem?" A: In this case, it's more of a manager's dashboard: how much work was put into support today, is there anything pressing in the log right now, and for when we first arrive in the morning as a measure of what went wrong with batch jobs overnight.
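For what it's worth, the day-grouping part looks roughly like this in T-SQL (table and column names are guesses; choosing which message text to show for each group is the extra "SQL-fu"):
-- One feed item per application per day, with a count and the time of the
-- most recent entry. Style 120 yields 'yyyy-mm-dd hh:mi:ss', so the first
-- ten characters are just the date.
SELECT ApplicationName,
       CONVERT(varchar(10), LogTimestamp, 120) AS LogDay,
       COUNT(*) AS MessageCount,
       MAX(LogTimestamp) AS LastMessageAt
FROM dbo.ApplicationLog
GROUP BY ApplicationName, CONVERT(varchar(10), LogTimestamp, 120)
ORDER BY LogDay DESC, ApplicationName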
{ "language": "en", "url": "https://stackoverflow.com/questions/60009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to create multiple records at once with ActiveScaffold in ROR I want to use ActiveScaffold to create assignment records for several students in a single step. The records will all contain identical data, with the exception of the student_id.
I was able to override the default form and replace the dropdown box for selecting the student name with a multi-select box - which is what I want. That change, however, was only cosmetic, as the underlying code only grabs the first selected name from that box, and creates a single record.
Can somebody suggest a good way to accomplish this in a way that doesn't require my deciphering and rewriting too much of the underlying ActiveScaffold code?
Update: I still haven't found a good answer to this problem.
A: I suppose you have defined your multi-select box by adding :multiple => true to the HTML parameters of select_tag. Then, in the controller, you need to access the list of names selected, which you can do like this:
params[:students].collect{|student| insert_student(student, params[:assignment_id]) }
With collect applied to an array or enum you can loop through each item of that array, and then do what you need with each student (in the example, calling a function that inserts the student). collect returns an array with the results of the block.
A: If your assignments have has_many :students or has_and_belongs_to_many :students, then you can change the id of the multi-select box to assignment_student_ids[], and it should work.
A: I was referred to BatchCreate, an ActiveScaffold extension which looks like it might do the trick.
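Outside of ActiveScaffold's own machinery, the underlying create step can be as small as this sketch; it assumes an Assignment model with a student_id column and a form that posts student_ids[] plus the shared fields:
def create_for_students
  shared = params[:assignment]            # the fields common to every record
  params[:student_ids].each do |student_id|
    Assignment.create(shared.merge(:student_id => student_id))
  end
  redirect_to :action => 'list'
end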
{ "language": "en", "url": "https://stackoverflow.com/questions/60019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you resize an IE browser window to 1024 x 768 In Firefox you can enter the following into the awesome bar and hit enter:
javascript:self.resizeTo(1024,768);
How do you do the same thing in IE?
A: Your code works in IE, you just need to "Allow blocked Content" in the Security Toolbar
A: Try:
javascript:resizeTo(1024,768);
This works in IE7 at least.
A: javascript:resizeTo(1024,768);
vbscript:resizeto(1024,768)
will work in IE7, but consider using something like
javascript:moveTo(0,0);resizeTo(1024,768);
because IE7 doesn't allow the window to "resize" beyond the screen borders. If you work on a 1024x768 desktop, this is what happens...

*Firefox: 1024x768 window, going behind the taskbar. If you drop the moveTo part, the top left corner of the window won't change position. (You still get a 1024x768 window)
*IE7: As close as possible to the requested size without obscuring the taskbar or allowing any part of the window to lie beyond the screen borders.
*Safari: As close as possible to the requested size without obscuring the taskbar or allowing any part of the window to lie beyond the screen borders, but you can omit the moveTo part. Safari will move the top left corner of the window for you.
*Opera: Nothing happens.
*Chrome: Nothing happens.

A: Maybe not directly related if you were looking for only a JavaScript solution, but you can use the free Windows utility Sizer to automatically resize any (browser) window to a predefined size like 800x600, 1024x768, etc.
A: It works in IE6, but I think IE7 added some security around this?
{ "language": "en", "url": "https://stackoverflow.com/questions/60030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Getting the array key in a 'foreach' loop How do I get the key of the current element in a foreach loop in C#? For example: PHP foreach ($array as $key => $value) { echo("$value is assigned to key: $key"); } What I'm trying to do in C#: int[] values = { 5, 14, 29, 49, 99, 150, 999 }; foreach (int val in values) { if(search <= val && !stop) { // Set key to a variable } } A: Alas there is no built-in way to do this. Either use a for loop or create a temp variable that you increment on each pass. A: I answered this in another version of this question: Foreach is for iterating over collections that implement IEnumerable. It does this by calling GetEnumerator on the collection, which will return an Enumerator. This Enumerator has a method and a property: * MoveNext() * Current Current returns the object that Enumerator is currently on, MoveNext updates Current to the next object. Obviously, the concept of an index is foreign to the concept of enumeration, and cannot be done. Because of that, most collections are able to be traversed using an indexer and the for loop construct. I greatly prefer using a for loop in this situation compared to tracking the index with a local variable. How do you get the index of the current iteration of a foreach loop? A: Grauenwolf's way is the most straightforward and performant way of doing this with an array: Either use a for loop or create a temp variable that you increment on each pass. Which would of course look like this: int[] values = { 5, 14, 29, 49, 99, 150, 999 }; for (int key = 0; key < values.Length; ++key) if (search <= values[key] && !stop) { // set key to a variable } With .NET 3.5 you can take a more functional approach as well, but it is a little more verbose at the site, and would likely rely on a couple support functions for visiting the elements in an IEnumerable. Overkill if this is all you need it for, but handy if you tend to do a lot of collection processing. A: If you want to get at the key (read: index) then you'd have to use a for loop. If you actually want to have a collection that holds keys/values then I'd consider using a HashTable or a Dictionary (if you want to use Generics). Dictionary<int, string> items = new Dictionary<int, string>(); foreach (int key in items.Keys) { Console.WriteLine("Key: {0} has value: {1}", key, items[key]); } Hope that helps, Tyler A: Actually you should use classic for (;;) loop if you want to loop through an array. But the similar functionality that you have achieved with your PHP code can be achieved in C# like this with a Dictionary: Dictionary<int, int> values = new Dictionary<int, int>(); values[0] = 5; values[1] = 14; values[2] = 29; values[3] = 49; // whatever... 
foreach (int key in values.Keys) { Console.WriteLine("{0} is assigned to key: {1}", values[key], key); } A: With DictionaryEntry and KeyValuePair: Based on MSDN IDictionary<string,string> openWith = new Dictionary<string,string>() { { "txt", "notepad.exe" } { "bmp", "paint.exe" } { "rtf", "wordpad.exe" } }; foreach (DictionaryEntry de in openWith) { Console.WriteLine("Key = {0}, Value = {1}", de.Key, de.Value); } // also foreach (KeyValuePair<string,string> de in openWith) { Console.WriteLine("Key = {0}, Value = {1}", de.Key, de.Value); } Releated SO question: KeyValuePair VS DictionaryEntry A: Here's a solution I just came up with for this problem Original code: int index=0; foreach (var item in enumerable) { blah(item, index); // some code that depends on the index index++; } Updated code enumerable.ForEach((item, index) => blah(item, index)); Extension Method: public static IEnumerable<T> ForEach<T>(this IEnumerable<T> enumerable, Action<T, int> action) { var unit = new Unit(); // unit is a new type from the reactive framework (http://msdn.microsoft.com/en-us/devlabs/ee794896.aspx) to represent a void, since in C# you can't return a void enumerable.Select((item, i) => { action(item, i); return unit; }).ToList(); return pSource; } A: You can implement this functionality yourself using an extension method. For example, here is an implementation of an extension method KeyValuePairs which works on lists: public struct IndexValue<T> { public int Index {get; private set;} public T Value {get; private set;} public IndexValue(int index, T value) : this() { this.Index = index; this.Value = value; } } public static class EnumExtension { public static IEnumerable<IndexValue<T>> KeyValuePairs<T>(this IList<T> list) { for (int i = 0; i < list.Count; i++) yield return new IndexValue<T>(i, list[i]); } } A: myKey = Array.IndexOf(values, val);
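For completeness, with .NET 3.5/LINQ the "functional approach" mentioned above can lean on the Select overload that exposes the index (requires using System.Linq; "search" is assumed to be the value being looked for, as in the question):
int[] values = { 5, 14, 29, 49, 99, 150, 999 };
int search = 42;

// Pair each value with its index, then pick the first match (null if none).
var hit = values
    .Select((value, index) => new { value, index })
    .FirstOrDefault(pair => search <= pair.value);

int key = (hit == null) ? -1 : hit.index;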
{ "language": "en", "url": "https://stackoverflow.com/questions/60032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: What is the easiest way to duplicate an activerecord record? I want to make a copy of an ActiveRecord object, changing a single field in the process (in addition to the id). What is the simplest way to accomplish this? I realize I could create a new record, and then iterate over each of the fields copying the data field-by-field - but I figured there must be an easier way to do this. Perhaps something like this: new_record = Record.copy(:id) A: Depending on your needs and programming style, you can also use a combination of the new method of the class and merge. For lack of a better simple example, suppose you have a task scheduled for a certain date and you want to duplicate it to another date. The actual attributes of the task aren't important, so: old_task = Task.find(task_id) new_task = Task.new(old_task.attributes.merge({:scheduled_on => some_new_date})) will create a new task with :id => nil, :scheduled_on => some_new_date, and all other attributes the same as the original task. Using Task.new, you will have to explicitly call save, so if you want it saved automatically, change Task.new to Task.create. Peace. A: To get a copy, use the dup (or clone for < rails 3.1+) method: #rails >= 3.1 new_record = old_record.dup # rails < 3.1 new_record = old_record.clone Then you can change whichever fields you want. ActiveRecord overrides the built-in Object#clone to give you a new (not saved to the DB) record with an unassigned ID. Note that it does not copy associations, so you'll have to do this manually if you need to. Rails 3.1 clone is a shallow copy, use dup instead... A: In Rails 5 you can simply create duplicate object or record like this. new_user = old_user.dup A: You may also like the Amoeba gem for ActiveRecord 3.2. In your case, you probably want to make use of the nullify, regex or prefix options available in the configuration DSL. It supports easy and automatic recursive duplication of has_one, has_many and has_and_belongs_to_many associations, field preprocessing and a highly flexible and powerful configuration DSL that can be applied both to the model and on the fly. be sure to check out the Amoeba Documentation but usage is pretty easy... just gem install amoeba or add gem 'amoeba' to your Gemfile then add the amoeba block to your model and run the dup method as usual class Post < ActiveRecord::Base has_many :comments has_and_belongs_to_many :tags amoeba do enable end end class Comment < ActiveRecord::Base belongs_to :post end class Tag < ActiveRecord::Base has_and_belongs_to_many :posts end class PostsController < ActionController def some_method my_post = Post.find(params[:id]) new_post = my_post.dup new_post.save end end You can also control which fields get copied in numerous ways, but for example, if you wanted to prevent comments from being duplicated but you wanted to maintain the same tags, you could do something like this: class Post < ActiveRecord::Base has_many :comments has_and_belongs_to_many :tags amoeba do exclude_field :comments end end You can also preprocess fields to help indicate uniqueness with both prefixes and suffixes as well as regexes. 
In addition, there are also numerous options so you can write in the most readable style for your purpose: class Post < ActiveRecord::Base has_many :comments has_and_belongs_to_many :tags amoeba do include_field :tags prepend :title => "Copy of " append :contents => " (copied version)" regex :contents => {:replace => /dog/, :with => "cat"} end end Recursive copying of associations is easy, just enable amoeba on child models as well class Post < ActiveRecord::Base has_many :comments amoeba do enable end end class Comment < ActiveRecord::Base belongs_to :post has_many :ratings amoeba do enable end end class Rating < ActiveRecord::Base belongs_to :comment end The configuration DSL has yet more options, so be sure to check out the documentation. Enjoy! :) A: Use ActiveRecord::Base#dup if you don't want to copy the id A: Here is a sample of overriding ActiveRecord #dup method to customize instance duplication and include relation duplication as well: class Offer < ApplicationRecord has_many :offer_items def dup super.tap do |new_offer| # change title of the new instance new_offer.title = "Copy of #{@offer.title}" # duplicate offer_items as well self.offer_items.each { |offer_item| new_offer.offer_items << offer_item.dup } end end end Note: this method doesn't require any external gem but it requires newer ActiveRecord version with #dup method implemented A: I usually just copy the attributes, changing whatever I need changing: new_user = User.new(old_user.attributes.merge(:login => "newlogin")) A: The easily way is: #your rails >= 3.1 (i was done it with Rails 5.0.0.1) o = Model.find(id) # (Range).each do |item| (1..109).each do |item| new_record = o.dup new_record.save end Or # if your rails < 3.1 o = Model.find(id) (1..109).each do |item| new_record = o.clone new_record.save end A: If you need a deep copy with associations, I recommend the deep_cloneable gem. A: You can also check the acts_as_inheritable gem. "Acts As Inheritable is a Ruby Gem specifically written for Rails/ActiveRecord models. It is meant to be used with the Self-Referential Association, or with a model having a parent that share the inheritable attributes. This will let you inherit any attribute or relation from the parent model." By adding acts_as_inheritable to your models you will have access to these methods: inherit_attributes class Person < ActiveRecord::Base acts_as_inheritable attributes: %w(favorite_color last_name soccer_team) # Associations belongs_to :parent, class_name: 'Person' has_many :children, class_name: 'Person', foreign_key: :parent_id end parent = Person.create(last_name: 'Arango', soccer_team: 'Verdolaga', favorite_color:'Green') son = Person.create(parent: parent) son.inherit_attributes son.last_name # => Arango son.soccer_team # => Verdolaga son.favorite_color # => Green inherit_relations class Person < ActiveRecord::Base acts_as_inheritable associations: %w(pet) # Associations has_one :pet end parent = Person.create(last_name: 'Arango') parent_pet = Pet.create(person: parent, name: 'Mango', breed:'Golden Retriver') parent_pet.inspect #=> #<Pet id: 1, person_id: 1, name: "Mango", breed: "Golden Retriver"> son = Person.create(parent: parent) son.inherit_relations son.pet.inspect # => #<Pet id: 2, person_id: 2, name: "Mango", breed: "Golden Retriver"> Hope this can help you. A: Since there could be more logic, when duplicating a model, I would suggest to create a new class, where you handle all the needed logic. 
To ease that, there's a gem that can help: clowne As per their documentation examples, for a User model: class User < ActiveRecord::Base # create_table :users do |t| # t.string :login # t.string :email # t.timestamps null: false # end has_one :profile has_many :posts end You create your cloner class: class UserCloner < Clowne::Cloner adapter :active_record include_association :profile, clone_with: SpecialProfileCloner include_association :posts nullify :login # params here is an arbitrary Hash passed into cloner finalize do |_source, record, params| record.email = params[:email] end end class SpecialProfileCloner < Clowne::Cloner adapter :active_record nullify :name end and then use it: user = User.last #=> <#User(login: 'clown', email: '[email protected]')> cloned = UserCloner.call(user, email: '[email protected]') cloned.persisted? # => false cloned.save! cloned.login # => nil cloned.email # => "[email protected]" # associations: cloned.posts.count == user.posts.count # => true cloned.profile.name # => nil Example copied from the project, but it will give a clear vision of what you can achieve. For a quick and simple record I would go with: Model.new(Model.last.attributes.reject {|k,_v| k.to_s == 'id'}) A: Try Rails's dup method: new_record = old_record.dup; new_record.save
{ "language": "en", "url": "https://stackoverflow.com/questions/60033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "468" }
Q: How can you find and replace text in a file using the Windows command-line environment? I am writing a batch file script using Windows command-line environment and want to change each occurrence of some text in a file (ex. "FOO") with another (ex. "BAR"). What is the simplest way to do that? Any built in functions? A: Take a look at Is there any sed like utility for cmd.exe which asked for a sed equivalent under Windows, should apply to this question as well. Executive summary: * *It can be done in batch file, but it's not pretty *Lots of available third party executables that will do it for you, if you have the luxury of installing or just copying over an exe *Can be done with VBScript or similar if you need something able to run on a Windows box without modification etc. A: Create file replace.vbs: Const ForReading = 1 Const ForWriting = 2 strFileName = Wscript.Arguments(0) strOldText = Wscript.Arguments(1) strNewText = Wscript.Arguments(2) Set objFSO = CreateObject("Scripting.FileSystemObject") Set objFile = objFSO.OpenTextFile(strFileName, ForReading) strText = objFile.ReadAll objFile.Close strNewText = Replace(strText, strOldText, strNewText) Set objFile = objFSO.OpenTextFile(strFileName, ForWriting) objFile.Write strNewText 'WriteLine adds extra CR/LF objFile.Close To use this revised script (which we’ll call replace.vbs) just type a command similar to this from the command prompt: cscript replace.vbs "C:\Scripts\Text.txt" "Jim " "James " A: Note - Be sure to see the update at the end of this answer for a link to the superior JREPL.BAT that supersedes REPL.BAT JREPL.BAT 7.0 and above natively supports unicode (UTF-16LE) via the /UTF option, as well as any other character set, including UTF-8, via ADO!!!! I have written a small hybrid JScript/batch utility called REPL.BAT that is very convenient for modifying ASCII (or extended ASCII) files via the command line or a batch file. The purely native script does not require installation of any 3rd party executeable, and it works on any modern Windows version from XP onward. It is also very fast, especially when compared to pure batch solutions. REPL.BAT simply reads stdin, performs a JScript regex search and replace, and writes the result to stdout. Here is a trivial example of how to replace foo with bar in test.txt, assuming REPL.BAT is in your current folder, or better yet, somewhere within your PATH: type test.txt|repl "foo" "bar" >test.txt.new move /y test.txt.new test.txt The JScript regex capabilities make it very powerful, especially the ability of the replacement text to reference captured substrings from the search text. I've included a number of options in the utility that make it quite powerful. For example, combining the M and X options enable modification of binary files! The M Multi-line option allows searches across multiple lines. The X eXtended substitution pattern option provides escape sequences that enable inclusion of any binary value in the replacement text. The entire utility could have been written as pure JScript, but the hybrid batch file eliminates the need to explicitly specify CSCRIPT every time you want to use the utility. Here is the REPL.BAT script. Full documentation is embedded within the script. 
@if (@X)==(@Y) @end /* Harmless hybrid line that begins a JScript comment ::************ Documentation *********** ::REPL.BAT version 6.2 ::: :::REPL Search Replace [Options [SourceVar]] :::REPL /?[REGEX|REPLACE] :::REPL /V ::: ::: Performs a global regular expression search and replace operation on ::: each line of input from stdin and prints the result to stdout. ::: ::: Each parameter may be optionally enclosed by double quotes. The double ::: quotes are not considered part of the argument. The quotes are required ::: if the parameter contains a batch token delimiter like space, tab, comma, ::: semicolon. The quotes should also be used if the argument contains a ::: batch special character like &, |, etc. so that the special character ::: does not need to be escaped with ^. ::: ::: If called with a single argument of /?, then prints help documentation ::: to stdout. If a single argument of /?REGEX, then opens up Microsoft's ::: JScript regular expression documentation within your browser. If a single ::: argument of /?REPLACE, then opens up Microsoft's JScript REPLACE ::: documentation within your browser. ::: ::: If called with a single argument of /V, case insensitive, then prints ::: the version of REPL.BAT. ::: ::: Search - By default, this is a case sensitive JScript (ECMA) regular ::: expression expressed as a string. ::: ::: JScript regex syntax documentation is available at ::: http://msdn.microsoft.com/en-us/library/ae5bf541(v=vs.80).aspx ::: ::: Replace - By default, this is the string to be used as a replacement for ::: each found search expression. Full support is provided for ::: substituion patterns available to the JScript replace method. ::: ::: For example, $& represents the portion of the source that matched ::: the entire search pattern, $1 represents the first captured ::: submatch, $2 the second captured submatch, etc. A $ literal ::: can be escaped as $$. ::: ::: An empty replacement string must be represented as "". ::: ::: Replace substitution pattern syntax is fully documented at ::: http://msdn.microsoft.com/en-US/library/efy6s3e6(v=vs.80).aspx ::: ::: Options - An optional string of characters used to alter the behavior ::: of REPL. The option characters are case insensitive, and may ::: appear in any order. ::: ::: A - Only print altered lines. Unaltered lines are discarded. ::: If the S options is present, then prints the result only if ::: there was a change anywhere in the string. The A option is ::: incompatible with the M option unless the S option is present. ::: ::: B - The Search must match the beginning of a line. ::: Mostly used with literal searches. ::: ::: E - The Search must match the end of a line. ::: Mostly used with literal searches. ::: ::: I - Makes the search case-insensitive. ::: ::: J - The Replace argument represents a JScript expression. ::: The expression may access an array like arguments object ::: named $. However, $ is not a true array object. ::: ::: The $.length property contains the total number of arguments ::: available. The $.length value is equal to n+3, where n is the ::: number of capturing left parentheses within the Search string. ::: ::: $[0] is the substring that matched the Search, ::: $[1] through $[n] are the captured submatch strings, ::: $[n+1] is the offset where the match occurred, and ::: $[n+2] is the original source string. ::: ::: Arguments $[0] through $[10] may be abbreviated as ::: $1 through $10. Argument $[11] and above must use the square ::: bracket notation. 
::: ::: L - The Search is treated as a string literal instead of a ::: regular expression. Also, all $ found in the Replace string ::: are treated as $ literals. ::: ::: M - Multi-line mode. The entire contents of stdin is read and ::: processed in one pass instead of line by line, thus enabling ::: search for \n. This also enables preservation of the original ::: line terminators. If the M option is not present, then every ::: printed line is terminated with carriage return and line feed. ::: The M option is incompatible with the A option unless the S ::: option is also present. ::: ::: Note: If working with binary data containing NULL bytes, ::: then the M option must be used. ::: ::: S - The source is read from an environment variable instead of ::: from stdin. The name of the source environment variable is ::: specified in the next argument after the option string. Without ::: the M option, ^ anchors the beginning of the string, and $ the ::: end of the string. With the M option, ^ anchors the beginning ::: of a line, and $ the end of a line. ::: ::: V - Search and Replace represent the name of environment ::: variables that contain the respective values. An undefined ::: variable is treated as an empty string. ::: ::: X - Enables extended substitution pattern syntax with support ::: for the following escape sequences within the Replace string: ::: ::: \\ - Backslash ::: \b - Backspace ::: \f - Formfeed ::: \n - Newline ::: \q - Quote ::: \r - Carriage Return ::: \t - Horizontal Tab ::: \v - Vertical Tab ::: \xnn - Extended ASCII byte code expressed as 2 hex digits ::: \unnnn - Unicode character expressed as 4 hex digits ::: ::: Also enables the \q escape sequence for the Search string. ::: The other escape sequences are already standard for a regular ::: expression Search string. ::: ::: Also modifies the behavior of \xnn in the Search string to work ::: properly with extended ASCII byte codes. ::: ::: Extended escape sequences are supported even when the L option ::: is used. Both Search and Replace support all of the extended ::: escape sequences if both the X and L opions are combined. ::: ::: Return Codes: 0 = At least one change was made ::: or the /? or /V option was used ::: ::: 1 = No change was made ::: ::: 2 = Invalid call syntax or incompatible options ::: ::: 3 = JScript runtime error, typically due to invalid regex ::: ::: REPL.BAT was written by Dave Benham, with assistance from DosTips user Aacini ::: to get \xnn to work properly with extended ASCII byte codes. Also assistance ::: from DosTips user penpen diagnosing issues reading NULL bytes, along with a ::: workaround. REPL.BAT was originally posted at: ::: http://www.dostips.com/forum/viewtopic.php?f=3&t=3855 ::: ::************ Batch portion *********** @echo off if .%2 equ . ( if "%~1" equ "/?" 
( <"%~f0" cscript //E:JScript //nologo "%~f0" "^:::" "" a exit /b 0 ) else if /i "%~1" equ "/?regex" ( explorer "http://msdn.microsoft.com/en-us/library/ae5bf541(v=vs.80).aspx" exit /b 0 ) else if /i "%~1" equ "/?replace" ( explorer "http://msdn.microsoft.com/en-US/library/efy6s3e6(v=vs.80).aspx" exit /b 0 ) else if /i "%~1" equ "/V" ( <"%~f0" cscript //E:JScript //nologo "%~f0" "^::(REPL\.BAT version)" "$1" a exit /b 0 ) else ( call :err "Insufficient arguments" exit /b 2 ) ) echo(%~3|findstr /i "[^SMILEBVXAJ]" >nul && ( call :err "Invalid option(s)" exit /b 2 ) echo(%~3|findstr /i "M"|findstr /i "A"|findstr /vi "S" >nul && ( call :err "Incompatible options" exit /b 2 ) cscript //E:JScript //nologo "%~f0" %* exit /b %errorlevel% :err >&2 echo ERROR: %~1. Use REPL /? to get help. exit /b ************* JScript portion **********/ var rtn=1; try { var env=WScript.CreateObject("WScript.Shell").Environment("Process"); var args=WScript.Arguments; var search=args.Item(0); var replace=args.Item(1); var options="g"; if (args.length>2) options+=args.Item(2).toLowerCase(); var multi=(options.indexOf("m")>=0); var alterations=(options.indexOf("a")>=0); if (alterations) options=options.replace(/a/g,""); var srcVar=(options.indexOf("s")>=0); if (srcVar) options=options.replace(/s/g,""); var jexpr=(options.indexOf("j")>=0); if (jexpr) options=options.replace(/j/g,""); if (options.indexOf("v")>=0) { options=options.replace(/v/g,""); search=env(search); replace=env(replace); } if (options.indexOf("x")>=0) { options=options.replace(/x/g,""); if (!jexpr) { replace=replace.replace(/\\\\/g,"\\B"); replace=replace.replace(/\\q/g,"\""); replace=replace.replace(/\\x80/g,"\\u20AC"); replace=replace.replace(/\\x82/g,"\\u201A"); replace=replace.replace(/\\x83/g,"\\u0192"); replace=replace.replace(/\\x84/g,"\\u201E"); replace=replace.replace(/\\x85/g,"\\u2026"); replace=replace.replace(/\\x86/g,"\\u2020"); replace=replace.replace(/\\x87/g,"\\u2021"); replace=replace.replace(/\\x88/g,"\\u02C6"); replace=replace.replace(/\\x89/g,"\\u2030"); replace=replace.replace(/\\x8[aA]/g,"\\u0160"); replace=replace.replace(/\\x8[bB]/g,"\\u2039"); replace=replace.replace(/\\x8[cC]/g,"\\u0152"); replace=replace.replace(/\\x8[eE]/g,"\\u017D"); replace=replace.replace(/\\x91/g,"\\u2018"); replace=replace.replace(/\\x92/g,"\\u2019"); replace=replace.replace(/\\x93/g,"\\u201C"); replace=replace.replace(/\\x94/g,"\\u201D"); replace=replace.replace(/\\x95/g,"\\u2022"); replace=replace.replace(/\\x96/g,"\\u2013"); replace=replace.replace(/\\x97/g,"\\u2014"); replace=replace.replace(/\\x98/g,"\\u02DC"); replace=replace.replace(/\\x99/g,"\\u2122"); replace=replace.replace(/\\x9[aA]/g,"\\u0161"); replace=replace.replace(/\\x9[bB]/g,"\\u203A"); replace=replace.replace(/\\x9[cC]/g,"\\u0153"); replace=replace.replace(/\\x9[dD]/g,"\\u009D"); replace=replace.replace(/\\x9[eE]/g,"\\u017E"); replace=replace.replace(/\\x9[fF]/g,"\\u0178"); replace=replace.replace(/\\b/g,"\b"); replace=replace.replace(/\\f/g,"\f"); replace=replace.replace(/\\n/g,"\n"); replace=replace.replace(/\\r/g,"\r"); replace=replace.replace(/\\t/g,"\t"); replace=replace.replace(/\\v/g,"\v"); replace=replace.replace(/\\x[0-9a-fA-F]{2}|\\u[0-9a-fA-F]{4}/g, function($0,$1,$2){ return String.fromCharCode(parseInt("0x"+$0.substring(2))); } ); replace=replace.replace(/\\B/g,"\\"); } search=search.replace(/\\\\/g,"\\B"); search=search.replace(/\\q/g,"\""); search=search.replace(/\\x80/g,"\\u20AC"); search=search.replace(/\\x82/g,"\\u201A"); search=search.replace(/\\x83/g,"\\u0192"); 
search=search.replace(/\\x84/g,"\\u201E"); search=search.replace(/\\x85/g,"\\u2026"); search=search.replace(/\\x86/g,"\\u2020"); search=search.replace(/\\x87/g,"\\u2021"); search=search.replace(/\\x88/g,"\\u02C6"); search=search.replace(/\\x89/g,"\\u2030"); search=search.replace(/\\x8[aA]/g,"\\u0160"); search=search.replace(/\\x8[bB]/g,"\\u2039"); search=search.replace(/\\x8[cC]/g,"\\u0152"); search=search.replace(/\\x8[eE]/g,"\\u017D"); search=search.replace(/\\x91/g,"\\u2018"); search=search.replace(/\\x92/g,"\\u2019"); search=search.replace(/\\x93/g,"\\u201C"); search=search.replace(/\\x94/g,"\\u201D"); search=search.replace(/\\x95/g,"\\u2022"); search=search.replace(/\\x96/g,"\\u2013"); search=search.replace(/\\x97/g,"\\u2014"); search=search.replace(/\\x98/g,"\\u02DC"); search=search.replace(/\\x99/g,"\\u2122"); search=search.replace(/\\x9[aA]/g,"\\u0161"); search=search.replace(/\\x9[bB]/g,"\\u203A"); search=search.replace(/\\x9[cC]/g,"\\u0153"); search=search.replace(/\\x9[dD]/g,"\\u009D"); search=search.replace(/\\x9[eE]/g,"\\u017E"); search=search.replace(/\\x9[fF]/g,"\\u0178"); if (options.indexOf("l")>=0) { search=search.replace(/\\b/g,"\b"); search=search.replace(/\\f/g,"\f"); search=search.replace(/\\n/g,"\n"); search=search.replace(/\\r/g,"\r"); search=search.replace(/\\t/g,"\t"); search=search.replace(/\\v/g,"\v"); search=search.replace(/\\x[0-9a-fA-F]{2}|\\u[0-9a-fA-F]{4}/g, function($0,$1,$2){ return String.fromCharCode(parseInt("0x"+$0.substring(2))); } ); search=search.replace(/\\B/g,"\\"); } else search=search.replace(/\\B/g,"\\\\"); } if (options.indexOf("l")>=0) { options=options.replace(/l/g,""); search=search.replace(/([.^$*+?()[{\\|])/g,"\\$1"); if (!jexpr) replace=replace.replace(/\$/g,"$$$$"); } if (options.indexOf("b")>=0) { options=options.replace(/b/g,""); search="^"+search } if (options.indexOf("e")>=0) { options=options.replace(/e/g,""); search=search+"$" } var search=new RegExp(search,options); var str1, str2; if (srcVar) { str1=env(args.Item(3)); str2=str1.replace(search,jexpr?replFunc:replace); if (!alterations || str1!=str2) if (multi) { WScript.Stdout.Write(str2); } else { WScript.Stdout.WriteLine(str2); } if (str1!=str2) rtn=0; } else if (multi){ var buf=1024; str1=""; while (!WScript.StdIn.AtEndOfStream) { str1+=WScript.StdIn.Read(buf); buf*=2 } str2=str1.replace(search,jexpr?replFunc:replace); WScript.Stdout.Write(str2); if (str1!=str2) rtn=0; } else { while (!WScript.StdIn.AtEndOfStream) { str1=WScript.StdIn.ReadLine(); str2=str1.replace(search,jexpr?replFunc:replace); if (!alterations || str1!=str2) WScript.Stdout.WriteLine(str2); if (str1!=str2) rtn=0; } } } catch(e) { WScript.Stderr.WriteLine("JScript runtime error: "+e.message); rtn=3; } WScript.Quit(rtn); function replFunc($0, $1, $2, $3, $4, $5, $6, $7, $8, $9, $10) { var $=arguments; return(eval(replace)); } IMPORTANT UPDATE I have ceased development of REPL.BAT, and replaced it with JREPL.BAT. This newer utility has all the same functionality of REPL.BAT, plus much more: * *Unicode UTF-16LE support via native CSCRIPT unicode capabilities, and any other character set (including UTF-8) via ADO. *Read directly from / write directly to a file: no need for pipes, redirection, or move command. *Incorporate user supplied JScript *Translation facility similar to unix tr, only it also supports regex search and JScript replace *Discard non-matching text *Prefix output lines with line number *and more... As always, full documentation is embedded within the script. 
The original trivial solution is now even simpler: jrepl "foo" "bar" /f test.txt /o - The current version of JREPL.BAT is available at DosTips. Read all of the subsequent posts in the thread to see examples of usage and a history of the development. A: BatchSubstitute.bat on dostips.com is an example of search and replace using a pure batch file. It uses a combination of FOR, FIND and CALL SET. Lines containing characters among "&<>]|^ may be treated incorrectly. A: A PowerShell command works like a charm: (Get-Content test.txt | ForEach-Object { $_ -replace "foo", "bar" } | Set-Content test2.txt) A: Two batch files that supply search and replace functions have been written by Stack Overflow members dbenham and aacini using the native built-in JScript in Windows. They are both robust and very swift with large files compared to plain batch scripting, and also simpler to use for basic replacing of text. They both have Windows regular expression pattern matching. * *This sed-like helper batch file is called repl.bat (by dbenham). Example using the L literal switch: echo This is FOO here|repl "FOO" "BAR" L echo and with a file: type "file.txt" |repl "FOO" "BAR" L >"newfile.txt" *This grep-like helper batch file is called findrepl.bat (by aacini). Example which has regular expressions active: echo This is FOO here|findrepl "FOO" "BAR" echo and with a file: type "file.txt" |findrepl "FOO" "BAR" >"newfile.txt" Both become powerful system-wide utilities when placed in a folder that is on the path, or can be used in the same folder with a batch file, or from the cmd prompt. They both have case-insensitive switches and also many other functions. A: I prefer to use sed from GNU utilities for Win32; the following needs to be noted: * *single quote '' won't work in Windows, use "" instead *sed -i won't work in Windows, it will need file swapping So the working code of sed to find and replace text in a file in Windows is as below sed -e "s/foo/bar/g" test.txt > tmp.txt && mv tmp.txt test.txt A: Use FNR Use the fnr utility. It's got some advantages over FART: * *Regular expressions *Optional GUI. Has a "Generate command line button" to create command line text to put in a batch file. *Multi-line patterns: The GUI allows you to easily work with multi-line patterns. In FART you'd have to manually escape line breaks. *Allows you to select the text file encoding. Also has an auto detect option. Download FNR here: http://findandreplace.io/?z=codeplex Usage example: fnr --cl --dir "<Directory Path>" --fileMask "hibernate.*" --useRegEx --find "find_str_expression" --replace "replace_string" A: A lot of the answers here helped point me in the right direction, however none were suitable for me, so I am posting my solution. I have Windows 7, which comes with PowerShell built-in. Here is the script I used to find/replace all instances of text in a file: powershell -Command "(gc myFile.txt) -replace 'foo', 'bar' | Out-File -encoding ASCII myFile.txt" To explain it: * *powershell starts up powershell.exe, which is included in Windows 7 *-Command "... " is a command line arg for powershell.exe containing the command to run *(gc myFile.txt) reads the content of myFile.txt (gc is short for the Get-Content command) *-replace 'foo', 'bar' simply runs the replace command to replace foo with bar *| Out-File myFile.txt pipes the output to the file myFile.txt *-encoding ASCII prevents transcribing the output file to Unicode, as the comments point out Powershell.exe should be part of your PATH statement already, but if not you can add it.
The location of it on my machine is C:\WINDOWS\system32\WindowsPowerShell\v1.0 UpdateApparently modern windows systems have PowerShell built in allowing you to access this directly using (Get-Content myFile.txt) -replace 'foo', 'bar' | Out-File -encoding ASCII myFile.txt A: May be a little bit late, but I am frequently looking for similar stuff, since I don't want to get through the pain of getting software approved. However, you usually use the FOR statement in various forms. Someone created a useful batch file that does a search and replace. Have a look here. It is important to understand the limitations of the batch file provided. For this reason I don't copy the source code in this answer. A: Use powershell in .bat - for Windows 7+ encoding utf8 is optional, good for web sites @echo off set ffile='myfile.txt' set fold='FOO' set fnew='BAR' powershell -Command "(gc %ffile%) -replace %fold%, %fnew% | Out-File %ffile% -encoding utf8" A: Just faced a similar problem - "Search and replace text within files", but with the exception that for both filenames and search/repalce I need to use regex. Because I'm not familiar with Powershell and want to save my searches for later use I need something more "user friendly" (preferable if it has GUI). So, while Googling :) I found a great tool - FAR (Find And Replace) (not FART). That little program has nice GUI and support regex for searching in filenames and within files. Only disadventage is that if you want to save your settings you have to run the program as an administrator (at least on Win7). A: For me, to be sure to not change the encoding (from UTF-8), keeping accents... the only way was to mention the default encoding before and after : powershell -Command "(gc 'My file.sql' -encoding "Default") -replace 'String 1', 'String 2' | Out-File -encoding "Default" 'My file.sql'" A: I don't think there's a way to do it with any built-in commands. I would suggest you download something like Gnuwin32 or UnxUtils and use the sed command (or download only sed): sed -c s/FOO/BAR/g filename A: I know I am late to the party.. Personally, I like the solution at: - http://www.dostips.com/DtTipsStringManipulation.php#Snippets.Replace We also, use the Dedupe Function extensively to help us deliver approximately 500 e-mails daily via SMTP from: - https://groups.google.com/forum/#!topic/alt.msdos.batch.nt/sj8IUhMOq6o and these both work natively with no extra tools or utilities needed. REPLACER: DEL New.txt setLocal EnableDelayedExpansion For /f "tokens=* delims= " %%a in (OLD.txt) do ( Set str=%%a set str=!str:FOO=BAR! echo !str!>>New.txt ) ENDLOCAL DEDUPLICATOR (note the use of -9 for an ABA number): REM DE-DUPLICATE THE Mapping.txt FILE REM THE DE-DUPLICATED FILE IS STORED AS new.txt set MapFile=Mapping.txt set ReplaceFile=New.txt del %ReplaceFile% ::DelDupeText.bat rem https://groups.google.com/forum/#!topic/alt.msdos.batch.nt/sj8IUhMOq6o setLocal EnableDelayedExpansion for /f "tokens=1,2 delims=," %%a in (%MapFile%) do ( set str=%%a rem Ref: http://www.dostips.com/DtTipsStringManipulation.php#Snippets.RightString set str=!str:~-9! set str2=%%a set str3=%%a,%%b find /i ^"!str!^" %MapFile% find /i ^"!str!^" %ReplaceFile% if errorlevel 1 echo !str3!>>%ReplaceFile% ) ENDLOCAL Thanks! A: @Rachel gave an excellent answer but here is a variation of it to read content to a powershell $data variable. You may then easily manipulate content multiple times before writing to a output file. Also see how multi-line values are given in a .bat batch files. 
@REM ASCII=7bit ascii(no bom), UTF8=with bom marker set cmd=^ $old = '\$Param1\$'; ^ $new = 'Value1'; ^ [string[]]$data = Get-Content 'datafile.txt'; ^ $data = $data -replace $old, $new; ^ out-file -InputObject $data -encoding UTF8 -filepath 'datafile.txt'; powershell -NoLogo -Noninteractive -InputFormat none -Command "%cmd%" A: If you are on Windows version that supports .Net 2.0, I would replace your shell. PowerShell gives you the full power of .Net from the command line. There are many commandlets built in as well. The example below will solve your question. I'm using the full names of the commands, there are shorter aliases, but this gives you something to Google for. (Get-Content test.txt) | ForEach-Object { $_ -replace "foo", "bar" } | Set-Content test2.txt A: Just used FART ("F ind A nd R eplace T ext" command line utility): excellent little freeware for text replacement within a large set of files. The setup files are on SourceForge. Usage example: fart.exe -p -r -c -- C:\tools\perl-5.8.9\* @@APP_DIR@@ C:\tools will preview the replacements to do recursively in the files of this Perl distribution. Only problem: the FART website icon isn't exactly tasteful, refined or elegant ;) Update 2017 (7 years later) jagb points out in the comments to the 2011 article "FARTing the Easy Way – Find And Replace Text" from Mikail Tunç As noted by Joe Jobs in the comments (Dec. 2020), if you want to replace &A for instance, you would need to use quotes in order to make sure & is not interpreted by the shell: fart in.txt "&A" "B" A: When you work with Git on Windows then simply fire up git-bash and use sed. Or, when using Windows 10, start "Bash on Ubuntu on Windows" (from the Linux subsystem) and use sed. Its a stream editor, but can edit files directly by using the following command: sed -i -e 's/foo/bar/g' filename * *-i option is used to edit in place on filename. *-e option indicates a command to run. * *s is used to replace the found expression "foo" with "bar" and g is used to replace any found matches. Note by ereOn: If you want to replace a string in versioned files only of a Git repository, you may want to use: git ls-files <eventual subfolders & filters> | xargs sed -i -e 's/foo/bar/g' which works wonders. A: I have used perl, and that works marvelously. perl -pi.orig -e "s/<textToReplace>/<textToReplaceWith>/g;" <fileName> .orig is the extension it would append to the original file For a number of files matching such as *.html for %x in (<filePattern>) do perl -pi.orig -e "s/<textToReplace>/<textToReplaceWith>/g;" %x A: I played around with some of the existing answers here and prefer my improved solution... type test.txt | powershell -Command "$input | ForEach-Object { $_ -replace \"foo\", \"bar\" }" or if you want to save the output again to a file... type test.txt | powershell -Command "$input | ForEach-Object { $_ -replace \"foo\", \"bar\" }" > outputFile.txt The benefit of this is that you can pipe in output from any program. Will look into using regular expressions with this too. Couldn't work out how to make it into a BAT file for easier use though... :-( A: Replace - Replace a substring using string substitution Description: To replace a substring with another string use the string substitution feature. The example shown here replaces all occurrences "teh" misspellings with "the" in the string variable str. 
set str=teh cat in teh hat echo.%str% set str=%str:teh=the% echo.%str% Script Output: teh cat in teh hat the cat in the hat ref: http://www.dostips.com/DtTipsStringManipulation.php#Snippets.Replace A: With the replacer.bat 1) With e? option that will evaluate special character sequences like \n\r and unicode sequences. In this case will replace quoted "Foo" and "Bar": call replacer.bat "e?C:\content.txt" "\u0022Foo\u0022" "\u0022Bar\u0022" 2) Straightforward replacing where the Foo and Bar are not quoted. call replacer.bat "C:\content.txt" "Foo" "Bar" A: Here's a solution that I found worked on Win XP. In my running batch file, I included the following: set value=new_value :: Setup initial configuration :: I use && as the delimiter in the file because it should not exist, thereby giving me the whole line :: echo --> Setting configuration and properties. for /f "tokens=* delims=&&" %%a in (config\config.txt) do ( call replace.bat "%%a" _KEY_ %value% config\temp.txt ) del config\config.txt rename config\temp.txt config.txt The replace.bat file is as below. I did not find a way to include that function within the same batch file, because the %%a variable always seems to give the last value in the for loop. replace.bat: @echo off :: This ensures the parameters are resolved prior to the internal variable :: SetLocal EnableDelayedExpansion :: Replaces Key Variables :: :: Parameters: :: %1 = Line to search for replacement :: %2 = Key to replace :: %3 = Value to replace key with :: %4 = File in which to write the replacement :: :: Read in line without the surrounding double quotes (use ~) :: set line=%~1 :: Write line to specified file, replacing key (%2) with value (%3) :: echo !line:%2=%3! >> %4 :: Restore delayed expansion :: EndLocal A: This is one thing that batch scripting just does not do well. The script morechilli linked to will work for some files, but unfortunately it will choke on ones which contain characters such as pipes and ampersands. VBScript is a better built-in tool for this task. See this article for an example: http://www.microsoft.com/technet/scriptcenter/resources/qanda/feb05/hey0208.mspx A: Download Cygwin (free) and use unix-like commands at the Windows command line. Your best bet: sed A: Can also see the Replace and ReplaceFilter tools at https://zoomicon.github.io/tranXform/ (source included). The 2nd one is a filter. The tool that replaces strings in files is in VBScript (needs Windows Script Host [WSH] to run in old Windows versions) The filter is probably not working with Unicode unless you recompile with latest Delphi (or with FreePascal/Lazarus) A: Powershell Command - Getting content of the file and replacing it with some other text and then storing into another file Command -1 (Get-Content filename.xml)| ForEach-Object { $_.replace("some_text","replace_text").replace("some_other_text","replace_text") } | Set-Content filename2.xml Copying another file into the original one file Command2 Copy-Item -Path filename2.xml -Destination filename.xml -PassThru removing another one file Command 3 Remove-Item filename2.xml A: I'm the author of Aba Search and Replace that you can use from Windows command line. It can do batch replacements without any user interaction and also replace text in multiple files, not only just one file. You are welcomed to try my tool; I will be happy to answer any questions. A: I have faced this problem several times while coding under Visual C++. If you have it, you can use Visual studio Find and Replace Utility. 
It allows you to select a folder and replace the contents of any file in that folder with any other text you want. Under Visual Studio: Edit -> Find and Replace. In the dialog that opens, select your folder and fill in the "Find What" and "Replace With" boxes. Hope this will be helpful.
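If you would rather avoid installing anything and have a C# compiler handy (csc.exe ships with the .NET Framework), a tiny hand-rolled console tool in the same spirit as the helper scripts above will also do the job. This is only a minimal sketch: the arguments are placeholders, it reads the whole file into memory, and File.WriteAllText writes UTF-8 by default, so pass an explicit Encoding if preserving the original encoding matters.

// replace.cs - compile with: csc replace.cs
// Usage: replace <file> <find> <replaceWith>
using System;
using System.IO;

class Replace
{
    static int Main(string[] args)
    {
        if (args.Length != 3)
        {
            Console.Error.WriteLine("Usage: replace <file> <find> <replaceWith>");
            return 2;
        }

        string text = File.ReadAllText(args[0]);          // whole file in memory
        string replaced = text.Replace(args[1], args[2]); // plain substring replace, no regex

        if (replaced != text)
            File.WriteAllText(args[0], replaced);

        return 0;
    }
}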
{ "language": "en", "url": "https://stackoverflow.com/questions/60034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "589" }
Q: C# NetCDF Library I am currently working on (or at least planning) a couple of projects that work with large amounts of repetitive data. The kind of data that works well in a spreadsheet or database, but is nasty in XML. :) NetCDF seems like a great option for a file format. However, my work is in C# and there is no "official" NetCDF implementation available. There is an "official" Python version that I could potentially use with IronPython. Another option is the "NetCDF Library for .Net" on CodePlex but it seems pretty quiet (http://www.codeplex.com/netcdf). Has anyone worked with either of these libraries? Can you provide any recommendations? A: First, are you sure that NetCDF is the right choice? If you want to interact with other programs that need to read in large amounts of data and they already support NetCDF, then it's probably a great choice. There aren't that many standard and well-supported file formats that support large multidimensional arrays. But if you're only going to be reading and writing files from C#, it may not be such a good choice. I'm a big fan of the "classic" NetCDF file format. It's compact and extremely simple, but flexible enough to support lots of common kinds of multidimensional well-structured data. It only took me one day to write a complete parser for classic NetCDF, and it only took an hour to write a program to output a well-formed special case of a classic NetCDF file. You could implement a pure C# NetCDF library yourself and it wouldn't be much trouble. You could easily start by implementing only the features you need. Here's the specification. Unfortunately, NetCDF-4 chose to use HDF-5 as its data format. It adds a lot of complexity and makes it much more difficult to write a complete NetCDF parser in another language. HDF-5 is very general-purpose and in my opinion, it was overengineered - it tries to be too many things to too many people. I would not recommend trying to work with it directly unless you plan to spend a month writing unit tests. If you must use netCDF-4 / HDF-5 from C#, your only realistic option would be to wrap the C library using SWIG or something like that. Note that NetCDF for Python is just a wrapper around the C code, so it's not really all that helpful; if you're going to use a wrapped C library you may as well just write a C# wrapper rather than use Python as a middle layer. A: And now Microsoft has released a newer library for netCDF, available via NuGet: https://www.nuget.org/packages/SDSLite Scientific DataSet Lite 1.4.0 This is a cross platform library for manipulating netCDF, CSV and TSV files. A: I'm adding this now because this was the top answer when I Googled about this topic. ETA Per the reply below, there is ANOTHER Microsoft NetCDF library now available: https://www.nuget.org/packages/SDSLite Scientific DataSet Lite 1.4.0 This is a cross platform library for manipulating netCDF, CSV and TSV files. Since this question was originally asked and answered, Microsoft has released a Scientific DataSet Library that has support for NetCDF http://research.microsoft.com/en-us/downloads/ccf905f6-34c6-4845-892e-a5715a508fa3/ Project Description The SDS library makes it easy for .Net developers to read, write and share scalars, vectors, matrices and multidimensional grids which are very common in scientific modelling. It supports CSV, NetCDF and other file format Programs that use the library store related data and associated metadata in a compact self-describing package. 
Libraries come with a set of utilities and packages: the sds command-line utility, the DataSet Viewer application and an add-in for Microsoft Excel 2007 (and later versions). See the Release page for details. A: In a project we are using the ucar NetCDF implementation in C# using IKVM. IKVM can be used to 'convert' Java projects into .NET libraries without needing a Java VM. I have not done any performance checks, but it is a simple way to get NetCDF in C# :). http://www.ikvm.net/stories.html http://www.unidata.ucar.edu/downloads/netcdf/netcdf-java-4/index.jsp
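Following the suggestion above about hand-rolling a parser for the classic format, here is a minimal C# sketch that does nothing more than sniff the header, just to show how approachable the format is. It is not a full reader, the "test.nc" file name is only a placeholder, and the layout details (the 'CDF' magic bytes, a version byte of 1 or 2, then numrecs as a big-endian 32-bit integer) come from the classic-format specification referenced in the answer above.

using System;
using System.IO;

class NetCdfSniffer
{
    static void Main()
    {
        using (FileStream stream = File.OpenRead("test.nc"))
        using (BinaryReader reader = new BinaryReader(stream))
        {
            // Classic NetCDF starts with 'C', 'D', 'F' followed by a version byte:
            // 1 = classic format, 2 = 64-bit offset variant.
            byte[] magic = reader.ReadBytes(4);
            if (magic.Length < 4 || magic[0] != 'C' || magic[1] != 'D' || magic[2] != 'F')
                throw new InvalidDataException("Not a classic NetCDF file.");

            // numrecs comes next: a 4-byte big-endian signed integer
            // (every integer in the classic format is big-endian).
            byte[] raw = reader.ReadBytes(4);
            Array.Reverse(raw); // BinaryReader reads little-endian on .NET
            int numRecs = BitConverter.ToInt32(raw, 0);

            Console.WriteLine("Classic NetCDF, version byte {0}, numrecs {1}", magic[3], numRecs);
        }
    }
}

From here the dimension, attribute and variable lists follow the same tag-plus-count pattern described in the specification, which is what makes a from-scratch C# reader a realistic short project.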
{ "language": "en", "url": "https://stackoverflow.com/questions/60039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Google Maps in Flex Component I'm embedding the Google Maps Flash API in Flex and it runs fine locally with the watermark on it, etc. When I upload it to the server (flex.mydomain.com) I get a sandbox security error listed below: SecurityError: Error #2121: Security sandbox violation: Loader.content: http://mydomain.com/main.swf?Fri, 12 Sep 2008 21:46:03 UTC cannot access http://maps.googleapis.com/maps/lib/map_1_6.swf. This may be worked around by calling Security.allowDomain. at flash.display::Loader/get content() at com.google.maps::ClientBootstrap/createFactory() at com.google.maps::ClientBootstrap/executeNextFrameCalls() Does anyone have any experience with embedding the Google Maps Flash API into Flex components and specifically settings security settings to make this work? I did get a new API key that is registered to my domain and am using that when it's published. I've tried doing the following in the main application as well as the component: Security.allowDomain('*') Security.allowDomain('maps.googleapis.com') Security.allowDomain('mydomain.com') A: This sounds like a crossdomain.xml related problem. I did a quick search and there seems to be many people with the same issue. Some proxy requests through XMLHttpRequest etc.. Issue 406: Add crossdomain.xml for Google Accounts A: Thanks for the help. Apparently this has something to do with including the Flex app on an ASP.NET page. When I moved it over to a flat HTML file, it worked fine. I don't have time to fully investigate right now, but that seems to have fixed it.
{ "language": "en", "url": "https://stackoverflow.com/questions/60046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Java sound recording and mixer settings I'm using the javax.sound.sampled package in a radio data mode decoding program. To use the program the user feeds audio from their radio receiver into their PC's line input. The user is also required to use their mixer program to select the line in as the recording input. The trouble is some users don't know how to do this and also sometimes other programs alter the recording input setting. So my question is how can my program detect if the line in is set as the recording input ? Also is it possible for my program to change the recording input setting if it detects it is incorrect ? Thanks for your time. Ian A: To answer your first question, you can check if the Line.Info object for your recording input matches Port.Info.LINE_IN like this: public static boolean isLineIn(Line.Info lineInfo) { Line.Info[] detected = AudioSystem.getSourceLineInfo(Port.Info.LINE_IN); for (Line.Info lineIn : detected) { if (lineIn.matches(lineInfo)) { return true; } } return false; } However, this doesn't work with operating systems or soundcard driver APIs that don't provide the type of each available mixer channel. So when I test it on Windows it works, but not on Linux or Mac. For more information and recommendations, see this FAQ. Regarding your second question, you can try changing the recording input settings through a Control class. In particular, see FloatControl.Type for some common settings. Keep in mind that the availability of these controls depends on the operating system and soundcard drivers, just like line-in detection.
{ "language": "en", "url": "https://stackoverflow.com/questions/60049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Referencing back to the parent from a child object My question is pertaining to the best practice for accessing a child object's parent. So let's say a class instantiates another class, that class instance is now referenced with an object. From that child object, what is the best way to reference back to the parent object? Currently I know of a couple ways that I use often, but I'm not sure if A) there is a better way to do it or B) which of them is the better practice The first method is to use getDefinitionByName, which would not instantiate that class, but allow access to anything inside of it that was publicly declared. _class:Class = getDefinitionByName("com.site.Class") as Class; And then reference that variable based on its parent to child hierarchy. Example, if the child is attempting to reference a class that's two levels up from itself: _class(parent.parent).function(); This seems to work fine, but you are required to know the level at which the child is at compared to the level of the parent you are attempting to access. I can also get the following statement to trace out [object ClassName] into Flash's output. trace(Class); I'm not 100% on the implementation of that line, I haven't persued it as a way to reference an object outside of the current object I'm in. Another method I've seen used is to simply pass a reference to this into the class object you are creating and just catch it with a constructor argument var class:Class = new Class(this); and then in the Class file public function Class(objectRef:Object) { _parentRef = objectRef; } That reference also requires you to step back up using the child to parent hierarchy though. I could also import that class, and then use the direct filepath to reference a method inside of that class, regardless of its the parent or not. import com.site.Class; com.site.Class.method(); Of course there the parent to child relationship is irrelevant because I'm accessing the method or property directly through the imported class. I just feel like I'm missing something really obvious here. I'm basically looking for confirmation if these are the correct ways to reference the parent, and if so which is the most ideal, or am I over-looking something else? A: In general, if you need a child to communicate with a parent, you should look at having it do so by broadcasting events. This decouples the child for the parent, and makes it possible to have other classes work with the child. I would not recommend passing in a reference to the parent class into the child. Here is a a simple example (I have tested / compiled this so there may be some typos). //Child.as package { import flash.events.EventDispatcher; import flash.events.Event; public class Child extends EventDispatcher { public function doSomething():void { var e:Event = new Event(Event.COMPLETE); dispatchEvent(e); } public function foo():void { trace("foo"); } } } //Parent.as package { import flash.display.Sprite; import flash.events.Event; public class Parent extends Sprite { private var child:Child; public function Parent():void { c = new Child(); c.addEventListener(Event.COMPLETE, onComplete); c.foo();//traces foo c.doSomething() } public function onComplete(e:Event):void { trace("Child broadcast Event.COMPLETE"); } } } In most cases, you would dispatch custom events and pass data with them. Basically: Parent has reference to Child and communicates via method calls. Child does not have reference to Parent, and communicates (to anyone) via dispatching events. hope that helps... 
mike chambers [email protected] A: It's generally good to have the class as its own instance and reduce tight coupling to something else (as in this case, its parent). If you do something like parent.doSomething() it's not possible to use that class in a container that doesn't have the doSomething() method. I think it's definitely better to pass in whatever the class may need, and then inside the class it doesn't have to do any parent.parent etc. anymore. With this, if you want to change the structure in the future, it's very easy to just pass in a new reference; the implementation of the child class doesn't have to change at all. The third alternative you have here is also very different: it's accessing a class-level static method (you don't have to type the whole class path when accessing that method), not an instance method as in the first two. A: I've always used your second method, passing a pointer to the parent object to the child and storing that pointer in a member variable in the child class. To me that seems to be the simplest method for the child to communicate back to the parent. A: I like to pass the parent as an interface (so the class can be contained in any parent implementing that interface) or implement the reference as an event/function pointer/delegate which the parent can hook onto. A: I like setting up a global class to handle references to classes that need to be accessed by any other class, not necessarily the child. The global class simply consists of static getters and setters like so: private static var _class:Class; public static function setClass(value:Class):void { _class = value; } public static function getClass():Class { return _class; } The nice thing about this is that you don't have to import the class you are returning from the global class, just the global class itself. The other cool thing is that you can easily dispatch events from this centralized location. Most of the time, if a class needs to be referenced by child classes or any other class, I do it through an interface. A: If these objects are in the DisplayList, then you have some more options. If I have a ParentClass and a ChildClass, in the child class, you seem to be able to access the parent if you cast the request as the ParentClass. e.g. ParentClass(parent).parentFunction(); I know for sure it works if the ParentClass is the Document Class. As the Document Class is always the first item in the display list, this works: ParentClass(stage.getChildAt(0)).parentFunction(); In my case, they were both members of the same package, so I did not even need to import anything. I haven't tested it in all circumstances, but it worked when I needed it to. Of course 'parent' and 'getChild...' only work if these objects are in the DisplayList, but that's been good enough for me.
{ "language": "en", "url": "https://stackoverflow.com/questions/60051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there a child's PID? I am currently in an operating systems class and my teacher spent half of the class period talking about PIDs. She mentioned, as many know, that processes know their parent's ID. My question is this: Does a process's PCB know its child's ID? If so, what is the way to go about obtaining it? A: As far as I know a process doesn't have an explicit list of its children's PIDs, but it can easily be built, since a process should know which child processes it spawns. For example, the UNIX fork() call returns the child PID in the parent process and 0 in the child process, and CreateProcess() on Windows returns (IIRC) the PID of the new process created. A: When you use fork() on *nix, the return value is the PID of the child in the parent process, and 0 in the child process. That's one way to find out. Not sure if they keep track of the "tree" of process spawning, I think it depends on what OS you use, but since when you kill bash (or any shell), all running children are also killed, I think UNIX-like systems do keep track of this. A: If you're using Linux or anything which implements the Unix APIs, when a process calls fork() to create a child process, the parent receives the child PID as the return code, or -1 if the fork failed. The child process gets a zero return code. A: A process's PCB does know its child's ID. As we know, fork() is used to create processes. It takes no arguments and returns a process ID. After a new child process is created, both parent and child will execute the next instruction following the fork(), so we have to distinguish the parent from the child. This can be done by testing the return value of fork(): If fork() returns a negative value, the creation of the child process was unsuccessful. If fork() returns zero, we are in the newly created child process. If fork() returns a positive value, that value is the process ID of the child process, returned to the parent process.
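To make the Windows side of this concrete in .NET terms, here is a minimal C# sketch; the child executable name is just a placeholder, and the point is only that the creating process is handed the child's ID at spawn time and has to keep its own list if it wants to look the children up later, since there is no built-in "list my children" call.

using System;
using System.Collections.Generic;
using System.Diagnostics;

class ParentDemo
{
    static void Main()
    {
        List<int> childPids = new List<int>();

        Console.WriteLine("Parent PID: " + Process.GetCurrentProcess().Id);

        // Process.Start returns a Process object whose Id property is the
        // child's PID, much like fork() returning the PID to the parent.
        for (int i = 0; i < 2; i++)
        {
            Process child = Process.Start("notepad.exe"); // placeholder child
            childPids.Add(child.Id);
            Console.WriteLine("Spawned child PID: " + child.Id);
        }

        // The parent only "knows" its children because it recorded them itself.
        foreach (int pid in childPids)
            Console.WriteLine("Recorded child: " + pid);
    }
}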
{ "language": "en", "url": "https://stackoverflow.com/questions/60070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SQL Error OLE.INTEROP I'm getting an error whenever I load Management Studio or open a folder in the Server Explorer, etc. Additionally, if I try to create a new database, it is constantly updating and does not finish. I have attached a screenshot of the error. Please let me know what I can do to fix this because it's really aggravating. Error Screen http://frickinsweet.com/databaseError.gif A: From the MSDN forum http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=120476&SiteID=1 A: My first guess would be Client Tool corruption. I've occasionally had to uninstall my client tools and reinstall them. Reboot after the uninstall. A: I had to add the registry file AND re-run "regsvr32 actxprxy.dll". This was a really odd and painful error. It only seemed to come into existence after installing VS SP1, but I really don't see why that would have happened.
{ "language": "en", "url": "https://stackoverflow.com/questions/60076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why would breakpoints in VS2008 stop working? I have a C# ASP.NET web app. Breakpoints in the database layer are no longer stopping execution but the breakpoints in the UI layer are still working okay. Can anyone hazard a guess why this might be happening? I've checked all the usual suspects (Debug build is on for all projects) and recompiled all projects in the solution... A: I would ensure that the UI layer is referencing the appropriate 'debug' .dlls. I'd also consider pressing CTRL+ALT+U (Modules View) when you're debugging to see if symbols are loaded for your BLL and DAL .dlls. If not, then Visual Studio is unable to find the .PDBs for those files. Are the .PDB debug files in the same directory as the .dlls that are being referenced, as shown in the Modules window? A: * *Attach the debugger to the ASP.NET process and click on the Modules window. Ensure that debugging symbols are loaded for the assemblies you want to debug. *Make sure the UI is referencing the debug assemblies, not the release assemblies. *Make sure the .PDB files are in the /bin/debug/ directory *Make sure you rebuild the entire solution before attaching the debugger. *If the data tier is in a separate solution, add the project to the UI SLN (you don't need to add a reference; those should already be established or your code wouldn't compile), so that the debugger can pull up the full code. A: Thanks for the responses and ideas guys - I had already tried all of those or variations of them. I think that it must be a very subtle VS bug. A colleague suggested I make the function that I was trying to break on public (previously "undefined" so it was implicitly private) and try again. I did this and the breakpoint started to get hit. I then removed the public keyword and the breakpoint continued to be hit. No idea why this solved it, but it did. Thanks for your help! A: A couple of suggestions. The first one is to check the status of the breakpoint on the source line. Is it a solid red ball? If not, it generally indicates that the file in question isn't the one used for the build. Secondly, have a look at the Modules view and see what modules and symbols have been loaded. You may find it's not what you expect. As for why - I've no idea! Nick A: Have you tried deleting your bin directories before recompiling? A: I had the same issue and kept thinking "what did I change in web.config" to potentially do this? <location path="." inheritInChildApplications="false"> That was not allowing breakpoints to work for me.
{ "language": "en", "url": "https://stackoverflow.com/questions/60093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: "Could not load type" in web service converted to VB.NET I wrote a simple web service in C# using SharpDevelop (which I just got and I love). The client wanted it in VB, and fortunately there's a Convert To VB.NET feature. It's great. Translated all the code, and it builds. (I've been a "Notepad" guy for a long time, so I may seem a little old-fashioned.) But I get this error when I try to load the service now. Parser Error Message: Could not load type 'flightinfo.Soap' from assembly 'flightinfo'. Source Error: Line 1: <%@ WebService Class="flightinfo.Soap,flightinfo" %> I have deleted the bins and rebuilt, and I have searched google (and stackoverflow). I have scoured the project files for any remnants of C#. Any ideas? A: In VB.NET, namespace declarations are relative to the default namespace of the project. So if the default namespace for the project is set to X.Y, everithyng between Namespace Z and End Namespace will be in the X.Y.Z namespace. In C# you have to provide the full namespace name, regardless of the default namespace of the project. So if the C# project had the default namespace X.Y, the CS files would still include the namespace X.Y declaration. After converting to VB, if both the default namespace and the namespace declarations in the files stay the same you end up with classes in the X.Y.X.Y namespace. So in your case, the Soap class is now in the flightinfo.flightinfo namespace. Thus there are three possible solutions: * *change the asmx file to *remove the default namespace from the project *remove the namespace declarations from the vb files A: <%@ WebService Class="flightinfo.Soap,flightinfo" %> What is the name of your class? A: The problem may be cause by VB.NET & C# projects using different naming conventions for project assemblies and how the project namespace is used. At least that's were I would start looking.
{ "language": "en", "url": "https://stackoverflow.com/questions/60098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I get a particular labeled version of a folder in Borland StarTeam? I'm about to perform a bunch of folder moving operations in StarTeam (including some new nesting levels) and I would like to set a label so that I can roll back in case of issues. I figured out how to set a label on a folder and all its children, but I couldn't figure out how to get the version of that folder corresponding to that particular label. It seems like labels are tied to the files themselves and not the folders/folder structure. A: I've switched to Subversion and FogBugz, so I am rusty on StarTeam. I think you need a View Label. * *From the View menu, select Labels... to open the Labels dialog. *On the View tab, click the New... button to open the View Label dialog. *Type in the label name as "Release 1.2.3.4", check Frozen, and hit OK. To get back to that state, * *From the View menu, select Select Configuration... to open the Select a View Configuration dialog. *Select Labeled configuration, and pick "Release 1.2.3.4" You can then create a new view from the view label if you want to branch off. See the Help file > Working with StarTeam > Managing Views. Here's a quote from Configuring a View: By default, a view has a current configuration – that is, it displays the latest revisions of the items in the project. However, you can roll back a view to a past state based on a label, promotion state, or a point in time.
{ "language": "en", "url": "https://stackoverflow.com/questions/60099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: In C# (or any language) what is/are your favourite way of removing repetition? I've just coded a 700-line class. Awful. I hang my head in shame. It's as opposite to DRY as a British summer. It's full of cut and paste with minor tweaks here and there. This makes it a prime candidate for refactoring. Before I embark on this, I thought I'd ask: when you have lots of repetition, what are the first refactoring opportunities you look for? For the record, mine are probably using: * *Generic classes and methods *Method overloading/chaining. What are yours? A: I like to start refactoring when I need to, rather than at the first opportunity that I get. You might say this is somewhat of an agile approach to refactoring. When do I feel I need to? Usually when I feel that the ugly parts of my code are starting to spread. I think ugliness is okay as long as it is contained, but the moment it starts having the urge to spread, that's when you need to take care of business. The techniques you use for refactoring should start with the simplest. I would strongly recommend Martin Fowler's book. Combining common code into functions, removing unneeded variables, and other simple techniques get you a lot of mileage. For list operations, I prefer using functional programming idioms. That is to say, I use internal iterators, map, filter and reduce (in Python speak; there are corresponding things in Ruby, Lisp and Haskell) whenever I can; this makes code a lot shorter and more self-contained. A: #region I made a 1,000-line class only one line with it! In all seriousness, the best way to avoid repetition is the things covered in your list, as well as fully utilizing polymorphism: examine your class and discover what would best be done in a base class, and how different components of it can be broken away as subclasses. A: Sometimes by the time you "complete functionality" using copy and paste code, you've come to a point where it is maimed and mangled enough that any attempt at refactoring will actually take much, much longer than refactoring it at the point where it was obvious. In my personal experience my favorite "way of removing repetition" has been the "Extract Method" functionality of Resharper (although this is also available in vanilla Visual Studio). Many times I would see repeated code (some legacy app I'm maintaining) not as whole methods but in chunks within completely separate methods. That gives a perfect opportunity to turn those chunks into methods. Monster classes also tend to reveal that they contain more than one functionality. That in turn becomes an opportunity to separate each distinct functionality into its own (hopefully smaller) class. I have to reiterate that doing all of these is not a pleasurable experience at all (for me), so I really would rather do it right while it's a small ball of mud, rather than let the big ball of mud roll and then try to fix that. A: First of all, I would recommend refactoring much sooner than when you are done with the first version of the class. Anytime you see duplication, eliminate it ASAP. This may take a little longer initially, but I think the results end up being a lot cleaner, and it helps you rethink your code as you go to ensure you are doing things right. As for my favorite way of removing duplication... Closures, especially in my favorite language (Ruby). They tend to be a really concise way of taking 2 pieces of code and merging the similarities. Of course (like any "best practice" or tip), this cannot be blindly done...
I just find them really fun to use when I can use them. A: One of the things I do is try to make small and simple methods that I can see on a single page in my editor (Visual Studio). I've learnt from experience that making code simple makes it easier for the compiler to optimise it. The larger the method, the harder the compiler has to work! I've also recently seen a problem where large methods have caused a memory leak. Basically I had a loop very much like the following: while (true) { var smallObject = WaitForSomethingToTurnUp(); var largeObject = DoSomethingWithSmallObject(); } I was finding that my application was keeping a large amount of data in memory because even though 'largeObject' wasn't in scope until smallObject returned something, the garbage collector could still see it. I easily solved this by moving the 'DoSomethingWithSmallObject()' call and other associated code to another method. Also, if you make small methods, your reuse within a class will become significantly higher. I generally try to make sure that none of my methods look like any others! Hope this helps. Nick A: "cut and paste with minor tweaks here and there" is the kind of code repetition I usually solve with an entirely non-exotic approach: take the similar chunk of code and extract it out to a separate method. The little bit that is different in every instance of that block of code, change that to a parameter. There are also some easy techniques for removing repetitive-looking if/else if and switch blocks, courtesy of Scott Hanselman: http://www.hanselman.com/blog/CategoryView.aspx?category=Source+Code&page=2 A: I might go something like this: Create custom (private) types for data structures and put all the related logic in there. Dictionary<string, List<int>> etc. Make inner functions or properties that guarantee behaviour. If you're continually checking conditions from a publicly accessible property then create a private getter method with all of the checking baked in. Split methods apart that have too much going on. If you can't put something succinct into the method summary or give it a good name, then start breaking the function apart until the code is manageable (even if these "child" functions aren't used anywhere else). If all else fails, slap a [SuppressMessage("Microsoft.Maintainability", "CA1502:AvoidExcessiveComplexity")] on it and comment why.
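As a rough illustration of the Extract Method approach mentioned above, here is a minimal C# sketch; the class and method names are invented for the example and are not taken from any answer:

// Hypothetical example: two copy-and-paste blocks that differed only in a
// prefix, collapsed into one parameterized helper via Extract Method.
public class LineFormatter
{
    // Before the refactoring, both public methods trimmed, upper-cased and
    // padded their input with nearly identical code.
    public string FormatCustomerLine(string name)
    {
        return FormatLine("CUST", name);
    }

    public string FormatSupplierLine(string name)
    {
        return FormatLine("SUPP", name);
    }

    // After the refactoring, the shared chunk lives in one place and the
    // "minor tweak" (the prefix) becomes a parameter.
    private static string FormatLine(string prefix, string value)
    {
        string cleaned = (value ?? string.Empty).Trim().ToUpperInvariant();
        return prefix + ": " + cleaned.PadRight(30);
    }
}

Turning the varying bit into a parameter like this is usually the first and cheapest win when collapsing copy-and-paste code; generics and overloads, as the question suggests, tend to come after that.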
{ "language": "en", "url": "https://stackoverflow.com/questions/60100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Good challenges/tasks/exercises for learning or improving object oriented programming (OOP) skills What is a good challenge to improve your skills in object oriented programming? The idea behind this poll is to provide an idea of which exercises are useful for learning OOP. The challenge should be as language agnostic as possible, requiring either little or no use of specific libraries, or only the most common of libraries. Try to include only one challenge per answer, so that a vote will correspond to the merits of that challenge alone. It would also be nice if the level of skill required was indicated, along with the rationale behind why it is a useful exercise. Solutions to the challenges could then be posted as answers to a "How to..." question and linked to from here. For example: * *Challenge - implement a last-in-first-out stack *Skill level - beginner *Rationale - gives experience of how to reference objects A: Challenge: Write a wrapper for your web site/service API of choice in your language of choice, that doesn't already exist (ex. a ZenDesk API wrapper written in C#). Release the wrapper as open source for others to use. Skill Level: Beginner to Intermediate Rationale: To learn how to extrapolate a 3rd party web service API into a meaningful set of objects/classes, making the reuse of that API easier in your chosen language. A: Building Skills in Object-Oriented Design is a free book that might be of use. The description is as follows: "The intent of this book is to help the beginning designer by giving them a sequence of interesting and moderately complex exercises in OO design. This book can also help managers develop a level of comfort with the process of OO software development. The applications we will build are a step above trivial, and will require some careful thought and design. Further, because the applications are largely recreational in nature, they are interesting and engaging. This book allows the reader to explore the processes and artifacts of OO design before project deadlines make good design seem impossible." A: After you have learned the basics, study the "Gang of Four" design patterns book. http://www.amazon.com/Design-Patterns-Object-Oriented-Addison-Wesley-Professional/dp/0201633612/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1221488916&sr=8-1 This is a classic, and a must-read for any coder who wants to understand how to use OO to design elegant solutions to common coding problems. A: Take a piece of code written in a procedural style and try to transform it into an OOP-based solution. During the process, consult a book on refactoring and design patterns. A friend of mine was able to make a huge step forward in understanding object oriented concepts exactly this way. As with anything, this might not work for everyone. A: I have found CRC cards to be quite effective in learning, teaching and building good OO design. A: Write a challenging program from scratch. Try to get some people (around five, that should be doable) to use it. Respond to their change requests. Adapt your program's design. Start small, then watch it grow. Manage this growth. This is hard. You will also have to fix bugs and maintain the thing over time, which for me was a very valuable lesson. A: Certainly a good challenge, although less accessible than a "start from scratch" assignment, is to refactor some existing code that either doesn't use inheritance or doesn't use very much of it to make greater use of inheritance. 
The process of refactoring will expose a lot of the benefits and gotchas of OOP, as it certainly has for me on my most recent project. It also pushed me to understand the concepts better than past projects have where I've created my own object oriented designs. A: A given task has very little to do with being "OOP"; it's more in how you grade it. I would look at the Refactoring book, chapter 3, and make sure none of the bad code smells exist in the solution. Or, more importantly, go over the ones that do apply. Most importantly, watch for the existence of setters and getters (indicating that you are operating on values from a class and not asking the class to operate on its own values)--or using "extends" without applying the Liskov Substitution Principle, stuff like that.
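To make the beginner-level stack challenge from the question concrete, here is one possible C# sketch. It is only one of many valid designs, and the names are invented for the example:

using System;

// One take on the "implement a last-in-first-out stack" exercise. The point
// of the exercise is practicing object references: each node keeps a
// reference to the node that was pushed before it.
public class LifoStack<T>
{
    private Node top;            // null when the stack is empty

    private class Node
    {
        public T Value;
        public Node Below;       // the previous top of the stack
    }

    public void Push(T value)
    {
        top = new Node { Value = value, Below = top };
    }

    public T Pop()
    {
        if (top == null)
        {
            throw new InvalidOperationException("The stack is empty.");
        }
        T value = top.Value;
        top = top.Below;
        return value;
    }
}

Grading an implementation like this against the criteria in the last answer (code smells, how state is encapsulated, whether inheritance is used sensibly) is arguably more instructive than writing it.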
{ "language": "en", "url": "https://stackoverflow.com/questions/60109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "87" }
Q: HTTPS with Visual Studio's built-in ASP.NET Development Server Is there a way to access Visual Studio's built-in ASP.NET Development Server over HTTPS? A: Cassini does not support HTTPS. However, you can use IIS to do this. If you're using IIS 5.1, download the MS IIS toolpack for IIS 6.0; it comes with a tool to create self-signed SSL certificates. This works on IIS 5.1: http://www.microsoft.com/downloads/details.aspx?FamilyID=56fc92ee-a71a-4c73-b628-ade629c89499&DisplayLang=en The one tool you need (SelfSSL.exe) works just fine with IIS 5.1. Unfortunately, it comes bundled with a bunch of other stuff. Steps: * *Install the IIS Tools File. If you want, you can click Custom and tell it to only install SelfSSL. *Activate the site in IIS that you want to install an SSL certificate on. *Go to Start / Programs / IIS Resources / SelfSSL *This will launch a command prompt in the SelfSSL directory. *Using the provided help, run SelfSSL. The command I used was: selfssl.exe /N:cn=[MACHINENAME] /K:1024 /V:90 /S:5 /P:443 *The /S switch indicates which site to install the certificate on. You can figure out the number by looking at your sites in IIS and counting (starting at 1 for the first site, not 0) until you reach the site you want. *Once this has run, browse to your localhost over HTTPS. *You should receive an error message stating that this certificate is from an untrusted source. You can either add your machine name to the browser's "Trusted Authorities" list, or you can tell the browser to ignore this. At this point, you will be able to run your localhost over HTTPS. A: Select the project file in the Solution Explorer, for example "WebApplication1". Press ALT+ENTER to open the project properties. Select "DEBUG" on the left side. Here you can select "Enable SSL". Then you can start your project with IIS Express normally and it will start using SSL; the new port will be 44301. A: As of now we can use IIS Express to develop and test in SSL. Here is a complete article explaining how to use IIS Express and Visual Studio 2010 to develop websites in SSL: Working with SSL at Development Time is easier with IISExpress. See also: Introducing IIS Express. A: Wilco Bauwer wrote a webdev server that supports HTTPS. He is one of the developers who worked on Cassini, the Visual Studio 2005 built-in web server. WebDev.WebServer2
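If the untrusted-certificate error also bites you when calling the local HTTPS endpoint from test code rather than a browser, one common development-only workaround is to relax certificate validation for the test run. This is a sketch that assumes the site is reachable at https://localhost:44301/ as described above; never leave this in production code:

using System;
using System.Net;

class DevHttpsSmokeTest
{
    static void Main()
    {
        // Development-only: accept the self-signed certificate generated by
        // SelfSSL or IIS Express so requests to https://localhost succeed.
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) => true;

        var request = (HttpWebRequest)WebRequest.Create("https://localhost:44301/");
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Status: " + response.StatusCode);
        }
    }
}

A safer variant is to inspect the certificate in the callback and only accept the specific development certificate, but for a local smoke test the blanket override above is the usual shortcut.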
{ "language": "en", "url": "https://stackoverflow.com/questions/60113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "114" }
Q: Silverlight Install Base - How big is it? Silverlight v2.0 is getting closer and closer to RTM but I have yet to hear any stats as to how many browsers are running Silverlight. If I ask Adobe (by googling "Flash install base") they're only too happy to tell me that 97.7% of browsers are running Flash player 9 or better. Not that I believe everything I read, but where are these statistics from Microsoft or some other vendor about Silverlight? I'm going to be making a technology choice soon and a little bit of empirical evidence would be an asset at this point... All you Silverlight developers out there, show me your stats! A: If you are developing something for a general audience, I would highly recommend against Silverlight as you immediately cut out Linux users. I went to watch videos for the Olympics (and I run exclusively Linux), and I couldn't watch the video on their site because they were in Silverlight. On top of that, they actively removed all videos from YouTube, so I had no alternative but to try and scrounge up a Windows boot. This only served to give me a very negative opinion of NBC, and made me consider them quite amateurish to pick such a restricting technology for something that should be available for everyone. While Flash has its problems, it works fine in Linux, so I would say (at this point) it is a much superior technology choice. If you KNOW your audience is entirely on Windows (maybe Mac).... then you can consider Silverlight, knowing you won't be cutting out part of your audience. A: If you're that concerned about locking out potential users, you should be building a low-bandwidth HTML-only version of your site anyway... regardless of whether you use Flash or Silverlight. A: I struggled with this for a while. Ultimately, I chose to develop my site using Silverlight for the major components. I did a good bit of research, and I reached the following bottom-line conclusion: If Silverlight fails, it will not be for lack of installed base. There are simply too many levers for MS to pull (Windows Update, embedding it in IE8, or even paying highly trafficked sites to use it). I will add this from Alexa - microsoft.com has pretty impressive daily reach and it uses SL on the main page. I would also not be surprised at all if Outlook Web Access is moved to Silverlight - thereby turning every single Office Outlook user who wants to access email from home/other into a roaming SL installer. Alexa Link comparing microsoft.com/ebay.com/amazon.com I will add this from ScottGu's blog entry: In addition to powering the Olympics experience in the US, Silverlight was also used in France (by FranceTV), the Netherlands (by NOS), Russia (by Sportbox.ru) and Italy (by RAI). In addition to video quality, a big reason behind these broadcasters' decision to use Silverlight was the TCO and streaming cost difference Silverlight provided. In the August 2008 edition of Web Designer Magazine (a Dutch publication) a NOS representative reported that they were able to serve 100,000 concurrent users using Silverlight and 40 Windows Media Servers, whereas it would have required 270 servers if they had used Flash Media Servers. Over the last month we've seen several major new deployments of Silverlight for media scenarios. For example: CBS College Sports is now using Silverlight to stream NCAA events from its 170 partner colleges and universities. Blockbuster is replacing Flash with Silverlight for its MovieLink application. 
And Netflix two weeks ago rolled out its new Instant Watch service using Silverlight. A: At the 2009 Microsoft Professional Developers Conference, Scott Guthrie said that Silverlight was installed on "45% of the world's Internet-connected devices" http://www.betanews.com/article/PDC-2009-Live-from-the-Day-2-keynote/1258561992 (quote taken from the "9:28am PT" entry) A: Quick Answer: www.riastats.com This site compares the different RIA plugins using graphical charts and graphs. It gets its data from small snippets of JavaScript running on sites across the web (approx. 400,000 last time I looked). At the time of this post, Silverlight 2 was sitting at close to 11%. I would not take this as the end-all, be-all in RIA stats, but it's the best site I've found so far. A: This was the weekly poll over on CP a few weeks back. Out of the 1463 developers responding, approx. 62% had Silverlight installed on at least one system. So... if you're making a site targeted at Windows developers... and don't mind locking out a third of your potential market... A: I haven't been able to get stats. I'd assume they might release some at PDC in late October. If you're building a site which needs to target a non-developer audience who won't want to install another plugin, you might want to wait for Silverlight. I have done a good amount of testing with Moonlight on Linux, and it works well for sites which either use Silverlight 1.0 functionality (pretty much 100% supported) or happen to use the Silverlight 2.0 bits which Moonlight currently supports. The caveat is that some websites explicitly check the user agent and won't offer content if you're not on a "supported" platform. That's poor website coding, not a fault of the Silverlight plugin. A: During the keynote @ ReMIX UK, when ScottGu gave the figure of 1.5 million installs/day, I was sat next to Andrew Shorten, one of the Adobe platform evangelists (and also a good chum). He was telling me Adobe have independent evidence of an AVERAGE of 12 million installs a day, with over 40 million downloads. It would appear 1.5 million is a tiny amount of what it could be. A: Well, 6 million watched the Olympics on NBC, which used a Silverlight player. So at least 6 million. I've never seen exact stats, but you can be pretty certain that it is pretty small still. Also, there is an implementation of Silverlight for Linux called Moonlight. A: I think an interesting stat comes from this site itself. Have a look at how many Silverlight questions there are! And how many responses - it's not the most active topic! A: I think you'll see a dramatic increase in the Silverlight install base after Silverlight 2.0 officially comes out. Right now it's still in beta. Silverlight 1.0 is out and runs quite well from what I've seen in Moonlight on Linux, but it's much harder to create full-scale applications for than version 2.0. According to Microsoft, Moonlight will be "100% compatible" at release time. See Scott Guthrie's blog (note: 2.0 was called 1.1 at the time). Nick R, as for the fact that there isn't much Silverlight activity on these forums, I think the biggest reason for that is the very active community on the silverlight.net forums. A: Scott Guthrie said (at Remix UK, Sept 18 2008) that Silverlight is currently downloaded 1.5 million times per day. Over 115 million downloads since the version 1 release. The version 1 installed base will automatically update to version 2 when it is out of beta. A: Wow! Scott said the same thing at Mix in February 08 about run rate - 1.5m. 
So it seems that a daily run rate of 1.5m per day for 6 months would add 270m installs to the installed base. So their numbers are not exactly clear in their meaning. If one assumes the 115m installed base is correct, then it implies a run rate of around 700k per day in the six months since SL2. Of course, many users are upgrading versions, B1 to B2 as an example. Either way, it is gaining some steady installs. It would be nice to see the run rate improve. By the 2nd quarter of next year, it should be dramatically higher due to v2 shipment, application/web site adoption, pre-installation on various computers (like HP) and any unannounced distribution mechanisms. A: While in general I support the idea of developing a site using Silverlight, and feel that, depending on your audience, you should not have too much trouble getting users to download the plug-in, I would caution you against assuming that Microsoft will release the plugin built into IE or as part of Windows Update. I have had two separate Microsoft Technology Evangelists tell me that the company is reluctant to do that due to anti-trust reasons. This was over a year ago and their strategy has probably evolved since then, but it was enough to make me not count on that as an option for greater market penetration. A: Don't forget that the Silverlight 2 install base will never include PPC Mac users. It doesn't look like the Moonlight people are targeting them at all, despite the heroic effort to add PIC streaming for Silverlight 1.0 users for the Obama inauguration. A: The larger question is how many users your site will lose if implemented in Silverlight. And, it very much depends on your audience. If you're running a site about the joys of Linux kernel hacking or the virtues of Internet security, you'll probably lose a significant chunk of your audience. If you're running a more mainstream site, my experience is that, sadly, people will download anything they're told to most of the time. That's why spyware and malware work. And, as the NBC/Olympics deal shows, Microsoft will aggressively push its partners to use Silverlight until it's fairly ubiquitous. I won't be using Silverlight until it's more mature because I do cater to a fair number of Linux users, but I might for a less technically-oriented site.
{ "language": "en", "url": "https://stackoverflow.com/questions/60121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Select rows in dataset table based on other dataset table I have a dataset that has two tables in it. I want to do the following (or something like it); is it possible, and do I have the syntax right? dsTabData.Tables("FilingTabs").Select("fs_ID not in (select fsp_fsid from ParentTabs)") How do you reference data from another table in the same dataset? A: ok ok, before y'all flame me! ;) I did some more looking around online and found what looks like the stuff I need, now off to read some more from here: Navigating a Relationship Between Tables
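For reference, here is roughly what the relationship-based approach looks like once you have read that material. This is a C# sketch (the question's snippet is VB-flavored, but the same ADO.NET types apply) that reuses the column names from the question; the relation name is made up:

using System;
using System.Data;

class Example
{
    // Prints the fs_ID of every FilingTabs row that has no matching
    // ParentTabs row, i.e. "fs_ID not in (select fsp_fsid from ParentTabs)".
    static void PrintUnparentedFilings(DataSet dsTabData)
    {
        DataRelation relation = dsTabData.Relations.Add(
            "FilingToParent",
            dsTabData.Tables["FilingTabs"].Columns["fs_ID"],     // parent column
            dsTabData.Tables["ParentTabs"].Columns["fsp_fsid"],  // child column
            false);                                              // don't create constraints

        foreach (DataRow filing in dsTabData.Tables["FilingTabs"].Rows)
        {
            if (filing.GetChildRows(relation).Length == 0)
            {
                Console.WriteLine(filing["fs_ID"]);
            }
        }
    }
}

DataTable.Select does not understand the SQL sub-select syntax in the question, which is why a DataRelation (or a manual lookup) is the usual route.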
{ "language": "en", "url": "https://stackoverflow.com/questions/60122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Technical issues when switching to an unmanaged Virtual Private Server (VPS) hosting provider? I'm considering moving a number of small client sites to an unmanaged VPS hosting provider. I haven't decided which one yet, but my understanding is that they'll give me a base OS install (I'd prefer Debian or Ubuntu), an IP address, a root account, SSH, and that's about it. Ideally, I would like to create a complete VM image of my configured setup and just ship those bits to the provider. Has anyone had any experience with this? I've seen Jeff talk about something like this in Coding Horror. But I'm not sure if his experience is typical. I suppose it also depends on the type of VM server used by the host. Also, do such hosts provide reverse-DNS? That's kinda useful for sites that send out e-mails. I know GMail tends to bounce anything originating from a server without it. Finally, I'd probably need multiple IP addresses as at least a couple of the sites have SSL protection which doesn't work with name-based virtual hosts. Has anyone run into trouble with multiple IPs through VPS? I wouldn't think so, but I've heard whisperings to the contrary. A: Slicehost (referral link, if you so choose) offers reverse DNS, multiple IPs ($2/month/IP), Ubuntu/Debian (along with others). The only criterion it doesn't meet is the ship-a-VM one, but it does let you clone VMs you've set up in their system via snapshots. You could thus set it up once, then copy that VM as many times as you like. If that's a sacrifice you're willing to make, I highly recommend them - they've had great customer service the few times I've needed to contact them, decent rates, and a great admin backend. A: I like XenPlanet; their prices seem to be comparable, but they also allow you to purchase extras like added disk space. Not sure if they let you buy additional bandwidth. I have used them for a number of different machines and found their service to be very good.
{ "language": "en", "url": "https://stackoverflow.com/questions/60137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Best way to determine the number of servers needed How much traffic can one web server handle? What's the best way to see if we're beyond that? I have an ASP.Net application that has a couple hundred users. Aspects of it are fairly processor intensive, but thus far we have done fine with only one server to run both SqlServer and the site. It's running Windows Server 2003, 3.4 GHz with 3.5 GB of RAM. But lately I've started to notice slowdowns at various times, and I was wondering what's the best way to determine if the server is overloaded by the usage of the application or if I need to do something to fix the application (I don't really want to spend a lot of time hunting down little optimizations if I'm just expecting too much from the box). A: What you need is some info on Capacity Planning. Capacity planning is the process of planning for growth and forecasting peak usage periods in order to meet system and application capacity requirements. It involves extensive performance testing to establish the application's resource utilization and transaction throughput under load. First, you measure the number of visitors the site currently receives and how much demand each user places on the server, and then you calculate the computing resources (CPU, RAM, disk space, and network bandwidth) that are necessary to support current and future usage levels. A: If you have access to some profiling tools (such as those in the Team Suite edition of Visual Studio) you can try setting up a testing server and running some synthetic requests against it, and see if there's any specific part of the code taking unreasonably long to run. You should probably check some graphs of CPU and memory usage over time before doing this, to see if it can even be that. (A number akin to the UNIX "load average" could be a useful metric; I don't know if Windows has anything like it. Basically the average number of threads that want CPU time for every time-slice.) Also check the obvious: that you aren't running out of bandwidth. A: Measure, measure, measure. Rico Mariani always says this, and he's right. Measure req/sec, RAM, CPU, sessions, etc. You may come up with a caching strategy (output caching, data caching, caching dependencies, and so on). See also how your SQL Server is doing... indexes are a good place to start, but not the only thing to look at. A: On that hardware, a .NET application should be able to serve about 200-400 requests per second. If you have only a few hundred users, I doubt you are seeing even 2 requests per second, so I think you have a lot of capacity on that box, even with SQL Server running. Without knowing all of the details, I would say no, you will not see any performance improvement by adding servers. By the way, if you're not using the Output Cache, I would start there.
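Since two of the answers point at the Output Cache, here is what the cheapest win tends to look like. Page-level output caching in Web Forms is a one-line directive such as <%@ OutputCache Duration="60" VaryByParam="None" %>; for data caching, a sketch along these lines might apply (the report name and loader method are invented stand-ins for whatever expensive query the page currently runs on every request):

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class ReportCache
{
    // Returns the cached report if present; otherwise loads it once and
    // keeps it for five minutes so every request doesn't hit SQL Server.
    public static DataTable GetMonthlyReport()
    {
        Cache cache = HttpRuntime.Cache;
        DataTable report = (DataTable)cache["MonthlyReport"];
        if (report == null)
        {
            report = LoadMonthlyReportFromDatabase();
            cache.Insert("MonthlyReport", report,
                         null,                           // no cache dependency
                         DateTime.UtcNow.AddMinutes(5),  // absolute expiration
                         Cache.NoSlidingExpiration);
        }
        return report;
    }

    private static DataTable LoadMonthlyReportFromDatabase()
    {
        // Placeholder for the real data-access code.
        return new DataTable();
    }
}

Even a short expiration like this can remove a large share of the repeated database work on a single-server setup, which is worth ruling out before adding hardware.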
{ "language": "en", "url": "https://stackoverflow.com/questions/60142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }