Q: C++ STL question: allocators I have a (potentially dumb) question about the C++ STL. When I make a container (vector, set, map, etc), is it allocated on the stack or on the heap? If I make a set and put 5 million strings, will I have to worry about a stack overflow?
A: STL classes by default allocate their internal buffers from the heap, although these classes also allow custom allocators that allow a user to specify an alternate location to allocate from - e.g. a shared memory pool.
A: The default allocator for STL containers uses operator new and delete, so it's whatever those route to for the type being contained. (In general, it comes from the heap unless you do something to override that.)
You will not get a stack overflow from allocating 5 million strings. Even if you made a stack-based allocator, it would probably overflow before you even inserted one string.
A: The container itself is allocated wherever you decide (it can be the stack, the heap, an object's member, etc.), but the memory it uses is, by default, as others described, taken from the Free Store (managed through new and delete), which is not the same as the heap (managed through malloc/free).
Don't mix the two!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do you find what debug switches are available? Or given a switch find out what is being disabled? In this question the answer was to flip on a switch that is picked up by the debugger, disabling the extraneous header that was causing the problem. The Microsoft help implies these switches are user-generated and does not list any switches.
<configuration>
<system.diagnostics>
<switches>
<add name="Remote.Disable" value="1" />
</switches>
</system.diagnostics>
</configuration>
What I would like to know is where the value "Remote.Disable" comes from and how find out what other things can be switched on or off. Currently it is just some config magic, and I don't like magic.
A: As you suspected, Remote.Disable stops the app from attaching debug info to remote requests. It's defined inside the .NET framework methods that make the SOAP request.
The basic situation is that these switches can be defined anywhere in code; you just need to create a new System.Diagnostics.BooleanSwitch with the given name, and the config file can control it.
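As an illustration, here is a minimal sketch (not from the original answer) of defining and checking your own switch; the name "MyApp.VerboseLogging" is hypothetical and just has to match the name attribute in the config file:
using System.Diagnostics;

class MyComponent
{
    // The name passed here is what <add name="..." value="1" /> keys on.
    private static readonly BooleanSwitch VerboseLogging =
        new BooleanSwitch("MyApp.VerboseLogging", "Enables verbose logging.");

    public void DoWork()
    {
        // Enabled reflects the configured value (false if no entry exists).
        if (VerboseLogging.Enabled)
            Trace.WriteLine("Verbose: doing work...");
    }
}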
This particular one is defined in System.ComponentModel.CompModSwitches.DisableRemoteDebugging:
public static BooleanSwitch DisableRemoteDebugging
{
get
{
if (disableRemoteDebugging == null)
{
disableRemoteDebugging = new BooleanSwitch("Remote.Disable", "Disable remote debugging for web methods.");
}
return disableRemoteDebugging;
}
}
In your case it's probably being called from System.Web.Services.Protocols.RemoteDebugger.IsClientCallOutEnabled(), which is called by System.Web.Services.Protocols.WebClientProtocol.NotifyClientCallOut, which is in turn called by the Invoke method of System.Web.Services.Protocols.SoapHttpClientProtocol.
Unfortunately, to my knowledge, short of decompiling the framework & searching for
new BooleanSwitch
or any of the other inheritors of the System.Diagnostics.Switch class,
there's no easy way to know what switches are defined. It seems to be a case of searching MSDN/Google/Stack Overflow for the specific case.
In this case I just used Reflector & searched for the Remote.Disable string.
A: You can use Reflector to search for uses of the Switch class and its subclasses (BooleanSwitch, TraceSwitch, etc). The various switches are hardcoded by name, so AFAIK there's no master list somewhere.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Visual Studio Hosting Process and "The operation could not be completed" When trying to execute your application from within Visual Studio 2008, you get the (uninformative) message "The operation could not be completed".
The solution to this is to turn off the "Visual Studio Hosting Process".
The problem with turning off this "hosting process" is that all the "run and rewrite" functionality is no longer available. OK, so this isn't a big deal, but I'm always getting this message no matter what machine I use (and it might be nice once in a while to use the rewrite and execute functionality).
Am I doing something wrong? How come this "feature" within VS seems to complain so readily? Do other people have success with enabling the hosting process and making use of it?
A:
The problem with turning off this "hosting process" is that all the "run and rewrite" functionality is no longer available.
The Visual Studio Hosting Process is not needed to allow Edit and Continue. It is used for "design-time expression evaluation" in the case where the project is a DLL rather than an EXE. It is also used to provide debugging for partial-trust scenarios. See the documentation for everything it does.
It is highly unlikely it does anything you need, so don't feel bad turning it off.
A: Is your project output folder set to a network share?
If so, try changing it to a local folder and see what happens. It appears that VS is not always able to terminate the process if the host exe is running from a share.
The other possibility is that the project is open and running in debug mode in another instance of Visual Studio - although I suspect you will already have ensured this is not the case.
A: I honestly have never seen this message and I work with Visual Studio for at least 8 hours a day. Is this reproducible on other machines? If so is there anything weird or abnormal in your code that could cause this to crash?
A: I use 4 different machines and have got this situation on all of them. I understand what is causing the problem - the VS hosting process isn't terminating after the first debug session ends, which means that the next time you try to compile the exe, the hosting process still has the exe locked and prevents compilation. Another solution therefore is to use Task Manager to kill the VS hosting process and compile and debug as normal, but that's even more of a hassle!
I can't think of anything in my code that would be causing this - it's probably a VS issue itself, isn't it?
A: Here's the answer: disable "Enable the Visual Studio hosting process" in the Debug tab of your project's properties.
I found it here:
http://social.msdn.microsoft.com/Forums/en-US/vbide/thread/40d2d241-a0c0-4137-9da9-e40611972c0e/
A: There are several causes of and workarounds for this problem; you might try the following ones, which are useful most of the time:
Delete the "Your_Solution_FileName.suo" file and restart Visual Studio.
or
Right-clicking on the project, selecting Unload Project, and then right-clicking on it again and selecting Reload Project might also fix it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: ASP.Net RadioButton visibility inside a RadioButtonList Is there a way to hide radio buttons inside a RadioButtonList control programmatically?
A: Under the hood, you can access the attributes of the item and assign it a CSS style.
So you should be able to then programmatically assign it by specifying:
RadioButtonList.Items(1).Attributes.CssStyle.Add("visibility", "hidden")
and get the job done.
A: Here's how you have to apply a style attribute to a listitem:
RadioButtonList.Items(1).Attributes.Add("style", "display:none")
- OR -
RadioButtonList.Items(1).Attributes.Add("style", "visibility:hidden")
A: Why not add and remove the radio buttons as needed?
RadioButtonList.Items.Add("Item Name" or index);
RadioButtonList.Items.Remove("Item Name" or index);
A: Try This:
RadioButtonList.Items.Remove(RadioButtonList.Items.FindByValue("3"));
A: If you mean with JavaScript, and if I remember correctly, you've got to dig out the ClientID properties of each <input type="radio" ...> tag.
A: Have you tried hiding it through the ItemDataBound event on load, or do you need it to hide after it loads?
A: I haven't tested it, but I'd assume (for C#)
foreach(ListItem myItem in rbl.Items)
{
if(whatever condition)
myItem.Attributes.Add("style", "visibility:hidden");
}
A: Another option is to hide the item by removing it from the layout entirely.
Try this code:
RadioButtonList.Items(1).Attributes.CssStyle.Add("display", "none");
Unlike visibility:hidden, display:none leaves no empty space where the item would have been.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Scripting SVG I'm considering developing a website similar to stackoverflow, but the answers may also consist of drawings (schematics, in this case). I want to have an area in the answer form where they can make this schematic without requiring special plugins, etc.
*
*Are we to the point where SVG has, or should have, critical mass soon (1-2 years) such that designing a website with script-driven SVG as a primary feature is reasonable (i.e., requiring Firefox or another SVG/AJAX-compliant browser)?
*What are some good resources for learning cross platform SVG scripting (likely in javascript)?
-Adam Davis
A: Raphael looks like an interesting take on the problem of cross-browser vector graphics.
A: Unfortunately, I don't have an answer, but I do have three pointers to projects that you could look at.
The first is the Lively Kernel by Dan Ingalls (yes, the Dan Ingalls) at Sun Labs. It is an implementation of a Smalltalk Virtual World in JavaScript on top of SVG. More precisely, it is an implementation of the Morphic GUI framework from Squeak Smalltalk in JavaScript using SVG and a port of (parts of) Squeak Smalltalk in JavaScript.
Or, if you're not a Smalltalker and the above doesn't make sense to you: it's an Operating System, written in JavaScript with the JavaScript interpreter as the CPU, SVG as the graphics card and the browser as the computer.
This is about as extreme as it gets, when it comes to JavaScript and SVG. And it only fully works in Safari 3 and partly in Firefox 3, although there is an experimental port to Internet Explorer as well.
The second project is John Resig's Processing.js port of the Processing visualization language to JavaScript. It uses the <canvas> element instead of SVG precisely because of the problems that you mentioned. This one however, only works in Firefox 3.
The third one is Real-Time 3D in JavaScript by Useless Pickles. It uses only JavaScript, DOM and CSS and no SVG or <canvas> or Flash or whatever. And it is portable to almost any browser, including Internet Explorer 7 and up. Doing 2D should be even easier than this.
Between those three projects you should be able to find some inspiration and also to find some people who tried to push the envelope with JavaScript and SVG or JavaScript and Graphics and can tell you what works and what doesn't.
Conclusion: doing cross-browser SVG or cross-browser <canvas> is nigh impossible, but with a little bit of craziness, cross-browser graphics without SVG or <canvas> is possible.
A: SVGWeb is a script that adds near-native SVG capabilities to IE using flash. All the other major browsers support SVG.
http://code.google.com/p/svgweb/
A: 1/ Probably never - if IE wanted to add it, then I would have thought it would have done so by now; but there are workarounds using Silverlight and Gecko to provide rendering. On the other hand, there are cross-browser graphics APIs available. I've done largish front ends using XULRunner and SVG, but nothing on the web which had to cater for IE.
2/ The two I referred to most often were the SVG pages on mozilla.org and this SVG DOM reference. All of my SVG links are here on Delicious.
There's one existing editor at http://www.bpel4chor.org/editor/; also, if all you want is schematics where all arcs are on a grid, you can do that quite well using divs and images without SVG. Or you could just go the lo-fi route.
A: As @jwmittag mentioned, <canvas> is an option.
It works in Safari, Firefox 3, and Opera 9, and people are developing support for IE.
You could easily capture mouse clicks associated with the current tool and properties, redrawing the canvas on every page display.
I just finished a project using <canvas> and it's a simple and very powerful API to work with, especially if you have ever done any OpenGL or Cairo work.
Good Luck, sounds like a cool project.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Looking up document library items in a SharePoint workflow I'm using SharePoint Designer to create a workflow. I'm trying to get at a sub-folder in a document library in the "Define Workflow Lookup" dialog. There are two issues with this:
*
*I can't look up items by URL Path. If I look up by Title, I can output the URL Path, but selecting by path doesn't work. What fields can/can't I use?
*I can't get at any sub-folders. I can get at the top-level folder, but the sub-folders don't seem to be available. Noticed the same thing is true when looking at the data for a document library in the "Data Source Library" in Designer.
To clarify, the workflow is on a different list, not on the document library.
UPD: Also, I know how to do this through the object model, the question is how to do it in SharePoint Designer without deploying code to the server.
A: I really don't have much experience with Sharepoint, but I thought I could at least provide some answer - even if it's the wrong one.
From another dev I've spoken to it sounds like it's tough to get into any subfolders, so you might need to look at making your own custom workflow.
Maybe something like LINQ to Sharepoint might be able to help you with actually getting in and enumerating the subfolders and getting to the data that you need? LINQ to Sharepoint
A: The issue is that "folders" are not really folders as they are accessed by querystring, not a "/" as with real folders.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Best Ways to Debug a Release Mode Application I'm sure this has happened to folks before: something works in debug mode, you compile in release, and something breaks.
This happened to me while working in an Embedded XP environment; the best way I found to debug it really was to write a log file to determine where it would go wrong.
What are your experiences/discoveries in trying to tackle an annoying release-mode bug?
A: Make sure you have good debug symbols available (you can do this even with a release build, even on embedded devices). You should be able to get a stack trace and hopefully the values of some variables. A good knowledge of assembly language is probably also useful at this point.
My experience is that generally the bug is related to code that is near the area of breakage. That is to say, if you are seeing an issue arising in the function "LoadConfigInfoFromFile" then probably you should start by closely analysing that for issues, rather than "DrawControlsOnScreen", if you know what I mean. "Spooky action at a distance" type bugs do not tend to arise often (although when they do, they tend to be a major bear).
A: A trace file is always a good idea.
When it's about crashes, I'm using adplus, which is part of Debugging Tools for Windows. Basically, what adplus does is attach windbg to the executable you're monitoring. When the application crashes, you get a crash dump and a log file. You can load the crash dump in your preferred debugger and find out which instruction led to the crash.
As release builds are heavily optimized compared to debug builds, the way you compile your code affects its behaviour. This is basically true when crashes in multithreaded code happen in the release version but not the debug version. adplus and windbg helped me find out where this happened.
ADPlus is explained here:
http://support.microsoft.com/?scid=kb%3Ben-us%3B286350&x=15&y=12
Basically what you have to do is:
*
*Download and install WinDbg into C:\debuggers (http://www.microsoft.com/whdc/devtools/debugging/default.mspx)
*Start your application
*Open a cmd and cd to c:\debuggers
*Start adplus like this: "adplus.bat -crash your_exe.exe"
*Reproduce the crash
*Analyze the crash dump in VS2005 or in WinDbg
A: If it's only a small portion of the application that needs debugging then you can change those source files only to be built without optimisations. Presumably you generate debug info for all builds, and so this makes the application run mostly as it would in release, but allows you to debug the interesting parts properly.
A: How about using Trace statements? They are there for release-mode value checking.
Trace.WriteLine(myVar);
A: I agree on log file debugging to narrow it down.
I've used "Entering FunctionName" and "Leaving FunctionName" messages until I can find what method it enters before the crash. Then I add more log messages, re-compile, and re-release.
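If you do this in many places, a small disposable helper can write the Entering/Leaving pairs for you. This is just a sketch of the idea, not the original poster's code:
using System;
using System.Diagnostics;

// Writes "Entering X" on construction and "Leaving X" on Dispose,
// so a using block brackets the method body automatically.
sealed class TraceScope : IDisposable
{
    private readonly string _name;

    public TraceScope(string name)
    {
        _name = name;
        Trace.WriteLine("Entering " + _name);
    }

    public void Dispose()
    {
        Trace.WriteLine("Leaving " + _name);
    }
}

// Usage inside a suspect method:
// using (new TraceScope("LoadConfigInfoFromFile")) { /* method body */ }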
A: Besides playing with turning off optimization and/or turning on debug information for your Release build as pauldoo said, a log file with good data can really help. I once wrote a "trace" app that would capture trace logs for the app if it was running when the release build started (otherwise the results would go to the debugger's output window if running under the debugger). I was able to have end-users email me log files from them reproducing the bugs they were seeing, and it was the only way I would have found the problem in at least one case.
A: Though it's probably not usable in an embedded environment, I've had good luck with WinDbg for debugging release-mode Windows applications. Even if the application is not compiled with symbol information, you can at least get a usable stack trace and plenty of other useful crash information.
A: You could also copy your debug symbols to the production environment even if it's compiled in release mode.
Here's an article with more information
A: If your problem is synchronization-related, dumping a log to a file might be problematic.
In this case I usually use a big array of strings and dump it to screen/file after the problem has been reproduced.
This of course depends on your memory restrictions; sometimes I store just a few symbols and numbers in the array if memory on the platform is limited. Reading such logs is not a big pleasure, but sometimes it is the only choice.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Best use pattern for a DataContext What's the best lifetime model for a DataContext? Should I just create a new one whenever I need it (aka, function level), should I keep one available in each class that would use it (class level), or should I create a static class with a static DataContext (app-domain level)? Are there any considered best practices on this?
A: You pretty much need to keep the same data context available throughout the lifetime of the operations you want to perform if you're ever going to be storing changes which are to be .SubmitChanges()'d later, as otherwise you will lose those changes.
If you're just querying stuff then it's fine to create them as needed, but then if later you want to .SubmitChanges() you'll have to refactor your code a lot, so you may as well adopt the pattern of effectively keeping the datacontext global throughout your app from the beginning.
Note the data context is disconnected. The connection is only made when the query data is enumerated (not when you first run the query, it's a 'lazy' data type so only provides data when it's needed), and then closed immediately afterwards. On .SubmitChanges() the connection is opened to submit the changes then closed immediately afterwards. So don't think keeping the datacontext around keeps a connection open, it doesn't (you can hook the StateChange event of the connection to confirm this for yourself, that's how I'm sure).
There is a great article over at Rick Strahl's Blog which covers this topic in depth, far more than my answer here provides!!
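To make that lifetime concrete, here is a minimal unit-of-work sketch; MyDataContext and Customer are hypothetical LINQ to SQL types, and the point is simply that one context spans the whole read-modify-save cycle:
using System.Linq;

static void RenameCustomer(int id, string newName)
{
    using (MyDataContext db = new MyDataContext())
    {
        Customer customer = db.Customers.Single(c => c.Id == id);
        customer.Name = newName;   // this context tracked the read...
        db.SubmitChanges();        // ...so it can persist the change here
    }
}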
A: I think Jeff Atwood talked about this in the Herding Code podcast, when he was questioned about the exact same thing. Listen to it towards the last 15-20 minutes or so.
I think in SO, the datacontext is created in the Controller class. Not sure about a lot of details here. But that's what it looked like.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I filter nodes of TreeView and Menu controls with sitemap data sources based on user permissions? I'm using the ASP.NET Login Controls and Forms Authentication for membership/credentials for an ASP.NET web application. And I'm using a site map for site navigation.
I have ASP.NET TreeView and Menu navigation controls populated using a SiteMapDataSource. But off-limits administrator-only pages are visible to non-administrator users.
Kevin Pang wrote:
I'm not sure how this question is any
different than your other question…
The other question deals with assigning and maintaining permissions.
This question just deals with presentation of navigation. Specifically TreeView and Menu controls with sitemap data sources.
<asp:Menu ID="Menu1" runat="server" DataSourceID="SiteMapDataSource1" />
<asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server" ShowStartingNode="False" />
Nicholas wrote:
add role="SomeRole" in the sitemap
Does that only handle the display issue? Or are such page permissions enforced?
A: I had to set securityTrimmingEnabled to "true" in my web.config file.
<?xml version="1.0"?>
<configuration>
...
<system.web>
...
<siteMap defaultProvider="default">
<providers>
<clear/>
<add name="default"
type="System.Web.XmlSiteMapProvider"
siteMapFile="web.sitemap"
securityTrimmingEnabled="true"/>
</providers>
</siteMap>
...
</system.web>
...
</configuration>
A: I'm not sure how this question is any different than your other question, but I'll try to answer it anyways.
If you want a tutorial on how to implement role-based authentication, check out the one from 4GuysFromRolla.
A: securityTrimmingEnabled="true" works for internal pages that have a config file restricting permissions; you can also add role="SomeRole" in the sitemap to override the display mechanism, which is useful if you have menu items pointing to external sites.
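One note on that attribute: in a web.sitemap file it is actually spelled roles (plural) and takes a comma-separated list. A hypothetical node might look like this:
<siteMapNode url="~/Admin/Default.aspx" title="Admin" roles="Administrators" />
With securityTrimmingEnabled="true", users who are not in the Administrators role will not see this node in the TreeView or Menu.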
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: .Net 2.0 - How efficient are Generic Lists? I'm creating an app that holds loads and loads of user data in memory, and it's mostly keeping it all in List<T> structures (and some Dictionary<T,T> when I need lookup).
And I'm wondering...
How efficient are Lists?
How much memory overhead do I get for each of them? (that is, memory space in addition to what the objects they contain would take)
How much of a penalty do I pay every time I instance a new one?
Is there a more efficient way?
Dictionaries are just Hashtables, right? Or are they a less efficient data structure?
I'd like to use Arrays, but I have the typical problem of adding and removing things all the time from them, so having to grow / shrink them would be a pain.
Any ideas/suggestions?
Edit: I know my basic data structures 101, and why a Linked List is better for adding/removing, and a HashTable is better for Random Access.
I'm mostly concerned about .Net's idiosyncrasies. How much memory each of these structures wastes, for example. And time wasted on initializing / killing them.
Things like, for example, if it takes a lot of time to instance/GC a List, but not much to clear it, maybe I should keep a little pool of Lists waiting for me, and clear them and send them back to the pool when done, instead of simply dereferencing them.
Or, if Hashtables are faster for access but waste a lot of memory, I might prefer to use Lists and traverse them, for small item counts.
And I'd also really like to focus on memory usage, since my app is hideously memory intensive (think memcached-like)...
Does anyone know where I can find such info?
A: Maybe you should consider using some type of in-memory database if you have that much data that has to be held in memory.
A: Lists are arrays underneath, so adding an item anywhere but at the end will be very costly.
Otherwise they will be basically as fast as an array.
A: List uses an array internally and Dictionary uses a hash table.
They are faster than the older non-generic classes ArrayList and Hashtable because you don't have the cost of converting everything to/from object (boxing, unboxing and type checking) and because MS optimized them better than the old classes.
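One practical consequence of the array-backed implementation: if you know roughly how many items you will hold, pre-sizing the list avoids the repeated grow-and-copy as the internal array fills up. A quick sketch:
using System.Collections.Generic;

class CapacityDemo
{
    static void Main()
    {
        // The capacity hint allocates the backing array once up front,
        // so the five million Adds below never trigger a resize-and-copy.
        List<string> items = new List<string>(5000000);
        for (int i = 0; i < 5000000; i++)
        {
            items.Add("item " + i);
        }
    }
}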
A: If you need efficiency in inserting or removing at random places in the list there is a LinkedList data structure - the MSDN Article gives details. Obviously being a linked list random access isn't efficient.
A: The LinkedList object would take less time to add to and remove from because of the nature of linked lists. When you add an element it does not have to resize an array like a normal list does. Other than that improvement I would suspect that the LinkedList would perform about the same as a normal List.
See this on Wikipedia: Linked Lists vs. Arrays
A: If you really want to see all the gory details of how List<> and Dictionary<,> are implemented, use the wonderfully useful .NET Reflector.
See also the documentation for the excellent C5 Generic Collection Library, which has very good implementations of a number of collection types missing from the BCL.
A: If you are concerned about memory usage, the real key is to store your array on disk and map just the parts you need into memory at that time.
The key is to use FILE_FLAG_NO_BUFFERING and always read/write exactly one sector's worth of data.
A: I think the two-process thing might be overkill; plus the interprocess communication will likely have some slowness (although I've never tried such a thing, so take my opinion with a grain of salt). I work on a data-driven application where each data unit is tiny, but we may have upwards of a billion data units at any given time. The method we use is basically:
*
*Everything resides on disk, no matter what
*Data is blocked into "chunks"; each chunk knows when it was last accessed
*Chunks are dragged up from disk into memory when they are needed
*A low-priority thread monitors memory usage and deletes the least recently used stuff
In other words, it's a homebrew caching scheme. The benefit is you can control with pinpoint accuracy what data is in memory, which you cannot if you rely on the OS paging scheme. If some commonly used variable ends up mixed in with your data on a page, that page will be repeatedly hit and prevent it from going to disk. If you design into your application an accommodation that some data requests will take longer than others, then this will work pretty well. Particularly if you know what chunks you will need ahead of time (we don't).
Keep in mind that everything in a .NET app has to fit within 2 GB of memory, and because of the way the GC works and the overhead of your app, you actually probably have somewhat less than that to work with.
To spy on exactly what your heap looks like and who is allocating, use the CLR profiler: http://www.microsoft.com/downloads/details.aspx?familyid=86ce6052-d7f4-4aeb-9b7a-94635beebdda&displaylang=en
A: The .Net List doesn't use a linked list. It is backed by an array; it starts with 4 slots by default and I think it doubles in size as you add things. So performance can vary a bit depending on how you use it.
If you're using VS 2008, run the profiler before you get too far down this rat hole. When we started actually looking at where we were losing time, it didn't take long for us to figure out that debating the finer points of linked lists just really didn't matter.
A: I wouldn't move a finger until there's some performance problem and a profiler has shown you where it is. Then you'll have a definitive problem to solve and it'll be much easier.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How do I check if a SQL Server text column is empty? I am using SQL Server 2005. I have a table with a text column and I have many rows in the table where the value of this column is not null, but it is empty. Trying to compare against '' yields this response:
The data types text and varchar are incompatible in the not equal to operator.
Is there a special function to determine whether the value of a text column is not null but empty?
A: ISNULL(
case textcolum1
WHEN '' THEN NULL
ELSE textcolum1
END
,textcolum2) textcolum1
A: Actually, you just have to use the LIKE operator.
SELECT * FROM mytable WHERE mytextfield LIKE ''
A: where datalength(mytextfield)=0
A: SELECT * FROM TABLE
WHERE ISNULL(FIELD, '')=''
A: Use the IS NULL operator:
Select * from tb_Employee where ename is null
A: To get only empty values (and not null values):
SELECT * FROM myTable WHERE myColumn = ''
To get both null and empty values:
SELECT * FROM myTable WHERE myColumn IS NULL OR myColumn = ''
To get only null values:
SELECT * FROM myTable WHERE myColumn IS NULL
To get values other than null and empty:
SELECT * FROM myTable WHERE myColumn <> ''
And remember use LIKE phrases only when necessary because they will degrade performance compared to other types of searches.
A: I know this post is ancient, but I found it useful.
It didn't resolve my issue of returning the records with a non-empty text field, so I thought I would add my solution.
This is the where clause that worked for me.
WHERE xyz LIKE CAST('% %' as text)
A: Use DATALENGTH method, for example:
SELECT length = DATALENGTH(myField)
FROM myTABLE
A: Instead of using ISNULL, use a CASE; it performs better.
case when campo is null then '' else campo end
In your issue you need to do this:
case when campo is null then '' else
case when len(campo) = 0 then '' else campo end
end
Code like this:
create table #tabla(
id int,
campo varchar(10)
)
insert into #tabla
values(1,null)
insert into #tabla
values(2,'')
insert into #tabla
values(3,null)
insert into #tabla
values(4,'dato4')
insert into #tabla
values(5,'dato5')
select id, case when campo is null then 'DATA NULL' else
case when len(campo) = 0 then 'DATA EMPTY' else campo end
end
from #tabla
drop table #tabla
A: I would test against SUBSTRING(textColumn, 0, 1)
A: Are null and an empty string equivalent? If they are, I would include logic in my application (or maybe a trigger if the app is "out-of-the-box"?) to force the field to be either null or '', but not the other. If you went with '', then you could set the column to NOT NULL as well. Just a data-cleanliness thing.
A: I wanted to have predefined text ("No Labs Available") displayed if the value was null or empty, and my friend helped me with this:
StrengthInfo = CASE WHEN ((SELECT COUNT(UnitsOrdered) FROM [Data_Sub_orders].[dbo].[Snappy_Orders_Sub] WHERE IdPatient = @PatientId and IdDrugService = 226)> 0)
THEN cast((S.UnitsOrdered) as varchar(50))
ELSE 'No Labs Available'
END
A: You have to do both:
SELECT * FROM Table WHERE Text IS NULL or Text LIKE ''
A: I know there are plenty of answers with alternatives to this problem, but I would just like to put together what I found to be the best solution, by @Eric Z Beard & @Tim Cooper with @Enrique Garcia & @Uli Köhler.
If, in your use-case scenario, a space-only value should count as empty, you need to handle that explicitly, because the query below returns 1, not 0.
SELECT datalength(' ')
Therefore, I would go for something like:
SELECT datalength(RTRIM(LTRIM(ISNULL([TextColumn], ''))))
A: Try this:
select * from mytable where convert(varchar, mycolumn) = ''
I hope it helps!
A: DECLARE @temp as nvarchar(20)
SET @temp = NULL
--SET @temp = ''
--SET @temp = 'Test'
SELECT IIF(ISNULL(@temp,'')='','[Empty]',@temp)
A:
It will do two things:
*
*Null check and string null check
*Replace an empty value with a default value, e.g. NA.
SELECT coalesce(NULLIF(column_name,''),'NA') as 'desired_name') from table;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "209"
} |
Q: How do you configure HttpOnly cookies in tomcat / java webapps? After reading Jeff's blog post on Protecting Your Cookies: HttpOnly. I'd like to implement HttpOnly cookies in my web application.
How do you tell tomcat to use http only cookies for sessions?
A: httpOnly is supported as of Tomcat 6.0.19 and Tomcat 5.5.28.
See the changelog entry for bug 44382.
The last comment for bug 44382 states, "this has been applied to 5.5.x and will be included in 5.5.28 onwards." However, it does not appear that 5.5.28 has been released.
The httpOnly functionality can be enabled for all webapps in conf/context.xml:
<Context useHttpOnly="true">
...
</Context>
My interpretation is that it also works for an individual context by setting it on the desired Context entry in conf/server.xml (in the same manner as above).
A:
Update: The JSESSIONID stuff here is
only for older containers. Please use
jt's currently accepted answer unless
you are using < Tomcat 6.0.19 or < Tomcat
5.5.28 or another container that does not support HttpOnly JSESSIONID cookies as a config option.
When setting cookies in your app, use
response.setHeader( "Set-Cookie", "name=value; HttpOnly");
However, in many webapps, the most important cookie is the session identifier, which is automatically set by the container as the JSESSIONID cookie.
If you only use this cookie, you can write a ServletFilter to re-set the cookies on the way out, forcing JSESSIONID to HttpOnly. The page at http://alexsmolen.com/blog/?p=16 (formerly at http://keepitlocked.net/archive/2007/11/05/java-and-httponly.aspx) suggests adding the following in a filter.
if (response.containsHeader( "SET-COOKIE" )) {
String sessionid = request.getSession().getId();
response.setHeader( "SET-COOKIE", "JSESSIONID=" + sessionid
+ ";Path=/<whatever>; Secure; HttpOnly" );
}
but note that this will overwrite all cookies and only set what you state here in this filter.
If you use additional cookies to the JSESSIONID cookie, then you'll need to extend this code to set all the cookies in the filter. This is not a great solution in the case of multiple-cookies, but is a perhaps an acceptable quick-fix for the JSESSIONID-only setup.
Please note that as your code evolves over time, there's a nasty hidden bug waiting for you when you forget about this filter and try and set another cookie somewhere else in your code. Of course, it won't get set.
This really is a hack though. If you do use Tomcat and can compile it, then take a look at Shabaz's excellent suggestion to patch HttpOnly support into Tomcat.
A: Also, it should be noted that turning on HttpOnly will break applets that require stateful access back to the JVM.
The applet's HTTP requests will not use the JSESSIONID cookie and may get assigned to a different Tomcat.
A: For cookies that I am explicitly setting, I switched to using SimpleCookie provided by Apache Shiro. It does not inherit from javax.servlet.http.Cookie, so it takes a bit more juggling to get everything to work correctly; however, it does provide a property to set HttpOnly, and it works with Servlet 2.5.
For setting a cookie on a response, rather than doing response.addCookie(cookie) you need to do cookie.saveTo(request, response).
A: I found this in the OWASP guidance:
<session-config>
<cookie-config>
<http-only>true</http-only>
</cookie-config>
</session-config>
This also fixes the "HttpOnly cookies in config" security issue.
A: Please be careful not to overwrite the ";secure" cookie flag in https-sessions. This flag prevents the browser from sending the cookie over an unencrypted http connection, basically rendering the use of https for legit requests pointless.
private void rewriteCookieToHeader(HttpServletRequest request, HttpServletResponse response) {
if (response.containsHeader("SET-COOKIE")) {
String sessionid = request.getSession().getId();
String contextPath = request.getContextPath();
String secure = "";
if (request.isSecure()) {
secure = "; Secure";
}
response.setHeader("SET-COOKIE", "JSESSIONID=" + sessionid
+ "; Path=" + contextPath + "; HttpOnly" + secure);
}
}
A: If your web server supports the Servlet 3.0 spec, like Tomcat 7.0+, you can use the below in web.xml:
<session-config>
<cookie-config>
<http-only>true</http-only>
<secure>true</secure>
</cookie-config>
</session-config>
As mentioned in docs:
HttpOnly: Specifies whether any session tracking cookies created by
this web application will be marked as HttpOnly
Secure: Specifies
whether any session tracking cookies created by this web application
will be marked as secure even if the request that initiated the
corresponding session is using plain HTTP instead of HTTPS
Please refer to how to set httponly and session cookie for java web application
A: For session cookies it doesn't seem to be supported in Tomcat yet. See the bug report Need to add support for HTTPOnly session cookie parameter. A somewhat involved work-around can be found here, which basically boils down to manually patching Tomcat. Can't really find an easy way to do it at this point, I'm afraid.
To summarize the work-around, it involves downloading the 5.5 source and then changing the source in the following places:
org.apache.catalina.connector.Request.java
//this is what needs to be changed
//response.addCookieInternal(cookie);
//this is whats new
response.addCookieInternal(cookie, true);
}
org.apache.catalina.connector.Response.addCookieInternal
public void addCookieInternal(final Cookie cookie) {
addCookieInternal(cookie, false);
}
public void addCookieInternal(final Cookie cookie, boolean HTTPOnly) {
if (isCommitted())
return;
final StringBuffer sb = new StringBuffer();
//web application code can receive a IllegalArgumentException
//from the appendCookieValue invokation
if (SecurityUtil.isPackageProtectionEnabled()) {
AccessController.doPrivileged(new PrivilegedAction() {
public Object run(){
ServerCookie.appendCookieValue
(sb, cookie.getVersion(), cookie.getName(),
cookie.getValue(), cookie.getPath(),
cookie.getDomain(), cookie.getComment(),
cookie.getMaxAge(), cookie.getSecure());
return null;
}
});
} else {
ServerCookie.appendCookieValue
(sb, cookie.getVersion(), cookie.getName(), cookie.getValue(),
cookie.getPath(), cookie.getDomain(), cookie.getComment(),
cookie.getMaxAge(), cookie.getSecure());
}
//of course, we really need to modify ServerCookie
//but this is the general idea
if (HTTPOnly) {
sb.append("; HttpOnly");
}
//if we reached here, no exception, cookie is valid
// the header name is Set-Cookie for both "old" and v.1 ( RFC2109 )
// RFC2965 is not supported by browsers and the Servlet spec
// asks for 2109.
addHeader("Set-Cookie", sb.toString());
cookies.add(cookie);
}
A: In Tomcat 6, you can conditionally enable it from your HTTP listener class:
public void contextInitialized(ServletContextEvent event) {
if (Boolean.getBoolean("HTTP_ONLY_SESSION")) HttpOnlyConfig.enable(event);
}
Using this class
import java.lang.reflect.Field;
import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import org.apache.catalina.core.StandardContext;
public class HttpOnlyConfig
{
public static void enable(ServletContextEvent event)
{
ServletContext servletContext = event.getServletContext();
Field f;
try
{ // WARNING TOMCAT6 SPECIFIC!!
f = servletContext.getClass().getDeclaredField("context");
f.setAccessible(true);
org.apache.catalina.core.ApplicationContext ac = (org.apache.catalina.core.ApplicationContext) f.get(servletContext);
f = ac.getClass().getDeclaredField("context");
f.setAccessible(true);
org.apache.catalina.core.StandardContext sc = (StandardContext) f.get(ac);
sc.setUseHttpOnly(true);
}
catch (Exception e)
{
System.err.print("HttpOnlyConfig cant enable");
e.printStackTrace();
}
}
}
A: Implementation: in Tomcat 7.x/8.x/9.x
Go to Tomcat >> conf folder
Open web.xml and add the following in the session-config section:
<cookie-config>
<http-only>true</http-only>
<secure>true</secure>
</cookie-config>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "77"
} |
Q: Using Hibernate to work with Text Files I am using Hibernate in a Java application to access my database, and it works pretty well with MS-SQL and MySQL. But some of the data I have to show on some forms has to come from text files, and by text files I mean human-readable files. They can be CSV, tab-delimited, or even one key/value pair per line, since my data is as simple as that, but my preference of course is XML files.
My question is: can I use Hibernate to read those files using HQL, Query, EntityManager, and all those resources Hibernate provides me to access data? Which file format should I use, and how do I configure my persistence.xml file to recognize files as tables?
A: Hibernate is written against the JDBC API. So, you need a JDBC driver that works with the file format you are interested in. Obviously, even for read-only access, this isn't going to perform well, but it might still be useful if that's not a high priority. On a Windows system, you can set up ODBC datasources for delimited text files, Excel files, etc. Then you can set up the JdbcOdbcDriver in your Java application to use this data source.
For most of the applications I work on, I would not consider this approach; I would use an import/export mechanism to convert from a real database (even if it's an in-process database like Berkeley DB or Derby) to the text files. Yes, it's an extra step, but it could be automated, and the performance isn't likely to be much worse than trying to use the text files directly (it will likely be much better, overall), and it will be more robust and easy to develop.
A: A quick google came up with
*
*JDBC driver for csv files
*JDBC driver for XML files
Hope this might provide some inspiration?
A: Like erickson said, your only hope is in finding a JDBC driver for that task. xlsql (a CSV, XML and Excel driver) might fit the bill. After that, you just have to either find or write the simplest Hibernate Dialect that fits your driver.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Can you call a webservice from TSQL code? Is there a way to call out from a TSQL stored procedure or function to a webservice?
A: I would not do this for heavy traffic or mission critical stuff, HOWEVER, if you do NOT need to receive feedback from a service, then it is actually a great thing to do.
Here is an example of what I have done.
*
*Triggers on Insert and Update on a table
*The trigger calls a stored proc that passes the JSON data of the transaction to a Web API endpoint, which then inserts it into a MongoDB in AWS.
Don't do old-style XML; use JSON:
EXEC sp_OACreate 'WinHttp.WinHttpRequest.5.1', @Object OUT;
EXEC sp_OAMethod @Object, 'Open', NULL, 'POST', 'http://server/api/method', 'false'
EXEC sp_OAMethod @Object, 'setRequestHeader', null, 'Content-Type', 'application/json'
DECLARE @len INT = len(@requestBody)
Full example:
Alter Procedure yoursprocname
@WavName varchar(50),
@Dnis char(4)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @Object INT;
DECLARE @Status INT;
DECLARE @requestBody NVARCHAR(MAX) = '{
"WavName": "{WavName}",
"Dnis": "{Dnis}"
}'
SET @requestBody = REPLACE(@requestBody, '{WavName}', @WavName)
SET @requestBody = REPLACE(@requestBody, '{Dnis}', @Dnis)
EXEC sp_OACreate 'WinHttp.WinHttpRequest.5.1', @Object OUT;
EXEC sp_OAMethod @Object, 'Open', NULL, 'POST', 'http://server/api/method', 'false'
EXEC sp_OAMethod @Object, 'setRequestHeader', null, 'Content-Type', 'application/json'
DECLARE @len INT = len(@requestBody)
EXEC sp_OAMethod @Object, 'setRequestHeader', null, 'Content-Length', @len
EXEC sp_OAMethod @Object, 'send', null, @requestBody
EXEC sp_OAGetProperty @Object, 'Status', @Status OUT
EXEC sp_OADestroy @Object
END
A: Yes, you can create one like this:
CREATE PROCEDURE CALLWEBSERVICE (@Para1 VARCHAR(100), @Para2 VARCHAR(100)) -- parameter types assumed; the parameters are not used in the body below
AS
BEGIN
Declare @Object as Int;
Declare @ResponseText as Varchar(8000);
Exec sp_OACreate 'MSXML2.XMLHTTP', @Object OUT;
Exec sp_OAMethod @Object, 'open', NULL, 'get', 'http://www.webservicex.com/stockquote.asmx/GetQuote?symbol=MSFT','false'
Exec sp_OAMethod @Object, 'send'
Exec sp_OAMethod @Object, 'responseText', @ResponseText OUTPUT
Select @ResponseText
Exec sp_OADestroy @Object
END
A: In earlier versions of Sql, you could use either an extended stored proc or xp_cmdshell to shell out and call a webservice.
Not that either of these sound like a decent architecture - but sometimes you have to do crazy stuff.
A: Sure you can, but this is a terrible idea.
As web-service calls may take arbitrary amounts of time, and randomly fail, depending on how many games of counterstrike are being played on your network at the time, you can't tell how long this is going to take.
At the bare minimum you're looking at probably half a second by the time it builds the XML, sends the HTTP request to the remote server, which then has to parse the XML and send a response back.
*
*Whichever application did the INSERT INTO BLAH query which caused the web-service to fire is going to have to wait for it to finish. Unless this is something that only happens in the background, like a daily scheduled task, your app's performance is going to bomb.
*The web service-invoking code runs inside SQL Server and uses up its resources. As it's going to take a long time to wait for the HTTP request, you'll end up using a lot of resources, which will again hurt the performance of your server.
A: You can do it with the embedded VB objects.
First you create one VB object of type 'MSXML2.XMLHttp', and you use this one object for all of your queries (if you recreate it each time, expect a heavy performance penalty).
Then you feed that object and some parameters into a stored procedure that invokes sp_OAMethod on the object.
Sorry for the imprecise example, but a quick google search should reveal how the vb-script method is done.
--
But the CLR version is much....MUCH easier.
The problem with invoking webservices is that they cannot keep pace with the DB engine. You'll get lots of errors where it just cannot keep up.
And remember, web SERVICES require a new connection each time. Multiplicity comes into play. You don't want to open 5000 socket connections to service a function call on a table. That's loony!
In that case you'd have to create a custom aggregate function, and use THAT as an argument to pass to your webservice, which would return a result set...then you'd have to collate that. It's really an awkward way of getting data.
A: Here'a an example to get some data from a webservice. In this case parse a user agent string to JSON.
--first configure MSSQL to enable calling out to a webservice (1=true, 0=false)
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Ole Automation Procedures', 1;
GO
RECONFIGURE;
GO
CREATE PROCEDURE CallWebAPI_ParseUserAgent @UserAgent VARCHAR(512)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @Object INT;
DECLARE @ResponseText AS VARCHAR(8000);
DECLARE @url VARCHAR(512)
SET @url = 'http://www.useragentstring.com/?getJSON=all&uas=' + @UserAgent;
EXEC sp_OACreate 'WinHttp.WinHttpRequest.5.1', @Object OUT;
EXEC sp_OAMethod @Object, 'Open', NULL, 'GET', @url, 'false'
EXEC sp_OAMethod @Object, 'setRequestHeader', NULL, 'Content-Type', 'application/json'
EXEC sp_OAMethod @Object, 'send'
EXEC sp_OAMethod @Object, 'responseText', @ResponseText OUTPUT
SELECT @ResponseText
EXEC sp_OADestroy @Object
END
--example how to call the API
CallWebAPI_ParseUserAgent 'Mozilla/5.0 (Windows NT 6.2; rv:53.0) Gecko/20100101 Firefox/53.0'
A: Not in T-SQL code itself, but with SQL Server 2005 and above, they've enabled the ability to write CLR stored procedures, which are essentially .NET functions exposed as stored procedures for consumption. You have most of the .NET framework at your fingertips for this, so I can see consuming a web service being possible through this.
It is a little lengthy to discuss in detail here, but here's a link to an MSDN article on the topic.
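As a rough sketch of what such a CLR procedure could look like (hypothetical code, assuming the assembly is cataloged with EXTERNAL_ACCESS permission so it is allowed to open network connections):
using System.Data.SqlTypes;
using System.Net;
using Microsoft.SqlServer.Server;

public static class WebServiceProcs
{
    // Exposed to T-SQL via CREATE ASSEMBLY / CREATE PROCEDURE.
    [SqlProcedure]
    public static void CallWebService(SqlString url, out SqlString response)
    {
        using (WebClient client = new WebClient())
        {
            // A simple GET; a real service call would set headers/body as needed.
            response = client.DownloadString(url.Value);
        }
    }
}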
A: If you're working with SQL 2000 compatibility levels and cannot do CLR integration, see http://www.vishalseth.com/post/2009/12/22/Call-a-webservice-from-TSQL-(Stored-Procedure)-using-MSXML.aspx
A: I've been working for big global companies around the world, using Oracle databases. We consume web services all the time through the DB with stored procedures, with no issues, even for services with heavy traffic. All of them are for internal use - I mean with no access to the internet, only inside the plant. I would recommend using it, but be really careful about how you design it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
} |
Q: SMO and Sql Server 7.0 Does anyone have a definitive answer to whether Sql Server Management Objects is compatible with Sql Server 7.0?
The docs state:
Because SMO is compatible with SQL Server version 7.0, SQL Server 2000, SQL Server 2005, and SQL Server 2008, you easily manage a multi-version environment.
But trying to connect to a Sql 7 instance gets me:
"This SQL Server version (7.0) is not supported."
Has anyone been successful in getting these 2 to play nice?
A: You can use SMO to connect to SQL Server versions 7, 2000, and 2005, but SMO does not support databases set to compatibility levels 60, 65, and 70.
For SQL Server 7.0 the compatibility level is 70.
Obviously this is conflicting information... I assume that if your DB's compatibility level is 70, you cannot connect.
To check run: EXEC sp_dbcmptlevel 'databasename'
Looking through this link, it seems you might be able to change the compatibility level by running this:
EXEC sp_dbcmptlevel 'databasename', 80
Obviously make a back up before changing anything.
A: Looks like the docs are wrong (and have continued to be wrong for the last 3+ years!). I found this snippet with Reflector in Microsoft.SqlServer.Management.Common.ConnectionManager, Microsoft.SqlServer.ConnectionInfo
protected void CheckServerVersion(ServerVersion version) {
if (version.Major <= 7 || version.Major > 9) {
throw new ConnectionFailureException(
StringConnectionInfo.ConnectToInvalidVersion(version.ToString())
);
}
}
So, it looks like only SQL 2000 and SQL 2005 are supported. Presumably, SQL 2008 (version 10) has updated SMO assemblies.
Bummer - guess it's back to SQL-DMO for this project.
A: Just to follow up on your comment: SQL 2008 does have its own SMO package, which supports SQL 2000, 2005 and 2008, and which is actually definitively documented on its download page! And you're right, you can't connect SQL 2005 SMO to SQL 2008.
There are some nice updates in version 10 of SMO: if you access properties that do not exist on the version of SQL Server you are connected to, you get a sensible "This property is not available on this version of SQL" exception, or words to that effect.
Microsoft SQL Server 2008 Management Objects
The SQL Server Management Objects (SMO) is a .NET Framework object model that enables software developers to create client-side applications to manage and administer SQL Server objects and services. This object model will work with SQL Server 2000, SQL Server 2005 and SQL Server 2008.
A: Sorry for the late answer... there is partial support for SQL 2000 and SQL 7
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do you use a flash object as a link? Is it possible to use a flash document embedded in HTML as a link?
I tried just wrapping the object element with an a like this:
<a href="http://whatever.com">
<object ...>
<embed ... />
</object>
</a>
In Internet Explorer, that made it show the location in the status bar like a link, but it doesn't do anything.
I just have the .swf file, so I can't add a click handler in ActionScript.
A: You can use a transparent div with the same height and width positioned over the object,
and let JavaScript open your URL on the div's click event.
A: Though the object really should respond to being wrapped in an a href tag, you could open the swf in vim and just throw in an _root.onPress=function(){getURL("http://yes.no/");}; or, if it's AS3, something like stage.addEventListener(MouseEvent.CLICK, function(e:MouseEvent):void{navigateToURL(new URLRequest("http://yes.no/"));});. But if editing the swf is your route, you'd likely have more success with a tool built for the purpose.
A:
You could use Javascript to add a
handler (added inline for brevity):
<object onclick="window.location='URLHERE'; return false;">
That should work, methinks.
This worked for me, but the little hand cursor for clicking doesn't appear. The link works, though.
A: As an addition to dlamblin's answer, it is often best to use the clickTAG technique to open URLs from a flash movie.
More information can be found here:
http://www.adobe.com/resources/richmedia/tracking/designers_guide/
The advantage of using the clickTAG technique is that you can set the URL to jump to in the HTML page.
This means that you can set the flash movie to link to different places without modifying the flash file (beyond adding the initial clickTAG code). You can use link tracking on the URL as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Need a way to kick off a php script each time a particular account receives an email Working on a little side project web app...
I'd like to have it set up so that, when users send email to a certain account, I can kick off a PHP script that reads the email, pulls out some key info, and writes it to a database.
What's the best way to do this? A cron job that checks for new email?
The app is running on a "Dedicated-Virtual" Server at MediaTemple, so I guess I have a reasonable level of control, which should give me a range of options.
I'm very slowly learning the ropes (PHP, MySQL, configuring/managing the server), so your advice and insight are much appreciated.
Cheers,
Matt Stuehler
A: Procmail is how I do it. Here's an example where I actually process the text inside the email to archive it back to a MySQL database.
:0:
* ^(From).*[email protected]
{
:0 c
| php /var/www/app/process_email.php
}
A: if you have control of a mail transfer agent that is configurable to allow .forwards or similar configurable delivery options (qmail, postfix, and sendmail all are), i'd just set the script up in your .forward, .procmailrc, or other similar programmable delivery mechanism. when doing this, you should do some serious input validation on the mail (make sure the sender is who you expect, the received lines match up, the data is sane) if you don't want others who stumble onto the address to be able to muck with your system.
you'll also want to use whatever input sanitizer php uses to avoid things like sql injections from malicious data! we can all reflect upon the lesson of little bobby tables:
xkcd.com/327/
A: I recently worked on a project that had this need. I had great success using a .forward file in the mail accounts home directory. For example, let's say you're trying to do this for the address [email protected], and the server you are working with is the mail server for bar.com. You would first need to create a .forward file for this account. On the server I worked on, this would be:
/home/email/[email protected]/.forward
The contents of that file were as follows:
"|/path/to/script.php"
Also, the .forward file's owner was [email protected], and it was chmod'd to 600 (read/write to owner only.)
Next, you need to set up the script you're piping the mail to (/path/to/script.php above.)
Firstly, that script needs to be executable (+x). The rest simply reads STDIN and handles it however you wish. Here's a sample script that reads the entire message and stores it in a variable $email.
#!/usr/local/bin/php
<?php
$fd = fopen("php://stdin","r");
$email = '';
while(!feof($fd)){
$email .= fread($fd, 1024);
}
fclose($fd);
?>
Hopefully that was of some help to you.
A: The cron job is the common solution to such a task. Checking for new mail with PHP is no problem. If you run a qmail server (maybe other servers can do this too?) you can fire a script on every received mail, which triggers your PHP script.
A: You can use a .forward file.
Just place the full path of your PHP script into the file, after a pipe sign:
|/full/path/to/script.php
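Whichever delivery mechanism you use, a script that meets the original goal (pull out some key info and write it to a database) might look roughly like this sketch. The header parsing is deliberately naive, and the interpreter path, table name and credentials are made up for illustration:
#!/usr/bin/php
<?php
// Read the whole message that the MTA pipes to us on STDIN.
$email = stream_get_contents(STDIN);

// Naive header extraction; real mail may need a proper MIME parser.
preg_match('/^From:\s*(.+)$/mi', $email, $from);
preg_match('/^Subject:\s*(.+)$/mi', $email, $subject);
$sender = isset($from[1]) ? trim($from[1]) : '';
$subj = isset($subject[1]) ? trim($subject[1]) : '';

// Hypothetical table and credentials.
$db = new mysqli('localhost', 'dbuser', 'dbpass', 'myapp');
$stmt = $db->prepare('INSERT INTO inbound_mail (sender, subject, raw) VALUES (?, ?, ?)');
$stmt->bind_param('sss', $sender, $subj, $email);
$stmt->execute();
?>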
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Caching Patterns in ASP.NET So I just fixed a bug in a framework I'm developing. The pseudo-pseudocode looks like this:
myoldObject = new MyObject { someValue = "old value" };
cache.Insert("myObjectKey", myoldObject);
myNewObject = cache.Get("myObjectKey");
myNewObject.someValue = "new value";
if(myNewObject.someValue != cache.Get("myObjectKey").someValue)
    myNewObject.SaveToDatabase();
So, essentially, I was getting an object from the cache, and then later on comparing the original object to the cached object to see if I need to save it to the database in case it's changed. The problem arose because the original object is a reference...so changing someValue also changed the referenced cached object, so it'd never save back to the database. I fixed it by cloning the object off of the cached version, severing the reference and allowing me to compare the new object against the cached one.
My question is: is there a better way to do this, some pattern, that you could recommend? I can't be the only person that's done this before :)
A: Dirty tracking is the normal way to handle this, I think. Something like:
class MyObject {
    private string _someValue;

    public string SomeValue {
        get { return _someValue; }
        set {
            if (value != _someValue) {
                IsDirty = true;
                _someValue = value;
            }
        }
    }

    public bool IsDirty {
        get;
        private set;
    }

    public void SaveToDatabase() {
        // ... persist this object through your data access code ...
        IsDirty = false;
    }
}
myoldObject = new MyObject { someValue = "old value" };
cache.Insert("myObjectKey", myoldObject);
myNewObject = cache.Get("myObjectKey");
myNewObject.someValue = "new value";
if(myNewObject.IsDirty)
myNewObject.SaveToDatabase();
A: I've done similar things, but I got around it by cloning too. The difference is that I had the cache do the cloning. When you put an object into the cache, the cache will clone the object first and store the cloned version (so you can mutate the original object without poisoning the cache). When you get an object from the cache, the cache returns a clone of the object instead of the stored object (again so that the caller can mutate the object without affecting the cached/canonical object).
I think that this is perfectly acceptable as long as the data you're storing/duping is small.
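A rough sketch of that clone-on-the-way-in/clone-on-the-way-out cache, assuming the cached type implements ICloneable (the class name and shape are illustrative, not the poster's actual code):
using System;
using System.Collections.Generic;

public class CloningCache
{
    private readonly Dictionary<string, ICloneable> _store =
        new Dictionary<string, ICloneable>();

    public void Insert(string key, ICloneable value)
    {
        // Clone on the way in so later mutations of the caller's
        // object can't poison the cached copy.
        _store[key] = (ICloneable)value.Clone();
    }

    public object Get(string key)
    {
        // Clone on the way out so callers can mutate freely.
        return _store[key].Clone();
    }
}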
A: A little improvement on Mark's answer when using LINQ:
When using LINQ, fetching entities from the DB will mark every object as dirty.
I worked around this by not setting IsDirty when the backing value has not been initialized yet -- in this instance, when it is null. For ints, I set the initial value to -1 and checked for that instead. This will not work, however, if the saved value is the same as the uninitialized value (null in my example).
private string _name;
[Column]
public string Name
{
get { return _name; }
set
{
if (value != _name)
{
if (_name != null)
{
IsDirty = true;
}
_name = value;
}
}
}
Could probably be improved further by setting IsDirty after initialization somehow.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Subversion: Fail update when there are conflicts? Is there a way to tell subversion "update/merge unless it would cause a conflict"?
I know you can use --dry-run / status -u to check before running the update, but I often have others running updates and getting broken webpages because they don't notice the "C index.php" line.
I've also noticed that svn doesn't seem too unhappy about conflicts - it still says "updated to revision blah" and exits zero, regardless of conflicts. So I have to parse the line-by-line output to discover them. Surely there's a better way?
A: You can use the --accept parameter to indicate what should happen when a conflict occurs:
--accept ARG : specify automatic conflict resolution action
('postpone', 'base', 'mine-full', 'theirs-full',
'edit', 'launch')
See also the interactive conflict resolution page in the svnbook
A: Perhaps a better way is to use a graphical tool? Or write a script to do the update that redirects the output to a file and runs grep "^C " svnupdate.log at the end to show you any conflicts?
With the graphical tools that I use (TortoiseSVN and Netbeans), they make a nasty noise at the end and present you with a merge selection dialog for dealing with them. I don't know of an equivalent with as much power for the command line tools.
A: @jsight: TortoiseSVN is great, but I primarily develop in a *NIX environment, without X. So I'm usually using (restricted to) the command line.
In re your script suggestion, that's what I'm working on now - which is why I'm annoyed that I can't just check $?. Right now I'm skipping the "output to a file" and using a pipe, but otherwise exactly what you describe.
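A minimal sketch of such a wrapper script (the log file name and messages are arbitrary):
#!/bin/sh
# Run the update, capture the output, and fail loudly on any conflict.
svn update > svnupdate.log 2>&1
cat svnupdate.log
if grep -q '^C ' svnupdate.log; then
    echo 'Conflicts detected during update!' >&2
    exit 1
fi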
A: You could use the --diff3-cmd parameter to specify which merging tool to use (usually diff3 from diffutils).
A: You could also use a pre-commit script to look for conflict markers in files and prevent the commit when they are present.
A: Subversion 1.5 (recently released) adds some ability to specify what happens during an update conflict, with the "--accept" argument.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: ssh hangs when command invoked directly, but exits cleanly when run interactive I need to launch a server on the remote machine and retrieve the port number that the server process is listening on. When invoked, the server will listen on a random port and output the port number on stderr.
I want to automate the process of logging on to the remote machine, launching the process, and retrieving the port number. I wrote a Python script called "invokejob.py" that lives on the remote machine to act as a wrapper that invokes the job and then returns the port number; it looks like this:
import re, subprocess
executable = ... # Name of executable
regex = ... # Regex to extract the port number from the output
p = subprocess.Popen(executable,
bufsize=1, # line buffered
stderr=subprocess.PIPE
)
s = p.stderr.readline()
port = re.match(regex, s).groups()[0]
print port
If I log in interactively, this script works:
$ ssh remotehost.example.com
Last login: Thu Aug 28 17:31:18 2008 from localhost
$ ./invokejob.py
63409
$ exit
logout
Connection to remotehost.example.com closed.
(Note: successful logout, it did not hang).
However, if I try to invoke it from the command-line, it just hangs:
$ ssh remotehost.example.com invokejob.py
Does anybody know why it hangs in the second case, and what I can do to avoid this?
Note that I need to retrieve the output of the program, so I can't just use the ssh "-f" flag or redirect standard output.
A:
s = p.stderr.readline()
I suspect it's the above line. When you invoke a command directly through ssh, you don't get your full pty (assuming Linux), and thus no stderr to read from.
When you log in interactively, stdin, stdout, and stderr are set up for you, and so your script works.
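For what it's worth, one workaround sometimes suggested (a guess, not a verified diagnosis of this particular hang) is to force pseudo-terminal allocation so the remote command gets a full pty:
ssh -t remotehost.example.com ./invokejob.py
Another possibility is that the spawned server keeps the ssh channel's stdout/stderr open after invokejob.py exits; redirecting the server's output inside invokejob.py (to /dev/null, say, once the port has been read) would rule that out.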
A: what if you do the following:
ssh <remote host> '<your command> ;<your regexp using awk or something>'
For example
ssh <remote host> '<your program>; ps aux | awk \'/root/ {print $2}\''
This will connect to the remote host, execute your command, and then print the PID of each process owned by root (or any process with root in its description).
I have used this method for running all kinds of commands on remote machines. The catch is to wrap the command(s) you wish to execute in single quotation marks (') and to separate each command with a semi-colon (;).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: JSF Lifecycle and Custom components There are a couple of things that I am having a difficult time understanding with regards to developing custom components in JSF. For the purposes of these questions, you can assume that all of the custom controls are using valuebindings/expressions (not literal bindings), but I'm interested in explanations on them as well.
*
*Where do I set the value for the valuebinding? Is this supposed to happen in decode? Or should decode do something else and then have the value set in encodeBegin?
*Read from the Value Binding - When do I read data from the valuebinding vs. reading it from submittedvalue and putting it into the valuebinding?
*When are action listeners on forms called in relation to all of this? The JSF lifecycle pages all mention events happening at various steps, but its not completely clear to me when just a simple listener for a commandbutton is being called
I've tried a few combinations, but always end up with hard to find bugs that I believe are coming from basic misunderstandings of the event lifecycle.
A: Action listeners, such as for a CommandButton, are called during the Invoke Application phase, which is the last phase before the final Render Response phase. This is shown in The JSF Lifecycle - figure 1.
A:
It is the only framework that I've ever used where component creation is a deep intricate process like this. None of the other web frameworks (whether in the .net world or not) make this so painful, which is completely inexplicable to me.
Some of the design decisions behind JSF start to make a little more sense when you consider the goals. JSF was designed to be tooled - it exposes lots of metadata for IDEs. JSF is not a web framework - it is a MVP framework that can be used as a web framework. JSF is highly extensible and configurable - you can replace 90% of the implementation on a per-application basis.
Most of this stuff just makes your job more complicated if all you want to do is slip in an extra HTML control.
The component is a composition of several inputtext (and other) base components, btw.
I'm assuming JSP-includes/tooling-based page fragments don't meet your requirements.
I would consider using your UIComponentELTag.createComponent to create a composite control with a UIPanel base and creating all its children from existing implementations. (I'm assuming you're using JSPs/taglibs and making a few other guesses.) You'd probably want a custom renderer if none of the existing UIPanel renderers did the job, but renderers are easy.
A: There is a pretty good diagram in the JSF specification that shows the request lifecycle - essential for understanding this stuff.
The steps are:
*
*Restore View. The UIComponent tree is rebuilt.
*Apply Request Values. Editable components should implement EditableValueHolder. This phase walks the component tree and calls the processDecodes methods. If the component isn't something complex like a UIData, it won't do much except call its own decode method. The decode method doesn't do much except find its renderer and invoke its decode method, passing itself as an argument. It is the renderer's job to get any submitted value and set it via setSubmittedValue. (A sketch of such a decode method appears just after this list.)
*Process Validations. This phase calls processValidators which will call validate. The validate method takes the submitted value, converts it with any converters, validates it with any validators and (assuming the data passes those tests) calls setValue. This will store the value as a local variable. While this local variable is not null, it will be returned and not the value from the value binding for any calls to getValue.
*Update Model Values. This phase calls processUpdates. In an input component, this will call updateModel which will get the ValueExpression and invoke it to set the value on the model.
*Invoke Application. Button event listeners and so on will be invoked here (as will navigation if memory serves).
*Render Response. The tree is rendered via the renderers and the state saved.
*If any of these phases fail (e.g. a value is invalid), the lifecycle skips to Render Response.
*Various events can be fired after most of these phases, invoking listeners as appropriate (like value change listeners after Process Validations).
This is a somewhat simplified version of events. Refer to the specification for more details.
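To make the renderer's role in step 2 concrete, here is a minimal sketch of a decode method (the class name is invented; the javax.faces calls are the standard API):
import java.util.Map;
import javax.faces.component.EditableValueHolder;
import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.render.Renderer;

public class MyInputRenderer extends Renderer {
    public void decode(FacesContext context, UIComponent component) {
        // The submitted value arrives as a request parameter keyed
        // by the component's client id.
        Map params = context.getExternalContext().getRequestParameterMap();
        String clientId = component.getClientId(context);
        String submitted = (String) params.get(clientId);
        if (submitted != null) {
            ((EditableValueHolder) component).setSubmittedValue(submitted);
        }
    }
}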
I would question why you are writing your own UIComponent. This is a non-trivial task and a deep understanding of the JSF architecture is required to get it right. If you need a custom control, it is better to create a concrete control that extends an exisiting UIComponent (like HtmlInputText does) with an equivalent renderer.
If contamination isn't an issue, there is an open-source JSF implementation in the form of Apache MyFaces.
A: The best article I've found is JSF Component Writing.
As for question 2 (when do I read the value for a value binding?): in your component you have a getter that looks like this
public String getBar() {
if (null != this.bar) {
return this.bar ;
}
ValueBinding _vb = getValueBinding("bar");
return (_vb != null) ? (String) _vb.getValue(getFacesContext()) : null;
}
How does the binding get set in the first place?
In your tag class's setProperties method:
if (bar != null) {
if (isValueReference(bar)) {
ValueBinding vb = Util.getValueBinding(bar);
foo.setValueBinding("bar", vb);
} else {
throw new IllegalStateException("The value for 'bar' must be a ValueBinding.");
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Can you recommend an ASP.NET control library? Do you have a good experience with a control library? Something that is kind of robust, well documented, consistent (across different controls) and quite well integrated into the Visual Studio.
A: Well, I can only speak about the Infragistics controls - they have a lot of bang for your buck and are well documented, very consistent and are well integrated with the standard ASP.NET programming model, etc.
Begin rant:
I personally think they are bloated and past their prime in today's world of lighter-weight JavaScript libraries and toolkits. Most developers are becoming more and more proficient in such toolkits, so the abstractions provided by Infragistics and other such and similar control vendors are not needed as much.
But that is purely my opinion.
A: I'll second the vote for Telerik. Their controls for the most part "just work" and their support has been excellent. I primarily use their forums and I still receive a response within a day (unlike some other vendors who barely seem to notice that they've even got a forum).
It also feels like they've actually spent time trying out a lot of the ways customer's will use their controls. The documentation and support reflects it. They aren't perfect, though. One issue that they had in the past, and that they've addressed in the latest releases (what they were calling their "Prometheus" controls, now just "Rad Controls for ASP.NET AJAX") is the performance of the controls. In previous releases they were definitely a bit sluggish (I'm thinking specifically of their RadGrid and RadEditor). Now they're noticeably faster (esp. the RadEditor - it loads MUCH faster).
Overall I wouldn't think twice of recommending them.
A: We're huge fans of Telerik here. Their control are all of the things you mention.
A: I've looked at a lot of control libraries ... too many to count. I like the DevExpress controls as they provided a complete suite for Windows and the Web. They also include charting, gauges and reports. We write apps for Windows and the Web so it makes it easy to transition between the two.
Though, when it comes to a Web environment we try to minimize the custom controls we use, just because of the added bloat.
A: I don't have a recommendation, but I do have some feedback on the Telerik recommendations. I can't stand their tools myself. The performance of their more complicated controls (e.g., Tree, Grid) is very sluggish and feels very un-web 2.0.
A: JDash.Net is an Asp.Net Web Forms control library which allows you to easily and seamlessly integrate end-user-designed dashboards into your application. JDash.Net is browser and database independent.
You can provide personalized start pages and modern dashboards to your users.
Your users are able to customize start page of your application and create their own dashboards using your dashlets.
Demo Site
A: ComponentArt has some pretty cool controls. You might want to check out Telerik as well. Both companies offer pretty easy to use controls that look nice.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Can I export translations of place names from freebase.com So I've looked at this use of the freebase API and I was really impressed with the translations of the name that it found, i.e. Rome, Roma, Rom, Rzym, Rooma, 로마, 罗马市. This is because I have a database of some 5000+ location names and I would very much like all French, German or Korean translations for these English names.
The problem is I spent about two hours clicking around freebase, and could find no way to get a view of city/location names in a different language mapped to English. So I'd love it if someone who understands what freebase is and how it's organized could get me a link to that view which theoretically I could then export.
Also I just wanted to share this question because I'm totally impressed with freebase and think if people haven't looked at it they should.
A: The link you posted uses mjt, a javascript framework designed for Freebase.
The Query they use.
mjt.freebase.MqlRead([{
limit: 100,
id:qid,
/* allow fuzzy matches in the value for more results... */
/* 'q:name': {'value~=': qname, value:null, lang: '/lang/'+qlang}, */
'q:name': {value: qname, lang: '/lang/'+qlang},
type: '/common/topic',
name: [{
value:null,
lang:{
id:null,
name:{
value:null,
lang:'/lang/en',
optional:true
},
'q:name':{
value:null,
lang:'/lang/'+qlang,
optional:true
}
}
}],
article: [{id:null, limit:1}],
image: [{id:null, limit:1, optional:true}],
creator: null,
timestamp:null
}])
Where:
qlang - your desired language to translate to.
qname - the location to query.
To get the link you want, you'll need the API, and you can convert the above query to a link that will return a JSON object containing the translated string.
A: The query
[{
limit: 100,
type: '/location/location',
name: [{
value: null,
lang: {
name: {
value: null,
lang: '/lang/en',
},
}
}],
}];
returns for every location and every language, the name of that location in that language. The results are organized by language. For example, here is a very small segment of the return value:
{
'lang': {
'name': {
'lang': '/lang/en',
'value': 'Russian'
}
},
'value': 'Сан-Франциско'
},
{
'lang': {
'name': {
'lang': '/lang/en',
'value': 'Swedish'
}
},
'value': 'San Francisco'
},
{
'lang': {
'name': {
'lang': '/lang/en',
'value': 'Portuguese'
}
},
'value': 'São Francisco (Califórnia)'
},
For a no-programming solution, copy-paste the following into an HTML file and open it with your browser:
<html><head>
<script type="text/javascript" src="http://mjtemplate.org/dist/mjt-0.6/mjt.js"></script>
</head>
<body onload="mjt.run()">
<div mjt.task="q">
mjt.freebase.MqlRead([{
limit: 10,
type: '/location/location',
name: [{
value:null,
lang:{
name:{
value:null,
lang:'/lang/en',
},
}
}],
}])
</div>
<table><tr mjt.for="topic in q.result"><td>
<table><tr mjt.for="(var rowi = 0; rowi < topic.name.length; rowi++)"
mjt.if="rowi < topic.name.length" style="padding-left:2em"><td>
<pre mjt.script="">
var name = topic.name[rowi];
</pre>
${(name.lang['q:name']||name.lang.name).value}:
</td><td>$name.value</td></tr></table></td></tr></table></body></html>
Of course, that will only include the first 10 results. Up the limit above if you want more. (By the way, not only is Freebase cool, so is this mjt templating language!)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: ASP.NET vs. Silverlight I'm starting a new web project and I am considering two presentation frameworks. I am thinking either about ASP.NET MVC or Silverlight. I would tend toward Silverlight since I'm quite experienced .NET developer while I have just a basic knowledge of ASP.NET controls.
A: Silverlight 3 with RIA Services seems to be very powerful. I hadn't programmed for 6 years after working on VB6. In about a month's time, here I am developing a web application that uses SL3, WCF and Entity Framework, and I feel very comfortable.
The silverlight forum and stackoverflow ofcourse are very active and for some of the problems that i've had had, I have been able to get a solution easily.
The one thing I'm not sure of yet is performance.
A: Both personally and professionally, I write C# daily for Windows forms apps and Windows services. Even after years of this, I find it FAR faster to whip out a web app with PHP or Python than to do it with .NET. Maybe things have changed with Silverlight, but to me the learning curve on ASP.NET is ridiculous compared to the payback.
Edit: The above was written a year or so before I tried ASP.NET MVC. I find ASP.NET MVC wonderfully intuitive and clean.
A:
It is mainly going to be an internal product so browsers are not an issue.
You still have not written a proper description about the nature of your application. It is difficult to assess which technology is a good fit without first knowing well enough the domain the application is being applied to, and the problems it is designed to solve.
In general, Microsoft is positioning this array of presentation technologies on the "Reach vs. Rich" continuum. You have "plain old" HTML and JavaScript on one end, usable by the largest number of client machines out there, and the ultimate full-blown WPF on the other side, which only a limited number of machines can handle. You did mention this is to be an internal app, so WPF via XBAP or ClickOnce is also possible.
So the scale would align this way: (reach) ASP.NET, AJAX, Silverlight, WPF (rich).
So the question is just how rich you want/need it to be for the users before it hurts the deployment base. Frankly, if all you fetch are forms and tabular data and statistics then regular ASP.NET web forms are just fine. If you want on-the-fly resizable graphs and client-side interactivity with back-end WCF web services, Silverlight can do that. If you want even more powerful graphical rendering, then WPF via the remote deployment options is your bet.
A: Don't forget Silverlight is going to require a plug-in to use, and to my knowledge it has not been "natively" added to IE, let alone the rest of the browsers. So there could be tons of maintenance/support issues with that alone. PDF files are considered "ubiquitous" by now, but you still run into a user or two that doesn't have Adobe Reader on their computer and it often occurs at a bad time and then you're scurrying around to get an installer.
At a fundamental level, this is what has kept me from doing Silverlight for my web apps. I think the technology behind it is good, but considering nowadays you could get equal visibility/functionality with a nice Webforms/MVC/AJAX/jQuery combination (mix and match to your liking), I'd say stick with ASP.NET.
A: IMO you may be better off with ASP.Net. While you would have a slight learning curve, you'd be developing on a proven, reliable, scalable model rather than something thats in beta and will likely change before RTM.
Also, with AJAX these days its possible to get a pretty slick user experience out of ASP.Net.
A: I would recommend ASP.NET, no additional download is needed.
I used Silverlight but a lot of companies are not allowing users to install anything also home users are not happy to install browser plugins, Silverlight is not so known as Flash player.
For beginners and advanced programmers you can find video tutorials at:
http://www.asp.net/mvc
A: It's hard to recommend one over the other without knowing what your application is. Whatever you do decide, make sure you keep your target audience in mind; not everyone is going to have Silverlight installed on their computers.
Personally, unless I was designing an incredibly interactive and beautiful web app, I would go with ASP.NET (with or without the MVC framework) if only for the fact that there is a ton of reference material for it while Silverlight is still relatively new territory.
A: It is mainly going to be an internal product so browsers are not an issue. It's more about the price of development. Is it easier to learn the Silverlight model or the ASP.NET model? I expect that Silverlight is based on web services, and so it might clearly divide my application code into business logic (services) and presentation (the Silverlight application).
A: Given your background in .NET but limited ASP.NET experience... I assume you are more of a service/client guy, which will mean your JavaScript is probably just as limited... If this is the case, I'd go with Silverlight. It will ease you into WPF, which you may be likely to use in the future.
But more importantly working with Silverlight 2.0 feels more like building a sandboxed desktop application. More than a web application. You will be more at home with Silverlight if your prior experience is with client apps.
If you want to break into building web sites/applications go with ASP .NET MVC.
Either way knowledge of the typical ASP .NET controls will not go far, since they are for WebForms.
A: I would say - unless you need flash-like animation and interaction capabilities - go for ASP.NET. It's simpler to program against and doesn't require extra downloads for the users.
A: I think Silverlight is only required when you want to create applications like Flash. These applications are combined into a single executable which is downloaded once to the client machine. They can communicate with the server if they need any data or any functionality which resides on the server. The end user needs to install the Silverlight environment add-on to help run these applications.
Whereas if you create an asp.net application, its code resides and executes on the server itself and hence a simple internet browser can execute it. But the downside is that for user-interactive applications, there need to be separate calls made between the server and client machine when the code requires.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Could you make a case for using Berkeley DB XML I'm trying to read through the documentation on Berkeley DB XML, and I think I could really use a developer's blog post or synopsis of when they had a problem that found the XML layer atop Berkeley DB was the exact prescription for.
Maybe I'm not getting it, but it seems like they're both in-process DBs, and ultimately you will parse your XML into objects or data, so why not start by storing your data parsed, rather than as XML?
A: Ultimately I want my data stored in some reasonable format.
If that data started as XML and I want to retrieve it/them using XQuery, without the XML layer, I have to write a lot of code to do the XQuery by myself, and perhaps even worse to know my XML well enough to be able to have a reasonable storage system for it.
Conversely, so long as the performance of the system allows, I can forget about that part of the back end, and just worry about my XML document and up (i.e. to the user) level and leave the rest as a black box. It gives me the B-DB storage goodness, but I get to use it from a document-centric perspective.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I recover from an unchecked exception? Unchecked exceptions are alright if you want to handle every failure the same way, for example by logging it and skipping to the next request, displaying a message to the user and handling the next event, etc. If this is my use case, all I have to do is catch some general exception type at a high level in my system, and handle everything the same way.
But I want to recover from specific problems, and I'm not sure the best way to approach it with unchecked exceptions. Here is a concrete example.
Suppose I have a web application, built using Struts2 and Hibernate. If an exception bubbles up to my "action", I log it, and display a pretty apology to the user. But one of the functions of my web application is creating new user accounts, that require a unique user name. If a user picks a name that already exists, Hibernate throws an org.hibernate.exception.ConstraintViolationException (an unchecked exception) down in the guts of my system. I'd really like to recover from this particular problem by asking the user to choose another user name, rather than giving them the same "we logged your problem but for now you're hosed" message.
Here are a few points to consider:
*
*There a lot of people creating accounts simultaneously. I don't want to lock the whole user table between a "SELECT" to see if the name exists and an "INSERT" if it doesn't. In the case of relational databases, there might be some tricks to work around this, but what I'm really interested in is the general case where pre-checking for an exception won't work because of a fundamental race condition. Same thing could apply to looking for a file on the file system, etc.
*Given my CTO's propensity for drive-by management induced by reading technology columns in "Inc.", I need a layer of indirection around the persistence mechanism so that I can throw out Hibernate and use Kodo, or whatever, without changing anything except the lowest layer of persistence code. As a matter of fact, there are several such layers of abstraction in my system. How can I prevent them from leaking in spite of unchecked exceptions?
*One of the declaimed weaknesses of checked exceptions is having to "handle" them in every call on the stack—either by declaring that a calling method throws them, or by catching them and handling them. Handling them often means wrapping them in another checked exception of a type appropriate to the level of abstraction. So, for example, in checked-exception land, a file-system–based implementation of my UserRegistry might catch IOException, while a database implementation would catch SQLException, but both would throw a UserNotFoundException that hides the underlying implementation. How do I take advantage of unchecked exceptions, sparing myself of the burden of this wrapping at each layer, without leaking implementation details?
A: I like to repackage exceptions between the "tiers" of my application, so for example a DB-specific exception is repackaged inside of another exception which is meaningful in the context of my application (of course, I leave the original exception as a member so I don't clobber the stack trace).
That said, I think that a non-unique user name is not an "exceptional" enough situation to warrant a throw. I'd use a boolean return argument instead. Without knowing much about your architecture, it's hard for me to say anything more specific or applicable.
A: See Patterns for Generation, Handling and Management of Errors.
From the Split Domain and Technical Errors pattern:
A technical error should never cause a domain error to be generated (never the twain should meet). When a technical error must cause business processing to fail, it should be wrapped as a SystemError.
Domain errors should always start from a domain problem and be handled by domain code.
Domain errors should pass "seamlessly" through technical boundaries. It may be that such errors must be serialized and re-constituted for this to happen. Proxies and facades should take responsibility for doing this.
Technical errors should be handled in particular points in the application, such as boundaries (see Log at Distribution Boundary).
The amount of context information passed back with the error will depend on how useful this will be for subsequent diagnosis and handling (figuring out an alternative strategy). You need to question whether the stack trace from a remote machine is wholly useful to the processing of a domain error (although the code location of the error and variable values at that time may be useful).
So, wrap the hibernate exception at the boundary to hibernate with an unchecked domain exception such as a "UniqueUsernameException", and let that bubble up all the way to the handler of it. Make sure to javadoc the thrown exception even though it isn't a checked exception!
A: IMO, wrapping exceptions (checked or otherwise) has several benefits that are worth the cost:
1) It encourages you to think about the failure modes for the code you write. Basically, you have to consider the exceptions that the code you call may throw, and in turn you'll consider the exceptions you'll throw for the code that calls yours.
2) It gives you the opportunity to add additional debugging information into the exception chain. For instance, if you have a method that throws an exception on a duplicate username, you might wrap that exception with one that includes additional information about the circumstances of the failure (for example, the IP of the request that provided the dupe username) that wasn't available to the lower-level code. The cookie trail of exceptions may help you debug a complex problem (it certainly has for me).
3) It lets you become implementation-independent from the lower level code. If you're wrapping exceptions and need to swap out Hibernate for some other ORM, you only have to change your Hibernate-handling code. All the other layers of code will still be successfully using the wrapped exceptions and will interpret them in the same way, even though the underlying circumstances have changed. Note that this applies even if Hibernate changes in some way (ex: they switch exceptions in a new version); it's not just for wholesale technology replacement.
4) It encourages you to use different classes of exceptions to represent different situations. For example, you may have a DuplicateUsernameException when the user tries to reuse a username, and a DatabaseFailureException when you can't check for dupe usernames due to a broken DB connection. This, in turn, lets you answer your question ("how do I recover?") in flexible and powerful ways. If you get a DuplicateUsernameException, you may decide to suggest a different username to the user. If you get a DatabaseFailureException, you may let it bubble up to the point where it displays a "down for maintenance" page to the user and send off a notification email to you. Once you have custom exceptions, you have customizeable responses -- and that's a good thing.
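For illustration, the two exception classes mentioned in point 4 could be as simple as the following (the names come from this answer; the constructor shapes are assumptions, and each class would live in its own file):
public class DuplicateUsernameException extends RuntimeException {
    public DuplicateUsernameException(String username, Throwable cause) {
        super("Username already taken: " + username, cause);
    }
}

public class DatabaseFailureException extends RuntimeException {
    public DatabaseFailureException(String message, Throwable cause) {
        super(message, cause);
    }
}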
A: Since you're currently using hibernate the easiest thing to do is just check for that exception and wrap it in either a custom exception or in a custom result object you may have setup in your framework. If you want to ditch hibernate later just make sure you wrap this exception in only 1 place, the first place you catch the exception from hibernate, that's the code you'll probably have to change when you make a switch anyway, so if the catch is in one place then the additional overhead is almost zilch.
help?
A: I agree with Nick. The exception you described is not really an "unexpected exception," so you should design your code accordingly, taking possible exceptions into account.
Also I would recommend to take a look at documentation of Microsoft Enterprise Library Exception Handling Block it has a nice outline of error handling patterns.
A: *
*The question is not really related to checked vs. unchecked debate, the same applies to both exception types.
*Between the point where the ConstraintViolationException is thrown and the point, where we want to handle the violation by displaying a nice error message is a large number of method calls on the stack that should abort immediately and shouldn't care about the problem. That makes the exception mechanism the right choice as opposed to redesigning the code from exceptions to return values.
*In fact, using an unchecked exception instead of a checked exception is a natural fit, since we really want all intermediate methods on the call stack to ignore the exception and not handle it.
*If we want to handle the "unique name violation" only by displaying a nice error message (error page) to the user, there's not really a need for a specific DuplicateUsernameException. This will keep the number of exception classes low. Instead, we can create a MessageException that can be reused in many similar scenarios.
As soon as possible we catch the ConstraintViolationException and convert it to a MessageException with a nice message. It's important to convert it soon, when we can be sure, it's really the "unique user name constraint" that was violated and not some other constraint.
Somewhere close to the top level handler, just handle the MessageException in a different way. Instead of "we logged your problem but for now you're hosed" simply display the message contained in the MessageException, no stack trace.
The MessageException can take some additional constructor parameters, such as a detailed explanation of the problem, available next action (cancel, go to a different page), icon (error, warning)...
The code may look like this
// insert the user
try {
hibernateSession.save(user);
} catch (ConstraintViolationException e) {
throw new MessageException("Username " + user.getName() + " already exists. Please choose a different name.");
}
In a totally different place there's a top exception handler
try {
... render the page
} catch (MessageException e) {
... render a nice page with the message
} catch (Exception e) {
... render "we logged your problem but for now you're hosed" message
}
A: You can catch unchecked exceptions without needing to wrap them. For example, the following is valid Java.
try {
throw new IllegalArgumentException();
} catch (Exception e) {
System.out.println("boom");
}
So in your action/controller you can have a try-catch block around the logic where the Hibernate call is made. Depending on the exception you can render specific error messages.
But I guess in your case, today it could be Hibernate and tomorrow the SleepLongerDuringWinter framework. In this case you need to pretend to have your own little ORM framework that wraps around the third-party framework. This will allow you to wrap any framework-specific exceptions into more meaningful and/or checked exceptions that you know how to make better sense of.
A: @Jan Checked versus unchecked is a central issue here. I question your supposition (#3) that the exception should be ignored in intervening frames. If I do that, I will end up with an implementation-specific dependency in my high-level code. If I replace Hibernate, catch blocks throughout my application will have to be modified. Yet, at the same time, if I catch the exception at a lower level, I'm not receiving much benefit from using an unchecked exception.
Also, the scenario here is that I want to catch a specific logical error and change the flow of the application by re-prompting the user for a different ID. Simply changing the displayed message is not good enough, and the ability to map to different messages based on exception type is built into Servlets already.
A: @erikson
Just to add food to your thoughts:
Checked versus unchecked is also debated here
Unchecked exceptions are, IMO, appropriate for exceptions caused by the caller of the function (and the caller can be several layers above that function, hence the necessity for other frames to ignore the exception).
Regarding your specific issue, you should catch the unchecked exception at a high level and encapsulate it, as @Kanook said, in your own exception, without displaying the call stack (as mentioned by @Jan Soltis).
That being said, if the underlying technology changes, that will indeed have an impact on those catch() already present in your code, and that does not answer your latest scenario.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What is the best approach to both modularity and platform independence? I hope this question does not come off as broad as it may seem at first. I am designing a software application that I would like to be both cross-platform and modular. I am still in the planning phase and can pick practically any language and toolset.
This makes things harder, not easier, because there are seemingly so many ways of accomplishing both of the goals (modularity, platform agnosticism).
My basic premise is that security, data storage, interaction with the operating system, and configuration should all be handled by a "container" application - but most of the other functionality will be supplied through plug-in modules. If I had to describe it at a high level (without completely giving away my idea), it would be a single application that can do many different jobs, all dedicated to the same goal (there are lots of disparate things to do, but all the data has to interact and be highly available).
I find myself wrestling with not so much how to do it (I can think of lots of ways), but which method is best.
For example, I know that Eclipse practically embodies what I am describing, but I find Java applications in general (and Eclipse is no exception) to be too large and slow for what I need. Ditto desktop apps written in Python and Ruby (which are excellent languages!)
I don't mind recompiling the code base for different platforms as native exectables. Yet, C and C++ have their own set of issues.
As a C# developer, I have a preference for managed code, but I am not at all sold on Mono, yet (I could be convinced).
Does anyone have any ideas/experiences/ specific favorite frameworks to share?
A: Just to cite an example: for .NET apps there are the CAB (Composite Application Block) and the Composite Application Guidance for WPF. Both are mainly implementations of a set of several design patterns focused on modularity and loose coupling between components similar to a plug-in architecture: you have an IOC framework, MVC base classes, a loosely coupled event broker, dynamic loading of modules and other stuff.
So I suppose that kind of pattern infrastructure is what you are trying to find, just not specifically for .NET. But if you see the CAB as a set of pattern implementations, you can see that almost every language and platform has some form of already built-in or third party frameworks for individual patterns.
So my take would be:
*
*Study (if you are not familiar with) some of those design patterns. You could take as an example the CAB framework for WPF documentation: Patterns in the Composite Application Library
*Design your architecture thinking on which of those patterns you think would be useful for what you want to achieve first without thinking in specific pattern implementations or products.
*Once you have your 'architectural requirements' defined more specifically, look for individual frameworks that help accomplish each one of those patterns/features for the language you decide to use and put together your own application framework based on them.
I agree that the hard part is to make all this platform independent. I really cannot think on any other solution to choose a mature platform independent language like Java.
A: Are you planning a desktop or web application?
Everyone around here seems to think that Mono is great, but I still do not think it is ready for industry use. I would equate Mono to where Wine is: great idea; when it works it works well, and when it doesn't... well, you're out of luck. mod_mono for Apache is extremely glitchy and is hard to get running correctly.
If you're aiming for the desktop, nothing beats the Eclipse RCP (Rich Client Platform) framework: http://wiki.eclipse.org/index.php/Rich_Client_Platform.
You can build for Windows, Linux, and Mac all from the same code, and all UI components are native to the OS. And RCP wins in modularity hands down; it has a plug-in architecture that is unrivaled (from what I have seen).
I have worked with RCP for 1.5 years now and I don't know what else could replace it; it is #1 in its niche.
If you're totally opposed to Java, I would look into wxWidgets with either Python or C++.
A: If you want platform independence, then you'll have to trade off between performance and development effort. C++ may be faster than Java (this is debatable FWIW) but you'll get platform independence a lot more easily with Java. Python and Ruby are in the same boat.
I doubt that .NET would be much faster than Java (they're both VM languages after all), but the big problem with .NET is platform independence. Mono has a noble goal and surprisingly good results so far but it will always be playing catch-up with Microsoft on Windows. You might be able to accept its limitations but it's still not the same as having identical multiplatform environments that Java, Python, and Ruby have. Also: the .NET development and support tools are heavily skewed towards Windows, and probably always will be.
IMO, your best bet is to target Java... or, at the very least, the JVM. If you don't like the Java language (and as a C# dev I'm guessing that's not the case) then you at least have options like Jython, JRuby, and Scala. With the JVM, you get very good platform independence, good performance, and access to a huge number of libraries and support tools. There's almost always a Java library, port or implementation that will do what you need it to do. I don't think any other platform out there has the same number of options; there's real value in that flexibility.
As for modularity: that's more about how you build the software than what platform you use. I don't know much about plugin architectures like you describe but I'm guessing that it will be possible in pretty much any modern platform you pick.
A: If you plan on doing python development, you can always use pyrex to optimize some of the slower parts.
A: With my limited Mono experience I can say I'm quite sold on it. The fact that there is active development and a lot of ongoing effort to bring it up to spec with the latest .Net technologies is encouraging. It is incredibly useful to be able to use existing .Net skills on multiple platforms. I had similar issues with performance when attempting to accomplish some basic tasks in Python + PyGTK -- maybe they can be made to perform in the right hands but it is nice to not have to worry about performance 90% of the time.
A: For desktop applications, writing it in an interpreted language, and using a cross-platform UI toolkit like wxWidgets will get you a long way towards platform independence (you just have to be careful not to use any other modules that aren't cross-platform, use things like Python's os.path module, in place of doing things like config_path = "/home/$USER")
That said, to make a good cross-platform application, you will have to do some things differently on each platform.
For example, OS X is probably the most different - preferences are usually stored in ~/Library/Preferences/ as .plists, and UIs are generally based around floating windows, with a single menu bar docked at the top of the screen.
I suppose this is where the modularity comes into play. With the preferences example above, you could have a class UserConfig, of which you have OS-specific versions. The Windows one stores config data in the appropriate Application Data folder, or the registry. The Mac OS one uses .plist files in ~/Library/Preferences/, and the unix'y one uses ~/.dotfiles.
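A minimal Python sketch of that idea, with invented class names and paths:
import os
import sys

class UserConfig(object):
    # Base class; platform-specific subclasses decide where prefs live.
    def config_dir(self):
        raise NotImplementedError

class MacUserConfig(UserConfig):
    def config_dir(self):
        return os.path.expanduser("~/Library/Preferences")

class WindowsUserConfig(UserConfig):
    def config_dir(self):
        # Fall back to the home directory if APPDATA isn't set.
        return os.environ.get("APPDATA", os.path.expanduser("~"))

class UnixUserConfig(UserConfig):
    def config_dir(self):
        return os.path.expanduser("~/.myapp")

def make_user_config():
    if sys.platform == "darwin":
        return MacUserConfig()
    if sys.platform.startswith("win"):
        return WindowsUserConfig()
    return UnixUserConfig()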
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Should I use a software hosting solution for my personal projects? Right now, I keep all of my projects on my laptop. I'm thinking that I shouldn't do this, but instead use a version control system and check them in/out from an external hosting repository (Google Code, SourceForge, etc). I see several benefits here - first, I don't have to worry about losing my code if my computer crashes and burns or my external HDD crashes and burns; second, I can share my code with the world and perhaps even get more help when I need it.
Is this a good idea? If so, what are some other project hosts that I should investigate (other than Google Code and SourceForge)?
A: Assembla is awesome.
EDIT: Yes, this is a good idea - I used to use a personal copy of Vault and found it was more than I cared to manage (in case my server went down or hard drive crashed - not only was it painful to worry about losing and backing up data, but the downtime). Of course, it doesn't hurt to have your own backup as well. Cover all your bases!
A: After losing some freelance work to a hard drive crash, I've become keen on the philosophy that "It doesn't exist until it's in source control". As I don't want to necessarily share the source for my projects with the rest of the world, I pay for webhosting (using Dreamhost, who have great deals on basic shared hosting and easy one-click installs for things like Subversion) and store my data that way. They don't claim to be any sort of backup service, but all I really want is a second copy offsite somewhere.
If I do decide to share the code I can always make it public later. Do note that sourceforge does not allow private/personal projects, and Google Code forces you to license your code using an open source license. Both have some limitations on the number of projects you can create (and aren't really intended to store everybody and their brother's personal projects).
Assembla looks pretty slick although it is hard to tell what all you get for free. I'm definitely going to try it out.
There is an extensive list at wikipedia.
A: *
*GitHub is a really great option for git.
*Most of the free, public hosting sites will insist that you license your code with an OSS license (and, possibly, your documentation). That's potentially a different thing than what you're talking about (backups).
*For just backups, you may want to try a for-pay service or even something like mozy.
A: I use Assembla - You can share your code if you want, but you are not required to. That's a big plus to me.
A: Online backup is cheap and easy. Why would you not?
A: I host most of my non-code backups on Amazon's S3 service.
Code goes on a Slicehost virtual server that has automated snapshot backups (daily as well as weekly) and runs Subversion and the Trac web interface to it.
A: Github is a really great hosting service if you use Git; and of course everyone should use Git. The default is free public project hosting, but if your stuff is proprietary (or perhaps embarrassing) you can get private hosting from them for some cost per month.
A: If you want to make your projects in some form public, than a hosting-solution may be useful for you.
I made a listing of project-hosting sites at this question. Of the sites on that list, only Origo also allows you to host a closed-source project. As long as you want to open up your source, you can choose any of them.
A: For my personal projects I use a git repository on a local Fedora Server (that is backed up daily). I .tgz the repository and mysqldb (for bugzilla) and back it up on Carbonite AND a local, redundant hard drive.
I can clone the git repository from any of my other machines into all other environments.
With this you have a backup and version control. I think my system is better than the one I have at work, LOL.
A: As long as you want to publish your personal projects as open source, you have a lot of possibilities to choose from, because there are lots of hosters that provide this.
If you just want to store your code somewhere online, but not share it with the world:
Some hosters also allow private repositories, but the only free one that I know of is Bitbucket (which I use myself for my private and open source projects).
They allow an unlimited number of public and private Mercurial and Git repositories, the only limitation is that no more than five users can access your private repositories (you can have more, but then it's not free anymore).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How exactly do you configure httpOnlyCookies in ASP.NET? Inspired by this CodingHorror article, "Protecting Your Cookies: HttpOnly"
How do you set this property? Somewhere in the web config?
A: If you want to do it in code, use the System.Web.HttpCookie.HttpOnly property.
This is directly from the MSDN docs:
// Create a new HttpCookie.
HttpCookie myHttpCookie = new HttpCookie("LastVisit", DateTime.Now.ToString());
// By default, the HttpOnly property is set to false
// unless specified otherwise in configuration.
myHttpCookie.Name = "MyHttpCookie";
Response.AppendCookie(myHttpCookie);
// Show the name of the cookie.
Response.Write(myHttpCookie.Name);
// Create an HttpOnly cookie.
HttpCookie myHttpOnlyCookie = new HttpCookie("LastVisit", DateTime.Now.ToString());
// Setting the HttpOnly value to true, makes
// this cookie accessible only to ASP.NET.
myHttpOnlyCookie.HttpOnly = true;
myHttpOnlyCookie.Name = "MyHttpOnlyCookie";
Response.AppendCookie(myHttpOnlyCookie);
// Show the name of the HttpOnly cookie.
Response.Write(myHttpOnlyCookie.Name);
Doing it in code allows you to selectively choose which cookies are HttpOnly and which are not.
A: If you're using ASP.NET 2.0 or greater, you can turn it on in the Web.config file. In the <system.web> section, add the following line:
<httpCookies httpOnlyCookies="true"/>
A: Interestingly putting <httpCookies httpOnlyCookies="false"/> doesn't seem to disable httpOnlyCookies in ASP.NET 2.0. Check this article about SessionID and Login Problems With ASP .NET 2.0.
Looks like Microsoft took the decision to not allow you to disable it from the web.config. Check this post on forums.asp.net
A: With props to Rick (second comment down in the blog post mentioned), here's the MSDN article on httpOnlyCookies.
Bottom line is that you just add the following section in your system.web section in your web.config:
<httpCookies domain="" httpOnlyCookies="true|false" requireSSL="true|false" />
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53"
} |
Q: Extending base classes in Python I'm trying to extend some "base" classes in Python:
class xlist (list):
def len(self):
return len(self)
def add(self, *args):
self.extend(args)
return None
class xint (int):
def add(self, value):
self += value
return self
x = xlist([1,2,3])
print x.len() ## >>> 3 ok
print x ## >>> [1,2,3] ok
x.add (4, 5, 6)
print x ## >>> [1,2,3,4,5,6] ok
x = xint(10)
print x ## >>> 10 ok
x.add (2)
print x ## >>> 10 # Not ok (#1)
print type(x) ## >>> <class '__main__.xint'> ok
x += 5
print type(x) ## >>> <type 'int'> # Not ok (#2)
It works fine in the list case because the append method modifies the object "in place", without returning it. But in the int case, the add method doesn't modify the value of the external x variable. I suppose that's fine in the sense that self is a local variable in the add method of the class, but this is preventing me from modifying the initial value assigned to the instance of the class.
Is it possible to extend a class this way or should I define a class property with the base type and map all the needed methods to this property?
A: int is a value type, so each time you do an assignment (e.g. both instances of += above), it doesn't modify the object you have on the heap, but replaces the reference with one to the result of the right-hand side of the assignment (i.e. an int)
list isn't a value type, so it isn't bound by the same rules.
this page has more details on the differences: The Python Language Reference - 3. Data model
IMO, yes, you should define a new class that keeps an int as an instance variable
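A quick demonstration of the rebinding described above (the actual id values will vary from run to run):
n = 10
print id(n)
n += 5
print id(n)    # typically a different id -- the name was rebound

xs = [1, 2, 3]
print id(xs)
xs += [4]      # list.__iadd__ mutates in place
print id(xs)   # same id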
A: Your two xint examples don't work for two different reasons.
The first doesn't work because self += value is equivalent to self = self + value which just reassigns the local variable self to a different object (an integer) but doesn't change the original object. You can't really get this
>>> x = xint(10)
>>> x.add(2)
to work with a subclass of int since integers are immutable.
To get the second one to work you can define an __add__ method, like so:
class xint(int):
def __add__(self, value):
return xint(int.__add__(self, value))
>>> x = xint(10)
>>> type(x)
<class '__main__.xint'>
>>> x += 3
>>> x
13
>>> type(x)
<class '__main__.xint'>
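Note that the other arithmetic operators will still fall back to plain int. A minimal sketch that keeps those results in the subclass as well (just repeating the same pattern for whichever operators you need):
class xint(int):
    def __add__(self, value):
        return xint(int.__add__(self, value))
    def __radd__(self, value):
        # covers the reflected case, e.g. 3 + x
        return xint(int.__add__(self, value))
    def __mul__(self, value):
        return xint(int.__mul__(self, value))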
A: I expanded your xlist class just a bit: you can find all index positions of a number, extend with multiple lists at once, initialize from a string, and iterate through it.
class xlist:
    def __init__(self, alist):
        # accept either a list or a space-separated string of ints
        if isinstance(alist, str):
            self.alist = [int(i) for i in alist.split(' ')]
        else:
            self.alist = alist
    def __iter__(self):
        for item in self.alist:
            yield item
    def len(self):
        return len(self.alist)
    def add(self, *args):
        # accept several lists at once, a single list, a string of ints, or loose values
        if isinstance(args[0], list):
            if len(args) > 1:
                tmp = []
                for i in args:
                    tmp.extend(i)
                args = tmp
            else:
                args = args[0]
        elif isinstance(args[0], str):
            args = [int(i) for i in args[0].split(' ')]
        self.alist.extend(args)
        return None
    def index(self, val):
        # every position at which val occurs
        return [i for i, x in enumerate(self.alist) if x == val]
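A quick usage sketch of the class above:
x = xlist('1 2 3')
x.add([4, 5], [6])      # several lists at once
x.add('3 8')            # or a string of ints
print x.index(3)        # [2, 6] - every position where 3 occurs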
A: Ints are immutable and you can't modify them in place, so you should go with option #2 (because option #1 is impossible without some trickery).
A: I wrote an example of a mutable integer class that implements some basic methods from the list of operator methods. It can print properly, add, subtract, multiply, divide, sort, and compare equality.
If you want it to do everything an int can you'll have to implement more methods.
class MutablePartialInt:
def __init__(self, value):
self.value = value
def _do_relational_method(self, other, method_to_run):
func = getattr(self.value, method_to_run)
if type(other) is MutablePartialInt:
return func(other.value)
else:
return func(other)
def __add__(self, other):
return self._do_relational_method(other, "__add__")
def __sub__(self, other):
return self._do_relational_method(other, "__sub__")
def __mul__(self, other):
return self._do_relational_method(other, "__mul__")
def __truediv__(self, other):
return self._do_relational_method(other, "__truediv__")
def __floordiv__(self, other):
return self._do_relational_method(other, "__floordiv__")
def __eq__(self, other):
return self._do_relational_method(other, "__eq__")
    def __ne__(self, other):
        return self._do_relational_method(other, "__ne__")
def __lt__(self, other):
return self._do_relational_method(other, "__lt__")
def __gt__(self, other):
return self._do_relational_method(other, "__gt__")
def __str__(self):
return str(self.value)
def __repr__(self):
return self.__str__()
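A short usage sketch (using only the methods defined above):
n = MutablePartialInt(10)
n.value += 2                       # mutate in place; every reference now sees 12
print(n)                           # 12
print(n + 3)                       # 15
print(n == MutablePartialInt(12))  # True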
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Converter for VB.NET Code to Linux Platform Exist? I am interested in moving a number of my projects from Visual Studio and Access/Office Basic with a SQL back-end to the Linux world.
Are there any utilities available to move code over to a similar platform on Linux?
A: Here's a link to the Mono Migration Analyzer to get started. It will help you pinpoint Microsoft specific calls, but you'll probably have to do the db conversion and data access layer manually. You may be surprised - mono does have a System.Data.SqlClient namespace so you may not have much work to do.
A: OpenOffice has a Basic interpreter which is largely compatible with VBA. This may help you with your Access applications. The OpenOffice versions should run on both Windows and Linux.
A: There are some flavours of OpenOffice that include native support for VBA. The version included with Ubuntu is one example, and the Novell version for Windows is another. For more details and a list of versions with this feature, see this article on linux.com.
They don't support all features of VBA, but they will reduce your conversion effort.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to get started with PowerShell? I played with one of the early beta versions of PowerShell V1, but haven't used it since it went "gold". What is the best way to get started using PowerShell?
Which version of PowerShell should I be using (V1.0 vs 2.0 CTP's)? What are you using PowerShell for? Are there any tools that make using PowerShell easier (that is, development environments)?
A: I just found this free ebook, linked from the Windows PowerShell blog:
Mastering PowerShell
A: For learning PowerShell, there are a number of great resources
*
*Technet Virtual Labs (Introduction to Windows PowerShell)
*PowerShellCommunity.org - Forums, blogs, script repository
*powershell on irc.freenode.net
*PowerShell podcasts - PowerScripting.net and Get-Scripting.blogspot.com
For IDE style environments, you have PowerShell Analyzer (free) and PowerGUI (free), PowerShell Plus (commercial), PrimalScript (commercial), and Admin Script Editor (commercial).
I use PowerShell for everything that I can. Right now, I'm looking at Psake, a PowerShell based build script environment. I use it for managing my Active Directory, Hyper-V, Twitter, some keyboard automation (hosting PowerShell in a winforms app to grab keystrokes), and a ton of other stuff. Another cool project I have to check out is PSExpect for testing. I also use it for database access - monitoring changes made to rows in a database by applications. It is also integrated into my network monitoring solution.
I am also looking to use PowerShell as a scripting engine for a project I am working on.
EDIT:
If you are just learning PowerShell, I would focus on V1. As you get more comfortable, take a look at the CTP, but too much can change from the CTP to what is actually released as V2 to make that your learning tool.
Version 2 is out and available for XP SP3, Server 2003, Vista, and Server 2008, and is in the box for Win7 and Server 2008 R2. What you learned for V1 will still serve you well, but now I would concentrate on V2, as it has a superior feature set.
Good luck!
A: Count me in with a vote for PowerShell in Action. There are a bunch of blogs out there as well, check out //\O//'s blog, The Huddled Masses, and JB's Powershell (SQL) as well, they go way back with the shell and have gobs of good scripts & snippets to look at.
A: Find a problem you need to solve and sit down and do it with PowerShell until it's fixed.
Don't give in and do it another way. Then find another, and another, etc. You'll take WAY longer at the start, but you'll be building knowledge to use going forward. As well as a script library to pull from for the future. One day you'll turn around and realize you now "know" PowerShell.
It's awesome. :)
A: To answer your questions one by one.
Get v2.0 of the CTP. I have used 1.0 and 2.0 and have not found any stability issues with the later version and it has more functionality.
The best way to get started is to learn three basic commands and start playing with it.
Step 1 - Discover the available commands using Get-Command
To find all of the "get" commands, for example, you just type:
Get-Command get*
To find all of the "set" commands, for example, you just type:
Get-Command set*
Step 2 - Learn how to use each command using Get-Help
To get basic help about the Get-Command commandlet type:
Get-Help Get-Command
To get more information type:
Get-Help Get-Command -full
Step 3 - Discover object properties and methods using Get-Member
Powershell is an object oriented scripting language. Everything is a fully fledged .Net object with properties and methods.
For example to get the properties and methods on the object emitted by the Get-Process commandlet type:
Get-Process | Get-Member
There are a few other concepts that you need to understand like pipes and regular expressions, but those should already be familiar if you have already done some scripting.
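For instance, a small pipeline that ties the three steps together (illustrative only; the property name comes straight from the Get-Member output above):
Get-Process | Where-Object { $_.WorkingSet -gt 50MB } | Sort-Object WorkingSet -Descending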
What am I using it for?
Two things:
*
*Processing log files from a massively distributed grid application. For this it has proven to be incredibly valuable and powerful.
*Quick testing of .Net classes.
A: Check out PowerGUI, a PowerShell GUI and script editor. I don't use it yet, but I saw the sample videos and it looks very good. Also, the site maintains a library with sample scripts.
Here is another excellent PowerShell reference.
A: The Ars Technica tutorial is a bit dated, but very good to get you up-and-running with PowerShell.
I would also second the suggestion to check out PowerGUI.
A: There are a number of PowerShell tools, for example,
*
*PowerGUI
*PowerShell Plus (not free)
*PowerShell in Action is a well-regarded book.
And the Powershell team has a blog.
A: The PowerShell CTP is NOT supported in a production environment and a lot will change between now and the time it ships. I suggest following the many PowerShell blogs (don't forget the PowerScripting podcast). There's no shortage of good books on the topic. If you want to spend a little money, SAPIEN Technologies has some self-paced learning material at www.scriptingoutpost.com. I believe Don Jones has done a series of training videos for CBT Nuggets. You can probably find out more at concentratedtechnology.com.
A: I think getting into the habit of automating small tasks is a great way to train yourself in PowerShell. For example, writing a throwaway script rather than doing an onerous looking bit of text-processing by hand. It may actually take longer the first few times, but as you get quicker and build up a library of useful snippets that you can chain together you can save yourself a lot of time.
A: There are DNRtvs on PowerShell and PowerGUI. There are also .NET Rocks! episodes about these tools.
A: A chap called Guy Thomas does some good introductions to PowerShell.
A: I would start on the fly: just begin working on your real case and search the web or this site for help whenever you don't know what to do. That said, it will certainly be beneficial to spend some time learning the basics first. That is how I got onto PowerShell.
I have some blog posts on PowerShell, especially a recent 3-part series on a real case. Search for davidchuprogramming or go here. Good luck with your PowerShell journey.
A: With regard to the IDE question:
There is a rudimentary IDE which, on my computer at least, is already installed with PowerShell.
It's labeled "Windows PowerShell ISE", and lets you do things like have several console sessions and several script files open simultaneously... one set of tabs for the scripts, one set for the console sessions, so you can click back and forth as needed.
A: PowerGUI was a big help in and of itself. The IntelliSense feature sold me on it, then I found some useful add-ons that were very good.
As far as resources:
Free eBooks:
*
*Windows PowerShell Cookbook
*Mastering PowerShell
*PowerShell A more in-depth look
Introductory Video:
http://powergui.org/entry.jspa?externalID=2278&categoryID=361
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "71"
} |
Q: How do I perform a recursive checkout using ClearCase? I want to check out all files in all subdirectories of a specified folder.
(And it is painful to do this using the GUI, because there is no recursive checkout option).
A: cleartool find somedir -exec "cleartool checkout -nc \"%CLEARCASE_PN%\""
The article "ClearCase: The ten best scripts" might also be helpful
A: Beware: ClearCase is file-centric, not repository-centric (like SVN or CVS).
That means it is rarely a good solution to check out all files (and it can take fairly long with ClearCase ;) )
That being said, the question is perfectly legitimate and I would like to point out another way:
open a cleartool session in the 'specified folder':
c:\MyFolder> cleartool
cleartool> co -c "Reason for massive checkout" .../*
does the trick too. But like aku's answer, it checks out everything: files and directories... and you most likely do not need to check out directories!
cleartool find somedir -type f -exec "cleartool checkout -c \"Reason for massive checkout\" \"%CLEARCASE_PN%\""
would only checkout files...
Now the problem is to check in everything that has changed. This is tricky since often not everything has changed, and ClearCase will trigger an error message when trying to check in an identical file. That means you will need two commands:
ct lsco -r -cvi -fmt "ci -nc \"%n\"\n" | ct
ct lsco -r -cvi -fmt "unco -rm %n\n" | ct
(with 'ct being 'cleartool' : type 'doskey ct=cleartool $*' on Windows to set that alias)
Note that ct ci -nc will check-in with the comment used for the checkout stage.
So it is not a check-in without a comment (despite what the -nc, or "no comment", option might suggest).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Drawing a custom label on a pie chart in Yahoo's Flash Library ASTRA Has anyone looked at Yahoo's ASTRA? It's fairly nifty, but I had some issues creating a custom label for a pie chart. They have an example for a line chart, which overrides an axis's series's label renderer. My solution was to override the myPieChart.dataTipFunction. For data that looks like:
myPieChart.dataProvider =
[ { category: "Groceries", cost: 50 },
{ category: "Transportation", cost: 175} ]
myPieChart.dataField = "cost";
myPieChart.categoryField = "category";
I wrote a function like this:
import com.yahoo.astra.fl.charts.series.*
myPieChart.dataTipFunction =
function (obj:Object, index:int, series:ISeries):String {
return obj.category + "\n$" + obj.cost;
};
There are ceil(2.718281828459045) problems with this:
*
*I'm directly calling the category and cost properties of the data provider. The names are actually configurable when setting up the chart, I'd like to maintain that flexibility.
*The default data tip would show the category, the cost (without a dollar sign), and the percentage it makes up in the pie chart. So here, I've lost the percentage. I just have no idea which property of what would hold that. It might be part of the series.
*I probably only need to override the dataItemRenderer for the cost part of the series, but I don't know how to access it. The documentation is a little ... lacking there.
Normally I would just look at the default implementation of the dataTipFunction, but it's all inside a compiled shim that's part of the components distributed from Yahoo.
Can anyone help me complete this overridden function with percentage information and the flexibility mentioned in point 1?
A: Okay... so no-one's tried Astra, or people just avoid Flash questions.
After a lot of guesswork it turns out I needed to cast the series to a PieSeries and then work with those member functions, as the ISeries was useless on its own.
myPieChart.dataTipFunction =
function (item:Object, index:int, series:ISeries):String {
var oPieSeries:PieSeries = series as PieSeries;
return oPieSeries.itemToCategory(item,index) + "\n$" +
oPieSeries.itemToData(item) + "\n" +
Number(oPieSeries.itemToPercentage(item)).toFixed(2) + "%";
};
A: The Astra components are distributed with the complete source code. Flash CS3 components use compiled shims because otherwise you'd need to manually add the raw source files to your classpath. As a bonus, they also improve compile times because they're already built for you. Look in the "Source" folder in the Astra zip file, and you'll find all the ActionScript classes for the Astra components.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: CreateProcessAsUser vs ShellExecute I need to ShellExecute something as another user. Currently I start a helper process with CreateProcessAsUser that calls ShellExecute, but that seems like too much of a hack (wrong parent process, etc.). Is there a better way to do this?
@PabloG: ImpersonateLoggedOnUser does not work:
HANDLE hTok;
VERIFY(LogonUser("otheruser",0,"password",LOGON32_LOGON_INTERACTIVE,LOGON32_PROVIDER_DEFAULT,&hTok));
VERIFY(ImpersonateLoggedOnUser(hTok));
ShellExecute(0,0,"calc.exe",0,0,SW_SHOW);
RevertToSelf();
CloseHandle(hTok);
will just start calc as the logged in user, not "otheruser"
@1800 INFORMATION: CreateProcess/CreateProcessAsUser is not the same as ShellExecute. With UAC on Vista, CreateProcess is useless when you don't have control over what program the user is executing (CreateProcess will return an error if you give it an exe file with a manifest marked as requireAdministrator).
@Brian R. Bondy: I already know this info (and don't get me wrong, it's good stuff), but it is off topic (IMHO). I am asking for a ShellExecuteAsUser, not about starting processes as another user; I already know how to do that.
A: The solution really depends on what your needs are, and can be pretty complex (thanks largely to Windows Vista). This is probably going to be beyond your need, but this will help others that find this page via search.
1. If you do not need the process to run with a GUI and you do not require elevation
2. If the user you want to run as is already logged into a session
3. If you need to run the process with a GUI, and the user may or may not be logged in
4. If you need to run the process with elevation
Regarding 1:
In Windows Vista there is something called session 0 isolation. All services run in session 0, and you are not supposed to have a GUI in session 0. The first logged-on user is logged into session 1. In previous versions of Windows (pre-Vista), the first logged-on user also ran fully in session 0.
You can run several different processes with different usernames in the same session. You can find a good document about session 0 isolation here.
Since we're dealing with option 1), you don't need a GUI. Therefore you can start your process in session 0.
You'll want a call sequence something like this:
LogonUser, ExpandEnvironmentStringsForUser, GetLogonSID, LoadUserProfile, CreateEnvironmentBlock, CreateProcessAsUser.
Example code for this can be found via any search engine, or via Google code search
Regarding 2: If the user you'd like to run the process as is already logged in, you can simply use: WTSEnumerateSessions, and WTSQuerySessionInformation to get the session ID, and then WTSQueryUserToken to get the user token. From there you can just use the user token in the CreateProcessAsUser Win32 API.
This is a great method because you don't even need to login as the user nor know the user's username/password. I believe this is only possible via a service though running as local system account.
You can get the current session via WTSGetActiveConsoleSessionId.
Regarding 3:
You would follow the same steps as #1, but in addition you would use the STARTUPINFO's lpDesktop field. Set this to winsta0\Default. You will also need to try to use the OpenDesktop Win32 API and if this fails you can CreateDesktop. Before using the station and desktop handles you should use SetSecurityInfo on each of them with SE_WINDOW_OBJECT, and GROUP_SECURITY_INFORMATION | DACL_SECURITY_INFORMATION.
If the user in question later tries to login, he will actually see the running process.
Regarding 4:
This can be done as well, but it requires you to already be running an elevated process. A service running as local system account does run as elevated. I could also only get it to work by having an authenticode signed process that I wanted to start. The process you want to start also must have a manifest file associated with it with the requestedExecutionLevel level="requireAdministrator"
Other notes:
*
*You can set a token's session via SetTokenInformation and TokenSessionId
*You cannot change the session ID of an already running process.
*This whole process would be drastically more simple if Vista was not in the equation.
A: If you need ShellExecute semantics you can feed following:
C:\windows\system32\cmd.exe /k "start <your_target_to_be_ShellExecuted>"
to CreateProcessAsUser and you are done.
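A minimal sketch of that call (error handling omitted; hTok is a primary token obtained as in the question, and myfile.doc is just a stand-in for your target):
// lpCommandLine must be a writable buffer, hence the char array
char cmd[] = "C:\\windows\\system32\\cmd.exe /k \"start myfile.doc\"";
STARTUPINFO si = {0};
PROCESS_INFORMATION pi = {0};
si.cb = sizeof(si);
if (CreateProcessAsUser(hTok, NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
}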
A: You can wrap the ShellExecute between ImpersonateLoggedOnUser / RevertToSelf
links:
ImpersonateLoggedOnUser: http://msdn.microsoft.com/en-us/library/aa378612(VS.85).aspx
RevertToSelf: http://msdn.microsoft.com/en-us/library/aa379317.aspx
sorry, cannot hyperlink URLs with "()"
A: Why don't you just do CreateProcessAsUser specifying the process you want to run?
You may also be able to use SHCreateProcessAsUserW.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Concurrent logins in a web farm I'm really asking this by proxy: another team at work has had a change request from our customer.
The problem is that our customer doesn't want their employees to log in with one user account more than once at the same time; they are getting locked out and sharing logins.
Since this is on a web farm, what would be the best way to tackle this issue?
Wouldn't caching to the database cause performance issues?
A: You could look at using a distributed cache system like memcached
It would solve this problem pretty well (it's MUCH faster than a database), and is also excellent for caching pretty much anything else too
A: It's just a cost of doing business.
Yes, caching to a database is slower than caching on your webserver. But you've got to store that state information in a centralized location, otherwise one webserver isn't going to know what users are logged into another.
Assumption: You're trying to prevent multiple concurrent log-ins by a single user.
A: A database operation at login and logout won't cause a performance problem.
If you are using a caching proxy, that will cause a problem: a user will log out, but won't be able to log back in until the logout reaches the cache.
Your biggest potential problem might be this: if the app/box crashes without a chance for the user to log out, the user's state in the database will remain "logged in".
A: It depends on how the authentication is done. If you already store the last successful login datetime (whatever the backend), you could change the schema to also store a "logged_in" flag, and that won't involve an extra performance cost. (OK, it's not clean at all.)
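A minimal sketch of that idea (table and column names are hypothetical):
-- claim the login atomically; zero rows affected means the account is already in use
UPDATE users
SET logged_in = 1
WHERE user_id = @user_id AND logged_in = 0;
IF @@ROWCOUNT = 0
    RAISERROR('User is already logged in elsewhere.', 16, 1)
You would clear the flag at logout, and as noted above you need some way to reset it after a crash (e.g. a timeout).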
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What's the maximum amount of RAM I can use in a Windows box? Obviously, that's 64-bit windows.
Also, what's the maximum amount of memory a single 64-bit process can use?
I was kind of counting on using it all...
(Yes, I know what I'm doing, please don't tell me that if I need that much RAM i must be doing something wrong)
Also, is this the same for a .Net 2.0 process? Or is there a lower limit for .Net?
A: What version of Windows? It differs from XP to Vista, from home to business versions of Vista, and I would guess again for server.
See here for more info on maximum RAM for different Windows versions.
For Windows Server 2008 Datacenter, MS quotes 2 TB of physical memory.
A: We run Windows boxes with 16 gigs of memory, but that is because we are running multiple VM Ware instances, I presume you mean in a single instance. On Vista it depends upon the edition. It breaks out like this:
Vista Basic: 8 GB
Vista Home Premium: 16 GB
Vista Business/Enterprise/Ultimate: 128+ GB
A: From http://technet.microsoft.com/en-us/library/cc758523.aspx
- Windows Server 2003, 64 bit Datacenter Edition supports physical memory up to 512GB
A single process should be able to use most of it, some will be used by the OS.
The answer from Re0sless is better than mine. The limit is now 2TB, in Datacenter SP2, and 2008.
A: Something we found out recently: with MySQL running on Win32, you can only use up to 2GB per process. On Win64, the memory is not managed as well and a single MySQL instance will run your memory into the ground. Ours used up all 16GB we have. So regarding how much memory 1 64-bit process can use: the answer is however much the OS allows.
A: According to Wikipedia you can have 128 GB of physical RAM in a 64-bit Windows XP computer.
A: This is a Windows Server machine.
As for which edition (Datacenter, Enterprise, etc)... Whatever it takes to give my little .Net Process as much memory as it can.
A: Switch to Linux. You will not have any of these issues and you will get better performance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How does GPS in a mobile phone work exactly? I assume it doesn't connect to anything (other than the satellite, I guess), is this right? Or does it, and does that incur some kind of charge?
A: GPS, the Global Positioning System run by the United States Military, is free for civilian use, though the reality is that we're paying for it with tax dollars.
However, GPS on cell phones is a bit more murky. In general, it won't cost you anything to turn on the GPS in your cell phone, but when you get a location it usually involves the cell phone company in order to get it quickly with little signal, as well as get a location when the satellites aren't visible (since the gov't requires a fix even if the satellites aren't visible for emergency 911 purposes). It uses up some cellular bandwidth. This also means that for phones without a regular GPS receiver, you cannot use the GPS at all if you don't have cell phone service.
For this reason most cell phone companies have the GPS in the phone turned off except for emergency calls and for services they sell you (such as directions).
This particular kind of GPS is called assisted GPS (AGPS), and there are several levels of assistance used.
GPS
A normal GPS receiver listens to a particular frequency for radio signals. Satellites send time coded messages at this frequency. Each satellite has an atomic clock, and sends the current exact time as well.
The GPS receiver figures out which satellites it can hear, and then starts gathering those messages. The messages include time, current satellite positions, and a few other bits of information. The message stream is slow - this is to save power, and also because all the satellites transmit on the same frequency and they're easier to pick out if they go slow. Because of this, and the amount of information needed to operate well, it can take 30-60 seconds to get a location on a regular GPS.
When it knows the position and time code of at least 3 satellites, a GPS receiver can assume it's on the earth's surface and get a good reading. 4 satellites are needed if you aren't on the ground and you want altitude as well.
AGPS
As you saw above, it can take a long time to get a position fix with a normal GPS. There are ways to speed this up, but unless you're carrying an atomic clock with you all the time, or leave the GPS on all the time, then there's always going to be a delay of between 5-60 seconds before you get a location.
In order to save cost, most cell phones share the GPS receiver components with the cellular components, and you can't get a fix and talk at the same time. People don't like that (especially when there's an emergency) so the lowest form of GPS does the following:
*
*Get some information from the cell phone company to feed to the GPS receiver - some of this is gross positioning information based on what cellular towers can 'hear' your phone, so by this time they already phone your location to within a city block or so.
*Switch from cellular to GPS receiver for 0.1 second (or some small, practically unnoticeable period of time) and collect the raw GPS data (no processing on the phone).
*Switch back to the phone mode, and send the raw data to the phone company
*The phone company processes that data (acts as an offline GPS receiver) and send the location back to your phone.
This saves a lot of money on the phone design, but it has a heavy load on cellular bandwidth, and with a lot of requests coming it requires a lot of fast servers. Still, overall it can be cheaper and faster to implement. They are reluctant, however, to release GPS based features on these phones due to this load - so you won't see turn by turn navigation here.
More recent designs include a full GPS chip. They still get data from the phone company - such as current location based on tower positioning, and current satellite locations - this provides sub-1-second fix times. This information is only needed once, and the GPS can keep track of everything after that with very little power. If the cellular network is unavailable, then they can still get a fix after a while. If the GPS satellites aren't visible to the receiver, then they can still get a rough fix from the cellular towers.
But to completely answer your question - it's as free as the phone company lets it be, and so far they do not charge for it at all. I doubt that's going to change in the future. In the higher end phones with a full GPS receiver you may even be able to load your own software and access it, such as with mologogo on a motorola iDen phone - the J2ME development kit is free, and the phone is only $40 (prepaid phone with $5 credit). Unlimited internet is about $10 a month, so for $40 to start and $10 a month you can get an internet tracking system. (Prices circa August 2008)
It's only going to get cheaper and more full featured from here on out...
Re: Google maps and such
Yes, Google maps and all other cell phone mapping systems require a data connection of some sort at varying times during usage. When you move far enough in one direction, for instance, it'll request new tiles from its server. Your average phone doesn't have enough storage to hold a map of the US, nor the processor power to render it nicely. iPhone would be able to if you wanted to use the storage space up with maps, but given that most iPhones have a full time unlimited data plan most users would rather use that space for other things.
A: You must be able to receive signals from at least 3 of the 24-32 satellites out there, and each broadcasts the time from a synchronized atomic clock. The differences in those times as received at any one moment tell you how long each broadcast took to reach you, and thus where you are in relation to the satellites. So, it sort of reads from something, but it doesn't connect to that thing. Note that this doesn't tell you your orientation; many GPSes fake that (and speed) by interpolating data points.
If you don't count the cost of the receiver, it's a free service. Apparently there are higher-resolution services out there that are restricted to military use. Those are likely a fixed cost for a license to decrypt the signals along with a confidentiality agreement.
Now your device may support GPS tracking, in which case it might communicate, say via GPRS, to a database which will store the location the device has found itself to be at, so that multiple devices may be tracked. That would require some kind of connection.
Maps are either stored on the device or received over a connection. Navigation is computed based on those maps' databases. These likely are a licensed item with a cost associated, though if you use a service like Google Maps they have the license with NAVTEQ and others.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: Testing and Managing database versions against code versions As you develop an application, database changes inevitably pop up. The trick, I find, is keeping your database build in step with your code. In the past I have added a build step that executed SQL scripts against the target database, but that is dangerous inasmuch as you could inadvertently add bogus data or worse.
My question is what are the tips and tricks to keep the database in step with the code? What about when you roll back the code? Branching?
A: Version numbers embedded in the database are helpful. You have two choices: embedding values into a table (allows versioning multiple items) that can be queried, or having an explicitly named object (such as a table or somesuch) you can test for.
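A minimal sketch of the table flavor (names are illustrative):
CREATE TABLE schema_version (
    component varchar(50) NOT NULL,
    version int NOT NULL,
    applied_on datetime NOT NULL
);
-- each upgrade script bumps its component's row, so the app can assert the expected version at startup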
When you release to production, do you have a rollback plan in the event of unexpected catastrophe? If you do, is it the application of a schema rollback script? Use your rollback script to rollback the database to a previous code version.
A:
You should be able to create your database from scratch into a known state.
While being able to do so is helpful (especially in the early stages of a new project), many (most?) databases will quickly become far too large for that to be possible. Also, if you have any BLOBs then you're going to have problems generating SQL scripts for your entire database.
I've definitely been interested in some sort of DB versioning system, but I haven't found anything yet. So, instead of a solution, you'll get my vote. :-P
A: You really do want to be able to take a clean machine, get the latest version from source control, build in one step, and run all tests in one step. Making this fast makes you produce good software faster.
Just like external libraries, database configuration must also be in source control.
Note that I'm not saying that all your live database content should be in the same source control, just enough to get to a clean state. (Do back up your database content, though!)
A: Define your schema objects and your reference data in version-controlled text files. For example, you can define the schema in Torque format, and the data in DBUnit format (both use XML). You can then use tools (we wrote our own) to generate the DDL and DML that take you from one version of your app to another. Our tool can take as input either (a) the previous version's schema & data XML files or (b) an existing database, so you are always able to get a database of any state into the correct state.
A: I like the way that Django does it. You build models, and then when you run syncdb it applies the models that you have created. If you add a model you just need to run syncdb again. This would be easy to have your build script do every time you made a push.
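For example, a model sketch along those lines (field names made up):
from django.db import models

class Expense(models.Model):
    # syncdb creates the backing table from this definition
    category = models.CharField(max_length=50)
    cost = models.DecimalField(max_digits=8, decimal_places=2)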
The problem comes when you need to alter a table that is already made. I do not think that syncdb handles that. That would require you to go in and manually add the table and also add a property to the model. You would probably want to version that alter statement. The models would always be under version control though, so if you needed to you could get a db schema up and running on a new box without running the sql scripts. Another problem with this is keeping track of static data that you always want in the db.
Rails migration scripts are pretty nice too.
A DB versioning system would be great, but I don't really know of such a thing.
A:
While being able to do so is helpful (especially in the early stages of a new project), many (most?) databases will quickly become far too large for that to be possible. Also, if you have any BLOBs then you're going to have problems generating SQL scripts for your entire database.
Backups and compression can help you there. Sorry - there's no excuse not to be able to get a good set of data to develop against. Even if it's just a subset.
A: Put your database developments under version control. I recommend to have a look at neXtep designer :
http://www.nextep-softwares.com/wiki
It is a free GPL product which offers a brand new approach to database development and deployment by connecting version information with a SQL generation engine which could automatically compute any upgrade script you need to upgrade any version of your database into another. Any existing database could be version controlled by a reverse synchronization.
It currently supports Oracle, MySql and PostgreSql. DB2 support is under development. It is a full-featured database development environment where you always work on version-controlled elements from a repository. You can publish your updates by simple synchronization during development and you can generate exportable database deliveries which you will be able to execute on any targeted database through a standalone installer which validates the versions, performs structural checks and applies the upgrade scripts.
The IDE also offers you SQL editors, dependency management, support for modular database model components, data model diagrams, SQL clients and much more.
All the documentation and concepts could be found in the wiki.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Best Practices for Managing Linq to SQL Dbml Files? I've just started using Linq to SQL, and I'm wondering if anyone has any best practices they can share for managing dbml files.
*
*How do you keep them up to date with the database?
*Do you have a single dbml file for the entire database, or is it split into multiple logical units?
*How does managing this file work in a team environment?
Any other tips and tricks welcome.
A: The fact that the L2S designer doesn't support syncing with the database structure is a huge limitation in my mind. However, there is an add-in available that provides some re-sync capabilities:
http://www.huagati.com/dbmltools/
Unfortunately, it's no longer free.
A: Since you asked for other tips and tricks for managing DBML...
When DBML files are refreshed from the database, there are certain schema settings which they don't pick up on, such as default column values, forcing you to manually change the setting. This can lead to lost hours every time you refresh the DBML without realizing or remembering where you need to make manual adjustments, and your code starts failing.
To guard against this, one trick is to write a unit test which uses reflection to check the LINQ metadata for those (manual) settings. If the test fails, it gives a descriptive error message, instructing the user to make the proper change to the column properties. It's not a perfect solution, and it might not be convenient if you have many manual settings, but it can help avoid some major pain for yourself and your team.
Here's an example of an nunit test to check that a column is set to auto-generate from the DB.
[Test]
public void TestMetaData()
{
MyObj my_obj = new MyObj()
{
Foo = "bar",
};
Type type = my_obj.GetType();
PropertyInfo prop = type.GetProperty("UpdatedOn");
IEnumerable<ColumnAttribute> info = (IEnumerable<ColumnAttribute>)prop.GetCustomAttributes(typeof(ColumnAttribute), true);
Assert.IsTrue(
info.Any<ColumnAttribute>(x => x.IsDbGenerated == true),
"The DBML file needs to have MyObj.UpdatedOn AutoGenerated == true set. This must be done manually if the DBML for this table gets refreshed from the database."
);
}
A: PLINQO is a set of code generation templates generating LINQ to SQL. It supports syncing with the database and splitting entities into multiple classes along with many other features that make LINQ to SQL easy to use.
Check out the PLINQO site at http://www.plinqo.com as well as the intro videos.
A: Here is a link that provides good information about LINQ to SQL best practices
http://www.a2zmenu.com/LINQ/LINQ%20to%20SQL%20Best%20Practice.aspx
A: Have you looked at SqlMetal? It's officially supported, although not promoted too much. You can use it to build dbmls from the commandline - we've used it as part of a db's continous integration updates (make sure you have really good code separation if you do this though - partial classes are a saviour - as the dbml will get overwritten).
If I recall correctly it doesn't have quite the same features as the model designer in Visual Studio (I think it handles pluralisation differently). There's a good post about it on Ben Hall's blog.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: SQL 2000 'TRY CATCH like' Error Handling This is a sql 2000 database that I am working with.
I have what I call a staging table that is a raw data dump of data, so everything is ntext or nvarchar(255).
I need to cast/convert all of this data into the appropriate data types (ie int, decimal, nvarchar, etc.)
The way I was going to do this was to iterate through all records using a while loop and attempt a CAST on each column of a single record during each iteration. After I visit a particular record, I flag it as processed (bit field).
But how can I log the error when/if it occurs but allow the while loop to continue.
At first I implemented this using a TRY CATCH in a local SQL 2005 instance (to get the project going) and all was working well, but i learned today that the dev & production database that the international DBA's have set up is a SQL 2000 instance so I have to conform.
EDIT: I am using a SSIS package to populate the staging table. I see that now I must revisit that package and implement a script component to handle the conversions. Thanks guys
EDIT: I am doing this on a record by record basis, not a batch insert, so the transaction idea seems like it would be feasible but I'm not sure how to trap @@ERROR and allow the stored procedure to continue.
EDIT: I really like Guy's approach, I am going to implement it this way.
A: What are you using to import the file? DTS has scripting abilities that can be used for data validation. If you're not using DTS, are you using a custom tool? If so, do your validation there.
But I think this is what you're looking for.
http://www.sqlteam.com/article/using-dts-to-automate-a-data-import-process
IF @@Error <> 0
GOTO LABEL
@op: In SSIS, the "red line" from a data import task can redirect bad rows to a separate destination or transform. I haven't played with it in a while, but I hope it helps.
A: Generally I don't like "loop through the record" solutions as they tend to be slow and you end up writing a lot of custom code.
So...
Depending on how many records are in your staging table, you could post process the data with a series of SQL statements that test the columns for correctness and mark any records that fail the test.
i.e.
UPDATE staging_table
SET status_code = 'FAIL_TEST_1'
WHERE status_code IS NULL
AND ISDATE(ntext_column1) = 0;
UPDATE staging_table
SET status_code = 'FAIL_TEST_2'
WHERE status_code IS NULL
AND ISNUMERIC(ntext_column2) = 0;
etc...
Finally
INSERT INTO results_table ( mydate, myprice )
SELECT ntext_column1 AS mydate, ntext_column2 AS myprice
FROM staging_table
WHERE status_code IS NULL;
DELETE FROM staging_table
WHERE status_code IS NULL;
And the staging table has all the errors, that you can export and report out.
A: It looks like you are doomed. See this document.
TL;DR: A data conversion error always causes the whole batch to be aborted - your SQL script will not continue to execute no matter what you do. Transactions won't help. You can't check @@ERROR because execution will already have aborted.
I would first reexamine why you need a staging database full of varchar(255) columns - can whatever fills that database do the conversion?
If not, I guess you'll need to write a program/script to select from the varchar columns, convert, and insert into the prod db.
A: Run each cast in a transaction; after each cast, check @@ERROR, and if it's clear, commit and move on.
A: You could try checking for the data type before casting and actually avoid throwing errors.
You could use functions like:
ISNUMERIC - to check if the data is of a numeric type
ISDATE - to check if it can be cast to DATETIME
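For example, a guarded cast along those lines (column names are made up):
SELECT CASE WHEN ISNUMERIC(raw_cost) = 1
            THEN CAST(raw_cost AS decimal(10, 2))
       END AS cost
FROM staging_table
WHERE ISDATE(raw_date) = 1;
(Be aware that ISNUMERIC is permissive - it accepts things like '$' and scientific notation - so it is a sanity check rather than a strict validation.)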
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Cocoa tips for PHP developers? I'm a PHP developer, and I use the MVC pattern and object-oriented code. I really want to write applications for the iPhone, but to do that I need to know Cocoa, but to do that I need to know Objective-C 2.0, but to do that I need to know C, and to do that I need to know about compiled languages (versus interpreted).
Where should I begin?
Do I really need to begin with plain old "C", as Joel would recommend?
Caveat: I like to produce working widgets, not elegant theories.
A: Yes, you're really best off learning C and then Objective-C. There are some resources that will get you over the C and Objective-C language learning curve:
*
*Uli Kusterer's online book Masters of the Void
*Stephen Kochan's book Programming in Objective-C
And there are some resources that will get you over the framework learning curve:
*
*CocoaLab's online book Become an Xcoder
*Aaron Hillegass' book Cocoa Programming for Mac OS X
Despite what Jeff might say, learning C is important for professional software developers for just this reason. It's sort of a baseline low-level lingua franca that other innovation happens atop. The reason Jeff has been able to get away with not learning C is not because you don't need to know C, but because he learned Pascal which is in many ways isomorphic to C. (It has all the same concepts, including pointers and manual memory management.)
A: Get Cocoa Programming For Mac OS X by Aaron Hillegass. This should get you on your way to Cocoa programming. You can look up C-related programming as things come up.
K&R C Programming Language is the definitive reference that is still applicable today to C programming.
Get the Cocoa book, work though it and if you encounter any snags, just ask your C questions here :)
A: Who reads books these days? I have the 1st edition, I forgot to read it. Go to the iPhone Developer Center. Read examples.
In case you didn't read any of that, click the pretty picture.
A: No need to start with plain C. Start with an excellent book instead: Cocoa Programming for Mac OS X.
A: I think starting with C would be a smart thing to do. After all, Objective-C is C language with some extensions.
To develop in Cocoa you are required to know well how pointers and memory allocation work (there's no garbage collection on the iPhone), plus you will have to use some standard C libraries, because a lot of the frameworks that are used to develop for the iPhone are C libraries not Cocoa libraries. Take for example CoreGraphics, the library you have to use to draw on the screen on the iPhone. That's a C framework, meaning that it is not written in Objective-C.
Of course after learning C to a modest level, you could start reading about Objective-C and Cocoa, and in that case I would start with the Objective-C language specification (link to PDF) and the Aaron Hillegas book on Cocoa.
A: The memory management concepts that are (or were, depending on if you like the whole garbage collection thing) central to the Cocoa frameworks can be a little confusing. This is particularly true for those coming over from languages such as PHP, Python, Ruby, or even Java. Knowing C, or C++ for that matter, put you at a great advantage when learning Objective-C and Cocoa.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How can you make a .net windows forms project look fresh? I'm working on a visual studio 2005 vb.net windows forms project that's been around for several years. It's full of default textboxes, labels, dropdowns, datagrids, datetime pickers -- all the standard stuff. The end result is a very gray, old-looking project.
What would be the best approach to making this project look fresh and snazzy? I'd rather not rewrite the entire solution with all brand-new forms objects, but is that avoidable?
A: One other thing to also check is that your controls have the FlatStyle property set to System instead of Standard.
What this will do is make sure that the app uses the system defaults for radio buttons, standard buttons and the like. This takes all your apps from the flat Win 2000 look and gives them the XP or Vista bling depending on the OS they are running.
A: This isn't so much an "answer" as an opinion.
I tried to jazz up a WinForms project I created back a few years ago by giving the forms a fancy blue gradient background etc, and it looked pretty good on XP. But then on Vista it looked out of place. Taking away any custom painting and reverting the form to "battleship gray" made it look much better IMHO.
I'm seeing a lot of applications (particularly from MS) coming out with custom window chrome etc, and all it does is detract from the nice sense of consistency that Windows gives.
I guess what I'm saying is that you don't need to worry too much about making your application look fashionable. If you keep your colours based on the SystemColors enumeration then Windows can do that for you.
A: I recommend purchasing a good 3rd-party control library - Infragistics and DevExpress are just a couple. Most of these libraries give you the ability to drop in new compatible controls on top of your existing ones - for example, you can replace the default EditBox with an enhanced version. They also give you access to some of the snazzy new UIs such as Ribbon, or the Outlook-style navigator people are always wanting.
The reason I specifically recommend using one of these libraries is that they were designed to be relatively easy to use in existing applications, you get support, a community, and all sorts of upgrade paths/options.
The downside: money.
A:
What would be the best approach to making this project look fresh and snazzy?
IMHO the best thing you can do is make sure the controls are logically ordered, and have ample spacing between them, and add groupboxes / labels / etc where appropriate.
If you try and change the 'sea of gray' that is the default color scheme, your app will just end up looking crap.
A: I was actually just sprucing up a dialog today. A lot of it depends on what kind of application you have, and what OS it is running on. A couple of these tips will certainly go a long way to jazzing things up.
*
*Ensure adequate spacing between controls — don't cram them all together. Space is appealing. You might also try flowing the controls a little differently when you have more space.
*Put in some new 3D and glossy images. You can put a big yellow exclamation mark on a custom warning dialog. Replace old toolbar buttons with new ones. Two libraries I have used and like are GlyFX and IconExperience. You can find free ones too. Ideally get a graphic artist to make some custom ones for the specific actions your application does to fill in between the common ones you use (make sure they all go together). That will go a long way to making it look fancy.
*Try a different font. Tahoma is a good one. Often times the default font is MS Sans Serif. You can do better. Avoid Times New Roman and Comic Sans though. Also avoid large blocks of bold — use it sparingly. Generally you want all your fonts the same, and only use different fonts sparingly to set certain bits of text apart.
*Add subdued colors to certain controls. This is a tricky one. You always want to use subdued colors, nothing bright or stark usually, but the colors should indicate something, or if you have a grid you can use it to show logical grouping. This is a slippery slope. Be aware that users might change their system colors, which will change how your colors look. Ideally give them a few color themes, or the ability to change colors.
*Instead of thinking eye-candy, think usability. Make the most common course of action obvious. Mark Miller of DevExpress has a great talk on the Science of User Interface Design. I actually have a video of it and might be able to post it online with a little clean-up.
*Invest in a few good quality 3rd party controls. Replacing all your controls could be a pain, but if you are using the default grids for example, you would really jazz it up with a good grid from DevExpress or some other component vendor. Be aware that different vendors have different philosophies for how their components are used, so swapping them out can be a bit of a pain. Start small to test the waters, and then try something really complicated before you commit to replacing all of them. The only thing worse than ugly grids is ugly grids mixed with pretty grids. Consistency is golden!
*You also might look at replacing your old tool bars and menus with a Ribbon Control like Microsoft did in Office 2007. Then everyone will think you are really uptown! Again only replacing key components and UI elements without thinking you need to revamp the whole UI.
*Of course pay attention to the basics like tab order, etc. Consistency, consistency, consistency.
Some apps lend themselves to full blown skinning, while others don't. Generally you don't want anything flashy that gets used a lot.
A: This depends on how the existing "gray old looking" project is structured in terms of code. For example, is data access code separated from the UI in a Data Access Layer, is the business logic in a Business Logic Layer? If yes, then cleaning the UI for a snazzy look should be relatively simple.
If everything is all there in the "Button Click" event, then a rewrite is the only way in my humble opinion as otherwise it will just be too time consuming trying to work with the existing code base.
Cheers
A: You can subclass all the default controls and override their appearance. Admittedly, you will have to go through the entire project and change all references of TextBox to MyTextBox, but all of the default properties and methods will still work. The same cannot be guaranteed if you go with a 3rd party vendor. The other advantage of this approach is you can pick one control at a time and perform an incremental upgrade of the application.
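For example, a minimal sketch of such a subclass (VB.NET; the styling choices are just placeholders):
Imports System.Drawing
Imports System.Windows.Forms

Public Class MyTextBox
    Inherits TextBox

    Public Sub New()
        ' one place to restyle every textbox in the application
        Me.BorderStyle = BorderStyle.FixedSingle
        Me.BackColor = Color.WhiteSmoke
        Me.Font = New Font("Tahoma", 8.25F)
    End Sub
End Class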
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: My (Java/Swing) MouseListener isn't listening, help me figure out why So I've got a JPanel implementing MouseListener and MouseMotionListener:
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
public class DisplayArea extends JPanel implements MouseListener, MouseMotionListener {
public DisplayArea(Rectangle bounds, Display display) {
setLayout(null);
setBounds(bounds);
setOpaque(false);
setPreferredSize(new Dimension(bounds.width, bounds.height));
this.display = display;
}
public void paintComponent(Graphics g) {
Graphics2D g2 = (Graphics2D)g;
if (display.getControlPanel().Antialiasing()) {
g2.addRenderingHints(new RenderingHints(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON));
}
g2.setColor(Color.white);
g2.fillRect(0, 0, getWidth(), getHeight());
}
public void mousePressed(MouseEvent event) {
System.out.println("mousePressed()");
mx1 = event.getX();
my1 = event.getY();
}
public void mouseReleased(MouseEvent event) {
System.out.println("mouseReleased()");
mx2 = event.getX();
my2 = event.getY();
int mode = display.getControlPanel().Mode();
switch (mode) {
case ControlPanel.LINE:
System.out.println("Line from " + mx1 + ", " + my1 + " to " + mx2 + ", " + my2 + ".");
}
}
public void mouseEntered(MouseEvent event) {
System.out.println("mouseEntered()");
}
public void mouseExited(MouseEvent event) {
System.out.println("mouseExited()");
}
public void mouseClicked(MouseEvent event) {
System.out.println("mouseClicked()");
}
public void mouseMoved(MouseEvent event) {
System.out.println("mouseMoved()");
}
public void mouseDragged(MouseEvent event) {
System.out.println("mouseDragged()");
}
private Display display = null;
private int mx1 = -1;
private int my1 = -1;
private int mx2 = -1;
private int my2 = -1;
}
The trouble is, none of these mouse functions are ever called. DisplayArea is created like this:
da = new DisplayArea(new Rectangle(CONTROL_WIDTH, 0, DISPLAY_WIDTH, DISPLAY_HEIGHT), this);
I am not really a Java programmer (this is part of an assignment), but I can't see anything glaringly obvious. Can someone smarter than I see anything?
A: I don't see anywhere in the code where you call addMouseListener(this) or addMouseMotionListener(this) for the DisplayArea in order for it to subscribe to those events.
A: I don't see any code here to register to the mouse listeners. You have to call addMouseListener(this) and addMouseMotionListener(this) on the DisplayArea.
A: Implementing MouseListener and MouseMotionListener just allows the DisplayArea class to listen to the mouse events of some yet-to-be-specified Swing component. You have to explicitly define what it should be listening to. So I suppose you could add something like this to the constructor:
this.addMouseListener(this);
this.addMouseMotionListener(this);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Change templates in Xcode How would I change the initial templates created by Xcode when creating a new Cocoa Class.
I am referring to the comments and class name created when using Xcode's new class wizard.
A: As of Xcode 7
*
*File templates: ~/Library/Developer/Xcode/Templates/File Templates
*Project templates: ~/Library/Developer/Xcode/Templates/Project Templates
For example, if I place an Empty Application.xctemplate in ~/Library/Developer/Xcode/Templates/Project Templates/Application, it will appear in the template dialog.
A: In Xcode 5.0.1:
1. Go to Applications
2. Right-click the Xcode application
3. Choose "Show Package Contents"
4. Go to Contents/Developer/Library/Xcode/Templates
A: This may be useful for somebody:
As of Xcode 6 through Xcode 9 the file templates are in:
/Applications/Xcode.app/Contents/Developer/Library/Xcode/Templates/File Templates/Source
Update:
As @carbo18 noted, it's probably better to create the directory ~/Library/Developer/Xcode/Templates/File Templates/Source and put your custom templates there. The best starting point may be one of the templates in /Applications/Xcode.app/Contents/Developer/Library/Xcode/Templates/File Templates/Source
A: Xcode uses template files for file and project templates and does variable expansion in both at creation time.
Xcode 3.0 templates can be found in [Dev Installation]/Library/Xcode/, likely /Developer/Library/Xcode. If you want to modify these templates or add your own, use the following directories to save your new/modified templates so that they are not wiped out by future Developer Tool upgrades:
*
*File templates:
~/Library/Developer/Shared/Xcode/File
Templates/
*Target templates:
~/Library/Developer/Shared/Xcode/Target
Templates/
*Project templates:
~/Library/Developer/Shared/Xcode/Project
Templates/
I think that you can also use the /Library/Developer/Shared/Xcode/[File|Target|Project] Templates/ directory for templates shared by all users.
If you just want to change the MyCompanyName in the templates, the following command line will do the trick:
defaults write com.apple.Xcode PBXCustomTemplateMacroDefinitions '{ "ORGANIZATIONNAME" = "NewCompanyName";}'
A good tutorial on writing file templates is here [MacResearch.org].
A: If you are simply looking to change the Author Name and Organization see this answer.
It's much easier than modifying the templates.
A: In Xcode 4 and Xcode 5 the user file templates can be placed at:
~/Library/Developer/Xcode/Templates/[Category]
[Category] can be used to categorize your templates (choose any name you like)
If the folder doesn't exist already, create it!
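For example, from a terminal (MyCategory is a placeholder name, and the copied template is just a starting point to modify):
mkdir -p ~/Library/Developer/Xcode/Templates/MyCategory
cp -R /path/to/Custom.xctemplate ~/Library/Developer/Xcode/Templates/MyCategory/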
A: You wouldn't change the existing templates. In other words, don't modify anything under the /Developer hierarchy (or wherever you installed your developer tools).
Instead, clone the templates you want to have customized variants of. Then change their names and the information in them. Finally, put them in the appropriate location in your account's Library/Application Support folder, specifically:
*
*File templates: ~/Library/Application Support/Developer/Shared/Xcode/File Templates/
*Target templates: ~/Library/Application Support/Developer/Shared/Xcode/Target Templates/
*Project templates: ~/Library/Application Support/Developer/Shared/Xcode/Project Templates/
That way they won't be overwritten when you install new developer tools, and you can tweak them to your heart's content.
Update
For newer versions of Xcode the updated path will be:
~/Library/Developer/Xcode/Templates/File Templates/Source
A: For Xcode 4.4, none of the previously mentioned methods work. This gist provides a partial hacky solution. Please fork and enhance if you know a better way.
A: Right-click on Xcode and select Show Package Contents, then go to Contents/Developer/Library/Xcode/Templates. Here you can find the templates for all programming languages.
A: In Xcode 4.5, right-click the project, click Show File Inspector, then change the organization name in the file inspector's second tab (the Project Document group)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "113"
} |
Q: CLR Profiler - Attaching to existing process I would like to use something like CLR Profiles on .Net 2.0 to see what objects are taking more space in the heap at any given time (of an ASP.Net worker process).
However, the CLR Profiler only lets me START an app, not attach to an existing one. I assume this is because it tracks allocations and GC too, but I'm not very interested in that. I would just like something that takes a snapshot of the current state of the heap, and shows me what is there, how many objects of each kind there are, and how many bytes total are being used by each object type.
Any ideas?
A: *
*Attach a debugger
cdb -p <pid>
*
*load .net debugger extensions
.loadby sos mscorwks
*
*dump the heap in a format the CLRProfiler understands
!TraverseHeap heap.txt
*
*detach debugger
qd
*
*load heap.txt in the CLRProfiler app
A: .Net Memory Profiler is exactly what you need. It's not free but there's a trial version. Actually I used the trial to find leaks on our last project. One notable feature is:
Easily identify memory leaks by
collecting and comparing snapshots of
.NET memory
I think this is what you're looking for.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How should I build a good (web) API I'm going to build an API for a web app and I'm interested in what people can suggest as good practices.
I'm already planning to make it versioned (version 1 can only control certain aspects of the system, version 2 could control more, but this may need a change in the way authentication is performed that would be incompatible with version 1), and the authentication will be distinct from the standard username/password people use to log in (if someone does use a malicious tool it won't open them up to full impersonation, just whatever the api allows).
Does anyone have further ideas, or examples of sites with particularly good APIs you have used?
A: 1) Bake the version number directly into the URL rather than passing it as a parameter, since that gives you complete freedom to change the organization of your API namespace with each version bump.
2) Keep your URL rewriting rules (if any) as simple/lean as possible (but no simpler), while making your URLs as beautiful as possible (but no more).
3) Always look for the best HTTP status code you can find for each response (and don't forget about 202 and 207, for example).
4) Implement fascist parameter validation logic, and informative error messages.
5) Use HTTP request headers where appropriate instead of parameters (like Accept, for example, to allow clients to specify the desired data format of the response).
6) Organize your "nouns" in such a way that the URLs used by different client audiences are separated near the "root" of your URL tree (this makes it easier to enforce different authentication mechanisms for those different audiences if needed, or even map different portions of your URL tree to different servers).
7) If you're serving regular web pages off the same domain as your APIs and use the same authentication credentials, require an X-Requested-With header in your API requests so as to avoid XSRF vulnerabilities. (An example request tying several of these points together follows below.)
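To make points 1, 5 and 7 concrete, a request to such an API might look like this (a sketch; the host, path and resource names are illustrative):
GET /api/v1/inventory/42 HTTP/1.1
Host: api.example.com
Accept: application/json
X-Requested-With: XMLHttpRequest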
A: I would take a look at proven APIs:
*
*YouTube API
*Twitter API
There's a lot of argument about whether these APIs are "good" but I think their success is demonstrated, and they're all easy to use.
A: Use REST.
RESTful web services architecture is easy to implement and uses the strengths and semantics of HTTP for what they were intended. It's resource-oriented, just like the web itself.
Amazon Web Services, Google and many others offer REST APIs to interact with their products.
A: Use REST.
Read up on standards for APIs, or copy the ideas from one of the popular ones.
Be careful when authenticating users.
Start very very simple.
Build a site that uses your API (even if it's not useful) to check things work. Perhaps you could build a mobile version of the site or something that forces you to use the API in a lot of depth.
A: Read the RESTful Web Services book, which give you a good overview of how to use REST in practice, and get to up to speed quickly enough to get started now, with some confidence. This is more useful than just looking at an existing API, because it also discusses design choices and trade-offs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: JSP debugging in IntelliJ IDEA Does anyone know how to debug JSP in IntelliJ IDEA?
When I set breakpoint in my JSP files, those breakpoints never seem to take effect. The debugger never hits them. IDEA seems to think that the breakpoints are valid. I do see a red dot placed to the left of the line where I place my breakpoint.
I read in IntelliJ forum in this post that JSP files need to be under web-inf for debugging to work.
But then I also read that JSP files placed under web-inf won't be directly accessible by the user.
I am not sure who's really right.
A: Anyway, you need to launch Tomcat from within IDEA, not attach to a remote Tomcat.
A: For JSP debugging in IntelliJ there are some configurations that must be in order. The fact that IntelliJ always allows you to add a breakpoint on a JSP line does not necessarily imply that you've configured JSP debugging. In the following I refer to the IntelliJ 8 configuration; for previous versions you will need to do similar operations, as the concepts are the same.
In order to enable JSP debugging you must do two steps: set a web application configuration in your project and add a web application server configuration.
Web application configuration: in order to have JSP debugging, you must have a "web" facet in your project structure, pointing to the correct web.xml file. Depending on the kind of web application structure you are using, the facet may be detected automatically by IntelliJ (go and check what it has done anyway) or you may have to add it manually. Remember, in the "Java EE build settings" tab, to enable "Create web facet exploded directory"; if you don't want duplication, a trick is to enable it and point it at your already existing directory.
(Web) Application server: Go to "edit configurations"; there you have to add an application server configuration, rather than launching the web server as an ordinary application. In this way IntelliJ will be able to intercept JSP calls. In the list of application servers, you should have the default one, Tomcat. Be sure to have a local Tomcat installation before you do this, and point to it when adding the web application server. The last trick is going to the "Deployment" tab and selecting as "Deployment source" the same facet that you configured in the previous step.
The same configuration works if you want to use another web application server, I tested it with the latest Caucho Resin releases and debugging works fine (it didn’t with the previous Intellij and Resin combinations).
If you don’t see Tomcat in the list of available application servers to add, check the plugins in the general Intellij settings pane: in the latest releases, more and more functionality has become “pluggable”, and even very basic functions may be disabled; this plugin is called “Tomcat integration”.
Finally, it is surely not true that JSP files need to be under WEB-INF to be under debugging.
A: For remote JSP debugging (which also applies to localhost) you'll need to install the JSR45 support plugin. Please note this feature is only supported in the Ultimate edition of IntelliJ, not the community edition.
*
*Go to Preferences > Plugins, search for the JSR45 plugin, and
enable it.
*Create a run configuration: Run > Run Configuration > click the + button, and pick JSR45 Compatible Server, and then in the dialog that opens, select Remote, and set server host and port. Setting Application Server: Generic should work fine.
*Make sure you set the correct port in Startup/Configuration > Debug.
*Open the module settings (F3 on the project folder), and add a Web Facet under Facets, and under Web Resource Directories specify your JSP root folder.
*Click the Configuration... button, and select the folders with the beans, classes and libraries that your JSPs depend on.
Now JSP breakpoints should work, provided that you started your server with the proper debug arguments.
If you have a maven project with auto-import enabled then you might want to disable auto-import because every time the auto-import is triggered your library settings will be reset.
Also see:
*
*https://www.jetbrains.com/help/idea/run-debug-configuration-jsr45-compatible-server.html
A: Please make sure that suppressSmap is not enabled in your Tomcat's conf/web.xml, as JSR45 support is required by IntelliJ's debugger.
It should look like this:
<init-param>
<param-name>suppressSmap</param-name>
<param-value>false</param-value>
</init-param>
From https://tomcat.apache.org/tomcat-7.0-doc/jasper-howto.html
suppressSmap - Should the generation of SMAP info for JSR45 debugging be suppressed? true or false, default false.
A: If you are using the Intellij debugger you can get the value of an individual attribute by putting a breakpoint inside JSP and evaluating the expression this.jspContext.request.getAttribute("attributeName").
Note that this may return a Java Object type, and you may have to cast it to the correct type. Also, if you launch a remote Tomcat, IDEA won't hit any breakpoints, so you need to launch Tomcat in debug mode from inside IDEA.
A: For the second part of your question ("jsp files placed under web-inf won't be directly accessible by user") that is correct. To allow users to access JSP files in the WEB-INF folder servlet and servlet-mapping entries need to be made in the web.xml file for each JSP page.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54"
} |
Q: WordPress MediaWiki integration On the other end of the spectrum, I would be happy if I could install a wiki and share the login credentials between WordPress and the wiki. I hacked MediaWiki a while ago to share logins with another site (in ASP Classic) via session cookies, and it was a pain to do and even worse to maintain. Ideally, I would like to find a plug-in or someone who knows a more elegant solution.
A: The tutorial WordPress, bbPress & MediaWiki should get you on the right track to integrating MediaWiki into your WordPress install. It's certainly going to be a lot easier than hacking WordPress to have wiki features, especially with the sort of granular permissions you're describing.
A: WPMW, a solution for integrating a MediaWiki within a WordPress installation, might help.
A: Both MediaWiki and Wordpress support OpenID:
http://www.wordpress.org/extend/plugins/openid/
http://www.mediawiki.org/wiki/Extension:OpenID
Though, I think for automatic logins (after you log in to one, you automatically log in to the other) you would need to look into implementing checkid_immediate
http://www.openid.net/specs/openid-authentication-2_0.html#anchor28
A: Another solution is described in The CUNY Academic Commons Announces WPMu-MediaWiki Single Sign-on. It just creates something that uses the WordPress login as the master.
A: My company uses WordPress and MediaWiki internally and we use HTTP_AUTH access control to create a "single sign on". As we add more applications, we simply integrate them into the HTTP_AUTH system where practical. For security, you can run HTTP_AUTH over SSL. The basic steps are:
Configure the .htaccess to specify the authentication type. We use MySQL in production but you could have a simple htpasswd file.
In the WordPress directory's .htaccess file add the following:
<Files wp-login.php>
AuthType Basic
AuthName "Restricted Access"
AuthUserFile /some/path/to/htpasswd
Require valid-user
</Files>
In the WordPress wp-admin/ directory's .htaccess add the following:
AuthType Basic
AuthName "Restricted Access"
AuthUserFile /some/path/to/htpasswd
Require valid-user
In the MediaWiki directory's .htaccess file add the following:
AuthType Basic
AuthName "Restricted Access"
AuthUserFile /some/path/to/htpasswd
Then install the HttpAuth extension for MediaWiki and the HTTP Authentication plugin for WordPress and configure. We had to make some slight modifications to the MediaWiki extension as our hosting environment does not provide mod_php but if you have mod_php it will work out of the box.
Note that our environment is a private intranet so everyone is authenticated. The above .htaccess files will work for publicly viewable blogs but some additional tweaking may be required for the MediaWiki .htaccess depending on whether you want everyone to be required to be authenticated or not and if the site is publicly available.
A: Have a look at Wikiful, a WordPress plugin that bridges MediaWiki and WordPress. That might do the trick for you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: XML attribute vs XML element At work we are being asked to create XML files to pass data to another offline application that will then create a second XML file to pass back in order to update some of our data. During the process we have been discussing the structure of the XML file with the other application's team.
The sample I came up with is essentially something like:
<INVENTORY>
<ITEM serialNumber="something" location="something" barcode="something">
<TYPE modelNumber="something" vendor="something"/>
</ITEM>
</INVENTORY>
The other team said that this was not industry standard and that attributes should only be used for meta data. They suggested:
<INVENTORY>
<ITEM>
<SERIALNUMBER>something</SERIALNUMBER>
<LOCATION>something</LOCATION>
<BARCODE>something</BARCODE>
<TYPE>
<MODELNUMBER>something</MODELNUMBER>
<VENDOR>something</VENDOR>
</TYPE>
</ITEM>
</INVENTORY>
The reason I suggested the first is that the size of the file created is much smaller. There will be roughly 80000 items in the file during transfer. Their suggestion in reality turns out to be three times larger than the one I suggested. I searched for the mysterious "Industry Standard" that was mentioned, but the closest I could find was that XML attributes should only be used for meta data, and even there the debate was about what actually counts as meta data.
After the long winded explanation (sorry) how do you determine what is meta data, and when designing the structure of an XML document how should you decide when to use an attribute or an element?
A: When in doubt, KISS -- why mix attributes and elements when you don't have a clear reason to use attributes. If you later decide to define an XSD, that will end up being cleaner as well. Then if you even later decide to generate a class structure from your XSD, that will be simpler as well.
A: There is no universal answer to this question (I was heavily involved in the creation of the W3C spec). XML can be used for many purposes - text-like documents, data and declarative code are three of the most common. I also use it a lot as a data model. There are aspects of these applications where attributes are more common and others where child elements are more natural. There are also features of various tools that make it easier or harder to use them.
XHTML is one area where attributes have a natural use (e.g. in class='foo'). Attributes have no order and this may make it easier for some people to develop tools. OTOH attributes are harder to type without a schema. I also find namespaced attributes (foo:bar="zork") are often harder to manage in various toolsets. But have a look at some of the W3C languages to see the mixture that is common. SVG, XSLT, XSD, MathML are some examples of well-known languages and all have a rich supply of attributes and elements. Some languages even allow more-than-one-way to do it, e.g.
<foo title="bar"/>
or
<foo>
<title>bar</title>
</foo>
(Note that these are NOT equivalent syntactically and require explicit support in processing tools.)
My advice would be to have a look at common practice in the area closest to your application and also consider what toolsets you may wish to apply.
Finally make sure that you differentiate namespaces from attributes. Some XML systems (e.g. Linq) represent namespaces as attributes in the API. IMO this is ugly and potentially confusing.
A: Others have covered how to differentiate between attributes from elements but from a more general perspective putting everything in attributes because it makes the resulting XML smaller is wrong.
XML is not designed to be compact but to be portable and human readable. If you want to decrease the size of the data in transit then use something else (such as google's protocol buffers).
A: Some of the problems with attributes are:
*
*attributes cannot contain multiple values (child elements can)
*attributes are not easily expandable (for future changes)
*attributes cannot describe structures (child elements can)
*attributes are more difficult to manipulate by program code
*attribute values are not easy to test against a DTD
If you use attributes as containers for data, you end up with documents that are difficult to read and maintain. Try to use elements to describe data. Use attributes only to provide information that is not relevant to the data.
Don't end up like this (this is not how XML should be used):
<note day="12" month="11" year="2002"
to="Tove" to2="John" from="Jani" heading="Reminder"
body="Don't forget me this weekend!">
</note>
Source: http://www.w3schools.com/xml/xml_dtd_el_vs_attr.asp
A: Both methods for storing an object's properties are perfectly valid. You should base the decision on pragmatic considerations. Try answering the following questions:
*
*Which representation leads to faster data parsing/generation?
*Which representation leads to faster data transfer?
*Does readability matter?
...
A: The million dollar question!
First off, don't worry too much about performance now. You will be amazed at how quickly an optimized XML parser will rip through your XML. More importantly, what is your design for the future: as the XML evolves, how will you maintain loose coupling and interoperability?
More concretely, you can make the content model of an element more complex, but it's harder to extend an attribute.
A: Use elements for data and attributes for meta data (data about the element's data).
If an element is showing up as a predicate in your select strings, you have a good sign that it should be an attribute. Likewise if an attribute never is used as a predicate, then maybe it is not useful meta data.
Remember that XML is supposed to be machine readable not human readable and for large documents XML compresses very well.
A: "XML" stands for "eXtensible Markup Language". A markup language implies that the data is text, marked up with metadata about structure or formatting.
XHTML is an example of XML used the way it was intended:
<p><span lang="es">El Jefe</span> insists that you
<em class="urgent">MUST</em> complete your project by Friday.</p>
Here, the distinction between elements and attributes is clear. Text elements are displayed in the browser, and attributes are instructions about how to display them (although there are a few tags that don't work that way).
Confusion arises when XML is used not as a markup language, but as a data serialization language, in which the distinction between "data" and "metadata" is more vague. So the choice between elements and attributes is more-or-less arbitrary except for things that can't be represented with attributes (see feenster's answer).
A: It is arguable either way, but your colleagues are right in the sense that the XML should be used for "markup" or meta-data around the actual data. For your part, you are right in that it's sometimes hard to decide where the line between meta-data and data is when modeling your domain in XML. In practice, what I do is pretend that anything in the markup is hidden, and only the data outside the markup is readable. Does the document make some sense in that way?
XML is notoriously bulky. For transport and storage, compression is highly recommended if you can afford the processing power. XML compresses well, sometimes phenomenally well, because of its repetitiveness. I've had large files compress to less than 5% of their original size.
Another point to bolster your position is that while the other team is arguing about style (in that most XML tools will handle an all-attribute document just as easily as an all-#PCDATA document) you are arguing practicalities. While style can't be totally ignored, technical merits should carry more weight.
A: It's largely a matter of preference. I use Elements for grouping and attributes for data where possible as I see this as more compact than the alternative.
For example I prefer.....
<?xml version="1.0" encoding="utf-8"?>
<data>
<people>
<person name="Rory" surname="Becker" age="30" />
<person name="Travis" surname="Illig" age="32" />
<person name="Scott" surname="Hanselman" age="34" />
</people>
</data>
...Instead of....
<?xml version="1.0" encoding="utf-8"?>
<data>
<people>
<person>
<name>Rory</name>
<surname>Becker</surname>
<age>30</age>
</person>
<person>
<name>Travis</name>
<surname>Illig</surname>
<age>32</age>
</person>
<person>
<name>Scott</name>
<surname>Hanselman</surname>
<age>34</age>
</person>
</people>
</data>
However, if I have data which does not fit easily inside of, say, 20-30 characters, or contains many quotes or other characters that need escaping, then I'd say it's time to break out the elements... possibly with CDATA blocks.
<?xml version="1.0" encoding="utf-8"?>
<data>
<people>
<person name="Rory" surname="Becker" age="30" >
<comment>A programmer who's interested in all sorts of misc stuff. His blog can be found at http://rorybecker.blogspot.com and he's on twitter as @RoryBecker</comment>
</person>
<person name="Travis" surname="Illig" age="32" >
<comment>A cool guy who has helped me out with all sorts of SVN information</comment>
</person>
<person name="Scott" surname="Hanselman" age="34" >
<comment>Scott works for MS and has a great podcast available at http://www.hanselminutes.com </comment>
</person>
</people>
</data>
A: How about taking advantage of our hard-earned object-orientation intuition? I usually find it straightforward to decide what is an object and what is an attribute of the object, or which object it refers to.
Whatever intuitively makes sense as an object fits in as an element. Its attributes (or properties) become attributes of these elements in XML, or child elements with attributes.
I think for simpler cases, like in the example, the object-orientation analogy works well to figure out what is an element and what is an attribute of an element.
A: XML Element vs XML Attribute
XML is all about agreement. First defer to any existing XML schemas or established conventions within your community or industry.
If you are truly in a situation to define your schema from the ground up, here are some general considerations that should inform the element vs attribute decision:
<versus>
<element attribute="Meta content">
Content
</element>
<element attribute="Flat">
<parent>
<child>Hierarchical</child>
</parent>
</element>
<element attribute="Unordered">
<ol>
<li>Has</li>
<li>order</li>
</ol>
</element>
<element attribute="Must copy to reuse">
Can reference to re-use
</element>
<element attribute="For software">
For humans
</element>
<element attribute="Extreme use leads to micro-parsing">
Extreme use leads to document bloat
</element>
<element attribute="Unique names">
Unique or non-unique names
</element>
<element attribute="SAX parse: read first">
SAX parse: read later
</element>
<element attribute="DTD: default value">
DTD: no default value
</element>
</versus>
A: It may depend on your usage. XML that is used to represent structured data generated from a database may work well with field values ultimately placed as attributes.
However XML used as a message transport would often be better using more elements.
For example lets say we had this XML as proposed in the answer:-
<INVENTORY>
<ITEM serialNumber="something" barcode="something">
<LOCATION>XYX</LOCATION>
<TYPE modelNumber="something">
<VENDOR>YYZ</VENDOR>
</TYPE>
</ITEM>
</INVENTORY>
Now we want to send the ITEM element to a device to print the barcode, however there is a choice of encoding types. How do we represent the encoding type required? Suddenly we realise, somewhat belatedly, that the barcode wasn't a single atomic value but rather may be qualified with the encoding required when printed.
<ITEM serialNumber="something">
<barcode encoding="Code39">something</barcode>
<LOCATION>XYX</LOCATION>
<TYPE modelNumber="something">
<VENDOR>YYZ</VENDOR>
</TYPE>
</ITEM>
The point is that unless you're building some kind of XSD or DTD along with a namespace to fix the structure in stone, you may be best served leaving your options open.
IMO XML is at its most useful when it can be flexed without breaking existing code using it.
A: Just a couple of corrections to some bad info:
@John Ballinger: Attributes can contain any character data. < > & " ' need to be escaped to &lt; &gt; &amp; &quot; and &apos;, respectively. If you use an XML library, it will take care of that for you.
Hell, an attribute can contain binary data such as an image, if you really want, just by base64-encoding it and making it a data: URL.
@feenster: Attributes can contain space-separated multiple items in the case of IDS or NAMES, which would include numbers. Nitpicky, but this can end up saving space.
Using attributes can keep XML competitive with JSON. See Fat Markup: Trimming the Fat Markup Myth one calorie at a time.
A: I use this rule of thumb:
*
*An Attribute is something that is self-contained, i.e., a color, an ID, a name.
*An Element is something that does or could have attributes of its own or contain other elements.
So yours is close. I would have done something like:
EDIT: Updated the original example based on feedback below.
<ITEM serialNumber="something">
<BARCODE encoding="Code39">something</BARCODE>
<LOCATION>XYX</LOCATION>
<TYPE modelNumber="something">
<VENDOR>YYZ</VENDOR>
</TYPE>
</ITEM>
A: I use the following guidelines in my schema design with regards to attributes vs. elements:
*
*Use elements for long running text (usually those of string or
normalizedString types)
*Do not use an attribute if there is grouping of two values (e.g.
eventStartDate and eventEndDate) for an element. In the previous example,
there should be a new element for "event" which may contain the startDate and
endDate attributes.
*Business Date, DateTime and numbers (e.g. counts, amount and rate) should be
elements.
*Non-business time elements such as last updated, expires on should be
attributes.
*Non-business numbers such as hash codes and indices should be attributes.
*Use elements if the type will be complex.
*Use attributes if the value is a simple type and does not repeat.
*xml:id and xml:lang must be attributes referencing the XML schema
*Prefer attributes when technically possible.
The preference for attributes is it provides the following:
*
*unique (the attribute cannot appear multiple times)
*order does not matter
*the above properties are inheritable (this is something that the "all" content model does not support in the current schema language)
*bonus is they are less verbose and use up less bandwidth, but that's not really a reason to prefer attributes over elements.
I added "when technically possible" because there are times when the use of attributes is not possible, for example attribute set choices. Requiring either (startDate and endDate) or (startTS and endTS) is not possible with the current schema language.
If XML Schema starts allowing the "all" content model to be restricted or extended, then I would probably drop it.
A: This is very clear in HTML, where the difference between attributes and markup can be clearly seen:
*
*All data is between markup
*Attributes are used to characterize this data (e.g. formats)
If you just have pure data as XML, there is a less clear difference. Data could stand between markup or as attributes.
=> Most data should stand between markup.
If you want to use attributes here: you could divide data into two categories, data and "meta data", where meta data is not part of the record you want to present, but things like "format version", "created date", etc.
<customer format="">
<name></name>
...
</customer>
One could also say: "Use attributes to characterize the tag, use tags to provide data itself."
A: I am always surprised by the results of these kinds of discussions. To me there is a very simple rule for deciding whether data belongs in an attribute or as content and that is whether the data has navigable sub-structure.
So for example, non-markup text always belongs in attributes. Always.
Lists belong in sub-structure or content. Text which may over time include embedded structured sub-content belongs in content. (In my experience there is relatively little of this - text with markup - when using XML for data storage or exchange.)
XML schema written this way is concise.
Whenever I see cases like <car><make>Ford</make><color>Red</color></car>, I think to myself "gee, did the author think that there were going to be sub-elements within the make element?" <car make="Ford" color="Red" /> is significantly more readable, and there's no question about how whitespace would be handled, etc.
Given just the whitespace handling rules, I believe this was the clear intent of the XML designers.
A: The clear and unambiguous definition of an XML element is everything from (including) the element's start tag to (including) the element's end tag.
The below example is an element with text and a child element. The element's name is This_Is_An_Element. Its contents are the open and close tags and all the stuff in between, including any attributes, child elements, etc. And sub_element is also an element, but it has no contents aside from its tag.
<This_Is_An_Element>and this is clear text <sub_element/> etc. </This_Is_An_Element>
And, an attribute is member of an element. Here, This_Is_An_Element has an attribute, WithAnAttribute. And that attribute's value is, Attribute's Value. This attribute is part of the element, This_Is_An_Element.
<This_Is_An_Element WithAnAttribute="Attribute's Value">and this is clear text <sub_element/> etc. </This_Is_An_Element>
A: I agree with feenster. Stay away from attributes if you can. Elements are evolution friendly and more interoperable between web service toolkits. You'd never find these toolkits serializing your request/response messages using attributes. This also makes sense since our messages are data (not metadata) for a web service toolkit.
A: Attributes can easily become difficult to manage over time, trust me. I always stay away from them personally. Elements are far more explicit and readable/usable by both parsers and users.
The only time I've ever used them was to define the file extension of an asset URL:
<image type="gif">wank.gif</image> ...etc etc
I guess if you know 100% that the attribute will not need to be expanded you could use them, but how often do you know that?
<image>
<url>wank.gif</url>
<fileType>gif</fileType>
</image>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "268"
} |
Q: How do I add a MIME type to .htaccess? I would like to add the following MIME type to a site run by Apache:
<mime-mapping>
<extension>jnlp</extension>
<mime-type>application/x-java-jnlp-file</mime-type>
</mime-mapping>
That is the Tomcat format.
I'm on a shared host, so I can only create an .htaccess file. Would someone please specify the complete contents of such a file?
A: You should be able to just add this line:
AddType application/x-java-jnlp-file .jnlp
A: AddType application/x-java-jnlp-file .jnlp
Note that you might not actually be allowed to do that.
See also the documentation of the AddType directive and the .htaccess howto.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How can I retrieve a list of parameters from a stored procedure in SQL Server Using C# and System.Data.SqlClient, is there a way to retrieve a list of parameters that belong to a stored procedure on a SQL Server before I actually execute it?
I have a "multi-environment" scenario where there are multiple versions of the same database schema. Examples of environments might be "Development", "Staging", & "Production". "Development" is going to have one version of the stored procedure and "Staging" is going to have another.
All I want to do is validate that a parameter is going to be there before passing it a value and calling the stored procedure. Avoiding that SqlException rather than having to catch it is a plus for me.
Joshua
A: You can use SqlCommandBuilder.DeriveParameters() (see SqlCommandBuilder.DeriveParameters - Get Parameter Information for a Stored Procedure - ADO.NET Tutorials) or there's this way which isn't as elegant.
A: Although its not exactly what you want, here's some sample code that uses the SqlConnection.GetSchema() method to return all the stored procedures associated with a database, and then subsequently all the parameter names and types for each stored procedure. The example below just loads this into variables. Note that this also returns all the "system" stored procedures, which might not be desirable.
Steve
public void LoadProcedureInfo()
{
SqlConnection connection = new SqlConnection();
ConnectionStringSettings settings = ConfigurationManager.ConnectionStrings["ConnectionString"];
connection.ConnectionString = settings.ConnectionString;
connection.Open();
DataTable procedureDataTable = connection.GetSchema("Procedures");
DataColumn procedureDataColumn = procedureDataTable.Columns["ROUTINE_NAME"];
if (procedureDataColumn != null)
{
foreach (DataRow row in procedureDataTable.Rows)
{
String procedureName = row[procedureDataColumn].ToString();
DataTable parmsDataTable = connection.GetSchema("ProcedureParameters", new string[] { null, null, procedureName });
DataColumn parmNameDataColumn = parmsDataTable.Columns["PARAMETER_NAME"];
DataColumn parmTypeDataColumn = parmsDataTable.Columns["DATA_TYPE"];
foreach (DataRow parmRow in parmsDataTable.Rows)
{
string parmName = parmRow[parmNameDataColumn].ToString();
string parmType = parmRow[parmTypeDataColumn].ToString();
}
}
}
}
A: SqlCommandBuilder.DeriveParameters(command)
This statement does what I need it to.
Here is a full code sample for the way I solved this problem.
Public Sub GetLogEntriesForApplication(ByVal settings As FilterSettings, _
Optional ByVal RowGovernor As Integer = -1)
Dim command As New SqlCommand("GetApplicationActions", _
New SqlConnection(m_environment.LoggingDatabaseConnectionString))
Dim adapter As New SqlDataAdapter(command)
Using command.Connection
With command
.Connection.Open()
.CommandType = CommandType.StoredProcedure
SqlCommandBuilder.DeriveParameters(command)
With .Parameters
If settings.FilterOnLoggingLevel Then
If .Contains("@loggingLevel") Then
.Item("@loggingLevel").Value = settings.LoggingLevel
End If
End If
If settings.FilterOnApplicationID Then
If .Contains("@applicationID") Then
.Item("@applicationID").Value = settings.ApplicationID
End If
End If
If settings.FilterOnCreatedDate Then
If .Contains("@startDate") Then
.Item("@startDate").Value = settings.CreatedDate.Ticks
End If
End If
If settings.FilterOnEndDate Then
If .Contains("@endDate") Then
.Item("@endDate").Value = settings.EndDate.Ticks
End If
End If
If settings.FilterOnSuccess Then
If .Contains("@success") Then
.Item("@success").Value = settings.Success
End If
End If
If settings.FilterOnProcess Then
If settings.Process > -1 Then
If .Contains("@process") Then
.Item("@process").Value = settings.Process
End If
End If
End If
If RowGovernor > -1 Then
If .Contains("@topRows") Then
.Item("@topRows").Value = RowGovernor
End If
End If
End With
End With
adapter.TableMappings.Clear()
adapter.TableMappings.Add("Table", "ApplicationActions")
adapter.TableMappings.Add("Table1", "Milestones")
LogEntries.Clear()
Milestones.Clear()
adapter.Fill(m_logEntryData)
End Using
End Sub
A: You can use the SqlCommandBuilder object, and call the DeriveParameters method.
Basically you need to pass it a command, that is setup to call your stored proc, and it will hit the DB to discover the parameters, and create the appropriate parameters in the Parameters property of the SqlCommand
EDIT: You're all too fast!!
A: You want the SqlCommandBuilder.DeriveParameters(SqlCommand) method. Note that it requires an additional round trip to the database, so it is a somewhat significant performance hit. You should consider caching the results.
An example call:
using (SqlConnection conn = new SqlConnection(CONNSTRING))
using (SqlCommand cmd = new SqlCommand("StoredProc", conn)) {
cmd.CommandType = CommandType.StoredProcedure;
SqlCommandBuilder.DeriveParameters(cmd);
cmd.Parameters["param1"].Value = "12345";
// ....
}
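Since derived parameters only change when the stored procedure itself does, a minimal caching sketch could look like this (the names are illustrative, and the dictionary below is not thread-safe, so guard it with a lock in multi-threaded code):
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

static class ParameterCache
{
    // One derived parameter set per procedure name.
    private static readonly Dictionary<string, SqlParameter[]> cache =
        new Dictionary<string, SqlParameter[]>();

    // conn must already be open; DeriveParameters requires an open connection.
    public static SqlParameter[] GetParameters(SqlConnection conn, string procName)
    {
        SqlParameter[] derived;
        if (!cache.TryGetValue(procName, out derived))
        {
            using (SqlCommand cmd = new SqlCommand(procName, conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                SqlCommandBuilder.DeriveParameters(cmd); // the expensive round trip
                derived = new SqlParameter[cmd.Parameters.Count];
                cmd.Parameters.CopyTo(derived, 0);
                cmd.Parameters.Clear(); // detach the parameters from the command
                cache[procName] = derived;
            }
        }
        // A SqlParameter can belong to only one command, so hand out clones.
        SqlParameter[] clones = new SqlParameter[derived.Length];
        for (int i = 0; i < derived.Length; i++)
            clones[i] = (SqlParameter)((ICloneable)derived[i]).Clone();
        return clones;
    }
}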
A: Mark has the best implementation of DeriveParameters. As he said, make sure you cache like in this tutorial.
However, I think this is a dangerous way of solving your original problem of database sproc versioning. If you are going to change the signature of a procedure by adding or removing parameters, you should do one of the following:
*
*Code in a backwards-compatible manner by using defaults (for new params) or by simply ignoring a param (for deleted params). This ensures that your client code can always call any version of your stored procedure.
*Explicitly version the procedure by name (so you will have my_proc and my_proc_v2). This ensures that your client code and sprocs stay in sync.
Relying on DeriveParameters to validate what version of the sproc you're using seems like the wrong tool for the job, IMHO.
A: All of these ADO.NET solutions are asking the code library to query the database's metadata on your behalf. If you are going to take that performance hit anyhow, maybe you should just write some helper functions that call something like
SELECT COUNT(*) FROM INFORMATION_SCHEMA.PARAMETERS
WHERE SPECIFIC_NAME = 'MyProc' AND PARAMETER_NAME = '@myParam'
(where the procedure and parameter names are placeholders)
Or maybe even generate your parameters based on the param list you get back. This technique will work with multiple versions of MS SQL and sometimes other ANSI SQL databases.
A: I have been using DeriveParameters with .NET 1.1 and 2.0 since a couple of years now, and worked like a charm every time.
Now I'm working on my first assignment with .NET 3.5, and just found and ugly surprise: DeriveParameters is creating all parameters with SqlDbType "Variant", instead proper SqlDbTypes. This is creating a SqlException when trying to execute SPs with numeric parameters, because SQL Server 2005 says that sql-variant types cant be implictily converted to int (or smallint, or numeric) values.
I just tested the same code with .NET CF 2.0 and SQL Server 2000, and worked as expected, assigning the correct SqlDbType to each parameters.
I had tested .NET 2.0 apps against SQL Server 2005 Databases, so is not a SQL Server related issue, so it has to be something related with .NET 3.5
Any ideas?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Any advice for speeding up the compile time in Flex Builder 3? I run Flex Builder 3 on a mac and as my project grows - the compile time gets longer and longer and longer. I am using some SWC's and there is a fair amount of code but it shouldn't take minutes to build and crash daily should it?
A: There's no need to use mxmlc on the command line just to be able to add compiler flags. Right click your project in the Flex Navigator, select Properties and then Flex Compiler in the dialog that appears. There you can add any extra compiler flags.
Not sure that there's very much to do though, more code means more compile time, that's just the way it is. If you're not doing a release build (or whatever it's called in Flex Builder) it's unlikely that your compiler settings include optimize to begin with. Better choices to try would be -incremental (which only recompiles the parts that have changed) and -keep-generated-actionscript (which stops the compiler from deleting the ActionScript files it has generated from your application's MXML files).
I very much prefer using mxmlc on the command line (by way of Ant) compared to Flex Builder. Although I don't think that the latter compiles any slower, it feels more sluggish in every way. Using Ant also makes it possible to do more than just compilation when building, and conditional compilation (only compile a SWF or SWC if the source code has actually changed). Check out a blog post of mine for more info on that.
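If you go the Ant route, a minimal compile target might look like this (a sketch; flexTasks.jar ships in the SDK's ant/lib directory, and the FLEX_HOME property and file paths here are placeholders):
<taskdef resource="flexTasks.tasks" classpath="${FLEX_HOME}/ant/lib/flexTasks.jar"/>
<target name="compile">
    <mxmlc file="src/Main.mxml" output="bin/Main.swf" incremental="true">
        <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml"/>
    </mxmlc>
</target>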
What you could try is the Flex Compiler Shell, another command line tool that can speed things up. Basically it tries to keep as much as possible in memory between builds, so no need to wait for things like the JVM starting up (the Flex compiler is a Java application). On the other hand this is sort of what Flex Builder does anyway.
A: In addition to the suggestions already mentioned, close any projects that you have open that you are not using.
Rich click on the Project in the Navigator view and select "Close Unrelated Projects".
Depending on how many projects you have open, this can lead to a significant improvements in compile time, as well as all around performance.
mike chambers
[email protected]
A: Slow compile time is most often caused by having large numbers of embedded resources ([Embed] or @Embed).
Option 2 in this article might help you: http://www.rogue-development.com/blog2/2007/11/slow-flex-builder-compile-and-refresh-solution-modules/
A: I created a RAM disk for my workspace and it gives up to 10% better compile times. Not much, but something.
A: You want at least 4 gigs on your computer if possible, and make sure to override the default memory settings that eclipse/flexbuilder gives to the application.
If you're not sure how to do this, you can find the Flex Builder app in /Applications, right-click and choose "Show Package Contents". Then go into the Contents folder and edit the eclipse.ini file. Edit that file to have memory settings of at least:
-vmargs
-Xms768m
-Xmx768m
-XX:PermSize=128m
-XX:MaxPermSize=128m
It's also worthwhile to go into the eclipse/flexbuilder preferences and to check the "Show heap status" box under Windows->Preferences->General (This is in eclipse with the FB plugin, I'm assuming it's also there for standalone FB).
This shows the current memory in the lower right of the window and has a little trash icon so you can force garbage collection.
I'd also suggest turning off automatic building of the project when your files change (you can force a build with cmd-B).
We had a huge project with quite a few modules files and performance in FlexBuilder 3 was decent with these steps.
A: First of all, comments on some of the response:
*
*There is no need to explicitly specify -incremental in Flex Builder because it uses incremental compilation by default.
*-keep-generated-actionscript is a performance killer because it instructs the compiler to write out the AS3 code generated for MXML components in the middle of the compilation. File I/O in the middle of a compilation means unnecessary pauses and low CPU utilization.
*-optimize slows down linking because it instructs the linker to produce smaller SWFs. Note that -optimize=true|false doesn't have any effect on building SWCs because SWCs are libraries and have to be unoptimized.
*I rarely mess with JVM settings because the JVM knows its job well and tunes itself quite well at runtime. Most people make matters worse by setting various GC tuning parameters. That said, there are 3 settings most people understand and set correctly for their usage:
-Xmx (max heap size)
-server or -client (HotSpot Server or Client VM)
-XX:+UseSerialGC or -XX:+UseParallelGC (or other non-serial GC)
-server consistently outperforms -client by about 30% when running the Flex compiler.
-XX:+UseParallelGC turns on the parallel garbage collector. It is ideal for multicore computers when the machine still has CPU cycles to spare. (For command-line builds, see the jvm.config sketch below.)
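If you compile with the command-line tools, these flags go in the SDK's bin/jvm.config (a sketch; the path and the heap size are assumptions about your setup):
# in flex_sdk/bin/jvm.config
java.args=-Xmx512m -server -XX:+UseParallelGC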
You may also want to check out HellFire Compiler Daemon (http://bytecode-workshop.com/). It uses multiple processor cores to compile multiple Flex applications at the same time. You can also run the compiler on a second machine via sockets (assuming that your second machine has faster CPUs and more memory).
In my opinion, use more modules than libraries and use HFCD.
Hope this helps.
-Clement
A: Go to Project->Properties->Flex Applications. All of the applications listed are compiled each time (even though you have a default set). If you remove everything but the default (don't worry, it won't delete the actual files), it only compiles the default app. This resulted in a significant speed up for me. If you change your default app, it is ADDED to the Flex Applications list - adding to your compile time. You will need to maintain this list to get the quickest compile.
A: I always disable "automatic compile" for Flex. It compiles too much, takes too long, and so interrupts my work.
If you have many different project files that all need to be recompiled, but you also have other projects open and don't want to close them every time you do a build, you can also use Eclipse Working Sets.
Unfortunately, the default Flex Navigator does not support working sets. But you can open the Package Explorer with Window / Show View / .... Click on the little white downward arrow at the top right and select Top Level Elements: Working Sets. You can then add Working Sets (aka groups of projects). Each project needs to be in at least one working set ("Other Projects" being the default), but can be in several.
Now with Project / Build Working Set / ... you can instruct Eclipse to build all the projects in this working set, but none of the others. This is especially useful if you suspect your project references are sometimes broken - otherwise building the 'topmost' project should trigger subsequent builds automatically.
A: As Clement said, use the HellFire Compiler Daemon. If you have multiple modules and more CPU cores on your machine it can compile them in parallel. Another option is to use IntelliJ (the commercial version) which offers the same feature.
A: SDK 4.x.x introduced a silly bug (see the Adobe bug system, issue FB-27440) which causes projects with SVN or CVS metadata to compile much slower than with SDK 3.x.x. For how it can be fixed, see here.
A: You may want to explore the command-line compiler found in the Flex SDK, mxmlc. As I recall, Flex Builder 3 seems to hide all the compiler details, but perhaps there are arguments you can append that will help you speed up the compilation.
For example, you may want to set optimize=false, which will skip the step of optimizing the bytecode (perhaps reducing compilation time). This of course comes at the price of performance and file size of the actual application.
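For instance (a sketch; Main.mxml stands in for your application's root file):
mxmlc -optimize=false -incremental=true -output Main.swf Main.mxml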
More documentation on mxmlc can be found at: http://livedocs.adobe.com/flex/3/html/compilers_13.html.
Good luck!
A: I don't use Flex Builder, but I use the Flex SDK compiler every day, and I was wasting tons of time waiting for the MXMLC compiler to do its job until I found Flex Compiler Shell:
http://blog.zarate.tv/2008/12/07/theres-something-called-flex-compiler-shell/
Although in theory Flex Builder already uses these optimizations, it might be worth checking.
A: You can use WORKING SETS to compile just a set of your components that are part of the application that you are changing and not the whole project
http://livedocs.adobe.com/flex/3/html/help.html?content=build_6.html
A: Usually the first build takes the longest, and then it's pretty quick after that. That's using Vista x64 w/ core 2 duo.
Otherwise, I am nearly certain an Intel Core i7 Extreme Edition 965 3.2GHz upgrade processor would speed your Flex building up nicely .. :) :) :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How do I delete 1 file from a revision in SVN? One of my co-workers checked in some files in SVN and one of the files has a password in it. The password has been removed from the file and a new version checked in but the password is obviously still in the repository if we look at the revision history and go to that revision. (We're using TortoiseSVN as the client.)
So how do I securely delete that single file from the repository in SVN?
A: It isn't pretty: How do I completely remove a file from the repository's history?
A:
I can't seem to find any revision history now - however, it could just be that I'm not looking in the right place.
You can see it by looking at the folder history, which will give you the revision where the file was still there, and thus you'll be able to recover the confidential file. So it's a bad solution.
A: Your password is still there (svn cat file@2342, where 2342 is a revision in which the password was still in the file).
You can svnadmin dump your repos to a file, search and replace your password with "ultrasecret", svnadmin create a new repos and svnadmin load the modified dump into that new repos. Be aware of binary data in your dump, so use a proper editor/sed.
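A sketch of that sequence (paths and passwords are placeholders; note the replacement should be the same length as the original, or the dump's Content-length headers will no longer match and the load will fail):
svnadmin dump /path/to/repos > repos.dump
sed -i 's/oldpassword1/ultrasecret1/g' repos.dump
svnadmin create /path/to/new-repos
svnadmin load /path/to/new-repos < repos.dump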
A: If it's the last revision (HEAD) you can (BACKING UP your repo beforehand) delete that revision's files in db\revs and db\revprops and then run the following Python script to fix what revision your repo thinks HEAD is.
e.g. if HEAD is 522 and the password was committed in 520, you'd have to delete revisions 520, 521 and 522.
(This script shouldn't be necessary once SVN obliterate is implemented)
(I didn't write this script, I got it from here)
#!/usr/bin/python
def dec_to_36(dec):
key = '0123456789abcdefghijklmnopqrstuvwxyz'
result = ''
while 1:
div = dec / 36
mod = dec % 36
dec = div
result = key[mod] + result
if dec == 0:
break
return result
import os, re, sys
repo_path = sys.argv[1]
rev_path = os.path.join(repo_path, 'db', 'revs')
current_path = os.path.join(repo_path, 'db', 'current')
id_re = re.compile(r'^id:\ ([a-z0-9]+)\.([a-z0-9]+)\.r([0-9]+).*')
max_node_id = 0
max_copy_id = 0
max_rev_id = 0
for rev in os.listdir(rev_path):
f = open(os.path.join(rev_path, rev), 'r')
for line in f:
m = id_re.match(line)
if m:
node_id = int(m.group(1), 36)
copy_id = int(m.group(2), 36)
rev_id = int(m.group(3), 10)
if copy_id > max_copy_id:
max_copy_id = copy_id
if node_id > max_node_id:
max_node_id = node_id
if rev_id > max_rev_id:
max_rev_id = rev_id
f = open(current_path, 'w+b')
f.write("%d %s %s\n" % (max_rev_id, dec_to_36(max_node_id+1),
dec_to_36(max_copy_id+1)))
f.close()
A: link to subversion FAQ entry on this
A: Maybe you should change your production password to avoid the svn problem altogether.
A: That seemed to work. So what I did was:
*
*Copy the file to another folder.
*Do a TortoiseSVN delete from the current folder followed by a commit.
*Copy the file back in to the folder.
*Add the file using TortoiseSVN and commit again.
I can't seem to find any revision history now - however, it could just be that I'm not looking in the right place.
So a modified question would now be, how can I find the revision history of the file that was deleted and then resubmitted to SVN?
(BTW I apologize for not asking the question more accurately earlier as I never mentioned that one of the options was to obliterate all revision history as it hadn't occurred to me.)
A: I'm not sure. You could always create a new file and copy the latest revision into that, wiping out prior revision history.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Windows 2003 Scheduled Task Cmdlet (v 1.0) Does anyone know of a PowerShell cmdlet out there for automating the task scheduler in XP/2003? If you've ever tried to work with schtasks you know it's pretty painful.
A: Ok, Pablo has sparked my interest in saying that the scheduler is accessible via COM.
In PowerShell you can do this:
$svc = new-object -com Schedule.Service
... and that gives you a handle to the task scheduler. You can see what members it has using:
$svc | get-member
One of its methods is NewTask, so I'd start there.
Edit: Some more info here. It's a VBScript example but it'll give you the gist.
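For what it's worth, here is a minimal sketch of poking at that COM API - note that Schedule.Service is the Task Scheduler 2.0 interface, so this assumes Vista/Server 2008 rather than XP/2003, and the folder/task below are just exploratory:
$svc = new-object -com Schedule.Service
$svc.Connect()                    # connect to the local task scheduler
$folder = $svc.GetFolder('\')     # root of the task folder hierarchy
$task = $svc.NewTask(0)           # the argument is reserved and must be 0
$task | get-member                # explore the empty task definition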
A: You don't need PowerShell to automate the Task Scheduler; you can use the SCHTASKS command in XP.
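For example, something along these lines creates a daily task (the task name and path are made up, and the exact /st time format varies between Windows versions):
schtasks /create /tn "NightlyBuild" /tr "C:\scripts\build.cmd" /sc daily /st 02:00:00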
According to Wikipedia, the Task Scheduler 2.0 (Vista and Server 2008) is accessible via COM.
A: This is a good article (be sure to read the other linked article in it) that discusses looking at the scheduled tasks on remote machines. It is not exactly what you were asking for but it should get you headed in the right direction.
A: Not "native" PowerShell, but if you're running powershell.exe as an administrator then you should have access to the "at" command, which you can use to schedule tasks.
A: @slipsec: I don't have access to a 2003 server to try, but googling "2003 server" SCHTASKS turns up links related to questions about it. Perhaps you misspelled the command?
Like Matt says, you can also use AT, but in this case you don't have access to the scheduled tasks via the Control Panel, only via the AT command.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How can I use `scp` to deploy a website's `.htaccess` file? I am currently using the following command to upload my site content:
scp -r web/* [email protected]:site.com/
This works great except that the .htaccess file is not sent. Presumably, this is because it's hidden.
I have tried adding a second line to send the file explicitly:
scp -r web/.htaccess [email protected]:site.com/.htaccess
This works great except now I have to enter my password twice.
Any thoughts on how to make this deploy with only 1 or 0 entries of my password?
A: Just combine the two commands:
scp -r web/* web/.htaccess [email protected]:site.com/
If you want 0 entries of your password you can set up public key authentication for ssh/scp.
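Roughly (assuming OpenSSH on the client; if ssh-copy-id isn't available, append your ~/.ssh/id_rsa.pub to the server's ~/.ssh/authorized_keys by hand):
ssh-keygen -t rsa       # generate a key pair; accept the defaults
ssh-copy-id user@host   # install the public key on the server (use your own login)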
A: Some background info: the * wildcard does not match so-called "dot-files" (i.e. files whose name begins with a dot).
Some shells allow you to set an option so that * will match dot-files; however, doing that is asking for a lot of pain: now * will also match . (the current directory) and .. (the parent directory), which is usually not what is intended and can be quite surprising! (rm -rf * deleting the parent directory is probably not the best way to start a day ...)
A: A word of caution - don't attempt to match dotted files (like .htaccess) with .* - this inconveniently also matches .., and would result in copying all the files on the path to the root directory. I did this once (with rm, no less!) and I had to rebuild the server because I'd messed with /var.
@jwmittag:
I just did a test on Ubuntu and .* matches when I use cp. Here's an example:
root@krash:/# mkdir a
root@krash:/# mkdir b
root@krash:/# mkdir a/c
root@krash:/# touch a/d
root@krash:/# touch a/c/e
root@krash:/# cp -r a/c/.* b
cp: will not create hard link `b/c' to directory `b/.'
root@krash:/# ls b
d e
If .* did not match .., then d shouldn't be in b.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What are some useful TextMate features? I noticed that many people here use TextMate for coding on OS X. I've recently started using it, and although I like its minimalistic interface, it makes it harder to stumble upon cool features if you don't know what you're looking for.
So, what feature have you found most helpful for coding (mainly in Python)? Are there any third-party bundles I should know about, besides what's included?
A: I mention some in a review on Boagworld; I find the snippets, the project manager, columnar editing (hold down Option while selecting, or press it after you've made a selection) and the CSS scopes for syntax most useful.
A: I like the integrated HTML/XML Tidy. Cmd-shift-H is your friend.
Also, nice integration with a variety of scp/sftp clients.
A: My favourite two features are auto-completion (bound to ⎋ [esc]), and column editing (bound to ⌥ [alt]) both of these things save me quite a lot of time, and are definitely 'robot ninjas'.
The book linked above is also a really useful into to the power of TextMate, although it doesn't specifically mention python.
A: Don't forget "Drag commands".
They give you the ability to drag, say, an image into a blog.html document and will then upload it to the proper folder and insert the markup for you.
Here is another example of how you can expand further on drag commands if you pair TM up with QuickSilver.
(Disclaimer: I wrote the blog post I linked to there. I still think it's cool though.)
A: It is worth noting here that there is a Windows alternative to TextMate called E Text Editor. It does pretty much everything TextMate does (apart from macros, but the author is working on this, I think), and even - shock, horror - does some things better, such as the superb bundles editor, the bundles manager, and the branching undo history. Update: and now there's Snippet Pipes.
So, not exactly a useful feature of TextMate as such, but very useful to know if you're a fan of TextMate and you have to use Windows for whatever reason.
A: The ease of snippet creation.
It's trivial to create new snippets that can accomplish a lot using replacements, tabbing order, and regex substitutions. Quickly assigning these to the tab key for specific languages makes me more productive. And makes me worry about code bloat. :-)
A: For me the best features are:
*
*Projects - I know every IDE under the sun has this, but TextMate makes this useful for all sorts of editing and text processing tasks, and moreover makes navigating around these projects easy without ever lifting your hands from the keyboard. This is huge for Rails or Grails projects or large programming projects with many modules.
*The excellent syntax highlighting and 'snippets' for myriad languages and tools
*The excellent scripting language support (being able to evaluate chunks of Ruby and the like with a single key chord)
*The built-in Blogging bundle is superb. I now use TextMate exclusively for all my blog posts.
*Columnar editing
*The ability to use just about any language or tool to extend TextMate: Ruby, Perl, shell, name your poison.
*An excellent mix of great Aqua GUI support and excellent command line support through the mate and commands, for instance making it easy and pleasant to use TextMate as your default editor for your SCM.
A: Don't neglect the 'mate' command line tool. You can use it to pipe output into TextMate, so if you do the following...
diff file1.py file2.py | mate
...it will not only open in TextMate, but it is smart enough to know that you're looking at a diff and highlight lines on screen.
TextMate's SVN integration is great; it also seems to have bundles for some other version control systems as well.
Add GetBundle to browse the bundle repository. I found the jQuery bundle through it and it's very handy.
As others have mentioned, rolling your own bundle for frequently used snippets is very helpful. If you have some snippets that are specific to a project or framework, you might want to prefix all of them with a common letter to keep the namespace tidy.
A: Using snippets to expand into large, repetitive blocks of code and then using the tab key to move through and only edit the pieces I need to without having to use the mouse or arrow keys.
A: Holding down option while dragging allows you to highlight a block of text. If you type while the highlight is active, your keystrokes appear on multiple lines.
A: Being able to write simple commands in any scripting language and bind them to a context-specific hotkey.
A: The Navigation menu commands Go to File (Command + T) and Go to Symbol (Command + Shift + T) are both extremely helpful.
Go to File, which works when you have a project open, lets you type any part of the file name to see only files that match what you've typed.
Go to Symbol has the same type-to-filter interface, but operates on what I'd call the basic block elements of your document. For example, if you're editing a class, Go to Symbol works on the method names, but in a CSS document, you'll be searching on your selectors. It's pretty awesome.
A: It's nice and lightweight and has all of the macros built-in for Ruby and let's you run Ruby code, or any other code for that matter just with a keystroke.
A: Check out ProjectPlus, it gives some useful options for the sidebar, it has SCM status badges for svn and git (though I find the git thing a bit buggy).
I like the fact that it can change the sidebar to an embedded panel on left or right (as opposed to the drawer that's default).
A: If, like me, you're borderline OCD when it comes to making code look neat, then Option+Cmd+] to line up all the assignments around the current line is awesome!
A: The mate command line tool is great, you can open an individual file or my favourite use of it is to open a directory of files as a project (e.g. mate .)
A: Checkout Zen Coding bundle . It gives you an awesome productivity boost to developing both HTML and CSS.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: How to determine if an html tag splits across multiple lines I'm writing a PHP script that involves scraping web pages. Currently, the script analyzes the page line by line, but it breaks if there is a tag that spans multiple lines, like
<img src="example.jpg"
alt="example">
If worse comes to worst, I could possibly preprocess the page by removing all line breaks, then re-inserting them at the closest >, but this seems like a kludge.
Ideally, I'd be able to detect a tag that spans lines, conjoin only those two lines, and continue processing.
So what's the best method to detect this?
A: This is one of my pet peeves: never parse HTML by hand. Never parse HTML with regexps. Never parse HTML with string comparisons. Always use an HTML parser to parse HTML – that's what they're there for.
It's been a long time since I've done any PHP, but a quick search turned up this PHP5 HTML parser.
A: Don't write a parser, use someone else's: DOMDocument::loadHTML - that's just one, I think there are a lot of others.
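For instance, a minimal sketch with the built-in DOM extension (the URL is a placeholder; the @ suppresses the warnings that real-world markup tends to trigger):
<?php
$html = file_get_contents('http://example.com/');
$doc = new DOMDocument();
@$doc->loadHTML($html);               // parses the whole document, multi-line tags and all
foreach ($doc->getElementsByTagName('img') as $img) {
    echo $img->getAttribute('src'), "\n";
}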
A: Well, this doesn't answer the question and is more of an opinion, but...
I think that the best scraping strategy (and consequently, the way to eliminate this problem) is not to analyze HTML line by line, which is unnatural to HTML, but to analyze it by its natural delimiter: <> pairs.
There will be two types of course:
*
*Tag elements that are immediately closed, e.g., < br />
*Tag elements that need a separate closing tag, e.g., < p > text < /p >
You can immediately see the advantage of using this strategy in the case of paragraph (p) tags: it will be easier to parse multiline paragraphs instead of having to track where the closing tag is.
A: Perhaps for future projects I'll use a parsing library, but that's kind of aside from the question at hand. This is my current solution. rstrpos is strpos, but from the reverse direction. Example use:
for ($i = 0; $i < count($lines); $i++)
{
    $line = handle_multiline_tags($i, $lines[$i], $lines);
}
And here's that implementation:
function rstrpos($string, $charToFind, $relativePos)
{
    // scan backwards from $relativePos until $charToFind is found
    $searchPos = $relativePos;
    $searchChar = '';
    while (($searchChar != $charToFind) && ($searchPos > -1))
    {
        $newPos = $searchPos - 1;
        $searchChar = substr($string, $newPos, strlen($charToFind));
        $searchPos = $newPos;
    }
    if (!empty($searchChar))
    {
        return $searchPos;
    }
    else
    {
        return FALSE;
    }
}
function handle_multiline_tags(&$i, $line, $lines)
{
    // if a tag is opened but not closed before a line break,
    // join this line with the next one and try again
    $open = rstrpos($line, '<', strlen($line));
    $close = rstrpos($line, '>', strlen($line));
    if (($open > $close) && ($open > -1) && ($close > -1))
    {
        $i++;
        return trim($line) . trim(handle_multiline_tags($i, $lines[$i], $lines));
    }
    else
    {
        return trim($line);
    }
}
This could probably be optimized in some way, but for my purposes, it's sufficient.
A: Why don't you read in a line and append it to a string, then check the string for tag openings and closings? If a tag spans more than one line, add the next line to the string and move the part before the opening brace to your processed string. Then just parse through the entire file doing this. It's not beautiful but it should work.
A: If you've gotta stick to your current method of parsing, and it's a regex, you can use the "s" (dotall) modifier so that . matches newlines, letting a single pattern span multiple lines.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Any way to write a Windows .bat file to kill processes? Every time I turn on my company-owned development machine, I have to kill 10+ processes using the Task Manager or any other process management app just to get decent performance out of my IDE. Yes, these are processes from programs that my company installs on my machine for security and compliance. What I'd like to do is have a .bat file or script of some kind with which I can kill the processes in question.
Does anybody know how to do this?
A: I'm assuming as a developer, you have some degree of administrative control over your machine. If so, from the command line, run msconfig.exe. You can remove many processes from even starting, thereby eliminating the need to kill them with the above mentioned solutions.
A: Get Autoruns from Mark Russinovich, the Sysinternals guy that discovered the Sony Rootkit... Best software I've ever used for cleaning up things that get started automatically.
A: Download PSKill. Write a batch file that calls it for each process you want dead, passing in the name of the process for each.
A: You can do this with 'taskkill'.
With the /IM parameter, you can specify image names.
Example:
taskkill /im somecorporateprocess.exe
You can also do this to 'force' kill:
Example:
taskkill /f /im somecorporateprocess.exe
Just add one line per process you want to kill, save it as a .bat file, and add in your startup directory. Problem solved!
If this is a legacy system, PsKill will do the same.
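For example, a sketch of such a script (the process names are made up - substitute the ones you actually want dead):
@echo off
rem kill-bloat.bat - drop a shortcut to this in your Startup folder
taskkill /f /im "complianceagent.exe"
taskkill /f /im "securityscanner.exe"
taskkill /f /im "inventorytool.exe"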
A: taskkill /f /im "devenv.exe"
this will forcibly kill the process with the image name "devenv.exe"
equivalent to -9 on the nix'y kill command
A: As TASKKILL might be unavailable on some Home/Basic editions of Windows, here are some alternatives:
TSKILL processName
or
TSKILL PID
Keep in mind that processName should not have the .exe suffix and is limited to 18 characters.
Another option is WMIC :
wmic Path win32_process Where "Caption Like 'MyProcess%.exe'" Call Terminate
wmic offers even more flexibility than taskkill with its SQL-like matchers. With wmic Path win32_process get you can see the available fields you can filter on (and % can be used as a wildcard).
A: Use Powershell! Built in cmdlets for managing processes. Examples here (hard way), here(built in) and here (more).
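For instance, a one-line sketch (the process name is a placeholder):
Stop-Process -Name somecorporateprocess -Force -ErrorAction SilentlyContinue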
A: Please find the below logic, which only acts when needed.
If we simply call taskkill /im applicationname.exe, it will kill the process only if it is running. If the process is not running, it will throw an error.
So a check can be done before taskkill is called, to make sure taskkill executes only if the process is running, so that it won't throw an error:
tasklist /fi "imagename eq applicationname.exe" |find ":" > nul
if errorlevel 1 taskkill /f /im "applicationname.exe"
A: Here is an example command that you can paste into your cmd prompt; it is written for chrome.exe.
FOR /F "tokens=2 delims= " %P IN ('tasklist /FO Table /M "chrome*" /NH') DO (TASKKILL /PID %P)
The for loop just takes all the PIDs listed by the tasklist command below and executes TASKKILL /PID on each one.
tasklist /FO Table /M "chrome*" /NH
If you use the for in a batch file just use %%P instead of %P
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "141"
} |
Q: How do I detect if a function is available during JNLP execution? I have an application which really should be installed, but does work fine when deployed using JNLP.
However, it would seem that some Java functions such as Runtime.exec don't work using the default security options.
I would like to therefore disable UI functionality that relies upon such functions.
So my question is, how do I detect at runtime whether certain functions are available or not?
The case study, here of course, is Runtime.exec.
A: You want to ask the SecurityManager whether you have exec rights, using the checkExec method.
A: I have also found that adding the following to the JNLP file:
<security>
<all-permissions/>
</security>
And signing the JAR file allows the app to run with all the permissions needed for Runtime.exec.
A: For the specific example of Runtime.exec there is a method on the SecurityManager class checkExec(String cmd) that will throw an exception that can be caught to determine if the necessary command can be executed. For more information see the javadoc for Runtime.exec and SecurityManager.checkExec.
The more general case requires creating a Permission object representing the task being checked and running SecurityManager's checkPermission method.
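A minimal sketch of that probe (the command passed to checkExec is just an illustrative placeholder):
boolean canExec;
SecurityManager sm = System.getSecurityManager();
if (sm == null) {
    canExec = true;  // no security manager installed, nothing is restricted
} else {
    try {
        sm.checkExec("notepad");  // hypothetical command name
        canExec = true;
    } catch (SecurityException e) {
        canExec = false;  // hide/disable the UI that needs Runtime.exec
    }
}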
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I find the revision history of the file that was deleted and then resubmitted to SVN? This is a follow-on question to "How do I delete 1 file from a revision in SVN?", asked separately because it probably has a very different answer and I believe that others would benefit from knowing it. (I don't know the answer yet.)
The previous question was answered and I discovered that it is not possible to remove a revision from SVN. The second best solution was to remove the file from SVN, commit and then add the file back and commit again.
I now want to make sure that the original file's revision history has gone. So I am hoping that the answer to the question "How can I find the revision history of the file that was deleted and then resubmitted to SVN?" is that you can't.
A: What makes you think that it is not possible to remove a revision from Subversion? The solution given to your other question (svndumpfilter) does exactly that (see the parameters --drop-empty-revs and --renumber-revs)! And when the revision is gone, there's obviously no way to get at the revision history, because it was never there in the first place.
A: With a simple
svn log -v [folder]
you can quickly browse the additions and deletions.
------------------------------------------------------------------------
r14 | kame | 2008-08-29 04:23:43 +0200 (ven., 29 aoû2008) | 1 line
Chemins modifié :
A /a.txt
Readded a
------------------------------------------------------------------------
r13 | kame | 2008-08-29 04:23:24 +0200 (ven., 29 aoû2008) | 1 line
Chemins modifié :
D /a.txt
Delete a
------------------------------------------------------------------------
r12 | kame | 2008-08-29 04:23:06 +0200 (ven., 29 aoû2008) | 1 line
Chemins modifié :
A /a.txt
svn log won't show the file, svn diff will pretend that the old revision does not exist, but an svn checkout targeting the old revision will happily give you the old file.
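For example (the repository URL and revision number are placeholders):
svn cat file:///repos/a.txt@12 > recovered.txt    # the file's contents as of r12
svn checkout -r 12 file:///repos old-wc           # or the whole tree at r12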
A: Short answer: you can
Long answer:
Unfortunately (for you, but perhaps not for most folks), the revision history for a deleted file is still there - it's just a little harder to get at.
Here's an example:
$ touch one
$ svn add one
$ svn ci -m "Added file one"
$ date >> one
$ svn ci -m "Updated file one"
$ date >> one
$ svn ci -m "Updated file one again"
$ svn log file:///repos/one
------------------------------------------------------------------------
r3 | andrewr | 2008-08-29 12:27:10 +1000 (Fri, 29 Aug 2008) | 1 line
Updated file one again
------------------------------------------------------------------------
r2 | andrewr | 2008-08-29 12:26:50 +1000 (Fri, 29 Aug 2008) | 1 line
Updated file one
------------------------------------------------------------------------
r1 | andrewr | 2008-08-29 12:25:07 +1000 (Fri, 29 Aug 2008) | 1 line
Added file one
------------------------------------------------------------------------
$ svn delete one
$ svn ci -m "Deleted file one"
$ svn up
$ touch one
$ svn add one
$ svn ci -m "Adding file one back in"
$ svn log file:///repos/one
------------------------------------------------------------------------
r5 | andrewr | 2008-08-29 12:29:13 +1000 (Fri, 29 Aug 2008) | 1 line
add one back
------------------------------------------------------------------------
It looks like it works (the old history is gone), but if you request the file at older revisions you get the history of the deleted file.
$ svn log -r 3:1 file:///repos/one
------------------------------------------------------------------------
r3 | andrewr | 2008-08-29 12:27:10 +1000 (Fri, 29 Aug 2008) | 1 line
Updated file one again
------------------------------------------------------------------------
r2 | andrewr | 2008-08-29 12:26:50 +1000 (Fri, 29 Aug 2008) | 1 line
Updated file one
------------------------------------------------------------------------
r1 | andrewr | 2008-08-29 12:25:07 +1000 (Fri, 29 Aug 2008) | 1 line
Added file one
------------------------------------------------------------------------
A: I would have said you can't - you have created a new file and thus a new revision tree in the eyes of SVN.
It may be possible to recover the old tree independently (not sure if you managed an actual delete or just an SVN Delete), but there is no link between the old revision tree and the new one.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: IE CSS Bug - How do I maintain a position:absolute when dynamic javascript content on the page changes I have a page where there is a column and a content div, somewhat like this:
<div id="container">
<div id="content">blahblahblah</div>
<div id="column"> </div>
</div>
With some styling I have an image that is split between the column and the content but needs to maintain the same vertical positioning so that it lines up.
Styling is similar to this:
#column
{
width:150px;
height:450px;
left:-150px;
bottom:-140px;
background:url(../images/image.png) no-repeat;
position:absolute;
z-index:1;
}
#container
{
background:transparent url(../images/container.png) no-repeat scroll left bottom;
position:relative;
width:100px;
}
This works great when the content in #content is dynamically loaded before rendering. It also always works in Firefox. However, in IE6 and IE7, if I use JavaScript to change the content (and thus the height) of #content, the images no longer line up (#column doesn't move). If I use IE Developer Bar to just update the div (say, add position:absolute manually), the image jumps down and lines up again.
Is there something I am missing here?
@Ricky - Hmm, that means in this case there is no solution, I think. At best there will be a jagged matchup afterwards, but as my content expands and contracts etc., hiding/showing doesn't work out to be practical. Still, thanks for answering with the best solution.
A: It's a bug in the rendering engine. I run into it all the time. One potential way to solve it is to hide and show the div whenever you change the content (that in turn changes the height):
var divCol = document.getElementById('column');
divCol.style.display = 'none';
divCol.style.display = 'block';
Hopefully this happens fast enough that it isn't noticeable :)
A: Another workaround which worked for me and had no flickering effect was to add and then remove a dummy CSS class name, forcing a reflow, like this using jQuery:
$(element).toggleClass('damn-you-ie').toggleClass('damn-you-ie')
A: If you are worried about getting a flicker from showing and hiding divCol, you can adjust another CSS property and it will have the same effect,
e.g.
var divCol = document.getElementById('column');
divCol.style.zoom = '1';
divCol.style.zoom = '';
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to avoid pauses when editing code on a network drive? I'm planning on doing more coding from home but in order to do so, I need to be able to edit files on a Samba drive on our dev server. The problem I've run into with several editors is that the network latency causes the editor to lock up for long periods of time (Eclipse, TextMate). Some editors cope with this a lot better than others, but are there any file system or other tweaks I can make to minimize the impact of lag?
A few additional points:
*
*There's a policy against having company data on personal machines, so I'd like to avoid checking out the code locally.
*The mount is over a PPTP VPN connection.
*Mounting to Linux or OS X client
A: Use a source control system — Subversion, Perforce, Git, Mercurial, Bazaar, etc. — so you're never editing code on a shared server. Instead you should be editing a local work area and committing changes to a repository located on the network.
Also, convince your company to adapt their policy such that company code is allowed on personal machines if it's on an encrypted volume. Encrypted disk images that you can use for this are trivial to create using Disk Utility, and can use strong cryptography. You can get even more security by not storing your encryption passphrase in your keychain, and instead typing it every time you mount the encrypted volume; this means that even if your local user account is compromised, as long as you don't have the volume mounted, nobody else will be able to mount it.
I did this all the time when I was consulting and none of my clients — some of whom had similar rules about company code — ever had a problem with it once I explained how things worked. (I think some of them even started using encrypted disk images even within their offices.)
A: Remate plugin simply disables this dreadful refresh-on-focus feature.
Download, unpack, double-click and choose "Disable Refresh on Regaining Focus" from the "Window" menu (you can refresh manually by right-clicking the project in the drawer). Voila!
A: If you are accessing the data from your personal computer, it is in your RAM, so we will assume that you just can't store it on your hard drive, floppy, USB stick, etc.
Your solution is a RAM drive. Copy the files you need to edit there using whatever method you prefer (I would suggest source control) and then you can edit them without lag. When you are done commit them back to the server.
As was pointed out your editor may be caching changes to your temp directory, or maybe even your swap file (if it is in memory, then it can get swapped out). The solution to that is get a much larger RAM drive and run a Virtual Machine in the RAM drive. Not sure what OS you are running, but you can get a pretty slim install of most OS's if all you are doing is editing source code.
If you don't have enough RAM, then get a Gigabyte i-RAM solid state drive and remove the battery, that way it will lose everything when you power down.
Set your VMWare to not allow the OS to swap any of the virtual machine. Keep a baseline VM on your hard drive and copy it to your RAM drive before booting it up. Then you can use the hard drive in the VM like a hard drive, even though it is RAM.
Might be a good idea to run a secure erase on your RAM drive before powering down. Also keep in mind that they have found if you super cool a RAM chip before removing it from a functioning computer, and place it in a new computer quick enough, the data may still be intact.
I guess it all comes down to how detailed that policy is, and how it is interpreted.
Good luck!
A: Short answer: there's no trick. CIFS is really geared towards LANs with reasonably calm traffic, so you have zero chance of avoiding intermittent lag when accessing a share through a VPN. At some point the editor needs to access the file with blocking I/O, because it makes no real sense to do otherwise.
You could switch editors and use Emacs + TRAMP, which is geared to work on remote files.
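For example, opening a remote file over SSH with TRAMP looks like this (the host and path are placeholders):
C-x C-f /ssh:you@devserver:/srv/www/app/index.php
TRAMP copies the contents into a local buffer and only touches the network when you open or save, so day-to-day editing stays responsive.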
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Using Unsigned Primitive Types Most of the time we represent concepts which can never be less than 0. For example, to declare length, we write:
int length;
The name expresses its purpose well but you can assign negative values to it. It seems that for some situations, you can represent your intent more clearly by writing it this way instead:
uint length;
Some disadvantages that I can think of:
*
*unsigned types (uint, ulong, ushort) are not CLS compliant, so you can't use them with other languages that don't support this
*.Net classes use signed types most of the time so you have to cast
Thoughts?
A: If you decrement a signed number with a value of 0, it becomes negative and you can easily test for this. If you decrement an unsigned number with a value of 0, it underflows and becomes the maximum value for the type - somewhat more difficult to check for.
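A small sketch of that difference (unchecked is the C# default for integer arithmetic):
int i = 0;
i--;                           // -1: easy to detect with i < 0
uint u = 0;
unchecked { u--; }             // wraps around to uint.MaxValue (4294967295)
u = 0;
try { checked { u--; } }       // checked arithmetic throws on underflow instead
catch (OverflowException) { }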
A: Your second point is the most important. Generally you should just use int since that's a pretty good "catch-all" for integer values. I would only use uint if you absolutely need the ability to count higher than int, but without using the extra memory long requires (it's not much more memory, so don't be cheap :-p).
A: “When in Rome, do as the Romans do.”
While there is theoretically an advantage in using unsigned values where applicable because it makes the code more expressive, this is simply not done in C#. I'm not sure why the developers initially didn't design the interfaces to handle uints and make the type CLS compliant but now the train has left the station.
Since consistency is generally important I'd advise taking the C# road and using ints.
A: I think the subtle use of uint vs. int will cause confusion among developers unless it is written into the company's developer guidelines.
If the length, for example, can't be less than zero then it should be expressed clearly in the business logic so future developers can read the code and know the true intent.
Just my 2 cents.
A: I will point out that in C# you can turn on /checked to check for arithmetic overflow / underflow, which isn't a bad idea anyways. If performance matters in a critical section, you can still use unchecked to avoid this.
For internal code (ie code that won't be referenced in any interop manor with other languages) I vote for using unsigned when the situation warrants it, such as length variables as mentioned earlier. This - along with checked arithmetic - provides one more net for developers, catching subtle bugs earlier.
Another point in the signed vs unsigned debate is that some programmers use values such as -1 to indicate errors, when they wouldn't otherwise have meaning. I subscribe to the view that each variable should have only one purpose, but if you - or colleagues you code with - like to indicate errors in this way, leaving variables signed gives you the flexibility to add error states later.
A: Your two points are good. The primary reason to avoid it is casting, though. Casting makes them incredibly annoying to use. I tried using unsigned variables once, but I had to sprinkle casts absolutely everywhere because the framework methods all use signed integers. Therefore, whenever you call a framework method, you have to cast.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: .net multilingual cms I am planning a simple, dual-language website and I'd like to use a .NET based CMS, but I can't find anything suitable. I have experience with DotNetNuke and SharePoint but neither fits the bill - DotNetNuke does not do dynamic site elements multi-lingually & SharePoint is a monster PITA no matter what angle you look at it :).
I am on the verge of choosing Joomla! & Joom!Fish. They fit the bill nicely, with one exception: I would like to create some CMS plug-ins and I would much prefer to write them in .NET. Any suggestions?
A: There is the N2 CMS, which is pretty good. Also have a look at cuyahoga
A: Have you looked at Umbraco? I have worked with it to try out for clients and it looks really good.
I would look to them as a possible solution.
A: Kentico is pretty good too.
A: You can check out Sitefinity. It is proprietary, but supports multilingual sites and is very, very extensible. .NET-based so you can basically fine-tune it for your needs, or write anything custom that is not coming out of the box.
A: I would recommend Ektron CMS400.net -- it's an excellent CMS with great built-in translation.
A: I agree with
@Danimal
ektron is very good. It's not free, but you definitely get what you pay for.
A: BlogEngine is pretty good for a blogging platform with good multi-lingual support.
A: 1 more vote for Umbraco.
Depends on what you are used to, but is one of the nicest CMS I've used, and have found it pretty easy to add my own user controls to it.
Apparently supports multi languages, but I have never tried that.
A: +1 for umbraco. It has never ever limited me in any way. It does have a learning curve, but once you get to know the basics of the system, you'll be amazed what things can be done in a short period of time. Also, great supporting community!
A: Webnodes CMS supports multiple languages, and you don't need to know XSLT. The templates are standard aspx pages.
You define content types in the content definitions module, and strongly typed classes are created based on those content types. This gives you strongly typed collections, and compile time error checking, as well as Intellisense for all properties on a content object(called a node). Since the system also has a built-in ORM, you never have to write a line of SQL.
A: The latest version of Umbraco DOES NOT support 1:1 or "tabbed" translations. I would never recommend it as an i18n solution.
A: For .NET, assuming you're comfortable with XSLT, Umbraco - www.umbraco.org
The XSLT qualification is important because that's the basis of the template (for content) system so whilst the end users have no requirement to use XSLT those defining the templates will.
Edit:
As we roll towards the end of 2011 there is now an alternative to XSLT: support for the Razor engine is being added to Umbraco, and it's fair to say that Razor is probably a bit less challenging than XSLT (much as I continue to be impressed by what one can do with XSLT, it does need a different mindset).
A: +1 for Umbraco as a great CMS. As far as multilingual support though, I'm in the same boat as seanb. I know it supports it, but I've never dealt with it myself.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Is it possible to call Javascript's onsubmit event programmatically on a form? In Ruby on Rails, I'm attempting to update the innerHTML of a div tag using the form_remote_tag helper. This update happens whenever an associated select tag receives an onchange event. The problem is, <select onchange="this.form.submit();">; doesn't work. Nor does document.forms[0].submit(). The only way to get the onsubmit code generated in the form_remote_tag to execute is to create a hidden submit button, and invoke the click method on the button from the select tag. Here's a working ERb partial example.
<% form_remote_tag :url => product_path, :update => 'content', :method => 'get' do -%>
<% content_tag :div, :id => 'content' do -%>
<%= select_tag :update, options_for_select([["foo", 1], ["bar", 2]]), :onchange => "this.form.commit.click" %>
<%= submit_tag 'submit_button', :style => "display: none" %>
<% end %>
<% end %>
What I want to do is something like this, but it doesn't work.
<% form_remote_tag :url => product_path, :update => 'content', :method => 'get' do -%>
<% content_tag :div, :id => 'content' do -%>
# the following line does not work
<%= select_tag :update, options_for_select([["foo", 1], ["bar", 2]]), :onchange => "this.form.onsubmit()" %>
<% end %>
<% end %>
So, is there any way to remove the invisible submit button for this use case?
There seems to be some confusion. So, let me explain. The basic problem is that submit() doesn't call the onsubmit() code rendered into the form.
The actual HTML form that Rails renders from this ERb looks like this:
<form action="/products/1" method="post" onsubmit="new Ajax.Updater('content', '/products/1', {asynchronous:true, evalScripts:true, method:'get', parameters:Form.serialize(this)}); return false;">
<div style="margin:0;padding:0">
<input name="authenticity_token" type="hidden" value="4eacf78eb87e9262a0b631a8a6e417e9a5957cab" />
</div>
<div id="content">
<select id="update" name="update" onchange="this.form.commit.click">
<option value="1">foo</option>
<option value="2">bar</option>
</select>
<input name="commit" style="display: none" type="submit" value="submit_button" />
</div>
</form>
I want to axe the invisible submit button, but using a straight form.submit appears to not work. So, I need some way to call the form's onsubmit event code.
Update: Orion Edwards solution would work if there wasn't a return(false); generated by Rails. I'm not sure which is worse though, sending a phantom click to an invisible submit button or calling eval on the getAttribute('onsubmit') call after removing the return call with a javascript string replacement!
A: I realize this question is kind of old, but what the heck are you doing eval for?
document.getElementById('formId').onsubmit();
document.getElementById('formId').submit();
or
document.formName.onsubmit();
document.formName.submit();
When the DOM of a document is loaded, the events are not strings any more, they are functions.
alert(typeof document.formName.onsubmit); // function
So there's no reason to convert a function to a string just so you can eval it.
A: Give your form an id. Then:
document.getElementById('formid').submit();
If you are loading Javascript into a div via innerHTML, it won't run...just FYI.
A: If you have to use Rail's built-in Javascript generation, I would use Orion's solution, but with one small alteration to compensate for the return code.
eval ('(function(){' + code + '})()');
However, in my opinion you'd have an easier time in the long run by separating out the Javascript code into an external file or separate callable functions.
A: Don't.
You have a solution.
Stop, move on to the next function point.
I know, it is not pretty, but there are bigger problems.
A: Not sure if you have an answer yet or not, but in the onclick function of the select, call onsubmit instead of submit.
A: If you didn't actually want to submit the form, but just invoke whatever code happened to be in the onsubmit, you could possibly do this: (untested)
var code = document.getElementById('formId').getAttribute('onsubmit');
eval(code);
A: In theory, something like eval ('function(){' + code + '}()'); could work (that syntax fails though). Even if that did work, it would still be sort of ghetto to be calling an eval through a select onchange. Another solution would be to somehow get Rails to inject the onsubmit code into the onchange field of the select tag, but I'm not sure if there's a way to do that. ActionView has link_to_remote, but there's no obvious helper to generate the same code in the onchange field.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Font rendering for web pages I always run into the same problem when creating web pages. When I add a font that is larger than about 16-18px it looks terrible. It's jagged and pixelated. I have tried using different fonts and weights; however, I haven't had much luck there.
Note: It's only in Windows that it is like this, mainly in Opera and FF, and also in IE7 but not quite as bad. In Linux the font looks good. I haven't looked at a Mac.
What do you guys do to fix this, if anything? I noticed that the titles here on SO are also pretty jagged, but they are just small enough not to look bad.
A: There is nothing you can do to force the user to change the way that their operating system renders fonts. If it is that big a deal to you then you can replace the large headings with images, this allows you to control exactly how the font is rendered (and ensures that the heading looks exactly as you wish, even if the user doesnt have your suggested font installed).
If you do this make sure that you provide an alternative text representation for those who do not see images. I tend to use CSS to show a background image, and hide the contents of the heading. Like this.
<style>
h1
{
height: 32px;
width: 100px;
background: url("path/to/image")
}
h1 span
{
display: none;
}
</style>
<h1>
<span>
Heading Text
<span>
</h1>
To be honest this does seem like overkill if it is on all large text. And be aware that it will increase the amount of data that your clients need to download. However for a large heading this method can lead to something that looks nicer than OS rendered text.
A: Enabling anti-aliasing should solve the display problem.
A: Aside from anti-aliasing, try enabling clear type.
A: On Windows, enabling ClearType will solve this. However, you can't force users to use it. It's not a browser issue; it's the operating system's font smoothing method.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Determining width of a printed string by a webapp In my (PHP) web app, I have a part of my site that keeps a history of recent searches. The most recent queries get shown in a side box. If the query text is too long, I truncate it and show ellipses. Eg: "My very long query is..."
Currently, I truncate after a certain number of characters. Since the font is not monospaced, a query of all I's is narrower than a query of all W's. I'd like them to all be about the same width prior to the ellipses. Is there a way to get the approximate width of the resulting string so that the ellipses for any given string will occur in about the same number of pixels from the beginning? Does CSS have a way? Does PHP? Would this be better handled by JavaScript?
A: Here's another take on it and you don't have to live without the ellipsis!
<html>
<head>
<style>
div.sidebox {
width: 25%;
}
div.sidebox div.qrytxt {
height: 1em;
line-height: 1em;
overflow: hidden;
}
div.sidebox div.qrytxt span.ellipsis {
float: right;
}
</style>
</head>
<body>
<div class="sidebox">
<div class="qrytxt">
<span class="ellipsis">…</span>
Some long text which will arbitrarily be cut off at whatever word fits best but will have an ellipsis at the end.
</div>
<div class="qrytxt">
<span class="ellipsis">…</span>
Some more long text which will arbitrarily be cut off at whatever word fits best but will have an ellipsis at the end.
</div>
<div class="qrytxt">
<span class="ellipsis">…</span>
Short text. Fail!
</div>
</body>
</html>
There is one flaw with this: if the text is short enough to be fully displayed, the ellipsis will still be displayed as well.
[EDIT: 6/26/2009]
At the suggestion of Power-Coder I have revised this a little. There are really only two changes, the addition of the doctype (see notes below) and the addition of the display: inline-block attribute on the .qrytxt DIV. Here is what it looks like now...
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<style>
div.sidebox
{
width: 25%;
}
div.sidebox div.qrytxt
{
height: 1em;
line-height: 1em;
overflow: hidden;
display: inline-block;
}
div.sidebox div.qrytxt span.ellipsis
{
float: right;
}
</style>
</head>
<body>
<div class="sidebox">
<div class="qrytxt">
<span class="ellipsis">…</span>
Some long text which will arbitrarily be cut off at whatever word fits best but will have an ellipsis at the end.
</div>
<div class="qrytxt">
<span class="ellipsis">…</span>
Some more long text which will arbitrarily be cut off at whatever word fits best but will have an ellipsis at the end.
</div>
<div class="qrytxt">
<span class="ellipsis">…</span>
Short text. FTW
</div>
</div>
</body>
</html>
Notes:
*
*Viewed in IE 8.0, Opera 9, FF 3
*A doctype is required for IE to get the display: inline-block to work correctly.
*If the .qrytxt DIV's overflow occurs on a long word, there is going to be a wide gap between the ellipsis and the last visible word. You can see this by viewing the example and resizing your browser width in small increments. (this probably existed in the original example as well, I just may have not noticed it then)
So again, an imperfect CSS-only solution. Javascript may be the only thing that can get the effect perfect.
[EDIT: 6/27/2009]
Here is another alternative which uses browser specific extensions.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<style>
div.sidebox
{
width: 26%;
}
div.sidebox div.qrytxt
{
height: 1em;
line-height: 1em;
overflow: hidden;
text-overflow:ellipsis;
-o-text-overflow:ellipsis;
-ms-text-overflow:ellipsis;
-moz-binding:url(ellipsis-xbl.xml#ellipsis);
white-space:nowrap;
}
</style>
</head>
<body>
<div class="sidebox">
<div class="qrytxt">
Some long text which will arbitrarily be cut off at whatever word fits best but will have an ellipsis at the end.
</div>
<div class="qrytxt">
Some more long text which will arbitrarily be cut off at whatever word fits best but will have an ellipsis at the end.
</div>
<div class="qrytxt">
Short text. FTW
</div>
</div>
</body>
</html>
Note that in order for the above example to work, you must create the xml file referenced by the -moz-binding rule, ellipsis-xbl.xml. It should contain the following xml:
<?xml version="1.0" encoding="UTF-8"?>
<bindings xmlns="http://www.mozilla.org/xbl" xmlns:xbl="http://www.mozilla.org/xbl" xmlns:xul="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
<binding id="ellipsis">
<content>
<xul:window>
<xul:description crop="end" xbl:inherits="value=xbl:text"><children/></xul:description>
</xul:window>
</content>
</binding>
</bindings>
A: You could also quite easily use a bit of javascript:
document.getElementById("qrytxt").offsetWidth;
will give you the width of an element in pixels and even works in IE6. If you append a span containing ellipses to the end of each query a simple logical test in JavaScript with a bit of CSS manipulation could be used to hide/show them as needed.
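Building on that, a small sketch of the hide/show logic (this assumes the qrytxt element has an id and keeps its ellipsis in a span, as in the markup above):
var div = document.getElementById("qrytxt");
var ellipsis = div.getElementsByTagName("span")[0];
// scrollWidth exceeds offsetWidth only when the text overflows the box
ellipsis.style.display = (div.scrollWidth > div.offsetWidth) ? "inline" : "none";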
A:
Does CSS have a way?
No
Does PHP?
No
To do that you'd have to get the font metrics for each character, and apply them to all the letters in your string. While you could do this by using a drawing/rendering library like ImageMagick on the server, it wouldn't really work because different browsers on different OSes render fonts differently.
Even if it did work, you wouldn't want to do it, because it would also take forever to render. Your server would be able to push 1 page per second (if that) instead of several thousand.
If you can live without the trailing ..., then you can nicely fake it using div tags and css overflow: hidden, like this:
.line_of_text {
height:1.3em;
line-height:1.3em;
overflow:hidden;
}
<div class="line_of_text"> Some long text which will arbitrarily be cut off at whatever word fits best</div>
A: @Robert
what if you put the ellipses in a div with a low z-index so that when it moves to the left (for shorter lines) they get covered up by a background image or something?
it's pretty hacky I know, but hey worth a try right?
edit Another idea: determine the position of the div containing the ellipses with javascript and if it's not pushed all the way right, hide it?
A: PHP should be left out of consideration completely because, even though there is a function designed for measuring fonts, http://www.php.net/imageftbbox, there is no way for PHP to know whether the visitor has a minimum font size setting that is larger than your anticipated font size.
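(For completeness, here is a sketch of that measuring function anyway - the TTF path is an assumption for a typical Linux box, and the result only matches the browser if the client renders the exact same font at the same size:)
<?php
$box = imageftbbox(14, 0, '/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf',
                   'My very long query');
$widthPx = $box[2] - $box[0];   // lower-right x minus lower-left x
echo $widthPx;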
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Best iCalendar library for Java? I'm looking for a library to handle iCalendar data in Java.
Open source, well-documented implementations with a good object model are preferred. iCal parsing capabilities are less important to me, but still nice to have.
Does anyone have any recommendations?
A: A challenger appears! Please give biweekly a try. I'm looking for lots of feedback on how it can be improved.
A: I had limited success with iCal4j (intro) on a project last year.
It seems to be a fairly popular choice for ical work in the java community.
If I remember correctly the API can be slightly confusing at first glance.
However It's pretty solid in the long run.
Good luck,
Brian
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "64"
} |
Q: Formatting Stored Procedures I currently work with an Oracle database and we use stored procedures for all our SQL queries. The problem I have is that we do not really have a coding standard for our packages. So what happens is that every developer has a different style (or in some cases no sense of style) in how they format their packages, making them difficult to read and work on without first reformatting. We all pretty much just use Notepad or Notepad2 to write our packages.
I am unfortunately not in a position to mandate a coding standard and enforce it (just a code monkey at this point) so I was hoping to find a free SQL code formatter that I can use myself, and possibly suggest to others on the team to use, to make my life easier.
I have considered writing a small application that would essentially take a file as input and reformat everything, but before I did this I figured I would ask if anyone knew of such a tool that is already available and free.
So, does anyone know of any such tools?
A: There is a free one online, sqlformatter; also SQLinForm. Personally, I use TOAD and have done so since before it was bought by Quest (10 years?)
A: *
*VIM script
*Aqua Data studio $ I use this one all the time.
A: I like TOAD for Oracle. It has a format feature that's decent. I see there's a freeware version, though I have not used it.
A: Toad for Oracle
*
*nicest, most mature
*$$$
*http://www.toadsoft.com
Toad for Oracle, free version
*
*free
*this will do what you want
*limitations are related to number of connections, size of data mods, etc.
*http://www.toadsoft.com
Oracle SQL Developer (up and coming, free!)
*
*free
*from Oracle
*cross platform
*http://www.oracle.com/technology/products/database/sql_developer
A: I had the exact same experience from Day One working with Oracle stored procedures - "I have to use NOTEPAD?! Oh HELL no."
So I hopped on the internets and what I found were people saying "Hey, I have to create stored procedures in Oracle, isn't there anything better than NOTEPAD?!"
And the canonical answer was: "Download TOAD, you'll be glad you did". So I followed their advice, was very happy with it, and I'm pleased (if a bit amazed) to see it is still a popular answer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Ribbon Toolbar and Visual Studio 2008 Service Pack 1 Today I was listening to the Hanselminutes show about .NET 3.5 SP1...What's inside, and they twice mentioned the Office 2007-like Ribbon control that is included in Visual Studio 2008 Service Pack 1.
I am very interested about this, as I was previously looking at purchasing this from a 3rd party vendor (likely DevComponent's DotNetBar). However, I did some research this morning and have found nothing about this control being included for use with C# and WPF. Does anyone know more about the included Ribbon control and whether or not it is available for C# or WPF?
A: Yeah I did a double-take when I heard them say that too.
The ribbon control, along with a DatePicker and DataGrid, is being developed out of band over here on CodePlex. I'm not sure why Carl and Scott were suggesting that it was part of the SP1 release.
Vincent Sibal posts about DataGrid (which is available already in some form) on his blog.
A: It was in VS 2008 as part of a C++/MFC update. I'm not sure about C#/WPF.
A: A preview of the WPF Ribbon control was released last week; you can find it here
A: You might also want to give a look at Phil Wright's excellent Krypton products range. It includes an excellent Ribbon component. (It's a WinForms component, not WPF).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: What is tail recursion? Whilst starting to learn lisp, I've come across the term tail-recursive. What does it mean exactly?
A: In short, tail recursion has the recursive call as the last statement in the function, so that nothing has to wait on the recursive call after it returns.
So this is tail recursion, i.e. N(x - 1, p * x) is the last statement in the function, and the compiler is clever enough to figure out that it can be optimised to a for-loop (factorial). The second parameter p carries the intermediate product value.
function N(x, p) {
return x == 1 ? p : N(x - 1, p * x);
}
This is the non-tail-recursive way of writing the above factorial function (although some C++ compilers may be able to optimise it anyway).
function N(x) {
return x == 1 ? 1 : x * N(x - 1);
}
but this is not:
function F(x) {
if (x == 1) return 0;
if (x == 2) return 1;
return F(x - 1) + F(x - 2);
}
I did write a long post titled "Understanding Tail Recursion – Visual Studio C++ – Assembly View"
A:
A tail recursion is a recursive function where the function calls
itself at the end ("tail") of the function, so that no computation is
done after the recursive call returns. Many compilers optimize such
a recursive call into an iterative one.
Consider the problem of computing factorial of a number.
A straightforward approach would be:
factorial(n):
if n==0 then 1
else n*factorial(n-1)
Suppose you call factorial(4). The recursion tree would be:
factorial(4)
/ \
4 factorial(3)
/ \
3 factorial(2)
/ \
2 factorial(1)
/ \
1 factorial(0)
\
1
The maximum recursion depth in the above case is O(n).
However, consider the following example:
factAux(m,n):
if n==0 then m;
else factAux(m*n,n-1);
factTail(n):
return factAux(1,n);
Recursion tree for factTail(4) would be:
factTail(4)
|
factAux(1,4)
|
factAux(4,3)
|
factAux(12,2)
|
factAux(24,1)
|
factAux(24,0)
|
24
Here also, maximum recursion depth is O(n) but none of the calls adds any extra variable to the stack. Hence the compiler can do away with a stack.
A: Using regular recursion, each recursive call pushes another entry onto the call stack. When the recursion is completed, the app then has to pop each entry off all the way back down.
With tail recursion, depending on language the compiler may be able to collapse the stack down to one entry, so you save stack space...A large recursive query can actually cause a stack overflow.
Basically, tail recursions can be optimized into iteration.
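As a sketch of what that optimization amounts to, here is a tail-recursive sum (a made-up example) next to the loop a tail-call-optimizing compiler effectively turns it into:
function sum(n, acc) {
    return n === 0 ? acc : sum(n - 1, acc + n);  // tail call: nothing left to do afterwards
}

function sumLoop(n, acc) {   // the equivalent iteration
    while (n !== 0) {
        acc = acc + n;       // same work as computing the new accumulator argument
        n = n - 1;
    }
    return acc;
}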
A: The jargon file has this to say about the definition of tail recursion:
tail recursion /n./
If you aren't sick of it already, see tail recursion.
A: In traditional recursion, the typical model is that you perform your recursive calls first, and then you take the return value of the recursive call and calculate the result. In this manner, you don't get the result of your calculation until you have returned from every recursive call.
In tail recursion, you perform your calculations first, and then you execute the recursive call, passing the results of your current step to the next recursive step. This results in the last statement being in the form of (return (recursive-function params)). Basically, the return value of any given recursive step is the same as the return value of the next recursive call.
The consequence of this is that once you are ready to perform your next recursive step, you don't need the current stack frame any more. This allows for some optimization. In fact, with an appropriately written compiler, you should never have a stack overflow snicker with a tail recursive call. Simply reuse the current stack frame for the next recursive step. I'm pretty sure Lisp does this.
A: Here is a Perl 5 version of the tailrecsum function mentioned in another answer.
sub tail_rec_sum($;$){
my( $x,$running_total ) = (@_,0);
return $running_total unless $x;
@_ = ($x-1,$running_total+$x);
goto &tail_rec_sum; # throw away current stack frame
}
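Python has neither goto nor tail call elimination, but a small trampoline gives the same "throw away the stack frame" effect. This is my own sketch, not part of the original answer:
def tail_rec_sum(x, running_total=0):
    # Instead of recursing directly, return a thunk describing the next call.
    if x == 0:
        return running_total
    return lambda: tail_rec_sum(x - 1, running_total + x)

def trampoline(result):
    # Run thunks in a loop; the Python call stack never grows.
    while callable(result):
        result = result()
    return result

print(trampoline(tail_rec_sum(1000000)))  # 500000500000, no RecursionError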
A: This is an excerpt from Structure and Interpretation of Computer Programs about tail recursion.
In contrasting iteration and recursion, we must be careful not to
confuse the notion of a recursive process with the notion of a
recursive procedure. When we describe a procedure as recursive, we are
referring to the syntactic fact that the procedure definition refers
(either directly or indirectly) to the procedure itself. But when we
describe a process as following a pattern that is, say, linearly
recursive, we are speaking about how the process evolves, not about
the syntax of how a procedure is written. It may seem disturbing that
we refer to a recursive procedure such as fact-iter as generating an
iterative process. However, the process really is iterative: Its state
is captured completely by its three state variables, and an
interpreter need keep track of only three variables in order to
execute the process.
One reason that the distinction between process and procedure may be
confusing is that most implementations of common languages (including Ada, Pascal, and
C) are designed in such a way that the interpretation of any recursive
procedure consumes an amount of memory that grows with the number of
procedure calls, even when the process described is, in principle,
iterative. As a consequence, these languages can describe iterative
processes only by resorting to special-purpose “looping constructs”
such as do, repeat, until, for, and while. The implementation of
Scheme does not share this defect. It
will execute an iterative process in constant space, even if the
iterative process is described by a recursive procedure. An
implementation with this property is called tail-recursive. With a
tail-recursive implementation, iteration can be expressed using the
ordinary procedure call mechanism, so that special iteration
constructs are useful only as syntactic sugar.
A: Tail recursion is the life you are living right now. You constantly recycle the same stack frame, over and over, because there's no reason or means to return to a "previous" frame. The past is over and done with so it can be discarded. You get one frame, forever moving into the future, until your process inevitably dies.
The analogy breaks down when you consider some processes might utilize additional frames but are still considered tail-recursive if the stack does not grow infinitely.
A: Instead of explaining it with words, here's an example. This is a Scheme version of the factorial function:
(define (factorial x)
(if (= x 0) 1
(* x (factorial (- x 1)))))
Here is a version of factorial that is tail-recursive:
(define factorial
(letrec ((fact (lambda (x accum)
(if (= x 0) accum
(fact (- x 1) (* accum x))))))
(lambda (x)
(fact x 1))))
You will notice in the first version that the recursive call to factorial is fed into the multiplication expression, and therefore the state has to be saved on the stack when making the recursive call. In the tail-recursive version there is no other S-expression waiting for the value of the recursive call, and since there is no further work to do, the state doesn't have to be saved on the stack. As a rule, Scheme tail-recursive functions use constant stack space.
A: Tail recursion is fast compared to normal recursion.
It is fast because the output of each ancestor call does not have to be kept on the stack to keep track of it.
In normal recursion, every ancestor call's intermediate output is written to the stack so the computation can be resumed after the recursive call returns.
A: To understand some of the core differences between tail-call recursion and non-tail-call recursion we can explore the .NET implementations of these techniques.
Here is an article with some examples in C#, F#, and C++/CLI: Adventures in Tail Recursion in C#, F#, and C++/CLI.
C# does not optimize for tail-call recursion whereas F# does.
The differences of principle involve loops vs. Lambda calculus. C# is designed with loops in mind whereas F# is built from the principles of Lambda calculus. For a very good (and free) book on the principles of Lambda calculus, see Structure and Interpretation of Computer Programs, by Abelson, Sussman, and Sussman.
Regarding tail calls in F#, for a very good introductory article, see Detailed Introduction to Tail Calls in F#. Finally, here is an article that covers the difference between non-tail recursion and tail-call recursion (in F#): Tail-recursion vs. non-tail recursion in F sharp.
If you want to read about some of the design differences of tail-call recursion between C# and F#, see Generating Tail-Call Opcode in C# and F#.
If you care enough to want to know what conditions prevent the C# compiler from performing tail-call optimizations, see this article: JIT CLR tail-call conditions.
A: Tail recursion refers to the recursive call being the last logical instruction in the recursive algorithm.
Typically in recursion, you have a base-case which is what stops the recursive calls and begins popping the call stack. To use a classic example, though more C-ish than Lisp, the factorial function illustrates tail recursion. The recursive call occurs after checking the base-case condition.
factorial(x, fac=1) {
if (x == 1)
return fac;
else
return factorial(x-1, x*fac);
}
The initial call to factorial would be factorial(n) where fac=1 (default value) and n is the number for which the factorial is to be calculated.
A: Recursion means a function calling itself. For example:
(define (un-ended name)
(un-ended 'me)
(print "How can I get here?"))
Tail recursion means that the recursive call concludes the function:
(define (un-ended name)
(print "hello")
(un-ended 'me))
See, the last thing the un-ended function (procedure, in Scheme jargon) does is to call itself. Another (more useful) example is:
(define (map lst op)
(define (helper done left)
(if (nil? left)
done
(helper (cons (op (car left))
done)
(cdr left))))
(reverse (helper '() lst)))
In the helper procedure, the LAST thing it does if the left is not nil is to call itself (AFTER cons something and cdr something). This is basically how you map a list.
Tail recursion has the great advantage that the interpreter (or compiler, depending on the language and vendor) can optimize it and transform it into something equivalent to a while loop. As a matter of fact, in the Scheme tradition, most "for" and "while" looping is done in a tail-recursive manner (there is no for or while, as far as I know).
A: A function is tail recursive if each recursive case consists only of a call to the function itself, possibly with different arguments. Or, tail recursion is recursion with no pending work. Note that this is a programming-language independent concept.
Consider the function defined as:
g(a, b, n) = a * b^n
A possible tail-recursive formulation is:
g(a, b, n) | n is zero = a
| n is odd = g(a*b, b, n-1)
| otherwise = g(a, b*b, n/2)
If you examine each RHS of g(...) that involves a recursive case, you'll find that the whole body of the RHS is a call to g(...), and only that. This definition is tail recursive.
For comparison, a non-tail-recursive formulation might be:
g'(a, b, n) = a * f(b, n)
f(b, n) | n is zero = 1
| n is odd = f(b, n-1) * b
| otherwise = f(b, n/2) ^ 2
Each recursive case in f(...) has some pending work that needs to happen after the recursive call.
Note that when we went from g' to g, we made essential use of associativity
(and commutativity) of multiplication. This is not an accident, and most cases where you will need to transform recursion to tail-recursion will make use of such properties: if we want to eagerly do some work rather than leave it pending, we have to use something like associativity to prove that the answer will be the same.
Tail recursive calls can be implemented with a backwards jump, as opposed to using a stack for normal recursive calls. Note that detecting a tail call, or emitting a backwards jump is usually straightforward. However, it is often hard to rearrange the arguments such that the backwards jump is possible. Since this optimization is not free, language implementations can choose not to implement this optimization, or require opt-in by marking recursive calls with a 'tailcall' instruction and/or choosing a higher optimization setting.
Some languages (e.g. Scheme) do, however, require all implementations to optimize tail-recursive functions, maybe even all calls in tail position.
Backwards jumps are usually abstracted as a (while) loop in most imperative languages, and tail-recursion, when optimized to a backwards jump, is isomorphic to looping.
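To make that isomorphism concrete, here is the tail-recursive g above hand-translated into a loop. This Python sketch is mine, not part of the original answer; each recursive case becomes a reassignment of the arguments followed by the backwards jump:
def g(a, b, n):
    # Computes a * b**n by repeated squaring.
    while n != 0:
        if n % 2 == 1:
            a, n = a * b, n - 1   # the "n is odd" case
        else:
            b, n = b * b, n // 2  # the "otherwise" case
    return a                      # the "n is zero" case

print(g(1, 2, 10))  # 1 * 2**10 == 1024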
A: *
*A tail-recursive function is a recursive function in which the recursive call is the last thing executed in the function.
In a regular recursive function, every recursive invocation adds another layer to the call stack, so the space complexity is O(n). Tail recursion brings the space complexity down from
O(n) => O(1)
*
*Tail call optimization means that it is possible to call a function from another function without growing the call stack.
*We should write recursive solutions tail-recursively, but certain languages do not actually support tail recursion in the engines that run them. Proper tail calls have been in the ECMAScript specification since ES2015 (ES6), but most of the engines that run JavaScript have not implemented them, so you won't achieve O(1) in those engines; as of January 1, 2020, Safari is the only browser that supports tail call optimization.
*Haskell has tail recursion optimization; Java does not (the JVM does not eliminate tail calls).
Regular Recursive Factorial
function Factorial(x) {
//Base case x<=1
if (x <= 1) {
return 1;
} else {
// x is waiting for the return value of Factorial(x-1)
// the last thing we do is NOT applying the recursive call
// after recursive call we still have to multiply.
return x * Factorial(x - 1);
}
}
we have 4 calls in our call stack.
Factorial(4); // waiting in the memory for Factorial(3)
4 * Factorial(3); // waiting in the memory for Factorial(2)
4 * (3 * Factorial(2)); // waiting in the memory for Factorial(1)
4 * (3 * (2 * Factorial(1)));
4 * (3 * (2 * 1));
*
*We are making 4 Factorial() calls, so space is O(n)
*this might cause a stack overflow
Tail Recursive Factorial
function tailFactorial(x, totalSoFar = 1) {
//Base Case: x===0. In recursion there must be base case. Otherwise they will never stop
if (x === 0) {
return totalSoFar;
} else {
// there is nothing waiting for tailFactorial to complete. we are returning another instance of tailFactorial()
// we are not doing any additional computation with what we get back from this recursive call
return tailFactorial(x - 1, totalSoFar * x);
}
}
*
*We don't need to remember anything after we make our recursive call
A: There are two basic kinds of recursions: head recursion and tail recursion.
In head recursion, a function makes its recursive call and then
performs some more calculations, maybe using the result of the
recursive call, for example.
In a tail recursive function, all calculations happen first and
the recursive call is the last thing that happens.
Taken from this super awesome post.
Please consider reading it.
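Here is a small Python sketch of the two kinds (mine, not from the linked post):
def head_sum(n):
    # Head recursion: recurse first, then do the addition
    # after the recursive call returns.
    if n == 0:
        return 0
    return head_sum(n - 1) + n

def tail_sum(n, acc=0):
    # Tail recursion: do the addition first, then make the
    # recursive call as the very last thing.
    if n == 0:
        return acc
    return tail_sum(n - 1, acc + n)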
A: This question has a lot of great answers... but I cannot help but chime in with an alternative take on how to define "tail recursion", or at least "proper tail recursion." Namely: should one look at it as a property of a particular expression in a program? Or should one look at it as a property of an implementation of a programming language?
For more on the latter view, there is a classic paper by Will Clinger, "Proper Tail Recursion and Space Efficiency" (PLDI 1998), that defined "proper tail recursion" as a property of a programming language implementation. The definition is constructed to allow one to ignore implementation details (such as whether the call stack is actually represented via the runtime stack or via a heap-allocated linked list of frames).
To accomplish this, it uses asymptotic analysis: not of program execution time as one usually sees, but rather of program space usage. This way, the space usage of a heap-allocated linked list vs a runtime call stack ends up being asymptotically equivalent; so one gets to ignore that programming language implementation detail (a detail which certainly matters quite a bit in practice, but can muddy the waters quite a bit when one attempts to determine whether a given implementation is satisfying the requirement to be "properly tail recursive").
The paper is worth careful study for a number of reasons:
*
*It gives an inductive definition of the tail expressions and tail calls of a program. (Such a definition, and why such calls are important, seems to be the subject of most of the other answers given here.)
Here are those definitions, just to provide a flavor of the text:
Definition 1 The tail expressions of a program written in Core Scheme are defined inductively as follows.
*
*The body of a lambda expression is a tail expression
*If (if E0 E1 E2) is a tail expression, then both E1 and E2 are tail expressions.
*Nothing else is a tail expression.
Definition 2 A tail call is a tail expression that is a procedure call.
(a tail recursive call, or as the paper says, "self-tail call" is a special case of a tail call where the procedure is invoked itself.)
*
*It provides formal definitions for six different "machines" for evaluating Core Scheme, where each machine has the same observable behavior except for the asymptotic space complexity class that each is in.
For example, after giving definitions for machines with respectively, 1. stack-based memory management, 2. garbage collection but no tail calls, 3. garbage collection and tail calls, the paper continues onward with even more advanced storage management strategies, such as 4. "evlis tail recursion", where the environment does not need to be preserved across the evaluation of the last sub-expression argument in a tail call, 5. reducing the environment of a closure to just the free variables of that closure, and 6. so-called "safe-for-space" semantics as defined by Appel and Shao.
*In order to prove that the machines actually belong to six distinct space complexity classes, the paper, for each pair of machines under comparison, provides concrete examples of programs that will expose asymptotic space blowup on one machine but not the other.
(Reading over my answer now, I'm not sure I've managed to actually capture the crucial points of the Clinger paper. But, alas, I cannot devote more time to developing this answer right now.)
A: Many people have already explained recursion here. I would like to cite a couple of thoughts about some advantages that recursion gives from the book “Concurrency in .NET, Modern patterns of concurrent and parallel programming” by Riccardo Terrell:
“Functional recursion is the natural way to iterate in FP because it
avoids mutation of state. During each iteration, a new value is passed
into the loop constructor instead to be updated (mutated). In
addition, a recursive function can be composed, making your program
more modular, as well as introducing opportunities to exploit
parallelization."
Here also are some interesting notes from the same book about tail recursion:
Tail-call recursion is a technique that converts a regular recursive
function into an optimized version that can handle large inputs
without any risks and side effects.
NOTE The primary reason for a tail call as an optimization is to
improve data locality, memory usage, and cache usage. By doing a tail
call, the callee uses the same stack space as the caller. This reduces
memory pressure. It marginally improves the cache because the same
memory is reused for subsequent callers and can stay in the cache,
rather than evicting an older cache line to make room for a new cache
line.
A: It means that rather than needing to push the instruction pointer on the stack, you can simply jump to the top of a recursive function and continue execution. This allows for functions to recurse indefinitely without overflowing the stack.
I wrote a blog post on the subject, which has graphical examples of what the stack frames look like.
A: An important point is that tail recursion is essentially equivalent to looping. It's not just a matter of compiler optimization, but a fundamental fact about expressiveness. This goes both ways: you can take any loop of the form
while(E) { S }; return Q
where E and Q are expressions and S is a sequence of statements, and turn it into a tail recursive function
f() = if E then { S; return f() } else { return Q }
Of course, E, S, and Q have to be defined to compute some interesting value over some variables. For example, the looping function
sum(n) {
int i = 1, k = 0;
while( i <= n ) {
k += i;
++i;
}
return k;
}
is equivalent to the tail-recursive function(s)
sum_aux(n,i,k) {
if( i <= n ) {
return sum_aux(n,i+1,k+i);
} else {
return k;
}
}
sum(n) {
return sum_aux(n,1,0);
}
(This "wrapping" of the tail-recursive function with a function with fewer parameters is a common functional idiom.)
A: The best way for me to understand tail call recursion is as a special case of recursion where the last call (or the tail call) is the function itself.
Comparing the examples provided in Python:
def recsum(x):
if x == 1:
return x
else:
return x + recsum(x - 1)
^RECURSION
def tailrecsum(x, running_total=0):
if x == 0:
return running_total
else:
return tailrecsum(x - 1, running_total + x)
^TAIL RECURSION
As you can see in the general recursive version, the final call in the code block is x + recsum(x - 1). So after calling the recsum method, there is another operation which is x + ...
However, in the tail recursive version, the final call (or the tail call) in the code block is tailrecsum(x - 1, running_total + x), which means the last call is made to the method itself and there is no operation after that.
This point is important because tail recursion as seen here is not making the memory grow because when the underlying VM sees a function calling itself in a tail position (the last expression to be evaluated in a function), it eliminates the current stack frame, which is known as Tail Call Optimization(TCO).
EDIT
NB. Do bear in mind that the example above is written in Python, whose runtime does not support TCO. This is just an example to explain the point. TCO is supported in languages like Scheme, Haskell, etc.
A: Here is a quick code snippet comparing two functions. The first is traditional recursion for finding the factorial of a given number. The second uses tail recursion.
Very simple and intuitive to understand.
An easy way to tell if a recursive function is tail recursive is if it returns a concrete value in the base case. Meaning that it doesn't return 1 or true or anything like that; it will more than likely return some variant of one of the method parameters.
Another way to tell is if the recursive call is free of any addition, arithmetic, modification, etc. Meaning it's nothing but a pure recursive call.
public static int factorial(int mynumber) {
    if (mynumber == 1) {
        return 1;
    } else {
        return mynumber * factorial(mynumber - 1);
    }
}
public static int tail_factorial(int mynumber, int sofar) {
    if (mynumber == 1) {
        return sofar;
    } else {
        return tail_factorial(mynumber - 1, sofar * mynumber);
    }
}
A: Consider a simple function that adds the first N natural numbers. (e.g. sum(5) = 0 + 1 + 2 + 3 + 4 + 5 = 15).
Here is a simple JavaScript implementation that uses recursion:
function recsum(x) {
if (x === 0) {
return 0;
} else {
return x + recsum(x - 1);
}
}
If you called recsum(5), this is what the JavaScript interpreter would evaluate:
recsum(5)
5 + recsum(4)
5 + (4 + recsum(3))
5 + (4 + (3 + recsum(2)))
5 + (4 + (3 + (2 + recsum(1))))
5 + (4 + (3 + (2 + (1 + recsum(0)))))
5 + (4 + (3 + (2 + (1 + 0))))
5 + (4 + (3 + (2 + 1)))
5 + (4 + (3 + 3))
5 + (4 + 6)
5 + 10
15
Note how every recursive call has to complete before the JavaScript interpreter begins to actually do the work of calculating the sum.
Here's a tail-recursive version of the same function:
function tailrecsum(x, running_total = 0) {
if (x === 0) {
return running_total;
} else {
return tailrecsum(x - 1, running_total + x);
}
}
Here's the sequence of events that would occur if you called tailrecsum(5), (which would effectively be tailrecsum(5, 0), because of the default second argument).
tailrecsum(5, 0)
tailrecsum(4, 5)
tailrecsum(3, 9)
tailrecsum(2, 12)
tailrecsum(1, 14)
tailrecsum(0, 15)
15
In the tail-recursive case, with each evaluation of the recursive call, the running_total is updated.
Note: The original answer used examples from Python. These have been changed to JavaScript, since Python interpreters don't support tail call optimization. However, while tail call optimization is part of the ECMAScript 2015 spec, most JavaScript interpreters don't support it.
A: This excerpt from the book Programming in Lua shows how to make a proper tail call (in Lua, but the idea applies to Lisp too) and why it's better.
A tail call [tail recursion] is a kind of goto dressed
as a call. A tail call happens when a
function calls another as its last
action, so it has nothing else to do.
For instance, in the following code,
the call to g is a tail call:
function f (x)
return g(x)
end
After f calls g, it has nothing else
to do. In such situations, the program
does not need to return to the calling
function when the called function
ends. Therefore, after the tail call,
the program does not need to keep any
information about the calling function
in the stack. ...
Because a proper tail call uses no
stack space, there is no limit on the
number of "nested" tail calls that a
program can make. For instance, we can
call the following function with any
number as argument; it will never
overflow the stack:
function foo (n)
if n > 0 then return foo(n - 1) end
end
... As I said earlier, a tail call is a
kind of goto. As such, a quite useful
application of proper tail calls in
Lua is for programming state machines.
Such applications can represent each
state by a function; to change state
is to go to (or to call) a specific
function. As an example, let us
consider a simple maze game. The maze
has several rooms, each with up to
four doors: north, south, east, and
west. At each step, the user enters a
movement direction. If there is a door
in that direction, the user goes to
the corresponding room; otherwise, the
program prints a warning. The goal is
to go from an initial room to a final
room.
This game is a typical state machine,
where the current room is the state.
We can implement such maze with one
function for each room. We use tail
calls to move from one room to
another. A small maze with four rooms
could look like this:
function room1 ()
local move = io.read()
if move == "south" then return room3()
elseif move == "east" then return room2()
else print("invalid move")
return room1() -- stay in the same room
end
end
function room2 ()
local move = io.read()
if move == "south" then return room4()
elseif move == "west" then return room1()
else print("invalid move")
return room2()
end
end
function room3 ()
local move = io.read()
if move == "north" then return room1()
elseif move == "east" then return room4()
else print("invalid move")
return room3()
end
end
function room4 ()
print("congratulations!")
end
So you see, when you make a recursive call like:
function x(n)
    if n == 0 then return 0 end
    n = n - 2
    return x(n) + 1
end
This is not tail recursive because you still have things to do (add 1) in that function after the recursive call is made. If you input a very high number it will probably cause a stack overflow.
A: A recursive function is a function that calls itself.
It allows programmers to write efficient programs using a minimal amount of code.
The downside is that they can cause infinite loops and other unexpected results if not written properly.
I will explain both Simple Recursive function and Tail Recursive function
In order to write a simple recursive function:
*
*The first point to consider is when you should come out of the recursion, which is the base case (the if condition)
*The second is what processing to do when the function calls itself
From the given example:
public static int fact(int n){
if(n <=1)
return 1;
else
return n * fact(n-1);
}
From the above example
if(n <=1)
return 1;
is the deciding factor for when to exit the recursion
else
return n * fact(n-1);
is the actual processing to be done
Let me break the task down one by one for easy understanding.
Let us see what happens internally if I run fact(4)
*
*Substituting n=4
public static int fact(4){
if(4 <=1)
return 1;
else
return 4 * fact(4-1);
}
The if condition fails, so it goes to the else branch
and returns 4 * fact(3)
*In stack memory, we have 4 * fact(3)
Substituting n=3
public static int fact(3){
if(3 <=1)
return 1;
else
return 3 * fact(3-1);
}
The if condition fails, so it goes to the else branch
and returns 3 * fact(2)
Remember we called 4 * fact(3)
The output for fact(3) = 3 * fact(2)
So far the stack has 4 * fact(3) = 4 * 3 * fact(2)
*In stack memory, we have 4 * 3 * fact(2)
Substituting n=2
public static int fact(2){
if(2 <=1)
return 1;
else
return 2 * fact(2-1);
}
The if condition fails, so it goes to the else branch
and returns 2 * fact(1)
Remember we called 4 * 3 * fact(2)
The output for fact(2) = 2 * fact(1)
So far the stack has 4 * 3 * fact(2) = 4 * 3 * 2 * fact(1)
*In stack memory, we have 4 * 3 * 2 * fact(1)
Substituting n=1
public static int fact(1){
if(1 <=1)
return 1;
else
return 1 * fact(1-1);
}
The if condition is true,
so it returns 1
Remember we called 4 * 3 * 2 * fact(1)
The output for fact(1) = 1
So far the stack has 4 * 3 * 2 * fact(1) = 4 * 3 * 2 * 1
Finally, the result of fact(4) = 4 * 3 * 2 * 1 = 24
The tail-recursive version would be (Java has no default parameter values, so the initial call is fact(n, 1)):
public static int fact(int x, int running_total) {
    if (x == 1) {
        return running_total;
    } else {
        return fact(x - 1, running_total * x);
    }
}
*
*Substituting n=4
public static int fact(4, running_total=1) {
if (x==1) {
return running_total;
} else {
return fact(4-1, running_total*4);
}
}
The if condition fails, so it goes to the else branch
and returns fact(3, 4)
*In stack memory, we have fact(3, 4)
Substituting n=3
public static int fact(3, running_total=4) {
if (x==1) {
return running_total;
} else {
return fact(3-1, 4*3);
}
}
The if condition fails, so it goes to the else branch
and returns fact(2, 12)
*In stack memory, we have fact(2, 12)
Substituting n=2
public static int fact(2, running_total=12) {
if (x==1) {
return running_total;
} else {
return fact(2-1, 12*2);
}
}
The if condition fails, so it goes to the else branch
and returns fact(1, 24)
*In stack memory, we have fact(1, 24)
Substituting n=1
public static int fact(1, running_total=24) {
if (x==1) {
return running_total;
} else {
return fact(1-1, 24*1);
}
}
The if condition is true,
so it returns running_total
The output for running_total = 24
Finally, the result of fact(4,1) = 24
A: In Java, here's a possible tail recursive implementation of the Fibonacci function:
public int tailRecursive(final int n) {
    if (n <= 2)
        return 1;
    return tailRecursiveAux(n - 2, 1, 1);
}
private int tailRecursiveAux(int n, int prev, int acc) {
    // Two accumulators carry the last two Fibonacci numbers,
    // so nothing is left pending after the recursive call.
    if (n == 0)
        return acc;
    return tailRecursiveAux(n - 1, acc, prev + acc);
}
Contrast this with the standard recursive implementation:
public int recursive(final int n) {
if (n <= 2)
return 1;
return recursive(n - 1) + recursive(n - 2);
}
A: I'm not a Lisp programmer, but I think this will help.
Basically it's a style of programming such that the recursive call is the last thing you do.
A: A tail recursive function is a recursive function where the last operation it does before returning is make the recursive function call. That is, the return value of the recursive function call is immediately returned. For example, your code would look like this:
def recursiveFunction(some_params):
# some code here
return recursiveFunction(some_args)
# no code after the return statement
Compilers and interpreters that implement tail call optimization or tail call elimination can optimize recursive code to prevent stack overflows. If your compiler or interpreter doesn't implement tail call optimization (such as the CPython interpreter) there is no additional benefit to writing your code this way.
For example, this is a standard recursive factorial function in Python:
def factorial(number):
if number == 1:
# BASE CASE
return 1
else:
# RECURSIVE CASE
# Note that `number *` happens *after* the recursive call.
# This means that this is *not* tail call recursion.
return number * factorial(number - 1)
And this is a tail call recursive version of the factorial function:
def factorial(number, accumulator=1):
if number == 0:
# BASE CASE
return accumulator
else:
# RECURSIVE CASE
# There's no code after the recursive call.
# This is tail call recursion:
return factorial(number - 1, number * accumulator)
print(factorial(5))
(Note that even though this is Python code, the CPython interpreter doesn't do tail call optimization, so arranging your code like this confers no runtime benefit.)
You may have to make your code a bit more unreadable to make use of tail call optimization, as shown in the factorial example. (For example, the base case is now a bit unintuitive, and the accumulator parameter is effectively used as a sort of global variable.)
But the benefit of tail call optimization is that it prevents stack overflow errors. (I'll note that you can get this same benefit by using an iterative algorithm instead of a recursive one.)
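For example, here is an iterative factorial (a sketch) that gets the same protection without relying on the interpreter:
def factorial(number):
    # A plain loop uses a single stack frame, so it cannot
    # overflow the call stack no matter how large number is.
    result = 1
    while number > 1:
        result *= number
        number -= 1
    return result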
Stack overflows are caused when the call stack has had too many frame objects pushed onto. A frame object is pushed onto the call stack when a function is called, and popped off the call stack when the function returns. Frame objects contain info such as local variables and what line of code to return to when the function returns.
If your recursive function makes too many recursive calls without returning, the call stack can exceed its frame object limit. (The number varies by platform; in Python it is 1000 frame objects by default.) This causes a stack overflow error. (Hey, that's where the name of this website comes from!)
However, if the last thing your recursive function does is make the recursive call and return its return value, then there's no reason the current frame object needs to stay on the call stack. After all, if there's no code after the recursive function call, there's no reason to hang on to the current frame object's local variables. So we can get rid of the current frame object immediately rather than keep it on the call stack. The end result of this is that your call stack doesn't grow in size, and thus cannot stack overflow.
A compiler or interpreter must have tail call optimization as a feature for it to be able to recognize when tail call optimization can be applied. Even then, you may have to rearrange the code in your recursive function to make use of tail call optimization, and it's up to you if this potential decrease in readability is worth the optimization.
A: Here is a Common Lisp example that does factorials using tail recursion. Because the tail calls consume no stack, one could perform insanely large factorial computations ...
(defun ! (n &optional (product 1))
(if (zerop n) product
(! (1- n) (* product n))))
And then for fun you could try (format nil "~R" (! 25))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2008"
} |
Q: 100% width textarea ignores parent element's width in IE7 I have the following textarea in a table:
<table width="300"><tr><td>
<textarea style="width:100%">
longstringlongstringlongstringlongstringlongstringlongstringlongstringlongstringlongstringlongstringlongstringlongstringlongstring
</textarea>
</td></tr></table>
With a long string in the textarea, the textarea stretches out to accommodate it in one line in IE7, but retains its 300px width in other browsers.
Any ideas as to how to fix this in IE?
A: Apply the width to the td, not the table.
EDIT: @Emmett - the width could just as easily be applied via CSS.
td {
width: 300px;
}
produces the desired result. Or, if you're using jQuery, you could add the width through script:
$('textarea[width="100%"]').parent('td').css('width', '300px');
Point being, there's more than one way to apply a width to a table cell, if development constraints prevent you from applying it directly.
A: @Peter Meyer, Jim Robert
I tried different overflow values, to no avail.
Experimenting with different values for the wrap attribute and the word-wrap style also wasn't fruitful.
EDIT:
@dansays, seanb
Due to some awkward application-specific constraints, the width can only be applied to the table.
@travis
Setting style="word-break:break-all;" sort of worked! It still wraps differently in IE7 and FF. I'll accept this answer if nothing better comes up.
A: Another hacky option, but the only option that works for me - none of the other suggestions on this page do - is to wrap the textarea in a single cell table with a fixed table layout.
<table style="width:100%;table-layout:fixed"><tr><td>
<textarea style="width:100%">longstringlongstringlongstringlongstringlongstringlongstringlongstringlongstringlongstringlongstringlongstringlongstringlongstring</textarea>
</td></tr></table>
A: Another very hacky option, if you are stuck with a lot of constraints, but know what the surrounding dom will look like:
style="width:100%;width:expression(this.parentNode.parentNode.parentNode.parentNode.width +'px')"
not pretty, but does work in IE7.
Using jquery or similar would be a much neater solution, but it depends on the other constraints you have.
A: did you try...
overflow: hidden;
??
I'm not sure if it should be on the table or the textarea... experiment a bit
A: or, how about:
overflow: scroll;
Edit:
I actually tested this. I think the behavior is such because the width is on the table, which I believe (I have nothing to back this up) I read long ago that the table width is a suggested width, but can be expanded to accommodate its content. Not sure. I know if you use a <DIV> rather than a table, it works. Additionally, if you apply the 300 pixel width to the containing <TD> element as opposed to the <TABLE> element, it works as well. Also, the overflow: scroll does nothing! :P
Nice, funky IE behavior, for sure!
A: IE also supports the word-break CSS 3 property.
A: Best thing I could find to make it work, a little hacky:
wrap textarea with <div style="width:300px; overflow:auto;">
might want to play around with the overflow value
A: The overflow property is the way to go. In particular, if you want the extra text to be ignored, you can use "overflow:hidden" as a css property on the text.
In general, when a browser has an unbreakable object, such as a long string without spaces, it can have a conflict between various size constraints - those of the string (long) vs its container (short). If you see different behavior in different browsers, they are just resolving this conflict differently.
By the way, there is a nice trick available for long strings - the <wbr> tag. If your browser sees longstring<wbr>longstring, then it will try to fit it in the container as a single, unbroken string -- but if it can't fit, it will break that string in half at the wbr. It's basically a break point with an implicit request to not break there, if possible (sort of like a hyphen in printed texts). By the way, it's a little buggy in some versions of Safari and Opera - check out this quirksmode page for more.
A: I've run into this problem before. It's related to how HTML parses table and cell widths.
You're fine setting 300 as a width as long as the contents of the element can never exceed that (setting a div with a definite width inside and an overflow rule is my favorite way).
But absent a solution like the above, the minute ANY element pushes you past that width, all bets are off. The element becomes as wide as it has to to accommodate the contents.
Additional tip - encase your width values in whatever set of quotes will nest the value properly (<table width='300'>); otherwise, if someone comes along and changes the value to a %, the % will be ignored.
Unfortunately, you're always going to have trouble breaking strings that do not have 'natural' breaks in IE, unless you can do something to break them up via code.
A: To solve this issue, use spaces in your text, and you can also use this code:
overflow:hidden
A: Give the width in pixels. This should work properly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How to handle variable width FieldObjects in Crystal Reports I have a Crystal Report which is viewed via a CrystalReportViewer control on an .aspx page (using VS2008).
The report has two data-driven FieldObjects (which can contain a variable number of chars) which I would like to display on the same line beside each other.
Problem is when the text in the first FieldObject is too long it overlaps the text in the second FieldObject.
I have tried setting the 'CanGrow=True' and 'MaxNumberOfLines=1' on the first FieldObject to 'push' the second FieldObject further to the right, but this didn't work.
How do I get the second FieldObject to always display immediately after the first FieldObject regardless of the length of the text in the first?
Cheers in advance of any knowledge you can drop.
A: You can add a text object to the report. While editing the text of the text object, drag the field you want to show from the object explorer into the text box. Then hit space, then drag the second field into the same text box. Your two fields will always be one space apart. You could, of course, add more spaces or any other text you want.
A: Or you can create a function which returns field1 + " " + field2 and add the function to the report.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Convert Web.config from .NET 2.0 to 3.5 What is the minimum I need to add to a .NET 2.0 WebSite's web.config to make it .NET 3.5?
Visual Studio adds all the config sections and script handlers, but if you aren't using those are they are really necessary?
Is there a command line tool to "upgrade" a .NET 2.0 web.config to 3.5?
A: There is a good description of the 3.5 web.config available here:
https://web.archive.org/web/20211020153237/https://www.4guysfromrolla.com/articles/121207-1.aspx
The assemblies and config sections are important because they tell the runtime to use the new 3.5 dlls instead of the 2.0 dlls
The codedom section tells the compiler to use 3.5.
If you're not using ASP.Net Ajax you can probably skip the rest. I've never tested that though.
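As a rough sketch of those first two pieces, the key additions look something like this (assembly versions and public key tokens as shipped with .NET 3.5; verify against a web.config generated by VS2008 before relying on it):
<configuration>
  <system.web>
    <compilation>
      <assemblies>
        <!-- the new 3.5 assemblies -->
        <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
        <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
      </assemblies>
    </compilation>
  </system.web>
  <system.codedom>
    <compilers>
      <!-- tells the compiler to target C# 3.0 -->
      <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4"
                type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
        <providerOption name="CompilerVersion" value="v3.5"/>
      </compiler>
    </compilers>
  </system.codedom>
</configuration>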
A: I don't think either of these answers are definitive. The 4guysfromrolla reference is helpful.
Deploying .NET 3.5 to 100+ sites will be a pain. You can't just upgrade the server to the new framework, you have to upgrade the web.config of each site. As far as I can tell, there is no command line tool to do it.
A: If you want to upgrade every site on a server you could probably make changes to the machine.config
A: It depends on which features you want to include. Most of the 3.5 ASP.NET extensions are optional. You will want to include the assembly for System.Core and System.Xml.Linq. You will also to add compiler support for C# 3.0 if you plan to use that in your code behind. If you're deploying to IIS 7 there are HTTP handlers for the ASP.NET extensions and script modules.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: NHibernate SetTimeout on ICriteria Could someone tell me what the units the SetTimeout(int) method in the ICriteria interface uses?
Is it milliseconds, seconds, minutes or other?
A: I think it's seconds. The NHibernate API closely mirrors Hibernate Core for Java, where the Criteria.setTimeout(int) method uses seconds as the units (see also Statement.setQueryTimeout(int)).
Also, after looking at some NHibernate source, it appears that it's using that value to set the timeout for the underlying ADO.NET query, which uses seconds.
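For example (a sketch; the Order class and an open session are assumed), the following gives the query 30 seconds before the underlying command times out:
var orders = session.CreateCriteria(typeof(Order))
    .SetTimeout(30) // seconds, passed through to the ADO.NET command timeout
    .List();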
A: A little bit of poking around suggests that it could be seconds:
Assuming that ICriteria is the same as the Criteria interface in Hibernate core, then the JavaDoc for org.hibernate.Criteria provides a hint - the "see also" link to java.sql.Statement.setQueryTimeout(). The latter refers to its timeout parameter as seconds.
Assuming that the NHibernate implementation follows the implied contract of that method, then that should be fine. However, for peace of mind's sake, I went and looked for some NHibernate specific stuff. There are various references to CommandTimeout; for example, here, related to NHibernate. Sure enough, the documentation for CommandTimeout states that it's seconds.
I almost didn't post the above, because I don't know the answer outright, and can't find any concrete documentation - but since there is so little on the issue, I figured it couldn't hurt to present these findings.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: How to get facet ranges in solr results? Assume that I have a field called price for the documents in Solr and I have that field faceted. I want to get the facets as ranges of values (eg: 0-100, 100-500, 500-1000, etc). How to do it?
I can specify the ranges beforehand, but I also want to know whether it is possible to calculate the ranges (say for 5 values) automatically based on the values in the documents?
A: I have worked out how to calculate sensible dynamic facets for product price ranges. The solution involves some pre-processing of documents and some post-processing of the query results, but it requires only one query to Solr, and should even work on old version of Solr like 1.4.
Round up prices before submission
First, before submitting the document, round up the price to the nearest "nice round facet boundary" and store it in a "rounded_price" field. Users like their facets to look like "250-500" not "247-483", and rounding also means you get back hundreds of price facets not millions of them. With some effort the following code can be generalised to round nicely at any price scale:
public static decimal RoundPrice(decimal price)
{
if (price < 25)
return Math.Ceiling(price);
else if (price < 100)
return Math.Ceiling(price / 5) * 5;
else if (price < 250)
return Math.Ceiling(price / 10) * 10;
else if (price < 1000)
return Math.Ceiling(price / 25) * 25;
else if (price < 2500)
return Math.Ceiling(price / 100) * 100;
else if (price < 10000)
return Math.Ceiling(price / 250) * 250;
else if (price < 25000)
return Math.Ceiling(price / 1000) * 1000;
else if (price < 100000)
return Math.Ceiling(price / 2500) * 2500;
else
return Math.Ceiling(price / 5000) * 5000;
}
Permissible prices go 1,2,3,...,24,25,30,35,...,95,100,110,...,240,250,275,300,325,...,975,1000 and so forth.
Get all facets on rounded prices
Second, when submitting the query, request all facets on rounded prices sorted by price: facet.field=rounded_price. Thanks to the rounding, you'll get at most a few hundred facets back.
Combine adjacent facets into larger facets
Third, after you have the results, the user wants to see only 3 to 7 facets, not hundreds of facets. So, combine adjacent facets into a few large facets (called "segments") trying to get a roughly equal number of documents in each segment. The following rather more complicated code does this, returning tuples of (start, end, count) suitable for performing range queries. The counts returned will be correct provided prices have been rounded up to the nearest boundary:
public static List<Tuple<string, string, int>> CombinePriceFacets(int nSegments, ICollection<KeyValuePair<string, int>> prices)
{
var ranges = new List<Tuple<string, string, int>>();
int productCount = prices.Sum(p => p.Value);
int productsRemaining = productCount;
if (nSegments < 2)
return ranges;
int segmentSize = productCount / nSegments;
string start = "*";
string end = "0";
int count = 0;
int totalCount = 0;
int segmentIdx = 1;
foreach (KeyValuePair<string, int> price in prices)
{
end = price.Key;
count += price.Value;
totalCount += price.Value;
productsRemaining -= price.Value;
if (totalCount >= segmentSize * segmentIdx)
{
ranges.Add(new Tuple<string, string, int>(start, end, count));
start = end;
count = 0;
segmentIdx += 1;
}
if (segmentIdx == nSegments)
{
ranges.Add(new Tuple<string, string, int>(start, "*", count + productsRemaining));
break;
}
}
return ranges;
}
Filter results by selected facet
Fourth, suppose ("250","500",38) was one of the resulting segments. If the user selects "$250 to $500" as a filter, simply do a filter query fq=price:[250 TO 500]
A: There may well be a better Solr-specific answer, but I work with straight Lucene, and since you're not getting much traction I'll take a stab. There, I'd create and populate a Filter with a FilteredQuery wrapping the original Query. Then I'd get a FieldCache for the field of interest. Enumerate the hits in the filter's bitset, and for each hit, you get the value of the field from the field cache, and add it to a SortedSet. When you've got all of the hits, divide the set into the number of ranges you want (five to seven is a good number according to the user interface guys), and rather than a single-valued constraint, your facets will be a range query with the lower and upper bounds of each of those subsets.
I'd recommend using some special-case logic for a small number of values; obviously, if you only have four distinct values, it doesn't make sense to try and make 5 range refinements out of them. Below a certain threshold (say 3*your ideal number of ranges), you just show the facets normally rather than ranges.
A: You can use solr facet ranges
http://wiki.apache.org/solr/SimpleFacetParameters#Facet_by_Range
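For example (a sketch using the range facet parameters documented on that wiki page; adjust the host and field names to your index):
http://localhost:8983/solr/select?q=*:*&rows=0&facet=true&facet.range=price&facet.range.start=0&facet.range.end=1000&facet.range.gap=100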
A: To answer your first question, you can get facet ranges by using the the generic facet query support. Here's an example:
http://localhost:8983/solr/select?q=video&rows=0&facet=true&facet.query=price:[*+TO+500]&facet.query=price:[500+TO+*]
As for your second question (automatically suggesting facet ranges), that's not yet implemented. Some argue that this kind of querying would be best implemented on your application rather than letting Solr "guess" the best facet ranges.
Here are some discussions on the topic:
*
*(Archived) https://web.archive.org/web/20100416235126/http://old.nabble.com/Re:-faceted-browsing-p3753053.html
*(Archived) https://web.archive.org/web/20090430160232/http://www.nabble.com/Re:-Sorting-p6803791.html
*(Archived) https://web.archive.org/web/20090504020754/http://www.nabble.com/Dynamically-calculated-range-facet-td11314725.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: How do you get the ethernet address using Java? I would like to retrieve the ethernet address of the network interface that is used to access a particular website.
How can this be done in Java?
Solution: Note that the accepted solution of getHardwareAddress is only available in Java 6. There does not seem to be a solution for Java 5 aside from executing i(f|p)config.
A: You can get the address that connects to your ServerSocket using http://java.sun.com/javase/6/docs/api/java/net/NetworkInterface.html#getInetAddresses()
However if your client is connecting via a NAT, then you will get the address of the router and NOT the Ethernet address. If it is on your local network (via a hub/switch, no router with NAT) the it wil work as intended.
A: Actually, beyond other right answers (JDK 6; exec 'ifconfig'), there are JNI-based libraries. Java Uuid Generator (JUG) 2.0 has code for some platforms. This works on JDK 1.2 and above at least (maybe 1.1 even)
A: java.net.NetworkInterface.getHardwareAddress (method added in Java 6)
It has to be called on the machine you are interested in - the MAC is not transferred across network boundaries (i.e. LAN and WAN). If you want to make use of it on a website server to interrogate the clients, you'd have to run an applet that would report the result back to you.
For Java 5 and older I found code parsing output of command line tools on various systems.
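Putting the pieces together for the "interface used to access a particular website" part of the question, here is a hedged Java 7+ sketch; connecting a UDP socket sends no packets, but it makes the OS pick the outgoing local address, from which the interface and its MAC can be looked up:
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.NetworkInterface;

public class MacOfRoute {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            // No traffic is sent; connect() just resolves the route.
            socket.connect(InetAddress.getByName("example.com"), 80);
            NetworkInterface nic =
                NetworkInterface.getByInetAddress(socket.getLocalAddress());
            byte[] mac = nic.getHardwareAddress(); // Java 6+; may be null
            StringBuilder sb = new StringBuilder();
            for (byte b : mac) {
                if (sb.length() > 0) sb.append(':');
                sb.append(String.format("%02X", b));
            }
            System.out.println(sb);
        }
    }
}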
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Best way to implement request throttling in ASP.NET MVC? We're experimenting with various ways to throttle user actions in a given time period:
*
*Limit question/answer posts
*Limit edits
*Limit feed retrievals
For the time being, we're using the Cache to simply insert a record of user activity - if that record exists if/when the user does the same activity, we throttle.
Using the Cache automatically gives us stale data cleaning and sliding activity windows of users, but how it will scale could be a problem.
What are some other ways of ensuring that requests/user actions can be effectively throttled (emphasis on stability)?
A: Microsoft has a new extension for IIS 7 called Dynamic IP Restrictions Extension for IIS 7.0 - Beta.
"The Dynamic IP Restrictions for IIS 7.0 is a module that provides protection against denial of service and brute force attacks on web server and web sites. Such protection is provided by temporarily blocking IP addresses of the HTTP clients who make unusually high number of concurrent requests or who make large number of requests over small period of time."
http://learn.iis.net/page.aspx/548/using-dynamic-ip-restrictions/
Example:
If you set the criteria to block after X requests in Y milliseconds or X concurrent connections in Y milliseconds the IP address will be blocked for Y milliseconds then requests will be permitted again.
A: It took me some time to work out an equivalent for .NET 5+ (formerly .NET Core), so here's a starting point.
The old way of caching has gone and been replaced by Microsoft.Extensions.Caching.Memory with IMemoryCache.
I separated it out a bit more, so here's what you need...
The Cache Management Class
I've added the whole thing here, so you can see the using statements.
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Primitives;
using System;
using System.Threading;
namespace MyWebApplication
{
public interface IThrottleCache
{
bool AddToCache(string key, int expiryTimeInSeconds);
bool AddToCache<T>(string key, T value, int expiryTimeInSeconds);
T GetFromCache<T>(string key);
bool IsInCache(string key);
}
/// <summary>
/// A caching class, based on the docs
/// https://learn.microsoft.com/en-us/aspnet/core/performance/caching/memory?view=aspnetcore-6.0
/// Uses the recommended library "Microsoft.Extensions.Caching.Memory"
/// </summary>
public class ThrottleCache : IThrottleCache
{
private IMemoryCache _memoryCache;
public ThrottleCache(IMemoryCache memoryCache)
{
_memoryCache = memoryCache;
}
public bool AddToCache(string key, int expiryTimeInSeconds)
{
bool isSuccess = false; // Only a success if a new value gets added.
if (!IsInCache(key))
{
var cancellationTokenSource = new CancellationTokenSource(
TimeSpan.FromSeconds(expiryTimeInSeconds));
var cacheEntryOptions = new MemoryCacheEntryOptions()
.SetSize(1)
.AddExpirationToken(
new CancellationChangeToken(cancellationTokenSource.Token));
_memoryCache.Set(key, DateTime.Now, cacheEntryOptions);
isSuccess = true;
}
return isSuccess;
}
public bool AddToCache<T>(string key, T value, int expiryTimeInSeconds)
{
bool isSuccess = false;
if (!IsInCache(key))
{
var cancellationTokenSource = new CancellationTokenSource(
TimeSpan.FromSeconds(expiryTimeInSeconds));
var cacheEntryOptions = new MemoryCacheEntryOptions()
.SetAbsoluteExpiration(DateTimeOffset.Now.AddSeconds(expiryTimeInSeconds))
.SetSize(1)
.AddExpirationToken(
new CancellationChangeToken(cancellationTokenSource.Token));
_memoryCache.Set<T>(key, value, cacheEntryOptions);
isSuccess = true;
}
return isSuccess;
}
public T GetFromCache<T>(string key)
{
return _memoryCache.Get<T>(key);
}
public bool IsInCache(string key)
{
var item = _memoryCache.Get(key);
return item != null;
}
}
}
The attribute itself
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;
using System;
using System.Net;
namespace MyWebApplication
{
/// <summary>
/// Decorates any MVC route that needs to have client requests limited by time.
/// Based on how they throttle at stack overflow (updated for .NET5+)
/// https://stackoverflow.com/questions/33969/best-way-to-implement-request-throttling-in-asp-net-mvc/1318059#1318059
/// </summary>
/// <remarks>
/// Uses the current System.Web.Caching.Cache to store each client request to the decorated route.
/// </remarks>
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class ThrottleByIPAddressAttribute : ActionFilterAttribute
{
/// <summary>
/// The caching class (which will be instantiated as a singleton)
/// </summary>
private IThrottleCache _throttleCache;
/// <summary>
/// A unique name for this Throttle.
/// </summary>
/// <remarks>
/// We'll be inserting a Cache record based on this name and client IP, e.g. "Name-192.168.0.1"
/// </remarks>
public string Name { get; set; }
/// <summary>
/// The number of seconds clients must wait before executing this decorated route again.
/// </summary>
public int Seconds { get; set; }
/// <summary>
/// A text message that will be sent to the client upon throttling. You can include the token {n} to
/// show this.Seconds in the message, e.g. "Wait {n} seconds before trying again".
/// </summary>
public string Message { get; set; } = "You may only perform this action every {n} seconds.";
public override void OnActionExecuting(ActionExecutingContext c)
{
if(_throttleCache == null)
{
var cache = c.HttpContext.RequestServices.GetService(typeof(IThrottleCache));
_throttleCache = (IThrottleCache)cache;
}
var key = string.Concat(Name, "-", c.HttpContext.Request.HttpContext.Connection.RemoteIpAddress);
var allowExecute = _throttleCache.AddToCache(key, Seconds);
if (!allowExecute)
{
if (String.IsNullOrEmpty(Message))
Message = "You may only perform this action every {n} seconds.";
c.Result = new ContentResult { Content = Message.Replace("{n}", Seconds.ToString()) };
// see 409 - http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
c.HttpContext.Response.StatusCode = (int)HttpStatusCode.Conflict;
}
}
}
}
Startup.cs or Program.cs - Register the services with DI
This example uses Startup.cs/ConfigureServices - Put the code somewhere after AddControllersWithViews).
For a project created in .NET6+ I think you'd add the equivalent between builder.Services.AddRazorPages(); and var app = builder.Build(); in program.cs. services would be builder.Services.
If you don't get the placement of this code right, the cache will be empty every time you check it.
// The cache for throttling must be a singleton and requires IMemoryCache to be set up.
// Place it after AddControllersWithViews or AddRazorPages as they build a cache themselves
// Need this for IThrottleCache to work.
services.AddMemoryCache(_ => new MemoryCacheOptions
{
SizeLimit = 1024, /* TODO: CHECK THIS IS THE RIGHT SIZE FOR YOU! */
CompactionPercentage = .3,
ExpirationScanFrequency = TimeSpan.FromSeconds(30),
});
services.AddSingleton<IThrottleCache, ThrottleCache>();
Example Usage
[HttpGet, Route("GetTest")]
[ThrottleByIPAddress(Name = "MyControllerGetTest", Seconds = 5)]
public async Task<ActionResult<string>> GetTest()
{
return "Hello world";
}
To help understand caching in .NET 5+, I've also made a caching console demo.
A: Here's a generic version of what we've been using on Stack Overflow for the past year:
/// <summary>
/// Decorates any MVC route that needs to have client requests limited by time.
/// </summary>
/// <remarks>
/// Uses the current System.Web.Caching.Cache to store each client request to the decorated route.
/// </remarks>
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class ThrottleAttribute : ActionFilterAttribute
{
/// <summary>
/// A unique name for this Throttle.
/// </summary>
/// <remarks>
/// We'll be inserting a Cache record based on this name and client IP, e.g. "Name-192.168.0.1"
/// </remarks>
public string Name { get; set; }
/// <summary>
/// The number of seconds clients must wait before executing this decorated route again.
/// </summary>
public int Seconds { get; set; }
/// <summary>
/// A text message that will be sent to the client upon throttling. You can include the token {n} to
/// show this.Seconds in the message, e.g. "Wait {n} seconds before trying again".
/// </summary>
public string Message { get; set; }
public override void OnActionExecuting(ActionExecutingContext c)
{
var key = string.Concat(Name, "-", c.HttpContext.Request.UserHostAddress);
var allowExecute = false;
if (HttpRuntime.Cache[key] == null)
{
HttpRuntime.Cache.Add(key,
true, // is this the smallest data we can have?
null, // no dependencies
DateTime.Now.AddSeconds(Seconds), // absolute expiration
Cache.NoSlidingExpiration,
CacheItemPriority.Low,
null); // no callback
allowExecute = true;
}
if (!allowExecute)
{
if (String.IsNullOrEmpty(Message))
Message = "You may only perform this action every {n} seconds.";
c.Result = new ContentResult { Content = Message.Replace("{n}", Seconds.ToString()) };
// see 409 - http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
c.HttpContext.Response.StatusCode = (int)HttpStatusCode.Conflict;
}
}
}
Sample usage:
[Throttle(Name="TestThrottle", Message = "You must wait {n} seconds before accessing this url again.", Seconds = 5)]
public ActionResult TestThrottle()
{
return Content("TestThrottle executed");
}
The ASP.NET Cache works like a champ here - by using it, you get automatic clean-up of your throttle entries. And with our growing traffic, we're not seeing that this is an issue on the server.
Feel free to give feedback on this method; when we make Stack Overflow better, you get your Ewok fix even faster :)
A: We use the technique borrowed from this URL http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx, not for throttling, but as a poor man's Denial-of-Service (DoS) protection. This is also cache-based, and may be similar to what you are doing. Are you throttling to prevent DoS attacks? Routers can certainly be used to reduce DoS; do you think a router could handle the throttling you need?
A: Since the highly voted answers to this question are too old, I am sharing the latest solution which worked for me.
I tried using the Dynamic IP restrictions as given in an answer on this page but when I tried to use that extension, I found that this extension has been discontinued by Microsoft and on the download page they have clearly written the below message.
Microsoft has discontinued the Dynamic IP Restrictions extension and this download is no longer available.
So I researched further and found that the Dynamic IP Restrictions is now by default included in IIS 8.0 and above. The below information is fetched from the Microsoft Dynamic IP Restrictions page.
In IIS 8.0, Microsoft has expanded the built-in functionality to include several new features:
*
*Dynamic IP address filtering, which allows administrators to configure their server to block access for IP addresses that exceed the specified number of requests.
*The IP address filtering features now allow administrators to specify the behavior when IIS blocks an IP address, so requests from malicious clients can be aborted by the server instead of returning HTTP 403.6 responses to the client.
*IP filtering now features a proxy mode, which allows IP addresses to be blocked not only by the client IP that is seen by IIS but also by the values that are received in the x-forwarded-for HTTP header.
For step by step instructions to implement Dynamic IP Restrictions, please visit the below link:
https://learn.microsoft.com/en-us/iis/get-started/whats-new-in-iis-8/iis-80-dynamic-ip-address-restrictions
I hope it helps someone stuck in a similar problem.
A: Created ThrottlingTroll - my take on throttling/rate limiting in ASP.NET Core.
It is similar to Stefan Prodan's AspNetCoreRateLimit and ASP.NET 7's Rate Limiting Middleware, but has advantages:
*
*Both ingress and egress throttling (egress means that your specially configured HttpClient won't make more than N requests per second and will instead produce 429 status code by itself).
*Distributed rate counter stores (including, but not limited to Redis).
*Dynamic (re)configuration - allows adjusting limits without restarting the service.
*Propagating 429 statuses from egress to ingress.
Check out more in the repo.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "227"
} |
Q: Standardised text editing behaviour across Mac applications I've switched over to a Mac recently and, although things have been going quite well, the very different text-editing behaviours across applications is driving me insane.
Home, End, Page Up, Page Down, Apple-arrow, Ctrl-arrow, alt-arrow etc. quite often do different things depending on the application.
Is there a way to standardise this behaviour?
A: There are standards, but they are not based around what you're used to from windows. It drove me mad until I got over myself and decided to learn what the actual standards were. Since then I've been sold.
The ones I use:
*
*Command-Left/Right - Jump to start/end of line
*
*Can also do this with ctrl-a/e which is great if you're used to ssh
*Command-Up/Down - Jump to top/bottom of text field or document
*Option-Left/Right - Jump to start/end of word or previous/next word
These basically replace home/end/pgup/pgdown, and ctrl-left/right from the windows world.
I find this to be a massive win due to the fact I have a macbook pro and almost no laptops have proper home/end/pgup/pgdown keys - not needing them in OSX is a godsend
Here's a big list of the rest of them
A: And what's funny (and frustrating!) is that the Microsoft OS X apps (e.g. Entourage) use the Windows standards.
I develop on WinXP during the day but have an iMac at home, so it's confusing enough trying to switch modes between work and home. But then I have to remember if I'm writing an e-mail in Entourage, I need to revert back to Windows mode.
I can't think of any good reason why MS wouldn't follow the OS X keyboard standards...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I efficiently keep track of the smallest element in a collection? In the vein of programming questions: suppose there's a collection of objects that can be compared to each other and sorted. What's the most efficient way to keep track of the smallest element in the collection as objects are added and the current smallest occasionally removed?
A: Using a min-heap is the best way.
http://en.wikipedia.org/wiki/Heap_(data_structure)
It is tailor made for this application.
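For illustration, a minimal sketch of the min-heap approach using Python's standard heapq module (the values are arbitrary):
import heapq

heap = []
for x in [7, 3, 9, 1]:
    heapq.heappush(heap, x)    # O(log n) insert

print(heap[0])                 # peek at the current smallest (1) in O(1)
print(heapq.heappop(heap))     # remove the smallest in O(log n)
print(heap[0])                 # the new smallest is now 3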
A: If you need random insert and removal, the best way is probably a sorted array. Inserts and removals should be O(log(n)).
A: @Harpreet
That is not optimal. When an object is removed, erickson will have to scan the entire collection to find the new smallest.
You want to read up on binary search trees. MS has a good site to start down the path. But you may want to get a book like Introduction to Algorithms (Cormen, Leiserson, Rivest, Stein) if you want to deep dive.
A: For occasional removes a Fibonacci Heap is even faster than the min-heap. Insertion is O(1), and finding the min is also O(1). Removal is O(log(n))
A:
If you need random insert and removal, the best way is probably a sorted array. Inserts and removals should be O(log(n)).
Yes, but you will need to re-sort on each insert and (maybe) each deletion, which, as you stated, is O(log(n)).
With the solution proposed by Harpreet:
*
*you have one O(n) pass in the beginning to find the smallest element
*inserts are O(1) thereafter (only 1 comparison needed to the already-known smallest element)
*deletes will be O(n) because you will need to re-find the smallest element (keep in mind Big O notation is worst case). You could also optimize by checking to see if the element to be deleted is the (known) smallest, and if not, just don't do any of the re-check to find the smallest element.
So, it depends. One of these algorithms will be better for an insert-heavy use case with few deletes, but the other is overall more consistent. I think I would default to Harpreet's mechanism unless I knew that the smallest number would be removed often, because that exposes a weak point in that algorithm.
A: Harpreet:
the inserts into that would be linear since you have to move items for an insert.
Doesn't that depend on the implementation of the collection? If it acts like a linked-list, inserts would be O(1), while if it were implemented like an array it would be linear, as you stated.
A: Depends on which operations you need your container to support. A min-heap is the best if you might need to remove the min element at any given time, although several operations are nontrivial (amortized log(n) time in some cases).
However, if you only need to push/pop from the front/back, you can just use a mindeque which achieves amortized constant time for all operations (including findmin). You can do a scholar.google.com search to learn more about this structure. A friend and I recently collaborated to reach a much easier-to-understand and -to-implement version of a mindeque, as well. If this is what you're looking for I could post the details for you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Microsoft Office 2007 automated installation - editing the config.xml file I'm creating an automated installation of Office 2007. To customise your Office 2007 installation the Office Customization Tool (OCT) does most of the work for you. One the OCT's features is the ability to run additional programs during the Office installation. However it is pretty poor at it.
Fortunately by editing the appropiate config.xml file contained within the installer files you have more control over running these additional programs. Within the config.xml file this feature is defined by the command element. This link on TechNet talks all about it.
In this documentation it states:
Attributes
You can specify double-quotation marks (") in the Path and Args attributes by specifying two double-quotation marks together ("").
<Command Path="myscript.exe" Args="/id ""123 abc"" /q" />
I would like to use double-quotation marks in an argument that I wish to pass to the command I'm executing. Unfortunately when I configure my config.xml file as shown in the example, the Office 2007 installer crashes and displays the following error message in the setup logs:
Parsing config.xml at: \\aumel1pc356\c$\Documents and Settings\nichollsd2\Desktop\source\office\Enterprise.WW\config.xml
Error: XML document load failed for file: \\aumel1pc356\c$\Documents and Settings\nichollsd2\Desktop\source\office\Enterprise.WW\config.xml HResult: 0x1.
Does anyone have any experience with this issue? I'd love to get another perspective on it.
A: In standard XML you embed quotes in attribute values using &quot;, &#34; or &#x22;.
See the page on Wikipedia for a list of XML entity references.
I don't know if this will solve your problem, but seeing as it is an XML parser error it should.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Find out how much memory is being used by an object in Python How would you go about finding out how much memory is being used by an object? I know it is possible to find out how much is used by a block of code, but not by an instantiated object (anytime during its life), which is what I want.
A: I don't have any personal experience with either of the following, but a simple search for a "Python [memory] profiler" yields:
*
*PySizer, "a memory profiler for Python," found at http://pysizer.8325.org/. However the page seems to indicate that the project hasn't been updated for a while, and refers to...
*Heapy, "support[ing] debugging and optimization regarding memory related issues in Python programs," found at http://guppy-pe.sourceforge.net/#Heapy.
Hope that helps.
A: This must be used with care because an override on the object's __sizeof__ might be misleading.
Using the bregman.suite, some tests with sys.getsizeof reported a copy of an array object (data) from an object instance as being bigger than the object itself (mfcc).
>>> mfcc = MelFrequencyCepstrum(filepath, params)
>>> data = mfcc.X[:]
>>> sys.getsizeof(mfcc)
64
>>> sys.getsizeof(mfcc.X)
80
>>> sys.getsizeof(data)
80
>>> mfcc
<bregman.features.MelFrequencyCepstrum object at 0x104ad3e90>
A: For big objects you may use a somewhat crude but effective method:
check how much memory your Python process occupies in the system, then delete the object and compare.
This method has many drawbacks but it will give you a very fast estimate for very big objects.
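For illustration, a rough sketch of this approach (an assumption-heavy example: Linux-only, since it reads VmRSS from /proc, and the allocator may not return freed memory to the OS, so treat the numbers as coarse estimates):
import gc

def rss_kb():
    # Current resident set size in kB (Linux-specific /proc interface).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

before = rss_kb()
big = [0] * 10_000_000          # the "big object" under test
print("after alloc:", rss_kb() - before, "kB")
del big
gc.collect()
print("after delete:", rss_kb() - before, "kB")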
A: Try this:
sys.getsizeof(object)
getsizeof() Return the size of an object in bytes. It calls the object’s __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.
A recursive recipe
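For illustration, a minimal sketch of what such a recursive recipe does (this simplified version handles only the common container types; the linked recipes cover many more cases, such as custom classes):
import sys

def total_size(obj, seen=None):
    seen = seen if seen is not None else set()
    if id(obj) in seen:              # don't double-count shared objects
        return 0
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(total_size(k, seen) + total_size(v, seen)
                    for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set, frozenset)):
        size += sum(total_size(x, seen) for x in obj)
    return size

print(total_size({"a": [1, 2, 3], "b": "hello"}))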
A: There's no easy way to find out the memory size of a python object. One of the problems you may find is that Python objects - like lists and dicts - may have references to other python objects (in this case, what would your size be? The size containing the size of each object or not?). There is some pointer overhead, and there are internal structures related to object types and garbage collection. Finally, some python objects have non-obvious behaviors. For instance, lists reserve space for more objects than they have, most of the time; dicts are even more complicated since they can operate in different ways (they have a different implementation for a small number of keys and sometimes they over-allocate entries).
There is a big chunk of code (and an updated big chunk of code) out there to try to best approximate the size of a python object in memory.
You may also want to check some old description about PyObject (the internal C struct that represents virtually all python objects).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "291"
} |
Q: Can't create a subversion repository with Eclipse 3.4.0, svn 1.5.1 I'm working on mac OS x 10.4. I have a subversion repository stored on an external drive connected via USB. I created a new python project in Eclipse (using the PyDev plugin). When I use right click Team->Share Project to set up a new project with subversion, I get the following error:
Error while creating module: org.tigris.subversion.javahl.ClientException: Couldn't open a repository
svn: Unable to open ra_local session to URL
svn: Unable to open repository 'file:///Volumes/svn-repos/Palindrome/Palindrome'
The subversion repository has the following permissions:
drwxrwxrwx 9 cameronl cameronl 306 Aug 23 10:08 svn-repos
The external drive is formatted as Mac OS extended
I'm completely stumped. Anybody have any suggestions?
A: Try adding the repository first using the "SVN Repository Exploring" perspective (Window > Open Perspective > Other... > SVN Repository Exploring).
Make sure that the URL you are using points to the correct directory, which typically contains these default repository files:
conf/ dav/ db/ format hooks/ locks/ README.txt
Hope this helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Tool for querying databases I want to query a number of different databases mainly Oracle and Informix.
Can anyone suggest a tool with which I can do this easily?
A: Try WinSQL lite at http://www.indus-soft.com/SynametricsWebApp/WinSQL.jsp. It is absolutely free and does not expire. It is only one file and does not come with any bulky DLLs. If you don't like it, simply delete the file from your hard drive.
An introduction about how to use it against an informix database can be found in this article.
A: I use and love DbVisualizer.
A: I like SQuirreL SQL Client. It's cross platform and database independent, and quite handy.
A: As a previous answer stated, WinSQL is one of the best "generic" sql query programs, although it is far from perfect. Generally speaking, the programs dedicated to a particular sql product are better (usually 3rd party products, not written by the SQL vendor). TOAD is a great program for Oracle (originally written by an Oracle employee in his spare time before being bought by Quest). TOAD has become a little bloated of recent versions, but is still a fantastic product. I think there are versions of TOAD for MySQL and maybe one or two others, however, the Oracle version is by far the best. When I last used Informix (2004) there was a reasonable 3rd party Java program whose name escapes me for the moment. The standard tools that come with Informix are from the dark ages (I used the Unix utilities that look a bit like DOS versions of Lotus 123), so anything else is better. I used WinSQL with Informix with great success.
A: The best all-round one is TOAD
A: @littlegeek: Toad is not available for Informix. Additionally, the OP seems to want a single program that can query several different brands of DBMS, and you have to buy a different version of Toad for every DBMS you want to use it with.
A: Informix is not very well supported among third party database tool vendors.
Interestingly, Oracle's SQL Developer supports browsing (and converting to Oracle) several databases, including SQL Server and MySQL.
A: Try the following:
*
*Query Express (single 100KB executable, no install)
*Query ExPlus (improved Query Express)
A: There is several options on this page: http://freewarehome.com/index.html?http%3A//freewarehome.com/bx/index.php%3Faction%3Dvthread%26forum%3D8%26topic%3D7136
GenDAT is not bad, but I am biased as I did write it! It has been voted highly though.
I need cross platform now so I use Oracle SQL Developer which I think is great. It can handle other databases not just Oracle. Another good one (cross platform) is DB Solo.
A: I like Aqua Data Studio from Aquafold. It supports all of the major database players as well as some of the less including Informix. Great features like code beautification and syntax highlighting are perks.
A: We use Aqua Data
A: I have used Query Tool for years http://www.gpoulose.com/
It is lightwight and gets the job done.
However, I will also investigate some of the other ones listed here. Must say that SQLDeveloper and Toad are too bloated for my needs. I work on many different systems in any day, all at different clients with different security and down to very low-end machines. Having one simple tool and good SQL chops goes a long way!
I also would like to note that because I am installing it on new servers constantly, it needs to be free, or have an unlimited machine license. I'd be happy to pay for my own use of the tool, but need to be able to install it over and over again.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: php scripts writing to non-world-writable files How can you allow a PHP script to write to a file with high-security restrictions, such as only allowing a single user to write to it?
The difficulty seems to be that a PHP script is running as a low-permissions user (maybe apache, or www, or nobody?), and even if I chown apache the_writable_file, the directory it's in might not be writable for the low-level user. In general, what's the usual way that PHP can work with local files in a secure way?
A: Unfortunately, in shared hosts that use mod_php, there is no way to restrict access to secure files to your web app and login user.
The solution is to run your web app as your login user. When you do that, UNIX file permissions can correctly lock everyone else out. There are several ways to implement that, including SuExec, suPHP, or running PHP with FastCGI with mod_fcgid or mod_proxy_fcgid. FastCGI is my favorite way.
Another solution is to use a dedicated host or virtual private server.
A: Sure, chgrp apache the_writable_file and chmod g+w the_writable_file. After that, only your secure user and the apache user will be able to write to the file. Since the apache user is typically forbidden from logging in, you only have to worry about web users writing to your secure file using through the http daemon.
A: All the containing folders need to have execute permissions.
For example, if the file's in /foo/bar/the_writable_file, the directories "foo" and "bar" both need to have executable permission to access the_writable_file, even if they don't have read/write permission.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Install the Radrails plugin for Aptana Studio offline I downloaded and installed the Aptana Studio free version. But apparently, to install the Radrails plugin for ruby on rails development you have to connect to the internet. I don't have internet on my machine right now. So is there a way I could download the installer from another machine and copy it over my existing Aptana installation?
Update: Found a link for download here (Access denied now)
A: I wrote down my duel with Aptana Rails - See if this helps you.
There is a link on manual installation that may be what you're looking for.
A: If you're able to actually install it on the machine with the Internet connection, then you can simply copy over the directory you installed it in. Eclipse installations are completely self-contained in their installation directories.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Are Python threads buggy? A reliable coder friend told me that Python's current multi-threading implementation is seriously buggy - enough to avoid using altogether. What can said about this rumor?
A: The GIL (Global Interpreter Lock) might be a problem, but the API is quite OK. Try out the excellent processing module, which implements the Threading API for separate processes. I am using that right now (albeit on OS X, have yet to do some testing on Windows) and am really impressed. The Queue class is really saving my bacon in terms of managing complexity!
EDIT: it seems the processing module is being included in the standard library as of version 2.6 (import multiprocessing). Joy!
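For illustration, a minimal sketch of the Queue-based worker pattern with the stdlib multiprocessing module (the pool size and task values are arbitrary):
from multiprocessing import Process, Queue

def worker(tasks, results):
    for item in iter(tasks.get, None):   # None is the shutdown sentinel
        results.put(item * item)

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    procs = [Process(target=worker, args=(tasks, results)) for _ in range(2)]
    for p in procs:
        p.start()
    for i in range(10):
        tasks.put(i)
    for _ in procs:
        tasks.put(None)                  # one sentinel per worker
    print(sorted(results.get() for _ in range(10)))
    for p in procs:
        p.join()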
A: Python threads are good for concurrent I/O programming. Threads are swapped out of the CPU as soon as they block waiting for input from file, network, etc. This allows other Python threads to use the CPU while others wait. This would allow you to write a multi-threaded web server or web crawler, for example.
However, Python threads are serialized by the GIL when they enter interpreter core. This means that if two threads are crunching numbers, only one can run at any given moment. It also means that you can't take advantage of multi-core or multi-processor architectures.
There are solutions like running multiple Python interpreters concurrently, using a C based threading library. This is not for the faint of heart and the benefits might not be worth the trouble. Let's hope for an all Python solution in a future release.
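For illustration, a minimal sketch of the I/O-bound case described above, where the GIL is released while threads wait on the network (Python 3 syntax; the URLs are placeholders):
import threading
import urllib.request

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        print(url, len(resp.read()), "bytes")

urls = ["http://example.com", "http://example.org"]
threads = [threading.Thread(target=fetch, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()
Because each thread releases the GIL while blocked on the socket, the downloads overlap even though only one thread at a time executes Python bytecode.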
A: As far as I know there are no real bugs, but the performance when threading in cPython is really bad (compared to most other threading implementations, but usually good enough if the threads spend most of their time blocking) due to the GIL (Global Interpreter Lock), so really it is implementation specific rather than language specific.
See this post on why it is not really feasible to remove the GIL from the cPython implementation, and this for some practical elaboration and workarounds.
Do a quick google for "Python GIL" for more information.
A: If you want to code in python and get great threading support, you might want to check out IronPython or Jython. Since the python code in IronPython and Jython run on the .NET CLR and Java VM respectively, they enjoy the great threading support built into those libraries. In addition to that, IronPython doesn't have the GIL, an issue that prevents CPython threads from taking full advantage of multi-core architectures.
A: The standard implementation of Python (generally known as CPython as it is written in C) uses OS threads, but since there is the Global Interpreter Lock, only one thread at a time is allowed to run Python code. But within those limitations, the threading libraries are robust and widely used.
If you want to be able to use multiple CPU cores, there are a few options. One is to use multiple python interpreters concurrently, as mentioned by others. Another option is to use a different implementation of Python that does not use a GIL. The two main options are Jython and IronPython.
Jython is written in Java, and is now fairly mature, though some incompatibilities remain. For example, the web framework Django does not run perfectly yet, but is getting closer all the time. Jython is great for thread safety, comes out better in benchmarks and has a cheeky message for those wanting the GIL.
IronPython uses the .NET framework and is written in C#. Compatibility is reaching the stage where Django can run on IronPython (at least as a demo) and there are guides to using threads in IronPython.
A: I've used it in several applications and have never had nor heard of threading being anything other than 100% reliable, as long as you know its limits. You can't spawn 1000 threads at the same time and expect your program to run properly on Windows, however you can easily write a worker pool and just feed it 1000 operations, and keep everything nice and under control.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: How to embed control change commands inside of a MIDI file I am making a simple game in order to learn a new language. I am in the process of collecting some music for the game and would like to use the MIDI format so that I can control the flow of the track (i.e., I would like to have an introduction that only plays once and does not play again when the song loops.)
I am having a tough time finding information on how to modify existing MIDI files so that they may send a control change signal to the synthesizer. Has anyone had experience with this?
I think that I should have been more clear with my original question. I am using an existing game engine which takes care of playing the music. I am under the impression that this control change value must be embedded directly in the MIDI file itself as I have no control over the synthesizer. From the manual:
MIDI files are played via the DirectMusic Synthesizer. If a BGM MIDI file contains the control change value 111, that value is recognized as where the song will start repeating after it reaches the end.
I wish I could do it programmatically. I suppose what I am after here is some sort of editor which will allow me to modify the MIDI file that I already have.
A: Sounds like what you really want is a midi editor
A: try looking in the Midi 1.0 spec
Here's a table of the control change messages though it looks like you're looking for a way to do this in software. yes?
you could try just sending it as raw midi data (ie. the messages on that table)
looking over your question again... my answer is not that useful...
what I would do if I were you is separate the introduction into it's own file and then you have a file containing just what you want to loop.
you could also look at the spec for the Standard Midi File format (SMF)
A: DirectMusicProducer is probably your best free option if you are playing using DirectMusic. I don't believe the MIDI record feature will include control changes, but your engine may support playing segment files which are much more flexible.
The only MIDI sequencer I use cost around $300 (USD) about 10 years ago (and no longer appears to exist), but I am not aware of any good quality free MIDI file sequencers. (Note that "MIDI editor" is probably different to "MIDI file editor" or "MIDI sequencer")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is the difference between Build Solution and Batch Build in Visual Studio 2008? What is the difference between Build Solution and Batch Build in Visual Studio 2008?
A: The key point which seems to be missed in both the existing answers is that batch build allows you to build multiple configurations of each project (e.g. you can build debug and release configurations with a single operation).
With a normal build, you have to use the configuration manager to select just one configuration for each project.
A: In addition to what has been mentioned so far, batch build allows a combination of projects or configurations to be stored as a preset for easier future access.
A: Batch build allows you to build any project that you select, and a Solution build only builds the projects that are part of the active solution.
You can customise what projects are part of a solution build by going to menu Tools → Configuration Manager.
A: Another nice thing about batch build is that it lets you build a configuration different than the current one. It is handy for solutions that take a while to switch.
A: Building the solution is the same as batch building all projects. Both methods respect the solution's dependencies.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: How to read a value from the Windows registry Given the key for some registry value (e.g. HKEY_LOCAL_MACHINE\blah\blah\blah\foo) how can I:
*
*Safely determine that such a key exists.
*Programmatically (i.e. with code) get its value.
I have absolutely no intention of writing anything back to the registry (for the duration of my career if I can help it). So we can skip the lecture about every molecule in my body exploding at the speed of light if I write to the registry incorrectly.
Prefer answers in C++, but mostly just need to know what the special Windows API incantation to get at the value is.
A: Since Windows >=Vista/Server 2008, RegGetValue is available, which is a safer function than RegQueryValueEx. No need for RegOpenKeyEx, RegCloseKey or NUL termination checks of string values (REG_SZ, REG_MULTI_SZ, REG_EXPAND_SZ).
#include <iostream>
#include <string>
#include <exception>
#include <windows.h>
/*! \brief Returns a value from HKLM as string.
\exception std::runtime_error Replace with your error handling.
*/
std::wstring GetStringValueFromHKLM(const std::wstring& regSubKey, const std::wstring& regValue)
{
size_t bufferSize = 0xFFF; // If too small, will be resized down below.
std::wstring valueBuf; // Contiguous buffer since C++11.
valueBuf.resize(bufferSize);
auto cbData = static_cast<DWORD>(bufferSize * sizeof(wchar_t));
auto rc = RegGetValueW(
HKEY_LOCAL_MACHINE,
regSubKey.c_str(),
regValue.c_str(),
RRF_RT_REG_SZ,
nullptr,
static_cast<void*>(valueBuf.data()),
&cbData
);
while (rc == ERROR_MORE_DATA)
{
// Get a buffer that is big enough.
cbData /= sizeof(wchar_t);
if (cbData > static_cast<DWORD>(bufferSize))
{
bufferSize = static_cast<size_t>(cbData);
}
else
{
bufferSize *= 2;
cbData = static_cast<DWORD>(bufferSize * sizeof(wchar_t));
}
valueBuf.resize(bufferSize);
rc = RegGetValueW(
HKEY_LOCAL_MACHINE,
regSubKey.c_str(),
regValue.c_str(),
RRF_RT_REG_SZ,
nullptr,
static_cast<void*>(valueBuf.data()),
&cbData
);
}
if (rc == ERROR_SUCCESS)
{
cbData /= sizeof(wchar_t);
valueBuf.resize(static_cast<size_t>(cbData - 1)); // remove end null character
return valueBuf;
}
else
{
throw std::runtime_error("Windows system error code: " + std::to_string(rc));
}
}
int main()
{
std::wstring regSubKey;
#ifdef _WIN64 // Manually switching between 32bit/64bit for the example. Use dwFlags instead.
regSubKey = L"SOFTWARE\\WOW6432Node\\Company Name\\Application Name\\";
#else
regSubKey = L"SOFTWARE\\Company Name\\Application Name\\";
#endif
std::wstring regValue(L"MyValue");
std::wstring valueFromRegistry;
try
{
valueFromRegistry = GetStringValueFromHKLM(regSubKey, regValue);
}
catch (std::exception& e)
{
std::cerr << e.what();
}
std::wcout << valueFromRegistry;
}
Its parameter dwFlags supports flags for type restriction, filling the value buffer with zeros on failure (RRF_ZEROONFAILURE) and 32/64bit registry access (RRF_SUBKEY_WOW6464KEY, RRF_SUBKEY_WOW6432KEY) for 64bit programs.
A: Here is some pseudo-code to retrieve the following:
*
*If a registry key exists
*What the default value is for that registry key
*What a string value is
*What a DWORD value is
Example code:
Include the library dependency: Advapi32.lib
HKEY hKey;
LONG lRes = RegOpenKeyExW(HKEY_LOCAL_MACHINE, L"SOFTWARE\\Perl", 0, KEY_READ, &hKey);
bool bExistsAndSuccess (lRes == ERROR_SUCCESS);
bool bDoesNotExistsSpecifically (lRes == ERROR_FILE_NOT_FOUND);
std::wstring strValueOfBinDir;
std::wstring strKeyDefaultValue;
GetStringRegKey(hKey, L"BinDir", strValueOfBinDir, L"bad");
GetStringRegKey(hKey, L"", strKeyDefaultValue, L"bad");
LONG GetDWORDRegKey(HKEY hKey, const std::wstring &strValueName, DWORD &nValue, DWORD nDefaultValue)
{
nValue = nDefaultValue;
DWORD dwBufferSize(sizeof(DWORD));
DWORD nResult(0);
LONG nError = ::RegQueryValueExW(hKey,
strValueName.c_str(),
0,
NULL,
reinterpret_cast<LPBYTE>(&nResult),
&dwBufferSize);
if (ERROR_SUCCESS == nError)
{
nValue = nResult;
}
return nError;
}
LONG GetBoolRegKey(HKEY hKey, const std::wstring &strValueName, bool &bValue, bool bDefaultValue)
{
DWORD nDefValue((bDefaultValue) ? 1 : 0);
DWORD nResult(nDefValue);
LONG nError = GetDWORDRegKey(hKey, strValueName.c_str(), nResult, nDefValue);
if (ERROR_SUCCESS == nError)
{
bValue = (nResult != 0) ? true : false;
}
return nError;
}
LONG GetStringRegKey(HKEY hKey, const std::wstring &strValueName, std::wstring &strValue, const std::wstring &strDefaultValue)
{
strValue = strDefaultValue;
WCHAR szBuffer[512];
DWORD dwBufferSize = sizeof(szBuffer);
ULONG nError;
nError = RegQueryValueExW(hKey, strValueName.c_str(), 0, NULL, (LPBYTE)szBuffer, &dwBufferSize);
if (ERROR_SUCCESS == nError)
{
strValue = szBuffer;
}
return nError;
}
A: The pair RegOpenKeyEx and RegQueryValueEx will do the trick.
If you use MFC, the CRegKey class is an even easier solution.
A: RegQueryValueEx
This gives the value if it exists, and returns an error code ERROR_FILE_NOT_FOUND if the key doesn't exist.
(I can't tell if my link is working or not, but if you just google for "RegQueryValueEx" the first hit is the msdn documentation.)
A: Typically the registry key and value are constants in the program. If so, here is an example of how to read a DWORD registry value Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\LongPathsEnabled:
#include <windows.h>
DWORD val;
DWORD dataSize = sizeof(val);
if (ERROR_SUCCESS == RegGetValueA(HKEY_LOCAL_MACHINE, "SYSTEM\\CurrentControlSet\\Control\\FileSystem", "LongPathsEnabled", RRF_RT_DWORD, nullptr /*type not required*/, &val, &dataSize)) {
printf("Value is %i\n", val);
// no CloseKey needed because it is a predefined registry key
}
else {
printf("Error reading.\n");
}
To adapt for other value types, see https://learn.microsoft.com/en-us/windows/win32/api/winreg/nf-winreg-reggetvaluea for complete spec.
A: const CString REG_SW_GROUP_I_WANT = _T("SOFTWARE\\My Corporation\\My Package\\Group I want");
const CString REG_KEY_I_WANT= _T("Key Name");
CRegKey regKey;
DWORD dwValue = 0;
if(ERROR_SUCCESS != regKey.Open(HKEY_LOCAL_MACHINE, REG_SW_GROUP_I_WANT))
{
m_pobLogger->LogError(_T("CRegKey::Open failed in Method"));
regKey.Close();
goto Function_Exit;
}
if( ERROR_SUCCESS != regKey.QueryValue( dwValue, REG_KEY_I_WANT))
{
m_pobLogger->LogError(_T("CRegKey::QueryValue Failed in Method"));
regKey.Close();
goto Function_Exit;
}
// dwValue has the stuff now - use for further processing
A: This console app will list all the values and their data from a registry key for most of the potential registry values. There are some weird ones that are not often used. If you need to support all of them, expand from this example while referencing this Registry Value Type documentation.
Let this be the registry key content you can import from a .reg file format:
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\added\subkey]
"String_Value"="hello, world!"
"Binary_Value"=hex:01,01,01,01
"Dword value"=dword:00001224
"QWord val"=hex(b):24,22,12,00,00,00,00,00
"multi-line val"=hex(7):4c,00,69,00,6e,00,65,00,20,00,30,00,00,00,4c,00,69,00,\
6e,00,65,00,20,00,31,00,00,00,4c,00,69,00,6e,00,65,00,20,00,32,00,00,00,00,\
00
"expanded_val"=hex(2):25,00,55,00,53,00,45,00,52,00,50,00,52,00,4f,00,46,00,49,\
00,4c,00,45,00,25,00,5c,00,6e,00,65,00,77,00,5f,00,73,00,74,00,75,00,66,00,\
66,00,00,00
The console app itself:
#include <Windows.h>
#include <iostream>
#include <string>
#include <locale>
#include <vector>
#include <iomanip>
int wmain()
{
const auto hKey = HKEY_CURRENT_USER;
constexpr auto lpSubKey = TEXT("added\\subkey");
auto openedKey = HKEY();
auto status = RegOpenKeyEx(hKey, lpSubKey, 0, KEY_READ, &openedKey);
if (status == ERROR_SUCCESS) {
auto valueCount = static_cast<DWORD>(0);
auto maxNameLength = static_cast<DWORD>(0);
auto maxValueLength = static_cast<DWORD>(0);
status = RegQueryInfoKey(openedKey, NULL, NULL, NULL, NULL, NULL, NULL,
&valueCount, &maxNameLength, &maxValueLength, NULL, NULL);
if (status == ERROR_SUCCESS) {
DWORD type = 0;
std::vector<wchar_t> valueName = std::vector<wchar_t>(maxNameLength + 1);
std::vector<BYTE> dataBuffer = std::vector<BYTE>(maxValueLength);
for (DWORD index = 0; index < valueCount; index++) {
DWORD charCountValueName = static_cast<DWORD>(valueName.size());
DWORD charBytesData = static_cast<DWORD>(dataBuffer.size());
status = RegEnumValue(openedKey, index, valueName.data(), &charCountValueName,
NULL, &type, dataBuffer.data(), &charBytesData);
if (type == REG_SZ) {
const auto reg_string = reinterpret_cast<wchar_t*>(dataBuffer.data());
std::wcout << L"Type: REG_SZ" << std::endl;
std::wcout << L"\tName: " << valueName.data() << std::endl;
std::wcout << L"\tData : " << reg_string << std::endl;
}
else if (type == REG_EXPAND_SZ) {
const auto casted = reinterpret_cast<wchar_t*>(dataBuffer.data());
TCHAR buffer[32000];
ExpandEnvironmentStrings(casted, buffer, 32000);
std::wcout << L"Type: REG_EXPAND_SZ" << std::endl;
std::wcout << L"\tName: " << valueName.data() << std::endl;
std::wcout << L"\tData: " << buffer << std::endl;
}
else if (type == REG_MULTI_SZ) {
std::vector<std::wstring> lines;
const auto str = reinterpret_cast<wchar_t*>(dataBuffer.data());
auto line = str;
lines.emplace_back(line);
for (auto i = 0; i < charBytesData / sizeof(wchar_t) - 1; i++) {
const auto c = str[i];
if (c == 0) {
line = str + i + 1;
const auto new_line = reinterpret_cast<wchar_t*>(line);
if (wcsnlen_s(new_line, 1024) > 0)
lines.emplace_back(new_line);
}
}
std::wcout << L"Type: REG_MULTI_SZ" << std::endl;
std::wcout << L"\tName: " << valueName.data() << std::endl;
std::wcout << L"\tData: " << std::endl;
for (size_t i = 0; i < lines.size(); i++) {
std::wcout << L"\t\tLine[" << i + 1 << L"]: " << lines[i] << std::endl;
}
}
if (type == REG_DWORD) {
const auto dword_value = reinterpret_cast<unsigned long*>(dataBuffer.data());
std::wcout << L"Type: REG_DWORD" << std::endl;
std::wcout << L"\tName: " << valueName.data() << std::endl;
std::wcout << L"\tData : " << std::to_wstring(*dword_value) << std::endl;
}
else if (type == REG_QWORD) {
const auto qword_value = reinterpret_cast<unsigned long long*>(dataBuffer.data());
                        std::wcout << L"Type: REG_QWORD" << std::endl;
std::wcout << L"\tName: " << valueName.data() << std::endl;
std::wcout << L"\tData : " << std::to_wstring(*qword_value) << std::endl;
}
else if (type == REG_BINARY) {
std::vector<uint16_t> bins;
for (auto i = 0; i < charBytesData; i++) {
bins.push_back(static_cast<uint16_t>(dataBuffer[i]));
}
std::wcout << L"Type: REG_BINARY" << std::endl;
std::wcout << L"\tName: " << valueName.data() << std::endl;
std::wcout << L"\tData:";
for (size_t i = 0; i < bins.size(); i++) {
std::wcout << L" " << std::uppercase << std::hex << \
std::setw(2) << std::setfill(L'0') << std::to_wstring(bins[i]);
}
std::wcout << std::endl;
}
}
}
}
RegCloseKey(openedKey);
return 0;
}
Expected console output:
Type: REG_SZ
Name: String_Value
Data : hello, world!
Type: REG_BINARY
Name: Binary_Value
Data: 01 01 01 01
Type: REG_DWORD
Name: Dword value
Data : 4644
Type: REG_QWORD
Name: QWord val
Data : 1188388
Type: REG_MULTI_SZ
Name: multi-line val
Data:
Line[1]: Line 0
Line[2]: Line 1
Line[3]: Line 2
Type: REG_EXPAND_SZ
Name: expanded_val
Data: C:\Users\user name\new_stuff
A: #include <windows.h>
#include <map>
#include <string>
#include <stdio.h>
#include <string.h>
#include <tr1/stdint.h>
using namespace std;
void printerr(DWORD dwerror) {
LPVOID lpMsgBuf;
FormatMessage(
FORMAT_MESSAGE_ALLOCATE_BUFFER |
FORMAT_MESSAGE_FROM_SYSTEM |
FORMAT_MESSAGE_IGNORE_INSERTS,
NULL,
dwerror,
MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), // Default language
(LPTSTR) &lpMsgBuf,
0,
NULL
);
// Process any inserts in lpMsgBuf.
// ...
// Display the string.
if (isOut) {
fprintf(fout, "%s\n", lpMsgBuf);
} else {
printf("%s\n", lpMsgBuf);
}
// Free the buffer.
LocalFree(lpMsgBuf);
}
bool regreadSZ(string& hkey, string& subkey, string& value, string& returnvalue, string& regValueType) {
char s[128000];
map<string,HKEY> keys;
keys["HKEY_CLASSES_ROOT"]=HKEY_CLASSES_ROOT;
keys["HKEY_CURRENT_CONFIG"]=HKEY_CURRENT_CONFIG; //DID NOT SURVIVE?
keys["HKEY_CURRENT_USER"]=HKEY_CURRENT_USER;
keys["HKEY_LOCAL_MACHINE"]=HKEY_LOCAL_MACHINE;
keys["HKEY_USERS"]=HKEY_USERS;
HKEY mykey;
map<string,DWORD> valuetypes;
valuetypes["REG_SZ"]=REG_SZ;
valuetypes["REG_EXPAND_SZ"]=REG_EXPAND_SZ;
valuetypes["REG_MULTI_SZ"]=REG_MULTI_SZ; //probably can't use this.
LONG retval=RegOpenKeyEx(
keys[hkey], // handle to open key
subkey.c_str(), // subkey name
0, // reserved
KEY_READ, // security access mask
&mykey // handle to open key
);
if (ERROR_SUCCESS != retval) {printerr(retval); return false;}
DWORD slen=128000;
DWORD valuetype = valuetypes[regValueType];
retval=RegQueryValueEx(
mykey, // handle to key
value.c_str(), // value name
NULL, // reserved
(LPDWORD) &valuetype, // type buffer
(LPBYTE)s, // data buffer
(LPDWORD) &slen // size of data buffer
);
switch(retval) {
case ERROR_SUCCESS:
//if (isOut) {
// fprintf(fout,"RegQueryValueEx():ERROR_SUCCESS:succeeded.\n");
//} else {
// printf("RegQueryValueEx():ERROR_SUCCESS:succeeded.\n");
//}
break;
case ERROR_MORE_DATA:
//what do I do now? data buffer is too small.
if (isOut) {
fprintf(fout,"RegQueryValueEx():ERROR_MORE_DATA: need bigger buffer.\n");
} else {
printf("RegQueryValueEx():ERROR_MORE_DATA: need bigger buffer.\n");
}
return false;
case ERROR_FILE_NOT_FOUND:
if (isOut) {
fprintf(fout,"RegQueryValueEx():ERROR_FILE_NOT_FOUND: registry value does not exist.\n");
} else {
printf("RegQueryValueEx():ERROR_FILE_NOT_FOUND: registry value does not exist.\n");
}
return false;
default:
if (isOut) {
fprintf(fout,"RegQueryValueEx():unknown error type 0x%lx.\n", retval);
} else {
printf("RegQueryValueEx():unknown error type 0x%lx.\n", retval);
}
return false;
}
retval=RegCloseKey(mykey);
if (ERROR_SUCCESS != retval) {printerr(retval); return false;}
returnvalue = s;
return true;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "103"
} |
Q: Checking if userinput is a valid URI in XUL Is there a built-in function/method that can check if a given string is a valid URI or not in the Mozilla XUL toolkit? I have looked for one but found none, but since this is my first time using XUL and its documentation it could be that I just overlooked it. So I'm just making sure before I start writing my own IsValidURI function.
A: The nsIIOService.newURI(...) method is what you're looking for. It throws NS_ERROR_MALFORMED_URI if the URI string is invalid.
Example:
try {
var ioServ = Components.classes["@mozilla.org/network/io-service;1"]
.getService(Components.interfaces.nsIIOService);
var uriObj = ioServ.newURI(uriString, uriCharset, baseURI);
} catch (e) {
// catch the error here
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to specify an authenticated proxy for a python http connection? What's the best way to specify a proxy with username and password for an http connection in python?
A: This works for me:
import urllib2
proxy = urllib2.ProxyHandler({'http': 'http://username:password@proxyurl:proxyport'})
auth = urllib2.HTTPBasicAuthHandler()
opener = urllib2.build_opener(proxy, auth, urllib2.HTTPHandler)
urllib2.install_opener(opener)
conn = urllib2.urlopen('http://python.org')
return_str = conn.read()
A: Or if you want to install it, so that it is always used with urllib2.urlopen (so you don't need to keep a reference to the opener around):
import urllib2
url = 'www.proxyurl.com'
username = 'user'
password = 'pass'
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
# None, with the "WithDefaultRealm" password manager means
# that the user/pass will be used for any realm (where
# there isn't a more specific match).
password_mgr.add_password(None, url, username, password)
auth_handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
print urllib2.urlopen("http://www.example.com/folder/page.html").read()
A: Use this:
import requests
proxies = {"http":"http://username:password@proxy_ip:proxy_port"}
r = requests.get("http://www.example.com/", proxies=proxies)
print(r.content)
I think it's much simpler than using urllib. I don't understand why people love using urllib so much.
A: Here is the method using urllib:
import urllib.request
# set up authentication info
authinfo = urllib.request.HTTPBasicAuthHandler()
proxy_support = urllib.request.ProxyHandler({"http" : "http://ahad-haam:3128"})
# build a new opener that adds authentication and caching FTP handlers
opener = urllib.request.build_opener(proxy_support, authinfo,
urllib.request.CacheFTPHandler)
# install it
urllib.request.install_opener(opener)
f = urllib.request.urlopen('http://www.python.org/')
A: Set an environment variable named http_proxy like this: http://username:password@proxy_url:port
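For illustration, a sketch of this approach (the proxy URL is a placeholder): urllib consults the http_proxy environment variable by default, via getproxies().
import os
import urllib.request   # urllib2's ProxyHandler behaves the same way on Python 2

os.environ["http_proxy"] = "http://username:password@proxy_url:port"
print(urllib.request.getproxies())   # shows the proxy picked up from the environment
html = urllib.request.urlopen("http://www.example.com/").read()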
A: The best way of going through a proxy that requires authentication is using urllib2 to build a custom url opener, then using that to make all the requests you want to go through the proxy. Note in particular, you probably don't want to embed the proxy password in the url or the python source code (unless it's just a quick hack).
import urllib2
def get_proxy_opener(proxyurl, proxyuser, proxypass, proxyscheme="http"):
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, proxyurl, proxyuser, proxypass)
proxy_handler = urllib2.ProxyHandler({proxyscheme: proxyurl})
proxy_auth_handler = urllib2.ProxyBasicAuthHandler(password_mgr)
return urllib2.build_opener(proxy_handler, proxy_auth_handler)
if __name__ == "__main__":
import sys
if len(sys.argv) > 4:
url_opener = get_proxy_opener(*sys.argv[1:4])
for url in sys.argv[4:]:
print url_opener.open(url).headers
else:
print "Usage:", sys.argv[0], "proxy user pass fetchurls..."
In a more complex program, you can separate these components out as appropriate (for instance, only using one password manager for the lifetime of the application). The python documentation has more examples on how to do complex things with urllib2 that you might also find useful.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "59"
} |
Q: Parsing, where can I learn about it I've been given a job of 'translating' one language into another. The source is too flexible (complex) for a simple line by line approach with regex. Where can I go to learn more about lexical analysis and parsers?
A: Try ANTLR:
ANTLR, ANother Tool for Language Recognition, is a language tool that provides a framework for constructing recognizers, interpreters, compilers, and translators from grammatical descriptions containing actions in a variety of target languages.
There's a book for it also.
A: Niklaus Wirth's book "Compiler Construction" (available as a free PDF)
http://www.google.com/search?q=wirth+compiler+construction
A: I've recently been working with PLY which is an implementation of lex and yacc in Python. It's quite easy to get started with it and there are some simple examples in the documentation.
Parsing can quickly become a very technical topic and you'll find that you probably won't need to know all the details of the parsing algorithm if you're using a parser builder like PLY.
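For illustration, a minimal PLY lexer sketch (the token names and input are invented, and PLY must be installed separately):
import ply.lex as lex

tokens = ('NUMBER', 'PLUS')

t_PLUS = r'\+'
t_ignore = ' \t'

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    print("Illegal character %r" % t.value[0])
    t.lexer.skip(1)

lexer = lex.lex()
lexer.input("1 + 22")
for tok in lexer:
    print(tok.type, tok.value)
A grammar built with ply.yacc follows the same declarative style, with one function per production rule.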
A: Lots of people have recommended books. For many these are much more useful in a structured environment with assignments and due dates and so forth. Even if not, having the material presented in a different way can help greatly.
(a) Have you considered going to a school with a decent CS curriculum?
(b) There are lots of online lectures, such as MIT's Open Courseware. Their EE/CS section has many courses that touch on parsing, though I can't see any on parsing per se. It's typically introduced as one of the first theory courses as language classification and automata is at the heart of much of CS theory.
A: If you want to get "emotional" about the subject, pick up a copy of "The Dragon Book." It is usually the text in a compiler design course. It will definitely meet your need "learn more about lexical analysis and parsers" as well as a bunch of other fun stuff!
IMH(umble)O, save yourself an arm and/or leg and buy an older edition - it will fill your information desires.
A: If you prefer Java based tools, the Java Compiler Compiler, JavaCC, is a nice parser/scanner. It's config file driven, and will generate java code that you can include in your program. I haven't used it a couple years though, so I'm not sure how the current version is. You can find out more here: https://javacc.dev.java.net/
A: flex and bison are the new lex and yacc though. The syntax for BNF is often derided for being a bit obtuse. Some have moved to ANTLR and Ragel for this reason.
If you're not doing much translation, you may one to pull a one-off using multiline regexes with Perl or Ruby. Writing a compatible BNF grammar for an existing language is not a task to be taken lightly.
On the other hand, it is entirely possible to leverage any given language's .l and .y files if they are available as open source. Then, you could construct new code from an existing parse tree.
A: Lexing/parsing + typecheck + code generation is a great CS exercise. I would recommend it to anyone wanting a solid basis, so I'm all for the Dragon Book.
A: Yet another textbook to consider is Programming Language Pragmatics. I prefer it over the Dragon book, but YMMV.
If you're using Perl, yet another tool to consider is Parse::RecDescent.
If you just need to do this translation once and don't know anything about compiler technology, I would suggest that you get as far as you can with some fairly simplistic translations and then fix it up by hand. Yes, it is a lot of work. But it is less work than learning a complex subject and coding up the right solution for one job. That said, you should still learn the subject, but don't let not knowing it be a roadblock to finishing your current project.
A: I found this site helpful:
Lex and YACC primer/HOWTO
The first time I used lex/yacc was for a relatively simple project. This tutorial was all I really needed. When I approached more complex projects later, the familiarity I had from this tutorial and a simple project allowed me to build something fancier.
A: After taking (quite) a few compilers classes, I've used both The Dragon Book and C&T. I think C&T does a far better job of making compiler construction digestible. Not to take anything away from The Dragon Book, but I think C&T is a far more practical book.
Also, if you like writing in Java, I recommend using JFlex and BYACC/J for your lexing and parsing needs.
A: Parsing Techniques - A Practical Guide
By Dick Grune and Ceriel J.H. Jacobs
This book (freely available as PDF) gives an extensive overview of different parsing techniques/algorithms. If you really want to understand the different parsing algorithms, this IMO is a better reference than the Dragon Book (as Parsing Techniques focuses entirely on parsing, while the Dragon Book covers parsing only as one - although important - part of the compiler construction process).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Dividing a list of nodes in half <xsl:for-each select="./node [position() <= (count(*) div 2)]">
<li>foo</li>
</xsl:for-each>
<xsl:for-each select="./node [count(*) div 2 < position()]">
<li>bar</li>
</xsl:for-each>
My list has 12 nodes, but the second list is always 8 and the first is always 4. What's wrong with my selects?
A: When you do count(*), the current node is the node element being processed. You want either count(current()/node) or last() (preferable), or just calculate the midpoint to a variable for better performance and clearer code:
<xsl:variable name="nodes" select="node"/>
<xsl:variable name="mid" select="count($nodes) div 2"/>
<xsl:for-each select="$nodes[position() <= $mid]">
<li>foo</li>
</xsl:for-each>
<xsl:for-each select="$nodes[$mid < position()]">
<li>bar</li>
</xsl:for-each>
A: You could try using the last() function which will give you the size of the current context:
<xsl:for-each select="./node [position() <= last() div 2]">
<li>foo</li>
</xsl:for-each>
<xsl:for-each select="./node [last() div 2 < position()]">
<li>bar</li>
</xsl:for-each>
A: I'm not at all sure, but it seems to me that count(*) is not doing what you think it is. That counts the number of children of the current node, not the size of the current node list. Could you print it out to check that it's 8 or 9 instead of 12?
Use last() to get the context size.
A: Try count(../node). The following will gives the correct result on my test XML file (a simple nodes root with node elements), using the xsltproc XSLT processor.
<xsl:for-each select="node[position() <= (count(../node) div 2)]">
...
</xsl:for-each>
<xsl:for-each select="node[(count(../node) div 2) < position()]">
...
</xsl:for-each>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to apply an XSLT Stylesheet in C# I want to apply an XSLT Stylesheet to an XML Document using C# and write the output to a File.
A: This might help you
public static string TransformDocument(string doc, string stylesheetPath)
{
Func<string,XmlDocument> GetXmlDocument = (xmlContent) =>
{
XmlDocument xmlDocument = new XmlDocument();
xmlDocument.LoadXml(xmlContent);
return xmlDocument;
};
try
{
var document = GetXmlDocument(doc);
var style = GetXmlDocument(File.ReadAllText(stylesheetPath));
System.Xml.Xsl.XslCompiledTransform transform = new System.Xml.Xsl.XslCompiledTransform();
transform.Load(style); // compiled stylesheet
System.IO.StringWriter writer = new System.IO.StringWriter();
XmlReader xmlReadB = new XmlTextReader(new StringReader(document.DocumentElement.OuterXml));
transform.Transform(xmlReadB, null, writer);
return writer.ToString();
}
    catch (Exception)
    {
        throw; // rethrow, preserving the original stack trace
    }
}
A: I found a possible answer here: http://web.archive.org/web/20130329123237/http://www.csharpfriends.com/Articles/getArticle.aspx?articleID=63
From the article:
XPathDocument myXPathDoc = new XPathDocument(myXmlFile) ;
XslTransform myXslTrans = new XslTransform() ;
myXslTrans.Load(myStyleSheet);
XmlTextWriter myWriter = new XmlTextWriter("result.html",null) ;
myXslTrans.Transform(myXPathDoc,null,myWriter) ;
Edit:
But my trusty compiler says, XslTransform is obsolete: Use XslCompiledTransform instead:
XPathDocument myXPathDoc = new XPathDocument(myXmlFile) ;
XslCompiledTransform myXslTrans = new XslCompiledTransform();
myXslTrans.Load(myStyleSheet);
XmlTextWriter myWriter = new XmlTextWriter("result.html",null);
myXslTrans.Transform(myXPathDoc,null,myWriter);
A: Here is a tutorial about how to do XSL Transformations in C# on MSDN:
http://support.microsoft.com/kb/307322/en-us/
and here how to write files:
http://support.microsoft.com/kb/816149/en-us
just as a side note: if you want to do validation too here is another tutorial (for DTD, XDR, and XSD (=Schema)):
http://support.microsoft.com/kb/307379/en-us/
I added this just to provide some more information.
A: Based on Daren's excellent answer, note that this code can be shortened significantly by using the appropriate XslCompiledTransform.Transform overload:
var myXslTrans = new XslCompiledTransform();
myXslTrans.Load("stylesheet.xsl");
myXslTrans.Transform("source.xml", "result.html");
(Sorry for posting this as an answer, but the code block support in comments is rather limited.)
In VB.NET, you don't even need a variable:
With New XslCompiledTransform()
.Load("stylesheet.xsl")
.Transform("source.xml", "result.html")
End With
A: I would like to share this small piece of code, which reads from a database and transforms the result using XSLT. On top of that, it also uses XSLT extensions, which makes it a little different from the others.
Note: This is just draft code and may need cleanup before use in production.
var schema = XDocument.Load(XsltPath);
using (var connection = new SqlConnection(ConnectionString))
{
connection.Open();
using (var command = new SqlCommand(Sql, connection))
{
var reader = command.ExecuteReader();
var dt = new DataTable(SourceNode);
dt.Load(reader);
string xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" + Environment.NewLine;
using (var stringWriter = new StringWriter())
{
dt.WriteXml(stringWriter, true);
xml += stringWriter.GetStringBuilder().ToString();
}
XDocument transformedXml = new XDocument();
var xsltArgumentList = new XsltArgumentList();
xsltArgumentList.AddExtensionObject("urn:xslt-extensions", new XsltExtensions());
using (XmlWriter writer = transformedXml.CreateWriter())
{
XslCompiledTransform xslt = new XslCompiledTransform();
xslt.Load(schema.CreateReader());
xslt.Transform(XmlReader.Create(new StringReader(xml)), xsltArgumentList, writer);
}
var result = transformedXml.ToString();
}
}
XsltPath is the path to your XSLT file.
ConnectionString is the connection string for your database.
Sql is your query.
SourceNode is the node name for each record in the source XML.
Now the interesting part: note the use of urn:xslt-extensions and new XsltExtensions() in the code above. You can use this if you need some complex computation that may not be possible in XSLT. Following is a simple method to format a date.
public class XsltExtensions
{
public string FormatDate(string dateString, string format)
{
DateTime date;
if (DateTime.TryParse(dateString, out date))
return date.ToString(format);
return dateString;
}
}
In the XSLT file you can use it as below:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ext="urn:xslt-extensions">
...
<myTag><xsl:value-of select="ext:FormatDate(record_date, 'yyyy-MM-dd')"/></myTag>
...
</xsl:stylesheet>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "201"
} |
Q: MS WF state machine workflows and MS CRM Dynamics 4.0 MS CRM Dynamics 4.0 incorporates the MS WF engine. The built-in designer allows the creation of sequential workflows whose activities have native access to CRM entities.
Is it possible to:
*
*Create a state machine workflow
outside of CRM (i.e. in Visual Studio) and import it into CRM?
*Have this workflow access the CRM
entities?
A: *
*It is NOT possible to create a state machine workflow for use in MSCRM.
*It is also not supported to create any workflow outside of MSCRM and import it.
*As a workaround, you could write all the logic you need into a custom workflow activity, import that into MSCRM, and have it called from a normal workflow.
*The other option is to build a separate application which runs a state machine workflow and interacts with MSCRM via the web services. You could (and would probably need to) combine this with a custom workflow activity to kick off processes.
A: It is possible to create a no-code workflow...
http://blogs.msdn.com/jonasd/archive/2008/01/21/Creating-a-no_2D00_code-workflow-for-CRM-4.0-with-Visual-Studio-2005-_2800_2008_2900_.aspx
and take a look at the other thread...
Is is possible/a good idea to edit workflows in Visual Studio?
A: I don't know the answer to your specific question, but hopefully this information will point you in the right direction.
The "native" format for WF workflows is ".xoml" files. These are basically identical to XAML files, and both are nothing more than generic persistence formats for a .NET object tree. If you can access the saved data that is output by the Dynamics designer, it should be in the same format. If it is, you should be able to open it from the Visual Studio designer.
The key here is that CRM undoubtedly defines its own set of custom activities that you'll need to be able to reference from within the alternate designer. With any luck, these will be in assemblies with obvious names and/or in the GAC.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: VS2008: Copy Web Site from command line How do I execute the "Copy Web Site" command for an ASP.NET project in VS2008 from the command line? If I need to script this, please give me some pointers on where I can learn how.
A: Would this help you get started?
Walkthrough: Deploying an ASP.NET Web Application Using XCOPY
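For example, a one-line deployment might look like this (the paths are placeholders; /E copies subdirectories including empty ones, /Y suppresses overwrite prompts, /I treats the destination as a directory):
xcopy C:\Projects\MyWebSite \\webserver\wwwroot\MyWebSite /E /Y /I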
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: HTML Scraping in Php I've been doing some HTML scraping in PHP using regular expressions. This works, but the result is finicky and fragile. Has anyone used any packages that provide a more robust solution? A config driven solution would be ideal, but I'm not picky.
A: If the page you're scraping is valid X(HT)ML, then any of PHP's built-in XML parsers will do.
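Even for pages that aren't valid XML, the DOM extension can cope, since libxml's HTML parser tolerates broken markup. A minimal sketch (the URL is a placeholder):
libxml_use_internal_errors(true);
$doc = new DOMDocument();
$doc->loadHTML(file_get_contents('http://example.com/'));
libxml_clear_errors();
$xpath = new DOMXPath($doc);
foreach ($xpath->query('//a[@href]') as $link) {
echo $link->getAttribute('href'), "\n";
}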
I haven't had much success with PHP libraries for scraping. If you're adventurous though, you can try simplehtmldom. I'd recommend Hpricot for Ruby or Beautiful Soup for Python, which are both excellent parsers for HTML.
A: I had some fun working with htmlSQL, which is not so much a high end solution, but really simple to work with.
A: I would also recommend 'Simple HTML DOM Parser.' It is a good option: if you're familiar with jQuery or JavaScript selectors, you will find yourself right at home.
I have even blogged about it in the past.
A: Using PHP for HTML scraping, I'd recommend cURL + regexp or cURL + some DOM parser, though I personally use cURL + regexp. If you have a profound grasp of regexp, it's actually more accurate sometimes.
A: I would recommend PHP Simple HTML DOM Parser after you have scraped the HTML from the page. It supports invalid HTML, and provides a very easy way to handle HTML elements.
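A minimal sketch of what that looks like, assuming simple_html_dom.php is on your include path (the URL is a placeholder):
include 'simple_html_dom.php';
$html = file_get_html('http://example.com/');
foreach ($html->find('a') as $a) {
echo $a->href, "\n";
}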
A: I've had very good results with the Simple HTML DOM Parser mentioned above as well. And then there's the Tidy extension for PHP, which works really well too.
A: I had to use curl on my host 1and1.
http://www.quickscrape.com/ is what I came up with using the Simple DOM class!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
} |
Q: Which, if any, C++ compilers do tail-recursion optimization? It seems to me that it would work perfectly well to do tail-recursion optimization in both C and C++, yet while debugging I never seem to see a frame stack that indicates this optimization. That is kind of good, because the stack tells me how deep the recursion is. However, the optimization would be kind of nice as well.
Do any C++ compilers do this optimization? Why? Why not?
How do I go about telling the compiler to do it?
*
*For MSVC: /O2 or /Ox
*For GCC: -O2 or -O3
How about checking if the compiler has done this in a certain case?
*
*For MSVC, enable PDB output to be able to trace the code, then inspect the code
*For GCC..?
I'd still take suggestions for how to determine if a certain function is optimized like this by the compiler (even though I find it reassuring that Konrad tells me to assume it)
It is always possible to check if the compiler does this at all by making an infinite recursion and checking if it results in an infinite loop or a stack overflow (I did this with GCC and found out that -O2 is sufficient), but I want to be able to check a certain function that I know will terminate anyway. I'd love to have an easy way of checking this :)
After some testing, I discovered that destructors ruin the possibility of making this optimization. It can sometimes be worth it to change the scoping of certain variables and temporaries to make sure they go out of scope before the return-statement starts.
If any destructor needs to be run after the tail-call, the tail-call optimization can not be done.
A: As Greg mentions, compilers won't do it in debug mode. It's ok for debug builds to be slower than a prod build, but they shouldn't crash more often: and if you depend on a tail call optimization, they may do exactly that. Because of this it is often best to rewrite the tail call as a normal loop. :-(
A: As well as the obvious (compilers don't do this sort of optimization unless you ask for it), there is a complexity about tail-call optimization in C++: destructors.
Given something like:
int fn(int j, int i)
{
if (i <= 0) return j;
Funky cls(j,i);
return fn(j, i-1);
}
The compiler can't (in general) tail-call optimize this because it needs
to call the destructor of cls after the recursive call returns.
Sometimes the compiler can see that the destructor has no externally visible side effects (so it can be done early), but often it can't.
A particularly common form of this is where Funky is actually a std::vector or similar.
A: gcc 4.3.2 completely inlines this function (a crappy/trivial atoi() implementation) into main(). Optimization level is -O1. I notice that if I play around with it (even changing it from static to extern), the tail recursion goes away pretty fast, so I wouldn't depend on it for program correctness.
#include <stdio.h>
static int atoi(const char *str, int n)
{
if (str == 0 || *str == 0)
return n;
return atoi(str+1, n*10 + *str-'0');
}
int main(int argc, char **argv)
{
for (int i = 1; i != argc; ++i)
printf("%s -> %d\n", argv[i], atoi(argv[i], 0));
return 0;
}
A: All current mainstream compilers perform tail call optimisation fairly well (and have done for more than a decade), even for mutually recursive calls such as:
int bar(int, int);
int foo(int n, int acc) {
return (n == 0) ? acc : bar(n - 1, acc + 2);
}
int bar(int n, int acc) {
return (n == 0) ? acc : foo(n - 1, acc + 1);
}
Letting the compiler do the optimisation is straightforward: Just switch on optimisation for speed:
*
*For MSVC, use /O2 or /Ox.
*For GCC, Clang and ICC, use -O3
An easy way to check if the compiler did the optimisation is to perform a call that would otherwise result in a stack overflow — or looking at the assembly output.
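One rough way to inspect a specific function with GCC is to compile it to assembly and confirm the recursive site is a jmp rather than a call (a quick sketch; the file name is a placeholder):
g++ -O3 -S tail.cpp -o tail.s
grep -E 'call|jmp' tail.s
A self tail-call that shows up as jmp means the optimisation fired; a call means it did not.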
As an interesting historical note, tail call optimisation for C was added to the GCC in the course of a diploma thesis by Mark Probst. The thesis describes some interesting caveats in the implementation. It's worth reading.
A: Most compilers don't do any kind of optimisation in a debug build.
If using VC, try a release build with PDB info turned on - this will let you trace through the optimised app and you should hopefully see what you want then. Note, however, that debugging and tracing an optimised build will jump you around all over the place, and often you cannot inspect variables directly as they only ever end up in registers or get optimised away entirely. It's an "interesting" experience...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "167"
} |