Q: Do you continue development in a branch or in the trunk? Suppose you're developing a software product that has periodic releases. What are the best practices with regard to branching and merging? Slicing off periodic release branches to the public (or whomever your customer is) and then continuing development on the trunk, or considering the trunk the stable version, tagging it as a release periodically, and doing your experimental work in branches. What do folks think: is the trunk considered "gold", or is it considered a "sandbox"?
A: We develop on the trunk unless the changes are too major, destabilizing, or we are nearing a major release of one of our products, in which case we create a temporary branch. We also create a permanent branch for every individual product release. I found Microsoft's document on Branching Guidance quite helpful. Eric Sink's tutorial on branching is also interesting, and points out that what works for Microsoft may be too heavy for some of the rest of us. It was in our case; we actually use the approach Eric says his team uses.
A: I've worked with both techniques and I would say that developing on the trunk and branching off stable points as releases is the best way to go.
Those people above who object saying that you'll have:
*
*Constant build problems for daily builds
*Productivity loss when a developer commits a problem for all other people on the project
have probably not used continuous integration techniques.
It's true that teams who don't perform several test builds during the day, say one every hour or so, leave themselves open to these problems, which will quickly strangle the pace of development.
Doing several test builds during the day quickly folds updates into the main code base so that others can use it, and also alerts you during the day if someone has broken the build, so that they can fix it before going home.
As pointed out, only finding out about a broken build when the nightly build for running the regression tests fails is sheer folly and will quickly slow things down.
Have a read of Martin Fowler's paper on Continuous Integration. We rolled our own such system for a major project (3,000kSLOC) in about 2,000 lines of Posix sh.
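To make the idea concrete, here is only a toy sketch of such a poll-build-notify loop; the SCM command, build targets, log file names, and mail address are placeholders rather than anything from our actual system:

#!/bin/sh
# Poll the repository; rebuild and run the tests whenever something changed.
while true; do
    svn update workdir | grep -q '^Updated to revision' || { sleep 600; continue; }
    if ( cd workdir && make clean all test ) > build.log 2>&1; then
        echo "`date`: build and tests passed" >> ci.log
    else
        # Shout immediately so the build gets fixed before people go home.
        mail -s "Build broken" dev-team@example.com < build.log
    fi
done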
A: It depends on your situation. We use Perforce and typically have several lines of development. The trunk is considered "gold" and all development happens on branches that get merged back to the mainline when they are stable enough to integrate. This allows rejection of features that don't make the cut and can provide solid incremental capability over time that independent projects/features can pick up.
There is integration cost to the merging and catching up to new features rolled into the trunk, but you're going to suffer this pain anyway. Having everyone develop on the trunk together can lead to a wild west situation, while branching allows you to scale and choose the points at which you'd like to take the bitter integration pills. We're currently scaled to over a hundred developers on a dozen projects, each with multiple releases using the same core components, and it works pretty well.
The beauty of this is that you can do this recursively: a big feature branch can be its own trunk with other branches coming off of it. Also, final releases get a new branch to give you a place to do stable maintenance.
A: Attempting to manage maintenance of current production code in line with new development is problematic at best. In order to mitigate those problems code should branch into a maintenance line once testing efforts have completed and the code is ready for delivery. Additionally, the mainline should branch to assist in release stabilization, to contain experimental development efforts, or to house any development efforts whose lifecycle extends across multiple releases.
A non-maintenance branch should be created only when there is the likelihood (or certainty) of collisions among the code that would be difficult to manage any other way. If the branch does not solve a logistical problem, it will create one.
Normal release development occurs in the mainline. Developers check into and out of the mainline for normal release work. Development work for patches to current Production code should be in the branch for that release and then merged with the mainline once the patch has passed testing and is deployed. Work in non-maintenance branches should be coordinated on a case-by-case basis.
A: It depends on the size of your development effort. Multiple teams working in parallel won't be able to work effectively all on the same code (trunk). If you have just a small group of people working, and your main concern is cutting a branch so you can continue to work while going back to the branch to make bug fixes to the current production code, that would work. This is a trivial use of branching and not too burdensome.
If you have a lot of parallel development you'll want to have branches for each of the efforts, but that'll also require more discipline: making sure your branches are tested and ready to merge back, scheduling merges so two groups aren't trying to merge at the same time, etc.
Some branches are under development for so long that you have to permit merges from the trunk to the branch in order to reduce the number of surprises when finally merging back to the trunk.
You will have to experiment if you have a large group of developers and get a feel for what works in your situation. Here is a page from Microsoft that may be somewhat useful: http://msdn.microsoft.com/en-us/library/aa730834(VS.80).aspx
A: We are using the trunk for main development and branches for release maintenance work. It works nicely. But branches should only be used for bug fixes, no major changes; especially on the database side, we have a rule that a schema change can only happen on the main trunk and never in a branch.
A: I tend to take the "release branch" approach. The trunk is volatile. Once release time approaches, I'd make a release branch, which I would treat more cautiously. When that's finally done, I'd label/tag the state of the repository so I'd know the "official" released version.
I understand there are other ways to do it - this is just the way I've done it in the past.
A: If you are going to be working on something that spans a release cycle, like a big feature, you get marooned on a branch. Otherwise we work in trunk, and branch for every production release at the moment we build.
Previous production builds are moved at that time to old_production_ and the current prod release is always just production. All our build server knows about production is how to deploy the production branch, and we kick that build off with a force trigger.
A: We follow the trunk=current development stream, branch=release(s) approach. On release to the customer we branch the trunk and just keep the trunk rolling forward. You'll need to make a decision on how many releases you're prepared to support. The more you support, the more merging you'll be doing on bug fixes. We try to keep our customers no more than 2 releases behind the trunk (e.g. Dev = 1.3, supported releases 1.2 and 1.1).
A: Both.
The trunk is used for the majority of development. But it's expected that best efforts will be made to ensure that any check-in to the trunk won't break it (this is partially verified by an automated build and test system).
Releases are maintained in their own directory, with only bug fixes being made on them (and then merged into trunk).
Any new feature that is going to leave the trunk in an unstable or non-working state is done in its own separate branch and then merged into the trunk upon completion.
A: I have tried both methods with a large commercial application.
The answer to which method is better is highly dependent on your exact situation, but I will write what my overall experience has shown so far.
The better method overall (in my experience): The trunk should be always stable.
Here are some guidelines and benefits of this method:
*
*Code each task (or related set of tasks) in its own branch, then you will have the flexibility of when you would like to merge these tasks and perform a release.
*QA should be done on each branch before it is merged to the trunk.
*By doing QA on each individual branch, it is easier to know exactly which change caused a bug.
*This solution scales to any number of developers.
*This method works since branching is an almost instant operation in SVN (see the command sketch after this list).
*Tag each release that you perform.
*You can develop features that you don't plan to release for a while and decide exactly when to merge them.
*For all work you do, you can have the benefit of committing your code. If you work out of the trunk only, you will probably keep your code uncommitted a lot, and hence unprotected and without automatic history.
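As a rough illustration of that task-branch flow in Subversion (the repository URL, branch and tag names are placeholders; on clients older than 1.8 you would add --reintegrate to the merge step):

# 1. Branching is a cheap server-side copy, so it is near-instant.
svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/branches/task-123 \
         -m "Create branch for task 123"

# 2. Develop and QA on the branch, committing freely.
svn checkout http://svn.example.com/repo/branches/task-123 task-123-wc
# ... edit, svn commit, run QA against the branch ...

# 3. When QA passes and you decide to release it, fold the branch into the trunk.
cd trunk-wc
svn merge http://svn.example.com/repo/branches/task-123
svn commit -m "Merge task 123 into trunk"

# 4. Tag the release you actually ship.
svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/tags/release-1.4 \
         -m "Tag release 1.4"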
If you try to do the opposite and do all your development in the trunk you'll have the following issues:
*
*Constant build problems for daily builds
*Productivity loss when a developer commits a problem for all other people on the project
*Longer release cycles, because you need to finally get a stable version
*Less stable releases
You simply will not have the flexibility that you need if you try to keep a branch stable and the trunk as the development sandbox. The reason is that you can't pick and choose from the trunk what you want to put in that stable release. It would already be all mixed in together in the trunk.
The one case in particular that I would say to do all development in the trunk, is when you are starting a new project. There may be other cases too depending on your situation.
By the way distributed version control systems provide much more flexibility and I highly recommend switching to either hg or git.
A: I like and use the approach described by Henrik Kniberg in Version Control for Multiple Agile Teams. Henrik did a great job of explaining how to handle version control in an agile environment with multiple teams (it works for a single team in traditional environments too) and there is no point in paraphrasing him, so I'll just post the "cheat sheet" (which is self-explanatory) below:
I like it because:
*
*It is simple: you can get it from the picture.
*It works (and scales) well without too much merge and conflict troubles.
*You can release "working software" at any time (in the spirit of agile).
And just in case it wasn't explicit enough: development is done in "work branch(es)", the trunk is used for DONE (releasable) code. Check Version Control for Multiple Agile Teams for all the details.
A: A good reference on a development process that keeps trunk stable and does all work in branches is Divmod's Ultimate Quality Development System. A quick summary:
*
*All work done must have a ticket associated with it
*A new branch is created for each ticket where the work for that ticket is done
*Changes from that branch are not merged back into the mainline trunk without being reviewed by another project member
They use SVN for this, but this could easily be done with any of the distributed version control systems.
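For example, the same ticket-per-branch discipline in git might look roughly like this (branch, remote, and ticket names are just placeholders):

git checkout -b ticket-456 master      # one branch per ticket
# ... work and commit on the branch ...
git push origin ticket-456             # publish the branch for review

# only after another project member has reviewed it:
git checkout master
git merge --no-ff ticket-456           # keep the ticket visible as a merge commit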
A: I think your second approach (e.g., tagging releases and doing experimental stuff in branches, considering the trunk stable) is the best approach.
It should be clear that branches inherit all the bugs of a system at the point in time at which they are branched: if fixes are applied to the trunk, you will have to apply them one by one to all branches if you maintain branches as a sort of release cycle terminator. If you have already had 20 releases and you discover a bug that goes as far back as the first one, you'll have to reapply your fix 20 times.
Branches are supposed to be the real sand boxes, although the trunk will have to play this role as well: tags will indicate whether the code is "gold" at that point in time, suitable for release.
A: The trunk is generally the main development line.
Releases are branched off and often times experimental or major work is done on branches then merged back to the trunk when it's ready to be integrated with the main development line.
A: The trunk should generally be your main development source. Otherwise you will spend a lot of time merging in new features. I've seen it done the other way and it usually leads to a lot of last minute integration headaches.
We label our releases so we can quickly respond to production emergencies without disturbing active development.
A: For me, it depends on the software I'm using.
Under CVS, I would just work in "trunk" and never tag/branch, because it was really painful to do otherwise.
In SVN, I would do my "bleeding edge" stuff in trunk, but when it was time to do a server push, things would get tagged appropriately.
I recently switched to git. Now I find that I never work in trunk. Instead I use a named "new-featurename" sandbox branch and then merge into a fixed "current-production" branch. Now that I think about it, I really should be making "release-VERSIONNUMBER" branches before merging back into "current-production" so I can go back to older stable versions...
A: It really depends on how well your organization/team manages versions and which SCM you use.
*
*If what's next (in the next release) can be easily planned, you are better off developing in the trunk. Managing branches takes more time and resources. But if what's next can't be planned easily (which happens all the time in bigger organizations), you would probably end up cherry-picking commits (hundreds/thousands) rather than branches (several or tens of them).
*With Git or Mercurial, managing branches is much easier than with CVS and Subversion. I would go for the stable trunk/topic branches methodology. This is what the git.git team is using; read: http://www.kernel.org/pub/software/scm/git/docs/gitworkflows.html
*With Subversion, I first applied the develop-in-the-trunk methodology. There was quite some work when it came to the release date, because every time I had to cherry-pick commits (my company is no good at planning); an example of such a cherry-pick is shown below. Now I am sort of an expert in Subversion and know quite a lot about managing branches in Subversion, so I am moving towards the stable trunk/topic branches methodology. It works much better than before. Now I am trying the way the git.git team works, although we will probably stick with Subversion.
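For reference, cherry-picking in Subversion (1.5 or later) means merging individual revisions into the release working copy, roughly like this; the revision numbers and URL are placeholders:

cd release-1.2-wc
svn merge -c 1234 http://svn.example.com/repo/trunk   # pull in just r1234
svn merge -c 1240 http://svn.example.com/repo/trunk   # and then r1240
svn commit -m "Cherry-pick r1234 and r1240 from trunk"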
A: Here is the SVN design that I prefer:
*root
    *development
        *branches
            *feature1
            *feature2
            *...
        *trunk
    *beta
        *tags
        *trunk
    *release
        *tags
        *trunk
All work is done from development/trunk, except for major features that require their own branch. After work is tested against development/trunk, we merge tested issues into beta/trunk. If necessary, code is tested against the beta server. When we are ready to roll some changes out, we just merge the appropriate revisions into release/trunk and deploy.
Tags can be made in the beta branch or the release branch so we can keep track of specific release for both beta and release.
This design allows for a lot of flexibility. It also makes it easy for us to leave revisions in beta/trunk while merging others to release/trunk if some revisions did not pass tests in beta.
A: There's no one-size-fits-all answer for the subversion convention question IMHO.
It really depends on the dynamics of the project and company using it. In a very fast-paced environment, when a release might happen as often as every few days, if you try to religiously tag and branch, you'll end up with an unmanageable repository. In such an environment, the branch-when-needed approach would create a much more maintainable environment.
Also - in my experience it is extremely easy, from a pure administrative standpoint, to switch between svn methodologies when you choose to.
The two approaches I've known to work best are the branch-when-needed, and the branch-each-task. These are, of course, sort of the exact opposite of one another. Like I said - it's all about the project dynamics.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "171"
} |
Q: Windows Home Server versus Vista Backup and Restore Center I've been using Windows Home Server for my backups here at home for most of a year now, and I'm really pleased with it. It's far better than the software I was using previously (Acronis). I'm thinking about a backup strategy for my work machine and I'd like to know how WHS compares with Vista's built-in backup and restore features. The plan is to do a full backup to a local external hard drive and back up the documents folder to a network drive on the server. Anyone have experience using the Vista backup feature like this?
A: Chris,
They're different beasts. WHS backup is pretty much automatic and uses deltas - Vista's is manual and I don't believe it offers incremental updates.
While your solution (Vista + network copy) would preserve your data, it has two problems I can see:
*
*Your documents will only have the latest revision. If you find something was corrupted a month ago it could be very awkward to recover it. Vista's shadow copies may help though.
*As soon as you install a program/patch/config change, your Vista backup is out of date and needs to be remade, or those steps repeated if you reinstall.
These might not be dealbreakers, and indeed Vista's backup is pretty decent; it's just nowhere near as good as WHS. In my opinion WHS leaves almost everything else standing, and you can be sure this tech will be in the "big brother" server versions shortly.
A: Also, remember that many backup strategies are busted in some way, and we don't find out until it's time to restore after a hardware failure. This is a bad time to find that out!
When you work out your backup strategy, test that you can actually restore from it. Do this periodically.
A: WHS is such a quick, simple, robust way to get your stuff backed up. Plug it in to the network; install the client software; done. I'd hate to live without it.
However, as a programmer, I also set up scripts to run each night and back up my pending changes to another machine. For example, when using TFS, I run 'tf workspaces' then 'tf shelve' on each workspace to make a copy. Shelveset names begin with 'z' to make them sort to the end of the list.
A: Vista Home Premium's built-in Backup app does not provide features for saving and restoring an OS image; it only does data and folder backups. For a home user to get full disk-image support from Vista's built-in Backup without going third-party, you need Vista Ultimate.
WHS is nice because it is "centrally" managed and does great things with power management and sleep, if you enable the features (such as waking a machine up in the middle of the night to perform a backup and go back to sleep). I am not familiar with the scheduling features of the Vista app, but the WHS feature set in this space seems pretty complete.
Macrium Reflect (there is a Free Edition which de-features some things) works under Vista and lets you save images over the network and restore them to a drive from a boot disk. I had used this as a solution for my Vista Home Premium host before I got my own WHS up.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I begin using SVN? I am ready to start using SVN, but I have NO (as in the money required for free beer) experience with source control. I have installed subversion on my server (that was easy, 'apt-get install subversion') but now I don't know what to do, how to configure it, or how to use it. What suggestions do you have, and where can I find good resources to learn to start using it?
Update:
O.K. So the feedback has been great and I have read through a bit of it, but I want to clarify my question by saying that I am looking for more information on how to actually go about setting up my repositories, clients, server, etc. I know that I could do a quick Google search and find dozens (or more) of resources, but I'm hoping that someone who has experience with Subversion and a client (I have installed TortoiseSVN) could suggest a good reference that will be reliable and have quality content.
A: Another good Subversion book is Pragmatic Version Control with Subversion, a Pragmatic Programmer book that goes out of its way to make key concepts of version control (from checkin & checkout to branching & merging) clear.
A: http://blog.clickablebliss.com/2006/04/26/introduction-to-subversion-screencast/ explains how to use SVN very well.
A: You might also want to see Intro to Distributed Version Control (Illustrated) and a visual guide to version control.
It was only with these guides that I FINALLY understood a lot of things, especially the Branching and Merging part ;)
A: Jeff posted a good "getting started" article for Windows, including how to setup svnserve:
Setting up Subversion on Windows
A: Where do you live that you can get free bear!?
Subversion is complicated to set up -- if you have no experience with version control at all, I'd recommend using a distributed VCS because they don't require any server configuration. Bazaar in five minutes is a good start.
For Subversion, you'll want to set up either svnserve or the mod_dav_svn Apache module. I prefer the Apache module, because it gives you basic web-based repository browsing in the bargain. You'll also need to create and configure a repository -- see the SVN red book Chapter 5 for more information on repository administration. Then read chapter 2 to learn how to use Subversion itself.
A: Eric Sink has an excellent series on source code control aimed at beginners. For Subversion specifics, including setting up and administering a server, the Subversion book is a great resource, and includes a section with examples of a typical session with Subversion (checkout, commit, merging and updating basics).
Update: I forgot to mention that for beginners, I'd also recommend messing around in a graphical client, which removes the command-line hassle from the learning experience. RapidSVN is a reasonable cross-platform client. You'll also find that common IDEs either come with Subversion support, or have plugins which can be installed, which allow most version control operations to be performed within that environment.
@John Millikin: While setting up a Subversion server can be complicated, depending on one's general admin experience, don't forget that you don't need to do that just to mess about with a repository and get to grips with the basics - the client can interact with a repository in the local filesystem.
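For instance, a complete first session against a purely local repository might look like this (the paths are placeholders):

svnadmin create ~/svn/sandbox                       # create an empty repository
svn checkout file://$HOME/svn/sandbox ~/sandbox-wc  # get a working copy of it
cd ~/sandbox-wc
echo "hello" > readme.txt
svn add readme.txt
svn commit -m "First commit"
svn log                                             # see the history you just created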
A: Another route you could take is not to mess around with your own repository per se, for fear of messing things up, but to use someone else's repository or set up your own elsewhere. Point being, I learned by using SourceForge, which has both CVS and SVN... but hearing good things about SVN and weighing the differences between the two, I of course went with SVN. Getting back to SourceForge, I applied for a test project, more or less to see how SourceForge worked... but once I was in I got to playing around with their SVN for my own project listed there, experimenting with it both remotely and locally. Once I got a broad grasp of its features through testing it there, I then went on to read the go-to book for SVN, the freely distributed book by the tool's authors (the book already mentioned). It's truly a great book, and at that point I began to feel comfortable setting up my own repository on critical systems. From that point all you need is a Q&A site like this for specific issues you come across, and of course keep the free SVN book referenced in a bookmark for easy access.
Post your questions if you get stuck along the way and we'll be happy to help. Best of luck!
A: I recommend using SVN with Apache on Linux, svn as a Linux client, and TortoiseSVN on Windows (it does great MS Office diffs).
I have lots of stuff on my svn, and I would hate not using it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: What is the best architecture to bridge to XMPP? If I have a separate system with its own concept of users and presence, what is the most appropriate architecture for creating a bridge to an XMPP server network? As far as I can tell there are three primary ways:
*
*Act as a server. This creates one touchpoint, but I fear it has implications for compatibility, and potentially creates complexity in my system for emulating a server.
*Act as clients. This seems to imply that I need one connection per user in my system, which just isn't going to scale well.
*I've heard of an XMPP gateway protocol, but it's unclear if this is any better than the client solution. I also can't tell if this is standard or not.
Any suggestions or tradeoffs would be appreciated. For example, would any of these solutions require running code inside the target XMPP server (not likely something I can do).
A: The XMPP gateway protocol you've heard of is most likely to do with transports. A transport is a server that connects to both a XMPP server and a non-XMPP server. By running a transport, I can use my Jabber client to talk to someone using, say, MSN Messenger.
A transport typically connects once to the remote network for each JID that it sees as online. That is, it's your option 2 in reverse. This is because there is no special relationship between the transport and the non-XMPP network; the transport is simply acting as a bunch of regular clients. For this to work, XMPP clients must first register with the transport, giving login credentials for the remote network, and allowing the transport to view their presence.
The only reason this has a chance of scaling better is that there can be many transports for the same remote network. For example, my Jabber server could run a transport to MSN, another Jabber server could run another one, and so on, each one providing connections for a different subset of XMPP users. While this spreads out the load on the Jabber side, and load balancing on your system may spread out the load as well, it still requires many connections between the two systems.
In your case, because (I assume) the non-XMPP side of things is cooperating, putting a XMPP server interface on the non-XMPP server is likely your best bet. That server interface is best suited for managing the mapping between XMPP JIDs and how that JID will appear on its own network, rather than forcing XMPP users to register and so on.
In case you haven't seen these, you might find them useful:
*
*http://www.jabber.org/jabber-for-geeks/technology-overview
*http://www.xmpp.org/protocols/
*http://www.xmpp.org/extensions/
Hope that helps.
A: I too am working on a similar system.
I am going with the gateway/component route. I have looked at several options and settled with this one.
The gateway is basically a component with the specific purpose of bridging Jabber/XMPP with another network. You will have to build most of the things you take for granted when using XMPP as a client. Stuff like roster control.
There is very little help online on the actual design and building of a component. Like the above answer, I found the XMPP protocols/extensions to be of help. The main ones being:
*
*Basic Client 2008
*Basic Server 2008
*Intermediate Client 2008
*Intermediate Server 2008
Reading through these will show you which XEPs you will be expected to be able to handle. Ignore the stuff that will be handled by the server that your component will be attached to.
It's a shame that Djabberd has such poor documentation, as their "everything is a module" system raised the possibility that the backend of the server could interface directly to the other network. I made no headway on this.
A: There are basically two types of server to server (s2s) connections. The first is either called a gateway or a transport, but they're the same thing. This is probably the kind you're looking for. I couldn't find specific documentation for the non-XMPP side, but how XMPP thinks about doing translations to legacy servers is at http://xmpp.org/extensions/xep-0100.html. The second kind really isn't explained in any additional XEPs -- it's regular XMPP s2s connections. Look for "Server-to-Server Communication" in RFC 3920 or RFC 3920bis for the latest draft update.
Since you have your own users and presence on your server, and it's not XMPP, the concepts aren't going to map completely to the XMPP model. This is where the work of the transport comes in. You have to do the translation from your model to the XMPP model. While this is some work, you do get to make all the decisions.
Which brings us right to one of the key design choices -- you need to really decide which things you are going to map to XMPP from your service and what you aren't. These feature and use case descriptions will drive the overall structure. For example, is this like a transport to talk to AOL or MSN chat services? Then you'll need a way to map their equivalent of rosters, presence, and keep session information along with logins and passwords from your local users to the remote server. This is because your transport will need to pretend to be those users and will need to login for them.
Or, maybe you're just an s2s bridge to someone else's XMPP based chess game, so you don't need a login on the remote server, and can just act similarly to an email server and pass the information back and forth. (With normal s2s connections the only session that would be stored would be SASL authentication used with the remote server, but at the user level s2s just maintains the connection, and not the login session.)
Other factors are scalability and modularity on your end. You nailed some of the scalability concerns. Take a look at putting in multiple transports to balance the load. For modularity, see where you want to make decisions about what to do with each packet or action. For example, how do you handle and keep track of subscription data? You can put it on your transport, but then that makes using multiple transports harder. Or if you make that decision closer to your core server you can have simpler transports and use some common code if you need to talk to services other than XMPP. The trade off is a more complex core server with more vulnerability potential.
A: What architecture you should use depends on the non-XMPP system.
*
*Do you operate the non-XMPP system? If yes, you should find a way to add an XMPP-S2S interface to that system, in other words, make it act as an XMPP server. AOL is using this approach for AIM. Unfortunately, they have restricted their gateway to GoogleTalk.
*You don't operate the non-XMPP system, but it has a federation interface that you can use - i.e. your gateway can talk to the other system as a server and has a namespace of its own. In this case, you can build a gateway that acts as a federated server on both sides. I don't know of any example of a gateway that uses this approach, but you could use it if you want to build a public XMPP-to-SIP bridge.
*If the non-XMPP system doesn't give you a federation interface, then you have no other option but acting as a bunch of clients. In the XMPP world, this is called a "transport". The differences between a transport and a normal server are basically:
*
*the JIDs of the transport are mapped from another system (e.g. john.doe\[email protected] - really ugly!)
*XMPP users who want to use the transport need to create an account on the non-XMPP system and give the login credentials of that account to the transport service. The XMPP protocol even has a protocol extension that allows XMPP users to do transport registrations in-band.
A: One other approach is to work with your XMPP server vendor. Most have internal APIs that make injecting presence possible from third party applications. For example, Jabber XCP provides an API for this that's really easy to use.
(Disclosure: I work for Jabber, Inc, the company behind Jabber XCP)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: "Multi-agent computing" in simple terms I've encountered the term "multi-agent computing" as of late, and I don't quite get what it is. I've read a book about it, but that didn't answer the fundamental question of what an agent was.
Does someone out there have a pointer to some reference which is clear and concise and answers the question without a load of bullshit/marketing speak? I want to know if this is something I should familiarise myself with, or whether it's some crap I can probably ignore, because I honestly can't tell.
A: In simple terms, multiagent research tries to design systems composed of autonomous agents. That is, you have a bunch of robots/people/software-agents around, each of which can take its own actions but can only "see" stuff that is around it; how do you get the system to behave as you want?
Example,
Given a bunch of robots with limited sensing capabilities, how do you get them to monitor a field for enemies? to find all the mines in a field?
Given a bunch of people, how do you get them to maximize the happiness of the least happy person? without taking away their freedom.
Given a group of people, how do you set up a meeting time(s) that maximizes their happiness? without revealing their private information?
Some of these questions might appear really easy to solve, but they are not.
Multiagent research mixes techniques from game theory, economics, artificial intelligence, and sometimes even biology in order to answer these questions.
If you want more details, I have a free textbook that I am working on called Fundamentals of Multiagent Systems.
A: A multi-agent system is a concept borrowed from AI. It's almost like a virtual world where you have agents that are able to observe, communicate, and react. To give an example, you might have a memory allocation agent that you have to ask for memory and it decides whether or not to give it to you. Or you might have an agent that monitors a web server and restarts it if it hangs. The main goal behind multiagent systems is to have a more Smalltalk-like communication system between different parts of the system in order to get everything to work together, as opposed to more top-down directives that come from a central program.
A: "Agents" are another abstraction in software design.
As a crude hierarchy:
Machine code, assembly, machine-independent languages, sub-routines, procedures, abstract data types, objects, and finally agents.
As interconnection and distribution become more important in computing, the need for systems that can co-operate and reach agreements with other systems (with different interests) becomes apparent; this is where agents come in. Acting independently, agents represent your best interests in their environment.
Other examples of agents:
*
*Space craft control, to make quick decisions when there's no time for craft-ground crew-craft messaging (eg NASA's Deep Space 1)
*Air traffic control (Systems over-riding pilots; this is in place in most commercial flights, and has saved lives)
Multi-agent systems are related to;
*
*Economics
*Game theory
*Logic
*Philosophy
*Social sciences
I don't think agents are something you should gloss over. There are 2 million hits on Google Scholar for "multi agent" and more on CiteSeer; it's a rapidly evolving branch of computer science.
A: There are several key aspects to multi-agent computing, distribution and independence are among them.
Multi-agents don't have to be on different machines; they could, as @Kyle says, be multiple processes on a single chip or machine, but they act without explicit centralised direction. They might act in concert, so they have certain synchronisation rules - doing their jobs separately before coming together to compare results, for example.
Generally though the reasoning behind the segmentation into separate agents is to allow for differing priorities to guide each agent's actions and reactions. Perhaps using an economic model to divide up common resources or because the different functions are physically separated so don't need to interact tightly with each other.
<sweeping generalisation>
Is it something to ignore? Well, it's not really anything in particular, so it's a little like asking "can I ignore the concept of quicksort?" If you don't understand what quicksort is, you're not going to fail as a developer, because most of your work will be totally unaffected. If you have more understanding of different architectures and models, you'll have more knowledge to deploy in new and unpredictable places.
</sweeping generalisation>
Ten years ago, 'multi-agent systems' (MAS) was one of those phrases that appeared everywhere in the academic literature. These days it is less prevalent, but some of the ideas it represents are really useful in some places. But totally unnecessary in others. So I hope that's clear ;)
A: It is difficult to say what multi-agent computing is, because the definition of an agent is usually very soft, surrounded by marketing terms, etc. I'll try to explain what it is and where it could be used based on research into manufacturing systems, which is the area I am familiar with.
One of the "unsolved" problems of modern manufacturing is scheduling. When the definition of the problem is static, an optimal solution can be found, but in reality, people don't come to work, manufacturing resources fail, computers fail, etc. The demand is changing all the time, different products are required (i.e. mass customization of the product - one produced car has air conditioning, the next one doesn't, ...). This all leads to the conclusions that a) manufacturing is very complex, and b) static approaches, like scheduling a week in advance, don't work. So the idea is this: why wouldn't we have intelligent programs representing parts of the system, working their way out of this mess on their own? These programs are called agents. They should communicate and negotiate amongst themselves and make sure the tasks are done in due time. By using agents we want to lower the complexity of the control system, make it more manageable, enable better human-machine interaction, make it more robust and less error-prone, and very importantly: make the control system decentralized.
In short: agents are just a concept, but they are a concept everyone can intuitively understand. Code still needs to be written, but it is written in a different way, one abstraction higher than OOP.
A: There was a time when it was hard to find good material on software agents, primarily because of the perception of marketing potential. The bloom on that rose has diminished, so the signal-to-noise ratio on the Internet has improved vis-à-vis software agents.
Here is a good introduction to software agents on this blog post of an open source project for software agents. The term multi-agent systems just means a system where multiple software agents run and communicate and delegate sub tasks to each other.
A: According to Jennings and Wooldridge, who are two of the top multi-agent researchers, an agent is an object that is reactive to its environment, proactive, and social. That is, an agent is a piece of software that can react to its environment in real time in a way that is suitable to its own objective. It is proactive, which means that it doesn't just wait to be asked to perform a task; if it sees a chance to do something that it feels would be beneficial to its objectives, it does it. And it is social, i.e. it can communicate with other agents. It doesn't necessarily ever have to do any of these things in meeting its own objectives, but it should be able to do them if the situation arose. And thus a multi-agent system is just a collection of these in a distributed system that can all communicate and try to perform their own personal goals, which normally leads to an overall achievement of the system goal.
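As a purely hypothetical sketch of those three properties in plain C (none of these names come from a real agent framework):

#include <stddef.h>

typedef struct Agent Agent;
struct Agent {
    const char *name;
    void (*sense)(Agent *self);                    /* reactive: observe and respond       */
    void (*pursue_goal)(Agent *self);              /* proactive: act on its own objective */
    void (*receive)(Agent *self, const char *msg); /* social: accept messages from others */
};

/* The multi-agent system is then nothing more than many such agents running
   side by side, each deciding for itself; a round-robin loop stands in for
   real concurrency here. */
static void run(Agent *agents, size_t n, int steps)
{
    for (int t = 0; t < steps; ++t)
        for (size_t i = 0; i < n; ++i) {
            agents[i].sense(&agents[i]);
            agents[i].pursue_goal(&agents[i]);
        }
}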
A: You can find a concentration of white papers concerning agents here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: On a two-column page, how can I grow the left div to the same height of the right div using CSS or Javascript? I'm trying to make a two-column page using a div-based layout (no tables please!). Problem is, I can't grow the left div to match the height of the right one. My right div typically has a lot of content.
Here's a pared-down example of my template to illustrate the problem.
<div style="float:left; width: 150px; border: 1px solid;">
<ul>
<li>nav1</li>
<li>nav2</li>
<li>nav3</li>
<li>nav4</li>
</ul>
</div>
<div style="float:left; width: 250px">
Lorem ipsum dolor sit amet, consectetur adipisicing elit,
sed do eiusmod tempor incididunt ut labore et dolore magna
....
</div>
A: Your simplest answer lies in the next version of CSS (3), which currently no browser supports.
For now you are relegated to calculating heights in JavaScript and setting them on the left side.
If the navigation is so important that it must be positioned this way, run it along the top.
You could also do a visual trick by moving the borders to the container and the bigger inner div, making the columns appear to be the same size.
This makes it look the same, but it isn't.
<div style="border-left:solid 1px black;border-bottom:solid 1px black;">
<div style="float:left; width: 150px; border-top: 1px solid;">
<ul>
<li>nav1</li>
<li>nav2</li>
<li>nav3</li>
<li>nav4</li>
</ul>
</div>
<div style="float:left; width: 250px; border:solid 1px black;border-bottom:0;">
Lorem ipsum dolor sit amet, consectetur adipisicing elit,
sed do eiusmod tempor incididunt ut labore et dolore magna
Lorem ipsum dolor sit amet, consectetur adipisicing elit,
...
</div>
<div style="clear:both;" ></div>
</div>
A: It can be done in CSS! Don't let people tell you otherwise.
The easiest, most pain-free way to do it is to use the Faux Columns method.
However, if that solution doesn't work for you, you'll want to read up on this technique. But be warned, this is the kind of CSS hackery that will make you wake up in a cold sweat in the middle of the night.
The gist of it is that you assign a large amount of padding to the bottom of the column, and a negative margin of the same size. Then you place your columns in a container that has overflow: hidden set. More or less the padding/margin values allow the box to keep expanding until it reaches the end of the wrapper (which is determined by the column with the most content), and any extra space generated by the padding is cut off as overflow. It doesn't make much sense, I know...
<div id="wrapper">
<div id="col1">Content</div>
<div id="col2">Longer Content</div>
</div>
#wrapper {
overflow: hidden;
}
#col1, #col2 {
padding-bottom: 9999px;
margin-bottom: -9999px;
}
Be sure to read the entire article I linked to, there are a number of caveats and other implementation issues. It's not a pretty technique, but it works fairly well.
A: You can do it in jQuery really simply, but I am not sure JS should be used for such things. The best way is to do it with pure CSS.
*
*Take a look at faux columns or even Fluid Faux Columns
*Another technique (which doesn't work on the beautiful IE6) is to apply position:relative to the parent container. The child container (the nav list in your case) should be positioned absolute and forced to occupy the whole height with 'top:0; bottom:0;' (a sketch follows this list).
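A minimal sketch of that second technique (the selectors are hypothetical):

#wrapper {              /* parent that defines the shared height */
    position: relative;
}
#content {              /* the tall, in-flow column drives the wrapper's height */
    margin-left: 160px;
}
#nav {                  /* stretched to the parent's full height */
    position: absolute;
    top: 0;
    bottom: 0;
    left: 0;
    width: 150px;
}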
A: Use jQuery for this problem; just call this function in your ready function:
function setHeight(){
var height = $(document).height(); //optionally, subtract some from the height
$("#leftDiv").css("height", height + "px");
}
A: This is one of those perfectly reasonable, simple things that CSS can't do. Faux Columns, as suggested by Silviu, is a hacky but functional workaround.
It would be lovely if someday there was a way to say
div.foo {
height: $(div.blah.height);
}
A: This can be done with CSS using background colors:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
<style type="text/css">
* {
margin: 0;
padding: 0;
}
body {
font-family: Arial, Helvetica, sans-serif;
background: #87ceeb;
font-size: 1.2em;
}
#container {
width:100%; /* any width including 100% will work */
color: inherit;
margin:0 auto; /* remove if 100% width */
background:#FFF;
}
#header {
width: 100%;
height: 160px;
background: #1e90ff;
}
#content {/* use for left sidebar, menu etc. */
background: #99C;
color: #000;
float: right;/* float left for right sidebar */
margin: 0 0 0 -200px; /* adjust margin if borders added */
width: 100%;
}
#content .wrapper {
background: #FFF;
margin: 0 0 0 200px;
overflow: hidden;
padding: 10px; /* optional, feel free to remove */
}
#sidebar {
background: #99C;
color: inherit;
float: left;
width: 180px;
padding: 10px;
}
.clearer {
height: 1px;
font-size: -1px;
clear: both;
}
/* content styles */
#header h1 {
padding: 0 0 0 5px;
}
#menu p {
font-size: 1em;
font-weight: bold;
padding: 5px 0 5px 5px;
}
#footer {
clear: both;
border-top: 1px solid #1e90ff;
border-bottom: 10px solid #1e90ff;
text-align: center;
font-size: 50%;
font-weight: bold;
}
#footer p {
padding: 10px 0;
}
</style>
</head>
<body>
<div id="container">
<!--header and menu content goes here -->
<div id="header">
<h1>Header Goes Here</h1>
</div>
<div id="content">
<div class="wrapper">
<!--main page content goes here -->
<p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Duis ligula lorem, consequat eget, tristique nec, auctor quis, purus. Vivamus ut sem. Fusce aliquam nunc
vitae purus. Aenean viverra malesuada libero. </p>
</div>
</div>
<div id="sidebar">
<!--sidebar content, menu goes here -->
<p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Duis ligula lorem, consequat eget, tristique nec, auctor quis, purus.</p>
</div>
<div class="clearer"></div><!--clears footer from content-->
<!--footer content goes here -->
<div id="footer">
<p>Footer Info Here</p>
</div>
</div>
</body>
</html>
A: @hoyhoy
If a designer can make this work in HTML, then he can have this design. If he is a true master of web design, he will realize that this is a limitation of the medium, just as video is not possible in magazine ads.
If he would like to simulate equal weight by giving the 2 columns equal importance, then change the borders so that they appear to be of the same weight, and make the colors of the borders contrast with the font color of the columns.
But as for making the physical elements the same height, you can only do that with a table construct, or setting the heights, at this point in time. To simulate them appearing the same size, they don't have to be the same size.
A: Come to think of it, I've never done it with a bottom border on the column. It's probably just overflowing, and getting cut off. You might want to have the bottom border come from a separate element that's part of the column content.
Anyway, I know it's not a perfect magic bullet solution. You might just have to play with it, or hack around its shortcomings.
A: There is also a Javascript based solution. If you have jQuery, you can use the below plugin.
<script type="text/javascript">
// plugin
jQuery.fn.equalHeights=function() {
var maxHeight=0;
this.each(function(){
if (this.offsetHeight>maxHeight) {maxHeight=this.offsetHeight;}
});
this.each(function(){
$(this).height(maxHeight + "px");
if (this.offsetHeight>maxHeight) {
$(this).height((maxHeight-(this.offsetHeight-maxHeight))+"px");
}
});
};
// usage
$(function() {
$('.column1, .column2, .column3').equalHeights();
});
</script>
A: I use this to align 2 columns with ID "center" and "right":
var c = $("#center");
var cp = parseInt(c.css("padding-top"), 10) + parseInt(c.css("padding-bottom"), 10) + parseInt(c.css("borderTopWidth"), 10) + parseInt(c.css("borderBottomWidth"), 10);
var r = $("#right");
var rp = parseInt(r.css("padding-top"), 10) + parseInt(r.css("padding-bottom"), 10) + parseInt(r.css("borderTopWidth"), 10) + parseInt(r.css("borderBottomWidth"), 10);
if (c.outerHeight() < r.outerHeight()) {
c.height(r.height () + rp - cp);
} else {
r.height(c.height () + cp - rp);
}
Hope it helps.
A: To grow the left menu div to the same height as the right content div:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Untitled Document</title>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js">
</script>
<script>
$(document).ready(function () {
var height = $(document).height(); //optionally, subtract some from the height
$("#menu").css("height", (height) + "px");
$("#content").css("height", (height) + "px");
});
</script>
<style type="text/css">
<!--
html, body {
font-family: Arial;
font-size: 12px;
}
#header {
background-color: #F9C;
height: 200px;
width: 100%;
float: left;
position: relative;
}
#menu {
background-color: #6CF;
float: left;
min-height: 100%;
height: auto;
width: 10%;
position: relative;
}
#content {
background-color: #6f6;
float: right;
height: auto;
width: 90%;
position: relative;
}
#footer {
background-color: #996;
float: left;
height: 100px;
width: 100%;
position: relative;
}
-->
</style>
</head>
<body>
<div id="header">
i am a header
</div>
<div id="menu">
i am a menu
</div>
<div id="content">
I am an example of how to do layout with css rules and divs.
<p> I am an example of how to do layout with css rules and divs. </p>
<p> I am an example of how to do layout with css rules and divs. </p>
<p> I am an example of how to do layout with css rules and divs. </p>
<p> I am an example of how to do layout with css rules and divs. </p>
<p> I am an example of how to do layout with css rules and divs. </p>
<p> I am an example of how to do layout with css rules and divs. </p>
<p> I am an example of how to do layout with css rules and divs. </p>
</div>
<div id="footer">
footer
</div>
</body>
</html>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Writing cross-platform apps in C What things should be kept most in mind when writing cross-platform applications in C? Targeted platforms: 32-bit Intel based PC, Mac, and Linux. I'm especially looking for the type of versatility that Jungle Disk has in their USB desktop edition ( http://www.jungledisk.com/desktop/download.aspx )
What are tips and "gotchas" for this type of development?
A: Try to avoid platform-dependent #ifdefs, as they tend to grow exponentially when you add new platforms. Instead, try to organize your source files as a tree with platform-independent code at the root, and platform-dependent code on the "leaves". There is a nice book on the subject, Multi-Platform Code Management. Sample code in it may look obsolete, but ideas described in the book are still brilliantly vital.
A: Further to Kyle's answer, I would strongly recommend against trying to use the Posix subsystem in Windows. It's implemented to an absolute bare minimum level such that Microsoft can claim "Posix support" on a feature sheet tick box. Perhaps somebody out there actually uses it, but I've never encountered it in real life.
One can certainly write cross-platform C code, you just have to be aware of the differences between platforms, and test, test, test. Unit tests and a CI (continuous integration) solution will go a long way toward making sure your program works across all your target platforms.
A good approach is to isolate the system-dependent stuff in one or a few modules at most. Provide a system-independent interface from that module. Then build everything else on top of that module, so it doesn't depend on the system you're compiling for.
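As a small, hypothetical illustration of that module approach (the function and file names are made up): the rest of the program only ever includes a neutral header, and the build compiles exactly one implementation file per platform, so no #ifdef forest is needed.

/* sleep_ms.h -- the system-independent interface */
void sleep_ms(unsigned int ms);

/* sleep_ms_posix.c -- compiled only into Linux and Mac builds */
#include "sleep_ms.h"
#include <time.h>
void sleep_ms(unsigned int ms)
{
    struct timespec ts = { ms / 1000, (long)(ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);
}

/* sleep_ms_win32.c -- compiled only into Windows builds */
#include "sleep_ms.h"
#include <windows.h>
void sleep_ms(unsigned int ms)
{
    Sleep(ms);
}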
A: I maintained for a number of years an ANSI C networking library that was ported to close to 30 different OS's and compilers. The library didn't have any GUI components, which made it easier. We ended up abstracting out into dedicated source files any routine that was not consistent across platforms, and used #defines where appropriate in those source files. This kept the code that was adjusted per platform isolated away from the main business logic of the library. We also made extensive use of typedefs and our own dedicated types so that we could easily change them per platform if needed. This made the port to 64-bit platforms fairly easy.
If you are looking to have GUI components, I would suggest looking at GUI toolkits such as WxWindows or Qt (which are both C++ libraries).
A: XVT has a cross-platform GUI C API which has been mature for 15+ years and sits on top of the native windowing toolkits. See WWW.XVT.COM.
They support at least Linux, Windows, and Mac.
A: Try to write as much as you can with POSIX. Mac and Linux support POSIX natively and Windows has a system that can run it (as far as I know - I've never actually used it). If your app is graphical, both Mac and Linux support X11 libraries (Linux natively, Mac through X11.app) and there are numerous ways of getting X11 apps to run on Windows.
However, if you're looking for true multi-platform deployment, you should probably switch to a language like Java or Python that's capable of running the same program on multiple systems with little or no change.
Edit: I just downloaded the application and looked at the files. It does appear to have binaries for all 3 platforms in one directory. If your concern is in how to write apps that can be moved from machine to machine without losing settings, you should probably write all your configuration to a file in the same directory as the executable and not touch the Windows registry or create any dot directories in the home folder of the user that's running the program on Linux or Mac. And as far as creating a cross-distribution Linux binary, 32-bit POSIX/X11 would probably be the safest bet. I'm not sure what JungleDisk uses as I'm currently on a Mac.
A: There do exist quite a few portable libraries; just some examples I've worked with in the past:
1) glib and gtk+
2) libcurl
3) libapr
Those cover nearly every platform, so they are extremely useful tools.
POSIX is fine on Unices, but I doubt it's that great on Windows; besides, it gives us nothing for portable GUIs.
A: I also second the recommendation to separate code for different platforms into different modules/trees instead of ifdefs.
Also, I recommend checking beforehand what the differences between your platforms are and how you could abstract them. E.g. there is OS-related stuff (like the annoying CR, CRLF, LF line endings in text files), and there is hardware stuff. E.g. the previously mentioned POSIX compatibility doesn't stop you from writing:
int c;
fread(&c, sizeof(int), 1, file);
But on different hardware platforms the internal memory layout can be completely different (endianness), forcing you to use conversion functions on some of the target platforms.
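A minimal sketch of such a conversion, assuming the file format stores its integers big-endian (the function name is made up):

#include <stdint.h>
#include <stdio.h>

/* Read a 32-bit big-endian value from the file regardless of the host's byte order. */
int read_be32(FILE *file, uint32_t *out)
{
    unsigned char b[4];
    if (fread(b, 1, 4, file) != 4)
        return -1;                                   /* short read or EOF */
    *out = ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16)
         | ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
    return 0;
}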
A: You can use NAppGUI for both console and desktop apps. The SDK uses ANSI-C and your code will work on Windows/macOS/Linux.
https://www.nappgui.com
It's free and OpenSource.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Decision making in distributed applications With a distributed application, where you have lots of clients and one main server, should you:
*
*Make the clients dumb and the server smart: clients are fast and non-invasive. Business rules are needed in only 1 place
*Make the clients smart and the server dumb: take as much load as possible off of the server
Additional info:
*
*Clients collect tons of data about the computer they are on. The server must analyze all of this info to determine the health of these computers
*The owners of the client computers are temperamental and will shut down the clients if the client starts to consume too many resources (thus negating the purpose of the distributed app in helping diagnose problems)
A: You should do as much client-side processing as possible. This will enable your application to scale better than doing processing server-side. To solve your temperamental user problem, you could look into making your client processes run at a very low priority so there's no noticeable decrease in performance on the part of the user.
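A rough sketch of the low-priority idea for a POSIX client is below; on Windows the closest equivalent would be SetPriorityClass(GetCurrentProcess(), IDLE_PRIORITY_CLASS). The data-collection part is only a placeholder comment.

#include <unistd.h>
#include <errno.h>
#include <stdio.h>

int main(void)
{
    errno = 0;
    /* 19 is the lowest ("nicest") scheduling priority on most Unix systems. */
    if (nice(19) == -1 && errno != 0)
        perror("nice");

    /* ... gather and report machine-health data here ... */
    return 0;
}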
A: In a client-server setting, if you care about security, you should always program on the assumption that the client may have been compromised. Even if it hasn't, there is always the risk of somebody using an old version of the client, using a competing or modified version of the client, or just of the net connection being a bit screwy.
So while you do as much work on the client as possible, processing and marshalling information into the right form, the server then needs to do a thorough sanity check on anything the client gives it.
So the answer I guess is "both".
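To make that concrete, here is a minimal sketch in Python; the field names and limits are invented purely to illustrate the kind of server-side sanity check meant here:
REQUIRED_FIELDS = {"client_id": str, "cpu_load": float, "free_disk_mb": int}

def validate_report(report):
    # Reject anything malformed or implausible before it reaches the analysis code.
    if not isinstance(report, dict):
        return False
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in report or not isinstance(report[field], expected_type):
            return False
    # Range checks: never trust the client to stay within sane bounds.
    if not 0.0 <= report["cpu_load"] <= 100.0:
        return False
    if report["free_disk_mb"] < 0:
        return False
    return True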
A:
The server must analyze all of this
info to determine the health of these
computers
That is probably the biggest clue so far explaining what your application is about. Are you able to provide a more elaborate briefing on what this application is seeking to achieve in this distributed environment? We do not even know whether the client-side processing is disk I/O or processor intensive. How you design the solution depends on the nature of what needs to be done to help the users/business accomplish their jobs and objectives.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I change my Active Sound Card on the Fly? I currently have speakers set up both in my office and in my living room, connected to my PC via two sound cards, and would like to switch the set of speakers I'm outputting to on the fly.
Anyone know an application or a windows API call that I can use to change the default sound output device? It is currently a bit of a pain to traverse the existing control panel system.
A: That topic is covered in depth here Easily Change or Switch the Default Audio Sound Output in Vista or XP. Note that sound management was changed in Vista significantly.
On a side note, I believe SnapStream is/was working on an application to allow multi-channel sound cards to output to different rooms (sets of speakers) simultaneously.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Seeking code highlighter recommendation for WordPress Can anybody recommend a reliable and decently documented code highlighter for WordPress 2.6.1? I have tried Code Snippet by Roman Roan and Developer Formatter by Gilberto Saraiva. But they don't seem to work as described in the documentation and are mangling the code snippets instead of prettifying them.
A: I use WP-Syntax and it's worked very well for me. It's supported every language I've thrown at it so far, and the colors can be customized for a particular theme (though the defaults look just fine too)
A: You should also checkout syntaxhighlighter from Google Code.
A: I use the GeSHi Syntax Highlighter Plugin in my blog and I find it works well. Which highlighter you use tends to depend on which languages you use most frequently and how good the support is for them.
[Edit] I forgot that GeSHi is the highlighter associated with the Wordpress SyntaxHighlighter plugin. :/
A: WP-Syntax uses GeSHi to do the highlighting, the WordPress Syntax Highlighter uses the Javascript SyntaxHighlighter
A: http://wordpress.org/extend/plugins/wp-synhighlight
It uses shortcodes and, unlike some others, also works well in the GUI editor (WP-Syntax has some problems with the GUI editor).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Desktop search utility for pdf,chm and djvu files I want to write a tool that helps me search pdf/chm/djvu files in linux. Any pointers on how to go about it?
The major problem is reading/importing data from all these files. Can this be done with C and shell scripting?
A: Tracker ships with Ubuntu 8.04 -- it was a significant switch from Beagle which users believed was too resource (CPU) intensive and didn't yield good enough results. It indexes both pdf and chm and according to this bug report it also indexes djvu.
A: Note that djvu is an image compression format (optimized to compress 'pictures of text', typically the results of scanning). As such, you won't be able to search for text, except in the metadata (this is what the link sent by cdleary refers to), or if you first use OCR on the document to convert it into text.
The same is true for PDFs which content are scanned articles/books.
A: How about a plugin for Beagle ?
It already searches PDFs but you can add other file types.
Here is the relevant wikipedia page : http://en.wikipedia.org/wiki/Beagle_(software)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: FileUpload control inside an UpdatePanel without refreshing the whole page? According to Microsoft the FileUpload control is not compatible with an AJAX UpdatePanel.
I am aware that a PostBackTrigger can be added to the submit button of the form like this:
<Triggers>
<asp:PostBackTrigger ControlID="Button1" />
</Triggers>
The problem is that this forces the form to perform a full post-back which voids out the whole point of using the UpdatePanel in the first place. Is there a workaround to this issue that does not cause the whole page to refresh?
A: I know of a third party component that can do that. It's called "swfupload" and is free to use and open source, and uses javascript and flash to do the magic.
here is a list of the features they offer:
(from their site)
*
*Upload multiple files at once by ctrl/shift-selecting in dialog
*Javascript callbacks on all events
*Get file information before upload starts
*Style upload elements with XHTML and css
*Display information while files are uploading using HTML
*No page reloads necessary
*Works on all platforms/browsers that have Flash support.
*Degrades gracefully to normal HTML upload form if Flash or javascript is
unavailable
*Control filesize before upload starts
*Only display chosen filetypes in dialog
*Queue uploads, remove/add files before starting upload
They also have a demo area where you can play around with their control. That way you can make sure it is exactly what you want.
We used it in one of our projects and it has never failed us so far, so I think this is a safe bet.
oh and here is the download page: http://code.google.com/p/swfupload/
A: You can't upload file(s) via AJAX; an upload only works by posting and reloading a whole HTML document. You should either use iframes if you prefer pure HTML (this is the more common approach, used e.g. by WordPress) or something else like swfupload, as suggested by Sven.
A: Add this to your button control:
OnClientClick="javascript:document.forms[0].encoding = 'multipart/form-data';"
-or-
Make your page Form tag look like:
<form id="form1" runat="server" enctype="multipart/form-data">
A: I found this the other day when I ran into the same problem: http://vinayakshrestha.wordpress.com/2007/03/13/uploading-files-using-aspnet-ajax-extensions/.
For my implementation, I put the iframe in a modal popup and added a button with style="display:none" to handle the closing of the popup. In the javascript function that watches for the change in the iframe, I added document.getElementById("<%=btnCloseUpload.ClientID%>").click(); for the hidden button.
A: Try the AJAX AsyncFileUpload. It works well if used in the manner it is intended to be used (handle the UploadedComplete event).
http://www.asp.net/AJAX/AjaxControlToolkit/Samples/AsyncFileUpload/AsyncFileUpload.aspx
A: You can use an IFrame whose "target" attribute points to the FileUpload page, or use the jQuery example at this link:
How to make Asynchronous(AJAX) File Upload using iframe?
A: The button that is triggering the upload event needs to have UseSubmitBehavior property set to false:
clsUploadButton.UseSubmitBehavior = false;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Are there any tools to visualize template/class methods and their usage? I have taken over a large code base and would like to get an overview how and where certain classes and their methods are used.
Is there any good tool that can somehow visualize the dependencies and draw a nice call tree or something similar?
The code is in C++ in Visual Studio if that helps narrow down any selection.
A: Here are a few options:
*
*CodeDrawer
*CC-RIDER
*Doxygen
The last one, doxygen, is more of an automatic documentation tool, but it is capable of generating dependency graphs and inheritance diagrams. It's also licensed under the GPL, unlike the first two which are not free.
A: When I have used Doxygen it has produced a full list of callers and callees. I think you have to turn it on.
A: In Java I would start with JDepend. In .NET, with NDepend. Don't know about C++.
A: David, thanks for the suggestions. I spent the weekend trialing the programs.
Doxygen seems to be the most comprehensive of the 3, but it still leaves some things to be desired in regard to callers of methods.
All 3 seem to have problems with C++ templates to varying degrees. CC-Rider simply crashed in the middle of the analysis and CodeDrawer does not show many of the relationships. Doxygen worked pretty well, but it too did not find and show all relations and instead overwhelmed me with lots of macro references until I filtered them out.
So, maybe I should clarify "large codebase" a bit for any further suggestions: >100k lines of code overall, spread out over more than 100 template files plus several actual class files pulling it all together.
Any other tools out there, that might be up to the task and could do better (more thoroughly)? Oh and specifically: anything that understands IDL and COM interfaces?
A:
When I have used Doxygen it has produced a full list of callers and callees. I think you have to turn it on.
I did that of course, but like I mentioned, doxygen does not consider interfaces between objects as they are defined in the IDL. It "only" shows direct C++ calls.
Don't get me wrong, it is already amazing what it does, but it is still not complete from my high level view trying to get a good understanding of how everything fits together.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is it possible to list named events in Windows? I would like to create events for certain resources that are used across various processes and access these events by name. The problem seems to be that the names of the events must be known to all applications referring to them.
Is there maybe a way to get a list of names events in the system?
I am aware that I might use some standard names, but it seems rather inflexible with regard to future extensibility (all applications would require a recompile).
I'm afraid I can't even consider ZwOpenDirectoryObject, because it is described as needing Windows XP or higher, so it is out of the question. Thanks for the suggestion though.
I am a little unsure about shared memory, because I haven't tried it so far. Might do some reading in that area I guess. Configuration files and registry are a slight problem, because they do tend to fail with Vista due to access problems. I am a bit afraid, that shared memory will have the same problem.
The idea with ProcessExplorer sounds promising. Does anyone know an API that could be used for listing events for a process? And, does it work without administrative rights?
Thank you for the clarification.
There is not really a master process. It is more of a driver dll that is used from different processes and the events would be used to "lock" resources used by these processes.
I am thinking about setting up a central service that has sufficient access rights even under Vista. It will certainly complicate things, but it might be the only thing left facing the problems with security.
A: No, there is not any facility to enumerate named events. You could enumerate all objects in the respective object manager directory using ZwOpenDirectoryObject and then filter for events. But this routine is undocumented and therefore should not be used without good reason.
Why not use a separate mechanism to share the event names? You could list them in a configuration file, a registry key or maybe even in shared memory.
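As a minimal sketch of the configuration-file idea (written in Python for brevity; the file name, section name and event names are all made up for illustration), every process would read the agreed event names from one shared file instead of hard-coding them:
import configparser

def load_event_names(path="shared_events.ini"):
    # Every process reads the same file, so new event names can be
    # added later without recompiling any of the applications.
    config = configparser.ConfigParser()
    config.read(path)
    if "events" not in config:
        return {}
    return dict(config["events"])

# Example shared_events.ini:
# [events]
# printer_lock = Global\MyApp_PrinterLock
# db_lock = Global\MyApp_DbLock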
A: ProcessExplorer is able to enumerate all the named events held by some specific process. You could go over the entire process list and do something similar, although I have no clue as to what API is used to get the list...
A: Do not mix up the user mode ZwOpenDirectoryObject with the kernel mode ZwOpenDirectoryObject -- the kernel mode API (http://msdn.microsoft.com/en-us/library/ms800966.aspx) indeed seems to be available as of XP only, but the user mode version should be available at least since NT 4. Anyway, I would not recommend using ZwOpenDirectoryObject.
Why should configuration files and registry keys fail on Vista? Of course, you have to get the security settings right -- but you would have to do that for your named events as well -- so there should not be a big difference here. Maybe you should tell us some more details about the nature of your processes -- do they all run within the same logon session or do they run as different users even? And is there some master process or who creates the events in the first place?
Frankly, I tend to find the Process Explorer idea to be not a very good one. Despite the fact that you probably will not be able to accomplish that without using undocumented APIs and/or a device driver, I do not think that a process should be spelunking around in the handle table of another process just to find out the names of some kernel objects. And, of course, the same security issues apply again.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is Python good for big software projects (not web based)? Right now I'm developing mostly in C/C++, but I wrote some small utilities in Python to automatize some tasks and I really love it as language (especially the productivity).
Except for the performances (a problem that could be sometimes solved thanks to the ease of interfacing Python with C modules), do you think it is proper for production use in the development of stand-alone complex applications (think for example to a word processor or a graphic tool)?
What IDE would you suggest? The IDLE provided with Python is not enough even for small projects in my opinion.
A: Python is considered (among Python programmers :) to be a great language for rapid prototyping. There's not a lot of extraneous syntax getting in the way of your thought processes, so most of the work you do tends to go into the code. (Far fewer idioms are required to write good Python code than to write good C++.)
Given this, most Python (CPython) programmers ascribe to the "premature optimization is the root of all evil" philosophy. By writing high-level (and significantly slower) Python code, one can optimize the bottlenecks out using C/C++ bindings when your application is nearing completion. At this point it becomes more clear what your processor-intensive algorithms are through proper profiling. This way, you write most of the code in a very readable and maintainable manner while allowing for speedups down the road. You'll see several Python library modules written in C for this very reason.
Most graphics libraries in Python (i.e. wxPython) are just Python wrappers around C++ libraries anyway, so you're pretty much writing to a C++ backend.
To address your IDE question, SPE (Stani's Python Editor) is a good IDE that I've used and Eclipse with PyDev gets the job done as well. Both are OSS, so they're free to try!
[Edit] @Marcin: Have you had experience writing > 30k LOC in Python? It's also funny that you should mention Google's scalability concerns, since they're Python's biggest supporters! Also a small organization called NASA also uses Python frequently ;) see "One coder and 17,000 Lines of Code Later".
A: Nothing to add to the other answers, besides that if you choose python you must use something like pylint which nobody mentioned so far.
A: One way to judge what python is used for is to look at what products use python at the moment. This wikipedia page has a long list including various web frameworks, content management systems, version control systems, desktop apps and IDEs.
As it says here - "Some of the largest projects that use Python are the Zope application server, YouTube, and the original BitTorrent client. Large organizations that make use of Python include Google, Yahoo!, CERN and NASA. ITA uses Python for some of its components."
So in short, yes, it is "proper for production use in the development of stand-alone complex applications". So are many other languages, with various pros and cons. Which is the best language for your particular use case is too subjective to answer, so I won't try, but often the answer will be "the one your developers know best".
A: We've used IronPython to build our flagship spreadsheet application (40kloc production code - and it's Python, which IMO means loc per feature is low) at Resolver Systems, so I'd definitely say it's ready for production use of complex apps.
There are two ways in which this might not be a useful answer to you :-)
*
*We're using IronPython, not the more usual CPython. This gives us the huge advantage of being able to use .NET class libraries. I may be setting myself up for flaming here, but I would say that I've never really seen a CPython application that looked "professional" - so having access to the WinForms widget set was a huge win for us. IronPython also gives us the advantage of being able to easily drop into C# if we need a performance boost. (Though to be honest we have never needed to do that. All of our performance problems to date have been because we chose dumb algorithms rather than because the language was slow.) Using C# from IP is much easier than writing a C Extension for CPython.
*We're an Extreme Programming shop, so we write tests before we write code. I would not write production code in a dynamic language without writing the tests first; the lack of a compile step needs to be covered by something, and as other people have pointed out, refactoring without it can be tough. (Greg Hewgill's answer suggests he's had the same problem. On the other hand, I don't think I would write - or especially refactor - production code in any language these days without writing the tests first - but YMMV.)
Re: the IDE - we've been pretty much fine with each person using their favourite text editor; if you prefer something a bit more heavyweight then WingIDE is pretty well-regarded.
A: Refactoring is inevitable on larger codebases and the lack of static typing makes this much harder in python than in statically typed languages.
A: You'll find mostly two answers to that – the religious one (Yes! Of course! It's the best language ever!) and the other religious one (you gotta be kidding me! Python? No... it's not mature enough). I will maybe skip the last religion (Python?! Use Ruby!). The truth, as always, is far from obvious.
Pros: it's easy, readable, batteries included, has lots of good libraries for pretty much everything. It's expressive and dynamic typing makes it more concise in many cases.
Cons: as a dynamic language, has way worse IDE support (proper syntax completion requires static typing, whether explicit in Java or inferred in SML), its object system is far from perfect (interfaces, anyone?) and it is easy to end up with messy code that has methods returning either int or boolean or object or some sort under unknown circumstances.
My take – I love Python for scripting, automation, tiny webapps and other simple, well-defined tasks. In my opinion it is by far the best dynamic language on the planet. That said, I would never use it, or any dynamically typed language, to develop an application of substantial size.
Say – it would be fine to use it for Stack Overflow, which has three developers and I guess no more than 30k lines of code. For bigger things – first your development would be super fast, and then once the team and codebase grow things slow down more than they would with Java or C#. You need to offset the lack of compile-time checks by writing more unit tests, refactorings get harder because you never know what your refactoring broke until you run all the tests or even the whole big app, etc.
Now – decide on how big your team is going to be and how big the app is supposed to be once it is done. If you have 5 or less people and the target size is roughly Stack Overflow, go ahead, write in Python. You will finish in no time and be happy with good codebase. But if you want to write second Google or Yahoo, you will be much better with C# or Java.
Side-note on C/C++ you have mentioned: if you are not writing performance critical software (say massive parallel raytracer that will run for three months rendering a film) or a very mission critical system (say Mars lander that will fly three years straight and has only one chance to land right or you lose $400mln) do not use it. For web apps, most desktop apps, most apps in general it is not a good choice. You will die debugging pointers and memory allocation in complex business logic.
A: In my opinion Python is more than ready for developing complex applications. I see Python's strength more on the server side than in writing graphical clients. But have a look at http://www.resolversystems.com/. They develop a whole spreadsheet in Python using the .NET IronPython port.
If you are familiar with Eclipse, have a look at PyDev, which provides auto-completion and debugging support for Python along with all the other Eclipse goodies like SVN support. The guy developing it has just been hired by Aptana, so this will be a solid choice for the future.
@Marcin
Cons: as a dynamic language, has way
worse IDE support (proper syntax
completion requires static typing,
whether explicit in Java or inferred
in SML),
You are right that static analysis may not provide full syntax completion for dynamic languages, but I think PyDev gets the job done very well. Furthermore, I have a different development style when programming Python. I always have an IPython session open, and with one F5 I get not only perfect completion from IPython, but object introspection and manipulation as well.
But if you want to write second Google
or Yahoo, you will be much better with
C# or Java.
Google just rewrote jaiku to work on top of App Engine, all in python. And as far as I know they use a lot of python inside google too.
A: I really like python, it's usually my language of choice these days for small (non-gui) stuff that I do on my own.
However, for some larger Python projects I've tackled, I'm finding that it's not quite the same as programming in say, C++. I was working on a language parser, and needed to represent an AST in Python. This is certainly within the scope of what Python can do, but I had a bit of trouble with some refactoring. I was changing the representation of my AST and changing methods and classes around a lot, and I found I missed the strong typing that would be available to me in a C++ solution. Python's duck typing was almost too flexible and I found myself adding a lot of assert code to try to check my types as the program ran. And then I couldn't really be sure that everything was properly typed unless I had 100% code coverage testing (which I didn't at the time).
Actually, that's another thing that I miss sometimes. It's possible to write syntactically correct code in Python that simply won't run. The compiler is incapable of telling you about it until it actually executes the code, so in infrequently-used code paths such as error handlers you can easily have unseen bugs lurking around. Even code that's as simple as printing an error message with a % format string can fail at runtime because of mismatched types.
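For example (a trivial made-up case), the following is syntactically valid and only fails when the second call actually runs:
def report_error(code, detail):
    # Syntactically fine, but %d fails at runtime if 'code' is not a number.
    print("error %d: %s" % (code, detail))

report_error(404, "not found")      # works
report_error("404", "not found")    # raises TypeError at runtime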
I haven't used Python for any GUI stuff so I can't comment on that aspect.
A:
And as far as I know they use a lot of python inside google too.
Well, I'd hope so; the maker of Python still works at Google, if I'm not mistaken?
As for the use of Python, I think it's a great language for stand-alone apps. It's heavily used in a lot of Linux programs, and there are a few nice widget sets out there to aid in the development of GUIs.
A: Python is a delight to use. I use it routinely and also write a lot of code for work in C#. There are two drawbacks to writing UI code in Python. One is that there is not a single UI framework that is accepted by the majority of the community. When you write in C#, the .NET runtime and class libraries are all meant to work together. With Python, every UI library has its own semantics, which are often at odds with the pythonic mindset in which you are trying to write your program. I am not blaming the library writers. I've tried several libraries (wxWidgets, PythonWin [a wrapper around MFC], Tkinter), and when doing so I often felt that I was writing code in a language other than Python (despite the fact that it was Python), because the libraries aren't exactly pythonic; they are ports from another language, be it C, C++ or Tk.
So for me I will write UI code in .NET (for me C#) because of the IDE & the consistency of the libraries. But when I can I will write business logic in python because it is more clear and more fun.
A: I know I'm probably stating the obvious, but don't forget that the quality of the development team and their familiarity with the technology will have a major impact on your ability to deliver.
If you have a strong team, then it's probably not an issue if they're familiar. But if you have people who are more 9 to 5'rs who aren't familiar with the technology, they will need more support and you'd need to make a call if the productivity gains are worth whatever the cost of that support is.
A: I had only one python experience, my trash-cli project.
I know that probably some or all of these problems depend on my inexperience with Python.
I found frustrating these things:
*
*the difficulty of finding a good free IDE
*the limited support to automatic refactoring
Moreover:
*the need to introduce two levels of grouping, packages and modules, confuses me.
*it seems to me that there is not a widely adopted code naming convention
*it seems to me that there are some standard library APIs docs that are incomplete
*the fact that some standard libraries are not fully object oriented annoys me
Although some Python coders tell me that they do not have these problems, or they say these are not problems.
A: Try Django or Pylons, write a simple app with both of them and then decide which one suits you best. There are others (like Turbogears or Werkzeug) but those are the most used.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Linux GUI development I have a large GUI project that I'd like to port to Linux.
What is the most recommended framework to utilize for GUI programming in Linux? Are Frameworks such as KDE / Gnome usable for this objective Or is better to use something more generic other than X?
I feel like if I chose one of Gnome or KDE, I'm closing the market out for a chunk of the Linux market who have chosen one over the other. (Yes I know there is overlap)
Is there a better way? Or would I have to create 2 complete GUI apps to have near 100% coverage?
It's not necessary to have a cross-platform solution that will also work on Win32.
A: I recommend wxWidgets or Qt. They are both mature, well-structured and cross-platform, with decent documentation and sample source code.
A: Gnome apps work on KDE desktops and vice versa; you won't be locking anyone out. As far as toolkits go, it's fairly subjective. All of the toolkits are fairly cross-platform. If you're not open source, then GTK+ would be the cheaper option, as Qt is only free for open source use, whereas GTK+ is LGPL.
A: Your best bet may be to port it to a cross-platform widget library such as wxWidgets, which would give you portability to any platform wxWidgets supports.
It's also important to make the distinction between Gnome libraries and GTK, and likewise KDE libraries and Qt. If you write the code to use GTK or Qt, it should work fine for users of any desktop environment, including less popular ones like XFCE. If you use other Gnome or KDE-specific libraries to do non-widget-related tasks, your app would be less portable between desktop environments.
A: Have you thought of using Mono? Programs like Paint.NET work great under Linux & Windows.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: How to embed user-specific data in .NET windows setup app at setup download time? I'd like to have a link in my ASP.NET web site that authenticated users click to download a windows app that is already pre-configured with their client ID and some site config data. My goal is no typing required for the user during the client app install, both for the user friendliness, and to avoid config errors from mis-typed technical bits. Ideally I'd like the web server-side code to run as part of the ASP.NET app.
FogBugz seems to do something like this. There is a menu option within the web app to download a screenshot tool, and when you download and run the installer, it knows your particular FogBugz web address so it can send screenshots there. (Hey Joel, looking for a question to answer? hint—hint)
A: The way the FogBugz screenshot setup tool does this is that it appends a 256 byte block at the end of the setup program at the moment it is downloaded. In other words, the download script spits out all the bytes from setup.exe and then an extra 256 with the url for the FogBugz server, plus any padding.
Windows ignores these extra bytes when the .exe is run (provided you turned off the CRC check for your setup installer - we're using InnoSetup).
After installation, we run the Screenshot program with a command line switch that tells it where the setup installer is. It looks at the end of the setup.exe, finds its info, and then writes that to the registry so the user doesn't have to know it.
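The same trick can be sketched in a few lines of Python (this is not how FogBugz actually implements it; the function names and padding scheme are invented for illustration, and the web framework plumbing is omitted). The download handler pads the per-user data to a fixed size and appends it, and the installed client reads the last 256 bytes back:
BLOCK_SIZE = 256

def serve_installer(base_installer_path, user_config):
    # Returns the installer bytes with a fixed-size config block appended.
    with open(base_installer_path, "rb") as f:
        data = f.read()
    block = user_config.encode("utf-8")
    if len(block) > BLOCK_SIZE:
        raise ValueError("config does not fit in the trailing block")
    return data + block.ljust(BLOCK_SIZE, b"\0")

def read_trailing_config(installer_path):
    # Run by the installed client: read the last 256 bytes back.
    with open(installer_path, "rb") as f:
        f.seek(-BLOCK_SIZE, 2)          # 2 = seek relative to end of file
        return f.read(BLOCK_SIZE).rstrip(b"\0").decode("utf-8")
As the answer notes, the installer's own integrity check has to tolerate the extra bytes for this to work.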
A: If it helps RegexBuddy does this also.
A: Does the information need to be secure? If not, ClickOnce can use URL-based parameters. Here's an article about that on MSDN.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Formatting tabular data using unicode characters I need to produce a calculation trace file containing tabular data showing intermediate results. I am currently using a combination of the standard ascii pipe symbols (|) and dashes (-) to draw the table lines:
E.g.
Numerator | Denominator | Result
----------|-------------|-------
6 | 2 | 3
10 | 5 | 2
Are there any unicode characters that could be used to produce a more professional looking table?
(The file must be a raw text format and cannot use HTML or any other markup)
Edit: I've added an example of what the table now looks like having taken the suggestion on board and used the unicode box drawing characters:
Numerator │ Denominator │ Result
──────────┼─────────────┼───────
6 │ 2 │ 3
10 │ 5 │ 2
A: There are Unicode box drawing characters (look for Box Drawing under Geometrical Symbols - the chart itself is a PDF). I don't have any idea how widely supported those characters are, though.
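For example, a small Python sketch (purely illustrative, not part of the original answer) that prints a table like the one in the question using the light box-drawing characters:
def render_table(headers, rows):
    cols = list(zip(*([headers] + rows)))
    widths = [max(len(str(c)) for c in col) for col in cols]

    def fmt(cells):
        # U+2502 is the vertical bar between columns.
        return "\u2502".join(" %s " % str(c).ljust(w) for c, w in zip(cells, widths))

    # U+2500 is the horizontal rule, U+253C the cross where rules meet.
    rule = "\u253c".join("\u2500" * (w + 2) for w in widths)
    return "\n".join([fmt(headers), rule] + [fmt(r) for r in rows])

print(render_table(["Numerator", "Denominator", "Result"],
                   [[6, 2, 3], [10, 5, 2]]))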
A: Your table is getting help from the monospaced font triggered by the code tags here. Proportional fonts can prevent tabular alignment of the digits. Unicode has digits that retain tabular alignment regardless of fonts in the Mathematical Alphanumeric Symbols from 1D7CE-1D7FF like these 𝟶𝟷𝟸𝟹𝟺𝟻𝟼𝟽𝟾𝟿
𝟶 𝟷 𝟸 𝟹 𝟺 𝟻 𝟼 𝟽 𝟾 𝟿
A: You should look at this Javascript Box Drawing Demo. This is a JavaScript Unicode box drawing tool whose purpose is to make it easy for users to draw Unicode box art in HTML textareas. There you will see how to draw boxes using the arrow keys.
*
*First you should select a style other than "Off".
*Then using the arrow keys move around and you will see the box being drawn as you type
*Once you are satisfied with the look of your drawing, simply copy it from the box and paste it into your HTML code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: XML serialization in Java? What is the Java analogue of .NET's XML serialization?
A: 2008 Answer
The "Official" Java API for this is now JAXB - Java API for XML Binding. See Tutorial by Oracle. The reference implementation lives at http://jaxb.java.net/
2018 Update
Note that the Java EE and CORBA Modules are deprecated in SE in JDK9 and to be removed from SE in JDK11. Therefore, to use JAXB it will either need to be in your existing enterprise class environment bundled by your e.g. app server, or you will need to bring it in manually.
A: Worth mentioning that since version 1.4, Java had the classes java.beans.XMLEncoder and java.beans.XMLDecoder. These classes perform XML encoding which is at least very comparable to XML Serialization and in some circumstances might do the trick for you.
If your class sticks to the JavaBeans specification for its getters and setters, this method is straightforward to use and you don't need a schema. With the following caveats:
*
*As with normal Java serialization
*
*coding and decoding run over a InputStream and OutputStream
*the process uses the familiar writeObject and readObject methods
*In contrast to normal Java serialization
*
*both encoding and decoding cause constructors and initializers to be invoked
*encoding and decoding work regardless of whether your class implements Serializable or not
*transient modifiers are not taken into account
*works only for public classes that have public constructors
For example, take the following declaration:
public class NPair {
    public NPair() { }
    int number1 = 0;
    int number2 = 0;
    public void setNumber1(int value) { number1 = value; }
    public int getNumber1() { return number1; }
    public void setNumber2(int value) { number2 = value; }
    public int getNumber2() { return number2; }
}
Executing this code:
NPair fe = new NPair();
fe.setNumber1(12);
fe.setNumber2(13);
FileOutputStream fos1 = new FileOutputStream("d:\\ser.xml");
java.beans.XMLEncoder xe1 = new java.beans.XMLEncoder(fos1);
xe1.writeObject(fe);
xe1.close();
Would result in the following file:
<?xml version="1.0" encoding="UTF-8"?>
<java version="1.7.0_02" class="java.beans.XMLDecoder">
    <object class="NPair">
        <void property="number1">
            <int>12</int>
        </void>
        <void property="number2">
            <int>13</int>
        </void>
    </object>
</java>
A: XStream is pretty good at serializing objects to XML without much configuration and money! (it's under a BSD license).
We used it in one of our project to replace the plain old java-serialization and it worked almost out of the box.
A: XMLBeans works great if you have a schema for your XML. It creates Java objects for the schema and creates easy to use parse methods.
A: "Simple XML Serialization" Project
You may want to look at the Simple XML Serialization project. It is the closest thing I've found to the System.Xml.Serialization in .Net.
A: JAXB is part of JDK standard edition version 1.6+. So it is FREE and no extra libraries to download and manage.
A simple example can be found here
XStream seems to be dead. Last update was on Dec 6 2008.
Simple seems as easy as JAXB, if not simpler, but I could not find any licensing information to evaluate it for enterprise use.
A: If you're talking about automatic XML serialization of objects, check out Castor:
Castor is an Open Source data binding framework for Java[tm]. It's the shortest path between Java objects, XML documents and relational tables. Castor provides Java-to-XML binding, Java-to-SQL persistence, and more.
A: Usually I use JAXB or XMLBeans if I need to create objects serializable to XML. Now I can see that XStream might be very useful, as it's nonintrusive and has a really simple API. I'll play with it soon and probably use it. The only drawback I noticed is that I can't create an object's id on my own for cross-referencing.
@Barak Schiller
Thanks for posting link to XStream!
A: Don't forget JiBX.
A: If you want a structured solution (like an ORM), then JAXB2 is a good choice.
If you want a serialization like DOT NET then you could use Long Term Persistence of JavaBeans Components
The choice depends on use of serialization.
A: public static String genXmlTag(String tagName, String innerXml, String properties)
{
    return String.format("<%s %s>%s</%s>", tagName, properties, innerXml, tagName);
}

public static String genXmlTag(String tagName, String innerXml)
{
    return genXmlTag(tagName, innerXml, "");
}

public static <T> String serializeXML(List<T> list)
{
    String result = "";
    if (list.size() > 0)
    {
        T tmp = list.get(0);
        String clsName = tmp.getClass().getName();
        String[] splitCls = clsName.split("\\.");
        clsName = splitCls[splitCls.length - 1];
        Field[] fields = tmp.getClass().getFields();

        for (T t : list)
        {
            String row = "";
            try {
                for (Field f : fields)
                {
                    Object value = f.get(t);
                    row += genXmlTag(f.getName(), value == null ? "" : value.toString());
                }
            } catch (IllegalAccessException e) {
                e.printStackTrace();
            }
            row = genXmlTag(clsName, row);
            result += row;
        }
    }
    result = genXmlTag("root", result);
    return result;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "107"
} |
Q: Techniques to detect Polymorphic and Metamorphic viruses? What techniques can be applied to detect Polymorphic and Metamorphic viruses?
How difficult is to implement these techniques?
Are these techniques being applied in modern day anti-virus softwares?
A: I thought most of the virus scanners nowadays use sandbox techniques to check for "bad" behavior, so the polymorphic viruses will also be detected.
Of course these detection techniques are also known to virus creators, and can easily be bypassed using a bunch of random, harmless code executions before the actual payload.
A: It's impossible to detect all known poly/metamorphic bad code. Whitelist verification is the only provable technique. It's not always possible, especially if your infrastructure/computer has not been maintained very well. Which is a good reason why signature, heuristic, and emulation based detection is still valuable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Why is my instance variable not in __dict__? If I create a class A as follows:
class A:
    def __init__(self):
        self.name = 'A'
Inspecting the __dict__ member looks like {'name': 'A'}
If however I create a class B:
class B:
    name = 'B'
__dict__ is empty.
What is the difference between the two, and why doesn't name show up in B's __dict__?
A: B.name is a class attribute, not an instance attribute. It shows up in B.__dict__, but not in b = B(); b.__dict__.
The distinction is obscured somewhat because when you access an attribute on an instance, the class dict is a fallback. So in the above example, b.name will give you the value of B.name.
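A quick interactive check (illustrative) makes the fallback and the shadowing behaviour visible:
class B:
    name = 'B'

b = B()
print(B.__dict__['name'])   # 'B'  -- lives in the class dict
print(b.__dict__)           # {}   -- nothing stored on the instance yet
print(b.name)               # 'B'  -- instance lookup falls back to the class

b.name = 'mine'             # now create a real instance attribute
print(b.__dict__)           # {'name': 'mine'} -- shadows the class attribute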
A: class A:
       def __init__(self):
           self.name = 'A'

a = A()
Creates an attribute on the object instance a of type A and it can therefore be found in: a.__dict__
class B:
    name = 'B'

b = B()
Creates an attribute on the class B and the attribute can be found in B.__dict__ alternatively if you have an instance b of type B you can see the class level attributes in b.__class__.__dict__
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: How to escape os.system() calls? When using os.system() it's often necessary to escape filenames and other arguments passed as parameters to commands. How can I do this? Preferably something that would work on multiple operating systems/shells but in particular for bash.
I'm currently doing the following, but am sure there must be a library function for this, or at least a more elegant/robust/efficient option:
import os

def sh_escape(s):
    return s.replace("(", "\\(").replace(")", "\\)").replace(" ", "\\ ")

os.system("cat %s | grep something | sort > %s"
          % (sh_escape(in_filename),
             sh_escape(out_filename)))
Edit: I've accepted the simple answer of using quotes, don't know why I didn't think of that; I guess because I came from Windows where ' and " behave a little differently.
Regarding security, I understand the concern, but, in this case, I'm interested in a quick and easy solution which os.system() provides, and the source of the strings is either not user-generated or at least entered by a trusted user (me).
A: This is what I use:
def shellquote(s):
    return "'" + s.replace("'", "'\\''") + "'"
The shell will always accept a quoted filename and remove the surrounding quotes before passing it to the program in question. Notably, this avoids problems with filenames that contain spaces or any other kind of nasty shell metacharacter.
Update: If you are using Python 3.3 or later, use shlex.quote instead of rolling your own.
A: Perhaps you have a specific reason for using os.system(). But if not you should probably be using the subprocess module. You can specify the pipes directly and avoid using the shell.
The following is from PEP324:
Replacing shell pipe line
-------------------------
output=`dmesg | grep hda`
==>
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
A: Note that pipes.quote is actually broken in Python 2.5 and Python 3.1 and not safe to use--It doesn't handle zero-length arguments.
>>> from pipes import quote
>>> args = ['arg1', '', 'arg3']
>>> print 'mycommand %s' % (' '.join(quote(arg) for arg in args))
mycommand arg1 arg3
See Python issue 7476; it has been fixed in Python 2.6 and 3.2 and newer.
A: I believe that os.system just invokes whatever command shell is configured for the user, so I don't think you can do it in a platform independent way. My command shell could be anything from bash, emacs, ruby, or even quake3. Some of these programs aren't expecting the kind of arguments you are passing to them and even if they did there is no guarantee they do their escaping the same way.
A: Notice: This is an answer for Python 2.7.x.
According to the source, pipes.quote() is a way to "Reliably quote a string as a single argument for /bin/sh". (Although it is deprecated since version 2.7 and finally exposed publicly in Python 3.3 as the shlex.quote() function.)
On the other hand, subprocess.list2cmdline() is a way to "Translate a sequence of arguments into a command line string, using the same rules as the MS C runtime".
Here we are, the platform independent way of quoting strings for command lines.
import sys

mswindows = (sys.platform == "win32")

if mswindows:
    from subprocess import list2cmdline
    quote_args = list2cmdline
else:
    # POSIX
    from pipes import quote

    def quote_args(seq):
        return ' '.join(quote(arg) for arg in seq)
Usage:
# Quote a single argument
print quote_args(['my argument'])
# Quote multiple arguments
my_args = ['This', 'is', 'my arguments']
print quote_args(my_args)
A: The function I use is:
def quote_argument(argument):
    return '"%s"' % (
        argument
        .replace('\\', '\\\\')
        .replace('"', '\\"')
        .replace('$', '\\$')
        .replace('`', '\\`')
    )
that is: I always enclose the argument in double quotes, and then backslash-escape the only characters that are special inside double quotes.
A: shlex.quote() does what you want since python 3.
(Use pipes.quote to support both python 2 and python 3,
though note that pipes has been deprecated since 3.10
and slated for removal in 3.13)
A: Maybe subprocess.list2cmdline is a better shot?
A: On UNIX shells like Bash, you can use shlex.quote in Python 3 to escape special characters that the shell might interpret, like whitespace and the * character:
import os
import shlex
os.system("rm " + shlex.quote(filename))
However, this is not enough for security purposes! You still need to be careful that the command argument is not interpreted in unintended ways. For example, what if the filename is actually a path like ../../etc/passwd? Running os.system("rm " + shlex.quote(filename)) might delete /etc/passwd when you only expected it to delete filenames found in the current directory! The issue here isn't with the shell interpreting special characters, it's that the filename argument isn't interpreted by the rm as a simple filename, it's actually interpreted as a path.
Or what if the valid filename starts with a dash, for example, -f? It's not enough to merely pass the escaped filename, you need to disable options using -- or you need to pass a path that doesn't begin with a dash like ./-f. The issue here isn't with the shell interpreting special characters, it's that the rm command interprets the argument as a filename or a path or an option if it begins with a dash.
Here is a safer implementation:
if os.sep in filename:
    raise Exception("Did not expect to find file path separator in file name")

os.system("rm -- " + shlex.quote(filename))
A: I think these answers are a bad idea for escaping command-line arguments on Windows. Based on the results: people are trying to apply a black-list approach to filtering 'bad' characters, assuming (and hoping) they got them all. Windows is very complex and there could be all manner of characters found in the future that might allow an attacker to hijack command line arguments.
I've already seen some answers neglect to filter basic meta-characters in Windows (like the semi-colon.) The approach I take is far simpler:
*
*Make a list of allowed ASCII characters.
*Remove all chars that aren't in that list.
*Escape slashes and double-quotes.
*Surround entire command with double quotes so the command argument cannot be maliciously broken and commandeered with spaces.
A basic example:
def win_arg_escape(arg, allow_vars=0):
    allowed_list = """'"/\\abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_-. """
    if allow_vars:
        allowed_list += "~%$"

    # Filter out anything that isn't a
    # standard character.
    buf = ""
    for ch in arg:
        if ch in allowed_list:
            buf += ch

    # Escape all slashes.
    buf = buf.replace("\\", "\\\\")

    # Escape double quotes.
    buf = buf.replace('"', '""')

    # Surround entire arg with quotes.
    # This avoids spaces breaking a command.
    buf = '"%s"' % (buf)

    return buf
The function has an option to enable the use of environment variables and other shell variables. Enabling this poses more risk, so it's disabled by default.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "142"
} |
Q: Get a list of current windows, and give one of them focus, in .Net Without resorting to PInvoke, is there a way in .net to find out what windows are open? This is slightly different than asking what applications are running in memory. For example, Firefox could be running, but could be more than one window. Basically, I just want to be privy to the same information that the taskbar (and alt-tab?) is.
Also, once I have a reference to a window, is there any way to programatically give it focus?
Is there any way to do this with managed code?
A: You could check out the new UI Automation stuff in .NET 3.5. It is supposed to mask a whole lot of the PInovke stuff and works with web and WPF applications.
I haven't used it yet, so I don't have a more specific place to direct you, but it might fit the bill.
A: Check out this LGPL project. I know it can set the foreground window. Otherwise aku is correct; it'll most likely require some PInvoke calls.
http://mwinapi.sourceforge.net/
If you need information on pinvoke use:
http://www.pinvoke.net/
A: I'm afraid there is no way you can do it without PInvoke. To give focus to some window you should call SetForegroundWindow function, see this article for details.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What is the Difference Between Mercurial and Git? I've been using git for some time now on Windows (with msysGit) and I like the idea of distributed source control. Just recently I've been looking at Mercurial (hg) and it looks interesting. However, I can't wrap my head around the differences between hg and git.
Has anyone made a side-by-side comparison between git and hg? I'm interested to know what differs hg and git without having to jump into a fanboy discussion.
A: There is a dynamic comparison chart over at the versioncontrolblog where you can compare several different version control systems.
Here is a comparison table between git, hg and bzr.
A: Git is a platform, Mercurial is “just” an application. Git is a versioned filesystem platform that happens to ship with a DVCS app in the box, but as normal for platform apps, it is more complex and has rougher edges than focused apps do. But this also means git’s VCS is immensely flexible, and there is a huge depth of non-source-control things you can do with git.
That is the essence of the difference.
Git is best understood from the ground up – from the repository format up. Scott Chacon’s Git Talk is an excellent primer for this. If you try to use git without knowing what’s happening under the hood, you’ll end up confused at some point (unless you stick to only very basic functionality). This may sound stupid when all you want is a DVCS for your daily programming routine, but the genius of git is that the repository format is actually very simple and you can understand git’s entire operation quite easily.
For some more technicality-oriented comparisons, the best articles I have personally seen are Dustin Sallings’:
*
*The Differences Between Mercurial and Git
*Reddit thread where git-experienced Dustin answers his own git neophyte questions
He has actually used both DVCSs extensively and understands them both well – and ended up preferring git.
A: There are quite significant differences when it comes to working with branches (especially short-term ones).
It is explained in this article (BranchingExplained) which compares Mercurial with Git.
A: Are there any Windows-based collaborators on your project?
Because if there are, the Git-for-Windows GUI seems awkward, difficult, unfriendly.
Mercurial-on-Windows, by contrast, is a no-brainer.
A: One thing to notice when comparing Mercurial on bitbucket.org with Git on GitHub is that Bitbucket gives you as many private repositories as you want, while on GitHub you have to upgrade to a paid account. That's why I go for Bitbucket, which uses Mercurial.
A: The big difference is on Windows. Mercurial is supported natively, Git isn't. You can get very similar hosting to github.com with bitbucket.org (actually even better as you get a free private repository). I was using msysGit for a while but moved to Mercurial and been very happy with it.
A: Sometime last year I evaluated both git and hg for my own use, and decided to go with hg. I felt it looked like a cleaner solution, and worked better on more platforms at the time. It was mostly a toss-up, though.
More recently, I started using git because of git-svn and the ability to act as a Subversion client. This won me over and I've now switched completely to git. I think it's got a slightly higher learning curve (especially if you need to poke around the insides), but it really is a great system. I'm going to go read those two comparison articles that John posted now.
A: I'm currently in the process of migrating from SVN to a DVCS (while blogging about my findings, my first real blogging effort...), and I've done a bit of research (=googling). As far as I can see you can do most things with both packages. It seems like Git has a few more, or better implemented, advanced features.
I do feel that the integration with windows is a bit better for mercurial, with TortoiseHg. I know there's Git Cheetah as well (I tried both), but the mercurial solution just feels more robust.
Seeing how they're both open-source (right?) I don't think either will be lacking important features. If something is important, people will ask for it, people will code it.
I think that for common practices, Git and Mercurial are more than sufficient. They both have big projects that use them (Git -> linux kernel, Mercurial -> Mozilla foundation projects, both among others of course), so I don't think either are really lacking something.
That being said, I am interested in what other people say about this, as it would make a great source for my blogging efforts ;-)
A: There is a great and exhaustive comparison tables and charts on git, Mercurial and Bazaar over at InfoQ's guide about DVCS.
A: If you are a Windows developer looking for basic disconnected revision control, go with Hg. I found Git to be incomprehensible while Hg was simple and well integrated with the Windows shell. I downloaded Hg and followed this tutorial (hginit.com) - ten minutes later I had a local repo and was back to work on my project.
A: These articles may help:
*
*Git vs. Mercurial: Please Relax (Git is MacGyver and Mercurial is James Bond)
*The Differences Between Mercurial and Git
Edit: Comparing Git and Mercurial to celebrities seems to be a trend. Here's one more:
*
*Git is Wesley Snipes, Mercurial is Denzel Washington
A: I realize this isn't a part of the answer, but on that note, I also think the availability of stable plugins for platforms like NetBeans and Eclipse play a part in which tool is a better fit for the task, or rather, which tool is the best fit for "you". That is, unless you really want to do it the CLI-way.
Both Eclipse (and everything based on it) and NetBeans sometimes have issues with remote file systems (such as SSH) and external updates of files; which is yet another reason why you want whatever you choose to work "seamlessly".
I'm trying to answer this question for myself right now too .. and I've boiled down the candidates to Git or Mercurial .. thank you all for providing useful inputs on this topic without going religious.
A: I work on Mercurial, but fundamentally I believe both systems are equivalent. They both work with the same abstractions: a series of snapshots (changesets) which make up the history. Each changeset knows where it came from (the parent changeset) and can have many child changesets. The recent hg-git extension provides a two-way bridge between Mercurial and Git and sort of shows this point.
Git has a strong focus on mutating this history graph (with all the consequences that entails) whereas Mercurial does not encourage history rewriting, but it's easy to do anyway and the consequences of doing so are exactly what you should expect them to be (that is, if I modify a changeset you already have, your client will see it as new if you pull from me). So Mercurial has a bias towards non-destructive commands.
As for light-weight branches, then Mercurial has supported repositories with multiple branches since..., always I think. Git repositories with multiple branches are exactly that: multiple diverged strands of development in a single repository. Git then adds names to these strands and allow you to query these names remotely. The Bookmarks extension for Mercurial adds local names, and with Mercurial 1.6, you can move these bookmarks around when you push/pull..
I use Linux, but apparently TortoiseHg is faster and better than the Git equivalent on Windows (due to better usage of the poor Windows filesystem). Both http://github.com and http://bitbucket.org provide online hosting, the service at Bitbucket is great and responsive (I haven't tried github).
I chose Mercurial since it feels clean and elegant -- I was put off by the shell/Perl/Ruby scripts I got with Git. Try taking a peek at the git-instaweb.sh file if you want to know what I mean: it is a shell script which generates a Ruby script, which I think runs a webserver. The shell script generates another shell script to launch the first Ruby script. There is also a bit of Perl, for good measure.
I like the blog post that compares Mercurial and Git with James Bond and MacGyver -- Mercurial is somehow more low-key than Git. It seems to me, that people using Mercurial are not so easily impressed. This is reflected in how each system do what Linus described as "the coolest merge EVER!". In Git you can merge with an unrelated repository by doing:
git fetch <project-to-union-merge>
GIT_INDEX_FILE=.git/tmp-index git-read-tree FETCH_HEAD
GIT_INDEX_FILE=.git/tmp-index git-checkout-cache -a -u
git-update-cache --add -- (GIT_INDEX_FILE=.git/tmp-index git-ls-files)
cp .git/FETCH_HEAD .git/MERGE_HEAD
git commit
Those commands look quite arcane to my eye. In Mercurial we do:
hg pull --force <project-to-union-merge>
hg merge
hg commit
Notice how the Mercurial commands are plain and not special at all -- the only unusual thing is the --force flag to hg pull, which is needed since Mercurial will otherwise abort when you pull from an unrelated repository. It is differences like this that make Mercurial seem more elegant to me.
A: I think the best description about "Mercurial vs. Git" is:
"Git is Wesley Snipes. Mercurial is Denzel Washington"
A: They are almost identical.
The most important difference, from my point of view (I mean, the reason that got me to choose one DVCS over the other) is how the two programs manage branches.
To start a new branch, with Mercurial, you simply clone the repository to another directory and start developing. Then, you pull and merge.
With git, you have to explicitly give a name to the new topic branch you want to use, then you start coding using the same directory.
In short, each branch in Mercurial needs its own directory; in git you usually work in a single directory.
Switching branches in Mercurial means changing directories; in git, it means asking git to change the directory's content with git checkout.
To be honest, I don't know if it's possible to do the same with Mercurial, but since I usually work on web projects, always using the same directory with git seems much more comfortable to me: I don't have to re-configure and restart Apache, and I don't mess up my filesystem every time I branch.
Edit: As Deestan noted, Hg has named branches, which can be stored in a single repository and allow the developer to switch branches within the same working copy. Mercurial named branches are not exactly the same as git branches, though: they are permanent, not throw-away branches like git's. That means that if you use a named branch for an experimental task, it will be stored in the repository even if you decide never to merge it. That's the reason why Hg encourages using clones for experimental, short-running tasks and named branches for long-running tasks, like release branches.
The reason why a lot of Hg users prefer clones over named branches is much more social or cultural than technical. For example, with recent versions of Hg, it's even possible to close a named branch and recursively remove metadata from changesets.
On the other hand, git invites you to use "named branches", which are not permanent and are not stored as metadata on each changeset.
From my personal point of view, then, git's model is deeply linked to the concept of named branches and switching between branches within the same directory; hg can do the same with named branches, but it encourages the use of clones, which I personally don't like too much.
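To make the comparison concrete, here is roughly what the two styles look like on the command line (the branch names are made up):
# Mercurial named branch: recorded permanently in each changeset
hg branch my-experiment
hg commit -m "start experiment"
hg update default

# Mercurial clone-as-branch: a separate, throw-away working directory
hg clone project project-experiment

# git topic branch: a movable name inside the same working directory
git checkout -b my-experiment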
A: Yet another interesting comparison of mercurial and git: Mercurial vs Git.
Main focus is on internals and their influence on branching process.
A: If you are interested in a performance comparison of Mercurial and Git have a look at this article. The conclusion is:
Git and Mercurial both turn in good numbers but make an interesting trade-off between speed and repository size. Mercurial is fast with both adds and modifications, and keeps repository growth under control at the same time. Git is also fast, but its repository grows very quickly with modified files until you repack — and those repacks can be very slow. But the packed repository is much smaller than Mercurial's.
A: The mercurial website has a great description of the similarities and differences between the two systems, explaining the differences of vocabulary and underlying concepts. As a long time git user, it really helped my understand the Mercurial mindset.
A: If you are migrating from SVN, use Mercurial as its syntax is MUCH more understandable for SVN users. Other than that, you can't go wrong with either. But do check GIT tutorial and HGinit before selecting one of them.
A: There's one huge difference between git and mercurial: the way they represent each commit. git represents commits as snapshots, while mercurial represents them as diffs.
What does this mean in practice? Well, many operations are faster in git, such as switching to another commit, comparing commits, etc. -- especially if those commits are far apart.
AFAIK there's no advantage to mercurial's approach.
A: Nothing. They both do the same, both perform about equally. The only reason you should choose one over the other is if you help out with a project that already uses one..
The other possible reason for choosing one is an application or service which only supports one of the system.. For example, I pretty much chose to learn git because of github..
A: Also google's comparison (though it's a bit old, done in 2008)
http://code.google.com/p/support/wiki/DVCSAnalysis
A: If I understand them correctly (and I'm far from an expert on each) they fundamentally each have a different philosophy. I first used mercurial for 9 months. Now I've used git for 6.
hg is version control software. Its main goal is to track versions of a piece of software.
git is a time based file system. Its goal is to add another dimension to a file system. Most have files and folders; git adds time. That it happens to work so well as a VCS is a byproduct of its design.
In hg, there's a history of the entire project that it's always trying to maintain. By default I believe hg wants all changes to all objects by all users when pushing and pulling.
In git there's just a pool of objects and these tracking files (branches/heads) that determine which set of those objects represent the tree of files in a particular state. When pushing or pulling, git only sends the objects needed for the particular branches you are pushing or pulling, which is a small subset of all objects.
As far as git is concerned there is no "1 project". You could have 50 projects all in the same repo and git wouldn't care. Each one could be managed separately in the same repo and live fine.
Hg's concept of branches is branches off the main project or branches off branches etc. Git has no such concept. A branch in git is just a state of the tree, everything is a branch in git. Which branch is official, current, or newest has no meaning in git.
I don't know if that made any sense. If I could draw pictures, hg might look like this, where each commit is an o:
      o---o---o
     /
o---o---o---o---o---o---o---o
         \         /
          o---o---o
A tree with a single root and branches coming off of it. While git can do that, and people often use it that way, that's not enforced. A git picture, if there is such a thing, could easily look like this:
o---o---o---o---o

o---o---o---o
         \
          o---o

o---o---o---o
In fact in some ways it doesn't even make sense to show branches in git.
One thing that is very confusing in this discussion: git and mercurial both have something called a "branch", but they are not remotely the same thing. A branch in mercurial comes about when there are conflicts between different repos. A branch in git is apparently similar to a clone in hg. But a clone, while it might give similar behavior, is most definitely not the same. Consider me trying these in git vs hg using the chromium repo, which is rather large.
$ time git checkout -b some-new-branch
Switched to new branch 'some-new-branch'
real 0m1.759s
user 0m1.596s
sys 0m0.144s
And now in hg using clone
$ time hg clone project/ some-clone/
updating to branch default
29387 files updated, 0 files merged, 0 files removed, 0 files unresolved.
real 0m58.196s
user 0m19.901s
sys 0m8.957
Both of those are hot runs. I.e., I ran them twice and this is the second run. hg clone is actually the same as git-new-workdir. Both of those make an entirely new working dir, almost as though you had typed cp -r project project-clone. That's not the same as making a new branch in git. It's far more heavyweight. If there is a true equivalent of git's branching in hg, I don't know what it is.
I understand at some level hg and git might be able to do similar things. If so, then there is still a huge difference in the workflow they lead you to. In git, the typical workflow is to create a branch for every feature.
git checkout master
git checkout -b add-2nd-joypad-support
git checkout master
git checkout -b fix-game-save-bug
git checkout master
git checkout -b add-a-star-support
That just created 3 branches, each based off a branch called master.
(I'm sure there's some way in git to make those 1 line each instead of 2)
Now to work on one I just do
git checkout fix-game-save-bug
and start working. Commit things, etc. Switching between branches even in a project as big as chrome is nearly instantaneous. I actually don't know how to do that in hg. It's not part of any tutorials I've read.
One other big difference. Git's stage.
Git has this idea of a stage. You can think of it as a hidden folder. When you commit you only commit what's on the stage, not the changes in your working tree. That might sound strange. If you want to commit all the changes in your working tree you'd do git commit -a which adds all the modified files to the stage and then commits them.
What's the point of the stage then? You can easily separate your commits. Imagine you edited joypad.cpp and gamesave.cpp and you want to commit them separately
git add joypad.cpp // copies to stage
git commit -m "added 2nd joypad support"
git add gamesave.cpp // copies to stage
git commit -m "fixed game save bug"
Git even has commands to decide which particular lines in the same file you want to copy to the stage, so you can split up those commits separately as well. Why would you want to do that? Because as separate commits, others can pull only the ones they want, or if there was an issue they can revert just the commit that had the issue.
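For example, that hunk-level staging is done with git add -p (or --patch):
git add -p joypad.cpp      # interactively choose which hunks go to the stage
git commit -m "added part of 2nd joypad support"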
A: This link may help you to understand the difference
http://www.techtatva.com/2010/09/git-mercurial-and-bazaar-a-comparison/
A: Some people think that VCS systems have to be complicated. They encourage inventing terms and concepts in the field. They would probably think that numerous PhDs on the subject would be interesting. Among those are probably the ones that designed Git.
Mercurial is designed with a different mentality. Developers should not care much about VCS, and they should instead spend their time on their main function: software engineering. Mercurial allows users to use and happily abuse the system without letting them make any non-recoverable mistakes.
Any professional tool must come with a clearly designed and intuitive CLI. Mercurial users can do most of the work by issuing simple commands without any strange options. In Git, double dashes and crazy options are the norm. Mercurial has a substantial advantage if you are a CLI person (and to be honest, any self-respecting Software Engineer should be).
To give an example, suppose you do a commit by mistake. You forgot to edit some files. To undo your action in Mercurial you simply type:
$ hg rollback
You then get a message that the system undoes your last transaction.
In Git you have to type:
$ git reset --soft HEAD^
So ok, suppose you have an idea what reset is about. But in addition you have to know what "--soft" and "--hard" resets are (any intuitive guesses?). Oh, and of course don't forget the '^' character at the end! (now what in Ritchie's name is that...)
Mercurial's integration with 3rd party tools like kdiff3 and meld is much better as well. Generate your patches and merge your branches without much fuss. Mercurial also includes a simple http server that you activate by typing
hg serve
And let others browse your repository.
The bottom line is, Git does what Mercurial does, in a much more complicated way and with a far inferior CLI. Use Git if you want to turn the VCS of your project into a scientific-research field. Use Mercurial if you want to get the VCS job done without caring much about it, and focus on your real tasks.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "726"
} |
Q: How can a Java program get its own process ID? How do I get the id of my Java process?
I know there are several platform-dependent hacks, but I would prefer a more generic solution.
A: public static long getPID() {
    String processName = java.lang.management.ManagementFactory.getRuntimeMXBean().getName();
    if (processName != null && processName.length() > 0) {
        try {
            return Long.parseLong(processName.split("@")[0]);
        }
        catch (Exception e) {
            return 0;
        }
    }
    return 0;
}
A: Here's a backdoor method which might not work with all VMs but should work on both linux and windows (original example here):
java.lang.management.RuntimeMXBean runtime =
    java.lang.management.ManagementFactory.getRuntimeMXBean();
java.lang.reflect.Field jvm = runtime.getClass().getDeclaredField("jvm");
jvm.setAccessible(true);
sun.management.VMManagement mgmt =
    (sun.management.VMManagement) jvm.get(runtime);
java.lang.reflect.Method pid_method =
    mgmt.getClass().getDeclaredMethod("getProcessId");
pid_method.setAccessible(true);

int pid = (Integer) pid_method.invoke(mgmt);
A: I am adding this, in addition to other solutions.
With Java 10, to get the process id:
final RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
final long pid = runtime.getPid();
System.out.println("Process ID is " + pid);
A: The latest I have found is that there is a system property called sun.java.launcher.pid that is available at least on linux. My plan is to use that and if it is not found to use the JMX bean.
A: There exists no platform-independent way that can be guaranteed to work in all jvm implementations.
ManagementFactory.getRuntimeMXBean().getName() looks like the best (closest) solution, and typically includes the PID. It's short, and probably works in every implementation in wide use.
On linux+windows it returns a value like "12345@hostname" (12345 being the process id). Beware though that according to the docs, there are no guarantees about this value:
Returns the name representing the running Java virtual machine. The returned name string can be any arbitrary string and a Java virtual machine implementation can choose to embed platform-specific useful information in the returned name string. Each running virtual machine could have a different name.
In Java 9 the new process API can be used:
long pid = ProcessHandle.current().pid();
A: It depends on where you are looking for the information from.
If you are looking for the information from the console you can use the jps command. The command gives output similar to the Unix ps command and comes with the JDK since I believe 1.5
If you are looking from the process the RuntimeMXBean (as said by Wouter Coekaerts) is probably your best choice. The output from getName() on Windows using Sun JDK 1.6 u7 is in the form [PROCESS_ID]@[MACHINE_NAME]. You could however try to execute jps and parse the result from that:
String jps = [JDK HOME] + "\\bin\\jps.exe";
Process p = Runtime.getRuntime().exec(jps);
If run with no options the output should be the process id followed by the name.
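Continuing that snippet, here is a rough sketch of the parsing step, assuming the usual java.io imports and the typical "<pid> <main class>" output line format (the main class name is a placeholder):
BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));
String line;
while ((line = reader.readLine()) != null) {
    String[] parts = line.trim().split("\\s+");
    // keep the line whose main class matches this application
    if (parts.length >= 2 && "MyMainClass".equals(parts[1])) {
        System.out.println("PID: " + Long.parseLong(parts[0]));
    }
}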
A: This is the code JConsole (and potentially jps and VisualVM) uses. It utilizes classes from the sun.jvmstat.monitor.* package, from tools.jar.
package my.code.a003.process;

import java.util.Set; // needed for the activeVms() result below

import sun.jvmstat.monitor.HostIdentifier;
import sun.jvmstat.monitor.MonitorException;
import sun.jvmstat.monitor.MonitoredHost;
import sun.jvmstat.monitor.MonitoredVm;
import sun.jvmstat.monitor.MonitoredVmUtil;
import sun.jvmstat.monitor.VmIdentifier;

public class GetOwnPid {

    public static void main(String[] args) {
        new GetOwnPid().run();
    }

    public void run() {
        System.out.println(getPid(this.getClass()));
    }

    public Integer getPid(Class<?> mainClass) {
        MonitoredHost monitoredHost;
        Set<Integer> activeVmPids;
        try {
            monitoredHost = MonitoredHost.getMonitoredHost(new HostIdentifier((String) null));
            activeVmPids = monitoredHost.activeVms();
            MonitoredVm mvm = null;
            for (Integer vmPid : activeVmPids) {
                try {
                    mvm = monitoredHost.getMonitoredVm(new VmIdentifier(vmPid.toString()));
                    String mvmMainClass = MonitoredVmUtil.mainClass(mvm, true);
                    if (mainClass.getName().equals(mvmMainClass)) {
                        return vmPid;
                    }
                } finally {
                    if (mvm != null) {
                        mvm.detach();
                    }
                }
            }
        } catch (java.net.URISyntaxException e) {
            throw new InternalError(e.getMessage());
        } catch (MonitorException e) {
            throw new InternalError(e.getMessage());
        }
        return null;
    }
}
There are a few catches:
*
*The tools.jar is a library distributed with the Oracle JDK but not the JRE!
*You cannot get tools.jar from a Maven repo; configuring it with Maven is a bit tricky
*The tools.jar probably contains platform dependent (native?) code, so it is not easily distributable
*It runs under the assumption that all (local) running JVM apps are "monitorable". It looks like from Java 6 on all apps generally are (unless you actively configure the opposite)
*It probably works only for Java 6+
*Eclipse does not publish a main class, so you will not get the Eclipse PID easily
Bug in MonitoredVmUtil?
UPDATE: I have just double-checked that jps uses this same approach, that is, the Jvmstat library (part of tools.jar). So there is no need to call jps as an external process; call the Jvmstat library directly as my example shows. You can also get a list of all JVMs running on localhost this way.
See JPS source code:
A: Try Sigar. Very extensive APIs. Apache 2 license.
private Sigar sigar;

public synchronized Sigar getSigar() {
    if (sigar == null) {
        sigar = new Sigar();
    }
    return sigar;
}

public synchronized void forceRelease() {
    if (sigar != null) {
        sigar.close();
        sigar = null;
    }
}

public long getPid() {
    return getSigar().getPid();
}
A: The following method tries to extract the PID from java.lang.management.ManagementFactory:
private static String getProcessId(final String fallback) {
    // Note: may fail in some JVM implementations
    // therefore fallback has to be provided
    // something like '<pid>@<hostname>', at least in SUN / Oracle JVMs
    final String jvmName = ManagementFactory.getRuntimeMXBean().getName();
    final int index = jvmName.indexOf('@');

    if (index < 1) {
        // part before '@' empty (index = 0) / '@' not found (index = -1)
        return fallback;
    }

    try {
        return Long.toString(Long.parseLong(jvmName.substring(0, index)));
    } catch (NumberFormatException e) {
        // ignore
    }
    return fallback;
}
Just call getProcessId("<PID>"), for instance.
A: For older JVM, in linux...
private static String getPid() throws IOException {
    byte[] bo = new byte[256];
    InputStream is = new FileInputStream("/proc/self/stat");
    is.read(bo);
    for (int i = 0; i < bo.length; i++) {
        if ((bo[i] < '0') || (bo[i] > '9')) {
            return new String(bo, 0, i);
        }
    }
    return "-1";
}
A: Since Java 9 there is a method Process.pid() which returns the native ID of a process:
public abstract class Process {
    ...
    public long pid();
}
To get the process ID of the current Java process one can use the ProcessHandle interface:
System.out.println(ProcessHandle.current().pid());
A: Based on Ashwin Jayaprakash's answer (+1)
about the Apache 2.0 licensed SIGAR, here is how I use it to get only the PID of the current process:
import org.hyperic.sigar.Sigar;
Sigar sigar = new Sigar();
long pid = sigar.getPid();
sigar.close();
Even though it does not work on all platforms, it does work on Linux, Windows, OS X and various Unix platforms as listed here.
A: I know this is an old thread, but I wanted to call out that API for getting the PID (as well as other manipulation of the Java process at runtime) is being added to the Process class in JDK 9: http://openjdk.java.net/jeps/102
A: You can check out my project: JavaSysMon on GitHub. It provides process id and a bunch of other stuff (CPU usage, memory usage) cross-platform (presently Windows, Mac OSX, Linux and Solaris)
A: You could use JNA. Unfortunately there is no common JNA API to get the current process ID yet, but each platform is pretty simple:
Windows
Make sure you have jna-platform.jar then:
int pid = Kernel32.INSTANCE.GetCurrentProcessId();
Unix
Declare:
private interface CLibrary extends Library {
    CLibrary INSTANCE = (CLibrary) Native.loadLibrary("c", CLibrary.class);
    int getpid();
}
Then:
int pid = CLibrary.INSTANCE.getpid();
Java 9
Under Java 9 the new process API can be used to get the current process ID. First you grab a handle to the current process, then query the PID:
long pid = ProcessHandle.current().pid();
A: java.lang.management.ManagementFactory.getRuntimeMXBean().getName().split("@")[0]
A: In Scala:
import sys.process._
val pid: Long = Seq("sh", "-c", "echo $PPID").!!.trim.toLong
This should give you a workaround on Unix systems until Java 9 will be released.
(I know, the question was about Java, but since there is no equivalent question for Scala, I wanted to leave this for Scala users who might stumble into the same question.)
A: For completeness there is a wrapper in Spring Boot for the
String jvmName = ManagementFactory.getRuntimeMXBean().getName();
return jvmName.split("@")[0];
solution. If an integer is required, then this can be summed up to the one-liner:
int pid = Integer.parseInt(ManagementFactory.getRuntimeMXBean().getName().split("@")[0]);
If someone uses Spring boot already, she/he might use org.springframework.boot.ApplicationPid
ApplicationPid pid = new ApplicationPid();
pid.toString();
The toString() method prints the pid or '???'.
Caveats using the ManagementFactory are discussed in other answers already.
A: You can try getpid() in JNR-Posix.
It has a Windows POSIX wrapper that calls getpid() off of libc.
A: I found a solution that may be a bit of an edge case, and I didn't try it on any OS other than Windows 10, but I think it's worth noting.
If you find yourself working with J2V8 and nodejs, you can run a simple javascript function returning you the pid of the java process.
Here is an example:
public static void main(String[] args) {
    NodeJS nodeJS = NodeJS.createNodeJS();
    int pid = nodeJS.getRuntime().executeIntegerScript("process.pid;\n");
    System.out.println(pid);
    nodeJS.release();
}
A: This is what I used when I had a similar requirement. It determines the PID of the Java process correctly. Let your Java code spawn a server on a pre-defined port number and then execute OS commands to find out the PID listening on that port. For Linux:
netstat -tupln | grep portNumber
A: Here is my solution:
public static boolean isPIDInUse(int pid) {
    try {
        String s = null;
        int java_pid;

        RuntimeMXBean rt = ManagementFactory.getRuntimeMXBean();
        java_pid = Integer.parseInt(rt.getName().substring(0, rt.getName().indexOf("@")));
        if (java_pid == pid) {
            System.out.println("In Use\n");
            return true;
        }
    } catch (Exception e) {
        System.out.println("Exception: " + e.getMessage());
    }
    return false;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "409"
} |
Q: VB.NET form Height Question I have a VB6.0 project and I want to convert it in VB.Net.
In my VB6.0 application some of the MDI child forms have a height of 17000 and a width of 13000. Now I want to set the same form size in VB.Net forms, but it allows a maximum form width of 1036 and height of 780 at a resolution of 1024x768.
How can I increase form size with same resolution?
Also, I want to print this form, so I cannot use the auto scroll property of VB.Net forms.
Thanks
A: Your classic VB units are in what are called "twips". You will most likely be able to divide those numbers by 12 or 15 (depending on whether you are using large or small fonts) and you will get a certain number of pixels.
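For example, at the usual small-font setting (15 twips per pixel), the sizes from the question translate roughly like this:
' Rough twips-to-pixels conversion; 15 twips per pixel assumes small fonts (96 dpi)
Dim widthInPixels As Integer = 13000 \ 15   ' about 866 pixels
Dim heightInPixels As Integer = 17000 \ 15  ' about 1133 pixels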
A: I did some Googling on this, and came across this:
Yes, the size of the form is limited to the size of the desktop (more specifically SystemInformation.MaxWindowTrackSize). This is done in the Form.SetBoundsCore protected virtual method. This behaviour cannot be changed, or at least not without a great deal of work and using PInvoke.
Also supported here
The size of the form in the designer is limited by your screen size. It sounds like you have your display at 1600x1200, hence the designer won't let you go larger than 1212. If you had your display at 1280x1024, then the designer wouldn't let you go larger than 1036. I'm not really sure why the size of the form in the designer is limited to the screen size, as I may deploy on a machine that has a larger screen size than my development machine...
So it looks like it cannot be done... That's some strange behaviour, since it looks like you are limited to whatever your dev machine is.
I think the only way to do it is to size to the maximum resolution possible, set the form size, then revert back, but never touch the size again.
A: You are limited in the designer, but not in code:
Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
    Me.Height = 17000 'or whatever you need
    Me.Width = 13000
End Sub
A: I think the VB6 units are not the same as the VB.Net ones, so you have to do a conversion.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What logging is good logging for your app? So we've discussed logging in passing at my place of work and I was wondering if some of you guys here could give me some ideas of your approaches?
Typically our scenario is, no logging really at all, and mostly .NET apps, winforms/WPF clients talking through web services or direct to a db.
So, the real question is, where or what would you log? At the moment we have users reporting error messages - so I would assume log startups/shutdowns, exceptions...
Do you take it to calls to the web services or db? Page loads?
How do you get a good idea of what the user was trying to do at the time?
Is it better to go all the way and log everything across multiple attempts/days, or log only what you need to (given hdd is cheap).
I guess that's a few questions, but I wanted to get more of an idea of what the actual practice is out there in larger shops!
A: Being an admin, I really appreciate apps that log to the Event Log (preferably their own, otherwise the application log) for all logging but trace logs. By logging to the event log, you make it much more likely that warnings or errors can be found and addressed by the admin staff before they become a major problem (if it is a issue they can address), or allows them to get in contact with the devs, who can use the trace logs to further troubleshoot the issue.
My biggest pain point in supporting a custom .NET app right now is that there are 8 different applications (some console apps, some winforms, and some web) from the same vendor. None of them log to the event log, they all have their own custom log files. But for all the winforms and console apps, they keep the file open while they are running, so I can't monitor it for issues. Also, the logs are all written slightly differently, so I would have to parse them a bit differently to get useful information.
This forces me to monitor the appearance of an application (is it responding on the ports it is active on, is the process working set getting too high, etc..), rather than what the state of the application really is.
Please, please consider the folks who maintain your application after it is deployed and provide logging they can use. Thanks!
A: This post on highscalability.com provides a good perspective on logging in a large scale distributed system. (And coincidentally it starts out by mentioning a post on the JoelOnSoftware).
A:
Is it better to go all the way and log everything across multiple attempts/days, or log only what you need to (given hdd is cheap).
The fact that harddrives are cheap really isn't a good reason to verbosely log everything possible, for a few reasons. For one, with a very busy application, you really don't want to slow it down and tie up disc-writes writing logs (harddrives are pretty slow). The second point, and the more important one: there's really very little to gain from terabytes worth of logs. For development they can be useful, but you don't need to keep more than a few minutes of them.
Some logging is of course useful, and having different levels is about the only way to go about it - for example, debug() and info() only get logged if requested (via a config setting or command line flag), then maybe warning() and error() get sent to a log file.
For most of the things I've written (smallish scripts) I generally just have a debug() function that checks if --verbose is set, and prints the message. That way I can shove debug("some value: %s" % (avar)) in when needed, and not have to worry about going back and removing debugging print() statements everywhere.
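A minimal sketch of that kind of debug() helper in Python (the flag handling here is only illustrative):
import sys

VERBOSE = "--verbose" in sys.argv

def debug(msg):
    # Only print when --verbose was passed on the command line
    if VERBOSE:
        print(msg)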
For web applications, I generally just use the web-server logs for statistics, and the error log. I use things like mod_rewrite's log when needed, but it would be idiotic to leave this enabled beyond development (as it creates many many lines on each page request)
I suppose it depends on the application itself, but generally, for big applications use multiple levels of logs that can be activated when needed. For smaller things, a --verbose flag or equivalent, for web applications, log errors and (to a point) log hits.
Basically, in "production" log only the information you can use, in development log everything you could possible need to fix problems.
A: The key thing for logging is good planning. I would suggest that you look into the enterprise library exception and logging application block (http://msdn.microsoft.com/en-us/library/cc467894.aspx). There is a wee bit of a learning curve but it does work quite well. The approach I favour at the moment is to define 4 priority levels. 4=Unhandled exception (error in event log), 3=Handled exception (warning in event log), 2=Access an external resource such as a webservice, db or mainframe system (information in event log), 1=Verbose/anything else of interest (information in event log).
Using the application block it's then quite easy to tweak what level of priority you want to log. So in development you'd log everything but as you get a stable system in production, you'd probably only be interested in unhandled exceptions and possibly handled exceptions.
Update: For clarity, I would suggest you have logging in both your winform/wpf app and your webservices. In a web scenario, I've had problems in the past where it can be difficult to tie an error on the client back through to the app servers. Mainly because any error through webservices gets wrapped up as a SOAP exception. I can't remember off the top of my head, but I think if you use a custom exception handler (that is part of the enterprise library) you can add data onto exceptions such as the handlinginstance id of the exception from the app server. This makes it easier to tie up exceptions on a client back to your app box by using LogParser (http://www.microsoft.com/downloads/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en).
Second Update: I also like to give each different event a separate event id and to track that in a text file or spreadsheet under source control. Yes, it's a pain, but if you're lucky enough to have an IT team looking after your systems in production, I find they tend to expect different events to have different event ids.
A: As a quick answer I would say to come up with a series of categories and have switchable logging levels, e.g. info, warning, error, critical, etc.
Then make it easy to set the logging level to tune the level of detail that you need. Typically, set the logging level in a config file and stop and restart the app.
I would also publicize to the developers what the meaning is for each of the levels.
edit: I would also set up a system to rotate out, compress and archive log files on a regular basis, maybe nightly.
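For a .NET app like the ones in the question, one stock way to get such a switchable level out of a config file is a System.Diagnostics.TraceSwitch; the switch name and helper below are made up for illustration. In app.config:
<configuration>
  <system.diagnostics>
    <switches>
      <add name="AppTraceLevel" value="3" />  <!-- 3 = Info; 4 = Verbose, 1 = Error -->
    </switches>
  </system.diagnostics>
</configuration>
and on the code side:
using System.Diagnostics;

class Log
{
    // The name must match the <add name="..."> entry in app.config
    static readonly TraceSwitch Level = new TraceSwitch("AppTraceLevel", "Application trace level");

    public static void Info(string msg)  { Trace.WriteLineIf(Level.TraceInfo, msg); }
    public static void Error(string msg) { Trace.WriteLineIf(Level.TraceError, msg); }
}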
A: For a typical desktop app, I'd store everything on the current session, and maybe store info messages for the past n sessions or up to x in size.
I'm assuming that your messages are organized. We use 4 categories; errors, warnings, info, and trace. We're still figuring out what goes at which level. As I'm getting used to parsing log files, I generally say "log more". Don't sweat readability, you're probably gonna have to process the log file a bit before you can use it.
In the end, find a good logging framework that allows you to control your spool usage on lifetime and storage space, and a proper api that minimizes the effect on your code. Ideally you just type info("waaah") or warning("waah") and the API does all the fancy tagging for you.
A: Thanks guys, lots of good info, but Martin has given me a bit more detail on how to proceed. I'll give him the answer, as it seems like now that we're off the front few pages, answers will drop off.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: P/Invoke in Mono What's the current status of Mono's Platform Invoke implementation on Linux and on Solaris?
A: Working, usable and stable. It's well tested since quite a lot of mono's own low-level functionality has to be marshaled through it to the underlying operating system.
There are some P/Invoke extensions compared to the Microsoft .Net implementation (after all, they deal with a single OS family and three architectures at most). The most notable of those is that library mappings transform the library name to OS-specific variants (e.g. mylib.dll searches for mylib.so on Linux, mylib.dylib on OS X and so on) and take into account various other system specific conventions. There is also a DllMap configuration extension which can be used if the default name translations are not enough, as sketched below. Usually it's convenient to have the same API of the binary lib exposed on different OSes, so that migrating between platforms only requires changes in the C code, not the .Net part.
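As an illustration of that DllMap extension, a per-assembly config file (MyAssembly.dll.config, placed next to the assembly) can remap the library name per OS; the library and file names here are made up:
<configuration>
    <dllmap dll="mylib.dll" target="libmylib.so" os="linux" />
    <dllmap dll="mylib.dll" target="libmylib.dylib" os="osx" />
</configuration>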
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Comparison of Lat, Long Coordinates I have a list of more than 15 thousand latitude and longitude coordinates. Given any X,Y coordinates, what is the fastest way to find the closest coordinates on the list?
A: I did this once for a web site. I.e. find the dealer within 50 miles of your zip code. I used the great circle calculation to find the coordinates that were 50 miles north, 50 miles east, 50 miles south, and 50 miles west. That gave me a min and max lat and a min and max long. From there then I did a database query:
select *
from dealers
where latitude >= minlat
and latitude <= maxlat
and longitude >= minlong
and longitude <= maxlong
Since some of those results will still be more than 50 miles away, I then used the great circle formula once more on that small list of coordinates. Then I printed out the list along with the distance from the target.
Of course, if you wanted to search for points near the international date line or the poles, then this won't work. But it works great for searches inside North America!
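For reference, one common way to compute that great-circle distance is the haversine formula; a small Python sketch, where the mile-based Earth radius is an approximation:
import math

EARTH_RADIUS_MILES = 3959.0  # mean Earth radius, approximate

def great_circle_miles(lat1, lon1, lat2, lon2):
    # Haversine formula for the distance between two lat/long points
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlat = lat2 - lat1
    dlon = lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))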
A: You will want to use a geometric construction called a Voronoi diagram. This divides up the plane into a number of areas, one for each point, that encompass all the points that are closest to each of your given points.
The code for the exact algorithms to create the Voronoi diagram and arrange the data structure lookups is too large to fit in this little edit box. :)
@Linor: That's essentially what you would do after creating a Voronoi diagram. But instead of making a rectangular grid, you can choose dividing lines that closely match the lines of the Voronoi diagram (this way you will get fewer areas that cross dividing lines). If you recursively divide your Voronoi diagram in half along the best dividing line for each subdiagram, you can then do a tree search for each point you want to look up. This requires a bit of work up front but saves time later. Each lookup would be on the order of log N where N is the number of points. 16 comparisons is a lot better than 15,000!
A: The general concept you're describing is nearest-neighbour search, and there are a whole raft of techniques which deal with solving these types of queries, either exactly or approximately. The basic idea is to use a spatial partitioning technique to reduce the complexity from O(n) per query to (approximately) O( log n ) per query.
KD-Trees, and variants of KD-Trees, seem to work very well, but quad-trees will also work. The quality of these searches depends on whether your set of 15,000 data points is static (you're not adding a lot of data points to the reference set). Mount and Arya's work on the Approximate Nearest Neighbour library is both easy to use and understand, even without a good grounding in the math. It also gives you some flexibility in the types and tolerances of your queries.
A: It rather depends how many times you want to do it, and what resources are available - if you're doing the test once, then the O(log N) techniques are good. If you're doing it a thousand times on a server, constructing a bitmap lookup table would be faster, either giving the result directly or as a first stage of the search. 2GB of bitmap can map the whole world lat-lon to a 32bit value at 0.011 degree pixels (1.2km at the equator), and should fit into memory. If you're only doing a single country, or can exclude the poles, you can have a smaller map or higher resolution. For 15,000 points you probably have a much smaller map - I first sized it up as a first step to doing lat-lon to postcode searches, which needs higher resolution. Depending on requirements, you use the mapped value to point at the result directly, or to a short list of the candidates (which would allow a smaller map, but requires greater subsequent processing - you're not in O(1) lookup territory any more).
A: You didn't specify what you meant by fastest. If you want to get the answer quickly without writing any code, I would give the gpsbabel radius filter a go.
A: Based on your clarifications, I would use a geometrical data structure such as a KD-tree or an R-tree. MySQL has a SPATIAL data type which does this. Other languages/frameworks/databases have libraries to support this. Basically, such a data structure embeds the points in a tree of rectangles, and searches the tree using a radius. This should be fast enough, and I believe is simpler than building a Voronoi diagram. I guess there is some threshold above which you would prefer the added performance of a Voronoi diagram so you will be ready to pay the added complexity.
A: This can be solved in several ways. I would first approach this problem by generating a Delaunay network connecting closest points to each other. This can be accomplished with the v.delaunay command in the open source GIS application GRASS. You could complete the problem in GRASS using one of the many network analysis modules in GRASS. Alternatively, you could use the free spatial RDBMS PostGIS to do the distance queries. The PostGIS spatial queries are considerably more powerful than those in MySQL, as they are not constrained to BBOX operations. For example:
SELECT network_id, ST_Length(geometry) from spatial_table where ST_Length(geometry) < 10;
Since you are using Longitude and Latitude, you probably want to use the Spheroid-Distance functions. With a spatial index, PostGIS scales very well for large datasets.
A: Even if you create a Voronoi diagram, that still means you need to compare your x, y coordinates to all 15 thousand created areas. To make that easier, the first thing that popped into my mind was to create some sort of grid over the possible values, so that you can easily place an x/y coordinate into one of the boxes in the grid. If the same is done for the list of areas, you should quickly shrink the possible candidates for comparison (because the grid would be more rectangular, it's possible for an area to be in multiple grid positions).
A: Premature optimization is the root of all evil.
15K coordinates aren't that much. Why not iterate over the 15K coordinates and see if that's really a performance problem? You could save a lot of work and maybe it never gets too slow to even notice.
A: How large an area are these coordinates spread out over? What latitude are they at? How much accuracy do you require? If they're fairly close together, you can probably ignore the fact that the earth is round and just treat this as a Cartesian plane rather than messing about with spherical geometry and great circle distances. Of course, as you get further from the equator, degrees of longitude get smaller compared to degrees of latitude, so some sort of scaling factor may be appropriate.
Start with a fairly simple distance formula and a brute force search and see how long that's going to take and if the results are accurate enough before you get fancy.
A: Thanks everyone for the answers.
@Tom, @Chris Upchurch: The coordinates are fairly close to each other, and they are in a relatively small area of about 800 sq km. I guess I can assume the surface to be flat. I need to process the requests over and over again, and the responses should be fast enough for a good web experience.
A: A grid is very simple, and very fast. It's basically just a 2D array of lists. Each array entry represents the points that fall inside a grid cell. Very easy to set the grid up:
for each point p
    get cell that contains p
    add point to that cell's list
and it's very easy to look things up:
given a query point p
    get cell that contains p
    check points in that cell (and its 8 neighbors), against query point p
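A minimal Python sketch of that grid structure (the cell size is an arbitrary illustrative choice; the final comparison against the candidates still needs a distance function):
from collections import defaultdict
import math

CELL_SIZE = 0.05  # grid cell size in degrees

def cell_of(lat, lon):
    # Map a coordinate to its integer grid cell
    return (int(math.floor(lat / CELL_SIZE)), int(math.floor(lon / CELL_SIZE)))

def build_grid(points):
    grid = defaultdict(list)
    for lat, lon in points:
        grid[cell_of(lat, lon)].append((lat, lon))
    return grid

def candidates(grid, lat, lon):
    # Gather points from the query cell and its 8 neighbours
    ci, cj = cell_of(lat, lon)
    found = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            found.extend(grid.get((ci + di, cj + dj), []))
    return found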
Alejo
A: Just to be contrarian, do you mean close in distance or (driving) time? In an urban area I'd gladly drive 5 miles (5 min) on the highway rather than 4 miles (20 min stop and go) in another direction.
Thus if it's a 'closest' metric you need, I'd look into GIS databases with travel time metrics.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Base64 Encoding Image I am building an open search add-on for Firefox/IE and the image needs to be Base64 Encoded so how can I base 64 encode the favicon I have?
I am only familiar with PHP
A: My synopsis of rfc2397 is:
Once you've got your base64 encoded image data, put it inside the <Image></Image> tags prefixed with "data:{mimetype};base64,". This is similar to the prefixing done in the parentheses of a url() definition in CSS, or in the quoted value of the src attribute of the img tag in [X]HTML. You can test the data url in Firefox by putting the data:image/... line into the URL field and pressing enter; it should show your image.
For actually encoding I think we need to go over all your options, not just PHP,
because there's so many ways to base64 encode something.
*
*Use the base64 command line tool. It's part of the GNU coreutils (v6+) and pretty much default in any Cygwin, Linux, GnuWin32 install, but not the BSDs I tried. Issue: $ base64 imagefile.ico > imagefile.base64.txt
*Use a tool that features the option to convert to base64, like Notepad++ which has the feature under plugins->MIME tools->base64 Encode
*Email yourself the file and view the raw email contents, copy and paste.
*Use a web form.
A note on mime-types:
I would prefer you use one of image/png, image/jpeg or image/gif, as I can't find the popular image/x-icon. Should that be image/vnd.microsoft.icon?
Also the other formats are much shorter.
Compare 265 bytes vs 1150 bytes:
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAMAAAAoLQ9TAAAAVFBMVEWcZjTcViTMuqT8/vzcYjTkhhTkljT87tz03sRkZmS8mnT03tT89vTsvoTk1sz86uTkekzkjmzkwpT01rTsmnzsplTUwqz89uy0jmzsrmTknkT0zqT3X4fRAAAAbklEQVR4XnXOVw6FIBBAUafQsZfX9r/PB8JoTPT+QE4o01AtMoS8HkALcH8BGmGIAvaXLw0wCqxKz0Q9w1LBfFSiJBzljVerlbYhlBO4dZHM/F3llybncbIC6N+70Q7OlUm7DdO+gKs9gyRwdgd/LOcGXHzLN5gAAAAASUVORK5CYII=
data:image/x-icon;base64,AAABAAEAEBAAAAEAIABoBAAAFgAAACgAAAAQAAAAIAAAAAEAIAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAD/////ZmZm/2ZmZv9mZmb/ZmZm/2ZmZv9mZmb/ZmZm/2ZmZv9mZmb/ZmZm/2ZmZv9mZmb/ZmZm/2ZmZv///////////2ZmZv9mZmb/ZmZm/2ZmZv9mZmb/ZmZm/2ZmZv9mZmb/ZmZm/2ZmZv9mZmb/ZmZm/2ZmZv9mZmb///////////9mZmb/ZmZm//////////////////////////////////////////////////////9mZmb/ZmZm////////////ZmZm/2ZmZv//////ZmZm/2ZmZv9mZmb/ZmZm/2ZmZv9mZmb/ZmZm/2ZmZv//////ZmZm/2ZmZv///////////2ZmZv9mZmb//////2ZmZv9mZmb/ZmZm/2ZmZv9mZmb/ZmZm/2ZmZv9mZmb//////2ZmZv9mZmb///////////9mZmb/ZmZm////////////////////////////8fX4/8nW5P+twtb/oLjP//////9mZmb/ZmZm////////////////////////////oLjP/3eZu/9pj7T/M2aZ/zNmmf8zZpn/M2aZ/zNmmf///////////////////////////////////////////zNmmf8zZpn/M2aZ/zNmmf8zZpn/d5m7/6C4z/+WwuH/wN/3//////////////////////////////////////+guM//rcLW/8nW5P/x9fj//////9/v+/+w1/X/QZ7m/1Cm6P//////////////////////////////////////////////////////7/f9/4C+7v8xluT/EYbg/zGW5P/A3/f/0933/9Pd9//////////////////////////////////f7/v/YK7q/xGG4P8RhuD/MZbk/7DX9f//////4uj6/zJh2/8yYdv/8PT8////////////////////////////UKbo/xGG4P8xluT/sNf1////////////4uj6/zJh2/8jVtj/e5ro/////////////////////////////////8Df9/+gz/P/////////////////8PT8/0944P8jVtj/bI7l/////////////////////////////////////////////////////////////////2yO5f8jVtj/T3jg//D0/P///////////////////////////////////////////////////////////3ua6P8jVtj/MmHb/+Lo+v////////////////////////////////////////////////////////////D0/P8yYdv/I1bY/9Pd9///////////////////////AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
A: As far as I remember there is an xml element for the image data. You can use this website to encode a file (use the upload field). Then just copy and paste the data to the XML element.
You could also use PHP to do this like so:
<?php
$im = file_get_contents('filename.gif');
$imdata = base64_encode($im);
?>
Use Mozilla's guide for help on creating OpenSearch plugins. For example, the icon element is used like this:
<Image width="16" height="16">data:image/x-icon;base64,imageData</Image>
Where imageData is your base64 data.
A: Check the following example:
// First get your image
$imgPath = 'path-to-your-picture/image.jpg';
$img = base64_encode(file_get_contents($imgPath));
echo '<img width="100" height="100" src="data:image/jpg;base64,'. $img .'" />'
A: $encoded_data = base64_encode(file_get_contents('path-to-your-image.jpg'));
A: Google led me to this solution (base64_encode). Hope this helps!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "65"
} |
Q: Grid controls compatible with .NET and Mono? Do you know any grid control compatible with .NET and Mono?
DataGridView seems to be quite buggy on Mono, and GTK# controls depend on GTK+, so you need to install it on Windows machines where, usually, it's not present.
A: You might want to try out the preview of Mono 2.0. DataGridView is vastly better in this version, though there are still several places where it is still lacking.
http://mono.ximian.com/monobuild/preview/download-preview/
A: I tried mono 1.9.1 (Mono 2.0 beta) and had some problems with sorting, generated columns, and some nasty exceptions.
A: Our grid should be compatible with Mono
see http://www.pfgrid.com
Bye
Matthias
A: Which version did you try? Perhaps you should give Mono 2.0 preview a go, it might work for you (no, the Data* controls are not yet perfect, but they have improved greatly). From my experience GTK# controls on Windows are not that great either...
A: Hello there. If you want to try free grid controls that can work in .NET and Mono, just try Obout: http://www.obout.com/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How can I play compressed sound files in C# in a portable way? Is there a portable, not patent-restricted way to play compressed sound files in C# / .Net? I want to play short "jingle" sounds on various events occuring in the program.
System.Media.SoundPlayer can handle only WAV, but those are typically too big to embed in a downloadable application. MP3 is protected by patents, so even if there was a fully managed decoder/player it wouldn't be free to redistribute. The best format available would seem to be OGG Vorbis, but I had no luck getting any C# Vorbis libraries to work (I managed to extract raw PCM with csvorbis but I don't know how to play it afterwards).
I neither want to distribute any binaries with my application nor depend on P/Invoke, as the project should run at least on Windows and Linux. I'm fine with bundling .Net assemblies as long as they are license-compatible with GPL.
[this question is a follow up to a mailing list discussion on mono-dev mailing list a year ago]
A: I finally revisited this topic, and, using help from BrokenGlass on writing WAVE header, updated csvorbis. I've added an OggDecodeStream that can be passed to System.Media.SoundPlayer to simply play any (compatible) Ogg Vorbis stream. Example usage:
using (var file = new FileStream(oggFilename, FileMode.Open, FileAccess.Read))
{
var player = new SoundPlayer(new OggDecodeStream(file));
player.PlaySync();
}
'Compatible' in this case means 'it worked when I tried it out'. The decoder is fully managed, works fine on Microsoft .Net - at the moment, there seems to be a regression in Mono's SoundPlayer that causes distortion.
Outdated:
System.Diagnostics.Process.Start("fullPath.mp3");
I am surprised, but the method Dinah mentioned actually works. However, I was thinking about playing short "jingle" sounds on various events occurring in the program; I don't want to launch the user's media player each time I need to do a 'ping!' sound.
As for the code project link - this is unfortunately only a P/Invoke wrapper.
A:
I neither want to distribute any
binaries with my application nor
depend on P/Invoke, as the project
should run at least on Windows and
Linux. I'm fine with bundling .Net
assemblies as long as they are
license-compatible with GPL.
Unfortunately it's going to be impossible to avoid distributing binaries, or avoid P/Invoke. The .net class libraries use P/Invoke underneath anyway; the managed code has to communicate with the unmanaged operating system API at some point, in order to do anything.
Converting the OGG file to PCM should be possible in managed code, but because there is no native support for audio in .net, you really have 3 options:
*
*Call an external program to play the sound (as suggested earlier)
*P/Invoke a C module to play the sound
*P/Invoke the OS APIs to play the sound.
(4.) If you're only running this code on windows you could probably just use DirectShow.
P/Invoke can be used in a cross platform way
http://www.mono-project.com/Interop_with_Native_Libraries#Library_Names
Once you have your PCM data (using an OGG C lib or managed code, something like this http://www.robburke.net/mle/mp3sharp/ - of course there are licensing issues with MP3), you will need a way to play it. Unfortunately .net does not provide any direct access to your sound card or methods to play streaming audio. You could convert the ogg files to PCM at startup, and then use System.Media.SoundPlayer to play the wav files generated. The current method Microsoft suggests uses P/Invoke to access the sound playing API in the OS: http://msdn.microsoft.com/en-us/library/ms229685.aspx
A cross platform API to play PCM sound is OpenAL, and you should be able to play (PCM) sound using the C# bindings for OpenAL at www.taoframework.com. You will unfortunately need to copy a number of DLL and .so files with your application in order for it to work when distributed, but this is, as I've explained earlier, unavoidable.
A: Calling something which is located in 'System.Diagnostics' to play a sound looks like a pretty bad idea to me. Here is what that function is meant for:
//
// Summary:
// Starts a process resource by specifying the name of a document or application
// file and associates the resource with a new System.Diagnostics.Process component.
//
// Parameters:
// fileName:
// The name of a document or application file to run in the process.
//
// Returns:
// A new System.Diagnostics.Process component that is associated with the process
// resource, or null, if no process resource is started (for example, if an
// existing process is reused).
//
// Exceptions:
// System.ComponentModel.Win32Exception:
// There was an error in opening the associated file.
//
// System.ObjectDisposedException:
// The process object has already been disposed.
//
// System.IO.FileNotFoundException:
// The PATH environment variable has a string containing quotes.
A: I think you should have a look at FMOD, which is the mother of all audio APIs.
Please feel free to dream about http://www.fmod.org/index.php/download#FMODExProgrammersAPI
A: The XNA Audio APIs work well in .net/c# applications, and work beautifully for this application. Event-based triggering, along with concurent playback of multiple sounds. Exactly what you want. Oh, and compression as well.
A: Well, it depends on the patent-related laws in a given country, but there is no way to write an mp3 decoder without violating patents, as far as I know. I think the best cross-platform, open source solution for your problem is GStreamer. It has C# bindings, which evolve rapidly. Using and building GStreamer on Windows is not an easy task however. Here is a good starting point. The Banshee project uses this approach, but it is not really usable on windows yet (however, there are some almost-working nightly builds). FMOD is also a good alternative. Unfortunately, it is not open source and I find that its API is somewhat C-styled.
A: There is a pure C# vorbis decoder available that is open source:
http://anonsvn.mono-project.com/viewvc/trunk/csvorbis/
A: Not sure if this is still relevant. Simplest solution would be to use NAudio, which is a managed open source audio API written in C#. Another thing to try would be utilizing ffmpeg, and creating a process to ffplay.exe (the right binaries are under shared builds).
A: There is no way for you to do this without using something else for your play handling.
Using System.Diagnostics will launch external software, and I doubt you want that, right? You just want X sound file to play in the background when Y happens in your program, right?
Voted up because it looks like an interesting question. :D
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: How do I autorun an application in a terminal in Ubuntu? I've created a few autorun script files on various USB devices that run bash scripts when they mount. These scripts run "in the background", how do I get them to run in a terminal window? (Like the "Application in Terminal" gnome Launcher type.)
A: Run them as a two stage process, with your "autorun" script calling the second script in a new terminal, e.g.
gnome-terminal -e top --title Testing
would run the program "top" in a new gnome terminal window with the title "Testing". You can add additional arguments, like setting the geometry to determine the size and location of the window. Check out the man page for gnome-terminal and the "X" man page for more details.
A: xterm -e shellscript.sh
or (if xterm isn't installed)
gnome-terminal -e shellscript.sh
or (if you're using kubuntu / kde)
konsole -e shellscript.sh
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: 3rd Party UI components for .net Compact Framework? I have a .Net compact framework application with a frankly unimpressive UI.
My win32 app uses Dev Express components and looks great, but I can't find anything similar for the compact framework.
Can anyone recommend components that will jazz up my UI?
Does such a thing exist, or am I going to have to owner-draw my components or, even worse, use native code?
A: A number of vendors provide controls for the Windows Mobile environment.
Component One Mobile
Pocket PC Controls
Resco Mobile Forms Toolkit
A: OpenNETCF is a large collection of classes, components and controls for the compact framework. I'm not sure that they have anything that'll jazz up your UI, but it'd be worth a look.
A: I really like Resco's mobile controls. They are solid and the support is good.
I purchased a few controls from Pocket PC Controls. They are not bad, but if you run into bugs it's hit or miss whether you hear back from the developer. Basically, it doesn't seem like the components are constantly being updated.
Resco, on the other hand, has frequent updates, new features and new components.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Launch a file with command line arguments without knowing location of exe? Here's the situation: I am trying to launch an application, but the location of the .exe isn't known to me. Now, if the file extension is registered (in Windows), I can do something like:
Process.Start("Sample.xls");
However, I need to pass some command line arguments as well. I couldn't get this to work
Process p = new Process();
p.StartInfo.FileName = "Sample.xls";
p.StartInfo.Arguments = "/r"; // open in read-only mode
p.Start();
Any suggestions on a mechanism to solve this?
Edit @ aku
My StackOverflow search skills are weak; I did not find that post. Though I generally dislike peering into the registry, that's a great solution. Thanks!
A: Using my code from this answer you can get the command associated with the xls extension. Then you can pass this command to the Process.Start method.
A: If you query the registry, you can retrieve the data about the registered file type and then call the app directly passing the command line arguments. See Programmatically Checking and Setting File Types for an example of retrieving shell information for a file type.
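As a hedged illustration of that registry lookup (the key layout shown is typical but not guaranteed, real code needs null checks and error handling, and the registered command is assumed to start with a quoted executable path):
using Microsoft.Win32;
using System.Diagnostics;

// Find the shell "open" command registered for .xls and launch it directly,
// so extra command-line arguments can be appended.
string progId = (string)Registry.ClassesRoot.OpenSubKey(".xls").GetValue(null);
string command = (string)Registry.ClassesRoot
    .OpenSubKey(progId + @"\shell\Open\command").GetValue(null);
// command usually looks like: "C:\...\EXCEL.EXE" "%1" /e
string exePath = command.Split('"')[1];
Process.Start(exePath, "/r \"Sample.xls\"");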
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Are there any "nice to program" GUI toolkits for Python? I've played around with GTK, TK, wxPython, Cocoa, curses and others. They are all fairly horrible to use. GTK/TK/wx/curses all seem to basically be direct ports of the appropriate C libraries, and Cocoa basically mandates using both PyObjC and Interface Builder, both of which I dislike.
The Shoes GUI library for Ruby is great. It's very sensibly designed, and very "rubyish", and borrows some nice-to-use things from web development (like using hex colour codes, or :color => rgb(128,0,0)).
As the title says: are there any nice, "Pythonic" GUI toolkits?
A: Seconding PyQt. Coupled with the book Rapid GUI Programming with Python and Qt, it's really easy to learn.
A: Have you looked at Qt/PyQt? Although PyQt is a direct port from the C++ library, I find it much more pythonic and nice to program with compared to the others you listed. It also has very good documentation.
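To give a feel for it, here is a minimal PyQt sketch (assuming PyQt4 is installed; this example is not from the original answer):
import sys
from PyQt4.QtGui import QApplication, QPushButton

app = QApplication(sys.argv)
button = QPushButton("Hello from PyQt")
button.resize(200, 60)
button.show()
sys.exit(app.exec_())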
Dabo has a nice ui library implemented on top of wxPython. It's a framework intended mostly for database-centric applications, but the ui library can be used separately.
There are/were several other attempts to create a very pythonic gui as a layer on top of PyGtk or wxPython, such as wax and PyGui, which seem to be "stuck" at various degrees of being complete.
Also, an exhaustive list of Python GUI toolkits can be found here.
A: Please check out Dabo, our framework for desktop applications. http://dabodev.com
We have wrapped the wxPython toolkit for the UI classes, and replaced their ugly C++ style functions with simple properties. You mentioned assigning color: in Dabo, you would do it very simply, using your choice of:
obj.BackColor = "red"
obj.BackColor = (255, 0, 0)
obj.BackColor = "FF0000"
obj.BackColor = "#FF0000"
Dabo understands all of these, and handles the differences for you automatically.
I am one of the authors of Dabo, and would be happy to answer any other questions that you may have.
--- Ed Leafe
A: I've used Glade with some success, though I didn't manage to wrap my head around creating anything really complex. It has a nice GUI builder and stores the forms as XML files that are loaded dynamically. Kind of like XAML, AFAIK.
A: I use pyGtk. I think wxPython is nice but it's too limited, and PyQt is, well, Qt. =)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Logging in a PHP webapp I want to keep logs of some things that people do in my app, in some cases so that it can be undone if needed.
Is it best to store such logs in a file or a database? I'm completely at a loss as to what the pros and cons are except that it's another table to setup.
Is there a third (or fourth etc) option that I'm not aware of that I should look into and learn about?
A: There is at least one definite reason to go for storing in the database. You can use INSERT DELAYED in MySQL (or similar constructs in other databases), which returns immediately. You won't get any return data from the database with these kinds of queries, and they are not guaranteed to be applied.
By using INSERT DELAYED, you won't slow down your app too much because of the logging. The database is free to write the INSERTs to disk at any time, so it can bundle a bunch of inserts together.
You need to watch out when using MySQL's built-in timestamp functions (like CURRENT_TIMESTAMP or CURDATE()), because they will be evaluated whenever the query is actually executed. So you should make sure that any time data is generated in your programming language, and not by the database. (This paragraph might be MySQL-specific)
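A small sketch of that pattern (the table and column names are hypothetical):
-- The timestamp comes from the application rather than from NOW()/CURRENT_TIMESTAMP,
-- because a delayed insert may be written to disk some time after it is issued.
INSERT DELAYED INTO action_log (user_id, action, created_at)
VALUES (42, 'profile_update', '2008-09-30 12:34:56');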
A: You will almost certainly want to use a database for flexible, record based access and to take advantage of the database's ability to handle concurrent data access. If you need to track information that may need to be undone, having it in a structured format is a benefit, as is having the ability to update a row indicating when and by whom a given transaction has been undone.
You likely only want to write to a file if very high performance is an issue, or if you have very unstructured or large amounts of data per record that might be unwieldy to store in a database. Note that unless your application has a very large number of transactions, database speed is unlikely to be an issue. Also note that if you are working with a file you'll need to handle concurrent access (read / write / locking) very carefully, which is likely not something you want to have to deal with.
A: I'm a big fan of log4php. It gives you a standard interface for logging actions. It's based on log4j. The library loads a central config file, so you never need to change your code to change logging. It also offers several log targets, like files, syslog, databases, etc.
A: I'd use a database simply for maintainability; also, concurrent edits to a flat file may cause some entries to be lost.
A: I will second both of the above suggestions and add that file locking on a flat file log may cause issues when there are a lot of users.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Django templates and variable attributes I'm using Google App Engine and Django templates.
I have a table in which I want to display objects that look something like:
Object Result:
Items = [item1,item2]
Users = [{name='username',item1=3,item2=4},..]
The Django template is:
<table>
<tr align="center">
<th>user</th>
{% for item in result.items %}
<th>{{item}}</th>
{% endfor %}
</tr>
{% for user in result.users %}
<tr align="center">
<td>{{user.name}}</td>
{% for item in result.items %}
<td>{{ user.item }}</td>
{% endfor %}
</tr>
{% endfor %}
</table>
Now the Django documentation states that when it sees a . in variables,
it tries several things to get the data, one of which is dictionary lookup, which is exactly what I want, but that doesn't seem to happen...
A: Or you can use the default Django machinery that resolves attributes in templates, like this:
from django.template import Variable, VariableDoesNotExist
@register.filter
def hash(object, attr):
    pseudo_context = { 'object' : object }
    try:
        value = Variable('object.%s' % attr).resolve(pseudo_context)
    except VariableDoesNotExist:
        value = None
    return value
That just works
in your template :
{{ user|hash:item }}
A: @Dave Webb (I haven't been rated high enough to comment yet)
The dot lookups can be summarized like this: when the template system encounters a dot in a variable name, it tries the following lookups, in this order:
* Dictionary lookup (e.g., foo["bar"])
* Attribute lookup (e.g., foo.bar)
* Method call (e.g., foo.bar())
* List-index lookup (e.g., foo[bar])
The system uses the first lookup type that works. It’s short-circuit logic.
A: I found a "nicer"/"better" solution for getting variables inside
Its not the nicest way, but it works.
You install a custom filter into django which gets the key of your dict as a parameter
To make it work in google app-engine you need to add a file to your main directory,
I called mine django_hack.py which contains this little piece of code
from google.appengine.ext import webapp
register = webapp.template.create_template_register()
def hash(h, key):
    if key in h:
        return h[key]
    else:
        return None

register.filter(hash)
Now that we have this file, all we need to do is tell the app-engine to use it...
we do that by adding this little line to your main file
webapp.template.register_template_library('django_hack')
and in your template view add this template instead of the usual code
{{ user|hash:item }}
And its should work perfectly =)
A: As a replacement for k,v in user.items on Google App Engine using Django templates, where user = {'a': 1, 'b': 2, 'c': 3}
{% for pair in user.items %}
{% for keyval in pair %} {{ keyval }}{% endfor %}<br>
{% endfor %}
a 1
b 2
c 3
pair = (key, value) for each dictionary item.
A: I'm assuming that the part that doesn't work is {{ user.item }}.
Django will be trying a dictionary lookup, but using the string "item" and not the value of the item loop variable. Django did the same thing when it resolved {{ user.name }} to the name attribute of the user object, rather than looking for a variable called name.
I think you will need to do some preprocessing of the data in your view before you render it in your template.
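For example, a possible shape for that preprocessing (the names here are illustrative, not from the question):
def build_table(result):
    # Turn result.users into rows whose values follow the order of result.items,
    # so the template only needs a simple inner loop over user.values.
    rows = []
    for user in result['users']:
        values = [user.get(item) for item in result['items']]
        rows.append({'name': user['name'], 'values': values})
    return {'items': result['items'], 'users': rows}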
A: shouldn't this:
{{ user.item }}
be this?
{{ item }}
there is no user object in the context within that loop....?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38"
} |
Q: I don't understand std::tr1::unordered_map I need an associative container that makes me index a certain object through a string, but that also keeps the order of insertion, so I can look for a specific object by its name or just iterate on it and retrieve objects in the same order I inserted them.
I think this hybrid of linked list and hash map should do the job, but before that I tried to use std::tr1::unordered_map, thinking that it worked in the way I described, but it didn't. So could someone explain the meaning and behavior of unordered_map?
@wesc: I'm sure std::map is implemented by the STL, while I'm sure std::hash_map is NOT in the STL (I think older versions of Visual Studio put it in a namespace called stdext).
@cristopher: so, if I get it right, the difference is in the implementation (and thus performance), not in the way it behaves externally.
A: You need to index an associative container two ways:
*
*Insertion order
*String comparison
Try Boost.MultiIndex or Boost.Intrusive. I haven't used it this way but I think it's possible.
A: Boost documentation of unordered containers
The difference is in the method of how you generate the look up.
In the map/set containers the operator< is used to generate an ordered tree.
In the unordered containers, an operator( key ) => index is used.
See hashing for a description of how that works.
A: Sorry, read your last comment wrong. Yes, hash_map is not in STL, map is. But unordered_map and hash_map are the same from what I've been reading.
map -> log (n) insertion, retrieval, iteration is efficient (and ordered by key comparison)
hash_map/unordered_map -> constant time insertion and retrieval, iteration time is not guaranteed to be efficient
Neither of these will work for you by themselves, since the map orders things based on the key content, and not the insertion sequence (unless your key contains info about the insertion sequence in it).
You'll have to do either what you described (list + hash_map), or create a key type that has the insertion sequence number plus an appropriate comparison function.
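A minimal sketch of the first option, the list plus hash map combination (int values are used for brevity; this is an illustration, not a complete container):
#include <list>
#include <string>
#include <utility>
#include <tr1/unordered_map>

// The list preserves insertion order; the map points into the list for
// fast lookup by name.
struct OrderedIndex {
    typedef std::list<std::pair<std::string, int> > ItemList;
    ItemList items;                                          // insertion order
    std::tr1::unordered_map<std::string, ItemList::iterator> byName;

    void insert(const std::string& key, int value) {
        items.push_back(std::make_pair(key, value));
        byName[key] = --items.end();
    }
    int* find(const std::string& key) {
        std::tr1::unordered_map<std::string, ItemList::iterator>::iterator
            it = byName.find(key);
        return it == byName.end() ? 0 : &it->second->second;
    }
};
Iterating over items gives back the entries in insertion order, while byName gives roughly constant-time lookup by key.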
A: You've asked for the canonical reason why Boost::MultiIndex was made: list insertion order with fast lookup by key. Boost MultiIndex tutorial: list fast lookup
A: I think that an unordered_map and hash_map are more or less the same thing. The difference is that the STL doesn't officially have a hash_map (what you're using is probably a compiler specific thing), so unordered_map is the fix for that omission.
unordered_map is just that... unordered. You can't depend on it preserving any ordering on iteration.
A: You sure that std::hash_map exists in all STL implementations? SGI STL implements it, however GNU g++ doesn't have it (it's located in the __gnu_cxx namespace) as of 4.3.1 anyway. As far as I know, hash_map has always been non-standard, and now tr1 is fixing that.
A: @wesc: STL has std::map... so what's the difference with unordered_map? I don't think STL would implement twice the same thing and call it differently.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do you full-text search multiple criteria on left-joined tables in SQL Server? I have a query that originally looks like this:
select c.Id, c.Name, c.CountryCode, c.CustomerNumber, cacc.AccountNumber, ca.Line1, ca.CityName, ca.PostalCode
from dbo.Customer as c
left join dbo.CustomerAddress as ca on ca.CustomerId = c.Id
left join dbo.CustomerAccount as cacc on cacc.CustomerId = c.Id
where c.CountryCode = 'XX' and (cacc.AccountNumber like '%C17%' or c.Name like '%op%'
or ca.Line1 like '%ae%' or ca.CityName like '%ab%' or ca.PostalCode like '%10%')
On a database with 90,000 records this query takes around 7 seconds to execute (obviously all the joins and likes are taxing).
I have been trying to find a way to bring the query execution time down with full-text search on the columns concerned. However, I haven't seen an example of a full-text search that has three table joins like this, especially since my join condition is not part of the search term.
Is there a way to do this in full-text search?
@David
Yep, there are indexes on the Ids.
I've tried adding indexes on the CustomerAddress stuff (CityName, PostalCode, etc.) and it brought down the query to 3 seconds, but I still find that too slow for something like this.
Note that all of the text fields (with the exception of the ids) are nvarchars, and Line1 is an nvarchar 1000, so that might affect the speed, but still.
A: NOTE: This isn't really an answer, just an attempt to clarify what might actually be causing the performance problem(s).
90,000 records is really a fairly small data set and the query is relatively simple with just two joins. Do you have indexes on CustomerAddress.CustomerId and CustomerAccount.CustomerId? That seems more likely to be causing performance issues than the where condition LIKE predicates. Are you typically searching to find a match on all of those columns at the same time?
A: I would echo David's suggestion. You'd probably want to examine how the RDBMS is executing your query (e.g., via table scans or using indexes).
One quick check would be to time just the part of the query involving the text search. Something like this:
SELECT ca.Line1, ca.CityName, ca.PostalCode
FROM CustomerAddress as ca
WHERE ca.CustomerId = <some id number>
AND (ca.Line1 LIKE '%ae%' OR ca.CityName LIKE '%ab%' OR ca.PostalCode LIKE '%10%');
If that takes a long time, then the LIKEs are the issue (remove one expression at a time from the ORed line to see if just one of those columns is causing the slowdown). If it's quick, then the joins are suspect.
You could write a similar query for the CustomerAccount table as well.
A: Run it through the query analyzer and see what the query plan is. My guess would be that the double-wildcard (i.e., %ae%) searches are causing it to do a table scan when looking for the matching rows. Searches with a leading wildcard are inherently slow, as you usually can't use any kind of index to match them.
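To illustrate the difference using one of the tables from the question:
-- Leading wildcard: no index on CityName can be used for a seek, so every row is scanned.
SELECT ca.CustomerId FROM dbo.CustomerAddress AS ca WHERE ca.CityName LIKE '%ab%';

-- Prefix-only pattern: an index on CityName can satisfy this with a range seek.
SELECT ca.CustomerId FROM dbo.CustomerAddress AS ca WHERE ca.CityName LIKE 'ab%';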
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Add predefined data for typedef enums in C What is the best approach to define additional data for typedef enums in C?
Example:
typedef enum {
kVizsla = 0,
kTerrier = 3,
kYellowLab = 10
} DogType;
Now I would like to define names for each, for example kVizsla should be "vizsla".
I currently use a function that returns a string using a large switch block.
A: A perfect fit for X() macros. These types of macros can use the C preprocessor to construct enums and arrays from the same source. You only need to add new data to the #define containing the X() macro.
Your example can be written as follows:
// All dog data goes in this list
#define XDOGTYPE \
X(kVizsla,0,"vizsla") \
X(kTerrier,3,"terrier") \
X(kYellowLab,10,"yellowlab")
// Dog info
typedef struct {
int val; // Defined value
char * desc; // Text description
} DogType;
// Build an array index using the Names
typedef enum {
#define X(Name,Val,Text) Name,
XDOGTYPE
#undef X
MAXDOGS
} DogIndex;
// Build a lookup table of values
DogType Dog[] = {
#define X(Name,Val,Text) {Val,Text},
XDOGTYPE
#undef X
};
// Access the values
for (i=0; i < MAXDOGS; i++)
printf("%d: %s\n",Dog[i].val,Dog[i].desc);
A: @dmckee: I think the suggested solution is good, but for simple data (e.g. if only the name is needed) it could be augmented with auto-generated code. While there are lots of ways to auto-generate code, for something as simple as this I believe you could write a simple XSLT that takes in an XML representation of the enum and outputs the code file.
The XML would be of the form:
<EnumsDefinition>
<Enum name="DogType">
<Value name="Vizsla" value="0" />
<Value name="Terrier" value="3" />
<Value name="YellowLab" value="10" />
</Enum>
</EnumsDefinition>
and the resulting code would be something similar to what dmckee suggested in his solution.
For information on how to write such an XSLT, try here or just search Google for a tutorial that fits. Writing XSLT is not much fun IMO, but it's not that bad either, at least for relatively simple tasks such as these.
A: If your enumerated values are dense enough, you can define an array to hold the strings and just look them up (use NULL for any skipped value and add a special case handler on your lookup routine).
char *DogList[] = {
"vizsla", /* element 0 */
NULL,
NULL,
NULL,
"terrier", /* element 3 */
...
};
This is inefficient for sparse enumerations.
Even if the enumeration is not dense, you can use an array of structs to hold the mapping.
typedef struct DogMaps {
DogType index;
char * name;
} DogMapt;
DogMapt DogMap[] = {
{kVizsla, "vizsla"},
{kTerrier, "terrier"},
{kYellowLab, "yellow lab"},
{0, NULL} /* sentinel */
};
The second approach is very flexible, but it does mean a search through the mapping every time you need to use the data. For large data sets consider a b-tree or hash instead of an array.
Either method can be generalized to connect more data. In the first use an array of structs, in the second just add more members to the struct.
You will, of course, want to write various handlers to simplify your interaction with these data structures.
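For instance, one such handler might be a lookup that walks the table until it hits the NULL-name sentinel (a sketch based on the DogMap above):
/* Return the descriptive name for a DogType, or NULL if it is not in the map. */
const char *dog_name(DogType t)
{
    int i;
    for (i = 0; DogMap[i].name != NULL; i++)
        if (DogMap[i].index == t)
            return DogMap[i].name;
    return NULL;
}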
@Hershi By all means, separate code and data. The above examples are meant to be clear rather than functional.
I blush to admit that I still use whitespace separated flat files for that purpose, rather than the kind of structured input you exhibit, but my production code would read as much of the data from external sources as possible.
Wait, I see that you mean code generation.
Sure. Nothing wrong with that.
I suspect, though, that the OP was interested in what the generated code should look like...
A: That's kind of an open ended question, but one suggestion would be to use a map with the enum as the key type and the extra information in the value. (If your indices are continuous, unlike the example, you can use a sequence container instead of a map).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Embed Images in emails created using SQL Server Database Mail I'm working on an email solution in SQL Server ONLY that will use Database Mail to send out HTML formatted emails. The catch is that the images in the HTML need to be embedded in the outgoing email. This wouldn't be a problem if I were using a .net app to generate & send the emails but, unfortunately, all I have is SQL Server.
Is it possible for SQL Server to embed images on its own?
A: You have two possibilities:
*
*(easy) Host the images somewhere, and reference them in the <img src="...">.
*(difficult) Encode them in Base64 and build a multipart MIME message with known content IDs, so they can be referenced in the message body via cid: URIs.
Each possibility has its downsides:
*
*Remote images may not be loaded by modern e-mail clients for privacy reasons.
*Probably raises the spam score.
When the receiving clients are in your control (e.g. same organization), you might be equally fine with either way.
A: You could try to encode the image as Base64 and reference it directly in an img tag within the email ( <img src="data:image/png;base64,[your encoded image here...]"> ), but I think most email clients associate this technique with spam. I think you're better off referencing hosted images or simply attaching the image to the email.
A: Yes, what you need to do is include the images as attachments and then they can be referenced within the HTML.
Use the @file_attachments parameter of sp_send_dbmail.
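A hedged sketch of what that can look like (the profile name, recipient, and file path are made up; whether a given mail client resolves the cid: reference against a plain attachment varies):
EXEC msdb.dbo.sp_send_dbmail
    @profile_name     = 'MyMailProfile',
    @recipients       = 'someone@example.com',
    @subject          = 'Report with embedded image',
    @body             = N'<html><body><p>Chart:</p><img src="cid:chart.png" /></body></html>',
    @body_format      = 'HTML',
    @file_attachments = N'C:\images\chart.png';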
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Rolling back bad changes with svn in Eclipse Let's say I have committed some bad changes to Subversion repository. Then I commit good changes, that I want to keep.
What would be the easiest way to roll back those bad changes in Eclipse and keep the good changes? Assume that the files relating to the bad changes are not the same as those relating to the good changes. How do things change if the good changes were made to the same files as the bad changes?
I am mostly looking a way to do this via Eclipse plugins (Subclipse or Subversive) but commandline commands are also interesting.
A: In Eclipse Ganymede (Subclipse)
Select project/file that contains bad change, and from pop-up menu choose:
Team -> Show History
Revisions related to that project/file will be shown in History tab.
Find revision where "bad changes" were committed and from pop-up menu choose:
Revert Changes from Revision X
This will merge the changes in the file(s) modified within the bad revision with the revision prior to the bad revision.
There are two scenarios from here:
*
*If you committed no changes for that file (bad revision is last revision for that file), it will simply remove changes made in bad revision. Those changes are merged to your working copy so you have to commit them.
*If you committed some changes for that file (the bad revision is not the last revision for that file), you will have to manually resolve the conflict. Let's say that you have a file readme.txt, and the bad revision number is 33. Also, you've made another commit for that file in revision 34. After you choose Revert Changes from Revision 33 you will have the following in your working copy:
readme.txt.merge-left.r33 - bad revision
readme.txt.merge-right.r32 - before bad revision
readme.txt.working - working copy version (same as in r34 if you don't have any uncommitted changes)
Original readme.txt will be marked conflicted, and will contain merged version (where changes from bad revision are removed) with some markers (<<<<<<< .working etc). If you just want to remove changes from bad revision and keep changes made after that, then all you have to do is remove markers. Otherwise, you can copy contents from one of 3 files mentioned above to original file. Whatever you choose, when you are done, mark conflict resolved by
Team - Mark Resolved
Temporary files will be removed and your file will be marked changed. As in 1, you have to commit changes.
Note that this does not remove revision from revision history in svn repository. You simply made new revision where changes from bad revision are removed.
A: In Eclipse using Subversive:
Right click your project > Team > Merge
In the merge window, select the revisions you want to revert as normal, but also enable the "Reversed merge" checkbox.
Merge as normal.
A: I have written a couple of blog posts on this subject. One that is Subclipse centric: http://markphip.blogspot.com/2007/01/how-to-undo-commit-in-subversion.html and one that is command-line centric: http://blogs.collab.net/subversion/2007/07/second-chances/
A: You have two choices to do this.
The quick and dirty way is to select your files (using Ctrl) in the Project Explorer view, right-click them, choose Replace with..., and then pick the best option for you, either Latest from Repository or some branch version. After getting those files you modify them (with a space, or fix something, your call) and commit them to create a newer revision.
A cleaner way is to choose Merge in the Team menu and navigate through the wizard, which will help you recover the old version in the current revision.
Both commands have their command-line equivalents: svn revert and svn merge.
A: If you want to do 1 file at a time you can go to the History view for the file assuming you have an Eclipse SVN plugin installed. "Team->Show History"
In the History view, find the last good version of that file, right click and choose "Get Contents". This will replace your current version with that version's contents. Then you can commit the changes when you've fixed it all up.
A: The svnbook has a section on how Subversion allows you to revert the changes from a particular revision without affecting the changes that occured in subsequent revisions:
http://svnbook.red-bean.com/en/1.4/svn.branchmerge.commonuses.html#svn.branchmerge.commonuses.undo
I don't use Eclipse much, but in TortoiseSVN you can do this from the log dialog; simply right-click on the revision you want to revert and select "Revert changes from this revision".
In the case that the files for which you want to revert "bad changes" had "good changes" in subsequent revisions, then the process is the same. The changes from the "bad" revision will be reverted leaving the changes from "good" revisions untouched, however you might get conflicts.
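On the command line the equivalent is a reverse merge; a sketch (the revision number and repository URL are placeholders), assuming Subversion 1.5+ for the -c shorthand (on 1.4 use -r 1234:1233 instead):
# Undo the changes introduced by r1234 in the working copy, then commit the result.
svn merge -c -1234 http://svn.example.com/repo/trunk .
svn commit -m "Revert the bad changes from r1234"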
A: I had the same problem, but the Eclipse CleanUp option didn't work for me.
1) Install TortoiseSVN
2) Go to Windows Explorer and right-click on your project directory
3) Choose the CleanUp option (checking the "break locks" option)
It works.
Hope this helps someone.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "83"
} |
Q: C-like structures in Python Is there a way to conveniently define a C-like structure in Python? I'm tired of writing stuff like:
class MyStruct():
def __init__(self, field1, field2, field3):
self.field1 = field1
self.field2 = field2
self.field3 = field3
A: Perhaps you are looking for Structs without constructors:
class Sample:
name = ''
average = 0.0
values = None # list cannot be initialized here!
s1 = Sample()
s1.name = "sample 1"
s1.values = []
s1.values.append(1)
s1.values.append(2)
s1.values.append(3)
s2 = Sample()
s2.name = "sample 2"
s2.values = []
s2.values.append(4)
for v in s1.values: # prints 1,2,3 --> OK.
print v
print "***"
for v in s2.values: # prints 4 --> OK.
print v
A: You can access a C-style struct in Python in the following way.
class cstruct:
var_i = 0
var_f = 0.0
var_str = ""
if you just want to use an object of cstruct
obj = cstruct()
obj.var_i = 50
obj.var_f = 50.00
obj.var_str = "fifty"
print "cstruct: obj i=%d f=%f s=%s" %(obj.var_i, obj.var_f, obj.var_str)
if you want to create an array of objects of cstruct
obj_array = [cstruct() for i in range(10)]
obj_array[0].var_i = 10
obj_array[0].var_f = 10.00
obj_array[0].var_str = "ten"
#go ahead and fill the rest of the array instances of the struct
#print all the values
for i in range(10):
print "cstruct: obj_array i=%d f=%f s=%s" %(obj_array[i].var_i, obj_array[i].var_f, obj_array[i].var_str)
Note:
instead of 'cstruct' name, please use your struct name
instead of var_i, var_f, var_str, please define your structure's member variable.
A: How about a dictionary?
Something like this:
myStruct = {'field1': 'some val', 'field2': 'some val'}
Then you can use this to manipulate values:
print myStruct['field1']
myStruct['field2'] = 'some other values'
And the values don't have to be strings. They can be pretty much any other object.
A: This might be a bit late but I made a solution using Python Meta-Classes (decorator version below too).
When __init__ is called during run time, it grabs each of the arguments and their value and assigns them as instance variables to your class. This way you can make a struct-like class without having to assign every value manually.
My example has no error checking so it is easier to follow.
class MyStruct(type):
def __call__(cls, *args, **kwargs):
names = cls.__init__.func_code.co_varnames[1:]
self = type.__call__(cls, *args, **kwargs)
for name, value in zip(names, args):
setattr(self , name, value)
for name, value in kwargs.iteritems():
setattr(self , name, value)
return self
Here it is in action.
>>> class MyClass(object):
__metaclass__ = MyStruct
def __init__(self, a, b, c):
pass
>>> my_instance = MyClass(1, 2, 3)
>>> my_instance.a
1
>>>
I posted it on reddit and /u/matchu posted a decorator version which is cleaner. I'd encourage you to use it unless you want to expand the metaclass version.
>>> def init_all_args(fn):
@wraps(fn)
def wrapped_init(self, *args, **kwargs):
names = fn.func_code.co_varnames[1:]
for name, value in zip(names, args):
setattr(self, name, value)
for name, value in kwargs.iteritems():
setattr(self, name, value)
return wrapped_init
>>> class Test(object):
@init_all_args
def __init__(self, a, b):
pass
>>> a = Test(1, 2)
>>> a.a
1
>>>
A: Update: Data Classes
With the introduction of Data Classes in Python 3.7 we get very close.
The following example is similar to the NamedTuple example below, but the resulting object is mutable and it allows for default values.
from dataclasses import dataclass
@dataclass
class Point:
x: float
y: float
z: float = 0.0
p = Point(1.5, 2.5)
print(p) # Point(x=1.5, y=2.5, z=0.0)
This plays nicely with the new typing module in case you want to use more specific type annotations.
I've been waiting desperately for this! If you ask me, Data Classes and the new NamedTuple declaration, combined with the typing module are a godsend!
Improved NamedTuple declaration
Since Python 3.6 it became quite simple and beautiful (IMHO), as long as you can live with immutability.
A new way of declaring NamedTuples was introduced, which allows for type annotations as well:
from typing import NamedTuple
class User(NamedTuple):
name: str
class MyStruct(NamedTuple):
foo: str
bar: int
baz: list
qux: User
my_item = MyStruct('foo', 0, ['baz'], User('peter'))
print(my_item) # MyStruct(foo='foo', bar=0, baz=['baz'], qux=User(name='peter'))
A: I wrote a decorator which you can use on any method to make it so that all of the arguments passed in, or any defaults, are assigned to the instance.
def argumentsToAttributes(method):
argumentNames = method.func_code.co_varnames[1:]
# Generate a dictionary of default values:
defaultsDict = {}
defaults = method.func_defaults if method.func_defaults else ()
for i, default in enumerate(defaults, start = len(argumentNames) - len(defaults)):
defaultsDict[argumentNames[i]] = default
def newMethod(self, *args, **kwargs):
# Use the positional arguments.
for name, value in zip(argumentNames, args):
setattr(self, name, value)
# Add the key word arguments. If anything is missing, use the default.
for name in argumentNames[len(args):]:
setattr(self, name, kwargs.get(name, defaultsDict[name]))
# Run whatever else the method needs to do.
method(self, *args, **kwargs)
return newMethod
A quick demonstration. Note that I use a positional argument a, use the default value for b, and a named argument c. I then print all 3 referencing self, to show that they've been properly assigned before the method is entered.
class A(object):
@argumentsToAttributes
def __init__(self, a, b = 'Invisible', c = 'Hello'):
print(self.a)
print(self.b)
print(self.c)
A('Why', c = 'Nothing')
Note that my decorator should work with any method, not just __init__.
A: I don't see this answer here, so I figure I'll add it since I'm learning Python right now and just discovered it. The Python tutorial (Python 2 in this case) gives the following simple and effective example:
class Employee:
pass
john = Employee() # Create an empty employee record
# Fill the fields of the record
john.name = 'John Doe'
john.dept = 'computer lab'
john.salary = 1000
That is, an empty class object is created, then instantiated, and the fields are added dynamically.
The upside to this is that it's really simple. The downside is it isn't particularly self-documenting (the intended members aren't listed anywhere in the class "definition"), and unset fields can cause problems when accessed. Those two problems can be solved by:
class Employee:
def __init__ (self):
self.name = None # or whatever
self.dept = None
self.salary = None
Now at a glance you can at least see what fields the program will be expecting.
Both are prone to typos, john.slarly = 1000 will succeed. Still, it works.
A: Here is a solution which uses a class (never instantiated) to hold data. I like that this way involves very little typing and does not require any additional packages etc.
class myStruct:
field1 = "one"
field2 = "2"
You can add more fields later, as needed:
myStruct.field3 = 3
To get the values, the fields are accessed as usual:
>>> myStruct.field1
'one'
A: Use a named tuple, which was added to the collections module in the standard library in Python 2.6. It's also possible to use Raymond Hettinger's named tuple recipe if you need to support Python 2.4.
It's nice for your basic example, but also covers a bunch of edge cases you might run into later as well. Your fragment above would be written as:
from collections import namedtuple
MyStruct = namedtuple("MyStruct", "field1 field2 field3")
The newly created type can be used like this:
m = MyStruct("foo", "bar", "baz")
You can also use named arguments:
m = MyStruct(field1="foo", field2="bar", field3="baz")
A: Personally, I like this variant too. It extends @dF's answer.
class struct:
def __init__(self, *sequential, **named):
fields = dict(zip(sequential, [None]*len(sequential)), **named)
self.__dict__.update(fields)
def __repr__(self):
return str(self.__dict__)
It supports two modes of initialization (that can be blended):
# Struct with field1, field2, field3 that are initialized to None.
mystruct1 = struct("field1", "field2", "field3")
# Struct with field1, field2, field3 that are initialized according to arguments.
mystruct2 = struct(field1=1, field2=2, field3=3)
Also, it prints nicer:
print(mystruct2)
# Prints: {'field3': 3, 'field1': 1, 'field2': 2}
A: There is a Python package for exactly this purpose: see cstruct2py
cstruct2py is a pure Python library for generating Python classes from C code and using them to pack and unpack data. The library can parse C headers (struct, union, enum, and array declarations) and emulate them in Python. The generated Pythonic classes can parse and pack the data.
For example:
typedef struct {
int x;
int y;
} Point;
after generating pythonic class...
p = Point(x=0x1234, y=0x5678)
p.packed == "\x34\x12\x00\x00\x78\x56\x00\x00"
How to use
First we need to generate the pythonic structs:
import cstruct2py
parser = cstruct2py.c2py.Parser()
parser.parse_file('examples/example.h')
Now we can import all names from the C code:
parser.update_globals(globals())
We can also do that directly:
A = parser.parse_string('struct A { int x; int y;};')
Using types and defines from the C code
a = A()
a.x = 45
print a
buf = a.packed
b = A(buf)
print b
c = A('aaaa11112222', 2)
print c
print repr(c)
The output will be:
{'x':0x2d, 'y':0x0}
{'x':0x2d, 'y':0x0}
{'x':0x31316161, 'y':0x32323131}
A('aa111122', x=0x31316161, y=0x32323131)
Clone
For clone cstruct2py run:
git clone https://github.com/st0ky/cstruct2py.git --recursive
A: Here is a quick and dirty trick:
>>> ms = Warning()
>>> ms.foo = 123
>>> ms.bar = 'akafrit'
How does it work? It just reuses the built-in class Warning (derived from Exception) and uses it as if it were your own defined class.
The good points are that you do not need to import or define anything first, that "Warning" is a short name, and that it also makes clear you are doing something dirty which should not be used elsewhere than a small script of yours.
By the way, I tried to find something even simpler, like ms = object(), but could not (this last example does not work). If you have one, I am interested.
A: NamedTuple is comfortable. but there no one shares the performance and storage.
from typing import NamedTuple
import guppy # pip install guppy
import timeit
class User:
def __init__(self, name: str, uid: int):
self.name = name
self.uid = uid
class UserSlot:
__slots__ = ('name', 'uid')
def __init__(self, name: str, uid: int):
self.name = name
self.uid = uid
class UserTuple(NamedTuple):
# __slots__ = () # AttributeError: Cannot overwrite NamedTuple attribute __slots__
name: str
uid: int
def get_fn(obj, attr_name: str):
def get():
getattr(obj, attr_name)
return get
if 'memory test':
obj = [User('Carson', 1) for _ in range(1000000)] # Cumulative: 189138883
obj_slot = [UserSlot('Carson', 1) for _ in range(1000000)] # 77718299 <-- winner
obj_namedtuple = [UserTuple('Carson', 1) for _ in range(1000000)] # 85718297
print(guppy.hpy().heap()) # Run this function individually.
"""
Index Count % Size % Cumulative % Kind (class / dict of class)
0 1000000 24 112000000 34 112000000 34 dict of __main__.User
1 1000000 24 64000000 19 176000000 53 __main__.UserTuple
2 1000000 24 56000000 17 232000000 70 __main__.User
3 1000000 24 56000000 17 288000000 87 __main__.UserSlot
...
"""
if 'performance test':
obj = User('Carson', 1)
obj_slot = UserSlot('Carson', 1)
obj_tuple = UserTuple('Carson', 1)
time_normal = min(timeit.repeat(get_fn(obj, 'name'), repeat=20))
print(time_normal) # 0.12550550000000005
time_slot = min(timeit.repeat(get_fn(obj_slot, 'name'), repeat=20))
print(time_slot) # 0.1368690000000008
time_tuple = min(timeit.repeat(get_fn(obj_tuple, 'name'), repeat=20))
print(time_tuple) # 0.16006120000000124
print(time_tuple/time_slot) # 1.1694481584580898 # The slot is almost 17% faster than NamedTuple on Windows. (Python 3.7.7)
If you are not using __dict__, choose between __slots__ (higher performance and storage efficiency) and NamedTuple (clearer to read and use).
You can review this link (Usage of slots) to get more information about __slots__.
A:
dF: that's pretty cool... I didn't
know that I could access the fields in
a class using dict.
Mark: the situations that I wish I had
this are precisely when I want a tuple
but nothing as "heavy" as a
dictionary.
You can access the fields of a class using a dictionary because the fields of a class, its methods and all its properties are stored internally using dicts (at least in CPython).
...Which leads us to your second comment. Believing that Python dicts are "heavy" is an extremely non-pythonistic concept. And reading such comments kills my Python Zen. That's not good.
You see, when you declare a class you are actually creating a pretty complex wrapper around a dictionary - so, if anything, you are adding more overhead than by using a simple dictionary. An overhead which, by the way, is meaningless in any case. If you are working on performance critical applications, use C or something.
A: I would also like to add a solution that uses slots:
class Point:
__slots__ = ["x", "y"]
def __init__(self, x, y):
self.x = x
self.y = y
Definitely check the documentation for slots but a quick explanation of slots is that it is python's way of saying: "If you can lock these attributes and only these attributes into the class such that you commit that you will not add any new attributes once the class is instantiated (yes you can add new attributes to a class instance, see example below) then I will do away with the large memory allocation that allows for adding new attributes to a class instance and use just what I need for these slotted attributes".
Example of adding attributes to class instance (thus not using slots):
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
p1 = Point(3,5)
p1.z = 8
print(p1.z)
Output: 8
Example of trying to add attributes to class instance where slots was used:
class Point:
__slots__ = ["x", "y"]
def __init__(self, x, y):
self.x = x
self.y = y
p1 = Point(3,5)
p1.z = 8
Output: AttributeError: 'Point' object has no attribute 'z'
This can effectively work as a struct and uses less memory than a class (like a struct would, although I have not researched exactly how much). It is recommended to use slots if you will be creating a large number of instances of the object and do not need to add attributes. A point object is a good example of this, as it is likely that one may instantiate many points to describe a dataset.
A: You can subclass the C structure that is available in the standard library. The ctypes module provides a Structure class. The example from the docs:
>>> from ctypes import *
>>> class POINT(Structure):
... _fields_ = [("x", c_int),
... ("y", c_int)]
...
>>> point = POINT(10, 20)
>>> print point.x, point.y
10 20
>>> point = POINT(y=5)
>>> print point.x, point.y
0 5
>>> POINT(1, 2, 3)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
ValueError: too many initializers
>>>
>>> class RECT(Structure):
... _fields_ = [("upperleft", POINT),
... ("lowerright", POINT)]
...
>>> rc = RECT(point)
>>> print rc.upperleft.x, rc.upperleft.y
0 5
>>> print rc.lowerright.x, rc.lowerright.y
0 0
>>>
A: https://stackoverflow.com/a/32448434/159695 does not work in Python3.
https://stackoverflow.com/a/35993/159695 works in Python3.
And I extends it to add default values.
class myStruct:
def __init__(self, **kwds):
self.x=0
self.__dict__.update(kwds) # Must be last to accept assigned member variable.
def __repr__(self):
args = ['%s=%s' % (k, repr(v)) for (k,v) in vars(self).items()]
return '%s(%s)' % ( self.__class__.__qualname__, ', '.join(args) )
a=myStruct()
b=myStruct(x=3,y='test')
c=myStruct(x='str')
>>> a
myStruct(x=0)
>>> b
myStruct(x=3, y='test')
>>> c
myStruct(x='str')
A: The following solution to a struct is inspired by the namedtuple implementation and some of the previous answers. However, unlike the namedtuple it is mutable, in it's values, but like the c-style struct immutable in the names/attributes, which a normal class or dict isn't.
_class_template = """\
class {typename}:
def __init__(self, *args, **kwargs):
fields = {field_names!r}
for x in fields:
setattr(self, x, None)
for name, value in zip(fields, args):
setattr(self, name, value)
for name, value in kwargs.items():
setattr(self, name, value)
def __repr__(self):
return str(vars(self))
def __setattr__(self, name, value):
if name not in {field_names!r}:
raise KeyError("invalid name: %s" % name)
object.__setattr__(self, name, value)
"""
def struct(typename, field_names):
class_definition = _class_template.format(
typename = typename,
field_names = field_names)
namespace = dict(__name__='struct_%s' % typename)
exec(class_definition, namespace)
result = namespace[typename]
result._source = class_definition
return result
Usage:
Person = struct('Person', ['firstname','lastname'])
generic = Person()
michael = Person('Michael')
jones = Person(lastname = 'Jones')
In [168]: michael.middlename = 'ben'
Traceback (most recent call last):
File "<ipython-input-168-b31c393c0d67>", line 1, in <module>
michael.middlename = 'ben'
File "<string>", line 19, in __setattr__
KeyError: 'invalid name: middlename'
A: You can also pass the init parameters to the instance variables by position
# Abstract struct class
class Struct:
def __init__ (self, *argv, **argd):
if len(argd):
# Update by dictionary
self.__dict__.update (argd)
else:
# Update by position
attrs = filter (lambda x: x[0:2] != "__", dir(self))
for n in range(len(argv)):
setattr(self, attrs[n], argv[n])
# Specific class
class Point3dStruct (Struct):
x = 0
y = 0
z = 0
pt1 = Point3dStruct()
pt1.x = 10
print pt1.x
print "-"*10
pt2 = Point3dStruct(5, 6)
print pt2.x, pt2.y
print "-"*10
pt3 = Point3dStruct (x=1, y=2, z=3)
print pt3.x, pt3.y, pt3.z
print "-"*10
A: Whenever I need an "instant data object that also behaves like a dictionary" (I don't think of C structs!), I think of this cute hack:
class Map(dict):
def __init__(self, **kwargs):
super(Map, self).__init__(**kwargs)
self.__dict__ = self
Now you can just say:
struct = Map(field1='foo', field2='bar', field3=42)
self.assertEquals('bar', struct.field2)
self.assertEquals(42, struct['field3'])
Perfectly handy for those times when you need a "data bag that's NOT a class", and for when namedtuples are incomprehensible...
A: You can use a tuple for a lot of things where you would use a struct in C (something like x,y coordinates or RGB colors for example).
For everything else you can use dictionary, or a utility class like this one:
>>> class Bunch:
... def __init__(self, **kwds):
... self.__dict__.update(kwds)
...
>>> mystruct = Bunch(field1=value1, field2=value2)
I think the "definitive" discussion is here, in the published version of the Python Cookbook.
A: Some of the answers here are massively elaborate. The simplest option I've found is (from: http://norvig.com/python-iaq.html):
class Struct:
"A structure that can have any fields defined."
def __init__(self, **entries): self.__dict__.update(entries)
Initialising:
>>> options = Struct(answer=42, linelen=80, font='courier')
>>> options.answer
42
adding more:
>>> options.cat = "dog"
>>> options.cat
dog
edit: Sorry didn't see this example already further down.
A: If you don't have a 3.7 for @dataclass and need mutability, the following code might work for you. It's quite self-documenting and IDE-friendly (auto-complete), prevents writing things twice, is easily extendable and it is very simple to test that all instance variables are completely initialized:
class Params():
def __init__(self):
self.var1 : int = None
self.var2 : str = None
def are_all_defined(self):
for key, value in self.__dict__.items():
assert (value is not None), "instance variable {} is still None".format(key)
return True
params = Params()
params.var1 = 2
params.var2 = 'hello'
assert(params.are_all_defined)
A: The best way I found to do this was to use a custom dictionary class as explained in this post: https://stackoverflow.com/a/14620633/8484485
If iPython autocompletion support is needed, simply define the dir() function like this:
class AttrDict(dict):
def __init__(self, *args, **kwargs):
super(AttrDict, self).__init__(*args, **kwargs)
self.__dict__ = self
def __dir__(self):
return self.keys()
You then define your pseudo struct like so: (this one is nested)
my_struct=AttrDict ({
'com1':AttrDict ({
'inst':[0x05],
'numbytes':2,
'canpayload':False,
'payload':None
})
})
You can then access the values inside my_struct like this:
print(my_struct.com1.inst)
=>[5]
A: The cleanest way I can think of is to use a class decorator that lets you declare a static class and rewrite it to act as a struct with normal, named properties:
from as_struct import struct
@struct
class Product():
name = 'unknown product'
quantity = -1
sku = '-'
# create instance
p = Product('plush toy', sku='12-345-6789')
# check content:
p.name # plush toy
p.quantity # -1
p.sku # 12-345-6789
Using the following decorator code:
def struct(struct_class):
# create a new init
def struct_init(self, *args, **kwargs):
i = 0 # we really don't need enumerate() here...
for value in args:
name = member_names[i]
default_value = member_values[i]
setattr(self, name, value if value is not None else default_value)
i += 1 # ...we just need to inc an int
for key,value in kwargs.items():
i = member_names.index(key)
default_value = member_values[i]
setattr(self, key, value if value is not None else default_value)
# extract the struct members
member_names = []
member_values = []
for attr_name in dir(struct_class):
if not attr_name.startswith('_'):
value = getattr(struct_class, attr_name)
if not callable(value):
member_names.append(attr_name)
member_values.append(value)
# rebind and return
struct_class.__init__ = struct_init
return struct_class
Which works by taking the class, extracting the field names and their default values, then rewriting the class's __init__ function to set self attributes based on knowing which argument index maps to which property name.
A: I think a plain Python dictionary is suitable for this requirement.
d = {}
d['field1'] = field1
d['field2'] = field2
d['field3'] = field3
A: Extending @gz.'s (generally superior to this one) answer, for a quick and dirty namedtuple structure we can do:
import collections
x = collections.namedtuple('foobar', 'foo bar')(foo=1,bar=2)
y = collections.namedtuple('foobar', 'foo bar')(foo=3,bar=4)
print(x,y)
>foobar(foo=1, bar=2) foobar(foo=3, bar=4)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "642"
} |
Q: Altering database tables in Django I'm considering using Django for a project I'm starting (fyi, a browser-based game) and one of the features I'm liking the most is using syncdb to automatically create the database tables based on the Django models I define (a feature that I can't seem to find in any other framework).
I was already thinking this was too good to be true when I saw this in the documentation:
Syncdb will not alter existing tables
syncdb will only create tables for models which have not yet been installed. It will never issue ALTER TABLE statements to match changes made to a model class after installation. Changes to model classes and database schemas often involve some form of ambiguity and, in those cases, Django would have to guess at the correct changes to make. There is a risk that critical data would be lost in the process.
If you have made changes to a model and wish to alter the database tables to match, use the sql command to display the new SQL structure and compare that to your existing table schema to work out the changes.
It seems that altering existing tables will have to be done "by hand".
What I would like to know is the best way to do this. Two solutions come to mind:
*
*As the documentation suggests, make the changes manually in the DB;
*Do a backup of the database, wipe it, create the tables again (with syncdb, since now it's creating the tables from scratch) and import the backed-up data (this might take too long if the database is big)
Any ideas?
A: One good way to do this is via fixtures, particularly the initial_data fixtures.
A fixture is a collection of files that contain the serialized contents of the database. So it's like having a backup of the database but as it's something Django is aware of it's easier to use and will have additional benefits when you come to do things like unit testing.
You can create a fixture from the data currently in your DB using django-admin.py dumpdata. By default the data is in JSON format, but other options such as XML are available. A good place to store fixtures is a fixtures sub-directory of your application directories.
You can load a fixture using django-admin.py loaddata but, more significantly, if your fixture has a name like initial_data.json it will be automatically loaded when you do a syncdb, saving the trouble of importing it yourself.
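For example, to capture the current data for a hypothetical "game" app into the automatically loaded fixture:
django-admin.py dumpdata game > game/fixtures/initial_data.json
# ...wipe and recreate the tables, then:
django-admin.py syncdb   # loads initial_data.json as part of table creation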
Another benefit is that when you run manage.py test to run your Unit Tests the temporary test database will also have the Initial Data Fixture loaded.
Of course, this will work when you're adding attributes to models and columns to the DB. If you drop a column from the database you'll need to update your fixture to remove the data for that column, which might not be straightforward.
This works best when doing lots of little database changes during development. For updating production DBs a manually generated SQL script can often work best.
A: Manually doing the SQL changes and dump/reload are both options, but you may also want to check out some of the schema-evolution packages for Django. The most mature options are django-evolution and South.
EDIT: And hey, here comes dmigrations.
UPDATE: Since this answer was originally written, django-evolution and dmigrations have both ceased active development and South has become the de-facto standard for schema migration in Django. Parts of South may even be integrated into Django within the next release or two.
UPDATE: A schema-migrations framework based on South (and authored by Andrew Godwin, author of South) is included in Django 1.7+.
A: I've been using django-evolution. Caveats include:
*
*Its automatic suggestions have been uniformly rotten; and
*Its fingerprint function returns different values for the same database on different platforms.
That said, I find the custom schema_evolution.py approach handy. To work around the fingerprint problem, I suggest code like:
BEFORE = 'fv1:-436177719' # first fingerprint
BEFORE64 = 'fv1:-108578349625146375' # same, but on 64-bit Linux
AFTER = 'fv1:-2132605944'
AFTER64 = 'fv1:-3559032165562222486'
fingerprints = [
BEFORE, AFTER,
BEFORE64, AFTER64,
]
CHANGESQL = """
/* put your SQL code to make the changes here */
"""
evolutions = [
((BEFORE, AFTER), CHANGESQL),
((BEFORE64, AFTER64), CHANGESQL)
]
If I had more fingerprints and changes, I'd re-factor it. Until then, making it cleaner would be stealing development time from something else.
EDIT: Given that I'm manually constructing my changes anyway, I'll try dmigrations next time.
A: django-command-extensions is a django library that gives some extra commands to manage.py. One of them is sqldiff, which should give you the sql needed to update to your new model. It is, however, listed as 'very experimental'.
A: So far in my company we have used the manual approach. What works best for you depends very much on your development style.
We generally do not have many schema changes on production systems and we have somewhat formalized rollouts from development to production servers. Whenever we roll out (10-20 times a year) we do a full diff of the current and the upcoming production branch, reviewing all the code and noting what has to be changed on the production server. The required changes might be additional dependencies, changes to the settings file and changes to the database.
This works very well for us. Having it all automated is a nice vision but too difficult for us; maybe we could manage migrations, but we would still need to handle additional library, server, and other dependencies.
A: Django 1.7 (currently in development) is adding native support for schema migration with manage.py migrate and manage.py makemigrations (migrate deprecates syncdb).
A: As noted in other answers to the same topic, be sure to watch the DjangoCon 2008 Schema Evolution Panel on YouTube.
Also, two new projects on the map: Simplemigrations and Migratory.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62"
} |
Q: How do I execute a file in Cygwin? How can I execute a.exe using the Cygwin shell?
I created a C file in Eclipse on Windows and then used Cygwin to navigate to the directory. I called gcc on the C source file and a.exe was produced. I would like to run a.exe.
A: ./a.exe at the prompt
A: just type ./a in the shell
A: To execute a file in the current directory, the syntax to use is: ./foo
As mentioned by allain, ./a.exe is the correct way to execute a.exe in the working directory using Cygwin.
Note: You may wish to use the -o parameter to cc to specify your own output filename. An example of this would be: cc helloworld.c -o helloworld.exe.
A: Thomas wrote:
Apparently, gcc doesn't behave like the one described in The C Programming language
It does in general. For your program to run on Windows it needs to end in .exe, "the C Programming language" was not written with Windows programmers in mind. As you've seen, cygwin emulates many, but not all, features of a POSIX environment.
A: gcc under cygwin does not generate a Linux executable output file of type " ELF 32-bit LSB executable," but it generates a windows executable of type "PE32 executable for MS Windows" which has a dependency on cygwin1.dll, so it needs to be run under cygwin shell. If u need to run it under dos prompt independently, they cygwin1.dll needs to be in your Windows PATH.
-AD.
A: You should just be able to call it by typing the file name. You may have to call ./a.exe, as the current directory is usually not on the path for security reasons.
A:
Apparently, gcc doesn't behave like the one described in The C Programming language, where it says that the command cc helloworld.c produces a file called a.out which can be run by typing a.out on the prompt.
Unix systems haven't behaved that way by default (letting you run the executable without ./ in front) in a long time. It's called a.exe because otherwise Windows won't execute it, as it gets file types from the extension.
A: When you start in Cygwin you are in the "/home/Administrator" zone, so put your a.exe file there.
Then at the prompt run:
cd a.exe
It will be read in by Cygwin and you will be asked to install it.
A: Just call it
> a
Make sure it will be found (path).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "49"
} |
Q: SQL Server 2005 Auto Updated DateTime Column - LastUpdated I have a table defined (see code snippet below). How can I add a constraint or whatever so that the LastUpdate column is automatically updated anytime the row is changed?
CREATE TABLE dbo.Profiles
(
UserName varchar(100) NOT NULL,
LastUpdate datetime NOT NULL CONSTRAINT DF_Profiles_LastUpdate DEFAULT (getdate()),
FullName varchar(50) NOT NULL,
Birthdate smalldatetime NULL,
PageSize int NOT NULL CONSTRAINT DF_Profiles_PageSize DEFAULT ((10)),
CONSTRAINT PK_Profiles PRIMARY KEY CLUSTERED (UserName ASC),
CONSTRAINT FK_Profils_Users FOREIGN KEY (UserName) REFERENCES dbo.Users (UserName) ON UPDATE CASCADE ON DELETE CASCADE
)
A: A default constraint only works on inserts; for an update use a trigger.
A: I agree with the trigger idea, although I would use a join to inserted instead of a subquery. However, I want to point out that username is a particularly poor choice for a primary key. Usernames often change and when they do you need to change all related tables. It is far better to have a user id as the key and then put a unique index on username. Then when the user name changes, you don't need to change anything else.
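For illustration, a minimal sketch of that join-to-inserted variant against the dbo.Profiles table from the question (the trigger name is made up):
CREATE TRIGGER trg_Profiles_Touch ON dbo.Profiles
FOR UPDATE, INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Join to the inserted pseudo-table so only the affected rows are touched
    UPDATE p
    SET LastUpdate = GETDATE()
    FROM dbo.Profiles AS p
    INNER JOIN inserted AS i ON i.UserName = p.UserName;
END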
A: I agree with the others -- set a default value of GetDate() on the LastUpdate column and then use a trigger to handle any updates.
Just something simple like this:
CREATE TRIGGER KeepUpdated on Profiles
FOR UPDATE, INSERT AS
UPDATE dbo.Profiles
SET LastUpdate = GetDate()
WHERE Username IN (SELECT Username FROM inserted)
If you want to get really fancy, have it evaluate what's being changed versus what's in the database and only modify LastUpdate if there was a difference.
Consider this...
*
*7am - User 'jsmith' is created with a last name of 'Smithe' (oops), LastUpdate defaults to 7am
*8am - 'jsmith' emails IT to say his name is incorrect. You immediately perform the update, so the last name is now 'Smith' and (thanks to the trigger) LastUpdate shows 8am
*2pm - Your slacker coworker finally gets bored with StumbleUpon and checks his email. He sees the earlier message from 'jsmith' regarding the name change. He runs: UPDATE Profiles SET LastName='Smith' WHERE Username='jsmith' and then goes
back to surfing MySpace. The trigger doesn't care that the last name was already 'Smith', however, so LastUpdate now shows 2pm.
If you just blindly change LastUpdate whenever an update statement runs, it's TECHNICALLY correct because an update did happen, but it probably makes more sense to actually compare the changes and act accordingly. That way, the 2pm Update statement by the coworker would still run, but LastUpdate would still show 8am.
--Kevin
A: You're going to have to use triggers for that.
A: My suggestion would be to create a stored procedure which defaults the lastUpdate to getdate().
I've tried to avoid triggers in the past because pre-SQL2005 locating and editing them was a pain in the rump. Especially for developers who are new to your project.
Also add that as the default value for your column definition.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Why is .NET exception not caught by try/catch block? I'm working on a project using the ANTLR parser library for C#. I've built a grammar to parse some text and it works well. However, when the parser comes across an illegal or unexpected token, it throws one of many exceptions. The problem is that in some cases (not all) that my try/catch block won't catch it and instead stops execution as an unhandled exception.
The issue for me is that I can't replicate this issue anywhere else but in my full code. The call stack shows that the exception definitely occurs within my try/catch(Exception) block. The only thing I can think of is that there are a few ANTLR assembly calls that occur between my code and the code throwing the exception and this library does not have debugging enabled, so I can't step through it. I wonder if non-debuggable assemblies inhibit exception bubbling? The call stack looks like this; external assembly calls are in Antlr.Runtime:
Expl.Itinerary.dll!TimeDefLexer.mTokens() Line 1213 C#
Antlr3.Runtime.dll!Antlr.Runtime.Lexer.NextToken() + 0xfc bytes
Antlr3.Runtime.dll!Antlr.Runtime.CommonTokenStream.FillBuffer() + 0x22c bytes
Antlr3.Runtime.dll!Antlr.Runtime.CommonTokenStream.LT(int k = 1) + 0x68 bytes
Expl.Itinerary.dll!TimeDefParser.prog() Line 109 + 0x17 bytes C#
Expl.Itinerary.dll!Expl.Itinerary.TDLParser.Parse(string Text = "", Expl.Itinerary.IItinerary Itinerary = {Expl.Itinerary.MemoryItinerary}) Line 17 + 0xa bytes C#
The code snippet from the bottom-most call in Parse() looks like:
try {
// Execution stopped at parser.prog()
TimeDefParser.prog_return prog_ret = parser.prog();
return prog_ret == null ? null : prog_ret.value;
}
catch (Exception ex) {
throw new ParserException(ex.Message, ex);
}
To me, a catch (Exception) clause should've captured any exception whatsoever. Is there any reason why it wouldn't?
Update: I traced through the external assembly with Reflector and found no evidence of threading whatsoever. The assembly seems to just be a runtime utility class for ANTLR's generated code. The exception thrown is from the TimeDefLexer.mTokens() method and its type is NoViableAltException, which derives from RecognitionException -> Exception. This exception is thrown when the lexer cannot understand the next token in the stream; in other words, invalid input. This exception is SUPPOSED to happen, however it should've been caught by my try/catch block.
Also, the rethrowing of ParserException is really irrelevant to this situation. That is a layer of abstraction that takes any exception during parse and convert to my own ParserException. The exception handling problem I'm experiencing is never reaching that line of code. In fact, I commented out the "throw new ParserException" portion and still received the same result.
One more thing, I modified the original try/catch block in question to instead catch NoViableAltException, eliminating any inheritance confusion. I still received the same result.
Someone once suggested that sometimes VS is overactive on catching handled exceptions when in debug mode, but this issue also happens in release mode.
Man, I'm still stumped! I hadn't mentioned it before, but I'm running VS 2008 and all my code is 3.5. The external assembly is 2.0. Also, some of my code subclasses a class in the 2.0 assembly. Could a version mismatch cause this issue?
Update 2: I was able to eliminate the .NET version conflict by porting relevant portions of my .NET 3.5 code to a .NET 2.0 project and replicate the same scenario. I was able to replicate the same unhandled exception when running consistently in .NET 2.0.
I learned that ANTLR has recently released 3.1. So, I upgraded from 3.0.1 and retried. It turns out the generated code is a little refactored, but the same unhandled exception occurs in my test cases.
Update 3:
I've replicated this scenario in a simplified VS 2008 project. Feel free to download and inspect the project for yourself. I've applied all the great suggestions, but have not been able to overcome this obstacle yet.
If you can find a workaround, please do share your findings. Thanks again!
Thank you, but VS 2008 automatically breaks on unhandled exceptions. Also, I don't have a Debug->Exceptions dialog. The NoViableAltException that is thrown is fully intended, and designed to be caught by user code. Since it is not caught as expected, program execution halts unexpectedly as an unhandled exception.
The exception thrown is derived from Exception and there is no multi-threading going on with ANTLR.
A: Regardless of whether the assembly has been compiled as a release build, the exception should certainly 'bubble' up to the caller; there's no reason an assembly not being compiled in debug mode should have any effect on that.
I'd agree with Daniel in suggesting that perhaps the exception is occurring on a separate thread - try hooking the thread exception event in Application.ThreadException. This should be raised when any unhandled thread exception occurs. You could adapt your code thus:
using System.Threading;
...
void Application_ThreadException(object sender, ThreadExceptionEventArgs e) {
throw new ParserException(e.Exception.Message, e.Exception);
}
...
var exceptionHandler =
new ThreadExceptionEventHandler(Application_ThreadException);
Application.ThreadException += exceptionHandler;
try {
// Execution stopped at parser.prog()
TimeDefParser.prog_return prog_ret = parser.prog();
return prog_ret == null ? null : prog_ret.value;
}
catch (Exception ex) {
throw new ParserException(ex.Message, ex);
}
finally {
Application.ThreadException -= exceptionHandler;
}
A: Are you using .Net 1.0 or 1.1? If so then catch(Exception ex) won't catch exceptions from unmanaged code. You'll need to use catch {} instead. See this article for further details:
http://www.netfxharmonics.com/2005/10/net-20-trycatch-and-trycatchexception/
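A minimal sketch of that pre-2.0 pattern, reusing the parser call from the question (on .NET 1.0/1.1 the bare catch clause also traps exceptions that don't derive from System.Exception):
try
{
    TimeDefParser.prog_return prog_ret = parser.prog();
    return prog_ret == null ? null : prog_ret.value;
}
catch (Exception ex)
{
    // managed exceptions deriving from System.Exception land here
    throw new ParserException(ex.Message, ex);
}
catch
{
    // on .NET 1.0/1.1, non-CLS exceptions from unmanaged code land here instead
    throw new ParserException("Non-CLS exception thrown by the parser", null);
}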
A: I'm with @Shaun Austin - try wrapping the try with the fully qualified name
catch (System.Exception)
and see if that helps. Does the ANTLR doc say what exceptions should be thrown?
A: I believe I understand the problem. The exception is being caught, the issue is confusion over the debugger's behavior and differences in the debugger settings among each person trying to repro it.
In the 3rd case from your repro I believe you are getting the following message: "NoViableAltException was unhandled by user code" and a callstack that looks like this:
[External Code]
> TestAntlr-3.1.exe!TimeDefLexer.mTokens() Line 852 + 0xe bytes C#
[External Code]
TestAntlr-3.1.exe!TimeDefParser.prog() Line 141 + 0x14 bytes C#
TestAntlr-3.1.exe!TestAntlr_3._1.Program.ParseTest(string Text = "foobar;") Line 49 + 0x9 bytes C#
TestAntlr-3.1.exe!TestAntlr_3._1.Program.Main(string[] args = {string[0x00000000]}) Line 30 + 0xb bytes C#
[External Code]
If you right-click in the call stack window and turn on "Show External Code", you see this:
Antlr3.Runtime.dll!Antlr.Runtime.DFA.NoViableAlt(int s = 0x00000000, Antlr.Runtime.IIntStream input = {Antlr.Runtime.ANTLRStringStream}) + 0x80 bytes
Antlr3.Runtime.dll!Antlr.Runtime.DFA.Predict(Antlr.Runtime.IIntStream input = {Antlr.Runtime.ANTLRStringStream}) + 0x21e bytes
> TestAntlr-3.1.exe!TimeDefLexer.mTokens() Line 852 + 0xe bytes C#
Antlr3.Runtime.dll!Antlr.Runtime.Lexer.NextToken() + 0xc4 bytes
Antlr3.Runtime.dll!Antlr.Runtime.CommonTokenStream.FillBuffer() + 0x147 bytes
Antlr3.Runtime.dll!Antlr.Runtime.CommonTokenStream.LT(int k = 0x00000001) + 0x2d bytes
TestAntlr-3.1.exe!TimeDefParser.prog() Line 141 + 0x14 bytes C#
TestAntlr-3.1.exe!TestAntlr_3._1.Program.ParseTest(string Text = "foobar;") Line 49 + 0x9 bytes C#
TestAntlr-3.1.exe!TestAntlr_3._1.Program.Main(string[] args = {string[0x00000000]}) Line 30 + 0xb bytes C#
[Native to Managed Transition]
[Managed to Native Transition]
mscorlib.dll!System.AppDomain.ExecuteAssembly(string assemblyFile, System.Security.Policy.Evidence assemblySecurity, string[] args) + 0x39 bytes
Microsoft.VisualStudio.HostingProcess.Utilities.dll!Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() + 0x2b bytes
mscorlib.dll!System.Threading.ThreadHelper.ThreadStart_Context(object state) + 0x3b bytes
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state) + 0x81 bytes
mscorlib.dll!System.Threading.ThreadHelper.ThreadStart() + 0x40 bytes
The debugger's message is telling you that an exception originating outside your code (from NoViableAlt) is going through code you own in TestAntlr-3.1.exe!TimeDefLexer.mTokens() without being handled.
The wording is confusing, but it does not mean the exception is uncaught. The debugger is letting you know that the code you own in mTokens() needs to be robust against this exception being thrown through it.
Things to play with to see how this looks for those who didn't repro the problem:
*
*Go to Tools/Options/Debugging and turn off "Enable Just My Code (Managed only)".
*Go to Debug/Exceptions and turn off "User-unhandled" for Common Language Runtime Exceptions.
A: Is it possible that the exception is being thrown in another thread? Obviously your calling code is single threaded, but maybe the library you are consuming is doing some multithreaded operations under the covers.
A: You can set up VS.Net to break as soon as any exception occurs. Just run your project in debug mode, and it will stop as soon as the exception is thrown. Then you should have a better idea of why it isn't being caught.
Also, you can put some code in to catch all unhandled exceptions.
Application.ThreadException += new ThreadExceptionEventHandler(ThreadExceptionHandler);
// Catch all unhandled exceptions in all threads.
AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(UnhandledExceptionHandler);
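The handler methods referenced above aren't shown; minimal sketches might look like this (the logging is illustrative):
static void ThreadExceptionHandler(object sender, System.Threading.ThreadExceptionEventArgs e)
{
    // Raised for otherwise unhandled exceptions on WinForms UI threads
    Console.WriteLine("ThreadException: " + e.Exception);
}
static void UnhandledExceptionHandler(object sender, UnhandledExceptionEventArgs e)
{
    // Raised as a last-chance notification for unhandled exceptions on any thread
    Console.WriteLine("UnhandledException: " + e.ExceptionObject);
}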
A:
To me, a catch (Exception) clause should've captured any exception whatsoever. Is there any reason why it wouldn't?
The only possibility I can think of is that something else is catching it before you and handling it in a way that appears to be an uncaught exception (e.g. exiting the process).
my try/catch block won't catch it and instead stops execution as an unhandled exception.
You need to find what is causing the exit process. It might be something other than an unhandled exception.
You might try using the native debugger with a breakpoint set on "{,,kernel32.dll}ExitProcess". Then use SOS to determine what managed code is calling exit process.
A: Personally I'm not convinced by the threading theory at all.
The one time I've seen this before, I was working with a library which also defined an Exception class, and the usings I had meant that the actual catch was referring to a different "Exception" type (fully qualified it was Company.Lib.Exception, but it wasn't written that way because of the using), so when it came to catching a normal exception that was being thrown (some kind of argument exception if I remember correctly) it just wouldn't catch it because the type didn't match.
So in summary, is there another Exception type in a different namespace that is in a using in that class?
EDIT: A quick way to check this is make sure in your catch clause you fully qualify the Exception type as "System.Exception" and give it a whirl!
EDIT2: OK I've tried the code and concede defeat for now. I'll have to have another look at it in the morning if no one has come up with a solution.
A: Hmm, I don't understand the problem. I downloaded and tried your example solution file.
An exception is thrown in TimeDefLexer.cs, line 852, which is subsequently handled by the catch block in Program.cs that just says Handled exception.
If I uncomment the catch block above it, it will enter that block instead.
What seems to be the problem here?
As Kibbee said, Visual Studio will stop on exceptions, but if you ask it to continue, the exception will get caught by your code.
A: I downloaded the sample VS2008 project, and am a bit stumped here too. I was able to get past the exceptions however, although probably not in a way that will work will great for you. But here's what I found:
This mailing list post had a discussion of what looks to be the same issue you are experiencing.
From there, I added a couple dummy classes in the main program.cs file:
class MyNoViableAltException : Exception
{
public MyNoViableAltException()
{
}
public MyNoViableAltException(string grammarDecisionDescription, int decisionNumber, int stateNumber, Antlr.Runtime.IIntStream input)
{
}
}
class MyEarlyExitException : Exception
{
public MyEarlyExitException()
{
}
public MyEarlyExitException(int decisionNumber, Antlr.Runtime.IIntStream input)
{
}
}
and then added the using lines into TimeDefParser.cs and TimeDefLexer.cs:
using NoViableAltException = MyNoViableAltException;
using EarlyExitException = MyEarlyExitException;
With that the exceptions would bubble into the fake exception classes and could be handled there, but there was still an exception being thrown in the mTokens method in TimeDefLexer.cs. Wrapping that in a try catch in that class caught the exception:
try
{
alt4 = dfa4.Predict(input);
}
catch
{
}
I really don't get why wrapping it in the internal method, rather than where it is being called from, handles the error if threading isn't in play, but anyway, hopefully that will point someone smarter than me here in the right direction.
A: I downloaded your code and everything works as expected.
Visual Studio debugger correctly intercepts all exceptions. Catch blocks work as expected.
I'm running Windows 2003 server SP2, VS2008 Team Suite (9.0.30729.1 SP)
I tried to compile your project for .NET 2.0, 3.0 & 3.5
@Steve Steiner, debugger options you mentioned have nothing to do with this behavior.
I tried to play with these options with no visible effects - catch blocks managed to intercept all exceptions.
A: Steve Steiner is correct that the exception is originating in the antlr library, passing through the mTokens() method and being caught in the antlr library. The problem is that this method is auto-generated by antlr. Therefore, any changes to handle the exception in mTokens() will be overwritten when your generate your parser/lexer classes.
By default, antlr will log errors and try to recover parsing. You can override this so that parser.prog() will throw an exception whenever an error is encountered. From your example code I think this is the behaviour you were expecting.
Add this code to your grammar (.g) file. You will also need to turn off "Enable Just My Code" in the debugging menu.
@members {
public override Object RecoverFromMismatchedSet(IIntStream input,RecognitionException e, BitSet follow)
{
throw e;
}
}
@rulecatch {
catch (RecognitionException e)
{
throw e;
}
}
This is my attempt at a C# version of the example given in the "Exiting the recogniser on first error" chapter of the "Definitive ANTLR Reference" book.
Hope this is what you were looking for.
A: I can tell you what's happening here...
Visual Studio is breaking because it thinks the exception is unhandled. What does unhandled mean? Well, in Visual Studio, there is a setting in the Tools... Options... Debugging... General... "Enable Just My Code (Managed only)". If this is checked and if the exception propagates out of your code and out to a stack frame associated with a method call that exists in an assembly which is "NOT YOUR CODE" (for example, Antlr), that is considered "unhandled". I turn off that Enable Just My Code feature for this reason. But, if you ask me, this is lame... let's say you do this:
ExternalClassNotMyCode c = new ExternalClassNotMyCode();
try {
c.doSomething( () => { throw new Exception(); } );
}
catch ( Exception ex ) {}
doSomething calls your anonymous function there and that function throws an exception...
Note that this is an "unhandled exception" according to Visual Studio if "Enable Just My Code" is on. Also, note that it stops as if it were a breakpoint when in debug mode, but in a non-debugging or production environment, the code is perfectly valid and works as expected. Also, if you just "continue" in the debugger, the app goes on it's merry way (it doesn't stop the thread). It is considered "unhandled" because the exception propagates through a stack frame that is NOT in your code (i.e. in the external library). If you ask me, this is lousy. Please change this default behavior Microsoft. This is a perfectly valid case of using Exceptions to control program logic. Sometimes, you can't change the third party library to behave any other way, and this is a very useful way to accomplish many tasks.
Take MyBatis for example, you can use this technique to stop processing records that are being collected by a call to SqlMapper.QueryWithRowDelegate.
A: Oh and in reference to what Kibbee said; if you select Debug|Exceptions in VS and just click all the boxes in the 'thrown' column it should pick everything up AFAIK as a 'first chance exception', i.e. VS will indicate when the exception is about to be processed by everything else and break on the relevant code. This should help with debugging.
A: The best option sounds like setting Visual Studio to break on all unhandled exceptions (Debug -> Exceptions dialog, check the box for "Common Language Runtime Exceptions" and possibly the others as well). Then run your program in debug mode. When the ANTLR parser code throws an exception it should be caught by Visual Studio and allow you to see where it is occurring, the exception type, etc.
Based on the description, the catch block appears to be correct, so one of several things could be happening:
*
*the parser is not actually throwing an exception
*the parser is ultimately throwing something that isn't deriving from System.Exception
*there is an exception being thrown on another thread that isn't being handled
It sounds like you have potentially ruled out issue #3.
A:
I traced through the external assembly with Reflector and found no evidence of threading whatsoever.
The fact that you can't find any threading does not mean there is no threading.
.NET has a 'thread pool' which is a set of 'spare' threads that sit around mostly idle. Certain methods cause things to run in one of the thread pool threads so they don't block your main app.
The blatant examples are things like ThreadPool.QueueUserWorkItem, but there are lots and lots of other things which can also run things in the thread pool that don't look so obvious, like Delegate.BeginInvoke
Really, you need to do what kibbee suggests.
A: Have you tried printing the exception (Console.WriteLine()) inside the catch clause, and running your application from the console instead of under Visual Studio?
A: I believe Steve Steiner is correct. When researching Steve's suggestions, I came across this thread talking about the "Enable Just My Code" option in Tools|Options|Debugger|General. It is suggested that the debugger will break in certain conditions when non-user code either throws or handles an exception. I'm not exactly sure why this even matters, or why the debugger specifically says the exception was unhandled when it really was.
I was able to eliminate the false breaks by disabling the "Enable Just My Code" option. This also changes the Debug|Exceptions dialog by removing the "User-handled" column as it no longer applies. Or, you can just uncheck the "User-handled" box for CLR and get the same result.
Bigtime thanks for the help everyone!
A:
"Also, you can put some code in to
catch all unhandled exceptions. Read
the link for more info, but the basics
are these two lines."
This is false. This used to catch all unhandled exceptions in .NET 1.0/1.1 but it was a bug and it wasn't supposed to and it was fixed in .NET 2.0.
AppDomain.CurrentDomain.UnhandledException
Is only intended to be used as a last-chance logging saloon so you can log the exception before the program exits. It won't catch the exception as of 2.0 onwards (although in .NET 2.0 at least there is a config value you can modify to make it act like 1.1, but it isn't recommended practice to use this).
It's worth noting that there are a few exceptions that you cannot catch, such as StackOverflowException and OutOfMemoryException. Otherwise, as other people have suggested, it might be an exception in a background thread somewhere. Also I'm pretty sure you can't catch some/all unmanaged/native exceptions either.
A: I don't get it...your catch block just throws a new exception (with the same message). Meaning that your statement of:
The problem is that in some cases (not all) that my try/catch block won't catch it and instead stops execution as an unhandled exception.
is exactly what is expected to happen.
A: I agree with Daniel Auger and kronoz that this smells like an exception that has something to do with threads. Beyond that, here are my other questions:
*
*What does the complete error message say? What kind of exception is it?
*Based on the stack trace you've provided here, isn't the exception thrown by your code in TimeDefLexer.mTokens()?
A: I'm not sure if I'm being unclear, but if so, I'm seeing the debugger halt execution with an "Unhandled Exception" of type NoViableAltException. Initially, I didn't know anything about this Debug->Exceptions menu item because MS expects you, at VS install time, to commit to a profile when you have no idea how they are different. Apparently, I was not on the C# dev profile and was missing this option. After finally debugging all thrown CLR exceptions, I was unfortunately unable to discover any new behavior leading to the reason for this unhandled exception issue. All the exceptions thrown were expected and supposedly handled in a try/catch block.
I reviewed the external assembly and there is no evidence of multithreading. By that, I mean no reference exists to System.Threading and no delegates were used whatsoever. I'm familiar with what constitutes instantiating a thread. I verified this by observing the Threads toolbox at the time of the unhandled exception to confirm there is only one running thread.
I have an open issue with the ANTLR folks so perhaps they've been able to tackle this issue before. I've been able to replicate it in a simple console app project using .NET 2.0 and 3.5 under VS 2008 and VS 2005.
It's just a pain point because it forces my code to only work with known valid parser input. Using an IsValid() method would be risky if it threw an unhandled exception based on user input. I'll keep this question up to date when more is learned of this issue.
A: @spoulson,
If you can replicate it, can you post it somewhere? One avenue you could try is using WinDBG with the SOS extensions to run the app and catch the unhandled exception. It will break on the first chance exception (before the runtime tries to find a handler) and you can see at that point where it is coming from, and what thread.
If you haven't used WinDBG before, it can be a little overwhelming, but here's a good tutorial:
http://blogs.msdn.com/johan/archive/2007/11/13/getting-started-with-windbg-part-i.aspx
Once you start up WinDBG, you can toggle the breaking of unhandled exceptions by going to Debug->Event Filters.
A: Wow, so of the reports so far, 2 worked correctly, and 1 experienced the issue I reported. What are the versions of Windows, Visual Studio used and .NET framework with build numbers?
I'm running XP SP2, VS 2008 Team Suite (9.0.30729.1 SP), C# 2008 (91899-270-92311015-60837), and .NET 3.5 SP1.
A: If you are using COM objects in your project and try/catch blocks do not catch the exceptions, you will need to disable the Tools/Options/Debugging option "Break when exceptions cross AppDomain or managed/native boundaries (Managed only)".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
} |
Q: Deleting messages from Exchange IMAP mailbox on iPhone I have a secondary Exchange mailbox configured on my iPhone using IMAP. This all appears to work fine except when a message is deleted on the phone, it still shows normally in Outlook. It does not seem to matter what I set the "remove deleted messages" setting to on the phone.
I understand this is due to a combination of the phone not expunging the deleted messages and Exchange showing deleted but not expunged messages in Outlook.
I'm looking for an automated solution to this that does not have a large delay between deleting the message on the phone and it disappearing in Outlook. The message should also show in the Deleted Items when deleted from the phone.
I've thought about creating a background process which connects to the mailbox via IMAP and sits in IDLE mode until there's a deleted message in the folder. It will then expunge the folder and return to IDLE mode. This wouldn't work with more than one folder (without multiple instances) but it would probably do the job.
Any recommendations on an easily scriptable tool or library that supports IMAP IDLE?
A: I can wholeheartedly recommend writing such a process with a simple Perl client using the Mail::IMAPClient module.
#!/usr/bin/perl -w
use strict;
use Mail::IMAPClient;
# Connection details for the mailbox (placeholder values - adjust for your server):
my $host = 'imap.example.com';
my $id   = 'username';
my $pass = 'password';
# new() connects and logs in when Server/User/Password are supplied:
my $imap = Mail::IMAPClient->new(
    Server   => $host,
    User     => $id,
    Password => $pass,
) or die "Cannot connect to $host as $id: $@";
# Select the folder and expunge messages flagged as deleted:
$imap->select('INBOX') or die "Select failed: $@";
$imap->expunge();
This can then be run from the crontab or some other scheduler.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to assign a method's output to a textbox value without code behind How do I assign a method's output to a textbox value without code behind?
<%@ Page Language="VB" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<script runat="server">
Public TextFromString As String = "test text test text"
Public TextFromMethod As String = RepeatChar("S", 50) 'SubSonic.Sugar.Web.GenerateLoremIpsum(400, "w")
Public Function RepeatChar(ByVal Input As String, ByVal Count As Integer)
Return New String(Input, Count)
End Function
</script>
<html xmlns="http://www.w3.org/1999/xhtml">
<head id="Head1" runat="server">
<title>Test Page</title>
</head>
<body>
<form id="form1" runat="server">
<div>
<%=TextFromString%>
<br />
<asp:TextBox ID="TextBox1" runat="server" Text="<%# TextFromString %>"></asp:TextBox>
<br />
<%=TextFromMethod%>
<br />
<asp:TextBox ID="TextBox2" runat="server" Text="<%# TextFromMethod %>"></asp:TextBox>
</div>
</form>
</body>
</html>
it was mostly so the designer guys could use it in the aspx page. Seems like a simple thing to push a variable value into a textbox to me.
It's also confusing to me why
<asp:Label runat="server" ID="label1"><%=TextFromString%></asp:Label>
and
<asp:TextBox ID="TextBox3" runat="server">Hello</asp:TextBox>
works but
<asp:TextBox ID="TextBox4" runat="server"><%=TextFromString%></asp:TextBox>
causes a compilation error.
A: There's a couple of different expression types in .ASPX files. There's:
<%= TextFromMethod %>
which simply reserves a literal control, and outputs the text at render time.
and then there's:
<%# TextFromMethod %>
which is a databinding expression, evaluated when the control is DataBound(). There's also expression builders, like:
<%$ ConnectionStrings:Database %>
but that's not really important here....
So, the <%= %> method won't work because it would try to insert a Literal into the .Text property...obviously, not what you want.
The <%# %> method doesn't work because the TextBox isn't DataBound, nor are any of its parents. If your TextBox was in a Repeater or GridView, then this method would work.
So - what to do? Just call TextBox.DataBind() at some point. Or, if you have more than 1 control, just call Page.DataBind() in your Page_Load.
Private Sub Page_Load(sender As Object, e As EventArgs)
    If Not IsPostBack Then
        Me.DataBind()
    End If
End Sub
A: Have you tried using an HTML control instead of the server control? Does it also cause a compilation error?
<input type="text" id="TextBox4" runat="server" value="<%=TextFromString%>" />
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Yaws uses old config file I'm developing a web app on Yaws 1.65 (installed through apt) running on Debian etch on a VPS with UML. Whenever I do /etc/init.d/yaws restart or a stop/start, it initializes according to an old version of the config file (/etc/yaws/yaws.conf).
I know this because I changed the docroot from the default to another directory (call it A), then a few weeks later changed it to directory B, and the config file has stayed with B for the last several months. But then, after a restart, it switches back to A. If it switched back to the package default, that would be understandable, but it switches to an old customized version instead.
The funny thing is that if I leave it stopped for several minutes, when I start it again, everything switches back to normal (using directory B). But while it's stopped, if I run ps, I don't see any yaws-related processes (yaws, heart, etc). This problem has survived several reboots, so it's got to be an old cached copy of the config somewhere, but I have yet to find anything like that.
Any idea what could be going on?
Update:
@Gorgapor - I stopped yaws, renamed the config file and tried to start it again. It failed to start. However, I was able to restart a couple of times and this time it didn't switch back to the old version.
A: I'm completely inexperienced with yaws, but I have a troubleshooting suggestion: What happens if you remove the config file completely? If it still starts yaws without a config file, that could be a clear sign that something is being cached.
For what it's worth, with a quick 5 minutes of googling, I found no mention of any caching behavior.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Templates spread across multiple files C++ seems to be rather grouchy when declaring templates across multiple files. More specifically, when working with templated classes, the linker expects all method definitions for the class in a single compiler object file. When you take into account headers, other declarations, inheritance, etc., things get really messy.
Are there any general advice or workarounds for organizing or redistributing templated member definitions across multiple files?
A: Across how many files? If you just want to separate class definitions from implementation then try this article in the C++ faqs. That's about the only way I know of that works at the moment, but some IDEs (Eclipse CDT for example) won't link this method properly and you may get a lot of errors. However, writing your own makefiles or using Visual C++ has always worked for me :-)
A: When/if your compiler supports C++0x, the extern keyword can be used to separate template declarations from definitions.
See here for a brief explanation.
Also, section 6.3, "The Separation Model," of C++ Templates: The Complete Guide by David Vandevoorde and Nicolai M. Josuttis describes other options.
A:
Are there any general advice or workarounds for organizing or redistributing templated member definitions across multiple files?
Yes; don't.
The C++ spec permits a compiler to be able to "see" the entire template (declaration and definition) at the point of instantiation, and (due to the complexities of any implementation) most compilers retain this requirement. The upshot is that #inclusion of any template header must also #include any and all source required to instantiate the template.
The easiest way to deal with this is to dump everything into the header, inline where possible, out-of-line where necessary.
If you really regard this as an unacceptable affront, a common option is to split the template into the usual header/implementation pair, and then #include the implementation file at the end of the header.
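For illustration, a minimal sketch of that split (the class and file names are made up):
// stack.h
#ifndef STACK_H
#define STACK_H
#include <vector>
template <typename T>
class Stack {
public:
    void push(const T& value);
private:
    std::vector<T> data_;
};
// Pull the definitions back into every translation unit that includes this header
#include "stack.tpp"
#endif
// stack.tpp
template <typename T>
void Stack<T>::push(const T& value) {
    data_.push_back(value);
}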
C++'s "export" feature may or may not provide another workaround. The feature is poorly supported and poorly defined; although it in principle should permit some kind of separate compilation of templates, it doesn't necessarily obviate the demand that the compiler be able to see the entire template body.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Anyone have experience with Sphinx speech recognition? Has anyone used the Sphinx speech recognition stack to build IVR applications? I am looking for open source alternatives to the expensive and somewhat limiting choices from MSFT and others. I have not been able to find a comprehensive package that ties open source speech/voip applications together.
A: You could try integrating Sphinx with Asterisk:
*
*http://www.syednetworks.com/asterisk-integration-with-sphinx-voice-recognition-system
*http://www.voip-info.org/wiki/view/Sphinx
A:
Last I looked at Sphinx, it had issues with 8khz audio which resulted
in really poor performance. There's not a lot of people talking about
successful deployments of Sphinx in real environments, but you might
be able to get it to work with some trailblazing effort. See here for
more info:
http://www.voip-info.org/wiki-Sphinx
The closest thing to open-source that really works is using LumenVox
with Asterisk. Asterisk is the open-source PBX that you can use to
integrate with a VoIP service or gateway, or even the PSTN. LumenVox
is a commercial speech engine that integrates with Asterisk:
http://www.asterisk.org
http://www.lumenvox.com
http://www.lumenvox.com/partners/digium/Asterisk.aspx
There's lots of people successfully using LumenVox with Asterisk.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What's a good open source VoiceXML implementation? I am trying to find out if it's possible to build a complete IVR application by cobbling together parts from open source projects. Is anyone using a non-commercial VoiceXML implementation to build speech-enabled systems?
A: I've tried JVoiceXML in the past and had some luck with it.
http://jvoicexml.sourceforge.net/
It's java of course, but that wasn't a problem for my situation.
A: Voiceglue (http://www.voiceglue.org/) is an implementation of voicexml using openvxi and asterisk. It may be a good option for you, it is GPL licensed.
A: You might want to take a look at OpenVXI, I believe that a number of companies that sell very expensive IVR platforms (such as Avaya) have based their voice browser on it.
http://en.wikipedia.org/wiki/OpenVXI
A: You can check out Asterisk - http://www.asterisk.org/ for an open source solution.
A: If you want to build an IVR and you're not married to VoiceXML, you might try Twilio. They have a simple XML syntax, an awesome REST API, and small-project-friendly pay per minute pricing which lets you concentrate on building your app and not building/hosting telephony infrastructure. I built an IVR app using their system in a few days and it was a pleasure.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How do I close a popup window, and open the next page in the main window in ROR? I have a popup window containing a form which gathers data for a report.
When I click submit in that window, I want it to close the popup, and open the report in the original window that called the popup.
I think I can open the report in the correct window by using
{ :target => <name of window> }
in the form_tag, but I don't know how to determine or set the name of the originating window.
I also don't know how to close the popup window.
A: :target => adds the html attribute target to the link. This opens up a new window and names the new window the target.
You have to use javascript or Ajax to redirect the old page,
window.opener.location.href="http://new_url";
and then close the old window.
window.close();
This can be done either through the rjs file or directly in the javascript.
A: How is this for starters?
# The submit button in your child window's view:
<%= button_to_function 'Save', "$('my_form').submit(); window.opener.location.reload(); window.close();" %>
A: The popup window can be closed using the onClick html event as follows:
<%= submit_tag "Go!", {:onClick => "window.close()"} %>
A: Try this:
function fclosepopup(){
    window.opener.location.replace("URL");
    window.close();
}
It will close the current window and bring you to the next page in the parent window.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What is the replacement of Controller.ReadFromRequest in ASP.NET MVC? I am attempting to update a project from ASP.NET MVC Preview 3 to Preview 5 and it seems that Controller.ReadFromRequest(string key) has been removed from the Controller class. Does anyone know of any alternatives to retrieving information based on an identifier from a form?
A: Looks like they've added controller.UpdateModel to address this issue, signature is:
UpdateModel(object model, string[] keys)
I haven't upgraded my app personally, so I'm not sure of the actual usage. I'll be interested to find out about this myself, as I'm using controller.ReadFromRequest as well.
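Based purely on that signature, usage would presumably look something like this (an untested guess; the controller, Product type and key names are all illustrative):
using System.Web.Mvc;
public class ProductsController : Controller
{
    public ActionResult Save()
    {
        Product product = new Product();
        // Binds the "Name" and "Price" values from the request onto the model
        UpdateModel(product, new string[] { "Name", "Price" });
        return RedirectToAction("Index");
    }
}
public class Product
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}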
A: Not sure where it went. You could roll your own extension though:
public static class MyBindingExtensions
{
    public static T ReadFromRequest<T>(this Controller controller, string key)
    {
        // Setup
        HttpContextBase context = controller.ControllerContext.HttpContext;
        object val = null;
        T result = default(T);
        // Guard
        if (context == null)
            return result; // no point checking request
        // Bind value (check form then query string)
        if (context.Request.Form[key] != null)
            val = context.Request.Form[key];
        if (val == null)
        {
            if (context.Request.QueryString[key] != null)
                val = context.Request.QueryString[key];
        }
        // Cast value (assumes T is compatible with the string value, e.g. T = string)
        if (val != null)
            result = (T)val;
        return result;
    }
}
A: could you redo that link in something like tinyurl.com?
I need this info too but can't get that mega-link to work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Finding out the source of an exception in C++ after it is caught? I'm looking for an answer in MS VC++.
When debugging a large C++ application which unfortunately makes very extensive use of C++ exceptions, I sometimes catch an exception a little later than I actually want.
Example in pseudo code:
FunctionB()
{
...
throw e;
...
}
FunctionA()
{
...
FunctionB()
...
}
try
{
Function A()
}
catch(e)
{
(<--- breakpoint)
...
}
I can catch the exception with a breakpoint when debugging. But I can't trace back if the exception occurred in FunctionA() or FunctionB(), or some other function. (Assuming extensive exception use and a huge version of the above example).
One solution to my problem is to determine and save the call stack in the exception constructor (i.e. before it is caught). But this would require me to derive all exceptions from this base exception class. It would also require a lot of code, and perhaps slow down my program.
Is there an easier way that requires less work? Without having to change my large code base?
Are there better solutions to this problem in other languages?
A: There's an excellent book written by John Robbins which tackles many difficult debugging questions. The book is called Debugging Applications for Microsoft .NET and Microsoft Windows. Despite the title, the book contains a host of information about debugging native C++ applications.
In this book, there is a lengthy section all about how to get the call stack for exceptions that are thrown. If I remember correctly, some of his advice involves using structured exception handling (SEH) instead of (or in addition to) C++ exceptions. I really cannot recommend the book highly enough.
A: Put a breakpoint in the exception object constructor. You'll get your breakpoint before the exception is thrown.
A: There is no way to find out the source of an exception after it's caught, unless you include that information when it is thrown. By the time you catch the exception, the stack is already unwound, and there's no way to reconstruct the stack's previous state.
Your suggestion to include the stack trace in the constructor is your best bet. Yes, it costs time during construction, but you probably shouldn't be throwing exceptions often enough that this is a concern. Making all of your exceptions inherit from a new base may also be more than you need. You could simply have the relevant exceptions inherit (thank you, multiple inheritance), and have a separate catch for those.
You can use the StackTrace64 function to build the trace (I believe there are other ways as well). Check out this article for example code.
A: Here's how I do it in C++ using GCC libraries:
#include <execinfo.h> // Backtrace
#include <cxxabi.h>   // Demangling
#include <cstdlib>    // free
#include <cstring>    // strlen
#include <string>
#include <sstream>
#include <vector>
// Aliases used by the original snippet
typedef std::string Str;
typedef std::string Astr;
typedef std::istringstream Aiss;
std::vector<Str> backtrace(size_t numskip) {
    std::vector<Str> result;
    std::vector<void*> bt(100);
    bt.resize(backtrace(&(*bt.begin()), bt.size()));
    char **btsyms = backtrace_symbols(&(*bt.begin()), bt.size());
    if (btsyms) {
        for (size_t i = numskip; i < bt.size(); i++) {
            Aiss in(btsyms[i]);
            int idx = 0; Astr nt, addr, mangled;
            in >> idx >> nt >> addr >> mangled;
            if (mangled == "start") break;
            int status = 0;
            char *demangled = abi::__cxa_demangle(mangled.c_str(), 0, 0, &status);
            Str frame = (status == 0) ? Str(demangled, demangled + strlen(demangled)) :
                                        Str(mangled.begin(), mangled.end());
            result.push_back(frame);
            free(demangled);
        }
        free(btsyms);
    }
    return result;
}
Your exception's constructor can simply call this function and store away the stack trace. It takes the param numskip because I like to slice off the exception's constructor from my stack traces.
A: You pointed to a breakpoint in the code. Since you are in the debugger, you could set a breakpoint on the constructor of the exception class, or set Visual Studio debugger to break on all thrown exceptions (Debug->Exceptions Click on C++ exceptions, select thrown and uncaught options)
A: If you are just interested in where the exception came from, you could just write a simple macro like
#include <sstream> // for std::ostringstream
#define throwException(message) \
{ \
    std::ostringstream oss; \
    oss << __FILE__ << " " << __LINE__ << " " \
        << __FUNCTION__ << " " << message; \
    /* std::exception(const char*) is an MSVC extension, fine for VC++ */ \
    throw std::exception(oss.str().c_str()); \
}
which will add the file name, line number and function name to the exception text (if the compiler provides the respective macros).
Then throw exceptions using
throwException("An unknown enum value has been passed!");
A: There's no standard way to do this.
Further, the call stack must typically be recorded at the time of the exception being thrown; once it has been caught the stack has unrolled, so you no longer know what was going on at the point of being thrown.
In VC++ on Win32/Win64, you might get usable-enough results by recording the value from the compiler intrinsic _ReturnAddress() and ensuring that your exception class constructor is __declspec(noinline). In conjunction with the debug symbol library, I think you could probably get the function name (and line number, if your .pdb contains it) that corresponds to the return address using SymGetLineFromAddr64.
A: In native code you can get a shot at walking the callstack by installing a Vectored Exception handler. VC++ implements C++ exceptions on top of SEH exceptions and a vectored exception handler is given first shot before any frame based handlers. However be really careful, problems introduced by vectored exception handling can be difficult to diagnose.
Also Mike Stall has some warnings about using it in an app that has managed code. Finally, read Matt Pietrek's article and make sure you understand SEH and vectored exception handling before you try this. (Nothing feels quite so bad as tracking down a critical problem to code you added help track down critical problems.)
A: If you're debugging from the IDE, go to Debug->Exceptions, click Thrown for C++ exceptions.
A: I believe MSDev allows you to set break points when an exception is thrown.
Alternatively put the break point on the constructor of your exception object.
A: Other languages? Well, in Java you call e.printStackTrace(); It doesn't get much simpler than that.
A: In case anyone is interested, a co-worker replied to this question to me via email:
Artem wrote:
There is a flag to MiniDumpWriteDump() that can do better crash dumps that will allow seeing full program state, with all global variables, etc. As for call stacks, I doubt they can be better because of optimizations... unless you turn (maybe some) optimizations off.
Also, I think disabling inline functions and whole program optimization will help quite a lot.
In fact, there are many dump types, maybe you could choose one small enough but still having more info
http://msdn.microsoft.com/en-us/library/ms680519(VS.85).aspx
Those types won't help with call stack though, they only affect the amount of variables you'll be able to see.
I noticed some of those dump types aren't supported in dbghelp.dll version 5.1 that we use. We could update it to the newest, 6.9 version though, I've just checked the EULA for MS Debugging Tools -- the newest dbghelp.dll is still ok to redistribute.
A: I use my own exceptions. You can handle them quite simple - also they contain text. I use the format:
throw Exception( "comms::serial::serial( )", "Something failed!" );
Also I have a second exception format:
throw Exception( "comms::serial::serial( )", ::GetLastError( ) );
Which is then converted from a DWORD value to the actual message using FormatMessage. Using the where/what format will show you what happened and in what function.
A: By now, it has been 11 years since this question was asked and today, we can solve this problem using only standard C++11, i.e. cross-platform and without the need for a debugger or cumbersome logging.
You can trace the call stack that led to an exception
Use std::nested_exception and std::throw_with_nested
This won't give you a stack unwind, but in my opinion the next best thing.
It is described on StackOverflow here and here, how you can get a backtrace on your exceptions inside your code without need for a debugger or cumbersome logging, by simply writing a proper exception handler which will rethrow nested exceptions.
It will, however, require that you insert try/catch statements at the functions you wish to trace (i.e. functions without this will not appear in your trace).
You could automate this with macros, reducing the amount of code you have to write/change.
Since you can do this with any derived exception class, you can add a lot of information to such a backtrace!
You may also take a look at my MWE on GitHub, where a backtrace would look something like this:
Library API: Exception caught in function 'api_function'
Backtrace:
~/Git/mwe-cpp-exception/src/detail/Library.cpp:17 : library_function failed
~/Git/mwe-cpp-exception/src/detail/Library.cpp:13 : could not open file "nonexistent.txt"
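For readers who want a concrete starting point, here is a minimal self-contained sketch of the pattern (this is not the MWE from the linked repository; the function names and messages are made up):
#include <exception>
#include <iostream>
#include <stdexcept>
#include <string>
void open_file() {
    try {
        throw std::runtime_error("could not open file \"nonexistent.txt\"");
    } catch (...) {
        // Wrap the current exception inside a new one, preserving the cause
        std::throw_with_nested(std::runtime_error("library_function failed"));
    }
}
void print_trace(const std::exception& e, int depth = 0) {
    std::cerr << std::string(depth, ' ') << e.what() << '\n';
    try {
        std::rethrow_if_nested(e);   // rethrows the nested exception, if any
    } catch (const std::exception& nested) {
        print_trace(nested, depth + 2);
    }
}
int main() {
    try {
        open_file();
    } catch (const std::exception& e) {
        print_trace(e);   // prints the outer message, then the nested cause
    }
}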
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: PHP mail using Gmail In my PHP web app, I want to be notified via email whenever certain errors occur. I'd like to use my Gmail account for sending these. How could this be done?
A: You could use PEAR's mail function with Gmail's SMTP Server
Note that when sending e-mail using Gmail's SMTP server, it will look like it came from your Gmail address, despite what you value is for $from.
(following code taken from About.com Programming Tips )
<?php
require_once "Mail.php";
$from = "Sandra Sender <[email protected]>";
$to = "Ramona Recipient <[email protected]>";
$subject = "Hi!";
$body = "Hi,\n\nHow are you?";
// stick your GMAIL SMTP info here! ------------------------------
$host = "mail.example.com";
$username = "smtp_username";
$password = "smtp_password";
// --------------------------------------------------------------
$headers = array ('From' => $from,
'To' => $to,
'Subject' => $subject);
$smtp = Mail::factory('smtp',
array ('host' => $host,
'auth' => true,
'username' => $username,
'password' => $password));
$mail = $smtp->send($to, $headers, $body);
if (PEAR::isError($mail)) {
echo("<p>" . $mail->getMessage() . "</p>");
} else {
echo("<p>Message successfully sent!</p>");
}
?>
A: Gmail's SMTP-server requires a very specific configuration.
From Gmail help:
Outgoing Mail (SMTP) Server (requires TLS)
- smtp.gmail.com
- Use Authentication: Yes
- Use STARTTLS: Yes (some clients call this SSL)
- Port: 465 or 587
Account Name: your full email address (including @gmail.com)
Email Address: your email address ([email protected])
Password: your Gmail password
You can probably set these settings up in Pear::Mail or PHPMailer. Check out their documentation for more details.
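Plugging those settings into the PEAR Mail example from the first answer would look roughly like this (an untested sketch; the ssl:// host wrapper and the 'port' option are standard PEAR Net_SMTP parameters):
require_once "Mail.php";
$smtp = Mail::factory('smtp',
    array ('host'     => 'ssl://smtp.gmail.com',
           'port'     => 465,
           'auth'     => true,
           'username' => 'your full email address (including @gmail.com)',
           'password' => 'your Gmail password'));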
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How to Catch an exception in a using block with .NET 2.0? I'm trying to leverage the using block more and more these days when I have an object that implements IDisposable but one thing I have not figured out is how to catch an exception as I would in a normal try/catch/finally ... any code samples to point me in the right direction?
Edit: The question was modified after reading through the replies. It was "How to Throw an exception in a using block with .NET 2.0?" but I was actually looking for a way to catch these exceptions inside a using block.
I'm looking for more detail on rolling my own catching block inside a using block.
Edit: What I wanted to avoid is having to use a try/catch/finally inside my using block like @Blair showed. But maybe this is a non issue...
Edit: @Blair, this is exactly what I was looking for, thanks for the detailed reply!
A: I don't really understand the question - you throw an exception as you normally would.
If MyThing implements IDisposable, then:
using ( MyThing thing = new MyThing() )
{
...
throw new ApplicationException("oops");
}
And thing.Dispose will be called as you leave the block, as the exception's thrown. If you want to combine a try/catch/finally and a using, you can either nest them:
try
{
...
using ( MyThing thing = new MyThing() )
{
...
}
...
}
catch ( Exception e )
{
....
}
finally
{
....
}
(Or put the try/catch/finally in the using):
using ( MyThing thing = new MyThing() )
{
...
try
{
...
}
catch ( Exception e )
{
....
}
finally
{
....
}
...
} // thing.Dispose is called now
Or you can unroll the using and explicitly call Dispose in the finally block as @Quarrelsome demonstrated, adding any extra exception-handling or -recovery code that you need in the finally (or in the catch).
EDIT: In response to @Toran Billups, if you need to process exceptions aside from ensuring that your Dispose method is called, you'll either have to use a using and try/catch/finally or unroll the using - I don't thinks there's any other way to accomplish what you want.
A: Yeah there is nothing different about throwing exceptions in using blocks.
Remember that the using block basically translates to:
WhateverYouWantedToMake disposable = new WhateverYouWantedToMake();
try
{
    // the body of the using block goes here
}
finally
{
    if (disposable != null)
        disposable.Dispose();
}
So you will have to roll your own catching if you want to catch anything but catching/throwing is a completely separate concern from the using. The finally is almost guaranteed to execute (save an uncatchable exception (e.g. stackoverflow or outofmemory) or someone pulling the power out of the PC).
A: You need to have a try statement to catch an exception.
You can either use a try statement within the using block, or you can put the using block inside a try block.
Either way, you need a try block to catch any exceptions that occur.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Subversion or Adobe Version CUE 3 for Photoshop/Illustrator Files This might seem like a stupid question I admit. But I'm in a small shop me plus two designers. Our backups are getting out of hand because they just copy/paste files if they need to make a change (version).
I was all set to try Subversion to handle all of our files: my text (code) files and their Photoshop/Illustrator and asset files. That is, until I noticed there was a new version of Adobe Version Cue, v3. We've tried to use Version Cue before, but it got complicated and the designers quickly stopped using it.
Looking for anyone that has some experience with version 3 of Version Cue.
Thanks for the great feedback. Maybe I should have asked what's the best tool to use for versioning Photoshop and related files. I did notice the binary file issue and was worried about trying to explain it and keep it "working". I signed up for the beta at Gridiron; thanks for that!
Here is the other question related to this one.
A: PixelNovel Timeline is a dedicated Subversion client for Photoshop. It works as a plugin and shows all your versions in an additional Photoshop palette. It also comes with web storage where you can view your files via a web browser. Give it a go; maybe it's exactly what you're after.
A: I was going to suggest Gridiron Flow. But Brian beat me to it.
A: @Theo: Subversion does not save a whole new copy of a Photoshop file with each version. In fact, it seems to be very efficient at saving typical updates while working on a design.
This is based on my personal experience, but it has been reproduced by others: http://joshcarter.com/productivity/svn_hg_git_for_home_directory
Beware of Version Cue, because it will create an Adobe-software-only environment where everybody has to install the Creative Suite to get access to your files (at least you will have to use Adobe Bridge). On the other hand, there are a multitude of open-source web-based Subversion servers that will give everyone on your team access to your work using a simple browser (RSS feed included).
And I highly recommend reading the link provided by @balexandre "The Ultimate Guide to Version Control for Designers"
A: Subversion is not an ideal solution for binary files; regardless of how little has changed, it will save a new copy each time you check it in. Moreover, although Subversion has some locking capabilities, it doesn't lock by default, which means that if two people modify the same binary file, the one who checks in last will overwrite the other's changes.
Also, there's no tool out there that's as integrated with the Adobe design tools as Version Cue is.
Subversion is great for text-based content, but really really not suited to the kind of files you will be working with.
A: I have used Subversion for this exact thing, and Theo is right, you have to remember to lock your files. I am on CS2 and so have not used Version Cue, but I have not been able to find a whole lot online about other folks using it either, for some reason. The other problem I had using Subversion related to disk space. Subversion stores an alternate "shadow copy" of your files in your working directory. For Photoshop and Illustrator this is normally not that big of a deal, but I was using Premiere and After Effects as well, and the disk space required for the shadow copies doubled my disk usage. You might also check out Gridiron's new Gridiron Flow product, which John Nack raves about. I would love to use it - it's due out about now, and it will likely run in the several-hundred dollar range, I think...
Update 3/6/2009: Gridiron Flow is out, and it does versioning on a single machine, but it's not clear from their demos whether it does collaborative versioning. Also, I just stumbled across this very good comparison of subversion, git & mercurial for managing a home directory - including various versions of large Photoshop files.
A: See this answer (the one by me, and the product is evolphin): Best Versioning Tools to use for Photoshop/Illustrator and related binary files?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I set the name of a window in ROR? How do I "name" a browser window in ROR, such that I can open a page in it later, from another (popup) window (using the target="name" HTML attribute)?
A: You have to use JavaScript for this:
<script type="text/javascript">
window.name = "MyWindow";
</script>
Of course you could easily package this up into a Rails helper method. For example, in app/helpers/application_helper.rb add a new method:
def window_name(name)
content_for(:window_name) do
"<script type=\"text/javascript\">window.name = \"#{name}\";</script>"
end
end
Next, in your layout file, add this line somewhere within the HTML <head> element:
<%= yield :window_name %>
Finally, in your view templates, simply add a line like this (can be anywhere you want) to output the correct JavaScript:
<% window_name 'MyWindow' %>
A: You could try the following:
var x=window.open("", "myWindow");
var y="<head><title>my window</title></head><body>my window</body>";
x.document.write(y);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What are some alternatives to a bit array? I have an information retrieval application that creates bit arrays on the order of 10s of million bits. The number of "set" bits in the array varies widely, from all clear to all set. Currently, I'm using a straight-forward bit array (java.util.BitSet), so each of my bit arrays takes several megabytes.
My plan is to look at the cardinality of the first N bits, then make a decision about what data structure to use for the remainder. Clearly some data structures are better for very sparse bit arrays, and others when roughly half the bits are set (when most bits are set, I can use negation to treat it as a sparse set of zeroes).
*
*What structures might be good at each extreme?
*Are there any in the middle?
Here are a few constraints or hints:
*
*The bits are set only once, and in index order.
*I need 100% accuracy, so something like a Bloom filter isn't good enough.
*After the set is built, I need to be able to efficiently iterate over the "set" bits.
*The bits are randomly distributed, so run-length–encoding algorithms aren't likely to be much better than a simple list of bit indexes.
*I'm trying to optimize memory utilization, but speed still carries some weight.
Something with an open source Java implementation is helpful, but not strictly necessary. I'm more interested in the fundamentals.
A: I would strongly consider using range encoding in place of Huffman coding. In general, range encoding can exploit asymmetry more effectively than Huffman coding, but this is especially so when the alphabet size is so small. In fact, when the "native alphabet" is simply 0s and 1s, the only way Huffman can get any compression at all is by combining those symbols -- which is exactly what range encoding will do, more effectively.
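To see why the tiny alphabet hurts Huffman, here is a rough Python sketch (purely illustrative) comparing the entropy of a skewed bit source with the 1-bit-per-symbol floor that symbol-by-symbol Huffman is stuck with:
import math

def entropy(p):
    # Shannon entropy, in bits per bit, of a source emitting 1 with probability p.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.5, 0.25, 0.1, 0.01):
    # Range/arithmetic coding (or Huffman over grouped symbols) can approach
    # the entropy; Huffman over the raw {0, 1} alphabet can never use less
    # than 1 bit per symbol.
    print(p, round(entropy(p), 3))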
A: Maybe too late for you, but there is a very fast and memory efficient library for sparse bit arrays (lossless) and other data types based on tries. Look at Judy arrays
A: Unless the data is truly random and has a symmetric 1/0 distribution, then this simply becomes a lossless data compression problem and is very analogous to CCITT Group 3 compression used for black and white (i.e.: Binary) FAX images. CCITT Group 3 uses a Huffman Coding scheme. In the case of FAX they are using a fixed set of Huffman codes, but for a given data set, you can generate a specific set of codes for each data set to improve the compression ratio achieved. As long as you only need to access the bits sequentially, as you implied, this will be a pretty efficient approach. Random access would create some additional challenges, but you could probably generate a binary search tree index to various offset points in the array that would allow you to get close to the desired location and then walk in from there.
Note: The Huffman scheme still works well even if the data is random, as long as the 1/0 distribution is not perfectly even. That is, the less even the distribution, the better the compression ratio.
Finally, if the bits are truly random with an even distribution, then, well, according to Mr. Claude Shannon, you are not going to be able to compress it any significant amount using any scheme.
A: Thanks for the answers. This is what I'm going to try for dynamically choosing the right method:
I'll collect all of the first N hits in a conventional bit array, and choose one of three methods, based on the symmetry of this sample.
*
*If the sample is highly asymmetric, I'll simply store the indexes to the set bits (or maybe the distance to the next bit) in a list.
*If the sample is highly symmetric, I'll keep using a conventional bit array.
*If the sample is moderately symmetric, I'll use a lossless compression method like the Huffman coding suggested by InSciTekJeff.
The boundaries between the asymmetric, moderate, and symmetric regions will depend on the time required by the various algorithms balanced against the space they need, where the relative value of time versus space would be an adjustable parameter. The space needed for Huffman coding is a function of the symmetry, and I'll profile that with testing. Also, I'll test all three methods to determine the time requirements of my implementation.
It's possible (and actually I'm hoping) that the middle compression method will always be better than the list or the bit array or both. Maybe I can encourage this by choosing a set of Huffman codes adapted for higher or lower symmetry. Then I can simplify the system and just use two methods.
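Roughly, the selection step could look like this (a purely illustrative Python sketch; the thresholds are made up and would come from the profiling described above):
def choose_representation(sample_bits, low=0.05, high=0.45):
    # sample_bits: the first N bits collected in a conventional bit array.
    density = sum(sample_bits) / len(sample_bits)
    # Treat a mostly-set sample as the sparse-zero case by symmetry.
    asymmetry = min(density, 1.0 - density)
    if asymmetry < low:
        return "index-list"       # store positions (or gaps) of the rare value
    if asymmetry > high:
        return "plain-bit-array"  # close to 50/50, compression won't pay off
    return "huffman-coded"        # moderate skew: entropy coding is worthwhile

print(choose_representation([0] * 990 + [1] * 10))   # index-list
print(choose_representation([0, 1] * 500))           # plain-bit-array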
A: One more compression thought:
If the bit array is not crazy long, you could try applying the Burrows-Wheeler transform before using any repetition encoding, such as Huffman. A naive implementation would take O(n^2) memory during (de)compression and O(n^2 log n) time to decompress - there are almost certainly shortcuts to be had, as well. But if there's any sequential structure to your data at all, this should really help the Huffman encoding out.
You could also apply that idea to one block at a time to keep the time/memory usage more practical. Using one block at time could allow you to always keep most of the data structure compressed if you're reading/writing sequentially.
A: Straight forward lossless compression is the way to go. To make it searchable you will have to compress relatively small blocks and create an index into an array of the blocks. This index can contain the bit offset of the starting bit in each block.
A: Quick combinatoric proof that you can't really save much space:
Suppose you have an arbitrary subset of n/2 bits set to 1 out of n total bits. You have (n choose n/2) possibilities. Using Stirling's formula, this is roughly 2^n / sqrt(n) * sqrt(2/pi). If every possibility is equally likely, then there's no way to give more likely choices shorter representations. So we need log_2 (n choose n/2) bits, which is about n - (1/2)log(n) bits.
That's not a very good savings of memory. For example, if you're working with n=2^20 (1 meg), then you can only save about 10 bits. It's just not worth it.
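A quick check of that arithmetic in Python (using lgamma so the huge binomial never has to be built exactly):
import math

n = 2 ** 20
# log2 of (n choose n/2), computed as ln(n!) - 2*ln((n/2)!) converted to base 2.
log2_choose = (math.lgamma(n + 1) - 2 * math.lgamma(n // 2 + 1)) / math.log(2)
print(n - log2_choose)   # about 10.3 bits saved out of roughly a million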
Having said all that, it also seems very unlikely that any really useful data is truly random. In case there's any more structure to your data, there's probably a more optimistic answer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How to get controls in WPF to fill available space? Some WPF controls (like the Button) seem to happily consume all the available space in their container if you don't specify the height they are to have.
And some, like the ones I need to use right now, the (multiline) TextBox and the ListBox seem more worried about just taking the space necessary to fit their contents, and no more.
If you put these guys in a cell in a UniformGrid, they will expand to fit the available space. However, UniformGrid instances are not right for all situations. What if you have a grid with some rows set to a * height to divide the height between itself and other * rows? What if you have a StackPanel and you have a Label, a List and a Button, how can you get the list to take up all the space not eaten by the label and the button?
I would think this would really be a basic layout requirement, but I can't figure out how to get them to fill the space that they could (putting them in a DockPanel and setting it to fill also doesn't work, it seems, since the DockPanel only takes up the space needed by its subcontrols).
A resizable GUI would be quite horrible if you had to play with Height, Width, MinHeight, MinWidth etc.
Can you bind your Height and Width properties to the grid cell you occupy? Or is there another way to do this?
A: Use the HorizontalAlignment and VerticalAlignment layout properties. They control how an element uses the space it has inside its parent when more room is available than it required by the element.
The width of a StackPanel, for example, will be as wide as the widest element it contains. So, all narrower elements have a bit of excess space. The alignment properties control what the child element does with the extra space.
The default value for both properties is Stretch, so the child element is stretched to fill all available space. Additional options include Left, Center and Right for HorizontalAlignment and Top, Center and Bottom for VerticalAlignment.
A: There are also some properties you can set to force a control to fill its available space when it would otherwise not do so. For example, you can say:
HorizontalContentAlignment="Stretch"
... to force the contents of a control to stretch horizontally. Or you can say:
HorizontalAlignment="Stretch"
... to force the control itself to stretch horizontally to fill its parent.
A: Well, I figured it out myself, right after posting, which is the most embarrassing way. :)
It seems every member of a StackPanel will simply fill its minimum requested size.
In the DockPanel, I had docked things in the wrong order. If the TextBox or ListBox is the only docked item without an alignment, or if they are the last added, they WILL fill the remaining space as wanted.
I would love to see a more elegant method of handling this, but it will do.
A: Each control deriving from Panel implements distinct layout logic performed in Measure() and Arrange():
*
*Measure() determines the size of the panel and each of its children
*Arrange() determines the rectangle where each control renders
The last child of the DockPanel fills the remaining space. You can disable this behavior by setting the LastChildFill property to false.
The StackPanel asks each child for its desired size and then stacks them. The stack panel calls Measure() on each child, with an available size of Infinity and then uses the child's desired size.
A Grid occupies all available space, however, it will set each child to their desired size and then center them in the cell.
You can implement your own layout logic by deriving from Panel and then overriding MeasureOverride() and ArrangeOverride().
See this article for a simple example.
A: Use SizeChanged="OnSizeChanged" in your XAML and then set the sizes you want in the code-behind.
private void OnSizeChanged(object sender, SizeChangedEventArgs e)
{
TheScrollViewer.Height = MainWin.Height - 100;
}
Long term it will be better for you.
When your manager comes along and asks "make that a bit bigger", you won't have to spend the afternoon messing about with layout controls trying to get it to work. Also you won't have to explain WHY you spent the afternoon trying to make it work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "348"
} |
Q: Quoting command-line arguments in shell scripts The following shell script takes a list of arguments, turns Unix paths into WINE/Windows paths and invokes the given executable under WINE.
#! /bin/sh
if [ "${1+set}" != "set" ]
then
echo "Usage; winewrap EXEC [ARGS...]"
exit 1
fi
EXEC="$1"
shift
ARGS=""
for p in "$@";
do
if [ -e "$p" ]
then
p=$(winepath -w $p)
fi
ARGS="$ARGS '$p'"
done
CMD="wine '$EXEC' $ARGS"
echo $CMD
$CMD
However, there's something wrong with the quotation of command-line arguments.
$ winewrap '/home/chris/.wine/drive_c/Program Files/Microsoft Research/Z3-1.3.6/bin/z3.exe' -smt /tmp/smtlib3cee8b.smt
Executing: wine '/home/chris/.wine/drive_c/Program Files/Microsoft Research/Z3-1.3.6/bin/z3.exe' '-smt' 'Z: mp\smtlib3cee8b.smt'
wine: cannot find ''/home/chris/.wine/drive_c/Program'
Note that:
*
*The path to the executable is being chopped off at the first space, even though it is single-quoted.
*The literal "\t" in the last path is being transformed into a tab character.
Obviously, the quotations aren't being parsed the way I intended by the shell. How can I avoid these errors?
EDIT: The "\t" is being expanded through two levels of indirection: first, "$p" (and/or "$ARGS") is being expanded into Z:\tmp\smtlib3cee8b.smt; then, \t is being expanded into the tab character. This is (seemingly) equivalent to
Y='y\ty'
Z="z${Y}z"
echo $Z
which yields
zy\tyz
and not
zy yz
UPDATE: eval "$CMD" does the trick. The "\t" problem seems to be echo's fault: "If the first operand is -n, or if any of the operands contain a backslash ( '\' ) character, the results are implementation-defined." (POSIX specification of echo)
A: *
*bash’s arrays are unportable but the only sane way to handle argument lists in shell
*The number of arguments is in ${#}
*Bad stuff will happen with your script if there are filenames starting with a dash in the current directory
*If the last line of your script just runs a program, and there are no traps on exit, you should exec it
With that in mind
#! /bin/bash
# push ARRAY arg1 arg2 ...
# adds arg1, arg2, ... to the end of ARRAY
function push() {
local ARRAY_NAME="${1}"
shift
for ARG in "${@}"; do
eval "${ARRAY_NAME}[\${#${ARRAY_NAME}[@]}]=\${ARG}"
done
}
PROG="$(basename -- "${0}")"
if (( ${#} < 1 )); then
# Error messages should state the program name and go to stderr
echo "${PROG}: Usage: winewrap EXEC [ARGS...]" 1>&2
exit 1
fi
EXEC=("${1}")
shift
for p in "${@}"; do
if [ -e "${p}" ]; then
p="$(winepath -w -- "${p}")"
fi
push EXEC "${p}"
done
exec "${EXEC[@]}"
A: If you do want to have the assignment to CMD, you should use
eval $CMD
instead of just $CMD in the last line of your script. This should solve your problem with spaces in the paths, I don't know what to do about the "\t" problem.
A: You can try preceding the spaces with \ like so:
/home/chris/.wine/drive_c/Program Files/Microsoft\ Research/Z3-1.3.6/bin/z3.exe
You can also do the same with your \t problem - replace it with \\t.
A: replace the last line from $CMD to just
wine '$EXEC' $ARGS
You'll note that the error is ''/home/chris/.wine/drive_c/Program' and not '/home/chris/.wine/drive_c/Program'
The single quotes are not being interpolated properly, and the string is being split by spaces.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Variable Holding data in a while statement I know I must be missing something, but in a while statement how does the variable hold the data, when it finishes the first pass and goes into the second pass?
{
    int num1 = 0;
    int num2 = 0;
    int num3 = 0;
    while (num1 < 10)
    {
        cout << "enter your first number: ";
        cin >> num1;
        cout << "Enter your second number: ";
        cin >> num2;
        num1 = num1 + num2;
        cout << "Number 1 is now: " << num1 << endl;
        cout << "Enter Number 3: ";
        cin >> num3;
        num1 = num1 + num3;
        cout << "Number 1 is now: " << num1 << endl;
        num1++;
    }
}
In this code, the variable doesn't hold the data. I'm not sure what I'm doing wrong!
A: I'm not clear exactly what you're asking, but variables will maintain their value for each iteration of a loop, as long as they're declared outside of the loop itself. For example:
int a = 0;
while(a < 10)
{
int b = 0;
cout << "a: " << a << " b: " << b << "\n";
a++;
b++;
}
In the above, the value output for b will always be 0, as it's declared inside the loop and is being reinitialized each time, whereas a will maintain its value and get incremented each iteration. If b were an object, rather than an int, its constructor and destructor would get called each iteration.
A: Is num1 the variable you're having trouble with? This line:
cin >> num1;
is setting num1 to the value input by the user. So the value calculated for it in the previous run through the loop is being overwritten each time by the new input.
A: I'm not sure I understand your question. In C, any data that's not overwritten is carried over into the next iteration of the loop, and I imagine that C++ works much the same way.
A: Do you understand how when you say "num1" you're referring to the same variable each time, and that each time you change num1 you replace the previous value?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What is the most efficient way to keep track of a specific character's index in a string? Take the following string as an example:
"The quick brown fox"
Right now the q in quick is at index 4 of the string (starting at 0) and the f in fox is at index 16. Now lets say the user enters some more text into this string.
"The very quick dark brown fox"
Now the q is at index 9 and the f is at index 26.
What is the most efficient method of keeping track of the index of the original q in quick and f in fox no matter how many characters are added by the user?
Language doesn't matter to me, this is more of a theory question than anything so use whatever language you want just try to keep it to generally popular and current languages.
The sample string I gave is short, but I'm hoping for a way that can efficiently handle any size string. So updating an array with the offset would work with a short string but will bog down with too many characters.
Even though in the example I was looking for the index of unique characters in the string I also want to be able to track the index of the same character in different locations such as the o in brown and the o in fox. So searching is out of the question.
I was hoping for the answer to be both time and memory efficient but if I had to choose just one I care more about performance speed.
A: Your question is a little ambiguous - are you looking to keep track of the first instances of every letter? If so, an array of length 26 might be the best option.
Whenever you insert text into a string at a position lower than the index you have, just compute the offset based on the length of the inserted string.
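In its simplest form that bookkeeping might look like the following sketch (hypothetical Python, O(k) per insertion for k tracked characters):
tracked = {"q": 4, "f": 16}          # character -> index in "The quick brown fox"

def text_inserted(at, length):
    # Shift every tracked index that sits at or after the insertion point.
    for ch, idx in tracked.items():
        if idx >= at:
            tracked[ch] = idx + length

text_inserted(4, 5)    # "very " inserted before "quick"
text_inserted(15, 5)   # "dark " inserted before "brown"
print(tracked)         # {'q': 9, 'f': 26}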
A: Let's say that you have a string and some of its letters are interesting. To make things easier let's say that the letter at index 0 is always interesting and you never add something before it—a sentinel. Write down pairs of (interesting letter, distance to the previous interesting letter). If the string is "+the very Quick dark brown Fox" and you are interested in q from 'quick' and f from 'fox' then you would write: (+,0), (q,10), (f,17). (The sign + is the sentinel.)
Now you put these in a balanced binary tree whose in-order traversal gives the sequence of letters in the order they appear in the string. You might now recognize the partial sums problem: You enhance the tree so that nodes contain (letter, distance, sum). The sum is the sum of all distances in the left subtree. (Therefore sum(x)=distance(left(x))+sum(left(x)).)
You can now query and update this data structure in logarithmic time.
To say that you added n characters to the left of character c, you say distance(c)+=n and then go and update sum for all parents of c.
To ask what is the index of c you compute sum(c)+sum(parent(c))+sum(parent(parent(c)))+...
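Here is a small Python sketch of that idea, using a Fenwick (binary indexed) tree over the gaps instead of an explicit balanced tree; updates and index queries are both logarithmic:
class TrackedPositions:
    """Fenwick tree over the gaps between tracked characters (1-based)."""
    def __init__(self, gaps):
        # gaps[0] is the distance from the start of the string to the first
        # tracked character; gaps[i] is the distance from tracked char i to i+1.
        self.n = len(gaps)
        self.tree = [0] * (self.n + 1)
        for i, g in enumerate(gaps, start=1):
            self._add(i, g)

    def _add(self, i, delta):
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def insert_before(self, k, length):
        # 'length' characters were typed somewhere before tracked character k
        # (and after tracked character k-1).
        self._add(k, length)

    def index_of(self, k):
        # Absolute index of tracked character k = sum of gaps 1..k.
        s = 0
        while k:
            s += self.tree[k]
            k -= k & -k
        return s

# "The quick brown fox": track 'q' (index 4) and 'f' (index 16) as gaps 4 and 12.
t = TrackedPositions([4, 12])
t.insert_before(1, len("very "))   # -> "The very quick brown fox"
t.insert_before(2, len("dark "))   # -> "The very quick dark brown fox"
print(t.index_of(1), t.index_of(2))   # 9 26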
A: It would also help if you had a target language in mind as not all data structures and interactions are equally efficient and effective in all languages.
A: The standard trick that usually helps in similar situations is to keep the characters of the string as leaves in a balanced binary tree. Additionally, internal nodes of the tree should keep sets of letters (if the alphabet is small and fixed, they could be bitmaps) that occur in the subtree rooted at a particular node.
Inserting or deleting a letter into this structure only needs O(log(N)) operations (update the bitmaps on the path to root) and finding the first occurence of a letter also takes O(log(N)) operations - you descend from the root, going for the leftmost child whose bitmap contains the interesting letter.
Edit: The internal nodes should also keep number of leaves in the represented subtree, for efficient computation of the letter's index.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Any recommended VC++ settings for better PDB analysis on release builds Are there any VC++ settings I should know about to generate better PDB files that contain more information?
I have a crash dump analysis system in place based on the project crashrpt.
Also, my production build server has the source code installed on the D:\, but my development machine has the source code on the C:\. I entered the source path in the VC++ settings, but when looking through the call stack of a crash, it doesn't automatically jump to my source code. I believe if I had my dev machine's source code on the D:\ it would work.
A:
"Are there any VC++ settings I should know about"
Make sure you turn off frame pointer omission (FPO). Larry Osterman's blog has the historical details about FPO and the issues it causes with debugging.
Symbols are loaded successfully. It shows the callstack, but double clicking on an entry doesn't bring me to the source code.
What version of VS are you using? (Or are you using WinDbg?) ... In VS it should definitely prompt for source the first time if it doesn't find the location. However, it also keeps a list of source that was 'not found' so it doesn't ask you for it every time. Sometimes that 'don't look' list is a pain ... to get the prompt back up you need to go to Solution Explorer / solution node / Properties / Debug properties and edit the file list in the lower pane.
Finally you might be using 'stripped symbols'. These are pdb files generated to provide debug info for walking the callstack past FPO, but with source locations stripped out (along with other data). The public symbols for windows OS components are stripped pdbs. For your own code these simply cause pain and are not worth it unless you are providing your pdbs to externals. How would you have one of these horrible stripped pdbs? You might have them if you use "binplace" with the -a command.
Good luck! A proper mini dump story is a godsend for production debugging.
A: If you build directly from your source code management system, you should annotate your PDB files with the file origins. This allows you to automatically fetch the exact source files while debugging. (This is the same process used for retrieving the .NET Framework source code.)
See http://msdn.microsoft.com/en-us/magazine/cc163563.aspx for more information. If you use subversion as your SCM you can check out the SourceServerSharp project.
A: You could trying using the MS-DOS subst command to assign your source code directory to the D: drive.
A: This is the procedure I used after some trouble similar to yours:
a) Copied to the production server all the EXE & DLL files that were built, each with its corresponding PDB to the same directory, started the system, and waited for the crash to happen.
b) Copied back all the EXE, DLL & PDB files to the development machine (to a temporary folder) along with the minidump (in the same folder). Used Visual Studio to load the minidump from that folder.
Since VS found the source files where they were originally compiled, it was always able to identify them and load them correctly. As with you, in the production machine the drive used was not C:, but in the development machine it was.
Two more tips:
*
*One thing I did often was to copy an EXE/DLL rebuilt and forget to copy the new PDB. This ruined the debug cycle, VS would not be able to show me the call stack.
*Sometimes, I got a call stack that didn't make sense in VS. After some headache, I discovered that windbg would always show me the correct stack, but VS often wouldn't. Don't know why.
A: In case anyone is interested, a co-worker replied to this question to me via email:
Artem wrote:
There is a flag to MiniDumpWriteDump() that can do better crash dumps that will allow seeing full program state, with all global variables, etc. As for call stacks, I doubt they can be better because of optimizations... unless you turn (maybe some) optimizations off.
Also, I think disabling inline functions and whole program optimization will help quite a lot.
In fact, there are many dump types, maybe you could choose one small enough but still having more info: http://msdn.microsoft.com/en-us/library/ms680519(VS.85).aspx
Those types won't help with call stack though, they only affect the amount of variables you'll be able to see.
I noticed some of those dump types aren't supported in dbghelp.dll version 5.1 that we use. We could update it to the newest, 6.9 version though, I've just checked the EULA for MS Debugging Tools -- the newest dbghelp.dll is still ok to redistribute.
A: Is Visual Studio prompting you for the path to the source file? If it isn't then it doesn't think it has symbols for the callstack. Setting the source path should work without having to map the exact original location.
You can tell if symbols are loaded by looking at the 'modules' window in Visual Studio.
Assuming you are building a PDB then I don't think there are any options that control the amount of information in the PDB directly. You can change the type of optimizations performed by the compiler to improve debuggability, but this will cost performance -- as your co-worker points out, disabling inline will help make things more obvious in the crash file, but will cost at runtime.
Depending on the nature of your application I would recommend working with full dump files if you can, they are bigger, but give you all the information about the process ... and how often does it crash anyway :)
A:
Is Visual Studio prompting you for the path to the source file?
No.
If it isn't then it doesn't think it has symbols for the callstack. Setting the source path should work without having to map the exact original location.
Symbols are loaded successfully. It shows the callstack, but double clicking on an entry doesn't bring me to the source code. I can of course search in files for the line in question, but this is hard work :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: What are some real life examples of Design Patterns used in software I'm reading through Head First Design Patterns at the moment, and while the book is excellent, I would also like to see how these are actually used in the real world.
If you know of a good example of design pattern usage (preferably in an OSS program so we can have a look :) then please list it below.
A: An ah-ha moment for me with the Observer pattern was realizing how closely associated it is with events. Consider a Windows program that needs to achieve loosely coupled communication between two forms.
The code below shows how Form2 fires an event and any other class registered as an observer gets its data.
See this link for a great patterns resource:
http://sourcemaking.com/design-patterns-and-tips
Form1's code:
namespace PublishSubscribe
{
public partial class Form1 : Form
{
Form2 f2 = new Form2();
public Form1()
{
InitializeComponent();
f2.PublishData += new PublishDataEventHandler( DataReceived );
f2.Show();
}
private void DataReceived( object sender, Form2EventArgs e )
{
MessageBox.Show( e.OtherData );
}
}
}
Form2's code
namespace PublishSubscribe
{
public delegate void PublishDataEventHandler( object sender, Form2EventArgs e );
public partial class Form2 : Form
{
public event PublishDataEventHandler PublishData;
public Form2()
{
InitializeComponent();
}
private void button1_Click( object sender, EventArgs e )
{
PublishData( this, new Form2EventArgs( "data from form2" ) );
}
}
public class Form2EventArgs : System.EventArgs
{
public string OtherData;
public Form2EventArgs( string OtherData )
{
this.OtherData = OtherData;
}
}
}
A: I use Passive View, a flavor of the Model-View-Presenter pattern, with any Web Forms-style development (.NET) to increase testability/maintainability/etc.
For example, your code-behind file might look something like this
Partial Public Class _Default
Inherits System.Web.UI.Page
Implements IProductView
Private presenter As ProductPresenter
Protected Overrides Sub OnInit(ByVal e As System.EventArgs)
MyBase.OnInit(e)
presenter = New ProductPresenter(Me)
End Sub
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
presenter.OnViewLoad()
End Sub
Private ReadOnly Property PageIsPostBack() As Boolean Implements IProductView.PageIsPostBack
Get
Return Page.IsPostBack
End Get
End Property
Public Property Products() As System.Collections.Generic.List(Of Product) Implements Library.IProductView.Products
Get
Return DirectCast(gridProducts.DataSource(), List(Of Product))
End Get
Set(ByVal value As System.Collections.Generic.List(Of Product))
gridProducts.DataSource = value
gridProducts.DataBind()
End Set
End Property
End Class
This code behind is acting as a very thin view with zero logic. This logic is instead pushed into a presenter class that can be unit tested.
Public Class ProductPresenter
Private mView As IProductView
Private mProductService As IProductService
Public Sub New(ByVal View As IProductView)
Me.New(View, New ProductService())
End Sub
Public Sub New(ByVal View As IProductView, ByVal ProductService As IProductService)
mView = View
mProductService = ProductService
End Sub
Public Sub OnViewLoad()
If mView.PageIsPostBack = False Then
PopulateProductsList()
End If
End Sub
Public Sub PopulateProductsList()
Try
Dim ProductList As List(Of Product) = mProductService.GetProducts()
mView.Products = ProductList
Catch ex As Exception
Throw
End Try
End Sub
End Class
A: Use code.google.com
For example the search result for "Factory" will get you a lot of cases where the factory Pattern is implemented.
A: The Chain of Responsibility pattern is implemented in the handling of DOM events. For example (and simplifying slightly), when an element is clicked, that element gets the first opportunity to handle the event, and then each ancestor in turn until the top-level document is reached or one of them explicitly stops the event "bubbling" any further.
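The same shape, stripped of the DOM specifics, in a short illustrative Python sketch (the handler names are invented):
class Handler:
    def __init__(self, name, parent=None, handles=False):
        self.name, self.parent, self.handles = name, parent, handles

    def dispatch(self, event):
        # Give this handler first shot, then let the event "bubble" upwards.
        if self.handles:
            print(f"{self.name} handled {event}")
            return                      # stop propagation
        if self.parent:
            self.parent.dispatch(event)
        else:
            print(f"{event} reached the top unhandled")

document = Handler("document", handles=True)
div = Handler("div", parent=document)
button = Handler("button", parent=div)
button.dispatch("click")                # -> "document handled click"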
A: C#, Java and Python have a standard implementation of the Iterator pattern. In C# and Python this has been integrated into the language, so you can just use yield return statements (or plain yield in Python).
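For example, in Python a generator expresses the pattern in a couple of lines:
def countdown(n):
    # Each 'yield' hands back the next element; state is kept between calls.
    while n > 0:
        yield n
        n -= 1

for value in countdown(3):
    print(value)        # 3, 2, 1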
A: Template pattern is commonly used in the implementation of dotnet events to set up preconditions and respond to postconditions. The degenerate case is
void FireMyEvent(object sender, EventArgs e)
{
if (_myEvent != null) _myEvent(sender, e);
}
in which the precondition is checked. In this case the precondition is that handlers can be invoked only when at least one has been bound. (Please don't tell me I should invoke the handlers asynchronously. I know that. I am illustrating Template pattern, not asynchronous programming technique.)
A more elaborate precondition might involve checking a property that governs the firing of events.
Template pattern is also commonly used to implement hooks, for example
public virtual void BeforeOpenFile(string filepath)
{
//stub
}
public virtual void AfterOpenFile(string filepath)
{
//stub
}
public void OpenFile(string filepath)
{
BeforeOpenFile(filepath); //do user customisable pre-open bits
//do standard bits here
AfterOpenFile(filepath); //do user customisable post-open bits
}
A: If you're familiar with Python, check out the Twisted framework.
http://twistedmatrix.com/trac/
A: Perhaps a good example, as pointed out in Head First Design Patterns too, is the Java Swing API, which implements the Observer pattern. More specifically, the JButton (or the superclass AbstractButton) is the Observable class and provides methods to add and remove "Observers", or "Listeners" as they are called in Swing.
A: Composite is used extensively in UI. Components can be leaf components (e.g. buttons and labels) or composites (e.g. panels) that can contain other leaf or composite components. From the point of view of the client, all components are treated the same, which greatly simplifies the client code.
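A bare-bones illustrative sketch of that uniform treatment (Python, with made-up class names):
class Label:
    def __init__(self, text):
        self.text = text

    def render(self, indent=0):
        print(" " * indent + f"Label({self.text})")

class Panel:
    def __init__(self, *children):
        self.children = list(children)   # leaves or other composites

    def render(self, indent=0):
        print(" " * indent + "Panel")
        for child in self.children:
            child.render(indent + 2)     # client code treats both the same

Panel(Label("Name"), Panel(Label("OK"), Label("Cancel"))).render()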
A: The Command pattern is used everywhere you have Undo functionality.
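For instance, a minimal Python sketch of a command object with undo support (names are made up):
class AppendCommand:
    def __init__(self, doc, text):
        self.doc, self.text = doc, text

    def execute(self):
        self.doc.append(self.text)

    def undo(self):
        self.doc.pop()

doc, history = [], []
for word in ("hello", "world"):
    cmd = AppendCommand(doc, word)
    cmd.execute()
    history.append(cmd)          # the undo stack

history.pop().undo()             # undo the last command
print(doc)                       # ['hello']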
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How to sort a list of strings? What is the best way of creating an alphabetically sorted list in Python?
A:
But how does this handle language specific sorting rules? Does it take locale into account?
No, list.sort() is a generic sorting function. If you want to sort according to the Unicode rules, you'll have to define a custom sort key function. You can try using the pyuca module, but I don't know how complete it is.
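If you do try pyuca, the usage looks roughly like this (assuming its Collator API):
from pyuca import Collator   # pip install pyuca

c = Collator()
print(sorted([u"café", u"casa", u"cafe"], key=c.sort_key))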
A: It is also worth noting the sorted() function:
for x in sorted(list):
print x
This returns a new, sorted version of a list without changing the original list.
A: Basic answer:
mylist = ["b", "C", "A"]
mylist.sort()
This modifies your original list (i.e. sorts in-place). To get a sorted copy of the list, without changing the original, use the sorted() function:
for x in sorted(mylist):
print x
However, the examples above are a bit naive, because they don't take locale into account, and perform a case-sensitive sorting. You can take advantage of the optional parameter key to specify custom sorting order (the alternative, using cmp, is a deprecated solution, as it has to be evaluated multiple times - key is only computed once per element).
So, to sort according to the current locale, taking language-specific rules into account (cmp_to_key is a helper function from functools):
sorted(mylist, key=cmp_to_key(locale.strcoll))
And finally, if you need, you can specify a custom locale for sorting:
import locale
from functools import cmp_to_key
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8') # vary depending on your lang/locale
assert sorted((u'Ab', u'ad', u'aa'),
key=cmp_to_key(locale.strcoll)) == [u'aa', u'Ab', u'ad']
Last note: you will see examples of case-insensitive sorting which use the lower() method - those are incorrect, because they work only for the ASCII subset of characters. Those two are wrong for any non-English data:
# this is incorrect!
mylist.sort(key=lambda x: x.lower())
# alternative notation, a bit faster, but still wrong
mylist.sort(key=str.lower)
A: list.sort()
It really is that simple :)
A: The proper way to sort strings is:
import locale
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8') # vary depending on your lang/locale
assert sorted((u'Ab', u'ad', u'aa'), cmp=locale.strcoll) == [u'aa', u'Ab', u'ad']
# Without using locale.strcoll you get:
assert sorted((u'Ab', u'ad', u'aa')) == [u'Ab', u'aa', u'ad']
The previous example of mylist.sort(key=lambda x: x.lower()) will work fine for ASCII-only contexts.
A: Please use the sorted() function in Python 3:
items = ["love", "like", "play", "cool", "my"]
sorted(items)
A: Old question, but if you want to do locale-aware sorting without setting locale.LC_ALL you can do so by using the PyICU library as suggested by this answer:
import icu # PyICU
def sorted_strings(strings, locale=None):
if locale is None:
return sorted(strings)
collator = icu.Collator.createInstance(icu.Locale(locale))
return sorted(strings, key=collator.getSortKey)
Then call with e.g.:
new_list = sorted_strings(list_of_strings, "de_DE.utf8")
This worked for me without installing any locales or changing other system settings.
(This was already suggested in a comment above, but I wanted to give it more prominence, because I missed it myself at first.)
A: l =['abc' , 'cd' , 'xy' , 'ba' , 'dc']
l.sort()
print(l1)
Result
['abc', 'ba', 'cd', 'dc', 'xy']
A: Suppose s = "ZWzaAd"
To sort the above string, the simple solution is the one below.
print ''.join(sorted(s))
A:
Or maybe:
names = ['Jasmine', 'Alberto', 'Ross', 'dig-dog']
print ("The solution for this is about this names being sorted:",sorted(names, key=lambda name:name.lower()))
A: It is simple:
https://trinket.io/library/trinkets/5db81676e4
scores = '54 - Alice,35 - Bob,27 - Carol,27 - Chuck,05 - Craig,30 - Dan,27 - Erin,77 - Eve,14 - Fay,20 - Frank,48 - Grace,61 - Heidi,03 - Judy,28 - Mallory,05 - Olivia,44 - Oscar,34 - Peggy,30 - Sybil,82 - Trent,75 - Trudy,92 - Victor,37 - Walter'
scores = scores.split(',')
for x in sorted(scores):
print(x)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "477"
} |
Q: How do you make a post request into a new browser tab using JavaScript / XUL? I'm trying to open a new browser tab with the results of a POST request. I'm trying to do so using a function containing the following code:
var windowManager = Components.classes["@mozilla.org/appshell/window-mediator;1"]
                              .getService(Components.interfaces.nsIWindowMediator);
var browserWindow = windowManager.getMostRecentWindow("navigator:browser");
var browser = browserWindow.getBrowser();
if(browser.mCurrentBrowser.currentURI.spec == "about:blank")
browserWindow.loadURI(url, null, postData, false);
else
browser.loadOneTab(url, null, null, postData, false, false);
I'm using a string as url, and JSON data as postData. Is there something I'm doing wrong?
What happens, is a new tab is created, the location shows the URL I want to post to, but the document is blank. The Back, Forward, and Reload buttons are all grayed out on the browser. It seems like it did everything except executed the POST. If I leave the postData parameter off, then it properly runs a GET.
Build identifier: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.1) Gecko/2008070206 Firefox/3.0.1
A: Something which is less Mozilla specific and should work reasonably well with most of the browsers:
*
*Create a hidden form with the fields set up the way you need them
*Make sure that the "target" attribute of the form is set to "_BLANK"
*Submit the form programatically
A: The answer to this was found by shog9. The postData parameter needs to be a nsIMIMEInputStream object as detailed in here.
A: try with addTab instead of loadOneTab, and remove the last parameter.
Check out this page over at the Mozilla Development Center for information on how to open tabs.
You could use this function, for example:
function openAndReuseOneTabPerURL(url) {
var wm = Components.classes["@mozilla.org/appshell/window-mediator;1"]
.getService(Components.interfaces.nsIWindowMediator);
var browserEnumerator = wm.getEnumerator("navigator:browser");
// Check each browser instance for our URL
var found = false;
while (!found && browserEnumerator.hasMoreElements()) {
var browserInstance = browserEnumerator.getNext().getBrowser();
// Check each tab of this browser instance
var numTabs = browserInstance.tabContainer.childNodes.length;
for(var index=0; index<numTabs; index++) {
var currentBrowser = browserInstance.getBrowserAtIndex(index);
if ("about:blank" == currentBrowser.currentURI.spec) {
// The URL is already opened. Select this tab.
browserInstance.selectedTab = browserInstance.tabContainer.childNodes[index];
// Focus *this* browser
browserInstance.focus();
found = true;
break;
}
}
}
// Our URL isn't open. Open it now.
if (!found) {
var recentWindow = wm.getMostRecentWindow("navigator:browser");
if (recentWindow) {
// Use an existing browser window
recentWindow.delayedOpenTab(url, null, null, null, null);
}
else {
// No browser windows are open, so open a new one.
window.open(url);
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do you unsubscribe from a ubiquity command I can't seem to find details on how to unsubscribe from ubiquity commands. The command list page only seems to have information about the installed commands and there are no links to deleting them. Am I missing something?
A: Go to about:ubiquity in Firefox. Under the section "subscribed feeds" there should be an option to unsubscribe to command feeds you no longer desire.
Also, if you clear your entire browser history, it will delete all command feeds (this will be fixed by 0.2)
A: The way to delete commands is to find them in the Subscribed Feeds section of the main help page:
*
*ubiq help | about:ubiquity
*Scroll down to "Subscribed Feeds" in the right hand column
*Click '[unsubscribe]' for the one you want to delete.
*Profit!
A: Check this out:
http://getsatisfaction.com/mozilla/topics/how_do_you_edit_delete_the_default_ubiquity_commands_verbs
Also, you can find a utility to reset your ubiquity to default in \extensions\[email protected]\chrome\content\reset.html
EDIT1: The above file does nothing, please ignore.
EDIT2 (answer?): You can use the ffx addon SQLite Manager to open the \ubiquity_ann.sqlite database and remove all rows for the command you want to delete (there will be several rows, but they are identified by the url the script came from and so easy to identify). When you restart firefox, the command will be gone.
(The code that populates the Commands page uses javascript to create an instance of UbiquitySetup object, and executes the method .createServices().commandSource which reads that SQLite database directly for a list of commands (returning an object or array of objects/commands to iterate through). This seems to delete the command itself, since it will no longer be found in ubiquity or the command list.)
Good luck.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How expensive is ST_GeomFromText In postgis, is the ST_GeomFromText call very expensive? I ask mostly because I have a frequently called query that attempts to find the point that is nearest another point that matches some criteria, and which is also within a certain distance of that other point, and the way I currently wrote it, it's doing the same ST_GeomFromText twice:
$findNearIDMatchStmt = $postconn->prepare(
"SELECT internalid " .
"FROM waypoint " .
"WHERE id = ? AND " .
" category = ? AND ".
" (b.category in (1, 3) OR type like ?) AND ".
" ST_DWithin(point, ST_GeomFromText(?," . SRID .
" ),". SMALL_EPSILON . ") " .
" ORDER BY ST_Distance(point, ST_GeomFromText(?,", SRID .
" )) " .
" LIMIT 1");
Is there a better way to re-write this?
Slightly OT: In the preview screen, all my underscores are being rendered as & # 9 5 ; - I hope that's not going to show up that way in the post.
A: I don't believe ST_GeomFromText() is particularly expensive, although in the past I've optimized PostGIS queries by creating a function, declaring a variable and then assigning the result of ST_GeomFromText to the variable.
Have you tried checking the execution plan for your query with a variety of different parameters, because that should give you a definite idea of which bits of the query are taking the time?
I'm guessing most of the execution time will be in the calls to ST_DWithin() and ST_Distance(), although if the id and category columns aren't indexed then it might be doing some interesting table scanning.
A: @Ubiguch
It appears that ST_DWithin uses the spatial index, so that seems to cut down on the number of points to be queried pretty quickly.
navaid=> explain select internalid from waypoint where id != 'KROC' AND ST_DWithin(point, ST_GeomFromText('POINT(-77.6723888888889 43.1188611111111)',4326), 0.05) order by st_distance(point, st_geomfromtext('POINT(-77.6723888888889 43.1188611111111)',4326)) limit 1;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=8.37..8.38 rows=1 width=104)
-> Sort (cost=8.37..8.38 rows=1 width=104)
Sort Key: (st_distance(point, '0101000020E61000002FFE676B086B53C0847E44D7368F4540'::geometry))
-> Index Scan using waypoint_point_idx on waypoint (cost=0.00..8.36 rows=1 width=104)
Index Cond: (point && '0103000020E61000000100000005000000000000C03B6E53C000000060D0884540000000C03B6E53C0000000409D95454000000020D56753C0000000409D95454000000020D56753C000000060D0884540000000C03B6E53C000000060D0884540'::geometry)
Filter: (((id)::text <> 'KROC'::text) AND (point && '0103000020E61000000100000005000000000000C03B6E53C000000060D0884540000000C03B6E53C0000000409D95454000000020D56753C0000000409D95454000000020D56753C000000060D0884540000000C03B6E53C000000060D0884540'::geometry) AND ('0101000020E61000002FFE676B086B53C0847E44D7368F4540'::geometry && st_expand(point, 0.05::double precision)) AND (st_distance(point, '0101000020E61000002FFE676B086B53C0847E44D7368F4540'::geometry) < 0.05::double precision))
(6 rows)
Without the order by and the limit, it looks like a typical query is only returning 5-10 waypoints max. So I probably shouldn't worry about the additional cost of the filter that's applied to the points returned.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Replacing the nth instance of a regex match in Javascript I'm trying to write a regex function that will identify and replace a single instance of a match within a string without affecting the other instances. For example, I have this string:
12||34||56
I want to replace the second set of pipes with ampersands to get this string:
12||34&&56
The regex function needs to be able to handle x amount of pipes and allow me to replace the nth set of pipes, so I could use the same function to make these replacements:
23||45||45||56||67 -> 23&&45||45||56||67
23||34||98||87 -> 23||34||98&&87
I know that I could just split/replace/concat the string at the pipes, and I also know that I can match on /\|\|/ and iterate through the resulting array, but I'm interested to know if it's possible to write a single expression that can do this. Note that this would be for Javascript, so it's possible to generate a regex at runtime using eval(), but it's not possible to use any Perl-specific regex instructions.
A: A more general-purpose function
I came across this question and, although the title is very general, the accepted answer handles only the question's specific use case.
I needed a more general-purpose solution, so I wrote one and thought I'd share it here.
Usage
This function requires that you pass it the following arguments:
*
*original: the string you're searching in
*pattern: either a string to search for, or a RegExp with a capture group. Without a capture group, it will throw an error. This is because the function calls split on the original string, and only if the supplied RegExp contains a capture group will the resulting array contain the matches.
*n: the ordinal occurrence to find; eg, if you want the 2nd match, pass in 2
*replace: Either a string to replace the match with, or a function which will take in the match and return a replacement string.
Examples
// Pipe examples like the OP's
replaceNthMatch("12||34||56", /(\|\|)/, 2, '&&') // "12||34&&56"
replaceNthMatch("23||45||45||56||67", /(\|\|)/, 1, '&&') // "23&&45||45||56||67"
// Replace groups of digits
replaceNthMatch("foo-1-bar-23-stuff-45", /(\d+)/, 3, 'NEW') // "foo-1-bar-23-stuff-NEW"
// Search value can be a string
replaceNthMatch("foo-stuff-foo-stuff-foo", "foo", 2, 'bar') // "foo-stuff-bar-stuff-foo"
// No change if there is no match for the search
replaceNthMatch("hello-world", "goodbye", 2, "adios") // "hello-world"
// No change if there is no Nth match for the search
replaceNthMatch("foo-1-bar-23-stuff-45", /(\d+)/, 6, 'NEW') // "foo-1-bar-23-stuff-45"
// Passing in a function to make the replacement
replaceNthMatch("foo-1-bar-23-stuff-45", /(\d+)/, 2, function(val){
//increment the given value
return parseInt(val, 10) + 1;
}); // "foo-1-bar-24-stuff-45"
The Code
var replaceNthMatch = function (original, pattern, n, replace) {
var parts, tempParts;
if (pattern.constructor === RegExp) {
// If there's no match, bail
if (original.search(pattern) === -1) {
return original;
}
// Every other item should be a matched capture group;
// between will be non-matching portions of the substring
parts = original.split(pattern);
// If there was a capture group, index 1 will be
// an item that matches the RegExp
if (parts[1].search(pattern) !== 0) {
throw {name: "ArgumentError", message: "RegExp must have a capture group"};
}
} else if (pattern.constructor === String) {
parts = original.split(pattern);
// Need every other item to be the matched string
tempParts = [];
for (var i=0; i < parts.length; i++) {
tempParts.push(parts[i]);
// Insert between, but don't tack one onto the end
if (i < parts.length - 1) {
tempParts.push(pattern);
}
}
parts = tempParts;
} else {
throw {name: "ArgumentError", message: "Must provide either a RegExp or String"};
}
// Parens are unnecessary, but explicit. :)
var indexOfNthMatch = (n * 2) - 1;
if (parts[indexOfNthMatch] === undefined) {
// There IS no Nth match
return original;
}
if (typeof(replace) === "function") {
// Call it. After this, we don't need it anymore.
replace = replace(parts[indexOfNthMatch]);
}
// Update our parts array with the new value
parts[indexOfNthMatch] = replace;
// Put it back together and return
return parts.join('');
}
An Alternate Way To Define It
The least appealing part of this function is that it takes 4 arguments. It could be simplified to need only 3 arguments by adding it as a method to the String prototype, like this:
String.prototype.replaceNthMatch = function(pattern, n, replace) {
// Same code as above, replacing "original" with "this"
};
If you do that, you can call the method on any string, like this:
"foo-bar-foo".replaceNthMatch("foo", 2, "baz"); // "foo-bar-baz"
Passing Tests
The following are the Jasmine tests that this function passes.
describe("replaceNthMatch", function() {
describe("when there is no match", function() {
it("should return the unmodified original string", function() {
var str = replaceNthMatch("hello-there", /(\d+)/, 3, 'NEW');
expect(str).toEqual("hello-there");
});
});
describe("when there is no Nth match", function() {
it("should return the unmodified original string", function() {
var str = replaceNthMatch("blah45stuff68hey", /(\d+)/, 3, 'NEW');
expect(str).toEqual("blah45stuff68hey");
});
});
describe("when the search argument is a RegExp", function() {
describe("when it has a capture group", function () {
it("should replace correctly when the match is in the middle", function(){
var str = replaceNthMatch("this_937_thing_38_has_21_numbers", /(\d+)/, 2, 'NEW');
expect(str).toEqual("this_937_thing_NEW_has_21_numbers");
});
it("should replace correctly when the match is at the beginning", function(){
var str = replaceNthMatch("123_this_937_thing_38_has_21_numbers", /(\d+)/, 2, 'NEW');
expect(str).toEqual("123_this_NEW_thing_38_has_21_numbers");
});
});
describe("when it has no capture group", function() {
it("should throw an error", function(){
expect(function(){
replaceNthMatch("one_1_two_2", /\d+/, 2, 'NEW');
}).toThrow('RegExp must have a capture group');
});
});
});
describe("when the search argument is a string", function() {
it("should should match and replace correctly", function(){
var str = replaceNthMatch("blah45stuff68hey", 'stuff', 1, 'NEW');
expect(str).toEqual("blah45NEW68hey");
});
});
describe("when the replacement argument is a function", function() {
it("should call it on the Nth match and replace with the return value", function(){
// Look for the second number surrounded by brackets
var str = replaceNthMatch("foo[1][2]", /(\[\d+\])/, 2, function(val) {
// Get the number without the [ and ]
var number = val.slice(1,-1);
// Add 1
number = parseInt(number,10) + 1;
// Re-format and return
return '[' + number + ']';
});
expect(str).toEqual("foo[1][3]");
});
});
});
May not work in IE7
This code may fail in IE7 because that browser incorrectly splits strings using a regex, as discussed here. [shakes fist at IE7]. I believe that this is the solution; if you need to support IE7, good luck. :)
A: here's something that works:
"23||45||45||56||67".replace(/^((?:[0-9]+\|\|){n})([0-9]+)\|\|/,"$1$2&&")
where n is one less than the number of the pipe you want to replace (of course you don't need that first subexpression if n = 0)
And if you'd like a function to do this:
function pipe_replace(str,n) {
var RE = new RegExp("^((?:[0-9]+\\|\\|){" + (n-1) + "})([0-9]+)\\|\\|");
return str.replace(RE,"$1$2&&");
}
A: function pipe_replace(str,n) {
var m = 0;
return str.replace(/\|\|/g, function (x) {
// count each match; only the nth one gets replaced (an earlier version incorrectly used n++ here)
m++;
if (n==m) {
return "&&";
} else {
return x;
}
});
}
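A quick sanity check (the expected result in the comment is my own annotation, not from the original answer):
pipe_replace("23||45||45||56||67", 2); // "23||45&&45||56||67"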
A: Thanks Binda, I have modified the code for generic use:
private replaceNthMatch(original, pattern, n, replace) {
let m = -1;
return original.replaceAll(pattern, x => {
m++;
if ( n == m ) {
return replace;
} else {
return x;
}
});
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Best Versioning Tools to use for Photoshop/Illustrator and related binary files? I previously asked about Version Cue 3 vs Subversion. I think this is a better question and someone suggested http://www.gridironsoftware.com/Flow/ I hope this question will allow others to join in and suggest other tools or give specific recommendations on using Version Cue versus other tools.
A: PixelNovel's Timeline is a SVN plugin for Photoshop. They have standalone and hosted versions.
A: Take a look at this article comparing Subversion, Mercurial, Git and Bazaar for managing the files in a home directory, including image files and large Photoshop files that are being edited and versioned.
EDIT: The link is dead and I can't find the article, however the information in the article is now severely outdated anyway. Today I would strongly recommend using Git LFS (Large File Storage), with the file locking mechanism that was added in 2017, I believe. This is the solution I currently use, as it solves both the need to lock binary files and the inefficiency of git when it comes to storing large files - which was one of the main points of that article.
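For anyone setting this up, the basic workflow looks roughly like this (the file pattern and path are just examples; file locking also needs a host that supports the LFS lock API):
git lfs install
git lfs track "*.psd" --lockable
git add .gitattributes
git lfs lock artwork/hero.psd
# ...edit in Photoshop, commit and push, then release the lock
git lfs unlock artwork/hero.psd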
A: I'd like to suggest https://www.pixelapse.com
It was made for designers with designers' needs in mind. Other solutions (Git, SVN, etc.) cannot give you the same usability, and your clients would not be able to comment on, review and browse design milestones in an easy way.
Another way is to use https://layervault.com but they are struggling with some security issues. Also, the usability is not really great.
A: Take a look at Perforce (www.perforce.com), particularly if you are managing these files in the context of development projects. It is a code-oriented system, but it supports binary files well and has a Photoshop plugin. P4 isn't free, but it is worth every penny if you need professional-grade SCM - it is solid, fast, flexible and easy to use. (I am a very satisfied customer.)
A: For my game development I found this: Evolphin Zoom. It's pretty fast and it is compatible with all Adobe products. I like the Visual Asset Browser because it has a lot of ways to find things. It also has a web dashboard, which is useful if you have a team.
They advertise it as 'more than a replacement for Version Cue', so if you're coming from that, you might find that a nice perk, too.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: ASP.NET MVC: Structuring Controllers So I'm embarking on an ASP.NET MVC project and while the experience has been a good one overall, I'm not quite as pleased with the spaghetti mess that my controllers have become. I've looked around online (CodeCampServer, etc...) and they all seem to suffer the same issue wherein controller methods violate SRP (single responsibility principle) pretty consistently - such as a controller method that simply renders the view if the request is a GET but updates the model if it's a POST. Now I've got controller methods responsible for multiple logical routes throughout the application - say it checks for which button was clicked on the form and acts accordingly. I could redirect each button click to a different form action using JavaScript, but something doesn't feel right there either... The other big issue is the proliferation of magic strings - ViewData["foo"] = blah; Long story short, how do you guys structure your controller logic? One giant model object per view? Lots of little controller methods and JavaScript is the router? My goal is maintainable code - as features get piled on I'm starting to slide down that slippery slope...
A: ASP.NET Preview 5 (available on CodePlex) has an answer for this: the [AcceptVerbs] attribute. Phil Haack has a blog post discussing how it's used.
As for the view data magic key question, it's an interesting problem. If you think of a view as being a bunch of semi-independent components (especially in light of the new partial view support), then making a strongly-typed model becomes less ideal, as the several pieces of the view should be relatively independent of one another.
A: How are different people handling this issue? I know that I just spent a couple of hours reviewing the jumble inside the model folder. I'm finding that creating folders is helpful in reducing the visual clutter, and using matching namespaces helps a lot too.
But my controllers are monoliths at the moment. The trouble is that I've been focused on learning up to this point in the project (still lots to sort out as well).
I'm getting a good handle on MVC now, so it is time to review the complexity and consider splitting the controllers up into better-named and cleaner functions.
Are other people breaking their controllers up into sub-controllers? (If there is such a thing.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: What should we do to prepare for 2038? I would like to think that some of the software I'm writing today will be used in 30 years. But I am also aware that a lot of it is based upon the UNIX tradition of exposing time as the number of seconds since 1970.
#include <stdio.h>
#include <time.h>
#include <limits.h>
void print(time_t rt) {
struct tm * t = gmtime(&rt);
puts(asctime(t));
}
int main() {
print(0);
print(time(0));
print(LONG_MAX);
print(LONG_MAX+1);
}
Execution results in:
*
*Thu Jan 1 00:00:00 1970
*Sat Aug 30 18:37:08 2008
*Tue Jan 19 03:14:07 2038
*Fri Dec 13 20:45:52 1901
The functions ctime(), gmtime(), and localtime() all take as an argument a time value representing the time in seconds since the Epoch (00:00:00 UTC, January 1, 1970; see time(3) ).
I wonder if there is anything proactive to do in this area as a programmer, or are we to trust that all software systems (aka Operating Systems) will some how be magically upgraded in the future?
Update It would seem that indeed 64-bit systems are safe from this:
import java.util.*;
class TimeTest {
public static void main(String[] args) {
print(0);
print(System.currentTimeMillis());
print(Long.MAX_VALUE);
print(Long.MAX_VALUE + 1);
}
static void print(long l) {
System.out.println(new Date(l));
}
}
*
*Wed Dec 31 16:00:00 PST 1969
*Sat Aug 30 12:02:40 PDT 2008
*Sat Aug 16 23:12:55 PST 292278994
*Sun Dec 02 08:47:04 PST 292269055
But what about the year 292278994?
A: I have written a portable replacement for time.h (currently just localtime(), gmtime(), mktime() and timegm()) which uses 64 bit time even on 32 bit machines. It is intended to be dropped into C projects as a replacement for time.h. It is being used in Perl and I intend to fix Ruby and Python's 2038 problems with it as well. This gives you a safe range of +/- 292 million years.
You can find the code at the y2038 project. Please feel free to post any questions to the issue tracker.
As to the "this isn't going to be a problem for another 29 years", peruse this list of standard answers to that. In short, stuff happens in the future and sometimes you need to know when. I also have a presentation on the problem, what is not a solution, and what is.
Oh, and don't forget that many time systems don't handle dates before 1970. Stuff happened before 1970, sometimes you need to know when.
A: Visual Studio moved to a 64 bit representation of time_t in Visual Studio 2005 (whilst still leaving _time32_t for backwards compatibility).
As long as you are careful to always write code in terms of time_t and don't assume anything about its size, then, as sysrqb points out, the problem will be solved by your compiler.
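If you want to make that assumption explicit, one option (a minimal sketch, assuming a C11 compiler where static_assert is available via assert.h) is a compile-time check:
#include <time.h>
#include <assert.h>

/* Fail the build on platforms where time_t is still 32-bit. */
static_assert(sizeof(time_t) >= 8, "time_t is not 64 bits; 2038 will be a problem");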
A: You can always implement RFC 2550 and be safe forever ;-)
The known universe has a finite past and future. The current age of
the universe is estimated in [Zebu] as between 10 ** 10 and 2 * 10 **
10 years. The death of the universe is estimated in [Nigel] to occur
in 10 ** 11 - years and in [Drake] as occurring either in 10 ** 12
years for a closed universe (the big crunch) or 10 ** 14 years for an
open universe (the heat death of the universe).
Y10K compliant programs MAY choose to limit the range of dates they
support to those consistent with the expected life of the universe.
Y10K compliant systems MUST accept Y10K dates from 10 ** 12 years in
the past to 10 ** 20 years into the future. Y10K compliant systems
SHOULD accept dates for at least 10 ** 29 years in the past and
future.
A: I think that we should leave the bug in. Then about 2036 we can start selling consultancy for large sums of money to test everything. After all, isn't that how we successfully managed the 1999-2000 rollover?
I'm only joking!
I was sat in a bank in London in 1999 and was quite amazed when I saw a consultant start Y2K testing the coffee machine. I think if we learnt anything from that fiasco, it was that the vast majority of software will just work and most of the rest won't cause a melt down if it fails and can be fixed after the event if needed. As such, I wouldn't take any special precautions until much nearer the time, unless you are dealing with a very critical piece of software.
A: Given my age, I think I should pay a lot into my pension and pay off all my debts, so someone else will have to fix the software!
Sorry, but if you think about the "net present value" of any software you write today, it makes no difference what the software does in 2038. A "return on investment" of more than a few years is uncommon for any software project, so you make a lot more money for your employer by getting the software shipped quicker, rather than thinking that far ahead.
The only common exception is software that has to predict the future; 2038 is already a problem for mortgage quotation systems.
A: I work in embedded and I thought I would post our solution here. Our systems are 32-bit, and what we sell right now has a warranty of 30 years, which means they will encounter the year 2038 bug. Upgrading in the future was not a solution.
To fix this, we set the kernel date 28 years earlier than the current date. It's not a random offset: 28 years is exactly the time it takes for the days of the week (and the leap years) to match up again. For instance, I'm writing this on a Thursday, and the next time March 7 will be a Thursday is in 28 years.
Furthermore, all the applications that interact with dates on our systems take the system date (time_t), convert it to a custom time64_t, and apply the 28-year offset to recover the right date.
We made a custom library to handle this. The code we're using is based off this: https://github.com/android/platform_bionic
Thus, with this solution you can buy yourself an extra 28 years easily.
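For illustration, the conversion boils down to something like this in C (a minimal sketch; the constant and type names are made up here, not our actual code):
#include <stdint.h>
#include <time.h>

/* 28 years = 7 complete 4-year leap cycles = 7 * 1461 days = 10227 days,
   so weekdays and leap years line up again after applying the offset. */
#define OFFSET_28_YEARS ((int64_t)10227 * 86400) /* 883,612,800 seconds */

typedef int64_t time64_t;

/* The kernel clock is deliberately set 28 years in the past; adding the
   offset back gives the real date as a 64-bit value. */
time64_t real_now(void)
{
    return (time64_t)time(NULL) + OFFSET_28_YEARS;
}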
A: Keep good documentation, and include a description of your time dependencies. I don't think many people have thought about how hard this transition might be, for example HTTP cookies are going to break on that date.
A:
What should we do to prepare for 2038?
Hide, because the apocalypse is coming.
But seriously, I hope that compilers (or the people who write them, to be precise) can handle this. They've got almost 30 years. I hope that's enough time.
At what point do we start preparing for Y10K? Have any hardware manufacturers / research labs looked into the easiest way to move to whatever new technology we'll have to have because of it?
A: Operative word being "should".
If you need to ensure future-proofing then you can construct your own date/time class and use that, but I'd only do that if you think that what you write will be used on legacy OSes.
A: By 2038, time libraries should all be using 64-bit integers, so this won't actually be that big of a deal (on software that isn't completely unmaintained).
COBOL programs might be fun though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66"
} |
Q: Dealing with circular dependencies in OCaml I'm writing an interpreter for an experimental language. Three of the main constructs of the language are definitions, statements, and expressions. Definitions can contain statements and expressions, statements can contain definitions and expressions, and one kind of expression can contain statements. I represent all of these using union types so I can easily use pattern matching on them. Ideally, I would like to put the code for these in different files, but OMake complains about circular dependency issues. As far as I know, circular type definitions across modules are not allowed.
The only way I know of to solve this is to define all three types at once:
type defn = ...
and stmt = ...
and expr = ...
It seems like this requires all the code for types to be in the same file. Is there any way around this? How do you deal with circular definitions in your code?
A: Another solution often used is to abstract the types in the interfaces. Since the types are abstract in the interfaces, these interfaces are not recursively dependent. In the implementations, you can specify the types, and since the implementations depend only on the interfaces, they are not recursive either.
The only problem is that, with this solution, you can no longer pattern-match on these types outside of their implementation.
Personally, but it is probably a matter of taste, I like to have all the types of my program defined in one module (I think it helps in the readability of the program). So, this restriction of OCaml is not really a problem for me.
A: Recursive definitions need to appear in the same file. If you want to separate definitions, statements, and expressions into separate modules, you can do so using recursive modules, but they will still need to appear in the same file. DAG-ifying inter-file dependencies is one of the annoyances of OCaml.
A: This is easily solved by parameterizing your types over the types they refer to:
type ('stmt, 'expr) defn = ...
type ('defn, 'expr) stmt = ...
type ('defn, 'stmt) expr = ...
This technique is called "untying the recursive knot" (in reference to Gordian's knot) and was described in an OCaml Journal article.
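For example, here is a minimal sketch of tying the knot back together (the module, constructor and case names are invented for illustration, not taken from the article):
module Defn = struct
  type ('stmt, 'expr) t = Let of string * 'expr | Body of 'stmt list
end
module Stmt = struct
  type ('defn, 'expr) t = Define of 'defn | Eval of 'expr
end
module Expr = struct
  type ('defn, 'stmt) t = Lit of int | Block of 'stmt list
end
(* One small module instantiates the three parameterised types into a
   mutually recursive whole, using thin wrapper constructors. *)
type defn = Defn of (stmt, expr) Defn.t
and stmt = Stmt of (defn, expr) Stmt.t
and expr = Expr of (defn, stmt) Expr.t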
Cheers,
Jon Harrop.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How much should one DataSet represent? How much should one DataSet represent? Using the example of an ordering system: While showing your order I also show a list of items similar to one of yours as well as a list of our most popular items.
While your items are tangled in a web of relationships involving you and your past orders, preferred suppliers, and the various other kinds of information related to you as a client, the other items do not have these same relationships. The set of queries I use to navigate the set of stuff representing you is different than the queries I use for one of these other lists of items.
My inclination is to create different DataSets for different kinds of relationships but then I create ten separate item DataTables and that seems wrong. When I instantiate the larger DataSet even though I am only interested in a small subset that seems wrong, and when I try to pack all of these into one DataSet I have a big messy looking thing with several items tables next to each other and I am pretty sure that IS wrong.
Maybe I am over-valuing the relationship feature of DataSets or maybe I just need to get over myself, either way I could use some guidance.
A: The DataSet is vastly overrated and overused. Use strongly-typed collections (thank you, generics and automatic properties!). As icing on the cake, you can now even do cool query things against your custom objects with LINQ.
Good Esposito article on datasets versus custom objects:
http://msdn.microsoft.com/en-us/magazine/cc163751.aspx
Automatic properties:
http://weblogs.asp.net/dwahlin/archive/2007/12/04/c-3-0-features-automatic-properties.aspx
LINQ with your objects:
http://blogs.msdn.com/wriju/archive/2006/09/16/linq-custom-object-query.aspx
A: This is why I don't use datasets. If you use strongly-typed datasets you benefit from the strong typing, but you pay for it in terms of the time it takes to create one (even if you're only using part of it) and in the extensibility of the code base. If you modify a row definition in an existing one, this will create "shotgun" breaks across the code base, as every place that adds a new row will have to be modified because it won't compile anymore.
To avoid the above scenario the most sensible approach is to generally give up on sensible re-use. Define a dataset per purpose and per use. However, the main issue with this is API use: you end up with a dataset that is similar to another dataset, but because it is a different dataset type you have to transform it to use the common API, which is both painful and inelegant.
This, plus the fact that strongly typed datasets make your code look horrid (the length of the type declarations), is pretty much the reason I've given up on datasets and switched to business objects instead.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is Lazy Loading? What is Lazy Loading?
[Edit after reading a few answers]
Why do people use this term so often?
Say you just use an ASP/ADO recordset and load it with data, or an ADO.NET DataSource for a GridView.
I guess I should have asked why people use the term Lazy Loading, and what "other" types are there?
A: wikipedia's Definition
Lazy loading is a design pattern commonly used in computer programming to defer initialization of an object until the point at which it is needed. ...
http://en.wikipedia.org/wiki/Lazy%20loading
A: The term lazy loading is usually used when talking about object relational mappers. If you use ADO.NET directly you always get eager loading (i.e. it always loads just what you specify).
OR-mappers like nHibernate support returning proxy objects that get "filled in" with the right data only when you access the data. That way you only load the data that you really use. This is a useful feature when you specify a lot of relations between objects that can get loaded from the database; you don't want the OR-mapper to load all the related objects, and the objects related to the related objects, and so on. That can result in your whole database getting loaded.
This problem can be prevented by careful design of your object model too (using aggregates and only loading aggregate roots, as in domain-driven design, is a way to get around this without using lazy loading).
Lazy loading can result in the OR-mapper doing lots of small database accesses instead of retrieving all the data you need once. This can result in performance problems too.
A: Here's an example from some actual Python code I wrote:
class Item(Model):
...
@property
def total(self):
if not hasattr(self, "_total"):
self._total = self.quantity \
+ sum(bi.quantity for bi in self.borroweditem_set.all())
return self._total
Basically, I have an Item class which represents an item in our inventory. The total number of items we have is the number that we own plus the sum of all of the items that we're borrowing from various sources. These numbers are all stored in our database, and it would be pointless to calculate this until the total is actually requested (since often Items will be used without the total being requested).
So the total property checks whether the _total field exists. If it doesn't, then the property code queries the database and computes it, then stores the value in the _total field so that it need not be recomputed the next time it's requested.
A: Lazy Loading is a programming practice in which you only load or initialize an object when you first need it. This can potentially give you a big performance boost, especially if you have a lot of components in your application.
As usual, Wikipedia has more details.
A: Lazy loading: you don't waste your time (nor your memory) with stuff you might not need. Then when you need it, it takes longer, but that's fine.
Example from life: instead of actually learning that French phrasebook, you learn the phrases one at a time, as they're needed. When does this make sense? If you're only going to be in France for a short time (i.e., you won't need a lot of the phrases) or if you need to leave very soon. If you're there for two years and/or you have a long time to study, then it might be much more efficient to just learn the whole phrasebook up front (eager loading).
[Inspired by the Atom as taught in gang terms by Venus on WKRP.]
A: Lazy loading is a term frequently used in databases to refer to the concept of loading parts of the required info only when it's needed.
I.e. suppose you needed to have a record which has a join of several tables. If you fetched it all at once it would take longer than if you would fetch say only the main table. Using lazy-loading the rest of the information will be fetched only if it is needed. So it is actually 'efficient-loading' in certain scenarios.
The other type of 'loading' is:
*
*Eager Loading - Loading all the connected tables at once.
A: It's a design pattern.
Lazy loading: until your code requires some operation done by a particular object, the object is not initialized; and once it's initialized, it isn't re-initialized but the previously initialized object is reused.
This makes your code much more efficient and helps manage memory usage.
Example Applications of Lazy loading:
Ghost
Lazy initialization
Value holder
A: Some of the advantages of lazy loading:
*
*Minimizes start up time of the application.
*Application consumes less memory because of on-demand loading.
*Unnecessary requests to the server are avoided.
A: It's called lazy loading because, like a lazy person, you are putting off doing something you don't want to. The opposite is Eager Loading, where you load something right away, long before you need it.
If you are curious why people might use lazy loading, consider an application that takes a LOOOOONG time to start. This application is probably doing a lot of eager loading... loading things from disk, and doing calculations and whatnot long before it is ever needed.
Compare this to lazy loading, the application would start much faster, but then the first time you need to do something that requires some long running load, there may be a slight pause while it is loaded for the first time. Thus, with lazy loading, you are amortizing the load time throughout the course of running your application... and you may actually save from loading things that the user may never intend to use.
A: An example of Lazy Loading would be a grid or table with lots of data on a webpage, where the application only loads what fits in the user's browser viewport at that time. When they scroll down to view more content or data, more data is loaded into view at that moment.
This is becoming a common visual/interaction design pattern as well, via Ajax or jQuery.
And as mentioned above, the opposite would be Eager Loading, where you don't take the client into consideration, thus potentially taking a performance hit.
A: Lazy loading is a concept where we delay the loading of an object until the point where we need it. Put simply, it is on-demand object loading rather than loading objects unnecessarily. For instance, if you have a "Customer" class which aggregates an "Orders" object, you would like to load the customer data but delay loading the orders until your application needs them.
Below is a youtube video which demonstrates how to use lazy loading , how we can implement lazy loading and advantages and disadvantages of the same.
http://www.youtube.com/watch?v=2SrfdAkwmFo
A: Lazy<T> is now part of .NET 4.0 (usable from C# 4.0) - there is a nice page on MSDN which explains the concept.
A: According to geeksforgeeks, Lazy loading is a software design pattern where the initialization of an object occurs only when it is actually needed and not before to preserve the simplicity of usage and improve performance.
https://www.geeksforgeeks.org/lazy-loading-design-pattern/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "95"
} |
Q: F# language - hints for newbie Looks like here on StackOverflow there is a group of F# enthusiasts.
I'd like to get to know this language better, so, apart from the functional programming theory, can you point me to the best starting points for using the F# language? I mean, tutorials, how-tos, but first of all working samples to have the chance to start doing something and enjoy the language.
Thanks a lot
Andrea
A: Without doubt, you should purchase Don Syme's excellent book "Expert F#". The book is very well written and is suitable for both beginners and experts alike. In it, you'll find both introductory material and much more challenging material too. At nearly 600 pages it is good value for money.
I found that it taught me a lot of useful techniques for writing more functional C# as well as providing all the reference material I needed to get started writing Windows hosted F# applications.
The book is published by Apress and has an accompanying web site at:
http://www.expert-fsharp.com/default.aspx
A: @kronoz - well thanks a lot for your long answer, that's a really good place to start from. I'll follow your advice, and look for the book @vecstasy mentioned.
now, let me go coding :-)
let thanksalot = "thanks a lot"
printfn "%s" (thanksalot);;
A: Not to whore myself horribly but I wrote a couple F# overview posts on my blog here and here. Chris Smith (guy on the F# team at MS) has an article called 'F# in 20 minutes' - part 1 and part 2.
Note you have to be careful as the latest CTP of F# (version 1.9.6.0) has some seriously breaking changes compared to previous versions, so some examples/tutorials out there might not work without modification.
Here's a quick run-down of some cool stuff, maybe I can give you a few hints here myself which are clearly very brief and probably not great but hopefully gives you something to play with!:-
First note - most examples on the internet will assume 'lightweight syntax' is turned on. To achieve this use the following line of code:-
#light
This prevents you from having to insert certain keywords that are present for OCaml compatibility and also having to terminate each line with semicolons. Note that using this syntax means indentation defines scope. This will become clear in later examples, all of which rely on lightweight syntax being switched on.
If you're using the interactive mode you have to terminate all statements with double semi-colons, for example:-
> #light;;
> let f x y = x + y;;
val f : int -> int -> int
> f 1 2;;
val it : int = 3
Note that interactive mode returns a 'val' result after each line. This gives important information about the definitions we are making, for example 'val f : int -> int -> int' indicates that a function which takes two ints returns an int.
Note that only in interactive do we need to terminate lines with semi-colons, when actually defining F# code we are free of that :-)
You define functions using the 'let' keyword. This is probably the most important keyword in all of F# and you'll be using it a lot. For example:-
let sumStuff x y = x + y
let sumStuffTuple (x, y) = x + y
We can call these functions thus:-
sumStuff 1 2
3
sumStuffTuple (1, 2)
3
Note there are two different ways of defining functions here - you can either separate parameters by whitespace or specify parameters in 'tuples' (i.e. values in parentheses separated by commas). The difference is that we can use 'partial function application' to obtain functions which take less than the required parameters using the first approach, and not with the second. E.g.:-
let sumStuff1 = sumStuff 1
sumStuff1 2
3
Note we are obtaining a function from the expression 'sumStuff 1'. When we can pass around functions just as easily as data that is referred to as the language having 'first class functions', this is a fundamental part of any functional language such as F#.
Pattern matching is pretty darn cool, it's basically like a switch statement on steroids (yeah I nicked that phrase from another F#-ist :-). You can do stuff like:-
let someThing x =
match x with
| 0 -> "zero"
| 1 -> "one"
| 2 -> "two"
| x when x < 0 -> "negative = " + x.ToString()
| _ when x%2 = 0 -> "greater than two but even"
| _ -> "greater than two but odd"
Note we use the '_' symbol when we want to match on something but the expression we are returning does not depend on the input.
We can abbreviate pattern matching using if, elif, and else statements as required:-
let negEvenOdd x = if x < 0 then "neg" elif x % 2 = 0 then "even" else "odd"
F# lists (which are implemented as linked lists underneath) can be manipulated thus:-
let l1 = [1;2;3]
l1.[0]
1
let l2 = [1 .. 10]
List.length l2
10
let squares = [for i in 1..10 -> i * i]
squares
[1; 4; 9; 16; 25; 36; 49; 64; 81; 100]
let square x = x * x;;
let squares2 = List.map square [1..10]
squares2
[1; 4; 9; 16; 25; 36; 49; 64; 81; 100]
let evenSquares = List.filter (fun x -> x % 2 = 0) squares
evenSquares
[4; 16; 36; 64; 100]
Note the List.map function 'maps' the square function on to the list from 1 to 10, i.e. applies the function to each element. List.filter 'filters' a list by only returning values in the list that pass the predicate function provided. Also note the 'fun x -> f' syntax - this is the F# lambda.
Note that throughout we have not defined any types - the F# compiler/interpreter 'infers' types, i.e. works out what you want from usage. For example:-
let f x = "hi " + x
Here the compiler/interpreter will determine x is a string since you're performing an operation which requires x to be a string. It also determines the return type will be string as well.
When there is ambiguity the compiler makes assumptions, for example:-
let f x y = x + y
Here x and y could be a number of types, but the compiler defaults to int. If you want to define types you can using type annotation:-
let f (x:string) y = x + y
Also note that we have had to enclose x:string in parentheses, we often have to do this to separate parts of a function definition.
Two really useful and heavily used operators in F# are the pipe forward and function composition operators |> and >> respectively.
We define |> thus:-
let (|>) x f = f x
Note that you can define operators in F#, this is pretty cool :-).
This allows you to write things in a clearer way, e.g.:-
[1..10] |> List.map (fun x -> x * x) |> List.filter (fun x -> x % 2 = 0)
Will allow you to obtain the first 10 even squares. That is clearer than:-
List.filter (fun x -> x % 2 = 0) (List.map (fun x -> x * x) [1..10])
Well, at least I think so :-)
Function composition defined by the >> operator is defined as follows:-
let (>>) f g x = g(f(x))
I.e. it is like forward-piping an operation, except that the parameter of the first function remains unspecified. This is useful as you can do the following:-
let mapFilter = List.map (fun x -> x * x) >> List.filter (fun x -> x % 2 = 0)
Here mapFilter will accept a list as input and return the list mapped and filtered as before. It's an abbreviated version of:-
let mapFilter l = l |> List.map (fun x -> x * x) |> List.filter (fun x -> x % 2 = 0)
If we want to write recursive functions we have to define the function as recursive by placing 'rec' after the let. Examples below.
Some cool stuff:-
Factorial
let rec fact x = if x <= 1 then 1 else x * fact (x-1)
nth Fibonacci Number
let rec fib n = if n <= 1 then n else fib (n-1) + fib (n-2)
FizzBuzz
let (/%) x y = x % y = 0
let fb = function
| x when x /% 15 -> "FizzBuzz"
| x when x /% 3 -> "Fizz"
| x when x /% 5 -> "Buzz"
| x -> x.ToString()
[1..100] |> List.map (fb >> printfn "%s")
Anyway that's a very brief overview, hopefully it helps a little!!
A: I've been reading Real World Functional Programming
With examples in F# and C#, by Tomas Petricek
So far I find it very good at teaching F# concepts by showing the implementations in C# on the side. Great for OO Programmers.
A: The first chapter of my book F# for Scientists is freely available here. We have a series of free F# toy programs here. The first article from our F#.NET Journal is freely available here.
A: Check out the F# Developer Center. There is also hubFS, a forum dedicated to F#.
A: If you have the current CTP release in Visual Studio it lets you create a F# Tutorial project, which gives you a Tutorial.fs, exactly containing what it's name suggests.
That tutorial also points to a larger collection of F# examples at Microsoft.
Also, there is an F# samples project going on at CodePlex.
Hope this helps,
Michiel
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: What's the answer to this Microsoft PDC challenge? In today's channel9.msdn.com video, the PDC guys posted a challenge to decipher this code:
2973853263233233753482843823642933243283
6434928432937228939232737732732535234532
9335283373377282333349287338349365335325
3283443783243263673762933373883363333472
8936639338428833535236433333237634438833
3275387394324354374325383293375366284282
3323383643473233852922933873933663333833
9228632439434936334633337636632933333428
9285333384346333346365364364365365336367
2873353883543533683523253893663653393433
8837733538538437838338536338232536832634
8284348375376338372376377364368392352393
3883393733943693253343433882852753933822
7533337432433532332332328232332332932432
3323323323323336323333323323323327323324
2873323253233233233892792792792792792792
7934232332332332332332332733432333832336
9344372376326339329376282344
Decipher it and win a t-shirt. (Lame, I know, was hoping for a free trip to the PDC.)
I notice some interesting patterns in this code, such as the 332 pattern towards the end, but I'm at a loss as to where to go from here. They've said the answer is a text question.
Any ideas on deciphering this code?
A: I'm still fiddling with this -- no answer yet, or even a clear direction, but some of this random assortment of facts might be useful to someone..
Meta: Is there any way to mark "read more" in an answer? Sorry in advance for all the scrolling this answer will cause!
The code is 708 digits long. Prime factorization: 2 x 2 x 3 x 59. Unless they're being tricky by padding the ends, the chunk size must be 1, 2, 3, 4, 6, or 12; the higher factors are silly. This assumes, of course, that the code is based on concatenated chunks, which may not be the case.
Mike Stone suggested a chunk size of 3. Here's the distribution for that:
Number of distinct chunks: 64
Number of chunks: 236 (length of message)
275: ###
279: #######
282: ####
283: #
284: ####
285: ##
286: #
287: ###
288: #
289: ###
292: #
293: ####
297: #
323: #############################
324: #######
325: #######
326: ####
327: ####
328: ##
329: #####
332: ###
333: ###########
334: ###
335: ######
336: ###
337: #
338: ####
339: ###
342: #
343: ##
344: ###
345: #
346: ###
347: ##
348: ###
349: ###
352: ####
353: #
354: ##
363: ##
364: #######
365: #####
366: #####
367: ##
368: ###
369: ##
372: ###
373: ##
374: ##
375: ###
376: #######
377: ####
378: ##
382: ###
383: ###
384: ###
385: ####
387: ##
388: ######
389: ##
392: ###
393: ####
394: ###
449: #
If it's base64 encoded then we might have something ;) but my gut tells me that there are too many distinct chunks of length 3 for plain English text. There is indeed that odd blip for the symbol "323" though.
Somewhat more interesting is a chunk size of 2:
Number of distinct chunks: 49
Number of chunks: 354 (length of message)
22: ##
23: ########################
24: #####
25: ######
26: #
27: ######
28: #########
29: ####
32: ##################################
33: ################################################
34: ###########
35: ########
36: ##############
37: ############
38: ##################
39: ####
42: ##
43: ###########
44: ###
45: #
46: #
47: #
49: ##
52: #
53: #########
54: ##
62: #
63: #############
64: ####
65: ###
66: ##
67: ##
68: #
72: ###
73: ############
74: #
75: ####
76: #####
77: #
79: ####
82: ######
83: ###########
84: #####
85: ####
88: ####
89: #
92: #########
93: ################
94: ##
As for letter frequency, that's a good strategy, but remember that the text is likely to contain spaces and punctuation. Space might be the most common character by far!
Meta: This question re-asks a question found elsewhere. Does that count as homework? :)
A: Well, based on the 332 pattern you pointed out and the fact that the number of numbers is divisible by 3, and that several of the first 3 digit groups have matches... it might be that each 3 digits represent a character. Get a distribution of the number matches for all the 3 digit groups, then see if that distribution looks like the distribution of common letters.
If so, each 3 digit code could then be mapped to a character, and you might get a lot of the characters filled in for you this way, then just see if you can fill in the blanks of the less common letters that may not match the distribution perfectly.
A quick google search revealed this source for distribution of frequency in the English language.
This, of course, may not be fruitful, but it's a good first attempt.
A: I wrote some C# code to scan the cipher and give me some stats back. Here are some interesting results:
With a chunk size of 3,
*
*There are 236 chunks.
*There are 172 duplicates.
*The 323 code shows up a whopping
total of 29 times!
*The 333 code shows up 11 times.
*All other codes show up 7 times or less.
*35 chunks start with a 2.
*200 chunks start with a 3. (Interesting!)
*1 chunk starts with a 4.
*Despite the cipher containing 2s, 3s, 4s, 5s, 6s, 7s, 8s, and 9s, chunks only start with 2 and 3, except the 1 chunk that starts with 4.
*There are no 0s.
*There are no 1s.
*There are 115 2s.
*There are 293 3s.
*There are 56 4s.
*There are 38 5s.
*There are 49 6s.
*There are 52 7s.
*There are 63 8s.
*There are 42 9s.
I'd describe the 323 appearance count as highly irregular. I'd also suggest that the fact that all of the chunks start with either 3 or 2 (barring the 1 appearance of a 4 chunk) is also highly irregular.
I've run the same analysis using chunks of 2, 4, and 8, and the results look more or less random. At this point, I'm leaning towards a 3 chunk.
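I used C# for my scan, but if you want to reproduce the counts yourself, a quick sketch along these lines works (shown in Python here; the cipher string is abbreviated, paste in the full 708 digits):
from collections import Counter

cipher = "2973853263233233753482843823642933243283"  # ... rest of the 708 digits

def chunk_counts(text, size):
    # Split the digit string into fixed-size chunks and tally how often each appears
    usable = len(text) - len(text) % size
    return Counter(text[i:i + size] for i in range(0, usable, size))

for size in (2, 3):
    counts = chunk_counts(cipher, size)
    print(size, len(counts), counts.most_common(5))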
A: I'd say that anyone who finds the answer should keep it to themselves, and instead of posting it should just add a note that you can go read a particular url to find it, or send someone an email or something if they want to know the answer to it. At the time when Channel9 says it's broken or posts the answer themselves, post it here, but until then, just let the discussion and pondering go on. Much better for the brain.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What is 'Currying'? I've seen references to curried functions in several articles and blogs but I can't find a good explanation (or at least one that makes sense!)
A: It can be a way to use functions to make other functions.
In javascript:
let add = function(x){
return function(y){
return x + y
};
};
Would allow us to call it like so:
let addTen = add(10);
When this runs the 10 is passed in as x;
let add = function(10){
return function(y){
return 10 + y
};
};
which means we are returned this function:
function(y) { return 10 + y };
So when you call
addTen();
you are really calling:
function(y) { return 10 + y };
So if you do this:
addTen(4)
it's the same as:
function(4) { return 10 + 4} // 14
So our addTen() always adds ten to whatever we pass in. We can make similar functions in the same way:
let addTwo = add(2) // addTwo(); will add two to whatever you pass in
let addSeventy = add(70) // ... and so on...
Now the obvious follow up question is why on earth would you ever want to do that? It turns what was an eager operation x + y into one that can be stepped through lazily, meaning we can do at least two things
1. cache expensive operations
2. achieve abstractions in the functional paradigm.
Imagine our curried function looked like this:
let doTheHardStuff = function(x) {
let z = doSomethingComputationallyExpensive(x)
return function (y){
return z + y
}
}
We could call this function once, then pass around the result to be used in lots of places, meaning we only do the computationally expensive stuff once:
let finishTheJob = doTheHardStuff(10)
finishTheJob(20)
finishTheJob(30)
We can get abstractions in a similar way.
A: Currying is translating a function from callable as f(a, b, c) into callable as f(a)(b)(c).
Otherwise currying is when you break down a function that takes multiple arguments into a series of functions that take part of the arguments.
Literally, currying is a transformation of functions: from one way of calling into another. In JavaScript, we usually make a wrapper to keep the original function.
Currying doesn’t call a function. It just transforms it.
Let’s make curry function that performs currying for two-argument functions. In other words, curry(f) for two-argument f(a, b) translates it into f(a)(b)
function curry(f) { // curry(f) does the currying transform
return function(a) {
return function(b) {
return f(a, b);
};
};
}
// usage
function sum(a, b) {
return a + b;
}
let curriedSum = curry(sum);
alert( curriedSum(1)(2) ); // 3
As you can see, the implementation is a series of wrappers.
*
*The result of curry(func) is a wrapper function(a).
*When it is called like curriedSum(1), the argument is saved in the Lexical Environment, and a new wrapper function(b) is returned.
*Then curriedSum(1)(2) finally calls function(b) providing 2, and it passes the call to the original multi-argument sum.
A: Here's a toy example in Python:
>>> from functools import partial as curry
>>> # Original function taking three parameters:
>>> def display_quote(who, subject, quote):
print who, 'said regarding', subject + ':'
print '"' + quote + '"'
>>> display_quote("hoohoo", "functional languages",
"I like Erlang, not sure yet about Haskell.")
hoohoo said regarding functional languages:
"I like Erlang, not sure yet about Haskell."
>>> # Let's curry the function to get another that always quotes Alex...
>>> am_quote = curry(display_quote, "Alex Martelli")
>>> am_quote("currying", "As usual, wikipedia has a nice summary...")
Alex Martelli said regarding currying:
"As usual, wikipedia has a nice summary..."
(Just using concatenation via + to avoid distraction for non-Python programmers.)
Editing to add:
See http://docs.python.org/library/functools.html?highlight=partial#functools.partial,
which also shows the partial object vs. function distinction in the way Python implements this.
A: Here is the example of generic and the shortest version for function currying with n no. of params.
const add = a => b => b ? add(a + b) : a;
console.log(add(1)(2)(3)(4)());
A: Currying is one of the higher-order function techniques of JavaScript.
Currying is a function of many arguments which is rewritten such that it takes the first argument and returns a function, which in turn uses the remaining arguments and returns the value.
Confused?
Let see an example,
function add(a,b)
{
return a+b;
}
add(5,6);
This is similar to the following currying function,
function add(a)
{
return function(b){
return a+b;
}
}
var curryAdd = add(5);
curryAdd(6);
So what does this code mean?
Now read the definition again,
Currying is a function of many arguments which is rewritten such that it takes the first argument and returns a function, which in turn uses the remaining arguments and returns the value.
Still, Confused?
Let me explain in deep!
When you call this function,
var curryAdd = add(5);
It will return you a function like this,
curryAdd=function(y){return 5+y;}
So, this is called a higher-order function. Invoking one function which in turn returns another function is exactly what defines a higher-order function. This is one of the great strengths of the legend, JavaScript.
So come back to the currying,
This line will pass the second argument to the curryAdd function.
curryAdd(6);
which in turn results in,
curryAdd=function(6){return 5+6;}
// Which results in 11
Hope you understand the usage of currying here.
So, Coming to the advantages,
Why Currying?
It makes use of code reusability.
Less code, Less Error.
You may ask how it is less code?
I can prove it with the ECMAScript 6 feature of arrow functions.
Yes! ECMAScript 6 provides us with the wonderful feature called arrow functions,
function add(a)
{
return function(b){
return a+b;
}
}
With the help of the arrow function, we can write the above function as follows,
x=>y=>x+y
Cool right?
So, Less Code and Fewer bugs!!
With the help of these higher-order functions one can easily develop bug-free code.
I challenge you!
Hope, you understood what is currying. Please feel free to comment over here if you need any clarifications.
Thanks, Have a nice day!
A: If you understand partial you're halfway there. The idea of partial is to preapply arguments to a function and give back a new function that wants only the remaining arguments. When this new function is called it includes the preloaded arguments along with whatever arguments were supplied to it.
In Clojure + is a function but to make things starkly clear:
(defn add [a b] (+ a b))
You may be aware that the inc function simply adds 1 to whatever number it's passed.
(inc 7) # => 8
Let's build it ourselves using partial:
(def inc (partial add 1))
Here we return another function that has 1 loaded into the first argument of add. As add takes two arguments the new inc function wants only the b argument -- not 2 arguments as before since 1 has already been partially applied. Thus partial is a tool from which to create new functions with default values presupplied. That is why in a functional language functions often order arguments from general to specific. This makes it easier to reuse such functions from which to construct other functions.
Now imagine if the language were smart enough to understand introspectively that add wanted two arguments. When we passed it one argument, rather than balking, what if the function partially applied the argument we passed it on our behalf understanding that we probably meant to provide the other argument later? We could then define inc without explicitly using partial.
(def inc (add 1)) #partial is implied
This is the way some languages behave. It is exceptionally useful when one wishes to compose functions into larger transformations. This would lead one to transducers.
A: Currying is a transformation that can be applied to functions to allow them to take one less argument than previously.
For example, in F# you can define a function thus:-
let f x y z = x + y + z
Here function f takes parameters x, y and z and sums them together so:-
f 1 2 3
Returns 6.
From our definition we can can therefore define the curry function for f:-
let curry f = fun x -> f x
Where 'fun x -> f x' is a lambda function equivalent to x => f(x) in C#. This function inputs the function you wish to curry and returns a function which takes a single argument and returns the specified function with the first argument set to the input argument.
Using our previous example we can obtain a curry of f thus:-
let curryf = curry f
We can then do the following:-
let f1 = curryf 1
Which provides us with a function f1 which is equivalent to f1 y z = 1 + y + z. This means we can do the following:-
f1 2 3
Which returns 6.
This process is often confused with 'partial function application' which can be defined thus:-
let papply f x = f x
Though we can extend it to more than one parameter, i.e.:-
let papply2 f x y = f x y
let papply3 f x y z = f x y z
etc.
A partial application will take the function and parameter(s) and return a function that requires one or more less parameters, and as the previous two examples show is implemented directly in the standard F# function definition so we could achieve the previous result thus:-
let f1 = f 1
f1 2 3
Which will return a result of 6.
In conclusion:-
The difference between currying and partial function application is that:-
Currying takes a function and provides a new function accepting a single argument, and returning the specified function with its first argument set to that argument. This allows us to represent functions with multiple parameters as a series of single argument functions. Example:-
let f x y z = x + y + z
let curryf = curry f
let f1 = curryf 1
let f2 = curryf 2
f1 2 3
6
f2 1 3
6
Partial function application is more direct - it takes a function and one or more arguments and returns a function with the first n arguments set to the n arguments specified. Example:-
let f x y z = x + y + z
let f1 = f 1
let f2 = f 2
f1 2 3
6
f2 1 3
6
A: Curry can simplify your code. This is one of the main reasons to use this. Currying is a process of converting a function that accepts n arguments into n functions that accept only one argument.
The principle is to use the closure property to store the arguments that have already been passed inside another function, which is then returned; these functions form a chain, and the final arguments are passed in to complete the operation.
The benefit of this is that it can simplify the processing of parameters by dealing with one parameter at a time, which can also improve the flexibility and readability of the program. This also makes the program more manageable. Also dividing the code into smaller pieces would make it reuse-friendly.
For example:
function curryMinus(x)
{
return function(y)
{
return x - y;
}
}
var minus5 = curryMinus(5);
minus5(3);
minus5(5);
I can also do...
var minus7 = curryMinus(7);
minus7(3);
minus7(5);
This is very great for making complex code neat and handling of unsynchronized methods etc.
A: A curried function is a function of several arguments rewritten such that it accepts the first argument and returns a function that accepts the second argument and so on. This allows functions of several arguments to have some of their initial arguments partially applied.
A: I found this article, and the article it references, useful, to better understand currying:
http://blogs.msdn.com/wesdyer/archive/2007/01/29/currying-and-partial-function-application.aspx
As the others mentioned, it is just a way to have a one parameter function.
This is useful in that you don't have to assume how many parameters will be passed in, so you don't need 2-parameter, 3-parameter and 4-parameter versions of the function.
A: As all the other answers say, currying helps to create partially applied functions. JavaScript does not provide native support for automatic currying, so the examples provided above may not help in practical coding. There are some excellent examples in LiveScript (which essentially compiles to JS):
http://livescript.net/
times = (x, y) --> x * y
times 2, 3 #=> 6 (normal use works as expected)
double = times 2
double 5 #=> 10
In above example when you have given less no of arguments livescript generates new curried function for you (double)
A: A curried function is applied to multiple argument lists, instead of just
one.
Here is a regular, non-curried function, which adds two Int
parameters, x and y:
scala> def plainOldSum(x: Int, y: Int) = x + y
plainOldSum: (x: Int,y: Int)Int
scala> plainOldSum(1, 2)
res4: Int = 3
Here is similar function that’s curried. Instead
of one list of two Int parameters, you apply this function to two lists of one
Int parameter each:
scala> def curriedSum(x: Int)(y: Int) = x + y
curriedSum: (x: Int)(y: Int)Int
scala> curriedSum(1)(2)
res5: Int = 3
What’s happening here is that when you invoke curriedSum, you actually get two traditional function invocations back to back. The first function
invocation takes a single Int parameter named x , and returns a function
value for the second function. This second function takes the Int parameter
y.
Here’s a function named first that does in spirit what the first traditional
function invocation of curriedSum would do:
scala> def first(x: Int) = (y: Int) => x + y
first: (x: Int)(Int) => Int
Applying 1 to the first function—in other words, invoking the first function
and passing in 1 —yields the second function:
scala> val second = first(1)
second: (Int) => Int = <function1>
Applying 2 to the second function yields the result:
scala> second(2)
res6: Int = 3
A: An example of currying would be when you have functions for which you only know one of the parameters at the moment:
For example:
func aFunction(str: String) {
let callback = callback(str) // signature now is `NSData -> ()`
performAsyncRequest(callback)
}
func callback(str: String, data: NSData) {
// Callback code
}
func performAsyncRequest(callback: NSData -> ()) {
// Async code that will call callback with NSData as parameter
}
Here, since you don't know the second parameter for callback when sending it to performAsyncRequest(_:) you would have to create another lambda / closure to send that one to the function.
A: Most of the examples in this thread are contrived (adding numbers). These are useful for illustrating the concept, but don't motivate when you might actually use currying in an app.
Here's a practical example from React, the JavaScript user interface library. Currying here illustrates the closure property.
As is typical in most user interface libraries, when the user clicks a button, a function is called to handle the event. The handler typically modifies the application's state and triggers the interface to re-render.
Lists of items are common user interface components. Each item might have an identifier associated with it (usually related to a database record). When the user clicks a button to, for example, "like" an item in the list, the handler needs to know which button was clicked.
Currying is one approach for achieving the binding between id and handler. In the code below, makeClickHandler is a function that accepts an id and returns a handler function that has the id in its scope.
The inner function's workings aren't important for this discussion. But if you're curious, it searches through the array of items to find an item by id and increments its "likes", triggering another render by setting the state. State is immutable in React so it takes a bit more work to modify the one value than you might expect.
You can think of invoking the curried function as "stripping" off the outer function to expose an inner function ready to be called. That new inner function is the actual handler passed to React's onClick. The outer function is a closure for the loop body to specify the id that will be in scope of a particular inner handler function.
const List = () => {
const [items, setItems] = React.useState([
{name: "foo", likes: 0},
{name: "bar", likes: 0},
{name: "baz", likes: 0},
].map(e => ({...e, id: crypto.randomUUID()})));
// .----------. outer func inner func
// | currying | | |
// `----------` V V
const makeClickHandler = (id) => (event) => {
setItems(prev => {
const i = prev.findIndex(e => e.id === id);
const cpy = {...prev[i]};
cpy.likes++;
return [
...prev.slice(0, i),
cpy,
...prev.slice(i + 1)
];
});
};
return (
<ul>
{items.map(({name, likes, id}) =>
<li key={id}>
<button
onClick={
/* strip off first function layer to get a click
handler bound to `id` and pass it to onClick */
makeClickHandler(id)
}
>
{name} ({likes} likes)
</button>
</li>
)}
</ul>
);
};
ReactDOM.createRoot(document.querySelector("#app"))
.render(<List />);
button {
font-family: monospace;
font-size: 2em;
}
<script crossorigin src="https://unpkg.com/react@18/umd/react.development.js"></script>
<script crossorigin src="https://unpkg.com/react-dom@18/umd/react-dom.development.js"></script>
<div id="app"></div>
A: In an algebra of functions, dealing with functions that take multiple arguments (or equivalent one argument that's an N-tuple) is somewhat inelegant -- but, as Moses Schönfinkel (and, independently, Haskell Curry) proved, it's not needed: all you need are functions that take one argument.
So how do you deal with something you'd naturally express as, say, f(x,y)? Well, you take that as equivalent to f(x)(y) -- f(x), call it g, is a function, and you apply that function to y. In other words, you only have functions that take one argument -- but some of those functions return other functions (which ALSO take one argument;-).
As usual, wikipedia has a nice summary entry about this, with many useful pointers (probably including ones regarding your favorite languages;-) as well as slightly more rigorous mathematical treatment.
A: Here's a concrete example:
Suppose you have a function that calculates the gravitational force acting on an object. If you don't know the formula, you can find it here. This function takes in the three necessary parameters as arguments.
Now, being on the earth, you only want to calculate forces for objects on this planet. In a functional language, you could pass in the mass of the earth to the function and then partially evaluate it. What you'd get back is another function that takes only two arguments and calculates the gravitational force of objects on earth. This is called currying.
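A rough JavaScript sketch of that idea (the constant, masses, and numbers here are my own illustration, not from the original answer):
const G = 6.674e-11; // gravitational constant
// Curried force function: take the arguments one at a time
const force = (m1) => (m2) => (r) => (G * m1 * m2) / (r * r);
// Partially apply it once with the mass of the earth...
const forceOnEarth = force(5.972e24);
// ...then reuse the resulting function for any object and distance
forceOnEarth(70)(6.371e6); // roughly 687 N for a 70 kg object at the surface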
A: Currying means to convert a function of N arity into N functions of arity 1. The arity of the function is the number of arguments it requires.
Here is the formal definition:
curry :: ((a, b, c) -> d) -> (a -> (b -> (c -> d)))
Here is a real world example that makes sense:
You go to an ATM to get some money. You swipe your card, enter your PIN, make your selection, and then press enter to submit the "amount" alongside the request.
Here is the normal (uncurried) function for withdrawing money:
const withdraw = (cardInfo, pinNumber, request) => {
// process it
return request.amount
}
In this implementation, the function expects us to enter all of the arguments at once. We would swipe the card, enter the pin and make the request, and only then would the function run. If any of those steps had an issue, you would find out only after entering all of the arguments. With a curried function, we instead create a series of simple, pure, single-argument functions. Pure functions will help us easily debug our code.
Here is the ATM with a curried function:
const withdraw=(cardInfo)=>(pinNumber)=>(request)=>request.amount
The ATM takes the card as input and returns a function that expects the pinNumber; that function in turn returns a function that accepts the request object, and after a successful process you get the amount that you requested. At each step, if there is an error, you can easily predict what went wrong. Say you insert the card and get an error: you know it is related to the card or the machine, but not the pin number. Or if the pin you entered is not accepted, you know you entered it wrong. You will easily debug the error.
Also, each function here is reusable, so you can use the same functions in different parts of your project.
A: Currying is when you break down a function that takes multiple arguments into a series of functions that each take only one argument. Here's an example in JavaScript:
function add (a, b) {
return a + b;
}
add(3, 4); // returns 7
This is a function that takes two arguments, a and b, and returns their sum. We will now curry this function:
function add (a) {
return function (b) {
return a + b;
}
}
This is a function that takes one argument, a, and returns a function that takes another argument, b, and that function returns their sum.
add(3)(4); // returns 7
var add3 = add(3); // returns a function
add3(4); // returns 7
*
*The first statement returns 7, like the add(3, 4) statement.
*The second statement defines a new function called add3 that will
add 3 to its argument. (This is what some may call a closure.)
*The third statement uses the add3 operation to add 3 to 4, again
producing 7 as a result.
A: Here you can find a simple explanation of currying implementation in C#. In the comments, I have tried to show how currying can be useful:
public static class FuncExtensions {
public static Func<T1, Func<T2, TResult>> Curry<T1, T2, TResult>(this Func<T1, T2, TResult> func)
{
return x1 => x2 => func(x1, x2);
}
}
//Usage
var add = new Func<int, int, int>((x, y) => x + y).Curry();
var func = add(1);
//Obtaining the next parameter here, calling later the func with next parameter.
//Or you can prepare some base calculations at the previous step and then
//use the result of those calculations when calling the func multiple times
//with different input parameters.
int result = func(1);
A: "Currying" is the process of taking the function of multiple arguments and converting it into a series of functions that each take a single argument and return a function of a single argument, or in the case of the final function, return the actual result.
A: The other answers have said what currying is: passing fewer arguments to a curried function than it expects is not an error, but instead returns a function that expects the rest of the arguments and returns the same result as if you had passed them all in at once.
I’ll try to motivate why it’s useful. It’s one of those tools that you never realized you needed until you do. Currying is above all a way to make your programs more expressive - you can combine operations together with less code.
For example, if you have a curried function add, you can write the equivalent of JS x => k + x (or Python lambda x: k + x or Ruby { |x| k + x } or Lisp (lambda (x) (+ k x)) or …) as just add(k). In Haskell you can even use an operator section: (k +) or (+ k). (The two forms let you curry either way for non-commutative operators: (/ 9) is a function that divides a number by 9, which is probably the more common use case, but you also have (9 /) for a function that divides 9 by its argument.)
Besides being shorter, the curried version contains no made-up parameter name like the x found in all the other versions. It’s not needed. You’re defining a function that adds some constant k to a number, and you don’t need to give that number a name just to talk about the function. Or even to define it. This is an example of what’s called “point-free style”. You can combine operations together given nothing but the operations themselves. You don’t have to declare anonymous functions that do nothing but apply some operation to their argument, because that’s what the operations already are.
This becomes very handy with higher-order functions when they’re defined in a currying-friendly way. For instance, a curried map(fn, list) lets you define a mapper with just map(fn) that can be applied to any list later. But currying a map defined instead as map(list, fn) just lets you define a function that will apply some other function to a constant list, which is probably less generally useful (see the sketch at the end of this answer).
Currying reduces the need for things like pipes and threading. In Clojure, you might define a temperature conversion function using the threading macro ->: (defn f2c [deg] (-> deg (- 32) (* 5) (/ 9))). That’s cool, it reads nicely left to right (“subtract 32, multiply by 5 and divide by 9.”) and you only have to mention the parameter twice instead of once for every suboperation… but it only works because -> is a macro that transforms the whole form syntactically before anything is evaluated. It turns into a regular nested expression behind the scenes: (/ (* (- deg 32) 5) 9). If the math ops were curried, you wouldn’t need a macro to combine them so nicely, as in Haskell let f2c = subtract 32 >>> (* 5) >>> (/ 9) (using >>> from Control.Arrow). (Although it would admittedly be more idiomatic to use function composition, which reads right to left: (/ 9) . (* 5) . (subtract 32).)
Again, it’s hard to find good demo examples; currying is most useful in complex cases where it really helps the readability of the solution, but those take so much explanation just to get you to understand the problem that the overall lesson about currying can get lost in the noise.
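To make the curried map mentioned above concrete, here is a small JavaScript sketch (the names are made up for the example):
const map = (fn) => (list) => list.map(fn); // currying-friendly argument order
const double = (x) => x * 2;
const doubleAll = map(double); // a reusable mapper; no list has been named yet
doubleAll([1, 2, 3]); // [2, 4, 6]
doubleAll([10, 20]); // [20, 40]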
A: Here is an example of "Currying in ReasonML":
let run = () => {
Js.log("Curryed function: ");
let sum = (x, y) => x + y;
Printf.printf("sum(2, 3) : %d\n", sum(2, 3));
let per2 = sum(2);
Printf.printf("per2(3) : %d\n", per2(3));
};
A: Below is an example of currying in JavaScript; here multiply returns a function that is used to multiply x by two.
const multiply = (presetConstant) => {
return (x) => {
return presetConstant * x;
};
};
const multiplyByTwo = multiply(2);
// now multiplyByTwo is like below function & due to closure property in JavaScript it will always be able to access 'presetConstant' value
// const multiplyByTwo = (x) => {
// return presetConstant * x;
// };
console.log(`multiplyByTwo(8) : ${multiplyByTwo(8)}`);
Output
multiplyByTwo(8) : 16
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "787"
} |
Q: Alternative to HttpUtility for .NET 3.5 SP1 client framework? It'd be really nice to target my Windows Forms app to the .NET 3.5 SP1 client framework. But, right now I'm using the HttpUtility.HtmlDecode and HttpUtility.UrlDecode functions, and the MSDN documentation doesn't point to any alternatives inside of, say, System.Net or something.
So, short from reflectoring the source code and copying it into my assembly---which I don't think would be worth it---are there alternatives inside of the .NET 3.5 SP1 client framework that you know of, to replace this functionality? It seems a bit strange that they'd restrict these useful functions to server-only code.
A: I reverse engineered the Microsoft System.Net.WebUtility class from .NET 4.0 using Reflector (I think they'd be ok with it given the circumstances). So you could either use .NET 4.0 Client Framework (which now has this new class) or use the code here.
As long as you use a strong name on your assembly etc., you'll be safe enough. Here:
/// <summary>
/// Taken from System.Net in 4.0, useful until we move to .NET 4.0 - needed for Client Profile
/// </summary>
public static class WebUtility
{
// Fields
private static char[] _htmlEntityEndingChars = new char[] { ';', '&' };
// Methods
public static string HtmlDecode(string value)
{
if (string.IsNullOrEmpty(value))
{
return value;
}
if (value.IndexOf('&') < 0)
{
return value;
}
StringWriter output = new StringWriter(CultureInfo.InvariantCulture);
HtmlDecode(value, output);
return output.ToString();
}
public static void HtmlDecode(string value, TextWriter output)
{
if (value != null)
{
if (output == null)
{
throw new ArgumentNullException("output");
}
if (value.IndexOf('&') < 0)
{
output.Write(value);
}
else
{
int length = value.Length;
for (int i = 0; i < length; i++)
{
char ch = value[i];
if (ch == '&')
{
int num3 = value.IndexOfAny(_htmlEntityEndingChars, i + 1);
if ((num3 > 0) && (value[num3] == ';'))
{
string entity = value.Substring(i + 1, (num3 - i) - 1);
if ((entity.Length > 1) && (entity[0] == '#'))
{
ushort num4;
if ((entity[1] == 'x') || (entity[1] == 'X'))
{
ushort.TryParse(entity.Substring(2), NumberStyles.AllowHexSpecifier, (IFormatProvider)NumberFormatInfo.InvariantInfo, out num4);
}
else
{
ushort.TryParse(entity.Substring(1), NumberStyles.Integer, (IFormatProvider)NumberFormatInfo.InvariantInfo, out num4);
}
if (num4 != 0)
{
ch = (char)num4;
i = num3;
}
}
else
{
i = num3;
char ch2 = HtmlEntities.Lookup(entity);
if (ch2 != '\0')
{
ch = ch2;
}
else
{
output.Write('&');
output.Write(entity);
output.Write(';');
goto Label_0117;
}
}
}
}
output.Write(ch);
Label_0117: ;
}
}
}
}
public static string HtmlEncode(string value)
{
if (string.IsNullOrEmpty(value))
{
return value;
}
if (IndexOfHtmlEncodingChars(value, 0) == -1)
{
return value;
}
StringWriter output = new StringWriter(CultureInfo.InvariantCulture);
HtmlEncode(value, output);
return output.ToString();
}
public static unsafe void HtmlEncode(string value, TextWriter output)
{
if (value != null)
{
if (output == null)
{
throw new ArgumentNullException("output");
}
int num = IndexOfHtmlEncodingChars(value, 0);
if (num == -1)
{
output.Write(value);
}
else
{
int num2 = value.Length - num;
fixed (char* str = value)
{
char* chPtr = str;
char* chPtr2 = chPtr;
while (num-- > 0)
{
chPtr2++;
output.Write(chPtr2[0]);
}
while (num2-- > 0)
{
chPtr2++;
char ch = chPtr2[0];
if (ch <= '>')
{
switch (ch)
{
case '&':
{
output.Write("&");
continue;
}
case '\'':
{
output.Write("'");
continue;
}
case '"':
{
output.Write(""");
continue;
}
case '<':
{
output.Write("<");
continue;
}
case '>':
{
output.Write(">");
continue;
}
}
output.Write(ch);
continue;
}
if ((ch >= '\x00a0') && (ch < 'Ā'))
{
output.Write("&#");
output.Write(((int)ch).ToString(NumberFormatInfo.InvariantInfo));
output.Write(';');
}
else
{
output.Write(ch);
}
}
}
}
}
}
private static unsafe int IndexOfHtmlEncodingChars(string s, int startPos)
{
int num = s.Length - startPos;
fixed (char* str = s)
{
char* chPtr = str;
char* chPtr2 = chPtr + startPos;
while (num > 0)
{
char ch = chPtr2[0];
if (ch <= '>')
{
switch (ch)
{
case '&':
case '\'':
case '"':
case '<':
case '>':
return (s.Length - num);
case '=':
goto Label_0086;
}
}
else if ((ch >= '\x00a0') && (ch < 'Ā'))
{
return (s.Length - num);
}
Label_0086:
chPtr2++;
num--;
}
}
return -1;
}
// Nested Types
private static class HtmlEntities
{
// Fields
private static string[] _entitiesList = new string[] {
"\"-quot", "&-amp", "'-apos", "<-lt", ">-gt", "\x00a0-nbsp", "\x00a1-iexcl", "\x00a2-cent", "\x00a3-pound", "\x00a4-curren", "\x00a5-yen", "\x00a6-brvbar", "\x00a7-sect", "\x00a8-uml", "\x00a9-copy", "\x00aa-ordf",
"\x00ab-laquo", "\x00ac-not", "\x00ad-shy", "\x00ae-reg", "\x00af-macr", "\x00b0-deg", "\x00b1-plusmn", "\x00b2-sup2", "\x00b3-sup3", "\x00b4-acute", "\x00b5-micro", "\x00b6-para", "\x00b7-middot", "\x00b8-cedil", "\x00b9-sup1", "\x00ba-ordm",
"\x00bb-raquo", "\x00bc-frac14", "\x00bd-frac12", "\x00be-frac34", "\x00bf-iquest", "\x00c0-Agrave", "\x00c1-Aacute", "\x00c2-Acirc", "\x00c3-Atilde", "\x00c4-Auml", "\x00c5-Aring", "\x00c6-AElig", "\x00c7-Ccedil", "\x00c8-Egrave", "\x00c9-Eacute", "\x00ca-Ecirc",
"\x00cb-Euml", "\x00cc-Igrave", "\x00cd-Iacute", "\x00ce-Icirc", "\x00cf-Iuml", "\x00d0-ETH", "\x00d1-Ntilde", "\x00d2-Ograve", "\x00d3-Oacute", "\x00d4-Ocirc", "\x00d5-Otilde", "\x00d6-Ouml", "\x00d7-times", "\x00d8-Oslash", "\x00d9-Ugrave", "\x00da-Uacute",
"\x00db-Ucirc", "\x00dc-Uuml", "\x00dd-Yacute", "\x00de-THORN", "\x00df-szlig", "\x00e0-agrave", "\x00e1-aacute", "\x00e2-acirc", "\x00e3-atilde", "\x00e4-auml", "\x00e5-aring", "\x00e6-aelig", "\x00e7-ccedil", "\x00e8-egrave", "\x00e9-eacute", "\x00ea-ecirc",
"\x00eb-euml", "\x00ec-igrave", "\x00ed-iacute", "\x00ee-icirc", "\x00ef-iuml", "\x00f0-eth", "\x00f1-ntilde", "\x00f2-ograve", "\x00f3-oacute", "\x00f4-ocirc", "\x00f5-otilde", "\x00f6-ouml", "\x00f7-divide", "\x00f8-oslash", "\x00f9-ugrave", "\x00fa-uacute",
"\x00fb-ucirc", "\x00fc-uuml", "\x00fd-yacute", "\x00fe-thorn", "\x00ff-yuml", "Œ-OElig", "œ-oelig", "Š-Scaron", "š-scaron", "Ÿ-Yuml", "ƒ-fnof", "ˆ-circ", "˜-tilde", "Α-Alpha", "Β-Beta", "Γ-Gamma",
"Δ-Delta", "Ε-Epsilon", "Ζ-Zeta", "Η-Eta", "Θ-Theta", "Ι-Iota", "Κ-Kappa", "Λ-Lambda", "Μ-Mu", "Ν-Nu", "Ξ-Xi", "Ο-Omicron", "Π-Pi", "Ρ-Rho", "Σ-Sigma", "Τ-Tau",
"Υ-Upsilon", "Φ-Phi", "Χ-Chi", "Ψ-Psi", "Ω-Omega", "α-alpha", "β-beta", "γ-gamma", "δ-delta", "ε-epsilon", "ζ-zeta", "η-eta", "θ-theta", "ι-iota", "κ-kappa", "λ-lambda",
"μ-mu", "ν-nu", "ξ-xi", "ο-omicron", "π-pi", "ρ-rho", "ς-sigmaf", "σ-sigma", "τ-tau", "υ-upsilon", "φ-phi", "χ-chi", "ψ-psi", "ω-omega", "ϑ-thetasym", "ϒ-upsih",
"ϖ-piv", " -ensp", " -emsp", " -thinsp", "-zwnj", "-zwj", "-lrm", "-rlm", "–-ndash", "—-mdash", "‘-lsquo", "’-rsquo", "‚-sbquo", "“-ldquo", "”-rdquo", "„-bdquo",
"†-dagger", "‡-Dagger", "•-bull", "…-hellip", "‰-permil", "′-prime", "″-Prime", "‹-lsaquo", "›-rsaquo", "‾-oline", "⁄-frasl", "€-euro", "ℑ-image", "℘-weierp", "ℜ-real", "™-trade",
"ℵ-alefsym", "←-larr", "↑-uarr", "→-rarr", "↓-darr", "↔-harr", "↵-crarr", "⇐-lArr", "⇑-uArr", "⇒-rArr", "⇓-dArr", "⇔-hArr", "∀-forall", "∂-part", "∃-exist", "∅-empty",
"∇-nabla", "∈-isin", "∉-notin", "∋-ni", "∏-prod", "∑-sum", "−-minus", "∗-lowast", "√-radic", "∝-prop", "∞-infin", "∠-ang", "∧-and", "∨-or", "∩-cap", "∪-cup",
"∫-int", "∴-there4", "∼-sim", "≅-cong", "≈-asymp", "≠-ne", "≡-equiv", "≤-le", "≥-ge", "⊂-sub", "⊃-sup", "⊄-nsub", "⊆-sube", "⊇-supe", "⊕-oplus", "⊗-otimes",
"⊥-perp", "⋅-sdot", "⌈-lceil", "⌉-rceil", "⌊-lfloor", "⌋-rfloor", "〈-lang", "〉-rang", "◊-loz", "♠-spades", "♣-clubs", "♥-hearts", "♦-diams"
};
private static Dictionary<string, char> _lookupTable = GenerateLookupTable();
// Methods
private static Dictionary<string, char> GenerateLookupTable()
{
Dictionary<string, char> dictionary = new Dictionary<string, char>(StringComparer.Ordinal);
foreach (string str in _entitiesList)
{
dictionary.Add(str.Substring(2), str[0]);
}
return dictionary;
}
public static char Lookup(string entity)
{
char ch;
_lookupTable.TryGetValue(entity, out ch);
return ch;
}
}
}
A: I’d strongly recommend not rolling your own encoding. I’d use the Microsoft Anti-Cross Site Scripting Library, which is very small (v1.5 is ~30kb), if HttpUtility.HtmlEncode isn’t available.
As for decoding, maybe you could use the decoding routine from Mono?
A: Apparently Google couldn't find it either, so they wrote their own API-compatible version:
Here's a workaround:
*
*Compile Google's version as a library
wget 'http://google-gdata.googlecode.com/svn/trunk/clients/cs/src/core/HttpUtility.cs'
gmcs -t:library HttpUtility.cs
*Edit your-project.cs to include that namespace
using Google.GData.Client; // where HttpUtility lives
*Recompile using the library
gmcs your-project.cs -r:System.Web.Services -r:System.Web -r:HttpUtility
From glancing at the source code, it appears that this is .NET 2.0-compatible.
I just wrote my first hello world in csharp yesterday and I ran into this problem, so I hope this helps someone else.
A: When addressing Windows Phone and the Desktop (Client profile!) world the fastest way I found is:
private static string HtmlDecode(string text)
{
#if WINDOWS_PHONE
return System.Net.HttpUtility.HtmlDecode(text);
#else
return System.Net.WebUtility.HtmlDecode(text);
#endif
}
The odd thing for me was that the namespace is System.Net but the class name differs in the Windows Phone world ... (don't like to use System.Web/full profile when not really needed and the System.Web isn't supported on the Windows Phone platform anyway ...)
A: Found today from this little site that HtmlEncode/Decode can be done using the System.Net library in the .NET 4.0 Client Profile:
Uri.EscapeDataString(...)
WebUtility.HtmlEncode(...)
Edit: I re-read that the question applies to the 3.5 Client Framework, but maybe this can be useful to those who have updated to 4.0.
A: The .NET 3.5 SP1 Client Profile Setup Package is the "cut down" version of .NET that only includes what Microsoft perceive to be the "useful" bits of .NET for client applications. So, useful things like the HttpUtility classes are missing.
For more on that see ScottGu's blog, search for "Client Profile Setup Package".
To get around this you could always extract System.Web.dll from the GAC (it'll be in c:\windows\Microsoft.NET\Framework\ ... ) and deploy it with your application. You will, however, need to track updates and service packs as you deploy.
Better might be to take the hit of the full .NET Framework deployment.
A: Two main ways:
*
*Deploy using the full .NET Framework
*Write your own / 3rd party lib for these functionalities
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: "The system cannot find the file specified" when invoking subprocess.Popen in python I'm trying to use svnmerge.py to merge some files. Under the hood it uses python, and when I use it I get an error - "The system cannot find the file specified". Colleagues at work are running the same version of svnmerge.py, and of python (2.5.2, specifically r252:60911) without an issue.
I found this link, which describes my problem. Trying what was outlined there, I confirmed Python could find SVN (it's in my path):
P:\>python
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> i,k = os.popen4("svn --version")
>>> i.close()
>>> k.readline()
'svn, version 1.4.2 (r22196)\n'
Looking at the svnmerge.py code, though, I noticed for python versions 2.4 and higher it was following a different execution path. Rather than invoking
os.popen4() it uses subprocess.Popen(). Trying that reproduces the error:
C:\>python
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> p = subprocess.Popen("svn --version", stdout=subprocess.PIPE,
>>> close_fds=False, stderr=subprocess.PIPE)
Traceback (most recent call last):
File "", line 1, in
File "C:\Python25\lib\subprocess.py", line 594, in __init__
errread, errwrite)
File "C:\Python25\lib\subprocess.py", line 816, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
>>>
For now, I've commented out the 2.4-and-higher specific code, but I'd like to find a proper solution.
If it's not obvious, I'm a complete python newbie, but Google hasn't helped.
A: It's a bug, see the documentation of subprocess.Popen. There either needs to be a "shell=True" option, or the first argument needs to be a sequence ['svn', '--version']. As it is now, Popen is looking for an executable named, literally, "svn --version" which it doesn't find.
I don't know why it would work for your colleagues though, if they are running the same OS and version of Python... FWIW it gives me the same error message on a mac, and either of the two ways I gave fixes it.
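For reference, a minimal sketch of the two fixes (assuming svn is on the PATH):
import subprocess

# Fix 1: pass the command as a sequence, so Popen looks up the "svn" executable itself
p = subprocess.Popen(["svn", "--version"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()

# Fix 2: or pass shell=True and let the shell parse the command string
p = subprocess.Popen("svn --version", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()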
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How can I store user-tweakable configuration in app.config? I know it is a good idea to store configuration data in app.config (e.g. database connection strings) instead of hardcoing it, even if I am writing an application just for myself. But is there a way to update the configuration data stored in app.config from the program that is using it?
A: If you use the Settings for the project, you can mark each setting as either application or user.
If they're set as user, they will be stored per-user and when you call the Save method it will be updated in the config for that user.
Code project has a really detailed article on saving all types of settings.
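As a rough sketch of what that looks like in code (this assumes a user-scoped setting named WindowTitle was added on the project's Settings tab):
// Read the generated, strongly-typed setting
string title = Properties.Settings.Default.WindowTitle;

// Change it and persist the new value for the current user
Properties.Settings.Default.WindowTitle = "My new title";
Properties.Settings.Default.Save();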
A: app.config isn't what you want to use for user-tweakable data, as it'll be stored somewhere in Program Files (which the user shouldn't have write permissions to). Instead, settings marked with a UserScopedSettingAttribute will end up in a user-scoped .config file somewhere in %LocalAppData%.
I found the best way to learn this stuff was to mess with the Visual Studio "Settings" tab (on your project's property pages), then look at the code that it generates and look in %LocalAppData% to see the file that it generates.
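If you prefer to hand-roll the settings class rather than use the designer, a sketch looks roughly like this (the property name and default value are invented for the example):
using System.Configuration;

public class UserPrefs : ApplicationSettingsBase
{
    [UserScopedSetting]
    [DefaultSettingValue("White")]
    public string BackgroundColor
    {
        get { return (string)this["BackgroundColor"]; }
        set { this["BackgroundColor"] = value; }
    }
}

// Changed values end up in a user-scoped .config under %LocalAppData%
var prefs = new UserPrefs();
prefs.BackgroundColor = "Black";
prefs.Save();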
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Preview theme in WordPress In the latest version of WordPress, it gives you the opportunity to view a preview of what your site would look like using a different theme. You basically just click on the theme, it takes over the screen and you have a chance to activate or close it (and return to the previous screen, which is grayed out in the background). I have seen a similar technique used on a number of websites recently for display images as well.
I'm wondering what technology/code they use to do this?
A: It's open source - use the source, Luke.
Look in wp-admin/js/theme-preview.js
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Why can't SGML::Parser::OpenSP find the symbol __ZTI15SGMLApplication? I'm trying to "install SGML::Parser::OpenSP" from the cpan shell, but it fails on the first "make test". I also get the same error if I go into the build directory and run make test.
I believe this bit of the output below is the relevant part. Note the Symbol not found when perl gets to the "use" line for the new library. The file listed there exists and is readable. When I run the unix command "nm" it does show the symbol.
I don't know what to make of the symbol not found error. I'm not running as admin/root if that matters. This is on a Mac, 10.4.11. My googling turned up some hints that this can happen if gcc is called instead of g++, but I believe that is set up correctly.
What else could it be, and how can I try to fix?
Here's the excerpt from running make test:
PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/01basic...........1/4
# Failed test 'use SGML::Parser::OpenSP;'
# at t/01basic.t line 14.
# Tried to use 'SGML::Parser::OpenSP'.
# Error: Can't load '/Users/joshgold/.cpan/build/SGML-Parser-OpenSP-0.994/blib/arch/auto/SGML/Parser/OpenSP/OpenSP.bundle' for module SGML::Parser::OpenSP: dlopen(/Users/joshgold/.cpan/build/SGML-Parser-OpenSP-0.994/blib/arch/auto/SGML/Parser/OpenSP/OpenSP.bundle, 2): Symbol not found: __ZTI15SGMLApplication
# Referenced from: /Users/joshgold/.cpan/build/SGML-Parser-OpenSP-0.994/blib/arch/auto/SGML/Parser/OpenSP/OpenSP.bundle
# Expected in: dynamic lookup
# at (eval 3) line 2
# Compilation failed in require at (eval 3) line 2.
# BEGIN failed--compilation aborted at (eval 3) line 2.
A: This isn't necessarily an answer to your question, but I've had great success using MacPorts for installing Perl stuff on OS X. It's much smoother than trying to use CPAN because it knows that it's installing for OS X and will patch modules appropriately. Definitely recommended.
A: Rob,
Have you ensured that OpenJade and/or OpenSP are installed? I don't see them on my default install of OSX, but it does exist on my FreeBSD build server.
I'd suggest starting with making sure those are installed. They're linked off the CPAN page for SGML::Parser::OpenSP.
A: It could be that your OpenSP library was compiled by a different C++ compiler than you are currently trying to use.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What are the differences between "generic" types in C++ and Java? Java has generics and C++ provides a very strong programming model with templates.
So then, what is the difference between C++ and Java generics?
A: C++ has templates. Java has generics, which look kinda sorta like C++ templates, but they're very, very different.
Templates work, as the name implies, by providing the compiler with a (wait for it...) template that it can use to generate type-safe code by filling in the template parameters.
Generics, as I understand them, work the other way around: the type parameters are used by the compiler to verify that the code using them is type-safe, but the resulting code is generated without types at all.
Think of C++ templates as a really good macro system, and Java generics as a tool for automatically generating typecasts.
A: Basically, AFAIK, C++ templates create a copy of the code for each type, while Java generics use exactly the same code.
Yes, you can say that a C++ template is equivalent to the Java generics concept (although it would be more proper to say that Java generics are equivalent to C++ templates in concept).
If you are familiar with C++'s template mechanism, you might think that generics are similar, but the similarity is superficial. Generics do not generate a new class for each specialization, nor do they permit “template metaprogramming.”
from: Java Generics
A: Java (and C#) generics seem to be a simple run-time type substitution mechanism.
C++ templates are a compile-time construct which give you a way to modify the language to suit your needs. They are actually a purely-functional language that the compiler executes during a compile.
A: Another advantage of C++ templates is specialization.
template <typename T> T sum(T a, T b) { return a + b; }
template <typename T> T sum(T* a, T* b) { return (*a) + (*b); }
Special sum(const Special& a, const Special& b) { return a.plus(b); }
Now, if you call sum with pointers, the second method will be called, if you call sum with non-pointer objects the first method will be called, and if you call sum with Special objects, the third will be called. I don't think that this is possible with Java.
A: The answer below is from the book Cracking The Coding Interview Solutions to Chapter 13, which I think is very good.
The implementation of Java generics is rooted in an idea of "type erasure". This technique eliminates the parameterized types when source code is translated to the Java Virtual Machine (JVM) bytecode. For example, suppose you have the Java code below:
Vector<String> vector = new Vector<String>();
vector.add(new String("hello"));
String str = vector.get(0);
During compilation, this code is re-written into:
Vector vector = new Vector();
vector.add(new String("hello"));
String str = (String) vector.get(0);
The use of Java generics didn't really change much about our capabilities; it just made things a bit prettier. For this reason, Java generics are sometimes called "syntactic sugar".
This is quite different from C++. In C++, templates are essentially a glorified macro set, with the compiler creating a new copy of the template code for each type. Proof of this is in the fact that an instance of MyClass<Foo> will not share a static variable with MyClass<Bar>. Two instances of MyClass<Foo>, however, will share a static variable.
/*** MyClass.h ***/
template<class T> class MyClass {
public:
static int val;
MyClass(int v) { val = v; }
};
/*** MyClass.cpp ***/
template<typename T>
int MyClass<T>::val;
template class MyClass<Foo>;
template class MyClass<Bar>;
/*** main.cpp ***/
MyClass<Foo> * foo1 = new MyClass<Foo>(10);
MyClass<Foo> * foo2 = new MyClass<Foo>(15);
MyClass<Bar> * bar1 = new MyClass<Bar>(20);
MyClass<Bar> * bar2 = new MyClass<Bar>(35);
int f1 = foo1->val; // will equal 15
int f2 = foo2->val; // will equal 15
int b1 = bar1->val; // will equal 35
int b2 = bar2->val; // will equal 35
In Java, static variables are shared across instances of MyClass, regardless of the different type parameters.
Java generics and C ++ templates have a number of other differences. These include:
*
*C++ templates can use primitive types, like int. Java cannot and must
instead use Integer.
*In Java, you can restrict the template's type parameters to be of a
certain type. For instance, you might use generics to implement a
CardDeck and specify that the type parameter must extend from
CardGame.
*In C++, the type parameter can be instantiated, whereas Java does not
support this.
*In Java, the type parameter (i.e., the Foo in MyClass<Foo>) cannot be
used for static methods and variables, since these would be shared between MyClass<Foo> and MyClass<Bar>. In C++, these classes are different, so the type parameter can be used for static methods and variables.
*In Java, all instances of MyClass, regardless of their type parameters, are the same type. The type parameters are erased at runtime. In C++, instances with different type parameters are different types.
A: I will sum it up in a single sentence: templates create new types, generics restricts existing types.
A: There is a big difference between them. In C++ you don't have to specify a class or an interface for the generic type. That's why you can create truly generic functions and classes, with the caveat of a looser typing.
template <typename T> T sum(T a, T b) { return a + b; }
The method above adds two objects of the same type, and can be used for any type T that has the "+" operator available.
In Java you have to specify a type if you want to call methods on the objects passed, something like:
<T extends Something> T sum(T a, T b) { return a.add ( b ); }
In C++ generic functions/classes can only be defined in headers, since the compiler generates different functions for different types (that it's invoked with). So the compilation is slower. In Java the compilation doesn't have a major penalty, but Java uses a technique called "erasure" where the generic type is erased at runtime, so at runtime Java is actually calling ...
Something sum(Something a, Something b) { return a.add ( b ); }
Nevertheless, Java's generics help with type-safety.
A: Another feature that C++ templates have that Java generics don't is specialization. That allows you to have a different implementation for specific types. So you can, for example, have a highly optimized version for an int, while still having a generic version for the rest of the types. Or you can have different versions for pointer and non-pointer types. This comes in handy if you want to operate on the dereferenced object when handed a pointer.
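A small, illustrative sketch of that idea (this is my own example, not from the original answer):
#include <iostream>

// Generic version
template <typename T>
void describe(const T&) { std::cout << "generic version\n"; }

// Full specialization for int, standing in for a highly optimized path
template <>
void describe<int>(const int&) { std::cout << "int-optimized version\n"; }

// Overload for pointer types: operate on the dereferenced object
template <typename T>
void describe(T* p) { describe(*p); }

int main() {
    double d = 1.0;
    int i = 2;
    describe(d);  // generic version
    describe(i);  // int-optimized version
    describe(&d); // generic version, reached through the pointer overload
}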
A: Java Generics are massively different to C++ templates.
Basically in C++ templates are basically a glorified preprocessor/macro set (Note: since some people seem unable to comprehend an analogy, I'm not saying template processing is a macro). In Java they are basically syntactic sugar to minimize boilerplate casting of Objects. Here is a pretty decent introduction to C++ templates vs Java generics.
To elaborate on this point: when you use a C++ template, you're basically creating another copy of the code, just as if you used a #define macro. This allows you to do things like have int parameters in template definitions that determine sizes of arrays and such.
Java doesn't work like that. In Java all objects extent from java.lang.Object so, pre-Generics, you'd write code like this:
public class PhoneNumbers {
private Map phoneNumbers = new HashMap();
public String getPhoneNumber(String name) {
return (String) phoneNumbers.get(name);
}
}
because all the Java collection types used Object as their base type so you could put anything in them. Java 5 rolls around and adds generics so you can do things like:
public class PhoneNumbers {
private Map<String, String> phoneNumbers = new HashMap<String, String>();
public String getPhoneNumber(String name) {
return phoneNumbers.get(name);
}
}
And that's all Java Generics are: wrappers for casting objects. That's because Java Generics aren't reified. They use type erasure. This decision was made because Java Generics came along so late in the piece that they didn't want to break backward compatibility (a Map<String, String> is usable whenever a Map is called for). Compare this to .Net/C# where type erasure isn't used, which leads to all sorts of differences (e.g. you can use primitive types, and IEnumerable and IEnumerable<T> bear no relation to each other).
And a class using generics compiled with a Java 5+ compiler is usable on JDK 1.4 (assuming it doesn't use any other features or classes that require Java 5+).
That's why Java Generics are called syntactic sugar.
But this decision on how to do generics has profound effects so much so that the (superb) Java Generics FAQ has sprung up to answer the many, many questions people have about Java Generics.
C++ templates have a number of features that Java Generics don't:
*
*Use of primitive type arguments.
For example:
template<class T, int i>
class Matrix {
T data[i][i];
...
};
Java does not allow the use of primitive type arguments in generics.
*Use of default type arguments, which is one feature I miss in Java but there are backwards compatibility reasons for this;
*Java allows bounding of arguments.
For example:
public class ObservableList<T extends List> {
...
}
It really does need to be stressed that template invocations with different arguments really are different types. They don't even share static members. In Java this is not the case.
Aside from the differences with generics, for completeness, here is a basic comparison of C++ and Java (and another one).
And I can also suggest Thinking in Java. As a C++ programmer a lot of the concepts like objects will be second nature already but there are subtle differences so it can be worthwhile to have an introductory text even if you skim parts.
A lot of what you'll learn when learning Java is all the libraries (both standard--what comes in the JDK--and nonstandard, which includes commonly used things like Spring). Java syntax is more verbose than C++ syntax and doesn't have a lot of C++ features (e.g. operator overloading, multiple inheritance, the destructor mechanism, etc) but that doesn't strictly make it a subset of C++ either.
A: There is a great explanation of this topic in Java Generics and Collections
By Maurice Naftalin, Philip Wadler. I highly recommend this book. To quote:
Generics in Java resemble templates in
C++. ... The syntax is deliberately
similar and the semantics are
deliberately different. ...
Semantically, Java generics are
defined by erasure, whereas C++
templates are defined by expansion.
Please read the full explanation here.
A: @Keith:
That code is actually wrong and apart from the smaller glitches (template omitted, specialization syntax looks different), partial specialization doesn't work on function templates, only on class templates. The code would however work without partial template specialization, instead using plain old overloading:
template <typename T> T sum(T a, T b) { return a + b; }
template <typename T> T sum(T* a, T* b) { return (*a) + (*b); }
A: Templates are nothing but a macro system. Syntax sugar. They are fully expanded before actual compilation (or, at least, compilers behave as if it were the case).
Example:
Let's say we want two functions. One function takes two sequences (list, arrays, vectors, whatever goes) of numbers, and returns their inner product. Another function takes a length, generates two sequences of that length, passes them to the first function, and returns it's result. The catch is that we might make a mistake in the second function, so that these two functions aren't really of the same length. We need the compiler to warn us in this case. Not when the program is running, but when it's compiling.
In Java you can do something like this:
import java.io.*;
interface ScalarProduct<A> {
public Integer scalarProduct(A second);
}
class Nil implements ScalarProduct<Nil>{
Nil(){}
public Integer scalarProduct(Nil second) {
return 0;
}
}
class Cons<A extends ScalarProduct<A>> implements ScalarProduct<Cons<A>>{
public Integer value;
public A tail;
Cons(Integer _value, A _tail) {
value = _value;
tail = _tail;
}
public Integer scalarProduct(Cons<A> second){
return value * second.value + tail.scalarProduct(second.tail);
}
}
class _Test{
public static Integer main(Integer n){
return _main(n, 0, new Nil(), new Nil());
}
public static <A extends ScalarProduct<A>>
Integer _main(Integer n, Integer i, A first, A second){
if (n == 0) {
return first.scalarProduct(second);
} else {
return _main(n-1, i+1,
new Cons<A>(2*i+1,first), new Cons<A>(i*i, second));
//the following line won't compile, it produces an error:
//return _main(n-1, i+1, first, new Cons<A>(i*i, second));
}
}
}
public class Test{
public static void main(String [] args){
System.out.print("Enter a number: ");
try {
BufferedReader is =
new BufferedReader(new InputStreamReader(System.in));
String line = is.readLine();
Integer val = Integer.parseInt(line);
System.out.println(_Test.main(val));
} catch (NumberFormatException ex) {
System.err.println("Not a valid number");
} catch (IOException e) {
System.err.println("Unexpected IO ERROR");
}
}
}
In C# you can write almost the same thing. Try to rewrite it in C++, and it won't compile, complaining about infinite expansion of templates.
A: I would like to quote askanydifference here:
The main difference between C++ and Java lies in their dependency on the platform. While C++ is a platform-dependent language, Java is a platform-independent language.
The above statement is the reason why C++ is able to provide true generic types, while Java performs strict checking and hence doesn't allow generics to be used the way C++ allows templates.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "174"
} |
Q: How to pass a single object[] to a params object[] I have a method which takes params object[] such as:
void Foo(params object[] items)
{
Console.WriteLine(items[0]);
}
When I pass two object arrays to this method, it works fine:
Foo(new object[]{ (object)"1", (object)"2" }, new object[]{ (object)"3", (object)"4" } );
// Output: System.Object[]
But when I pass a single object[], it does not take my object[] as the first param, instead it takes its all elements like I wanted to pass them one by one:
Foo(new object[]{ (object)"1", (object)"2" });
// Output: 1, expected: System.Object[]
How do I pass a single object[] as a first argument to a params array?
A: This is a one line solution involving LINQ.
var elements = new String[] { "1", "2", "3" };
Foo(elements.Cast<object>().ToArray())
A: The params parameter modifier gives callers a shortcut syntax for passing multiple arguments to a method. There are two ways to call a method with a params parameter:
1) Calling with an array of the parameter type, in which case the params keyword has no effect and the array is passed directly to the method:
object[] array = new[] { "1", "2" };
// Foo receives the 'array' argument directly.
Foo( array );
2) Or, calling with an extended list of arguments, in which case the compiler will automatically wrap the list of arguments in a temporary array and pass that to the method:
// Foo receives a temporary array containing the list of arguments.
Foo( "1", "2" );
// This is equivalent to:
object[] temp = new[] { "1", "2" };
Foo( temp );
In order to pass in an object array to a method with a "params object[]" parameter, you can either:
1) Create a wrapper array manually and pass that directly to the method, as mentioned by lassevk:
Foo( new object[] { array } ); // Equivalent to calling convention 1.
2) Or, cast the argument to object, as mentioned by Adam, in which case the compiler will create the wrapper array for you:
Foo( (object)array ); // Equivalent to calling convention 2.
However, if the goal of the method is to process multiple object arrays, it may be easier to declare it with an explicit "params object[][]" parameter. This would allow you to pass multiple arrays as arguments:
void Foo( params object[][] arrays ) {
foreach( object[] array in arrays ) {
// process array
}
}
...
Foo( new[] { "1", "2" }, new[] { "3", "4" } );
// Equivalent to:
object[][] arrays = new[] {
new[] { "1", "2" },
new[] { "3", "4" }
};
Foo( arrays );
Edit: Raymond Chen describes this behavior and how it relates to the C# specification in a new post.
A: You need to encapsulate it into another object[] array, like this:
Foo(new Object[] { new object[]{ (object)"1", (object)"2" }});
A: Another way to solve this problem (it's not so good practice but looks beauty):
static class Helper
{
public static object AsSingleParam(this object[] arg)
{
return (object)arg;
}
}
Usage:
f(new object[] { 1, 2, 3 }.AsSingleParam());
A: A simple typecast will ensure the compiler knows what you mean in this case.
Foo((object)new object[]{ (object)"1", (object)"2" }));
As an array is a subtype of object, this all works out. Bit of an odd solution though, I'll agree.
A: One option is you can wrap it into another array:
Foo(new object[]{ new object[]{ (object)"1", (object)"2" } });
Kind of ugly, but since each item is an array, you can't just cast it to make the problem go away... such as if it were Foo(params object items), then you could just do:
Foo((object) new object[]{ (object)"1", (object)"2" });
Alternatively, you could try defining another overloaded instance of Foo which takes just a single array:
void Foo(object[] item)
{
// Somehow don't duplicate Foo(object[]) and
// Foo(params object[]) without making an infinite
// recursive call... maybe something like
// FooImpl(params object[] items) and then this
// could invoke it via:
// FooImpl(new object[] { item });
}
A: new[] { (object) 0, (object) null, (object) false }
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "128"
} |
Q: Relative Root with Visual Studio ASP.NET debugger I am working on an ASP.NET project which is physically located at C:\Projects\MyStuff\WebSite2. When I run the app with the Visual Studio debugger it seems that the built in web server considers "C:\Projects\MyStuff\" to be the relative root, not "C:\Projects\MyStuff\WebSite2". Is there a web.config setting or something that will allow tags like <img src='/img/logo.png' /> to render correctly without having to resort to the ASP.NET specific tags like <asp:image />? If I code for the debugger's peculiarities then when I upload to the production IIS server everthing is off.
How do you resolve this?
A: you can try this trick that Scott Guthrie posted on his blog http://weblogs.asp.net/scottgu/archive/2006/12/19/tip-trick-how-to-run-a-root-site-with-the-local-web-server-using-vs-2005-sp1.aspx
to cut to the fix: select your project/solution in solution explorer and then open the Properties tab like you would if you were editing a textbox. If you right click and go to "Property Pages" that is the wrong place.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Firefox add-ons What Firefox add-ons do you use that are useful for programmers?
A: I'd also recommend the Web Developer extension by Chris Pederick.
A: As far as web development, especially for javascript, I find Firebug to be invaluable. Web developer toolbar is also very useful.
A: The ones I have are...
*
*Y-SLow
*Live Headers
*Firebug
*Dom Inspector
A: One that wasn't mentioned yet is this HTML Validator extension that I found very useful.
A: I guess it's silly to mention Firebug -- doubt any of us could live without it. Other than that I use the following (only listing dev-related):
*
*Console2: next-generation error console
*DOM inspector: as the title might indicate, allows you to browse the DOM
*Edit Cookies: change cookies on the fly
*Execute JS: ad-hoc Javascript execution
*IE Tab: render a page in IE
*Inspect This: brings the selected object into the DOM inspector
*JSView: display linked javascript and CSS
*LORI (Life of Request Info): shows how long it takes to render a page
*Measure IT: a popup ruler.
*URL Params: shows GET and POST variables
*Web Developer: a myriad of tools for the web developer
A: Here are mine (developer centric):
*
*FireBug - a myriad of productivity enhancing tools, includes javascript debugger, DOM inspector, allows you to edit the CSS/HTML on the fly which is highly valuable for troubleshooing layout and display problems.
*Web Developer - again another great developer productivity tool. I mostly use it for quickly validating pages, disabling javascript (yes I disable javascript sometimes, don't you?), viewing cookies, etc.
*Tamper Data - lets you tamper with http headers, form values, cookies, etc. prior to posting back to a page, or getting a page. Incredibly valuable for poking and prodding your pages, and seeing how your web app responds when used with slightly malicious intent.
*JavaScript Debugger - has a few more features than javascript debugger provided by firebug. Although I must admit, I sparingly use this one since firebug has largely won me over.
*Live HTTP Headers - invaluable for troubleshooting, use it frequently. Lets you spy on all HTTP headers communicated back and forth between client and server. It has helped me track down nefarious problems, especially when debugging issues when deploying your web app between environments.
*Header Spy - nice addon for the geeky types, shows you the web server and platform a web site runs on in the status bar.
*MeasureIt - I don't use this all too frequently, but I've still found it valuable from time to time.
*ColorZilla - again, not something I use all that frequently, but when I need it, I need it. Valuable when you want to know a color and you don't want to dig through a CSS file, or open up a graphics editing app to get a color embedded in some image.
*Add N Edit Cookies - this has been a great debugging tool in web farms where the load balancer writes a cookie, and uses the cookie value to keep your session "sticky". It allowed me to switch at will between servers to track down problems on specific machine. Also a good tool if you want to try to mess with a site that uses cookies to track your login status/account, and you want to see how your code responds to malformed or hacked info.
*Yellowpipe Lynx Viewer Tool - yeah I know what you're thinking: lynx, who needs it, it's so 1994. But if you are developing a site that needs to take web accessibility into account (meaning accessible to users with visual impairments who use screen readers), or if you need to get a sense of how a web spider/indexer "sees" your site, this tool is invaluable. Granted, you could always just go out and grab Lynx for yourself; here's the windows xp port that I use.
I've got a handful of other addons that I've used from time to time that I'll just quickly mention: FireFTP (the one I installed wasn't stable and I've not tried a newer release), Html Validator (also found this one unstable, at least back when I installed it about a year ago), IE Tab (I usually just have both IE and FireFox open concurrently, but that is just me; I know many others find this addon useful).
A: @Flávio Amieiro
MeasureIt is an unnecessary extension to have if you install the Web Developer Toolbar. Web Developer Toolbar includes a ruler as one of its features. Under the "Miscellaneous" category for Web Developer click the option "Display Ruler" to use a ruler identical to the MeasureIt one.
That will allow you reduce the number of extensions needed by at least one.
A: Firefox addons:
*
*FireBug:helps web developers and designers test and inspect front-end code. It provides us with many useful features such as a console panel for logging information, a DOM inspector, detailed information about page elements, and much, much more.
*Web Developer-gives you the power disable CSS, edit CSS on the fly, measure certain areas of a page and much more.
*ColorZilla
Just click on the icon, hover over the area you'd like to know the hex color for, and click.
*Window Resizer
to make sure the layout is displayed properly in the standard resolutions of today.
*Total Validator
validating websites much easier by checking HTML, links, CSS and doing a lot more.
A: Web Developer for web development. Scribefire if you're a blogger-progammer
A: For web developing I use the Web Developer Toolbar, CSS Viewer and MeasureIt.
But I'm really not one of those who has a thousand of extensions to do everything. I like to keep things simple.
EDIT: Thanks to Dan's answer I don't need MeasureIt anymore. Can't believe I've never seen that! I guess I'll just have to pay more atention to this WebDeveloper toolbar.
A: Adding to everyones lists, Tamper Data is quite useful, lets you intercept requests and change the data in them.
It can be used to bypass javascript validation and check whether the server side is doing its thing.
A: I use Web Developer, it's a real time saver.
A: *
*+1 for LORI ("life-of-request-info"). It's a very convenient alternative for rough measurements of the load time of a particular web page -- the kind of thing that you might otherwise use an external stopwatch for.
*New Tab Homepage. Combined with a "speed dial"-type homepage (a personal, fast-loading page of links that you use frequently), helps you get where you're going faster when you open a new browser tab.
*LastTab. Changes the behavior of Ctrl+Tab to let you navigate back and forth between your most-recently-used tabs with repeated presses of Ctrl+Tab, the same way that Alt+Tab works in Windows. Also provides a nice view of all open tabs while Ctrl is still being held down for easy navigation. (The resultant behavior is very similar to the Ctrl+Tab behavior in recent releases of Visual Studio.)
A: FireFTP is good for grabbing/uploading any necessary files.
A: I find Hackbar to be quite useful. Very useful if you want to edit the querystring part of the url, to test for vulnerabilities, or just general other types of testing where you might end up with complicated query string values.
A: I was learning DOM inspector, but I've switched to Firebug.
A: Some of which has been missed above are here
*
*Load Time Analyzer – View detailed graphs of the loading time of web pages in firefox. The graphs display events like page requests, image loading times etc.
*Poster – A must have tool for web developers enabling them to interact with web services and other web resources.
*Aardvark – A cool extension for web developers and designers, allows them to view CSS attributes, id, class by highlighting page element individually.
A: Fiddler is a really great debugging proxy. Think of it as a more powerful version of the "Net" panel in Firebug or the Live HTTP headers.
It used to be an IE-only extension, now it also has hooks into Firefox.
A: Groundspeed, is useful for testing server side code. It was created for input validation tests during pentest, but can be useful for any test that require manipulating input (similar to TamperData).
It lets you control the form elements in the page, you can change their type and other attributes (size, lenght, javascript event handlers, etc). So for example you can change a hidden field or a select to a textbox and then enter any value to test the server response and stuff like that.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: PHP best practices? What is a good way to remove the code from display pages when developing with PHP? Often the pages I work on need to be edited by an outside person. This person is often confused by lots of blocks of PHP, and also likes to break my code.
I've tried moving blocks of code out into functions, so now there are functions spread throughout the HTML. As some pages become more complex they become programs again, and processing POSTs is questionable.
What can I be doing better in my PHP development?
A: Take a look at how some of the popular PHP frameworks use templating. Examples include cakePHP, Zend Framework, and Code Igniter. Even if you are not going to base your site on these frameworks, the template design pattern is a good way to keep php code away from your web designers, so they can focus on layout and not functionality.
A: It sounds to me like you need to begin implementing what is known as "separation of concerns" across your application generally. The examples folks give about templating, in response to your specific complaint about page editors breaking your code, are important, but represent just one example of this tactic. As your program gets larger and more complex it becomes harder to modify and debug--even if your designer is not breaking your code.
Probably the most common separation is a three way split between data, logic and presentation as described in the design pattern Model-View-Controller (MVC). You do not need a full blown MVC framework in place to implement the same basic principles. The idea is simply to encapsulate code that deals with your data (model) in one place, the code that presents this data to the user (view) in another. You tie that code together with code that is only concerned with presenting the right data to the right user at a the right time (controller).
From your description, it sounds like what you have right now is a Transaction Script pattern, where you have a PHP file "dothis.php" that is loaded in the browser, and all the function definitions and HTML for the display are together. You already have functions, so you are already beginning to encapsulate pieces of logic.
The way I would approach this, in keeping with the other answers here about templating, is to move all of the HTML into another file that references only simple PHP variables and maybe some loops (but as little conditional switching as you can manage). That will make the template easier to read and harder to break. When your page editor wants to modify the layout, give them THAT file.
You then separate all of your data access functions to another file, ideally creating a class (or several classes, depending on how complex your data is and how frequently you need to reuse it).
At this point your "dothis.php" has been stripped down to maybe some configuration code (which you can separate out to an include, and some authentication code (which you can separate out to its own class), and is only calling the data access functions, and calling the included template file. Your controller itself is therefore greatly simplified and easier to manage.
A: I would highly recommend reading the book PHP In Action. It takes you through abstracting your database connections, templating systems and all the other basics of a web application. If every PHP developer read this book then the language would have a much better reputation.
It also has chapters on refactoring, unit testing and the MVC control pattern.
A: Does the outside person need to edit the logic, or just the display (HTML)?
If it's the latter case, check out the Smarty template engine.
A: You don't need a "system" to do templating.
You can do it on your own by keeping presentation & logic separate.
This way the designer can screw up the display, but not the logic behind it.
Here's a simple example:
<?php
$people = array('derek','joel','jeff');
$people[0] = 'martin'; // all your logic goes here
include 'templates/people.php';
?>
Now here's the people.php file (which you give your designer):
<html>
<body>
<?php foreach($people as $name):?>
<b>Person:</b> <?=$name?> <br />
<?php endforeach;?>
</body>
</html>
A: I think I'd like to stay away from an unwieldy framework. Just some approach I can use that generally makes the pages more readable with cleaner code.
Stack Overflow wants me to decide which answer is best, when best is a subjective opinion. Who is to say what the 'best' practice is?
A: If you decide to continue using functions, you can get some inspiration from WordPress. You can probably reduce the "program" to a minimum by making templates more granular.
Also, good tools (i.e. HTML editors) can help designers ignore your PHP and work on the design without breaking the code. (But I have no suggestions, sorry.)
The other way to do things is to create your own template system instead of SMARTY, but it would probably take too long to build a working system that satisfies your needs and goes beyond just replacing something like %%VARIABLE%% with text.
Our company uses SMARTY and even with a lot of code in templates, designers know how to work with it. For simple CMS sites we use ExpressionEngine, which uses HTML-like tags for inserting logic into templates.
A: There's a lot that can be said on this topic but a very basic starting point would be to move as much code as possible out into separate files and then use include statements.
A: I usually use includes, as they can be very useful for organising and grouping functions together. Also, comment your code. There's nothing worse than for someone else to see your work and not know why you've done this. Naming variables and functions sensibly can go a long way too - for example:
$userName = "John Doe";
$dateOfBirth = "04/02/1982";
function calculateUserAgeFromBirth($userName, $dateOfBirth)
Naming variables like this also helps minimise comments about what your code actually does.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: What are the important Ruby commands? I'm not sure of all of them, but what are the commands to do things like update Ruby, download a new gem, or update an existing gem? What other important things are there?
Since it might matter, I'm running Windows.
A: By Ruby commands you probably mean the command line programs for Ruby. These are also called Ruby Helper programs. Here are a few:
*
*ruby - The interpreter itself. Run Ruby scripts or statements.
*gem - Ruby Package Manager. Great for automatically downloading or updating small Ruby modules like XML libraries, web servers, or even whole Ruby programs.
*irb - Interactive Ruby Prompt. This is an entire Ruby shell that will let you execute any Ruby code you want. You can load libraries, test code directly, anything you can do with Ruby you can do in this shell. Believe me, there is quite a lot that you can do with it to improve your Ruby development workflow [1].
*ri - Quick shell access to Ruby documentation. You can find the RDoc information on nearly any Ruby Class or method. The same kind of documentation that you would find on the online ruby-docs.
*erb - Evaluates embedded Ruby in Ruby Templated documents. Embedded Ruby is just like embedding php into a document, and this is an interpreter for that kind of document. This is really more for the rails crowd. An alternative would be haml.
*rdoc - Generate the standard Ruby documentation for one of your Ruby classes. Its like Javadocs. It parses the Ruby source files and generates the standard documentation from special comments.
*testrb and rake. I'm not familiar enough with these. I'd love it if someone could fill these in!
Hopefully this was what you were looking for!
A: Useful command: Rake
In addition to the commands listed by Joseph Pecoraro, the 'rake' command is also pretty standard when working with Ruby. Rake makes it easy to automate (simple) tasks; like building a RubyGem or running your unit tests.
With rake, the only important command to remember is 'rake -T', which shows a list of rake tasks available in the current directory.
Updating a Ruby gem
To get back to your specific question:
To update a specific gem, you can do two things: simply update the gem:
gem update <gemname>
This will update the gem to the latest version.
Install a Ruby gem
If you want to update to a specific version, you must install it:
gem install <gemname> -v <gemversion>
You can leave out the -v options. Rubygems then installs the latest version.
How to help yourself
Two useful gem commands to remember are:
gem help
This shows how to get help with rubygems.
gem help commands
This shows all commands available to rubygems.
From here you can get more specific help on a command by using gem help:
gem help update
A: sudo gem install gemname
sudo gem update gemname
A: Okay. I see what you're going for but again try to go abstract because I know someone will give you a direct answer (which people should up-vote over this).
Everyone should get comfortable with man pages. But even if you are, you'll find that these commands lack decent man pages. However, those that do will point you to cmd --help and you will find some decent documentation there. I linked each of the commands above to a hopefully useful resource that will lead you to an answer if you're worried about command line switches. I see someone already posted the commands so I won't repeat those for gem. But I'd go further and say:
sudo gem update [gemname]
The default behavior will update all installed gems.
Also, as a bonus there is a neat gem called cheat. The idea is that instead of typing man cmd you will type cheat cmd and you can get a community editable man page for that command. Or better yet, it doesn't have to be a command, it can be an entire topic. Coincidentally to install cheat you would do:
sudo gem install cheat
And then:
cheat gem
That will list out a "man page" written by users like you about the gem command. The commands that you asked for are on that page. Anyone can add new pages, update existing pages, and contribute to the community. If you're interested here is a quick addition you can make to have autocompletion for the cheat command from the command line.
I know I have long winded answers ;)
A:
Is there a similar command to update Ruby itself?
Alas, no there is not. I'm afraid that if you want to update Ruby itself you will have to either download an installer from the Ruby website, or compile it from source.
I should mention though that compiling from source is very easy and offers developers quite a bit of neat flexibility. You can add a suffix to the generated commands so that you can have standalone Ruby 1.8 and Ruby 1.9 builds both at the same time. That can be very helpful for testing.
Finally, it's always risky to update an operating system's built-in commands unless it happens through an official update. Installed applications may expect to find Ruby 1.8 in the standard location and crash if they meet an updated version. Any updates you make should not overwrite the one that came with the OS. (If any app crashes then it's the fault of the app's developers for not specifying the absolute path to the OS version.)
A:
@John Topley: Thanks. Is there a
similar command to update Ruby itself?
Not really. You don't say which operating system you're using. I use Mac OS X and tend to build Ruby from source.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Alignment restrictions for malloc()/free() Older K&R (2nd ed.) and other C-language texts I have read that discuss the implementation of a dynamic memory allocator in the style of malloc() and free() usually also mention, in passing, something about data type alignment restrictions. Apparently certain computer hardware architectures (CPU, registers, and memory access) restrict how you can store and address certain value types. For example, there may be a requirement that a 4 byte (long) integer must be stored beginning at addresses that are multiples of four.
What restrictions, if any, do major platforms (Intel & AMD, SPARC, Alpha) impose for memory allocation and memory access, or can I safely ignore aligning memory allocations on specific address boundaries?
A: Sparc, MIPS, Alpha, and most other "classical RISC" architectures only allow aligned accesses to memory, even today. An unaligned access will cause an exception, but some operating systems will handle the exception by copying from the desired address in software using smaller loads and stores. The application code won't know there was a problem, except that the performance will be very bad.
MIPS has special instructions (lwl and lwr) which can be used to access 32 bit quantities from unaligned addresses. Whenever the compiler can tell that the address is likely unaligned it will use this two instruction sequence instead of a normal lw instruction.
x86 can handle unaligned memory accesses in hardware without an exception, but there is still a performance hit of up to 3X compared to aligned accesses.
Ulrich Drepper wrote a comprehensive paper on this and other memory-related topics, What Every Programmer Should Know About Memory. It is a very long writeup, but filled with chewy goodness.
A: Alignment is still quite important today. Some processors (the 68k family jumps to mind) would throw an exception if you tried to access a word value on an odd boundary. Today, most processors will run two memory cycles to fetch an unaligned word, but this will definitely be slower than an aligned fetch. Some other processors won't even throw an exception, but will fetch an incorrect value from memory!
If for no other reason than performance, it is wise to try to follow your processor's alignment preferences. Usually, your compiler will take care of all the details, but if you're doing anything where you lay out the memory structure yourself, then it's worth considering.
A: You still need to be aware of alignment issues when laying out a class or struct in C(++). In these cases the compiler will do the right thing for you, but the overall size of the struct/class may be more wasteful than necessary.
For example:
struct
{
char A;
int B;
char C;
int D;
};
Would have a size of 4 * 4 = 16 bytes (assume Windows on x86) whereas
struct
{
char A;
char C;
int B;
int D;
};
Would have a size of 4*3 = 12 bytes.
This is because the compiler enforces a 4 byte alignment for integers, but only 1 byte for chars.
In general pack member variables of the same size (type) together to minimize wasted space.
A: As Greg mentioned it is still important today (perhaps more so in some ways) and compilers usually take care of the alignment based on the target of the architecture. In managed environments, the JIT compiler can optimize the alignment based on the runtime architecture.
You may see pragma directives (in C/C++) that change the alignment. This should only be used when very specific alignment is required.
// For example, this changes the pack to 2 byte alignment.
#pragma pack(2)
A: Note that even on IA-32 and AMD64, some of the SSE instructions/intrinsics require aligned data. These instructions will throw an exception if the data is unaligned, so at least you won't have to debug "wrong data" bugs. There are equivalent unaligned instructions as well, but as Denton says, they are slower.
If you're using VC++, then besides the #pragma pack directives, you also have the __declspec(align) directive for precise alignment. The VC++ documentation also mentions an _aligned_malloc function for specific alignment requirements.
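For example, a small MSVC-flavoured sketch (names per the VC++ CRT; other toolchains spell these differently, e.g. posix_memalign or memalign):
#include <malloc.h>   /* _aligned_malloc and _aligned_free on MSVC */
#include <stdio.h>

__declspec(align(16)) struct Vec4 { float x, y, z, w; };  /* a 16-byte aligned type */

int main(void)
{
    /* ask for 16-byte aligned storage, as the aligned SSE loads/stores expect */
    float *buf = (float *)_aligned_malloc(64 * sizeof(float), 16);
    if (buf == NULL)
        return 1;
    printf("buf=%p, 16-byte aligned: %d\n", (void *)buf, (int)(((size_t)buf % 16) == 0));
    _aligned_free(buf);
    return 0;
}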
As a rule of thumb, unless you are moving data across compilers/languages or are using the SSE instructions, you can probably ignore alignment issues.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Designing a threaded commenting system What would be the best way to design a threaded commenting system so that it doesn't hammer the database?
A: Modified pre-order tree traversal (or what Matt refers to as "nested set") is the way to go.
If you happen to be working in Django, there's a third-party app, django-mptt, that makes implementing MPTT in your models a one-liner.
A: SELECT ... START WITH ... CONNECT BY
Oracle has an extension to SELECT that allows easy tree-based retrieval.
This query will traverse a table where the nesting relationship is stored in parent and child columns.
select * from my_table
start with parent = :TOP_ARTICLE
connect by prior child = parent;
http://www.adp-gmbh.ch/ora/sql/connect_by.html
A: I'm guessing your question is about arranging the system so you don't have to work as:
*
*Select all the top level comments
*Select all comments whose parents were found in the step prior
*Select all comments whose parents were found in the step prior
*... repeat until no comments found
I would suggest designing the DB table with a thread key, which would be a string of all the parents of that post. You'd have to limit your discussion to a certain depth, but your SQL statements would be straight selects ordered by the thread key, giving you back threaded comments. Less taxing on your DB and web server.
A thread key would be something like the comment's own post id joined onto its parent's thread key with a delimiter.
How does that sound?
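As a rough illustration of the idea (table and column names are made up; the ids are zero-padded so that string ordering matches numeric ordering):
-- each comment stores its ancestors' ids plus its own, e.g. '0001/0005/0009'
CREATE TABLE comment (
    id         INT PRIMARY KEY,
    thread_key VARCHAR(255) NOT NULL,  -- parent's thread_key + '/' + padded id
    body       TEXT
);

-- one select returns the whole thread, already in threaded order;
-- the depth of a comment is just the number of '/' delimiters in its key
SELECT id, thread_key, body
  FROM comment
 WHERE thread_key = '0001' OR thread_key LIKE '0001/%'
 ORDER BY thread_key;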
A: This website lists some common techniques:
http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/
I'd do the "nested set" model, but have multiple roots (e.g. each "topic" is a new tree). It's very fast, simple to query, but complicated to maintain...
A: What I normally do in this case is to have a single thread that is responsible for putting the data into the database, and have all auxiliary threads report to that thread, which then queues up the data, and writes it either serially, or in batches (depending on the requirements, and how much database activity I'm willing to put up with).
A: I'm guessing you have something resembling a "comments" table, with a foreign key to itself, pointing to the parent comment of each row. This makes the threaded comments into a tree structure with the thread starter as the tree root.
So we can rephrase the question as "What is the best way to select a tree structure from a database?". Well I won't assume to know the best way, but my first inclination (probably wrong) is to use a stored procedure to walk the tree, and compile a list of rows to return. It still takes multiple select statements to get all the children, but it's only one database round trip.
Aryeh's method with the accumulated parent list is probably better :)
A: I have to second Carl Meyer's suggestion to use the modified pre-order tree traversal (nested set) technique. I'm working on a system like this right now, but with some further optimizations for a forum.
In forum systems that support replies you will frequently be doing inserts into the middle of the tree which yields poor performance. To reduce the pain I'm working on allowing gaps in the number line. This works like pre-allocating memory in an array list. The cost for shifting the tree to the right is the same for 1 and for 100. So on successive replies (which are more likely) I can update fewer tree nodes and they will be much faster.
The downside is that counting descendant nodes (the number of replies below a post) by comparing the current node's left and right values will break. That information can instead be cached in the data structure to keep it fast, so on insert I will have to update all ancestor nodes with new counts. Still, far fewer nodes will be updated, and the result will be much faster average-case insert times.
Multiple trees are being stored in the same table. Each tree has a unique tree id, one per topic. Smaller trees update much faster.
Hope That Helps
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Long-term Static Page Caching I maintain several client sites that have no dynamic data whatsoever, everything is static asp.net with c#.
Are there any pitfalls to caching the entire page for extreme periods of time, like a week?
Kibbee, we use a couple of controls (ad rotator, some of the AJAX extensions) on the sites. They could probably be written completely in HTML, but for convenience's sake I just stuck with what we use for every other site.
A: The only significant pitfall to long cache times occurs when you want to update that data. To be safe, you have to assume that it will take up to a week for the new version to become available. Intermediate hosts such as a ISP level proxy servers often do cache aggressively so this delay will happen.
If there are large files to be cached, I'd look at ensuring your content engine supports If-Modified-Since.
For smaller files (page content, CSS, images, etc), where reducing the number of round-trips is the key, having a long expiry time (a year?) and changing the URL when the content changes is the best. This lets you control when user agents will fetch the new content.
Yahoo! have published a two part article on reducing HTTP requests and browser cache usage. I won't repeat it all here, but these are good reads which will guide you on what to do.
My feeling is to pick a time period high enough to cover most users single sessions but low enough to not cause too much inconvenience should you wish to update the content. Be sure to support If-Modified-Since if you have a Last-Modified for all your content.
Finally, if your content is cacheable at all and you need to push new content out now, you can always use a new URL. This final cacheable content URL can sit behind a fixed HTTP 302 redirect URL should you wish to publish a permanent link to the latest version.
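In ASP.NET terms, those two policies might look roughly like this (an illustrative sketch from a page's code-behind; adjust the expiry and the path to your own site):
// long-lived content served from a versioned URL: cache aggressively,
// and change the URL (e.g. a ?v= query string) whenever the content changes
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetExpires(DateTime.UtcNow.AddDays(7));

// content served from a stable URL: advertise Last-Modified so that
// clients can revalidate cheaply with If-Modified-Since
Response.Cache.SetLastModified(
    System.IO.File.GetLastWriteTimeUtc(Server.MapPath("~/Default.aspx")));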
A: We have a similar issue on a project I am working on. There is data that is pretty much static, but is open to change..
What I ended up doing is saving the data to a local file and then monitoring it for changes. The DB server is then never hit unless we remove the file, in which case the code scoots off to the DB and regenerates the data file.
So basically we have a little bit of disk I/O while loading/saving, no traffic to the DB server unless necessary, and we are still in control of it (we can either delete the file manually or script it, etc.).
I should also add that you could then tie this in with the actual web server caching model if you wanted to reduce the disk I/O (we didn't really need to in our case).
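A hypothetical sketch of that tie-in, using ASP.NET's cache with a file dependency (the cache key and the path are made up):
string html = HttpRuntime.Cache["staticData"] as string;
if (html == null)
{
    string path = HttpContext.Current.Server.MapPath("~/App_Data/staticData.html");
    html = System.IO.File.ReadAllText(path);
    // the entry is evicted automatically as soon as the file is deleted or rewritten
    HttpRuntime.Cache.Insert("staticData", html,
        new System.Web.Caching.CacheDependency(path));
}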
This could be totally the wrong way to go about it, but it seems to work quite nice for us :)
A: If it's static, why bother caching at all? Let IIS worry about it.
A: When you say that you have no data, how are you even using ASP.NET or C#? What functionality does that provide over plain HTML? Also, if you do plan on caching, it's probably best to cache to a file, and then when a request is made, stream out the file. The OS will take care of keeping the file in memory so that you won't have to read it off the disk all the time.
A: You may want to build in a cache updating mechanism if you want to do this, just to make sure you can clear the cache if you need to do a code update. Other than that, there aren't any problems that I can think of.
A: If it is static you would probably be better off generating the pages once and then serve up the resulting static HTML file directly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do I Send Email from the Command Line? I would like to quickly send email from the command line. I realize there are probably a number of different ways to do this.
I'm looking for a simple way to do this from a linux terminal (likely a bash shell but anything should do) and an alternative way to do this on Windows. I want to be able to whip up an email right on the command line or have the flexibility to pipe the message into the command line program. How would you go about doing this? If you have small scripts that would be fine as well.
A: You can also use this sendmail version for windows. It is very simple to use, standard UNIX-like behavior. Fast. Does not need any installation, just call the EXE wherever it is located on your system.
Composing the email:
echo To: [email protected], [email protected] >> the.mail
echo From: [email protected] >> the.mail
echo Subject: This is a SENDMAIL notification >> the.mail
echo Hello World! >> the.mail
echo This is simple enough. >> the.mail
echo .>> the.mail
Sending the file:
\usr\lib\sendmail.exe -t < the.mail
type the.mail | C:\Projects\Tools\sendmail.exe -t
A: If you are looking to do this from a Windows command line, there is a tool called blat that can be used from a CMD prompt.
It is a bit more fun from PowerShell. Since PowerShell has access to the .NET Framework, you can use the classes from System.Net.Mail to send email. There is an example script on the PowerShell Community Script Repository.
A: IIRC you'll also have to configure a mail transfer agent (MTA) to use mail or most email libraries. Sendmail is the most well known but is a real pig when it comes to configuration. Exim, Qmail and Postfix are all popular alternatives that are a bit more modern.
There are also more lightweight MTAs that are only able to send out mail, not receive it: nullmailer, mstmp, ssmtp, etc.
Postfix is default for Ubuntu. This wiki article describes how to configure it - be sure to only allow forwarding from your local address!
A: Here is a Power Shell example of a script to send email:
$smtp = new-object Net.Mail.SmtpClient("mail.example.com")
if( $Env:SmtpUseCredentials -eq "true" ) {
$credentials = new-object Net.NetworkCredential("username","password")
$smtp.Credentials = $credentials
}
$objMailMessage = New-Object System.Net.Mail.MailMessage
$objMailMessage.From = "[email protected]"
$objMailMessage.To.Add("[email protected]")
$objMailMessage.Subject = "eMail subject Notification"
$objMailMessage.Body = "Hello world!"
$smtp.send($objMailMessage)
A: $ echo "This is the email body" | mail -s "This is the subject" [email protected]
Alternatively:
$ cat | mail -s "A few lines off the top of my head" [email protected]
This is where my
multiline
message would go
^D
^D - means press ctrl+d
A: You can use mail:
$mail -s <subject> <recipients>
You then type your message and end it with a line that has only a period. This signals you are done and sends the message.
You can also pipe your email in from STDIN and it will be sent as the text of an email:
$<mail-generating-program> | mail -s <subject> <recipients>
One small note with this approach - unless your computer is connected to the internet and your DNS settings are set properly, you won't be able to receive replies to your message. For a more robust command-line program you can link to your POP or IMAP email account, check out either pine or mutt.
A: If you want to invoke an email program, then see this article:
How do I open the default mail program with a Subject and Body in a cross-platform way?
A: If you are on a Linux server, but mail isn't available (which can be the case on shared servers), you can write a simple PHP / Perl / Ruby (depending on what's available) script to do the same thing, e.g. something like this:
#! /usr/bin/php
<?php
if ($argc < 3) {
echo "Usage: " . basename($argv[0]) . " TO SUBJECT [CC]\n";
exit(1);
}
$message = file_get_contents('php://stdin'); // read the message body from standard input
$headers = $argc >= 4 ? "Cc: $argv[3]\r\n" : null;
$ret = mail($argv[1], $argv[2], $message, $headers);
exit($ret ? 0 : 1);
Then invoke as follows:
mail [email protected] test < message
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: How can I disable DLL Caching in Windows Vista via CMD? I know Windows Vista (and XP) cache recently loaded DLL's in memory...
How can this be disabled via the command prompt?
A: The only thing you can do is disable SuperFetch, which can be done from the command prompt with this command (there has to be a space between the = sign and disabled).
sc config Superfetch start= disabled
There is a myth out there that you can disable DLL caching, but that only worked for systems prior to Windows 2000. [source]
A: Perhaps it would be helpful to know why you want to do this and then try to help solve the original problem...
A: Windows does not cache recently used DLLs in memory.
It does cache the contents of the files in the file cache, like it would normally do with data files.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Why functional languages? I see a lot of talk on here about functional languages and stuff. Why would you use one over a "traditional" language? What do they do better? What are they worse at? What's the ideal functional programming application?
A: I must be dense, but I still don't get it. Are there any actual examples of small applications written in a functional language like F# where you can look at the source code and see how and why it was better to use such an approach than, say, C#?
A: I'd point out that everything you've said about functional languages, most people were saying about object-oriented languages about 20 years ago. Back then it was very common to hear about OO:
* The average corporate programmer, e.g. most of the people I work with, will not understand it and most work environments will not let you program in it
* It's not really taught at universities (or is it nowadays?)
* Most applications are simple enough to be solved in normal IMPERATIVE ways
Change has to come from somewhere. A meaningful and important change will make itself happen regardless of whether people trained in earlier technologies take the opinion that change isn't necessary. Do you think the change to OO was good despite all the people that were against it at the time?
A: Even if you never work in a functional language professionally, understanding functional programming will make you a better developer. It will give you a new perspective on your code and programming in general.
I say there's no reason to not learn it.
I think the languages that do a good job of mixing functional and imperative style are the most interesting and are the most likely to succeed.
A: F# could catch on because Microsoft is pushing it.
Pro:
*
*F# is going to be part of next version of Visual Studio
*Microsoft is building community for some time now - evangelists, books, consultants that work with high profile customers, significant exposure at MS conferences.
*F# is first class .NET language and it's the first functional language that comes with really big foundation (not that I say that Lisp, Haskell, Erlang, Scala, OCaml do not have lots of libraries, they are just not as complete as .NET is)
*Strong support for parallelism
Contra:
*
*F# is very hard to start even if you are good with C# and .NET - at least for me :(
*it will probably be hard to find good F# developers
So, I give 50:50 chance to F# to become important. Other functional languages are not going to make it in near future.
A: I'm always skeptical about the Next Big Thing. Lots of times the Next Big Thing is pure accident of history, being there in the right place at the right time no matter whether the technology is good or not. Examples: C++, Tcl/Tk, Perl. All flawed technologies, all wildly successful because they were perceived either to solve the problems of the day or to be nearly identical to entrenched standards, or both. Functional programming may indeed be great, but that doesn't mean it will be adopted.
But I can tell you why people are excited about functional programming: many, many programmers have had a kind of "conversion experience" in which they discover that using a functional language makes them twice as productive (or maybe ten times as productive) while producing code that is more resilient to change and has fewer bugs. These people think of functional programming as a secret weapon; a good example of this mindset is Paul Graham's Beating the Averages. Oh, and his application? E-commerce web apps.
Since early 2006 there has also been some buzz about functional programming and parallelism. Since people like Simon Peyton Jones have been worrying about parallelism off and on since at least 1984, I'm not holding my breath until functional languages solve the multicore problem. But it does explain some of the additional buzz right about now.
In general, American universities are doing a poor job teaching functional programming. There's a strong core of support for teaching intro programming using Scheme, and Haskell also enjoys some support there, but there's very little in the way of teaching advanced technique for functional programmer. I've taught such a course at Harvard and will do so again this spring at Tufts. Benjamin Pierce has taught such a course at Penn. I don't know if Paul Hudak has done anything at Yale. The European universities are doing a much better job; for example, functional programming is emphasized in important places in Denmark, the Netherlands, Sweden, and the UK. I have less of a sense of what's happening in Australasia.
A: I think one reason is that some people feel that the most important part of whether a language will be accepted is how good the language is. Unfortunately, things are rarely so simple. For example, I would argue that the biggest factor behind Python's acceptance isn't the language itself (although that is pretty important). The biggest reason why Python is so popular is its huge standard library and the even bigger community of third-party libraries.
Languages like Clojure or F# may be the exception to the rule on this considering that they're built upon the JVM/CLR. As a result, I don't have an answer for them.
A: Most applications can be solved in [insert your favorite language, paradigm, etc. here].
Although, this is true, different tools can be used to solve different problems. Functional just allows another high (higher?) level abstraction that allows to do our jobs more effectively when used correctly.
A: It seems to me that those people who never learned Lisp or Scheme as an undergraduate are now discovering it. As with a lot of things in this field there is a tendency to hype and create high expectations...
It will pass.
Functional programming is great. However, it will not take over the world. C, C++, Java, C#, etc will still be around.
What will come of this I think is more cross-language ability - for example implementing things in a functional language and then giving access to that stuff in other languages.
A: When reading "The Next Mainstream Programming Language: A Game Developer’s Perspective" by Tim Sweeney, Epic Games, my first thought was - I got to learn Haskell.
PPT
Google's HTML Version
A: I don't see anyone mentioning the elephant in the room here, so I think it's up to me :)
JavaScript is a functional language. As more and more people do more advanced things with JS, especially leveraging the finer points of jQuery, Dojo, and other frameworks, FP will be introduced by the web-developer's back-door.
In conjunction with closures, FP makes JS code really light, yet still readable.
Cheers,
PS
A: Check out Why Functional Programming Matters.
A: Things have been moving in a functional direction for a while. The two cool new kids of the past few years, Ruby and Python, are both radically closer to functional languages than what came before them — so much so that some Lispers have started supporting one or the other as "close enough."
And with the massively parallel hardware putting evolutionary pressure on everyone — and functional languages in the best place to deal with the changes — it's not as far a leap as it once was to think that Haskell or F# will be the next big thing.
A: Have you been following the evolution of programming languages lately? Every new release of all mainstream programming languages seems to borrow more and more features from functional programming.
*
*Closures, anonymous functions, passing and returning functions as values used to be exotic features known only to Lisp and ML hackers. But gradually, C#, Delphi, Python, Perl, JavaScript, have added support for closures. It's not possible for any up-and-coming language to be taken seriously without closures.
*Several languages, notably Python, C#, and Ruby have native support for list comprehensions and list generators.
*ML pioneered generic programming in 1973, but support for generics ("parametric polymorphism") has only become an industry standard in the last 5 years or so. If I remember correctly, Fortran supported generics in 2003, followed by Java in 2004, C# in 2005, and Delphi in 2008. (I know C++ has supported templates since 1979, but 90% of discussions on C++'s STL start with "here there be demons".)
What makes these features appealing to programmers? It should be plainly obvious: it helps programmers write shorter code. All languages in the future are going to support—at a minimum—closures if they want to stay competitive. In this respect, functional programming is already in the mainstream.
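For instance, closures and list comprehensions (a tiny illustration, not taken from the answer) fit in a few lines of Python:
def make_adder(n):                 # a closure: the returned function captures n
    return lambda x: x + n

add_five = make_adder(5)
squares = [add_five(i) ** 2 for i in range(4)]   # a list comprehension
print(squares)                     # [25, 36, 49, 64]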
Most applications are simple enough to
be solved in normal OO ways
Who says you can't use functional programming for simple things too? Not every functional program needs to be a compiler, theorem prover, or massively parallel telecommunications switch. I regularly use F# for ad hoc throwaway scripts in addition to my more complicated projects.
A: It's catching on because it's the best tool around for controlling complexity.
See:
- slides 109-116 of Simon Peyton-Jones talk "A Taste of Haskell"
- "The Next Mainstream Programming Language: A Game Developer's Perspective" by Tim Sweeney
A:
Most applications are simple enough to be solved in normal OO ways
*
*OO ways have not always been "normal." This decade's standard was last decade's marginalized concept.
*Functional programming is math. Paul Graham on Lisp (replace Lisp by functional programming):
So the short explanation of why this
1950s language is not obsolete is that
it was not technology but math, and
math doesn’t get stale. The right
thing to compare Lisp to is not 1950s
hardware, but, say, the Quicksort
algorithm, which was discovered in
1960 and is still the fastest
general-purpose sort.
A: I bet you didn't know you were doing functional programming when you used:
*
*Excel formulas
*Quartz Composer
*JavaScript
*Logo (Turtle graphics)
*LINQ
*SQL
*Underscore.js (or Lodash), D3
A: Functional languages use a different paradigm than imperative and object-oriented languages. They use side-effect-free functions as a basic building block in the language. This enables lots of things and makes a lot of things more difficult (or in most cases different from what people are used to).
One of the biggest advantages with functional programming is that the order of execution of side-effect-free functions is not important. For example, in Erlang this is used to enable concurrency in a very transparent way.
And because functions in functional languages behave very similar to mathematical functions it's easy to translate those into functional languages. In some cases, this can make code more readable.
Traditionally, one of the big disadvantages of functional programming was also the lack of side effects. It's very difficult to write useful software without I/O, but I/O is hard to implement without side effects in functions. So most people never got more out of functional programming than calculating a single output from a single input. In modern mixed-paradigm languages like F# or Scala this is easier.
Lots of modern languages have elements from functional programming languages. C# 3.0 has a lot functional programming features and you can do functional programming in Python too. I think the reasons for the popularity of functional programming is mostly because of two reasons: Concurrency is getting to be a real problem in normal programming, because we're getting more and more multiprocessor computers; and the languages are getting more accessible.
A: I don't think that there's any question about the functional approach to programming "catching on", because it's been in use (as a style of programming) for about 40 years. Whenever an OO programmer writes clean code that favors immutable objects, that code is borrowing functional concepts.
However, languages that enforce a functional style are getting lots of virtual ink these days, and whether those languages will become dominant in the future is an open question. My own suspicion is that hybrid, multi-paradigm languages such as Scala or OCaml
will likely dominate over "purist" functional languages in the same way that pure OO language (Smalltalk, Beta, etc.) have influenced mainstream programming but haven't ended up as the most widely-used notations.
Finally, I can't resist pointing out that your comments re FP are highly parallel to the remarks I heard from procedural programmers not that many years ago:
*
*The (mythical, IMHO) "average" programmer doesn't understand it.
*It's not widely taught.
*Any program you can write with it can be written another way with current techniques.
Just as graphical user interfaces and "code as a model of the business" were concepts that helped OO become more widely appreciated, I believe that increased use of immutability and simpler (massive) parallelism will help more programmers see the benefits that the functional approach offers. But as much as we've learned in the past 50 or so years that make up the entire history of digital computer programming, I think we still have much to learn. Twenty years from now, programmers will look back in amazement at the primitive nature of the tools we're currently using, including the now-popular OO and FP languages.
A: I agree with the first point, but times change. Corporations will respond, even if they're late adopters, if they see that there's an advantage to be had. Life is dynamic.
They were teaching Haskell and ML at Stanford in the late 1990s. I'm sure that places like Carnegie Mellon, MIT, Stanford, and other good schools are presenting it to students.
I agree that most "expose relational databases on the web" applications will continue in that vein for a long time. Java EE, .NET, Ruby on Rails, and PHP have evolved some pretty good solutions to that problem.
You've hit on something important: It might be the problem that can't be solved easily by other means that will boost functional programming. What would that be?
Will massive multicore hardware and cloud computing push them along?
A: Because functional programming has significant benefits in terms of productivity, reliability and maintainability. Many-core may be a killer application that finally gets big corporations to switch over despite large volumes of legacy code. Furthermore, even big commercial languages like C# are taking on a distinct functional flavour as a result of many-core concerns. Side effects simply don't fit well with concurrency and parallelism.
I do not agree that "normal" programmers won't understand it. They will, just like they eventually understood OOP (which is just as mysterious and weird, if not more so).
Also, most universities do teach functional programming; many even teach it as the first programming course.
A: Wow - this is an interesting discussion. My own thoughts on this:
FP makes some tasks relatively simple (compared to none-FP languages).
None-FP languages are already starting to take ideas from FP, so I suspect that this trend will continue and we will see more of a merge which should help people make the leap to FP easier.
A: I don't know whether it will catch on or not, but from my investigations, a functional language is almost certainly worth learning, and will make you a better programmer. Just understanding referential transparency makes a lot of design decisions so much easier - and the resulting programs much easier to reason about. Basically, if you run into a problem, then it tends to only be a problem with the output of a single function, rather than a problem with an inconsistent state, which could have been caused by any of the hundreds of classes/methods/functions in an imperative language with side effects.
The stateless nature of FP maps more naturally to the stateless nature of the web, and thus functional languages lend themselves more easily to more elegant, RESTFUL webapps. Contrast with JAVA and .NET frameworks that need to resort to horribly ugly HACKS like VIEWSTATE and SESSION keys to maintain application state, and maintain the (occasionally quite leaky) abstraction of a stateful imperative language, on an essentially stateless functional platform like the web.
And also, the more stateless your application, the more easily it can lend itself to parallel processing. Terribly important for the web, if your website happens to get popular. It's not always straightforward to just add more hardware to a site to get better performance.
A: My view is that it will catch on now that Microsoft have pushed it much further into the mainstream. For me it's attractive because of what it can do for us, because it's a new challenge and because of the job opportunities it presents for the future.
Once mastered it will be another tool to further help make us more productive as programmers.
A: A point missed in the discussion is that the best type systems are found in contemporary FP languages. What's more, compilers can infer all (or at least most) types automatically.
It is interesting that one spends half the time writing type names when programming in Java, yet Java is by no means type safe. In contrast, you may never write a type in a Haskell program (except as a kind of compiler-checked documentation), and the code is 100% type safe.
A:
The average corporate programmer, e.g.
most of the people I work with, will
not understand it and most work
environments will not let you program
in it
That one is just a matter of time though. Your average corporate programmer learns whatever the current Big Thing is. 15 years ago, they didn't understand OOP.
If functional programming catches on, your "average corporate programmers" will follow.
It's not really taught at universities
(or is it nowadays?)
It varies a lot. At my university, SML is the very first language students are introduced to.
I believe MIT teaches Lisp as a first-year course. These two examples may not be representative, of course, but I believe most universities at the very least offer some optional courses on functional programming, even if they don't make it a mandatory part of the curriculum.
Most applications are simple enough to
be solved in normal OO ways
It's not really a matter of "simple enough" though. Would a solution be simpler (or more readable, robust, elegant, performant) in functional programming? Many things are "simple enough to be solved in Java", but it still requires a godawful amount of code.
In any case, keep in mind that functional programming proponents have claimed that it was the Next Big Thing for several decades now. Perhaps they're right, but keep in mind that they weren't when they made the same claim 5, 10 or 15 years ago.
One thing that definitely counts in their favor, though, is that recently, C# has taken a sharp turn towards functional programming, to the extent that it's practically turning a generation of programmers into functional programmers, without them even noticing. That might just pave the way for the functional programming "revolution". Maybe. ;)
A: Man cannot understand the perfection and imperfections of his chosen art if he cannot see the value in other arts. Following rules only permits development up to a point in technique and then the student and artist has to learn more and seek further. It makes sense to study other arts as well as those of strategy.
Who has not learned something more about themselves by watching the activities of others? To learn the sword study the guitar. To learn the fist study commerce. To just study the sword will make you narrow-minded and will not permit you to grow outward.
-- Miyamoto Musashi, "A Book of Five Rings"
A: One key feature in a functional language is the concept of first-class functions. The idea is that you can pass functions as parameters to other functions and return them as values.
Functional programming involves writing code that does not change state. The primary reason for doing so is so that successive calls to a function will yield the same result. You can write functional code in any language that supports first-class functions, but there are some languages, like Haskell, which do not allow you to change state. In fact, you're not supposed to make any side effects (like printing out text) at all - which sounds like it could be completely useless.
Haskell instead employs a different approach to I/O: monads. These are objects that contain the desired I/O operation to be executed by your interpreter's toplevel. At any other level they are simply objects in the system.
What advantages does functional programming provide? Functional programming allows coding with fewer potentials for bugs because each component is completely isolated. Also, using recursion and first-class functions allows for simple proofs of correctness which typically mirror the structure of the code.
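A tiny Haskell sketch of the ideas above: a pure, side-effect-free function, a higher-order function taking it as a value, and the I/O kept at the edge in main:
square :: Int -> Int
square x = x * x                     -- same input, same output, no hidden state

applyTwice :: (a -> a) -> a -> a
applyTwice f = f . f                 -- a function passed in and used as a value

main :: IO ()
main = print (applyTwice square 3)   -- the only side effect; prints 81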
A: The main plus for me is its inherent parallelism, especially as we are now moving away from higher CPU clock frequency and towards more and more cores.
I don't think it will become the next programming paradigm and completely replace OO type methods, but I do think we will get to the point that we need to either write some of our code in a functional language, or our general purpose languages will grow to include more functional constructs.
A: I don't think most realistic people think that functional programming will catch on (become the main paradigm like OO). After all, most business problems are not pretty math problems but hairy imperative rules to move data around and display it in various ways, which means it's not a good fit for the pure functional programming paradigm (the learning curve of monads far exceeds that of OO).
OTOH, functional programming is what makes programming fun. It makes you appreciate the inherent, timeless beauty of succinct expressions of the underlying math of the universe. People say that learning functional programming will make you a better programmer. This is of course highly subjective. I personally don't think that's completely true either.
It makes you a better sentient being.
A: In addition to the other answers, casting the solution in pure functional terms forces one to understand the problem better. Conversely, thinking in a functional style will develop better* problem solving skills.
*Either because the functional paradigm is better or because it will afford an additional angle of attack.
A:
It's not really taught at universities (or is it nowadays?)
I don't know about nowadays, but I was taught both Miranda and Lisp as part of my computer science course in the mid 1990s. Despite not using a pure functional language since, it has influenced the way I solve problems.
Most applications are simple enough to be solved in normal OO ways
In the same mid 1990s computer science course, OO (taught using Eiffel) was taught pretty much on a par with functional programming. Both were non-mainstream at the time. OO may be "normal" now, but it was not ever thus.
I'll be interested to see whether F# is the thing that pushes functional programming into the mainstream.
A: I think the biggest argument for functional programming languages to become the "next big thing" is that in the future multi-core processors will be the norm. Programmers will have to take advantage of that, and functional programming offers really wonderful possibilities for building top of the line concurrent software.
P.S. When I was in college at Boston University ('98-'02) we spent a semester learning Scheme which is a close cousin of LISP. When we first started learning it, I wanted to rip my hair out. By the end of the course it was very natural.
A: *
*How long did it take OOP to get understood by the average corporate programmer...?
*I was taught functional programming at Utrecht University in - I think - 1994 and only see it start to catch on "in the real world" in the last couple of years.
*There is no such thing as a "simple application". ;-)
I think that (approaching) side effect free programming for some key parts of software will be essential when we start to get more and more cores in our hardware. Give functional programming a bit more time. And the functional sprinkling in current and future versions of C# will go a long way in preparing those corporate programmers for functional programming without them even realising it...
A: Some thoughts:
*
*The debate between FP and imperative programming (OO, structured, etc), has been raging since Lisp versus Fortran. I think you pose excellent questions but recognize that they are not especially new.
*Part of the hoopla over FP is that we seem to be recognizing that concurrency is very difficult, and that locks and other mechanisms in OO (e.g. Java) are just one solution. FP offers a refreshing sea change with ideas such as Actors and the power of stateless computing. To those wrestling with OO, the landscape seems highly appealing.
*Yes, schools teach FP. In fact, the University of Waterloo and others offer Scheme in first-year classes.
*Regarding the average programmer, I'm sure that the same arguments were given against C++ back in the early 1990s. And look what happened. If businesses can gain an advantage via a technology, you can bet that people will receive training.
This is not to say that it is a sure thing, or that there won't be a backlash in 3-5 years (as there always is). However, the trend towards FP has merit and is worth watching.
A: There's a great article from Slava Akhmechet called Functional Programming For The Rest of Us (this was the article that got me into FP btw). Amongst the benefits FP brings, he unorthodoxly emphasizes the following (which I believe contributes to the appeal for software engineers):
*
*Unit Testing
*Debugging
*Concurrency
*Hot Code Deployment
*Machine Assisted Proofs and Optimizations
And then goes on to discuss the goodness of more traditionally discussed aspects of FP like higher order functions, currying, lazy evaluation, optimization, abstracting control structures (although not discussing monads), infinite data structures, strictness, continuations, pattern matching, closures and so on.
Highly recommended !
A: I have a hard time envisioning a purely functional language being the common language of the day, for reasons I won't get into (because they're flame fodder). That being said, programming in a functional way can provide benefits no matter the language (if it allows such). For me, it's the ability to test my code much more easily. I work with databases a lot... I tend to:
*
*write a function that takes data, manipulates it, and returns data
*write a dead simple wrapper that calls the database and then returns the result of passing that data through my function
Doing so allows me to write unit tests for my manipulation function, without the need to create mocks and the like.
I do think purely functional languages are very interesting... I just think it's what we can learn from them that matters to me, not what we can do with them.
A: I'm actually learning Lisp after reading Hackers and Painters and I do believe I will learn something from Lisp that will give me a better understanding of everything else I program. Now I don’t think I will actually be using Lisp in my everyday work just because some guy in 1995 created a website that became Yahoo Stores. So it’s a win-win anyway (if it catches on, I win; if not, I get more points of view on how to program and how stuff works).
Now... on another question kind of related, do I think programming will change a lot with 32 cores processors arriving next year? Yes, I don’t know if it will be functional programming, but... I’m pretty sure there will be something different!
A: It has already caught on with Map/reduce in Hadoop
A: I think the answer to your question lies more in the statement, "the right tool for the job", than the hottest thing. There will always be hot new technologies, and there will always be those who jump on them.
Functional languages have been around for a while, it's just now they are getting more press.
A: Uh, sorry to be a pedant, but it has already caught on - we call it Excel.
http://research.microsoft.com/en-us/um/people/simonpj/papers/excel/
The vast majority of programmes that run on computers are written in Excel or one of the many popular clones of it.
(there are many programmes that are run many times, and programmes written in Excel tend NOT to be among them - most Excel programmes have 1 run instance)
A: FP is the next paradigm, that is for sure. Which language will be the next step is the hard part, but I believe it could be Haskell, F#, Clojure, OCaml or Erlang. It could also be Python with more FP constructs and better support for parallelism/performance, and Perl 6 with Parrot also looks very interesting.
A: Lots of people have mentioned Functional Languages.
But here are some of the most commonly used functional languages in use today, besides JavaScript:
Excel, SQL, XSLT, XQuery, and J and K (which are used in the financial realm).
And of course Erlang.
So I would say that, judging from that list, functional programming techniques are used in the mainstream every day.
A: Functional programming will likely be a tool used by engineers and scientists to solve the problems they are facing. It isn't going to take over the world like earlier languages did. However, the hard product to beat is Excel: if I am an engineer and need to do calculations, Excel is awesome.
However, F# is going to be another source and will likely fill design needs by the non-Computer Scientists. Let's face it, Computer Scientists have done a great job of creating a WHOLE new way of doing things. Object Oriented Programming is GREAT. But sometimes you just need a way to solve an equation, get a solution and graph it. That's it. Then a language like F# fills the bill. Or maybe you want to build a finite state machine, F# again could be one of the solutions, but then C could be a solution as well.
But when it comes to parallel processing, Excel shines and in time F# will be there as well. In a friendly manner though, F#= friendly.
A: Microsoft is really pushing F# with the next version of Visual Studio. It is a hybrid language, like Scala, and integrates very nicely with the rest of the .NET Framework. I think a lot of Microsoft shops are going to use it to speed up the development of highly parallel data-processing applications and functions.
A: I personally think functional programming will have a breakthrough soon for distributed systems and multi-threaded/parallel programming, as long as it integrates with existing OOP paradigms through programming libraries. So... the purely functional approach - in my opinion - will remain academic.
A: Functional programming has been around for a long time - LISP was one of the earliest languages to have a compiler, and then there were MIT's LISP machines. It's not a new paradigm (OO is much newer), but the dominant software platforms have tended to be written in languages that translate easily to assembly language, and their APIs heavily favor imperative code (UNIX with C, Windows with C, and Macintosh with Pascal and later C).
I think the new innovation in the last few years is for a diversity of APIs to catch on, particularly for things like web development where the platform APIs are irrelevant. Since you're not coding directly to the Win32 API or the POSIX API, that gives people the freedom to try out functional languages.
A: I don't think functional languages will solve anything; this is just hype that management is trying to sell. Remember the only truth:
There is no silver bullet.
All the rest is bullshit. They also said that OO would solve our problems, that Web Services would solve our problems, that XML would solve our problems, but in the end the above truth applied and it all came crashing down. Also, twenty years from now, who says we will be using binary computers at all? Why not quantum computers? No one can predict the future, at least not on this planet. (That is the second truth.)
Q: VB.NET Get underlying System.Type from nullable type I'm attempting to create a dataset based on the properties of an object. For example, I have an instance of a Person class with properties including ID, Forename, Surname, DOB, etc. Using reflection, I'm adding columns to a new dataset based on the object properties:
For Each pi As PropertyInfo In person.GetType().GetProperties()
Dim column As New DataColumn(pi.Name, pi.PropertyType)
table.Columns.Add(column)
Next
My problem is that some of those properties are nullable types, which aren't supported by datasets. Is there any way to extract the underlying system type from a nullable type?
Thanks.
A: Nullable.GetUnderlyingType(myType)
will return the underlying type or null if it's not a nullable type.
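Applied to the loop in the question, that might look like the following sketch (same person and table variables as in the question; If(x, y) here is just VB's null-coalescing operator falling back to pi.PropertyType for non-nullable properties):
For Each pi As PropertyInfo In person.GetType().GetProperties()
    ' GetUnderlyingType returns Nothing for non-nullable types, so fall back to the type itself.
    Dim columnType As Type = If(Nullable.GetUnderlyingType(pi.PropertyType), pi.PropertyType)
    table.Columns.Add(New DataColumn(pi.Name, columnType))
Next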
A: Here's your answer, in VB. This may be overkill for your purposes, but it also might be useful to some other folks.
First off, here's the code to find out if you're dealing with a Nullable type:
Private Function IsNullableType(ByVal myType As Type) As Boolean
Return (myType.IsGenericType) AndAlso (myType.GetGenericTypeDefinition() Is GetType(Nullable(Of )))
End Function
Note the unusual syntax in the GetType. It's necessary. Just doing GetType(Nullable) as one of the commenters suggested did not work for me.
So, armed with that, you can do something like this... Here, in an ORM tool, I am trying to get values into a generic type that may or may not be Nullable:
If (Not value Is Nothing) AndAlso IsNullableType(GetType(T)) Then
Dim UnderlyingType As Type = Nullable.GetUnderlyingType(GetType(T))
Me.InnerValue = Convert.ChangeType(value, UnderlyingType)
Else
Me.InnerValue = value
End If
Note that I check for Nothing in the first line because Convert.ChangeType will choke on it... You may not have that problem, but my situation is extremely open-ended.
Hopefully, if I didn't answer your question directly, you can cannibalize this and get to where you need to go - but I just implemented this moments ago, and my tests are all passing.
A: I'm guessing that the problem is recognizing whether the property is nullable or not. In C# you do this with this code:
if (type.IsGenericType && type.GetGenericTypeDefinition() == typeof(Nullable<>))
... but I'm not sure what the equivalent of that last clause is in VB.NET.
A: You can also use the GetGenericArguments() method on that type. myNullableObject.GetType().GetGenericArguments()(0) should give you the type of nullable it is (so Guid, Int32, etc.)
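A quick sketch of that call, working from the declared type rather than an instance (boxing a nullable value tends to lose the Nullable wrapper, so GetType() on a variable is less reliable than using the Type you already have, e.g. pi.PropertyType):
Dim declared As Type = GetType(Nullable(Of Integer))   ' e.g. pi.PropertyType from the question's loop
Dim inner As Type = declared.GetGenericArguments()(0)  ' GetType(Integer)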
A: @Mendelt Siebenga: You can only call GetType on the value property if the variable is not set to null; otherwise, you'll get an exception.
What you want to do is use the GetValueOrDefault method and call GetType on that, since you are guaranteed it will not be null. Example:
Dim i As Nullable(Of Integer) = Nothing
Dim t As Type = i.GetValueOrDefault().GetType()