Q: How do I export (and then import) a Subversion repository? I'm just about wrapped up on a project where I was using a commercial SVN provider to store the source code. The web host the customer ultimately picked includes a repository as part of the hosting package, so, now that the project is over, I'd like to relocate the repository to their web host and discontinue the commercial account.
How would I go about doing this?
A: Excerpt from my Blog-Note-to-myself:
You can import a dump file when migrating between machines or Subversion versions. For example, if I had created a dump file from the source repository, I could load it into the new repository as shown below.
Commands for Unix-like systems (from terminal):
svnadmin dump /path/to/your/old/repo > backup.dump
svnadmin load /path/to/your/new/repo < backup.dump
Commands for Microsoft Windows systems (from cmd shell):
svnadmin dump C:\path\to\your\old\repo > backup.dump
svnadmin load C:\path\to\your\new\repo < backup.dump
A: You can also use svnsync. This only requires read-only access on the source repository.
More at the SVN Book.
A: If you want to move the repository and keep history, you'll probably need filesystem access on both hosts. The simplest solution, if your backend is FSFS (the default on recent versions), is to make a filesystem copy of the entire repository folder.
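A minimal sketch of that (paths are illustrative; make sure nothing is writing to the repository while you copy):
cp -a /path/to/old/repo /path/to/new/repo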
If you have a Berkeley DB backend, if you're not sure of what your backend is, or if you're changing SVN version numbers, you're going to want to use svnadmin to dump your old repository and load it into your new repository. Using svnadmin dump will give you a single-file backup that you can copy to the new system. Then you can create the new (empty) repository and use svnadmin load, which will essentially replay all the commits along with their metadata (author, timestamp, etc.).
You can read more about the dump/load process here:
http://svnbook.red-bean.com/en/1.8/svn.reposadmin.maint.html#svn.reposadmin.maint.migrate
Also, if you do svnadmin load, make sure you use the --force-uuid option, otherwise people are going to have problems switching to the new repository. Subversion uses a UUID to identify the repository internally, and it won't let you switch a working copy to a different repository.
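For example, a minimal load preserving the UUID might look like this (paths illustrative):
svnadmin create /path/to/new/repo
svnadmin load --force-uuid /path/to/new/repo < backup.dump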
If you don't have filesystem access, there may be other third party options out there (or you can write something) to help you migrate: essentially you'd have to use the svn log to replay each revision on the new repository, and then fix up the metadata afterwards. You'll need the pre-revprop-change and post-revprop-change hook scripts in place to do this, which sort of assumes filesystem access, so YMMV. Or, if you don't want to keep the history, you can use your working copy to import into the new repository. But hopefully this isn't the case.
A: The tool to do that would be
svnadmin dump
But for this to work, you need filesystem-access to the repository. And once you have that (and provided the repository is in FSFS format), you can just copy the repository to its new location (if it's in BDB format, dump/load is strongly recommended).
If you do not have filesystem access, you would have to ask your repository provider to provide the dump for you (and make them delete their repository, and hope they comply).
A: If you do not have file access to the repository, I prefer rsvndump (remote Subversion repository dump) to make the dump file.
A: You can also use the svnadmin hotcopy command:
svnadmin hotcopy OLD_REPOS_PATH NEW_REPOS_PATH
It takes a full backup of the repository, including all hooks, configuration files, etc.
More at SVN Book
A: Basically, there are plenty of ways to accomplish the task. The topic is covered in depth in SVNBook | Migrating Repository Data Elsewhere, so I suggest reading the book's section.
Here is a brief description of your options:
* It depends on your environment, but there is a great chance that you can simply copy the repository to the new server and it will work. You have to revise the repository hook scripts after copying the repo to ensure that they are working as you expect.
* You can use the svnadmin dump and svnadmin load commands to, ehm, generate a full dump and then load it into another repository on another server. You will need to svnadmin create a new clean repository to load the dump into. Keep in mind that this approach deals with repository history only and does not move hook scripts and repository configuration files! As well, you must have read filesystem access to the original repository to dump it.
* Since Subversion 1.7, the svnrdump tool is available. Generally speaking, it mimics svnadmin dump and svnadmin load functionality, but operates remotely. You are not required to have read/write filesystem access to the original and target repositories, as the tool operates remotely like a Subversion client, e.g. over the HTTPS protocol. So you need read access to the original repository and read/write to the target one (see the sketch after this list).
* Another option is to use the svnadmin hotcopy command. The command is mostly used for backup purposes; it creates a full copy of the repository including configuration and hook scripts. You can then move the hotcopied repository to another server.
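For instance, a remote migration sketch using svnrdump (host and paths are illustrative):
svnrdump dump https://old-host.example.com/svn/repo > repo.dump
svnadmin create /path/to/new/repo
svnadmin load /path/to/new/repo < repo.dump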
A: rsvndump worked great for me migrating a repository from svnrepository.com to an Ubuntu server that I control.
How to install and use rsvndump on Ubuntu:
* Install missing dependencies ("APR" and Subversion libraries):
sudo apt-get install apache2-threaded-dev
sudo apt-get install libsvn-dev
* Install rsvndump:
wget http://prdownloads.sourceforge.net/rsvndump/rsvndump-0.5.5.tar.gz
tar xvfz rsvndump-0.5.5.tar.gz
cd rsvndump-0.5.5
./configure
make
sudo make install
* Dump the remote SVN repository to a local file:
rsvndump http://my.svnrepository.com/svn/old_repo > old_repo_dump
* Create a new repository and load in the local dump file:
sudo svnadmin create /opt/subversion/my_new_repo
sudo svnadmin load --force-uuid /opt/subversion/my_new_repo < old_repo_dump
A: Assuming you have the necessary privileges to run svnadmin, you need to use the dump and load commands.
A: I found an article about how to move svn repositories from one hosting service to another, and how to do local backups:
* Define where you will store your repositories:
mkdir ~/repo
MYREPO=$HOME/repo ## you should use the full path here
* Now create an empty svn repository with svnadmin create $MYREPO
* Create a hook file and make it executable:
echo '#!/bin/sh' > $MYREPO/hooks/pre-revprop-change
chmod +x $MYREPO/hooks/pre-revprop-change
* Now we can start importing the repository with svnsync, which will initialize the destination repository for synchronization from another repository:
svnsync init file://$MYREPO http://your.svn.repo.here/
* And the finishing touch: transfer all pending revisions to the destination from the source with which it was initialized:
svnsync sync file://$MYREPO
There, now you have a local svn repository in the ~/repo directory.
Sources:
* http://kylecordes.com/2007/svnsync-svn-backup
* http://www.workhabit.com/blog/moving-svn-repository-hosted-service
A: You might find some help on migrating SVN repositories in Chapter 5. Repository Administration, Migrating a repository.
This approach requires access to svnadmin.
Q: How does one decrypt a PDF with an owner password, but no user password? Although the PDF specification is available from Adobe, it's not exactly the simplest document to read through. PDF allows documents to be encrypted so that either a user password and/or an owner password is required to do various things with the document (display, print, etc). A common use is to lock a PDF so that end users can read it without entering any password, but a password is required to do anything else.
I'm trying to parse PDFs that are locked in this way (to get the same privileges as you would get opening them in any reader). Using an empty string as the user password doesn't work, but it seems (section 3.5.2 of the spec) that there has to be a user password to create the hash for the admin password.
What I would like is either an explanation of how to do this, or any code that I can read (ideally Python, C, or C++, but anything readable will do) that does this so that I can understand what I'm meant to be doing. Standalone code, rather than reading through (e.g.) the gsview source, would be best.
A: A plugin for GSview for viewing encrypted PDFs is here.
If this works for you, you may be able to look at the source.
A: If I remember correctly, there is a fixed padding string of 32 bytes to apply to any password. All passwords need to be 32 bytes at the start of computing the encryption key, either by truncating or by adding some of those padding bytes.
If no user password was set you simply have to pad with all 32 bytes of the string, i.e. use the 32 padding bytes as the starting point for computing the encryption key.
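As a sketch of that step in Python (the 32-byte pad string below is the one given in the PDF spec; everything else here is illustrative and not a complete decryption routine):
# The fixed 32-byte padding string from the PDF specification
PAD = bytes([
    0x28, 0xBF, 0x4E, 0x5E, 0x4E, 0x75, 0x8A, 0x41,
    0x64, 0x00, 0x4E, 0x56, 0xFF, 0xFA, 0x01, 0x08,
    0x2E, 0x2E, 0x00, 0xB6, 0xD0, 0x68, 0x3E, 0x80,
    0x2F, 0x0C, 0xA9, 0xFE, 0x64, 0x53, 0x69, 0x7A,
])

def pad_password(password: bytes) -> bytes:
    # Truncate or pad the password to exactly 32 bytes; an empty user
    # password therefore becomes the padding string itself, which is the
    # starting point for computing the encryption key.
    return (password + PAD)[:32]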
I have to admit it's been a while since I've done this. I do remember that the encryption part of the PDF spec is an absolute mess, as it got changed significantly in nearly every revision, requiring you to cope with a lot of cases to handle all PDFs.
Good luck.
A: xpdf is probably a good reference implementation for this sort of problem. I have successfully used it to open encrypted PDFs before.
Q: How to recover a deleted branch in TFS? I deleted a branch in TFS and just found out that I need the changes that were on it.
How do I recover the branch or the changes done on it?
A: Specifically, in Visual Studio go to "Tools - Options", then select "Source Control - Visual Studio Team Foundation Server" and check "Show deleted items in the Source Control Explorer".
Having done that, you can then right-click a folder and choose "Undelete".
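If you prefer the command line, something along these lines should also work (the server path is illustrative):
tf undelete $/MyProject/MyDeletedBranch /recursive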
A: As described in the TFS FAQ:
Are Deletes physical or logical? Can accidental deletes be recovered?
Deletes are fully recoverable with the “undelete” operation. You wouldn’t want to do a SQL restore because that would roll back every change to the TFS in the time since the file was deleted.
Q: What's the state of play with "Visual Inheritance"? We have an application that has to be flexible in how it displays its main form to the user: depending on the user, the form should be slightly different, maybe an extra button here or there, or some other nuance. In order to stop writing code to explicitly remove or add controls etc., I turned to visual inheritance to solve the problem, in what I thought was a neat, clean and logical OO style. It turns out that half the time inherited forms have a hard time rendering themselves in VS for no good reason, and I get the feeling that developers, and to some extent Microsoft, have shunned the practice of visual inheritance. Can you confirm this? Am I missing something here?
Regards.
A: I thought they had more or less sorted the desktop designer issues in 2005.
Have you tried the usual culprits?
* No abstract control types
* No constructor arguments in any form
* Initialisation moved to Form_Load as opposed to the ctor
* No controls in the same project as the usercontrol/form that they are put inside
* Close all documents -> Clean -> Rebuild
* Restart VS
I seem to recall that as long as you did all of the above, it worked... mostly.
A: I am studying towards the (admittedly soon-to-be-obsoleted) MCAD, and part of the WinForms element was Visual Inheritance.
I personally have had no major problems with it; however, there are considerations to take into account.
For me, the main problem has always been initialization. You need to remember that the designer cannot/does not instantiate forms in the same way it does at run time (similarly, it cannot do this with web dev, which is why care is needed with custom control rendering).
Also, once a form is changed, a complete re-build of the project is required in order to propagate the changes to the form to the child forms that inherit from it.
I personally have seen no evidence to suggest that it has been "shunned". AFAIK, its still good practice to exercise code re-use where possible. Visual inheritance provides that.
May I suggest creating a new question with the actual problems you are having, with sample code? We can then look at it to see if we can get it working and explain why :)
A: I've seen some problems in VS2005 with this. They were mostly due to problems with construction of the form objects in the designer. There were issues with code that tried to access the database from the form constructors, etc.
You can debug issues like this by starting a second instance of visual studio and loading up the first instance in the debugger. If you set breakpoints in your code you can then debug what happens in the designers in the first instance.
Another problem I can remember was generics in form classes:
public class MyForm<MyObject> : Form
This won't work with the designer.
A: I often stumble upon such problems in Visual Studio. In many cases the MSVS forms designer fails to render the form correctly. Back in the days when I worked with WinForms, I had to do all kinds of weird tricks to enable some complex scenarios. However, I think that using visual inheritance is very beneficial and should not be thrown away regardless of MSVS designer bugs.
A: I think I've found a way to avoid this problem.
Don't hook the Form_Load event in your parent form; this will break the designer.
Also, don't take the default empty constructor away from Visual Studio in the parent form. If you want to have dependency injection, create another constructor.
Like this:
public ProductDetail()
{
    InitializeComponent();
}

public ProductDetail(ISupplierController supplierController) : base()
{
    InitializeComponent();
    this.supplierController = supplierController;
}
You can then still do this from your inherited Form:
public NewProduct(ISupplierController supplierController)
    : base(supplierController)
{
    InitializeComponent();
}
This worked for me so far, and I had some weird designer issues too.
cheers, Daniel
A: Read this: http://cs.rthand.com/blogs/blog_with_righthand/archive/2005/11/10/186.aspx
AFAIK, there are still issues with Visual Inheritance and controls that rely on collections for their design elements, typically grid controls etc. I believe it is still not possible to change e.g. a GridView in an inherited form/usercontrol etc. But other controls like TextBox, Form, UserControl, Panel etc. should work as expected.
I've so far had no problem with VI using 3rd party grid controls myself, but you have to be careful, in particular, removing items from collections MUST be avoided.
Q: VB.NET FormatNumber equivalent in C#? Is there a C# equivalent for the VB.NET FormatNumber function?
I.e.:
JSArrayString += "^" + (String)FormatNumber(inv.RRP * oCountry.ExchangeRate, 2);
A: Yes, the .ToString(string) methods.
For instance,
int number = 32;
string formatted = number.ToString("D4");
Console.WriteLine(formatted);
// Shows 0032
Note that in C# you don't use a number to specify a format, but you use a character or a sequence of characters.
Formatting numbers and dates in C# takes some minutes to learn, but once you understand the principle, you can quickly get anything you want from looking at the reference.
Here are a couple of MSDN articles to get you started:
Standard Numeric Format Strings
Formatting Types
A: In both C# and VB.NET you can use either the .ToString() function or the String.Format() method to format the text.
Using the .ToString() method your example could be written as:
JSArrayString += "^" + (inv.RRP * oCountry.ExchangeRate).ToString("#0.00")
Alternatively, using String.Format() it could be written as:
JSArrayString = String.Format("{0}^{1:#0.00}",JSArrayString,(inv.RRP * oCountry.ExchangeRate))
In both of the above cases I have used custom formatting for the currency, with # representing an optional digit placeholder and 0 representing a required digit (a 0 is shown if no digit exists).
Other formatting characters can be used to help with formatting, such as N2 for 2 decimal places or C to display as currency. In this case you would not want to use the C formatter, as it would have inserted the currency symbol and further separators which were not required.
See "String.Format("{0}", "formatting string")" or "String Format for Int" for more information and examples on how to use String.Format and the different formatting options.
A: You can use string formatters to accomplish the same thing.
double MyNumber = inv.RRP * oCountry.ExchangeRate;
JSArrayString += "^" + MyNumber.ToString("#0.00");
A: While I would recommend using ToString in this case, always keep in mind that you can use ANY VB.NET function or class from C# just by referencing Microsoft.VisualBasic.dll.
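For example, a minimal sketch of calling the VB.NET runtime function directly from C# (after adding a reference to Microsoft.VisualBasic.dll):
using Microsoft.VisualBasic; // requires a reference to Microsoft.VisualBasic.dll

double value = 1234.5678;
// Strings.FormatNumber is the routine VB.NET's FormatNumber maps to
string formatted = Strings.FormatNumber(value, 2);
Console.WriteLine(formatted); // "1,234.57" under an en-US culture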
Q: Track down where packets are being blocked/dropped When I was in China my company's website was blocked for about 24 hours.
I assume it was the "Great Chinese Firewall" but I was wondering if there is any way that I can find out exactly where a packet or TCP/IP connection gets blocked.
I was able to verify that it wasn't being blocked at our end (I used the local hosts file to point to the backup server inside of China) or at our server's end (other people could still connect to both ISPs).
I tried tracert but only port 80 was being redirected. I could ssh into the server without any problems.
The other problem is that most of the routers in China just drop the packets and don't respond to ping etc so you can't find out their IP addresses.
In the future are there any tools that can track down where packets are being blocked?
A: tcptraceroute
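For example, to trace the path that TCP port 80 traffic actually takes, rather than ICMP/UDP as plain traceroute uses (hostname illustrative):
tcptraceroute www.example.com 80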
A: I have lots of problems with that firewall. Having my server in the USA doesn't help. If you need tools to test your site hosted outside of China as if you were in China, you can try this page:
http://www.websitepulse.com/help/tools.php
Good luck
Q: Targeting with VS 2008 after installing SP1 of .NET 3.5 How do I target .NET 3.5 alone after installing SP1 in VS2008? This is because VS 2008 lists only .NET 3.5, .NET 3.0 & .NET 2.0 and does not specifically show .NET 3.5 SP1.
A: I think that you cannot specify SP1, only different versions of the framework. It does make sense, otherwise you could have a lot of problems with an application specifically compiled for a given SP. You can also have problems with the current situation, but I think it's less headache.
A: I think if you reference SP1 assemblies, it should automatically target SP1.
Q: Is Bouncy Castle API Thread Safe? Especially,
org.bouncycastle.crypto.paddings.PaddedBufferedBlockCipher
org.bouncycastle.crypto.paddings.PKCS7Padding
org.bouncycastle.crypto.engines.AESFastEngine
org.bouncycastle.crypto.modes.CBCBlockCipher
I am planning to write a singleton Spring bean for basic-level cryptography support in my app. Since it is a web application, there are greater chances of multiple threads accessing this component at a time. So thread safety is essential here.
Please let me know if you have come across such situations using Bouncy Castle.
A: It really does not matter if the API/Code is thread safe. CBC encryption in itself is not thread safe.
Some terminology -
E(X) = Encrypt message X
D(X) = Decrypt X. (Note that D(E(X)) = X)
IV = Initialization vector. A random sequence to bootstrap the CBC algorithm
CBC = Cipher block chaining.
A really simple CBC implementation can look like:
P1, P2, P3 = Plain text messages
1. Generate an IV, just random bits.
2. Calculate E( P1 xor IV) call this C1
3. Calculate E( P2 xor C1) call this C2
4. Calculate E( P3 xor C2) call this C3.
As you can see, the result of encrypting P1, P2 and P3 (in that order) is different from encrypting P2, P1 and P3 (in that order).
So, in a CBC implementation, order is important. Any algorithm where order is important can not, by definition, be thread safe.
You can make a singleton factory that delivers encryption objects, but you can't trust them to be thread safe.
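If you do use these classes, the usual pattern is therefore to construct a fresh cipher per operation instead of sharing a singleton instance. A sketch using the classes from the question (error handling omitted; treat it as illustrative, not a vetted implementation):
import org.bouncycastle.crypto.engines.AESFastEngine;
import org.bouncycastle.crypto.modes.CBCBlockCipher;
import org.bouncycastle.crypto.paddings.PKCS7Padding;
import org.bouncycastle.crypto.paddings.PaddedBufferedBlockCipher;
import org.bouncycastle.crypto.params.KeyParameter;
import org.bouncycastle.crypto.params.ParametersWithIV;

public class CryptoHelper {
    // Build a fresh cipher per call: the cipher object carries CBC
    // chaining state, so instances must not be shared between threads.
    public static byte[] encrypt(byte[] key, byte[] iv, byte[] plain) throws Exception {
        PaddedBufferedBlockCipher cipher = new PaddedBufferedBlockCipher(
                new CBCBlockCipher(new AESFastEngine()), new PKCS7Padding());
        cipher.init(true, new ParametersWithIV(new KeyParameter(key), iv));
        byte[] out = new byte[cipher.getOutputSize(plain.length)];
        int len = cipher.processBytes(plain, 0, plain.length, out, 0);
        len += cipher.doFinal(out, len);
        // doFinal may produce less than getOutputSize, so trim the result
        byte[] result = new byte[len];
        System.arraycopy(out, 0, result, 0, len);
        return result;
    }
}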
A: The J2ME version is not thread safe.
Q: Git ignore file for Xcode projects Which files should I include in .gitignore when using Git in conjunction with Xcode?
A: Mine is a .bzrignore, but it is the same idea :)
.DS_Store
*.mode1v3
*.pbxuser
*.perspectivev3
*.tm_build_errors
The tm_build_errors entry is for when I use TextMate to build my project. It is not quite as comprehensive as Hagelin's, but I thought it was worth posting for the tm_build_errors line.
A: I was previously using the top-voted answer, but it needs a bit of cleanup, so here it is redone for Xcode 4, with some improvements.
I've researched every file in this list, but several of them do not exist in Apple's official Xcode documentation, so I had to go on Apple mailing lists.
Apple continues to add undocumented files, potentially corrupting our live projects. This IMHO is unacceptable, and I've now started logging bugs against it each time they do so. I know they don't care, but maybe it'll shame one of them into treating developers more fairly.
If you need to customize, here's a gist you can fork: https://gist.github.com/3786883
#########################
# .gitignore file for Xcode4 and Xcode5 Source projects
#
# Apple bugs, waiting for Apple to fix/respond:
#
# 15564624 - what does the xccheckout file in Xcode5 do? Where's the documentation?
#
# Version 2.6
# For latest version, see: http://stackoverflow.com/questions/49478/git-ignore-file-for-xcode-projects
#
# 2015 updates:
# - Fixed typo in "xccheckout" line - thanks to @lyck for pointing it out!
# - Fixed the .idea optional ignore. Thanks to @hashier for pointing this out
# - Finally added "xccheckout" to the ignore. Apple still refuses to answer support requests about this, but in practice it seems you should ignore it.
# - minor tweaks from Jona and Coeur (slightly more precise xc* filtering/names)
# 2014 updates:
# - appended non-standard items DISABLED by default (uncomment if you use those tools)
# - removed the edit that an SO.com moderator made without bothering to ask me
# - researched CocoaPods .lock more carefully, thanks to Gokhan Celiker
# 2013 updates:
# - fixed the broken "save personal Schemes"
# - added line-by-line explanations for EVERYTHING (some were missing)
#
# NB: if you are storing "built" products, this WILL NOT WORK,
# and you should use a different .gitignore (or none at all)
# This file is for SOURCE projects, where there are many extra
# files that we want to exclude
#
#########################
#####
# OS X temporary files that should never be committed
#
# c.f. http://www.westwind.com/reference/os-x/invisibles.html
.DS_Store
# c.f. http://www.westwind.com/reference/os-x/invisibles.html
.Trashes
# c.f. http://www.westwind.com/reference/os-x/invisibles.html
*.swp
#
# *.lock - this is used and abused by many editors for many different things.
# For the main ones I use (e.g. Eclipse), it should be excluded
# from source-control, but YMMV.
# (lock files are usually local-only file-synchronization on the local FS that should NOT go in git)
# c.f. the "OPTIONAL" section at bottom though, for tool-specific variations!
#
# In particular, if you're using CocoaPods, you'll want to comment-out this line:
*.lock
#
# profile - REMOVED temporarily (on double-checking, I can't find it in OS X docs?)
#profile
####
# Xcode temporary files that should never be committed
#
# NB: NIB/XIB files still exist even on Storyboard projects, so we want this...
*~.nib
####
# Xcode build files -
#
# NB: slash on the end, so we only remove the FOLDER, not any files that were badly named "DerivedData"
DerivedData/
# NB: slash on the end, so we only remove the FOLDER, not any files that were badly named "build"
build/
#####
# Xcode private settings (window sizes, bookmarks, breakpoints, custom executables, smart groups)
#
# This is complicated:
#
# SOMETIMES you need to put this file in version control.
# Apple designed it poorly - if you use "custom executables", they are
# saved in this file.
# 99% of projects do NOT use those, so they do NOT want to version control this file.
# ..but if you're in the 1%, comment out the line "*.pbxuser"
# .pbxuser: http://lists.apple.com/archives/xcode-users/2004/Jan/msg00193.html
*.pbxuser
# .mode1v3: http://lists.apple.com/archives/xcode-users/2007/Oct/msg00465.html
*.mode1v3
# .mode2v3: http://lists.apple.com/archives/xcode-users/2007/Oct/msg00465.html
*.mode2v3
# .perspectivev3: http://stackoverflow.com/questions/5223297/xcode-projects-what-is-a-perspectivev3-file
*.perspectivev3
# NB: also, whitelist the default ones, some projects need to use these
!default.pbxuser
!default.mode1v3
!default.mode2v3
!default.perspectivev3
####
# Xcode 4 - semi-personal settings
#
# Apple Shared data that Apple put in the wrong folder
# c.f. http://stackoverflow.com/a/19260712/153422
# FROM ANSWER: Apple says "don't ignore it"
# FROM COMMENTS: Apple is wrong; Apple code is too buggy to trust; there are no known negative side-effects to ignoring Apple's unofficial advice and instead doing the thing that actively fixes bugs in Xcode
# Up to you, but ... current advice: ignore it.
*.xccheckout
#
#
# OPTION 1: ---------------------------------
# throw away ALL personal settings (including custom schemes!
# - unless they are "shared")
# As per build/ and DerivedData/, this ought to have a trailing slash
#
# NB: this is exclusive with OPTION 2 below
xcuserdata/
# OPTION 2: ---------------------------------
# get rid of ALL personal settings, but KEEP SOME OF THEM
# - NB: you must manually uncomment the bits you want to keep
#
# NB: this *requires* git v1.8.2 or above; you may need to upgrade to latest OS X,
# or manually install git over the top of the OS X version
# NB: this is exclusive with OPTION 1 above
#
#xcuserdata/**/*
# (requires option 2 above): Personal Schemes
#
#!xcuserdata/**/xcschemes/*
####
# Xcode 4 workspaces - more detailed
#
# Workspaces are important! They are a core feature of Xcode - don't exclude them :)
#
# Workspace layout is quite spammy. For reference:
#
# /(root)/
# /(project-name).xcodeproj/
# project.pbxproj
# /project.xcworkspace/
# contents.xcworkspacedata
# /xcuserdata/
# /(your name)/xcuserdatad/
# UserInterfaceState.xcuserstate
# /xcshareddata/
# /xcschemes/
# (shared scheme name).xcscheme
# /xcuserdata/
# /(your name)/xcuserdatad/
# (private scheme).xcscheme
# xcschememanagement.plist
#
#
####
# Xcode 4 - Deprecated classes
#
# Allegedly, if you manually "deprecate" your classes, they get moved here.
#
# We're using source-control, so this is a "feature" that we do not want!
*.moved-aside
####
# OPTIONAL: Some well-known tools that people use side-by-side with Xcode / iOS development
#
# NB: I'd rather not include these here, but gitignore's design is weak and doesn't allow
# modular gitignore: you have to put EVERYTHING in one file.
#
# COCOAPODS:
#
# c.f. http://guides.cocoapods.org/using/using-cocoapods.html#what-is-a-podfilelock
# c.f. http://guides.cocoapods.org/using/using-cocoapods.html#should-i-ignore-the-pods-directory-in-source-control
#
#!Podfile.lock
#
# RUBY:
#
# c.f. http://yehudakatz.com/2010/12/16/clarifying-the-roles-of-the-gemspec-and-gemfile/
#
#!Gemfile.lock
#
# IDEA:
#
# c.f. https://www.jetbrains.com/objc/help/managing-projects-under-version-control.html?search=workspace.xml
#
#.idea/workspace.xml
#
# TEXTMATE:
#
# -- UNVERIFIED: c.f. http://stackoverflow.com/a/50283/153422
#
#tm_build_errors
####
# UNKNOWN: recommended by others, but I can't discover what these files are
#
A: Regarding the 'build' directory exclusion -
If you place your build files in a different directory from your source, as I do, you don't have the folder in the tree to worry about.
This also makes life simpler for sharing your code, prevents bloated backups, and helps even when you have dependencies on other Xcode projects (which require the builds to be in the same directory as each other).
You can grab an up-to-date copy from the Github gist https://gist.github.com/708713
My current .gitignore file is
# Mac OS X
*.DS_Store
# Xcode
*.pbxuser
*.mode1v3
*.mode2v3
*.perspectivev3
*.xcuserstate
project.xcworkspace/
xcuserdata/
# Generated files
*.o
*.pyc
#Python modules
MANIFEST
dist/
build/
# Backup files
*~.nib
A: For Xcode 5 I add:
####
# Xcode 5 - VCS metadata
#
*.xccheckout
From Berik's Answer
A: For Xcode 4 I also add:
YourProjectName.xcodeproj/xcuserdata/*
YourProjectName.xcodeproj/project.xcworkspace/xcuserdata/*
A: Best of all,
gitignore.io
Go and choose your language, and then it'll give you the file.
A: I've added:
xcuserstate
xcsettings
and placed my .gitignore file at the root of my project.
After committing and pushing, I then ran:
git rm --cached UserInterfaceState.xcuserstate WorkspaceSettings.xcsettings
buried within the folder below:
<my_project_name>/<my_project_name>.xcodeproj/project.xcworkspace/xcuserdata/<my_user_name>.xcuserdatad/
I then ran git commit and push again
A: I use the following .gitignore file generated in gitignore.io:
### Xcode ###
build/
*.pbxuser
!default.pbxuser
*.mode1v3
!default.mode1v3
*.mode2v3
!default.mode2v3
*.perspectivev3
!default.perspectivev3
xcuserdata
*.xccheckout
*.moved-aside
DerivedData
*.xcuserstate
### Objective-C ###
# Xcode
#
build/
*.pbxuser
!default.pbxuser
*.mode1v3
!default.mode1v3
*.mode2v3
!default.mode2v3
*.perspectivev3
!default.perspectivev3
xcuserdata
*.xccheckout
*.moved-aside
DerivedData
*.hmap
*.ipa
*.xcuserstate
# CocoaPods
#
# We recommend against adding the Pods directory to your .gitignore. However
# you should judge for yourself, the pros and cons are mentioned at:
# http://guides.cocoapods.org/using/using-cocoapods.html#should-i-ignore-the-pods-directory-in-source-control
#
Pods/
A: gitignore.io: Create useful .gitignore files for your project
* Example: Preview | Edit
* Programming Languages: Objective-C, Swift
* Build Automation Tool: SwiftPackageManager, Carthage
* IDEs: Xcode
* Operating Systems: macOS
Steps to use in Terminal (refer to the YouTube video):
* Create a Git global config alias (one time only):
git config --global alias.ignore '!gi() { curl -L -s https://www.gitignore.io/api/$@ ;}; gi'
* Enter the project directory:
cd <the project directory>
* Generate the .gitignore file:
git ignore Objective-C,Swift,SwiftPackageManager,Carthage,Xcode,macOS >.gitignore
* Add and commit the .gitignore file:
git add .gitignore
git commit -m "Add .gitignore file"
A: Based on this guide for Mercurial my .gitignore includes:
.DS_Store
*.swp
*~.nib
build/
*.pbxuser
*.perspective
*.perspectivev3
I've also chosen to include:
*.mode1v3
*.mode2v3
which, according to this Apple mailing list post, are "user-specific project settings".
And for Xcode 4:
xcuserdata
A: Here's the .gitignore that GitHub uses by default for new Xcode repositories:
https://github.com/github/gitignore/blob/master/Objective-C.gitignore
It's likely to be reasonably correct at any given time.
A: The people of GitHub have exhaustive and documented .gitignore files for Xcode projects:
Swift: https://github.com/github/gitignore/blob/master/Swift.gitignore
Objective-C: https://github.com/github/gitignore/blob/master/Objective-C.gitignore
A: You should check out gitignore.io for Objective-C and Swift.
Here is the .gitignore file I'm using:
# Xcode
.DS_Store
*/build/*
*.pbxuser
!default.pbxuser
*.mode1v3
!default.mode1v3
*.mode2v3
!default.mode2v3
*.perspectivev3
!default.perspectivev3
xcuserdata
profile
*.moved-aside
DerivedData
.idea/
*.hmap
*.xccheckout
*.xcworkspace
!default.xcworkspace
#CocoaPods
Pods
A: I'm using both AppCode and Xcode, so .idea/ should be ignored.
Append this to Adam's .gitignore:
####
# AppCode
.idea/
A: Adding a .gitignore file for
Mac OS X + Xcode + Swift
This is how I have added a .gitignore file into my Swift project:
* Select your project in Xcode and right-click → New Group → name it "Git"
* Select the Git folder and right-click → Add new file
* Within the iOS tab → select Other → empty file
* Give the file the name ".gitignore"
* Confirm the file name and type
Here is the resulting structure:
* Open the file and paste in the code below
# file
#########################################################################
# #
# Title - .gitignore file #
# For - Mac OS X, Xcode 7 and Swift Source projects #
# Updated by - Ramdhan Choudhary #
# Updated on - 13 - November - 2015 #
# #
#########################################################################
########### Xcode ###########
# Xcode temporary files that should never be committed
## Build generated
build/
DerivedData
# NB: NIB/XIB files still exist even on Storyboard projects, so we want this
*~.nib
*.swp
## Various settings
*.pbxuser
!default.pbxuser
*.mode1v3
!default.mode1v3
*.mode2v3
!default.mode2v3
*.perspectivev3
!default.perspectivev3
xcuserdata
## Other
*.xccheckout
*.moved-aside
*.xcuserstate
*.xcscmblueprint
*.xcscheme
########### Mac OS X ###########
# Mac OS X temporary files that should never be committed
.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two \r
Icon
# Thumbnails
._*
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
########## Objective-C/Swift specific ##########
*.hmap
*.ipa
# CocoaPods
#
# We recommend against adding the Pods directory to your .gitignore. However
# you should judge for yourself, the pros and cons are mentioned at:
# https://guides.cocoapods.org/using/using-cocoapods.html#should-i-check-the-pods-directory-into-source-control
#
# Pods/
# Carthage
#
# Add this line if you want to avoid checking in source code from Carthage dependencies.
# Carthage/Checkouts
Carthage/Build
# fastlane
#
# It is recommended to not store the screenshots in the Git repository. Instead, use fastlane to re-generate the screenshots whenever they are needed.
fastlane/report.xml
fastlane/screenshots
Well, thanks to Adam. His answer helped me a lot, but still I had to add a few more entries as I wanted a .gitignore file for:
Mac OS X + Xcode + Swift
References: this and this
A: Most of the answers are from the Xcode 4-5 era. I recommend an ignore file in a modern style.
# Xcode Project
**/*.xcodeproj/xcuserdata/
**/*.xcworkspace/xcuserdata/
**/.swiftpm/xcode/xcuserdata/
**/*.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist
**/*.xcworkspace/xcshareddata/*.xccheckout
**/*.xcworkspace/xcshareddata/*.xcscmblueprint
**/*.playground/**/timeline.xctimeline
.idea/
# Build
Scripts/build/
build/
DerivedData/
*.ipa
# Carthage
Carthage/
# CocoaPods
Pods/
# fastlane
fastlane/report.xml
fastlane/Preview.html
fastlane/screenshots
fastlane/test_output
fastlane/sign&cert
# CVS / SVN
*.orig
.svn
# Other
*~
.DS_Store
*.swp
*.save
._*
*.bak
Keep it updated from: https://github.com/BB9z/iOS-Project-Template/blob/master/.gitignore
A: We did find that even if you add the .gitignore and the .gitattributes, the *.pbxproj file can get corrupted. So we have a simple plan.
Every person that codes in the office simply discards the changes made to this file. In the commit we simply mention the files that are added into the source, and then push to the server. Our integration manager then pulls, sees the commit details, and adds the files into the resources.
Once he updates the remote, everyone will always have a working copy. In case something is missing, we inform him to add it and then pull once again.
This has worked out for us without any issues.
A: I recommend using joe to generate a .gitignore file.
For an iOS project run the following command:
$ joe g osx,xcode > .gitignore
It will generate this .gitignore:
.DS_Store
.AppleDouble
.LSOverride
Icon
._*
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
build/
DerivedData
*.pbxuser
!default.pbxuser
*.mode1v3
!default.mode1v3
*.mode2v3
!default.mode2v3
*.perspectivev3
!default.perspectivev3
xcuserdata
*.xccheckout
*.moved-aside
*.xcuserstate
A: If someone needs a standard .gitignore file, here's a simple way: just run this line in cmd/terminal after navigating to your project.
npx gitignore Objective-C
A: This will create an up-to-date .gitignore file (assuming the git ignore alias from the gitignore.io answer above).
For iOS development
git ignore swift,ios >.gitignore
For macOS development
git ignore swift,macos >.gitignore
A:
The structure of a standard .gitignore file for an Xcode project:
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
Icon?
ehthumbs.db
Thumbs.db
build/
*.pbxuser
!default.pbxuser
*.mode1v3
!default.mode1v3
*.mode2v3
!default.mode2v3
*.perspectivev3
!default.perspectivev3
!default.xcworkspace
xcuserdata
profile
*.moved-aside
DerivedData
.idea/
Q: Apache rewrite based on subdomain I'm trying to redirect requests for a wildcard domain to a sub-directory.
ie. something.blah.example.com --> blah.example.com/something
I don't know how to get the subdomain name to use in the rewrite rule.
Final Solution:
RewriteCond %{HTTP_HOST} !^blah\.example\.com
RewriteCond %{HTTP_HOST} ^([^.]+)
RewriteRule ^(.*) /%1/$1 [L]
Or as pointed out by pilif
RewriteCond %{HTTP_HOST} ^([^.]+)\.blah\.example\.com$
A: Try this:
RewriteCond %{HTTP_HOST} (.+)\.blah\.domain\.com
RewriteRule ^(.+)$ /%1/$1 [L]
@pilif (see comment): Okay, that's true. I just copied a .htaccess that I use on one of my projects. Guess it has a slightly different approach :)
A: You should have a look at the URL Rewriting Guide from the Apache documentation.
The following is untested, but it should do the trick:
RewriteCond %{HTTP_HOST} ^([^.]+)\.blah\.domain\.com$
RewriteRule ^/(.*)$ http://blah.domain.com/%1/$1 [L,R]
This only works if the subdomain contains no dots. Otherwise, you'd have to alter the Regexp in RewriteCond to match any character which should still work due to the anchoring, but this certainly feels safer.
A: @Sam
your RewriteCond line is wrong. The expansion of the variable is triggered with %, not $.
RewriteCond %{HTTP_HOST} ^([^\.]+)\.media\.xnet\.tk$
That should do the trick.
Q: Controlling which Network Card TCP/IP message are sent on The system I'm currently working on consists of a controller PC running XP with .Net 2 connected to a set of embedded systems. All these components communicate with each other over an ethernet network. I'm currently using TcpClient.Connect on the XP computer to open a connection to the embedded systems to send TCP/IP messages.
I now have to connect the XP computer to an external network to send processing data to, so there are now two network cards on the XP computer. However, the messages sent to the external network mustn't appear on the network connecting the embedded systems together (don't want to consume the bandwidth) and the messages to the embedded systems mustn't appear on the external network.
So, the assertion I'm making is that messages sent to a defined IP address are sent out on both network cards when using the TcpClient.Connect method.
How do I specify which physical network card messages are sent via, ideally using the .Net networking API. If no such method exists in .Net, then I can always P/Invoke the Win32 API.
Skizz
A: Try using a Socket for your client instead of the TcpClient Class.
Then you can use Socket.Bind to target your local network adapter
using System.Linq;
using System.Net;
using System.Net.Sockets;

int port = 1234;
IPHostEntry entry = Dns.GetHostEntry(Dns.GetHostName());
// find the IP address for your chosen adapter here
IPAddress localAddress = entry.AddressList.FirstOrDefault();
IPEndPoint localEndPoint = new IPEndPoint(localAddress, port);
// use a Socket instead of a TcpClient
Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
// bind the client to the local end point
client.Bind(localEndPoint);
http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.bind.aspx
A: If you have two network cards on the machine, then there shouldn't be a problem. Normal IP behaviour should ensure that traffic for your 'private' network (embedded systems in this case) is separate from your public network, without you having to do anything in your code. All that is required is for the two networks to be on different IP subnets, and for your 'public' NIC to be the default.
Assuming your two NICs are configured as follows:
NIC A (Public): 192.168.1.10 mask 255.255.255.0
NIC B (Private): 192.168.5.10 mask 255.255.255.0
The only configuration you need to verify is that NIC A is your default. When you try to send packets to any address in your private network (192.168.5.0 - 192.168.5.255), your IP stack will look in the routing table and see a directly connected network, and forward traffic via the private NIC. Any traffic to the (directly connected) public network will be sent to NIC A, as will traffic to any address for which you do not have a more specific route in your routing table.
Your routing table (netstat -rn) should look something like this:
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination Netmask Gateway Interface Metric
0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.10 266 <<--
127.0.0.0 255.0.0.0 On-link 127.0.0.1 306
127.0.0.1 255.255.255.255 On-link 127.0.0.1 306
127.255.255.255 255.255.255.255 On-link 127.0.0.1 306
169.254.0.0 255.255.0.0 On-link 192.168.1.10 286
169.254.255.255 255.255.255.255 On-link 192.168.1.10 266
192.168.1.0 255.255.255.0 On-link 192.168.1.10 266
192.168.1.10 255.255.255.255 On-link 192.168.1.10 266
192.168.1.255 255.255.255.255 On-link 192.168.1.10 266
192.168.5.0 255.255.255.0 On-link 192.168.5.10 266
192.168.5.10 255.255.255.255 On-link 192.168.5.10 266
192.168.5.255 255.255.255.255 On-link 192.168.5.10 266
255.255.255.255 255.255.255.255 On-link 192.168.1.10 276
255.255.255.255 255.255.255.255 On-link 192.168.5.10 276
===========================================================================
There will also be some multicast routes (starting with 224) which have been omitted for brevity. The '<<--' indicates the default route, which should be using the public interface.
A: Basically, once the TcpClient.Connect method has been successful, it will have created a mapping between the physical MAC address of the embedded system and the route it should take to that address (i.e. which network card to use).
I don't believe that all messages then sent over the TcpClient connection will be sent out via both network cards.
Do you have any data to suggest otherwise, or are you merely guessing?
A: XP maintains a routing table where it maps ranges of IP addresses to networks and gateways.
You can view the table using "route print"; with "route add" you can add a route to your embedded device.
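For example, on XP (addresses illustrative), to pin traffic for the embedded subnet to the second NIC:
route print
route add 192.168.5.0 mask 255.255.255.0 192.168.5.10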
Q: How do you set your Cocoa application as the default web browser?
I want to create an application that is launched by default when the user clicks on an HTTP or HTTPS link in other applications (Mail, iChat etc.).
A: There are four steps to making an app that can act as the default web browser. The first three steps allow your app to act as a role handler for the relevant URL schemes (HTTP and HTTPS) and the final step makes it the default role handler for those schemes.
1) Add the URL schemes your app can handle to your application's info.plist file
To add support for http:// and https:// you'd need to add the following to your application's info.plist file. This tells the OS that your application is capable of handling HTTP and HTTPS URLs.
<key>CFBundleURLTypes</key>
<array>
<dict>
<key>CFBundleURLName</key>
<string>http URL</string>
<key>CFBundleURLSchemes</key>
<array>
<string>http</string>
</array>
</dict>
<dict>
<key>CFBundleURLName</key>
<string>Secure http URL</string>
<key>CFBundleURLSchemes</key>
<array>
<string>https</string>
</array>
</dict>
</array>
2) Write an URL handler method
This method will be called by the OS when it wants to use your application to open a URL. It doesn't matter which object you add this method to, that'll be explicitly passed to the Event Manager in the next step. The URL handler method should look something like this:
- (void)getUrl:(NSAppleEventDescriptor *)event
withReplyEvent:(NSAppleEventDescriptor *)replyEvent
{
// Get the URL
NSString *urlStr = [[event paramDescriptorForKeyword:keyDirectObject]
stringValue];
//TODO: Your custom URL handling code here
}
3) Register the URL handler method
Next, tell the event manager which object and method to call when it wants to use your app to load an URL. In the code here I'm passing self as the event handler, assuming that we're calling setEventHandler from the same object that defines the getUrl:withReplyEvent: method.
You should add this code somewhere in your application's initialisation code.
NSAppleEventManager *em = [NSAppleEventManager sharedAppleEventManager];
[em
setEventHandler:self
andSelector:@selector(getUrl:withReplyEvent:)
forEventClass:kInternetEventClass
andEventID:kAEGetURL];
Some applications, including early versions of Adobe AIR, use the alternative WWW!/OURL AppleEvent to request that an application opens URLs, so to be compatible with those applications you should also add the following:
[em
setEventHandler:self
andSelector:@selector(getUrl:withReplyEvent:)
forEventClass:'WWW!'
andEventID:'OURL'];
4) Set your app as the default browser
Everything we've done so far has told the OS that your application is a browser; now we need to make it the default browser.
We've got to use the Launch Services API to do this. In this case we're setting our app to be the default role handler for HTTP and HTTPS links:
CFStringRef bundleID = (CFStringRef)[[NSBundle mainBundle] bundleIdentifier];
OSStatus httpResult = LSSetDefaultHandlerForURLScheme(CFSTR("http"), bundleID);
OSStatus httpsResult = LSSetDefaultHandlerForURLScheme(CFSTR("https"), bundleID);
//TODO: Check httpResult and httpsResult for errors
(It's probably best to ask the user's permission before changing their default browser.)
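A sketch of checking the current handler first, so you only prompt when your app is not already the default (same pre-10.15 Launch Services API as above; treat the details as illustrative):
CFStringRef currentHandler = LSCopyDefaultHandlerForURLScheme(CFSTR("http"));
// Bundle identifiers should be compared case-insensitively
BOOL alreadyDefault = currentHandler != NULL &&
    ([(NSString *)currentHandler caseInsensitiveCompare:
        [[NSBundle mainBundle] bundleIdentifier]] == NSOrderedSame);
if (currentHandler) CFRelease(currentHandler);
if (!alreadyDefault) {
    // ask the user, then call LSSetDefaultHandlerForURLScheme as above
}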
Custom URL schemes
It's worth noting that you can also use these same steps to handle your own custom URL schemes. If you're creating a custom URL scheme it's a good idea to base it on your app's bundle identifier to avoid clashes with other apps. So if your bundle ID is com.example.MyApp you should consider using x-com-example-myapp:// URLs.
A: macOS Big Sur and Up
Copy and paste this code into your info.plist
<key>CFBundleURLTypes</key>
<array>
<dict>
<key>CFBundleURLName</key>
<string>Web site URL</string>
<key>CFBundleURLSchemes</key>
<array>
<string>http</string>
<string>https</string>
</array>
</dict>
<dict>
<key>CFBundleURLName</key>
<string>http URL</string>
<key>CFBundleURLSchemes</key>
<array>
<string>http</string>
</array>
</dict>
<dict>
<key>CFBundleURLName</key>
<string>Secure http URL</string>
<key>CFBundleURLSchemes</key>
<array>
<string>https</string>
</array>
</dict>
<dict>
<key>CFBundleTypeName</key>
<string>HTML document</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>public.html</string>
</array>
</dict>
<dict>
<key>CFBundleTypeName</key>
<string>XHTML document</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>public.xhtml</string>
</array>
</dict>
</array>
<key>CFBundleDocumentTypes</key>
<array>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>GIF image</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>com.compuserve.gif</string>
</array>
</dict>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>HTML document</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>public.html</string>
</array>
</dict>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>XHTML document</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>public.xhtml</string>
</array>
</dict>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>JavaScript script</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>com.netscape.javascript-source</string>
</array>
</dict>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>JPEG image</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>public.jpeg</string>
</array>
</dict>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>MHTML document</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>org.ietf.mhtml</string>
</array>
</dict>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>HTML5 Audio (Ogg)</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>org.xiph.ogg-audio</string>
</array>
</dict>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>HTML5 Video (Ogg)</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>org.xiph.ogv</string>
</array>
</dict>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>PNG image</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>public.png</string>
</array>
</dict>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>SVG document</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>public.svg-image</string>
</array>
</dict>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>Plain text document</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>public.text</string>
</array>
</dict>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>HTML5 Video (WebM)</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>org.webmproject.webm</string>
</array>
</dict>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>WebP image</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>org.webmproject.webp</string>
</array>
</dict>
<dict>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>org.chromium.extension</string>
</array>
</dict>
<dict>
<key>CFBundleTypeIconFile</key>
<string>document.icns</string>
<key>CFBundleTypeName</key>
<string>PDF Document</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>com.adobe.pdf</string>
</array>
</dict>
</array>
Your app will be shown in System Preferences and can be selected as the default browser.
Make sure you implement this delegate method to receive the URLs:
func application(_ application: NSApplication, open urls: [URL]) {
    // iterate over urls and open each one in your app
}
A: If you just want to change the default helper app for http(s), you can do so in the Safari preferences. There you'll find a drop-down which will let you select among all the registered handler applications for http. To have your app automatically set itself as the default browser, see the previous instructions.
A: In order to appear as an option on System Preferences > General > Default web browser (at least for macOS 11) you need to add the document types for HTML and XHTML to the Info.plist (after the 4 steps already described on the accepted answer), like this:
<key>CFBundleDocumentTypes</key>
<array>
<dict>
<key>CFBundleTypeName</key>
<string>HTML document</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>public.html</string>
</array>
</dict>
<dict>
<key>CFBundleTypeName</key>
<string>XHTML document</string>
<key>CFBundleTypeRole</key>
<string>Viewer</string>
<key>LSItemContentTypes</key>
<array>
<string>public.xhtml</string>
</array>
</dict>
</array>
Q: Using a wiki as a central development project repository I have played with the idea of using a wiki (MediaWiki) to centralize all project information for a development project. This was done using extensions that pull information from SVN (using SVNKit) and by linking to Bugzilla to extract work assigned to a developer or work remaining for a release.
Examples:
<bugzilla type="summary" user="[email protected]" />
would return a summary
<bugzilla type="status" status="ASSIGNED" product="SCM BEPPI" />
would return
Do you think that this would be useful? If so, what other integrations do you think would be valuable?
A: Of course it's useful; there are already ready-made packages for this kind of project overview (like http://trac.edgewall.org/).
If possible, I'd integrate any existing CI-engine into the wiki, so that you have a complete overview over the current progress and your project's health.
A: I think this would be extremly useful. Depending on the size of a project team members come and go. And a wiki is a good tool to keep the history and the "spirit" of a project available to new team members. I did that in many projects, and though the projects were already finished, all the informations are available.
One more idea: also try to integrate meeting schedules, minutes etc. If your team communicates via IM, try to integrate a log of the conversations.
A: You might be interested in the mediawiki extension I've created @ http://www.mediawiki.org/wiki/Extension:BugzillaReports. I'm getting a lot of great feedback that this is hitting a sweet spot: it allows you to bring bugzilla reports inline into mediawiki documents and create standard aggregated reports.
A: The other classic integration would be your source code repository, e.g. svn, or cvs. trac is an existing product that does exactly this - it combines a wiki, custom bug tracker, and integrates nicely with svn.
A: The other integration I worked on was integrating with MS Project, but the integration was a little messy, requiring upload of .mpp files and then using MPXJ to extract project information from the .mpp file.
The result was OK I suppose
<project file="AOZA_BEPPI_Billing_Project_Plan_v0.2.mpp" type="list" user="Martin" />
Q: How do I *really* justify a horizontal menu in HTML+CSS? You find plenty of tutorials on menu bars in HTML, but for this specific (though IMHO generic) case, I haven't found any decent solution:
# THE MENU ITEMS SHOULD BE JUSTIFIED JUST AS PLAIN TEXT WOULD BE #
# ^ ^ #
*
*There's a varying number of text-only menu items and the page layout is fluid.
*The first menu item should be left-aligned, the last menu item should be right-aligned.
*The remaining items should be spread optimally on the menu bar.
*The number varies, so there's no chance to pre-calculate the optimal widths.
Note that a TABLE won't work here either:
*
*If you center all TDs, the first and the last item aren’t aligned correctly.
*If you left-align the first item and right-align the last one, the spacing will be sub-optimal.
Isn’t it strange that there is no obvious way to implement this in a clean way by using HTML and CSS?
A: The simplest thing to do is to force the line to break by inserting an element at the end of the line that will occupy more than the available space left, and then hiding it. I've accomplished this quite easily with a simple span element like so:
#menu {
text-align: justify;
}
#menu * {
display: inline;
}
#menu li {
display: inline-block;
}
#menu span {
display: inline-block;
position: relative;
width: 100%;
height: 0;
}
<div id="menu">
<ul>
<li><a href="#">Menu item 1</a></li>
<li><a href="#">Menu item 3</a></li>
<li><a href="#">Menu item 2</a></li>
</ul>
<span></span>
</div>
All the junk inside the #menu span selector is (as far as I've found) required to please most browsers. It should force the width of the span element to 100%, which should cause a line break, since the element still flows like inline content due to the display: inline-block rule. inline-block also makes the span accept block-level style rules like width, which causes the element not to fit in line with the menu and thus makes the menu line-break.
You of course need to adjust the width of the span to your use case and design, but I hope you get the general idea and can adapt it.
A: Got a solution. Works in FF, IE6, IE7, Webkit, etc.
Make sure you don't put any whitespace before closing the span.inner. IE6 will break.
You can optionally give .outer a width
.outer {
text-align: justify;
}
.outer span.finish {
display: inline-block;
width: 100%;
}
.outer span.inner {
display: inline-block;
white-space: nowrap;
}
<div class="outer">
<span class="inner">THE MENU ITEMS</span>
<span class="inner">SHOULD BE</span>
<span class="inner">JUSTIFIED</span>
<span class="inner">JUST AS</span>
<span class="inner">PLAIN TEXT</span>
<span class="inner">WOULD BE</span>
<span class="finish"></span>
</div>
A: Modern Approach - Flexboxes!
Now that CSS3 flexboxes have better browser support, some of us can finally start using them. Just add additional vendor prefixes for more browser coverage.
In this instance, you would just set the parent element's display to flex and then change the justify-content property to either space-between or space-around in order to add space between or around the children flexbox items.
Using justify-content: space-between - (example here):
ul {
list-style: none;
padding: 0;
margin: 0;
}
.menu {
display: flex;
justify-content: space-between;
}
<ul class="menu">
<li>Item One</li>
<li>Item Two</li>
<li>Item Three Longer</li>
<li>Item Four</li>
</ul>
Using justify-content: space-around - (example here):
ul {
list-style: none;
padding: 0;
margin: 0;
}
.menu {
display: flex;
justify-content: space-around;
}
<ul class="menu">
<li>Item One</li>
<li>Item Two</li>
<li>Item Three Longer</li>
<li>Item Four</li>
</ul>
A: Works with Opera, Firefox, Chrome and IE
ul {
display: table;
margin: 1em auto 0;
padding: 0;
text-align: center;
width: 90%;
}
li {
display: table-cell;
border: 1px solid black;
padding: 0 5px;
}
A: Yet another solution. I had no option to modify the HTML (e.g. by adding a distinguishing class), so I found a pure CSS way.
Works in Chrome, Firefox and Safari; I don't know about IE.
Test: http://jsfiddle.net/c2crP/1
ul {
margin: 0;
padding: 0;
list-style: none;
width: 200px;
text-align: justify;
list-style-type: none;
}
ul > li {
display: inline;
text-align: justify;
}
/* the declaration below adds a whitespace after every li. This is for one-line markup where no whitespace (or breaks) is present and the browser wouldn't know where to make a break. */
ul > li:after {
content: ' ';
display: inline;
}
/* notice the 'inline-block'! Otherwise this won't work in WebKit, which puts the :after pseudo-element inside of its parent instead of after it, thus also shifting the parent onto the next line! */
ul > li:last-child:after {
display: inline-block;
margin-left: 100%;
content: ' ';
}
<ul>
<li><a href="#">home</a></li>
<li><a href="#">exposities</a></li>
<li><a href="#">werk</a></li>
<li><a href="#">statement</a></li>
<li><a href="#">contact</a></li>
</ul>
A: Make it a <p> with text-align: justify ?
Update: Nevermind. That doesn't work at all as I'd thought.
Update 2: Doesn't work in any browsers other than IE right now, but CSS3 has support for this in the form of text-align-last
A: Ok, this solution doesn't work on IE6/7, because of the lack of support of :before/:after, but:
ul {
text-align: justify;
list-style: none;
list-style-image: none;
margin: 0;
padding: 0;
}
ul:after {
content: "";
margin-left: 100%;
}
li {
display: inline;
}
a {
display: inline-block;
}
<div id="menu">
<ul>
<li><a href="#">Menu item 1</a></li>
<li><a href="#">Menu item 2</a></li>
<li><a href="#">Menu item 3</a></li>
<li><a href="#">Menu item 4</a></li>
<li><a href="#">Menu item 5</a></li>
</ul>
</div>
The reason why I have the a tag as an inline-block is because I don't want the words inside to be justified as well, and I don't want to use non-breaking spaces either.
A: For Gecko-based browsers, I came up with this solution. It doesn't work with WebKit browsers, though (e.g. Chromium, Midori, Epiphany); they still show trailing space after the last item.
I put the menu bar in a justified paragraph. Problem is that the last line of a justified paragraph won't be rendered justified, for obvious reasons. Therefore I add a wide invisible element (e.g. an img) which warrants that the paragraph is at least two lines long.
Now the menu bar is justified by the same algorithm the browser uses for justifying plain text.
Code:
<div style="width:500px; background:#eee;">
<p style="text-align:justify">
<a href="#">THE MENU ITEMS</a>
<a href="#">SHOULD BE</a>
<a href="#">JUSTIFIED</a>
<a href="#">JUST AS</a>
<a href="#">PLAIN TEXT</a>
<a href="#">WOULD BE</a>
<img src="/Content/Img/stackoverflow-logo-250.png" width="400" height="0"/>
</p>
<p>There's an varying number of text-only menu items and the page layout is fluid.</p>
<p>The first menu item should be left-aligned, the last menu item should be right-aligned. The remaining items should be spread optimal on the menu bar.</p>
<p>The number is varying,so there's no chance to pre-calculate the optimal widths.</p>
<p>Note that a TABLE won't work here as well:</p>
<ul>
<li>If you center all TDs, the first and the last item aren't aligned correctly.</li>
<li>If you left-align and right-align the first resp. the last items, the spacing will be sub-optimal.</li>
</ul>
</div>
Remark: Do you notice I cheated? To add the space filler element, I have to make some guess about the width of the menu bar. So this solution doesn't completely play by the rules.
A: Text is only justified if the sentence naturally causes a line break. So all you need to do is naturally force a line break, and hide what's on the second line:
CSS:
ul {
text-align: justify;
width: 400px;
margin: 0;
padding: 0;
height: 1.2em;
/* forces the height of the ul to one line */
overflow: hidden;
/* enforces the single line height */
list-style-type: none;
background-color: yellow;
}
ul li {
display: inline;
}
ul li.break {
margin-left: 100%;
/* use e.g. 1000px if your ul has no width */
}
HTML:
<ul>
<li><a href="/">The</a></li>
<li><a href="/">quick</a></li>
<li><a href="/">brown</a></li>
<li><a href="/">fox</a></li>
<li class="break"> </li>
</ul>
The li.break element must be on the same line as the last menu item and must contain some content (in this case a non breaking space), otherwise in some browsers, if it's not on the same line then you'll see some small extra space on the end of your line, and if it contains no content then it's ignored and the line is not justified.
Tested in IE7, IE8, IE9, Chrome, Firefox 4.
A: If you're willing to go with JavaScript, that is possible (this script is based on MooTools):
<script type="text/javascript">//<![CDATA[
window.addEvent('load', function(){
    var mncontainer = $('main-menu');
    var mncw = mncontainer.getSize().size.x;
    var mnul = mncontainer.getFirst(); //UL
    var mnuw = mnul.getSize().size.x;
    var wdif = mncw - mnuw;
    var list = mnul.getChildren(); //get all list items
    //get the remaining width (which can be positive or negative),
    //divide it by the number of list items, and drop the fractional part
    var liwd = Math.floor(wdif/list.length);
    var selw, mwd = mncw, tliw = 0;
    list.each(function(el){
        var elw = el.getSize().size.x;
        if(elw < mwd){ mwd = elw; selw = el; }
        el.setStyle('width', elw + liwd);
        tliw += el.getSize().size.x;
    });
    //get the remaining width and add it to the item with the smallest width
    var rwidth = mncw - tliw;
    if(rwidth > 0){
        elw = selw.getSize().size.x;
        selw.setStyle('width', elw + rwidth);
    }
});
//]]>
</script>
and the css
<style type="text/css">
#main-menu{
padding-top:41px;
width:100%;
overflow:hidden;
position:relative;
}
ul.menu_tab{
padding-top:1px;
height:38px;
clear:left;
float:left;
list-style:none;
margin:0;
padding:0;
position:relative;
left:50%;
text-align:center;
}
ul.menu_tab li{
display:block;
float:left;
list-style:none;
margin:0;
padding:0;
position:relative;
right:50%;
}
ul.menu_tab li.item7{
margin-right:0;
}
ul.menu_tab li a, ul.menu_tab li a:visited{
display:block;
color:#006A71;
font-weight:700;
text-decoration:none;
padding:0 0 0 10px;
}
ul.menu_tab li a span{
display:block;
padding:12px 10px 8px 0;
}
ul.menu_tab li.active a, ul.menu_tab li a:hover{
background:url("../images/bg-menutab.gif") repeat-x left top;
color:#999999;
}
ul.menu_tab li.active a span,ul.menu_tab li.active a.visited span, ul.menu_tab li a:hover span{
background:url("../images/bg-menutab.gif") repeat-x right top;
color:#999999;
}
</style>
and the last html
<div id="main-menu">
<ul class="menu_tab">
<li class="item1"><a href="#"><span>Home</span></a></li>
<li class="item2"><a href="#"><span>The Project</span></a></li>
<li class="item3"><a href="#"><span>About Grants</span></a></li>
<li class="item4"><a href="#"><span>Partners</span></a></li>
<li class="item5"><a href="#"><span>Resources</span></a></li>
<li class="item6"><a href="#"><span>News</span></a></li>
<li class="item7"><a href="#"><span>Contact</span></a></li>
</ul>
</div>
A: Simpler markup, tested in Opera, FF, Chrome, IE7, IE8:
<div class="nav">
<a href="#" class="nav_item">nav item1</a>
<a href="#" class="nav_item">nav item2</a>
<a href="#" class="nav_item">nav item3</a>
<a href="#" class="nav_item">nav item4</a>
<a href="#" class="nav_item">nav item5</a>
<a href="#" class="nav_item">nav item6</a>
<span class="empty"></span>
</div>
and css:
.nav {
width: 500px;
height: 1em;
line-height: 1em;
text-align: justify;
overflow: hidden;
border: 1px dotted gray;
}
.nav_item {
display: inline-block;
}
.empty {
display: inline-block;
width: 100%;
height: 0;
}
Live example.
A: try this
*{
padding: 0;
margin: 0;
box-sizing: border-box;
}
ul {
list-style: none;
display: flex;
align-items: center;
justify-content: space-evenly;
}
<ul>
<li>List item One</li>
<li>List item Two</li>
<li>List item Three </li>
<li>List item Four</li>
</ul>
A: I know the original question specified HTML + CSS, but it didn't specifically say no javascript ;)
Trying to keep the CSS and markup as clean and as semantically meaningful as possible (using a UL for the menu), I came up with this suggestion. Probably not ideal, but it may be a good starting point:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>Kind-of-justified horizontal menu</title>
<style type="text/css">
ul {
list-style: none;
margin: 0;
padding: 0;
width: 100%;
}
ul li {
display: block;
float: left;
text-align: center;
}
</style>
<script type="text/javascript">
    setMenu = function() {
        var items = document.getElementById("nav").getElementsByTagName("li");
        var newwidth = 100 / items.length;
        for(var i = 0; i < items.length; i++) {
            items[i].style.width = newwidth + "%";
        }
    }
</script>
</head>
<body>
<ul id="nav">
<li><a href="#">first item</a></li>
<li><a href="#">item</a></li>
<li><a href="#">item</a></li>
<li><a href="#">item</a></li>
<li><a href="#">item</a></li>
<li><a href="#">last item</a></li>
</ul>
<script type="text/javascript">
setMenu();
</script>
</body>
</html>
A: This can be achieved perfectly by some careful measurements and the last-child selector.
ul li {
margin-right:20px;
}
ul li:last-child {
margin-right:0;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "88"
} |
Q: How do we control web page caching, across all browsers? Our investigations have shown us that not all browsers respect the HTTP cache directives in a uniform manner.
For security reasons we do not want certain pages in our application to be cached, ever, by the web browser. This must work for at least the following browsers:
*
*Internet Explorer 6+
*Firefox 1.5+
*Safari 3+
*Opera 9+
*Chrome
Our requirement came from a security test. After logging out from our website you could press the back button and view cached pages.
A: DISCLAIMER: I strongly suggest reading @BalusC's answer. After reading the following caching tutorial: http://www.mnot.net/cache_docs/ (I recommend you read it, too), I believe it to be correct. However, for historical reasons (and because I have tested it myself), I will include my original answer below:
I tried the 'accepted' answer for PHP, which did not work for me. Then I did a little research, found a slight variant, tested it, and it worked. Here it is:
header('Cache-Control: no-store, private, no-cache, must-revalidate'); // HTTP/1.1
header('Cache-Control: pre-check=0, post-check=0, max-age=0, max-stale = 0', false); // HTTP/1.1
header('Pragma: public');
header('Expires: Sat, 26 Jul 1997 05:00:00 GMT'); // Date in the past
header('Expires: 0', false);
header('Last-Modified: '.gmdate('D, d M Y H:i:s') . ' GMT');
header ('Pragma: no-cache');
That should work. The problem was that when setting the same part of the header twice, if false is not sent as the second argument to the header function, the header function will simply overwrite the previous header() call. So, when setting the Cache-Control, for example, if one does not want to put all the arguments in one header() function call, he must do something like this:
header('Cache-Control: this');
header('Cache-Control: and, this', false);
See more complete documentation here.
A: There's a bug in IE6
Content with "Content-Encoding: gzip" is always cached even if you use "Cache-Control: no-cache".
http://support.microsoft.com/kb/321722
You can disable gzip compression for IE6 users (check the user agent for "MSIE 6")
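In classic ASP.NET, that user-agent check might look like the sketch below. Treat it as an illustration only - it assumes you are applying gzip yourself via a response filter, which is one common way but not the only one:
// Global.asax.cs - sketch: skip gzip for IE6 so its caching bug can't bite.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpRequest request = HttpContext.Current.Request;
    HttpResponse response = HttpContext.Current.Response;

    string userAgent = request.UserAgent ?? string.Empty;
    string acceptEncoding = request.Headers["Accept-Encoding"] ?? string.Empty;

    // IE6 identifies itself with "MSIE 6" in the User-Agent string.
    bool isIe6 = userAgent.Contains("MSIE 6");

    if (!isIe6 && acceptEncoding.Contains("gzip"))
    {
        // Compress only for browsers other than IE6.
        response.Filter = new System.IO.Compression.GZipStream(
            response.Filter, System.IO.Compression.CompressionMode.Compress);
        response.AppendHeader("Content-Encoding", "gzip");
    }
}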
A: For ASP.NET Core, create a simple middleware class:
public class NoCacheMiddleware
{
    private readonly RequestDelegate m_next;

    public NoCacheMiddleware( RequestDelegate next )
    {
        m_next = next;
    }

    public async Task Invoke( HttpContext httpContext )
    {
        httpContext.Response.OnStarting( ( state ) =>
        {
            // ref: http://stackoverflow.com/questions/49547/making-sure-a-web-page-is-not-cached-across-all-browsers
            httpContext.Response.Headers.Append( "Cache-Control", "no-cache, no-store, must-revalidate" );
            httpContext.Response.Headers.Append( "Pragma", "no-cache" );
            httpContext.Response.Headers.Append( "Expires", "0" );
            return Task.FromResult( 0 );
        }, null );

        await m_next.Invoke( httpContext );
    }
}
then register it with Startup.cs
app.UseMiddleware<NoCacheMiddleware>();
Make sure you add this somewhere after
app.UseStaticFiles();
A: These directives do not mitigate any security risk. They are really intended to force UAs to refresh volatile information, not to keep UAs from retaining information. See this similar question. At the very least, there is no guarantee that routers, proxies, etc. will not ignore the caching directives either.
On a more positive note, policies regarding physical access to computers, software installation, and the like will put you miles ahead of most firms in terms of security. If the consumers of this information are members of the public, the only thing you can really do is help them understand that once the information hits their machine, that machine is their responsibility, not yours.
A: Setting the Expires/Last-Modified HTTP headers to some date in 1995 usually does the trick.
Here's an example:
Expires: Wed, 15 Nov 1995 04:58:08 GMT
Last-Modified: Wed, 15 Nov 1995 04:58:08 GMT
Cache-Control: no-cache, must-revalidate
A: The RFC for HTTP 1.1 says the proper method is to add an HTTP Header for:
Cache-Control: no-cache
Older browsers may ignore this if they are not properly compliant to HTTP 1.1. For those you can try the header:
Pragma: no-cache
This is also supposed to work for HTTP 1.1 browsers.
A: If you're facing download problems with MS Office files in IE6-IE8 over SSL with the Cache-Control: no-cache header (and similar values), you can use the Cache-Control: private, no-store header instead and return the file on a POST request. It works.
A: The PHP documentation for the header function has a rather complete example (contributed by a third party):
header('Pragma: public');
header("Expires: Sat, 26 Jul 1997 05:00:00 GMT"); // Date in the past
header('Last-Modified: '.gmdate('D, d M Y H:i:s') . ' GMT');
header('Cache-Control: no-store, no-cache, must-revalidate'); // HTTP/1.1
header('Cache-Control: pre-check=0, post-check=0, max-age=0', false); // HTTP/1.1
header ("Pragma: no-cache");
header("Expires: 0", false);
A: I found the web.config route useful (I tried to add it to the accepted answer, but the edit doesn't seem to have been accepted, so I'm posting it here):
<configuration>
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="Cache-Control" value="no-cache, no-store, must-revalidate" />
<!-- HTTP 1.1. -->
<add name="Pragma" value="no-cache" />
<!-- HTTP 1.0. -->
<add name="Expires" value="0" />
<!-- Proxies. -->
</customHeaders>
</httpProtocol>
</system.webServer>
And here is the express / node.js way of doing the same:
app.use(function(req, res, next) {
res.setHeader('Cache-Control', 'no-cache, no-store, must-revalidate');
res.setHeader('Pragma', 'no-cache');
res.setHeader('Expires', '0');
next();
});
A: In my case I fixed the problem in Chrome with this:
<form id="form1" runat="server" autocomplete="off">
where I need to clear the content of previous form data when the user clicks the back button, for security reasons.
A: The accepted answer does not appear to work for IIS7+, going by the large number of questions about cache headers not being sent in IIS7:
*
*Something is forcing responses to have cache-control: private in IIS7
*IIS7: Cache Setting Not Working... why?
*IIS7 + ASP.NET MVC Client Caching Headers Not Working
*Set cache-control for aspx pages
*Cache-control: no-store, must-revalidate not sent to client browser in IIS7 + ASP.NET MVC
And so on
The accepted answer is correct in which headers must be set, but not in how they must be set. This way works with IIS7:
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Cache.AppendCacheExtension("no-store, must-revalidate");
Response.AppendHeader("Pragma", "no-cache");
Response.AppendHeader("Expires", "-1");
The first line sets Cache-control to no-cache, and the second line adds the other attributes no-store, must-revalidate
A: I've had the best and most consistent results across all browsers by setting
Pragma: no-cache
A: The headers in the answer provided by BalusC do not prevent Safari 5 (and possibly older versions as well) from displaying content from the browser cache when using the browser's back button. A way to prevent this is to add an empty onunload event handler attribute to the body tag:
<body onunload="">
This hack apparently breaks the back-forward cache in Safari: Is there a cross-browser onload event when clicking the back button?
A: Also, just for good measure, make sure you reset the ExpiresDefault in your .htaccess file if you're using that to enable caching.
ExpiresDefault "access plus 0 seconds"
Afterwards, you can use ExpiresByType to set specific values for the files you want to cache:
ExpiresByType image/x-icon "access plus 3 month"
This may also come in handy if your dynamic files e.g. php, etc. are being cached by the browser, and you can't figure out why. Check ExpiresDefault.
A: I found that all of the answers on this page still had problems. In particular, I noticed that none of them would stop IE8 from using a cached version of the page when you accessed it by hitting the back button.
After much research and testing, I found that the only two headers I really needed were:
Cache-Control: no-store
Vary: *
For an explanation of the Vary header, check out http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.6
On IE6-8, FF1.5-3.5, Chrome 2-3, Safari 4, and Opera 9-10, these headers caused the page to be requested from the server when you click on a link to the page, or put the URL directly in the address bar. That covers about 99% of all browsers in use as of Jan '10.
On IE6, and Opera 9-10, hitting the back button still caused the cached version to be loaded. On all other browsers I tested, they did fetch a fresh version from the server. So far, I haven't found any set of headers that will cause those browsers to not return cached versions of pages when you hit the back button.
Update: After writing this answer, I realized that our web server is identifying itself as an HTTP 1.0 server. The headers I've listed are the correct ones in order for responses from an HTTP 1.0 server to not be cached by browsers. For an HTTP 1.1 server, look at BalusC's answer.
A: Introduction
The correct minimum set of headers that works across all mentioned clients (and proxies):
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
The Cache-Control is per the HTTP 1.1 spec for clients and proxies (and implicitly required by some clients next to Expires). The Pragma is per the HTTP 1.0 spec for prehistoric clients. The Expires is per the HTTP 1.0 and 1.1 specs for clients and proxies. In HTTP 1.1, the Cache-Control takes precedence over Expires, so it's after all for HTTP 1.0 proxies only.
If you don't care about IE6 and its broken caching when serving pages over HTTPS with only no-store, then you could omit Cache-Control: no-cache.
Cache-Control: no-store, must-revalidate
Pragma: no-cache
Expires: 0
If you don't care about IE6 nor HTTP 1.0 clients (HTTP 1.1 was introduced in 1997), then you could omit Pragma.
Cache-Control: no-store, must-revalidate
Expires: 0
If you don't care about HTTP 1.0 proxies either, then you could omit Expires.
Cache-Control: no-store, must-revalidate
On the other hand, if the server auto-includes a valid Date header, then you could theoretically omit Cache-Control too and rely on Expires only.
Date: Wed, 24 Aug 2016 18:32:02 GMT
Expires: 0
But that may fail if e.g. the end-user manipulates the operating system date and the client software is relying on it.
Other Cache-Control parameters such as max-age are irrelevant if the abovementioned Cache-Control parameters are specified. The Last-Modified header as included in most other answers here is only interesting if you actually want to cache the request, so you don't need to specify it at all.
How to set it?
Using PHP:
header("Cache-Control: no-cache, no-store, must-revalidate"); // HTTP 1.1.
header("Pragma: no-cache"); // HTTP 1.0.
header("Expires: 0"); // Proxies.
Using Java Servlet, or Node.js:
response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP 1.1.
response.setHeader("Pragma", "no-cache"); // HTTP 1.0.
response.setHeader("Expires", "0"); // Proxies.
Using ASP.NET-MVC
Response.Cache.SetCacheability(HttpCacheability.NoCache); // HTTP 1.1.
Response.Cache.AppendCacheExtension("no-store, must-revalidate");
Response.AppendHeader("Pragma", "no-cache"); // HTTP 1.0.
Response.AppendHeader("Expires", "0"); // Proxies.
Using ASP.NET Web API:
// `response` is an instance of System.Net.Http.HttpResponseMessage
response.Headers.CacheControl = new CacheControlHeaderValue
{
NoCache = true,
NoStore = true,
MustRevalidate = true
};
response.Headers.Pragma.ParseAdd("no-cache");
// We can't use `response.Content.Headers.Expires` directly
// since it allows only `DateTimeOffset?` values.
response.Content?.Headers.TryAddWithoutValidation("Expires", 0.ToString());
Using ASP.NET:
Response.AppendHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP 1.1.
Response.AppendHeader("Pragma", "no-cache"); // HTTP 1.0.
Response.AppendHeader("Expires", "0"); // Proxies.
Using ASP.NET Core v3
// using Microsoft.Net.Http.Headers
Response.Headers[HeaderNames.CacheControl] = "no-cache, no-store, must-revalidate";
Response.Headers[HeaderNames.Expires] = "0";
Response.Headers[HeaderNames.Pragma] = "no-cache";
Using ASP:
Response.addHeader "Cache-Control", "no-cache, no-store, must-revalidate" ' HTTP 1.1.
Response.addHeader "Pragma", "no-cache" ' HTTP 1.0.
Response.addHeader "Expires", "0" ' Proxies.
Using Ruby on Rails:
headers["Cache-Control"] = "no-cache, no-store, must-revalidate" # HTTP 1.1.
headers["Pragma"] = "no-cache" # HTTP 1.0.
headers["Expires"] = "0" # Proxies.
Using Python/Flask:
response = make_response(render_template(...))
response.headers["Cache-Control"] = "no-cache, no-store, must-revalidate" # HTTP 1.1.
response.headers["Pragma"] = "no-cache" # HTTP 1.0.
response.headers["Expires"] = "0" # Proxies.
Using Python/Django:
response["Cache-Control"] = "no-cache, no-store, must-revalidate" # HTTP 1.1.
response["Pragma"] = "no-cache" # HTTP 1.0.
response["Expires"] = "0" # Proxies.
Using Python/Pyramid:
request.response.headerlist.extend(
(
('Cache-Control', 'no-cache, no-store, must-revalidate'),
('Pragma', 'no-cache'),
('Expires', '0')
)
)
Using Go:
responseWriter.Header().Set("Cache-Control", "no-cache, no-store, must-revalidate") // HTTP 1.1.
responseWriter.Header().Set("Pragma", "no-cache") // HTTP 1.0.
responseWriter.Header().Set("Expires", "0") // Proxies.
Using Clojure (require Ring utils):
(require '[ring.util.response :as r])
(-> response
(r/header "Cache-Control" "no-cache, no-store, must-revalidate")
(r/header "Pragma" "no-cache")
(r/header "Expires" 0))
Using Apache .htaccess file:
<IfModule mod_headers.c>
Header set Cache-Control "no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires 0
</IfModule>
Using HTML:
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="0">
HTML meta tags vs HTTP response headers
Important to know is that when an HTML page is served over an HTTP connection, and a header is present in both the HTTP response headers and the HTML <meta http-equiv> tags, then the one specified in the HTTP response header will get precedence over the HTML meta tag. The HTML meta tag will only be used when the page is viewed from a local disk file system via a file:// URL. See also W3 HTML spec chapter 5.2.2. Take care with this when you don't specify them programmatically because the webserver can namely include some default values.
Generally, you'd better just not specify the HTML meta tags to avoid confusion by starters and rely on hard HTTP response headers. Moreover, specifically those <meta http-equiv> tags are invalid in HTML5. Only the http-equiv values listed in HTML5 specification are allowed.
Verifying the actual HTTP response headers
To verify the one and the other, you can see/debug them in the HTTP traffic monitor of the web browser's developer toolset. You can get there by pressing F12 in Chrome/Firefox23+/IE9+, and then opening the "Network" or "Net" tab panel, and then clicking the HTTP request of interest to uncover all detail about the HTTP request and response.
I want to set those headers on file downloads too
First of all, this question and answer are targeted on "web pages" (HTML pages), not "file downloads" (PDF, zip, Excel, etc). You'd better have them cached and make use of some file version identifier somewhere in the URI path or query string to force a redownload on a changed file. When applying those no-cache headers on file downloads anyway, then beware of the IE7/8 bug when serving a file download over HTTPS instead of HTTP. For detail, see IE cannot download foo.jsf. IE was not able to open this internet site. The requested site is either unavailable or cannot be found.
A: In addition to the headers, consider serving your page via HTTPS. Many browsers will not cache HTTPS content by default.
A: //In .net MVC
[OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
public ActionResult FareListInfo(long id)
{
}
// In .net webform
<%@ OutputCache NoStore="true" Duration="0" VaryByParam="*" %>
A: After a bit of research we came up with the following list of headers that seemed to cover most browsers:
*
*Expires: Sat, 26 Jul 1997 05:00:00 GMT
*Cache-Control: no-cache, private, must-revalidate, max-stale=0, post-check=0, pre-check=0, no-store
*Pragma: no-cache
In ASP.NET we added these using the following snippet:
Response.ClearHeaders();
Response.AppendHeader("Cache-Control", "no-cache"); //HTTP 1.1
Response.AppendHeader("Cache-Control", "private"); // HTTP 1.1
Response.AppendHeader("Cache-Control", "no-store"); // HTTP 1.1
Response.AppendHeader("Cache-Control", "must-revalidate"); // HTTP 1.1
Response.AppendHeader("Cache-Control", "max-stale=0"); // HTTP 1.1
Response.AppendHeader("Cache-Control", "post-check=0"); // HTTP 1.1
Response.AppendHeader("Cache-Control", "pre-check=0"); // HTTP 1.1
Response.AppendHeader("Pragma", "no-cache"); // HTTP 1.0
Response.AppendHeader("Expires", "Sat, 26 Jul 1997 05:00:00 GMT"); // HTTP 1.0
Found from: http://forums.asp.net/t/1013531.aspx
A: (hey, everyone: please don't just mindlessly copy&paste all headers you can find)
First of all, Back button history is not a cache:
The freshness model (Section 4.2) does not necessarily apply to history mechanisms. That is, a history mechanism can display a previous representation even if it has expired.
In the old HTTP spec, the wording was even stronger, explicitly telling browsers to disregard cache directives for back button history.
Back is supposed to go back in time (to the time when the user was logged in). It does not navigate forward to a previously opened URL.
However, in practice, the cache can influence the back button, in very specific circumstances:
*
*Page must be delivered over HTTPS, otherwise, this cache-busting won't be reliable. Plus, if you're not using HTTPS, then your page is vulnerable to login stealing in many other ways.
*You must send Cache-Control: no-store, must-revalidate (some browsers observe no-store and some observe must-revalidate)
You never need any of:
*
*<meta> with cache headers — it doesn't work at all. Totally useless.
*post-check/pre-check — it's an IE-only directive that only applies to cachable resources.
*Sending the same header twice or in dozen parts. Some PHP snippets out there actually replace previous headers, resulting in only the last one being sent.
If you want, you could add:
*
*no-cache or max-age=0, which will make resource (URL) "stale" and require browsers to check with the server if there's a newer version (no-store already implies this even stronger).
*Expires with a date in the past for HTTP/1.0 clients (although real HTTP/1.0-only clients are completely non-existent these days).
Bonus: The new HTTP caching RFC.
A: As @Kornel stated, what you want is not to deactivate the cache, but to deactivate the history buffer. Different browsers have their own subtle ways to disable the history buffer.
In Chrome (v28.0.1500.95 m) we can do this only by Cache-Control: no-store.
In FireFox (v23.0.1) any one of these will work:
*
*Cache-Control: no-store
*Cache-Control: no-cache (https only)
*Pragma: no-cache (https only)
*Vary: * (https only)
In Opera (v12.15) we only can do this by Cache-Control: must-revalidate (https only).
In Safari (v5.1.7, 7534.57.2) any one of these will work:
*
*Cache-Control: no-store
<body onunload=""> in html
*Cache-Control: no-store (https only)
In IE8 (v8.0.6001.18702IC) any one of these will work:
*
*Cache-Control: must-revalidate, max-age=0
*Cache-Control: no-cache
*Cache-Control: no-store
*Cache-Control: must-revalidate
Expires: 0
*Cache-Control: must-revalidate
Expires: Sat, 12 Oct 1991 05:00:00 GMT
*Pragma: no-cache (https only)
*Vary: * (https only)
Combining the above gives us this solution which works for Chrome 28, FireFox 23, IE8, Safari 5.1.7, and Opera 12.15: Cache-Control: no-store, must-revalidate (https only)
Note that https is needed because Opera wouldn't deactivate history buffer for plain http pages. If you really can't get https and you are prepared to ignore Opera, the best you can do is this:
Cache-Control: no-store
<body onunload="">
Below shows the raw logs of my tests:
HTTP:
*
*Cache-Control: private, no-cache, no-store, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: 0
Pragma: no-cache
Vary: *
<body onunload="">
Fail: Opera 12.15
Success: Chrome 28, FireFox 23, IE8, Safari 5.1.7
*Cache-Control: private, no-cache, no-store, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: Sat, 12 Oct 1991 05:00:00 GMT
Pragma: no-cache
Vary: *
<body onunload="">
Fail: Opera 12.15
Success: Chrome 28, FireFox 23, IE8, Safari 5.1.7
*Cache-Control: private, no-cache, no-store, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: 0
Pragma: no-cache
Vary: *
Fail: Safari 5.1.7, Opera 12.15
Success: Chrome 28, FireFox 23, IE8
*Cache-Control: private, no-cache, no-store, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: Sat, 12 Oct 1991 05:00:00 GMT
Pragma: no-cache
Vary: *
Fail: Safari 5.1.7, Opera 12.15
Success: Chrome 28, FireFox 23, IE8
*Cache-Control: private, no-cache, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: 0
Pragma: no-cache
Vary: *
<body onunload="">
Fail: Chrome 28, FireFox 23, Safari 5.1.7, Opera 12.15
Success: IE8
*Cache-Control: private, no-cache, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: Sat, 12 Oct 1991 05:00:00 GMT
Pragma: no-cache
Vary: *
<body onunload="">
Fail: Chrome 28, FireFox 23, Safari 5.1.7, Opera 12.15
Success: IE8
*Cache-Control: private, no-cache, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: 0
Pragma: no-cache
Vary: *
<body onunload="">
Fail: Chrome 28, FireFox 23, Safari 5.1.7, Opera 12.15
Success: IE8
*Cache-Control: private, no-cache, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: Sat, 12 Oct 1991 05:00:00 GMT
Pragma: no-cache
Vary: *
<body onunload="">
Fail: Chrome 28, FireFox 23, Safari 5.1.7, Opera 12.15
Success: IE8
*Cache-Control: no-store
Fail: Safari 5.1.7, Opera 12.15
Success: Chrome 28, FireFox 23, IE8
*Cache-Control: no-store
<body onunload="">
Fail: Opera 12.15
Success: Chrome 28, FireFox 23, IE8, Safari 5.1.7
*Cache-Control: no-cache
Fail: Chrome 28, FireFox 23, Safari 5.1.7, Opera 12.15
Success: IE8
*Vary: *
Fail: Chrome 28, FireFox 23, IE8, Safari 5.1.7, Opera 12.15
Success: none
*Pragma: no-cache
Fail: Chrome 28, FireFox 23, IE8, Safari 5.1.7, Opera 12.15
Success: none
*Cache-Control: private, no-cache, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: Sat, 12 Oct 1991 05:00:00 GMT
Pragma: no-cache
Vary: *
<body onunload="">
Fail: Chrome 28, FireFox 23, Safari 5.1.7, Opera 12.15
Success: IE8
*Cache-Control: private, no-cache, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: 0
Pragma: no-cache
Vary: *
<body onunload="">
Fail: Chrome 28, FireFox 23, Safari 5.1.7, Opera 12.15
Success: IE8
*Cache-Control: must-revalidate, max-age=0
Fail: Chrome 28, FireFox 23, Safari 5.1.7, Opera 12.15
Success: IE8
*Cache-Control: must-revalidate
Expires: 0
Fail: Chrome 28, FireFox 23, Safari 5.1.7, Opera 12.15
Success: IE8
*Cache-Control: must-revalidate
Expires: Sat, 12 Oct 1991 05:00:00 GMT
Fail: Chrome 28, FireFox 23, Safari 5.1.7, Opera 12.15
Success: IE8
*Cache-Control: private, must-revalidate, proxy-revalidate, s-maxage=0
Pragma: no-cache
Vary: *
<body onunload="">
Fail: Chrome 28, FireFox 23, IE8, Safari 5.1.7, Opera 12.15
Success: none
HTTPS:
*
*Cache-Control: private, max-age=0, proxy-revalidate, s-maxage=0
Expires: 0
<body onunload="">
Fail: Chrome 28, FireFox 23, IE8, Safari 5.1.7, Opera 12.15
Success: none
*Cache-Control: private, max-age=0, proxy-revalidate, s-maxage=0
Expires: Sat, 12 Oct 1991 05:00:00 GMT
<body onunload="">
Fail: Chrome 28, FireFox 23, IE8, Safari 5.1.7, Opera 12.15
Success: none
*Vary: *
Fail: Chrome 28, Safari 5.1.7, Opera 12.15
Success: FireFox 23, IE8
*Pragma: no-cache
Fail: Chrome 28, Safari 5.1.7, Opera 12.15
Success: FireFox 23, IE8
*Cache-Control: no-cache
Fail: Chrome 28, Safari 5.1.7, Opera 12.15
Success: FireFox 23, IE8
*Cache-Control: private, no-cache, max-age=0, proxy-revalidate, s-maxage=0
Fail: Chrome 28, Safari 5.1.7, Opera 12.15
Success: FireFox 23, IE8
*Cache-Control: private, no-cache, max-age=0, proxy-revalidate, s-maxage=0
Expires: 0
Pragma: no-cache
Vary: *
Fail: Chrome 28, Safari 5.1.7, Opera 12.15
Success: FireFox 23, IE8
*Cache-Control: private, no-cache, max-age=0, proxy-revalidate, s-maxage=0
Expires: Sat, 12 Oct 1991 05:00:00 GMT
Pragma: no-cache
Vary: *
Fail: Chrome 28, Safari 5.1.7, Opera 12.15
Success: FireFox 23, IE8
*Cache-Control: must-revalidate
Fail: Chrome 28, FireFox 23, IE8, Safari 5.1.7
Success: Opera 12.15
*Cache-Control: private, must-revalidate, proxy-revalidate, s-maxage=0
<body onunload="">
Fail: Chrome 28, FireFox 23, IE8, Safari 5.1.7
Success: Opera 12.15
*Cache-Control: must-revalidate, max-age=0
Fail: Chrome 28, FireFox 23, Safari 5.1.7
Success: IE8, Opera 12.15
*Cache-Control: private, no-cache, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: Sat, 12 Oct 1991 05:00:00 GMT
Pragma: no-cache
Vary: *
<body onunload="">
Fail: Chrome 28, Safari 5.1.7
Success: FireFox 23, IE8, Opera 12.15
*Cache-Control: private, no-cache, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: 0
Pragma: no-cache
Vary: *
<body onunload="">
Fail: Chrome 28, Safari 5.1.7
Success: FireFox 23, IE8, Opera 12.15
*Cache-Control: no-store
Fail: Opera 12.15
Success: Chrome 28, FireFox 23, IE8, Safari 5.1.7
*Cache-Control: private, no-cache, no-store, max-age=0, proxy-revalidate, s-maxage=0
Expires: 0
Pragma: no-cache
Vary: *
<body onunload="">
Fail: Opera 12.15
Success: Chrome 28, FireFox 23, IE8, Safari 5.1.7
*Cache-Control: private, no-cache, no-store, max-age=0, proxy-revalidate, s-maxage=0
Expires: Sat, 12 Oct 1991 05:00:00 GMT
Pragma: no-cache
Vary: *
<body onunload="">
Fail: Opera 12.15
Success: Chrome 28, FireFox 23, IE8, Safari 5.1.7
*Cache-Control: private, no-cache
Expires: Sat, 12 Oct 1991 05:00:00 GMT
Pragma: no-cache
Vary: *
Fail: Chrome 28, Safari 5.1.7, Opera 12.15
Success: FireFox 23, IE8
*Cache-Control: must-revalidate
Expires: 0
Fail: Chrome 28, FireFox 23, Safari 5.1.7,
Success: IE8, Opera 12.15
*Cache-Control: must-revalidate
Expires: Sat, 12 Oct 1991 05:00:00 GMT
Fail: Chrome 28, FireFox 23, Safari 5.1.7,
Success: IE8, Opera 12.15
*Cache-Control: private, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: 0
<body onunload="">
Fail: Chrome 28, FireFox 23, Safari 5.1.7,
Success: IE8, Opera 12.15
*Cache-Control: private, must-revalidate, max-age=0, proxy-revalidate, s-maxage=0
Expires: Sat, 12 Oct 1991 05:00:00 GMT
<body onunload="">
Fail: Chrome 28, FireFox 23, Safari 5.1.7,
Success: IE8, Opera 12.15
*Cache-Control: private, must-revalidate
Expires: Sat, 12 Oct 1991 05:00:00 GMT
Pragma: no-cache
Vary: *
Fail: Chrome 28, Safari 5.1.7
Success: FireFox 23, IE8, Opera 12.15
*Cache-Control: no-store, must-revalidate
Fail: none
Success: Chrome 28, FireFox 23, IE8, Safari 5.1.7, Opera 12.15
A: The use of the Pragma header in the response is an old wives' tale. RFC 2616 only defines it as a request header:
http://www.mnot.net/cache_docs/#PRAGMA
A: To complete BalusC's answer:
If you are using perl you can use CGI to add HTTP headers.
Using Perl:
use CGI;

sub set_new_query() {
    binmode STDOUT, ":utf8";
    die if defined $query;
    $query = CGI->new();
    print $query->header(
        -expires       => 'Sat, 26 Jul 1997 05:00:00 GMT',
        -Pragma        => 'no-cache',
        -Cache_Control => join(', ', qw(
            private
            no-cache
            no-store
            must-revalidate
            max-age=0
            pre-check=0
            post-check=0
        ))
    );
}
Using apache httpd.conf
<FilesMatch "\.(html|htm|js|css|pl)$">
FileETag None
<ifModule mod_headers.c>
Header unset ETag
Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
</ifModule>
Note: When I tried to use the HTML META tags, browsers ignored them and cached the page.
A: I just want to point out that if someone wants to prevent caching of ONLY dynamic content, adding those additional headers should be done programmatically.
I edited the configuration file of my project to append no-cache headers, but that also disabled caching of static content, which isn't usually desirable.
Modifying the response headers in code ensures that images and style files will still be cached.
This is quite obvious, yet still worth mentioning.
And another caution: be careful using the ClearHeaders method of the HttpResponse class. It may give you some bruises if you use it recklessly, like it gave me.
After redirecting in an ActionFilterAttribute event, the consequence of clearing all headers is losing all session data and the data in TempData storage. It's safer to redirect from an action, or not to clear headers when a redirection is taking place.
On second thought, I discourage everyone from using the ClearHeaders method. It's better to remove headers individually. And to set the Cache-Control header properly I'm using this code:
filterContext.HttpContext.Response.Cache.SetCacheability(HttpCacheability.NoCache);
filterContext.HttpContext.Response.Cache.AppendCacheExtension("no-store, must-revalidate");
A: See this link to a Case Study on Caching:
http://securityevaluators.com/knowledge/case_studies/caching/
In summary, according to the article, only Cache-Control: no-store works on Chrome, Firefox and IE. IE accepts other controls, but Chrome and Firefox do not. The link is a good read, complete with the history of caching and documented proof of concept.
A: I had no luck with <head><meta> elements. Adding HTTP cache-related parameters directly (outside of the HTML doc) does indeed work for me.
Sample code in Python using web.py web.header calls follows. I purposefully redacted my personal irrelevant utility code.
import web
import sys
import PERSONAL-UTILITIES
myname = "main.py"
urls = (
'/', 'main_class'
)
main = web.application(urls, globals())
render = web.template.render("templates/", base="layout", cache=False)
class main_class(object):
    def GET(self):
        web.header("Cache-control", "no-cache, no-store, must-revalidate")
        web.header("Pragma", "no-cache")
        web.header("Expires", "0")
        return render.main_form()

    def POST(self):
        msg = "POSTed:"
        form = web.input(function = None)
        web.header("Cache-control", "no-cache, no-store, must-revalidate")
        web.header("Pragma", "no-cache")
        web.header("Expires", "0")
        return render.index_laid_out(greeting = msg + form.function)

if __name__ == "__main__":
    nargs = len(sys.argv)
    # Ensure that there are enough arguments after the python program name
    if nargs != 2:
        LOG-AND-DIE("%s: Command line error, nargs=%s, should be 2", myname, nargs)
    # Make sure that the TCP port number is numeric
    try:
        tcp_port = int(sys.argv[1])
    except Exception as e:
        LOG-AND-DIE("%s: tcp_port = int(%s) failed (not an integer)", myname, sys.argv[1])
    # All is well!
    JUST-LOG("%s: Running on port %d", myname, tcp_port)
    web.httpserver.runsimple(main.wsgifunc(), ("localhost", tcp_port))
    main.run()
A: Not sure if my answer sounds simple and stupid, and perhaps it has been known to you for a long time, but since preventing someone from using the browser back button to view your historical pages is one of your goals, you can use:
window.location.replace("https://www.example.com/page-not-to-be-viewed-in-browser-history-back-button.html");
Of course, this may not be possible to be implemented across the entire site, but at least for some critical pages, you can do that. Hope this helps.
A: You can use a location block to set caching behavior for an individual file instead of the whole app in IIS:
<location path="index.html">
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="Cache-Control" value="no-cache" />
</customHeaders>
</httpProtocol>
</system.webServer>
</location>
A: I have solved it this way.
Two considerations:
1) server-side events are not fired on back click, unlike JavaScript.
2) I have two JavaScript functions to read/write cookies:
function setCookie(name, value, days)
{
    var expires = "";
    if (days)
    {
        var date = new Date();
        date.setTime(date.getTime() + (days * 24 * 60 * 60 * 1000));
        expires = "; expires=" + date.toUTCString();
    }
    document.cookie = name + "=" + (value || "") + expires + "; path=/";
}

function getCookie(name)
{
    var nameEQ = name + "=";
    var ca = document.cookie.split(';');
    for (var i = ca.length - 1; i >= 0; i--)
    {
        var c = ca[i];
        while (c.charAt(0) == ' ')
        {
            c = c.substring(1, c.length);
        }
        if (c.indexOf(nameEQ) == 0)
        {
            return c.substring(nameEQ.length, c.length);
        }
    }
    return null;
}
In my Page_Load I inserted this (this is NOT fired on back click):
protected void Page_Load(object sender, EventArgs e)
{
Page.RegisterClientScriptBlock("", "<script>setCookie('" + Session.SessionID + "', '" + Login + "', '100');</script>");
}
where 'Login' is my id value that is -1 after logout (you could use something else, a boolean for example).
Then in my page I added this (this IS fired on back click):
<script type="text/javascript">
    if (getCookie('<%= Session.SessionID %>') < 0)
    {
        if (history.length > 0)
        {
            history.go(+1);
        }
    }
</script>
Nothing else.
With this solution, back click is enabled on every page and disabled only after logout, on every page in the same browser.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1842"
} |
Q: LINQ and Database Permissions I'm still trying to get my head around LINQ and accessing a SQL Database.
I was always taught that you should only have execute permissions of stored procedures to your data.
You should never have select / insert / update / delete.
(This is because of performance and security)
To get the data out of LINQ you obviously need select permissions. I know you can use stored procs with LINQ, but since I can't do joins what's the point?
Have I missed something???
A: 1) We're programmers, not DBA fascists. If your requirements state that the database must be locked down 100%, Linq is not for you. I'm not a DBA, so I believe that most of the performance/security hype is just that. Linq is for me.
2) You can do joins with linq.
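For example, here is a hedged sketch of a LINQ to SQL join; the Customers/Orders tables, their properties, and the db DataContext are all invented for illustration:
var query = from c in db.Customers
            join o in db.Orders on c.CustomerId equals o.CustomerId
            where o.Total > 100
            select new { c.Name, o.OrderDate, o.Total };

foreach (var row in query)
{
    Console.WriteLine("{0} ordered {1:C} on {2:d}", row.Name, row.Total, row.OrderDate);
}
The join is translated into a single parameterized SQL JOIN rather than one query per row.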
@Philippe: Linq automatically transforms evaluations into query parameters, so it provides some SQL injection protection. However, you still have to closely evaluate your requirements to determine how much security you need and at what levels. Linq makes dealing with the database much easier, but it also makes it easier to put security design on the back burner, which is a bad thing.
A: I'm very much in agreement with Jeff Atwood on the "Stored Procedures vs. Inline SQL/LINQ" issue: Who Needs Stored Procedures, Anyways?.
I'm confused as to why you'd even want to perform a JOIN if you're in the SPROCs-for-everything crowd; shouldn't you wrap that JOIN up into another SPROC?
As Will said, LINQ wasn't designed for the kind of DB use you're talking about; it was designed to give us statically-typed inline SQL. You could, however, still control access through user permissions if you use LINQ to SQL.
A: Well, for security reasons you should not put any user-entered data directly into queries. If you stick to this rule, I don't see the problem with having select permission.
A: Whether all of your database access is "behind" stored procedures depends on the needs of the application and the company. I have implemented systems that use views to get all data and stored procedures for all updates. This allows for centralized security and database logic while still letting front-end developers use SQL queries where appropriate.
Like so many other things in programming - it depends on the needs for your project.
LinqToSql does support stored procedures. Scott Gu has a post on it:
http://weblogs.asp.net/scottgu/archive/2007/08/16/linq-to-sql-part-6-retrieving-data-using-stored-procedures.aspx
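As a rough sketch of what that looks like in code (the NorthwindDataContext and the GetCustomersByRegion procedure are hypothetical - the designer generates a strongly-typed method for each stored procedure you map):
using (var db = new NorthwindDataContext())
{
    // The mapped sproc becomes an ordinary method returning typed results.
    foreach (var customer in db.GetCustomersByRegion("WA"))
    {
        Console.WriteLine(customer.CompanyName);
    }
}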
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: HTML layout for winforms Instead of arranging controls on a WinForms form by specifying pixel locations, I'd like to lay it out similarly to the way you'd lay out a form in HTML. This would make it scale better (for larger fonts etc).
Does anyone know of a layout library that allows you to define the form in XML and lay it out similarly to HTML?
A: Have you checked out the TableLayoutPanel and FlowLayoutPanel in the .NET framework? It might be what you are looking for.
A: Yeah, it's called WPF :)
Seriously, there are some newer panel types in WinForms 2.0 that will let you place controls without setting Location and Size. They are FlowLayoutPanel and TableLayoutPanel.
You should also look into the AutoSize property. It takes care of sizing when the value of the label, say, changes. Also, don't forget about Docking and Anchoring.
Once you master those concepts, writing a little parser that converts from XML to controls shouldn't be that hard if you feel you really need it.
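As a hedged illustration of those ideas (the control names and the form variable are invented), a two-column layout that scales with the font instead of relying on pixel positions could be built like this:
var layout = new TableLayoutPanel
{
    Dock = DockStyle.Fill, // fill the form and resize with it
    ColumnCount = 2,
    AutoSize = true
};
layout.ColumnStyles.Add(new ColumnStyle(SizeType.AutoSize));      // label column
layout.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 100F)); // input column

layout.Controls.Add(new Label { Text = "Name:", AutoSize = true, Anchor = AnchorStyles.Left }, 0, 0);
layout.Controls.Add(new TextBox { Dock = DockStyle.Fill }, 1, 0);
layout.Controls.Add(new Label { Text = "E-mail:", AutoSize = true, Anchor = AnchorStyles.Left }, 0, 1);
layout.Controls.Add(new TextBox { Dock = DockStyle.Fill }, 1, 1);

form.Controls.Add(layout);
No pixel coordinates are set anywhere; the panel computes positions from the column styles and the AutoSize/Dock settings.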
A: Not sure there is anything perfect for this.
MyXAML was kicking about a few years ago; it enabled you to define forms in XML as opposed to embedding them into the binary. Not sure if that project is dead or not.
WinForms does have the FlowLayoutPanel control already.
However, if you want to do this kind of thing properly, I think the only answer is to move to WPF.
A: You may also want to consider using Windows Presentation Foundation (WPF) instead of WinForms - WPF has an XML declarative markup language (XAML) that works well for defining scalable UI.
A: I've already got something like MyXAML - my screens are loaded from XML files already. It suffers the same problem as MyXAML, which is that you still have to position the controls with pixel positions, whereas I want something like HTML with automatic flow, tables and such.
I think TableLayoutPanel might be what I'm looking for.
A: The only one I know of is a 3rd-party control from DevExpress called the LayoutControl.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Where do I start designing a Custom Control that contains child objects? I think this is a fun engineering-level question.
I need to design a control which displays a line chart. What I want to be able to do is use a designer to add multiple Pens which actually describe the data and presentation so that it ends up with Xaml something along these lines:
<Chart>
<Pen Name="SalesData" Color="Green" Data="..."/>
<Pen Name="CostData" Color="Red" Data="..." />
...
</Chart>
My first thought is to extend ItemsControl for the Chart class. Will that get me where I want to go or should I be looking at it from a different direction such as extending Panel?
The major requirement is to be able to use it in a designer without adding any C# code. In order for that to even be feasible, it needs to retain its structure in the tree-view model. In other words, if I were working with this in Expression Blend or Mobiform Aurora, I would be able to select the chart from the logical tree or select any of the individual pens to edit their properties.
A: I would go with Chart as an ItemsControl whose ItemsPanel is a Canvas (for some light use I would go with a Grid as the ItemsPanel), and each Pen would be a custom control derived from the Polyline class. Does that make sense?
A: For those interested in making their own control from a personal development point of view (as stated in the original question), I'd suggest a structure in this format.
<Chart>
<Chart.Pens>
<Pen Name="SalesData" Data="{Binding Name=SalesData}" />
<Pen Name="CostData">
<Pen.Data>
<PenData Y="12" X="Jan" />
<PenData Y="34" X="Feb" />
</Pen.Data>
</Pen>
</Chart.Pens>
</Chart>
For that you will need to expose a Pens collection in the user control (I would not see this as a derived ItemsControl). This derived user control will encapsulate other controls: typically a grid to place a label for the heading, labels for the axis names, possibly a Canvas for the drawing area, and an ItemsControl to display the items in the legend.
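A minimal sketch of exposing such a collection, under the assumption that a plain read-only collection property is enough for the XAML parser and the designer (all class and property names here are my own; ChartPen deliberately avoids colliding with System.Windows.Media.Pen, and the actual drawing code is omitted):
[ContentProperty("Pens")]
public class Chart : UserControl
{
    private readonly ObservableCollection<ChartPen> pens =
        new ObservableCollection<ChartPen>();

    // A read-only collection property is all the <Chart.Pens>
    // property-element syntax needs; subscribe to CollectionChanged
    // to redraw when pens are added or removed.
    public ObservableCollection<ChartPen> Pens
    {
        get { return pens; }
    }
}

public class ChartPen : DependencyObject
{
    public string Name { get; set; }
    public Brush Color { get; set; }
    public PointCollection Data { get; set; }
}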
For those looking for a ready to go solution
For those looking for a ready to go solution then I'd check out CodePlex for VisiFire's WPF chart control. It is WPF and Silverlight compatible and it even has a Silverlight app for you to enter your data & style and have it generate XAML (or HTML) to paste into your application.
A: Another option is to extend Canvas for the chart and extend Shape for the Pens. Then dynamically draw the shape based on the Color/Data properties.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to implement file upload progress bar on web? I would like to display something more meaningful than an animated GIF while users upload files to my web application. What possibilities do I have?
Edit: I am using .Net, but I don't mind if somebody shows me a platform-agnostic version.
A: Here are a couple of versions of what you're looking for for some common JavaScript toolkits.
*
*Mootools - http://digitarald.de/project/fancyupload/
*Extjs - http://extjs.com/learn/Extension:UploadForm
A: ASP.NET File Upload with Real-Time Progress Bar
http://mattberseth.com/blog/2008/07/aspnet_file_upload_with_realti_1.html
A: I've tried various techniques and had most success with SWFUpload.
You create and interact with an SWFUpload object using JavaScript, but it uses a (hidden) Flash file for file selection, uploading and upload progress monitoring. You can specify a wide range of JavaScript event handlers (uploadStarted, uploadProgress, uploadError, etc.) that the Flash will call during the upload process, making it very flexible. It also implements a file queue, so it works well for single or multiple files.
Links:
*
*API documentation
*Demos
*Downloads
A: If you are interested in how all this generally works client-side, this is it:
All the solutions hook up the form via JavaScript and change the form's target to a newly created, invisible IFRAME. Then they are free to use AJAX to request some status about the file from the server.
The IFRAME trick is needed because all the scripts running in the window that is doing the upload will hang until the request is completed, at which time the file is fully uploaded.
A: Some good .NET-specific help for doing progress bars can be found in Dino Esposito's MSDN Magazine articles from last July and August:
Context-Sensitive Feedback with AJAX (July '07)
Canceling Server Tasks with ASP.NET AJAX (August '07)
A: A good example of doing this without using Ajax is given at CodeProject by Nilesh Thakkar.
A: jQuery UI Progressbar - http://jqueryui.com/demos/progressbar/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Re-Running Database Development Scripts In our current database development environment we have automated build processes that check all the SQL code out of SVN, create database scripts, and apply them to the various development/QA databases.
This is all well and good, and is a tremendous improvement over what we did in the past, but we have a problem with rerunning scripts. Obviously this isn't a problem with some scripts, like those altering procedures, because you can run them over and over without adversely affecting the system. Right now, to add metadata and run statements like create/alter table statements, we add code to check and see if the objects exist, and if they do, we don't run them.
Our problem is that we really only get one shot to run the script, because once the script has been run, the objects are in the environment and the system won't run the script again. If something needs to change once it's been deployed, we have a difficult process of running update scripts against the update scripts and hoping that everything falls in the correct order and all of the PKs line up between the environments (the databases are, shall we say, "special").
Short of dropping the database and starting the process from scratch (at the last most current release), does anyone have a more elegant solution to this?
A: I'm not sure how best to approach the problem in your specific environment, but I'd suggest reading up on Rails' migrations feature for some inspiration on how to get started.
http://wiki.rubyonrails.org/rails/pages/UnderstandingMigrations
A: We address this - or at least a similar problem to this - as follows:
*
*The schema has a version number - this is represented by a table which has one row per version which, as well as the version number, carries boring things like a date/time stamp for when that version came into existence.
*By having the schema create/modify DDL wrapped in code that performs the changes for us.
In the context above one would build the schema change code as part of the build process then run it and it would only apply schema changes that haven't already been applied.
In our experience (which is bound not to be representative) in most cases the schema changes are sufficiently small/fast that they can safely be run in a transaction which means that if it fails we get a rollback and the db is "safe" - although one would always recommend taking backups before applying schema updates if practicable.
I evolved this out of nasty painful experience. It's not a perfect system (or an original idea), but as a result of working this way we have a high degree of confidence that if there are two instances of one of our databases with the same version, then the schema for those two databases will be the same in almost all respects, and that we can safely bring any db up to the current schema for that application without ill effects. (That last isn't 100% true, unfortunately - there's always an exception - but it's not too far from the truth!)
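A minimal sketch of that idea (T-SQL flavored; the table name, column, and version number are all illustrative):
create table schema_version (
    version int not null primary key,
    applied_on datetime not null
)

-- each change script checks the version table before applying itself
if not exists (select * from schema_version where version = 42)
begin
    alter table SomeTable add new_column int null
    insert into schema_version (version, applied_on) values (42, getdate())
end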
A: Do you keep your existing data in the database? If not, you may want to look at something similar to what Matt mentioned for .NET called RikMigrations
http://www.rikware.com/RikMigrations.html
I use that on my projects to update my database on the fly, while keeping track of revisions. Also, it makes it very simple to move database schema to different servers, etc.
A: If you want to have re-runnability in your scripts, then you can't have them as definitions... what I mean by this is that you need to focus on change scripts rather than "here is my Table" scripts.
let's say you have a table Customers:
create table Customers (
id int identity(1,1) primary key,
first_name varchar(255) not null,
last_name varchar(255) not null
)
and later you want to add a status column. Don't modify your original table script; that one has already run (and can have the if(!exists) syntax to prevent it from causing errors while running again).
Instead, have a new script, called add_customer_status.sql
in this script you'll have something like:
alter table Customers
add status varchar(50) null
update Customers set status = 'Silver' where status is null
alter table Customers
alter column status varchar(50) not null
Again, you can wrap this with an if(!exists) block to allow re-running, but here we've leveraged the notion that this is a change script, and we adapt the database accordingly. If there is data already in the Customers table then we're still okay, since we add the column, seed it with data, then add the not null constraint.
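For example, a guard like this (T-SQL flavored, using the names above) makes the column addition safe to re-run:
if not exists (select * from information_schema.columns
               where table_name = 'Customers' and column_name = 'status')
begin
    alter table Customers add status varchar(50) null
end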
Both of the migration frameworks mentioned above are good, I've also had excellent experience with MigratorDotNet.
A: Scott named a couple of other SQL tools that address the problem of change management. But I'm still rolling my own.
I would like to second this question, and add my puzzlement that there is still no free, community-based tool for this problem. Obviously, scripts are not a satisfactory way to maintain a database schema; neither are instances. So, why don't we keep metadata in a separate (and while we're at it, platform-neutral) format?
That's what I'm doing now. My master database schema is a version-controlled XML file, created initially from a simple web service. A simple JavaScript program compares instances against it, and a simple XSL transform yields the CREATE or ALTER statements. It has limits, like RikMigrations; for instance it doesn't always sequence inter-dependent objects correctly. (But guess what — neither does Microsoft's SQL Server Database Publication tool.) Really, it's too simple. I simply didn't include objects (roles, users, etc.) that I wasn't using.
So, my view is that this problem is indeed inadequately addressed, and that sooner or later we'll have to get together and tackle the devilish details.
A: We went the 'drop and recreate the schema' route. We had some classes in our JUnit test package which parameterized the scripts to create all the objects in the schema for the developer executing the code. This allowed all the developers to share one test database and everyone could simultaneously create/test/drop their test tables without conflicts.
Did it take a long time to run? Yes. At first we used the setup method for this, which meant the tables were dropped/created for every test, and that took way too long. Then we created a TestSuite which could be run once before all the tests for a class and then cleaned up when all the class tests were complete. This still meant that the db setup ran many times when we ran our 'AllTests' class, which included all the tests in all our packages.
How I solved it was by adding a semaphore to the OracleTestSuite code, so when the first test requested the database to be set up it would do that, but any subsequent call would just increment a counter. As each tearDown() method was called, the counter would decrement until it reached 0 and the OracleTestSuite code would drop everything.
One issue this leaves is whether the tests assume that the database is empty. It can be convenient to let database tests know the order in which they run, so they can take advantage of the state of the database, because it can reduce the duplication of DB setup.
We used the concept of ObjectMothers to solve a similar problem with creating complex domain objects for testing purposes. Mock objects might be a better answer but we hadn't heard about them at the time. After all this time, I'd recommend creating test helper methods that could create standardized datasets for the typical scenarios. Plus that would help document the important edge cases from a data perspective.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: String initialization with and without explicit trailing terminator What is the difference between
char str1[32] = "\0";
and
char str2[32] = "";
A: Since you already declared the sizes, the two declarations are exactly equal. However, if you do not specify the sizes, you can see that the first declaration makes a larger string:
char a[] = "a\0";
char b[] = "a";
printf("%i %i\n", sizeof(a), sizeof(b));
prints
3 2
This is because a ends with two nulls (the explicit one and the implicit one) while b ends only with the implicit one.
A: As others have pointed out, "" implies one terminating '\0' character, so "\0" actually initializes the array with two null characters.
Some other answerers have implied that the two are "the same", but that isn't quite right. There may be no practical difference -- as long as the only way the array is used is to reference it as a C string beginning with the first character. But note that they do indeed result in two different memory initializations; in particular, they differ in whether str[1] is definitely zero, or is uninitialized (and could be anything, depending on compiler, OS, and other random factors). There are some uses of the array (perhaps not useful, but still) that would have different behaviors.
A: Well, assuming the two cases are as follows (to avoid compiler errors):
char str1[32] = "\0";
char str2[32] = "";
As people have stated, str1 is initialized with two null characters:
char str1[32] = {'\0','\0'};
char str2[32] = {'\0'};
However, according to both the C and C++ standard, if part of an array is initialized, then remaining elements of the array are default initialized. For a character array, the remaining characters are all zero initialized (i.e. null characters), so the arrays are really initialized as:
char str1[32] = {'\0','\0','\0','\0','\0','\0','\0','\0',
'\0','\0','\0','\0','\0','\0','\0','\0',
'\0','\0','\0','\0','\0','\0','\0','\0',
'\0','\0','\0','\0','\0','\0','\0','\0'};
char str2[32] = {'\0','\0','\0','\0','\0','\0','\0','\0',
'\0','\0','\0','\0','\0','\0','\0','\0',
'\0','\0','\0','\0','\0','\0','\0','\0',
'\0','\0','\0','\0','\0','\0','\0','\0'};
So, in the end, there really is no difference between the two.
A: Unless I'm mistaken, the first will initialize 2 chars to 0 (the '\0' and the terminator that's always there) and leave the rest untouched, and the last will initialize only 1 char (the terminator).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Binding custom functions to DOM events in Prototype? jQuery has a great language construct that looks like this:
$(document).ready(function() {
$("a").click(function() {
alert("Hello world!");
});
});
As you might guess, once the document has loaded, this binds a custom function to the onClick event of all a tags.
The question is, how can I achieve this same kind of behavior in Prototype?
A: Prototype 1.6 provides the dom:loaded event on document:
document.observe("dom:loaded", function() {
$$('a').each(function(elem) {
elem.observe("click", function() { alert("Hello World"); });
});
});
I also use the each iterator on the array returned by $$().
A: $(document).observe('dom:loaded', function() {
$$('a').invoke('observe', 'click', function() {
alert('Hello world!');
});
});
A: Event.observe(window, 'load', function() {
Event.observe(element, 'click', function() {
alert("Hello World!");
});
});
Of course you need to "select" the elements first in Prototype.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is there a barebones Windows version control system that's suitable for only one guy? I'm trying to find a source control for my own personal use that's as simple as possible. The main feature I need is being able to read/pull a past version of my code. I am the only developer. I've looked at a lot of different version control systems, but they all seem way more complicated than I need. I need one that's simple, runs under Windows, and doesn't expose itself to the network.
Specifically, the version control system should not require exposing an HTTP interface, it should interact with the local filesystem only. It just needs to be a version control system geared for one guy and one guy only. Graphical UI is a plus.
Does anyone know of software would satisfy what I'm looking for?
Thanks!
-Mike
A: Sourcegear Vault is free for a single user and you can run both the client and the server on your own machine.
A: Subversion with TortoiseSVN.
Like all version control systems, it will sound reasonably complex when you start off, but it's really very simple once you get into it, works well for a single developer, and doesn't require any network access if you don't want it to.
Plus, it's free.
A: For what it's worth, you can use Subversion & TortoiseSVN without a server, using file:/// URLs to connect to your repository. I've done this to create repositories on USB thumb drives that I can move from machine to machine.
Here's a nice write-up: http://www.fredshack.com/docs/tortoisesvn.html
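For example, creating and using a purely local repository looks something like this (paths are illustrative):
svnadmin create C:\repos\myproject
svn import C:\work\myproject file:///C:/repos/myproject/trunk -m "Initial import"
svn checkout file:///C:/repos/myproject/trunk C:\work\myproject-wc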
A: Subversion is great -- you can run the server yourself or use something like assembla.com to host your code (although that exposes it to the network).
There are numerous gui applications like tortoise svn that would allow you to interact w/ the source control repo
A: I use the free (2 user?) licence of Perforce. Powerful, fast, and well documented.
A: I'm a very satisfied user of msysgit for Windows. It contains a recent copy of git as well as a GUI, a shell, and a history browser in a single install package.
No need for a server component, and if you do decide to host it somewhere, your repository is signed and cannot be modified by the hoster without you seeing it. Finally, moving the repo to a server is an easy "push" operation which keeps all of your history.
A: You really can't get much easier than VisualSVN for version control on Windows.
A: I like to use Google Code, even for my one man projects, as it provides a Subversion repository already set up. Also, the server is offsite, which protects against hard drive failures and other disasters.
A: You might find Mercurial to be pretty nice for that purpose. You won't have to set up a server and creating the repository is as simple as doing "hg init" in the directory where your work is.
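To illustrate just how little setup that is (the directory name is made up):
cd C:\work\myproject
hg init
hg add
hg commit -m "Initial commit"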
A: All the previous suggestions are pretty simple, and I know CVS is a bit out of vogue these days, but I like to use its local mode for a repository that doesn't even need a server to install or set up. The repository can be anywhere on your hard drive. I have mine on a memory stick to have access to it anywhere, even without an internet connection.
The key commands are:
cvs -d:local:/full/path/repository init
to create the repository
mkdir /full/path/repository/project
to create the module, and
cvs -d:local:/full/path/repository co project
to check out a local version.
TortoiseCVS gives you your Graphical UI
A: Bazaar. See Bazaar in five minutes for a great start.
Whenever you save a file, run the $ bzr commit -m "Added first line of text" command, and it's all taken care.
If you edit over FTP, map the FTP folder as a drive or folder, and run bzr update after the commit.
A: +1 for Subversion, for those not familiar with it I would recommend the SVN Book.
A: VisualSVN Server is a complete installer for Subversion Server on Windows.
VisualSVN is a Visual Studio plugin for Subversion integration.
A: You could go with Mercurial.
*
*It's very easy to start working with and there's TortoiseHg which integrates nicely with Windows shell.
*You don't need a server for it as it's a distributed version control system - you can hold a whole repository copy on a flash drive and push/pull changes from it.
*If you wish, you can put hg in a web server mode that makes the repository easily accessible over http.
*As opposed to SVN and CVS, it doesn't spread its metadata directories all over the repository. There's just one .hg directory in repository root.
I use it daily and love it!
A: From what I understand, and at the risk of sounding like a fanboy, you might want to consider a DVCS (distributed version control system) like git or mercurial. They essentially take away the central repository part, so it should be ideal to use when you're a solo developer.
Another advantage is that when you decide to add people to your one-man team, you don't have to set up a central repository. All they have to do is clone your repository and they're good to go!
If you're windows based and are used to a shell plugin like TortoiseSVN I'd pick mercurial. Their windows integration is just a bit better than git's, using TortoiseHg. The git counterpart (cheetah) is on hold at the moment, due to the developer getting sick and tired of all the demands people were making ;-)
If DVCS is too exotic for this situation you could always rely on SVN. I've heard good stories about the already mentioned VisualSVN solution. Install, make some repositories and go. Install TortoiseSVN for shell integration, or perhaps Subclipse or ankhSVN for eclipse and visual studio, respectively.
Note: I have not actually tried git or mercurial in a real life project, just some test setups. I now have a simple project WITH version control (using mercurial in my case), without having to have access to a central repository.
A: I use Subversion and TortoiseSVN — both are free. Your repository can be on the local machine. You don't have to work over a network.
However, for disaster recovery or even simple machine fault, it's probably a good idea to store your repository on a different computer and also back it up.
You might want to consider using a third party service to host your repositories off-site over the internet. I use CVSDude and am satisfied.
A: I am also a lone developer, and I use Subversion and TortoiseSVN.
Setup of Subversion is quick and painless; it can be done in less than half an hour including setting up the repository.
There is no requirement by Subversion to run on a server, I actually run it on my local machine and keep my repositories on a separate drive. Connecting to the repository uses svn:// instead of http://. I'm not sure why you require that it does not expose itself to the network, but this would be a matter of security via obscurity. I'm sure networking experts could suggest better methods for locking it down, should that be necessary.
Once the repository has been created, commits and updates from the repository are as simple as right-clicking on a folder in Windows Explorer.
A: Any distributed revision control system is best for lone developers, like git or Mercurial. Best thing is you can incorporate more developers to your project seamlessly, as opposed to having to give them access to your main centralized SVN or CVS repository.
A: SVN and TortoiseSVN work for me. Definitely ensure you have offsite backup.
You might want to check out the wiki article Comparison of revision control software. A (slightly hard-to-read) comparison tool might help. You might enjoy If Version Control Systems Were Airlines.
A: I came here looking for the same thing, and I saw someone suggest Google Code. I tried it out, and it was brain dead easy to set up. Exactly what I was looking for. Works like a charm with TortoiseSVN (my favorite).
I came here for a solution, Google Code was all set up in about 2 minutes. You can choose SVN, git, or mercurial for your version control.
A: You should check out CVSNT as a server and use any of the clients you would like (standalone or integrated with your IDE). There are plenty of them.
A: I have heard of a hosted Subversion vendor Versionshelf (http://www.versionshelf.com) on a podcast I listen to.
This site also has a list: http://snook.ca/archives/servers/hosted_subversion/
A: Use VisualSVN to set up your server and then use Tortoise to access your repository. Both are free to use, and we have been using them successfully for quite some time now.
A: @gorgapor: Doesn't the Google Code TOS specify an open source license? It's not a generally applicable solution in that case.
A: I haven't seen anyone mention Perforce. Perforce allows you to use their software for up to 2 users for free. You can run the server and clients on the same machine, which will give you the environment that you want.
A: This is much the same question as Source control system for single developer
The bottom line is: yes there is. More than one.
My opinion is that SVN will do just fine. it does for me in similar cases, as described here: Single serving source control
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: How to limit result set size for arbitrary query in Ingres? In Oracle, the number of rows returned in an arbitrary query can be limited by filtering on the "virtual" rownum column. Consider the following example, which will return, at most, 10 rows.
SELECT * FROM all_tables WHERE rownum <= 10
Is there a simple, generic way to do something similar in Ingres?
A: Blatantly changing my answer. "Limit 10" works for MySQL and others; Ingres uses
Select First 10 * from myTable
Ref
A: select * from myTable limit 10 does not work.
Have discovered one possible solution:
TIDs are "tuple identifiers" or row addresses. The TID contains the
page number and the index of the offset to the row relative to the
page boundary. TIDs are presently implemented as 4-byte integers.
The TID uniquely identifies each row in a table. Every row has a
TID. The high-order 23 bits of the TID are the page number of the page
in which the row occurs. The TID can be addressed in SQL by the name
`tid.'
So you can limit the number of rows coming back using something like:
select * from SomeTable where tid < 2048
The method is somewhat inexact in the number of rows it returns. It's fine for my requirement though because I just want to limit rows coming back from a very large result set to speed up testing.
A: Hey Craig. I'm sorry, I made a Ninja Edit.
No, Limit 10 does not work, I was mistaken in thinking it was standard SQL supported by everyone. Ingres uses (according to doc) "First" to solve the issue.
A: Hey Ninja editor from Stockholm! No worries, I have confirmed that "first X" works well and is a much nicer solution than the one I came up with. Thank you!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: 64 bit tools like BoundsChecker & Purify For many years I have used two great tools, BoundsChecker & Purify, but the developers of these applications have let me down: they no longer put effort into maintaining them or developing them. We have corporate accounts with both companies, and they both tell me that they have no intention of producing versions to support 64-bit applications.
Can anyone recommend either open source or commercial alternatives that support 64 bit native C++/MFC applications?
A: The Viva64 tool (http://www.viva64.com/viva64-tool/) provides detection of errors typical of 64-bit Windows applications. Viva64 is a lint-like static analyzer of C/C++ code. Viva64 integrates into the Visual Studio 2005/2008 environment and provides a user-friendly interface to test your software projects.
A: Parasoft has a tool called Insure++ (link: http://www.parasoft.com/jsp/products/quick_facts.jsp?product=Insure) which says it'll do that.
I've used Insure++ on 32-bit and 64-bit apps on Linux and it worked okay. It sometimes got confused when it was trying to parse template/stl code and would fall over.
That url says it works on 32- and 64-bit windows, good luck!
A: BoundsChecker 9.01 now supports VC2008 and x64 bit, at last.
A: Insure++ only works if you instrument your code. I once tried it; it took about 5 minutes to compile about 1000 lines of code. Since the project that I needed to compile was huge, I quickly determined that Insure++ was not going to work.
Not to mention that the reporting/output from Insure++ is pretty archaic. Also, the runtime performance penalty was atrocious.
Note about BoundsChecker from NuMega/Compuware/other_new_company: don't buy it. It only profiles 32-bit apps. It does NOT do 64-bit apps. It can be installed on a 64-bit OS, though. I stopped using it years ago on our app. I do use it on CppUnit tests though... sometimes.
In general I'm completely disgusted with all the native memory leak tools out there. They all don't work, or just lock up your application on shutdown.
A: FYI: BoundsChecker 10.0 runs on Windows XP through Windows 7, on both 32 and 64 bit versions. It supports WOW64 applications, and it also supports Visual Studio 2010. In fact, we released VS2010 support within 30 days of Microsoft's release.
We are catching up with our backlog. We were very late getting the VS2005 and VS2008 support out (with BC 9.0, Fall 2008), but there were a variety of reasons why this happened. The miracle was that we got it out at all.
BoundsChecker 10.5, when it comes out, should have some more goodies. Stay tuned.
Disclosure: I work for MicroFocus.
A: Intel(R) Parallel Inspector (http://software.intel.com/en-us/intel-parallel-inspector/) is a threading and memory-checking plugin tool to Microsoft* Visual Studio; it supports 32-bit and 64-bit C/C++ on Windows. It's a commercial application with a 30-day free evaluation.
Disclosure: I work for Intel.
A: I've used bounds checking and other dynamic analysis tools, and while the architectures are different it's the code that you're checking - in theory you could run bounds checking on any backend and the result would be the same - the code either steps outside its bounds or it does not.
The only complications are addressing more than 4GB of memory space, dealing with pieces of code you can't cross-compile to a 32-bit architecture (64 bit object files for which you have no source, etc), and general 64 bit migration issues (platform specific code such as checking for 0xFFFFFFFF instead of -1)
What other problems are you running into doing bounds checking on your program? Are you unable to compile a 32 bit version?
It's not your ideal solution, certainly, and one should always check the code they're going to run, but in this case you might not have a choice, unless you want to do your own bounds checking (which is a good idea in any case...).
-Adam
A: It is my understanding that BC 9.0 will support WOW64.
A: Application verifier, for x64 and x86, detects heap corruption
http://www.microsoft.com/download/en/details.aspx?id=20028
A: From IBM PurifyPlus support for 64-bit versions of Microsoft Windows:
Technote (FAQ)
Question
Is IBM Rational PurifyPlus supported on 64-bit versions of Microsoft Windows?
Cause
64-bit versions of Microsoft Windows are getting popular.
Answer
Beginning with version 7.0.1 iFix 003, PurifyPlus supports testing 64 bit applications on Windows.
More information about iFix 003 can be found in the following technote IBM Rational PurifyPlus for Windows v7.0.1.0-003
You install this version of Purify and you get a "Purify (for 64-bit applications)" entry in your start menu.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Team System notification of unassociated checkins How can I be notified when someone checks a file into Team System and doesn't associate it with a work item?
A: Use the very cool Team Foundation Server event subscription tool. You can find the tool here: http://www.codeplex.com/tfseventsubscription. Once installed, set up a subscription with the following parameters:
*
*XPath: PolicyOverrideComment <> ''
*Event: CheckinEvent
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Nant and maintain directory structure How do you use the nant <copy> command and maintain the directory structure? This is what I am doing, but it is copying all the files to a single directory.
<copy todir="..\out">
<fileset>
<includes name="..\src\PrecompiledWeb\**\*" />
</fileset>
</copy>
A: Try:
<fileset baseDir="../src/PrecompiledWeb"><includes name="**/*" />
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Problems with mouseout event I'm using JavaScript to hide an image and show some text that's hidden under it. But when the text is shown, if you scroll over it, it fires the mouseout event on the container, which then hides the text and shows the image again, and it just goes into a weird loop.
The HTML looks like this:
<div onmouseover="jsHoverIn('1')"
onmouseout="jsHoverOut('1')">
<div id="image1" />
<div id="text1" style="display: none;">
<p>some content</p>
<p>some more content</p>
</div>
</div>
And the JavaScript (it uses script.aculo.us):
function jsHoverIn(id) {
if(!visible[id]) {
new Effect.Fade ("image" + id, {queue: { position: 'end', scope: id } });
new Effect.Appear ("text" + id, {queue: { position: 'end', scope: id } });
visible[id] = true;
}
}
function jsHoverOut (id) {
var scope = Effect.Queues.get(id);
scope.each(function(effect) { effect.cancel(); });
new Effect.Fade ("text" + id, {queue: { position: 'end', scope: id } });
new Effect.Appear ("image" + id, {queue: { position: 'end', scope: id } });
visible[id] = false;
}
This seems really simple, but I just can't wrap my head around it.
A: I'd give the container div:
position: relative;
and add a third div in the container (should be the last child of the container) with:
position: absolute;
top: 0;
bottom: 0;
left: 0;
right: 0;
and catch the mouseover and mouseout events on this div instead.
Because it has no child elements, you shouldn't get spurious mouseover and mouseout events propagating to it.
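Wiring that up with Prototype might look something like this (the overlay's id is made up for illustration):
$('overlay1').observe('mouseover', function() { jsHoverIn('1'); });
$('overlay1').observe('mouseout', function() { jsHoverOut('1'); });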
Edit:
What I believe happens is that when the cursor moves from a parent element onto a child element, a mouseout event occurs on the parent element, and a mouseover event occurs on the child element. However, if the mouseover handler on the child element does not catch the event and stop it propagating, the parent element will also receive the mouseover event.
A: It sounds like what you really want is mouseenter/mouseleave (IE proprietary events, but easy to emulate):
// Observe mouseEnterLeave on mouseover/mouseout
var mouseEnterLeave = function(e) {
var rel = e.relatedTarget, cur = e.currentTarget;
if (rel && rel.nodeType == 3) {
rel = rel.parentNode;
}
if(
// Outside window
rel == undefined ||
// Firefox/other XUL app chrome
(rel.tagName && rel.tagName.match(/^xul\:/i)) ||
// Some external element
(rel && rel != cur && rel.descendantOf && !rel.descendantOf(cur))
) {
e.currentTarget.fire('mouse:' + this, e);
return;
}
};
$(yourDiv).observe('mouseover', mouseEnterLeave.bind('enter'));
$(yourDiv).observe('mouseout', mouseEnterLeave.bind('leave'));
// Use mouse:enter and mouse:leave for your events
$(yourDiv).observe(!!Prototype.Browser.IE ? 'mouseenter' : 'mouse:enter', yourObserver);
$(yourDiv).observe(!!Prototype.Browser.IE ? 'mouseleave' : 'mouse:leave', yourObserver);
Alternatively, patch prototype.js and use mouseenter and mouseleave with confidence. Note that I've expanded the check for leaving the window or entering XUL chrome; this seemed to fix some edge cases in Firefox for me.
A: Shouldn't the onmouseover event be on the image div and the onmouseout event be on the text div?
A: I'm not sure if this would fit with the rest of your styling, but perhaps if you changed the css on the text div so it was the same size as the image, or fixed the size of the outer div, then when the mouseover event fired, the size of the outer div wouldn't change so much as to cause the mouseout event.
Does this make sense?
A: This may not be the best solution, but you could set a global boolean variable, accessible to both methods, that specifies whether the last action was HoverIn or HoverOut. You could then use this variable to determine whether the code should run.
if (bWasHoverIn){
...
}
A: Try using
onmouseenter instead of onmouseover and
onmouseleave instead of onmouseout.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Are there any good automated test suites for Perl? Can someone suggest some good automated test suite frameworks for Perl?
A: Check out CPAN Testers, which have a lot of tools for automated testing. Most of that should be on CPAN so you'll be able to modify it to meet your needs. It's also very easy to write your own tester using TAP::Harness.
What exactly do you need to do and how are you trying to fit it into your process?
A: As long as you are using tests that produce TAP (Test Anything Protocol) output you might find this to be useful: http://sourceforge.net/projects/smolder
A: Have you seen smolder?
"Smoke Test Aggregator used by developers and testers to upload (automated or manually) and view smoke/regression tests using the Test Anything Protocol. Details and trends are graphed and notifications provided via email or Atom feeds."
A: It really depends on what you're trying to do, but here's some background for much of this.
First, you would generally write your test programs with Test::More or Test::Simple as the core testing program:
use Test::More tests => 2;
is 3, 3, 'basic equality should work';
ok !0, '... and zero should be false';
Internally, Test::Builder is called to output those test results as TAP (Test Anything Protocol). Test::Harness (a thin wrapper around TAP::Harness) reads and interprets the TAP, telling you if your tests passed or failed. The "prove" tool mentioned above is bundled with Test::Harness, so let's say you save the above in the t/ directory (the standard Perl testing directory) as "numbers.t"; then you can run it with this command:
prove --verbose t/numbers.t
Or to run all tests in that directory (recursively, assuming you want to descend into subdirectories):
prove --verbose -r t/
(--verbose, of course, is optional).
As a side note, don't use TestUnit. Many people recommend it, but it was abandoned a long time ago and doesn't integrate with modern testing tools.
A: If I understand you correctly you are looking for TAP::Harness
A: If you're using ExtUtils::MakeMaker or Module::Build, then you can run all your tests automatically by entering the command "make test" or "Build test", which will execute any *.t files in your project's t/ subfolder.
If you're not using either of these, then you can use TAP::Harness to automate execution of multiple test scripts.
To actually write the tests, use Test::More or any of the modules that others have suggested here.
A:
we have to run all the test files
manually for testing
You certainly want to be using prove (runs your test) and/or Module::Build (builds your code and then runs your tests using the same test harness code which prove uses internally.)
A: You said:
"What I am looking for is a more of automated framework which can do incremental testing/build checks etc"
Still not entirely sure what you're after. As others have mentioned you want to look at things that are based on Test::Harness/TAP. The vast majority of the Perl testing community uses that framework - so you'll get much more support (and useful existing code) by using that.
Can you talk a little more about what you mean by "incremental testing/build checks"?
I'm guessing that you want to divide up your tests into groups so that you're only running certain sets of tests in certain circumstances?
There are a couple of ways to do this. The simplest would be to just use the file system - split up your test directories so you have things like:
core/
database.t
infrastructure.t
style/
percritic.t
ui/
something.t
something-else.t
And so on... you can then use the command line "prove" tool to run them all, or only certain directories, etc.
prove has a lot of useful options that let you choose which tests are run and in which order (e.g. things like most-recently-failed order). This - all by itself - will probably get you towards what you need.
(BTW it's important to get a recent version of Test::Simple/prove/etc. from CPAN. Recent versions have much, much more functionality).
If you're of an OO mindset, or have previous experience of xUnit frameworks, then you might want to take a look at Test::Class, which is a Perl xUnit framework that's built on top of the TAP/Test::Harness layer. I think it's quite a lot better than PerlUnit - but I would say that since I wrote it :-)
Check out delicious for some more info on Test::Class http://delicious.com/tag/Test::Class
If this isn't what you're after - could you go into a bit more detail on what functionality you want?
Cheers,
Adrian
A: Personally, I like Test::Most, its basically Test::More with some added cool features.
A: The test suite framework of choice is Test::Harness, which takes care of controlling a test run, collecting the results, etc.
Various modules exist to provide certain kinds of tests, the most common of which can be found in Test::Simple and Test::More (both are included in the Test-Simple distribution). The entire Test namespace on the CPAN is dedicated to specialized unit-testing modules, the majority of which are designed to be run under Test::Harness.
By convention, tests are stored in the t/ directory of a project, and each test file uses the file extension .t ; tests are commonly run via
prove t/*.t
Module distributions typically include a make target named 'test' that runs the test suite before installation. By default, the CPAN installation process requires that tests pass after the build before a module will be installed.
A: I'd go for Test::More, or in general, anything that outputs TAP
A: As of now, we use Test::More, but the current issue is that we have to run all the test files manually for testing. What I am looking for is more of an automated framework which can do incremental testing/build checks, etc.
A wrapper around Test::More for that would be ideal, but anything better and more functional would be fine too.
I am going through PerlUnit to see if that helps.
A: Are you aware of the 'prove' utility (from App::Prove)? You can tell it to run all the tests recursively in a given directory, with or without verbosity, etc.
A: For automated testing in perl take a look at Test::Harness, which contains the prove tool.
The prove tool can be executed with the following command:
prove -r -Ilib t
This will recursivly test all *.t files in the 't/' directory, while adding lib to the include path.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Software evaluation licensing My company is looking to start distributing some software we developed and would like to be able to let people try the software out before buying. We'd also like to make sure it can't be copied and distributed to our customers' customers.
One model we've seen is tying a license to a MAC address so the software will only work on one machine.
What I'm wondering is, what's a good way to generate a license key with different information embedded in it such as license expiration date, MAC address, and different software restrictions?
A: I'd suggest you take the pieces of information you want in the key, and hash it with md5, and then just take the first X characters (where X is a key length you think is manageable).
Cryptographically, it's far from perfect, but this is the sort of area where you want to put in the minimum amount of effort which will stop a casual attacker - anything more quickly becomes a black hole.
Oh, I should also point out: you will want to provide the expiration date (and any other information you might want to read out yourself) in plain text (or slightly obfuscated) as part of the key as well if you go down this path - the md5 is just to stop the end user from changing the expiration date to extend the license.
The easiest thing would be a key file like this...
# License key for XYZZY
expiry-date=2009-01-01
other-info=blah
key=[md5 hash of MAC address, expiry date, other-info]
A: We've used the following algorithm at my company for years without a single incident.
*
*Decide the fields you want in the code. Bit-pack as much as possible. For example, dates could be "number of days since 2007," and then you can get away with 16-bits.
*Add an extra "checksum" field. (You'll see why in a second.) The value of this field is a checksum of the packed bytes from the other fields. We use "first 32 bits from MD5."
*Encrypt everything using TEA. For the key, use something that identifies the customer (e.g. company name + personal email address), that way if someone wants to post a key on the interweb they have to include their own contact info in plain text.
*Convert hex to a string in some sensible way. You can do straight hex digits but some people like to pick a different set of 16 characters to make it less obvious. Also include dashes or something regularly so it's easier to read it over the phone.
To decrypt, convert hex to string and decrypt with TEA. But then there's this extra step: Compute your own checksum of the fields (ignoring the checksum field) and compare to the given checksum. This is the step that ensures no one tampered with the key.
The reason is that TEA mixes the bits completely, so if even one bit is changed, all other bits are equally likely to change during TEA decryption, therefore the checksum will not pass.
Is this hackable? Of course! Almost everything is, but this is tight enough and simple to implement.
If tying to contact information is not sufficient, then include a field for "Node ID" and lock it to MAC address or somesuch as you suggest.
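As a sketch of the encryption step, the standard TEA block routine is small enough to show in full; everything around it (field packing, the checksum, deriving the 128-bit key from customer info) is left out here:
#include <stdint.h>

/* Encrypt one 64-bit block (v[0], v[1]) in place with a 128-bit key k. */
void tea_encrypt(uint32_t v[2], const uint32_t k[4])
{
    uint32_t v0 = v[0], v1 = v[1], sum = 0;
    const uint32_t delta = 0x9E3779B9;
    int i;

    for (i = 0; i < 32; i++) {
        sum += delta;
        v0 += ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        v1 += ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
    }
    v[0] = v0;
    v[1] = v1;
}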
A: Don't use MAC addresses. On some hardware we've tested - in particular some IBM Thinkpads - the MAC address can change on a restart. We didn't bother investigating why this was, but we learned quite early during our research not to rely on it.
Obligatory disclaimer & plug: the company I co-founded produces the OffByZero Cobalt licensing solution. So it probably won't surprise you to hear that I recommend outsourcing your licensing, & focusing on your core competencies.
Seriously, this stuff is quite tricky to get right, & the consequences of getting it wrong could be quite bad. If you're low-volume high-price a few pirated copies could seriously dent your revenue, & if you're high-volume low-price then there's incentive for warez d00dz to crack your software for fun & reputation.
One thing to bear in mind is that there is no such thing as truly crack-proof licensing; once someone has your byte-code on their hardware, you have given away the ability to completely control what they do with it.
What a good licensing system does is raise the bar sufficiently high that purchasing your software is a better option - especially with the rise in malware-infected pirated software. We recommend you take a number of measures towards securing your application:
*
*get a good third-party licensing system
*pepper your code with scope-contained checks (e.g. no one global variable like fIsLicensed, don't check the status of a feature near the code that implements the feature)
*employ serious obfuscation in the case of .NET or Java code
A: The company I worked for actually used a usb dongle. This was handy because:
*
*Our software was also installed on that USB Stick
*The program would only run if it found the (unique) hardware key (any standard USB key has that, so you don't have to buy something special, any stick will do)
*it was not restricted to a computer, but could be installed on another system if desired
I know most people don't like dongles, but in this case it was quite handy as it was actually used for a special purpose media player that we also delivered, the USB keys could thus be used as a demo on any pc, but also, and without any modifications, be used in the real application (ie the real players), once the client was satisfied
A: I've used both FLEXlm from Macrovision (formerly Globetrotter) and the newer RLM from Reprise Software (as I understand, written by FlexLM's original authors). Both can key off either the MAC address or a physical dongle, can be either node-locked (tied to one machine only) or "floating" (any authorized machine on the network can get a license doled out by a central license server, up to a maximum number of simultaneously checked-out copies determined by how much they've paid for). There are a variety of flexible ways to set it up, including expiration dates, individual sub-licensed features, etc. Integration into an application is not very difficult. These are just the two I've used, I'm sure there are others that do the job just as well.
These programs are easily cracked, meaning that there are known exploits that let people either bypass the security of your application that uses them, either by cutting their own licenses to spoof the license server, or by merely patching your binary to bypass the license check (essentially replacing the subroutine call to their library with code that just says "return 'true'". It's more complicated than that, but that's what it mostly boils down to. You'll see cracked versions of your product posted to various Warez sites. It can be very frustrating and demoralizing, all the more so because they're often interested in cracking for cracking sake, and don't even have any use for your product or knowledge of what to do with it. (This is obvious if you have a sufficiently specialized program.)
Because of this, some people will say you should write your own, maybe even change the encryption scheme frequently. But I disagree. It's true that rolling your own means that known exploits against FLEXlm or RLM won't instantly work for your application. However, unless you are a total expert on this kind of security (which clearly you aren't or you wouldn't be asking the question), it's highly likely that in your inexperience you will end up writing a much less secure and more crackable scheme than the market leaders (weak as they may be).
The other reason not to roll your own is simply that it's an endless cat and mouse game. It's better for your customers and your sales to put minimal effort into license security and spend that time debugging or adding features. You need to come to grips with the licensing scheme as merely "keeping honest people honest", but not preventing determined cracking. Accept that the crackers wouldn't have paid for the software anyway.
Not everybody can take this kind of zen attitude. Some people can't sleep at night knowing that somebody somewhere is getting something for nothing. But try to learn to deal with it. You can't stop the pirates, but you can balance your time/effort/expense trying to stop all piracy versus making your product better for users. Remember, sometimes the most pirated applications are also the most popular and profitable. Good luck and sleep well.
A: We keep it simple: store all the license data in an XML file (easy to read and manage), create a hash of the whole XML, and then encrypt it with a utility (also our own and simple).
This is also far from perfect, but it can hold for some time.
A: Almost every commercial license system has been cracked. We have used many over the years and all eventually got cracked. The general rule is to write your own, change it every release, and once you're happy, try to crack it yourself.
Nothing is really secure. Ultimately, look at the big players like Microsoft: they go with the model that honest people will pay and others will copy. Don't put too much effort into it.
If your application is worth paying money for, people will pay.
A: I've used a number of different products that do the license generation and have created my own solution but it comes down to what will give you the most flexibility now and down the road.
Topics that you should focus on for generating your own license keys are...
Hex formatting, elliptic curve cryptography, and any of the encryption algorithms such as AES/Rijndael, DES, Blowfish, etc. These are great for creating license keys.
Of course it isn't enough to have a key you also need to associate it to a product and program the application to lock down based on a key system you've created.
I have messed around with creating my own solution but in the end when it came down to making money with the software I had to cave and get a commercial solution that would save me time in generating keys and managing my product line...
My favorite so far has been License Vault from SpearmanTech but I've also tried FlexNet (costly), XHEO (way too much programming required), and SeriousBit Ellipter.
I chose the License Vault product in the end because I would get it for much cheaper than the others and it simply had more to offer me as we do most of our work in .NET 3.5.
A: It is difficult to provide a good answer without knowing anything about your product and customers. For enterprise software sold to technical people you can use a fairly complex licensing system and they'll figure it out. For consumer software sold to the barely computer-literate, you need a much simpler system.
In general, I've adopted the practice of making a very simple system that keeps the honest people honest. Anyone who really wants to steal your software will find a way around any DRM system.
In the past I've used Armadillo (now Software Passport) for C++ projects. I'm currently using XHEO for C# projects.
A: If your product requires the use of the internet, then you can generate a unique id for the machine and use that to check with a license web service.
If it does not, I think going with a commercial product is the way to go. Yes, they can be hacked, but for the person who is absolutely determined to hack it, it is unlikely they ever would have paid.
We have used: http://www.aspack.com/asprotect.aspx
We also use a function call in their sdk product that gives us a unique id for a machine.
Good company although clearly not native English speakers since their first product was called "AsPack".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Where can I find thorough DCOM documentation? I work on an application that uses DCOM to communicate between what are essentially several peers; in the course of normal use, instances on separate machines serve a variety of objects to one another. Historically, for this to work we have used some magic incantations, chief among which is that on every machine the user must log into an account of the same name (note that these are local accounts; there is no domain available). Obviously, this is an aspect of our user experience that could be improved.
I would like to better understand how DCOM authentication works, but I am having difficulty assembling the whole story from the MSDN documentation for CoInitializeSecurity(), CoSetProxyBlanket(), and the like. Are there any thorough explanations available of how, exactly, DCOM operations are accepted or denied? Books, journals, web, any format is fine.
A: Programming Windows Security by Keith Brown includes a thorough discussion of DCOM security. I can highly recommend this book.
A: You could also try to round up a copy of Inside Distributed COM by Guy and Henry Eddon (Microsoft Press) - It is out of print but amazon shows a number of used copies for sale:
http://www.amazon.com/Inside-Distributed-Com-Mps-Eddon/dp/157231849X/ref=sr_1_5?ie=UTF8&s=books&qid=1231968553&sr=8-5
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Sources of inspiration for navigation breadcrumbs I'm looking for sources of inspiration and/or design patterns for navigation 'breadcrumbs'. So far I have found the breadcrumb collection on Pattern Tap. Does anyone know of any other sources?
A: The article 'Taming lists' from A List Apart has some good advice on CSS styling for breadcrumbs (look down for the heading 'Breadcrumb Trails').
A: The Yahoo pattern library has a useful wee bit about breadcrumbs too.
A: http://www.greepit.com/2009/02/06/breadcrumb-inspiration-for-designers/
A: I found a few good/bad examples of breadcrumbs here.
Also, there's a blog post about breadcrumb designs.
A: There's a great page on breadcrumbs at the Diemen Repository of Interaction Design Patterns
A: Smashing Magazine also has a decent roundup:
http://www.smashingmagazine.com/2009/03/17/breadcrumbs-in-web-design-examples-and-best-practices-2/
A: If using Microsoft ASP.NET, there's a built in control: SiteMapPath.
A: Web & Patterns breadcrumbs category
"Inspirational and creative Breadcrumbs for web design"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: GetLocalTime() API time resolution I need to find out the time taken by a function in my application. The application is an MS Visual Studio 2005 solution, all C code.
I used the Windows API GetLocalTime(SYSTEMTIME *) to get the current system time before and after the function call whose time I want to measure.
But this has the shortcoming that its lowest resolution is only 1 msec. Nothing below that. So I cannot get any time granularity in microseconds.
I know that time(), which gives the time elapsed since the epoch, also has too coarse a resolution (no microseconds).
1.) Is there any other Windows API which gives time in microseconds which I can use to measure the time consumed by my function?
-AD
A: You can try to use clock() which will provide the number of "ticks" between two points. A "tick" is the smallest unit of time a processor can measure.
As a side note, you can't use clock() to determine the actual time - only the number of ticks between two points in your program.
A: One caution on multiprocessor systems:
from http://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx
On a multiprocessor computer, it should not matter which processor is called. However, you can get different results on different processors due to bugs in the basic input/output system (BIOS) or the hardware abstraction layer (HAL). To specify processor affinity for a thread, use the SetThreadAffinityMask function.
Al Weiner
A: There are some other possibilities.
QueryPerformanceCounter and QueryPerformanceFrequency
QueryPerformanceCounter will return a "performance counter" which is actually a CPU-managed 64-bit counter that increments from 0 starting with the computer power-on. The frequency of this counter is returned by the QueryPerformanceFrequency. To get the time reference in seconds, divide performance counter by performance frequency. In Delphi:
var
  perfFreq: int64;

function QueryPerfCounterAsUS: int64;
begin
  if QueryPerformanceCounter(Result) and
     QueryPerformanceFrequency(perfFreq)
  then
    Result := Round(Result / perfFreq * 1000000)
  else
    Result := 0;
end;
On multiprocessor platforms, QueryPerformanceCounter should return consistent results regardless of the CPU the thread is currently running on. There are occasional problems, though, usually caused by bugs in hardware chips or BIOSes. Usually, patches are provided by motherboard manufacturers. Two examples from the MSDN:
*
*Programs that use the QueryPerformanceCounter function may perform poorly in Windows Server 2003 and in Windows XP
*Performance counter value may unexpectedly leap forward
Another problem with QueryPerformanceCounter is that it is quite slow.
RDTSC instruction
If you can limit your code to one CPU (SetThreadAffinityMask), you can use the RDTSC assembler instruction to query the performance counter directly from the processor.
function CPUGetTick: int64;
asm
dw 310Fh // rdtsc
end;
The RDTSC result is incremented at the same frequency as QueryPerformanceCounter. Divide it by QueryPerformanceFrequency to get the time in seconds.
QueryPerformanceCounter is much slower than RDTSC because it must take into account multiple CPUs and CPUs with variable frequency. From Raymond Chen's blog:
(QueryPerformanceCounter) counts elapsed time. It has to, since its value is governed by the QueryPerformanceFrequency function, which returns a number specifying the number of units per second, and the frequency is spec'd as not changing while the system is running.
For CPUs that can run at variable speed, this means that the HAL cannot use an instruction like RDTSC, since that does not correlate with elapsed time.
timeGetTime
timeGetTime belongs to the Win32 multimedia functions. It returns time in milliseconds with 1 ms resolution, at least on modern hardware. It doesn't hurt if you run timeBeginPeriod(1) before you start measuring time and timeEndPeriod(1) when you're done.
GetLocalTime and GetSystemTime
Before Vista, both GetLocalTime and GetSystemTime return current time with millisecond precision, but they are not accurate to a millisecond. Their accuracy is typically in the range of 10 to 55 milliseconds. (Precision is not the same as accuracy)
On Vista, GetLocalTime and GetSystemTime both work with 1 ms resolution.
A: On Windows you can use the 'high performance counter API'. Check out QueryPerformanceCounter and QueryPerformanceFrequency for the details.
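Since the question is about plain C, here is a minimal sketch of timing a function with that API (FunctionUnderTest is a placeholder for the function being measured):
#include <windows.h>
#include <stdio.h>

void FunctionUnderTest(void);  /* placeholder for the code being timed */

void MeasureIt(void)
{
    LARGE_INTEGER freq, start, stop;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    FunctionUnderTest();
    QueryPerformanceCounter(&stop);

    /* convert counter ticks to microseconds */
    printf("elapsed: %.1f us\n",
           (stop.QuadPart - start.QuadPart) * 1000000.0 / freq.QuadPart);
}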
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Anyone know of Objective-J syntax highlighting in vi? I have been looking at the new Objective-J / Cappuccino javascript framework from 280North. They provide plug-ins for SubEthaEdit and TextMate to handle syntax highlighting, but I primarily use vi. Does anyone know of a way to get Objective-J syntax highlighting in vi, or a good way to convert whatever format the other two editors use?
A: The Objective-J Tools package (http://cappuccino.org/download) and the source on github now include a vim highlight module.
A: Here is another way to add Objective J highlighting with automatic file detection to (Mac)Vim and it is a lot cleaner and shorter than bootload’s version (it won’t add a menu to MacVim, but I don’t need it at all):
*
*Download the objj.vim file Francisco mentioned.
*Place it into ~/.vim/syntax/ (create folder if necessary)
*Add the line au BufNewFile,BufRead *.j setf objj to ~/.vim/filetype.vim
*Do not forget to turn on filetype detection in your ~/.vimrc configuration file: filetype plugin on
A:
the source on github now include a vim
highlight module.
I've found just doing what is suggested here is not enough:
*
*download the file as Francisco suggests
*unzip, cd Tools/ dir
*run the shell, sh install-tools
*copy the objj.vim file to vim dir, cp Tools/Editors/objj.vim /usr/share/vim/vim71/syntax/
Problem
I found no syntax highlighting worked for ".j" files. So the problem here is no file extension recognition. If you are using gvim as I am there is also no menu item.
Add Objective-J to gvim menu
To add a menu-item in gvim for Syntax->Me-NO->Objective J:
*
*sudo vim /usr/share/vim/vim71/synmenu.vim
add the following line.
*
*an 50.70.465 &Syntax.Me-NO.Objective\ J :cal SetSyn("objj")<CR> below the objective-C entry.
save and quit
*
*:wq!
then reload a ".j" file in gvim. If you then go:
*
*Syntax->Me-NO->Objective J
highlighting for your selected Objective-J file should occur.
Objective-J auto-highlighting?
But what about auto-highlighting when you load the file? There appears to be no file associations. So:
*
*sudo vim /usr/share/vim/vim71/filetype.vim
In the file you will find a list of filetype associations. If you want an idea where to add the line, search for "setf ocaml" in filetype.vim. Add the line shown below just above that entry (the list is alphabetical):
*
*"" Objective J au BufNewFile,BufRead *.j setf objj
Save the result. You should now be able to load a file with a ".j" extension and syntax highlighting for Objective-J files works.
Result
Now you should get automatic recognition of the Objective-J files by file type ".j" and a way to set this filetype in gvim. This probably should be added by Bram or whoever does the official release of vim but for the moment this hack works for me. (Ubuntu 8.10, Vim 7.1)
A: If regular javascript syntax highlighting is good enough, you can map that to .j files by adding something like this to your .vimrc file:
augroup objective-j
au! BufRead,BufNewFile *.j set filetype=objective-j
au! Syntax objective-j source /usr/share/vim/vim71/syntax/javascript.vim
augroup END
I haven't tried this exact code, but did something similar when mapping C# syntax to .vala files on my Linux machine. (NOTE: The javascript.vim file might be located somewhere else on your computer.) You could of course make an objective-j.vim file based on that javascript.vim syntax definition instead of using it as it is.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Templates In VB I've got some VB code (actually VBA) which is basically the same except for the type on which it operates. Since I think the DRY principle is a good guiding principle for software development, I want to write one routine for all of the different types which need to be operated on. For example if I had two snippets of code like these:
Dim i as Obj1
Set i = RoutineThatReturnsObj1()
i.property = newvalue
Dim i as Obj2
Set i = RoutineThatReturnsObj2()
i.property = newvalue
I'd like to have something like this to handle both instances:
Sub MyRoutine(o as ObjectType, r as RoutineToInitializeObject, newvalue as value)
Dim i as o
Set i = r
i.property = newvalue
End Sub
If I were using C++ I'd generate a template and say no more about it. But I'm using VBA. I'm fairly sure there's no capability like C++ templates in the VBA language definition but is there any other means by which I might achieve the same effect? I'm guessing the answer is no but I ask here because maybe there is some feature of VBA that I've missed.
A: There's nothing in VB6 that will do that. If you update to Visual Studio Tools for Office with .Net you can use generics:
Function MyRoutine(Of O)(r As Delegate, propertyName As String, newValue As Object) As O
    ' Invoke the initializer delegate to get a new instance of O
    Dim i As O = CType(r.Method.Invoke(Nothing, Nothing), O)
    ' O is unconstrained, so use reflection to set the named property
    GetType(O).GetProperty(propertyName).SetValue(i, newValue, Nothing)
    Return i
End Function
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Programmatically extract macro (VBA) code from Word 2007 docs Is it possible to extract all of the VBA code from a Word 2007 "docm" document using the API?
I have found how to insert VBA code at runtime, and how to delete all VBA code, but not pull the actual code out into a stream or string that I can store (and insert into other documents in the future).
Any tips or resources would be appreciated.
Edit: thanks to everyone, Aardvark's answer was exactly what I was looking for. I have converted his code to C#, and was able to call it from a class library using Visual Studio 2008.
using Microsoft.Office.Interop.Word;
using Microsoft.Vbe.Interop;
...
public List<string> GetMacrosFromDoc()
{
    Document doc = GetWordDoc(@"C:\Temp\test.docm");
    List<string> macros = new List<string>();

    VBProject prj;
    CodeModule code;
    string composedFile;

    prj = doc.VBProject;
    foreach (VBComponent comp in prj.VBComponents)
    {
        code = comp.CodeModule;

        // Put the name of the code module at the top
        composedFile = comp.Name + Environment.NewLine;

        // Loop through the (1-indexed) lines
        for (int i = 0; i < code.CountOfLines; i++)
        {
            composedFile += code.get_Lines(i + 1, 1) + Environment.NewLine;
        }

        // Add the macro to the list
        macros.Add(composedFile);
    }

    CloseDoc(doc);
    return macros;
}
A: You could export the code to files and then read them back in.
I've been using the code below to help me keep some Excel macros under source control (using Subversion & TortoiseSVN). It basically exports all the code to text files any time I save with the VBA editor open. I put the text files in subversion so that I can do diffs. You should be able to adapt/steal some of this to work in Word.
The registry check in CanAccessVBOM() corresponds to the "Trust access to Visual Basic Project" in the security setting.
Sub ExportCode()
    If Not CanAccessVBOM Then Exit Sub ' Exit if access to VB object model is not allowed
    If (ThisWorkbook.VBProject.VBE.ActiveWindow Is Nothing) Then
        Exit Sub ' Exit if VBA window is not open
    End If
    Dim comp As VBComponent
    Dim codeFolder As String

    codeFolder = CombinePaths(GetWorkbookPath, "Code")
    On Error Resume Next
    MkDir codeFolder
    On Error GoTo 0
    Dim FileName As String

    For Each comp In ThisWorkbook.VBProject.VBComponents
        Select Case comp.Type
            Case vbext_ct_ClassModule
                FileName = CombinePaths(codeFolder, comp.Name & ".cls")
                DeleteFile FileName
                comp.Export FileName
            Case vbext_ct_StdModule
                FileName = CombinePaths(codeFolder, comp.Name & ".bas")
                DeleteFile FileName
                comp.Export FileName
            Case vbext_ct_MSForm
                FileName = CombinePaths(codeFolder, comp.Name & ".frm")
                DeleteFile FileName
                comp.Export FileName
            Case vbext_ct_Document
                FileName = CombinePaths(codeFolder, comp.Name & ".cls")
                DeleteFile FileName
                comp.Export FileName
        End Select
    Next
End Sub

Function CanAccessVBOM() As Boolean
    ' Check registry to see if we can access the VB object model
    Dim wsh As Object
    Dim str1 As String
    Dim AccessVBOM As Long

    Set wsh = CreateObject("WScript.Shell")
    str1 = "HKEY_CURRENT_USER\Software\Microsoft\Office\" & _
        Application.Version & "\Excel\Security\AccessVBOM"
    On Error Resume Next
    AccessVBOM = wsh.RegRead(str1)
    Set wsh = Nothing
    CanAccessVBOM = (AccessVBOM = 1)
End Function

Sub DeleteFile(FileName As String)
    On Error Resume Next
    Kill FileName
End Sub

Function GetWorkbookPath() As String
    Dim fullName As String
    Dim wrkbookName As String
    Dim pos As Long

    wrkbookName = ThisWorkbook.Name
    fullName = ThisWorkbook.fullName
    pos = InStr(1, fullName, wrkbookName, vbTextCompare)
    GetWorkbookPath = Left$(fullName, pos - 1)
End Function

Function CombinePaths(ByVal Path1 As String, ByVal Path2 As String) As String
    If Not EndsWith(Path1, "\") Then
        Path1 = Path1 & "\"
    End If
    CombinePaths = Path1 & Path2
End Function

Function EndsWith(ByVal InString As String, ByVal TestString As String) As Boolean
    EndsWith = (Right$(InString, Len(TestString)) = TestString)
End Function
A: You'll have to add a reference to Microsoft Visual Basic for Applications Extensibility 5.3 (or whatever version you have). I have the VBA SDK and such on my box - so this may not be exactly what office ships with.
Also you have to enable access to the VBA Object Model specifically - see the "Trust Center" in Word options. This is in addition to all the other Macro security settings Office provides.
This example will extract code from the current document it lives in - it itself is a VBA macro (and will display itself and any other code as well). There is also a Application.vbe.VBProjects collection to access other documents. While I've never done it, I assume an external application could get to open files using this VBProjects collection as well. Security is funny with this stuff so it may be tricky.
I also wonder what the docm file format is now - XML like the docx? Would that be a better approach?
Sub GetCode()
    Dim prj As VBProject
    Dim comp As VBComponent
    Dim code As CodeModule
    Dim composedFile As String
    Dim i As Integer

    Set prj = ThisDocument.VBProject
    For Each comp In prj.VBComponents
        Set code = comp.CodeModule
        composedFile = comp.Name & vbNewLine
        For i = 1 To code.CountOfLines
            composedFile = composedFile & code.Lines(i, 1) & vbNewLine
        Next
        MsgBox composedFile
    Next
End Sub
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Is automatic upgrades a realistic feature to expect from enterprise Web applications? Most of the work I do is with what could be considered enterprise Web applications. These projects have large budgets, longer timelines (from 3-12 months), and heavy customizations. Because as developers we have been touting the idea of the Web as the next desktop OS, customers are coming to expect the software running on this "new OS" to react the same as on the desktop. That includes easy to manage automatic upgrades. In other words, "An update is available. Do you want to upgrade?" Is this even a realistic expectation? Can anyone speak from experience on trying to implement this feature?
A: At my company we have enterprise installations ranging into the thousands of seats. If we implemented an auto-upgrade, our customers would mutiny!
Large installations have peculiar issues that don't apply to small ones. For example, with 2000 users (not all of whom are, let us say, the most sophisticated of tool users), tool-training is a big deal: training time, internal demos, internal process documents, etc.. They cannot unleash a new feature or UI change without a chance to understand how it fits in their process and therefore what their internal best practices are and how to communicate that to their users.
Also when applications fail, it's the internal IT team who are responsible. Therefore, they want time to install a new version in a test area, beat it up, and deploy on a Saturday only when they're good and ready.
I can see the value in making minor patches more easy to install, particularly when the patch is just for a bug-fix and not for anything that would require retraining, and if the admins still get final say over when it's installed. But even then, I don't believe anyone has ever asked for this! Whether because they don't want it or they are trained to not expect it, it doesn't seem worth it.
A: Well, it really depends on your business model but for a lot of applications the SaaS model can end up biting you. It's great for a lot of things but for some larger applications the users are not investing as significant amount up front and could possibly move to something else before you've made any money.
See
http://news.zdnet.com/2424-9595_22-218408.html
and here
http://www.25hoursaday.com/weblog/2008/07/21/SoftwareAsAServiceWhenYourBusinessModelBecomesAParadox.aspx
for more information
A: One of the primary reasons to implement an application as a web application is that you get automatic upgrades for free. Why would users be getting prompted for upgrades on a web app?
For Windows applications, the "update is available, do you want to upgrade?" functionality is provided by Microsoft using ClickOnce, which I have used in an enterprise environment successfully -- there are a few gotchas but for the most part it is a good way to manage automatic deployment and upgrade of Windows apps.
For mobile apps, you can also implement auto-upgrades, although it is a little trickier.
In any case, to answer your question in a broad sense, I don't know if it is expected that all enterprise apps should make upgrading easy, but it certainly is worth the money from an IT support standpoint to architect them to allow for easy upgrading.
A: If you're providing a hosted solution, I wouldn't bother. Let the upgrade happen silently (perhaps with a notice that you did it). If you're selling an application that's hosted on their servers, let the upgrade decision be made by a single owner, not every user of the app.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Use-cases for reflection Recently I was talking to a co-worker about C++ and lamented that there was no way to take a string with the name of a class field and extract the field with that name; in other words, it lacks reflection. He gave me a baffled look and asked when anyone would ever need to do such a thing.
Off the top of my head I didn't have a good answer for him, other than "hey, I need to do it right now". So I sat down and came up with a list of some of the things I've actually done with reflection in various languages. Unfortunately, most of my examples come from my web programming in Python, and I was hoping that the people here would have more examples. Here's the list I came up with:
*
*Given a config file with lines like
x = "Hello World!"
y = 5.0
dynamically set the fields of some config object equal to the values in that file. (This was what I wished I could do in C++, but actually couldn't do; a short C# sketch of this use case follows the list.)
*When sorting a list of objects, sort based on an arbitrary attribute given that attribute's name from a config file or web request.
*When writing software that uses a network protocol, reflection lets you call methods based on string values from that protocol. For example, I wrote an IRC bot that would translate
!some_command arg1 arg2
into a method call actions.some_command(arg1, arg2) and print whatever that function returned back to the IRC channel.
*When using Python's __getattr__ function (which is sort of like method_missing in Ruby/Smalltalk) I was working with a class with a whole lot of statistics, such as late_total. For every statistic, I wanted to be able to add _percent to get that statistic as a percentage of the total things I was counting (for example, stats.late_total_percent). Reflection made this very easy.
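As a concrete illustration of the first item above, here is a minimal C# sketch; the Config class and its field names are made up for the example, and real code would want error handling for unknown keys:
using System;
using System.Reflection;

class Config
{
    public string x;
    public double y;
}

static class ConfigLoader
{
    // Parses lines like:  x = "Hello World!"  or  y = 5.0
    public static void Apply(Config config, string[] lines)
    {
        foreach (string line in lines)
        {
            string[] parts = line.Split(new[] { '=' }, 2);
            string name = parts[0].Trim();
            string raw = parts[1].Trim().Trim('"');

            // Look the field up by the name read from the file: the step
            // that plain C++ has no standard way to do.
            FieldInfo field = typeof(Config).GetField(name);
            object value = Convert.ChangeType(raw, field.FieldType);
            field.SetValue(config, value);
        }
    }
}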
So can anyone here give any examples from their own programming experiences of times when reflection has been helpful? The next time a co-worker asks me why I'd "ever want to do something like that" I'd like to be more prepared.
A: I've used reflection to get current method information for exceptions, logging, etc.
string src = MethodInfo.GetCurrentMethod().ToString();
string msg = "Big Mistake";
Exception newEx = new Exception(msg, ex);
newEx.Source = src;
instead of
string src = "MyMethod";
string msg = "Big MistakeA";
Exception newEx = new Exception(msg, ex);
newEx.Source = src;
It's just easier for copy/paste inheritance and code generation.
A: I'm in a situation now where I have a stream of XML coming in over the wire and I need to instantiate an Entity object that will populate itself from elements in the stream. It's easier to use reflection to figure out which Entity object can handle which XML element than to write a gigantic, maintenance-nightmare conditional statement. There's clearly a dependency between the XML schema and how I structure and name my objects, but I control both so it's not a big problem.
A: I can list following usage for reflection:
*
*Late binding
*Security (introspect code for security reasons)
*Code analysis
*Dynamic typing (duck typing is not possible without reflection)
*Metaprogramming
Some real-world usages of reflection from my personal experience:
*
*Developed plugin system based on reflection
*Used aspect-oriented programming model
*Performed static code analysis
*Used various Dependency Injection frameworks
*...
Reflection is good thing :)
A: There are lots of times you want to dynamically instantiate and work with objects whose type isn't known until runtime, for example with OR-mappers or in a plugin architecture. Mocking frameworks use it, and it helps if you want to write a logging library that dynamically examines the type and properties of exceptions.
If I think a bit longer I can probably come up with more examples.
A: I find reflection very useful if the input data (like XML) has a complex structure which is easily mapped to object instances, or if I need some kind of "is a" relationship between the instances.
As reflection is relatively easy in Java, I sometimes use it for simple data (key-value maps) where I have a small fixed set of keys. On one hand it's simple to determine if a key is valid (if the class has a setter setKey(String data)); on the other hand I can change the type of the (textual) input data and hide the transformation (e.g. a simple cast to int in getKey()), so the rest of the application can rely on correctly typed data.
If the type of some key-value pair changes for one object (e.g. from int to float), I only have to change it in the data object and its users, but don't have to remember to check the parser too. This might not be a sensible approach if performance is an issue...
A: Writing dispatchers. Twisted uses python's reflective capabilities to dispatch XML-RPC and SOAP calls. RMI uses Java's reflection api for dispatch.
Command line parsing. Building up a config object based on the command line parameters that are passed in.
When writing unit tests, it can be helpful to use reflection, though mostly I've used this to bypass access modifiers (Java).
A: I've used reflection in C# when there was some internal or private method in the framework or a third party library that I wanted to access.
(Disclaimer: It's not necessarily a best-practice because private and internal methods may be changed in later versions. But it worked for what I needed.)
A: Well, in statically-typed languages, you'd want to use reflection any time you need to do something "dynamic". It comes in handy for tooling purposes (scanning the members of an object). In Java it's used in JMX and dynamic proxies quite a bit. And there are tons of one-off cases where it's really the only way to go (pretty much anytime you need to do something the compiler won't let you do).
A: I generally use reflection for debugging. Reflection can more easily and more accurately display the objects within the system than an assortment of print statements. In many languages that have first-class functions, you can even invoke the functions of the object without writing special code.
There is, however, a way to do what you want(ed). Use a hashtable. Store the fields keyed against the field name.
If you really wanted to, you could then create standard Get/Set functions, or create macros that do it on the fly. #define GetX() Get("X") sort of thing.
You could even implement your own imperfect reflection that way.
For the advanced user, if you can compile the code, it may be possible to enable debug output generation and use that to perform reflection.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Autoupdating .net applications I've written 2 reasonably large scale apps in .net so far, and both of them have needed an updating facility to automatically update the application when I roll out new code.
I've found the 'Enterprise application block updater' a bit too complex for my needs, and I've found 'click once' frustrating when it comes to publishing.
The most adequate updating code I've found is the .NET Application Updater Component, which I've used for both projects. I've had to modify it recently because it uses WebDAV, which isn't always installed on our web servers (it still needs directory browsing, however).
I'm surprised that there isn't more on the web about automatically updating applications, and was wondering whether people have had success with any other methods than the ones mentioned above?
A: See the answers to this similar question: a few auto-update frameworks were recommended there which are designed for (or work with) .net apps:
*
*Application Updater Block (which you mention)
*ClickOnce,
*ClickThrough
A: At my company, we use a custom in-house updater for our applications. It's embedded as a resource in the main application executable; when the application needs to update, the updater is extracted, written to disk, and launched to do the update. The updater can download a .msi and launch it, or it can download a zip file and unzip its contents into the application folder. Pretty simple and effective.
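A rough C# sketch of that extract-and-launch step; the resource name, file names, and updater protocol are made up for illustration:
using System.Diagnostics;
using System.IO;
using System.Reflection;

static class UpdaterLauncher
{
    public static void LaunchUpdater()
    {
        string updaterPath = Path.Combine(Path.GetTempPath(), "Updater.exe");

        // Extract the updater executable embedded as a resource in this assembly.
        Assembly current = Assembly.GetExecutingAssembly();
        using (Stream resource = current.GetManifestResourceStream("MyApp.Updater.exe"))
        using (FileStream file = File.Create(updaterPath))
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = resource.Read(buffer, 0, buffer.Length)) > 0)
                file.Write(buffer, 0, read);
        }

        // Hand off to the updater, which downloads the .msi or .zip and
        // applies it while the main application shuts down.
        Process.Start(updaterPath);
    }
}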
A: Try an off-the-shelf autoupdate product: http://www.AutoUpdatePlus.com
A: Try AutoUpdater.NET, a class library for .NET created by me. You just need to add one line of code, and users can update your application with ease.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Design Pattern for Undo Engine I'm writing a structural modeling tool for a civil engineering application. I have one huge model class representing the entire building, which includes collections of nodes, line elements, loads, etc., which are also custom classes.
I have already coded an undo engine which saves a deep copy after each modification to the model. Now I have started wondering whether I could have coded it differently. Instead of saving deep copies, I could perhaps save a list of each modifier action with a corresponding reverse modifier, so that I could apply the reverse modifiers to the current model to undo, or the modifiers to redo.
I can imagine how you would carry out simple commands that change object properties, etc. But how about complex commands? Like inserting new node objects to the model and adding some line objects which keep references to the new nodes.
How would one go about implementing that?
A: Most examples I've seen use a variant of the Command-Pattern for this. Every user-action that's undoable gets its own command instance with all the information to execute the action and roll it back. You can then maintain a list of all the commands that have been executed and you can roll them back one by one.
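A minimal C# sketch of that idea; the interface and class names here are illustrative, not from any particular framework:
using System.Collections.Generic;

// Each undoable user action carries enough information to execute and roll back.
public interface IUndoableCommand
{
    void Execute();
    void Undo();
}

public class UndoManager
{
    private readonly Stack<IUndoableCommand> _undo = new Stack<IUndoableCommand>();
    private readonly Stack<IUndoableCommand> _redo = new Stack<IUndoableCommand>();

    public void Do(IUndoableCommand command)
    {
        command.Execute();
        _undo.Push(command);
        _redo.Clear(); // a fresh action invalidates the redo history
    }

    public void Undo()
    {
        if (_undo.Count == 0) return;
        IUndoableCommand command = _undo.Pop();
        command.Undo();
        _redo.Push(command);
    }

    public void Redo()
    {
        if (_redo.Count == 0) return;
        IUndoableCommand command = _redo.Pop();
        command.Execute();
        _undo.Push(command);
    }
}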
A: You might want to refer to the Paint.NET code for their undo - they've got a really nice undo system. It's probably a bit simpler than what you'll need, but it might give you some ideas and guidelines.
-Adam
A: This might be a case where CSLA is applicable. It was designed to provide complex undo support to objects in Windows Forms applications.
A: I've implemented complex undo systems successfully using the Memento pattern - very easy, and it has the benefit of naturally providing a Redo framework too. A more subtle benefit is that aggregate actions can be contained within a single Undo too.
In a nutshell, you have two stacks of memento objects. One for Undo, the other for Redo. Every operation creates a new memento, which ideally will be some calls to change the state of your model, document (or whatever). This gets added to the undo stack. When you do an undo operation, in addition to executing the Undo action on the Memento object to change the model back again, you also pop the object off the Undo stack and push it right onto the Redo stack.
How the method to change the state of your document is implemented depends completely on your implementation. If you can simply make an API call (e.g. ChangeColour(r,g,b)), then precede it with a query to get and save the corresponding state. But the pattern will also support making deep copies, memory snapshots, temp file creation etc. - it's all up to you, as it is simply a virtual method implementation.
To do aggregate actions (e.g. user Shift-Selects a load of objects to do an operation on, such as delete, rename, change attribute), your code creates a new Undo stack as a single memento, and passes that to the actual operation to add the individual operations to. So your action methods don't need to (a) have a global stack to worry about and (b) can be coded the same whether they are executed in isolation or as part of one aggregate operation.
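A sketch of how such an aggregate can be modelled as a composite, reusing the IUndoableCommand shape from the C# sketch in the first answer (names again illustrative; the memento objects described here play the same role):
using System.Collections.Generic;

// Lets a multi-select operation (delete, rename, ...) undo as a single step.
public class CompositeCommand : IUndoableCommand
{
    private readonly List<IUndoableCommand> _children = new List<IUndoableCommand>();

    public void Add(IUndoableCommand child)
    {
        _children.Add(child);
    }

    public void Execute()
    {
        foreach (IUndoableCommand child in _children)
            child.Execute();
    }

    public void Undo()
    {
        // Roll back in reverse order of execution.
        for (int i = _children.Count - 1; i >= 0; i--)
            _children[i].Undo();
    }
}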
Many undo systems are in-memory only, but you could persist the undo stack out if you wish, I guess.
A: Just been reading about the command pattern in my agile development book - maybe that's got potential?
You can have every command implement the command interface (which has an Execute() method). If you want undo, you can add an Undo method.
more info here
A: I'm with Mendelt Siebenga on the fact that you should use the Command Pattern. The pattern you used was the Memento Pattern, which can and will become very wasteful over time.
Since you're working on a memory-intensive application, you should be able to specify either how much memory the undo engine is allowed to take up, how many levels of undo are saved or some storage to which they will be persisted. Should you not do this, you will soon face errors resulting from the machine being out of memory.
I would advise you check whether there's a framework that already created a model for undos in the programming language / framework of your choice. It is nice to invent new stuff, but it's better to take something already written, debugged and tested in real scenarios. It would help if you added what you're writing this in, so people can recommend frameworks they know.
A: I think both memento and command are not practical when you are dealing with a model of the size and scope that the OP implies. They would work, but it would be a lot of work to maintain and extend.
For this type of problem, I think you need to build in support to your data model to support differential checkpoints for every object involved in the model. I've done this once and it worked very slick. The biggest thing you have to do is avoid the direct use of pointers or references in the model.
Every reference to another object uses some identifier (like an integer). Whenever the object is needed, you lookup the current definition of the object from a table. The table contains a linked list for each object that contains all the previous versions, along with information regarding which checkpoint they were active for.
Implementing undo/redo is simple: Do your action and establish a new checkpoint; rollback all object versions to the previous checkpoint.
It takes some discipline in the code, but has many advantages: you don't need deep copies since you are doing differential storage of the model state; you can scope the amount of memory you want to use (very important for things like CAD models) by either number of redos or memory used; very scalable and low-maintenance for the functions that operate on the model since they don't need to do anything to implement undo/redo.
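A compressed C# sketch of the version-table idea described above; all names are invented for the example, and redo handling is omitted:
using System.Collections.Generic;

// One saved state of an object, tagged with the checkpoint that created it.
class ObjectVersion
{
    public int Checkpoint;
    public object State;
}

class VersionTable
{
    // Objects reference each other by integer id, never by pointer;
    // each id maps to a history of its previous versions.
    private readonly Dictionary<int, List<ObjectVersion>> _history =
        new Dictionary<int, List<ObjectVersion>>();
    private int _checkpoint;

    public void BeginCheckpoint() { _checkpoint++; }

    // Differential storage: only objects that actually change get a new entry.
    public void Set(int id, object state)
    {
        List<ObjectVersion> versions;
        if (!_history.TryGetValue(id, out versions))
            _history[id] = versions = new List<ObjectVersion>();
        versions.Add(new ObjectVersion { Checkpoint = _checkpoint, State = state });
    }

    // The current definition is the newest version at or before the checkpoint.
    public object Get(int id)
    {
        List<ObjectVersion> versions = _history[id];
        for (int i = versions.Count - 1; i >= 0; i--)
            if (versions[i].Checkpoint <= _checkpoint)
                return versions[i].State;
        return null;
    }

    // Undo: step back to the previous checkpoint; lookups resolve to old versions.
    public void Rollback()
    {
        if (_checkpoint > 0) _checkpoint--;
    }
}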
A: Codeplex project:
It's a simple framework to add Undo/Redo functionality to your applications, based on the classical Command design pattern. It supports merging actions, nested transactions, delayed execution (execution on top-level transaction commit) and possible non-linear undo history (where you can have a choice of multiple actions to redo).
A: I had to do this when writing a solver for a peg-jump puzzle game. I made each move a Command object that held enough information that it could be either done or undone. In my case this was as simple as storing the starting position and the direction of each move. I then stored all these objects in a stack so the program could easily undo as many moves as it needed while backtracking.
A: Most examples I've read do it by using either the command or memento pattern. But you can do it without design patterns too with a simple deque-structure.
A: For reference, here's a simple implementation of the Command pattern for Undo/Redo in C#: Simple undo/redo system for C#.
A: A clever way to handle undo, which would make your software also suitable for multi user collaboration, is implementing an operational transformation of the data structure.
This concept is not very popular but well defined and useful. If the definition looks too abstract to you, this project is a successful example of how an operational transformation for JSON objects is defined and implemented in Javascript
A: If you're talking GoF, the Memento pattern specifically addresses undo.
A: As others have stated, the command pattern is a very powerful method of implementing Undo/Redo. But there is important advantage I would like to mention to the command pattern.
When implementing undo/redo using the command pattern, you can avoid large amounts of duplicated code by abstracting (to a degree) the operations performed on the data and utilize those operations in the undo/redo system. For example in a text editor cut and paste are complementary commands (aside from the management of the clipboard). In other words, the undo operation for a cut is paste and the undo operation for a paste is cut. This applies to much simpler operations as typing and deleting text.
The key here is that you can use your undo/redo system as the primary command system for your editor. Instead of structuring the system as "create undo object, then modify the document", you can "create undo object, then execute the redo operation on the undo object to modify the document".
Now, admittedly, many people are thinking to themselves "Well duh, isn't part of the point of the command pattern?" Yes, but I've seen too many command systems that have two sets of commands, one for immediate operations and another set for undo/redo. I'm not saying that there won't be commands that are specific to immediate operations and undo/redo, but reducing the duplication will make the code more maintainable.
A: We reused the file load and save serialization code for “objects” for a convenient form to save and restore the entire state of an object. We push those serialized objects on the undo stack – along with some information about what operation was performed and hints on undo-ing that operation if there isn’t enough info gleaned from the serialized data. Undo and Redoing is often just replacing one object with another (in theory).
There have been many MANY bugs due to pointers (C++) to objects that were never fixed up as you perform some odd undo/redo sequences (those places not updated to safer undo-aware "identifiers"). Bugs in this area are often ...ummm... interesting.
Some operations can be special cases for speed/resource usage - like sizing things, moving things around.
Multi-selection provides some interesting complications as well. Luckily we already had a grouping concept in the code. Kristopher Johnson's comment about sub-items is pretty close to what we do.
A: You can try ready-made implementation of Undo/Redo pattern in PostSharp. https://www.postsharp.net/model/undo-redo
It lets you add undo/redo functionality to your application without implementing the pattern yourself. It uses Recordable pattern to track the changes in your model and it works with INotifyPropertyChanged pattern which is also implemented in PostSharp.
You are provided with UI controls and you can decide what the name and granularity of each operation will be.
A: I once worked on an application in which all changes made by a command to the application's model (i.e. CDocument... we were using MFC) were persisted at the end of the command by updating fields in an internal database maintained within the model. So we did not have to write separate undo/redo code for each action. The undo stack simply remembered the primary keys, field names and old values every time a record was changed (at the end of each command).
A: The first section of Design Patterns (GoF, 1994) has a use case for implementing the undo/redo as a design pattern.
A: You can make your initial idea performant.
Use persistent data structures, and stick with keeping a list of references to old states around. (But that only really works if all data in your state class is immutable and all operations on it return a new version; the new version doesn't need to be a deep copy, since you just replace the changed parts 'copy-on-write'.)
A: I've found the Command pattern to be very useful here. Instead of implementing several reverse commands, I'm using rollback with delayed execution on a second instance of my API.
This approach seems reasonable if you want low implementation effort and easy maintainability (and can afford the extra memory for the 2nd instance).
See here for an example:
https://github.com/thilo20/Undo/
A: I don't know if this is going to be of any use to you, but when I had to do something similar on one of my projects, I ended up downloading UndoEngine from http://www.undomadeeasy.com - a wonderful engine and I really didn't care too much about what was under the bonnet - it just worked.
A: In my opinion, the UNDO/REDO could be implemented in 2 ways broadly.
1. Command Level (called command level Undo/Redo)
2. Document level (called global Undo/Redo)
Command level: As many answers point out, this is efficiently achieved using Memento pattern. If the command also supports journalizing the action, a redo is easily supported.
Limitation: once the scope of the command has passed, undo/redo is impossible, which leads to document-level (global) undo/redo.
I guess your case would fit into the global undo/redo, since it is suitable for a model which involves a lot of memory space. It is also suitable for selective undo/redo. There are two primitive types:
*
*All memory undo/redo
*Object level Undo Redo
In "All memory Undo/Redo", the entire memory is treated as a connected data (such as a tree, or a list or a graph) and the memory is managed by the application rather than the OS. So new and delete operators if in C++ are overloaded to contain more specific structures to effectively implement operations such as a. If any node is modified, b. holding and clearing data etc.,
The way it functions is basically to copy the entire memory(assuming that memory allocation is already optimized and managed by the application using advanced algorithms) and store it in a stack. If the copy of the memory is requested, the tree structure is copied based on the need to have a shallow or deep copy. A deep copy is made only for that variable which is modified. Since every variable is allocated using custom allocation, the application has the final say when to delete it if need be.
Things become very interesting if we have to partition the Undo/Redo when it so happens that we need to programatically-selectively Undo/Redo a set of operation. In this case, only those new variables, or deleted variables or modified variables are given a flag so that Undo/Redo only undoes/redoes those memory
Things become even more interesting if we need to do a partial Undo/Redo inside an object. When such is the case, a newer idea of "Visitor pattern" is used. It is called "Object Level Undo/redo"
*Object level Undo/Redo: when the undo/redo notification is called, every object implements a streaming operation wherein the streamer gets the old/new data from the object as programmed. The data which is not to be disturbed is left undisturbed. Every object gets a streamer as an argument and, inside the undo/redo call, it streams/unstreams the object's data.
Both 1 and 2 could have methods such as
1. BeforeUndo()
2. AfterUndo()
3. BeforeRedo()
4. AfterRedo(). These methods have to be published in the basic undo/redo command (not the contextual command) so that all objects implement these methods too and get specific behavior.
A good strategy is to create a hybrid of 1 and 2. The beauty is that these methods(1&2) themselves use command patterns
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "124"
} |
Q: What is a good design when trying to build objects from a list of key value pairs? So if I have a method of parsing a text file and returning a list of a list of key value pairs, and want to create objects from the kvps returned (each list of kvps represents a different object), what would be the best method?
The first method that pops into mind is pretty simple, just keep a list of keywords:
private const string NAME = "name";
private const string PREFIX = "prefix";
and check against the keys I get for the constants I want, defined above. This is a fairly core piece of the project I'm working on though, so I want to do it well; does anyone have any more robust suggestions (not saying there's anything inherently un-robust about the above method - I'm just asking around)?
Edit:
More details have been asked for. I'm working on a little game in my spare time, and I am building up the game world with configuration files. There are four - one defines all creatures, another defines all areas (and their locations in a map), another all objects, and a final one defines various configuration options and things that don't fit elsewhere. With the first three configuration files, I will be creating objects based on the content of the files - it will be quite text-heavy, so there will be a lot of strings, things like names, plurals, prefixes - that sort of thing. The configuration values are all like so:
-
key: value
key: value
-
key: value
key: value
-
Where the '-' line denotes a new section/object.
A: Take a deep look at the XmlSerializer. Even if you are constrained to not use XML on-disk, you might want to copy some of its features. This could then look like this:
public class DataObject {
[Column("name")]
public string Name { get; set; }
[Column("prefix")]
public string Prefix { get; set; }
}
Be careful though to include some kind of format version in your files, or you will be in hell's kitchen come the next format change.
A: Making a lot of unwarranted assumptions, I think that the best approach would be to create a Factory that will receive the list of key value pairs and return the proper object or throw an exception if it's invalid (or create a dummy object, or whatever is better in the particular case).
private class Factory {
    // C# does not allow a member to share its enclosing class's name,
    // so the factory method is named Create here.
    public static IConfigurationObject Create(List<string> keyValuePair) {
        switch (keyValuePair[0]) {
            case "x":
                return new x(keyValuePair[1]); // return exits the case, so no break is needed
            /* etc. */
            default:
                throw new ArgumentException("Wrong parameter in the file");
        }
    }
}
The strongest assumption here is that all your objects can be treated partly the same (i.e., they implement the same interface (IConfigurationObject in the example) or belong to the same inheritance tree).
If they don't, then it depends on your program flow and what are you doing with them. But nonetheless, they should :)
EDIT: Given your explanation, you could have one Factory per file type, the switch in it would be the authoritative source on the allowed types per file type and they probably share something in common. Reflection is possible, but it's riskier because it's less obvious and self documenting than this one.
A: What do you need objects for? The way you describe it, you'll use them as some kind of (key-wise) restricted map anyway. If you do not need some kind of inheritance, I'd simply wrap a map-like structure into an object like this:
[java-inspired pseudo-code:]
class RestrictedKVDataStore {
    const ALLOWED_KEYS = new Collection('name', 'prefix');
    Map data = new Map();

    void put(String key, Object value) {
        if (ALLOWED_KEYS.contains(key))
            data.put(key, value)
    }

    Object get(String key) {
        return data.get(key);
    }
}
A: You could create an interface that matched the column names, and then use the Reflection.Emit API to create a type at runtime that gave access to the data in the fields.
A: EDIT:
Scratch that; this still applies, but I think what you're doing is reading a configuration file and parsing it into this:
List<List<KeyValuePair<String,String>>> itemConfig =
new List<List<KeyValuePair<String,String>>>();
In this case, we can still use a reflection factory to instantiate the objects, I'd just pass in the nested inner list to it, instead of passing each individual key/value pair.
OLD POST:
Here is a clever little way to do this using reflection:
The basic idea:
*
*Use a common base class for each Object class.
*Put all of these classes in their own assembly.
*Put this factory in that assembly too.
*Pass in the KeyValuePair that you read from your config, and in return it finds the class that matches KV.Key and instantiates it with KV.Value
public class KeyValueToObjectFactory
{
    private Dictionary<string, Type> _kvTypes = new Dictionary<string, Type>();

    public KeyValueToObjectFactory()
    {
        // Preload the Types into a dictionary so we can look them up later
        // Obviously, you want to reuse the factory to minimize overhead, so don't
        // do something stupid like instantiate a new factory in a loop.
        foreach (Type type in typeof(KeyValueToObjectFactory).Assembly.GetTypes())
        {
            if (type.IsSubclassOf(typeof(KVObjectBase)))
            {
                _kvTypes[type.Name.ToLower()] = type;
            }
        }
    }

    public KVObjectBase CreateObjectFromKV(KeyValuePair<string, string> kv)
    {
        // KeyValuePair is a struct, so check the key rather than comparing to null.
        if (kv.Key != null)
        {
            string kvName = kv.Key;
            // If the Type information is in our Dictionary, instantiate a new instance of that class.
            Type kvType;
            if (_kvTypes.TryGetValue(kvName, out kvType))
            {
                return (KVObjectBase)Activator.CreateInstance(kvType, kv.Value);
            }
            else
            {
                throw new ArgumentException("Unrecognized KV Pair");
            }
        }
        else
        {
            return null;
        }
    }
}
A: @David:
I already have the parser (and most of these will be hand written, so I decided against XML). But that looks like I really nice way of doing it; I'll have to check it out. Excellent point about versioning too.
@Argelbargel:
That looks good too. :')
A:
...This is a fairly core piece of the
project I'm working on though...
Is it really?
It's tempting to just abstract it and provide a basic implementation with the intention of refactoring later on.
Then you can get on with what matters: the game.
Just a thought
A:
Is it really?
Yes; I have thought this out. Far be it from me to do more work than necessary. :')
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: In HTML, what should happen to a selected, disabled option element? In my specific example, I'm dealing with a drop-down, e.g.:
<select name="foo" id="bar">
<option disabled="disabled" selected="selected">Select an item:</option>
<option>an item</option>
<option>another item</option>
</select>
Of course, that's pretty nonsensical, but I'm wondering whether any strict behaviour is defined. Opera effectively rejects the 'selected' attribute and selects the next item in the list. All other browsers appear to allow it, and it remains selected.
Update: To clarify, I'm specifically interested in the initial selection. I'm dealing with one of those 'Select an item:'-type drop-downs, in which case the first option is really a label, and an action occurs onchange(). This is fairly well 'progressively enhanced', in that a submit button is present, and only removed via JavaScript. If the "select..." option were removed, whatever then were to become the first item would not be selectable. Are we just ruling out onchange drop downs altogether, or should the "select..." option be selectable, just with no effect?
A: The HTML specs are a bit vague (ie. completely lacking) with regard to this odd combination. They do say that a form element with the disabled attribute set should not be successful, so it really can't be selected.
The browser may well render it so that it looks selected, but it shouldn't show up in the POSTed data. Looks like Opera's got it right to me.
A: The HTML specs state that both selected & disabled are available attributes for the <option> element, but they don't specify what should happen in case of a conflict. In the section on disabled controls it says
When set, the disabled attribute has
the following effects on an element:
*
*Disabled controls do not receive focus.
*Disabled controls are skipped in tabbing navigation.
*Disabled controls cannot be successful.
it also says
How disabled elements are rendered depends on the user agent. For example, some user agents "gray out" disabled menu items, button labels, etc. In this example, the INPUT element is disabled. Therefore, it cannot receive user input nor will its value be submitted with the form.
While this specific case isn't specified, my reading of this says that the actual rendering of a 'selected' 'disabled' element is left up to the browser. As long as the user cannot select it then it's working as standard. It does say that a script can act upon the element, so it is possible for Javascript to set a disabled option as selected (or disable a selected option). This isn't contrary to the standards, but on form submission, that option's value couldn't be the selected value. The select list would (I assume) have to have an empty value in this case.
A: In reply to the update in the question, I would say that the 'label' option should be selectable but either make it do nothing on submission or via JavaScript, don't allow the form to be submitted without a value being selected (assuming it's a required field).
From a usablilty point of view I'd suggest doing both, that way all bases are covered.
A: According to the HTML 4.01 Specification, disabled is a standard attribute for the option element, but behavior is probably indeterminate based on the standard (read over the information on the select element and the option elements). Here is a portion I think may shed light on Opera's reasons for their implementation:
When set, the disabled attribute has the following effects on an element:
* Disabled controls do not receive focus.
* Disabled controls are skipped in tabbing navigation.
* Disabled controls cannot be successful.
So, it is very likely that this is just one of those things where the spec is vague enough to allow for both interpretations. This is the kind of idiosyncrasy that makes programming for the web so fun and rewarding. :P
A:
Are we just ruling out 'onchange' drop
downs altogether, or should the
"select..." option be selectable, just
with no effect?
"onchange" drop-downs are frowned upon by more standards-obsessed types.
I would typically do some client-side validation. "Please select an item from the drop down" kind of thing. i.e.
should the "select..." option be selectable, just with no effect?
So I just said "Yes" to your A or B question. :/ Sorry!
A: Unfortunately it doesn't really matter what should happen, because IE doesn't support the disabled attribute on options, period.
http://webbugtrack.blogspot.com/2007/11/bug-293-cant-disable-options-in-ie.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How often do you use System.ComponentModel.BackgroundWorker in your UIs? (if ever) I am sure a responsive UI is something that everyone strives for, and the recommended way to achieve it is to use the BackgroundWorker for this.
Do you find it easy to work with? Do you use it often? Or do you have your own frameworks for lengthy tasks and progress reporting?
I have found that I am using it quite a lot and even using its delegates wherever I need some sort of progress reporting.
A: BackgroundWorker makes things a lot easier. One thing I found the hard way is that BackgroundWorker itself has thread affinity, even though it is supposed to hide the thread-switching problem. It does not automatically switch to the UI thread in every case; it needs to be created and run from the UI thread for thread switching to happen properly.
A: Multithreaded programming is hard to grasp in the beginning (and veterans still fail sometimes) and BackgroundWorker makes it a bit easier to use. I like the fact that BackgroundWorker has functionality which is easy to implement but even easier to wrongly implement in a subtle way, like cancellation.
I use it if I have and need a progress update, so I can display a meaningful progress bar.
If not, I use a Thread (or borrow from the ThreadPool), because I don't need all the functionality of BackgroundWorker and am proficient enough with threads to start a Thread and wait for it to stop.
As for delegates for non-related tasks, I use those of the Thread classes, like plain void ThreadStart(), or I create my own.
A: I use it quite often for tasks such as progress indication and background data loading/processing.
Recently I found a use case that is not supported out of the box: an "overridable task". However, Patrick Smacchia came up with a nice solution.
A: I've used it once and was quite happy with it. Often, there is no need for "big" multithreading, but only for 2 Threads (UI and Worker), and it works really well without having to worry too much about the underlying Threading Logic.
A: @Gulzar, thank you for this piece of info: it needs to be created and run from the UI thread for thread switching to happen properly.
One thing I have found to watch out for when using a BackgroundWorker is exception handling.
If an exception is thrown in the async process, it will not propagate to the main thread; the process will finish, and the BackgroundWorker's RunWorkerCompleted event will fire with the error hidden in RunWorkerCompletedEventArgs.Error.
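A small C# sketch of checking for that hidden error:
using System;
using System.ComponentModel;

class WorkerDemo
{
    static void Main()
    {
        BackgroundWorker worker = new BackgroundWorker();

        worker.DoWork += delegate(object sender, DoWorkEventArgs e)
        {
            // This exception will NOT surface on the calling thread.
            throw new InvalidOperationException("Boom");
        };

        worker.RunWorkerCompleted += delegate(object sender, RunWorkerCompletedEventArgs e)
        {
            // It surfaces here instead, wrapped in the event args.
            if (e.Error != null)
                Console.WriteLine("Worker failed: " + e.Error.Message);
            else
                Console.WriteLine("Worker finished cleanly.");
        };

        worker.RunWorkerAsync();
        Console.ReadLine(); // keep the demo process alive until the worker reports back
    }
}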
A: My biggest issue with the background worker class is that there really is no way to know when the worker has finished due to cancellation. The BackgroundWorker does not expose the thread it uses so you can't use the standard techniques for synchronizing thread termination (join, etc.). You also can't just wait in a loop on the UI thread for it to end because the RunWorkerCompleted event will never end up firing. The hack I've always had to use is to simply set a flag and then start a timer that will continue checking for the background worker to end. But it's very messy and complicates the business logic.
So it is great as long as you don't need to support deterministic cancellation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to create a non-interactive window in MFC In my application I have a window which I popup with small messages on it (think similar to tooltip). This window uses the layered attributes to draw alpha backgrounds etc.
If I have several of these windows open at once, and I click one with my mouse, when they disappear they cause my application to lose focus (it switches focus to the app behind the current one).
How do I stop any interaction in my window?
A: After playing with the WM_NCACTIVATE message with no luck, I overrode the WM_SETFOCUS message:
void CMyWindow::OnSetFocus(CWnd* pOldWnd)
{
    if (pOldWnd != NULL)
    {
        pOldWnd->SetFocus();
    }
}
That seems to do the trick. No idea why it works though! Comments welcome on that issue.
A: It works because OnSetFocus (like many of the On* methods) gives you a chance to pre-empt an action before it actually occurs. The focus never actually switches to your non-interactive window.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Java -> Python? Besides the dynamic nature of Python (and the syntax), what are some of the major features of the Python language that Java doesn't have, and vice versa?
A: One key difference in Python is significant whitespace. This puts a lot of people off - me too for a long time - but once you get going it seems natural and makes much more sense than ;s everywhere.
From a personal perspective, Python has the following benefits over Java:
*
*No Checked Exceptions
*Optional Arguments
*Much less boilerplate and less verbose generally
Other than those, this page on the Python Wiki is a good place to look with lots of links to interesting articles.
A: *
*List comprehensions. I often find myself filtering/mapping lists, and being able to say [line.replace("spam","eggs") for line in open("somefile.txt") if line.startswith("nee")] is really nice.
*Functions are first class objects. They can be passed as parameters to other functions, defined inside other function, and have lexical scope. This makes it really easy to say things like people.sort(key=lambda p: p.age) and thus sort a bunch of people on their age without having to define a custom comparator class or something equally verbose.
*Everything is an object. Java has basic types which aren't objects, which is why many classes in the standard library define 9 different versions of functions (for boolean, byte, char, double, float, int, long, Object, short). Array.sort is a good example. Autoboxing helps, although it makes things awkward when something turns out to be null.
*Properties. Python lets you create classes with read-only fields, lazily-generated fields, as well as fields which are checked upon assignment to make sure they're never 0 or null or whatever you want to guard against, etc.
*Default and keyword arguments. In Java if you want a constructor that can take up to 5 optional arguments, you must define 6 different versions of that constructor. And there's no way at all to say Student(name="Eli", age=25)
*In Java, functions can only return one thing. In Python you have tuple assignment, so you can say spam, eggs = nee() but in Java you'd need to either resort to mutable out parameters or have a custom class with 2 fields and then have two additional lines of code to extract those fields.
*Built-in syntax for lists and dictionaries.
*Operator Overloading.
*Generally better designed libraries. For example, to parse an XML document in Java, you say
Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse("test.xml");
and in Python you say
doc = parse("test.xml")
Anyway, I could go on and on with further examples, but Python is just overall a much more flexible and expressive language. It's also dynamically typed, which I really like, but which comes with some disadvantages.
Java has much better performance than Python and has way better tool support. Sometimes those things matter a lot and Java is the better language than Python for a task; I continue to use Java for some new projects despite liking Python a lot more. But as a language I think Python is superior for most things I find myself needing to accomplish.
A: With Jython you can have both. It's only at Python 2.2, but still very useful if you need an embedded interpreter that has access to the Java runtime.
A: Apart from what Eli Courtwright said:
*
*I find iterators in Python more concise. You can use for i in something, and it works with pretty much everything. Yeah, Java has gotten better since 1.5, but for example you can iterate through a string in python with this same construct.
*Introspection: in Python you can get runtime information about an object or a module - its symbols, methods, or even its docstrings. You can also instantiate them dynamically. Java has some of this, but usually in Java it takes half a page of code to get an instance of a class, whereas in Python it is about 3 lines. And as far as I know the docstrings thing is not available in Java.
A: I think this pair of articles by Philip J. Eby does a great job discussing the differences between the two languages (mostly about philosophy/mentality rather than specific language features).
*
*Python is Not Java
*Java is Not Python, either
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: Open source or low cost "log shipping" program I have written a log shipping program a number of times. It is a simple program that is used to maintain a warm failover box for SQL Server.
It has two pieces. On the live DB server it:
*
*Does full and transaction backups and removes old files
On the backup server it:
*
*Copies the backups from the live box
*Restores the backups or transaction logs into databases that are set to recovery
*zips the backups
*deletes them based on retention
If there is a failure, the program can go through each database on the backup server and set them to active.
I am looking for an open source or low cost program that does this.
A: MS SQL server 2005 and 2008 already support this.
http://technet.microsoft.com/en-us/library/ms188698.aspx
http://technet.microsoft.com/en-us/library/ms188698(SQL.90).aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: I understand threading in theory but not in practice in .net I have a basic cs-major understanding of multi-threading but have never had to do anything beyond simple timers in an application. Does anyone know of a good resource that will give me a tour how to work with multi-threaded applications, explaining the basics and maybe posing some of the more difficult stuff?
A: There are 4 basic ways to synchronize threads in .Net:
*
*BackgroundWorker control
*WaitHandles
*Callback functions
*polling an ASyncResult object
Generally you want to start at the top of that list and work down. That means first look and see if a BackgroundWorker control is appropriate to the situation. However, it pretty much assumes Windows Forms and that you're only spawning one new thread.
So next try waithandles. Waithandles are good for coordinating several threads together. You can kick them all off and wait for them all to finish, or if you want to keep a certain number active you keep waiting for just one and spawning the next when it finishes. Or maybe you know one thread will finish much sooner, so you can wait for it to finish, do a little bit of work, and then wait for the rest to finish.
Waithandles might seem like a bit much if, say, you're only spawning one additional thread and you don't want to block until it's finished. Then you might use a callback, so that the function you designate will be called as soon as the thread completes.
Finally, if and only if for some reason none of the above will work you can fall back to polling.
I can think of 5 different ways to get a new thread in .Net, also roughly in order:
*
*OS created, normally as the result of a WinForms event (including the BackgroundWorker).
*Obj.Begin___()/End____(). Certain CLR classes already have these asynchronous methods defined for you, and obviously you want to use them when they're available.
*ThreadPool.QueueUserWorkItem(). Use this most of the time to create your own threads (a minimal sketch follows this list).
*Delegate.BeginInvoke()/EndInvoke(). You can wrap any method this way.
*Thread.Start(). You could do it this way, but I read something recently (don't have the link now) that if QueueUserWorkItem won't work the delegate method is probably better.
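As a rough sketch of the middle options, the following queues work on the thread pool and waits for it with a WaitHandle (the names are invented for illustration):
using System;
using System.Threading;

class Example
{
    static void Main()
    {
        // The event starts unsignaled; the worker signals it when done.
        ManualResetEvent done = new ManualResetEvent(false);

        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            Console.WriteLine("working on a pool thread...");
            done.Set();            // signal completion
        });

        done.WaitOne();            // block until the worker signals
        Console.WriteLine("worker finished");
    }
}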
A: This is a great free resource by Joseph Albahari. Threading in C#
A: A good web-resource to learn about multi-threading in .NET:
*
*HTML version.
*Printable version
*Further resources - (including examples)
A: Two great articles:
What Every Dev Must Know About Multithreaded Apps
Understand the Impact of Low-Lock Techniques in Multithreaded Apps
Although this article isn't exactly what you are looking for, it will hopefully be of assistance generally (i.e. it is related, and a very good read):
The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software
A: One of the best resources I know on the subject is the "threading in C#" book:
http://www.albahari.com/threading/
It has a great overview of everything a .NET developer needs to understand in order to program multi-threaded applications.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Simpler interface for SQL Server analysis services cubes for end users Is there a simpler interface for end users to run "queries" on pre-existing SqlServer Analysis Service cubes? I'm looking for a way to deploy the cubes and allow the users to work with the data through a simpler interface than BIDS. Is this even possible?
A: I would recommend Excel too. It is an environment that your users are familiar with anyway, and they will be able to perform additional analysis (totals etc) without learning any new interfaces.
However, I would advise against pivot tables as a method of getting the data into Excel. I once worked on a project using pivot tables, and it was a filthy nightmare. The more recent versions of Office have a slightly different tool called "Microsoft Office Excel Add-in for SQL Server Analysis Services" which can get OLAP data into Excel. I downloaded XLAddinSetup.msi for Excel 2002/3 or you can use this method for Excel 2007.
A: You can use Excel with pivot tables for that; no need to write any queries at all, and the users can drill down to all the data they need.
A: There's a couple of End User Reporting Tools around.
Our tool - RSinteract, is quite cheap and effective. It uses an AJAXy web interface so no need to install on the client and has drag and drop functionality similar to the other tools. It also has a 30 day evaluation.
A: There are many, many tools. An incomplete overview can be found here: http://www.ssas-info.com/analysis-services-client-tools-frontend
A: Dundas has a set of tools that let you drag and drop dimensions/hierarchies/measures to create visualizations like charts and/or grids. The product name is Dundas Chart for ASP.NET Enterprise Edition, and it has a free demo.
ProClarity also had a suite of tools. Not sure how you get those tools any longer, but I think they are part of MSDN now.
A: As stated by Jay, there are several client tools you can use to query the cubes that give the end user the ability to drag and drop dimensions for ad-hoc querying.
ProClarity has been acquired by Microsoft, and most of the functionality is being incorporated into PerformancePoint
Panorama Software (original developers of Analysis Services) also provide access with their NovaView products
A: Another option is Report Builder, that comes for free with SQL Server.
Though the SQL Server 2005 version is a bit cranky, the new release with SQL Server 2008 seems to work much better.
Although it isn't as flexible as Excel for ad-hoc queries, it comes in very handy for some scenarios.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Colored build output in Visual Studio I am using a Visual Studio project with custom build script/batch file (ala make, ant, etc.).
When the build is run from the command line we have placed colored highlighting on various output lines.
However, when built via Visual Studio (2005 in my case) the output window does not show the color anymore.
Is this possible? I am quite happy to put specific code into the build script if required.
A: The problem isn't with your build scripts, but with Visual Studio not supporting ANSI control codes to change the color.
A: If you don't want to go with the pro version of the VSCommands plug-in, there is a free one called VSColorOutput, which does just that. I've worked with it a bit, does what it says.
See http://coolthingoftheday.blogspot.com/2011/12/vscoloroutput-visual-studio-output.html or look it up in the extension gallery.
A: The VSCommands plug-in for Visual Studio 2010 adds colour formatting to the output window so errors are red. I'm not sure how they are doing it but it might give you a starting point.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: apply-templates in reverse order Say I have this given XML file:
<root>
<node>x</node>
<node>y</node>
<node>a</node>
</root>
And I want the following to be displayed:
ayx
Using something similar to:
<xsl:template match="/">
<xsl:apply-templates select="root/node"/>
</xsl:template>
<xsl:template match="node">
<xsl:value-of select="."/>
</xsl:template>
A: Easy!
<xsl:template match="/">
<xsl:apply-templates select="root/node">
<xsl:sort select="position()" data-type="number" order="descending"/>
</xsl:apply-templates>
</xsl:template>
<xsl:template match="node">
<xsl:value-of select="."/>
</xsl:template>
A: You can do this using xsl:sort. It is important to set data-type="number", because otherwise the positions will be sorted as strings, and therefore the 10th node would be considered before the 2nd one.
<xsl:template match="/">
<xsl:apply-templates select="root/node">
<xsl:sort
select="position()"
order="descending"
data-type="number"/>
</xsl:apply-templates>
</xsl:template>
<xsl:template match="node">
<xsl:value-of select="."/>
</xsl:template>
A: <xsl:template match="/">
<xsl:apply-templates select="root/node[3]"/>
<xsl:apply-templates select="root/node[2]"/>
<xsl:apply-templates select="root/node[1]"/>
</xsl:template>
<xsl:template match="node">
<xsl:value-of select="."/>
</xsl:template>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Mediawiki custom tag Stops page parsing I created a few mediawiki custom tags, using the guide found here
http://www.mediawiki.org/wiki/Manual:Tag_extensions
I will post my code below, but the problem is after it hits the first custom tag in the page, it calls it, and prints the response, but does not get anything that comes after it in the wikitext. It seems it just stops parsing the page.
Any Ideas?
if ( defined( 'MW_SUPPORTS_PARSERFIRSTCALLINIT' ) ) {
$wgHooks['ParserFirstCallInit'][] = 'tagregister';
} else { // Otherwise do things the old fashioned way
$wgExtensionFunctions[] = 'tagregister';
}
function tagregister(){
global $wgParser;
$wgParser->setHook('tag1','tag1func');
$wgParser->setHook('tag2','tag2func');
return true;
}
function tag1func($input,$params)
{
return "It called me";
}
function tag2func($input,$params)
{
return "It called me -- 2";
}
Update: @George Mauer -- I have seen that as well, but this does not stop the page from rendering, just the MediaWiki engine from parsing the rest of the wikitext. It's as if hitting the custom function signals MediaWiki that processing is done. I am in the process of diving into the rabbit hole but was hoping someone else has seen this behavior.
A: Never used Mediawiki but that sort of problem in my experience is indicative of a PHP error that occurred but was suppressed either with the @ operator or because PHP error output to screen is turned off.
I hate to resort to this debugging method but when absolutely and utterly frustrated in PHP I will just start putting echo statements every few lines (always with a marker so I remember to remove them later), to figure out exactly where the error is coming from. Eventually, you'll get to the bottom of the rabbit hole and figure out exactly what the problematic line of code is.
A: Silly me.
Had to close the tags.
Instead of <tag1>, I had to change it to <tag1 /> or <tag1></tag1>.
Now all works!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Which is the best way to bring a file from a remote host to local host over an SSH session? When connecting to remote hosts via ssh, I frequently want to bring a file on that system to the local system for viewing or processing. Is there a way to copy the file over without (a) opening a new terminal/pausing the ssh session (b) authenticating again to either the local or remote hosts which works (c) even when one or both of the hosts is behind a NAT router?
The goal is to take advantage of as much of the current state as possible: that there is a connection between the two machines, that I'm authenticated on both, that I'm in the working directory of the file---so I don't have to open another terminal and copy and paste the remote host and path in, which is what I do now. The best solution also wouldn't require any setup before the session began, but if the setup was a one-time or able to be automated, than that's perfectly acceptable.
A: zssh (a ZMODEM wrapper over openssh) does exactly what you want.
*
*Install zssh and use it instead of openssh (which I assume that you normally use)
*You'll have to have the lrzsz package installed on both systems.
Then, to transfer a file zyxel.png from remote to local host:
antti@local:~$ zssh remote
Press ^@ (C-Space) to enter file transfer mode, then ? for help
...
antti@remote:~$ sz zyxel.png
**B00000000000000
^@
zssh > rz
Receiving: zyxel.png
Bytes received: 104036/ 104036 BPS:16059729
Transfer complete
antti@remote:~$
Uploading goes similarly, except that you just switch rz(1) and sz(1).
PuTTY users can try Le Putty, which has similar functionality.
A: On a Linux box I use ssh-agent and sshfs. You need to set up sshd to accept connections with key pairs. Then you use ssh-add to add your key to the ssh-agent so you don't have to type your password every time. Be sure to use -t seconds, so the key doesn't stay loaded forever.
ssh-add -t 3600 /home/user/.ssh/ssh_dsa
After that,
sshfs hostname:/ /PathToMountTo/
will mount the server file system on your machine so you have access to it.
Personally, I wrote a small bash script that adds my key and mounts the servers I use the most, so when I start to work I just have to launch the script and type my passphrase.
A: Using some little known and rarely used features of the openssh
implementation you can accomplish precisely what you want!
*
*takes advantage of the current state
*can use the working directory where you are
*does not require any tunneling setup before the session begins
*does not require opening a separate terminal or connection
*can be used as a one-time deal in an interactive session or can be used as part of an automated session
You should only type what is at each of the local>, remote>, and
ssh> prompts in the examples below.
local> ssh username@remote
remote> ~C
ssh> -L6666:localhost:6666
remote> nc -l 6666 < /etc/passwd
remote> ~^Z
[suspend ssh]
[1]+ Stopped ssh username@remote
local> (sleep 1; nc localhost 6666 > /tmp/file) & fg
[2] 17357
ssh username@remote
remote> exit
[2]- Done ( sleep 1; nc localhost 6666 > /tmp/file )
local> cat /tmp/file
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
...
Or, more often you want to go the other direction, for example if you
want to do something like transfer your ~/.ssh/id_rsa.pub file from
your local machine to the ~/.ssh/authorized_keys file of the remote
machine.
local> ssh username@remote
remote> ~C
ssh> -R5555:localhost:5555
remote> ~^Z
[suspend ssh]
[1]+ Stopped ssh username@remote
local> nc -l 5555 < ~/.ssh/id_rsa.pub &
[2] 26607
local> fg
ssh username@remote
remote> nc localhost 5555 >> ~/.ssh/authorized_keys
remote> cat ~/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2ZQQQQBIwAAAQEAsgaVp8mnWVvpGKhfgwHTuOObyfYSe8iFvksH6BGWfMgy8poM2+5sTL6FHI7k0MXmfd7p4rzOL2R4q9yjG+Hl2PShjkjAVb32Ss5ZZ3BxHpk30+0HackAHVqPEJERvZvqC3W2s4aKU7ae4WaG1OqZHI1dGiJPJ1IgFF5bWbQl8CP9kZNAHg0NJZUCnJ73udZRYEWm5MEdTIz0+Q5tClzxvXtV4lZBo36Jo4vijKVEJ06MZu+e2WnCOqsfdayY7laiT0t/UsulLNJ1wT+Euejl+3Vft7N1/nWptJn3c4y83c4oHIrsLDTIiVvPjAj5JTkyH1EA2pIOxsKOjmg2Maz7Pw== username@local
A little bit of explanation is in order.
The first step is to open a LocalForward; if you don't already have
one established then you can use the ~C escape character to open an
ssh command line which will give you the following commands:
remote> ~C
ssh> help
Commands:
-L[bind_address:]port:host:hostport Request local forward
-R[bind_address:]port:host:hostport Request remote forward
-D[bind_address:]port Request dynamic forward
-KR[bind_address:]port Cancel remote forward
In this example I establish a LocalForward on port 6666 of localhost
for both the client and the server; the port number can be any
arbitrary open port.
The nc command is from the netcat package; it is described as the
"TCP/IP swiss army knife"; it is a simple, yet very flexible and
useful program. Make it a standard part of your unix toolbelt.
At this point nc is listening on port 6666 and waiting for another
program to connect to that port so it can send the contents of
/etc/passwd.
Next we make use of another escape character ~^Z which is tilde
followed by control-Z. This temporarily suspends the ssh process and
drops us back into our shell.
Once back on the local system you can use nc to connect to the
forwarded port 6666. Note the lack of a -l in this case because that
option tells nc to listen on a port as if it were a server which is
not what we want; instead we want to just use nc as a client to
connect to the already listening nc on the remote side.
The rest of the magic around the nc command is required because if
you recall above I said that the ssh process was temporarily
suspended, so the & will put the whole (sleep + nc) expression
into the background and the sleep gives you enough time for ssh to
return to the foreground with fg.
In the second example the idea is basically the same except we set up
a tunnel going the other direction using -R instead of -L so that
we establish a RemoteForward. And then on the local side is where
you want to use the -l argument to nc.
The escape character by default is ~ but you can change that with:
-e escape_char
Sets the escape character for sessions with a pty (default: ‘~’). The escape character is only recognized at the beginning of a line. The escape character followed by a dot
(‘.’) closes the connection; followed by control-Z suspends the connection; and followed by itself sends the escape character once. Setting the character to “none” disables any
escapes and makes the session fully transparent.
A full explanation of the commands available with the escape characters is available in the ssh manpage
ESCAPE CHARACTERS
When a pseudo-terminal has been requested, ssh supports a number of functions through the use of an escape character.
A single tilde character can be sent as ~~ or by following the tilde by a character other than those described below. The escape character must always follow a newline to be interpreted
as special. The escape character can be changed in configuration files using the EscapeChar configuration directive or on the command line by the -e option.
The supported escapes (assuming the default ‘~’) are:
~. Disconnect.
~^Z Background ssh.
~# List forwarded connections.
~& Background ssh at logout when waiting for forwarded connection / X11 sessions to terminate.
~? Display a list of escape characters.
~B Send a BREAK to the remote system (only useful for SSH protocol version 2 and if the peer supports it).
~C Open command line. Currently this allows the addition of port forwardings using the -L, -R and -D options (see above). It also allows the cancellation of existing remote port-
forwardings using -KR[bind_address:]port. !command allows the user to execute a local command if the PermitLocalCommand option is enabled in ssh_config(5). Basic help is avail‐
able, using the -h option.
~R Request rekeying of the connection (only useful for SSH protocol version 2 and if the peer supports it).
A: Using ControlMaster (the -M switch) is the best solution, way simpler and easier than the rest of the answers here. It allows you to share a single connection among multiple sessions. Sounds like it does what the poster wants. You still have to type the scp or sftp command line though. Try it. I use it for all of my sshing.
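A minimal ~/.ssh/config entry to enable connection sharing might look like this (the socket path is just a common convention, not a requirement):
Host *
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p
With that in place, the first ssh to a host creates the master connection, and a later scp or sftp from another terminal reuses it without re-authenticating.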
A: In order to do this I have my home router set up to forward port 22 back to my home machine (which is firewalled to only accept ssh connections from my work machine) and I also have an account set up with DynDNS to provide Dynamic DNS that will resolve to my home IP automatically.
Then when I ssh into my work computer, the first thing I do is run a script that starts an ssh-agent (if your server doesn't do that automatically). The script I run is:
#!/bin/bash
ssh-agent sh -c 'ssh-add < /dev/null && bash'
It asks for my ssh key passphrase so that I don't have to type it in every time. You don't need that step if you use an ssh key without a passphrase.
For the rest of the session, sending files back to your home machine is as simple as
scp file_to_send.txt your.domain.name:~/
A: Here is a hack called ssh-xfer which addresses the exact problem, but requires patching OpenSSH, which is a nonstarter as far as I'm concerned.
A: Here is my preferred solution to this problem. Set up a reverse ssh tunnel upon creating the ssh session. This is made easy by two bash function: grabfrom() needs to be defined on the local host, while grab() should be defined on the remote host. You can add any other ssh variables you use (e.g. -X or -Y) as you see fit.
function grabfrom() { ssh -R 2202:127.0.0.1:22 ${@}; };
function grab() { scp -P 2202 $@ [email protected]:~; };
Usage:
localhost% grabfrom remoteuser@remotehost
password: <remote password goes here>
remotehost% grab somefile1 somefile2 *.txt
password: <local password goes here>
Positives:
*
*It works without special software on either host beyond OpenSSH
*It works when local host is behind a NAT router
*It can be implemented as a pair of two one-line bash function
Negatives:
*
*It uses a fixed port number so:
*
*won't work with multiple connections to remote host
*might conflict with a process using that port on the remote host
*It requires localhost accept ssh connections
*It requires a special command when initiating the session
*It doesn't implicitly handle authentication to the localhost
*It doesn't allow one to specify the destination directory on localhost
*If you grab from multiple localhosts to the same remote host, ssh won't like the keys changing
Future work:
This is still pretty kludgy. Obviously, it would be possible to handle the authentication issue by setting up ssh keys appropriately and it's even easier to allow the specification of a remote directory by adding a parameter to grab()
More difficult is addressing the other negatives. It would be nice to pick a dynamic port but as far as I can tell there is no elegant way to pass that port to the shell on the remote host; As best as I can tell, OpenSSH doesn't allow you to set arbitrary environment variables on the remote host and bash can't take environment variables from a command line argument. Even if you could pick a dynamic port, there is no way to ensure it isn't used on the remote host without connecting first.
A: You can use the SCP protocol to transfer a file. You can refer to this link:
http://tekheez.biz/scp-protocol-in-unix/
A: Another way is to expose your files over HTTP and download them from another server; you can achieve this using the ZSSH Python library,
ZSSH - ZIP over SSH (Simple Python script to exchange files between servers).
Install it using PIP.
python3 -m pip install zssh
Run this command from your remote server.
python3 -m zssh -as --path /desktop/path_to_expose
It will give you an URL to execute from another server.
In the local system or another server where you need to download those files and extract.
python3 -m zssh -ad --path /desktop/path_to_download --zip http://example.com/temp_file.zip
For more about this library: https://pypi.org/project/zssh/
A: You should be able to set up public & private keys so that no auth is needed.
Which way you do it depends on security requirements, etc (be aware that there are linux/unix ssh worms which will look at keys to find other hosts they can attack).
I do this all the time from behind both linksys and dlink routers. I think you may need to change a couple of settings but it's not a big deal.
A: Use the -M switch.
"Places the ssh client into 'master' mode for connection shar-ing. Multiple -M options places ssh into ``master'' mode with confirmation required before slave connections are accepted. Refer to the description of ControlMaster in ssh_config(5) for details."
I don't quite see how that answers the OP's question - can you expand on this a bit, David?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Is there a way to generalize an Apache ANT target? We have an Apache ANT script to build our application, then check in the resulting JAR file into version control (VSS in this case). However, now we have a change that requires us to build 2 JAR files for this project, then check both into VSS.
The current target that checks the original JAR file into VSS discovers the name of the JAR file through some property. Is there an easy way to "generalize" this target so that I can reuse it to check in a JAR file with any name? In a normal language this would obviously call for a function parameter but, to my knowledge, there really isn't an equivalent concept in ANT.
A: Take a look at Ant macros. They allow you to define reusable "routines" for Ant builds. You can find an example here (item 15).
A: I would suggest to work with macros over subant/antcall because the main advantage I found with macros is that you're in complete control over the properties that are passed to the macro (especially if you want to add new properties).
You simply refactor your Ant script starting with your target:
<target name="vss.check">
<vssadd localpath="D:\build\build.00012.zip"
comment="Added by automatic build"/>
</target>
creating a macro (notice the copy/paste and replacement with the @{file}):
<macrodef name="private-vssadd">
<attribute name="file"/>
<sequential>
<vssadd localpath="@{file}"
comment="Added by automatic build"/>
</sequential>
</macrodef>
and invoke the macros with your files:
<target name="vss.check">
<private-vssadd file="D:\build\File1.zip"/>
<private-vssadd file="D:\build\File2.zip"/>
</target>
Refactoring, "the Ant way"
A: Also check out the subant task, which lets you call the same target on multiple build files:
<project name="subant" default="subant1">
<property name="build.dir" value="subant.build"/>
<target name="subant1">
<subant target="">
<property name="build.dir" value="subant1.build"/>
<property name="not.overloaded" value="not.overloaded"/>
<fileset dir="." includes="*/build.xml"/>
</subant>
</target>
</project>
A: It is generally considered a bad idea to version control your binaries and I do not recommend doing so. But if you absolutely have to, you can use antcall combined with param to pass parameters and call a target.
<antcall target="reusable">
<param name="some.variable" value="var1"/>
</antcall>
<target name="reusable">
<!-- Do something with ${some.variable} -->
</target>
You can find more information about the antcall task here.
A: You can use Gant to script your build with Groovy to do what you want, or have a look at the Groovy Ant task.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: How to create custom pages in dasBlog? I know I've seen this in the past, but I can't seem to find it now.
Basically I want to create a page that I can host on a dasBlog instance that contains the layout from my theme, but the content of the page I control.
Ideally the content is a user control or ASPX that I write. Anybody know how I can accomplish this?
A: The easiest way to do this is to "hijack" the FormatPage functionality.
First add the following to your web.config in the newtelligence.DasBlog.UrlMapper section:
<add matchExpression="(?<basedir>.*?)/Static\.aspx\?=(?<value>.+)" mapTo="{basedir}/FormatPage.aspx?path=content/static/{value}.format.html" />
Now you can create a directory in your content directory called static. From there, you can create html files and the file name will map to the url like this:
http://BASEURL/Static.aspx?=FILENAME
will map to a file called:
/content/static/FILENAME.format.html
You can place anything in that file that you would normally place in itemTemplate.blogtemplate, except it obviously won't have any post data. But you can essentially use this to put other macros, and still have it use the hometemplate.blogtemplate to keep the rest of your theme wrapped around the page.
A: I did something similar setting up a handler to stream video files from the blog on my home server. I ended up ditching it because it killed my bandwidth whenever someone would view a video, but I did have it up and working for a while.
To get it to work I had to check dasBlog out from source control and open it in visual studio. I had VS2008 and it was built using VS2005, so it took some work to get everything to build. Once I could get the unaltered solution to build I added a new class library project to hold my code. This is to make sure my code stays separate across dasBlog updates.
I don't have access to the code here at work so I can't tell you exact names right now, but if you want your pages to be able to use the themes then they need to inherit from a class in the newtelligence.dasBlog.Web namespace, and I believe also implement an interface. A good place to look is in FormatPage and FormatControl.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Best .NET memory and performance profiler? We are using JetBrains' dotTrace. What other profiling tools can be recommended that are better for profiling C# Windows Forms applications?
A: We've got on really well with AQTime. The great thing from our point of view is that it does the unmanaged parts of our code too.
A: It hasn't been mentioned yet, but for memory analysis Windbg is about as thorough and low-level as you can get. Using it in combination with sos.dll is incredibly powerful, but there is a fairly steep learning curve.
It's a free tool though, and Tess Ferrandez' blog is a great place to start with it. ANTS and other profilers are much more user-friendly, but Windbg can slice and dice the managed heap like none other in my opinion.
A: Ants Profiler just released version 4.
We use it, and are quite happy with it. There's a 14 day trial to evaluate (as is true for most offerings).
A: No. I have tried pretty much every .NET profiler on the market (ANTS, vTune, OptimizeIt, DevPartner, YourKit), and in my opinion dotTrace is the best of the lot. It is one of only two profilers I have used (the other being YourKit) that has low enough overhead to handle a highly CPU-intensive application.
If and only if your application is relatively light, I could recommend ANTS Profiler. Its line-by-line stats are sometimes quite useful, but they come at a price in profiling efficiency.
A: We use DotTrace like you, but in the past we used Ants Profiler by RedGate. It is a nice tool also.
A: I am very happy with RedGate ANTS. The only other one I tried was the one that comes with Visual Studio Team, and it sucks.
A: I have used the EQATEC Profiler. It is free and is a code profiler, not a memory profiler.
A: For memory profiling you have both the free CLR profiler and the commercial .NET memory profiler. Both are excellent but the latter is a bit more polished.
A: You should check out SpeedTrace. We are pleased with the software, and it has helped us a lot in resolving the root causes of our problems.
A: nProf is a good tool if you're looking for something free. It's kind of finicky at points, and a little buggy, but if you're on a tight budget, it'll do the job.
A: I've been using the free SlimTune since its recent release. Although it has a minimal interface, it is super easy to use and provides good diagnostics which have already helped me a lot. It currently supports two kinds of displays, one of which is similar to nProf. It is from the same developer as SlimDX, so I expect the tool to become even better in the short term.
EDIT: As far as I know, it does not support memory profiling yet.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "56"
} |
Q: Regex to Match first 28 days of the month I am looking for a Regular expression to match only if a date is in the first 28 days of the month. This is for my validator control in ASP.NET
A: I don't think this is a task very well-suited for a regexp.
I'd try and use the library functions (DateTime.Parse for .NET) to parse the date and then check the day component of it. Everything else is duplicating half the library function anyways.
A: Don't do this with Regex. Dates are formatted differently in different countries. Use the DateTime.TryParse routine instead:
DateTime parsedDate;
if ( DateTime.TryParse( dateString, out parsedDate) && parsedDate.Day <= 28 )
{
// logic goes here.
}
Regex is nearly the golden hammer of input validation, but in this instance, it's the wrong choice.
A: Why not just convert it to a date data type and check the day? Using a regular expression, while it could be done, just makes it overly complicated.
A: ([1-9]|1\d|2[0-8]) // matches 1 to 28 but woudn't allow leading zeros for single digits
(0?[1-9]|1\d|2[0-8]) // matches 1 to 28 and would allow 01, 02,... 09
(where \d matches any digit, use [0-9] if your regex engine doesn't support it.)
See also the question What is the regex pattern for datetime (2008-09-01 12:35:45 ) ?
A: I would use one of the DateTime.TryParse techniques in conjunction with a CustomValidator
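A minimal sketch of that combination might look like this (the control IDs and handler name are assumptions):
<asp:CustomValidator ID="DayValidator" runat="server"
    ControlToValidate="DateTextBox"
    ErrorMessage="The date must fall in the first 28 days of the month."
    OnServerValidate="DayValidator_ServerValidate" />
And in the code-behind:
protected void DayValidator_ServerValidate(object source, ServerValidateEventArgs args)
{
    DateTime parsedDate;
    args.IsValid = DateTime.TryParse(args.Value, out parsedDate)
                   && parsedDate.Day <= 28;
}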
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What is the difference between UNION and UNION ALL? What is the difference between UNION and UNION ALL?
A: (From Microsoft SQL Server Book Online)
UNION [ALL]
Specifies that multiple result sets are to be combined and returned as a single result set.
ALL
Incorporates all rows into the results. This includes duplicates. If not specified, duplicate rows are removed.
UNION can take longer, as a duplicate-row search (like DISTINCT) is applied to the results.
SELECT * FROM Table1
UNION
SELECT * FROM Table2
is equivalent of:
SELECT DISTINCT * FROM (
SELECT * FROM Table1
UNION ALL
SELECT * FROM Table2) DT
A side effect of applying DISTINCT over results is a sorting operation on results.
UNION ALL results come back in arbitrary order, but UNION results come back as if ORDER BY 1, 2, 3, ..., n (n = column count of the tables) had been applied. You can see this side effect when you don't have any duplicate rows.
A: I will add an example:
UNION, it merges with DISTINCT --> slower, because it needs to compare rows (in Oracle SQL Developer, choose the query and press F10 to see the cost analysis).
UNION ALL, it merges without DISTINCT --> faster.
SELECT to_date(sysdate, 'yyyy-mm-dd') FROM dual
UNION
SELECT to_date(sysdate, 'yyyy-mm-dd') FROM dual;
and
SELECT to_date(sysdate, 'yyyy-mm-dd') FROM dual
UNION ALL
SELECT to_date(sysdate, 'yyyy-mm-dd') FROM dual;
A: UNION removes duplicates, whereas UNION ALL does not.
In order to remove duplicates the result set must be sorted, and this may have an impact on the performance of the UNION, depending on the volume of data being sorted and the settings of various RDBMS parameters (for Oracle, PGA_AGGREGATE_TARGET with WORKAREA_SIZE_POLICY=AUTO, or SORT_AREA_SIZE and SORT_AREA_RETAINED_SIZE if WORKAREA_SIZE_POLICY=MANUAL).
Basically, the sort is faster if it can be carried out in memory, but the same caveat about the volume of data applies.
Of course, if you need data returned without duplicates then you must use UNION, depending on the source of your data.
I would have commented on the first post to qualify the "is much less performant" comment, but have insufficient reputation (points) to do so.
A: UNION merges the contents of two structurally-compatible tables into a single combined table.
*
*Difference:
The difference between UNION and UNION ALL is that UNION will omit duplicate records whereas UNION ALL will include duplicate records.
Union Result set is sorted in ascending order whereas UNION ALL Result set is not sorted
UNION performs a DISTINCT on its result set, so it will eliminate any duplicate rows, whereas UNION ALL won't remove duplicates and is therefore faster than UNION.
Note: The performance of UNION ALL will typically be better than UNION, since UNION requires the server to do the additional work of removing any duplicates. So, in cases where it is certain that there will not be any duplicates, or where having duplicates is not a problem, use of UNION ALL would be recommended for performance reasons.
A: Suppose that you have two table Teacher & Student
Both have 4 Column with different Name like this
Teacher - ID(int), Name(varchar(50)), Address(varchar(50)), PositionID(varchar(50))
Student- ID(int), Name(varchar(50)), Email(varchar(50)), PositionID(int)
You can apply UNION or UNION ALL for those two table which have same number of columns. But they have different name or data type.
When you apply the UNION operation on 2 tables, it drops all duplicate entries (rows where every column value matches a row in the other table). Like this
SELECT * FROM Student
UNION
SELECT * FROM Teacher
the result will be
When you apply the UNION ALL operation on 2 tables, it returns all entries including duplicates (any row that differs in at least one column value is kept). Like this
SELECT * FROM Student
UNION ALL
SELECT * FROM Teacher
Output
Performance:
Obviously UNION ALL's performance is better than UNION's, as UNION does the additional task of removing duplicate values. You can check that in the estimated execution plan by pressing Ctrl+L in MSSQL.
A: In ORACLE: UNION does not support BLOB (or CLOB) column types, UNION ALL does.
A: Both UNION and UNION ALL concatenate the result of two different SQLs. They differ in the way they handle duplicates.
*
*UNION performs a DISTINCT on the result set, eliminating any duplicate rows.
*UNION ALL does not remove duplicates, and it is therefore faster than UNION.
Note: While using these commands, all selected columns need to be of the same data type.
Example: If we have two tables, 1) Employee and 2) Customer
*
*Employee table data:
*Customer table data:
*UNION Example (It removes all duplicate records):
*UNION ALL Example (It just concatenate records, not eliminate duplicates, so it is faster than UNION):
A:
The basic difference between UNION and UNION ALL is that the union operation eliminates duplicated rows from the result set, while union all returns all rows after joining.
from http://zengin.wordpress.com/2007/07/31/union-vs-union-all/
A: UNION
The UNION command is used to select related information from two tables, much like the JOIN command. However, when using the UNION command all selected columns need to be of the same data type. With UNION, only distinct values are selected.
UNION ALL
The UNION ALL command is equal to the UNION command, except that UNION ALL selects all values.
The difference between Union and Union all is that Union all will not eliminate duplicate rows, instead it just pulls all rows from all tables fitting your query specifics and combines them into a table.
A UNION statement effectively does a SELECT DISTINCT on the results set. If you know that all the records returned are unique from your union, use UNION ALL instead, it gives faster results.
A: UNION removes duplicate records; on the other hand, UNION ALL does not. But one needs to check the bulk of data that is going to be processed, and the columns and data types must be the same.
Since UNION internally uses DISTINCT-like behavior to select the rows, it is more costly in terms of time and performance.
like
select project_id from t_project
union
select project_id from t_project_contact
this gives me 2020 records
on other hand
select project_id from t_project
union all
select project_id from t_project_contact
gives me more than 17402 rows
on precedence perspective both has same precedence.
A: One more thing I would like to add -
Union: the result set is sorted in ascending order.
Union All: the result set is not sorted; the two query outputs just get appended.
A: If there is no ORDER BY, a UNION ALL may bring rows back as it goes, whereas a UNION would make you wait until the very end of the query before giving you the whole result set at once. This can make a difference in a time-out situation - a UNION ALL keeps the connection alive, as it were.
So if you have a time-out issue, and there's no sorting, and duplicates aren't an issue, UNION ALL may be rather helpful.
A: Important! Difference between Oracle and Mysql: Let's say that t1 t2 don't have duplicate rows between them but they have duplicate rows individual. Example: t1 has sales from 2017 and t2 from 2018
SELECT T1.YEAR, T1.PRODUCT FROM T1
UNION ALL
SELECT T2.YEAR, T2.PRODUCT FROM T2
In ORACLE UNION ALL fetches all rows from both tables. The same will occur in MySQL.
However:
SELECT T1.YEAR, T1.PRODUCT FROM T1
UNION
SELECT T2.YEAR, T2.PRODUCT FROM T2
In ORACLE, UNION fetches all rows from both tables because there are no duplicate values between t1 and t2. On the other hand in MySQL the resultset will have fewer rows because there will be duplicate rows within table t1 and also within table t2!
A: UNION removes duplicate records (where all columns in the results are the same), UNION ALL does not.
There is a performance hit when using UNION instead of UNION ALL, since the database server must do additional work to remove the duplicate rows, but usually you do not want the duplicates (especially when developing reports).
To identify duplicates, records must be comparable types as well as compatible types. This will depend on the SQL system. For example the system may truncate all long text fields to make short text fields for comparison (MS Jet), or may refuse to compare binary fields (ORACLE)
UNION Example:
SELECT 'foo' AS bar UNION SELECT 'foo' AS bar
Result:
+-----+
| bar |
+-----+
| foo |
+-----+
1 row in set (0.00 sec)
UNION ALL example:
SELECT 'foo' AS bar UNION ALL SELECT 'foo' AS bar
Result:
+-----+
| bar |
+-----+
| foo |
| foo |
+-----+
2 rows in set (0.00 sec)
A: You can avoid duplicates and still run much faster than UNION DISTINCT (which is actually same as UNION) by running query like this:
SELECT * FROM mytable WHERE a=X UNION ALL SELECT * FROM mytable WHERE b=Y AND a!=X
Notice the AND a!=X part. This is much faster then UNION.
A: Just to add my two cents to the discussion here: one could understand the UNION operator as a pure, SET-oriented UNION - e.g. set A={2,4,6,8}, set B={1,2,3,4}, A UNION B = {1,2,3,4,6,8}
When dealing with sets, you would not want numbers 2 and 4 appearing twice, as an element either is or is not in a set.
In the world of SQL, though, you might want to see all the elements from the two sets together in one "bag" {2,4,6,8,1,2,3,4}. And for this purpose T-SQL offers the operator UNION ALL.
A: UNION - results in distinct records while
UNION ALL - results in all the records including duplicates.
Both are blocking operators, and hence I personally prefer using JOINs over blocking operators (UNION, INTERSECT, UNION ALL, etc.) anytime.
To illustrate why Union operation performs poorly in comparison to Union All checkout the following example.
CREATE TABLE #T1 (data VARCHAR(10))
INSERT INTO #T1
SELECT 'abc'
UNION ALL
SELECT 'bcd'
UNION ALL
SELECT 'cde'
UNION ALL
SELECT 'def'
UNION ALL
SELECT 'efg'
CREATE TABLE #T2 (data VARCHAR(10))
INSERT INTO #T2
SELECT 'abc'
UNION ALL
SELECT 'cde'
UNION ALL
SELECT 'efg'
Following are results of UNION ALL and UNION operations.
A UNION statement effectively does a SELECT DISTINCT on the results set. If you know that all the records returned are unique from your union, use UNION ALL instead, it gives faster results.
Using UNION results in Distinct Sort operations in the Execution Plan. Proof to prove this statement is shown below:
A:
Not sure that it matters which database
UNION and UNION ALL should work on all SQL servers.
You should avoid unnecessary UNIONs; they are a huge performance drain. As a rule of thumb, use UNION ALL if you are not sure which to use.
A: UNION ALL also works on more data types as well. For example when trying to union spatial data types. For example:
select a.SHAPE from tableA a
union
select b.SHAPE from tableB b
will throw
The data type geometry cannot be used as an operand to the UNION, INTERSECT or EXCEPT operators because it is not comparable.
However union all will not.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1659"
} |
Q: Open source alternative to MATLAB's fmincon function? Is there an open-source alternative to MATLAB's fmincon function for constrained linear optimization? I'm rewriting a MATLAB program to use Python / NumPy / SciPy and this is the only function I haven't found an equivalent to. A NumPy-based solution would be ideal, but any language will do.
A: GNU Octave is another MATLAB clone that might have what you need.
A: Is your problem convex? Linear? Non-linear? I agree that SciPy.optimize will probably do the job, but fmincon is a sort of bazooka for solving optimization problems, and you'll be better off if you can confine it to one of the categories below (in increasing level of difficulty to solve efficiently)
*
*Linear Program (LP)
*Quadratic Program (QP)
*Convex Quadratically-Constrained Quadratic Program (QCQP)
*Second Order Cone Program (SOCP)
*Semidefinite Program (SDP)
*Non-Linear Convex Problem
*Non-Convex Problem
There are also combinatoric problems such as Mixed-Integer Linear Programs (MILP), but you didn't mention any sort of integrality constraints, suffice to say that they fall into a different class of problems.
The CVXOpt package will be of great use to you if your problem is convex.
If your problem is not convex, you need to choose between finding a local solution or the global solution. Many convex solvers 'sort of' work in a non-convex domain. Finding a good approximation to the global solution would require some form of simulated annealing or a genetic algorithm. Finding the global solution will require an enumeration of all local solutions or a combinatorial strategy such as branch and bound.
A: For numerical optimization in Python you may take a look at OpenOpt solvers:
http://openopt.org/NLP
http://openopt.org/Problems
A: Python optimization software:
*
*OpenOpt http://openopt.org (this one is numpy-based as you wish, with automatic differentiation by FuncDesigner)
*Pyomo https://software.sandia.gov/trac/coopr/wiki/Package/pyomo
*CVXOPT http://abel.ee.ucla.edu/cvxopt/
*NLPy http://nlpy.sourceforge.net/
A: The open source Python package,SciPy, has quite a large set of optimization routines including some for multivariable problems with constraints (which is what fmincon does I believe). Once you have SciPy installed type the following at the Python command prompt
help(scipy.optimize)
The resulting document is extensive and includes the following which I believe might be of use to you.
Constrained Optimizers (multivariate)
fmin_l_bfgs_b -- Zhu, Byrd, and Nocedal's L-BFGS-B constrained optimizer
(if you use this please quote their papers -- see help)
fmin_tnc -- Truncated Newton Code originally written by Stephen Nash and
adapted to C by Jean-Sebastien Roy.
fmin_cobyla -- Constrained Optimization BY Linear Approximation
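For instance, a minimal constrained run with fmin_cobyla might look like this (the objective and constraint are invented; constraint functions must be non-negative at feasible points):
from scipy.optimize import fmin_cobyla

def objective(x):
    return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

def constraint(x):
    return x[0] + x[1] - 1.0   # feasible when x[0] + x[1] >= 1

x_opt = fmin_cobyla(objective, [0.0, 0.0], [constraint])
print x_opt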
A: There is a program called Scilab that is a MATLAB clone.
I haven't used it at all, but it is open source and might have the function you are looking for.
A: I don't know if it's in there, but there's a Python distribution called Enthought that might have what you're looking for. It was designed specifically for data analysis and has over 60 additional libraries.
A: Have a look at http://www.aemdesign.com/downloadfsqp.htm.
There you will find C code which provides the same functionality as fmincon. (However, using a different algorithm. You can read the manual if you are interested in the details.)
It's open source but not under GPL.
A: Octave, in its latest version, implements an equivalent to the MATLAB fmincon function in the optimization package.
https://octave.sourceforge.io/optim/function/fmincon.html
A: Scilab has an implementation of fmincon (using IPOpt) which is now regularly updated:
https://atoms.scilab.org/toolboxes/fmincon
For large-scale optimization it outperforms Matlab's fmincon.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: How come a 32 bit kernel can run a 64 bit binary? On my OS X box, the kernel is a 32-bit binary and yet it can run a 64-bit binary.
How does this work?
cristi:~ diciu$ file ./a.out
./a.out: Mach-O 64-bit executable x86_64
cristi:~ diciu$ file /mach_kernel
/mach_kernel: Mach-O universal binary with 2 architectures
/mach_kernel (for architecture i386): Mach-O executable i386
/mach_kernel (for architecture ppc): Mach-O executable ppc
cristi:~ diciu$ ./a.out
cristi:~ diciu$ echo $?
1
A: It's not the kernel that runs the binary. It's the processor.
The binary does call library functions and those need to be 64bit. And if they need to make a system call, it's their responsibility to cope with the fact that they themselves are 64bit, but the kernel is only 32.
But that's not something you would have to worry about.
A: Note that not all 32-bit kernels are capable of running 64-bit processes. Windows certainly doesn't have this property and I've never seen it done on Linux.
A: The CPU can be switched from 64 bit execution mode to 32 bit when it traps into kernel context, and a 32 bit kernel can still be constructed to understand the structures passed in from 64 bit user-space apps.
The Mac OS X kernel does not directly dereference pointers from the user app anyway, as it resides in its own separate address space. A user-space pointer in an ioctl call, for example, must first be resolved to its physical address and then a new virtual address created in the kernel address space. It doesn't really matter whether that pointer in the ioctl was 64 bits or 32 bits; the kernel does not dereference it directly in either case.
So mixing a 32 bit kernel and 64 bit binaries can work, and vice-versa. The thing you cannot do is mix 32 bit libraries with a 64 bit application, as pointers passed between them would be truncated. MacOS X supplies more of its frameworks in both 32 and 64 bit versions in each release.
A: The 32 bit kernel that is capable of loading and running 64 bit binaries has to have some 64 bit code to handle memory mapping, program loading and a few other 64 bit issues.
However, the scheduler and many other OS operations aren't required to work in the 64 bit mode in order to deal with other issues - it switches the processor to 32 bit mode and back as needed to handle drivers, tasks, memory allocation and mapping, interrupts, etc.
In fact, most of the things that the OS does wouldn't necessarily perform any faster running at 64 bits - the OS is not a heavy data processor, and those portions that are (streams, disk I/O, etc) are likely converted to 64 bit (plugins to the OS anyway).
But the bare kernel itself probably won't task switch any faster, etc, if it were 64 bit.
This is especially the case when most people are still running 32 bit apps, so the mode switching isn't always needed, even though that's a low overhead operation, it does take some time.
-Adam
A: An ELF32 file can contain 64-bit instructions and run in 64-bit mode. The only catch is that the organization of its headers and symbols is in 32-bit format: symbol table offsets are 32 bits, symbol table entries are 32 bits wide, and so on. A file which contains both 64-bit and 32-bit code can expose itself as a 32-bit ELF while using 64-bit registers for its internal calculations. mach_kernel is one such executable. The advantage it gains is that 32-bit driver ELFs can be linked against it. As long as it takes care to pass pointers located below 4 GB to the other linked ELF binaries, it will work fine.
A: For the kernel to be 64-bit would only bring the effective advantage that kernel extensions (i.e., typically drivers) could be 64-bit. In fact, you'd need to have either all 64-bit kernel extensions, or (as is the case now) all 32-bit ones; they need to be native to the architecture of the running kernel.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: What is Dynamic Code Analysis? What is Dynamic Code Analysis?
How is it different from Static Code Analysis (ie, what can it catch that can't be caught in static)?
I've heard of bounds checking and memory analysis - what are these?
What other things are checked using dynamic analysis?
-Adam
A: Basically you instrument your code to analyze your software as it is running (dynamic) rather than just analyzing the software without running (static). Also see this JavaOne presentation comparing the two. Valgrind is one example dynamic analysis tool for C. You could also use code coverage tools like Cobertura or EMMA for Java analysis.
From Wikipedia's definition of dynamic program analysis:
Dynamic program analysis is the
analysis of computer software that is
performed with executing programs
built from that software on a real or
virtual processor (analysis performed
without executing programs is known as
static code analysis). Dynamic program
analysis tools may require loading of
special libraries or even
recompilation of program code.
A: Simply put, static analysis collects information based on source code, while dynamic analysis is based on the system's execution, often using instrumentation.
Advantages of dynamic analysis
*
*Is able to detect dependencies that are not possible to detect in static analysis. Ex.: dynamic dependencies using reflection, dependency injection, polymorphism.
*Can collect temporal information.
*Deals with real input data. During static analysis it is difficult or impossible to know what files will be passed as input, what web requests will arrive, what the user will click, etc.
Disadvantages of dynamic analysis
*
*May negatively impact the performance of the application.
*Cannot guarantee full coverage of the source code, as its runs are based on user interaction or automated tests.
Resources
There are many dynamic analysis tools on the market, debuggers being the most notorious. On the other hand, it's still an academic research field: many researchers are studying how to use dynamic analysis for better understanding of software systems, and there's an annual workshop dedicated to dependency analysis.
A: You asked for a good explanation of "bounds checking and memory analysis" issues.
Our Memory Safety Check tool instruments your application to watch at runtime for memory access errors (buffer overruns, array subscript errors, bad pointers, alloc/free errors). The link contains
a detailed explanation complete with examples. This SO answer shows two programs that have pointers into a dead stack frame, and how CheckPointer detects and reports the point of error in the source code.
A briefer example: C (and C++) infamously do not check accesses to arrays, to see if the access is inside the bounds of the array. The benefit: well-designed program don't pay the cost of such a check in production mode. The downside: buggy programs can touch things outside the array, and this can cause behavior which is very hard to understand; thus the buggy program is difficult to debug.
What a dynamic instrumentation tool like the Memory Safety Checker does is associate some metadata with every pointer (e.g., the type of the thing to which the pointer "points", and if it is an array, the array bounds), and then check at runtime any accesses via pointers to arrays, to see whether the array bound is violated. The tool modifies the original program to collect the metadata where it is generated (e.g., on entry to scopes in which arrays are declared, or as the result of a malloc operation, etc.) and modifies the program at every array reference (written both as x[y], where either x or y is an array pointer and the other value is of some integral type, and similarly for *(x+y)!) to check the access. Now if the program runs and performs an out-of-bounds access, the check catches the error and it is reported at the first place where it could be detected. (If you think about it, you'll realize the instrumentation for metadata collection and checking has to be pretty clever to handle all the variant cases a language like C may have. It's actually hard to make this work completely.)
The good news is that such accesses are now reported early, where it is easier to detect the problem and fix the program. Such a tool isn't intended for production use; one uses it during development and testing to help verify the absence of errors. If no errors are discovered, then one does a normal compile and runs the program without the checks.
This is an extremely good example of a dynamic analysis tool: the testing happens at runtime.
A:
Bounds checking
This means runtime checks of array accesses. Contrary to C's laissez-faire approach to memory accesses and pointer arithmetic, other languages like Java or C# actually check whether or not a given array has the element one is trying to access.
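A minimal C# illustration of such a runtime check:

using System;

class BoundsDemo
{
    static void Main()
    {
        int[] numbers = { 10, 20, 30 };
        try
        {
            // Every element access is validated at runtime; reading numbers[3]
            // throws instead of silently reading adjacent memory as C would.
            Console.WriteLine(numbers[3]);
        }
        catch (IndexOutOfRangeException ex)
        {
            Console.WriteLine("Caught: " + ex.Message);
        }
    }
}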
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Garbage Collection: Is it necessary to set large objects to null in a Dispose method? Is it necessary to set large objects to null when implementing a Dispose() method?
A: If a class has a Dispose method, the best practice is to call it. The reason is that Dispose runs when called, whereas setting the reference to null simply makes the instance eligible for collection (and, if it has a finalizer, leaves it to the finalization queue), and we cannot determine when the GC will run.
There is no performance benefit in implementing the Dispose method on types that use only managed resources (such as arrays) because they are automatically reclaimed by the garbage collector. Use the Dispose method primarily on managed objects that use native resources and on COM objects that are exposed to the .NET Framework. Managed objects that use native resources (such as the FileStream class) implement the IDisposable interface.
An elegant means of invoking Dispose that I have adopted is the "using" construct. For those of you who may not be familiar with the construct, it provides a means to implicitly invoke Dispose() on an instance that implements IDisposable, even if an exception is thrown during the operation. The following is an example of the using construct:
using(DisposableClass dc = new DisposableClass())
{
    dc.PerformActionOnUnmanagedResources();
    dc.PerformAnotherActionOnUnmanagedResources();
}
In the previous example, if an exception were thrown in the PerformActionOnUnmanagedResources() method, then although the PerformAnotherActionOnUnmanagedResources() method would not be processed, the using block would still implicitly invoke the Dispose method on dc, ensuring the release of any unmanaged resources.
A: Not usually.
The garbage collector looks for rooted objects, and circular dependencies don't prevent collection if neither object is rooted.
There is a caveat: if object A has a reference to object B, and object B is being disposed, you may want to clean up that relationship or else you could end up with a leak. The most common place this surfaces is in event handlers (the reference from A->B is one that B controls, because it subscribed to an event on A). In this case, if A is still rooted, B cannot be collected even though it's been disposed.
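A minimal sketch of cleaning up such an event subscription in Dispose (all names here are illustrative):

using System;

public class Publisher
{
    public event EventHandler SomethingHappened;
}

public class Subscriber : IDisposable
{
    private readonly Publisher _publisher;

    public Subscriber(Publisher publisher)
    {
        _publisher = publisher;
        _publisher.SomethingHappened += OnSomethingHappened; // publisher now references us
    }

    private void OnSomethingHappened(object sender, EventArgs e)
    {
        // handle the event
    }

    public void Dispose()
    {
        // Unsubscribe so a still-rooted publisher no longer keeps this object alive.
        _publisher.SomethingHappened -= OnSomethingHappened;
    }
}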
A: The purpose of a dispose method is to release all resources associated with your class, and the parent's class by calling the base class dispose method. Have a read of this link, it should make things a little clearer:
http://msdn.microsoft.com/en-us/library/fs2xkftw.aspx
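For reference, a minimal sketch of the dispose pattern that page describes (the native handle is a stand-in for whatever unmanaged resource you hold):

using System;

public class ResourceHolder : IDisposable
{
    private IntPtr _nativeHandle; // stand-in for an unmanaged resource
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // finalization no longer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // release other managed IDisposable members here
        }
        // release unmanaged resources here
        _nativeHandle = IntPtr.Zero;
        _disposed = true;
    }

    ~ResourceHolder()
    {
        Dispose(false); // GC path: touch only unmanaged state
    }
}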
A: What do you mean by "large object"?
You should at least call Dispose() on any member implementing IDisposable, though.
A: It isn't necessary as others have pointed out, but it is good practice and helps with debugging.
Once an object has finished with a reference it holds, setting that reference to null helps prevent accidental reuse of the referenced object later (you'll get a null reference exception instead).
The same logic applies to setting member pointers to null in C++ destructors once you have deleted them. There is no need to do it, but it helps with troubleshooting later.
A: Think about the purpose of Disposable methods for a bit: it's usually because you're holding some resource that won't be released during garbage collection. This is usually something like a database connection or a file handle. Thus, once the Dispose method has been called, all those resources have been released.
I'd argue that having nulls floating around is more harmful than having "zombie" objects floating around.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Task Schedulers Had an interesting discussion with some colleagues about the best scheduling strategies for realtime tasks, but not everyone had a good understanding of the common or useful scheduling strategies.
For your answer, please choose one strategy and go over it in some detail, rather than giving a little info on several strategies. If you have something to add to someone else's description and it's short, add a comment rather than a new answer (if it's long or useful, or simply a much better description, then please use an answer)
*
*What is the strategy - describe the general case (assume people know what a task queue is, semaphores, locks, and other OS fundamentals outside the scheduler itself)
*What is this strategy optimized for (task latency, efficiency, realtime, jitter, resource sharing, etc)
*Is it realtime, or can it be made realtime
Current strategies:
*
*Priority Based Preemptive
*Lowest power slowest clock
-Adam
A: In a paper titled Real-Time Task Scheduling for Energy-Aware Embedded Systems, Swaminathan and Chakrabarty describe the challenges of real-time task scheduling in low-power (embedded) devices with multiple processor speeds and power consumption profiles available. The scheduling algorithm they outline (shown to be only about 1% worse than an optimal solution in tests) has an interesting way of scheduling tasks they call the LEDF heuristic.
From the paper:
The low-energy earliest deadline first
heuristic, or simply LEDF, is an
extension of the well-known earliest
deadline first (EDF) algorithm. The
operation of LEDF is as follows: LEDF
maintains a list of all released
tasks, called the “ready list”. When
tasks are released, the task with the
nearest deadline is chosen to be
executed. A check is performed to see
if the task deadline can be met by
executing it at the lower voltage
(speed). If the deadline can be met,
LEDF assigns the lower voltage to the
task and the task begins execution.
During the task’s execution, other
tasks may enter the system. These
tasks are assumed to be placed
automatically on the “ready list”.
LEDF again selects the task with the
nearest deadline to be executed. As
long as there are tasks waiting to be
executed, LEDF does not keep the pro-
cessor idle. This process is repeated
until all the tasks have been
scheduled.
And in pseudo-code:
Repeat forever {
if tasks are waiting to be scheduled {
Sort deadlines in ascending order
Schedule task with earliest deadline
Check if deadline can be met at lower speed (voltage)
If deadline can be met,
schedule task to execute at lower voltage (speed)
If deadline cannot be met,
check if deadline can be met at higher speed (voltage)
If deadline can be met,
schedule task to execute at higher voltage (speed)
If deadline cannot be met,
task cannot be scheduled: run the exception handler!
}
}
It seems that real-time scheduling is an interesting and evolving problem as small, low-power devices become more ubiquitous. I think this is an area in which we'll see plenty of further research and I look forward to keeping abreast!
A: One common real-time scheduling scheme is to use priority-based preemptive multitasking.
Each task is assigned a different priority level.
The highest priority task on the ready queue will be the task that runs. It will run until it either gives up the CPU (i.e. delays, waits on a semaphore, etc...) or a higher priority task becomes ready to run.
The advantage of this scheme is that the system designer has full control over what tasks will run at what priority. The scheduling algorithm is also simple and should be deterministic.
On the other hand, low priority tasks might be starved for CPU. This would indicate a design problem.
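As a rough illustration of the dispatch decision only (not a real kernel scheduler), a sketch in C#:

using System.Collections.Generic;
using System.Linq;

class RtTask
{
    public string Name;
    public int Priority;   // assumption: higher number = higher priority
    public bool Ready;
}

class PriorityScheduler
{
    private readonly List<RtTask> _tasks = new List<RtTask>();

    public void Add(RtTask task) { _tasks.Add(task); }

    // Always dispatch the highest-priority ready task; when a higher-priority
    // task becomes ready, the next call returns it, preempting the current one.
    public RtTask PickNext()
    {
        return _tasks.Where(t => t.Ready)
                     .OrderByDescending(t => t.Priority)
                     .FirstOrDefault();
    }
}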
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Count a list of cells with the same background color Each cell contains some text and a background color. So I have some cells that are blue and some that are red. What function do I use to count the number of red cells?
I have tried =COUNTIF(D3:D9,CELL("color",D3)) with no success (Where D3 is red).
A: The worksheet formula, =CELL("color",D3) returns 1 if the cell is formatted with color for negative values (else returns 0).
You can solve this with a bit of VBA. Insert this into a VBA code module:
Function CellColor(xlRange As Excel.Range)
CellColor = xlRange.Cells(1, 1).Interior.ColorIndex
End Function
Then use the function =CellColor(D3) to display the .ColorIndex of D3
A: I just created this and it looks easier. You get these 2 functions:
=GetColorIndex(E5) <- returns the color number for the cell (argument: cell)
=CountColorIndexInRange(C7:C24,14) <- returns the count of cells in C7:C24 with color 14 (arguments: range of cells, color number you want to count)
Example showing the percent of cells with color 14:
=ROUND(CountColorIndexInRange(C7:C24,14)/18, 4)
Create these 2 VBA functions in a module: hit Alt-F11, expand the folders in the project tree, and double-click on Module1.
Just paste the text below in, then close the module window (it is saved automatically):
Function GetColorIndex(Cell As Range)
GetColorIndex = Cell.Interior.ColorIndex
End Function
Function CountColorIndexInRange(Rng As Range, TestColor As Long)
Dim cnt
Dim cl As Range
cnt = 0
For Each cl In Rng
If GetColorIndex(cl) = TestColor Then
Rem Debug.Print ">" & TestColor & "<"
cnt = cnt + 1
End If
Next
CountColorIndexInRange = cnt
End Function
A: I needed to solve exactly the same task. I had divided the table visually, using different background colors for different parts. Googling, I found this page: https://support.microsoft.com/kb/2815384. Unfortunately it doesn't solve the issue, because ColorIndex refers to a somewhat unpredictable value: if some cells have nuances of one color (for example, different brightness levels), the suggested function counts them all the same. The solution below is my fix:
Function CountBgColor(range As range, criteria As range) As Long
Dim cell As range
Dim color As Long
color = criteria.Interior.color
For Each cell In range
If cell.Interior.color = color Then
CountBgColor = CountBgColor + 1
End If
Next cell
End Function
A: Excel has no way of gathering that attribute with its built-in functions. If you're willing to use some VB, all your color-related questions are answered here:
http://www.cpearson.com/excel/colors.aspx
Example from the site:
The SumColor function is a color-based
analog of both the SUM and SUMIF
function. It allows you to specify
separate ranges for the range whose
color indexes are to be examined and
the range of cells whose values are to
be summed. If these two ranges are the
same, the function sums the cells
whose color matches the specified
value. For example, the following
formula sums the values in B11:B17
whose fill color is red.
=SUMCOLOR(B11:B17,B11:B17,3,FALSE)
A: Yes VBA is the way to go.
But, if you don't need to have a cell with a formula that auto-counts/updates the number of cells with a particular colour, an alternative is simply to use the 'Find and Replace' function and format the cell to have the appropriate colour fill.
Hitting 'Find All' will give you the total number of cells found at the bottom left of the dialogue box.
This becomes especially useful if your search range is massive. The VBA script will be very slow but the 'Find and Replace' function will still be very quick.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How to remove black border around hyperlinked image? When I turn an image (<img>) into a hyperlink (by wrapping it in <a>), Firefox adds a black border around the image. Safari does not display the same border.
What CSS declaration would be best to eliminate the border?
A: Just add:
border: 0;
or:
a img {
border: 0;
}
to remove border from all image links.
That should do the trick.
A: In the code, use border="0". So for example:
<img src="mypic.gif" border="0" />
Within CSS:
border: 0;
under whatever class your image uses.
A: img {
border: 0
}
Or old-fashioned:
<img border="0" src="..." />
^^^^^^^^^^
A: a img {
border-width: 0;
}
A: Try this:
img {
border-style: none;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: In what order are locations searched to load referenced DLLs? I know that the .NET framework looks for referenced DLLs in several locations
*
*Global assembly cache (GAC)
*Any private paths added to the AppDomain
*The current directory of the executing assembly
What order are those locations searched? Is the search for a DLL ceased if a match is found or does it continue through all locations (and if so, how are conflicts resolved)?
Also, please confirm or deny those locations and provide any other locations I have failed to mention.
A: I found an article referencing the MSDN article on DLL search order that says
For managed code dependencies, the
Global Assembly Cache always prevails;
the local assembly in application
directory will not be picked up if
there is an existing (or newer with
policy) copy in the GAC.
Considering this, I guess the MSDN list is correct with one addition
0. Global assembly cache
A: Assembly loading is a rather elaborate process which depends on lots of different factors like configuration files, publisher policies, appdomain settings, CLR hosts, partial or full assembly names, etc.
The simple version is that the GAC is first, then the private paths. %PATH% is never used.
It is best to use Assembly Binding Log Viewer (Fuslogvw.exe) to debug any assembly loading problems.
EDIT
How the Runtime Locates Assemblies explains the process in more detail.
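And if the standard probing fails altogether, you can take part in the resolution yourself via the AppDomain.AssemblyResolve event; a minimal sketch (the folder path is illustrative):

using System;
using System.IO;
using System.Reflection;

static class Program
{
    static void Main()
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            // Raised only after the GAC and the probing paths have all failed.
            string fileName = new AssemblyName(args.Name).Name + ".dll";
            string candidate = Path.Combine(@"C:\MyLibs", fileName); // illustrative folder
            return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
        };

        // ... rest of the application ...
    }
}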
A:
No longer is the current directory searched first when loading DLLs! This change was also made in Windows XP SP1. The default behavior now is to look in all the system locations first, then the current directory, and finally any user-defined paths.
(ref. http://weblogs.asp.net/pwilson/archive/2003/06/24/9214.aspx)
The default search order, which can be changed by the application, is also described on MSDN: Dynamic-Link Library Search Order.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
} |
Q: Really killing a process in Windows Occasionally a program on a Windows machine goes crazy and just hangs. So I'll call up the task manager and hit the "End Process" button for it. However, this doesn't always work; if I try it enough times then it'll usually die eventually, but I'd really like to be able to just kill it immediately. On Linux I could just kill -9 to guarantee that a process will die.
This also could be used for writing batch scripts and writing batch scripts is programming.
Is there some program or command that comes with Windows that will always kill a process? A free third-party app would be fine, although I'd prefer to be able to do this on machines I sit down at for the first time.
A: Set up an AT command to run Task Manager or Process Explorer as SYSTEM.
AT 12:34 /interactive "C:/procexp.exe"
If Process Explorer is in the root of your C drive, this will open it as SYSTEM and you can kill any process without getting any access-denied errors. Set the time for a minute or so in the future, and it will pop up for you.
A: Process Hacker has numerous ways of killing a process.
(Right-click the process, then go to Miscellaneous->Terminator.)
A: "End Process" on the Processes-Tab calls TerminateProcess which is the most ultimate way Windows knows to kill a process.
If it doesn't go away, it's currently locked waiting on some kernel resource (probably a buggy driver) and there is nothing (short of a reboot) you could do to make the process go away.
Have a look at this blog-entry from wayback when: http://blogs.technet.com/markrussinovich/archive/2005/08/17/unkillable-processes.aspx
Unix based systems like Linux also have that problem where processes could survive a kill -9 if they are in what's known as "Uninterruptible sleep" (shown by top and ps as state D) at which point the processes sleep so well that they can't process incoming signals (which is what kill does - sending signals).
Normally, uninterruptible sleep should not last long, but as under Windows, broken drivers or broken userspace programs (vfork without exec) can end up sleeping in D forever.
A: taskkill /im myprocess.exe /f
The "/f" is for "force".
If you know the PID, then you can specify that, as in:
taskkill /pid 1234 /f
Lots of other options are possible, just type taskkill /? for all of them. The "/t" option kills a process and any child processes; that may be useful to you.
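And since the question mentions batch scripts and programming, here is a hedged sketch of the managed equivalent ("myprocess" is an illustrative name):

using System;
using System.Diagnostics;

class ForceKill
{
    static void Main()
    {
        // Rough managed equivalent of "taskkill /im myprocess.exe /f";
        // note the process name is given without the ".exe" suffix.
        foreach (Process p in Process.GetProcessesByName("myprocess"))
        {
            p.Kill();            // ultimately calls TerminateProcess
            p.WaitForExit(5000); // give the OS a moment to reap the process
        }
    }
}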
A: JosepStyons is right. Open cmd.exe and run
taskkill /im processname.exe /f
If there is an error saying,
ERROR: The process "process.exe" with PID 1234 could not be
terminated.
Reason: Access is denied.
then try running cmd.exe as administrator.
A: Get process explorer from sysinternals (now Microsoft)
Process Explorer - Windows Sysinternals | Microsoft Docs
A: One trick that works well is to attach a debugger and then quit the debugger.
On XP or Windows 2003 you can do this using ntsd that ships out of the box:
ntsd -pn myapp.exe
ntsd will open up a new window. Just type 'q' in the window to quit the debugger and take out the process.
I've known this to work even when task manager doesn't seem able to kill a process.
Unfortunately ntsd was removed from Vista and you have to install the (free) Debugging Tools for Windows to get a suitable debugger.
A: FYI you can sometimes use SYSTEM or Trustedinstaller to kill tasks ;)
google quickkill_3_0.bat
sc config TrustedInstaller binPath= "cmd /c TASKKILL /F /IM notepad.exe"
sc start "TrustedInstaller"
A: I had this issue too, here is how I solved it.
1/ Open the « task manager «
2/ Locate the application name in the list
3/ Once found, right click on its name then click on « properties »
4/ In the properties interface, click on « security «
5/ Click on « edit » to change permissions
6/ « Deny » all permissions for all users, click on « apply » then « ok »
7/ click on « advanced » for special permissions settings
8/ Remove permissions for all users
9/ click on « apply » then « ok »
10/ click on « apply » then « ok » again
11/ you can now kill the process in Task Manager, as well as uninstall the app if you want to.
A: When ntsd access is denied, try ZeroWave.
ZeroWave was designed to be a simple tool that provides multilevel termination of any kind of process.
ZeroWave is also an easy-to-use program due to its simple installation and its very friendly graphical interface.
ZeroWave has three termination modes and with the "INSANE" mode can terminate any kind of process that can run on Windows.
It seems that ZeroWave can't kill avp.exe
A: wmic process where processid="11008" call terminate
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "398"
} |
Q: SQL 2000 database copy to SQL 2005 options We have a production web and database server with SQL Server 2000. (However, a few clients they have their own servers with SQL 2005.) So we have local installs of SQL 2005 Express for development on Windows XP SP3 boxes (which don't allow SQL 2000 Enterprise installations).
We often need to copy SQL 2000 databases to SQL 2005 instances. In the past, we have used the SQL Publishing tool (also mentioned here). However, one of our databases is so big that using that tool fails as it creates SQL scripts that get too large for Management Studio to handle them properly. Besides, it takes too long... :)
We would use the Copy Database Wizard included with SQL 2005, but our development machines run SQL 2005 Express which don't included SQL Server Agent, which is required for Copy Database Wizard to work. So, I guess our solution will be to upgrade our development installs with the full version of SQL 2005 (we have an MSDN subscription of course).
I was wondering what other solutions, if any, work well for you guys? (Besides complaining to the bosses to upgrade our production servers to 2005 or even 2008--which I've already tried.)
A: Back it up in SQL Server 2000 and then use the RESTORE WITH MOVE command into 2005 Express.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Flash hyperlinks seem to truncate surrounding space I have a weird bug involving Flash text and hyperlinks, htmlText in a TextField with <a> tags seem to truncate surrounding space:
Once I place my cursor over the text, it "fixes" itself:
Here is the HTML in the textField:
<p>The speeches at both the <a href="http://www.demconvention.com/speeches/" target="_blank">Democratic National Convention</a> last week and the <a href="http://www.gopconvention2008.com/videos" target="_blank">Republican National Convention</a> this week, have been, for me at least, must see TV.</p>
When I disable the styleSheet attached to it, the effect still occurs, but placing my mouse over it does not fix the spacing. I am using "Anti-alias for readability", and have embedded all of the Uppercase, Lowercase, Numerals, and Punctuation glyphs. I will also point out that if I change the rendering setting to "Use Device fonts" the bug goes away.
Any thoughts?
A: Make sure your styleSheet declares what it is supposed to do with anchors. You are obviously using htmlText if you're using CSS, so as soon as it sees < in front of "a href" it immediately looks for the CSS class definition for a, and when it doesn't find one, the result is the different-looking text you see.
Add the following to your CSS and make sure that it has the same settings as the regular style of your text as far as style, weight, and size. The only thing that should differ is the color.
a:link
{
font-family: sameAsReg;
font-size: 12px; //Note flash omits things like px and pt.
color:#FF0000; //Red
}
Be sure that the fonts you are embedding are in the library and instantiated in your code. Embedding each textfield through the UI is silly when you can merely load the font from the library at runtime and then use it anywhere.
You can also import multiple fonts at compile time and use them in the same textfield with the use of <span class="someCSSClass">Some Text</span><span class="someOtherCSSClass">Some Other Text</span>
Good luck and I hope this helps.
A: Holy cow, I just had the same problem. Apparently the order in which you set the variables matters. Look at the sixth reply on this thread. I also noticed that this only occurs with AntiAliasType.ADVANCED, which is the default on a TextField.
_description = new TextField();
_description.selectable = false;
_description.width = WIDTH; // Global.
addChild(_description);
var myriadPro:Font = new MyriadPro(); // Embedded font.
var style:StyleSheet = new StyleSheet();
var styleObj:Object = new Object();
styleObj.fontFamily = myriadPro.fontName;
styleObj.fontSize = 13;
styleObj.textAlign = "left";
styleObj.color = "#FFFFFF";
style.setStyle("p", styleObj);
style.setStyle("a:link", styleObj);
style.setStyle("a:hover", styleObj);
_description.autoSize = TextFieldAutoSize.LEFT;
_description.antiAliasType = AntiAliasType.ADVANCED;
_description.condenseWhite = true;
_description.wordWrap = true;
_description.multiline = true;
_description.embedFonts = true;
_description.styleSheet = style;
_description.htmlText = '<p>A short description with an <a href="http://www.example.com/">HTML</a> link that can be clicked.</p>';
A: I was having this issue also and found setting the following fixed it for me.
yourTextField.gridFitType = GridFitType.SUBPIXEL;
It looks as though it might be caused by the AntiAliasType being set to ADVANCED, which defaults to a PIXEL gridFitType. I presume it's shifting some of the copy as it tries to fit the characters to whole pixel values.
A: I tried all the above, but found making sure autosize = false was what fixed it for me.
A: I had the same problem. If you set autoSize to center, it gets rid of the problem. I don't know why but it does fix it. It seems to be a Flash bug.
A: Does it make any difference if you put non-breaking spaces immediately before and after the anchor element?
<p> ... <a ... >Link text</a> ... </p>
Admittedly a workaround at best but it might buy you some time to research a real solution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Asp.Net Form DefaultButton Error in Firefox The .Net generated code for a form with the "DefaultButton" attribute set contains poor javascript that allows the functionality to work in IE but not in other browsers (Firefox specifcially).
Hitting the enter key does submit the form in all browsers, but Firefox cannot disregard the key press when it happens inside of a <textarea> control. The result is a multiline text area control that cannot be multiline in Firefox, as the enter key submits the form instead of creating a new line.
For more information on the bug, read it here.
This could be fixed in Asp.Net 3.0+ but a workaround still has to be created for 2.0.
Any ideas for the lightest workaround (a hack that doesn't look like a hack =D)? The solution in the above link scares me a little as it could easily have unintended side-effects.
A: I use this function adapted from codesta. [Edit: the very same one, I see, that scares you! Oops. Can't help you then.]
http://blog.codesta.com/codesta_weblog/2007/12/net-gotchas---p.html.
You use it by surrounding your code with a div like so. You could subclass the Form to include this automatically. I don't use it that much, so I didn't.
<div onkeypress="return FireDefaultButton(event, '<%= aspButtonID.ClientID %>')">
(your form goes here)
</div>
Here's the function.
function FireDefaultButton(event, target)
{
// srcElement is for IE
var element = event.target || event.srcElement;
if (13 == event.keyCode && !(element && "textarea" == element.tagName.toLowerCase()))
{
var defaultButton;
defaultButton = document.getElementById(target);
if (defaultButton && "undefined" != typeof defaultButton.click)
{
defaultButton.click();
event.cancelBubble = true;
if (event.stopPropagation)
event.stopPropagation();
return false;
}
}
return true;
}
A: It seems that the codesta.com fix that harpo links to is no longer necessary, since the event.srcElement fix is integrated in ASP.NET 3.5. The implementation of DefaultButton does, however, still have some problems, because it catches the Enter key press in too many cases. For example: if you have activated a button in the form using Tab, pressing Enter should click on the button and not submit the form.
Include the following JavaScript code at the bottom of your ASP.NET web page to make Enter behave the way it should.
// Fixes ASP.NET's behavior of default button by testing for more controls
// than just textarea where the event should not be caught by the DefaultButton
// action. This method has to override ASP.NET's WebForm_FireDefaultButton, so
// it has to be included at the bottom of the page.
function WebForm_FireDefaultButton(event, target) {
if (event.keyCode == 13) {
var src = event.srcElement || event.target;
if (!(
src
&&
(
src.tagName.toLowerCase() == "textarea"
|| src.tagName.toLowerCase() == "a"
||
(
src.tagName.toLowerCase() == "input"
&&
(
src.getAttribute("type").toLowerCase() == "submit"
|| src.getAttribute("type").toLowerCase() == "button"
|| src.getAttribute("type").toLowerCase() == "reset"
)
)
|| src.tagName.toLowerCase() == "option"
|| src.tagName.toLowerCase() == "select"
)
)) {
var defaultButton;
if (__nonMSDOMBrowser) {
defaultButton = document.getElementById(target);
}
else {
defaultButton = document.all[target];
}
if (defaultButton && typeof (defaultButton.click) != "undefined") {
defaultButton.click();
event.cancelBubble = true;
if (event.stopPropagation) event.stopPropagation();
return false;
}
}
}
return true;
}
A: For this particular issue, the reason is that the JavaScript generated by
ASP.NET 2.0 has some IE-only notation: event.srcElement is not available in
Firefox (use event.target instead):
function WebForm_FireDefaultButton(event, target) {
if (!__defaultFired && event.keyCode == 13 && !(event.srcElement &&
(event.srcElement.tagName.toLowerCase() == "textarea"))) {
var defaultButton;
if (__nonMSDOMBrowser) {
defaultButton = document.getElementById(target);
}
else {
defaultButton = document.all[target];
}
if (defaultButton && typeof(defaultButton.click) !=
"undefined") {
__defaultFired = true;
defaultButton.click();
event.cancelBubble = true;
if (event.stopPropagation) event.stopPropagation();
return false;
}
}
return true;
}
If we change the first 2 lines into:
function WebForm_FireDefaultButton(event, target) {
var element = event.target || event.srcElement;
if (!__defaultFired && event.keyCode == 13 && !(element &&
(element.tagName.toLowerCase() == "textarea"))) {
Put the changed code in a file and then do
protected void Page_Load(object sender, EventArgs e)
{
ClientScript.RegisterClientScriptInclude("js1", "JScript.js");
}
Then it will work for both IE and FireFox.
Source:
http://www.velocityreviews.com/forums/t367383-formdefaultbutton-behaves-incorrectly.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What are some good SharePoint security resources? I've got a SharePoint application and I'm sad to say that in my SharePoint-induced excitement, I ignored a lot of the security concerns I should have been paying more attention to. Though we didn't before, now we actually need granular security, so I need to get educated. I'm mostly interested in how to best create groups and add users to those groups. We have a single main site collection and a couple dozen subsites under that collection. How can I best create a granular security world where I can independently assign rights to each of these subsites?
A: To have permissions vary at the "sub site" level, which is the SPWeb object in object model terms, you need to enable unique permissions for the site.
A good article outlining the permission hierarchy in SharePoint 2007 can be found on the office web site About controlling access to sites and site content
In my experience, if you are able to use permission inheritance rather than granular security, it's much less hassle to manage.
Breaking site permission inheritance
*
*Click "People and groups"
*Click "Site permissions"
*From the actions menu in the list click "Edit Permissions"
http://blog.richfinn.net/content/binary/WindowsLiveWriter/InstallandConfiguretheCommunityKitforSha_E660/image_3.png
Other references
*
*SharePoint 2007: Permissions, permissions, permissions.
*SharePoint 2007 SiteGroups - part 1 - the basics
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Graphics card memory usage in linux What tools are available to monitor graphics card memory usage in linux?
A: NVIDIA PerfKit has a linux version which allows real-time monitoring of various graphics card properties, including graphics card memory usage. Obviously, this only works for NVIDIA graphics cards, and it also requires the use of a special instrumented driver.
A: If you just need to know it for 3D graphics development purposes, you may want to look into something like gDEBugger or, if you only care about NVIDIA cards, you can try NVIDIA PerfHUD. I have not used them myself, but I would expect them to track such information.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Is Visual Studio 2003 still available/supported Pretty much what the title says really.
We have some code that is .NET 1.1 based and no real desire to up-convert it. However, we are looking to add developers to the team and they will need copies of Visual Studio.
My understanding is that they will need VS 2003 - as this is the only IDE that supports .NET 1.1 but I am wondering if we are still able to purchase it!
A: You can build 1.1 projects in Visual Studio 2005:
http://www.hanselman.com/blog/BuildingNET11ProjectsUsingVisualStudio2005.aspx
The MSBuild Everett Environment (MSBEE) has been announced, and soon this will be a (reasonably) supported scenario and we'll all be able to build both 1.1 and 2.0 versions of .NET code on Visual Studio 2005.
Also read this post about this issue:
http://blogs.msdn.com/clichten/archive/2005/11/08/490541.aspx
And also:
MSBuild Extras – Toolkit for .NET 1.1 “MSBee” is an addition to MSBuild that allows developers to build managed applications using Visual Studio 2005 projects that target .NET 1.1.
A: Visual Studio 2003 is still available to download for MSDN subscribers.
The EULA for Visual Studio includes a 'downgrade' clause, which appears, IANAL, to allow you to buy Visual Studio 2008 and then install 2003 under the same license.
DOWNGRADE. You may install and use
this version and an earlier version of
the software at the same time. This
agreement applies to your use of the
earlier version. If the earlier
version includes different components,
any terms for those components in the
agreement that comes with the earlier
version apply to your use of them.
Microsoft is not obligated to supply
earlier versions to you.
A: Mainstream support for VS2003 ends in October of this year:
http://support.microsoft.com/lifecycle/search/?sort=PN&alpha=Visual+Studio
Extended support (whatever that means) is still available for quite some time.
A: In addition to Espo's link, look into MSBee, an enhancements kit for MSBuild to better support .NET Framework 1.1.
It seems you can even use .NET 1.1 with Visual Studio 2008, though, so you should have no problem.
That said, I'd be interested in hearing what made you choose against upgrading.
A: Supported: Yes
Available: Not through normal channels. You might still find a boxed copy on Amazon or somewhere.
A: .NET 1.1 code can be imported in VS 2005, as .NET 2.0 is backward compatible with .NET 1.1.
You'll probably have to convert the project, but it should still run in VS 2005.
A: I believe that VS2003 loses support in October.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Setting Colors in SWT This is pretty simple, I come from a swing/awt background.
I'm just wondering what the proper way to set the background color for a SWT widget is?
I've been trying:
widget.setBackground( );
Except I have no idea how to create the color Object in SWT?
A: For standard colors (including common colors and default colors used by the operating system) Use Display.getSystemColor(int), and pass in the SWT.COLOR_* constant for the color you want.
Display display = Display.getCurrent();
Color blue = display.getSystemColor(SWT.COLOR_BLUE);
Color listBackground = display.getSystemColor(SWT.COLOR_LIST_BACKGROUND);
Note that you do not need to dispose these colors because SWT created them.
A: To create a color, try this:
Device device = Display.getCurrent ();
Color red = new Color (device, 255, 0, 0);
A: Remember that in SWT you must explicitly dispose any resources that you create when you are done with them. This includes widgets, fonts, colors, images, displays, printers, and GCs. If you do not dispose these resources, eventually your application will reach the resource limit of your operating system and the application will cease to run.
See also: SWT: Managing Operating System Resources
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "64"
} |
Q: Where do the Linux TCP/IP hackers 'live'? Over the past year or so my production platform has been plagued by an odd TCP/IP issue. I've spent zillions of hours working with competent & knowledgeable sysadmins, scouring the net, reading source code, been jerked around by RH's pathetic support, and crying tears of blood! To no avail. (Google 'unkn-4' and you'll see many posts with my name all over.)
Work-a-rounds are in place, so the issue is not a priority. But the geek in me really would like to understand and solve this problem.
So, where can a moderately competent systems programmer go to ask detailed questions and receive detailed answers from The Lords of TCP/IP stacks? I assume that their world so close to the bare metal, their population so small, is different from my own. That, and they don't want to answer emails to "My modem doesn't work" so they hide in the shadows.
Any pointers would be greatly appreciated.
A: Dave Miller (person in charge of networking in the linux kernel) and their fellow henchmen all inhabit the lkml or Linux Kernel Mailing List. If you can provide a reasonably decent bug report they'll get you a reasonable answer.
On the other hand if you tell them it's a very old kernel, they'll tell you to try the newest... At the very least you can try searching its archives.
A: The linux-net mailing list might interest you. There should be more details here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: C Image Library Can anyone recommend a decent C image library?
I'm after loaders for bmp, gif, jpg, png and tga.
I want to use this for programming my Sony Playstation Portable, so opensource would be very handy.
After some googling I've found FreeImage and CImg, but both feel rather heavy, and CImg is C++, not C.
A: I used FreeImage for PSP games in the past, but it was for pre-processing the data rather than in-game.
A: DevIL is often recommended. Whether or not it does what you want, I don't know.
A: If you control the images you're loading, the lightest loader I know is Sean Barrett's awesome stb_image.c (direct link to single file source code!).
There are also other very worthwhile libraries on Sean's site such as a tiny TrueType rasterizer and Vorbis decompressor, btw.
If you need OpenGL image loading that uses stb_image, I'll humbly point you to SOILex...
A: I will second Thomas Owens's ImageMagick suggestion. It is mind-boggling just how comprehensive the library is, and how much time it saves you in the end.
A: ImageMagick has a C API to connect to its libraries. There's also what they call a "low-level interface" between C and the ImageMagick libraries.
A: Here is some code I wrote for handling images. It is in C++ (not C), but you should be able to easily extract the BMP and GIF loading code. It's licensed LGPL.
I use the libpng and jpeglib for decompressing those formats.
A: For one of my projects, I am using the CImg library. It's very useful to start with. Moreover, they also have decent documentation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: How do you backup IIS's metabase in C#? exact code will be helpful. I assume the DirectoryServices namespace does it but I can't find the method that does it.
I need actual C# code. All the samples I found so far are VB or VBScript. The C# examples I found are for reading/setting ADSI properties. A command like backup seems to have a certain .NET syntax which I am not clear how to use. In VB there's a straightforward backup command. Need an equivalent in .NET.
A: You'll need to use ADSI objects. The IIsComputer.Backup method is what you want.
As far as how to access ADSI objects from C#, check out this MSDN page.
EDIT: Here's a sample implementation in C#.
A: I found it:
DirectoryEntry de = new DirectoryEntry("IIS://localhost");
de.Invoke("Backup", new object[0] );
The object array needs to be set to hold the proper arguments, such as a flag for overwriting the current backup.
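A slightly fuller sketch, based on the documented IIsComputer.Backup signature (location, version, flags); the values below are illustrative, so check the ADSI documentation for the MD_BACKUP_* constants you actually want:

using System;
using System.DirectoryServices;

class MetabaseBackup
{
    static void Main()
    {
        // The three arguments are (BackupLocation, BackupVersion, BackupFlags)
        // per the IIsComputer.Backup ADSI documentation; "MyBackup", 1 and 0
        // are illustrative values. Look up the MD_BACKUP_* constants there
        // (e.g. an overwrite flag) for the behavior you need.
        using (DirectoryEntry computer = new DirectoryEntry("IIS://localhost"))
        {
            computer.Invoke("Backup", new object[] { "MyBackup", 1, 0 });
        }
    }
}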
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to pass password to scp? I know it is not recommended, but is it at all possible to pass the user's password to scp?
I'd like to copy a file via scp as part of a batch job and the receiving server does, of course, need a password and, no, I cannot easily change that to key-based authentication.
A: Here is an example of how to do it with the expect tool:
sub copyover {
$scp = Expect->spawn("/usr/bin/scp ${srcpath}/$file $who:${destpath}/$file");
$scp->expect(30,"ssword: ") || die "Never got password prompt from $dest:$!\n";
print $scp 'password' . "\n";
$scp->expect(30,"-re",'$\s') || die "Never got prompt from parent system:$!\n";
$scp->soft_close();
return;
}
A: You may use ssh-copy-id to add your ssh key:
$which ssh-copy-id #check whether it exists
If exists:
ssh-copy-id "user@remote-system"
A: Nobody mentioned it, but Putty scp (pscp) has a -pw option for password.
Documentation can be found here: https://the.earth.li/~sgtatham/putty/0.67/htmldoc/Chapter5.html#pscp
A: Use sshpass:
sshpass -p "password" scp -r [email protected]:/some/remote/path /some/local/path
or so the password does not show in the bash history
sshpass -f "/path/to/passwordfile" scp -r [email protected]:/some/remote/path /some/local/path
The above copies the contents of the remote path to your local machine.
Install :
*
*ubuntu/debian
*
*apt install sshpass
*centos/fedora
*
*yum install sshpass
*mac w/ macports
*
*port install sshpass
*mac w/ brew
*
*brew install https://raw.githubusercontent.com/kadwanev/bigboybrew/master/Library/Formula/sshpass.rb
A: Just generate an ssh key like:
ssh-keygen -t rsa -C "[email protected]"
copy the content of ~/.ssh/id_rsa.pub
and lastly add it to the remote machine's ~/.ssh/authorized_keys
make sure the remote machine has permissions 0700 for the ~/.ssh folder and 0600 for ~/.ssh/authorized_keys
A: If you are connecting to the server from Windows, the Putty version of scp ("pscp") lets you pass the password with the -pw parameter.
This is mentioned in the documentation here.
A: Once you set up ssh-keygen as explained above, you can do
scp -i ~/.ssh/id_rsa /local/path/to/file [email protected]:/path/in/remote/server/
If you want to lessen typing each time, you can modify your .bash_profile file and put
alias remote_scp='scp -i ~/.ssh/id_rsa /local/path/to/file [email protected]:/path/in/remote/server/'
Then from your terminal do source ~/.bash_profile. Afterwards if you type remote_scp in your terminal it should run the scp command without password.
A: curl can be used as an alternative to scp to copy a file, and it supports a password on the command line.
curl --insecure --user username:password -T /path/to/sourcefile sftp://desthost/path/
A: You can script it with a tool like expect (there are handy bindings too, like Pexpect for Python).
A: Here's a poor man's Linux/Python/Expect-like example based on this blog post: Upgrading simple shells to fully interactive
TTYs. I needed this for old machines where I can't install Expect or add modules to Python.
Code:
(
echo 'scp [email protected]:./install.sh .'
sleep 5
echo 'scp-passwd'
sleep 5
echo 'exit'
) |
python -c 'import pty; pty.spawn("/usr/bin/bash")'
Output:
scp [email protected]:install.sh .
bash-4.2$ scp [email protected]:install.sh .
Password:
install.sh 100% 15KB 236.2KB/s 00:00
bash-4.2$ exit
exit
A: *
*Make sure password authentication is enabled on the target server. If it runs Ubuntu, then open /etc/ssh/sshd_config on the server, find any PasswordAuthentication=no lines and comment them all out (put # at the start of the line), save the file and run sudo systemctl restart ssh to apply the configuration. If there is no such line then you're done.
*Add -o PreferredAuthentications="password" to your scp command, e.g.:
scp -o PreferredAuthentications="password" /path/to/file user@server:/destination/directory
A: You can use the 'expect' script on unix/terminal
For example create 'test.exp' :
#!/usr/bin/expect
spawn scp /usr/bin/file.txt root@<ServerLocation>:/home
set pass "Your_Password"
expect {
password: {send "$pass\r"; exp_continue}
}
run the script
expect test.exp
I hope that helps.
A: *
*Make sure you have the "expect" tool installed; if not, install it:
# apt-get install expect
*Create a script file with the following content (# vi /root/scriptfile)
spawn scp /path_from/file_name user_name_here@to_host_name:/path_to
expect "password:"
send put_password_here\n;
interact
*execute the script file with "expect" tool
# expect /root/scriptfile
A: Copy files from one server to another server (in scripts)
Install PuTTY on Ubuntu or other Linux machines. PuTTY comes with pscp; we can copy files with pscp.
apt-get update
apt-get install putty
echo n | pscp -pw "Password@1234" -r user_name@source_server_IP:/copy_file_path/files /path_to_copy/files
For more options see pscp help.
A: Using SCP non-interactively from Windows:
*
*Install the community Edition of netcmdlets
*Import Module
*Use Send-PowerShellServerFile -AuthMode password -User MyUser -Password not-secure -Server YourServer -LocalFile C:\downloads\test.txt -RemoteFile C:\temp\test.txt for sending File with non-interactive password
A: All the solutions mentioned above can work only if you have the app installed, or if you have the admin rights to install expect or sshpass.
I found this very useful link to simply start the scp in the background.
$ nohup scp file_to_copy user@server:/path/to/copy/the/file > nohup.out 2>&1
https://charmyin.github.io/scp/2014/10/07/run-scp-in-background/
A: An alternative would be to add the public half of the user's key to the authorized_keys file on the target system. On the system you are initiating the transfer from, you can run an ssh-agent daemon and add the private half of the key to the agent. The batch job can then be configured to use the agent to get the private key, rather than prompting for the key's password.
This should be do-able on either a UNIX/Linux system or on Windows platform using pageant and pscp.
A: I found this really helpful answer here.
rsync -r -v --progress -e ssh user@remote-system:/address/to/remote/file /home/user/
Not only you can pass there the password, but also it will show the progress bar when copying. Really awesome.
A: In case if you observe a strict host key check error then use -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null options.
The complete example is as follows
sshpass -p "password" scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null [email protected]:/tmp/from/psoutput /tmp/to/psoutput
A: You can use the steps below. This works for me!
Step 1 -
Create a normal file, say "fileWithScpPassword", which contains the ssh password for the destination server.
Step 2 - Use sshpass -f followed by the password file name and then the normal scp command.
sshpass -f "fileWithScpPassword" scp /filePathToUpload user@ip:/destinationPath/
A: One easy way I do this:
Use the same scp command as you would with ssh keys, i.e.
scp -C -i <path_to_openssh_key> <local_file_path> user@<ip_address_VM>:<remote_file_path>
for transferring a file from local to remote,
but instead of providing the correct <path_to_openssh_key>, use some garbage path. Due to the wrong key path you will be asked for the password instead, and you can simply enter the password to get the work done!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "420"
} |
Q: How do I get a result from a modal dialog in jQuery I would like to use an add-in like simple-modal or the dialog add-in in the UI kit. However, how do I use these or any other and get a result back. Basically I want the modal to do some AJAX interaction with the server and return the result for the calling code to do some stuff with.
A: Here is how the confirm window works on simpleModal:
$(document).ready(function () {
$('#confirmDialog input:eq(0)').click(function (e) {
e.preventDefault();
// example of calling the confirm function
// you must use a callback function to perform the "yes" action
confirm("Continue to the SimpleModal Project page?", function () {
window.location.href = 'http://www.ericmmartin.com/projects/simplemodal/';
});
});
});
function confirm(message, callback) {
$('#confirm').modal({
close: false,
overlayId: 'confirmModalOverlay',
containerId: 'confirmModalContainer',
onShow: function (dialog) {
dialog.data.find('.message').append(message);
// if the user clicks "yes"
dialog.data.find('.yes').click(function () {
// call the callback
if ($.isFunction(callback)) {
callback.apply();
}
// close the dialog
$.modal.close();
});
}
});
}
A: If your HTML is like the following, and you are trying to avoid Bootstrap, then you can try it like the following. You can also apply AJAX to this structure, since it is just like any other part of the HTML of your page. Or you can use Bootstrap and your work will be easier. Here is some code; please give it a try. It can still be enhanced and modified:
$("button.try-it").on("click", function() {
$(".modal-container").removeClass("hide");
});
$(".close-btn").on("click", function() {
$(".modal-container").addClass("hide");
});
.modal-container {
position: absolute;
background-color: rgba(35, 35, 35, 0.41);
top: 0;
bottom: 0;
height: 300px;
width: 100%;
}
.modal-body {
width: 100px;
height: 100px;
margin: 0 auto;
background: white;
}
.close-btn {
float: right;
}
.hide {
display: none;
}
.body-container {
position: relative;
box-sizing: border-box;
}
.close-btn {
cursor: pointer;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div class="body-container">
<div class="button">
<button class="try-it">Try It!!</button>
</div>
<div class="modal-container hide">
<div class="modal-body">
<span class="close-btn">x</span>
<p>Here is the content of the modal</p>
<!--You can apply AJAX on this structure since this just like any other part of the HTML of your page-->
<!--Or you can use Bootstrap modal instead of this one.-->
</div>
</div>
</div>
Hope this was helpful.
Here is the link to a fiddle.
A: Since the modal dialog is on the page, you're free to set any document variable you want. However all of the modal dialog scripts I've seen included a demo using the return value, so it's likely on that page.
(the site is blocked for me otherwise I'd look)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Comparing two collections for equality irrespective of the order of items in them I would like to compare two collections (in C#), but I'm not sure of the best way to implement this efficiently.
I've read the other thread about Enumerable.SequenceEqual, but it's not exactly what I'm looking for.
In my case, two collections would be equal if they both contain the same items (no matter the order).
Example:
collection1 = {1, 2, 3, 4};
collection2 = {2, 4, 1, 3};
collection1 == collection2; // true
What I usually do is to loop through each item of one collection and see if it exists in the other collection, then loop through each item of the other collection and see if it exists in the first collection. (I start by comparing the lengths).
if (collection1.Count != collection2.Count)
return false; // the collections are not equal
foreach (Item item in collection1)
{
if (!collection2.Contains(item))
return false; // the collections are not equal
}
foreach (Item item in collection2)
{
if (!collection1.Contains(item))
return false; // the collections are not equal
}
return true; // the collections are equal
However, this is not entirely correct, and it's probably not the most efficient way to do compare two collections for equality.
An example I can think of that would be wrong is:
collection1 = {1, 2, 3, 3, 4}
collection2 = {1, 2, 2, 3, 4}
Which would be equal with my implementation. Should I just count the number of times each item is found and make sure the counts are equal in both collections?
The examples are in some sort of C# (let's call it pseudo-C#), but give your answer in whatever language you wish, it does not matter.
Note: I used integers in the examples for simplicity, but I want to be able to use reference-type objects too (they do not behave correctly as keys because only the reference of the object is compared, not the content).
A: EDIT: I realized as soon as I posed that this really only works for sets -- it will not properly deal with collections that have duplicate items. For example { 1, 1, 2 } and { 2, 2, 1 } will be considered equal from this algorithm's perspective. If your collections are sets (or their equality can be measured that way), however, I hope you find the below useful.
The solution I use is:
return c1.Count == c2.Count && c1.Intersect(c2).Count() == c1.Count;
Linq does the dictionary thing under the covers, so this is also O(N). (Note, it's O(1) if the collections aren't the same size).
I did a sanity check using the "SetEqual" method suggested by Daniel, the OrderBy/SequenceEquals method suggested by Igor, and my suggestion. The results are below, showing O(N*LogN) for Igor and O(N) for mine and Daniel's.
I think the simplicity of the Linq intersect code makes it the preferable solution.
__Test Latency(ms)__
N, SetEquals, OrderBy, Intersect
1024, 0, 0, 0
2048, 0, 0, 0
4096, 31.2468, 0, 0
8192, 62.4936, 0, 0
16384, 156.234, 15.6234, 0
32768, 312.468, 15.6234, 46.8702
65536, 640.5594, 46.8702, 31.2468
131072, 1312.3656, 93.7404, 203.1042
262144, 3765.2394, 187.4808, 187.4808
524288, 5718.1644, 374.9616, 406.2084
1048576, 11420.7054, 734.2998, 718.6764
2097152, 35090.1564, 1515.4698, 1484.223
A: static bool SetsContainSameElements<T>(IEnumerable<T> set1, IEnumerable<T> set2) {
var setXOR = new HashSet<T>(set1);
setXOR.SymmetricExceptWith(set2);
return (setXOR.Count == 0);
}
Solution requires .NET 3.5 and the System.Collections.Generic namespace. According to Microsoft, SymmetricExceptWith is an O(n + m) operation, with n representing the number of elements in the first set and m representing the number of elements in the second. You could always add an equality comparer to this function if necessary.
A: If you use Shouldly, you can use ShouldAllBe with Contains.
collection1 = {1, 2, 3, 4};
collection2 = {2, 4, 1, 3};
collection1.ShouldAllBe(item=>collection2.Contains(item)); // true
And finally, you can write an extension.
public static class ShouldlyIEnumerableExtensions
{
public static void ShouldEquivalentTo<T>(this IEnumerable<T> list, IEnumerable<T> equivalent)
{
list.ShouldAllBe(l => equivalent.Contains(l));
}
}
UPDATE
An optional parameter exists on the ShouldBe method.
collection1.ShouldBe(collection2, ignoreOrder: true); // true
A: In the case of no repeats and no order, the following EqualityComparer can be used to allow collections as dictionary keys:
public class SetComparer<T> : IEqualityComparer<IEnumerable<T>>
where T:IComparable<T>
{
public bool Equals(IEnumerable<T> first, IEnumerable<T> second)
{
if (first == second)
return true;
if ((first == null) || (second == null))
return false;
return first.ToHashSet().SetEquals(second);
}
public int GetHashCode(IEnumerable<T> enumerable)
{
int hash = 17;
foreach (T val in enumerable.OrderBy(x => x))
hash = hash * 23 + val.GetHashCode();
return hash;
}
}
Here is the ToHashSet() implementation I used. The hash code algorithm comes from Effective Java (by way of Jon Skeet).
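(The linked implementation is not reproduced in this copy; a minimal ToHashSet extension along those lines, purely as an assumption of what it looked like, would be:)

using System.Collections.Generic;

public static class EnumerableExtensions
{
    // Minimal stand-in for the linked ToHashSet() implementation.
    public static HashSet<T> ToHashSet<T>(this IEnumerable<T> source)
    {
        return new HashSet<T>(source);
    }
}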
A: Why not use .Except()
// Create the IEnumerable data sources.
string[] names1 = System.IO.File.ReadAllLines(@"../../../names1.txt");
string[] names2 = System.IO.File.ReadAllLines(@"../../../names2.txt");
// Create the query. Note that method syntax must be used here.
IEnumerable<string> differenceQuery = names1.Except(names2);
// Execute the query.
Console.WriteLine("The following lines are in names1.txt but not names2.txt");
foreach (string s in differenceQuery)
Console.WriteLine(s);
http://msdn.microsoft.com/en-us/library/bb397894.aspx
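Note that Except alone only yields the difference in one direction. If the goal is the order-insensitive equality check from the question (with set semantics, so duplicates are ignored), a hedged sketch is:
// True when each collection is a subset of the other (duplicates are ignored)
bool sameItems = !names1.Except(names2).Any() && !names2.Except(names1).Any();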
A: Create a Dictionary "dict" and then for each member in the first collection, do dict[member]++;
Then, loop over the second collection in the same way, but for each member do dict[member]--.
At the end, loop over all of the members in the dictionary:
private bool SetEqual (List<int> left, List<int> right) {
if (left.Count != right.Count)
return false;
Dictionary<int, int> dict = new Dictionary<int, int>();
foreach (int member in left) {
if (dict.ContainsKey(member) == false)
dict[member] = 1;
else
dict[member]++;
}
foreach (int member in right) {
if (dict.ContainsKey(member) == false)
return false;
else
dict[member]--;
}
foreach (KeyValuePair<int, int> kvp in dict) {
if (kvp.Value != 0)
return false;
}
return true;
}
Edit: As far as I can tell this is on the same order as the most efficient algorithm. This algorithm is O(N), assuming that the Dictionary uses O(1) lookups.
A: A duplicate post of sorts, but check out my solution for comparing collections. It's pretty simple:
This will perform an equality comparison regardless of order:
var list1 = new[] { "Bill", "Bob", "Sally" };
var list2 = new[] { "Bob", "Bill", "Sally" };
bool isequal = list1.Compare(list2).IsSame;
This will check to see if items were added / removed:
var list1 = new[] { "Billy", "Bob" };
var list2 = new[] { "Bob", "Sally" };
var diff = list1.Compare(list2);
var onlyinlist1 = diff.Removed; //Billy
var onlyinlist2 = diff.Added; //Sally
var inbothlists = diff.Equal; //Bob
This will see what items in the dictionary changed:
var original = new Dictionary<int, string>() { { 1, "a" }, { 2, "b" } };
var changed = new Dictionary<int, string>() { { 1, "aaa" }, { 2, "b" } };
var diff = original.Compare(changed, (x, y) => x.Value == y.Value, (x, y) => x.Value == y.Value);
foreach (var item in diff.Different)
Console.Write("{0} changed to {1}", item.Key.Value, item.Value.Value);
//Will output: a changed to aaa
Original post here.
A: This simple solution forces the IEnumerable's generic type to implement IComparable<T>, because of OrderBy's definition.
If you don't want to make such an assumption but still want to use this solution, you can use the following piece of code:
bool equal = collection1.OrderBy(i => i?.GetHashCode())
.SequenceEqual(collection2.OrderBy(i => i?.GetHashCode()));
A: This is my (heavily influenced by D.Jennings) generic implementation of the comparison method (in C#):
/// <summary>
/// Represents a service used to compare two collections for equality.
/// </summary>
/// <typeparam name="T">The type of the items in the collections.</typeparam>
public class CollectionComparer<T>
{
/// <summary>
/// Compares the content of two collections for equality.
/// </summary>
/// <param name="foo">The first collection.</param>
/// <param name="bar">The second collection.</param>
/// <returns>True if both collections have the same content, false otherwise.</returns>
public bool Execute(ICollection<T> foo, ICollection<T> bar)
{
// Declare a dictionary to count the occurrences of the items in the collection
Dictionary<T, int> itemCounts = new Dictionary<T,int>();
// Increase the count for each occurrence of the item in the first collection
foreach (T item in foo)
{
if (itemCounts.ContainsKey(item))
{
itemCounts[item]++;
}
else
{
itemCounts[item] = 1;
}
}
// Wrap the keys in a searchable list
List<T> keys = new List<T>(itemCounts.Keys);
// Decrease the count for each occurrence of the item in the second collection
foreach (T item in bar)
{
// Try to find a key for the item
// The keys of a dictionary are compared by reference, so we have to
// find the original key that is equivalent to the "item"
// You may want to override ".Equals" to define what it means for
// two "T" objects to be equal
T key = keys.Find(
delegate(T listKey)
{
return listKey.Equals(item);
});
// Check if a key was found. Note this assumes T is a reference
// type: List<T>.Find returns default(T) when nothing matches,
// which is never null for value types.
if(key != null)
{
itemCounts[key]--;
}
else
{
// There was no occurrence of this item in the first collection, thus the collections are not equal
return false;
}
}
// The count of each item should be 0 if the contents of the collections are equal
foreach (int value in itemCounts.Values)
{
if (value != 0)
{
return false;
}
}
// The collections are equal
return true;
}
}
A: It turns out Microsoft already has this covered in its testing framework: CollectionAssert.AreEquivalent
Remarks
Two collections are equivalent if they have the same elements in the same quantity, but in any order. Elements are equal if their values are equal, not if they refer to the same object.
Using Reflector, I modified the code behind AreEquivalent() to create a corresponding equality comparer. It is more complete than existing answers, since it takes nulls into account, implements IEqualityComparer and has some efficiency and edge-case checks. Plus, it's Microsoft :)
public class MultiSetComparer<T> : IEqualityComparer<IEnumerable<T>>
{
private readonly IEqualityComparer<T> m_comparer;
public MultiSetComparer(IEqualityComparer<T> comparer = null)
{
m_comparer = comparer ?? EqualityComparer<T>.Default;
}
public bool Equals(IEnumerable<T> first, IEnumerable<T> second)
{
if (first == null)
return second == null;
if (second == null)
return false;
if (ReferenceEquals(first, second))
return true;
if (first is ICollection<T> firstCollection && second is ICollection<T> secondCollection)
{
if (firstCollection.Count != secondCollection.Count)
return false;
if (firstCollection.Count == 0)
return true;
}
return !HaveMismatchedElement(first, second);
}
private bool HaveMismatchedElement(IEnumerable<T> first, IEnumerable<T> second)
{
int firstNullCount;
int secondNullCount;
var firstElementCounts = GetElementCounts(first, out firstNullCount);
var secondElementCounts = GetElementCounts(second, out secondNullCount);
if (firstNullCount != secondNullCount || firstElementCounts.Count != secondElementCounts.Count)
return true;
foreach (var kvp in firstElementCounts)
{
var firstElementCount = kvp.Value;
int secondElementCount;
secondElementCounts.TryGetValue(kvp.Key, out secondElementCount);
if (firstElementCount != secondElementCount)
return true;
}
return false;
}
private Dictionary<T, int> GetElementCounts(IEnumerable<T> enumerable, out int nullCount)
{
var dictionary = new Dictionary<T, int>(m_comparer);
nullCount = 0;
foreach (T element in enumerable)
{
if (element == null)
{
nullCount++;
}
else
{
int num;
dictionary.TryGetValue(element, out num);
num++;
dictionary[element] = num;
}
}
return dictionary;
}
public int GetHashCode(IEnumerable<T> enumerable)
{
if (enumerable == null)
    throw new ArgumentNullException(nameof(enumerable));
int hash = 17;
foreach (T val in enumerable)
hash ^= (val == null ? 42 : m_comparer.GetHashCode(val));
return hash;
}
}
Sample usage:
var set = new HashSet<IEnumerable<int>>(new[] {new[]{1,2,3}}, new MultiSetComparer<int>());
Console.WriteLine(set.Contains(new [] {3,2,1})); //true
Console.WriteLine(set.Contains(new [] {1, 2, 3, 3})); //false
Or if you just want to compare two collections directly:
var comp = new MultiSetComparer<string>();
Console.WriteLine(comp.Equals(new[] {"a","b","c"}, new[] {"a","c","b"})); //true
Console.WriteLine(comp.Equals(new[] {"a","b","c"}, new[] {"a","b"})); //false
Finally, you can use an equality comparer of your choice:
var strcomp = new MultiSetComparer<string>(StringComparer.OrdinalIgnoreCase);
Console.WriteLine(strcomp.Equals(new[] {"a", "b"}, new []{"B", "A"})); //true
A: You could use a HashSet<T>. Look at the SetEquals method.
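For example (SetEquals ignores both order and duplicates):
var set1 = new HashSet<int> { 1, 2, 3 };
bool equal = set1.SetEquals(new[] { 3, 2, 1 }); // true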
A: A simple and fairly efficient solution is to sort both collections and then compare them for equality:
bool equal = collection1.OrderBy(i => i).SequenceEqual(
collection2.OrderBy(i => i));
This algorithm is O(N*logN), while your solution above is O(N^2).
If the collections have certain properties, you may be able to implement a faster solution. For example, if both of your collections are hash sets, they cannot contain duplicates. Also, checking whether a hash set contains some element is very fast. In that case an algorithm similar to yours would likely be fastest.
A: Here's my extension method variant of ohadsc's answer, in case it's useful to someone
static public class EnumerableExtensions
{
static public bool IsEquivalentTo<T>(this IEnumerable<T> first, IEnumerable<T> second)
{
if ((first == null) != (second == null))
return false;
if (!object.ReferenceEquals(first, second) && (first != null))
{
if (first.Count() != second.Count())
return false;
if ((first.Count() != 0) && HaveMismatchedElement<T>(first, second))
return false;
}
return true;
}
private static bool HaveMismatchedElement<T>(IEnumerable<T> first, IEnumerable<T> second)
{
int firstCount;
int secondCount;
var firstElementCounts = GetElementCounts<T>(first, out firstCount);
var secondElementCounts = GetElementCounts<T>(second, out secondCount);
if (firstCount != secondCount)
return true;
foreach (var kvp in firstElementCounts)
{
firstCount = kvp.Value;
secondElementCounts.TryGetValue(kvp.Key, out secondCount);
if (firstCount != secondCount)
return true;
}
return false;
}
private static Dictionary<T, int> GetElementCounts<T>(IEnumerable<T> enumerable, out int nullCount)
{
var dictionary = new Dictionary<T, int>();
nullCount = 0;
foreach (T element in enumerable)
{
if (element == null)
{
nullCount++;
}
else
{
int num;
dictionary.TryGetValue(element, out num);
num++;
dictionary[element] = num;
}
}
return dictionary;
}
static private int GetHashCode<T>(IEnumerable<T> enumerable)
{
int hash = 17;
foreach (T val in enumerable.OrderBy(x => x))
hash = hash * 23 + val.GetHashCode();
return hash;
}
}
A: Here is a solution which is an improvement over this one.
public static bool HasSameElementsAs<T>(
this IEnumerable<T> first,
IEnumerable<T> second,
IEqualityComparer<T> comparer = null)
{
var firstMap = first
.GroupBy(x => x, comparer)
.ToDictionary(x => x.Key, x => x.Count(), comparer);
var secondMap = second
.GroupBy(x => x, comparer)
.ToDictionary(x => x.Key, x => x.Count(), comparer);
if (firstMap.Keys.Count != secondMap.Keys.Count)
return false;
if (firstMap.Keys.Any(k1 => !secondMap.ContainsKey(k1)))
return false;
return firstMap.Keys.All(x => firstMap[x] == secondMap[x]);
}
A: Based on this answer to a duplicate question, the comments below that answer, and @brian-genisio's answer, I came up with these:
public static bool AreEquivalentIgnoringDuplicates<T>(this IEnumerable<T> items, IEnumerable<T> otherItems)
{
var itemList = items.ToList();
var otherItemList = otherItems.ToList();
var except = itemList.Except(otherItemList);
return itemList.Count == otherItemList.Count && except.IsEmpty();
}
public static bool AreEquivalent<T>(this IEnumerable<T> items, IEnumerable<T> otherItems)
{
var itemList = items.ToList();
var otherItemList = otherItems.ToList();
var except = itemList.Except(otherItemList);
return itemList.Distinct().Count() == otherItemList.Count && except.IsEmpty();
}
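Note that IsEmpty() is not a standard LINQ operator; both methods assume a small extension along these lines:
public static bool IsEmpty<T>(this IEnumerable<T> source)
{
    return !source.Any();
}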
Tests for these two:
[Test]
public void collection_with_duplicates_are_equivalent()
{
var a = new[] {1, 5, 5};
var b = new[] {1, 1, 5};
a.AreEquivalentIgnoringDuplicates(b).ShouldBe(true);
}
[Test]
public void collection_with_duplicates_are_not_equivalent()
{
var a = new[] {1, 5, 5};
var b = new[] {1, 1, 5};
a.AreEquivalent(b).ShouldBe(false);
}
A: erickson is almost right: since you want to match on counts of duplicates, you want a Bag. In Java, this looks something like:
(new HashBag(collection1)).equals(new HashBag(collection2))
I'm sure C# has a built-in Set implementation. I would use that first; if performance is a problem, you could always use a different Set implementation, but use the same Set interface.
A: There are many solutions to this problem.
If you don't care about duplicates, you don't have to sort both. First make sure that they have the same number of items. After that sort one of the collections. Then binsearch each item from the second collection in the sorted collection. If you don't find a given item stop and return false.
The complexity of this:
- sorting the first collection: O(N log N)
- searching for each item from the second in the first: O(N log N)
so you end up with O(N log N) overall, assuming that they match and you look up everything. This is similar to the complexity of sorting both. It also gives you the benefit of stopping earlier if there's a difference.
However, keep in mind that if both collections are already sorted before you step into this comparison and you sort again with something like quicksort, the sorting will be more expensive. There are optimizations for this.
Another alternative, which is great for small collections where you know the range of the elements, is to use a bitmask index. This will give you O(N) performance.
Another alternative is to use a hash and look each item up. For small collections it is usually a lot better to do the sorting or the bitmask index. Hash tables have the disadvantage of worse locality, so keep that in mind.
Again, that's only if you don't care about duplicates. If you want to account for duplicates go with sorting both.
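A minimal C# sketch of the sort-one-side-and-binary-search approach described above (duplicate-insensitive, as noted: { 1, 1, 2 } and { 1, 2, 2 } would compare equal):
static bool SetEqualBySearch<T>(IList<T> first, IList<T> second)
    where T : IComparable<T>
{
    if (first.Count != second.Count)
        return false;
    List<T> sorted = first.OrderBy(x => x).ToList(); // O(N log N)
    foreach (T item in second)                       // N binary searches: O(N log N)
    {
        if (sorted.BinarySearch(item) < 0)
            return false;                            // stop early at the first miss
    }
    return true;
}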
A: In many cases the only suitable answer is Igor Ostrovsky's; the other answers are based on objects' hash codes. But when you generate a hash code for an object, you do so based only on its IMMUTABLE fields - such as an object Id field (in the case of a database entity) -
Why is it important to override GetHashCode when Equals method is overridden?
This means that if you compare two collections, the compare method might return true even though the fields of the different items are non-equal. To deep-compare collections, you need to use Igor's method and implement IEqualityComparer.
Please read the comments by me and Mr. Schnider on his most-voted post.
James
A: Allowing for duplicates in the IEnumerable<T> (if sets are not desirable/possible) and "ignoring order", you should be able to use a .GroupBy().
I'm not an expert on the complexity measurements, but my rudimentary understanding is that this should be O(n). I understand O(n^2) as coming from performing an O(n) operation inside another O(n) operation like ListA.Where(a => ListB.Contains(a)).ToList(). Every item in ListB is evaluated for equality against each item in ListA.
Like I said, my understanding on complexity is limited, so correct me on this if I'm wrong.
public static bool IsSameAs<T, TKey>(this IEnumerable<T> source, IEnumerable<T> target, Expression<Func<T, TKey>> keySelectorExpression)
{
// check the object
if (source == null && target == null) return true;
if (source == null || target == null) return false;
var sourceList = source.ToList();
var targetList = target.ToList();
// check the list count :: { 1,1,1 } != { 1,1,1,1 }
if (sourceList.Count != targetList.Count) return false;
var keySelector = keySelectorExpression.Compile();
var groupedSourceList = sourceList.GroupBy(keySelector).ToList();
var groupedTargetList = targetList.GroupBy(keySelector).ToList();
// check that the number of groupings match :: { 1,1,2,3,4 } != { 1,1,2,3,4,5 }
var groupCountIsSame = groupedSourceList.Count == groupedTargetList.Count;
if (!groupCountIsSame) return false;
// check that the count of each group in source has the same count in target :: for values { 1,1,2,3,4 } & { 1,1,1,2,3,4 }
// key:count
// { 1:2, 2:1, 3:1, 4:1 } != { 1:3, 2:1, 3:1, 4:1 }
    var countsMismatch = groupedSourceList.Any(sourceGroup =>
    {
        // SingleOrDefault rather than Single: a key missing from the target
        // means the collections differ, not an exceptional condition
        var targetGroup = groupedTargetList.SingleOrDefault(y => y.Key.Equals(sourceGroup.Key));
        return targetGroup == null || sourceGroup.Count() != targetGroup.Count();
    });
    return !countsMismatch;
}
A: If comparing for the purpose of Unit Testing Assertions, it may make sense to throw some efficiency out the window and simply convert each list to a string representation (csv) before doing the comparison. That way, the default test Assertion message will display the differences within the error message.
Usage:
using Microsoft.VisualStudio.TestTools.UnitTesting;
// define collection1, collection2, ...
Assert.AreEqual(collection1.OrderBy(c => c).ToCsv(), collection2.OrderBy(c => c).ToCsv());
Helper Extension Method:
public static string ToCsv<T>(
    this IEnumerable<T> values,
    Func<T, string> selector = null,
    string joinSeparator = ",")
{
if (selector == null)
{
if (typeof(T) == typeof(Int16) ||
typeof(T) == typeof(Int32) ||
typeof(T) == typeof(Int64))
        {
            // ToStringInvariant is assumed to be a local extension, equivalent
            // to ToString(CultureInfo.InvariantCulture)
            selector = (v) => Convert.ToInt64(v).ToStringInvariant();
        }
else if (typeof(T) == typeof(decimal))
{
selector = (v) => Convert.ToDecimal(v).ToStringInvariant();
}
else if (typeof(T) == typeof(float) ||
typeof(T) == typeof(double))
{
selector = (v) => Convert.ToDouble(v).ToString(CultureInfo.InvariantCulture);
}
else
{
selector = (v) => v.ToString();
}
}
return String.Join(joinSeparator, values.Select(v => selector(v)));
}
A: Here's my stab at the problem. It's based on this strategy but also borrows some ideas from the accepted answer.
public static class EnumerableExtensions
{
public static bool SequenceEqualUnordered<TSource>(this IEnumerable<TSource> source, IEnumerable<TSource> second)
{
return SequenceEqualUnordered(source, second, EqualityComparer<TSource>.Default);
}
public static bool SequenceEqualUnordered<TSource>(this IEnumerable<TSource> source, IEnumerable<TSource> second, IEqualityComparer<TSource> comparer)
{
if (source == null)
throw new ArgumentNullException(nameof(source));
if (second == null)
throw new ArgumentNullException(nameof(second));
if (source.TryGetCount(out int firstCount) && second.TryGetCount(out int secondCount))
{
if (firstCount != secondCount)
return false;
if (firstCount == 0)
return true;
}
IEqualityComparer<ValueTuple<TSource>> wrapperComparer = comparer != null ? new WrappedItemComparer<TSource>(comparer) : null;
Dictionary<ValueTuple<TSource>, int> counters;
ValueTuple<TSource> key;
int counter;
using (IEnumerator<TSource> enumerator = source.GetEnumerator())
{
if (!enumerator.MoveNext())
return !second.Any();
counters = new Dictionary<ValueTuple<TSource>, int>(wrapperComparer);
do
{
key = new ValueTuple<TSource>(enumerator.Current);
if (counters.TryGetValue(key, out counter))
counters[key] = counter + 1;
else
counters.Add(key, 1);
}
while (enumerator.MoveNext());
}
foreach (TSource item in second)
{
key = new ValueTuple<TSource>(item);
if (counters.TryGetValue(key, out counter))
{
if (counter <= 0)
return false;
counters[key] = counter - 1;
}
else
return false;
}
return counters.Values.All(cnt => cnt == 0);
}
private static bool TryGetCount<TSource>(this IEnumerable<TSource> source, out int count)
{
switch (source)
{
case ICollection<TSource> collection:
count = collection.Count;
return true;
case IReadOnlyCollection<TSource> readOnlyCollection:
count = readOnlyCollection.Count;
return true;
case ICollection nonGenericCollection:
count = nonGenericCollection.Count;
return true;
default:
count = default;
return false;
}
}
private sealed class WrappedItemComparer<TSource> : IEqualityComparer<ValueTuple<TSource>>
{
private readonly IEqualityComparer<TSource> _comparer;
public WrappedItemComparer(IEqualityComparer<TSource> comparer)
{
_comparer = comparer;
}
public bool Equals(ValueTuple<TSource> x, ValueTuple<TSource> y) => _comparer.Equals(x.Item1, y.Item1);
public int GetHashCode(ValueTuple<TSource> obj) => _comparer.GetHashCode(obj.Item1);
}
}
Improvements on the MS solution:
*
*Doesn't take the ReferenceEquals(first, second) shortcut because it's kind of debatable. For example, consider a custom IEnumerable<T> which has an implementation like this: public IEnumerator<T> GetEnumerator() => Enumerable.Repeat(default(T), new Random().Next(10)).GetEnumerator().
*Takes possible shortcuts when both enumerables are collections, checking not only for ICollection<T> but also for other collection interfaces.
*Handles null values properly. Counting null values separately from the other (non-null) values also doesn't look 100% fail-safe. Consider a custom equality comparer which handles null values in a non-standard way.
This solution is also available in my utility NuGet package.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "182"
} |
Q: Looking for a SQL Transaction Log file viewer If any of you have worked with a cool tool for viewing/querying the SQL Transaction logs, please let me know. This should show all the transactional sql statements which are committed or rolled back.
For Database files, if it has some additional graphical capabilities like showing the internal Binary Tree structure of the indexes, that will be awesome but I guess I am asking for too much huh..
A: You can use the undocumented DBCC LOG command.
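Since it's undocumented, the syntax and output can change between versions; a commonly cited form (the database name and detail level here are assumptions) is:
-- second argument (0-4) controls the level of detail returned
DBCC LOG ('YourDatabaseName', 3)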
A: There's a commercial product from Lumigent called "Log Explorer". It's $995 per seat, but should cover your basic requirements.
A: This is only relevant if you're talking SQL Server 2000 but RedGate produced a free tool called SQL Log Rescue.
Otherwise, for SQL Server 2005 ApexSQLLog from ApexSQL is the only other product I'm aware of
A: You can use this query:
Select * from ::fn_dblog(null,null)
or see this link : How can I view SQL Server 2005 Transaction log file
or this link : How Do You Decode A Simple Entry in the Transaction Log?
A: There are some companies that produce log readers, like Lumigent and Red Gate. However, they do not work with SQL Server versions greater than 2000 because of metadata changes in the underlying system tables and data types. They might work if you do not use any new functionality, but if you use varchar(max), the XML datatype, etc., you are out of luck.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Does WCF raise the bar or just the complexity level? I understand the value of the three-part service/host/client model offered by WCF. But is it just me or does it seem like WCF took something pretty direct and straightforward (the ASMX model) and made a mess out of it?
Is there an alternative to using SvcUtil's command line step back in time to generate the proxy? With ASMX services a test harness was automatically provided; is there a good alternative today with WCF?
I appreciate that the WS* stuff is more tightly integrated with WCF and hope to find some payoff for WCF there, but geeze, otherwise I'm perplexed.
Also, the state of books available for WCF is abysmal at best. Juval Lowy, a superb author, has written a good O'Reilly reference book "Programming WCF Services" but it doesn't do that much (for me anyway) for learning how to use WCF. That book's precursor (and a little better organized, but not much, as a tutorial) is Michele Leroux Bustamante's Learning WCF. It has good spots but is outdated in places and its corresponding Web site is gone.
Do you have good WCF learning references besides just continuing to Google the bejebus out of things?
A: Okay, here we go. First, Michele Leroux Bustamante's book has been updated for VS2008. The website for the book is not gone. It's up right now, and it has tons of great WCF info. On that website she provides updated code compatible with VS2008 for all the examples in her book. If you order from Amazon, you will get the reprint which is updated.
WCF is not only a replacement for ASMX. Sure it can (and does quite well) replace ASMX, but the real benefit is that it allows your services to be self-hosted. Most of the functionality from WSE has been baked in from the start. The framework is highly configurable, and the ability to serve multiple endpoints over multiple protocols is amazing, IMO.
While you can still generate proxy classes from the "Add Service Reference" option, it's not necessary. All you really have to do is copy your ServiceContract interface and tell your code where to find the endpoint for the service, and that's it. You can call methods from the service with very little code. Using this method, you have complete control over the implementation. Regardless of the method you choose to generate a proxy class, Michele shows both and uses both in her excellent series of webcasts on the subject.
Michele has tons of great material out there, and I recommend you check out her website(s). Here's some links that were incredibly helpful for me as I was learning WCF. I hope that you'll come to realize how strong WCF really is, and how easy it is to implement. The learning curve is a little bit steep, but the rewards for your time investment are well worth it:
*
*Michele's webcasts: http://www.dasblonde.net/2007/06/24/WCFWebcastSeries.aspx
*Michele's book website (alive and updated for VS2008): http://www.thatindigogirl.com/
I recommend you watch at least 1 of Michele's webcasts. She is a very effective presenter, and she's obviously incredibly knowledgeable when it comes to WCF. She does a great job of demystifying the inner workings of WCF from the ground up.
A: Wait.... did you ever use .NET Remoting, cause thats the real thing its replacing. .NET Remoting is pretty complicated itself. I find WCF easier and better laid out.
A: I don't see it mentioned often enough, but you can still implement fairly simple services with WCF, very similar to ASMX services. For example:
[ServiceContract]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class SimpleService
{
[OperationContract]
public string HelloWorld()
{
return "Hello World";
}
}
You still have to register the end point in your web.config, but that's not so bad.
Eliminating the verbosity of the separated data, service, and operation contracts goes a long way toward making WCF more manageable for me.
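For reference, the endpoint registration for a service like the one above might look roughly like this in web.config (a minimal sketch; the service and contract names depend on your namespaces):
<system.serviceModel>
  <services>
    <service name="SimpleService">
      <endpoint address="" binding="basicHttpBinding" contract="SimpleService" />
    </service>
  </services>
</system.serviceModel>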
A: VS2008 includes the "Add Service Reference" context menu item which will create the proxy for you behind the scenes.
As was mentioned previously, WCF is not intended solely as a replacement for the ASMX web service types, but to provide a consistent, secure and scalable methodology for all interoperable services, whether it is over HTTP, tcp, named pipes or MSMQ transports.
I will confess that I do have other issues with WCF (e.g. re-writing method signatures when exposing a service over basicHTTP - see here), but overall I think it is a definite improvement.
A: If you're using VS2008 and create a WCF project then you automatically get a test harness when you hit run/debug and you can add a reference without having to use svcutil.
A: My initial thoughts of WCF were exactly the same! Here are some solutions:
*
*Program your own proxy/client layer utilising generics (see classes ClientBase, Binding). I've found this easy to get working, but hard to perfect.
*Use a third party implementation of 1 (SoftwareIsHardwork is my current favourite)
A: WCF is a replacement for all earlier web service technologies from Microsoft. It also does a lot more than what is traditionally considered as "web services".
WCF "web services" are part of a much broader spectrum of remote communication enabled through WCF. You will get a much higher degree of flexibility and portability doing things in WCF than through traditional ASMX because WCF is designed, from the ground up, to summarize all of the different distributed programming infrastructures offered by Microsoft. An endpoint in WCF can be communicated with just as easily over SOAP/XML as it can over TCP/binary and to change this medium is simply a configuration file mod. In theory, this reduces the amount of new code needed when porting or changing business needs, targets, etc.
ASMX is older than WCF, and anything ASMX can do so can WCF (and more). Basically you can see WCF as trying to logically group together all the different ways of getting two apps to communicate in the world of Microsoft; ASMX was just one of these many ways and so is now grouped under the WCF umbrella of capabilities.
Web Services can be accessed only over HTTP & it works in stateless environment, where WCF is flexible because its services can be hosted in different types of applications. Common scenarios for hosting WCF services are IIS,WAS, Self-hosting, Managed Windows Service.
The major difference is that Web Services Use XmlSerializer. But WCF Uses DataContractSerializer which is better in Performance as compared to XmlSerializer.
In what scenarios must WCF be used
*
*A secure service to process business transactions.
*A service that supplies current data to others, such as a traffic report or other monitoring service.
*A chat service that allows two people to communicate or exchange data in real time.
*A dashboard application that polls one or more services for data and presents it in a logical presentation.
*Exposing a workflow implemented using Windows Workflow Foundation as a WCF service.
*A Silverlight application to poll a service for the latest data feeds.
Features of WCF
*
*Service Orientation
*Interoperability
*Multiple Message Patterns
*Service Metadata
*Data Contracts
*Security
*Multiple Transports and Encodings
*Reliable and Queued Messages
*Durable Messages
*Transactions
*AJAX and REST Support
*Extensibility
source: main source of text
A: I typically use Google to find my WCF answers and commonly find myself on the following blogs:
Blogs with valuable WCF articles
*
*http://blogs.msdn.com/drnick/default.aspx
*http://blogs.msdn.com/wenlong/default.aspx
*http://blogs.thinktecture.com/buddhike/
*http://www.dasblonde.net/default.aspx
Other valuable articles I've found
*
*http://blogs.conchango.com/pauloreichert/archive/2007/02/22/WCF-Reliable-Sessions-Puzzle.aspx
*http://blogs.msdn.com/salvapatuel/archive/2007/04/25/why-using-is-bad-for-your-wcf-service-host.aspx
A: I'm having a hard time seeing when I should or would use WCF. Why? Because I put productivity and simplicity at the top of my list. Why was the ASMX model so successful? Because it worked, and you get it to work fast. And with VS 2005 and .NET 2.0, wsdl.exe was spitting out pretty nice and compliant services.
In real life you should have very few communication protocols in your architecture. This keeps it simple and maintainable. If you need access to legacy systems, write specific adapters for them so they can play along in the nice shiny and beautiful SOA world.
A: WCF is much more powerful than ASMX and it extends it in several ways. ASMX is limited to only HTTP, whereas WCF can use several protocols for its communication (granted, HTTP is still the way most people will use it, at least for services that need to be interoperable). WCF is also easier to extend. At least, it is possible to extend it in ways that ASMX cannot be extended. "Easy" may be stretching it. =)
The added functionality offered by WCF far outweighs the complexity it adds, in my opinion. I also feel that the programming model is easier. DataContracts are much nicer than having to serialize using XML serialization with public properties for everything, for example. It's also much more declarative in nature, which is also nice.
A: MSDN? I usually do pretty well with the Library reference itself, and I usually expect to find valuable articles there.
A: In terms of what it offers, I think the answer is compatibility. The ASMX services were pretty Microsofty. Not to say that they didn't try to be compatible with other consumers; but the model wasn't made to fit much besides ASP.NET web pages and some other custom Microsoft consumers. Whereas WCF, because of its architecture, allows your service to have very open-standard--based endpoints, e.g. REST, JSON, etc. in addition to the usual SOAP. Other people will probably have a much easier time consuming your WCF service than your ASMX one.
(This is all basically inferred from comparative MSDN reading, so someone who knows more should feel free to correct me.)
A: WCF should not be thought of as a replacement for ASMX. Judging at how it is positioned and how it is being used internally by Microsoft, it is really a fundamental architecture piece that is used for any type of cross-boundary communication.
A: I believe that WCF really advances ASMX web services implementation in many ways. First of all it provides a very nice layered object model that helps hide the intrinsic complexity of distributed applications.
Secondly you can have more than request-reply messaging patterns, including asynchronous notifications from server to client (impossible with pure HTTP), and thirdly abstracting away the underlying transport protocol from XML messaging and thus elegantly supporting HTTP, HTTPS, TCP and others. Backward compatibility with "1st-generation" web services is also a plus.
WCF uses XML standard as the internal representation format. This could be perceived as advantage or disadvantage, especially with the growing popularity "fat-free alternatives to XML" like JSON.
A: The difficult things I find with WCF is managing the configurations for clients and servers, and troubleshooting the not so nice faulted state exceptions.
It would be great if anyone had any shortcuts or tips for those.
A: I find it a pain, in that I have .NET at both ends, have the same "contract" dlls loaded at both ends, etc. But then I have to mess about with a lot of details like "KnownType" attributes.
WCF also defaults to only letting 1 or 2 clients connect to a service until you change lots of configuration. Changing the config from code is not easy, and shipping lots of config files is not an option, as it is too hard to merge our changes into any changes a customer may have made at the time of an upgrade (also we don't want customers playing with WCF settings!)
.NET remoting tended to just work most of the time.
I think trying to pretend that .NET-to-.NET object-based communication is the same as sending bits of text (XML) to an unknown system was a step too far.
(The few times we have used WCF to talk to a Java system, we found that the XSD that the java system gave out did not match what XML it wanted anyway, so had to hand-code a lot of the XML mappings.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
} |
Q: Using shadowbox disables keyboard shortcuts? So my site uses shadowbox to display some dynamic text. The problem is I need the user to be able to copy and paste that text.
Right-clicking and selecting copy works, but Ctrl+C doesn't (no keyboard shortcuts do), and most people use Ctrl+C. You can see an example of what I'm talking about here.
Just go to the "web" examples and click "inline". Notice keyboard shortcuts do work on the "this page" example. The only difference between the two I see is the player js files they use. "Inline" uses the html.js player and "this page" uses iframe.js. Also, I believe it uses the mootools library. Any ideas?
A: The best option is to disable keyboard navigation shortcuts in the shadowbox by setting the "enableKeys" option to false (see this page).
Alternatively you could do what Robby suggests and modify the shadowbox.js file, but only do this if you need to have the shadowbox keyboard navigation. I think that you want to search for this block of code and modify it so that it only cancels the default event if one of the shortcuts is used (I've added some line breaks and indention):
var handleKey = function(e) {
var code = SL.keyCode(e);
SL.preventDefault(e);
if (code == 81 || code == 88 || code == 27) {
SB.close()
} else {
if (code == 37) {
SB.previous()
} else {
if (code == 39) {
SB.next()
} else {
if (code == 32) {
SB[(typeof slide_timer == "number" ? "pause" : "play")]()
}
}
}
}
};
I think you could change it to look more like this:
var handleKey = function(e) {
switch (SL.keyCode(e)) {
case 81:
case 88:
case 27:
SB.close()
SL.preventDefault(e);
break;
case 37:
SB.previous()
SL.preventDefault(e);
break;
case 39:
SB.next()
SL.preventDefault(e);
break;
case 32:
SB[(typeof slide_timer == "number" ? "pause" : "play")]()
SL.preventDefault(e);
break;
}
};
This should prevent the shadowbox event handler from swallowing any keystrokes that it doesn't care about.
A: This problem is caused by some JavaScript which eats keyboard events. You can hit the escape key, for example, which is trapped by one of the .js files and causes the shadow box to close.
Your choices are to hack through the files and find the problem, or not use shadowbox. Good luck!
A: The solution is to set the enableKeys option to false. This doesn't seem to work on an open() call for inline HTML, but it does work if you set it in your init() call.
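For example (assuming the standard Shadowbox setup call):
Shadowbox.init({
    enableKeys: false // leave Ctrl+C and other browser shortcuts alone
});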
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Database abstraction layers for (Visual) C++ What options exist for accessing different databases from C++?
Put differently, what alternatives are there to ADO?
What are the pros and cons?
A: *
*Microsoft ODBC.
*The MFC ODBC classes such as CDatabase.
*OleDB (via COM).
*And you can always go through the per-RDBMS native libraries (for example, the SQL Server native library)
*DAO (don't).
*3rd party ORM providers.
I would recommend going through ODBC or OleDB by default. Native libraries really restrict you, DAO is no fun, there aren't a lot of great 3rd-party ORM for C++/Windows.
A: Although this question and its answers are several years old, they are still valuable for people like me that cruise by on an evaluation trip. For this reason, I would like to add the Qt C++ framework's QtSql module as an option for database connectivity.
Note that I am familiar with Qt in general, but have no experience with QtSql in particular.
Pros (just a few that should also apply if you just choose Qt for its QtSql module): Qt is cross-platform. In my experience, Qt is well-designed, pretty intuitive to use, and extremely well documented. It has been around for a long time, is maintained by an active community and backed by Nokia, so it won't become unavailable over night. Since 2009, Qt has been licensed under the LGPL, so it is a real no-cost option even for commercial applications.
Cons: Qt is not small. You will introduce new types such as QString to your project. Qt is licensed under the LGPL, so you need to acknowledge its use even in commercial apps.
A: One thing - if speed is important and your code doesn't need to be portable, then it may be worth it to use the native libraries.
I don't know much about SQL Server, but I do know that the Oracle OCI calls are faster than using ODBC. But, they tie you to Oracle's version of SQL. It would make sense for SQL Server to be the same way.
A: There is the POCO Data library, which supports ODBC, MySQL and SQLite. Part of the free open source POCO C++ Libraries.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Ever Heard of a License Transfer Fee upon Acquisition? My employer was recently acquired by a much larger company. In the process of sorting out all the legal details around our licenses for our development software, we have learned that the vendor of our IDE charges a "nominal" fee of 25% of the cost of a new license to transfer our existing licenses to the new corporate name.
This struck me as absurd. I have not seen such a customer-unfriendly policy from any other vendor. Has anyone else seen this type of policy? Am I way off base in considering this unfriendly and abnormal?
A: Unfriendly? Yes. Abnormal? No. It's actually very common for tools with a hefty per-seat license fee to charge for a transfer after acquisition. I believe they do it because they can: the cost of transferring licenses is either overlooked during the M&A due diligence or is considered inconsequential compared to the rest.
The tool vendor justifies the fee because they now have one less potential customer, and the combined company will be paying a lower price per seat due to volume discounts.
A: I would say you are not; I have never seen a practice like that before.
edit: Well, I must be very lucky; it seems that it is common. Very glad I have not run across this before :)
A: I've heard of it before in regards to some high-end graphics software, but this was also back in the 1990's and only applied if you sold your license to someone else.
However, it does seem to be a bit odd to change 25% of a new license to just change the name on it. I'm not a lawyer, but isn't there some way that you could get around having to change the name on the software?
A: Things like this are quite common. It all depends on the agreement between the vendor and the licensee. It's not limited to software either. Think about buying music, images, etc. I have heard of some agreements where you can't transfer the license at all. You just have to buy a new copy. The thing that has to be remembered is that technically when we buy a copy of a program, we don't "own" the copy, we just lease the use of it. It sucks at times, but that is the way it works.
A: There have been cases where the tools (capital) a company has purchased is worth more than the company, and the company is purchased and gutted just to obtain those tools at a discount.
This is bad for the company, of course, but the tool vendor especially doesn't want this to happen - they lose a potential full-price customer for software where there is no real competitor. Further, the company that originally purchased the tool doesn't mind the contract because it helps prevent acquisitions based only on getting the capital. (Corollary: If your company is negotiating out of such a contract, get ready to be purchased...)
For tools that are very, very expensive, this is not unheard of. Think 10's of thousands of dollars per seat, and you can see why this economy becomes reality. Further, sometimes tools are purchased for the company by a client (DoD) and they are actually a small company ( a few developers that won a nice contract) - if the client does not retain the license, then the company might go bust and the license sold for pennies on the dollar at an auction to pay creditors.
Etc, etc, etc. In short, very, very expensive licenses change the economic playground enough that very strange rules apply. Note that "expensive" may also mean scarce, as in the case of liquor licenses for restaurants, or otherwise difficult to get (Qualcomm might not want to sell a given company a license for their CDMA patents, but they may not be able to legally prevent that company from acquiring such a license through legal methods).
-Adam
A: I would have expected your new overlords to have been made aware of this as part of their takeover plans. Part of the process involves checking for exactly this kind of gotcha.
Sounds like they chose to ignore the information or did not check it out.
A: That sounds pretty harsh to me, but if you think about the amount of money that changes hands during acquisitions, it's probably one of those cases where your IDE vendor just gets paid without complaint most of the time, so they keep with the policy.
I can see why it shouldn't be completely free to transfer the license -- there is some (probably 'nominal') administrative work to do on the vendor's side, and they need to discourage people from transferring licenses all over the place when they really shouldn't be. But 25% seems awfully high for the amount of work and verification they need to do -- it seems like they could put some sort of cap on the license transfer fee, or have a fixed price.
It does seem like the kind of policy that would drive customers to a competitor, particularly one that does not have the same kind of draconian license transfer policy.
A: It seems that something like this could be negotiable. We have never though of "fees" as a hard nonnegotiable item. If they value your business I would bet they could discount the transfer fee. It certainly seems that some kind of fee is reasonable for administrative changes that are required. To me that should be a flat fee per license. The work required to change their database is the same no matter how much the license costs.
A: This is quite common. Unless you address this issue up front when you enter into a license you are at the mercy of the licensor when a transaction like you describe happens. The licensor may or may not have a policy to come along and charge a fee, but unless the matter is addressed in your license, they will have the legal ability to do so.
The reason is this: a license is a legal contract with a specific legal entity (your employer in this case) and grants no rights in the software to anyone else (they buyer company in your example). Now your employer could have insisted on a clause in the original agreement saying that the license could be freely transferred to a possible future buyer without fee, but without such a clause, the licensor can do what they wish. Including charging the 25% fee.
This is one reason that many companies have their licenses routinely reviewed by legal counsel who are knowledgeable about software licensing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Replace huge Case statement in Classic ASP I have a 200+ case statement in Classic ASP which is getting out of control. It sets 5 variables, based on which case is selected. Address, Phone, Name etc. Just sets the variables and nothing else in each case.
Databases or files are not an option for this application.
Coming from the Perl world I would use a hash to store this info.
A: Getting out of control? I think it's already out of control!
Can you not categorise the cases into 'x' general areas and split down into helper routines?
A: Brian, the classic ASP equivalent of a Perl hash is the Scripting.Dictionary object.
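A minimal sketch of that approach (the key names and the selectedKey variable are illustrative):
Dim contacts, info
Set contacts = Server.CreateObject("Scripting.Dictionary")

' One entry per former Case; each array holds the values to set
contacts.Add "branch1", Array("10 Main St", "555-0100", "Alice")
contacts.Add "branch2", Array("20 Oak Ave", "555-0200", "Bob")

If contacts.Exists(selectedKey) Then
    info    = contacts(selectedKey)
    address = info(0)
    phone   = info(1)
    name    = info(2)
End If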
A: Depends on what you want for performance.
The case statement is ugly but does not consume memory that would need to be allocated.
However, you could create a class for your fields and load instances of them into a Dictionary. Perform this operation in the global.asa script so it only happens once. Store the dictionary in the Application collection such that it is only allocated once but used with each page call.
My appologies for not getting too specific here... it's been a while.
A: A lot of people use VBScript for Classic ASP, but you can use JavaScript / JScript on the server as an alternative. As a matter of fact, this is my preferred way of doing Classic ASP before finally moving to .NET (except in some cases, you will have to mix in VBScript for special cases, i.e. Disconnected Recordset, ExecuteNoRecords, etc.). It will provide you with better OOP support than VBScript. Maybe you can try refactoring that into some sort of Strategy pattern afterward. Worth looking into, I guess, for better maintenance in the long run.
A: The fact that you can't migrate this over to a database or a text file is a bit of an issue as they would be the best solution for this type of data. However, if you have to have it in the code you could always try putting it into a matrix that you predefine. Then you could provide a function that returns the data from a given row in the matrix.
A: This should be done with a database, but since you said that is not an option, nothing you will write will be any less complex than a switch statement, since it's all required to live in your code (according to your terms of no db and no files).
I mean, you could use an Excel Spreadsheet if the idea of a database is too complicated but technically that would be a file as well!
A: Scripting dictionary is the best option IMHO.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Symmetric key storage My company is going to be storing sensitive data for our customers, and will be encrypting data using one of the managed .NET encryption algorithm classes. Most of the work is done, but we haven't figured out how/where to store the key. I've done some light searching and reading, and it seems like a hardware solution might be the most secure. Does anyone have any recommendations on a key storage solution or method?
Thanks for your replies, everyone.
spoulson, the issue is actually both the "scopes" that you mentioned. I suppose I should have been clearer.
The data itself, as well as the logic that encrypts it and decrypts it is abstracted away into an ASP.NET profile provider. This profile provider allows both encrypted profile properties as well as plain text ones. The encrypted property values are stored in exactly the same way the plain text ones are - with the obvious exception that they've been encrypted.
That said, the key will need to be able to be summoned for one of three reasons:
*
*The authorized web application, running on an authorized server, needs to encrypt data.
*Same as #1, but for decrypting the data.
*Authorized members of our business team need to view the encrypted data.
The way I'm imagining it is that nobody would ever actually know the key - there would be a piece of software controlling the actual encrypting and decrypting of data. That said, the key still needs to come from somewhere.
Full disclosure - if you couldn't already tell, I've never done anything like this before, so if I'm completely off base in my perception of how this should work, by all means, let me know.
A: There only two real solutions for (the technical aspect of) this problem.
Assuming it's only the application itself that needs access the key...
*
*Hardware Security Module (HSM) - usually pretty expensive, and not simple to implement. Can be dedicated appliance (e.g. nCipher) or specific token (e.g. Alladin eToken). And then you still have to define how to handle that hardware...
*DPAPI (Windows Data Protection API). There are classes for this in System.Security.Cryptography (ProtectedMemory, ProtectedData, etc). This hands off key management to the OS - and it handles it well. Used in "USER_MODE", DPAPI will lock decryption of the key to the single user that encrypted it.
(Without getting too detailed, the user's password is part of the encryption/decryption scheme - and no, changing the password does not foul it up.)
ADDED: Best to use DPAPI for protecting your master key, and not encrypting your application's data directly. And don't forget to set strong ACLs on your encrypted key...
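A hedged sketch of that master-key approach using ProtectedData (GetYourSymmetricKey is a placeholder for however the key is generated):
using System.Security.Cryptography;

byte[] masterKey = GetYourSymmetricKey(); // hypothetical: the key to protect
byte[] entropy = { 4, 2, 0, 1 };          // optional app-specific secret

// Encrypt the key; only the same Windows user can decrypt it
byte[] protectedKey = ProtectedData.Protect(
    masterKey, entropy, DataProtectionScope.CurrentUser);

// ...store protectedKey on disk, with strong ACLs as noted above...

byte[] recoveredKey = ProtectedData.Unprotect(
    protectedKey, entropy, DataProtectionScope.CurrentUser);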
A: In response to #3 of this answer from the OP
One way for authorized members to be able to view the encrypted data, but without them actually knowing the key would be to use key escrow (rsa labs) (wikipedia)
In summary the key is broken up into separate parts and given to 'trustees'. Due to the nature of private keys, each segment is useless by itself. Yet if data needs to be decrypted, the 'trustees' can assemble their segments into the whole key.
A: We have the same problem, and have been through the same process.
We need to have a process start up on one computer (client) which then logs in to a second computer (database server).
We currently believe that the best practice would be:
*
*Operator manually starts the process on client PC.
*Client PC prompts operator for his personal login credentials.
*Operator enters his credentials.
*Client PC uses these to login to the database server.
*Client PC requests its own login credentials from database server.
*Database server checks that operator's login credentials are authorised to get the client process' credentials and returns them to the client PC.
Client PC logs out of database server.
*Client PC logs back into database server using its own credentials.
Effectively, the operator's login password is the key, but it isn't stored anywhere.
A: Microsoft Rights Management Server (RMS) has a similar problem. It just solves it by encrypting its configuration with a master password. ...A password on a password, if you will.
A: Your best bet is to physically secure the hardware the key is on. Also, don't ever write it to disk - find some way to prevent that section of memory from being paged to disk. When encrypting/decrypting, the key needs to be loaded into memory, and with insecure hardware there's always this avenue of attack.
There are, like you said, hardware encryption devices but they don't scale - all encryption/decryption passes through the chip.
A: You can encrypt the symmetric key using another symmetric key that is derived from a password using something like PBKDF2.
Have the user present a password, generate a new key used to encrypt the data, generate another key using the password, then encrypt and store the data encryption key.
It isn't as secure as using a hardware token, but it might still be good enough and is pretty easy to use.
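A hedged sketch using .NET's PBKDF2 implementation, Rfc2898DeriveBytes (the iteration count and variable names are assumptions):
using System.Security.Cryptography;

// password comes from the user; salt is random and stored alongside the data
byte[] salt = new byte[16];
new RNGCryptoServiceProvider().GetBytes(salt);

// Derive a key-encryption key from the password
var kdf = new Rfc2898DeriveBytes(password, salt, 10000);
byte[] keyEncryptionKey = kdf.GetBytes(32); // 256-bit key

// Use keyEncryptionKey (e.g. with AES) to encrypt the randomly
// generated data-encryption key before storing it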
A: I think I misunderstood your question. What you're asking for is not in scope of how the application handles its key storage, but rather how your company will store it.
In that case, you have two obvious choices:
*
*Physical: Write to USB drive, burn to CD, etc. Store in physically secure location. But you run into the recursive problem: where do you store the key to the vault? Typically, you delegate 2 or more people (or a team) to hold the keys.
*Software: Cyber-Ark Private Ark is what my company uses to store its secret digital information. We store all our admin passwords, license keys, private keys, etc. It works by running a Windows "vault" server that is not joined to a domain, firewalls all ports except its own, and stores all its data encrypted on disk. Users access through a web interface that first authenticates the user, then securely communicates with the vault server via explorer-like interface. All changes and versions are logged. But, this also has the same recursive problem... a master admin access CD. This is stored in our physical vault with limited access.
A: Use a hard-coded key to encrypt the generated key before writing it out. Then you can write it anywhere.
Yes you can find the hard-coded key, but so long as you're assuming it's OK to store a symmetric key anywhere, it's not less secure.
A: Depending on your application you could use the Diffie-Hellman method for two parties to securely agree on a symmetric key.
After an initial, secure exchange, the key is agreed upon and the rest of the session (or a new session) can use this new symmetric key.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: A good algorithm similar to Levenshtein but weighted for Qwerty keyboards? I noticed some posts here on string matching, which reminded me of an old problem I'd like to solve. Does anyone have a good Levenshtein-like algorithm that is weighted toward Qwerty keyboards?
I want to compare two strings and allow for typos. Levenshtein is okay, but I'd prefer to also accept spelling errors based on the physical distance between keys on a Qwerty keyboard. In other words, the algorithm should prefer "yelephone" to "zelephone" since the "y" key is located nearer to the "t" key than to the "z" key on most keyboards.
Any help would be great... this feature isn't central to my project, so I don't want to veer off into a rat-hole when I should be doing something more productive.
A: In bioinformatics, when you align two sequences of DNA you might have a model that assigns a different cost based on whether the substitution is a transition or a transversion. This is exactly what you want, but instead of a 4x4 matrix, you want a 40x40 matrix or, dare I say, a distance function. So the cost of a replacement comes from the matrix/function, not from a constant.
CAVEAT: Be sure that deletions and insertions are weighted properly, though, so they aren't over-accepted as the minimum. You'll end up with a string of insertions/deletions/no-change-substitution characters.
The new function you are trying to minimize would be:
d[i, j] := minimum(
d[i-1, j] + del_cost,
d[i, j-1] + ins_cost,
d[i-1, j-1] + keyboard_distance( s[i], t[j] )
)
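A hedged C# sketch of this recurrence with a QWERTY-distance substitution cost (the key coordinates and the /10 scale factor are assumptions, chosen to keep substitution costs comparable to the insertion/deletion cost of 1):
using System;

static class QwertyLevenshtein
{
    // Approximate QWERTY key positions; each row is shifted half a key
    // to the right of the row above it.
    private static readonly string[] Rows = { "qwertyuiop", "asdfghjkl", "zxcvbnm" };

    private static bool TryGetCoord(char c, out double x, out double y)
    {
        c = char.ToLowerInvariant(c);
        for (int row = 0; row < Rows.Length; row++)
        {
            int col = Rows[row].IndexOf(c);
            if (col >= 0) { x = col + 0.5 * row; y = row; return true; }
        }
        x = y = 0;
        return false; // not a letter key (digit, punctuation, ...)
    }

    // Substitution cost scaled into roughly [0, 1]
    private static double SubstitutionCost(char a, char b)
    {
        if (char.ToLowerInvariant(a) == char.ToLowerInvariant(b))
            return 0;
        if (!TryGetCoord(a, out double ax, out double ay) ||
            !TryGetCoord(b, out double bx, out double by))
            return 1; // fall back to plain Levenshtein for unknown keys
        double dist = Math.Sqrt((ax - bx) * (ax - bx) + (ay - by) * (ay - by));
        return Math.Min(1.0, dist / 10.0);
    }

    public static double Distance(string s, string t)
    {
        var d = new double[s.Length + 1, t.Length + 1];
        for (int i = 0; i <= s.Length; i++) d[i, 0] = i; // deletions
        for (int j = 0; j <= t.Length; j++) d[0, j] = j; // insertions
        for (int i = 1; i <= s.Length; i++)
            for (int j = 1; j <= t.Length; j++)
                d[i, j] = Math.Min(
                    Math.Min(d[i - 1, j] + 1,   // deletion
                             d[i, j - 1] + 1),  // insertion
                    d[i - 1, j - 1] + SubstitutionCost(s[i - 1], t[j - 1]));
        return d[s.Length, t.Length];
    }
}
With this, Distance("telephone", "yelephone") comes out well below Distance("telephone", "zelephone"), matching the preference in the question.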
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: How to promote WCF to a non-techie? How would you describe and promote WCF as a technology to a non-technical client/manager/CEO/etc?
What are competing solutions or ideas that they might bring up(such as those they read about in their magazines touting new technology)?
What is WCF not good for that you've seen people try to shoehorn it into?
-Adam
A: Comparing with .asmx: WCF is the next generation of Microsoft's Web service development platform, which addresses many of the issues with older versions, specifically:
*
*better interoperation, so you can interoperate with Web services that aren't from Microsoft or that are published on the Internet
*much more flexible, so it's easier and faster for developers to get their jobs done
*easier to configure without changing code, reducing the cost of maintenance significantly
It may be that they raise the question of how it relates to SOA, a "service-oriented architecture". WCF is the Microsoft solution for creating applications that participate in these distributed systems.
A: Tell them it'll let you do your job more easily, which translates into less time and less money.
A: In a single sentence, I'd say that WCF is "software that lets you set up and manage communication between systems a lot more efficiently than in the past".
I can see them bringing up BizTalk as a competitor, but of course you could say that WCF works with it and is in fact used as base technology for it in the more recent versions.
I'm not sure if I can think of any inappropriate shoe-horning of WCF that I have seen, although there are plenty of legacy apps that will probably be "upgraded" to WCF that don't really need to be for any real business reason.
A: There is an inter-op angle as well. If you upgrade your Asmx services to WCF services, you can still honor your Asmx clients and then start moving forward with newer WCF clients. WCF is starting to get some ReST attention, RSS is there, and Silverlight has a place with WCF. Performance is better, depending on the bindings you choose. One of the big drawbacks is a steeper learning curve compared to Asmx services, the great power/great responsibility problem, and then the 101 ways to do the same thing.
None of this is CxO talk, but you can refactor the language into magazine buzzwords so that they can see the future of this technology.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Why does a "file exists" method in many languages return true for a directory? I know that it does in PHP, and I'm pretty sure it does in Java. I haven't used the latest versions of .NET, so I won't speak for them. It seems very awkward, but I was wondering if there was an underlying reason for this.
A: One reason is compatibility - anyone who has done a 'check for existence' knows to exclude directories; changing that behaviour may confuse those who rely on it.
Secondly, the underlying code often checks the operating system's catalog of filesystem entries for existence - and to the OS, a directory is the same as a file. In other words, it's looking for an entry named 'xyz' in the catalog, not specifically a file named 'xyz'.
Backwards compatibility is the main reason, I suspect.
A: There is also a formal reason why a directory is a file:
Files (or links to files) can be located in directories. However, more generally, a directory can contain either a list of files or a list of links to files. Within this definition, it is of paramount importance that the term "file" includes directories. This permits the existence of directory hierarchies, i.e., directories containing subdirectories.
From Wikipedia, ‘Computer file’
A: It is common to use a “file exists” function to check a path before writing to it. In this use case, the type of file is irrelevant: if there is a directory called “/home/foo”, you won't be able to create a file called “/home/foo”.
Also, PHP, one of the languages you mentioned, provides several functions depending on what kind(s) of file you care about (a quick .NET comparison follows the list below):
*
*file_exists() will return TRUE for files, directories and symbolic links
*is_file() will return TRUE for files, but FALSE for directories and sym links
*is_dir() will return TRUE for directories, but FALSE for files and sym links
*is_link() will return TRUE for symbolic links, but FALSE for files and directories
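For comparison, .NET (which the question also mentions) splits the check by entry type instead of overloading one call; a quick sketch:
using System.IO;

class ExistsDemo
{
    static void Main()
    {
        string path = @"C:\temp\foo"; // hypothetical path
        bool isFile = File.Exists(path);      // false if foo is a directory
        bool isDir = Directory.Exists(path);  // false if foo is a plain file
        bool existsAtAll = isFile || isDir;   // "exists at all" needs both
    }
}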
A: Part of the Unix philosophy is that "everything is a file". This has influenced other environments as well to some degree.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Best way to convert DateTime to "n Hours Ago" in SQL I wrote a SQL function to convert a datetime value in SQL to a friendlier "n Hours Ago" or "n Days Ago" etc type of message. And I was wondering if there was a better way to do it.
(Yes I know "don't do it in SQL" but for design reasons I have to do it this way).
Here is the function I've written:
CREATE FUNCTION dbo.GetFriendlyDateTimeValue
(
@CompareDate DateTime
)
RETURNS nvarchar(48)
AS
BEGIN
DECLARE @Now DateTime
DECLARE @Hours int
DECLARE @Suff nvarchar(256)
DECLARE @Found bit
SET @Found = 0
SET @Now = getDate()
SET @Hours = DATEDIFF(MI, @CompareDate, @Now)/60
IF @Hours <= 1
BEGIN
SET @Suff = 'Just Now'
SET @Found = 1
RETURN @Suff
END
IF @Hours < 24
BEGIN
SET @Suff = ' Hours Ago'
SET @Found = 1
END
IF @Hours >= 8760 AND @Found = 0
BEGIN
SET @Hours = @Hours / 8760
SET @Suff = ' Years Ago'
SET @Found = 1
END
IF @Hours >= 720 AND @Found = 0
BEGIN
SET @Hours = @Hours / 720
SET @Suff = ' Months Ago'
SET @Found = 1
END
IF @Hours >= 168 AND @Found = 0
BEGIN
SET @Hours = @Hours / 168
SET @Suff = ' Weeks Ago'
SET @Found = 1
END
IF @Hours >= 24 AND @Found = 0
BEGIN
SET @Hours = @Hours / 24
SET @Suff = ' Days Ago'
SET @Found = 1
END
RETURN Convert(nvarchar, @Hours) + @Suff
END
A: As you say, I probably wouldn't do it in SQL, but as a thought exercise have a MySQL implementation:
CASE
WHEN compare_date between date_sub(now(), INTERVAL 60 minute) and now()
THEN concat(minute(TIMEDIFF(now(), compare_date)), ' minutes ago')
WHEN datediff(now(), compare_date) = 1
THEN 'Yesterday'
WHEN compare_date between date_sub(now(), INTERVAL 24 hour) and now()
THEN concat(hour(TIMEDIFF(NOW(), compare_date)), ' hours ago')
ELSE concat(datediff(now(), compare_date),' days ago')
END
Based on a similar sample seen on the MySQL Date and Time manual pages
A: In Oracle:
select
CC.MOD_DATETIME,
'Last modified ' ||
case when (sysdate - cc.mod_datetime) < 1
then round((sysdate - CC.MOD_DATETIME)*24) || ' hours ago'
when (sysdate - CC.MOD_DATETIME) between 1 and 7
then round(sysdate-CC.MOD_DATETIME) || ' days ago'
when (sysdate - CC.MOD_DATETIME) between 8 and 365
then round((sysdate - CC.MOD_DATETIME) / 7) || ' weeks ago'
when (sysdate - CC.MOD_DATETIME) > 365
then round((sysdate - CC.MOD_DATETIME) / 365) || ' years ago'
end
from
customer_catalog CC
A: My attempt - this is for MS SQL. It supports 'ago' and 'from now', pluralization and it doesn't use rounding or datediff, but truncation -- datediff gives 1 month diff between 8/30 and 9/1 which is probably not what you want. Rounding gives 1 month diff between 9/1 and 9/16. Again, probably not what you want.
CREATE FUNCTION dbo.GetFriendlyDateTimeValue( @CompareDate DATETIME ) RETURNS NVARCHAR(48) AS BEGIN
declare @s nvarchar(48)
set @s='Now'
select top 1 @s=convert(nvarchar,abs(n))+' '+s+case when abs(n)>1 then 's' else '' end+case when n>0 then ' ago' else ' from now' end from (
select convert(int,(convert(float,(getdate()-@comparedate))*n)) as n, s from (
select 1/365 as n, 'Year' as s union all
select 1/30, 'Month' union all
select 1, 'Day' union all
select 7, 'Week' union all
select 24, 'Hour' union all
select 24*60, 'Minute' union all
select 24*60*60, 'Second'
) k
) j where abs(n)>0 order by abs(n)
return @s
END
A: Your code looks functional. As for a better way, that is going to get subjective. You might want to check out this page as it deals with time spans in SQL.
A: How about this? You could expand this pattern to do "years" messages, and you could put in a check for "1 day" or "1 hour" so it wouldn't say "1 days ago"...
I like the CASE statement in SQL.
drop function dbo.time_diff_message
GO
create function dbo.time_diff_message (
@input_date datetime
)
returns varchar(200)
as
begin
declare @msg varchar(200)
declare @hourdiff int
set @hourdiff = datediff(hour, @input_date, getdate())
set @msg = case when @hourdiff < 0 then ' from now' else ' ago' end
set @hourdiff = abs(@hourdiff)
set @msg = case when @hourdiff > 24 then convert(varchar, @hourdiff/24) + ' days' + @msg
else convert(varchar, @hourdiff) + ' hours' + @msg
end
return @msg
end
GO
select dbo.time_diff_message('Dec 7 1941')
A: Thanks for the various code posted above.
As Hafthor pointed out there are limitations of the original code to do with rounding. I also found that some of the results his code kicked out didn't match with what I'd expect e.g. Friday afternoon -> Monday morning would show as '2 days ago'. I think we'd all call that 3 days ago, even though 3 complete 24 hour periods haven't elapsed.
So I've amended the code (this is MS SQL). Disclaimer: I am a novice TSQL coder so this is quite hacky, but works!!
I've done some overrides - e.g. anything up to 2 weeks is expressed in days. Anything over that up to 2 months is expressed in weeks. Anything over that is in months etc. Just seemed like the intuitive way to express it.
CREATE FUNCTION [dbo].[GetFriendlyDateTimeValue]( @CompareDate DATETIME ) RETURNS NVARCHAR(48) AS BEGIN
declare @s nvarchar(48)
set @s='Now'
select top 1 @s=convert(nvarchar,abs(n))+' '+s+case when abs(n)>1 then 's' else '' end+case when n>0 then ' ago' else ' from now' end from (
select convert(int,(convert(float,(getdate()-@comparedate))*n)) as n, s from (
select 1/365 as n, 'year' as s union all
select 1/30, 'month' union all
select 1/7, 'week' union all
select 1, 'day' union all
select 24, 'hour' union all
select 24*60, 'minute' union all
select 24*60*60, 'second'
) k
) j where abs(n)>0 order by abs(n)
if @s like '%days%'
BEGIN
-- if over 2 months ago then express in months
IF convert(nvarchar,DATEDIFF(MM, @CompareDate, GETDATE())) >= 2
BEGIN
select @s = convert(nvarchar,DATEDIFF(MM, @CompareDate, GETDATE())) + ' months ago'
END
-- if over 2 weeks ago then express in weeks, otherwise express as days
ELSE IF convert(nvarchar,DATEDIFF(DD, @CompareDate, GETDATE())) >= 14
BEGIN
select @s = convert(nvarchar,DATEDIFF(WK, @CompareDate, GETDATE())) + ' weeks ago'
END
ELSE
select @s = convert(nvarchar,DATEDIFF(DD, @CompareDate, GETDATE())) + ' days ago'
END
return @s
END
A: The posts above gave me some good ideas so here is another function for anyone using SQL Server 2012.
CREATE FUNCTION [dbo].[FN_TIME_ELAPSED]
(
@TIMESTAMP DATETIME
)
RETURNS VARCHAR(50)
AS
BEGIN
RETURN
(
SELECT TIME_ELAPSED =
CASE
WHEN @TIMESTAMP IS NULL THEN NULL
WHEN MINUTES_AGO < 60 THEN CONCAT(MINUTES_AGO, ' minutes ago')
WHEN HOURS_AGO < 24 THEN CONCAT(HOURS_AGO, ' hours ago')
WHEN DAYS_AGO < 365 THEN CONCAT(DAYS_AGO, ' days ago')
ELSE CONCAT(YEARS_AGO, ' years ago') END
FROM ( SELECT MINUTES_AGO = DATEDIFF(MINUTE, @TIMESTAMP, GETDATE()) ) TIMESPAN_MIN
CROSS APPLY ( SELECT HOURS_AGO = DATEDIFF(HOUR, @TIMESTAMP, GETDATE()) ) TIMESPAN_HOUR
CROSS APPLY ( SELECT DAYS_AGO = DATEDIFF(DAY, @TIMESTAMP, GETDATE()) ) TIMESPAN_DAY
CROSS APPLY ( SELECT YEARS_AGO = DATEDIFF(YEAR, @TIMESTAMP, GETDATE()) ) TIMESPAN_YEAR
)
END
GO
And the implementation:
SELECT TIME_ELAPSED = DBO.FN_TIME_ELAPSED(AUDIT_TIMESTAMP)
FROM SOME_AUDIT_TABLE
A: CASE WHEN datediff(SECOND,OM.OrderDate,GETDATE()) < 60 THEN
CONVERT(NVARCHAR(MAX),datediff(SECOND,OM.OrderDate,GETDATE())) +' seconds ago'
WHEN datediff(MINUTE,OM.OrderDate,GETDATE()) < 60 THEN
CONVERT(NVARCHAR(MAX),datediff(MINUTE,OM.OrderDate,GETDATE())) +' minutes ago'
WHEN datediff(HOUR,OM.OrderDate,GETDATE()) < 24 THEN
CONVERT(NVARCHAR(MAX),datediff(HOUR,OM.OrderDate,GETDATE())) +' hours ago'
WHEN datediff(DAY,OM.OrderDate,GETDATE()) < 8 THEN
CONVERT(NVARCHAR(MAX),datediff(DAY,OM.OrderDate,GETDATE())) +' Days ago'
ELSE FORMAT(OM.OrderDate,'dd/MM/yyyy hh:mm tt') END AS TimeStamp
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What is the best way to display a status message in WPF? I have several wpf pages with update/delete/add buttons. I want to display to the user messages like "successful delete", etc. How can I best implement this so the message is defined in a single place (similar to an asp.net master page) and I can update this message from anywhere?
A: You may want to consider doing a publish/subscribe ("Observer" pattern) -- define a "status changed" event on a base page, and create a custom control that sets up a delegate and event handler to listen for status updates.
Then you could drop the custom control on any page that inherits from the base, and it would automatically listen for and display status messages whenever the event is fired.
Edit: I put together a sample implementation of this pattern and published a blog post walking through the code.
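Separate from that blog post (whose code is not reproduced here), a minimal sketch of the publish/subscribe idea; the StatusBroker name and members are hypothetical, and the null-conditional invoke assumes C# 6:
using System;

// Pages publish; the shared status control subscribes once.
public static class StatusBroker
{
    public static event Action<string> StatusChanged;

    public static void Publish(string message)
    {
        StatusChanged?.Invoke(message);
    }
}

// In the status control's code-behind (StatusText is a hypothetical TextBlock):
//   StatusBroker.StatusChanged += msg => StatusText.Text = msg;
// From any page, after a successful operation:
//   StatusBroker.Publish("Successful delete");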
A: I don't think you have the ASP.Net master pages translated to the WPF Page world just yet.
As a workaround till MS gets there, I would probably put a Control at the top of the page (or wherever) that just displays a particular user-level "Application Setting". You can update the string property like:
MyAppUserSettings.StatusMessage = "You just deleted the administrator!"
Crude but will get the job done I think!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Interprocess communication for Windows in C# (.NET 2.0) I've never had to do IPC on Windows before. I'm developing a pair of programs, a standard GUI/CLI app, and a windows service. The app has to tell the service what to do. So, assuming the communication is local only, what would be the best communication method for these two processes?
By best I mean more robust and less error prone, not the best performance nor the easiest to code.
Note I'm asking about what to use, a standard TCP socket, named pipes, or some other means of communication only.
A: IPC in .Net can be achieved using:
WCF
Using named pipes requires .Net 3.0 and above.
Code example
*
*The WCF class NetNamedPipeBinding can be used for interprocess communication on the same machine. The MSDN documentation for this class includes a code sample covering this scenario http://msdn.microsoft.com/en-us/library/system.servicemodel.netnamedpipebinding.aspx (a minimal sketch follows below)
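A minimal sketch of that approach, with host and client squeezed into one listing for brevity (the contract name and pipe URI are made up for illustration, and WCF itself requires .Net 3.0+):
using System;
using System.ServiceModel;

[ServiceContract]
public interface IControlService
{
    [OperationContract]
    string Ping(string message);
}

public class ControlService : IControlService
{
    public string Ping(string message) { return "pong: " + message; }
}

class PipeDemo
{
    static void Main()
    {
        // Host side: in the question's scenario this runs in the Windows service.
        using (var host = new ServiceHost(typeof(ControlService),
                                          new Uri("net.pipe://localhost/control")))
        {
            host.AddServiceEndpoint(typeof(IControlService),
                                    new NetNamedPipeBinding(), "");
            host.Open();

            // Client side: the GUI/CLI app.
            var factory = new ChannelFactory<IControlService>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/control"));
            IControlService proxy = factory.CreateChannel();
            Console.WriteLine(proxy.Ping("hello"));
            factory.Close();
        }
    }
}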
Remoting
The original IPC framework released with .Net 1.0. I believe remoting is no longer being actively developed, and you are encouraged to use WCF instead.
Code example
Inter-process communication via Remoting - uses a tcp channel
Resources
*
*GenuineChannels sells a remoting toolkit that includes a Shared Memory Channel. http://www.genuinechannels.com/Index.aspx
*Ingo Rammer wrote the definitive .Net remoting book, Advanced .NET Remoting, Second Edition
Win32 RPC using csharptest-net RpcLibrary
I came across a project recently that has wrapped the Win32 RPC library and created a .net class library that can be used for local and remote RPC
Project home page: http://csharptest.net/projects/rpclibrary/
MSDN references:
*
*How rpc works: http://technet.microsoft.com/en-us/library/cc738291(v=ws.10).aspx
*RPC functions: http://msdn.microsoft.com/en-us/library/aa378623(v=VS.85).aspx
Also has a google protocol buffers rpc client that runs on top of the library: https://code.google.com/p/protobuf-csharp-rpc/
WM_COPYDATA
For completeness it's also possible to use the WIN32 method with the WM_COPYDATA message. I've used this method before in .Net 1.1 to create a single instance application opening multiple files from windows explorer.
Resources
*
*MSDN - WM_COPYDATA
*Code example
*PInvoke.net declaration
Sockets
Using a custom protocol (harder)
A: For local only, we have had success using Named Pipes. Avoids the overhead of TCP, and is pretty much (at least for .NET) as efficient as you can get while also having a decent API to work with.
A: Since you are limited to .Net 2.0, WCF is perhaps not an option. You could use .Net remoting with shared memory as the underlying communication mechanism between app domains on the same machine. Using this approach, you can easily put your processes on different machines and replace the shared memory protocol with a network protocol.
A: The standard method of communicating with a Windows service is to use service control codes. Windows services can receive codes from 0 to 255; 0-127 are reserved for the system, and 128 to 255 can be used for custom commands.
If you need to send complex objects to the service, use a database, XML, a file, TCP, HTTP, etc. Other than that, for sending control commands like "reload configuration" or "process items", these control codes should be used.
There is additional functionality available, such as querying the service. See the Windows service documentation and API.
http://arcanecode.com/2007/05/30/windows-services-in-c-sending-commands-to-your-windows-service-part-7/
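A short sketch of both sides of that mechanism (the service class, command number 130, and service name are arbitrary examples):
using System.ServiceProcess;

// Inside the service: commands 128-255 arrive here.
public class MyService : ServiceBase
{
    private const int ReloadConfigCommand = 130; // arbitrary example

    protected override void OnCustomCommand(int command)
    {
        if (command == ReloadConfigCommand)
        {
            // reload configuration here
        }
    }
}

// From the app, send the code via ServiceController:
//   using (var sc = new ServiceController("MyServiceName"))
//       sc.ExecuteCommand(130);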
A: Your best bet is to use WCF. You will be able to create a service host in the Windows service and expose a well-defined interface that the GUI application can consume. WCF will let you communicate via named pipes if you choose, or you can choose any other communication protocol like TCP, HTTP, etc. Using WCF you get great tool support and lots of available information.
A: I'd like to add to this discussion. Please rebuke me if this is way out there - but couldn't a semaphore (or multiple semaphores) be used for rudimentary communication?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53"
} |
Q: How to show all shared libraries used by executables in Linux? I'd like to know which libraries are used by executables on my system. More specifically, I'd like to rank which libraries are used the most, along with the binaries that use them. How can I do this?
A: I didn't have ldd on my ARM toolchain so I used objdump:
$(CROSS_COMPILE)objdump -p
For instance:
objdump -p /usr/bin/python:
Dynamic Section:
NEEDED libpthread.so.0
NEEDED libdl.so.2
NEEDED libutil.so.1
NEEDED libssl.so.1.0.0
NEEDED libcrypto.so.1.0.0
NEEDED libz.so.1
NEEDED libm.so.6
NEEDED libc.so.6
INIT 0x0000000000416a98
FINI 0x000000000053c058
GNU_HASH 0x0000000000400298
STRTAB 0x000000000040c858
SYMTAB 0x0000000000402aa8
STRSZ 0x0000000000006cdb
SYMENT 0x0000000000000018
DEBUG 0x0000000000000000
PLTGOT 0x0000000000832fe8
PLTRELSZ 0x0000000000002688
PLTREL 0x0000000000000007
JMPREL 0x0000000000414410
RELA 0x0000000000414398
RELASZ 0x0000000000000078
RELAENT 0x0000000000000018
VERNEED 0x0000000000414258
VERNEEDNUM 0x0000000000000008
VERSYM 0x0000000000413534
A: On a UNIX system, suppose the binary (executable) name is test. Then the command to list the libraries used by it is
ldd test
A: On Linux I use:
lsof -P -T -p Application_PID
This works better than ldd when the executable uses a non-default loader
A: to learn what libraries a binary uses, use ldd
ldd path/to/the/tool
You'd have to write a little shell script to get to your system-wide breakdown.
A: With ldd you can get the libraries that tools use. To rank the usage of libraries for a set of tool you can use something like the following command.
ldd /bin/* /usr/bin/* ... | sed -e '/^[^\t]/ d; s/^\t\(.* => \)\?\([^ ]*\) (.*/\2/g' | sort | uniq -c
(Here sed strips all lines that do not start with a tab and the filters out only the actual libraries. With sort | uniq -c you get each library with a count indicating the number of times it occurred.)
You might want to add sort -g at the end to get the libraries in order of usage.
Note that you will probably get two non-library lines with the above command: one for static executables ("not a dynamic executable") and one for executables without any library. The latter is the result of linux-gate.so.1, which is not a library in your file system but one "supplied" by the kernel.
A: *
*Use ldd to list shared libraries for each executable.
*Cleanup the output
*Sort, compute counts, sort by count
To find the answer for all executables in the "/bin" directory:
find /bin -type f -perm /a+x -exec ldd {} \; \
| grep so \
| sed -e '/^[^\t]/ d' \
| sed -e 's/\t//' \
| sed -e 's/.*=..//' \
| sed -e 's/ (0.*)//' \
| sort \
| uniq -c \
| sort -n
Change "/bin" above to "/" to search all directories.
Output (for just the /bin directory) will look something like this:
1 /lib64/libexpat.so.0
1 /lib64/libgcc_s.so.1
1 /lib64/libnsl.so.1
1 /lib64/libpcre.so.0
1 /lib64/libproc-3.2.7.so
1 /usr/lib64/libbeecrypt.so.6
1 /usr/lib64/libbz2.so.1
1 /usr/lib64/libelf.so.1
1 /usr/lib64/libpopt.so.0
1 /usr/lib64/librpm-4.4.so
1 /usr/lib64/librpmdb-4.4.so
1 /usr/lib64/librpmio-4.4.so
1 /usr/lib64/libsqlite3.so.0
1 /usr/lib64/libstdc++.so.6
1 /usr/lib64/libz.so.1
2 /lib64/libasound.so.2
2 /lib64/libblkid.so.1
2 /lib64/libdevmapper.so.1.02
2 /lib64/libpam_misc.so.0
2 /lib64/libpam.so.0
2 /lib64/libuuid.so.1
3 /lib64/libaudit.so.0
3 /lib64/libcrypt.so.1
3 /lib64/libdbus-1.so.3
4 /lib64/libresolv.so.2
4 /lib64/libtermcap.so.2
5 /lib64/libacl.so.1
5 /lib64/libattr.so.1
5 /lib64/libcap.so.1
6 /lib64/librt.so.1
7 /lib64/libm.so.6
9 /lib64/libpthread.so.0
13 /lib64/libselinux.so.1
13 /lib64/libsepol.so.1
22 /lib64/libdl.so.2
83 /lib64/ld-linux-x86-64.so.2
83 /lib64/libc.so.6
Edit - Removed "grep -P"
A: On Ubuntu, print the packages related to an executable:
ldd executable_name|awk '{print $3}'|xargs dpkg -S |awk -F ":" '{print $1}'
A: One more option is to just read the file located at
/proc/<pid>/maps
For example, if the process id is 2601, then the command is
cat /proc/2601/maps
And the output is like
7fb37a8f2000-7fb37a8f4000 r-xp 00000000 08:06 4065647 /usr/lib/x86_64-linux-gnu/libproxy/0.4.15/modules/network_networkmanager.so
7fb37a8f4000-7fb37aaf3000 ---p 00002000 08:06 4065647 /usr/lib/x86_64-linux-gnu/libproxy/0.4.15/modules/network_networkmanager.so
7fb37aaf3000-7fb37aaf4000 r--p 00001000 08:06 4065647 /usr/lib/x86_64-linux-gnu/libproxy/0.4.15/modules/network_networkmanager.so
7fb37aaf4000-7fb37aaf5000 rw-p 00002000 08:06 4065647 /usr/lib/x86_64-linux-gnu/libproxy/0.4.15/modules/network_networkmanager.so
7fb37aaf5000-7fb37aafe000 r-xp 00000000 08:06 4065646 /usr/lib/x86_64-linux-gnu/libproxy/0.4.15/modules/config_gnome3.so
7fb37aafe000-7fb37acfd000 ---p 00009000 08:06 4065646 /usr/lib/x86_64-linux-gnu/libproxy/0.4.15/modules/config_gnome3.so
7fb37acfd000-7fb37acfe000 r--p 00008000 08:06 4065646 /usr/lib/x86_64-linux-gnu/libproxy/0.4.15/modules/config_gnome3.so
7fb37acfe000-7fb37acff000 rw-p 00009000 08:06 4065646 /usr/lib/x86_64-linux-gnu/libproxy/0.4.15/modules/config_gnome3.so
7fb37acff000-7fb37ad1d000 r-xp 00000000 08:06 3416761 /usr/lib/x86_64-linux-gnu/libproxy.so.1.0.0
7fb37ad1d000-7fb37af1d000 ---p 0001e000 08:06 3416761 /usr/lib/x86_64-linux-gnu/libproxy.so.1.0.0
7fb37af1d000-7fb37af1e000 r--p 0001e000 08:06 3416761 /usr/lib/x86_64-linux-gnu/libproxy.so.1.0.0
7fb37af1e000-7fb37af1f000 rw-p 0001f000 08:06 3416761 /usr/lib/x86_64-linux-gnu/libproxy.so.1.0.0
7fb37af1f000-7fb37af21000 r-xp 00000000 08:06 4065186 /usr/lib/x86_64-linux-gnu/gio/modules/libgiolibproxy.so
7fb37af21000-7fb37b121000 ---p 00002000 08:06 4065186 /usr/lib/x86_64-linux-gnu/gio/modules/libgiolibproxy.so
7fb37b121000-7fb37b122000 r--p 00002000 08:06 4065186 /usr/lib/x86_64-linux-gnu/gio/modules/libgiolibproxy.so
7fb37b122000-7fb37b123000 rw-p 00003000 08:06 4065186 /usr/lib/x86_64-linux-gnu/gio/modules/libgiolibproxy.so
A: If you don't care about the path to the executable file -
ldd `which <executable>` # back quotes, not single quotes
A: Check shared library dependencies of a program executable
To find out what libraries a particular executable depends on, you can use the ldd command. This command invokes the dynamic linker to find out the library dependencies of an executable.
$ ldd /path/to/program
Note that it is NOT recommended to run ldd on any untrusted third-party executable because some versions of ldd may directly invoke the executable to identify its library dependencies, which can be a security risk.
Instead, a safer way to show library dependencies of an unknown application binary is to use the following command.
$ objdump -p /path/to/program | grep NEEDED
for more info
A: readelf -d recursion
redelf -d produces similar output to objdump -p which was mentioned at: https://stackoverflow.com/a/15520982/895245
But beware that dynamic libraries can depend on other dynamic libraries, to you have to recurse.
Example:
readelf -d /bin/ls | grep 'NEEDED'
Sample output:
0x0000000000000001 (NEEDED) Shared library: [libselinux.so.1]
0x0000000000000001 (NEEDED) Shared library: [libacl.so.1]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
Then:
$ locate libselinux.so.1
/lib/i386-linux-gnu/libselinux.so.1
/lib/x86_64-linux-gnu/libselinux.so.1
/mnt/debootstrap/lib/x86_64-linux-gnu/libselinux.so.1
Choose one, and repeat:
readelf -d /lib/x86_64-linux-gnu/libselinux.so.1 | grep 'NEEDED'
Sample output:
0x0000000000000001 (NEEDED) Shared library: [libpcre.so.3]
0x0000000000000001 (NEEDED) Shared library: [libdl.so.2]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
0x0000000000000001 (NEEDED) Shared library: [ld-linux-x86-64.so.2]
And so on.
/proc/<pid>/maps for running processes
This is useful to find all the libraries currently being used by running executables. E.g.:
sudo awk '/\.so/{print $6}' /proc/1/maps | sort -u
shows all currently loaded dynamic dependencies of init (PID 1):
/lib/x86_64-linux-gnu/ld-2.23.so
/lib/x86_64-linux-gnu/libapparmor.so.1.4.0
/lib/x86_64-linux-gnu/libaudit.so.1.0.0
/lib/x86_64-linux-gnu/libblkid.so.1.1.0
/lib/x86_64-linux-gnu/libc-2.23.so
/lib/x86_64-linux-gnu/libcap.so.2.24
/lib/x86_64-linux-gnu/libdl-2.23.so
/lib/x86_64-linux-gnu/libkmod.so.2.3.0
/lib/x86_64-linux-gnu/libmount.so.1.1.0
/lib/x86_64-linux-gnu/libpam.so.0.83.1
/lib/x86_64-linux-gnu/libpcre.so.3.13.2
/lib/x86_64-linux-gnu/libpthread-2.23.so
/lib/x86_64-linux-gnu/librt-2.23.so
/lib/x86_64-linux-gnu/libseccomp.so.2.2.3
/lib/x86_64-linux-gnu/libselinux.so.1
/lib/x86_64-linux-gnu/libuuid.so.1.3.0
This method also shows libraries opened with dlopen, tested with this minimal setup hacked up with a sleep(1000) on Ubuntu 18.04.
See also: https://superuser.com/questions/310199/see-currently-loaded-shared-objects-in-linux/1243089
A: On OS X by default there is no ldd, objdump or lsof. As an alternative, try otool -L:
$ otool -L `which openssl`
/usr/bin/openssl:
/usr/lib/libcrypto.0.9.8.dylib (compatibility version 0.9.8, current version 0.9.8)
/usr/lib/libssl.0.9.8.dylib (compatibility version 0.9.8, current version 0.9.8)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1213.0.0)
In this example, using which openssl fills in the fully qualified path for the given executable and current user environment.
A: I found this post very helpful as I needed to investigate dependencies from a 3rd party supplied library (32 vs 64 bit execution path(s)).
I put together a Q&D recursing bash script based on the 'readelf -d' suggestion on a RHEL 6 distro.
It is very basic and will test every dependency every time, even if it might have been tested before (i.e. very verbose). Output is very basic too.
#! /bin/bash
recurse ()
# Param 1 is the number of spaces that the output will be prepended with
# Param 2 is the full path to the library
{
#Use 'readelf -d' to find dependencies
dependencies=$(readelf -d ${2} | grep NEEDED | awk '{ print $5 }' | tr -d '[]')
for d in $dependencies; do
echo "${1}${d}"
nm=${d##*/}
#libstdc++ hack for the '+'-s
nm1=${nm//"+"/"\+"}
# /lib, /lib64, /usr/lib and /usr/lib64 are searched
children=$(locate ${d} | grep -E "(^/(lib|lib64|usr/lib|usr/lib64)/${nm1})")
rc=$?
#at least locate... didn't fail
if [ ${rc} == "0" ] ; then
#we have at least one dependency
if [ ${#children[@]} -gt 0 ]; then
#check the dependency's dependencies
for c in $children; do
recurse " ${1}" ${c}
done
else
echo "${1}no children found"
fi
else
echo "${1}locate failed for ${d}"
fi
done
}
# Q&D -- recurse needs 2 params could/should be supplied from cmdline
recurse "" !!full path to library you want to investigate!!
Redirect the output to a file and grep for 'found' or 'failed'.
Use and modify, at your own risk of course, as you wish.
A: The other answers miss an important point:
Shared libs can either be directly linked, or indirectly linked through another lib.
For only the directly linked:
objdump --private-headers "${bin}" | grep 'NEEDED' | cut --delimiter=' ' --fields=18-
For all:
ldd "${bin}" | cut --fields=2 | cut --delimiter=' ' --fields=1 | rev | cut --delimiter='/' --fields=1 | rev | sort --unique --version-sort
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "280"
} |
Q: How can I lock down my MS-SQL DB from my users and yet still access it through ODBC? I've got an ms-access application that's accessing an ms-sql db through an ODBC connection. I'm trying to force my users to update the data only through the application portion, but I don't care if they read the data directly or through their own custom ms-access db (they use it for creating ad hoc reports).
What I'm looking for is a way to make the data only editable if they are using the compiled .mde file I distribute to them. I know I can make the data read only for the general population, and editable for select users.
Is there a way I can get ms-sql to make the data editable only if they are accessing it through my canned mde?
A thought: is there a way to get ms-access to log into the database as a different user (or change the login once connected)?
@Jake,
Yes, it's using forms. What I'm looking to do is just have it switch users once when I have my launchpad/mainmenu form pop up.
@Peter,
That is indeed the direction I'm headed. What I haven't determined is how to go about switching to that second ID. I'm not so worried about the password being sniffed; the users are all internal, and on an internal LAN. If they can sniff that password, they can certainly sniff the one for my privileged ID.
@no one in general,
Right now it's security by obscurity. I've given the users a special .mdb for doing reporting that will let them read data, but not update it. They don't know about relinking to the tables through the ODBC connection. A slightly more ms-access/DB literate user could bypass what I've done in seconds - and there are a few who imagine themselves to be DBAs, so they will figure it out eventually.
A: There is a way to do this that is effective with internal users, but it can be hacked. You create two IDs for each user. One is a reporting ID that has read-only access. This is the ID that the user knows about: Fred / mypassword
The second is an ID that can do updates. That ID is Fred_app / mypassword_mangled. They log on to your app with Fred. When your application accesses data, it uses the application ID.
This can be sniffed, but for many applications it is sufficient.
A: Does your app allow for linked table updates or does it go through forms? Sounds like your idea of using a centralized user with distinct roles is the way to go. Yes, you could change users, but that may introduce more coding, and once you start adding more and more code, other solutions (stored procedures, etc.) may sound more inviting.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Optimizing a LINQ to SQL query I have a query that looks like this:
public IList<Order> FetchLatestOrders(int pageIndex, int recordCount)
{
DatabaseDataContext db = new DatabaseDataContext();
return (from o in db.Orders
orderby o.CreatedDate descending
select o)
.Skip(pageIndex * recordCount)
.Take(recordCount)
.ToList();
}
I need to print the information of the order and the user who created it:
foreach (var o in FetchLatestOrders(0, 10))
{
Console.WriteLine("{0} {1}", o.Code, o.Customer.Name);
}
This produces one SQL query to bring the orders and one query for each order to bring the customer. Is it possible to optimize the query so that it brings the orders and their customers in one SQL query?
Thanks
UPDATE: As sirrocco suggested, I changed the query like this and it works. Only one select query is generated:
public IList<Order> FetchLatestOrders(int pageIndex, int recordCount)
{
var options = new DataLoadOptions();
options.LoadWith<Order>(o => o.Customer);
using (var db = new DatabaseDataContext())
{
db.LoadOptions = options;
return (from o in db.Orders
orderby o.CreatedDate descending
select o)
.Skip(pageIndex * recordCount)
.Take(recordCount)
.ToList();
}
}
Thanks sirrocco.
A: Something else you can do is eager loading. In Linq2SQL you can use LoadOptions: More on LoadOptions
One VERY weird thing about L2S is that you can set LoadOptions only before the first query is sent to the Database.
A: you might want to look into using compiled queries
have a look at http://www.3devs.com/?p=3
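As a sketch of what a compiled query looks like for this question's method (assuming the DatabaseDataContext and Order types from the question; CompiledQuery lives in System.Data.Linq):
using System;
using System.Data.Linq;
using System.Linq;

static class Queries
{
    // Compiled once, reused on every call, skipping the repeated
    // expression-tree-to-SQL translation.
    public static readonly Func<DatabaseDataContext, int, int, IQueryable<Order>>
        LatestOrders = CompiledQuery.Compile(
            (DatabaseDataContext db, int pageIndex, int recordCount) =>
                db.Orders
                  .OrderByDescending(o => o.CreatedDate)
                  .Skip(pageIndex * recordCount)
                  .Take(recordCount));
}

// Usage:
//   using (var db = new DatabaseDataContext())
//   {
//       var page = Queries.LatestOrders(db, 0, 10).ToList();
//   }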
A: Given a LINQ statement like:
context.Cars
.OrderBy(x => x.Id)
.Skip(50000)
.Take(1000)
.ToList();
This roughly gets translated into:
select * from [Cars] order by [Cars].[Id] asc offset 50000 rows fetch next 1000 rows
Because offset and fetch are extensions of order by, they are not executed until after the select-portion runs (google). This means an expensive select with lots of join-statements are executed on the whole dataset ([Cars]) prior to getting the fetched-results.
Optimize the statement
All that is needed is taking the OrderBy, Skip, and Take statements and putting them into a Where-clause:
context.Cars
.Where(x => context.Cars.OrderBy(y => y.Id).Select(y => y.Id).Skip(50000).Take(1000).Contains(x.Id))
.ToList();
This roughly gets translated into:
exec sp_executesql N'
select * from [Cars]
where exists
(select 1 from
(select [Cars].[Id] from [Cars] order by [Cars].[Id] asc offset @p__linq__0 rows fetch next @p__linq__1 rows only
) as [Limit1]
where [Limit1].[Id] = [Cars].[Id]
)
order by [Cars].[Id] asc',N'@p__linq__0 int,@p__linq__1 int',@p__linq__0=50000,@p__linq__1=1000
So now, the outer select-statement only executes on the filtered dataset based on the where exists-clause!
Again, your mileage may vary on how much query time is saved by making the change. General rule of thumb is the more complex your select-statement and the deeper into the dataset you want to go, the more this optimization will help.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Does "display: marker" work in any current browsers, and if so, how? I can't be sure if my code is sucking, or if it's just that the browsers haven't caught up with the spec yet.
My goal is to simulate list markers using generated content, so as to get e.g. continuation of the counters from list to list in pure CSS.
So I've written the code below, which I think is correct according to the spec:
html {
counter-reset: myCounter;
}
li {
counter-increment: myCounter;
}
li:before {
content: counter(myCounter)". ";
display: marker;
width: 5em;
text-align: right;
marker-offset: 1em;
}
<ol>
<li>The</li>
<li>quick</li>
<li>brown</li>
</ol>
<ol>
<li>fox</li>
<li>jumped</li>
<li>over</li>
</ol>
But this doesn't seem to generate markers, in either FF3, Chrome, or IE8 beta 2, and if I recall correctly not Opera either (although I've since uninstalled Opera).
So, does anyone know if markers are supposed to work? Quirksmode.org isn't being its usual helpful self in this regard :(.
A: Apparently marker was introduced as a value in CSS 2 but did not make it to CSS 2.1 because of lacking browser support.
I suppose that didn’t help its popularity …
Source: http://de.selfhtml.org/css/eigenschaften/positionierung.htm#display (German)
A: Oh ouch, I did not know that :-|. That probably seals its case, then. I was mostly under the assumption that such a basic CSS2 property would definitely be supported in modern browsers, but if it didn't make it into CSS 2.1, then it makes a lot more sense that it isn't.
For future reference, it doesn't show up in the Mozilla Development Center, so presumably Firefox doesn't support it at all.
Also for future reference, I got my original example to work with inline-block instead:
li:before
{
content: counter(myCounter)". ";
display: inline-block;
width: 2em;
padding-right: 0.3em;
text-align: right;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Linux/X11 input library without creating a window Is there a good library to use for gathering user input in Linux from the mouse/keyboard/joystick that doesn't force you to create a visible window to do so? SDL lets you get user input in a reasonable way, but seems to force you to create a window, which is troublesome if you have abstracted control so the control machine doesn't have to be the same as the render machine. However, if the control and render machines are the same, this results in an ugly little SDL window on top of your display.
Edit To Clarify:
The renderer has an output window; in its normal use case, that window is full screen, except when the controller and renderer are both running on the same computer, just so it is possible to give the controller focus. There can actually be multiple renderers displaying different views of the same data on different computers, all controlled by the same controller, hence the total decoupling of the input from the output (which makes taking advantage of the built-in X11 client/server display facilities less useable). Also, multiple controller applications for one renderer are possible. Communication between the controllers and renderers is via sockets.
A: OK, if you're under X11 and you want to get the kbd, you need to do a grab.
If you're not, my only good answer is ncurses from a terminal.
Here's how you grab everything from the keyboard and release again:
/* Demo code, needs more error checking; compile
 * with "gcc nameofthisfile.c -lX11". */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <X11/Xlib.h>
int main(int argc, char **argv)
{
Display *dpy;
XEvent ev;
char *s;
unsigned int kc;
int quit = 0;
if (NULL==(dpy=XOpenDisplay(NULL))) {
perror(argv[0]);
exit(1);
}
/*
* You might want to warp the pointer to somewhere that you know
* is not associated with anything that will drain events.
* (void)XWarpPointer(dpy, None, DefaultRootWindow(dpy), 0, 0, 0, 0, x, y);
*/
XGrabKeyboard(dpy, DefaultRootWindow(dpy),
True, GrabModeAsync, GrabModeAsync, CurrentTime);
printf("KEYBOARD GRABBED! Hit 'q' to quit!\n"
"If this job is killed or you get stuck, use Ctrl-Alt-F1\n"
"to switch to a console (if possible) and run something that\n"
"ungrabs the keyboard.\n");
/* A very simple event loop: start at "man XEvent" for more info. */
/* Also see "apropos XGrab" for various ways to lock down access to
* certain types of info. coming out of or going into the server */
for (;!quit;) {
XNextEvent(dpy, &ev);
switch (ev.type) {
case KeyPress:
kc = ((XKeyPressedEvent*)&ev)->keycode;
s = XKeysymToString(XKeycodeToKeysym(dpy, kc, 0));
/* s is NULL or a static no-touchy return string. */
if (s) {
    printf("KEY:%s\n", s);
    if (!strcmp(s, "q")) quit=~0; /* guard strcmp: s may be NULL */
}
break;
case Expose:
/* Often, it's a good idea to drain residual exposes to
* avoid visiting Blinky's Fun Club. */
while (XCheckTypedEvent(dpy, Expose, &ev)) /* empty body */ ;
break;
case ButtonPress:
case ButtonRelease:
case KeyRelease:
case MotionNotify:
case ConfigureNotify:
default:
break;
}
}
XUngrabKeyboard(dpy, CurrentTime);
if (XCloseDisplay(dpy)) {
perror(argv[0]);
exit(1);
}
return 0;
}
Run this from a terminal and all kbd events should hit it. I'm testing it under Xorg but it uses venerable, stable Xlib mechanisms.
Hope this helps.
BE CAREFUL with grabs under X. When you're new to them, sometimes it's a good idea to start a time-delay process that will ungrab the server while you're testing code; let it sit and run and ungrab every couple of minutes. It saves having to kill or switch away from the server to externally reset state.
From here, I'll leave it to you to decide how to multiplex renderers. Read the XGrabKeyboard docs and XEvent docs to get started.
If you have small windows exposed at the screen corners, you could jam the pointer into one corner to select a controller. XWarpPointer can shove the pointer to one of them as well from code.
One more point: you can grab the pointer as well, and other resources. If you had one controller running on the box in front of which you sit, you could use keyboard and mouse input to switch it between open sockets with different renderers. You shouldn't need to resize the output window to less than full screen anymore with this approach, ever. With more work, you could actually drop alpha-blended overlays on top using the SHAPE and COMPOSITE extensions to get a nice overlay feature in response to user input (which might count as gilding the lily).
A: For the mouse you can use GPM.
I'm not sure off the top of my head for keyboard or joystick.
It probably wouldn't be too bad to read directly off their /dev files if need be.
Hope it helps
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Source Control for Everyone? I've got a number of non-technical users that all share a set of project files. It would be ideal to have them using version control, but I think that both subversion and git are too technical for non-technical office staff.
Is there any distributed source control software that would work well for normal people?
A: This sounds more like a use case for a collaborative tool like BaseCamp, SpiceBird, or SharePoint than "source control." Those tools have the same aim as source control but are more geared toward Word Document type stuff and the corresponding users. It's one more item for the IT folks to maintain on the server but it also removes the possibility of someone's assistant wiping out your code.
A:
If they only need to edit Office files one user at a time infrequently, get the files on a network share with appropriate permissions and back them up nightly. Active Directory will warn them if someone already has it open.
If it's more complicated than just Office, consider SharePoint. I think SVN is too complicated, especially since conflicts and comparisons of binary files (e.g. old Word docs) don't really work.
A: I would try Mercurial with TortoiseHG for Explorer integration.
It's easy enough to use that I could without problems:
*
*teach it to a not-that-computer-savvy colleague for writing text together.
*guide a friend by phone through installing Mercurial (TortoiseHG), creating a repository and setting it up for working together using separate push (his) and pull (mine) repositories - after installing it only once on a Windows machine (I only run GNU/Linux).
And since it is fully distributed, they can't break your repository when they break theirs - you can simply decide not to pull their changes or to pull only the good changes (for example avoiding these huge binary files beginners tend to put under version control).
I since then switched to also managing all my static websites via Mercurial (and a push-upload hook which automatically uploads the website to my FTP-server, so I don't have to worry about that anymore).
A: If source control is too technical they can use Subversion with WebDav.
The less technical people will just save files normally from whatever application they use, without worrying/thinking about source control. They get the benefit of auto-versioning without doing anything.
When ever they need more functionality they can learn to use TortoiseSVN to view diffs, revert to old version that were made automatically for them etc...
From the subversion book :
Because so many operating systems already have integrated WebDAV clients, the use case for this feature borders on fantastical: imagine an office of ordinary users running Microsoft Windows or Mac OS. Each user “mounts” the Subversion repository, which appears to be an ordinary network folder. They use the shared folder as they always do: open files, edit them, save them. Meanwhile, the server is automatically versioning everything. Any administrator (or knowledgeable user) can still use a Subversion client to search history and retrieve older versions of data.
A: Have you tried Tortoise SVN? I can't imagine source control getting much easier to use.
A: I think the best solution would be to get everyone to use the version control system directly. If you are on a Windows platform, TortoiseSVN would be my recommendation.
If using TortoiseSVN directly is too difficult, I have had good experiences with setting up a Samba file share where all project documents are stored and automatically synchronizing this with Subversion. You lose the benefits of people writing comments on their commits, but in many cases automatic version history is better than no version history. This way the people involved don't even have to be aware of the version tracking, as long as they save their documents in the right place. How often you need to synchronize depends on how often documents are changed, but in my case a synchronization every 24 hours was adequate.
Note: To implement this I had to write a custom script that checked out the latest version from the repository, compared it with the local copy and issued svn (or cvs) commands to add, remove and update any changed files. I'm not sure if there exists a general (open source) solution to do this, but I don't think it should be too hard to implement yourself anyway (I wrote a simple script to do it in a few hours).
A: I am currently exploring the extent to which SharePoint can provide non-techie friendly yet reliable version control in a similar context. The preliminary result is "meh". Even in the case we come to a conclusion, it is already becoming clear that revision control requires quite an important shift in users' attitudes to document management.
Now if this was for teams using Apple Macs, which I presume it isn't, I'd strongly recommend Versions, which is an extremely intuitive SVN client. This is the first and only software where I've seen revision control and its paradigm shifts being adopted easily by non-programmers.
A: I made a howto for the subversion+webdav answer:
http://timwise.wikispaces.com/document-versioning
A: Have you tried Adobe's version cue? This is not open source / free but it may be easier to use for the end-user.
http://www.adobe.com/products/creativesuite/versioncue/
A: If Subversion with TortoiseSVN is too complex - and it may be, since version control is a whole paradigm different from Open, Modify, Save - then you might start them off with a much simpler hand version control:
myDocument-20080908-beverlyd.doc
It's simple, easy to understand, and you can write a script that every night or week archives all the older versions so they really only see the latest version or two.
If someone wants to see differences, teach them diff.
-Adam
A: "Project files" is potentially vague - if the files in question aren't primarily ASCII files and are Word documents or what have you, I'm not sure that traditional source control tools will really work.
SVN et al. will happily support binary files, but if that's all you're using it for then you don't really get most of the useful features and generally end up confusing the non-technical users. SVN (and git, etc.) are tools designed for programmers - if you're just looking for a good way to manage document revisions and keep a history, I'm guessing there are better tools for your particular platform (though I don't know enough to recommend a particular one).
That said, if they are mostly ASCII files, I suspect TortoiseSVN is your best bet.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Are there any issues with using log4net in a multi-threaded environment? I'm wondering if anyone has any experience using log4net in a multi-threaded environment like asp.net. We are currently using log4net and I want to make sure we won't run into any issues.
A: We run log4net (and log4cxx) in highly multi-threaded environments without issue. You will want to be careful how you configure them though.
The issue with log4net that Jeff describes pertains to the use of a certain appender. We stick with simple log file appenders on the whole to reduce the impact of logging on the operation of the code. Writing a line to a file is pretty minimal, kicking off another database transaction is very heavy.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Does a language-specific IDE have any advantages over a plugin for a multi-language IDE? I do mostly Java and C/C++ development, but I'm starting to do more web development (PHP, Rails) and Eiffel (learning a new language is always good).
Currently, I use Eclipse for Java, C/C++, and Ruby (not Rails). Since I know the environment, I'm thinking that it would be easier for me to find a plugin and use Eclipse for all of my development languages. But are there cases where a language-specific IDE (EiffelStudio for Eiffel, as an example) would be better than Eclipse?
A: I have used many, many IDEs, and in most cases to me it breaks down to personal preference. Sometimes the language-specific ones have some addins/addons/features that are nice, but unless they are things you cannot live without, you should go with what is most comfortable for you.
I would think that if you are comfortable with the multi-language IDE it would be better to stick with that one. This way you don't have to memorize multiple IDE layouts, keyboard shortcuts, etc.
A: Mastering an IDE takes time and energy. Using a multi-language IDE is definitely beneficial for a programmer who needs to develop in several languages. It is for the same reason that tools like VI and Emacs are so popular.
On the other side, an IDE specialized in one language can sometimes go much further in some aspects and can be the preferred choice in some situations.
I love Eclipse as Java IDE (so much that we decided to build some Eclipse based application) and I'm an Emacs fan. But I also like the Groovy support of IntelliJ and the efficiency of EiffelStudio.
It's a matter of taste, you forgive the ones you love...
A: It entirely depends on the user and the language itself. If you are comfortable with the keyboard shortcuts, then you can consider the plugin; otherwise you can go for an IDE. However, most IDEs come with cross-functional key maps, so you can use the key maps you are more comfortable with.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |