Q: Is there a good library for dealing with the Modbus protocol in .NET? Does anyone know of a good (preferably open source) library for dealing with the Modbus protocol? I have seen a few libraries, but I am looking for some people's personal experiences, not just the top ten Google hits. I figure there has to be at least one other person who deals with PLCs and automation hardware like I do out there.
Open to any other materials that might have been a help to you as well...
A: AdvancedHMI has a drivers package included that's just great for almost anything comms, though it may be a bit heavy if all you want is Modbus. AdvancedHMI is also under active development and has a strong user base.
http://sourceforge.net/projects/advancedhmi/
NModbus seems to be the most likely to have continued development and there are lots of example projects scattered over the internet.
https://code.google.com/p/nmodbus/
I have liked using this one in the past (Modbus via TCP):
http://www.codeproject.com/Tips/16260/Modbus-TCP-class
Modbus to a Siemens Logo PLC is a little different. I have an example project I could send upon request.
A: Modbus is a very simple protocol to implement. All information you need can easily be found for free on the Internet.
If you choose to implement it yourself, I will be happy to answer any questions you have along the way.
If you choose to go for a modbus master library I would look for:
*
*Modbus TCP support.
*Modbus RTU over TCP/UDP and COM-port.
*Configurable byte swapping, word swapping
*Configurable "base" address so you can choose address 1 to actually be address 0 (sounds stupid, but I prefer to always specify addresses the same way they are documented)
*It must support reading several addresses as a block, but it needs to be flexible; some Modbus slaves will return an error if any address in the block is unused/reserved.
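To underline how simple the wire format is, here is a rough sketch (in Python, since the frame layout itself is language-neutral) of building a Modbus TCP "Read Holding Registers" request. The unit id is an arbitrary example value; the start address 0x006B with count 3 is the classic spec example corresponding to registers 40108-40110:

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request.

    MBAP header: transaction id, protocol id (always 0), remaining byte
    count (unit id + PDU), unit id -- followed by the 5-byte PDU.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers_request(1, unit_id=0x11, start_addr=0x006B, count=3)
```

Note how the "base address" point above shows up even here: the on-wire address 0x006B (107) is what many device manuals document as register 40108.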
A: I have done a lot of communication with devices for the past few years, since I work for a home automation company, but we don't use Modbus. We do communication in a standard and open way using Web Services for Devices (WSD), which is also known as Devices Profile for Web Services (DPWS).
At one point during this time, I did hear of a project called NModbus. It is an open source library for working with Modbus. I have not used it, but looking at the site and the changesets on Google Code, it looks pretty active. You may want to give it a look and even get involved with it. This is the only library that I have heard of that targets .NET.
A: FieldTalk Modbus Library - handles all Modbus functions
A: Have a look at the offering from Colway Solutions http://www.colwaysolutions.com. They have a unique licensing scheme where you pay for each Modbus function code that you desire to use. It's not free, but the pricing seems to be low. I also saw a few ports of the library to some popular microcontrollers and RTOSes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: What are some best practices for creating my own custom exception? In a follow-up to a previous question regarding exceptions, what are best practices for creating a custom exception in .NET?
More specifically should you inherit from System.Exception, System.ApplicationException or some other base exception?
A: There is a code snippet for it. Use that. Plus, check your code analysis afterwards; the snippet leaves out one of the constructors you should implement.
A: In the C# IDE, type 'exception' and hit TAB. This will expand to get you started in writing a new exception type. There are comments with links to some discussion of exception practices.
Personally, I'm a big fan of creating lots of small classes, and that extends to exception types. For example, in writing the Foo class, I can choose between:
*
*throw new Exception("Bar happened in Foo");
*throw new FooException("Bar happened");
*throw new FooBarException();
where
class FooException : Exception
{
    public FooException(string message) ...
}
and
class FooBarException : FooException
{
    public FooBarException()
        : base ("Bar happened")
    {
    }
}
I prefer the 3rd option, because I see it as being an OO solution.
A: Inherit from System.Exception. System.ApplicationException is useless and the design guidelines say "Do not throw or derive from System.ApplicationException."
See http://blogs.msdn.com/kcwalina/archive/2006/06/23/644822.aspx
A: I think the single most important thing to remember when dealing with exceptions at any level (making custom, throwing, catching) is that exceptions are only for exceptional conditions.
A: The base exception from which all other exceptions inherit is System.Exception, and that is what you should inherit from, unless of course you have a use for things like, say, default messages of a more specific exception.
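The same guidance carries over to other languages. As a hedged illustration, here is a minimal Python parallel of the pattern discussed above (the FooError/FooBarError names are hypothetical): inherit from the language's root Exception type, keep the hierarchy shallow, and let the most specific type supply a default message.

```python
class FooError(Exception):
    """Base exception for a hypothetical Foo component."""

class FooBarError(FooError):
    """Raised when 'Bar' happens inside Foo; carries a default message."""
    def __init__(self, message="Bar happened"):
        super().__init__(message)
```

Callers can then catch FooError to handle anything the component raises, or FooBarError for the specific case.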
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Better way of opening a Document from Java? I've been using the following code to open Office Documents, PDFs, etc. on my Windows machines using Java, and it's working fine, except for some reason when a filename has multiple contiguous spaces embedded within it, like "File[SPACE][SPACE]Test.doc".
How can I make this work? I'm not averse to canning the whole piece of code... but I'd rather not replace it with a third party library that calls JNI.
public static void openDocument(String path) throws IOException {
    // Make forward slashes backslashes (for windows)
    // Double quote any path segments with spaces in them
    path = path.replace("/", "\\").replaceAll(
            "\\\\([^\\\\\\\\\"]* [^\\\\\\\\\"]*)", "\\\\\\\"$1\"");
    String command = "C:\\Windows\\System32\\cmd.exe /c start " + path + "";
    Runtime.getRuntime().exec(command);
}
EDIT: When I run it with the errant file, Windows complains that it cannot find the file. But when I run the same command directly from the command line, it runs just fine.
A: If you are using Java 6 you can just use the open method of java.awt.Desktop to launch the file using the default application for the current platform.
A: Not sure if this will help you much... I use java 1.5+'s ProcessBuilder to launch external shell scripts in a java program. Basically I do the following: ( although this may not apply because you don't want to capture the commands output; you actually wanna fire up the document - but, maybe this will spark something that you can use )
List<String> command = new ArrayList<String>();
command.add(someExecutable);
command.add(someArgument0);
command.add(someArgument1);
command.add(someArgument2);
ProcessBuilder builder = new ProcessBuilder(command);
try {
    final Process process = builder.start();
    ...
} catch (IOException ioe) {}
A: The issue may be the "start" command you are using, rather than your file name parsing. For example, this seems to work well on my WinXP machine (using JDK 1.5)
import java.io.IOException;
import java.io.File;

public class test {
    public static void openDocument(String path) throws IOException {
        path = "\"" + path + "\"";
        File f = new File( path );
        String command = "C:\\Windows\\System32\\cmd.exe /c " + f.getPath() + "";
        Runtime.getRuntime().exec(command);
    }

    public static void main( String[] argv ) {
        test thisApp = new test();
        try {
            thisApp.openDocument( "c:\\so\\My Doc.doc");
        }
        catch( IOException e ) {
            e.printStackTrace();
        }
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How well does .NET scale? (I'll begin by making it clear, I am not a .NET developer and am not tied to any other environment.)
Recently, I heard that the London Stock Exchange went down for an entire day. I've also heard that the software was written in .NET. Up to this point they would experience performance hits on busy days. People seem to be blaming .NET.
I don't want to debate the story, but it brought to mind the question of just how does .NET scale? How big is too big for .NET?
A: Many big sites, like MySpace and Dell.com, run on ASP.NET. Also, check out this MSDN article, which gives a good perspective from experts.
A: You can write bad code that fails to scale in any language.
.Net is quite capable at scaling to any size of system but, like with any other tech stack, you have to build the system with scaling in mind.
The LSE went down on the worst possible day, but whatever the reason I doubt it was the underlying stack that was the issue. I suspect this is a case of poor workmen blaming their tools.
A: Here's a big fat book for you to read on this topic:
Improving .NET Application Performance and Scalability (Microsoft Press)
A: Actually the downtime at the LSE had absolutely nothing to do with its .NET trading platform:
The LSE said the system had been hit by a "connectivity issue" and insisted that the problem did not lie with its flagship TradElect trading platform.
http://www.itworld.com/networking/54760/london-stock-exchange-trading-stops-network-fails
A: Honestly, I think it boils down to code optimization, apart from just the infrastructure.
In StackOverflow Podcast 19, Jeff discussed how they had to tweak SQL Server to handle the kinds of loads StackOverflow has; notice that it was not .NET that needed tweaking here.
One also has to note that MySpace.com, one of the most massive social networks out there, runs on ASP.NET.
The MySpace use of ASP.NET alone is a testament to its scalability. It will boil down to how developers will write their applications in such a way that best leverages that capability.
A: Dot Net scales well. We have clusters of servers running IIS server and asp.net web sites and applications and when our user load increases we can add servers (easily) to increase capacity. This happens during certain events and the .net architecture scalability has not let us down.
I would hazard a guess (as the others have) that this was not a .net issue.
A: Unfortunately, there are so many other issues that could cause a project to go down as it scales, that you have a lot to wade through before you can get down to a framework to blame. And unless you can see and thoroughly analyze the source code, it would be difficult to say what the root cause was. I'd be willing to bet it wasn't the framework.
And no, I don't work with .NET on a daily basis.
A: Perhaps it was the transaction volume that brought down the exchange.
While many examples given thus far are good, they are just large websites. (You hate me for saying "just") They dole out pages and the occasional application (ie. scrabulous) to users. A stock exchange processes buys/sells and matches buyers/sellers. That would be a few orders of magnitude more work for app servers.
I could see the database(s) falling down though.
A: It all boils down to 3 things:
*
*How well is the application planned
*What was the initial scale goal of the person who built it
*Ongoing work in the pits to improve and scale a solution.
MySpace was mentioned before; it is a known fact that they've rewritten their application a few times when they hit a new scaling step (# of users / pageviews / etc.). If they had chosen to build the last version to begin with, it would have been too expensive to maintain and wouldn't have been cost effective - scalability should be based on your current position and the next scale goal.
One last thing - although it's often dismissed as excessive, solid stress testing can give you a good picture of how your application deals with the load you're aiming at, before your users experience it and disaster strikes.
A: As other people said -- it is not a matter of platform.
What matters is architecture of your application - load balancing, state management, partitioning etc... These are not platform specific.
A: Later news...
"TradElect, the group’s trading platform technology that is set to be replaced later this year by software developed by MillenniumIT, the Sri Lankan technology vendor owned by the LSE." ...
http://www.efinancialnews.com/story/2010-09-13/ex-lse-tech-chief-joins-green-investment-company
“This transaction enables the Group to implement a new, more agile, innovative and efficient IT capability for our future business development, as well as running a new cash trading platform which will provide substantially lower latency, significantly higher capacity and improved scalability.” ...
http://www.computerworlduk.com/news/it-business/16590/london-stock-exchange-buys-millennium-it-trading-platform-supplier/
A: Why would .NET have any limitations on size that other platforms wouldn't have?
I cannot imagine any situation where you are going to get 'too big' for .NET.
However, you should really specify whether you are talking about a .NET winforms application or ASP.NET as well as other relevant factors. This question is just too vague to ever answer in detail.
The fact your name is 'Dr Unix' does imply some bias, btw.
A: Done right, the architecture offloads most transient state onto the client which makes clustering easier, which makes it surprisingly scalable. So it's an issue of the System as a whole as opposed to ASP.NET directly at that point.
My 2 cents.
A: Well, I think this site runs on the .NET Framework, as do the Microsoft site(s). So I think that if done properly, a .NET site will scale. Look at some of the comments Jeff has made about this site; the issues tend to come down to coding errors or architecture problems.
A: I run a relatively large asp.net website, and have found it to scale excellently. Of course much of this I attribute to having some great tools to diagnose and fix bottlenecks in the code. I'd venture to guess that coding issues cause 99.99% of issues that people have in any framework.
A: It really bothers me when people say .NET is a platform of choice because 'it's scalable'; it's no more or less scalable than any other platform: PHP, ColdFusion, JSP or native compiled apps with C++/Delphi etc... Scalability isn't a feature of the framework, it's a feature of the application design.
MySpace is certainly no advocate for scalability, instead look at the technology behind Google search, or the SETI@home project.
.NET is actually my least favorite platform to work with because it's gone too far in trying to simplify software, so much so that there are things I want to do that it can't, and trying to overcome .NET limitations wastes time where the same thing would have been easily and quickly achieved with C++ or PHP. .NET is to software development what Duplo bricks are to mechanical engineering - no self-respecting mechanical engineer would want to be constrained to using only inch-wide square blocks.
If an application needs to be scalable you need to think about what data needs to be shared between servers, and what is the minimum data required for the application to run and serve its purpose. The need to scale an application can often be avoided by having super-efficient code in the first place (eg. not .NET or Java), but this generally requires a basic understanding of assembly at least and how your chosen language is translated to machine code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Best way to parse Space Separated Text I have string like this
/c SomeText\MoreText "Some Text\More Text\Lol" SomeText
I want to tokenize it, but I can't just split on the spaces. I've come up with a somewhat ugly parser that works, but I'm wondering if anyone has a more elegant design.
This is in C# btw.
EDIT: My ugly version, while ugly, is O(N) and may actually be faster than using a RegEx.
private string[] tokenize(string input)
{
    string[] tokens = input.Split(' ');
    List<String> output = new List<String>();
    for (int i = 0; i < tokens.Length; i++)
    {
        if (tokens[i].StartsWith("\""))
        {
            string temp = tokens[i];
            int k = 0;
            for (k = i + 1; k < tokens.Length; k++)
            {
                temp += " " + tokens[k];
                if (tokens[k].EndsWith("\""))
                {
                    break;
                }
            }
            output.Add(temp);
            i = k; // resume after the quoted section (the loop's i++ moves past it)
        }
        else
        {
            output.Add(tokens[i]);
        }
    }
    return output.ToArray();
}
A: The Microsoft.VisualBasic.FileIO namespace (in Microsoft.VisualBasic.dll) has a TextFieldParser you can use to split space-delimited text. It handles strings within quotes (i.e., "this is one token" thisistokentwo) well.
Note, just because the DLL says VisualBasic doesn't mean you can only use it in a VB project. It's part of the entire Framework.
A: There is the state machine approach.
private enum State
{
    None = 0,
    InToken,
    InQuote
}

private static IEnumerable<string> Tokenize(string input)
{
    input += ' '; // ensure we end on whitespace
    State state = State.None;
    State? next = null; // setting the next state implies that we have found a token
    StringBuilder sb = new StringBuilder();
    foreach (char c in input)
    {
        switch (state)
        {
            default:
            case State.None:
                if (char.IsWhiteSpace(c))
                    continue;
                else if (c == '"')
                {
                    state = State.InQuote;
                    continue;
                }
                else
                    state = State.InToken;
                break;
            case State.InToken:
                if (char.IsWhiteSpace(c))
                    next = State.None;
                else if (c == '"')
                    next = State.InQuote;
                break;
            case State.InQuote:
                if (c == '"')
                    next = State.None;
                break;
        }
        if (next.HasValue)
        {
            yield return sb.ToString();
            sb = new StringBuilder();
            state = next.Value;
            next = null;
        }
        else
            sb.Append(c);
    }
}
It can easily be extended for things like nested quotes and escaping. Returning IEnumerable<string> allows your code to parse only as much as it needs. There aren't any real downsides to that kind of lazy approach, since strings are immutable, so you know the input isn't going to change before you have parsed the whole thing.
See: http://en.wikipedia.org/wiki/Automata-Based_Programming
A: The computer term for what you're doing is lexical analysis; read that for a good summary of this common task.
Based on your example, I'm guessing that you want whitespace to separate your words, but stuff in quotation marks should be treated as a "word" without the quotes.
The simplest way to do this is to define a word as a regular expression:
([^"^\s]+)\s*|"([^"]+)"\s*
This expression states that a "word" is either (1) non-quote, non-whitespace text surrounded by whitespace, or (2) non-quote text surrounded by quotes (followed by some whitespace). Note the use of capturing parentheses to highlight the desired text.
Armed with that regex, your algorithm is simple: search your text for the next "word" as defined by the capturing parentheses, and return it. Repeat that until you run out of "words".
Here's the simplest bit of working code I could come up with, in VB.NET. Note that we have to check both groups for data since there are two sets of capturing parentheses.
Dim token As String
Dim r As Regex = New Regex("([^""^\s]+)\s*|""([^""]+)""\s*")
Dim m As Match = r.Match("this is a ""test string""")
While m.Success
    token = m.Groups(1).ToString
    If token.Length = 0 And m.Groups.Count > 1 Then
        token = m.Groups(2).ToString
    End If
    m = m.NextMatch
End While
Note 1: Will's answer, above, is the same idea as this one. Hopefully this answer explains the details behind the scene a little better :)
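The same regex can be exercised outside .NET as well. As a quick cross-check of the capturing behaviour, here it is in Python's re module (the WORD and tokenize names are mine; note that the ^ inside the character class is a literal caret, kept from the original pattern):

```python
import re

# Same pattern as above: group 1 captures a bare word,
# group 2 captures the inside of a quoted phrase.
WORD = re.compile(r'([^"^\s]+)\s*|"([^"]+)"\s*')

def tokenize(text):
    # findall returns one (bare, quoted) tuple per match;
    # exactly one of the two groups is non-empty each time.
    return [bare or quoted for bare, quoted in WORD.findall(text)]

tokens = tokenize('this is a "test string"')
```

This confirms the algorithm described above: each match yields one "word", with the quotes stripped from quoted phrases.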
A: You also might want to look into regular expressions. That might help you out. Here is a sample ripped off from MSDN...
using System;
using System.Text.RegularExpressions;

public class Test
{
    public static void Main ()
    {
        // Define a regular expression for repeated words.
        Regex rx = new Regex(@"\b(?<word>\w+)\s+(\k<word>)\b",
            RegexOptions.Compiled | RegexOptions.IgnoreCase);

        // Define a test string.
        string text = "The the quick brown fox fox jumped over the lazy dog dog.";

        // Find matches.
        MatchCollection matches = rx.Matches(text);

        // Report the number of matches found.
        Console.WriteLine("{0} matches found in:\n   {1}",
                          matches.Count,
                          text);

        // Report on each match.
        foreach (Match match in matches)
        {
            GroupCollection groups = match.Groups;
            Console.WriteLine("'{0}' repeated at positions {1} and {2}",
                              groups["word"].Value,
                              groups[0].Index,
                              groups[1].Index);
        }
    }
}
// The example produces the following output to the console:
//    3 matches found in:
//       The the quick brown fox fox jumped over the lazy dog dog.
//    'The' repeated at positions 0 and 4
//    'fox' repeated at positions 20 and 25
//    'dog' repeated at positions 50 and 54
A: Craig is right — use regular expressions. Regex.Split may be more concise for your needs.
A:
[^\t]+\t|"[^"]+"\t
using the Regex definitely looks like the best bet, however this one just returns the whole string. I'm trying to tweak it, but not much luck so far.
string[] tokens = System.Text.RegularExpressions.Regex.Split(this.BuildArgs, @"[^\t]+\t|""[^""]+""\t");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: What is the difference between old style and new style classes in Python? What is the difference between old style and new style classes in Python? When should I use one or the other?
A: New style classes may use super(Foo, self) where Foo is a class and self is the instance.
super(type[, object-or-type])
Return a proxy object that delegates method calls to a parent or sibling class of type. This is useful for accessing inherited methods that have been overridden in a class. The search order is the same as that used by getattr() except that the type itself is skipped.
And in Python 3.x you can simply use super() inside a class without any parameters.
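A minimal sketch of this in action, runnable on Python 3 (where every class is new-style):

```python
class Base:
    def greet(self):
        return "hello"

class Child(Base):
    def greet(self):
        # Python 3: bare super(); new-style classes on Python 2
        # needed the explicit super(Child, self) form.
        return super().greet() + " from Child"

result = Child().greet()
```

Old-style classes cannot use super() at all; they have to call Base.greet(self) directly.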
A: From New-style and classic classes:
Up to Python 2.1, old-style classes were the only flavour available to the user.
The concept of (old-style) class is unrelated to the concept of type: if x is an instance of an old-style class, then x.__class__ designates the class of x, but type(x) is always <type 'instance'>. This reflects the fact that all old-style instances, independently of their class, are implemented with a single built-in type, called instance.
New-style classes were introduced in Python 2.2 to unify the concepts of class and type. A new-style class is simply a user-defined type, no more, no less. If x is an instance of a new-style class, then type(x) is typically the same as x.__class__ (although this is not guaranteed – a new-style class instance is permitted to override the value returned for x.__class__).
The major motivation for introducing new-style classes is to provide a unified object model with a full meta-model. It also has a number of immediate benefits, like the ability to subclass most built-in types, or the introduction of "descriptors", which enable computed properties.
For compatibility reasons, classes are still old-style by default. New-style classes are created by specifying another new-style class (i.e. a type) as a parent class, or the "top-level type" object if no other parent is needed.
The behaviour of new-style classes differs from that of old-style classes in a number of important details in addition to what type returns. Some of these changes are fundamental to the new object model, like the way special methods are invoked. Others are "fixes" that could not be implemented before for compatibility concerns, like the method resolution order in case of multiple inheritance.
Python 3 only has new-style classes. No matter if you subclass from object or not, classes are new-style in Python 3.
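The "descriptors, which enable computed properties" mentioned above are what the built-in property relies on. A small sketch (Python 3 syntax, where all classes are new-style; the Circle class is just an illustration):

```python
class Circle:
    def __init__(self, radius):
        self.radius = radius

    @property
    def area(self):
        # Computed on every attribute access via the descriptor
        # protocol; this lookup path only exists for new-style classes.
        return 3.141592653589793 * self.radius ** 2

c = Circle(2)
```

Accessing c.area runs the method without parentheses; on an old-style class the property descriptor would simply be ignored for reads through the instance.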
A: Old style classes are still marginally faster for attribute lookup. This is not usually important, but it may be useful in performance-sensitive Python 2.x code:
In [3]: class A:
...: def __init__(self):
...: self.a = 'hi there'
...:
In [4]: class B(object):
...: def __init__(self):
...: self.a = 'hi there'
...:
In [6]: aobj = A()
In [7]: bobj = B()
In [8]: %timeit aobj.a
10000000 loops, best of 3: 78.7 ns per loop
In [10]: %timeit bobj.a
10000000 loops, best of 3: 86.9 ns per loop
A: Guido has written The Inside Story on New-Style Classes, a really great article about new-style and old-style class in Python.
Python 3 has only new-style class. Even if you write an 'old-style class', it is implicitly derived from object.
New-style classes have some advanced features lacking in old-style classes, such as super, the new C3 mro, some magical methods, etc.
A: Declaration-wise:
New-style classes inherit from object, or from another new-style class.
class NewStyleClass(object):
pass
class AnotherNewStyleClass(NewStyleClass):
pass
Old-style classes don't.
class OldStyleClass():
pass
Python 3 Note:
Python 3 doesn't support old style classes, so either form noted above results in a new-style class.
A: Here's a very practical, true/false difference. The only difference between the two versions of the following code is that in the second version Person inherits from object. Other than that, the two versions are identical, but with different results:
*
*Old-style classes
class Person():
    _names_cache = {}
    def __init__(self,name):
        self.name = name
    def __new__(cls,name):
        return cls._names_cache.setdefault(name,object.__new__(cls,name))

ahmed1 = Person("Ahmed")
ahmed2 = Person("Ahmed")
print ahmed1 is ahmed2
print ahmed1
print ahmed2

>>> False
<__main__.Person instance at 0xb74acf8c>
<__main__.Person instance at 0xb74ac6cc>
>>>
*New-style classes
class Person(object):
    _names_cache = {}
    def __init__(self,name):
        self.name = name
    def __new__(cls,name):
        return cls._names_cache.setdefault(name,object.__new__(cls,name))

ahmed1 = Person("Ahmed")
ahmed2 = Person("Ahmed")
print ahmed2 is ahmed1
print ahmed1
print ahmed2

>>> True
<__main__.Person object at 0xb74ac66c>
<__main__.Person object at 0xb74ac66c>
>>>
A: Important behavior changes between old and new style classes
*
*super added
*MRO changed (explained below)
*descriptors added
*new style class objects cannot be raised unless derived from Exception (example below)
*__slots__ added
MRO (Method Resolution Order) changed
It was mentioned in other answers, but here is a concrete example of the difference between the classic MRO and the C3 MRO (used in new-style classes).
The question is the order in which attributes (which include methods and member variables) are searched for in multiple inheritance.
Classic classes do a depth-first search from left to right. Stop on the first match. They do not have the __mro__ attribute.
class C: i = 0
class C1(C): pass
class C2(C): i = 2
class C12(C1, C2): pass
class C21(C2, C1): pass
assert C12().i == 0
assert C21().i == 2
try:
C12.__mro__
except AttributeError:
pass
else:
assert False
New-style classes MRO is more complicated to synthesize in a single English sentence. It is explained in detail here. One of its properties is that a base class is only searched for once all its derived classes have been. They have the __mro__ attribute which shows the search order.
class C(object): i = 0
class C1(C): pass
class C2(C): i = 2
class C12(C1, C2): pass
class C21(C2, C1): pass
assert C12().i == 2
assert C21().i == 2
assert C12.__mro__ == (C12, C1, C2, C, object)
assert C21.__mro__ == (C21, C2, C1, C, object)
New style class objects cannot be raised unless derived from Exception
Around Python 2.5 many classes could be raised, and around Python 2.6 this was removed. On Python 2.7.3:
# OK, old:
class Old: pass
try:
raise Old()
except Old:
pass
else:
assert False
# TypeError, new not derived from `Exception`.
class New(object): pass
try:
raise New()
except TypeError:
pass
else:
assert False
# OK, derived from `Exception`.
class New(Exception): pass
try:
raise New()
except New:
pass
else:
assert False
# `'str'` is a new style object, so you can't raise it:
try:
raise 'str'
except TypeError:
pass
else:
assert False
A: New-style classes inherit from object and must be written as such in Python 2.2 onwards (i.e. class Classname(object): instead of class Classname:). The core change is to unify types and classes, and the nice side-effect of this is that it allows you to inherit from built-in types.
Read descrintro for more details.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1117"
} |
Q: How to use JQuery "after" selector I can't seem to figure out a good way to do this, but it seems like it should be simple. I have an element that I want to append a div to. Then I have another element that I want to clone and shove into that intermediate div. Here's what I was hoping to do:
$("#somediv > ul").after("<div id='xxx'></div>").append($("#someotherdiv").clone());
This seems to be close, but not quite there. The problem with this is that the "append" seems to be operating on the original #somediv > ul selector. This sort of makes sense, but it's not what I wanted. How can I most efficiently select that intermediate div that I added with the after and put my #someotherdiv into it?
A: Go the other way around and use insertAfter().
$("<div id='xxx'></div>")
.append($("#someotherdiv").clone())
.insertAfter("#somediv > ul")
Try to add your generated DOM nodes to the document only after finishing your work.
Once the nodes are added to the displayed document, the browser starts listening to any change to refresh the view. Doing all the work before adding the nodes to the displayed document does improve browser performance.
A: use insertAfter():
$("<div id='xxx'></div>").insertAfter("#somediv > ul").append($("#someotherdiv").clone())
A:
How can I most efficiently select that intermediate div that I added with the "after" and put my "#someotherdiv" into it?
@Vincent's solution is probably the fastest way to get the same result. However, if for whatever reason you need to add the div with after() and then select and operate on it, you can use
.nextAll( [expr] )
Find all sibling elements after the current element.
Use an optional expression to filter the matched set.
So your js becomes:
$("#somediv > ul")
.after("<div id='xxx'></div>")
.nextAll('#xxx')
.append($("#someotherdiv").clone());
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How do I in java add a stacktrace to my debugging printout What's the easiest way to print a stacktrace from a debugging printout? Often during testing you would like to know the callstack leading up to the situation provoking a debug message.
A: If you want to save the stack trace into a String you can do this;
String exception = "";
for (StackTraceElement element : e.getStackTrace())
exception += element.toString() + "\n";
Where e is, obviously, an exception.
Besides, it sounds very weird to generate your own Exception just to get a stack trace for debugging. Get Eclipse and use its debug mode, it's really awesome.
A: Just creating an arbitrary exception does the trick for me:
System.out.println("Oops, the bad thing happened");
new IllegalStateException().printStackTrace();
A: As well as what @jjnguy said, if you don't have an exception, you can also call Thread.currentThread().getStackTrace().
A: If you're using log4j
Exception e = new Exception();
log.error("error here", e);
will print the stacktrace to your log.
A: You should be catching the exception in a try-catch block.
e.getStackTrace();
That returns StackTraceElement[] that you can then interpret.
Also:
e.printStackTrace()
will...print the stacktrace.
A: To simply print the current stack trace to stderr, you can call:
Thread.dumpStack();
which itself just calls:
new Exception("Stack trace").printStackTrace();
To output to stdout rather than stderr, pass System.out to printStackTrace():
new Exception("Stack trace").printStackTrace(System.out);
A: Thread.dumpStack();
A: Just because I needed it myself:
Inspired by the answer to "How do I find the caller of a method using stacktrace or reflection?", you can retrieve the call stack using
StackTraceElement[] stackTraceElements = Thread.currentThread().getStackTrace()
Then you process and print/log whatever you are interested in. More work than using Thread.dumpStack(), but more flexible.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Hidden features of Eclipse Alright it can be a lame question, but everybody uses these things differently. What's some of the best time savers out there for this IDE.
Tom
A: Ctrl+Alt+UP or Ctrl+Alt+DOWN to copy lines
A: Ctrl+Alt+H on a method to get the call hierarchy for it. Fast way to see where it is called from.
A: ctrl-shift-r and its buddy, ctrl-shift-t, to open a resource or type, respectively. Resources includes all files in your open projects (including non-java files), and types includes java types either in your projects, or in a library included in the projects.
A: Alt + Shift + R to refactor and rename.
A: Ctrl+Shift+L will show you all the currently available keyboard shortcuts
A: Here is my collection of the most useful keyboard shortcuts for Eclipse 3:
Eclipse 3 Favorite Keyboard Shortcuts.
by -=MaGGuS=-
Navigate:
• Ctrl + Shift + L – Shows useful keyboard shortcuts in popup window
• Ctrl + H – Search.
• Ctrl + K – Goes to next search match in a single file. Shift + Ctrl + K – goes to previous match.
• F3 - Goes to ‘declaration’ of something. Same as Ctrl + Click.
• Ctrl + Shift + G - Use this on a method name or variable. It will search for references in the code (all the code) to that item.
• Ctrl + O – Shows outline view of the current class or interface.
• Ctrl + T – Shows class hierarchy of the current class or interface. F4 – shows the same in separate tab.
• Ctrl + Shift + T - Open Type. Search for any type globally in the workspace.
• Ctrl + Shift + R – Open Resource. Search for any file inside workspace.
• Ctrl + J – Incremental search. Similar to the search in firefox. It shows you results as you type. Shift + Ctrl +J - Reverse incremental search.
• Ctrl + Q – Goes to the last edit location.
• Ctrl + Left|Right – Go Back/Forward in history.
• Ctrl + L – Go to line number.
• Ctrl + E – This will give you a list of all the source code windows that are currently open. You can arrow up or down on the items to go to a tab.
• Ctrl +PgUp|PgDown – Cycles through editor tabs.
• Ctrl + Shift + Up|Down - Bounces you up and down through the methods in the source code.
• Ctrl + F7 – Switches between panes (views).
• Ctrl + ,|. – Go to the previous/next error. Great in combination with Ctrl + 1.
• Ctrl + 1 on an error – Brings up suggestions for fixing the error. The suggestions can be clicked.
• Ctrl + F4 – Close one source window.
Edit:
• Ctrl + Space – Auto-completion.
• Ctrl + / – Toggle comment selected lines.
• Ctrl + Shift + /|\ – Block comment/uncomment selected lines.
• Ctrl + Shift + F – Quickly ‘formats’ your java code based on your preferences set up under Window –> Preferences.
• Ctrl + I – Correct indentations.
• Alt + Up|Down – move the highlighted code up/down one line. If nothing is selected, selects the current line.
• Ctrl + D – Delete row.
• Alt + Shift + Up|Down|Left|Right – select increasing semantic units.
• Ctrl + Shift + O – Organize Imports.
• Alt + Shift + S – Brings up “Source” menu.
o Shift + Alt + S, R – Generate getter/setter.
o Shift + Alt + S, O – Generate constructor using fields.
o Shift + Alt + S, C – Generate constructor from superclass.
• Alt + Shift + T – Brings up “Refactor” menu.
• Alt + Shift + J – Insert javadoc comment.
• F2 – Display javadoc popup for current item. Shift + F2 – Display javadoc in external browser.
Run/Debug:
• F11 / Ctrl + F11 – Execute/debug.
• Ctrl + Shift +B – Toggle breakpoint.
• When paused: F5 – Step into, F6 – Step over, F7 – Step out, F8 – Resume.
• Ctrl + F2 – Terminate.
EOF
A: Not so hidden, but IMO the best trick.
Assuming default settings (and you haven't added new snippets):
Highlight (or select) some text (a String or variable), press Ctrl+Space, then hit End+Enter.
The "sysout" snippet is triggered, which wraps the selection as its parameter.
eg.
"hello world!"
becomes
System.out.println("hello world!");
I love it so much that I've implemented a similar snippet for Android's Toast and Log.i().
HUGE time saver during manual debugging....
A: Ctrl+1 is my favorite. The quick fixes for the red-squiggles.
It is also located in the Edit Menu -> Quick Fix.
A: Ctrl+Shift+O to organize imports, which will format them nicely, remove unneeded imports, and add missing imports.
A: Ctrl-Alt-h To open the Call hierarchy of the selected method.
Really useful on large codebases or unknown codebases
A: Ctrl+, and Ctrl+. move the text cursor to the next and previous error or warning (red or yellow squiggle) in the source. This gets really useful if you're dealing with a big block of dirty or broken code when you're in the depths of refactoring or pasting. Combined with Ctrl+1 for suggest fix you can quickly repair the code without having to move your hand to the mouse.
In fact, you barely have to lift your finger off Ctrl...
A: A hidden gem is the conditional breakpoint. Really useful for skipping over portions of loops, pausing if something is null or meets a certain value, etc... just right-click on the breakpoint, Breakpoint Properties --> Enable Condition. There's even code assist within the textbox!
A: Ctrl-J starts an incremental find.
Hit Ctrl-J, then start typing. Use up/down to find previous/next instances of what you typed.
Ctrl-Shift-J searches backwards.
A: Save Actions rocks. There you can get your imports organized (Ctrl+Shift+O) and your code formatted (Ctrl+Shift+F). Besides that, I love Alt+Shift+R for refactoring.
My favorite things is the plugins though:
They might cause you to use more time but most of the time they give quality (subjective I know)
*
*Code coverage (EclEmma)
*Static analysis on source(PMD)
*Static analysis on byte code(FindBugs)
*CheckStyle
*SpringIDE.
Then you start to rock with the mandatory source control plugins and the maven 2 plugin.
Rock on!
A: Ctrl + O is a popup outline view that lets you start typing to filter on a name
Ctrl + F3 works similarly, but it can open other types' outlines based on where your cursor is.
Turn on the Save Action to clean up your code and it will be automatically formatted and import optimized every time you save. To easily get to this option choose "Windows|Preferences" start type "Save Act" in the filter box and turn on the option.
In the new 3.4 release, turn on the "Breadcrumb trail" at the top of the editor window. There's a new toolbar button for this.
A: Ctrl + Shift + P to find the matching brace. Really useful while working with long codes.
A: Type 'syso' then press Ctrl+Space to expand it to System.out.println().
Very handy.
A: CTRL+3 brings up a type-ahead list of any menu command.
A: If you want to put a System.out.println("anything"); to your code you can simply do as follows:
Just type "", then select the "" and press Ctrl+Space, Up-Arrow and Enter (you should land on "sysout").
Voila, there it is :)
A: How about:
Ctrl-PgUp and Ctrl-PgDn to navigate through the open files in the editor (including the overflow section if you Ctrl-PgDn all the way to the right).
A: You can CTRL-click on just about any type, field, method, or variable and eclipse will bring you to the declaration of that item:
ie:
*
*on a local variable - brings you to the declaration statement in the function
*on a member variable - brings you to the definition in a class file that the member is declared (or the parent class if it's not overridden in a child class
*on a class - brings you to the top of the class file for that class
You can also CTRL-hover over a type to bring up the option to find an implementation. This is useful if you are using an interface and want to see what classes implement that interface. It also works to see what super-classes and subclasses might implement/override a certain function.
A: *
*CTRL-SHIFT-g : finds usages of the method or field under the cursor, absolutely necessary for understanding code
*CTRL-F6 : navigate between the list of open editor windows, if you just type it once and let go you toggle back to the previous editor window, doing this successively is a nice way to jump back and forth
*CTRL-t : on a class or method will show you the type hierarchy, very useful for finding implementations of an interface method for example
A: Clicking on the return type in a method's declaration highlights all exit points of the method.
for instance:
1: public void foo()
2: {
3: somecode();
4: if ( blah ) return;
5:
6: bar();
7: }
clicking on void will highlight the return on line 4 and the close } on line 7.
Update: It even works for try{} catch blocks. If you put the cursor on the exception in the catch block, Eclipse will highlight the methods that may throw that exception.
A: Code completion supports CamelCase, e.g., typing CWAR will show a result for ClassWithAReallyLongName. Start using this feature and you'll never type another long classname again.
(Parts copied from another answer because I think answers with just one hint/tip are best for polling.)
A: Alt-Up Arrow moves the current selection up a line, Alt-Down Arrow moves it down. I also use Alt-Shift-Up/Down Arrow all the time. Ctrl-K and Ctrl-Shift-K is quite handy, finding next/previous occurrence of the current selection (or the last Find, if nothing is selected).
A: There's an option to place the opening curly brace and a semicolon automagically in the "correct" position. You'll have to enable this - Choose Window/Preferences and type "brace" in the searchbox - should be easily findable (no eclipse on this computer). The effect:
*
*Typing a semicolon anywhere on the line will place it at this lines end (as in word/openoffice: Backspace if you'd like to have it in the original place)
*Typing an opening curly brace when you're just inside another pair of braces will place it at the end of this line - as in this example
("|" is the cursor):
if(i==0|)
typing "{" now will result in
if(i==0) {|
A: Hippie expand/Word Complete, afaik inspired by Emacs: will autocomplete any word in any editor based on other words in that file. Autocomplete inside String literals in Java code, in xml files, everywhere.
Alt + /
A: Ctrl+f then tick the "Regular expressions" checkbox. From that, you can search with regular expressions, but even more powerfully, you can include group matches in your replacement string ($1, $2, etc, or $0 for the whole match).
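Eclipse's find/replace uses Java's regular-expression engine, so the $1-style group references behave exactly like String.replaceAll; a quick sketch of the same semantics outside the IDE (the pattern here is just an example):

```java
public class RegexGroupDemo {
    public static void main(String[] args) {
        // Swap "key=value" to "value=key" using capture groups,
        // the same way $1/$2 work in Eclipse's replace box.
        String result = "name=alice".replaceAll("(\\w+)=(\\w+)", "$2=$1");
        System.out.println(result); // alice=name

        // $0 refers to the whole match.
        System.out.println("abc".replaceAll("b", "[$0]")); // a[b]c
    }
}
```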
A: ctrl-alt-up/down to copy a line up (or down). That followed by alt-up/down is often much quicker than a copy-paste
A: Don't know a keyboard shortcut to it, but select a local variable in a method, and then right click. Under refactor is "convert local variable to field". Very useful on occasions. Just wish there was a shortcut for it!
A: Ctrl-Shift- Up or Down in Java editor jumps to to nearest declaration of a method or a field in that direction.
A: One combination to rule them all.
CTRL+SHIFT+L
Get the list of all these "hidden" features.
A: I am sorry if this is a duplicate, but I don't think I have seen this one mentioned here and I scanned over all of the posts:
Word completion:
Alt + /
is a really nice alternative to Ctrl+Space. It doesn't quite replace Ctrl+Space, but is much faster. And don't be afraid to press it multiple times, it will keep cycling over possible options.
A: Alt+Shift+Z - to activate the "Surround With" submenu. Handy when you have to surround code with a try/catch block.
A: Ctrl + H searches/replaces through the whole workspace or project.
A: Alt-Shift-R stands for rename, not refactor. Refactoring is a more general term (as defined by the book).
Nevertheless, it is one of my favorite refactorings. Others include:
*
*Alt-Shift-M: Extract Method (when a code block or an expression is selected)
*Alt-Shift-L: Extract Local Variable (when an expression is selected)
Extract Local Variable is especially useful when I don't remember (or bother to type) the result type of a method. Assuming you have a method JdbcTemplate createJdbcTemplate() in your class, write some code such as this:
void someQuery() {
createJdbcTemplate()
}
Select the expression createJdbcTemplate(), click Alt-Shift-L, type the name of variable and press enter.
void someQuery() {
JdbcTemplate myTemplate = createJdbcTemplate();
}
A: CTRL + D - to delete current line
A: Absolutely, Ctrl+Q to go to last edit location.
It is very useful just after being interrupted by phone, boss or others.
A: Alt+Shift+Up Arrow does escalating selection. Alt+Shift+Down does the opposite.
A: Alt+Up or Alt+Down to move lines
A: Ctrl + Shift + M: changes a static method or static attribute reference of a class to a static import.
Before
import X;
...
X.callSomething();
After
import static X.callSomething;
...
callSomething();
A: Nobody's mentioned the best one yet. Click on a class or method name and press Ctrl+T.
You get a quick type hierarchy. For a class name you see the entire class hierarchy. For a method name you get the hierarchy showing superclasses and subclasses, with implementations of that method distinguished from abstract mentions, or classes that don't mention the method.
This is huge when you are at an abstract method declaration and quickly want to see where it is implemented.
A: Don't forget Ctrl+Shift+L, which displays a list of all the keyboard shortcut combinations (just in case you forget any of those listed here).
A: F3 has been my favorite, opens the definition for the selected item.
Ctrl+Shift+R has an interesting feature, you can use just the uppercase camel letters from a class when searching (such as typing CWAR will show a result for ClassWithAReallyLongName).
Alt+Shift+W > Package Explorer makes life easier when browsing large projects.
A: Ctrl-2 something
Seems that nobody mentioned Ctrl-2 L (assign to new local variable) and Ctrl-2 F (assign to a new field), these ones have changed how I write code.
Previously, I was typing, say (| is cursor location):
Display display = new |
and then I pushed Ctrl-Space to complete the constructor call. Now I type:
new Display()|
and press Ctrl-2 L, which results in:
Display display = new Display()|
This really speeds things up. (Ctrl-2 F does the same, but assigns to a new field rather than a new variable.)
Another good shortcut is Ctrl-2 R: rename in file. It is much faster than rename refactoring (Alt-Shift-R) when renaming things like local variables.
Actually I went to Keys customization preference page and assigned all sorts of additional quick fixes to Ctrl-2-something. For example I now press Ctrl-2 J to split/join variable declaration, Ctrl-2 C to extract an inner class into top-level, Ctrl-2 T to add throws declaration to the function, etc. There are tons of assignable quick fixes, go pick your favourite ones and assign them to Ctrl-2 shortcuts.
Templates
Another favourite of mine is my “npe” template, defined as:
if (${arg:localVar} == null)
throw new ${exception:link(NullPointerException,IllegalArgumentException)}("${arg:localVar} is null");
This allows me to quickly add null argument checks at the start of every function (especially ones that merely save the argument into a field or add it into a collection, especially constructors), which is great for detecting bugs early.
See more useful templates at www.tarantsov.com/eclipse/templates/. I won't list them all here because there are many, and because I often add new ones.
Completion
A few code completion tricks:
*
*camel case support mentioned in another answer: type cTM, get currentTimeMillis
*default constructor: in the class declaration with no default constructor push Ctrl-Space, the first choice will be to create one
*overloading: in the class declaration start typing name of a method you can overload, Ctrl-Space, pick one
*getter/setter creation: type “get”, Ctrl-Space, choose a getter to create; same with “is” and “set”
Assign To A New Field
This is how I add fields.
*
*If you have no constructors yet, add one. (Ctrl-Space anywhere in a class declaration, pick the first proposal.)
*Add an argument (| is cursor position):
public class MyClass {
public MyClass(int something|) {
}
}
*Press Ctrl-1, choose “assign to a new field”. You get:
public class MyClass {
private final Object something;
public MyClass(Object something) {
this.something = something;
}
}
*Add a null-pointer check if appropriate (see “npe” template above):
public class MyClass {
private final Object something;
public MyClass(Object something) {
npe|
this.something = something;
}
}
Hit Ctrl-Space, get:
public class MyClass {
private final Object something;
public MyClass(Object something) {
if (something == null)
throw new NullPointerException("something is null");
this.something = something;
}
}
A great time saver!
A: A non-keyboard shortcut trick is to use commit sets in your Team->Synchronise view to organise your changes before committing.
Set a change set to be the default, and all changes you make on files will be put in that set, making it easy to see what you have changed while working on a specific defect/feature, and other changes you had while testing etc.
A: CTRL+SPACE, for anything, anywhere.
Generate getters and setters.
Create Constructors using Fields
Extract Method...
Refactor->Rename
CTRL+O for the quick outline. CTRL+O+CTRL+O for the inherited outline.
F4 to display a type hierarchy
Open Call Hierarchy to display where a method is called from.
CTRL+SHIFT+T to open a Java Type
CTRL+SHIFT+R to open any resource.
ALT + left or right to go forward or backwards through edit places in your documents (easy navigation)
Override/Implement methods if you know you're going to do a lot of methods (otherwise, CTRL+SPACE is better for one at a time selection.
Refactor->Extract Interface
Refactor->Pull up
Refactor->Push down
CTRL+SHIFT+O for organize imports (when typing the general class name such as Map, pressing CTRL+SPACE and then selecting the appropriate class will import it directly for you).
CTRL+SHIFT+F for formatting (although Eclipse's built in formatter can be a little braindead for long lines of code)
EDIT: Oh yeah, some debugging:
F5: Step into (show me the details!)
F6: Step over (I believe you, on to the next part...)
F7: Step out (I thought I cared about this method, but it turns out I don't, get me out of here!)
F8: Resume (go until the next breakpoint is reached)
CTRL+SHIFT+I: inspect an expression. CTRL+SHIFT+I+CTRL+SHIFT+I: create a watch expression on the inspected expression.
Conditional breakpoints: Right click a breakpoint and you may set a condition that occurs which triggers its breaking the execution of the program (context assist, with Ctrl+Space, is available here!)
F11 - Debug last launched (application)
CTRL+F11 - Run last launched (application)
A: Breakpoint on Exception
Eclipse let you set breakpoints based on where an Exception occurs.
You access the option via the "j!" alt text http://help.eclipse.org/stable/topic/org.eclipse.jdt.doc.user/images/org.eclipse.jdt.debug.ui/elcl16/exc_catch.png icon in the debugging window.
alt text http://blogs.bytecode.com.au/glen/2007/04/06/images/2007/AddExceptionWindow.png
The official help topic "Add Java Exception Breakpoint " has more on this.
*
*The Uncaught Exception option is to suspend execution when an exception of the same type as the breakpoint is thrown in an uncaught location.
*The Caught Exception option is to suspend execution when an exception of the same type as the breakpoint is thrown in a caught location.
*do not forget the Exception Breakpoint Suspend on Subclass of this Exception:
to suspend execution when subclasses of the exception type are encountered.
For example, if an exception breakpoint for RuntimeException is configured to suspend on subclasses, it will also be triggered by a NullPointerException.
alt text http://help.eclipse.org/stable/topic/org.eclipse.jdt.doc.user/reference/breakpoints/images/ref-breakpoint_suspendsubclass.PNG
A: Ctrl+Shift+Enter to move the current line down by one and start typing above it.
Ctrl+Shift+X to capitalize the current selection, Ctrl-Shift-Y to change it lowercase.
Ctrl+. Autocompletes the current word. This works for variables as well as strings (which is a huge timesaver for array keys, for example)
A: When debugging I find the "Display" view really useful. It lets you type code (using auto complete) and lets you run/display the outcome of whatever you write.
Give it a try!
A: Quick Assist: Ctrl + 2, followed by F (assign to field), L(assign to local variable) and R (rename in file)
Last edit location: Ctrl+Q
Check out this article: http://dmy999.com/article/29/using-eclipse-efficiently
A: ALT+Shift+X + T
This will run your current file as a unit test.
A: Ctrl-1 to convert if to conditional expression and back, split an assignment or join it back or do other such small manipulations. There is a list of these in the help.
A: Depending on what time saver means to you...
Adding TODO and FIXME in a comment automatically adds a task to the task list in Eclipse.
So if there is code you want to come back to, say you were debugging and need to do some research, you can do...
FIXME means it is urgent, which puts a red ! in the task window
TODO is normal urgency
//FIXME: This accidentally deletes user accounts
user.account().delete();
//TODO: Add some validation before assigning everyone as admin
user.setPrivilege("Admin");
And then there are the setters/getters automatically being built. This is great if you are creating a bean or something. Say you have declared a class such as:
public class SomeBean {
private static int FIRST_VALUE = 0;
private static int SECOND_VALUE = 1;
...
private static int THOUSANDTH_VALUE = 1000;
}
You can create all the variables, then right-click in the editor, go to Source and then pick Generate Setters & Getters. This will automatically create them for you.
A: Shift-F2 goes to the Javadoc for any method.
Use it a LOT. For libraries you need to configure the location, but for standard classes it is predefined by Eclipse.
A: CTRL-MouseClick (left) as an alternative for F3 to go to declaration.
A: ctrl+d to delete the current line
alt+up/down to move the current line or block of selected text up or down
ctrl+alt+up/down to copy/duplication the current line or block of selected text up or down
ctrl+alt+c SVN commit (with subversive)
ctrl+alt+u SVN update (with subversive)
A: I recently mapped alt-enter to the same command as ctrl-1. It's just a bit easier to get to.
I also use alt+shift+x &t a bunch, but I'm not a fan of how the integrated test runner works.
A: If you are using the F3 key to navigate to the source code of a method, you can often waste time landing on the interface instead of going directly to the implementation class (of which there is often only one, e.g. for DAOs, Services, ...)
Hold Ctrl and hover the mouse pointer over a method in the code, and you will be able to choose between going directly to the Implementation (the class) or the Declaration (the interface).
More info about this tip here:
http://www.ibm.com/developerworks/opensource/library/os-eclipse-galnav/index.html
This is only available in Galileo and you can use Ctrl + T as well for the same.
A: Double click next to an opening bracket will highlight all the code till the closing bracket, and vice versa.
A: Install the MouseFeed Eclipse plugin. After installation, it will show you a popup with the keyboard shortcut whenever you click on a button or a menu item that is associated with a shortcut.
A: Ctrl-F6 to cycle focus through open Editor windows (with Ctrl-Shift-F6 to cycle backwards)
Ctrl-F7 to cycle focus through Eclipse views
Ctrl-F8 to cycle Eclipse perspectives
A: Not so hidden a feature, but very few people use or explore these:
Templates
Keyboard shortcuts
and Alex has explained Member sort and
Move lines
A: CTRL + b: to build the project under c++
CTRL + SHIFT + f: to format your code (c++)
A: Of course all these shortcuts are available in the menus but who has time for that when you're in the "zone".
I like the code hot swapping.
A: Of course if you can't find the binding you are looking for, or don't like the current binding Window -> Preferences -> General -> Keys will allow you to change, add & delete the mappings of your key combo's.
A: If you build your project with Ant you can assign a shortcut to "Runs the last launched external Tool" like Ctrl+Enter and it will repeat your last build. It is much easier than standard Alt+Shift+X,Q also it helps with a bug in the latest Eclipse that cannot find an ant build file in the project.
A: I'm really biased and this is blatant advertising...
Still, I think my new Eclipse plugin, nWire, is the best time saver you can get for Eclipse. I developed it after years of working with Eclipse, I just came to the conclusion that I need one tool to show me all the associations of my code instead of learning different tools and views.
Check out the demo on my web site.
A: Enabling 'Ignore white space' in the Compare/Patch settings is a real time saver!
A: Hit CTRL+S very often. It's CTRL+1's best friend.
A: I'm surprised no one mentioned the Emacs keybinding setting available in Eclipse. This is one of my favorite little features; it allows me to transition from Emacs to Eclipse with little adjustment in my navigation preferences.
Windows->Preferences->General->Keys->Scheme.
A: In
Windows/Preferences/General/Keys
define
Alt + C
for SVN Commit
Alt + U
for SVN Update
Shift + Ctrl + N
for New Class Dialog.
A: The eclipse help contains a lot of useful resources. Just search for "tips & tricks". In particular the "Tips and Tricks (JDT)" i found to be very useful.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "97"
} |
Q: As an ASP.NET programmer, do I need to be concerned about email injection attacks? There are lots of PHP articles about the subject, so is this a PHP-only problem?
I am sending emails using System.Net.Mail after some regular expression checks of course.
Similar to http://weblogs.asp.net/scottgu/archive/2005/12/10/432854.aspx
A: The PHP email injection attack works because of a weakness in the PHP mail() function. As a .NET developer you need not worry.
A: I've never heard of that issue in ASP.NET. However, you should trust user input about as much as you'd trust a hooker with your wallet.
A: As long as you are using the MailAddress object, I think you're fine, because injections will only manage to throw FormatExceptions for the specified address.
Examples of how to properly use the System.Net.Mail components are included in that MSDN page; be sure to follow them and you will be fine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How do I clone a generic List in Java? I have an ArrayList<String> that I'd like to return a copy of. ArrayList has a clone method which has the following signature:
public Object clone()
After I call this method, how do I cast the returned Object back to ArrayList<String>?
A: List<String> shallowClonedList = new ArrayList<>(listOfStrings);
Keep in mind that this is only a shallow copy, not a deep copy, i.e. you get a new list, but the entries are the same. This is no problem for plain strings. It gets more tricky when the list entries are objects themselves.
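The shallow/deep distinction is easy to demonstrate: the copy is an independent list (structural changes don't propagate), but both lists share the same element objects. A self-contained sketch using a mutable StringBuilder element:

```java
import java.util.ArrayList;
import java.util.List;

public class ShallowCopyDemo {
    public static void main(String[] args) {
        List<StringBuilder> original = new ArrayList<>();
        original.add(new StringBuilder("a"));

        // Shallow copy: a new list object holding the same element references.
        List<StringBuilder> copy = new ArrayList<>(original);

        copy.add(new StringBuilder("b")); // structural change: affects copy only
        copy.get(0).append("!");          // element mutation: visible through both

        System.out.println(original); // [a!]
        System.out.println(copy);     // [a!, b]
    }
}
```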
A: If you want this in order to be able to return the List in a getter it would be better to do:
ImmutableList.copyOf(list);
A: ArrayList newArrayList = (ArrayList) oldArrayList.clone();
A: To clone a generic interface like java.util.List you will just need to cast it. Here is an example:
List list = new ArrayList();
List list2 = ((List) ( (ArrayList) list).clone());
It is a bit tricky, but it works if you are limited to returning the List interface, so that anyone after you can implement the list however they want.
I know this answer is close to the final answer, but it shows how to do all of that while working with List, the generic parent, not ArrayList.
A: Why would you want to clone? Creating a new list usually makes more sense.
List<String> strs;
...
List<String> newStrs = new ArrayList<>(strs);
Job done.
A: Be very careful when cloning ArrayLists. Cloning in Java is shallow. This means that it will only clone the ArrayList itself and not its members. So if you clone an ArrayList X1 into X2, mutating an element through X2 will also be visible through X1 and vice-versa, because both lists point to the same element objects; structural changes such as add/remove, however, affect only the list they are made on. The clone is just a new ArrayList holding references to the same elements as the original.
A: This should also work:
ArrayList<String> orig = new ArrayList<String>();
ArrayList<String> copy = (ArrayList<String>) orig.clone();
A: This is the code I use for that:
ArrayList<String> copy = new ArrayList<String>(Collections.nCopies(original.size(), (String) null));
Collections.copy(copy, original);
Note that Collections.copy requires the destination to already contain at least source.size() elements; a freshly constructed ArrayList has size 0 (its constructor argument only sets capacity), so it must be pre-filled first.
Hope it is useful for you
A: With Java 8 it can be cloned with a stream.
import static java.util.stream.Collectors.toList;
...
List<AnObject> clone = myList.stream().collect(toList());
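As with the copy-constructor approach, this produces a shallow copy; a minimal self-contained sketch (note that Collectors.toList() makes no formal promise about the returned list's class, so treat the result as a List rather than an ArrayList):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamCopyDemo {
    public static void main(String[] args) {
        List<String> myList = Arrays.asList("a", "b", "c");

        // Collect the stream of elements into a fresh list.
        List<String> clone = myList.stream().collect(Collectors.toList());

        System.out.println(clone);                // [a, b, c]
        System.out.println(clone.equals(myList)); // true  (same contents)
        System.out.println(clone == myList);      // false (different list object)
    }
}
```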
A: Be advised that Object.clone() has some major problems, and its use is discouraged in most cases. Please see Item 11, from "Effective Java" by Joshua Bloch for a complete answer. I believe you can safely use Object.clone() on primitive type arrays, but apart from that you need to be judicious about properly using and overriding clone. You are probably better off defining a copy constructor or a static factory method that explicitly clones the object according to your semantics.
A: I think this should do the trick using the Collections API:
Note: the copy method runs in linear time, and the destination list must already contain at least as many elements as the source, otherwise Collections.copy throws IndexOutOfBoundsException.
//assume oldList exists and has data in it.
List<String> newList = new ArrayList<String>(Collections.nCopies(oldList.size(), (String) null));
Collections.copy(newList, oldList);
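One pitfall worth calling out with Collections.copy: the destination must already contain at least source.size() elements, and the ArrayList(int) constructor only sets capacity, not size. A runnable sketch of the failure and one possible fix:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CollectionsCopyDemo {
    public static void main(String[] args) {
        List<String> src = Arrays.asList("a", "b");

        // Pitfall: capacity 2 but size 0, so the source cannot fit.
        try {
            Collections.copy(new ArrayList<String>(2), src);
        } catch (IndexOutOfBoundsException expected) {
            System.out.println("copy into an undersized list fails");
        }

        // Fix: pre-fill the destination so its *size* matches the source.
        List<String> dest = new ArrayList<>(Collections.nCopies(src.size(), (String) null));
        Collections.copy(dest, src);
        System.out.println(dest); // [a, b]
    }
}
```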
A: I find using addAll works fine.
ArrayList<String> copy = new ArrayList<String>();
copy.addAll(original);
parentheses are used rather than the generics syntax
A: ArrayList first = new ArrayList ();
ArrayList copy = (ArrayList) first.clone ();
A: I am not a Java professional, but I had the same problem and tried to solve it with this method. (It supposes that T has a copy constructor.)
public static <T> List<T> clone(List<T> list) {
    try {
        @SuppressWarnings("unchecked")
        List<T> c = (List<T>) list.getClass().getDeclaredConstructor().newInstance();
        for (T t : list) {
            @SuppressWarnings("unchecked")
            T copy = (T) t.getClass().getDeclaredConstructor(t.getClass()).newInstance(t);
            c.add(copy);
        }
        return c;
    } catch (Exception e) {
        throw new RuntimeException("List cloning unsupported", e);
    }
}
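A quick sanity check of this copy-constructor idea, using a hypothetical Box element type that exposes a public copy constructor (note the reflective lookup requires the constructor's parameter type to be the element's own class, so e.g. StringBuilder, whose copy constructor takes CharSequence, would not be found):

```java
import java.util.ArrayList;
import java.util.List;

public class DeepCopyDemo {
    // Hypothetical element type with a public copy constructor.
    public static class Box {
        public int value;
        public Box(int value) { this.value = value; }
        public Box(Box other) { this.value = other.value; }
    }

    // Deep copy via reflection: a new list of the same class, each element
    // rebuilt through its copy constructor.
    @SuppressWarnings("unchecked")
    public static <T> List<T> deepClone(List<T> list) {
        try {
            List<T> c = (List<T>) list.getClass().getDeclaredConstructor().newInstance();
            for (T t : list) {
                c.add((T) t.getClass().getDeclaredConstructor(t.getClass()).newInstance(t));
            }
            return c;
        } catch (Exception e) {
            throw new RuntimeException("List cloning unsupported", e);
        }
    }

    public static void main(String[] args) {
        List<Box> original = new ArrayList<>();
        original.add(new Box(1));

        List<Box> copy = deepClone(original);
        copy.get(0).value = 99; // mutate the copy only

        System.out.println(original.get(0).value); // 1  (original untouched)
        System.out.println(copy.get(0).value);     // 99
    }
}
```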
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "175"
} |
Q: Make a JPanel not draw its background (Transparent) Is it possible, in Java, to make a JPanel skip drawing its background thus being transparent except for the components on it?
A: This article seems to have some handy info on how to create shaped and transparent windows in Java:
https://docs.oracle.com/javase/tutorial/uiswing/misc/trans_shaped_windows.html
A: setOpaque(false)
It'll pass off painting the background to its parent, which may draw its own background.
You can do a screen capture and then use that to paint the background of the panel.
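A minimal sketch of the setOpaque approach (component creation doesn't need a display, so this runs even headless; whether a panel starts out opaque depends on the look and feel):

```java
import javax.swing.JPanel;

public class TransparentPanelDemo {
    public static void main(String[] args) {
        JPanel panel = new JPanel();

        // Ask Swing to skip painting this panel's background; the parent
        // container's background then shows through behind the children.
        panel.setOpaque(false);

        System.out.println(panel.isOpaque()); // false
    }
}
```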
A: Technically a JPanel may start off non-opaque. This was true for the GTK look and feel in 1.5 (or 1.4?), but no other look and feels as far as I am aware.
A: If you are using NetBeans, Then Do these steps:-
*
*Right Click on JPanel.
*Scroll Down and Search For (( Opaque )). It must be there.
*Uncheck it.
*Now your JPanel background will be removed and what will appear in
JPanel Background is your Background of JFrame.
A: class TransparentJPanel extends JPanel
{
TransparentJPanel()
{
super() ;
this.setOpaque( false ) ; // this will make the JPanel transparent
// but not its components (JLabel, TextField etc.)
this.setLayout( null ) ;
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Hidden Features of ASP.NET
This question exists because it has
historical significance, but it is not
considered a good, on-topic question
for this site, so please do not use it
as evidence that you can ask similar
questions here.
More info: https://stackoverflow.com/faq
There are always features that would be useful in fringe scenarios, but for that very reason most people don't know them. I am asking for features that are not typically taught by the text books.
What are the ones that you know?
A: Valid syntax that VS chokes on:
<input type="checkbox" name="roles" value='<%# Eval("Name") %>'
<%# ((bool) Eval("InRole")) ? "checked" : "" %>
<%# ViewData.Model.IsInRole("Admin") ? "" : "disabled" %> />
A: System.Web.Hosting.HostingEnvironment.MapPath
A: One feature that comes to mind: sometimes you need to hide part of your page from crawlers. You can do it with JavaScript or with this simple code:
if (Request.Browser.Crawler)
{
    HideArticleComments();
}
A: Similar to the optimizeCompilations="true" solution, here is another one to speed up the time you spend waiting between builds (especially useful if you are working with a large project): create a RAM-based drive (e.g. using RamDisk) and change your default "Temporary ASP.NET Files" location to this memory-based drive.
The full details on how to do this are on my blog: http://www.wagnerdanda.me/2009/11/speeding-up-build-times-in-asp-net-with-ramdisk/
Basically you first create and configure a RamDisk (again, my blog has a link to a free RamDisk tool) and then you change your web.config according to this:
<system.web>
....
<compilation debug="true" tempDirectory="R:\ASP_NET_TempFiles\">
....
</compilation>
....
</system.web>
It greatly speeds up my development; you just need to invest in memory for your computer :)
Happy Programming!
Wagner Danda
A: Here's the best one. Add this to your web.config for MUCH faster compilation. This is post 3.5SP1 via this QFE.
<compilation optimizeCompilations="true">
Quick summary: we are introducing a
new optimizeCompilations switch in
ASP.NET that can greatly improve the
compilation speed in some scenarios.
There are some catches, so read on for
more details. This switch is
currently available as a QFE for
3.5SP1, and will be part of VS 2010.
The ASP.NET compilation system takes a
very conservative approach which
causes it to wipe out any previous
work that it has done any time a ‘top
level’ file changes. ‘Top level’ files
include anything in bin and App_Code,
as well as global.asax. While this
works fine for small apps, it becomes
nearly unusable for very large apps.
E.g. a customer was running into a
case where it was taking 10 minutes to
refresh a page after making any change
to a ‘bin’ assembly.
To ease the pain, we added an
‘optimized’ compilation mode which
takes a much less conservative
approach to recompilation.
Via here:
A: *
*HttpContext.Current will always give you access to the current context's Request/Response/etc., even when you don't have access to the Page's properties (e.g., from a loosely-coupled helper class).
*You can continue executing code on the same page after redirecting the user to another one by calling Response.Redirect(url, false )
*You don't need .ASPX files if all you want is a compiled Page (or any IHttpHandler). Just set the path and HTTP methods to point to the class in the <httpHandlers> element in the web.config file.
*A Page object can be retrieved from an .ASPX file programmatically by calling PageParser.GetCompiledPageInstance(virtualPath,aspxFileName,Context)
A: Retail mode at the machine.config level:
<configuration>
<system.web>
<deployment retail="true"/>
</system.web>
</configuration>
Overrides the web.config settings to enforce debug to false, turns custom errors on and disables tracing. No more forgetting to change attributes before publishing - just leave them all configured for development or test environments and update the production retail setting.
A: I thought it was neat when I dumped an XmlDocument into a label and it displayed using its XSL transforms.
A: Request.IsLocal Property :
It indicates whether current request is coming from Local Computer or not.
if( Request.IsLocal )
{
LoadLocalAdminMailSettings();
}
else
{
LoadServerAdminMailSettings();
}
A: By default any web form page inherits from System.Web.UI.Page class. What if you want your pages to inherit from a custom base class, which inherits from System.Web.UI.Page?
There is a way to constrain any page to inherit from your own base class. Simply add a new line to your web.config:
<system.web>
<pages pageBaseType="MyBasePageClass" />
</system.web>
Caution: this is only valid if your class is a stand-alone one. I mean a class that has no code-behind, which looks like <%@ Page Language="C#" AutoEventWireup="true" %>
A: Enabling intellisense for MasterPages in the content pages
I am sure this is a very little-known hack.
Most of the time you have to use the FindControl method and cast the controls in the master page from the content pages when you want to use them; the MasterType directive enables IntelliSense in Visual Studio once you do this.
just add one more directive to the page
<%@ MasterType VirtualPath="~/Masters/MyMainMasterPage.master" %>
If you do not want to use the Virtual Path and use the class name instead then
<%@ MasterType TypeName="MyMainMasterPage" %>
Get the full article here
A: HttpContext.Items as a request-level caching tool
A: Two things stand out in my head:
1) You can turn Trace on and off from the code:
#if DEBUG
if (Context.Request.QueryString["DoTrace"] == "true")
{
Trace.IsEnabled = true;
Trace.Write("Application:TraceStarted");
}
#endif
2) You can build multiple .aspx pages using only one shared "code-behind" file.
Build one class .cs file :
public class Class1:System.Web.UI.Page
{
public TextBox tbLogin;
protected void Page_Load(object sender, EventArgs e)
{
if (tbLogin!=null)
tbLogin.Text = "Hello World";
}
}
and then you can have any number of .aspx pages (after you delete .designer.cs and .cs code-behind that VS has generated) :
<%@ Page Language="C#" AutoEventWireup="true" Inherits="Namespace.Class1" %>
<form id="form1" runat="server">
<div>
<asp:TextBox ID="tbLogin" runat="server"></asp:TextBox>
</div>
</form>
You can have controls in the ASPX that do not appear in Class1, and vice-versa, but you need to remember to check your controls for nulls.
A: Attach a class located in your App_Code folder to your Global Application Class file.
ASP.NET 2.0 - Global.asax - Code Behind file.
This works in Visual Studio 2008 as well.
A: EnsureChildControls Method : It checks the child controls if they're initiated. If the child controls are not initiated it calls CreateChildControls method.
A: You can use:
Request.Params[Control.UniqueId]
To get the value of a control BEFORE viewstate is initialized (Control.Text etc will be empty at this point).
This is useful for code in Init.
A: WebMethods.
You can use ASP.NET AJAX callbacks to call web methods placed in ASPX pages. Decorate a static method with the [WebMethod()] and [ScriptMethod()] attributes. For example:
[System.Web.Services.WebMethod()]
[System.Web.Script.Services.ScriptMethod()]
public static List<string> GetFruitBeginingWith(string letter)
{
List<string> products = new List<string>()
{
"Apple", "Banana", "Blackberry", "Blueberries", "Orange", "Mango", "Melon", "Peach"
};
return products.Where(p => p.StartsWith(letter)).ToList();
}
Now, in your ASPX page you can do this:
<form id="form1" runat="server">
<div>
<asp:ScriptManager ID="ScriptManager1" runat="server" EnablePageMethods="true" />
<input type="button" value="Get Fruit" onclick="GetFruit('B')" />
</div>
</form>
And call your server side method via JavaScript using:
<script type="text/javascript">
function GetFruit(l)
{
PageMethods.GetFruitBeginingWith(l, OnGetFruitComplete);
}
function OnGetFruitComplete(result)
{
alert("You got fruit: " + result);
}
</script>
A: Check to see if the client is still connected, before starting a long-running task:
if (this.Response.IsClientConnected)
{
// long-running task
}
A: One little known and rarely used feature of ASP.NET is:
Tag Mapping
It's rarely used because there's only a specific situation where you'd need it, but when you need it, it's so handy.
Some articles about this little-known feature:
Tag Mapping in ASP.NET
Using Tag Mapping in ASP.NET 2.0
and from that last article:
Tag mapping allows you to swap
compatible controls at compile time on
every page in your web application. A
useful example is if you have a stock
ASP.NET control, such as a
DropDownList, and you want to replace
it with a customized control that is
derived from DropDownList. This could
be a control that has been customized
to provide more optimized caching of
lookup data. Instead of editing every
web form and replacing the built in
DropDownLists with your custom
version, you can have ASP.NET in
effect do it for you by modifying
web.config:
<pages>
<tagMapping>
<clear />
<add tagType="System.Web.UI.WebControls.DropDownList"
mappedTagType="SmartDropDown"/>
</tagMapping>
</pages>
A: HttpModules. The architecture is crazy elegant. Maybe not a hidden feature, but cool none the less.
A: You can find any control by using its UniqueID property:
Label label = (Label)Page.FindControl("UserControl1$Label1");
A: Lots of people mentioned how to optimize your code when recompiling. Recently I discovered I can do most of my development (code-behind stuff) in the aspx page and skip the build step completely. Just save the file and refresh your page. All you have to do is wrap your code in the following tag:
<script runat="server">
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
Response.Write("Look Ma', I didn't even had to build!")
End Sub
</script>
Once you are done, Just move all to the code-behind, build, test everything works and voila!
-D
A: You can use ASP.NET Comments within an .aspx page to comment out full parts of a page including server controls. And the contents that is commented out will never be sent to the client.
<%--
<div>
<asp:Button runat="server" id="btnOne"/>
</div>
--%>
A: The Code Expression Builder
Sample markup:
Text = '<%$ Code: GetText() %>'
Text = '<%$ Code: MyStaticClass.MyStaticProperty %>'
Text = '<%$ Code: DateTime.Now.ToShortDateString() %>'
MaxLength = '<%$ Code: 30 + 40 %>'
The real beauty of the code expression builder is that you can use databinding like expressions in non-databinding situations. You can also create other Expression Builders that perform other functions.
web.config:
<system.web>
<compilation debug="true">
<expressionBuilders>
<add expressionPrefix="Code" type="CodeExpressionBuilder" />
The cs class that makes it all happen:
[ExpressionPrefix("Code")]
public class CodeExpressionBuilder : ExpressionBuilder
{
public override CodeExpression GetCodeExpression(
BoundPropertyEntry entry,
object parsedData,
ExpressionBuilderContext context)
{
return new CodeSnippetExpression(entry.Expression);
}
}
A: While testing, you can have emails sent to a folder on your computer instead of an SMTP server. Put this in your web.config:
<system.net>
<mailSettings>
<smtp deliveryMethod="SpecifiedPickupDirectory">
<specifiedPickupDirectory pickupDirectoryLocation="c:\Temp\" />
</smtp>
</mailSettings>
</system.net>
A: Usage of the ASHX file type:
If you want to just output some basic HTML or XML without going through the page event handlers, then you can implement an IHttpHandler in a simple fashion.
Name the page as SomeHandlerPage.ashx and just put the below code (just one line) in it
<%@ webhandler language="C#" class="MyNamespace.MyHandler" %>
Then the code file
using System;
using System.IO;
using System.Web;
namespace MyNamespace
{
public class MyHandler: IHttpHandler
{
public void ProcessRequest (HttpContext context)
{
context.Response.ContentType = "text/xml";
string myString = SomeLibrary.SomeClass.SomeMethod();
context.Response.Write(myString);
}
public bool IsReusable
{
get { return true; }
}
}
}
A: Setting Server Control Properties Based on Target Browser and more.
<asp:Label runat="server" ID="labelText"
ie:Text="This is IE text"
mozilla:Text="This is Firefox text"
Text="This is general text"
/>
That one kinda took me by surprise.
A: System.Web.VirtualPathUtility
A: I worked on an ASP.NET application which went through a security audit by a leading security company, and I learned this easy trick for preventing a lesser-known but important security vulnerability.
The below explanation is from:
http://www.guidanceshare.com/wiki/ASP.NET_2.0_Security_Guidelines_-_Parameter_Manipulation#Consider_Using_Page.ViewStateUserKey_to_Counter_One-Click_Attacks
Consider using Page.ViewStateUserKey to counter one-click attacks. If you authenticate your callers and use ViewState, set the Page.ViewStateUserKey property in the Page_Init event handler to prevent one-click attacks.
void Page_Init (object sender, EventArgs e) {
ViewStateUserKey = Session.SessionID;
}
Set the property to a value you know is unique to each user, such as a session ID, user name, or user identifier.
A one-click attack occurs when an attacker creates a Web page (.htm or .aspx) that contains a hidden form field named __VIEWSTATE that is already filled with ViewState data. The ViewState can be generated from a page that the attacker had previously created, such as a shopping cart page with 100 items. The attacker lures an unsuspecting user into browsing to the page, and then the attacker causes the page to be sent to the server where the ViewState is valid. The server has no way of knowing that the ViewState originated from the attacker. ViewState validation and HMACs do not counter this attack because the ViewState is valid and the page is executed under the security context of the user.
By setting the ViewStateUserKey property, when the attacker browses to a page to create the ViewState, the property is initialized to his or her name. When the legitimate user submits the page to the server, it is initialized with the attacker's name. As a result, the ViewState HMAC check fails and an exception is generated.
A: HttpContext.Current.IsDebuggingEnabled
This is great for determining which scripts to output (min or full versions) or anything else you might want in dev, but not live.
A: If you place a file named app_offline.htm
in the root of a web application directory, ASP.NET 2.0+ will shut-down the application and stop normal processing any new incoming requests for that application, showing only the contents of the app_offline.htm file for all new requests.
This is the quickest and easiest way to display your "Site Temporarily Unavailable" notice while re-deploying (or rolling back) changes to a Production server.
Also, as pointed out by marxidad, make sure you have at least 512 bytes of content within the file so IE6 will render it correctly.
A: Included in ASP.NET 3.5 SP1:
*
*customErrors now supports "redirectMode" attribute with a value of "ResponseRewrite". Shows error page without changing URL.
*The form tag now recognizes the action attribute. Great for when you're using URL rewriting
A: DefaultButton property in Panels.
It sets default button for a particular panel.
A: ClientScript property on Page object.
A: If you use web services instead of WCF services, you can still use standard .NET membership to enforce authentication and login-session behaviour on a set of web services, much as you would secure a web site with membership forms authentication, and without needing special session and/or SOAP header implementations. Simply call System.Web.Security.FormsAuthentication.SetAuthCookie(userName, false) [after calling Membership.ValidateUser(userName, password), of course] to create a cookie in the response as if the user had logged in via a web form.
You can then retrieve this authentication cookie with Response.Cookies[].Value and return it as a string to the user. It can be used to authenticate the user in subsequent calls by re-creating the cookie in Application_BeginRequest: extract the cookie parameter from Request.InputStream and re-create the auth cookie before membership authenticates the request. This way the membership provider is tricked into treating the request as authenticated and enforces all its rules.
A sample web method signature to return this cookie to the user would be:
string Login(userName, password)
A sample subsequent web method call would be:
string DoSomething(string authcookie, string methodParam1, int methodParam2, etc.) — where you extract the authcookie parameter (the value obtained from the Login method) from Request.InputStream.
This also simulates a login session, and calling FormsAuthentication.SignOut in a web method like Logout(authcookie) would make the user need to sign in again.
A: 'file' attribute on appsettings element in web.config.
Specifies a relative path to an external file that contains custom application configuration settings.
If you have a few app settings out of many that need to be modified in different environments (e.g. production), this is an excellent choice.
Because any changes to the Web.config file cause the application to restart, using a separate file allows users to modify values that are in the appSettings section without causing the application to restart. The contents of the separate file are merged with the appSettings section in the Web.config file.
A: ScottGu has a bunch of tricks at http://weblogs.asp.net/scottgu/archive/2006/04/03/441787.aspx
A: Using configSource to split configuration files.
You can use the configSource attribute in a web.config file to push configuration elements to other .config files, for example,
instead of:
<appSettings>
<add key="webServiceURL" value="https://some/ws.url" />
<!-- some more keys -->
</appSettings>
...you can have the entire appSettings section stored in another configuration file. Here's the new web.config :
<appSettings configSource="myAppSettings.config" />
The myAppSettings.config file :
<appSettings>
<add key="webServiceURL" value="https://some/ws.url" />
<!-- some more keys -->
</appSettings>
This is quite useful for scenarios where you deploy an application to a customer and you don't want them interfering with the web.config file itself and just want them to be able to change just a few settings.
ref: http://weblogs.asp.net/fmarguerie/archive/2007/04/26/using-configsource-to-split-configuration-files.aspx
A: MaintainScrollPositionOnPostback attribute in Page directive. It is used to maintain scroll position of aspx page across postbacks.
A: HttpContext.IsCustomErrorEnabled is a cool feature.I've found it useful more than once. Here is a short post about it.
A: By default, any content between tags for a custom control is added as a child control. This can be intercepted in an AddParsedSubObject() override for filtering or additional parsing (e.g., of text content in LiteralControls):
protected override void AddParsedSubObject(object obj)
{
    var literal = obj as LiteralControl;
    if (literal != null) Controls.Add(parseControl(literal.Text));
    else base.AddParsedSubObject(obj);
}
...
<uc:MyControl runat='server'>
...this text is parsed as a LiteralControl...
</uc:MyControl>
A: If you have ASP.NET generating an RSS feed, it will sometimes put an extra line at the top of the page. This won't validate with common RSS validators. You can work around it by putting the page directive <@Page> at the bottom of the page.
A: Before ASP.NET v3.5 added routes, you could create your own friendly URLs simply by writing an HttpModule to rewrite the request early in the page pipeline (such as in the BeginRequest event).
Urls like http://servername/page/Param1/SomeParams1/Param2/SomeParams2 would get mapped to another page like below (often using regular expressions).
HttpContext.RewritePath("PageHandler.aspx?Param1=SomeParms1&Param2=SomeParams2");
DotNetNuke has a really good HttpModule that does this for their friendly URLs. It is still useful on machines where you can't deploy .NET v3.5.
A: My team uses this a lot as a hack:
WebRequest myRequest = WebRequest.Create("http://www.google.com");
WebResponse myResponse = myRequest.GetResponse();
StreamReader sr = new StreamReader(myResponse.GetResponseStream());
// here's page's response loaded into a string for further use
String thisReturn = sr.ReadToEnd().Trim();
It loads a webpage's response as a string. You can send in post parameters too.
We use it in place of ASCX/AJAX/web services when we need something cheap and fast. Basically, it's a quick way to access web-available content across servers. In fact, we just dubbed it the "Redneck Web Service" yesterday.
A: throw new HttpException(404, "Article not found");
This will be caught by ASP.NET which will return the customErrors page. Learned about this one in a recent .NET Tip of the Day Post
A: Did you know it's possible to run ASP.Net outside of IIS or Visual Studio?
The whole runtime is packaged up and ready to be hosted in any process that wants to give it a try. Using ApplicationHost, HttpRuntime and HttpApplication classes, you too can grind up those .aspx pages and get shiny HTML output from them.
HostingClass host = (HostingClass) ApplicationHost.CreateApplicationHost(
    typeof(HostingClass), "/virtualpath", "physicalPath");
host.ProcessPage(urlToAspxFile);
And your hosting class:
public class HostingClass : MarshalByRefObject
{
public void ProcessPage(string url)
{
using (StreamWriter sw = new StreamWriter(@"C:\temp.html"))
{
SimpleWorkerRequest worker = new SimpleWorkerRequest(url, null, sw);
HttpRuntime.ProcessRequest(worker);
}
// Ta-dah! C:\temp.html has some html for you.
}
}
A: CompilationMode="Never" is a feature which can be crucial in certain ASP.NET sites.
If you have an ASP.NET application where ASPX pages are frequently generated and updated via a CMS or other publishing system, it is important to use CompilationMode="Never".
Without this setting, the ASPX file changes will trigger recompilations which will quickly make your appdomain restart. This can wipe out session state and httpruntime cache, not to mention lag caused by recompilation.
(To prevent recompilation you could increase the numRecompilesBeforeAppRestart setting, but that is not ideal as it consumes more memory.)
One caveat to this feature is that the ASPX pages cannot contain any code blocks. To get around this, one may place code in custom controls and/or base classes.
This feature is mostly irrelevant in cases where ASPX pages don't change often.
A: Application variables can be used within a web application for communicating across the whole application. They are initialized in the Global.asax file and used across the pages of that web application by all users, independent of the session they create.
A: It's possible to package ASPX pages into a Library (.dll), and serve them with the ASP.NET engine.
You will need to implement your own VirtualPathProvider, which will load specific DLLs via reflection, or you could include the DLL name in your pathname. It's up to you.
The magic happens when overriding the VirtualFile.Open method, where you return the ASPX file as a resource from the Assembly class: Assembly.GetManifestResourceStream. The ASP.NET engine will process the resource since it is served via the VirtualPathProvider.
This allows you to plug in pages or, as I did, use it to include an HttpHandler with a control.
A: Templated user controls. Once you know how they work you will see all sorts of possibilities. Here's the simplest implementation:
TemplatedControl.ascx
The great thing here is using the easy and familiar user control building block and being able to layout the different parts of your UI using HTML and some placeholders.
<%@ Control Language="C#" CodeFile="TemplatedControl.ascx.cs" Inherits="TemplatedControl" %>
<div class="header">
<asp:PlaceHolder ID="HeaderPlaceHolder" runat="server" />
</div>
<div class="body">
<asp:PlaceHolder ID="BodyPlaceHolder" runat="server" />
</div>
TemplatedControl.ascx.cs
The 'secret' here is using public properties of type ITemplate and knowing about the [ParseChildren] and [PersistenceMode] attributes.
using System.Web.UI;
[ParseChildren(true)]
public partial class TemplatedControl : System.Web.UI.UserControl
{
[PersistenceMode(PersistenceMode.InnerProperty)]
public ITemplate Header { get; set; }
[PersistenceMode(PersistenceMode.InnerProperty)]
public ITemplate Body { get; set; }
void Page_Init()
{
if (Header != null)
Header.InstantiateIn(HeaderPlaceHolder);
if (Body != null)
Body.InstantiateIn(BodyPlaceHolder);
}
}
Default.aspx
<%@ Register TagPrefix="uc" TagName="TemplatedControl" Src="TemplatedControl.ascx" %>
<uc:TemplatedControl runat="server">
<Header>Lorem ipsum</Header>
<Body>
// You can add literal text, HTML and server controls to the templates
<p>Hello <asp:Label runat="server" Text="world" />!</p>
</Body>
</uc:TemplatedControl>
You will even get IntelliSense for the inner template properties. So if you work in a team you can quickly create reusable UI to achieve the same composability that your team already enjoys from the built-in ASP.NET server controls.
The MSDN example (same link as the beginning) adds some extra controls and a naming container, but that only becomes necessary if you want to support 'repeater-type' controls.
A: This seems like a huge, vague question...
But I will throw in Reflection, as it has allowed me to do some incredibly powerful things like pluggable DALs and such.
A: After the website has been published and deployed on the production server, if we need to change a server-side button click event, we can override the existing click handler by using the new keyword in the aspx page itself.
Example
Code-behind method:
protected void button_click(object sender, System.EventArgs e)
{
    Response.Write("Look Ma', I am code-behind code!");
}
Overriding method:
<script runat="server">
protected new void button_click(object sender, System.EventArgs e)
{
    Response.Write("Look Ma', I am the overriding method!");
}
</script>
In this way we can easily fix production server errors without redeployment.
A: One of the things I use that works with multiple versions of VB, VBScript and VB.NET is to convert the recordset values to strings to eliminate multiple tests for NULL or blank, i.e. Trim(rsData("FieldName").Value & " ")
In the case of a whole-number value this would be: CLng("0" & Trim(rsData("FieldName").Value & " "))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "292"
} |
Q: How do I know when to use state based testing versus mock testing? Which scenarios, areas of an application/system, etc. are best suited for 'classic' state based testing versus using mock objects?
A: You should be using mocks for dependencies. I don't think that its an either-or; Usually you will create mocks for dependencies, set expectations (whether it is calls or state) on them, then run the unit under test. Then you would check its state, and verify the expectations on the mocks, afterwards.
A: Using mock objects doesn't mean you're not doing state-based testing.
A: I'm going to tackle it from a TDD/BDD perspective, where the tests are driving the designs.
First off it depends what style of design you buy into, because remember this is about design first. As Martin Fowler discusses in this excellent article Mocks Aren't Stubs there are two school of thoughts and they do produce different types of designs.
If you want to buy into the mockist approach then I highly suggest you start by looking at the mockobjects site and their article Mock Roles, Not Objects. They also have a book coming out.
The truth is even if you do believe that the mock-first style of design is not for you, or that you don't want to do it right through your application (e.g. not when testing your domain/service layer) then you will still want to use test doubles. xUnit Test Patterns explains the different types of test doubles and their purposes.
Personally I never mock domain classes, so never mocking entities/value objects. However I've been trying out the mock objects style approach a lot recently, it does change my designs a bit and I find the working style comfortable. Working in this style in an MVC app I'll probably start with an automated acceptance test, then I'll write a controller test that mocks out any non-domain objects (e.g. repositories/services), then I'll move down to testing those repositories/services again mocking out their dependencies. I stop when I reach a class with no troublesome dependencies such as a domain entities/value object. I could go on and test against specific role interfaces which are then implemented by my domain classes, which is what the mockobjects guys would recommend, but I don't currently see a lot of value in that approach.
Obviously it is worth adding that design for test is important here, but remember that although 90% of IoC/mocking/DIP examples show interface-implementation pairs (ICustomerRepository/CustomerRepository) there is a lot of value in instead looking for role interfaces.
A: When using services, whether my own or third party, I design to interfaces as a matter of course, so I tend to focus on the interactions there. This encourages me to design minimal interfaces.
I check state on anything focusing on value objects, just as simple calculations, lookups, and the like.
The next time you find yourself designing something that typically follows Model/View/Controller-or-Presenter, I highly recommend trying the Presenter First approach (Google it) using interfaces for the Model and View. This will give you a great feel for how to use stubs/mocks effectively.
A: Its a matter of style.. Mockist vs Classic TDDers.
Personally.. I'd go testing real classes as far as possible. Tone down on Mocks as much as possible; only to decouple things like IO (filesystems, DB Connections, network), Third party components, etc. things that are slow/difficult to get under test.
A: As an experienced TDD'er, this is a question that I'm frequently asked by other developers. For me, it's a mistake to get sucked into a Mockist vs. Classicist debate, as such a discussion is misleading. State-based and behaviuour-based unit testing are two different tools in your toolbox, and there's no reason why they should be mutually exclusive.
State-based unit testing is suitable when you want to query the internal properties of an object after talking to its external interface. If one or more of its collaborators involve potentially expensive calls to other objects, then by all means stub those calls and simply disregard collaborators.
Behaviour-based unit testing is a good idea when you want to consider the "how" of a unit test and focus upon discovering relationships between objects, as opposed to the traditional "what" questions of a state-based unit test. If you wish to assert that collaborators are used in a certain way and/or sequential order, use mocks - stubs that provide assertions.
I suggest you focus upon standard unit testing practices - exercise as little production code as possible, and have one assertion per test. This will force you to think "what exactly is it that I want to exercise and assert in this test?", and the answer to that question will help you choose the correct unit testing tools.
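To make the two styles concrete, here is a small self-contained Java sketch (the Mailer/OrderService names are hypothetical, and the mock is hand-rolled rather than generated by a mocking framework). The state-based check queries the unit's properties after the call, while the behaviour-based check verifies how the collaborator was used:

```java
import java.util.ArrayList;
import java.util.List;

public class TestingStylesDemo {
    // The collaborator's role, expressed as an interface
    interface Mailer {
        void send(String to, String body);
    }

    // Hand-rolled mock: records interactions so a test can verify behaviour
    static class MockMailer implements Mailer {
        final List<String> sentTo = new ArrayList<>();
        public void send(String to, String body) { sentTo.add(to); }
    }

    // The unit under test, with its dependency injected
    static class OrderService {
        private final Mailer mailer;
        private int orderCount = 0;
        OrderService(Mailer mailer) { this.mailer = mailer; }
        void placeOrder(String customerEmail) {
            orderCount++;
            mailer.send(customerEmail, "Order confirmed");
        }
        int getOrderCount() { return orderCount; }
    }

    public static void main(String[] args) {
        MockMailer mailer = new MockMailer();
        OrderService service = new OrderService(mailer);
        service.placeOrder("a@example.com");

        // State-based assertion: what is the object's state now?
        System.out.println(service.getOrderCount());
        // Behaviour-based assertion: how was the collaborator used?
        System.out.println(mailer.sentTo);
    }
}
```

The same unit supports both checks; which one a given test emphasises follows the guidance above.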
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Java, UTF-8, and Windows console We try to use Java and UTF-8 on Windows. The application writes logs on the console, and we would like to use UTF-8 for the logs as our application has internationalized logs.
It is possible to configure the JVM so it generates UTF-8, using -Dfile.encoding=UTF-8 as arguments to the JVM. It works fine, but the output on a Windows console is garbled.
Then, we can set the code page of the console to 65001 (chcp 65001), but in this case, the .bat files do not work. This means that when we try to launch our application through our script (named start.bat), absolutely nothing happens. The command simply returns:
C:\Application> chcp 65001
Activated code page: 65001
C:\Application> start.bat
C:\Application>
But without chcp 65001, there is no problem, and the application can be launched.
Any hints about that?
A: Java on Windows does NOT support Unicode output by default. I have written a workaround method that calls the native API via the JNA library. The method calls WriteConsoleW for Unicode output on the console.
import com.sun.jna.Native;
import com.sun.jna.Pointer;
import com.sun.jna.ptr.IntByReference;
import com.sun.jna.win32.StdCallLibrary;
/** For unicode output on windows platform
 * @author Sandy_Yin
 */
public class Console {
    private static Kernel32 INSTANCE = null;

    public interface Kernel32 extends StdCallLibrary {
        public Pointer GetStdHandle(int nStdHandle);

        public boolean WriteConsoleW(Pointer hConsoleOutput, char[] lpBuffer,
                int nNumberOfCharsToWrite,
                IntByReference lpNumberOfCharsWritten, Pointer lpReserved);
    }

    static {
        String os = System.getProperty("os.name").toLowerCase();
        if (os.startsWith("win")) {
            INSTANCE = (Kernel32) Native
                    .loadLibrary("kernel32", Kernel32.class);
        }
    }

    public static void println(String message) {
        boolean successful = false;
        if (INSTANCE != null) {
            Pointer handle = INSTANCE.GetStdHandle(-11);
            char[] buffer = message.toCharArray();
            IntByReference lpNumberOfCharsWritten = new IntByReference();
            successful = INSTANCE.WriteConsoleW(handle, buffer, buffer.length,
                    lpNumberOfCharsWritten, null);
            if (successful) {
                System.out.println();
            }
        }
        if (!successful) {
            System.out.println(message);
        }
    }
}
A: Try chcp 65001 && start.bat
The chcp command changes the code page, and 65001 is the Win32 code page identifier for UTF-8 under Windows 7 and up. A code page, or character encoding, specifies how to convert a Unicode code point to a sequence of bytes or back again.
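If running chcp first stops start.bat from launching, one workaround (a sketch; the script contents here are hypothetical) is to change the code page inside the batch file itself, just before starting the JVM:

```bat
@echo off
rem Switch this console to the UTF-8 code page before launching Java
chcp 65001 > nul
java -Dfile.encoding=UTF-8 -jar application.jar
```

That keeps the code-page change scoped to the script's own console session rather than depending on whatever state the shell was in beforehand.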
A: We had some similar problems in Linux. Our code was in ISO-8859-1 (mostly cp-1252 compatible) but the console was UTF-8, causing the code not to compile. Simply changing the console to ISO-8859-1 would break the build script, which was in UTF-8. We found a couple of choices:
1- Define some standard encoding and stick to it. That was our choice: we chose to keep everything in ISO-8859-1, modifying the build scripts.
2- Set the encoding before starting any task, even inside the build scripts. Some code like erickson said. In Linux it was like:
lang=pt_BR.ISO-8859-1 /usr/local/xxxx
My Eclipse is still set up like this. Both work well.
A: Windows doesn't support the 65001 code page: http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/chcp.mspx?mfr=true
A: Have you tried PowerShell rather than the old cmd.exe?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Count the number of nodes that match a given XPath expression in XmlSpy I am using XmlSpy to analyze an xml file, and I want to get a quick count of the number of nodes that match a given xpath. I know how to enter the XPath and get the list of nodes, but I am really just interested in the count. Is it possible to get this?
I'm using XmlSpy Professional Edition version 2007 sp2, if it matters.
A: I just figured it out. I just needed to put count() around my xpath, like so:
count(//my/node)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What is a prepared statement? I see a bunch of lines in the .log files in the postgres pg_log directory that say something like:
ERROR: prepared statement "pdo_pgsql_stmt_09e097f4" does not exist
What are prepared statements, and what kinds of things can cause these error messages to be displayed?
A: From the documentation:
A prepared statement is a server-side
object that can be used to optimize
performance. When the PREPARE
statement is executed, the specified
statement is parsed, rewritten, and
planned. When an EXECUTE command is
subsequently issued, the prepared
statement need only be executed. Thus,
the parsing, rewriting, and planning
stages are only performed once,
instead of every time the statement is
executed.
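To make the documented cycle concrete, here is a minimal sketch in psql (the table and statement names are made up for illustration):

```sql
-- Parsed, rewritten, and planned once
PREPARE get_name (integer) AS
    SELECT name FROM users WHERE id = $1;

-- Subsequent executions skip the parse/rewrite/plan stages
EXECUTE get_name(1);
EXECUTE get_name(2);

-- Prepared statements are per-session; drop one explicitly if needed
DEALLOCATE get_name;
```

Note that a prepared statement exists only within the session that created it, which is one common source of the "does not exist" error: if a connection pool routes the EXECUTE to a different connection than the one that ran PREPARE, the server has never heard of that statement name.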
Searching the net, I found that the "pdo_pgsql_stmt" prefix comes from some sort of PHP connection to your database. Maybe this link can help you find a suitable mailing list or issue tracker that you can send your error messages to?
EDIT: I think I found your bug here:
http://bugs.php.net/bug.php?id=37870
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What is a good platform (environment/language/reusable component) for data visualization? I have lots and lots of data in various structures. Are there any better platforms than Excel charts which can help me?
thanks
A: http://services.alphaworks.ibm.com/manyeyes/browse/visualizations
Here you can upload data sets and get different online visualizations; your data will be made public, though.
A: What about google charts?
A: A starting point
The field of data visualisation is growing rapidly at the moment. Traditional toolchains such as Microsoft Excel were augmented by powerful visualisation solutions as part of the dashboarding craze that came with the last wave of ERPs. We're even more spoiled now as the programming community has joined with traditional analytics to explore Java, JavaScript, and any language you can think of.
The story gets even better with open source and cloud-based solutions. Keeping up is hard work, but I've found some great jump-off points in a recent round of research. If you spend an evening taking a few minutes with each of the tools listed in this great Computer World article, you will surely find one that immediately appeals to your preferences and skills.
22 Free Tools for Data Visualization and Analysis
If this is a little much to digest in one sitting, take a glance over the handy chart first to get an overview of some of what is out there.
Bonus
A great one not on that list is d3.js, which is a currently maintained successor to the protovis project, which I believe is no longer active. You can find d3.js on github, which again shows how lucky we are to have such great community efforts in open sourcing these kinds of powerful visualisation solutions.
A: Depends a bit what your objectives are and how technical you are willing to get.
Incanter is a great toolset that I can heartily recommend (I use it for visualisation in my own projects). It's a statistical computing and visualisation library for Clojure - which in turn is a very flexible and dynamic language, good for interactive experiments.
I particularly like the DSL for creating charts, e.g. to create a histogram of 1000 samples from the normal distribution you can just do:
(view (histogram (sample-normal 1000)))
A: Take a look at R. It has a strong community and ecosystem. If you enjoy working from a console, you'll probably enjoy how easy it is to go from a CSV, for example, to various data visualizations.
I found this interactive tutorial from Code School to be very helpful in getting started.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How would you refactor this LINQ code? I've got a lot of ugly code that looks like this:
if (!string.IsNullOrEmpty(ddlFileName.SelectedItem.Text))
results = results.Where(x => x.FileName.Contains(ddlFileName.SelectedValue));
if (chkFileName.Checked)
results = results.Where(x => x.FileName == null);
if (!string.IsNullOrEmpty(ddlIPAddress.SelectedItem.Text))
results = results.Where(x => x.IpAddress.Contains(ddlIPAddress.SelectedValue));
if (chkIPAddress.Checked)
results = results.Where(x => x.IpAddress == null);
...etc.
results is an IQueryable<MyObject>.
The idea is that for each of these innumerable dropdowns and checkboxes, if the dropdown has something selected, the user wants to match that item. If the checkbox is checked, the user wants specifically those records where that field is null or an empty string. (The UI doesn't let both be selected at the same time.) This all adds to the LINQ Expression which gets executed at the end, after we've added all the conditions.
It seems like there ought to be some way to pull out an Expression<Func<MyObject, bool>> or two so that I can put the repeated parts in a method and just pass in what changes. I've done this in other places, but this set of code has me stymied. (Also, I'd like to avoid "Dynamic LINQ", because I want to keep things type-safe if possible.) Any ideas?
A: I'd convert it into a single Linq statement:
var results =
//get your inital results
from x in GetInitialResults()
//either we don't need to check, or the check passes
where string.IsNullOrEmpty(ddlFileName.SelectedItem.Text) ||
x.FileName.Contains(ddlFileName.SelectedValue)
where !chkFileName.Checked ||
string.IsNullOrEmpty(x.FileName)
where string.IsNullOrEmpty(ddlIPAddress.SelectedItem.Text) ||
x.FileName.Contains(ddlIPAddress.SelectedValue)
where !chkIPAddress.Checked ||
string.IsNullOrEmpty(x. IpAddress)
select x;
It's no shorter, but I find this logic clearer.
A: In that case:
//list of predicate functions to check
var conditions = new List<Predicate<MyClass>>
{
x => string.IsNullOrEmpty(ddlFileName.SelectedItem.Text) ||
x.FileName.Contains(ddlFileName.SelectedValue),
x => !chkFileName.Checked ||
string.IsNullOrEmpty(x.FileName),
x => string.IsNullOrEmpty(ddlIPAddress.SelectedItem.Text) ||
x.IpAddress.Contains(ddlIPAddress.SelectedValue),
x => !chkIPAddress.Checked ||
string.IsNullOrEmpty(x.IpAddress)
}
//now get results
var results =
from x in GetInitialResults()
//all the condition functions need checking against x
where conditions.All( cond => cond(x) )
select x;
I've just explicitly declared the predicate list, but these could be generated, something like:
ListBoxControl lbc;
CheckBoxControl cbc;
foreach( Control c in this.Controls)
if( (lbc = c as ListBoxControl ) != null )
conditions.Add( ... );
else if ( (cbc = c as CheckBoxControl ) != null )
conditions.Add( ... );
You would need some way to check the property of MyClass that you needed to check, and for that you'd have to use reflection.
A: Have you seen the LINQKit? The AsExpandable sounds like what you're after (though you may want to read the post Calling functions in LINQ queries at TomasP.NET for more depth).
A: Don't use LINQ if it's impacting readability. Factor out the individual tests into boolean methods which can be used as your where expression.
IQueryable<MyObject> results = ...;
results = results
.Where(TestFileNameText)
.Where(TestFileNameChecked)
.Where(TestIPAddressText)
.Where(TestIPAddressChecked);
So the individual tests are simple methods on the class. They're even individually unit testable.
bool TestFileNameText(MyObject x)
{
return string.IsNullOrEmpty(ddlFileName.SelectedItem.Text) ||
x.FileName.Contains(ddlFileName.SelectedValue);
}
bool TestIPAddressChecked(MyObject x)
{
return !chkIPAddress.Checked ||
x.IpAddress == null;
}
A: results = results.Where(x =>
(string.IsNullOrEmpty(ddlFileName.SelectedItem.Text) || x.FileName.Contains(ddlFileName.SelectedValue))
&& (!chkFileName.Checked || string.IsNullOrEmpty(x.FileName))
&& ...);
A: Neither of these answers so far is quite what I'm looking for. To give an example of what I'm aiming at (I don't regard this as a complete answer either), I took the above code and created a couple of extension methods:
static public IQueryable<Activity> AddCondition(
this IQueryable<Activity> results,
DropDownList ddl,
Expression<Func<Activity, bool>> containsCondition)
{
if (!string.IsNullOrEmpty(ddl.SelectedItem.Text))
results = results.Where(containsCondition);
return results;
}
static public IQueryable<Activity> AddCondition(
this IQueryable<Activity> results,
CheckBox chk,
Expression<Func<Activity, bool>> emptyCondition)
{
if (chk.Checked)
results = results.Where(emptyCondition);
return results;
}
This allowed me to refactor the code above into this:
results = results.AddCondition(ddlFileName, x => x.FileName.Contains(ddlFileName.SelectedValue));
results = results.AddCondition(chkFileName, x => x.FileName == null || x.FileName.Equals(string.Empty));
results = results.AddCondition(ddlIPAddress, x => x.IpAddress.Contains(ddlIPAddress.SelectedValue));
results = results.AddCondition(chkIPAddress, x => x.IpAddress == null || x.IpAddress.Equals(string.Empty));
This isn't quite as ugly, but it's still longer than I'd prefer. The pairs of lambda expressions in each set are obviously very similar, but I can't figure out a way to condense them further...at least not without resorting to dynamic LINQ, which makes me sacrifice type safety.
Any other ideas?
A: @Kyralessa,
You can create extension method AddCondition for predicates that accepts parameter of type Control plus lambda expression and returns combined expression. Then you can combine conditions using fluent interface and reuse your predicates. To see example of how it can be implemented see my answer on this question:
How do I compose existing Linq Expressions
A: I'd be wary of the solutions of the form:
// from Keith
from x in GetInitialResults()
//either we don't need to check, or the check passes
where string.IsNullOrEmpty(ddlFileName.SelectedItem.Text) ||
x.FileName.Contains(ddlFileName.SelectedValue)
My reasoning is variable capture. If you execute immediately, just the once, you probably won't notice a difference. However, in LINQ, evaluation isn't immediate; it happens each time iteration occurs. Delegates can capture variables and use them outside the scope you intended.
It feels like you're querying too close to the UI. Querying is a layer down, and LINQ isn't the way for the UI to communicate down.
You may be better off doing the following. Decouple the searching logic from the presentation - it's more flexible and reusable - fundamentals of OO.
// my search parameters encapsulate all valid ways of searching.
public class MySearchParameter
{
public string FileName { get; private set; }
public bool FindNullFileNames { get; private set; }
public void ConditionallySearchFileName(bool getNullFileNames, string fileName)
{
FindNullFileNames = getNullFileNames;
FileName = null;
// enforce either/or and disallow empty string
if(!getNullFileNames && !string.IsNullOrEmpty(fileName) )
{
FileName = fileName;
}
}
// ...
}
// search method in a business logic layer.
public IQueryable<MyClass> Search(MySearchParameter searchParameter)
{
IQueryable<MyClass> result = ...; // something to get the initial list.
// search on Filename.
if (searchParameter.FindNullFileNames)
{
result = result.Where(o => o.FileName == null);
}
else if( searchParameter.FileName != null )
{ // intermixing a different style, just to show an alternative.
result = from o in result
where o.FileName.Contains(searchParameter.FileName)
select o;
}
// search on other stuff...
return result;
}
// code in the UI ...
MySearchParameter searchParameter = new MySearchParameter();
searchParameter.ConditionallySearchFileName(chkFileNames.Checked, drpFileNames.SelectedItem.Text);
searchParameter.ConditionallySearchIPAddress(chkIPAddress.Checked, drpIPAddress.SelectedItem.Text);
IQueryable<MyClass> result = Search(searchParameter);
// inform control to display results.
searchResults.Display( result );
Yes it's more typing, but you read code around 10x more than you write it. Your UI is clearer, the search parameters class takes care of itself and ensures mutually exclusive options don't collide, and the search code is abstracted away from any UI and doesn't even care if you use Linq at all.
A: Since you are wanting to repeatedly reduce the original results query with innumerable filters, you can use Aggregate(), (which corresponds to reduce() in functional languages).
The filters are of predictable form, consisting of two values for every member of MyObject - according to the information I gleaned from your post. If every member to be compared is a string, which may be null, then I recommend using an extension method, which allows null references to be associated with an extension method of their intended type.
public static class MyObjectExtensions
{
public static bool IsMatchFor(this string property, string ddlText, bool chkValue)
{
if(ddlText!=null && ddlText!="")
{
return property!=null && property.Contains(ddlText);
}
else if(chkValue==true)
{
return property==null || property=="";
}
// no filtering selected
return true;
}
}
We now need to arrange the property filters in a collection, to allow for iterating over many. They are represented as Expressions for compatibility with IQueryable.
var filters = new List<Expression<Func<MyObject,bool>>>
{
x=>x.Filename.IsMatchFor(ddlFileName.SelectedItem.Text,chkFileName.Checked),
x=>x.IPAddress.IsMatchFor(ddlIPAddress.SelectedItem.Text,chkIPAddress.Checked),
x=>x.Other.IsMatchFor(ddlOther.SelectedItem.Text,chkOther.Checked),
// ... innumerable associations
};
Now we aggregate the innumerable filters onto the initial results query:
var filteredResults = filters.Aggregate(results, (r,f) => r.Where(f));
I ran this in a console app with simulated test values, and it worked as expected. I think this at least demonstrates the principle.
A: One thing you might consider is simplifying your UI by eliminating the checkboxes and using an "<empty>" or "<null>" item in your drop down list instead. This would reduce the number of controls taking up space on your window, remove the need for complex "enable X only if Y is not checked" logic, and would enable a nice one-control-per-query-field.
Moving on to your result query logic, I would start by creating a simple object to represent a filter on your domain object:
interface IDomainObjectFilter {
bool ShouldInclude( DomainObject o, string target );
}
You can associate an appropriate instance of the filter with each of your UI controls, and then retrieve that when the user initiates a query:
sealed class FileNameFilter : IDomainObjectFilter {
public bool ShouldInclude( DomainObject o, string target ) {
return string.IsNullOrEmpty( target )
|| o.FileName.Contains( target );
}
}
...
ddlFileName.Tag = new FileNameFilter( );
You can then generalize your result filtering by simply enumerating your controls and executing the associated filter (thanks to hurst for the Aggregate idea):
var finalResults = ddlControls.Aggregate( initialResults, ( c, r ) => {
var filter = c.Tag as IDomainObjectFilter;
var target = c.SelectedValue;
return r.Where( o => filter.ShouldInclude( o, target ) );
} );
Since your queries are so regular, you might be able to simplify the implementation even further by using a single filter class taking a member selector:
sealed class DomainObjectFilter {
private readonly Func<DomainObject,string> memberSelector_;
public DomainObjectFilter( Func<DomainObject,string> memberSelector ) {
this.memberSelector_ = memberSelector;
}
public bool ShouldInclude( DomainObject o, string target ) {
string member = this.memberSelector_( o );
return string.IsNullOrEmpty( target )
|| member.Contains( target );
}
}
...
ddlFileName.Tag = new DomainObjectFilter( o => o.FileName );
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Replacing Windows Explorer With Third Party Tool How would I go about replacing Windows Explorer with a third party tool such as TotalCommander, explorer++, etc?
I would like to have one of those load instead of win explorer when I type "C:\directoryName" into the run window. Is this possible?
A: From a comment on the first LifeHacker link,
How to make x² your default folder application
As part of the installation process, x² adds "open with xplorer2" in the context menu for
filesystem folders.
If you want to have this the default action (so that folders always open in x2 when you click on
them) then make sure this is the default verb, either using Folder Options ("file folder" type) or
editing the registry:
[HKEY_CLASSES_ROOT\Directory\shell]
@="open_x2"
If you want some slightly different command line options, you can add any of the supported
options by editing the following registry key:
[HKEY_CLASSES_ROOT\Directory\shell\open\command]
@="C:\Program files\zabkat\xplorer2\xplorer2_UC.exe" /T /1 "%1"
Notes:
*
*Please check your installation folder first: Your installation path may be different.
Secondly, your executable may be called xplorer2.exe, if it is the non-Unicode version.
*Note that "%1" is required (including the quotation marks), and is replaced by the folder path you are trying to open.
*The /T switch causes no tabs to be restored and the /1 switch puts x² in single pane mode. (You do not have to use these switches, but they make sense).
(The above are from xplorer2 user manual)
A: Go to Control Panel -> Folder Options and switch to the File Types tab. You can go to the "Folder" file type (with "(NONE)" as the extension). Go to Advanced and create a new action that uses your program (I tried it with FreeCommander). Make sure you set it as the default.
That should do it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Profiling visualization tools? I need to display profiling information pulled from a deeply embedded CPU, presenting it in a way which other developers on my team will be able to act upon. The profiling data is a snapshot of a cycle counter at the entry and exit of every function, so we have a call graph annotated with sub-microsecond timing accuracy. I'd prefer not to just dump out function names and timing like gprof, I'm looking for something easier to understand and act upon.
Has anyone worked with a particularly good profiling tool (on any platform), which made it easy to identify areas of the code to drill into? I'm looking for an inspirational example to follow for how to display the call graph, but if there is good tool with an input format I can massage my data to I'll use it. I could use Windows, Linux, or MacOS X to run the visualization tool.
A profiling article on IBM DeveloperWorks led me to GraphViz, with a profiling example on their site. Barring another suggestion here, I'll use GraphViz and mimic their profiling example.
A: I use Kprof
http://kprof.sourceforge.net/
it is old, but I never found a better tool to inspect the results from gprof.
A: How about "GTKWave"?
But you have to insert the probe in your code.
A: Valgrind does profiling (and more), and there are GUIs for visualization.
A: I suggest you drop gprof+graphviz for OProfileUI, unless you don't have a choice.
A: IE 8b2 offers a simple display of the call tree for JavaScript that I believe is much more useful than the GraphViz chart.
The GraphViz chart is wonderful for visualizing the call tree but makes it very difficult to visualize timing issues (IMHO the more important data).
*Edit: I thought it worth pointing out that all of the tools suggested use a grid-based tree to visualize the call tree. This allows you to see the calling structure without downplaying the timing data, as I believe you do with the GraphViz chart.*
A: JetBrains dotTrace (has a trial demo you can play with). It organizes the call stacks and can easily find the trouble spots. Has a lot of filtering capabilities as well. Very easy to navigate and find what you're looking for.
A: You can use Senseo, a plugin for Eclipse. It shows you the performance, memory allocation, objects created, time spent, actual methods invoked, hover over method signatures or calls, call context tree, package explorer and more.
A: Another neat tool to visualize profiling data is the gprof2dot.py python script.
It can be used to visualize several different formats: "This is a Python script to convert the output from prof, gprof, oprofile, Shark, AQtime, and python profilers into a dot graph." This is what the output looks like:
(source: googlecode.com)
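A typical invocation (a sketch; the program and file names are illustrative) pipes gprof's output through gprof2dot and then Graphviz's dot to produce the image:

```sh
# Profile -> dot graph -> rendered PNG
gprof ./myprogram | gprof2dot.py | dot -Tpng -o profile.png
```

The same pipeline works for the other supported profilers; only the first stage that produces the profile text changes.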
A: I've written a browser-based visualization tool, profile_eye, which operates on the output of gprof2dot.
gprof2dot is great at grokking many profiling-tool outputs, and does a great job at graph-element placement. The final rendering is a static graphic, which is often very cluttered.
Using d3.js it's possible to remove much of that clutter, through relative fading of unfocused elements, tooltips, and a fisheye distortion.
For comparison, see profile_eye's visualization of the canonical example used by gprof2dot.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: How to Prevent the "Please tell Microsoft about this problem" Dialog Boxes We have an error that we can't seem to find and don't have the need/resources to try and track it down. What we do need to do is just keep the freaking "Please tell Microsoft about this problem" dialog boxes from cluttering up the server.
It is from an MS-Access error (we think) but I can't find where Access is installed (I don't think it is); there is no MS Office directory on any of the drives.
If we could just stop the dialog boxes from appearing that would be great.
Thanks.
Spec: Window Server 2003 sp2
A: From http://www.codeproject.com/KB/exception/UnhandledExceptionClass.aspx:
If you also want to disable the Windows “Send Error Report” dialog on your computer, right-click on the “My Computer” icon, select “Properties”, switch to the “Advanced” tab, and click on the “Error Reporting” button. In the Options dialog, select the “Disable error reporting” radio button:
Compare also http://www.codeproject.com/KB/exception/ExceptionHandling.aspx for general .NET Exception Handling sanity.
A: I don't think you realize just how serious this error might be. These errors pop up under severe conditions like buffer overflows, null pointer dereferences, division by zero errors, etc. Given that you mentioned that this runs on a server, you potentially (depending on your situation) have a network-facing, easily exploitable program!
As others have suggested, you can turn off error reporting system-wide, or you can use the SetErrorMode API to disable it for just your application, but again I would strongly advise you to investigate and fix the problem at its root!
A: Well, I would try and figure out what application is using Access. Not knowing what is running is a real issue, especially when it is having unhandled exceptions!
As far as just disabling?
Try:
Right click 'My Computer'
left click 'Properties'
left click 'Advanced' tab
left click 'Error Reporting' (near bottom)
left click 'Disable error reporting'
A: It's called the Windows Error Reporting Dialog. You can disable it by following these instructions:
http://www.windowsnetworking.com/articles_tutorials/Disable-Error-Reporting-Windows-XP-Server-2003.html
However, I would be concerned with why you are getting those error messages. You are just concealing a problem which could be very bad.
A: Since I see three different people posted instructions on how to disable the dialog on their own machine, I'll post this again as an answer instead of just a comment:
They don't want to do that on their customer's machines, because that will disable any application from using the dialog. They just want to prevent it from coming up for their application.
A: It's not concealing anything; if a program crashes, Windows will still tell you. The difference is that it won't compile a core dump and try to send it to Microsoft for you. It's very useful to disable this garbage.
A: When working in .net, you can catch all unhandled exceptions (which is what causes this message). But I'm not aware of a way to do the same in Access.
http://devcity.net/Articles/60/1/unhandled_exceptions.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Adopting standard libraries My team has a command parsing library for console apps. Each team around us has their own as well. There isn't anything in the BCL so I suppose this is natural.
I've looked at the module in Mono, which seems solid, and the one on CodePlex looks fine as well. There are probably others out there that will work (and I would love to hear your suggestions).
The real question is: how do I get my team, and others around us, to commit to just using one?
A: Well, to be honest, you can't make everyone settle on one solution. You can suggest a solution and point out its benefits, but eventually the advantages would have to be greater than the inertia that they have built up with their present library.
To make them settle on one library you would need to go up the management chain until you get to the person that manages all the groups involved. Convince that person why everyone should use one library, then let it filter back down.
Now that I have said that, why does it matter? Does your team routinely have to work on code from the other teams? Are the other teams using libraries that cause problems for your code? Is this standardization purely for the sake of standardization or is there some specific problem that not standardizing causes?
A: Once you find a solution, you start forcing it in code reviews. If it's not implemented in new code, tell them, sorry, but you have to go back and do it again. If you already have standards and reviews in place, this is a lot easier to implement.
A: EBGreen, good point, I should have mentioned why I am looking to do this. Our teams frequently read and edit code from the surrounding teams. And I mean feature teams, not just dev/test/pm divisions.
This is just one of those little things that slow everybody down. Working on Team C's code? Got to track down their lib, which mysteriously isn't in the nightly builds (another problem, but independent of this). Reviewing another dev's work? Need to figure out how their parser works. Starting a new project? Need to decide which library to import.
I think that your response does indicate the solution though: Put the library somewhere very convenient, so it can be picked up by new projects (I doubt that many existing ones will be revised, nor should they be if they're working fine), and make the advantages of using it clear.
Thanks!
A: @fatcat1111, in that case by all means a standardized library would be advantageous. As for how to convince the other teams, there are two approaches that I can think of. First point out that standardization across a group always reduces coding effort (discounting for the initial ramp up for people that are new to the standardized library). Second, try to convince them on features. Hopefully you would be choosing the most feature complete library so it would be superior to what everyone else is using.
A: PowerShell provides great command line parsing options for free. When you create a cmdlet, you define properties and use attributes to determine command line options. The PowerShell runtime then handles the parsing of input for you.
Also, since PowerShell works with .NET objects, the result of your commands can be rich objects with properties and methods.
Here is a nice blog post demonstrating writing and debugging cmdlets.
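As a rough sketch of what that looks like (the cmdlet and property names here are purely illustrative), a compiled cmdlet declares its parameters as attributed properties and lets the PowerShell runtime do all the parsing and binding:

```csharp
using System.Management.Automation;

// Invoked as: Get-Price -Id 42
[Cmdlet(VerbsCommon.Get, "Price")]
public class GetPriceCommand : Cmdlet
{
    // PowerShell binds and validates this from the command line; no manual parsing
    [Parameter(Mandatory = true, Position = 0)]
    public int Id { get; set; }

    protected override void ProcessRecord()
    {
        // Emit a rich object with properties, not just text
        WriteObject(new PriceInfo { Id = Id, Price = 9.99m });
    }
}

public class PriceInfo
{
    public int Id { get; set; }
    public decimal Price { get; set; }
}
```

Because the output is an object rather than formatted text, callers can pipe it into further commands and access its properties directly.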
A: I recommend NDesk.Options. It makes use of Lambda functions / .NET 3.5 and it's very
nice:
http://www.ndesk.org/Options
What you basically do is:
var p = new OptionSet () {
    { "file=", v => data = v },
    { "v|verbose", v => { ++verbose; } },
    { "h|?|help", v => help = v != null },
};
List<string> extra = p.Parse (args);
And you're done.
It supports key/value pairs with custom separators, lists, single-value options and toggle options.
You WILL NOT regret using it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do prepared statements work? I'm writing some DB routines and I'm using prepared statements. My environment is PDO with PHP5.
I understand prepared statements primarily provide a performance benefit, as well as some auxiliary bonuses such as not having to manually SQL-escape input data.
My question is about the performance part.
I have two implementations of a getPrice function below that takes a product id and returns its price.
getPrice_A reuses the same PDOStatement object across subsequent calls within the same script execution. Is this necessary or recommended? If so, is there any way to avoid duplicating this extra code across every single get*() in every single model?
getPrice_B creates a new PDOStatement object on every call. Will the DBMS recognize this statement has already been prepared and still be able to skip some work? In other words, does this implementation properly take advantage of the performance benefits of prepared statements?
Having written all this out and read it over, I imagine getPrice_B is fine and getPrice_A is providing a negligible benefit on top of that, which may or may not be worth the extra complication.
I'd still like to hear for sure from someone more knowledgeable, though.
Assume that $pdo is a valid, connected PDO object in the examples below.
<?php
class Product {
static function &getPrice_A($id) {
static $stmt;
if (!$stmt) {
$stmt = $pdo->prepare('SELECT price FROM products WHERE id = ?');
}
$stmt->execute(array($id));
return $stmt->fetchColumn(0);
}
static function &getPrice_B($id) {
$stmt = $pdo->prepare('SELECT price FROM products WHERE id = ?');
$stmt->execute(array($id));
return $stmt->fetchColumn(0);
}
}
// example usage:
$price = Product::getPrice_A(4982);
echo "Product 4982 costs $price\n";
A: From what I understand, prepared statements will reuse the generated SQL plan if it is the same statement, so the database will see the same prepared statement and not have to do the work to figure out how to query the database. I would say the extra work of saving the prepared statement in Product::getPrice_A is not typically very helpful, more because it can obscure the code rather than an issue of performance. When dealing with performance, I feel it's always best to focus on code clarity and then performance when you have real statistics that indicate a problem.
I would say "yes, the extra work is unnecessary" (regardless of if it really boosts performance). Also, I am not a very big DB expert, but the performance gain of prepared statements is something I heard from others, and it is at the database level, not the code level (so if the code is actually invoking a parameterized statement on the actual DB, then the DB can do these execution plan caching... though depending on the database, you may get the benefit even without the parameterized statement).
Anyways, if you are really worried about (and seeing) database performance issues, you should look into a caching solution... of which I would highly recommend memcached. With such a solution, you can cache your query results and not even hit the database for things you access frequently.
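To see the statement-reuse idea concretely, here is a rough sketch using Python's sqlite3 module as a stand-in for PDO. sqlite3 keeps a per-connection cache of prepared statements, so repeatedly issuing the same parameterized SQL skips re-parsing, which is the analogue of getPrice_B relying on the driver/DBMS to recognize the statement text (the table and prices below are made up for illustration):

```python
import sqlite3

# In-memory database standing in for the products table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO products VALUES (4982, 19.99)")

def get_price(product_id):
    # sqlite3 caches compiled statements per connection (cached_statements,
    # default 128), so re-issuing identical parameterized SQL avoids a
    # re-parse -- roughly what getPrice_B hopes the DBMS will do.
    row = conn.execute(
        "SELECT price FROM products WHERE id = ?", (product_id,)
    ).fetchone()
    return row[0] if row else None

print(get_price(4982))  # 19.99
```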
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Change windows hostname from command line Is it possible to change the hostname in Windows 2003 from the command line with out-of-the-box tools?
A: The previously mentioned wmic command is the way to go, as it is installed by default in recent versions of Windows.
Here is my small improvement to generalize it, by retrieving the current name from the environment:
wmic computersystem where name="%COMPUTERNAME%"
call rename name="NEW-NAME"
NOTE: The command must be given in one line, but I've broken it into two to make scrolling unnecessary. As @rbeede mentions you'll have to reboot to complete the update.
A: cmd (command):
netdom renamecomputer %COMPUTERNAME% /Newname "NEW-NAME"
powershell (windows 2008/2012):
netdom renamecomputer "$env:COMPUTERNAME" /Newname "NEW-NAME"
after that, you need to reboot your computer.
A: I don't know of a command to do this, but you could do it in VBScript or something similar.
Somthing like:
sNewName = "put new name here"
Set oShell = CreateObject ("WSCript.shell" )
sCCS = "HKLM\SYSTEM\CurrentControlSet\"
sTcpipParamsRegPath = sCCS & "Services\Tcpip\Parameters\"
sCompNameRegPath = sCCS & "Control\ComputerName\"
With oShell
.RegDelete sTcpipParamsRegPath & "Hostname"
.RegDelete sTcpipParamsRegPath & "NV Hostname"
.RegWrite sCompNameRegPath & "ComputerName\ComputerName", sNewName
.RegWrite sCompNameRegPath & "ActiveComputerName\ComputerName", sNewName
.RegWrite sTcpipParamsRegPath & "Hostname", sNewName
.RegWrite sTcpipParamsRegPath & "NV Hostname", sNewName
End With ' oShell
MsgBox "Computer name changed, please reboot your computer"
Original
A: The netdom.exe command line program can be used. This is available from the Windows XP Support Tools or Server 2003 Support Tools (both on the installation CD).
Usage guidelines here
A: Here's another way of doing it with a WHS script:
Set objWMIService = GetObject("Winmgmts:root\cimv2")
For Each objComputer in _
objWMIService.InstancesOf("Win32_ComputerSystem")
objComputer.rename "NewComputerName", NULL, NULL
Next
Source
A: Use the command below to change a computer's hostname remotely. A system reboot is required after the change:
psexec.exe -h -e \\IPADDRESS -u USERNAME -p PASSWORD netdom renamecomputer CurrentComputerName /newname:NewComputerName /force
A: Why be easy when it can be complicated?
Why use third-party applications like netdom.exe when a built-in WMI query will do?
Try these two queries:
wmic computersystem where caption='%computername%' get caption, UserName, Domain /format:value
wmic computersystem where "caption like '%%%computername%%%'" get caption, UserName, Domain /format:value
or in a batch file use loop
for /f "tokens=2 delims==" %%i in ('wmic computersystem where "Caption like '%%%currentname%%%'" get UserName /format:value') do (echo. UserName- %%i)
A: If you are looking to do this from Windows 10 IoT, then there is a built in command you can use:
setcomputername [newname]
Unfortunately, this command does not exist in the full build of Windows 10.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: Generating Random Passwords When a user on our site loses his password and heads off to the Lost Password page we need to give him a new temporary password. I don't really mind how random this is, or if it matches all the "needed" strong password rules, all I want to do is give them a password that they can change later.
The application is a Web application written in C#. so I was thinking of being mean and going for the easy route of using part of a Guid. i.e.
Guid.NewGuid().ToString("d").Substring(1,8)
Suggestions? Thoughts?
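One thing worth quantifying: a GUID substring only draws from the 16 hex digits, so 8 characters carry about 32 bits of entropy, versus roughly 48 bits for 8 characters drawn from a full alphanumeric set. A rough sketch of the comparison (Python here since the arithmetic is language-agnostic; the secrets-based alternative is just an illustration):

```python
import math
import secrets
import string
import uuid

# Roughly the question's approach: 8 chars from 16 hex symbols -> 32 bits.
guid_style = str(uuid.uuid4()).replace("-", "")[1:9]
hex_bits = 8 * math.log2(16)  # 32.0

# Same length from a 62-char alphanumeric set -> ~47.6 bits.
alphabet = string.ascii_letters + string.digits
stronger = "".join(secrets.choice(alphabet) for _ in range(8))
alnum_bits = 8 * math.log2(62)

print(len(guid_style), hex_bits, round(alnum_bits, 1))  # 8 32.0 47.6
```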
A: I created this class that uses RNGCryptoServiceProvider and it is flexible. Example:
var generator = new PasswordGenerator(minimumLengthPassword: 8,
maximumLengthPassword: 15,
minimumUpperCaseChars: 2,
minimumNumericChars: 3,
minimumSpecialChars: 2);
string password = generator.Generate();
A: I don't like the passwords that Membership.GeneratePassword() creates, as they're too ugly and have too many special characters.
This code generates a 10 digit not-too-ugly password.
string password = Guid.NewGuid().ToString("N").ToLower()
.Replace("1", "").Replace("o", "").Replace("0","")
.Substring(0,10);
Sure, I could use a Regex to do all the replaces but this is more readable and maintainable IMO.
A: I know that this is an old thread, but I have what might be a fairly simple solution for someone to use. Easy to implement, easy to understand, and easy to validate.
Consider the following requirement:
I need a random password to be generated which has at least 2 lower-case letters, 2 upper-case letters and 2 numbers. The password must also be a minimum of 8 characters in length.
The following regular expression can validate this case:
^(?=\b\w*[a-z].*[a-z]\w*\b)(?=\b\w*[A-Z].*[A-Z]\w*\b)(?=\b\w*[0-9].*[0-9]\w*\b)[a-zA-Z0-9]{8,}$
It's outside the scope of this question - but the regex is based on lookahead/lookbehind and lookaround.
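As a quick sanity check, the policy regex above can be exercised directly; here is a sketch using Python's re module (the sample passwords are made up):

```python
import re

# The lookahead-based policy regex from the answer, verbatim.
POLICY = re.compile(
    r"^(?=\b\w*[a-z].*[a-z]\w*\b)"
    r"(?=\b\w*[A-Z].*[A-Z]\w*\b)"
    r"(?=\b\w*[0-9].*[0-9]\w*\b)"
    r"[a-zA-Z0-9]{8,}$"
)

def is_valid(candidate):
    return POLICY.match(candidate) is not None

print(is_valid("aaBB1234"))  # True: 2 lower, 2 upper, 2+ digits, length 8
print(is_valid("aabb1234"))  # False: no uppercase letters
print(is_valid("aBc1"))      # False: shorter than 8 characters
```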
The following code will create a random set of characters which match this requirement:
public static string GeneratePassword(int lowercase, int uppercase, int numerics) {
string lowers = "abcdefghijklmnopqrstuvwxyz";
string uppers = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
string number = "0123456789";
Random random = new Random();
string generated = "!";
for (int i = 1; i <= lowercase; i++)
generated = generated.Insert(
random.Next(generated.Length),
lowers[random.Next(lowers.Length)].ToString()
);
for (int i = 1; i <= uppercase; i++)
generated = generated.Insert(
random.Next(generated.Length),
uppers[random.Next(uppers.Length)].ToString()
);
for (int i = 1; i <= numerics; i++)
generated = generated.Insert(
random.Next(generated.Length),
number[random.Next(number.Length)].ToString()
);
return generated.Replace("!", string.Empty);
}
To meet the above requirement, simply call the following:
String randomPassword = GeneratePassword(3, 3, 3);
The code starts with an invalid character ("!") - so that the string has a length into which new characters can be injected.
It then loops from 1 to the # of lowercase characters required, and on each iteration, grabs a random item from the lowercase list, and injects it at a random location in the string.
It then repeats the loop for uppercase letters and for numerics.
This gives you back strings of length = lowercase + uppercase + numerics into which lowercase, uppercase and numeric characters of the count you want have been placed in a random order.
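For comparison, the same insert-at-random-positions technique can be sketched in a few lines of Python, swapping the time-seeded generator for the secrets module (an illustrative analogue, not a drop-in replacement):

```python
import secrets
import string

def generate_password(lowercase, uppercase, numerics):
    """Build a password by inserting each required character at a random
    position, mirroring the insertion approach in the C# answer above."""
    generated = ""
    for pool, count in ((string.ascii_lowercase, lowercase),
                        (string.ascii_uppercase, uppercase),
                        (string.digits, numerics)):
        for _ in range(count):
            # Pick a random character from the pool and a random insert point.
            pos = secrets.randbelow(len(generated) + 1)
            generated = generated[:pos] + secrets.choice(pool) + generated[pos:]
    return generated

pwd = generate_password(3, 3, 3)
print(len(pwd))  # 9
```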
A: I'll add another ill-advised answer to the pot.
I have a use case where I need random passwords for machine-machine communication, so I don't have any requirement for human readability. I also don't have access to Membership.GeneratePassword in my project, and don't want to add the dependency.
I am fairly certain Membership.GeneratePassword is doing something similar to this, but here you can tune the pools of characters to draw from.
public static class PasswordGenerator
{
private readonly static Random _rand = new Random();
public static string Generate(int length = 24)
{
const string lower = "abcdefghijklmnopqrstuvwxyz";
const string upper = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const string number = "1234567890";
const string special = "!@#$%^&*_-=+";
// Get cryptographically random sequence of bytes
var bytes = new byte[length];
new RNGCryptoServiceProvider().GetBytes(bytes);
// Build up a string using random bytes and character classes
var res = new StringBuilder();
foreach(byte b in bytes)
{
// Randomly select a character class for each byte
switch (_rand.Next(4))
{
// In each case use mod to project byte b to the correct range
case 0:
res.Append(lower[b % lower.Count()]);
break;
case 1:
res.Append(upper[b % upper.Count()]);
break;
case 2:
res.Append(number[b % number.Count()]);
break;
case 3:
res.Append(special[b % special.Count()]);
break;
}
}
return res.ToString();
}
}
And some example output:
PasswordGenerator.Generate(12)
"pzY=64@-ChS$"
"BG0OsyLbYnI_"
"l9#5^2&adj_i"
"#++Ws9d$%O%X"
"IWhdIN-#&O^s"
To preempt complaints about the use of Random: The primary source of randomness is still the crypto RNG. Even if you could deterministically preordain the sequence coming out of Random (say it only produced 1) you still wouldn't know the next char that would be picked (though that would limit the range of possibilities).
One simple extension would be to add weighting to the different character sets, which could be as simple as upping the max value and adding fall-through cases to increase weight.
switch (_rand.Next(6))
{
// Prefer letters 2:1
case 0:
case 1:
res.Append(lower[b % lower.Count()]);
break;
case 2:
case 3:
res.Append(upper[b % upper.Count()]);
break;
case 4:
res.Append(number[b % number.Count()]);
break;
case 5:
res.Append(special[b % special.Count()]);
break;
}
For a more humanistic random password generator I once implemented a prompt system using the EFF dice-word list.
A: There's always System.Web.Security.Membership.GeneratePassword(int length, int numberOfNonAlphanumericCharacters).
A: For this sort of password, I tend to use a system that's likely to generate more easily "used" passwords. Short, often made up of pronouncable fragments and a few numbers, and with no intercharacter ambiguity (is that a 0 or an O? A 1 or an I?). Something like
string[] words = { "bur", "ler", "meh", "ree" };
string word = "";
Random rnd = new Random();
for (int i = 0; i < 3; i++)
    word += words[rnd.Next(words.Length)];
int numbCount = rnd.Next(4);
for (int i = 0; i < numbCount; i++)
    word += (2 + rnd.Next(7)).ToString();
return word;
(Typed right into the browser, so use only as guidelines. Also, add more words).
A: The main goals of my code are:
*
*The distribution of strings is almost uniform (don't care about minor deviations, as long as they're small)
*It outputs more than a few billion strings for each argument set. Generating an 8 character string (~47 bits of entropy) is meaningless if your PRNG only generates 2 billion (31 bits of entropy) different values.
*It's secure, since I expect people to use this for passwords or other security tokens.
The first property is achieved by taking a 64 bit value modulo the alphabet size. For small alphabets (such as the 62 characters from the question) this leads to negligible bias. The second and third property are achieved by using RNGCryptoServiceProvider instead of System.Random.
using System;
using System.Security.Cryptography;
public static string GetRandomAlphanumericString(int length)
{
const string alphanumericCharacters =
"ABCDEFGHIJKLMNOPQRSTUVWXYZ" +
"abcdefghijklmnopqrstuvwxyz" +
"0123456789";
return GetRandomString(length, alphanumericCharacters);
}
public static string GetRandomString(int length, IEnumerable<char> characterSet)
{
if (length < 0)
throw new ArgumentException("length must not be negative", "length");
if (length > int.MaxValue / 8) // 250 million chars ought to be enough for anybody
throw new ArgumentException("length is too big", "length");
if (characterSet == null)
throw new ArgumentNullException("characterSet");
var characterArray = characterSet.Distinct().ToArray();
if (characterArray.Length == 0)
throw new ArgumentException("characterSet must not be empty", "characterSet");
var bytes = new byte[length * 8];
new RNGCryptoServiceProvider().GetBytes(bytes);
var result = new char[length];
for (int i = 0; i < length; i++)
{
ulong value = BitConverter.ToUInt64(bytes, i * 8);
result[i] = characterArray[value % (uint)characterArray.Length];
}
return new string(result);
}
(This is a copy of my answer to How can I generate random 8 character, alphanumeric strings in C#?)
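To put numbers on the "negligible bias" claim above: reducing a k-bit random value modulo the alphabet size n maps ceil(2^k / n) inputs to some characters and floor(2^k / n) to the rest, so the worst-case probability ratio between two characters can be computed directly (a rough sketch, using the 62-character alphabet from the answer):

```python
def modulo_bias_ratio(bits, alphabet_size):
    """Worst-case ratio between the most and least likely characters when a
    `bits`-wide random value is reduced modulo `alphabet_size`."""
    total = 2 ** bits
    high = -(-total // alphabet_size)  # ceil division
    low = total // alphabet_size       # floor division
    return high / low

print(modulo_bias_ratio(8, 62))   # 1.25 -- a single byte mod 62 is visibly biased
print(modulo_bias_ratio(64, 62))  # ~1.0 -- a 64-bit reduction makes the bias negligible
```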
A: public string GenerateToken(int length)
{
using (RNGCryptoServiceProvider cryptRNG = new RNGCryptoServiceProvider())
{
byte[] tokenBuffer = new byte[length];
cryptRNG.GetBytes(tokenBuffer);
return Convert.ToBase64String(tokenBuffer);
}
}
(You could also have the class where this method lives implement IDisposable, hold a reference to the RNGCryptoServiceProvider, and dispose of it properly, to avoid repeatedly instantiating it.)
It's been noted that as this returns a base-64 string, the output length is always a multiple of 4, with the extra space using = as a padding character. The length parameter specifies the length of the byte buffer, not the output string (and is therefore perhaps not the best name for that parameter, now I think about it). This controls how many bytes of entropy the password will have. However, because base-64 uses a 4-character block to encode each 3 bytes of input, if you ask for a length that's not a multiple of 3, there will be some extra "space", and it'll use = to fill the extra.
If you don't like using base-64 strings for any reason, you can replace the Convert.ToBase64String() call with either a conversion to regular string, or with any of the Encoding methods; eg. Encoding.UTF8.GetString(tokenBuffer) - just make sure you pick a character set that can represent the full range of values coming out of the RNG, and that produces characters that are compatible with wherever you're sending or storing this. Using Unicode, for example, tends to give a lot of Chinese characters. Using base-64 guarantees a widely-compatible set of characters, and the characteristics of such a string shouldn't make it any less secure as long as you use a decent hashing algorithm.
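The length arithmetic described above is easy to verify: base-64 maps every 3 input bytes to 4 output characters and pads with = when the byte count isn't a multiple of 3. A quick sketch (the byte counts are chosen arbitrarily):

```python
import base64
import os

# Each 3-byte group becomes 4 characters; leftover bytes force '=' padding,
# which is why the output length is always a multiple of 4.
for n in (9, 10, 16):
    token = base64.b64encode(os.urandom(n)).decode("ascii")
    print(n, len(token), token.count("="))
# 9 12 0
# 10 16 2
# 16 24 2
```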
A: This is a lot larger, but I think it looks a little more comprehensive:
http://www.obviex.com/Samples/Password.aspx
///////////////////////////////////////////////////////////////////////////////
// SAMPLE: Generates random password, which complies with the strong password
// rules and does not contain ambiguous characters.
//
// To run this sample, create a new Visual C# project using the Console
// Application template and replace the contents of the Class1.cs file with
// the code below.
//
// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND,
// EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.
//
// Copyright (C) 2004 Obviex(TM). All rights reserved.
//
using System;
using System.Security.Cryptography;
/// <summary>
/// This class can generate random passwords, which do not include ambiguous
/// characters, such as I, l, and 1. The generated password will be made of
/// 7-bit ASCII symbols. Every four characters will include one lower case
/// character, one upper case character, one number, and one special symbol
/// (such as '%') in a random order. The password will always start with an
/// alpha-numeric character; it will not start with a special symbol (we do
/// this because some back-end systems do not like certain special
/// characters in the first position).
/// </summary>
public class RandomPassword
{
// Define default min and max password lengths.
private static int DEFAULT_MIN_PASSWORD_LENGTH = 8;
private static int DEFAULT_MAX_PASSWORD_LENGTH = 10;
// Define supported password characters divided into groups.
// You can add (or remove) characters to (from) these groups.
private static string PASSWORD_CHARS_LCASE = "abcdefgijkmnopqrstwxyz";
private static string PASSWORD_CHARS_UCASE = "ABCDEFGHJKLMNPQRSTWXYZ";
private static string PASSWORD_CHARS_NUMERIC= "23456789";
private static string PASSWORD_CHARS_SPECIAL= "*$-+?_&=!%{}/";
/// <summary>
/// Generates a random password.
/// </summary>
/// <returns>
/// Randomly generated password.
/// </returns>
/// <remarks>
/// The length of the generated password will be determined at
/// random. It will be no shorter than the minimum default and
/// no longer than maximum default.
/// </remarks>
public static string Generate()
{
return Generate(DEFAULT_MIN_PASSWORD_LENGTH,
DEFAULT_MAX_PASSWORD_LENGTH);
}
/// <summary>
/// Generates a random password of the exact length.
/// </summary>
/// <param name="length">
/// Exact password length.
/// </param>
/// <returns>
/// Randomly generated password.
/// </returns>
public static string Generate(int length)
{
return Generate(length, length);
}
/// <summary>
/// Generates a random password.
/// </summary>
/// <param name="minLength">
/// Minimum password length.
/// </param>
/// <param name="maxLength">
/// Maximum password length.
/// </param>
/// <returns>
/// Randomly generated password.
/// </returns>
/// <remarks>
/// The length of the generated password will be determined at
/// random and it will fall with the range determined by the
/// function parameters.
/// </remarks>
public static string Generate(int minLength,
int maxLength)
{
// Make sure that input parameters are valid.
if (minLength <= 0 || maxLength <= 0 || minLength > maxLength)
return null;
// Create a local array containing supported password characters
// grouped by types. You can remove character groups from this
// array, but doing so will weaken the password strength.
char[][] charGroups = new char[][]
{
PASSWORD_CHARS_LCASE.ToCharArray(),
PASSWORD_CHARS_UCASE.ToCharArray(),
PASSWORD_CHARS_NUMERIC.ToCharArray(),
PASSWORD_CHARS_SPECIAL.ToCharArray()
};
// Use this array to track the number of unused characters in each
// character group.
int[] charsLeftInGroup = new int[charGroups.Length];
// Initially, all characters in each group are not used.
for (int i=0; i<charsLeftInGroup.Length; i++)
charsLeftInGroup[i] = charGroups[i].Length;
// Use this array to track (iterate through) unused character groups.
int[] leftGroupsOrder = new int[charGroups.Length];
// Initially, all character groups are not used.
for (int i=0; i<leftGroupsOrder.Length; i++)
leftGroupsOrder[i] = i;
// Because we cannot use the default randomizer, which is based on the
// current time (it will produce the same "random" number within a
// second), we will use a random number generator to seed the
// randomizer.
// Use a 4-byte array to fill it with random bytes and convert it then
// to an integer value.
byte[] randomBytes = new byte[4];
// Generate 4 random bytes.
RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
rng.GetBytes(randomBytes);
// Convert 4 bytes into a 32-bit integer value.
int seed = BitConverter.ToInt32(randomBytes, 0);
// Now, this is real randomization.
Random random = new Random(seed);
// This array will hold password characters.
char[] password = null;
// Allocate appropriate memory for the password.
if (minLength < maxLength)
password = new char[random.Next(minLength, maxLength+1)];
else
password = new char[minLength];
// Index of the next character to be added to password.
int nextCharIdx;
// Index of the next character group to be processed.
int nextGroupIdx;
// Index which will be used to track not processed character groups.
int nextLeftGroupsOrderIdx;
// Index of the last non-processed character in a group.
int lastCharIdx;
// Index of the last non-processed group.
int lastLeftGroupsOrderIdx = leftGroupsOrder.Length - 1;
// Generate password characters one at a time.
for (int i=0; i<password.Length; i++)
{
// If only one character group remained unprocessed, process it;
// otherwise, pick a random character group from the unprocessed
// group list. To allow a special character to appear in the
// first position, increment the second parameter of the Next
// function call by one, i.e. lastLeftGroupsOrderIdx + 1.
if (lastLeftGroupsOrderIdx == 0)
nextLeftGroupsOrderIdx = 0;
else
nextLeftGroupsOrderIdx = random.Next(0,
lastLeftGroupsOrderIdx);
// Get the actual index of the character group, from which we will
// pick the next character.
nextGroupIdx = leftGroupsOrder[nextLeftGroupsOrderIdx];
// Get the index of the last unprocessed characters in this group.
lastCharIdx = charsLeftInGroup[nextGroupIdx] - 1;
// If only one unprocessed character is left, pick it; otherwise,
// get a random character from the unused character list.
if (lastCharIdx == 0)
nextCharIdx = 0;
else
nextCharIdx = random.Next(0, lastCharIdx+1);
// Add this character to the password.
password[i] = charGroups[nextGroupIdx][nextCharIdx];
// If we processed the last character in this group, start over.
if (lastCharIdx == 0)
charsLeftInGroup[nextGroupIdx] =
charGroups[nextGroupIdx].Length;
// There are more unprocessed characters left.
else
{
// Swap processed character with the last unprocessed character
// so that we don't pick it until we process all characters in
// this group.
if (lastCharIdx != nextCharIdx)
{
char temp = charGroups[nextGroupIdx][lastCharIdx];
charGroups[nextGroupIdx][lastCharIdx] =
charGroups[nextGroupIdx][nextCharIdx];
charGroups[nextGroupIdx][nextCharIdx] = temp;
}
// Decrement the number of unprocessed characters in
// this group.
charsLeftInGroup[nextGroupIdx]--;
}
// If we processed the last group, start all over.
if (lastLeftGroupsOrderIdx == 0)
lastLeftGroupsOrderIdx = leftGroupsOrder.Length - 1;
// There are more unprocessed groups left.
else
{
// Swap processed group with the last unprocessed group
// so that we don't pick it until we process all groups.
if (lastLeftGroupsOrderIdx != nextLeftGroupsOrderIdx)
{
int temp = leftGroupsOrder[lastLeftGroupsOrderIdx];
leftGroupsOrder[lastLeftGroupsOrderIdx] =
leftGroupsOrder[nextLeftGroupsOrderIdx];
leftGroupsOrder[nextLeftGroupsOrderIdx] = temp;
}
// Decrement the number of unprocessed groups.
lastLeftGroupsOrderIdx--;
}
}
// Convert password characters into a string and return the result.
return new string(password);
}
}
/// <summary>
/// Illustrates the use of the RandomPassword class.
/// </summary>
public class RandomPasswordTest
{
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main(string[] args)
{
// Print 100 randomly generated passwords (8-to-10 char long).
for (int i=0; i<100; i++)
Console.WriteLine(RandomPassword.Generate(8, 10));
}
}
//
// END OF FILE
///////////////////////////////////////////////////////////////////////////////
A: I created this method, similar to the one available in the membership provider. This is useful if you don't want to add the System.Web reference in some applications.
It works great.
public static string GeneratePassword(int Length, int NonAlphaNumericChars)
{
string allowedChars = "abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNOPQRSTUVWXYZ0123456789";
string allowedNonAlphaNum = "!@#$%^&*()_-+=[{]};:<>|./?";
Random rd = new Random();
if (NonAlphaNumericChars > Length || Length <= 0 || NonAlphaNumericChars < 0)
throw new ArgumentOutOfRangeException();
char[] pass = new char[Length];
int[] pos = new int[Length];
int i = 0, j = 0, temp = 0;
bool flag = false;
//Randomize the position values of the pos array for the string Pass
while (i < Length - 1)
{
j = 0;
flag = false;
temp = rd.Next(0, Length);
for (j = 0; j < Length; j++)
if (temp == pos[j])
{
flag = true;
j = Length;
}
if (!flag)
{
pos[i] = temp;
i++;
}
}
//Random the AlphaNumericChars
for (i = 0; i < Length - NonAlphaNumericChars; i++)
pass[i] = allowedChars[rd.Next(0, allowedChars.Length)];
//Random the NonAlphaNumericChars
for (i = Length - NonAlphaNumericChars; i < Length; i++)
pass[i] = allowedNonAlphaNum[rd.Next(0, allowedNonAlphaNum.Length)];
//Set the sorted array values by the pos array for the right position
char[] sorted = new char[Length];
for (i = 0; i < Length; i++)
sorted[i] = pass[pos[i]];
string Pass = new String(sorted);
return Pass;
}
A: I've always been very happy with the password generator built into KeePass. Since KeePass is a .Net program, and open source, I decided to dig around the code a bit. I ended up just referencing KeePass.exe, the copy provided in the standard application install, as a reference in my project and writing the code below. You can see how flexible it is thanks to KeePass. You can specify length, which characters to include/not include, etc...
using KeePassLib.Cryptography.PasswordGenerator;
using KeePassLib.Security;
public static string GeneratePassword(int passwordLength, bool lowerCase, bool upperCase, bool digits,
bool punctuation, bool brackets, bool specialAscii, bool excludeLookAlike)
{
var ps = new ProtectedString();
var profile = new PwProfile();
profile.CharSet = new PwCharSet();
profile.CharSet.Clear();
if (lowerCase)
profile.CharSet.AddCharSet('l');
if(upperCase)
profile.CharSet.AddCharSet('u');
if(digits)
profile.CharSet.AddCharSet('d');
if (punctuation)
profile.CharSet.AddCharSet('p');
if (brackets)
profile.CharSet.AddCharSet('b');
if (specialAscii)
profile.CharSet.AddCharSet('s');
profile.ExcludeLookAlike = excludeLookAlike;
profile.Length = (uint)passwordLength;
profile.NoRepeatingCharacters = true;
KeePassLib.Cryptography.PasswordGenerator.PwGenerator.Generate(out ps, profile, null, _pool);
return ps.ReadString();
}
A: public string CreatePassword(int length)
{
const string valid = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890";
StringBuilder res = new StringBuilder();
Random rnd = new Random();
while (0 < length--)
{
res.Append(valid[rnd.Next(valid.Length)]);
}
return res.ToString();
}
This has a good benefit of being able to choose from a list of available characters for the generated password (e.g. digits only, only uppercase or only lowercase etc.)
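The same pick-from-a-charset approach can be sketched in Python with the secrets module in place of a time-seeded generator (the digits-only PIN example is just an illustration):

```python
import secrets

def create_password(length, valid="abcdefghijklmnopqrstuvwxyz"
                                  "ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890"):
    # secrets.choice draws from a CSPRNG, avoiding the same-seed-per-second
    # pitfall of time-based generators mentioned elsewhere in this thread.
    return "".join(secrets.choice(valid) for _ in range(length))

print(len(create_password(12)))                       # 12
print(len(create_password(6, valid="0123456789")))    # 6 (digits-only PIN)
```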
A: I like to look at generating passwords, just like generating software keys. You should choose from an array of characters that follow a good practice. Take what @Radu094 answered with and modify it to follow good practice. Don't put every single letter in the character array. Some letters are harder to say or understand over the phone.
You should also consider using a checksum on the password that was generated to make sure that it was generated by you. A good way of accomplishing this is to use the LUHN algorithm.
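For reference, the Luhn checksum mentioned above is only a few lines; here is a sketch (the sample number 79927398713 is the textbook Luhn example):

```python
def luhn_check_digit(digits):
    """Compute the Luhn check digit for a string of decimal digits."""
    total = 0
    # Walk right-to-left; double every second digit (counting from the
    # position the check digit will occupy), subtracting 9 if the result > 9.
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def luhn_valid(number):
    return luhn_check_digit(number[:-1]) == int(number[-1])

print(luhn_check_digit("7992739871"))  # 3
print(luhn_valid("79927398713"))       # True
```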
A: public static string GeneratePassword(int passLength) {
var chars = "abcdefghijklmnopqrstuvwxyz@#$&ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
var random = new Random();
var result = new string(
Enumerable.Repeat(chars, passLength)
.Select(s => s[random.Next(s.Length)])
.ToArray());
return result;
}
A: This package allows you to generate a random password while fluently indicating which characters it should contain (if needed):
https://github.com/prjseal/PasswordGenerator/
Example:
var pwd = new Password().IncludeLowercase().IncludeUppercase().IncludeSpecial();
var password = pwd.Next();
A: If you want to make use of the cryptographically secure random number generation used by System.Web.Security.Membership.GeneratePassword but also want to restrict the character set to alphanumeric characters, you can filter the result with a regex:
static string GeneratePassword(int characterCount)
{
string password = String.Empty;
while(password.Length < characterCount)
password += Regex.Replace(System.Web.Security.Membership.GeneratePassword(128, 0), "[^a-zA-Z0-9]", string.Empty);
return password.Substring(0, characterCount);
}
A: Check this code.
I added the .Remove(length) call to improve Anaximander's response:
public string GeneratePassword(int length)
{
using (RNGCryptoServiceProvider cryptRNG = new RNGCryptoServiceProvider())
{
byte[] tokenBuffer = new byte[length];
cryptRNG.GetBytes(tokenBuffer);
return Convert.ToBase64String(tokenBuffer).Remove(length);
}
}
A: How to generate a random password in C#:
Output : (https://prnt.sc/11fac8v)
Run : https://onlinegdb.com/HJe5OHBLu
private static Random random = new Random();
public static void Main()
{
Console.WriteLine("Random password with length of 8 character.");
Console.WriteLine("===========================================");
Console.WriteLine("Capital letters : 2");
Console.WriteLine("Number letters : 2");
Console.WriteLine("Special letters : 2");
Console.WriteLine("Small letters : 2");
Console.WriteLine("===========================================");
Console.Write("The Random Password : ");
Console.WriteLine(RandomStringCap(2) + RandomStringNum(2) + RandomStringSpe(2) + RandomStringSml(2));
Console.WriteLine("===========================================");
}
public static string RandomStringCap(int length)
{
const string chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
return new string(Enumerable.Repeat(chars, length)
.Select(s => s[random.Next(s.Length)]).ToArray());
}
public static string RandomStringNum(int length)
{
const string chars = "0123456789";
return new string(Enumerable.Repeat(chars, length)
.Select(s => s[random.Next(s.Length)]).ToArray());
}
public static string RandomStringSml(int length)
{
const string chars = "abcdefghijklmnopqrstuvwxyz";
return new string(Enumerable.Repeat(chars, length)
.Select(s => s[random.Next(s.Length)]).ToArray());
}
public static string RandomStringSpe(int length)
{
const string chars = "!@#$%^&*_-=+";
return new string(Enumerable.Repeat(chars, length)
.Select(s => s[random.Next(s.Length)]).ToArray());
}
A: Inspired by the answer from @kitsu.eb, but using RandomNumberGenerator instead of Random or RNGCryptoServiceProvider (deprecated in .NET 6), and added a few more special characters.
Optional parameter to exclude characters that will be escaped when using System.Text.Json.JsonSerializer.Serialize - for example & which is escaped as \u0026 - so that you can guarantee the length of the serialized string will match the length of the password.
For .NET Core 3.0 and above.
public static class PasswordGenerator
{
const string lower = "abcdefghijklmnopqrstuvwxyz";
const string upper = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const string number = "1234567890";
const string special = "!@#$%^&*()[]{},.:`~_-=+"; // excludes problematic characters like ;'"/\
const string specialJsonSafe = "!@#$%^*()[]{},.:~_-="; // excludes problematic characters like ;'"/\ and &`+
const int lowerLength = 26; // lower.Length
const int upperLength = 26; // upper.Length;
const int numberLength = 10; // number.Length;
const int specialLength = 23; // special.Length;
const int specialJsonSafeLength = 20; // specialJsonSafe.Length;
public static string Generate(int length = 96, bool jsonSafeSpecialCharactersOnly = false)
{
Span<char> result = length < 1024 ? stackalloc char[length] : new char[length].AsSpan();
for (int i = 0; i < length; ++i)
{
switch (RandomNumberGenerator.GetInt32(4))
{
case 0:
result[i] = lower[RandomNumberGenerator.GetInt32(0, lowerLength)];
break;
case 1:
result[i] = upper[RandomNumberGenerator.GetInt32(0, upperLength)];
break;
case 2:
result[i] = number[RandomNumberGenerator.GetInt32(0, numberLength)];
break;
case 3:
if (jsonSafeSpecialCharactersOnly)
{
result[i] = specialJsonSafe[RandomNumberGenerator.GetInt32(0, specialJsonSafeLength)];
}
else
{
result[i] = special[RandomNumberGenerator.GetInt32(0, specialLength)];
}
break;
}
}
return result.ToString();
}
}
A: Pretty easy way to require one from each group using Random and linq-to-objects.
*
*Randomize each group
*Select random amount from first group
*Select remaining random amounts from following groups
Random rand = new Random();
int min = 8;
int max = 16;
int totalLen = rand.Next(min, max);
int remainingGroups = 4;
string[] allowedLowerChars = "a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z".Split(',');
string [] allowedUpperChars = "A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,Y,Z".Split(',');
string [] allowedNumbers = "1,2,3,4,5,6,7,8,9,0".Split(',');
string [] allowedSpecialChars = "!,@,#,$,%,&,?".Split(',');
var password = allowedLowerChars.OrderBy(c => rand.Next()).Take(rand.Next(1, totalLen-remainingGroups--)).ToList();
password.AddRange(allowedUpperChars.OrderBy(c => rand.Next()).Take(rand.Next(1, totalLen-password.Count-remainingGroups--)).ToList());
password.AddRange(allowedNumbers.OrderBy(c => rand.Next()).Take(rand.Next(1, totalLen-password.Count-remainingGroups--)).ToList());
password.AddRange(allowedSpecialChars.OrderBy(c => rand.Next()).Take(totalLen-password.Count).ToList());
password = password.OrderBy(c => rand.Next()).ToList(); // randomize groups
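A hedged side note on the approach above: Random is not cryptographically secure, so the same "at least one from each group" idea is better built on a CSPRNG. The concept is language-agnostic; here is a minimal sketch in Python using the standard secrets module (the group strings mirror the ones above and are just illustrative):

```python
import secrets

GROUPS = [
    "abcdefghijklmnopqrstuvwxyz",
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
    "0123456789",
    "!@#$%&?",
]

def generate_password(length=12):
    """Pick one char from each group, fill the rest from all groups,
    then shuffle with a secure RNG so group positions aren't predictable."""
    if length < len(GROUPS):
        raise ValueError("length must cover every character group")
    all_chars = "".join(GROUPS)
    chars = [secrets.choice(g) for g in GROUPS]
    chars += [secrets.choice(all_chars) for _ in range(length - len(GROUPS))]
    # secrets-based Fisher-Yates shuffle (random.shuffle is not CSPRNG-backed)
    for i in range(len(chars) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)
```

This keeps the "one from each group" guarantee while avoiding the ordering bias of appending the groups in a fixed sequence.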
A: Since Random is not secure and RNGCryptoServiceProvider is obsolete, I ended up doing this:
// possible characters that password can have
private const string passChars =
"ABCDEFGHIJKLMNOPQRSTUVWXYZ" +
"abcdefghijklmnopqrstuvwxyz" +
"0123456789" +
"!@#$%.-_"
;
public static string GetRandomPassword(int length)
{
char[] p = new char[length];
for (int i = 0; i < length; i++)
p[i] = passChars[RandomNumberGenerator.GetInt32(0, passChars.Length)];
return new string(p);
}
A: Here is a solution that uses RNGCryptoServiceProvider to mimic the functionality of Membership.GeneratePassword from the System.Web.Security namespace.
I needed a drop-in replacement to run in an Azure Function.
It can be tested here:
https://dotnetfiddle.net/V0cNJw
public static string GeneratePassword(int length, int numberOfNonAlphanumericCharacters)
{
const string allowedChars = "abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNOPQRSTUVWXYZ0123456789";
const string nonAlphanumericChars = "!@#$%^&*()_-+=[{]};:<>|./?";
var randNum = new byte[4];
using (var rng = new RNGCryptoServiceProvider())
{
rng.GetBytes(randNum);
var randomSeed = BitConverter.ToInt32(randNum, 0);
var random = new Random(randomSeed);
var chars = new char[length];
var allowedCharCount = allowedChars.Length;
var nonAlphanumericCharCount = nonAlphanumericChars.Length;
var numNonAlphanumericCharsAdded = 0;
for (var i = 0; i < length; i++)
{
if (numNonAlphanumericCharsAdded < numberOfNonAlphanumericCharacters && i < length - 1)
{
chars[i] = nonAlphanumericChars[random.Next(nonAlphanumericCharCount)];
numNonAlphanumericCharsAdded++;
}
else
{
chars[i] = allowedChars[random.Next(allowedCharCount)];
}
}
return new string(chars);
}
}
Here is a version that runs on .Net 6.0+
Sandbox: https://dotnetfiddle.net/XqgTSg
public static string GeneratePassword(int length, int numberOfNonAlphanumericCharacters)
{
const string allowedChars = "abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNOPQRSTUVWXYZ0123456789";
const string nonAlphanumericChars = "!@#$%^&*()_-+=[{]};:<>|./?";
var randNum = new byte[4];
using (var rng = RandomNumberGenerator.Create())
{
rng.GetBytes(randNum);
var randomSeed = BitConverter.ToInt32(randNum, 0);
var random = new Random(randomSeed);
var chars = new char[length];
var allowedCharCount = allowedChars.Length;
var nonAlphanumericCharCount = nonAlphanumericChars.Length;
var numNonAlphanumericCharsAdded = 0;
for (var i = 0; i < length; i++)
{
if (numNonAlphanumericCharsAdded < numberOfNonAlphanumericCharacters && i < length - 1)
{
chars[i] = nonAlphanumericChars[random.Next(nonAlphanumericCharCount)];
numNonAlphanumericCharsAdded++;
}
else
{
chars[i] = allowedChars[random.Next(allowedCharCount)];
}
}
return new string(chars);
}
}
A: Insert a Timer: timer1, 2 buttons: button1, button2, 1 textBox: textBox1, and a comboBox: comboBox1. Make sure you declare:
int count = 0;
Source Code:
private void button1_Click(object sender, EventArgs e)
{
// This clears the textBox, resets the count, and starts the timer
count = 0;
textBox1.Clear();
timer1.Start();
}
private void timer1_Tick(object sender, EventArgs e)
{
// This generates the password, and types it in the textBox
count += 1;
string possible = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890";
string psw = "";
Random rnd = new Random { };
psw += possible[rnd.Next(possible.Length)];
textBox1.Text += psw;
if (count == (comboBox1.SelectedIndex + 1))
{
timer1.Stop();
}
}
private void Form1_Load(object sender, EventArgs e)
{
// This adds password lengths to the comboBox to choose from.
comboBox1.Items.Add("1");
comboBox1.Items.Add("2");
comboBox1.Items.Add("3");
comboBox1.Items.Add("4");
comboBox1.Items.Add("5");
comboBox1.Items.Add("6");
comboBox1.Items.Add("7");
comboBox1.Items.Add("8");
comboBox1.Items.Add("9");
comboBox1.Items.Add("10");
comboBox1.Items.Add("11");
comboBox1.Items.Add("12");
}
private void button2_click(object sender, EventArgs e)
{
// This obfuscates the password with a simple character shift (not real encryption)
string tochar = textBox1.Text;
textBox1.Clear();
char[] carray = tochar.ToCharArray();
for (int i = 0; i < carray.Length; i++)
{
int num = Convert.ToInt32(carray[i]) + 10;
string cvrt = Convert.ToChar(num).ToString();
textBox1.Text += cvrt;
}
}
A: public string Sifre_Uret(int boy, int noalfa)
{
// 01.03.2016
// General-purpose password generation function
// The function does not allow lengths greater than 128.
if (boy > 128) { boy = 128; }
if (noalfa > 128) { noalfa = 128; }
if (noalfa > boy) { noalfa = boy; }
string passch = System.Web.Security.Membership.GeneratePassword(boy, noalfa);
// Characters that could cause URL-encoding or URL/JSON problems are replaced.
// Microsoft does not guarantee alphanumeric-only output; non-alphanumerics such as !@#$%^&*()_-+=[{]};:<>|./? can appear.
// https://msdn.microsoft.com/tr-tr/library/system.web.security.membership.generatepassword(v=vs.110).aspx
// Filtering for URLs and JSON/AJAX calls
passch = passch.Replace(":", "z");
passch = passch.Replace(";", "W");
passch = passch.Replace("'", "t");
passch = passch.Replace("\"", "r");
passch = passch.Replace("/", "+");
passch = passch.Replace("\\", "e");
passch = passch.Replace("?", "9");
passch = passch.Replace("&", "8");
passch = passch.Replace("#", "D");
passch = passch.Replace("%", "u");
passch = passch.Replace("=", "4");
passch = passch.Replace("~", "1");
passch = passch.Replace("[", "2");
passch = passch.Replace("]", "3");
passch = passch.Replace("{", "g");
passch = passch.Replace("}", "J");
//passch = passch.Replace("(", "6");
//passch = passch.Replace(")", "0");
//passch = passch.Replace("|", "p");
//passch = passch.Replace("@", "4");
//passch = passch.Replace("!", "u");
//passch = passch.Replace("$", "Z");
//passch = passch.Replace("*", "5");
//passch = passch.Replace("_", "a");
passch = passch.Replace(",", "V");
passch = passch.Replace(".", "N");
passch = passch.Replace("+", "w");
passch = passch.Replace("-", "7");
return passch;
}
A: This is short and it works great for me.
public static string GenerateRandomCode(int length)
{
Random rdm = new Random();
StringBuilder sb = new StringBuilder();
for(int i = 0; i < length; i++)
    sb.Append(Convert.ToChar(rdm.Next(33, 127))); // printable ASCII; the original (101,132) range included DEL and non-ASCII codes
return sb.ToString();
}
A: Here is what I put together quickly.
public string GeneratePassword(int len)
{
    string res = "";
    Random rnd = new Random();
    while (res.Length < len)
    {
        // rejection-sample character codes, keeping only letters and digits
        char c = (char)rnd.Next(48, 123);
        if (Char.IsLetterOrDigit(c)) res += c;
    }
    return res;
}
A: Generate a random password of a specified length with:
- Special characters
- Numbers
- Lowercase
- Uppercase
public static string CreatePassword(int length = 12)
{
const string lower = "abcdefghijklmnopqrstuvwxyz";
const string upper = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const string number = "1234567890";
const string special = "!@#$%^&*";
var middle = length / 2;
StringBuilder res = new StringBuilder();
Random rnd = new Random();
while (0 < length--)
{
if (middle == length)
{
res.Append(number[rnd.Next(number.Length)]);
}
else if (middle - 1 == length)
{
res.Append(special[rnd.Next(special.Length)]);
}
else
{
if (length % 2 == 0)
{
res.Append(lower[rnd.Next(lower.Length)]);
}
else
{
res.Append(upper[rnd.Next(upper.Length)]);
}
}
}
return res.ToString();
}
A: I use this code to generate passwords with a balanced composition of alphabetic, numeric, and non-alphanumeric characters.
public static string GeneratePassword(int Length, int NonAlphaNumericChars)
{
string allowedChars = "abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNOPQRSTUVWXYZ0123456789";
string allowedNonAlphaNum = "!@#$%^&*()_-+=[{]};:<>|./?";
string pass = "";
Random rd = new Random(DateTime.Now.Millisecond);
for (int i = 0; i < Length; i++)
{
if (rd.Next(2) > 0 && NonAlphaNumericChars > 0) // rd.Next(1) would always return 0
{
pass += allowedNonAlphaNum[rd.Next(allowedNonAlphaNum.Length)];
NonAlphaNumericChars--;
}
else
{
pass += allowedChars[rd.Next(allowedChars.Length)];
}
}
return pass;
}
A: On my website I use this method:
//Symb array
private const string _SymbolsAll = "~`!@#$%^&*()_+=-\\|[{]}'\";:/?.>,<";
//Random symb
public string GetSymbol(int Length)
{
Random Rand = new Random(DateTime.Now.Millisecond);
StringBuilder result = new StringBuilder();
for (int i = 0; i < Length; i++)
result.Append(_SymbolsAll[Rand.Next(0, _SymbolsAll.Length)]);
return result.ToString();
}
Edit the _SymbolsAll string to change the character set.
A: Added some supplemental code to the accepted answer. It improves upon answers just using Random and allows for some password options. I also liked some of the options from the KeePass answer but did not want to include the executable in my solution.
private string RandomPassword(int length, bool includeCharacters, bool includeNumbers, bool includeUppercase, bool includeNonAlphaNumericCharacters, bool includeLookAlikes)
{
if (length < 8 || length > 128) throw new ArgumentOutOfRangeException("length");
if (!includeCharacters && !includeNumbers && !includeNonAlphaNumericCharacters) throw new ArgumentException("RandomPassword-Key arguments all false, no values would be returned");
string pw = "";
do
{
pw += System.Web.Security.Membership.GeneratePassword(128, 25);
pw = RemoveCharacters(pw, includeCharacters, includeNumbers, includeUppercase, includeNonAlphaNumericCharacters, includeLookAlikes);
} while (pw.Length < length);
return pw.Substring(0, length);
}
private string RemoveCharacters(string passwordString, bool includeCharacters, bool includeNumbers, bool includeUppercase, bool includeNonAlphaNumericCharacters, bool includeLookAlikes)
{
if (!includeCharacters)
{
var remove = new string[] { "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z" };
foreach (string r in remove)
{
passwordString = passwordString.Replace(r, string.Empty);
passwordString = passwordString.Replace(r.ToUpper(), string.Empty);
}
}
if (!includeNumbers)
{
var remove = new string[] { "0", "1", "2", "3", "4", "5", "6", "7", "8", "9" };
foreach (string r in remove)
passwordString = passwordString.Replace(r, string.Empty);
}
if (!includeUppercase)
passwordString = passwordString.ToLower();
if (!includeNonAlphaNumericCharacters)
{
var remove = new string[] { "!", "@", "#", "$", "%", "^", "&", "*", "(", ")", "-", "_", "+", "=", "{", "}", "[", "]", "|", "\\", ":", ";", "<", ">", "/", "?", "." };
foreach (string r in remove)
passwordString = passwordString.Replace(r, string.Empty);
}
if (!includeLookAlikes)
{
var remove = new string[] { "(", ")", "0", "O", "o", "1", "i", "I", "l", "|", "!", ":", ";" };
foreach (string r in remove)
passwordString = passwordString.Replace(r, string.Empty);
}
return passwordString;
}
This was the first link when I searched for generating random passwords and the following is out of scope for the current question but might be important to consider.
*
*Based upon the assumption that System.Web.Security.Membership.GeneratePassword is cryptographically secure with a minimum of 20% of the characters being Non-Alphanumeric.
*Not sure if removing characters and appending strings is considered good practice in this case and provides enough entropy.
*Might want to consider implementing in some way with SecureString for secure password storage in memory.
A: validChars can be any construct, but I decided to select from ASCII code ranges, removing control characters. In this example, the password is a 12-character string.
string validChars = String.Join("", Enumerable.Range(33, 126 - 33).Where(i => !(new int[] { 34, 38, 39, 44, 60, 62, 96 }).Contains(i)).Select(i => (char)i));
Random rand = new Random();
string password = string.Join("", Enumerable.Range(1, 12).Select(i => validChars[rand.Next(validChars.Length)])); // Next's upper bound is exclusive
Q: How Scalable is SQLite? I recently read this question about SQLite vs. MySQL, and the answer pointed out that SQLite doesn't scale well; the official website sort of confirms this.
How scalable is SQLite, and what are its upper limits?
A: SQLite is scalable in single-user terms; I have a multi-gigabyte database that performs very well, and I haven't had many problems with it.
But it is single-user, so it depends on what kind of scaling you're talking about.
In response to comments. Note that there is nothing that prevents using an Sqlite database in a multi-user environment, but every transaction (in effect, every SQL statement that modifies the database) takes a lock on the file, which will prevent other users from accessing the database at all.
So if you have lots of modifications done to the database, you're essentially going to hit scaling problems very quick. If, on the other hand, you have lots of read access compared to write access, it might not be so bad.
But Sqlite will of course function in a multi-user environment, but it won't perform well.
A: Yesterday I released a small site* to track your rep that used a shared SQLite database for all visitors. Unfortunately, even with the modest load that it put on my host it ran quite slowly. This is because the entire database was locked every time someone viewed the page, because it contained updates/inserts. I soon switched to MySQL and, while I haven't had much time to test it out, it seems much more scalable than SQLite. I just remember slow page loads and occasionally getting a "database locked" error when trying to execute queries from the shell in SQLite. That said, I am running another site from SQLite just fine. The difference is that the site is static (i.e. I'm the only one that can change the database) and so it works just fine for concurrent reads. Moral of the story: only use SQLite for websites where updates to the database happen rarely (less often than every page load).
edit: I just realized that I may not have been fair to SQLite - I didn't index any columns in the SQLite database when I was serving it from a web page. This partially caused the slowdown I was experiencing. However, the observation of database-locking stands - if you have particularly onerous updates, SQLite performance won't match MySQL or Postgres.
another edit: Since I posted this almost 3 months ago I've had the opportunity to closely examine the scalability of SQLite, and with a few tricks it can be quite scalable. As I mentioned in my first edit, database indexes dramatically reduce query time, but this is more of a general observation about databases than it is about SQLite. However, there is another trick you can use to speed up SQLite: transactions. Whenever you have to do multiple database writes, put them inside a transaction. Instead of writing to (and locking) the file each and every time a write query is issued, the write will only happen once when the transaction completes.
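The transaction trick just described can be sketched in a few lines; this is a minimal Python illustration using the standard sqlite3 module with an in-memory database (the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hits (page TEXT, ts INTEGER)")

# Slow pattern: each INSERT runs in its own implicit transaction,
# so the database is locked and synced once per statement.
# Fast pattern: batch all writes into a single transaction --
# the file is locked and written once, at commit time.
with conn:  # opens a transaction, commits on success
    conn.executemany(
        "INSERT INTO hits (page, ts) VALUES (?, ?)",
        [("/home", i) for i in range(1000)],
    )

count = conn.execute("SELECT COUNT(*) FROM hits").fetchone()[0]
```

The same batching idea applies regardless of language; only the transaction syntax differs.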
The site that I mention I released in the first paragraph has been switched back to SQLite, and it's running quite smoothly once I tuned my code in a few places.
* the site is no longer available
A: SQLite drives the sqlite.org web site and others that have lots of traffic. They suggest that if you have less than 100k hits per day, SQLite should work fine. And that was written before they delivered the "Writeahead Logging" feature.
If you want to speed things up with SQLite, do the following:
*
*upgrade to SQLite 3.7.x
*Enable write-ahead logging
*Run the following pragma: "PRAGMA cache_size = Number-of-pages;" The default size (Number-of-pages) is 2000 pages, but if you raise that number, then you will raise the amount of data that is running straight out of memory.
You may want to take a look at my video on YouTube called "Improve SQLite Performance With Writeahead Logging" which shows how to use write-ahead logging and demonstrates a 5x speed improvement for writes.
A: Think of it this way. SQLite locks the database file every time someone writes to it (SQLite doesn't lock on reading). So if you're serving up a web page or an application that has multiple concurrent users, only one of them can write at a time. So right there is a scaling issue. If it's a one-person application, say a music library where you hold hundreds of titles, ratings, information, usage, playing, play time, then SQLite will scale beautifully, holding thousands if not millions of records (hard drive willing).
MySQL, on the other hand, works well for server apps where people all over will be using it concurrently. It doesn't lock in the same way, and it is quite large in size. So for your music library MySQL would be overkill, as only one person would see it, UNLESS this is a shared music library where thousands add or update it. Then MySQL would be the one to use.
So in theory MySQL scales better than SQLite because it can handle multiple users, but it is overkill for a single-user app.
A: Think of a single web server serving hundreds of clients: on the backend it appears with a single connection to the database, doesn't it?
So there is no concurrent access to the database, and therefore we can say that the database is working in "single-user mode". It makes no sense to discuss multi-user access in such a circumstance, and so SQLite works as well as any server-based database.
A: Sqlite is a desktop or in-process database. SQL Server, MySQL, Oracle, and their brethren are servers.
Desktop databases are by their nature not good choices for any application that needs to support concurrent write access to the data store. This includes, at some level, most web sites ever created. If you even have to log in for anything, you probably need write access to the DB.
A: Have you read this SQLite docs - http://www.sqlite.org/whentouse.html ?
SQLite usually will work great as the
database engine for low to medium
traffic websites (which is to say,
99.9% of all websites). The amount of web traffic that SQLite can handle
depends, of course, on how heavily the
website uses its database. Generally
speaking, any site that gets fewer
than 100K hits/day should work fine
with SQLite. The 100K hits/day figure
is a conservative estimate, not a hard
upper bound. SQLite has been
demonstrated to work with 10 times
that amount of traffic.
A: SQLite scalability will highly depend on the data used, and their format. I've had some tough experience with extra long tables (GPS records, one record per second). Experience showed that SQLite would slow down in stages, partly due to constant rebalancing of the growing binary trees holding the indexes (and with time-stamped indexes, you just know that tree is going to get rebalanced a lot, yet it is vital to your searches). So in the end at about 1GB (very ballpark, I know), queries become sluggish in my case. Your mileage will vary.
One thing to remember, despite all the bragging, SQLite is NOT made for data warehousing. There are various uses not recommended for SQLite. The fine people behind SQLite say it themselves:
Another way to look at SQLite is this: SQLite is not designed to replace Oracle. It is designed to replace fopen().
And this leads to the main argument (not quantitative, sorry, but qualitative), SQLite is not for all uses, whereas MySQL can cover many varied uses, even if not ideally. For example, you could have MySQL store Firefox cookies (instead of SQLite), but you'd need that service running all the time. On the other hand, you could have a transactional website running on SQLite (as many people do) instead of MySQL, but expect a lot of downtime.
A: SQLite's website (the part that you referenced) indicates that it can be used for a variety of multi-user situations.
I would say that it can handle quite a bit. In my experience it has always been very fast. Of course, you need to index your tables, and when coding against it, you need to make sure you use parameterized queries and the like. Basically the same stuff you would do with any database to improve performance.
Q: What's a clean/simple way to ensure the security of a page? Supposing you have a form that collects and submits sensitive information and you want to ensure it is never accessed via insecure (non-HTTPS) means, how might you best go about enforcing that policy?
A: If you're running Apache, you can put a RewriteRule in your .htaccess, like so:
RewriteCond %{HTTPS} off
RewriteRule ^/?mypage\.html$ https://example.com/mypage.html [R=301,L]
A: I think the most bullet-proof solution is to keep the code inside your SSL document root only. This will ensure that you (or another developer in the future) can't accidentally link to a non-secure version of the form. If you have the form on both HTTP and HTTPS, you might not even notice if the wrong one gets used inadvertently.
If this isn't doable, then I would take at least two precautions. Do the Apache URL rewriting, and have a check in your code to make sure the session is encrypted - check the HTTP headers.
A: Take a look at this: http://www.dotnetmonster.com/Uwe/Forum.aspx/asp-net/75369/Enforcing-https
Edit: This shows solutions from an IIS point of view, but you should be able to configure about any web server for this.
A: In IIS? Go to security settings and hit "Require secure connection". Alternately, you can check the server variables in page load and redirect to the secure page.
A: I'd suggest looking at the request in the code that renders the form, and if it is not using SSL, issue a redirect to the https URL.
You could also use a rewite rule in Apache to redirect the user.
Or, you could just not serve up the page via HTTP, and keep it only in the document root of your HTTPS site.
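The "check the request and redirect" idea is framework-agnostic. Here is a minimal sketch as a WSGI-style check in Python; the environ keys are standard WSGI, but the host and path handling are simplified assumptions:

```python
def require_https(environ):
    """Return a (status, headers) redirect if the request is not HTTPS,
    otherwise None to let the request proceed."""
    scheme = environ.get("wsgi.url_scheme", "http")
    if scheme == "https":
        return None
    host = environ.get("HTTP_HOST", "example.com")
    path = environ.get("PATH_INFO", "/")
    location = "https://" + host + path
    # 301 keeps browsers and search engines pointed at the secure URL
    return ("301 Moved Permanently", [("Location", location)])
```

In a real application this check would run before the form is rendered, so the sensitive page is never served over plain HTTP.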
Q: Should I always use the AndAlso and OrElse operators? Is there ever a circumstance in which I would not want to use the AndAlso operator rather than the And operator? …or in which I would not want to use the OrElse operator rather than the Or operator?
A: From MSDN:
Short-Circuiting Trade-Offs
Short-circuiting can improve performance by not evaluating an expression that cannot alter the result of the logical operation. However, if that expression performs additional actions, short-circuiting skips those actions. For example, if the expression includes a call to a Function procedure, that procedure is not called if the expression is short-circuited, and any additional code contained in the Function does not run. If your program logic depends on any of that additional code, you should probably avoid short-circuiting operators.
A:
Is there ever a circumstance in which I would not want to use the AndAlso operator rather than the And operator?
Sure: if you want to make sure that both sides of the expression are evaluated. This might be the case if, for example, both sides are method calls that return booleans as a result of some other operation that has a side effect.
But in general, use AndAlso/OrElse whenever you would use &&/|| in C/C++/C#, which of course is the vast majority of the time.
A: They are completely different functionalities in VB.net and both have their use cases (though for the most common use both will work)
AndAlso and OrElse are conditional operators, they return a true or false boolean value and only evaluate what they need to until they reach the result that cannot be altered (false for AndAlso, true for OrElse). These are the equivalent of && and || in languages like C# and Java.
And and Or are bitwise operators, they combine the bits of whatever is passed to them into a result, and since those bits can always be altered by further evaluation, need to evaluate everything. These are the equivalent of & and | in languages like C# and Java.
Most of the time, when you're looking at an and/or operation, you want conditional results. Bitwise operations will produce a value that is truthy or falsey and will (usually) match the true or false results from proper conditional operators, they will just take a bit longer on average since they evaluate everything.
Bitwise operators are meant to be used when you care about the specific bits in a value, often used for storing settings or permissions in as compact a way as possible, and for reading and applying them as fast as possible. For example the read/write/execute permissions on files in Unix based systems use bit flags this way:
0b001 - execute permission
0b010 - write permission
0b100 - read permission
so, in decimal, a value of 7 can do all three, a value of 4 can only read, 6 can read and write, etc. If someone is trying to write to a file it checks their permissions by using And 0b010 and gets a truthy or falsy result based on the bit in that position. To update the value to let users execute a file, you would use Or 0b001 on their current permissions and store the result.
The difference comes here:
0b001 (truthy) And 0b010 (truthy) = 0b000 (falsy)
0b001 (truthy) AndAlso 0b010 (truthy) = True
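The same distinction exists in most languages. A quick Python illustration of the Unix-style permission flags above, showing why a bitwise combination of two truthy values can be falsy while the logical combination is true (mirroring the VB.NET example):

```python
EXECUTE = 0b001
WRITE   = 0b010
READ    = 0b100

perms = 0b110  # read + write

# Bitwise "And": inspects bits, always evaluates both sides
can_write = perms & WRITE    # 0b010 -> truthy
can_exec  = perms & EXECUTE  # 0b000 -> falsy

# Two truthy values whose bitwise And is falsy:
bitwise = EXECUTE & WRITE               # 0b001 & 0b010 == 0
logical = bool(EXECUTE) and bool(WRITE)  # True

# Grant execute permission with a bitwise Or
perms |= EXECUTE  # now 0b111
```

So when you care about the truth of two conditions, use the logical operators; when you care about individual bits, use the bitwise ones.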
Q: Application to Stress Test in a Windows .NET Application I am developing a Windows .NET application (WinForms) and I need to simulate a stress test of the database and the application ( more than 100 conections).
What tools do you recommend?
A: Tools like AutomatedQA TestComplete allow you to make a script which simulates a user controlling your application. Running multiple scripts at the same time could be your stress test.
Q: ReSharper 4.0 - Renaming a class does not rename the file…
I probably set something on accident in the options, but I can't find it. Any ideas?
A: When I do a rename in Resharper there's a checkbox below the textbox where you type the new name that says "Synchronise file name with class name". Check that and the file is renamed too.
Be sure you're using Resharper's rename (right-click/Refactor/Rename) rather than Visual Studio's (right-click/Rename), as the latter definitely doesn't rename the file.
A: *
*Go to the class name and click Ctrl+R+R
*Set the new name
*select the check box "Also rename files to reflect this change."
Q: How can I program a simple chat bot AI? I want to build a bot that asks someone a few simple questions and branches based on the answer. I realize parsing meaning from the human responses will be challenging, but how do you setup the program to deal with the "state" of the conversation?
It will be a one-to-one conversation between a human and the bot.
A: Imagine a neural network with parsing capabilities in each node or neuron. Depending on rules and parsing results, neurons fire. If certain neurons fire, you get a good idea about topic and semantic of the question and therefore can give a good answer.
Memory is done by keeping topics talked about in a session, adding to the firing for the next question, and therefore guiding the selection process of possible answers at the end.
Keep your rules and patterns in a knowledge base, but compile them into memory at start time, with a neuron per rule. You can engineer synapses using something like listeners or event functions.
A: I think you can look at the code for Kooky, and IIRC it also uses Markov Chains.
Also check out the kooky quotes, they were featured on Coding Horror not long ago and some are hilarious.
A: You probably want to look into Markov Chains as the basics for the bot AI. I wrote something a long time ago (the code to which I'm not proud of at all, and needs some mods to run on Python > 1.5) that may be a useful starting place for you: http://sourceforge.net/projects/benzo/
EDIT: Here's a minimal example in Python of a Markov Chain that accepts input from stdin and outputs text based on the probabilities of words succeeding one another in the input. It's optimized for IRC-style chat logs, but running any decent-sized text through it should demonstrate the concepts:
import random, sys

NONWORD = "\n"
STARTKEY = NONWORD, NONWORD
MAXGEN = 1000

class MarkovChainer(object):
    def __init__(self):
        self.state = dict()

    def input(self, input):
        word1, word2 = STARTKEY
        for word3 in input.split():
            self.state.setdefault((word1, word2), list()).append(word3)
            word1, word2 = word2, word3
        self.state.setdefault((word1, word2), list()).append(NONWORD)

    def output(self):
        output = list()
        word1, word2 = STARTKEY
        for i in range(MAXGEN):
            word3 = random.choice(self.state[(word1, word2)])
            if word3 == NONWORD:
                break
            output.append(word3)
            word1, word2 = word2, word3
        return " ".join(output)

if __name__ == "__main__":
    c = MarkovChainer()
    c.input(sys.stdin.read())
    print c.output()
It's pretty easy from here to plug in persistence and an IRC library and have the basis of the type of bot you're talking about.
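A minimal sketch of the persistence step mentioned above: pickle the chainer's `state` dictionary to disk between sessions so the bot keeps what it has learned. The file name and helper names here are my own, not from the original project:

```python
import os
import pickle

STATE_FILE = "markov_state.pkl"  # hypothetical location for the saved chain

def save_state(state, path=STATE_FILE):
    # Serialize the transition table so the bot "remembers" across runs.
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_state(path=STATE_FILE):
    # Load a previously saved transition table, or start from scratch.
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return {}
```

On startup you would seed `MarkovChainer.state` with `load_state()`, and call `save_state(c.state)` after each training pass.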
A: I think to start this project, it would be good to have a database of questions (organized as a tree, with one or more questions in every node).
These questions should be answered with "yes" or "no".
If the bot starts to question, it can start with any question from your database of questions marked as a start-question. The answer is the way to the next node in the tree.
Edit: Here is a simple one written in Ruby you can start with: rubyBOT
A: Folks have mentioned already that statefulness isn't a big component of typical chatbots:
*
*a pure Markov implementation may express a very loose sort of state if it is growing its lexicon and table in real time—earlier utterances by the human interlocutor may get regurgitated by chance later in the conversation—but the Markov model doesn't have any inherent mechanism for selecting or producing such responses.
*a parsing-based bot (e.g. ELIZA) generally attempts to respond to (some of the) semantic content of the most recent input from the user without significant regard for prior exchanges.
That said, you certainly can add some amount of state to a chatbot, regardless of the input-parsing and statement-synthesis model you're using. How to do that depends a lot on what you want to accomplish with your statefulness, and that's not really clear from your question. A couple general ideas, however:
*
*Create a keyword stack. As your human offers input, parse out keywords from their statements/questions and throw those keywords onto a stack of some sort. When your chatbot fails to come up with something compelling to respond to in the most recent input—or, perhaps, just at random, to mix things up—go back to your stack, grab a previous keyword, and use that to seed your next synthesis. For bonus points, have the bot explicitly acknowledge that it's going back to a previous subject, e.g. "Wait, HUMAN, earlier you mentioned foo. [Sentence seeded by foo]".
*Build RPG-like dialogue logic into the bot. As you're parsing human input, toggle flags for specific conversational prompts or content from the user and conditionally alter what the chatbot can talk about, or how it communicates. For example, a chatbot bristling (or scolding, or laughing) at foul language is fairly common; a chatbot that will get het up, and conditionally remain so until apologized to, would be an interesting stateful variation on this. Switch output to ALL CAPS, throw in confrontational rhetoric or demands or sobbing, etc.
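The keyword-stack idea above can be sketched in a few lines of Python; the stopword list and class names are illustrative, not from any particular library:

```python
STOPWORDS = {"the", "a", "an", "is", "are", "you", "i", "to", "of"}

class KeywordStack:
    # Remember content words from earlier turns so the bot can revive old topics.
    def __init__(self):
        self.stack = []

    def push_from(self, utterance):
        # Keep non-stopwords as candidate topics for later.
        for word in utterance.lower().split():
            word = word.strip(".,!?")
            if word and word not in STOPWORDS:
                self.stack.append(word)

    def revive_topic(self):
        # Pop the most recent stored keyword to seed the next response.
        return self.stack.pop() if self.stack else None

stack = KeywordStack()
stack.push_from("I went hiking in the mountains")
stack.push_from("My dog is afraid of thunder")
print("Wait, HUMAN, earlier you mentioned %s." % stack.revive_topic())
# Wait, HUMAN, earlier you mentioned thunder.
```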
Can you clarify a little what you want the state to help you accomplish?
A: A naive chatbot program: no parsing, no cleverness, just a training file and output.
It first trains itself on a text and then later uses the data from that training to generate responses to the interlocutor’s input. The training process creates a dictionary where each key is a word and the value is a list of all the words that follow that word sequentially anywhere in the training text. If a word features more than once in this list, it is proportionally more likely to be chosen by the bot; no need for probabilistic machinery, just do it with a list.
The bot chooses a random word from your input and generates a response by choosing another random word that has been seen to be a successor to its held word. It then repeats the process by finding a successor to that word in turn and carrying on iteratively until it thinks it has said enough. It reaches that conclusion by stopping at a word that was prior to a punctuation mark in the training text. It then returns to input mode again to let you respond, and so on.
It isn't very realistic, but I hereby challenge anyone to do better in 71 lines of code! This is a great challenge for any budding Pythonists, and I just wish I could open the challenge to a wider audience than the small number of visitors I get to this blog. To code a bot that is always guaranteed to be grammatical must surely be closer to several hundred lines; I simplified hugely by just trying to think of the simplest rule to give the computer a mere stab at having something to say.
Its responses are rather impressionistic, to say the least! Also, you have to put what you say in single quotes.
I used War and Peace for my “corpus” which took a couple of hours for the training run, use a shorter file if you are impatient…
here is the trainer
#lukebot-trainer.py
import pickle

b = open('war&peace.txt')
text = []
for line in b:
    for word in line.split():
        text.append(word)
b.close()

textset = list(set(text))
follow = {}
for l in range(len(textset)):
    working = []
    check = textset[l]
    for w in range(len(text) - 1):
        if check == text[w] and text[w][-1] not in '(),.?!':
            working.append(str(text[w + 1]))
    follow[check] = working

a = open('lexicon-luke', 'wb')
pickle.dump(follow, a, 2)
a.close()
here is the bot
#lukebot.py
import pickle, random

a = open('lexicon-luke', 'rb')
successorlist = pickle.load(a)
a.close()

def nextword(a):
    if a in successorlist:
        return random.choice(successorlist[a])
    else:
        return 'the'

speech = ''
while speech != 'quit':
    speech = raw_input('>')
    s = random.choice(speech.split())
    response = ''
    while True:
        neword = nextword(s)
        response += ' ' + neword
        s = neword
        if neword[-1] in ',?!.':
            break
    print response
You tend to get an uncanny feeling when it says something that seems partially to make sense.
A: I would suggest looking at Bayesian probabilities. Then just monitor the chat room for a period of time to create your probability tree.
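A hedged sketch of what that could look like in Python 3: tally word counts per response category from the monitored chat, then score new input with naive Bayes and add-one smoothing. The categories and training lines are made up for illustration:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesResponder:
    # Pick a response category for an utterance via naive Bayes over word counts.
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # category -> word frequencies
        self.category_counts = Counter()         # category -> training utterances seen

    def train(self, text, category):
        self.category_counts[category] += 1
        self.word_counts[category].update(text.lower().split())

    def classify(self, text):
        words = text.lower().split()
        vocab = {w for counts in self.word_counts.values() for w in counts}
        total_docs = sum(self.category_counts.values())
        best, best_score = None, float("-inf")
        for cat in self.category_counts:
            # log prior + log likelihoods with add-one smoothing
            score = math.log(self.category_counts[cat] / total_docs)
            cat_total = sum(self.word_counts[cat].values())
            for w in words:
                score += math.log((self.word_counts[cat][w] + 1) /
                                  (cat_total + len(vocab) + 1))
            if score > best_score:
                best, best_score = cat, score
        return best

bot = NaiveBayesResponder()
bot.train("hello hi hey", "greeting")
bot.train("bye goodbye see you later", "farewell")
print(bot.classify("hi there"))  # greeting
```

Each category would then map to a pool of canned responses; the longer you monitor the room, the better the word statistics get.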
A: I'm not sure this is what you're looking for, but there's an old program called ELIZA which could hold a conversation by taking what you said and spitting it back at you after performing some simple textual transformations.
If I remember correctly, many people were convinced that they were "talking" to a real person and had long elaborate conversations with it.
A: If you're just dabbling, I believe Pidgin allows you to script chat style behavior. Part of the framework probably tracks the state of who sent the message when, and you'd want to keep a log of your bot's internal state for each of the last N messages. Future state decisions could be hardcoded based on inspection of previous states and the content of the most recent few messages. Or you could do something like the Markov chains discussed and use it both for parsing and generating.
A: If you do not require a learning bot, using AIML (http://www.aiml.net/) will most likely produce the result you want, at least with respect to the bot parsing input and answering based on it.
You would reuse or create "brains" made of XML (in the AIML-format) and parse/run them in a program (parser). There are parsers made in several different languages to choose from, and as far as I can tell the code seems to be open source in most cases.
A: You can use ChatterBot and host it locally using flask-chatterbot-master.
Links:
*
*[ChatterBot Installation]
https://chatterbot.readthedocs.io/en/stable/setup.html
*[Host Locally using - flask-chatterbot-master]: https://github.com/chamkank/flask-chatterbot
Cheers,
Ratnakar
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: How to capitalize the first letter of each word in a string in SQL Server What’s the best way to capitalize the first letter of each word in a string in SQL Server.
A: From http://www.sql-server-helper.com/functions/initcap.aspx
CREATE FUNCTION [dbo].[InitCap] ( @InputString varchar(4000) )
RETURNS VARCHAR(4000)
AS
BEGIN
    DECLARE @Index INT
    DECLARE @Char CHAR(1)
    DECLARE @PrevChar CHAR(1)
    DECLARE @OutputString VARCHAR(4000) -- the original snippet declared this as VARCHAR(255), which truncates longer input

    SET @OutputString = LOWER(@InputString)
    SET @Index = 1

    WHILE @Index <= LEN(@InputString)
    BEGIN
        SET @Char = SUBSTRING(@InputString, @Index, 1)
        SET @PrevChar = CASE WHEN @Index = 1 THEN ' '
                             ELSE SUBSTRING(@InputString, @Index - 1, 1)
                        END

        IF @PrevChar IN (' ', ';', ':', '!', '?', ',', '.', '_', '-', '/', '&', '''', '(')
        BEGIN
            IF @PrevChar != '''' OR UPPER(@Char) != 'S'
                SET @OutputString = STUFF(@OutputString, @Index, 1, UPPER(@Char))
        END

        SET @Index = @Index + 1
    END

    RETURN @OutputString
END
GO
There is a simpler/smaller one here (but doesn't work if any row doesn't have spaces, "Invalid length parameter passed to the RIGHT function."):
http://www.devx.com/tips/Tip/17608
A: As a table-valued function:
CREATE FUNCTION dbo.InitCap(@v AS VARCHAR(MAX))
RETURNS TABLE
AS
RETURN
    WITH a AS (
        SELECT (
            SELECT UPPER(LEFT(value, 1)) + LOWER(SUBSTRING(value, 2, LEN(value))) AS 'data()'
            FROM string_split(@v, ' ')
            ORDER BY CHARINDEX(value, @v)
            FOR XML PATH (''), TYPE) ret)
    SELECT CAST(a.ret AS varchar(MAX)) ret FROM a
GO
Note that string_split requires COMPATIBILITY_LEVEL 130.
A: A variation of the one I've been using for quite some time is:
CREATE FUNCTION [widget].[properCase](@string varchar(8000)) RETURNS varchar(8000) AS
BEGIN
    SET @string = LOWER(@string)
    DECLARE @i INT
    SET @i = ASCII('a')
    WHILE @i <= ASCII('z')
    BEGIN
        SET @string = REPLACE(@string, ' ' + CHAR(@i), ' ' + CHAR(@i - 32))
        SET @i = @i + 1
    END
    SET @string = CHAR(ASCII(LEFT(@string, 1)) - 32) + RIGHT(@string, LEN(@string) - 1)
    RETURN @string
END
You can easily modify to handle characters after items other than spaces if you wanted to.
A: Another solution without using the loop - pure set-based approach with recursive CTE
create function [dbo].InitCap (@value varchar(max))
returns varchar(max) as
begin
    declare
        @separator char(1) = ' ',
        @result varchar(max) = '';

    with r as (
        select value, cast(null as varchar(max)) [x], cast('' as varchar(max)) [char], 0 [no]
        from (select rtrim(cast(@value as varchar(max))) [value]) as j
        union all
        select right(value, len(value) - case charindex(@separator, value) when 0 then len(value) else charindex(@separator, value) end) [value]
             , left(r.[value], case charindex(@separator, r.value) when 0 then len(r.value) else abs(charindex(@separator, r.[value]) - 1) end) [x]
             , left(r.[value], 1)
             , [no] + 1 [no]
        from r where value > '')
    select @result = @result +
        case
            when ascii([char]) between 97 and 122
                then stuff(x, 1, 1, char(ascii([char]) - 32))
            else x
        end + @separator
    from r where x is not null;

    set @result = rtrim(@result);
    return @result;
end
A: If you are looking for the answer to the same question in Oracle/PLSQL then you may use the function INITCAP. Below is an example for the attribute dname from a table department which has the values ('sales', 'management', 'production', 'development').
SQL> select INITCAP(dname) from department;
INITCAP(DNAME)
--------------------------------------------------
Sales
Management
Production
Development
A: ;WITH StudentList(Name) AS (
SELECT CONVERT(varchar(50), 'Carl-VAN')
UNION SELECT 'Dean o''brian'
UNION SELECT 'Andrew-le-Smith'
UNION SELECT 'Eddy thompson'
UNION SELECT 'BOBs-your-Uncle'
), Student AS (
SELECT CONVERT(varchar(50), UPPER(LEFT(Name, 1)) + LOWER(SUBSTRING(Name, 2, LEN(Name)))) Name,
pos = PATINDEX('%[-'' ]%', Name)
FROM StudentList
UNION ALL
SELECT CONVERT(varchar(50), LEFT(Name, pos) + UPPER(SUBSTRING(Name, pos + 1, 1)) + SUBSTRING(Name, pos + 2, LEN(Name))) Name,
pos = CASE WHEN PATINDEX('%[-'' ]%', RIGHT(Name, LEN(Name) - pos)) = 0 THEN 0 ELSE pos + PATINDEX('%[-'' ]%', RIGHT(Name, LEN(Name) - pos)) END
FROM Student
WHERE pos > 0
)
SELECT Name
FROM Student
WHERE pos = 0
ORDER BY Name
This will result in:
*
*Andrew-Le-Smith
*Bobs-Your-Uncle
*Carl-Van
*Dean O'Brian
*Eddy Thompson
Using a recursive CTE set based query should out perform a procedural while loop query.
Here I have also made my separator 3 different characters [-' ] instead of 1, for a more advanced example. Using PATINDEX as I have done allows me to look for many characters. You could also use CHARINDEX on a single character; that function accepts a third parameter, StartFromPosition, so I could further simplify the 2nd part of the recursion of the pos formula to (assuming a space): pos = CHARINDEX(' ', Name, pos + 1).
A: The suggested functions work fine; however, if you do not want to create any function, this is how I do it:
select ID,Name
,string_agg(concat(upper(substring(value,1,1)),lower(substring(value,2,len(value)-1))),' ') as ModifiedName
from Table_Customer
cross apply String_Split(replace(trim(Name),' ',' '),' ')
where Name is not null
group by ID,Name;
The above query splits the words on the space character (' '), creating a separate row for each substring, then converts the first letter of each substring to upper case and keeps the rest lower. The final step aggregates the strings back together, grouping by the key.
A: BEGIN
    DECLARE @string varchar(100) = 'asdsadsd asdad asd'
    DECLARE @ResultString varchar(200) = ''
    DECLARE @index int = 1
    DECLARE @flag bit = 0
    DECLARE @temp varchar(2) = ''

    WHILE (@index < LEN(@string) + 1)
    BEGIN
        SET @temp = SUBSTRING(@string, @index - 1, 1)
        --select @temp
        IF @temp = ' ' OR @index = 1
        BEGIN
            SET @ResultString = @ResultString + UPPER(SUBSTRING(@string, @index, 1))
        END
        ELSE
        BEGIN
            SET @ResultString = @ResultString + LOWER(SUBSTRING(@string, @index, 1))
        END
        SET @index = @index + 1 --increase the index
    END

    SELECT @ResultString
END
A: It can be as simple as this:
DECLARE @Name VARCHAR(500) = 'Roger';
SELECT @Name AS Name, UPPER(LEFT(@Name, 1)) + SUBSTRING(@Name, 2, LEN(@Name)) AS CapitalizedName;
A: Here is the simplest one-liner to do this:
SELECT LEFT(column, 1)+ lower(RIGHT(column, len(column)-1) ) FROM [tablename]
A: I was looking for the best way to capitalize, so I created a simple SQL script.
Usage: SELECT dbo.Capitalyze('this is a test with multiple spaces')
Result: "This Is A Test With Multiple Spaces"
CREATE FUNCTION Capitalyze(@input varchar(100))
returns varchar(100)
as
begin
    declare @index int = 0
    declare @char as varchar(1) = ' '
    declare @prevCharIsSpace as bit = 1
    declare @Result as varchar(100) = ''

    set @input = UPPER(LEFT(@input, 1)) + LOWER(SUBSTRING(@input, 2, LEN(@input)))
    set @index = PATINDEX('% _%', @input)
    if @index = 0
        set @index = len(@input)
    set @Result = substring(@input, 0, @index + 1)

    WHILE (@index < len(@input))
    BEGIN
        SET @index = @index + 1
        SET @char = substring(@input, @index, 1)
        if (@prevCharIsSpace = 1)
        begin
            set @char = UPPER(@char)
            if (@char = ' ')
                set @char = ''
        end
        if (@char = ' ')
            set @prevCharIsSpace = 1
        else
            set @prevCharIsSpace = 0
        set @Result = @Result + @char
        --print @Result
    END
    --print @Result
    return @Result
end
A: Here fname is the column name. If the value of fname is 'akhil', then UPPER(LEFT(fname, 1)) provides the capitalized first letter ('A') and the substring function SUBSTRING(fname, 2, LEN(fname)) provides the rest ('khil'); concatenating both with + gives the result ('Akhil').
select UPPER(left(fname,1))+SUBSTRING(fname,2,LEN(fname)) as fname
FROM [dbo].[akhil]
A: On SQL Server 2016+ using JSON which gives guaranteed order of the words:
CREATE FUNCTION [dbo].[InitCap](@Text NVARCHAR(MAX))
RETURNS NVARCHAR(MAX)
AS
BEGIN
    RETURN STUFF((
        SELECT ' ' + UPPER(LEFT(s.value, 1)) + LOWER(SUBSTRING(s.value, 2, LEN(s.value)))
        FROM OPENJSON('["' + REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(@Text,'\','\\'),'"','\"'),CHAR(9),'\t'),CHAR(10),'\n'),' ','","') + '"]') s
        ORDER BY s.[key]
        FOR XML PATH(''),TYPE).value('(./text())[1]','NVARCHAR(MAX)'),1,1,'');
END
A: GO
CREATE FUNCTION [dbo].[Capitalize](@text NVARCHAR(MAX)) RETURNS NVARCHAR(MAX) AS
BEGIN
    DECLARE @result NVARCHAR(MAX) = '';
    DECLARE @c NVARCHAR(1);
    DECLARE @i INT = 1;
    DECLARE @isPrevSpace BIT = 1;

    WHILE @i <= LEN(@text)
    BEGIN
        SET @c = SUBSTRING(@text, @i, 1);
        SET @result += IIF(@isPrevSpace = 1, UPPER(@c), LOWER(@c));
        SET @isPrevSpace = IIF(@c LIKE '[ -]', 1, 0);
        SET @i += 1;
    END
    RETURN @result;
END
GO

DECLARE @sentence NVARCHAR(100) = N'i-thINK-this soLUTION-works-LiKe-a charm';
PRINT dbo.Capitalize(@sentence);
-- I-Think-This Solution-Works-Like-A Charm
A: IF OBJECT_ID ('dbo.fnCapitalizeFirstLetterAndChangeDelimiter') IS NOT NULL
    DROP FUNCTION dbo.fnCapitalizeFirstLetterAndChangeDelimiter
GO

CREATE FUNCTION [dbo].[fnCapitalizeFirstLetterAndChangeDelimiter] (@string NVARCHAR(MAX), @delimiter NCHAR(1), @new_delimeter NCHAR(1))
RETURNS NVARCHAR(MAX)
AS
BEGIN
    DECLARE @result NVARCHAR(MAX)
    SELECT @result = '';
    IF (LEN(@string) > 0)
        DECLARE @curr INT
    DECLARE @next INT
    BEGIN
        SELECT @curr = 1
        SELECT @next = CHARINDEX(@delimiter, @string)
        WHILE (LEN(@string) > 0)
        BEGIN
            SELECT @result =
                @result +
                CASE WHEN LEN(@result) > 0 THEN @new_delimeter ELSE '' END +
                UPPER(SUBSTRING(@string, @curr, 1)) +
                CASE
                    WHEN @next <> 0
                        THEN LOWER(SUBSTRING(@string, @curr+1, @next-2))
                    ELSE LOWER(SUBSTRING(@string, @curr+1, LEN(@string)-@curr))
                END
            IF (@next > 0)
            BEGIN
                SELECT @string = SUBSTRING(@string, @next+1, LEN(@string)-@next)
                SELECT @next = CHARINDEX(@delimiter, @string)
            END
            ELSE
                SELECT @string = ''
        END
    END
    RETURN @result
END
GO
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62"
} |
Q: What's the best Django search app? I'm building a Django project that needs search functionality, and until there's a django.contrib.search, I have to choose a search app. So, which is the best? By "best" I mean...
*
*easy to install / set up
*has a Django- or at least Python-friendly API
*can perform reasonably complex searches
Here are some apps I've heard of, please suggest others if you know of any:
*
*djangosearch
*django-sphinx
I'd also like to avoid using a third-party search engine (like Google SiteSearch), because some of the data I'd like to index is for site members only and should not be public.
A: The google code page for djangosearch indicates that it is no longer under active development, and suggests haystack or solango.
A: I'd recommend Sphinx for full-text search and aggregation, and django-sphinx is good enough for production use. We found that Sphinx was the least resource-intensive and fastest way to index and search our documents and that django-sphinx was a nice wrapper on top of the sphinx client.
The group by aggregation is particularly nice, if for example you want to display how many documents with a certain tag or by a certain author (or both) matched a search. In memory attribute updates were convenient too, especially for removing deleted articles immediately.
A: Thanks Garth. I had seen that djangosearch wanted to become the official Django search, but I was hesitant to use it because I couldn't find any documentation! Luckily, there's a README in subversion that I hadn't seen before, and it makes the API look very cool:
# set up the model
class Event(models.Model):
    title = models.CharField(max_length=255)
    date = models.DateField()
    is_outdoors = models.BooleanField()

    index = djangosearch.ModelIndex(text=['title'],
                                    additional=['date', 'is_outdoors'])

# run a search
results = Event.index.search("django conference")
A: I just needed a very quick solution that was no-fuss for an internal app.
I found the article Adding search to Django in a snap, and that worked splendid for me!
Obviously it lacks the speed, scalability and features of the real projects like Haystack, but this one is easier to set up, and I don't really need anything else than keyword AND-search.
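That article's approach boils down to an inverted index with set intersection for the AND; here is a framework-free sketch of the same idea (the class and method names are mine, not from the article):

```python
class TinyIndex:
    # Minimal keyword AND-search: maps each word to the set of matching doc ids.
    def __init__(self):
        self.index = {}

    def add(self, doc_id, text):
        for word in text.lower().split():
            self.index.setdefault(word, set()).add(doc_id)

    def search(self, query):
        # Every term must match somewhere: intersect the posting sets.
        result = None
        for term in query.lower().split():
            postings = self.index.get(term, set())
            result = postings if result is None else result & postings
        return result or set()

idx = TinyIndex()
idx.add(1, "Django search in a snap")
idx.add(2, "Haystack search for Django")
print(idx.search("django search"))  # both docs
print(idx.search("django snap"))    # doc 1 only
```

In a Django project you would feed `add()` from the model fields you want searchable and refresh the index when objects are saved.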
A: You might want to consider letting Yahoo do all the hard work with their Build your own Search Service (BOSS). Here is a great blog post that walks you through the process:
http://www.peterkrantz.com/2008/yahoo-search-in-django/
A: It looks like everyone here missed django-xappy
After a quick evaluation of all existing search add-ons for Django, I found this one the most flexible and easiest to use. It's rough around the edges in a few places, but it's still the best way to use the power of the Xapian search engine inside Django projects.
A: You might want to look at Django Solr search (aka "Solango") which comes with some nice documentation to get you started...
A: Justin, I'd try djangosearch first: Jacob Kaplan-Moss (Django's lead developer) is working on it.
Potential hazards:
*
*The home page warns the API might not be entirely stable
Potential benefits:
*
*“The long term goal is for this to become django.contrib.search.”
A: I am searching for the same thing, as are a lot of other people. Let's hope that django.contrib.search will be added soon.
In the meantime, this is what I found:
*
*http://code.google.com/p/djangosearch/
*http://code.google.com/p/django-sphinx/
*http://code.google.com/p/djapian/
*http://code.google.com/p/django-search-lucene/
*http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/
To me, most look quite complicated and, frankly, a little daunting to implement.
I'd be interested to learn what you think of these.
A: Check out Haystack Search - a new model based search abstraction layer that currently supports Xapian, Solr and Whoosh. Looks like it's well supported and documented.
A: If you have large amount of data to be indexed or you expect high traffic, I'd suggest using some external search engine, like Solr. This way, you'll keep shared-nothing approach and be able to scale your site components independently.
A: I think I am going to have to give a shout out to Djapian.
It is rock-solid...just pull down a source distribution and peek inside. Top-notch code, though not very many comments.
It's still a young software project, but I think the Django community should throw its weight behind this one.
A: Thanks Joe,
We decided to go with Tsearch2 and a custom postgres adaptor. Tsearch2 does not need an extra process to run, which was convenient since we are on a WebFaction hosting with limited memory... It's not completely done yet, but seems to be a good solution...
A: I found Djoosh which relies on the pure-python external search engine Whoosh to work well with my 'Python' brain.
A: If you are willing to use a 3rd party search engine I can recommend Yahoo BOSS and django-bosssearch.
Yahoo BOSS is a paid service, but it saves you setting up and maintaining other search software on your server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "114"
} |
Q: PHP function argument error suppression, empty() / isset() emulation I'm pretty sure the answer to this question is no, but in case there's some PHP guru out there:
Is it possible to write a function such that invalid arguments or non-existent variables can be passed in and PHP will not raise an error, without the use of '@'?
Much like empty() and isset() do. You can pass in a variable you just made up and it won't error.
ex:
empty($someBogusVar); // no error
myHappyFunction($someBogusVar); // Php warning / notice
A: Summing up, the proper answer is no, you shouldn't (see caveat below).
There are workarounds already mentioned by many people in this thread, like using reference variables or isset() or empty() in conditions and suppressing notices in PHP configuration. That in addition to the obvious workaround, using @, which you don't want.
Summarizing an interesting comment discussion with Gerry: Passing the variable by reference is indeed valid if you check for the value of the variable inside the function and handle undefined or null cases properly. Just don't use reference passing as a way of shutting PHP up (this is where my original shouldn't points to).
A: You can do this using func_get_args like so:
error_reporting(E_ALL);
ini_set('display_errors', 1);

function defaultValue() {
    $args = func_get_args();
    foreach ($args as $arg) {
        if (!is_array($arg)) {
            $arg = array($arg);
        }
        foreach ($arg as $a) {
            if (!empty($a)) {
                return $a;
            }
        }
    }
    return false;
}

$var = 'bob';
echo defaultValue(compact('var'), 'alpha') . "\n"; // returns 'bob'
echo defaultValue(compact('var2'), 'alpha') . "\n"; // returns 'alpha'
echo defaultValue('alpha') . "\n"; //return
echo defaultValue() . "\n";
This func goes one step further and would give you the first non empty value of any number of args (you could always force it to only take up to two args but this look more useful to me like this).
EDIT: original version didn't use compact to try and make an array of args and STILL gave an error. Error reporting bumped up a notch and this new version with compact is a little less tidy, but still does the same thing and allows you to provide a default value for non existent vars.
A: There are valid cases where checking becomes cumbersome and unnessesary.
Therfore i've written this little magic function:
/**
 * Shortcut for getting a value from a possibly unset variable.
 * Normal:
 *   if (isset($_GET['foo']) && $_GET['foo'] == 'bar') {
 * Short:
 *   if (value($_GET['foo']) == 'bar') {
 *
 * @param mixed $variable
 * @return mixed Returns null if not set
 */
function value(&$variable) {
    if (isset($variable)) {
        return $variable;
    }
}
It doesn't require any changes to myHappyFunction().
You'll have to change
myHappyFunction($someBogusVar);
to
myHappyFunction(value($someBogusVar));
Stating your intent explicitly. which makes it good practice in my book.
A: You don't get any error when a variable is passed by reference (PHP will create a new variable silently):
function myHappyFunction(&$var)
{
}
But I recommend against abusing this for hiding programming errors.
A: No, because this isn't really anything to do with the function; the error is coming from attempting to de-reference a non-existent array key. You can change the warning level of your PHP setup to surpress these errors, but you're better off just not doing this.
Having said that, you could do something like
function safeLookup($array, $key)
{
    if (isset($array[$key]))
        return $array[$key];

    return 0;
}
And use it in place of array key lookup
defaultValue(safeLookup($foo, "bar"), "baz");
Now I need to take a shower :)
A:
is it possible to write a function in a way where invalid arguments or non existent variables can be passed in and php will not error without the use of '@'
Yes you can!
porneL is correct [edit:I don't have enough points to link to his answer or vote it up, but it's on this page]
He is also correct when he cautions "But I recommend against abusing this for hiding programming errors." however error suppression via the Error Control Operator (@) should also be avoided for this same reason.
I'm new to Stack Overflow, but I hope it's not common for an incorrect answer to be ranked the highest on a page while the correct answer receives no votes. :(
A:
@Brian: I use a ternary operation to do the check for me:
return $value ? $value : $default;
This returns either $value or $default, depending on the value of $value: if it is 0, false, empty or anything similar, the value in $default will be returned.
I'm more going for the challenge to emulate functions like empty() and isset()
A: I'm sure there could be a great discussion on ternary operators vs function calls. But the point of this question was to see if we can create a function that won't throw an error if a non-existent value is passed in without using the '@'
A: @Sean That was already answered by Brian
return isset($input) ? $input : $default;
A: Sean, you could do:
$result = ($func_result = doLargeIntenseFunction()) ? $func_result : 'no result';
EDIT:
I'm sure there could be a great discussion on ternary operators vs function calls. But the point of this question was to see if we can create a function that won't throw an error if a non-existent value is passed in without using the '@'
And I told you, check it with isset(). A ternary conditional's first part doesn't check null or not null, it checks true or false. If you try to check true or false on a null value in PHP, you get these warnings. isset() checks whether a variable or expression returns a null value or not, and it returns a boolean, which can be evaluated by the first part of your ternary without any errors.
A: While the answer to the original question is "no", there is an option no one has mentioned.
When you use the @ sign, all PHP is doing is overriding the error_reporting level and temporarily setting it to zero. You can use "ini_restore('error_reporting');" to set it back to whatever it was before the @ was used.
This was useful to me in the situation where I wanted to write a convenience function to check and see if a variable was set, and had some other properties as well, otherwise, return a default value. But, sending an unset variable through caused a PHP notice, so I used the @ to suppress that, but then set error_reporting back to the original value inside the function.
Something like:
$var = @foo($bar);

function foo($test_var)
{
    ini_restore('error_reporting');
    if (is_set($test_var) && strlen($test_var))
    {
        return $test_var;
    }
    else
    {
        return -1;
    }
}
So, in the case above, if $bar is not set, I won't get an error when I call foo() with a non-existent variable. However, I will get an error from within the function where I mistakenly typed is_set instead of isset.
This could be a useful option covering what the original question was asking in spirit, if not in actual fact.
A: If you simply add a default value to the parameter, you can skip it when calling the function. For example:
function myHappyFunction($paramName = "") { // note: "empty" is a reserved construct in PHP and cannot be a function name
    if (isset($paramName)) {
        // Code here
    }
    elseif (empty($paramName)) {
        // Code here
    }
}
A: With a single line, you can accomplish it: myHappyFunction($someBogusVar="");
I hope this is what you are looking for. If you read the php documentation, under default argument values, you can see that assigning a default value to an function's argument helps you prevent an error message when using functions.
In this example you can see the difference of using a default argument and it's advantages:
PHP code:
<?php
function test1($argument)
{
    echo $argument;
    echo "\n";
}

function test2($argument = "")
{
    echo $argument;
    echo "\n";
}

test1();
test1("Hello");
test1($argument);
$argument = "Hello world";
test1($argument);

test2();
test2("Hello");
test2($argument);
$argument = "Hello world";
test2($argument);
?>
Output for test1() lines:
Warning: Missing argument 1 for test1() .
Hello.
.
Hello world.
Output for test2() lines:
.
Hello.
Hello world.
This can also be used in combination to isset() and other functions to accomplish what you want.
A: And going further up the abstraction tree, what are you using this for?
You could either initialize those values in each class as appropriate or create a specific class containing all the default values and attributes, like:
class Configuration {
    private $configValues = array('cool'   => 'Defaultcoolval',
                                  'uncool' => 'Defuncoolval');

    public function setCool($val) {
        $this->configValues['cool'] = $val;
    }

    public function getCool() {
        return $this->configValues['cool'];
    }
}
The idea being that, when using defaultValue function everywhere up and down in your code, it will become a maintenance nightmare whenever you have to change a value, looking for all the places where you've put a defaultValue call. And it'll also probably lead you to repeat yourself, violating DRY.
Whereas this is a single place to store all those default values. You might be tempted to avoid creating those setters and getters, but they also help in maintenance, in case it becomes pertinent to do some modification of outputs or validation of inputs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How to jump to a class definition in CodeRush Just downloaded the CodeRush trial version and I can't easily find the one feature that I really wanted. I would like to be able to start typing a class name and to jump to its definition, sort of like the quick navigator but I want it to search in closed files within my solution as well as open ones. I know R# has that ability, I assume CodeRush does too.
A: 1) Ctrl + Shift + Q (this will bring up the Quick Nav)
2) Start typing the name of the Type, Variable, etc.
3) Hit Enter to select when the target shows in the top of the list
If the scope is not already set to "Solution" (you can tell via the drop-down on the right of the Quick Nav), you can hit Alt + Shift + S to set and it will save the state.
A: A quick remark to the excellent answer by Troy, if the version you downloaded is the newer one, 3.2, the Quick Nav has been remapped to Ctrl + Shift + Q.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Display a PDF in WPF Application Any ideas how to display a PDF file in a WPF Windows Application?
I am using the following code to run the browser but the Browser.Navigate method does not do anything!
WebBrowser browser = new WebBrowser();
browser.Navigate("http://www.google.com");
this.AddChild(browser); // this is the System.Windows.Window
A: Oops, this is for a WinForms app, not for WPF, but I will post it anyway.
try this
private AxAcroPDFLib.AxAcroPDF axAcroPDF1;
this.axAcroPDF1 = new AxAcroPDFLib.AxAcroPDF();
this.axAcroPDF1.Dock = System.Windows.Forms.DockStyle.Fill;
this.axAcroPDF1.Enabled = true;
this.axAcroPDF1.Name = "axAcroPDF1";
this.axAcroPDF1.OcxState = ((System.Windows.Forms.AxHost.State)(resources.GetObject("axAcroPDF1.OcxState")));
axAcroPDF1.LoadFile(DownloadedFullFileName);
axAcroPDF1.Visible = true;
A: Try MoonPdfPanel - A WPF-based PDF viewer control
http://www.codeproject.com/Articles/579878/MoonPdfPanel-A-WPF-based-PDF-viewer-control
GitHub: https://github.com/reliak/moonpdf
A: The following code expects Adobe Reader to be installed and the Pdf extension to be connected to this.
It simply runs it:
String fileName = "FileName.pdf";
System.Diagnostics.Process process = new System.Diagnostics.Process();
process.StartInfo.FileName = fileName;
process.Start();
process.WaitForExit();
A: Just use a frame and a webbrowser like so
Frame frame = new Frame();
WebBrowser browser = new WebBrowser();
browser.Navigate(new Uri(filename));
frame.Content = browser;
Then when you don't need it anymore do this to clean it up:
WebBrowser browser = frame.Content as WebBrowser;
browser.Dispose();
frame.Content = null;
If you don't clean it up then you might have memory leak problems depending on the version of .NET you're using. I saw bad memory leaks in .NET 3.5 if I didn't clean up.
A: You can get the Acrobat Reader control working in a WPF app by using the WindowsFormHost control. I have a blog post about it here:
http://hugeonion.com/2009/04/06/displaying-a-pdf-file-within-a-wpf-application/
I also have a 5 minute screencast of how I made it here:
http://www.screencast.com/t/JXRhGvzvB
A: You could simply host a Web Browser control on the form and use it to open the PDF.
There's a new native WPF "WebBrowser" control in .NET 3.51, or you could host the Windows.Forms browser in your WPF app.
A: Disclosure: Here is a commercial one and I work for this company.
I realize that an answer has already been accepted but the following does not require Adobe Reader/Acrobat and it is a WPF solution - as opposed to Winforms. I also realize this is an old question but it has just been updated so I guess it is still actual.
PDFRasterizer.NET 3.0 allows you to render to a WPF FixedDocument. It preserves all vector graphics (PDF graphics are converted to more or less equivalent WPF elements). This is probably closest to what you need.
using (FileStream file = new FileStream(path, FileMode.Open, FileAccess.Read))
{
pdfDoc = new Document(file);
ConvertToWpfOptions convertOptions = new ConvertToWpfOptions();
RenderSettings renderSettings = new RenderSettings();
...
FixedDocument wpfDoc = pdfDoc.ConvertToWpf(renderSettings, convertOptions, 0, 9, summary);
}
You can pass the wpfDoc to e.g. the WPF DocumentViewer to quickly implement a viewer.
A: You can also use FoxitReader. It's free and comes with an ActiveX control that registers in the web browsers (IE and others) after you install the FoxitReader application.
So after you install FoxitReader on the system put a WebBrowser Control and set its Source property to point to the file path of your PDF file.
A: Check this out: http://itextsharp.sourceforge.net/
You may have to use a WindowsFormsHost, but since it is open source, you might be able to make it a little more elegant in WPF.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: How to deal with arrays (declared on the stack) in C++? I have a class to parse a matrix that keeps the result in an array member:
class Parser
{
...
double matrix_[4][4];
};
The user of this class needs to call an API function (as in, a function I have no control over, so I can't just change its interface to make things work more easily) that looks like this:
void api_func(const double matrix[4][4]);
The only way I have come up with for the caller to pass the array result to the function is by making the member public:
void myfunc()
{
Parser parser;
...
api_func(parser.matrix_);
}
Is this the only way to do things? I'm astounded by how inflexible multidimensional arrays declared like this are. I thought matrix_ would essentially be the same as a double** and I could cast (safely) between the two. As it turns out, I can't even find an unsafe way to cast between the things. Say I add an accessor to the Parser class:
void* Parser::getMatrix()
{
return (void*)matrix_;
}
This will compile, but I can't use it, because there doesn't seem to be a way to cast back to the weirdo array type:
// A smorgasbord of syntax errors...
api_func((double[][])parser.getMatrix());
api_func((double[4][4])parser.getMatrix());
api_func((double**)parser.getMatrix()); // cast works but it's to the wrong type
The error is:
error C2440: 'type cast' : cannot convert from 'void *' to 'const double [4][4]'
...with an intriguing addendum:
There are no conversions to array types, although there are conversions to references or pointers to arrays
I can't determine how to cast to a reference or pointer to array either, albeit that it probably won't help me here.
To be sure, at this point the matter is purely academic, as the void* casts are hardly cleaner than a single class member left public!
A: Try this. It compiles cleanly on gcc 4.1.3:
typedef double FourSquare[4][4];
class Parser
{
private:
double matrix_[4][4];
public:
Parser()
{
for(int i=0; i<4; i++)
for(int j=0; j<4; j++)
matrix_[i][j] = i*j;
}
public:
const FourSquare& GetMatrix()
{
return matrix_;
}
};
void api_func( const double matrix[4][4] )
{
}
int main( int argc, char** argv )
{
Parser parser;
api_func( parser.GetMatrix() );
return 0;
}
A: I've used a union like this to pass around matrices in the past:
union matrix {
double dflat[16];
double dmatr[4][4];
};
Then pass a pointer in to your setter and copy the data into the matrix in your class.
There are ways of handling this otherwise (that are more generic), but this solution tends to be the cleanest in the end, in my experience.
A:
I thought matrix_ would essentially be the same as a double**
In C there are true multi-dimensional arrays, not arrays of pointers to arrays, so a double[4][4] is a contiguous array of four double[4] arrays, equivalent to a double[16], not a (double*)[4].
There are no conversions to array types, although there are conversions to references or pointers to arrays
Casting a value to a double[4][4] would attempt to construct one on the stack - equivalent to std::string(parser.getMatrix()) - except that the array doesn't supply a suitable constructor. You probably didn't want to do that, even if you could.
Since the type encodes the stride, you need a full type (double[][] won't do). You can reinterpret_cast the void* to a pointer-to-array type (double(*)[4][4]) and then take the reference. But it's easiest to typedef the matrix and return a reference of the correct type in the first place:
typedef double matrix_t[4][4];
class Parser
{
double matrix_[4][4];
public:
void* get_matrix () { return static_cast<void*>(matrix_); }
const matrix_t& get_matrix_ref () const { return matrix_; }
};
int main ()
{
Parser p;
matrix_t& data1 = *reinterpret_cast<matrix_t*>(p.get_matrix());
const matrix_t& data2 = p.get_matrix_ref();
}
A: To elaborate on the selected answer, observe this line
const matrix& getMatrix() const
This is great, you don't have to worry about pointers and casting. You're returning a reference to the underlying matrix object. IMHO references are one of the best features of C++, which I miss when coding in straight C.
If you're not familiar with the difference between references and pointers in C++, read this
At any rate, you do have to be aware that if the Parser object which actually owns the underlying matrix object goes out of scope, any code which tries to access the matrix via that reference will now be referencing an out-of-scope object, and you'll crash.
A: Here's a nice, clean way:
class Parser
{
public:
typedef double matrix[4][4];
// ...
const matrix& getMatrix() const
{
return matrix_;
}
// ...
private:
matrix matrix_;
};
Now you're working with a descriptive type name rather than an array, but since it's a typedef the compiler will still allow passing it to the unchangeable API function that takes the base type.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Is there a good WPF pivot control? Does anyone have any experience with a good wpf pivot table control?
A: DevExpress Grid control has good pivot functionality. Used it for Winforms a lot. I think the WPF version is also available.
A: The Xceed WPF Grid looks like it has a lot of great capabilities. We use their WinForms grid and it does a pretty good job. They have a demo app you can download that shows off the different ways you can use the grid.
A: VIBlend's data grid control has pivot table capabilities. I'm not sure if they have a WPF version but you can certainly host WinForms controls in WPF. Another alternative is to try hosting the MS Office OWC.
A: If possible, I also need to be able to drag column headers and make them row headers and have the data summarize and group accordingly. Something along the lines of old ActiveX DataDynamics Dynamicube.
http://www.datadynamics.com/Products/ProductOverview.aspx?Product=DC
Auto Summary, Filtering of the Data Items being used, etc. without having to write a lot of code to do it in a custom fashion.
A: Alternative:
Excel OWC is a great query and reporting tool if you are running OLAP. It's a little dated and poorly documented, but works well on the intranet and can squeak by over the net.
http://msdn.microsoft.com/en-us/magazine/cc164070.aspx
A: I would suggest the WPF Toolkit, which can be downloaded from CodePlex and includes a DataGrid. It is compatible with the latest WPF (.NET 3.5 SP1), is free, and has almost all the features for general purposes (and even more than that). But there are commercial vendors who have good grid controls (not really free), like Xceed, Infragistics, Component One, Telerik
A: Currently there are no WPF Pivot Grid controls. The vendor most likely to have one, DevExpress, does not yet have a WPF version of the XtraPivotGrid.
Your best bet is to use XtraPivotGrid hosted inside your WPF control.
The other thing you can do is to use another grid vendor and do the "pivoting" by using LINQ or by manipulating the DataTable manually.
A: DevExpress Pivot Grid for WPF is almost ready. It will be released in the first half of the year.
A: I am a consultant at Infragistics and I've been working exclusively the last few weeks with the Infragistics XamPivotGrid. We've been working to make this control fast and memory efficient. As a user and a developer, I am highly impressed with the usability of this control. You can check out this control in the WPF and the Silverlight NetAdvantage Data Visualization products at www.infragistics.com!
:-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Find checkout history for SVN working folder We have an intranet site backed by SVN, such that the site is a checkout out copy of the repository (working folder used only by IIS). Something on the site has been causing problems today, and I want to know how to find out what was checked out to that working folder in the last 48 hours.
Update: If there's an option I need to turn on to enable this in the future, what is it?
Also, as a corollary question, if I have to use the file creation time, how can I do that quickly in a recursive manner for a large folder?
If I have to check creation times, then this question will be helpful to the solution as well.
A: You could use creation-dates on the local files. You can't use modification-dates because Subversion sets those to last-changed upon checkout.
Also Subversion can log checkouts, but that's server-side
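If you go the creation-date route on a large folder, a small script is quicker than checking file properties by hand. A rough sketch in Python (the 48-hour window comes from the question; note that os.stat's st_ctime is the creation time on Windows but the inode-change time on Unix):

```python
import os
import time

def recently_created(root, hours=48):
    """Recursively list files under `root` created within the last `hours`."""
    cutoff = time.time() - hours * 3600
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # On Windows st_ctime is the creation time; on Unix it is the
            # inode-change time, which is the closest portable equivalent.
            if os.stat(path).st_ctime >= cutoff:
                hits.append(path)
    return sorted(hits)
```

You could then diff that list against the svn stat output to see which of those files Subversion also considers modified.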
A: All the code in the web folder should be backed by SVN commits, shouldn't it?
If this is the case you should easily be able to track the problem down just by looking through your SVN logs at the last few changes that got committed.
svn info will tell you which revision the working copy currently is at, so you know where to start looking
Once you track down the commit with the bug in it, you can use svn blame to find the person that did it, and explain to them what they overlooked and how they caused the bug. Then you can make them buy everyone lunch for screwing up the site.
If you have locally modified/added any files which aren't in SVN, then svn stat and svn diff will show you what those changes are, so you can figure out if they are causing the problem too. You should then revert those changes so your working copy is a clean checkout, or commit the changes into the repository.
There's nothing worse than trying to track down a bug in your code only to find out 3 hours later that the bug is not actually in any of your code, but in some stupid local tweak someone made in the working copy that never got committed :-(
A: Depending how you access your SVN repo - if you're accessing it as file:// URLs, I think you're out of luck. But if you're using svnserve, or one of the HTTP gateways, you should be able to check your server logs for access to the SVN urls.
A: I would run a svn st in the web folder (to find any files that are changed since the checkout) and compare that to the repository.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How can I return an anonymous type from a method? I have a Linq query that I want to call from multiple places:
var myData = from a in db.MyTable
where a.MyValue == "A"
select new {
a.Key,
a.MyValue
};
How can I create a method, put this code in it, and then call it?
public ??? GetSomeData()
{
// my Linq query
}
A: IQueryable
So your method declaration would look like
public IQueryable GetSomeData()
A: A generic method should give you intellisense:
public class MyType { public int Key { get; set; } public string Value { get; set; } }
public IQueryable<T> GetSomeData<T>() where T : MyType, new()
{ return from a in db.MyTable
where a.MyValue == "A"
select new T {Key=a.Key,Value=a.MyValue};
}
A: If you want to return, you need a type.
Instead of var, declare using IEnumerable<> and return that variable. Iterating through it actually executes the query.
A: IQueryable and IEnumerable both work. But you want to use a type-specific version, IQueryable<T> or IEnumerable<T>.
So you'll want to create a type to keep the data.
var myData = from a in db.MyTable
where a.MyValue == "A"
select new MyType
{
Key = a.Key,
Value = a.MyValue
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How do I convert a .docx to html using asp.net? Word 2007 saves its documents in .docx format which is really a zip file with a bunch of stuff in it including an xml file with the document.
I want to be able to take a .docx file and drop it into a folder in my asp.net web app and have the code open the .docx file and render the (xml part of the) document as a web page.
I've been searching the web for more information on this but so far haven't found much. My questions are:
*
*Would you (a) use XSLT to transform the XML to HTML, or (b) use xml manipulation libraries in .net (such as XDocument and XElement in 3.5) to convert to HTML or (c) other?
*Do you know of any open source libraries/projects that have done this that I could use as a starting point?
Thanks!
A: Try this post? I don't know but might be what you are looking for.
A: I wrote mammoth.js, which is a JavaScript library that converts docx files to HTML. If you want to do the rendering server-side in .NET, there is also a .NET version of Mammoth available on NuGet.
Mammoth tries to produce clean HTML by looking at semantic information -- for instance, mapping paragraph styles in Word (such as Heading 1) to appropriate tags and style in HTML/CSS (such as <h1>). If you want something that produces an exact visual copy, then Mammoth probably isn't for you. If you have something that's already well-structured and want to convert that to tidy HTML, Mammoth might do the trick.
A: Word 2007 has an API that you can use to convert to HTML. Here's a post that talks about it http://msdn.microsoft.com/en-us/magazine/cc163526.aspx. You can find documentation around the API, but I remember that there is a convert to HTML function in the API.
A: This code will help to convert a .docx file to text:
function read_file_docx($filename){
$striped_content = '';
$content = '';
if(!$filename || !file_exists($filename)) { echo "not success"; return false; } else { echo "success"; }
$zip = zip_open($filename);
if (!$zip || is_numeric($zip)) return false;
while ($zip_entry = zip_read($zip)) {
if (zip_entry_open($zip, $zip_entry) == FALSE) continue;
if (zip_entry_name($zip_entry) != "word/document.xml") continue;
$content .= zip_entry_read($zip_entry, zip_entry_filesize($zip_entry));
zip_entry_close($zip_entry);
}// end while
zip_close($zip);
//echo $content;
//echo "<hr>";
//file_put_contents('1.xml', $content);
$content = str_replace('</w:r></w:p></w:tc><w:tc>', " ", $content);
$content = str_replace('</w:r></w:p>', "\r\n", $content);
//header("Content-Type: plain/text");
$striped_content = strip_tags($content);
$striped_content = preg_replace("/[^a-zA-Z0-9\s\,\.\-\n\r\t@\/\_\(\)]/","",$striped_content);
echo nl2br($striped_content);
}
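The same approach carries over to any language with zip support; here is a comparable sketch in Python using only the standard library (like the PHP above, the tag stripping is crude and drops all formatting):

```python
import re
import zipfile

def read_docx_text(docx):
    """Pull the plain text out of a .docx by reading word/document.xml."""
    with zipfile.ZipFile(docx) as zf:  # accepts a path or a file object
        xml = zf.read("word/document.xml").decode("utf-8")
    xml = xml.replace("</w:p>", "\n")  # paragraph ends become newlines
    return re.sub(r"<[^>]+>", "", xml)  # drop every remaining tag
```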
A: I'm using Interop. It is somewhat problematic but works fine in most cases.
using System.Runtime.InteropServices;
using Microsoft.Office.Interop.Word;
This one returns the list of html converted documents' path
public List<string> GetHelpDocuments()
{
List<string> lstHtmlDocuments = new List<string>();
foreach (string _sourceFilePath in Directory.GetFiles(""))
{
string[] validExtensions = { ".doc", ".docx" };
if (validExtensions.Contains(System.IO.Path.GetExtension(_sourceFilePath)))
{
sourceFilePath = _sourceFilePath;
destinationFilePath = _sourceFilePath.Replace(System.IO.Path.GetExtension(_sourceFilePath), ".html");
if (System.IO.File.Exists(sourceFilePath))
{
//checking if the HTML format of the file already exists. if it does then is it the latest one?
if (System.IO.File.Exists(destinationFilePath))
{
if (System.IO.File.GetCreationTime(destinationFilePath) != System.IO.File.GetCreationTime(sourceFilePath))
{
System.IO.File.Delete(destinationFilePath);
ConvertToHTML();
}
}
else
{
ConvertToHTML();
}
lstHtmlDocuments.Add(destinationFilePath);
}
}
}
return lstHtmlDocuments;
}
And this one to convert doc to html.
private void ConvertToHtml()
{
IsError = false;
if (System.IO.File.Exists(sourceFilePath))
{
Microsoft.Office.Interop.Word.Application docApp = null;
string strExtension = System.IO.Path.GetExtension(sourceFilePath);
try
{
docApp = new Microsoft.Office.Interop.Word.Application();
docApp.Visible = true;
docApp.DisplayAlerts = WdAlertLevel.wdAlertsNone;
object fileFormat = WdSaveFormat.wdFormatHTML;
docApp.Application.Visible = true;
var doc = docApp.Documents.Open(sourceFilePath);
doc.SaveAs2(destinationFilePath, fileFormat);
}
catch
{
IsError = true;
}
finally
{
try
{
docApp.Quit(SaveChanges: false);
}
catch { }
finally
{
Process[] wProcess = Process.GetProcessesByName("WINWORD");
foreach (Process p in wProcess)
{
p.Kill();
}
}
Marshal.ReleaseComObject(docApp);
docApp = null;
GC.Collect();
}
}
}
Killing the Word process is not fun, but you can't leave it hanging there and blocking others, right?
In the web page I render the HTML to an iframe.
There is a dropdown which contains the list of help documents. Value is the path to the html version of it and text is name of the document.
private void BindHelpContents()
{
List<string> lstHelpDocuments = new List<string>();
HelpDocuments hDoc = new HelpDocuments(Server.MapPath("~/HelpDocx/docx/"));
lstHelpDocuments = hDoc.GetHelpDocuments();
int index = 1;
ddlHelpDocuments.Items.Insert(0, new ListItem { Value = "0", Text = "---Select Document---", Selected = true });
foreach (string strHelpDocument in lstHelpDocuments)
{
ddlHelpDocuments.Items.Insert(index, new ListItem { Value = strHelpDocument, Text = strHelpDocument.Split('\\')[strHelpDocument.Split('\\').Length - 1].Replace(".html", "") });
index++;
}
FetchDocuments();
}
on selected index changed, it is renedred to frame
protected void RenderHelpContents(object sender, EventArgs e)
{
try
{
if (ddlHelpDocuments.SelectedValue == "0") return;
string strHtml = ddlHelpDocuments.SelectedValue;
string newaspxpage = strHtml.Replace(Server.MapPath("~/"), "~/");
string pageVirtualPath = VirtualPathUtility.ToAbsolute(newaspxpage);//
documentholder.Attributes["src"] = pageVirtualPath;
}
catch
{
lblGError.Text = "Selected document doesn't exist, please refresh the page and try again. If that doesn't help, please contact Support";
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Where does RegexBuddy store its working data between uses? Ok, so I'm an idiot.
So I was working on a regex that took way too long to craft. After perfecting it, I upgraded my work machine with a blazing fast hard drive and realized that I never saved the regex anywhere and simply used RegexBuddy's autosave to store it. Dumb dumb dumb.
I sent a copy of the regex to a coworker but now he can't find it (or the record of our communication). My best hope of finding the regex is to find it in RegexBuddy on the old hard drive. RegexBuddy automatically saves whatever you were working on each time you close it. I've done some preliminary searches to try to determine where it actually saves that working data but I'm having no success.
This question is the result of my dumb behavior but I thought it was a good chance to finally ask a question here.
A: On my XP box, it was in the registry here:
HKEY_CURRENT_USER\Software\JGsoft\RegexBuddy3\History
There were two REG_BINARY keys called Action0 and Action1 that had hex data containing my two regexes from the history.
The test data that I was testing the regex against was here:
C:\Documents and Settings\<username>\Application Data\JGsoft\RegexBuddy 3
A: It depends on the OS, of course, but on Windows I would guess the application data directory. I can't remember the path on XP, but on Vista it's something like this:
C:\Users\ user name \AppData\
And then it would probably be here:
C:\Users\ user name \AppData\roaming
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: NullReferenceException on instantiated object? This is a segment of code from an app I've inherited, a user got a Yellow screen of death:
Object reference not set to an instance of an object
on the line:
bool l_Success ...
Now I'm 95% sure the faulty argument is ref l_Monitor which is very weird considering the object is instantiated a few lines before. Anyone have a clue why it would happen? Note that I have seen the same issue pop up in other places in the code.
IDMS.Monitor l_Monitor = new IDMS.Monitor();
l_Monitor.LogFile.Product_ID = "SE_WEB_APP";
if (m_PermType_RadioButtonList.SelectedIndex == -1) {
l_Monitor.LogFile.Log(
Nortel.IS.IDMS.LogFile.MessageTypes.ERROR,
"No permission type selected"
);
return;
}
bool l_Success = SE.UI.Utilities.GetPermissionList(
ref l_Monitor,
ref m_CPermissions_ListBox,
(int)this.ViewState["m_Account_Share_ID"],
(m_PermFolders_DropDownList.Enabled)
? m_PermFolders_DropDownList.SelectedItem.Value
: "-1",
(SE.Types.PermissionType)m_PermType_RadioButtonList.SelectedIndex,
(SE.Types.PermissionResource)m_PermResource_RadioButtonList.SelectedIndex);
A: You sure that one of the properties trying to be accessed on the l_Monitor instance isn't null?
A: Sprinkle in a few temporary variables for all the property queries on that (loooooongg) line. Run the debugger, check the values, and corner the little bug.
A: I'm inclined to agree with the others; it sounds like one of the parameters you are passing to SE.UI.Utilities.GetPermissionList is null, which is causing the exception. Your best bet is to fire up the debugger and check what the variables are before that code is called.
A: The NullReferenceException was actually thrown within a catch block, so the stack trace couldn't display that line of code and instead stopped at the caller.
It was indeed one of the properties of the l_Monitor instance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Database engines Comparison - Windows Mobile What are the different database options on Windows Mobile available?
I have used CEDB and EDB for linear dataset needs.
I have heard of SQL server 2005 Mobile edition. But what are the advantages over others (if there is any)
A: Also take a look at SQLite for Windows CE. There are also .NET bindings available to use it from the Compact Framework.
A: I've found both sqllite and codebase to be easy to implement and install. Easier (and more stable) than the Microsoft options, which seem to be in serious flux.
A: I think it's called SQL Server Compact now. The advantages are that it's syntax-compatible with full sql server in that any query you write for it is guaranteed to work on an equivalent SQL Server Express/Standard/Enterprise DB. The reverse isn't necessarily true, though.
A: Do you need SQL support? If not, look at a lightweight embeddable DBM-like solution. I've used Tokyo Cabinet for a number of embedded solutions where SQL wasn't necessary and have greatly enjoyed the speed and flexibility it provides. YMMV.
A: We use Sybase Ultralite. Before that we were using Codebase to connect to foxpro tables. I'm not a mobile developer but I understand we chose it over SQL Server for performance and memory usage. Also we found they had pretty good support when we found any issues.
A: I would avoid using CEDB any more, as I believe that WM7 is dropping support for it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Embed a File Chooser in a UserControl / Form I've inherited a desktop application which has a custom .NET file chooser that is embedded in a control, but it has some issues.
I'd like to replace it with a non-custom File Chooser (like the OpenFileDialog).
However, for a variety of reasons it needs to be embedded in the parent control not a popup dialog.
Is there a control I'm missing, or does MS only provide the popup dialog out of the box?
A: The .Net control is a thin wrapper for the common dialog built into windows, and that is a dialog. So there is no way to embed it as though it were a control.
A: Depending on your needs, you COULD abuse the web browser control to show local files and folders. It won't match all the functionality of the OpenFileDialog, but it could work.
Here's one that I remembered from way-back. The Shell Mega-Pack. It has ActiveX and .NET versions. It looks promising.
Alternatively, if you want to build your own, you could start here on CodeProject: A Windows Explorer in a User Control. That looks like a good start. Here's another one: An All VB.NET Explorer Tree Control with ImageList Management.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: In SQL Server is it possible to get "id" of a record when Insert is executed? In SQL Server 2005 I have an "id" field in a table that has the "Is Identity" property set to 'Yes'. So, when an Insert is executed on that table the "id" gets set automatically to the next incrementing integer. Is there an easy way when the Insert is executed to get what the "id" was set to without having to do a Select statement right after the Insert?
duplicate of:
Best way to get identity of inserted row?
A: In .Net at least, you can send multiple queries to the server in one go. I do this in my app:
command.CommandText = "INSERT INTO [Employee] (Name) VALUES (@Name); SELECT SCOPE_IDENTITY()";
int id = Convert.ToInt32(command.ExecuteScalar());
Works like a charm.
A: Scope_identity() is the preferred way, see: 6 Different Ways To Get The Current Identity Value
A: SCOPE_IDENTITY(); is your best bet. And if you are using .NET just pass an our parameter and check the value after the procedure is run.
CREATE PROCEDURE [dbo].[InsertProducts]
@id INT = NULL OUT,
@name VARCHAR(150) = NULL,
@desc VARCHAR(250) = NULL
AS
INSERT INTO dbo.Products
(Name,
Description)
VALUES
(@name,
@desc)
SET @id = SCOPE_IDENTITY();
A: If you're inserting multiple rows, the use of the OUTPUT and INSERTED.columnname clause on the insert statement is a simple way of getting all the ids into a temp table.
DECLARE @MyTableVar table( ID int,
Name varchar(50),
ModifiedDate datetime);
INSERT MyTable
OUTPUT INSERTED.ID, INSERTED.Name, INSERTED.ModifiedDate INTO @MyTableVar
SELECT someName, GetDate() from SomeTable
A: You have to select the scope_identity() function.
To do this from application code, I normally encapsulate this process in a stored procedure so it still looks like one query to my application.
A: I tend to prefer attaching a trigger to the table using enterprise manager. That way you don't need to worry about writing out extra sql statements in your code. Mine look something like this:
Create Trigger tblName
On dbo.tblName
For Insert
As
select new_id = @@IDENTITY
Then, from within your code, treat your insert statements like select statements- Just execute and evaluate the results. the "newID" column will contain the identity of the row you just created.
A: This is probably the best working solution I found for SQL Server..
Sql Server return the value of identity column after insert statement
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Why all the Linq To Entities Hate? I've noticed that there seems to be quite a bit of hostility towards Linq To Entities particularly from the Alt.Net folks. I understand the resistance to more "drag and drop" programming, but from my understanding, Linq To Entities doesn't require it.
We're currently using Linq to SQL, and we are using the DBML document to define it (once you get more than a dozen or so tables, the designer is pretty useless.)
So why wouldn't the same approach work for Linq To Entities?
A: Actually, once you start delving into it, LTE is completely useless for enterprise level frameworks. The fact that there is very little inheritance support (in LTS as well) makes for a lot of redundant code. Also, I will be moving back to LTS (Linq to SQL) because it actually allows you to define mappings via Attributes instead of a File. LTE only works with an external file.
A: I don't think it's a hate for the idea of it per se. It's just that people don't like the implementation of it.
http://efvote.wufoo.com/forms/ado-net-entity-framework-vote-of-no-confidence/
A: The Linq to Entity hate is much deserved. This product fails any purpose more complex than the lame demos Gu uses it for on his blog. EF is far from ready for prime time. Microsoft just can't get data right in the .BLOAT world; they seem to change the data paradigm every time the wind blows. FoxPro has been around for 20 years with the same basic data core. Given that SQL Server uses much of VFP's data technology, perhaps MSFT could learn a bit about manipulating data and data-centric languages from something that worked.
A: I am quite sold on the principles of Linq to Entities, and the Entity Framework in general, but I do have reservations about its current incarnation. I freely admit to having used it only in a self-educational and very small way, though. The level of flexibility doesn't seem to be there yet, but I'm sure it will come. I was told by one of the MS technology evangelists (great job title) that EF is the MS strategic choice for the future. Assuming this is the case, I can only see things getting better in this arena.
A: There might also be a bit of "second place" animosity as well. MS are very late to market with L2E, I myself became interested in ORM about three years ago or so and MS was nowhere to be seen at this point.
A lot of us have already spent the time learning another ORM (such as NHibernate) and are used to a certain level and type of functionality being available and I this isn't evident in L2E yet.
This "second place" animosity isn't old news to be honest I don't know why MS don't spend more time supporting solutions already in place, we've seen this all before with NAnt -> MSBuild and NUnit -> MsTest, it would save everyone a lot of time and effort if they just accepted one of the better and mature solutions and endeavoured to support that as opposed to brewing their own all the time.
A: I would add that LTE implementation of TPT inheritance is nothing short of criminal. See my question here.
And while I'm at it, I believe that the many published EF pundits are at least in part complicit. I have yet to find any published material on EF that cautions against queries of base types. If I were to try it on the model that I have, SQL Server simply gives up with the exception.
Some part of your SQL statement is
nested too deeply. Rewrite the query
or break it up into smaller queries.
I would love to rewrite the query, but LTE has absolved me of that burden. Thanks (^not)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: In Python, how can you easily retrieve sorted items from a dictionary? Dictionaries, unlike lists, are not ordered (and do not have a 'sort' method). Therefore, you cannot rely on getting the items back in the order in which they were added.
What is the easiest way to loop through a dictionary containing strings as the key value and retrieving them in ascending order by key?
For example, you had this:
d = {'b' : 'this is b', 'a': 'this is a' , 'c' : 'this is c'}
I want to print the associated values in the following sequence sorted by key:
this is a
this is b
this is c
A: This snippet will do it. If you're going to do it frequently, you might want to make a 'sortkeys' method to make it easier on the eyes.
keys = list(d.keys())
keys.sort()
for key in keys:
print d[key]
Edit: dF's solution is better -- I forgot all about sorted().
A: You can also sort a dictionary by value and control the sort order:
import operator
d = {'b' : 'this is 3', 'a': 'this is 2' , 'c' : 'this is 1'}
for key, value in sorted(d.iteritems(), key=operator.itemgetter(1), reverse=True):
print key, " ", value
Output:
b this is 3
a this is 2
c this is 1
A: Do you mean that you need the values sorted by the value of the key?
In that case, this should do it:
for key in sorted(d):
print d[key]
EDIT: changed to use sorted(d) instead of sorted(d.keys()), thanks Eli!
A: Or shorter,
for key, value in sorted(d.items()):
print value
A: >>> d = {'b' : 'this is b', 'a': 'this is a' , 'c' : 'this is c'}
>>> for k,v in sorted(d.items()):
... print v, k
...
this is a a
this is b b
this is c c
A: d = {'b' : 'this is b', 'a': 'this is a' , 'c' : 'this is c'}
ks = d.keys()
ks.sort()
for k in ks:
print "this is " + k
A: for key in sorted(d):
print d[key]
A: Do you mean "sorted" instead of "ordered"? It seems your question aims at sorting a dictionary and not at ordering it. If you do mean "ordered", you can use an OrderedDict from the collections module. Those dictionaries remember the order in which the key/value pairs were entered:
from collections import OrderedDict
Reference information: https://docs.python.org/2/library/collections.html#collections.OrderedDict
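If insertion order (rather than sorted order) is what you're after, here is a minimal sketch of OrderedDict, in modern Python syntax, using the question's example data:

```python
from collections import OrderedDict

# Same data as the question, inserted in this order: b, a, c
od = OrderedDict()
od['b'] = 'this is b'
od['a'] = 'this is a'
od['c'] = 'this is c'

# An OrderedDict remembers insertion order, not sorted order
assert list(od.keys()) == ['b', 'a', 'c']

# To get values in ascending key order you still sort explicitly
assert [od[k] for k in sorted(od)] == ['this is a', 'this is b', 'this is c']
```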
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How do I insert text into a textbox after popping up another window to request information? I have an asp.net web page written in C#.
Using some javascript I popup another .aspx page which has a few controls that are filled in and from which I create a small snippet of text.
When the user clicks OK on that dialog box I want to insert that piece of text into a textbox on the page that initial "popped up" the dialog/popup page.
I'm guessing that this will involve javascript which is not a strong point of mine.
How do I do this?
A: You will have to do something like:
parent.opener.document.getElementById('ParentTextBox').value = "New Text";
A: What you could do is create an ajax modal pop-up instead of a new window. The semantic and aesthetic value is greater not to mention the data-passing is much easier.
http://www.asp.net/ajax/ajaxcontroltoolkit/samples/modalpopup/modalpopup.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is the operator precedence order in Visual Basic 6.0? What is the operator precedence order in Visual Basic 6.0 (VB6)?
In particular, for the logical operators.
A: Use parentheses
EDIT: That's my advice for new code! But Oscar is reading someone else's code, so must figure it out somehow. I suggest the VBA manual topic Operator Precedence. VBA is 99% equivalent to VB6 - and expression evaluation is 100% equivalent. I have pasted the logical operator information here.
Logical operators are evaluated in the following order of precedence:
Not
And
Or
Xor
Eqv
Imp
The topic also explains precedence for comparison and arithmetic operators.
I would suggest that once you have figured out the precedence, you put in parentheses, unless there is some good reason not to edit the code.
A: Arithmetic Operation Precedence Order
*
*^
*- (unary negation)
**, /
*\
*Mod
*+, - (binary addition/subtraction)
*&
Comparison Operation Precedence Order
*
*=
*<>
*<
*>
*<=
*>=
*Like, Is
Logical Operation Precedence Order
*
*Not
*And
*Or
*Xor
*Eqv
*Imp
Source: Sams Teach Yourself Visual Basic 6 in 24 Hours — Appendix A: Operator Precedence
A: It depends on whether or not you're in the debugger. Really. Well, sort of.
Parentheses come first, of course. Then arithmetic (+, -, *, /, etc). Then comparisons (>, <, =, etc). Then the logical operators. The trick is that the order of execution within a given precedence level is not defined. Given the following expression:
If A < B And B < C Then
you are guaranteed the < inequality operators will both be evaluated before the logical And comparison. But you are not guaranteed which inequality comparison will be executed first.
IIRC, the debugger executes left to right, but the compiled application executes right to left. I could have them backwards (it's been a long time), but the important thing is they're different. The actual precedence doesn't change, but the order of execution might.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Algorithm to generate anagrams What would be the best strategy to generate anagrams?
An anagram is a type of word play, the result of rearranging the letters
of a word or phrase to produce a new word or phrase, using all the original
letters exactly once;
ex.
*
*Eleven plus two is anagram of Twelve plus one
*A decimal point is anagram of I'm a dot in place
*Astronomers is anagram of Moon starers
At first it looks straightforward: just jumble the letters and generate all possible combinations. But what would be an efficient approach to generate only the words in the dictionary?
I came across this page, Solving anagrams in Ruby.
But what are your ideas?
A: See this assignment from the University of Washington CSE department.
Basically, you have a data structure that just has the counts of each letter in a word (an array works for ascii, upgrade to a map if you want unicode support). You can subtract two of these letter sets; if a count is negative, you know one word can't be an anagram of another.
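A sketch of this counting idea in Python, using collections.Counter as the multiset (Counter subtraction keeps only positive counts, so an empty difference means no count went negative):

```python
from collections import Counter

def can_form(word, letters):
    # Counter subtraction drops non-positive counts, so an empty
    # difference means every letter of `word` (with its multiplicity)
    # is available in `letters` -- no count went negative.
    return not (Counter(word) - Counter(letters))

def is_anagram(a, b):
    # Full anagram: identical letter counts
    return Counter(a) == Counter(b)

assert is_anagram("pots", "stop")
assert can_form("moon", "astronomers")       # possible sub-word
assert not can_form("planet", "astronomers") # 'p' and 'l' unavailable
```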
A: Pre-process:
Build a trie with each leaf as a known word, keyed in alphabetical order.
At search time:
Consider the input string as a multiset. Find the first sub-word by traversing the index trie as in a depth-first search. At each branch you can ask, is letter x in the remainder of my input? If you have a good multiset representation, this should be a constant time query (basically).
Once you have the first sub-word, you can keep the remainder multiset and treat it as a new input to find the rest of that anagram (if any exists).
Augment this procedure with memoization for faster look-ups on common remainder multisets.
This is pretty fast - each trie traversal is guaranteed to give an actual subword, and each traversal takes linear time in the length of the subword (and subwords are usually pretty darn small, by coding standards). However, if you really want something even faster, you could include all n-grams in your pre-process, where an n-gram is any string of n words in a row. Of course, if W = #words, then you'll jump from index size O(W) to O(W^n). Maybe n = 2 is realistic, depending on the size of your dictionary.
A: Most of these answers are horribly inefficient and/or will only give one-word solutions (no spaces). My solution will handle any number of words and is very efficient.
What you want is a trie data structure. Here's a complete Python implementation. You just need a word list saved in a file named words.txt. You can try the Scrabble dictionary word list here:
http://www.isc.ro/lists/twl06.zip
MIN_WORD_SIZE = 4 # min size of a word in the output
class Node(object):
def __init__(self, letter='', final=False, depth=0):
self.letter = letter
self.final = final
self.depth = depth
self.children = {}
def add(self, letters):
node = self
for index, letter in enumerate(letters):
if letter not in node.children:
node.children[letter] = Node(letter, index==len(letters)-1, index+1)
node = node.children[letter]
def anagram(self, letters):
tiles = {}
for letter in letters:
tiles[letter] = tiles.get(letter, 0) + 1
min_length = len(letters)
return self._anagram(tiles, [], self, min_length)
def _anagram(self, tiles, path, root, min_length):
if self.final and self.depth >= MIN_WORD_SIZE:
word = ''.join(path)
length = len(word.replace(' ', ''))
if length >= min_length:
yield word
path.append(' ')
for word in root._anagram(tiles, path, root, min_length):
yield word
path.pop()
for letter, node in self.children.iteritems():
count = tiles.get(letter, 0)
if count == 0:
continue
tiles[letter] = count - 1
path.append(letter)
for word in node._anagram(tiles, path, root, min_length):
yield word
path.pop()
tiles[letter] = count
def load_dictionary(path):
result = Node()
for line in open(path, 'r'):
word = line.strip().lower()
result.add(word)
return result
def main():
print 'Loading word list.'
words = load_dictionary('words.txt')
while True:
letters = raw_input('Enter letters: ')
letters = letters.lower()
letters = letters.replace(' ', '')
if not letters:
break
count = 0
for word in words.anagram(letters):
print word
count += 1
print '%d results.' % count
if __name__ == '__main__':
main()
When you run the program, the words are loaded into a trie in memory. After that, just type in the letters you want to search with and it will print the results. It will only show results that use all of the input letters, nothing shorter.
It filters short words from the output, otherwise the number of results is huge. Feel free to tweak the MIN_WORD_SIZE setting. Keep in mind, just using "astronomers" as input gives 233,549 results if MIN_WORD_SIZE is 1. Perhaps you can find a shorter word list that only contains more common English words.
Also, the contraction "I'm" (from one of your examples) won't show up in the results unless you add "im" to the dictionary and set MIN_WORD_SIZE to 2.
The trick to getting multiple words is to jump back to the root node in the trie whenever you encounter a complete word in the search. Then you keep traversing the trie until all letters have been used.
A: One of the seminal works on programmatic anagrams was by Michael Morton (Mr. Machine Tool), using a tool called Ars Magna. Here is a light article based on his work.
A: So here's the working solution, in Java, that Jason Cohen suggested, and it performs somewhat better than the one using a trie. Below are some of the main points:
*
*Only load dictionary with the words that are subsets of given set of words
*Dictionary will be a hash of sorted words as key and set of actual words as values (as suggested by Jason)
*Iterate through each word from dictionary key and do a recursive forward lookup to see if any valid anagram is found for that key
*Only do forward lookup because, anagrams for all the words that have already been traversed, should have already been found
*Merge all the words associated with the keys. For example, if 'enlist' is the word for which anagrams are to be found and one of the sets of keys to merge is [ins] and [elt], and the actual words for key [ins] are [sin] and [ins], and for key [elt] is [let], then the final sets of merged words would be [sin, let] and [ins, let], which will be part of our final anagrams list
*Also to note that, this logic will only list unique set of words i.e. "eleven plus two" and "two plus eleven" would be same and only one of them would be listed in the output
Below is the main recursive code which finds the set of anagram keys:
// recursive function to find all the anagrams for charInventory characters
// starting with the word at dictionaryIndex in dictionary keyList
private Set<Set<String>> findAnagrams(int dictionaryIndex, char[] charInventory, List<String> keyList) {
// terminating condition if no words are found
if (dictionaryIndex >= keyList.size() || charInventory.length < minWordSize) {
return null;
}
String searchWord = keyList.get(dictionaryIndex);
char[] searchWordChars = searchWord.toCharArray();
// this is where you find the anagrams for whole word
if (AnagramSolverHelper.isEquivalent(searchWordChars, charInventory)) {
Set<Set<String>> anagramsSet = new HashSet<Set<String>>();
Set<String> anagramSet = new HashSet<String>();
anagramSet.add(searchWord);
anagramsSet.add(anagramSet);
return anagramsSet;
}
// this is where you find the anagrams with multiple words
if (AnagramSolverHelper.isSubset(searchWordChars, charInventory)) {
// update charInventory by removing the characters of the search
// word as it is subset of characters for the anagram search word
char[] newCharInventory = AnagramSolverHelper.setDifference(charInventory, searchWordChars);
if (newCharInventory.length >= minWordSize) {
Set<Set<String>> anagramsSet = new HashSet<Set<String>>();
for (int index = dictionaryIndex + 1; index < keyList.size(); index++) {
Set<Set<String>> searchWordAnagramsKeysSet = findAnagrams(index, newCharInventory, keyList);
if (searchWordAnagramsKeysSet != null) {
Set<Set<String>> mergedSets = mergeWordToSets(searchWord, searchWordAnagramsKeysSet);
anagramsSet.addAll(mergedSets);
}
}
return anagramsSet.isEmpty() ? null : anagramsSet;
}
}
// no anagrams found for current word
return null;
}
You can fork the repo from here and play with it. There are many optimizations that I might have missed. But the code works and does find all the anagrams.
A: And here is my novel solution.
Jon Bentley’s book Programming Pearls contains a problem about finding anagrams of words.
The statement:
Given a dictionary of english words, find all sets of anagrams. For
instance, “pots”, “stop” and “tops” are all anagrams of one another
because each can be formed by permuting the letters of the others.
I thought a bit and it came to me that the solution would be to obtain the signature of the word you’re searching and comparing it with all the words in the dictionary. All anagrams of a word should have the same signature. But how to achieve this? My idea was to use the Fundamental Theorem of Arithmetic:
The fundamental theorem of arithmetic states that
every positive integer (except the number 1) can be represented in
exactly one way apart from rearrangement as a product of one or more
primes
So the idea is to use an array of the first 26 prime numbers. Then for each letter in the word we get the corresponding prime number A = 2, B = 3, C = 5, D = 7 … and then we calculate the product of our input word. Next we do this for each word in the dictionary and if a word matches our input word, then we add it to the resulting list.
The performance is more or less acceptable. For a dictionary of 479828 words, it takes 160 ms to get all anagrams. This is roughly 0.0003 ms / word, or 0.3 microsecond / word. Algorithm’s complexity seems to be O(mn) or ~O(m) where m is the size of the dictionary and n is the length of the input word.
Here’s the code:
package com.vvirlan;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Scanner;
public class Words {
private int[] PRIMES = new int[] { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73,
79, 83, 89, 97, 101, 103, 107, 109, 113 };
public static void main(String[] args) {
Scanner s = new Scanner(System.in);
String word = "hello";
System.out.println("Please type a word:");
if (s.hasNext()) {
word = s.next();
}
Words w = new Words();
w.start(word);
}
private void start(String word) {
measureTime();
char[] letters = word.toUpperCase().toCharArray();
long searchProduct = calculateProduct(letters);
System.out.println(searchProduct);
try {
findByProduct(searchProduct);
} catch (Exception e) {
e.printStackTrace();
}
measureTime();
System.out.println(matchingWords);
System.out.println("Total time: " + time);
}
private List<String> matchingWords = new ArrayList<>();
private void findByProduct(long searchProduct) throws IOException {
File f = new File("/usr/share/dict/words");
FileReader fr = new FileReader(f);
BufferedReader br = new BufferedReader(fr);
String line = null;
while ((line = br.readLine()) != null) {
char[] letters = line.toUpperCase().toCharArray();
long p = calculateProduct(letters);
if (p == -1) {
continue;
}
if (p == searchProduct) {
matchingWords.add(line);
}
}
br.close();
}
private long calculateProduct(char[] letters) {
long result = 1L;
for (char c : letters) {
if (c < 65) {
return -1;
}
int pos = c - 65;
result *= PRIMES[pos];
}
return result;
}
private long time = 0L;
private void measureTime() {
long t = new Date().getTime();
if (time == 0L) {
time = t;
} else {
time = t - time;
}
}
}
A: For each word in the dictionary, sort the letters alphabetically. So "foobar" becomes "abfoor."
Then when the input anagram comes in, sort its letters too, then look it up. It's as fast as a hashtable lookup!
For multiple words, you could do combinations of the sorted letters, sorting as you go. Still much faster than generating all combinations.
(see comments for more optimizations and details)
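A minimal Python sketch of this signature approach, using a toy word list in place of a real dictionary file:

```python
from collections import defaultdict

# Toy word list; a real program would read a dictionary file
words = ["pots", "stop", "tops", "fade", "deaf", "hello"]

# Preprocess once: sorted letters -> all words with that signature
index = defaultdict(list)
for w in words:
    index[''.join(sorted(w))].append(w)

def anagrams(word):
    # Lookup is a single hash probe on the sorted signature
    return index.get(''.join(sorted(word)), [])

assert sorted(anagrams("spot")) == ["pots", "stop", "tops"]
assert anagrams("xyz") == []
```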
A: I've used the following way of computing anagrams a couple of month ago:
*
*Compute a "code" for each word in your dictionary: Create a lookup-table from letters in the alphabet to prime numbers, e.g. starting with ['a', 2] and ending with ['z', 101]. As a pre-processing step compute the code for each word in your dictionary by looking up the prime number for each letter it consists of in the lookup-table and multiply them together. For later lookup create a multimap of codes to words.
*Compute the code of your input word as outlined above.
*Compute inputCode % codeInDictionary for each code in the multimap. If the result is 0, the dictionary word's letters are all contained in your input (and if the two codes are equal, it's a full anagram), so you can look up the appropriate word. This divisibility test is what makes it work for 2- or more-word anagrams as well.
Hope that was helpful.
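A small Python sketch of the prime-signature idea (the letter-to-prime table is the one described above; the sample words are illustrative):

```python
# First 26 primes, one per letter a..z
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
          43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101]

def code(word):
    # By unique factorization, two words get the same product exactly
    # when they use the same letters with the same multiplicities.
    product = 1
    for c in word.lower():
        product *= PRIMES[ord(c) - ord('a')]
    return product

assert code("pots") == code("stop")             # anagrams share a code
assert code("moonstarers") % code("moon") == 0  # divisibility = sub-word
assert code("abc") != code("abd")
```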
A: The book Programming Pearls by Jon Bentley covers this kind of stuff quite nicely. A must-read.
A: How I see it:
you'd want to build a table that maps unordered sets of letters to lists of words, i.e. go through the dictionary so you'd wind up with, say
lettermap[set(a,e,d,f)] = { "deaf", "fade" }
then from your starting word, you find the set of letters:
astronomers => (a,e,m,n,o,o,r,r,s,s,t)
then loop through all the partitions of that set ( this might be the most technical part, just generating all the possible partitions), and look up the words for that set of letters.
edit: hmmm, this is pretty much what Jason Cohen posted.
edit: furthermore, the comments on the question mention generating "good" anagrams, like the examples :). after you build your list of all possible anagrams, run them through WordNet and find ones that are semantically close to the original phrase :)
A: A while ago I wrote a blog post about how to quickly find two-word anagrams. It works really fast: finding all 44 two-word anagrams for a word against a text file of more than 300,000 words (4 megabytes) takes only 0.6 seconds in a Ruby program.
Two Word Anagram Finder Algorithm (in Ruby)
It is possible to make the application faster when it is allowed to preprocess the wordlist into a large hash mapping from words sorted by letters to a list of words using these letters. This preprocessed data can be serialized and used from then on.
A: If I treat the dictionary as a hash map, where every word is unique and the key is a binary (or hex) representation of the word, then given a word I can find its meaning with O(1) complexity.
Now, if we have to generate all the valid anagrams, we need to verify whether a generated anagram is in the dictionary; if it is present, it's a valid one, otherwise we ignore it.
I will assume that there can be a word of max 100 characters(or more but there is a limit).
So any word we take it as a sequence of indexes like a word "hello" can be represented like
"1234".
Now the anagrams of "1234" are "1243", "1324", etc.
The only thing we need to do is to store all such combinations of indexes for a particular number of characters. This is an one time task.
And then words can be generated from the combinations by picking the characters from the index. Hence we get the anagrams.
To verify if the anagrams are valid or not, just index into the dictionary and validate.
The only thing that needs to be handled is duplicates. That can be done easily by comparing each candidate with the ones already searched for in the dictionary.
The solution emphasizes performance.
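As a sanity check of this permute-and-verify idea, here is a brute-force Python sketch against a toy dictionary; it is only feasible for short words, since n letters give n! permutations:

```python
from itertools import permutations

# Toy dictionary as a set for O(1) membership tests
dictionary = {"stop", "pots", "tops", "opts", "post", "spot"}

def valid_anagrams(word):
    # Brute force: try every permutation, keep only dictionary hits
    found = set()
    for p in permutations(word):
        candidate = ''.join(p)
        if candidate in dictionary and candidate != word:
            found.add(candidate)
    return found

assert valid_anagrams("stop") == {"pots", "tops", "opts", "post", "spot"}
```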
A: Off the top of my head, the solution that makes the most sense would be to pick a letter out of the input string randomly and filter the dictionary based on words that start with that. Then pick another, filter on the second letter, etc. In addition, filter out words that can't be made with the remaining text. Then when you hit the end of a word, insert a space and start it over with the remaining letters. You might also restrict words based on word type (e.g. you wouldn't have two verbs next to each other, you wouldn't have two articles next to each other, etc).
A: *
*As Jason suggested, prepare a dictionary making hashtable with key being word sorted alphabetically, and value word itself (you may have multiple values per key).
*Remove whitespace and sort your query before looking it up.
After this, you'd need to do some sort of a recursive, exhaustive search. Pseudo code is very roughly:
function FindWords(solutionList, wordsSoFar, sortedQuery)
// base case
if sortedQuery is empty
solutionList.Add(wordsSoFar)
return
// recursive case
// InitialStrings("abc") is {"a","ab","abc"}
foreach initialStr in InitialStrings(sortedQuery)
// Remaining letters after initialStr
sortedQueryRec := sortedQuery.Substring(initialStr.Length)
words := words matching initialStr in the dictionary
// Note that sometimes words list will be empty
foreach word in words
// Append should return a new list, not change wordsSoFar
wordsSoFarRec := Append(wordsSoFar, word)
FindWords(solutionList, wordsSoFarRec, sortedQueryRec)
In the end, you need to iterate through the solutionList, and print the words in each sublist with spaces between them. You might need to print all orderings for these cases (e.g. "I am Sam" and "Sam I am" are both solutions).
Of course, I didn't test this, and it's a brute force approach.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "56"
} |
Q: Unique key generation I looking for a way, specifically in PHP that I will be guaranteed to always get a unique key.
I have done the following:
strtolower(substr(crypt(time()), 0, 7));
But I have found that once in a while I end up with a duplicate key (rarely, but often enough).
I have also thought of doing:
strtolower(substr(crypt(uniqid(rand(), true)), 0, 7));
But according to the PHP website, if uniqid() is called twice in the same microsecond, it could generate the same key. I'm thinking that with the addition of rand() it rarely would, but it's still possible.
After the lines mentioned above I also remove characters such as L and O so it's less confusing for the user. This may be part of the cause of the duplicates, but it's still necessary.
One option I have a thought of is creating a website that will generate the key, storing it in a database, ensuring it's completely unique.
Any other thoughts? Are there any websites out there that already do this that have some kind of API or just return the key. I found http://userident.com but I'm not sure if the keys will be completely unique.
This needs to run in the background without any user input.
A: There are only 3 ways to generate unique values, whether they be passwords, user IDs, etc.:
*
*Use an effective GUID generator - these are long and cannot be shrunk. If you only use part you FAIL.
*At least part of the number is sequentially generated off of a single sequence. You can add fluff or encoding to make it look less sequential. Advantage is they start short - disadvantage is they require a single source. The work around for the single source limitation is to have numbered sources, so you include the [source #] + [seq #] and then each source can generate its own sequence.
*Generate them via some other means and then check them against the single history of previously generated values.
Any other method is not guaranteed. Keep in mind, fundamentally you are generating a binary number (it is a computer), but then you can encode it in Hexadecimal, Decimal, Base64, or a word list. Pick an encoding that fits your usage. Usually for user entered data you want some variation of Base32 (which you hinted at).
Note about GUIDS: They gain their strength of uniqueness from their length and the method used to generate them. Anything less than 128-bits is not secure. Beyond random number generation there are characteristics that go into a GUID to make it more unique. Keep in mind they are only practically unique, not completely unique. It is possible, although practically impossible to have a duplicate.
Updated Note about GUIDS: Since writing this I learned that many GUID generators use a cryptographically secure random number generator (difficult or impossible to predict the next number generated, and a not likely to repeat). There are actually 5 different UUID algorithms. Algorithm 4 is what Microsoft currently uses for the Windows GUID generation API. A GUID is Microsoft's implementation of the UUID standard.
Update: If you want 7 to 16 characters then you need to use either method 2 or 3.
Bottom line: Frankly there is no such thing as completely unique. Even if you went with a sequential generator you would eventually run out of storage using all the atoms in the universe, thus looping back on yourself and repeating. Your only hope would be the heat death of the universe before reaching that point.
Even the best random number generator has a possibility of repeating equal to the total size of the random number you are generating. Take a quarter for example. It is a completely random bit generator, and its odds of repeating are 1 in 2.
So it all comes down to your threshold of uniqueness. You can have 100% uniqueness in 8 digits for 1,099,511,627,776 numbers by using a sequence and then base32 encoding it. Any other method that does not involve checking against a list of past numbers only has odds equal to n/1,099,511,627,776 (where n=number of previous numbers generated) of not being unique.
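For illustration (in Python rather than PHP), here is a sketch of methods 1 and 2 above: a full UUID, and a sequential counter base32-encoded into 8 characters. The alphabet is an assumption: 32 characters with the confusable l, o, 0, and 1 removed, as the question suggests doing.

```python
import uuid

# Method 1: a full UUID -- unique by construction, but long
guid = str(uuid.uuid4())

# Method 2: a sequence, base32-encoded. 32**8 = 1,099,511,627,776
# distinct 8-character keys before wrapping.
ALPHABET = "abcdefghijkmnpqrstuvwxyz23456789"  # hypothetical: l/o/0/1 removed

def encode(n, width=8):
    out = []
    for _ in range(width):
        n, r = divmod(n, 32)
        out.append(ALPHABET[r])
    return ''.join(reversed(out))

keys = [encode(i) for i in range(1000)]
assert len(set(keys)) == 1000  # sequential input cannot collide
```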
A: Any algorithm will result in duplicates.
Therefore, might I suggest that you use your existing algorithm* and simply check for duplicates?
*Slight addition: If uniqid() can be non-unique based on time, also include a global counter that you increment after every invocation. That way something is different even in the same microsecond.
A: Without writing the code, my logic would be:
Generate a random string from whatever acceptable characters you like.
Then add half the date stamp (partial seconds and all) to the front and the other half to the end (or somewhere in the middle if you prefer).
Stay JOLLY!
H
A: If you use your original method, but add the username or email address in front of the password, it will always be unique if each user can only have 1 password.
A: You may be interested in this article which deals with the same issue: GUIDs are globally unique, but substrings of GUIDs aren't.
The goal of this algorithm is to use the combination of time and location ("space-time coordinates" for the relativity geeks out there) as the uniqueness key. However, timekeeping is not perfect, so there's a possibility that, for example, two GUIDs are generated in rapid succession from the same machine, so close to each other in time that the timestamp would be the same. That's where the uniquifier comes in.
A: I usually do it like this:
$this->password = '';
for($i=0; $i<10; $i++)
{
if($i%2 == 0)
$this->password .= chr(rand(65,90));
if($i%3 == 0)
$this->password .= chr(rand(97,122));
if($i%4 == 0)
$this->password .= chr(rand(48,57));
}
I suppose there are some theoretical holes but I've never had an issue with duplication. I usually use it for temporary passwords (like after a password reset) and it works well enough for that.
A: You might be interested in Steve Gibson's over-the-top-secure implementation of a password generator (no source, but he has a detailed description of how it works) at https://www.grc.com/passwords.htm.
The site creates huge 64-character passwords but, since they're completely random, you could easily take the first 8 (or however many) characters for a less secure but "as random as possible" password.
EDIT: from your later answers I see you need something more like a GUID than a password, so this probably isn't what you want...
A: As Frank Kreuger commented, go with a GUID generator.
Like this one
A: I'm still not seeing why the passwords have to be unique? What's the downside if 2 of your users have the same password?
This is assuming we're talking about passwords that are tied to userids, and not just unique identifiers. If that's what you're looking for, why not use GUIDs?
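If a GUID is acceptable, most platforms generate one in a single call; for instance (shown in Python purely as an illustration):

```python
import uuid

token = str(uuid.uuid4())    # random GUID, e.g. '3f2b0c1e-...' (different each run)
compact = uuid.uuid4().hex   # same idea, 32 hex characters with no hyphens
print(len(token))            # 36: 32 hex digits plus 4 hyphens
```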
A: I do believe that part of your issue is that you are trying to use a single function for two separate uses: passwords and transaction_id.
These really are two different problem areas, and it is best not to try to address them together.
A: I recently wanted a quick and simple random unique key so I did the following:
$ukey = dechex(time()) . crypt( time() . md5(microtime() + mt_rand(0, 100000)) );
So, basically, I get the unix time in seconds and add a random md5 string generated from time + random number. It's not the best, but for low frequency requests it is pretty good. It's fast and works.
I did a test where I'd generate thousands of keys and then look for repeats; at about 800 keys per second there were no repetitions, so not bad. I guess it totally depends on mt_rand().
I use it for a survey tracker where we get a submission rate of about 1000 surveys per minute... so for now (crosses fingers) there are no duplicates. Of course, the rate is not constant (we get the submissions at certain times of the day), so this is not foolproof nor the best solution... the tip is to use an incremental value as part of the key (in my case, I used time(), but it could be better).
A: Ignoring the crypting part, which does not have much to do with creating a unique value, I usually use this one:
function GetUniqueValue()
{
    static $counter = 0; // initialized only the first time the function is called
    return strtr(microtime(), array('.' => '', ' ' => '')) . $counter++;
}
When called in the same process, $counter is increased, so the value is always unique within that process.
When called from different processes, you must be really unlucky to get two microtime() calls with the same value; microtime() calls usually return different values even when called within the same script.
A: I usually do a random substring (randomizing how many chars, between 8 and 32, or fewer for user convenience) or the MD5 of some value I have gotten in, or the time, or some combination. For more randomness I take the MD5 of some value (say, the last name), concatenate that with the time, MD5 it again, then take a random substring. Yes, you could get equal passwords, but it's not very likely at all.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How do I implement a HTML cache for a PHP site? What is the best way of implementing a cache for a PHP site? Obviously, there are some things that shouldn't be cached (for example search queries), but I want to find a good solution that will make sure that I avoid the 'digg effect'.
I know there is WP-Cache for WordPress, but I'm writing a custom solution that isn't built on WP. I'm interested in either writing my own cache (if it's simple enough), or you could point me to a nice, light framework. I don't know much Apache though, so if it was a PHP framework then it would be a better fit.
Thanks.
A: The best way to go is to use a proxy cache (Squid, Varnish) and serve appropriate Cache-Control/Expires headers, along with ETags : see Mark Nottingham's Caching Tutorial for a full description of how caches work and how you can get the most performance out of a caching proxy.
Also check out memcached, and try to cache your database queries (or better yet, pre-rendered page fragments) in there.
A: If a proxy cache is out of the question, and you're serving complete HTML files, you'll get the best performance by bypassing PHP altogether. Study how WP Super Cache works.
Uncached pages are copied to a cache folder with a URL structure similar to your site's. On later requests, mod_rewrite notes the existence of the cached file and serves it instead. Other RewriteCond directives are used to make sure commenters/logged-in users see live PHP requests, but the majority of visitors will be served by Apache directly.
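The mechanism described above boils down to a rewrite rule along these lines. This is only a rough sketch, not WP Super Cache's actual rule set; the cache path and cookie names are assumptions:

```apache
RewriteEngine On
# Serve the pre-rendered copy only for anonymous GET requests...
RewriteCond %{REQUEST_METHOD} =GET
RewriteCond %{HTTP_COOKIE} !comment_author [NC]
RewriteCond %{HTTP_COOKIE} !logged_in [NC]
# ...and only if a cached file actually exists for this URL
RewriteCond %{DOCUMENT_ROOT}/cache%{REQUEST_URI}/index.html -f
RewriteRule .* /cache%{REQUEST_URI}/index.html [L]
```

Requests that fail any condition fall through to the normal PHP handler, which renders the page and writes the cached copy for the next visitor.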
A: I would recommend Memcached or APC. Both are in-memory caching solutions with dead-simple APIs and lots of libraries.
The trouble with these two is that you need to install them on your web server (or on another server, in Memcached's case).
APC
Pros:
*
*Simple
*Fast
*Speeds up PHP execution also
Cons:
*
*Doesn't work for distributed systems, each machine stores its cache locally
Memcached
Pros:
*
*Fast(ish)
*Can be installed on a separate server for all web servers to use
*Highly tested, developed at LiveJournal
*Used by all the big guys (Facebook, Yahoo, Mozilla)
Cons:
*
*Slower than APC
*Possible network latency
*Slightly more configuration
I wouldn't recommend writing your own, there are plenty out there. You could go with a disk-based cache if you can't install software on your webserver, but there are possible race issues to deal with. One request could be writing to the file while another is reading.
You actually could cache search queries, even for a few seconds to a minute. Unless your db is being updated more than a few times a second, some delay would be ok.
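One common way around the read/write race mentioned above is to write the cache entry to a temporary file and rename it into place; the rename is atomic on POSIX filesystems, so a reader sees either the old file or the new one, never a half-written one. A minimal sketch (Python used only for illustration):

```python
import os
import tempfile

def cache_put(path, content):
    """Atomically replace the cached file so readers never see partial writes."""
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)      # temp file on the same filesystem
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.replace(tmp, path)              # atomic rename over the target
    except Exception:
        os.unlink(tmp)
        raise

def cache_get(path):
    """Return the cached content, or None on a cache miss."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None
```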
A: The PHP Smarty template engine (http://www.smarty.net) includes a fairly advanced caching system.
You can find details in the caching section of the Smarty manual: http://www.smarty.net/manual/en/caching.php
A: You can use output buffering to selectively save parts of your output (those you want to cache) and display them to the next user if it hasn't been long enough. This way you're still rendering other parts of the page on-the-fly (e.g., customizable boxes, personal information).
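The idea is language-agnostic: capture what a fragment renders, store it with a timestamp, and replay it while it is still fresh. Here is a sketch (in Python rather than PHP, with a made-up fragment name and render function):

```python
import time

_cache = {}  # fragment name -> (rendered_html, stored_at)

def cached_fragment(name, render, ttl=60):
    """Return render()'s cached output if it is younger than ttl seconds."""
    hit = _cache.get(name)
    if hit is not None and time.time() - hit[1] < ttl:
        return hit[0]                   # fresh enough: replay the captured output
    html = render()                     # otherwise render on-the-fly...
    _cache[name] = (html, time.time())  # ...and capture it for the next user
    return html

# Personalized parts stay outside the cached call:
page = cached_fragment("sidebar", lambda: "<ul>...</ul>") + "<p>Hello, Alice</p>"
```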
A: You seem to be looking for a PHP cache framework.
I recommend the template system TinyButStrong, which comes with a very good CacheSystem plugin.
It's simple, light, customizable (you can cache whatever part of the html file you want), very powerful ^^
A: For simple caching of pages, or parts of pages, use the Pear::Cache_Lite class. I also use APC and memcache for different things, but the other answers I've seen so far are aimed at more complete and complex systems. If you just need to save some effort rebuilding a part of a page, Cache_Lite with a file-backed store is entirely sufficient, and very simple to implement.
A: Project Gazelle (an open source torrent site) provides a step by step guide on setting up Memcached on the site which you can easily use on any other website you might want to set up which will handle a lot of traffic.
Grab down the source and read the documentation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Files on Windows and Contiguous Sectors Is there a way to guarantee that a file on Windows (using the NTFS file system) will use contiguous sectors on the hard disk? In other words, the first chunk of the file will be stored in a certain sector, the second chunk of the file will be stored in the next sector, and so on.
I should add that I want to be able to create this file programmatically, so I'd rather not just ask the user to defrag their harddrive after creating this file. If there is a way to programmatically defrag just the file that I create, then that would be OK too.
A: I would start here:
http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx
and follow Mark's documentation of the defrag stuff:
http://technet.microsoft.com/en-us/sysinternals/bb897427.aspx
A: I know of no such guarantees.
But also keep in mind that NTFS "files" are comprised of multiple data streams. So you are actually looking for a way to guarantee that a stream is contiguous.
A: I believe there's no way to achieve that. You can only defragment the file after it's been written.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Refresh all update panels on the page? I have some code that modifies a value that several controls in other update panels are bound to. When this event handler fires, I'd like it to force the other update panels to refresh as well, so they can rebind.
Is this possible?
Edit:
To clarify, I have an update panel in one user control, the other update panels are in other user controls, so they can't see each other unless I were to expose some custom properties and use findControl etc etc...
Edit Again:
Here is what I came up with:
public void Update()
{
recursiveUpdate(this);
}
private void recursiveUpdate(Control control)
{
foreach (Control c in control.Controls)
{
if (c is UpdatePanel)
{
((UpdatePanel)c).Update();
}
if (c.HasControls())
{
recursiveUpdate(c);
}
}
}
I had 3 main user controls that were full of update panels, these controls were visible to the main page, so I added an Update method there that called Update on those three.
In my triggering control, I just cast this.Page into the currentpage and called Update.
Edit:
AARRGGGG!
While the update panels refresh, it does not call Page_Load within the subcontrols in them...What do I do now!
A: What about registering a PostBackTrigger (instead of an AsyncPostBackTrigger) that will refresh every panel when a specific event fires?
Or add the trigger that already refreshes some UpdatePanels to the other UpdatePanels as well.
A: You can set triggers on the events in the update panel you want updated or you can explicitly say updatepanel.update() in the code behind.
A: This is a good technique if you want to refresh updatepanel from client side Javascript.
A: Page.DataBind() kicks off a round of databind on all child controls. That'll cause Asp.Net to re-evaluate bind expressions on each control. If that's insufficient, you can add whatever logic you want to make sure gets kicked off to an OnDataBinding or OnDataBound override in your usercontrols. If you need to re-execute the Page_Load event, for example, you can simply call it in your overridden OnDataBound method.
A: Instantiate both view panels in a third presenter class, then let the presenter class control both views. For example:
You could just pass over what you need for the 'middle class' to do its job; for example, in your main you could have:
PresenterClass.AttachInterface(mIOrder);
PresenterClass.DoSomeCalulation();
PresenterClass.drawPanel(1);
PresenterClass.AttachInterface(mIOtherOrder);
PresenterClass.DoSomeCalulation();
PresenterClass.drawPanel(2);
Each view will have its own controls. There are so many different ways you could do this... alternatively, you could use the middle class to instantiate both your panels, then in each of your panels have 'get methods' to retrieve the data for processing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What are the advantages of VistaDB I have seen the references to VistaDB over the years and with tools like SQLite, Firebird, MS SQL et al. I have never had a reason to consider it.
What are the benefits of paying for VistaDB vs using another technology? Things I have thought of:
1. Compact Framework Support. SQLite+MSSQL support the CF.
2. Need migration path to a 'more robust' system. Firebird+MSSQL.
3. Need more advanced features such as triggers. Firebird+MSSQL
A: Well, the main thing is that it is pure managed code - for what that is worth; it works not only on your typical Windows machines running .NET, but works wherever you run the Compact Framework and even works on Mono. Here are some noteworthy bullet points from their homepage:
*
*Small < 1 MB footprint truly embedded ZeroClick
*Microsoft SQL Server 2005 compatible data types and T-SQL syntax
*None of the SQL CE limits
*Single user, multi user local or using shared network.
*Partially trusted shared hosting is no problem.
*Royalty-free distribution - single CPU deployment of SQL Server costs more than a site license of VistaDB!
One thing worth noting is that Rob Howard's company, telligent, uses it as the default database for their new CMS software, "Graffiti."
I have played with it here and there but have yet to build anything against it.
A: For me the most interesting feature of VistaDB is that it can be run in a Medium Trust environment, which makes it a perfect solution for creating small to medium .NET websites that can be deployed on a server by copying and pasting (xcopy deployment).
And almost all windows shared hosting providers (like GoDaddy) won't let you run your websites in Full Trust mode. And also won't install for you any 3rd party binaries into GAC like System.Data.SQLite.dll if you wish to use SQLite for example.
A: The VistaDB client runtime is free. The runtime will never "expire at 3am" as you put it. Only the developer tools are licensed in that manner. You need 1 license per developer, simple. We even offer a really inexpensive Lite version with no Visual Studio tools.
Some other benefits
100% managed code - there are no interop or other unmanaged calls in the engine. This is a big deal to some, and others couldn't care less.
No registry access required - Most other in proc databases require registry access to look for parent controls, or permissions. VistaDB only does what you tell it to do, and will even run in Medium Trust.
XCopy deployment for runtime and your database (single file). You can xcopy you application, the runtime, and your database and run. Nothing to install or configure on the machine, no special privileges needed (we can run in Medium Trust or higher).
Isolated storage - You can put your entire database into Isolated Storage and run it from there directly. This makes it very easy to build secure click once applications that write databases in a domain friendly way for corporate environments. There is no need to store the user data on a shared drive or worry about permission mapping.
CLR Triggers / CLR Procs - You can write CLR Code and use them as Triggers or Stored Procs. We have just recently introduced changes to make it even easier to maintain a single CLR Assembly that can run in both VistaDB and SQL Server 2005/2008.
T-SQL Procs - VistaDB T-SQL Procs are compatible with SQL Server 2005/2008. Any procedure that works in our engine will run in SQL Server. That does not mean anything that runs there will port to us. We are a subset of the functionality in SQL Server. But we are also the only way to run T-SQL Procs without SQL Server (SQL CE can't do it).
I personally think one of the biggest features is the ability to upsize to SQL Server later. All of the VistaDB types, syntax, and CLR Procs, T-SQL procs, etc all will run on SQL Server. (You can't take everything from SQL Server down to VistaDB though, it is a subset)
32/64 bit Deployment - VistaDB is a single assembly deployment that runs both 32 and 64 bit without changes. SQL CE requires two different runtimes depending upon the OS, and cannot run under IIS at all. Access has no 64 bit runtime, and the most recent 32 bit runtime can only be deployed through MSI. The 32 bit version of Windows has the runtime, the 64 bit version does not.
Relational Integrity - VistaDB also actually enforces your constraints and Foreign Keys. You can specify cascade update and delete operations. The person who commented that we are like SQLite is wrong in this regard: they parse constraints, but do not enforce them.
EDIT: They do have support for FK's now in SQLite. But they are not compiled in by default, and do not use the same syntax as SQL Server.
Medium Trust - The ability to run on a medium trust web server is another feature that many will not care about, but it is a big deal. Many third party controls can't even run in Medium Trust. We can run the complete engine within Medium Trust because of our commitment to 100% managed code and least permission required.
- Full disclosure - I am the owner of VistaDB so I may be biased. :)
A: I hadn't seen VistaDB before, it does look pretty cool.
Update: Received a comment from someone from VistaDB - their update model is only for getting new versions. Your old ones won't stop working if your license expires, which is good to know.
Keeping the original post here as IMHO the warning about expiring software licenses is still worth thinking about, even though VistaDB itself is fine.
It definitely seems 'more featureful' than SQLite, but I don't see anything there to justify the cost. The site seems to indicate that you can buy one license for $279, but it implies this is just a 1 year subscription. Would you have to then pay another $279 next year to stop your site falling over?
If so, remember to factor into the 'cost' how much inconvenience it's going to be when you get a call at 3am (Murphy's law: it's always 3am) from your panicking customers because their VistaDB license has expired :-(
I've had this experience personally with some expiring software, and it's never good. You can send your customers emails and messages and flash their entire screen blinking red saying "YOU NEED TO GET A NEW LICENSE BEFORE NEXT WEEK" and they'll still never do it, and you'll still get the pain at 3am when it does expire.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Kill a specific PHP script running on FastCGI / IIS? I'm a PHP developer, but honestly my knowledge of server management is somewhat lacking.
I fired off a script today that took a regrettably long time to run, and because it had an embedded call to ignore_user_abort(), pressing "stop" in the browser was obviously futile.
There was a time limit of 15 minutes enforced in the FastCGI settings, but this was still incessantly long since I really had to just wait it out before I could continue with anything else.
Is there some way to manage/kill whatever PHP scripts are being executed by FastCGI at any given moment?
A: Does the PHP process appear in Task Manager? I wonder what happens if you kill it there. Will IIS start another one to handle the next request?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How exactly do you configure httpOnly Cookies in ASP Classic? I'm looking to implement httpOnly in my legacy ASP classic sites.
Anyone knows how to do it?
A: If you run your Classic ASP web pages on IIS 7/7.5, then you can use the IIS URL Rewrite module to write a rule to make your cookies HTTPOnly.
Paste the following into the <system.webServer> section of your web.config:
<rewrite>
<outboundRules>
<rule name="Add HttpOnly" preCondition="No HttpOnly">
<match serverVariable="RESPONSE_Set_Cookie" pattern=".*" negate="false" />
<action type="Rewrite" value="{R:0}; HttpOnly" />
<conditions>
</conditions>
</rule>
<preConditions>
<preCondition name="No HttpOnly">
<add input="{RESPONSE_Set_Cookie}" pattern="." />
<add input="{RESPONSE_Set_Cookie}" pattern="; HttpOnly" negate="true" />
</preCondition>
</preConditions>
</outboundRules>
</rewrite>
See here for the details: http://forums.iis.net/t/1168473.aspx/1/10
For background, HTTPOnly cookies are required for PCI compliance reasons. The PCI standards folks (for credit card security) make you have HTTPOnly on your sessionID cookies at the very least in order to help prevent XSS attacks.
Also, at the current time (2-11-2013), all major browser support the HTTPOnly restriction on cookies. This includes current versions of IE, Firefox, Chrome and Safari.
See here for more info on how this works and support by various browser versions:
https://www.owasp.org/index.php/HTTPOnly
A: Response.AddHeader "Set-Cookie", "mycookie=yo; HttpOnly"
Other options like expires, path and secure can be also added in this way. I don't know of any magical way to change your whole cookies collection, but I could be wrong about that.
A: You need to append ";HttpOnly" to the Response cookies collection.
A: Response.AddHeader "Set-Cookie", ""&CStr(Request.ServerVariables("HTTP_COOKIE"))&";path=/;HttpOnly"&""
A: HttpOnly does very little to improve the security of web applications. For one thing, it only works in IE (Firefox "supports" it, but still discloses cookies to Javascript in some situations). For another thing, it only prevents a "drive-by" attack against your application; it does nothing to keep a cross-site scripting attack from resetting passwords, changing email addresses, or placing orders.
Should you use it? Sure. It's not going to hurt you. But there are 10 things you should be sure you're doing before you start messing with HttpOnly.
A: If you are using IIS7 or IIS7.5 and install the URL Rewriting add-in then you can do this. You can create a rewriting rule that adds "HttpOnly" to any out going "Set-Cookie" headers. Paste the following into the <system.webServer> section of your web.config. I then used Fiddler to prove the output.
Regards, Jeremy
<rewrite>
<outboundRules>
<rule name="Add HttpOnly" preCondition="No HttpOnly">
<match serverVariable="RESPONSE_Set_Cookie" pattern=".*" negate="false" />
<action type="Rewrite" value="{R:0}; HttpOnly" />
<conditions>
</conditions>
</rule>
<preConditions>
<preCondition name="No HttpOnly">
<add input="{RESPONSE_Set_Cookie}" pattern="." />
<add input="{RESPONSE_Set_Cookie}" pattern="; HttpOnly" negate="true" />
</preCondition>
</preConditions>
</outboundRules>
</rewrite>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: asp consuming a web service, what to do with the recordset object? Currently I run a classic (old) ASP webpage with the recordset object used directly, in bad old spaghetti-code fashion.
I'm thinking of implementing a data layer in asp.net as a web service to improve manageability. This is also a first step towards upgrading the website to asp.net.
The site itself remains ASP for the moment...
Can anybody recommend a good way of replacing the recordset object type with a web service compatible type (like an array or something)?
What do I replace the code below with?
set objRS = oConn.execute(SQL)
while not objRS.eof
...
name = Cstr(objRS(1))
...
wend
And what can multiple recordsets be replaced with?
I'm talking:
set objRS = objRs.nextRecordset
Anybody went through this and can recommend?
@AdditionalInfo - you asked for it :-)
Let me start at the beginning.
Existing Situation is:
I have an old ASP website with classic hierarchical content (header, section, subsection, content) pulled out of the database via stored procedures; the content pages are in the database too (a link to an html file).
Now the bad thing is, there is ASP code everywhere, spread over many .asp files, all doing their own database connections, reading and writing (you have to register for content). Recently we had problems with SQL injection attacks, so I was called in to fix it.
I could go and change all the .asp pages to prevent SQL injection, but that would be madness. So I thought: build a data layer, with all pages using this layer to access the database. One place to fix and update db access code.
Coming to that decision, I thought the asp.net upgrade isn't far away, so why not start using asp.net for the data layer? This way it can be re-used when upgrading the site.
That brings me to the questions above!
A: First, my favorite advice of this week: do not treat your Web Service as if it was a local object or you are going to pay a very hefty performance price. Essentially, don't do things like this in your web application:
MyDataWebService ws = new MyDataWebService();
foreach(DataItem item in myData)
{
ws.Insert(item);
}
You should always prefer to minimize calls to your Web Service (and SQL):
MyDataWebService ws = new MyDataWebService();
ws.Insert(myData); // Let the web service process the whole set at once.
Now, as far as the data type to use for your web service calls, you basically have two choices:
*
*DataSet
*Everything else (Array)
Most collections returned from a web service (like a List<MyData>) actually convert to an Array during the Web Service invocation. Remember that Web Services don't return objects (data + behavior) but just data structures (or a sequence of). Therefore, there is little distinction between a List and an Array.
DataSets are more complex classes; they use their own custom serializer and pretty much get fully recreated in the calling application. There is a cost in performance to be paid for using DataSets like that, so I don't usually recommend it for most scenarios. Using arrays to pass data back and forth tends to be more efficient, and quite frankly it's easier to do.
Your case is a bit different; because you are converting an existing site that already uses ADO, an ADO.NET DataSet might be your best updgrade path. ADO.NET and ADO are similar enough that a straight update might be easier that way. It kind of depends how your web site is built.
For the last part of your question, DataSets do support multiple recordsets similar to ADO's Recordset. They are called DataTables. Every DataSet has at least one DataTable and you can read them in any order.
Good luck.
A: I'd suggest using the XmlHttp class in your ASP code.
Assuming you have an ASMX web service similar to this, in MyService.asmx:
[WebMethod]
public string HelloWorld()
{
return "Hello World";
}
You could call it in ASP something like this:
Dim xhr
Set xhr = server.CreateObject("MSXML2.XMLHTTP")
xhr.Open "POST", "/MyService.asmx/HelloWorld", false
xhr.SetRequestHeader "content-type", "application/x-www-form-urlencoded"
xhr.Send
Response.Write(xhr.ResponseText)
ResponseText would be an XML response of:
<string>Hello World</string>
Assuming your service returned a collection of data, you could iterate over it using XPath or any other XML processing technique/library.
Googling around about MSXML2 will probably answer any specific questions you have, since it's specific to ASP classic.
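Parsing the ResponseText is then just a matter of loading it into an XML DOM and walking the nodes. The same idea in Python, with a hypothetical response for a method returning a collection of strings:

```python
import xml.etree.ElementTree as ET

# Hypothetical response from a web method that returns a collection of strings:
response = "<ArrayOfString><string>alpha</string><string>beta</string></ArrayOfString>"

# Walk the child elements and pull out the text of each one
names = [node.text for node in ET.fromstring(response).findall("string")]
print(names)  # ['alpha', 'beta']
```

In classic ASP the equivalent is loading xhr.ResponseXML and using selectNodes with an XPath expression.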
A: Instead of thinking in layers, why not try taking vertical slices through the application and converting those to .net. That way you will get entire features coded in .net instead of disjoint parts. What is the business value in replacing perfectly working code without improving the user experience or adding features?
You might also consider the trade-off of performance you are going to give up with a Web Service over direct ado calls. Web Services are a good solution to the problem of multiple disjoint applications/teams accessing a common schema; they do not make a single isolated application more maintainable, only slower and more complex.
A: If you wanted to stick with Classic ASP then I would suggest creating a database-handling object via ASP classes, then just using that object to do your recordset creation. This would centralize your database handling code and make it so that you only have to handle SQL injection attacks in a single location.
A simple example.
Class clsDatabase
Private Sub Class_Initialize()
If Session("Debug") Then Response.Write "Database Initialized<br />"
End Sub
Private Sub Class_Terminate()
If Session("Debug") Then Response.Write "Database Terminated<br />"
End Sub
Public Function Run(SQL)
Set RS = CreateObject("ADODB.Recordset")
RS.CursorLocation = adUseClient
RS.Open SQLValidate(SQL), Application("Data"), adOpenKeyset, adLockReadOnly, adCmdText
Set Run = RS
Set RS = nothing
End Function
Public Function SQLValidate(SQL)
SQLValidate = SQL
SQLValidate = Replace(SQLValidate, "--", "", 1, -1, 1)
SQLValidate = Replace(SQLValidate, ";", "", 1, -1, 1)
SQLValidate = Replace(SQLValidate, "SP_", "", 1, -1, 1)
SQLValidate = Replace(SQLValidate, "@@", "", 1, -1, 1)
SQLValidate = Replace(SQLValidate, " DECLARE", "", 1, -1, 1)
SQLValidate = Replace(SQLValidate, "EXEC", "", 1, -1, 1)
SQLValidate = Replace(SQLValidate, " DROP", "", 1, -1, 1)
SQLValidate = Replace(SQLValidate, " CREATE", "", 1, -1, 1)
SQLValidate = Replace(SQLValidate, " GRANT", "", 1, -1, 1)
SQLValidate = Replace(SQLValidate, " XP_", "", 1, -1, 1)
SQLValidate = Replace(SQLValidate, "CHAR(124)", "", 1, -1, 1)
End Function
End Class
Then to use this you would change your calls to:
Set oData = new clsDatabase
Set Recordset = oData.Run("SELECT field FROM table WHERE something = another")
Set oData = nothing
Of course you can expand the basic class to handle parameterized stored procedures or whatnot, and more validations, etc.
A: Another alternative is to use COM Interop to create an assembly in .NET that is callable from classic ASP.
To create a COM Interop assembly from Visual Studio (e.g. Microsoft Visual C# 2005 Express Edition):
*
*Create a new Class Library project
*Open the project properties
*
*Under Application select Assembly Information... and enable "Make assembly COM-Visible"
*Under Signing enable Sign the assembly and create or select an existing strong name key file
*Write and build the library
*
*COM Interop classes must have a default constructor and only non-static classes and methods are published
*Copy the .dll to the desired folder/machine
*Register the .dll for COM using RegAsm
For example (adjust as necessary):
"C:\Windows\Microsoft.NET\Framework\v2.0.50727\RegAsm.exe" "C:\path\to\assembly.dll" /tlb /codebase
*
*Call the assembly from ASP
For example (adjust as necessary):
Dim obj, returnValue
Set obj = Server.CreateObject("MyProject.MyClass")
returnValue = obj.DoSomething(param1, param2)
Note:
*
*the assembly must be re-registered via RegAsm when it's updated
See also:
*
*Exposing .NET Framework Components to COM (MSDN)
*CodeProject: Call a .NET component from an ASP Page
A: Sql injection should be handled by using parametrized sql queries. Not only will this eliminate the security risk but it will significantly speed up your database performance because it will be able to reuse an execution plan instead of recalcing it every time. The suggestion to handle it through string replacements is foolish. VB is terrible at handling strings and those "replace" statements will be extremely costly in performance and memory (also, you actually only need to handle the ' character anyway)
Moving code to .net doesn't make it better. Having db code in your pages isn't bad, especially if you are talking about a small site with only a couple of devs. Thousands of sites use that technique to process bazillions of dollars in transactions. Now, unparameterized dynamic sql is bad and you should work to eliminate that, but that doesn't require a rewrite of the app or .net to do it. I'm always curious why people see .net as a de facto improvement to their app. Most of the bad code and bad habits that existed in the COM model just propagate forward during a conversion.
You either need to make a commitment to creating a truly cohesive, minimally coupled, OO design; or just keep what you have going because it isn't all that bad.
A: Too bad I did not see this question in 2008.
To me it looks like your site is using the Justa framework.
A simple way is to modify the Justa code so that search submissions and data inputs are urlencoded.
I did it and it works perfectly for me.
The rest of the code is secure enough to prevent any type of SQL injection or other attempt to get into the database.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Automatic code quality tool for Ruby? One thing I really miss about Java is the tool support. FindBugs, Checkstyle and PMD made for a holy trinity of code quality metrics and automatic bug checking.
Is there anything that will check for simple bugs and / or style violations of Ruby code? Bonus points if I can adapt it for frameworks such as Rails so that Rails idioms are adhered to.
A: Saikuro and Flog can be good for getting a basic idea of code complexity. You can also use a tool like rcov to look at your test coverage.
There is a plugin for Rails projects that combine all those metrics into a single rake task. It is called metric_fu.
A: Projects I've found and tested recently:
*
*https://github.com/railsbp/rails_best_practices
*
*Seems to work, and gives sensible warnings
*https://github.com/simplabs/excellent
*
*Works, but quite a few false positives
*https://github.com/troessner/reek
*
*I disagree with most of the warnings from this tool, but it works
*http://www.cs.umd.edu/projects/PL/druby/
*
*This requires ocaml; I haven't tried it, but it looks like it might be good
*http://roodi.rubyforge.org/
*
*does not appear to be Ruby 1.9 compatible
*https://github.com/gdb/ruby-static-checker
*
*Is broken for me, and only catches name errors, so unit tests should cover that.
*https://github.com/michaeledgar/laser
*
*Doesn't compile for me
A: You might want to try out RuboCop. It is a Ruby code style checker based on the Ruby Style Guide. It's maintained pretty actively and it's based on standard Ruby tooling (like the ripper library). It works well with Ruby 1.9 and 2.0 and has great Emacs integration. I hope you'll find it useful!
A: Dust looks like it can help you find unused and useless code, which seems like it sort-of fits what you're after.
I'm not aware of any other such tools.
This problem is vastly harder to address in ruby than it is in java - you'll note that all those java tools brand themselves as using 'static analysis' of the code.
Static analysis of ruby code often isn't possible, because there isn't anything static that you can analyze (methods often get created at runtime and so on)
At any rate, some of these things are unneeded in ruby because the language builds them in.
For example, you don't need a coding standard to enforce that your classes are all NamedLikeThis because the code won't work if they aren't.
P.S. I have to add the standard disclaimer that those kind of tools can often be a bit of a red herring. You can spend all day making your code adhere to what the tool thinks it should be, and end up with more bugs than you started with.
IMHO the best solution is to write your code fluently so you can read it more easily. No amount of static analysis is going to be as good as a human reading code which clearly states what it is meant to do. Being able to do this is where ruby is light-years ahead of many other languages. I personally would recommend you aim your efforts at learning how to write more fluently, and educating your team about such things, than spending time on static analysis.
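To make "static analysis" concrete (sketched here in Python rather than Ruby, purely as an illustration of what tools in this category do under the hood): a checker parses source into a syntax tree and flags patterns without ever running the code. A toy "Long Method" check might look like this:

```python
import ast

SOURCE = '''
def short():
    return 1

def long_function():
    a = 1
    b = 2
    c = 3
    d = 4
    e = 5
    return a + b + c + d + e
'''

def functions_longer_than(source, max_lines):
    """Return names of functions whose body extends more than max_lines
    lines past the def line (a toy 'Long Method' smell check)."""
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # end_lineno of the last body statement minus the def line
            # gives the function's span in source lines.
            length = node.body[-1].end_lineno - node.lineno
            if length > max_lines:
                offenders.append(node.name)
    return offenders

print(functions_longer_than(SOURCE, 4))  # ['long_function']
```

The Ruby tools discussed above work on the same principle, which is exactly why the dynamic features of Ruby (methods defined at runtime, etc.) make their job harder.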
A: Another nice tool, although in early stages according to the author, is reek:
http://reek.rubyforge.org/
reek currently includes very naive checks for the following code smells:
*
*Long Method
*Large Class
*Feature Envy
*Uncommunicative Name
*Long Parameter List
*Utility Function
*Nested Iterators
*Control Couple
*Duplication
Personally I think it still has too much false positives, but just looking at the output in some of my code helped me rethink some decisions about code style and architecture.
A: Code Climate is a SaaS tool that integrates through git and automatically "grades" your code. It notifies you via various channels if there is a sudden drop in quality. Nice UI as well.
A: I've recently started looking for something like this for Ruby. What I've run across so far:
*
*Saikuro
*Roodi
*Flog
These might be places to start. Unfortunately I haven't used any of the three enough yet to offer a good opinion.
A: I didn't see this question when it was asked, but a blog post I did might help as well. In it I cover a bunch of Ruby tools and specifically cover 4 code quality tools...
*
*Roodi
*Dust
*Flog
*Saikuro
It might also be worth checking out Towelie and Flay
http://devver.wordpress.com/2008/10/03/ruby-tools-roundup/
Now we have combined a lot of tools into a single Ruby code quality and metrics monitoring tool called Caliper. This might fit your needs well. It tracks various quality metrics over the life of a project.
Caliper - improve your Ruby code
A: There is also excellent. I haven't tried it yet, but it too looks promising.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: OpenGL and Flickering When objects from a CallList intersect the near plane I get a flicker. What can I do?
I'm using OpenGL and SDL.
Yes, it is double-buffered.
A: It sounds like you're getting z-fighting.
"Z-fighting is a phenomenon in 3D rendering that occurs when two or more primitives have similar values in the z-buffer, and is particularly prevalent with coplanar polygons. The effect causes pseudo-random pixels to be rendered with the color of one polygon or another in a non-deterministic manner, varying as the scene is animated, causing one polygon to "win" the z test, then another, and so on."
(From wikipedia)
You can get more information about the problem in the OpenGL FAQ.
glPolygonOffset might help, but you can also get yourself into trouble with it. Tom Forsyth has a good explanation in his FAQ. Note: it talks about ZBIAS, but that's just the DirectX equivalent.
A: The problem was that my rotation function had some floating point errors which screwed up my model_view matrix.
None of you could have guessed it, sorry for the waste of your time.
Although I don't think that moving the near plane should even be considered a solution to this kind of problem; usually something else is wrong, because OpenGL does support polygon intersection with the near plane.
A: Try putting the near clipping plane a little bit further away: for example, with gluPerspective it is the third parameter, zNear.
http://www.opengl.org/documentation/specs/man_pages/hardcopy/GL/html/glu/perspective.html
A: Ah, you meant the near plane. :)
Well...another thing when drawing polygons in the same plane is to use glPolygonOffset
From the description
glPolygonOffset is useful for rendering hidden-line images,
for applying decals to surfaces, and for rendering solids
with highlighted edges.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: JVM choices on Windows Mobile What are the JVM implementations available on Windows Mobile?
Esmertec JBed is the one on my WinMo phone.
Wondering how many other JVM vendors are in this zone. Are there any comparison or benchmarking data available?
A: JVM Choices for Windows CE in general (including Pocket PC and Windows Mobile):
*
*CrE-ME
*Mysaifu
*Skelmir CEEJ
If you're looking to have a common code base between WinMo and Symbian, you might also look at Red Five Labs. They have a Symbian runtime that allows you to run Compact Framework apps, so you could have a CF codebase that works on both. I evaluated the early betas of Red Five's offering, but haven't used it since, so I can't attest to the quality or coverage.
A: The reason was to have the same codebase in WinMo and Symbian.
My personal preference would be to have native solution on both. But that would mean, developing & maintaining two set of code bases. And the management does not prefer that for some reason ;)
A: There are 2 JVMs for WinMo, Mysaifu for J2SE and IBM WebSphere Everyplace Micro Environment for J2ME.
A: Prakash, since your goal is to have a common code base between J2ME and WinMo handsets, check out alcheMo. alcheMo is not a JVM, but a fully automated J2ME to native Win32 WinMo porting solution.
A: Even when running a little late on this question, and just for completeness' sake: JBlend (micro) is the Java environment for Windows Mobile 6.1 and 6.5 which is used e.g. on HTC devices. It supports CLDC and MIDP 1.0 and 2.0.
A: phoneME (java.net page) seems to be a another choice. It is recommended by the LWUIT FAQ.
A: This is not really an answer, but wouldn't it make more sense to target your software at the .NET Compact Framework if you're developing for WinMo?
A: HP also had a JVM called HP Chai on their old models of Pocket PC
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: ASP.NET Web Application Build Output - How do I include all deployment files? When I build my ASP.NET web application I get a .dll file with the code for the website in it (which is great) but the website also needs all the .aspx files and friends, and these need to be placed in the correct directory structure. How can I get this all in one directory as the result of each build? Trying to pick the right files out of the source directory is a pain.
The end result should be xcopy deployable.
Update: I don't want to have to manually use the Publish command which I'm aware of. I want the full set of files required by the application to be the build output - this means I also get the full set of files in one place from running MSBuild.
A: One solution appears to be Web Deployment Projects (WDPs), an add-on for Visual Studio (and msbuild) available that builds a web project to a directory and can optionally merge assemblies and alter the web.config file. The output of building a WDP is all the files necessary to deploy the site in one directory.
More information about Web Deployment Projects:
*
*Announcement on webdevtools MSDN blog for WDP 2008
*ScottGu introduction to WDP 2005
The only disadvantage to this solution is the requirement on an add-on which must be available on the build machine. Still, it's good enough for now!
A: ASP.NET doesn't have real xcopy deployment for new sites. It depends on having a virtual directory/Application in IIS. However, once that virtual directory is created you can use xcopy for updates.
A: You can use Publish Web Site. If you want to automate your deployment, you need to use some kind of script.
A: Have you tried using the aspnet_compiler.exe in your .NET Framework directory? I'm pretty sure you can create a "deploy ready" version of a web application or web site.
A: The _CopyWebApplication target on MSBuild will do exactly what you need. The catch is that only the main assembly will be copied to the bin folder and that's why a copy task is needed to also copy any other file on the bin folder.
I was trying to post the sample script as part of this post but wasn't able to.
Please take a look at this article on my blog that describes how to create a MSBuild script similar to the one you need.
A: Have you tried right clicking the website in Solution Explorer and clicking 'Publish Website'?
A: Build --> Publish
A dialog box will appear that will guide you through the process.
A: For the automated building you describe in the update, I would recommend you look into MSBuild and CruiseControl.NET
A: It depends on how complicated a solution you need; you could just use a script and Jenkins, for example. You can use MSBuild with Jenkins for deploying to IIS, and once you have Jenkins, other tools are pretty easy to connect to it later on. But if you just want to build, use a script that Jenkins executes on every build that uses MSDeploy, and it will work great.
This is how I do it, just to give you a feeling:
SonarQube uses Gallio, Gendarme, FxCop, StyleCop, NDepend and PartCover to get your metrics, and all this is pretty straightforward since SonarQube does it automatically without much configuration.
Here is Jenkins, which builds and gets Sonar metrics, and another job for deploying automatically to IIS. I use a simple one-line script that calls MSBuild with the URL, password and user.
And SonarQube shows all metrics for my project. This is a simple MVC4 app, but it works great!
If you want more information I can provide you with a good guide.
This whole setup uses MSBuild to build and deploy the apps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do I maintain position of a DragPanelExtender across postbacks? I already found this article:
http://www.dotnetcurry.com/ShowArticle.aspx?ID=181&AspxAutoDetectCookieSupport=1
But I've got a different situation. I am embedding some hiddenFields inside of the master page and trying to store the position of the dragPanel in those.
I am using javascript to store the position of the dragPanel and then when the user clicks on a link, the new page is loaded, but the dragPanel is reset into the starting position.
Is there any easy way to do this?
Pseudocode:
**this is in MasterPage.master**
function pageLoad()
{
// call the savePanelPosition when the panel is moved
$find('DragP1').add_move(savePanelPosition);
var elem = $get("<%=HiddenField1.ClientID%>");
if(elem.value != "0")
{
var temp = new Array();
temp = elem.value.split(';');
// set the position of the panel manually with the retrieve value
$find('<%=Panel1_DragPanelExtender.BehaviorID%>').set_location(new
Sys.UI.Point(parseInt(temp[0]),parseInt(temp[1])));
}
}
function savePanelPosition()
{
var elem = $find('DragP1').get_element();
var loc = $common.getLocation(elem);
var elem1 = $get("<%=HiddenField1.ClientID%>");
// store the value in the hidden field
elem1.value = loc.x + ';' + loc.y;
}
<asp:Button ID="Button1" runat="server" Text="Button"/>
<asp:HiddenField ID="HiddenField1" runat="server" Value="0" />
However, HiddenField is not visible in the redirected page, foo.aspx
A: Rather than storing the position information in a hidden field, store it in a cookie. The information is small, so it will have minimal effect on the page load performance.
A: OK, so I got the drag stuff to work; it saves to a database and all, and comes up fine on my 1600x1050 monitor. BUT WAIT! I bring up the same page on my other monitor at 1366x768 and the panels are all off.
The save function saves positions in pixels, so when you move over to another resolution the panels are off, you know?
P.S. I could pop up a message telling the user to change their monitor settings, lol...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Bespoke SQL Server 'encoding' sproc - is there a neater way of doing this? I'm just wondering if there's a better way of doing this in SQL Server 2005.
Effectively, I'm taking an originator_id (a number between 0 and 99) and a 'next_element' (it's really just a sequential counter between 1 and 999,999).
We are trying to create a 6-character 'code' from them.
The originator_id is multiplied up by a million, and then the counter added in, giving us a number between 0 and 99,999,999.
Then we convert this into a 'base 32' string - a fake base 32, where we're really just using 0-9 and A-Z but with a few of the more confusing alphanums removed for clarity (I, O, S, Z).
To do this, we just divide the number up by powers of 32, at each stage using the result we get for each power as an index for a character from our array of selected character.
Thus, an originator ID of 61 and NextCodeElement of 9 gives a code of '1T5JA9'
(61 * 1,000,000) + 9 = 61,000,009
61,000,009 div (32^5 = 33,554,432) = 1 = '1'
27,445,577 div (32^4 = 1,048,576) = 26 = 'T'
182,601 div (32^3 = 32,768) = 5 = '5'
18,761 div (32^2 = 1,024) = 18 = 'J'
329 div (32^1 = 32) = 10 = 'A'
9 div (32^0 = 1) = 9 = '9'
so my code is 1T5JA9
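For what it's worth, the scheme is easy to prototype and sanity-check outside SQL. Here is a sketch in Python using the alphabet from the question, which reproduces the worked example above:

```python
ALPHABET = "0123456789ABCDEFGHJKLMNPQRTUVWXY"  # base 32 minus I, O, S, Z

def make_code(originator_id, next_element):
    """Encode originator_id * 1,000,000 + next_element as a 6-char code."""
    n = originator_id * 1_000_000 + next_element
    chars = []
    for power in range(5, -1, -1):        # 32^5 down to 32^0
        digit, n = divmod(n, 32 ** power)
        chars.append(ALPHABET[digit])
    return "".join(chars)

def parse_code(code):
    """Recover (originator_id, next_element) from a 6-char code."""
    n = 0
    for ch in code:
        n = n * 32 + ALPHABET.index(ch)
    return divmod(n, 1_000_000)

print(make_code(61, 9))      # 1T5JA9
print(parse_code("1T5JA9"))  # (61, 9)
```

This is just a reference model for testing; the T-SQL below does the same divisions and subtractions by hand.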
Previously I've had this algorithm working (in Delphi) but now I really need to be able to recreate it in SQL Server 2005. Obviously I don't quite have the same functions to hand that I have in Delphi, but this is my take on the routine. It works, and I can generate codes (or reconstruct codes back into their components) just fine.
But it looks a bit long-winded, and I'm not sure that the trick of selecting the result of a division into an int (ie casting it, really) is necessarily 'right' - is there a better SQLS approach to this kind of thing?
CREATE procedure dummy_RP_CREATE_CODE @NextCodeElement int, @OriginatorID int,
@code varchar(6) output
as
begin
declare @raw_num int;
declare @bcelems char(32);
declare @chr int;
select @bcelems='0123456789ABCDEFGHJKLMNPQRTUVWXY';
select @code='';
-- add in the originator_id, scaled into place
select @raw_num = (@OriginatorID * 1000000) + @NextCodeElement;
-- now to convert this to a 6-char code
-- 32^5
select @chr = @raw_num / 33554432;
select @raw_num = @raw_num - (@chr * 33554432);
select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1);
-- 32^4
select @chr = @raw_num / 1048576;
select @raw_num = @raw_num - (@chr * 1048576);
select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1);
-- 32^3
select @chr = @raw_num / 32768;
select @raw_num = @raw_num - (@chr * 32768);
select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1);
-- 32^2
select @chr = @raw_num / 1024;
select @raw_num = @raw_num - (@chr * 1024);
select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1);
-- 32^1
select @chr = @raw_num / 32;
select @raw_num = @raw_num - (@chr * 32);
select @code = @code + SUBSTRING(@bcelems, 1 + @chr, 1);
-- 32^0
select @code = @code + SUBSTRING(@bcelems, 1 + @raw_num, 1);
-- that's it!
end;
I'm not desperately concerned about optimisation unless this method is actually an order of magnitude (or so) worse than any alternative.
The code only ever gets executed in small bursts, maybe generating 20 or 30 codes every 10 minutes or so. Because of the large range of numbers I don't want to pre-calculate the codes into a huge table (only small pockets of the overall code range will be used in any given installation).
However, I'm sure there's probably a tidier way of achieving the same result - particularly those divisions and subtractions.
Any constructive criticism, observations or suggestions would be most welcome.
A: Seeing as it's SQL Server 2005, any reason not to use a CLR stored procedure? You could use your CLR language of choice then and it'd probably be a relatively direct port of your existing Delphi code.
A: Although it can clearly be done in SQL Server 2005, I think that it's of sufficient 'non-database-ness' that some kind of precompiled, high-level language routine makes sense.
I've written DLLs for Interbase/Firebird, and T-SQL sprocs for SQL Server, but never a CLR routine. It will be an interesting exercise!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I kill all sessions connecting to my oracle database? I need to quickly (and forcibly) kill off all external sessions connecting to my Oracle database without the supervision of an administrator.
I don't want to just lock the database and let the users quit gracefully.
How would I script this?
A: Before killing sessions, if possible do
ALTER SYSTEM ENABLE RESTRICTED SESSION;
to stop new sessions from connecting.
A: I've been using something like this for a while to kill my sessions on a shared server. The first line of the 'where' can be removed to kill all non 'sys' sessions:
BEGIN
FOR c IN (
SELECT s.sid, s.serial#
FROM v$session s
WHERE (s.Osuser = 'MyUser' or s.MACHINE = 'MyNtDomain\MyMachineName')
AND s.USERNAME <> 'SYS'
AND s.STATUS <> 'KILLED'
)
LOOP
EXECUTE IMMEDIATE 'alter system kill session ''' || c.sid || ',' || c.serial# || '''';
END LOOP;
END;
A: This answer is heavily influenced by a conversation here: http://www.tek-tips.com/viewthread.cfm?qid=1395151&page=3
ALTER SYSTEM ENABLE RESTRICTED SESSION;
begin
for x in (
select Sid, Serial#, machine, program
from v$session
where
machine <> 'MyDatabaseServerName'
) loop
execute immediate 'Alter System Kill Session '''|| x.Sid
|| ',' || x.Serial# || ''' IMMEDIATE';
end loop;
end;
I skip killing sessions originating on the database server to avoid killing off Oracle's connections to itself.
A: If you want to stop new users from connecting, but allow current sessions to continue until they are inactive, you can put the database in QUIESCE mode:
ALTER SYSTEM QUIESCE RESTRICTED;
From the Oracle Database Administrator's Guide:
Non-DBA active sessions will continue
until they become inactive. An active
session is one that is currently
inside of a transaction, a query, a
fetch, or a PL/SQL statement; or a
session that is currently holding any
shared resources (for example,
enqueues). No inactive sessions are
allowed to become active...Once all
non-DBA sessions become inactive, the
ALTER SYSTEM QUIESCE RESTRICTED
statement completes, and the database
is in a quiesced state
A: Additional info
Important Oracle 11g changes to alter system kill session
Oracle author Mladen Gogala notes that an @ sign is now required to
kill a session when using the inst_id column:
alter system kill session '130,620,@1';
http://www.dba-oracle.com/tips_killing_oracle_sessions.htm
A: As SYS:
startup force;
Brutal, yet elegant.
A: Try a logon trigger.
Instead of trying to disconnect users, you should not allow them to connect in the first place.
Here is an example of such a trigger.
CREATE OR REPLACE TRIGGER rds_logon_trigger
AFTER LOGON ON DATABASE
BEGIN
IF SYS_CONTEXT('USERENV','IP_ADDRESS') not in ('192.168.2.121','192.168.2.123','192.168.2.233') THEN
RAISE_APPLICATION_ERROR(-20003,'You are not allowed to connect to the database');
END IF;
IF (to_number(to_char(sysdate,'HH24')) < 6) or (to_number(to_char(sysdate,'HH24')) > 18) THEN
RAISE_APPLICATION_ERROR(-20005,'Logon only allowed during business hours');
END IF;
END;
A: I found the below snippet helpful. Taken from: http://jeromeblog-jerome.blogspot.com/2007/10/how-to-unlock-record-on-oracle.html
select
owner||'.'||object_name obj ,
oracle_username||' ('||s.status||')' oruser ,
os_user_name osuser ,
machine computer ,
l.process unix ,
s.sid||','||s.serial# ss ,
r.name rs ,
to_char(s.logon_time,'yyyy/mm/dd hh24:mi:ss') time
from v$locked_object l ,
dba_objects o ,
v$session s ,
v$transaction t ,
v$rollname r
where l.object_id = o.object_id
and s.sid=l.session_id
and s.taddr=t.addr
and t.xidusn=r.usn
order by osuser, ss, obj
;
Then ran:
Alter System Kill Session '<value from ss above>'
;
To kill individual sessions.
A: To answer the question asked, here is the most accurate SQL to accomplish the job; you can combine it with a PL/SQL loop to actually run the kill statements:
select ses.USERNAME,
substr(MACHINE,1,10) as MACHINE,
substr(module,1,25) as module,
status,
'alter system kill session '''||SID||','||ses.SERIAL#||''';' as kill
from v$session ses LEFT OUTER JOIN v$process p ON (ses.paddr=p.addr)
where schemaname <> 'SYS'
and not exists
(select 1
from DBA_ROLE_PRIVS
where GRANTED_ROLE='DBA'
and schemaname=grantee)
and machine!='yourlocalhostname'
order by LAST_CALL_ET desc;
A: If Oracle is running on Unix/Linux then we can grep for all client connections and kill them.
Count all Oracle client processes:
ps -ef | grep LOCAL=NO | grep -v grep | awk '{print $2}' | wc -l
Kill all Oracle client processes:
kill -9 $(ps -ef | grep LOCAL=NO | grep -v grep | awk '{print $2}')
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: "Background" task in Palm OS I'm trying to create a Palm OS app to check a web site every X minutes or hours, and provide a notification when a piece of data is available. I know that this kind of thing can be done on the newer Palms - for example, my Centro can have email or web sites download when the application isn't on top - but I don't know how to do it. Can anyone point me in the right direction?
A: This is possible to do but very difficult. There are several steps you'll have to take.
First off, this only works on Palm OS 5 and is sketchy on some of the early Palm OS 5 devices. The latest devices are better but not perfect.
Next, you will need to create an alarm for your application using AlmSetAlarm. This is how you accomplish the "every X minutes or hours" part.
When the alarm fires, your application will get a sysAppLaunchCmdAlarmTriggered launch code, even if it's not already running. If you only want to do something simple and quick, you can do it in response to the launch code and you're done.
After you do your stuff in the alarm launch code, be sure to set up the next alarm so that you continue to be called.
Important notes: You cannot access global variables when responding this launch code! Depending on the setup in your compiler, you probably also won't be able to access certain C++ features, like virtual functions (which internally use global variables). There is a setting you can set in Codewarrior that will help with this, but I'm not too familiar with it. You should architect your code so that it doesn't need globals; for example, you can use FtrSet and FtrGet to store bits of global data that you might need. Finally, you will only be able to access a single 64KB code segment of 68000 machine code. Inter-segment jumps don't work properly without globals set up.
You can get around a lot of these restrictions by moving the majority of your code to a PNOlet, but that's an entirely different and more complicated topic.
If you want to do something more complicated that could take a while (e.g. load a web page or download email), it is strongly recommended not to do it during the alarm launch code. You could do something in the sysAppLaunchCmdDisplayAlarm launch code and display a form to the user allowing them to cancel. But this is bound to get annoying quickly.
Better for the user experience (but much more complicated) is to become a background application. This is a bit of black magic and is not really well supported, but it is possible. There are basically three steps to becoming a background application:
*
*Protect your application database using DmDatabaseProtect. This will ensure that your application is locked down so it can't be deleted.
*Lock your code segment using MemHandleLock and MemHandleSetOwner (set the owner to 0). This will ensure that your code is loaded into memory and won't be moved.
*Register for some notifications. For example, the sysNotifyIdleTimeEvent is a great notification to use to do some periodic background processing.
Once you set this up, you can exit from the alarm launch code and then wait for your notifications to fire. You will then do all of your background processing when your notification handlers are called.
Also make sure that if you allocate any system objects (memory, handles, file handles, etc.), you set their owner to 0 (system) if you expect them to persist after you return from your notification handler. Otherwise the system will clean them up. If you do this, be super careful to avoid memory and resource leaks!! They will never get cleaned up when the owner is set to 0!
To leave background mode, simply do the reverse: unregister for notifications, unlock your code segment, and unprotect your application database.
If you do any network operations in the background, be sure that you set the sockets to non-blocking mode and deal correctly with that! Otherwise you will block the foreground application and cause problems.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: __doPostBack is not working in Firefox __doPostBack is not working in Firefox 3 (I have not checked 2). Everything is working great in IE 6 & 7 and it even works in Chrome?
It's a simple asp:LinkButton with an OnClick event
<asp:LinkButton ID="DeleteAllPicturesLinkButton" Enabled="False" OnClientClick="javascript:return confirm('Are you sure you want to delete all pictures? \n This action cannot be undone.');" OnClick="DeletePictureLinkButton_Click" CommandName="DeleteAll" CssClass="button" runat="server">
The JavaScript confirm is firing, so I know the JavaScript is working; it's specifically the __doPostBack call that fails. There is a lot more going on on the page; I just didn't know if it's worth it to post the entire page.
I enable the control on the page load event.
Any ideas?
I hope this is the correct way to do this, but I found the answer. I figured I'd put it up here rather than in a Stack Overflow "answer".
Seems it had something to do with nesting ajax toolkit UpdatePanel. When I removed the top level panel it was fixed.
Hope this helps if anyone else has the same problem. I still don't know what specifically was causing the problem, but that was the solution for me.
A: Check your User Agent string. This same thing happened to me one time and I realized it was because I was testing out some pages as "googlebot". The JavaScript that is generated depends on knowing what the user agent is.
From http://support.mozilla.com/tiki-view_forum_thread.php?locale=tr&comments_parentId=160492&forumId=1:
To reset your user agent string type about:config into the location bar and press enter. This brings up a list of preferences. Enter general.useragent into the filter box, this should show a few preferences (probably 4 of them). If any have the status user set, right-click on the preference and choose Reset
A: I had this same problem (__doPostBack not working) in Firefox- caused a solid hour of wasted time. The problem turned out to be the HTML. If you use HTML like this:
<input type="button" id="yourButton" onclick="doSomethingThenPostBack();" value="Post" />
Where "doSomethingThenPostBack" is just a JavaScript method that calls __doPostBack, the form will not post in Firefox. It will PostBack in IE and Chrome. To solve the problem, make sure your HTML is:
<input type="submit" id="yourButton" ...
The key is the type attribute. It must be "submit" in Firefox for __doPostBack to work. Other browsers don't seem to care. Hope this helps anyone else who hits this problem.
A: Is it because you are doing return confirm? It seems like the return statement should prevent the rest of the code from firing. I would think an if statement would work:
if (!confirm(...)) { return false; } __doPostBack(...);
Can you post all the js code in the OnClick of the link?
EDIT: aha, I forgot that LinkButton emits code like this:
<a href="javascript:__doPostBack()" onclick="return confirm()" />
A: This might seem elemental, but did you verify that your Firefox settings aren't set to interfere with the postback? Sometimes I encounter similar problems due to an odd browser configuration left over from a debugging session.
A: Are you handling the PageLoad event? If so, try the following
if (!isPostBack)
{
//do something
}
else if (Request.Form["__EVENTTARGET"].ToLower().IndexOf("myevent") >= 0)
{
//call appropriate function.
}
Check if you are getting a call this way, if so then maybe the event is not wired and nedes to be explicitly called.
A: What do you expect from Enabled="False"? A disabled control won't post back until you enable it.
A: I have had problems with Firebug on some web forms; something about the network analyser can screw with postbacks.
A: With or without the OnClientClick event it still doesn't work.
The __doPostBack function is the auto-generated JavaScript that .NET produces.
function __doPostBack(eventTarget, eventArgument) {
if (!theForm.onsubmit || (theForm.onsubmit() != false)) {
theForm.__EVENTTARGET.value = eventTarget;
theForm.__EVENTARGUMENT.value = eventArgument;
theForm.submit();
}
}
*The &#95; entities are underscores; it seems to be a problem with the Stack Overflow code block format.
A: Now that I think about it, as noted in my last edit, you want to drop the javascript: prefix in the OnClientClick property. It's not needed, because the onclick event is JavaScript as it is. Try that and see if that works.
A: Seems it had something to do with nesting ajax toolkit UpdatePanel. When I removed the top level panel it was fixed.
Hope this helps if anyone else has the same problem.
A: I had this exact same issue in a web app I was working on, and I tried solving it for hours.
Eventually, I did a NEW webform, dropped a linkbutton in it, and it worked perfectly!
I then noticed the following issue:
...
I switched the order to the following, and it was immediately fixed:
...
IE had no issue either way (that I noticed anyway).
A: I had a similar issue. It turned out that Akamai was modifying the user-agent string because a setting was being applied that was not needed.
This meant that some .NET controls did not render __doPostBack code properly. This issue has been blogged here.
A: @Terrapin: you got this exactly right (for me, anyways).
I was running User Agent Switcher and had inadvertently left the Googlebot 2.1 agent selected.
In Reporting Services 2008 it was causing the iframes that reports are actually rendered in to be about 300x200 px, and in Reporting Services 2008 R2 it was throwing "__doPostBack undefined" errors in the Error Console.
Switching back to the Default User Agent fixed all my issues.
A: I had the same problem with Firefox. Instead of using __doPostBack() could you use the jQuery .trigger() method to trigger a click action on an element that has your postback event registered as the click action?
For example, if you had this element in your aspx page:
<asp:Button runat="server" ID="btnMyPostback" OnClick="btnMyPostback_Click" CssClass="hide" ToolTip="Click here to submit this transaction." />
And your postback event handler in your code-behind:
protected void btnMyPostback_Click(object sender, EventArgs e)
{
//do my postback stuff
}
You could do the postback by calling:
$("#btnMyPostback").trigger("click");
This will cause the Page_Load event to fire if you need to do something on Page_Load.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How can I get Emacs' key bindings in Python's IDLE? I use Emacs primarily for coding Python but sometimes I use IDLE. Is there a way to change the key bindings easily in IDLE to match Emacs?
A: IDLE provides Emacs keybindings without having to install other software.
*
*Open up the menu item Options -> Configure IDLE...
*Go to the Keys tab
*In the drop-down menu on the right side of the dialog, change the selection to "IDLE Classic Unix"
They're not the true Emacs key bindings, but you get the basics like movement, saving/opening, ...
A: There's a program for Windows called XKeymacs that allows you to specify emacs keybindings for different programs. It should work with IDLE.
http://www.cam.hi-ho.ne.jp/oishi/indexen.html
-Mark
A: The 'readline' module supposedly provides Emacs-like key bindings and even functionality. However, it is available on Unix but not on Windows, so this might be a viable solution if you are not using Windows.
import readline
Since I am running IDLE on Windows it is unfortunately not an option for me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Reading VC++ CArchive Binary Format (or Java reading (CObArray)) Is there any clear documentation on the binary formats used to serialize the various MFC data structures? I've been able to view some of my own classes in a hex editor and use Java's ByteBuffer class to read them in (with automatic endianness conversions, etc).
However, I am currently running into issues while trying to bring over the CObArray data, as there seems to be a rather large header that is opaque to me, and it is unclear how it is persisting object type information.
Is there a set of online documentation that would be helpful for this? Or some sample Java code from someone that has dealt with this in the past?
A: Since MFC ships with source code I would create a test MFC application that serializes a CObArray and step through the serialization code. This should give you all the information you need.
A: I agree with jmatthias: use the MFC source code.
There's also this page on MSDN that may be useful.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Extending an enum via inheritance I know this rather goes against the idea of enums, but is it possible to extend enums in C#/Java? I mean "extend" in both the sense of adding new values to an enum, but also in the OO sense of inheriting from an existing enum.
I assume it's not possible in Java, as it only got them fairly recently (Java 5?). C# seems more forgiving of people who want to do crazy things, though, so I thought it might be possible some way. Presumably it could be hacked up via reflection (not that you'd ever actually use that method)?
I'm not necessarily interested in implementing any given method, it just provoked my curiosity when it occurred to me :-)
A: When built-in enums aren't enough, you can do it the old-fashioned way and craft your own. For example, if you wanted to add an additional property, such as a description field, you could do it as follows:
public class Action {
public string Name {get; private set;}
public string Description {get; private set;}
private Action(string name, string description) {
Name = name;
Description = description;
}
public static Action DoIt = new Action("Do it", "This does things");
public static Action StopIt = new Action("Stop It", "This stops things");
}
You can then treat it like an enum like so:
public void ProcessAction(Action a) {
Console.WriteLine("Performing action: " + a.Name);
if (a == Action.DoIt) {
// ... and so on
}
}
The trick is to make sure that the constructor is private (or protected if you want to inherit), and that your instances are static.
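For comparison, Python's standard enum module supports this same pattern directly: each member's value tuple is passed to the constructor, so extra fields like a description can be attached (a Python sketch using only the standard library, not from the answer above):

```python
from enum import Enum

class Action(Enum):
    DO_IT = ("Do it", "This does things")
    STOP_IT = ("Stop it", "This stops things")

    def __init__(self, label, description):
        # the member's value tuple is unpacked into these parameters
        self.label = label
        self.description = description

print(Action.DO_IT.description)  # This does things
```

Identity comparisons work the same way as in the C# version, since enum members are singletons.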
A: You're going the wrong way: a subclass of an enum would have fewer entries.
In pseudocode, think:
enum Animal { Mosquito, Dog, Cat };
enum Mammal : Animal { Dog, Cat }; // (not valid C#)
Any method that can accept an Animal should be able to accept a Mammal, but not the other way around. Subclassing is for making something more specific, not more general. That's why "object" is the root of the class hierarchy. Likewise, if enums were inheritable, then a hypothetical root of the enum hierarchy would have every possible symbol.
But no, C#/Java don't allow sub-enums, AFAICT, though it would be really useful at times. It's probably because they chose to implement Enums as ints (like C) instead of interned symbols (like Lisp). (Above, what does (Animal)1 represent, and what does (Mammal)1 represent, and are they the same value?)
You could write your own enum-like class (with a different name) that provided this, though. With C# attributes it might even look kind of nice.
A: You can use .NET reflection to retrieve the labels and values from an existing enum at run-time (Enum.GetNames() and Enum.GetValues() are the two specific methods you would use) and then use code injection to create a new one with those elements plus some new ones. This seems somewhat analagous to "inheriting from an existing enum".
A: I didn't see anyone else mention this, but the ordinal value of an enum is important. For example, with Grails, when you save an enum to the database it uses the ordinal value. If you could somehow extend an enum, what would be the ordinal values of your extensions? If you extended it in multiple places, how could you preserve some kind of order to these ordinals? Chaos/instability in the ordinal values would be a bad thing, which is probably another reason why the language designers have not touched this.
Another difficulty if you were the language designer, how can you preserve the functionality of the values() method which is supposed to return all of the enum values. What would you invoke this on and how would it gather up all of the values?
A: Adding enum values is a fairly common thing to do if you go back to the source code and edit; any other way (inheritance or reflection, if either were possible) is likely to come back and hit you when you upgrade the library and it has introduced the same enum name or the same enum value. I have seen plenty of low-level code where the integer value matches a binary encoding, where you would run into problems.
Ideally, code referencing enums should use equality checks only (or switches), and try to be future-proof by not expecting the enum set to be constant.
A: If you mean extends in the Base class sense, then in Java... no.
But you can extend an enum value to have properties and methods if that's what you mean.
For example, the following uses a Bracket enum:
class Person {
enum Bracket {
Low(0, 12000),
Middle(12000, 60000),
Upper(60000, 100000);
private final int low;
private final int high;
Bracket(int low, int high) {
this.low = low;
this.high = high;
}
public int getLow() {
return low;
}
public int getHigh() {
return high;
}
public boolean isWithin(int value) {
return value >= low && value <= high;
}
public String toString() {
return "Bracket " + low + " to " + high;
}
}
private Bracket bracket;
private String name;
public Person(String name, Bracket bracket) {
this.bracket = bracket;
this.name = name;
}
public String toString() {
return name + " in " + bracket;
}
}
A: Enums are supposed to represent the enumeration of all possible values, so extending rather does go against the idea.
However, what you can do in Java (and presumably C++0x) is have an interface instead of a enum class. Then put you standard values in an enum that implements the feature. Obviously you don't get to use java.util.EnumSet and the like. This is the approach taken in "more NIO features", which should be in JDK7.
public interface Result {
String name();
String toString();
}
public enum StandardResults implements Result {
TRUE, FALSE
}
public enum WTFResults implements Result {
FILE_NOT_FOUND
}
A: The reason you can't extend Enums is because it would lead to problems with polymorphism.
Say you have an enum MyEnum with values A, B, and C, and extend it with value D as MyExtEnum.
Suppose a method expects a MyEnum value somewhere, for instance as a parameter. It should be legal to supply a MyExtEnum value, because it's a subtype, but now what are you going to do when it turns out the value is D?
To eliminate this problem, extending enums is illegal.
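Python's enum module enforces the same restriction at class-creation time, which makes the rule easy to demonstrate (a hedged illustration in Python, not from the original answers; the exact error message varies by version):

```python
from enum import Enum

class MyEnum(Enum):
    A = 1
    B = 2
    C = 3

# Subclassing an enum that already has members is rejected outright
try:
    class MyExtEnum(MyEnum):
        D = 4
except TypeError as err:
    print("extension rejected:", err)
```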
A: Saw a post regarding this for Java a while back, check out http://www.javaspecialists.eu/archive/Issue161.html .
A: I would like to be able to add values to C# enumerations which are combinations of existing values. For example (this is what I want to do):
AnchorStyles is defined as
public enum AnchorStyles {
None = 0,
Top = 1,
Bottom = 2,
Left = 4,
Right = 8,
}
and I would like to add an AnchorStyles.BottomRight = Right + Bottom so instead of saying
my_ctrl.Anchor = AnchorStyles.Right | AnchorStyles.Bottom;
I can just say
my_ctrl.Anchor = AnchorStyles.BottomRight;
This doesn't cause any of the problems that have been mentioned above, so it would be nice if it was possible.
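For what it's worth, named combinations of flag values like this are possible in Python's IntFlag (a sketch mirroring the C# wish above, not from the original answer):

```python
from enum import IntFlag

class AnchorStyles(IntFlag):
    NONE = 0
    TOP = 1
    BOTTOM = 2
    LEFT = 4
    RIGHT = 8
    # a named combination of existing members
    BOTTOM_RIGHT = BOTTOM | RIGHT

anchor = AnchorStyles.BOTTOM_RIGHT
print(anchor == AnchorStyles.BOTTOM | AnchorStyles.RIGHT)  # True
```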
A: A temporary/local workaround, when you just want very local/one time usage:
enum Animals { Dog, Cat }
enum AnimalsExt { Dog = Animals.Dog, Cat = Animals.Cat, MyOther }
// BUT CAST THEM when using:
var xyz = AnimalsExt.Cat;
MethodThatNeedsAnimal( (Animals)xyz );
See all answers at: Enum "Inheritance"
A: You can't inherit from/extend an enum, but you can use attributes to declare a description. If you're looking for an integer value, that's built-in.
A: Hmmm - as far as I know, this can't be done - enumerations are written at design-time and are used as a convenience to the programmer.
I'm pretty sure that when the code is compiled, the equivalent values will be substituted for the names in your enumeration, thereby removing the concept of an enumeration and (therefore) the ability to extend it.
A: As far as java is concerned it is not allowed because adding elements to an enum would effectively create a super class rather than a sub class.
Consider:
enum Person {JOHN, SAM}
enum Student extends Person {HARVEY, ROSS} // not valid Java
A general use case of Polymorphism would be
Person person = Student.ROSS; //not legal
which is clearly wrong.
A: Some time back I too wanted to do something like this and found that enum extensions would violate a lot of basic concepts (not just polymorphism)...
But you might still need to do it if the enum is declared in an external library.
Remember, you should take special caution when using these enum extensions...
public enum MyEnum { A = 1, B = 2, C = 4 }
public const MyEnum D = (MyEnum)(8);
public const MyEnum E = (MyEnum)(16);
void Func1() {
MyEnum enumValue = D;
switch (enumValue) {
case D: break;
case E: break;
case MyEnum.A: break;
case MyEnum.B: break;
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "98"
} |
Q: ant build.xml windows white space in path I'm using Windows and I'm trying to get ANT to work.
When I do an ant build from the command line, I get:
C:\dev\Projects\springapp\${%ANT_HOME%}\lib not found.
I look into the build.xml file and I find:
appserver.home=${user.home}/apache-tomcat-6.0.14 (which I just copied and pasted straight from a tutorial)
I changed it to:
appserver.home="C:\Program Files\Apache Software Foundation\Tomcat 6.0"
but now I get:
C:\dev\Projects\springapp\"C:Program FilesApache Software FoundationTomcat 6.0"\lib not found.
It seems like the white space in Program Files and Tomcat 6.0 are causing the build to fail. How do you deal with these in xml files without re-creating the directory with a path with no white space?
A: A variant with "/" instead of "\" works on my system. You just need to delete the " symbols before and after the path.
A: Find the Windows short names for those directories using dir /x and use them when setting path values.
Some more discussion at How does Windows determine/handle the DOS short name of any given file?
A: It looks like you have your properties setup incorrectly.
I'm guessing your basedir property is pointing at C:\dev\Projects\springapp and your properties are using value like:
<property name="property.1" value="directory" />
instead of
<property name="property.1" location="directory" />
Using the location attribute resolves the value relative to your basedir if it is a relative path, or keeps it as an absolute path if you enter one. If you could post the parts of your Ant file showing how you define appserver.home and how you use it in the task that's throwing the error, I could be more specific.
A: Change it to
appserver.home="C:\\Program Files\\Apache Software Foundation\\Tomcat 6.0"
A: In addition to escaping the Windows directory separator, also make sure that all paths you type in have correct capitalisation; Windows is not case-sensitive but case-preserving, while Ant is case-sensitive.
A: In the top of your build.xml, try adding
<property environment="env"/>
and then using ${env.USER_HOME} (or whatever you have in your environment). That did it for me (${env.JAVA_HOME} rather than ${java.home}).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Python regular expression for HTML parsing I want to grab the value of a hidden input field in HTML.
<input type="hidden" name="fooId" value="12-3456789-1111111111" />
I want to write a regular expression in Python that will return the value of fooId, given that I know the line in the HTML follows the format
<input type="hidden" name="fooId" value="**[id is here]**" />
Can someone provide an example in Python to parse the HTML for the value?
A: import re
reg = re.compile(r'<input type="hidden" name="fooId" value="([^"]*)" />')
value = reg.search(inputHTML).group(1)
print 'Value is', value
A: Parsing is one of those areas where you really don't want to roll your own if you can avoid it, as you'll be chasing down the edge cases and bugs for years to come.
I'd recommend using BeautifulSoup. It has a very good reputation and looks from the docs like it's pretty easy to use.
A: For this particular case, BeautifulSoup is harder to write than a regex, but it is much more robust... I'm just contributing with the BeautifulSoup example, given that you already know which regexp to use :-)
from BeautifulSoup import BeautifulSoup
#Or retrieve it from the web, etc.
html_data = open('/yourwebsite/page.html','r').read()
#Create the soup object from the HTML data
soup = BeautifulSoup(html_data)
fooId = soup.find('input', attrs={'name': 'fooId', 'type': 'hidden'}) #Find the proper tag
value = fooId.attrs[2][1] #The value of the third attribute of the desired tag
#or index it directly via fooId['value']
A: I agree with Vinko BeautifulSoup is the way to go. However I suggest using fooId['value'] to get the attribute rather than relying on value being the third attribute.
from BeautifulSoup import BeautifulSoup
#Or retrieve it from the web, etc.
html_data = open('/yourwebsite/page.html','r').read()
#Create the soup object from the HTML data
soup = BeautifulSoup(html_data)
fooId = soup.find('input', attrs={'name': 'fooId', 'type': 'hidden'}) #Find the proper tag
value = fooId['value'] #The value attribute
A: Pyparsing is a good interim step between BeautifulSoup and regex. It is more robust than just regexes, since its HTML tag parsing comprehends variations in case, whitespace, attribute presence/absence/order, but simpler to do this kind of basic tag extraction than using BS.
Your example is especially simple, since everything you are looking for is in the attributes of the opening "input" tag. Here is a pyparsing example showing several variations on your input tag that would give regexes fits, and also shows how NOT to match a tag if it is within a comment:
html = """<html><body>
<input type="hidden" name="fooId" value="**[id is here]**" />
<blah>
<input name="fooId" type="hidden" value="**[id is here too]**" />
<input NAME="fooId" type="hidden" value="**[id is HERE too]**" />
<INPUT NAME="fooId" type="hidden" value="**[and id is even here TOO]**" />
<!--
<input type="hidden" name="fooId" value="**[don't report this id]**" />
-->
<foo>
</body></html>"""
from pyparsing import makeHTMLTags, withAttribute, htmlComment
# use makeHTMLTags to create tag expression - makeHTMLTags returns expressions for
# opening and closing tags, we're only interested in the opening tag
inputTag = makeHTMLTags("input")[0]
# only want input tags with special attributes
inputTag.setParseAction(withAttribute(type="hidden", name="fooId"))
# don't report tags that are commented out
inputTag.ignore(htmlComment)
# use searchString to skip through the input
foundTags = inputTag.searchString(html)
# dump out first result to show all returned tags and attributes
print foundTags[0].dump()
print
# print out the value attribute for all matched tags
for inpTag in foundTags:
print inpTag.value
Prints:
['input', ['type', 'hidden'], ['name', 'fooId'], ['value', '**[id is here]**'], True]
- empty: True
- name: fooId
- startInput: ['input', ['type', 'hidden'], ['name', 'fooId'], ['value', '**[id is here]**'], True]
- empty: True
- name: fooId
- type: hidden
- value: **[id is here]**
- type: hidden
- value: **[id is here]**
**[id is here]**
**[id is here too]**
**[id is HERE too]**
**[and id is even here TOO]**
You can see that not only does pyparsing match these unpredictable variations, it returns the data in an object that makes it easy to read out the individual tag attributes and their values.
A: /<input type="hidden" name="fooId" value="([\d-]+)" \/>/
A: /<input\s+type="hidden"\s+name="([A-Za-z0-9_]+)"\s+value="([A-Za-z0-9_\-]*)"\s*/>/
>>> import re
>>> s = '<input type="hidden" name="fooId" value="12-3456789-1111111111" />'
>>> re.match('<input\s+type="hidden"\s+name="([A-Za-z0-9_]+)"\s+value="([A-Za-z0-9_\-]*)"\s*/>', s).groups()
('fooId', '12-3456789-1111111111')
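For completeness, the standard library's HTML parser can also pull the attribute out without regexes or third-party packages. A sketch using the modern Python 3 html.parser API (the question predates Python 3, so treat this as an updated equivalent, not one of the original answers):

```python
from html.parser import HTMLParser

class HiddenInputParser(HTMLParser):
    """Grabs the value attribute of the hidden input named 'fooId'."""

    def __init__(self):
        super().__init__()
        self.foo_id = None

    def handle_starttag(self, tag, attrs):
        # also invoked for self-closing tags like <input ... />
        attrs = dict(attrs)
        if tag == "input" and attrs.get("name") == "fooId":
            self.foo_id = attrs.get("value")

parser = HiddenInputParser()
parser.feed('<input type="hidden" name="fooId" value="12-3456789-1111111111" />')
print(parser.foo_id)  # 12-3456789-1111111111
```

Like BeautifulSoup, this tolerates attribute reordering and extra whitespace that would break a naive regex.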
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Have you successfully used a GPGPU? I am interested to know whether anyone has written an application that takes advantage of a GPGPU by using, for example, nVidia CUDA. If so, what issues did you find and what performance gains did you achieve compared with a standard CPU?
A: I have been using GPGPU for motion detection (Originally using CG and now CUDA) and stabilization (using CUDA) with image processing.
I've been getting about a 10-20X speedup in these situations.
From what I've read, this is fairly typical for data-parallel algorithms.
A: I have been doing gpgpu development with ATI's stream SDK instead of Cuda.
What kind of performance gain you will get depends on a lot of factors, but the most important is the numeric intensity. (That is, the ratio of compute operations to memory references.)
A BLAS level-1 or BLAS level-2 function like adding two vectors does only 1 math operation for every 3 memory references, so the NI is (1/3). This will always run slower with CAL or Cuda than just doing it on the cpu. The main reason is the time it takes to transfer the data from the cpu to the gpu and back.
For a function like FFT, there are O(N log N) computations and O(N) memory references, so the NI is O(log N). If N is very large, say 1,000,000 it will likely be faster to do it on the gpu; If N is small, say 1,000 it will almost certainly be slower.
For a BLAS level-3 or LAPACK function like LU decomposition of a matrix, or finding its eigenvalues, there are O( N^3) computations and O(N^2) memory references, so the NI is O(N). For very small arrays, say N is a few score, this will still be faster to do on the cpu, but as N increases, the algorithm very quickly goes from memory-bound to compute-bound and the performance increase on the gpu rises very quickly.
Anything involving complex arithmetic has more computations than scalar arithmetic, which usually doubles the NI and increases gpu performance.
(source: earthlink.net)
Here is the performance of CGEMM -- complex single precision matrix-matrix multiplication done on a Radeon 4870.
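The numeric-intensity estimates above are easy to reproduce with rough operation counts (constants ignored; a back-of-the-envelope sketch, not a benchmark):

```python
import math

def numeric_intensity(flops, mem_refs):
    # ratio of compute operations to memory references
    return flops / mem_refs

N = 1_000_000
# BLAS level-1 vector add: N additions over 3N references -> constant NI
blas1_ni = numeric_intensity(N, 3 * N)
# FFT: ~N log N operations over ~N references -> NI grows like log N
fft_ni = numeric_intensity(N * math.log2(N), N)
# LU decomposition: ~N^3 operations over ~N^2 references -> NI grows like N
n = 1000
lu_ni = numeric_intensity(n ** 3, n ** 2)
print(blas1_ni, fft_ni, lu_ni)
```

The LU case shows why dense linear algebra goes from memory-bound to compute-bound so quickly as the matrix grows.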
A: While I haven't got any practical experiences with CUDA yet, I have been studying the subject and found a number of papers which document positive results using GPGPU APIs (they all include CUDA).
This paper describes how database joins can be paralellized by creating a number of parallel primitives (map, scatter, gather etc.) which can be combined into an efficient algorithm.
In this paper, a parallel implementation of the AES encryption standard is created with comparable speed to discreet encryption hardware.
Finally, this paper analyses how well CUDA applies to a number of applications such as structured and unstructured grids, combination logic, dynamic programming and data mining.
A: I've implemented a Monte Carlo calculation in CUDA for some financial use. The optimised CUDA code is about 500x faster than a "could have tried harder, but not really" multi-threaded CPU implementation. (Comparing a GeForce 8800GT to a Q6600 here). It is well known that Monte Carlo problems are embarrassingly parallel, though.
The major issue encountered involves the loss of precision due to the G8x and G9x chips' limitation to IEEE single-precision floating point numbers. With the release of the GT200 chips this can be mitigated to some extent by using the double-precision unit, at the cost of some performance. I haven't tried it out yet.
Also, since CUDA is a C extension, integrating it into another application can be non-trivial.
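To illustrate why Monte Carlo maps so well to GPUs: every sample is independent, so the work splits into chunks with no shared state. A CPU-side Python sketch of that structure, estimating pi (on a GPU each chunk would correspond to a block of threads; the names here are illustrative, not from the answer above):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def count_hits(samples, seed):
    # one independent chunk: draws random points and counts those
    # landing inside the unit quarter-circle
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(samples))

trials, chunks = 200_000, 4
with ThreadPoolExecutor(max_workers=chunks) as pool:
    hits = sum(pool.map(count_hits,
                        [trials // chunks] * chunks,  # samples per chunk
                        range(chunks)))               # distinct seeds
pi_estimate = 4 * hits / trials
print(pi_estimate)  # roughly 3.14
```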
A: I have written trivial applications; it really helps if you can parallelize floating point calculations.
I found the following course, co-taught by a University of Illinois Urbana-Champaign professor and an NVIDIA engineer, very useful when I was getting started: http://courses.ece.illinois.edu/ece498/al/Archive/Spring2007/Syllabus.html (includes recordings of all lectures).
A: I have used CUDA for several image processing algorithms. These applications, of course, are very well suited for CUDA (or any GPU processing paradigm).
IMO, there are three typical stages when porting an algorithm to CUDA:
*
*Initial Porting: Even with a very basic knowledge of CUDA, you can port simple algorithms within a few hours. If you are lucky, you gain a factor of 2 to 10 in performance.
*Trivial Optimizations: This includes using textures for input data and padding of multi-dimensional arrays. If you are experienced, this can be done within a day and might give you another factor of 10 in performance. The resulting code is still readable.
*Hardcore Optimizations: This includes copying data to shared memory to avoid global memory latency, turning the code inside out to reduce the number of used registers, etc. You can spend several weeks with this step, but the performance gain is not really worth it in most cases. After this step, your code will be so obfuscated that nobody understands it (including you).
This is very similar to optimizing a code for CPUs. However, the response of a GPU to performance optimizations is even less predictable than for CPUs.
A: I implemented a Genetic Algorithm on the GPU and got speedups of around 7x. More gains are possible with a higher numeric intensity, as someone else pointed out. So yes, the gains are there, if the application is right.
A: I have implemented Cholesky factorization for solving large linear equations on the GPU using the ATI Stream SDK. My observations were:
I got a performance speedup of up to 10 times.
I am now working on the same problem to optimize it further, by scaling it to multiple GPUs.
A: I wrote a complex valued matrix multiplication kernel that beat the cuBLAS implementation by about 30% for the application I was using it for, and a sort of vector outer product function that ran several orders of magnitude faster than a multiply-trace solution for the rest of the problem.
It was a final year project. It took me a full year.
http://www.maths.tcd.ie/~oconbhup/Maths_Project.pdf
A: Yes. I have implemented the Nonlinear Anisotropic Diffusion Filter using the CUDA api.
It is fairly easy, since it's a filter that must be run in parallel given an input image. I haven't encountered many difficulties with this, since it just required a simple kernel. The speedup was about 300x. This was my final project in CS. The project can be found here (it's written in Portuguese, though).
I have tried writing the Mumford & Shah segmentation algorithm too, but that has been a pain to write, since CUDA is still in the beginning and so lots of strange things happen. I have even seen a performance improvement by adding an if (false){} to the code O_O.
The results for this segmentation algorithm weren't good. I had a performance loss of 20x compared to a CPU approach (however, since it's a CPU, a different approach that yielded the same results could be taken). It's still a work in progress, but unfortunately I left the lab I was working in, so maybe someday I might finish it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Path.GetTempFileName -- Directory name is invalid Running into a problem where on certain servers we get an error that the directory name is invalid when using Path.GetTempFileName. Further investigation shows that it is trying to write a file to c:\Documents and Settings\computername\aspnet\local settings\temp (found by using Path.GetTempPath). This folder exists, so I'm assuming this must be a permissions issue with respect to the asp.net account.
I've been told by some that Path.GetTempFileName should be pointing to C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files.
I've also been told that this problem may be due to the order in which IIS and .NET where installed on the server. I've done the typical 'aspnet_regiis -i' and checked security on the folders etc. At this point I'm stuck.
Can anyone shed some light on this?
**Update:**Turns out that providing 'IUSR_ComputerName' access to the folder does the trick. Is that the correct procedure? I don't seem to recall doing that in the past, and obviously, want to follow best practices to maintain security. This is, after all, part of a file upload process.
A: I encountered this error while diagnosing a console app that was writing temp files. In one of my test iterations I purged all the files/directories in temp for a 'clean-slate' run. I resolved this self-inflicted issue by logging out and back in again.
A: Could be because IIS_WPG does not have access to a temp folder. If you think it is a permission issue, run a Procmon on asp.net worker process and check for AccessDenied errors.
A: This is probably a combination of impersonation and a mismatch of different authentication methods occurring.
There are many pieces; I'll try to go over them one by one.
Impersonation is a technique to "temporarily" switch the user account under which a thread is running. Essentially, the thread briefly gains the same rights and access -- no more, no less -- as the account that is being impersonated. As soon as the thread is done creating the web page, it "reverts" back to the original account and gets ready for the next call. This technique is used to access resources that only the user logged into your web site has access to. Hold onto the concept for a minute.
Now, by default ASP.NET runs a web site under a local account called ASPNET. Again, by default, only the ASPNET account and members of the Administrators group can write to that folder. Your temporary folder is under that account's purview. This is the second piece of the puzzle.
Impersonation doesn't happen on its own. It needs to be turn on intentionally in your web.config.
<identity impersonate="true" />
If the setting is missing or set to false, your code will execute pure and simply under the ASPNET account mentioned above. Given your error message, I'm positive that you have impersonation=true. There is nothing wrong with that! Impersonation has advantages and disadvantages that go beyond this discussion.
There is one question left: when you use impersonation, which account gets impersonated?
Unless you specify the account in the web.config (full syntax of the identity element here), the account impersonated is the one that the IIS handed over to ASP.NET. And that depends on how the user has authenticated (or not) into the site. That is your third and final piece.
The IUSR_ComputerName account is a low-rights account created by IIS. By default, this account is the account under which a web call runs if the user could not be authenticated. That is, the user comes in as an "anonymous".
In summary, this is what is happening to you:
Your user is trying to access the web site, and IIS could not authenticate the person for some reason. Because Anonymous access is ON (or you would not see IUSR_ComputerName accessing the temp folder), IIS allows the user in anyway, but as a generic user. Your ASP.NET code runs and impersonates this generic IUSR_ComputerName "guest" account; only now the code doesn't have access to the things that the ASPNET account had access to, including its own temporary folder.
Granting IUSR_ComputerName WRITE access to the folder makes your symptoms go away.
But that just addresses the symptoms. You need to review why the person is coming in as "Anonymous/Guest".
There are two likely scenarios:
a) You intended to use IIS for authentication, but the authentication settings in IIS for some of your servers are wrong.
In that case, you need to disable Anonymous access on those servers so that the usual authentication mechanisms take place. Note that you might still need to grant to your users access to that temporary folder, or use another folder instead, one to which your users already have access.
I have worked with this scenario many times, and quite frankly it gives you fewer headaches to forgo the Temp folder: create a dedicated folder on the server, set the proper permissions, and set its location in web.config.
b) You didn't want to authenticate people anyway, or you wanted to use ASP.NET Forms Authentication (which uses IIS's Anonymous access to bypass checks in IIS and lets ASP.NET handle the authentication directly)
This case is a bit more complicated.
You should go to IIS and disable all forms of authentication other than "Anonymous Access". Note that you can't do that in the developer's box, because the debugger needs Integrated Authentication to be enabled. So your debugging box will behave a bit different than the real server; just be aware of that.
Then, you need to decide whether you should turn impersonation OFF, or conversely, to specify the account to impersonate in the web.config. Do the first if your web server doesn't need outside resources (like a database). Do the latter if your web site does need to run under an account that has access to a database (or some other outside resource).
You have two more alternatives to specify the account to impersonate. One, you could go to IIS and change the "anonymous" account to be one with access to the resource instead of the one IIS manages for you. The second alternative is to stash the account and password encrypted in the registry. That step is a bit complicated and also goes beyond the scope of this discussion.
Good luck!
A: I was having the same problem with one of my ASP.Net applications. I was calling Path.GetTempPath(), but it was throwing an exception:
"Could not write to file "C:\Windows\Temp\somefilename", exception: Access to the path "C:\Windows\Temp\somefilename" is denied."
I tried a few suggestions on this page, but nothing helped.
In the end, I went onto the web server (IIS server) and changed permissions on the server's "C:\Windows\Temp" directory to give the "Everyone" user full read-write permissions.
And then, finally, the exception went away, and my users could download files from the application. Phew!
A: You can use Path.GetTempPath() to find out which directory to which it's trying to write.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: ASP.NET MVC versus the Zeitgeist ASP.NET MVC seems to be making a pretty big entrance. Can anyone summarize how its MVC implementation stacks up against popular MVC frameworks for other languages? (I'm thinking specifically of Rails and Zend Framework, though there are obviously lots.) Observations on learning curve, common terminology, ease of use and feelgood factor welcome.
(For the sake of a little background, I've been avoiding using ASP.NET for some time because I really hate the webforms approach, but Jeff's prolific praise on the podcast has almost convinced me to give it a go.)
A: I'm just getting into ASP.NET MVC, so these are some early thoughts comparing it to Rails:
Mostly manages to stick with static typing, at the expense of a little extra code.
This will either give you the warm fuzzies or make you feel slightly shackled depending on how you feel about dynamic typing. For instance, you can have your views expect particular typed data (and so get compile-time checking of your views).
Better separation of bits of the framework.
So there's no prescribed data access mechanism such as ActiveRecord in Rails; you're free to choose your own. LINQ feels similar if you want something cheap, if a bit more verbose. You can use the non-WebForms parts of ASP.NET like caching and authentication.
Still playing feature catch-up.
Preview 5 brought AcceptVerbs, model updaters (similar to Ruby's hash.merge) and more ways to bind forms to models. Feels like there's still more to come before they check off most of the feature set that Rails has.
I'm still missing a little of Rails' freedom and elegance (much of which is down to Ruby, I guess), but ASP.NET MVC really does feel quite close.
A: If you're already programming in the .NET idiom, it's pretty easy to pick up on a lot of what's going on in the MVC Framework. Rails, on the other hand, can be pretty easy to pick up (granted, at a basic level) if you've never set eyes on Ruby before you start.
It seems like you're talking about quality-as-MVC, though, and it looks to me like both frameworks (can't speak for Zend) do a very good job of separating the concerns.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Do you actively manage technical debt? Do you actively manage technical debt on your software development projects, and if so, how do you do it?
A: On our teams we actively manage technical debt. We do Scrum, so we spawn a technical debt card for either the current iteration or the next iteration depending on the estimate and our remaining sprint capacity and they get prioritized just like features and bug cards do. We also manage larger, cross-team debt items by having a cross-team backlog of technical debt that we prioritize and inject into each Scrum team during their sprint planning.
A: I think it's important to schedule time for dealing technical debt if you are trying to make up for old sins, but I also think you should not make this a habit. Once you clean up the mess you should avoid putting your project into more debt, unless you have good reasons for doing so.
Actively managing it like Mike suggests seems like the most reasonable approach, but I think it's very important to make it clear (to your team) that you should not schedule time or plan for refactoring in the long run.
Refactoring should be a natural part of writing code, and thus should be included in your other estimates and plans, and not be treated as a separate activity—unless you have to, i.e. for "historical" reasons or because you consciously decided to implement something a given way and then re-implement it later.
A: One aspect of managing technical debt is in convincing non-technical managers that you need time allocated for refactoring and bug fixing.
Here's an article with specific suggestions on how to do that.
A: What you do is create a culture where technical debt is not acceptable unless in extreme cases. Much like people who only pay cash and use credit only as an absolute last resort.
A: If I really need to pile up technical debt, because I need to release something NOW, I file a critical bug about it, so it gets highest priority. But it is only for extreme situations (the client is jumping up and down, the wife is looking for a dingbat etc.).
A: It depends a lot on the product. When I worked in a field where our code had to be outside-audited it was a planned part of our sprint. PM just asked development what area needed refactoring and it was put in the plan. That's not to say you wouldn't fix the code in the area you were working on, but you wouldn't devote a day to rewriting a mangled chunk of code that worked. Now I'm working in scrum and developers just do it as they work. My impression is that about the same amount of time goes into refactoring work, either way.
A: I agree with Anders. If you have to set up systems for managing technical debt, that means you're still adding it. Stop going into debt in the first place by upgrading your definition of "done".
This does mean that "indebted" modules will be harder to work through. Developers should be aware of this and assign more story points so that they leave things "done" in their wake.
A: If you're late in the release cycle you don't want to change the code base too much. This means there will always be some technical debt. I usually write FIXMEs for the changes that are suboptimal and then I take care of them before I start to implement features for the next release.
A: Java Posse have covered the management of Technical Debt recently which looks very comprehensive.
A: On the projects I have been involved so far, some technical debt has been "paid" (managed) only at the beginning of new phases of the projects, i.e. after "big releases" or milestones.
A very important aspect about technical debt is that it not only involves developers but management as well. In that sense, I am aware that the best way to deal with it, is to make it visible to "non-technical project stakeholders" who might allocate time and resources to manage the technical debt once they understand its implications.
This article discusses several types of technical debt, which ones might be healthy, and specially how to manage and track the technical debt load.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: How to parse relative time? This question is the other side of the question asking, "How do I calculate relative time?".
Given some human input for a relative time, how can you parse it? By default you would offset from DateTime.Now(), but could optionally offset from another DateTime.
(Prefer answers in C#)
Example input:
*
*"in 20 minutes"
*"5 hours ago"
*"3h 2m"
*"next week"
Edit: Let's suppose we can define some limits on the input. This sort of code would be a useful thing to have out on the web.
A: A Google search turns up the parsedatetime library (associated with the Chandler project), which is designed to do exactly this. It's open source (Apache License) and written in Python. It seems to be quite sophisticated -- from the homepage:
parsedatetime is able to parse, for
example, the following:
* Aug 25 5pm
* 5pm August 25
* next saturday
...
* tomorrow
* next thursday at 4pm
* at 4pm
* eod
* in 5 minutes
* 5 minutes from now
* 5 hours before now
* 2 days from tomorrow
Since it's implemented in pure Python and doesn't use anything fancy, there's a good chance it's compatible with IronPython, so you could use it with .net. If you want specifically a C# solution, you could write something based on the algorithms they use...
It also comes with a whole bunch of unit tests.
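If you only need a constrained subset (as the question's edit allows), the core idea behind such libraries, a table of unit spellings plus a couple of prefix/suffix rules, is small enough to sketch yourself. Here it is in Python; a C# version would use Regex and TimeSpan the same way. The function name and the set of accepted forms below are my own invention, not parsedatetime's API:

```python
import re
from datetime import datetime, timedelta

# Maps unit spellings to timedelta keyword arguments.
_UNITS = {
    "m": "minutes", "min": "minutes", "minute": "minutes", "minutes": "minutes",
    "h": "hours", "hr": "hours", "hour": "hours", "hours": "hours",
    "d": "days", "day": "days", "days": "days",
    "w": "weeks", "week": "weeks", "weeks": "weeks",
}

def parse_relative(text, base=None):
    """Parse a few simple relative forms ("in 20 minutes", "5 hours ago",
    "3h 2m") and return a datetime offset from base (default: now)."""
    base = base or datetime.now()
    text = text.strip().lower()
    sign = 1
    if text.startswith("in "):       # "in 20 minutes" -> future
        text = text[3:]
    elif text.endswith(" ago"):      # "5 hours ago" -> past
        text, sign = text[:-4], -1
    delta = timedelta()
    matched = False
    for qty, unit in re.findall(r"(\d+)\s*([a-z]+)", text):
        if unit not in _UNITS:
            raise ValueError("unknown unit: %r" % unit)
        delta += timedelta(**{_UNITS[unit]: int(qty)})
        matched = True
    if not matched:
        raise ValueError("unparseable input: %r" % text)
    return base + sign * delta
```

Note that this handles only quantified offsets; calendar-relative phrases like "next week" or "tomorrow at 4pm" need the fuller grammar-based approach the libraries take.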
A: That's building a DSL (Domain specific language) for date handling. I don't know if somebody has done one for .NET but the construction of a DSL is fairly straightforward:
*
*Define the language precisely, which input forms you will accept and what will you do with ambiguities
*Construct the grammar for the language
*Build the finite state machine that parses your language into an actionable AST
You can do all that by yourself (with the help of the Dragon Book, for instance) or with the help of tools to that effect, as shown in this link.
Just by thinking hard about the possibilities you have a good chance, with the help of good UI examples, of covering more than half of the actual inputs your application will receive. If you aim to accept everything a human could possibly type, you can record the input determined as ambiguous and then add them to the grammar, whenever they can be interpreted, as there are things that will be inherently ambiguous.
A: The ruby folks have attempted to tackle this with a parser called Chronic.
*
*Chronic RDocs
*Chronic on GitHub
I watched an informative video presentation recently on how the author went about solving this problem.
*
*Chronic Presentation (San Diego Ruby Brigade)
A: This is likely not all that helpful since you're talking C#, but since no one's mentioned it yet, you can take a look at PHP's excellent and utterly insane native strtotime function.
A: This: http://www.codeproject.com/KB/edit/dateparser.aspx
Is fairly close to what you are trying to accomplish. Not the most elegant solution, but certainly might save you some work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Using Microsoft's Application Blocks I haven't done a lot of .NET programming, but I've examined a few of the application blocks published by Microsoft's Patterns and Practices group. I was wondering how these are typically used:
*
*Linked directly into applications
*Source added into applications and built with them, perhaps with some customization's
*Sample code used as reference while writing application-specific code
I'm sure all three of these usages are common, but what are the most typical usage patterns?
Are there a few particular application blocks that are used by "everyone?"
Note: This question is related to, but not the same as Enterprise Library Application Blocks OR Home Grown Framework?.
A: I have used Microsoft's Enterprise Library extensively. They generally should never be included within your project if possible. The added cost of compiling can be heavy. Additionally, there's no reason to have the source code in your project to use the classes. You will still get IntelliSense during coding as long as you add a reference to the DLLs in your projects. It is also advisable to avoid having multiple codebases floating around your developer environment. If you need to customize the classes, open them up in their own solution and keep one version active. Of course I always strongly suggest using version control (VSS or Subversion) in case you need to roll back changes.
There are also open source alternatives to the Microsoft classes that are usually better coded (i.e. Log4Net, nUnit, etc.). Microsoft code tends to be bloated and inefficient.
A: I usually put the source into my project, and then I can get better intellisense (and a better understanding of them). I don't tend to customize them at all though. I like to have them stock so I can just distribute the stock binaries anytime I need them.
A: I've tried several Application Blocks of Enterprise Lib 3.1 (May 2007) and here are some comments :
Caching Application Block : Less interesting than System.Web.Caching in simple scenarios (like In-Memory caching)
Exception Handling & Logging : Over-complicated. NLog or Log4Net are better solutions.
I looked at the other Blocks but they didn't seem to fit for our projects.
Finally we completely dropped EntLib because it was painful to customize...
I would advise you to really consider a less monolithic solution than EntLib.
A: We just put the EntLib 3.1 binaries in the global assembly cache and add references in our projects. We typically only use the logging framework, though.
A: I think that the most convenient way is to add App Blocks\EntLib as solution items. That way they will not be recompiled each time you build your project (they will not participate in the build process at all) and you can easily access their source code, set breakpoints, etc.
A: We use the blocks by adding references to the DLLs, making sure that "copy local" is set so that they are deployed with the app into the app's bin folder. This means that we don't have to muck around with the GAC - much simpler!
When debugging, Visual Studio can still step into the source code even if it's not directly included in your project, as long as you have the EntLib source code on your hard disk somewhere. It will prompt you for the location on first use, and remember it thereafter.
We currently use the Caching, Exception and Logging blocks. We haven't thought of a use case for the rest yet.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Checking Inheritance with templates in C++ I've a class which is a wrapper class(serves as a common interface) around another class implementing the functionality required. So my code looks like this.
template<typename ImplementationClass> class WrapperClass {
// the code goes here
};
Now, how do I make sure that ImplementationClass can be derived from a set of classes only, similar to java's generics
<? extends BaseClass>
syntax?
A: It's verbose, but you can do it like this:
#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_base_of.hpp>
struct base {};
template <typename ImplementationClass, class Enable = void>
class WrapperClass;
template <typename ImplementationClass>
class WrapperClass<ImplementationClass,
typename boost::enable_if<
boost::is_base_of<base,ImplementationClass> >::type>
{};
struct derived : base {};
struct not_derived {};
int main() {
WrapperClass<derived> x;
// Compile error here:
WrapperClass<not_derived> y;
}
This requires a compiler with good support for the standard (most recent compilers should be fine but old versions of Visual C++ won't be). For more information, see the Boost.Enable_If documentation.
As Ferruccio said, a simpler but less powerful implementation:
#include <boost/static_assert.hpp>
#include <boost/type_traits/is_base_of.hpp>
struct base {};
template <typename ImplementationClass>
class WrapperClass
{
BOOST_STATIC_ASSERT((
boost::is_base_of<base, ImplementationClass>::value));
};
A: In the current state of things, there is no good way other than by comments or a third-party solution. Boost provides a concept check library for this, and I think gcc also has an implementation. Concepts are on the list of C++0x improvements, but I'm not sure if you can specify subtypes - they are more for "must support these operations" which is (roughly) equivalent.
Edit: Wikipedia has this section about concepts in C++0x, which is significantly easier to read than draft proposals.
A: See Stoustrup's own words on the subject.
Basically a small class, that you instantiate somewhere, e.g. the templated classes constructor.
template<class T, class B> struct Derived_from {
static void constraints(T* p) { B* pb = p; }
Derived_from() { void(*p)(T*) = constraints; }
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What’s the best approach when migrating legacy projects across versions of visual studio? I've been thinking about the number of projects we have in-house that are still being developed using visual studio 6 and how best to migrate them forward onto visual studio 2008. The projects range in flavours of C/C++ and VB.
Is it better to let VS2008 convert the work-spaces into solutions, fix any compile errors and be on your merry way? Or, is it better to start with a clean solution and migrate code across project by project discarding dead code along the way?
A: The Microsoft p&p team has recommended some strategies that answers this. Basically they recommend something like the project by project approach you mention. Of course, they're assuming a neatly architected application that has no nasty, dark corners from which late nights of coding and copious amounts of coffee spring from.
It doesn't hurt to let VS2008 convert the project for you and see how much effort is required to fix the errors.
A: When I had to convert a VB6 app to VS2003 several years ago, I ran the converter and it produced something that basically compiled, but wasn't very good at all. I ended up having to modify a big chunk of the code it generated.
I would start with a clean solution, then run the converter on a project and copy over only the code you need. One of the big differences I noticed between a VB6 project and the converted VB.NET project (WinForm) was with the built-in controls. The converter would try to preserve the type of controls you were using, even if they were old and outdated. So you might be better served by creating new forms with modern controls (text boxes, tab controls, etc), then copy in the code that you need.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Can you "ignore" a file in Perforce? I sometimes use the feature 'Reconcile Offline Work...' found in Perforce's P4V IDE to sync up any files that I have been working on while disconnected from the P4 depot. It launches another window that performs a 'Folder Diff'.
I have files I never want to check in to source control (like ones found in bin folder such as DLLs, code generated output, etc.) Is there a way to filter those files/folders out from appearing as "new" that might be added. They tend to clutter up the list of files that I am actually interested in. Does P4 have the equivalent of Subversion's 'ignore file' feature?
A: If you want a solution that will apply to all work-spaces without needing to be copied around, you (or your sysadmin) can refuse submission of those file-types through using lines like the below in the p4 protect table:
write user * * -//.../*.suo
write user * * -//.../*.obj
write user * * -//.../*.ccscc
I remember doing this before, but I don't have the necessary permissions to test this here. Check out Perforce's Sysadmin guide and try it out
A: As of version 2012.1, Perforce supports the P4IGNORE environment variable. I updated my answer to this question about ignoring directories with an explanation of how it works. Then I noticed this answer, which is now superfluous I guess.
Assuming you have a client named "CLIENT", a directory named "foo" (located at your project root), and you wish to ignore all .dll files in that directory tree, you can add the following lines to your workspace view to accomplish this:
-//depot/foo/*.dll //CLIENT/foo/*.dll
-//depot/foo/.../*.dll //CLIENT/foo/.../*.dll
The first line removes them from the directory "foo" and the second line removes them from all sub directories. Now, when you 'Reconcile Offline Work...', all the .dll files will be moved into "Excluded Files" folders at the bottom of the folder diff display. They will be out of your way, but can still view and manipulate them if you really need to.
You can also do it another way, which will reduce your "Excluded Files" folder to just one, but you won't be able to manipulate any of the files it contains because the path will be corrupt (but if you just want them out of your way, it doesn't matter).
-//depot/foo.../*.dll //CLIENT/foo.../*.dll
A: Perforce Streams makes ignoring files much easier, as of version 2011.1. According to the documentation, you can ignore certain extensions or certain paths in your directory.
From p4 help stream
Ignored: Optional; a list of file or directory names to be ignored in
client views. For example:
/tmp # ignores files named 'tmp'
/tmp/... # ignores dirs named 'tmp'
.tmp # ignores file names ending in '.tmp'
Lines in the Ignored field may appear in any order. Ignored
names are inherited by child stream client views.
This essentially does what @raven's answer specifies, but is done easier with streams, as it automatically propagates to every work-space using that stream. It also applies to any streams inheriting from the stream in which you specify the ignore types.
You can edit the stream via p4 stream //stream_depot/stream_name or right-clicking the stream in p4v's stream view.
And as @svec noted, the ability to specify ignore files per workspace is coming soon, and is in fact in P4 2012.1 beta.
A: Yes, But.
Perforce version 2012.1 added a feature known as p4ignore, inspired by Git. However the Perforce developers made a change to the behaviour, without justification, that happens to make the feature a lot less useful.
Whilst Git takes rules from all .gitignore files, Perforce doesn't know where to look until you specify a filename in an environment variable P4IGNORE. This freedom is a curse. You can't hack on two repositories that use different names for their ignore files.
Also, Perforce's ignore feature doesn't work out the box. You can set it up easily enough for yourself, but others don't benefit unless they explicitly opt-in. A contributor who hasn't may accidentally commit unwelcome files (eg. a bin folder created by a build script).
Git's ignore feature is great because it works out the box. If the .gitignore files are added to the repository (everyone does this), they'll work out the box for everyone. No-one will accidentally publish their private key.
Amusingly, the Perforce docs shows '.p4ignore' as an example ignore rule, which is backwards! If the rules are useful, they should be shared as part of the repository.
Perforce could still make good on the feature. Choose a convention for the file names, say p4ignore.txt, so the feature works out the box. Drop the P4IGNORE environment variable, it's counterproductive. Edit the docs, to encourage developers to share useful rules. Let users write personal rules in a file in their home folder, as Git does.
If you know anyone at Perforce, please email them this post.
A: Will's suggestion of using .p4ignore only seems to work with the WebSphere Studio (P4WSAD) plugin. I just tried it on my local windows box and any files and directories that I listed were not ignored.
Raven's suggestion of modifying your client spec is the correct way under Perforce. Proper organization of your code/data/executables and generated output files will make the process of excluding files from being checked in much easier.
As a more draconian approach, you can always write a submit trigger which will reject submission of change-lists if they contain a certain file or files with a certain extension, etc.
A: This works as of Perforce 2013.1, the new P4IGNORE mechanism was first added in release, 2012.1, described on the Perforce blog here:
https://www.perforce.com/blog/new-20121-p4ignore
As it's currently described, you set an environment variable "P4IGNORE" to a filename which contains a list of the files to ignore.
So you can check it out to see how you like it.
A: HISTORICAL ANSWER - no longer correct. At the time this was written originally it was true;
You can not write and check in a file that the server will use to make ignore rules; general glob or regexp file pattern ignore in perforce.
Other answers have global server configurations that are global (and not per folder).
The other answers show things that might work for you, if you want one line in your view per folder times number of extensions you want to ignore in that single folder, or that provide this capability in WebSphere Studio plugins only, or provide capability for server administrators, but not available to users.
In short, I find Perforce really weak in this area. While I appreciate that those who use the Eclipse Plugin can use .p4ignore, and I think that's great, it leaves those of us that don't, out in the dark.
UPDATE: See accepted answer for new P4IGNORE capability added mid-2012.
A: I have found it easiest to reconcile offline work using a BASH script like this one:
#!/bin/bash
# reconcile P4 offline work, assuming P4CLIENT is set
if [ -z "$P4CLIENT" ] ; then echo "P4CLIENT is not set"; exit 1; fi
unset PWD # confuses P4 on Windows/CYGWIN
# delete files that are no longer present
p4 diff -sd ... | p4 -x - delete
# checkout files that have been changed.
# I don't run this step. Instead I just checkout everything,
# then revert unchanged files before committing.
p4 diff -se ... | p4 -x - edit
# Add new files, ignoring subversion info, EMACS backups, log files
# Filter output to see only added files and real errors
find . -type f \
| grep -v -E '(\.svn)|(/build.*/)|(/\.settings)|~|#|(\.log)' \
| p4 -x - add \
| grep -v -E '(currently opened for add)|(existing file)|(already opened for edit)'
I adapted this from this Perforce Knowledge Base article.
A: I'm looking for a .p4ignore like solution as well (and not one tied to a particular IDE). Thus far, the closest thing I've found is p4delta. It sounds like it will do exactly what the original poster was asking, albeit through another layer of indirection.
http://p4delta.sourceforge.net
Unfortunately, while this does seem to produce the proper list of files, I can't get "p4delta --execute" to work ("Can't modify a frozen string") and the project has not been updated in year. Perhaps others will have better luck.
A: If you are using the Eclipse Perforce plugin, then the plugin documentation lists several ways to ignore files.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "103"
} |
Q: Which C# project type would you use to redevelop an MFC C++ ActiveX control? Looking at the C# project templates in VS2008, the offerings are WPF User Control Library, WPF Custom Control Library and Windows Forms Control Library. Which of these would you use if you wanted to move a legacy ActiveX control written in C++ into the world of C# and .NET?
A: It sounds like you are trying to do several different things all at once:
*
*Migrate your code to building in a newer version of visual studio.
*Migrate your use of technology to a newer technology (ActiveX to .net)
*Migrate your language (c++ to c#).
If you have a small codebase you are probably as well to start from scratch and port functionality into the new codebase as required.
For a larger codebase you need to realize that this is an expensive task both in effort and defect rate.
An order might be:
*
*Import your code into the newer version of visual studio. Get it compiling. Review the project settings for each project.
*Refactor your code to isolate the mfc and activex code as much as possible. Follow good refactoring practices especially if don't have many unit tests before you start.
*Consider replacing your ActiveX layer with .net.
*Consider which GUI toolkit is best for replacing MFC.
*Language - consider moving first to managed c++.
*Consider moving from managed c++ to c#.
Most importantly be able to justify doing all of the above!
A: There is no project template that will do this for you. You might as well read up and start with a usercontrol.
A: You would have to consider the target application that will host the control. If it is a line-of-business application, I've heard that WPF doesn't offer great advantages over Forms. According to this blog entry however, the author believes that the killer WPF app is a LOB application that leverages the graphical power afforded by WPF for data visualisation.
In the end I guess it is a cost/benefit analysis. Do you go down the WPF route and pay the cost of the learning curve for the future benefit of graphical data visualisation or do you stick with the tried and true method and risk developing an outdated application.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What is the possible mimetype hierarchy of an email message? I'm working with a snippet of code that recursively calls itself and tries to pull out a MIME Type part of text/html from an email (if it exists) for further processing.
The "text/html" could exist inside other content such as multipart/alternative, so I'm trying to find out if there is a defined hierarchy for email MIME Types.
Anybody know if there is and what it is? i.e. what types can parent other types?
A: In theory, only multipart/ and message/ can parent other types (per RFC2046).
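Since only multipart/* and message/* can act as containers, finding a text/html part reduces to a recursive descent through those types. As an illustration, Python's stdlib email package implements exactly this traversal via walk():

```python
from email.message import EmailMessage

def find_html_part(msg):
    """Return the first text/html part of a message, or None.
    walk() recursively descends into multipart/* and message/*
    containers -- the only types that can parent other parts."""
    for part in msg.walk():
        if part.get_content_type() == "text/html":
            return part
    return None

# Build a typical multipart/alternative message to demonstrate.
msg = EmailMessage()
msg["Subject"] = "demo"
msg.set_content("plain text body")                       # text/plain
msg.add_alternative("<p>html body</p>", subtype="html")  # text/html

print(msg.get_content_type())              # multipart/alternative
print(find_html_part(msg).get_content_type())  # text/html
```

The same flatten-then-filter shape works in any language: recurse only when the part's type starts with multipart/ or message/, and test leaf parts against text/html.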
A: Your question assumes that mail clients follow the RFC standards for MIME encoding, which they don't. I'd advise you collect a bunch of mail from sources and try and process it as-it-exists. The problem you are facing is extremely difficult (perhaps impossible) to solve 100%.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Should I use one big SQL Select statement or several small ones? I'm building a PHP page with data sent from MySQL.
Is it better to have
*
*1 SELECT query with 4 table joins, or
*4 small SELECT queries with no table join; I do select from an ID
Which is faster and what is the pro/con of each method? I only need one row from each table.
A: Generally, it's better to have one SELECT statement. One of the main reasons to have databases is that they are fast at processing information, particularly when the work is expressed as a single query.
If there is any drawback to this approach, it's that there are some kinds of analysis that you can't do with one big SELECT statement. RDBMS purists will insist that this is a database design problem, in which case you are back to my original suggestion.
A: When you use JOINs instead of multiple queries, you allow the database to apply its optimizations. You also are potentially retrieving rows that you don't need (if you were to replace an INNER join with multiple selects), which increases the network traffic between your app server and database server. Even if they're on the same box, this matters.
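To make the trade-off concrete, here is a small sketch using Python's sqlite3 for brevity (the table and column names are invented; the same SQL applies to MySQL from PHP). The joined form fetches everything in one round trip; the second form makes four:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
    CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE profiles (user_id INTEGER, bio TEXT);
    CREATE TABLE settings (user_id INTEGER, theme TEXT);
    CREATE TABLE stats    (user_id INTEGER, logins INTEGER);
    INSERT INTO users    VALUES (1, 'alice');
    INSERT INTO profiles VALUES (1, 'hello');
    INSERT INTO settings VALUES (1, 'dark');
    INSERT INTO stats    VALUES (1, 42);
""")

# Option 1: one round trip; the database assembles the row.
row = c.execute("""
    SELECT u.name, p.bio, s.theme, t.logins
    FROM users u
    JOIN profiles p ON p.user_id = u.id
    JOIN settings s ON s.user_id = u.id
    JOIN stats    t ON t.user_id = u.id
    WHERE u.id = ?
""", (1,)).fetchone()

# Option 2: four round trips, assembled in application code.
name   = c.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone()[0]
bio    = c.execute("SELECT bio FROM profiles WHERE user_id = ?", (1,)).fetchone()[0]
theme  = c.execute("SELECT theme FROM settings WHERE user_id = ?", (1,)).fetchone()[0]
logins = c.execute("SELECT logins FROM stats WHERE user_id = ?", (1,)).fetchone()[0]

print(row)  # ('alice', 'hello', 'dark', 42)
```

With an index on each user_id column the join stays cheap, while each extra query in option 2 pays the full client/server round-trip cost.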
A: It might depend on what you do with the data after you fetch it from the DB. If you use each of the four results independently, then it would be more logical and clear to have four separate SELECT statements. On the other hand, if you use all the data together, like to create a unified row in a table or something, then I would go with the single SELECT and JOINs.
I've done a bit of PHP/MySQL work, and I find that even for queries on huge tables with tons of JOINs, the database is pretty good at optimizing - if you have smart indexes. So if you are serious about performance, start reading up on query optimization and indexing.
A: You should run a profiling tool if you're truly worried, because it depends on many things and can vary. But as a rule, it's better to have fewer queries being compiled and fewer round trips to the database.
Make sure you filter things as well as you can using your where and join on clauses.
But honestly, it usually doesn't matter, since you're probably not going to be hit all that hard compared to what the database can handle; unless optimization is in your spec, you should not do it prematurely - do what's simplest.
A: I would say 1 query with the join. This way you need to hit the server only once. And if your tables are joined with indexes, it should be fast.
A: Well under Oracle you'd want to take advantage of the query caching, and if you have a lot of small queries you are doing in your sequential processing, it would suck if the last query pushed the first one out of the cache...just in time for you to loop around and run that first query again (with different parameter values obviously) on the next pass.
We were building an XML output file using Java stored procedures and definitely found the round trip times for each individual query were eating us alive. We found it was much faster to get all the data in as few queries as possible, then plug those values into the XML DOM as needed.
The only downside is that the Java code was a bit less elegant, as the data fetch was now remote from its usage. But we had to generate a large complex XML file in as close to zero time as possible, so we had to optimize for speed.
A: Be careful when dealing with a merge table however. It has been my experience that although a single join can be good in most situations, when merge tables are involved you can run into strange situations.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Uninstall Command Fails Only in Release Mode I'm able to successfully uninstall a third-party application via the command line and via a custom Inno Setup installer.
Command line Execution:
MSIEXEC.exe /x {14D74337-01C2-4F8F-B44B-67FC613E5B1F} /qn
Inno Setup Command:
[Run]
Filename: msiexec.exe; Flags: runhidden waituntilterminated;
Parameters: "/x {{14D74337-01C2-4F8F-B44B-67FC613E5B1F} /qn";
StatusMsg: "Uninstalling Service...";
I am also able to uninstall the application programmatically when executing the following C# code in debug mode.
C# Code:
string fileName = "MSIEXEC.exe";
string arguments = "/x {14D74337-01C2-4F8F-B44B-67FC613E5B1F} /qn";
ProcessStartInfo psi = new ProcessStartInfo(fileName, arguments)
{
CreateNoWindow = true,
UseShellExecute = false,
RedirectStandardOutput = true
};
Process process = Process.Start(psi);
string errorMsg = process.StandardOutput.ReadToEnd();
process.WaitForExit();
The same C# code, however, produces the following failure output when run as a compiled, deployed Windows Service:
"This action is only valid for products that are currently installed."
Additional Comments:
*
*The Windows Service which is issuing the uninstall command is running on the same machine as the code being tested in Debug Mode. The Windows Service is running/logged on as the Local System account.
*I have consulted my application logs and I have validated that the executed command arguments are the same in both debug and release mode.
*I have consulted the Event Viewer, but it doesn't offer any clues.
Thoughts? Any help would be greatly appreciated. Thanks.
A: Step 1: Check the MSI error log files
I'm suspicious that your problem is due to running as LocalSystem.
The Local System account is not the same as a normal user account which happens to have admin rights. It has no access to the network, and its interaction with the registry and file system is quite different.
From memory any requests to read/write to your 'home directory' or HKCU under the registry actually go into either the default user profile, or in the case of temp dirs, c:\windows\temp
A: I've come across similar problems in the past with installation, a customer was using the SYSTEM account to install and this was causing all sorts of permission problems for non-administrative users.
MSI log files aren't really going to help if the application doesn't appear "installed", I'd suggest starting with capturing the output of MSIINV.EXE under the system account, that will get you an "Inventory" of the currently installed programs (or what that user sees installed) http://blogs.msdn.com/brada/archive/2005/06/24/432209.aspx
I think you probably need to go back to the drawing board and see if you really need the windows service to do the uninstall. You'll probably come across all sorts of Vista UAC issues if you haven't already...
A: Thanks to those offering help. This appears to be a permissions issue. I have updated my service to run under an Administrator account and it was able to successfully uninstall the third-party application. To Orion's point, though the Local System account is a powerful account that has full access to the system -- http://technet.microsoft.com/en-us/library/cc782435.aspx -- it doesn't seem to have the necessary rights to perform the uninstall.
[See additional comments for full story regarding the LocalSystem being able to uninstall application for which it installed.]
A: This is bizarre. LocalSystem definitely has the privileges to install applications (that's how Windows Update and software deployment in Active Directory work), so it should be able to uninstall as well.
Perhaps the application is initially installed per-user instead of per-machine?
A: @Paul Lalonde
The app's installer is wrapped within a custom InnoSetup Installer. The InnoSetup installer, in turn, is manually executed by the logged in user. That said, the uninstall is trigged by a service running under the Local System account.
Apparently, you were on to something. I put together a quick test which had the service running under the LocalSystem account install as well as uninstall the application and everything worked flawlessly. You were correct. The LocalSystem account has required uninstall permissions for applications in which it installs. You saved the day. Thanks for the feedback!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Find the settings JNDI is using for error reporting I've got a J2SE application that I am maintaining uses JNDI.
(It uses JNDI to find its J2EE application server.)
It has pretty poor error reporting of failure to find the JNDI server.
I've been looking around for a way to display which server the InitialContext is trying to talk to.
Has anyone got a neat way to do this ?
A: Reporting the value for InitialContext.getEnvironment().get(Context.PROVIDER_URL) might be helpful.
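A minimal sketch of how that might be logged (the class name and the t3:// provider URL below are made-up examples; in a real app the URL would come from your jndi.properties or code):

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JndiDebug {
    public static void main(String[] args) {
        Hashtable<String, Object> env = new Hashtable<>();
        // Hypothetical provider URL; normally read from jndi.properties
        env.put(Context.PROVIDER_URL, "t3://appserver.example.com:7001");
        String provider = String.valueOf(env.get(Context.PROVIDER_URL));
        try {
            Context ctx = new InitialContext(env);
            // Report which server the InitialContext is configured to contact
            System.out.println("JNDI provider: "
                    + ctx.getEnvironment().get(Context.PROVIDER_URL));
        } catch (NamingException e) {
            // Surface the provider URL in the error report
            System.out.println("Failed to contact JNDI provider " + provider + ": " + e);
        }
    }
}
```

Either path reports the provider URL, which is exactly the detail the stock error message omits.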
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Return collection as read-only I have an object in a multi-threaded environment that maintains a collection of information, e.g.:
public IList<string> Data
{
get
{
return data;
}
}
I currently have return data; wrapped by a ReaderWriterLockSlim to protect the collection from sharing violations. However, to be doubly sure, I'd like to return the collection as read-only, so that the calling code is unable to make changes to the collection, only view what's already there. Is this at all possible?
A: If your underlying data is stored as list you can use List(T).AsReadOnly method.
If your data can be enumerated, you can use Enumerable.ToList method to cast your collection to List and call AsReadOnly on it.
A: One should note that aku's answer will only protect the list as being read only. Elements in the list are still very writable. I don't know if there is any way of protecting non-atomic elements without cloning them before placing them in the read only list.
A: I voted for your accepted answer and agree with it--however might I give you something to consider?
Don't return a collection directly. Make an accurately named business logic class that reflects the purpose of the collection.
The main advantage of this comes in the fact that you can't add code to collections so whenever you have a native "collection" in your object model, you ALWAYS have non-OO support code spread throughout your project to access it.
For instance, if your collection was invoices, you'd probably have 3 or 4 places in your code where you iterated over unpaid invoices. You could have a getUnpaidInvoices method. However, the real power comes in when you start to think of methods like "payUnpaidInvoices(payer, account);".
When you pass around collections instead of writing an object model, entire classes of refactorings will never occur to you.
Note also that this makes your problem particularly nice. If you don't want people changing the collections, your container need contain no mutators. If you decide later that in just one case you actually HAVE to modify it, you can create a safe mechanism to do so.
How do you solve that problem when you are passing around a native collection?
Also, native collections can't be enhanced with extra data. You'll recognize this next time you find that you pass in (Collection, Extra) to more than one or two methods. It indicates that "Extra" belongs with the object containing your collection.
A: You can use a copy of the collection instead.
public IList<string> Data {
    get {
        return new List<string>(data);
    }
}
That way it doesn't matter if it gets updated.
A: You want to use the yield keyword. You loop through the IEnumerable list and yield return each item. This allows the consumer to use foreach without modifying the collection.
It would look something like this:
List<string> _Data;
public IEnumerable<string> Data
{
    get
    {
        foreach (string item in _Data)
        {
            yield return item;
        }
    }
}
A: If your only intent is to get calling code to not make a mistake, and modify the collection when it should only be reading all that is necessary is to return an interface which doesn't support Add, Remove, etc.. Why not return IEnumerable<string>? Calling code would have to cast, which they are unlikely to do without knowing the internals of the property they are accessing.
If however your intent is to prevent the calling code from observing updates from other threads you'll have to fall back to solutions already mentioned, to perform a deep or shallow copy depending on your need.
A: I think you're confusing concepts here.
The ReadOnlyCollection provides a read-only wrapper for an existing collection, allowing you (Class A) to pass out a reference to the collection safe in the knowledge that the caller (Class B) cannot modify the collection (i.e. cannot add or remove any elements from the collection.)
There are absolutely no thread-safety guarantees.
*
*If you (Class A) continue to modify the underlying collection after you hand it out as a ReadOnlyCollection then class B will see these changes, have any iterators invalidated, etc. and generally be open to any of the usual concurrency issues with collections.
*Additionally, if the elements within the collection are mutable, both you (Class A) and the caller (Class B) will be able to change any mutable state of the objects within the collection.
Your implementation depends on your needs:
- If you don't care about the caller (Class B) from seeing any further changes to the collection then you can just clone the collection, hand it out, and stop caring.
- If you definitely need the caller (Class B) to see changes that are made to the collection, and you want this to be thread-safe, then you have more of a problem on your hands. One possibility is to implement your own thread-safe variant of the ReadOnlyCollection to allow locked access, though this will be non-trivial and non-performant if you want to support IEnumerable, and it still won't protect you against mutable elements in the collection.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: How do I logout of multiple asp.net applications? I have a main asp.net app, which is written in asp.net 1.1. Running underneath the application are several 2.0 apps. To completely log out a user, can I just log out of the 1.1 app with FormsAuthentication.SignOut, or is it more complicated than that?
A: What you are looking to do is called Single Sign On and Single Sign Off. There are differences based on how you have the applications set up. I will try to clarify where those differences come into play.
To implement single sign on and single sign off you need to make the cookie name, protection, and path attributes the same between all the applications.
<authentication mode="Forms">
<forms name=".cookiename"
loginUrl="~/Login.aspx"
timeout="30"
path="/" />
</authentication>
Next you need to add the machine keys and they need to be the same between all your applications.
<machineKey validationKey="F9D1A2D3E1D3E2F7B3D9F90FF3965ABDAC304902"
encryptionKey="F9D1A2D3E1D3E2F7B3D9F90FF3965ABDAC304902F8D923AC"
validation="SHA1" />
Are you using second or third level domains for the applications? If so you will need to do a little bit more by adding the domain to the cookie:
protected void Login(string userName, string password)
{
System.Web.HttpCookie cookie = FormsAuthentication.GetAuthCookie(userName, false);
cookie.Domain = "domain1.com";
cookie.Expires = DateTime.Now.AddDays(30);
Response.AppendCookie(cookie);
}
Now to do single sign off, calling FormsAuthentication.SignOut may not be enough. The next best thing is to set the cookie expiration to a past date. This will ensure that the cookie will not be used again for authentication.
protected void Logout(string userName)
{
System.Web.HttpCookie cookie = FormsAuthentication.GetAuthCookie(userName, false);
cookie.Domain = "domain1.com";
cookie.Expires = DateTime.Now.AddDays(-1);
Response.AppendCookie(cookie);
}
I am taking into consideration you are using the same database for all the applications. If the applications use a separate database for registration and authentication, then we will need to do some more. Just let me know if this is the case. Otherwise this should work for you.
A: It could be easier if you have a central session store for all your applications. You can then clear the session in one place.
A: This worked for me:
In the Logout event, instead of FormsAuthentication.GetAuthCookie method use Cookies collection in Request object as below:
HttpCookie cookie = Request.Cookies.Get(otherSiteCookieName);
cookie.Expires = DateTime.Now.AddDays(-1);
HttpContext.Current.Response.Cookies.Add(cookie);
Of course, this requires that you know the cookie name of the site(s) you want the user to be logged out of - which however won't be a problem if you are using the same cookie across all the web apps.
A: I prefer to use web.config
<authentication mode="Forms">
<forms domain=".tv.loc" loginUrl="~/signin" timeout="2880" name="auth" />
</authentication>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I conditionally create a stored procedure in SQL Server? As part of my integration strategy, I have a few SQL scripts that run in order to update the database. The first thing all of these scripts do is check to see if they need to run, e.g.:
if @version <> @expects
begin
declare @error varchar(100);
set @error = 'Invalid version. Your version is ' + convert(varchar, @version) + '. This script expects version ' + convert(varchar, @expects) + '.';
raiserror(@error, 10, 1);
end
else
begin
...sql statements here...
end
Works great! Except if I need to add a stored procedure. The "create proc" command must be the only command in a batch of sql commands. Putting a "create proc" in my IF statement causes this error:
'CREATE/ALTER PROCEDURE' must be the first statement in a query batch.
Ouch! How do I put the CREATE PROC command in my script, and have it only execute if it needs to?
A: But watch out for single quotes within your Stored Procedure - they need to be "escaped" by adding a second one. The first answer has done this, but just in case you missed it. A trap for young players.
A: Versioning your database is the way to go, but... Why conditionally create stored procedures. For Views, stored procedures, functions, just conditionally drop them and re-create them every time. If you conditionally create, then you will not clean-up databases that have a problem or a hack that got put in 2 years ago by another developer (you or I would never do this) who was sure he would remember to remove the one time emergency update.
A: Here's what I came up with:
Wrap it in an EXEC(), like so:
if @version <> @expects
begin
...snip...
end
else
begin
exec('CREATE PROC MyProc AS SELECT ''Victory!''');
end
Works like a charm!
A: Problem with dropping and creating is you lose any security grants that had previously been applied to the object being dropped.
A: This is an old thread, but Jobo is incorrect: Create Procedure must be the first statement in a batch. Therefore, you can't use Exists to test for existence and then use either Create or Alter. Pity.
A: It is much better to alter an existing stored proc because of the potential for properties and permissions that have been added AND which will be lost if the stored proc is dropped.
So, test whether it does NOT EXIST; if it does not, create a dummy proc. Then after that use an ALTER statement.
IF NOT EXISTS(SELECT * FROM sysobjects WHERE Name = 'YOUR_STORED_PROC_NAME' AND xtype='P')
EXECUTE('CREATE PROC [dbo].[YOUR_STORED_PROC_NAME] as BEGIN select 0 END')
GO
ALTER PROC [dbo].[YOUR_STORED_PROC_NAME]
....
A: SET NOEXEC ON is a good way to switch off part of a script
IF NOT EXISTS (SELECT * FROM sys.assemblies WHERE name = 'SQL_CLR_Functions')
SET NOEXEC ON
GO
CREATE FUNCTION dbo.CLR_CharList_Split(@list nvarchar(MAX), @delim nchar(1) = N',')
RETURNS TABLE (str nvarchar(4000)) AS EXTERNAL NAME SQL_CLR_Functions.[Granite.SQL.CLR.Functions].CLR_CharList_Split
GO
SET NOEXEC OFF
Found here:
https://codereview.stackexchange.com/questions/10490/conditional-create-must-be-the-only-statement-in-the-batch
P.S. Another way is SET PARSEONLY { ON | OFF }.
A: I must admit, I would normally agree with @Peter - I conditionally drop and then unconditionally recreate every time. I've been caught out too many times in the past when trying to second-guess the schema differences between databases, with or without any form of version control.
Having said that, your own suggestion @Josh is pretty cool. Certainly interesting. :-)
A: My solution is to check if the proc exists, if so then drop it, and then create the proc (same answer as @robsoft but with an example...)
IF EXISTS(SELECT * FROM sysobjects WHERE Name = 'PROC_NAME' AND xtype='P')
BEGIN
DROP PROCEDURE PROC_NAME
END
GO
CREATE PROCEDURE PROC_NAME
@value int
AS
BEGIN
UPDATE SomeTable
SET SomeColumn = 1
WHERE Value = @value
END
GO
A: IF NOT EXISTS(SELECT * FROM sys.procedures WHERE name = 'pr_MyStoredProc')
BEGIN
CREATE PROCEDURE pr_MyStoredProc AS .....
SET NOCOUNT ON
END
ALTER PROC pr_MyStoredProc
AS
SELECT * FROM tb_MyTable
A: use the 'Exists' command in T-SQL to see if the stored proc exists. If it does, use 'Alter', else use 'Create'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: When do function-level static variables get allocated/initialized? I'm quite confident that globally declared variables get allocated (and initialized, if applicable) at program start time.
int globalgarbage;
unsigned int anumber = 42;
But what about static ones defined within a function?
void doSomething()
{
static bool globalish = true;
// ...
}
When is the space for globalish allocated? I'm guessing when the program starts. But does it get initialized then too? Or is it initialized when doSomething() is first called?
A: I tested the code from Adam Pierce again and added two more cases: a static variable in a class, and a POD type. My compiler is g++ 4.8.1, on Windows (MinGW-32).
The result is that a static variable in a class is treated the same as a global variable: its constructor is called before the main function is entered.
*
*Conclusion (for g++, Windows environment):
*
*Global variables and static members in classes: the constructor is called before the main function is entered (1).
*Local static variables: the constructor is only called the first time execution reaches the declaration.
*If a local static variable is of POD type, it is also initialized before the main function is entered (1).
Example of a POD type: static int number = 10;
(1): More precisely: "before any function from the same translation unit is called". For simplicity, as in the example below, that is the main function.
#include <iostream>
#include <string>
using namespace std;
class test
{
public:
test(const char *name)
: _name(name)
{
cout << _name << " created" << endl;
}
~test()
{
cout << _name << " destroyed" << endl;
}
string _name;
static test t; // static member
};
test test::t("static in class");
test t("global variable");
void f()
{
static test t("static variable");
static int num = 10 ; // POD type, init before enter main function
test t2("Local variable");
cout << "Function executed" << endl;
}
int main()
{
test t("local to main");
cout << "Program start" << endl;
f();
cout << "Program end" << endl;
return 0;
}
result:
static in class created
global variable created
local to main created
Program start
static variable created
Local variable created
Function executed
Local variable destroyed
Program end
local to main destroyed
static variable destroyed
global variable destroyed
static in class destroyed
Has anybody tested this in a Linux environment?
A: Some relevant verbiage from C++ Standard:
3.6.2 Initialization of non-local objects [basic.start.init]
1
The storage for objects with static storage
duration (basic.stc.static) shall be zero-initialized (dcl.init)
before any other initialization takes place. Objects of
POD types (basic.types) with static storage duration
initialized with constant expressions (expr.const) shall be
initialized before any dynamic initialization takes place.
Objects of namespace scope with static storage duration defined in
the same translation unit and dynamically initialized shall be
initialized in the order in which their definition appears in
the translation unit. [Note: dcl.init.aggr describes the
order in which aggregate members are initialized. The
initialization of local static objects is described in stmt.dcl. ]
[more text below adding more liberties for compiler writers]
6.7 Declaration statement [stmt.dcl]
...
4
The zero-initialization (dcl.init) of all local objects with
static storage duration (basic.stc.static) is performed before
any other initialization takes place. A local object of
POD type (basic.types) with static storage duration
initialized with constant-expressions is initialized before its
block is first entered. An implementation is permitted to perform
early initialization of other local objects with static storage
duration under the same conditions that an implementation is
permitted to statically initialize an object with static storage
duration in namespace scope (basic.start.init). Otherwise such
an object is initialized the first time control passes through its
declaration; such an object is considered initialized upon the
completion of its initialization. If the initialization exits by
throwing an exception, the initialization is not complete, so it will
be tried again the next time control enters the declaration. If control re-enters the declaration (recursively) while the object is being
initialized, the behavior is undefined. [Example:
int foo(int i)
{
static int s = foo(2*i); // recursive call - undefined
return i+1;
}
--end example]
5
The destructor for a local object with static storage duration will
be executed if and only if the variable was constructed.
[Note: basic.start.term describes the order in which local
objects with static storage duration are destroyed. ]
A:
Or is it initialized when doSomething() is first called?
Yes, it is. This, among other things, lets you initialize globally-accessed data structures when it is appropriate, for example inside try/catch blocks. E.g. instead of
int foo = init(); // bad if init() throws something
int main() {
try {
...
}
catch(...){
...
}
}
you can write
int& foo() {
static int myfoo = init();
return myfoo;
}
and use it inside the try/catch block. On the first call, the variable will be initialized. Then, on the first and next calls, its value will be returned (by reference).
A: The memory for all static variables is allocated at program load. But local static variables are created and initialized the first time they are used, not at program start up. There's some good reading about that, and statics in general, here. In general I think some of these issues depend on the implementation, especially if you want to know where in memory this stuff will be located.
A: Static variables are allocated inside a data segment -- they are part of the executable image, and so are mapped in already initialized.
Static variables within function scope are treated the same, the scoping is purely a language level construct.
For this reason you are guaranteed that a static variable will be initialized to 0 (unless you specify something else) rather than an undefined value.
There are some other facets to initialization you can take advantage off -- for example shared segments allow different instances of your executable running at once to access the same static variables.
In C++ (globally scoped) static objects have their constructors called as part of the program start up, under the control of the C runtime library. Under Visual C++ at least the order that objects are initialized in can be controlled by the init_seg pragma.
A: I was curious about this so I wrote the following test program and compiled it with g++ version 4.1.2.
#include <iostream>
#include <string>
using namespace std;
class test
{
public:
test(const char *name)
: _name(name)
{
cout << _name << " created" << endl;
}
~test()
{
cout << _name << " destroyed" << endl;
}
string _name;
};
test t("global variable");
void f()
{
static test t("static variable");
test t2("Local variable");
cout << "Function executed" << endl;
}
int main()
{
test t("local to main");
cout << "Program start" << endl;
f();
cout << "Program end" << endl;
return 0;
}
The results were not what I expected. The constructor for the static object was not called until the first time the function was called. Here is the output:
global variable created
local to main created
Program start
static variable created
Local variable created
Function executed
Local variable destroyed
Program end
local to main destroyed
static variable destroyed
global variable destroyed
A: The compiler will allocate static variable(s) defined in a function foo at program load, however the compiler will also add some additional instructions (machine code) to your function foo so that the first time it is invoked this additional code will initialize the static variable (e.g. invoking the constructor, if applicable).
@Adam: This behind the scenes injection of code by the compiler is the reason for the result you saw.
A: In the following code it prints Initial = 4, which is the initial value of static_x, because the variable is placed (already initialized) in the executable image at compile/link time. Note that reading it through a hard-coded address, as below, is implementation-specific and formally undefined behavior.
int func(int x)
{
static int static_x = 4;
static_x = x;
printf ("Address = 0x%x",&static_x ); // prints 0x40a010
return static_x;
}
int main()
{
int x = 8;
uint32_t *ptr = (uint32_t *)(0x40a010); // static_x location
printf ("Initial = %d\n",*ptr);
func(x);
return 0;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "103"
} |
Q: Very slow compile times on Visual Studio 2005 We are getting very slow compile times, which can take upwards of 20+ minutes on dual core 2GHz, 2G Ram machines.
A lot of this is due to the size of our solution, which has grown to 70+ projects, as well as VSS, which is a bottleneck in itself when you have a lot of files. (Swapping out VSS is not an option unfortunately, so I don't want this to descend into a VSS bash.)
We are looking at merging projects. We are also looking at having multiple solutions to achieve greater separation of concerns and quicker compile times for each element of the application. This, I can see, will become a DLL hell as we try to keep things in sync.
I am interested to know how other teams have dealt with this scaling issue, what do you do when your code base reaches a critical mass that you are wasting half the day watching the status bar deliver compile messages.
UPDATE
I neglected to mention this is a C# solution. Thanks for all the C++ suggestions, but it's been a few years since I've had to worry about headers.
EDIT:
Nice suggestions that have helped so far (not saying there aren't other nice suggestions below, just what has helped)
*
*New 3GHz laptop - the power of lost utilization works wonders when whinging to management
*Disable Anti Virus during compile
*'Disconnecting' from VSS (actually the network) during compile - I may get us to remove VS-VSS integration altogether and stick to using the VSS UI
Still not rip-snorting through a compile, but every bit helps.
Orion did mention in a comment that generics may play a part also. From my tests there does appear to be a minimal performance hit, but not high enough to be sure - compile times can be inconsistent due to disc activity. Due to time limitations, my tests didn't include as many generics, or as much code, as would appear in the live system, so that may accumulate. I wouldn't avoid using generics where they are supposed to be used, just for compile time performance.
WORKAROUND
We are testing the practice of building new areas of the application in new solutions, importing in the latest dlls as required, them integrating them into the larger solution when we are happy with them.
We may also do the same to existing code by creating temporary solutions that just encapsulate the areas we need to work on, and throwing them away after reintegrating the code. We need to weigh up the time it will take to reintegrate this code against the time we gain by not having Rip Van Winkle-like experiences with rapid recompiling during development.
A: If this is C or C++, and you're not using precompiled headers, you should be.
A: The Chromium.org team listed several options for accelerating the build (at this point about half-way down the page):
In decreasing order of speedup:
*
*Install Microsoft hotfix 935225.
*Install Microsoft hotfix 947315.
*Use a true multicore processor (ie. an Intel Core Duo 2; not a Pentium 4 HT).
*Use 3 parallel builds. In Visual Studio 2005, you will find the option in Tools > Options... > Projects and Solutions > Build and Run > maximum number of parallel project builds.
*Disable your anti-virus software for .ilk, .pdb, .cc, .h files and only check for viruses on modify. Disable scanning the directory where your sources reside. Don't do anything stupid.
*Store and build the Chromium code on a second hard drive. It won't really speed up the build but at least your computer will stay responsive when you do gclient sync or a build.
*Defragment your hard drive regularly.
*Disable virtual memory.
A: We had 80+ projects in our main solution which took around 4 to 6 minutes to build, depending on what kind of machine a developer was working on. We considered that to be way too long: for every single test it really eats away your FTEs.
So how do you get faster build times? As you seem to already know, it is the number of projects that really hurts the build time. Of course we did not want to get rid of all our projects and simply throw all source files into one. But we had some projects that we could combine nevertheless: as every "Repository project" in the solution had its own unit-test project, we simply combined all the unit-test projects into one global unit-test project. That cut the number of projects down by about 12 and somehow saved 40% of the time to build the entire solution.
We are thinking about another solution though.
Have you also tried setting up a new (second) solution with a new project? This second solution should simply incorporate all files using solution folders. You might be surprised to see the build time of that new solution-with-just-one-project.
However, working with two different solutions will take some careful consideration. Developers might be inclined to actually -work- in the second solution and completely neglect the first. As the first solution with the 70+ projects will be the solution that takes care of your object hierarchy, this should be the solution where your build server runs all your unit tests. So the server for Continuous Integration must use the first project/solution. You have to maintain your object hierarchy, right?
The second solution with just one project (which will build mucho faster) will then be the project where testing and debugging will be done by all developers. You have to make sure they keep an eye on the build server though! If anything breaks it MUST be fixed.
A: Make sure your references are Project references, and not directly to the DLLs in the library output directories.
Also, have these set to not copy locally except where absolutely necessary (The master EXE project).
A: I posted this response originally here:
https://stackoverflow.com/questions/8440/visual-studio-optimizations#8473
You can find many other helpful hints on that page.
If you are using Visual Studio 2008, you can compile using the /MP flag to build a single project in parallel. I have read that this is also an undocumented feature in Visual Studio 2005, but have never tried it myself.
You can build multiple projects in parallel by using the /M flag, but this is usually already set to the number of available cores on the machine, though this only applies to VC++ I believe.
A: I notice this question is ages old, but the topic is still of interest today. The same problem bit me lately, and the two things that improved build performance the most were (1) use a dedicated (and fast) disk for compiling and (2) use the same outputfolder for all projects, and set CopyLocal to False on project references.
Some additional resources:
*
*https://stackoverflow.com/questions/8440/visual-studio-optimizations
*http://weblogs.asp.net/scottgu/archive/2007/11/01/tip-trick-hard-drive-speed-and-visual-studio-performance.aspx
*http://arnosoftwaredev.blogspot.com/2010/05/how-to-improve-visual-studio-compile.html
*http://blog.brianhartsock.com/2009/12/22/analyzing-visual-studio-build-performance/
A: We have nearly 100 projects in one solution and a dev build time of only seconds :)
For local development builds we created a Visual Studio Addin that changes Project references to DLL references and unloads the unwanted projects (and an option to switch them back of course).
*
*Build our entire solution once
*Unload the projects we are not currently working on and change all project references to DLL references.
*Before check-in change all references back from DLL to project references.
Our builds now only take seconds when we are working on only a few projects at a time. We can also still debug the additional projects as it links to the debug DLLs. The tool typically takes 10-30 seconds to make a large number of changes, but you don't have to do it that often.
Update May 2015
The deal I made (in comments below), was that I would release the plugin to Open Source if it gets enough interest. 4 years later it has only 44 votes (and Visual Studio now has two subsequent versions), so it is currently a low-priority project.
A: Some analysis tools:
tools->options->VC++ project settings -> Build Timing = Yes
will tell you build time for every vcproj.
Add /Bt switch to compiler command line to see how much every CPP file took
Use /showIncludes to catch nested includes (header files that include other header files), and see what files could save a lot of IO by using forward declarations.
This will help you optimize compiler performance by eliminating dependencies and performance hogs.
A: Before spending money to invest in faster hard drives, try building your project entirely on a RAM disk (assuming you have the RAM to spare). You can find various free RAM disk drivers on the net. You won't find any physical drive, including SSDs, that are faster than a RAM disk.
In my case, a project that took 5 minutes to build on a 6-core i7 on a 7200 RPM SATA drive with Incredibuild was reduced by only about 15 seconds by using a RAM disk. Considering the need to recopy to permanent storage and the potential for lost work, 15 seconds is not enough incentive to use a RAM disk and probably not much incentive to spend several hundreds of dollars on a high-RPM or SSD drive.
The small gain may indicate that the build was CPU bound or that Windows file caching was rather effective, but since both tests were done from a state where the files weren't cached, I lean heavily towards CPU-bound compiles.
Depending on the actual code you're compiling your mileage may vary -- so don't hesitate to test.
A: How big is your build directory after doing a complete build? If you stick with the default setup then every assembly that you build will copy all of the DLLs of its dependencies and its dependencies' dependencies etc. to its bin directory. In my previous job when working with a solution of ~40 projects my colleagues discovered that by far the most expensive part of the build process was copying these assemblies over and over, and that one build could generate gigabytes of copies of the same DLLs over and over again.
Here's some useful advice from Patrick Smacchia, author of NDepend, about what he believes should and shouldn't be separate assemblies:
http://codebetter.com/patricksmacchia/2008/12/08/advices-on-partitioning-code-through-net-assemblies/
There are basically two ways you can work around this, and both have drawbacks. One is to reduce the number of assemblies, which is obviously a lot of work. Another is to restructure your build directories so that all your bin folders are consolidated and projects do not copy their dependencies' DLLs - they don't need to because they are all in the same directory already. This dramatically reduces the number of files created and copied during a build, but it can be difficult to set up and can leave you with some difficulty pulling out only the DLLs required by a specific executable for packaging.
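For the second approach (consolidated output, no per-project copying), a hedged MSBuild sketch follows; the property names are real MSBuild/csproj settings, but the paths and project names are purely illustrative:

```xml
<!-- In each .csproj: build into one shared folder (path is illustrative) -->
<PropertyGroup>
  <OutputPath>..\..\bin\$(Configuration)\</OutputPath>
</PropertyGroup>

<!-- Project references: don't copy the referenced DLL next to this project -->
<ItemGroup>
  <ProjectReference Include="..\Core\Core.csproj">
    <Private>False</Private>
  </ProjectReference>
</ItemGroup>
```

Setting `Private` to `False` is the file-based equivalent of switching "Copy Local" off in the Visual Studio reference properties.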
A: I had a similar issue on a solution with 21 projects and 1/2 million LOC. The biggest difference was getting faster hard drives. From the performance monitor, the 'Avg. Disk Queue' would jump up significantly on the laptop, indicating the hard drive was the bottleneck.
Here's some data for total rebuild times...
1) Laptop, Core 2 Duo 2GHz, 5400 RPM drive (not sure of the cache; it was a standard Dell Inspiron).
Rebuild Time = 112 seconds.
2) Desktop (standard issue), Core 2 Duo 2.3Ghz, single 7200RPM Drive 8MB Cache.
Rebuild Time = 72 seconds.
3) Desktop Core 2 Duo 3Ghz, single 10000 RPM WD Raptor
Rebuild Time = 39 seconds.
The impact of the 10,000 RPM drive cannot be overstated. Builds were significantly quicker, plus everything else like displaying documentation and using file explorer was noticeably quicker. It was a big productivity boost from speeding up the code-build-run cycle.
Given what companies spend on developer salaries, it is insane how much they can waste by equipping them with the same PCs as the receptionist uses.
A: Perhaps take some common functions and make some libraries, that way the same sources are not being compiled over and over again for multiple projects.
If you are worried about different versions of DLLs getting mixed up, use static libraries.
A: Turn off VSS integration. You may not have a choice in using it, but DLLs get "accidentally" renamed all the time...
And definitely check your pre-compiled header settings. Bruce Dawson's guide is a bit old, but still very good - check it out: http://www.cygnus-software.com/papers/precompiledheaders.html
A: I have a project which has 120 or more exes, libs and dlls and takes a considerable time to build. I use a tree of batch files that call make files from one master batch file. I have had problems with odd things from incremental (or was it temperamental) headers in the past so I avoid them now. I do a full build infrequently, and usually leave it to the end of the day while I go for a walk for an hour (so I can only guess it takes about half an hour). So I understand why that is unworkable for working and testing.
For working and testing I have another set of batch files for each app (or module or library) which also have all the debugging settings in place -- but these still call the same make files. I may switch DEBUG on or off from time to time, and also decide on builds or makes, or whether I also want to build libs that the module may depend on, and so on.
The batch file also copies the completed result into the (or several) test folders. Depending on the settings this completes in several seconds to a minute (as opposed to, say, half an hour).
I used a different IDE (Zeus) as I like to have control over things like .rc files, and actually prefer to compile from the command line, even though I am using MS compilers.
Happy to post an example of this batch file if anyone is interested.
A: Disable file system indexing on your source directories (specifically the obj directories if you want your source searchable)
A: For C# .NET builds, you can use .NET Demon. It's a product that takes over the Visual Studio build process to make it faster.
It does this by analyzing the changes you made, and builds only the project you actually changed, as well as other projects that actually relied on the changes you made. That means if you only change internal code, only one project needs to build.
A: Turn off your antivirus. It adds ages to the compile time.
A: Use distributed compilation. Xoreax IncrediBuild can cut compilation time down to few minutes.
I've used it on a huge C\C++ solution which usually takes 5-6 hours to compile. IncrediBuild helped to reduce this time to 15 minutes.
A: Instructions for reducing your Visual Studio compile time to a few seconds
Visual Studio is unfortunately not smart enough to distinguish an assembly's interface changes from inconsequential code-body changes. This fact, when combined with large intertwined solutions, can sometimes create a perfect storm of unwanted 'full builds' nearly every time you change a single line of code.
A strategy to overcome this is to disable the automatic reference-tree builds. To do this, use the 'Configuration Manager' (Build / Configuration Manager...then in the Active solution configuration dropdown, choose 'New') to create a new build configuration called 'ManualCompile' that copies from the Debug configuration, but do not check the 'Create new project configurations' checkbox. In this new build configuration, uncheck every project so that none of them will build automatically. Save this configuration by hitting 'Close'. This new build configuration is added to your solution file.
You can switch from one build configuration to another via the build configuration dropdown at the top of your IDE screen (the one that usually shows either 'Debug' or 'Release'). Effectively this new ManualCompile build configuration will render useless the Build menu options for: 'Build Solution' or 'Rebuild Solution'. Thus, when you are in the ManualCompile mode, you must manually build each project that you are modifying, which can be done by right-clicking on each affected project in the Solution Explorer, and then selecting 'Build' or 'Rebuild'. You should see that your overall compile times will now be mere seconds.
For this strategy to work, it is necessary for the VersionNumber found in the AssemblyInfo and GlobalAssemblyInfo files to remain static on the developer's machine (not during release builds of course), and that you don't sign your DLLs.
A potential risk of using this ManualCompile strategy is that the developer might forget to compile required projects, and when they start the debugger, they get unexpected results (unable to attach debugger, files not found, etc.). To avoid this, it is probably best to use the 'Debug' build configuration to compile a larger coding effort, and only use the ManualCompile build configuration during unit testing or for making quick changes that are of limited scope.
A: If this is a web app, setting batch build to true can help depending on the scenario.
<compilation defaultLanguage="c#" debug="true" batch="true" >
You can find an overview here: http://weblogs.asp.net/bradleyb/archive/2005/12/06/432441.aspx
A: One cheaper alternative to Xoreax IB is the use of what I call uber-file builds. It's basically a .cpp file that has
#include "file1.cpp"
#include "file2.cpp"
....
#include "fileN.cpp"
Then you compile the uber units instead of the individual modules. We've seen compile times go from 10-15 minutes down to 1-2 minutes. You might have to experiment with how many #includes per uber file make sense; it depends on the project. Maybe you include 10 files, maybe 20.
You pay a cost so beware:
*
*You can't right click a file and say "compile..." as you have to exclude the individual cpp files from the build and include only the uber cpp files
*You have to be careful of static global variable conflicts.
*When you add new modules, you have to keep the uber files up to date
It's kind of a pain, but for a project that is largely static in terms of new modules, the initial pain might be worth it. I've seen this method beat IB in some cases.
A: You also may want to check for circular project references. It was an issue for me once.
That is:
Project A references Project B
Project B references Project C
Project C references Project A
A: If it's a C++ project, then you should be using precompiled headers. This makes a massive difference in compile times. Not sure what cl.exe is really doing (with not using precompiled headers), it seems to be looking for lots of STL headers in all of the wrong places before finally going to the correct location. This adds entire seconds to every single .cpp file being compiled. Not sure if this is a cl.exe bug, or some sort of STL problem in VS2008.
A: Looking at the machine that you're building on, is it optimally configured?
We just got our build time for our largest C++ enterprise-scale product down from 19 hours to 16 minutes by ensuring the right SATA filter driver was installed.
Subtle.
A: There's an undocumented /MP switch in Visual Studio 2005, see http://lahsiv.net/blog/?p=40, which enables parallel compilation on a per-file rather than per-project basis. This may speed up compilation of the last project in a build, or of a single project.
A: When choosing a CPU: L1 cache size seems to have a huge impact on compilation time. Also, it is usually better to have 2 fast cores than 4 slow ones. Visual Studio doesn't use the extra cores very effectively. (I base this on my experience with the C++ compiler, but it is probably also true for the C# one.)
A: I'm also now convinced there is a problem with VS2008. I'm running it on a dual-core Intel laptop with 3 GB RAM, with anti-virus switched off. Compiling the solution is often quite slick, but if I have been debugging, a subsequent recompile will often slow down to a crawl. It is clear from the continuous main disk light that there is a disk I/O bottleneck (you can hear it, too). If I cancel the build and shut down VS, the disk activity stops. Restart VS, reload the solution and then rebuild, and it is much faster. Until the next time.
My thoughts are that this is a memory paging issue - VS just runs out of memory and the O/S starts page swapping to try to make space but VS is demanding more than page swapping can deliver, so it slows down to a crawl. I can't think of any other explanation.
VS definitely is not a RAD tool, is it?
A: Does your company happen to use Entrust for their PKI/Encryption solution by any chance? It turns out, we were having abysmal build performance for a fairly large website built in C#, taking 7+ minutes on a Rebuild-All.
My machine is an i7-3770 with 16gb ram and a 512GB SSD, so performance should not have been that bad. I noticed my build times were insanely faster on an older secondary machine building the same codebase. So I fired up ProcMon on both machines, profiled the builds, and compared the results.
Lo and behold, the slow-performing machine had one difference -- a reference to the Entrust.dll in the stack trace. Using this newly acquired info, I continued to search StackOverflow and found this: MSBUILD (VS2010) very slow on some machines. According to the accepted answer, the problem lies in the fact that the Entrust handler was processing the .NET certificate checks instead of the native Microsoft handler. It is also suggested that Entrust v10 solves this issue, which is prevalent in Entrust 9.
I currently have it uninstalled and my build times plummeted to 24 seconds. YMMV with the number of projects you currently are building, and this may not directly address the scaling issue you were inquiring about. I will post an edit to this response if I can provide a fix without resorting to uninstalling the software.
A: Nice suggestions that have helped so far (not saying there aren't other nice suggestions below; if you are having issues, I recommend reading them too) - this is just what has helped us:
*
*New 3GHz laptop - the power of lost utilization works wonders when whinging to management
*Disable Anti Virus during compile
*'Disconnecting' from VSS (actually the network) during compile - I may get us to remove VS-VSS integration altogether and stick to using the VSS UI
Still not rip-snorting through a compile, but every bit helps.
We are also testing the practice of building new areas of the application in new solutions, importing in the latest DLLs as required, then integrating them into the larger solution when we are happy with them.
We may also do the same to existing code by creating temporary solutions that just encapsulate the areas we need to work on, and throwing them away after reintegrating the code. We need to weigh up the time it will take to reintegrate this code against the time we gain by not having Rip Van Winkle-like experiences with rapid recompiling during development.
Orion did mention in a comment that generics may have a part to play also. From my tests there does appear to be a minimal performance hit, but not high enough to be sure - compile times can be inconsistent due to disc activity. Due to time limitations, my tests didn't include as many generics, or as much code, as would appear in a live system, so that may accumulate. I wouldn't avoid using generics where they are supposed to be used, just for compile time performance
A: I'm sure there's a problem with VS2008, because the only thing I've done is install VS2008 to upgrade my project, which was created with VS2005.
I've only got 2 projects in my solution. It isn't big.
Compilation with VS2005: 30 seconds
Compilation with VS2008: 5 minutes
A: There are a few things that I have found useful for speeding up C#/.NET builds:
*
*Combine small projects into larger projects as there is a large per project overhead on building a solution. (Use nDepend if needed to control calling across layers)
*Make all your projects build into the same output directory and then set “copy local” to false on all the project references – this can lead to a large speed-up due to reduced IO.
*Turn off your virus checker to see if it makes much difference; if so, find a faster virus checker, or exclude the "hot" folders from virus scanning
*Use Process Monitor and the other Sysinternals tools to see why your compiles are taking so long.
*Consider a ram disk to put your output directory on.
*Consider using a SSD
*More memory can have a big effect at times (if you get a big slowdown after removing a little RAM, you may get a big speedup by adding more)
*Remove unneeded project references (you may have to remove unneeded “usings” first)
*Consider using a dependency injection framework and interfaces for your lowest domain layer, so a recompile of everything is only needed when the interface changes – this may not gain much depending on how often the interface is changed.
A: I have found the following helps the compile speed: After repeated compiles (iterative development in memory), I would quit VS2008. Then go into the project directories and delete all the obj and bin folders, start the project back up, and my compile time went way back down. This is similar to the Clean solution action within VS2008.
Just want to confirm if this is useful for anyone out there.
A: I found my fix here: http://blogs.msdn.com/b/vsdteam/archive/2006/09/15/756400.aspx
A: Slow Visual Studio Performance … Solved!
September 24th, 2014 by Uzma Abidi
I had an odd performance-related issue today. My Microsoft Visual Studio seemed to be taking far too long to perform even the simplest of operations. I Googled around and tried a few ideas that people had such as disabling add-ins or clearing Visual Studio’s recent projects list but those suggestions didn’t seem to solve the problem. I remembered that the Windows SysInternals website had a tool called Process Monitor that would sniff registry and file accesses by any running program. It seemed to me that Visual Studio was up to something and Process Monitor should help me figure out what it was. I downloaded the most recent version, and after fiddling around a bit with its display filters, ran it and to my horror, I saw that Visual Studio was so slow because it was accessing the more than 10,000 folders in C:\Users\krintoul\AppData\Local\Microsoft\WebSiteCache on most IDE operations. I’m not sure why there were that many folders and moreover, wasn’t sure what Visual Studio was doing with them, but after I zipped those folders up and moved them somewhere else, Visual Studio’s performance improved tremendously.
The Windows SysInternals website has a number of other useful utilities for network management, security, system information and more. Check it out. I’m sure you’ll find something of value.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "133"
} |
Q: How do I calculate the "cost" of a crash? Background:
Some time ago, I built a system for recording and categorizing application crashes for one of our internal programs. At the time, I used a combination of frequency and aggregated lost time (the time between the program launch and the crash) for prioritizing types of crashes. It worked reasonably well.
Now, The Powers That Be want solid numbers on the cost of each type of crash being worked on. Or at least, numbers that look solid. I suppose I could use the aggregate lost time, multiplied by some plausible figure, but it seems dodgy.
Question:
Are there any established methods of calculating the real-world cost of application crashes? Or failing that, published studies speculating on such costs?
Consensus
Accuracy is impossible, but an estimate based on uptime should suffice if it is applied consistently and its limitations clearly documented. Thanks, Matt, Orion, for taking time to answer this.
A: There is a missing factor here .. most applications have a 'buckling' factor where crashes suddenly start "costing" a lot more because people lose confidence in the service your app is providing. Once that happens, it can be very costly to get users back to trusting and using the system.
A: It depends...
In terms of cost, the only thing that matters is the business impact of the crash, so it rather depends on the type of application.
For many applications, it may not be possible to determine business impact. For others, there may be meaningful measures.
Demand-based measures may be meaningful - if sales are steady then downtime for a sales app may be a useful measure. If sales fluctuate unpredictably, then such measures are less useful.
Cost of repair may also be useful.
A:
The Powers That Be want solid numbers on the cost of each type of crash being worked on
I want to fly in my hot air balloon to Mars, but it doesn't mean that such a thing is possible.
Seriously, I think you have a duty to tell them that there is no way to accurately measure this. Tell them you can rank the crashes, or whatever it is that you can actually do with your data, but that's all you've got.
Something like "We can't actually work out how much it costs. We DO have this data about how long things are running for, and so on, but the only way to attach costs is to pretend that X minutes equals X dollars even though this has no basis in reality"
If you just make some bullcrap costing algorithm and DON'T push back at all, you only have yourself to blame when management turns around and uses this arbitrary made up number to do something stupid like fire staff, or decide not to fix any crashes and instead focus on leveraging their synergy with sharepoint portal internet web sharing love server 2013
Update: To clarify, I'm not saying you should only rely on stats with 100% accuracy, and just give up on everything else.
What I think is important is that you know what it is you're measuring. You're not actually measuring cost, you're measuring uptime. As such, you should be upfront about it. If you want to estimate the cost that's fine, but I believe you need to make this clear.
If I were to produce such a report, I'd call it the 'crash uptime report' and maybe have a secondary field called "Estimated cost based on $5/minute." The managers get their cost estimate, but it's clear that the actual report is based on the uptime, and cost is only an estimate, and how the estimate works.
A: I've not seen any studies, but a reasonable heuristic would be something like :
( Time since last application save when crash occurred + Time to restart application ) * Average hourly rate of application operator.
The estimation gets more complex if the crashes have some impact on external customers, or might delay other things (i.e. create a bottleneck such that another person winds up sitting around waiting because someone else's application crashed).
That said, your 'powers that be' may well be happy with a very rough estimate so long as it's applied consistently and they can see how it is changing over time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Casting between multi- and single-dimensional arrays This came up from this answer to a previous question of mine.
Is it guaranteed for the compiler to treat array[4][4] the same as array[16]?
For instance, would either of the below calls to api_func() be safe?
void api_func(const double matrix[4][4]);
// ...
{
typedef double Matrix[4][4];
double* array1 = new double[16];
double array2[16];
// ...
api_func(reinterpret_cast<Matrix&>(array1));
api_func(reinterpret_cast<Matrix&>(array2));
}
A: From the C++ standard, referring to the sizeof operator:
When applied to an array, the result is the total number of bytes in the array. This implies that the size of an array of n elements is n times the size of an element.
From this, I'd say that double[4][4] and double[16] would have to have the same underlying representation.
I.e., given
sizeof(double[4]) = 4*sizeof(double)
and
sizeof(double[4][4]) = 4*sizeof(double[4])
then we have
sizeof(double[4][4]) = 4*4*sizeof(double) = 16*sizeof(double) = sizeof(double[16])
I think a standards-compliant compiler would have to implement these the same, and I think that this isn't something that a compiler would accidentally break. The standard way of implementing multi-dimensional arrays works as expected. Breaking the standard would require extra work, for likely no benefit.
The C++ standard also states that an array consists of contiguously-allocated elements, which eliminates the possibility of doing anything strange using pointers and padding.
A: I don't think there is a problem with padding introduced by having a multi-dimensional array.
Each element in an array must satisfy the padding requirements imposed by the architecture. An array [N][M] is always going to have the same in-memory representation as one of [N*M].
A: Each array element should be laid out sequentially in memory by the compiler. The two declarations whilst different types are the same underlying memory structure.
A: @Konrad Rudolph:
I get those two (row major/column major) mixed up myself, but I do know this: It's well-defined.
int x[3][5], for example, is an array of size 3, whose elements are int arrays of size 5. (§6.5.2.1) Adding all the rules from the standard about arrays, addressing, etc., you get that the second subscript references consecutive integers, whereas the first subscript will reference consecutive 5-int objects. (So the first subscript has the bigger stride; you have 5 ints between x[1][0] and x[2][0].)
A: I would be worried about padding being added for things like Matrix[5][5] to make each row word aligned, but that could be simply my own superstition.
A: A bigger question is: do you really need to perform such a cast?
Although you might be able to get away with it, it would still be more readable and maintainable to avoid altogether. For example, you could consistently use double[m*n] as the actual type, and then work with a class that wraps this type, and perhaps overloads the [] operator for ease of use. In that case, you might also need an intermediate class to encapsulate a single row -- so that code like my_matrix[3][5] still works as expected.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Characters to avoid in automatically generated passwords I need to generate some passwords, and I want to avoid characters that can be confused for each other. Is there a definitive list of characters I should avoid? My current list is
il10o8B3Evu![]{}
Are there any other pairs of characters that are easy to confuse? For special characters I was going to limit myself to those under the number keys, though I know that this differs depending on your keyboard's nationality!
As a rider question, I would like my passwords to be 'wordlike'. Do you have a favoured algorithm for that?
Thanks :)
A: Read Choosing Secure Passwords.
One interesting tidbit from there: For more secure passwords, make sure some numbers and special characters appear in the middle. Cracking programs check for them at the beginning and ends sooner.
A: Here are the character sets that Steve Gibson uses for his "Perfect Paper Password" system. They are "characters to allow" rather than "characters to avoid", but they seem pretty reasonable for what you want:
A standard set of 64 characters
!#%+23456789:=?@ABCDEFGHJKLMNPRS
TUVWXYZabcdefghijkmnopqrstuvwxyz
A larger set of 88 characters
!"#$%&'()*+,-./23456789:;<=>?@ABCDEFGHJKLMNO
PRSTUVWXYZ[\]^_abcdefghijkmnopqrstuvwxyz{|}~
For pronounceable passwords, I'm not familiar with the algorithms but you might want to look at APG and pwgen as a starting point.
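If you adopt an allow-list like Gibson's, the generation step itself is tiny. Here is a minimal Ruby sketch (the method name is mine) that draws from the 64-character set above using a cryptographically strong RNG rather than rand:

```ruby
require 'securerandom'

# Gibson's "standard" 64-character set from above; the visually
# ambiguous glyphs (0/O, 1/l/I, etc.) are already absent from it.
CHARSET = ('!#%+23456789:=?@ABCDEFGHJKLMNPRS' \
           'TUVWXYZabcdefghijkmnopqrstuvwxyz').chars.freeze

def generate_password(length = 12)
  Array.new(length) { CHARSET[SecureRandom.random_number(CHARSET.size)] }.join
end
```

Each character is chosen independently, so a 12-character password drawn this way has 64^12 (roughly 4.7e21) possibilities.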
A: As another option, you could use a monospace/terminal font like courier for printing the passwords. Similar characters should be a lot more distinguishable that way.
A: To add to Jim's answer you could also use the word list and randomly replace certain characters with symbols (an @ for an A, a 0 (zero) for an O or a 5 for an S) and/or remove the vowels from the words.
*
*lmn%Desk
*p@per&b0y32H@t
Still mostly human readable.
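That substitution step can be sketched in a few lines of Ruby; the swap table below contains only the examples mentioned in this answer:

```ruby
# Character swaps illustrated above: @ for A, 0 for O, 5 for S.
SUBS = { 'a' => '@', 'A' => '@', 'o' => '0', 'O' => '0',
         's' => '5', 'S' => '5' }.freeze

def leetify(word)
  word.chars.map { |c| SUBS.fetch(c, c) }.join
end

leetify('paperboy')  # => "p@perb0y"
```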
A: For an international client several years ago, I had to generate random, secure passwords that were then mail-merged into documents by my client and sent by postal mail to recipients in 40 countries. Not knowing what typeface was to be used in the documents, I used a list of characters like the Steve Gibson 64-character set to eliminate the confusion between similar glyphs.
To make the resulting passwords pronounceable, and thus easier to remember, I paired consonants and vowels together, with some consonant digraphs (sh, th, wh, etc.) added to the mix.
To reduce the chances of inappropriate or offensive words being generated (in English or in the recipients' languages), I limited runs of consecutive alpha characters to two, with numerals or punctuation characters between:
Es4tU$sA6
wH@cY8Go2
Looking back over my method now, I realize that there was room for improvement in the inappropriateness algorithm. Using just the rules above, some offensive words are still possible now that numerals and punctuation are commonly substituted for letters.
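A sketch of that consonant-vowel pairing in Ruby; the digraph and filler lists below are illustrative stand-ins, not the original ones, and the filler after every pair keeps alphabetic runs to at most two characters:

```ruby
require 'securerandom'

CONSONANTS = %w[b c d f g h j k m n p r s t v w z sh th wh].freeze
VOWELS     = %w[a e i o u].freeze
FILLERS    = %w[2 3 4 6 7 8 9 @ % $].freeze  # numerals/punctuation between runs

def pick(list)
  list[SecureRandom.random_number(list.size)]
end

def pronounceable(groups = 3)
  Array.new(groups) { pick(CONSONANTS) + pick(VOWELS) + pick(FILLERS) }.join
end
```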
A: In my (electrical engineering, techie) graduate school, all computer accounts were initialized with passwords that, I assume, were generated by a standard linux utility. They consisted of three random syllables, with three lowercase letters in each syllable. The result was reasonably secure (on the order of billions of possible combinations) yet so pronounceable that I still use some of those passwords over a decade later. James' example is an excellent demonstration of this.
A comment on passwords in general, from a network-security professional: they're terrible, for several reasons, including:
*
*Generally easily broken, either through social engineering or with attack software, especially if you know anything about your target.
Example 1: I recently needed to revise a password-protected technical document. Looking at the date, I knew who our Tech Writer in residence was at the time, typed the first word that entered my mind, and immediately unlocked the document.
Example 2: Standard password-cracking programs allow the cracker to specify a set of rules that operate on a user-supplied dictionary. It's trivial to replace certain letters with $ymb01$, or to translate into 1337, etc.
*"Secure" passwords aren't. Given the sheer number of passwords most people need to remember, the most common way to "remember" a "strong" password like "a4$N!8_q" is to write it on a piece of paper (or, worse, store it in a text file). 'Nuff said.
If you need truly secure authentication, multi-factor (or two-factor) is the industry-accepted mechanism. The "two factors" are usually something you have (such as an access card) and something you know that enables it (such as a PIN). Neither one works without the other--you need both.
On the other hand, consider the level of security you really need. What are you protecting? How badly do the "bad guys" want to get it, and what are the consequences if they do? Chances are, "Its@Secret!" is more than good enough. :-)
A: For human-readable passwords, I recently used a PHP script very similar to the one below. It worked well. Granted, the passwords aren't going to be incredibly secure (as they're prone to dictionary attacks), but for memorisable, or at least readable, passwords it works well. However, this function shouldn't be used as-is, it's more for illustration than anything else.
function generatePassword($syllables = 2, $use_prefix = true)
{
// Define the helper unless it already exists
if (!function_exists('arr'))
{
// Returns a random element of the given array
function arr(&$arr)
{
return $arr[rand(0, sizeof($arr)-1)];
}
}
// Random prefixes
$prefix = array('aero', 'anti', 'auto', 'bi', 'bio',
'cine', 'deca', 'demo', 'dyna', 'eco',
'ergo', 'geo', 'gyno', 'hypo', 'kilo',
'mega', 'tera', 'mini', 'nano', 'duo',
'an', 'arch', 'auto', 'be', 'co',
'counter', 'de', 'dis', 'ex', 'fore',
'in', 'infra', 'inter', 'mal',
'mis', 'neo', 'non', 'out', 'pan',
'post', 'pre', 'pseudo', 'semi',
'super', 'trans', 'twi', 'vice');
// Random suffixes
$suffix = array('dom', 'ity', 'ment', 'sion', 'ness',
'ence', 'er', 'ist', 'tion', 'or',
'ance', 'ive', 'en', 'ic', 'al',
'able', 'y', 'ous', 'ful', 'less',
'ise', 'ize', 'ate', 'ify', 'fy', 'ly');
// Vowel sounds
$vowels = array('a', 'o', 'e', 'i', 'y', 'u', 'ou', 'oo', 'ae', 'ea', 'ie');
// Consonants
$consonants = array('w', 'r', 't', 'p', 's', 'd', 'f', 'g', 'h', 'j',
'k', 'l', 'z', 'x', 'c', 'v', 'b', 'n', 'm', 'qu');
$password = $use_prefix?arr($prefix):'';
$password_suffix = arr($suffix);
for($i=0; $i<$syllables; $i++)
{
// selecting random consonant
$doubles = array('n', 'm', 't', 's');
$c = arr($consonants);
if (in_array($c, $doubles)&&($i!=0)) { // maybe double it
if (rand(0, 2) == 1) // 33% probability
$c .= $c;
}
$password .= $c;
//
// selecting random vowel
$password .= arr($vowels);
if ($i == $syllables - 1) // on the last syllable...
if (in_array($password_suffix[0], $vowels)) // ...if the suffix begins with a vowel,
$password .= arr($consonants); // add one more consonant
}
// selecting random suffix
$password .= $password_suffix;
return $password;
}
A: My preferred method is to get a word list of 3, 4 and 5 letter words. Then select at least 2 of those, and place a random 2 digit number or special symbol (%&*@#$) between each word. If you want to you can capitalize up to one character per word at random.
Depending on your strength requirements you end up with easy-to-remember and communicate passwords like:
*
*lemon%desk
*paper&boy32hat
Keep in mind you occasionally get interesting or inappropriate combinations of words (I'll let you use your imagination). I usually have a button allowing the generation of a new password if the one presented is disliked.
As a rule, only use symbols that people commonly know the name for. On a US Standard keyboard I would avoid ~`'/\^
I guess this more answered your rider question than your main question . ..
Good luck!
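A rough Ruby sketch of this scheme. The word list is a stand-in (in practice, load real 3-, 4- and 5-letter words from a dictionary file), and `word_password` is a name I made up:

```ruby
require 'securerandom'

# Stand-in word list; use a real dictionary of 3-5 letter words in practice.
WORDS = %w[lemon desk paper boy hat mint crab glove].freeze
SEPS  = %w[% & * @ # $].freeze

def pick(list)
  list[SecureRandom.random_number(list.size)]
end

def word_password
  a, b = pick(WORDS), pick(WORDS)
  # Capitalize at most one character per word, at random.
  a = a.capitalize if SecureRandom.random_number(2) == 1
  # Separator: either a special symbol or a random 2-digit number.
  sep = SecureRandom.random_number(2) == 1 ? pick(SEPS) : format('%02d', SecureRandom.random_number(100))
  "#{a}#{sep}#{b}"
end
```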
A: I don't love the wordlist approach. For example, in /usr/share/dict/words on OSX, there are 5110 4-character words. Using two of them with a separator character produces ~600M combinations. But if you used the character set directly with a strong random number generator, you'd have 88^9 possible passwords, 3.16e+17 combinations.
Either way, the likely attack against this system is going to be against the random number generator, so make sure you're using a cryptographically strong one. If you use PHP's standard rand function, it will be attacked by registering and resetting thousands of passwords to sample the RNG state and then predict the remaining RNG state, which will reduce the number of possible passwords an attacker needs to test.
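The arithmetic behind that comparison is easy to check; the separator count of 10 below is an assumption for illustration (the exact figure depends on how many separator symbols you allow, which is why the answer's ~600M estimate differs):

```ruby
# Two 4-letter words with one separator, versus 9 characters drawn
# from the 88-symbol set quoted earlier in the thread.
words      = 5_110
separators = 10                      # assumed; varies with your symbol set
wordlist_combos = words * words * separators
charset_combos  = 88 ** 9

wordlist_combos   # 261_121_000, on the order of 1e8
charset_combos    # 316_478_381_828_866_048, ~3.16e17
```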
A: A starting approach might be to generate mostly valid english syllables, mix them, then throw in a text->l33t conversion. There's been work done on generational natural language grammars, so one of those might help.
E.g. ah ul ing are all valid syllables or close to it... mix them -> Ingulah...l33t it -> 1ngu4h. Is it the best out there? Nah. But at least it is semipronouncable(if you speak l33t) and more computationally secure.
A: function random_readable_pwd($length=12){
    // special characters to sprinkle between words
    $sym="!\"§$%&/()={[]}\,.-_:;@>|";
    // read words from a text file into an array
    $filename="special.txt";
    if (!file_exists($filename)) { die('File "'.$filename.'" does not exist!'); }
    $lines = file($filename);
    $words = array();
    foreach ($lines as $line) {
        $words[] = rtrim($line); // strip the trailing newline (handles LF and CRLF)
    }
    // Add words while the password is shorter than the requested length
    $pwd = '';
    $ran_date = (int) date("s"); // seed the alternation below on the current second
    while (strlen($pwd) < $length){
        $r = mt_rand(0, count($words)-1);
        // randomly uppercase a word, but not every time
        if ($ran_date % 3 == 0) $words[$r] = ucwords($words[$r]);
        $pwd .= $words[$r];
        // randomly add a symbol (the -1 keeps the index in range)
        if ($ran_date % 2 == 0) $pwd .= $sym[mt_rand(0, strlen($sym)-1)];
        $ran_date++;
    }
    // append a number at the end if length > 2 and
    // reduce the password size to $length
    $num = mt_rand(1, 99);
    if ($length > 2){
        $pwd = substr($pwd, 0, $length-strlen($num)).$num;
    } else {
        $pwd = substr($pwd, 0, $length);
    }
    return $pwd;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: live asp.net web.config settings I've only recently started working with asp.net and c#. Is there a standard practice set of web.config settings for a live final website? There seem to be a ton of options available and I'm looking to streamline performance, close possible security holes and other unnecessary options.
A: Tip/Trick: Automating Dev, QA, Staging, and Production Web.Config Settings with VS 2005
A: An empty web.config (or at least an absent <system.web> element) would mean that all of the framework's recommended defaults would take effect. You would then just need to be concerned with the host (e.g., IIS) set-up.
A: Start with a clean web.config and only add the sections you need.
For security, all you really can do is make sure you flag
<compilation debug="false">
for your production box and set customErrors mode to "On" (or "RemoteOnly").
A: Secure all folders containing any sensitive info with the location tag. Encrypt any connection strings with DPAPI.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Learning Ruby on Rails As it stands now, I'm a Java and C# developer. The more and more I look at Ruby on Rails, the more I really want to learn it.
What have you found to be the best route to learn RoR? Would it be easier to develop on Windows, or should I just run a virtual machine with Linux?
Is there an IDE that can match the robustness of Visual Studio? Any programs to develop that give a good overhead of what to do? Any good books?
Seriously, any tips/tricks/rants would be awesome.
A: Path of least resistance:
*
*Have a simple web project in mind.
*Go to rubyonrails.org and look at their "Blog in 15 minutes" screencast to get excited.
*Get a copy of O'Reilly Media's Learning Ruby
*Get a Mac or Linux box.
(Fewer early Rails frustrations due to the fact that Rails is generally developed on these.)
*Get a copy of Agile Web Development with Rails.
*Get the version of Ruby and Rails described in that book.
*Run through that book's first section to get a feel for what it's like.
*Go to railscasts.com and view at the earliest videos for a closer look.
*Buy The Rails Way by Obie Fernandez to get a deeper understanding of Rails and what it's doing.
*Then upgrade to the newest production version of Rails, and view the latest railscasts.com videos.
A: There's a very solid ongoing series on NETTUTS right now that you may be interested in.
A: http://railsforzombies.org/ is a nice one, introducing an all-new way to learn Ruby on Rails in the browser with no additional configuration needed.
A: As you, I'm a java/C# developer trying to learn more Ruby On Rails.
I'm taking the free online course Ruby on Rails Programming with Passion, is a good introductory course, check it out.
We are using NetBeans as IDE (win/mac/linux/solaris), if you are used to Eclipse or Visual Studio, there is a good chance you will like it.
A: Fantastic decision! It is extremely useful to get a grounding in Ruby before going to Rails so here is my take on the best path to Rails:
*
*Learn to Program by Chris Pine - You can read this in an afternoon to get a feel for the Ruby language.
*The Well Grounded Rubyist by David Black - Like the title says it will give you an excellent grounding in the language.
*Eloquent Ruby by Russ Olsen - This book is sublime, it reads like a novel.
*Ruby Best Practices by Gregory Brown - By this point you should be ready for the advanced level of this book.
*Rails for Zombies - Fun tutorial you can complete in an afternoon.
*Rails Tutorial by Michael Hartl - Fantastic (and free) tutorial and I have heard his accompanying screencasts are amazing.
*Agile Web Development with Rails by Sam Ruby - By the time you are finished this you are now a completely capable Rails person!
Aside from books the most important thing is to get feedback on what you are doing. To do this I recommend spending time in irc.freenode.net #ruby and #rubyonrails. It is also extremely helpful to post things you are working on or having trouble with here on stackoverflow as the comments, explanations and different way of thinking about things that people provide are invaluable.
You should also definitely check out the Ruby Rogues podcast, they provide invaluable information and the commentators are all extremely respected people in the Ruby community. And for your viewing and reading pleasure (in that order,) head over to Ryan Bates's Railscasts and then Eifion Bedford's Asciicasts.
Finally, I recommend looking into different gems on github, reading the code and then contributing to them. You don't have to get overly ambitious and do massive recodes, especially at first. Just start with small things like editing and making the README files a little easier to read.
I don't use an IDE but at Railsconf I saw a demo of Rubymine from Jetbrains and it seemed pretty amazing.
A: 0) LEARN RUBY FIRST. This is very important. One huge advantage of Rails is Ruby: a great language that is very powerful but also marvelously easy to misunderstand. Run through a few Ruby tutorials online. When coding challenges come up on Daily WTF, write them in Ruby. You'll pick it up fast.
1) Go buy the book "Ruby for Rails"
2) Check out a Rails tutorial and subscribe to the Riding Rails blog.
3) Stand up an app locally. Don't use scaffolding.
4) When you install plugins into your app, go look at the code in that plugin (in your vendor directory) and learn it. It is one of the best ways to learn Ruby and Rails internals. When you don't understand how something works, post it here and 1,000 people will help you.
As for your other questions:
Yes, you will need a Linux environment to develop in. You can develop Rails on Windows, but that doesn't mean it should be done. Lots of gems aren't up to speed on Windows.
NetBeans works well as an IDE. If you're on a Mac, you'll get street cred for using Textmate.
A: I'm surprised there has been so little mention of Why's (Poignant) Guide to Ruby. Why may not be around anymore but the guide is easy to find on the net (Google points here first) it's a very easy read and provided my introduction to Ruby.
After the guide, I'd recommend either one of the books the others have suggested, or following the series of screencasts at Learning Rails which is how I picked up enough Ruby on Rails to be dangerous. Once you've completed the Learning Rails series. what you want to do with Rails will start to diverge from the general tutorials and that's where Railscasts becomes a wonderful tool. There's not much can be done with Rails that Railscasts hasn't touched on at some point.
A: Find a nearby Ruby users group and start attending that. I've found that is a great way to meet a lot of people who are passionate about development and willing to teach.
A: My first suggestion would be to learn a little about symbols. Rails isn't the smallest framework ever, and while there's definitely lots to learn, most of it will start to make sense once you understand at least a little of what makes symbols different ("special"). As pointed out, there's no exact analog in any of the major languages, but Rails uses them heavily to make things read straightforwardly and perform well, which is why I brought them up. My very first exposure to Rails was also my first time looking at Ruby (well before 2.0), and the first thing that caught my eye was the goofy :things being passed around, and I asked, "WTF is that?"
Also, check out RubyQuiz, and read other peoples' answers on that site.
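For anyone puzzled by those :things, a short plain-Ruby illustration (nothing Rails-specific is assumed):

```ruby
# A symbol is an immutable, interned identifier: every :name in the
# program is the very same object.
:name.object_id == :name.object_id   # => true

# Strings, by contrast, are ordinary objects; dup always yields a
# distinct one even when the contents are equal.
'name'.equal?('name'.dup)            # => false

# Rails leans on symbols for option hashes and keys:
route = { controller: :users, action: :show, id: 42 }
route[:action]  # => :show
```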
A: I came from a Java background to Ruby too. I found this tutorial helpful: http://www.ruby-lang.org/en/documentation/ruby-from-other-languages/to-ruby-from-java/. When it comes to learning Rails, I cannot overstate how much I use script/console. It allows you to play with the code and learn how to do things that you are not sure about.
The only book I ever bought was Agile Web Development with Rails, Third Edition (http://www.pragprog.com/titles/rails3/agile-web-development-with-rails-third-edition). It was quite useful and provided a good overview of the Rails framework. In addition to that I regularly watch Railscasts (http://railscasts.com), which is a great screencasting blog that covers all kinds of Rails topics.
I personally prefer using Linux (because git works better). But, I have also used windows and besides git I do not think the OS choice will impact your programming.
I use netbeans for my IDE and occasionally vim (with the rails plugin). I like netbeans but, I find that it can still be a little flaky when it comes to the Rails support (not all the features work all the time).
A: This looks like a great resource for people like me who are coming from PHP to RoR
http://railsforphp.com/ There's also a book Rails for PHP Developers
A: I used to do Java and C# on Windoze.
I'd second these sources:
IDE: Try Aptana RadRails 3 Sneak Peek: http://www.radrails.org/3. It's the closest thing you'll get to Visual Studio. I play with it here and there but still love the lightness of Textmate.
OS: Mac OS gets the most if not all love from Ruby community. Anything else is treated like a bastard child.
Books:
*
*The Pragmatic Programmers' Guide (the pickaxe book)
*Agile Web Development with Rails
Screencasts:
*
*Peepcode (pay) is a nice way to pick up concepts quickly
*Railscasts (free) is a good weekly way to pick up new gems and concepts incrementally
*Railscasts (pro) is also a good way to pick up concepts for a pretty low price.
A: I wrote a post called "Getting Started With Rails -- What I wish I knew" that many people found helpful.
The basics:
*
*Agile development with Rails (book)
*InstantRails for quick ruby/rails environment on Windows
*Aptana as the IDE
*Subversion for version control
The online tutorials are decent but scattered. Invest $30 in a book for a more comprehensive understanding.
A: I've been moving from C# in my professional career to looking at Ruby and RoR in my personal life, and I've found linux to be slightly more appealing personally for development. Particularly now that I've started using git, the implementation is cleaner on linux.
Currently I'm dual booting and getting closer to running Ubuntu full time. I'm using gedit with various plugins for the development environment. And as of late 2010, I'm making the push to use Vim for development, even over Textmate on OS X.
A large amount of the Rails developers are using (gasp) Macs, which has actually got me thinking in that direction.
Although I haven't tried it, Ruby in Steel gives you a Ruby IDE inside the Visual Studio world, and IronRuby is the .NET flavor of Ruby, if you're interested.
As far as books are concerned, the Programming Ruby (also known as the Pickaxe) book from the Pragmatic Programmers is the de-facto for learning Ruby. I bit the bullet and purchased that book and Agile Web Development with Rails; both books have been excellent.
Peepcode screencasts and PDF books have also been great for getting started; at $9 per screencast it's hard to go wrong. I actually bought a 5-pack.
Also check out the following:
*
*Official Rails Guides
*Railscasts
*railsapi.com or Ruby on Rails - APIdock
*The Ruby Show
*Rails for Zombies
*Softies on Rails - Ruby on Rails for .NET Developers
*Rails Podcast
*Rails Best Practices
I've burned through the backlog of Rails and Rails Envy podcasts in the past month and they have provided wonderful insight into lots of topics, even regarding software development in general.
A: I've found http://railstutorial.org/book to be a great resource for learning Rails
A: The uber source for anything Rails is http://www.rubyonrails.org/ - if they don't have it on the site you probably don't need it.
A quick cookbook is Ruby on Rails: Up and Running; you can get it from O'Reilly or search Google for an online version. They walk you through the conventions of Rails and use Instant Rails, which is OK.
A better Rails book is "Agile Web Development with Rails". This is the soup-to-nuts of Rails: it walks you through downloading and setting up Rails, gems, everything.
If you want are a Java 'guy' and want a transition book O'Reilly has "Rails for Java Developers" http://oreilly.com/catalog/9780977616695/?CMP=AFC-ak_book&ATT=Rails+for+Java+Developers
A: Another IDE you could try is Aptana.
A: Oh I almost forgot. Here are a few more Ruby screencast resources:
SD Ruby - they have a bunch of videos online - I found their REST talks SD9 and SD10 to be among the best of the intros. Other REST talks assume you know everything. These ones are very introductory and to the point.
Obie Fernandez on InfoQ - Restful Rails. I've also read his Rails Way book and found it informative but really long winded and meandering and the quality is a bit inconsistent. I learned a lot from this book but felt it was a bit punishing to have to read through the repetition and irrelevant stuff to get to the good bits.
Netbeans is a nice hand holding IDE that can teach you a lot of language tricks if you have the patience to wait for its tooltips (it is a painfully slow IDE even on a really fast machine) and you can use the IDE to graphically browse through the available generators and stuff like that. Get the latest builds and you even have Rspec test running built in.
Bort is a prebuilt base app with a lot of the standard plugins already plugged in. If you download it and play with it and figure out how it is setup you are about halfway to creating your own full featured apps.
A: My suggestion is just to start - pick a small project that you would generally use to learn an MVC-style language (i.e. something with a database, maybe some basic workflow), and then as you need to learn a concept, use one (or both!) of
Agile Web Development with Rails
or
The Rails Way
to learn about how it works, and then try it.
The problems with Agile Web Development are that it's outdated, and that the scenario runs on too long for you to really want to build it even once; The Rails Way can be hard to follow as it bounces from reference to learning, but when it's good, it's better than Agile Web Development.
But overall they're both good books, and they're both good for learning, but neither of them provide an "education" path that you'll want to follow. So I read a few chapters of the former (enough to get the basic concepts and learn how to bootstrap the first app - there are some online articles that help with this as well) and then just got started, and then every few days I read about something new or I use the books to understand something.
One more thing: both books are much more Rails books than they are Ruby books, and if you're going to write clean code, it's worth spending a day learning Ruby syntax as early as possible. Why's Guide to Ruby is a good one, there are others as well.
A: I bought the book "Simply Rails 2" by Patrick Lenz.
This book is a great introduction to Ruby and Ruby on Rails.
As for my ruby installation and db, I used Cygwin.
It comes with PostgreSQL, ruby and svn.
I like PostgreSQL because I come from an Oracle
background so it feels more comfortable than MySQL.
The other utility I found really useful was pgAdmin
for accessing the PostgreSQL databases.
The first thing I needed to do was to get gems installed.
I got the gems tar file from rubyforge
wget "http://rubyforge.org/frs/download.php/60718/rubygems-1.3.5.tgz"
Once I had gems setup, I installed
rails
ruby-postgres
postgres
rack
I also needed an issue tracking system so I installed redmine.
I found that using a UNIX-like environment (Cygwin) was
preferable in my case because many of the tutorials were
tailored for OS X or Linux.
The text editor I use is Textpad. I'm looking for an alternative.
I think that vim with the rails plugin might work nicely.
A: I come from a non-programming background. I have learned PHP on my own and recently joined a firm that specializes in Ruby on Rails. They have a comprehensive Rails training program, which is flexible enough to accommodate whatever changes we want to implement. Though I am not a rails pro, I would like to share my experience with rails. I hope that it helps.
Here is the path I am following [combined with tools I am using]
*
*Start with a simple Ruby guide. It will help a lot, since the entire Rails framework revolves around classes and objects.
*Environment and OS are not important. Though I am working on a Mac, I frequently work on Linux and Windows, and I do not face any problems.
*Start with a good book which explains using a demo app. [I am using Agile Web Development with Rails - By The Pragmatic Bookshelf]. There are many other good books as well.
*Once you are done with the application, you will have a good idea of the framework.
*
*Try to understand the SQL queries generated by Active Record module.
*Go through the Rails Guides. You will find the framework a lot easier.
*Keep practicing.
A few important points
*
*It takes years to learn a language completely. So be patient and do not stop learning.
*Go through the Rails API as and when required. [While developing your first app]
*Google the things which you do not understand. People have written great articles on almost all topics.
*Use Stackoverflow :-) [Only when you are not able to find the solution on your own.]
*Load railscasts on your phone or video player. Watch 'em while travelling or in your free time. They are of few minutes each. You will learn a great deal of things and also learn the best way of doing things.
Tools
*
*Shell [in Mac and Ubuntu]
*Editor [Textmate in Mac, Gedit in Ubuntu and Notepad++ in Windows]
*Firefox with Firebug installed for testing.
Finally I have one thing to say "Keep trying". All the best.
A: The fastest way to learn anything, Ruby on Rails included, is pair programming.
Find someone who knows Rails, pick an example app, sit down, and work through fixing bugs, adding features.
The knowledge sharing is unbelievable.
A: I really enjoy RubyMine from JetBrains. It seems like a very full-featured IDE, something I miss from many of the other alternatives out there. Also, for a simple environment I enjoy the e text editor. Plain and simple.
A:
IDE: NetBeans
Book: Agile Web Development With Rails
Installation: Instant Rails
A: Beware, the rails world is a massively frustrating mess of outdated and inconsistent documentation and examples. It is maybe one of the fastest moving and most faddish development communities there is. By the time you learn something it will already have changed. Even the books are not consistent in which version of rails they are talking about. Documentation by blogging! enough said.
I currently do RoR on windows. My advice is to avoid windows if you can. Lots of things don't work and the rails community really really doesn't care about you. The move to Git has really messed me up since it doesn't work very well on windows. A lot of gems will fail because of this (Heroku looks like a cool tool - too bad for me it can't handle the Windows Git setup). Capistrano is out. It goes on and annoyingly on.
Plus, in the back of your mind, you always wonder when something doesn't work "Is it a rails/windows problem?" I am not sure this is solved by using linux because linux brings its own hassles like constantly having to upgrade all those different dependencies, etc...If that's the kind of thing you enjoy it might be an okay choice for you. Those days of enjoying system fiddling are behind me and I just want to get on with doing my work. I am planning on installing ubuntu on a home machine just so i can get familiar with things like capistrano so maybe my opinion will change.
I'd highly suggest if you are going to do rails dev for any amount of time you seriously consider getting a Mac. If you value your time and sanity it will pay for itself almost instantly. Depending on how you value your time, after 10 hours of debugging windows/linux setup problems you will have spent as much as a Mac costs anyway.
Rails is a joy compared to what it replaces but it is a bit of a pain in that its proponents skip right past a lot of the boring but important stuff like documentation, compatibility issues and community building. It is way more powerful than other frameworks like Django but I sometimes look over at the Django documentation and community and sigh like a guy with a wild sexy girlfriend looking at his friend's plain but sane and stable wife. But then rails adds a feature and I go "Ohhh shiny!"
IMO the Rails Screencasts are better than the Peepcode screencasts. RubyPlus also has screencasts; mind you, they are a bit rough around the edges. BuildingWebApps has a free online course that starts doing screencasts halfway through.
A: *
*Data Structures and Algorithms with Object-Oriented Design Patterns in Ruby
Bruno R. Preiss
Published in 2004
*Learn to Program
Chris Pine | Pragmatic Bookshelf
Published in 2006, 176 pages
*Mr. Neighborly's Humble Little Ruby Book
Jeremy McAnally
Published in 2006, 147 pages
*Programming Ruby: A Pragmatic Programmer's Guide
David Thomas, Andrew Hunt | Addison-Wesley
Published in 2000, 608 pages
*Rails in a Nutshell
C. Fauser, J. MacAulay, E. Ocampo-Gooding, J. Guenin | O'Reilly Media
Published in 2009, 352 pages
*Ruby Best Practices
Gregory T. Brown | O'Reilly Media
Published in 2009, 328 pages
*Ruby Essentials
Techotopia
Published in 2007
*Ruby on Rails Security
Heiko Webers | OWASP
Published in 2009, 48 pages
*Ruby User's Guide
Mark Slagell
Published in 2005
*The Book Of Ruby
Huw Collingbourne
Published in 2009, 425 pages
*The Little Book of Ruby
Huw Collingbourne | Dark Neon Ltd.
Published in 2008, 87 pages
*why's (poignant) guide to Ruby
why the lucky stiff
Published in 2008
A: I think the screencasts and short books from Peepcode are really good. They have screencasts to get you started and have some as you get more advanced.
A: There is a site called Softies on Rails that is written by a couple of ex-.NET developers that may be of some use. They have a book called Rails for .NET Developers coming out in the next few months...
I started out on a Windows box using the RadRails plugin for Eclipse and the RubyWeaver extension for Dreamweaver (back during the 1.x days of Rails). Since then I have moved to a Mac running TextMate and haven't thought of going back.
As for books, I started with The Ruby Way and Agile Web Development with Rails. It definitely helps to build a background in Ruby as you start to make your way into Rails development.
Definitely watch the Railscast series by Ryan Bates.
A: I have found "The Rails Way" by Obie Fernandez excellent and often found myself referring to it when Agile Web Development with Rails didn't seem to go far enough. Obie Fernandez has a decent blog too.
A: Wait a couple of months for Learning Rails by Simon St. Laurent, Edd Dumbill to come out in November. That series of books is stupendous, and this book will cover the latest version of Rails.
A: Once you get your environment up and running, this is helpful in giving you a basic app that users can log into.
Restful Authentication with all the bells and whistles:
http://railsforum.com/viewtopic.php?id=14216&p=1
A: I'm currently learning RoR, here's what I've done so far:
1. Read, and followed, SitePoint's "Simply Rails 2.2"
2. Read, and followed, Oreilly's "Rails, Up and Running" 2nd edition.
Those two books are very instructive, and take the same approach in different styles; the second book is a little more aggressive, which is good if you have some RoR knowledge.
As posted above, be extremely careful when reading resources, there are A LOT of outdated videos and articles.
A: Good link for learning Ruby :
http://en.wikibooks.org/wiki/Ruby_Programming
A: Just to +1 Agile Web Development with Rails (though make sure you get the latest edition) - http://pragprog.com/
I develop on a Mac and this can sometimes be beneficial - it's quite a popular platform with Rails developers so many of the blog posts you look at will be Mac-orientated. Linux is great too though ;)
Finally - and I have no connection with the company at all - when you do have something you want to put live, heroku is a good choice. Finding a cheap rails host isn't easy so this is a nice starting point. There are a lot of other great hosts out there too though! Heroku does kind of require git for version control (though you can use it on top of subversion).
Best of luck!
A: Railscasts shmailcasts ...
1. Think of some type app that you'd like to develop it.
2. Take 20 minutes to napkin out some user flows
3. Read the first couple of chapters of "Agile Web Development with Rails" with your project in mind
4. Install NetBeans and Rails on your Windows or Mac machine. Either works just the same.
5. Develop your app
6. Consult the bajillion and one online references as you develop.
A: The Book Agile Development with Rails is the number one teaching aid. It's got a nice life-like(ish) application it builds up through the chapters as it introduces you to different concepts. I worked through the examples twice, after which I had enough knowledge to do my own stuff and rely on the rails API documentation (http://api.rubyonrails.org/).
A: To learn Ruby, read "The Well-Grounded Rubyist" by David Black. It is extremely clear, well-written, and well-organized. The best technical book I've ever read (out of maybe a dozen, since I'm a relatively new programmer).
To learn Rails, read "Head First Rails." They explain how all the mysterious parts work together. Be patient with the silliness and work your way through the examples - it will pay off. (Also, for consistency, use whatever version of Rails they use. You can upgrade later.)
Both of these books assume little to no knowledge on your part, regarding OOP programming and MVC architecture. If you do know a bit, don't skim, because you might assume things incorrectly. (For instance, Ruby objects don't have public attributes, only getters and setters. But you can automatically create multiple getters/setters with a single line like attr_accessor :attr1, :attr2, :attr3.)
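That last point about attr_accessor can be shown in a few lines of plain Ruby (the class and attribute names here are made up for illustration):

```ruby
# attr_accessor generates a getter and a setter for each symbol --
# equivalent to hand-writing "def title; @title; end" and
# "def title=(value); @title = value; end" for every attribute.
class Book
  attr_accessor :title, :author

  def initialize(title, author)
    @title = title
    @author = author
  end
end

book = Book.new("The Well-Grounded Rubyist", "David Black")
book.title = "Head First Rails"   # setter created by attr_accessor
puts book.title                   # => Head First Rails
```

There is no direct access to @title from outside the object; the generated methods are the only way in, which is exactly what the book means by "only getters and setters."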
A: Without a doubt
Agile Web Development with Rails
and
The RSpec Book
and for fun
Advanced Rails Recipes
*
*I would link to the other two, but Stack Overflow won't let me. See the same site.
A: I've seen the infamous "Blog in 15 minutes" video ages ago when Rails was probably around version 1.0 or something like that. One of the most important things about the Ruby/Rails world is that given it's great community it's changing ridiculously fast in comparison to other frameworks.
Today, Rails is significantly different from what it used to be, although the main ideology has been kept the same. Having said that, even though in the last few years I've learnt a lot about Rails, I still keep learning new things about it.
The most valuable resources to me that help me discovering and keeping up with the latest ways of doing Ruby and Rails are the following:
*
*Rails Guides - A nice way of learning Rails itself, edited by the community, moderated by the core contributors. The site has a lot to offer on most of the important topics around Rails and can get you up and running very quickly. It covers both the most recent stable and edge versions of the framework.
*If you understand the main ideology of Rails than I definitely recommend checking out (and subscribing to) Ryan Bates' Railscasts. Let me just quote from the site itself, I think it's pretty self explanatory:
Every week Ryan Bates will host a new
Railscasts episode featuring tips and
tricks with Ruby on Rails. These
screencasts are short and focus on one
technique so you can quickly move on
to applying it to your own project.
The topics target the intermediate
Rails developer, but beginners and
experts will get something out of it
as well.
*There are also a lot of podcasts around Ruby/Rails, the two that I keep listening to are Ruby5 and the Ruby Show.
*For more specific questions like API calls etc, I'd recommend APIDock's Rails and Ruby sections where you can get more information on specific methods.
*If you are getting more familiar with the framework, it's worth taking a look at Rails Best Practices. There's a bunch of short articles on mistakes that most people make in the beginning of their learning curve with Rails. This site is meant to point out these issues and help beginners find their way towards writing better and more well-thought-out code. There's also a gem that you could use which scans your application, points out these issues, and offers solutions/workarounds. Pretty neat!
These resources should help you in getting up and running with Rails. Good luck with your journey to the Rails world and welcome to the community.
A: A lot of good opinions here. I'll add what's not here. My experience:
*
*Rails on Windows is easy to get going with RailsInstaller, especially if you're using SQLite.
*If you want to use Ruby gems which need C extensions (e.g. RMagick), installation is difficult and unpredictable.
*PostgreSQL is a pain to install on Windows, and a pain to hook up to Rails.
*git doesn't work quite right on Windows.
*IDEs are bulky (Aptana). Notepad++ is good enough.
*Rails on Ubuntu is easy, and gems requiring C libraries just work.
*If your computer is powerful enough, use VirtualBox or VMWare Player, and use an Ubuntu Virtual Machine.
Setup Resources
*
*This page shows, start to finish how to set up Ruby/Rails/PostgreSQL on Ubuntu 11.10.
*If you don't like RVM (I don't), use rbenv. RVM and rbenv are tools for managing multiple versions of Ruby, including JRuby, Rubinius, etc.
Live Deployment for Development/Testing
*
*Live deployment lets your friends try out your app. It also makes it easier to interact with web services which need to make callbacks to your Rails server (such as PayPal IPN or Twilio).
*Heroku.com is my favourite place to deploy.
*localtunnel.com is a good utility to point a publicly visible URL to your local Rails server. (I have only used it for Windows-based Rails servers).
Learning
*
*Try out tutorials on the web.
*Use stackoverflow.com to ask questions.
*Use "raise Exception, params.to_s" in your Controllers to stop the app and print out all the parameters which are driving your controllers. This gave me the greatest insight into how data is schlepped back and forth in a Rails app.
*Use the Rails console ("rails console") to inspect data, and try out code snippets before you embed them in your models or controllers.
A: Ruby:
I used Learn to Program (in a weekend) and Ruby Visual QuickStart (believe it or not, this QS book was "off the hook" excellent). This took about a week.
Rails:
I just went through Learn Rails in one "aggressive" week. Definitely feel I have the nuts and bolts. It's 2009 which I deemed important!
Now I plan to combine a more advanced book with a real project.
IDE: VIM with rails plugin is great if you're a vim addict. Otherwise, try any suggested above.
Of course Railscasts, etc., are useful for the most up-to-date stuff.
A: My steps were:
* Agile development with Rails (book)
* Railscasts - very useful, always learn something new.
* And of course the RoR API
A: Book : The Rails Way by Obie Fernandez
IDE : Netbeans or TextMate.
A: My company has been developing mavenlive.com, a knowledge management and decision support platform for three years. Over the past few years we've learned a lot about rails and here are some of my recommendations.
*
*Switch to Mac! The tools that are available to you and the development environment on Mac allows you to be far more productive than on Windows.
*railscasts.com has a wealth of informative screencasts from beginner to expert. You can always find new and more efficient ways of doing things from Ryan's posts.
*Scaling Rails screencasts coupled with NewRelic has provided powerful insight into the performance of our application and allows us to develop effectively while keeping our eyes open for future scalability issues.
A: Read all the guides at guides.rails.info, starting with Getting Started with Rails. They are well written, well organized, and up to date.
A: An excellent source for learning Ruby and Ruby on Rails is at http://www.teachmetocode.com. There are screencasts that cover the basics of Rails, along with a 6-part series on how to create a Twitter clone with Ruby on Rails.
A: I actually have an article about getting started with rails that should help. The only part of your question it doesn't cover is the OS. Mac is the dominant player here, believe it or not! But I use Ubuntu happily. There are gedit plugins that get you very close to TextMate - in fact, I like gedit better.
If you're on a windows machine and can use linux, that's definitely a better way to go. Rails on Windows has a lot of issues.
A: I learnt Ruby with the help of Mr. Neighborly's Humble Little Ruby Book. It's an excellent free-to-download introduction to Ruby with lots of examples, which I'd 100% recommend.
A: Some awesome advice here!
Some of the resources I will list have been mentioned, some I don't think were. I am definitely still not a pro, I've just been learning for the past few months and I've been improving at a rapid rate. This is what helped me in order:
*
*Why's poignant guide to ruby: excellent introduction to the Ruby language by the infamous _why.
*Agile web development with rails book: great book with some good in-depth follow alongs
*Rails tutorial by Michael Hartl (railstutorial.org): this has been my favorite resource. Hartl's style of walking you through demo apps and explaining everything just made things click for me.
*Rails for Zombies - ran through this twice, great for reinforcing the basics.
*Railscasts - I started following these at first, but they were not that helpful until now, when I am really starting to grasp Rails. I would leave these for the end, after you have got your feet wet.
*Think Vitamin's rails tutorials were also pretty good. I followed along these screen casts at first, to feel out the language and then did them again towards the end.
*The "Learning Rails" podcast, although outdated (Rails 2) was also a good starting resource. I listened to this while driving/working out.
I hope that was helpful! I'm far from being a pro, but I dove in head first and absorbed as much as I could from multiple resources. The ones I mentioned above were the most helpful!
Oh, and what's really helping me now is coming up with personal projects and setting myself certain tasks. Following along is great, but you truly learn when you dive in without a guide!
A: Try this book http://ruby.railstutorial.org/ruby-on-rails-tutorial-book
A: This is also a good read
http://guides.rubyonrails.org/
A: I'm learning Rails now and if you're using Windows (assuming so with C# dev) I highly suggest learning on Linux if investing in a Mac is not an option.
If you don't want to create a separate partition on your HDD for Ubuntu, I suggest checking out Wubi, a Windows installer for Ubuntu. The Rails experience is much less of a headache on Ubuntu than it is on Windows, and I'd argue it is similar to an OS X dev environment, just without as much application support. I'm currently using an alpha text editor, Redcar, which gives you some of the functionality of TextMate, the popular OS X editor.
Good books I've read on Rails are Beginning Rails 3 by Cloves Carneiro Jr and Rida Al Barazi. Also Rails Test Prescriptions by Noel Rappin, about developing in a test-driven approach.
My favorite things to keep me moving from amateur to novice are Railscasts by Ryan Bates. He usually releases a screencast every Monday or so about Rails gems, or recently Sass, SCSS, CoffeeScript and technologies related to Rails 3.1.
A must-read for any beginning programmer, I feel, is why's (poignant) guide to ruby. Unfortunately _why disappeared just as I was getting into Ruby, but his content is still scattered across various sources. It has quirky humor and by the end you'll know Ruby's syntax quite well.
A: I agree with srboisvert. Don't do it on Windows. You can add Ubuntu (a version of Linux) to Windows and have dual boot. It requires some work, but it is easier than going against the grain and trying to get everything working on Windows.
Ubuntu, Heroku and Git work wonderfully. Just know the learning curve is steep at first. Hire someone from Guru.com or Elance to help you.
Also, running Textmate on Mac is the preferred solution, so if you are considering getting a Mac or have access to one, that is the best thing to do. I don't think you need very much computing power...
Finally, my favorite book is Agile Web Development for Rails. Googling around doesn't work so well because most of the information is from old versions of Rails and is deprecated or doesn't work.
A: I program with RoR on the Mac OS with textmate, and it's awesome.
I would suggest "Programming Ruby 1.9" (The Pickaxe Book) for Ruby and Agile Web Development with Rails" to learn Rails, both published by the Pragmatic Bookshelf.
Good luck!
A: I asked the same question when I started out - hoping for a somewhat prescriptive guide on learning Rails... couldn't find one so decided I would write one for others who might find themselves in a similar boat one day :) You can find it here:
Best Way to Learn Ruby & Rails
(It's now actually returned with the !learn factoid helper in the official Ruby on Rails IRC chat room.)
A: I have got up to speed with Ruby on Rails fairly quickly via this free online course which is currently being offered by UC Berkeley - Software as a Service - Engineering Long Lasting Software with instruction by Armando Fox and David Patterson. I can't speak highly enough of this course... it really was a privilege to learn Rails from these guys. And there is an active community on the course forums if you run into difficulty along the way. The first offering of the online course has now finished (as of 25 March, 2012) - the next time it will be run will be sometime in September of 2012.
It assumes you are a fairly competent developer and gets you started on ruby in the second week, then Rails runs from the third week up to the end of the course (five weeks). Your assignments are marked by an auto-grader. You get provided with a pre-built Ubuntu VM image with everything you need for development pre-installed on it (e.g. Ruby, Rails, Rake, Gems, RSPec, Cucumber, etc). All you have to do is start up the VM inside the (free) VirtualBox software which runs on MacOSX, Windows and Linux.
There is a recommended text book for the course ... here ... but you may be able to get by looking at the lectures and screencasts online.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "241"
} |
Q: ASP.Net: If I have the Session ID, Can I get the Session object? This question is related to this one, though I think I was a little too long-winded there to really get a good answer. I'll keep this brief.
I'm working on a web handler (ashx) that accepts a form post from an aspx page. When the handler receives this form post, in order to do what it needs to do, it needs to know the user who is logged in (User.Identity.Name), but I can't rely on cookies being sent by the browser.
I know I can get the Session.SessionID and place it in a hidden form field, but once my handler receives the form post, how can I use that SessionID to figure out the logged-in user's identity?
I'm using the StateServer mode for session state.
A: I think you can do it be implementing the IReadOnlySessionState interface on your HttpHandler
A: Unless there is a need to use session directly, you could always store whatever information about the logged-in user's identity in a singleton dictionary or cache and reference it via the SessionID stored in a hidden field. I personally see security issues in this but won't go into those. I would consider issuing single use identities for this type of implementation.
A: Jonas posted a great answer to this question here:
Can I put an ASP.Net session ID in a hidden form field?
A: In an HttpHandler or HttpModule implementation, you cannot always access session from the BeginRequest event. There is another event you can handle, called OnAcquireRequestState. If you write your code in that event, then HttpContext.Current.Session will not be null.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How can I test my web pages in Microsoft Internet Explorer on a Mac? I want to test the web pages I create in all the modern versions of Internet Explorer (6, 7 and 8 beta) but I work mainly on a Mac and often don't have direct access to a PC.
A: There's three different methods that I recommend:
Cloud-based interactive virtual machines
Use something like SauceLabs or BrowserStack. You'll be able to pick a browser of choice, enter a url and use a real OS with the real browser and test and interact as much as you need. Both of these also support setting up a tunnel to/from your own machine so any local hostnames will work fine.
There is also CrossBrowserTesting, browserling/testling, which seem to have similar services although I haven't used these myself.
Local virtualization
You can use VirtualBox (free and open-source, similar to VMWare or Parallels) to create one or more virtual machines on your computer. You may or may not know this, but you do not need to get an official copy of Microsoft Windows for these virtual machines. Microsoft offers free VM images of simplified Windows installations for the purposes of testing Internet Explorer and Microsoft Edge (download). Check one of these articles to get that up and running:
*
*Testing IE6, 7, 8 and 9 on Mac OS X, 2011-06, xairon.net
*Internet Explorer for Mac the Easy Way, 2011-09, osxdaily.com
In the past, there were also native Mac applications (such as ies4osx), or as a Windows application which requires a VM if you don't have Windows (such as IETester or MultipleIEs). The downside is that these emulations are often less stable than the real client, and are even harder to debug with because they don't run in the natural environment of the browser. Sometimes causing errors that don't occur in the real browser, and maybe not having bugs that the real browser would have.
Cloud-based screenshots factory
If you don't need interactivity and or need a cheaper solution (note that this method may not always be cheaper, do a little research before making assumptions) there are also services online that, like the previous one, have access to real browser/OS environments. But contrary to the previous, don't grant interactive access to the actual machines but only to get screenshots. This has both an upside and a downside. The downside is that you can't interact with it. The upside however is that most of these allow easy summarizing of screenshots so you don't have to start session after another and get screenshots.
Some I've used:
*
*BrowserShots (free and used to be my favorite, although the slowness made alternatives more attractive)
*Adobe BrowserLab (also free, requires an Adobe ID. Not as much options and coverage as BrowserShots, but: no delay, instant screenshots, compare views and ability to let the screenshot be taken after a given number of seconds instead of right away (to test asynchronous stuff).
*CrossBrowserTesting (not free, but also has an interactive environment (see previous method) and a screenshot factory that is like your own private "BrowserShots" site)
A: Once you've virtualized Windows on your Mac, you can also try the Multiple IE installer to get a variety of flavors of Internet Explorer without having to create separate VM instances.
*
*Multiple IE Installer
If you're just wanting to see a simple screenshot of how the page will render in various browsers, you can try the free service browsershots or there are a number of services that will automatically test your pages in multiple browsers.
*
*browsershots.org
A: Update: Microsoft now provide virtual machine images for various versions of IE that are ready to use on all of the major OS X virtualisation platforms (VirtualBox, VMWare Fusion, and Parallels).
Download the appropriate image from: https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/
On an Intel based Mac you can run Windows within a virtual machine. You will need one virtual machine for each version of IE you want to test against.
The instructions below include free and legal virtualisation software and Windows disk images.
*
*Download some virtual machine software. The developer disk images we're going to use will work with either VMWare Fusion or Sun Virtual Box. VMWare has more features but costs $80; Virtual Box, on the other hand, is more basic but is free for most users (see the Virtual Box licensing FAQ for details).
*Download the IE developer disk images, which are free from Microsoft: http://www.microsoft.com/downloads/...
*Extract the disk images using cabextract which is available from MacPorts or as source code (Thanks to Clinton).
*Download Q.app from http://www.kju-app.org/ and put it in your /Applications folder (you will need it to convert the disk images into a format VMWare/Virtual Box can use)
At this point, the process depends on which VM software you're using.
Virtual Box users
*
*Open a Terminal.app on your Mac (you can find it in /Applications/Utilities) and run the following sequence of commands, replacing input.vhd with the name of the VHD file you're starting from and output.vdi with the name you want your final disk image to have:
/Applications/Q.app/Contents/MacOS/qemu-img convert -O raw -f vpc "input.vhd" temp.bin
VBoxManage convertdd temp.bin "output.vdi"
rm temp.bin
mv "output.vdi" ~/Library/VirtualBox/VDI/
VBoxManage modifyvdi "output.vdi" compact
*Start Virtual Box and create a new virtual machine
*Select the new VDI file you've just created as the boot hard disk
VMWare fusion users
*
*Open a Terminal.app on your Mac (you can find it in /Applications/Utilities) and run the following commands, replacing input.vhd and output.vmdk with the name of the VHD file you're working on and the name you want your resulting disk image to have:
/Applications/Q.app/Contents/MacOS/qemu-img convert -O vmdk -f vpc "input.vhd" "output.vmdk"
mv "output.vmdk" ~/Documents/Virtual\ Machines.localized/
This will probably take a while (It takes around 30 minutes per disk image on my 2.4GHz Core 2 Duo MacBook w/ 2Gb RAM).
*Start VMWare Fusion and create a new virtual machine
*In the advanced disk options select "use an existing disk" and find the VMDK file you just created
A: Litmus is another web-based alternative.
A: Browsershots is another option if you just want to get screenshots..
A: There is an issue with the latest release (January 2009) of the VHDs. The VHD sees that there are hardware changes and prompts for a license key, eventually locking users out. As yet there is no known workaround.
A: If you don't have a copy of Windows that you could run in a virtual machine (VMware also isn't free), you can try IEs4Linux. It will require you configure some open source stuff on your Mac, but it is all free. You'll at least need fink, wine, and cabextract. See the link above for some specific command line directions. It's not that hard!
A: I've used Codeweavers Crossover product for doing this from time to time.
http://www.codeweavers.com/products/cxmac/
It's a different option to virtualisation, and gives you a little more control than some of the hosted solutions. That said, it's based on WINE, and so you can potentially get all the problems and issues that come with doing it that way. That said, for basic testing without plugins, etc, it works great.
I'm not 100% sure about support for IE8, you'd need to check that out, but it definitely gives you native support for 6 and 7.
A: You could use Spoon Browsers (web-based) once it becomes available for Mac.
A: OSX Daily explains how to install Windows VMs with a single terminal command (assuming you already have VirtualBox installed). To summarize:
IE 7:
curl -s https://raw.github.com/xdissent/ievms/master/ievms.sh | IEVMS_VERSIONS="7" bash
IE 8:
curl -s https://raw.github.com/xdissent/ievms/master/ievms.sh | IEVMS_VERSIONS="8" bash
IE 9:
curl -s https://raw.github.com/xdissent/ievms/master/ievms.sh | IEVMS_VERSIONS="9" bash
ALL THE IEs!:
curl -s https://raw.github.com/xdissent/ievms/master/ievms.sh | bash
A: There's a OSX distribution of IEs4 Linux called ies4osx, which has worked fine for me without any configuration.
A: Browsershots is nice, but useless if you need to test functionality rather than just overall visual rendering.
IEs4OSX and IEs4Linux have serious drawbacks. They have no real support for plugins and extensions like Flash and Silverlight. Rendering isn't precise and they're highly unstable. For testing you really need an actual version of IE running on Windows, but you don't need to have a dedicated box.
IE images on VirtualBox is really the best, and easiest way to go.
I have a screencast here if anyone's looking for a visual walk-through.
A: Yet another Web based alternative (although as Jeff said, not much use for testing functionality) is http://www.browsercam.com
A: If this is a business web site (or a serious site where it is important that it actually works on IE), then don't take the cheap route - invest in a Windows machine or two. Your customers will thank you.
Otherwise, virtualize.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |