Q: Multithreading in asp.net What kind of multi-threading issues do you have to be careful for in asp.net?
A: It's risky to spawn threads from the code-behind of an ASP.NET page, because the worker process will get recycled occasionally and your thread will die.
If you need to kick off long-running processes as a result of user actions on web pages, your best bet is to drop a message off in MSMQ and have a separate background service monitoring the queue. The service could take as long as it wants to accomplish the task, and the web page would be finished with its work almost immediately. You could accomplish the same thing with an async call to a web method, but don't rely on getting the response when the web method is finished working. From code-behind, it needs to be a quick fire-and-forget.
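To make that concrete, dropping the message off is only a few lines with System.Messaging - a minimal sketch, assuming a pre-created private queue (the path, method name, and label are placeholders):
using System.Messaging;

public static void EnqueueLongRunningTask(string taskDescription)
{
    // Assumes the queue ".\Private$\LongRunningTasks" was created beforehand.
    using (MessageQueue queue = new MessageQueue(@".\Private$\LongRunningTasks"))
    {
        queue.Send(taskDescription, "long-running task");
    }
    // The page returns almost immediately; a separate Windows service
    // reads the queue and performs the slow work on its own schedule.
}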
A: One thing to watch out for is objects that expire (I think HttpContext does). If you are using one for "fire and forget" operations, remember that if the ASP.NET cleanup code runs before your operation is done, you will suddenly no longer be able to access certain information.
A: If this is for a web service, you should definitely consider thread pooling. Too many threads will bring your application to a grinding halt because they will eventually start competing for CPU time.
Is this for file or network IO? If so, you should also consider using asynchronous IO. It can be a bit more of a pain to program, but you don't have to worry about spawning off too many threads at once.
A: Programmatic Caching is one area which immediately comes to my mind. It is a great feature which needs to be used carefully. Since it is shared across requests, you have to put locks around it before updating it.
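For example, a guarded cache update might look like the following minimal sketch (the cache key, lock object, and loader method are placeholders; requires System.Web and System.Data):
private static readonly object CacheLock = new object();

public static DataTable GetReportData()
{
    DataTable cached = (DataTable)HttpRuntime.Cache["ReportData"];
    if (cached == null)
    {
        lock (CacheLock)
        {
            // Re-check inside the lock so concurrent requests don't all rebuild it.
            cached = (DataTable)HttpRuntime.Cache["ReportData"];
            if (cached == null)
            {
                cached = LoadReportDataFromDatabase(); // hypothetical helper
                HttpRuntime.Cache.Insert("ReportData", cached);
            }
        }
    }
    return cached;
}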
Another place I would check is any code accessing filesystem like writing to log files. If one request has a read-write lock on a file, other concurrent requests will error out if not handled properly.
A: Isn't there a limit of 25 total threads in the IIS configuration? At least in IIS 6, I believe. If you exceed that limit, interesting things (read: loooooooong response times) may happen.
A: Depending on what you need, as far as multi-threading is concerned, have you thought of spawning requests from the client? It's safe to spawn requests using AJAX, and then act on the results in a callback. Or use a service as a backgrounding mechanism, which runs every X minutes and processes in the background that way.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How to repeatedly merge branches in Mercurial We're using Mercurial where I work and I want to have a setup similar to how I used SVN:
*Trunk
*Tags
  *Production
*Branches
Since Mercurial supports branches natively, I know how to create a named branch, but I can't find any documentation on how to repeatedly merge 'Trunk' with 'Production'.
Quite simply, I want to have a development branch for normal work and a production branch that I routinely pull changes from the development branch into. How do I do this with Mercurial?
A: Something like hg transplant? That's what we use on our dev and prod branches.
A: As the previous poster mentioned, the transplant extension can be used for cherry-picking individual changes from one branch to another. If, however, you always want to pull all the latest changes, the hg merge command will get you there.
The simplest case is when you're using clones to implement branching (since that's the use case Mercurial is designed around). Assuming you've turned on the built-in fetch extension in your .hgrc / Mercurial.ini:
cd ~/src/development
# hack hack hack
hg commit -m "Made some changes"
cd ../production
hg fetch ../development
If you're using local branches:
hg update -C development
# hack hack hack
hg commit -m "Made some changes"
hg update -C production
hg merge development
hg commit -m "Merged from development"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How do you programmatically change the tab order in a Win32 dialog? Often I need to add a control to a dialog after the dialog has been generated via a dialog template and CreateDialogIndirect. In these cases the tab order is set by the dialog template and there is no obvious way to change the tab order to include a newly created control.
A: I recently discovered that you can use SetWindowPos to accomplish this. Determine which control you want the new control to follow in the tab order, then use SetWindowPos like this:
SetWindowPos(hNewControl, hOldControl, 0, 0, 0, 0, SWP_NOMOVE|SWP_NOSIZE);
This changes the z-order of controls which, in turn, establishes the tab order.
A: I know this is an old question but here is how to do it at compile time (which is preferable in the vast majority of cases):
http://msdn.microsoft.com/en-us/library/7039hzb0(v=vs.80).aspx
My favourite method:
*From the View menu, choose Tab Order.
*Choose Assign Interactively.
*Double-click the tab order box beside the control you want to be the first control in the tab order.
*Click the tab order box for each of the other controls.
*Click anywhere on the form to save your changes and exit Tab Order mode, or press ESC to exit Tab Order mode without saving your changes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: View of allocated memory I'm looking for a tool ($, free, open source; I don't care) that will allow me to view not just the memory statistics for a .NET program, but also the object hierarchy. I'd really like to be able to drill down through each object and view its footprint, as well as all the objects it references.
I've looked at things like Ants Profiler from RedGate, but it's not quite what I want: I can't view specific instances.
EDIT:
I've used the .NET Memory Profiler (the one that ships with Visual Studio, and the one that used to be part of the SDK (?)) before, and while it's really good (and shows views most others don't), what I'm really after is being able to drill down through my object hierarchy, viewing each object instance.
A: I have used JetBrains DotTrace and Redgate Ants, both of which I would recommend. A lesser known profiler I have also used is .Net Memory Profiler (http://memprofiler.com/), which at the time I used it provided a different perspective on memory usage than the former two profilers mentioned. I find DotTrace and Ants to be very similar, though each one is slightly different.
A: JetBrains dottrace profiler is the best. I wouldn't work without it. It is hard to find a tool that is free and performs well in this arena. Dottrace is hands down the best profiler I have used for .Net.
A: There's also the Microsoft .NET profiler - I've used it a bit, and it's not bad for a free tool. Not sure if you can walk the object hierarchy, but it does break down memory use by type, and over time. You can even see the underlying data.
It does slow down the app a lot, though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Dynamic linq: Creating an extension method that produces a JSON result I'm stuck trying to create a dynamic linq extension method that returns a string in JSON format - I'm using System.Linq.Dynamic and Newtonsoft.Json and I can't get the Linq.Dynamic to parse the "cell=new object[]" part. Perhaps too complex? Any ideas?
My Main method:
static void Main(string[] args)
{
    NorthwindDataContext db = new NorthwindDataContext();
    var query = db.Customers;
    string json = JSonify<Customer>
        .GetJsonTable(
            query,
            2,
            10,
            "CustomerID",
            new string[]
            {
                "CustomerID",
                "CompanyName",
                "City",
                "Country",
                "Orders.Count"
            });
    Console.WriteLine(json);
}
JSonify class
public static class JSonify<T>
{
    public static string GetJsonTable(
        this IQueryable<T> query,
        int pageNumber,
        int pageSize,
        string IDColumnName,
        string[] columnNames)
    {
        string selectItems =
            String.Format(@"
                new
                {
                    {{0}} as ID,
                    cell = new object[]{{{1}}}
                }",
                IDColumnName,
                String.Join(",", columnNames));
        var items = new
        {
            page = pageNumber,
            total = query.Count(),
            rows = query
                .Select(selectItems)
                .Skip(pageNumber * pageSize)
                .Take(pageSize)
        };
        return JavaScriptConvert.SerializeObject(items);
        // Should produce this result:
        // {
        //   "page":2,
        //   "total":91,
        //   "rows":
        //   [
        //     {"ID":"FAMIA","cell":["FAMIA","Familia Arquibaldo","Sao Paulo","Brazil",7]},
        //     {"ID":"FISSA","cell":["FISSA","FISSA Fabrica Inter. Salchichas S.A.","Madrid","Spain",0]},
        //     {"ID":"FOLIG","cell":["FOLIG","Folies gourmandes","Lille","France",5]},
        //     {"ID":"FOLKO","cell":["FOLKO","Folk och fä HB","Bräcke","Sweden",19]},
        //     {"ID":"FRANK","cell":["FRANK","Frankenversand","München","Germany",15]},
        //     {"ID":"FRANR","cell":["FRANR","France restauration","Nantes","France",3]},
        //     {"ID":"FRANS","cell":["FRANS","Franchi S.p.A.","Torino","Italy",6]},
        //     {"ID":"FURIB","cell":["FURIB","Furia Bacalhau e Frutos do Mar","Lisboa","Portugal",8]},
        //     {"ID":"GALED","cell":["GALED","Galería del gastrónomo","Barcelona","Spain",5]},
        //     {"ID":"GODOS","cell":["GODOS","Godos Cocina Típica","Sevilla","Spain",10]}
        //   ]
        // }
    }
}
A: This is really ugly and there may be some issues with the string replacement, but it produces the expected results:
public static class JSonify
{
    public static string GetJsonTable<T>(
        this IQueryable<T> query, int pageNumber, int pageSize, string IDColumnName, string[] columnNames)
    {
        string select = string.Format("new ({0} as ID, \"CELLSTART\" as CELLSTART, {1}, \"CELLEND\" as CELLEND)", IDColumnName, string.Join(",", columnNames));
        var items = new
        {
            page = pageNumber,
            total = query.Count(),
            rows = query.Select(select).Skip((pageNumber - 1) * pageSize).Take(pageSize)
        };
        string json = JavaScriptConvert.SerializeObject(items);
        json = json.Replace("\"CELLSTART\":\"CELLSTART\",", "\"cell\":[");
        json = json.Replace(",\"CELLEND\":\"CELLEND\"", "]");
        foreach (string column in columnNames)
        {
            json = json.Replace("\"" + column + "\":", "");
        }
        return json;
    }
}
A: Thanks for the quick response.
However, note the required output does not have property names in the "cell" array (that's why I was using object[]):
"cell":["FAMIA","Familia Arquibaldo",...
vs.
"cell":{"CustomerID":"FAMIA","CompanyName","Familia Arquibaldo",...
The result is meant to be used with a JQuery grid called "flexify" which requires the output in this format.
A: static void Main(string[] args)
{
    NorthwindDataContext db = new NorthwindDataContext();
    var query = db.Customers;
    string json = query.GetJsonTable<Customer>(2, 10, "CustomerID", new string[] {"CustomerID", "CompanyName", "City", "Country", "Orders.Count" });
}
public static class JSonify
{
    public static string GetJsonTable<T>(
        this IQueryable<T> query, int pageNumber, int pageSize, string IDColumnName, string[] columnNames)
    {
        string select = string.Format("new ({0} as ID, new ({1}) as cell)", IDColumnName, string.Join(",", columnNames));
        var items = new
        {
            page = pageNumber,
            total = query.Count(),
            rows = query.Select(select).Skip((pageNumber - 1) * pageSize).Take(pageSize)
        };
        return JavaScriptConvert.SerializeObject(items);
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Modeling Geographic Locations in an Relational Database I am designing a contact management system and have come across an interesting issue regarding modeling geographic locations in a consistent way. I would like to be able to record locations associated with a particular person (mailing address(es) for work, school, home, etc.) My thought is to create a table of locales such as the following:
Locales (ID, LocationName, ParentID) where autonomous locations (such as countries, e.g. USA) are parents of themselves. This way I can have an arbitrarily deep nesting of 'political units' (COUNTRY > STATE > CITY or COUNTRY > STATE > CITY > UNIVERSITY). Some queries will necessarily involve recursion.
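For concreteness, here is a minimal SQL sketch of that table with its self-referencing parent key (names and sample rows are illustrative):
CREATE TABLE Locales (
    ID           INT          NOT NULL PRIMARY KEY,
    LocationName VARCHAR(100) NOT NULL,
    ParentID     INT          NOT NULL REFERENCES Locales (ID)
);

-- COUNTRY > STATE > CITY, with the autonomous country as its own parent:
INSERT INTO Locales VALUES (1, 'USA', 1);
INSERT INTO Locales VALUES (2, 'California', 1);
INSERT INTO Locales VALUES (3, 'San Francisco', 2);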
I would appreciate any other recommendations or perhaps advice regarding predictable issues that I am likely to encounter with such a scheme.
A: Sounds like a good approach to me. The one thing that I'm not clear on when reading your post is what "parents of themselves" means - if this is to indicate that the locale does not have a parent, you're better off using null than the ID of itself.
A: You might want to have a look at Freebase.com as a site that's had some open discussion about what a "location" means and what it means when a location is included in another. These sorts of questions can generate a lot of discussion.
For example, there is the obvious "geographic nesting", but there are less obvious logical nestings. For example, in a strictly geographic sense, Vatican City is nested within Italy. But it's not nested politically. Similarly, if your user is located in a research center that belongs to a university, but isn't located on the University's property, do you model that relationship or not?
A: I think you might be overthinking this. There's a reason most systems just store addresses and maybe a table of countries. Here are some things to look out for:
*Would an address in the Bronx include the borough as a level in the hierarchy? Would an address in an unincorporated area eliminate the "city" level of the hierarchy? How do you model an address within a university vs an address that's not within one? You'll end up with a ragged hierarchy which will force you to traverse the tree every time you need to display an address in your application. If you have an "address book" page the performance hit could be significant.
*I'm not sure that you even have just one hierarchy. Brown University has facilities in Providence, RI and Bristol, RI. The only clean solution would be to have a double hierarchy with two campuses that each belong to their respective cities in one hierarchy but that both belong to Brown University on the other hierarchy. (A university is fundamentally unlike a political region. You shouldn't really mix them.)
*What about zip codes? Some zip codes encompass multiple towns, other times a city is broken into multiple zip codes. And (rarely) some zip codes even cross state lines. (According to Wikipedia, at least...)
*How will you enter the data? Building out the database by parsing conventionally-formatted addresses can be difficult when you take into account vanity addresses, alternate names for certain streets, different international formats, etc. And I think that entering every address hierarchically would be a PITA.
*It sounds like you're trying to model the entire world in your application. Do you really want or need to maintain a table that could conceivable contain every city, state, province, postal code, and country in the world? (Or at least every one where you know somebody?) The only thing I can think of that this scheme would buy you is proximity, but if that's what you want I'd just store state and country separately (and maybe the zip code) and add latitude and longitude data from Google.
Sorry for the extreme pessimism, but I've gone down that road myself. It's logically beautiful and elegant, but it doesn't work so well in practice.
A: Here's a suggestion for a pretty flexible schema. An immediate warning: it could be too flexible/complex for what you actually need
Location
(LocationID, LocationName)
-- Basic building block
LocationGroup
(LocationGroupID, LocationGroupName, ParentLocationGroupID)
-- This can effectively encapsulate multiple hierarchies. You have one root node and then you can create multiple independent branches. E.g. you can split by state first and then create several sub-hierarchies, e.g. ZIP/city/xxxx
LocationGroupLocation
(LocationID, LocationGroupID)
-- Here's how you link Location with one or more hierarchies. E.g. you can link your house to a ZIP, as well as a City... What you need to implement is a constraint that you should not be able to link up a location with any two hierarchies where one of them is a parent of the other (as the relationship is already implicit).
A: I would think carefully about this since it may not be a necessary feature.
Why not just use a text field and let users type in an address?
Remember the KISS principle (Keep It Simple, Stupid).
A: I agree with the other posts that you need to be very careful here about your requirements. Location can become a tricky issue and this is why GIS systems are so complicated.
If you are sure you just need a basic hierarchy structure, I have the following suggestions:
*I support the previous comment that root-level items should not have themselves as the parent; root-level items should have a null value for the parent. Always be careful about putting data into a field that has no meaning (i.e. a "special" value to represent no data). This practice is rarely necessary and way overused in the developer community.
*Consider XPath / XML. This is something to consider both for recording the hierarchy structure and for processing/parsing the data at retrieval. If you are using MS SQL Server, the XPath expressions in select statements are perfect for tasks such as returning the full location/hierarchy path of a record, as the code is simple and the results are fast.
A: For geographic locations you may wish to resolve an address to a latitude/longitude pair (perhaps using Google Maps etc.) to calculate proximities and the like. For geopolitical nesting... I'd go with the KISS response.
If you really want to model it, perhaps you need the types to be more generic ... Country -> State -> County -> Borough -> Locality -> City -> Suburb -> Street or PO Box -> Number -> Apartment etc. -> Institution (University or Employer) -> Division -> Subdivision-1 -> subdivision-n ... Are you sure you can't do KISS?
A: I'm modeling an app for global users and I have the same problems, but I think this approach may already be in use in many enterprises. Still, why doesn't this problem have a universal solution? Or is there a best solution that can serve as a starting point, so that nobody in the world has to think up a solution for it from the beginning?
In IT, unfortunately, we build the same things many times and in many places. For example, who hasn't made more than one user, customer, or product database? And worse, every enterprise in the world has made one. I think there could be universal solutions for universal problems.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Does Java need closures? I've been reading a lot lately about the next release of Java possibly supporting closures. I feel like I have a pretty firm grasp on what closures are, but I can't think of a solid example of how they would make an Object-Oriented language "better". Can anyone give me a specific use-case where a closure would be needed (or even preferred)?
A: Java doesn't need closures; an object-oriented language can do everything a closure does using intermediate objects to store state or perform actions (in Java's case, inner classes).
But closures are desirable as a feature because they greatly simplify the code and increase readability and as a consequence the maintainability of the code.
I'm no Java specialist, but I'm using C# 3.5 and closures are one of my favorite features of the language. Take the following statement as an example:
// Example #1 with closures
public IList<Customer> GetFilteredCustomerList(string filter) {
//Here a closure is created around the filter parameter
return Customers.Where( c => c.Name.Contains(filter)).ToList();
}
now take an equivalent example that doesn't use closures
//Example #2 without closures, using just basic OO techniques
public IList<Customer> GetFilteredCustomerList(string filter) {
return Customers.Where( new CustomerNameFiltrator(filter)).ToList();
}
...
public class CustomerNameFiltrator : IFilter<Customer> {
private string _filter;
public CustomerNameFiltrator(string filter) {
_filter = filter;
}
public bool Filter(Customer customer) {
return customer.Name.Contains( _filter);
}
}
I know this is C# and not Java, but the idea is the same: closures are useful for conciseness, and make code shorter and more readable. Behind the scenes, the closures of C# 3.5 do something that looks very similar to example #2, meaning the compiler creates a private class behind the scenes and passes the 'filter' parameter to it.
Java doesn't need closures to work; as a developer you don't need them either. But they are useful and provide benefits, so that means they are desirable in a language that is a production language and one of whose goals is productivity.
A:
I've been reading a lot lately about the next release of Java possibly supporting closures. I feel like I have a pretty firm grasp on what closures are, but I can't think of a solid example of how they would make an Object-Oriented language "better."
Well, most people who use the term "closure" actually mean "function object", and in this sense, function objects make it possible to write simpler code in certain circumstances such as when you need custom comparators in a sort function.
For example, in Python:
def reversecmp(x, y):
return y - x
a = [4, 2, 5, 9, 11]
a.sort(cmp=reversecmp)
This sorts the list a in reverse order by passing the custom comparison function reversecmp. The addition of the lambda operator makes things even more compact:
a = [4, 2, 5, 9, 11]
a.sort(cmp=lambda x, y : y - x)
Java does not have function objects, so it uses "functor classes" to simulate them. In Java you do the equivalent operation by implementing a custom version of the Comparator class, and passing that to the sort function:
class ReverseComparator implements Comparator {
    public int compare(Object x, Object y) {
        return (Integer) y - (Integer) x;
    }
}

List<Integer> a = Arrays.asList(4, 2, 5, 9, 11);
Collections.sort(a, new ReverseComparator());
As you can see, it gives the same effect as closures, but is clumsier and more verbose. However, the addition of anonymous inner classes obviates most of the pain:
List<Integer> a = Arrays.asList(4, 2, 5, 9, 11);
Comparator reverse = new Comparator() {
    public int compare(Object x, Object y) {
        return (Integer) y - (Integer) x;
    }
};
Collections.sort(a, reverse);
So I would say that the combination of functor classes + anonymous inner classes in Java is sufficient to compensate for the lack of true function objects, making their addition unnecessary.
A: As a Lisp programmer I would wish that the Java community understands the following difference: functions as objects vs. closures.
a) Functions can be named or anonymous, but they can also be objects in themselves. This allows functions to be passed around as arguments, returned from functions or stored in data structures. This means that functions are first-class objects in a programming language.
Anonymous functions don't add much to the language, they just allow you to write functions in a shorter way.
b) A closure is a function plus a binding environment. Closures can be passed downwards (as parameters) or returned upwards (as return values). This allows the function to refer to variables of its environment, even if the surrounding code is no longer active.
If you have a) in some language, then the question comes up what to do about b). There are languages that have a), but not b). In the functional programming world, a) (functions) and b) (functions as closures) are nowadays the norm. Smalltalk had a) (blocks are anonymous functions) for a long time, but then some dialects of Smalltalk added support for b) (blocks as closures).
You can imagine that you get a slightly different programming model, if you add functions and closures to the language.
From a pragmatic view, the anonymous function adds some short notation, where you pass or invoke functions. That can be a good thing.
The closure (function plus binding) allows you, for example, to create a function that has access to some variables (for example, to a counter value). Now you can store that function in an object, access it and invoke it. The context for the function object is now not only the objects it has access to, but also the variables it has access to via bindings. This is also useful, but you can see that variable bindings vs. access to object variables now is an issue: when should something be a lexical variable (that can be accessed in a closure) and when should it be a variable of some object (a slot)? When should something be a closure or an object? You can use both in similar ways. A usual programming exercise for students learning Scheme (a Lisp dialect) is to write a simple object system using closures.
The result is a more complicated programming language and a more complicated runtime model. Too complicated?
A: Java has had closures since 1.1, just in a very cumbersome and limited way.
They are often useful wherever you have a callback of some description. A common case is to abstract away control flow, leaving the interesting code to call an algorithm with a closure that has no external control flow.
A trivial example is for-each (although Java 1.5 already has that). Whilst you can implement a forEach method in Java as it stands, it's far too verbose to be useful.
An example which already makes sense with existing Java is implementing the "execute around" idiom, whereby resource acquisition and release is abstracted. For instance, file open and close can be done within try/finally, without the client code having to get the details right.
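For example, a minimal sketch of execute-around with today's anonymous inner classes (the callback interface and helper method are invented for illustration):
import java.io.*;

interface FileUser {
    void use(BufferedReader reader) throws IOException;
}

class ExecuteAround {
    // Acquisition and release live in exactly one place...
    static void withFile(String path, FileUser block) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        try {
            block.use(reader);
        } finally {
            reader.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // ...so client code can't get the open/close details wrong.
        withFile("data.txt", new FileUser() {
            public void use(BufferedReader r) throws IOException {
                System.out.println(r.readLine());
            }
        });
    }
}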
A: They don't make an Object-Oriented language better. They make practical languages more practical.
If you're attacking a problem with the OO hammer - represent everything as interactions between objects - then a closure makes no sense. In a class-based OO language, closures are the smoke-filled back rooms where stuff gets done but no one talks about it afterwards. Conceptually, it is abhorrent.
In practice, it's extremely convenient. I don't really want to define a new type of object to hold context, establish the "do stuff" method for it, instantiate it, and populate the context... I just want to tell the compiler, "look, see what I have access to right now? That's the context I want, and here's the code I want to use it for - hold on to this for me 'till I need it".
Fantastic stuff.
A: The most obvious thing would be a pseudo-replacement for all those classes that just have a single method called run() or actionPerformed() or something like that. So instead of creating a Thread with a Runnable embedded, you'd use a closure instead. Not more powerful than what we've got now, but much more convenient and concise.
So do we need closures? No. Would they be nice to have? Sure, as long as they don't feel bolted on, as I fear they would be.
A: When closures finally arrive in Java, I will gleefully get rid of all my custom comparator classes.
myArray.sort( (a, b) => a.myProperty().compareTo(b.myProperty()) );
...looks a helluva lot better than...
myArray.sort(new Comparator<MyClass>() {
    public int compare(MyClass a, MyClass b) {
        return a.myProperty().compareTo(b.myProperty());
    }
});
A: A few people have said, or implied, that closures are just syntactic sugar - doing what you could already do with anonymous inner classes and making it more convenient to pass parameters in.
They are syntactic sugar in the same sense that Java is syntactic sugar for assembler (that "assembler" could be bytecode, for the sake of argument). In other words, they raise the level of abstraction, and this is an important concept.
Closures promote the concept of the function-as-object to a first class entity - one that increases the expressiveness of code, rather than cluttering it with even more boilerplate.
An example that's close to my heart has already been mentioned by Tom Hawtin - implementing the Execute Around idiom, which is just about the only way to get RAII into Java. I wrote a blog entry on exactly that subject a couple of years ago when I first heard closures might be coming.
Ironically, I think the very reason that closures would be good for Java (more expressiveness with less code) may be what rattles many Java advocates. Java has a mindset of "spell everything out the long way". That and the fact that closures are a nod towards a more functional way of doing things - which I also see as a Good Thing, but may water down the pure OO message that many in the Java community hold dear.
A: I have been thinking a lot about the topic of this very interesting question in the last few days. First of all, if I have understood correctly, Java already has some basic notion of closures (defined through anonymous classes) but the new feature that is going to be introduced is the support for closures based on anonymous functions. This extension will definitely make the language more expressive but I am not sure if it really fits with the rest of the language.
Java has been designed as an object-oriented language with no support for functional programming: Will the new semantics be easy to understand? Java 6 does not even have functions, will Java 7 have anonymous functions but no "normal" functions?
My impression is that as new programming styles or paradigms like functional programming become more popular, everyone wants to use them in their favourite OOP language. This is understandable: one wants to continue to use a language they're familiar with while adopting new features. But in this way a language can become really complex and lose coherence.
So my attitude at the moment is to stick to Java 6 for OOP (I hope Java 6 will still be supported for a while) and, in case I really get interested in doing OOP + FP, to take a look at some other language like Scala (Scala was defined to be multi-paradigm from the beginning and can be well integrated with Java) rather than switching to Java 7.
I think Java owes its success to the fact that it combines a simple language with very powerful libraries and tools, and I do not think that new features like closures will make it a better programming language.
A: Now that JDK8 is about to be released there is more information available that can enrich the answers to this question.
Brian Goetz, language architect at Oracle, has published a series of papers (still drafts) on the current state of lambda expressions in Java. They also cover closures as they are planning to release them in the upcoming JDK, which should be code complete around January 2013 and should be released around midyear 2013.
*The State of Lambda: in the first page or two this article attempts to answer the question presented here. Although I still found it short on arguments, it is full of examples.
*The State of Lambda - Libraries Edition: this is also very interesting because it covers advantages like lazy evaluation and parallelism.
*The Translation of Lambda Expressions: which basically explains the desugaring process done by the Java compiler.
A: As a Java developer who is trying to teach themselves Lisp in an attempt to become a better programmer, I would say that I would like to see the Josh Bloch proposal for closures implemented. I find myself using anonymous inner classes to express things like what to do with each element of a list when aggregating some data. It would be nice to represent that as a closure, instead of having to create an abstract class.
A: Closures in an imperative language (examples: JavaScript, C#, the forthcoming C++ refresh) are not the same as anonymous inner classes. They need to be able to capture modifiable references to local variables. Java's inner classes can only capture local final variables.
Almost any language feature can be criticised as non-essential:
*for, while, do are all just syntactic sugar over goto/if.
*Inner classes are syntactic sugar over classes with a field pointing to the outer class.
*Generics are syntactic sugar over casts.
Exactly the same "non-essential" argument should have blocked the inclusion of all the above features.
A: I suppose that to support core functional programming concepts, you need closures. They make the code more elegant and composable. Also, I like the idea of passing around lines of code as parameters to functions.
A: There are some very useful 'higher order functions' which can do operations on lists using closures. Higher order functions are functions having 'function objects' as parameters.
E.g. it is a very common operation to apply some transformation to every element in a list. This higher order function is commonly called 'map' or 'collect'. (See the *. spread operator of Groovy).
For example to square each element in a list without closures you would probably write:
List<Integer> squareInts(List<Integer> is){
    List<Integer> result = new ArrayList<Integer>(is.size());
    for (Integer i : is)
        result.add(i*i);
    return result;
}
Using closures and map and the proposed syntax, you could write it like that:
is.map({Integer i => i*i})
(There is a possible performance problem here regarding boxing of primitive types.)
As explained by Pop Catalin there is another higher order function called 'select' or 'filter': It can be used to get all the elements in a list complying to some condition. For example:
Instead of:
List<String> onlyStringsWithMoreThan4Chars(List<String> strings){
    List<String> result = new ArrayList<String>(strings.size()); // should be enough
    for (String str : strings)
        if (str.length() > 4) result.add(str);
    return result;
}
Instead you could write something like
strings.select({String str => str.length() > 4});
using the proposal.
You might look at the Groovy syntax, which is an extension of the Java language to support closures right now. See the chapter on collections of the Groovy User Guide for more examples what to do with closures.
A remark:
There is perhaps some clarification needed regarding the term 'closure'. What I've shown above are, strictly speaking, not closures. They are just 'function objects'.
A closure is everything which can capture - or 'close over' - the (lexical) context of the code surrounding it. In that sense there are closures in Java right now, i.e. anonymous classes:
Runnable createStringPrintingRunnable(final String str){
    return new Runnable(){
        public void run(){
            System.out.println(str); // this accesses a variable from an outer scope
        }
    };
}
A: Java Closure Examples
A: Not only that benjismith, but I love how you can just do...
myArray.sort{ it.myProperty }
You only need the more detailed comparator you've shown when the natural language comparison of the property doesn't suit your needs.
I absolutely love this feature.
A: What about readability and maintainability? One-liner closures are harder to understand and debug, IMO.
Software has a looong life, and you may have people with only rudimentary knowledge of the language maintaining it... So spread-out logic is better than one-liners for easy maintenance... You generally don't have a software star looking after software after its release...
A: You might want to look at Groovy, a language that's mostly compatible with Java, and runs on the JRE, but supports Closures.
A: As for the lack of binding in anonymous functions [i.e. the variables (and method arguments, if there is an enclosing method) of the outer context are available if declared final, but not otherwise]: I don't quite understand what that restriction actually buys.
I use "final" profusely anyways. So, if my intent was to use the same objects inside the closure, I would indeed declare those objects final in the enclosing scope. But what would be wrong in letting the "closure [java a.i.c.]" just get a copy of the reference as if passed via a constructor (well that in fact is how it is done).
If the closure wants to overwrite the reference, so be it; it will do so without changing the copy that the enclosing scope sees.
If we argue that that would lead to unreadable code (e.g. maybe it's not straight-forward to see what the object reference is at the time of the constructor call for the a.i.c.), then how about at least making the syntax less verbose? Scala? Groovy?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46"
} |
Q: .MSI Not Always Uninstalling Previous Versions In a number of applications we create an MSI Installer with the Visual Studio Setup Project. In most cases, the install works fine, but every now and then the previous version was not uninstalled correctly. The user ends up with two icons on the desktop, and in the Add/Remove program list, the application appears twice. We have yet to find any pattern and in most cases the installer works without any problems.
A: What happens when the uninstall of the previous version fails depends on the sequencing of the RemoveExistingProducts action. I have written a summary about the various options in the past: http://jpassing.wordpress.com/2007/06/16/where-to-place-removeexistingproducts-in-a-major-msi-upgrade/.
Unfortunately, you do not have control over RemoveExistingProducts sequencing when using VS setup projects (unless you edit the MSI with Orca after it has been built, which usually is not practical). But if your setup project is not completely trivial, I would strongly suggest you use a different MSI authoring tool like WiX or one of the commercial tools anyway.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Floats messing up in Safari browsers I have a site I made really fast that uses floats to display different sections of content. The floated content and the content that has an additional margin both appear fine in FF/IE, but in Safari one of the divs is completely hidden. I've tried switching to padding and position:relative, but nothing has worked for me. If I take out the code to display it to the right, it shows up again but under the floated content.
The main section of css that seems to be causing the problem is:
#settings{
    float:left;
}
#right_content{
    margin-top:20px;
    margin-left:440px;
    width:400px;
}
This gives me the same result whether I specify a size to the #settings div or not. Any ideas would be appreciated.
The site is available at: http://frickinsweet.com/tools/Theme.mvc.aspx to see the source code.
A: Have you tried floating the #right_content div to the right?
#right_content{
    float: right;
    margin-top: 20px;
    width: 400px;
}
A: I believe the error lies in the mark up that the color picker is generating. I saved the page and removed that code for the color picker and it renders fine in IE/FF/SF.
A: Sorry, I should have mentioned that as well. I tried floating that content right, and additionally tried floating it left and setting the position, thinking that both divs would start out at left:0 and setting the margin of the right one would move it over.
Thanks
A: A few things you should fix beforehand:
*Your <style> tag is in <body>, when it belongs in <head>
*You have a typo "realtive" in one of your inline styles:
<a href="http://feeds.feedburner.com/ryanlanciaux" style="position:realtive; top:-6px;">
Try to get your page to validate; this should make debugging the actual problems far easier.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Persistent DB Connections - Yea or Nay? I'm using PHP's PDO layer for data access in a project, and I've been reading up on it and seeing that it has good innate support for persistent DB connections. I'm wondering when/if I should use them. Would I see performance benefits in a CRUD-heavy app? Are there downsides to consider, perhaps related to security?
If it matters to you, I'm using MySQL 5.x.
A: You could use this as a rough "ruleset":
YES, use persistent connections, if:
*There are only a few applications/users accessing the database, i.e. you will not end up with 200 open (but probably idle) connections because there are 200 different users on the same host.
*The database is running on another server that you are accessing over the network
*An (one) application accesses the database very often
NO, don't use persistent connections, if:
*Your application only needs to access the database 100 times an hour.
*You have many webservers accessing one database server
*You're using Apache in prefork mode. It uses one connection for each child process, which can ramp up fairly quickly. (via @Powerlord in the comments)
Using persistent connections is considerably faster, especially if you are accessing the database over a network. It doesn't make so much difference if the database is running on the same machine, but it is still a little bit faster. However - as the name says - the connection is persistent, i.e. it stays open, even if it is not used.
The problem with that is that in the "default configuration", MySQL only allows 1000 parallel "open channels". After that, new connections are refused (you can tweak this setting). So if you have - say - 20 web servers with 100 clients on each of them, and every one of them has just one page access per hour, simple math will show you that you'll need 2000 parallel connections to the database. That won't work.
Ergo: Only use it for applications with lots of requests.
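For reference, turning persistence on in PDO is a single constructor option - a minimal sketch with placeholder DSN and credentials:
<?php
// With ATTR_PERSISTENT, PHP reuses an already-open connection for this
// DSN/user pair instead of reconnecting on every request.
$db = new PDO('mysql:host=db.example.com;dbname=app', 'user', 'secret',
              array(PDO::ATTR_PERSISTENT => true));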
A: Creating connections to the database is a fairly expensive operation. Persistent connections are a good idea. In the ASP.Net and Java world, we have "connection pooling", which is roughly the same thing, and also a good idea.
A: IMO, the real answer to this question is whatever works best for your app. I would recommend you benchmark your app using both persistent and non-persistent connections.
Maggie Nelson @ Objectively Oriented posted about this in August and Robert Swarthout made an accompanying post with some hard numbers. Both are pretty good reads.
A: In brief, my experience says that persistent connections should be avoided as far as possible.
Note that mysql_close is a no-operation (no-op) for connections that are created using mysql_pconnect. This means a persistent connection cannot be closed by the client at will. Such a connection will be closed by the MySQL server when no activity occurs on the connection for longer than wait_timeout. If wait_timeout is a large value (say 30 min) then the MySQL server can easily reach the max_connections limit. In such a case, MySQL will not accept any further connection requests. This is when your pager starts beeping.
In order to avoid reaching the max_connections limit, use of persistent connections needs careful balancing of the following variables...
*Number of Apache processes on one host
*Total number of hosts running Apache
*wait_timeout variable in the MySQL server
*max_connections variable in the MySQL server
*Number of requests served by one Apache process before it is re-spawned
So, please use persistent connections only after enough deliberation. You may not want to invite complex runtime issues for the small gain you get from persistent connections.
A: In my humble opinion:
When using PHP for web development, most of your connections will only "live" for the life of the page executing. A persistent connection is going to cost you a lot of overhead as you'll have to put it in the session or some such thing.
99% of the time a single non-persistent connection that dies at the end of the page execution will work just fine.
The other 1% of the time, you probably should not be using PHP for the app, and there is no perfect solution for you.
A: In general, you'll need to use non-persistent connections sometimes, and it's nice to have a single pattern to apply to db connection design (as long as there's relatively little upside to using persistent connections in your context.)
A: I was going to ask this same question but rather than ask the same question again I'll just add some information that I've found.
*Are PHP persistent connections evil ?
*Persistent Database Connections
It is also worth noting that the newer mysqli extension does not even include the option to use persistent database connections.
I'm still using persistent connections at the moment but plan to switch to non-persistent in the near future.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "56"
} |
Q: Is it possible to do the Navision 5.0 export to Word/Excel to OpenOffice.org? Navision 5.0 includes a feature to export to Word or Excel. Is it possible to make this work with OpenOffice.org Writer or Calc instead? If so, what has to be done to set it up?
I have been told by my Navision reseller that the feature works best with Office 2007, and export to Excel 2003 works. No mention of Office 2000 (which is what we mostly have installed currently) or OpenOffice.org. I'm hoping to be able to standardise on OpenOffice.org across the company when 3.0 is released, to avoid the expense of upgrading everyone to Microsoft Office 2007.
A: I know this is an old question, but I'll add the answer just in case anyone comes here:
You can export directly to OpenOffice without customizations. The only thing you need is to go into Tools > Manage Style Sheets... and modify the existing StyleSheets so that they open OpenCalc and OpenWrite instead of Excel and Word.
Note: it's been a while since I last configured it, but I seem to remember that you might need to export and reimport the stylesheets to change the associated program.
It's quite easy, and you can actually keep both options (Export to Excel/Export to OpenCalc) so that users who need MS Office can use Excel, while the rest use OpenCalc.
This answer applies to the functionality to export to Word and Excel in Dynamics Nav 5.0 and Nav 5.0SP1. I haven't tried it in Dynamics Nav 2009 (Role Tailored Client).
A: In order to export from Navision, you will need to have Microsoft Excel 2003 or 2007 installed on the machine (because the export process uses automation to call Excel). However, after that, you can definitely open the exported files using OpenOffice.
But, you cannot export to OpenOffice directly. At least, not without customizations.
A: You should be able to export the documents to word or excel and then use OpenOffice to open the documents. No additional set up required.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Why the option to use attributes in new ATL projects was removed from Visual Studio 2008? This is the text from MSDN comment: "According to the book ATL Internals (2ed) the use of attributes is deprecated and should not be used in new ATL projects."
But WHY?!!
A: In ATL these attributes were a compiler trick. They were not a core part of the platform like attributes in C#. They were also more confusing to debug than macros, and the method of dumping generated attribute code was a hassle.
I suspect another issue is likely to have been C++ compiler compatibility and standards-adherence. Attributes didn't make for more beautiful C++ code, and perhaps this syntax may be used in a future version of a real C++ standard.
A: Appendix D of the book goes into this.
Attributed ATL requires compiler extensions, and the problems encountered include
*debugging either code generated by the attribute or the interaction between the attribute and your code
*no control over the code generated
*bugs, especially around connection points
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: create and stream large XML document in C++ I have some code that creates a fairly large XML DOM and writes it off to a file (up to 50-100MB). It basically creates the DOM and then calls a toString on it and writes it out with ofstream. Is there a way to get streaming output of the generated DOM so that it doesn't create the whole structure in memory all at once and then copy it, etc? I will not modify any node after I create it, so it can write it out and free up the memory right away. I could write my own XML class that does the XML construction, but... I don't think that's a good idea, since I'll probably miss something when it comes down to escaping etc.
A: Ok, turns out libxml2 has a streaming API:
http://xmlsoft.org/examples/testWriter.c
It's a little old style (very C-ish) but you can write your wrapper around it.
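In outline, the writer API streams each element straight to the output instead of building up a DOM - a minimal sketch (the file name, element names, and counts here are invented):
#include <libxml/xmlwriter.h>

int main(void)
{
    /* Elements go straight to disk; no in-memory tree is built. */
    xmlTextWriterPtr writer = xmlNewTextWriterFilename("out.xml", 0);
    if (writer == NULL) return 1;

    xmlTextWriterStartDocument(writer, NULL, "UTF-8", NULL);
    xmlTextWriterStartElement(writer, BAD_CAST "records");

    for (int i = 0; i < 1000000; i++) {
        xmlTextWriterStartElement(writer, BAD_CAST "record");
        xmlTextWriterWriteFormatAttribute(writer, BAD_CAST "id", "%d", i);
        /* Content is escaped for you, e.g. the '&' and '<' below. */
        xmlTextWriterWriteElement(writer, BAD_CAST "note", BAD_CAST "a & b < c");
        xmlTextWriterEndElement(writer); /* </record> */
    }

    xmlTextWriterEndElement(writer); /* </records> */
    xmlTextWriterEndDocument(writer);
    xmlFreeTextWriter(writer);
    return 0;
}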
A: I would recommend GenX as a streaming XML writer, I use this in Programmer's Notepad and it works a treat, you can see examples of use in the source code. Extremely fast, and it produces good UTF-8 XML. Memory usage while you use it should remain roughly constant.
A: Look under the keyword "C++ XML writer"; XML writers generate XML to a file without building the entire DOM in memory, so they don't need to use very much memory at all. You didn't mention a platform, but Microsoft XmlLite has IXmlWriter.
A: There really isn't much to generating valid XML; the escaping that you worry about is trivial.
There's a library for streaming writing of XML here: https://code.google.com/p/xml-writer-cpp/ — if nothing else, it's useful for education purposes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Get CPU usage in shell script? I'm running some JMeter tests against a Java process to determine how responsive a web application is under load (500+ users). JMeter will give the response time for each web request, and I've written a script to ping the Tomcat Manager every X seconds which will get me the current size of the JVM heap.
I'd like to collect stats on the server of the % of CPU being used by Tomcat. I tried to do it in a shell script using ps like this:
PS_RESULTS=`ps -o pcpu,pmem,nlwp -p $PID`
...running the command every X seconds and appending the results to a text file. (for anyone wondering, pmem = % mem usage and nlwp is number of threads)
However I've found that this gives a different definition of "% of CPU Utilization" than I'd like - according to the manpages for ps, pcpu is defined as:
cpu utilization of the process in "##.#" format. It is the CPU time used divided by the time the process has been running (cputime/realtime ratio), expressed as a percentage.
In other words, pcpu gives me the % CPU utilization for the process for the lifetime of the process.
Since I want to take a sample every X seconds, I'd like to be collecting the CPU utilization of the process at the current time only - similar to what top would give me
(CPU utilization of the process since the last update).
How can I collect this from within a shell script?
A: The most useful tool I've found for monitoring a server while performing a test such as JMeter on it is dstat. It not only gives you a range of stats from the server, it outputs to csv for easy import into a spreadsheet and lets you extend the tool with modules written in Python.
A: User load: top -b -n 2 |grep Cpu |tail -n 1 |awk '{print $2}' |sed 's/.[^.]*$//'
System load: top -b -n 2 |grep Cpu |tail -n 1 |awk '{print $3}' |sed 's/.[^.]*$//'
Idle load: top -b -n 1 |grep Cpu |tail -n 1 |awk '{print $5}' |sed 's/.[^.]*$//'
Each result is a whole number (the trailing sed strips the decimal part).
A: Off the top of my head, I'd use the /proc filesystem view of the system state - Look at man 5 proc to see the format of the entry for /proc/PID/stat, which contains total CPU usage information, and use /proc/stat to get global system information. To obtain "current time" usage, you probably really mean "CPU used in the last N seconds"; take two samples a short distance apart to see the current rate of CPU consumption. You can then munge these values into something useful. Really though, this is probably more a Perl/Ruby/Python job than a pure shell script.
You might be able to get the rough data you're after with /proc/PID/status, which gives a Sleep average for the process. Pretty coarse data though.
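A rough sketch of that two-sample approach in shell (field positions are from proc(5); the PID argument and one-second interval are placeholders, and the awk field arithmetic assumes the process name contains no spaces):
#!/bin/sh
PID=$1

sample() {
    # utime + stime: fields 14 and 15 of /proc/PID/stat, in clock ticks
    awk '{print $14 + $15}' "/proc/$PID/stat"
    # total ticks across all CPUs: sum of the numbers on the "cpu" line
    awk '/^cpu /{t=0; for (i=2; i<=NF; i++) t+=$i; print t}' /proc/stat
}

set -- $(sample); p1=$1; t1=$2
sleep 1
set -- $(sample); p2=$1; t2=$2

# scale by CPU count so one busy thread reads ~100%, as in top's default mode
ncpu=$(grep -c '^processor' /proc/cpuinfo)
awk -v p=$((p2 - p1)) -v t=$((t2 - t1)) -v n="$ncpu" \
    'BEGIN { printf "%.1f\n", 100 * p * n / t }'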
A: Use top -b (and other switches if you want different outputs). It will just dump to stdout instead of jumping into a curses window.
A: Also use 1 as the iteration count, so you will get a current snapshot without waiting for another one after $delay time.
top -b -n 1
A: This will not give you a per-process metric, but the Stress Terminal UI is super useful for knowing how badly you're punishing your boxes. Add the -c flag to make it dump the data to a CSV file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How do you allow multiple file uploads on an internal windows-authentication intranet? I have a couple of solutions, but none of them work perfectly.
Platform
*ASP.NET / VB.NET / .NET 2.0
*IIS 6
*IE6 (primarily), with some IE7; Firefox not necessary, but useful
Allowed 3rd Party Options
*Flash
*ActiveX (would like to avoid)
*Java (would like to avoid)
Current Attempts
Gmail Style: You can use javascript to add new Upload elements (input type='file'), then upload them all at once with the click of a button. This works, but still requires a lot of clicks. (I was able to use an invisible ActiveX control to detect things like File Size, which would be useful.)
Flash Uploader: I discovered a couple of Flash Upload controls that use a 1x1 flash file to act as the uploader, callable by javascript. (One such control is FancyUpload, another is Dojo's Multiple File Uploader, yet another is one by darick_c at CodeProject.) These excited me, but I quickly ran into two issues:
*Flash 10 will break the functionality that is used to call the multiple file upload dialogue box. The workaround is to use a transparent flash frame, or just use a flash button to call the dialogue box. That's not a huge deal.
*The integrated windows authentication used on our intranet is not used when the Flash file attempts to upload the files, prompting the user for credentials. The workaround for this is to use cookieless sessions, which would be a nightmare for our project due to several other reasons.
Java Uploader: I noticed several Java-based multiple-file uploaders, but most of them appear to cost money. If I found one that worked really well, I could arrange to purchase it. I'd just rather not. I also don't like the look of most of them. I liked FancyUpload because it interacted with html/javascript so that I could easily style and manage it any way I want.
ActiveX Uploader: I found an ActiveX solution as well. It appears that ActiveX will work. I would just write my own instead of buying that one. This will be my last resort, I think.
Resolution
I would love to be able to use something like FancyUpload. If I can just get past the credentials prompt somehow, it would be perfect. But, from my research, it appears that the only real workaround is cookieless sessions, which I just can't do.
So, the question is: Is there a way to resolve the issues presented above OR is there a different solution that I have not listed which accomplishes the same goal?
A: I don't think there is any workaround for the integrated windows authentication. What you could possibly do is save the files to a generic unprotected folder and, in the case of swfupload, use a handler to move the file when it's fully uploaded.
A: You could try SWFUpload as well - it would fit in your Flash Uploader "category".
A: Our company uses https://ajaxuploader.com which supports this feature.
A: In Internet Explorer, FileReference.upload (flash upload) will send cookies along as well.
This behavior breaks only when running in other browsers.
A: @davidinbcn.myopenid.co: That's basically how I solved this issue. But, in an effort to provide a more detailed answer, I'm posting my solution here.
The Solution!
Create two web applications, or websites, or whatever.
Application A is a simple web application. The purpose of this application is to receive file uploads and save them to the proper place. Set it up to allow anonymous access. Then make a single ASPX page that accepts posted files and saves them to a given location. (I'm doing this on an intranet. Internet sites may be exposing themselves to security issues by doing this. Take extra precautions if that is the case.) The code-behind for this page would look something like this:
Dim result As String = "ok"
Dim errMsg As String = String.Empty
Dim uploads As HttpFileCollection = HttpContext.Current.Request.Files
If uploads.Count > 0 Then
    UploadFiles(uploads)
Else
    result = "error"
    errMsg = "File Not Uploaded"
End If
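The UploadFiles helper referenced above isn't shown in the original answer; a minimal sketch of what it might do (the target folder is an assumption) is:
Private Sub UploadFiles(ByVal uploads As HttpFileCollection)
    ' Save each posted file into a drop folder (the path is a placeholder)
    Dim targetDir As String = "C:\Uploads"
    For i As Integer = 0 To uploads.Count - 1
        Dim posted As HttpPostedFile = uploads(i)
        If posted.ContentLength > 0 Then
            ' Strip any client-side path information before saving
            Dim name As String = System.IO.Path.GetFileName(posted.FileName)
            posted.SaveAs(System.IO.Path.Combine(targetDir, name))
        End If
    Next
End Sub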
Application B is your primary site that will allow file uploads. Set this up as an authenticated web application that does not allow anonymous access. Then, place the FancyUpload (or similar solution) on a page on this site. Configure it to post its files to Application A's upload ASPX page.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I suppress firefox password field completion? I'm developing a website. I'm using a single-page web-app style, so all of the different parts of the site are AJAX'd into index.php. When a user logs in and tells Firefox to remember his username and password, all input boxes on the site get auto-filled with that username and password. This is a problem on the form to change a password. How can i prevent Firefox from automatically filling out these fields? I already tried giving them different names and ids.
Edit: Someone has already asked this. Thanks Joel Coohorn.
A: The autocomplete="off" method doesn't work for me. I realized Firefox was injecting the saved password into the first password field it encountered, so the solution that worked for me was to create a dummy password field before the password-update field and hide it. Like so:
<input type="password" style="display: none;" />
<input type="password" name="password_update" />
A: From Mozilla's documentation
<form name="form1" id="form1" method="post" autocomplete="off"
action="http://www.example.com/form.cgi">
[...]
</form>
http://developer.mozilla.org/en/How_to_Turn_Off_Form_Autocompletion
A: Have you tried adding the autocomplete="off" attribute in the input tag? Not sure if it'll work, but it is worth a try.
A: Are all your input boxes set to type=password? That would do it. One thing you can do, and I'm not at all sure that this is the best answer, is to leave the input box as a plain text input and use JavaScript with an onkeydown event to place stars in the input box instead of having the browser render it. Firefox won't pre-fill that.
As an aside, I have had to work on single-page web-apps and I absolutely hate it. Why would you want to take away the user's ability to bookmark pages? To use the back button?
A: Adding to this answer https://stackoverflow.com/a/30897967/1333247
This is in case you also have a User field in front of the password fields and want to disable autocompletion for it too (e.g. router web config, setting proxy User and Password).
Just create a dummy user field in front of the dummy password field to hide user name autocompletion:
<input type="text" style="display: none;" />
<input type="password" style="display: none;" />
<input type="password" name="password_update" />
Per the docs this is about the Login autocompletion. To disable the normal one (e.g. search terms completion), just use the
autocomplete="off"
attribute on the form or inputs. To disable both you need both, since the attribute won't disable Login autocompletion.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How do I stop Visual Studio's 'Publish Web Site' from publishing my _ReSharper folder? It's really quite annoying as they are not part of the project.
A: I know this is really old, but maybe someone else will be helped with my reply.
I found the solution here: http://www.meadow.se/wordpress/?p=137
Basically, add the exclusion lines to the bottom of your web deployment project (.wdproj) file, just above the closing </Project> tag.
Just put your folder or file in there too and it will not be copied over.
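The exact lines from the linked post didn't survive here, but the standard Web Deployment Project pattern is an ExcludeFromBuild item group; a sketch (the _ReSharper wildcard is my assumption):
<ItemGroup>
  <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\_ReSharper*\**\*.*" />
</ItemGroup>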
A: The Visual Studio Web Deployment addin lets you exclude folders and more.
VS2005
VS2008
Also a decent writeup on the Web Deployment addin
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: WPF DatePicker: What is the best? I need a Datepicker for a WPF application. What is considered to be the best one?
A: I haven't used Marlon's, but I have used Kevin Moore's. At the time I used it, there were a number of bugs I had to work around. Other than those issues, it did seem to work well enough.
A: There is also the WPF Tool Kit which has a DatePicker/Calendar control
(I added emphasis because this is the answer)
A: This is an old question, but for the record I want to point out that this control is included with .NET 4.
A: If you pay, Telerik has a great DatePicker and Calendar
A: I'm thinking of using this one: Marlon Grech Date Picker.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Override ScriptControl or BaseValidator for an async ASP.NET validator control? I'm planning to develop an ASP.NET server control to provide asynchronous username availability validation for new user registrations. The control will allow a developer to point it at a "username" TextBox and it will provide an indication of whether or not the username is available. Like this example, but without the clunky UpdatePanel.
One design decision that's giving me headaches is whether to inherit from ScriptControl or BaseValidator.
By implementing it as a ScriptControl, I can make the client side portion easier to deal with and easily localize it with a resx.
However, I want to make sure that the validator functions properly with respect to Page.IsValid. The only way I know to do this is to override BaseValidator and implement EvaluateIsValid().
So, my question is, how would you suggest structuring this control? Is inheriting from BaseValidator the best (only) way to get the validator part right, or can I do that in some other way?
A: You should be able to do both if you implement the IScriptControl interface while also deriving from BaseValidator:
public class YourControl : BaseValidator, IScriptControl
(note that in C# the base class has to come first in the list, before any interfaces)
To implement the IScriptControl interface means your control will also have to have the GetScriptReferences and GetScriptDescriptors methods.
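A minimal skeleton along those lines (the class name, script path, and availability check are placeholders of mine, not from the original answer):
using System.Collections.Generic;
using System.Web.UI;
using System.Web.UI.WebControls;

public class UsernameValidator : BaseValidator, IScriptControl
{
    // Server-side check keeps Page.IsValid honest even if client script is off
    protected override bool EvaluateIsValid()
    {
        string username = GetControlValidationValue(ControlToValidate);
        return IsUsernameAvailable(username);
    }

    public IEnumerable<ScriptReference> GetScriptReferences()
    {
        yield return new ScriptReference("~/Scripts/UsernameValidator.js"); // assumed path
    }

    public IEnumerable<ScriptDescriptor> GetScriptDescriptors()
    {
        yield return new ScriptControlDescriptor("Demo.UsernameValidator", ClientID);
    }

    // Placeholder: the real lookup is application-specific
    private bool IsUsernameAvailable(string username)
    {
        return !string.IsNullOrEmpty(username);
    }
}
In a real control you would also call ScriptManager.RegisterScriptControl in OnPreRender so the script references actually get emitted.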
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: TreeView Drag & Drop help - _Invalid FORMATETC structure_ exception I'm trying to implement Drag & Drop functionality with source being a TreeView control. When I initiate a drag on a node, I'm getting:
Invalid FORMATETC structure (Exception from HRESULT: 0x80040064 (DV_E_FORMATETC))
The ItemDrag handler (where the exception takes place), looks like:
private void treeView_ItemDrag(object sender,
System.Windows.Forms.ItemDragEventArgs e)
{
this.DoDragDrop(e.Item, DragDropEffects.Move);
}
Does anyone know the root cause of this and how to remedy it? (.NET 2.0, Windows XP SP2)
A: In case it helps anyone else - I encountered this problem with the WPF TreeView (not Windows Forms as listed in the question) and the solution was simply to make sure to mark the event as handled in the drop event handler.
private void OnDrop(object sender, DragEventArgs e)
{
// Other logic...
e.Handled = true;
}
A: FORMATETC is a type of application clipboard, for lack of a better term. In order to pull off some of the visual tricks of draging around the tree node, it has to be copied into this clipboard with its source description. The source control loads its info into the FORMATETC clipboard and sends it to the target object. It looks like the error occurs on the drop and not on the drag. The DV in DV_E_FORMATETC typically indicates the error occurrs on the drop step.
The destination doesn't look like it likes what you are droping on it. The clipboard may be corrupt or the drop destination may not be configured to understand it.
I recommend you try one of two things.
*
*Remove the original tree structure and destination. Dump your dlls. Close everything. Open up and put the treeview and destination back on the form. It may have just been poorly formed and not fully populating the FORMATETC structure.
*Try putting another treeview on the form and dropping to that. If dropping to another tree works, you know the tree-to-tree (oranges-to-oranges) case is fine and it isn't the treeview. It may be the destination if it is a grid or listview. You may need to change those structures to be able to receive the drop.
Not that it helps but the structure is something like this:
typedef struct tagFORMATETC
{
CLIPFORMAT cfFormat;
DVTARGETDEVICE *ptd;
DWORD dwAspect;
LONG lindex;
DWORD tymed;
} FORMATETC, *LPFORMATETC;
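As a quick diagnostic (my suggestion, not part of the original answer), you can list the formats the dragged data object actually advertises from the target's DragEnter handler:
private void target_DragEnter(object sender, DragEventArgs e)
{
    // Dump every clipboard format carried by the drag data
    foreach (string format in e.Data.GetFormats())
        System.Diagnostics.Debug.WriteLine(format);
    e.Effect = DragDropEffects.Move;
}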
A: When doing drag and drop with list and treeview controls you have to make sure that you are removing and inserting the list items correctly. For example, using drag and drop involving three ListView controls:
private void triggerInstanceList_DragOver(object sender, DragEventArgs e)
{
SetDropEffect(e);
}
private void triggerInstanceList_DragEnter(object sender, DragEventArgs e)
{
SetDropEffect(e);
}
private void SetDropEffect(DragEventArgs e)
{
if (e.Data.GetDataPresent(typeof(ListViewItem)))
{
ListViewItem itemToDrop = e.Data.GetData(typeof(ListViewItem)) as ListViewItem;
if (itemToDrop.Tag is TriggerTypeIdentifier)
e.Effect = DragDropEffects.Copy;
else
e.Effect = DragDropEffects.Move;
}
else
e.Effect = DragDropEffects.None;
}
private void triggerInstanceList_DragDrop(object sender, DragEventArgs e)
{
if (e.Data.GetDataPresent(typeof(ListViewItem)))
{
try
{
ListViewItem itemToDrop = e.Data.GetData(typeof(ListViewItem)) as ListViewItem;
if (itemToDrop.Tag is TriggerTypeIdentifier)
{
ListViewItem newItem = new ListViewItem("<new " + itemToDrop.Text + ">", itemToDrop.ImageIndex);
_triggerInstanceList.Items.Add(newItem);
}
else
{
_expiredTriggers.Items.Remove(itemToDrop);
_triggerInstanceList.Items.Add(itemToDrop);
}
}
catch (Exception ex)
{
Debug.WriteLine(ex);
}
}
}
You will note that at the end of the DragDrop event I am either moving the ListViewItem or creating a copy of one.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What are the Agile tools for PHP? - Unit Testing
- Mocking
- Inversion of Control
- Refactoring
- Object Relational Mapping
- Others?
I have found simpletest for unit testing and mocking and, though it leaves much to be desired, it kind-of sort of works.
I have yet to find any reasonable Inversion of Control framework (there is one that came up on phpclasses but no documentation and doesn't seem like anyone's tried it).
A: phpUnderControl - continuous integration.
Don't forget about version control (e.g. using CVS or Subversion)!
A: Unit Testing - PHPUnit phpunit.de
ORM - Doctrine phpdoctrine.org, Propel propel.phpdb.org
A: Agile Toolkit is a framework which focuses on Agile development values of producing software quickly then fine-tuning it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Why is AppDomain.CurrentDomain.BaseDirectory different between Windows Forms and Library In my winforms application, AppDomain.CurrentDomain.BaseDirectory is set to "C:\Projects\TestProject\bin\Debug\"
In my unit tests it is "C:\Projects\TestProject\bin\Debug" (no final slash). Why is this?
[Edit] @Will : I am asking why the test project's directory doesn't have a trailing slash?
A: You may be asking one of two possible questions: Why are they different, or why the test project's directory doesn't have a trailing slash.
Assuming it's the first: That's where the code is executing from. When you debug the program, it's compiled and the binaries are placed under the project's \bin\debug directory. When you're testing, you're running the test's binaries, which are compiled and placed under the test project's bin\debug directory.
Assuming it's the latter: Possibly some obscure reason, possibly a bug, or possibly to catch people who are concatenating paths rather than using Path.Combine (naughty naughty!).
Well, I don't know why it's different. Test applications may be run within a custom CLR host; I think this may be the case as test apps do some weird stuff with private accessors that normally aren't allowed within the standard CLR host. I'm only grasping at straws here as I don't have any actual knowledge about how this stuff is actually being coded.
Anyhow, the workaround stands (Path.Combine). Nobody should be concatenating paths, as path delimiters can change.
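For illustration (my example, not the original poster's), Path.Combine makes the trailing-slash difference a non-issue:
// Works whether BaseDirectory ends in a slash or not
string dir = AppDomain.CurrentDomain.BaseDirectory;
string configPath = System.IO.Path.Combine(dir, "settings.xml");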
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Why is parameterized SQL generated by NHibernate just as fast as a stored procedure? One of my co-workers claims that even though the execution path is cached, there is no way parameterized SQL generated from an ORM is as quick as a stored procedure. Any help with this stubborn developer?
A: Round 1 - You can start a profiler trace and compare the execution times.
A: For most people, the best way to convince them is to "show them the proof." In this case, I would create a couple basic test cases to retrieve the same set of data, and then time how long it takes using stored procedures versus NHibernate. Once you have the results, hand it over to them and most skeptical people should yield to the evidence.
A: I would only add a couple things to Rob's answer:
First, Make sure the amount of data involved in the test cases is similiar to production values. In other words if your queries are normally against tables with hundreds of thousands or rows, then create such a test environment.
Second, make everything else equal except for the use of an nHibernate generated query and a s'proc call. Hopefully you can execute the test by simply swapping out a provider.
Finally, realize that there is usually a lot more at stake than just stored procedures vs. ORM. With that in mind the test should look at all of the factors: execution time, memory consumption, scalability, debugging ability, etc.
A: The problem here is that you've accepted the burden of proof. You're unlikely to change someone's mind like that. Like it or not, people--even programmers-- are just too emotional to be easily swayed by logic. You need to put the burden of proof back on him- get him to convince you otherwise- and that will force him to do the research and discover the answer for himself.
A better argument to use stored procedures is security. If you use only stored procedures, with no dynamic sql, you can disable SELECT, INSERT, UPDATE, DELETE, ALTER, and CREATE permissions for the application database user. This will protect you against most 2nd order SQL Injection, whereas parameterized queries are only effective against first order injection.
A: I would start by reading this article:
http://decipherinfosys.wordpress.com/2007/03/27/using-stored-procedures-vs-dynamic-sql-generated-by-orm/
Here is a speed test between the two:
http://www.blackwasp.co.uk/SpeedTestSqlSproc.aspx
A: Measure it, but in a non-micro-benchmark, i.e. something that represents real operations in your system. Even if there would be a tiny performance benefit for a stored procedure it will be insignificant against the other costs your code is incurring: actually retrieving data, converting it, displaying it, etc. Not to mention that using stored procedures amounts to spreading your logic out over your app and your database with no significant version control, unit tests or refactoring support in the latter.
A: Benchmark it yourself. Write a testbed class that executes a sampled stored procedure a few hundred times, and run the NHibernate code the same number of times. Compare the average and median execution times of each method.
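A minimal sketch of such a testbed (RunQueryUnderTest is a placeholder for the sproc call or the NHibernate query) could use Stopwatch:
using System;
using System.Diagnostics;

static void Benchmark()
{
    const int runs = 500;
    Stopwatch sw = Stopwatch.StartNew();
    for (int i = 0; i < runs; i++)
    {
        RunQueryUnderTest(); // placeholder workload
    }
    sw.Stop();
    Console.WriteLine("Average: {0:F2} ms", (double)sw.ElapsedMilliseconds / runs);
}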
A: It is just as fast if the query is the same each time. Sql Server 2005 caches query plans at the level of each statement in a batch, regardless of where the SQL comes from.
The long-term difference might be that stored procedures are many, many times easier for a DBA to manage and tune, whereas hundreds of different queries that have to be gleaned from profiler traces are a nightmare.
A: I've had this argument many times over.
Almost always I end up grabbing a really good dba, and running a proc and a piece of code with the profiler running, and get the dba to show that the results are so close its negligible.
A: Measure it.
Really, any discussion on this topic is probably futile until you've measured it.
A: He may be correct for the specific use case he is thinking of. A stored procedure will probably execute faster for some complex set of SQL, that can be arbitrarily tuned. However, something you get from things like hibernate is caching. This may prove much faster for the lifetime of your actual application.
A: The additional layer of abstraction will cause it to be slower than a pure call to a sproc. Just by the fact that you have additional allocations on the managed heap, and additional pushes and pops off the callstack, the truth of the matter is that it is more efficient to call a sproc over having an ORM build the query, regardless how good the ORM is.
How slow, if its even measurable, is debatable. This is also helped by the fact that most ORM's have a caching mechanism to avoid doing the query at all.
A: Even if the stored procedure is 10% faster (it probably isn't), you may want to ask yourself how much it really matters. What really matters in the end, is how easy it is to write and maintain code for your system. If you are coding a web app, and your pages all return in 0.25 seconds, then the extra time saved by using stored procedures is negligible. However, there can be many added advantages of using an ORM like NHibernate, which would be extremely hard to duplicate using only stored procedures.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Image library that will auto-crop I'm looking for a .Net library that will accept an image or filename and an aspect ratio, and crop the image to that aspect ratio. That's the easy part: I could do it myself. But I also want it to show a little intelligence in choosing exactly what content gets cropped, even if it's just picking which edge to slice.
This is for a personal project, and the pain isn't high enough to justify spending any money on it, but if you can recommend a for-pay tool go ahead. Maybe someone else will find the suggestion useful.
A: Disclaimer: I work for a .NET Imaging vendor (Atalasoft)
It depends on what kind of image you are talking about. If you are talking about 1-bit document images (like faxes or scans) we can do this.
If you are talking about photographs, our product doesn't do this, but you might be looking for Seam carving. I wrote this application
http://www.atalasoft.com/cs/blogs/31appsin31days/archive/2008/05/26/simple-seam-carver.aspx
with our library that could be ported to just using the built-in .NET images with some work.
The idea of seam carving is to find connected paths in the image with the least interesting variation from the surrounding pixels. In the normal implementation, you would pick a continuous (but not necessarily vertical) path and remove it. If you wanted a crop, you could find the area with the least energy and remove it. My code shows how to calculate the energy of a pixel and path (how different it is from its surrounding pixels).
If you look up seam carving, you will find some free implementations out there.
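For illustration only (a generic sketch of mine, not Atalasoft's code), a simple per-pixel energy measure sums the color differences to the right and lower neighbors:
using System;
using System.Drawing;

static int PixelEnergy(Bitmap bmp, int x, int y)
{
    // Energy = how different this pixel is from its right and lower neighbors
    Color c = bmp.GetPixel(x, y);
    Color right = bmp.GetPixel(Math.Min(x + 1, bmp.Width - 1), y);
    Color down = bmp.GetPixel(x, Math.Min(y + 1, bmp.Height - 1));
    int dx = Math.Abs(c.R - right.R) + Math.Abs(c.G - right.G) + Math.Abs(c.B - right.B);
    int dy = Math.Abs(c.R - down.R) + Math.Abs(c.G - down.G) + Math.Abs(c.B - down.B);
    return dx + dy;
}
A low-energy region is then a candidate for slicing off when cropping.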
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Better Merge Tool for Subversion Is there a good external merge tool for tortoisesvn (I don't particularly like the built in Merge tool). I use WinMerge for diffs, but it doesn't work with the three way merge (maybe a better question would be is there a way to force tortoisesvn to merge like tortoisecvs?)
[Edit]
After trying all of them, for me, the SourceGear is the one I prefer. The way to specify the DiffMerge from sourcegear is:
C:\Program Files\SourceGear\DiffMerge\DiffMerge.exe /t1="My Working Version" /t2="Repository Version" /t3="Base" /r=%merged %mine %theirs %base
A: I use KDiff3 as a 3-way merge tool. It does a decent job.
A: Araxis Merge is expensive, but great. Handles 3 way merges on files or folders really well. I find the way it displays diffs much more helpful than Windiff or P4's tool.
A: Perforce Merge Tool
Even though Perforce is obviously not free the merge tool is. It's 100x better than the default TortoiseSvn one. To integrate with TortoiseSvn set the merge tool to:
C:\Path-To\P4Merge.exe %base %theirs %mine %merged
A: Take a look at Sourcegear DiffMerge. DiffMerge is the compare and merge tool from their Vault and Fortress products, but they make it available for free as a standalone tool. One noteworthy feature is that it allows diffing of entire directory trees.
Edit: While DiffMerge remains a free tool, it nags for registration with a popup at least once a day (since at least version 4.2). It also states in the popup:
Select new features in future releases will also require registration,
but core features and fixes will be available to everyone.
A: CompareIt is good. I find that when using the command-line interface for svn, more merges happen automatically for me, whereas TortoiseSVN picks up on things and asks you.
A: Beyond Compare has been suggested a number of times to me.
A: I like this one SmartSynchronize which is free for non-commercial use
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58"
} |
Q: MultiMap in Scala I'm trying to mix in the MultiMap trait with a HashMap like so:
val children:MultiMap[Integer, TreeNode] =
new HashMap[Integer, Set[TreeNode]] with MultiMap[Integer, TreeNode]
The definition for the MultiMap trait is:
trait MultiMap[A, B] extends Map[A, Set[B]]
Meaning that a MultiMap of types A & B is a Map of types A & Set[B], or so it seems to me. However, the compiler complains:
C:\...\TestTreeDataModel.scala:87: error: illegal inheritance; template $anon inherits different type instances of trait Map: scala.collection.mutable.Map[Integer,scala.collection.mutable.Set[package.TreeNode]] and scala.collection.mutable.Map[Integer,Set[package.TreeNode]]
new HashMap[Integer, Set[TreeNode]] with MultiMap[Integer, TreeNode]
^ one error found
It seems that generics are tripping me up again.
A: I had to import scala.collection.mutable.Set. It seems the compiler thought the Set in HashMap[Integer, Set[TreeNode]] was scala.collection.Set. The Set in the MultiMap def is scala.collection.mutable.Set.
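Putting it together, a minimal working version with the mutable Set imported explicitly (TreeNode and someNode stand in for your own types and values):
import scala.collection.mutable.{HashMap, MultiMap, Set}

val children = new HashMap[Integer, Set[TreeNode]] with MultiMap[Integer, TreeNode]
children.addBinding(1, someNode) // addBinding comes from the MultiMap trait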
A: That can be annoying, the name overloading in Scala's collections is one of its big weaknesses.
For what it's worth, if you had scala.collection._ imported, you could probably have written your HashMap type as:
new HashMap[ Integer, mutable.Set[ TreeNode ] ]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: WScript.Shell and blocking execution? I'm using WScript to automate some tasks, by using WScript.Shell to call external programs.
However, right now it does not wait for the external program to finish, and instead moves on. This causes issues because I have some tasks dependent on others finishing first.
I am using code like:
ZipCommand = "7za.exe a -r -y " & ZipDest & BuildLabel & ".zip " & buildSourceDir
Set wshShell = WScript.CreateObject("Wscript.Shell")
wshShell.run ZipCommand
Is there a way to do this so it blocks until the shell executed program returns?
A: If you use the "Exec" method, it returns a reference, so you can poll the "Status" property to determine when it is complete. Here is a sample from msdn:
Dim WshShell, oExec
Set WshShell = CreateObject("WScript.Shell")
Set oExec = WshShell.Exec(ZipCommand)
Do While oExec.Status = 0
WScript.Sleep 100
Loop
A: Turns out, that while loop is a severe CPU hog :P
I found a better way:
ZipCommand = "7za.exe a -r -y " & ZipDest & BuildLabel & ".zip " & buildSourceDir
Set wshShell = WScript.CreateObject("Wscript.Shell")
wshShell.Run ZipCommand,1,1
The second argument is the window style (1 shows and activates the window) and the third makes Run wait for the program to finish, blocking execution :)
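As a bonus (my addition, not the original poster's), when the third argument is True, Run also returns the program's exit code, so you can check for failure:
Dim exitCode
exitCode = wshShell.Run(ZipCommand, 1, True)
If exitCode <> 0 Then
    WScript.Echo "7za.exe failed with exit code " & exitCode
End If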
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Good Features for an ORM I'm currently working on putting together a fairly simple ORM tool to serve as a framework for various web projects for a client. Most of the projects are internal and will not require massive amounts of concurrency and all will go against SQL Server. I've suggested that they go with ORM tools like SubSonic, NHibernate, and a number of other open source projects out there, but for maintainability and flexibility reasons they want to create something custom. So my question is this: What are some features that I should make sure to include in this ORM tool? BTW, I'll be using MyGeneration to do the code generation templates.
A: If your client isn't interested in OSS because of (real or imagined) perceptions about support, have you considered any of the top-quality commercial third-party ORMs such as LightSpeed that comes with a nice GUI designer tool
[screenshot of the LightSpeed GUI designer] (source: mindscape.co.nz)
Mindscape (the company that sells LightSpeed) is a New Zealand company based near where I live, I have met some of the devs there, and I know they are famous for having incredible customer support. And they give you the source code when you buy the software, so you can tweak it any way you like.
You probably don't want to have to roll your own ORM unless you have to and your client is willing to hand over a stupid amount of cash for you to do so.
A: IMO writing your own OR/M is one of the worst design decisions you could ever make. "maintainability and flexibility" are reasons exactly NOT to write your own OR/M.
Please read See 25 Reasons Not To Write Your Own Object Relational Mapper, and see if your client really wants to pay what it costs to build something like NHibernate ($7.6M) or SubSonic ($1.5M). Because, like ChanChan said above, you will end up with something similar to that.
A: For the love of all that's holy (and the women and the children), do everything possible to convince them not to go with a custom O/RM solution. Why are people wanting to re-invent the wheel when there are perfectly-good, open-source wheels already in existence?!?!
A: There's a bunch of posts by Davy Brion (an NHibernate committer) who is for some reason also forced to write a custom ORM for a client.
Some of the things he covers are:
*
*Mapping Classes To Tables
*Out Of The Box CRUD Functionality
*Hydrating Entities
*Session Level Cache
*Executing Custom Queries
Definitely worth checking out, if you MUST go down this path: Build Your Own Data Access Layer Series
A: You need to go the nHibernate style, in my experience, and have it so that you have some kind of map, between your objects and the database. This allows your objects to have some things that are hard to represent in a database but are easier represented in POCOs.
Generation gets you started, by giving you classes that meet your schema, but if you plan on maintaining anything or testing anything, mapping is pain now for pleasure later.
Subsonic is a great model, and its open source, if you must go generation, use their templates in myGeneration to get a leg up.
BTW: I've done what you are doing, and I ended up with something very similar to subsonic, and now advise my clients to take the subsonic source, and fork it for themselves.
A: Maybe, just maybe, you badly need some "features" that do not exist yet in the existing solutions. Maybe you need something simpler also. $1.5M for SubSonic is simply outrageous.
Maybe you want to use POCO. Maybe you want to use the stuff easily in a 3 tier scenario.
Maybe you don't want to support ALL RDBMS on the planet, so you can hardcode and optimize the code just for your target. Maybe you want to implement smarter object tracking. Maybe some design decisions made by the existing orms drive you crazy....
I myself am using a custom ORM developed by me, myself and I, and I am satisfied that I did it. There is no hidden dragon under the carpet, no surprise scenario. My ORM does exactly what I want it to do, nothing less, nothing more.
A: *
*Second level cache
Allows you maintain entities instances in-memory
*
*Automatic dirty-checking
Allows you to update changes in an object without loading it.
*Powerful query language
*Powerful cascade operation
*Powerful primary key generator strategy
The ORM framework will pick up the best primary key generator strategy according to the target database
*Support to composite elements
*Support to events
onSave, onUpdate and so on
*Good documentation and reference books
*Support to conversational state
regards,
A: No-one has mentioned it yet; but go with LLBLGen. You may customise the template as you like, and you may also, obviously, write your own custom code in the generated classes. Buy it. You will never look back, and you will be saying "Thank you silky!" when it consistently works beautifully. (I didn't write it, but I love it). If it doesn't work out for you, you may also say "Damn you silky!". But that's unlikely, however I do offer it as an option.
The only bad thing I noticed about LLBLGen has been the support for switching between Databases/servers on the fly. It doesn't support a feature that I'd like; namely the ability to detect that a given entity you retrieved doesn't "exist" in a new database that you've switched to. But this is a rare case.
I suggest LLBLGen, because I was in the process of writing my own OR/M when I came across it. Never looked back.
A: Your job as a consultant (sounds like that's what you are) is to leverage your expertise in implementing for your clients a solution that fits their desires with a minimum cost and time investment.
If they want to build and sell an OR/M. The go to town making one. If they want anything else, use one that already exists to get the job done.
If they insist on spending money, buy an existing one (I won't name any, but there exist some good ones that are not free).
A: Try to use Devart LinqConnect - all of the LINQ to SQL features and wide support for the most popular database servers - Oracle, MySQL, PostgreSQL, SQL Server, and SQLite. Incredible visual modeling tool, advanced monitoring tool, high-quality support - as a result I learned it in only three weeks during my project execution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Does having a registry full of old stuff slow down Windows? I know this isn't strictly speaking a programming question but something I always hear from pseudo-techies is that having a lot of entries in your registry slows down your Windows-based PC. I think this notion comes from people who are trying to troubleshoot their PC and why it's running so slow and they open up the registry at some point and see leftover entries from programs they uninstalled ages ago.
But is there any truth to this idea? I would not think so since the registry is essentially just a database and drilling down to an entry wouldn't take significantly longer on a larger registry. But does it?
EDIT: To be clear, I'm not looking for advice on how to make a PC run faster, or asking why my PC in particular is slow (it's not), I'm just curious if people who say "bigger registry means slower PC" are accurate or not.
A: I think its a symptom, not a cause, as fever is a symptom of an infection.
When you install Windows updates, at least in XP and up, a folder called SXS is maintained for rolling them back. These rollback points are also stored in registry keys.
The size of the SXS (side-by-side) folder grows very quickly and has definitely been linked to why, when some people simply reinstall with SP3 slipstreamed instead of installing SP1 and rolling up to SP3, they get better performance, even with the same programs installed.
A: 1) Start -> Run -> msconfig
2) Check the Startup tab
3) If you don't know what it is, uncheck
4) Reboot
Its not the registry, its the crap you have running in the background.
A: In short, not really.
In the old days when machines were slower the answer was yes; but having a modern processor rip through even a 60MB registry is not a problem.
Typically, the real reason a modern machine starts running slow is due to everything from malware to virus scanners: Mcafee, Norton's, etc are prime targets in my mind.
Also, the WinSXS folder tends to grow as service packs and applications are installed. This seems to have a negative impact on system performance. There are only two possible solutions in this scenario. First, if possible, reinstall the OS with the latest service pack already slipstreamed into the install. Second, if that isn't possible AND you are running Vista with SP1, you can run the vsp1cln.exe tool (see technet) which will clean up a lot of the older versions of components. Note that this tool can only be executed once and it does not allow you to roll back.
A: Any problems in the registry could also make your computer much slower. To fix registry problems you need to install a registry cleaner, as this will fix the errors and bring your PC back to its normal state.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Is it possible to track allocation/deallocation? As far as I can tell, this isn't possible, so I'm really just hoping for a left-field undocumented allocation hook function.
I want a way to track allocations like in _CrtSetAllocHook, but for C#/.NET.
The only visibility to the garbage collector/allocation appears to be GC.CollectionCount.
Anyone have any other .NET memory mojo?
A: The CLR has a 'profiling API' that hooks into pretty much everything - it is what the commercial .NET memory profiling products use, I believe. Here is an MSDN link to the top level of the documentation: .NET Framework General Reference: About the Profiling API
See this MSDN magazine article for an introduction to the memory piece: Inspect and Optimize Your Program's Memory Usage with the .NET Profiler API
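If you only need coarse numbers rather than a true allocation hook, the GC class itself can bracket an operation (a rough sketch of my own, not from the article; DoWork is a placeholder):
long before = GC.GetTotalMemory(true);    // force a collection for a stable baseline
int gen0 = GC.CollectionCount(0);

DoWork();                                 // the operation being measured

long delta = GC.GetTotalMemory(false) - before;
Console.WriteLine("Approx. bytes allocated: {0}, gen-0 collections: {1}",
                  delta, GC.CollectionCount(0) - gen0);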
A: I would just use Red Gate's ANTS Profiler. It will tell you a lot about what's going on in memory without you having to learn the profiling API yourself.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What Python way would you suggest to check whois database records? I'm trying to get a webservice up and running that actually requires to check whois databases. What I'm doing right now is ugly and I'd like to avoid it as much as I can: I call gwhois command and parse its output. Ugly.
I did some searching to try to find a Pythonic way to do this task. Generally I got pretty much nothing - this old discussion list link has a way to check if a domain exists. Not quite what I was looking for... But still, it was the best answer Google gave me - everything else is just a bunch of unanswered questions.
Have any of you succeeded in getting some method up and running? I'd very much appreciate some tips, or should I just do it the open-source way, sit down and code something by myself? :)
A: Found this question in the process of my own search for a python whois library.
Don't know that I agree with cdleary's answer that using a library that wraps a command is always the best way to go - but I can see his reasons why he said this.
Pro: cmd-line whois handles all the hard work (socket calls, parsing, etc)
Con: not portable; module may not work depending on underlying whois command.
Slower, since running a command and most likely shell in addition to whois command.
Affected if not UNIX (Windows), different UNIX, older UNIX, or older whois command
I am looking for a whois module that can handle whois IP lookups and I am not interested in coding my own whois client.
Here are the modules that I (lightly) tried out and more information about it:
UPDATE from 2022: I would search pypi for newer whois libraries. NOTE - some APIs are specific to online paid services
pywhoisapi:
*
*Home: http://code.google.com/p/pywhoisapi/
*Last Updated: 2011
*Design: REST client accessing ARIN whois REST service
*Pros: Able to handle IP address lookups
*Cons: Able to pull information from whois servers of other RIRs?
BulkWhois
*
*Home: http://pypi.python.org/pypi/BulkWhois/0.2.1
*Last Updated: fork https://github.com/deontpearson/BulkWhois in 2020
*Design: telnet client accessing whois telnet query interface from RIR(?)
*Pros: Able to handle IP address lookups
*Cons: Able to pull information from whois servers of other RIRs?
pywhois:
*
*Home: http://code.google.com/p/pywhois/
*Last Updated: 2010 (no forks found)
*Design: REST client accessing RRID whois services
*Pros: Accessses many RRIDs; has python 3.x branch
*Cons: does not seem to handle IP address lookups
python-whois (wraps whois command):
*
*Home: http://code.google.com/p/python-whois/
*Last Updated: 2022-11 https://github.com/DannyCork/python-whois
*Design: wraps "whois" command
*Cons: does not seem to handle IP address lookups
whois:
*
*Home: https://github.com/richardpenman/whois (https://pypi.org/project/python-whois/)
*Last Updated: 2022-12
*Design: port of whois.c from Apple
*Cons: TODO
whoisclient - fork of python-whois
*
*Home: http://gitorious.org/python-whois
*Last Updated: (home website no longer valid)
*Design: wraps "whois" command
*Depends on: IPy.py
*Cons: does not seem to handle IP address lookups
Update: I ended up using pywhoisapi for the reverse IP lookups that I was doing
A: Look at this:
http://code.google.com/p/pywhois/
pywhois - Python module for retrieving WHOIS information of domains
Goal:
- Create a simple importable Python module which will produce parsed WHOIS data for a given domain.
- Able to extract data for all the popular TLDs (com, org, net, ...)
- Query a WHOIS server directly instead of going through an intermediate web service like many others do.
- Works with Python 2.4+ and no external dependencies
Example:
>>> import pywhois
>>> w = pywhois.whois('google.com')
>>> w.expiration_date
['14-sep-2011']
>>> w.emails
['[email protected]',
'[email protected]',
'[email protected]',
'[email protected]']
>>> print w
...
A: There's nothing wrong with using a command line utility to do what you want. If you put a nice wrapper around the service, you can implement the internals however you want! For example:
class Whois(object):
_whois_by_query_cache = {}
def __init__(self, query):
"""Initializes the instance variables to defaults. See :meth:`lookup`
for details on how to submit the query."""
self.query = query
self.domain = None
# ... other fields.
def lookup(self):
"""Submits the `whois` query and stores results internally."""
# ... implementation
Now, whether or not you roll your own using urllib, wrap around a command line utility (like you're doing), or import a third party library and use that (like you're saying), this interface stays the same.
This approach is generally not considered ugly at all -- sometimes command utilities do what you want and you should be able to leverage them. If speed ends up being a bottleneck, your abstraction makes the process of switching to a native Python implementation transparent to your client code.
Practicality beats purity -- that's what's Pythonic. :)
A: Here is the whois client re-implemented in Python:
http://code.activestate.com/recipes/577364-whois-client/
A: I don't know if gwhois does something special with the server output; however, you can simply connect to the whois server on the whois port (43), send your query, read all the data in the reply and parse it (see the sketch after the list below). To make life a little easier, you could use the telnetlib.Telnet class (even though the whois protocol is much simpler than the telnet protocol) instead of plain sockets.
The tricky parts:
*
*which whois server will you ask? RIPE, ARIN, APNIC, LACNIC, AFRINIC, JPNIC, VERIO, etc. LACNIC could be a useful fallback, since they tend to reply with useful data to requests outside of their domain.
*what are the exact options and arguments for each whois server? some offer help, others don't. In general, plain domain names work without any special options.
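Here is a minimal raw-socket sketch (Python 3; the .com registry server name is an assumption you would vary per TLD):
import socket

def whois_query(domain, server="whois.verisign-grs.com", port=43):
    # whois is a trivially simple protocol: send the query, read until EOF
    s = socket.create_connection((server, port), timeout=10)
    try:
        s.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    finally:
        s.close()
    return b"".join(chunks).decode("utf-8", errors="replace")

print(whois_query("example.com"))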
A: Another way to do it is to use urllib2 module to parse some other page's whois service (many sites like that exist). But that seems like even more of a hack that what you do now, and would give you a dependency on whatever whois site you chose, which is bad.
I hate to say it, but unless you want to re-implement whois in your program (which would be re-inventing the wheel), running whois on the OS and parsing the output (ie what you are doing now) seems like the right way to do it.
A: Parsing another webpage wouldn't be as bad (assuming their HTML isn't very bad), but it would actually tie me to them - if they're down, I'm down :)
Actually I found some old project on sourceforge: rwhois.py. What scares me a bit is that their last update is from 2003. But it might be a good place to start a reimplementation of what I do right now... Well, I felt obliged to post the link to this project anyway, just for further reference.
A: here is a ready-to-use solution that works for me; written for Python 3.1 (when backporting to Py2.x, take special care of the bytes / Unicode text distinctions). your single point of access is the method DRWHO.whois(), which expects a domain name to be passed in; it will then try to resolve the name using the provider configured as DRWHO.whois_providers[ '*' ] (a more complete solution could differentiate providers according to the top level domain). DRWHO.whois() will return a dictionary with a single entry text, which contains the response text sent back by the WHOIS server. Again, a more complete solution would then try and parse the text (which must be done separately for each provider, as there is no standard format) and return a more structured format (e.g., set a flag available which specifies whether or not the domain looks available). have fun!
##########################################################################
import asyncore as _sys_asyncore
from asyncore import loop as _sys_asyncore_loop
import socket as _sys_socket
##########################################################################
class _Whois_request( _sys_asyncore.dispatcher_with_send, object ):
# simple whois requester
# original code by Frederik Lundh
#-----------------------------------------------------------------------
whoisPort = 43
#-----------------------------------------------------------------------
def __init__(self, consumer, host, provider ):
_sys_asyncore.dispatcher_with_send.__init__(self)
self.consumer = consumer
self.query = host
self.create_socket( _sys_socket.AF_INET, _sys_socket.SOCK_STREAM )
self.connect( ( provider, self.whoisPort, ) )
#-----------------------------------------------------------------------
def handle_connect(self):
self.send( bytes( '%s\r\n' % ( self.query, ), 'utf-8' ) )
#-----------------------------------------------------------------------
def handle_expt(self):
self.close() # connection failed, shutdown
self.consumer.abort()
#-----------------------------------------------------------------------
def handle_read(self):
# get data from server
self.consumer.feed( self.recv( 2048 ) )
#-----------------------------------------------------------------------
def handle_close(self):
self.close()
self.consumer.close()
##########################################################################
class _Whois_consumer( object ):
# original code by Frederik Lundh
#-----------------------------------------------------------------------
def __init__( self, host, provider, result ):
self.texts_as_bytes = []
self.host = host
self.provider = provider
self.result = result
#-----------------------------------------------------------------------
def feed( self, text ):
self.texts_as_bytes.append( text.strip() )
#-----------------------------------------------------------------------
def abort(self):
del self.texts_as_bytes[:]
self.finalize()
#-----------------------------------------------------------------------
def close(self):
self.finalize()
#-----------------------------------------------------------------------
def finalize( self ):
        # join bytestrings and decode them (with a guessed encoding):
text_as_bytes = b'\n'.join( self.texts_as_bytes )
self.result[ 'text' ] = text_as_bytes.decode( 'utf-8' )
##########################################################################
class DRWHO:
#-----------------------------------------------------------------------
whois_providers = {
'~isa': 'DRWHO/whois-providers',
'*': 'whois.opensrs.net', }
#-----------------------------------------------------------------------
def whois( self, domain ):
R = {}
provider = self._get_whois_provider( '*' )
self._fetch_whois( provider, domain, R )
return R
#-----------------------------------------------------------------------
def _get_whois_provider( self, top_level_domain ):
providers = self.whois_providers
R = providers.get( top_level_domain, None )
if R is None:
R = providers[ '*' ]
return R
#-----------------------------------------------------------------------
def _fetch_whois( self, provider, domain, pod ):
#.....................................................................
consumer = _Whois_consumer( domain, provider, pod )
request = _Whois_request( consumer, domain, provider )
#.....................................................................
_sys_asyncore_loop() # loops until requests have been processed
#=========================================================================
DRWHO = DRWHO()
domain = 'example.com'
whois = DRWHO.whois( domain )
print( whois[ 'text' ] )
A: import socket
socket.gethostbyname_ex('url.com')
if it raises a gaierror, you know it's not registered with any DNS
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Calling C# code from Java? Does anyone have a good solution for integrating some C# code into a java application?
The code is small, so I could re-write in java, but I would rather reuse the code if possible. Don't repeat yourself, etc.
Also, I know I can expose the C# as a web service or whatever, but it has some security/encryption stuff in there, so I would rather keep it tightly integrated if possible.
Edit: It's going to be on a server-based app, so "downloading" another runtime is irrelevant.
A: If it's short, I think you're better off re-writing the code in java. Downloading one 50Mb runtime is bad enough.
A: There is an IL to Java Bytecode compiler GrassHopper which may be of use to you. I've never tried it though.
I'd look at rewriting your code in Java though
EDIT: Note that Grasshopper seems to be no longer available.
A: You would use the Java Native Interface to call your C# code compiled into a DLL.
If it's a small amount of C#, it would be much easier to port it to Java. If it's a lot, this might be a good way to do it.
Here is a high-level overview of it:
http://en.wikipedia.org/wiki/Java_Native_Interface
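For a sense of what the Java side looks like (the names here are hypothetical, and the C# DLL must be wrapped so it exposes C-callable exports, e.g. via a C++/CLI shim):
public class CryptoBridge {
    static {
        // Loads MyCryptoWrapper.dll from java.library.path (hypothetical wrapper)
        System.loadLibrary("MyCryptoWrapper");
    }

    // Implemented in the native wrapper, which forwards to the C# code
    public static native String encrypt(String plainText);
}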
Your other option would be to create a COM assembly from the C# code and use J-Interop to invoke it.
http://sourceforge.net/projects/j-interop/
A: We used JNBridge for this, and it worked great. It handles Java->.NET and vice versa, all in-proc.
A: I am author of jni4net, open source intraprocess bridge between JVM and CLR. It's build on top of JNI and PInvoke. No C/C++ code needed. I hope it will help you.
A: If you do not want to rewrite, handle it as inter-process communication and choose one of the following:
*
*Named pipes
*Sockets
*SOAP
A: I would rewrite it if it's not too much trouble.
The web service would work, but it seems like that would be a lot of overhead just to reuse a little code.
A: http://www.infoq.com/articles/in-process-java-net-integration suggests running CLR and JVM in the same process space and passing calls back and forth. It sounds very efficient. I'm going to give it a try and integrate it into Jace if it works well.
A: If it is a piece of code that is exposable as a command line utility, I just make the other host language use a system call to execute the utility.
If your C# app needs to call Java, compile a special Java main that takes appropriate command line args and returns text output.
It's the oldest, simplest method.
A: You can call your C# classes (compiled into a DLL) via a bridging library; various libraries are available, each with its own characteristics. JNBridge generates proxy classes that you can call to manage the code in Java classes. JCOBridge lets you load your C# classes and use them from Java using the invoke mechanism; likewise, Javonet lets you import Java classes and call Java code using the invoke mechanism. All the explored solutions are commercial solutions that let you call Java code from .NET and vice versa, with graphical user interface integration and other amenities.
Links:
jnbridge java-.NET bridge Developer and Deployment license schema with 30 day free trial
jcobridge java-.NET bridge Developer and Deployment license schema with unlimited Trial
javonet java-.NET bridge Research and Professional license schema with 30-day unlimited Trial after sign-up
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: How do I get list of recent files in GNU Emacs? When I use Emacs I want to be able to easily display and navigate through a list of files I worked on from not just the current session but from previous sessions. (BTW, running Emacs 22.2 on Windows)
A: Even if you don't have recentf turned on, Emacs is saving a list of files entered via the minibuffer in the variable file-name-history. Also, executing (savehist-mode 1) in your .emacs file makes that variable persist across invocations of Emacs.
So here's a little function that displays the files that actually exist from that list (anyone is welcome to use/build on this):
(defun dir-of-recent-files ()
"See the list of recently entered files in a Dired buffer."
(interactive)
(dired (cons
"*Recent Files*"
(seq-filter
'file-exists-p
(delete-dups
(mapcar (lambda (s) (string-trim-right s "/*"))
file-name-history)
))))
)
I find this quite useful and have it bound to one of those little special function keys on my desktop keyboard. (And so I have not seen the point of turning on recentf...)
A: From Joe Grossberg's blog (no longer available):
But if you're using GNU Emacs 21.2
(the latest version, which includes
this as part of the standard distro),
you can just put the following lines
into your .emacs file
;; recentf stuff
(require 'recentf)
(recentf-mode 1)
(setq recentf-max-menu-items 25)
(global-set-key "\C-x\ \C-r" 'recentf-open-files)
Then, when you launch emacs, hit
CTRL-X CTRL-R. It will show a list of
the recently-opened files in a buffer.
Move the cursor to a line and press
ENTER. That will open the file in
question, and move it to the top of
your recent-file list.
(Note: Emacs records file names.
Therefore, if you move or rename a
file outside of Emacs, it won't
automatically update the list. You'll
have to open the renamed file with the
normal CTRL-X CTRL-F method.)
Jayakrishnan Varnam has a page
including screenshots of how this
package works.
Note: You don't need the (require 'recentf) line.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Does silverlight work on chrome? Does anyone know if silverlight plugs into chrome, or when they plan to support it?
A: This guy have had partial success with silverlight in chrome, but it does not seem to be supported:
http://wildermuth.com/2008/09/02/Silverlight_2_and_Google_Chrome
From The Microsoft Silverlight Team in the silverlight forum:
Hello, currently we don't have plans
to support Chrome. We will support it
in the future if it gains enough
market share. Please understand, each
browser implements the plug-in model
differently, so it'll be a lot of
effort to officially support a browser
100%... By the way, IE 8 also runs
each tab in its own process. If a tab
crashes, other tabs will still work
fine.
UPDATE:
Jon Galloway has just posted instructions on how to get silverlight successfully running on Chrome here:
http://weblogs.asp.net/jgalloway/archive/2008/09/17/silverlight-on-chrome.aspx
A: The official word on what is supported looks like this:
[matrix of officially supported browsers] http://www.jesseliberty.com/sl/browsers.jpg
The reality is that we do run on a lot of browsers, but things change mighty quickly in these here parts.
A: For what it is worth, the Dev Branch of Google Chrome was recently updated to support Silverlight 2. I tried it and it works for me. Of course, you have to use the Dev release of Google Chrome. You can get more information about switching to Chrome Dev here.
A: Silverlight already works with web-kit, and since Google's Chrome is based on web-kit, it shouldn't be too much effort to get it working.
Indeed, this gentleman seems to have had some success.
Based on this, I would suspect that Silverlight will be fully supported by Chrome by the time it goes gold.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Preventing accidental double clicking on a button I have a few controls that inherit from ASP.NET buttons and use onserverclick.
If the user clicks twice, the button fires two server side events. How can I prevent this?
I tried setting this.disabled='true' after the click (in the onclick attribute) via javascript, but that blocks the first postback as well.
A: You don't necessarily want to show the button disabled on postback. You want to make sure they don't accidentally submit twice. So disabling or hiding the button as a result of a server-side action is already too late in the game. By this point the 2nd request is already on its way. You need to either do it with javascript or make sure your server side code won't run twice.
A: In the case of an UpdatePanel and a button inside a FormView template, I use the following approach:
// Using that prm reference, hook _initializeRequest
Sys.WebForms.PageRequestManager.getInstance().add_initializeRequest(InitializeRequestBuchung);
// Abfangen von Mehrfachklicks auf Buttons für asynchrone Postbacks im Updatepanel
function InitializeRequestBuchung(sender, args) {
var arrButtonIds = ["ButtonInsert", "ButtonUpdate"];
// Get a reference to the PageRequestManager.
var prm = Sys.WebForms.PageRequestManager.getInstance();
    if (prm.get_isInAsyncPostBack() && jQuery.inArray(args.get_postBackElement().id, arrButtonIds) > -1) {
args.set_cancel(true);
}
}
This cancels the following postback if an async postback is currently still active. Works perfectly.
A: See this example for disabling control on postback. It should help you do what you're trying to achieve.
http://encosia.com/2007/04/17/disable-a-button-control-during-postback/
A: Someone else said this somewhere on here a few days ago, and I concur - use javascript to simply hide the button instead of disabling it; you could show a "spinner" image in its place, which lets the user know what is going on.
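A small sketch of that idea (the control ID and image path are made up):
<asp:Button ID="btnSubmit" runat="server" Text="Submit"
    OnClientClick="this.style.display='none'; document.getElementById('spinner').style.display='inline';" />
<img id="spinner" src="spinner.gif" alt="Working..." style="display:none;" />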
A: Instead of hiding, what I have done is swapping buttons using javascript. Show another greyed out image on the click of the first button.
A: Set the Button property UseSubmitBehavior to false. Then create an OnClientClick function that disables the button.
It would look something like this:
<script type="text/javascript">
function disableFunctn(button){
button.disabled = true;
}
</script>
<asp:Button ID="button1" runat="server" UseSubmitBehavior="false" OnClientClick="disableFunctn(this);"/>
fastest cheapest way:
<asp:Button ID="button1" runat="server" UseSubmitBehavior="false" OnClientClick="this.disabled=true;"/>
A: You can also try, for example, btnSave.Enabled = false; when the button is hit and before the processing for the button is done in the Click event routine. If you need it to be enabled again, have a separate button that resets the button for reuse.
Another method is to set the button with verification so that the user is asked if they want to Save, it should pop up both times.
Yet another method would be to flag the first occurrence then set a popup for the second to verify a second or subsequent usage.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: CSS - Placement of a div in the lower left-hand corner I wish I were a CSS smarty ....
How can you place a div container in the lower left-hand corner of the web page; taking into account the users scroll-position?
A: To position an element relative to the "viewport" (the window or frame it's in), and have it ignore how that viewport is scrolled, you can use the position: fixed; property value (MDN documentation). This has been supported by every browser since Internet Explorer 7.
To position the element at the bottom-left of the window, we need to also specify that it should be positioned at 0 distance from the bottom and left:
position: fixed;
bottom: 0;
left: 0;
Full Example
.bottom-left {
position: fixed;
bottom: 0;
left: 0;
}
.alert {
border: 2px solid red;
background: white;
font-weight: bold;
padding: 1em;
}
<div class="bottom-left alert">
Look at me!
</div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam dignissim diam arcu, a gravida justo malesuada et. Fusce iaculis, dui laoreet ultricies congue, arcu lectus rhoncus neque, ut molestie magna augue ut neque. Duis in feugiat ipsum, et imperdiet nunc. Cras convallis lorem eu diam malesuada malesuada. Nunc dapibus suscipit ligula, vel mattis eros blandit id. In placerat justo vitae pretium fermentum. Proin ac erat commodo nibh ullamcorper feugiat. Nulla ultricies maximus massa, non semper dolor malesuada vel. Nullam sem justo, bibendum vel tempus pharetra, gravida vel sapien. Morbi facilisis tristique mauris vel elementum. Ut porttitor egestas metus eget auctor. Phasellus efficitur rutrum massa nec fringilla. Aliquam et imperdiet leo. Sed tincidunt hendrerit tortor eget tempor.</p>
<p>Sed vel dolor lectus. Nulla sed blandit lacus. Mauris ac magna nec libero vehicula aliquet id a libero. Vivamus sed lobortis velit. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Sed at feugiat sapien, ut commodo mi. Quisque scelerisque maximus efficitur. In ultrices, magna eu semper pellentesque, tellus odio hendrerit augue, ut porta sapien lacus quis odio.</p>
<p>Duis sodales, dui a condimentum imperdiet, tellus est laoreet velit, a viverra risus libero sed urna. Phasellus sollicitudin tincidunt viverra. Proin vulputate leo at justo auctor feugiat. Nam auctor, mauris at commodo tempus, eros diam varius ligula, vitae efficitur massa lectus et enim. Integer tristique nibh in lacus condimentum, et interdum urna mollis. Aenean id risus tristique, volutpat dolor sed, fermentum ex. Interdum et malesuada fames ac ante ipsum primis in faucibus. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Nullam velit nibh, elementum at orci quis, tempor fermentum tellus. Nunc facilisis nisi at leo auctor aliquet. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Aliquam tempor ipsum vel scelerisque tincidunt. Etiam vulputate auctor ante, in tristique est congue ut. Vestibulum maximus nibh vestibulum tristique ullamcorper. Phasellus eu eleifend ante, nec efficitur nulla.</p>
<p>Nunc pulvinar purus id arcu egestas, sed iaculis nisl finibus. Sed cursus bibendum tortor, id cursus lacus euismod in. Nam lacinia, sapien faucibus dapibus varius, neque velit fringilla est, in porta quam sem sit amet ligula. Aliquam ornare est ac pellentesque suscipit. Curabitur eleifend convallis sem, volutpat efficitur erat laoreet id. Maecenas interdum ante in lectus varius, lobortis auctor quam rutrum. Nullam tristique felis quis lectus luctus gravida. Cras porttitor pellentesque nibh. Fusce placerat vehicula commodo. Mauris vel lectus viverra sem consectetur sagittis quis vel lectus. Quisque vel dapibus augue. Sed lacinia massa quis dui sodales faucibus.</p>
<p>Donec sagittis, dolor sed fermentum dapibus, justo ipsum porttitor purus, sed fermentum mi nulla non lorem. Praesent aliquet iaculis molestie. Phasellus enim nunc, vestibulum non odio vel, porta imperdiet lorem. Morbi laoreet felis a ipsum elementum sollicitudin. Morbi varius mollis ex, a posuere lorem fringilla ac. Curabitur metus ligula, mollis quis diam eu, pulvinar placerat libero. Aenean vestibulum lacinia diam in facilisis. Praesent egestas sapien a est consequat facilisis. Nulla id mauris a metus venenatis pellentesque. Praesent justo augue, efficitur ac vulputate et, luctus at elit. Proin quis urna quam. Pellentesque iaculis, felis sed hendrerit venenatis, purus augue venenatis tellus, a posuere justo tellus at ex. Donec et arcu non arcu scelerisque efficitur nec sed dolor. Sed eget lacus enim. Donec sodales mollis condimentum.</p>
| {
"language": "la",
"url": "https://stackoverflow.com/questions/50430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: How to iterate a jagged array? This has been driving me crazy for a few days. Why doesn't the following work?
Dim arr(3, 3) As Integer
For y As Integer = 0 To arr.GetLength(0) - 1
For x As Integer = 0 To arr.GetLength(y) - 1
arr(y, x) = y + x
Next
Next
Also, what if the array looked like this instead?
{ {1, 2, 3},
{4, 5, 6, 7, 8, 9, 9, 9},
{5, 4, 3, 2}
}
A: Because there is no '2' or '3' dimension. Should be .GetLength(1) instead of .GetLength(y)
Also: in VB.Net array declarations work a little differently. The subscript you specify in the declaration is the last index, not the number of items created like with C# or C++. But the array is still 0-indexed like C# or C++, instead of 1-indexed like VB6. That means that if you move to VB.Net from a different language your array instincts are probably wrong, no matter which language it is. In VB.Net, Dim arr(3,3) As Integer actually creates a 4x4 array.
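You can verify this with a quick sketch:
Dim arr(3, 3) As Integer
Console.WriteLine(arr.GetLength(0) & " x " & arr.GetLength(1)) ' prints "4 x 4"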
A: Ok, so what you really need is a "jagged array". This will allow you to have an "array that contains other arrays of varying lengths".
Dim arr As Integer()() = {New Integer() {1, 2, 3}, New Integer() {4, 5, 6, 7, 8, 9, 9, 9}, New Integer() {5, 4, 3, 2}}
For x = 0 To arr.GetUpperBound(0)
Console.WriteLine("Row " & x & " has " & arr(x).GetUpperBound(0) & " columns")
For y = 0 To arr(x).GetUpperBound(0)
Console.WriteLine("(" & x & "," & y & ") = " & arr(x)(y))
Next
Next
Output:
Row 0 has 3 columns
(0,0) = 1
(0,1) = 2
(0,2) = 3
Row 1 has 8 columns
(1,0) = 4
(1,1) = 5
(1,2) = 6
(1,3) = 7
(1,4) = 8
(1,5) = 9
(1,6) = 9
(1,7) = 9
Row 2 has 4 columns
(2,0) = 5
(2,1) = 4
(2,2) = 3
(2,3) = 2
A: arr.GetLength(y)
should be
arr.GetLength(1)
A: Well what if I had an array that looked like this
{ {1, 2, 3},
{4, 5, 6, 7, 8, 9, 9, 9},
{5, 4, 3, 2}
}
How would GetLength(1) still know the length of each row?
Basically what I want is.... a way to find the number of elements in any given row.
A: Dim arr(3, 3) As Integer
Dim y As Integer
Dim x As Integer
For x = 0 To arr.GetLength(0) - 1
For y = 0 To arr.GetLength(1) - 1
arr(x, y) = x + y
Next
Next
The above code worked for me.
Edit: the code feels dirty though. I'm wondering what it is you are trying to accomplish?
A: Your declaration Dim arr(3, 3) As Integer already specifies that there are 4 elements in any given row (in VB.NET the subscript is the last index, not the count).
You could try:
Dim arr(3) as Integer()
You should then be able to do:
arr(n).Length
To find the length of row n.
I'm a bit rusty on VB6 and never learned VB.NET, but this should give you a 'jagged' array; see the sketch below. Check out the MSDN documentation on multidimensional arrays.
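A minimal VB.NET sketch of that idea, with made-up row values, where each row reports its own Length:
Dim arr(2) As Integer()   ' upper bound 2, so three rows
arr(0) = New Integer() {1, 2, 3}
arr(1) = New Integer() {4, 5, 6, 7, 8, 9, 9, 9}
arr(2) = New Integer() {5, 4, 3, 2}
For n As Integer = 0 To arr.Length - 1
    Console.WriteLine("Row " & n & " has " & arr(n).Length & " elements")
Next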
A: This C# code gets all the combinations of items in a jagged array:
static void Main(string[] args)
{
bool exit = false;
int[] indices = new int[3] { 0, 0, 0 };
string[][] vectores = new string[3][];
vectores[0] = new string[] { "A", "B", "C" };
vectores[1] = new string[] { "A", "B" };
vectores[2] = new string[] { "B", "D", "E", "F" };
string[] item;
int[] tamaños = new int[3]{vectores[0].GetUpperBound(0),
vectores[1].GetUpperBound(0),
vectores[2].GetUpperBound(0)};
while (!exit)
{
item = new string[]{ vectores[0][indices[0]],
vectores[1][indices[1]],
vectores[2][indices[2]]};
Console.WriteLine("[{0},{1},{2}]={3}{4}{5}", indices[0], indices[1], indices[2], item[0], item[1], item[2]);
GetVector(tamaños, ref indices, ref exit);
}
Console.ReadKey();
}
public static void GetVector(int[] tamaños, ref int[] indices, ref bool exit)
{
for (int i = tamaños.GetUpperBound(0); i >= 0; i--)
{
if (tamaños[i] > indices[i])
{
indices[i]++;
break;
}
else
{
//LAST ITEM IN THE ARRAY: CHECK THE OTHER DIMENSIONS TO SEE IF THEY ARE ALREADY AT THE LAST ITEM
if (!ValidateIndexes(tamaños, indices))
indices[i] = 0;
else
{
exit = true;
break;
}
}
}
}
public static bool ValidateIndexes(int[] tamaños, int[] indices)
{
for (int i = 0; i < tamaños.Length; i++)
{
if (tamaños[i] != indices[i])
return false;
}
return true;
}
The output looks like
[0,0,0]=AAB
[0,0,1]=AAD
[0,0,2]=AAE
[0,0,3]=AAF
[0,1,0]=ABB
[0,1,1]=ABD
[0,1,2]=ABE
[0,1,3]=ABF
[1,0,0]=BAB
[1,0,1]=BAD
[1,0,2]=BAE
[1,0,3]=BAF
[1,1,0]=BBB
[1,1,1]=BBD
[1,1,2]=BBE
[1,1,3]=BBF
[2,0,0]=CAB
[2,0,1]=CAD
[2,0,2]=CAE
[2,0,3]=CAF
[2,1,0]=CBB
[2,1,1]=CBD
[2,1,2]=CBE
[2,1,3]=CBF
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I size a UITextView to its content? Is there a good way to adjust the size of a UITextView to conform to its content? Say for instance I have a UITextView that contains one line of text:
"Hello world"
I then add another line of text:
"Goodbye world"
Is there a good way in Cocoa Touch to get the rect that will hold all of the lines in the text view so that I can adjust the parent view accordingly?
As another example, look at the notes' field for events in the Calendar application - note how the cell (and the UITextView it contains) expands to hold all lines of text in the notes' string.
A: We can do it with constraints.
1. Set a height constraint for the UITextView.
2. Create an IBOutlet for that height constraint.
@property (weak, nonatomic) IBOutlet NSLayoutConstraint *txtheightconstraints;
3. Don't forget to set the delegate for your textview.
4. Implement textViewDidChange:
-(void)textViewDidChange:(UITextView *)textView
{
CGFloat fixedWidth = textView.frame.size.width;
CGSize newSize = [textView sizeThatFits:CGSizeMake(fixedWidth, MAXFLOAT)];
CGRect newFrame = textView.frame;
newFrame.size = CGSizeMake(fmaxf(newSize.width, fixedWidth), newSize.height);
NSLog(@"this is updating height%@",NSStringFromCGSize(newFrame.size));
[UIView animateWithDuration:0.2 animations:^{
_txtheightconstraints.constant=newFrame.size.height;
}];
}
then update your constraint like this :)
A: If you don't have the UITextView handy (for example, you're sizing table view cells), you'll have to calculate the size by measuring the string, then accounting for the 8 pt of padding on each side of a UITextView. For example, if you know the desired width of your text view and want to figure out the corresponding height:
NSString * string = ...;
CGFloat textViewWidth = ...;
UIFont * font = ...;
CGSize size = CGSizeMake(textViewWidth - 8 - 8, 100000);
size.height = [string sizeWithFont:font constrainedToSize:size].height + 8 + 8;
Here, each 8 is accounting for one of the four padded edges, and 100000 just serves as a very large maximum size.
In practice, you may want to add an extra font.leading to the height; this adds a blank line below your text, which may look better if there are visually heavy controls directly beneath the text view.
A: Starting with iOS 8, it is possible to use the auto layout features of a UITableView to automatically resize a UITextView with no custom code at all. I have put a project in github that demonstrates this in action, but here is the key:
*
*The UITextView must have scrolling disabled, which you can do programmatically or through the interface builder. It will not resize if scrolling is enabled because scrolling lets you view the larger content.
*In viewDidLoad for the UITableViewController, you must set a value for estimatedRowHeight and then set the rowHeight to UITableViewAutomaticDimension.
- (void)viewDidLoad {
[super viewDidLoad];
self.tableView.estimatedRowHeight = self.tableView.rowHeight;
self.tableView.rowHeight = UITableViewAutomaticDimension;
}
*The project deployment target must be iOS 8 or greater.
A: I reviewed all the answers, and they all keep a fixed width and adjust only the height. If you wish to adjust the width as well, you can easily use this method:
so when configuring your text view, set scroll disabled
textView.isScrollEnabled = false
and then in delegate method func textViewDidChange(_ textView: UITextView) add this code:
func textViewDidChange(_ textView: UITextView) {
let newSize = textView.sizeThatFits(CGSize(width: CGFloat.greatestFiniteMagnitude, height: CGFloat.greatestFiniteMagnitude))
textView.frame = CGRect(origin: textView.frame.origin, size: newSize)
}
Outputs:
A: I found out a way to resize the height of a text field according to the text inside it and also arrange a label below it based on the height of the text field! Here is the code.
UITextView *_textView = [[UITextView alloc] initWithFrame:CGRectMake(10, 10, 300, 10)];
NSString *str = @"This is a test text view to check the auto increment of height of a text view. This is only a test. The real data is something different.";
_textView.text = str;
[self.view addSubview:_textView];
CGRect frame = _textView.frame;
frame.size.height = _textView.contentSize.height;
_textView.frame = frame;
UILabel *lbl = [[UILabel alloc] initWithFrame:CGRectMake(10, 5 + frame.origin.y + frame.size.height, 300, 20)];
lbl.text = @"Hello!";
[self.view addSubview:lbl];
A: If you are using Auto Layout and sizeToFit isn't working, check your width constraint. If you missed the width constraint, the height will be accurate once you add it.
No need to use any other API; just one line fixes the whole issue.
[_textView sizeToFit];
Here, I was only concerned with height, keeping the width fixed and had missed the width constraint of my TextView in storyboard.
And this was to display dynamic content coming from the services.
Hope this might help..
A: The following things are enough:
*
*Just remember to set scrolling enabled to NO for your UITextView:
*Properly set Auto Layout Constraints.
You may even use UITableViewAutomaticDimension.
A: Using UITextViewDelegate is the easiest way:
func textViewDidChange(_ textView: UITextView) {
textView.sizeToFit()
textviewHeight.constant = textView.contentSize.height
}
A: Swift :
textView.sizeToFit()
A: This works for both iOS 6.1 and iOS 7:
- (void)textViewDidChange:(UITextView *)textView
{
CGFloat fixedWidth = textView.frame.size.width;
CGSize newSize = [textView sizeThatFits:CGSizeMake(fixedWidth, MAXFLOAT)];
CGRect newFrame = textView.frame;
newFrame.size = CGSizeMake(fmaxf(newSize.width, fixedWidth), newSize.height);
textView.frame = newFrame;
}
Or in Swift (Works with Swift 4.1 in iOS 11)
let fixedWidth = textView.frame.size.width
let newSize = textView.sizeThatFits(CGSize(width: fixedWidth, height: CGFloat.greatestFiniteMagnitude))
textView.frame.size = CGSize(width: max(newSize.width, fixedWidth), height: newSize.height)
If you want support for iOS 6.1 then you should also:
textview.scrollEnabled = NO;
A: Combined with Mike McMaster's answer, you might want to do something like:
[myTextView setDelegate: self];
...
- (void)textViewDidChange:(UITextView *)textView {
if (myTextView == textView) {
// it changed. Do resizing here.
}
}
A: This no longer works on iOS 7 or above
There is actually a very easy way to do resizing of the UITextView to its correct height of the content. It can be done using the UITextView contentSize.
CGRect frame = _textView.frame;
frame.size.height = _textView.contentSize.height;
_textView.frame = frame;
One thing to note is that the correct contentSize is only available after the UITextView has been added to the view with addSubview. Prior to that it is equal to frame.size
This will not work if auto layout is ON. With auto layout, the general approach is to use the sizeThatFits method and update the constant value on a height constraint.
CGSize sizeThatShouldFitTheContent = [_textView sizeThatFits:_textView.frame.size];
heightConstraint.constant = sizeThatShouldFitTheContent.height;
heightConstraint is a layout constraint that you typically setup via a IBOutlet by linking the property to the height constraint created in a storyboard.
Just to add to this amazing answer, 2014, if you:
[self.textView sizeToFit];
there is a difference in behaviour with the iPhone6+ only:
With the 6+ only (not the 5s or 6) it does add "one more blank line" to the UITextView. The "RL solution" fixes this perfectly:
CGRect _f = self.mainPostText.frame;
_f.size.height = self.mainPostText.contentSize.height;
self.mainPostText.frame = _f;
It fixes the "extra line" problem on 6+.
A: disable scrolling
add constraints
and add your text
[yourTextView setText:@"your text"];
[yourTextView layoutIfNeeded];
if you use UIScrollView you should add this too;
[yourScrollView layoutIfNeeded];
-(void)viewDidAppear:(BOOL)animated{
CGRect contentRect = CGRectZero;
for (UIView *view in self.yourScrollView.subviews) {
contentRect = CGRectUnion(contentRect, view.frame);
}
self.yourScrollView.contentSize = contentRect.size;
}
A: This worked nicely when I needed to make text in a UITextView fit a specific area:
// The text must already be added to the subview, or contentSize will be wrong.
- (void) reduceFontToFit: (UITextView *)tv {
UIFont *font = tv.font;
double pointSize = font.pointSize;
while (tv.contentSize.height > tv.frame.size.height && pointSize > 7.0) {
pointSize -= 1.0;
UIFont *newFont = [UIFont fontWithName:font.fontName size:pointSize];
tv.font = newFont;
}
if (pointSize != font.pointSize)
NSLog(@"font down to %.1f from %.1f", pointSize, tv.font.pointSize);
}
A: Here is the Swift version of @jhibberd's answer:
let cell:MsgTableViewCell! = self.tableView.dequeueReusableCellWithIdentifier("MsgTableViewCell", forIndexPath: indexPath) as? MsgTableViewCell
cell.msgText.text = self.items[indexPath.row]
var fixedWidth:CGFloat = cell.msgText.frame.size.width
var size:CGSize = CGSize(width: fixedWidth,height: CGFloat.max)
var newSize:CGSize = cell.msgText.sizeThatFits(size)
var newFrame:CGRect = cell.msgText.frame;
newFrame.size = CGSizeMake(CGFloat(fmaxf(Float(newSize.width), Float(fixedWidth))), newSize.height);
cell.msgText.frame = newFrame
cell.msgText.frame.size = newSize
return cell
A: For iOS 7.0, instead of setting the frame.size.height to the contentSize.height (which currently does nothing) use [textView sizeToFit].
See this question.
A: This works fine in Swift 5, in case you want your TextView to fit its content as the user types, on the fly.
Just implement UITextViewDelegate with:
func textViewDidChange(_ textView: UITextView) {
let newSize = textView.sizeThatFits(CGSize(width: CGFloat.greatestFiniteMagnitude, height: CGFloat.greatestFiniteMagnitude))
textView.frame.size = CGSize(width: newSize.width, height: newSize.height)
}
A: In my (limited) experience,
- (CGSize)sizeWithFont:(UIFont *)font forWidth:(CGFloat)width lineBreakMode:(UILineBreakMode)lineBreakMode
does not respect newline characters, so you can end up with a lot shorter CGSize than is actually required.
- (CGSize)sizeWithFont:(UIFont *)font constrainedToSize:(CGSize)size
does seem to respect the newlines.
Also, the text isn't actually rendered at the top of the UITextView. In my code, I set the new height of the UITextView to be 24 pixels larger than the height returned by the sizeWithFont methods.
A: In iOS6, you can check the contentSize property of UITextView right after you set the text. In iOS7, this will no longer work. If you want to restore this behavior for iOS7, place the following code in a subclass of UITextView.
- (void)setText:(NSString *)text
{
[super setText:text];
if (NSFoundationVersionNumber > NSFoundationVersionNumber_iOS_6_1) {
CGRect rect = [self.textContainer.layoutManager usedRectForTextContainer:self.textContainer];
UIEdgeInsets inset = self.textContainerInset;
self.contentSize = UIEdgeInsetsInsetRect(rect, inset).size;
}
}
A: If anyone else gets here: this solution worked for me, combining 1 "Ronnie Liew" + 4 "user63934" (my text arrives from a web service):
Note the 1000 (nothing will be that big "in my case"):
UIFont *fontNormal = [UIFont fontWithName:FONTNAME size:FONTSIZE];
NSString *dealDescription = [client objectForKey:@"description"];
//4
CGSize textSize = [dealDescription sizeWithFont:fontNormal constrainedToSize:CGSizeMake(containerUIView.frame.size.width, 1000)];
CGRect dealDescRect = CGRectMake(10, 300, containerUIView.frame.size.width, textSize.height);
UITextView *dealDesc = [[[UITextView alloc] initWithFrame:dealDescRect] autorelease];
dealDesc.text = dealDescription;
//add the subview to the container
[containerUIView addSubview:dealDesc];
//1) after adding the view
CGRect frame = dealDesc.frame;
frame.size.height = dealDesc.contentSize.height;
dealDesc.frame = frame;
And that is... Cheers
A: I will post the right solution at the bottom of the page in case someone is brave (or desperate) enough to read to this point.
Here is the GitHub repo for those who don't want to read all that text: resizableTextView
This works with iOS 7 (and I believe it will work with iOS 8) and with Auto Layout. You don't need magic numbers, disabled layout and stuff like that. Short and elegant solution.
I think all constraint-related code should go in the updateConstraints method. So, let's make our own ResizableTextView.
The first problem we meet here is that we don't know the real content size before the viewDidLoad method. We could take the long and buggy road and calculate it based on font size, line breaks, etc. But we need a robust solution, so we'll do:
CGSize contentSize = [self sizeThatFits:CGSizeMake(self.frame.size.width, FLT_MAX)];
So now we know the real contentSize no matter where we are: before or after viewDidLoad. Now add a height constraint on the textView (via storyboard or code, no matter how). We'll adjust its value with our contentSize.height:
[self.constraints enumerateObjectsUsingBlock:^(NSLayoutConstraint *constraint, NSUInteger idx, BOOL *stop) {
if (constraint.firstAttribute == NSLayoutAttributeHeight) {
constraint.constant = contentSize.height;
*stop = YES;
}
}];
The last thing to do is to tell superclass to updateConstraints.
[super updateConstraints];
Now our class looks like:
ResizableTextView.m
- (void) updateConstraints {
CGSize contentSize = [self sizeThatFits:CGSizeMake(self.frame.size.width, FLT_MAX)];
[self.constraints enumerateObjectsUsingBlock:^(NSLayoutConstraint *constraint, NSUInteger idx, BOOL *stop) {
if (constraint.firstAttribute == NSLayoutAttributeHeight) {
constraint.constant = contentSize.height;
*stop = YES;
}
}];
[super updateConstraints];
}
Pretty and clean, right? And you don't have to deal with that code in your controllers!
But wait!
Y NO ANIMATION!
You can easily animate changes to make textView stretch smoothly. Here is an example:
[self.view layoutIfNeeded];
// do your own text change here.
self.infoTextView.text = [NSString stringWithFormat:@"%@, %@", self.infoTextView.text, self.infoTextView.text];
[self.infoTextView setNeedsUpdateConstraints];
[self.infoTextView updateConstraintsIfNeeded];
[UIView animateWithDuration:1 delay:0 options:UIViewAnimationOptionLayoutSubviews animations:^{
[self.view layoutIfNeeded];
} completion:nil];
A: Did you try [textView sizeThatFits:textView.bounds.size]?
Edit: sizeThatFits returns the size but does not actually resize the component. I'm not sure if that's what you want, or if [textView sizeToFit] is more what you were looking for. In either case, I do not know if it will perfectly fit the content like you want, but it's the first thing to try.
A: Very easy working solution using code and storyboard both.
By Code
textView.scrollEnabled = false
By Storyboard
Uncheck the Scrolling Enable
No need to do anything apart of this.
A: Update
The key thing you need to do is turn off scrolling in your UITextView.
myTextView.scrollEnabled = @NO
Original Answer
To make a dynamically sizing UITextView inside a UITableViewCell, I found the following combination works in Xcode 6 with the iOS 8 SDK:
*
*Add a UITextView to a UITableViewCell and constrain it to the sides
*Set the UITextView's scrollEnabled property to NO. With scrolling enabled, the frame of the UITextView is independent of its content size, but with scrolling disabled, there is a relationship between the two.
*If your table is using the original default row height of 44 then it will automatically calculate row heights, but if you changed the default row height to something else, you may need to manually switch on auto-calculation of row heights in viewDidLoad:
tableView.estimatedRowHeight = 150;
tableView.rowHeight = UITableViewAutomaticDimension;
For read-only dynamically sizing UITextViews, that’s it. If you’re allowing users to edit the text in your UITextView, you also need to:
*
*Implement the textViewDidChange: method of the UITextViewDelegate protocol, and tell the tableView to repaint itself every time the text is edited:
- (void)textViewDidChange:(UITextView *)textView;
{
[tableView beginUpdates];
[tableView endUpdates];
}
*And don’t forget to set the UITextView delegate somewhere, either in Storyboard or in tableView:cellForRowAtIndexPath:
A: Another method is to find the size a particular string will take up, using the NSString method:
-(CGSize)sizeWithFont:(UIFont *)font constrainedToSize:(CGSize)size
This returns the size of the rectangle that fits the given string with the given font. Pass in a size with the desired width and a maximum height, and then you can look at the height returned to fit the text. There is a version that lets you specify line break mode also.
You can then use the returned size to change the size of your view to fit.
A: Hope this helps:
- (void)textViewDidChange:(UITextView *)textView {
    CGSize textSize = textView.contentSize;
    if (!CGSizeEqualToSize(textSize, textView.frame.size)) {
        CGRect frame = textView.frame;
        frame.size = textSize;
        textView.frame = frame;
    }
}
A: The best way I found to resize the height of the UITextView according to the size of its text:
CGSize textViewSize = [YOURTEXTVIEW.text sizeWithFont:[UIFont fontWithName:@"SAMPLE_FONT" size:14.0]
constrainedToSize:CGSizeMake(YOURTEXTVIEW.frame.size.width, FLT_MAX)];
Or you can use:
CGSize textViewSize = [YOURTEXTVIEW.text sizeWithFont:[UIFont fontWithName:@"SAMPLE_FONT" size:14.0]
constrainedToSize:CGSizeMake(YOURTEXTVIEW.frame.size.width, FLT_MAX) lineBreakMode:NSLineBreakByTruncatingTail];
A: For those who want the textview to actually move up and maintain the bottom line position
CGRect frame = textView.frame;
frame.size.height = textView.contentSize.height;
if(frame.size.height > textView.frame.size.height){
CGFloat diff = frame.size.height - textView.frame.size.height;
textView.frame = CGRectMake(0, textView.frame.origin.y - diff, textView.frame.size.width, frame.size.height);
}
else if(frame.size.height < textView.frame.size.height){
CGFloat diff = textView.frame.size.height - frame.size.height;
textView.frame = CGRectMake(0, textView.frame.origin.y + diff, textView.frame.size.width, frame.size.height);
}
A: The only code that will work is the one that uses sizeToFit, as in jhibberd's answer above, but it actually won't pick up unless you call it in viewDidAppear or wire it to the UITextView text-changed event.
A: Based on Nikita Took's answer I came to the following solution in Swift which works on iOS 8 with autolayout:
descriptionTxt.scrollEnabled = false
descriptionTxt.text = yourText
var contentSize = descriptionTxt.sizeThatFits(CGSizeMake(descriptionTxt.frame.size.width, CGFloat.max))
for c in descriptionTxt.constraints() {
if c.isKindOfClass(NSLayoutConstraint) {
var constraint = c as! NSLayoutConstraint
if constraint.firstAttribute == NSLayoutAttribute.Height {
constraint.constant = contentSize.height
break
}
}
}
A: Swift answer:
The following code computes the height of your textView.
let maximumLabelSize = CGSize(width: Double(textView.frame.size.width-100.0), height: DBL_MAX)
let options = NSStringDrawingOptions.TruncatesLastVisibleLine | NSStringDrawingOptions.UsesLineFragmentOrigin
let attribute = [NSFontAttributeName: textView.font!]
let str = NSString(string: message)
let labelBounds = str.boundingRectWithSize(maximumLabelSize,
options: NSStringDrawingOptions.UsesLineFragmentOrigin,
attributes: attribute,
context: nil)
let myTextHeight = CGFloat(ceilf(Float(labelBounds.height)))
Now you can set the height of your textView to myTextHeight
A: Here is the answer if you need to resize a textView and its tableViewCell dynamically in a static UITableView:
[https://stackoverflow.com/a/43137182/5360675][1]
A: Works like a charm on iOS 11.
I'm working in a cell, like a chat cell with a bubble.
let content = UITextView(frame: CGRect(x: 4, y: 4, width: 0, height: 0))
content.text = "what ever short or long text you wanna try"
content.textAlignment = NSTextAlignment.left
content.font = UIFont.systemFont(ofSize: 13)
let spaceAvailable = 200 //My cell is fancy I have to calculate it...
let newSize = content.sizeThatFits(CGSize(width: CGFloat(spaceAvailable), height: CGFloat.greatestFiniteMagnitude))
content.isEditable = false
content.dataDetectorTypes = UIDataDetectorTypes.all
content.isScrollEnabled = false
content.backgroundColor = UIColor.clear
bkgView.addSubview(content)
A: This method seems to work for iOS 7:
// Code from apple developer forum - @Steve Krulewitz, @Mark Marszal, @Eric Silverberg
- (CGFloat)measureHeight
{
if ([self respondsToSelector:@selector(snapshotViewAfterScreenUpdates:)])
{
CGRect frame = internalTextView.bounds;
CGSize fudgeFactor;
// The padding added around the text on iOS6 and iOS7 is different.
fudgeFactor = CGSizeMake(10.0, 16.0);
frame.size.height -= fudgeFactor.height;
frame.size.width -= fudgeFactor.width;
NSMutableAttributedString* textToMeasure;
if(internalTextView.attributedText && internalTextView.attributedText.length > 0){
textToMeasure = [[NSMutableAttributedString alloc] initWithAttributedString:internalTextView.attributedText];
}
else{
textToMeasure = [[NSMutableAttributedString alloc] initWithString:internalTextView.text];
[textToMeasure addAttribute:NSFontAttributeName value:internalTextView.font range:NSMakeRange(0, textToMeasure.length)];
}
if ([textToMeasure.string hasSuffix:@"\n"])
{
[textToMeasure appendAttributedString:[[NSAttributedString alloc] initWithString:@"-" attributes:@{NSFontAttributeName: internalTextView.font}]];
}
// NSAttributedString class method: boundingRectWithSize:options:context is
// available only on ios7.0 sdk.
CGRect size = [textToMeasure boundingRectWithSize:CGSizeMake(CGRectGetWidth(frame), MAXFLOAT)
options:NSStringDrawingUsesLineFragmentOrigin
context:nil];
return CGRectGetHeight(size) + fudgeFactor.height;
}
else
{
return self.internalTextView.contentSize.height;
}
}
A: The easiest way is to just ask the UITextView by calling -sizeToFit; it should work even with scrollEnabled = YES. After that, check the height and add a height constraint on the text view with the same value.
Pay attention that UITextView contains insets, which means you can't just ask the string object how much space it wants to use, because that is only the bounding rect of the text.
Anyone experiencing a wrong size using -sizeToFit is probably hitting the fact that the text view has not yet been laid out to the interface size.
This always happens when you use size classes and a UITableView: the first time cells are created in -tableView:cellForRowAtIndexPath:, the cell comes out with the size of the any-any configuration. If you compute your value right then, the text view will have a different width than expected, and that will throw off all the sizes.
To overcome this issue I've found it useful to override the -layoutSubviews method of the cell to recalculate the textview height.
A: If you are using a scroll view and content view, and you want to increase the height depending on the TextView's content height, then this piece of code will help you.
Hope this helps; it worked perfectly on iOS 9.2.
and of course set textview.scrollEnabled = NO;
-(void)adjustHeightOfTextView
{
//This piece of code helps to adjust the height of the UITextView with respect to its content
//This block calculates the height of the text view's text content and then the height for the whole view to be displayed. The height value 400 is fixed; it will change if you change anything in the storyboard.
CGSize textViewSize = [self.textview sizeThatFits:CGSizeMake(self.view.frame.size.width, self.view.frame.size.height)];//calulcate the content width and height
float textviewContentheight =textViewSize.height;
self.scrollview.contentSize = CGSizeMake(self.textview.frame.size.width,textviewContentheight + 400);//height value is passed
self.scrollview.frame =CGRectMake(self.scrollview.frame.origin.x, self.scrollview.frame.origin.y, self.scrollview.frame.size.width, textviewContentheight+400);
CGRect Frame = self.contentview.frame;
Frame.size.height = textviewContentheight + 400;
[self.contentview setFrame:Frame];
self.textview.frame =CGRectMake(self.textview.frame.origin.x, self.textview.frame.origin.y, self.textview.frame.size.width, textviewContentheight);
[ self.textview setContentSize:CGSizeMake( self.textview.frame.size.width,textviewContentheight)];
self.contenview_heightConstraint.constant =
self.scrollview.bounds.size.height;
NSLog(@"%f",self.contenview_heightConstraint.constant);
}
A: It's quite easy with Key Value Observing (KVO), just create a subclass of UITextView and do:
private func setup() { // Called from init or somewhere
fitToContentObservations = [
textView.observe(\.contentSize) { _, _ in
self.invalidateIntrinsicContentSize()
},
// For some reason the content offset sometimes is non zero even though the frame is the same size as the content size.
textView.observe(\.contentOffset) { _, _ in
if self.contentOffset != .zero { // Need to check this to stop infinite loop
self.contentOffset = .zero
}
}
]
}
public override var intrinsicContentSize: CGSize {
return contentSize
}
If you don't want to subclass you could try doing textView.bounds = textView.contentSize in the contentSize observer.
A: The simplest solution that worked for me was to put a height constraint on the textView in the Storyboard, then connect the textView and height constraint to the code:
@IBOutlet var myAwesomeTextView: UITextView!
@IBOutlet var myAwesomeTextViewHeight: NSLayoutConstraint!
Then after setting the text and paragraph styles, add this in viewDidAppear:
self.myAwesomeTextViewHeight.constant = self.myAwesomeTextView.contentSize.height
Some notes:
*
*In contrast to other solutions, isScrollEnabled must be set to true in order for this to work.
*In my case, I was setting custom attributes to the font inside the code, therefore I had to set the height in viewDidAppear (it wouldn't work properly before that). If you aren't changing any text attributes in code, you should be able to set the height in viewDidLoad or anywhere after setting the text.
A: Looks like this guy figured it out for NSTextView and his answer applies to iOS too. Turns out intrinsicContentSize doesn’t sync up with the layout manager. If a layout happens after intrinsicContentSize you could have a discrepancy.
He has an easy fix.
NSTextView in NSOutlineView with IntrinsicContentSize setting wrong height
A: Not sure why people always overcomplicate things. Here it is:
- (void)textViewDidChange:(UITextView *)textView {
CGRect frame = textView.frame;
CGFloat height = [self measureHeightOfUITextView:textView];
CGFloat insets = textView.textContainerInset.top + textView.textContainerInset.bottom;
height += insets;
frame.size.height = height;
if(frame.size.height > textView.frame.size.height){
CGFloat diff = frame.size.height - textView.frame.size.height;
textView.frame = CGRectMake(5, textView.frame.origin.y - diff, textView.frame.size.width, frame.size.height);
}
else if(frame.size.height < textView.frame.size.height){
CGFloat diff = textView.frame.size.height - frame.size.height;
textView.frame = CGRectMake(5, textView.frame.origin.y + diff, textView.frame.size.width, frame.size.height);
}
[textView setNeedsDisplay];
}
A: Here are the steps
Solution works on all versions of iOS. Swift 3.0 and above.
*
*Add your UITextView to the View. I'm adding it with code, you can add via interface builder.
*Add constraints. I'm adding the constraints in code, you can do it in interface builder as well.
*Use UITextViewDelegate method func textViewDidChange(_ textView: UITextView) to adjust the size of the TextView
Code :
*
*//1. Add your UITextView in ViewDidLoad
let textView = UITextView()
textView.frame = CGRect(x: 0, y: 0, width: 200, height: 100)
textView.backgroundColor = .lightGray
textView.text = "Here is some default text."
//2. Add constraints
textView.translatesAutoresizingMaskIntoConstraints = false
[
textView.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor),
textView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
textView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
textView.heightAnchor.constraint(equalToConstant: 50),
textView.widthAnchor.constraint(equalToConstant: 30)
].forEach{ $0.isActive = true }
textView.font = UIFont.preferredFont(forTextStyle: .headline)
textView.delegate = self
textView.isScrollEnabled = false
textViewDidChange(textView)
//3. Implement the delegate method.
func textViewDidChange(_ textView: UITextView) {
let size = CGSize(width: view.frame.width, height: .infinity)
let estimatedSize = textView.sizeThatFits(size)
textView.constraints.forEach { (constraint) in
if constraint.firstAttribute == .height {
print("Height: ", estimatedSize.height)
constraint.constant = estimatedSize.height
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "558"
} |
Q: Can I prevent user pasting Javascript into Design Mode IFrame? I'm building a webapp that contains an IFrame in design mode so my user's can "tart" their content up and paste in content to be displayed on their page. Like the WYSIWYG editor on most blog engines or forums.
I'm trying to think of all potential security holes I need to plug, one of which is a user pasting in Javascript:
<script type="text/javascript">
// Do some nasty stuff
</script>
Now I know I can strip this out at the server end, before saving it and/or serving it back, but I'm worried about the possibility of someone being able to paste some script in and run it there and then, without even sending it back to the server for processing.
Am I worrying over nothing?
Any advice would be great, couldn't find much searching Google.
Anthony
A:
...I'm worried about the possibility of someone being able to paste some script in and run it there and then, without even sending it back to the server for processing.
Am I worrying over nothing?
Firefox has a plug-in called Greasemonkey that allows users to arbitrarily run JavaScript against any page that loads into their browser, and there is nothing you can do about it. Firebug allows you to modify web pages as well as run arbitrary JavaScript.
AFAIK, you really only need to worry once it gets to your server, and then potentially hits other users.
A: As Jason said, I would focus more on cleaning the data on the server side. You don't really have any real control on the client side unless you're using Silverlight / Flex and even then you'd need to check the server.
That said, Here are some tips from "A List Apart" you may find helpful regarding server side data cleaning.
http://www.alistapart.com/articles/secureyourcode
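As a very rough illustration of what that server-side cleaning might look like, here is a naive C# sketch (the StripScripts helper is hypothetical, and regex filters like this are easy to bypass; for production, prefer a whitelist-based HTML sanitizer library):
using System.Text.RegularExpressions;
static string StripScripts(string html)
{
    // Remove <script> elements entirely.
    html = Regex.Replace(html, @"<script\b[^>]*>[\s\S]*?</script>", "", RegexOptions.IgnoreCase);
    // Remove inline event handlers such as onclick="..." (naive).
    html = Regex.Replace(html, @"\son\w+\s*=\s*(""[^""]*""|'[^']*'|[^\s>]+)", "", RegexOptions.IgnoreCase);
    return html;
}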
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: WebDev.WebServer.EXE Crashes After VS 2008 SP1 Install Since, for various reasons, I can't use IIS for an ASP.NET website I'm developing, I run Cassini from the command line to test the site. However, after installing Visual Studio 2008 SP1, I get a System.Net.Sockets.SocketException when I try to start up the web server. Is anyone else having this problem, and if so, how did you fix it?
A: *
*Is there anything in the Application section of the event log?
*Have you tried using a different port?
*Per this thread, try:
Unbind from Visual Source safe, delete the web project from the solution, rename the folder where the website is stored and then re add to the solution as an existing web site and then bind to source safe again.
There may be some incorrect info in your .suo or .sln file. You can safely rename the former, as it is user-specific (solution user options); the latter (the solution itself) would be a bit more of a hassle to recreate.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do I get the path and name of the file that is currently executing? I have scripts calling other script files but I need to get the filepath of the file that is currently running within the process.
For example, let's say I have three files. Using execfile:
*
*script_1.py calls script_2.py.
*In turn, script_2.py calls script_3.py.
How can I get the file name and path of script_3.py, from code within script_3.py, without having to pass that information as arguments from script_2.py?
(Executing os.getcwd() returns the original starting script's filepath not the current file's.)
A: The __file__ attribute works for both the file containing the main execution code as well as imported modules.
See https://web.archive.org/web/20090918095828/http://pyref.infogami.com/__file__
A: import os
os.path.dirname(os.path.abspath(__file__))
No need for inspect or any other library.
This worked for me when I had to import a script (from a different directory then the executed script), that used a configuration file residing in the same folder as the imported script.
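For example, here is a minimal sketch of loading a config file that sits next to the module itself, regardless of the current working directory (the settings.ini name and load_config helper are hypothetical):
import os
MODULE_DIR = os.path.dirname(os.path.abspath(__file__))
def load_config():
    # Resolve relative to this file, not to os.getcwd()
    config_path = os.path.join(MODULE_DIR, "settings.ini")
    with open(config_path) as f:
        return f.read()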
A: I think this is cleaner:
import inspect
print inspect.stack()[0][1]
and gets the same information as:
print inspect.getfile(inspect.currentframe())
Where [0] is the current frame in the stack (top of stack) and [1] is for the file name, increase to go backwards in the stack i.e.
print inspect.stack()[1][1]
would be the file name of the script that called the current frame. Also, using [-1] will get you to the bottom of the stack, the original calling script.
A: import sys
print sys.path[0]
This prints the directory containing the currently executing script.
A: __file__
as others have said. You may also want to use os.path.realpath to eliminate symlinks:
import os
os.path.realpath(__file__)
A: I think it's just __file__. Sounds like you may also want to check out the inspect module.
A: import os
os.path.dirname(__file__) # relative directory path
os.path.abspath(__file__) # absolute file path
os.path.basename(__file__) # the file name only
A: You can use inspect.stack()
import inspect,os
inspect.stack()[0] => (<frame object at 0x00AC2AC0>, 'g:\\Python\\Test\\_GetCurrentProgram.py', 15, '<module>', ['print inspect.stack()[0]\n'], 0)
os.path.abspath (inspect.stack()[0][1]) => 'g:\\Python\\Test\\_GetCurrentProgram.py'
A: The suggestions marked as best are all true if your script consists of only one file.
If you want to find out the name of the executable (i.e. the root file passed to the python interpreter for the current program) from a file that may be imported as a module, you need to do this (let's assume this is in a file named foo.py):
import inspect
print inspect.stack()[-1][1]
Because the last thing ([-1]) on the stack is the first thing that went into it (stacks are LIFO/FILO data structures).
Then in file bar.py if you import foo it'll print bar.py, rather than foo.py, which would be the value of all of these:
*
*__file__
*inspect.getfile(inspect.currentframe())
*inspect.stack()[0][1]
A: import sys
print sys.argv[0]
A: print(__file__)
print(__import__("pathlib").Path(__file__).parent)
A: p1.py:
execfile("p2.py")
p2.py:
import inspect, os
print (inspect.getfile(inspect.currentframe())) # script filename (usually with path)
print (os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))) # script directory
A: Since Python 3 is fairly mainstream, I wanted to include a pathlib answer, as I believe that it is probably now a better tool for accessing file and path information.
from pathlib import Path
current_file: Path = Path(__file__).resolve()
If you are seeking the directory of the current file, it is as easy as adding .parent to the Path() statement:
current_path: Path = Path(__file__).parent.resolve()
A: This should work:
import os,sys
filename=os.path.basename(os.path.realpath(sys.argv[0]))
dirname=os.path.dirname(os.path.realpath(sys.argv[0]))
A: Here is what I use so I can throw my code anywhere without issue. __name__ is always defined, but __file__ is only defined when the code is run as a file (e.g. not in IDLE/iPython).
if '__file__' in globals():
self_name = globals()['__file__']
elif '__file__' in locals():
self_name = locals()['__file__']
else:
self_name = __name__
Alternatively, this can be written as:
self_name = globals().get('__file__', locals().get('__file__', __name__))
A: It's not entirely clear what you mean by "the filepath of the file that is currently running within the process".
sys.argv[0] usually contains the location of the script that was invoked by the Python interpreter.
Check the sys documentation for more details.
As @Tim and @Pat Notz have pointed out, the __file__ attribute provides access to
the file from which the module was
loaded, if it was loaded from a file
A: import os
print os.path.basename(__file__)
This will give us the filename only; i.e., if the abspath of the file is c:\abcd\abc.py then the second line will print abc.py.
A: I have a script that must work under a Windows environment.
This code snippet is what I've ended up with:
import os,sys
PROJECT_PATH = os.path.abspath(os.path.split(sys.argv[0])[0])
It's quite a hacky solution, but it requires no external libraries, and that's the most important thing in my case.
A: Update 2018-11-28:
Here is a summary of experiments with Python 2 and 3. With
main.py - runs foo.py
foo.py - runs lib/bar.py
lib/bar.py - prints filepath expressions
| Python | Run statement | Filepath expression |
|--------+---------------------+----------------------------------------|
| 2 | execfile | os.path.abspath(inspect.stack()[0][1]) |
| 2 | from lib import bar | __file__ |
| 3 | exec | (wasn't able to obtain it) |
| 3 | import lib.bar | __file__ |
For Python 2, it might be clearer to switch to packages so you can use from lib import bar - just add empty __init__.py files to the two folders.
For Python 3, execfile doesn't exist - the nearest alternative is exec(open(<filename>).read()), though this affects the stack frames. It's simplest to just use import foo and import lib.bar - no __init__.py files needed.
See also Difference between import and execfile
Original Answer:
Here is an experiment based on the answers in this thread - with Python 2.7.10 on Windows.
The stack-based ones are the only ones that seem to give reliable results. The last two have the shortest syntax, i.e. -
print os.path.abspath(inspect.stack()[0][1]) # C:\filepaths\lib\bar.py
print os.path.dirname(os.path.abspath(inspect.stack()[0][1])) # C:\filepaths\lib
Here's to these being added to sys as functions! Credit to @Usagi and @pablog
Based on the following three files, and running main.py from its folder with python main.py (also tried execfiles with absolute paths and calling from a separate folder).
C:\filepaths\main.py: execfile('foo.py')
C:\filepaths\foo.py: execfile('lib/bar.py')
C:\filepaths\lib\bar.py:
import sys
import os
import inspect
print "Python " + sys.version
print
print __file__ # main.py
print sys.argv[0] # main.py
print inspect.stack()[0][1] # lib/bar.py
print sys.path[0] # C:\filepaths
print
print os.path.realpath(__file__) # C:\filepaths\main.py
print os.path.abspath(__file__) # C:\filepaths\main.py
print os.path.basename(__file__) # main.py
print os.path.basename(os.path.realpath(sys.argv[0])) # main.py
print
print sys.path[0] # C:\filepaths
print os.path.abspath(os.path.split(sys.argv[0])[0]) # C:\filepaths
print os.path.dirname(os.path.abspath(__file__)) # C:\filepaths
print os.path.dirname(os.path.realpath(sys.argv[0])) # C:\filepaths
print os.path.dirname(__file__) # (empty string)
print
print inspect.getfile(inspect.currentframe()) # lib/bar.py
print os.path.abspath(inspect.getfile(inspect.currentframe())) # C:\filepaths\lib\bar.py
print os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) # C:\filepaths\lib
print
print os.path.abspath(inspect.stack()[0][1]) # C:\filepaths\lib\bar.py
print os.path.dirname(os.path.abspath(inspect.stack()[0][1])) # C:\filepaths\lib
print
A: Try this,
import os
os.path.dirname(os.path.realpath(__file__))
A: To get the directory of the executing script:
print os.path.dirname( inspect.getfile(inspect.currentframe()))
A: I used the approach with __file__
os.path.abspath(__file__)
but there is a little catch: it returns the .py file when the code is run the first time, while subsequent runs give the name of the *.pyc file. So I stayed with:
inspect.getfile(inspect.currentframe())
or
sys._getframe().f_code.co_filename
A: I wrote a function which takes into account the Eclipse debugger and unittest.
It returns the folder of the first script you launch. You can optionally specify the __file__ var, but the main thing is that you don't have to share this variable across your whole calling hierarchy.
Maybe you can handle other particular stack cases I didn't see, but for me it's ok.
import inspect, os
def getRootDirectory(_file_=None):
"""
Get the directory of the root execution file
Can help: http://stackoverflow.com/questions/50499/how-do-i-get-the-path-and-name-of-the-file-that-is-currently-executing
For Eclipse users with unittest or the debugger, the function searches for the correct folder in the stack
You can pass __file__ (with 4 underscores) if you want the caller directory
"""
# If we don't have the __file__ :
if _file_ is None:
# We get the last :
rootFile = inspect.stack()[-1][1]
folder = os.path.abspath(rootFile)
# If we use unittest :
if ("/pysrc" in folder) & ("org.python.pydev" in folder):
previous = None
# We search from left to right the case.py :
for el in inspect.stack():
currentFile = os.path.abspath(el[1])
if ("unittest/case.py" in currentFile) | ("org.python.pydev" in currentFile):
break
previous = currentFile
folder = previous
# We return the folder :
return os.path.dirname(folder)
else:
# We return the folder according to specified __file__ :
return os.path.dirname(os.path.realpath(_file_))
A: The simplest way is:
in script_1.py:
import subprocess
subprocess.call(['python3',<path_to_script_2.py>])
in script_2.py:
sys.argv[0]
P.S.: I've tried execfile, but since it reads script_2.py as a string, sys.argv[0] returned <string>.
A: The following returns the path where your current main script is located. I tested this with Linux, Win10, IPython and Jupyter Lab. I needed a solution that works for local Jupyter notebooks as well.
import builtins
import os
import sys
def current_dir():
if "get_ipython" in globals() or "get_ipython" in dir(builtins):
# os.getcwd() is PROBABLY the dir that hosts the active notebook script.
# See also https://github.com/ipython/ipython/issues/10123
return os.getcwd()
else:
return os.path.abspath(os.path.dirname(sys.argv[0]))
A: Finding the home directory of the path in which your Python script resides
As an addendum to the other answers already here (and not answering the OP's question, since other answers already do that), if the path to your script is /home/gabriel/GS/dev/eRCaGuy_dotfiles/useful_scripts/cpu_logger.py, and you wish to obtain the home directory part of that path, which is /home/gabriel, you can do this:
import os
# Obtain the home dir of the user in whose home directory this script resides
script_path_list = os.path.normpath(__file__).split(os.sep)
home_dir = os.path.join("/", script_path_list[1], script_path_list[2])
To help make sense of this, here are the paths for __file__, script_path_list, and home_dir. Notice that script_path_list is a list of the path components, with the first element being an empty string since it originally contained the / root dir path separator for this Linux path:
__file__ = /home/gabriel/GS/dev/eRCaGuy_dotfiles/useful_scripts/cpu_logger.py
script_path_list = ['', 'home', 'gabriel', 'GS', 'dev', 'eRCaGuy_dotfiles', 'useful_scripts', 'cpu_logger.py']
home_dir = /home/gabriel
Source:
*
*Python: obtain the path to the home directory of the user in whose directory the script being run is located [duplicate]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "534"
} |
Q: Designing a new UI for a legacy WinForms MDI application I'm working on moving a client/server application created with C# and WinForms into the SOA/WPF/Silverlight world. One of the big hurdles is the design of the UI. My current UI is MDI driven and users rely heavily on child windows, having many open at the same time and toggling back and forth between them.
What might be the best way to recreate the UI functionality in an MDI-less environment? (I've no desire to create MDI functionality on my own in WPF). Tabs? A list panel that toggles different controls?
A: Look at 37signals and how nice their web UIs are (mostly HTML + AJAX). It's a good example of web applications that work. One of the things to remember are to make sure you don't break the web paradigm. If users want to see two things side by side, they should be able to duplicate the window and let the web browser do the windowing.
For WPF, there are a lot of new visualization paradigms. You can find some examples on the sites for various control toolkit providers: Xceed, Telerik, Infragistics. They have demo programs for the different ways they help you organize screens in an application.
When developing complex composite applications in WPF, you could also start at the Patterns and Practices Prism site. It's an InProgress set of practices for planning and developing complex composite (smart client style) applications in WPF.
A: I think the answer is really going to be up to your users -- I'd set up some prototypes with multiple paradigms and let them provide some input. The last thing you want to do is introduce a new UI paradigm without having any end-user input.
Tabs are really popular now, but don't allow side-by-side viewing, so if that is a requirement you may want to go with more of an outlook-style setup, with multiple panels that can be activated, hidden and resized.
One thing that you might want to do is to code your app as a composite UI, where each view is built independently from its container (be it a child window, tab or accordion, etc.), and is just "dropped in" in the designer. That will protect you from when the users change their minds about the navigation paradigm in the future.
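A rough sketch of that idea in C# (the interface and names here are hypothetical): the shell only knows a small contract, so views can be rehosted in tabs, panels, or windows without changing the views themselves.
using System.Windows;
using System.Windows.Controls;
public interface IContentView
{
    string Title { get; }
    UIElement View { get; }
}
public static class Shell
{
    // Host the same view in a tab today...
    public static void ShowInTab(TabControl tabs, IContentView content)
    {
        tabs.Items.Add(new TabItem { Header = content.Title, Content = content.View });
    }
    // ...or in a floating top-level window tomorrow.
    public static void ShowInWindow(IContentView content)
    {
        new Window { Title = content.Title, Content = content.View }.Show();
    }
}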
A: Multiple top-level windows are easy to implement and have all the advantages of MDI; that's what MS chose for the newer versions of Office.
A: Did you try GOA WinForms?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What is a regex "independent non-capturing group"? From the Java 6 Pattern documentation:
Special constructs (non-capturing)
(?:X) X, as a non-capturing group
…
(?>X) X, as an independent, non-capturing group
Between (?:X) and (?>X) what is the difference? What does the independent mean in this context?
A: If you have foo(?>(co)*)co, that will never match: the atomic group greedily consumes every co and never gives one back, so the trailing co can never match. I'm sure there are practical examples of when this would be useful; try O'Reilly's book.
A: It means that the grouping is atomic, and it throws away backtracking information for a matched group. So, this expression is possessive; it won't back off even if doing so is the only way for the regex as a whole to succeed. It's "independent" in the sense that it doesn't cooperate, via backtracking, with other elements of the regex to ensure a match.
A: I think this tutorial explains exactly what an "independent, non-capturing group" or "atomic group" is:
The regular expression a(bc|b)c (capturing group) matches abcc and abc. The regex a(?>bc|b)c (atomic group) matches abcc but not abc.
When applied to abc, both regexes will match a to a, bc to bc, and then c will fail to match at the end of the string. Here their paths diverge. The regex with the capturing group has remembered a backtracking position for the alternation. The group will give up its match, b then matches b and c matches c. Match found!
The regex with the atomic group, however, exited from an atomic group after bc was matched. At that point, all backtracking positions for tokens inside the group are discarded. In this example, the alternation's option to try b at the second position in the string is discarded. As a result, when c fails, the regex engine has no alternatives left to try.
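A small demonstration of the above in Java (the class name is illustrative):
import java.util.regex.Pattern;
public class AtomicGroupDemo {
    public static void main(String[] args) {
        // The capturing group can give back "bc" and retry "b", so "abc" matches.
        System.out.println(Pattern.matches("a(bc|b)c", "abc"));    // true
        // The atomic group discards that backtracking position, so "abc" fails.
        System.out.println(Pattern.matches("a(?>bc|b)c", "abc"));  // false
        // Both variants match "abcc" without needing to backtrack.
        System.out.println(Pattern.matches("a(?>bc|b)c", "abcc")); // true
    }
}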
A: (?>X?) equals (?:X)?+, (?>X*) equals (?:X)*+, (?>X+) equals (?:X)++.
Setting aside the requirement that X be wrapped in a non-capturing group, the preceding equivalences read:
(?>X?) equals X?+, (?>X*) equals X*+, (?>X+) equals X++.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "69"
} |
Q: C: Implicit casting and integer overflowing in the evaluation of expressions Let's take the code
int a, b, c;
...
if ((a + b) > c)
If we add the values of a and b and the sum exceeds the maximum value of an int, will the integrity of the comparison be compromised? I was thinking that there might be an implicit up cast or overflow bit check and that will be factored into the evaluation of this expression.
A: C will do no such thing. It will silently overflow and lead to a possibly incorrect comparison. You can up-cast yourself, but it will not be done automatically.
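A minimal sketch of the manual up-cast, assuming long long is wider than int (C99 guarantees long long is at least 64 bits):
#include <stdio.h>
int main(void)
{
    int a = 2000000000, b = 2000000000, c = 2100000000;
    /* Cast one operand before the addition so the sum is computed
       in the wider type instead of overflowing int. */
    if ((long long)a + b > c)
        printf("a + b > c\n");   /* printed: 4000000000 > 2100000000 */
    else
        printf("a + b <= c\n");
    return 0;
}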
A: A test confirms that GCC 4.2.3 will simply compare with the overflowed result:
#include <stdio.h>
int main()
{
int a, b, c;
a = 2000000000;
b = 2000000000;
c = 2100000000;
printf("%d + %d = %d\n", a, b, a+b);
if ((a + b) > c)
{
printf("%d + %d > %d\n", a, b, c);
}
else
{
printf("%d + %d < %d\n", a, b, c);
}
return 0;
}
Displays the following:
2000000000 + 2000000000 = -294967296
2000000000 + 2000000000 < 2100000000
A: I believe this might be platform specific. Check the C documentation on how overflows are handled...
Ah, yes, and the upcast will not happen automatically...
A: See section 2.7, Type Conversions in the K&R book
A: If upcasting doesn't gain you any bits (there's no guarantee that sizeof(long)>sizeof(int) in C), you can use conditions like the ones below to compare and check for overflow—upcasting is almost certainly faster if you can use it, though.
#if !defined(__GNUC__) || __GNUC__<2 || (__GNUC__==2 && __GNUC_MINOR__<96)
# define unlikely(x) (x)
#else
# define unlikely(x) (__extension__ (__builtin_expect(!!(x), 0)))
#endif
/* ----------
* Signed comparison (signed char, short, int, long, long long)
* Checks for overflow off the top end of the range, in which case a+b must
* be >c. If it overflows off the bottom, a+b < everything in the range. */
if(a+b>c || unlikely(a>=0 && b>=0 && unlikely(a+b<0)))
...
/* ----------
* Unsigned comparison (unsigned char, unsigned short, unsigned, etc.)
* Checks to see if the sum wrapped around, since the sum of any two natural
* numbers must be >= both numbers. */
if(a+b>c || unlikely(a+b<a))
...
/* ----------
* To generate code for the above only when necessary: */
if(sizeof(long)>sizeof(int) ? ((long)a+b>c)
             : (a+b>c || unlikely(a>=0 && b>=0 && unlikely(a+b<0))))
...
Great candidates for macros or inline functions. You can pull the "unlikely"s if you want, but they can help shrink and speed up the code GCC generates.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Font-size independent UI: everything broke when I switched to 120 DPI? So I was reading those Windows Vista UI guidelines someone linked to in another question, and they mentioned that you should be able to survive a switch to 120 DPI. Well, I fire up my handy VM with my app installed, and what do we get... AAAAGH!!! MASSIVE UI FAIL!
Everything's all jumbled: some containers aren't big enough for their text; some controls that were positioned "next to each other" are now all squished together/spread apart; some buttons aren't tall enough; my ListView columns aren't wide enough... eeek.
It sounds like a completely different approach is in order. My previous one was basically using the VS2008 Windows Forms designer to create, I guess, a pixel-based layout. I can see that if I were to stick with Windows Forms, FlowLayoutPanels would be helpful, although I've found them rather inflexible in the past. They also don't solve the problem where the containers (e.g. the form itself) aren't big enough; presumably there's a way to do that? Maybe that AutoSize property?
This might also be a sign that it's time to jump ship to WPF; I'm under the impression that it's specifically designed for this kind of thing.
The basic issue seems to come down to these:
*
*If I were to stick with Windows Forms, what are all the tricks to achieving a font-size-independent layout that can survive the user setting his fonts large, or setting the display to 120 DPI?
*Does WPF have significant advantages here, and if so, can you try to convince me that it's worth the switch?
*Are there any general "best-practices" for font-size-independent layouts, either in the .NET stack or in general?
A:
If I were to stick with Windows Forms, what are all the tricks to achieving a font-size-independent layout that can survive the user setting his fonts large, or setting the display to 120 DPI?
For one, AutoScaleMode may be your friend.
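A minimal sketch of opting in (AutoScaleMode and AutoScaleDimensions are the real WinForms members; the dimension values here are just the usual designer defaults for font-based scaling):
using System.Drawing;
using System.Windows.Forms;
public class ScalableForm : Form
{
    public ScalableForm()
    {
        // Scale child controls when the runtime font/DPI differs from design time.
        AutoScaleMode = AutoScaleMode.Font;
        AutoScaleDimensions = new SizeF(6F, 13F);
    }
}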
A: In general, the problem is one of using two different "constants" for form layout, and then changing one of those constants without changing the other.
You are using pixels for your form entities, and points (basically inches) to specify font size. Pixels and points are related by DPI, so you change the DPI and suddenly your pixel fixed values don't line up with your point fixed values.
There are packages and classes for this, but at the end of the day you must choose one unit or the other, or scale one of the units according to the changing constant.
Personally, I'd change the entities on the form into inches. I'm not a C# person, so I don't know if this is supported natively, or if you have to perform some dynamic form sizing on application startup.
If you have to do this in your software, then go ahead and size everything normally (say, to your usual 96 DPI).
When your application starts, verify the system is at 96 DPI before you show your forms. If it is, great. If not, then set a variable with the correction factor, and scale and translate (modify both the location and size) of each entity before you show the form.
The ultimate, though, would be to specify everything in inches or points (a point is 1/72 of an inch) and let the OS deal with it. You might have to deal with corner cases (an outdoor screen with a correctly set DPI would show your application in a few pixels...)
A: Learn how the Anchor and Dock properties work on your controls, leave anything that can AutoSize itself alone, and use a TableLayoutPanel when you can.
If you do these three things, you'll get a lot of the WPF design experience in Windows Forms. A well-designed TableLayoutPanel will do its best to size the controls so that they fit the form properly. Combined with AutoSize controls, docking, and the AutoScaleMode mentioned by Soeren Kuklau you should be able to make something that scales well. If not, your form might just have too many controls on it; consider splitting it into tab pages, floating toolboxes, or some other space.
In WPF it's a lot easier because the concept of auto-sizing controls is built-in; in most cases if you are placing a WPF element by using a coordinate pair you are doing it wrong. Still, you can't change the fact that at lower resolutions it doesn't take much 120 dpi text to fill up the screen. Sometimes the problem is not your layout, but an attempt to put too much into a small space.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How do I format a number in Java? How do I format a number in Java?
What are the "Best Practices"?
Will I need to round a number before I format it?
32.302342342342343 => 32.30
.7323 => 0.73
etc.
A: Try this:
String.format("%.2f", 32.302342342342343);
Simple and efficient.
A: You and String.format() will be new best friends!
https://docs.oracle.com/javase/1.5.0/docs/api/java/util/Formatter.html#syntax
String.format("%.2f", (double)value);
A: Use DecimalFormat.
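A minimal sketch of the pattern-based approach (the class name is illustrative; "0.00" forces two decimal places and rounds half-even by default):
import java.text.DecimalFormat;
public class FormatSketch {
    public static void main(String[] args) {
        DecimalFormat df = new DecimalFormat("0.00");
        System.out.println(df.format(32.302342342342343)); // 32.30
        System.out.println(df.format(0.7323));             // 0.73
    }
}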
A: There are two approaches in the standard library. One is to use java.text.DecimalFormat. The other more cryptic methods (String.format, PrintStream.printf, etc) based around java.util.Formatter should keep C programmers happy(ish).
A: As Robert has pointed out in his answer: DecimalFormat is neither synchronized nor does the API guarantee thread safety (it might depend on the JVM version/vendor you are using).
Use Spring's Numberformatter instead, which is thread safe.
A: public static void formatDouble(double myDouble){
NumberFormat numberFormatter = new DecimalFormat("##.000");
String result = numberFormatter.format(myDouble);
System.out.println(result);
}
For instance, if the double value passed into the formatDouble() method is 345.9372, the following will
be the result:
345.937
Similarly, if the value .7697 is passed to the method, the following will be the result:
.770
A: Be aware that classes that descend from NumberFormat (and most other Format descendants) are not synchronized. It is a common (but dangerous) practice to create format objects and store them in static variables in a util class. In practice, it will pretty much always work until it starts experiencing significant load.
A: From this thread, there are different ways to do this:
double r = 5.1234;
System.out.println(r); // r is 5.1234
int decimalPlaces = 2;
BigDecimal bd = new BigDecimal(r);
// setScale is immutable
bd = bd.setScale(decimalPlaces, BigDecimal.ROUND_HALF_UP);
r = bd.doubleValue();
System.out.println(r); // r is 5.12
f = (float) (Math.round(n*100.0f)/100.0f);
DecimalFormat df2 = new DecimalFormat( "#,###,###,##0.00" );
double dd = 100.2397;
double dd2dec = new Double(df2.format(dd)).doubleValue();
// The value of dd2dec will be 100.24
The DecimalFormat() seems to be the most dynamic way to do it, and it is also very easy to understand when reading others' code.
A: Round numbers, yes. This is the main example source.
/*
 * Adapted from Sun Microsystems' DecimalFormatDemo sample
 * (BSD-style license header omitted).
 */
import java.util.*;
import java.text.*;
public class DecimalFormatDemo {
static public void customFormat(String pattern, double value ) {
DecimalFormat myFormatter = new DecimalFormat(pattern);
String output = myFormatter.format(value);
System.out.println(value + " " + pattern + " " + output);
}
static public void localizedFormat(String pattern, double value, Locale loc ) {
NumberFormat nf = NumberFormat.getNumberInstance(loc);
DecimalFormat df = (DecimalFormat)nf;
df.applyPattern(pattern);
String output = df.format(value);
System.out.println(pattern + " " + output + " " + loc.toString());
}
static public void main(String[] args) {
customFormat("###,###.###", 123456.789);
customFormat("###.##", 123456.789);
customFormat("000000.000", 123.78);
customFormat("$###,###.###", 12345.67);
customFormat("\u00a5###,###.###", 12345.67);
Locale currentLocale = new Locale("en", "US");
DecimalFormatSymbols unusualSymbols = new DecimalFormatSymbols(currentLocale);
unusualSymbols.setDecimalSeparator('|');
unusualSymbols.setGroupingSeparator('^');
String strange = "#,##0.###";
DecimalFormat weirdFormatter = new DecimalFormat(strange, unusualSymbols);
weirdFormatter.setGroupingSize(4);
String bizarre = weirdFormatter.format(12345.678);
System.out.println(bizarre);
Locale[] locales = {
new Locale("en", "US"),
new Locale("de", "DE"),
new Locale("fr", "FR")
};
for (int i = 0; i < locales.length; i++) {
localizedFormat("###,###.###", 123456.789, locales[i]);
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "120"
} |
Q: Multiline ddl Custom Control One of the guys I work with needs a custom control that would work like a multiline DDL, since such a thing does not exist as far as we have been able to discover.
Does anyone have any ideas, or has anyone created such a thing before?
We have a couple of ideas, but they involve too much database usage.
We prefer that it be FREE!!!
A: Have a look at EasyListBox. I used it on a project, and while a bit quirky at first, it got the job done.
A: I'm not sure exactly what you mean by multi-line, but if it is selecting multiple elements in a drop down list, see this demo.
If its showing elements that wrap mulitple lines in a drop down, see this demo. You can put a break in the HTML to achieve what you might be looking for. I've used this control in this manner before, so I can confirm it works.
Good luck.
A: We use a custom modified version of suckerfish at work. DB performance isn't an issue for us because we cache the control.
The control renders out nested UL/LIs either for all nodes in the web.sitemap or for a certain set of pages pulled from the DB. We then use jQuery to do all the cool javascript stuff. Because it uses such basic HTML, it's pretty easy to have multi-line or wrapped long items once you style it with CSS.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do you initialize a 2 dimensional array when you do not know the size I have a two dimensional array that I need to load data into. I know the width of the data (22 values) but I do not know the height (estimated around 4000 records, but variable).
I have it declared as follows:
float[,] _calibrationSet;
....
int calibrationRow = 0;
while (recordsToRead)
{
for (int i = 0; i < SensorCount; i++)
{
_calibrationSet[calibrationRow, i] = calibrationArrayView.ReadFloat();
}
calibrationRow++;
}
This causes a NullReferenceException, so when I try to initialize it like this:
_calibrationSet = new float[,];
I get an "Array creation must have array size or array initializer."
Thank you,
Keith
A: You can't use an array.
Or rather, you would need to pick a size, and if you ended up needing more then you would have to allocate a new, larger, array, copy the data from the old one into the new one, and continue on as before (until you exceed the size of the new one...)
Generally, you would go with one of the collection classes - ArrayList, List<>, LinkedList<>, etc. - which one depends a lot on what you're looking for; List will give you the closest thing to what i described initially, while LinkedList<> will avoid the problem of frequent re-allocations (at the cost of slower access and greater memory usage).
Example:
List<float[]> _calibrationSet = new List<float[]>();
// ...
while (recordsToRead)
{
float[] record = new float[SensorCount];
for (int i = 0; i < SensorCount; i++)
{
record[i] = calibrationArrayView.ReadFloat();
}
_calibrationSet.Add(record);
}
// access later: _calibrationSet[record][sensor]
Oh, and it's worth noting (as Grauenwolf did), that what i'm doing here doesn't give you the same memory structure as a single, multi-dimensional array would - under the hood, it's an array of references to other arrays that actually hold the data. This speeds up building the array a good deal by making reallocation cheaper, but can have an impact on access speed (and, of course, memory usage). Whether this is an issue for you depends a lot on what you'll be doing with the data after it's loaded... and whether there are two hundred records or two million records.
A: You can't create an array in .NET (as opposed to declaring a reference to it, which is what you did in your example) without specifying its dimensions, either explicitly, or implicitly by specifying a set of literal values when you initialize it. (e.g. int[,] array4 = { { 1, 2 }, { 3, 4 }, { 5, 6 }, { 7, 8 } };)
You need to use a variable-size data structure first (a generic list of 22-element 1-d arrays would be the simplest) and then allocate your array and copy your data into it after your read is finished and you know how many rows you need.
A: I would just use a list, then convert that list into an array.
You will notice here that I used a jagged array (float[][]) instead of a square array (float [,]). Besides being the "standard" way of doing things, it should be much faster. When converting the data from a list to an array you only have to copy [calibrationRow] pointers. Using a square array, you would have to copy [calibrationRow] x [SensorCount] floats.
var tempCalibrationSet = new List<float[]>();
const int SensorCount = 22;
while (recordsToRead())
{
    // Add() grows the list; indexing into an empty List would throw.
    var record = new float[SensorCount];
    for (int i = 0; i < SensorCount; i++)
    {
        record[i] = calibrationArrayView.ReadFloat();
    }
    tempCalibrationSet.Add(record);
}
float[][] _calibrationSet = tempCalibrationSet.ToArray();
A: I generally use the nicer collections for this sort of work (List, ArrayList etc.) and then (if really necessary) cast to T[,] when I'm done.
A: You would either need to preallocate the array to a maximum size (float[999,22]), or use a different data structure.
I guess you could copy/resize on the fly (but I don't think you'd want to).
I think the List sounds reasonable.
A: You could also use a two-dimensional ArrayList (from System.Collections) -- you create an ArrayList, then put another ArrayList inside it. This will give you the dynamic resizing you need, but at the expense of a bit of overhead.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Programmatically determine which Java thread holds a lock Is it possible at runtime to programmatically check the name of the Thread that is holding the lock of a given object?
A: You can, from 1.6, use JMX to do all sorts of interesting things including finding held locks. You can't get the actual object, but you do get the class and identity hash value (which is not unique).
(This post originally had a link to an example in my now defunct weblog.)
A: Run jconsole. It is included in the Java SDK and is run from the command line. I'm not sure what OS you are using, but on windows you can just pass it the PID of the java process. It should help you find the thread that is causing the problem. Or, you can use a commercial profiler like YourKit or any number of other profilers.
A: You can only tell whether the current thread holds a normal lock (Thread.holdsLock(Object)). You can't get a reference to the thread that has the lock without native code.
However, if you're doing anything complicated with threading, you probably want to familiarize yourself with the java.util.concurrent packages. The ReentrantLock does allow you to get its owner (but its a protected method, so you'd have to extend this). Depending on your application, it may well be that by using the concurrency packages, you'll find that you don't need to get the lock's owner after all.
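A sketch of that extension; the subclass name is ours, but getOwner() really is a protected method of ReentrantLock:
import java.util.concurrent.locks.ReentrantLock;
public class OwnableLock extends ReentrantLock {
    /** Exposes the protected getOwner(); returns null if the lock is free. */
    public Thread getOwnerThread() {
        return getOwner();
    }
}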
There are non-programmatic methods to find the lock owners, such as signaling the JVM to issue a thread dump to stderr, that are useful to determine the cause of deadlocks.
A: In 1.5, you can find all the threads and get each one's state, eg like this:
Map<Thread,StackTraceElement[]> map = Thread.getAllStackTraces();
for (Map.Entry<Thread, StackTraceElement[]> threadEntry : map.entrySet()) {
log.info("Thread:"+threadEntry.getKey().getName()+":"+threadEntry.getKey().getState());
for (StackTraceElement element : threadEntry.getValue()) {
log.info("--> "+element);
}
}
Thread.getState gives you info about whether the thread is BLOCKED, WAITING etc, see jdk api ThreadState
A: If it is a ReentrantLock, you can check whether it is held by the current thread:
final ReentrantLock lock = new ReentrantLock();
lock.isHeldByCurrentThread();
A: You can get at the locks held by threads through the JMX management API. This only works with Java 1.6.
ThreadMXBean bean = ManagementFactory.getThreadMXBean();
ThreadInfo[] ti = bean.getThreadInfo(bean.getAllThreadIds(), true, true);
On each of these ThreadInfo objects there are LockInfo objects which you can use the identityHashCode on them to compare to the lock in question.
A: 1 - You can check the lock on a particular object by calling wait() or notify() on that object. If the current thread does not hold the object's monitor, it will throw an IllegalMonitorStateException.
2 - By calling the holdsLock(Object o) method. This will return a boolean value.
A: You can use a variable to hold the current thread when you take the lock, then print it if someone else is trying to use it.
Thread holderOfLock = null;
Object theLock = new Object();
public void doStuff()
{
if(holderOfLock != null)
{
//get and print name of holderOfLock-thread or get stacktrace etc.
}
synchronized (theLock)
{
holderOfLock = Thread.currentThread();
//do stuff...
holderOfLock = null;
}
}
A: Ugly but works.
String findLockOwner(ReentrantLock lock) {
String patternStr = "\\[Locked by thread (\\S+)\\]";
Pattern pattern = Pattern.compile(patternStr);
Matcher matcher = pattern.matcher(lock.toString());
boolean matchFound = matcher.find();
if (matchFound && matcher.groupCount() >= 1) {
return matcher.group(1);
}
return null; // no owner recorded in the lock's toString()
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: Context Menu Resets ComboBox's SelectedIndex I have a ContextMenu that is displayed after a user right clicks on a ComboBox. When the user selects an item in the context menu, a form is brought up using the ShowDialog() method.
If frmOptions.ShowDialog() = Windows.Forms.DialogResult.Cancel Then
LoadComboBoxes()
End If
When that form is closed, I refresh all the data in the ComboBoxes on the parent form. However, when this happens the ComboBox that opened the ContextMenu is reset to have a selected index of -1 but the other selected indexes of the other ComboBoxes remain the same.
How do I prevent the ComboBox that opened the context menu from being reset?
A: One way to handle this would be to use the context menu's Popup event to grab the selected index of the combobox launching the menu. When the dialog form closes reset the selected index.
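Something along these lines (an untested sketch; the member and handler names are illustrative):
private ComboBox savedCombo;
private int savedIndex;
private void contextMenu_Popup(object sender, EventArgs e)
{
    // SourceControl identifies the ComboBox that opened the menu.
    savedCombo = (ComboBox)((ContextMenu)sender).SourceControl;
    savedIndex = savedCombo.SelectedIndex;
}
// After frmOptions.ShowDialog() returns and LoadComboBoxes() has run:
private void RestoreSelection()
{
    if (savedCombo != null)
        savedCombo.SelectedIndex = savedIndex;
}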
A: I figured it out.
I created a method that passed the ContextMenu.SourceControl() property by reference so I could manipulate the control that called the ContextMenu. At the beginning of the method, I got the SelectedValue of the ComboBox and then reloaded the data in the ComboBoxes. I then set the SelectedValue to the value I had got at the beginning of the method.
Thank you DaveK for pointing me in the right direction.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Django Sessions I'm looking at sessions in Django, and by default they are stored in the database. What are the benefits of filesystem and cache sessions and when should I use them?
A: One thing that has to be considered when choosing session backend is "how often session data is modified"? Even sites with moderate traffic will suffer if session data is modified on each request, making many database trips to store and retrieve data.
In my previous work we used memcache as session backend exclusively and it worked really well. Our administrative team put really great effort in making two special memcached instances stable as a rock, but after bit of twiddling with initial setup, we did not have any interrupts of session backends operations.
A: The filesystem backend is only worth looking at if you're not going to use a database for any other part of your system. If you are using a database then the filesystem backend has nothing to recommend it.
The memcache backend is much quicker than the database backend, but you run the risk of a session being purged and some of your session data being lost.
If you're a really, really high traffic website and code carefully so you can cope with losing a session then use memcache. If you're not using a database use the file system cache, but the default database backend is the best, safest and simplest option in almost all cases.
A: I'm no Django expert, so this answer is about session stores generally. Downvote if I'm wrong.
Performance and Scalability
Choice of session store has an effect on performance and scalability. This should only be a big problem if you have a very popular application.
Both database and filesystem session stores are (usually) backed by disks so you can have a lot of sessions cheaply (because disks are cheap), but requests will often have to wait for the data to be read (because disks are slow). Memcached sessions use RAM, so will cost more to support the same number of concurrent sessions (because RAM is expensive), but may be faster (because RAM is fast).
Filesystem sessions are tied to the box where your application is running, so you can't load balance between multiple application servers if your site gets huge. Database and memcached sessions let you have multiple application servers talking to a shared session store.
Simplicity
Choice of session store will also impact how easy it is to deploy your site. Changing away from the default will cost some complexity. Memcached and RDBMSs both have their own complexities, but your application is probably going to be using an RDBMS anyway.
Unless you have a very popular application, simplicity should be the larger concern.
Bonus
Another approach is to store session data in cookies (all of it, not just an ID). This has the advantage that the session store automatically scales with the number of users, but it has disadvantages too. You (or your framework) need to be careful to stop users forging session data. You also need to keep each session small because the whole thing will be sent with every request.
A: As of Django 1.1 you can use the cached_db session back end.
This stores the session in the cache (only use with memcached), and writes it back to the DB. If it has fallen out of the cache, it will be read from the DB.
Although this is slower than just using memcached for storing the session, it adds persistence to the session.
For more information, see: Django Docs: Using Cached Sessions
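In settings.py that would look something like this (the SESSION_ENGINE path is the documented one; the memcached address is illustrative, and CACHE_BACKEND is the pre-1.3 style cache setting):
# settings.py
SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'
CACHE_BACKEND = 'memcached://127.0.0.1:11211/'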
A: If the database has a DBA that isn't you, you may not be allowed to use a database-backed session (it being a front-end matter only). Until Django supports easily merging data from several databases, so that you can have frontend-specific stuff like sessions and user messages (the messages in django.contrib.auth are also stored in the DB) in a separate database, you need to keep this in mind.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38"
} |
Q: Windows XP support for Remote NDIS I'm looking at developing a device which will need to support Ethernet over USB (hosted in Linux, XP, and Vista). As I understand it, Vista and Linux support the industry-standard USB CDC. However, in classic Windows style, XP only supports its own Remote NDIS. So, now I'm thinking of just bowing down and doing it over RNDIS, as opposed to rolling my own CDC driver for XP.
I've been reading some older documentation that says even XP is pretty buggy with NDIS (surprise!). Does anyone have experience with XP's RNDIS drivers? Are they safe for product development? Any insight would be much appreciated.
A: We use RNDIS at work. and I've found that it blue screens my machine every now and again (about every month or two). However others (at my work) have not had this happen, so it could just be the particular device that I use.
I think it is stable enough for development, so give it a go.
A: The problem here is that Linux does not support RNDIS in the host mode, and you can't develop custom driver due to MS RNDIS license restrictions. MAC does not support RNDIS as well due to same reason (licensing).
So if you need multiplatform solution you need a standard approach which is CDC/ECM.
There is number of available CDC/ECM XP/VIsta solutions in the market,you can google for them i don't want to advertise our solution here :)
A: After doing my own research and testing, a single NDIS device works reasonably well. However, if you are at all needing to support multiple NDIS devices, you are out of luck. My system became extremely unstable and was essentially unusable. This was very reproducible.
I would not recommend NDIS in any type of multiple-device scenario.
A: If you are looking for a commercial solution, Jungo provides decent ECM solutions that work for Windows/Linux/Mac. The only problem is that you have to pay them a non-trivial royalty fee if you are going for a mass-volume product.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Simultaneous calls from CDR I need to come up with an analysis of simultaneous events, having only the start time and duration of each event.
Details
I've a standard CDR call detail record, that contains among others:
*
*calldate (datetime of each call start)
*duration (int, seconds of call duration)
*channel (a string)
What I need to come up with is some sort of analysis of simultaneous calls during each second, for a given datetime period. For example, a graph of simultaneous calls we had yesterday.
(The problem is the same if we have visitors logs with duration on a website and wish to obtain simultaneous clients for a group of web-pages)
What would your algorithm be?
I can iterate over records in the given period and fill an array, where each bucket of the array corresponds to 1 second in the overall period. This works and seems to be fast, but if the time period is big (say, 1 year), I would need lots of memory (3600 x 24 x 365 x 4 bytes ~ approx. 120 MB).
This is for a web-based, interactive app, so my memory footprint should be small enough.
Edit
By simultaneous, I mean all calls active in a given second. Second would be my minimum unit. I cannot use something bigger (an hour, for example) because all calls during an hour do not need to be held at the same time.
A: I would implement this on the database. Using a GROUP BY clause with DATEPART, you could get a list of simultaneous calls for whatever time period you wanted, by second, minute, hour, whatever.
On the web side, you would only have to display the histogram that is returned by the query.
A: @eric-z-beard: I would really like to be able to implement this on the database. I like your proposal, and while it seems to lead to something, I don't quite fully understand it. Could you elaborate? Please recall that each call will span several seconds, and each second needs to count. If using DATEPART (or something like it on MySQL), which second should be used for the GROUP BY? See the note on simultaneous.
Elaborating over this, I found a way to solve it using a temporary table. Assuming temp holds all seconds from tStart to tEnd, I could do
SELECT temp.second, count(call.id)
FROM call, temp
WHERE temp.second BETWEEN call.start AND call.start + call.duration
GROUP BY temp.second
Then, as suggested, the web app should use this as a histogram.
A: You can use a static Numbers table for lots of SQL tricks like this. The Numbers table simply contains integers from 0 to n for n like 10000.
Then your temp table never needs to be created, and instead is a subquery like:
SELECT StartTime + Numbers.Number AS Second
FROM Numbers
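Putting it together, a MySQL-flavored sketch (the cdr table and its column names are assumptions; adjust the date math to your RDBMS):
-- Each call contributes one row per second it is active,
-- then overlaps are counted per second.
SELECT DATE_ADD(c.calldate, INTERVAL n.Number SECOND) AS sec,
       COUNT(*) AS simultaneous_calls
FROM cdr c
JOIN Numbers n ON n.Number < c.duration
WHERE c.calldate >= '2008-09-08' AND c.calldate < '2008-09-09'
GROUP BY sec
ORDER BY sec;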
A: You can create a table 'simultaneous_calls' with 3 fields:
yyyymmdd Char(8),   -- date
day_second Number,  -- second of the day
count Number        -- count of simultaneous calls
Simultaneous_calls table will be filled by some batch program which will be started every day after end of the day.
Assuming that you use Oracle, the batch may start a PL/SQL procedure which does the following:
*
*Appends table with 24 * 3600 = 86400 records for each second of the day, with default 'count' value = 0.
*Defines the 'day_cdrs' cursor for the query:
Select to_char(calldate, 'yyyymmdd') yyyymmdd,
(calldate - trunc(calldate)) * 24 * 3600 starting_second,
duration duration
From cdrs
Where cdrs.calldate >= Trunc(Sysdate -1)
And cdrs.calldate < Trunc(Sysdate)
*Iterates the cursor to increment 'count' field for the seconds of the call:
For cdr in day_cdrs
Loop
Update simultaneous_calls
Set count = count + 1
Where yyyymmdd = cdr.yyyymmdd
And day_second Between cdr.starting_second And cdr.starting_second + cdr.duration;
End Loop;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: __doPostBack not rendering on postback I'm having a strange problem.
I have to use GetPostBackEventReference to force a postback. It works the first time, but after the first postback, the .NET function is not rendered... any ideas?
This is what I'm missing after the postback:
<script language="javascript" type="text/javascript">
<!--
function __doPostBack(eventTarget, eventArgument) {
var theform;
if (window.navigator.appName.toLowerCase().indexOf("microsoft") > -1) {
theform = document.Main;
}
else {
theform = document.forms["Main"];
}
theform.__EVENTTARGET.value = eventTarget.split("$").join(":");
theform.__EVENTARGUMENT.value = eventArgument;
theform.submit();
}
// -->
</script>
A: Well, following that idea I created a dummy function with the postbackreference, and it works... it still is weird though, because of it rendering correctly the first time
this.Page.RegisterClientScriptBlock("DUMMY", "<script language='javascript'>function dummy() { " + this.Page.GetPostBackEventReference(this) + "; } </script>");
A: The first thing I would look at is whether you have any ASP controls (such as a LinkButton or ComboBox, that don't normally generate a submit but require a postback) being displayed on the page.
The __doPostback function will only be put into the page if ASP thinks that one of your controls requires it.
If you aren't using one of those you can use:
Page.ClientScript.GetPostBackClientHyperlink(controlName, "")
to add the function to your page
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do you capture mouse events in FF, over Shockwave Object How do you capture the mouse events, move and click over top of a Shockwave Director Object (not flash) in Firefox, via JavaScript. The code works in IE but not in FF.
The script works on the document body in both IE and Moz, but mouse events do not fire when the mouse is over a Shockwave Director object embed.
Update:
function displaycoordIE(){
window.status=event.clientX+" : " + event.clientY;
}
function displaycoordNS(e){
window.status=e.clientX+" : " + e.clientY;
}
function displaycoordMoz(e)
{
window.alert(e.clientX+" : " + e.clientY);
}
document.onmousemove = displaycoordIE;
document.onmousemove = displaycoordNS;
document.onclick = displaycoordMoz;
Just a side note, I have also tried using an addEventListener to "mousemove".
A: You could also catch the mouse event within Director (that never fails) and then call your JS functions from there, using gotoNetPage "javascript:function('" & argument & "')"
e.g.:
on mouseDown me
gotoNetPage "javascript:function('" & argument & "')"
end
Mouse move detection is a little bit trickier, as there is no such event in Lingo, but you can use:
property pMouseLock
on beginsprite
pMouseLock = _mouse.mouseLock
end
on exitFrame
if _mouse.mouseLock <> pMouseLock then
gotoNetPage "javascript:function('" & argument & "')"
pMouseLock = _mouse.mouseLock
end if
end
regards
A: Just an idea.
Try overlaying the shockwave object with a div with opacity 0, then you can capture events on the div itself.
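A rough sketch of that overlay idea (ids, sizes, and the movie path are illustrative; the filter rule covers old IE):
<div style="position:relative">
  <embed id="director" src="movie.dcr" width="400" height="300">
  <div id="overlay"
       style="position:absolute; top:0; left:0; width:400px; height:300px;
              opacity:0; filter:alpha(opacity=0);"></div>
</div>
<script type="text/javascript">
  document.getElementById("overlay").onmousemove = function (e) {
    e = e || window.event;
    window.status = e.clientX + " : " + e.clientY;
  };
</script>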
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Signed to unsigned conversion in C - is it always safe? Suppose I have the following C code.
unsigned int u = 1234;
int i = -5678;
unsigned int result = u + i;
What implicit conversions are going on here, and is this code safe for all values of u and i? (Safe, in the sense that even though result in this example will overflow to some huge positive number, I could cast it back to an int and get the real result.)
A: Referring to The C Programming Language, Second Edition (ISBN 0131103628),
*
*Your addition operation causes the int to be converted to an unsigned int.
*Assuming two's complement representation and equally sized types, the bit pattern does not change.
*Conversion from unsigned int to signed int is implementation dependent. (But it probably works the way you expect on most platforms these days.)
*The rules are a little more complicated in the case of combining signed and unsigned of differing sizes.
A: When converting from signed to unsigned there are two possibilities. Numbers that were originally positive remain (or are interpreted as) the same value. Numbers that were originally negative will now be interpreted as larger positive numbers.
A: When one unsigned and one signed variable are added (or any binary operation) both are implicitly converted to unsigned, which would in this case result in a huge result.
So it is safe in the sense of that the result might be huge and wrong, but it will never crash.
A: Short Answer
Your i will be converted to an unsigned integer by adding UINT_MAX + 1, then the addition will be carried out with the unsigned values, resulting in a large result (depending on the values of u and i).
Long Answer
According to the C99 Standard:
6.3.1.8 Usual arithmetic conversions
*
*If both operands have the same type, then no further conversion is needed.
*Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank.
*Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type.
*Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type.
*Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
In your case, we have one unsigned int (u) and signed int (i). Referring to (3) above, since both operands have the same rank, your i will need to be converted to an unsigned integer.
6.3.1.3 Signed and unsigned integers
*
*When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
*Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
*Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
Now we need to refer to (2) above. Your i will be converted to an unsigned value by adding UINT_MAX + 1. So the result will depend on how UINT_MAX is defined on your implementation. It will be large, but it will not overflow, because:
6.2.5 (9)
A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.
Bonus: Arithmetic Conversion Semi-WTF
#include <stdio.h>
int main(void)
{
unsigned int plus_one = 1;
int minus_one = -1;
if(plus_one < minus_one)
printf("1 < -1");
else
printf("boring");
return 0;
}
You can use this link to try this online: https://repl.it/repls/QuickWhimsicalBytes
Bonus: Arithmetic Conversion Side Effect
Arithmetic conversion rules can be used to get the value of UINT_MAX by initializing an unsigned value to -1, ie:
unsigned int umax = -1; // umax set to UINT_MAX
This is guaranteed to be portable regardless of the signed number representation of the system because of the conversion rules described above. See this SO question for more information: Is it safe to use -1 to set all bits to true?
A: Conversion from signed to unsigned does not necessarily just copy or reinterpret the representation of the signed value. Quoting the C standard (C99 6.3.1.3):
When a value with integer type is converted to another integer type other than _Bool, if
the value can be represented by the new type, it is unchanged.
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
Otherwise, the new type is signed and the value cannot be represented in it; either the
result is implementation-defined or an implementation-defined signal is raised.
For the two's complement representation that's nearly universal these days, the rules do correspond to reinterpreting the bits. But for other representations (sign-and-magnitude or ones' complement), the C implementation must still arrange for the same result, which means that the conversion can't just copy the bits. For example, (unsigned)-1 == UINT_MAX, regardless of the representation.
In general, conversions in C are defined to operate on values, not on representations.
To answer the original question:
unsigned int u = 1234;
int i = -5678;
unsigned int result = u + i;
The value of i is converted to unsigned int, yielding UINT_MAX + 1 - 5678. This value is then added to the unsigned value 1234, yielding UINT_MAX + 1 - 4444.
(Unlike unsigned overflow, signed overflow invokes undefined behavior. Wraparound is common, but is not guaranteed by the C standard -- and compiler optimizations can wreak havoc on code that makes unwarranted assumptions.)
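A small program that shows both the unsigned result and the cast back (the cast back to int is implementation-defined, but yields -4444 on the usual two's complement platforms):
#include <stdio.h>
int main(void)
{
    unsigned int u = 1234;
    int i = -5678;
    unsigned int result = u + i; /* i is converted: wraps modulo UINT_MAX + 1 */
    printf("unsigned result = %u\n", result);     /* UINT_MAX + 1 - 4444 */
    printf("cast back       = %d\n", (int)result); /* -4444 (typically)  */
    return 0;
}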
A: As was previously answered, you can cast back and forth between signed and unsigned without a problem. The border case for signed integers is -1 (0xFFFFFFFF). Try adding and subtracting from that and you'll find that you can cast back and have it be correct.
However, if you are going to be casting back and forth, I would strongly advise naming your variables such that it is clear what type they are, eg:
int iValue, iResult;
unsigned int uValue, uResult;
It is far too easy to get distracted by more important issues and forget which variable is what type if they are named without a hint. You don't want to cast to an unsigned and then use that as an array index.
A:
What implicit conversions are going on here,
i will be converted to an unsigned integer.
and is this code safe for all values of u and i?
Safe in the sense of being well-defined yes (see https://stackoverflow.com/a/50632/5083516 ).
The rules are written in typically hard to read standards-speak but essentially whatever representation was used in the signed integer the unsigned integer will contain a 2's complement representation of the number.
Addition, subtraction and multiplication will work correctly on these numbers resulting in another unsigned integer containing a twos complement number representing the "real result".
division and casting to larger unsigned integer types will have well-defined results but those results will not be 2's complement representations of the "real result".
(Safe, in the sense that even though result in this example will overflow to some huge positive number, I could cast it back to an int and get the real result.)
While conversions from signed to unsigned are defined by the standard the reverse is implementation-defined both gcc and msvc define the conversion such that you will get the "real result" when converting a 2's complement number stored in an unsigned integer back to a signed integer. I expect you will only find any other behaviour on obscure systems that don't use 2's complement for signed integers.
https://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html#Integers-implementation
https://msdn.microsoft.com/en-us/library/0eex498h.aspx
A: Horrible Answers Galore
Ozgur Ozcitak
When you cast from signed to unsigned
(and vice versa) the internal
representation of the number does not
change. What changes is how the
compiler interprets the sign bit.
This is completely wrong.
Mats Fredriksson
When one unsigned and one signed
variable are added (or any binary
operation) both are implicitly
converted to unsigned, which would in
this case result in a huge result.
This is also wrong. Unsigned ints may be promoted to ints should they have equal precision due to padding bits in the unsigned type.
smh
Your addition operation causes the int
to be converted to an unsigned int.
Wrong. Maybe it does and maybe it doesn't.
Conversion from unsigned int to signed
int is implementation dependent. (But
it probably works the way you expect
on most platforms these days.)
Wrong. It is either undefined behavior if it causes overflow or the value is preserved.
Anonymous
The value of i is converted to
unsigned int ...
Wrong. Depends on the precision of an int relative to an unsigned int.
Taylor Price
As was previously answered, you can
cast back and forth between signed and
unsigned without a problem.
Wrong. Trying to store a value outside the range of a signed integer results in undefined behavior.
Now I can finally answer the question.
Should the precision of int be equal to unsigned int, u will be promoted to a signed int and you will get the value -4444 from the expression (u+i). Now, should u and i have other values, you may get overflow and undefined behavior but with those exact numbers you will get -4444 [1]. This value will have type int. But you are trying to store that value into an unsigned int so that will then be cast to an unsigned int and the value that result will end up having would be (UINT_MAX+1) - 4444.
Should the precision of unsigned int be greater than that of an int, the signed int will be promoted to an unsigned int yielding the value (UINT_MAX+1) - 5678 which will be added to the other unsigned int 1234. Should u and i have other values, which make the expression fall outside the range {0..UINT_MAX} the value (UINT_MAX+1) will either be added or subtracted until the result DOES fall inside the range {0..UINT_MAX) and no undefined behavior will occur.
What is precision?
Integers have padding bits, sign bits, and value bits. Unsigned integers do not have a sign bit obviously. Unsigned char is further guaranteed to not have padding bits. The number of values bits an integer has is how much precision it has.
[Gotchas]
The sizeof operator alone cannot be used to determine the precision of an integer if padding bits are present. And the size of a byte does not have to be an octet (eight bits) as defined by C99.
[1] The overflow may occur at one of two points. Either before the addition (during promotion) - when you have an unsigned int which is too large to fit inside an int. The overflow may also occur after the addition even if the unsigned int was within the range of an int, after the addition the result may still overflow.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "158"
} |
Q: Wordpress Category Template Question I am looking at using a custom template for a set of categories. Is it possible to use a category template (like category-4.php) on a parent category and have the children use that template as well?
So based on the answer so far, is there a way to accomplish this? I want to add text and images to all categories within a parent category.
A: From the documentation it does not appear to be possible without actually adding several category template files (unless you custom program it). I run Wordpress, and I have only seen it accomplished category by category.
http://codex.wordpress.org/Category_Templates
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What is the point of the finally block? Syntax aside, what is the difference between
try {
}
catch() {
}
finally {
x = 3;
}
and
try {
}
catch() {
}
x = 3;
edit: in .NET 2.0?
so
try {
throw something maybe
x = 3
}
catch (...) {
x = 3
}
is behaviourally equivalent?
A: try-catch-finally is a pretty important construct. You can be sure that even if an exception is thrown, the code in the finally block will be executed. It's very important in handling external resources, to release them. Garbage collection won't do that for you. In the finally part you shouldn't have return statements or throw exceptions. It's possible to do that, but it's a bad practice and can lead to unpredictable results.
If you try this example:
try {
return 0;
} finally {
return 2;
}
The result will be 2:)
Comparison to other languages: Return From Finally
A: There are several things that make a finally block useful:
*
*If you return from the try or catch blocks, the finally block is still executed, right before control is given back to the calling function
*If an exception occurs within the catch block, or an uncaught type of exception occurs in the try block, the code in the finally block is still executed.
These make finally blocks excellent for closing file handles or sockets.
A: Well, for one thing, if you RETURN inside your try block, the finally will still run, but code listed below the try-catch-finally block will not.
A: Depends on the language as there might be some slight semantic differences, but the idea is that it will execute (almost) always, even if the code in the try block threw an exception.
In the second example, if the code in the catch block returns or quits, the x = 3 will not be executed. In the first it will.
In the .NET platform, in some cases the execution of the finally block won't occur:
Security Exceptions, Thread suspensions, Computer shut down :), etc.
A: In the case that the try and the catch are empty, there is no difference. Otherwise you can be sure that the finally will be executed.
If you, for example, throw a new exception in your catch block (rethrow), then the assignment will only be executed if it is in the finally block.
Normally a finally is used to clean up after yourself (closing DB connections, file handles and the like).
You should never use control-statements (return, break, continue) in a finally, as this can be a maintenance nightmare and is therefore considered bad practice
A: The finally block is in the same scope as the try/catch, so you will have access to all the variables defined inside.
Imagine you have a file handle; this is the difference in how it would be written.
StreamReader stream = null; // declared outside the try so finally can see it
try
{
    stream = new StreamReader("foo.bar");
    stream.write("foo");
}
catch(Exception e) { } // ignore for now
finally
{
    if (stream != null)
        stream.close();
}
compared to
StreamReader stream = null;
try
{
stream = new StreamReader("foo.bar");
stream.write("foo");
} catch(Exception e) {} // ignore
if (stream != null)
stream.close();
Remember though that anything inside finally isn't guaranteed to run. Imagine that you get an abort signal, windows crashes or the power is gone. Relying on finally for business critical code is bad.
A: In Java:
Finally always gets called, regardless of whether the exception was correctly caught in catch(), or in fact whether you have a catch at all.
A: The finally block will always be called (well not really always ... ) even if an exception is thrown or a return statement is reached (although that may be language dependent). It's a way to clean up that you know will always be called.
A: Finally blocks permit you, as a developer, to tidy up after yourself, regardless of the actions of preceeding code in the try{} block encountered errors, and have others have pointed out this, is falls mainly under the umbrella of freeing resources - closing pointers / sockets / result sets, returning connections to a pool etc.
@mats is very correct that there is always the potential for "hard" failures - finally blocks shouldn't include mission critical code, which should always be done transactionally inside the try{}
@mats again - The real beauty is that it allows you to throw exceptions back out of your own methods, and still guarantee that you tidy up:
StreamReader stream = null; // declared outside the try so finally can see it
try
{
    stream = new StreamReader("foo.bar");
    mySendSomethingToStream(stream);
}
catch(noSomethingToSendException e) {
//Swallow this
logger.error(e.getMessage());
}
catch(anotherTypeOfException e) {
//More serious, throw this one back
throw(e);
}
finally
{
    if (stream != null)
        stream.close();
}
So, we can catch many types of exception, process them differently (the first allows execution for anything beyond the try{}, the second effectively returns), but always neatly and tidily clear up.
A: @iAn and @mats:
I would not "tear down" anything in finally {} that was "set up" within the try {} as a rule. Would be better to pull the stream creation outside of the try {}. If you need to handle an exception on stream create this could be done in a greater scope.
StreamReader stream = new StreamReader("foo.bar");
try {
mySendSomethingToStream(stream);
}
catch(noSomethingToSendException e) {
//Swallow this
logger.error(e.getMessage());
}
catch(anotherTypeOfException e) {
//More serious, throw this one back
throw(e);
}
finally {
stream.close();
}
A: This is not an answer, but a critique. This question is old, but this has always bothered me. I found it here for a reason. I've read every answer and it looks to me that nobody really thought it through.
I really think there is no good point to finally, which may be why it didn't exist in programming languages until "recently". Most of the examples stating stream.close() can cause null reference exceptions, so you still have to test if it's null.
Yes, if you return from within try{}, finally still runs. But is that good practice? It seems like mental gymnastics; we might as well bring goto back. Why not wait, and return after the block? All that finally {} does is add two or three lines to your code.
A: So you can clean up any open connections, etc. initialized in the try block. If you opened a connection and then an exception occurred, that connection would not be properly closed. This type of scenario is what the finally block is for.
A: The finally block is supposed to execute whether you caught the exception or not.
See Try / Catch / Finally example
A: @Ed, you might be thinking of something like a catch(...) that catches a non-specified exception in C++.
But finally is code that will get executed no matter what happens in the catch blocks.
Microsoft has a help page on try-finally for C#
A: Any code in the finally block is run even in the event of an unhandled exception. Typically the finally code is used to clean up local declarations of unmanaged resources using .Dispose().
A: In Java, you use it for anything that you want to execute regardless of whether you used a "return", just ran through the try block, or had an exception caught.
For example, closing a database session or a JMS connection, or deallocating some OS resource.
I am guessing it is similar in .NET?
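It is. A rough C# sketch of the same pattern (the connection string is assumed):
using System.Data.SqlClient;
void DoWork(string connectionString)
{
    SqlConnection conn = new SqlConnection(connectionString);
    try
    {
        conn.Open();
        // ... use the connection ...
    }
    finally
    {
        // Runs whether the work succeeded or threw,
        // so the connection always goes back to the pool.
        conn.Close();
    }
}
(A using block compiles down to essentially this try/finally.)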
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |
Q: Global/session scoped values in PHP Is there a standard way of dealing with globally scoped variables in PHP? Session scoped?
From the research I've done, it looks like the options are mostly add-ons or external. APC might work, but would be limited to a single PHP instance and not so useful for a farm of servers. Memcached seems like it would work, but I was hoping to find something within PHP.
Does its stateless approach keep there from being a standard method for handling this?
A: A persistence layer is the only way to go with PHP - either a file-based solution or a database.
PHP natively doesn't provide any mechanism for application-scoped variables.
A: You can do session variables with $_SESSION.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Other ways to encrypt WCF Connections I'm currently working on a project that requires encrypted data be passed between WCF hosts. As I understand it there are only 2 ways to ensure that data passed over WCF is secure.
*
*Using Certificates
*Using a Domain (or having the same username and password on every machine on the network)
Using certificates can be expensive and complicated to manage and since we're not guaranteed to have a server class machine at every deployment a Domain is also out of the question. Having every machine use the same username and password is also problematic if that password ever needs to be changed.
Currently we use OpenSSH to tunnel our connections between hosts. Is there another option built into the framework that I'm unaware of?
A: Encryption requires a key. Keys are usually implemented as certificates. If you own both sides of the communication, you can create your own certificate for free without having to go buy one from a trusted root authority.
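For example, a rough WCF sketch of pointing a service at such a self-created certificate (MyService, IMyService, the address and the subject name "MyServer" are all placeholders):
using System;
using System.ServiceModel;
using System.ServiceModel.Security;
using System.Security.Cryptography.X509Certificates;
WSHttpBinding binding = new WSHttpBinding(SecurityMode.Message);
binding.Security.Message.ClientCredentialType = MessageCredentialType.Certificate;
ServiceHost host = new ServiceHost(typeof(MyService), new Uri("http://localhost:8000/secure"));
// Pick the self-created certificate out of the local machine store.
host.Credentials.ServiceCertificate.SetCertificate(
    StoreLocation.LocalMachine, StoreName.My,
    X509FindType.FindBySubjectName, "MyServer");
host.AddServiceEndpoint(typeof(IMyService), binding, "");
host.Open();
The client side would set a matching certificate on its ClientCredentials.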
A: Here is an alternative. Works without IIS and SSL/X509 certificates.
A: If you are using a http endpoint, you can use a secure transport such as https.
A: Use traditional encryption of the data that you are placing inside the WCF container. Maybe something like the following:
http://www.obviex.com/samples/EncryptionWithSalt.aspx
A: The cheapest method is probably to run your own certification authority. This means you have total control over the certificates, but you do not have to pay for external certification. If you automate this appropriately, you can give every machine on your net a cryptographic identity and use your local certification to tie everything together.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: linq - how do you do a query for items in one query source that are not in another one? If I have 2 query sources how do I find ones that are in one that are not in the other?
example of join to find items in both:
var results = from item1 in qs1.Items
join item2 in qs2 on item1.field1 equals item2.field2
select item1;
So what would the linq code be to return the items in qs1 that are not in qs2?
A: From Marco Russo
NorthwindDataContext dc = new NorthwindDataContext();
dc.Log = Console.Out;
var query =
from c in dc.Customers
where !(from o in dc.Orders
select o.CustomerID)
.Contains(c.CustomerID)
select c;
foreach (var c in query) Console.WriteLine( c );
A: use the Except extension method.
var items1 = new List<string> { "Apple","Orange","Banana" };
var items2 = new List<string> { "Grapes","Apple","Kiwi" };
var excluded = items1.Except(items2);
A: Darren Kopp's answer:
var excluded = items1.Except(items2);
is the best solution from a performance perspective.
(NB: This is true for at least regular LINQ; perhaps LINQ to SQL changes things as per Marco Russo's blog post. However, I'd imagine that in the "worst case" Darren Kopp's method will return at least the speed of Russo's method even in a LINQ to SQL environment).
As a quick example try this in LINQPad:
void Main()
{
Random rand = new Random();
int n = 100000;
var randomSeq = Enumerable.Repeat(0, n).Select(i => rand.Next());
var randomFilter = Enumerable.Repeat(0, n).Select(i => rand.Next());
/* Method 1: Bramha Ghosh's/Marco Russo's method */
(from el1 in randomSeq where !(from el2 in randomFilter select el2).Contains(el1) select el1).Dump("Result");
/* Method 2: Darren Kopp's method */
randomSeq.Except(randomFilter).Dump("Result");
}
Try commenting one of the two methods out at a time and try out the performance for different values of n.
My experience (on my Core 2 Duo Laptop) seems to suggest:
n = 100. Method 1 takes about 0.05 seconds, Method 2 takes about 0.05 seconds
n = 1,000. Method 1 takes about 0.6 seconds, Method 2 takes about 0.4 seconds
n = 10,000. Method 1 takes about 2.5 seconds, Method 2 takes about 0.425 seconds
n = 100,000. Method 1 takes about 20 seconds, Method 2 takes about 0.45 seconds
n = 1,000,000. Method 1 takes about 3 minutes 25 seconds, Method 2 takes about 1.3 seconds
Method 2 (Darren Kopp's answer) is clearly faster.
The speed decrease for Method 2 for larger n is most likely due to the creation of the random data (feel free to put in a DateTime diff to confirm this) whereas Method 1 clearly has algorithmic complexity issues (and just by looking you can see it is at least O(N^2) as for each number in the first collection it is comparing against the entire second collection).
Conclusion: Use Darren Kopp's answer of LINQ's 'Except' method
A: Another totally different way of looking at it would be to pass a lambda expression (condition for populating the second collection) as a predicate to the first collection.
I know this is not the exact answer to the question. I think other users already gave the correct answer.
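Still, a rough sketch of that idea, reusing the items1/items2 lists from Darren Kopp's answer:
var filter = new HashSet<string>(items2);
var excluded = items1.Where(item => !filter.Contains(item));
This gives essentially the same result as Except, and the set lookup keeps it linear rather than O(N^2).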
A: Here's a more simple version of the same thing, you don't need to nest the query:
List<string> items1 = new List<string>();
items1.Add("cake");
items1.Add("cookie");
items1.Add("pizza");
List<string> items2 = new List<string>();
items2.Add("pasta");
items2.Add("pizza");
var results = from item in items1
              where !items2.Contains(item)
              select item;
foreach (var item in results)
Console.WriteLine(item); //Prints 'cake' and 'cookie'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What is a good way to format logs? I'm designing an application which includes the need to log all incoming messages I receive from a Telnet connection. The text is largely plain, though it can include ANSI tags that provide text colour and formatting (16 colours, bold, underline, etc).
I would like to format my logs to store the text with formatting, date/time and potentially other metadata later. My first thought was all XML, but this could impact my ability to write a fast search tool later. My current idea is date/time + text in one file, with metadata stored in another XML file, referenced by line number.
Is this a good solution? Also, where and how should I store the formatting commands? The original ANSI tags would disrupt the plain text, but having them in two different files might be awkward.
Additional: Thanks to some answers so far, though I should mention that most of the time the messages will be person-to-person communications rather than system messages. A more primitive IRC of sorts. It's up to my user to decide later (by adding metadata) which messages were important. This is the raw on-the-record log that filtered or edited logs might derive from.
A: My first suggestion would be to use a drop-in logging tool like log4net, which will make formatting much more automatic.
If you are going to go the route of two files (and I agree with Craig that a database is probably a better choice,) you can probably save yourself a lot of heartache by having one file that is as sparse as you can make it for later fast searches and one that holds all the information in one place (metadata and data) rather than creating a metadata-only format.
A: G'day,
Definitely do the logging in a flat file and add munge scripts to turn it into XML later.
First suggestion would be to make sure that all date/time strings are in ISO 8601 format, namely YYYY-MM-DD hh:mm:ss.
Second is to make your categories, e.g. exception, fatal, error, warning, info, etc. really stand out in your logs.
Then maybe look at some of the vim syntax files and create a new syntax for your log format so that important log entries really stand out.
It's not really that hard to take one of the standard syntax files and modify it to handle your log strings.
HTH.
cheers,
Rob
A: If you are capturing logging information for future searching and analysis, perhaps a database would be a better answer.
As for your solution: flat files do not scale well at all, whereas a database scales much better. I wouldn't split the files either; that just compounds the scalability issue. If you have to use a flat file, I would probably try keeping the metadata in a CSV (less overhead) and the data in a series of files indexed by the CSV file. That way all the data doesn't impact your index file. Just my thoughts.
A: I'm going to "split the fence" and say use the database for all of your analysis/archiving log entries (such as your Telnet communications). This will grant you the benefits of full text searching, columns, and easy ways to search out the data.
Use a flat file (or XML format since the file shouldn't be too big) for any of your debug/critical error type logs.
If you have a broken database connection, or something has gone wacky with your table structure, logging to the DB will be meaningless.
Come to think of it, if you are looking for a slightly more "lightweight" solution, you could use SQLite to log all your telnet traffic so that you can leverage the advantage of the DB structure, but also have the availability of the file.
With another nod to log4net, you could easily accomplish this with the ADO appender they have.
A: I'm not sure exactly what you are trying to accomplish. Telnet is usually thought of as a character-at-a-time protocol, so when you say "incoming messages" do you mean each character is a message? Or the entire user's session is a message?
I'll make some assumptions.
You have users logging in via telnet and you want to capture everything they do while they are logged in. Later, you want to be able to associate the stuff they did with that user and the time and date they did it. You'll need to be able to search later to find out "who did 'rm *' as root?"
I would store each user's session as a separate file, with a naming convention that includes the user's login and a timestamp.
e.g. 2008_09_08_14_52_07_nidonocu
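A quick sketch of generating that name (logDirectory and userLogin are assumed to exist):
string fileName = string.Format("{0:yyyy_MM_dd_HH_mm_ss}_{1}", DateTime.Now, userLogin);
string fullPath = Path.Combine(logDirectory, fileName);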
Within the file, I would capture each byte received, assuming they will mostly be plain text characters.
e.g.
ls
cd www
ls
vi index.html
/copyright 2007
llllllllllllr8:wq
exit
Write the 8-bit ANSI characters to the file as well. You should be able to use a text editor and grep to do basic audits and searches. You could use a binary file viewer or get more sophisticated later if you need to actually read the 8-bit data.
Backups, archiving, purging, etc. can all be done using regular file system tools and scripting.
My apologies if my assumptions are wrong.
--
Bruce
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Rss feed for game programmer? I was browsing this thread, which has good recommendations but is a bit too general for me.
So, if anyone has a collection of nice game programming feeds, please share them. :)
(both general and specific topics are welcome)
A: I used http://www.gamedev.net/ in college a lot, especially the NeHe Tutorials
A: AIGameDev.com: http://feeds.aigamedev.com/AiGameDev
A: Here are two I've used
DirectX forum feed and Summary of interesting resources
A: GameDevKicks.com might become interesting over time - if used more:
http://www.gamedevkicks.com/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Can I have TortoiseSVN auto-add files? Is there a way to have TortoiseSVN (or any other tool) auto-add any new .cs files I create within a directory to my working copy so I don't have to remember which files I created at the end of the day?
A: I would probably make a batch file, something like this (untested):
dir /b /S *.cs > allcsfiles.txt
svn add --targets allcsfiles.txt
I believe svn won't mind you trying to add files which are already versioned.
Anyway, that's probably about as automatic as you will easily get.
A: If you just commit your working copy, you'll get a file list showing you your unversioned files, which you can tick to add as you commit. You don't have to add them explicitly before you commit.
A: svn add --force --auto-props [Path to check in]
Worked ok for me.
-Jet
A: Yes, you can add a batch file to SVN (on the server) so that any time you update a particular branch, that change gets mimicked. I believe these are called hooks...
I hope this is what you meant.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Wait until file is unlocked in .NET What's the simplest way of blocking a thread until a file has been unlocked and is accessible for reading and renaming? For example, is there a WaitOnFile() somewhere in the .NET Framework?
I have a service that uses a FileSystemWatcher to look for files that are to be transmitted to an FTP site, but the file created event fires before the other process has finished writing the file.
The ideal solution would have a timeout period so the thread doesn't hang forever before giving up.
Edit: After trying out some of the solutions below, I ended up changing the system so that all files wrote to Path.GetTempFileName(), then performed a File.Move() to the final location. As soon as the FileSystemWatcher event fired, the file was already complete.
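In code, that approach is roughly (finalPath is assumed):
string tempPath = Path.GetTempFileName();
// ... write the complete file to tempPath ...
File.Move(tempPath, finalPath);
Since the file only appears at finalPath once it is complete, the FileSystemWatcher's created event can't fire on a half-written file. (Note that File.Move across volumes degrades to copy-then-delete, so the temp file should ideally live on the same volume as the destination.)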
A: Starting from Eric's answer, I included some improvements to make the code far more compact and reusable. Hope it's useful.
FileStream WaitForFile (string fullPath, FileMode mode, FileAccess access, FileShare share)
{
for (int numTries = 0; numTries < 10; numTries++) {
FileStream fs = null;
try {
fs = new FileStream (fullPath, mode, access, share);
return fs;
}
catch (IOException) {
if (fs != null) {
fs.Dispose ();
}
Thread.Sleep (50);
}
}
return null;
}
A: For this particular application directly observing the file will inevitably lead to a hard to trace bug, especially when the file size increases. Here are two different strategies that will work.
*
*FTP two files but only watch one. For example send the files important.txt and important.finish. Only watch for the .finish file but process the .txt (see the sketch below).
*FTP one file but rename it when done. For example send important.wait and have the sender rename it to important.txt when finished.
Good luck!
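A rough sketch of that first strategy (the directory and ProcessFile are hypothetical):
var watcher = new FileSystemWatcher(@"C:\ftproot\incoming", "*.finish");
watcher.Created += (sender, e) =>
{
    // The .finish marker only arrives after its .txt is fully transmitted.
    string dataFile = Path.ChangeExtension(e.FullPath, ".txt");
    ProcessFile(dataFile);
};
watcher.EnableRaisingEvents = true;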
A: This was the answer I gave on a related question:
/// <summary>
/// Blocks until the file is not locked any more.
/// </summary>
/// <param name="fullPath"></param>
bool WaitForFile(string fullPath)
{
int numTries = 0;
while (true)
{
++numTries;
try
{
// Attempt to open the file exclusively.
using (FileStream fs = new FileStream(fullPath,
FileMode.Open, FileAccess.ReadWrite,
FileShare.None, 100))
{
fs.ReadByte();
// If we got this far the file is ready
break;
}
}
catch (Exception ex)
{
Log.LogWarning(
"WaitForFile {0} failed to get an exclusive lock: {1}",
fullPath, ex.ToString());
if (numTries > 10)
{
Log.LogWarning(
"WaitForFile {0} giving up after 10 tries",
fullPath);
return false;
}
// Wait for the lock to be released
System.Threading.Thread.Sleep(500);
}
}
Log.LogTrace("WaitForFile {0} returning true after {1} tries",
fullPath, numTries);
return true;
}
A: One of the techniques I used some time back was to write my own function. Basically catch the exception and retry using a timer which you can fire for a specified duration. If there is a better way, please share.
A: From MSDN:
The OnCreated event is raised as soon
as a file is created. If a file is
being copied or transferred into a
watched directory, the OnCreated event
will be raised immediately, followed
by one or more OnChanged events.
Your FileSystemWatcher could be modified so that it doesn't do its read/rename during the "OnCreated" event, but rather:
*
*Spawns a thread that polls the file status until it is not locked (using a FileInfo object)
*Calls back into the service to process the file as soon as it determines the file is no longer locked and is ready to go
A: In most cases a simple approach like @harpo suggested will work. You can develop more sophisticated code using this approach:
*
*Find all open handles for the selected file using SystemHandleInformation\SystemProcessInformation
*Subclass the WaitHandle class to gain access to its internal handle
*Pass the found handles, wrapped in the subclassed WaitHandle, to the WaitHandle.WaitAny method
A: Add a trigger file to the transfer process, e.g. SameNameAsTransferredFile.trg,
that is created after the file transmission is completed.
Then set up a FileSystemWatcher that will fire events only on *.trg files.
A: Here is generic code to do this, independent of the file operation itself. This is an example of how to use it:
WrapSharingViolations(() => File.Delete(myFile));
or
WrapSharingViolations(() => File.Copy(mySourceFile, myDestFile));
You can also define the retry count, and the wait time between retries.
NOTE: Unfortunately, the underlying Win32 error (ERROR_SHARING_VIOLATION) is not exposed with .NET, so I have added a small hack function (IsSharingViolation) based on reflection mechanisms to check this.
/// <summary>
/// Wraps sharing violations that could occur on a file IO operation.
/// </summary>
/// <param name="action">The action to execute. May not be null.</param>
public static void WrapSharingViolations(WrapSharingViolationsCallback action)
{
WrapSharingViolations(action, null, 10, 100);
}
/// <summary>
/// Wraps sharing violations that could occur on a file IO operation.
/// </summary>
/// <param name="action">The action to execute. May not be null.</param>
/// <param name="exceptionsCallback">The exceptions callback. May be null.</param>
/// <param name="retryCount">The retry count.</param>
/// <param name="waitTime">The wait time in milliseconds.</param>
public static void WrapSharingViolations(WrapSharingViolationsCallback action, WrapSharingViolationsExceptionsCallback exceptionsCallback, int retryCount, int waitTime)
{
if (action == null)
throw new ArgumentNullException("action");
for (int i = 0; i < retryCount; i++)
{
try
{
action();
return;
}
catch (IOException ioe)
{
if ((IsSharingViolation(ioe)) && (i < (retryCount - 1)))
{
bool wait = true;
if (exceptionsCallback != null)
{
wait = exceptionsCallback(ioe, i, retryCount, waitTime);
}
if (wait)
{
System.Threading.Thread.Sleep(waitTime);
}
}
else
{
throw;
}
}
}
}
/// <summary>
/// Defines a sharing violation wrapper delegate.
/// </summary>
public delegate void WrapSharingViolationsCallback();
/// <summary>
/// Defines a sharing violation wrapper delegate for handling exception.
/// </summary>
public delegate bool WrapSharingViolationsExceptionsCallback(IOException ioe, int retry, int retryCount, int waitTime);
/// <summary>
/// Determines whether the specified exception is a sharing violation exception.
/// </summary>
/// <param name="exception">The exception. May not be null.</param>
/// <returns>
/// <c>true</c> if the specified exception is a sharing violation exception; otherwise, <c>false</c>.
/// </returns>
public static bool IsSharingViolation(IOException exception)
{
if (exception == null)
throw new ArgumentNullException("exception");
int hr = GetHResult(exception, 0);
return (hr == -2147024864); // 0x80070020 ERROR_SHARING_VIOLATION
}
/// <summary>
/// Gets the HRESULT of the specified exception.
/// </summary>
/// <param name="exception">The exception to test. May not be null.</param>
/// <param name="defaultValue">The default value in case of an error.</param>
/// <returns>The HRESULT value.</returns>
public static int GetHResult(IOException exception, int defaultValue)
{
if (exception == null)
throw new ArgumentNullException("exception");
try
{
const string name = "HResult";
PropertyInfo pi = exception.GetType().GetProperty(name, BindingFlags.NonPublic | BindingFlags.Instance); // CLR2
if (pi == null)
{
pi = exception.GetType().GetProperty(name, BindingFlags.Public | BindingFlags.Instance); // CLR4
}
if (pi != null)
return (int)pi.GetValue(exception, null);
}
catch
{
}
return defaultValue;
}
A: I threw together a helper class for these sorts of things. It will work if you have control over everything that would access the file. If you're expecting contention from a bunch of other things, then this is pretty worthless.
using System;
using System.IO;
using System.Threading;
/// <summary>
/// This is a wrapper around a FileStream. While it is not a Stream itself, it can be cast to
/// one (keep in mind that this might throw an exception).
/// </summary>
public class SafeFileStream: IDisposable
{
#region Private Members
private Mutex m_mutex;
private Stream m_stream;
private string m_path;
private FileMode m_fileMode;
private FileAccess m_fileAccess;
private FileShare m_fileShare;
#endregion//Private Members
#region Constructors
public SafeFileStream(string path, FileMode mode, FileAccess access, FileShare share)
{
m_mutex = new Mutex(false, String.Format("Global\\{0}", path.Replace('\\', '/')));
m_path = path;
m_fileMode = mode;
m_fileAccess = access;
m_fileShare = share;
}
#endregion//Constructors
#region Properties
public Stream UnderlyingStream
{
get
{
if (!IsOpen)
throw new InvalidOperationException("The underlying stream does not exist - try opening this stream.");
return m_stream;
}
}
public bool IsOpen
{
get { return m_stream != null; }
}
#endregion//Properties
#region Functions
/// <summary>
/// Opens the stream when it is not locked. If the file is locked, this call blocks until the mutex is acquired.
/// </summary>
public void Open()
{
if (m_stream != null)
throw new InvalidOperationException(SafeFileResources.FileOpenExceptionMessage);
m_mutex.WaitOne();
m_stream = File.Open(m_path, m_fileMode, m_fileAccess, m_fileShare);
}
public bool TryOpen(TimeSpan span)
{
if (m_stream != null)
throw new InvalidOperationException(SafeFileResources.FileOpenExceptionMessage);
if (m_mutex.WaitOne(span))
{
m_stream = File.Open(m_path, m_fileMode, m_fileAccess, m_fileShare);
return true;
}
else
return false;
}
public void Close()
{
if (m_stream != null)
{
m_stream.Close();
m_stream = null;
m_mutex.ReleaseMutex();
}
}
public void Dispose()
{
Close();
GC.SuppressFinalize(this);
}
public static explicit operator Stream(SafeFileStream sfs)
{
return sfs.UnderlyingStream;
}
#endregion//Functions
}
It works using a named mutex. Those wishing to access the file attempt to acquire control of the named mutex, which shares the name of the file (with the '\'s turned into '/'s). You can either use Open(), which will stall until the mutex is accessible or you can use TryOpen(TimeSpan), which tries to acquire the mutex for the given duration and returns false if it cannot acquire within the time span. This should most likely be used inside a using block, to ensure that locks are released properly, and the stream (if open) will be properly disposed when this object is disposed.
I did a quick test with ~20 things to do various reads/writes of the file and saw no corruption. Obviously it's not very advanced, but it should work for the majority of simple cases.
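Typical usage with a timeout might look like this (the path variable is assumed):
using (SafeFileStream sfs = new SafeFileStream(path, FileMode.Open, FileAccess.Read, FileShare.None))
{
    if (sfs.TryOpen(TimeSpan.FromSeconds(5)))
    {
        Stream stream = sfs.UnderlyingStream;
        // ... read while holding the named mutex ...
    }
    // Dispose closes the stream and releases the mutex if it was acquired.
}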
A: I don't know what you're using to determine the file's lock status, but something like this should do it.
while (true)
{
try {
stream = File.Open( fileName, fileMode );
break;
}
catch( IOException ) {
// check whether it's a lock problem
Thread.Sleep( 100 );
}
}
A: A possible solution would be to combine a FileSystemWatcher with some polling:
get notified on every change to the file, and when notified, check whether it is
locked as described in the currently accepted answer: https://stackoverflow.com/a/50800/6754146
The code for opening the file stream is copied from that answer and slightly modified:
public static void CheckFileLock(string directory, string filename, Func<Task> callBack)
{
var watcher = new FileSystemWatcher(directory, filename);
FileSystemEventHandler check =
async (sender, eArgs) =>
{
string fullPath = Path.Combine(directory, filename);
try
{
// Attempt to open the file exclusively.
using (FileStream fs = new FileStream(fullPath,
FileMode.Open, FileAccess.ReadWrite,
FileShare.None, 100))
{
fs.ReadByte();
watcher.EnableRaisingEvents = false;
// If we got this far the file is ready
}
watcher.Dispose();
await callBack();
}
catch (IOException) { }
};
watcher.NotifyFilter = NotifyFilters.LastWrite;
watcher.IncludeSubdirectories = false;
watcher.EnableRaisingEvents = true;
//Attach the checking to the changed method,
//on every change it gets checked once
watcher.Changed += check;
//Initially do a check for the case it is already released
check(null, null);
}
With this way you can Check for a file if its locked and get notified when its closed over the specified callback, this way you avoid the overly aggressive polling and only do the work when it may be actually be closed
A: Here is a similar answer to the above except I added a check to see if the file exists.
bool WaitForFile(string fullPath)
{
int numTries = 0;
while (true)
{
//need to add this line to prevent infinite loop
if (!File.Exists(fullPath))
{
_logger.LogInformation("WaitForFile {0} returning true - file does not exist", fullPath);
break;
}
++numTries;
try
{
// Attempt to open the file exclusively.
using (FileStream fs = new FileStream(fullPath,
FileMode.Open, FileAccess.ReadWrite,
FileShare.None, 100))
{
fs.ReadByte();
// If we got this far the file is ready
break;
}
}
catch (Exception ex)
{
_logger.LogInformation(
"WaitForFile {0} failed to get an exclusive lock: {1}",
fullPath, ex.ToString());
if (numTries > 10)
{
_logger.LogInformation(
"WaitForFile {0} giving up after 10 tries",
fullPath);
return false;
}
// Wait for the lock to be released
System.Threading.Thread.Sleep(500);
}
}
_logger.LogInformation("WaitForFile {0} returning true after {1} tries",
fullPath, numTries);
return true;
}
A: How about this as an option:
private void WaitOnFile(string fileName)
{
FileInfo fileInfo = new FileInfo(fileName);
for (long size = -1; size != fileInfo.Length; fileInfo.Refresh())
{
size = fileInfo.Length;
System.Threading.Thread.Sleep(1000);
}
}
Of course, if the file size is preallocated on create you'd get a false positive.
A: I do it the same way as Gulzar, just keep trying with a loop.
In fact I don't even bother with the file system watcher. Polling a network drive for new files once a minute is cheap.
A: Simply use the Changed event with the NotifyFilter NotifyFilters.LastWrite:
var watcher = new FileSystemWatcher {
Path = @"c:\temp\test",
Filter = "*.xml",
NotifyFilter = NotifyFilters.LastWrite
};
watcher.Changed += watcher_Changed;
watcher.EnableRaisingEvents = true;
A: I ran into a similar issue when adding an outlook attachment. "Using" saved the day.
string fileName = MessagingBLL.BuildPropertyAttachmentFileName(currProp);
//create a temporary file to send as the attachment
string pathString = Path.Combine(Path.GetTempPath(), fileName);
//dirty trick to make sure locks are released on the file.
using (System.IO.File.Create(pathString)) { }
mailItem.Subject = MessagingBLL.PropertyAttachmentSubject;
mailItem.Attachments.Add(pathString, Outlook.OlAttachmentType.olByValue, Type.Missing, Type.Missing);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "113"
} |
Q: Visual Studio 2008 "randomly" hangs on test run We are using VS 2008 Team System with the automated test suite, and upon running tests the test host "randomly" locks up. I actually have to kill the VSTestHost process and re-run the tests to get something to happen, otherwise all tests sit in a "pending" state.
Has anyone experienced similar behavior, and does anyone know of a fix? We have 3 developers here experiencing the same behavior.
A: This may be related to an obscure bug that causes unit tests to hang unless the computer name is UPPERCASE. Crazy, I know - but I had this problem and the fix worked for me.
Bug report on MS Connect
Workaround on MS Connect
TFS Blog Article about this issue
HowTo edit the registry to change your computer name
The easiest approach is to tweak the registry. You need to edit two keys:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\ComputerName\ActiveComputerName
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\ComputerName\ComputerName
Change value ComputerName to be upper case in both keys, and restart. Tests then magically work.
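If you need to script that change, a rough, untested sketch using the registry API (run it elevated, and back up the keys first; this effectively renames the machine):
using Microsoft.Win32;
string[] keys = {
    @"System\CurrentControlSet\Control\ComputerName\ActiveComputerName",
    @"System\CurrentControlSet\Control\ComputerName\ComputerName"
};
foreach (string subKey in keys)
{
    using (RegistryKey key = Registry.LocalMachine.OpenSubKey(subKey, true))
    {
        string name = (string)key.GetValue("ComputerName");
        key.SetValue("ComputerName", name.ToUpperInvariant());
    }
}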
A: When you say lock up, do you mean VS is actually hung, or do the tests not run?
The easiest way to track down what is going on would be to look at a dump of the hung process. If you are on Vista, just right-click on the process and choose to create a memory dump. If you are on Windows XP, and don't have the Debugging Tools for Windows installed, you can get a memory dump using ntsd.exe. You'll need the process ID, which you can get from Task Manager by adding the PID column to the Processes tab display.
Once you have that, run the following commands:
ntsd -p <PID>
.dump C:\mydump.dmp
You can then either inspect that dump using WinDBG and SOS or if you can post the dump somewhere I'd be happy to take a look at it.
In any case, you'll want to likely take two dumps about a minute apart. That way if you do things like !runaway you can see which threads are working which will help you track down why it is hanging.
One other question - are you on VS2008 SP1?
A: I would try running the tests from the command line using MSTest.exe. This might help isolate the problem to Visual Studio, and at least give you some method of running the tests successfully.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Response.StatusCode and Internet Explorer - Display custom message? I am implementing a HttpRequestValidationException in my Application_Error Handler, and if possible, I want to display a custom message.
Now, I'm thinking about the StatusCode. In my current example, it sends a 200, which I think should not be done. I would like to send the (IMHO) more appropriate 400 Bad Request instead. However, at the same time, I would like to use Response.Write to enter a custom message. Firefox displays it properly, but IE7 gives me the Default unhelpful Internet Explorer Error Page.
On one side, I guess that Internet Explorer just assumes that everything <> 200 is simply not having any "good" content, and the RFC is not really clear here.
So I just wonder, is sending a HTTP 200 for an Error Page caused by a HttpRequestValidationException good practice or not? Are there good alternatives?
A: An HTTP 200 Response Code does not indicate an error. It indicates that everything was OK. You should not use a 200 response code for an error.
Internet Explorer shows its "Friendly Errors" page if the response is less than 512 bytes. Here's more on this issue: http://weblogs.asp.net/scottgu/archive/2006/04/09/442332.aspx
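One workaround, then, is to keep the correct status code but pad the body past that threshold (the message text here is just an example):
Response.StatusCode = 400;
Response.Write("<html><body><h1>Bad Request</h1><p>Your input contained invalid characters.</p></body></html>");
Response.Write(new string(' ', 512)); // push past IE's 512-byte "friendly error" limit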
A: No, it's certainly not a good practice. 2XX status codes mean (among other things) that the request is valid. Which is just the contrary to raising a HttpRequestValidationException.
I don't know how to make IE behave correctly, sadly. A slightly better way than to send a 200 would be to redirect it to an error page, but still far from perfect.
A: Internet Explorer shows what they call a "friendly HTTP error message" when the response is 4xx or 5xx. This option can be turned off by the user in IE's Tools.Options.Advanced[Browsing] dialog.
Sending a 200 for an error page is generally bad practice. One alternative would be to have a valid "Error" page that's supposed to show error messages (so a 200 would be okay) and then use a 3xx redirect to that page.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Determine Disk Geometry on Windows I need to programmatically determine how many sectors, heads, and cylinders are on a physical disk from Windows XP. Does anyone know the API for determining this? Where might Windows expose this information?
A: Use DeviceIoControl with control code IOCTL_DISK_GET_DRIVE_GEOMETRY or IOCTL_DISK_GET_DRIVE_GEOMETRY_EX.
There's sample code in MSDN to do this here.
A: There's a control code you can pass to DeviceIoControl to get the physical disk geometry.
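From .NET, a rough P/Invoke sketch (untested; the drive name, struct layout and IOCTL constant are assumptions based on the Win32 headers):
using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;
[StructLayout(LayoutKind.Sequential)]
struct DISK_GEOMETRY
{
    public long Cylinders;
    public int MediaType;
    public int TracksPerCylinder; // heads
    public int SectorsPerTrack;
    public int BytesPerSector;
}
const uint IOCTL_DISK_GET_DRIVE_GEOMETRY = 0x00070000;
[DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
static extern SafeFileHandle CreateFile(string fileName, uint access, uint share,
    IntPtr security, uint disposition, uint flags, IntPtr template);
[DllImport("kernel32.dll", SetLastError = true)]
static extern bool DeviceIoControl(SafeFileHandle device, uint code,
    IntPtr inBuffer, uint inSize, out DISK_GEOMETRY outBuffer, uint outSize,
    out uint returned, IntPtr overlapped);
static void PrintGeometry()
{
    // Zero access rights are sufficient for this particular IOCTL.
    using (SafeFileHandle drive = CreateFile(@"\\.\PhysicalDrive0", 0,
        3 /* FILE_SHARE_READ | FILE_SHARE_WRITE */, IntPtr.Zero,
        3 /* OPEN_EXISTING */, 0, IntPtr.Zero))
    {
        if (drive.IsInvalid)
            throw new IOException("Could not open PhysicalDrive0");
        DISK_GEOMETRY geo;
        uint returned;
        if (!DeviceIoControl(drive, IOCTL_DISK_GET_DRIVE_GEOMETRY, IntPtr.Zero, 0,
            out geo, (uint)Marshal.SizeOf(typeof(DISK_GEOMETRY)), out returned, IntPtr.Zero))
            throw new IOException("DeviceIoControl failed");
        Console.WriteLine("Cylinders={0} Heads={1} Sectors/Track={2}",
            geo.Cylinders, geo.TracksPerCylinder, geo.SectorsPerTrack);
    }
}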
A: WMI is good at this too, I've used it with great success.
// requires a reference to System.Management.dll and "using System.Management;"
using( ManagementClass driveClass = new ManagementClass( "Win32_DiskDrive" ) )
{
    using( ManagementObjectCollection physicalDrives = driveClass.GetInstances( ) )
    {
        foreach( ManagementObject drive in physicalDrives )
        {
            // TotalCylinders comes back as a boxed UInt64, not a string
            ulong cylinders = ( ulong )drive["TotalCylinders"];
            // ... etc ...
            drive.Dispose( );
        }
    }
}
For a list of additional drive properties you can use, check out this page
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What would it take to make OpenID mainstream? OpenID is a great idea in principle, but the UI and the explanation as to why it is good are currently not tailored for general use -- what do you think it would take to make OpenID work for the general public? Can this be solved with technology, or is the problem so intrinsically hard that we are stuck with difficult explanations/multi-step registration procedures, numerous accounts, or poor security?
A: I think it'll take a huge buy-in from a site that millions of people use; for example, MySpace is soon supporting OpenID, so now the number of users that OpenID supports has just jumped by a huge amount. If more of the high activity sites on the net follow this lead, there you go!
A: It will take all the popular sites supporting it and making it transparent to the user.
"You can make a useraccount here, or if you use MySpace, Google Mail, Hotmail, etc then you can sign in using OpenID."
Don't sell it as a new service, sell it as being able to sign in using a different ID from another site.
The issue, however, is that with everyone supporting it each user will now have a myspace id, google id, etc. Now if they sign onto stackoverflow with their myspace id then later with google they may be perplexed that stackoverflow doesn't recognize them.
I wonder if openid has a solution for linking openid accounts so they are one and the same - I doubt the technology allows for it, since they are essentially independant signing authorities. Google would have to share data with Myspace and vice versa to enable that...
A: ISPs should provide openIds to all their customers that mimic their e-mail addresses. Perhaps openID needs to support automatic translation of foo@example.com into http://openid.example.com/foo so that ISPs can easily set this up on a separate server.
A: I don't think it will become mainstream. I think Ted Dziuba gets it right when he says it solves a "problem" that most people don't consider to be worth solving.
http://teddziuba.com/2008/09/openid-is-why-i-hate-the-inter.html
A: It will have to get a hell of a lot simpler, with easier-to-remember IDs.
A: It needs to be much simpler: involve less knowledge of the concepts, and require fewer steps - preferably zero. When the technology works with little or no assistance, it'll take off.
The mechanics of OpenID credentials, providers and suppliers shouldn't need to be exposed to the user. People talk about educating the masses of internet users, but that's never going to happen - the masses never stop being stupid. If you want to appeal to the masses, you need to bring the technology down to meet their level instead. When a Google-affiliated site picks up that you're logged into Google and silently uses that account, it works without you ever having to tell it who you are. The fact that OpenID is so clumsy in comparison is why the big providers like Google are still avoiding it, and why the general public won't adopt it.
I think the developers of OpenID messed up when they used a URL rather than an email address for the IDs. People know what email addresses are, they already have one that's associated with them (or can get one easily), and email providers like Google and Microsoft are happy to adopt a role as portals. In fact, an automatic translation from email address to URL is all it would take:
myname@example.com -> http://www.example.com/openid/myname
A: You mean it isn't already? ;)
Obviously a lot of currently-popular applications would need to offer it and make it obvious that it was a good alternative.
If Google and Facebook made it an obvious option, that would help.
Ultimately, user education will really be the thing that does it. I doubt most people would care though...dumb sheeple.
A: Many of the responses so far seem to boil down to two options:
*
*user education, and
*forcing adoption (lots of sites changing to openid from in-house auth.)
Is that all we can do? What about distributed tools to make it easy for casual users to do openid delegation? (Say, something integrated with OS X / Windows / Ubuntu) Are there technological barriers that make this infeasible?
If client-side (and vendor-issued) applications could let you manage your on-line security preference, then we'd possibly be able to combat some of the risks associated with giving random sites your passwords -- since the "login area" would be some local program sitting in your systray, or what not. Of course, the integration of web apps with the desktop (such as that provided by Chrome) may make such a distinction impossible in practice, so it may be a moot point.
In any case, it seems like there should be something we could do now to make openid more palatable to the general public, and speed adoption in addition to making the system more user friendly.
A: As someone who primarily programs web apps in Java, I can't/won't use OpenID because the library support isn't there. JOID and openid4java are the only two that I know of. JOID is apparently not actively maintained, not including really important patches that have been on the mailing list for months; and openid4java requires >40 megabytes of external dependencies, including some that need to go into the endorsed classpath, which is, as one user commented, ridiculous:
Comment by witichis, Apr 28, 2008
46MB download for a simple redirect and de/encryp - are you f****n' drunk?
In my opinion, OpenID is not bad. It consolidates login credentials. It does solve a real problem, while it may not be the optimal solution. The only two problems I can see are that you must trust the identity provider not to allow someone else to claim to be you, and that relying parties (web sites you log in to) can collude to link your identity on multiple sites together.
A: I think we need to see OpenID offered as a login method more consumer oriented websites. There are a lot of big consumer sites that can be used as OpenID providers, but the only place I recall seeing OpenID available as a login before Stackoverflow is to comment on Blogger. Being a provider is great and all, but it's pretty much invisible to consumers. Seeing an actual place to use OpenID, on the other hand, will probably garner somewhat more interest.
A: It would certainly help if more OpenID consumers were also OpenID providers. As a developer, I'm comfortable going through a few contortions to figure out that I can create a new ID on openid.org, but the more mainstream consumer could easily be put off by the process.
A: The fact that big sites will accept OpenID isn't, on it's own, enough to make it mainstream. The closest I've seen so far was having LiveJournal both accept and provide OpenID authentication (which I believe it has been doing for quite some time).
But I think that just accepting OpenID isn't enough. What we really need is more sites like this one that refuse to make their own authentication system, and require OpenID authentication. If the "next big thing" said you have to use your OpenID to log in (with a really simple wizard to set up a new ID with someone else), I believe that it will start the ball properly rolling.
A: I'd use it if I could do it per-site and aggregate the identity later on my own time and terms. As it is, it's a giant pain in the ass to even find a decent OpenID provider; by decent I mean stackoverflow.com isn't one so I'm not going to bother.
A: Browsers should auto-fill OpenID login boxes so that you don't have to remember your ID.
A: Web frameworks should come with it as the default, unless you take lots of extra time to configure a simple username/password combination.
A: Sites that use OpenID need to put it front and center on the login page. I have seen many sites hide it behind a link under the standard login/registration page like this:
Username:
Password:
or use your OpenID
A: Choosing a provider needs to be much simpler.
At present there's no way to know how reliable, trustworthy or secure any of them are, or which will still be around in 6 months time.
A: It won't be mainstream, as it's too much effort and is too confusing for those used to email address and password.
For example:
To login to stackoverflow with Opera I have to click login, select myOpenID from the list, type my username, hit enter, press Ctrl+Enter to autofill the password on the myOpenID site, then press the continue button.
To login into any normal site with Opera I just press Ctrl+Enter to autofill the saved user/pass combo.
A: Personally I don't think it needs to be mainstream at all, it was an interesting idea, but it is no longer relevant.
When I create a normal login, I type in my username, master password and click on the SuperGenPass bookmarklet. That is it, when I had to sign up to stackoverflow I had to find an openId provider, sign up there (which took forever) login to my website and setup delegation, then add stackoverflow to my list of sites.
And yesterday I couldn't login because I had removed the file from my webhost and they had some security issue.
Conclusion: Don't use openid.
A: I'm looking into OpenID right now to integrate into a startup site so it can manage the login process for my site.
I think to make this mainstream they need to make it super simple: copy and paste code into your site and it loads a login form that gives you pretty much what stackoverflow.com does.
I think you can style up the layout of the form to be more recognizable as well.
A: Make it less open.
i do not want the same identity on multiple sites.
i do not want to have to create a flickr account before StackOverflow will let me post.
i do not want to have to create a new flickr account for each website that i want to register with.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: ASP.NET AJAX Load Balancing Issues This would be a question for anyone who has code in the App_Code folder and uses a hardware load balancer. Its true the hardware load balancer could be set to sticky sessions to solve the issue, but in a perfect world, I would like the feature turned off.
When a file is in the App_Code folder and the site is not pre-compiled, IIS will generate random file names for these files.
server1 "/ajax/SomeControl, App_Code.tjazq3hb.ashx"
server2 "/ajax/SomeControl, App_Code.wzp3akyu.ashx"
So when a user posts the page and gets transfered to the other server nothing works.
Does anyone have a solution for this? I could change to a pre-compiled web-site, but we would lose the ability for our QA department to just promote the changed files.
A: Do you have the <machineKey> node on both servers set to the same value?
You can override the machine.config file in web.config to set this. It needs to match on both servers, otherwise you can get strange situations like this.
A: Does your load balancer supports sticky sessions? With this on, the balancer will route the same IP to the same server over and over within a certain time window. This way, all requests (AJAX or otherwise) from one client would always hit the same server in the cluster/farm.
A: Ok, first things first... the MachineKey thing is true. That should absolutely be set to the same value on all of the load-balanced machines. I don't remember everything it affects, but do it anyway.
Second, go ahead and precompile the site. You can actually still push out new versions: whenever there is a .cs file for a page, that page gets recompiled. What gets tricky is the app_code files, which get compiled into a single dll. However, if a change is made in there, you can upload the new dll and again everything should be fine.
To make things even easier, enable the "Used fixed naming and single page assemblies" option. This will ensure things have the same name on each compilation, so you just test and then replace the changed .dll files.
All of that said, you shouldn't be having an issue as is. The request goes to IIS, which just serves up the page and compiles as needed. If the code behind is different on each machine it really shouldn't matter, the code is the same and that machine will reference it's own code. The actual request/postback doesn't know or care about any of that. Everything I said above should help simplify things, but it should be working anyway... so it's probably a machinekey issue.
A: If it's a hardware load balancer, you shouldn't have an issue, because all that is known there is the request URL, for which the server would compile the requested page and serve it.
The only issue I can think of that you might have is with session and view state.
A: It appears that the <machineKey> setting is only for ViewState encryption. It doesn't affect the file names for auto-compiled assemblies.
A: You could move whatever is in your app_code to an external class library if your QA dept can promote that entire library. I think you are stuck with sticky sessions if you can't find a convenient or tolerable way to switch to a pre-compiled site.
A: I think the ASP.NET model has quite a few dependencies on encryption and machine-specific storage, so I am not sure it works to avoid sticky IPs for sessions.
I don't know about ASP.NET AJAX (I use MonoRail's NJS approach instead), but session state could be an issue for you.
You have to make sure session state is serializable, and don't use in-memory sessions. You probably need to run the ASP.NET Session State Server to make sure the whole frontend farm is using the same session storage. In that case the session has to be perfectly serializable (that's why keeping objects out of session is preferred; you should always use an ID instead, and I bet MS stuck to this limitation when they did the AJAX library development).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Visual Studio 2005 crashes on start-up In my work environment, Visual Studio currently crashes every time I start our main project unless I delete the .suo (solution options) and .ncb (C++ Intellisense symbols) files.
Obviously, I've found a workaround. Is there a more permanent solution than this?
A: Try monitoring the Visual Studio process using a tool like Process Monitor and get more info. It could be because of some weird file access issues.
A: Have you installed Visual Studio 2005 Service Pack 1?
A: The accepted answer wasn't quite correct, but it pointed in the right direction.
There is a hotfix for VS2k5 SP1 described in KB article 947315 that addresses this issue.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I get ms-access to connect to ms-sql as a different user? How do I get ms-access to connect (through ODBC) to an ms-sql database as a different user than their Active Directory ID?
I don't want to specify an account in the ODBC connection, I want to do it on the ms-access side to hide it from my users. Doing it in the ODBC connection would put me right back in to the original situation I'm trying to avoid.
Yes, this relates to a previous question: http://www.stackoverflow.com/questions/50164/
A: I think you can get this to work the way you want it to if you use an "ODBC DSN-LESS connection"
If you need to, keep your ODBC DSN's on your users' machines using windows authentication. Give your users read-only access to your database. (If they create a new mdb file and link the tables they'll only be able to read the data.)
Create a SQL Login which has read/write permission to your database.
Write a VBA routine which loops over your linked tables and resets the connection to use you SQL Login but be sure to use the "DSN-Less" syntax.
"ODBC;Driver={SQL Native Client};" &
"Server=MyServerName;" & _
"Database=myDatabaseName;" & _
"Uid=myUsername;" & _
"Pwd=myPassword"
Call this routine as part of your startup code.
A couple of notes about this approach:
*
*Access seems to have an issue with the connection info once you change from Read/Write to Read Only and try going back to Read/Write without closing and re-opening the database (mde/mdb) file. If you can change this once at startup to Read/Write and not change it during the session this solution should work.
*By using a DSN - Less connection you are able to hide the credentials from the user in code (assuming you're giving them an mde file you should be ok). Normally hard-coding connection strings isn't a good idea, but since you're dealing with an in-house app you should be ok with this approach.
A: I think you'd have to launch the MS Access process under the account you want to use to connect. There are various tools that let you do this, such as CPAU. This tool will let you encrypt the password as well.
A: We admit here that you are using an ODBC connexion to your database with Integrated Security on, so that you do not have/do not want to write a username/password value in the connexion string (which is according to me the right choice).
In this case, there is fortunately no way to "simulate" another user when connecting to the data. Admit with me that being able to make such a thing would be a huge break in integrated security!
I understood from your previous post that you wanted users to be able to update the data or not depending on the client interface they use. According to me, the idea would be to create for each table a linked 'not updatable' view. Let's say that for each table called Table_Blablabla you create a view (=query in Access) called View_Table_Blablabla...
When using Access, you can then decide at runtime whether you want to open the updatable table or the read-only view. This can be done for example at runtime, in the form_Open event, by setting the form recordsource either to the table or the view.
A: @Philippe
I assume that you are using the word admit as being roughly equivalent to understand or perhaps agree; as opposed to the opposite of deny.
I understand the implications of having all the users login to the database using one ID and password (and having them stored in the application). That to me is a smaller risk than the problem I'm facing right now.
@off
Some more background to the problem:
I have ODBC connections set up on each of the users workstations using Windwos NT authentication. Most of the time the users connect using an MDE setup to use that ODBC connection - in this case they ALWAYS have the ability to add/update/delete data.
The problem comes that some of the users are educated enough about MS-Access to create a new mdb and link it to the MS-SQL server. They can then edit the data right within the tables rather than going through the application, which does a certain amount of validation and hand holding. And they like doing this, but sometimes they mess it up and cause me problems.
A: What I was hoping to do (which I just experimented with) was to refresh the links to the database something like this for each table (Note: I've switched the ODBC connection to SQL Server authentication for this experiment, and added the accounts to the SQL server as well: readonly - which can't do any updates, and readwrite - which has full privileges on the table).
myTable.Connect = _
"ODBC;" & _
"DATABASE=" & "MyTestDB" & ";" & _
"UID=readonly;" & _
"PWD=readonly_password;" & _
"DSN=" & "MyTestDB" & ";"
myTable.RefreshLink
this stops them from editing, but I can't get a later readwrite to work
myTable.Connect = _
"ODBC;" & _
"DATABASE=" & "MyTestDB" & ";" & _
"UID=readwrite;" & _
"PWD=readwrite_password;" & _
"DSN=" & "MyTestDB" & ";"
myTable.RefreshLink
It seems that whichever permission I connect with first sticks permanently. If I start with readwrite and then go to readonly, the table remains with the readwrite privileges.
A: Why not use integrated/Windows security? You can grant an Active Directory group the rights you want the users to have and then add the user accounts to that group. I believe you can also use SQL Server's roles feature in addition to this to limit functionality based on the client application being used.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: how does unix handle full path name with space and arguments? How does Unix handle a full path name with spaces and arguments?
In Windows we quote the path and add the command-line arguments after; how is it in Unix?
"c:\foo folder with space\foo.exe" -help
update:
I meant how do I recognize a path from the command line arguments.
A: You can either quote it like your Windows example above, or escape the spaces with backslashes:
"/foo folder with space/foo" --help
/foo\ folder\ with\ space/foo --help
A: Since spaces are used to separate command line arguments, they have to be escaped from the shell. This can be done with either a backslash (\) or quotes:
"/path/with/spaces in it/to/a/file"
somecommand -spaced\ option
somecommand "-spaced option"
somecommand '-spaced option'
This is assuming you're running from a shell. If you're writing code, you can usually pass the arguments directly, avoiding the problem:
Example in perl. Instead of doing:
system("somecommand -spaced option");
you can do
system("somecommand", "-spaced option");
Since when you pass the system() call a list, it doesn't break arguments on spaces like it does with a single argument call.
A: Also be careful with double-quotes -- on the Unix shell this expands variables. Some are obvious (like $foo and \t) but some are not (like !foo).
For safety, use single-quotes!
A: You can quote if you like, or you can escape the spaces with a preceding \, but most UNIX paths (Mac OS X aside) don't have spaces in them.
/Applications/Image\ Capture.app/Contents/MacOS/Image\ Capture
"/Applications/Image Capture.app/Contents/MacOS/Image Capture"
/Applications/"Image Capture.app"/Contents/MacOS/"Image Capture"
All refer to the same executable under Mac OS X.
I'm not sure what you mean about recognizing a path - if any of the above paths are passed as a parameter to a program the shell will put the entire string in one variable - you don't have to parse multiple arguments to get the entire path.
A: You can quote the entire path as in windows or you can escape the spaces like in:
/foo\ folder\ with\ space/foo.sh -help
Both ways will work!
A: I would also like to point out that in case you are using command line arguments as part of a shell script (.sh file), then within the script, you would need to enclose the argument in quotes. So if your command looks like
>scriptName.sh arg1 arg2
And arg1 is your path that has spaces, then within the shell script, you would need to refer to it as "$arg1" instead of $arg1
Here are the details
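For example, a minimal sketch (scriptName.sh itself is hypothetical):
#!/bin/sh
# scriptName.sh -- $1 is the first command-line argument
ls -l "$1"   # quoted: a path with spaces arrives as a single argument
ls -l $1     # unquoted: the shell re-splits the path into several arguments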
A: If the normal ways don't work, try substituting spaces with %20.
This worked for me when dealing with SSH and other domain-style commands like auto_smb.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: What version of TinyMCE will work in Drupal 5 with google chrome? My drupal site (internal) will not display the TinyMCE editor when using Google Chrome browser. Has anyone gotten TinyMCE to work with Chrome in Drupal 5?
A: There are a number of known incompatibilities between TinyMCE and WebKit (the rendering engine used by Chrome). If you're using TinyMCE 2.x, you might want to try the Safari plug-in to TinyMCE; Safari also uses WebKit. I gather also that TinyMCE 3.x has better support for Safari/WebKit than TinyMCE 2.x, so you might want to try upgrading to the latest 3.x version.
HTH
Alastair
A: I upgraded TinyMCE to the latest version, but it still fails.
The editor will display the first time you try to edit after login, but subsequent editing fails to bring up the TinyMCE editor.
On the upside, Drupal 6.4 and the latest TinyMCE appear to work correctly, so I guess I'll need to update Drupal to v. 6 for this website.
edit
Refreshing the edit page renders TinyMCE in my current setup, so that's my answer for Drupal 5.7 in Chrome.
A: Just Refresh the edit page and the editor will render in Chrome.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What's the best way to get the fractional part of a float in PHP? How would you find the fractional part of a floating point number in PHP?
For example, if I have the value 1.25, I want to return 0.25.
A: If the number is negative, you'll have to do this:
$x = abs($x) - floor(abs($x));
A: $x = $x - floor($x)
A: My PHP skills are lacking, but you could subtract the result of floor() from the original number.
A: $x = fmod($x, 1);
Here's a demo:
<?php
$x = 25.3333;
$x = fmod($x, 1);
var_dump($x);
Should output
float(0.3333)
Credit.
A: Don't forget that you can't trust floating point arithmetic to be 100% accurate. If you're concerned about this, you'll want to look into the BCMath Arbitrary Precision Mathematics functions.
$x = 22.732423423423432;
$x = bcsub(abs($x),floor(abs($x)),20);
You could also hack on the string yourself
$x = 22.732423423423432;
$x = strstr ( $x, '.' );
A: The answer provided by nlucaroni will only work for positive numbers. A possible solution that works for both positive as well as negative numbers is:
$x = $x - intval($x)
A: However, if you are dealing with something like perlin noise or another graphical representation, the solution which was accepted is correct. It will give you the fractional part from the lower number.
i.e:
*
*.25 : 0 is integer below, fractional part is .25
*-.25 : -1 is integer below, fractional part is .75
With the other solutions, you will repeat 0 as the integer below, and worse, you will get reversed fractional values for all negative numbers.
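A quick PHP illustration of that difference (floor() measures from the integer below):
$x = -0.25;
var_dump($x - floor($x));           // float(0.75), the fraction above the integer below (-1)
var_dump(abs($x) - floor(abs($x))); // float(0.25), the magnitude of the fractional part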
A: Some of the preceding answers are partial. This, I believe, is what you need to handle all situations:
function getDecimalPart($floatNum) {
return abs($floatNum - intval($floatNum));
}
$decimalPart = getDecimalPart($floatNum);
A: You can use fmod function:
$y = fmod($x, 1); //$x = 1.25 $y = 0.25
A: To clear up the confusion on this page, this is actually the best answer; it is fast and works for both positive and negative values of $x:
$frac=($x<0) ? $x-ceil($x) : $x-floor($x);
I ran speed tests of 10 million computations on PHP 7.2.15 and even though both solutions give the same results, fmod is slower than floor/ceil.
$frac=($x<0) ? $x-ceil($x) : $x-floor($x);
-> 490-510 ms (depending on the sign of $x)
$frac=fmod($x, 1);
-> 590 - 1000 ms (depending on the value of $x)
Whereas the actual empty loop itself takes 80 ms (which is included in above timings).
Test script:
$x=sqrt(2)-0.41421356237;
$time_start = microtime(true);
for ($i=0;$i<=9999999;$i++) {
//$frac=fmod($x, 1); // version a
$frac=($x<0) ? $x-ceil($x) : $x-floor($x); // version b
}
$time_end = microtime(true);
$time = $time_end - $time_start;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52"
} |
Q: Google Finance - Get Quotes search box - Column Alignment How does Google manage to properly align the second column (i.e. the ticker name) in the "Get Quotes" search box suggestion drop-down on the Google Finance site?
Example: If you enter iii - the second column is perfectly aligned.
It does not use a fixed width font - so just adding the correct numbers of spaces to the ticker will not work.
How do they do that?
A: Most likely just using margins: float the first column left, then set the second column's margin to the width of the first column.
A: I just viewed the source with a DOM inspector, and it appears that each cell is a span with a margin set (as Darren said) to position the right column over.
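A minimal sketch of that technique (the class names here are invented, not Google's actual markup):
.suggestion .ticker {
    float: left;          /* first column */
}
.suggestion .name {
    margin-left: 120px;   /* the width reserved for the first column */
}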
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: #line and jump to line Do any editors honor C #line directives with regards to goto-line features?
Context:
I'm working on a code generator and need to jump to a line of the output, but the line is specified relative to the #line directives I'm adding.
I can drop them, but then finding the input line is an even worse pain.
A: If the editor is scriptable it should be possible to write a script to do the navigation. There might even be a Vim or Emacs script that already does something similar.
FWIW, when I was writing a lot of Bison/Flex I wrote a Zeus Lua macro script that attempted to do something similar (i.e. move from the input file to the corresponding line of the output file by searching for the #line marker).
For any one that might be interested here is that particular macro script.
A: #line directives are normally inserted by the preprocessor, not written into source code by hand, so editors won't usually honor them if the file extension is .c.
However, the normal file extension for post-compiled files is .i or .gch, so you might try using that and see what happens.
A: I've used the following in a header file occasionally to produce clickable items in the VC6 and recent VS (2003+) compiler output window.
Basically, this exploits the fact that items in the compiler output are parsed for "PATH(LINENUM): message".
This presumes on the Microsoft compiler's treatment of "pragma remind".
This isn't quite exactly what you asked, but it might be generally helpful in arriving at something you can get the compiler to emit that some editors might honor.
// The following definitions will allow you to insert
// clickable items in the output stream of the Microsoft compiler.
// The error and warning variants will be reported by the
// IDE as actual warnings and errors... which means you can make
// them occur in the task list.
// In theory, the coding standards could be checked to some extent
// in this way and reminders that show up as warnings or even
// errors inserted...
#define strify0(X) #X
#define strify(X) strify0(X)
#define remind(S) message(__FILE__ "(" strify( __LINE__ ) ") : " S)
// example usage
#pragma remind("warning: fake warning")
#pragma remind("error: fake error")
I haven't tried it in a while but it should still work.
A: Use sed or a similar tool to translate the #lines to something else not interpreted by the compiler, so you get C error messages on the real line, but have a reference to the original input file nearby.
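For instance, a minimal sed sketch (assuming the common #line 42 "file.y" form; adjust the pattern for other variants):
# turn each #line directive into a comment the compiler ignores,
# keeping the original file/line reference readable in the output
sed 's|^#line \([0-9][0-9]*\) \(.*\)$|/* line \1 of \2 */|' generated.c > clean.c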
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Message passing in a plug-in framework First off, there's a bit of background to this issue available on my blog:
*
*http://www.codebork.com/coding/2008/06/25/message-passing-a-plug-framework.html
*http://www.codebork.com/coding/2008/07/31/message-passing-2.html
I'm aware that the descriptions aren't hugely clear, so I'll try to summarise what I'm attempting as best I can here. The application is a personal finance program. Further background on the framework itself is available at the end of this post.
There are a number of different types of plug-in that the framework can handle (e.g., accounts, export, reporting, etc.). However, I'm focussing on one particular class of plug-in, so-called data plug-ins, as it is this class that is causing me problems. I have one class of data plug-in for accounts, one for transactions, etc.
I'm midway through a vast re-factoring that has left me with the following architecture for data plug-ins:
*
*The data plug-in object (implementing intialisation, installation and plug-in metadata) [implements IDataPlugin<FactoryType>]
*The data object (such as an account) [implements, e.g., IAccount]
*A factory to create instances of the data object [implements, e.g., IAccountFactory]
Previously the data object and the plug-in object were combined into one, but this meant that a new transaction plug-in had to be instantiated for each transaction recorded in the account which caused a number of problems. Unfortunately, that re-factoring has broken my message passing. The data object implements INotifyPropertyChanged, and so I've hit a new problem, and one that I'm not sure how to work around: the plug-in object is registering events with the message broker, but it's the data objects that actually fire the events. This means that the subscribing plug-in currently has to subscribe to each created account, transaction, etc.! This is clearly not scalable.
As far as I can tell at the moment I have two possible solutions:
*
*Make the data plug-in object a go-between for the data-objects and message broker, possibly batching change notifications. I don't like this because it adds another layer of complexity to the messaging system that I feel I should be able to do without.
*Junk the current event-based implementation and use something else that's more easily manageable (in-memory WCF?!).
So I guess I'm really asking:
*
*How would you solve this problem?
*What potential solutions do you think I've overlooked?
*Is my approach even vaguely on-track/sensible?! :-)
As you will be able to tell from the dates of the blog posts, some variant of this problem has been taxing me for quite a long time now! As such, any and all responses will be greatly appreciated.
The background to the framework itself is as follows:
My plug-in framework consists of three main components: a plug-in broker, a preferences manager and a message broker. The plug-in broker does the bread-and-butter plug-in stuff: discovering and creating plug-ins. The preferences manager manages user preferences for the framework and individual plug-ins, such as which plug-ins are enabled, where data should be saved, etc. Communication is via publish/subscribe, with the message broker sitting in the middle, gathering all published message types and managing subscriptions. The publish/subscribe is currently implemented via the .NET INotifyPropertyChanged interface, which provides one event called PropertyChanged; the message broker builds a list of all plug-ins implementing INotifyPropertyChanged and subscribes other plug-ins this event. The purpose of the message passing is to allow the account and transaction plug-ins to notify the storage plug-ins that data has changed so that it may be saved.
A: Wow! Big question! :)
Correct me if I'm wrong. Your basic solution now is kind of an Observer pattern, where the data object (Account, etc) notifies about changes in their states. You think that the problem is that the subscribing plugin has to register in every object to be able to handle notifications.
That's not a problem per se. You could put the event control in the Domain Model, but I suggest you create a Service Layer and do these event notifications in that layer. That way just one object would be responsible for publishing notifications.
Martin Fowler have a series of Event Patterns in his blog. Check it out! Very good reading.
A: This is my understanding of your question: You have a plugin object that may have to listen for events on x data objects - you don't want to subscribe to the event on each data object though. I'm assuming that several plugins may want to listen to events on the same data object.
You could create a session type object. Each plugin listens for events on the session object. The data object no longer raises the event - it calls the session object to raise the event (one of the parameters would have to be the data object raising the event).
That means that your plugins only have to subscribe to one event, but they get the event from all data objects.
On the other hand, if only one plugin will ever listen to a data object at a time, why not just have the data object call the plugin directly?
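A rough C# sketch of that session/go-between idea (all names here are invented):
using System;

public class DataChangedEventArgs : EventArgs
{
    public object DataObject { get; private set; }
    public string PropertyName { get; private set; }

    public DataChangedEventArgs(object dataObject, string propertyName)
    {
        DataObject = dataObject;
        PropertyName = propertyName;
    }
}

public class DataSession
{
    // plug-ins subscribe once, here, instead of once per data object
    public event EventHandler<DataChangedEventArgs> DataChanged;

    // data objects call this instead of raising their own events
    public void NotifyChanged(object dataObject, string propertyName)
    {
        EventHandler<DataChangedEventArgs> handler = DataChanged;
        if (handler != null)
            handler(this, new DataChangedEventArgs(dataObject, propertyName));
    }
}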
A: It's early yet, but have you considered trying to use MEF instead of rolling your own?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Best CSS generator language? Reusing values in CSS (particularly colors) has always been a problem for me when it comes to maintaining that CSS. What are the best tools for creating variables, or generally improving maintainability with CSS?
A: See the answers to the following questions
*
*Create a variable in .CSS file for use within that .CSS file
*Avoiding repeated constants in CSS
A: I have used Sass and think it's great.
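For instance, a minimal sketch in modern Sass (SCSS) syntax, with an invented color variable:
// define the color once, reuse it everywhere
$brand-color: #336699;

.header { background-color: $brand-color; }
a:hover { color: $brand-color; }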
A: I haven't had the chance to use it on a project yet, but if you happen to be using PHP for your backend, Turbine looks promising.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Can XML comments go anywhere? I wrote a simple tool to generate a DBUnit XML dataset using queries that the user enters. I want to include each query entered in the XML as a comment, but the DBUnit API to generate the XML file doesn't support inserting the comment where I would like it (above the data it generates), so I am resorting to putting the comment with ALL queries either at the top or bottom.
So my question: is it valid XML to place it at either location? For example, above the XML Declaration:
<!-- Queries used: ... -->
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
...
</dataset>
Or below the root node:
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
...
</dataset>
<!-- Queries used: ... -->
I plan to initially try above the XML Declaration, but I have doubts on if that is valid XML, despite the claim from wikipedia:
Comments can be placed anywhere in the tree, including in the text if the content of the element is text or #PCDATA.
I plan to post back if this works, but it would be nice to know if it is an official XML standard.
UPDATE: See my response below for the result of my test.
A: The first example is not valid XML; the declaration has to be the first thing in an XML document.
But besides that, comments can go anywhere else.
Correcting your first example:
<?xml version="1.0" encoding="UTF-8"?>
<!-- Queries used: ... -->
<dataset>
</dataset>
A: The XML declaration (which looks like a processing instruction) must be the very first thing in the XML content (see XML comments and processing instructions). The following should work:
<?xml version='1.0' encoding='UTF-8'?>
<!-- Queries used: ... -->
<dataset>
...
</dataset>
A: Thanks for the answers everyone!
As it turns out, the comment ahead of the file seemed to work, but when I delved into the DBUnit source, it is because validation is turned off.
I did try a simple document load via:
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
DocumentBuilder builder = factory.newDocumentBuilder();
Document document = builder.parse(new File("/path/to/file"));
and this fails with an exception because the XML Declaration is not the first thing (as others indicated would be the case).
So, while DBUnit would work, I prefer to have valid XML, so I moved the comment to the end (since DBUnit generates the XML Declaration, it is not an option to place the comment below it, even though I would prefer that... at least not without modifying the XML after the fact, which would be more work than it is worth).
A: According to the XML specification, a well-formed XML document is:
document ::= prolog element Misc*
where prolog is
prolog ::= XMLDecl? Misc* (doctypedecl Misc*)?
and Misc is
Misc ::= Comment | PI | S
and
XMLDecl ::= '<?xml' VersionInfo EncodingDecl? SDDecl? S? '?>'
which means that, if you want to have comments at the top, you cannot have an XML type declaration.
You can, however, have comments after the declaration and outside the document element, either at the top or the bottom of the document, because Misc* can contain comments.
The specification agrees with Wikipedia on comments:
2.5 Comments
[Definition: Comments may appear anywhere in a document outside other markup; in addition, they may appear within the document type declaration at places allowed by the grammar. They are not part of the document's character data; an XML processor MAY, but need not, make it possible for an application to retrieve the text of comments. For compatibility, the string "--" (double-hyphen) MUST NOT occur within comments.] Parameter entity references MUST NOT be recognized within comments.
All of this together means that you can put comments anywhere that's not inside other markup, except that you cannot have an XML declaration if you lead with a comment.
However, while in theory theory agrees with practice, in practice it doesn't, so I'd be curious to see how your experiment works out.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: How to log mallocs This is a bit hypothetical and grossly simplified but...
Assume a program that will be calling functions written by third parties. These parties can be assumed to be non-hostile but can't be assumed to be "competent". Each function will take some arguments, have side effects and return a value. They have no state while they are not running.
The objective is to ensure they can't cause memory leaks by logging all mallocs (and the like) and then freeing everything after the function exits.
Is this possible? Is this practical?
p.s. The important part to me is ensuring that no allocations persist so ways to remove memory leaks without doing that are not useful to me.
A: You don't specify the operating system or environment, this answer assumes Linux, glibc, and C.
You can set __malloc_hook, __free_hook, and __realloc_hook to point to functions which will be called from malloc(), realloc(), and free() respectively. There is a __malloc_hook manpage showing the prototypes. You can track allocations in these hooks, then return to let glibc handle the memory allocation/deallocation.
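A minimal sketch of the hook mechanism (glibc-specific; the hooks are not thread-safe and are deprecated/removed in modern glibc):
#include <stdio.h>
#include <malloc.h>

/* saved original hook, so we can call through to the real malloc */
static void *(*old_malloc_hook)(size_t, const void *);

static void *logging_malloc(size_t size, const void *caller)
{
    void *result;
    __malloc_hook = old_malloc_hook;   /* uninstall ourselves to avoid recursion */
    result = malloc(size);
    fprintf(stderr, "malloc(%lu) from %p returned %p\n",
            (unsigned long)size, caller, result);
    old_malloc_hook = __malloc_hook;   /* malloc may have changed the hook */
    __malloc_hook = logging_malloc;    /* re-install the logging hook */
    return result;
}

static void install_logging_hook(void)
{
    old_malloc_hook = __malloc_hook;
    __malloc_hook = logging_malloc;
}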
It sounds like you want to free any live allocations when the third-party function returns. There are ways to have gcc automatically insert calls at every function entrance and exit using -finstrument-functions, but I think that would be inelegant for what you are trying to do. Can you have your own code call a function in your memory-tracking library after calling one of these third-party functions? You could then check if there are any allocations which the third-party function did not already free.
A: First, you have to provide the entrypoints for malloc() and free() and friends. Because this code is compiled already (right?) you can't depend on #define to redirect.
Then you can implement these in the obvious way and log that they came from a certain module by linking those routines to those modules.
The fastest way involves no logging at all. If the amount of memory they use is bounded, why not pre-allocate all the "heap" they'll ever need and write an allocator out of that? Then when it's done, free the entire "heap" and you're done! You could extend this idea to multiple heaps if it's more complex than that.
If you really do need to "log" and not make your own allocator, here are some ideas. One: use a hash table with pointers and internal chaining. Another would be to allocate extra space in front of every block and put your own structure there containing, say, an index into your "log table," then keep a free-list of log table entries (as a stack, so getting a free one or putting one back is O(1)). This takes more memory but should be fast.
Is it practical? I think it is, so long as the speed-hit is acceptable.
A: You could run the third party functions in a separate process and close the process when you are done using the library.
A: A better solution than attempting to log mallocs might be to sandbox the functions when you call them—give them access to a fixed segment of memory and then free that segment when the function is done running.
Unconfined, incompetent memory usage can be just as damaging as malicious code.
A: Can't you just force them to allocate all their memory on the stack? This way it would be guaranteed to be freed after the function exits.
A: In the past I wrote a software library in C that had a memory management subsystem that contained the ability to log allocations and frees, and to manually match each allocation and free. This was of some use when attempting to find memory leaks, but it was difficult and time consuming to use. The number of logs was overwhelming, and it took an extensive amount of time to understand the logs.
That being said, if your third-party library has extensive allocations, it's more than likely impractical to track this via logging. If you're running in a Windows environment, I would suggest using a tool such as Purify[1] or BoundsChecker[2] that should be able to detect leaks in your third-party libraries. The investment in the tool should pay for itself in time saved.
[1]: http://www-01.ibm.com/software/awdtools/purify/ Purify
[2]: http://www.compuware.com/products/devpartner/visualc.htm BoundsChecker
A: Since you're worried about memory leaks and talking about malloc/free, I assume you're in C. I'm also assuming based on your question that you do not have access to the source code of the third party library.
The only thing I can think of is to examine memory consumption of your app before & after the call, log error messages if they're different and convince the third party vendor to fix any leaks you find.
A: If you have money to spare, then consider using Purify to track issues. It works wonders, and does not require source code or recompilation. There are also other debugging malloc libraries available that are cheaper. Electric Fence is one name I recall. That said, the debugging hooks mentioned by Denton Gentry seem interesting too.
A: If you're too poor for Purify, try Valgrind. It is a lot better than it was 6 years ago and a lot easier to dive into than Purify.
A: Microsoft Windows provides (use SUA if you need POSIX) quite possibly the most advanced heap-debugging infrastructure (covering the heap plus the other APIs known to use it) of any shipping OS today.
The __malloc() debug hooks and the associated CRT debug interfaces are nice for cases where you have the source code of the tests; however, they can often miss allocations made by standard libraries or other linked code. This is expected, as they are the Visual Studio heap-debugging infrastructure.
gflags is a very comprehensive and detailed set of debugging capabilities which has been included with Windows for many years, having advanced functionality for both source and binary-only use cases (as it is the OS heap-debugging infrastructure).
It can log full stack traces (resolving the symbolic information in a post-process operation) of all heap users, for all heap-modifying entry points, serially if needed. It can also modify the heap with pathological cases which may align the allocation of data such that the page protection offered by the VM system is optimally assigned (i.e. allocate your requested heap block at the end of a page, so even a single-byte overflow is detected at the time of the overflow).
umdh is a tool which can help assess the status at various checkpoints; however, the data is continually accumulated during the execution of the target, so it is not a simple checkpointing debug stop in the traditional sense. Also, WARNING: last I checked at least, the total size of the circular buffer which stores the stack information for each request is somewhat small (64k entries), so you may need to dump rapidly for heavy heap users. There are other ways to access this data, but umdh is fairly simple.
NOTE there are 2 modes;
*
*MODE 1, umdh {-p:Process-id|-pn:ProcessName} [-f:Filename] [-g]
*MODE 2, umdh [-d] {File1} [File2] [-f:Filename]
I do not know what insanity gripped the developer who chose to alternate between -p:foo argument specifiers and naked ordering of arguments, but it can get a little confusing.
The debugging SDK works with a number of other tools; memsnap is a tool which apparently focuses on memory leaks and such, but I have not used it, so your mileage may vary.
Execute gflags with no arguments for the UI mode; +args and /args are different "modes" of use as well.
A: On Linux I've successfully used mtrace(3) to log allocations and freeings. Its usage is as simple as
*
*Modify your program to call mtrace() when you need to begin tracing (e.g. at the top of main()),
*Set environment variable MALLOC_TRACE to the file path where the trace should be saved and run the program.
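A minimal example of those two steps (glibc; mtrace() is declared in <mcheck.h>):
#include <mcheck.h>
#include <stdlib.h>

int main(void)
{
    mtrace();              /* start logging to the file named by $MALLOC_TRACE */
    void *p = malloc(16);
    free(p);
    return 0;
}
/* run:  MALLOC_TRACE=trace.log ./a.out
   then: mtrace ./a.out trace.log   summarizes any unmatched allocations */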
After that the output file will contain something like this (excerpt from the middle to show a failed allocation):
@ /usr/lib/tls/libnvidia-tls.so.390.116:[0xf44b795c] + 0x99e5e20 0x49
@ /opt/gcc-7/lib/libstdc++.so.6:(_ZdlPv+0x18)[0xf6a80f78] - 0x99beba0
@ /usr/lib/tls/libnvidia-tls.so.390.116:[0xf44b795c] + 0x9a23ec0 0x10
@ /opt/gcc-7/lib/libstdc++.so.6:(_ZdlPv+0x18)[0xf6a80f78] - 0x9a23ec0
@ /opt/Xorg/lib/video-libs/libGL.so.1:[0xf668ee49] + 0x99c67c0 0x8
@ /opt/Xorg/lib/video-libs/libGL.so.1:[0xf668f14f] - 0x99c67c0
@ /opt/Xorg/lib/video-libs/libGL.so.1:[0xf668ee49] + (nil) 0x30000000
@ /lib/libc.so.6:[0xf677f8eb] + 0x99c21f0 0x158
@ /lib/libc.so.6:(_IO_file_doallocate+0x91)[0xf677ee61] + 0xbfb00480 0x400
@ /lib/libc.so.6:(_IO_setb+0x59)[0xf678d7f9] - 0xbfb00480
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to push data to a variety of different client types in near real time? We need to push sports data to a number of different client types such as ajax/javascript, flash, .NET and Mac/iPhone. Data updates need only be near real time, with delays of several seconds being acceptable.
How to best accomplish this?
A: The best solution (if we're talking .NET) seems to be to use WCF and streaming HTTP. The client makes the first HTTP connection to the server at port 80; the connection is then kept open with a streaming response that never ends. (And if it does, it reconnects.)
Here's a sample that demonstrates this: Streaming XML.
The solution to pushing through firewalls: Keeping connections open in IIS
A: I would go with XML. XML is widely supported on all platforms and has lots of libraries and tools available for it. And since it's text, there are no issues when you pass it between platforms.
I know JSON is another alternative, but I'm not familiar enough with it to know whether or not to recommend it in this case.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to click on an AutoCompleteExtender with Watin For my acceptance testing I'm writing text into the auto complete extender and I need to click on the populated list.
In order to populate the list I have to use AppendText instead of TypeText; otherwise the textbox loses focus before the list is populated.
Now my problem is when I try to click on the populated list. I've tried searching for the UL element and clicking on it, but it's not firing the click event on the list.
Then I tried to search the list by tagname and value:
Element element = Browser.Element(Find.By("tagname", "li") && Find.ByValue("lookupString"));
but it's not finding it, has anyone been able to do what I'm trying to do?
A: In case someone has the same problem, it works with the following code:
string lookupString = "string in list";
Regex lookup = new Regex(string.Format(".*{0}.*", lookupString));
Element list = Browser.Element("li", Find.ByText(lookup));
list.MouseDown();
A: The shorter version of that is:
string lookupString = "string in list";
Element list = Browser.Element("li", Find.ByText(new Regex(lookupString)));
list.MouseDown();
Regexes will do a partial match, so you don't need to specify .* on either side or use string.Format. This assumes, however, that the lookupString doesn't contain any characters special to regexes; those would need to be escaped.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What is a reason that LINQ to SQL wouldn't generate a collection based on a relationship? I have a relationship between two entities (e1 and e2) and e1 has a collection of e2. I have a similar relationship set up between e2 and e3, yet e2 does not contain a collection of e3's. Any reason why this would happen? Anything I can post to make this easier to figure out?
Edit: I just noticed that the relationship between e1 and e2 is solid and between e2 and e3 is dotted, what causes that? Is it related?
A: Using this setup, everything worked.
1) LINQ to SQL Query, 2) DB Tables, 3) LINQ to SQL Data Model in VS.NET 2008
1 - LINQ to SQL Query
DataClasses1DataContext db = new DataClasses1DataContext();
var results = from threes in db.tableThrees
join twos in db.tableTwos on threes.fk_tableTwo equals twos.id
join ones in db.tableOnes on twos.fk_tableOne equals ones.id
select new { ones, twos, threes };
2 - Database Scripts
--Table One
CREATE TABLE tableOne(
[id] [int] IDENTITY(1,1) NOT NULL,
[value] [nvarchar](50) NULL,
CONSTRAINT [PK_tableOne] PRIMARY KEY CLUSTERED
( [id] ASC ) WITH (
PAD_INDEX = OFF,
STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY];
--Table Two
CREATE TABLE tableTwo(
[id] [int] IDENTITY(1,1) NOT NULL,
[value] [nvarchar](50) NULL,
[fk_tableOne] [int] NOT NULL,
CONSTRAINT [PK_tableTwo] PRIMARY KEY CLUSTERED
( [id] ASC ) WITH (
PAD_INDEX = OFF,
STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY];
ALTER TABLE tableTwo WITH CHECK
ADD CONSTRAINT [FK_tableTwo_tableOne]
FOREIGN KEY([fk_tableOne])
REFERENCES tableOne ([id]);
ALTER TABLE tableTwo CHECK CONSTRAINT [FK_tableTwo_tableOne];
--Table Three
CREATE TABLE tableThree(
[id] [int] IDENTITY(1,1) NOT NULL,
[value] [nvarchar](50) NULL,
[fk_tableTwo] [int] NOT NULL,
CONSTRAINT [PK_tableThree] PRIMARY KEY CLUSTERED
([id] ASC ) WITH (
PAD_INDEX = OFF,
STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY];
ALTER TABLE tableThree WITH CHECK
ADD CONSTRAINT [FK_tableThree_tableTwo]
FOREIGN KEY([fk_tableTwo])
REFERENCES tableTwo ([id]);
ALTER TABLE tableThree CHECK CONSTRAINT [FK_tableThree_tableTwo];
3 - LINQ to SQL Data Model in Visual Studio
(screenshot: http://i478.photobucket.com/albums/rr148/KyleLanser/ThreeLevelHierarchy.png)
A: The FK constraints are set up like this:
ALTER TABLE [dbo].[e2] WITH CHECK ADD CONSTRAINT [FK_e2_e1] FOREIGN KEY([E1Id]) REFERENCES [dbo].[e1] ([Id])
ALTER TABLE [dbo].[e3] WITH CHECK ADD CONSTRAINT [FK_e3_e2] FOREIGN KEY([E2Id]) REFERENCES [dbo].[e2] ([Id])
Is this what you were asking for?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What role to give a sql login in order to create a database and additional logins What role should I give a sql login if I need the login to be able to create a database, and create additional logins and add users based on those logins to the database i created? This is sql 2005.
A: There is no single fixed role that includes all of these permissions, and SQL Server 2005 doesn't support user-defined server-level roles, so grant the server-level permissions directly to the login (or add it to the dbcreator and securityadmin fixed server roles):
GRANT CREATE ANY DATABASE TO [your_login]
GRANT ALTER ANY LOGIN TO [your_login]
ALTER ANY USER is a database-level permission, so grant it within each database where the login has a user:
GRANT ALTER ANY USER TO [your_user]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Average User Download Speeds Any ideas what the average user's download speed is? I'm working on a site that streams video and am trying to figure out what an average download speed is, so as to determine quality.
I know I might be comparing apples with oranges, but I'm just looking for something to get a basis for where to start.
A: Speedtest.net has a lot of stats broken down by country, region, city and ISP. Not sure about accuracy, since it's only based on the people using their "bandwidth measurement" service.
A: It would depend on the geography that you are targeting. For example, in India, you can safely assume it would be a number below 256kbps.
A: Try attacking it from the other angle. Look at streaming services that cater to the customers you want and have significant volume (maybe YouTube), and see what they're pushing. You'll find there's a pretty direct correlation between Alexa rating (popularity) and quality (minimum bitrate required).
There are many other factors, and this should only form one small facet of your bandwidth decision, but it's a useful comparison to make.
Keep in mind, however, that you want to degrade gracefully. As more and more sites come online you'll start bumping into ISPs that limit total transfer, and being able to tell your customers how much of their bandwidth your site is consuming is useful, as well as proclaiming that you are a low bandwidth site.
Further, more and more users are using portable cellular connections (iPhone) where limited bandwidth is a big deal. AT&T has oversold many markets, so being able to get useful video through a tiny link will enable you to capture market share that Vimeo and Hulu cannot.
Quite frankly, though, the best thing to do is degrade on the fly gracefully. Measure the bandwidth of the connection continuously and adjust bandwidth as needed for a smooth playback experience with good audio. Then you can take all users across the gamut...
-Adam
A: You could try looking at the lower tier offerings from AT&T and Comcast. Probably 1.5 Mbps for the basic level (which I imagine most people get).
The "test your bandwidth" sites may have some stats on this, too.
A: There are a lot of factors involved (server bandwidth, local ISP, network in between, etc) which make it difficult to give a hard answer. With my current ISP, I typically get 200-300 kB/sec. Although when the planets align I've gotten as much as 2 MB/sec (the "quoted" peak downlink speed). That was with parallel streams, however. The peak bandwidth I've achieved on a single stream is 1.2 MB/sec
A: The best strategy is always to give your users options. Why don't you start the stream at a low bitrate that will work for everyone and provide a "High Quality" link for those of us with FTTH connections? I believe YouTube has started doing this.
A: According to CWA, the average US resident has a 1.9Mbps download speed. They have data by state, so if you have money then you can probably get a more specific report for your intended audience. Keep in mind, however, that more and more people are sharing this with multiple computers, using VOIP devices, and running background processes that consume bandwidth.
-Adam
A: Wow.
This is so dependent on the device, connection method, connection type, ISP throttling, etc. involved in the end-to-end link.
To try and work out an average speed would be fairly impossible.
Think fat pipe at home (8Mb plus) versus a bad wireless connection provided for free at the airport (9.6kb), and you can start to get an idea of the range of connections you're trying to average over.
Then we move onto variations in screen sizes and device capabilities.
Maybe trawl the UA strings of incoming connections to get an idea of the capabilities of the user devices being used out there.
Maybe see if you can use some sort of geolocation solution to try and see how people are connecting to your site to get an idea of connection capabilities as well.
Are you offering the video in a fixed format, i.e. X x Y pixel size?
HTH.
cheers,
Rob
A: If I'm using your site, "average" doesn't matter. All I care about is MY experience, and so you either need to make the site adaptive, design for a pretty low speed (iPhone 2G gets you 70-80 kbps if you're lucky, to take one common case), or be very clear about the requirements so I can decide whether or not my connection-of-the-moment will work or not.
What you don't want to subject your users to is unpredictably choppy, intermittent video and audio.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Nesting a GridView within Repeater I have a scenario wherein, for example, I need to repeat a list of US states and display a table of cities and city populations after the name of each state. The design requirement dictates that every outer repetition must be the name of a state followed by a table of cities, and that requirement cannot be changed at this time. Are there disadvantages to nesting a GridView within a Repeater and then binding each repeated GridView during the Repeater's ItemDataBound event? What are some alternative solutions?
A: If it were me, I'd reverse the question and ask why I should use a GridView. If you need a bunch of built-in features like paging and sorting, then the GridView might be a good fit. If you just want tabular data, I'd reconsider. Why? Because with GridView you're getting a whole bunch of stuff you won't use, your ViewState will be potentially huge, and your page performance will be slower.
I'm not a bigot when it comes to GridView, but I only use them when there is a damn good reason.
A: In your above scenario, you'd be better off doing a master-detail style GridView, which will save you the overhead of all those GridView objects that get created.
There are various implementation of it (using a drop down for the master, using a modal popup for the detail, etc.), but the main point is that there are implementations available.
A: At the very least, hopefully you can turn off ViewState on the GridViews.
A: The best solution I was able to come up with was to nest the GridView in the Repeater. Then I bound each repeated GridView during the Repeater's ItemDataBound event. I turned off their ViewStates, of course, as they weren't required.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Best way to detect a release build from a debug build? .net So I have about 10 short CSS files that I use with my MVC app.
There are like
error.css
login.css
etc...
Just some really short CSS files that make updating and editing easy (at least for me). What I want is something that will optimize away the if/else branch so it is not incorporated into the final bits. I want to do something like this:
if(Debug.Mode){
<link rel="stylesheet" type="text/css" href="error.css" />
<link rel="stylesheet" type="text/css" href="login.css" />
<link rel="stylesheet" type="text/css" href="menu.css" />
<link rel="stylesheet" type="text/css" href="page.css" />
} else {
<link rel="stylesheet" type="text/css" href="site.css" />
}
I'll have an MSBuild task that will combine all the CSS files, minimize them and all that good stuff. I just need to know if there is a way to remove the if/else branch in the final bits.
A: if (System.Diagnostics.Debugger.IsAttached)
{
// Do this
}
else
{
// Do that
}
A: I should have used Google.
#if DEBUG
Console.WriteLine("Debug mode.")
#else
Console.WriteLine("Release mode.")
#endif
Make sure that the option "Configuration settings" -> "Build" -> "Define DEBUG constant" in the project properties is checked.
A: You can try to use
HttpContext.Current.IsDebuggingEnabled
It is controlled by a node in the configuration. In my opinion this is a nicer solution than conditional compilation.
However, if you want to be able to control it based on the compilation, I think you can use a ConditionalAttribute.
Regards,
A: Specifically, like this in C#:
#if (DEBUG)
Debug Stuff
#endif
C# has the following preprocessor directives:
#if
#else
#elif // Else If
#endif
#define
#undef // Undefine
#warning // Causes the preprocessor to fire warning
#error // Causes the preprocessor to fire a fatal error
#line // Lets the preprocessor know where this source line came from
#region // Codefolding
#endregion
A: Compiler constants. I don't remember the C# syntax, but this is how I do it in VB:
#If CONFIG = "Debug" Then
'do something
#Else
'do something else
#End If
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Search by hash? I had the idea of a search engine that would index web items like other search engines do now but would only store the file's title, url and a hash of the contents.
This way it would be easy to find items on the web if you already had them and didn't know where they came from or wanted to know all the places that something appeared.
More useful for non textual items like images, executables and archives.
I was wondering if there is already something similar?
A: Check out the wikipedia page on locality sensitive hashing. There's also a good page hosted by a research on MIT.
In general, there are several flavors available: hashes for strings (such as simhash), sets or 0/1 features (such as min-wise hashes), and for real vectors.
The main trick for numerical hashes is basically dimension reduction, so far. For strings, the idea is to come up with a representation that's robust in the face of minor edits.
I'm also doing a little research in this field, although I guess stackoverflow might not be the right place for nascent work.
A: Well, for images, there's http://tineye.com, which will one-up that, and find you similar images too.
A: The question seems to focus on exact match hashes, which we understand better than nearest-neighbor approaches, and are indeed worthwhile, especially if people can share tags and other metadata that way.
As @rjmunro notes, hash-based searching is a popular idea in the P2P world, and Bitzi did pretty much this, though they have shut down and their Bitpedia (Digital Media Encyclopedia) isn't hosted there any more, though some of it at least is still available at Archive.org.
Bitzi also produced software like Bitcollider (SourceForge.net),
and the Magnet URI scheme, which allows for specifying a file by hash and is thus a content-based identifier. Various applications support searching at various databases via Magnet URIs as described at that Wikipedia page.
The same idea is popular in the password-cracking scene - see e.g. findmyhash - Python script to crack hashes using online services etc.
Going a step further, I think it would be great if there were databases and online repositories identifying content by hash and providing tags and other metadata about the content from various perspectives. Then I could leave my music collection in its pristine state (no wasted backup space and time), but still tag them myself and add other metadata, via external tag databases. If my applications knew how to grab the tags, it would seem much better than the current system where we modify and copy around big files just to move tags from e.g. my desktop to my phone.
See a related idea at Metadata Independent Hashing for Media Identification & P2P Transfer Optimisation (pdf).
A: It's not a bad idea. Sometimes I find myself stumbling upon some file, trying to figure out where it came from :) But how are you going to track an item's sources? Content can be obtained by various means - web browser, download manager, simply by copying from a network share.
A: If I understand your proposal right, http://bitzi.com/ has done this for a while.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do you create a weak reference to an object in Python? How do you create a weak reference to an object in Python?
A: >>> import weakref
>>> class Object:
... pass
...
>>> o = Object()
>>> r = weakref.ref(o)
>>> # if the reference is still active, r() will be o, otherwise None
>>> do_something_with_o(r())
See the weakref module docs for more details.
You can also use weakref.proxy to create an object that proxies o. It will throw ReferenceError if used when the referent is no longer referenced.
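For example:
>>> p = weakref.proxy(o)   # use p exactly as you would use o
>>> del o                  # once o is collected, any use of p raises ReferenceError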
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Redirecting non-www URL to www using .htaccess I'm using Helicon's ISAPI Rewrite 3, which basically enables .htaccess in IIS. I need to redirect a non-www URL to the www version, i.e. example.com should redirect to www.example.com. I used the following rule from the examples but it affects subdomains:
RewriteCond %{HTTPS} (on)?
RewriteCond %{HTTP:Host} ^(?!www\.)(.+)$ [NC]
RewriteCond %{REQUEST_URI} (.+)
RewriteRule .? http(?%1s)://www.%2%3 [R=301,L]
This works for the most part, but it also redirects sub.example.com to www.sub.example.com. How can I rewrite the above rule so that subdomains do not get redirected?
A: Append the following RewriteCond:
RewriteCond %{HTTP:Host} ^[^.]+\.[a-z]{2,5}$ [NC]
That way it'll only apply the rule to nondottedsomething.uptofiveletters; as you can see, subdomain.domain.com will not match the condition and thus will not be rewritten.
You can change [a-z]{2,5} to a stricter TLD-matching regex, as well as placing all the constraints for allowed characters in domain names (as [^.]+ is more permissive than strictly necessary).
All in all I think in this case that wouldn't be necessary.
EDIT: sadie spotted a flaw in the regex; changed the first part of it from [^.] to [^.]+
A: I've gotten more control using urlrewriter.net, something like:
<unless header="Host" match="^www\.">
<if url="^(https?://)[^/]*(.*)$">
<redirect to="$1www.domain.tld$2"/>
</if>
<redirect url="^(.*)$" to="http://www.domain.tld$1"/>
</unless>
A: Zigdon has the right idea except his regex isn't quite right. Use
^example\.com$
instead of his suggestion of:
^example\.com(.*)
Otherwise you won't just be matching example.com, you'll be matching things like example.comcast.net, example.com.au, etc.
A: @Vinko
For your generic approach, I'm not sure why you chose to limit the length of the TLD in your regex? It's not very future-proof, and I'm unsure what benefit it's providing? It's actually not even "now-proof" because there's at least one 6-character TLD out there (.museum) which won't be matched.
It seems unnecessary to me to do this. Couldn't you just do ^[^.]+\.[^.]\+$? (note: the question-mark is part of the sentence, not the regex!)
All that aside, there is a bigger problem with this approach that is: it will fail for domains that aren't directly beneath the TLD. This is domains in Australia, UK, Japan, and many other countries, who have hierarchies: .co.jp, .co.uk, .com.au, and so on.
Whether or not that is of any concern to the OP, I don't know but it's something to be aware of if you're after a "fix all" answer.
The OP hasn't yet made it clear whether he wants a generic solution or a solution for a single (or small group) of known domains. If it's the latter, see my other note about using Zigdon's approach. If it's the former, then proceed with Vinko's approach taking into account the information in this post.
Edit: One thing I've left out until now, which may or may not be an option for you business-wise, is to go the other way. All our sites redirect http://www.domain.com to http://domain.com. The folks at http://no-www.org make a pretty good case (IMHO) for this being the "right" way to do it, but it's still certainly just a matter of preference. One thing is for sure though, it's far easier to write a generic rule for that kind of redirection than this one.
A: @org 0100h Yes, there are many variables left out of the description of the problem, and all your points are valid ones and should be addressed in the event of an actual implementation. There are both pros and cons to your proposed regex. On the one hand it's easier and future proof, on the other, do you really want to match example.foobar if sent in the Host header? There might be some edge cases when you'll end up redirecting to the wrong domain. A thrid alternative is modifying the regex to use a list of the actual domains, if more than one, like
RewriteCond %{HTTP:Host} (example.com|example.net|example.org) [NC]
(Note to chris, that one will change %1)
@chrisofspades It's not meant to replace it, your condition number two ensures that it doesn't have www, whereas mine doesn't. It won't change the values of %1, %2, %3 because it doesn't store the matches (iow, it doesn't use parentheses).
A: Can't you adjust the RewriteCond to only operate on example.com?
RewriteCond %{HTTP:Host} ^example\.com(.*) [NC]
A: Why don't you just have something like this in your vhost (httpd) file?
ServerName www.example.com
ServerAlias example.com
Of course that won't redirect; it will just carry on as normal.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Can you do a partial checkout with Subversion? If I had 20 directories under trunk/ with lots of files in each and only needed 3 of those directories, would it be possible to do a Subversion checkout with only those 3 directories under trunk?
A: Subversion 1.5 introduces sparse checkouts which may be something you might find useful. From the documentation:
... sparse directories (or shallow checkouts) ... allows you to easily check out a working copy—or a portion of a working copy—more shallowly than full recursion, with the freedom to bring in previously ignored files and subdirectories at a later time.
A: I wrote a script to automate complex sparse checkouts.
#!/usr/bin/env python
'''
This script makes a sparse checkout of an SVN tree in the current working directory.
Given a list of paths in an SVN repository, it will:
1. Checkout the common root directory
2. Update with depth=empty for intermediate directories
3. Update with depth=infinity for the leaf directories
'''
import os
import getpass
import pysvn
__author__ = "Karl Ostmo"
__date__ = "July 13, 2011"
# =============================================================================
# XXX The os.path.commonprefix() function does not behave as expected!
# See here: http://mail.python.org/pipermail/python-dev/2002-December/030947.html
# and here: http://nedbatchelder.com/blog/201003/whats_the_point_of_ospathcommonprefix.html
# and here (what ever happened?): http://bugs.python.org/issue400788
from itertools import takewhile
def allnamesequal(name):
return all(n==name[0] for n in name[1:])
def commonprefix(paths, sep='/'):
bydirectorylevels = zip(*[p.split(sep) for p in paths])
return sep.join(x[0] for x in takewhile(allnamesequal, bydirectorylevels))
# =============================================================================
def getSvnClient(options):
password = options.svn_password
if not password:
password = getpass.getpass('Enter SVN password for user "%s": ' % options.svn_username)
client = pysvn.Client()
client.callback_get_login = lambda realm, username, may_save: (True, options.svn_username, password, True)
return client
# =============================================================================
def sparse_update_with_feedback(client, new_update_path):
revision_list = client.update(new_update_path, depth=pysvn.depth.empty)
# =============================================================================
def sparse_checkout(options, client, repo_url, sparse_path, local_checkout_root):
path_segments = sparse_path.split(os.sep)
path_segments.reverse()
# Update the middle path segments
new_update_path = local_checkout_root
while len(path_segments) > 1:
path_segment = path_segments.pop()
new_update_path = os.path.join(new_update_path, path_segment)
sparse_update_with_feedback(client, new_update_path)
if options.verbose:
print "Added internal node:", path_segment
# Update the leaf path segment, fully-recursive
leaf_segment = path_segments.pop()
new_update_path = os.path.join(new_update_path, leaf_segment)
if options.verbose:
print "Will now update with 'recursive':", new_update_path
update_revision_list = client.update(new_update_path)
if options.verbose:
for revision in update_revision_list:
print "- Finished updating %s to revision: %d" % (new_update_path, revision.number)
# =============================================================================
def group_sparse_checkout(options, client, repo_url, sparse_path_list, local_checkout_root):
if not sparse_path_list:
print "Nothing to do!"
return
checkout_path = None
if len(sparse_path_list) > 1:
checkout_path = commonprefix(sparse_path_list)
else:
checkout_path = sparse_path_list[0].split(os.sep)[0]
root_checkout_url = os.path.join(repo_url, checkout_path).replace("\\", "/")
revision = client.checkout(root_checkout_url, local_checkout_root, depth=pysvn.depth.empty)
checkout_path_segments = checkout_path.split(os.sep)
for sparse_path in sparse_path_list:
# Remove the leading path segments
path_segments = sparse_path.split(os.sep)
start_segment_index = 0
for i, segment in enumerate(checkout_path_segments):
if segment == path_segments[i]:
start_segment_index += 1
else:
break
pruned_path = os.sep.join(path_segments[start_segment_index:])
sparse_checkout(options, client, repo_url, pruned_path, local_checkout_root)
# =============================================================================
if __name__ == "__main__":
from optparse import OptionParser
usage = """%prog [path2] [more paths...]"""
default_repo_url = "http://svn.example.com/MyRepository"
default_checkout_path = "sparse_trunk"
parser = OptionParser(usage)
parser.add_option("-r", "--repo_url", type="str", default=default_repo_url, dest="repo_url", help='Repository URL (default: "%s")' % default_repo_url)
parser.add_option("-l", "--local_path", type="str", default=default_checkout_path, dest="local_path", help='Local checkout path (default: "%s")' % default_checkout_path)
default_username = getpass.getuser()
parser.add_option("-u", "--username", type="str", default=default_username, dest="svn_username", help='SVN login username (default: "%s")' % default_username)
parser.add_option("-p", "--password", type="str", dest="svn_password", help="SVN login password")
parser.add_option("-v", "--verbose", action="store_true", default=False, dest="verbose", help="Verbose output")
(options, args) = parser.parse_args()
client = getSvnClient(options)
group_sparse_checkout(
options,
client,
options.repo_url,
map(os.path.relpath, args),
options.local_path)
A: Or do a non-recursive checkout of /trunk, then just do a manual update on the 3 directories you need.
A: Indeed, thanks to the comments to my post here, it looks like sparse directories are the way to go. I believe the following should do it:
svn checkout --depth empty http://svnserver/trunk/proj
svn update --set-depth infinity proj/foo
svn update --set-depth infinity proj/bar
svn update --set-depth infinity proj/baz
Alternatively, --depth immediates instead of empty checks out files and directories in trunk/proj without their contents. That way you can see which directories exist in the repository.
As mentioned in @zigdon's answer, you can also do a non-recursive checkout. This is an older and less flexible way to achieve a similar effect:
svn checkout --non-recursive http://svnserver/trunk/proj
svn update proj/foo
svn update proj/bar
svn update proj/baz
A: If you already have the full local copy, you can remove unwanted subfolders by using the --set-depth command.
svn update --set-depth=exclude www
See: http://blogs.collab.net/subversion/sparse-directories-now-with-exclusion
The set-depth command supports multiple paths.
Updating the root of the local copy will not change the depth of the modified folder.
To restore the folder to a fully recursive checkout, use --set-depth again with the infinity parameter.
svn update --set-depth=infinity www
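For completeness, the same sticky-depth updates can be driven programmatically, in the spirit of the pysvn script in the question. A minimal sketch, assuming a pysvn build new enough to expose the depth and depth_is_sticky arguments on update:
import pysvn

client = pysvn.Client()

# Drop a subtree from the working copy (the CLI's --set-depth=exclude).
client.update('www', depth=pysvn.depth.exclude, depth_is_sticky=True)

# Restore it to a fully recursive checkout (the CLI's --set-depth=infinity).
client.update('www', depth=pysvn.depth.infinity, depth_is_sticky=True)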
A: I'm adding this information for those using the TortoiseSvn tool: to obtain the same functionality as the OP, you can use the Choose items... button in the Checkout Depth section of the Checkout dialog, which lets you tick exactly the folders you want.
A: Not in any especially useful way, no. You can check out subtrees (as in Bobby Jack's suggestion), but then you lose the ability to update/commit them atomically; to do that, they need to be placed under their common parent, and as soon as you check out the common parent, you'll download everything under that parent. Non-recursive isn't a good option, because you want updates and commits to be recursive.
A: Sort of. As Bobby says:
svn co file:///.../trunk/foo file:///.../trunk/bar file:///.../trunk/hum
will get the folders, but from a Subversion perspective they will be separate working copies. You will have to do separate commits and updates on each subfolder.
I don't believe you can checkout a partial tree and then work with the partial tree as a single entity.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "173"
} |
Q: Application Testing Is the real benefit in TDD the actual testing of the application, or the benefits that writing a testable application brings to the table? I ask because I feel too often the conversation revolves so much around testing, and not the total benefits package.
A: TDD helps you design your software. The tests becomes the design. By writing the test first you think about your code from a consumer perspective, making a more user friendly and more compact software design.
Also, by applying TDD you typically end up writing your code in a way where you can supply test mocks and stubs. This leads to less coupled software, making it easier to change and maintain over time.
So I guess a lot of the talk around TDD is about testing, but by doing it other big benefits follow, such as quality (coverage), flexibility (decoupling), and better design (thinking as the consumer of the API).
A: The real improvement is that it is a good way to force you to really think through the design and implementation. Then, once you've prepared the tests and written the code, solutions to unforeseen problems appear more easily.
Something that usually happens to me that is a good analogy: When I'm going to post a question to a forum or IRC channel, I like to have the problems well written and fully described, many times the process of preparing a well written and complete description of the problem magically makes the solution appear.
A: The real benefit of TDD is supposed to be that it allows you to modify/refactor/enhance your application without worrying about whether you've broken existing functionality. The fact that writing unit tests tends to result in loosely coupled code and better architecture isn't necessarily the point of TDD, but I think it's hard to have one without the other.
You can't really experience the benefit of TDD unless you have unit tests with good coverage. In order to do that, you're going to have to write testable code. That's why the two are often used in conjunction or in place of one another.
A: Automated testing is such a time saver and confidence booster when you are developing a product that you'll ship multiple versions of. With automated tests, you know that you haven't broken anything between versions. This is especially helpful when your product is something that people can write add-ons for - you don't want to break their add-ons between versions.
With TDD, you get a good suite of tests as you develop. Without TDD writing those tests is much more difficult.
A: Michael Feathers has an insightful blog post about this titled The Flawed Theory Behind Unit Testing. Seriously, go read it. The punch line is
All of these techniques have been shown to increase quality. And, if we look closely we can see why: all of them force us to reflect on our code.
but you should read the full post for the context.
A: Automated testing keeps humans from doing a machine's job.
Test-driven development maximizes the amount of automated testing.
Beyond a certain point, of course, a human is still required. You reach diminishing returns when you try to apply TDD beyond that point.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Will random data appended to a JPG make it unusable? So, to simplify my life I want to be able to append from 1 to 7 additional characters on the end of some jpg images my program is processing*. These are dummy padding (fillers, etc - probably all 0x00) just to make the file size a multiple of 8 bytes for block encryption.
Having tried this out with a few programs, it appears they are fine with the additional characters, which occur after the FF D9 that specifies the end of the image - so it appears that the file format is well defined enough that the 'corruption' I'm adding at the end shouldn't matter.
I can always post process the files later if needed, but my preference is to do the simplest thing possible - which is to let them remain (I'm decrypting other file types and they won't mind, so having a special case is annoying).
I figure with all the steganography hullabaloo years ago, someone has some input here...
(encryption processing by 8 byte blocks, I don't want to save pre-encrypted file size, so append 0x00 to input data, and leave them there after decoding)
A: You can, but the results may be unpredictable.
Even though there is enough information in the format to tell the client to ignore the extra data, it is likely not a case the programmer tested for.
A paranoid program might look at the size, notice the discrepancy and decide it won't process your file because clearly it doesn't fully understand it. This is particularly likely when reading data from the web, where random bytes in a file could be considered a security risk.
A: You can embed your data in the XMP tag within a JPEG (or EXIF or IPTC fields for that matter).
XMP is XML so you have a fair bit of flexibility there to do you own custom stuff.
It's probably not the simplest thing possible but putting your data here will maintain the integrity of the JPEG and require no "post processing".
Your data will then show up in other imaging software such as PhotoShop, which may not be ideal.
A: No, you can add bits to the end of a jpg file, without making it unusable. The heading of the jpg file tells how to read it, so the program reading it will stop at the end of the jpg data.
In fact, people have hidden zip files inside jpg files by appending the zip data to the end of the jpg data. Because of the way these formats are structured, the resulting file is valid in either format.
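For what it's worth, the padding step described in the question is only a few lines of Python; a sketch using just the standard library (the marker check is advisory, since some otherwise-valid files already carry trailing data):
import os

def pad_to_block(path, block_size=8):
    # A cleanly terminated JPEG ends with the EOI marker FF D9.
    with open(path, 'rb') as f:
        f.seek(-2, os.SEEK_END)
        if f.read(2) != b'\xff\xd9':
            print('warning: %s does not end with FF D9' % path)
    # Append 0x00 filler so the file size becomes a multiple of the block size.
    remainder = os.path.getsize(path) % block_size
    if remainder:
        with open(path, 'ab') as f:
            f.write(b'\x00' * (block_size - remainder))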
A: As others have stated, you have no control how programs process image files and therefore some programs may find the images valid others may not.
However, there is a bigger issue here. Judging by your question, I'm deducing you're practicing "security through obscurity." It's widely considered a very bad practice. Use Google to find a plethora of articles about the topic.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Are there any languages that implement generics _well_? I liked the discussion at Differences in Generics, and was wondering whether there were any languages that used this feature particularly well.
I really dislike Java's List<? extends Foo> for a List of things that are Liskov-substitutable for Foo. Why can't List<Foo> cover that?
And honestly, Comparable<? super Bar>?
I also can't remember for the life of me why you should never return an Array of generics:
public <T> T[] getAll() { ... }
I never liked templates in C++, but that was mostly because none of the compilers could ever spit out a remotely meaningful error message for them. One time I actually did a make realclean && make 17 times to get something to compile; I never did figure out why the 17th time was the charm.
So, who actually likes using generics in their pet language?
A: Heck, English doesn't even implement generics well. :)
My bias is for C#. Mainly because that is what I am currently using and I have used them to good effect.
A: I think the generics in Java are actually pretty good. The reason why List<Foo> is different than List<? extends Foo> is that when Foo is a subtype of Bar, List<Foo> is not a subtype of List<Bar>. If you could treat a List<Foo> object as a List<Bar>, then you could add Bar objects to it, which could break things. Any reasonable type system will require this. Java lets you get away with treating Foo[] as a subtype of Bar[], but this forces runtime checks, reducing performance. When you return such an array, this makes it difficult for the compiler to know whether to do a runtime check.
I have never needed to use the lower bounds (List<? super Foo>), but I would imagine they might be useful for returning generic values. See covariance and contravariance.
On the whole though, I definitely agree with the complaints about overly verbose syntax and confusing error messages. Languages with type inference like OCaml and Haskell will probably make this easier on you, although their error messages can be confusing as well.
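As an aside, the same variance rule exists outside Java. A sketch of it in Python's typing module, enforced by a checker such as mypy rather than at runtime (the class names are invented for illustration):
from typing import List, Sequence

class Bar: ...
class Foo(Bar): ...

def add_a_bar(items: List[Bar]) -> None:
    items.append(Bar())  # fine for a List[Bar], disastrous for a List[Foo]

def read_bars(items: Sequence[Bar]) -> None:
    for item in items:
        print(item)

foos: List[Foo] = [Foo()]
# add_a_bar(foos)  # rejected by mypy: List is invariant, for exactly this reason
read_bars(foos)    # accepted: Sequence is read-only, hence covariant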
A: I'll add OCaml to the list, which has really generic generics. I agree that Haskell's type classes are really well done, but it's a bit different in that Haskell has no OO semantics, but OCaml does support OO.
A: Haskell implements type-constructor parameterisation (generics, or parametric polymorphism) quite well. So does Scala (although it needs a bit of hand-holding sometimes).
Both of these languages have higher-kinded types (a.k.a. abstract type constructors, or type-constructor polymorphism, or higher-order polymorphism).
See here: Generics of a Higher Kind
A: I use .Net (VB.Net), and haven't had any problems using generics. It's mostly painless.
Dim Cars as List(Of Car)
Dim Car as Car
For Each Car in Cars
...
Next
Never had any problems using the generic collections, although I haven't gone so far as to design any objects that use generics on my own.
A: I think that C# and VB.NET do a good job with generics.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Using strings in XML Literals I'm a C# developer who's fumbling in the first VB code he's written since VB6, so if I am asking a rather obvious question, please forgive me.
I decided to experiment with XML Literals to generate some XML code for me, instead of using XMLDocument
I have 2 questions, the second regarding a workaround due to my inability to figure out the first.
1: Ideal solution
I have a Dictionary of ElementName, ElementValue whose KeyValue pairs I was looping over in the hope of generating the values dynamically, but the following syntax dies a horrible death
Dim xConnections As XElement
For Each connection As Connection In connections.AsList
For Each kvp As KeyValuePair(Of String, String) In connection.DecompiledElements
xConnections = <Connections> <<%= kvp.Key %>><%= kvp.Value %></<%= kvp.Key %>> </Connections>
Next
Next
I have vague memories of the T4 syntax (the <%=%> syntax) being able to handle more complex operations (rather than direct assignment to the <%= ) and a 'Response.Write' like object to write output to, but I can't remember the details.
2: Cludgy workaround
Instead I thought of building a StringBuilder object and assigning its .ToString to the XElement, but that also failed with a conversion error.
I would prefer to continue using my key value pair concept in example one above, as I feel cludging together a string as in example 2 above is rather nasty, and I really should go back to using XMLDocument instead.
Any thoughts or assistance greatly appreciated
A: VB.NET XML Literals are very powerful, but most often adding some LINQ to them makes them truly awesome. This code should do exactly what you're trying to do.
Dim Elements = New Dictionary(Of String, String)
Elements.Add("Key1", "Value1")
Elements.Add("Key2", "Value2")
Elements.Add("Key3", "Value3")
Dim xConnections = <Connections>
<%= From elem In Elements _
Select <<%= elem.Key %>><%= elem.Value %></> %>
</Connections>
The empty closing tag </> is all that is needed for the vb compiler to properly construct an xml element whose name is generated from a value within a <%= %> block.
Calling xConnections.ToString renders the following:
<Connections>
<Key1>Value1</Key1>
<Key2>Value2</Key2>
<Key3>Value3</Key3>
</Connections>
A: If I understand correctly what you are trying to do, you can use the StringBuilder. Use the StringBuilder.Append method and append the XmlElement 'OuterXml' property.
For example:
sb.Append(xmlElement.OuterXml)
A: To answer this more completely...
When injecting Strings into an XML Literal, it will not work properly unless you use XElement.Parse when injecting an XElement (this is because special characters are escaped)
So your ideal solution is more like this:
Dim conns = connections.AsList()
If conns IsNot Nothing AndAlso conns.Count > 0 Then
    Dim xConnections = _
        <Connections>
            <%= From connection In conns _
                From kvp As KeyValuePair(Of String, String) In connection.DecompiledElements() _
                Select XElement.Parse("<" & kvp.Key & ">" & kvp.Value & "</" & kvp.Key & ">") %>
        </Connections>
    Return xConnections.ToString()
End If
ToString will return the OuterXml properly as a String (Value will not...)
Of course, just drop the ToString() if you want to return an XElement instead.
Since I don't know what AsList() does, nor do I know what DecompiledElements do, set your error trapping accordingly. There are other ways to do the loops as well, this is just one solution.
A: We would all be remiss not to mention that dynamic XML element names are generally a bad idea. The whole point of XML is to create a store a data structure in a form that is readily:
*
*Verifiable
*Extendable
Dynamic element names fail that first condition. Why not simply use a standard XML format for storing key/value pairs like plists?
<dict>
<key>Author</key>
<string>William Shakespeare</string>
<key>Title</key>
<string>Romeo et</string>
<key>ISBN</key>
<string>?????</string>
</dict>
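For illustration, this plist shape is easy to generate mechanically; Python's standard library plistlib, for instance, emits exactly this structure (the dictionary contents below are made up):
import plistlib

book = {"Author": "William Shakespeare", "Title": "Romeo and Juliet"}
# dumps() wraps the <dict> in a standard <plist> envelope.
print(plistlib.dumps(book).decode("utf-8"))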
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Java Desktop application framework I am working on designing and building a desktop application. I am thinking about using Eclipse or NetBeans as the base of this application. However, I have never built on either of these platforms. I am personally leaning towards NetBeans because it seems like that platform has an easier learning curve. But I wanted to ask people who have actually built on these platforms before: which one is easier to use?
My personal definition of easier is as follows:
*
*Easy to get started with
*Consistent and logical API
*Good documentation
*Easy to build and deploy
Thanks very much,
Josh
A: I can't say enough about the Eclipse RCP platform. I would recommend it for any Java desktop app development.
It's free, has great tutorials, and allows cross-platform development of rich desktop applications that use native OS windowing toolkit, meaning that your application will look native in any platform. It also has a clean API that stays out of your way, and makes deploying to any platform a piece of cake.
If your interested check out this book: http://www.amazon.com/Eclipse-Rich-Client-Platform-Applications/dp/0321334612
A: *
*Easy to use: I have experience developing on Eclipse and I have to say it's not easy to understand its development model. Sure, for basic stuff it has some wizards that make it easier, but for something a little more complex it's just difficult. I don't know about NetBeans, but I heard it's easier.
*Consistent API: I think Eclipse wins in this aspect. It runs on OSGi (which brings some complexity, though) and has plugin extensions for pretty much everything. It seems to be the platform of choice for plugin development, so I can assume it's reliable.
*Documentation: Eclipse wins by far. The help on the Eclipse site is excellent and the mailing list has plenty of user questions.
A: I have used Eclipse as a framework base but it was mostly just using SWT-- we didn't really use much of the RCP.
It really depends on what you're writing, but from what I have learned from using Eclipse it is really only suited for writing an app that involves you editing data using various views (just like editing code).
Anything more random than that can cause you to start pushing the framework in a direction it wasn't designed for, causing massive failure.
A: You'll use Swing to develop your application, because any other windowing framework is useful only if you have prior knowledge of it or the appropriate background.
You can rely on http://www.javadocking.com/ to streamline your application.
If your application requirements are highly focused on the user interface, maybe you can look in another direction like Adobe Air.
A: RCP has a bigger learning curve, but once you learn the basics the Eclipse IDE itself supports building RCP applications very well. I have only built a plugin for NetBeans (not built a full-fledged application), and the learning curve was gentler than for the RCP application. The book Eclipse Rich Client Platform: Designing, Coding, and Packaging Java(TM) Applications provides a detailed introduction to building RCP applications.
A: Please see some of the open source applications at http://www.eclipse.org/community/rcp.php before opining that RCP is only for building a text editor. Thanks.
A: Introduction to Eclipse RCP.
A: I've done a little work with both, but only on relatively simple applications. Both seem to have roughly equal capabilities. I personally prefer the Netbeans platform as it makes more sense to me.
You should also consider JSR-296, the Swing Application Framework. It provides a basic framework for building Swing Applications, managing application resources, state, etc, but without as much "baggage" as Netbeans and Eclipse. Netbeans IDE has a number of tools for building applications on the SAF. See https://appframework.dev.java.net/ for more info.
Without knowing more about your application, it's hard to point you at the appropriate strengths/weaknesses of the two platforms.
A: I have a similar task and I am also considering different frameworks. I have some experience with Eclipse (~4 months developing RCP) and now NetBeans (played around for a couple of days). IMHO these frameworks are too complicated. You just end up debugging into Eclipse-specific plugin loaders trying to figure out why you get a ClassNotFoundException or a NullPointer. The same story with NetBeans: somehow, somewhere, something reads out XML config files and creates the UI from them - cool, as long as you follow the tutorial. And of course you can't develop an Eclipse-based RCP using some other IDE; NB is also very jealous about its usage (unless you do some hacks like I did).
What I'm lacking is a clear way to debug my threads from main to action performed. Instead I'm always told what I have to do to avoid exceptions. And so I have to keep my fingers crossed each time I'm trying to pull something new. And it never works out the first time.
Now I thought about the features I need and I looked for smaller projects that aim at specific functionality, like plugin management for example. And there are plenty. Just compile a list of the functionality you need and start adding smaller frameworks by integrating them into your project. This also makes sense since your project should consume less RAM in the end.
A: If you really want to avoid cross-platform maintenance then work with Swing. This buys you all platforms. RCP requires native libraries for each platform.
We have had good experiences with Swing.
A: I would use Eclipse RCP when you really need a platform to build on top of, not just "widgetry" like Swing or SWT. RCP is rock solid and consistent, modular and very flexible. Once you master it, you get huge benefits. Being a platform, it gives you the most commonly used things - preferences, configuration, automatic updates, layout management, branding and things like that. You build a product, not an app. But the learning curve is pretty steep in the beginning.
Swing, on the other hand, is not a platform; you will re-invent the wheel by writing your own versions of the things I mentioned above. But yes, Swing is faster to learn and get started with. I think it fits better for smaller applications with a shorter life span.
A: Look at some of the open source applications at http://www.eclipse.org/community/rcp.php before opining that RCP is only for building basic stuff.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Lack of operator overloading in ActionScript 3.0 One of the things I miss the most in ActionScript is the lack of operator overloading, in particular ==. I kind of work around this issue by adding a "Compare" method to my classes, but that doesn't help in many cases, like when you want to use things like the built in Dictionary.
Is there a good way to work around this problem?
A: Yes it can be done (but be careful, it's hacky): http://filimanjaro.com/2012/operators-overloading-in-as3-javascript-too-%E2%80%93-workaround/
In the tutorial I wrote about +=, -= operator overloading. But it's also possible with ==, I can write more about that if it's not clear.
Ah, the approach has some drawbacks (in rare cases it can even be dangerous). Think twice before using it in production.
EDIT:
After tests it seems the trick with +=, -= doesn't apply to the == operator (which makes sense). Sorry for the misleading info.
A: Nope.
But it doesn't hurt to add equals methods to your own classes. I try to never use == when comparing objects (the same goes for ===, which is the same thing for objects) since it only checks identity.
Sadly all the collections in Flash and Flex assume that identity is the only measure of equality that is needed.
There are hints in Flex that someone wanted to alleviate this problem at one time, but it seems like it was abandoned: there is an interface called IUID, and it is mentioned in the Flex Developer's Guide, but it is not used anywhere. Not even the collections in Flex use it to determine equality. And since you are asking for a solution for Flash, it may not have helped you anyway.
I've written some more about this (in the context of Flex) on my blog: Is there no equality?.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: What is the simplest way to find the difference between 2 times in python? I have 2 time values which have the type datetime.time. I want to find their difference. The obvious thing to do is t1 - t2, but this doesn't work. It works for objects of type datetime.datetime but not for datetime.time. So what is the best way to do this?
A: You could transform both into timedelta objects and subtract these from each other, which will take care of the carry-overs. For example:
>>> import datetime as dt
>>> t1 = dt.time(23, 5, 5, 5)
>>> t2 = dt.time(10, 5, 5, 5)
>>> dt1 = dt.timedelta(hours=t1.hour, minutes=t1.minute, seconds=t1.second, microseconds=t1.microsecond)
>>> dt2 = dt.timedelta(hours=t2.hour, minutes=t2.minute, seconds=t2.second, microseconds=t2.microsecond)
>>> print(dt1-dt2)
13:00:00
>>> print(dt2-dt1)
-1 day, 11:00:00
>>> print(abs(dt2-dt1))
13:00:00
Negative timedelta objects in Python get a negative day field, with the other fields positive. You could check beforehand: comparison works on both time objects and timedelta objects:
>>> dt2 < dt1
True
>>> t2 < t1
True
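A small helper wrapping this conversion, as a sketch:
import datetime as dt

def time_to_timedelta(t):
    # Measure a datetime.time as an offset from midnight.
    return dt.timedelta(hours=t.hour, minutes=t.minute,
                        seconds=t.second, microseconds=t.microsecond)

def time_diff(t1, t2):
    # Absolute difference between two times of day (always under 24 hours).
    return abs(time_to_timedelta(t1) - time_to_timedelta(t2))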
A: Python has the pytz (http://pytz.sourceforge.net) module, which can be used for arithmetic on time objects. It takes care of DST offsets as well. The above page has a number of examples that illustrate the usage of pytz.
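For instance, subtracting two pytz-localized datetimes accounts for a DST transition; a sketch using the 2008 US fall-back date:
import datetime
import pytz

eastern = pytz.timezone('US/Eastern')
t1 = eastern.localize(datetime.datetime(2008, 11, 2, 0, 30))  # still EDT
t2 = eastern.localize(datetime.datetime(2008, 11, 2, 3, 30))  # already EST
print(t2 - t1)  # 4:00:00 - the wall clock advanced 3 hours, but 4 hours elapsed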
A: Also a little silly, but you could try picking an arbitrary day and embedding each time in it, using datetime.datetime.combine, then subtracting:
>>> import datetime
>>> t1 = datetime.time(2,3,4)
>>> t2 = datetime.time(18,20,59)
>>> dummydate = datetime.date(2000,1,1)
>>> datetime.datetime.combine(dummydate,t2) - datetime.datetime.combine(dummydate,t1)
datetime.timedelta(0, 58675)
A: It seems that this isn't supported, since there wouldn't be a good way to deal with overflows in datetime.time. I know this isn't an answer directly, but maybe someone with more python experience than me can take this a little further. For more info, see this: http://bugs.python.org/issue3250
A: Environment.TickCount seems to work well if you need something quick.
int start = Environment.TickCount
...DoSomething()
int elapsedtime = Environment.TickCount - start
Jon
A: Firstly, note that a datetime.time is a time of day, independent of a given day, and so the difference between any two datetime.time values is going to be less than 24 hours.
One approach is to convert both datetime.time values into comparable values (such as milliseconds), and find the difference.
t1, t2 = datetime.time(...), datetime.time(...)
# time.microsecond holds microseconds, so integer-divide by 1000 for milliseconds
t1_ms = (t1.hour*60*60 + t1.minute*60 + t1.second)*1000 + t1.microsecond // 1000
t2_ms = (t2.hour*60*60 + t2.minute*60 + t2.second)*1000 + t2.microsecond // 1000
delta_ms = max(t1_ms, t2_ms) - min(t1_ms, t2_ms)
It's a little lame, but it works.
A: Retrieve the times in milliseconds and then do the subtraction.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: best tool to reverse-engineer a WinXP PS/2 touchpad driver? I have a PS/2 touchpad which I would like to write a driver for (I'm just a web guy so this is unfamiliar territory to me). The touchpad comes with a Windows XP driver, which apparently sends messages to enable/disable tap-to-click. I'm trying to find out what message it is sending but I'm not sure how to start. Would software like "Syser Debugger" work? I want to intercept outgoing messages being sent to the PS/2 bus.
A: IDA Pro won't be much use to you if you want to find out what 'messages' are being sent. You should realise that this is a very big step up for most web developers, but you already knew that?
I would start by deciding if you really need to work at the driver level, which is often the kernel level. The user mode level may be where you want to look first. Use a tool like WinSpy or another Windows debugging tool to find out what messages are getting passed around by your driver software and the mouse configuration applet in Control Panel. You can use the Windows API function called SendMessage() to send your messages to the application from user mode.
Your first stop for device driver development should be the Windows DDK docs and OSR Online.
A: I suggest reading the synaptics touchpad specs (most of the touchpads installed on notebooks are synaptics') available here http://www.synaptics.com/decaf/utilities/ACF126.pdf
I believe on page 18 you'll find the feature you are looking for. At least you'll know what to expect.
So, very likely, the touchpad driver "converts" the command coming from user mode to this PS/2 command.
I don't know the specifics of the touchpad PS/2 driver but I see two major ways for the user mode panel to communicate with the driver:
- update some key in the registry (this is actually very common)
- the driver provides an alternate "channel" that the user mode app opens and writes specific commands to
You may want to try using the process monitor from sysinternals to log registry activity when setting/resetting the feature.
As for option 2, you may want to try IRP Tracker from OSR and see if there's any specific communication between the panel and the driver (in the form of IRPs going back and forth). In this case, kernel programming knowledge is somewhat required.
The windows kernel debugger may also be useful to see if the PS/2 driver has some alternate channel.
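If the registry route pans out, you can even watch the value change from Python while toggling the setting in the vendor's control panel. A sketch - the key path and value name below are pure guesses, so substitute whatever Process Monitor reveals:
import time
import winreg  # named _winreg on Python 2; Windows only

KEY_PATH = r"Software\Synaptics\SynTP\TouchPadPS2"  # hypothetical
VALUE_NAME = "TapToClick"                           # hypothetical

def read_value():
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, VALUE_NAME)
        return value

last = read_value()
while True:
    current = read_value()
    if current != last:
        print("value changed: %r -> %r" % (last, current))
        last = current
    time.sleep(0.5)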
A: Have a look at IDA Pro - The Interactive Disassembler. It is an amazing disassembler.
If you want to debug, not just reverse engineer, try PEBrowse Professional Interactive from SmidgeonSoft
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |