Q: displaying HTML inside a Flex application I have some HTML that is generated via a Rich Text Editor outside of my Flex application but would like to display it inside Flex.
The HTML is simple HTML tags, things like styles, anchors, and possibly image tags, is there a control that would let me render this HTML in flex or am I going to have to roll up my sleeves and roll my own?
Any ideas appreciated...Thanks.
A: If the HTML is really simple, you can display it in a normal label or textarea component. If it is more complex, I'll quote what I answered in this question. The discussion there also has a little more info.
If it is complex HTML and JavaScript, one possible way is HTMLComponent, a method that uses an iframe over your Flash content to make it appear like the HTML is in your app. There are a few downsides to this method, however - most of them described in detail at Deitte.com.
If this can move offline, you could use AIR (it has an mx:HTML component built in). Deitte.com describes this technique in detail as well.
A: Check out the documentation on mx.controls.Label and flash.text.TextField (which is what displays the text in a Text or Label control in Flex). The TextField documentation states that
The <img> tag lets you embed external image files (JPEG, GIF, PNG), SWF files, and movie clips inside text fields. Text automatically flows around images you embed in text fields. To use this tag, you must set the text field to be multiline and to wrap text.
This means that you can display an image in a Text component in Flex by setting its htmlText property to some HTML which contains an <img> tag. You can't use Label, because it is not multiline.
I've noticed that text fields have trouble with properly measuring their heights if the images displayed in them are left or right aligned with text flowing around them (e.g. align="left"). You may have to add some extra spacing below to counter that if you plan to use aligned images.
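For example, a minimal ActionScript sketch (the image path and sizes are placeholders; assumes a Flex 3-style Application script block):
// Render simple HTML with an embedded <img> tag by setting the
// htmlText property of a multiline Text control.
import mx.controls.Text;
var t:Text = new Text();
t.width = 400; // Text is multiline and wraps, so <img> tags are allowed
t.htmlText = '<p>Logo: <img src="assets/logo.png" width="32" height="32"/> '
           + 'text flows around embedded images.</p>';
addChild(t);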
A: You will have to use the flex-iframe control.
It is not a 100% Flash solution and involves a few JS calls, but it works perfectly for me.
You can grab the latest source code from GitHub: https://github.com/flex-users/flex-iframe
Here is some sample code from the component author.
<!---
A basic example application showing how to embed a local html page in a Flex application.
@author Alistair Rutherford
-->
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
xmlns:flexiframe="http://code.google.com/p/flex-iframe/"
horizontalAlign="center"
verticalAlign="middle"
viewSourceURL="srcview/index.html">
<!-- Example project presentation -->
<mx:ApplicationControlBar dock="true">
<mx:Text selectable="false">
<mx:htmlText><![CDATA[<font color="#000000" size="12"><b>flex-iframe - Simple html example</b><br>This example shows how to embed a simple Html page in a Flex application.</font>]]></mx:htmlText>
</mx:Text>
</mx:ApplicationControlBar>
<!-- HTML content stored in a String -->
<mx:String id="iFrameHTMLContent">
<![CDATA[
<html>
<head>
<title>About</title>
</head>
<body>
<div>About</div>
<p>Simple HTML Test application. This test app loads a page of html locally.</p>
<div>Credits</div>
<p> </p>
<p>IFrame.as is based on the work of</p>
<ul>
<li><a href="http://coenraets.org/" target="_top">Christophe Coenraets</a></li>
<li><a href="http://www.deitte.com/" target="_top">Brian Deitte</a></li>
</ul>
</body>
</html>
]]>
</mx:String>
<!-- Example using the 'source' property -->
<mx:Panel title="A simple Html page embedded with the 'source' property"
width="80%"
height="80%">
<flexiframe:IFrame id="iFrameBySource"
width="100%"
height="100%"
source="about.html"/>
</mx:Panel>
<!-- Example using the 'content' property -->
<mx:Panel title="A simple Html page embedded with the 'content' property"
width="80%"
height="80%">
<flexiframe:IFrame id="iFrameByContent"
width="100%"
height="100%"
content="{iFrameHTMLContent}"/>
</mx:Panel>
</mx:Application>
A: @mmattax
Indeed you can display images in a TextArea component. The approach is not entirely without problems though...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Determining the performance consequences of PHP code How can you determine the performance consequences of your PHP code if you are not familiar with the internals? Are there ways to figure out how your code is being executed (besides simply load testing it)? I am looking for things like memory usage and the execution time of algorithms.
Perhaps Joel would say, "learn C, then read the internals", but I really don't have time to learn C right now (though I'd love to, actually).
A: Use the Xdebug extension to profile PHP code.
A: If you're not familiar with valgrind or similar, then to add to @Jordi Bunster's answer...
When you've enabled profiling in Xdebug, you can open the dumped profile files in KCacheGrind or WinCacheGrind to get a graphical view of what is taking up time in your code.
Fortunately, the Xdebug documentation also explains this in detail, as well as how to interpret the results: http://xdebug.org/docs/profiler
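If it helps, here is a sketch of the php.ini directives that switch the profiler on (directive names per Xdebug 2.x; Xdebug 3 renamed these to xdebug.mode=profile and xdebug.output_dir):
; Enable Xdebug's profiler and choose where the cachegrind files go.
zend_extension=xdebug.so
xdebug.profiler_enable=1
xdebug.profiler_output_dir=/tmp/profiles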
A: Even if you are familiar with the internals, you should still load test your assumptions. I like to use the PEAR Benchmark package to compare different code.
If you can isolate your code, you can keep your load testing simple. A typical technique is to run each option some number of times and see which one is faster. For example, if you have a class, you can write a test case that puts it through its paces and run it several times.
A: You can use a low-level approach, such as sticking microtime() and memory_get_usage() calls into the code, or you can use one of the existing profiling solutions:
*
*Xdebug (free, opensource)
*Zend Studio/Debugger profiling (commercial)
*Zend Server Code Tracing (commercial)
*xhprof (free, opensource)
As usual, commercial tools have nice GUIs and pretty pictures but cost money; free ones are free, but you'd probably have to invest a bit more time.
Also, the PHP CGI binary has a benchmark mode with the -T option; you may try running php-cgi -T 100 yourscript.php to do a poor man's benchmark.
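For instance, a minimal sketch of the microtime()/memory_get_usage() approach:
<?php
// Time a block of code and measure the memory it allocates.
$t0 = microtime(true);
$m0 = memory_get_usage();

$data = array();
for ($i = 0; $i < 100000; $i++) { // the code under test
    $data[] = $i * 2;
}

printf("time: %.4f s\n", microtime(true) - $t0);
printf("memory: %d bytes (peak: %d)\n",
    memory_get_usage() - $m0, memory_get_peak_usage());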
A: See SD PHP Profiler for a tool that can show you graphically where your PHP applications spends its time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: What does it mean when a PostgreSQL process is "idle in transaction"? What does it mean when a PostgreSQL process is "idle in transaction"?
On a server that I'm looking at, in the output of "ps ax | grep postgres" I see 9 PostgreSQL processes that look like the following:
postgres: user db 127.0.0.1(55658) idle in transaction
Does this mean that some of the processes are hung, waiting for a transaction to be committed? Any pointers to relevant documentation are appreciated.
A: The PostgreSQL manual indicates that this means the transaction is open (inside BEGIN) and idle. It's most likely a user connected using the monitor who is thinking or typing. I have plenty of those on my system, too.
If you're using Slony for replication, however, the Slony-I FAQ suggests idle in transaction may mean that the network connection was terminated abruptly. Check out the discussion in that FAQ for more details.
A: As mentioned in Re: BUG #4243: Idle in transaction, it is probably best to check your pg_locks table to see what is being locked; that might give you a better clue where the problem lies.
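On modern PostgreSQL versions (9.2 and later, where pg_stat_activity gained the state column), a query along these lines combines both suggestions by listing idle-in-transaction sessions together with the locks they hold:
-- Sketch: sessions idle in transaction and the locks they are holding.
SELECT a.pid, a.query, l.relation::regclass, l.mode, l.granted
FROM pg_stat_activity a
JOIN pg_locks l ON l.pid = a.pid
WHERE a.state = 'idle in transaction';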
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "106"
} |
Q: What is the difference between Raising Exceptions vs Throwing Exceptions in Ruby? Ruby has two different exceptions mechanisms: Throw/Catch and Raise/Rescue.
Why do we have two?
When should you use one and not the other?
A: https://coderwall.com/p/lhkkug/don-t-confuse-ruby-s-throw-statement-with-raise offers an excellent explanation that I doubt I can improve on. To summarize, nicking some code samples from the blog post as I go:
*
*raise/rescue are the closest analogues to the throw/catch construct you're familiar with from other languages (or to Python's raise/except). If you've encountered an error condition and you would throw over it in another language, you should raise in Ruby.
*Ruby's throw/catch lets you break execution and climb up the stack looking for a catch (like raise/rescue does), but isn't really meant for error conditions. It should be used rarely, and is there just for when the "walk up the stack until you find a corresponding catch" behaviour makes sense for an algorithm you're writing but it wouldn't make sense to think of the throw as corresponding to an error condition.
What is catch and throw used for in Ruby? offers some suggestions on nice uses of the throw/catch construct.
The concrete behavioural differences between them include:
*
*rescue Foo will rescue instances of Foo including subclasses of Foo. catch(foo) will only catch the same object, foo. Not only can you not pass catch a class name to catch instances of it, but it won't even do equality comparisons. For instance
catch("foo") do
throw "foo"
end
will give you an UncaughtThrowError: uncaught throw "foo" (or an ArgumentError in versions of Ruby prior to 2.2)
*Multiple rescue clauses can be listed...
begin
do_something_error_prone
rescue AParticularKindOfError
# Insert heroism here.
rescue
write_to_error_log
raise
end
while multiple catches need to be nested...
catch :foo do
catch :bar do
do_something_that_can_throw_foo_or_bar
end
end
*A bare rescue is equivalent to rescue StandardError and is an idiomatic construct. A "bare catch", like catch() {throw :foo}, will never catch anything and shouldn't be used.
A: *
*raise, fail, rescue, and ensure handle errors, also known as exceptions
*throw and catch are control flow
Unlike in other languages, Ruby's throw and catch are not used for exceptions. Instead, they provide a way to terminate execution early when no further work is needed. (Grimm, 2011)
Terminating a single level of control flow, like a while loop, can be done with a simple return. Terminating many levels of control flow, like a nested loop, can be done with throw.
While the exception mechanism of raise and rescue is great for abandoning execution when things go wrong, it's sometimes nice to be able to jump out of some deeply nested construct during normal processing. This is where catch and throw come in handy.
(Thomas and Hunt, 2001)
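A minimal Ruby sketch of that nested-loop escape:
# Use catch/throw to jump out of a doubly nested loop in one step.
matrix = [[1, 2, 3], [4, -1, 6], [7, 8, 9]]

found = catch(:negative) do
  matrix.each do |row|
    row.each do |value|
      throw(:negative, value) if value < 0
    end
  end
  nil # value of the catch block if nothing was thrown
end

puts found # => -1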
References
*
*Grimm, Avdi. "Throw, Catch, Raise, Rescue… I’m so Confused!" RubyLearning Blog. N.p., 11 July 2011. Web. 1 Jan. 2012. http://rubylearning.com/blog/2011/07/12/throw-catch-raise-rescue--im-so-confused/.
*Thomas, Dave, and Andrew Hunt. "Programming Ruby." : The Pragmatic Programmer's Guide. N.p., 2001. Web. 29 Sept. 2015. http://ruby-doc.com/docs/ProgrammingRuby/html/tut_exceptions.html.
A: I think http://hasno.info/ruby-gotchas-and-caveats has a decent explanation of the difference:
catch/throw are not the same as raise/rescue. catch/throw allows you to quickly exit blocks back to a point where a catch is defined for a specific symbol; raise/rescue is the real exception handling stuff involving the Exception object.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "192"
} |
Q: What is the simplest way to write web apps in Haskell? I would like to use Haskell more for my projects, and I think if I can get started using it for web apps, it would really help that cause. I have tried happs once or twice but had trouble getting off the ground. Are there simpler/more conventional (more like lamp) frameworks out there that I can use or should I just give happs another try?
A: If you decide to go with HApps, you'll probably want to check out this excellent example-driven tutorial that is being developed as a HApps application:
HApps Tutorial
A: I developed MFlow with the idea of the highest functionality/code size ratio. MFlow is made with no other framework in mind, but aims to use Haskell to the limit to solve the problems of web applications, drastically reducing the noise and the error ratio in web programming. The entire navigation in a MFlow application is safe at compile time. It uses standard web libraries: WAI, formlets, stm, blaze-html..
Judge for yourself: this is a complete application with three pages. In a loop, it asks for two numbers and shows the sum. You can press the back button as you please:
module Main where
import MFlow.Wai.Blaze.Html.All
main= do
addMessageFlows [("sum", transient . runFlow $ sumIt )]
wait $ run 8081 waiMessageFlow
sumIt= do
setHeader $ html . body
n1 <- ask $ p << "give me the first number" ++> getInt Nothing
n2 <- ask $ p << "give me the second number" ++> getInt Nothing
ask $ p << ("the result is " ++ show (n1 + n2)) ++> wlink () << p << "click here"
The state can be made persistent with a little modification.
http://hackage.haskell.org/package/MFlow
There are examples here : http://haskell-web.blogspot.com.es/
A: Here is a list of web related blog posts about Haskell from the wiki.
Furthermore, the next big Haskell web framework is WASH.
And there is a domain-specific language based on Apple's WebObjects.
A: The Web Application Interface, WAI, is a very nice base layer that you can build apps on top of. There are many nice libraries on hackage for routing, templating, etc that work well in combination with WAI, which is what I do.
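As a sketch of how small a WAI application can be (assuming the wai 3.x API and the warp package):
{-# LANGUAGE OverloadedStrings #-}
-- A minimal WAI application served with Warp.
import Network.Wai (Application, responseLBS)
import Network.Wai.Handler.Warp (run)
import Network.HTTP.Types (status200)

app :: Application
app _request respond =
    respond $ responseLBS status200 [("Content-Type", "text/plain")] "Hello, WAI!"

main :: IO ()
main = run 8080 app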
A: The best tools as of 2011 are:
*
*Snap
*Yesod; or
*Happstack
The web development community around Haskell has been thriving on the competition between these communities.
The authors even compare their frameworks here: Comparing Haskell's Snap and Yesod web frameworks
A: You can use CGI and an (x)html combinator library, as listed in the wiki's Haskell Web Development article. A larger overview of libraries, frameworks etc. for web programming in haskell can be found in Practical web programming in Haskell.
A: Yesod would be a good choice, you can find O'Reilly's Yesod Web Framework Book online.
A: There is also Hope (link is now dead), although it doesn't seem to have gained as much traction as HApps and WASH. However, the site has also been quiet for about a year.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Database system that is not relational What are the other types of database systems out there? I've recently come across CouchDB, which handles data in a non-relational way. It got me thinking about what other models other people are using.
So, I want to know what other types of data models are out there. (I'm not looking for any specifics; I just want to see how other people are handling data storage. My interest is purely academic.)
The ones I already know are:
*
*RDBMS (mysql,postgres etc..)
*Document based approach (couchDB, lotus notes)
*Key/value pair (BerkeleyDB)
A: db4o
Quote from the "about" page:
db4o is the open source object database that enables Java and .NET developers to store and retrieve any application object with only one line of code, eliminating the need to predefine or maintain a separate, rigid data model.
A: Older non-relational databases:
Network Database
Hierarchical Database
Both mostly went out of style when relational became feasible.
A: Column-oriented databases are also a bit of a different animal. Many of them do support standard relational database SQL though. These are generally used for data warehouse type applications.
A: Semantic Web is also a non-relational data storage paradigm. There are no relations, all metadata is stored in the same way as data, and every entity has potentially its own unique set of attributes. Open-source projects that implement RDF, a Semantic Web standard, include Jena and Sesame.
A: Isn't Amazon's SimpleDB non-relational?
A: db4o, as mentioned by Eric, is an Object-Oriented database management system (OODBMS).
A: There are object-based databases (GemStone, for example). Google's BigTable and Amazon's Simple Storage I am not sure how you would categorize, but both are map-reduce based.
A: A non-relational document oriented database we have been looking at is Apache CouchDB.
Apache CouchDB is a distributed, fault-tolerant and schema-free document-oriented database accessible via a RESTful HTTP/JSON API. Among other features, it provides robust, incremental replication with bi-directional conflict detection and resolution, and is queryable and indexable using a table-oriented view engine with JavaScript acting as the default view definition language.
Our interest was in providing a distributed-access user preferences store that would be immune to shape changes, to which we could serialize preference objects from Java and access them just as easily with JavaScript from a XULRunner-based client application.
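As a sketch of that RESTful API against a local CouchDB instance (default port 5984; the database and document names here are made up):
# Create a database, store a JSON document, and fetch it back.
curl -X PUT http://127.0.0.1:5984/prefs
curl -X PUT http://127.0.0.1:5984/prefs/user42 -d '{"theme":"dark","lang":"en"}'
curl http://127.0.0.1:5984/prefs/user42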
A: I'd like to detail more on Bill Karwin's answer about semantic web and triplestores, since it's what I am working on at the moment, and I have something to say on it.
The idea behind a triplestore is to store a graph-based database whose data model is rooted in RDF. With RDF, you describe nodes and associations among nodes (in other words, edges). Data is organized in triples:
start node ----relation----> end node
(in RDF speech: subject --predicate--> object). With this very simple data model, any data network can be represented by adding more and more triples, provided you give a meaning to nodes and relations.
RDF is very general, and it's a graph-based data model well suited for search criteria looking for all triples with a particular combination of subject, predicate, or object, in any combination. Eventually, through a query language called SPARQL, you can also perform more complex queries, an operation that boils down to a graph isomorphism search onto the graph, both in terms of topology and in terms of node-edge meaning (we'll see this in a moment). SPARQL allows you only SELECT (and similar) queries. No DELETE, no INSERT, no UPDATE. The information you query (e.g. specific nodes you are interested in) is mapped into a table, which is what you get as a result of your query.
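For instance, a SPARQL sketch (with a made-up namespace, anticipating the vehicle example below) that selects every node carrying a particular property value:
# Find every subject that has a ns:numberOfWheels of 4.
PREFIX ns: <http://example.org/ns#>
SELECT ?vehicle
WHERE { ?vehicle ns:numberOfWheels 4 . }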
Now, topology in itself does not mean a lot. For this, a Schema language has been invented. Actually, more than one, and calling them schema languages is, in some cases, very limitative. The most famous and used today are RDF-Schema, OWL (Lite and Full), and they predate from the obsolete DAML+OIL. The point of these languages is, boiling down stuff, to give a meaning to nodes (by granting them a type, also described as a triple) and to relationships (edges). Also, you can define the "range" and "domain" of these relationships, or said differently what type is the start node and what type is the end node: you can say for example, that the property "numberOfWheels" can be applied only to connect a node of type Vehicle to a non-zero integer value.
ns:MyFiat --rdf:type--> ns:Vehicle
ns:MyFiat --ns:numberOfWheels-> 4
Now, you can use these ontologies in two directions: validation and inference. Validation is not that fancy today, but I've seen instances of use. Inference is what is cool today, because it allows reasoning. Inference basically takes a RDF graph containing a set of triples, takes an ontology, mixes them into a triplestore database which contains an "inference engine" and like magic the inference engine invents triples according to your ontological description. Example: suppose you just store this information in the database
ns:MyFiat --ns:numberOfWheels--> 4
and nothing else. No type is specified about this node, but the inference engine will add automatically a triple saying that
ns:MyFiat --rdf:type--> ns:Vehicle
because you said in your ontology that only objects of type Vehicle can be described by a property numberOfWheels.
Conversely, you can use the inference engine to validate your data against the ontology so as to refuse non-compliant data (sort of like XML Schema for XML). In this case, you will need both triples to have your data successfully accepted by the triplestore.
Additional characteristics of triplestores are Formulas and Context-aware storage. Formulas are statements (as usual, triples subject predicate object) that describe something hypothetical. I never used Formulas, so I won't go into more details of something I don't know. Context awareness is basically subgraphs: the problem with storing triples is that you don't have anything to say where these triples come from. Suppose you have two dealers that describe the same price of a component. One says that the price is 5.99 and the other 4.99. If you just store both triples into a database, you don't know anything about who stated each piece of information. There are two ways to solve this problem.
One is reification. Reification means that you store additional triples to describe another triple. It's wasteful, and makes life hell because you have to reify every single triple you store. The alternative is context-awareness. Having context-aware storage is like being able to box a bunch of triples into a container with a label on it (the context identifier). You can now use this identifier as the subject for additional statements, hence describing a bunch of triples in a single action.
A: 4. Navigational. Includes Tree/Hierarchy and Graph/Network.
File systems, the semantic web, XML, Object databases, CODASYL, and many others all fit into this category.
Those 4 are pretty much it.
A: There is also what is referred to as an "inverted index" or "inverted list" database. Software AG's Adabas product would be an example. As with hierachical, these databases continue to be used in large corporate or university environments because of legacy considerations or due to a performance advantage in certain situations (typically high-end transactional applications).
A: There are BASE systems (Basically Available, Soft State, Eventually consistent) and they work well with simple data models holding vast volumes of data. Google's BigTable, Dojo's Persevere, Amazon's Dynamo, Facebook's Cassandra are some examples.
See LINK
A: The illuminate Correlation Database is a new revolutionary non-relational database. The Correlation Database Management System (CDBMS) is data model independent and designed to efficiently handle unplanned, ad hoc queries in an analytical system environment. Unlike relational database management systems or column-oriented databases, a correlation database uses a value-based storage (VBS) architecture in which each unique data value is stored only once and an auto-generated indexing system maintains the context for all values (data is 100% indexed). Queries are performed using natural language instead of SQL (NoSQL).
Learn more at: www.datainnovationsgroup.com
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Create background process in windows without visible console window How do I create a background process with Haskell on windows without a visible command window being created?
I wrote a Haskell program that runs backup processes periodically but every time I run it, a command window opens up to the top of all the windows. I would like to get rid of this window. What is the simplest way to do this?
A: You should really tell us how you are trying to do this currently, but on my system (using linux) the following snippet will run a command without opening a new terminal window. It should work the same way on windows.
module Main where
import System
import System.Process
import Control.Monad
main :: IO ()
main = do
putStrLn "Running command..."
pid <- runCommand "mplayer song.mp3" -- or whatever you want
replicateM_ 10 $ putStrLn "Doing other stuff"
waitForProcess pid >>= exitWith
A: Thanks for the responses so far, but I've found my own solution. I did try a lot of different things, from writing a VBS script as suggested to a standalone program called hstart. hstart worked... but it creates a separate process, which I didn't like very much because then I can't kill it in the normal way. But I found a simpler solution that requires only Haskell code.
My code from before was a simple call to runCommand, which did pop up the window. An alternative function you can use is runProcess, which has more options. From peeking at the GHC source code file runProcess.c, I found that the CREATE_NO_WINDOW flag is set when you supply redirects for all of STDIN, STDOUT, and STDERR. So that's what you need to do: supply redirects for those. My test program looks like:
import System.Process
import System.IO
main = do
inH <- openFile "in" ReadMode
outH <- openFile "out" WriteMode
runProcess "rsync.bat" [] Nothing Nothing (Just inH) (Just outH) (Just outH)
This worked! No command window again! A caveat is that you need an empty file for inH to read in as the STDIN, even though in my situation it was not needed.
A: The simplest way I can think of is to run the rsync command from within a Windows Shell script (vbs or cmd).
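For example, a minimal VBScript sketch ("rsync.bat" stands in for whatever you actually run):
' Run a command with no visible window.
' 0 = hidden window style, True = wait for the process to finish.
Set shell = CreateObject("WScript.Shell")
shell.Run "rsync.bat", 0, True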
A: I don't know anything about Haskell, but I had this problem in a C project a few months ago.
The best way to execute an external program without any windows popping up is to use the ShellExecuteEx() API function with the "open" verb. If ShellExecuteEx() is available to you in Haskell, then you should be able to achieve what you want.
The C code looks something like this:
SHELLEXECUTEINFO Info;
BOOL b;
// Execute it
memset (&Info, 0, sizeof (Info));
Info.cbSize = sizeof (Info);
Info.fMask = SEE_MASK_NOCLOSEPROCESS | SEE_MASK_FLAG_NO_UI;
Info.hwnd = NULL;
Info.lpVerb = "open";
Info.lpFile = "rsync.exe";
Info.lpParameters = "whatever parameters you like";
Info.lpDirectory = NULL;
Info.nShow = SW_HIDE;
b = ShellExecuteEx (&Info);
if (b)
{
// Looks good; if there is an instance, wait for it
if (Info.hProcess)
{
// Wait
WaitForSingleObject (Info.hProcess, INFINITE);
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Is there a difference between foo(void) and foo() in C++ or C? Consider these two function definitions:
void foo() { }
void foo(void) { }
Is there any difference between these two? If not, why is the void argument there? Aesthetic reasons?
A: C++11 N3337 standard draft
There is no difference.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3337.pdf
Annex C "Compatibility" C.1.7 Clause 8: declarators says:
8.3.5 Change: In C++, a function declared with an empty parameter list takes no arguments. In C, an empty
parameter list means that the number and type of the function arguments are unknown.
Example:
int f();
// means int f(void) in C++
// int f( unknown ) in C
Rationale: This is to avoid erroneous function calls (i.e., function calls with the wrong number or type of
arguments).
Effect on original feature: Change to semantics of well-defined feature. This feature was marked as “obsolescent” in C.
8.3.5 functions says:
4. The parameter-declaration-clause determines the arguments that can be specified, and their processing, when
the function is called. [...] If the parameter-declaration-clause is empty, the function
takes no arguments. The parameter list (void) is equivalent to the empty parameter list.
C99
As mentioned in the C++11 quote above, int f() specifies nothing about the arguments and is obsolescent.
It can either lead to working code or UB.
I have interpreted the C99 standard in detail at: https://stackoverflow.com/a/36292431/895245
A: I realize your question pertains to C++, but when it comes to C the answer can be found in K&R, pages 72-73:
Furthermore, if a function declaration does not include arguments, as
in
double atof();
that too is taken to mean that nothing is to be assumed about the
arguments of atof; all parameter checking is turned off. This special
meaning of the empty argument list is intended to permit older C
programs to compile with new compilers. But it's a bad idea to use it
with new programs. If the function takes arguments, declare them; if
it takes no arguments, use void.
A: In C:
*
*void foo() means "a function foo taking an unspecified number of arguments of unspecified type"
*void foo(void) means "a function foo taking no arguments"
In C++:
*
*void foo() means "a function foo taking no arguments"
*void foo(void) means "a function foo taking no arguments"
By writing foo(void), therefore, we achieve the same interpretation across both languages and make our headers multilingual (though we usually need to do some more things to the headers to make them truly cross-language; namely, wrap them in an extern "C" if we're compiling C++).
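A small C sketch of the difference (pre-C23 semantics; C23 changed empty parentheses to mean no parameters, as in C++):
/* foo() leaves the parameters unspecified in C, so the call below
 * compiles as C but is rejected by any C++ compiler. */
void foo();      /* C: unspecified parameters; C++: no parameters */
void bar(void);  /* no parameters in both languages */

int main(void)
{
    foo(42);     /* accepted by a C compiler; an error in C++ */
    bar();
    return 0;
}

void foo(int x) { (void)x; }  /* legal in C: the declaration said nothing */
void bar(void) { }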
A: In C, you use a void in an empty function reference so that the compiler has a prototype, and that prototype has "no arguments". In C++, you don't have to tell the compiler that you have a prototype because you can't leave out the prototype.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "279"
} |
Q: What are some good usability guidelines an average developer should follow? I'm not a usability specialist, and I really don't care to be one.
I just want a small set of rules of thumb that I can follow while coding my user interfaces so that my product has decent usability.
At first I thought that this question would be easy to answer "Use your common sense", but if it's so common among us developers we wouldn't, as a group, have a reputation for our horrible interfaces.
Any suggestions?
A: Just two things, really:
*
*"A user interface is well-designed when the program behaves exactly how the user thought it would" - quoted from Joel Spolsky's User Interface Design For Programmers
*Put your designs in front of a user. A real end-user is best, but for lightweight, rapid feedback, you can't beat hallway usability testing i.e. grab a co-worker.
If you remember Joel's advice and make sure you get feedback on whatever you do and act on it i.e. iterate, you'll not go too far wrong. And I would echo the recommendation for Steve Krug's Don't Make Me Think - it's probably the best work-related book I've read, bar none, and is just as applicable to desktop software as websites.
Hope this helps.
A: *
*Don't make things work in a different way than your users are expecting (e.g. breaking the "back" button when using Ajax in web forms)
*Follow the K.I.S.S. principle
Really, any rules someone posts will be a variation on the theme:
Don't Make Your Users Think
"Don't Make Me Think" has already been posted, see also
Design of Everyday Things and Designing with Web Standards which are also great for light usability reading.
A: The single most important piece of advice I'd give someone is to work on the UI first. Pen and paper and all. That way, you won't subconsciously couple buttons to functions, input fields to variables, etc.
The best UI might be a pain to code, and if your backend code is mostly written, it will sabotage your thinking.
Other than that, I'd point to Apple's Human Interface Guidelines. Of course, if your platform is not OS X, take the OS X sections with a lot of salt. What works in OS X might not work on Windows. You should embrace your platform's idioms.
OS X stuff aside, that document has some pretty good starting points on the fundamentals.
A: Avoid modes. It's frustrating to a user when input works sometimes but not others, or does different things at different times.
A: Here are some simple rules:
*
*Fewer clicks are better.
*Frequently used features should be easier to find.
*Features for "advanced" users can be harder to find than the ones above.
Think about the number of mouse/keyboard clicks it takes a user to get to something.
PS - please don't tell the Microsoft Office 2008 people about this; the poor little guys would cry themselves to sleep tonight! :)
A: (Image: Eric Burke's "Simplicity" cartoon, contrasting a minimal UI with a cluttered one.)
Source: http://stuffthathappens.com/blog/wp-content/uploads/2008/03/simplicity.png
A: Read Don't Make Me Think by Steve Krug. It is a great starting point, and an easy short read.
EDIT: This is mainly for web usability though, but it would still be a good read even if you are doing rich clients.
A: Think about the users that will use your app. Why are they using it and in which context?
*
*Will the majority be pro users that know the domain in which the application is used and use the app a lot? Then don't be afraid of adding a lot of data to the screens, as long as it is arranged logically for users (normally that is not in alphabetical order :-). Think trade screens for stock brokers or airplane cockpits.
*Are users occasional users? Keep it simple. Avoid context switches (keep all, or as much as possible, of the data necessary for a task on the screen at any one time). Don't break expectations of how GUI widgets normally work. Design for failures.
*Anything in between? Allow users to grow in the UI. Track usage so you can later determine where users seem to spend the most time so you can improve the most used areas of your app.
*Test your app on friends and colleagues (the corridor test) to see if they are able to use it efficiently.
That's a start.
A: I suggest reading these blog posts from the Enso creators.
Of course they repeat guides/ideas/advices from books such as
The Design of Everyday Things and About Face, but nevertheless, the posts contain quite a few insights and (IMO) they are a good read.
A: What information does your user need, put that on the screen and nothing else. If you cannot define what the user needs - get another user.
A: Remember that your application will be one of many the user will have to deal with. Don't do things just to be different or kewl. Don't come up with unusual graphics, behaviors, terminology, or interactions. Use the standard OS controls, conventions, utilities, and behaviors.
Let your app interoperate with other apps; allow cutting and pasting of data, save your data in formats other apps can read, and allow importing data from other apps instead of using your UI.
If you are making a desktop app, do not try to take over the user's computer. Leave the user's Documents folder, task bar, and application preferences alone. Don't change anything already installed on the computer. Allow scripted or command-line interactions.
If you're making a web app, do not try to take over the browser. Do not try to subvert the standard menu bars, history, layout, or fonts. Allow the user to change the page using Javascript.
A: (1) Common actions should require as little effort as possible and should be obvious; on the other hand, actions that are rarely needed can require a lot of steps and can be hidden behind menus and dialogs. To be able to do so, you should always describe what the user will want to do with the application by listing use cases.
(2) A UI should be self-documenting. The manual should be integrated into the application's dialogs and menus, as users don't read separate manuals. For example, the keyboard shortcut should be shown in the menu item representing the action it is associated with.
A: Provide keyboard shortcuts for power users (even if it is as simple as "hit enter to search")
Don't put too much on screen at once.
If you pop up a messagebox, your users generally won't ever read it.
A: *
*Simple is better than complex
*Complex is better than complicated (eliminate 'nested ifs')
*Intuitive (good elements needs no explanation)
*Follow the convention (for example, underlined means link, red means error, tab goes to next field, etc.)
*Use semantics to apply the logic (header reads first, paragraphs next)
*whitespace is important
A: In addition to the other recommendations here, I'd recommend Designing Interfaces by Jenifer Tidwell as a good way of becoming familiar with UI conventions.
Also, The Inmates Are Running the Asylum by Alan Cooper is excellent for providing an insight into how to approach interaction design.
A: A good follow on to Don't Make Me Think is Robert Hoekman's Designing the Obvious. It's more focused on web applications, as opposed to web sites like in Krug's.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: Batch file to delete files older than N days I am looking for a way to delete all files older than 7 days in a batch file. I've searched around the web, and found some examples with hundreds of lines of code, and others that required installing extra command line utilities to accomplish the task.
Similar things can be done in BASH in just a couple lines of code. It seems that something at least remotely easy could be done for batch files in Windows. I'm looking for a solution that works in a standard Windows command prompt, without any extra utilities. Please no PowerShell or Cygwin either.
A: Copy this code and save it as DelOldFiles.vbs.
USAGE IN CMD : cscript //nologo DelOldFiles.vbs 15
15 means to delete files older than 15 days in the past.
'copy from here
Function DeleteOlderFiles(whichfolder)
Dim fso, f, f1, fc, n, ThresholdDate
Set fso = CreateObject("Scripting.FileSystemObject")
Set f = fso.GetFolder(whichfolder)
Set fc = f.Files
Set objArgs = WScript.Arguments
n = 0
If objArgs.Count=0 Then
howmuchdaysinpast = 0
Else
howmuchdaysinpast = -objArgs(0)
End If
ThresholdDate = DateAdd("d", howmuchdaysinpast, Date)
For Each f1 in fc
If f1.DateLastModified<ThresholdDate Then
Wscript.StdOut.WriteLine f1
f1.Delete
n = n + 1
End If
Next
Wscript.StdOut.WriteLine "Deleted " & n & " file(s)."
End Function
If Not WScript.FullName = WScript.Path & "\cscript.exe" Then
WScript.Echo "USAGE ONLY IN COMMAND PROMPT: cscript DelOldFiles.vbs 15" & vbCrLf & "15 means to delete files older than 15 days in past."
WScript.Quit 0
End If
DeleteOlderFiles(".")
'to here
A: For Windows Server 2008 R2:
forfiles /P c:\sql_backups\ /S /M *.sql /D -90 /C "cmd /c del @PATH"
This will delete all .sql files older than 90 days.
A: Delete all Files older than 3 days
forfiles -p "C:\folder" -m *.* -d -3 -c "cmd /c del /q @path"
Delete Directories older than 3 days
forfiles -p "C:\folder" -d -3 -c "cmd /c IF @isdir == TRUE rd /S /Q @path"
A: Run the following commands:
ROBOCOPY C:\source C:\destination /mov /minage:7
del C:\destination /q
Move all the files (using /mov, which moves files and then deletes them as opposed to /move which moves whole filetrees which are then deleted) via robocopy to another location, and then execute a delete command on that path and you're all good.
Also, if you have a directory with lots of data in it, you can use the /MIR switch.
A: Use forfiles.
There are different versions. Early ones use unix style parameters.
My version (for server 2000 - note no space after switches)-
forfiles -p"C:\what\ever" -s -m*.* -d<number of days> -c"cmd /c del @path"
To add forfiles to XP, get the exe from ftp://ftp.microsoft.com/ResKit/y2kfix/x86/
and add it to C:\WINDOWS\system32
A: How about this modification on 7daysclean.cmd to take a leap year into account?
It can be done in less than 10 lines of code!
set /a Leap=0
if (Month GEQ 2 and ((Years%4 EQL 0 and Years%100 NEQ 0) or Years%400 EQL 0)) set /a Leap=day
set /a Months=!_months!+Leap
Edit by Mofi:
The condition above, contributed by J.R., always evaluates to false because of invalid syntax.
And Month GEQ 2 is also wrong because adding 86400 seconds for one more day must be done in a leap year only for the months March to December, but not for February.
A working code to take leap day into account - in current year only - in batch file 7daysclean.cmd posted by Jay would be:
set "LeapDaySecs=0"
if %Month% LEQ 2 goto CalcMonths
set /a "LeapRule=Years%%4"
if %LeapRule% NEQ 0 goto CalcMonths
rem The other 2 rules can be ignored up to year 2100.
set /A "LeapDaySecs=day"
:CalcMonths
set /a Months=!_months!+LeapDaySecs
A: IMO, JavaScript is gradually becoming a universal scripting standard: it is probably available in more products than any other scripting language (in Windows, it is available using the Windows Scripting Host). I have to clean out old files in lots of folders, so here is a JavaScript function to do that:
// run from an administrator command prompt (or from task scheduler with full rights): wscript jscript.js
// debug with: wscript /d /x jscript.js
var fs = WScript.CreateObject("Scripting.FileSystemObject");
clearFolder('C:\\temp\\cleanup');
function clearFolder(folderPath)
{
// calculate date 3 days ago
var dateNow = new Date();
var dateTest = new Date();
dateTest.setDate(dateNow.getDate() - 3);
var folder = fs.GetFolder(folderPath);
var files = folder.Files;
for( var it = new Enumerator(files); !it.atEnd(); it.moveNext() )
{
var file = it.item();
if( file.DateLastModified < dateTest)
{
var filename = file.name;
var ext = filename.split('.').pop().toLowerCase();
if (ext != 'exe' && ext != 'dll')
{
file.Delete(true);
}
}
}
var subfolders = new Enumerator(folder.SubFolders);
for (; !subfolders.atEnd(); subfolders.moveNext())
{
clearFolder(subfolders.item().Path);
}
}
For each folder to clear, just add another call to the clearFolder() function. This particular code also preserves exe and dll files, and cleans up subfolders as well.
A: Might I add a humble contribution to this already valuable thread. I'm finding that other solutions might get rid of the actual error text but are ignoring the %ERRORLEVEL%, which signals a failure in my application. And I legitimately want %ERRORLEVEL%, just as long as it isn't the "No files found" error.
Some Examples:
Debugging and eliminating the error specifically:
forfiles /p "[file path...]\IDOC_ARCHIVE" /s /m *.txt /d -1 /c "cmd /c del @path" 2>&1 | findstr /V /O /C:"ERROR: No files found with the specified search criteria."2>&1 | findstr ERROR&&ECHO found error||echo found success
Using a oneliner to return ERRORLEVEL success or failure:
forfiles /p "[file path...]\IDOC_ARCHIVE" /s /m *.txt /d -1 /c "cmd /c del @path" 2>&1 | findstr /V /O /C:"ERROR: No files found with the specified search criteria."2>&1 | findstr ERROR&&EXIT /B 1||EXIT /B 0
Using a oneliner to keep the ERRORLEVEL at zero for success within the context of a batchfile in the midst of other code (ver > nul resets the ERRORLEVEL):
forfiles /p "[file path...]\IDOC_ARCHIVE" /s /m *.txt /d -1 /c "cmd /c del @path" 2>&1 | findstr /V /O /C:"ERROR: No files found with the specified search criteria."2>&1 | findstr ERROR&&ECHO found error||ver > nul
For a SQL Server Agent CmdExec job step I landed on the following. I don't know if it's a bug, but the CmdExec within the step only recognizes the first line of code:
cmd /e:on /c "forfiles /p "C:\SQLADMIN\MAINTREPORTS\SQL2" /s /m *.txt /d -1 /c "cmd /c del @path" 2>&1 | findstr /V /O /C:"ERROR: No files found with the specified search criteria."2>&1 | findstr ERROR&&EXIT 1||EXIT 0"&exit %errorlevel%
A: Gosh, a lot of answers already. A simple and convenient route I found was to execute ROBOCOPY.EXE twice in sequential order from a single Windows command line instruction using the & parameter.
ROBOCOPY.EXE SOURCE-DIR TARGET-DIR *.* /MOV /MINAGE:30 & ROBOCOPY.EXE SOURCE-DIR TARGET-DIR *.* /MOV /MINAGE:30 /PURGE
In this example it works by picking all files (*.*) that are older than 30 days and moving them to the target folder. The second command does the same again, with the addition of the /PURGE option, which means remove files in the target folder that don't exist in the source folder.
So essentially, the first command MOVES files and the second DELETES because they no longer exist in the source folder when the second command is invoked.
Consult ROBOCOPY's documentation and use the /L switch when testing.
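For example, a dry run of the first pass (nothing is moved; robocopy just lists what it would do):
ROBOCOPY.EXE SOURCE-DIR TARGET-DIR *.* /MOV /MINAGE:30 /L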
A: If you have the XP resource kit, you can use robocopy to move all the old directories into a single directory, then use rmdir to delete just that one:
mkdir c:\temp\OldDirectoriesGoHere
robocopy c:\logs\SoManyDirectoriesToDelete\ c:\temp\OldDirectoriesGoHere\ /move /minage:7
rmdir /s /q c:\temp\OldDirectoriesGoHere
A: I think e.James's answer is good since it works with unmodified versions of Windows as early as Windows 2000 SP4 (and possibly earlier), but it required writing to an external file. Here is a modified version that does not create an external text file while maintaining the compatibility:
REM del_old.cmd
REM usage: del_old MM-DD-YYYY
setlocal enabledelayedexpansion
for /f "tokens=*" %%a IN ('xcopy *.* /d:%1 /L /I null') do @if exist "%%~nxa" set "excludefiles=!excludefiles!;;%%~nxa;;"
for /f "tokens=*" %%a IN ('dir /b') do @(@echo "%excludefiles%"|FINDSTR /C:";;%%a;;">nul || if exist "%%~nxa" DEL /F /Q "%%a">nul 2>&1)
To be true to the original question, here it is in a script that does ALL the math for you if you call it with the number of days as the parameter:
REM del_old_compute.cmd
REM usage: del_old_compute N
setlocal enabledelayedexpansion
set /a days=%1&set cur_y=%DATE:~10,4%&set cur_m=%DATE:~4,2%&set cur_d=%DATE:~7,2%
for /f "tokens=1 delims==" %%a in ('set cur_') do if "!%%a:~0,1!"=="0" set /a %%a=!%%a:~1,1!+0
set mo_2=28&set /a leapyear=cur_y*10/4
if %leapyear:~-1% equ 0 set mo_2=29
set mo_1=31&set mo_3=31&set mo_4=30&set mo_5=31
set mo_6=30&set mo_7=31&set mo_8=31&set mo_9=30
set mo_10=31&set mo_11=30&set mo_12=31
set /a past_y=(days/365)
set /a monthdays=days-((past_y*365)+((past_y/4)*1))&&set /a past_y=cur_y-past_y&set months=0
:setmonth
set /a minusmonth=(cur_m-1)-months
if %minusmonth% leq 0 set /a minusmonth+=12
set /a checkdays=(mo_%minusmonth%)
if %monthdays% geq %checkdays% set /a months+=1&set /a monthdays-=checkdays&goto :setmonth
set /a past_m=cur_m-months
set /a lastmonth=cur_m-1
if %lastmonth% leq 0 set /a lastmonth+=12
set /a lastmonth=mo_%lastmonth%
set /a past_d=cur_d-monthdays&set adddays=::
if %past_d% leq 0 (set /a past_m-=1&set adddays=)
if %past_m% leq 0 (set /a past_m+=12&set /a past_y-=1)
set mo_2=28&set /a leapyear=past_y*10/4
if %leapyear:~-1% equ 0 set mo_2=29
%adddays%set /a past_d+=mo_%past_m%
set d=%past_m%-%past_d%-%past_y%
for /f "tokens=*" %%a IN ('xcopy *.* /d:%d% /L /I null') do @if exist "%%~nxa" set "excludefiles=!excludefiles!;;%%~nxa;;"
for /f "tokens=*" %%a IN ('dir /b') do @(@echo "%excludefiles%"|FINDSTR /C:";;%%a;;">nul || if exist "%%~nxa" DEL /F /Q "%%a">nul 2>&1)
NOTE: The code above takes into account leap years, as well as the exact number of days in each month. The only maximum is the total number of days there have been since 0/0/0 (after that it returns negative years).
NOTE: The math only goes one way; it cannot correctly get future dates from negative input (it will try, but will likely go past the last day of the month).
A: ROBOCOPY works great for me. Originally suggested by Iman. But instead of moving the files/folders to a temporary directory and then deleting the contents of the temporary folder, move the files to the trash!!!
These are a few lines of my backup batch file, for example:
SET FilesToClean1=C:\Users\pauls12\Temp
SET FilesToClean2=C:\Users\pauls12\Desktop\1616 - Champlain\Engineering\CAD\Backups
SET RecycleBin=C:\$Recycle.Bin\S-1-5-21-1480896384-1411656790-2242726676-748474
robocopy "%FilesToClean1%" "%RecycleBin%" /mov /MINLAD:15 /XA:SH /NC /NDL /NJH /NS /NP /NJS
robocopy "%FilesToClean2%" "%RecycleBin%" /mov /MINLAD:30 /XA:SH /NC /NDL /NJH /NS /NP /NJS
It cleans anything older than 15 days out of my 'Temp' folder and 30 days for anything in my AutoCAD backup folder. I use variables because the lines can get quite long and I can reuse them for other locations. You just need to find the DOS path to the recycle bin associated with your login.
This is on a work computer for me and it works. I understand that some of you may have more restrictive rights but give it a try anyway;) Search Google for explanations on the ROBOCOPY parameters.
Cheers!
A: You might be able to pull this off. You can take a look at this question for a simpler example. The complexity comes when you start comparing the dates. It may be easy to tell if the date is greater or not, but there are many situations to consider if you need to actually get the difference between two dates.
In other words - don't try to invent this, unless you really can't use the third party tools.
A: This is nothing amazing, but I needed to do something like this today and run it as a scheduled task, etc.
The batch file DelFilesOlderThanNDays.bat is below, with a sample execution with parameters:
DelFilesOlderThanNDays.bat 7 C:\dir1\dir2\dir3\logs *.log
echo off
cls
Echo(
SET keepDD=%1
SET logPath=%2 :: example C:\dir1\dir2\dir3\logs
SET logFileExt=%3
SET check=0
IF [%3] EQU [] SET logFileExt=*.log & echo: file extension not specified (default set to "*.log")
IF [%2] EQU [] echo: file directory not specified (a required parameter), exiting! & EXIT /B
IF [%1] EQU [] echo: number of days not specified? :)
echo(
echo: in path [ %logPath% ]
echo: finding all files like [ %logFileExt% ]
echo: older than [ %keepDD% ] days
echo(
::
::
:: LOG
echo: >> c:\trimLogFiles\logBat\log.txt
echo: executed on %DATE% %TIME% >> c:\trimLogFiles\logBat\log.txt
echo: ---------------------------------------------------------- >> c:\trimLogFiles\logBat\log.txt
echo: in path [ %logPath% ] >> c:\trimLogFiles\logBat\log.txt
echo: finding all files like [ %logFileExt% ] >> c:\trimLogFiles\logBat\log.txt
echo: older than [ %keepDD% ] days >> c:\trimLogFiles\logBat\log.txt
echo: ---------------------------------------------------------- >> c:\trimLogFiles\logBat\log.txt
::
FORFILES /p %logPath% /s /m %logFileExt% /d -%keepDD% /c "cmd /c echo @path" >> c:\trimLogFiles\logBat\log.txt 2<&1
IF %ERRORLEVEL% EQU 0 (
FORFILES /p %logPath% /s /m %logFileExt% /d -%keepDD% /c "cmd /c echo @path"
)
::
::
:: LOG
IF %ERRORLEVEL% EQU 0 (
echo: >> c:\trimLogFiles\logBat\log.txt
echo: deleting files ... >> c:\trimLogFiles\logBat\log.txt
echo: >> c:\trimLogFiles\logBat\log.txt
SET check=1
)
::
::
IF %check% EQU 1 (
FORFILES /p %logPath% /s /m %logFileExt% /d -%keepDD% /c "cmd /c del @path"
)
::
:: RETURN & LOG
::
IF %ERRORLEVEL% EQU 0 echo: deletion successfull! & echo: deletion successfull! >> c:\trimLogFiles\logBat\log.txt
echo: ---------------------------------------------------------- >> c:\trimLogFiles\logBat\log.txt
A: Expanding on aku's answer, I see a lot of people asking about UNC paths. Simply mapping the UNC path to a drive letter will make forfiles happy. Mapping and unmapping of drives can be done programmatically in a batch file, for example.
net use Z: /delete
net use Z: \\unc\path\to\my\folder
forfiles /p Z: /s /m *.gz /D -7 /C "cmd /c del @path"
This will delete all files with a .gz extension that are older than 7 days. If you want to make sure Z: isn't mapped to anything else before using it, you could do something as simple as:
net use Z: \\unc\path\to\my\folder
if %errorlevel% equ 0 (
forfiles /p Z: /s /m *.gz /D -7 /C "cmd /c del @path"
) else (
echo "Z: is already in use, please use another drive letter!"
)
A: OK, I was bored a bit and came up with this, which contains my version of a poor man's Linux epoch replacement limited to daily usage (no time retention):
7daysclean.cmd
@echo off
setlocal ENABLEDELAYEDEXPANSION
set day=86400
set /a year=day*365
set /a strip=day*7
set dSource=C:\temp
call :epoch %date%
set /a slice=epoch-strip
for /f "delims=" %%f in ('dir /a-d-h-s /b /s %dSource%') do (
call :epoch %%~tf
if !epoch! LEQ %slice% (echo DELETE %%f ^(%%~tf^)) ELSE echo keep %%f ^(%%~tf^)
)
exit /b 0
rem Args[1]: Year-Month-Day
:epoch
setlocal ENABLEDELAYEDEXPANSION
for /f "tokens=1,2,3 delims=-" %%d in ('echo %1') do set Years=%%d& set Months=%%e& set Days=%%f
if "!Months:~0,1!"=="0" set Months=!Months:~1,1!
if "!Days:~0,1!"=="0" set Days=!Days:~1,1!
set /a Days=Days*day
set /a _months=0
set i=1&& for %%m in (31 28 31 30 31 30 31 31 30 31 30 31) do if !i! LSS !Months! (set /a _months=!_months! + %%m*day&& set /a i+=1)
set /a Months=!_months!
set /a Years=(Years-1970)*year
set /a Epoch=Years+Months+Days
endlocal& set Epoch=%Epoch%
exit /b 0
USAGE
set /a strip=day*7 : Change 7 to the number of days to keep.
set dSource=C:\temp : This is the starting directory to check for files.
NOTES
This is non-destructive code; it will display what would have happened.
Change :
if !epoch! LEQ %slice% (echo DELETE %%f ^(%%~tf^)) ELSE echo keep %%f ^(%%~tf^)
to something like :
if !epoch! LEQ %slice% del /f %%f
so files actually get deleted
February is hard-coded to 28 days. Bissextile (leap) years are a hell to add, really. If someone has an idea that would not add 10 lines of code, go ahead and post it so I can add it to my code.
epoch: I did not take time into consideration, as the need is to delete files older than a certain date; taking hours/minutes into account could have deleted files from a day that was meant for keeping.
LIMITATION
epoch takes for granted your short date format is YYYY-MM-DD. It would need to be adapted for other settings or a run-time evaluation (read sShortTime, user-bound configuration, configure proper field order in a filter and use the filter to extract the correct data from the argument).
Did I mention I hate this editor's auto-formatting? It removes the blank lines, and the copy-paste is hell.
I hope this helps.
A: forfiles /p "v:" /s /m *.* /d -3 /c "cmd /c del @path"
You should do /d -3 (3 days earlier). This works fine for me, so all the complicated batch files can go in the trash bin. Also, forfiles doesn't support UNC paths, so make a network connection to a specific drive.
A: Have a look at my answer to a similar question:
REM del_old.bat
REM usage: del_old MM-DD-YYYY
for /f "tokens=*" %%a IN ('xcopy *.* /d:%1 /L /I null') do if exist %%~nxa echo %%~nxa >> FILES_TO_KEEP.TXT
for /f "tokens=*" %%a IN ('xcopy *.* /L /I /EXCLUDE:FILES_TO_KEEP.TXT null') do if exist "%%~nxa" del "%%~nxa"
This deletes files older than a given date. I'm sure it can be modified to go back seven days from the current date.
update: I notice that HerbCSO has improved on the above script. I recommend using his version instead.
A: My command is
forfiles -p "d:\logs" -s -m*.log -d-15 -c"cmd /c del @PATH\@FILE"
@PATH - is just the path in my case, so I had to use @PATH\@FILE
Also, forfiles /? didn't work for me either, but forfiles (without the "?") worked fine.
And the only question I have: how do I add multiple masks (for example ".log|.bak")?
All this applies to the forfiles.exe that I downloaded here (on Win XP).
But if you are using Windows Server, forfiles.exe should already be there, and it differs from the FTP version. That is why I had to modify the command.
For Windows Server 2003 I'm using this command:
forfiles -p "d:\Backup" -s -m *.log -d -15 -c "cmd /c del @PATH"
A: Enjoy:
forfiles -p "C:\what\ever" -s -m *.* -d <number of days> -c "cmd /c del @path"
See forfiles documentation for more details.
For more goodies, refer to An A-Z Index of the Windows XP command line.
If you don't have forfiles installed on your machine, copy it from any Windows Server 2003 to your Windows XP machine at %WinDir%\system32\. This is possible since the EXE is fully compatible between Windows Server 2003 and Windows XP.
Later versions of Windows and Windows Server have it installed by default.
For Windows 7 and newer (including Windows 10):
The syntax has changed a little. Therefore the updated command is:
forfiles /p "C:\what\ever" /s /m *.* /D -<number of days> /C "cmd /c del @path"
A: For windows 2012 R2 the following would work:
forfiles /p "c:\FOLDERpath" /d -30 /c "cmd /c del @path"
to see the files which will be deleted use this
forfiles /p "c:\FOLDERpath" /d -30 /c "cmd /c echo @path @fdate"
A: There are very often relative date/time questions to solve with a batch file. But the command line interpreter cmd.exe has no functions for date/time calculations. Lots of good working solutions using additional console applications or scripts have been posted already here, on other pages of Stack Overflow, and on other websites.
Common to operations based on date/time is the requirement to convert a date/time string into seconds since a fixed day. Very common is 1970-01-01 00:00:00 UTC. But any later day could also be used, depending on the date range required for a specific task.
Jay posted 7daysclean.cmd containing a fast "date to seconds" solution for the command line interpreter cmd.exe. But it does not take leap years correctly into account. J.R. posted an add-on taking the leap day in the current year into account, but ignoring the other leap years since the base year, i.e. since 1970.
For 20 years I have used static tables (arrays), created once with a small C function, for quickly getting the number of days including leap days since 1970-01-01 in the date/time conversion functions of my applications written in C/C++.
This very fast table method can be used also in batch code using FOR command. So I decided to code the batch subroutine GetSeconds which calculates the number of seconds since 1970-01-01 00:00:00 UTC for a date/time string passed to this routine.
Note: Leap seconds are not taken into account as the Windows file systems also do not support leap seconds.
First, the tables:
*
*Days since 1970-01-01 00:00:00 UTC for each year including leap days.
1970 - 1979: 0 365 730 1096 1461 1826 2191 2557 2922 3287
1980 - 1989: 3652 4018 4383 4748 5113 5479 5844 6209 6574 6940
1990 - 1999: 7305 7670 8035 8401 8766 9131 9496 9862 10227 10592
2000 - 2009: 10957 11323 11688 12053 12418 12784 13149 13514 13879 14245
2010 - 2019: 14610 14975 15340 15706 16071 16436 16801 17167 17532 17897
2020 - 2029: 18262 18628 18993 19358 19723 20089 20454 20819 21184 21550
2030 - 2039: 21915 22280 22645 23011 23376 23741 24106 24472 24837 25202
2040 - 2049: 25567 25933 26298 26663 27028 27394 27759 28124 28489 28855
2050 - 2059: 29220 29585 29950 30316 30681 31046 31411 31777 32142 32507
2060 - 2069: 32872 33238 33603 33968 34333 34699 35064 35429 35794 36160
2070 - 2079: 36525 36890 37255 37621 37986 38351 38716 39082 39447 39812
2080 - 2089: 40177 40543 40908 41273 41638 42004 42369 42734 43099 43465
2090 - 2099: 43830 44195 44560 44926 45291 45656 46021 46387 46752 47117
2100 - 2106: 47482 47847 48212 48577 48942 49308 49673
Calculating the seconds for the years 2039 to 2106 with the epoch beginning 1970-01-01 is only possible using an unsigned 32-bit variable, i.e. unsigned long (or unsigned int) in C/C++.
But cmd.exe uses a signed 32-bit variable for mathematical expressions. Therefore the maximum value is 2147483647 (0x7FFFFFFF), which corresponds to 2038-01-19 03:14:07.
*Leap year information (No/Yes) for the years 1970 to 2106.
1970 - 1989: N N Y N N N Y N N N Y N N N Y N N N Y N
1990 - 2009: N N Y N N N Y N N N Y N N N Y N N N Y N
2010 - 2029: N N Y N N N Y N N N Y N N N Y N N N Y N
2030 - 2049: N N Y N N N Y N N N Y N N N Y N N N Y N
2050 - 2069: N N Y N N N Y N N N Y N N N Y N N N Y N
2070 - 2089: N N Y N N N Y N N N Y N N N Y N N N Y N
2090 - 2106: N N Y N N N Y N N N N N N N Y N N
^ year 2100
*Number of days to first day of each month in current year.
Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
Year with 365 days: 0 31 59 90 120 151 181 212 243 273 304 334
Year with 366 days: 0 31 60 91 121 152 182 213 244 274 305 335
Converting a date to the number of seconds since 1970-01-01 is quite easy using those tables.
Attention please!
The format of date and time strings depends on the Windows region and language settings. The delimiters and the order of the tokens assigned to the environment variables Day, Month and Year in the first FOR loop of GetSeconds must be adapted to the local date/time format if necessary.
It is also necessary to adapt the date string taken from the environment variable DATE if its format differs from the date format produced by command FOR with %%~tF.
For example, when %DATE% expands to Sun 02/08/2015 while %%~tF expands to 02/08/2015 07:38 PM, the code below can be used after modifying line 4 to:
call :GetSeconds "%DATE:~4% %TIME%"
This passes just 02/08/2015 to the subroutine - the date string without the 3-letter weekday abbreviation and the separating space character.
Alternatively, the following could be used to pass the current date in the correct format:
call :GetSeconds "%DATE:~-10% %TIME%"
Now the last 10 characters of the date string are passed to the function GetSeconds, and therefore it does not matter whether the date string of the environment variable DATE contains the weekday or not, as long as day and month always have 2 digits in the expected order, i.e. in format dd/mm/yyyy or dd.mm.yyyy.
Here is the batch code with explanatory comments; it just outputs which files to delete and which to keep in the C:\Temp folder tree (see the first FOR loop).
@echo off
setlocal EnableExtensions DisableDelayedExpansion
rem Get seconds since 1970-01-01 for current date and time.
call :GetSeconds "%DATE% %TIME%"
rem Subtract seconds for 7 days from seconds value.
set /A "LastWeek=Seconds-7*86400"
rem For each file in each subdirectory of C:\Temp get last modification date
rem (without seconds -> append second 0) and determine the number of seconds
rem since 1970-01-01 for this date/time. The file can be deleted if seconds
rem value is lower than the value calculated above.
for /F "delims=" %%# in ('dir /A-D-H-S /B /S "C:\Temp"') do (
call :GetSeconds "%%~t#:0"
set "FullFileName=%%#"
setlocal EnableDelayedExpansion
rem if !Seconds! LSS %LastWeek% del /F "!FullFileName!"
if !Seconds! LEQ %LastWeek% (
echo Delete "!FullFileName!"
) else (
echo Keep "!FullFileName!"
)
endlocal
)
endlocal
goto :EOF
rem No validation is made for best performance. So make sure that date
rem and hour in string is in a format supported by the code below like
rem MM/DD/YYYY hh:mm:ss or M/D/YYYY h:m:s for English US date/time.
:GetSeconds
rem If there is " AM" or " PM" in time string because of using 12 hour
rem time format, remove those 2 strings and in case of " PM" remember
rem that 12 hours must be added to the hour depending on hour value.
set "DateTime=%~1"
set "Add12Hours=0"
if not "%DateTime: AM=%" == "%DateTime%" (
set "DateTime=%DateTime: AM=%"
) else if not "%DateTime: PM=%" == "%DateTime%" (
set "DateTime=%DateTime: PM=%"
set "Add12Hours=1"
)
rem Get year, month, day, hour, minute and second from first parameter.
for /F "tokens=1-6 delims=,-./: " %%A in ("%DateTime%") do (
rem For English US date MM/DD/YYYY or M/D/YYYY
set "Day=%%B" & set "Month=%%A" & set "Year=%%C"
rem For German date DD.MM.YYYY or English UK date DD/MM/YYYY
rem set "Day=%%A" & set "Month=%%B" & set "Year=%%C"
set "Hour=%%D" & set "Minute=%%E" & set "Second=%%F"
)
rem echo Date/time is: %Year%-%Month%-%Day% %Hour%:%Minute%:%Second%
rem Remove leading zeros from the date/time values or calculation could be wrong.
if "%Month:~0,1%" == "0" if not "%Month:~1%" == "" set "Month=%Month:~1%"
if "%Day:~0,1%" == "0" if not "%Day:~1%" == "" set "Day=%Day:~1%"
if "%Hour:~0,1%" == "0" if not "%Hour:~1%" == "" set "Hour=%Hour:~1%"
if "%Minute:~0,1%" == "0" if not "%Minute:~1%" == "" set "Minute=%Minute:~1%"
if "%Second:~0,1%" == "0" if not "%Second:~1%" == "" set "Second=%Second:~1%"
rem Add 12 hours for time range 01:00:00 PM to 11:59:59 PM,
rem but keep the hour as is for 12:00:00 PM to 12:59:59 PM.
if %Add12Hours% == 1 if %Hour% LSS 12 set /A Hour+=12
set "DateTime="
set "Add12Hours="
rem Must use two arrays as more than 31 tokens are not supported
rem by the command line interpreter cmd.exe, or more precisely by the command FOR.
set /A "Index1=Year-1979"
set /A "Index2=Index1-30"
if %Index1% LEQ 30 (
rem Get number of days to year for the years 1980 to 2009.
for /F "tokens=%Index1% delims= " %%Y in ("3652 4018 4383 4748 5113 5479 5844 6209 6574 6940 7305 7670 8035 8401 8766 9131 9496 9862 10227 10592 10957 11323 11688 12053 12418 12784 13149 13514 13879 14245") do set "Days=%%Y"
for /F "tokens=%Index1% delims= " %%L in ("Y N N N Y N N N Y N N N Y N N N Y N N N Y N N N Y N N N Y N") do set "LeapYear=%%L"
) else (
rem Get number of days to year for the years 2010 to 2038.
for /F "tokens=%Index2% delims= " %%Y in ("14610 14975 15340 15706 16071 16436 16801 17167 17532 17897 18262 18628 18993 19358 19723 20089 20454 20819 21184 21550 21915 22280 22645 23011 23376 23741 24106 24472 24837") do set "Days=%%Y"
for /F "tokens=%Index2% delims= " %%L in ("N N Y N N N Y N N N Y N N N Y N N N Y N N N Y N N N Y N N") do set "LeapYear=%%L"
)
rem Add the days to month in year.
if "%LeapYear%" == "N" (
for /F "tokens=%Month% delims= " %%M in ("0 31 59 90 120 151 181 212 243 273 304 334") do set /A "Days+=%%M"
) else (
for /F "tokens=%Month% delims= " %%M in ("0 31 60 91 121 152 182 213 244 274 305 335") do set /A "Days+=%%M"
)
rem Add the complete days in month of year.
set /A "Days+=Day-1"
rem Calculate the seconds which is easy now.
set /A "Seconds=Days*86400+Hour*3600+Minute*60+Second"
rem Exit this subroutine.
goto :EOF
For optimal performance it would be best to remove all comments, i.e. all lines starting with rem after 0-4 leading spaces.
The arrays can also be made smaller, i.e. the supported time range can be reduced from 1980-01-01 00:00:00 to 2038-01-19 03:14:07 (as in the batch code above) to, for example, 2015-01-01 to 2019-12-31, as in the code below, which really deletes files older than 7 days in the C:\Temp folder tree.
Furthermore, the batch code below is optimized for the 24-hour time format.
@echo off
setlocal EnableExtensions DisableDelayedExpansion
call :GetSeconds "%DATE:~-10% %TIME%"
set /A "LastWeek=Seconds-7*86400"
for /F "delims=" %%# in ('dir /A-D-H-S /B /S "C:\Temp"') do (
call :GetSeconds "%%~t#:0"
set "FullFileName=%%#"
setlocal EnableDelayedExpansion
if !Seconds! LSS %LastWeek% del /F "!FullFileName!"
endlocal
)
endlocal
goto :EOF
:GetSeconds
for /F "tokens=1-6 delims=,-./: " %%A in ("%~1") do (
set "Day=%%B" & set "Month=%%A" & set "Year=%%C"
set "Hour=%%D" & set "Minute=%%E" & set "Second=%%F"
)
if "%Month:~0,1%" == "0" if not "%Month:~1%" == "" set "Month=%Month:~1%"
if "%Day:~0,1%" == "0" if not "%Day:~1%" == "" set "Day=%Day:~1%"
if "%Hour:~0,1%" == "0" if not "%Hour:~1%" == "" set "Hour=%Hour:~1%"
if "%Minute:~0,1%" == "0" if not "%Minute:~1%" == "" set "Minute=%Minute:~1%"
if "%Second:~0,1%" == "0" if not "%Second:~1%" == "" set "Second=%Second:~1%"
set /A "Index=Year-2014"
for /F "tokens=%Index% delims= " %%Y in ("16436 16801 17167 17532 17897") do set "Days=%%Y"
for /F "tokens=%Index% delims= " %%L in ("N Y N N N") do set "LeapYear=%%L"
if "%LeapYear%" == "N" (
for /F "tokens=%Month% delims= " %%M in ("0 31 59 90 120 151 181 212 243 273 304 334") do set /A "Days+=%%M"
) else (
for /F "tokens=%Month% delims= " %%M in ("0 31 60 91 121 152 182 213 244 274 305 335") do set /A "Days+=%%M"
)
set /A "Days+=Day-1"
set /A "Seconds=Days*86400+Hour*3600+Minute*60+Second"
goto :EOF
For even more information about date and time formats and file time comparisons on Windows see my answer on Find out if file is older than 4 hours in batch file with lots of additional information about file times.
A: This one did it for me. It works with a date, and you can subtract the wanted number of years to go back in time:
@echo off
set m=%date:~-7,2%
set /A m
set dateYear=%date:~-4,4%
set /A dateYear -= 2
set DATE_DIR=%date:~-10,2%.%m%.%dateYear%
forfiles /p "C:\your\path\here\" /s /m *.* /d -%DATE_DIR% /c "cmd /c del @path /F"
pause
The /F in cmd /c del @path /F forces deletion even in cases where the file is read-only.
dateYear is the year variable; change the subtracted amount there to suit your needs.
A: My script to delete files older than a specific year:
@REM _______ GENERATE A CMD TO DELETE FILES OLDER THAN A GIVEN YEAR
@REM _______ (given in _olderthanyear variable)
@REM _______ (you must LOCALIZE the script depending on the dir cmd console output)
@REM _______ (we assume here the following line's format "11/06/2017 15:04 58 389 SpeechToText.zip")
@set _targetdir=c:\temp
@set _olderthanyear=2017
@set _outfile1="%temp%\deleteoldfiles.1.tmp.txt"
@set _outfile2="%temp%\deleteoldfiles.2.tmp.txt"
@if not exist "%_targetdir%" (call :process_error 1 DIR_NOT_FOUND "%_targetdir%") & (goto :end)
:main
@dir /a-d-h-s /s /b %_targetdir%\*>%_outfile1%
@for /F "tokens=*" %%F in ('type %_outfile1%') do @call :process_file_path "%%F" %_outfile2%
@goto :end
:end
@rem ___ cleanup and exit
@if exist %_outfile1% del %_outfile1%
@if exist %_outfile2% del %_outfile2%
@goto :eof
:process_file_path %1 %2
@rem ___ get date info of the %1 file path
@dir %1 | find "/" | find ":" > %2
@for /F "tokens=*" %%L in ('type %2') do @call :process_line "%%L" %1
@goto :eof
:process_line %1 %2
@rem ___ generate a del command for each file older than %_olderthanyear%
@set _var=%1
@rem LOCALIZE HERE (char-offset,string-length); pick the offset that matches the year's position in your dir output, e.g.:
@rem set _fileyear=%_var:~0,4%
@set _fileyear=%_var:~7,4%
@set _filepath=%2
@if %_fileyear% LSS %_olderthanyear% echo @REM %_fileyear%
@if %_fileyear% LSS %_olderthanyear% echo @del %_filepath%
@goto :eof
:process_error %1 %2
@echo RC=%1 MSG=%2 %3
@goto :eof
A: A more flexible way is to use FileTimeFilterJS.bat:
@echo off
::::::::::::::::::::::
set "_DIR=C:\Users\npocmaka\Downloads"
set "_DAYS=-5"
::::::::::::::::::::::
for /f "tokens=* delims=" %%# in ('FileTimeFilterJS.bat "%_DIR%" -dd %_DAYS%') do (
echo deleting "%%~f#"
echo del /q /f "%%~f#"
)
The script allows you to use measurements like days, minutes, seconds or hours;
to choose whether to filter the files by time of creation, access or modification;
to list files before or after a certain date (or between two dates);
to choose whether to show files or dirs (or both);
and to be recursive or not.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "740"
} |
Q: Plugin for R# similar to CodeRush "statement highlight" See here http://www.hanselman.com/blog/InSearchOfThePerfectMonospacedProgrammersFontInconsolata.aspx - for want of a better description - the statement block highlighting - eg in the pics on the link the "statement blocks" are grouped with a vertical line. I understand this is a feature of CodeRush - does R# have either anything similar, or a plugin to do the same?
A: R# has a feature called Highlight current line, which you have to enable in the ReSharper options. This looks like crap on dark background, high contrast themes, so if you use one I'd suggest going into the Visual Studio options, under Fonts & Colors and going to "ReSharper - current Line" and making the background color a darker shade that doesn't have as much contrast with the background.
R# also has matching brace highlighting, which is color-configurable as well under the same VS option dialog.
Does that answer your question?
A: I use the latest version of ReSharper that is currently available — ReSharper 4.5 — but unfortunately I don't believe there is any feature for drawing a vertical line between matching braces, as in the screen-shots you referenced.
The feature I find useful, which Ben mentioned, is the matching brace highlighting, however this only takes effect when your cursor is adjacent to an opening or closing brace.
A: Notepad++ has a nice brace matching feature, with vertical lines matching the braces. It's not VS, so I only use it when I'm faced with some confusing JS: I cut and paste, figure out the braces, and go back to VS. It would be GREAT if this sort of feature existed in VS, or R#.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: SQL: aggregate function and group by Consider the Oracle emp table. I'd like to get the employees with the top salary with department = 20 and job = clerk. Also assume that there is no "empno" column, and that the primary key involves a number of columns. You can do this with:
select * from scott.emp
where deptno = 20 and job = 'CLERK'
and sal = (select max(sal) from scott.emp
where deptno = 20 and job = 'CLERK')
This works, but I have to duplicate the test deptno = 20 and job = 'CLERK', which I would like to avoid. Is there a more elegant way to write this, maybe using a group by? BTW, if this matters, I am using Oracle.
A: The following is slightly over-engineered, but is a good SQL pattern for "top x" queries.
SELECT
*
FROM
scott.emp
WHERE
(deptno,job,sal) IN
(SELECT
deptno,
job,
max(sal)
FROM
scott.emp
WHERE
deptno = 20
and job = 'CLERK'
GROUP BY
deptno,
job
)
Also note that this will work in Oracle and Postgres (I think) but not MS SQL. For something similar in MS SQL, see the question SQL Query to get latest price, or the sketch below.
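Since SQL Server doesn't accept the tuple IN syntax above, a rough equivalent (a sketch, untested) is to join against the grouped subquery instead:
SELECT e.*
FROM scott.emp e
JOIN (SELECT deptno, job, MAX(sal) AS max_sal
      FROM scott.emp
      WHERE deptno = 20
      AND job = 'CLERK'
      GROUP BY deptno, job) m
ON e.deptno = m.deptno
AND e.job = m.job
AND e.sal = m.max_sal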
A: If I was certain of the targeted database I'd go with Mark Nold's solution, but if you ever want some dialect agnostic SQL*, try
SELECT *
FROM scott.emp e
WHERE e.deptno = 20
AND e.job = 'CLERK'
AND e.sal = (
SELECT MAX(e2.sal)
FROM scott.emp e2
WHERE e.deptno = e2.deptno
AND e.job = e2.job
)
*I believe this should work everywhere, but I don't have the environments to test it.
A: In Oracle I'd do it with an analytical function, so you'd only query the emp table once :
SELECT *
FROM (SELECT e.*, MAX (sal) OVER () AS max_sal
FROM scott.emp e
WHERE deptno = 20
AND job = 'CLERK')
WHERE sal = max_sal
It's simpler, easier to read and more efficient.
If you want to modify it to list this information for all departments, then you'll need to use the "PARTITION BY" clause in OVER:
SELECT *
FROM (SELECT e.*, MAX (sal) OVER (PARTITION BY deptno) AS max_sal
FROM scott.emp e
WHERE job = 'CLERK')
WHERE sal = max_sal
ORDER BY deptno
A: That's great! I didn't know you could do a comparison of (x, y, z) with the result of a SELECT statement. This works great with Oracle.
As a side-note for other readers, the above query is missing a "=" after "(deptno,job,sal)". Maybe the Stack Overflow formatter ate it (?).
Again, thanks Mark.
A: In Oracle you can also use the EXISTS statement, which in some cases is faster.
For example...
SELECT name, number
FROM cust
WHERE cust IN
( SELECT cust_id FROM big_table )
AND entered > SYSDATE -1
would be slow.
but
SELECT name, number
FROM cust c
WHERE EXISTS
( SELECT cust_id FROM big_table WHERE cust_id=c.cust_id )
AND entered > SYSDATE -1
would be very fast with proper indexing. You can also use this with multiple parameters.
A: There are many solutions. You could also keep your original query layout by simply adding table aliases and joining on the column names; you would still have DEPTNO = 20 and JOB = 'CLERK' in the query only once.
SELECT
*
FROM
scott.emp emptbl
WHERE
emptbl.DEPTNO = 20
AND emptbl.JOB = 'CLERK'
AND emptbl.SAL =
(
select
max(salmax.SAL)
from
scott.emp salmax
where
salmax.DEPTNO = emptbl.DEPTNO
AND salmax.JOB = emptbl.JOB
)
It could also be noted that the keyword ALL can be used for these types of queries, which allows you to remove the MAX function.
SELECT
*
FROM
scott.emp emptbl
WHERE
emptbl.DEPTNO = 20
AND emptbl.JOB = 'CLERK'
AND emptbl.SAL >= ALL
(
select
salmax.SAL
from
scott.emp salmax
where
salmax.DEPTNO = emptbl.DEPTNO
AND salmax.JOB = emptbl.JOB
)
I hope that helps and makes sense.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Ubuntu 32 bit maximum address space Jeff covered this a while back on his blog in terms of 32 bit Vista.
Does the same 32 bit 4 GB memory cap that applies in 32 bit Vista apply to 32 bit Ubuntu? Are there any 32 bit operating systems that have creatively solved this problem?
A: Ubuntu server has PAE enabled in the kernel, the desktop version does not have this feature enabled by default.
This explains, by the way, why Ubuntu server does not work in some hardware emulators whereas the desktop edition does
A: Yes, 32-bit Ubuntu has the same memory limitations.
There are exceptions to the 4GB limitation, but they are application specific... As in, Microsoft SQL Server can use 16 gigabytes with "Physical Address Extensions" [PAE] configured and supported and... ugh
http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=3703755&SiteID=17
Also, drivers in Ubuntu and Windows both reduce the amount of memory available from the 4GB address space by mapping memory from that 4GB to devices. Graphics cards are particularly bad at this; your 256MB graphics card is using up at least 256MB of your address space...
If you can [your drivers support it, and your CPU is new enough], install a 64-bit OS. Your 32-bit applications and games will run fine.
A: Well, with Windows, there's something called PAE, which means you can access up to 64 GB of memory on a Windows machine. The downside is that most apps don't support actually using more than 4 GB of RAM. Only a small number of apps, like SQL Server, are programmed to actually take advantage of all the extra memory.
A: In theory, all 32-bit OSes have that problem. You have 32 bits to do addressing.
2^32 bytes / 2^10 (bytes per KB) / 2^10 (KB per MB) / 2^10 (MB per GB) = 2^2 = 4 GB.
Although there are some ways around it. (Look up the jump from 16-bit computing to 32-bit computing. They hit the same problem.)
A: There seems to be some confusion around PAE. PAE is "Physical Address Extension", and is by no means a Windows feature. It is a hack Intel put in their Pentium II (and newer) chips to allow machines to access 64GB of memory. On Windows, applications need to support PAE explicitly, but in the open source world, packages can be compiled and optimized to your liking. The packages that could use more than 4GB of memory on Ubuntu (and other Linux distros) are compiled with PAE support. This includes all server-specific software.
A: Linux supports a technology called PAE that lets you use more than 4GB of memory, however I don't know whether Ubuntu has it on by default. You may need to compile a new kernel.
Edit: Some threads on the Ubuntu forums suggest that the server kernel has PAE on by default, you could try installing that.
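As a quick sanity check from a terminal, you can see whether the CPU advertises PAE at all, and then try the server kernel (a sketch; the linux-server package name is from that era and may vary by release):
grep -q pae /proc/cpuinfo && echo "PAE capable" || echo "No PAE"
sudo apt-get install linux-server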
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Payment Processors - What do I need to know if I want to accept credit cards on my website? This question talks about different payment processors and what they cost, but I'm looking for the answer to what do I need to do if I want to accept credit card payments?
Assume I need to store credit card numbers for customers, so that the obvious solution of relying on the credit card processor to do the heavy lifting is not available.
PCI Data Security, which is apparently the standard for storing credit card info, has a bunch of general requirements, but how does one implement them?
And what about the vendors, like Visa, who have their own best practices?
Do I need to have keyfob access to the machine? What about physically protecting it from hackers in the building? Or even what if someone got their hands on the backup files with the sql server data files on it?
What about backups? Are there other physical copies of that data around?
Tip: If you get a merchant account, you should negotiate that they charge you "interchange-plus" instead of tiered pricing. With tiered pricing, they will charge you different rates based on what type of Visa/MC is used -- ie. they charge you more for cards with big rewards attached to them. Interchange plus billing means you only pay the processor what Visa/MC charges them, plus a flat fee. (Amex and Discover charge their own rates directly to merchants, so this doesn't apply to those cards. You'll find Amex rates to be in the 3% range and Discover could be as low as 1%. Visa/MC is in the 2% range). This service is supposed to do the negotiation for you (I haven't used it, this is not an ad, and I'm not affiliated with the website, but this service is greatly needed.)
This blog post gives a complete rundown of handling credit cards (specifically for the UK).
Perhaps I phrased the question wrong, but I'm looking for tips like these:
*
*Use SecurID or eToken to add an additional password layer to the physical box.
*Make sure the box is in a room with a physical lock or keycode combination.
A: Keep in mind that using SSL to send a card number from a browser to a server is like covering your credit card number with your thumb when you hand your card to a cashier in a restaurant: your thumb (SSL) prevents other customers in the restaurant (the Net) from seeing the card, but once the card is in the hands of the cashier (a web server) the card is no longer protected by the SSL exchange, and the cashier could be doing anything with that card. Access to a saved card number can only be stopped by the security on the web server. Ie, most card thefts on the net aren't done during transmission, they're done by breaking through poor server security and stealing databases.
A: Why bother with PCI compliance? At best you'll shave a fraction of a percent off your processing fees. This is one of those cases where you have to be sure this is what you want to be doing with your time, both upfront in development and over time in keeping up with the latest requirements.
In our case, it made the most sense to use a subscription-savvy gateway and pair that with a merchant account. The subscription-savvy gateway allows you to skip all the PCI compliance and do nothing more than process the transaction proper.
We use TrustCommerce as our gateway and are happy with their service/pricing. They have code for a bunch of languages that makes integration pretty easy.
A: Be sure to get a handle on the extra work and budget required for PCI. PCI may require huge external audit fees and internal effort/support. Also be aware of the fines/penalties that can be unilaterally levied on you, often hugely disproportionate to the scale of the 'offense'.
A: I went through this process not to long ago with a company I worked for and I plan on going through it again soon with my own business. If you have some network technical knowledge, it really isn't that bad. Otherwise you will be better off using Paypal or another type of service.
The process starts by getting a merchant account setup and tied to your bank account. You may want to check with your bank, because a lot of major banks provide merchant services. You may be able to get deals, because you are already a customer of theirs, but if not, then you can shop around. If you plan on accepting Discover or American Express, those will be separate, because they provide the merchant services for their cards, no getting around this. There are other special cases also. This is an application process, be prepared.
Next you will want to purchase an SSL certificate that you can use to secure your communications when the credit card info is transmitted over public networks. There are plenty of vendors, but my rule of thumb is to pick a brand name: the better known they are, the more likely your customer has heard of them.
Next you will want to find a payment gateway to use with your site. Although this can be optional depending on how big you are, the majority of the time it won't be. You will need one. The payment gateway vendors provide the Internet gateway API that you will communicate with. Most vendors provide HTTP or TCP/IP communication with their API. They will process the credit card information on your behalf. Two vendors are Authorize.Net and PayFlow Pro. The link I provide below has some more information on other vendors.
Now what? For starters there are guidelines on what your application has to adhere to for transmitting the transactions. During the process of getting everything set up, someone will look at your site or application and make sure you are adhering to the guidelines, like using SSL and having terms of use and policy documentation on what the information the user gives you is used for. Don't steal this from another site. Come up with your own; hire a lawyer if you need to. Most of these things fall under the PCI Data Security link Michael provided in his question.
If you plan on storing the credit card numbers, then you had better be prepared to put some security measures in place internally to protect the info. Make sure the server the information is stored on is only accessible to members who need to have access. Like any good security, you do things in layers: the more layers you put in place, the better. If you want, you can use key fob security, like SecurID or eToken, to protect the room the server is in. If you can't afford the key fob route, then use the two-key method: allow a person who has access to the room to sign out a key, which goes along with a key they already carry; they will need both keys to access the room.
Next you protect the communication to the server with policies. My policy is that the only thing communicating with it over the network is the application, and that information is encrypted. The server should not be accessible in any other form. For backups, I use TrueCrypt to encrypt the volumes the backups are saved to. Any time the data is removed or stored somewhere else, again use TrueCrypt to encrypt the volume the data is on. Basically, wherever the data is, it needs to be encrypted.
Make sure all processes for getting at the data carry audit trails. Use logs for access to the server room, use cameras if you can, etc... Another measure is to encrypt the credit card information in the database. This makes sure that the data can only be viewed in your application, where you can enforce who sees the information.
I use pfSense for my firewall. I run it off a compact flash card and have two servers set up; one is for failover redundancy.
I found this blog post by Rick Strahl which helped tremendously to understand doing e-commerce and what it takes to accept credit cards through a web application.
Well, this turned out to be a long answer. I hope these tips help.
A: Ask yourself the following question: why do you want to store credit card numbers in the first place? Chances are that you don't. In fact, if you do store them and manage to have one stolen, you could be looking at some serious liability.
I've written an app that does store credit card numbers (since the transactions were processed offline). Here's a good way to do it:
*
*Get an SSL certificate!
*Create a form to get CC# from the user.
*Encrypt part (not all!) of the CC# and store it in your database. (I'd suggest the middle 8 digits.) Use a strong encryption method and a secret key.
*Mail the remainder of the CC# to whoever processes your transactions (probably yourself) with the ID of the person to process.
*When you log in later, you will type in the ID and the mailed-out portion of the CC#. Your system can decrypt the other portion and recombine to get the full number so you can process the transaction.
*Finally, delete the online record. My paranoid solution was to overwrite the record with random data before deletion, to remove the possibility of an undelete.
This sounds like a lot of work, but by never recording a complete CC# anywhere, you make it extremely hard for a hacker to find anything of value on your webserver. Trust me, it's worth the peace of mind.
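A minimal sketch of the encrypt-part-of-the-number steps in C# (the class and method names are hypothetical, a 16-digit number is assumed, and key/IV management, validation and error handling are omitted):
using System;
using System.Security.Cryptography;
using System.Text;

static class CardSplitter
{
    // Encrypt the middle 8 digits (digits 5..12) of a 16-digit card number.
    public static string EncryptMiddle(string cardNumber, byte[] key, byte[] iv)
    {
        string middle = cardNumber.Substring(4, 8);
        using (Aes aes = Aes.Create())
        using (ICryptoTransform enc = aes.CreateEncryptor(key, iv))
        {
            byte[] plain = Encoding.ASCII.GetBytes(middle);
            byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
            return Convert.ToBase64String(cipher); // store this in the database
        }
    }

    // The outer digits (first 4 + last 4) are mailed, never stored online.
    public static string OuterDigits(string cardNumber)
    {
        return cardNumber.Substring(0, 4) + cardNumber.Substring(12, 4);
    }
}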
A: There's a lot to the whole process. The single easiest way to do it is to use services similar to paypal, so that you never actually handle any credit card data. Apart from that, there's a quite a bit of stuff to go through to get approved to offer credit card services on your website. You should probably talk with your bank, and the people who issue your merchant ID to help you in setting up the process.
A: As others have mentioned, the easiest way into this area is with the use of PayPal, Google Checkout or Nochex. However, if you intend to do a significant amount of business you may wish to look at "upgrading" to higher-level site integration services such as WorldPay, NetBanx (UK) or Neteller (US). All of these services are reasonably easy to set up. And I know that NetBanx offers convenient integration into some of the off-the-shelf shopping cart solutions such as Intershop (because I wrote some of them). Beyond that you are looking at direct integration with the banking systems (and their APAX systems), but that's hard, and at that point you also need to prove to the credit card companies that you are handling the credit card numbers securely (probably not worth considering if you are not taking $100k's worth per month).
Working from first to last, the cost/benefit trade-off is that the early options are much easier (quicker/cheaper) to set up but carry quite high handling charges for each transaction; the later ones are much more costly to set up but you pay less in the long run.
The other advantage of most of the non-dedicated solutions is that you don't need to keep encrypted credit card numbers secure. That's someone else's problem :-)
A: The PCI 1.2 document just came out. It gives a process for how to implement PCI compliance along with the requirements. You can find the full doc here:
https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml
Long story short, create a separate network segment for whichever servers will be dedicated to storing CC info (usually DB server(s)). Isolate the data as much as possible, and ensure only the minimum access necessary to access the data is present. Encrypt it when you store it. Never store PANs. Purge old data and rotate your encryption keys.
Example Don'ts :
*
*Don't let the same account that can lookup general info in the database look up CC info.
*Don't keep your CC database on the same physical server as your web server.
*Don't allow external (Internet) traffic into your CC database network segment.
Example Dos:
*
*Use a separate Database account to query CC info.
*Disallow all but required traffic to CC database server via firewall/access-lists
*Restrict access to CC server to a limited set of authorized users.
A: I'd like to add a non-technical comment that you may wish to think about
Several of my clients run e-commerce sites, including a couple who have moderately large stores. Both of those, whilst they certainly could implement a payment gateway, choose not to; they take the CC number, store it temporarily encrypted online, and process it manually.
They do this because of the high incidence of fraud, and manual processing allows them to make additional checks before filling an order. I'm told that they reject a little over 20% of all their transactions - processing manually certainly takes extra time, and in one case they have an employee who does nothing but process transactions, but the cost of paying his salary is apparently less than their exposure if they just passed CC numbers through an online gateway.
Both of these clients deliver physical goods with resale value, so they are particularly exposed; for items like software, where a fraudulent sale wouldn't result in any actual loss, your mileage would vary. But it's worth considering this above the technical aspects of an online gateway if implementing such is really what you want.
EDIT: Since creating this answer, things have changed, so I'd like to add a cautionary tale and say that the time when this was a good idea is past.
Why? Because I know of another contact who was taking a similar approach. The card details were stored encrypted, the website was accessed by SSL, and the numbers were deleted immediately after processing. Secure you think?
No - one machine on their network got infected by a key logging Trojan. As a result they were identified as being the source for several score credit card forgeries - and were consequently hit by a large fine.
As a result of this I now never advise anyone to handle credit cards themselves. Payment gateways have since become much more competitive and cost effective, and fraud measures have improved. The risk is now no longer worth it.
I could delete this answer, but I think best to leave up edited as a cautionary tale.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "260"
} |
Q: Modifying a spreadsheet using a VB macro I have two spreadsheets... when one gets modified in a certain way I want to have a macro run that modifies the second in an appropriate manner. I've already isolated the event I need to act on (the modification of any cell in a particular column), I just can't seem to find any concrete information on accessing and modifying another spreadsheet (this spreadsheet is located on a different LAN share also... the user has access to both, though).
Any help would be great. References on how to do this or something similar are just as good as concrete code samples.
A: In Excel, you would likely just write code to open the other worksheet, modify it and then save the data.
See this tutorial for more info.
I'll have to edit my VBA later, so pretend this is pseudocode, but it should look something like:
Dim xl: Set xl = CreateObject("Excel.Application")
Dim wb: Set wb = xl.Workbooks.Open("\\the\share\file.xls")
Dim ws: Set ws = wb.Worksheets(1)
ws.Cells(1, 1).Value = "New Value"
wb.Save
xl.Quit
A: You can open a spreadsheet in a single line:
Workbooks.Open FileName:="\\the\share\file.xls"
and refer to it as the active workbook:
Range("A1").value = "New value"
A: Copy the following in your ThisWorkbook object to watch for specific changes. In this case when you increase a numeric value to another numeric value.
Option Explicit
Dim varPreviousValue As Variant ' required for IsThisMyChange(). This should be made more unique since it's in the global space.
Private Sub Workbook_SheetChange(ByVal Sh As Object, ByVal Target As Range)
' required for IsThisMyChange()
IsThisMyChange Sh, Target
End Sub
Private Sub Workbook_SheetSelectionChange(ByVal Sh As Object, ByVal Target As Range)
' This implements an awful way of accessing the previous value via a global.
' not pretty but required for IsThisMyChange()
varPreviousValue = Target.Cells(1, 1).Value ' NB: This is used so that if a Merged set of cells if referenced only the first cell is used
End Sub
Private Sub IsThisMyChange(Sh As Object, Target As Range)
Dim isMyChange As Boolean
Dim dblValue As Double
Dim dblPreviousValue As Double
isMyChange = False
' Simple catch all. If either number cant be expressed as doubles, then exit.
On Error GoTo ErrorHandler
dblValue = CDbl(Target.Value)
dblPreviousValue = CDbl(varPreviousValue)
On Error GoTo 0 ' This turns off "On Error" statements in VBA.
If dblValue > dblPreviousValue Then
isMyChange = True
End If
If isMyChange Then
MsgBox ("You've increased the value of " & Target.Address)
End If
' end of normal execution
Exit Sub
ErrorHandler:
' Do nothing much.
Exit Sub
End Sub
If you are wishing to change another workbook based on this, I'd think about checking to see if the workbook is already open first... or even better, design a solution that can batch up all your changes and do them at once. Continuously changing another spreadsheet based on listening to this one could be painful.
A: After playing with this for a while, I found that Michael's pseudo-code was the closest, but here's how I did it:
Dim xl As Excel.Application
Set xl = CreateObject("Excel.Application")
xl.Workbooks.Open "\\owghome1\bennejm$\testing.xls"
xl.Sheets("Sheet1").Select
Then, manipulate the sheet... maybe like this:
xl.Cells(x, y).Value = "Some text"
When you're done, use these lines to finish up:
xl.Workbooks.Close
xl.Quit
If changes were made, the user will be prompted to save the file before it's closed. There might be a way to save automatically, but this way is actually better so I'm leaving it like it is.
Thanks for all the help!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Spartan Programming I really enjoyed Jeff's post on Spartan Programming. I agree that code like that is a joy to read. Unfortunately, I'm not so sure it would necessarily be a joy to work with.
For years I have read about and adhered to the "one-expression-per-line" practice. I have fought the good fight and held my ground when many programming books countered this advice with example code like:
while (bytes = read(...))
{
...
}
while (GetMessage(...))
{
...
}
Recently, I've advocated one expression per line for more practical reasons - debugging and production support. Getting a log file from production that claims a NullPointer exception at "line 65" which reads:
ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber());
is frustrating and entirely avoidable. Short of grabbing an expert with the code that can choose the "most likely" object that was null ... this is a real practical pain.
One expression per line also helps out quite a bit while stepping through code. I practice this with the assumption that most modern compilers can optimize away all the superfluous temp objects I've just created ...
I try to be neat - but cluttering my code with explicit objects sure feels laborious at times. It does not generally make the code easier to browse - but it really has come in handy when tracing things down in production or stepping through my or someone else's code.
What style do you advocate and can you rationalize it in a practical sense?
A: In The Pragmatic Programmer, Hunt and Thomas discuss a principle they term the Law of Demeter, which focuses on the coupling of functions to modules other than their own. By never allowing a function to reach a third level of coupling, you significantly reduce the number of errors and increase the maintainability of the code.
So:
ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber());
Is close to a felony because we are 4 objects down the rat hole. That means to change something in one of those objects I have to know that you called this whole stack right here in this very method. What a pain.
Better:
Account.getUser();
Note this runs counter to the expressive forms of programming that are now really popular with mocking software. The trade off there is that you have a tightly coupled interface anyway, and the expressive syntax just makes it easier to use.
A: I think the ideal solution is to find a balance between the extremes. There is no way to write a rule that will fit in all situations; it comes with experience. Declaring each intermediate variable on its own line will make reading the code more difficult, which will also contribute to the difficulty in maintenance. By the same token, debugging is much more difficult if you inline the intermediate values.
The 'sweet spot' is somewhere in the middle.
A: One expression per line.
There is no reason to obfuscate your code. The extra time you take typing the few extra terms, you save in debug time.
A: I tend to err on the side of readability, not necessarily debuggability. The examples you gave should definitely be avoided, but I feel that judicious use of multiple expressions can make the code more concise and comprehensible.
A: I'm usually in the "shorter is better" camp. Your example is good:
ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber());
I would cringe if I saw that over four lines instead of one--I don't think it'd make it easier to read or understand. The way you presented it here, it's clear that you're digging for a single object. This isn't better:
obja State = session.getState();
objb Account = State.getAccount();
objc AccountNumber = Account.getAccountNumber();
ObjectA a = getTheUser(AccountNumber);
This is a compromise:
objb Account = session.getState().getAccount();
ObjectA a = getTheUser(Account.getAccountNumber());
but I still prefer the single line expression. Here's an anecdotal reason: it's difficult for me to reread and error-check the 4-liner right now for dumb typos; the single line doesn't have this problem because there are simply fewer characters.
A: ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber());
This is a bad example, probably because you just wrote something from the top of your head.
You are assigning, to variable named a of type ObjectA, the return value of a function named getTheUser.
So let's assume you wrote this instead:
User u = getTheUser(session.getState().getAccount().getAccountNumber());
I would break this expression like so:
Account acc = session.getState().getAccount();
User user = getTheUser( acc.getAccountNumber() );
My reasoning is: how would I think about what I am doing with this code?
I would probably think: "first I need to get the account from the session and then I get the user using that account's number".
The code should read the way you think. Variables should refer to the main entities involved; not so much to their properties (so I wouldn't store the account number in a variable).
A second factor to have in mind is: will I ever need to refer to this entity again in this context?
If, say, I'm pulling more stuff out of the session state, I would introduce SessionState state = session.getState().
This all seems obvious, but I'm afraid I have some difficulty putting in words why it makes sense, not being a native English speaker and all.
A: Maintainability, and with it, readability, is king. Luckily, shorter very often means more readable.
Here are a few tips I enjoy using to slice and dice code:
*
*Variable names: how would you describe this variable to someone else on your team? You would not say "the numberOfLinesSoFar integer". You would say "numLines" or something similar - comprehensible and short. Don't pretend like the maintainer doesn't know the code at all, but make sure you yourself could figure out what the variable is, even if you forgot your own act of writing it. Yes, this is kind of obvious, but it's worth more effort than I see many coders put into it, so I list it first.
*Control flow: Avoid lots of closing clauses at once (a series of }'s in C++). Usually when you see this, there's a way to avoid it. A common case is something like:
if (things_are_ok) {
// Do a lot of stuff.
return true;
} else {
ExpressDismay(error_str);
return false;
}
can be replaced by
if (!things_are_ok) return ExpressDismay(error_str);
// Do a lot of stuff.
return true;
if we can get ExpressDismay (or a wrapper thereof) to return false.
Another case is:
*
*Loop iterations: the more standard, the better. For shorter loops, it's good to use one-character iterators when the variable is never used except as an index into a single object.
The particular case I would argue here is against the "right" way to use an STL container:
for (vector<string>::iterator a_str = my_vec.begin(); a_str != my_vec.end(); ++a_str)
is a lot wordier, and requires overloaded pointer operators *a_str or a_str->size() in the loop. For containers that have fast random access, the following is a lot easier to read:
for (int i = 0; i < my_vec.size(); ++i)
with references to my_vec[i] in the loop body, which won't confuse anyone.
Finally, I often see coders take pride in their line number counts. But it's not the line numbers that count! I'm not sure of the best way to implement this, but if you have any influence over your coding culture, I'd try to shift the reward toward those with compact classes :)
A: Good explanation. I think this is a version of the general Divide and Conquer mentality.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Win32 List-View Control SubItem padding for custom-drawn SubItems? When using custom-draw (NM_CUSTOMDRAW) to draw the entire contents of a ListView SubItem (in Report/Details view), it would be nice to be able to apply the same left and right
padding in my custom paint method that is applied by the control itself for non-custom-drawn items.
Is there a way to programmatically retrieve this padding value? Is it
related to the width of a particular character (" " or "w" or something?) or
is it a fixed value (6px on left and 3px on right or something) or...?
EDIT: To clarify, I want to add the same padding to my NM_CUSTOMDRAWn SubItems that the control adds to items that it draws, and the metric that I'm looking for, for example, is the white space between the beginning of the 2nd column and the word "Siamese" in the following screenshot (Note: screenshot from MSDN added to help explain my question):
(source: microsoft.com)
Note that the word "Siamese" is aligned with the header item ("Breed"). I would like to be able to guarantee the same alignment for custom-drawn items.
A: use ListView Header message HDM_GETBITMAPMARGIN
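A sketch of that call (hList is assumed to be a valid list-view handle; error handling omitted):
#include <windows.h>
#include <commctrl.h>

int GetHeaderBitmapMargin(HWND hList)
{
    HWND hHeader = ListView_GetHeader(hList);
    // HDM_GETBITMAPMARGIN returns the margin width in pixels.
    return (int)SendMessage(hHeader, HDM_GETBITMAPMARGIN, 0, 0);
}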
A: ListView_GetSubItemRect (LVM_GETSUBITEMRECT)
http://msdn.microsoft.com/en-us/library/ms930172.aspx
Despite what the documentation says, I suspect LVIR_LABEL returns just the bounding rectangle of the item text, as per ListView_GetItemRect.
(This just kept niggling me as I thought I had actually seen an answer somewhere when playing with NM_CUSTOMDRAW).
Edit After Comment 2:
I imagine you have seen NMLVCUSTOMDRAW which if you are willing to use Version 6.0. has rcText. I wouldn't since I use Win2K.
Given what you have found I would go back to the suggestion of using
ListView_GetItemRect to get LVIR_LABEL and compare that with LVIR_BOUNDS and use the difference.
A: the way for doing this is retrieving the format of the corresponding column with
ListView_GetColumn()
then check the retrieved myLVCOLUMN.mask
LVCOLUMN myLVCOLUMN;
myLVCOLUMN.mask=LVCF_FMT;
ListView_GetColumn(hwnd,nCol,&myLVCOLUMN);
then when we draw the corresponding label belonging to that column
if(myLVCOLUMN.fmt & LVCFMT_CENTER)
DrawText(x,x,x,x, DT_CENTER | DT_WORD_ELLIPSIS );
else if (myLVCOLUMN.fmt & LVCFMT_RIGHT)
DrawText(x,x,x,x, DT_RIGHT | DT_WORD_ELLIPSIS );
else
DrawText(x,x,x,x, DT_LEFT | DT_WORD_ELLIPSIS );
A: I would assume that GetSystemMetrics() is what you need to look at. I think that SM_CXEDGE and SM_CYEDGE are probably the values you want, but don't quote me on that. ;-)
A: Can only guess without seeing your output.
A few suggestions: If you are using the DrawTextEx function, have you experimented with DT_INTERNAL et al?
Are you accidentally putting in a blank image/icon.
Does it look ok in classic screen mode? If so I would look at XP Theme functions to see if some thing is going on.
Late edit after first comment:
I wonder if the size of the rectangle matches the space required for the LVN_ENDLABELEDIT edit box around the text, so the text doesn't move (or for a focus rectangle)?
I guess you could compare the result of LVM_GETITEMRECT with LVIR_LABEL on the first column and use the difference as your left border.
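A sketch of that comparison (hList is assumed to be a valid list-view handle and item 0 must exist):
RECT rcBounds, rcLabel;
ListView_GetItemRect(hList, 0, &rcBounds, LVIR_BOUNDS);
ListView_GetItemRect(hList, 0, &rcLabel, LVIR_LABEL);
int nLeftPadding = rcLabel.left - rcBounds.left; // candidate left border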
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Get MIME type of a local file in PHP5 without a PECL extension? mime_content_type() is deprecated.
How can I find the MIME type of a local file using PHP5 but without using this deprecated method or the PECL fileinfo extension?
Edit: That's what I was afraid of. It's unfortunate that they deprecated a built-in function in favour of one that requires an extension that isn't always available.
A: If you can't use the fileinfo extension, and you don't want to use mime_content_type, your options are limited.
Most likely you'll need to do a lookup based on the file extension. mime_content_type did something a bit more intelligent and actually looked for special data in the file to determine the mime type.
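For illustration, a minimal sketch of such an extension lookup (the table is deliberately tiny and the helper name is made up; a real map would be much longer):
function mime_from_extension($filename)
{
    // Map a few common extensions to MIME types.
    $map = array(
        'jpg'  => 'image/jpeg',
        'jpeg' => 'image/jpeg',
        'png'  => 'image/png',
        'gif'  => 'image/gif',
        'txt'  => 'text/plain',
        'html' => 'text/html',
        'pdf'  => 'application/pdf',
    );
    $ext = strtolower(pathinfo($filename, PATHINFO_EXTENSION));
    return isset($map[$ext]) ? $map[$ext] : 'application/octet-stream';
}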
A: The getID3() library is a quick and easy works-most-of-the-time option. Originally named for a project to obtain MP3 ID3 data, the library does two hecks of a lot more than that and is quite convenient for all sorts of common or odd file meta data tasks.
I've used it to get the MIME types of files for online image and video tools. In all the testing I've done I've not seen getID3 get the MIME type wrong.
I've also used it to check if QuickTime videos have streaming hints. I mention this as an example of versatility.
A second more time consuming option is to roll your own MIME type checker as already suggested. If you have a MIME magic file you can go a little further than a lookup on the file extension by comparing the first n bytes of file data against a first-n-bytes to MIME type lookup table derived from your MIME magic file.
A typical MIME magic file will contain in excess of 500 sets of MIME types which might result in slow comparisons (lots of checks to make). Hard-coding the 10 most common MIME type checks in your home rolled solution will help there.
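As a sketch of that "first n bytes" idea with a few common checks hard-coded (the signature list is tiny and illustrative, and the function name is made up; a real table would be derived from a MIME magic file):
function mime_from_magic_bytes($filename)
{
    // File signatures (magic numbers) for a handful of common formats.
    $magic = array(
        "\xFF\xD8\xFF"      => 'image/jpeg',
        "\x89PNG\r\n\x1A\n" => 'image/png',
        'GIF87a'            => 'image/gif',
        'GIF89a'            => 'image/gif',
        '%PDF'              => 'application/pdf',
        "PK\x03\x04"        => 'application/zip',
    );
    $fp = fopen($filename, 'rb');
    if ($fp === false) {
        return false;
    }
    $head = fread($fp, 8); // the longest signature above is 8 bytes
    fclose($fp);
    foreach ($magic as $signature => $mime) {
        if (strncmp($head, $signature, strlen($signature)) === 0) {
            return $mime;
        }
    }
    return 'application/octet-stream'; // nothing matched
}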
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What's the best way to detect if an IDataReader is empty? It seems like IDataReader.Read() is always true at least one time (If I'm wrong about this let me know.) So how do you tell if it has no records without just wrapping it in a try/catch?
A: Yes, if you want to use the interface then Read until false is the only way to test. If you are looking for a generic IDataReader implementation, you could try DbDataReader and use the HasRows property.
A: You can just cast System.Data.IDataReader to System.Data.Common.DbDataReader
using (System.Data.IDataReader IReader = ICommand.ExecuteReader())
{
if (((System.Data.Common.DbDataReader)IReader).HasRows)
{
//do stuff
}
} // End Using IReader
It's pure evil, but it (usually) works ;)
(assuming your instance of IDataReader is implemented by a custom ADO.NET provider, and not some custom silly class of yours which just implements IDataReader instead of deriving from DbDataReader [which implements IDataReader]).
A: if(dr.Read())
{
//do stuff
}
else
{
//it's empty
}
usually you'll do this though:
while(dr.Read())
{
}
A: Just stumbled across this problem and came up with this...
while (reader.Read())
{
    yield return new Foo()
    {
        StreamID = (Guid)reader["ID"],
        FileType = (string)reader["Type"],
        Name = (string)reader["Name"],
        RelativePath = (string)reader["RelativePath"]
    };
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: ASP.NET 3.5 Without Microsoft SQL Server - What do I lose? I was just assigned to do a CMS using ASP.net 3.5 and MySQL. I am kind of new to ASP.NET development (quite sufficient with C#) and I am wondering what major ASP.NET and general .NET features I am losing when I don't have the option to use Microsoft SQL Server.
I know already from quick Googling that I lose LINQ (and I was really looking forward to using this to build my model layer!), but I am not sure what other handy features I will lose. Since I've been relying on ASP.NET tutorials which assume that you use MS SQL Server, I feel like a chunk of my ASP.NET knowledge just became invalid.
Thanks!
A: You can leverage MySql in a number of ORMs, one of which is NHibernate. For the most part you can treat it as if you were running on SQL Server or Oracle. And with Linq2NHibernate, you can get nice LINQ syntax.
You'd lose the SqlDataSource control, but some would argue that it would actually be a blessing :)
And of course you'd lose Linq2SQL. EntityFramework will have 3rd party adapters MySql, Oracle and a few others soon after release.
A: You do not lose LINQ, you lose LINQ to SQL. LINQ itself is more generic, as it can be used on anything that implements IQueryable.
You lose the SqlDataSource, not a big deal.
You lose some of the integration the server explorer does for you with sql server, again not a big deal.
As far as I'm concerned you don't lose anything very important, and you shouldn't be losing any of your .NET knowledge. Most examples use SQL Server as a default, but they can easily be changed to use another database.
Also, there are a few open source .NET CMS packages out there already that use MySQL; take a look at Cuyahoga.
A: As a consequence of losing notification services, you also lose SqlCacheDependency
A: Some things that come to mind:
*
*ASP.NET has a nice "automatic" user management (authentication) system. I think it only works with SQL Server out of the box, but there might be a way to make it work with other DBs. The tutorials usually assume SQL Server (or the built-in file-based DB for development)
*Not related to ASP.NET, but useful for any project, is SQLCLR, which I find a great addition to SQL Server. It lets you delegate logic you would write in the business layer (supporting DLLs or classes) to SQL Server in the form of a stored procedure, but the SP is written in VB.NET/C#
*Notification services
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Usefulness of SQL Server "with encryption" statement Recently a friend and I were talking about securing stored procedure code in a SQL server database.
From distant memory, I'm pretty certain that "with encryption" is incredibly easily broken in all versions of SQL Server, however he said it has been greatly improved in SQL 2005. As a result I have not seriously considered it as a security option in any systems I have ever worked on.
So in what scenarios could "with encryption" be used, and when should it be avoided at all costs?
A: It can be used to hide your code from casual observers, but as you say: it's easily circumvented.
It really can't be any other way, since the server needs to decrypt the code to execute it. It's DRM, basically, and fails for the same reason as all the other DRM does - you can't simultaneously hide the data, and allow it to be accessed.
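For reference, a minimal sketch of the syntax under discussion (the object names here are made up for illustration):
CREATE PROCEDURE dbo.usp_GetSecretData
WITH ENCRYPTION
AS
BEGIN
    SELECT SecretValue FROM dbo.Secrets;
END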
A: @Blorgbeard
Good response; the MSDN documentation on "WITH ENCRYPTION" seems to agree with your point, now calling it "obfuscated" rather than encrypted.
I've met a few developers who were completely unaware of this point however. Hopefully this question/response will inform others too.
A: Yes, it's easily broken. I had a situation this past week where I had to decrypt several sprocs that a former developer had encrypted for a client of mine. After decrypting them, which took a moderate effort, I wouldn't rely on that as a means of protecting intellectual property, passwords, or user IDs. Anything really.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do you Modify TextBox Control Tab Stops When you use a Windows Forms TextBox, the default number of tab stops (spaces) is 8. How do you modify this?
A: First add the following namespace
using System.Runtime.InteropServices;
Then add the following after the class declaration:
private const int EM_SETTABSTOPS = 0x00CB;
[DllImport("User32.dll", CharSet = CharSet.Auto)]
public static extern IntPtr SendMessage(IntPtr h,
int msg,
int wParam,
int [] lParam);
Then add the following to the Form_Load event (note that EM_SETTABSTOPS only affects multiline edit controls):
// Define the tab stops; they are measured in dialog template units,
// where the default is 32 (about 8 average characters), so 16 is roughly 4
int[] stops = {16};
// change the indent; wParam (1) is the number of tab stops in the array
SendMessage(this.textBox1.Handle, EM_SETTABSTOPS, 1, stops);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I get a value from an XML web service in C#? In C#, if I need to open an HTTP connection, download XML and get one value from the result, how would I do that?
For consistency, imagine the webservice is at www.webservice.com and that if you pass it the POST argument fXML=1 it gives you back
<xml><somekey>somevalue</somekey></xml>
I'd like it to spit out "somevalue".
A: I think it will be useful to read this first:
Creating and Consuming a Web Service (in .NET)
This is a series of tutorials of how web services are used in .NET, including how XML input is used (deserialization).
A: I use this code and it works great:
System.Xml.XmlDocument xd = new System.Xml.XmlDocument();
xd.Load("http://www.webservice.com/webservice?fXML=1");
string xPath = "/xml/somekey";
// this node's inner text contains "somevalue"
return xd.SelectSingleNode(xPath).InnerText;
EDIT: I just realized you're talking about a webservice and not just plain XML. In your Visual Studio Solution, try right clicking on References in Solution Explorer and choose "Add a Web Reference". A dialog will appear asking for a URL, you can just paste it in: "http://www.webservice.com/webservice.asmx". VS will autogenerate all the helpers you need. Then you can just call:
com.webservice.www.WebService ws = new com.webservice.www.WebService();
// this assumes your web method takes in the fXML as an integer attribute
return ws.SomeWebMethod(1);
A: You can use something like this:
using System;
using System.Collections.Specialized;
using System.IO;
using System.Net;
using System.Text;
using System.Xml.Linq;

var client = new WebClient();
var response = client.UploadValues("http://www.webservice.com", "POST", new NameValueCollection {{"fXML", "1"}});
using (var reader = new StringReader(Encoding.UTF8.GetString(response)))
{
var xml = XElement.Load(reader);
var value = xml.Element("somekey").Value;
Console.WriteLine("Some value: " + value);
}
Note I didn't have a chance to test this code, but it should work :)
A: It may also be worth adding that if you need to specifically use POST rather than SOAP then you can configure the web service to receive POST calls:
Check out the page on MSDN:
Configuration Options for XML Web Services Created Using ASP.NET
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Define realtime on the web for business It drives me nuts to hear business proponents using the term realtime for web-based systems. I'm becoming the crazy in the room, chanting, "There is no such thing as realtime on the web! We're in banking, not the rocket launch/ship navigation/airplane autopilot business!"
Anyone have anything better for performance specifications than realtime, or its horrible hybrid, near-realtime?
A: Immediate? Instant? Live (no, wait, Microsoft owns that word these days, don't they?)?
More seriously, "realtime" is probably not confusing for anyone who doesn't have a process-control / embedded-system background. Have a comforting beverage and worry about other things.
A: Real-time means one thing to an embedded programmer. It means something else to a normal person. If my online balance always matches my ATM/bank-teller's balance, I would call that pretty real-time. If I can transfer money between accounts, refresh the screen, and immediately see the completed transfer, I would call that real-time.
If you web backend just prints out orders for human intervention, or dumps user-commands in a file for offline batch processing, that would not be realtime.
A: Real time means that when you have a set of tasks to execute, if one task takes more than its defined time to finish, the whole process fails and the system probably crashes. For example, the application used to control the Mars exploratory vehicle is considered to be a real-time application, even though a command triggered on Earth needs 8 minutes to reach the vehicle and the images from the vehicle's cameras take another 8 minutes to reach Earth. So even with a 16-minute latency between taking the action and seeing the result, it can be defined as real-time, because if it takes more than the planned 16-minute delay, there is a huge risk that the vehicle could collide with a rock or fall into a depression.
Back to your example, I don't see an ATM machine, or the above-mentioned balances, as real-time. They could be "online" or "updated", but not real-time, as the system does not crash if it takes more time than expected to complete a withdrawal at an ATM.
A: In the banking industry most of the time "real time" means the opposite of "end-of-day".
Because there was no such thing as internet/intranet/LANs/WANs in the old days, all balancing is done at "end-of-day". Transactions done in one branch with a certain bank account are oblivious of the transactions done in another; all of the balance resolution will occur during end of day. When mainframes came in the same rule applied: resolutions are done by computer by a long-running-process usually run between 9PM and 12 midnight.
This is the reason behind terms such as "current balance" and "available balance", e.g., available balance is what has been determined by the end-of-day process as an account's balance for the previous day; current balance is what it's supposed to be, but you can't touch it yet since the bank is not sure if you've made some transaction somewhere else.
With the advent of ATMs, the internet, and other interconnectivity technologies, "real time" balance resolution is now possible: a withdrawal, an online transaction, a purchase debit, etc will immediately be reflected in the customers' bank accounts without the need to wait for end-of-day processing.
A: Inline? As in actions happen inline with your actions as opposed to out-of-band or end of the day batch jobs.
A: How do you define "real-time" for embedded systems? I would say that a decent definition is "a system which is able to process and respond to inputs faster than the average time between inputs." In other words, a system that will never fall behind in processing compared to the systems which are feeding it data. Using this definition, everything on the web is a real-time system, since web servers that fall behind tend to be inaccessible (i.e., the Slashdot effect).
A: It's a marketing term that means "really fast", like maybe < 1 second.
Obviously it can't mean the same thing as when people talk about real-time embedded systems, real-time operating systems, etc. The web is too big and heterogeneous for that.
A: One definition of a real time system ( from the safety critical systems world ) is a system whose correctness depends on the timeliness of its responses.
That would apply equally well for a real-time web trading system - the stock values go stale in seconds as for an embedded avionics fly-by-wire system with millisecond correctness requirements.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How to insert a row into a dataset using SSIS? I'm trying to create an SSIS package that takes data from an XML data source and for each row inserts another row with some preset values. Any ideas? I'm thinking I could use a DataReader source to generate the preset values by doing the following:
SELECT 'foo' as 'attribute1', 'bar' as 'attribute2'
The question is, how would I insert one row of this type for every row in the XML data source?
A: I'm not sure if I understand the question... My assumption is that you have n number of records coming into SSIS from your data source, and you want your output to have n * 2 records.
In order to do this, you can do the following:
*
*multicast to create multiple copies of your input data
*derived column transforms to set the "preset" values on the copies
*sort
*merge
Am I on the right track w/ what you're trying to accomplish?
A: I've never tried it, but it looks like you might be able to use a Derived Column transformation to do it: set the expression for attribute1 to "foo" and the expression for attribute2 to "bar".
You'd then transform the original data source, then only use the derived columns in your destination. If you still need the original source, you can Multicast it to create a duplicate.
At least I think this will work, based on the documentation. YMMV.
A: I would probably switch to using a Script Task and place your logic in there. You may still be able leverage the File Reading and other objects in SSIS to save some code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Win32 ToolTip disappears never to re-appear with Commctl 6 I'm creating a ToolTip window and adding tools to it using the flags
TTF_IDISHWND | TTF_SUBCLASS. (c++, win32)
I have a manifest file such that my program uses the new Windows XP themes
(comctl32 version 6).
When I hover over a registered tool, the tip appears.
Good.
When I click the mouse, the tip disappears.
Ok.
However, moving away from the tool and back
again does not make the tip re-appear. I need to hover over a different tool
and then come back to my tool to get the tip to come back.
When I remove my manifest file (to use the older non-XP comctl32), the
problem goes away.
After doing some experimentation, I discovered the following differences
between ToolTips in Comctl32 version 5 (old) and Comctl32 version 6 (new):
*
*New TTF_TRANSPARENT ToolTips (when used In-Place) actually return
HTCLIENT from WM_NCHITTEST if a mouse button is down, thus getting
WM_LBUTTONDOWN and stealing focus for a moment before vanishing. This causes
the application's border to flash.
*Old TTF_TRANSPARENT ToolTips always return HTTRANSPARENT from WM_NCHITTEST,
and thus never get WM_LBUTTONDOWN themselves and never steal focus. (This seems to be just aesthetic, but may impact the next point...)
*New ToolTips seem not to get WM_TIMER events after a mouse-click, and
only resume getting (a bunch of) timer events after being de-activated and
re-activated. Thus, they do not re-display their tip window after a mouse
click and release.
*Old ToolTips get a WM_TIMER message as soon as the mouse is moved again
after click/release, so they are ready to re-display their tip.
Thus, as a comctl32 workaround, I had to:
*
*subclass the TOOLTIPS_CLASS window and always return HTTRANSPARENT from
WM_NCHITTEST if the tool asked for transparency (a sketch of this follows below).
*avoid using TTF_SUBCLASS and rather process the mouse messages myself so
I could de-activate/re-activate upon receiving WM_xBUTTONUP.
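Here is a minimal sketch of the subclassing part of that workaround (names are illustrative; it assumes the original window procedure was saved when subclassing):
// Original tooltip window procedure, saved when subclassing via
// SetWindowLongPtr(hwndTip, GWLP_WNDPROC, (LONG_PTR)TipSubclassProc)
static WNDPROC g_origTipProc;

LRESULT CALLBACK TipSubclassProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    // Always report transparency for hit tests so the tip never
    // receives mouse clicks or steals focus (the comctl32 v5 behavior)
    if (msg == WM_NCHITTEST)
        return HTTRANSPARENT;

    return CallWindowProc(g_origTipProc, hwnd, msg, wParam, lParam);
}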
I assume that the change in internal behavior was to accommodate the new "clickable" features in ToolTips like hyperlinks, but the hover behavior appears to be thus broken.
Does anyone know of a better solution than my subclass workaround? Am I missing some other point?
A: You're not the only one who has hit compatablity issues with tooltips between these DLLS.
I too have had nothing but trouble with the new tooltips in the themable common controls. We had already been monkeying with mouse messages and activating/deactivating the tips before adding the manifest and theming our application - so it sounds like what you're doing isn't too crazy.
We're still living with problems with TTN_NEEDTEXT messages being sent constantly as the mouse moves (not just when hovering), positioning problems with large tips (maybe not something new), and odd Unicode messages being sent instead of the ANSI versions (which I plan to post as a question at some point).
A: I don't know, but this sounds like a really "hard" problem (in the sense that all real-world problems are really hard). I bet the underlying problem is something to do with the setting of the focus. Windows that manually do that are evil and generally suffer from all manner of bugs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How do I find out if a process is already running using c#? I have a C# WinForms application that needs to start an external exe from time to time, but I do not wish to start another process if one is already running, but rather switch to it.
So how in C# would I so this in the example below?
using System.Diagnostics;
...
Process foo = new Process();
foo.StartInfo.FileName = @"C:\bar\foo.exe";
foo.StartInfo.Arguments = "Username Password";
bool isRunning = //TODO: Check to see if process foo.exe is already running
if (isRunning)
{
//TODO: Switch to foo.exe process
}
else
{
foo.Start();
}
A: I have used the AppActivate function in VB runtime to activate an existing process.
You will have to import Microsoft.VisualBasic dll into the C# project.
using System;
using System.Diagnostics;
using Microsoft.VisualBasic;
namespace ConsoleApplication3
{
class Program
{
static void Main(string[] args)
{
Process[] proc = Process.GetProcessesByName("notepad");
Interaction.AppActivate(proc[0].MainWindowTitle);
}
}
}
A: You can simply enumerate processes using Process.GetProcesses method.
A: This should do it for ya.
Check Processes
//Namespaces we need to use
using System.Diagnostics;
public bool IsProcessOpen(string name)
{
//here we're going to get a list of all running processes on
//the computer
foreach (Process clsProcess in Process.GetProcesses()) {
//now we're going to see if any of the running processes
//match the process name we were given. Be sure to not
//add the .exe to the name you provide, i.e: NOTEPAD,
//not NOTEPAD.EXE, or false is always returned even if
//notepad is running.
if (clsProcess.ProcessName.Contains(name))
{
//if the process is found to be running then we
//return a true
return true;
}
}
//otherwise we return a false
return false;
}
A: You can use LINQ as well,
var processExists = Process.GetProcesses().Any(p => p.ProcessName.Contains("<your process name>"));
A: I found out that a Mutex does not work here the way it does in a console application. So I used WMI to query the processes that are visible in the Task Manager window, which solved my problem.
Use something like this:
static bool isStillRunning() {
    string processName = Process.GetCurrentProcess().MainModule.ModuleName;
    ManagementObjectSearcher mos = new ManagementObjectSearcher();
    mos.Query.QueryString = @"SELECT * FROM Win32_Process WHERE Name = '" + processName + @"'";
    // More than one match means another instance is already running
    return mos.Get().Count > 1;
}
NOTE: Add assembly reference "System.Management" to enable the type intellisense.
A: I think the complete answer to your problem requires understanding of what happens when your application determines that an instance of foo.exe is already running i.e what does '//TODO: Switch to foo.exe process' actually mean?
A: In a past project I needed to prevent multiple execution of a process, so I added some code in the init section of that process which creates a named mutex. This mutex was created and acquired before continuing the rest of the process. If the process can create the mutex and acquire it, then it is the first one running. If another process already controls the mutex, then the one which fails is not the first, so it exits immediately.
I was just trying to prevent a second instance from running, due to dependencies on specific hardware interfaces. Depending on what you need with that "switch to" line, you might need a more specific solution such as a process id or handle.
Also, I had source code access to the process I was trying to start. If you can not modify the code, adding the mutex is obviously not an option.
A: Two concerns to keep in mind:
*
*Your example involved placing a
password on a command line. That
cleartext representation of a secret
could be a security vulnerability.
*When enumerating processes, ask
yourself which processes you really
want to enumerate. All users, or
just the current user? What if the
current user is logged in twice (two
desktops)?
A:
Mnebuerquo wrote:
Also, I had source code access to the
process I was trying to start. If you
can not modify the code, adding the
mutex is obviously not an option.
I don't have source code access to the process I want to run.
I have ended up using the process's MainWindowHandle to switch to the process once I have found it is already running:
[DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
public static extern bool SetForegroundWindow(IntPtr hWnd);
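For completeness, here is a minimal sketch of how the check-and-switch logic fits together with that import (the process name passed to GetProcessesByName is an assumption; it is the executable name without the .exe extension):
Process[] existing = Process.GetProcessesByName("foo");
if (existing.Length > 0)
{
    // An instance is already running; bring its main window forward
    SetForegroundWindow(existing[0].MainWindowHandle);
}
else
{
    foo.Start();
}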
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Why does clicking a child window not always bring the application to the foreground? When an application is behind another applications and
I click on my application's taskbar icon, I expect the entire application to
come to the top of the z-order, even if an app-modal, WS_POPUP dialog box is
open.
However, some of the time, for some of my (and others') dialog boxes, only the dialog box comes to the front; the rest of the application stays behind.
I've looked at Spy++ and for the ones that work correctly, I can see
WM_WINDOWPOSCHANGING being sent to the dialog's parent. For the ones that
leave the rest of the application behind, WM_WINDOWPOSCHANGING is not being
sent to the dialog's parent.
I have an example where one dialog usually brings the whole app with it and the other does not. Both the working dialog box and the non-working dialog box have the same window style, substyle, parent, owner, ontogeny.
In short, both are WS_POPUPWINDOW windows created with DialogBoxParam(),
having passed in identical HWNDs as the third argument.
Has anyone else noticed this behavioral oddity in Windows programs? What messages does the TaskBar send to the application when I click its button? Whose responsibility is it to ensure that all of the application's windows come to the foreground?
In my case the base parentage is an MDI frame...does that factor in somehow?
A: I know this is very old now, but I just stumbled across it, and I know the answer.
In the applications you've seen (and written) where bringing the dialog box to the foreground did not bring the main window up along with it, the developer has simply neglected to specify the owner of the dialog box.
This applies to both modal windows, like dialog boxes and message boxes, as well as to modeless windows. Setting the owner of a modeless popup also keeps the popup above its owner at all times.
In the Win32 API, the functions to bring up a dialog box or a message box take the owner window as a parameter:
INT_PTR DialogBox(
HINSTANCE hInstance,
LPCTSTR lpTemplate,
HWND hWndParent, /* this is the owner */
DLGPROC lpDialogFunc
);
int MessageBox(
HWND hWnd, /* this is the owner */
LPCTSTR lpText,
LPCTSTR lpCaption,
UINT uType
);
Similarly, in .NET WinForms, the owner can be specified:
public DialogResult ShowDialog(
IWin32Window owner
)
public static DialogResult Show(
IWin32Window owner,
string text
) /* ...and other overloads that include this first parameter */
Additionally, in WinForms, it's easy to set the owner of a modeless window:
public void Show(
    IWin32Window owner
)
or, equivalently:
form.Owner = this;
form.Show();
In straight WinAPI code, the owner of a modeless window can be set when the window is created:
HWND CreateWindow(
LPCTSTR lpClassName,
LPCTSTR lpWindowName,
DWORD dwStyle,
int x,
int y,
int nWidth,
int nHeight,
HWND hWndParent, /* this is the owner if dwStyle does not contain WS_CHILD */
HMENU hMenu,
HINSTANCE hInstance,
LPVOID lpParam
);
or afterwards:
SetWindowLong(hWndPopup, GWL_HWNDPARENT, (LONG)hWndOwner);
or (64-bit compatible)
SetWindowLongPtr(hWndPopup, GWLP_HWNDPARENT, (LONG_PTR)hWndOwner);
Note that MSDN has the following to say about SetWindowLong[Ptr]:
Do not call SetWindowLongPtr with the GWLP_HWNDPARENT index to change the parent of a child window. Instead, use the SetParent function.
This is somewhat misleading, as it seems to imply that the last two snippets above are wrong. This isn't so. Calling SetParent will turn the intended popup into a child of the parent window (setting its WS_CHILD bit), rather than making it an owned window. The code above is the correct way to make an existing popup an owned window.
A: When you click on the taskbar icon Windows will send a WM_ACTIVATE message to your application.
Are you sure your code is passing the WM_ACTIVATE message to the DefWindowProc window procedure for processing?
A: Is the dialog's parent window set correctly?
After I posted this, I started my own Windows Forms application and reproduced the problem you describe. I have two dialogs; one works correctly, the other does not, and I can't see any immediate reason as to why they behave differently. I'll update this post if I find out.
Raymond Chen where are you!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Is there a way of selecting the last item of a list with CSS? Say I have a list as follows:
*
*item1
*item2
*item3
Is there a CSS selector that will allow me to directly select the last item of a list? In this case item 3.
Cheers!
A: Until it's properly supported, you'll need to add a class to 'last' items, as suggested. You don't have to do this manually, though. If you can take a javascript hit, take a look at either:
*
*the jQuery :last-child selector
*Keeping Your Elements’ Kids in Line with Offspring (a list apart article by Alex Bischoff), a specific, lighter-weight method
Either will avoid 'polluting' your markup, and are perfectly acceptable if your style is a 'nice addition' as opposed to a 'must have' design feature.
A: The answer to this question should be updated! IE9+, Firefox (for a while back), Chrome, and Safari all support :last-of-type and :last-child.
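For example, a minimal sketch using those selectors (the styling is illustrative):
/* styles item3 in the question's list */
li:last-child {
    font-weight: bold;
}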
A: Not that I'm aware of. The traditional solution is to tag the first & last items with class="first" & class="last" so you can identify them.
The CSS pseudo-class :first-child will get you the first item, but not all browsers support it. CSS3 will have :last-child too (this is currently supported by Firefox and Safari, but not IE 6/7/beta 8)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Can the Weblogic default handler display the list of contexts? In Jetty, if there is no deployment at '/' then the DefaultHandler displays a list of known contexts. This is very useful during development.
Is it possible to configure BEA Weblogic to provide a similar convenience?
A: You could write a small webapp that hooks up to the Weblogic JMX and displays the list of deployed webapps and deploy that one at '/'.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to sort strings in JavaScript I have a list of objects I wish to sort based on a field attr of type string. I tried using -
list.sort(function (a, b) {
return a.attr - b.attr
})
but found that - doesn't appear to work with strings in JavaScript. How can I sort a list of objects based on an attribute with type string?
A: Use String.prototype.localeCompare as per your example:
list.sort(function (a, b) {
return ('' + a.attr).localeCompare(b.attr);
})
We force a.attr to be a string to avoid exceptions. localeCompare has been supported since Internet Explorer 6 and Firefox 1. You may also see the following code used that doesn't respect a locale:
if (item1.attr < item2.attr)
    return -1;
if (item1.attr > item2.attr)
    return 1;
return 0;
A: I had been bothered about this for a long time, so I finally researched it and give you this long-winded reason for why things are the way they are.
From the spec:
Section 11.9.4 The Strict Equals Operator ( === )
The production EqualityExpression : EqualityExpression === RelationalExpression
is evaluated as follows:
- Let lref be the result of evaluating EqualityExpression.
- Let lval be GetValue(lref).
- Let rref be the result of evaluating RelationalExpression.
- Let rval be GetValue(rref).
- Return the result of performing the strict equality comparison
rval === lval. (See 11.9.6)
So now we go to 11.9.6
11.9.6 The Strict Equality Comparison Algorithm
The comparison x === y, where x and y are values, produces true or false.
Such a comparison is performed as follows:
- If Type(x) is different from Type(y), return false.
- If Type(x) is Undefined, return true.
- If Type(x) is Null, return true.
- If Type(x) is Number, then
...
- If Type(x) is String, then return true if x and y are exactly the
same sequence of characters (same length and same characters in
corresponding positions); otherwise, return false.
That's it. The triple equals operator applied to strings returns true iff the arguments are exactly the same strings (same length and same characters in corresponding positions).
So === will work in the cases when we're trying to compare strings which might have arrived from different sources, but which we know will eventually have the same values - a common enough scenario for inline strings in our code. For example, if we have a variable named connection_state, and we wish to know which one of the following states ['connecting', 'connected', 'disconnecting', 'disconnected'] is it in right now, we can directly use the ===.
But there's more. Just above 11.9.4, there is a short note:
NOTE 4
Comparison of Strings uses a simple equality test on sequences of code
unit values. There is no attempt to use the more complex, semantically oriented
definitions of character or string equality and collating order defined in the
Unicode specification. Therefore Strings values that are canonically equal
according to the Unicode standard could test as unequal. In effect this
algorithm assumes that both Strings are already in normalized form.
Hmm. What now? Externally obtained strings can, and most likely will, be weird unicodey, and our gentle === won't do them justice. In comes localeCompare to the rescue:
15.5.4.9 String.prototype.localeCompare (that)
...
The actual return values are implementation-defined to permit implementers
to encode additional information in the value, but the function is required
to define a total ordering on all Strings and to return 0 when comparing
Strings that are considered canonically equivalent by the Unicode standard.
We can go home now.
tl;dr;
To compare strings in javascript, use localeCompare; if you know that the strings have no non-ASCII components because they are, for example, internal program constants, then === also works.
A: An explanation of why the approach in the question doesn't work:
let products = [
{ name: "laptop", price: 800 },
{ name: "phone", price:200},
{ name: "tv", price: 1200}
];
products.sort((a, b) => {
    let value = a.name - b.name;
    console.log(value);
    return value;
});
> 2 NaN
Subtraction between strings returns NaN.
Echoing Alejadro's answer, the right approach is:
products.sort( (a,b) => a.name > b.name ? 1 : -1 )
A: A TypeScript sorting helper using a custom function to return a sorted string array in either ascending or descending order
const data = ["jane", "mike", "salome", "ababus", "buisa", "dennis"];
const sortStringArray = (stringArray: string[], mode?: 'desc' | 'asc') => {
if (!mode || mode === 'asc') {
return stringArray.sort((a, b) => a.localeCompare(b))
}
return stringArray.sort((a, b) => b.localeCompare(a))
}
console.log(sortStringArray(data, 'desc'));// [ 'salome', 'mike', 'jane', 'dennis', 'buisa', 'ababus' ]
console.log(sortStringArray(data, 'asc')); // [ 'ababus', 'buisa', 'dennis', 'jane', 'mike', 'salome' ]
A: Answer (in Modern ECMAScript)
list.sort((a, b) => (a.attr > b.attr) - (a.attr < b.attr))
Or
list.sort((a, b) => +(a.attr > b.attr) || -(a.attr < b.attr))
Description
Casting a boolean value to a number yields the following:
*
*true -> 1
*false -> 0
Consider three possible patterns:
*
*x is larger than y: (x > y) - (y < x) -> 1 - 0 -> 1
*x is equal to y: (x > y) - (y < x) -> 0 - 0 -> 0
*x is smaller than y: (x > y) - (y < x) -> 0 - 1 -> -1
(Alternative)
*
*x is larger than y: +(x > y) || -(x < y) -> 1 || 0 -> 1
*x is equal to y: +(x > y) || -(x < y) -> 0 || 0 -> 0
*x is smaller than y: +(x > y) || -(x < y) -> 0 || -1 -> -1
So this logic is equivalent to the typical sort comparator function.
if (x == y) {
return 0;
}
return x > y ? 1 : -1;
A: Here are ascending and descending order functions:
if (order === 'asc') {
return a.localeCompare(b);
}
return b.localeCompare(a);
A: If you want to control locales (or case or accent), then use Intl.collator:
const collator = new Intl.Collator();
list.sort((a, b) => collator.compare(a.attr, b.attr));
You can construct a collator like:
new Intl.Collator("en");
new Intl.Collator("en", {sensitivity: "case"});
...
See the above link for documentation.
Note: unlike some other solutions, it handles null, undefined the JavaScript way, i.e., moves them to the end.
A: Since strings can be compared directly in JavaScript, this will do the job:
list.sort(function (a, b) {
return a.attr < b.attr ? -1: 1;
})
This is a little bit more efficient than using
return a.attr > b.attr ? 1: -1;
because in case of elements with same attr (a.attr == b.attr), the sort function will swap the two for no reason.
For example
var so1 = function (a, b) { return a.atr > b.atr ? 1: -1; };
var so2 = function (a, b) { return a.atr < b.atr ? -1: 1; }; // Better
var m1 = [ { atr: 40, s: "FIRST" }, { atr: 100, s: "LAST" }, { atr: 40, s: "SECOND" } ].sort (so1);
var m2 = [ { atr: 40, s: "FIRST" }, { atr: 100, s: "LAST" }, { atr: 40, s: "SECOND" } ].sort (so2);
// m1 sorted but ...: 40 SECOND 40 FIRST 100 LAST
// m2 more efficient: 40 FIRST 40 SECOND 100 LAST
A: An updated answer (October 2014)
I was really annoyed about this string natural sorting order so I took quite some time to investigate this issue.
Long story short
localeCompare() character support is badass, just use it.
As pointed out by Shog9, the answer to your question is:
return item1.attr.localeCompare(item2.attr);
Bugs found in all the custom JavaScript "natural string sort order" implementations
There are quite a bunch of custom implementations out there, trying to do string comparison more precisely called "natural string sort order"
When "playing" with these implementations, I always noticed some strange "natural sorting order" choice, or rather mistakes (or omissions in the best cases).
Typically, special characters (space, dash, ampersand, brackets, and so on) are not processed correctly.
You will then find them appearing mixed up in different places, typically that could be:
*
*some will be between the uppercase 'Z' and the lowercase 'a'
*some will be between the '9' and the uppercase 'A'
*some will be after lowercase 'z'
When one would have expected special characters to all be "grouped" together in one place, except for the space special character maybe (which would always be the first character). That is, either all before numbers, or all between numbers and letters (lowercase & uppercase being "together" one after another), or all after letters.
My conclusion is that they all fail to provide a consistent order when I start adding barely unusual characters (i.e., characters with diacritics or characters such as dash, exclamation mark and so on).
Research on the custom implementations:
*
*Natural Compare Lite https://github.com/litejs/natural-compare-lite : Fails at sorting consistently https://github.com/litejs/natural-compare-lite/issues/1 and http://jsbin.com/bevututodavi/1/edit?js,console, basic Latin characters sorting http://jsbin.com/bevututodavi/5/edit?js,console
*Natural Sort https://github.com/javve/natural-sort : Fails at sorting consistently, see issue https://github.com/javve/natural-sort/issues/7 and see basic Latin characters sorting http://jsbin.com/cipimosedoqe/3/edit?js,console
*JavaScript Natural Sort https://github.com/overset/javascript-natural-sort: seems rather neglected since February 2012, Fails at sorting consistently, see issue https://github.com/overset/javascript-natural-sort/issues/16
*Alphanum http://www.davekoelle.com/files/alphanum.js , Fails at sorting consistently, see http://jsbin.com/tuminoxifuyo/1/edit?js,console
Browsers' native "natural string sort order" implementations via localeCompare()
localeCompare() oldest implementation (without the locales and options arguments) is supported by Internet Explorer 6 and later, see http://msdn.microsoft.com/en-us/library/ie/s4esdbwz(v=vs.94).aspx (scroll down to localeCompare() method).
The built-in localeCompare() method does a much better job at sorting, even international & special characters.
The only problem using the localeCompare() method is that "the locale and sort order used are entirely implementation dependent". In other words, when using localeCompare such as stringOne.localeCompare(stringTwo): Firefox, Safari, Chrome, and Internet Explorer have a different sort order for Strings.
Research on the browser-native implementations:
*
*http://jsbin.com/beboroyifomu/1/edit?js,console - basic Latin characters comparison with localeCompare()
http://jsbin.com/viyucavudela/2/ - basic Latin characters comparison with localeCompare() for testing on Internet Explorer 8
*http://jsbin.com/beboroyifomu/2/edit?js,console - basic Latin characters in string comparison : consistency check in string vs when a character is alone
*https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/localeCompare - Internet Explorer 11 and later supports the new locales & options arguments
Difficulty of "string natural sorting order"
Implementing a solid algorithm (meaning: consistent but also covering a wide range of characters) is a very tough task. Unicode contains well over 100,000 characters and covers more than 120 scripts (languages).
Finally, there is a specification for this task, called the "Unicode Collation Algorithm", which can be found at http://www.unicode.org/reports/tr10/. You can find more information about this in this question I posted: https://softwareengineering.stackexchange.com/questions/257286/is-there-any-language-agnostic-specification-for-string-natural-sorting-order
Final conclusion
So considering the current level of support provided by the JavaScript custom implementations I came across, we will probably never see anything getting any close to supporting all this characters and scripts (languages). Hence I would rather use the browsers' native localeCompare() method. Yes, it does have the downside of being non-consistent across browsers but basic testing shows it covers a much wider range of characters, allowing solid & meaningful sort orders.
So as pointed out by Shog9, the answer to your question is:
return item1.attr.localeCompare(item2.attr);
Further reading:
*
*https://softwareengineering.stackexchange.com/questions/257286/is-there-any-language-agnostic-specification-for-string-natural-sorting-order
*How to sort strings in JavaScript
*Natural sort of alphanumerical strings in JavaScript
*Sort Array of numeric & alphabetical elements (Natural Sort)
*Sort mixed alpha/numeric array
*https://web.archive.org/web/20130929122019/http://my.opera.com/GreyWyvern/blog/show.dml/1671288
*https://web.archive.org/web/20131005224909/http://www.davekoelle.com/alphanum.html
*http://snipplr.com/view/36012/javascript-natural-sort/
*http://blog.codinghorror.com/sorting-for-humans-natural-sort-order/
Thanks to Shog9's nice answer, which put me in the "right" direction I believe.
A: Use sort() directly, without any - or <
const areas = ['hill', 'beach', 'desert', 'mountain']
console.log(areas.sort())
// To print in descending way
console.log(areas.sort().reverse())
A: You should use > or < and == here. So the solution would be:
list.sort(function(item1, item2) {
var val1 = item1.attr,
val2 = item2.attr;
if (val1 == val2) return 0;
if (val1 > val2) return 1;
if (val1 < val2) return -1;
});
A: Nested ternary arrow function
(a,b) => (a < b ? -1 : a > b ? 1 : 0)
A: In your initial question, you are performing the following operation:
item1.attr - item2.attr
So, assuming those are numbers (i.e. item1.attr = "1", item2.attr = "2"), you may still use the "===" operator (or other strict evaluators) provided that you ensure type. The following should work:
return parseInt(item1.attr) - parseInt(item2.attr);
If they are alphanumeric, then do use localeCompare().
A: list.sort(function(item1, item2){
return +(item1.attr > item2.attr) || +(item1.attr === item2.attr) - 1;
})
How they work samples:
+('aaa'>'bbb')||+('aaa'==='bbb')-1
+(false)||+(false)-1
0||0-1
-1
+('bbb'>'aaa')||+('bbb'==='aaa')-1
+(true)||+(false)-1
1||0-1
1
+('aaa'>'aaa')||+('aaa'==='aaa')-1
+(false)||+(true)-1
0||1-1
0
A: <!doctype html>
<html>
<body>
<p id = "myString">zyxtspqnmdba</p>
<p id = "orderedString"></p>
<script>
var myString = document.getElementById("myString").innerHTML;
orderString(myString);
function orderString(str) {
var i = 0;
var myArray = str.split("");
while (i < str.length){
var j = i + 1;
while (j < str.length) {
if (myArray[j] < myArray[i]){
var temp = myArray[i];
myArray[i] = myArray[j];
myArray[j] = temp;
}
j++;
}
i++;
}
var newString = myArray.join("");
document.getElementById("orderedString").innerHTML = newString;
}
</script>
</body>
</html>
A: var str = ['v','a','da','c','k','l']
var b = str.join('').split('').sort().reverse().join('')
console.log(b)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "517"
} |
Q: Separating concerns with Linq To SQL and DTO's I recently started a new webforms project and decided to separate the business classes from any DBML references. My business layer classes instead access discrete Data layer methods and are returned collections of DTO's. So the data layer might project DTO's like the following:
(from c in dataContext.Customers
where c.Active == true
select new DTO.Customer
{
CustomerID = c.CustomerID,
Name = c.CustomerName,
...
}).ToList()
Although building the DTO objects adds work, this feels like a better approach than a tight binding between the Business & Data layers, and it means I can test the Business layer without a database being present.
My question is: is this good practice? Is there a way of generating the DTOs (maybe via SQLMetal), and what other problems might I strike as the project progresses?
A: I don't know if it's best practice but I have written similar code in the not so recent past because I too felt that I could improve the separation of concerns by using my own classes instead of the LINQ-designer-generated ones within my application.
You may want to consider just returning an IQueryable<Customer> instead of an IList<Customer> from your data-access method. Since IQueryable<T> inherits from IEnumerable<T> the rest of your app should be able to deal with it quite well. You can also convert it to a List when you really need to.
The advantage of this is that you can dynamically modify your query quite easily and minimze the amount of data returned from SQL Server.
E.g. if your method signature is IQueryable<Customer> GetCustomers(), you could get a single customer by calling GetCustomers().Where(c => c.CustomerID == 101).Single();
In this example only one record would be returned from the database whereas I imagine currently your code would return either all customers or you'd be required to write separate methods (and thus very repetitive code) to cater for all the different things you may want to filter by.
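A minimal sketch of that approach, reusing the names from the question (treat it as illustrative rather than a definitive implementation):
public IQueryable<DTO.Customer> GetCustomers()
{
    return from c in dataContext.Customers
           where c.Active == true
           select new DTO.Customer
           {
               CustomerID = c.CustomerID,
               Name = c.CustomerName
           };
}

// Callers can then compose the query before it reaches the database:
var customer = GetCustomers().Where(c => c.CustomerID == 101).Single();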
A: In my opinion, in most cases DTO objects are not needed when dealing with LINQ. The generated LINQ classes can be easily tested. LINQ gives you the ability to query your data from different sources using identical queries, and to test your queries against lists of objects instead of a real DB.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: What are the alternative's to using the iThenticate service for content comparison? What are the alternative's to using the iThenticate service for content comparison?
A: Wikipedia page on plagiarism detection has a list of commercial and free services.
A: I generally recommend Copyscape and/or eTBlast if you choose not to use iThenticate.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I stop Visual Studio from automatically inserting asterisk during a block comment? I'm tearing my hair out with this one. If I start a block comment /* in VS.NET 2005+ and then press Enter, Visual Studio insists on adding another asterisk *. I know there's an option to turn this off but I just can't find it. Anyone know how to turn this feature off?
A: Update: this setting was changed in VS 2015 update 2. See this answer.
This post addresses your question. The gist of it is:
Text Editor > C# > Advanced > Generate XML documentation comments for ///
A: Visual Studio 2015 Update 2 has (finally) addressed this problem!
A new option has been added to Tools > Options > Text Editor > C# > Advanced named Insert * at the start of new lines when writing /* */ comments.
Disabling this option prevents the editor from automatically prefixing block comments with asterisks. It only took 7.5 years and 4 major releases :)
A: Try this:
#if false
whatever you want here
and here
#endif
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
} |
Q: Are JavaScript strings immutable? Do I need a "string builder" in JavaScript? Does javascript use immutable or mutable strings? Do I need a "string builder"?
A: The string type value is immutable, but the String object, which is created by using the String() constructor, is mutable, because it is an object and you can add new properties to it.
> var str = new String("test")
undefined
> str
[String: 'test']
> str.newProp = "some value"
'some value'
> str
{ [String: 'test'] newProp: 'some value' }
Meanwhile, although you can add new properties, you can't change the already existing properties
A screenshot of a test in Chrome console
In conclusion,
1. All string-type values (primitives) are immutable.
2. The String object is mutable, but the string-type value (primitive) it contains is immutable.
A: from the rhino book:
In JavaScript, strings are immutable objects, which means that the
characters within them may not be changed and that any operations on
strings actually create new strings. Strings are assigned by
reference, not by value. In general, when an object is assigned by
reference, a change made to the object through one reference will be
visible through all other references to the object. Because strings
cannot be changed, however, you can have multiple references to a
string object and not worry that the string value will change without
your knowing it
A: They are immutable. You cannot change a character within a string with something like var myString = "abbdef"; myString[2] = 'c'. The string manipulation methods such as trim, slice return new strings.
In the same way, if you have two references to the same string, modifying one doesn't affect the other
let a = b = "hello";
a = a + " world";
// b is not affected
However, I've always heard what Ash mentioned in his answer (that using Array.join is faster for concatenation) so I wanted to test out the different methods of concatenating strings and abstracting the fastest way into a StringBuilder. I wrote some tests to see if this is true (it isn't!).
This was what I believed would be the fastest way, though I kept thinking that adding a method call may make it slower...
function StringBuilder() {
this._array = [];
this._index = 0;
}
StringBuilder.prototype.append = function (str) {
this._array[this._index] = str;
this._index++;
}
StringBuilder.prototype.toString = function () {
return this._array.join('');
}
Here are performance speed tests. All three of them create a gigantic string made up of concatenating "Hello diggity dog" one hundred thousand times into an empty string.
I've created three types of tests
*
*Using Array.push and Array.join
*Using Array indexing to avoid Array.push, then using Array.join
*Straight string concatenation
Then I created the same three tests by abstracting them into StringBuilderConcat, StringBuilderArrayPush and StringBuilderArrayIndex http://jsperf.com/string-concat-without-sringbuilder/5 Please go there and run tests so we can get a nice sample. Note that I fixed a small bug, so the data for the tests got wiped, I will update the table once there's enough performance data. Go to http://jsperf.com/string-concat-without-sringbuilder/5 for the old data table.
Here are some numbers (latest update in March 2018), if you don't want to follow the link. The number on each test is in 1000 operations/second (higher is better)
Browser            Index   Push   Concat   SBIndex   SBPush   SBConcat
Chrome 71.0.3578     988   1006     2902       963     1008       2902
Firefox 65          1979   1902     2197      1917     1873       1953
Edge                 593    373      952       361      415        444
Exploder 11          655    532      761       537      567        387
Opera 58.0.3135     1135   1200     4357      1137     1188       4294
Findings
*
*Nowadays, all evergreen browsers handle string concatenation well. Array.join only helps IE 11
*Overall, Opera is fastest, 4 times as fast as Array.join
*Firefox is second and Array.join is only slightly slower in FF but considerably slower (3x) in Chrome.
*Chrome is third but string concat is 3 times faster than Array.join
*Creating a StringBuilder seems to not affect performance too much.
Hope somebody else finds this useful
Different Test Case
Since @RoyTinker thought that my test was flawed, I created a new case that doesn't create a big string by concatenating the same string, it uses a different character for each iteration. String concatenation still seemed faster or just as fast. Let's get those tests running.
I suggest everybody should keep thinking of other ways to test this, and feel free to add new links to different test cases below.
http://jsperf.com/string-concat-without-sringbuilder/7
A: Strings are immutable – they cannot change, we can only ever make new strings.
Example:
var str = "Immutable value"; // it is immutable
var other = str.slice(2, 10); // a new string
A: Just to clarify for simple minds like mine (from MDN):
Immutables are the objects whose state cannot be changed once the object is created.
String and Numbers are Immutable.
Immutable means that:
You can make a variable name point to a new value, but the previous value is still held in memory. Hence the need for garbage collection.
var immutableString = "Hello";
// In the above code, a new object with string value is created.
immutableString = immutableString + "World";
// We are now appending "World" to the existing value.
This looks like we're mutating the string 'immutableString', but we're not. Instead:
On appending the "immutableString" with a string value, following events occur:
*
*Existing value of "immutableString" is retrieved
*"World" is appended to the existing value of "immutableString"
*The resultant value is then allocated to a new block of memory
*"immutableString" object now points to the newly created memory space
*Previously created memory space is now available for garbage collection.
A: Performance tip:
If you have to concatenate large strings, put the string parts into an array and use the Array.join() method to get the overall string. This can be many times faster for concatenating a large number of strings.
There is no StringBuilder in JavaScript.
A: Regarding your question (in your comment to Ash's response) about the StringBuilder in ASP.NET Ajax the experts seem to disagree on this one.
Christian Wenz says in his book Programming ASP.NET AJAX (O'Reilly) that "this approach does not have any measurable effect on memory (in fact, the implementation seems to be a tick slower than the standard approach)."
On the other hand Gallo et al say in their book ASP.NET AJAX in Action (Manning) that "When the number of strings to concatenate is larger, the string builder becomes an essential object to avoid huge performance drops."
I guess you'd need to do your own benchmarking and results might differ between browsers, too. However, even if it doesn't improve performance it might still be considered "useful" for programmers who are used to coding with StringBuilders in languages like C# or Java.
A: It's a late post, but I didn't find a good book quote among the answers.
Here's a definitive excerpt from a reliable book:
Strings are immutable in ECMAScript, meaning that once they are created, their values cannot change. To change the string held by a variable, the original string must be destroyed and the variable filled with another string containing a new value...
—Professional JavaScript for Web Developers, 3rd Ed., p.43
Now, the answer which quotes the Rhino book's excerpt is right about string immutability but wrong in saying "Strings are assigned by reference, not by value" (they probably meant to put it the other way around).
The "reference/value" misconception is clarified in the "Professional JavaScript", chapter named "Primitive and Reference values":
The five primitive types...[are]: Undefined, Null, Boolean, Number, and String. These variables are said to be accessed by value, because you are manipulating the actual value stored in the variable.
—Professional JavaScript for Web Developers, 3rd Ed., p.85
that's opposed to objects:
When you manipulate an object, you’re really working on a reference to that object rather than the actual object itself. For this reason, such values are said to be accessed by reference.—Professional JavaScript for Web Developers, 3rd Ed., p.85
A: JavaScript strings are indeed immutable.
A: Strings in Javascript are immutable
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "293"
} |
Q: How to get hashes out of arrays in Perl? I want to write a little "DBQuery" function in perl so I can have one-liners which send an SQL statement and receive back and an array of hashes, i.e. a recordset. However, I'm running into an issue with Perl syntax (and probably some odd pointer/reference issue) which is preventing me from packing out the information from the hash that I'm getting from the database. The sample code below demonstrates the issue.
I can get the data "Jim" out of a hash inside an array with this syntax:
print $records[$index]{'firstName'}
returns "Jim"
but if I copy the hash record in the array to its own hash variable first, then I strangely can't access the data anymore in that hash:
%row = $records[$index];
$row{'firstName'};
returns "" (blank)
Here is the full sample code showing the problem. Any help is appreciated:
my @records = (
{'id' => 1, 'firstName' => 'Jim'},
{'id' => 2, 'firstName' => 'Joe'}
);
my @records2 = ();
$numberOfRecords = scalar(@records);
print "number of records: " . $numberOfRecords . "\n";
for(my $index=0; $index < $numberOfRecords; $index++) {
#works
print 'you can print the records like this: ' . $records[$index]{'firstName'} . "\n";
#does NOT work
%row = $records[$index];
print 'but not like this: ' . $row{'firstName'} . "\n";
}
A: The array of hashes doesn't actually contain hashes, but rather references to hashes.
This line:
%row = $records[$index];
assigns %row a single entry. The key is the stringified scalar:
{'id' => 1, 'firstName' => 'Jim'},
which is a reference to the hash, while the value is undef (blank).
What you really want to do is this:
$row = $records[$index];
$row->{'firstName'};
or else:
%row = %{ $records[$index] };
$row{'firstName'};
A: Others have commented on hashes vs hashrefs. One other thing that I feel should be mentioned is your DBQuery function - it seems you're trying to do something that's already built into the DBI? If I understand your question correctly, you're trying to replicate something like selectall_arrayref:
This utility method combines "prepare", "execute" and "fetchall_arrayref" into a single call. It returns a reference to an array containing a reference to an array (or hash, see below) for each row of data fetched.
A: To add to the lovely answers above, let me add that you should always, always, always (yes, three "always"es) use "use warnings" at the top of your code. If you had done so, you would have gotten the warning "Reference found where even-sized list expected at -e line 1."
A: The nested data structure contains a hash reference, not a hash.
# Will work (the -> dereferences the reference)
$row = $records[$index];
print "This will work: ", $row->{firstName}, "\n";
# This will also work, by promoting the hash reference into a hash
%row = %{ $records[$index] };
print "This will work: ", $row{firstName}, "\n";
If you're ever presented with a deep Perl data structure, you may profit from printing it using Data::Dumper to print it into human-readable (and Perl-parsable) form.
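For example, a quick sketch of dumping the structure from the question:
use Data::Dumper;
print Dumper(\@records);   # prints the array of hash references in readable form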
A: What you actually have in your array is a hashref, not a hash. If you don't understand this concept, it's probably worth reading the perlref documentation.
to get the hash you need to do
my %hash = %{ $records[$index] };
Eg.
my @records = (
{'id' => 1, 'firstName' => 'Jim'},
{'id' => 2, 'firstName' => 'Joe'}
);
my %hash = %{$records[1]};
print $hash{id}."\n";
Although. I'm not sure why you would want to do this, unless its for academic purposes. Otherwise, I'd recommend either fetchall_hashref/fetchall_arrayref in the DBI module or using something like Class::DBI.
A: Also note a good Perl idiom to use is
for my $rowHR ( @records ) {
my %row = %$rowHR;
#or whatever...
}
to iterate through the list.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Can a STP template be hidden from subsite creation page? When a template is added using the add-template stsadm command, it becomes available to everyone when creating a subsite. Is there any way to make it only available when a site collection is being created?
A: go to site actions -> Site Settings -> view all site settings -> site templates and page layouts and remove the site template from the list of allowed items.
Gary Lapointe may also have made an stsadm extension for it; check stsadm.blogspot.com
Mauro Masucci
http://www.brantas.co.uk
A: The URL to the blog post mentioned above, for hiding the STP templates using the stsadm extension, is http://stsadm.blogspot.com/2007/08/set-available-site-templates.html
Here’s an example of how to remove a template from the list of available templates for a site collection:
stsadm -o gl-removeavailablesitetemplate -url "http://intranet/" -template "WIKI#0" -lcid 1033 -resetallsubsites
A: Thanks Mauro. I was hoping for a solution which doesn't require going to every site collection, but it looks like there may not be one!
A: Komrade,
stsadm.blogspot.com may be the answer again, you can list all the site collections and then using the command that edward posted to remove the site templates. That might help make things a bit quicker!
Although, you should only have to do it once per site collection, all subsites (as far as I remember) inherit their settings from the parent site.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Best way to deal with RoutingError in Rails 2.1.x? I'm playing with the routing.rb code in Rails 2.1, and trying to to get it to the point where I can do something useful with the RoutingError exception that is thrown when it can't find the appropriate path.
This is a somewhat tricky problem, because there are some classes of URLs which are just plain BAD: the /azenv.php bot attacks, the people typing /bar/foo/baz into the URL, etc... we don't want those.
Then there's subtle routing problems, where we do want to be notified: /artists/ for example, or ///. In these situations, we may want an error being thrown, or not... or we get Google sending us URLs which used to be valid but are no longer because people deleted them.
In each of these situations, I want a way to contain, analyze and filter the path that we get back, or at least some Railsy way to manage routing past the normal 'fallback catchall' url. Does this exist?
EDIT:
So the code here is:
# File vendor/rails/actionpack/lib/action_controller/rescue.rb, line 141
def rescue_action_without_handler(exception)
  log_error(exception) if logger
  erase_results if performed?

  # Let the exception alter the response if it wants.
  # For example, MethodNotAllowed sets the Allow header.
  if exception.respond_to?(:handle_response!)
    exception.handle_response!(response)
  end

  if consider_all_requests_local || local_request?
    rescue_action_locally(exception)
  else
    rescue_action_in_public(exception)
  end
end
So our best option is to override log_error(exception) so that we can filter down the exceptions according to the exception. So in ApplicationController
def log_error(exception)
  message = '...'
  if should_log_exception_as_debug?(exception)
    logger.debug(message)
  else
    logger.error(message)
  end
end

def should_log_exception_as_debug?(exception)
  return (ActionController::RoutingError === exception)
end
Season to taste with additional logic wherever you want different handling per controller, route, etc.
A: Nooooo!!! Don't implement method_missing on your controller! And please try to avoid action_missing as well.
The frequently touted pattern is to add a route:
map.connect '*path', :controller => 'error', :action => 'not_found'
Where you can show an appropriate error.
Rails also has a mechanism called rescue_action_in_public where you can write your own error handling logic -- we really should clean it up and encourage people to use it. PDI! :-)
A: There's the method_missing method. You could implement that in your Application Controller and catch all missing actions, maybe logging those and redirecting to the index action of the relevant controller. This approach would ignore everything that can't be routed to a controller, which is pretty close to what you want.
Alternatively, I'd just log all errors, extract the URL and sort it by # of times it occurred.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to write a download progress indicator in Python? I am writing a little application to download files over http (as, for example, described here).
I also want to include a little download progress indicator showing the percentage of the download progress.
Here is what I came up with:
def dlProgress(count, blockSize, totalSize):
    percent = int(count*blockSize*100/totalSize)
    sys.stdout.write("%2d%%" % percent)
    sys.stdout.write("\b\b\b")
    sys.stdout.flush()

sys.stdout.write(rem_file + "...")
urllib.urlretrieve(rem_file, loc_file, reporthook=dlProgress)
Output: MyFileName... 9%
Any other ideas or recommendations to do this?
One thing that's somewhat annoying is the blinking cursor in the terminal on the first digit of the percentage. Is there a way to prevent this? Is there a way to hide the cursor?
EDIT:
Here's a better alternative, using a global variable for the filename in dlProgress and the '\r' code:
def dlProgress(count, blockSize, totalSize):
    global rem_file  # global variable to be used in dlProgress
    percent = int(count*blockSize*100/totalSize)
    sys.stdout.write("\r" + rem_file + "...%d%%" % percent)
    sys.stdout.flush()

urllib.urlretrieve(rem_file, loc_file, reporthook=dlProgress)
Output: MyFileName...9%
And the cursor shows up at the END of the line. Much better.
A: For what it's worth, here's the code I used to get it working:
from urllib import urlretrieve
from progressbar import ProgressBar, Percentage, Bar

url = "http://......."
fileName = "file"
pbar = ProgressBar(widgets=[Percentage(), Bar()])

def dlProgress(count, blockSize, totalSize):
    pbar.update(int(count * blockSize * 100 / totalSize))

urlretrieve(url, fileName, reporthook=dlProgress)
A: If you use the curses package, you have much greater control of the console. It also comes at a higher cost in code complexity and is probably unnecessary unless you are developing a large console-based app.
For a simple solution, you can always put a spinning wheel at the end of the status message (the sequence of characters |, /, -, \), which actually looks nice under a blinking cursor.
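A minimal sketch of that spinner idea in Python (the loop and the sleep are stand-ins for wherever your download callback fires):
import itertools
import sys
import time

spinner = itertools.cycle(['|', '/', '-', '\\'])
for _ in range(20):  # stand-in for your real download loop
    sys.stdout.write("downloading... %s\r" % next(spinner))
    sys.stdout.flush()
    time.sleep(0.1)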
A: There's a text progress bar library for python at http://pypi.python.org/pypi/progressbar/2.2 that you might find useful:
This library provides a text mode progressbar. This is typically used to display the progress of a long running operation, providing a visual clue that processing is underway.
The ProgressBar class manages the progress, and the format of the line is given by a number of widgets. A widget is an object that may display differently depending on the state of the progress. There are three types of widget: a string, which always shows itself; a ProgressBarWidget, which may return a different value every time its update method is called; and a ProgressBarWidgetHFill, which is like ProgressBarWidget, except it expands to fill the remaining width of the line.
The progressbar module is very easy to use, yet very powerful. And automatically supports features like auto-resizing when available.
A: You might also try:
sys.stdout.write("\r%2d%%" % percent)
sys.stdout.flush()
Using a single carriage return at the beginning of your string rather than several backspaces. Your cursor will still blink, but it'll blink after the percent sign rather than under the first digit, and with one control character instead of three you may get less flicker.
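On the hiding-the-cursor part of the question: there is no portable way in plain Python, but on ANSI-capable terminals (most Unix terminals; not the old Windows console) you can emit the standard escape sequences for hiding and showing the cursor. A rough sketch:
import sys

HIDE_CURSOR = "\033[?25l"  # ANSI/VT escape: hide cursor
SHOW_CURSOR = "\033[?25h"  # ANSI/VT escape: show cursor

sys.stdout.write(HIDE_CURSOR)
try:
    pass  # run the download with the "\r" progress writes shown above
finally:
    sys.stdout.write(SHOW_CURSOR)  # always restore the cursor on exit
    sys.stdout.flush()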
A: I used this code:
import urllib2

url = (<file location>)
file_name = url.split('/')[-1]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)

file_size_dl = 0
block_sz = 8192
while True:
    buffer = u.read(block_sz)
    if not buffer:
        break
    file_size_dl += len(buffer)
    f.write(buffer)
    status = r"%10d  [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
    status = status + chr(8)*(len(status)+1)
    print status,

f.close()
A: def download_progress_hook(count, blockSize, totalSize):
    """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 5% change in download progress.
    """
    global last_percent_reported
    percent = int(count * blockSize * 100 / totalSize)

    if last_percent_reported != percent:
        if percent % 5 == 0:
            sys.stdout.write("%s%%" % percent)
            sys.stdout.flush()
        else:
            sys.stdout.write(".")
            sys.stdout.flush()
        last_percent_reported = percent

last_percent_reported = None  # initialise the module-level state the hook relies on
urlretrieve(url, filename, reporthook=download_progress_hook)
A: For small files you may need to add these lines in order to avoid crazy percentages:
sys.stdout.write("\r%2d%%" % percent)
sys.stdout.flush()
Cheers
A: That's how I did it; this could help you:
https://github.com/mouuff/MouDownloader/blob/master/api/download.py
A: Late to the party, as usual. Here's an implementation that supports reporting progress, like the core urlretrieve:
import urllib2

def urlretrieve(urllib2_request, filepath, reporthook=None, chunk_size=4096):
    req = urllib2.urlopen(urllib2_request)
    if reporthook:
        # ensure the progress hook is callable
        if not hasattr(reporthook, '__call__'):
            reporthook = None
        try:
            # get response length
            total_size = req.info().getheaders('Content-Length')[0]
        except (KeyError, IndexError):
            reporthook = None

    downloaded = 0
    num_blocks = 0
    with open(filepath, 'w') as f:
        while True:
            data = req.read(chunk_size)
            num_blocks += 1
            if reporthook:
                # report progress
                reporthook(num_blocks, chunk_size, total_size)
            if not data:
                break
            f.write(data)
            downloaded += len(data)
    # return downloaded length
    return downloaded
A: To avoid progress values like 106% or similar, clamp the percentage:
percent = min(int(count * blockSize * 100 / totalSize), 100)
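Putting the clamp into a complete reporthook might look like this (the URL and local filename are placeholders):
import sys
import urllib

def dlProgress(count, blockSize, totalSize):
    # clamp so a final partial block never reports more than 100%
    percent = min(int(count * blockSize * 100 / totalSize), 100)
    sys.stdout.write("\r%3d%%" % percent)
    sys.stdout.flush()

urllib.urlretrieve("http://example.com/file", "file", reporthook=dlProgress)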
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58"
} |
Q: How would you organize a Subversion repository for in house software projects? I work for a company whose primary business is not software related. Most documentation for using source control is written with a development team writing for commercial or open source projects in mind. As someone who writes in house software I can say that work is done differently than it would be in a commercial or open source setting. In addition there are stored procedures and database scripts that need to be kept in sync with the code.
In particular I am looking to get suggestions on how best to structure the repository with in house software in mind. Most documentation suggests trunk, branches, tags etc. And procedures for keeping production, test and development environments in sync with their respective sections in the repository etc.
A: For subversion, there is Assembla too which comes with Trac and other useful tools. It's free or you could pay for an account if you need more space or users per project.
A: You could use a service like www.unfuddle.com to set up a free SVN or GIT repository.
We use Unfuddle and it's really great. There are free and paid versions (depending on your needs).
Or, you could of course set up a local copy. There are plenty of tutorials to be found via Google for that: http://www.google.com/search?rlz=1C1GGLS_enUS291&aq=f&sourceid=chrome&ie=UTF-8&q=set+up+svn
A: One repository for your projects is probably sufficient. I like the typical approach that indexes the layout by project (see this section from the O'Reilly Subversion book):
/first-project/trunk
/first-project/branches
/first-project/tags
/another-project/trunk
/another-project/branches
/another-project/tags
/common-stuff/trunk
/common-stuff/branches
/common-stuff/tags
Keep in mind that you can always reorganize the repository later.
Also, for in-house stuff, I prefer FSFS for the data-store, as opposed to Berkeley DB. FSFS is more resilient and the speed of checkouts is not much concern for small teams/projects. You can compare and decide for yourself.
Other standard parts of the recipe include Trac and a minimal Linux server to host the repository on the LAN.
A: After posting this question I spoke with a coworker who suggested I read the article:
http://www.codinghorror.com/blog/archives/000968.html
In short, it advocates that programmers be more aware of branching. It helped me to see there is no one right way to organize our repository. For our team we will have a trunk and two long-term branches for test and development. In addition we will make a separate branch for each task we do, and merge the changes from the task branches as we promote the task up to testing and production.
A: For my company, I use svn+ssh and key-based authentication. You can do this with both windows clients and linux clients. This is real easy to use once you get your keys straight as you use an ssh key to login rather than typing a password.
Here's an article on setting up svn+ssh with notes on security. If you understand all of this stuff and follow these steps, you'll be off to a good start.
This article describes a number of ways to further secure your ssh logins for the svn accounts.
I recommend creating accounts specifically for svn access with no other access to that server. My guess is that you would use a daily build or automated script to update your stored procs in the db. Daily builds can have their own special accounts and their own ssh keys. I don't like for my automated tools to run with the same login as a human user (so I know which tool is broken).
If you don't understand all of the security tricks, searching google can get you some help. If you have trouble, set one up without the security tricks first. That makes it a bit simpler to troubleshoot.
Good luck, and enjoy having the benefits of source control!
A: I agree with using the common conventions of trunk/branches/tags. Beyond that, I think you might be looking for something like my answer at How do you organize your version control repository?.
A: Setting up SVN repositories can be tricky only in the sense of how you organize them. Before we setup SVN, I actually RTFM'd the online Subversion manual which discusses organizational techniques for repositories and some of the gotchas you should think about in advance, namely what you cannot do after you have created your repositories if you decide to change your mind. I suggest a pass through this manual before setup.
For us, as consultants, we do custom and in-house software development as well as some document management through SVN. It was in our interest to create one repository for each client and one for ourselves. Within each repository, we created folders for each project (software or otherwise). This allowed us to segment security access by repository and by client and even by project within a repository. Going deeper, for each software project we created 'working', 'tags' and 'branches' folders. We generally put releases in 'tags' using 'release_w.x.y.z' as the tag for a standard.
In your case, to keep sprocs, scripts, and other related documents in synch, you can create a project folder, then under that a 'working' folder, then under that 'code' and next to it 'scripts', etc. Then when you tag the working version for release, you end up tagging it all together.
\Repository
    \ProjectX
        \Working
            \Code
            \Scripts
            \Notes
        \Tags
        \Branches
As for non-code, I would suggest a straight folder layout by project or document type (manuals, policies, etc.). Generally with documents and depending on how your company operates, just having the version history/logs is enough.
We run SVN on Windows along with WebSVN which is a great open source repository viewer. We use it to give clients web access to their code and it's all driven by the underlying Subversion security. Internally, we use TortoiseSVN to manage the repositories, commit, update, import, etc.
Another thing is that training should be considered an integral part of your deployment. Users new to version control may have a hard time understanding what is going on. We found that giving them functional instructions (do this when creating a project, do this when updating, etc.) was very helpful while they learned the concepts. We created a 'sandbox' repository where users can play all they want with documents and folders to practice, you may find this useful as well to experiment on what policies to establish.
Good luck!
A: As specified in this thread, distributed VCSs (git, Mercurial) are a better model than centralized ones: branches are easy to create and merge, there is no special server to set up, no network access is needed to work, and there are some other advantages. Even if you work alone, a DVCS lets you bring people into your projects very easily if the need arises.
Anyhow, to answer your question directly: one way to set up SVN would be to have a repository per project and, depending on whether the stored procedures, scripts and libraries are shared, either create a directory in each project tree for scripts and stored procedures, or a full repository for the shared code.
A: I believe in following the basic pattern of a project dir with trunk, tags and branches beneath it. I usually like to set up the top level like this:
Projects holds all of the individual project or modules
Releases holds release tags that involve multiple modules
Users holds private user branches
Admin holds my hook scripts, backup scripts etc.
The release tags are useful if you have a product that is made up of several modules that may each be at a different tag for a specific release (one ring to bind them and all that). It makes it easy for source escrow and for developers to reference what made up release 1 or 2.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Regular Expression to match valid dates I'm trying to write a regular expression that validates a date. The regex needs to match the following
*
*M/D/YYYY
*MM/DD/YYYY
*Single digit months can start with a leading zero (eg: 03/12/2008)
*Single digit days can start with a leading zero (eg: 3/02/2008)
*CANNOT include February 30 or February 31 (eg: 2/31/2008)
So far I have
^(([1-9]|1[012])[-/.]([1-9]|[12][0-9]|3[01])[-/.](19|20)\d\d)|((1[012]|0[1-9])(3[01]|2\d|1\d|0[1-9])(19|20)\d\d)|((1[012]|0[1-9])[-/.](3[01]|2\d|1\d|0[1-9])[-/.](19|20)\d\d)$
This matches properly EXCEPT it still includes 2/30/2008 & 2/31/2008.
Does anyone have a better suggestion?
Edit: I found the answer on RegExLib
^((((0[13578])|([13578])|(1[02]))[\/](([1-9])|([0-2][0-9])|(3[01])))|(((0[469])|([469])|(11))[\/](([1-9])|([0-2][0-9])|(30)))|((2|02)[\/](([1-9])|([0-2][0-9]))))[\/]\d{4}$|^\d{4}$
It matches all valid months that follow the MM/DD/YYYY format.
Thanks everyone for the help.
A: To check the validity of a date in the following formats:
YYYY/MM/DD or YYYY-MM-DD
I would recommend you use the following regular expression:
(((19|20)([2468][048]|[13579][26]|0[48])|2000)[/-]02[/-]29|((19|20)[0-9]{2}[/-](0[469]|11)[/-](0[1-9]|[12][0-9]|30)|(19|20)[0-9]{2}[/-](0[13578]|1[02])[/-](0[1-9]|[12][0-9]|3[01])|(19|20)[0-9]{2}[/-]02[/-](0[1-9]|1[0-9]|2[0-8])))
Matches
2016-02-29 | 2012-04-30 | 2019/09/30
Non-Matches
2016-02-30 | 2012-04-31 | 2019/09/31
You can customise it if you want to allow only '/' or '-' separators.
This regex strictly controls the validity of the date and verifies 28-, 30- and 31-day months, even leap years with a 29/02.
Try it, it works very well and protects your code from a lot of bugs!
FYI : I made a variant for the SQL datetime. You'll find it there (look for my name) : Regular Expression to validate a timestamp
Feedback are welcomed :)
A: Here is a regex that matches all valid dates, including leap years. Accepted formats: mm/dd/yyyy, mm-dd-yyyy or mm.dd.yyyy.
^(?:(?:(?:0?[13578]|1[02])(\/|-|\.)31)\1|(?:(?:0?[1,3-9]|1[0-2])(\/|-|\.)(?:29|30)\2))(?:(?:1[6-9]|[2-9]\d)?\d{2})$|^(?:0?2(\/|-|\.)29\3(?:(?:(?:1[6-9]|[2-9]\d)?(?:0[48]|[2468][048]|[13579][26])|(?:(?:16|[2468][048]|[3579][26])00))))$|^(?:(?:0?[1-9])|(?:1[0-2]))(\/|-|\.)(?:0?[1-9]|1\d|2[0-8])\4(?:(?:1[6-9]|[2-9]\d)?\d{2})$
courtesy Asiq Ahamed
A: I landed here because the title of this question is broad and I was looking for a regex that I could use to match on a specific date format (like the OP). But I then discovered, as many of the answers and comments have comprehensively highlighted, there are many pitfalls that make constructing an effective pattern very tricky when extracting dates that are mixed-in with poor quality or non-structured source data.
In my exploration of the issues, I have come up with a system that enables you to build a regular expression by arranging together four simpler sub-expressions that match on the delimiter, and valid ranges for the year, month and day fields in the order you require.
These are :-
Delimeters
[^\w\d\r\n:]
This will match anything that is not a word character, digit character, carriage return, new line or colon. The colon has to be there to prevent matching on times that look like dates (see my test Data)
You can optimise this part of the pattern to speed up matching, but this is a good foundation that detects most valid delimiters.
Note however; It will match a string with mixed delimiters like this 2/12-73 that may not actually be a valid date.
Year Values
(\d{4}|\d{2})
This matches a group of two or 4 digits, in most cases this is acceptable, but if you're dealing with data from the years 0-999 or beyond 9999 you need to decide how to handle that because in most cases a 1, 3 or >4 digit year is garbage.
Month Values
(0?[1-9]|1[0-2])
Matches any number between 1 and 12 with or without a leading zero - note: 0 and 00 is not matched.
Date Values
(0?[1-9]|[12]\d|30|31)
Matches any number between 1 and 31 with or without a leading zero - note: 0 and 00 is not matched.
This expression matches Date, Month, Year formatted dates
(0?[1-9]|[12]\d|30|31)[^\w\d\r\n:](0?[1-9]|1[0-2])[^\w\d\r\n:](\d{4}|\d{2})
But it will also match some of the Year, Month Date ones. It should also be bookended with the boundary operators to ensure the whole date string is selected and prevent valid sub-dates being extracted from data that is not well-formed i.e. without boundary tags 20/12/194 matches as 20/12/19 and 101/12/1974 matches as 01/12/1974
Compare the results of the next expression to the one above with the test data in the nonsense section (below)
\b(0?[1-9]|[12]\d|30|31)[^\w\d\r\n:](0?[1-9]|1[0-2])[^\w\d\r\n:](\d{4}|\d{2})\b
There's no validation in this regex so a well-formed but invalid date such as 31/02/2001 would be matched. That is a data quality issue, and as others have said, your regex shouldn't need to validate the data.
Because you (as a developer) can't guarantee the quality of the source data you do need to perform and handle additional validation in your code, if you try to match and validate the data in the RegEx it gets very messy and becomes difficult to support without very concise documentation.
Garbage in, garbage out.
Having said that, if you do have mixed formats where the date values vary and you have to extract as much as you can, you can combine a couple of expressions together like so:
This (disastrous) expression matches DMY and YMD dates
(\b(0?[1-9]|[12]\d|30|31)[^\w\d\r\n:](0?[1-9]|1[0-2])[^\w\d\r\n:](\d{4}|\d{2})\b)|(\b(0?[1-9]|1[0-2])[^\w\d\r\n:](0?[1-9]|[12]\d|30|31)[^\w\d\r\n:](\d{4}|\d{2})\b)
BUT you won't be able to tell if dates like 6/9/1973 are the 6th of September or the 9th of June. I'm struggling to think of a scenario where that is not going to cause a problem somewhere down the line, it's bad practice and you shouldn't have to deal with it like that - find the data owner and hit them with the governance hammer.
Finally, if you want to match a YYYYMMDD string with no delimiters you can take some of the uncertainty out and the expression looks like this
\b(\d{4})(0[1-9]|1[0-2])(0[1-9]|[12]\d|30|31)\b
But note again, it will match on well-formed but invalid values like 20010231 (31st of Feb!) :)
Test data
In experimenting with the solutions in this thread I ended up with a test data set that includes a variety of valid and non-valid dates and some tricky situations where you may or may not want to match i.e. Times that could match as dates and dates on multiple lines.
I hope this is useful to someone.
Valid Dates in various formats
Day, month, year
2/11/73
02/11/1973
2/1/73
02/01/73
31/1/1973
02/1/1973
31.1.2011
31-1-2001
29/2/1973
29/02/1976
03/06/2010
12/6/90
month, day, year
02/24/1975
06/19/66
03.31.1991
2.29.2003
02-29-55
03-13-55
03-13-1955
12\24\1974
12\30\1974
1\31\1974
03/31/2001
01/21/2001
12/13/2001
Match both DMY and MDY
12/12/1978
6/6/78
06/6/1978
6/06/1978
using whitespace as a delimiter
13 11 2001
11 13 2001
11 13 01
13 11 01
1 1 01
1 1 2001
Year Month Day order
76/02/02
1976/02/29
1976/2/13
76/09/31
YYYYMMDD sortable format
19741213
19750101
Valid dates before Epoch
12/1/10
12/01/660
12/01/00
12/01/0000
Valid date after 2038
01/01/2039
01/01/39
Valid date beyond the year 9999
01/01/10000
Dates with leading or trailing characters
12/31/21/
31/12/1921AD
31/12/1921.10:55
12/10/2016 8:26:00.39
wfuwdf12/11/74iuhwf
fwefew13/11/1974
01/12/1974vdwdfwe
01/01/99werwer
12321301/01/99
Times that look like dates
12:13:56
13:12:01
1:12:01PM
1:12:01 AM
Dates that runs across two lines
1/12/19
74
01/12/19
74/13/1946
31/12/20
08:13
Invalid, corrupted or nonsense dates
0/1/2001
1/0/2001
00/01/2100
01/0/2001
0101/2001
01/131/2001
31/31/2001
101/12/1974
56/56/56
00/00/0000
0/0/1999
12/01/0
12/10/-100
74/2/29
12/32/45
20/12/194
2/12-73
A: Sounds like you're overextending regex for this purpose. What I would do is use a regex to match a few date formats and then use a separate function to validate the values of the date fields so extracted.
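For instance, a sketch of that split in Python: the regex only checks the M/D/YYYY shape, and the date library rejects impossible values like 2/31/2008 (the format string is an assumption; adjust it to the formats you accept):
import re
from datetime import datetime

DATE_SHAPE = re.compile(r'^\d{1,2}/\d{1,2}/\d{4}$')  # shape only, no range checks

def is_valid_date(text):
    if not DATE_SHAPE.match(text):
        return False
    try:
        # the library knows about month lengths and leap years
        datetime.strptime(text, '%m/%d/%Y')
        return True
    except ValueError:
        return False

print(is_valid_date('2/29/2008'))  # True, 2008 is a leap year
print(is_valid_date('2/31/2008'))  # False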
A: Perl expanded version
Note use of /x modifier.
/^(
    (
        (                # 31 day months
            (0[13578])
          | ([13578])
          | (1[02])
        )
        [\/]
        (
            ([1-9])
          | ([0-2][0-9])
          | (3[01])
        )
    )
  | (
        (                # 30 day months
            (0[469])
          | ([469])
          | (11)
        )
        [\/]
        (
            ([1-9])
          | ([0-2][0-9])
          | (30)
        )
    )
  | (                    # 29 day month (Feb)
        (2|02)
        [\/]
        (
            ([1-9])
          | ([0-2][0-9])
        )
    )
)
[\/]
# year
\d{4}$
| ^\d{4}$                # year only
/x
Original
^((((0[13578])|([13578])|(1[02]))[\/](([1-9])|([0-2][0-9])|(3[01])))|(((0[469])|([469])|(11))[\/](([1-9])|([0-2][0-9])|(30)))|((2|02)[\/](([1-9])|([0-2][0-9]))))[\/]\d{4}$|^\d{4}$
A: If you didn't get the above suggestions working, I use this, as it gets any date. I ran this expression through 50 links, and it got all the dates on each page.
^20\d\d-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-(0[1-9]|[1-2][0-9]|3[01])$
A: This regex validates dates between 01-01-2000 and 12-31-2099 with matching separators.
^(0[1-9]|1[012])([- /.])(0[1-9]|[12][0-9]|3[01])\2(19|20)\d\d$
A: var dtRegex = new RegExp(/[1-9\-]{4}[0-9\-]{2}[0-9\-]{2}/);
if(dtRegex.test(date) == true){
var evalDate = date.split('-');
if(evalDate[0] != '0000' && evalDate[1] != '00' && evalDate[2] != '00'){
return true;
}
}
A: This is not an appropriate use of regular expressions. You'd be better off using
[0-9]{2}/[0-9]{2}/[0-9]{4}
and then checking ranges in a higher-level language.
A: Maintainable Perl 5.10 version
/
    (?:
        (?<month> (?&mon_29)) [\/] (?<day> (?&day_29))
      | (?<month> (?&mon_30)) [\/] (?<day> (?&day_30))
      | (?<month> (?&mon_31)) [\/] (?<day> (?&day_31))
    )
    [\/]
    (?<year> [0-9]{4})

    (?(DEFINE)
        (?<mon_29> 0?2 )
        (?<mon_30> 0?[469] | (11) )
        (?<mon_31> 0?[13578] | 1[02] )
        (?<day_29> 0?[1-9] | [1-2]?[0-9] )
        (?<day_30> 0?[1-9] | [1-2]?[0-9] | 30 )
        (?<day_31> 0?[1-9] | [1-2]?[0-9] | 3[01] )
    )
/x
You can retrieve the elements by name in this version.
say "Month=$+{month} Day=$+{day} Year=$+{year}";
( No attempt has been made to restrict the values for the year. )
A: Regex was not meant to validate number ranges (this number must be from 1 to 5 when the number preceding it happens to be a 2 and the number preceding that happens to be below 6).
Just look for the pattern of placement of numbers in the regex. If you need to validate the qualities of a date, put it in a date object (JS/C#/VB) and interrogate the numbers there.
A: I know this does not answer your question, but why don't you use a date handling routine to check if it's a valid date? Even if you modify the regexp with a negative lookahead assertion like (?!31/0?2) (ie, do not match 31/2 or 31/02), you'll still have the problem of accepting 29/02 in non-leap years, and of enforcing a single consistent separator.
The problem is not easy if you want to really validate a date, check this forum thread.
For an example or a better way, in C#, check this link
If you are using another platform/language, let us know
A: Perl 6 version
rx{
    ^
    $<month> = (\d ** 1..2)
    { $<month> <= 12 or fail }
    '/'
    $<day> = (\d ** 1..2)
    {
        given( +$<month> ){
            when 1|3|5|7|8|10|12 {
                $<day> <= 31 or fail
            }
            when 4|6|9|11 {
                $<day> <= 30 or fail
            }
            when 2 {
                $<day> <= 29 or fail
            }
            default { fail }
        }
    }
    '/'
    $<year> = (\d ** 4)
    $
}
After you use this to check the input the values are available in $/ or individually as $<month>, $<day>, $<year>. ( those are just syntax for accessing values in $/ )
No attempt has been made to check the year, or that it doesn't match the 29th of February on non-leap years.
A: If you're going to insist on doing this with a regular expression, I'd recommend something like:
( (0?1|0?3| <...> |10|11|12) / (0?1| <...> |30|31) |
0?2 / (0?1| <...> |28|29) )
/ (19|20)[0-9]{2}
This might make it possible to read and understand.
A: /(([1-9]{1}|0[1-9]|1[0-2])\/(0[1-9]|[1-9]{1}|[12]\d|3[01])\/[12]\d{3})/
This would validate the following:
*
*Single and 2 digit day with range from 1 to 31. Eg, 1, 01, 11, 31.
*Single and 2 digit month with range from 1 to 12. Eg. 1, 01, 12.
*4 digit year. Eg. 2021, 1980.
A: A slightly different approach that may or may not be useful for you.
I'm in php.
The project this relates to will never have a date prior to the 1st of January 2008. So, I take the 'date' input and use strtotime(). If the answer is >= 1199167200 then I have a date that is useful to me. If something that doesn't look like a date is entered, -1 is returned. If null is entered it does return today's date number, so you do need a check for a non-null entry first.
Works for my situation, perhaps yours too?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "78"
} |
Q: How can I retrieve the page title of a webpage using Python? How can I retrieve the page title of a webpage (title html tag) using Python?
A: Use soup.select_one to target the title tag
import requests
from bs4 import BeautifulSoup as bs
r = requests.get('url')
soup = bs(r.content, 'lxml')
print(soup.select_one('title').text)
A: Using regular expressions
import re
match = re.search('<title>(.*?)</title>', raw_html)
title = match.group(1) if match else 'No title'
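If you go the regex route, it is worth compiling it case-insensitively and letting . span newlines, since <TITLE> in uppercase and multi-line titles are both common in the wild; a slightly hardened variant:
import re

match = re.search(r'<title[^>]*>(.*?)</title>', raw_html,
                  re.IGNORECASE | re.DOTALL)
title = match.group(1).strip() if match else 'No title'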
A: I'll always use lxml for such tasks. You could use beautifulsoup as well.
import lxml.html
t = lxml.html.parse(url)
print(t.find(".//title").text)
EDIT based on comment:
from urllib2 import urlopen
from lxml.html import parse
url = "https://www.google.com"
page = urlopen(url)
p = parse(page)
print(p.find(".//title").text)
A: No need to import other libraries. Requests has this functionality built in.
>>> headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:51.0) Gecko/20100101 Firefox/51.0'}
>>> n = requests.get('http://www.imdb.com/title/tt0108778/', headers=headers)
>>> al = n.text
>>> al[al.find('<title>') + 7 : al.find('</title>')]
u'Friends (TV Series 1994\u20132004) - IMDb'
A: soup.title.string actually returns a unicode string.
To convert that into a normal string, you need to do
string=string.encode('ascii','ignore')
A: Here is a fault-tolerant HTMLParser implementation.
You can throw pretty much anything at get_title() without it breaking. If anything unexpected happens, get_title() will return None.
When Parser() downloads the page, it encodes it to ASCII regardless of the charset used in the page, ignoring any errors.
It would be trivial to change to_ascii() to convert the data into UTF-8 or any other encoding. Just add an encoding argument and rename the function to something like to_encoding().
By default HTMLParser() will break on broken html; it will even break on trivial things like mismatched tags. To prevent this behavior I replaced HTMLParser()'s error method with a function that ignores the errors.
#-*-coding:utf8;-*-
#qpy:3
#qpy:console

'''
Extract the title from a web page using
the standard lib.
'''

from html.parser import HTMLParser
from urllib.request import urlopen
import urllib

def error_callback(*_, **__):
    pass

def is_string(data):
    return isinstance(data, str)

def is_bytes(data):
    return isinstance(data, bytes)

def to_ascii(data):
    if is_string(data):
        data = data.encode('ascii', errors='ignore')
    elif is_bytes(data):
        data = data.decode('ascii', errors='ignore')
    else:
        data = str(data).encode('ascii', errors='ignore')
    return data

class Parser(HTMLParser):
    def __init__(self, url):
        self.title = None
        self.rec = False
        HTMLParser.__init__(self)
        # swap in the no-op error handler before feeding, so broken html is ignored
        self.error = error_callback
        try:
            self.feed(to_ascii(urlopen(url).read()))
        except urllib.error.HTTPError:
            return
        except urllib.error.URLError:
            return
        except ValueError:
            return
        self.rec = False

    def handle_starttag(self, tag, attrs):
        if tag == 'title':
            self.rec = True

    def handle_data(self, data):
        if self.rec:
            self.title = data

    def handle_endtag(self, tag):
        if tag == 'title':
            self.rec = False

def get_title(url):
    return Parser(url).title

print(get_title('http://www.google.com'))
A: In Python3, we can call method urlopen from urllib.request and BeautifulSoup from bs4 library to fetch the page title.
from urllib.request import urlopen
from bs4 import BeautifulSoup
html = urlopen("https://www.google.com")
soup = BeautifulSoup(html, 'lxml')
print(soup.title.string)
Here we are using the most efficient parser 'lxml'.
A: The mechanize Browser object has a title() method. So the code from this post can be rewritten as:
from mechanize import Browser
br = Browser()
br.open("http://www.google.com/")
print br.title()
A: This is probably overkill for such a simple task, but if you plan to do more than that, then it's saner to start from these tools (mechanize, BeautifulSoup) because they are much easier to use than the alternatives (urllib to get content and regexen or some other parser to parse html)
Links:
BeautifulSoup
mechanize
#!/usr/bin/env python
#coding:utf-8
from bs4 import BeautifulSoup
from mechanize import Browser
#This retrieves the webpage content
br = Browser()
res = br.open("https://www.google.com/")
data = res.get_data()
#This parses the content
soup = BeautifulSoup(data)
title = soup.find('title')
#This outputs the content :)
print title.renderContents()
A: Using HTMLParser:
from urllib.request import urlopen
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.match = False
        self.title = ''

    def handle_starttag(self, tag, attributes):
        self.match = tag == 'title'

    def handle_data(self, data):
        if self.match:
            self.title = data
            self.match = False

url = "http://example.com/"
html_string = str(urlopen(url).read())

parser = TitleParser()
parser.feed(html_string)
print(parser.title)  # prints: Example Domain
A: Here's a simplified version of @Vinko Vrsalovic's answer:
import urllib2
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup(urllib2.urlopen("https://www.google.com"))
print soup.title.string
NOTE:
*
*soup.title finds the first title element anywhere in the html document
*title.string assumes it has only one child node, and that child node is a string
For beautifulsoup 4.x, use different import:
from bs4 import BeautifulSoup
A: Using lxml...
Getting it from page meta tagged according to the Facebook opengraph protocol:
import lxml.html
html_doc = lxml.html.parse(some_url)
t = html_doc.xpath('//meta[@property="og:title"]/@content')[0]
or using .xpath with lxml:
t = html_doc.xpath(".//title")[0].text
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "88"
} |
Q: How do I conditionally set a column to its default value with MySqlParameter? I have a table in a MySql database that stores user accounts. One of the columns, expires, stores an expiration date but defaults to NULL. I need to be able to remove an expiration date and set it back to the default value.
Currently, all of my CRUD routines are written using MySqlCommand with parameters. Can this be done directly with a MySqlParameter, or do I have to create an alternate command object to handle this eventuality?
A: The problem was DBNull, doing:
command.Parameters.AddWithValue("@parameter", null);
compiles OK.
A: It's not clear what conditions you're talking about. If you want to set the column to its default value (NULL in this case), you can use DBNull.Value:
command.Parameters.AddWithValue("@param", DBNull.Value);
or
command.Parameters.Add("@param", <data type>).Value = DBNull.Value;
A: I usually set values that are intended to be default/blank to null in code, then before executing the query, run the following loop:
foreach (MySqlParameter param in cmd.Parameters)
    if (param.Value == null) param.Value = DBNull.Value;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to upload a file to a WCF Service? I've built a WCF Service to accept a file and write it to disk. The front-end consists of a page with SWFUpload which is handling the upload on the client side. Apparently, SWFUpload posts the data with a Content Type of: multipart/form-data.
I would think this ok but on the Service side I get an error. The error is "ProtocolException" expecting text/xml. I've tried different message encodings in the bindings but nothing seems to work.
How can I get this file uploaded using multipart/form-data?
A: @jdiaz,
@JasonS is right: to upload a file you need to transfer it as a byte stream, using WCF streaming. For an example of how to upload a file via WCF, see the article from http://kjellsj.blogspot.com
A: What you want to use is probably MTOM, if you want it to be standard. Using this, you can have MIME multipart messages.
You then have to read the file as a stream and stuff it into one of the parameters of the request.
A: It might be that your WCF service targets .NET Framework 3.5 and your IIS is running on .NET Framework 4.0. In this case (framework mismatch) you need to modify your service.
A: I believe you are going to have to transfer the file as a byte array to WCF. You will need to handle the post from SWFUpload and convert it to a byte array before sending it to your service.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Find long running query on Informix? How can you find out what the long running queries are on an Informix database server? I have a query that is using up the CPU and want to find out what the query is.
A: If the query is currently running, watch the onstat -g act -r 1 output and look for items with an rstcb that is not 0.
Running threads:
tid tcb rstcb prty status vp-class name
106 c0000000d4860950 0 2 running 107soc soctcppoll
107 c0000000d4881950 0 2 running 108soc soctcppoll
564457 c0000000d7f28250 c0000000d7afcf20 2 running 1cpu CDRD_10
In this example the third row is what is currently running. If you have multiple rows with non-zero rstcb values then watch for a bit looking for the one that is always or almost always there. That is most likely the session that your looking for.
c0000000d7afcf20 is the address that we're interested in for this example.
Use onstat -u | grep c0000000d7afcf20 to find the session
c0000000d7afcf20 Y--P--- 22887 informix - c0000000d5b0abd0 0 5 14060 3811
This gives you the session id which in our example is 22887. Use onstat -g ses 22887
to list info about that session. In my example it's a system session so there's nothing to see in the onstat -g ses output.
A: That's because the suggested answer is for DB2, not Informix.
The sysmaster database (a virtual relational database of Informix shared memory) will probably contain the information you seek. These pages might help you get started:
*
*http://docs.rinet.ru/InforSmes/ch22/ch22.htm
*http://www.informix.com.ua/articles/sysmast/sysmast.htm
A: Okay it took me a bit to work out how to connect to sysmaster. The JDBC connection string is:
jdbc:informix-sqli://dbserver.local:1526/sysmaster:INFORMIXSERVER=mydatabase
Where the port number is the same as when you are connecting to the actual database. That is if your connection string is:
jdbc:informix-sqli://database:1541/crm:INFORMIXSERVER=crmlive
Then the sysmaster connection string is:
jdbc:informix-sqli://database:1541/sysmaster:INFORMIXSERVER=crmlive
Also found this wiki page that contains a number of SQL queries for operating on the sysmaster tables.
A: SELECT ELAPSED_TIME_MIN,SUBSTR(AUTHID,1,10) AS AUTH_ID,
AGENT_ID, APPL_STATUS,SUBSTR(STMT_TEXT,1,20) AS SQL_TEXT
FROM SYSIBMADM.LONG_RUNNING_SQL
WHERE ELAPSED_TIME_MIN > 0
ORDER BY ELAPSED_TIME_MIN DESC
Credit: SQL to View Long Running Queries
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What table/view do you query against to select all the table names in a schema in Oracle? What object do you query against to select all the table names in a schema in Oracle?
A: To see all the tables you have access to
select table_name from all_tables where owner='<SCHEMA>';
To select all tables for the current logged in schema (eg, your tables)
select table_name from user_tables;
A: you're looking for:
select table_name from user_tables;
A: You may use:
select tabname from tabs
to get the name of tables present in schema.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: High availability and scalable platform for Java/C++ on Solaris I have an application that's a mix of Java and C++ on Solaris. The Java aspects of the code run the web UI and establish state on the devices that we're talking to, and the C++ code does the real-time crunching of data coming back from the devices. Shared memory is used to pass device state and context information from the Java code through to the C++ code. The Java code uses a PostgreSQL database to persist its state.
We're running into some pretty severe performance bottlenecks, and right now the only way we can scale is to increase memory and CPU counts. We're stuck on the one physical box due to the shared memory design.
The really big hit here is being taken by the C++ code. The web interface is fairly lightly used to configure the devices; where we're really struggling is to handle the data volumes that the devices deliver once configured.
Every piece of data we get back from the device has an identifier in it which points back to the device context, and we need to look that up. Right now there's a series of shared memory objects that are maintained by the Java/UI code and referred to by the C++ code, and that's the bottleneck. Because of that architecture we cannot move the C++ data handling off to another machine. We need to be able to scale out so that various subsets of devices can be handled by different machines, but then we lose the ability to do that context lookup, and that's the problem I'm trying to resolve: how to offload the real-time data processing to other boxes while still being able to refer to the device context.
I should note we have no control over the protocol used by the devices themselves, and there is no possible chance that situation will change.
We know we need to move away from this to be able to scale out by adding more machines to the cluster, and I'm in the early stages of working out exactly how we'll do this.
Right now I'm looking at Terracotta as a way of scaling out the Java code, but I haven't got as far as working out how to scale out the C++ to match.
As well as scaling for performance we need to consider high availability as well. The application needs to be available pretty much the whole time -- not absolutely 100%, which isn't cost effective, but we need to do a reasonable job of surviving a machine outage.
If you had to undertake the task I've been given, what would you do?
EDIT: Based on the data provided by @john channing, I'm looking at both GigaSpaces and Gemstone. Oracle Coherence and IBM ObjectGrid appear to be Java-only.
A: The first thing I would do is construct a model of the system to map the data flow and try to understand precisely where the bottleneck lies. If you can model your system as a pipeline, then you should be able to use the theory of constraints (most of the literature is about optimising business processes but it applies equally to software) to continuously improve performance and eliminate the bottleneck.
Next I would collect some hard empirical data that accurately characterises the performance of your system. It is something of a cliché that you cannot manage what you cannot measure, but I have seen many people attempt to optimise a software system based on hunches and fail miserably.
Then I would use the Pareto Principle (80/20 rule) to choose the small number of things that will produce the biggest gains and focus only on those.
To scale a Java application horizontally, I have used Oracle Coherence extensively. Although some dismiss it as a very expensive distributed hashtable, the functionality is much richer than that and you can, for example, directly access data in the cache from C++ code .
Other alternatives for horizontally scaling your Java code would be Giga Spaces, IBM Object Grid or Gemstone Gemfire.
If your C++ code is stateless and is used purely for number crunching, you could look at distributing the process using ICE Grid which has bindings for all of the languages you are using.
A: You need to scale sideways and out. Maybe something like a message queue could be the backend between the frontend and the crunching.
A: Andrew, (in addition to modeling as a pipeline etc.), measuring things is important. Have you run a profiler over the code and got metrics of where most of the time is spent?
For the database code, how often does it change? Are you looking at caching at the moment? I assume you have looked at indexes etc. over the data to speed up the DB?
What levels of traffic do you have on the front end? Are you caching web pages? (It isn't too hard to use a JMS-type API to communicate between components. You could then put the web page component on one machine (or more), and the integration code (C++) on another, and many JMS products have native C++ APIs, e.g. ActiveMQ.) It really helps to know how much of the time is in the web tier (JSP?), the C++, and the database ops.
Is the database storing business data, or is it also being used to pass data between Java and C++? You say you are using shared memory, not JNI? What level of multi-threading currently exists in the app? Would you describe the code as synchronous in nature or async?
Is there a physical relationship between the Solaris code and the devices that must be maintained (i.e. do all the devices register with the C++ code, or can that be specified)? I.e. if you were to put a web load balancer on the frontend and just put 2 machines up today, is the relationship of which devices are managed by a box initialized up front or in advance?
What are the HA requirements? I.e. just state info? Can the HA be done just in the web tier by clustering session data?
Is the DB running on another machine?
How big is the DB? Have you optimized your queries, i.e. tried using explicit inner/outer joins, which sometimes helps versus nested subqueries? (Again, look at the SQL stats.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Change Attribute's parameter at runtime I am not sure whether it is possible to change an attribute's parameter at runtime. For example, inside an assembly I have the following class
public class UserInfo
{
[Category("change me!")]
public int Age
{
get;
set;
}
[Category("change me!")]
public string Name
{
get;
set;
}
}
This is a class that is provided by a third-party vendor and I can't change the code. But now I found that the above descriptions are not accurate, and I want to change the "change me" category name to something else when I bind an instance of the above class to a property grid.
May I know how to do this?
A: In case anyone else walks down this avenue, the answer is you can do it, with reflection, except you can't because there's a bug in the framework. Here's how you would do it:
Dim prop As PropertyDescriptor = TypeDescriptor.GetProperties(GetType(UserInfo))("Age")
Dim att As CategoryAttribute = DirectCast(prop.Attributes(GetType(CategoryAttribute)), CategoryAttribute)
Dim cat As FieldInfo = att.GetType.GetField("categoryValue", BindingFlags.NonPublic Or BindingFlags.Instance)
cat.SetValue(att, "A better description")
All well and good, except that the category attribute is changed for all the properties, not just 'Age'.
A: You can subclass most of the common attributes quite easily to provide this extensibility:
using System;
using System.ComponentModel;
using System.Windows.Forms;
class MyCategoryAttribute : CategoryAttribute {
public MyCategoryAttribute(string categoryKey) : base(categoryKey) { }
protected override string GetLocalizedString(string value) {
return "Whad'ya know? " + value;
}
}
class Person {
[MyCategory("Personal"), DisplayName("Date of Birth")]
public DateTime DateOfBirth { get; set; }
}
static class Program {
[STAThread]
static void Main() {
Application.EnableVisualStyles();
Application.Run(new Form { Controls = {
new PropertyGrid { Dock = DockStyle.Fill,
SelectedObject = new Person { DateOfBirth = DateTime.Today}
}}});
}
}
There are more complex options that involve writing custom PropertyDescriptors, exposed via TypeConverter, ICustomTypeDescriptor or TypeDescriptionProvider - but that is usually overkill.
A: Well you learn something new every day, apparently I lied:
What isn't generally realised is that you can change attribute instance values fairly easily at runtime. The reason is, of course, that the instances of the attribute classes that are created are perfectly normal objects and can be used without restriction. For example, we can get the object:
ASCII[] attrs1=(ASCII[])
typeof(MyClass).GetCustomAttributes(typeof(ASCII), false);
…change the value of its public variable and show that it has changed:
attrs1[0].MyData="A New String";
MessageBox.Show(attrs1[0].MyData);
…and finally create another instance and show that its value is unchanged:
ASCII[] attrs3=(ASCII[])
typeof(MyClass).GetCustomAttributes(typeof(ASCII), false);
MessageBox.Show(attrs3[0].MyData);
http://www.vsj.co.uk/articles/display.asp?id=713
A: Did you solve the problem?
Here are possible steps to achieve an acceptable solution.
*
*Try to create a child class, redefine all of the properties that you need to change the [Category] attribute (mark them with new). Example:
public class UserInfo
{
[Category("Must change")]
public string Name { get; set; }
}
public class NewUserInfo : UserInfo
{
public NewUserInfo(UserInfo user)
{
// transfer all the properties from user to current object
}
[Category("Changed")]
public new string Name {
get {return base.Name; }
set { base.Name = value; }
}
public static NewUserInfo GetNewUser(UserInfo user)
{
return new NewUserInfo(user);
}
}
void YourProgram()
{
UserInfo user = new UserInfo();
...
// Bind propertygrid to object
grid.DataObject = NewUserInfo.GetNewUser(user);
...
}
Later Edit: This part of the solution is not workable if you have a large number of properties that you might need to rewrite the attributes. This is where part two comes into place:
*Of course, this won't help if the class is not inheritable, or if you have a lot of objects (and properties). You would need to create a full automatic proxy class that gets your class and creates a dynamic class, applies attributes and of course makes a connection between the two classes.. This is a little more complicated, but also achievable. Just use reflection and you're on the right road.
A: Unfortunately attributes are not meant to change at runtime. You basically have two options:
*
*Recreate a similar type on the fly using System.Reflection.Emit as shown below.
*Ask your vendor to add this functionality. If you are using Xceed.WpfToolkit.Extended you can download the source code from here and easily implement an interface like IResolveCategoryName that would resolve the attribute at runtime. I did a bit more than that, it was pretty easy to add more functionality like limits when editing a numeric value in a DoubleUpDown inside the PropertyGrid, etc.
namespace Xceed.Wpf.Toolkit.PropertyGrid
{
public interface IPropertyDescription
{
double MinimumFor(string propertyName);
double MaximumFor(string propertyName);
double IncrementFor(string propertyName);
int DisplayOrderFor(string propertyName);
string DisplayNameFor(string propertyName);
string DescriptionFor(string propertyName);
bool IsReadOnlyFor(string propertyName);
}
}
For the first option: this, however, lacks proper property binding to reflect the result back to the actual object being edited.
private static void CreatePropertyAttribute(PropertyBuilder propertyBuilder, Type attributeType, Array parameterValues)
{
var parameterTypes = (from object t in parameterValues select t.GetType()).ToArray();
ConstructorInfo propertyAttributeInfo = attributeType.GetConstructor(parameterTypes); // use the attribute type passed in, not a hard-coded one
if (propertyAttributeInfo != null)
{
var customAttributeBuilder = new CustomAttributeBuilder(propertyAttributeInfo,
parameterValues.Cast<object>().ToArray());
propertyBuilder.SetCustomAttribute(customAttributeBuilder);
}
}
private static PropertyBuilder CreateAutomaticProperty(TypeBuilder typeBuilder, PropertyInfo propertyInfo)
{
string propertyName = propertyInfo.Name;
Type propertyType = propertyInfo.PropertyType;
// Generate a private field
FieldBuilder field = typeBuilder.DefineField("_" + propertyName, propertyType, FieldAttributes.Private);
// Generate a public property
PropertyBuilder property = typeBuilder.DefineProperty(propertyName, PropertyAttributes.None, propertyType,
null);
// The property set and property get methods require a special set of attributes:
const MethodAttributes getSetAttr = MethodAttributes.Public | MethodAttributes.HideBySig;
// Define the "get" accessor method for current private field.
MethodBuilder currGetPropMthdBldr = typeBuilder.DefineMethod("get_" + propertyName, getSetAttr, propertyType, Type.EmptyTypes);
// Intermediate Language stuff...
ILGenerator currGetIl = currGetPropMthdBldr.GetILGenerator();
currGetIl.Emit(OpCodes.Ldarg_0);
currGetIl.Emit(OpCodes.Ldfld, field);
currGetIl.Emit(OpCodes.Ret);
// Define the "set" accessor method for current private field.
MethodBuilder currSetPropMthdBldr = typeBuilder.DefineMethod("set_" + propertyName, getSetAttr, null, new[] { propertyType });
// Again some Intermediate Language stuff...
ILGenerator currSetIl = currSetPropMthdBldr.GetILGenerator();
currSetIl.Emit(OpCodes.Ldarg_0);
currSetIl.Emit(OpCodes.Ldarg_1);
currSetIl.Emit(OpCodes.Stfld, field);
currSetIl.Emit(OpCodes.Ret);
// Last, we must map the two methods created above to our PropertyBuilder to
// their corresponding behaviors, "get" and "set" respectively.
property.SetGetMethod(currGetPropMthdBldr);
property.SetSetMethod(currSetPropMthdBldr);
return property;
}
public static object EditingObject(object obj)
{
// Create the typeBuilder
AssemblyName assembly = new AssemblyName("EditingWrapper");
AppDomain appDomain = System.Threading.Thread.GetDomain();
AssemblyBuilder assemblyBuilder = appDomain.DefineDynamicAssembly(assembly, AssemblyBuilderAccess.Run);
ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule(assembly.Name);
// Create the class
TypeBuilder typeBuilder = moduleBuilder.DefineType("EditingWrapper",
TypeAttributes.Public | TypeAttributes.AutoClass | TypeAttributes.AnsiClass |
TypeAttributes.BeforeFieldInit, typeof(System.Object));
Type objType = obj.GetType();
foreach (var propertyInfo in objType.GetProperties())
{
string propertyName = propertyInfo.Name;
Type propertyType = propertyInfo.PropertyType;
// Create an automatic property
PropertyBuilder propertyBuilder = CreateAutomaticProperty(typeBuilder, propertyInfo);
// Set Range attribute
CreatePropertyAttribute(propertyBuilder, typeof(CategoryAttribute), new[] { "My new category value" });
}
// Generate our type
Type generetedType = typeBuilder.CreateType();
// Now we have our type. Let's create an instance from it:
object generetedObject = Activator.CreateInstance(generetedType);
return generetedObject;
}
}
A: Given that the PropertyGrid's selected item is "Age":
SetCategoryLabelViaReflection(MyPropertyGrid.SelectedGridItem.Parent,
MyPropertyGrid.SelectedGridItem.Parent.Label, "New Category Label");
Where SetCategoryLabelViaReflection() is defined as follows:
private void SetCategoryLabelViaReflection(GridItem category,
string oldCategoryName,
string newCategoryName)
{
try
{
Type t = category.GetType();
FieldInfo f = t.GetField("name",
BindingFlags.NonPublic | BindingFlags.Instance);
if (f.GetValue(category).Equals(oldCategoryName))
{
f.SetValue(category, newCategoryName);
}
}
catch (Exception ex)
{
System.Diagnostics.Trace.Write("Failed Renaming Category: " + ex.ToString());
}
}
As far as programmatically setting the selected item whose parent category you wish to change, there are a number of simple solutions. Google "Set Focus to a specific PropertyGrid property".
A: Here's a "cheaty" way to do it:
If you have a fixed number of constant potential values for the attribute parameter, you can define a separate property for each potential value of the parameter (and give each property that slightly different attribute), then switch which property you reference dynamically.
In VB.NET, it might look like this:
Property Time As Date
<Display(Name:="Month")>
ReadOnly Property TimeMonthly As Date
Get
Return Time
End Get
End Property
<Display(Name:="Quarter")>
ReadOnly Property TimeQuarterly As Date
Get
Return Time
End Get
End Property
<Display(Name:="Year")>
ReadOnly Property TimeYearly As Date
Get
Return Time
End Get
End Property
A: I really don't think so, unless there's some funky reflection that can pull it off. The property decorations are set at compile time and, to my knowledge, are fixed.
A: In the mean time I've come to a partial solution, derived from the following articles:
*
*ICustomTypeDescriptor, Part 1
*ICustomTypeDescriptor, Part 2
*Add (Remove) Items to (from) PropertyGrid at Runtime
Basically you would create a generic class CustomTypeDescriptorWithResources<T> that would get the properties through reflection and load Description and Category from a file (I suppose you need to display localized text, so you could use a resources (.resx) file)
A: You can change Attribute values at runtime at Class level (not object):
var attr = TypeDescriptor.GetProperties(typeof(UserContact))["UserName"].Attributes[typeof(ReadOnlyAttribute)] as ReadOnlyAttribute;
attr.GetType().GetField("isReadOnly", BindingFlags.NonPublic | BindingFlags.Instance).SetValue(attr, username_readonly);
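The same trick can, in principle, be applied to other attribute types. A hedged sketch for CategoryAttribute (the private field name "categoryValue" is an implementation detail of the .NET Framework and may differ between versions):
// Sketch: rename the category of UserContact.UserName at class level.
// Assumes the UserContact type from the snippet above; "categoryValue" is
// an internal field name and may change between framework versions.
var catAttr = TypeDescriptor.GetProperties(typeof(UserContact))["UserName"]
    .Attributes[typeof(CategoryAttribute)] as CategoryAttribute;
catAttr.GetType()
    .GetField("categoryValue", BindingFlags.NonPublic | BindingFlags.Instance)
    .SetValue(catAttr, "My new category value");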
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "71"
} |
Q: Performance gain in compiling java to native code? Is there any performance to be gained these days from compiling java to native code, or do modern hotspot compilers end up doing this over time anyway?
A: There was a similar discussion here recently, for the question What are advantages of bytecode over native code?. You can find interesting answers in that thread.
A: Some more anecdotal evidence. I've worked on a few performance critical real-time trading financial applications. I agree with Frank, nearly every time your problem is not the lack of being compiled, it is your algorithm or data structure. Modern hot-spot compilers are very good with the right code, for example the CERN Colt library is within 90% of compiled, optimised Fortran for numerical work.
If you are worried about speed I'd really recommend a good profiler and get evidence as to where your bottlenecks are - I use YourKit and have been very pleased.
We have only resorted to native compiled code for speed in one instance in the last few years, and that was so we could use CUDA and get some serious GPU performance.
A: Your question is a little broad; the answer varies a lot depending on:
*
*If you are using Just In Time compilation (JIT) or not
*Whether your process is executed for a long time or not
All recent JVMs use JIT, but on old JVMs Java code was several times slower than native code.
If you have a server that runs for a long period of time, or a batch that executes the same code again and again, the difference ends up being very low.
We wrote the same batch both in C++ and in Java and ran it with different datasets; the results differed by about 3 seconds, with datasets taking from 5 minutes to several hours to process.
But be careful: there are special cases where there will be an important difference, for example batches that need a lot of memory.
A: Memory performance or CPU performance? Or are they the same these days?
My only evidence is anecdotal and on a different platform: after porting a bunch of CPU-hungry apps to C# (.NET 2.0), I did not notice substantial loss in performance (I do not consider 10% substantial). Well written code seems to perform well on a variety of architectures.
Most apps spend/waste time with:
*
*IO operations that will not benefit from static (compile-time) analysis.
*Bad Algorithms that will not benefit from static analysis.
*Bad Memory layouts in critical CPU inner loops. While it is technically possible that compilers help us here, I have yet to see a real compiler do anything interesting.
So based upon my experience, unless you are writing a video codec, there is no benefit to compiling Java apps vs. just relying upon the hotspot compilers.
A: I tried Hello World with six different implementations just to check the overhead, and the difference was staggering. Java was off the charts while the compiled languages all did equally well. I could provide all the evidence (in a reproducible form) if needed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Strategy for single sign on with legacy applications I'm wondering what strategies people use for reduced sign on with legacy applications and how effective they have found them?
We have an ASP.Net based intranet and own a lot of the legacy applications, but not all. We also have BizTalk and are considering the use of it's SSO engine too.
A: A good compromise between effort/rework and the convenience of single sign on is to continue to maintain a list of users, privileges, roles etc in the legacy app. Make the changes necessary to automatically log the user into your application based on their user account (usually their Windows or network account).
I'm currently running a couple of applications that use this method of sign on, and it makes them seem more integrated even though they aren't.
Another advantage we've found is that it stops people from sharing passwords to legacy applications. They're much less likely to hand out an admin password that also gives others access to their email or payroll details!
A: Multiple identity storage per application?
Might not be a single sign-on solution, but have you tried looking into something more targeted, like MS Identity Lifecycle Manager? It will simplify identity synchronization between applications, and it's pluggable as well, meaning you can hook up your own code to do the synchronization between different systems. So if you change the identity info (i.e. login info) in the ILM portal, you can propagate it to the different systems. The same goes for provisioning and deprovisioning identities. Single point of entry.
I suppose you could use BizTalk for a similar purpose.
As for a truly single sign-on solution, where you just log in once and don't have to log in again to different applications: I've yet to find one.
I suppose if your legacy apps have a pluggable identity provider module, it's doable, meaning you can customize the login system to hook up to your single identity source of truth, whatever that may be.
A: We did two things with legacy accounts. (legacy web based apps)
We first mapped the legacy accounts to their system logon accounts (running in a Windows Active Directory).
A facade logon screen then was applied to over the top of the legacy apps (web based), this would request the AD logon, which would then reverse map to the legacy applications logon account and assign the appropriate rights to the user, using the legacy systems security model. The user received a token for the session which kept the doors open for them.
This gave us the benefit of not having to retrofit the legacy apps (for example, app x might only allow numbers for its IDs while the user's Windows logon is alphanumeric), and it also achieved a pseudo single sign-on from the client's perspective.
The other option that did make sense was, at the new logon screen, to check multiple repositories of security, so even if a user didn't decide to use their Windows logon they could still log on with the legacy account name. Obviously this has some side effects, but it can also help ease the transition pain end users sometimes feel when moving between systems.
There are also programs like the Citrix XenApp Single Signon which take a totally different approach to the issue.
A: In addition to Jimmy's points about using ILM, this particular system does allow integration with the AD PCNS (Password Change Notification Service) service, that can be used with ILM (ILM "sees" the password change event and can publish it to other consuming applications / services) to at least ensure that as a user's password changes in one system, it gets reflected into others.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Access to restricted URI denied code: 1012 How do you get around this Ajax cross site scripting problem on FireFox 3?
A: To update the answer (I guess, mostly for my benefit when I come looking for this answer later on), if you are loading XML or something else, you can always ask the user whether he will allow us to read from another site with this code:
try {
if (netscape.security.PrivilegeManager.enablePrivilege)
netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
} catch (e) {
alert("Sorry, browser security settings won't let this program run.");
return;
}
(from the RESTful Web Services book) But this only works in Firefox, and only when the HTML file is loaded from a local file. So, not that useful.
A: If you're using jQuery it has a callback function to overcome this:
http://docs.jquery.com/Ajax/jQuery.ajax#options
As of jQuery 1.2, you can load JSON
data located on another domain if you
specify a JSONP callback, which can be
done like so: "myurl?callback=?".
jQuery automatically replaces the ?
with the correct method name to call,
calling your specified callback. Or,
if you set the dataType to "jsonp" a
callback will be automatically added
to your Ajax request.
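For example, a minimal sketch (the URL is hypothetical, and the remote service must actually support JSONP):
$.getJSON("http://example.com/data.json?callback=?", function(data) {
    // runs once the cross-domain response arrives
    alert(data.someField);
});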
Alternatively you could make your ajax request to a server-side script which does the cross-domain call for you, then passes the data back to your script
A: One more solution: if all you need is the headers, you can specify "HEAD" as the method and it won't trigger the security issue. For instance, if you just want to know if the web page exists.
var client = new XMLHttpRequest();
client.open("HEAD", my_url, false);
client.send(null);
if(client.readyState != 4 || client.status != 200) //if we failed
alert("can't open web page");
A: Some more details would be nice: which AJAX library are you using, what would you like to achieve, and how do you do it?
For example, it could be a cross-domain Ajax request, which is not allowed. In this case, use JSONP.
A: I came across this problem recently, and it happened while I was AJAX-loading a local request, not a cross-site scripting problem. Also, Jimmy himself seems to have had the same problem. This appears to be a Firefox security issue; this article describes the cause of, and the solution to, the "access to restricted URI denied" code: "1012" problem.
Sorry, got that error using JQuery
$.ajax on FireFox 3. Tried jsonp
suggestion but I think that will only
work with something that will serve up
json. I'm trying to create a sample
local html file based mashup that will
pull data from Yahoo!Finance, but they
are serving .csv, so I think I'm SOL.
– Jimmy Chandra (Sep 9 at 17:20)
I hope you'll find it useful.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: How to keep Stored Procedures and other scripts in SVN/Other repository? Can anyone provide some real examples as to how best to keep script files for views, stored procedures and functions in a SVN (or other) repository.
Obviously one solution is to have the script files for all the different components in a directory or more somewhere and simply using TortoiseSVN or the like to keep them in SVN, Then whenever a change is to be made I load the script up in Management Studio etc. I don't really want this.
What I'd really prefer is some kind of batch script that I can run periodically (nightly?) that would export all the stored procedures / views etc that had changed in a given timeframe and then commit them to SVN.
Ideas?
A: I find it best to treat Stored Procedures just like any other compilable code: Code lives in the repository, you check it out to make changes and load it in your development tool to compile or deploy the code.
A: You can create a batch file and schedule it (a rough sketch follows the list):
*
*delete the contents of your scripts directory
*using something like ExportSQLScript to export all objects to script/scripts
*svn commit
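A minimal Windows batch sketch of those steps (scriptdb.exe stands in for whatever export tool you use, such as ExportSQLScript; all paths and switches here are made up):
rem nightly-export.cmd - export all DB objects and commit them to SVN
del /q C:\db-scripts\*.sql
scriptdb.exe /server:MYSERVER /db:MyDatabase /out:C:\db-scripts
svn commit C:\db-scripts -m "Nightly automated script export"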
Please note that although you'll have the objects under source control, you'll not have the data or its progression (is that a renamed field, or one new field and one deleted?).
This approach is fine for maintaining change history. But, of course, you should never be automatically committing to the "production build" (unless you like broken builds).
Although you didn't ask for it: This approach also won't produce a set of scripts that will upgrade a current DB. You'll only have initial creation scripts. Recording data progression and creation upgrade scripts is beyond basic source control systems.
A: Sounds like you're not wanting to use Revision Control properly, to me.
Obviously one solution is to have the
script files for all the different
components in a directory or more
somewhere and simply using TortoiseSVN
or the like to keep them in SVN
This is what should be done. You would have your local copy you are working on (Developing new, Tweaking old, etc) and as single components/procedures/etc get finished, you would commit them individually until you have to start the process over.
Committing half-done code just because it's been 'X' time since it was last committed is sloppy and guaranteed to cause anyone else using the repository grief.
A: I'd recommend Redgate SQL Compare for this - it allows you to compare database versions and generate change scripts - it's also fairly easily scriptable.
A: Based on your expanded question, you really want to use DDL triggers. Check out this article that details how to create a changelog system for your database.
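A minimal T-SQL sketch of the idea (the table and trigger names are made up; see the linked article for a fuller treatment):
-- Log every stored procedure change in the current database.
CREATE TABLE dbo.DdlChangeLog (
    LogId     int IDENTITY(1,1) PRIMARY KEY,
    EventData xml NOT NULL,
    ChangedOn datetime NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_LogProcChanges ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE
AS
    INSERT INTO dbo.DdlChangeLog (EventData) VALUES (EVENTDATA());
GO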
A: Not sure on your price range, however DB Ghost could be an option for you.
I don't work for this company (or own the product) but in my researching of the same issue, this product looked quite promising.
A: I should've been a little more descriptive. The database in question is for an internal ERP system and thus we don't have many versions of our database, just Production/Testing/Development. When we've done a change request, some new fancy feature or something, we simply execute a script or series of scripts to update the procedures in question on the Testing database, if that is all good, then we do the same to Production.
So I'm not really after a full schema script per se, just something that can keep track of the various edits to the stored procedures over time. For example, PROCESS_INVOICE does stuff. It gets updated in some minor way in March. Some time later, in say May, it is discovered that in a rare case customers get double invoiced (or some other crazy corner case). I'd like to be able to see what has happened over time to this procedure. Currently, the way the development environment is set up here, I don't have that, which I'm trying to change.
A: I can recommend DBPro which is part of Visual Studio Team Edition. Have been using it for a few months for storing all parts of the database in Team Foundation Server as well as for deployment and database compares, etc.
Of course, as someone else mentioned, it does depend on your environment and price range.
A: I wrote a utility for dumping all of the relevant parts of my db into a directory structure that I use SVN on. I never got around to trying to incorporate it into the Manager but, if you're interested, it's here: http://www.reluctantdba.com/dbas-and-programmers/sqltools/svnforsql2005.aspx
It's free and, since I regularly run it, you know any bugs get fixed quickly.
A: You can always try integrating SourceSafe with SQL Server. Here's a quick start: link. To work with it you've got to have Management Studio Developer Edition.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Pass Silverlight type to Microsoft AJAX and pass parameter validation I'm working on a Silverlight application where I want to take advantage of the Microsoft ASP.NET AJAX Client library. I'm calling the library using the HTML Bridge that is part of Silverlight 2. Silverlight got great support for passing types between JavaScript and Managed Code, but now I've bumped against a problem.
Microsoft ASP.NET AJAX Client Libraries includes a "type system", and one of the things the framework does is validating that the parameters is of correct type. The specific function I'm calling is the Sys.Application.addHistoryPoint, and the validation code looks like this:
var e = Function.validateParams(arguments, [
{name: "state", type: Object},
{name: "title", type: String, mayBeNull: true, optional: true}
]);
I've tried passing all kinds of CLR types as the state parameter (C# structs, [ScriptableTypes], Dictionary types etc. And every time I get the error:
"Sys.ArgumentTypeException: Object of type 'Function' cannot be converted to type 'Object'
This error is obviously coming from the parameter validation... But WHY does ASP.NET AJAX think my types are Functions? Does anyone understand the type validation in MS AJAX?
I know I can do workarounds like calling HtmlPage.Window.Eval("...") and passing my JS integration as strings, but I don't want to do that. I want to pass a real .NET type as the state parameter.
A: I found a pretty good overview of this here, but even that overview seemed to cover every scenario except the one you mention. I'm wondering if this can't be done because javascript objects really are functions (more or less).
What if you wrote a wrapper function that could create the state object using a string?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Checking network status and controlling PPP in a program So I'm running PPP under linux with a cellular modem. The program I'm writing needs to know if the link is active before sending any data.
What are my options to check
*
*if the link is available
*if it routes to a server I control (it doesn't go to the internet as I said earlier)
Also, what is the best way to restart it - I'd like to have program control over when it starts and stops, so I assume an init.d isn't appropriate. Using system() doesn't seem to give a PID, are there other options besides fork() and the gaggle of exec??() calls?
C on Linux on ARM (custom distribution using buildroot).
A: You can use the ip-up script functionality of pppd to have it execute a program when the IP interface is up and ready. Details are in the pppd(8) man page - search for "ip-up".
To restart pppd, use the linkname parameter to set a known pidfile name (/var/run/ppp-name.pid). Also use the persist option. When you want to restart pppd, send the SIGHUP signal. Again, this is described in the man page.
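A hedged C sketch of the restart half, assuming pppd was started with "linkname cell persist" (so the pidfile is /var/run/ppp-cell.pid):
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

/* Ask pppd to drop and (thanks to "persist") re-establish the link. */
int restart_ppp(void)
{
    long pid;
    FILE *f = fopen("/var/run/ppp-cell.pid", "r");
    if (!f)
        return -1;                      /* pidfile missing: pppd not running */
    if (fscanf(f, "%ld", &pid) != 1) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return kill((pid_t)pid, SIGHUP);
}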
A: You could parse /proc/net/route.
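For instance, a rough sketch that checks whether any route goes through the ppp interface (the interface name ppp0 is an assumption):
#include <stdio.h>
#include <string.h>

/* Return 1 if /proc/net/route lists a route via ppp0, else 0. */
int have_ppp_route(void)
{
    char line[256];
    int found = 0;
    FILE *f = fopen("/proc/net/route", "r");
    if (!f)
        return 0;
    fgets(line, sizeof line, f);               /* skip the header line */
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "ppp0", 4) == 0) {   /* first column is the iface */
            found = 1;
            break;
        }
    }
    fclose(f);
    return found;
}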
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What is the quickest way to a very simple blog? I am about to start a new project and would like to document its development in a very simple blog.
My requirements are:
*
*self-hosted on my Gentoo-based LAMP stack (that seems to rule out blogger)
*Integration in a django based website (as in www.myproject.com/about, www.myproject.com/blog etc rather than www.myproject.com and a totally different site at blog.myproject.com)
*very little or no learning curve that's specific to the blog engine (I don't want to learn an API just to blog, but having to get deeper into Django to be able to roll my own would be OK). According to the answers so far, there is a chance that this excludes Wordpress
Should I
a) install blog engine X (please specify X)
b) use django to hand-roll a way to post new entries and a page on my website to display the posts in descending chronological order
A: I've tried WordPress recently and am very disappointed. As long as you don't want to customize anything, all is well. But imagine you want to install a plugin to handle Markdown editing. There the trouble begins. The plugin architecture of WordPress is seriously screwed up. In the case of Markdown, this means that no good solution exists. The existing plugin is a series of (quite well-documented) hacks that fall apart at a hard stare.
I never intended to write the least bit of code for WordPress but the last few days, I've been knee-deep in PHP the whole time, hacking plugins as well as the WordPress core in order to make it work for my special scenario (which really isn't all that special, I'm just a perfectionist). Which is a pity, because the documentation of WordPress is more than just patchy. I don't use it anymore, I grep for functions and read the source. All in all, one of the less enjoyable OpenSource projects.
A: You can spend hours if not days customizing Wordpress with plugins, themes, etc...
I would go with a 0 installation solution, such as blogger (https://www.blogger.com/start)
You can even use your own domain name with it if you need to.
EDIT: Plus, if you ever get slashdotted, digged or redditted, google can handle the traffic, your server probably can't.
A: For me, Wordpress is still the quickest & simplest to setup and get going. It can be extended to do pretty much anything or you can keep it real simple. Runs on PHP, but unless you want to write plugins for it, you never need to write code
A: Install Wordpress. It is the most common engine for a reason. It's PHP but will play just fine in your environment.
A: If you're the perfectionist kind, roll your own.
*
*It isn't that hard
*You learn something useful
*You'll get exactly what you want and need
Be warned that you may run into a quagmire fighting comment spam, fixing security holes, etc. But it'll probably be a fun project.
If you are the practical type and ready to face some integration pain, use an existing engine like WadcomBlog (Python) or PyBlosxom, or something completely different like MovableType or WordPress.
Here's a simple Django blog example to get you started.
For some pros and cons of rolling your own blog engine, see this article by Phil Haack.
Jeff Croft apparently rolled his own as well.
A: Have a look at Blosxom. It's file-based, so no crufty database. The basic idea has been ported to different languages, pyblosxom is in Python.
A: I use PyBlosxom for my personal blog, and I think it is pretty useful if you need something minimalistic. The deployment is simple, as you need only the python runtime and cgi. You might want to have some basic knowledge of python at least if you are going to use it, though.
A: I wrote the engine for my personal blog in maybe 6 hours during one weekend, with comments, labels, simplified markup, sitemap, feeds and so on. It was great fun and I learned a lot of Django.
If you decide to go this way, look at generic views; this Django feature will save you a lot of work (and teach you a few useful tricks).
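To give a feel for the scale involved, a minimal sketch of the model such a hand-rolled engine revolves around (names are made up, and the view/template wiring is left out):
# models.py - the core of a hand-rolled blog is little more than this
from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    published = models.DateTimeField(auto_now_add=True)

    class Meta:
        ordering = ['-published']   # newest entries first

    def __unicode__(self):
        return self.title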
A: I Haven't tried it myself yet (other than the demo), but I've bookmarked Chyrp so that if I ever need to set up a quick & simple blog (kind of like you're describing) I could try this. So check it out, might be a good option for you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Find all drive letters in Java For a project I'm working on. I need to look for an executable on the filesystem. For UNIX derivatives, I assume the user has the file in the mighty $PATH variable, but there is no such thing on Windows.
I can safely assume the file is at most 2 levels deep into the filesystem, but I don't know on what drive it will be. I have to try all drives, but I can't figure out how to list all available drives (which have a letter assigned to it).
Any help?
EDIT: I know there is a %PATH% variable, but it is not as integrated as in UNIX systems. For instance, the application I'm looking for is OpenOffice. Such software would not be in %PATH%, typically.
A: http://docs.oracle.com/javase/7/docs/api/java/io/File.html#listRoots()
File[] roots = File.listRoots();
for(int i = 0; i < roots.length ; i++)
System.out.println("Root["+i+"]:" + roots[i]);
google: list drives java, first hit:-)
A: Looking "everywhere" can be very messy.
Look at a CD-rom drive, and it spins up. That can be very noisy.
Look at a network drive, and it may be very slow. Maybe the server is down, and you may need to wait for minutes until it times out.
Maybe (for Windows-machines) you should just look in the start-menu. If nothing there points at OOo, it's probably not installed. If it is, the user is probably an advanced user, that will have no problems pointing out the location manually.
A: Windows does indeed have a PATH environment variable. It has a different syntax from the Unix one because it uses semicolon (;) as a separator instead of colon (:) and you have to watch for quoted strings that might contain spaces. But, it's there.
If this other program's installer adds its own directory to the PATH environment variable, then you could rely on that. However, as you mention, Windows installers typically do not need to add the application path to the PATH because they install a start menu shortcut or something else instead.
For drive letters in Java, one approach would be to try them all, there are only going to be at most 24 (C through Z) that are of any use. Or, you could shell out and run "net use" and parse the results, though that is a bit messier.
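A rough sketch of that brute-force approach (note the caveat from another answer: merely touching a drive may spin up CD-ROMs or hit slow network shares):
// Probe drive letters C: through Z: for existing roots.
for (char letter = 'C'; letter <= 'Z'; letter++) {
    java.io.File root = new java.io.File(letter + ":\\");
    if (root.exists()) {
        System.out.println("Found drive: " + root);
    }
}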
A: Of course there is a PATH environment variable in Windows.
%PATH%
This variable contains a semicolon-delimited list of directories in which the command interpreter will search for executable files. Equivalent to the UNIX $PATH variable.
A: Use JNI.
This is perfect for c++ code.
Not only can you list all the drives, you can also get the corresponding drive type (removable, local disk, cd-rom, dvd-rom... etc).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: How do I implement license management for on-site installation of webapps (preferably cross-platform)? I have a web application running on a Gentoo-based LAMP stack. My customers buy the software as a service and I host everything. However, there is some demand for on-site deployment inside the clients' own networks.
Currently, because I host the system, there is no built-in license management in the app. I bill based on user accounts and data capacity (it's a processing and analysis app for metering data) and I just set up whatever the client pays for and the client can't setup those things himself. Even without on-site installation, that should be changed for better scalability anyway.
I am looking for a license managment framework and/or typical approaches that you have implemented yourselves or have seen to work well elsewhere. My requirements are:
*
*"safe enough" rather than "military grade"
*very much non-obtrusive
*prevent the owner of a license from running the system in multiple plants when he has only licensed one
*make the number of user accounts and the data capacity both reasonably tamper-proof and easy to up- / downgrade
*work without an Internet connection (having a completely self-contained system would be the main point of opting for the on-site solution), though it might be acceptable if there has to be a temporary connection during installation
For some of the on-site scenarios, there would be a requirement for some particular OS, typically some version of Windows Server, but various Linux distros are getting more popular as well, especially in the public sector.
From a user's point of view, I am quite satisfied with the license management in FogBugz, it seems Joel Spolsky is satisfied with it from a vendor's point of view, and it is cross-platform, so it would make a great reference of what I'm aiming at.
A: Don't. Every hour that you spend writing a license key system is an hour that you are not spending fixing bugs or adding features. By writing a license management system, you are spending resources in order to reduce the value of your product!
Copyright your code, have a lawyer and be ready to prosecute anyone who violates your copyright, and call it a day.
A: I've seen one company that deploys their on-site web system using a pre-built Linux VM image. Clients run it under VMware, and you don't need to worry about what the actual OS on the client systems is. Plus they don't provide logins for the actual Linux system running the web server, so the end users can't actually get in and modify anything.
I'm not sure how they handle the licensing limits, but they are in there. It might be a case of providing an updated VM image whenever a client requests more user licenses.
Although this doesn't directly address the need to prevent multiple instances running. I guess you could always do some sort of activation key system based on the system configuration.
A: I am sure you can get at the serial number of the CPU on most OSes; you could use that. After all, if they transplant the CPU, it's safe to assume that the old machine is no longer in use.
Secondly, a web service could be used to find out the current data capacity and what the client has paid for.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How can you handle an IN sub-query with LINQ to SQL? I'm a bit stuck on this. Basically I want to do something like the following SQL query in LINQ to SQL:
SELECT f.*
FROM Foo f
WHERE f.FooId IN (
SELECT fb.FooId
FROM FooBar fb
WHERE fb.BarId = 1000
)
Any help would be gratefully received.
A: General way to implement IN in LINQ to SQL
var q = from t1 in table1
let t2s = from t2 in table2
where <Conditions for table2>
select t2.KeyField
where t2s.Contains(t1.KeyField)
select t1;
General way to implement EXISTS in LINQ to SQL
var q = from t1 in table1
let t2s = from t2 in table2
where <Conditions for table2>
select t2.KeyField
where t2s.Any(k => k == t1.KeyField)
select t1;
A: Have a look at this article. Basically, if you want to get the equivalent of IN, you need to construct an inner query first, and then use the Contains() method. Here's my attempt at translating:
var innerQuery = from fb in FoorBar where fb.BarId = 1000 select fb.FooId;
var result = from f in Foo where innerQuery.Contains(f.FooId) select f;
A: from f in Foo
where (
from fb in FooBar
where fb.BarID == 1000
select fb.FooID
).Contains(f.FooID)
select f;
A: Try using two separate steps:
// create a Dictionary / Set / Collection fids first
var fids = (from fb in FooBar
where fb.BarID == 1000
select new { fooID = fb.FooID, barID = fb.BarID })
.ToDictionary(x => x.fooID, x => x.barID);
from f in Foo
where fids.ContainsKey(f.FooId)
select f
A: Try this
var fooIds = from fb in FooBar where fb.BarId == 1000 select fb.FooId;
var ff = from f in Foo where fooIds.Contains(f.FooId) select f;
A: var foos = Foo.Where(
    f => FooBar.Where(fb => fb.BarId == 1000).Select(fb => fb.FooId).Contains(f.FooId));
A: from f in foo
where f.FooID == model.FooBar.SingleOrDefault(fBar => fBar.barID == 1000).FooID
select new
{
f.Columns
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
} |
Q: Implementing user defined display order UI I have a list of products that are being displayed in a particular order. The store admin can reassign the display order, moving the "hot" items to the top of the list. What's the best way of implementing the admin functionality UI [asp.net C#]? The Products table has a [displayOrder(int)] field which determines the display order.
I'm looking for something intuitive and simple.
Thank you.
p.s. I guess I didn't make myself clear: I'm looking for UI advice more than anything.
SOLUTION: ReorderList worked out great, this article helped too. Also, make sure OldValuesParameterFormatString="{0}" in your DataSource.
A: You need a Rank field for each product (which could also be the DisplayOrder field).
When the administrator ups or downs a product, update the rank value.
When you need to list the products, do a select query which sorts in DESC order of rank.
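A hedged T-SQL sketch of the "move up" operation (the schema and parameter names are made up):
-- Swap Rank between the item being promoted and its neighbour above it.
UPDATE Products
SET Rank = CASE ProductId
               WHEN @ItemId      THEN @NeighbourRank
               WHEN @NeighbourId THEN @ItemRank
           END
WHERE ProductId IN (@ItemId, @NeighbourId);

-- Listing then becomes:
SELECT * FROM Products ORDER BY Rank DESC;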
A: using AJAX you could implement a Reoder list control you can find more information here http://www.asp.net/AJAX/AjaxControlToolkit/Samples/ReorderList/ReorderList.aspx
Mauro
http://www.brantas.co.uk
A: I'm implementing that using an 'Order' column/property where people input numbers like 10, 20, 30 (I have ascending ordering). I have a list of items with text boxes to input the order, and an 'apply order' button that saves the new values to the database and reorders/reloads the items on the page with the new ordering applied.
I don't forbid inputting the same value for two items; I sort them by name as a second sort parameter, or leave it to the database to sort at will if it doesn't matter much. I believe it's understandable enough put that way: it looks like an ordered list, which everybody understands easily.
A: If you can modify the database, add an IsHot column. Then sort by IsHot and DisplayOrder (in that order). This will keep the products in the correct order and the "hot" products will bubble up to the top.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do you start Knowledge Transfer? Do you use a formal event to get people talking in your IT department? Like a monthly meetup in a social place, an internal wiki/chat space, or just a regular "information market" with some presentations about technology or projects made by your staff for your staff? Do you invite sales people to participate or is it a closed event for programmers only?
How do you get people to participate in these events? Do you allow them to spent work time on knowledge transfer? Or do you understand it as an integral part of the work time?
I wonder how to monitor the progress of knowledge transfer itself. How do you spot critical one-person points of failure in your projects? There are several methods to avoid them, like staff swapping or the "fifo" approach to bug fixing.
Note: Ok, this is a very very noisy question and I hope to fix it after a few comments. Sorry for the mixup.
edit: My personal experience is that there is a very high barrier for people to start contributing. It looks like they won't put in the (minimal) extra time to edit our wiki, or spend the hour in the afternoon to talk about technology topics with the developing staff. It's like people don't like our wiki, our document management system or the meeting. Maybe it's because it's all free-to-use and not forced by the management. But I don't like to force people into it - but is it the right way?
One example: Our wiki holds pages about projects, telling who worked on them so there's a first contact in case of questions. But nobody besides a colleague and me is creating these pages...
A: I think all of the above. But you're forgetting the most important way.
The most efficient way to transfer knowledge is to have people work together. You might think about doing 1 on 1 code reviews or even pair programming and make knowledge transfer an intergral part of the work.
A: Knowledge Transfer and Knowledge Management have one drawback. They seem to cost an awful lot: if everybody knows what I know, am I still needed? And all the time I spend bringing others up to speed, what do I gain from it?
The best way to go about this is to be an example. Share your knowledge; in a wiki, blog about it, talk about it, make it easily accessible, and talk about the benefits you have from that: less people come to interupt and ask you stuff, as they can get an answer easily without even getting up. And show them that you are still there.
This, with all the other things mentioned, will actually win out. One more thing: one of my employers kept on paying me 1/3 of my salary for another year after I left (on my own initiative), just to keep my knowledge base up and running. Did he have to? No, it was his property anyway. But it motivated people still working for him to share their knowledge.
A: I think it depends on the knowledge you are trying to transfer. I've found the following:
Technical Knowledge: "How to guide" with screenshots and a short demo - similar to the way you will see new features at a conference. The added benefit of this is what you have got is documented for when you leave the company.
Problem solving: informal discussions, short internal projects, lessons learned and an internal FAQ system which EVERYONE is responsible for updating.
Soft Skills (people skills): social meetings/outings/informal events etc.
Measuring that is going to be difficult though, as no matter how you transfer your knowledge there will always be varying degrees of uptake; after all, just because I do something one way doesn't mean it's correct. Another developer/designer/manager may have a different way of doing the same thing with the same end result.
Mauro
A: At my workplace we use a wiki. The workplace is small enough (~20 people) so that you can always ask the person who was most involved in a particular project, however it is expected that you have searched on the wiki before you ask "the expert". If you cannot find your answer in the wiki, then you should add it after you have discussed it with your co-worker.
A: One word: Lunch
A: You should encourage people towards the things that you want them to do. You should "feed the animal". Look at Stack Overflow; what do you think about badges? Why do you think these wonderful things exist? Thanks to ego, there is nothing you can't get done. Give them badges, real badges, wearable badges. They will wear them with happiness, they will do it with happiness.
Btw, yes, I am a boss :)
A: Although I am still a student, when I did work experience 12 months ago, all the IT departments from within the corporation (I was 'working' for a large corporation which owned several mines in the area) would have a daily telephone conference, where each employee would say what they had been doing etc, and then talk about something new they had discovered and any other interesting tid-bits.
A: Couple ways I have seen so far:
*
*Wiki is suitable for internal knowledge, for example environment, project specific topics.
*Open doors policy
*Encourage asking questions.
*Voluntary presentations. Find out who has special knowledge and make it easy and attractive for them to set up a short presentation about it.
*Project post-mortem documents. A wrap-up meeting moderated by someone outside the project team, held after a project is finished or terminated.
*Compulsory presentations.
*
*Project presentation when they go live. Technologies used etc.
*In case someone is sent to a conference, he should give a presentation about the new technology he saw.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How to show a spinner while loading an image via JavaScript I'm currently working on a web application which has a page which displays a single chart (a .png image). On another part of this page there are a set of links which, when clicked, the entire page reloads and looks exactly the same as before except for the chart in the middle of the page.
What I want is that when a link is clicked, just the chart on the page changes. This will speed things up tremendously, as the page is roughly 100kb and I don't really want to reload the entire page just to display this.
I've been doing this via JavaScript, which works so far, using the following code
document.getElementById('chart').src = '/charts/10.png';
The problem is that when the user clicks on the link, it may take a couple of seconds before the chart changes. This makes the user think that their click hasn't done anything, or that the system is slow to respond.
What I want to happen is display a spinner / throbber / status indicator, in place of where the image is while it is loading, so when the user clicks the link they know at least the system has taken their input and is doing something about it.
I've tried a few suggestions, even using a pseudo time-out to show a spinner and then flicking back to the image.
A good suggestion I've had is to use the following
<img src="/charts/10.png" lowsrc="/spinner.gif"/>
Which would be ideal, except the spinner is significantly smaller than the chart which is being displayed.
Any other ideas?
A: I've used something like this to preload an image and then automatically call back to my JavaScript when the image has finished loading. You want to check complete before you set up the callback, because the image may already be cached, in which case your onload callback might never fire.
function PreloadImage(imgSrc, callback){
var objImagePreloader = new Image();
objImagePreloader.src = imgSrc;
if(objImagePreloader.complete){
callback();
objImagePreloader.onload=function(){};
}
else{
objImagePreloader.onload = function() {
callback();
// clear onLoad, IE behaves irratically with animated gifs otherwise
objImagePreloader.onload=function(){};
}
}
}
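Wired up to the chart from the question, usage might look like this (a sketch; the element ID matches the question's code):
// Show the spinner immediately, then swap in the chart once it's cached.
document.getElementById('chart').src = '/spinner.gif';
PreloadImage('/charts/10.png', function() {
    document.getElementById('chart').src = '/charts/10.png';
});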
A: Use CSS to set the loading animation as a centered background-image for the image's container.
Then when loading the new large image, first set the src to a preloaded transparent 1 pixel gif.
e.g.
document.getElementById('mainimg').src = '/images/1pix.gif';
document.getElementById('mainimg').src = '/images/large_image.jpg';
While the large_image.jpg is loading, the background will show through the 1pix transparent gif.
A: Building on Ed's answer, I would prefer to see something like:
function PreLoadImage( srcURL, callback, errorCallback ) {
var thePic = new Image();
thePic.onload = function() {
callback();
thePic.onload = function(){};
}
thePic.onerror = function() {
errorCallback();
}
thePic.src = srcURL;
}
Your callback can display the image in its proper place and dispose/hide of a spinner, and the errorCallback prevents your page from "beachballing". All event driven, no timers or polling, plus you don't have to add the additional if statements to check if the image completed loading while you where setting up your events - since they're set up beforehand they'll trigger regardless of how quickly the images loads.
A: I like @duddle's jquery method but find that load() isn't always called (such as when the image is retrieved from cache in IE). I use this version instead:
$('img.example').one('load', function() {
$('#spinner').remove();
}).each(function() {
if(this.complete) {
$(this).trigger('load');
}
});
This calls load at most one time and immediately if it's already completed loading.
A: Some time ago I wrote a jQuery plugin which handles displaying a spinner automatically: http://denysonique.github.com/imgPreload/
Looking into its source code should help you with detecting when to display the spinner and with displaying it in the centre of the loaded image.
A: You could show a static image that gives the optical illusion of a spinny-wheel, like these.
A: Using the load() method of jQuery, it is easily possible to do something as soon as an image is loaded:
$('img.example').load(function() {
$('#spinner').fadeOut();
});
See: http://api.jquery.com/load-event/
A: Use the power of the setTimeout() function (more info) - this allows you to set a timer to trigger a function call in the future, and calling it won't block execution of the current / other functions (it's asynchronous).
Position a div containing the spinner above the chart image, with its CSS display attribute set to none:
<div>&nbsp;<img src="spinner.gif" id="spinnerImg" style="display: none;" /></div>
The &nbsp; stops the div collapsing when the spinner is hidden. Without it, when you toggle the display of the spinner, your layout will "twitch".
function chartOnClick() {
//How long to show the spinner for in ms (eg 3 seconds)
var spinnerShowTime = 3000
//Show the spinner
document.getElementById('spinnerImg').style.display = "";
//Change the chart src
document.getElementById('chart').src = '/charts/10.png';
//Set the timeout on the spinner
setTimeout("hideSpinner()", spinnerShowTime);
}
function hideSpinner() {
document.getElementById('spinnerImg').style.display = "none";
}
A: Put the spinner in a div the same size as the chart; you know the height and width, so you can use relative positioning to center it correctly.
A: Aside from the lowsrc option, I've also used a background-image on the img's container.
A: Be aware that the callback function is also called if the image src doesn't exist (http 404 error). To avoid this you can check the width of the image, like:
if(this.width == 0) return false;
A: @iAn's solution looks good to me. The only thing I'd change is, instead of using setTimeout, I'd try to hook into the image's 'load' event. This way, if the image takes longer than 3 seconds to download, you'll still get the spinner.
On the other hand, if it takes less time to download, you'll get the spinner for less than 3 seconds.
A: I would add some random digits to the image URL to defeat the browser cache.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "57"
} |
Q: How does the Licenses.licx based .Net component licensing model work? I've encountered multiple third party .Net component-vendors that use a licensing scheme. On an evaluation copy, the components show up with a nag-screen or watermark or some such indicator. On a licensed machine, a Licenses.licx is created - with what appears to be just the assembly full name/identifiers. This file has to be included when the client assembly is built.
*
*How does this model work? Both from component-vendors' and users' perspective.
*What is the .licx file used for? Should it be checked in? We've had a number of issues with the wrong/right .licx file being checked in and what not
A: That's not correct. The licx file is very important and is necessary for the host app to be built with the correct license info embedded in it. So, it's critical that the licx files also be included in source control. Otherwise a person checking out the source code on another machine will not get the licx file, and the build may fail or may not have the proper license info for the used components in the exe.
A: Almost everything about .Net licensing is explained here. No need to rewrite, I think.
It is better to exclude license files from the project in source control, if you can. Otherwise, editing visual components may become a pain. Also, storing license files in the source control repository is not a necessity.
Hope this helps.
A: This was a good article on the topic: Click Here
In order to deploy an application with licensed components like TX Text Control, the EXE file must be licensed properly.
The .NET licensing mechanism recommends to add the licenses to the EXE - the calling assembly.
---- What happens in detail?
If you drag and drop a TextControl from the Visual Studio toolbox to a form, Visual Studio creates a licenses.licx file and includes the license information. This file is located in the same folder like your project file.
Important: The licenses.licx file does not include the license string itself. It just contains information about the licensed components. In case of TX Text Control (TXTextControl.TextControl), the following string is addded:
TXTextControl.TextControl, TXTextControl, Version=15.0.700.500, Culture=neutral, PublicKeyToken=6b83fe9a75cfb638
As you can see, it contains the namespace and control name, the assembly version, culture information and the unique public key token. Based on that information, the License Compiler (lc.exe) compiles the real license string that will be embedded into the executable assembly. The lc.exe is a small utility of the .NET Framework SDK which is used by Visual Studio during the build process.
---- What if I am using TX Text Control in a user control?
In this case, you need to add the license to the main executable as well. Therefore, you can simply drag and drop a TextControl to a dummy form of the main project, so that the licenses.licx file and the references are added automatically.
After that, you can remove the TextControl from that form again. Or you can simply copy or create a licenses.licx file to the main project folder. In this case, you need to add a reference to TX Text Control manually as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52"
} |
Q: When can DataInputStream.skipBytes(n) not skip n bytes? The Sun Documentation for DataInput.skipBytes states that it "makes an attempt to skip over n bytes of data from the input stream, discarding the skipped bytes. However, it may skip over some smaller number of bytes, possibly zero. This may result from any of a number of conditions; reaching end of file before n bytes have been skipped is only one possibility."
*
*Other than reaching end of file, why might skipBytes() not skip the right number of bytes? (The DataInputStream I am using will either be wrapping a FileInputStream or a PipedInputStream.)
*If I definitely want to skip n bytes and throw an EOFException if this causes me to go to the end of the file, should I use readFully() and ignore the resulting byte array? Or is there a better way?
A: 1) There might not be that much data available to read (the other end of the pipe might not have sent that much data yet), and the implementing class might be non-blocking (i.e. it will just return what it can, rather than waiting for enough data to fulfil the request).
I don't know if any implementations actually behave in this way, however, but the interface is designed to permit it.
Another option is simply that the file gets closed part-way through the read.
2) Either readFully() (which will always wait for enough input or else fail) or call skipBytes() in a loop. I think the former is probably better, unless the array is truly vast.
A: I came across this problem today. It was reading off a network connection on a virtual machine so I imagine there could be a number of reasons for this happening. I solved it by simply forcing the input stream to skip bytes until it had skipped the number of bytes I wanted it to:
int byteOffsetX = someNumber; //n bytes to skip
int nSkipped = 0;
nSkipped = in.skipBytes(byteOffsetX);
while (nSkipped < byteOffsetX) {
nSkipped = nSkipped + in.skipBytes(byteOffsetX - nSkipped);
}
A: It turns out that readFully() adds more performance overhead than I was willing to put up with.
In the end I compromised: I call skipBytes() once, and if that returns fewer than the right number of bytes, I call readFully() for the remaining bytes.
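Something along these lines (a sketch of that compromise; error handling kept minimal):
import java.io.DataInputStream;
import java.io.IOException;

// Skip exactly n bytes or throw EOFException, while only paying
// readFully()'s allocation cost when skipBytes() comes up short.
static void skipFully(DataInputStream in, int n) throws IOException {
    int skipped = in.skipBytes(n);
    if (skipped < n) {
        in.readFully(new byte[n - skipped]);   // throws EOFException at EOF
    }
}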
A: Josh Bloch has publicised this recently. It is consistent in that InputStream.read is not guaranteed to read as many bytes as it could. However, it is utterly pointless as an API method. InputStream should probably also have readFully.
A: According to the docs, readFully() is the only way that both works and guaranteed to work.
The actual Oracle implementation is... confusing:
public final int skipBytes(int n) throws IOException {
int total = 0;
int cur = 0;
while ((total<n) && ((cur = (int) in.skip(n-total)) > 0)) {
total += cur;
}
return total;
}
Why call skip() in a loop when skipBytes() has essentially the same contract as skip()? It would make perfect sense to implement it this way if both of the following were true: skipBytes() guaranteed to skip less than requested only on EOF, and skip() guaranteed to skip at least one byte if at all possible (just like read() does).
What's even worse is that skip() is actually implemented using read(), which means it actually does the job. It just doesn't promise to do it, which means other implementations may fail to do so (and even the Oracle one may potentially fail in the future if changed in future releases).
To be completely safe: call skipBytes(), and if it doesn't do the job, allocate a temp buffer and call readFully() (use a loop here if the ability to skip over arbitrarily large amounts of data is needed).
On the other hand, calling skipBytes() in a loop is pointless. At least with Oracle implementation, because it's already a loop.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Where did all the java applets go? When java was young, people were excited about writing applets. They were cool and popular, for a little while. Now, I never see them anymore. Instead we have flash, javascript, and a plethora of other web app-building technologies.
Why don't sites use java applets anymore?
I'm also curious: historically, why do you think this occurred? What could have been done differently to keep Java applets alive?
A: By the time Java's GUI API stopped totally sucking, everyone was using Flash. And even today, Java is nowhere near as good as Flash at doing fancy graphics.
A: I assume it's because java is a "real", ie. general purpose language. To make an applet, you have to write code, and there aren't any shortcuts.
Now that Flash etc. have come along, you can pretty much just drag and drop your way through making a cool animation for your website. This is a much lower barrier to entry - you don't have to know how to program in order to get a Flash animation working. So Flash proliferates, and Java applets are hardly used anymore.
A: I think applets are collateral damage in the battle between Microsoft and Sun.
At first, the JVM was very slow to load and demanded too much memory.
Then, when increase in computing power made the JVM possible, Sun played hard as it attempted to control all things Java:
As part of another private antitrust lawsuit filed against Microsoft by Sun in March, Sun sought a preliminary injunction requiring Microsoft to include a current Java virtual machine (JVM) in the Windows XP operating system. Microsoft said the decision to include the JVM this week is a direct result of the latest legal entanglement with Sun, but Microsoft plans to disband support for Java in Windows following Jan. 1, 2004. Microsoft Reverses Course, Will Include Java VM In Windows XP--For Now
A: They took forever to load up and get going in the browser, and then for a lot of people they didn't work. When they finally did load, the interfaces were ugly and clunky. I think the poor user experience was a big step towards making applets obsolete.
So to answer the original question I have a question of my own - you ask "Why don't sites use java applets anymore", and my response is "why would anyone want to?"
A: I see them a lot in academic settings (hosted on department or faculty sites), but you're right in that they are not very popular.
However, remember that Java's big promise has been achieved. We have Flash, Java Applets, Silverlight, and ever-improving JavaScript frameworks.
Now if I may add a personal opinion - I think that Java applets are inelegant. They tend to look ugly, and the Java runtime makes its presence in the OS far too known (in terms of runtime visuals, updates, and the ugly installer). Flash is much better with its rich media environment and its transparent (and ubiquitous) deployment.
A: People still use applets. But you are right, there are tons of different solutions out there. For example, take a look at JavaFX.
A: I think compatibility issues were a big problem. Most notably with IE and Microsoft's Java VM which wasn't as standards compliant as it might have been.
Even with the Sun JVM you could have problems. I've had fun where I've had two 3rd-party Applets requiring different versions of Java, which causes all sorts of problems. Sun has tried to solve this problem by replacing Applets with Java Web Start, which gives you a link in the browser that launches the application in its own window instead of inside the browser. (In theory with JWS you can have different applications using different VMs, but it never seems to work for me as well as it should.)
Advancements with JavaScript have also made it possible to develop much richer web pages, so a lot of things that in the past you could only do in Applets can now be done simply with AJAX.
A: I think Java applets were overshadowed by Flash and ActionScript (pun unintended), being much easier to use for what Java Applets were being used at the time (animations + stateful applications).
Flash's success in this respect in turn owes to its much smaller file sizes, as well as benefiting from the Sun vs. Microsoft suit that resulted in Microsoft removing the MSJVM from Internet Explorer, at a time of Netscape's demise and IE's heavy dominance.
A: First, they are not gone. You can still find lots of applets on the Web; lots of people use them, particularly to demonstrate algorithms and such.
Advantages: they can leverage existing libraries (math, physics, sorting, graph, etc.) and they are faster than Flash.
Inconveniences: it might be risky to target a recent JVM (although Sun did a good job on automatic updates; it looks like lots of people are using Java 1.6 already), and load time is a bit slow (even though great progress has been made there).
You can still find lots of game applets too, like Bookworm, with the added advantage, perhaps, of having part of the work already done to run them on mobile phones...
Second, I can predict a regain of interest with JavaFX. Applets on steroids, able to break the legend of "applets are ugly"... :-)
Last, a library like Processing makes it super easy to create graphics-intensive applets, and you can find lots of them on the Net, e.g. on OpenProcessing, where the worst (beginners in programming) is near the best!
A: I wonder how widespread the JVM actually is? In the case of Flash, IE5 preinstalled it, giving it a large automatic user base. But unless the JVM was included with the OS install, users wouldn't have it. I suppose as a developer you target the largest install base, meaning choosing Flash over Java.
There are Java applets here and there; definitely not widespread though.
A: i believe it's their ugliness that kept them away from the modern web. flash brought the design, javascript brought a convenient way to make some cool things on a client. being a box inside a browser (just like a flash, though, but much uglier) applet technology was put away.
actually, the only thing that might be missed is the possibility to have a 'client-server' type of communication inside the web, because java applet could have a stateful connection. on the other hand, you would have to put some server on the other side and open a port for it, which just was too much house-work for shared hosting environments.
applets still live in some different areas, like control centers for roads, tunnels, power plants and stuff like that.
A: People are still using applets, at least for the company that I am working with. The applets are used mainly by internal users.
I feel that applets have their benefits, as companies which employ Java on the server side will most probably have a large pool of talent better skilled at Java.
Although other technologies like JavaScript, HTML/CSS or Flash are perhaps more popular or more fanciful, the talent pool could be better employed creating web apps with Java applets, as it is a language they are already familiar with through their work on the server-side stuff.
It could be faster for the Java talent pool to deliver a change request with a Java applet solution, and at a higher accuracy, than with any other technology.
Sometimes, the most important thing in a technology solution is its functionality and how fast people who need to provide support for them can react to changes.
A: 1) AWT made for horrid UIs. Swing improved on that but it was too late, because...
2) Microsoft dropped support for Java in the browser (its proprietary MSJVM), and before it did, it would only support the last version it released, which was roughly JDK 1.1 compatible.
3) So today you cannot be sure that an applet will run on the majority of non-developer machines, unlike flash.
Same can be said of ActiveX by the way.
A: For what it's worth, Sun is pouring money and resources into applets again. They've made some really significant improvements in JDK 1.6.10 to mitigate a lot of the 'clunkiness' applets used to exhibit. For instance, with this update, you can display your own custom loading image/animation while your applet loads, and the plugin has been put on a major diet to improve performance.
They've also embarked upon an initiative to directly compete with Flash and Silverlight - JavaFX.
Whether or not the market will respond to any of this remains to be seen, but it's certainly a fascinating move on Sun's part given Flash's dominance in the market place.
A: The JVM is very widespread, especially in the corporate world; at least where I've worked, there was always a JVM installed.
I'm currently working on a Java Applet, but in general, I would never use an applet unless I had to. But then again, I wouldn't use Flash or Silverlight, either. Applets have a slow load time and look out of place in webpages. Also, Macromedia/Adobe have outmarketed the good ol' applets.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "56"
} |
Q: Can you load a .Net form as a control? I want to load a desktop application, via reflection, as a Control inside another application.
The application I'm reflecting is a legacy one - I can't make changes to it.
I can dynamically access the Form, but can't load it as a Control.
In .NET, Form derives from Control, so I can assign the reflected Form to a Control reference, but doing so throws a run-time exception: forms cannot be loaded as controls.
Is there any way to convert the form to a control?
A: Yes, this works just fine. I'm working on a .NET app right now that loads forms into a panel on a host form.
The relevant snippet:
// setup the new form
form.TopLevel = false;
form.FormBorderStyle = FormBorderStyle.None;
form.Dock = DockStyle.Fill;
form.Show();
// add to the panel's list of child controls
panelFormHost.Controls.Add(form);
A: You should be able to add the form to the controls collection of your parent form...
See here:
http://vbcity.com/forums/topic.asp?tid=30539
If that fails, try using the adapter pattern to create a container with your legacy form inside it, then load it in an MDI maybe?
A: What is the exception you get? Is it possible that the control itself is giving the exception (vs the framework)? Perhaps something is called in the original applications Main function that is not being called?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Passing on named variable arguments in python Say I have the following methods:
def methodA(arg, **kwargs):
pass
def methodB(arg, *args, **kwargs):
pass
In methodA I wish to call methodB, passing on the kwargs. However, it seems that if I define methodA as follows, the second argument will be passed on as positional rather than named variable arguments.
def methodA(arg, **kwargs):
methodB("argvalue", kwargs)
How do I make sure that the **kwargs in methodA gets passed as **kwargs to methodB?
A: Put the asterisks before the kwargs variable. This makes Python pass the variable (which is assumed to be a dictionary) as keyword arguments.
methodB("argvalue", **kwargs)
A: As an aside: When using functions instead of methods, you could also use functools.partial:
import functools
def foo(arg, *args, **kwargs):
    ...
bar = functools.partial(foo, "argvalue")
The last line will define a function "bar" that, when called, will call foo with the first argument set to "argvalue" and all other arguments just passed on:
bar(5, myarg="value")
will call
foo("argvalue", 5, myarg="value")
Unfortunately that will not work with methods.
A: Some experimentation and I figured this one out:
def methodA(arg, **kwargs):
methodB("argvalue", **kwargs)
Seems obvious now...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How to get the base 10 logarithm of a Fixnum in Ruby? I want to get the base 10 logarithm of a Fixnum using Ruby, but found that n.log or n.log10 are not defined. Math::log is defined but uses a different base than 10.
What is the easiest way to get the base 10 logarithm of a Fixnum?
A: Reading the documentation for module Math the answer is really obvious:
Math::log10(n)
This gives the base 10 logarithm of n.
A: There is
Math::log10(n)
And there is also a property of logarithms: log_x(y) = log(y)/log(x).
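For example, both of these give the base 10 logarithm of 1000 (the second may show a tiny floating-point error):
Math.log10(1000)              # => 3.0
Math.log(1000) / Math.log(10) # => 3.0 (approximately)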
A: Math.log10(numeric) => float
returns base 10 log
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Display data from XMLDataSource in TextBox Can anyone give me some pointers on how to display the results of an XPath query in a textbox using code (C#)? My datascource seems to (re)bind correctly once the XPath query has been applied, but I cannot find how to get at the resulting data.
Any help would be greatly appreciated.
A: XMLDataSource is designed to be used with data-bound controls. ASP.NET's TextBox is not a data-bound control. So to accomplish what you want you either have to find a textbox control with data binding or display the result in some other way.
For example, you could use a Repeater control and create your own rendering template for it.
<asp:Repeater id="Repeater1" runat="server" datasource="XMLds">
<ItemTemplate>
<input type="text" value='<%# XPath("<path to display field>") %>' />
</ItemTemplate>
</asp:Repeater>
A: Some more information would be nice to have to be able to give you a decent answer. Do you have any existing code snippets you could publish here?
The general idea is to use the XmlDataSource.XPath property as a filter on the XmlDataSource.Data property. Did you try displaying the contents of the Data prop in your textbox?
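For example, a minimal sketch (assuming the XmlDataSource instance is named XMLds and System.Xml is imported; note that GetXmlDocument() returns the loaded document without the XPath applied, so you apply it manually):
// sketch: read the data source's document and apply the XPath by hand
XmlDocument doc = XMLds.GetXmlDocument();
XmlNode node = doc.SelectSingleNode(XMLds.XPath);
if (node != null)
{
    myTextBox.Text = node.InnerText;
}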
A: Based on a selection in a DropDownList, when the SelectedIndexChanged event fires, the XPath for an XMLDataSource object is updated:
protected void ddl_SelectedIndexChanged(object sender, EventArgs e)
{
XMLds.XPath = "/controls/control[@id='AuthorityType']/item[@text='" + ddl.SelectedValue + "']/linkedValue";
XMLds.DataBind();
}
The XPath string is fine; I can output and test that it is working correctly and resolving to the correct nodes. What I am having problems with is getting at the data that is supposedly stored in the XmlDataSource; specifically, getting the data and outputting it in a TextBox. I'd like to be able to do this as part of the function above, i.e.
protected void ddl_SelectedIndexChanged(object sender, EventArgs e)
{
XMLds.XPath = "/controls/control[@id='AuthorityType']/item[@text='" + ddl.SelectedValue + "']/linkedValue";
XMLds.DataBind();
myTextBox.Text = <FieldFromXMLDataSource>;
}
Thank you for your time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to host licensed .Net controls in unmanaged C++ app? I need to host and run managed controls inside of a purely unmanaged C++ app. How to do this?
To run unlicensed controls is typically simple:
if (SUCCEEDED(ClrCreateManagedInstance(type, iid, &obj)))
{
// do something with obj
}
When using a licensed control however, we need to somehow embed a .licx file into the project (ref application licensing). In an unmanaged C++ app, the requisite glue does not seem to work. The lc.exe tool is supposed to be able to embed the license as an assembly resource, but either we were not using the correct invocation, or it failed silently. Any help would be appreciated.
A: The answer depends on the particular component you're using. Contact your component help desk OR read up the documentation on what it takes to deploy their component.
Basically component developers are free to implement licensing as they deem fit. With the .licx file the component needs to be able to do whatever the developer wished via GetKey and IsValidKey (explained in the link you posted).
So if GetKey checks for a .licx file in the component directory - you just need to make sure it's there.
AFAIK the client assembly doesn't need to do anything except instantiate the control.
Also if you post the name of the component and the lc.exe command you're using, people could take a look..
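For reference, a typical lc.exe invocation looks roughly like this (the file names here are hypothetical placeholders):
lc.exe /target:MyApp.exe /complist:licenses.licx /i:VendorControls.dll /outdir:obj
This produces MyApp.exe.licenses in the output directory, which is then embedded as a resource when the target assembly is built.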
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to get a file's Media Type (MIME type)? How do you get a Media Type (MIME type) from a file using Java? So far I've tried JMimeMagic & Mime-Util. The first gave me memory exceptions, the second doesn't close its streams properly.
How would you probe the file to determine its actual type (not merely based on the extension)?
A: To chip in with my 5 cents:
TL;DR
I use MimetypesFileTypeMap and add any MIME type that is not there, and that I specifically need, into the mime.types file.
And now, the long read:
First of all, the MIME types list is huge, see here: https://www.iana.org/assignments/media-types/media-types.xhtml
I like to use standard facilities provided by JDK first, and if that doesn't work, I'll go and look for something else.
Determine file type from file extension
Since 1.6, Java has had MimetypesFileTypeMap, as pointed out in one of the answers above, and it is the simplest way to determine a MIME type:
new MimetypesFileTypeMap().getContentType( fileName );
In its vanilla implementation this does not do much (i.e. it works for .html but it doesn't for .png). It is, however, super simple to add any content type you may need:
*
*Create a file named 'mime.types' in the META-INF folder of your project
*Add a line for every MIME type you need that the default implementation doesn't provide (there are hundreds of MIME types, and the list grows as time goes by).
Example entries for png and js files would be:
image/png png PNG
application/javascript js
For mime.types file format, see more details here: https://docs.oracle.com/javase/7/docs/api/javax/activation/MimetypesFileTypeMap.html
Determine file type from file content
Since 1.7, Java has had java.nio.file.spi.FileTypeDetector, which defines a standard API for determining a file type in an implementation-specific way.
To fetch mime type for a file, you would simply use Files and do this in your code:
Files.probeContentType(Paths.get("either file name or full path goes here"));
The API definition provides for facilities that support determining a file's MIME type either from the file name or from the file content (magic bytes). That is why the probeContentType() method throws IOException, in case an implementation of this API uses the Path provided to it to actually try to open the file associated with it.
Again, the vanilla implementation of this (the one that comes with the JDK) leaves a lot to be desired.
In some ideal world in a galaxy far, far away, all these libraries which try to solve this file-to-MIME-type problem would simply implement java.nio.file.spi.FileTypeDetector; you would drop the preferred implementing library's jar file into your classpath, and that would be it.
In the real world, the one where you need the TL;DR section, you should find the library with the most stars next to its name and use it. For this particular case, I don't need one (yet ;) ).
A: With Apache Tika you need only three lines of code:
File file = new File("/path/to/file");
Tika tika = new Tika();
System.out.println(tika.detect(file));
If you have a groovy console, just paste and run this code to play with it:
@Grab('org.apache.tika:tika-core:1.14')
import org.apache.tika.Tika;
def tika = new Tika()
def file = new File("/path/to/file")
println tika.detect(file)
Keep in mind that its APIs are rich; it can parse "anything". As of tika-core 1.14, you have:
String detect(byte[] prefix)
String detect(byte[] prefix, String name)
String detect(File file)
String detect(InputStream stream)
String detect(InputStream stream, Metadata metadata)
String detect(InputStream stream, String name)
String detect(Path path)
String detect(String name)
String detect(URL url)
See the apidocs for more information.
A: The JAF API is part of JDK 6. Look at the javax.activation package.
The most interesting classes are javax.activation.MimeType - an actual MIME type holder - and javax.activation.MimetypesFileTypeMap - a class whose instance can resolve the MIME type as a String for a file:
String fileName = "/path/to/file";
MimetypesFileTypeMap mimeTypesMap = new MimetypesFileTypeMap();
// only by file name
String mimeType = mimeTypesMap.getContentType(fileName);
// or by actual File instance
File file = new File(fileName);
mimeType = mimeTypesMap.getContentType(file);
A: I tried several ways to do it, including the first ones mentioned by @Joshua Fox. But some don't recognize frequent MIME types such as PDF, and others cannot be trusted with fake files (I tried with a RAR file whose extension was changed to TIF). The solution I found, also mentioned in passing by @Joshua Fox, is to use MimeUtil2, like this:
MimeUtil2 mimeUtil = new MimeUtil2();
mimeUtil.registerMimeDetector("eu.medsea.mimeutil.detector.MagicMimeMimeDetector");
String mimeType = MimeUtil2.getMostSpecificMimeType(mimeUtil.getMimeTypes(file)).toString();
A: This is the simplest way I found for doing this:
byte[] byteArray = ...
InputStream is = new BufferedInputStream(new ByteArrayInputStream(byteArray));
String mimeType = URLConnection.guessContentTypeFromStream(is);
A: Apache Tika.
<!-- https://mvnrepository.com/artifact/org.apache.tika/tika-parsers -->
<dependency>
<groupId>org.apache.tika</groupId>
<artifactId>tika-parsers</artifactId>
<version>1.24</version>
</dependency>
and Two line of code.
Tika tika=new Tika();
tika.detect(inputStream);
A: Apache Tika offers, in tika-core, MIME type detection based on magic markers in the stream prefix. tika-core does not fetch other dependencies, which makes it as lightweight as the currently unmaintained Mime Type Detection Utility.
Simple code example (Java 7), using the variables theInputStream and theFileName
try (InputStream is = theInputStream;
BufferedInputStream bis = new BufferedInputStream(is);) {
AutoDetectParser parser = new AutoDetectParser();
Detector detector = parser.getDetector();
Metadata md = new Metadata();
md.add(Metadata.RESOURCE_NAME_KEY, theFileName);
MediaType mediaType = detector.detect(bis, md);
return mediaType.toString();
}
Please note that MediaType.detect(...) cannot be used directly (TIKA-1120). More hints are provided at https://tika.apache.org/1.24/detection.html.
A: In Java 7 you can now just use Files.probeContentType(path).
A: It is better to use two-layer validation for file uploads.
First, you can check the MIME type and validate it.
Second, you should convert the first few bytes of your file to hexadecimal and compare them with known magic numbers. That is a really secure way to check file validity; a minimal sketch follows below.
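A sketch of that second layer (a hypothetical helper; the two signatures shown are the "%PDF" and PNG headers):
import java.io.IOException;
import java.io.InputStream;
// Returns true if the stream starts with the given signature bytes.
static boolean hasMagic(InputStream in, int... magic) throws IOException {
    for (int expected : magic) {
        if (in.read() != expected) {
            return false;
        }
    }
    return true;
}
// usage, given some InputStream in: "%PDF" is 0x25 0x50 0x44 0x46;
// PNG files start with 0x89 0x50 0x4E 0x47
boolean looksLikePdf = hasMagic(in, 0x25, 0x50, 0x44, 0x46);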
A: You can do it with just one line: new MimetypesFileTypeMap().getContentType(new File("filename.ext")). Look at the complete test code (Java 7):
import java.io.File;
import javax.activation.MimetypesFileTypeMap;
public class MimeTest {
public static void main(String a[]){
System.out.println(new MimetypesFileTypeMap().getContentType(
new File("/path/filename.txt")));
}
}
This code produces the following output: text/plain
A: If you are working with a Servlet and if the servlet context is available to you, you can use :
getServletContext().getMimeType( fileName );
A: I couldn't find anything to check for video/mp4 MIME type so I made my own solution.
I happened to observe that Wikipedia was wrong and that the 00 00 00 18 66 74 79 70 69 73 6F 6D file signature is not correct: the fourth byte (18) and all the bytes after the 70 (excluded) change quite a lot amongst otherwise valid mp4 files.
This code is essentially a copy/paste of URLConnection.guessContentTypeFromStream code but tailored to video/mp4.
BufferedInputStream bis = new BufferedInputStream(new ByteArrayInputStream(content));
String mimeType = URLConnection.guessContentTypeFromStream(bis);
// Goes full barbaric and processes the bytes manually
if (mimeType == null){
// These ints converted to hex are:
// 00 00 00 18 66 74 79 70 69 73 6F 6D
// which are the file signature (magic bytes) for .mp4 files
// from https://www.wikiwand.com/en/List_of_file_signatures
// just ctrl+f "mp4"
int[] mp4_sig = {0, 0, 0, 24, 102, 116, 121, 112};
bis.reset();
bis.mark(16);
int[] firstBytes = new int[8];
for (int i = 0; i < 8; i++) {
firstBytes[i] = bis.read();
}
// This byte doesn't matter for the file signature and changes
mp4_sig[3] = content[3];
bis.reset();
if (Arrays.equals(firstBytes, mp4_sig)){
mimeType = "video/mp4";
}
}
Tested successfully against 10 different .mp4 files.
EDIT: Here is a useful link (if it is still online) where you can find samples of many types. I don't own those videos, don't know who does either, but they're useful for testing the above code.
A: If you're an Android developer, you can use a utility class android.webkit.MimeTypeMap which maps MIME-types to file extensions and vice versa. Following code snippet may help you.
private static String getMimeType(String fileUrl) {
String extension = MimeTypeMap.getFileExtensionFromUrl(fileUrl);
return MimeTypeMap.getSingleton().getMimeTypeFromExtension(extension);
}
A: Unfortunately,
mimeType = file.toURL().openConnection().getContentType();
does not work, since this use of URL leaves a file locked, so that, for example, it is undeletable.
However, you have this:
mimeType= URLConnection.guessContentTypeFromName(file.getName());
and also the following, which has the advantage of going beyond mere use of file extension, and takes a peek at content
InputStream is = new BufferedInputStream(new FileInputStream(file));
mimeType = URLConnection.guessContentTypeFromStream(is);
//...close stream
However, as suggested by the comment above, the built-in table of mime-types is quite limited, not including, for example, MSWord and PDF. So, if you want to generalize, you'll need to go beyond the built-in libraries, using, e.g., Mime-Util (which is a great library, using both file extension and content).
A: A solution to detecting a file's Media Type1 has the following parts:
*
*A list of file signatures (see Kessler's list, Wikipedia's list, and Space Maker's list)
*A list of Media Types
*A map of Media Types to file name extensions
*Comparing file signatures against File, Path, or InputStream data sources
Please remember to give credit if you copy the code.
StreamMediaType.java
In the following code -1 means skip comparing the byte at that index; a -2 denotes end of file type signature. This detects binary formats, primarily images, and a few plain text format variations (HTML, SVG, XML). The code uses up to the first 11 "magic" bytes from the data source's header. Optimizations and improvements that shorten the logic are welcome.
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;
import static com.keenwrite.io.MediaType.*;
import static java.lang.System.arraycopy;
public class StreamMediaType {
private static final int FORMAT_LENGTH = 11;
private static final int END_OF_DATA = -2;
private static final Map<int[], MediaType> FORMAT = new LinkedHashMap<>();
static {
//@formatter:off
FORMAT.put( ints( 0x3C, 0x73, 0x76, 0x67, 0x20 ), IMAGE_SVG_XML );
FORMAT.put( ints( 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A ), IMAGE_PNG );
FORMAT.put( ints( 0xFF, 0xD8, 0xFF, 0xE0 ), IMAGE_JPEG );
FORMAT.put( ints( 0xFF, 0xD8, 0xFF, 0xEE ), IMAGE_JPEG );
FORMAT.put( ints( 0xFF, 0xD8, 0xFF, 0xE1, -1, -1, 0x45, 0x78, 0x69, 0x66, 0x00 ), IMAGE_JPEG );
FORMAT.put( ints( 0x49, 0x49, 0x2A, 0x00 ), IMAGE_TIFF );
FORMAT.put( ints( 0x4D, 0x4D, 0x00, 0x2A ), IMAGE_TIFF );
FORMAT.put( ints( 0x47, 0x49, 0x46, 0x38 ), IMAGE_GIF );
FORMAT.put( ints( 0x8A, 0x4D, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A ), VIDEO_MNG );
FORMAT.put( ints( 0x25, 0x50, 0x44, 0x46, 0x2D, 0x31, 0x2E ), APP_PDF );
FORMAT.put( ints( 0x38, 0x42, 0x50, 0x53, 0x00, 0x01 ), IMAGE_PHOTOSHOP );
FORMAT.put( ints( 0x25, 0x21, 0x50, 0x53, 0x2D, 0x41, 0x64, 0x6F, 0x62, 0x65, 0x2D ), APP_EPS );
FORMAT.put( ints( 0x25, 0x21, 0x50, 0x53 ), APP_PS );
FORMAT.put( ints( 0xFF, 0xFB, 0x30 ), AUDIO_MP3 );
FORMAT.put( ints( 0x49, 0x44, 0x33 ), AUDIO_MP3 );
FORMAT.put( ints( 0x3C, 0x21 ), TEXT_HTML );
FORMAT.put( ints( 0x3C, 0x68, 0x74, 0x6D, 0x6C ), TEXT_HTML );
FORMAT.put( ints( 0x3C, 0x68, 0x65, 0x61, 0x64 ), TEXT_HTML );
FORMAT.put( ints( 0x3C, 0x62, 0x6F, 0x64, 0x79 ), TEXT_HTML );
FORMAT.put( ints( 0x3C, 0x48, 0x54, 0x4D, 0x4C ), TEXT_HTML );
FORMAT.put( ints( 0x3C, 0x48, 0x45, 0x41, 0x44 ), TEXT_HTML );
FORMAT.put( ints( 0x3C, 0x42, 0x4F, 0x44, 0x59 ), TEXT_HTML );
FORMAT.put( ints( 0x3C, 0x3F, 0x78, 0x6D, 0x6C, 0x20 ), TEXT_XML );
FORMAT.put( ints( 0xFE, 0xFF, 0x00, 0x3C, 0x00, 0x3f, 0x00, 0x78 ), TEXT_XML );
FORMAT.put( ints( 0xFF, 0xFE, 0x3C, 0x00, 0x3F, 0x00, 0x78, 0x00 ), TEXT_XML );
FORMAT.put( ints( 0x42, 0x4D ), IMAGE_BMP );
FORMAT.put( ints( 0x23, 0x64, 0x65, 0x66 ), IMAGE_X_BITMAP );
FORMAT.put( ints( 0x21, 0x20, 0x58, 0x50, 0x4D, 0x32 ), IMAGE_X_PIXMAP );
FORMAT.put( ints( 0x2E, 0x73, 0x6E, 0x64 ), AUDIO_BASIC );
FORMAT.put( ints( 0x64, 0x6E, 0x73, 0x2E ), AUDIO_BASIC );
FORMAT.put( ints( 0x52, 0x49, 0x46, 0x46 ), AUDIO_WAV );
FORMAT.put( ints( 0x50, 0x4B ), APP_ZIP );
FORMAT.put( ints( 0x41, 0x43, -1, -1, -1, -1, 0x00, 0x00, 0x00, 0x00, 0x00 ), APP_ACAD );
FORMAT.put( ints( 0xCA, 0xFE, 0xBA, 0xBE ), APP_JAVA );
FORMAT.put( ints( 0xAC, 0xED ), APP_JAVA_OBJECT );
//@formatter:on
}
private StreamMediaType() {
}
public static MediaType getMediaType( final Path path ) throws IOException {
return getMediaType( path.toFile() );
}
public static MediaType getMediaType( final java.io.File file )
throws IOException {
try( final var fis = new FileInputStream( file ) ) {
return getMediaType( fis );
}
}
public static MediaType getMediaType( final InputStream is )
throws IOException {
final var input = new byte[ FORMAT_LENGTH ];
final var count = is.read( input, 0, FORMAT_LENGTH );
if( count > 1 ) {
final var available = new byte[ count ];
arraycopy( input, 0, available, 0, count );
return getMediaType( available );
}
return UNDEFINED;
}
public static MediaType getMediaType( final byte[] data ) {
assert data != null;
final var source = new int[]{
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};
for( int i = 0; i < Math.min( data.length, source.length ); i++ ) {
source[ i ] = data[ i ] & 0xFF;
}
for( final var key : FORMAT.keySet() ) {
int i = -1;
boolean matches = true;
while( ++i < FORMAT_LENGTH && key[ i ] != END_OF_DATA && matches ) {
matches = key[ i ] == source[ i ] || key[ i ] == -1;
}
if( matches ) {
return FORMAT.get( key );
}
}
return UNDEFINED;
}
private static int[] ints( final int... data ) {
final var magic = new int[ FORMAT_LENGTH ];
int i = -1;
while( ++i < data.length ) {
magic[ i ] = data[ i ];
}
while( i < FORMAT_LENGTH ) {
magic[ i++ ] = END_OF_DATA;
}
return magic;
}
}
MediaType.java
Define the file formats according to the IANA Media Type list. Notice that the file name extensions are mapped in MediaTypeExtension. There's a dependency on Apache's FilenameUtils class for its getExtension function.
import java.io.File;
import java.io.IOException;
import java.nio.file.Path;
import static MediaType.TypeName.*;
import static MediaTypeExtension.getMediaType;
import static org.apache.commons.io.FilenameUtils.getExtension;
public enum MediaType {
APP_ACAD( APPLICATION, "acad" ),
APP_JAVA_OBJECT( APPLICATION, "x-java-serialized-object" ),
APP_JAVA( APPLICATION, "java" ),
APP_PS( APPLICATION, "postscript" ),
APP_EPS( APPLICATION, "eps" ),
APP_PDF( APPLICATION, "pdf" ),
APP_ZIP( APPLICATION, "zip" ),
FONT_OTF( "otf" ),
FONT_TTF( "ttf" ),
IMAGE_APNG( "apng" ),
IMAGE_ACES( "aces" ),
IMAGE_AVCI( "avci" ),
IMAGE_AVCS( "avcs" ),
IMAGE_BMP( "bmp" ),
IMAGE_CGM( "cgm" ),
IMAGE_DICOM_RLE( "dicom_rle" ),
IMAGE_EMF( "emf" ),
IMAGE_EXAMPLE( "example" ),
IMAGE_FITS( "fits" ),
IMAGE_G3FAX( "g3fax" ),
IMAGE_GIF( "gif" ),
IMAGE_HEIC( "heic" ),
IMAGE_HEIF( "heif" ),
IMAGE_HEJ2K( "hej2k" ),
IMAGE_HSJ2( "hsj2" ),
IMAGE_X_ICON( "x-icon" ),
IMAGE_JLS( "jls" ),
IMAGE_JP2( "jp2" ),
IMAGE_JPEG( "jpeg" ),
IMAGE_JPH( "jph" ),
IMAGE_JPHC( "jphc" ),
IMAGE_JPM( "jpm" ),
IMAGE_JPX( "jpx" ),
IMAGE_JXR( "jxr" ),
IMAGE_JXRA( "jxrA" ),
IMAGE_JXRS( "jxrS" ),
IMAGE_JXS( "jxs" ),
IMAGE_JXSC( "jxsc" ),
IMAGE_JXSI( "jxsi" ),
IMAGE_JXSS( "jxss" ),
IMAGE_KTX( "ktx" ),
IMAGE_KTX2( "ktx2" ),
IMAGE_NAPLPS( "naplps" ),
IMAGE_PNG( "png" ),
IMAGE_PHOTOSHOP( "photoshop" ),
IMAGE_SVG_XML( "svg+xml" ),
IMAGE_T38( "t38" ),
IMAGE_TIFF( "tiff" ),
IMAGE_WEBP( "webp" ),
IMAGE_WMF( "wmf" ),
IMAGE_X_BITMAP( "x-xbitmap" ),
IMAGE_X_PIXMAP( "x-xpixmap" ),
AUDIO_BASIC( AUDIO, "basic" ),
AUDIO_MP3( AUDIO, "mp3" ),
AUDIO_WAV( AUDIO, "x-wav" ),
VIDEO_MNG( VIDEO, "x-mng" ),
TEXT_HTML( TEXT, "html" ),
TEXT_MARKDOWN( TEXT, "markdown" ),
TEXT_PLAIN( TEXT, "plain" ),
TEXT_XHTML( TEXT, "xhtml+xml" ),
TEXT_XML( TEXT, "xml" ),
TEXT_YAML( TEXT, "yaml" ),
/*
* When all other lights go out.
*/
UNDEFINED( TypeName.UNDEFINED, "undefined" );
public enum TypeName {
APPLICATION,
AUDIO,
IMAGE,
TEXT,
UNDEFINED,
VIDEO
}
private final String mMediaType;
private final TypeName mTypeName;
private final String mSubtype;
MediaType( final String subtype ) {
this( IMAGE, subtype );
}
MediaType( final TypeName typeName, final String subtype ) {
mTypeName = typeName;
mSubtype = subtype;
mMediaType = typeName.toString().toLowerCase() + '/' + subtype;
}
public static MediaType valueFrom( final File file ) {
assert file != null;
return fromFilename( file.getName() );
}
public static MediaType fromFilename( final String filename ) {
assert filename != null;
return getMediaType( getExtension( filename ) );
}
public static MediaType valueFrom( final Path path ) {
assert path != null;
return valueFrom( path.toFile() );
}
public static MediaType valueFrom( String contentType ) {
if( contentType == null || contentType.isBlank() ) {
return UNDEFINED;
}
var i = contentType.indexOf( ';' );
contentType = contentType.substring(
0, i == -1 ? contentType.length() : i );
i = contentType.indexOf( '/' );
i = i == -1 ? contentType.length() : i;
final var type = contentType.substring( 0, i );
final var subtype = contentType.substring( i + 1 );
return valueFrom( type, subtype );
}
public static MediaType valueFrom(
final String type, final String subtype ) {
assert type != null;
assert subtype != null;
for( final var mediaType : values() ) {
if( mediaType.equals( type, subtype ) ) {
return mediaType;
}
}
return UNDEFINED;
}
public boolean equals( final String type, final String subtype ) {
assert type != null;
assert subtype != null;
return mTypeName.name().equalsIgnoreCase( type ) &&
mSubtype.equalsIgnoreCase( subtype );
}
public boolean isType( final TypeName typeName ) {
return mTypeName == typeName;
}
public String getSubtype() {
return mSubtype;
}
@Override
public String toString() {
return mMediaType;
}
}
MediaTypeExtension.java
The last piece of the puzzle is a map of MediaTypes to their known and common/popular file name extensions. This allows bidirectional lookup based on file name extensions.
import java.util.List;
import static MediaType.*;
import static java.util.List.of;
public enum MediaTypeExtension {
MEDIA_APP_ACAD( APP_ACAD, of( "dwg" ) ),
MEDIA_APP_PDF( APP_PDF ),
MEDIA_APP_PS( APP_PS, of( "ps" ) ),
MEDIA_APP_EPS( APP_EPS ),
MEDIA_APP_ZIP( APP_ZIP ),
MEDIA_AUDIO_MP3( AUDIO_MP3 ),
MEDIA_AUDIO_BASIC( AUDIO_BASIC, of( "au" ) ),
MEDIA_AUDIO_WAV( AUDIO_WAV, of( "wav" ) ),
MEDIA_FONT_OTF( FONT_OTF ),
MEDIA_FONT_TTF( FONT_TTF ),
MEDIA_IMAGE_APNG( IMAGE_APNG ),
MEDIA_IMAGE_BMP( IMAGE_BMP ),
MEDIA_IMAGE_GIF( IMAGE_GIF ),
MEDIA_IMAGE_JPEG( IMAGE_JPEG,
of( "jpg", "jpe", "jpeg", "jfif", "pjpeg", "pjp" ) ),
MEDIA_IMAGE_PNG( IMAGE_PNG ),
MEDIA_IMAGE_PSD( IMAGE_PHOTOSHOP, of( "psd" ) ),
MEDIA_IMAGE_SVG( IMAGE_SVG_XML, of( "svg" ) ),
MEDIA_IMAGE_TIFF( IMAGE_TIFF, of( "tiff", "tif" ) ),
MEDIA_IMAGE_WEBP( IMAGE_WEBP ),
MEDIA_IMAGE_X_BITMAP( IMAGE_X_BITMAP, of( "xbm" ) ),
MEDIA_IMAGE_X_PIXMAP( IMAGE_X_PIXMAP, of( "xpm" ) ),
MEDIA_VIDEO_MNG( VIDEO_MNG, of( "mng" ) ),
MEDIA_TEXT_MARKDOWN( TEXT_MARKDOWN, of(
"md", "markdown", "mdown", "mdtxt", "mdtext", "mdwn", "mkd", "mkdown",
"mkdn" ) ),
MEDIA_TEXT_PLAIN( TEXT_PLAIN, of( "txt", "asc", "ascii", "text", "utxt" ) ),
MEDIA_TEXT_R_MARKDOWN( TEXT_R_MARKDOWN, of( "Rmd" ) ),
MEDIA_TEXT_R_XML( TEXT_R_XML, of( "Rxml" ) ),
MEDIA_TEXT_XHTML( TEXT_XHTML, of( "xhtml" ) ),
MEDIA_TEXT_XML( TEXT_XML ),
MEDIA_TEXT_YAML( TEXT_YAML, of( "yaml", "yml" ) ),
MEDIA_UNDEFINED( UNDEFINED, of( "undefined" ) );
private final MediaType mMediaType;
private final List<String> mExtensions;
MediaTypeExtension( final MediaType mediaType ) {
this( mediaType, of( mediaType.getSubtype() ) );
}
MediaTypeExtension(
final MediaType mediaType, final List<String> extensions ) {
assert mediaType != null;
assert extensions != null;
assert !extensions.isEmpty();
mMediaType = mediaType;
mExtensions = extensions;
}
public String getExtension() {
return mExtensions.get( 0 );
}
public static MediaTypeExtension valueFrom( final MediaType mediaType ) {
for( final var type : values() ) {
if( type.isMediaType( mediaType ) ) {
return type;
}
}
return MEDIA_UNDEFINED;
}
boolean isMediaType( final MediaType mediaType ) {
return mMediaType == mediaType;
}
static MediaType getMediaType( final String extension ) {
final var sanitized = sanitize( extension );
for( final var mediaType : MediaTypeExtension.values() ) {
if( mediaType.isType( sanitized ) ) {
return mediaType.getMediaType();
}
}
return UNDEFINED;
}
private boolean isType( final String sanitized ) {
for( final var extension : mExtensions ) {
if( extension.equalsIgnoreCase( sanitized ) ) {
return true;
}
}
return false;
}
private static String sanitize( final String extension ) {
return extension == null ? "" : extension.toLowerCase();
}
private MediaType getMediaType() {
return mMediaType;
}
}
Usages:
// EXAMPLE -- Detect media type
//
final File image = new File( "filename.jpg" );
final MediaType mt = StreamMediaType.getMediaType( image );
// Tricky! The JPG could be a PNG in disguise.
if( mt.isType( MediaType.TypeName.IMAGE ) ) {
if( mt == MediaType.IMAGE_PNG ) {
// Nice try! Sneaky sneak.
}
}
// EXAMPLE -- Get typical media type file name extension
//
final String ext = MediaTypeExtension.valueFrom( MediaType.IMAGE_SVG_XML ).getExtension();
// EXAMPLE -- Get media type from HTTP request
//
final var url = new URL( "https://localhost/path/file.ext" );
final var conn = (HttpURLConnection) url.openConnection();
final var contentType = conn.getContentType();
MediaType mediaType = valueFrom( contentType );
// Fall back to stream detection probe
if( mediaType == UNDEFINED ) {
mediaType = StreamMediaType.getMediaType( conn.getInputStream() );
}
conn.disconnect();
You get the idea.
Short library review:
*
*Apache Tika -- 600kb bloat, needs multiple lines to configure, and multiple JAR files.
*jMimeMagic -- Unfinished, needs multiple lines to configure.
*MimeUtil2 -- Fairly large, didn't work out-of-the-box.
*FileTypeDetector -- Bundled with JDK, buggier than a mountain pine beetle infested forest.
*Files.probeContentType -- Detection is platform-specific and considered unreliable (source).
*MimetypesFileTypeMap -- Bundled with activation.jar, uses file name extension.
Sample audio, video, and image files for testing:
*
*http://mirrors.standaloneinstaller.com/video-sample/
*https://www.w3.org/People/mimasa/test/imgformat/
Note that nearly all XML documents will begin the same way:
<?xml version="1.0" standalone="no"?>
Since SVG documents are XML documents, many SVG documents will contain that XML declaration and may also contain:
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.0//EN" "http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
Detecting the SVG doctype would be possible by bumping the magic bytes from 11 to 13. Still, the doctype is not required, meaning that the SVG document could also begin after the XML declaration as follows:
<svg xmlns="http://www.w3.org/2000/svg">
Meaning, use caution when using this code to detect SVG file formats, as it is not reliable. Instead, consider using the HTTP Content-Type or filename extension.
Compounding the issue is that comments of arbitrary length can be inserted before the <svg tag, making detection extra-difficult.
1 "MIME type" is a deprecated term.
A: Actually, the Apache Tika detector Tika.detect(File) is the best option and is more accurate than Files.probeContentType(path).
Check this great quick reference, which contains examples and code samples.
A: From roseindia:
FileNameMap fileNameMap = URLConnection.getFileNameMap();
String mimeType = fileNameMap.getContentTypeFor("alert.gif");
A:
I was just wondering how most people fetch a mime type from a file in Java?
I've published my SimpleMagic Java package which allows content-type (mime-type) determination from files and byte arrays. It is designed to read and run the Unix file(1) command magic files that are a part of most ~Unix OS configurations.
I tried Apache Tika but it is huge with tons of dependencies, URLConnection doesn't use the bytes of the files, and MimetypesFileTypeMap also just looks at file names.
With SimpleMagic you can do something like:
// create a magic utility using the internal magic file
ContentInfoUtil util = new ContentInfoUtil();
// if you want to use a different config file(s), you can load them by hand:
// ContentInfoUtil util = new ContentInfoUtil("/etc/magic");
...
ContentInfo info = util.findMatch("/tmp/upload.tmp");
// or
ContentInfo info = util.findMatch(inputStream);
// or
ContentInfo info = util.findMatch(contentByteArray);
// null if no match
if (info != null) {
String mimeType = info.getMimeType();
}
A: If you are stuck with Java 5-6, then this utility class from the Servoy open source product can help.
You only need this function
public static String getContentType(byte[] data, String name)
It probes the first bytes of the content and returns the content types based on that content and not by file extension.
A: In Spring, with an org.springframework.web.multipart.MultipartFile named file:
file.getContentType();
A: If you work on a Linux OS, there is the command line tool file --mime-type. A runnable version of the idea (run the command, then parse the "path: type" output):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
static String mimetype(String file) throws IOException {
    // 1. run the command
    Process p = Runtime.getRuntime().exec(new String[]{"file", "--mime-type", file});
    // 2. get the output of the command, then 3. parse the mimetype out of it
    try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
        String output = r.readLine();  // e.g. "/home/nyapp.war: application/zip"
        if (output != null && output.contains(":")) {
            return output.substring(output.lastIndexOf(':') + 1).trim();
        }
        return "";
    }
}
Then
mimetype("/home/nyapp.war") // 'application/zip'
mimetype("/var/www/ggg/au.mp3") // 'audio/mp3'
A: After trying various other libraries I settled with mime-util.
<dependency>
    <groupId>eu.medsea.mimeutil</groupId>
    <artifactId>mime-util</artifactId>
    <version>2.1.3</version>
</dependency>
File file = new File("D:/test.tif");
MimeUtil.registerMimeDetector("eu.medsea.mimeutil.detector.MagicMimeMimeDetector");
Collection<?> mimeTypes = MimeUtil.getMimeTypes(file);
System.out.println(mimeTypes);
A: public String getFileContentType(String fileName) {
String fileType = "Undetermined";
final File file = new File(fileName);
try
{
fileType = Files.probeContentType(file.toPath());
}
catch (IOException ioException)
{
System.out.println(
"ERROR: Unable to determine file type for " + fileName
+ " due to exception " + ioException);
}
return fileType;
}
A: File file = new File(PropertiesReader.FILE_PATH);
// option 1: look up the type by file name extension
MimetypesFileTypeMap fileTypeMap = new MimetypesFileTypeMap();
String mimeType = fileTypeMap.getContentType(file);
// option 2: ask a URLConnection (note: this can leave the file locked)
URLConnection uconnection = file.toURI().toURL().openConnection();
mimeType = uconnection.getContentType();
A: Check the magic bytes of the stream or file:
https://stackoverflow.com/a/65667558/3225638
It uses pure Java, but requires you to define an enum of the types you want to detect.
A: If you want a reliable (ie. consistent) way of mapping file extensions to mime-types, here is what I use:
https://github.com/jjYBdx4IL/misc/blob/master/text-utils/src/main/java/com/github/jjYBdx4IL/utils/text/MimeType.java
It includes a bundled mime types database and basically inverts the logic of javax.activation's MimetypesFileTypeMap class by using the database to initialize the "programmatic" entries. That way the library-defined types always have precedence over what may be defined in unbundled resources.
A: In Java, the URLConnection class has a method called guessContentTypeFromName(String fileName) that can be used to guess the MIME media type (also known as the content type) of a file based on its file name. The method uses the file name's extension to determine the content type.
String fileName = "image.jpg";
String contentType = URLConnection.guessContentTypeFromName(fileName);
System.out.println(contentType); // "image/jpeg"
To know more, read this article.
A: I did it with the following code.
import java.net.HttpURLConnection;
import java.net.URL;
public class MimeFileType {
    public static void main(String args[]) {
        try {
            URL url = new URL("https://www.url.com.pdf");
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("GET");
            connection.connect();
            // the server reports the media type in the Content-Type response header
            System.out.println("Content-Type " + connection.getHeaderField("Content-Type"));
            connection.disconnect();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "375"
} |
Q: Best practice for Java IPC What is the best method for inter-process communication in a multithreaded Java app?
It should be performant (so no JMS please), easy to implement, and reliable, so that objects & data can be bound to one thread only.
Any ideas welcome!
A: Assuming the scenario 1 JVM, multiple threads then indeed java.util.concurrent is the place to look, specifically the various Queue implementations. However an abstraction on top of that may be nice and there Jetlang looks very interesting, lightweight Java message passing.
A: I recommend looking into the entire java.util.concurrent package, which have multiple classes for dealing with concurrency and different communication means between threads. All depends on what you want to achieve, as your question is pretty general.
A: You should use a producer/consumer queue. By doing that you avoid the pitfalls of multithreaded programming: race-conditions and deadlocks. Plus it is not just easier and cleaner, but also much faster if you use a lock-free queue like Disruptor or MentaQueue. I wrote a blog article where I talk about this in detail and show how to get < 100 nanoseconds latencies: Inter-thread communication with 2-digit nanosecond latency.
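For illustration, a minimal producer/consumer hand-off using a bounded queue from java.util.concurrent (this is the plain standard-library flavor, not the lock-free libraries mentioned above; the names are made up):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
public class HandOff {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
        // producer: blocks when the queue is full, so it can never overrun the consumer
        new Thread(() -> {
            try {
                queue.put("work item");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
        // consumer: owns each item once taken, so no further locking is needed
        new Thread(() -> {
            try {
                System.out.println("processing " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
    }
}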
A: I've just added MappedBus on github (http://github.com/caplogic/mappedbus) which is an efficient IPC library that enables several Java processes/JVMs to communicate by exchanging messages, and it uses a memory mapped file for the transport. The throughput has been measured at 40 million messages/s.
A: Could you clarify a bit? Do you mean IPC in a single JVM? (Multiple threads, yes, but at an OS-level only one process.) Or do you mean multiple JVMs? (And truly OS-level inter process communications.)
If it is the first, then maybe something out of java.util.concurrent, like ConcurrentLinkedQueue, would do the trick. (I pass messages around between my threads with classes from java.util.concurrent with success.)
If the latter, then I'm going to just guess and suggest taking a look at RMI, although I don't think it qualifies as fully reliable; you'd have to manage that a bit more "hands on".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Killing Excel.EXE on server
Possible Duplicate:
How to properly clean up Excel interop objects in C#
Suppose an ASP.NET web application generates automated Excel Reports on the server. How do we kill a server-side Excel.EXE once the processing is over? I am raising this purposely, because I believe that the Garbage Collector does not clean up the Excel executable even after the Excel file is closed.
Any pointers would be helpful?
A: I agree with not running Office on a server. Not that I have any choice in the matter :)
One thing to keep in mind with the taskkill option, is that unless you specifically plan for it (aka - singleton), you may have multiple copies of Excel (or any other Office app) running, and unintentionally close the wrong instance.
Also note that per http://support.microsoft.com/kb/257757
Microsoft does not currently
recommend, and does not support,
Automation of Microsoft Office
applications from any unattended,
non-interactive client application or
component (including ASP, ASP.NET,
DCOM, and NT Services), because Office
may exhibit unstable behavior and/or
deadlock when Office is run in this
environment.
As an alternative, there is a product called Aspose Cells that offers a product that is designed to allow you to programmatically work with an Excel sheet in a server environment. As a disclaimer, I have never personally used this product, but I have heard about it from several people I worked with in the past.
A: I've had more time to think about this answer, and would now recommend using an XML approach with the Open XML Office spreadsheet format.
Here are some good links to get started on building an Office document with code:
http://msdn.microsoft.com/en-us/magazine/cc163478.aspx
http://msdn.microsoft.com/en-us/library/bb735940(office.12).aspx
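For illustration, here is a minimal sketch along those lines using the Open XML SDK (this assumes the DocumentFormat.OpenXml package is referenced; the file name and sheet contents are placeholders). No Excel.exe is ever started:
using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Spreadsheet;

// create a package with one workbook part and one worksheet part
using (var doc = SpreadsheetDocument.Create("report.xlsx", SpreadsheetDocumentType.Workbook))
{
    var workbookPart = doc.AddWorkbookPart();
    workbookPart.Workbook = new Workbook();
    var worksheetPart = workbookPart.AddNewPart<WorksheetPart>();
    var sheetData = new SheetData();
    worksheetPart.Worksheet = new Worksheet(sheetData);
    // register the sheet in the workbook
    var sheets = workbookPart.Workbook.AppendChild(new Sheets());
    sheets.Append(new Sheet
    {
        Id = workbookPart.GetIdOfPart(worksheetPart),
        SheetId = 1,
        Name = "Report"
    });
    // one row with a single inline-string cell
    var row = new Row();
    row.Append(new Cell
    {
        DataType = CellValues.InlineString,
        InlineString = new InlineString(new Text("Hello from the server"))
    });
    sheetData.Append(row);
    workbookPart.Workbook.Save();
}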
Just use SSIS on SQL Server. It provides the ability to export to Excel.
Don't run Office on the server. Alternatively, waste money on Aspose or SpreadsheetGear.
GC does work; you're just not using it properly. Follow this pattern:
private void killExcel()
{
    xlApp.Quit();
    // release the COM reference so the RCW lets go of the Excel process
    Marshal.ReleaseComObject(xlApp);
    xlApp = null;
    GC.WaitForPendingFinalizers();
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
}
get your Excel operational class to implement IDisposable, and then stick killExcel() in the Dispose method.
UPDATE: Also note that sometimes devs will still see Excel.exe running in Task Manager. Before assuming the above code isn't working, check that the process that is running the code is also closed. In the case of a VSTO or COM add-in, check that the Word/PowerPoint/other Excel instance is also closed, as there is still a GC root back to the launching process. Once that is closed, the Excel.exe process will close.
A: Are you using VSTO? You can close the Excel app after you've finished with excelobject.Quit(); it worked for me, but I don't use Excel on the server side anymore.
You can have a look at Excel's XML schema to build the Excel file without Excel itself. Check out CarlosAg Excel Writer, which does exactly the same.
A: I've had a similar problem. While 'taskkill excel.exe' or enumerating all "excel" processes and killing them does work, this kills ALL running Excel processes. You're better off killing only the instance you're currently working with.
Here's the code I used to accomplish that. It uses a PInvoke (see here) to get the ProcessID from the Excel.Application instance (Me.ExcelInstance in the example below).
Dim ExcelPID As Integer
GetWindowThreadProcessId(New IntPtr(Me.ExcelInstance.Hwnd), ExcelPID)
If ExcelPID > 0 Then
Dim ExcelProc As Process = Process.GetProcessById(ExcelPID)
If ExcelProc IsNot Nothing Then ExcelProc.Kill()
End If
Please note this might not work on all platforms because of the PInvoke... To date, this is the only method I have found to be reliable. I have also tried to find the correct PID by enumerating all Excel processes and comparing the Process.MainModule.BaseAddress to the Excel.Application.Hinstance.
'DO NOT USE THIS METHOD, for demonstration only
For Each p as Process in ExcelProcesses
Dim BaseAddr As Integer = p.MainModule.BaseAddress.ToInt32()
If BaseAddr = Me.ExcelInstance.Hinstance Then
p.Kill()
Exit For
End If
Next
This is not a reliable way to find the correct process, as the BaseAddress sometimes seems to be the same for several processes (resulting in killing the wrong PID).
A: The command you need is "taskkill".
http://technet.microsoft.com/en-us/library/bb491009.aspx
> taskkill /im excel.exe
A: :). I jotted down my skirmish with Excel here. It also has some links that I found after some heavy searching. Hope it helps.
Basically Excel is a pain even though it can be automated.
A: Sorry to say this, and I'm not trying to be smart, but... don't put office on the server!!!
That's if I've understood correctly! :)
EDIT: Even though I've been marked down for this, I will never ever advocate running Office on the server - it has proven way too much of a pain in the ass for me in the past.
Having said that, the same now goes for me and Crystal Reports ;-)
A: I also would not recommend using office apps on a server except for data access to mdb files.
I can definitely understand that there are times when it is necessary. In those cases
I would recommend the following:
*
*Create a separate server where that is the only function. (Lets you reboot with minimum impact.)
*Have the server implement a mechanism of queuing requests
*Keep a single thread processing the queue. This gives you the ability to keep track of the office app, kill it if necessary, and continue on without impacting any queued up jobs or other applications.
If you absolutely need to do it on the same server, then at least implement the above in it's own app pool.
Limiting yourself to keeping a queue of work and only one instance of Excel (or any other Office app) lets you kill it with abandon with TaskKill or .Kill() and not lose work.
I believe if you keep it to a single thread then you would rarely have a need to kill it.
A: I have used spreadsheetgear to generate XL reports on the server and it works really well. We don't have to worry about the EXCEL process..
A: You need to safely dispose of all COM interop objects after you end your work. By "all" I mean absolutely all: collections, property values, and so on. I created a stack object and pushed objects onto it during their setup:
Stack<object> comObjectsToRelease = new Stack<object>();
...
Log("Creating VBProject object.");
VBProject vbProject = workbook.VBProject;
comObjectsToRelease.Push(vbProject);
...
finally
{
if(excel != null)
{
Log("Quiting Excel.");
excel.Quit();
excel = null;
}
while (comObjectsToRelease.Count > 0)
{
Log("Releasing {0} COM object.", comObjectsToRelease.GetType().Name);
Marshal.FinalReleaseComObject(comObjectsToRelease.Pop());
}
Log("Invoking garbage collection.");
GC.Collect();
}
If Excel is still there you have to kill it manually.
A: I had a similar problem and used the following code:
System.Diagnostics.Process[] procs = System.Diagnostics.Process.GetProcesses();
for (int i = 0; i < procs.Length; i++)
{
if(procs[i].ProcessName == "EXCEL")
{
procs[i].Kill();
}
}
This worked pretty well, but I would really think about working with Office on a server.
A: I actually had a question that was similar to this awhile back - Check for hung Office process when using Office Automation - some of the responses to that question might be useful for you.
Also, I have to agree with what everyone else is saying in regards to keeping any Office products off of a server; however, since you are doing Excel, it might be feasible for you to generate Excel XML documents. You can do this without having to do any Office automation, and the process is fairly straightforward; a small sketch follows below. For simple grid-based spreadsheets I have found it to be a bit easier than trying to automate it using Excel. Office Open XML is quite powerful as well, and allows for more complex reports with some more effort.
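For illustration, a minimal SpreadsheetML (Excel 2003 XML) document looks like this; Excel opens it directly, and the sheet name and cell values here are placeholders:
<?xml version="1.0"?>
<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"
          xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">
  <Worksheet ss:Name="Report">
    <Table>
      <Row>
        <Cell><Data ss:Type="String">Hello</Data></Cell>
        <Cell><Data ss:Type="Number">42</Data></Cell>
      </Row>
    </Table>
  </Worksheet>
</Workbook>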
A: The best approach is to use a purpose built library such as the one from Aspose to generate the spreadsheets or populate templates. The next best approach is to use the xml formats for office if practical for your needs. A lightweight approach that is sometimes suitable is to create an HTML file with one table in it and name it with an .xls extension. Excel will happily read that, but it is very limited in what it can do.
Those are the options I've used (but not much). There's also a thing called Microsoft Office Sharepoint Server, but I've no idea how much it really lets you do.
That said, your problem is happening because when you invoke the regular Excel libraries, you're actually spinning up Excel completely independently of .NET and just working with a proxy library to talk to it. This is pretty much the same kind of thing you'd have with WCF and a service: you wouldn't expect the service to die just because the client application was done using it. Worse, Excel is an unmanaged resource and will not be disposed/finalized/garbage collected at all; the .NET Runtime doesn't know about Excel, it just knows about those proxies. Application.Quit is what you need, and you may also need to explicitly release the COM objects that are created.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How do you show events in UML Class Diagrams? This one has me stumped regularly while creating top level class diagrams for documentation. Methods and attributes/fields are easy to model.
I usually end up adding a method named EvChanged to indicate a .Net event Changed.
What is the right way to show that a type publishes a specific event?
A: I find onEventName() the easiest naming scheme for event callbacks, but I've not found any solution for indicating which events an object can broadcast. An extended UML class diagram that allowed for customized containers (besides the attribute and method containers) could be an alternative, if some tool supported it.
A: Just add an «event» stereotype to a classifier attribute.
A: I don't think there is any specific UML notation for showing events that a Class can broadcast. To show events that a Class can receive, you want a Reception element. This has a similar notation to an Operation, with the «signal» keyword.
A: I create a stereotype in the model, "PublishedEvent", with a BaseClass of Operation. I apply the stereotype to the Operations in the class.
A: Not the type of answer that I like to give, but Microsoft has an answer on the Office website.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: How do I reset a sequence in Oracle? In PostgreSQL, I can do something like this:
ALTER SEQUENCE serial RESTART WITH 0;
Is there an Oracle equivalent?
A: For regular sequences:
alter sequence serial restart start with 1;
For system-generated sequences used for identity columns:
alter table table_name modify id generated by default on null as identity(start with 1);
This feature was officially added in 18c but is unofficially available since 12.1.
It is arguably safe to use this undocumented feature in 12.1. Even though the syntax is not included in the official documentation, it is generated by the Oracle package DBMS_METADATA_DIFF. I've used it several times on production systems. However, I created an Oracle Service Request and they verified that it's not a documentation bug; the feature is truly unsupported.
In 18c, the feature does not appear in the SQL Language Syntax, but is included in the Database Administrator's Guide.
A: This stored procedure restarts my sequence:
Create or Replace Procedure Reset_Sequence
is
SeqNbr Number;
begin
/* Reset sequence seqXRef to 0 */
Execute Immediate 'Select seqXRef.nextval from dual ' Into SeqNbr;
Execute Immediate 'Alter sequence seqXRef increment by -' || TO_CHAR(SeqNbr) || ' minvalue 0';
Execute Immediate 'Select seqXRef.nextval from dual ' Into SeqNbr;
Execute Immediate 'Alter sequence seqXRef increment by 1';
END;
/
A: This is my approach:
*
*drop the sequence
*recreate it
Example:
--Drop sequence
DROP SEQUENCE MY_SEQ;
-- Create sequence
create sequence MY_SEQ
minvalue 1
maxvalue 999999999999999999999
start with 1
increment by 1
cache 20;
A: There is another way to reset a sequence in Oracle: set the maxvalue and cycle properties. When the nextval of the sequence hits the maxvalue, if the cycle property is set then it will begin again from the minvalue of the sequence.
The advantage of this method compared to setting a negative increment by is that the sequence can continue to be used while the reset process runs, reducing the chance you need to take some form of outage to do the reset.
The value for maxvalue has to be greater than the current nextval, so the procedure below includes an optional parameter allowing a buffer in case the sequence is accessed again between selecting the nextval in the procedure and setting the cycle property.
create sequence s start with 1 increment by 1;
select s.nextval from dual
connect by level <= 20;
NEXTVAL
----------
1
...
20
create or replace procedure reset_sequence ( i_buffer in pls_integer default 0)
as
maxval pls_integer;
begin
maxval := s.nextval + greatest(i_buffer, 0); --ensure we don't go backwards!
execute immediate 'alter sequence s cycle minvalue 0 maxvalue ' || maxval;
maxval := s.nextval;
execute immediate 'alter sequence s nocycle maxvalue 99999999999999';
end;
/
show errors
exec reset_sequence;
select s.nextval from dual;
NEXTVAL
----------
1
The procedure as it stands still allows the possibility that another session will fetch the value 0, which may or may not be an issue for you. If it is, you could always:
*
*Set minvalue 1 in the first alter
*Exclude the second nextval fetch
*Move the statement to set the nocycle property into another procedure, to be run at a later date (assuming you want to do this).
A: 1) Suppose you create a SEQUENCE as shown below:
CREATE SEQUENCE TESTSEQ
INCREMENT BY 1
MINVALUE 1
MAXVALUE 500
NOCACHE
NOCYCLE
NOORDER
2) Now you fetch values from the SEQUENCE. Let's say I have fetched four times, as shown below.
SELECT TESTSEQ.NEXTVAL FROM dual
SELECT TESTSEQ.NEXTVAL FROM dual
SELECT TESTSEQ.NEXTVAL FROM dual
SELECT TESTSEQ.NEXTVAL FROM dual
3) After executing the above four commands, the value of the SEQUENCE will be 4. Now suppose I want to reset the value of the SEQUENCE to 1 again. Then follow the steps below, in the same order as shown:
*
*ALTER SEQUENCE TESTSEQ INCREMENT BY -3;
*SELECT TESTSEQ.NEXTVAL FROM dual
*ALTER SEQUENCE TESTSEQ INCREMENT BY 1;
*SELECT TESTSEQ.NEXTVAL FROM dual
A: My approach is a teensy extension to Dougman's example.
Extensions are...
Pass in the seed value as a parameter. Why? I like to call this to reset the sequence back to the max ID used in some table. I end up calling this proc from another script which executes multiple calls for a whole bunch of sequences, resetting nextval back down to some level which is high enough not to cause primary key violations where I'm using the sequence's value for a unique identifier.
It also honors the previous minvalue. It may in fact push the next value ever higher if the desired p_val or existing minvalue are higher than the current or calculated next value.
Best of all, it can be called to reset to a specified value, and just wait until you see the wrapper "fix all my sequences" procedure at the end.
create or replace
procedure Reset_Sequence( p_seq_name in varchar2, p_val in number default 0)
is
l_current number := 0;
l_difference number := 0;
l_minvalue user_sequences.min_value%type := 0;
begin
select min_value
into l_minvalue
from user_sequences
where sequence_name = p_seq_name;
execute immediate
'select ' || p_seq_name || '.nextval from dual' INTO l_current;
if p_Val < l_minvalue then
l_difference := l_minvalue - l_current;
else
l_difference := p_Val - l_current;
end if;
if l_difference = 0 then
return;
end if;
execute immediate
'alter sequence ' || p_seq_name || ' increment by ' || l_difference ||
' minvalue ' || l_minvalue;
execute immediate
'select ' || p_seq_name || '.nextval from dual' INTO l_difference;
execute immediate
'alter sequence ' || p_seq_name || ' increment by 1 minvalue ' || l_minvalue;
end Reset_Sequence;
That procedure is useful all by itself, but now let's add another one which calls it and specifies everything programmatically with a sequence naming convention and looking for the maximum value used in an existing table/field...
create or replace
procedure Reset_Sequence_to_Data(
p_TableName varchar2,
p_FieldName varchar2
)
is
l_MaxUsed NUMBER;
BEGIN
execute immediate
'select coalesce(max(' || p_FieldName || '),0) from '|| p_TableName into l_MaxUsed;
Reset_Sequence( p_TableName || '_' || p_Fieldname || '_SEQ', l_MaxUsed );
END Reset_Sequence_to_Data;
Now we're cooking with gas!
The procedure above checks for a field's max value in a table, builds a sequence name from the table/field pair, and invokes "Reset_Sequence" with that sensed max value.
The final piece in this puzzle and the icing on the cake comes next...
create or replace
procedure Reset_All_Sequences
is
BEGIN
Reset_Sequence_to_Data( 'ACTIVITYLOG', 'LOGID' );
Reset_Sequence_to_Data( 'JOBSTATE', 'JOBID' );
Reset_Sequence_to_Data( 'BATCH', 'BATCHID' );
END Reset_All_Sequences;
In my actual database there are around one hundred other sequences being reset through this mechanism, so there are 97 more calls to Reset_Sequence_to_Data in that procedure above.
Love it? Hate it? Indifferent?
A: Jesus, all this programming for just an index restart...
Perhaps I'm an idiot, but for pre-Oracle 12 (which has a restart feature), what is wrong with a simple:
drop sequence blah;
create sequence blah
?
A: Altering the sequence's INCREMENT value, incrementing it, and then altering it back is pretty painless, plus you have the added benefit of not having to re-establish all of the grants as you would had you dropped/recreated the sequence.
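As a sketch of that idea, using a hypothetical sequence my_seq currently at 500 that you want back at 100 (the same trick is shown in more detail in another answer below):
ALTER SEQUENCE my_seq INCREMENT BY -400;
SELECT my_seq.NEXTVAL FROM dual;
ALTER SEQUENCE my_seq INCREMENT BY 1;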
A: You can use the CYCLE option, shown below:
CREATE SEQUENCE test_seq
MINVALUE 0
MAXVALUE 100
START WITH 0
INCREMENT BY 1
CYCLE;
In this case, when the sequence reaches MAXVALUE (100), it will recycle to the MINVALUE (0).
In the case of a decremented sequence, the sequence would recycle to the MAXVALUE.
A: I create a block to reset all my sequences:
DECLARE
l_val number;
BEGIN
FOR US IN
(SELECT US.SEQUENCE_NAME FROM USER_SEQUENCES US)
LOOP
execute immediate 'select ' || US.SEQUENCE_NAME || '.nextval from dual' INTO l_val;
execute immediate 'alter sequence ' || US.SEQUENCE_NAME || ' increment by -' || l_val || ' minvalue 0';
execute immediate 'select ' || US.SEQUENCE_NAME || '.nextval from dual' INTO l_val;
execute immediate 'alter sequence ' || US.SEQUENCE_NAME || ' increment by 1 minvalue 0';
END LOOP;
END;
A: Here's a more robust procedure for altering the next value returned by a sequence, plus a whole lot more.
*
*First off it protects against SQL injection attacks since none of the strings passed in are used to directly create any of the dynamic SQL statements,
*Second it prevents the next sequence value from being set outside the bounds of the min or max sequence values. The next value will be greater than min_value and no greater than max_value.
*Third it takes the current (or proposed) increment_by setting as well as all the other sequence settings into account when cleaning up.
*Fourth all parameters except the first are optional and unless specified take on the current sequence setting as defaults. If no optional parameters are specified no action is taken.
*Finally if you try altering a sequence that doesn't exist (or is not owned by the current user) it will raise an ORA-01403: no data found error.
Here's the code:
CREATE OR REPLACE PROCEDURE alter_sequence(
seq_name user_sequences.sequence_name%TYPE
, next_value user_sequences.last_number%TYPE := null
, increment_by user_sequences.increment_by%TYPE := null
, min_value user_sequences.min_value%TYPE := null
, max_value user_sequences.max_value%TYPE := null
, cycle_flag user_sequences.cycle_flag%TYPE := null
, cache_size user_sequences.cache_size%TYPE := null
, order_flag user_sequences.order_flag%TYPE := null)
AUTHID CURRENT_USER
AS
l_seq user_sequences%rowtype;
l_old_cache user_sequences.cache_size%TYPE;
l_next user_sequences.min_value%TYPE;
BEGIN
-- Get current sequence settings as defaults
SELECT * INTO l_seq FROM user_sequences WHERE sequence_name = seq_name;
-- Update target settings
l_old_cache := l_seq.cache_size;
l_seq.increment_by := nvl(increment_by, l_seq.increment_by);
l_seq.min_value := nvl(min_value, l_seq.min_value);
l_seq.max_value := nvl(max_value, l_seq.max_value);
l_seq.cycle_flag := nvl(cycle_flag, l_seq.cycle_flag);
l_seq.cache_size := nvl(cache_size, l_seq.cache_size);
l_seq.order_flag := nvl(order_flag, l_seq.order_flag);
IF next_value is NOT NULL THEN
-- Determine next value without exceeding limits
l_next := LEAST(GREATEST(next_value, l_seq.min_value+1),l_seq.max_value);
-- Grab the actual latest seq number
EXECUTE IMMEDIATE
'ALTER SEQUENCE '||l_seq.sequence_name
|| ' INCREMENT BY 1'
|| ' MINVALUE '||least(l_seq.min_value,l_seq.last_number-l_old_cache)
|| ' MAXVALUE '||greatest(l_seq.max_value,l_seq.last_number)
|| ' NOCACHE'
|| ' ORDER';
EXECUTE IMMEDIATE
'SELECT '||l_seq.sequence_name||'.NEXTVAL FROM DUAL'
INTO l_seq.last_number;
l_next := l_next-l_seq.last_number-1;
-- Reset the sequence number
IF l_next <> 0 THEN
EXECUTE IMMEDIATE
'ALTER SEQUENCE '||l_seq.sequence_name
|| ' INCREMENT BY '||l_next
|| ' MINVALUE '||least(l_seq.min_value,l_seq.last_number)
|| ' MAXVALUE '||greatest(l_seq.max_value,l_seq.last_number)
|| ' NOCACHE'
|| ' ORDER';
EXECUTE IMMEDIATE
'SELECT '||l_seq.sequence_name||'.NEXTVAL FROM DUAL'
INTO l_next;
END IF;
END IF;
-- Prepare Sequence for next use.
IF COALESCE( cycle_flag
, next_value
, increment_by
, min_value
, max_value
, cache_size
, order_flag) IS NOT NULL
THEN
EXECUTE IMMEDIATE
'ALTER SEQUENCE '||l_seq.sequence_name
|| ' INCREMENT BY '||l_seq.increment_by
|| ' MINVALUE '||l_seq.min_value
|| ' MAXVALUE '||l_seq.max_value
|| CASE l_seq.cycle_flag
WHEN 'Y' THEN ' CYCLE' ELSE ' NOCYCLE' END
|| CASE l_seq.cache_size
WHEN 0 THEN ' NOCACHE'
ELSE ' CACHE '||l_seq.cache_size END
|| CASE l_seq.order_flag
WHEN 'Y' THEN ' ORDER' ELSE ' NOORDER' END;
END IF;
END;
A: In my project, it once happened that someone manually entered records without using the sequence, hence I had to reset the sequence value manually, for which I wrote the SQL code snippet below:
declare
max_db_value number(10,0);
cur_seq_value number(10,0);
counter number(10,0);
difference number(10,0);
dummy_number number(10);
begin
-- enter table name here
select max(id) into max_db_value from persons;
-- enter sequence name here
select last_number into cur_seq_value from user_sequences where sequence_name = 'SEQ_PERSONS';
difference := max_db_value - cur_seq_value;
for counter in 1..difference
loop
-- change sequence name here as well
select SEQ_PERSONS.nextval into dummy_number from dual;
end loop;
end;
Please note, the above code will only work if the sequence is lagging behind the table.
A: Here is a good procedure for resetting any sequence to 0 from Oracle guru Tom Kyte. Great discussion on the pros and cons in the links below too.
create or replace
procedure reset_seq( p_seq_name in varchar2 )
is
l_val number;
begin
execute immediate
'select ' || p_seq_name || '.nextval from dual' INTO l_val;
execute immediate
'alter sequence ' || p_seq_name || ' increment by -' || l_val ||
' minvalue 0';
execute immediate
'select ' || p_seq_name || '.nextval from dual' INTO l_val;
execute immediate
'alter sequence ' || p_seq_name || ' increment by 1 minvalue 0';
end;
/
From this page: Dynamic SQL to reset sequence value
Another good discussion is also here: How to reset sequences?
A: A true restart is not possible AFAIK. (Please correct me if I'm wrong!).
However, if you want to set it to 0, you can just delete and recreate it.
If you want to set it to a specific value, you can set the INCREMENT to a negative value and get the next value.
That is, if your sequence is at 500, you can set it to 100 via
ALTER SEQUENCE serial INCREMENT BY -400;
SELECT serial.NEXTVAL FROM dual;
ALTER SEQUENCE serial INCREMENT BY 1;
A: The following script sets the sequence to a desired value:
Given a freshly created sequence named PCS_PROJ_KEY_SEQ and table PCS_PROJ:
BEGIN
DECLARE
PROJ_KEY_MAX NUMBER := 0;
PROJ_KEY_CURRVAL NUMBER := 0;
BEGIN
SELECT MAX (PROJ_KEY) INTO PROJ_KEY_MAX FROM PCS_PROJ;
EXECUTE IMMEDIATE 'ALTER SEQUENCE PCS_PROJ_KEY_SEQ INCREMENT BY ' || PROJ_KEY_MAX;
SELECT PCS_PROJ_KEY_SEQ.NEXTVAL INTO PROJ_KEY_CURRVAL FROM DUAL;
EXECUTE IMMEDIATE 'ALTER SEQUENCE PCS_PROJ_KEY_SEQ INCREMENT BY 1';
END;
END;
/
A: I made an alternative where the user doesn't need to know the values; the system gets them and uses variables for the update.
-- Updating the sequence for table SIGA_TRANSACAO, since it is out of date
DECLARE
actual_sequence_number INTEGER;
max_number_from_table INTEGER;
difference INTEGER;
BEGIN
SELECT [nome_da_sequence].nextval INTO actual_sequence_number FROM DUAL;
SELECT MAX([nome_da_coluna]) INTO max_number_from_table FROM [nome_da_tabela];
SELECT (max_number_from_table-actual_sequence_number) INTO difference FROM DUAL;
IF difference > 0 then
EXECUTE IMMEDIATE CONCAT('alter sequence [nome_da_sequence] increment by ', difference);
-- fetch the next value here, using the required increment
SELECT [nome_da_sequence].nextval INTO actual_sequence_number from dual;
-- set the increment back to 1, so future inserts work normally
EXECUTE IMMEDIATE 'ALTER SEQUENCE [nome_da_sequence] INCREMENT by 1';
DBMS_OUTPUT.put_line ('Sequence [nome_da_sequence] was updated.');
ELSE
DBMS_OUTPUT.put_line ('Sequence [nome_da_sequence] was NOT updated, it was already OK!');
END IF;
END;
A: Here's how to make all auto-increment sequences match actual data:
*
*Create a procedure to enforce next value as was already described in this thread:
CREATE OR REPLACE PROCEDURE Reset_Sequence(
P_Seq_Name IN VARCHAR2,
P_Val IN NUMBER DEFAULT 0)
IS
L_Current NUMBER := 0;
L_Difference NUMBER := 0;
L_Minvalue User_Sequences.Min_Value%Type := 0;
BEGIN
SELECT Min_Value
INTO L_Minvalue
FROM User_Sequences
WHERE Sequence_Name = P_Seq_Name;
EXECUTE Immediate 'select ' || P_Seq_Name || '.nextval from dual' INTO L_Current;
IF P_Val < L_Minvalue THEN
L_Difference := L_Minvalue - L_Current;
ELSE
L_Difference := P_Val - L_Current;
END IF;
IF L_Difference = 0 THEN
RETURN;
END IF;
EXECUTE Immediate 'alter sequence ' || P_Seq_Name || ' increment by ' || L_Difference || ' minvalue ' || L_Minvalue;
EXECUTE Immediate 'select ' || P_Seq_Name || '.nextval from dual' INTO L_Difference;
EXECUTE Immediate 'alter sequence ' || P_Seq_Name || ' increment by 1 minvalue ' || L_Minvalue;
END Reset_Sequence;
*Create another procedure to reconcile all sequences with actual content:
CREATE OR REPLACE PROCEDURE RESET_USER_SEQUENCES_TO_DATA
IS
STMT CLOB;
BEGIN
SELECT 'select ''BEGIN'' || chr(10) || x || chr(10) || ''END;'' FROM (select listagg(x, chr(10)) within group (order by null) x FROM ('
|| X
|| '))'
INTO STMT
FROM
(SELECT LISTAGG(X, ' union ') WITHIN GROUP (
ORDER BY NULL) X
FROM
(SELECT CHR(10)
|| 'select ''Reset_Sequence('''''
|| SEQ_NAME
|| ''''','' || coalesce(max('
|| COL_NAME
|| '), 0) || '');'' x from '
|| TABLE_NAME X
FROM
(SELECT TABLE_NAME,
REGEXP_SUBSTR(WTEXT, 'NEW\.(\S*) IS NULL',1,1,'i',1) COL_NAME,
REGEXP_SUBSTR(BTEXT, '(\.|\s)([a-z_]*)\.nextval',1,1,'i',2) SEQ_NAME
FROM USER_TRIGGERS
LEFT JOIN
(SELECT NAME BNAME,
TEXT BTEXT
FROM USER_SOURCE
WHERE TYPE = 'TRIGGER'
AND UPPER(TEXT) LIKE '%NEXTVAL%'
)
ON BNAME = TRIGGER_NAME
LEFT JOIN
(SELECT NAME WNAME,
TEXT WTEXT
FROM USER_SOURCE
WHERE TYPE = 'TRIGGER'
AND UPPER(TEXT) LIKE '%IS NULL%'
)
ON WNAME = TRIGGER_NAME
WHERE TRIGGER_TYPE = 'BEFORE EACH ROW'
AND TRIGGERING_EVENT = 'INSERT'
)
)
) ;
EXECUTE IMMEDIATE STMT INTO STMT;
--dbms_output.put_line(stmt);
EXECUTE IMMEDIATE STMT;
END RESET_USER_SEQUENCES_TO_DATA;
NOTES:
*
*Procedure extracts names from trigger code and does not depend on naming conventions
*To check generated code before execution, switch comments on last two lines
A: Stored procedure that worked for me
create or replace
procedure reset_sequence( p_seq_name in varchar2, tablename in varchar2 )
is
l_val number;
maxvalueid number;
begin
execute immediate 'select ' || p_seq_name || '.nextval from dual' INTO l_val;
execute immediate 'select max(id) from ' || tablename INTO maxvalueid;
execute immediate 'alter sequence ' || p_seq_name || ' increment by -' || l_val || ' minvalue 0';
execute immediate 'select ' || p_seq_name || '.nextval from dual' INTO l_val;
execute immediate 'alter sequence ' || p_seq_name || ' increment by '|| maxvalueid ||' minvalue 0';
execute immediate 'select ' || p_seq_name || '.nextval from dual' INTO l_val;
execute immediate 'alter sequence ' || p_seq_name || ' increment by 1 minvalue 0';
end;
How to use the stored procedure:
execute reset_sequence('company_sequence','company');
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "197"
} |
Q: Polygon fill modes in GDI and GDI+ The system default polygon fill mode in the current device context is ALTERNATE (as I've learned from the Petzold book on Windows programming), and this is the mode used by the Polygon Win32 function unless you change it with SetPolyFillMode.
My question is:
Does the GDI+ Graphics::FillPolygon method (the overload without the FillMode parameter in its signature) also use the current device context's fill mode, or does it set the well-known default and then restore the mode that was set before it was called?
Thanks!
A: I don't know the answer off the top of my head, but you could try finding out by retrieving the fill mode before and after the call. If it's not different, it's either not been changed, or was changed then changed back.
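A rough sketch of that experiment in C++ (assuming you already have an HDC hdc, a Gdiplus::Graphics named graphics bound to it, and a brush and a points array to draw with):
int before = GetPolyFillMode(hdc);       // ALTERNATE (1) or WINDING (2)
graphics.FillPolygon(&brush, points, 3); // the GDI+ call under test
int after = GetPolyFillMode(hdc);
// If before != after, FillPolygon changed the DC's fill mode and left it changed;
// if they match, it either never touched it or restored it afterwards.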
A: I looked at the reference source and FillPolygon without a fill mode simply calls FillPolygon with a fill mode of alternate.
FillPolygon with a fill mode calls a method named GdipFillPolygonI, but I can't find anything about that method.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What usable alternatives to XML syntax do you know? For me usable means that:
*
*it's being used in real-wold
*it has tools support. (at least some simple editor)
*it has human readable syntax (no angle brackets please)
Also I want it to be as close to XML as possible, i.e. there must be support for attributes as well as for properties. So, no YAML please. Currently, only one matching language comes to my mind - JSON. Do you know any other alternatives?
A: TL;DR
Prolog wasn't mentioned here, but it is the best format I know of for representing data. Prolog programs, essentially, describe databases, with complex relationships between entities. Prolog is dead-simple to parse; probably its only rival in this domain is S-expressions.
Full version
Programmers often "forget" what XML actually consists of. Usually referring to a very small subset of what it is. XML is a very complex format, with at least these parts: DTD schema language, XSD schema language, XSLT transformation language, RNG schema language and XPath (plus XQuery) languages - they all are part and parcel of XML standard. Plus, there are some apocrypha like E4X. Each and every one of them having their own versions, quite a bit of overlap, incompatibilities etc. Very few XML parsers in the wild implement all of them. Not to mention the multiple quirks and bugs of the popular parses, some leading to notable security issues like https://en.wikipedia.org/wiki/XML_external_entity_attack .
Therefore, looking for an XML alternative is not a very good idea. You probably don't want to deal with the likes of XML at all.
YAML is, probably, the second worst option. It's not as big as XML, but it was also designed in an attempt to cover all bases... more than ten times each... in different and unique ways nobody could ever conceive of. I'm yet to hear about a properly working YAML parser. Ruby, the language that uses YAML a lot, had famously screwed up because of it. All YAML parsers I've seen to date are copies of libyaml, which is itself a hand-written (not a generated from a formal description) kind of parser, with a code which is very difficult to verify for correctness (functions that span hundreds of lines with convoluted control flow). As was already mentioned, it completely contains JSON in it... on top of a handful of Unicode coding techniques... inside the same document, and probably a bunch of other stuff you don't want to hear about.
JSON, on the other hand, is completely unlike the other two. You can probably write a JSON parser while waiting for a JSON parser artifact to download from your Maven Nexus. It can do very little, but at least you know what it's capable of. No surprises. (Except for some discrepancies related to character escaping in strings, and the encoding of doubles.) No covert exploits. You cannot write comments in it. Multiline strings look bad. Whatever you mean by the distinction between properties and attributes, you can implement with more nested dictionaries.
Suppose, though, you wanted to right what XML wronged... well, then popular stuff like YAML or JSON won't do it. Somehow fashion and rational thinking parted ways in programming some time in the mid-seventies. So, you'll have to go back to where it all began with McCarthy, Hoare, Codd and Kowalski, figure out what it is you are trying to represent, and then see what the best representation technique is for whatever it is you are trying to represent :)
A: I have found S-Expressions to be a great way to represent structured data. It's a very simple format which is easy to generate and parse. It doesn't support attributes, but like YAML & JSON, it doesn't need to. Attributes are simply a way for XML to limit verbosity. Simpler, cleaner formats just don't need them.
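For example, the director/movies data used in the YAML answer below could be sketched in S-expressions like this (there is no single canonical layout, so this is just one way):
(director
  (name "Spielberg")
  (movies
    (movie (title "Jaws") (year 1975))
    (movie (title "E.T.") (year 1982))))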
A: YAML is a 100% superset of JSON, so it doesn't make sense to reject YAML and then consider JSON instead. YAML does everything JSON does, but YAML gives so much more too (like references).
I can't think of anything XML can do that YAML can't, except to validate a document with a DTD, which in my experience has never been worth the overhead. But YAML is so much faster and easier to type and read than XML.
As for attributes or properties, if you think about it, they don't truly "add" anything... it's just a notational shortcut to write something as an attribute of the node instead of putting it in its own child node. But if you like that convenience, you can often emulate it with YAML's inline lists/hashes. Eg:
<!-- XML -->
<Director name="Spielberg">
<Movies>
<Movie title="Jaws" year="1975"/>
<Movie title="E.T." year="1982"/>
</Movies>
</Director>
# YAML
Director:
  name: Spielberg
  Movies:
    - Movie: {title: Jaws, year: 1975}
    - Movie: {title: E.T., year: 1982}
For me, the luxury of not having to write each node tag twice, combined with the freedom from all the angle-bracket litter makes YAML a preferred choice. I also actually like the lack of formal tag attributes, as that always seemed to me like a gray area of XML that needlessly introduced two sets of syntax (both when writing and traversing) for essentially the same concept. YAML does away with that confusion altogether.
A: Jeff wrote about this here and here. That should help you get started.
A: Your demands are a bit impossible... You want something close to XML, but reject probably the closest equivalent that doesn't have angle brackets (YAML).
As much as I dislike it, why not just use XML? You shouldn't ever have to actually read XML (aside from debugging, I suppose), and there is an absurd number of tools about for it.
Pretty much anything that isn't XML isn't going to be as widely used, thus there will be less tool support.
JSON is probably about equivalent, but it's pretty much equally unreadable.. but again, you shouldn't ever have to actually read it (load it into whatever language you are using, and it should be transformed into native arrays/dicts/variables/whatever).
Oh, I do find JSON far nicer to parse than XML: I've used it in JavaScript, and the simplejson Python module - about one command and it's nicely transformed into a native Python dict, or a JavaScript object (thus the name!).
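To make that concrete, a minimal sketch with Python's standard json module (which grew out of simplejson):
import json

data = json.loads('{"name": "Frank Martin", "age": 32}')
print(data["name"])     # Frank Martin
print(data["age"] + 1)  # 33 - it's a real int, not a string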
A: There is AXON, which covers the best of XML and JSON. Let me explain with several examples.
AXON could be considered a shorter form of XML data.
XML
<person>
<name>Frank Martin</name>
<age>32</age>
</person>
AXON
person{
name{"Frank Martin"}
age{32}}
XML
<person name="Frank Martin" age="32" />
AXON
person{name:"Frank Martin" age:32}
AXON contains some form of JSON.
JSON
{"name":"Frank Martin" "age":32 "birth":"1965-12-24"}
AXON
{name:"Frank Martin" age:32 birth:1965-12-24}
AXON can represent a combination of XML-like and JSON-like data.
AXON
table {
fields {
("id" "int") ("val1" "double") ("val2" "int") ("val3" "double")
}
rows {
(1 3.2 123 -3.4)
(2 3.5 303 2.4)
(3 2.3 235 -1.2)
}
}
The Python library pyaxon is available now.
A: I would recommend JSON ... but since you already mentioned it maybe you should take a look at Google protocol buffers.
Edit: Protocol buffers are made to be used programmatically (there are bindings for C++, Java, Python, ...) so they may not be suited for your purpose.
A: I think Clearsilver is a very good alternative. They even have a comparison page here and a list of projects that use it
A: YAML is extremely fully-featured and generally human-readable format, but it's Achilles heal is complexity as demonstrated by the Rails vulnerabilities we saw this winter. Due to its ubiquity in Ruby as a config language Tom Preston-Werner of Github fame stepped up to create a sane alternative dubbed TOML. It gained massive traction immediately and has great tool support. I highly recommend anyone looking at YAML check it out:
https://github.com/mojombo/toml
A: For storing code-like data, LES (Loyc Expression Syntax) is a budding alternative. I've noticed a lot of people use XML for code-like constructs, such as build systems which support conditionals, command invocations, sometimes even loops. These sorts of things look natural in LES:
// LES code has no built-in meaning. This just shows what it looks like.
[DelayedWrite]
Output(
if version > 4.0 {
$ProjectDir/Src/Foo;
} else {
$ProjectDir/Foo;
}
);
It doesn't have great tool support yet, though; currently the only LES library is for C#. Currently only one app is known to use LES: LLLPG. It supports "attributes" but they are like C# attributes or Java annotations, not XML attributes.
In theory you could use LES for data or markup, but there are no standards for how to do that:
body {
'''Click here to use the World's '''
a href="http://google.com" {
strong "most popular"; " search engine!"
};
};
point = (2, -3);
tasteMap = { "lemon" -> sour; "sugar" -> sweet; "grape" -> yummy };
A: JSON is a very good alternative, and there are tools for it in multiple languages. And it's really easy to use in web clients, as it is native javascript.
A: If you're allergic to angle brackets, then JSON, HDF (ClearSilver), and OGDL are the only ones I know offhand.
After a bit of googling, I also found a list of alternatives here:
http://web.archive.org/web/20060325012720/www.pault.com/xmlalternatives.html
A: AFAIK, JSON and YAML are exactly equivalent in data-structure terms. YAML just has fewer brackets and quotes and stuff. So I don't see how you are rejecting one and keeping the other.
Also, I don't see how XML's angle brackets are less "human readable" than JSON's square brackets, curly brackets and quotes.
A: There are truly plenty of alternatives to XML, but the main problem with many of them seems to be that libraries might not be available for every language of choice and the libraries are relatively laborious to implement.
Parsing a tree structure itself might not be that pleasant compared to key-value pairs, e.g. hash tables. If a hash table instance meets the requirement that all of its keys are strings and all of its values are strings, then it's relatively non-laborious to implement hashtable2string() and string2hashtable().
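To illustrate how little code such a flat string-to-string format needs, here is a generic Python sketch (this is not ProgFTE itself, just the general idea, and it assumes keys and values contain no tabs or newlines):
def hashtable2string(h):
    # one "key<TAB>value" pair per line
    return "\n".join(k + "\t" + v for k, v in h.items())

def string2hashtable(s):
    return dict(line.split("\t", 1) for line in s.splitlines() if line)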
I've been using the hash table serialization in AJAX between PHP and JavaScript and the format that I've developed, is called ProgFTE (Programmer Friendly text Exchange) and is described at:
http://martin.softf1.com/g/n//a2/doc/progfte/index.html
One can find a Ruby version of the ProgFTE implementation as part of the Kibuvits Ruby Library:
http://rubyforge.org/projects/kibuvits/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Are there any good automated frameworks for applying coding standards in Perl? One I am aware of is Perl::Critic
And my googling has resulted in no results on multiple attempts so far. :-(
Does anyone have any recommendations here?
Any resources on configuring Perl::Critic per our coding standards and running it on our code base would be appreciated.
A: There is perltidy for most stylistic standards. perlcritic can be easily configured using a .perlcriticrc file. I personally use it at severity level one, but I've disabled a few policies.
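For example, a small .perlcriticrc along those lines might look like this (the disabled policies are just examples; pick your own):
severity = 1
[-CodeLayout::RequireTidyCode]
[-Documentation::RequirePodSections]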
A: In addition to 'automated frameworks', I highly recommend Damian Conway's Perl Best Practices. I don't agree with 100% of what he suggests, but most of the time he's bang on.
A: The post above mentioning Devel::Prof probably really means Devel::Cover (to get the code coverage of a test suite).
A: Like:
*
*http://metacpan.org/pod/Perl::Critic
*http://www.slideshare.net/joshua.mcadams/an-introduction-to-perl-critic/
Looks like a nice tool!
A: In terms of setting up a profile, have you tried perlcritic --profile-proto? This will emit to stdout all of your installed policies with all their options with descriptions of both, including their default values, in perlcriticrc format. Save and edit to match what you want. Whenever you upgrade Perl::Critic, you may want to run this command again and do a diff with your current perlcriticrc so you can see any changes to existing policies and pick up any new ones.
In terms of running perlcritic regularly, set up a Test::Perl::Critic test along with the rest of your tests. This is good for new code.
For your existing code, use Test::Perl::Critic::Progressive instead. T::P::C::Progressive will succeed the first time you run it, but will save counts on the number of violations; thereafter, T::P::C::Progressive will complain if any of the counts go up. One thing to look out for is when you revert changes in your source control system. (You are using one, aren't you?) Say I check in a change and run tests and my changes reduce the number of P::C violations. Later, it turns out my change was bad, so I revert to the old code. The T::P::C::Progressive test will fail due to the reduced counts. The easiest thing to do at this point is to just delete the history file (default location t/.perlcritic-history) and run again. It should reproduce your old counts and you can write new stuff to bring them down again.
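The test file itself is typically tiny - per the module's synopsis it is essentially:
use Test::Perl::Critic::Progressive qw( progressive_critic_ok );
progressive_critic_ok();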
Perl::Critic has a lot of policies that ship with it, but there are a bunch of add-on distributions of policies. Have a look at Task::Perl::Critic and
Task::Perl::Critic::IncludingOptionalDependencies.
You don't need to have a single perlcriticrc handle all your code. Create separate perlcriticrc files for each set of files you want to test and then a separate test that points to each one. For an example, have a look at the author tests for P::C itself at http://perlcritic.tigris.org/source/browse/perlcritic/trunk/Perl-Critic/xt/author/. When author tests are run, there's a test that runs over all the code of P::C, a second test that applies additional rules just on the policies, and a third one that criticizes P::C's tests.
I personally think that everyone should run at the "brutal" severity level, but knock out the policies that they don't agree with. Perl::Critic isn't entirely self compliant; even the P::C developers don't agree with everything Conway says. Look at the perlcriticrc files used on Perl::Critic itself and search the Perl::Critic code for instances of "## no critic"; I count 143 at present.
(Yes, I'm one of the Perl::Critic developers.)
A: A nice combination is perlcritic with EPIC for Eclipse - hit CTRL-SHIFT-C (or your preferred configured shortcut) and your code is marked up with warning indicators wherever perlcritic has found something to complain about. Much nicer than remembering to run it before checkin. And as normal with perlcritic, it will pick up your .perlcriticrc so you can customise the rules. We keep our .perlcriticrc in version control so everyone gets the same standards.
A: In addition to the cosmetic best practices, I always find it useful to run Devel::Prof on my unit test suite to check test coverage.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Where to find gdb for an arm target running on linux 2.6.21.5-cfs-v19 I have an ARM11-based hardware board which runs Linux kernel ver. 2.6.21.5-cfs-v19. I have my application running on this ARM-Linux board. To do source-level debugging of my application, I used to use gdb from the command prompt of the Linux board; that was with some earlier version of the board's Linux.
With this version, 2.6.21.5-cfs-v19, I don't have a matching version of gdb for the board. I tried to look around for gdb for this version of Linux and an ARM port of it, but without much success. Can anyone point me to where I can get either a standalone gdb executable for the above-mentioned OS and board configuration, or the source for the same, which I may try compiling for that target?
-AD.
A: Sometime ago I published an article about cross-target debugging with GDB and GDBServer. The target processor there is PPC7450, but it's rather detailed, so maybe you'll find it useful.
A: You might have some luck using OpenEmbedded. If there's no precompiled version you can use right away, setting up an OE-cross compile environment is not that hard.
Another option could be to install gdb-server on the board, like described in this blogpost.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Improving Python readability? I've been really enjoying Python programming lately. I come from a background of a strong love for C-based coding, where everything is perhaps more complicated than it should be (but puts hair on your chest, at least). So switching from C to Python for more complex things that don't require tons of speed has been more of a boon than a bane in writing projects.
However, coming from this land of brackets and parentheses and structs as far as the naked eye can see, I come across a small problem: I find Python difficult to read.
For example, the following block of text is hard for me to decipher unless I stare at it (which I dislike doing):
if foo:
    bar = baz
    while bar is not biz:
        bar = i_am_going_to_find_you_biz_i_swear_on_my_life()
did_i_not_warn_you_biz()
my_father_is_avenged()
The problem occurs at the end of that if block: all the tabbing and then suddenly returning to a jarring block feels almost disturbing. As a solution, I've started coding my Python like this:
if foo:
    bar = baz
    while bar is not biz:
        bar = i_am_going_to_find_you_biz_i_swear_on_my_life()
    #-- while --
#-- if --
did_i_not_warn_you_biz()
my_father_is_avenged()
And this, for some odd reason, makes me more able to read my own code. But I'm curious: has anyone else with my strange problem found easier ways to make their tabbed-out code more readable? I'd love to find out if there's a better way to do this before this becomes a huge habit for me.
A: You could try increasing the indent size, but in general I would just say, relax, it will come with time. I don't think trying to make Python look like C is a very good idea.
A: Rather than focusing on making your existing structures more readable, you should focus on making more logical structures. Make smaller blocks, try not to nest blocks excessively, make smaller functions, and try to think through your code flow more.
If you come to a point where you can't quickly determine the structure of your code, you should probably consider refactoring and adding some comments. Code flow should always be immediately apparent -- the more you have to think about it, the less maintainable your code becomes.
A: Perhaps the best thing would be to turn on "show whitespace" in your editor. Then you would have a visual indication of how far in each line is tabbed (usually a bunch of dots), and it will be more apparent when that changes.
A: Part of learning a new programming language is learning to read code in that language. A crutch like this may make it easier to read your own code, but it's going to impede the process of learning how to read anyone else's Python code. I really think you'd be better off getting rid of the end of block comments and getting used to normal Python.
A: I like to put blank lines around blocks to make control flow more obvious. For example:
if foo:
    bar = baz

    while bar is not biz:
        bar = i_am_going_to_find_you_biz_i_swear_on_my_life()

did_i_not_warn_you_biz()
my_father_is_avenged()
A: from __future__ import braces
Need I say more? :)
Seriously, PEP 8, 'Blank lines', §4 is the official way to do it.
A: I would look in to understanding more details about Python syntax. Often times if a piece of code looks odd, there usually is a better way to write it. For example, in the above example:
bar = baz if foo else None
while bar is not biz:
    bar = i_am_going_to_find_you_biz_i_swear_on_my_life()
did_i_not_warn_you_biz()
my_father_is_avenged()
While it is a small change, it might help the readability. Also, in all honesty, I've never used a while loop, so there is a good chance you would end up with a nice concise list comprehension or for loop instead. ;)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to make only certain parts of a site beta? Most sites are either fully released, or in beta.
But what happens if you have a large site, and some of the parts are still in Beta, and other parts aren't.
How do you effectively communicate this to the customer?
A: Maybe take a look at how Facebook, Bloglines, Gmail did it?
Like "We have this beta thing going on, come on over and see the same site with new stuff, but if it doesnt work, use the old parts"
Maybe gmail labs where you can sign up for "beta features"
A: If there's a certain way you enter the part of the beta site, maybe you can have a modal that pops up that they have to agree to every time. I wouldn't have it on every page since it gets annoying, so I would only use this approach if there is a definitive way to get into that part of the site (e.g. people won't be coming to random parts of the beta section through Google or something).
A: One way I've used for non-web software is a change to the background. So for example, if your normal site tends to have a plain white background, you could give the beta site a repeating "beta" text background image. You want to make it fairly faint so it is present but doesn't detract from the overall experience.
Another subtle but present option would just be to change the title bar.
Or you could do what google does, which is a large site with some of it in beta. Check out Google experimental search. Basically the site is no different, but it is hard to get to accidentally.
A: There are a few ways.
*
*Provide access to the site via two domains (e.g. www.domain.com and beta.domain.com) and only allow access to beta parts of the site when going in via beta.domain.com.
People will be accessing the same code base, but will only get access to the beta sections if they've specified the beta subdomain. Trying to access beta sections without it will show an explanation and tell them how to access the beta.
*Strongly flag the beta sections of the application as being beta, and force the user to acknowledge that they're OK using beta features with some kind of agreement screen. The first time they try to use the beta feature, they'll be shown the agreement screen. Subsequent uses of the feature will prominently display that "this part of the site is in beta and is used at your own peril."
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to get an absolute file path in Python Given a path such as "mydir/myfile.txt", how do I find the file's absolute path in Python? E.g. on Windows, I might end up with:
"C:/example/cwd/mydir/myfile.txt"
A: You can use this to get absolute path of a specific file.
from pathlib import Path
fpath = Path('myfile.txt').absolute()
print(fpath)
A: import os
os.path.abspath(os.path.expanduser(os.path.expandvars(PathNameString)))
Note that expanduser is necessary (on Unix) in case the given expression for the file (or directory) name and location may contain a leading ~/(the tilde refers to the user's home directory), and expandvars takes care of any other environment variables (like $HOME).
A: Install a third-party path module (found on PyPI), it wraps all the os.path functions and other related functions into methods on an object that can be used wherever strings are used:
>>> from path import path
>>> path('mydir/myfile.txt').abspath()
'C:\\example\\cwd\\mydir\\myfile.txt'
A:
Given a path such as mydir/myfile.txt, how do I find the file's absolute path relative to the current working directory in Python?
I would do it like this,
import os.path
os.path.join( os.getcwd(), 'mydir/myfile.txt' )
That returns '/home/ecarroll/mydir/myfile.txt'
A: Update for Python 3.4+ pathlib that actually answers the question:
from pathlib import Path
relative = Path("mydir/myfile.txt")
absolute = relative.absolute() # absolute is a Path object
If you only need a temporary string, keep in mind that you can use Path objects with all the relevant functions in os.path, including of course abspath:
from os.path import abspath
absolute = abspath(relative) # absolute is a str object
A: If you are on a Mac:
import os
upload_folder = os.path.abspath("static/img/users")
this will give you a full path:
print(upload_folder)
will show the following path:
>>>/Users/myUsername/PycharmProjects/OBS/static/img/user
A: This always gets the right filename of the current script, even when it is called from within another script. It is especially useful when using subprocess.
import sys,os
filename = sys.argv[0]
from there, you can get the script's full path with:
>>> os.path.abspath(filename)
'/foo/bar/script.py'
It also makes it easier to navigate folders by just appending /.. as many times as you want to go 'up' in the directories' hierarchy.
To get the cwd:
>>> os.path.abspath(filename+"/..")
'/foo/bar'
For the parent path:
>>> os.path.abspath(filename+"/../..")
'/foo'
By combining "/.." with other filenames, you can access any file in the system.
A: Today you can also use the unipath package which was based on path.py: http://sluggo.scrapping.cc/python/unipath/
>>> from unipath import Path
>>> absolute_path = Path('mydir/myfile.txt').absolute()
Path('C:\\example\\cwd\\mydir\\myfile.txt')
>>> str(absolute_path)
C:\\example\\cwd\\mydir\\myfile.txt
>>>
I would recommend using this package as it offers a clean interface to common os.path utilities.
A: >>> import os
>>> os.path.abspath("mydir/myfile.txt")
'C:/example/cwd/mydir/myfile.txt'
Also works if it is already an absolute path:
>>> import os
>>> os.path.abspath("C:/example/cwd/mydir/myfile.txt")
'C:/example/cwd/mydir/myfile.txt'
A: You could use the new Python 3.4 library pathlib. (You can also get it for Python 2.6 or 2.7 using pip install pathlib.) The authors wrote: "The aim of this library is to provide a simple hierarchy of classes to handle filesystem paths and the common operations users do over them."
To get an absolute path in Windows:
>>> from pathlib import Path
>>> p = Path("pythonw.exe").resolve()
>>> p
WindowsPath('C:/Python27/pythonw.exe')
>>> str(p)
'C:\\Python27\\pythonw.exe'
Or on UNIX:
>>> from pathlib import Path
>>> p = Path("python3.4").resolve()
>>> p
PosixPath('/opt/python3/bin/python3.4')
>>> str(p)
'/opt/python3/bin/python3.4'
Docs are here: https://docs.python.org/3/library/pathlib.html
A: In case someone is using Python on Linux and looking for the full path to a file:
>>> path=os.popen("readlink -f file").read()
>>> print path
abs/path/to/file
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1005"
} |
Q: Changing the value of an element in a list of structs I have a list of structs and I want to change one element. For example:
MyList.Add(new MyStruct("john"));
MyList.Add(new MyStruct("peter"));
Now I want to change one element:
MyList[1].Name = "bob"
However, whenever I try and do this I get the following error:
Cannot modify the return value of 'System.Collections.Generic.List.this[int]' because it is not a variable
If I use a list of classes, the problem doesn't occur.
I guess the answer has to do with structs being a value type.
So, if I have a list of structs should I treat them as read-only? If I need to change elements in a list then I should use classes and not structs?
A: In .Net 5.0, you can use CollectionsMarshal.AsSpan() (source, GitHub issue) to get the underlying array of a List<T> as a Span<T>. Note that items should not be added or removed from the List<T> while the Span<T> is in use.
var listOfStructs = new List<MyStruct> { new MyStruct() };
Span<MyStruct> spanOfStructs = CollectionsMarshal.AsSpan(listOfStructs);
spanOfStructs[0].Value = 42;
Assert.Equal(42, spanOfStructs[0].Value);
struct MyStruct { public int Value { get; set; } }
This works because the Span<T> indexer uses a C# 7.0 feature called ref returns. The indexer is declared with a ref T return type, which provides semantics like that of indexing into arrays, returning a reference to the actual storage location.
In comparison the List<T> indexer is not ref returning instead returning a copy of what lives at that location.
Keep in mind that this is still unsafe: if the List<T> reallocates the array, the Span<T> previously returned by CollectionsMarshal.AsSpan won't reflect any further changes to the List<T>. (Which is why the method is hidden in the System.Runtime.InteropServices.CollectionsMarshal class.)
Source
A: There is nothing wrong with structs that have exposed fields, or that allow mutation via property setters. Structs which mutate themselves in response to methods or property getters, however, are dangerous because the system will allow methods or property getters to be called on temporary struct instances; if the methods or getters make changes to the struct, those changes will end up getting discarded.
Unfortunately, as you note, the collections built into .net are really feeble at exposing value-type objects contained therein. Your best bet is usually to do something like:
MyStruct temp = myList[1];
temp.Name = "Albert";
myList[1] = temp;
Somewhat annoying, and not at all threadsafe. Still an improvement over a List of a class type, where doing the same thing might require:
myList[1].Name = "Albert";
but it might also require:
myList[1] = myList[1].Withname("Albert");
or maybe
myClass temp = (myClass)myList[1].Clone();
temp.Name = "Albert";
myList[1] = temp;
or maybe some other variation. One really wouldn't be able to know unless one examined myClass as well as the other code that put things in the list. It's entirely possible that one might not be able to know whether the first form is safe without examining code in assemblies to which one does not have access. By contrast, if Name is an exposed field of MyStruct, the method I gave for updating it will work, regardless of what else MyStruct contains, or regardless of what other things may have done with myList before the code executes or what they may expect to do with it after.
A: Not quite. Designing a type as class or struct shouldn't be driven by your need to store it in collections :) You should look at the 'semantics' needed
The problem you're seeing is due to value type semantics. Each value type variable/reference is a new instance. When you say
MyStruct obItem = MyList[1];
what happens is that a new instance of the struct is created and all members are copied one by one, so that you have a clone of MyList[1], i.e. two instances.
Now if you modify obItem, it doesn't affect the original.
obItem.Name = "Gishu"; // MyList[1].Name still remains "peter"
Now bear with me for 2 mins here (This takes a while to gulp down.. it did for me :)
If you really need structs to be stored in a collection and modified like you indicated in your question, you'll have to make your struct expose an interface (However this will result in boxing). You can then modify the actual struct via an interface reference, which refers to the boxed object.
The following code snippet illustrates what I just said above
public interface IMyStructModifier
{
String Name { set; }
}
public struct MyStruct : IMyStructModifier ...
List<Object> obList = new List<object>();
obList.Add(new MyStruct("ABC"));
obList.Add(new MyStruct("DEF"));
MyStruct temp = (MyStruct)obList[1];
temp.Name = "Gishu";
foreach (MyStruct s in obList) // => "ABC", "DEF"
{
Console.WriteLine(s.Name);
}
IMyStructModifier temp2 = obList[1] as IMyStructModifier;
temp2.Name = "Now Gishu";
foreach (MyStruct s in obList) // => "ABC", "Now Gishu"
{
Console.WriteLine(s.Name);
}
HTH. Good Question.
Update: @Hath - you had me running to check if I overlooked something that simple. (It would be inconsistent if setter properties don't work and methods did - the .Net universe is still balanced :)
Setter method doesn't work
obList2[1] returns a copy whose state would be modified; the original struct in the list stays unmodified. So set-via-interface seems to be the only way to do it.
List<MyStruct> obList2 = new List<MyStruct>();
obList2.Add(new MyStruct("ABC"));
obList2.Add(new MyStruct("DEF"));
obList2[1].SetName("WTH");
foreach (MyStruct s in obList2) // => "ABC", "DEF"
{
Console.WriteLine(s.Name);
}
A: MyList[1] = new MyStruct("bob");
structs in C# should almost always be designed to be immutable (that is, have no way to change their internal state once they have been created).
In your case, what you want to do is to replace the entire struct in specified array index, not to try to change just a single property or field.
A: In addition to the other answers, I thought it could be helpful to explain why the compiler complains.
When you call MyList[1].Name, unlike an array, the MyList[1] actually calls the indexer method behind the scenes.
Any time a method returns an instance of a struct, you're getting a copy of that struct (unless you use ref/out).
So you're getting a copy and setting the Name property on a copy, which is about to be discarded since the copy wasn't stored in a variable anywhere.
This tutorial describes what's going on in more detail (including the generated CIL code).
A: It's not so much that structs are "immutable."
The real underlying issue is that structs are a Value type, not a Reference type. So when you pull out a "reference" to the struct from the list, it is creating a new copy of the entire struct. So any changes you make on it are changing the copy, not the original version in the list.
Like Andrew states, you have to replace the entire struct. As that point though I think you have to ask yourself why you are using a struct in the first place (instead of a class). Make sure you aren't doing it around premature optimization concerns.
A: As of C#9, I am not aware of any way to pull a struct by reference out of a generic container, including List<T>. As Jason Olson's answer said:
The real underlying issue is that structs are a Value type, not a Reference type. So when you pull out a "reference" to the struct from the list, it is creating a new copy of the entire struct. So any changes you make on it are changing the copy, not the original version in the list.
So, this can be pretty inefficient. SuperCat's answer, even though it is correct, compounds that inefficiency by copying the updated struct back into the list.
If you are interested in maximizing the performance of structs, then use an array instead of List<T>. The indexer in an array returns a reference to the struct and does not copy the entire struct out like the List<T> indexer. Also, an array is more efficient than List<T>.
If you need to grow the array over time, then create a generic class that works like List<T>, but uses arrays underneath.
There is an alternative solution. Create a class that incorporates the structure and create public methods to call the methods of that structure for the required functionality. Use a List<T> and specify the class for T. The structure may also be returned via a ref returns method or ref property that returns a reference to the structure.
The advantage of this approach is that it can be used with any generic data structure, like Dictionary<TKey, TValue>. When pulling a struct out of a Dictionary<TKey, TValue>, it also copies the struct to a new instance, just like List<T>. I suspect that this is true for all C# generic containers.
Code example:
public struct Mutable
{
private int _x;
public Mutable(int x)
{
_x = x;
}
public int X => _x; // Property
public void IncrementX() { _x++; }
}
public class MutClass
{
public Mutable Mut;
//
public MutClass()
{
Mut = new Mutable(2);
}
public MutClass(int x)
{
Mut = new Mutable(x);
}
public ref Mutable MutRef => ref Mut; // Property
public ref Mutable GetMutStruct()
{
return ref Mut;
}
}
private static void TestClassList()
{
// This test method shows that a list of a class that holds a struct
// may be used to efficiently obtain the struct by reference.
//
var mcList = new List<MutClass>();
var mClass = new MutClass(1);
mcList.Add(mClass);
ref Mutable mutRef = ref mcList[0].MutRef;
// Increment the x value defined in the struct.
mutRef.IncrementX();
// Now verify that the X values match.
if (mutRef.X != mClass.Mut.X)
Console.Error.WriteLine("TestClassList: Error - the X values do not match.");
else
Console.Error.WriteLine("TestClassList: Success - the X values match!");
}
Output on console window:
TestClassList: Success - the X values match!
For the following line:
ref Mutable mutRef = ref mcList[0].MutRef;
I initially and inadvertently left out the ref after the equal sign. The compiler didn't complain, but it did produce a copy of the struct and the test failed when it ran. After adding the ref, it ran correctly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "80"
} |
Q: What is a manageable way to store e-mails for extended periods of time? If you have a site which sends out emails to the customer, and you want to save a copy of the mail, what is an effective strategy?
If you save it to a table in your database (e.g. create a table called Mail), it gets very large very quickly.
Some strategies I've seen are:
*
*Save it to the file system
*Run a scheduled task to clear old entries from the database - but then you wind up not having a copy;
*Create a separate table for each time frame (one each year, or one each month)
What strategies have you used?
A: I don't agree that gmail is an effective backup for business data.
Why trust your business information to a provider who makes no guarantees of service, or over who you have no control whatsoever?
Makes no sense to me.
Depending on how frequently you need to access this information, I'd say go with the filesystem or database archive. At least that way, you have control over your own data.
A: Data you want to save is saved in a database. The only exception that is justified is large binary data (images, videos). Who cares how large the table gets? If the mails are automated and template-based, you just have to save the variable parts anyway. The size will be about the same wherever you save it, but you probably already have a mechanism to backup your database, so you won't have to invent one to handle millions of files.
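As a sketch of that idea (the table and column names are hypothetical), storing only the template reference and the variable parts might look like:
CREATE TABLE sent_mail (
    id          INTEGER PRIMARY KEY,
    template_id INTEGER NOT NULL,
    recipient   VARCHAR(255) NOT NULL,
    variables   TEXT NOT NULL,   -- e.g. serialized name/value pairs used to fill the template
    sent_at     TIMESTAMP NOT NULL
);
The full message can then be re-rendered from the template plus these values whenever a copy is needed.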
A: Lots of assumptions:
1. You're running windows / would like an archive in windows
2. The ability to search in the mails is important.
Since you are sending mails to your customers there isn't any reason you can't bcc a mail account of your own. Assuming you have a suitable account on your own server, then I'd look at using MailStore (home) to pull the mails out from your account and put them into its own compressed database.
A: Another option (depending on the email content) is to not save the email, but make sure you can recreate the email by archiving the original content that went into generating the email.
A: It depends on the content of your email. If it contains large images, I would plump for the file system. Otherwise, if your Mail table is getting very large very quickly, I would go for the separate table, archiving off dead customers.
A: We save the email to a database table. It really doesn't get that big that quickly. We've a table with 32,000 emails in it (they're biggish emails too @ 50kb per email) and with compression, the file only uses 16MB.
If you're sending a shed load of email, then know that GMail (free) currently only allows 7GB of data. I'd be happy holding that on a disk.
A: I'd think about putting in place some sort of general archiving functionality. How you implement that depends on your specific retrieval needs.
For example, if you wish just to retrieve emails sent to a particular customer for a certain month, then storing them in an appropriate hierarchy on the file system (zip them up if necessary) should be simple to do. You might want to record a list of sent emails in a database table with a pointer to the appropriate directory, but a naming convention for your directories and files might be sufficient.
You might need to access very old emails only infrequently, so you could archive these to DVD, for example, if online storage is a problem.
If you want to search the actual content of emails often, then you're going to have to put the content in a DB table or use an indexer like Lucene to examine the files stored on disk.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do you treat legacy code (and data)? I am currently in the process of restructuring my local Subversion repository by adding some new projects and merging legacy code and data from a couple of older repositories into it.
When I have done this in the past I have usually put the legacy code in a dedicated "legacy" folder, as not to "disturb" the new and "well-structured" code tree. However, in the spirit of refactoring I feel this is somewhat wrong. In theory, the legacy code will be refactored over time and moved to its new location, but in practice this rarely happens.
How do you treat your legacy code? As much as I feel tempted to tuck away old sins in the "legacy" folder, never to look at it again, on some level I hope that by forcing it to live among the more "healthy" inhabitants in the repository, maybe the legacy code will have a better chance of getting well some day?
(Yeah, we all know we shouldn't rewrite stuff, but this is my "fun" repository, not my business projects...)
Update
I am not worried about the technical aspects of keeping track of various versions. I know how to use tags and branches for that. This is more of a psychological aspect, as I prefer to have a "neat" structure in the repository, which makes navigating it much easier—for humans.
A: All code becomes 'legacy' one day, so why separate it at all? Source control is by project/branch or project/platform/branch and that type of hierarchy. Who cares how long in the tooth it is?
A: Tagging is a very cheap operation in subversion. Tag your code when you start refactoring and at regular stages while you go along. That way it's easy to still access the old (but functional code) as a reference for your shiny new (but broken code). :-)
A: Use Externals Definitions (svn:externals property) to reference your legacy code as you would a third-party repository.
Then you can separate your refactoring work from your dependent projects and (using fixed revision references i.e. -r1234) be very explicit about which revision of the legacy code the dependent project depends on.
A: Here's your free psychological analysis:
What you have here is a deep-rooted desire to fix your legacy code so that it's not legacy anymore. When you hide it away, you're just repressing that desire, trying to avoid it because it's an uncomfortable feeling. If you leave it out in the open, one of two things will happen: it'll eventually drive you insane and you'll have to kill yourself, or (more optimistically), you'll be reminded of each messy bit over and over until you finally break down and clean it up.
Don't hide the mess; clean it. Otherwise it'll come back to bite you sooner or later.
A: It depends on what you call legacy. If by saying legacy you really mean 'code from some retired application which is so bad we'll never use it any more' it should be separated from you current code.
If it is something from your current project but is written by other people or is not up to your current standards just treat it normally but flag it for re-factoring in the future in your issue tracker.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Determine how much memory a class uses? I am trying to find a way to determine at run-time how much memory a given class is using in .NET. Using Marshal.SizeOf() is out, as it only works on value types. Is there a way to check exactly how much memory a class uses?
A: I've only recently started looking into this type of thing, but I have found that memory profilers can give quite detailed information regarding instances of objects within your application.
Here are a couple that are worth trying:
*
*ANTS Profiler
*.NET Memory Profiler
A: I agree that a memory profiler is the easiest way to get the information you are looking for. In addition to the two previously mentioned, I recommend JetBrains dotTrace, which is both a performance profiler and a memory profiler.
If you want to do it yourself, and are willing to get pretty deep into the guts of the CLR, you can use the .NET Profiling API, which is an unmanaged API that (as Microsoft says): "enables a profiler to monitor a program's execution by the common language runtime (CLR)." It's not exactly intended for casual use, but it does have an enormous amount of functionality.
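If a rough number is enough and you don't want to go that deep, one hedged do-it-yourself approach is to measure the managed heap before and after allocating many instances and divide; MyClass here is a stand-in for whatever type you care about, and the result is an approximation that includes object headers and allocator overhead:
using System;

class MyClass { public int A, B; }

class SizeEstimate
{
    static void Main()
    {
        const int n = 100000;
        var keep = new MyClass[n];              // hold references so the GC can't collect them
        long before = GC.GetTotalMemory(true);  // true forces a full collection first
        for (int i = 0; i < n; i++) keep[i] = new MyClass();
        long after = GC.GetTotalMemory(true);
        Console.WriteLine((after - before) / (double)n + " bytes per instance (approx.)");
        GC.KeepAlive(keep);
    }
}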
A: Just a link to a related SO question:
*
*sizeof() equivalent for reference types?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Finding unused files in a project We are migrating our works repository so I want to do a cull of all the unreferenced files that exist in the source tree before moving it into the nice fresh (empty) repository.
So far I have gone through by hand and found all the unreferenced files that I know about but I want to find out if I have caught them all. One way would be to manually move the project file by file to a new folder and see what sticks when compiling. That will take all week, so I need an automated tool.
What do people suggest?
Clarifications:
1) It is C++.
2) The files are mixed. I am looking for files that have been superseded by others but have been left to rot in the repository - for instance, file_iter.h is not referenced by any other file in the program but remains in the repository just in case someone wants to compile a version from 1996! Now that we are moving to a fresh repository we can safely junk all the files that are no longer used.
3) Lint only finds unused includes - not unused files (I have the 7.5 manual in front of me).
A: You've tagged this post with c++, so I'm assuming that's the language in question. If that's the only thing that's in the repository then it shouldn't be too hard to grep all files in the repository for each filename to give you a good starting point. If the repository contains other files (metadata, support files, resources, etc) then you're probably going to need to do it manually.
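A sketch of that grep approach for a C++-only tree (the file patterns are assumptions, and it assumes no spaces in file names):
for f in $(find . -name '*.h' -o -name '*.cpp'); do
    name=$(basename "$f")
    # flag files whose name never appears in any other file
    if ! grep -rq --include='*.h' --include='*.cpp' --exclude="$name" "$name" .; then
        echo "possibly unreferenced: $f"
    fi
done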
A: I can't offer an existing tool for it, but I would expect that you can get a lot of this information from your build tools (with some effort, probably). Typically you can at least let the build tool print the commands it would run, without actually running them. (E.g. the -n option of make and bjam does this.) From that you should be able to extract at least the used source files.
With the -MM option of g++ you can get all the non-system header files for the given source files. The output is in the form of a make rule, but with some filtering this shouldn't be a problem.
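For example (the file and header names here are illustrative):
$ g++ -MM main.cpp
main.o: main.cpp widget.h util.h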
I don't know if this helps; it's just what I would try in your situation.
A: You can actually do this indirectly with Lint by running a "whole project analysis" (in which all files are analysed together rather than individually).
Configure it to ignore everything but unreferenced variable/enum/function etc warnings and it should give you a reasonable indicator of where the deadwood lies without those issues being obscured by any others in the codebase.
A: A static source code analysis tool like lint might do the job. They will tell you if a piece of code will never be called.
A: Have you taken a look at Source-Navigator? It can be used as an IDE but I found to be very good at analyzing source code structure. For example, it can find out where and if a certain method is used in your source code.
I don't know if it's scriptable but it might be a good starting point for you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Why are SQL aggregate functions so much slower than Python and Java (or Poor Man's OLAP) I need a real DBA's opinion. Postgres 8.3 takes 200 ms to execute this query on my Macbook Pro while Java and Python perform the same calculation in under 20 ms (350,000 rows):
SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples;
Is this normal behaviour when using a SQL database?
The schema (the table holds responses to a survey):
CREATE TABLE tuples (id integer primary key, a integer, b integer, c integer, d integer);
\copy tuples from '350,000 responses.csv' delimiter as ','
I wrote some tests in Java and Python for context and they crush SQL (except for pure python):
java 1.5 threads ~ 7 ms
java 1.5 ~ 10 ms
python 2.5 numpy ~ 18 ms
python 2.5 ~ 370 ms
Even sqlite3 is competitive with Postgres despite assuming all columns are strings (for contrast: merely switching from integer to numeric columns in Postgres results in a 10x slowdown)
Tunings I've tried without success include (blindly following some web advice):
increased the shared memory available to Postgres to 256MB
increased the working memory to 2MB
disabled connection and statement logging
used a stored procedure via CREATE FUNCTION ... LANGUAGE SQL
So my question is, is my experience here normal, and this is what I can expect when using a SQL database? I can understand that ACID must come with costs, but this is kind of crazy in my opinion. I'm not asking for realtime game speed, but since Java can process millions of doubles in under 20 ms, I feel a bit jealous.
Is there a better way to do simple OLAP on the cheap (both in terms of money and server complexity)? I've looked into Mondrian and Pig + Hadoop but not super excited about maintaining yet another server application and not sure if they would even help.
No, the Python code and Java code do all the work in-house, so to speak. I just generate 4 arrays with 350,000 random values each, then take the average. I don't include the generation in the timings, only the averaging step. The Java threads timing uses 4 threads (one per array average); overkill, but it's definitely the fastest.
The sqlite3 timing is driven by the Python program and is running from disk (not :memory:)
I realize Postgres is doing much more behind the scenes, but most of that work doesn't matter to me since this is read only data.
The Postgres query doesn't change timing on subsequent runs.
I've rerun the Python tests to include spooling it off the disk. The timing slows down considerably to nearly 4 secs. But I'm guessing that Python's file handling code is pretty much in C (though maybe not the csv lib?) so this indicates to me that Postgres isn't streaming from the disk either (or that you are correct and I should bow down before whoever wrote their storage layer!)
A: I retested with MySQL specifying ENGINE = MEMORY and it doesn't change a thing (still 200 ms). Sqlite3 using an in-memory db gives similar timings as well (250 ms).
The math here looks correct (at least the size, as that's how big the sqlite db is :-)
I'm just not buying the disk-causes-slowness argument as there is every indication the tables are in memory (the postgres guys all warn against trying too hard to pin tables to memory as they swear the OS will do it better than the programmer)
To clarify the timings, the Java code is not reading from disk, making it a totally unfair comparison if Postgres is reading from the disk and calculating a complicated query, but that's really beside the point; the DB should be smart enough to bring a small table into memory and precompile a stored procedure, IMHO.
UPDATE (in response to the first comment below):
I'm not sure how I'd test the query without using an aggregation function in a way that would be fair, since if I select all of the rows it'll spend tons of time serializing and formatting everything. I'm not saying that the slowness is due to the aggregation function; it could still be just overhead from concurrency, integrity, and friends. I just don't know how to isolate the aggregation as the sole independent variable.
A: Those are very detailed answers, but they mostly beg the question: how do I get these benefits without leaving Postgres, given that the data easily fits into memory, requires concurrent reads but no writes, and is queried with the same query over and over again?
Is it possible to precompile the query and optimization plan? I would have thought the stored procedure would do this, but it doesn't really help.
To avoid disk access, it's necessary to cache the whole table in memory; can I force Postgres to do that? I think it's already doing this, though, since the query executes in just 200 ms after repeated runs.
Can I tell Postgres that the table is read only, so it can optimize any locking code?
I think it's possible to estimate the query construction costs with an empty table (timings range from 20-60 ms)
I still can't see why the Java/Python tests are invalid. Postgres just isn't doing that much more work (though I still haven't addressed the concurrency aspect, just the caching and query construction)
UPDATE:
I don't think it's fair to compare the SELECTs as suggested, by pulling 350,000 rows through the driver and serialization steps into Python to run the aggregation, nor even to omit the aggregation, as the overhead in formatting and displaying is hard to separate from the timing. If both engines are operating on in-memory data, it should be an apples-to-apples comparison; I'm not sure how to guarantee that's already happening, though.
I can't figure out how to add comments; maybe I don't have enough reputation?
A: I'm a MS-SQL guy myself, and we'd use DBCC PINTABLE to keep a table cached, and SET STATISTICS IO to see that it's reading from cache, and not disk.
I can't find anything on Postgres to mimic PINTABLE, but pg_buffercache seems to give details on what is in the cache - you may want to check that, and see if your table is actually being cached.
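For example, with the pg_buffercache contrib module installed, a query along these lines (details vary a little by Postgres version) shows which relations currently occupy shared buffers:
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;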
A quick back of the envelope calculation makes me suspect that you're paging from disk. Assuming Postgres uses 4-byte integers, you have (6 * 4) bytes per row, so your table is a minimum of (24 * 350,000) bytes ~ 8.4MB. Assuming 40 MB/s sustained throughput on your HDD, you're looking at right around 200ms to read the data (which, as pointed out, should be where almost all of the time is being spent).
Unless I screwed up my math somewhere, I don't see how it's possible that you are able to read 8MB into your Java app and process it in the times you're showing - unless that file is already cached by either the drive or your OS.
A: I would say your test scheme is not really useful. To fulfill the db query, the db server goes through several steps:
*
*parse the SQL
*work up a query plan, i. e. decide on which indices to use (if any), optimize etc.
*if an index is used, search it for the pointers to the actual data, then go to the appropriate location in the data or
*if no index is used, scan the whole table to determine which rows are needed
*load the data from disk into a temporary location (hopefully, but not necessarily, memory)
*perform the count() and avg() calculations
So, creating an array in Python and getting the average basically skips all these steps save the last one. As disk I/O is among the most expensive operations a program has to perform, this is a major flaw in the test (see also the answers to this question I asked here before). Even if you read the data from disk in your other test, the process is completely different and it's hard to tell how relevant the results are.
To obtain more information about where Postgres spends its time, I would suggest the following tests:
*
*Compare the execution time of your query to a SELECT without the aggregating functions (i. e. cut step 5)
*If you find that the aggregation leads to a significant slowdown, test whether Python does it faster, obtaining the raw data through the plain SELECT from the comparison.
To speed up your query, reduce disk access first. I doubt very much that it's the aggregation that takes the time.
There's several ways to do that:
*
*Cache data (in memory!) for subsequent access, either via the db engine's own capabilities or with tools like memcached
*Reduce the size of your stored data
*Optimize the use of indices. Sometimes this can mean skipping index use altogether (after all, it's disk access, too). For MySQL, I seem to remember that it's recommended to skip indices if you assume that the query fetches more than 10% of all the data in the table.
*If your query makes good use of indices, I know that for MySQL databases it helps to put indices and data on separate physical disks. However, I don't know whether that's applicable for Postgres.
*There also might be more sophisticated problems such as swapping rows to disk if for some reason the result set can't be completely processed in memory. But I would leave that kind of research until I run into serious performance problems that I can't find another way to fix, as it requires knowledge about a lot of little under-the-hood details in your process.
Update:
I just realized that you seem to have no use for indices for the above query and most likely aren't using any, too, so my advice on indices probably wasn't helpful. Sorry. Still, I'd say that the aggregation is not the problem but disk access is. I'll leave the index stuff in, anyway, it might still have some use.
A: Postgres is doing a lot more than it looks like (maintaining data consistency for a start!)
If the values don't have to be 100% spot on, or if the table is updated rarely, but you are running this calculation often, you might want to look into Materialized Views to speed it up.
(Note, I have not used materialized views in Postgres; they look a little hacky, but might suit your situation).
Materialized Views
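(Postgres has since gained native materialized views, from version 9.3 onwards; a sketch for this question's table would be:)
CREATE MATERIALIZED VIEW tuple_stats AS
    SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples;
-- re-run after the base table changes:
REFRESH MATERIALIZED VIEW tuple_stats;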
Also consider the overhead of actually connecting to the server and the round trip required to send the request to the server and back.
I'd consider 200ms for something like this to be pretty good. A quick test on my Oracle server, with the same table structure, about 500k rows and no indexes, takes about 1 - 1.5 seconds, which is almost all just Oracle sucking the data off disk.
The real question is, is 200ms fast enough?
-------------- More --------------------
I was interested in solving this using materialized views, since I've never really played with them. This is in oracle.
First I created a MV which refreshes every minute.
create materialized view mv_so_x
build immediate
refresh complete
START WITH SYSDATE NEXT SYSDATE + 1/24/60
as select count(*),avg(a),avg(b),avg(c),avg(d) from so_x;
While it's refreshing, no rows are returned
SQL> select * from mv_so_x;
no rows selected
Elapsed: 00:00:00.00
Once it refreshes, it's MUCH faster than doing the raw query
SQL> select count(*),avg(a),avg(b),avg(c),avg(d) from so_x;
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)
---------- ---------- ---------- ---------- ----------
1899459 7495.38839 22.2905454 5.00276131 2.13432836
Elapsed: 00:00:05.74
SQL> select * from mv_so_x;
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)
---------- ---------- ---------- ---------- ----------
1899459 7495.38839 22.2905454 5.00276131 2.13432836
Elapsed: 00:00:00.00
SQL>
If we insert into the base table, the result is not immediately visible via the MV.
SQL> insert into so_x values (1,2,3,4,5);
1 row created.
Elapsed: 00:00:00.00
SQL> commit;
Commit complete.
Elapsed: 00:00:00.00
SQL> select * from mv_so_x;
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)
---------- ---------- ---------- ---------- ----------
1899459 7495.38839 22.2905454 5.00276131 2.13432836
Elapsed: 00:00:00.00
SQL>
But wait a minute or so, and the MV will update behind the scenes, and the result is returned as fast as you could want.
SQL> /
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)
---------- ---------- ---------- ---------- ----------
1899460 7495.35823 22.2905352 5.00276078 2.17647059
Elapsed: 00:00:00.00
SQL>
This isn't ideal. For a start, it's not realtime; inserts/updates will not be immediately visible. Also, you've got a query running to update the MV whether you need it or not (this can be tuned to whatever time frame, or run on demand). But this does show how much faster an MV can make things seem to the end user, if you can live with values which aren't quite up-to-the-second accurate.
A: I don't think that your results are all that surprising -- if anything it is that Postgres is so fast.
Does the Postgres query run faster a second time once it has had a chance to cache the data? To be a little fairer your test for Java and Python should cover the cost of acquiring the data in the first place (ideally loading it off disk).
If this performance level is a problem for your application in practice but you need a RDBMS for other reasons then you could look at memcached. You would then have faster cached access to raw data and could do the calculations in code.
A: One other thing that an RDBMS generally does for you is to provide concurrency by protecting you from simultaneous access by another process. This is done by placing locks, and there's some overhead from that.
If you're dealing with entirely static data that never changes, and especially if you're in a basically "single user" scenario, then using a relational database doesn't necessarily gain you much benefit.
A: Are you using TCP to access Postgres? In that case Nagle's algorithm is messing with your timing.
A: You need to increase Postgres' caches to the point where the whole working set fits into memory before you can expect to see performance comparable to doing it in-memory with a program.
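A hedged example of the relevant postgresql.conf knobs (the values are illustrative, not recommendations):
shared_buffers = 512MB        # Postgres' own buffer cache
work_mem = 32MB               # per-sort / per-hash working memory
effective_cache_size = 2GB    # planner hint about how much the OS caches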
A: Thanks for the Oracle timings, that's the kind of stuff I'm looking for (disappointing though :-)
Materialized views are probably worth considering as I think I can precompute the most interesting forms of this query for most users.
I don't think query round-trip time should be very high, as I'm running the queries on the same machine that runs Postgres, so it can't add much latency?
I've also done some checking into the cache sizes, and it seems Postgres relies on the OS to handle caching. They specifically mention BSD as the ideal OS for this, so I think Mac OS ought to be pretty smart about bringing the table into memory. Unless someone has more specific params in mind, I think more specific caching is out of my control.
In the end I can probably put up with 200 ms response times, but knowing that 7 ms is a possible target makes me feel unsatisfied, as even 20-50 ms times would enable more users to have more up-to-date queries and get rid of a lot of caching and precomputed hacks.
I just checked the timings using MySQL 5 and they are slightly worse than Postgres. So barring some major caching breakthroughs, I guess this is what I can expect going the relational db route.
I wish I could up vote some of your answers, but I don't have enough points yet.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: How do I automate finding unused #include directives? Typically when writing new code you discover that you are missing a #include because the file doesn't compile. Simple enough, you add the required #include. But later you refactor the code somehow and now a couple of #include directives are no longer needed. How do I discover which ones are no longer needed?
Of course I can manually remove some or all #include lines and add them back until the file compiles again, but this isn't really feasible in a large project with thousands of files. Are there any tools available that will help automating task?
A: You can use PC-Lint/FlexeLint to do that.
Unfortunately there isn't a free open-source version of the tool available.
You can remove #includes by passing by reference instead of passing by value and forward declaring. This is because the compiler doesn't need to know the size of the object at compile time. This will require a large amount of manual work on your behalf however. The good thing is it will reduce your compile times.
A: You could just write a 'brute force' command line tool that comments out the #includes one by one and tests whether the compile still works. Let me know when you've got it to work. ;0)
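A rough sketch of that brute-force idea in shell (assumes GNU sed, a Unix toolchain, and a file that compiles standalone; the file name is hypothetical):
file=myfile.cpp
grep -n '^#include' "$file" | while IFS=: read -r line rest; do
    sed -i "${line}s|^|//|" "$file"                 # comment out this #include
    if g++ -c "$file" -o /dev/null 2>/dev/null; then
        echo "line $line ($rest) may be unneeded"
    fi
    sed -i "${line}s|^//||" "$file"                 # restore it
done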
A: This article explains a technique of #include removing by using the parsing of Doxygen. That's just a perl script, so it's quite easy to use.
A: There is an Eclipse plugin called includator which helps to manage include dependencies in C/C++ projects
http://includator.com/
A: Here is 'brute force' VC6 macro which works on single .cpp or .h file opened in editor by commenting include by include and running compile:
Sub RemoveNotUsedIncludes()
'Check if already processed; Exit if so
ActiveDocument.Selection.FindText "//INCLUDE NOT USED", dsMatchFromStart
IF ActiveDocument.Selection <> "" THEN
ActiveDocument.Selection.SetBookmark
MsgBox "Already checked"
ActiveDocument.Selection.ClearBookmark
EXIT SUB
END IF
'Find first #include; Exit if not found
ActiveDocument.Selection.FindText "#include", dsMatchFromStart
IF ActiveDocument.Selection = "" THEN
MsgBox "No #include found"
EXIT SUB
END IF
Dim FirstIncludeLine
FirstIncludeLine = ActiveDocument.Selection.CurrentLine
FOR i=1 TO 200
'Test build
ActiveDocument.Selection.SetBookmark
ActiveDocument.Selection = "//CHECKING... #include"
Build
ActiveDocument.Undo
ActiveDocument.Selection.ClearBookmark
IF Errors = 0 THEN
'If build failed add comment
ActiveDocument.Selection.EndOfLine
ActiveDocument.Selection = " //INCLUDE NOT USED"
END IF
'Find next include
ActiveDocument.Selection.EndOfLine
ActiveDocument.Selection.FindText "#include"
'If all includes tested exit
IF ActiveDocument.Selection.CurrentLine = FirstIncludeLine THEN EXIT FOR
NEXT
End Sub
Of course, it could be improved to work on a whole project.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: JavaScript culture sensitive currency formatting How can I format currency-related data in a culture-aware manner in JavaScript?
A: So I know this is an old question, but in case anyone else shows up looking for similar answers, in modern JavaScript you can use
new Intl.NumberFormat('de-DE', { style: 'currency', currency: 'EUR' }).format(number)
For more info here is the reference doc.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/NumberFormat
A: Dojo has a currency formatter that's locale aware.
If you don't want to include Dojo in your project just for this function, then perhaps you can localize the currency in your back-end?
A: Number.toLocaleString (implemented in JavaScript 1.5, ECMAScript 3rd Edition)
var number = 3500;
console.log(number.toLocaleString()); /* Displays "3,500" in English locale */
Docs on MDN.
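In modern engines toLocaleString also accepts the same locale and options arguments as Intl.NumberFormat, so a culture-aware currency one-liner looks like:
var number = 3500;
console.log(number.toLocaleString('de-DE', { style: 'currency', currency: 'EUR' })); /* Displays "3.500,00 €" */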
A: There is the Number.localeFormat() function, but I'm not sure it's what you're after.
http://msdn.microsoft.com/en-gb/library/bb310813.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Determine file type in Ruby How does one reliably determine a file's type? File extension analysis is not acceptable. There must be a rubyesque tool similar to the UNIX file(1) command?
This is regarding MIME or content type, not file system classifications, such as directory, file, or socket.
A: If you're using the File class, you can augment it with the following functions based on @PatrickRichie's answer:
class File
  def mime_type
    `file --brief --mime-type #{self.path}`.strip
  end

  def charset
    `file --brief --mime #{self.path}`.split(';').second.split('=').second.strip
  end
end
And, if you're using Ruby on Rails, you can drop this into config/initializers/file.rb and have available throughout your project.
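Usage then might look like this (the file name is just an example):
File.new('report.pdf').mime_type # => "application/pdf"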
A: There is a ruby binding to libmagic that does what you need. It is available as a gem named ruby-filemagic:
gem install ruby-filemagic
Requires libmagic-dev.
The documentation seems a little thin, but this should get you started:
$ irb
irb(main):001:0> require 'filemagic'
=> true
irb(main):002:0> fm = FileMagic.new
=> #<FileMagic:0x7fd4afb0>
irb(main):003:0> fm.file('foo.zip')
=> "Zip archive data, at least v2.0 to extract"
irb(main):004:0>
A: For those who came here by the search engine, a modern approach to find the MimeType in pure ruby is to use the mimemagic gem.
require 'mimemagic'
MimeMagic.by_magic(File.open('tux.jpg')).type # => "image/jpeg"
If you feel that is safe to use only the file extension, then you can use the mime-types gem:
MIME::Types.type_for('tux.jpg') => [#<MIME::Type: image/jpeg>]
A: If you're on a Unix machine try this:
mimetype = `file -Ib #{path}`.gsub(/\n/,"")
I'm not aware of any pure Ruby solutions that work as reliably as 'file'.
Edited to add: depending what OS you are running you may need to use 'i' instead of 'I' to get file to return a mime-type.
A: You could give shared-mime a try (gem install shared-mime-info). It requires the use of the Freedesktop shared-mime-info library, but does both filename/extension checks as well as "magic" checks. I tried giving it a whirl myself just now, but I don't have the freedesktop shared-mime-info database installed and have to do "real work," unfortunately, but it might be what you're looking for.
A: I found shelling out to be the most reliable. For compatibility on both Mac OS X and Ubuntu Linux I used:
file --mime -b myvideo.mp4
video/mp4; charset=binary
Ubuntu also prints video codec information if it can which is pretty cool:
file -b myvideo.mp4
ISO Media, MPEG v4 system, version 2
A: You can use this reliable method base on the magic header of the file :
def get_image_extension(local_file_path)
png = Regexp.new("\x89PNG".force_encoding("binary"))
jpg = Regexp.new("\xff\xd8\xff\xe0\x00\x10JFIF".force_encoding("binary"))
jpg2 = Regexp.new("\xff\xd8\xff\xe1(.*){2}Exif".force_encoding("binary"))
case IO.read(local_file_path, 10)
when /^GIF8/
'gif'
when /^#{png}/
'png'
when /^#{jpg}/
'jpg'
when /^#{jpg2}/
'jpg'
else
mime_type = `file #{local_file_path} --mime-type`.gsub("\n", '') # Works on linux and mac
raise UnprocessableEntity, "unknown file type" if !mime_type
mime_type.split(':')[1].split('/')[1].gsub('x-', '').gsub(/jpeg/, 'jpg').gsub(/text/, 'txt').gsub(/x-/, '')
end
end
A: This was added as a comment on this answer but should really be its own answer:
path = # path to your file
IO.popen(
["file", "--brief", "--mime-type", path],
in: :close, err: :close
) { |io| io.read.chomp }
I can confirm that it worked for me.
A: I recently found mimetype-fu.
It seems to be the easiest reliable solution to get a file's MIME type.
The only caveat is that on a Windows machine it only uses the file extension, whereas on *Nix based systems it works great.
A: Pure Ruby solution using magic bytes and returning a symbol for the matching type:
https://github.com/SixArm/sixarm_ruby_magic_number_type
I wrote it, so if you have suggestions, let me know.
A: The best I found so far:
http://bogomips.org/mahoro.git/
A: You could give MIME::Types for Ruby a go.
This library allows for the identification of a file’s likely MIME content type. The identification of MIME content type is based on a file’s filename extensions.
A: The mime-types gem for Ruby works well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "81"
} |
Q: Good Java graph algorithm library? Has anyone had good experiences with any Java libraries for Graph algorithms. I've tried JGraph and found it ok, and there are a lot of different ones in google. Are there any that people are actually using successfully in production code or would recommend?
To clarify, I'm not looking for a library that produces graphs/charts, I'm looking for one that helps with Graph algorithms, eg minimum spanning tree, Kruskal's algorithm Nodes, Edges, etc. Ideally one with some good algorithms/data structures in a nice Java OO API.
A: check out Blueprints:
Blueprints is a collection of interfaces, implementations, ouplementations, and test suites for the property graph data model. Blueprints is analogous to the JDBC, but for graph databases. Within the TinkerPop open source software stack, Blueprints serves as the foundational technology for:
Pipes: A lazy, data flow framework
Gremlin: A graph traversal language
Frames: An object-to-graph mapper
Furnace: A graph algorithms package
Rexster: A graph server
A: JDSL (Data Structures Library in Java) should be good enough if you're into graph algorithms - http://www.cs.brown.edu/cgc/jdsl/
A: http://incubator.apache.org/hama/ is a distributed scientific package on Hadoop for massive matrix and graph data.
A: Summary:
*
*JGraphT if you are more interested in data structures and algorithms.
*JGraph if your primary focus is visualization.
*Jung, yWorks, and BFG are other things people tried using.
*Prefuse is a no no since one has to rewrite most of it.
*Google Guava if you need good datastructures only.
*Apache Commons Graph. Currently dormant, but provides implementations for many algorithms. See https://issues.apache.org/jira/browse/SANDBOX-458 for a list of implemented algorithms, also compared with Jung, GraphT, Prefuse, jBPT
A: For visualization our group had some success with prefuse. We extended it to handle architectural floorplates and bubble diagraming, and it didn't complain too much. They have a new Flex toolkit out too called Flare that uses a very similar API.
UPDATE:
I'd have to agree with the comment, we ended up writing a lot of custom functionality/working around prefuse limitations. I can't say that starting from scratch would have been better though as we were able to demonstrate progress from day 1 by using prefuse. On the other hand if we were doing a second implementation of the same stuff, I might skip prefuse since we'd understand the requirements a lot better.
A: Try Annas; it's an open source graph package which is easy to get to grips with
http://annas.googlecode.com
A: It's also worth remembering that a graph can be represented as simply as:
class Node {
    int value;
    List<Node> adj;
}
and implement most of the algorithms you find interesting yourself (see the breadth-first search sketch after the representations below). If you've landed on this question in the middle of a practice/learning session on graphs, that's the best lib to consider. ;)
You can also use adjacency lists for most common algorithms:
class SparseGraph {
    int[] nodeValues;
    List<Integer>[] edges;
}
or an adjacency matrix for some operations:
class DenseGraph {
    int[] nodeValues;
    int[][] edges;
}
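As an illustration of the do-it-yourself approach, a breadth-first reachability check over the minimal Node class above is only a few lines (a sketch, assuming each node's adj list is initialized):
import java.util.*;

class Graphs {
    static boolean reachable(Node start, Node target) {
        Deque<Node> queue = new ArrayDeque<Node>();
        Set<Node> seen = new HashSet<Node>();
        queue.add(start);
        seen.add(start);
        while (!queue.isEmpty()) {
            Node n = queue.poll();
            if (n == target) return true;
            for (Node adj : n.adj) {
                if (seen.add(adj)) queue.add(adj); // enqueue each neighbour only once
            }
        }
        return false;
    }
}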
A: Check out JGraphT for a very simple and powerful Java graph library that is pretty well done and, to allay any confusion, is different than JGraph. Some sample code:
UndirectedGraph<String, DefaultEdge> g =
new SimpleGraph<String, DefaultEdge>(DefaultEdge.class);
String v1 = "v1";
String v2 = "v2";
String v3 = "v3";
String v4 = "v4";
// add the vertices
g.addVertex(v1);
g.addVertex(v2);
g.addVertex(v3);
g.addVertex(v4);
// add edges to create a circuit
g.addEdge(v1, v2);
g.addEdge(v2, v3);
g.addEdge(v3, v4);
g.addEdge(v4, v1);
A: I don't know if I'd call it production-ready, but there's jGABL.
A: If you need performance, you might take a look at Grph. The library is developed at a French university together with CNRS/Inria.
http://www.i3s.unice.fr/~hogie/grph/
The project is active and reactive support is provided!
A: JUNG is a good option for visualisation, and also has a fairly good set of available graph algorithms, including several different mechanisms for random graph creation, rewiring, etc. I've also found it to be generally fairly easy to extend and adapt where necessary.
A: Instructional graph algorithm implementations in Java can be found here (by Prof. Sedgewick et al.):
http://algs4.cs.princeton.edu/code/
I was introduced to them while attending these exceptional algorithm courses on coursera (also taught by prof. Sedgewick):
https://www.coursera.org/course/algs4partI
https://www.coursera.org/course/algs4partII
A: Apache Commons offers commons-graph. Under http://svn.apache.org/viewvc/commons/sandbox/graph/trunk/ one can inspect the source. Sample API usage is in the SVN, too. See https://issues.apache.org/jira/browse/SANDBOX-458 for a list of implemented algorithms, also compared with Jung, GraphT, Prefuse, jBPT
Google Guava if you need good datastructures only.
JGraphT is a graph library with many Algorithms implemented and having (in my oppinion) a good graph model. Helloworld Example. License: LGPL+EPL.
JUNG2 is also a BSD-licensed library with the data structure similar to JGraphT. It offers layouting algorithms, which are currently missing in JGraphT. The most recent commit is from 2010 and packages hep.aida.* are LGPL (via the colt library, which is imported by JUNG). This prevents JUNG from being used in projects under the umbrella of ASF and ESF. Maybe one should use the github fork and remove that dependency. Commit f4ca0cd is mirroring the last CVS commit. The current commits seem to remove visualization functionality. Commit d0fb491c adds a .gitignore.
Prefuse stores the graphs using a matrix structure, which is not memory efficient for sparse graphs. License: BSD
Eclipse Zest has built in graph layout algorithms, which can be used independently of SWT. See org.eclipse.zest.layouts.algorithms. The graph structure used is the one of Eclipse Draw2d, where Nodes are explicit objects and not injected via Generics (as it happens in Apache Commons Graph, JGraphT, and JUNG2).
A: http://neo4j.org/ is a graph database that contains many of graph algorithms and scales better than most in-memory libraries.
A: If you were using JGraph, you should give a try to JGraphT which is designed for algorithms. One of its features is visualization using the JGraph library. It's still developed, but pretty stable. I analyzed the complexity of JGraphT algorithms some time ago. Some of them aren't the quickest, but if you're going to implement them on your own and need to display your graph, then it might be the best choice. I really liked using its API, when I quickly had to write an app that was working on graph and displaying it later.
A: In a university project I toyed around with yFiles by yWorks and found it had a pretty good API.
A: If you are actually looking for Charting libraries and not for Node/Edge Graph libraries I would suggest splurging on Big Faceless Graph library (BFG). It's way easier to use than JFreeChart, looks nicer, runs faster, has more output options, really no comparison.
A: JGraph from http://mmengineer.blogspot.com/2009/10/java-graph-floyd-class.html
Provides powerful software for working with graphs (directed or undirected). It also generates Graphviz code, so you can see graphical representations. You can put your own algorithm code into the package, for example backtracking code. The package provides some algorithms: Dijkstra, backtracking minimum path cost, etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "237"
} |
Q: Java Generics: Comparing the class of Object o to E Let's say I have the following class:
public class Test<E> {
    public boolean sameClassAs(Object o) {
        // TODO help!
    }
}
How would I check that o is the same class as E?
Test<String> test = new Test<String>();
test.sameClassAs("a string"); // returns true;
test.sameClassAs(4); // returns false;
I can't change the method signature from (Object o) as I'm overriding a superclass method and so don't get to choose my method signature.
I would also rather not go down the road of attempting a cast and then catching the resulting exception if it fails.
A: An instance of Test has no information as to what E is at runtime. So, you need to pass a Class<E> to the constructor of Test.
public class Test<E> {
    private final Class<E> clazz;

    public Test(Class<E> clazz) {
        if (clazz == null) {
            throw new NullPointerException();
        }
        this.clazz = clazz;
    }

    // To make things easier on clients:
    public static <T> Test<T> create(Class<T> clazz) {
        return new Test<T>(clazz);
    }

    public boolean sameClassAs(Object o) {
        return o != null && o.getClass() == clazz;
    }
}
If you want an "instanceof" relationship, use Class.isAssignableFrom instead of the Class comparison. Note, E will need to be a non-generic type, for the same reason Test needs the Class object.
For examples in the Java API, see java.util.Collections.checkedSet and similar.
A: I could only make it working like this:
public class Test<E> {
    private E e;

    public void setE(E e) {
        this.e = e;
    }

    public boolean sameClassAs(Object o) {
        return (o.getClass().equals(e.getClass()));
    }

    public boolean sameClassAs2(Object o) {
        return e.getClass().isInstance(o);
    }
}
A: The method I've always used is below. It is a pain and a bit ugly, but I haven't found a better one. You have to pass the class type in at construction, as the class information is lost when generics are compiled.
public class Test<E> {
    private Class<E> clazz;

    public Test(Class<E> clazz) {
        this.clazz = clazz;
    }

    public boolean sameClassAs(Object o) {
        return this.clazz.isInstance(o);
    }
}
A: I was just trying to do the same thing, and one neat trick I just realized is that you can try a cast, and if the cast fails, a ClassCastException will be thrown. You can catch that and do whatever.
so your sameClassAs method should look like:
public boolean sameClassAs(Object o) {
    boolean same = false;
    try {
        E t = (E) o;  // unchecked cast: because of type erasure this may not throw here
        same = true;
    } catch (ClassCastException e) {
        // same stays false
    }
    return same;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Twitter for work updates If you are sending work/progress reports to the project lead on a daily or weekly basis, I wondered if you would consider using Twitter or similar services for these updates.
Say if you're working remotely or with a distributed team and the project lead has a hard time getting an overview about the topics people are working on, and where the issues/time consumers are, would you set up some private accounts (or even a private company-internal service) to broadcast progress updates to your colleagues?
edit Thanks for the link to those products, but do you already use one of them in your company too? For real-life professional use?
A: Look at http://www.yammer.com for a corporate version of twitter.
A: Maybe try campfire or basecamp.
A: We use Laconica on my team, it's very useful for those updates that you want to send to the whole team but aren't really worth wasting an email on.
Since only my team is using the installation of Laconica that we have, I take the RSS for the public feed and I integrated that into SharePoint.
So while the developers and PM's on our team use Twhirl to manage sending and recieving updates, management is still able to see the updates directly on our team site.
It's quite transparent in that nobody actually has to go to the Laconica instance I have setup to do anything except initially register.
Check out this post for information on how I integrated Laconica with SharePoint: How can I integrate Laconica update stream into SharePoint?
A: Try Laconica: An open source Twitter-like system you could run on your own servers.
A: What about confidentiality and information security? I'm certain a company run IM service would be a better alternative.
I've viewed Twitter and similar services to be used as marketing tools to engage customers and prospects.
A: Or, the layer above Laconica called Identi.ca There's a good talk with the founder of Identi.ca about such usage over at IT Conversations.
A: Many of my colleagues are posting work updates on Twitter, being careful not to disclose company confidential information. From those working on open commercial development projects, I've even seen Twitter updates indicating which work item they were working on. Coolness.
A: I can see the appeal of using twitter in this way. Where I work, we send a daily project "snapshot" to basically everyone else in the company. As the company grows (we are nearing 35 employees now), this is becoming a bit of a burden to read through (or at the very least file/delete) all the status emails as they arrive. I don't know that I could see Twitter replacing these emails, however, because these emails are not necessarily supposed to tell someone when something is completed, but rather to tell someone what it is I'm working on today, and what my upcoming projects are in the future.
I guess most of our project updates are actually done more frequently in person. For larger projects, we now employ what's referred to as a "burndown". This basically means we gather for a quick re-estimation of how much work is left on a project, which then results in a nice graph that should show whether the project is on track or not.
We do also throw in the occasional email when there's something more immediate, or if someone isn't available for discussion/notification.
A: I would consider what the reports were meant to accomplish, and then discover a solution that accentuated that objective without being a logistical nightmare :)
Twitter might only be appropriate if the updates had a short shelf life, and if scattering them among other updates wasn't destructive.
There's also a question of confidentiality on any 3rd party service like this.
A: Check out https://presentlyapp.com/
A: The Prologue theme for WordPress was designed with this in mind.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Accessing non-generic members of a generic object Is there a way to collect (e.g. in a List) multiple 'generic' objects that don't share a common super class? If so, how can I access their common properties?
For example:
class MyObject<T>
{
    public T Value { get; set; }
    public string Name { get; set; }

    public MyObject(string name, T value)
    {
        Name = name;
        Value = value;
    }
}
var fst = new MyObject<int>("fst", 42);
var snd = new MyObject<bool>("snd", true);
List<MyObject<?>> list = new List<MyObject<?>>(){fst, snd};
foreach (MyObject<?> o in list)
Console.WriteLine(o.Name);
Obviously, this is pseudo code, this doesn't work.
Also I don't need to access the .Value property (since that wouldn't be type-safe).
EDIT: Now that I've been thinking about this, it would be possible to use subclasses for this. However, I think that would mean I'd have to write a new subclass for every new type.
@Grzenio
Yes, that exactly answered my question. Of course, now I need to duplicate the entire shared interface, but that's not a big problem. I should have thought of that...
@aku
You are right about the duck typing. I wouldn't expect two completely random types of objects to be accessible.
But I thought generic objects would share some kind of common interface, since they are exactly the same, apart from the type they are parametrized by. Apparently, this is not the case automatically.
A: I don't think it is possible in C#, because MyObject<T1> is not a base class of MyObject<T2>. What I usually do is to define an interface (a 'normal' one, not generic) and make MyObject<T> implement that interface, e.g.
interface INamedObject
{
    string Name { get; }
}
and then you can use the interface:
List<INamedObject> list = new List<INamedObject>(){fst, snd};
foreach (INamedObject o in list)
Console.WriteLine(o.Name);
Did it answer your question?
A: C# doesn't support duck typing. You have 2 choices: interfaces and inheritance, otherwise you can't access similar properties of different types of objects.
A: The best way would be to add a common base class, otherwise you can fall back to reflection.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I fix the multiple-step OLE DB operation errors in SSIS? I'm attempting to make a DTS package to transfer data between two databases on the same server and I'm getting the following errors. I've read that the Multiple-step OLE DB operation error can occur when you are transferring between different database types and there is loss of precision, but that is not the case here. How do I examine the column metadata?
Error: 0xC0202009 at Data Flow Task,
piTech [183]: An OLE DB error has
occurred. Error code: 0x80040E21. An
OLE DB record is available. Source:
"Microsoft SQL Native Client"
Hresult: 0x80040E21 Description:
"Multiple-step OLE DB operation
generated errors. Check each OLE DB
status value, if available. No work
was done.".
Error: 0xC0202025 at Data Flow Task,
piTech [183]: Cannot create an OLE DB
accessor. Verify that the column
metadata is valid.
Error: 0xC004701A at Data Flow Task,
DTS.Pipeline: component "piTech" (183)
failed the pre-execute phase and
returned error code 0xC0202025.
A: '-2147217887' message 'IDispatch error #3105' source 'Microsoft OLE DB Service Components' description 'Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.'."
This is what I was also facing. The problem came from the fact that I changed my SQLOLEDB.1 provider to SQLNCLI11 without mentioning the compatibility mode in the connection string.
When I set this DataTypeCompatibility=80; in the connection string, I got the problem solved.
A: Take a look at the fields' properties (type, length, default value, etc.); they should be the same.
I had this problem with SQL Server 2008 R2 because the field lengths were not equal.
A: This query should identify columns that are potential problems...
SELECT *
FROM [source].INFORMATION_SCHEMA.COLUMNS src
INNER JOIN [dest].INFORMATION_SCHEMA.COLUMNS dst
ON dst.COLUMN_NAME = src.COLUMN_NAME
WHERE dst.CHARACTER_MAXIMUM_LENGTH < src.CHARACTER_MAXIMUM_LENGTH
A: This issue mostly comes from empty rows at the end of the file; remove those and run the job again.
A: This error is common when the source table contains a TEXT column and the target is anything other than a TEXT column. It can be a real time-eater if you have not encountered (or forgot!) this before.
Convert the text column to string and set the error condition on truncation to ignore. This will usually serve as a solution for this error.
A: For me the answer was that I was passing two parameters to and execute SQL task, but only using one. I was doing some testing and commented out a section of code using the second parameter. I neglected to remove the parameter mapping.
So ensure you are passing in the correct number of parameters in the parameter mapping if you are using the Execute SQL task.
A: You can use SELECT * FROM INFORMATION_SCHEMA.COLUMNS but I suspect you created the destination database from a script of the source database so it is very likely that they columns will be the same.
Some comparisons might bring something up though.
These sorts of errors sometimes come from trying to insert too much data into varchar columns too.
A: I had a similar issue when I was transferring data from an old database to a new database; I got the error above. I then ran the following script
SELECT * FROM [source].INFORMATION_SCHEMA.COLUMNS src INNER JOIN [dest].INFORMATION_SCHEMA.COLUMNS dst ON dst.COLUMN_NAME = src.COLUMN_NAME WHERE dst.CHARACTER_MAXIMUM_LENGTH < src.CHARACTER_MAXIMUM_LENGTH
and found that my columns were slightly different in terms of character sizes etc.
I then tried to alter the table to the new table structure which did not work. I then transferred the data from the old database into Excel and imported the data from excel to the new DB which worked 100%.
A: Also check that the script has no batch separator commands (remove the 'GO' statements on a single line).
A: In my case, the problem was setting the variable of the Execute SQL Task: in the parameter mapping, the parameter name (with OLE DB it must be the position of the parameter that you call in the stored procedure) was 1, but the first parameter starts at 0, so I changed it and voilà!
A: Check if you have written a GO statement in your query. If it's there, then try to remove it.
drop table if exists Employee
GO
It should be only
drop table if exists Employee
A: This error will also occur when trying to do an insert where a field is declared NOT NULL and nulls are being inserted.
A: I had this error when transferring a CSV to MSSQL.
I converted the columns to DT_NTEXT and some columns on MSSQL were set to nvarchar(255).
setting them to nvarchar(max) resolved it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Is there a need to destroy char * = "string" or char * = new char[6]? I assume that char* = "string" is the same as char* = new char[6]. I believe these strings are created on the heap instead of the stack. So do I need to destroy them or free their memory when I'm done using them, or do they get destroyed by themselves?
A: The name of the game is "destroy only what you created". Here are the pairs:
*
*malloc/free
*calloc/free
*new/delete
*new []/delete []
Since you created the 2nd string using new [], the onus is on you to destroy it with delete []. Call delete [] string2 when you are done.
Now if your code is convoluted enough and makes keeping track of deletions difficult, consider the usage of scoped pointers or auto pointers. The boost::scoped_ptr class from boost library is a good place to begin. Also look into the RAII idiom, pretty handy and useful stuff.
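In modern C++ (C++11 and later), std::unique_ptr with an array type gives you the new[]/delete[] pairing automatically; a minimal sketch:
#include <memory>

int main() {
    std::unique_ptr<char[]> buf(new char[6]); // delete[] runs automatically at scope exit
    buf[0] = 's';
    return 0;
} // no manual delete[] needed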
A:
I assume when I do char* = "string" it's the same thing as char* = new char[6].
No. What the first one does is create a constant. Modifying it is undefined behaviour. But to answer your question; no, you don't have to destroy them. And just a note, always use std::string whenever possible.
A: No. You only need to manually free strings when you manually allocate the memory yourself using the malloc function (in C) or the new operator (in C++). If you do not use malloc or new, then the char* or string will be created on the stack or as a compile-time constant.
A: They're not the same. Your first example is a constant string, so it's definitely not allocated from the heap. Your second example is a runtime memory allocation of 6 characters, and that comes from the heap. You don't want to delete your first example, but you need to delete [] your second example.
A: No. When you say:
const char* c = "Hello World!";
You are assigning c to a "pre-existing" string constant which is NOT the same as:
char* c = new char[6];
Only in the latter case are you allocating memory on the heap. So you'd call delete when you're done.
A: You don't know where the string literals are stored. It may even be read-only memory, so your code should read:
const char* c = "string";
And a new char array should be deleted just like any other dynamically allocated memory area.
A: Let's see what GCC 4.8 x86-64 Linux does
Program:
#include <cstdio>
int main() {
const char *s = "abc";
char *sn = new char[4];
sn[3] = '\0';
std::printf("%s\n", s);
std::printf("%s\n", sn);
}
Compile and decompile:
g++ -c -ggdb -o a.o -std=c++98 a.cpp
objdump -CSr a.o
The output contains:
const char *s = "abc";
8: 48 c7 45 f0 00 00 00 movq $0x0,-0x10(%rbp)
f: 00
c: R_X86_64_32S .rodata
char *sn = new char[4];
10: bf 04 00 00 00 mov $0x4,%edi
15: e8 00 00 00 00 callq 1a <main+0x1a>
16: R_X86_64_PC32 operator new[](unsigned long)-0x4
1a: 48 89 45 f8 mov %rax,-0x8(%rbp)
Interpretation:
* char *s = "abc" goes into .rodata, so you cannot free it in any way.
* char *sn = new char[4]; comes from the output of operator new[], so you must free it when you are done.
A: new is always an allocation, whereas defining a string inline actually embeds the data in the program itself, and it cannot be changed (some compilers allow this by a smart trick - don't bother).
Some compilers type inline strings so that you cannot modify the buffer.
const char* sz1 = "string"; // embedded string, immutable buffer
char* sz2 = new char[10]; // allocated string, should be deleted
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Apache XML-RPC Exception Handling What is the easiest way to extract the original exception from an exception returned via Apache's implementation of XML-RPC?
A: It turns out that getting the cause exception from the Apache exception is the right approach.
try {
    // ... make the XML-RPC call ...
} catch (XmlRpcException rpce) {
    Throwable cause = rpce.getCause();
    if (cause != null) {
        if (cause instanceof ExceptionYouCanHandleException) {
            handler((ExceptionYouCanHandleException) cause);
        } else {
            throw cause; // enclosing method must declare throws Throwable
        }
    } else {
        throw rpce;
    }
}
A: According to the XML-RPC spec, the response returns the "fault" in the XML.
Is this the "Exception" you are referring to, or are you referring to a Java exception generated while making the XML-RPC call?
Fault example
HTTP/1.1 200 OK
Connection: close
Content-Length: 426
Content-Type: text/xml
Date: Fri, 17 Jul 1998 19:55:02 GMT
Server: UserLand Frontier/5.1.2-WinNT
<?xml version="1.0"?>
<methodResponse>
  <fault>
    <value>
      <struct>
        <member>
          <name>faultCode</name>
          <value><int>4</int></value>
        </member>
        <member>
          <name>faultString</name>
          <value>
            <string>Too many parameters.</string>
          </value>
        </member>
      </struct>
    </value>
  </fault>
</methodResponse>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to set up Git bare HTTP-available repository on IIS My server already runs IIS on TCP ports 80 and 443. I want to make a centralized "push/pull" Git repository available to all my team members over the Internet.
So I should use HTTP or HTTPS.
But I cannot use Apache because of IIS already hooking up listening sockets on ports 80 and 443! Is there any way to publish a Git repository over IIS? Does Git use WebDAV?
Update. It seems that Git HTTP installation is read-only. That's sad. I intended to keep the stable branch on a build server and redeploy using a hook on push. Does anyone see a workaround besides using SVN for that branch?
A: Git supposedly supports WebDAV, and should work with any WebDAV server. However, it's really slow compared to the native Git protocols.
http://www.kernel.org/pub/software/scm/git/docs/howto/setup-git-server-over-http.txt
A: Bonobo Git Server
https://bonobogitserver.com/
GitAspx - By Jeremy Skinner
https://github.com/JeremySkinner/git-dot-aspx/
https://github.com/JeremySkinner/git-dot-aspx/downloads
Install Instructions
https://www.jeremyskinner.co.uk/2010/10/19/gitaspx-0-3-available/
Git Web
https://gitweb.codeplex.com/
WebGitNET
https://github.com/otac0n/WebGitNet
Alternatively ... (non-IIS, but highly recommended, free and open-source)
Gitea (fork of Gogs): https://gitea.io
Gogs: https://gogs.io
SCM Manager allows you to easily set up revision control endpoints for Git, Hg, and SVN under the same hosting process. HTTP/HTTPS is supported along with built-in user authentication.
https://www.scm-manager.org
https://bitbucket.org/sdorra/scm-manager/
A: Git isn't too bad on Windows these days.
And if you want to use SVN on ports 443 and/or 80 when IIS is already using them, try the tool at http://gstoolkit.codeplex.com/wikipage?title=SvnReverseProxy&ProjectName=gstoolkit - a reverse proxy that lets IIS transparently pass SVN traffic through to a back-end VisualSVN server (running on the same machine on port 8080).
I'm still trying to get WebDAV and Git working on Windows though (using either Apache's or IIS's WebDAV).
A: There is a way to setup Git with MSysGit without cygwin.
http://java2cs2.blogspot.com/2010/03/setup-git-server-on-windows-machine.html
A: Try these instructions, which use SCM-Manager and IIS: Hosting Git, SVN and Hg (Mercurial) repositories on Windows with IIS
A: https://github.com/projectkudu/kudu is the engine behind deployments on Azure. This might help for anybody still asking this question...
A: It's possible to run a Git Smart HTTP server (supporting both push and pull from remote clients over HTTP) on Windows without any extra dependencies besides IIS and Git.
When you install Git, it includes an executable called git-http-backend.exe which implements the Smart HTTP protocol which is designed to run through CGI on any web server.
You can secure the Git server using HTTP Basic Authentication and HTTPS. You can use Windows accounts and file/folder permissions to control access rights to individual repositories.
Full detailed instructions are here: How to run a Git server on Windows with IIS.
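For orientation, the IIS side of that setup boils down to a CGI handler mapping in web.config. This is only a sketch: the git-http-backend.exe path varies by Git version and install location, and the GIT_PROJECT_ROOT / GIT_HTTP_EXPORT_ALL environment variables still have to be configured separately.
<configuration>
  <system.webServer>
    <handlers>
      <!-- Route all requests for this site to Git's Smart HTTP CGI -->
      <add name="GitSmartHttp" path="*" verb="*" modules="CgiModule"
           scriptProcessor="C:\Program Files\Git\mingw64\libexec\git-core\git-http-backend.exe"
           resourceType="Unspecified" />
    </handlers>
  </system.webServer>
</configuration>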
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: Designers and developers working together The rich presentational capabilities of WPF and Silverlight mean developers like me will be working closely with graphic designers more often these days, as is the case in my next project.
Does anyone out there have any tips and experience (from both points of view) on making this go more smoothly?
For example, when I mentioned source control to a designer recently, I was quickly told you can't source control graphics, images etc, so it is a waste of time. So I responded: ok but, what about XAML files in WPF/Silverlight?
Scott Hanselman spoke about this topic in a podcast, but he focused more on the tools, while I'm more interested in the communication issues/aspects.
A: Involve the graphic designer in early design and architecture sessions.
You want to involve them to reveal misaligned assumptions and to establish a pattern of working together rather than throwing things back and forth over the wall.
A: Originally, it was envisioned that professional designers would work in Expression Blend, and developers would work in Visual Studio, making changes to a single shared set of source files. While it is certainly possible to do that (so long as you are careful to check regularly that you haven't broken something expected by the other dev. or design tool), many members of the developer community, including some inside Microsoft, have discovered benefits in keeping Blend and Visual Studio project activity SEPARATE -- even to the point of manually cutting and pasting carefully-refactored versions of Blend-generated Xaml into the "official" VStudio project source, rather than allowing designers and developers operate directly on a single shared code base. Microsoft's User Experience Team in the UK published a video describing the problems they ran into trying to coordinate designer and developer efforts on actual projects.
Real_World_WPF_DesignersAndDevelopersWorkingTogether
One of the main lessons learned is that you can't staff a project with designers and developers who are completely ignorant of each other's domains. Developers need to be familiar enough with Blend that they can provide designers with useful UI shells for the designer to decorate, and useful data "stubs" the designer can design interactivity against, and the designer needs to have enough understanding of development issues that they don't do things like delete controls and replace them with custom visual elements - not realizing that they broke all the functionality tied to the original control.
A: Microsoft's vision of the designer/developer workflow marriage definitely seems to break down in real life. I have experience working on a fairly large scale WPF project which involved 2 dedicated design resources for about 4 months. Here are some facts that Microsoft seems to often forget.
* Designers often prefer to use Macs (designers at my company are 100% Mac - 0% Windows)
* Blend doesn't run on a Mac (and as for VM solutions - designers typically don't like geeky workarounds like running weird applications in a foreign OS)
* Designers use their tools of the trade - Photoshop and Illustrator. Period.
* The aggressiveness of today's schedules usually doesn't provide ample time for designers to learn a totally new application / design environment (like Blend).
So given the above, what I noticed was that this creates a new job type - either a very techy designer or a graphically enlightened programmer. Basically, someone who can take the design assets in raw form - usually .psd or illustrator format and apply these as needed to the application process.
I turned out to be that guy (graphically enlightened programmer). I spent a lot of time exporting XAML from Illustrator files, cleaning them up by hand when necessary, and making these assets easily usable display objects in Blend or VS. There were also times when I would take a design element and re-draw it using Blend (usually when the original asset was bitmap-based and it made more sense to convert it to vector).
My application may not have been typical - it was extremely graphically rich, and resolution independence was one of the main objectives, as it needed to look good across multiple resolutions and aspect ratios (think of the difficulties in designing for TV in today's landscape - things have to look good in low-res SD and scale well up to hi-res HD).
In summary, I think WPF is an awesome technology and absolutely a step in the right direction for Microsoft. It however is not the end-all be-all solution for integrating the designer in the development process - unless you redefine the role of designer.
A: I'm Felix Corke, the designer from the hanselman podcast you mentioned, so here are a couple of points from a genuine creative as opposed to a developer.
It took a long time to become used to developer tools - I'd never heard of Visual Studio, C# or any type of source control when I first started doing xaml work a few years ago. They were as alien to me as maybe Illustrator or 3DsMax would be to you.
My biggest single point is that the designer can't be expected to know developer practices - please be prepared to do a great deal of hand-holding. You won't have to learn anything new whereas the designer will be launched into a whole new scary side of app development. I made a right mess of a few solutions and checkins (and still do).
Happily, I've learned to become more of a design-focussed integrator than a straight creative, and maybe this is a role you need to include in your project. This is the illustration I made for our beauty-and-the-geek designer/developer session at Mix - if either of you is too far toward either end of the spectrum, it can be difficult to understand how the other works and what their role should be.
Happy to answer any specific questions!
ps you do NOT want 100Mb+ .psd files in source control ;)
A: One of the things I've discovered is that how you as a developer design your code greatly affects what the designer can do with it. Often you download a Silverlight or WPF sample application from the web and open it up in Blend, just to have Blend crash on you because the code doesn't run well inside the designer. If it doesn't crash, it seldom looks anything like the running application.
I recently gave a talk at Tech Ed Australia and New Zealand about techniques you can apply to "design for designability". A short bullet list is included:
* Write code that can take advantage of data binding. The Model-View-ViewModel or presentation model pattern is a good fit for this.
* Supply "design time" stubs for your service dependencies. If the class you are binding against makes web service calls, replace the web service client with a stub class that returns dummy data for the designer to consume inside Blend. This can easily be done through IoC and dependency injection, injecting one implementation when HtmlPage.IsEnabled == false (see the sketch after this list).
* By using data binding you can limit the number of "named elements" you have in your XAML file. If you write a lot of code-behind, you end up coupling your C# code to named elements such as txtName or txtAddress, making it easy for the designer to "screw up".
* Use a command pattern instead of code-behind click event handlers. By loosely coupling the invoker of an event from its handler, you can have fewer named elements, and you give the designer the freedom to choose between a Button or a MenuItem to invoke a specific command.
* Test your code in Blend! Even if you consider yourself a pure developer, you should test that your code is consumable by a tool, and strive to get the best possible experience at design time. Some would argue that a tool shouldn't affect your software design, just as some complain about "design for testability" and making software design decisions just to make the code more testable. I think it's a smart thing to do, and the only way you can get a real designer-developer workflow going.
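To make the design-time stub idea concrete, here is a minimal C# sketch. All of the type names are hypothetical, and the designer check is passed in as a flag rather than hard-wired to HtmlPage.IsEnabled:
using System.Collections.Generic;

public interface ICustomerService
{
    IList<string> GetCustomerNames();
}

public class DesignTimeCustomerService : ICustomerService
{
    // Canned data so the designer has something to lay out in Blend.
    public IList<string> GetCustomerNames()
    {
        return new List<string> { "Ada Lovelace", "Grace Hopper" };
    }
}

public class WebCustomerService : ICustomerService
{
    public IList<string> GetCustomerNames()
    {
        // The real web service call would go here.
        return new List<string>();
    }
}

public static class CustomerServiceFactory
{
    // In Silverlight, HtmlPage.IsEnabled is false inside the designer,
    // which is the test suggested in the list above.
    public static ICustomerService Create(bool runningInDesigner)
    {
        if (runningInDesigner)
            return new DesignTimeCustomerService();
        return new WebCustomerService();
    }
}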
Other tips would be to start small. If your designer is new to XAML, WPF and Silverlight, start by introducing them to the project team, and have them do some basic designs in the tools they know. Let them do some buttons and illustrations in Adobe Illustrator, export them to XAML, and show them how you can leverage their design assets directly. Continue by introducing more and more, and hopefully they will get interested and want to make the switch to Blend. It's quite a learning curve, but it sure is worth it!
Good luck!
PS: I have written a lot about patterns and making designer-friendly code on my blog at http://jonas.follesoe.no. You can also find links to a video recording of my Tech Ed talk, as well as lots of links to further reading on the topic.
A: I'm a big believer in the Integrator approach which is really the role I have had to perform to make our WPF efforts successful.
Laurent Bugnion has a post on this that describes what I'm talking about. Robby Ingebretsen is also a big believer in this approach.
But basically, someone has to cover the 'gap' that exists between the developer world and the designer world. What usually happens is that this person comes from either the developer world or the designer world. If they come from the developer world, then they are probably a developer with designer tendencies (they're responsible for look and feel, the visuals in the application, the layout of the screens, etc.). If they come from the designer world, then they aren't afraid of code and enjoy diving into it every now and then to get that animation or whatever else sparkling.
However, regardless of what world they come from, they usually have to build skills they never had before. In my case, I am a developer who loves the user interface layer, and therefore I would say that I am a developer with designer tendencies. In order to cover that gap and have productive conversations with our graphics designer, I have had to pick up a whole bunch of designer-type skills: learning to use Expression Design, ZAM 3D, etc.
Shannon Braun recently gave a presentation at a local developer conference about the developer/designer relationship and the workflows that the community is discovering work for them. I didn't attend the conference, but I thought his slides were a great discussion on the matter.
A: The extent to which designers have come to feel entitled to be distant from the whole of the work involved in building a software product is a much bigger problem that needs to be solved. Don't pander to any designer's expressed right to not have to know how their work gets integrated into the whole.
The kind of stark specialization that has grown up in the designer community is one of the biggest industrial maturity problems that faces the software development industry. It's an extent of specialization that predictably creates more rework and longer cycle times.
This is also true of developers' sense of entitlement to go blissfully unaware of interaction design and implementation.
Extreme specialization is always an exponential multiplier in productivity problems. Solve it organizationally by adopting processes that promote learning cultures. This is the level of maturity that most other production industries have already realized, and that software drags woefully behind.
At every place in a development workflow where handoffs occur between over-specialization, work queues and buffers form. Software remains one of the few industries that doesn't recognize this as one of the biggest problems we face. This is even more exacerbated in the Microsoft community as over-specialization seems ever-more normal due to Microsoft's perpetuation of over-specialization through its tools and guidance. Unless you can afford to waste as much money as Microsoft does in development efforts, you should look to methodologies that are much better informed on questions of flow and productivity.
Consequently, the developer who cannot test and the tester who cannot code is a symptom of the same industrial immaturity.
You won't learn any of this from the Scrum template for TFS. Microsoft was years behind the curve in getting agile thinking in-play even in its most rudimentary forms, and now that we're progressing into Lean thinking, Microsoft will be another three to five years away from trying to incorporate Lean thinking into its product lines. Don't wait for Microsoft to tell you how to shape a team and a workflow. You can learn right now from the people that Microsoft will ultimately pay attention to in a few years.
A: I have spent 4 months on a project working extremely closely with a designer and he has still not picked up the basic idea of CVS (which is not my choice of source control system). I'm talking template files, JavaScript and CSS here. He's not stupid; it's just one of those things that makes his job harder, so he resists fully committing himself to it.
In my case I had to really hammer home the point that almost all of my JavaScript depended on the mark-up, and that when he changed his pure-CSS, DIV-based layout into a table-based one without telling me, all my JS was going to break.
Often during the course of the project, the designer and I - we get on quite well and play soccer together outside of work - had very heated exchanges about our respective responsibilities. If I didn't know him well enough to just get past these exchanges, I think it would have created an unbearable working environment. So I think it's important that you establish between yourselves, and with some sort of manager or project supervisor, exactly what is expected of both parties during the project.
In my case there have been very few problems lately, because the situation with CVS has been sorted out as well as the idea that he can't just go and change the mark-up whenever he feels like it. Rather than try and create template files and work on them directly, the designer only works on static files and its my responsibility to plug them into my template files.
It's all about communication and a little bit of compromise on both sides.
A: This may be a bit off topic (I'm replying specifically to your question about source control and graphics), but you can put binary data (images etc.) into source control (and in my opinion in a lot of cases should) -- they just take up more disk space and you can't use a diff view to analyze what has changed in any meaningful way, but what you do gain is a history of commit messages documenting each revision, rollback ability and the ability to easily archive (tagging a revision in SVN terms) all files (be they visual assets, documentation, source code, whatever) belonging to a specific release/version together. It's also easier for your build system to just fetch everything required for building a specific version of your software from the source control.
A: In my experience, the integrator or "devsigner" role really needs to be involved in this process unless everyone on the (small) team is able to perform this role. This is a very rare circumstance. Usually you will find that developers are very good at developing but aren't so great with design/usability, and designers are great with aesthetics/usability but don't want to, or are not educated enough to, code. Having someone who can cross over into both worlds and "speak the language" is very important.
The integrator needs to coordinate the controls that are being developed with the design assets that are being created by the designers. In our current project, we have 6 active developers and 2 designers from an outside shop. I am the integrator for this project and I spend most of my day in Expression Blend. The developers work primarily in VS creating controls that meet our product spec and the design shop is designing what the end product will look like. The designers are working in Illustrator. My job is to take the Illustrator files and create control styles from them and then apply them to the controls developed by our development team. As we move towards Blend 3 with native support for PSD and AI files, this task becomes much easier.
It is very helpful to create the "look" for your application in a separate solution from the main trunk of the application and then merge your ResourceDictionaries into the main app later. You can get the look and feel correct without getting too caught up in what could still be incomplete controls.
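For reference, pulling the finished look back into the main application is typically just a matter of merging the ResourceDictionaries in App.xaml - a minimal sketch, where the assembly and file names are hypothetical:
<Application.Resources>
    <ResourceDictionary>
        <ResourceDictionary.MergedDictionaries>
            <!-- Styles authored in the separate 'look' solution -->
            <ResourceDictionary Source="/MyLookAssembly;component/ControlStyles.xaml" />
        </ResourceDictionary.MergedDictionaries>
    </ResourceDictionary>
</Application.Resources>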
A: I am going to assume that you are referring to RIA projects, since you mention Silverlight.
I have worked on quite a few RIA projects with Adobe, designing and developing applications and services.
The best advice I can give is based on my 14 years of experience as a UX and visual designer with some programming experience - although pathetic compared to you guys.
Accept that you won't understand each other.
The programmer thinks in terms of what functionality should be built; the designer thinks in terms of how the functionality should behave.
For the developer a button is mostly generic; for the designer it's not. Designers think in composition, developers think in frameworks.
So learn to understand that your responsibility is different.
You the developer DO need to think about how generic your code is and can't afford to treat everything as being unique and a hardcoded composition. That is unless you can automate that uniqueness somehow.
The designer DOES need to think about the application or service as somehow unique. It might mean that a button is not a button. There might be different sizes or colors or other annoyances.
So make sure you develop a good relationship with the designer by acknowledging that you understand the designers responsibility and make sure he understands yours.
It's not that you are not interested in making the best application in the world. It's just that some of these design decisions take quite a lot of time.
Make sure that you get very clear on how the designer should deliver to you, so you don't waste his time or your own. What format? What assets? What naming?
All the things involved in delivering from one paradigm to another.
And most importantly, communicate, and respect that they don't know how to write JavaScript or understand the basic ideas of CVS.
Most developers wouldn't know how to kern to save their life, what a widow is, how best to layer a Fireworks file or create a photo-realistic icon, come up with a good tagline or make something understandable to average Joe in 4 words. You don't know what a grid or alignment is, and you tend to make things green and purple on black.
And the designer should understand that just because you deal with programming does not mean you are a robot, or that you can't have creative ideas and solutions. He should also try to learn how to program, at least pseudo-program, so that he understands what's involved in making your project.
And most importantly. Don't start to debate Mac vs. PC :) Projects have been canceled because of this.
A: Quite frankly, you should tell the designer that images can, should and "will be put in source control, mister!" :)
It may be slightly non-conventional and you won't be able to do a merge or anything of that nature, but there will be revisions and a history, etc. Images can also be embedded in a resource file, which goes into source control as well.
XAML can (and should) be put in source control and as its a markup file it will benefit from all of the features.
As far as tips from working with a designer, the one you are working with scares the heck outta me just by that comment alone, so it may all boil down to WHO you are working with. I would explain basic best practices in a nice manner and proceed from there.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: File Descriptor Assignment in C When sockets are created or files are opened/created in C, is the file descriptor that's assigned to the socket/file guaranteed to be the lowest-valued descriptor available? What does the C spec say about file descriptor assignment in this regard, if anything?
A: It's not guaranteed to be the lowest, and is implementation dependent (1). In general, however, the routine that assigns open file descriptors uses a method that gives you the first open one. It could be that several lower ones are freed immediately afterwards, though, leaving you with a higher descriptor than you might expect.
The only reason I can think of to know this, though, is for the select function, which is sped up if you pass it the highest file descriptor you need to check for.
(1) Note that those implementations that follow the IEEE standard do guarantee the lowest unused descriptor for files, but this may not apply to sockets. Not every implementation follows the IEEE standard for open(), so if you're writing portable software it is best not to depend on it.
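For illustration, here is a minimal sketch of that select usage (fd_a and fd_b are hypothetical descriptors; error handling omitted):
#include <stddef.h>
#include <sys/select.h>

/* Block until one of two descriptors is readable. select()'s first
   argument must be the highest descriptor plus one, which is why
   knowing your largest fd matters for performance. */
int wait_readable(int fd_a, int fd_b)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(fd_a, &readfds);
    FD_SET(fd_b, &readfds);
    int maxfd = (fd_a > fd_b) ? fd_a : fd_b;
    return select(maxfd + 1, &readfds, NULL, NULL, NULL);
}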
A: I don't think you'll find it in the C spec, more likely the spec for your OS. My experience in Linux has been that it's always the lowest.
A: I'll counter this with another question - why does this matter? You shouldn't be comparing the file descriptor with anything (unless checking for stdin/stdout/stderr) or doing math with it. As long as it fits in an int (and it's guaranteed to), that's all you really need to know.
A: Steve M is right; C has no notion of sockets, and its file I/O functions use a [pointer to a] FILE object, not a descriptor.
A: @aib the open(), close(), lseek(), read() and write() calls all make use of file descriptors. I hardly ever use streams for I/O.
@Kyle it matters because of statements like select(). Knowing the highest descriptor can improve performance.
A: The C spec says that it's implementation dependent. If you're looking at a Unix implementation, the man page for open(2) says "The file descriptor returned by a successful call will be the lowest-numbered file descriptor not currently open for the process."
This helps if you're trying to attach a specific file to a specific descriptor. Say you want to redirect stderr to /dev/null. Something like
close(2); open("/dev/null", O_WRONLY);
ought to do it. You should, of course, capture the fd returned by open and ensure that it's 2.
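A slightly more defensive variant of the same idea is to use dup2, which pins the target descriptor explicitly instead of relying on open returning exactly 2 - a sketch with minimal error handling:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/null", O_WRONLY);
    if (fd < 0)
        return 1;
    if (fd != 2) {
        dup2(fd, 2);  /* make fd 2 refer to /dev/null */
        close(fd);    /* drop the extra descriptor */
    }
    fprintf(stderr, "this line is discarded\n");
    return 0;
}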
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |