Q: Managing web services in FlexBuilder - How does the manager work? In FlexBuilder 3, there are two items under the 'Data' menu to import and manage web services. After importing a webservice, I can update it with the manage option. However, the webservices seems to disappear after they are imported. The manager does however recognize that a certain WSDL URL was imported and refuses to do anything with it.
How does the manager know this, and how can I make it refresh a certain WSDL URL?
A: In your src folder of the flexbuilder project you should see the generated classes. For instance, if you use the manager to generate the proxy classes for www.example.com you should see the folders /com/example with the generated proxy classes inside.
To consume these web services in ActionScript, use the statement:
import com.example.*;
To consume the webservice in mxml include the .as file using:
<mx:Script source="yourscriptname.as"/>
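For instance, a minimal sketch of calling one of the generated proxies from that script file (the class and operation names here are hypothetical -- use whatever the manager actually generated for you):
import com.example.*;

// ExampleService and getItems are hypothetical generated names
private var service:ExampleService = new ExampleService();

private function loadData():void {
    service.getItems(); // add result/fault listeners on the service to handle the response
}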
To refresh the generated proxy classes, consuming the latest WSDL, simply open the manager and select "update".
Also, I found this article very useful for consuming web services.
I hope that helps; the question was kind of vague about the problem.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Crop MP3 to first 30 seconds Original Question
I want to be able to generate a new (fully valid) MP3 file from an existing MP3 file to be used as a preview -- try-before-you-buy style. The new file should only contain the first n seconds of the track.
Now, I know I could just "chop the stream" at n seconds (calculating from the bitrate and header size) when delivering the file, but this is a bit dirty and a real PITA on a VBR track. I'd like to be able to generate a proper MP3 file.
Anyone any ideas?
Answers
Both mp3splt and ffmpeg are good solutions. I chose ffmpeg as it is commonly installed on Linux servers and is also easily available for Windows. Here are some more useful command-line parameters for generating previews with ffmpeg:
* -t <seconds> chop after specified number of seconds
* -y force file overwrite
* -ab <bitrate> set bitrate e.g. -ab 96k
* -ar <rate Hz> set sampling rate e.g. -ar 22050 for 22.05kHz
* -map_meta_data <outfile>:<infile> copy track metadata from infile to outfile
Instead of setting -ab and -ar, you can copy the original track settings, as Tim Farley suggests, with:
* -acodec copy
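Putting those flags together, a preview-generation command might look like this (a sketch; note that newer ffmpeg builds spell the metadata flag -map_metadata instead of -map_meta_data):
ffmpeg -y -t 30 -i input.mp3 -ab 96k -ar 22050 preview.mp3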
A: If you wish to REMOVE the first 30 seconds (and keep the remainder) then use this:
ffmpeg -ss 30 -i inputfile.mp3 -acodec copy outputfile.mp3
A: You might want to try Mp3Splt.
I've used it before in a C# service that simply wrapped the mp3splt.exe win32 process. I assume something similar could be done in your Linux/PHP scenario.
A: try:
ffmpeg -t 30 -i inputfile.mp3 outputfile.mp3
A: I also recommend ffmpeg, but the command line suggested by John Boker has an unintended side effect: it re-encodes the file to the default bitrate (which is 64 kb/s in the version I have here at least). This might give your customers a false impression of the quality of your sound files, and it also takes longer to do.
Here's a command line that will slice to 30 seconds without transcoding:
ffmpeg -t 30 -i inputfile.mp3 -acodec copy outputfile.mp3
The -acodec switch tells ffmpeg to use the special "copy" codec which does not transcode. It is lightning fast.
NOTE: the command was updated based on comment from Oben Sonne
A: This command also works perfectly; I cropped my music files from second 20 to second 40.
-y: force overwriting of the output file.
ffmpeg -i test.mp3 -ss 00:00:20 -to 00:00:40 -c copy -y temp.mp3
A: You can use cutmp3:
cutmp3 -i foo.mp3 -O 30s.mp3 -a 0:00.0 -b 0:30.0
It's in the Ubuntu repos, so just: sudo apt-get install cutmp3.
A: I got an error while doing the same:
Invalid audio stream. Exactly one MP3 audio stream is required.
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
The fix for me was:
ffmpeg -ss 00:02:43.00 -t 00:00:10 -i input.mp3 -codec:a libmp3lame out.mp3
A: My package medipack is a very simple command-line app that wraps ffmpeg.
You can trim your file using these commands:
medipack trim input.mp3 -s 00:00 -e 00:30 -o output.mp3
medipack trim input.mp3 -s 00:00 -t 00:30 -o output.mp3
You can view the options of the trim subcommand with:
srb@srb-pc:$ medipack trim -h
usage: medipack trim [-h] [-s START] [-e END | -t TIME] [-o OUTPUT] [inp]
positional arguments:
inp input video file ex: input.mp4
optional arguments:
-h, --help show this help message and exit
-s START, --start START
start time for cuting in format hh:mm:ss or mm:ss
-e END, --end END end time for cuting in format hh:mm:ss or mm:ss
-t TIME, --time TIME clip duration in format hh:mm:ss or mm:ss
-o OUTPUT, --output OUTPUT
You can also explore the other options using medipack --help:
srb@srb-pc:$ medipack --help
usage: medipack.py [-h] [-v] {trim,crop,resize,extract} ...
positional arguments:
{trim,crop,resize,extract}
optional arguments:
-h, --help show this help message and exit
-v, --version Display version number
You may visit my repo https://github.com/srbcheema1/medipack and check out the examples in the README.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "116"
} |
Q: SQL Server 2005 Temporary Tables In a stored procedure, when is #Temptable created in SQL Server 2005? When creating the query execution plan or when executing the stored procedure?
if (@x = 1)
begin
select 1 as Text into #Temptable
end
else
begin
select 2 as Text into #Temptable
end
A: It's created when it's executed and dropped when the session ends.
A: You might also want to consider table variables, whose lifecycle is completely managed for you.
DECLARE @MyTable TABLE (MyPK INT IDENTITY, MyName VARCHAR(100))
INSERT INTO @MyTable ( MyName ) VALUES ( 'Icarus' )
INSERT INTO @MyTable ( MyName ) VALUES ( 'Daedalus' )
SELECT * FROM @MyTable
I almost always use this approach, but it does have disadvantages. Most notably, you can only use indexes that you can declare within the TABLE() construct, essentially meaning that you're limited to the primary key only -- no using ALTER TABLE.
A: Interesting question.
For the type of temporary table you're creating, I think it's when the stored procedure is executed. Tables created with the # prefix are accessible to the SQL Server session they're created in. Once the session ends, they're dropped.
This url: http://www.sql-server-performance.com/tips/query_execution_plan_analysis_p1.aspx seems to indicate that temp tables aren't created when query execution plans are created.
A: Whilst it may be automatically dropped at the end of a session, it is good practice to drop the table yourself when you're done with it.
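For example, a common cleanup idiom (a sketch):
IF OBJECT_ID('tempdb..#Temptable') IS NOT NULL
    DROP TABLE #Temptable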
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Resources for building a Visual Studio plug-in? I'd like to build a pretty simple plug-in for Visual Studio, but I don't really know how this has to be done. Is this doable in (non-managed) C++?
I'd like to know what resources you'd recommend me.
A: DevExpress has a free plug-in called DXCore which provides some nice abstractions upon which to build other plug-ins... you might look into that.
A: Do you really want to do it in unmanaged code? DevExpress has a nice free library for developing Visual Studio plugins, but it's managed. This is what they use to develop Refactor! and CodeRush:
http://www.devexpress.com/Products/Visual_Studio_Add-in/DXCore/
It seems the underlying API is kind of messy. As far as I know this is the easiest way.
A: I've never tried, so I don't know about doing it in C++, but this website has loads of information: http://msdn.microsoft.com/en-us/vsx/default.aspx
A: A good place to start would be this tutorial:
http://www.c-sharpcorner.com/UploadFile/mgold/AddIns11292005015631AM/AddIns.aspx
A: The DXCore from DevExpress is a wonderful library on which to base all sorts of plugins. Feel free to drop by the IDE Tools Forums, and more specifically the DXCore plugin forum, and ask for any help you might need. :)
I'm not so sure about unmanaged C++, but I know for certain that the DXCore supports plugin creation in any managed language.
A: Found this MSDN tutorial: Creating Add-ins Using Visual C++. Thanks Matt.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Can jQuery select by CSS rule, not class? A .container can contain many .components, and .components themselves can contain .containers (which in turn can contain .components etc. etc.)
Given code like this:
$(".container .component").each(function(){
$(".container", this).css('border', '1px solid #f00');
});
What do I need to add to the line within the braces to select only the nested .containers that have their width in CSS set to auto? I'm sure it's something simple, but I haven't really used jQuery all that much.
A: $(".container .component").each(function() {
if ($(".container", this).css('width') === "auto")
$(".container", this).css('border', '1px solid #f00');
});
A: You may want to look into .filter().
Something like:
$('.container .component .container')
.filter(function() {return $(this).css('width') == 'auto';})
.css({border: '1px solid #f00'});
A: $(".container .component").each(function()
{
$(".container", this).each(function() {
if($(this).css('width') == 'auto')
{
$(this).css('border', '1px solid #f00');
}
});
});
Similar to the other answers, but since components can contain multiple containers, the width check also needs to happen inside an .each() here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Targeting multiple versions of .net framework Suppose I have some code that would, in theory, compile against any version of the .net framework. Think "Hello World", if you like.
If I actually compile the code, though, I'll get an executable that runs against one particular version.
Is there any way to arrange things so that the compiled exe will just run against whatever version it finds? I strongly suspect that the answer is no, but I'd be happy to be proven wrong...
Edit: Well, I'll go to the foot of our stairs. I had no idea that later frameworks would happily run exe's compiled under earlier versions. Thanks for all the responses!
A: I'm not sure if this is correct, but I'd try to compile it for the lowest version; the higher versions should be able to run the lower versions' exes.
A: Read ScottGu's post about VS 2008 Multi-Targeting Support:
One of the big changes we are making starting with the VS 2008 release is to support what we call "Multi-Targeting" - which means that Visual Studio will now support targeting multiple versions of the .NET Framework, and developers will be able to start taking advantage of the new features Visual Studio provides without having to always upgrade their existing projects and deployed applications to use a new version of the .NET Framework library.
Now when you open an existing project or create a new one with VS 2008, you can pick which version of the .NET Framework to work with - and the IDE will update its compilers and feature-set to match this. Among other things, this means that features, controls, projects, item-templates, and assembly references that don't work with that version of the framework will be hidden, and when you build your application you'll be able to take the compiled output and copy it onto a machine that only has an older version of the .NET Framework installed, and you'll know that the application will work.
That way you can use VS 2008 to develop .NET 2.0 projects that will work on .NET 2.0, 3.0 and 3.5.
A: Alongside multi-targeting, the frameworks are backwards compatible, so something compiled for 1.0 will run on 1.1 and 2.0; something compiled on 1.1 will run on 2.0... etc.
A: I know @John Boker is correct when it comes to .Net class libraries. You can compile a class library against .Net 1.1 and then use it in a .Net 2.0 or higher project.
I suspect the same is also true for executables.
A: With 2005 & 2008, yes (on CLR 2.0).
With 2003, no, because it compiles down to CLR 1.1.
You could theoretically write some code using #if (DOTNET35) and such so that you don't use features outside the compiler's knowledge, and then run the desired compiler on the app... I question the usefulness of this, though.
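A minimal sketch of that idea, given some List<int> numbers (note that DOTNET35 is not a built-in symbol -- you would define it yourself in the project settings for each target):
#if DOTNET35
    // C# 3.0-era syntax (requires the 3.5 compiler and System.Linq)
    var evens = numbers.Where(n => n % 2 == 0).ToList();
#else
    // equivalent code that the older compilers understand
    List<int> evens = new List<int>();
    foreach (int n in numbers) { if (n % 2 == 0) evens.Add(n); }
#endif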
A: Well, AFAIK, all .NET versions (except version 1.x) compile to the same bytecode. In the case of C#, all new features are simply syntactic sugar, which get transformed into C# 2.0 constructs when compiling.
The key point where things could go wrong is when you use C# 3.0 or 3.5 specific DLLs. They don't work well with the .NET 2.0 framework, so you can't use those.
I can't really think of a workaround for this, sorry :(
A: On the subject of which .NET framework the user has installed, there is also a new option with the Client Profile that's available with .NET 3.5 SP1. This basically allows you to ship a small (277k) bootstrap program which downloads and installs the required files (a subset of the full .NET Framework).
For more information, and general tips on creating a small .NET installation, see this great blog entry by Scott Hanselman.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Custom Aggregate Functions in MS SQL Server? How can I create a custom aggregate function in MS SQL Server? An example would help a lot.
A: SQL Server 2000 doesn't officially support custom aggregate functions. However, I recently needed that functionality as well, and I found this article enlightening:
http://weblogs.sqlteam.com/jeffs/articles/1490.aspx
It's a bit hack-ish, though: it requires access to the sp_OA* extended stored procedures.
The summary is that you can simulate an aggregate function with a series of four wrapper functions, each of which performs one of the following tasks:
* Create an ActiveX object that can hold state within the query. Call this before running the query.
* Do the actual aggregation using the ActiveX object.
* Clear the ActiveX object state on GROUP BY boundaries.
* Destroy the object. Call this after running the query and during error handling.
You then include items 2 and 3 in the select list for your query, and item 2 must also be wrapped in an existing no-effect aggregate function like MAX() or MIN(). You can also use this technique for cumulative functions to do things like row numbers.
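As a sketch of the shape such a query takes (the wrapper names here are hypothetical stand-ins for the four functions from the article, and the table is a placeholder):
DECLARE @handle INT
SET @handle = dbo.fnAggCreate()                             -- item 1, before the query
SELECT CustomerID,
       MAX(dbo.fnAggStep(@handle, Amount)) AS CustomAgg,    -- item 2, wrapped in MAX()
       dbo.fnAggReset(@handle, CustomerID) AS Ignored       -- item 3, reset per group
FROM Orders
GROUP BY CustomerID
SELECT dbo.fnAggDestroy(@handle)                            -- item 4, after the query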
Some of the comments suggest that the optimizer may try to negate the aggregation effects by optimizing away the calls in some circumstances, though I expect that would be a very rare case indeed. However, I found this question because I took those warnings seriously enough to continue searching for something better.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: What is the best way of adding in regularly used blocks of code when marking up in TextMate? Caveat: I'm relatively new to coding as well as TextMate, so apologies if there is an obvious answer I'm missing here.
I do a lot of HTML/CSS markup, there are certain patterns that I use a lot, for example, forms, navigation menus etc. What I would like is a way to store those patterns and insert them quickly when I need them.
Is there a way to do this using TextMate?
A: You can do this very easily in TextMate using Snippets. Just add a new snippet in the bundle editor, and set up how you want to trigger it. You can set a key shortcut, or have it pop up when you hit Tab after a certain word/pattern.
There are many things you can do with them—in your case, it would probably be very useful to set so-called "placeholders" in your snippets, which are the parts that change every time (e.g. the names of the fields in the form). Then, as soon as you insert the snippet, you can hit Tab to move between these.
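For example, a small HTML form snippet might look like this (a sketch; ${1:...} and friends are the tab-stop placeholders, and $0 marks where the cursor ends up):
<form action="${1:/submit}" method="${2:post}">
    <input type="text" name="${3:field}" />
    $0
</form>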
A: Along with the links provided above, I think you'll find this screencast useful. It gives a run through of some of the tools TextMate's HTML bundle already provides.
It's probably slightly off-topic though, but worth a look nonetheless.
A: As mentioned prior, snippets are what you are looking for.
For reference look here:
http://manual.macromates.com/en/snippets
http://screenflicker.com/mike/code/div-snippets/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Changing the default title of confirm() in JavaScript? Is it possible to modify the title of the message box the confirm() function opens in JavaScript?
I could create a modal popup box, but I would like to do this as minimalistic as possible.
I would like to do something like this:
confirm("This is the content of the message box", "Modified title");
The default title in Internet Explorer is "Windows Internet Explorer" and in Firefox it's "[JavaScript-program]." Not very informative. Though I can understand from a browser security stand point that you shouldn't be able to do this.
A: This is not possible, as you say, from a security standpoint. The only way you could simulate it is by creating a modeless dialog window.
There are many third-party JavaScript plugins that you could use to fake this effect so you do not have to write all that code.
A: Not possible. You can however use a third party javascript library that emulates a popup window, and it will probably look better as well and be less intrusive.
A: You can always use a hidden div and use JavaScript to "pop up" the div, with buttons like Yes and/or No. Pretty easy stuff to do.
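A bare-bones sketch of that approach:
<div id="myConfirm" style="display:none">
  <p>This is the content of the message box</p>
  <button onclick="answer(true)">Yes</button>
  <button onclick="answer(false)">No</button>
</div>
<script type="text/javascript">
  function showConfirm() {
    document.getElementById('myConfirm').style.display = 'block';
  }
  function answer(yes) {
    document.getElementById('myConfirm').style.display = 'none';
    // act on the user's choice here
  }
</script>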
A: YES YOU CAN do it! It's a little tricky ;) (and it almost works on iOS):
var iframe = document.createElement("IFRAME");
iframe.setAttribute("src", 'data:text/plain,');
document.documentElement.appendChild(iframe);
if(window.frames[0].window.confirm("Are you sure?")){
// what to do if answer "YES"
}else{
// what to do if answer "NO"
}
Enjoy it!
A: You can't unfortunately. The only way is to simulate this with a window.open call.
A: Don't use the confirm() dialog then... easy to use a custom dialog from prototype/scriptaculous, YUI, jQuery ... there's plenty out there.
A: I know this is not possible for alert(), so I guess it is not possible for confirm either. The reason is security: you are not allowed to change it, so you can't pass yourself off as some system process or the like.
A: Unfortunately, it is impossible to modify the title. If you still want your own title, you can try using another kind of pop-up window instead.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
} |
Q: COTS Workshop Registration System Does anyone have any experience with any COTS systems for managing workshops and the associated registrations, courses, communications, etc.?
We have a home-built Perl system that is about 8 years old and is currently embedded as an iframe in a SharePoint portal site (externally facing). Needless to say, it isn't integrated into our site well, looks like crap, needs an overhaul, lacks features, etc. It would be nice to find either a product we can install or a service that provides those features.
Thanks!
A: You might also look into Moodle - it's a platform developed to supplement classroom teaching (or implement online learning courses), but it should have all the major features you listed and would support your needs reasonably well, as well as enhancing your event with an online component (such as distributing slides/presentations only to registered users or to users who took a particular class).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Configuring sendmail behind a firewall I'm setting up a server which is on a network behind a firewall and I want programs on this computer to be able to use sendmail to send emails to any email address. We have an SMTP server running on this network (let's call it mailrelay.example.com) which is how we're supposed to get outgoing emails through the firewall.
So how do I configure sendmail to send all mail through mailrelay.example.com? Googling hasn't given me the answer yet, and has only revealed that sendmail configuration is extremely complex and annoying.
A: http://www.elandsys.com/resources/sendmail/smarthost.html
Sendmail Smarthost
A smarthost is a host through which outgoing mail is relayed. Some ISPs block outgoing SMTP traffic (port 25) and require their users to send out all mail through the ISP's mail server. Sendmail can be configured to use the ISP's mail server as the smart host.
Read the linked article for instruction for how to set this up.
A: @Espo: Thanks for the great advice on where to start. Your link would have been better if I had been configuring sendmail for its first use instead of taking an existing configuration and making this small change. However, once I knew to look for stuff on "SmartHost", I found an easier way.
All I had to do was edit my /etc/mail/sendmail.cf file to change
DS
to
DSmailrelay.example.com
then restart sendmail and it worked.
A: @eli: modifying sendmail.cf directly is not usually recommended, since it is generated by the macro compiler.
Edit /etc/mail/sendmail.mc to include the line:
define(`SMART_HOST',`mailrelay.example.com')dnl
After changing the sendmail.mc macro configuration file, it must be recompiled
to produce the sendmail configuration file.
# m4 /etc/mail/sendmail.mc > /etc/sendmail.cf
And restart the sendmail service (Linux):
# /etc/init.d/sendmail restart
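To verify that mail now flows through the smarthost, send a test message and watch the SMTP conversation (the -v flag makes sendmail verbose; the address is a placeholder):
# echo "smarthost test" | sendmail -v user@example.com
Any stuck messages will show up in mailq.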
As well as setting the smarthost, you might also want to disable name resolution configuration and possibly shift your sendmail to a non-standard port, or disable daemon mode.
Disable Name Resolution
Servers that are within fire-walled networks or using Network Address
Translation (NAT) may not have DNS or NIS services available. This creates
a problem for sendmail, since it will use DNS by default, and if it is not
available you will see messages like this in mailq:
host map: lookup (mydomain.com): deferred)
Unless you are prepared to setup an appropriate DNS or NIS service that
sendmail can use, in this situation you will typically configure name
resolution to be done using the /etc/hosts file. This is done by enabling
a 'service.switch' file and specifying resolution by file, as follows:
1: Enable service.switch for sendmail
Edit /etc/mail/sendmail.mc to include the line:
define(`confSERVICE_SWITCH_FILE',`/etc/mail/service.switch')dnl
2: Configure service.switch for files
Create or modify /etc/mail/service.switch to refer only to /etc/hosts for name
resolution:
# cat /etc/mail/service.switch
hosts files
3: Recompile sendmail.mc and restart sendmail for this setting to take effect.
Shift sendmail to non-standard port, or disable daemon mode
By default, sendmail will listen on port 25. You may want to change this port
or disable the sendmail daemon mode altogether for various reasons:
- if there is a security policy prohibiting the use of well-known ports
- if another SMTP product/process is to be running on the same host on the standard port
- if you don't want to accept mail via smtp at all, just send it using sendmail
1: To shift sendmail to use non-standard port.
Edit /etc/mail/sendmail.mc and modify the "Port" setting in the line:
DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')
For example, to get sendmail to use port 125:
DAEMON_OPTIONS(`Port=125,Addr=127.0.0.1, Name=MTA')
This will require sendmail.mc to be recompiled and sendmail to be restarted.
2: Alternatively, to disable sendmail daemon mode altogether (Linux)
Edit /etc/sysconfig/sendmail and modify the "DAEMON" setting to:
DAEMON=no
This change will require sendmail to be restarted.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Dynamic robots.txt Let's say I have a web site for hosting community generated content that targets a very specific set of users. Now, let's say in the interest of fostering a better community I have an off-topic area where community members can post or talk about anything they want, regardless of the site's main theme.
Now, I want most of the content to get indexed by Google. The notable exception is the off-topic content. Each thread has its own page, but all the threads are listed in the same folder, so I can't just exclude search engines from a folder somewhere. It has to be per-page. A traditional robots.txt file would get huge, so how else could I accomplish this?
A: This will work for all well-behaving search engines, just add it to the <head>:
<meta name="robots" content="noindex, nofollow" />
A: If using Apache I'd use mod-rewrite to alias robots.txt to a script that could dynamically generate the necessary content.
Edit: If using IIS you could use ISAPIrewrite to do the same.
A: You can implement it by substituting robots.txt with a dynamic script that generates the output.
With Apache you could make a simple .htaccess rule to achieve that.
RewriteRule ^robots\.txt$ /robots.php [NC,L]
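A minimal robots.php could then look something like this (a sketch -- the query is a placeholder for wherever you store the off-topic thread IDs):
<?php
header('Content-Type: text/plain');
echo "User-agent: *\n";
// hypothetical lookup of the off-topic thread ids
$result = mysql_query("SELECT id FROM threads WHERE forum = 'offtopic'");
while ($row = mysql_fetch_assoc($result)) {
    echo "Disallow: /threads/" . $row['id'] . "\n";
}
?>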
A: Similarly to @James Marshall's suggestion - in ASP.NET you could use an HttpHandler to redirect calls to robots.txt to a script which generates the content.
A: Just for that thread, make sure your head contains a noindex meta tag. That's one more way to tell search engines not to crawl your page, other than blocking it in robots.txt.
A: Just keep in mind that a robots.txt disallow will NOT prevent Google from indexing pages that have links from external sites; all it does is prevent internal crawling. See http://www.webmasterworld.com/google/4490125.htm or http://www.stonetemple.com/articles/interview-matt-cutts.shtml.
A: You can prevent search engines from reading or indexing your content with restrictive robots meta tags. That way, the spider will follow your instructions and index only the pages you want.
A: To block dynamic web pages with robots.txt, use rules like these:
User-agent: *
Disallow: /setnewsprefs?
Disallow: /index.html?
Disallow: /?
Allow: /?hl=
Disallow: /?hl=*&
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How to safely embed any flash file (swf)? I want to allow my users to embed their own Flash animations in their posts. Usually the actual file is hosted on some free image hosting site. I wouldn't actually load the flash unless the user clicked a button to play (so that nothing auto-plays on page load). I know people can make some really annoying crap in flash, but I can't find any information about potential serious damage a flash app could cause to the viewer.
Is it unsafe to embed just any flash file from the internets? If so, how can I let users embed innocent animations but still keep out the harmful apps?
edit:
From what I can gather, the most obvious threat is for actionscript to redirect you to a malicious site.
Adobe says you can set allowScriptAccess=never and allowNetworking=none and the swf should have no access to anything outside of itself. Will this solve all my problems?
A: Flash has some neat security measures in place. Allowing users to upload swf's to your site and embedding them is unsafe, you're basically setting yourself up for an XSS attack.
However, allowing them to hotlink should not be a problem. The swf will be locked to the domain that is hosting it and is not allowed to call URLs outside of that space.
It will still be open to "evil links" (the proper term is cross-site request forgery, or CSRF), and by that I mean having regular links to yoursite.com/admin/deleteallpages.php which it tries to load "as" you. It will not, however, be able to use this data in any way; it'll basically be the same as a normal link, and I'd guess modern CMSs are protected from that type of attack.
You could get the same protection by hosting your flashes on a different subdomain, since flash considers this the same as a completely different domain.
A: When embedding SWFs from unknown sources, it is also best practice to throw a mask on the Loader so that the loaded SWF can't take over more screen real estate than expected.
Pseudo-code to do so:
var maskSpr : Sprite = new Sprite();
maskSpr.graphics.beginFill(0x000000); // beginFill needs a color argument; the color is irrelevant for a mask
maskSpr.graphics.drawRect(0, 0, safeWidth, safeHeight);
maskSpr.graphics.endFill();
myLdr.mask = maskSpr;
A: There is actually more than one option.
To be totally safe, set allowScriptAccess=never and allowNetworking=none and the swf will have no access to anything outside of itself.
NOTE: allowNetworking is only in Flash Player 9 (it was created in response to various MySpace worms), so you'll need to use SWFObject to ensure that the flash is only loaded for users with the right Flash Player version or better.
If you want to enable things like youtube videos, though, you can't set allowNetworking to "none". Fortunately, there is an intermediate level of security for this field - "internal" which lets the SWF talk to its hosted domain.
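In the HTML embed markup, those settings look something like this (a sketch in the object/param style of the era):
<object type="application/x-shockwave-flash" data="user.swf" width="400" height="300">
  <param name="movie" value="user.swf" />
  <param name="allowScriptAccess" value="never" />
  <param name="allowNetworking" value="none" />
</object>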
Also note that you better not have a crossdomain.xml file on your site - read more about those dangers here and other places.
Here are some other sites that are mentioned by other answers that go into more detail:
http://www.adobe.com/devnet/flashplayer/articles/secure_swf_apps_04.html
http://blogs.adobe.com/stateofsecurity/2007/07/how_to_restrict_swf_content_fr_1.html
A: As an example Drupal has a scenario of how allowing flash content from users could be a security concern.
A: Adobe says you can set allowScriptAccess=never and allowNetworking=none and the swf should have no access to anything outside of itself. Although allowNetworking is only in Flash Player 9, so users with earlier versions of Flash would still be susceptible to some exploits.
Creating more secure SWF web applications : Security Controls Within the HTML Code
How to restrict SWF content from HTML
A: Yes, it's unsafe.
There's no easy way of allowing it. You could have a domain whitelist that allowed YouTube, Hulu, etc. through, but whitelisting is inherently painstaking - you'd be constantly updating.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Why is branching and merging easier in Mercurial than in Subversion? Handling multiple merges onto branches in Subversion or CVS is just one of those things that has to be experienced. It is inordinately easier to keep track of branches and merges in Mercurial (and probably any other distributed system) but I don't know why. Does anyone else know?
My question stems from the fact that with Mercurial you can adopt a working practice similar to that of Subversions/CVSs central repository and everything will work just fine. You can do multiple merges on the same branch and you won't need endless scraps of paper with commit numbers and tag names.
I know the latest version of Subversion has the ability to track merges to branches so you don't get quite the same degree of hassle but it was a huge and major development on their side and it still doesn't do everything the development team would like it to do.
There must be a fundamental difference in the way it all works.
A: Untouched by any of the already provided answers: Hg offers superior merge capabilities because it uses more information when merging changes (hginit.com):
For example, if I change a function a little bit, and then move it somewhere else, Subversion doesn't really remember those steps, so when it comes time to merge, it might think that a new function just showed up out of the blue. Whereas Mercurial will remember those things separately: function changed, function moved, which means that if you also changed that function a little bit, it is much more likely that Mercurial will successfully merge our changes.
Of course, remembering what was last merged (the point addressed by most of the answers provided here) is also a huge win.
Both improvements, however, are questionable since Subversion 1.5+ stores additional merge information in the form of Subversion properties: with that information available, there's no obvious reason why Subversion couldn't merge as successfully as Hg or Git. I don't know whether it does, but it certainly sounds like the Subversion developers are on their way to getting around this issue.
A: I suppose this might partially be because Subversion has the idea of a central server along with an absolute timeline of revisions. Mercurial is truly distributed and has no such reference to an absolute timeline. This does allow Mercurial projects to form more complicated hierarchies of branches for adding features and testing cycles by sub-project; however, teams now need to keep on top of merges much more actively to stay current, as they can't just hit update and be done with it.
A: In Subversion (and CVS), the repository is first and foremost. In git and mercurial there is not really the concept of a repository in the same way; here changes are the central theme.
I've not thought much about how you'd implement either but my impression (based on bitter experience and lots of reading) is that this difference is what makes merging and branching so much easier in non-repository based systems.
A: Because Subversion (at least version 1.4 and below) doesn't keep track of what has been merged. For Subversion, merging is basically the same as any commit, while other version control systems, like Git, remember what has been merged.
A:
In Subversion (and CVS), the repository is first and foremost. In git
and mercurial there is not really the concept of a repository in the
same way; here changes are the central theme.
+1
The hassle in CVS/SVN comes from the fact that these systems do not remember the parenthood of changes. In Git and Mercurial, not only can a commit have multiple children, it can also have multiple parents!
That can easily be observed using one of the graphical tools, gitk or hg view. In the following example, branch #2 was forked from #1 at commit A, and has since been merged once (at M, merged with commit B):
o---A---o---B---o---C (branch #1)
\ \
o---o---M---X---? (branch #2)
Note how A and B have two children, whereas M has two parents. These relationships are recorded in the repository. Let's say the maintainer of branch #2 now wants to merge the latest changes from branch #1; they can issue a command such as:
$ git merge branch-1
and the tool will automatically know that the base is B--because it was recorded in commit M, an ancestor of the tip of #2--and that it has to merge whatever happened between B and C. CVS does not record this information, nor did SVN prior to version 1.5. In these systems, the graph would look like:
o---A---o---B---o---C (branch #1)
\
o---o---M---X---? (branch #2)
where M is just a gigantic "squashed" commit of everything that happened between A and B, applied on top of the preceding commit on branch #2. Note that after the deed is done, there is no trace left (except potentially in human-readable comments) of where M did originate from, nor of how many commits were collapsed together--making history much more impenetrable.
Worse still, performing a second merge becomes a nightmare: one has to figure out what the merge base was at the time of the first merge (and one has to know that there has been a merge in the first place!), then present that information to the tool so that it does not try to replay A..B on top of M. All of this is difficult enough when working in close collaboration, but is simply impossible in a distributed environment.
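You can inspect the recorded parenthood directly in Git; for example (a sketch, with <M> standing in for the merge commit's hash):
$ git log -1 --format=%P <M>       # prints the two parent hashes of the merge commit
$ git merge-base branch-1 HEAD     # prints the merge base (B in the example above)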
A (related) problem is that there is no way to answer the question "does X contain B?", where B is a potentially important bug fix. So, why not just record that information in the commit, since it is known at merge time!
P.S. -- I have no experience with SVN 1.5+ merge recording abilities, but the workflow seems to be much more contrived than in the distributed systems. If that is indeed the case, it's probably because--as mentioned in the above comment--the focus is put on repository organization rather than on the changes themselves.
A: I only have experience with Subversion, but I can tell you that the merge screen in TortoiseSVN is horribly complicated. Luckily they include a dry-run button so that you can see if you are doing it right. The complication is in configuring what you want to merge to where. Once you get that set up, the merge generally goes fine. Then you need to resolve any and all conflicts and commit your merged working copy to the repository.
If Mercurial can make the configuration of the merge easier, then I can say that would make merging 100% easier than Subversion.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "93"
} |
Q: Propagation of Oracle Transactions Between C++ and Java We have an existing C++ application that we are going to gradually replace with a new Java-based system. Until we have completely reimplemented everything in Java we expect the C++ and Java to have to communicate with each other (RMI, SOAP, messaging, etc - we haven't decided).
Now my manager thinks we'll need the Java and C++ sides to participate in the same Oracle DB transaction. This is related to, but different from, the usual distributed transaction problem of having a single process coordinate two transactional resources, such as a DB and a message queue.
I think propagating a transaction across processes is a terrible idea from a performance and stability point-of-view, but I am still going to be asked for a solution.
I am familiar with XA transactions and I've done some work with the JBoss Transaction Manager, but my googling hasn't turned up anything good on propagating an XA transaction between 2 processes.
We are using Spring on the Java side and their documentation explicitly states they do not provide any help with transaction propagation.
We are not planning on using a traditional Java EE server (for example: IBM Websphere), which may have support for propagation (not that I can find any definitive documentation).
Any help or pointers on solutions is greatly appreciated.
A: I have been using Hazelcast messaging and distributed memory locks to solve some of these concerns; however, using such a tool would require that you redesign your software in those parts where you touch the same data. C++ client docs here, Java client here.
Oracle also has a similar product called Oracle Coherence that may help you, see locking in the dev guide.
Also, the database contains an MQ system called Oracle Streams Advanced Queuing (transactional persistent queues) that might help you in some situations. Oracle AQ integrates well with Oracle triggers.
Additionally there is the Database Change Notification that may help you update caches or notify processes of updates, this can be used together with the Optimistic Offline Lock pattern.
See also Software transactional memory
Apache Zookeeper can also help you with distributed locking.
A: There is an example on Laurent Schneider's blog of using the DBMS_XA package inside Oracle to permit multiple sessions to work in the same transaction. So it would be possible to have Java and C++ sessions participating in the same transaction without needing any sort of additional coordinator.
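A rough sketch of that DBMS_XA pattern (an untested outline; the accounts table is a placeholder, and the exact flag semantics are in the DBMS_XA documentation):
-- Session 1 (say, the Java side): start global transaction 123 and do some work
DECLARE
  rc PLS_INTEGER;
BEGIN
  rc := DBMS_XA.XA_START(DBMS_XA_XID(123), DBMS_XA.TMNOFLAGS);
  UPDATE accounts SET balance = balance - 100 WHERE id = 1;
  rc := DBMS_XA.XA_END(DBMS_XA_XID(123), DBMS_XA.TMSUSPEND);
END;
-- Session 2 (say, the C++ side): resume transaction 123, finish it, and commit
DECLARE
  rc PLS_INTEGER;
BEGIN
  rc := DBMS_XA.XA_START(DBMS_XA_XID(123), DBMS_XA.TMRESUME);
  UPDATE accounts SET balance = balance + 100 WHERE id = 2;
  rc := DBMS_XA.XA_END(DBMS_XA_XID(123), DBMS_XA.TMSUCCESS);
  rc := DBMS_XA.XA_COMMIT(DBMS_XA_XID(123), TRUE);
END;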
Alternately, you might consider using Workspace Manager. That was originally designed to support extremely long-running transactions (i.e. manipulating lots of spatial data for a proposed development). Essentially, you can create a workspace, which in your case would be roughly equivalent to a named transaction. Both the Java and C++ code could enter that workspace (from separate sessions) and both could manipulate and commit data in that workspace. When the transaction was complete, you could then merge the workspace to the LIVE workspace, which is equivalent to doing a commit in a normal transaction.
On the other hand, I would strongly agree with your initial assessment that coordinating transactions between processes is very likely to be a bad idea from a performance, stability, simplicity, and maintenance standpoint. On the other hand, it may well be a legitimate business requirement depending on how the C++ code is going to be retired (i.e. whether it is possible to replace code in such a way that transactions can be either exclusively Java or exclusively C++)
A: I believe the JBoss Transaction Manager supports 2PC transaction propagation across web service calls. You could, I suppose, integrate your systems that way, but the performance would stink.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: ncover with nunit2 task in NAnt Is there any chance to get this work? I want my tests to be run by nunit2 task in NAnt. In addition I want to run NCover without running tests again.
A: I figured it out. You change the path of the NUnit launcher to that of TeamCity's own. Here is an example:
<mkdir dir="${build}/coverage" failonerror="false"/>
<!-- run the unit tests and generate code coverage -->
<property name="tools.dir.tmp" value="${tools.dir}"/>
<if test="${not path::is-path-rooted(tools.dir)}">
<property name="tools.dir.tmp" value="../../${tools.dir}"/>
</if>
<property name="nunitpath" value="${lib.dir}/${lib.nunit.basedir}/bin/nunit-console.exe"/>
<property name="nunitargs" value=""/>
<if test="${property::exists('teamcity.dotnet.nunitlauncher')}">
<property name="nunitpath" value="${teamcity.dotnet.nunitlauncher}"/>
<property name="nunitargs" value="v2.0 x86 NUnit-2.4.8"/>
</if>
<ncover program="${tools.dir.tmp}/${tools.ncover.basedir}/ncover.console.exe"
commandLineExe="${nunitpath}"
commandLineArgs="${nunitargs} ${proj.name.unix}.dll"
workingDirectory="${build}"
assemblyList="${proj.srcproj.name.unix}"
logFile="${build}/coverage/coverage.log"
excludeAttributes="System.CodeDom.Compiler.GeneratedCodeAttribute"
typeExclusionPatterns=".*?\{.*?\}.*?"
methodExclusionPatterns="get_.*?; set_.*?"
coverageFile="${build}/coverage/coverage.xml"
coverageHtmlDirectory="${build}/coverage/html/"
/>
As you can see, I have some of my own variables in there, but you should be able to figure out what is going on. The property you are concerned with is teamcity.dotnet.nunitlauncher. You can read more about it at http://www.jetbrains.net/confluence/display/TCD4/TeamCity+NUnit+Test+Launcher.
A: Why not have NCover run NUnit? You get the exact same test results. Also, what exactly are you trying to measure when running NCover outside of the tests? There's other ways to find stale or unreferenced code.
A: I am having to do the same thing. I think the best we can hope for is to break open the NUnit jar file that comes with TeamCity and write a custom task that integrates NUnit2 and NCover. I wish this weren't so, but the NUnit2 task does not produce any visible output, so TeamCity is obviously not reading StdOut for the test results.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: An easy way to diff log files, ignoring the time stamps? I need to diff two log files but ignore the time stamp part of each line (the first 12 characters to be exact). Is there a good tool, or a clever awk command, that could help me out?
A: Depending on the shell you are using, you can turn the approach @Blair suggested into a 1-liner
diff <(cut -b13- file1) <(cut -b13- file2)
(+1 to @Blair for the original suggestion :-)
A: @EbGreen said
I would just take the log files and strip the timestamps off the start of each line then save the file out to different files. Then diff those files.
That's probably the best bet, unless your diffing tool has special powers.
For example, you could
cut -b13- file1 > trimmed_file1
cut -b13- file2 > trimmed_file2
diff trimmed_file1 trimmed_file2
See @toolkit's response for an optimization that makes this a one-liner and obviates the need for extra files, if your shell supports it (Bash 3.2.39 at least seems to...).
A: Use Kdiff3 and at Configure>Diff edit "Line-Matching Preprocessor command" to something like:
sed "s/[ 012][0-9]:[0-5][0-9]:[0-5][0-9]//"
This will filter out time-stamps from comparison alignment algorithm.
Kdiff3 also lets you manually align specific lines.
A: Answers using cut are fine, but sometimes it is desirable to keep the timestamps within the diff output. As the OP's question is about ignoring the timestamps (not removing them), here is my tricky command line:
diff -I '^#' <(sed -r 's/^((.){12})/#\1\n/' 1.log) <(sed -r 's/^((.){12})/#\1\n/' 2.log)
* sed isolates the timestamps (# before and \n after) within a process substitution
* diff -I '^#' ignores lines having these timestamps (lines beginning with #)
example
Two log files having same content but different timestamps:
$> for ((i=1;i<11;i++)) do echo "09:0${i::1}:00.000 data $i"; done > 1.log
$> for ((i=1;i<11;i++)) do echo "11:00:0${i::1}.000 data $i"; done > 2.log
Basic diff command line says all lines are different:
$> diff 1.log 2.log
1,10c1,10
< 09:01:00.000 data 1
< 09:02:00.000 data 2
< 09:03:00.000 data 3
< 09:04:00.000 data 4
< 09:05:00.000 data 5
< 09:06:00.000 data 6
< 09:07:00.000 data 7
< 09:08:00.000 data 8
< 09:09:00.000 data 9
< 09:01:00.000 data 10
---
> 11:00:01.000 data 1
> 11:00:02.000 data 2
> 11:00:03.000 data 3
> 11:00:04.000 data 4
> 11:00:05.000 data 5
> 11:00:06.000 data 6
> 11:00:07.000 data 7
> 11:00:08.000 data 8
> 11:00:09.000 data 9
> 11:00:01.000 data 10
Our tricky diff -I '^#' does not display any difference (timestamps ignored):
$> diff -I '^#' <(sed -r 's/^((.){12})/#\1\n/' 1.log) <(sed -r 's/^((.){12})/#\1\n/' 2.log)
$>
Change 2.log (replace data by foo on the 6th line) and check again:
$> sed '6s/data/foo/' -i 2.log
$> diff -I '^#' <(sed -r 's/^((.){12})/#\1\n/' 1.log) <(sed -r 's/^((.){12})/#\1\n/' 2.log)
11,13c11,13
< #09:06:00.000
< data 6
< #09:07:00.000
---
> #11:00:06.000
> foo 6
> #11:00:07.000
=> timestamps are kept in the diff output!
You can also use the side by side feature using -y or --side-by-side option:
$> diff -y -I '^#' <(sed -r 's/^((.){12})/#\1\n/' 1.log) <(sed -r 's/^((.){12})/#\1\n/' 2.log)
#09:01:00.000 #11:00:01.000
data 1 data 1
#09:02:00.000 #11:00:02.000
data 2 data 2
#09:03:00.000 #11:00:03.000
data 3 data 3
#09:04:00.000 #11:00:04.000
data 4 data 4
#09:05:00.000 #11:00:05.000
data 5 data 5
#09:06:00.000 | #11:00:06.000
data 6 | foo 6
#09:07:00.000 | #11:00:07.000
data 7 data 7
#09:08:00.000 #11:00:08.000
data 8 data 8
#09:09:00.000 #11:00:09.000
data 9 data 9
#09:01:00.000 #11:00:01.000
data 10 data 10
old sed
If your sed implementation does not support the -r option, you may have to count the twelve dots <(sed 's/^\(............\)/#\1\n/' 1.log) or use another pattern of your choice ;)
A: For a graphical option, Meld can do this using its text filters feature.
It allows for ignoring lines based on one or more python regex. The differences still appear, but lines that don't have any other differences won't be highlighted.
A: I want to propose a solution for Visual Studio Code:
1. Install this extension - https://marketplace.visualstudio.com/items?itemName=ryu1kn.partial-diff
2. Configure it like this - https://github.com/ryu1kn/vscode-partial-diff/issues/49#issuecomment-608299085
3. Run the extension command "Toggle Pre-Comparison Text Normalization Rules" and enable the rule added in step #2
4. Use the extension (here is an explanation of its UI quirk - https://github.com/ryu1kn/vscode-partial-diff/issues/11)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53"
} |
Q: How can I get the definition (body) of a trigger in SQL Server? Unable to find a SQL diff tool that meets my needs, I am writing my own. Between the INFORMATION_SCHEMA and sys tables, I have a mostly-complete working version. But one thing I can't find in the metadata is the definition of a trigger, you know, the actual SQL code. Am I overlooking something?
Thanks.
Thanks, Pete, I didn't know about that!
Scott, I'm working with very basic hosting packages that don't allow remote connections to the DB. I don't know from the specs on RedGate (which I can't afford anyway) whether they provide a workaround for that, and although there are also API's out there (such as the one from Apex), I didn't see the point in investing in a solution that was still going to require more programming on my part. :)
My solution is to drop an ASPX page on the site that acts as a kind of "schema service", returning the collected metadata as XML. I set up a little AJAX app that compares any number of catalog instances to a master and shows the diffs. It's not perfect, but a major step forward for me.
Thanks again!
A: To expand on SQLMenace's answer, here's a simple query to return all triggers and their definitions from a database:
SELECT
sysobjects.name AS trigger_name,
OBJECT_NAME(parent_obj) AS table_name,
OBJECT_DEFINITION(id) AS trigger_definition
FROM sysobjects
WHERE sysobjects.type = 'TR'
A: You have various ways to view a SQL Server trigger definition.
Querying from a system view:
SELECT definition
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('trigger_name');
Or
SELECT OBJECT_NAME(parent_obj) [table name],
NAME [triger name],
OBJECT_DEFINITION(id) body
FROM sysobjects
WHERE xtype = 'TR'
AND name = 'trigger_name';
Definition using the OBJECT_DEFINITION function:
SELECT OBJECT_DEFINITION(OBJECT_ID('trigger_name')) AS trigger_definition;
Definition using the sp_helptext stored procedure:
EXEC sp_helptext 'trigger_name';
A: sp_helptext works to get the sql that makes up a trigger.
The text column in the syscomments view also contains the sql used for object creation.
A: SELECT
DB_NAME() AS DataBaseName,
dbo.SysObjects.Name AS TriggerName,
dbo.sysComments.Text AS SqlContent
FROM
dbo.SysObjects INNER JOIN
dbo.sysComments ON
dbo.SysObjects.ID = dbo.sysComments.ID
WHERE
(dbo.SysObjects.xType = 'TR')
AND
dbo.SysObjects.Name = '<YourTriggerName>'
A: For 2005 and 2008 you can use the OBJECT_DEFINITION() function
A: This query returns each trigger with its name and body.
Select
[tgr].[name] as [trigger name],
[tbl].[name] as [table name] ,
OBJECT_DEFINITION(tgr.id) body
from sysobjects tgr
join sysobjects tbl
on tgr.parent_obj = tbl.id
WHERE tgr.xtype = 'TR'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Truncate (not round) decimal places in SQL Server I'm trying to determine the best way to truncate or drop extra decimal places in SQL without rounding. For example:
declare @value decimal(18,2)
set @value = 123.456
This will automatically round @value to be 123.46, which is good in most cases. However, for this project, I don't need that. Is there a simple way to truncate the decimals I don't need? I know I can use the left() function and convert back to a decimal. Are there any other ways?
A: Do you want the decimal part or not?
If not, use:
select ceiling(@value), floor(@value)
If you do, round to the number of places you want:
select round(@value, 2)
A: Another truncate with no rounding solution and example.
Convert 71.950005666 to a single decimal place number (71.9)
1) 71.950005666 * 10.0 = 719.50005666
2) Floor(719.50005666) = 719.0
3) 719.0 / 10.0 = 71.9
select Floor(71.950005666 * 10.0) / 10.0
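The same trick generalizes to any number of decimal places (a sketch; note that Floor always rounds toward negative infinity, so negative values behave differently):
select Floor(@value * Power(10, @places)) / Power(10, @places)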
A: Round has an optional parameter
Select round(123.456, 2, 1) will = 123.45
Select round(123.456, 2, 0) will = 123.46
A: ROUND(number, decimals, operation)
number => Required. The number to be rounded
decimals => Required. The number of decimal places to round number to
operation => Optional. If 0, it rounds the result to the given number of decimals. If any value other than 0, it truncates the result to that number of decimals. Default value is 0.
SELECT ROUND(235.415, 2, 1)
will give you 235.410
SELECT ROUND(235.415, 0, 1)
will give you 235.000
But if you now want to trim the trailing zeros, you can use a cast:
SELECT CAST(ROUND(235.415, 0, 1) AS INT)
will give you 235
A: SELECT Cast(Round(123.456,2,1) as decimal(18,2))
A: ROUND ( 123.456 , 2 , 1 )
When the third parameter != 0 it truncates rather than rounds.
Syntax
ROUND ( numeric_expression , length [ ,function ] )
Arguments
* numeric_expression - an expression of the exact numeric or approximate numeric data type category, except for the bit data type.
* length - the precision to which numeric_expression is to be rounded. length must be an expression of type tinyint, smallint, or int. When length is a positive number, numeric_expression is rounded to the number of decimal positions specified by length. When length is a negative number, numeric_expression is rounded on the left side of the decimal point, as specified by length.
* function - the type of operation to perform. function must be tinyint, smallint, or int. When function is omitted or has a value of 0 (default), numeric_expression is rounded. When a value other than 0 is specified, numeric_expression is truncated.
A: This will remove the decimal part of any number
SELECT ROUND(@val,0,1)
A: SELECT CAST(Value as Decimal(10,2)) FROM TABLE_NAME;
Would give you 2 values after the decimal point. (MS SQL SERVER)
A: select round(123.456, 2, 1)
A: Another way is ODBC TRUNCATE function:
DECLARE @value DECIMAL(18,3) =123.456;
SELECT @value AS val, {fn TRUNCATE(@value, 2)} AS result
LiveDemo
Output:
╔═════════╦═════════╗
║ val ║ result ║
╠═════════╬═════════╣
║ 123,456 ║ 123,450 ║
╚═════════╩═════════╝
Remark:
I recommend using built-in ROUND function with 3rd parameter set to 1.
A: Here's the way I was able to truncate and not round:
select 100.0019-(100.0019%.001)
returns 100.0010
And your example:
select 123.456-(123.456%.001)
returns 123.450
Now if you want to get rid of the ending zero, simply cast it:
select cast((123.456-(123.456%.001)) as decimal (18,2))
returns 123.45
A: When the third parameter is anything other than 0, ROUND will truncate rather than round your value:
CAST(ROUND(10.0055, 2, 1) AS NUMERIC(10,2))
A: I know this is pretty late but I don't see it as an answer and have been using this trick for years.
Simply subtract .005 from your value and use Round(@num,2).
Your example:
declare @num decimal(9,5) = 123.456
select round(@num-.005,2)
returns 123.45
It will automatically adjust the rounding to the correct value you are looking for.
By the way, are you recreating the program from the movie Office Space?
A: Try like this:
SELECT cast(round(123.456,2,1) as decimal(18,2))
A: If you desire to take some number like 89.0904987 and turn it into 89.09 by simply omitting the undesired decimal places, simply use the following:
select cast(yourColumnName as decimal(18,2))
The W3Schools SQL Data Types section describes what decimal(18,2) is doing: 18 is the maximum total number of digits stored, and 2 is the number of digits to the right of the decimal point.
Therefore,
select cast(89.0904987 as decimal(18,2))
gives you: 89.09
A: Please try this code for converting three decimal places to two (note that assigning to a decimal(8,2) rounds rather than truncates):
declare @val decimal (8, 2)
select @val = 123.456
select @val
The output is 123.46
A: I think you want only the decimal part; in that case you can use the following:
declare @val decimal (8, 3)
SET @val = 123.456
SELECT @val - ROUND(@val,0,1)
A: I know this question is really old, but nobody has used substrings to truncate. This has the advantage of being able to handle really long numbers (the limit is your string length in SQL Server, which is usually 8000 characters):
SUBSTRING('123.456', 1, CHARINDEX('.', '123.456') + 2)
A: I think we can make this much easier with a simpler example found on HackerRank:
Problem statement: Query the greatest value of the Northern Latitudes
(LAT_N) from STATION that is less than 137.2345. Truncate your answer
to 4 decimal places.
SELECT TRUNCATE(MAX(LAT_N),4)
FROM STATION
WHERE LAT_N < 137.23453;
The solution above shows how to limit a value to 4 decimal places (note that TRUNCATE is a MySQL function; for SQL Server use ROUND with the third parameter, as shown above). If you want fewer or more digits after the decimal point, just change the 4 to whatever you want.
A: select convert(int,@value)
A: Mod(x,1) is the easiest way, I think (in T-SQL the equivalent is the % operator, e.g. x % 1).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "222"
} |
Q: Best way to rotate Apache log files I've got an Apache server that has one access log file that is topping 600MB. This makes it really hard to search the file or parse it.
What software or modules for Apache are available that will make a daily copy of my access file to make it more manageable?
A: Have you looked at logrotate - this is probably the simplest, most widely available and well understood method of achieving this. It is highly configurable and will probably do 90% of what you need.
A: CustomLog "|bin/rotatelogs /var/logs/logfile 5M" common
This configuration will rotate the logfile whenever it reaches a size of 5 megabytes.
ErrorLog "|bin/rotatelogs /var/logs/errorlog.%Y-%m-%d-%H_%M_%S 5M"
This is a simple way to rotate Apache logs; there is no need to compile an extra module into httpd.
A: I'm a big fan of Cronolog. Just install and pipe your logs through it. For daily log rotation, something like this would work:
ErrorLog "|/usr/bin/cronolog /path/to/logs/%Y-%m-%d/error.log"
CustomLog "|/usr/bin/cronolog /path/to/logs/%Y-%m-%d/access.log" combined
Pretty handy, and once installed, easier (in my experience) than logrotate.
A: The actual command for Windows, which is quite difficult to find online is:
CustomLog '|" "*Apache-Path/bin/rotatelogs.exe"
"**Apache-Path*/logs/backup/internet_access_%d-%m-%y.log" 86400' combined
Where the "internet_access" bit is the name you choose for your files, the 86400 is the number of seconds in one day. You need to change the Apache-Path to the relevant directory you've installed Apache to.
A: logrotate
logrotate is probably the best solution. Use the file /etc/logrotate.conf to change the settings for all your logs. You can change weekly to daily so the logs are rotated every day. Also, you might want to add compress so the archives are compressed. If you don't care about the old logs, you can set rotate 4 to something lower.
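As a minimal sketch, a per-site config might look like this (the log path, retention values, and reload command are assumptions; adjust for your distribution):
# /etc/logrotate.d/apache2 (hypothetical)
/var/log/apache2/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        /usr/sbin/apachectl graceful > /dev/null 2>&1 || true
    endscript
}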
A: rotatelogs.exe or cronolog.exe on Windows; they are used in a pipe command in httpd.conf.
mod_log_rotate: an additional Apache module, for access log rotation only.
logrotate: for Unix OSes only.
A: I have a module that does this for you without the need for external pipes etc :
http://www.poptart.org/bin/view/Poptart/ModAutorotate
I've tried to add it to the Apache modules collection but that seems to have been broken for a while now.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "69"
} |
Q: Pass functions in F# Is it possible to pass a reference to a function to another function in F#? Specifically, I'd like to pass lambda functions like
foo(fun x -> x ** 3)
More specifically, I need to know how I would refer to the passed function in a function that I wrote myself.
A: Yes, it is possible. The manual has this example:
> List.map (fun x -> x % 2 = 0) [1 .. 5];;
val it : bool list
= [false; true; false; true; false]
A: Functions are first class citizens in F#. You can therefore pass them around just like you want to.
If you have a function like this:
let myFunction f =
f 1 2 3
and f is a function, then the return value of myFunction is f applied to 1, 2 and 3.
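For example, with a hypothetical lambda that sums its three arguments:
let result = myFunction (fun a b c -> a + b + c)
// result is 6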
A: Passing a lambda function to another function works like this:
Suppose we have a trivial function of our own as follows:
let functionThatTakesaFunctionAndAList f l = List.map f l
Now you can pass a lambda function and a list to it:
functionThatTakesaFunctionAndAList (fun x -> x ** 3.0) [1.0;2.0;3.0]
Inside our own function functionThatTakesaFunctionAndAList you can just refer to the lambda function as f because you called your first parameter f.
The result of the function call is of course:
float list = [1.0; 8.0; 27.0]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: ASP.NET Merge: Virtual path 'obal.asax' is not allowed I am doing a Web Deployment of my website and I have the merge assemblies property set to true.
For some reason I get the following error.
aspnet_merge : error occurred:
An error occurred when merging assemblies: The relative virtual path 'Global.asax' is not allowed here.
It seems to have something to do with the Global.asax, but I'm not really sure why its getting truncated. My code compiles locally fine, but its only the merge that is messing up.
Any ideas?
A: As a shot in the dark:
Is it a slash issue? I vaguely remember MSBuild forcibly requiring a trailing slash on some of its properties.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Strip all HTML tags except links I am trying to write a regular expression to strip all HTML with the exception of links (the <a href and </a> tags respectively. It does not have to be 100% secure (I am not worried about injection attacks or anything as I am parsing content that has already been approved and published into a SWF movie).
The original "strip tags" regular expression I'm using was <(.|\n)+?>, and I tried to modify it to <([^a]|\n)+?>, but that of course will allow any tag that has an a in it rather than one that has it in the beginning, with a space.
Not that it should really matter, but in case anyone cares to know I am writing this in ActionScript 3.0 for a Flash movie.
A: <(?!\/?a(?=>|\s.*>))\/?.*?>
Try this. Had something similar for p tags. Worked for them so don't see why not. Uses negative lookahead to check that it doesn't match a (prefixed with an optional / character) where (using positive lookahead) a (with optional / prefix) is followed by a > or a space, stuff and then >. This then matches up until the next > character. Put this in a subst with
s/<(?!\/?a(?=>|\s.*>))\/?.*?>//g;
This should leave only the opening and closing a tags
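Since the question mentions ActionScript 3, a hedged sketch of applying the same pattern there (htmlText is a hypothetical input string):
var stripped:String = htmlText.replace(/<(?!\/?a(?=>|\s.*>))\/?.*?>/g, "");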
A: In general there are problems with this approach. Regexes are best for 'flat' text matches - nested data pushes regex engines into areas for which they are not designed. General HTML parsing needs a parser not a regex engine (Google for the difference between regular and context-free languages if you want the full technical details).
It is easy to strip out all tags by replacing /</ and />/ with the empty string or their entity equivalents but selectively filtering HTML using regexes will be vulnerable to a wide range of accidental or malicious inputs breaking things.
A: I keep going on about it, but there's no way I can recommend regexr too often. It's fantastic for testing this type of things.
A: Here you go:
{<(?!i|b|h[1-6]|/i|/b|/h[1-6][\s|>|/])[^>]*>}
A: strip_tags() does this.
Here, I am including all <a><p><font><b><i><sup> tags and outputting a tidied version:
cat input.htm | tr -d '\n' | php -r '$input=fgets(STDIN); echo strip_tags($input,"<a><p><font><b><i><sup>");' | tidy -i -wrap 0 -o output.htm
A: How about
<[^a](.|\n)+?>
?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Does Microsoft ASP.NET Ajax Cause DOM Object Leaks? We've been using "Drip" to try and identify why pages with UpdatePanels in them tend to use a lot of client-side memory. With a page with a regular postback, we are seeing 0 leaks detected by Drip. However, when we add an update panel to the mix, every single DOM object that is inside of the update panel appears to leak (according to Drip).
I am not certain if Drip is reliable enough to report these kinds of things - the reported leaks do seem to indicate Drip is modifying the page slightly.
Does anyone have any experience with this? Should I panic and stop using Microsoft Ajax? I'm not above doubting Microsoft, but it seems fishy to me that it could be this bad.
Also, if you know of a tool that is better than Drip, that would be helpful as well.
A: According to ASP.NET AJAX in Action, p. 257
Just before the old markup is replaced with the updated HTML, all the DOM elements in the panel are examined for Microsoft Ajax behaviours or controls attached to them. To avoid memory leaks, the components associated with DOM elements are disposed, and then destroyed when the HTML is replaced.
So as far as I know, any asp.net ajax components within the update panel are disposed to prevent memory leaks, but anything else in there will just be replaced with the html received.
So if you don't have any ASP.NET AJAX components in the target container for the response, it would be basically the same as an innerHTML replacement with any other JS framework / AJAX request, so I would say that it's just how the browser handles this, rather than ASP.NET AJAX causing it.
Also, while it may be "leaking", it may be by design, meaning that the browser might not have reclaimed and released the DOM elements yet. It is also possible that Drip itself causes those elements to leak, since it attaches to them.
A: That's very likely. This was pretty much what we assumed (browser problem, not necessarily Ajax).
Our problem is now, with this application being accessed by many people via a Citrix environment, with each page continually creating DOM objects and not releasing them, the Citrix environment starts thrashing after some usage. I've seen similar complaints online (especially where you are dumb enough to access an Ajax website via Citrix), but it doesn't make me feel much better that this is the intended behavior.
I'm wondering now if anyone has come up with a clever workaround. We also have a client app where we are using the .NET BrowserControl to access these websites, rather than just straight IE7, so if anyone knows a secret API call (FreeStaleDomObjectsFTW()) we can utilize from that end of the stack, that would be useful as well.
A: You could attach to the pageLoading event of the PageRequestManager class, loop over the panels in its panelsUpdating collection, and remove the DOM elements in each.
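A minimal sketch of that approach (assuming the standard Microsoft Ajax client library is loaded):
Sys.WebForms.PageRequestManager.getInstance().add_pageLoading(function (sender, args) {
    var panels = args.get_panelsUpdating();
    for (var i = 0; i < panels.length; i++) {
        var panel = panels[i];
        // drop the old children before the panel's markup is replaced
        while (panel.firstChild) {
            panel.removeChild(panel.firstChild);
        }
    }
});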
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I document a module in Python? That's it. If you want to document a function or a class, you put a string just after the definition. For instance:
def foo():
"""This function does nothing."""
pass
But what about a module? How can I document what a file.py does?
A: Add your docstring as the first statement in the module.
"""
Your module's verbose yet thorough docstring.
"""
import foo
# ...
For packages, you can add your docstring to __init__.py.
A: You do it the exact same way. Put a string in as the first statement in the module.
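For example, assuming a hypothetical file mymodule.py:
"""Utilities for parsing log files."""

def parse(line):
    """Parse a single log line."""
    pass
The docstring is then available at runtime:
>>> import mymodule
>>> mymodule.__doc__
'Utilities for parsing log files.'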
A: For the packages, you can document it in __init__.py.
For the modules, you can add a docstring simply in the module file.
All the information is here: http://www.python.org/dev/peps/pep-0257/
A: It's easy, you just add a docstring at the top of the module.
A: For PyPI Packages:
If you add docstrings like this to your __init__.py file, as seen below,
"""
Please refer to the documentation provided in the README.md,
which can be found at gorpyter's PyPI URL: https://pypi.org/project/gorpyter/
"""
# <IMPORT_DEPENDENCIES>
def setup():
"""Verify your Python and R dependencies."""
Then you will see this when you call the help function:
help(<YOUR_PACKAGE>)
DESCRIPTION
Please refer to the documentation provided in the README.md,
which can be found at gorpyter's PyPI URL: https://pypi.org/project/gorpyter/
FUNCTIONS
setup()
Verify your Python and R dependencies.
Note that the help DESCRIPTION is triggered by having that first docstring at the very top of the file.
A: Here is an example of Google-style Python docstrings showing how a module can be documented. Basically there is information about the module, how to execute it, information about module-level variables, and a list of ToDo items.
"""Example Google style docstrings.
This module demonstrates documentation as specified by the `Google
Python Style Guide`_. Docstrings may extend over multiple lines.
Sections are created with a section header and a colon followed by a
block of indented text.
Example:
Examples can be given using either the ``Example`` or ``Examples``
sections. Sections support any reStructuredText formatting, including
literal blocks::
$ python example_google.py
Section breaks are created by resuming unindented text. Section breaks
are also implicitly created anytime a new section starts.
Attributes:
module_level_variable1 (int): Module level variables may be documented in
either the ``Attributes`` section of the module docstring, or in an
inline docstring immediately following the variable.
Either form is acceptable, but the two should not be mixed. Choose
one convention to document module level variables and be consistent
with it.
Todo:
* For module TODOs
* You have to also use ``sphinx.ext.todo`` extension
.. _Google Python Style Guide:
http://google.github.io/styleguide/pyguide.html
"""
module_level_variable1 = 12345
def my_function():
pass
...
...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "80"
} |
Q: IE6 and XML prolog With an XML prolog like
<?xml version="1.0" encoding="iso-8859-1"?>
and a Doctype like
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd">
I can get my page to render as expected. However, in IE7 the same page does not render correctly. (a span inside a div does not align vertically) Articles on the web suggest that XML prolog + doctype will throw IE6 into quirks mode. However this article seems to suggest otherwise, although it does not mention the version (is it 6 or 7) it applies to, though the article is dated sep 2005 which makes me believe it applies to IE6
Does XML Prolog + doc type throw IE6 into quirks mode? What about IE7? Any recommendations on for or against using the prolog + doctype?
A: Adding an XML prolog before the doctype will throw IE6 into quirks rendering mode. In fact, any space before the doctype will throw IE6 into quirks mode. This is not the case for IE7 and above. You can use document.compatMode to have the browser tell you what mode it is using to do the rendering.
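For example:
// "CSS1Compat" means standards mode; "BackCompat" means quirks mode
alert(document.compatMode);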
The IE blog entry on MSDN is referring to changes made to IE7 that allow IE7 to stay in standards mode when using the appropriate doctype even if it is preceded by an XML prolog.
I would generally recommend omitting the prolog and keeping the browser in standards mode; I think this will make your life easier moving forward.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How To Extract SFTP SSH Key From Key Cache in FileZilla FTP Client I have connected to a server via SFTP using FileZilla and accepted adding the server's SSH key to the key cache in FileZilla.
How can I extract this cached key to a keyfile so that may use it through other SFTP applications that require a keyfile be made available?
I have not been able to find anything in the FileZilla documentation related to this.
A: Thomas was correct. FileZilla piggybacks on PuTTY's PSFTP program and stores the saved keys encoded in a hex format at the registry key he listed (HKCU\Software\SimonTatham\PuTTY\SshHostKeys). I needed the key in known_hosts format, so I was able to install a Windows version of OpenSSH at his recommendation and used the ssh-keyscan tool to hit the server and save the key info out in the correct format:
ssh-keyscan -t rsa <my_ftp_ip_address> > c:\known_hosts
ssh-keyscan -t dsa <my_ftp_ip_address> >> c:\known_hosts
(note the >> on the second command so the DSA key is appended rather than overwriting the RSA one)
Thank you Thomas and SO!
A: If you'd rather use a GUI, you can snag the host key from the log window or the first-time connection popup using WinSCP FTP client: https://winscp.net/eng/docs/ssh_verifying_the_host_key
A: Thanks Dougman for the tip!
To further help any newcomers reading your answer.
Prior to running ssh-keyscan, assuming OpenSSH is installed with the defaults, there are a few commands that need to be run (read the quickstart/readme for details).
Here are my commands which allow me to obtain the host key.
C:\Program Files\OpenSSH\bin>mkgroup -l >> ..\etc\group
C:\Program Files\OpenSSH\bin>mkpasswd -l >> ..\etc\passwd
C:\Program Files\OpenSSH\bin>net start opensshd
The OpenSSH Server service is starting.
The OpenSSH Server service was started successfully.
C:\Program Files\OpenSSH\bin>ssh-keyscan -t rsa vivo.sg.m.com > c:\known_hosts
vivo.sg.m.com SSH-2.0-Sun_SSH_1.1
A: If you use the standard openssh console client (cygwin or from linux), host keys are stored, one-per-line, in ~/.ssh/known_hosts. From there, it's a simple matter of figuring out which bit of that host key is needed for your library.
Putty also stores host keys, but it appears to encode them in hex. Those can be found at HKCU\Software\SimonTatham\PuTTY\SshHostKeys
A: Unless I am misunderstanding you: you don't need to.
If you connect to the server with another application (ie: PuTTY) and it has not seen the server before then you will be prompted to accept the key.
I see why you might want to do this, but each application could have it's own way to store keys.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Best way to use a property to reference a Key-Value pair in a dictionary This is a fairly trivial matter, but I'm curious to hear people's opinions on it.
If I have a Dictionary which I'm accessing through properties, which of these formats would you prefer for the property?
/// <summary>
/// This class's FirstProperty property
/// </summary>
[DefaultValue("myValue")]
public string FirstProperty {
get {
return Dictionary["myKey"];
}
set {
Dictionary["myKey"] = value;
}
}
This is probably the typical way of doing it. It's fairly efficient, easy to understand, etc. The only disadvantage is with a longer or more complex key it would be possible to misspell it or change only one instance or something, leading me to this:
private const string DICT_MYKEY = "myKey";

/// <summary>
/// This class's SecondProperty property
/// </summary>
[DefaultValue("myValue")]
public string SecondProperty {
get {
return Dictionary[DICT_MYKEY];
}
set {
Dictionary[DICT_MYKEY] = value;
}
}
Which is marginally more complicated, but seems to offer additional safety, and is closer to what I would think of as the "Code Complete" solution. The downside is that when you also have a /// block and a [DefaultValue()] block above the property already, it starts getting a bit crowded up there.
So which do you like better, and why? Does anybody have any better ideas?
A: I like the second one purely because any avoidance of magic strings/numbers in code is a good thing. IMO if you need to reference a number or string literal in code more than once, it should be a constant. In most cases, even if it's only used once, it should be a constant.
A: I agree with @Glenn from a purely nit-picky point of view. The answer is whatever works for you. All this code takes place in about 10 lines. Nobody is going to get lost and the chance of mistyping is pretty slim (not impossible, but very slim). On the other hand, if you used the key somewhere else, then DEFINITELY go with the constant.
Personally, I would go off on you about your curly brace style. :) Just kidding! It really is a matter of style.
A: This isn't answering your question, but I don't think "DefaultValue" means what you think it means. It doesn't set a default value for your property.
See MSDN and this question for more details.
A: A lot of people would probably argue that the second option is "correct", because any value used more than once should be refactored into a constant. I would most likely use the first option. You have already gotten close to the "Code Complete" solution by encapsulating the dictionary entry in a strong typed property. This reduces the chance of screwing up retrieving the wrong Dictionary entry in your implementation.
There are only 2 places where you could mess up typing "myKey", in the getter and setter, and this would be very easy to spot.
The second option would just get too messy.
A: You could match the property names up to the keys and use reflection to get the name for the lookup.
public string FirstProperty {
    get {
        return Dictionary[PropertyName()];
    }
    set {
        Dictionary[PropertyName()] = value;
    }
}

// requires: using System.Diagnostics;
private string PropertyName()
{
    // strips the "get_"/"set_" prefix from the calling accessor's method name
    return new StackFrame(1).GetMethod().Name.Substring(4);
}
This has the added benefit of making all your property implementation identical, so you could set them up in visual studio as code snippets if you want.
A: When you only use a magic string in one context, like you do, I think it's alright.
But if you ever need to use the key in another part of the class, go const.
A: @Joel you don't want to count on StackFrame. In-lining can ruin your day when you least expect it.
But to the question: Either way doesn't really matter a whole lot.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: invisible watermarks in images How do you insert invisible watermarks in images for copyright purposes? I'm looking for a python library.
What algorithm do you use? What about performance and efficiency?
A: You might want to look into Steganography; that is hiding data inside of images. There are forms that won't get lost if you convert to a lossier format or even crop parts of the image out.
A: I'm looking for "unbreakable" watermarks, so data stored in exif or image metadata are out.
I have found some interesting stuff on the web while waiting for replies here:
http://www.cosy.sbg.ac.at/~pmeerw/Watermarking/
There is a master thesis that's fairly exhaustive about algorithms and their caracteristics (what they do and how unbreakable they are). I haven't got any time to read it in depth, but this stuff looks serious. There are algorithms that support JPEG compression, cropping, gamma correction or down scaling in some way. It's C, but I can port it to Python or use C libraries from Python.
However, it's from 2001 and I guess 7 years are a long time in this field :( Does anybody have some similar and more recent stuff?
A: I use the following code. It requires PIL:
import Image, ImageEnhance  # PIL

def reduceOpacity(im, opacity):
"""Returns an image with reduced opacity."""
assert opacity >= 0 and opacity <= 1
if im.mode != 'RGBA':
im = im.convert('RGBA')
else:
im = im.copy()
alpha = im.split()[3]
alpha = ImageEnhance.Brightness(alpha).enhance(opacity)
im.putalpha(alpha)
return im
def watermark(im, mark, position, opacity=1):
"""Adds a watermark to an image."""
if opacity < 1:
mark = reduceOpacity(mark, opacity)
if im.mode != 'RGBA':
im = im.convert('RGBA')
# create a transparent layer the size of the image and draw the
# watermark in that layer.
layer = Image.new('RGBA', im.size, (0,0,0,0))
if position == 'tile':
for y in range(0, im.size[1], mark.size[1]):
for x in range(0, im.size[0], mark.size[0]):
layer.paste(mark, (x, y))
elif position == 'scale':
# scale, but preserve the aspect ratio
ratio = min(float(im.size[0]) / mark.size[0], float(im.size[1]) / mark.size[1])
w = int(mark.size[0] * ratio)
h = int(mark.size[1] * ratio)
mark = mark.resize((w, h))
layer.paste(mark, ((im.size[0] - w) / 2, (im.size[1] - h) / 2))
else:
layer.paste(mark, position)
# composite the watermark with the layer
return Image.composite(layer, im, layer)
img = Image.open('/path/to/image/to/be/watermarked.jpg')
mark1 = Image.open('/path/to/watermark1.png')
mark2 = Image.open('/path/to/watermark2.png')
img = watermark(img, mark1, (img.size[0]-mark1.size[0]-5, img.size[1]-mark1.size[1]-5), 0.5)
img = watermark(img, mark2, 'scale', 0.01)
The watermark is too faint to see. Only a solid color image would really show it. I can use it to create an image that doesn't show a watermark, but if I do a bit-by-bit subtraction using the original image, I can demonstrate that my watermark is there.
If you want to see how it works, go to TylerGriffinPhotography.com. Each image on the site is watermarked twice: once with the watermark in the lower right corner at 50% opacity (5px from the edge), and once over the whole image at 1% opacity (using "scale", which scales the watermark to the whole image). Can you figure out what the second, low opacity watermark shape is?
A: If you're talking about steganography, here's an old not too-fancy module I did for a friend once (Python 2.x code):
the code
from __future__ import division
import math, os, array, random
import itertools as it
import Image as I
import sys
def encode(txtfn, imgfn):
with open(txtfn, "rb") as ifp:
txtdata= ifp.read()
txtdata= txtdata.encode('zip')
img= I.open(imgfn).convert("RGB")
pixelcount= img.size[0]*img.size[1]
## sys.stderr.write("image %dx%d\n" % img.size)
factor= len(txtdata) / pixelcount
width= int(math.ceil(img.size[0]*factor**.5))
height= int(math.ceil(img.size[1]*factor**.5))
pixelcount= width * height
if pixelcount < len(txtdata): # just a sanity check
sys.stderr.write("phase 2, %d bytes in %d pixels?\n" % (len(txtdata), pixelcount))
sys.exit(1)
## sys.stderr.write("%d bytes in %d pixels (%dx%d)\n" % (len(txtdata), pixelcount, width, height))
img= img.resize( (width, height), I.ANTIALIAS)
txtarr= array.array('B')
txtarr.fromstring(txtdata)
txtarr.extend(random.randrange(256) for x in xrange(len(txtdata) - pixelcount))
newimg= img.copy()
newimg.putdata([
(
r & 0xf8 |(c & 0xe0)>>5,
g & 0xfc |(c & 0x18)>>3,
b & 0xf8 |(c & 0x07),
)
for (r, g, b), c in it.izip(img.getdata(), txtarr)])
newimg.save(os.path.splitext(imgfn)[0]+'.png', optimize=1, compression=9)
def decode(imgfn, txtfn):
img= I.open(imgfn)
with open(txtfn, 'wb') as ofp:
arrdata= array.array('B',
((r & 0x7) << 5 | (g & 0x3) << 3 | (b & 0x7)
for r, g, b in img.getdata())).tostring()
findata= arrdata.decode('zip')
ofp.write(findata)
if __name__ == "__main__":
if sys.argv[1] == 'e':
encode(sys.argv[2], sys.argv[3])
elif sys.argv[1] == 'd':
decode(sys.argv[2], sys.argv[3])
the algorithm
It stores a byte of data per image pixel using: the 3 least-significant bits of the blue band, the 2 LSB of the green one and the 3 LSB of the red one.
encode function: An input text file is compressed by zlib, and the input image is resized (keeping proportions) to ensure that there are at least as many pixels as compressed bytes. A PNG image with the same name as the input image (so don't use a ".png" filename as input if you leave the code as-is :) is saved containing the steganographic data.
decode function: The previously stored zlib-compressed data are extracted from the input image, and saved uncompressed under the provided filename.
I verified the old code still runs, so here's an example image containing steganographic data:
You'll notice that the noise added is barely visible.
A: Well, invisible watermarking is not that easy. Check Digimarc, and what money they earned on it. There is no free C/Python code that a lonely genius has written and left for free usage. I've implemented my own algorithm and the name of the tool is SignMyImage. Google it if interested ... F>
A: What about Exif? It's probably not as secure as what you're thinking, but most users don't even know it exists and if you make it that easy to read the watermark information those who care will still be able to do it anyway.
A: I don't think there is a library that does this out of the box. If you want to implement your own, I would definitely go with the Python Imaging Library (PIL).
This is a Python Cookbook recipe that uses PIL to add a visible watermark to an image. If it's enough for your needs, you could use this to add a watermark with enough transparency that it is only visible if you know what you are looking for.
A: There is a newer (2005) digital watermarking FAQ at watermarkingworld.org
A: I was going to post an answer similar to Ugh. I would suggest putting a small TXT file describing the image source (and perhaps a small copyright statement, if one applies) into the image in a manner that is difficult to detect and break.
A: I'm not sure how important it is to be unbreakable, but a simple solution might just be to append a text file to the end of the image. Something like "This image belongs to ...".
If you open the image in a viewer/browser, it looks like a normal jpeg, but if you open it in a text editor, the last line would be readable.
The same method allows you to include an actual file in an image (hide a file inside of an image). I've found that it's a bit hit-or-miss, but 7-zip files seem to work. You could hide all sorts of copyright goodies inside the image.
Again, it's not unbreakable by any stretch of the imagination, but it's completely invisible to the naked eye.
A: Some image formats have headers where you can store arbitrary information as well.
For example, the PNG specification has a chunk where you can store text data. This is similar to the answers above, but without adding random data to the image data itself.
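A hedged sketch with PIL (the file names and the key/value here are assumptions):
import Image
from PngImagePlugin import PngInfo

im = Image.open("photo.png")
meta = PngInfo()
meta.add_text("Copyright", "(c) Example Photography")  # stored in a tEXt chunk
im.save("photo_marked.png", pnginfo=meta)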
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: NT authentication login I am working on a site where users can login to get more private information.
My client has another site else where that uses nt authentication for accessing it.
What they want to do is have a button on the site I am working on, under the private area, that will send them to the NT-authenticated site but not require them to log on to that site, instead passing along the username and password that they used to log into my site.
Is it possible to do this? And how would I accomplish it? Is there a better way to do this?
A: Here's an (untested) theory, the details of which will greatly depend on what types of authentication the Sharepoint site will accept. I'll tackle Basic, since it's the easiest.
You'll write out some JavaScript that uses XMLHttpRequest to submit a request to the Sharepoint site, and add their username and password to the request headers. Their browser will run that JavaScript, and get logged into the Sharepoint site.
Now, when they click the link, the client's browser should have the cached credentials to send to the Sharepoint site.
Possible issues:
*
*XMLHttpRequest does not allow cross domain auth
*Browser and XHR don't share auth info
*Sharepoint and XHR can't agree on auth method
Another option is to proxy the connection to Sharepoint, which allows you to login server side (bypassing XHR limitations and browser security) - but requiring load on your server and possibly some URL target issues.
A: How will the other site validate your username and password?
Ideally your site shouldn't even be remembering the user's password to be able to pass it to another site (you store hashes of the password, not the password itself, and only use the actually password during validation).
What if your site provided a token to the user, who presents that token to the new site, which in turn asks your site to validate the token. Basically the second site is trusting you to tell them who the user is.
This all breaks down if the second site is actually using the Windows accounts for anything other than just retrieving a user name (for example permissions on the underlying file), since the user is not logged on as the actual Windows user account in this scenario.
A: If you need to authenticate against the second site, you may need to spawn a new thread and call the Windows LogonUser API. Once you have the security token, assign it to the new thread and do your connection via that thread.
LogonUser requires enhanced privileges and isn't managed code, so there are some pretty severe hiccups to using it. But that's been the only workaround I've been able to find to get a Forms-authenticated site talking to a Windows-authenticated service/site.
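A minimal sketch of the P/Invoke declaration (the constants shown are the common interactive-logon defaults):
using System.Runtime.InteropServices;

[DllImport("advapi32.dll", SetLastError = true)]
static extern bool LogonUser(string lpszUsername, string lpszDomain, string lpszPassword,
    int dwLogonType, int dwLogonProvider, out IntPtr phToken);

// LOGON32_LOGON_INTERACTIVE = 2, LOGON32_PROVIDER_DEFAULT = 0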
Hope this helps.
A: Is this an intranet environment? If so, they shouldn't have to log in anyway. If SharePoint is set up using "Integrated Authentication" and the site is listed as a trusted site in IE, the browser will use their network credentials for auto login. This can be set up on Firefox as well.
A: Your users will not be able to connect to the NTLM site directly without getting an NTLM challenge. I would write what would effectively be a proxy to the NTLM site; i.e. your server-side code would have credentials to connect to the NTLM site, and it would pass through the requests from your users.
As you mention it's SharePoint (spit), bear in mind that SharePoint has a bunch of Web Services you could use for this (rather than doing screen-scraping).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Extending the User model with custom fields in Django What's the best way to extend the User model (bundled with Django's authentication app) with custom fields? I would also possibly like to use the email as the username (for authentication purposes).
I've already seen a few ways to do it, but can't decide on which one is the best.
A: Since Django 1.5 you may easily extend the user model and keep a single table on the database.
from django.contrib.auth.models import AbstractUser
from django.db import models
from django.utils.translation import ugettext_lazy as _
class UserProfile(AbstractUser):
age = models.PositiveIntegerField(_("age"))
You must also configure it as current user class in your settings file
# supposing you put it in apps/profiles/models.py
AUTH_USER_MODEL = "profiles.UserProfile"
If you want to add a lot of users' preferences, the OneToOneField option may be a better choice though.
A note for people developing third party libraries: if you need to access the user class remember that people can change it. Use the official helper to get the right class
from django.contrib.auth import get_user_model
User = get_user_model()
A: There is an official recommendation on storing additional information about users.
The Django Book also discusses this problem in section Profiles.
A: It's too late, but my answer is for those who search for a solution with a recent version of Django.
models.py:
from django.db import models
from django.contrib.auth.models import User
from django.db.models.signals import post_save
from django.dispatch import receiver
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
extra_Field_1 = models.CharField(max_length=25, blank=True)
extra_Field_2 = models.CharField(max_length=25, blank=True)
@receiver(post_save, sender=User)
def create_user_profile(sender, instance, created, **kwargs):
if created:
Profile.objects.create(user=instance)
@receiver(post_save, sender=User)
def save_user_profile(sender, instance, **kwargs):
instance.profile.save()
you can use it in templates like this:
<h2>{{ user.get_full_name }}</h2>
<ul>
<li>Username: {{ user.username }}</li>
<li>Location: {{ user.profile.extra_Field_1 }}</li>
<li>Birth Date: {{ user.profile.extra_Field_2 }}</li>
</ul>
and in views.py like this:
def update_profile(request, user_id):
user = User.objects.get(pk=user_id)
user.profile.extra_Field_1 = 'Lorem ipsum dolor sit amet, consectetur adipisicing elit...'
user.save()
A: The least painful and indeed Django-recommended way of doing this is through a OneToOneField(User) property.
Extending the existing User model
…
If you wish to store information related to User, you can use a one-to-one relationship to a model containing the fields for additional information. This one-to-one model is often called a profile model, as it might store non-auth related information about a site user.
That said, extending django.contrib.auth.models.User and supplanting it also works...
Substituting a custom User model
Some kinds of projects may have authentication requirements for which Django’s built-in User model is not always appropriate. For instance, on some sites it makes more sense to use an email address as your identification token instead of a username.
[Ed: Two warnings and a notification follow, mentioning that this is pretty drastic.]
I would definitely stay away from changing the actual User class in your Django source tree and/or copying and altering the auth module.
A: New in Django 1.5, now you can create your own Custom User Model (which seems to be good thing to do in above case). Refer to 'Customizing authentication in Django'
Probably the coolest new feature on 1.5 release.
A: Here I will try to explain how to extend Django's default user model with extra fields.
It's very simple; just do it.
Django allows extending the default user model with AbstractUser
Note: first create the model with the extra fields you want to add to the user model, then run python manage.py makemigrations followed by python manage.py migrate.
Step: create a model with the extra fields you want to add to the Django default user model (in my case I created CustomUser).
model.py
from django.db import models
from django.contrib.auth.models import AbstractUser
# Create your models here.
class CustomUser(AbstractUser):
mobile_no = models.IntegerField(blank=True,null=True)
date_of_birth = models.DateField(blank=True,null=True)
Add the name of the model you created to settings.py (in my case CustomUser). Registering it in settings.py makes it the default user model:
#settings.py
AUTH_USER_MODEL = 'myapp.CustomUser'
Finally, register the CustomUser model in admin.py:
#admin.py
@admin.register(CustomUser)
class CustomUserAdmin(admin.ModelAdmin):
list_display = ("username","first_name","last_name","email","date_of_birth", "mobile_no")
Then run python manage.py makemigrations,
then python manage.py migrate,
then python manage.py createsuperuser.
Now you can see the default user model extended with mobile_no and date_of_birth.
A: Below is another approach to extending the User model.
I feel it is clearer, easier, and more readable than the above two approaches.
http://scottbarnham.com/blog/2008/08/21/extending-the-django-user-model-with-inheritance/
Using the above approach:
*
*you don't need to use user.get_profile().newattribute to access the additional information related to the user
*you can just directly access additional new attributes via user.newattribute
A: Note: this answer is deprecated. see other answers if you are using Django 1.7 or later.
This is how I do it.
#in models.py
from django.contrib.auth.models import User
from django.db.models.signals import post_save
class UserProfile(models.Model):
user = models.OneToOneField(User)
#other fields here
def __str__(self):
return "%s's profile" % self.user
def create_user_profile(sender, instance, created, **kwargs):
if created:
profile, created = UserProfile.objects.get_or_create(user=instance)
post_save.connect(create_user_profile, sender=User)
#in settings.py
AUTH_PROFILE_MODULE = 'YOURAPP.UserProfile'
This will create a userprofile each time a user is saved if it is created.
You can then use
user.get_profile().whatever
Here is some more info from the docs
http://docs.djangoproject.com/en/dev/topics/auth/#storing-additional-information-about-users
Update: Please note that AUTH_PROFILE_MODULE is deprecated since v1.5: https://docs.djangoproject.com/en/1.5/ref/settings/#auth-profile-module
A: Well, some time has passed since 2008 and it's time for a fresh answer. Since Django 1.5 you will be able to create a custom User class. Actually, at the time I'm writing this, it's already merged into master, so you can try it out.
There's some information about it in docs or if you want to dig deeper into it, in this commit.
All you have to do is add AUTH_USER_MODEL to settings with path to custom user class, which extends either AbstractBaseUser (more customizable version) or AbstractUser (more or less old User class you can extend).
For people too lazy to click, here's a code example (taken from the docs):
from django.db import models
from django.contrib.auth.models import (
BaseUserManager, AbstractBaseUser
)
class MyUserManager(BaseUserManager):
def create_user(self, email, date_of_birth, password=None):
"""
Creates and saves a User with the given email, date of
birth and password.
"""
if not email:
raise ValueError('Users must have an email address')
user = self.model(
email=MyUserManager.normalize_email(email),
date_of_birth=date_of_birth,
)
user.set_password(password)
user.save(using=self._db)
return user
def create_superuser(self, username, date_of_birth, password):
"""
Creates and saves a superuser with the given email, date of
birth and password.
"""
u = self.create_user(username,
password=password,
date_of_birth=date_of_birth
)
u.is_admin = True
u.save(using=self._db)
return u
class MyUser(AbstractBaseUser):
email = models.EmailField(
verbose_name='email address',
max_length=255,
unique=True,
)
date_of_birth = models.DateField()
is_active = models.BooleanField(default=True)
is_admin = models.BooleanField(default=False)
objects = MyUserManager()
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['date_of_birth']
def get_full_name(self):
# The user is identified by their email address
return self.email
def get_short_name(self):
# The user is identified by their email address
return self.email
def __unicode__(self):
return self.email
def has_perm(self, perm, obj=None):
"Does the user have a specific permission?"
# Simplest possible answer: Yes, always
return True
def has_module_perms(self, app_label):
"Does the user have permissions to view the app `app_label`?"
# Simplest possible answer: Yes, always
return True
@property
def is_staff(self):
"Is the user a member of staff?"
# Simplest possible answer: All admins are staff
return self.is_admin
A: This is what I do, and in my opinion it's the simplest way to do this. Define an object manager for your new customized model, then define your model.
from django.db import models
from django.contrib.auth.models import PermissionsMixin, AbstractBaseUser, BaseUserManager
class User_manager(BaseUserManager):
def create_user(self, username, email, gender, nickname, password):
email = self.normalize_email(email)
user = self.model(username=username, email=email, gender=gender, nickname=nickname)
user.set_password(password)
user.save(using=self.db)
return user
def create_superuser(self, username, email, gender, password, nickname=None):
user = self.create_user(username=username, email=email, gender=gender, nickname=nickname, password=password)
user.is_superuser = True
user.is_staff = True
user.save()
return user
class User(PermissionsMixin, AbstractBaseUser):
username = models.CharField(max_length=32, unique=True, )
email = models.EmailField(max_length=32)
gender_choices = [("M", "Male"), ("F", "Female"), ("O", "Others")]
gender = models.CharField(choices=gender_choices, default="M", max_length=1)
nickname = models.CharField(max_length=32, blank=True, null=True)
is_active = models.BooleanField(default=True)
is_staff = models.BooleanField(default=False)
REQUIRED_FIELDS = ["email", "gender"]
USERNAME_FIELD = "username"
objects = User_manager()
def __str__(self):
return self.username
Don't forget to add this line of code to your settings.py:
AUTH_USER_MODEL = 'YourApp.User'
This is what I do, and it always works.
A: You can simply extend the user profile by creating a new entry each time a user is created, using Django post-save signals.
models.py
from __future__ import unicode_literals
from django.db import models
from django.db.models.signals import *
from django.contrib.auth.models import User
class UserProfile(models.Model):
user_name = models.OneToOneField(User, related_name='profile')
city = models.CharField(max_length=100, null=True)
def __unicode__(self): # __str__
return unicode(self.user_name)
def create_user_profile(sender, instance, created, **kwargs):
if created:
UserProfile.objects.create(user_name=instance)
post_save.connect(create_user_profile, sender=User)
This will automatically create an employee instance when a new user is created.
If you wish to extend the user model and add further information while creating a user, you can use django-betterforms (http://django-betterforms.readthedocs.io/en/latest/multiform.html). This will create a user add form with all the fields defined in the UserProfile model.
models.py
from __future__ import unicode_literals
from django.db import models
from django.db.models.signals import *
from django.contrib.auth.models import User
class UserProfile(models.Model):
user_name = models.OneToOneField(User)
city = models.CharField(max_length=100)
def __unicode__(self): # __str__
return unicode(self.user_name)
forms.py
from django import forms
from django.forms import ModelForm
from betterforms.multiform import MultiModelForm
from django.contrib.auth.forms import UserCreationForm
from .models import *
class ProfileForm(ModelForm):
class Meta:
model = UserProfile
exclude = ('user_name',)
class AddUserMultiForm(MultiModelForm):
form_classes = {
'user':UserCreationForm,
'profile':ProfileForm,
}
views.py
from django.shortcuts import redirect
from .models import *
from .forms import *
from django.views.generic import CreateView
class AddUser(CreateView):
form_class = AddUserMultiForm
template_name = "add-user.html"
success_url = '/your-url-after-user-created'
def form_valid(self, form):
user = form['user'].save()
profile = form['profile'].save(commit=False)
profile.user_name = User.objects.get(username= user.username)
profile.save()
return redirect(self.success_url)
addUser.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<form action="." method="post">
{% csrf_token %}
{{ form }}
<button type="submit">Add</button>
</form>
</body>
</html>
urls.py
from django.conf.urls import url, include
from appName.views import *
urlpatterns = [
url(r'^add-user/$', AddUser.as_view(), name='add-user'),
]
A: Extending Django User Model (UserProfile) like a Pro
I've found this very useful: link
An extract:
from django.contrib.auth.models import User
class Employee(models.Model):
user = models.OneToOneField(User)
department = models.CharField(max_length=100)
>>> u = User.objects.get(username='fsmith')
>>> freds_department = u.employee.department
A: It's very easy in Django version 3.0+ (If you are NOT in the middle of a project):
In models.py
from django.db import models
from django.contrib.auth.models import AbstractUser
class CustomUser(AbstractUser):
extra_field=models.CharField(max_length=40)
In settings.py
First, register your new app and then below AUTH_PASSWORD_VALIDATORS
add
AUTH_USER_MODEL ='users.CustomUser'
Finally, register your model in the admin, run makemigrations and migrate, and it will be completed successfully.
Official doc: https://docs.djangoproject.com/en/3.2/topics/auth/customizing/#substituting-a-custom-user-model
A: A simple and effective approach is:
models.py
from django.db import models
from django.contrib.auth.models import User
class CustomUser(User):
profile_pic = models.ImageField(upload_to='...')
other_field = models.CharField()
A: Currently as of Django 2.2, the recommended way when starting a new project is to create a custom user model that inherits from AbstractUser, then point AUTH_USER_MODEL to the model.
Source: https://docs.djangoproject.com/en/2.2/topics/auth/customizing/#using-a-custom-user-model-when-starting-a-project
A: Try this:
Create a model called Profile and reference the user with a OneToOneField and provide an option of related_name.
models.py
from django.db import models
from django.contrib.auth.models import *
from django.dispatch import receiver
from django.db.models.signals import post_save
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE, related_name='user_profile')
def __str__(self):
return self.user.username
@receiver(post_save, sender=User)
def create_profile(sender, instance, created, **kwargs):
try:
if created:
Profile.objects.create(user=instance).save()
except Exception as err:
print('Error creating user profile!')
Now to directly access the profile using a User object you can use the related_name.
views.py
from django.http import HttpResponse
def home(request):
profile = f'profile of {request.user.user_profile}'
return HttpResponse(profile)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "534"
} |
Q: SQL Server Alter Computed Column Does anyone know of a way to alter a computed column without dropping the column in SQL Server. I want to stop using the column as a computed column and start storing data directly in the column, but would like to retain the current values.
Is this even possible?
A: If you need to maintain the name of the column (so as not to break client code), you will need to drop the column and add back a stored column with the same name. You can do this without downtime by making the changes (along the lines of SQLMenace's solution) in a single transaction. Here's some pseudo-code:
begin transaction
drop computed column X
add stored column X
populate column using the old formula
commit transaction
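A concrete sketch of that (the table, column type, and formula here are hypothetical; adjust to your schema):
BEGIN TRANSACTION;

ALTER TABLE dbo.Orders DROP COLUMN Total;    -- drop the computed column
ALTER TABLE dbo.Orders ADD Total money NULL; -- add it back as a stored column

UPDATE dbo.Orders SET Total = Qty * Price;   -- repopulate using the old formula

COMMIT TRANSACTION;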
A: Not that I know of, but here is something you can do:
add another column to the table,
update that column with the values of the computed column,
then drop the computed column.
A: Ok, so let me see if I got this straight. You want to take a column that is currently computed and make it a plain-jane data column. Normally this would drop the column but you want to keep the data in the column.
*
*Make a new table with the primary key columns from your source table and the generated column.
*Copy the data from your source table into the new table.
*Change the column on your source table.
*Copy the data back.
No matter what you do I am pretty sure changing the column will drop it. This way is a bit more complex but not that bad and it saves your data.
[Edit: @SqlMenace's answer is much easier. :) Curse you Menace!! :)]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How do I hide the input caret in a System.Windows.Forms.TextBox? I need to display a variable-length message and allow the text to be selectable. I have made the TextBox ReadOnly which does not allow the text to be edited, but the input caret is still shown.
The blinking input caret is confusing. How do I hide it?
A: When using the Win32 call, don't forget to re-hide the caret in the textbox's GotFocus event.
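A minimal sketch (someTextBox and the HideCaret declaration are assumed from the Win32 answer below):
someTextBox.GotFocus += delegate { HideCaret(someTextBox.Handle); };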
A: Just for completeness, I needed such a functionality for using with a DevExpress WinForms TextEdit control.
They already do provide a ShowCaret and a HideCaret method, unfortunately they are protected. Therefore I created a derived class that provides the functionality. Here is the full code:
public class MyTextEdit : TextEdit
{
private bool _wantHideCaret;
public void DoHideCaret()
{
HideCaret();
_wantHideCaret = true;
}
public void DoShowCaret()
{
ShowCaret();
_wantHideCaret = false;
}
protected override void OnGotFocus(EventArgs e)
{
base.OnGotFocus(e);
if (_wantHideCaret)
{
HideCaret();
}
}
}
To use the code, simply use the derived class instead of the original TextEdit class in your code and call DoHideCaret() anywhere, e.g. in the constructor of your form that contains the text edit control.
Maybe this is helpful to someone in the future.
A: You can do through a win32 call
// requires: using System.Runtime.InteropServices;
[DllImport("user32.dll")]
static extern bool HideCaret(IntPtr hWnd);
public void HideCaret()
{
HideCaret(someTextBox.Handle);
}
A: If you disable the text box (set Enable=false), the text in it is still scrollable and selectable. If you don't like the visual presentation of a disabled text box (gray background usually) you can manually override the colors.
Be warned, manually overriding colors is going to make your form/control look weird on systems that do not use the default color/theme settings. Don't assume that because your control is white that everyone's control is going to be white. That's why you should always use the system colors whenever possible (defined in the System.Drawing.SystemColors enumeration) such as SystemColors.ControlLight.
A: AFAIK, this cannot be done. The TextBox control is a funny control because it actually has a lot of behaviour that can't be modified due to the way it taps into the operating system. This is why many of the cool custom TextBoxes are written from scratch.
I am afraid you may not be able to do what you wish to do :(
A: I know this is an old thread but it is a useful reference.
I solved the problem with a much easier but very kludgie solution, which may depend on how much control you have over the user's access to the form. I added a textbox (any focus-able control) which I gave prime tabIndex value and then positioned it off-form so that it was not visible. This works fine on a dialog because the user can't resize. If the form is resizeable, this may not work.
As I said, a kludge - but a lot easier to set up. (BTW I found the HideCaret approach didn't work - but I didn't pursue it hard.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Project design / FS layout for large django projects What is the best way to layout a large django project? The tutorials provide simple instructions for setting up apps, models, and views, but there is less information about how apps and projects should be broken down, how much sharing is allowable/necessary between apps in a typical project (obviously that is largely dependent on the project) and how/where general templates should be kept.
Does anyone have examples, suggestions, and explanations as to why a certain project layout is better than another? I am particularly interested in the incorporation of large numbers of unit tests (2-5x the size of the actual code base) and string externalization / templates.
A: I found Zachary's layout quite useful
Zachary Voase’s Blog » Django Project Conventions, Revisited.
A: This page does a good job of addressing some of my questions: http://www.b-list.org/weblog/2006/sep/10/django-tips-laying-out-application/
Specifically:
*
*To define custom template tags or filters, you must create a sub-directory in the application’s directory called templatetags, and it must contain a file named __init__.py so that it can be imported as a Python module.
*To define unit tests which will automatically be noticed by Django’s testing framework, put them in a module called tests (which can be either a file named tests.py or a directory called tests). The testing framework will also find any doctests in that module, but the preferred place for those is, of course, the docstrings of the classes or functions they’re designed to test.
*To provide custom SQL which will be executed immediately after your application is installed, create a sub-directory called sql inside the application’s directory; the file names should be the same as the names of the models whose tables they’ll operate on; for example, if you have an app named weblog containing a model named Entry, then the file sql/entry.sql inside the app’s directory can be used to modify or insert data into the entries table as soon as it’s been created.
The note about tests.py and tests (the directory) also holds for models, which helps address the problem of having way to many tests (or models) for one file.
I would still like to see some examples / suggestions for app/project break down, and big django sites that work well.
A: The Pinax project is built around the idea of small reusable apps, which are easily brought together into a project. They've used the project Cloud 27 as a demo project.
The Django project I'm working on (called Basie. It's pre-0.1, so no link yet.) is trying to follow the Pinax model, and so far it's working out fairly well.
A: The major guidelines are similar to any other large code project. Apps should address a single, clearly-defined responsibility. The name "application" is a misnomer; Django apps should be thought of more as reusable components which can be plugged together to create a real application. Tests for each app should be contained within that app. Apps should be decoupled from each other as much as possible, but clearly there will be dependencies, so the goal should be to keep the dependency graph as simple and sane as possible.
I prefer to keep all the templates for a project under a single project-wide templates directory, with a subdirectory for each app (using a template subdirectory for each app is a very strong convention in Django, as it avoids template name collisions between apps). The reason for a single project-wide templates directory is that templates, template inheritance trees, and block names can be quite project-specific, so it's hard to provide "default" app templates that can plug in to any project. There have been some attempts to settle on standard naming conventions for base site-wide templates and the blocks they define, but I haven't seen a standard emerge yet (the way they do things over at Pinax is probably the closest we have to a standard).
Re "string externalization", if you mean i18n and l10n, Django has strong support for that and standard places where it puts the .po files - check the docs.
A: My current layout stems from me wanting to have a test-version of my sites. This means having two projects for every site, since they need different configurations, and forces me to move all the applications out of the projects.
I've created two folders: $APP_ROOT/devel and $APP_ROOT/prod. These contain all the apps. Using source control (in my case git) I have the apps in devel at the HEAD revision, while the apps in prod are locked to the PROD tag. The templates also have their own folder with the same layout as the apps.
Now I'm able to do all my development in the devel-apps folder and the matching template-folder. When I have something I'm happy with, I tag that revision and update prod.
A: I really like Randall Degges' post on this subject. He leaves out info on how to glue the settings files together; I'll have a post on that to link soon, but for now anyone can check out my repo, where I include some direction in the readme.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
} |
Q: Publishing Flash video streaming What options do I have to publish Flash video from webcams other than Adobe's Flash Media Server and Red5? I would prefer a solution in .NET, Python or PHP but can't find anything.
A: Besides another commercial product, like Wowza Media Server, you could go with a CDN (Content Delivery Network), like Limelight Networks or Voxel.net. You might even be able to find a local hosting provider that would serve up live Flash video for you.
(Live Flash video is a non-trivial thing to do, so the options are a bit limited.)
A: Weborb can play FLV over HTTP, but cannot accept an RTMP live stream from a webcam, so it cannot re-stream this input. In addition to the alternatives given for RTMP (FMS, Red5, Wowza) you could also use haxevideo.
A: It looks like WebOrb can do it: WebOrb FAQ (last entry)
Can I stream Flash video to a Flex/Flash client through WebORB?
Yes, WebORB supports FLV video streaming. An example is included with the WebORB for .NET product distribution.
I haven't worked with WebOrb though, so I can't say for sure how easy it is.
A: Weborb (http://www.themidnightcoders.com/weborb/) has some awesome benefits on the data access side; if you're looking to do some AMF as well as streaming video, it could be a very decent option (and has a PHP and .NET version).
On the python side I found (http://rtmpy.org/) but couldn't say too much about it as I have never used it...
A: WebORB actually can accept a live video stream from a user camera and definitely can restream it to other clients. It provides a video chat demo right in the product distribution.
A: I have mainly used FluorineFX and WebORB in Flex business applications. I don't think Fluorine supports video streaming, but WebORB definitely does. However, a colleague of mine searched a while to get it working with AS2 and didn't manage. Red5 was up and running with AS2 in no time. With AS3 there's lots of documentation on WebORB's site. WebORB does have the advantage that it supports .NET, Java, PHP and Ruby. Silverlight will also be supported, so that'll be great!
A: Justin.tv has a turnkey API, similar to Nimbb.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Hudson can't build my Maven 2 project because it says artifacts are missing from the repository? (they aren't) I'm using Hudson and Maven 2 for my automated build/CI. I can build fine with Maven from the command line, but when I run the same goal with Hudson, the build fails complaining of missing artifacts. I'm running Hudson as a Windows XP service.
A: Make sure you're running Hudson as the same user you use to run Maven from the command line. Maven creates a separate repository for each user. If you are running Hudson as a Windows service, it won't run as the same user you log on and run "mvn" commands as, which means the artifacts in the two repositories may differ.
To fix, either start Hudson manually as the user which works, or update the repository for the user which Hudson is running as.
A: Obvious question, but have you got Hudson set up to point to the same Maven repository as your command line build? You can check this from the Hudson admin gui - look in the Maven section of the Manage Hudson page. This should have a MAVEN_HOME environment variable listed. Look in the settings.xml file under:
MAVEN_HOME\conf\settings.xml
The localRepository configuration item is the location of the Maven repository that the Hudson build is using.
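For reference, the relevant entry looks something like this (the path shown is only an example; yours will differ):
<!-- MAVEN_HOME\conf\settings.xml -->
<settings>
  <localRepository>C:\Documents and Settings\hudson\.m2\repository</localRepository>
</settings>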
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Database sharding and Rails What's the best way to deal with a sharded database in Rails? Should the sharding be handled at the application layer, the active record layer, the database driver layer, a proxy layer, or something else altogether? What are the pros and cons of each?
A: I assume with shards we're talking about horizontal partitioning and not vertical partitioning (here are the differences on Wikipedia).
First off, stretch vertical partitioning as far as you can take it before you consider horizontal partitioning. It's easy in Rails to have different models point to different machines and for most Rails sites, this will bring you far enough.
For horizontal partitioning, in an ideal world, this would be handled at the application layer in Rails. But while it's not hard, it's not trivial in Rails, and by the time you need it, usually your application has grown beyond the point where this is feasible since you have ActiveRecord calls sprinkled all over the place. And no one, developers or management, likes working on it before you need it since everyone would rather work on features users will use now rather than on partitioning which may not come into play for years after your traffic has exploded.
ActiveRecord layer... not easy from what I can see. Would require lots of monkey patching into Rails internals.
At Spock we ended up handling this using a custom MySQL proxy and open sourced it on SourceForge as Spock Proxy. ActiveRecord thinks it's talking to one MySQL database machine when in reality it's talking to the proxy, which then talks to one or more MySQL databases, merges/sorts the results, and returns them to ActiveRecord. It requires only a few changes to your Rails code. Take a look at the Spock Proxy SourceForge page for more details and for our reasons for going this route.
A: For those of you like me who hadn't heard of sharding:
http://highscalability.com/unorthodox-approach-database-design-coming-shard
A: Rails 6.1 provides the ability to switch connections per database, so we can do horizontal partitioning.
*
*Shards are declared in the three-tier config like this:
production:
  primary:
    database: my_primary_database
    adapter: mysql2
  primary_replica:
    database: my_primary_database
    adapter: mysql2
    replica: true
  primary_shard_one:
    database: my_primary_shard_one
    adapter: mysql2
  primary_shard_one_replica:
    database: my_primary_shard_one
    adapter: mysql2
    replica: true
*
*Models are then connected with the connects_to API via the shards key
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  connects_to shards: {
    default: { writing: :primary, reading: :primary_replica },
    shard_one: { writing: :primary_shard_one, reading: :primary_shard_one_replica }
  }
end
*
*Then models can swap connections manually via the connected_to API. If using sharding, both a role and a shard must be passed:
ActiveRecord::Base.connected_to(role: :writing, shard: :default) do
  @id = Person.create! # Creates a record in the default shard
end

ActiveRecord::Base.connected_to(role: :writing, shard: :shard_one) do
  Person.find(@id) # Can't find record, doesn't exist because it was created
                   # in the default shard
end
reference:
*
*https://edgeguides.rubyonrails.org/active_record_multiple_databases.html#horizontal-sharding
*https://dev.to/ritikesh/multitenant-architecture-on-rails-6-1-27c7
A: Connecting Rails to multiple databases is not a big deal: you simply have an ActiveRecord subclass for each shard that overrides the connection property. That makes it pretty simple if you need to make cross-shard calls. You then just have to write a little code when you need to make calls between the shards.
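A minimal sketch of that approach (pre-6.1 style; the class names and the :shard_one entry in config/database.yml are assumptions):
# Abstract base class whose connection points at a specific shard.
class ShardOneBase < ActiveRecord::Base
  self.abstract_class = true
  establish_connection :shard_one # named entry in config/database.yml
end

# Any model inheriting from it lives in that shard.
class Entity < ShardOneBase
end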
I don't like Hank's idea of splitting the rails instances, because it seems challenging to call the code between the instances unless you have a big shared library.
Also you should look at doing something like Masochism before you start sharding.
A: FiveRuns have a gem named DataFabric that does application-level sharding and master/slave replication. It might be worth checking out.
A: For Rails to work in a replicated environment, I would suggest using the my_replication plugin, which helps switch the database connection to one of the slaves at run time:
https://github.com/minhnghivn/my_replication
A: To my mind, the simplest way is to maintain a 1:1 mapping between Rails instances and DB shards.
A: A proxy layer is better: it can support all programming languages.
For example: Apache ShardingSphere's proxy.
There are two different Apache ShardingSphere products: ShardingSphere-JDBC for the application layer, which is for Java only, and ShardingSphere-Proxy for the proxy layer, which works with all programming languages.
FYI: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/
A: It depends on your Rails version. Newer Rails versions provide support for sharding, as @Oshan said. But if you can't update to a newer version you can use the octopus gem.
Gem Link
https://github.com/thiagopradi/octopus
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Can you use reflection to find the name of the currently executing method? Like the title says: Can reflection give you the name of the currently executing method?
I'm inclined to guess not, because of the Heisenberg problem. How do you call a method that will tell you the current method without changing what the current method is? But I'm hoping someone can prove me wrong there.
Update:
*
*Part 2: Could this be used to look inside code for a property as well?
*Part 3: What would the performance be like?
Final Result
I learned about MethodBase.GetCurrentMethod(). I also learned that not only can I create a stack trace, I can create only the exact frame I need if I want.
To use this inside a property, just take a .Substring(4) to remove the 'set_' or 'get_'.
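For example (a small sketch of that trick inside a property):
using System.Reflection;

public string SomeProperty
{
    get
    {
        // Inside the getter, GetCurrentMethod().Name is "get_SomeProperty";
        // Substring(4) strips the "get_"/"set_" prefix.
        string propertyName = MethodBase.GetCurrentMethod().Name.Substring(4);
        return propertyName;
    }
}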
A: EDIT: MethodBase is probably a better way to just get the method you're in (as opposed to the whole calling stack). I'd still be concerned about inlining however.
You can use a StackTrace within the method:
StackTrace st = new StackTrace(true);
And the look at the frames:
// The first frame will be the method you want (However, see caution below)
st.GetFrames();
However, be aware that if the method is inlined, you will not be inside the method you think you are. You can use an attribute to prevent inlining:
[MethodImpl(MethodImplOptions.NoInlining)]
A: The simple way to deal with this is:
System.Reflection.MethodBase.GetCurrentMethod().DeclaringType.FullName + "." + System.Reflection.MethodBase.GetCurrentMethod().Name;
If System.Reflection is included in your using directives:
MethodBase.GetCurrentMethod().DeclaringType.FullName + "." + MethodBase.GetCurrentMethod().Name;
A: For Async Methods, you can use:
//using System.Reflection;
var myMethodName = MethodBase
.GetCurrentMethod()
.DeclaringType
.Name
.Substring(1)
.Split('>')[0];
A: The snippet provided by Lex was a little long, so I'm pointing out the important part since no one else used the exact same technique:
string MethodName = new StackFrame(0).GetMethod().Name;
This should return identical results to the MethodBase.GetCurrentMethod().Name technique, but it's still worth pointing out because I could implement this once in its own method, using index 1 for the previous method, and call it from a number of different properties. Also, it only returns one frame rather than the entire stack trace:
private string GetPropertyName()
{ //.SubString(4) strips the property prefix (get|set) from the name
return new StackFrame(1).GetMethod().Name.Substring(4);
}
It's a one-liner, too ;)
A: How about this:
StackFrame frame = new StackFrame(1);
string methodName = frame.GetMethod().Name; // name of the calling method (frame 1 is one level up)
MethodBase method = frame.GetMethod();
string className = method.DeclaringType.Name; // name of its declaring class
A: A slightly more resilient solution (circa 2021/2022):
namespace my {
public struct notmacros
{
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static string
whoami( [CallerMemberName] string caller_name = null)
{
if (string.IsNullOrEmpty(caller_name))
return "unknown";
if (string.IsNullOrWhiteSpace(caller_name))
return "unknown";
return caller_name;
}
}
} // my namespace
Usage
using static my.notmacros;
// somewhere appropriate
var my_name = whoami() ;
.NET fiddle link for the actual demo:
https://dotnetfiddle.net/moK73n
Please note the compiler requirement: .NET 6
A: To handle both async and plain old method calls, I did this.
In my application, it's only getting called from exception handlers, so perf is not a concern.
[MethodImpl(MethodImplOptions.NoInlining)]
public static string GetCurrentMethodName()
{
var st = new StackTrace();
var sf = st.GetFrame(1);
string name = sf.GetMethod().Name;
if (name.Equals("MoveNext"))
{
// We're inside an async method
name = sf.GetMethod().ReflectedType.Name
.Split(new char[] { '<', '>' }, StringSplitOptions.RemoveEmptyEntries)[0];
}
return name;
}
A: I think you should be able to get that from creating a StackTrace. Or, as @edg and @Lars Mæhlum mention, MethodBase.GetCurrentMethod()
A: I just did this with a simple static class:
using System.Runtime.CompilerServices;
.
.
.
public static class MyMethodName
{
public static string Show([CallerMemberName] string name = "")
{
return name;
}
}
then in your code:
private void button1_Click(object sender, EventArgs e)
{
textBox1.Text = MyMethodName.Show();
}
private void button2_Click(object sender, EventArgs e)
{
textBox1.Text = MyMethodName.Show();
}
A: using System;
public class Program
{
public static void Main()
{
Console.WriteLine("1: {0} {1}", System.Reflection.MethodBase.GetCurrentMethod().Name, System.Reflection.MethodBase.GetCurrentMethod().ReflectedType);
OtherMethod();
}
public static void OtherMethod()
{
Console.WriteLine("2: {0} {1}", System.Reflection.MethodBase.GetCurrentMethod().Name, System.Reflection.MethodBase.GetCurrentMethod().ReflectedType);
}
}
Output:
1: Main Program
2: OtherMethod Program
A: For non-async methods one can use
System.Reflection.MethodBase.GetCurrentMethod().Name;
https://learn.microsoft.com/en-us/dotnet/api/system.reflection.methodbase.getcurrentmethod
Please remember that for async methods it will return "MoveNext".
A: Try this inside the Main method in an empty console program:
MethodBase method = MethodBase.GetCurrentMethod();
Console.WriteLine(method.Name);
Console Output:
Main
A: Comparing ways to get the method name -- using an arbitrary timing construct in LinqPad:
CODE
void Main()
{
// from http://blogs.msdn.com/b/webdevelopertips/archive/2009/06/23/tip-83-did-you-know-you-can-get-the-name-of-the-calling-method-from-the-stack-using-reflection.aspx
// and https://stackoverflow.com/questions/2652460/c-sharp-how-to-get-the-name-of-the-current-method-from-code
var fn = new methods();
fn.reflection().Dump("reflection");
fn.stacktrace().Dump("stacktrace");
fn.inlineconstant().Dump("inlineconstant");
fn.constant().Dump("constant");
fn.expr().Dump("expr");
fn.exprmember().Dump("exprmember");
fn.callermember().Dump("callermember");
new Perf {
{ "reflection", n => fn.reflection() },
{ "stacktrace", n => fn.stacktrace() },
{ "inlineconstant", n => fn.inlineconstant() },
{ "constant", n => fn.constant() },
{ "expr", n => fn.expr() },
{ "exprmember", n => fn.exprmember() },
{ "callermember", n => fn.callermember() },
}.Vs("Method name retrieval");
}
// Define other methods and classes here
class methods {
public string reflection() {
return System.Reflection.MethodBase.GetCurrentMethod().Name;
}
public string stacktrace() {
return new StackTrace().GetFrame(0).GetMethod().Name;
}
public string inlineconstant() {
return "inlineconstant";
}
const string CONSTANT_NAME = "constant";
public string constant() {
return CONSTANT_NAME;
}
public string expr() {
Expression<Func<methods, string>> ex = e => e.expr();
return ex.ToString();
}
public string exprmember() {
return expressionName<methods,string>(e => e.exprmember);
}
protected string expressionName<T,P>(Expression<Func<T,Func<P>>> action) {
// https://stackoverflow.com/a/9015598/1037948
return ((((action.Body as UnaryExpression).Operand as MethodCallExpression).Object as ConstantExpression).Value as MethodInfo).Name;
}
public string callermember([CallerMemberName]string name = null) {
return name;
}
}
RESULTS
reflection
reflection
stacktrace
stacktrace
inlineconstant
inlineconstant
constant
constant
expr
e => e.expr()
exprmember
exprmember
callermember
Main
Method name retrieval: (reflection) vs (stacktrace) vs (inlineconstant) vs (constant) vs (expr) vs (exprmember) vs (callermember)
154673 ticks elapsed ( 15.4673 ms) - reflection
2588601 ticks elapsed (258.8601 ms) - stacktrace
1985 ticks elapsed ( 0.1985 ms) - inlineconstant
1385 ticks elapsed ( 0.1385 ms) - constant
1366706 ticks elapsed (136.6706 ms) - expr
775160 ticks elapsed ( 77.516 ms) - exprmember
2073 ticks elapsed ( 0.2073 ms) - callermember
>> winner: constant
Note that the expr and callermember methods aren't quite "right". And the numbers confirm the earlier comment that reflection is ~15x faster than stacktrace.
A: As of .NET 4.5, you can also use [CallerMemberName].
Example: a property setter (to answer part 2):
protected void SetProperty<T>(T value, [CallerMemberName] string property = null)
{
this.propertyValues[property] = value;
OnPropertyChanged(property);
}
public string SomeProperty
{
set { SetProperty(value); }
}
The compiler will supply matching string literals at call sites, so there is basically no performance overhead.
A: Yes definitely.
If you want an object to manipulate I actually use a function like this:
public static T CreateWrapper<T>(Exception innerException, params object[] parameterValues) where T : Exception, new()
{
if (parameterValues == null)
{
parameterValues = new object[0];
}
Exception exception = null;
StringBuilder builder = new StringBuilder();
MethodBase method = new StackFrame(2).GetMethod();
ParameterInfo[] parameters = method.GetParameters();
builder.AppendFormat(CultureInfo.InvariantCulture, ExceptionFormat, new object[] { method.DeclaringType.Name, method.Name });
if ((parameters.Length > 0) || (parameterValues.Length > 0))
{
builder.Append(GetParameterList(parameters, parameterValues));
}
exception = (Exception)Activator.CreateInstance(typeof(T), new object[] { builder.ToString(), innerException });
return (T)exception;
}
This line:
MethodBase method = new StackFrame(2).GetMethod();
Walks up the stack frame to find the calling method then we use reflection to obtain parameter information values passed to it for a generic error reporting function. To get the current method simply use current stack frame (1) instead.
As others have said for the current methods name you can also use:
MethodBase.GetCurrentMethod()
I prefer walking the stack because if you look internally at that method, it simply creates a StackCrawlMark anyway. Addressing the stack directly seems clearer to me.
Post 4.5 you can now use [CallerMemberName] as part of the method parameters to get the method name as a string; this may help in some scenarios (though not really in the example above).
public void Foo ([CallerMemberName] string methodName = null)
This seemed to be mainly a solution for INotifyPropertyChanged support where previously you had strings littered all through your event code.
A: Add this method somewhere and call it with no arguments!
public static string GetCurrentMethodName([System.Runtime.CompilerServices.CallerMemberName] string name = "")
{
return name;
}
A: Try this...
/// <summary>
/// Return the full name of method
/// </summary>
/// <param name="obj">Class that calls this method (use Report(this))</param>
/// <returns></returns>
public string Report(object obj)
{
var reflectedType = new StackTrace().GetFrame(1).GetMethod().ReflectedType;
if (reflectedType == null) return null;
var i = reflectedType.FullName;
var ii = new StackTrace().GetFrame(1).GetMethod().Name;
return string.Concat(i, ".", ii);
}
A: Here is what I'm using in a static helper class for my async methods.
public static string GetMethodName(string rawName)
{
return rawName.Substring(1, rawName.IndexOf('>') - 1);
}
Calling it:
string methodName = StringExtensionMethods.GetMethodName(MethodBase.GetCurrentMethod().ReflectedType.Name ?? "");
HTH
A: new StackTrace().ToString().Split("\r\n",StringSplitOptions.RemoveEmptyEntries)[0].Replace("at ","").Trim()
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "216"
} |
Q: Real-time wmv video encoding in C# How to encode video on the fly and send it through the network from C#?
Can't find a suitable library. I need to encode in WMV and don't mind if the actual encoding is made in C++ as long as the library has a .NET assembly available.
Thanks
A: I'm aware of ffmpeg, but it is native C code only. If you're ok with interoperability this may be your ticket.
Edit: It turns out someone already wrapped this in a .NET assembly. It's called FFlib.NET.
A: I use the Windows Media Format SDK, although I admit I use it directly in C++ native code. I believe it can be called from managed code.
This is now included as part of the Windows SDK here:
http://msdn.microsoft.com/en-us/windows/bb190307.aspx
(or you can download it separately - see the list in the left-hand panel)
Be warned, it is a fair bit to get your head around. However, there are sample code resources which should assist.
A: Depending on what you are encoding (size, framerate, hardware, etc) real-time encoding may not even be possible. Video encoding is VERY CPU intensive.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Searching subversion history (full text) Is there a way to perform a full text search of a subversion repository, including all the history?
For example, I've written a feature that I used somewhere, but then it wasn't needed, so I svn rm'd the files, but now I need to find it again to use it for something else. The svn log probably says something like "removed unused stuff", and there's loads of checkins like that.
Edit 2016-04-15: Please note that what is asked here by the term "full text search", is to search the actual diffs of the commit history, and not filenames and/or commit messages. I'm pointing this out because the author's phrasing above does not reflect that very well - since in his example he might as well be only looking for a filename and/or commit message. Hence a lot of the svn log answers and comments.
A: I have been looking for something similar. The best I have come up with is OpenGrok. I have not tried to implement it yet, but it sounds promising.
A: git svn clone <svn url>
git log -G<some regex>
A: While not free, you might take a look at Fisheye from Atlassian, the same folks that bring you JIRA. It does full text search against SVN with many other useful features.
http://www.atlassian.com/software/fisheye/
A: I just ran into this problem and
svnadmin dump <repo location> |grep -i <search term>
did the job for me. Returned the revision of the first occurrence and quoted the line I was looking for.
A: Update April, 2022
VisualSVN Server 5.0 comes with a new full-text search feature that allows you to search through the contents and history of your repositories in the web interface. Try out the feature on the demo server.
Old answer
svn log in Apache Subversion 1.8 supports a new --search option, so you can search Subversion repository history log messages without using 3rd-party tools and scripts.
svn log --search searches in author, date, log message text and list of changed paths.
See SVNBook | svn log command-line reference.
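Usage is a one-liner (the repository URL and search term here are just placeholders):
svn log --search "removed unused stuff" http://svn.example.com/repo/trunk
Note that --search only matches log metadata (author, date, message, changed paths); it does not search the diffs themselves.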
A: I was looking for the same thing and found this:
http://svn-search.sourceforge.net/
A: Use unix utility like grep:
svn log -l <commit limit> --diff | grep -C <5 or more lines> <search message>
or you can save the result of the svn log somewhere, then search through it
A: If you are running Windows have a look at SvnQuery. It maintains a full text index of local or remote repositories. Every document ever committed to a repository gets indexed. You can do google-like queries from a simple web interface.
A: I'm using a small shellscript, but this only works for a single file. You can ofcourse combine this with find to include more files.
#!/bin/bash
for REV in `svn log $1 | grep ^r[0-9] | awk '{print $1}'`; do
svn cat $1 -r $REV | grep -q $2
if [ $? -eq 0 ]; then
echo "$REV"
fi
done
If you really want to search everything, use the svnadmin dump command and grep through that.
A: I don't have any experience with it, but SupoSE (open source, written in Java) is a tool designed to do exactly this.
A: The best way that I've found to do this is with less:
svn log --verbose | less
Once less comes up with output, you can hit / to search, like VIM.
Edit:
According to the author, he wants to search more than just the messages and the file names. In which case you will be required to ghetto-hack it together with something like:
svn diff -r0:HEAD | less
You can also substitute grep or something else to do the searching for you. If you want to use this on a sub-directory of the repository, you will need to use svn log to discern the first revision in which that directory existed, and use that revision instead of 0.
A: svn log -v [repository] > somefile.log
For diffs you can use the --diff option:
svn log -v --diff [repository] > somefile.log
then use vim or nano or whatever you like using, and do a search for what you're looking for. You'll find it pretty quickly.
It's not a fancy script or anything automated. But it works.
A: I usually do what Jack M says (use svn log --verbose) but I pipe to grep instead of less.
A: I wrote this as a cygwin bash script to solve this problem.
However it requires that the search term is currently within the filesystem file. For all the files that match the filesystem grep, an grep of all the svn diffs for that file are then performed. Not perfect, but should be good enough for most usage. Hope this helps.
/usr/local/bin/svngrep
#!/bin/bash
# Usage: svngrep $regex @grep_args
regex="$@"
pattern=`echo $regex | perl -p -e 's/--?\S+//g; s/^\\s+//;'` # strip --args
if [[ ! $regex ]]; then
echo "Usage: svngrep \$regex @grep_args"
else
for file in `grep -irl --no-messages --exclude=\*.tmp --exclude=\.svn $regex ./`; do
revs="`svnrevisions $file`";
for rev in $revs; do
diff=`svn diff $file -r$[rev-1]:$rev \
--diff-cmd /usr/bin/diff -x "-Ew -U5 --strip-trailing-cr" 2> /dev/null`
context=`echo "$diff" \
| grep -i --color=none -U5 "^\(+\|-\).*$pattern" \
| grep -i --color=always -U5 $pattern \
| grep -v '^+++\|^---\|^===\|^Index: ' \
`
if [[ $context ]]; then
info=`echo "$diff" | grep '^+++\|^---'`
log=`svn log $file -r$rev`
#author=`svn info -r$rev | awk '/Last Changed Author:/ { print $4 }'`;
echo "========================================================================"
echo "========================================================================"
echo "$log"
echo "$info"
echo "$context"
echo
fi;
done;
done;
fi
/usr/local/bin/svnrevisions
#!/bin/sh
# Usage: svnrevisions $file
# Output: list of fully numeric svn revisions (without the r), one per line
file="$@"
svn log "$file" 2> /dev/null | awk '/^r[[:digit:]]+ \|/ { sub(/^r/,"",$1); print $1 }'
A: I came across this bash script, but I have not tried it.
A: In case you are trying to determine which revision is responsible for a specific line of code, you are probably looking for:
svn blame
Credit: original answer
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "144"
} |
Q: Connecting private IPs A friend of mine told me there was a way to connect two private IPs without using a proxy server. The idea was that both computers connected to a public server and somehow the server joined the private connections without using any more bandwidth.
Is this true? What is this technique called?
A: There is a technique called "Hole Punching" that works well with "Cone" NAT (Cone is a technical family of router). It's not a 100% sure technique; today, it works well with UDP on about 80% of routers.
There are some library implementations of Hole Punching: STUN (wikipedia)
A: This is true. It's the way FogCreek Copilot works
Take a look at item 2 on Joel's Copilot 2.0 post.
A: Your friend might be referring to VIP's (Virtual IP's). From my understanding a VIP is usually controlled by a piece of hardware like a router and then redirects to one of your 2 private IP's. We use this with a cluster of machines behind a VIP. I'm not a network guy so that's pretty much the extent of my knowledge.
A: If you're looking at joining two private networks (two networks of machines behind a NAT), the best way to do this is with a VPN. There are many pieces of equipment available to accomplish this.
A: I'm not sure it's what you're thinking of, but you could do something similar with ssh tunneling. Let's say you wanted userA on 10.1.2.3/24 to connect a mysql server on userB's on 192.168.0.3/24. There's no direct network connectivity between the two networks, but both machines can connect to serverA on the public internet.
userB runs this command:
ssh -R localhost:13306:localhost:3306 username@serverA
userA runs this command:
ssh -L 3306:localhost:13306 username@serverA
Now userA can use whatever tool they please to connect to mysql on localhost and the cxn will be tunneled through serverA and to the mysql daemon running on localhost on userB's machine.
(hopefully no typos, typed with one hand as I hold my two day old daughter =))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: SQL: Select like column from two tables I have a database with two tables (Table1 and Table2). They both have a common column [ColumnA] which is an nvarchar.
How can I select this column from both tables and return it as a single column in my result set?
So I'm looking for something like:
ColumnA in Table1:
a
b
c
ColumnA in Table2:
d
e
f
Result set should be:
a
b
c
d
e
f
A: Do you care if you get dups or not?
UNION will be slower than UNION ALL because UNION will filter out dups
A: SELECT ColumnA FROM Table1 UNION SELECT ColumnA FROM Table2 ORDER BY 1
Also, if you know the contents of Table1 and Table2 will NEVER overlap, you can use UNION ALL in place of UNION. Saves a little bit of resources that way.
-- Kevin Fairchild
A: Use the UNION operator:
SELECT ColumnA FROM Table1
UNION
SELECT ColumnA FROM Table2
A: A note on duplicates:
SELECT DISTINCT ColumnA FROM Table1
UNION
SELECT DISTINCT ColumnA FROM Table2
The DISTINCT here is actually redundant: UNION already removes duplicates across the combined result. It is UNION ALL that would return multiple rows if, say, 'd' appeared in both Table1 and Table2.
A: You can use a union select:
Select columnA from table1 union select columnA from table2
A: SELECT Table1.*, Table2.d, Table2.e, Table2.f
FROM Table1 JOIN Table2 ON Table1.a = Table2.a
Or am I misunderstanding your question?
Edit: It appears I did.
A: I believe it's:
SELECT columna FROM table1 UNION SELECT columna FROM table2;
A: In Oracle (at least) there is UNION and UNION ALL. UNION ALL will return all results from both sets even if there are duplicates, whereas UNION will return the distinct results from both sets.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Looking for a simple JavaScript example that updates DOM I am looking for a simple JavaScript example that updates DOM.
Any suggestions?
A: Here is a short pure-javascript example. Assume you have a div with the id "maincontent".
var newnode = document.createTextNode('Here is some text.');
document.getElementById('maincontent').appendChild(newnode);
Of course, things are a lot easier (especially when you want to do more complicated things) with jQuery.
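For instance, the same thing in jQuery is a one-liner (assuming the library is loaded):
// Append text to the element with id "maincontent"
$('#maincontent').append('Here is some text.');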
A: @Ravi
Here's a working example of your code:
<html>
<head>
<title>Font Detect please</title>
<script src="prototype.js" type="text/javascript"></script>
<script type="text/javascript">
function changeTD()
{
$('Myanmar3').innerHTML = 'False';
}
</script>
</head>
<body>
<table border="1">
<tr><td>Font</td><td>Installed</td></tr>
<tr><td>Myanmar3</td><td id="Myanmar3">True</td></tr>
</table>
<a href="javascript:void(0);" onclick="changeTD();">Click Me</a>
</body>
</html>
You'll notice that I added a little link that you have to click to actually make the change. I thought this might make it easier to try out for real.
A: I believe that this tutorial on jQuery has an example that might help you: http://docs.jquery.com/Tutorials:Getting_Started_with_jQuery
A: A more specific question might give more helpful results, but here's a simple pair of snippets that shows and later updates text in a status container element.
// give some visual cue that you're waiting
container.appendChild( document.createTextNode( "Getting stuff from remote server..." ) );
// then later...
// update request status
container.replaceChild( document.createTextNode( "Done." ), container.firstChild );
A: <html>
<head>
<title>Font Detect please</title>
<script src="prototype.js" type="text/javascript"></script>
<script type="text/javascript">
$('Myanmar3').update('False');
$('Myanmar3').innerHTML;
</script>
</head>
<body>
<table border="1">
<tr><td>Font</td><td>Installed</td></tr>
<tr><td>Myanmar3</td><td id=Myanmar3>True</td></tr>
</table>
</body>
</html>
I have some simple code like the above and am trying to change the result True to False via JavaScript using Prototype. What might I be doing wrong?
Edit: Got it. I didn't call it. :D
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I best generate a CSV (comma-delimited text file) for download with ASP.NET? This is what I've got. It works. But, is there a simpler or better way?
One an ASPX page, I've got the download link...
<asp:HyperLink ID="HyperLinkDownload" runat="server" NavigateUrl="~/Download.aspx">Download as CSV file</asp:HyperLink>
And then I've got the Download.aspx.vb Code Behind...
Public Partial Class Download
Inherits System.Web.UI.Page
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
'set header
Response.Clear()
Response.ContentType = "text/csv"
Dim FileName As String = "books.csv"
Response.AppendHeader("Content-Disposition", "attachment;filename=" + FileName)
'generate file content
Dim db As New bookDevelopmentDataContext
Dim Allbooks = From b In db.books _
Order By b.Added _
Select b
Dim CsvFile As New StringBuilder
CsvFile.AppendLine(CsvHeader())
For Each b As Book In Allbooks
CsvFile.AppendLine(bookString(b))
Next
'write the file
Response.Write(CsvFile.ToString)
Response.End()
End Sub
Function CsvHeader() As String
Dim CsvLine As New StringBuilder
CsvLine.Append("Published,")
CsvLine.Append("Title,")
CsvLine.Append("Author,")
CsvLine.Append("Price")
Return CsvLine.ToString
End Function
Function bookString(ByVal b As Book) As String
Dim CsvLine As New StringBuilder
CsvLine.Append(b.Published.ToShortDateString + ",")
CsvLine.Append(b.Title.Replace(",", "") + ",")
CsvLine.Append(b.Author.Replace(",", "") + ",")
CsvLine.Append(Format(b.Price, "c").Replace(",", ""))
Return CsvLine.ToString
End Function
End Class
A: I pass all my CSV data through a function like this:
Function PrepForCSV(ByVal value As String) As String
Return String.Format("""{0}""", value.Replace("""", """"""))
End Function
Also, if you're not serving up html you probably want an http handler (.ashx file) rather than a full web page. If you create a new handler in Visual Studio, odds are you could just copy past your existing code into the main method and it will just work, with a small performance boost for your efforts.
A: You can create the equivalent of bookString() in the query itself. Here is what I think would be a simpler way.
protected void Page_Load(object sender, EventArgs e)
{
using (var db = new bookDevelopmentDataContext())
{
string fileName = "book.csv";
var q = from b in db.books
select string.Format("{0:d},\"{1}\",\"{2}\",{3:F2}", b.Published, b.Title.Replace("\"", "\"\""), b.Author.Replace("\"", "\"\""), b.Price);
string outstring = string.Join(Environment.NewLine, q.ToArray());
Response.Clear();
Response.ClearHeaders();
Response.ContentType = "text/csv";
Response.AppendHeader("Content-Disposition", string.Format("attachment;filename={0}", fileName));
Response.Write("Published,Title,Author,Price," + outstring);
Response.End();
}
}
A: If you want a delimited-value converter, there is a 3rd-party open-source library called FileHelpers. I'm not sure what open-source license it is under, but it has helped me quite a lot.
A: CSV formatting has some gotchas. Have you asked yourself these questions:
*
*Does any of my data have embedded commas?
*Does any of my data have embedded double-quotes?
*Does any of my data have newlines?
*Do I need to support Unicode strings?
I see several problems in your code above. The comma thing first of all... you are stripping commas:
CsvLine.Append(Format(b.Price, "c").Replace(",", ""))
Why? In CSV, you should be surrounding anything which has commas with quotes:
CsvLine.Append(String.Format("\"{0:c}\"", b.Price))
(or something like that... my VB is not very good). If you're not sure whether there are commas, just put quotes around it. If there are quotes in the string, you need to escape them by doubling them. " becomes "".
b.Title.Replace("\"", "\"\"")
Then surround this by quotes if you want. If there are newlines in your string, you need to surround the string with quotes... yes, literal newlines are allowed in CSV files. It looks weird to humans, but it's all good.
A good CSV writer requires some thought. A good CSV reader (parser) is just plain hard (and no, regex not good enough for parsing CSV... it will only get you about 95% of the way there).
And then there is Unicode... or more generally I18N (Internationalization) issues. For example, you are stripping commas out of a formatted price. But that's assuming the price is formatted as you expect it in the US. In France, the number formatting is reversed (periods used instead of commas, and vice versa). Bottom line, use culture-agnostic formatting wherever possible.
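A sketch of culture-agnostic formatting (CultureInfo lives in System.Globalization; b is the same book object as in the question):
using System.Globalization;

// Always produces "1234.50" regardless of the server's locale,
// instead of a culture-dependent string like "1 234,50".
string price = b.Price.ToString("F2", CultureInfo.InvariantCulture);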
While the issue here is generating CSV, inevitably you will need to parse CSV. In .NET, the best parser I have found (for free) is Fast CSV Reader on CodeProject. I've actually used it in production code and it is really really fast, and very easy to use!
A: In addition to what Simon said, you may want to read the CSV how-to guide and make sure your output doesn't run across any of the gotchas.
To clarify something Simon said:
Then surround this by quotes if you want
Fields that contain doubled up double quotes ("") will need to be completely surrounded with double quotes. There shouldn't be any harm in just wrapping all fields with double quotes, unless you specifically want the parser to strip out leading and trailing whitespace (instead of trimming it yourself).
A: There's a lot of overhead associated with the Page class. Since you're just spitting out a CSV file and have no need for postback, server controls, caching, or the rest of it, you should make this into a handler with an .ashx extension. See here.
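A minimal sketch of such a handler (the class name and the BuildCsv helper are hypothetical; BuildCsv stands in for the CSV-building code from the question):
using System.Web;

public class BooksCsvHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/csv";
        context.Response.AppendHeader("Content-Disposition", "attachment;filename=books.csv");
        context.Response.Write(BuildCsv()); // same CSV generation as in the question
    }

    public bool IsReusable
    {
        get { return false; }
    }

    private string BuildCsv()
    {
        // build and return the CSV string here
        return "Published,Title,Author,Price";
    }
}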
A: I use the following method when building a CSV file from a DataTable. ControllerContext is just the response stream object where the file is written to. For you it is just going to be the Response object.
public override void ExecuteResult(ControllerContext context)
{
StringBuilder csv = new StringBuilder(10 * Table.Rows.Count * Table.Columns.Count);
for (int c = 0; c < Table.Columns.Count; c++)
{
if (c > 0)
csv.Append(",");
DataColumn dc = Table.Columns[c];
string columnTitleCleaned = CleanCSVString(dc.ColumnName);
csv.Append(columnTitleCleaned);
}
csv.Append(Environment.NewLine);
foreach (DataRow dr in Table.Rows)
{
StringBuilder csvRow = new StringBuilder();
for(int c = 0; c < Table.Columns.Count; c++)
{
if(c != 0)
csvRow.Append(",");
object columnValue = dr[c];
if (columnValue == null)
csvRow.Append("");
else
{
string columnStringValue = columnValue.ToString();
string cleanedColumnValue = CleanCSVString(columnStringValue);
if (columnValue.GetType() == typeof(string) && !columnStringValue.Contains(","))
{
cleanedColumnValue = "=" + cleanedColumnValue; // Prevents a number stored in a string from being shown as 8888E+24 in Excel. Example use is the AccountNum field in CI that looks like a number but is really a string.
}
csvRow.Append(cleanedColumnValue);
}
}
csv.AppendLine(csvRow.ToString());
}
HttpResponseBase response = context.HttpContext.Response;
response.ContentType = "text/csv";
response.AppendHeader("Content-Disposition", "attachment;filename=" + this.FileName);
response.Write(csv.ToString());
}
protected string CleanCSVString(string input)
{
string output = "\"" + input.Replace("\"", "\"\"").Replace("\r\n", " ").Replace("\r", " ").Replace("\n", "") + "\"";
return output;
}
A: Looking mostly good except in your function "BookString()" you should pass all those strings through a small function like this first:
Private Function formatForCSV(stringToProcess As String) As String
If stringToProcess.Contains("""") Or stringToProcess.Contains(",") Then
stringToProcess = String.Format("""{0}""", stringToProcess.Replace("""", """"""))
End If
Return stringToProcess
End Function
'So, lines like this:
CsvLine.Append(b.Title.Replace(",", "") + ",")
'would be lines like this instead:
CsvLine.Append(formatForCSV(b.Title) + ",")
The function will format your strings well for CSV. It replaces quotes with double quotes and add quotes around the string if there are either quotes or commas in the string.
Note that it doesn't account for newlines, so it can only safely guarantee good CSV output for strings that you know are free of newlines (inputs from simple one-line text forms, etc.).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Direct TCP/IP connections in P2P apps From a Joel's post on Copilot:
Direct Connect! We’ve always done
everything we can to make sure that
Fog Creek Copilot can connect in any
networking situation, no matter what
firewalls or NATs are in place. To
make this happen, both parties make
outbound connections to our server,
which relays traffic on their behalf.
Well, in many cases, this isn’t
necessary. So version 2.0 does
something rather clever: it sets up
the initial connection through our
servers, so you get connected right
away with 100% reliability. But then
once you’re all connected, it quietly,
in the background, looks for a way to
make a direct connection. If it can’t,
no big deal: you just keep relaying
through our server. If you can make a
direct peer-to-peer connection, it
silently shifts your data onto the
direct connection. You won’t notice
anything except, probably, much faster
communication.
How do they change the server connection to a P2P connection?
A: It's pretty tricky and interesting. I'm sure I have some details wrong, but the overview is this:
The programs can already talk to each other through Joel's server, so they can exchange information with each other and Joel's server. Further, Joel has their external IP addresses, and they give Joel information about their internal IP addresses.
They decide to try this hole punch technique. Computer A initiates a TCP connection with Computer B using B's external IP address. It won't go through, but what it does is tell A's router that it needs to allow incoming packets from B on a given port.
Computer B does the same thing, but its message gets through to A since A's router opened a port/IP combination that matches what B sent (there's some port magic that happens here; this is non-trivial, but doable).
B's router remembers that B initiated a connection with A on a given port and IP, and so A's packets now flow into B past their router correctly as well.
So it's actually pretty straight forward, but the implementation has details, especially regarding how ports are given to new TCP connections, and how NAT routers typically deal with TCP requests and how they map to external ports. These details are the interesting, and difficult, bit.
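To make the flow concrete, here is a minimal sketch of the simpler UDP variant in Python (it assumes the rendezvous server has already told each peer the other's external IP:port, and omits the retransmission you would need in practice, since the first packet is often dropped by the peer's NAT):
import socket

def punch(local_port, peer_addr):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", local_port))
    s.sendto(b"punch", peer_addr)  # opens an outbound mapping in our own NAT
    data, addr = s.recvfrom(1024)  # the peer's packet can now traverse our NAT
    return s, addr
Both peers run this at roughly the same time; once each side has received a packet from the other, the direct channel is open.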
-Adam
A: There is a technique called "Hole Punching" that works well with "Cone" NAT (Cone is a technical family of router). It's not a 100% sure technique; today, it works well with UDP on about 80% of routers.
There are some library implementations of Hole Punching: STUN (wikipedia)
A: I believe the simple version is that they drop the server connection and replace it with the P2P connection.
Something along the lines of:
*
*Machine1 connects to copilot's servers.
*Machine2 subsequently connects, and they begin screen sharing.
*Machine2 opens a port intended for Machine1 to connect to.
*Machine1 tries to connect to the now open port on Machine2.
If this connection is established:
*
*The connection to copilot's servers is severed.
*Data is instead transferred over the direct (P2P) connection between the two machines.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What are good alternative data formats to XML? XML, granted, is very useful, but can be quite verbose. What alternatives are there and are they specialised for any particular purpose? Library support to interrogate the contents easily is a big plus point.
A: Don't forget about YAML!
JSON seems to have better support though. For example, the Prototype JS library has excellent built-in JSON functions.
A: I wouldn't dismiss plain text, like CSV or tab-delimited.
A: HDF5 is a very compact data format with some characteristics that are similar to XML. The .NET libraries leave a lot to be desired, but the format scales very well both in terms of size and performance.
A: My work with XML is almost exclusively with document-centric XML, which must model long sequences of arbitrarily nested structures. I haven't used JSON yet, but my impression is that it is cumbersome to use with document-like data, but well-adapted and even elegant for use with record-like data. Consider the shape of your data when making your decision.
A: You could try Google's protobufs. It's much faster than XML. There are libraries for it in C, C++, C#, Java and Python (there are alpha versions for Ruby and Perl). But it is binary.
A: S-Expressions work great if you don't need to apply attributes to elements. Another alternative is YAML.
A: XML is often used for configuration, and in this case there are some other simple storage formats that are often used (less document oriented):
*
*.property files
*INI files
There's various ways for reading and writing both, depending on platform and language.
A: What do you want to do with the data? Store it? Pass it around? Display it? These questions should drive your search for an appropriate technology. Simply asking how you should format your data is like asking what language you should program in, without specifying what you want to accomplish.
For most data tasks, well Dr. Codd has the cure: http://en.wikipedia.org/wiki/Edgar_F._Codd. Databases should be able to do just about anything you have in mind.
If you're passing it around, I advocate plain text. When you roll your own binary format your data goes away when your parser goes away.
With plain text, the deeper question is where to put the metadata. Should it be external to the data file, or internal ("self-describing").
For example, XML is plain text, but so is source code. With a source file, there is a specification that goes in to great detail as to the syntax and semantics, while XML is supposed to be self-describing. The problem is that it isn't. Furthermore it evolved right out of document presentation and markup, but is now being abused for all sorts of data serialization, transfer, and storage.
A: TOML is the new big thing. It has the niceness of YAML without the big spec. It extends a common and familiar configuration file format. It is directly analogous to (and translatable to) JSON. It has support in all the big languages. Created by GitHub co-founder/president Tom and narcissistically named. It's awesome. Give it a shot!
Sample TOML:
# This is a TOML document. Boom.
title = "TOML Example"
[owner]
name = "Tom Preston-Werner"
organization = "GitHub"
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
dob = 1979-05-27T07:32:00Z # First class dates? Why not?
[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true
[servers]
  # You can indent as you please. Tabs or spaces. TOML don't care.
  [servers.alpha]
  ip = "10.0.0.1"
  dc = "eqdc10"
  [servers.beta]
  ip = "10.0.0.2"
  dc = "eqdc10"
[clients]
data = [ ["gamma", "delta"], [1, 2] ]
# Line breaks are OK when inside arrays
hosts = [
  "alpha",
  "omega"
]
A: If someone looking up less verbose alternative to XML, which is more or less isomorphic to XML, then there is AXON. In order to explain consider examples of equivalent representations in both XML and AXON. There is also python library pyaxon that support AXON format.
XML
<person>
  <name>Alex</name>
  <age>34</age>
  <email>[email protected]</email>
</person>
AXON
person {
  name {"Alex"}
  age {34}
  email {"[email protected]"}}
XML
<memo date="2008-02-14">
  <from>
    <name>The Whole World</name><email>[email protected]</email>
  </from>
  <to>
    <name>Dawg</name><email>[email protected]</email>
  </to>
  <message>
    Dear sir, you won the internet. http://is.gd/fh0
  </message>
</memo>
AXON
memo {
  date:2008-02-14
  from {
    name{"The Whole World"} email{"[email protected]"}}
  to {
    name{"Dawg"} email{"[email protected]"}}
  message {"Dear sir, you won the internet. http://is.gd/fh0"}
}
XML
<club>
  <players>
    <player id="kramnik"
            name="Vladimir Kramnik"
            rating="2700"
            status="GM" />
    <player id="fritz"
            name="Deep Fritz"
            rating="2700"
            status="Computer" />
    <player id="mertz"
            name="David Mertz"
            rating="1400"
            status="Amateur" />
  </players>
  <matches>
    <match>
      <Date>2002-10-04</Date>
      <White refid="fritz" />
      <Black refid="kramnik" />
      <Result>Draw</Result>
    </match>
    <match>
      <Date>2002-10-06</Date>
      <White refid="kramnik" />
      <Black refid="fritz" />
      <Result>White</Result>
    </match>
  </matches>
</club>
AXON
club {
  players {
    player {
      id:"kramnik"
      name:"Vladimir Kramnik"
      rating:2700
      status:"GM"}
    player {
      id:"fritz"
      name:"Deep Fritz"
      rating:2700
      status:"Computer"}
    player {
      id:"mertz"
      name:"David Mertz"
      rating:1400
      status:"Amateur"}}
  matches {
    match {
      Date{2002-10-04}
      White{refid:"fritz"}
      Black{refid:"kramnik"}
      Result{"Draw"}}
    match {
      Date{2002-10-06}
      White{refid:"kramnik"}
      Black{refid:"fritz"}
      Result{"White"}}}}
A: There seems to be a lot of multi-platform support for JSON.
A: Jeff's article on The Angle Bracket Tax summarizes a number of alternatives (well, mainly YAML), and led me to the wiki article on lightweight markup languages.
Update: Although YAML is a possible "alternative to XML" for some applications, the two are not, as I first thought, isomorphic.
Indeed, it "ain't markup language."
Furthermore, YAML ain't as "lightweight" as it appears. For documents that can be represented in plain XML (such as Jeff's example), YAML is clearly less verbose. But YAML offers many other specialized structures, enlisting many more characters and sequences than are reserved by XML.
Bottom line, if you're looking for XML-without-angle-brackets, YAML ain't it.
A: For the sake of completeness I will mention Edifact for which I wrote an interface a long time ago.
A: JSON is valid YAML which could be very useful. Two for one!
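For example, this single document parses identically under a JSON parser and a YAML 1.2 parser:
{"name": "example", "tags": ["a", "b"], "count": 3}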
A:
I wouldn't dismiss plain text, like CSV or tab-delimited.
I'm really looking for alternatives that have a defined structure and (cross platform, multi language) library support. I'm interested in looking at different designs and their pros and cons. I like the idea of formats that can have a text and "binary" (compact, "compiled", fast I/O, smaller footprint) format. The advantage of having libraries is that they perform the parsing and perhaps extra data manipulation/validation for you.
Although having said that, there is definitely a use for simple formats like .ini, .plist and CSV etc. You shouldn't always have to use a hammer to crack a nut.
A: But at what cost?
I'm all for JSON in many situations, especially where weight or client-side work is a concern, but moving away from XML loses readability (so important in those config files) and the power of tomorrow's problem solutions like XSLT and XPath. Be really sure why and when you move away: it's a de facto standard for a reason.
(aside: my habit is to use XML internally, and transform that to JSON where that's the desired output)
A: Heresy! XML is king of data.
Say no to the usurpers, off with their heads!
Long live XML!
But seriously, if you just need data, use JSON for support and elegance; but if you need formatting, XPath-like queries, additional metadata, etc., stick with XML.
Note: I use XML for configs, system building, code generation and similar tasks, but JSON for RPC, SQL for queries and persistence, and finally YAML here and there for logging and quick tasks; in other words, choose the appropriate format for the need.
A: Simple Declarative Language is a nice alternative to XML for common tasks such as serialization and configuration. It provides a C# and Java parser library. I think it excels at specifying all kinds of metadata without the XML verbosity.
A: If you're asking in the perspective of a DSL, Guile Scheme could help, as already suggested with the S-expressions.
Personally I also use JSON for AJAX transactions.
A: XML is OK for text markup, but for serialization of general structures it is quite a bad option; JSON is much better suited there.
A: Anything you like, as long as it's not ASN.1
A: JSON can be used in many ways, but it is particularly well suited to use with MySQL tables I find. It works very well with Android as well (GSON library or JSON). Beyond that, it's effective at transmitting small bits of data individually or as arrays.
A: For storing code-like data, LES (Loyc Expression Syntax) is a budding alternative. I've noticed a lot of people use XML for code-like constructs, such as build systems which support conditionals, command invocations, sometimes even loops. These sorts of things look natural in LES:
// LES code has no built-in meaning. This just shows what it looks like.
[DelayedWrite] // an "attribute"
Output(
    if version > 4.0 {
        $ProjectDir/Src/Foo;
    } else {
        $ProjectDir/Foo;
    }
);
It doesn't have great tool support yet, though; currently the only LES library is for C#, and only one app is known to use LES: LLLPG.
In theory you could use LES for data or markup, but there are no standards for how to do that:
body {
    '''Click here to use the World's '''
    a href="http://google.com" {
        strong "most popular"; " search engine!"
    };
};
point = (2, -3);
tasteMap = { "lemon" -> sour; "sugar" -> sweet; "grape" -> yummy };
A: For the sake of mentioning... have a look at my proposal:
http://igagis.github.io/stob/
It is very simple and is not overloaded with a variety of special symbols; just {} and "" basically.
Supports C++ style comments.
There are C++, C# and Java libraries.
Example:
"String object"
AnotherStringObject
"String with children"{
"child 1"
Child2
"child three"{
SubChild1
"Subchild two"
Property1 {Value1}
"Property two" {"Value 2"}
//comment
/* multi-line
comment */
"multi-line
string"
"Escape sequences \" \n \r \t \\"
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Difference between foreach and for loops over an IEnumerable class in C# I have been told that there is a performance difference between the following code blocks.
foreach (Entity e in entityList)
{
....
}
and
for (int i=0; i<entityList.Count; i++)
{
Entity e = (Entity)entityList[i];
...
}
where
List<Entity> entityList;
I am no CLR expert but from what I can tell they should boil down to basically the same code. Does anybody have concrete (heck, I'd take packed dirt) evidence one way or the other?
A: Here is a good article that shows the IL differences between the two loops.
foreach is technically slower; however, it is much easier to use and easier to read. Unless performance is critical, I prefer the foreach loop over the for loop.
A: The foreach sample roughly corresponds to this code:
using(IEnumerator<Entity> e = entityList.GetEnumerator()) {
while(e.MoveNext()) {
Entity entity = e.Current;
...
}
}
There are two costs here that a regular for loop does not have to pay:
*
*The cost of allocating the enumerator object by entityList.GetEnumerator().
*The cost of two virtual method calls (MoveNext and Current) for each element of the list. (A rough benchmark sketch follows.)
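To put rough numbers on those costs, here is a minimal, self-contained benchmark sketch (the list size is arbitrary, and exact timings will vary by runtime and hardware):
using System;
using System.Collections.Generic;
using System.Diagnostics;
class LoopBenchmark
{
    static void Main()
    {
        var list = new List<int>();
        for (int i = 0; i < 10000000; i++) list.Add(i);
        long sum = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < list.Count; i++) sum += list[i]; // indexed access
        sw.Stop();
        Console.WriteLine("for:     " + sw.ElapsedMilliseconds + " ms (sum " + sum + ")");
        sum = 0;
        sw = Stopwatch.StartNew();
        foreach (int x in list) sum += x; // enumerator-based access
        sw.Stop();
        Console.WriteLine("foreach: " + sw.ElapsedMilliseconds + " ms (sum " + sum + ")");
    }
}
Note that foreach over a variable typed as List<T> uses the struct List<T>.Enumerator; the allocation and virtual-call costs described above really apply when the list is accessed through the IEnumerable<T> interface.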
A: One point missed here:
A List has a Count property, it internally keeps track of how many elements are in it.
An IEnumerable DOES NOT.
If you program to the interface IEnumerable and use the Count() extension method, it will enumerate the sequence just to count the elements.
A moot point, though, since with an IEnumerable you cannot refer to items by index.
So if you want to lock in to Lists and Arrays you can get small performance increases.
If you want flexibility, use foreach and program to IEnumerable (allowing the use of LINQ and/or yield return).
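A small sketch of that distinction (the Count() extension method requires .NET 3.5 / System.Linq):
using System;
using System.Collections.Generic;
using System.Linq;
class CountExample
{
    static void Main()
    {
        var list = new List<int> { 1, 2, 3, 4 };
        // O(1): List<T> keeps track of its element count internally.
        int fast = list.Count;
        // A lazy sequence with no cached count.
        IEnumerable<int> filtered = list.Where(x => x > 1);
        // Potentially O(n): Count() may have to enumerate the whole
        // sequence just to count the elements.
        int slow = filtered.Count();
        Console.WriteLine(fast + " / " + slow); // 4 / 3
    }
}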
A: foreach creates an instance of an enumerator (returned from GetEnumerator) and that enumerator also keeps state throughout the course of the foreach loop. It then repeatedly calls MoveNext() and Current on the enumerator and runs your code for each object it returns.
They don't boil down to the same code in any way, really, which you'd see if you wrote your own enumerator.
A: In terms of allocations, it'd be better to look at this blogpost. It shows exactly in what circumstances an enumerator is allocated on the heap.
A: I think one possible situation where you might get a performance gain is if the enumerable type's size and the loop condition are constants; for example:
const int ArraySize = 10;
int[] values = new int[ArraySize];
//...
for (int i = 0; i < ArraySize; i++) { /* loop body */ }
In this case, depending on the complexity of the loop body, the compiler might be able to replace the loop with inline calls. I have no idea if the .NET compiler does this, and it's of limited utility if the size of the enumerable type is dynamic.
One situation where foreach might perform better is with data structures like a linked list where random access means traversing the list; the enumerator used by foreach will probably iterate one item at a time, making each access O(1) and the full loop O(n), but calling the indexer means starting at the head and finding the item at the right index; O(N) each loop for O(n^2).
Personally I don't usually worry about it and use foreach any time I need all items and don't care about the index of the item. If I'm not working with all of the items or I really need to know the index, I use for. The only time I could see it being a big concern is with structures like linked lists.
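To illustrate the linked-list point concretely, here is a small sketch; LinkedList<T> deliberately has no indexer, so index-style access is simulated with Enumerable.ElementAt, which walks from the head on every call:
using System;
using System.Collections.Generic;
using System.Linq;
class LinkedListTraversal
{
    static void Main()
    {
        var linked = new LinkedList<int>(Enumerable.Range(0, 10000));
        // O(n^2) overall: ElementAt(i) re-walks the list for every i.
        long indexedSum = 0;
        for (int i = 0; i < linked.Count; i++)
            indexedSum += linked.ElementAt(i);
        // O(n) overall: the enumerator just follows each node's Next pointer.
        long enumeratedSum = 0;
        foreach (int value in linked)
            enumeratedSum += value;
        Console.WriteLine(indexedSum == enumeratedSum); // True
    }
}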
A: For Loop
A for loop is used to perform an operation n times.
for(int i=0;i<n;i++)
{
l=i;
}
foreach loop
int[] i={1,2,3,4,5,6}
A foreach loop is used to perform an operation for each value/object in an IEnumerable.
foreach(var k in i)
{
l=k;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How to start learning Java for use with Oracle RDBMS? I am looking for some advice on where I should concentrate my efforts to get the skills needed to become a Java developer for Oracle applications. I'm a bit confused as there are a lot of technologies in the Java world. Where should I start? What should I avoid? Is JDeveloper a good IDE for a beginner?
A: Your question is a little bit too vague to give a proper answer...
If you plan to query the Oracle Database from an External Java Program (Either within a Swing Application or an Application Server) then you need to learn 2 core APIs:
*
*JDBC (Java Database Connectivity)
*JPA (Java Persistence API)
JDBC is the core API that allows a Java Program to interact with any RDBMS so you should at least know how it works so whenever you have to dig into low-level code, you will actually know what's happening.
JPA is the latest Java API for persistence, which basically allows one to map Plain Old Java Objects (AKA POJOs) to RDBMS table structures. There are multiple known implementations available, but I would recommend Hibernate or TopLink as good starting points.
After that, you can start to dig into other known frameworks like the Spring Framework for some other RDBMS related APIs.
A: You should be able to do everything related to Oracle using JDBC, so make sure you bone up on that API. Other than that, it depends on the type of application. Standalone apps may use Swing (the Java UI toolkit) or in the future JavaFX, which is supposed to make Swing obsolete and may do so in a few years. Web/enterprisey apps will make use of Java Enterprise Edition, so take a look at the servlet API, and if the app uses Enterprise JavaBeans, look at the Java Persistence API, which you would probably be using instead of JDBC.
I haven't used JDeveloper, but I haven't found anything wrong with the free IDEs like Eclipse or Netbeans, and my personal favorite is JetBrains's IntelliJ IDEA.
A: There's really nothing specific you need to learn to be an Oracle developer per se. Obviously you need to learn Oracle SQL syntax, and all the standard RDBMS theory that goes along with database programming in general. The Java libs for database support are pretty easy to pick up and run with. I'm sure you can find a tutorial on the web with a quick Google search.
As for IDE I'd recommend Eclipse. It's a bit cumbersome at times, but the number of plug-ins available is staggering, and it has great refactoring and code completion support.
A: Expert Oracle JDBC Programming is a book aimed directly at developers who want to use Java with Oracle. Before you make even that small monetary investment though, you might want to check out the JDBC tutorial published by Sun.
A: To become an Oracle Developer there is a bit more to learn than jdbc. You should take a look at the Oracle web site. It is kind of slow and not very intuitive but has a lot of good information. There are OUGs that have good info as well.
If you just want to access Oracle via JAVA then you should use a framework such as Spring. Takes away the pain of jdbc. Lets you write sql and map it to objects.
If you don't know PL/SQL it might be good to learn what it is.
My two cents from working with Oracle for the past 7 yrs.
A: You can use JDeveloper and try to find some tutorials for it (I actually had some from my university). It integrates well with the rest of the Oracle stack (DB and application server). The downside is that although you can download developer editions for personal use, running the Oracle DB + Oracle Application Server + JDeveloper on a machine that has less than 4GB of RAM and one core is not really a pleasant experience.
A: Your question is very simple so I have listed a few simple steps to start developing a Java application using Oracle technologies.
*
*Install Oracle XE Database.
*Install JDeveloper. Choose the install with WebLogic if you are developing a J2EE application.
*Build and run a JDBC application using the sample code or use the wizard in JDeveloper.
*Install SQL Developer for writing stored procedures.
Steps 3. and 4. are optional. You now have everything you need to build either a proof of concept or an enterprise grade database application, using simple wizards and without re-inventing the wheel.
You mentioned developing an Oracle Application. It's best to leave the development of Oracle's packaged Application to Oracle itself but if you want to integrate your custom java application with Oracle's packaged application then use Oracle's SOA Suite.
Cheers
KB
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How Can I Monitor Which Window Currently Has Keyboard Focus Is there a way to track which window currently has keyboard focus? I could handle WM_SETFOCUS for every window but I'm wondering if there's an alternative, simpler method (i.e. a single message handler somewhere).
I could use OnIdle() in MFC and call GetFocus() but that seems a little hacky.
A: There is an easy way using .Net Framework 3.5 : the library UI Automation provides an event focus changed that fires every time the focus change to a new control.
Page on MSDN
Sample:
public void SubscribeToFocusChange()
{
AutomationFocusChangedEventHandler focusHandler
= new AutomationFocusChangedEventHandler(OnFocusChanged);
Automation.AddAutomationFocusChangedEventHandler(focusHandler);
}
private void OnFocusChanged(object sender, AutomationFocusChangedEventArgs e)
{
AutomationElement focusedElement = sender as AutomationElement;
//...
}
This API in fact uses Windows hooks behind the scenes to do that. However, you have to use the .NET Framework...
A: How about the Win32 GetForegroundWindow?
A: So from the way you worded the question I'm inferring that you want to have an event handler which is invoked whenever focus switches between windows. You want to be notified, rather than having to poll.
I actually don't think calling GetFocus from OnIdle is that much of a hack - sure it's polling, but it's low-overhead polling without side effects - but if you really want to track this, Windows Hooks are probably your best choice. Specifically you can install a CBT hook (WH_CBT) and listen for the HCBT_SETFOCUS notification.
Windows calls the WH_CBT hook with this hook code when Windows is about to set the focus to any window. In the case of thread-specific hooks, the window must belong to the thread. If the filter function returns TRUE, the focus does not change.
You could also do this with a WH_CALLWNDPROC hook and listen for the WM_SETFOCUS message.
Depending on whether you make it a global hook, or app-local, you can track focus across all windows on the system, or only the windows owned by your process.
A: If you're programming in .net 3.5, the Automation package olorin mentions is by far the easiest, but beware of using it in a program that itself has a UI, at least if the UI is done in WPF -- the focus tracking hooks get confused by events in its own app, and quickly lock up the UI. I sent MS a bug report on it. I have not observed the same problem using a traditional Windows Forms UI. You could, of course, put the tracking code in a separate console app and use some kind of ipc to transmit the info you need.
The tempting alternative of using Interop to access the WH_CBT Windows Hook from C# won't work -- the only global hooks you can get at from C# are the mouse and keyboard.
A: You could monitor messages for the WM_ACTIVATE event.
ref
A: Well, this may not be very graceful... but you can retrieve the current focused control pretty easily. So you might consider setting up a timer that asks every 1/2 second or so "Where is the current focus?"... Then you can observe changes. Example Delphi code is below; it should be pretty easy to adapt, since the real work is in the Windows API calls.
<snip>
function TForm1.GetCurrentHandle: integer;
var
activeWinHandle: HWND;
focusedThreadID : DWORD;
begin
//return the Windows handle of the currently focused control
Result := 0;
activeWinHandle := GetForegroundWindow;
focusedThreadID := GetWindowThreadProcessID(activeWinHandle,nil);
if AttachThreadInput(GetCurrentThreadID,focusedThreadID,true) then begin
try
Result := GetFocus;
finally
AttachThreadInput(GetCurrentThreadID, focusedThreadID, false);
end;
end; //if attached
end;
procedure TForm1.Timer1Timer(Sender: TObject);
begin
//give notification if the handle changed
//(this code gets fired by a timer)
CurrentHandle := GetCurrentHandle;
if CurrentHandle <> PreviousHandle then begin
Label1.Caption := 'Last focus change occurred @ ' + DateTimeToStr(Now);
end;
PreviousHandle := CurrentHandle;
end;
<snip>
A: http://msdn.microsoft.com/en-us/library/ms771428.aspx
Has a window focus tracker sample.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: What is the best practice for estimating required time for development of the SDLC phases? As a project manager, you are required to organize time so that the project meets a deadline.
Is there some sort of equations to use for estimating how long the development will take?
let's say the database
time = sql storedprocedures * tables manipulated or something similar
Or are you just stuck having to get the experience to get adequate estimations?
A: As project manager you have to remember that the best you will ever be able to do on your own is give your best guess as to how long a given project will take. How accurate you are depends on your experience and the scope of the project.
The only way I know of to get a reasonably accurate estimate that is it to break the project into individual tasks and get the developer who will be doing the actual work to put an estimate on each task. You can then use an evidence based algorithm that takes the estimation accuracy of each developer into account to give you the probability of hitting a given deadline.
If the probability is too low, you have two choices: remove features or move the deadline.
Further reading:
*
*http://www.joelonsoftware.com/items/2007/10/26.html
*http://www.wordyard.com/2007/10/11/evidence-based-scheduling/
*http://en.wikipedia.org/wiki/Monte_Carlo_method
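As a rough, hypothetical illustration of the Monte Carlo idea referenced above (this is not FogBugz's actual algorithm): capture each developer's historical actual-to-estimate ratios, then repeatedly sample them to build a distribution of possible completion times.
using System;
class ScheduleSimulation
{
    static void Main()
    {
        // Hypothetical inputs: task estimates in hours, and the developer's
        // past actual/estimate ratios (1.0 means a perfect estimate).
        double[] estimates = { 8, 16, 4, 12 };
        double[] historicalRatios = { 0.9, 1.1, 1.4, 2.0, 1.0 };
        double deadlineHours = 50;
        var rng = new Random();
        const int trials = 100000;
        int onTime = 0;
        for (int t = 0; t < trials; t++)
        {
            double total = 0;
            foreach (double estimate in estimates)
                total += estimate * historicalRatios[rng.Next(historicalRatios.Length)];
            if (total <= deadlineHours)
                onTime++;
        }
        Console.WriteLine("Probability of hitting the deadline: " + (double)onTime / trials);
    }
}
If the resulting probability is too low, you are back to the same two choices: remove features or move the deadline.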
A: There will be such a formula as soon as computers can start generating all code themselves. Until then you are stuck with human developers who all have different levels of skill and development speed.
A: There's no set formula out there that I've seen that would really work. FogBugz has its Monte Carlo simulator which has somewhat of a concept for this, but really, experience is going to be your best point of reference. Every developer and every project will be different!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Can .NET check other running programs command line parameters? We've got an interesting case where we are trying to determine how different instances of our app were launched. Is there any way for .NET to be able to query another running instance and get the command line parameters passed to that instance? I've not been able to find any way to do it in .NET so far, so I thought I'd check here to see if anyone had done anything like this before.
A: You can retrieve this information through WMI.
See the Win32_Process class, in particular its CommandLine property. This Code Project article provides pointers on how to do this.
A: Generally those variables are stored in the program's memory space, which you (theoretically) should not be able to access.
You'll need to find out how to initiate interprocess communication with the other instances and trade data. Named pipes are one good option, but you might want to start a new stackoverflow question to get good options on this.
-Adam
A: For future reference, here is a code snippet from how I got it to work. This was just for a test to see how it all worked. The actual implemented code parses the command line parameters for what we need.
// Requires a reference to System.Management.dll and "using System.Management;"
try
{
    ManagementScope connectScope = new ManagementScope();
    connectScope.Path = new ManagementPath(@"\\" + Environment.MachineName + @"\root\CIMV2");
    // Ask WMI for every running process with the given executable name
    SelectQuery msQuery = new SelectQuery("SELECT * FROM Win32_Process Where Name = '" + "PROGRAMNAMEHERE.exe" + "'");
    ManagementObjectSearcher searchProcedure = new ManagementObjectSearcher(connectScope, msQuery);
    foreach (ManagementObject item in searchProcedure.Get())
    {
        try
        {
            // Win32_Process.CommandLine holds the full command line of the process
            MessageBox.Show(item["CommandLine"].ToString());
        }
        catch (SystemException)
        {
            // CommandLine can be null or inaccessible for some processes; skip those
        }
    }
}
catch (ManagementException)
{
    // The WMI query itself failed
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: cmd defaults to F: drive When I open cmd on my laptop it is defaulting to the F: drive. This is troubling me; does anyone know how it got that way, or how to get it back to opening on the C: drive by default?
A: Use the command
C:
To change to the drive C. It would of course work for any drive letter.
A: http://blog.stevienova.com/2007/04/08/change-your-default-cmd-prompt-path/
Sometimes, your path when you go to start->run, CMD will be something
you don’t want. In active directory or on an NT domain, sometimes your
default home path might be a network drive. This isn’t so good when
you are offline or drop offline after being online. The CMD prompt is
set to a place where you can’t get to.
To change the path, you can edit the registry (at your own risk)
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Microsoft\Command Processor]
"Autorun"="c:"
This will change the path to your c: drive.
A: Very minor nit: if you're using Windows 7 you don't need the cmdhere powertoy, it's built in to Explorer.
You just navigate to a directory in Windows Explorer then hold down the shift key and right click. "Open command window here" is one of the selections on the context menu.
When it comes to opening cmd.exe in a specific directory, I just create a shortcut to cmd.exe and then in the shortcut properties I set "Start in:" to the drive/directory I want it to start in.
Using a shortcut allows me to customize the cmd.exe windows depending on what I'm using it for. For normal file editing/viewing I use a 180x60 window and appropriate font, but when I want to read/search log files I have a shortcut that opens a 260x100 window with a smaller font. That way I can view most long log file lines without having to use the horizontal scroll.
A: I believe it defaults to %HOMEDRIVE%\%HOMEPATH% so if you can muck about with those environment variables that might be an option. I can't edit these environment variables on my company's network, so I had to use the AutoRun to change it to something sane.
A: quick answer: cmd /k c:
long answer to make it "automagical":
http://windowsxp.mvps.org/autoruncmd.htm
A: In RegEdit.exe I created a String:
HKEY_CURRENT_USER\Software\Microsoft\Command Processor\AutoRun
The value I used for AutoRun was "D:"
A: *
*On the start screen / menu, type in "cmd", right-click it and select "Open file location".
*In the opened window, right-Click on "Command Prompt" icon, select "Properties", and edit the "Start In" property to your desired path. I used "C:\" as an example
A: If you are opening it from a shortcut change the working dir for the shortcut.
A: In addition to the other answers, there's a nice powertoy for XP called "open command window here." It adds an option to your right-click context menu when you click inside a folder to open a command window using that directory as the starting path.
http://www.microsoft.com/windowsxp/Downloads/powertoys/Xppowertoys.mspx
A: I ran into a similar issue where cmd would always open up in a particular directory (annoying when running scripts which invoke cmd). The best way to deal with this is to edit your autorun settings. Raymond Chen has a nice article about this here:
http://blogs.msdn.com/oldnewthing/archive/2007/11/21/6447771.aspx
The summary is that when you start a command shell, it checks the autorun registry key, and executes the commands stored there. The registry keys it checks are:
HKEY_LOCAL_MACHINE\Software\Microsoft\Command Processor\AutoRun
and/or
HKEY_CURRENT_USER\Software\Microsoft\Command Processor\AutoRun
A: Some answers already mentioned AutoRun as a solution.
But that can be very dangerous, as the AutoRun entry will be executed for any new cmd.exe instance (only pipes ignore the AutoRun).
A simple example that fails:
cd /d E:\myPath
FOR /F "delims=" %%Q in ('dir') do echo - %%Q
With AutoRun=C:, this shows the content of the current path of drive C:
You can still use AutoRun, but it should be a batch script, that checks if it was called interactive, by FOR/F or by drag&drop.
@echo off
REM *** To enable this script, call it by <scriptName> --install
setlocal EnableDelayedExpansion
REM *** ALWAYS make a copy of the complete CMDCMDLINE, else you destroy the original!!!
set "_ccl_=!cmdcmdline!"
REM *** The check is necessary to distinguish between a new cmd.exe instance for a user or for a "FOR /F" sub-command
if "!_ccl_:~1,-2!" == "!comspec!" (
REM ***** INTERACTIVE ****
REM *** %1 contains only data, when the script itself was called from the command line
if "%~1" NEQ "" (
goto :direct_call
)
endlocal
doskey /macrofile="%~dp0\cmdMacros.mac"
echo ********************************************************************
echo * AutoRun executed from "%~f0"
echo * Macros loaded from "%~dp0\cmdMacros.mac"
echo ********************************************************************
cd /d C:\myPath
) ELSE (
REM *** Called by a FOR command, by an explorer click or a drag & drop operation
REM *** Handle PROBLEMATIC Drag&Drop content, if necessary
endlocal
)
exit /b
:direct_call
if "%~1" == "--install" (
reg add "HKEY_CURRENT_USER\Software\Microsoft\Command Processor" /v "AutoRun" /t REG_SZ /d "%~f0"
exit /b
)
if "%~1" == "--show" (
reg query "HKEY_CURRENT_USER\Software\Microsoft\Command Processor" /v AutoRun
exit /b
)
if "%~1" == "--remove" (
reg DELETE "HKEY_CURRENT_USER\Software\Microsoft\Command Processor" /v AutoRun /f
)
exit /b
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Order an Array like another Array in C# What is the best algorithm to take an array like the one below:
A {0,1,2,3}
and order it so it matches the array below:
B {3,1,0,2}
Any ideas?
A: So if you have two arrays and they hold the same data just in different order then just do this:
A = B
I suspect that is not your situation so I think we need more info.
A: What you need to do is determine the ordering of B and then apply that ordering to A. One way to accomplish this is to undo the ordering of B and keep track of what happens along the way. Then you can do the reverse to A.
Here's some sketchy C# (sorry, I haven't actually run this)...
Take a copy of B:
List<int> B2 = new List<int>(B);
Now sort it, using a sort function that records the swaps:
List<KeyValuePair<int,int>> swaps = new List<KeyValuePair<int,int>>();
B2.Sort( delegate( int x, int y ) {
if( x<y ) return -1;
if( x==y ) return 0;
// x and y must be transposed, so assume they will be:
swaps.Add( new KeyValuePair<int,int>(x,y) );
return 1;
});
Now apply the swaps, in reverse order, to A:
swaps.Reverse();
foreach( KeyValuePair<int,int> x in swaps )
{
    int t = A[x.Key];
    A[x.Key] = A[x.Value];
    A[x.Value] = t;
}
Depending how the built-in sort algorithm works, you might need to roll your own. Something nondestructive like a merge sort should give you the correct results.
A: Here's my implementation of the comparer (uses LINQ, but can be easily adapted to older .net versions). You can use it for any sorting algorithms such as Array.Sort, Enumerable.OrderBy, List.Sort, etc.
var data = new[] { 1, 2, 3, 4, 5 };
var customOrder = new[] { 2, 1 };
Array.Sort(data, new CustomOrderComparer<int>(customOrder));
foreach (var v in data)
Console.Write("{0},", v);
The result is 2,1,3,4,5; any items not listed in customOrder are placed at the end in the default order for the given type (unless a fallback comparer is given).
public class CustomOrderComparer<TValue> : IComparer<TValue>
{
private readonly IComparer<TValue> _fallbackComparer;
private const int UseDictionaryWhenBigger = 64; // todo - adjust
private readonly IList<TValue> _customOrder;
private readonly Dictionary<TValue, uint> _customOrderDict;
public CustomOrderComparer(IList<TValue> customOrder, IComparer<TValue> fallbackComparer = null)
{
if (customOrder == null) throw new ArgumentNullException("customOrder");
_fallbackComparer = fallbackComparer ?? Comparer<TValue>.Default;
if (UseDictionaryWhenBigger < customOrder.Count)
{
_customOrderDict = new Dictionary<TValue, uint>(customOrder.Count);
for (int i = 0; i < customOrder.Count; i++)
_customOrderDict.Add(customOrder[i], (uint) i);
}
else
_customOrder = customOrder;
}
#region IComparer<TValue> Members
public int Compare(TValue x, TValue y)
{
uint indX, indY;
if (_customOrderDict != null)
{
if (!_customOrderDict.TryGetValue(x, out indX)) indX = uint.MaxValue;
if (!_customOrderDict.TryGetValue(y, out indY)) indY = uint.MaxValue;
}
else
{
// (uint)-1 == uint.MaxValue
indX = (uint) _customOrder.IndexOf(x);
indY = (uint) _customOrder.IndexOf(y);
}
if (indX == uint.MaxValue && indY == uint.MaxValue)
return _fallbackComparer.Compare(x, y);
return indX.CompareTo(indY);
}
#endregion
}
A: In the example you gave (an array of numbers), there would be no point in re-ordering A, since you could just use B.
So, presumably these are arrays of objects which you want ordered by one of their properties.
Then, you will need a way to look up items in A based on the property in question (like a hashtable). Then you can iterate B (which is in the desired sequence), and operate on the corresponding element in A.
A: Both array's contain the same values (or nearly so) but I need to force them to be in the same order. For example, in array A the value "3045" is in index position 4 and in array B it is in index position 1. I want to reorder B so that the index positions of like values are the same as A.
A: If they are nearly the same then here is some pseudo code:
Make an ArrayList
Copy the contents of the smaller array to the arraylist
for each item I in the larger array
Find I in the ArrayList
Append I to a new array
Remove I from the arraylist
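A rough C# translation of that pseudocode, assuming the element type has sensible equality:
using System;
using System.Collections.Generic;
class ReorderExample
{
    // Emits the items of 'larger' in 'larger' order, consuming matches
    // from a working copy of 'smaller' as it goes.
    static List<T> Reorder<T>(T[] smaller, T[] larger)
    {
        var remaining = new List<T>(smaller);
        var result = new List<T>();
        foreach (T item in larger)
        {
            int index = remaining.IndexOf(item);
            if (index >= 0)
            {
                result.Add(item);
                remaining.RemoveAt(index);
            }
        }
        return result;
    }
    static void Main()
    {
        int[] a = { 3, 1, 0, 2 };
        int[] b = { 0, 1, 2, 3 };
        foreach (int value in Reorder(b, a))
            Console.Write(value + " "); // 3 1 0 2
    }
}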
A: Could the issue be resolved using a Dictionary so the elements have a relationship that isn't predicated on sort order at all?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Given that I have a hash of id(key) and countries(values) sorted alphabetically, what is the best way to bubble up an entry to the top of the stack? This is a php example, but an algorithm for any language would do. What I specifically want to do is bubble up the United States and Canada to the top of the list. Here is an example of the array shortened for brevity.
array(
0 => '-- SELECT --',
1 => 'Afghanistan',
2 => 'Albania',
3 => 'Algeria',
4 => 'American Samoa',
5 => 'Andorra',)
The id's need to stay intact. So making them -1 or -2 will unfortunately not work.
A: What I usually do in these situations is to add a separate field called DisplayOrder or something similar. Everything defaults to, say, 1... You then sort by DisplayOrder and then the Name. If you want something higher or lower on the list, you can tweak the display order accordingly while keeping your normal IDs as-is.
-- Kevin Fairchild
A: My shortcut in similar cases is to add a space at the start of Canada and two spaces at the start of United States. If displaying these as options in a SELECT tag, the spaces are not visible but the sorting still brings them to the front.
However, that may be a little hacky in some contexts. In Java the thing to do would be to write a custom Comparator, make the US and Canada special cases in its compare() method, then sort the list (or array) passing in your new comparator as the sort algorithm.
However I would imagine it might be simpler to just find the relevant entries in the array, remove them from the array and add them again at the start. If you are in some kind of framework which will re-sort the array, then it might not work. But in most cases that will do just fine.
[edit] I see that you are using a hash and not an array - so it will depend on how you are doing the sorting. Could you simply put the US into the hash with a key of -2, Canada with -1 and then sort by ID instead? Not having used PHP in anger for 11 years, I don't recall whether it has built-in sorting in its hashes or if you are doing that at the application level.
A: $a = array(
0 => '- select -',
1 => 'Afghanistan',
2 => 'Albania',
3 => 'Algeria',
80 => 'USA'
);
$temp = array();
foreach ($a as $k => $v) {
$v == 'USA'
? array_unshift($temp, array($k, $v))
: array_push($temp, array($k, $v));
}
foreach ($temp as $t) {
list ($k, $v) = $t;
echo "$k => $v\n";
}
The output is then:
80 => USA
0 => - select -
1 => Afghanistan
2 => Albania
3 => Algeria
A: You can not change the order of elements within the same array by "moving" an item around. What you can do it to build a new array that first has your favourite items and then adds anything else from the original countries array at the end:
$countries = array(
0 => '-- SELECT --',
1 => 'Afghanistan',
2 => 'Albania',
3 => 'Algeria',
4 => 'American Samoa',
5 => 'Andorra',
22 => 'Canada',
44 => 'United States',);
# tell what should be upfront (by id)
$favourites = array(0, 44, 22);
# add favourites at first
$ordered = array();
foreach($favourites as $id)
{
$ordered[$id] = $countries[$id];
}
# add everything else
$ordered += array_diff_assoc($countries, $ordered);
# result
print_r($ordered);
Demo
A: It's been ages since I don't know how to code. But yes.
array_unshift($queue, "United States", "Canada");
print_r($queue);
array_unshift — Prepend one or more elements to the beginning of an array
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Database Patterns Does anyone know of papers/books/etc. that document patterns for databases? For example, one common rule of thumb is that every table should have a primary key and that the key should be devoid of information content. So I was wondering if anyone had written a book or published papers regarding design patterns for designing relational databases?
@Gaius,
That is the question that a database designer needs to weigh--what is the probable stability of the database structure? Given a long-enough horizon nothing is stable. Or to say the converse, given a long-enough horizon, everything is subject to change. A surrogate key (in theory) should never change its meaning because it never had meaning to begin with.
I guess the other thing to consider in that particular design scenario is who is it that will be seeing the primary key? If the primary key is something that end-users will actually need to refer to then it makes sense to make it something they can understand. But I can't think of many cases where an end-user needs to see a primary key; usually the primary key is present to allow the DB engine to speed up certain operations.
My original thought in asking the question was to find design patterns for database design that were codified by more experienced database designers than myself so as to, hopefully, avoid some easily avoidable errors. It would be interesting reading if anyone had ever codified database design anti-patterns.
A: Books by E.F. Codd and C.J. Date are the most obvious answers. I have not read this particular book but I am familiar with the authors, it is likely quite good.
Applied Mathematics for Database Professionals by Lex de Haan and Toon Koppelaars.
A: Actually, I think the rule of thumb is typically to use a natural key rather than a surrogate whenever possible...
So if I have, for instance, an Invoice table and an InvoiceDetail table, we can probably use InvoiceNumber as our primary key on the first one. It already exists in our data and (I assume?) would be unique. For the second table, we are probably going to be stuck needing a surrogate key, however -- whether it's joined to Invoice number as composite or not.
In any event, back to the original question... hometoast's link should get you started.
-- Kevin Fairchild
A: SQL Anti-Patterns by Bill Karwin is very easy to read (not dry) and explains in fairly clear terms a number of different potential pitfalls, how you might find yourself using them, and how/why to do things right.
A: Specifically, regarding keys: I strongly disagree with the strange idea that keys must be without meaning. In general, I consider a database a collection of facts; as soon as you start adding arbitrary numbers (like generated keys) and other irrelevant information into it, that should be a warning sign. I recommend this article by Joe Celko for more on keys.
More general notes:
Suggestions for schema designs/data models for different businesses:
David C. Hay: Data Model Patterns: Conventions of Thought
Rather old, but there is a reason why it's still in print
http://www.dorsethouse.com/books/dmp.html
Maybe not very pattern-like, but still very good:
Stephane Faroult, Peter Robson: The Art of SQL
http://oreilly.com/catalog/9780596008949/
Another one which I can recommend:
Vadim Tropashko: SQL Design Patterns - The Expert Guide to SQL Programming
http://www.rampant-books.com/book_2006_1_sql_coding_styles.htm
Systematic text-book about data modelling:
Graeme Simsion & Graham Witt, "Data Modeling Essentials"
http://www.elsevierdirect.com/product.jsp?isbn=9780126445510
Maybe you are actually looking for a "style guide"?. I that case:
Joe Celko: SQL Programming Style
http://www.elsevierdirect.com/product.jsp?isbn=9780120887972
A: To answer exactly: yes. There are s*-tons of info written on 'good' database design. Although your example rule of thumb is certainly questionable.
A: Using primary keys with business meaning ("natural keys") certainly has its merits, but it can make refactoring your database very difficult. Use caution, especially if there's any reason to believe the database structure will change over time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Program for working with large CSV Files Are there any good programs for dealing with reading large CSV files? Some of the datafiles I deal with are in the 1 GB range. They have too many lines for Excel to even deal with. Using Access can be a little slow, as you have to actually import them into a database to work with them directly. Is there a program that can open large CSV files and give you a simple spreadsheet layout to help you easily and quickly scan through the data?
A: MySQL can import CSV files very quickly into tables using the LOAD DATA INFILE command. It can also read from CSV files directly, bypassing any import procedures, by using the CSV storage engine.
Importing into native tables with LOAD DATA INFILE has a start-up cost, but after that you can INSERT/UPDATE much faster, as well as index fields. Using the CSV storage engine is almost instantaneous at first, but only sequential scan will be fast.
Update: This article (scroll down to the section titled Instant Data Loads) talks about using both approaches to loading CSV data onto MySQL, and gives examples.
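A minimal sketch of the LOAD DATA INFILE approach (hypothetical table and file names; the column layout must match the CSV):
-- One-time table setup, then a bulk load straight from the file.
CREATE TABLE big_data (
    id INT,
    name VARCHAR(255),
    amount DECIMAL(12,2)
);
LOAD DATA INFILE '/tmp/bigfile.csv'
INTO TABLE big_data
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;  -- skip the header row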
A: I've found reCSVeditor is a great program for editing large CSV files. It's ideal for stripping out unnecessary columns. I've used it for files 1,000,000 record files quite easily.
A: vEdit is great for this. I routinely open up 100+ meg (I know you said up to one gig; I think they advertise on their site it can handle twice that) files with it. It has regex support and loads of other features. 70 dollars is cheap for the amount you can do with it.
A: GVim can handle files that large for free if you are not attached to a true spreadsheet static field size view.
A: vEdit is great, but don't forget you can always go back to basics: check out Cygwin and start grepping.
Helpful commands
*
*grep
*head
*tail
*of course perl!
A: It depends on what you actually want to do with the data. Given a large text file like that you typically only want a smaller subset of the data at any one time, so don't overlook tools like 'grep' for pulling out the pieces you want to look for and work with.
A: If you can fit the data into memory and you like python then I recommend checking out the UniTable portion of Augustus. (Disclaimer: Augustus is open source (GPLv2) but I work for the company that writes it.)
It's not very well documented but this should help you get going.
from augustus.kernel.unitable import *
a = UniTable().from_csv_file('filename')
b = a.subtbl(a['key'] == some_value) #creates a subtable
It won't directly give you an excel like interface but with a little bit of work you can get many statistics out quickly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Differences in string compare methods in C# Comparing string in C# is pretty simple. In fact there are several ways to do it. I have listed some in the block below. What I am curious about are the differences between them and when one should be used over the others? Should one be avoided at all costs? Are there more I haven't listed?
string testString = "Test";
string anotherString = "Another";
if (testString.CompareTo(anotherString) == 0) {}
if (testString.Equals(anotherString)) {}
if (testString == anotherString) {}
(Note: I am looking for equality in this example, not less than or greater than but feel free to comment on that as well)
A: From MSDN:
"The CompareTo method was designed primarily for use in sorting or
alphabetizing operations. It should not be used when the primary
purpose of the method call is to determine whether two strings are
equivalent. To determine whether two strings are equivalent, call
the Equals method."
They suggest using .Equals instead of .CompareTo when looking solely for equality. I am not sure if there is a difference between .Equals and == for the string class. I will sometimes use .Equals or Object.ReferenceEquals instead of == for my own classes in case someone comes along at a later time and redefines the == operator for that class.
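To make the string-specific behavior concrete, here is a small sketch; note that for strings both == and Equals compare values, so you have to go through object (or ReferenceEquals) to get a pure reference check:
string s1 = "hello";
string s2 = new string("hello".ToCharArray()); // same value, different object
Console.WriteLine(s1 == s2);                       // True: string's == compares values
Console.WriteLine(s1.Equals(s2));                  // True: value comparison
Console.WriteLine((object)s1 == (object)s2);       // False: reference comparison
Console.WriteLine(object.ReferenceEquals(s1, s2)); // False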
A: Not that performance usually matters in 99% of the cases where you need to do this, but if you had to do this in a loop several million times I would highly suggest that you use .Equals or ==, because as soon as it finds a character that doesn't match it throws the whole thing out as false, whereas CompareTo will have to figure out which character is less than the other, leading to slightly worse performance.
If your app will be running in different countries, I'd recommend that you take a look at the CultureInfo implications and possibly use .Equals. Since I only really write apps for the US (and don't care if it doesn't work properly for someone), I always just use ==.
A: If you are ever curious about differences in BCL methods, Reflector is your friend :-)
I follow these guidelines:
Exact match: EDIT: I previously always used == operator on the principle that inside Equals(string, string) the object == operator is used to compare the object references but it seems strA.Equals(strB) is still 1-11% faster overall than string.Equals(strA, strB), strA == strB, and string.CompareOrdinal(strA, strB). I loop tested with a StopWatch on both interned/non-interned string values, with same/different string lengths, and varying sizes (1B to 5MB).
strA.Equals(strB)
Human-readable match (Western cultures, case-insensitive):
string.Compare(strA, strB, StringComparison.OrdinalIgnoreCase) == 0
Human-readable match (All other cultures, insensitive case/accent/kana/etc defined by CultureInfo):
string.Compare(strA, strB, myCultureInfo) == 0
Human-readable match with custom rules (All other cultures):
CompareOptions compareOptions = CompareOptions.IgnoreCase
| CompareOptions.IgnoreWidth
| CompareOptions.IgnoreNonSpace;
string.Compare(strA, strB, CultureInfo.CurrentCulture, compareOptions) == 0
A: In the forms you listed here, there's not much difference between the two. CompareTo ends up calling a CompareInfo method that does a comparison using the current culture; Equals is called by the == operator.
If you consider overloads, then things get different. CompareTo and == give you no choice: CompareTo always uses the current culture, and == always performs an ordinal comparison. Equals and String.Compare can take a StringComparison enumeration argument that lets you specify culture-insensitive or case-insensitive comparisons. Only String.Compare allows you to specify a CultureInfo and perform comparisons using a culture other than the default culture.
Because of its versatility, I find I use String.Compare more than any other comparison method; it lets me specify exactly what I want.
A: Here are the rules for how these functions work:
stringValue.CompareTo(otherStringValue)
*
*null comes before a string
*it uses CultureInfo.CurrentCulture.CompareInfo.Compare, which means it will use a culture-dependent comparison. This might mean that ß will compare equal to SS in Germany, or similar
stringValue.Equals(otherStringValue)
*
*null is not considered equal to anything
*unless you specify a StringComparison option, it will use what looks like a direct ordinal equality check, i.e. ß is not the same as SS, in any language or culture
stringValue == otherStringValue
*
*Is not the same as stringValue.Equals().
*The == operator calls the static Equals(string a, string b) method (which in turn goes to an internal EqualsHelper to do the comparison).
*Calling .Equals() on a null string gets a NullReferenceException, while == does not.
Object.ReferenceEquals(stringValue, otherStringValue)
Just checks that references are the same, i.e. it isn't just two strings with the same contents, you're comparing a string object with itself.
Note that with the options above that use method calls, there are overloads with more options to specify how to compare.
My advice if you just want to check for equality is to make up your mind whether you want to use a culture-dependent comparison or not, and then use .CompareTo or .Equals, depending on the choice.
A: One BIG difference to note is that .Equals() will throw an exception if the first string is null, whereas == will not.
string s = null;
string a = "a";
//Throws {"Object reference not set to an instance of an object."}
if (s.Equals(a))
Console.WriteLine("s is equal to a");
//no Exception
if(s==a)
Console.WriteLine("s is equal to a");
A: As Ed said, CompareTo is used for sorting.
There is a difference, however, between .Equals and ==.
== resolves to essentially the following code:
if(object.ReferenceEquals(left, null) &&
object.ReferenceEquals(right, null))
return true;
if(object.ReferenceEquals(left, null))
return right.Equals(left);
return left.Equals(right);
The simple reason is the following will throw an exception:
string a = null;
string b = "foo";
bool equal = a.Equals(b);
And the following will not:
string a = null;
string b = "foo";
bool equal = a == b;
A: Good explanation and practices about string comparison issues may be found in the article New Recommendations for Using Strings in Microsoft .NET 2.0 and also in Best Practices for Using Strings in the .NET Framework.
Each of the mentioned methods (and others) has a particular purpose. The key difference between them is which StringComparison value they use by default. There are several options:
*
*CurrentCulture
*CurrentCultureIgnoreCase
*InvariantCulture
*InvariantCultureIgnoreCase
*Ordinal
*OrdinalIgnoreCase
Each of the above comparison types targets a different use case:
*
*Ordinal
*
*Case-sensitive internal identifiers
*Case-sensitive identifiers in standards like XML and HTTP
*Case-sensitive security-related settings
*OrdinalIgnoreCase
*
*Case-insensitive internal identifiers
*Case-insensitive identifiers in standards like XML and HTTP
*File paths (on Microsoft Windows)
*Registry keys/values
*Environment variables
*Resource identifiers (handle names, for example)
*Case insensitive security related settings
*InvariantCulture or InvariantCultureIgnoreCase
*
*Some persisted linguistically-relevant data
*Display of linguistic data requiring a fixed sort order
*CurrentCulture or CurrentCultureIgnoreCase
*
*Data displayed to the user
*Most user input
Note, that StringComparison Enumeration as well as overloads for string comparison methods, exists since .NET 2.0.
String.CompareTo Method (String)
Is in fact a type-safe implementation of the IComparable.CompareTo method. Default interpretation: CurrentCulture.
Usage:
The CompareTo method was designed primarily for use in sorting or alphabetizing operations
Thus
Implementing the IComparable interface will necessarily use this method
String.Compare Method
A static member of String Class which has many overloads. Default interpretation: CurrentCulture.
Whenever possible, you should call an overload of the Compare method that includes a StringComparison parameter.
String.Equals Method
Overridden from the Object class and overloaded for type safety. Default interpretation: Ordinal.
Notice that:
The String class's equality methods include the static Equals, the static operator ==, and the instance method Equals.
StringComparer class
There is also another way to deal with string comparisons especially aims to sorting:
You can use the StringComparer class to create a type-specific comparison to sort the elements in a generic collection. Classes such as Hashtable, Dictionary, SortedList, and SortedList use the StringComparer class for sorting purposes.
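A small sketch pulling these options together (expected output noted in the comments):
using System;
using System.Collections.Generic;
class ComparisonKinds
{
    static void Main()
    {
        string a = "HELLO";
        string b = "hello";
        Console.WriteLine(string.Equals(a, b, StringComparison.Ordinal));           // False
        Console.WriteLine(string.Equals(a, b, StringComparison.OrdinalIgnoreCase)); // True
        // StringComparer plugs the same choices into sorting and collections.
        var names = new List<string> { "banana", "Apple", "cherry" };
        names.Sort(StringComparer.OrdinalIgnoreCase);
        Console.WriteLine(string.Join(", ", names.ToArray())); // Apple, banana, cherry
    }
}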
A: *
*s1.CompareTo(s2): Do NOT use if primary purpose is to determine whether two strings are equivalent
*s1 == s2: Cannot ignore case
*s1.Equals(s2, StringComparison): Throws NullReferenceException if s1 is null
*String.Equals(s2, StringComparison): By process of elimination, this static method is the WINNER (assuming a typical use case to determine whether two strings are equivalent)! A short example follows.
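For example, a null-safe, case-insensitive equality check:
string s1 = null;
string s2 = "value";
// No NullReferenceException: the static overload handles null operands.
bool same = string.Equals(s1, s2, StringComparison.OrdinalIgnoreCase); // false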
A: With .Equals, you also gain the StringComparison options. Very handy for ignoring case and other things.
btw, note that this will actually evaluate to true:
string a = "myString";
string b = "myString";
return a == b;
For strings, == is overloaded to call the static String.Equals, which compares values rather than references (and in this particular case both literals are interned to the same object anyway). A pure reference comparison only happens if the operands are typed as object, e.g. (object)a == (object)b.
a.Equals(b) is also true here.
and if you change b to:
b = "MYSTRING";
then a.Equals(b) is false, but
a.Equals(b, StringComparison.OrdinalIgnoreCase)
would be true
a.CompareTo(b) calls the string's CompareTo method, which compares the string values and returns <0 if a sorts before b, 0 if they compare equal, and >0 otherwise. However, this is case sensitive; I think there are options for Compare to ignore case and such, but I don't have time to look now.
As others have already stated, this would be done for sorting. Comparing for equality in this manner would result in unecessary overhead.
I'm sure I'm leaving stuff out, but I think this should be enough info to start experimenting if you need more details.
A: Using .Equals is also a lot easier to read.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "281"
} |
Q: Enable Bridged Network in VMware Server I have VMware Server showing this error; does anyone know how to fix it? VMware Server Error: http://soporte.cardinalsystems.com.ar/errorvmwareserver.jpg
A: In the Network Connections on the host PC, you might try repairing the connections that are created by VMWare. Something like "VMWare Network Adapter VMnet1"
I'm assuming that the network connections (to a LAN/Internet) are working on the host computer. If not, I'd start by fixing the host first.
A: There should be a vmware.log file or something similar in the directory that contains your vm. After you start the vm, are there any new errors in it?
Also, is the network adapter enabled?
A: No idea what I did, but now it's working.
This is all I have done:
*
*reinstall Vmware server several times ( more than 4 )
*Fix network adapter
*pray (more than 1000 times)
UPDATE: One of the three VMs does not work; the others work perfectly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Strong Validation in WPF I have a databound TextBox in my application like so: (The type of Height is decimal?)
<TextBox Text="{Binding Height, UpdateSourceTrigger=PropertyChanged,
ValidatesOnExceptions=True,
Converter={StaticResource NullConverter}}" />
public class NullableConverter : IValueConverter {
public object Convert(object o, Type type, object parameter, CultureInfo culture) {
return o;
}
public object ConvertBack(object o, Type type, object parameter, CultureInfo culture) {
if (o as string == null || (o as string).Trim() == string.Empty)
return null;
return o;
}
}
Configured this way, any non-empty strings which cannot be converted to decimal result in a validation error which will immediately highlight the textbox. However, the TextBox can still lose focus and remain in an invalid state. What I would like to do is either:
*
*Not allow the TextBox to lose focus until it contains a valid value.
*Revert the value in the TextBox to the last valid value.
What is the best way to do this?
Update:
I've found a way to do #2. I don't love it, but it works:
private void TextBox_LostKeyboardFocus(object sender, RoutedEventArgs e) {
var box = sender as TextBox;
var binding = box.GetBindingExpression(TextBox.TextProperty);
if (binding.HasError)
binding.UpdateTarget();
}
Does anyone know how to do this better? (Or do #1.)
A: You can force the keyboard focus to stay on the TextBox by handling the PreviewLostKeyBoardFocus event like this:
<TextBox PreviewLostKeyboardFocus="TextBox_PreviewLostKeyboardFocus" />
private void TextBox_PreviewLostKeyboardFocus(object sender, KeyboardFocusChangedEventArgs e) {
e.Handled = true;
}
A: It sounds to me that you'll want to handle two events:
GotFocus: Will trigger when the textbox gains focus. You can store the initial value of the box.
LostFocus: Will trigger when the textbox loses focus. At this point you can do your validation and decide if you want to roll back or not.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: preferred way to implement visitor pattern in dynamic languages? As an exercise for myself, I was translating a sample program into various languages. Starting in C#, I had a visitor-pattern interface like so:
interface Visitor
{
void Accept(Bedroom x);
void Accept(Bathroom x);
void Accept(Kitchen x);
void Accept(LivingRoom x);
}
Moving to ruby (or python, or whatever), I no longer get dispatch from the compiler based on type. Should I do the dispatch in the visitor itself?
class Cleaner
def accept(x)
acceptBedroom(x) if Bedroom === x
acceptBathroom(x) if Bathroom === x
acceptKitchen(x) if Kitchen===x
acceptLivingRoom(x) if LivingRoom===x
end
...
Or should I do the dispatch in the different specializaions of the room:
class Bathroom<Room
def initialize(name)
super(name)
end
def accept(visitor)
visitor.acceptBathroom(self)
end
end
Or is there another preferred idiom that is used in dynamic languages?
A: My recommendation is to use the former approach. There are pluses and minuses for each approach. The former is harder to maintain as the number of Room types grows; the latter is harder as the number of Cleaner types grows.
In Ruby, you could try
def accept(x)
send "accept#{x.class}".to_sym, x
end
PS: not all dynamically typed languages are unable to do dispatch based on type; some can infer type, or failing that, can used forced casting to pick the proper method among the overloaded options.
A: I would go with the second version. The first one looks like the kind of code smell that Visitor is supposed to solve: long if-else-if or switch-case statements.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Unit testing with network-reliant code I'm trying to be better about unit testing my code, but right now I'm writing a lot of code that deals with remote systems. SNMP, WMI, that sort of thing. With most classes I can mock up objects to test them, but how do you deal with unit testing a real system? For example, if my class goes out and gets the Win32_LogicalDisk object for a server, how could I possibly unit test it?
A: Assuming you meant "How do I test against things that are hard/impossible to mock":
If you have a class that "goes out and gets the Win32_LogicalDisk object for a server" AND does something else (consumes the 'Win32_LogicalDisk' object in some way), assuming you want to test the pieces of the class that consume this object, you can use Dependency Injection to allow you to mock the 'Win32_LogicalDisk' object. For instance:
class LogicalDiskConsumer(object):
def __init__(self, arg1, arg2, LogicalDiskFactory):
self.arg1=arg1
self.arg2=arg2
self.LogicalDisk=LogicalDiskFactory()
def consumedisk(self):
self.LogicalDisk.someaction()
Then in your unit test code, pass in a 'LogicalDiskFactory' that returns a mock object for the 'Win32_LogicalDisk'.
A: The easiest way to test things which are hard to mock is to refactor the code so that your code (the logic which is worth testing) is in one place, and the other things which your code uses are in separate module(s). The modules are easy to mock, and this way you can focus on your business logic.
A: You might create a set of "test stubs" that replace the core library routines and return known values, perhaps after suitable delays.
As an example, I recently needed to develop code to run inside a 3rd-party product. The challenge was that our "partner" would be doing the compiling and integration with their base code: I wasn't allowed to look at their code in any form! My strategy was to build a very simple emulator that did what I thought their code did, based on information from their engineers. We used a language that made it easy to switch various pieces of the emulator in and out of each build, so I could do a tremendous amount of testing before involving our partner to build each new iteration.
I'd use the same method again, as software problems in that particular product are about an order of magnitude fewer than in our next most reliable product!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: What are the best keyboard macros for programming in windows? I like putting shortcuts of the form "g - google.lnk" in my start menu so google is two keystrokes away. Win, g.
My eight or so most frequent applications go there.
I also make links to my solution files I am always opening "x - Popular Project.lnk"
Are there any better ways to automate opening frequently used applications?
A: AutoHotkey is a reasonably good program for implementing windows key shortcuts. You might instead define WIN + G to be "open browser to google" which gives you a better response time (don't have to wait for start menu to popup, etc)
There are macro programs that change the macros used based on the window that's in focus. I've never needed that much control, but you might want to look into that.
-Adam
A: Get a keyboard launcher program like Launchy
A: For shortcuts I use Launchy
For macros I use AutoHotKey
Others will suggest SlickRun for shortcuts also.
A: I use the "IntelliSense" snippets in Visual Studio a lot. You can include your own snippets and press Tab twice when they appear in the list. That's definitely a time saver.
A: I use QuickMacros and love it.
so much so, that I did some extensive training articles on it here.
A: The holy grail-
Ctrl-C, Ctrl-V
I kid, I kid! Try the veal!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Iterate over subclasses of a given class in a given module In Python, given a module X and a class Y, how can I iterate or generate a list of all subclasses of Y that exist in module X?
A: Can I suggest that neither of the answers from Chris AtLee and zacherates fulfills the requirements?
I think this modification to zacherates' answer is better:
def find_subclasses(module, clazz):
    for name in dir(module):
        o = getattr(module, name)
        try:
            if (o != clazz) and issubclass(o, clazz):
                yield name, o
        except TypeError:
            pass
The reason I disagree with the given answers is that the first does not produce classes that are a distant subclass of the given class, and the second includes the given class.
A: Although Quamrana's suggestion works fine, there are a couple of possible improvements I'd like to suggest to make it more pythonic. They rely on using the inspect module from the standard library.
*
*You can avoid the getattr call by using inspect.getmembers()
*The try/except can be avoided by using inspect.isclass()
With those, you can reduce the whole thing to a single list comprehension if you like:
import inspect
def find_subclasses(module, clazz):
    return [
        cls
        for name, cls in inspect.getmembers(module)
        if inspect.isclass(cls) and cls is not clazz and issubclass(cls, clazz)
    ]
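A quick usage sketch, assuming the foo.py module shown in the last answer below is on the path:
>>> import foo
>>> find_subclasses(foo, foo.foo)
[<class 'foo.bar'>, <class 'foo.baz'>]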
A: Here's one way to do it:
import inspect
def get_subclasses(mod, cls):
    """Yield the classes in module ``mod`` that inherit from ``cls``"""
    for name, obj in inspect.getmembers(mod):
        if hasattr(obj, "__bases__") and cls in obj.__bases__:
            yield obj
A: Given the module foo.py
class foo(object): pass
class bar(foo): pass
class baz(foo): pass
class grar(Exception): pass
def find_subclasses(module, clazz):
    for name in dir(module):
        o = getattr(module, name)
        try:
            if issubclass(o, clazz):
                yield name, o
        except TypeError:
            pass
>>> import foo
>>> list(foo.find_subclasses(foo, foo.foo))
[('bar', <class 'foo.bar'>), ('baz', <class 'foo.baz'>), ('foo', <class 'foo.foo'>)]
>>> list(foo.find_subclasses(foo, object))
[('bar', <class 'foo.bar'>), ('baz', <class 'foo.baz'>), ('foo', <class 'foo.foo'>), ('grar', <class 'foo.grar'>)]
>>> list(foo.find_subclasses(foo, Exception))
[('grar', <class 'foo.grar'>)]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: How do I get the current location of an iframe? I have built a basic data entry application allowing users to browse external content in iframe and enter data quickly from the same page. One of the data variables is the URL.
Ideally I would like to be able to load the iframes current url into a textbox with javascript. I realize now that this is not going to happen due to security issues.
Has anyone done anything on the server side, or does anyone know of any .NET browser-in-browser controls? The ultimate goal is just to give the user an easy method of extracting the URL of the page they are viewing in the iframe. It doesn't necessarily HAVE to be an iframe; a browser in the browser would be ideal.
Thanks,
Adam
A: I did some tests in Firefox 3 comparing the value of .src and .documentWindow.location.href in an iframe. (Note: The documentWindow is called contentDocument in Chrome, so instead of .documentWindow.location.href in Chrome it will be .contentDocument.location.href.)
src is always the last URL that was loaded in the iframe without user interaction. I.e., it contains the first value for the URL, or the last value you set up with Javascript from the containing window doing:
document.getElementById("myiframe").src = 'http://www.google.com/';
If the user navigates inside the iframe, you can't anymore access the value of the URL using src. In the previous example, if the user goes away from www.google.com and you do:
alert(document.getElementById("myiframe").src);
You will still get "http://www.google.com".
documentWindow.location.href is only available if the iframe contains a page in the same domain as the containing window, but if it's available it always contains the right value for the URL, even if the user navigates in the iframe.
If you try to access documentWindow.location.href (or anything under documentWindow) and the iframe is in a page that doesn't belong to the domain of the containing window, it will raise an exception:
document.getElementById("myiframe").src = 'http://www.google.com/';
alert(document.getElementById("myiframe").documentWindow.location.href);
Error: Permission denied to get property Location.href
I have not tested any other browser.
Hope it helps!
A: document.getElementById('iframeID').contentWindow.location.href
You can't access cross-domain iframe location at all.
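A small defensive sketch combining both facts (the function name is my own):
function getIframeUrl(id) {
    var frame = document.getElementById(id);
    try {
        return frame.contentWindow.location.href; // works only for same-origin content
    } catch (e) {
        return frame.src; // falls back to the last URL the page itself assigned
    }
}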
A: I use this.
var iframe = parent.document.getElementById("theiframe");
var innerDoc = iframe.contentDocument || iframe.contentWindow.document;
var currentFrame = innerDoc.location.href;
A: HTA works like a normal windows application.
You write HTML code, and save it as an .hta file.
However, there is at least one drawback: the browser can't open an .hta file; it's handled like a normal .exe program. So, if you place a link to an .hta on your web page, it will open a download dialog asking if you want to open or save the HTA file. If that's not a problem for you, you can click "Open" and it will open a new window (that has no toolbars, so no Back button, no address bar, and no menu bar).
I needed to do something very similar to what you want, but instead of iframes, I used a real frameset.
The main page needs to be a .hta file; the other should be a normal .htm page (or .php or whatever).
Here's an example of a HTA page with 2 frames, where the top one have a button and a text field, that contains the second frame URL; the button updates the field:
frameset.hta
<html>
<head>
<title>HTA Example</title>
<HTA:APPLICATION id="frames" border="thin" caption="yes" icon="http://www.google.com/favicon.ico" showintaskbar="yes" singleinstance="no" sysmenu="yes" navigable="yes" contextmenu="no" innerborder="no" scroll="auto" scrollflat="yes" selection="yes" windowstate="normal"></HTA:APPLICATION>
</head>
<frameset rows="60px, *">
<frame src="topo.htm" name="topo" id="topo" application="yes" />
<frame src="http://www.google.com" name="conteudo" id="conteudo" application="yes" />
</frameset>
</html>
*
*There's an HTA:APPLICATION tag that sets some properties to the file; it's good to have, but it isn't a must.
*You NEED to place an application="yes" on the frame tags. It says they belong to the program too and should have access to all data (if you don't, the frames will still show the error you had before).
topo.htm
<html>
<head>
<title>Topo</title>
<script type="text/javascript">
function copia_url() {
campo.value = parent.conteudo.location;
}
</script>
</head>
<body style="background: lightBlue;" onload="copia_url()">
<input type="button" value="Copiar URL" onclick="copia_url()" />
<input type="text" size="120" id="campo" />
</body>
</html>
*
*You should notice that I didn't use any getElement function to fetch the field; in an HTA file, every element that has an ID instantly becomes an object
I hope this helps you and others who get to this question. It solved my problem, which looks to be the same as yours.
You can found more information here: http://www.irt.org/articles/js191/index.htm
Enjoy =]
A: I like your server side idea, even if my proposed implementation of it sounds a little bit ghetto.
You could set the .innerHTML of the iframe to the HTML contents you grab server side. Depending on how you grab this, you will have to pay attention to relative versus absolute paths.
Plus, depending on how the page you are grabbing interacts with other pages, this could totally not work (cookies being set for the page you are grabbing won't work across domains, maybe state is being tracked in Javascript... Lots of reasons this might not work.)
I don't believe that tracking the current state of the page you are trying to mirror is theoretically possible, but I'm not sure. The site could track all sorts of things server side, you won't have access to this state. Imagine the case where on a page load a variable is set to a random value server-side, how would you capture this state?
Do these ideas help with anything?
-Brian J. Stinar-
A: Does this help?
http://www.quirksmode.org/js/iframe.html
I only tested this in firefox, but if you have something like this:
<iframe name='myframe' id='myframe' src='http://www.google.com'></iframe>
You can get its address by using:
document.getElementById('myframe').src
Not sure if I understood your question correctly but anyways :)
A: You can use Ra-Ajax and have an iframe wrapped inside e.g. a Window control. Though in general terms I don't encourage people to use iframes (for anything)
Another alternative is to load the HTML on the server and send it directly into the Window as the content of a Label or something. Check out how this Ajax RSS parser is loading the RSS items in the source which can be downloaded here (Open Source - LGPL)
(Disclaimer; I work with Ra-Ajax...)
A: Ok, so in this application, there is an iframe in which the user is supplied with links or some capacity that allows that iframe to browse to some external site. You are then looking to capture the URL to which the user has browsed.
Something to keep in mind: since the URL is to an external source, you will be limited in how much you can interact with this iframe via JavaScript (or any client-side access, for that matter); this is known as browser cross-domain security, as you have apparently discovered. There are clever workarounds, as presented here: Cross-domain, cross-frame Javascript, although I do not think the workaround applies in this case.
About all you can access is the location, as you need.
I would suggest making the code presented more resilient and less error-prone. Try browsing the web sometime with IE or FF configured to show JavaScript errors. You will be surprised just how many JavaScript errors are thrown, largely because there is a lot of error-prone JavaScript out there, which just continues to proliferate.
This solution assumes that the iframe in question is the same "window" context where you are running the javascript. (Meaning, it is not embedded within another frame or iframe, in which case, the javascript code gets more involved, and you likely need to recursively search through the window hierarchy.)
<iframe name='frmExternal' id='frmExternal' src='http://www.stackoverflow.com'></iframe>
<input type='text' id='txtUrl' />
<input type='button' id='btnGetUrl' value='Get URL' onclick='GetIFrameUrl();' />
<script language='javascript' type='text/javascript'>
function GetIFrameUrl()
{
if (!document.getElementById)
{
return;
}
var frm = document.getElementById("frmExternal");
var txt = document.getElementById("txtUrl");
if (frm == null || txt == null)
{
// not great user feedback but slightly better than obnoxious script errors
alert("There was a problem with this page, please refresh.");
return;
}
txt.value = frm.src;
}
</script>
Hope this helps.
A: You can access the src property of the iframe, but that will only give you the initially loaded URL. If the user has been navigating around in the iframe, you'll need to use an HTA to solve the security problem.
http://msdn.microsoft.com/en-us/library/ms536474(VS.85).aspx
Check out the link, using an HTA and setting the "application" property of an iframe will allow you to access the document.href property and parse out all of the information you want, including DOM elements and their values if you so choose.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
} |
Q: How can I avoid global state? So, I was reading the Google testing blog, and it says that global state is bad and makes it hard to write tests. I believe it--my code is difficult to test right now. So how do I avoid global state?
The biggest things I use global state (as I understand it) for is managing key pieces of information between our development, acceptance, and production environments. For example, I have a static class named "Globals" with a static member called "DBConnectionString." When the application loads, it determines which connection string to load, and populates Globals.DBConnectionString. I load file paths, server names, and other information in the Globals class.
Some of my functions rely on the global variables. So, when I test my functions, I have to remember to set certain globals first or else the tests will fail. I'd like to avoid this.
Is there a good way to manage state information? (Or am I understanding global state incorrectly?)
A: Keep in mind if your tests involve actual resources such as databases or filesystems then what you are doing are integration tests rather than unit tests. Integration tests require some preliminary setup whereas unit tests should be able to run independently.
You could look into the use of a dependency injection framework such as Castle Windsor but for simple cases you may be able to take a middle of the road approach such as:
public interface ISettingsProvider
{
string ConnectionString { get; }
}
public class TestSettings : ISettingsProvider
{
public string ConnectionString { get { return "testdatabase"; } }
}
public class DataStuff
{
private ISettingsProvider settings;
public DataStuff(ISettingsProvider settings)
{
this.settings = settings;
}
public void DoSomething()
{
// use settings.ConnectionString
}
}
In reality you would most likely read from config files in your implementation. If you're up for it, a full blown DI framework with swappable configurations is the way to go but I think this is at least better than using Globals.ConnectionString.
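As a quick illustration, a unit test can then swap in the stub; this sketch uses an NUnit-style [Test] attribute, but the framework choice is incidental:
[Test]
public void DoSomething_UsesTestConnectionString()
{
    var dataStuff = new DataStuff(new TestSettings());
    dataStuff.DoSomething(); // exercises the logic against "testdatabase", no globals involved
}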
A: Dependency injection is what you're looking for. Rather than have those functions go out and look for their dependencies, inject the dependencies into the functions. That is, when you call the functions pass the data they want to them. That way it's easy to put a testing framework around a class because you can simply inject mock objects where appropriate.
It's hard to avoid some global state, but the best way to do this is to use factory classes at the highest level of your application, and everything below that very top level is based on dependency injection.
Two main benefits: one, testing is a heck of a lot easier, and two, your application is much more loosely coupled. You rely on being able to program against the interface of a class rather than its implementation.
A: Great first question.
The short answer: make sure your application is a function from ALL its inputs (including implicit ones) to its outputs.
The problem you're describing doesn't seem like global state. At least not mutable state. Rather, what you're describing seems like what is often referred to as "The Configuration Problem", and it has a number of solutions. If you're using Java, you may want to look into light-weight injection frameworks like Guice. In Scala, this is usually solved with implicits. In some languages, you will be able to load another program to configure your program at runtime. This is how we used to configure servers written in Smalltalk, and I use a window manager written in Haskell called Xmonad whose configuration file is just another Haskell program.
A: An example of dependency injection in an MVC setting, here goes:
index.php
$container = new Container();
include 'container.php';
container.php
$container->add('mysql', 'database.driver');
$container->add('app', 'database.name');
...
$container->add(new Database($container->get('database.driver'), $container->get('database.name')), 'database');
$container->add(new Dao($container->get('database')), 'dao');
$container->add(new Service($container->get('dao')), 'service');
$container->add(new Controller($container->get('service')), 'controller');
$container->add(new FrontController(), 'frontController');
index.php continues here:
$frontController = $container->get('frontController');
$controllerClass = $frontController->getController($_SERVER['REQUEST_URI']);
$controllerAction = $frontController->getAction($_SERVER['REQUEST_URI']);
$controller = $container->get('controller');
$controller->$controllerAction();
And there you have it: the controller depends on a service-layer object, which depends on a DAO (data access object), which depends on a database object, which depends on the database driver, name, etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: BLOB Storage - 100+ GB, MySQL, SQLite, or PostgreSQL + Python I have an idea for a simple application which will monitor a group of folders and index any files it finds. A GUI will allow me to quickly tag new files and move them into a single database for storage, and also provide an easy mechanism for querying the db by tag, name, file type and date. At the moment I have about 100+ GB of files on a couple of removable hard drives, and the database will be at least that big. If possible I would like to support full-text search of the embedded binary and text documents. This will be a single-user application.
Not trying to start a DB war, but what open source DB is going to work best for me? I am pretty sure SQLite is off the table, but I could be wrong.
A: Why store the files in the database at all? Simply store your meta-data and a filename. If you need to copy them to a new location for some reason, just do that as a file system copy.
Once you remove the file contents then any competent database will be able to handle the meta-data for a few hundred thousand files.
A: I'm still researching this option for one of my own projects, but CouchDB may be worth a look.
A: My preference would be to store the document with the metadata. One reason is relational integrity: you can't easily move or modify the files without the action being brokered by the db. I am sure I could handle those problems, but it isn't as clean as I would like, and my experience has been that most vendors can handle huge amounts of binary data in the database these days. I guess I was wondering if PostgreSQL or MySQL have any obvious advantages in these areas; I am primarily familiar with Oracle. Anyway, thanks for the response. If the DB knows where the external file is, it will also be easy to bring the file in at a later date if I want. Another aspect of the question was whether either database is easier to work with when using Python. I'm assuming that is a wash.
A: I always hate to answer "don't", but you'd be better off indexing with something like Lucene (PyLucene). That and storing the paths in the database rather than the file contents is almost always recommended.
To add to that, none of those database engines will store LOBs in a separate dataspace (they'll be embedded in the table's data space), so any of those engines should perform nearly equally well (well, except SQLite). You need to move to Informix, DB2, SQL Server or others to get that kind of binary object handling.
A: Pretty much any of them would work (even though SQLite wasn't meant to be used in a concurrent multi-user environment, which could be a problem...) since you don't want to index the actual contents of the files.
The only limiting factor is the maximum "packet" size of the given DB (by packet I'm referring to a query/response). Usually these limits are around 2MB, meaning that your files must be smaller than 2MB. Of course you could increase this limit, but the whole process is rather inefficient, since, for example, to insert a file you would have to:
*
*Read the entire file into memory
*Transform the file into a query (which usually means hex encoding it, thus doubling the size from the start)
*Executing the generated query (which itself means - for the database - that it has to parse it)
I would go with a simple DB and the associated files stored using a naming convention which makes them easy to find (for example based on the primary key). Of course this design is not "pure", but it will perform much better and is also easier to use.
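A minimal sketch of that design in Python; sqlite3 here holds only the metadata, and all names (table, columns, store directory) are made up for illustration:
import os, shutil, sqlite3
os.makedirs("store", exist_ok=True)
conn = sqlite3.connect("index.db")
conn.execute("CREATE TABLE IF NOT EXISTS files (id INTEGER PRIMARY KEY, name TEXT, tags TEXT)")
def add_file(src_path, name, tags):
    cur = conn.execute("INSERT INTO files (name, tags) VALUES (?, ?)", (name, tags))
    conn.commit()
    # The blob lives on the filesystem, named after its primary key.
    dest = os.path.join("store", "%d.bin" % cur.lastrowid)
    shutil.copyfile(src_path, dest)
    return cur.lastrowid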
A: Why are you wasting time emulating something that the filesystem should be able to handle? More storage + grep is your answer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Add alternating row color to SQL Server Reporting services report How do you shade alternating rows in a SQL Server Reporting Services report?
Edit: There are a bunch of good answers listed below--from quick and simple to complex and comprehensive. Alas, I can choose only one...
A: Using IIF(RowNumber...) can lead to some issues when rows are being grouped and another alternative is to use a simple VBScript function to determine the color.
It's a little more effort but when the basic solution does not suffice, it's a nice alternative.
Basically, you add code to the Report as follows:
Private bOddRow As Boolean
'*************************************************************************
' -- Display green-bar type color banding in detail rows
' -- Call from BackGroundColor property of all detail row textboxes
' -- Set Toggle True for first item, False for others.
'*************************************************************************
Function AlternateColor(ByVal OddColor As String, _
ByVal EvenColor As String, ByVal Toggle As Boolean) As String
If Toggle Then bOddRow = Not bOddRow
If bOddRow Then
Return OddColor
Else
Return EvenColor
End If
End Function
Then on each cell, set the BackgroundColor as follows:
=Code.AlternateColor("AliceBlue", "White", True)
Further Reading: Report Solution Patterns and Recipes: Greenbar Reports | Wrox
A: Michael Haren's solution works fine for me. However, I got a warning saying that "Transparent" is not a valid BackgroundColor in Preview. I found a quick fix in
Setting BackgroundColor of Report elements in SSRS: use Nothing instead of "Transparent"
= IIf(RowNumber(Nothing) Mod 2 = 0, "Silver", Nothing)
A: I got a chessboard effect when I used Catch22's solution, I think because my matrix has more than one column in its design.
This expression worked fine for me:
=iif(RunningValue(Fields![rowgroupfield].Value.ToString,CountDistinct,Nothing) Mod 2,"Gainsboro", "White")
A: The only effective way to solve this without using VB is to "store" the row grouping modulo value within the row grouping (and outside the column grouping) and reference it explicitly within your column grouping. I found this solution at
http://ankeet1.blogspot.com/2009/02/alternating-row-background-color-for.html
But Ankeet doesn't do the best job of explaining what's happening, and his solution recommends the unnecessary step of creating a grouping on a constant value, so here's my step-by-step process for a matrix with a single row group, RowGroup1:
*
*Create a new column within the RowGroup1. Rename the textbox for this to something like RowGroupColor.
*Set the Value of RowGroupColor's textbox to
=iif(RunningValue(Fields![RowGroupField].Value
,CountDistinct,Nothing) Mod 2, "LightSteelBlue", "White")
*Set the BackgroundColor property of all your row cells to
"=ReportItems!RowGroupColor.Value"
*Set the width of the RowGroupColor column to 0pt and set CanGrow
to false to hide it from clients.
Voila! This also solves a lot of the problems mentioned in this thread:
*
*Automatic resets for subgroups: Just add a new column for that
rowgroup, performing a RunningValue on its group values.
*No need to worry about True/False toggles.
*Colors only held in one place for easy modification.
*Can be used interchangeably on row or column groups (just set height to 0 instead of width)
It would be awesome if SSRS would expose properties besides Value on Textboxes. You could just stuff this sort of calculation in a BackgroundColor property of the row group textboxes and then reference it as ReportItems!RowGroup.BackgroundColor in all of the other cells.
Ahh well, we can dream ...
A: My problem was that I wanted all the columns in a row to have the same background. I grouped both by row and by column, and with the top two solutions here I got all the rows in column 1 with a colored background, all the rows in column 2 with a white background, all the rows in column 3 with a colored background, and so on. It's as if RowNumber and bOddRow (of Catch22's solution) pay attention to my column group instead of ignoring that and only alternating with a new row.
What I wanted is for all the columns in row 1 to have a white background, then all the columns in row 2 to have a colored background, then all the columns in row 3 to have a white background, and so on. I got this effect by using the selected answer but instead of passing Nothing to RowNumber, I passed the name of my column group, e.g.
=IIf(RowNumber("MyColumnGroupName") Mod 2 = 0, "AliceBlue", "Transparent")
Thought this might be useful to someone else.
A: I think this trick is not discussed here. So here it is,
In any type of complex matrix, when you want alternating cell colors, either row-wise or column-wise, the working solution is as follows.
If you want alternating cell colors column-wise, then:
*
*At the bottom right corner of a report design view, in "Column
Groups", create a fake parent group on 1 (using expression), named
"FakeParentGroup".
*Then, in the report design, for the cells that are to be colored alternately, use the following background color expression:
=IIF(RunningValue( Fields![ColumnGroupField].Value, countDistinct, "FakeParentGroup" ) MOD 2, "White", "LightGrey")
That's all.
The same works for alternating colors row-wise; you just have to adjust the solution accordingly.
NOTE: sometimes you need to set the cell borders accordingly, as they tend to vanish.
Also don't forget to delete the value 1 in the report that appeared when you created the fake parent group.
A: Go to the table row's BackgroundColor property and choose "Expression..."
Use this expression:
= IIf(RowNumber(Nothing) Mod 2 = 0, "Silver", "Transparent")
This trick can be applied to many areas of the report.
And in .NET 3.5+ You could use:
= If(RowNumber(Nothing) Mod 2 = 0, "Silver", "Transparent")
Not looking for rep--I just researched this question myself and thought I'd share.
A: I have changed @Catch22's solution a bit, as I do not like the idea of having to go into each field if I decide I want to change one of the colors. This is especially important in reports where there are numerous fields that would need to have the color variable changed.
'*************************************************************************
' -- Display alternate color banding (defined below) in detail rows
' -- Call from BackgroundColor property of all detail row textboxes
'*************************************************************************
Function AlternateColor(ByVal rowNumber As Integer) As String
    Dim OddColor As String = "Green"
    Dim EvenColor As String = "White"
    If rowNumber Mod 2 = 0 Then
        Return EvenColor
    Else
        Return OddColor
    End If
End Function
Notice that I have changed the function from one that accepts the colors to one that contains the colors to be used.
Then in each field add:
=Code.AlternateColor(RowNumber(Nothing))
This is much more robust than manually changing the color in each field's background color.
A: If for the entire report you need an alternating color, you can use the DataSet your Tablix is bound to for a report-wide identity rownumber on the report and use that in the RowNumber function...
=IIf(RowNumber("DataSet1") Mod 2 = 1, "White","Blue")
A: @Aditya's answer is great, but there are instances where formatting will be thrown off if the very first cell of the row (for row background formatting) has a missing value (in complex tablixes with column/rows groups and missing values).
@Aditya's solution cleverly leverages the countDistinct result of the RunningValue function to identify row numbers within a tablix (row) group. If you have tablix rows with a missing value in the first cell, RunningValue will not increment the countDistinct result and will return the previous row's number (and, therefore, will affect the formatting of that cell). To account for that, you will have to add an additional term to offset the countDistinct value. My take was to check the first running value in the row group itself (see line 3 of the snippet below):
=iif(
(RunningValue(Fields![RowGroupField].Value, countDistinct, "FakeOrRealImmediateParentGroup")
+ iif(IsNothing(RunningValue(Fields![RowGroupField].Value, First, "GroupForRowGroupField")), 1, 0)
) mod 2, "White", "LightGrey")
Hope this helps.
A: One thing I noticed is that neither of the top two methods have any notion of what color the first row should be in a group; the group will just start with the opposite color from the last line of the previous group. I wanted my groups to always start with the same color...the first row of each group should always be white, and the next row colored.
The basic concept was to reset the toggle when each group starts, so I added a bit of code:
Private bOddRow As Boolean
'*************************************************************************
' -- Display green-bar type color banding in detail rows
' -- Call from BackGroundColor property of all detail row textboxes
' -- Set Toggle True for first item, False for others.
'*************************************************************************
Function AlternateColor(ByVal OddColor As String, _
ByVal EvenColor As String, ByVal Toggle As Boolean) As String
If Toggle Then bOddRow = Not bOddRow
If bOddRow Then
Return OddColor
Else
Return EvenColor
End If
End Function
'
Function RestartColor(ByVal OddColor As String) As String
bOddRow = True
Return OddColor
End Function
So I have three different kinds of cell backgrounds now:
*
*First column of data row has =Code.AlternateColor("AliceBlue", "White", True) (This is the same as the previous answer.)
*Remaining columns of data row have =Code.AlternateColor("AliceBlue", "White", False) (This, also, is the same as the previous answer.)
*First column of grouping row has =Code.RestartColor("AliceBlue") (This is new.)
*Remaining columns of grouping row have =Code.AlternateColor("AliceBlue", "White", False) (This was used before, but no mention of it for grouping row.)
This works for me. If you want the grouping row to be non-colored, or a different color, it should be fairly obvious from this how to change it around.
Please feel free to add comments about what could be done to improve this code: I'm brand new to both SSRS and VB, so I strongly suspect that there's plenty of room for improvement, but the basic idea seems sound (and it was useful for me) so I wanted to throw it out here.
A: For group headers/footers:
=iif(RunningValue(*group on field*,CountDistinct,"*parent group name*") Mod 2,"White","AliceBlue")
You can also use this to “reset” the row color count within each group. I wanted the first detail row in each sub group to start with White and this solution (when used on the detail row) allowed that to happen:
=IIF(RunningValue(Fields![Name].Value, CountDistinct, "NameOfPartnetGroup") Mod 2, "White", "Wheat")
See: http://msdn.microsoft.com/en-us/library/ms159136(v=sql.100).aspx
A: I tried all these solutions on a Grouped Tablix with row spaces and none worked across the entire report. The result was duplicate colored rows and other solutions resulted in alternating columns!
Here is the function I wrote that worked for me using a Column Count:
Private bOddRow As Boolean
Private cellCount As Integer
Function AlternateColorByColumnCount(ByVal OddColor As String, ByVal EvenColor As String, ByVal ColCount As Integer) As String
    If cellCount = ColCount Then
        bOddRow = Not bOddRow
        cellCount = 0
    End If
    cellCount = cellCount + 1
    If bOddRow Then
        Return OddColor
    Else
        Return EvenColor
    End If
End Function
For a 7 Column Tablix I use this expression for Row (of Cells) Backcolour:
=Code.AlternateColorByColumnCount("LightGrey","White", 7)
A: Just because none of the answers above seemed to work in my matrix, I'm posting this here:
http://reportingservicestnt.blogspot.com/2011/09/alternate-colors-in-matrixpivot-table.html
A: My matrix data had missing values in it, so I wasn't able to get ahmad's solution to work, but this solution worked for me
The basic idea is to create a child group and a field on your innermost group containing the color, then set the color for each cell in the row based on that field's value.
A: Slight modification of other answers from here that worked for me. My group has two values to group on, so I was able to just put them both in the first arg with a + to get it to alternate correctly
= Iif ( RunningValue (Fields!description.Value + Fields!name.Value, CountDistinct, Nothing) Mod 2 = 0,"#e6eed5", "Transparent")
A: When using row and column groups both, I had an issue where the colors would alternate between the columns even though it was the same row. I resolved this by using a global variable that alternates only when the row changes:
Public BGColor As String = "#ffffff"
Function AlternateColor() As String
    If BGColor = "#cccccc" Then
        BGColor = "#ffffff"
    Else
        BGColor = "#cccccc"
    End If
    Return BGColor
End Function
Now, in the first column of the row you want to alternate, set the color expression to:
=Code.AlternateColor()
In the remaining columns, set them all to:
=Code.BGColor
This should make the colors alternate only after the first column is drawn.
This may (unverifiably) improve performance, too, since it does not need to do a math computation for each column.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "147"
} |
Q: Reading Email using Pop3 in C# I am looking for a method of reading emails using Pop3 in C# 2.0. Currently, I am using code found in CodeProject. However, this solution is less than ideal. The biggest problem is that it doesn't support emails written in unicode.
A: My open source application BugTracker.NET includes a POP3 client that can parse MIME. Both the POP3 code and the MIME code are from other authors, but you can see how it all fits together in my app.
For the MIME parsing, I use http://anmar.eu.org/projects/sharpmimetools/.
See the files POP3Main.cs, POP3Client.cs, and insert_bug.aspx
A: I've successfully used OpenPop.NET to access emails via POP3.
A: You can also try Mail.dll mail component, it has SSL support, unicode, and multi-national email support:
using(Pop3 pop3 = new Pop3())
{
pop3.Connect("mail.host.com"); // Connect to server and login
pop3.Login("user", "password");
foreach(string uid in pop3.GetAll())
{
IMail email = new MailBuilder()
.CreateFromEml(pop3.GetMessageByUID(uid));
Console.WriteLine( email.Subject );
}
pop3.Close(false);
}
You can download it here at https://www.limilabs.com/mail
Please note that this is a commercial product I've created.
A: Call me old-fashioned, but why use a 3rd-party library for a simple protocol? I've implemented POP3 readers in web-based ASP.NET applications with System.Net.Sockets.TcpClient and System.Net.Security.SslStream for the encryption and authentication. As far as protocols go, once you open up communication with the POP3 server, there are only a handful of commands that you have to deal with. It is a very easy protocol to work with.
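For illustration, a minimal sketch of that approach (host and credentials are placeholders; real code needs error handling and multi-line response parsing):
using System;
using System.IO;
using System.Net.Security;
using System.Net.Sockets;
class Pop3Sketch
{
    static void Main()
    {
        using (TcpClient client = new TcpClient("pop.example.com", 995))
        using (SslStream ssl = new SslStream(client.GetStream()))
        {
            ssl.AuthenticateAsClient("pop.example.com");
            StreamReader reader = new StreamReader(ssl);
            StreamWriter writer = new StreamWriter(ssl) { AutoFlush = true, NewLine = "\r\n" };
            Console.WriteLine(reader.ReadLine());   // +OK greeting
            writer.WriteLine("USER myuser");
            Console.WriteLine(reader.ReadLine());
            writer.WriteLine("PASS mypassword");
            Console.WriteLine(reader.ReadLine());
            writer.WriteLine("STAT");               // +OK <count> <total size>
            Console.WriteLine(reader.ReadLine());
            writer.WriteLine("QUIT");
            Console.WriteLine(reader.ReadLine());
        }
    }
}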
A: I wouldn't recommend OpenPOP. I just spent a few hours debugging an issue - OpenPOP's POPClient.GetMessage() was mysteriously returning null. I debugged this and found it was a string index bug - see the patch I submitted here: http://sourceforge.net/tracker/?func=detail&aid=2833334&group_id=92166&atid=599778. It was difficult to find the cause since there are empty catch{} blocks that swallow exceptions.
Also, the project is mostly dormant... the last release was in 2004.
For now we're still using OpenPOP, but I'll take a look at some of the other projects people have recommended here.
A: HigLabo.Mail is easy to use. Here is a sample usage:
using (Pop3Client cl = new Pop3Client())
{
cl.UserName = "MyUserName";
cl.Password = "MyPassword";
cl.ServerName = "MyServer";
cl.AuthenticateMode = Pop3AuthenticateMode.Pop;
cl.Ssl = false;
cl.Authenticate();
///Get first mail of my mailbox
Pop3Message mg = cl.GetMessage(1);
String MyText = mg.BodyText;
///If the message have one attachment
Pop3Content ct = mg.Contents[0];
///you can save it to local disk
ct.DecodeData("your file path");
}
you can get it from https://github.com/higty/higlabo or Nuget [HigLabo]
A: I just tried SMTPop and it worked.
*
*I downloaded this.
*Added smtpop.dll reference to my C# .NET project
Wrote the following code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using SmtPop;
namespace SMT_POP3 {
class Program {
static void Main(string[] args) {
SmtPop.POP3Client pop = new SmtPop.POP3Client();
pop.Open("<hostURL>", 110, "<username>", "<password>");
// Get message list from POP server
SmtPop.POPMessageId[] messages = pop.GetMailList();
if (messages != null) {
// Walk attachment list
foreach(SmtPop.POPMessageId id in messages) {
SmtPop.POPReader reader= pop.GetMailReader(id);
SmtPop.MimeMessage msg = new SmtPop.MimeMessage();
// Read message
msg.Read(reader);
if (msg.AddressFrom != null) {
String from= msg.AddressFrom[0].Name;
Console.WriteLine("from: " + from);
}
if (msg.Subject != null) {
String subject = msg.Subject;
Console.WriteLine("subject: "+ subject);
}
if (msg.Body != null) {
String body = msg.Body;
Console.WriteLine("body: " + body);
}
if (msg.Attachments != null && false) {
// Do something with first attachment
SmtPop.MimeAttachment attach = msg.Attachments[0];
if (attach.Filename == "data") {
// Read data from attachment
Byte[] b = Convert.FromBase64String(attach.Body);
System.IO.MemoryStream mem = new System.IO.MemoryStream(b, false);
//BinaryFormatter f = new BinaryFormatter();
// DataClass data= (DataClass)f.Deserialize(mem);
mem.Close();
}
// Delete message
// pop.Dele(id.Id);
}
}
}
pop.Quit();
}
}
}
A: Downloading the email via the POP3 protocol is the easy part of the task. The protocol is quite simple, and the only hard part could be advanced authentication methods if you don't want to send a clear-text password over the network (and cannot use an SSL-encrypted communication channel). See RFC 1939: Post Office Protocol - Version 3
and RFC 1734: POP3 AUTHentication command for details.
The hard part comes when you have to parse the received email, which means parsing MIME format in most cases. You can write a quick-and-dirty MIME parser in a few hours or days, and it will handle 95+% of all incoming messages. Improving the parser so it can parse almost any email means:
*
*getting email samples sent from the most popular mail clients and improve the parser in order to fix errors and RFC misinterpretations generated by them.
*Making sure that messages violating RFC for message headers and content will not crash your parser and that you will be able to read every readable or guessable value from the mangled email
*correct handling of internationalization issues (e.g. languages written from right to left, correct encoding for a specific language, etc.)
*UNICODE
*Attachments and hierarchical message item tree as seen in "Mime torture email sample"
*S/MIME (signed and encrypted emails).
*and so on
Debugging a robust MIME parser takes months of work. I know, because I was watching my friend writing one such parser for the component mentioned below and was writing a few unit tests for it too ;-)
Back to the original question.
Following code taken from our POP3 Tutorial page and links would help you:
//
// create client, connect and log in
Pop3 client = new Pop3();
client.Connect("pop3.example.org");
client.Login("username", "password");
// get message list
Pop3MessageCollection list = client.GetMessageList();
if (list.Count == 0)
{
Console.WriteLine("There are no messages in the mailbox.");
}
else
{
// download the first message
MailMessage message = client.GetMailMessage(list[0].SequenceNumber);
...
}
client.Disconnect();
*
*HOWTO: Download emails from a GMail account in C# (blogpost)
*Rebex Mail for .NET (POP3/IMAP client component for .NET)
*Rebex Secure Mail for .NET (POP3/IMAP client component for .NET - SSL enabled)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "75"
} |
Q: How do I prevent replay attacks? This is related to another question I asked. In summary, I have a special case of a URL where, when a form is POSTed to it, I can't rely on cookies for authentication or to maintain the user's session, but I somehow need to know who they are, and I need to know they're logged in!
I think I came up with a solution to my problem, but it needs fleshing out. Here's what I'm thinking. I create a hidden form field called "username", and place within it the user's username, encrypted. Then, when the form POSTs, even though I don't receive any cookies from the browser, I know they're logged in because I can decrypt the hidden form field and get the username.
The major security flaw I can see is replay attacks. How do I prevent someone from getting ahold of that encrypted string, and POSTing as that user? I know I can use SSL to make it harder to steal that string, and maybe I can rotate the encryption key on a regular basis to limit the amount of time that the string is good for, but I'd really like to find a bulletproof solution. Anybody have any ideas? Does the ASP.Net ViewState prevent replay? If so, how do they do it?
Edit: I'm hoping for a solution that doesn't require anything stored in a database. Application state would be okay, except that it won't survive an IIS restart or work at all in a web farm or garden scenario. I'm accepting Chris's answer, for now, because I'm not convinced it's even possible to secure this without a database. But if someone comes up with an answer that does not involve the database, I'll accept it!
A: You could use some kind of random challenge string that's used along with the username to create the hash. If you store the challenge string on the server in a database you can then ensure that it's only used once, and only for one particular user.
A: In one of my apps, to stop 'replay' attacks I have inserted IP information into my session object. Every time I access the session object in code, I make sure to pass the Request.UserHostAddress with it, and then I compare to make sure the IPs match up. If they don't, then obviously someone other than the original person made this request, so I return null. It's not the best solution, but it is at least one more barrier to stop replay attacks.
A: If you hash in a time-stamp along with the user name and password, you can close the window for replay attacks to within a couple of seconds. I don't know if this meets your needs, but it is at least a partial solution.
A: There are several good answers here and putting them all together is where the answer ultimately lies:
*
*Block-cipher encrypt (with AES-256+) and hash (with SHA-2+) all state/nonce related information that is sent to a client. Hackers with otherwise just manipulate the data, view it to learn the patterns and circumvent everything else. Remember ... it only takes one open window.
*Generate a one-time random and unique nonce per request that is sent back with the POST request. This does two things: It ensures that the POST response goes with THAT request. It also allows tracking of one-time use of a given set of get/POST pairs (preventing replay).
*Use timestamps to make the nonce pool manageable. Store the time-stamp in an encrypted cookie per #1 above. Throw out any requests older than the maximum response time or session for the application (e.g., an hour).
*Store a "reasonably unique" digital fingerprint of the machine making the request with the encrypted time-stamp data. This will prevent another trick wherein the attacker steals the clients cookies to perform session-hijacking. This will ensure that the request is coming back not only once but from the machine (or close enough proximity to make it virtually impossible for the attacker to copy) the form was sent to.
There are ASPNET and Java/J2EE security filter based applications that do all of the above with zero coding. Managing the nonce pool for large systems (like a stock trading company, bank or high volume secure site) is not a trivial undertaking if performance is critical. Would recommend looking at those products versus trying to program this for each web-application.
A: If you really don't want to store any state, I think the best you can do is limit replay attacks by using timestamps and a short expiration time. For example, server sends:
{Ts, U, HMAC({Ts, U}, Ks)}
Where Ts is the timestamp, U is the username, and Ks is the server's secret key. The user sends this back to the server, and the server validates it by recomputing the HMAC on the supplied values. If it's valid, you know when it was issued, and can choose to ignore it if it's older than, say, 5 minutes.
A good resource for this type of development is The Do's and Don'ts of Client Authentication on the Web
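A hedged sketch of generating such a token in C# (the separator, encoding, and key handling are arbitrary choices here, not part of any standard):
using System;
using System.Security.Cryptography;
using System.Text;
static class TokenHelper
{
    public static string CreateToken(string username, byte[] serverKey)
    {
        string payload = DateTime.UtcNow.ToString("o") + "|" + username; // Ts|U
        using (HMACSHA256 hmac = new HMACSHA256(serverKey))
        {
            string mac = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
            return payload + "|" + mac; // Ts|U|HMAC({Ts, U}, Ks)
        }
    }
    // On postback: split the token, recompute the HMAC over "Ts|U", compare,
    // and reject if Ts is older than your expiration window.
}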
A: Can you use memory or a database to maintain any information about the user or request at all?
If so, then on request for the form, I would include a hidden form field whose contents are a randomly generated number. Save this token to in application context or some sort of store (a database, flat file, etc.) when the request is rendered. When the form is submitted, check the application context or database to see if that randomly generated number is still valid (however you define valid - maybe it can expire after X minutes). If so, remove this token from the list of "allowed tokens".
Thus any replayed requests would include this same token which is no longer considered valid on the server.
A: I am new to some aspects of web programming but I was reading up on this the other day. I believe you need to use a Nonce.
A: (Replay attacks can easily involve IP/MAC spoofing, plus you're up against dynamic IPs.)
It is not just replay you are after here; in isolation it is meaningless. Just use SSL and avoid handcrafting anything.
ASP.NET ViewState is a mess; avoid it. While PKI is heavyweight and bloated, at least it works without inventing your own security 'schemes'. So if I could, I'd use it and always go for mutual authentication. Server-only authentication is quite useless.
A: The ViewState includes security functionality. See this article about some of the built-in security features in ASP.NET. It does validation against the server machineKey in the machine.config on the server, which ensures that each postback is valid.
Further down in the article, you also see that if you want to store values in your own hidden fields, you can use the LosFormatter class to encode the value in the same way that the ViewState uses for encryption.
private string EncodeText(string text) {
StringWriter writer = new StringWriter();
LosFormatter formatter = new LosFormatter();
formatter.Serialize(writer, text);
return writer.ToString();
}
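For completeness, a matching decode sketch (the cast assumes a string was serialized, as above):
private string DecodeText(string encoded) {
    LosFormatter formatter = new LosFormatter();
    return (string)formatter.Deserialize(encoded);
}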
A: Use https... it has replay protection built in.
A: If you only accept each key once (say, make the key a GUID, and then check when it comes back), that would prevent replays. Of course, if the attacker responds first, then you have a new problem...
A: Is this WebForms or MVC? If it's MVC you could utilize the AntiForgery token. This seems like it's similar to the approach you mention except it uses basically a GUID and sets a cookie with the guid value for that post. For more on that see Steve Sanderson's blog: http://blog.codeville.net/2008/09/01/prevent-cross-site-request-forgery-csrf-using-aspnet-mvcs-antiforgerytoken-helper/
Another thing, have you considered checking the referrer on the postback? This is not bulletproof but it may help.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: WPF - Load Font from Stream? I have a MemoryStream with the contents of a Font File (.ttf) and I would like to be able to create a FontFamily WPF object from that stream WITHOUT writing the contents of the stream to disk. I know this is possible with a System.Drawing.FontFamily but I cannot find out how to do it with System.Windows.Media.FontFamily.
Note: I will only have the stream, so I can't pack it as a resource in the application and because of disk permissions issues, will not be able to write the font file to disk for reference as "content"
UPDATE:
The API docs now describe how an application resource can be used, though it is not clear to me whether that means an embedded resource in the assembly or a file on disk.
You can use a base URI value when you reference a font that is packaged as part of the application. For example, the base URI value can be a "pack://application" URI, which lets you reference fonts that are packaged as application resources. The following code example shows a font reference that is composed of a base URI value and a relative URI value.
A: There is a similar question here, which contains a supposed solution by converting a System.Drawing.FontFamily to a WPF font family, all in memory without any file IO:
public static void Load(MemoryStream stream)
{
byte[] streamData = new byte[stream.Length];
stream.Read(streamData, 0, streamData.Length);
IntPtr data = Marshal.AllocCoTaskMem(streamData.Length); // Very important.
Marshal.Copy(streamData, 0, data, streamData.Length);
PrivateFontCollection pfc = new PrivateFontCollection();
pfc.AddMemoryFont(data, streamData.Length);
MemoryFonts.Add(pfc); // Your own collection of fonts here.
Marshal.FreeCoTaskMem(data); // Very important.
}
public static System.Windows.Media.FontFamily LoadFont(int fontId)
{
if (!Exists(fontId))
{
return null;
}
/*
NOTE:
This is basically how you convert a System.Drawing.FontFamily to System.Windows.Media.FontFamily, using PrivateFontCollection.
*/
return new System.Windows.Media.FontFamily(MemoryFonts[fontId].Families[0].Name);
}
This seems to use the System.Drawing.PrivateFontCollection(^) to add a System.Drawing.Font created from a MemoryStream and then use the Families[0].Name of that font to pass into the System.Windows.Media.FontFamily constructor. I assume the family name would then be a URI to the instance of that font in the PrivateFontCollection but you'd probably have to try it out.
A: The best approach I could think of was to save the font to a temp directory and immediately load it using the FontFamily constructor that accepts a URI.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: How to reference javadocs to dependencies in Maven's eclipse plugin when javadoc not attached to dependency I use Eclipse, Maven, and Java in my development. I use Maven to download dependencies (jar files and javadoc when available) and Maven's eclipse plug-in to generate the .project and .classpath files for Eclipse. When the dependency downloaded does not have attached javadoc I manually add a link for the javadoc in the .classpath file so that I can see the javadoc for the dependency in Eclipse. Then when I run Maven's eclipse plugin to regenerate the .classpath file it of course wipes out that change.
Is there a way to configure Maven's eclipse plug-in to automatically add classpath attributes for javadoc when running Maven's eclipse plug-in?
I'm only interested in answers where the javadoc and/or sources are not provided for the dependency in the maven repository, which is the case most often for me. Using downloadSources and/or downloadJavadocs properties won't help this problem.
A: From the Maven Eclipse Plugin FAQ
The following example shows how to do
this in the command-line:
mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true
or in your pom.xml:
<project>
[...]
<build>
[...]
<plugins>
[...]
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-eclipse-plugin</artifactId>
<configuration>
<downloadSources>true</downloadSources>
<downloadJavadocs>true</downloadJavadocs>
</configuration>
</plugin>
[...]
</plugins>
[...]
</build>
[...]
</project>
A: Generally Javadocs are not primarily used as dependency . Because these are neither required at compile nor runtime. It’s just to help the developer while developing or debugging.
Assuming using the java IDE Eclipse we can use the java docs as referenced. Following are the approaches we can associate the javadocs/sources with the respective jars.
1. If it’s non-maven project :
Download the javadocs jar or zipped file, whatever available and placed it in some directory.
Right click on the application project in the IDE Eclipse, click Properties and choose Java Build Path then select tab Libraries under the Java Build Path. Now expand the jar you want to link with java docs/source. Select the Javadoc location link and click on Edit button, a new window appears where we need to choose the javadocs jar path. Click OK and we have linked the javadoc/source with the respective jars.
2. If it’s a maven project
If we are using the Maven project then go to jar files under the Maven dependency under the project in Project Explorer view as shown below. Now right click on the jar file you want to add the Javadoc/source, choose Maven then click on Javadoc or Source you want to link with the project. Now IDE will automatically download the required javadoc/source and will link it with the respective jar in the project.
You can verify this by right click on the project in the IDE and click on Java Build Path and select the Libraries tab under the Java Build Path and then expand the desired jar, here when you click the Edit button you will see the linked path of the Javadoc/Source with the respective jar as shown below in the image.
3. If it's a Maven project and you want to set the default behavior:
Eclipse will automatically download the Javadoc/source along with the main required jar from the start.
This instructs Maven, by default, to download the Javadoc/sources for all the jars linked in the project.
Click Window – Preferences – select Maven and tick the checkbox Download Artifact Javadoc.
Click Apply and save; now when you create a new Maven project, the Javadocs will be downloaded and linked with all the dependent jars by default.
You can verify this by right-clicking the project, choosing Properties, and checking under Java Build Path that the Javadocs are linked with all the jars.
If your project is a Maven project, it's usually best to use the 2nd approach, because the IDE and Maven take care of downloading the correct version of the Javadoc/source and linking it with the right jar.
The 3rd approach is a bit costly, because the Javadoc/sources will be downloaded for all the dependent jars, and you may not be interested in all of them.
A: I'm running STS 2.8.1, which is basically Eclipse + Spring tools. In an existing Maven project, I right-clicked on the project -> Maven -> Download Sources and Download JavaDocs.
A: As mentioned in How to download sources and javadoc artifacts with Maven Eclipse plugin from other repository?, you can do this:
In Eclipse go to Windows-> Preferences-> Maven. Check the box that says "Download Artifact Javadoc." That has worked well for me.
A: You might consider just avoiding this problem completely by installing the javadoc jar into your local repository manually using the install-file goal and passing in the -Dclassifier=javadoc option. Once you do that the .classpath that mvn generates should be correct.
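For example, a sketch of that install (the coordinates and file path here are placeholders, not real values):
mvn install:install-file -Dfile=path/to/mylib-1.0-javadoc.jar \
    -DgroupId=com.example -DartifactId=mylib -Dversion=1.0 \
    -Dpackaging=jar -Dclassifier=javadoc
Maven's javadoc classifier convention is what lets the eclipse plugin find the jar afterwards.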
If you use a remote repo as a proxy to central you could also deploy the javadocs to that repo and then everyone else who uses that proxy will now get the javadocs automatically as well.
A: Would having the sources for the dependency help? You can tell the eclipse plugin to download those (and refer to them in the .classpath) with -DdownloadSources=true
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54"
} |
Q: catching button clicks in javascript without server interaction I've got a sign-up form that requires the user to enter their email and password, which are in two separate text boxes. I want to provide a button that the user can click so that the password (which is masked) will appear in a popup when the user clicks the button.
Currently my JavaScript code for this is as follows:
function toggleShowPassword() {
var button = $get('PASSWORD_TEXTBOX_ID');
var password;
if (button)
{
password = button.value;
alert(password);
button.value = password;
}
}
The problem is that every time the user clicks the button, the password is cleared in both Firefox and IE. I want them to be able to see their password in clear text to verify without having to retype their password.
My questions are:
*
*Why does the password field keep getting reset with each button click?
*How can I make it so the password field is NOT cleared once the user has seen his/her password in clear text?
A: I would assume that the browser has some issue with the script attempting to set the value of a password field:
button.value = password;
This line of code has no real purpose: button.value is not affected by the previous lines, where you only read the value and use it in the alert().
This should be a simpler version of your code:
function toggleShowPassword() {
var button = $get('PASSWORD_TEXTBOX_ID');
if (button)
{
alert(button.value);
}
}
edit: actually I just did a quick test, and Firefox has no problem setting the password field's value with code such as button.value = "blah". So it doesn't seem like this would be the case ... I would check if your ASP.NET code is causing a postback as others have suggested.
A: It sounds like you're doing a request to the server on each click; the password box being reset on each page load is typical browser behavior.
A: You didn't say you were using ASP.NET, but...
By design, ASP.NET clears during postback the value of TextBox controls whose Mode is Password. I work around this in a subclass with the following code:
// If the TextMode is "password", the Text property won't work
if ( TextMode == System.Web.UI.WebControls.TextBoxMode.Password )
Attributes[ "value" ] = stringValue;
A: If you don't want the button to submit the form, then be sure it has type 'button' rather than 'submit'. For example, you might do something like this:
<input type="button" value="Show My Password" onclick="toggleShowPassword()"/>
A: In your HTML:
<input type="button" onclick="toggleShowPassword();">
You need to use "button" rather than "submit" to prevent your form from posting.
A: I did a quick example of a working version:
<html>
<head>
<script type="text/javascript" src="prototype.js"></script>
<script type="text/javascript">
function toggleShowPassword() {
var textBox = $('PasswordText');
if (textBox)
{
alert(textBox.value);
}
}
</script>
</head>
<body>
<input type="password" id="PasswordText" /><input type="button" onclick="toggleShowPassword();" value="Show Password" />
</body>
</html>
The key is that the input is of type button and not submit. I used the prototype library for retrieving the element by ID.
A: You do not need to do button.value = password; since reading the value does not change it. I'm not sure why it's being cleared, maybe JavaScript does not allow password field values to be modified.
A: hah!
the answer if here:
http://forums.asp.net/p/1067527/1548528.aspx
I figured out the solution... the fix was simple change
OnClientClick="myOnClick()"
to
OnClientClick="return myOnClick()"
Here's the fully corrected code...
function myOnClick() {
//perform some other actions...
return false;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: C# .Net 3.5 Code to replace a file extension using LINQ I've written this very simple function to replace a file extension using LINQ in C#.NET 3.5 however I have a feeling that there's a more elegant way to do this. (I'm not committed to using LINQ here - just looking for a more elegant approach.) Ideas?
private string ReplaceFileExtension(string fileName, string newExtension)
{
string[] dotSplit = fileName.Split('.');
return String.Join(".", dotSplit.Take(dotSplit.Length - 1).ToArray()) + "." + newExtension;
}
(I'm aware of the fact that this won't work if the original file name doesn't have a dot.)
A: It's very easy... just use System.IO.Path.ChangeExtension
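For example (a quick sketch):
using System.IO;

string newName = Path.ChangeExtension("report.txt", ".pdf"); // "report.pdf"
Unlike the Split-based version, it also copes with file names that have no dot, and passing null as the new extension removes the extension entirely.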
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do you generate a random number in C#? I would like to generate a random floating point number between 2 values. What is the best way to do this in C#?
A: The only thing I'd add to Eric's response is an explanation; I feel that knowledge of why code works is better than knowing what code works.
The explanation is this: let's say you want a number between 2.5 and 4.5. The range is 2.0 (4.5 - 2.5). NextDouble only returns a number between 0 and 1.0, but if you multiply this by the range you will get a number between 0 and range.
So, this would give us random doubles between 0.0 and 2.0:
rng.NextDouble() * 2.0
But, we want them between 2.5 and 4.5! How do we do this? Add the smallest number, 2.5:
2.5 + rng.NextDouble() * 2.0
Now, we get a number between 0.0 and 2.0; if you add 2.5 to each of these values we see that the range is now between 2.5 and 4.5.
At first I thought that it mattered if b > a or a > b, but if you work it out both ways you'll find it works out identically so long as you don't mess up the order of the variables used. I like to express it with longer variable names so I don't get mixed up:
double NextDouble(Random rng, double min, double max)
{
return min + (rng.NextDouble() * (max - min));
}
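Usage is then straightforward:
Random rng = new Random();
double sample = NextDouble(rng, 2.5, 4.5); // a random double in [2.5, 4.5)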
A: System.Random r = new System.Random();
double rnd( double a, double b )
{
return a + r.NextDouble()*(b-a);
}
A: // generate a random number starting with 5 and less than 15
Random r = new Random();
int num = r.Next(5, 15);
For doubles, use NextDouble instead of Next; note that NextDouble takes no arguments, so you scale the result yourself, e.g. min + r.NextDouble() * (max - min)
A: How random? If you can deal with pseudo-random then simply:
Random randNum = new Random();
double result = min + randNum.NextDouble() * (max - min);
(NextDouble itself takes no arguments, so you scale its 0-1 output into your range.) If you want a "better" random number, then you probably should look at the Mersenne Twister algorithm. Plenty of people have already implemented it for you though
A: Here is a snippet of how to get cryptographically safe random numbers:
This will fill in the 8 bytes with a cryptographically strong sequence of random values.
byte[] salt = new byte[8];
RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
rng.GetBytes(salt);
For more details see How Random is your Random??" (inspired by a CodingHorror article on deck shuffling)
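If you then need a number rather than raw bytes, one hedged sketch is to scale the bytes yourself (min and max are your own bounds; this scaling is illustrative, not part of the RNGCryptoServiceProvider API):
byte[] bytes = new byte[8];
new RNGCryptoServiceProvider().GetBytes(bytes);
// read the 8 bytes as a 64-bit value, scale into [0, 1], then into [min, max]
double unit = BitConverter.ToUInt64(bytes, 0) / (double)ulong.MaxValue;
double value = min + unit * (max - min);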
A: For an explanation of why Longhorn has been downmodded so much: http://msdn.microsoft.com/en-us/magazine/cc163367.aspx Look for the implementation of NextDouble and the explanation of what a random double is.
That link is also a good example of how to use cryptographic random numbers (like Sameer mentioned), only with actually useful outputs instead of a bit stream.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: Outlook Email via a Webpage I have a web application developed with ASP.net and C# that is running on my company's intranet. Because all the users of this application are using Microsoft Outlook without exception, I would like for the application to open up an Outlook message on the client-side. I understand that Office is designed to be run on the desktop and not from a server, however I have no trouble creating a Word or Excel document on the client-side.
I have code that instantiates the Outlook object using the Microsoft.Office.Interop.Outlook namespace and Outlook installed on the server. When I try to run the code from the server, I get a DCOM source error message that states "The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {000C101C-0000-0000-C000-000000000046} to the user This security permission can be modified using the Component Services administrative tool." I have modified the permissions using the Component Services tool, but still get this same error.
Is there a way to overcome this or is this a fruitless exercise because Outlook cannot be opened on the client side from the server-side code?
Mailto will not work due to the extreme length the emails can reach. Also, the user that sends it needs to add eye-candy to the text for the recipients.
A: You cannot open something on the client from server side code. You'd have to use script on the page to do what you're wanting (or something else client-side like ActiveX or embedded .NET or something)
Here's a sample Javascript that invokes an Outlook MailItem from an webpage. This could easily be injected into the page from your server-side code so it executes on the client.
http://www.codeproject.com/KB/aspnet/EmailUsingJavascript.aspx
A: (hint: formatting in your question)
I'm not understanding what's wrong with a mailto link or a formmail-type page.
A: If everyone in the company uses Outlook, then just using a standard "mailto" link should always open Outlook. It sounds like you're over-engineering this.
A: Do you want to open an existing E-Mail or create a new one?
Perhaps I misunderstood your question; could you provide a link like:
mailto:[email protected]?subject=This%20is%20the%20subject&body=Hello%20there!
When the user clicks on that a link, a new Outlook-E-Mail will be opened and the:
*
*Recipient: recipient@email-tld
*Subject: This is the subject
*Body: Hello there!
All these fields are already filled from the link.
A: I'll just throw this out there cuz it's been asked.
Mailto has a lot of disadvantages, mainly size. Since the sender needs to do a lot of formatting on the email text, the generated HTML can take up more space than a mailto link can carry.
thanks for the suggestion though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Why does a button control need to be clicked twice? I've got a web application working using VB and Ajax. I'm using updatepanels to avoid the irritating "flicker" on postbacks to the server.
I would like to have a button control defined within the updatepanel itself (tried moving it outside and got some catastrophic error, so left it there) that makes the current panel not visible and a sibling panel visible. This works with the exception that the button must be clicked twice. Not double clicked, but clicked once, then clicked again.
In setting breakpoints I discovered the code-behind that's attached to the button is actually being executed on the first click, but the panels don't switch as expected. If I click the same button, or worse yet a different button, the expected behavior of the second panel appearing occurs. However, with the second button being clicked there's an unwanted bonus: a third panel is also displayed, made visible by the second button's click.
I'm assuming this behavior is due to the updatepanel and its Ajax nature. Is there a way to avoid the second click? Am I misusing the updatepanel? I really wanted to use a modal popup (right out of the AjaxToolKit) but had problems with posting back the data so I opted for this approach. Any insights, assistance, even criticism would be welcome as this has plagued me long enough. Thanks
A: If you get rid of the UpdatePanels do things work as expected with PostBacks? Chances are something in your Page_Load or other event higher up the chain are "resetting" things in some way before it gets to your click event. Could this be the case?
A: I think your problem is that only the update panel receives data from the server after the method executes. The panel you are trying to change is outside of the update panel, so it does not know that its properties have changed.
You either need to do a full page postback or have the panel you wish to modify inside the update panel.
A: I have run into this before and resolved it, I just can't remember how. I will try to find my old code and get back to you. One thought: do you have EnablePartialRendering enabled in your ScriptManager? Maybe try wrapping both containers in a third panel.
A: Your update panel is sitting inside the other panels.
Should that be the other way around? AFAIK only controls within the update panel will get updated via the AJAX call.
A: Here's a fairly simple solution. (I was having the same problem this morning.)
The UpdatePanel can't render stuff outside itself. So, as you noticed, the updates are happening, but you're not seeing the result.
The easiest solution is to force a full postback. You can do that like this:
protected override void OnInit(EventArgs e)
{
var scriptManager = ScriptManager.GetCurrent(this);
// or this.Page in a UserControl, etc.
scriptManager.RegisterPostBackControl(someButton);
scriptManager.RegisterPostBackControl(someOtherButton);
// etc. for each control that needs to update something outside the UpdatePanel
}
This still allows the buttons themselves to be updated in the UpdatePanel by Ajax (e.g. changing their state to disabled or enabled). The full postback only happens if the buttons are clicked.
A: As others have said, an update panel only updates its contents; that's one of the main benefits of using it.
Panel2 and pnlPrvCmt need to be inside your update panel for your button click method to work. Another option would be to put Panel2 inside one update panel and pnlPrvCmt inside a second update panel. Then any control inside either update panel will cause both to refresh, as long as the UpdateMode=Always (which it is by default).
A: Try giving the dynamic control an ID when it is created. For some reason this is required by .NET for a dynamic control to work in this context.
myControl.ID = "newID"
A: I have found this to occur under 2 different scenarios:
*
*No ID set on the control. Either the ID is left off of the markup or the ID was not set when a dynamic control was created. ASP.NET uses the ID to track actions.
*Nested UpdatePanels. Scenario: When using a MasterPage, you might have a content placeholder that you wrap in an UpdatePanel so that an UpdatePanel is not needed in the content on the page. Then, in developing your page you might, as a habit, add an UpdatePanel.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What can cause .NET assembly registration to fail? We've seen an issue where one of our installers (msi) returns the error code 2908, which is used to indicate that an assembly failed to register. Later in the installation, we get the following (sanitized) error:
MyAssemblyName, version="1.0.1.1",
culture="neutral",
publicKeyToken="119EFC79848A50".
Please refer to Help and Support for
more information. HRESULT: 0x8002802F.
The assembly registers properly on most systems. Has anyone else encountered this issue? How did you solve it?
A: I found a pair of blog postings that appear to cover this topic.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What's the best way to insert/update/delete multiple records in a database from an application? Given a small set of entities (say, 10 or fewer) to insert, delete, or update in an application, what is the best way to perform the necessary database operations? Should multiple queries be issued, one for each entity to be affected? Or should some sort of XML construct that can be parsed by the database engine be used, so that only one command needs to be issued?
I ask this because a common pattern at my current shop seems to be to format up an XML document containing all the changes, then send that string to the database to be processed by the database engine's XML functionality. However, using XML in this way seems rather cumbersome given the simple nature of the task to be performed.
A: You didn't mention what database you are using, but in SQL Server 2008, you can use table variables to pass complex data like this to a stored procedure. Parse it there and perform your operations. For more info, see Scott Allen's article on OdeToCode.
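As a hedged sketch of what that can look like from C# with a table-valued parameter (the table type, procedure, column names, and connectionString below are assumptions for illustration; this needs SQL Server 2008 and .NET 3.5 SP1 or later):
using System.Data;
using System.Data.SqlClient;

// Assumes: CREATE TYPE dbo.EntityChange AS TABLE (Id INT, Name NVARCHAR(100));
//          CREATE PROCEDURE dbo.ApplyEntityChanges @changes dbo.EntityChange READONLY ...
DataTable table = new DataTable();
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Name", typeof(string));
table.Rows.Add(1, "Widget A");

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("dbo.ApplyEntityChanges", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    SqlParameter p = cmd.Parameters.AddWithValue("@changes", table);
    p.SqlDbType = SqlDbType.Structured;  // marks it as a table-valued parameter
    p.TypeName = "dbo.EntityChange";
    conn.Open();
    cmd.ExecuteNonQuery();
}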
A: It depends on how many you need to do, and how fast the operations need to run. If it's only a few, then doing them one at a time with whatever mechanism you have for doing single operations will work fine.
If you need to do thousands or more, and it needs to run quickly, you should re-use the connection and command, changing the arguments for the parameters to the query during each iteration. This will minimize resource usage. You don't want to re-create the connection and command for each operation.
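A minimal sketch of that reuse pattern (the Widgets table, the Widget class, and connectionString are placeholders):
using System.Data;
using System.Data.SqlClient;

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(
    "UPDATE Widgets SET Name = @name WHERE Id = @id", conn))
{
    cmd.Parameters.Add("@name", SqlDbType.NVarChar, 100);
    cmd.Parameters.Add("@id", SqlDbType.Int);
    conn.Open();
    foreach (Widget w in widgets)
    {
        // only the parameter values change between iterations;
        // the connection and command are reused every time
        cmd.Parameters["@name"].Value = w.Name;
        cmd.Parameters["@id"].Value = w.Id;
        cmd.ExecuteNonQuery();
    }
}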
A: Most databases support BULK UPDATE or BULK DELETE operations.
A: From a "business entity" design standpoint, if you are doing different operations on each of a set of entities, you should have each entity handle its own persistence.
If there are common batch activities (like "delete all older than x date", for instance), I would write a static method on a collection class that executes the batch update or delete. I generally let entities handle their own inserts atomically.
A: The answer depends on the volume of data you're talking about. If you've got a fairly small set of records in memory that you need to synchronise back to disk then multiple queries is probably appropriate. If it's a larger set of data you need to look at other options.
I recently had to implement a mechanism where an external data feed gave me ~17,000 rows of data that I needed to synchronise with a local table. The solution I chose there was to load the external data into a staging table and call a stored proc that did the synchronisation completely within the database.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I get the ClickOnce Publish version to match the AssemblyInfo.cs File version? Every time I publish the application in ClickOnce I get it to update the revision number by one. Is there a way to have this change automatically update the version number in the AssemblyInfo.cs file (all our error reporting looks at the Assembly Version)?
A: I implemented this recently using some custom tasks. An issue I found with implementing this with ClickOnce is that all your DLL files are updated. This causes the ClickOnce update to download all the application files every update. This bypasses one of the nice features of the ClickOnce deployment where only the modified files are re-downloaded in an update.
Just something to think about when implementing something like this with ClickOnce.
A: We use Team Foundation Server Team Build and have added a block to the TFSBuild.proj's AfterCompile target to trigger the ClickOnce publish with our preferred version number:
<MSBuild Projects="$(SolutionRoot)\MyProject\Myproject.csproj"
Properties="PublishDir=$(OutDir)\myProjectPublish\;
ApplicationVersion=$(PublishApplicationVersion);
Configuration=$(Configuration);Platform=$(Platform)"
Targets="Publish" />
The PublishApplicationVersion variable is generated by a custom MSBuild task to use the TFS Changeset number, but you could use your own custom task or an existing solution to get the version number from the AssemblyInfo file.
This could theoretically be done in your project file (which is just an MSBuild script anyway), but I'd recommend against deploying from a developer machine.
I'm sure other continuous integration (CI) solutions can handle this similarly.
Edit: Sorry, got your question backwards. Going from the ClickOnce version number to the AssemblyInfo file should be doable. I'm sure the MSBuild Community Tasks (link above) have a task for updating the AssemblyInfo file, so you'd just need a custom task to pull the version number from the ClickOnce configuration XML.
However, you may also consider changing your error reporting to include the ClickOnce publish version too:
if (System.Deployment.Application.ApplicationDeployment.IsNetworkDeployed)
{
Debug.WriteLine(System.Deployment.Application.ApplicationDeployment.
CurrentDeployment.CurrentVersion);
}
A: Steps:
*
*Use external incrementing version number (if you leverage a continuous integration server like CruiseControl.NET, then it comes from the build label).
*Use GlobalVersionInfo.cs (file link-referenced by all projects in your solution) to hold the current version and update it on the build with the AssemblyInfo task from the MSBuild Community tasks.
*Script Mage command-line tool from the .NET SDK to update the ClickOnce manifest, using the same version (see the -v and -mv switches).
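As a hedged sketch of that mage step (manifest names and versions are placeholders; both manifests also need re-signing afterwards):
mage -Update MyApp.exe.manifest -Version 1.0.0.42
mage -Update MyApp.application -AppManifest MyApp.exe.manifest -Version 1.0.0.42 -MinVersion 1.0.0.42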
BTW, a nice bonus is that, whenever you automatically publish a newer ClickOnce deployment version via the integration script, if you also specify the minimal version to mage.exe (same as version), then every user will be updated automatically on the next application launch.
A: You'll probably need to create a piece of code that updates AssemblyInfo.cs according to the version number stored in the .csproj file. (The ClickOnce deploy version is stored inside an XML tag.)
You'd then change your .csproj file to run this bit of code when Publish|Release build is performed. The MSBuild folks have blogged about how to perform custom actions during certain build types; check the MSBuild team blog.
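As a sketch of how that hook can look in the .csproj (AfterPublish is a standard extension point in Microsoft.Common.targets; the UpdateAssemblyVersion task here is hypothetical and stands in for whatever custom or community task you use):
<Target Name="AfterPublish">
  <UpdateAssemblyVersion File="Properties\AssemblyInfo.cs"
                         Version="$(ApplicationVersion)" />
</Target>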
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: Compressing a TIF file I'm trying to convert a multipage color tiff file to a CompressionCCITT3 tiff in C#. I realize that I need to make sure that all pixels are 1 bit. I have not found a useful example of this online.
A: You need this conversion as CCITT3 and CCITT4 don't support color (if I remember right).
A: Pimping disclaimer: I work for Atalasoft, a company that makes .NET imaging software.
Using dotImage, this task becomes something like this:
FileSystemImageSource source = new FileSystemImageSource("path-to-your-file.tif", true); // true = loop over all frames
// tiff encoder will auto-select an appropriate compression - CCITT4 for 1 bit.
TiffEncoder encoder = new TiffEncoder();
encoder.Append = true;
// DynamicThresholdCommand is very good for documents. For pictures, use DitherCommand
DynamicThresholdCommand threshold = new DynamicThresholdCommand();
using (FileStream outstm = new FileStream("path-to-output.tif", FileMode.Create)) {
while (source.HasMoreImages()) {
AtalaImage image = source.AcquireNext();
AtalaImage finalImage = image;
// convert when needed.
if (image.PixelFormat != PixelFormat.Pixel1bppIndexed) {
finalImage = threshold.Apply(image).Image;
}
encoder.Save(outstm, finalImage, null);
if (finalImage != image) {
finalImage.Dispose();
}
source.Release(image);
}
}
The Bob Powell example is good, as far as it goes, but it has a number of problems, not the least of which is that it's using a simple threshold, which is terrific if you want speed and don't actually care what your output looks like, or if your input domain really is pretty much black and white already, just represented in color. Binarization is a tricky problem. When your task is to reduce available information by 1/24th, how to keep the right information and throw away the rest is a challenge. DotImage has six different tools (IIRC) for binarization. SimpleThreshold is bottom of the barrel, from my point of view.
A: I suggest experimenting with the desired results first using tiff and image utilities before diving into the coding. I found VIPS to be a handy tool. The next option is to look into what LibTIFF can do. I've had good results with the free LibTiff.NET using C# (see also stackoverflow). I was very disappointed by the GDI tiff functionality, although your mileage may vary (I need the missing 16-bit-grayscale).
Also you can use the LibTiff utilities (e.g. see http://www.libtiff.org/man/tiffcp.1.html)
A: I saw the above code, and it looked like it was converting every pixel with manual logic.
Would this work for you?
Imports System.Drawing.Imaging
'get the color tif file
Dim bmpColorTIF As New Bitmap("C:\color.tif")
'select an area of the tif (will grab all frames)
Dim rectColorTIF As New Rectangle(0, 0, bmpColorTIF.Width, bmpColorTIF.Height)
'clone the rectangle as 1-bit color tif
Dim bmpBlackWhiteTIF As Bitmap = bmpColorTIF.Clone(rectColorTIF, PixelFormat.Format1bppIndexed)
'do what you want with the new bitmap (save, etc)
...
Note: there are a ton of pixelformats to choose from.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is it possible to craft a glob that matches files in the current directory and all subdirectories? For this directory structure:
.
|-- README.txt
|-- firstlevel.rb
`-- lib
|-- models
| |-- foo
| | `-- fourthlevel.rb
| `-- thirdlevel.rb
`-- secondlevel.rb
3 directories, 5 files
The glob would match:
firstlevel.rb
lib/secondlevel.rb
lib/models/thirdlevel.rb
lib/models/foo/fourthlevel.rb
A: Apologies if I've missed the real point of the question but, if I was using sh/bash/etc., then I would probably use find to do the job:
find . -name '*.rb' -type f
Globs can get a bit nasty when used from within a script and find is much more flexible.
A: In zsh, **/*.rb works
A: In Ruby itself:
Dir.glob('**/*.rb') perhaps?
A: Looks like it can't be done from bash
If you're using zsh then
ls **/*.rb
will produce the correct result.
Otherwise you can hijack the ruby interpreter (and probably those of other languages)
ruby -e "puts Dir.glob('**/*.rb')"
Thanks to Chris and Gaius for your answers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Arbitrary Naming Convention (Business Objects) Ok, do you do Business.Name or Business.BusinessName
SubCategory.ID or SubCategory.SubCategoryID
What about in your database?
Why?
I'm torn with both. Would love there to be a "right answer"
A: The only "right" answer is to be consistent. Decide upfront which one you will be using in a project, and stick to it.
A: The main drawback of using ID, Name etc is that you have to qualify them with the table name if you are writing an SQL join which overlaps two tables.
Despite that, I find it far more concise and readable to just use ID and Name - your code and tables will 'flow' much more easily past the eyes. Easier to type and less redundant. And typing SELECT Business.Name FROM ... in an SQL query is not really more troublesome than typing SELECT BusinessName FROM ...
In general, if I find myself repeating semantic information it alerts me to look for ways to eliminate it or at least recognise why it repeats. This could be on the small scale (attribute names) or the large scale (behaviour patterns or common class structures).
A: For very common properties like "Name" and "ID", the convention I have used is to not put the entity name in the field. For more unusual properties, I do put the entity name.
This is a naming convention decision, and I have not regretted projects where this is the convention; if you put the name of the entity on each ID, it ends up seeming too verbose.
A: We do ID on anything that's the primary key. Saying SubCategory.SubCategoryID seems redundant.
A: I may not be right, but I think Id is a tastier dish.
thing.id
because if you are going to write any reflective stuff that deals with your objects and needs the primary key, it's way easier to know it everywhere than to try to determine it with a formula.
As for the other, that's totally a preference, and I don't see any real implications other than time wasted typing the extra characters; and it's .NET, so no one actually types namespaces anyway.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there a way to generate WMI code/classes? How do you generate C# classes for accessing WMI?
A: To generate strongly typed WMI classes, use the Management Strongly typed class generator (MgmtClassGen.exe). It's usually in C:\Program Files\Microsoft Visual Studio X\SDK\vX\Bin. The parameters are at MSDN and they even have a page describing the code generated. If you have to do a lot of work with WMI, it's a lifesaver.
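A typical invocation looks something like this (the output path is a placeholder):
mgmtclassgen Win32_LogicalDisk /l CS /n root\cimv2 /p C:\Temp\LogicalDisk.cs
That generates a strongly typed C# wrapper for the Win32_LogicalDisk class from the root\cimv2 namespace.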
A: Easier approach (Visual Studio users):
*
*Add WMI classes to VisualStudio's Server Explorer. E.g.
*Get Visual Studio to call MgmtClassGen.exe for you. E.g.
@VanOrman provides additional references for MgmtClassGen.exe.
A: You can try WMICodeCreator; it generates VBScript, JScript, VB.NET, and C# code
Download WMICodeCreator from Microsoft
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Single Sign On across multiple domains Our company has multiple domains set up with one website hosted on each of the domains. At this time, each domain has its own authentication which is done via cookies.
When someone logged on to one domain needs to access anything from the other, the user needs to log in again using different credentials on the other website, located on the other domain.
I was thinking of moving towards single sign on (SSO), so that this hassle can be eliminated. I would appreciate any ideas on how this could be achieved, as I do not have any experience in this regard.
Thanks.
Edit:
The websites are mix of internet (external) and intranet (internal-used within the company) sites.
A: The SSO solution that I've implemented here works as follows:
*
*There is a master domain, login.mydomain.example with the script master_login.php that manages the logins.
*Each client domain has the script client_login.php
*All the domains have a shared user session database.
*When the client domain requires the user to be logged in, it redirects to the master domain (login.mydomain.example/master_login.php). If the user has not signed in to the master, it requests authentication from the user (i.e. displays the login page). After the user is authenticated, it creates a session in a database. If the user is already authenticated, it looks up their session id in the database.
*The master domain returns to the client domain (client.mydomain.example/client_login.php) passing the session id.
*The client domain creates a cookie storing the session id from the master. The client can find out the logged in user by querying the shared database using the session id.
Notes:
*
*The session id is a unique global identifier generated with algorithm from RFC 4122
*The master_login.php will only redirect to domains in its whitelist
*The master and clients can be in different top level domains. Eg. client1.abc.example, client2.xyz.example, login.mydomain.example
A: Don't re-invent the wheel. There are a number of open source cross-domain SSO packages such as JOSSO, OpenSSO, CAS, Shibboleth and others. If you're using Microsoft Technology throughout (IIS, AD), you can use microsoft federation (ADFS) instead.
A: If you use Active Directory you could have each app use AD for authentication; login could then be seamless.
Otherwise, if the applications can talk to each other behind the scenes, you could use sessionids and have one app handling id generation serving all of your other applications.
A: How different are the host names?
These hosts can share cookies:
*
*mail.xyz.example
*www.xyz.example
*logon.xyz.example
But these cannot:
*
*abc.example
*xyz.example
*www.tre.example
In the former case you can bang out a cookie-based solution. Think GUID and a database session table.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "113"
} |
Q: JavaFX video encoding On JavaFX's Wikipedia
In May 2008 (...) Sun Also announced a
multi-year agreement with On2
Technologies to bring comprehensive
video capabilities to the JavaFX
product family using the company's
TrueMotion Video codec.
Do you know if it will include encoding capabilities for Webcam Video like Flash or just playback/streaming?
Thanks
A: The JavaFX API just supports media playback at the moment (see here: javafx.scene.media.MediaView). There might very well be pure Java APIs for encoding, however.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Tools to convert asp.net dynamic site into static site Are there any tools that will spider an asp.net website and create a static site?
A: http://www.httrack.com/
Have used it for this purpose a few times; you may need to do a little tidying up of URLs, and some CSS-linked images might not make it, depending on how good a job you want to do.
If you have dreamweaver, you can use that to manage the links if you need to clean up the file names afterwards.
Optionally use the link checker extension for firefox to check it all afterwards.
A: You could use OfflineExplorer: http://www.metaproducts.com/mp/Offline_Explorer.htm
This works well as long as you only have GET requests (links). Postbacks will not
be executed.
Be aware that crawling your site might actually change the underlying
database so I would strongly recommend you back up the database and web before
using a crawler.
A: Another solution is wget.
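A typical invocation might look like this (the URL is a placeholder):
wget --mirror --convert-links --page-requisites http://www.example.com/
--convert-links rewrites the links for local browsing, and --page-requisites pulls in images and stylesheets.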
A: I've had good luck with WebZip.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to ensure that the same thread is used to execute code in IIS? We have a third party dll that is used in our web service hosted in IIS6. The problem is that once this dll is loaded into memory, the exception AccessViolationException gets thrown if a thread different from the one that created it tries to execute any code within the dll. The worker process is multithreaded and each call to the web service will get a random thread from the pool. We tried to unload it from memory and reload it each time we needed it, but I guess only the front end is .Net and the rest is unmanaged so it never actually gets completely unloaded from memory. We are using VB and .Net 2.0. Any suggestions?
(Response to Rob Walker)
We thought about creating a new thread and using it to call the dll, but how do we make the thread sit and wait for calls? How do you delegate the call to the thread without having the Dispatcher class supplied by .Net 3.0? Creating a hidden form and putting it in a message loop might work. And then we could call the Invoke() method of the form. But I can see many problems occurring if we create a form inside an IIS hosted web service.
A: I have read about a class in .net 3.0 called Dispatcher that allows you to put a thread in a loop and then call the method Invoke() using a delegate to execute a method using the thread. But this solution will not work if you cannot update to .Net 3.0. Another solution would be to host the third party dll in another application on the server and use some form of Remoting to access it. But you may still have a problem with the Remoting because it behaves similar to IIS and will also pick a random thread to execute the code . To get around this, you could put a wrapper around the dll and use it to delegate the calls to the UI thread by using the Invoke() method of the form.
A: I think you need to look at using a wrapper thread that handles all calls to the DLL, and deals with the serialization.
This thread is outside of the managed thread pool, so you control its lifetime. But even this would not be foolproof unless you can prevent IIS from restarting the app domain your web service is in.
You also need to worry about what happens when two web service requests come in at the same time. Is each call into the DLL standalone, or do you have to group together all the calls associated with a single web service request before allowing any other request to be serviced?
A: You could create a service that hosts the extra DLL. Via remoting you access the service, this will dispatch the calls the the thread that manages the DLL.
This way you have control over the thread that calls the DLL, and over the lifetime of the thread.
A: I'm a bit rusty, but you might try wrapping calls to the DLL in a single threaded apartment COM object. This would ensure that all calls go through the COM object's windows messaging thread. I think you would have to register the component in a server application within Component Services to do this.
A: Can you run the dll inside different threads as different instances? Like thread1 creates an instance of this third party dll, and thread2 also does, but as long as thread1 doesn't try to use thread2's instance it won't throw that exception? If that's the case, .NET never unloads any code once it's loaded; if you load an assembly and then remove it, it still sits in that application pool. If you can create more than one instance at a time, you could load it up in a separate app pool you control per request, then unload the app pool. Performance might drop though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is the best way to send html/image email? Do you attach the images?
Use absolute urls?
How do you best avoid getting flagged as spam?
A: You attach the emails then reference them in your HTML like so:
<img src="cid:imagefilename.jpg" />
Outlook, at least, recognizes this as a reference to an attached image and dumps it in appropriately.
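If you're sending from .NET, a hedged sketch of wiring that up with System.Net.Mail (the addresses, file name, and SMTP host are placeholders):
using System.Net.Mail;

string html = "<html><body><img src=\"cid:logo\" /></body></html>";
AlternateView view = AlternateView.CreateAlternateViewFromString(html, null, "text/html");
LinkedResource logo = new LinkedResource("logo.jpg", "image/jpeg");
logo.ContentId = "logo";            // must match the cid: reference in the HTML
view.LinkedResources.Add(logo);

MailMessage msg = new MailMessage("from@example.com", "to@example.com");
msg.Subject = "Newsletter";
msg.AlternateViews.Add(view);
new SmtpClient("smtp.example.com").Send(msg);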
A: You'll want to use absolute URLs to link out to images on a server. Users won't want to download your attachments. Also most email clients will not display images by default, so it's a good idea to keep the really important content as text.
Email clients generally all use very different rendering methods. For example, Outlook 2007 uses Word's HTML rendering engine, whereas previous versions used Internet Explorer.
Do be aware that CSS support is also very limited to in emails. Most clients, especially web mail, will strip out everything outside of the <body> tag, as well as <style> tags. This means that external or embedded CSS will not work, and that inline styles are the safest bet (the style="" attribute). There is also poor support for many CSS rules in Outlook 2007. This means that a lot people have returned to using tables for laying out email.
As it was pointed out, Campaign Monitor is an excellent resource, and I especially recommend their CSS Compatibility Chart
A: Campaign Monitor is a great resources for html email:
http://www.campaignmonitor.com/resources/#building
Also http://www.email-standards.org/, but seems down right now.
A: One of the biggest causes I have found for email being flagged as spam is DNS. Make sure the domain / MX records from which you are sending the email actually resolve correctly back from the server used for sending.
As for images, you could attach them, but the most common way is to host them and use absolute urls. Primarily this is a bandwidth issue - you have to figure you're going to get an open rate of 10 - 15%: if you have to attach all the assets to every email, 85% of the bandwidth you'll use will be wasted.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Algorithm / pseudo-code to create paging links? Can someone provide code or pseudo-code for how the paging links on StackOverflow are generated?
I keep racking my brain but can't think of a decent way to build the dynamic links that always show the 2 pages around the current, plus the first and last.
Example: 1 ... 5 6 7 ... 593
A: There are several other answers already, but I'd like to show you the approach I took to solve it:
First, let's check out how Stack Overflow handles normal cases and edge cases. Each of my pages displays 10 results, so to find out what it does for 1 page, find a tag that has less than 11 entries: usability works today. We can see nothing is displayed, which makes sense.
How about 2 pages? Find a tag that has between 11 and 20 entries (emacs works today). We see: "1 2 Next" or "Prev 1 2", depending on which page we're on.
3 pages? "1 2 3 ... 3 Next", "Prev 1 2 3 Next", and "Prev 1 ... 2 3". Interestingly, we can see that Stack Overflow itself doesn't handle this edge case very well: it should display "1 2 ... 3 Next"
4 pages? "1 2 3 ... 4 Next", "Prev 1 2 3 ... 4 Next", "Prev 1 ... 2 3 4 Next" and "Prev 1 ... 3 4"
Finally let's look at the general case, N pages: "1 2 3 ... N Next", "Prev 1 2 3 ... N Next", "Prev 1 ... 2 3 4 ... N Next", "Prev 1 ... 3 4 5 ... N Next", etc.
Let's generalize based on what we've seen:
The algorithm seems to have these traits in common:
*
*If we're not on the first page, display link to Prev
*Always display the first page number
*Always display the current page number
*Always display the page before this page, and the page after this page.
*Always display the last page number
*If we're not on the last page, display link to Next
Let's ignore the edge case of a single page and make a good first attempt at the algorithm: (As has been mentioned, the code to actually print out the links would be more complicated. Imagine each place we place a page number, Prev or Next as a function call that will return the correct URL.)
function printPageLinksFirstTry(num totalPages, num currentPage)
if ( currentPage > 1 )
print "Prev"
print "1"
print "..."
print currentPage - 1
print currentPage
print currentPage + 1
print "..."
print totalPages
if ( currentPage < totalPages )
print "Next"
endFunction
This function works ok, but it doesn't take into account whether we're near the first or last page. Looking at the above examples, we only want to display the ... if the current page is two or more away.
function printPageLinksHandleCloseToEnds(num totalPages, num currentPage)
if ( currentPage > 1 )
print "Prev"
print "1"
if ( currentPage > 2 )
print "..."
if ( currentPage > 2 )
print currentPage - 1
print currentPage
if ( currentPage < totalPages - 1 )
print currentPage + 1
if ( currentPage < totalPages - 1 )
print "..."
print totalPages
if ( currentPage < totalPages )
print "Next"
endFunction
As you can see, we have some duplication here. We can go ahead and clean that up for readibility:
function printPageLinksCleanedUp(num totalPages, num currentPage)
if ( currentPage > 1 )
print "Prev"
print "1"
if ( currentPage > 2 )
print "..."
print currentPage - 1
print currentPage
if ( currentPage < totalPages - 1 )
print currentPage + 1
print "..."
print totalPages
if ( currentPage < totalPages )
print "Next"
endFunction
There are only two problems left. First, we don't print out correctly for one page, and secondly, we'll print out "1" twice if we're on the first or last page. Let's clean those both up in one go:
function printPageLinksFinal(num totalPages, num currentPage)
if ( totalPages == 1 )
return
if ( currentPage > 1 )
print "Prev"
print "1"
if ( currentPage > 2 )
print "..."
print currentPage - 1
if ( currentPage != 1 and currentPage != totalPages )
print currentPage
if ( currentPage < totalPages - 1 )
print currentPage + 1
print "..."
print totalPages
if ( currentPage < totalPages )
print "Next"
endFunction
Actually, I lied: We have one remaining issue. When you have at least 4 pages and are on the first or last page, you get an extra page in your display. Instead of "1 2 ... 10 Next" you get "1 2 3 ... 10 Next". To match what's going on at Stack Overflow exactly, you'll have to check for this situation:
function printPageLinksFinalReally(num totalPages, num currentPage)
if ( totalPages == 1 )
return
if ( currentPage > 1 )
print "Prev"
print "1"
if ( currentPage > 2 )
print "..."
if ( currentPage == totalPages and totalPages > 3 )
print currentPage - 2
print currentPage - 1
if ( currentPage != 1 and currentPage != totalPages )
print currentPage
if ( currentPage < totalPages - 1 )
print currentPage + 1
if ( currentPage == 1 and totalPages > 3 )
print currentPage + 2
print "..."
print totalPages
if ( currentPage < totalPages )
print "Next"
endFunction
I hope this helps!
A: The controls generally show links for: P1, Pn, Pc (current page), Pc+1, Pc-1. The only time this changes is at either end of the paging range {Pc < P3 or Pc > (Pn-3)}
*
*The first step is to obviously work out the number of pages:
numPages = ceiling(totalRecords / numPerPage)
*
*If you've got 4 or less, then drop out at this point, because, by the above rules, the paging is always going to be fixed (P1, P2, Pn-1, Pn), where one will actually be Pc
*else, you have three "states"
a. (Pc < P3): show P1, P2, P3, Pn, Next. If Pc > 1, show a 'Prev' link before P1.
b. (Pc > Pn - 2): show Prev, P1, Pn - 2, Pn - 1, Pn. Show a Next link if Pc < Pn.
c. Otherwise: show Prev, P1, Pc - 1, Pc, Pc + 1, Pn, Next.
Easy as pie in pseudocode. The loops can get a bit nasty when implemented, as you've got to do some iterating in order to generate the links.
Edit:
Of course Prev and Next are identical to Pc +/- 1
A: Well, if you know the current page, it's pretty trivial to just subtract the number by 1, and add it by 1, then check those numbers against the bounds and display the first and last page always, then if they aren't in sequence, add the ellipses.
Or are you asking about getting the total number of pages and determining the current page number...?
A: public void PageLinks(int currentPage, int lastPage) {
if (currentPage > 2)
Add('[1]', '...');
for(int i=Math.Max(1, currentPage-1); i< Math.Min(currentPage+1, lastPage); i++)
Add('[i]');
if (currentPage < lastPage-1)
Add('...', '[lastpage]');
}
lastPage is calculated as Math.Ceiling(totalRecords/RecordsPerPage).
Hmmm, actually, in the case that currentPage is 3, it still shows [1]...[2][3][4]...[xxx].
I think the ellipsis is superfluous in that case. But that's how it works.
A: This is my approach to making a paging link. The following Java code is just pseudo-code.
package com.edde;
/**
* @author Yang Shuai
*/
public class Pager {
/**
* This is a method used to display the paging links (pagination, sometimes called a pager).
* totalPages is the total number of pages you need to display. You can get this value using the
* formula:
*
* total_pages = total_records / items_per_page
*
* This method is just pseudo-code.
*
*
* @param totalPages how many pages you need to display
* @param currentPage which page you are on now
*/
public static void printPageLinks(int totalPages, int currentPage) {
// how many pages to display before and after the current page
int x = 2;
// if we just have one page, show nothing
if (totalPages == 1) {
return;
}
// if we are not at the first page, show the "Prev" button
if (currentPage > 1) {
System.out.print("Prev");
}
// always display the first page
if (currentPage == 1) {
System.out.print(" [1]");
} else {
System.out.print(" 1");
}
// besides the first and last page, how many pages do we need to display?
int how_many_times = 2 * x + 1;
// we use the left and right to restrict the range that we need to display
int left = Math.max(2, currentPage - 2 * x - 1);
int right = Math.min(totalPages - 1, currentPage + 2 * x + 1);
// the upper range restricted by left and right are more loosely than we need,
// so we further restrict this range we need to display
while (right - left > 2 * x) {
if (currentPage - left < right - currentPage) {
right--;
right = right < currentPage ? currentPage : right;
} else {
left++;
left = left > currentPage ? currentPage : left;
}
}
// do we need display the left "..."
if (left >= 3) {
System.out.print(" ...");
}
// now display the middle pages, we display how_many_times pages from page left
for (int i = 1, out = left; i <= how_many_times; i++, out++) {
// there are some pages we need not to display
if (out > right) {
continue;
}
// display the actual page
if (out == currentPage) {
System.out.print(" [" + out + "]");
} else {
System.out.print(" " + out);
}
}
// do we need the right "..."
if (totalPages - right >= 2) {
System.out.print(" ...");
}
// always display the last page
if (currentPage == totalPages) {
System.out.print(" [" + totalPages + "]");
} else {
System.out.print(" " + totalPages);
}
// if we are not at the last page, then display the "Next" button
if (currentPage < totalPages) {
System.out.print(" Next");
}
System.out.println();
}
public static void main(String[] args) {
// printPageLinks(50, 3);
help(500);
}
public static void test(int n) {
for (int i = 1; i <= n; i++) {
printPageLinks(n, i);
}
System.out.println("------------------------------");
}
public static void help(int n) {
for (int i = 1; i <= n; i++) {
test(i);
}
}
public static void help(int from, int to) {
for (int i = from; i <= to; i++) {
test(i);
}
}
}
A: Here is my algorithm. It works really nicely:
// Input
total_items // Number of rows, records etc. from db, file or whatever
per_page // num items per page
page // current page
visible_pages // number of visible pages
// Calculations
lastPage = ceil(total_items / per_page);
prevPage = page - 1 < 1 ? 0 : page - 1;
nextPage = page + 1 > lastPage ? 0 : page + 1;
halfpages = ceil(visible_pages / 2);
startPage = page - halfpages < 1 ? 1 : page - halfpages;
endPage = startPage + visible_pages - 1;
if(endPage > lastPage) {
startPage -= endPage - lastPage;
startPage = startPage < 1 ? 1 : startPage;
endPage = startPage + visible_pages > lastPage ? lastPage : startPage + visible_pages - 1;
}
// Output
lastPage // Total number of pages
prevPage // Previous page number (if 0 there is no prev page)
nextPage // Next page number (if 0 there is no next page)
startPage // First visible page
endPage // Last visible page
So you can do a pager like this:
if prevPage
[1] [prevPage]
endif
[startPage] ... [endPage]
if nextPage
[nextPage] [lastPage]
endif
or customize whatever you like.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: What’s your logging philosophy? As Jeff Atwood asked: "What’s your logging philosophy? Should all code be littered with .logthis() and .logthat() calls? Or do you inject logging after the fact somehow?"
A: I think always, always, always add logging when there is an exception, including the message and full stack trace. Beyond that, I think it's pretty subjective, depending on whether or not you use the logs often...
I often try to only add logging in critical places where what I am logging should very rarely hit, otherwise you get the problem he mentioned of logs that grow way too big... this is why error cases are the ideal thing to always log (and it's great to be able to see when these error cases are actually being hit so you can inspect the problem further).
Other good things to log are failed assertions... for example, this query should return under 10 results; if it is bigger there may be a problem, so log it. Of course, if a log statement ends up filling the logs, it is probably a hint to either move it to some sort of "debug" level, or to adjust or remove the log statement. If the logs grow too big, you will often end up ignoring them.
A: My logging philosophy is pretty easily summarized in four parts:
Auditing, or business logic logging
Log those things that are required to be logged. This comes from the application requirements, and may include logging every change made to any database (as in many financial applications) or logging accesses to data (as may be required in the health industry to meet industry regulations)
As this is part of the program requirements, many do not include it in their general discussions of logging; however, there is overlap in these areas, and for some applications it is useful to consider all logging activities together.
Program logging
Messages which will help developers test and debug the application, and more easily follow the data flow and program logic to understand where implementation, integration, and other errors may exist.
In general this logging is turned on and off as needed for debugging sessions.
Performance logging
Add later logging as needed to find and resolve performance bottlenecks and other program issues which aren't causing the program to fail, but will lead to better operation. Overlaps with Program logging in the case of memory leaks and some non-critical errors.
Security logging
Logging user actions and interactions with external systems where security is a concern. Useful for determining how an attacker broke a system after an attack, but may also tie into an intrusion detection system to detect new or ongoing attacks.
A: I work with safety-critical real-time systems and logging is often the only way to catch rare bugs that only turn up every 53rd Tuesday when it's a full moon, if you catch my drift. This kind of makes you obsessive about the subject, so I'll apologise now if I start to froth at the mouth.
I design systems which are capable of logging pretty much everything, but I don't turn everything on by default. The debug information is sent to a hidden debug dialog which timestamps it and outputs it to a listbox (limited to around 500 lines before deletion), and the dialog allows me to stop it, save it to a log file automatically, or divert it to an attached debugger such as DBWin32. That diversion allows me to see the debug output from multiple applications all neatly serialized, which can be a life saver sometimes. The log files are automatically purged every N days. I used to use numeric logging levels (the higher you set the level, the more you capture):
*
*off
*errors only
*basic
*detailed
*everything
but this is too inflexible - as you work your way towards a bug it's much more efficient to be able to focus logging in on exactly what you need without having to wade through tons of detritus, and it may be one particular kind of transaction or operation that causes the error. If that requires you to turn everything on, you're just making your own job harder. You need something finer-grained.
So now I'm in the process of switching to logging based on a flag system. Everything that gets logged has a flag detailing what kind of operation it is, and there's a set of checkboxes allowing me to define what gets logged. Typically that list looks like this:
#define DEBUG_ERROR 1
#define DEBUG_BASIC 2
#define DEBUG_DETAIL 4
#define DEBUG_MSG_BASIC 8
#define DEBUG_MSG_POLL 16
#define DEBUG_MSG_STATUS 32
#define DEBUG_METRICS 64
#define DEBUG_EXCEPTION 128
#define DEBUG_STATE_CHANGE 256
#define DEBUG_DB_READ 512
#define DEBUG_DB_WRITE 1024
#define DEBUG_SQL_TEXT 2048
#define DEBUG_MSG_CONTENTS 4096
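For illustration, a minimal C# sketch of the same flag-based idea (the names and the Log method are mine, not the poster's actual code) might look like this:

using System;

[Flags]
enum DebugFlags
{
    None = 0,
    Error = 1,
    Basic = 2,
    Detail = 4,
    DbRead = 512,
    DbWrite = 1024,
    SqlText = 2048
}

static class Logger
{
    // Set from the debug dialog's checkboxes or a saved registry/ini/cfg setting.
    public static DebugFlags Enabled = DebugFlags.Error | DebugFlags.Basic;

    public static void Log(DebugFlags kind, string message)
    {
        // Emit the line only if its category is currently switched on.
        if ((Enabled & kind) != 0)
            Console.WriteLine("{0:u} [{1}] {2}", DateTime.Now, kind, message);
    }
}

// Usage: Logger.Log(DebugFlags.DbWrite, "updated 3 rows in Orders");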
This logging system ships with the release build, turned on and saving to file by default. It's too late to find out you should have been logging AFTER the bug has occurred, if that bug only occurs once every six months on average and you have no way of reproducing it.
The software typically ships with ERROR, BASIC, STATE_CHANGE and EXCEPTION turned on, but this can be changed in the field via the debug dialog (or a registry/ini/cfg setting, where these things get saved).
Oh and one thing - my debug system generates one file per day. Your requirements may be different. But make sure your debug code starts every file with the date, version of the code you're running, and if possible some marker for the customer ID, location of the system or whatever. You can get a mish-mash of log files coming in from the field, and you need some record of what came from where and what version of the system they were running that's actually in the data itself, and you can't trust the customer/field engineer to tell you what version they've got - they may just tell you what version they THINK they've got. Worse, they may report the exe version that's on the disk, but the old version is still running because they forgot to reboot after replacing. Have your code tell you itself.
That's my brain dumped...
A: I take what I consider a traditional approach; some logging, surrounded by conditional defines. For production builds, I turn off the defines.
A: I choose to log deliberately as I go, as this means the log data is meaningful:
*
*Depending on logging framework you can add level/severity/category information so that the log data can be filtered
*You can make sure that the right level of information is present, not too much, not too little
*You know when writing the code which the most important things are, and can therefore ensure they are logged
Using some form of code injection, profiling or tracing tool to generate logs would most likely generate verbose, less useful logs that would be harder to dive into. They may be useful as a debugging aid, however.
A: I start by asserting a lot of conditions in my code (in C#, using System.Diagnostics.Assert), but I add logging only where I find, while debugging or putting the system under stress, that I really need to have a way to follow what's happening inside of my code without having a debugger permanently attached.
Otherwise, I prefer using Visual Studio's capability to put traces in the code as special breakpoints (i.e. you insert a breakpoint and right-click it, then select "When hit..." and tell it what to display in that case). There is no need to recompile and it is easy to enable/disable the traces on the fly.
A: If you're writing a program that will be used by many people, it's best to have some kind of mechanism to choose what will be logged and what won't. One argument in favor of .logthis() functions is that they can be an excellent replacement for inline comments in some instances (if done properly).
Plus, it helps you narrow down EXACTLY where an error is occurring.
A: Log 'em all and let Grep sort 'em out.
A: I agree with Adam, but I also would consider logging things of interest or things that you can demonstrate as achievements as a kind of proof of them happening.
A: I define a variety of levels and pass in a setting with the config / invocation.
A: If you really need logging in your system then your tests are crap or at the very least incomplete and not very thorough. Everything in your system should be a black box as much as possible. Notice how core classes like String don't need logging - the primary reason being they are very well tested and perform as detailed. No surprises.
A: I use logging as a way to narrow down issues that do not reproduce in our unit tests let alone repeating the same steps provided by the user: those rare glitches that only show up on some very remote hardware (and sometimes, albeit very rarely, even caused by a driver or third party library glitch outside of our control).
I agree with the comment that this should all be caught by our testing procedure, but it's difficult for a million+ LOC codebase that demands very low-level, performance-critical code to ever meet that bar. I don't work in mission-critical software, but I work in the graphics industry where we often have to do everything from implementing memory allocators to utilizing GPU code to SIMD.
Even with very modular, loosely-coupled or even completely decoupled code, the system interactions can lead to very complex inputs and outputs with behavior varying between platforms where occasionally we have that rogue edge case which eludes our tests. Modular black boxes can be very simple but the interactions between them can get very complex and lead to the occasional unanticipated edge case.
As an example of a case where logging saved my butt, one time I had this odd user with a prototype Intel machine that was crashing. We listed the minimum requirements for machines which should support SSE 4, but this particular machine met those minimum requirements and still did not support Streaming SIMD extensions past SSE 3 in spite of being a 16-core machine. Discovering that quickly was made possible by looking at his log which showed precisely the line number where the SSE 4 instructions were used. None of us in our team could reproduce the issue, let alone a single other user that participated in verifying the report. Ideally we should have written code for older SIMD versions or at least done some branching and checking to make sure the hardware supported the minimum requirements, but we wanted to make a firm assumption communicated through the minimum hardware requirements for simplicity and economy. Here, perhaps, it's arguable that it was our minimum system requirements that had the "glitch".
Given the way I use logging here, we tend to get fairly large logs. However, the goal is not readability -- what's typically important is the last line of a log sent in with a report when the user experiences a crash of some sort that none of us on the team (let alone few other users in the world) can reproduce.
Nevertheless, one trick I employ regularly to avoid excessive log spamming is that it is often reasonable to assume that a piece of code which executes once successfully will also do so subsequently (not a hard guarantee, but often a reasonable assumption). So I often employ a log_once kind of function for granular functions to avoid the overhead of paying the cost of logging every time it is called.
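A rough C# sketch of such a log_once helper (the names are illustrative, not from any particular library):

using System;
using System.Collections.Generic;

static class Log
{
    static readonly HashSet<string> seen = new HashSet<string>();
    static readonly object gate = new object();

    // Logs a message only the first time this key is encountered,
    // so hot loops don't flood the log file.
    public static void Once(string key, string message)
    {
        lock (gate)
        {
            if (!seen.Add(key))
                return;
        }
        Console.WriteLine("{0:u} {1}", DateTime.UtcNow, message);
    }
}

// Usage inside a per-frame or per-pixel loop:
// Log.Once("sse4-path", "Taking the SSE 4 code path on this machine");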
I don't sprinkle log outputs all over the place (I might if I had the time). Typically I reserve them most for areas which seem the most dangerous: code invoking GLSL shaders, e.g. (GPU vendors vary wildly here in terms of capability and even how they compile the code), code using SIMD intrinsics, very low-level code, code that inevitably has to rely on OS-specific behavior, low-level code making assumptions about the representation of PODs (ex: code that assumes 8 bits to a byte) -- the kind of cases where we would likewise sprinkle a lot of assertions and sanity checks as well as write the most number of unit tests. Typically this is enough, and logging has saved my butt many times where I would have otherwise taken an unreproducible issue and would have had to take blind stabs at the problem, requiring many iteratons bouncing attempts at a solution to the one user in the world who could reproduce the problem.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: How do I simultaneously work on version 1.1 and version 2.0? The situation: We're out of beta and version 1.0 has been released to several customer sites. Team A is already busy working on version 1.1 that will have incremental bugfixes and usability tweaks, while another team works on version 2.0 with large-scale changes, where the core of the product may have been completely redesigned. Now, most of the changes made for 1.1 will have to make their way into 2.0 at some point, and some of the bug fixes made in the 2.0 branch might in fact need to be scheduled for an earlier release. The problem is that since 2.0 has fundamental differences, no changes from 1.1 can be merged in without manual conversion, nor vice versa.
My question: What are the best revision control practices to minimise merge conflicts and duplicate work in this kind of situation? How can I ensure that my teams spend as little time and effort as possible on revision control issues, while still providing regular patches to customers?
A: One good way is to fix each bug in the stable branch and merge the stable branch into the development branch. This is the Parallel Maintenance/Development Lines pattern, and the key is to merge early and often. Merging infrequently and late means that the development branch is unrecognisable compared to the stable one, or the bug cannot be repeated in the same way.
Subversion includes merge tracking since version 1.5 so you ensure that the same change set is not merged twice, causing silly conflicts. Other systems exist (e.g. Git, Mercurial, Accurev, Perforce) that let you make queries of the type "what changes on branch A have not been merged into branch B?" and cherry-pick the fixes you need across to the dev branch.
A: The article here (Day-to-day with Subversion) mentions that one method is to constantly update version 2 with data from the version 1.1 build. In the article, the guy says to do this every day.
The part you'll want to read is titled "Waiter, There's a Bug in my Trunk!". It's about halfway though the article.
A: I would probably rely on an issue tracking system for this purpose, and make sure to tag each change that needed to be brought forward into the trunk code. You can then ensure that check-in comments for each change reference the relevant issue, and are clear in expressing the intent of the code change so that it can be easily understood when trying to re-implement in the trunk.
A: Pretty much what everyone else has said, but I figured I would toss in my experience with handling development in multiple branches using SVN
With our main product, we have the need to simultaneously develop in 2+ versions at the same time.
I originally used the main trunk as the "main development" version, with tags used for each actual release. Branches were used for substantial development efforts for a new feature set. Then later, when we started working on 2, 3 and 4 releases at a time I started using a branch for each revision.
Since I maintain the repository and also handle pushing QA builds, I make sure to do "rollups" each morning - which consists of merging changes up the tree starting with the lowest currently active branch. So I end up merging changes from 1.1 into 1.2, which is merged into 1.3 with any other changes from 1.2 since the last merge, etc.
When I commit, I make sure to always comment the commit with something like
merged 1.1 rev 5656-5690
It can be a bit of a pain, but it works :)
A: Merge early, merge often, and make sure that QA on the mainline knows and regresses/verifies the defects fixed in each patch of the maintenance releases.
It's really easy to let something slip out and "unfix" a bug in a subsequent release, and let me tell you, customers don't care about how complicated it can get to manage multiple branches -- that's your job.
Make sure you're using a source control system that supports branching and merging (I've had experience with Perforce and SVN, and while Perforce is better, SVN is free).
I also believe that having a single person responsible for performing the merges in a consistent manner helps ensure that they happen regularly. It's generally been me or one of the senior people on our team.
A: The way we handle this at my work is to keep the trunk branch as the most cutting-edge code (ie, 2.0 in this case). You create a branch for the 1.x code, and make all your fixes there. Any changes to 1.x should be merged (manually, if need be) into the trunk (2.0) branch.
I would then insist that 1.x developers make note of both the revision number for the 1.x commit and the revision number for the 2.0 merge in the ticket for that bug. That way, it will be easier to notice if anyone forgets to merge their changes, and the fact that they have to keep track of it will help them remember.
A: One key point is captured in this picture from The Build Doctor: only merge one direction.
A: To answer that specific question many developers have switched from Subversion to Git. Checkout github.com.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Octal number literals: When? Why? Ever? I have never used octal numbers in my code nor come across any code that used it (hexadecimal and bit twiddling notwithstanding).
I started programming in C/C++ about 1994 so maybe I'm too young for this? Does older code use octal? C includes support for these by prepending a 0, but where is the code that uses these base 8 number literals?
A: Commercial aviation uses octal "labels" (basically message type ids) in the venerable ARINC 429 bus standard. So being able to specify label values in octal when writing code for avionics applications is nice...
A: I have also seen octal used in aircraft transponders. A mode-3a transponder code is a 12-bit number that everyone deals with as 4 octal numbers. There is a bit more information on Wikipedia. I know it's not generally computer related, but the FAA uses computers too :).
A: I recently had to write network protocol code that accesses 3-bit fields. Octal comes in handy when you want to debug that.
Just for effect, can you tell me what the 3-bit fields of this are?
0x492492
On the other hand, this same number in octal:
022222222
Now, finally, in binary (in groups of 3):
010 010 010 010 010 010 010 010
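To make the connection concrete, here is a small illustrative C# snippet (my own) that pulls the 3-bit fields out of that word with shifts and masks — each extracted value is exactly one octal digit:

using System;

class OctalFields
{
    static void Main()
    {
        int word = 0x492492; // 022222222 in octal
        for (int shift = 21; shift >= 0; shift -= 3)
        {
            int field = (word >> shift) & 7; // mask off one 3-bit field
            Console.Write(field + " ");      // prints: 2 2 2 2 2 2 2 2
        }
    }
}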
A: It's useful for the chmod and mkdir functions in Unix land, but aside from that I can't think of any other common uses.
A: I came into contact with Octal through PDP-11, and so, apparently, did the C language :)
A: There are still a bunch of old Process Control Systems (Honeywell H4400, H45000, etc) out there from the late 60s and 70s which are arranged to use 24-bit words with octal addressing. Think about when the last nuclear power plants were constructed in the United States as one example.
Replacing these industrial systems is a pretty major undertaking so you may just be lucky enough to encounter one in the wild before they go extinct and gape in awe at their magnificent custom floating point formats!
A: tar files store information as an octal integer value string
A: There is no earthly reason to modify a standard that goes back to the birth of the language and which exists in untold numbers of programs. I still remember ASCII characters by their octal values; I would have to think to come up with the hex value of A, but it is 101 in octal; numeric 0 is 060... ^C is 003...
That is to say, I often use the octal representation.
Now if you really want to bend your mind, take a look at the word format for the PDP-10...
A: The only place I come across octal literals these days is when dealing with the permission bits on files in Linux, which are normally represented as 3 octal digits, where each digit represents the permissions for the file owner, group and other users respectively.
e.g. 0755 (also just 755 with most command line tools) means the file owner has full permissions (read, write, execute), and the group and other users just have read and execute permissions.
Representing these bits in octal makes it easier to figure out what permissions are set. You can tell at a glance what 0755 means, but not 493 or 0x1ed.
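As an aside, C# dropped octal literals entirely, so if you need a value like 0755 there you have to parse it from a string — a quick sketch:

using System;

class Permissions
{
    static void Main()
    {
        // C# has no 0755-style literal; parse from base 8 instead.
        int mode = Convert.ToInt32("755", 8);
        Console.WriteLine(mode);            // 493 in decimal
        Console.WriteLine((mode >> 6) & 7); // 7: owner gets read + write + execute
        Console.WriteLine(mode & 7);        // 5: other users get read + execute
    }
}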
A: Anyone who learned to program on a PDP-8 has a warm spot in his heart for octal numbers. Word size was 12 bits divided into 4 groups of 3 bits each, so -1 was 7777 octal. This scheme was perpetuated in the PDP-11 which had 16 bit words but still used octal representation for various things, hence the *NIX file permission scheme which lives to this day.
A: From Wikipedia
At the time when octal originally became widely used in computing, systems such as the IBM mainframes employed 24-bit (or 36-bit) words. Octal was an ideal abbreviation of binary for these machines because eight (or twelve) digits could concisely display an entire machine word (each octal digit covering three binary digits). It also cut costs by allowing Nixie tubes, seven-segment displays, and calculators to be used for the operator consoles; where binary displays were too complex to use, decimal displays needed complex hardware to convert radixes, and hexadecimal displays needed to display letters.
All modern computing platforms, however, use 16-, 32-, or 64-bit words, with eight bits making up a byte. On such systems three octal digits would be required, with the most significant octal digit inelegantly representing only two binary digits (and in a series the same octal digit would represent one binary digit from the next byte). Hence hexadecimal is more commonly used in programming languages today, since a hexadecimal digit covers four binary digits and all modern computing platforms have machine words that are evenly divisible by four. Some platforms with a power-of-two word size still have instruction subwords that are more easily understood if displayed in octal; this includes the PDP-11. The modern-day ubiquitous x86 architecture belongs to this category as well, but octal is almost never used on this platform.
-Adam
A:
I have never used octal numbers in my
code nor come across any code that
used it.
I bet you have. According to the standard, numeric literals which start with zero are octal. This includes, trivially, 0. Every time you have used or seen a literal zero, this has been octal. Strange but true. :-)
A: Octal is and was most useful with the first available display hardware (7-segment displays). These original displays did not have the decoders available later.
Thus the digital register outputs were grouped to fit the available display, which was capable of displaying only eight (8) symbols: 0, 1, 2, 3, 4, 5, 6, 7.
Also, the first CRT display tubes were raster scan displays, and the simplest character-symbol generators were equivalent to the 7-segment displays.
The motivating driver was, as always, the least expensive display possible.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
} |
Q: How to convince a company to switch their Source Control My current place of employment is currently in a transition, new ownership has taken over, things are finally getting standardized and proper guidelines are being enforced.
But we are still using VSS; there really isn't any reason for using it other than that's what was initially set up. We don't use Visual Studio, or really any tool that specifically requires it.
What would be the absolute best argument I can bring up to help convince them that going to something like Subversion would be a much better solution, in the long run.
A: The best argument would have to be the reason why you want them to switch to subversion. :)
I know absolutely nothing about VSS, but the phrase "if it ain't broken don't fix it" comes to mind. You have to show your managers that VSS is broken and needs fixing. Even better if you can show management how it would save them money.
A: @Adam Davis: Uhhh actually Adam, VSS is a horrible source control system. It has a long history of corrupting history and losing data. It is terrible at merging, doesn't handle multiple developers well and is very slow. Also the history is poor. Microsoft don't really support it any more, you'll note that they never used it for their own internal development and now they don't even sell it in favour of a more modern solution (VSTS). In short, if you have to choose between VSS and any other type of source control, go with the alternative.
A: By just going over the features good source control brings:
*
*ability to easily see logs of who did what, when, and in what order, to which files
*keep a history of past versions of everything
*easily go back and reproduce a specific version of your files from any past version, to more easily reproduce bugs reported in older versions
*ability go retrieve deleted code, or remove unwanted changes, without having to worry about losing data in the process
A: VSS totally relies on the clients to manage the database. If a client drops connection in the middle of a write over the network at just the wrong time, your file is trashed on the server. Not just the tip, but all the history. Hope you have a good backup. I've been through it. It's bad news.
VSS usage over VPN or other remote connections is abysmal. It's using SMB to transfer the data, and you have to retrieve the file and all of its deltas just to get the tip. Nasty.
I've seen VSS start to act up at 1GB of data. Database errors, etc. MS (somewhere in a FAQ or KB) says that 2GB is really the max safe limit. There are no good management tools (the clients run the asylum), so you don't really get any warning about this.
Anything with a server process to provide some level of transactions and integrity control is a superior solution.
A: Any document that proves switching will lower costs. Failing that, multi-colored graphs and charts. Maybe a power-point presentation.
A: The internet is littered with well written articles on the flaws of VSS. I would collect this as a body of evidence for moving away from VSS. Find a key requirement that VSS can't support (remote working, support on other OSs, tools integration) and use it to drive your issue. You then need to find a source control system that is a good match for your organisation's requirements - are you sure Subversion is that system? Set up a demonstration of your chosen system, and use this to prove its worth.
I implemented this change at a previous employer (first to CVS, and then to SVN), and while it was successful we had to build a lot of bits around the edge and rely on a lot of (sometimes unreliable) open source projects to get all the tools we needed. With hindsight I should have considered trying to evaluate professional tools such as Perforce, Vault or even Team System. Having evaluated these, I could have made a proper value judgement on whether CVS/SVN were worth their "free" price tag.
A: being able to handle branching and forking is a start.
Try using subversion for a while in parallel to vss you will most likely find many arguments to convince your boss. If you don't, your boss is right, no reason to switch.
A: Get them to google for 'vss problem', 'source safe corruption' or simply look at the Wiki page for it. That ought to convince them that it's probably not a long-term viable thing for you to be betting such a vital part of your business on.
How big is your team? (ie, I mean how many members, not whether or not you're salad dodgers) Once you start to get more than half a dozen quite active users, VSS is going to give you headaches.
I seriously doubt that Microsoft use it (in fact, don't they use a customised Subversion or CVS variant?) and you've got to ask yourself - if the company don't eat their own dogfood, why would you eat it?
A: Basic answer is that you have to make the case that switching meets the needs of the business. For example:
*
*lower cost of development
*shorter schedule (another shade of #1)
*more apt for meeting process requirements (like software requirements traceability, or build reproducibility, etc).
Making the case on these things also requires something quantitative, not just "we will lower costs because this is the right way to do it!".
One thing to watch out for is that it's too easy for a developer to convince themselves that it would be beneficial to make the change without first going through the basic business filters. Once that happens, you end up with developers who are unhappy with their tools and are doubly frustrated because they think management won't listen. If you can't check off one of the things above, them you'll have no chance of persuading management of anything (unless management is incompetent, but that's for another question).
A: Why Subversion over VSS?
*
*Free software
*Easier to manage
*"check-ins" are atomic!
*Easy to Branch and Merge
*Continued development (i.e. VSS is dead end)
*Better tools for tracking changes and viewing logs
*Toolset and platform agnostic, but also integrates with many tools
I made the proposal to my manager, and it was a pretty easy sell. I've found it to be much easier to use, especially for branching (our project took 5 hours to "share and pin" in VSS, and then each operation took extra time to complete!).
A: I've previously written about why VSS is not a good idea. You might be able to gain some information from that. Also this article and this one contain further information.
VSS 2005 has papered over some of the cracks in 6.0, but not in a particularly convincing way. The same brain-dead foundation remains.
A: Even if it ain't broke, there's a potential benefit to migrating from VSS. First and most trivially, you won't have to buy new VSS licenses. Second, there are many examples of deficiencies in the VSS product (some also acknowledged by MS). The learning curve for SVN is at least as low as for VSS, and if you have devs happier with their source control system, they're more likely to use it early and often. That will translate to lots less risk for your company, and that's a good benefit.
A: @Jason: VSS is broken.
I think the most powerful method for motivating a change away from VSS is to point out how critical an asset your source code is. Taking risks with its integrity is not a wise business choice.
Add that your programmers are the creators of this asset, and that making it easier for them to be productive means more value in your source code asset. Joel on Software often talks about how investing in his programmers is a big win for his company.
The other answers here all describe specific reasons that you can point to when making your case.
A: In addition to the technical points given in other answers, there may be non-technical reasons lurking that you should be prepared to respond to:
You should investigate whether your company has any sort of policy against (or misguided fear of) open source software. If the company or its lawyers don’t understand the ins and outs of which licenses “infect” proprietary code and which don’t, as well as what you can do with open source code that doesn’t affect your proprietary code, you will have a hard time getting them to switch from a proprietary to an open source tool. (And you may have a bigger education job on your hands.)
In arguing for the switch from proprietary (e.g. VSS) to open source (e.g. subversion) you’ll also need to be prepared to defend the quality of the code and the lack of any need for a warranty or other contract rights regarding the code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: .NET Development on a Mac Tips I have just got a MacBook Pro and have been using it (+Fusion) to develop on for about a month now. The purpose of this question is similar to Hidden Features of C#; to become a how-to of tips and trick for windows development on a mac.
I should clarify that I am aware of boot camp but do not use it (nor do I have any interest to), hence my use of steady state to make sure nothing happens to my OS partition without my knowledge. However; as Sara pointed out, Apple makes great hardware and I absolutely LOVE the form factor of my MBP so for someone who is looking for a windows only laptop a mac with boot camp should not be overlooked as the hardware is amazing.
My environment is as follows
* MacBook Pro 15" 2.4Ghz 2GB RAM (Going to upgrade to 4GB soon)
* VMWare Fusion 2.0 Beta
* Windows XP Pro SP3 (Slipstreamed BEFORE install)
Tips:
* Use Windows Steady State to keep OS consistent
* Use svn+ssh to connect to the Mac for small repositories, then use Time Machine to back up.
* Use spaces.
A: One more thing, there is a Deep Fried Bytes Podcast that is entirely about .NET development on Mac - you may find some nuggets in there too.
A: @Andrew - I'm exactly in your situation. I use a MBP while my company work is purely Microsoft based: i.e., .NET, COM etc. While nothing can beat running Vista natively in Boot Camp (I've never seen Vista run so fast), the niceties of having your Mac OS be the "main" OS, for internet, mail etc. has gotten me to the following configuration. Works like a charm:
Hardware
*
*Load up your MBP with the max possible - 4GB. It's really worth every $.
*Upgrade your hard drive (if not already) to 7200RPM. Major performance boost here.
Software
*
*Parallels Desktop for Mac for virtualization. You can either have multiple VM, or use a boot camp partition. The latter is supposed to be faster, but I haven't really measured it (I use it for having the option to boot natively if I really need speed). The former allows you to have multiple OS. I gave my VM 1GB memory. I can do more if you want it more snappy.
*Micorsoft Visual Studio 2005/8 for .NET and C++. I have yet to see any IDE for .NET which beats this one. The intellisense is really amazing.
*Code Gear (yes we have some Delphi)
For non development occasional need I also keep Microsoft Office 2007 installed. They do have MAC ports, but those don't always cut it.
A: *
*The extra RAM is great for your OS X environment, but my experience has shown you shouldn't exceed VMWare's recommended RAM settings of 1G.
*I was unsuccessful at getting a good experience running my VM(s) from an external drive. And it's a firewire 800. Keep your dev image pruned to as little space as possible and run directly from your internal drive.
*If you're sticking with XP (good choice BTW), you might want to give VirtualBox a try. It's VERY zippy. However, it chokes on Vista.
*If you have a thought about trying Parallels ... DON'T!!! It worked well enough for a while but eventually became very unstable, crashing often when host files were accessed and freezing 2 out of 3 times during startup. Also, their implementation of networking is convoluted and difficult to setup if, say, you wanted to browse an Apache site on your host from your guest.
*If you need to resize your image, there's a good tutorial for Parallels using GParted and Partition Magic. I'm sure it would be simple to adapt it to VMWare.
*Your use of SVN is almost exactly what I do (repo is on host, backed up with Time Machine). However, you could speed it up and remove the bloat of a server if you go with simply a file-based repository.
A: I develop in ASP.Net on my mac almost daily, and I have to question why you aren't interested in Boot Camp. Yeah, VMWare is nice, but for my money nothing beats the performance of running Windows by itself on the Mac.
A: Just extending this out slightly from the original question, there are some of us doing Delphi Windows development work on virtual machines, too.
I've got a MacBook Pro (1st gen) with a couple of gigs of RAM, and a recent iMac (with 4 gigs of RAM). I've had more luck than xanadont with external drives, running a couple of different brands on Firewire 400 and finding them to be fine with 16-20GB VMs. If I'm going to be in one place for a few days (either in the office on the iMac or on the road with the MBP) then I'll copy the VM to the local drive, but as a rule it's worked fine for about 2 years now.
I started with Parallels, but there came a point when they started releasing versions that hadn't been regression tested, and sometimes basic stuff would suddenly be broken in the current release. Simple fix, stop downloading the new version and stay 3-6 months behind everyone else. Then I needed to give a VM to a colleague and had to go through a few hoops getting it out of Parallels and into VMware. At that point I tried the Fusion beta, had first-hand experience of moving a VM between Mac and Windows (with no real fuss at all) and that persuaded me to switch to Fusion. I have to say, Fusion is an excellent, stable, reliable tool.
I run WInXP Pro SP 3, Delphi 7, Delphi 2007, SQL Express and various development tools on my VMs (I tend to have a VM for each of my clients).
And I agree with xanadont about the 1Gig ram thing - mine tend to have a gig and no more - I didn't see any real change in behaviour/performance with >1Gb in the vm, so it's better off given to the host operating system rather than the virtual one.
A: I'm in the same boat; VMware on a MBP, doing .NET development (and a little Mono, but that's a different beast). I would recommend updating to the Fusion 2.0 betas if you haven't yet; they're faster and offer some great new features (multiple snapshots! application linking!) and, in my experience, are just as stable as the 1.x releases.
A: I believe project mono has mac support.
This assumes you want to develop directly on the mac and that you are happy to forgo some of the MS specific features and tools (so no C#3.0, libraries like WPF and Visual Studio).
Of course, using paralles/vmware/virtualbox or any other virtual machine with a windows guest as you describe will also work fine.
A: Oded, it depends on what type of .NET development one is trying to do, and for what platform. If you're targeting Windows and building something other than console apps, you're best off not using Mono, as Mono projects are not necessarily drop-in-to-Windows-and-go solutions.
A: This is not purely .NET related but it is in the vein of the using Spaces item in the question.
Trackpad tips for a MacBook running Leopard (may not be supported in earlier OS X versions):
*
*Set System Preferences, Keyboard & Mouse, Trackpad to use Two Finger Secondary Click. This allows you to use two finger taps instead of the Control + Click combo for the Secondary Click (better know as the context menu to us .NET developers).
*Set System Preferences, Keyboard & Mouse, Trackpad to use Two Finger Screen Zoom for magnifying an area in the screen by holding the Control key and scrolling up or down. This is useful for quickly magnifying small fonts or image detail in any Mac application and in Windows running under VMware Fusion. You can pick either the Control, Option or Command keys for zooming by clicking the Options button along with other settings.
A: I use a Mac Book Pro as well but I run Vista. I set aside a little space so I could also run Leopard and just use Boot Camp. You can use Boot Camp to just boot from windows so you never have to deal with Leopard unless you want to.
I would highly reccomend it because Apple makes great hardware while Microsoft makes great tools (and also great OSs, I love Vista)
go ahead and downmod me for being a fangirl, but I've found what works for me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: Adding more information to TestResult.xml file from NUnit I would like to be able to add a "message" to a unit test, such that it actually appears within the TestResult.xml file generated by NUnit. For example, this is currently generated:
<results>
<test-case name="MyNamespace.Tests.MyTest" executed="True" success="True" time="0.203" asserts="4" />
</results>
I would like to be able to have an additional attribute (or node as the case may be), such as:
<results>
<test-case name="MyNamespace.Tests.MyTest" executed="True" success="True" time="0.203" asserts="4" message="Tested that some condition was met." />
</results>
The idea is that "message" above would somehow be defined within the test method itself (in my case, generated at run-time). Is there a property somewhere that I'm missing to be able to do something like this?
A: In the recent NUnit releases you can do:
Assert.AreEqual(250.00, destination.Balance, "some message here");
Where "Some message here" can be a constant message or a message generated at runtime and stored in a string variable. These messages will only appear in the output however if the assertion fails. Usually, however, you only need information about failing tests so I recommend building up a string by adding each previous message and then using that string variable as the message in all of your asserts. This allows you to get all of the information you need from failing tests.
A: This may be missing the point, but how about naming the tests so they indicate what they test - then you may not even need the message.
If it proves to be absolutely necessary, I think you'll need to produce your own test runner that would (off the top of my head) read an additional attribute off the TestCase and attach it to the output.
A: You can use the TestContext to easily write out any message you want. Here is how I am setup.
Each of my tests are inherited from a testbase class. This removes redundant code.
[TestFixture]
public class TestBase
{
public IWebDriver driver;
//[OneTimeSetUp] and [OneTimeTearDown] go here if needed
[SetUp]
public void Setup(){
driver = Shortcuts.SetDriver("my browser");
}
[TearDown]
public void TearDown()
{
driver.Quit();
Comment("@Result: " + TestContext.CurrentContext.Result.Outcome.ToString());
}
public void Comment(string _comment)
{
TestContext.Out.WriteLine(_comment);
}
public void Error(string _error)
{
TestContext.Error.WriteLine(_error);
}
}
You can see the bottom two functions write out any message or error in said TestContext. This will work nicely with parallelizable tests also.
I can then use that parent class to setup my tests, and write to my console.
//Role Management
public class RoleManagementTests : TestBase
{
[TestCase]
public void RoleManagement_7777_1()
{
Comment("Expected: User has the ability to view all roles in the system.");
//Test goes here
}
}
Now you can see the results in the output (Visual Studio) and in the TestResult.xml using NUnit Console Runner.
A: I can't see anything available at run time, but there are a couple of features that you might want to investigate: the Description attribute and the Property attribute both add text to the XML output file. Unfortunately, they're both defined at compile time.
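For reference, the two attributes are applied like this (the text values are placeholders):

using NUnit.Framework;

[TestFixture]
public class MyTests
{
    [Test]
    [Description("Tested that some condition was met.")]
    [Property("Reviewer", "Someone")]
    public void MyTest()
    {
        // Both attributes flow through to the XML output:
        // Description as an attribute on the test-case node,
        // Property under a properties node.
        Assert.AreEqual(4, 2 + 2);
    }
}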
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How Do I Test Rails Logging In from the Console? I was having a heck of a time figuring out how to login and logout using response objects from Rails. The standard blogs were ok, but I finally diagnosed it, and I wanted to record it here.
app.get '/'
assert_response :success
app.get '/auth_only_url'
assert_response 302
user = User.find(:user_to_login)
app.post '/signin_url',
:user_email => user.email,
:user_password => '<password in clear>'
assert_response 302
app.follow_redirect!
assert_response :success
app.get '/auth_only_url'
assert_response :success
Note, the above implies that you redirect after a failed auth request, and also that you redirect after logging in.
To ensure that you load the fixtures into your test environment DB (which normally occurs during rake test), make sure you execute the following:
rake db:fixtures:load RAILS_ENV=test
(From Patrick Richie)
The default URL will appear to be 'www.example.com', as this is the default host set in ActionController::Integration::Session:
>> ActionController::Integration::Session.new.host
=> "www.example.com"
It is set in actionpack/lib/action_controller/integration.rb#75
To change it in the integration test, do the following:
session = open_session do |s|
  s.host = 'my-example-host.com'
end
A: 'www.example.com' is the default host as set in ActionController::Integration::Session
>> ActionController::Integration::Session.new.host
=> "www.example.com"
It is set in actionpack/lib/action_controller/integration.rb#75
You should be able to change it in your integration test by doing the following:
session = open_session do |s|
s.host = 'my-example-host.com'
end
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: version control practice In my current job the supervisor's practice is to only check in production ready code. Most recently the project I was on involved work by 3 different developers with some file overlap. This meant manually integrating changes, even though some changes took only a day before they were done. I wanted to see if this was a common practice and get suggestions on how to change this practice, with the knowledge that many times my opinion means little in the grand scheme of things.
A: You can use various ways to handle this situation, depending on your source control system.
Private branches: Allow you to check in and work on code while you go, merging back and forth at appropriate times.
Shelvesets/packaged changesets: Allow you to store changesets and send them around for review - ensuring they're production ready before check in.
As to whether this is an appropriate way to work, we don't allow check-in to main branches without prior review. To pass review your code must pass various automated tools, and then must be acceptable to your peer reviewer. For some definitions of "production ready" - this is it. Therefore, we do something like what you do. However, we use private branches to ensure that check-ins can still be made while this is in progress, and that other check-ins don't have to interfere.
If production ready means tested in an integration environment, then it sounds like you may need staging branches or something similar.
A: Code that is checked in should be unit tested, but, to me, "production ready" implies that it's gone through integration and system testing. You can't do that until a code freeze, so I don't see how you can do that before every check in.
A: Start by switching away from VSS to something more reliable & feature-rich. See How to convince a company to switch their Source Control
Then apply known-good practices:
*
*Check in often
*Pick up others' changes often, to simplify merging
*Use fast unit tests to make sure each change meets a minimum bar
*Require that the checked-in code always builds, and always passes tests.
Now you won't be "production ready" at this point: you will still need a couple weeks to test & fix before you can deploy. Getting that time down is awesome for you, and awesome for your customer, so invest in:
*
*High quality automated acceptance tests.
A: Wouldn't it be a good idea to have a testing branch of the repo that can have the non-"production ready" code checked in after the changes are done and tested?
the main trunk should never have code checked in that breaks the build and doesn't pass unit tests, but branches don't have to have all those restrictions in place.
A: I would personally not approve of this, because sometimes checking in as you go is the best way to catch problem code from less experienced developers (by seeing it as they are working on it), and when you "check in early and often" you can roll back to earlier changes you made (as you were developing) if you decide that something you did earlier was actually a better idea.
A: I think it may be the version control we use, VSS, in combination with a lack of time to learn the branching. I really like the idea of nightly check-ins to help with development and avoid 'Going Dark'. I can see him being resistant to the trunks, but perhaps building a development SS and, when the code is production ready, moving it to the production SS.
A: From the practices I have seen the term production quality is used as a 'frightener' to ensure that people are scared of breaking top of tree, not a bad thing to be honest because top of tree should always work if possible.
I would say that best practice is that you should only be merging distinct (i.e. separate) functional components on the top of tree. If you have a significant overlap on deltas to the same source files I think this 'might' indicate that somewhere along the line the project management has broken down, and that those developers should have merged their changes to a separate integration branch before going in to the main line sources. An individual developer saying that they unit tested their stuff is irrelevant, because the thing they tested has changed!
Trying to solve integration problems on your main line codeline will inevitably stall other unrelated submissions.
A: Assuming that you are working in a centralized version control system (such as Subversion), and assuming that you have a concept of "the trunk" (where the latest well-working code lives):
If you work on new features in "features branches"/"experimental branches", then it's OK to commit code which is far from finished. (When the feature is done, you commit the well-behaving result into the "trunk".)
But you will not win a popularity contest if committing non-compiling/obviously non-working code into the "trunk" or a "release branch".
The Pragmatic Programmers have a book called Pragmatic Version Control using Subversion which includes a section with advice about branches.
A: Check in early and check in often for two main reasons -
1 - it might make it easier to integrate code
2 - in case your computer explodes, your weeks of work aren't gone
A: @bpapa
Nightly backups of work folders to servers will prevent losing more than a days work.
@tonyo
Let's see: the requirements documents were completed the day after we finished coding. Does that tell you anything about our project management?
We are a small shop so while you would think change is easy there are some here that are unbending to the old ways.
A: An approach I particularly like is to have different life cycle versions in the depot. That is,for example, have a dev version of the code that is where the developers check in code that is in being worked on; then you could have a beta version, where you could add beta fixes to your code; and then a production version.
There is obvious overhead in this approach, such as the fact that you will have a larger workspace on your local machine, the fact that you will need to have a migration process in place to move code from one stage to the next (which means a code freeze when doing the integration testing that goes with the migration), and that depending on the complexity of the project(s) you might need to have tools that change settings, environment variables, registry entries, etc.
All of this is a pain to set up, but you only do it once, and once you have it all in place, makes working on different stages of the code a breeze.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to start using ndepend? I recently downloaded ndepend and ran an analysis on an open source project I participate in.
I did not now where to look next - a bit of visual and information overload and it turned out I don't even know where to start.
Can anyone suggest starting points?
*
*What information should I look for first?
*What points out problems in the code (in a BIG way)?
*What would be the low-hanging fruit that can be immediately seen?
A: Scott Hanselman / Stuart Celarier / Patrick Cauldwell's poster with ndepend metrics has some useful information on it. Rather than trying to break down all the heuristics being used I'd focus on only a few at a time starting with "zone of pain / zone of uselessness" and cyclomatic complexity.
There is also a podcast which covers some of the basics of the tool.
Between that and running nDepend on a few different projects you may be able to start gathering useful data that you can make into insights.
A: When starting with NDepend, the most important thing is to understand what Code Rule over LINQ (CQLinq) can bring to your shop by letting you define queries on your code and rules. Here you'll find a summary and source code of all 200 default code rules.
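For a flavor of what such a rule looks like, here is a typical CQLinq query; I am writing it from memory, so treat the exact property names as approximate:

// <Name>Avoid overly complex methods</Name>
warnif count > 0
from m in Application.Methods
where m.CyclomaticComplexity > 20
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity }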
The second most important thing to look at is dependencies, with both the dependency graph view, which works hand-in-hand with the dependency matrix view. Once you master these 2 views, you'll be able to pinpoint where the code is well layered or not, and where developers made mistakes.
Then it'll be time to learn the more in-depth features, such as the possibility to compare 2 versions of your code base, the various code metrics and why they are useful, statically enforcing purity and immutability, automatically controlling test coverage...
A: Excellent pair of web casts (30 minute videos) where Patrick Smacchia and Filip Ekberg talk through some of the features of nDepend and how to use them:
http://codebetter.com/patricksmacchia/2012/10/31/two-screencasts-on-how-to-demystify-spaghetti-code/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Identifying ASP.NET web service references At my day job we have load balanced web servers which talk to load balanced app servers via web services (and lately WCF). At any given time, we have 4-6 different teams that have the ability to add new web sites or services or consume existing services. We probably have about 20-30 different web applications and corresponding services.
Unfortunately, given that we have no centralized control over this due to competing priorities, org structures, project timelines, financial buckets, etc., it is quite a mess. We have a variety of services that are reused, but a bunch that are specific to a front-end.
Ideally we would have better control over this situation, and we are trying to get control over it, but that is taking a while. One thing we would like to do is find out what all of the inter-relationships between the web sites and the app servers are.
I have used Reflector to find dependencies among assemblies, but would like to be able to see the traffic patterns between services.
What are the options for trying to map out web service relationships? For the most part, we are mainly talking about internal services (web to app, app to app, batch to app, etc.). Off the top of my head, I can think of two ways to approach it:
*
*Analyze assemblies for any web references. The drawback here is that not everything is a web reference and I'm not sure how WCF connections are listed. However, this would at least be a start for finding 80% of the connections. Does anyone know of any tools that can do that analysis? Like I said, I've used Reflector for assembly references but can't find anything for web references.
*Possibly tap into IIS and passively monitor the traffic coming in and out and somehow figure out what is being called and where from. We are looking at enterprise tools that could help but it would be a while before they are implemented (and cost a lot). But is there anything out there that could help out quickly and cheaply? One tool in particular (AmberPoint) can tap into IIS on the servers, monitor inbound and outbound traffic, add a little special sauce, and begin to build a map of the traffic. Very nice, but costs a bundle.
I know, I know, how the heck did you get into this mess in the first place? Beats me, just trying to help us get control of it and get out of it.
Thanks,
Matt
A: The easiest way is to look through the logs, but if that doesn't include the referrer than you may also want to monitor what is going out from your web to the app server. You can use tools like Wireshark or Microsoft Network Monitor to see this traffic.
The other "solution" and I use this loosely is to bind a specific web server to app server and then run through a bundle and see what it is hitting on the app server. You could probably do this in a test environment to lesson the effects on the users of the site.
A: You need a service registry (UDDI??)... If you had a means to catalog these services and their consumers, it would make this job of dependency discovery a lot easier. That is not an easy solution, though. It takes time and documentation to get a catalog in place.
I think the quickest solution would be to query your IIS logs and find source URLs which originate from your own servers. You would at least be able to track down which servers your consumers are coming from.
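As a rough illustration, a quick-and-dirty C# pass over a W3C-format IIS log might look like the following; the field positions depend entirely on your logging configuration, so treat the indexes and the file path as placeholders:

using System;
using System.Collections.Generic;
using System.IO;

class LogScan
{
    static void Main()
    {
        var callers = new Dictionary<string, int>();
        foreach (var line in File.ReadLines(@"C:\inetpub\logs\u_ex080912.log"))
        {
            if (line.StartsWith("#")) continue;   // skip the W3C header lines
            var fields = line.Split(' ');
            string clientIp = fields[2];          // c-ip -- position depends on your log config
            string url = fields[4];               // cs-uri-stem -- likewise
            if (url.EndsWith(".asmx") || url.EndsWith(".svc"))
            {
                int n;
                callers.TryGetValue(clientIp, out n);
                callers[clientIp] = n + 1;
            }
        }
        foreach (var kv in callers)
            Console.WriteLine("{0} hit services {1} times", kv.Key, kv.Value);
    }
}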
Also, if you already have some kind of authentication mechanism in place, you could trace who is using a particular service based on login.
You are right about AmberPoint. There are other tools that catalog the service traffic and provide reports showing what is happening to your services. Systinet, SOA Software and Actional also have products similar to AmberPoint, but AmberPoint has a freeware version, I believe.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there a good method in C# for throwing an exception on a given thread The code that I want to write is like this:
void MethodOnThreadA()
{
for (;;)
{
// Do stuff
if (ErrorConditionMet)
ThrowOnThread(threadB, new MyException(...));
}
}
void MethodOnThreadB()
{
try
{
for (;;)
{
// Do stuff
}
}
catch (MyException ex)
{
// Do the right thing for this exception.
}
}
I know I can have thread B periodically check, in thread safe way, to see if a flag has been set by thread A, but that makes the code more complicated. Is there a better mechanism that I can use?
Here is a more fleshed out example of periodically checking:
Dictionary<Thread, Exception> exceptionDictionary = new Dictionary<Thread, Exception>();
void ThrowOnThread(Thread thread, Exception ex)
{
// the exception passed in is going to be handed off to another thread,
// so it needs to be thread safe.
lock (exceptionDictionary)
{
exceptionDictionary[thread] = ex;
}
}
void ExceptionCheck()
{
    lock (exceptionDictionary)
    {
        Exception ex;
        if (exceptionDictionary.TryGetValue(Thread.CurrentThread, out ex))
        {
            // Remove the entry before throwing, so the same exception
            // isn't rethrown on every subsequent check.
            exceptionDictionary.Remove(Thread.CurrentThread);
            throw ex;
        }
    }
}
void MethodOnThreadA()
{
for (;;)
{
// Do stuff
if (ErrorConditionMet)
ThrowOnThread(threadB, new MyException(...));
}
}
void MethodOnThreadB()
{
try
{
for (;;)
{
// Do stuff
ExceptionCheck();
}
}
catch (MyException ex)
{
// Do the right thing for this exception.
}
}
A: What Orion Edwards is saying is not entirely true: it is not the "only" way.
// Obviously this is BAD, and the only way to stop is to NOT throw
// exceptions across threads
Using CER (Constrained Execution Regions) in C# allows you to release your resources as an atomic operation, protecting your code from inter-thread exceptions. This technique is used by several classes of the .NET Framework which work with Windows' native API, where an unreleased handle may cause a memory leak.
See http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.runtimehelpers.prepareconstrainedregions.aspx
The following example shows how to reliably set handles by using the PrepareConstrainedRegions method. To reliably set a handle to a specified pre-existing handle, you must ensure that the allocation of the native handle and the subsequent recording of that handle within a SafeHandle object is atomic. Any failure between these operations (such as a thread abort or out-of-memory exception) will result in the native handle being leaked. You can use the PrepareConstrainedRegions method to make sure that the handle is not leaked.
As simple as:
public MySafeHandle AllocateHandle()
{
// Allocate SafeHandle first to avoid failure later.
MySafeHandle sh = new MySafeHandle();
RuntimeHelpers.PrepareConstrainedRegions();
try { }
finally // this finally block is atomic and uninterruptible by inter-thread exceptions
{
MyStruct myStruct = new MyStruct();
NativeAllocateHandle(ref myStruct);
sh.SetHandle(myStruct.m_outputHandle);
}
return sh;
}
A: There are enough problems with exceptions that can be thrown on threads by other mechanisms, like aborting threads and the likes, that you should find another way of doing it.
An exception is a mechanism used to signal that a process has experienced something exceptional that it cannot deal with. You should try to avoid writing the code so that an exception is used to signal that something else has experienced something exceptional.
That other thread will most likely not know how to handle the exception in all cases where it could be thrown by your code.
In short, you should find some mechanism other than exceptions for aborting your threads.
Use event objects or something similar to tell a thread to abort its processing; that's the best way.
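For example, here is a minimal sketch of that approach using a ManualResetEvent; the Worker class and its method names are mine, not from any answer above:
using System.Threading;

class Worker
{
    // Set by another thread when this worker should stop.
    private readonly ManualResetEvent stopRequested = new ManualResetEvent(false);

    public void RequestStop()   // called from thread A
    {
        stopRequested.Set();
    }

    public void WorkerLoop()    // runs on thread B
    {
        // WaitOne with a zero timeout polls the event without blocking.
        while (!stopRequested.WaitOne(0, false))
        {
            // Do stuff
        }
        // Clean-up runs here as ordinary code; since no exception is ever
        // injected, a finally block can never be interrupted halfway.
    }
}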
A: This is NOT a good idea
This article talks about Ruby's timeout library, which throws exceptions across threads.
It explains how doing such a thing is fundamentally broken. It's not just broken in Ruby; it's broken anywhere that throws exceptions across threads.
In a nutshell, what can (and does) happen is this:
ThreadA:
At some random time, throw an exception on thread B:
ThreadB:
try {
//do stuff
} finally {
CloseResourceOne();
// ThreadA's exception gets thrown NOW, in the middle
// of our finally block and resource two NEVER gets closed.
// Obviously this is BAD, and the only way to stop is to NOT throw
// exceptions across threads
CloseResourceTwo();
}
Your 'periodic checking' example is fine, as you're not actually throwing exceptions across threads.
You're just setting a flag which says "throw an exception the next time you look at this flag", which is fine as it doesn't suffer from the "can be thrown in the middle of your catch or finally block" problem.
However, if you're going to do that, you may as well just set an "exitnow" flag and use that, saving yourself the hassle of creating the exception object. A volatile bool will work just fine for that.
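A rough sketch of the flag version (names are illustrative):
class Worker
{
    // volatile guarantees thread B always sees the latest write from thread A.
    private volatile bool exitNow = false;

    public void SignalExit()    // thread A
    {
        exitNow = true;
    }

    public void MethodOnThreadB()
    {
        while (!exitNow)
        {
            // Do stuff
        }
        // Exit cleanly; no exception object to construct, no cross-thread throw.
    }
}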
A: While researching another issue, I came across this article which reminded me of your question:
Plumbing the Depths of the ThreadAbortException using Rotor
It shows the gyrations that .NET goes through to implement Thread.Abort() -- presumably any other cross-thread exception would have to be similar. (Yeech!)
A: I'm interested to know why you would want to do this. There's not an easy way to do it, because it's not a good practice. You should probably go back to your design and figure out a cleaner way to accomplish the end goal.
A: I don't think that's a good idea.
Take another crack at this problem: try using some other mechanism, like shared data, to signal between threads.
A: Like the others, I'm not sure it's such a good idea, but if you really want to do it, you can create a subclass of SynchronizationContext that allows posting and sending delegates to the target thread (if it's a WinForms thread, the work is done for you, since such a subclass already exists). The target thread will have to implement some sort of message-pump equivalent, though, to receive the delegates.
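Here is a bare-bones sketch of that idea (not a real SynchronizationContext subclass, just the queue-plus-pump core of one; all names are mine): the target thread drains a queue of delegates at points it chooses, and one of those delegates is free to throw.
using System;
using System.Collections.Generic;
using System.Threading;

class DelegatePump
{
    private readonly object sync = new object();
    private readonly Queue<ThreadStart> queue = new Queue<ThreadStart>();

    // Any thread may post work; it runs later on the pump's thread.
    public void Post(ThreadStart work)
    {
        lock (sync) { queue.Enqueue(work); }
    }

    // The target thread calls this wherever it is safe to run posted work.
    public void PumpOnce()
    {
        ThreadStart work = null;
        lock (sync)
        {
            if (queue.Count > 0)
                work = queue.Dequeue();
        }
        if (work != null)
            work();  // may throw, but only at this well-defined point
    }
}
Thread B would run for (;;) { /* do stuff */ pump.PumpOnce(); } and thread A could call pump.Post(delegate { throw new MyException(); }), which behaves like the periodic-check example in the question but generalizes to arbitrary posted work.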
A: @Orion Edwards
I take your point about an exception being thrown in the finally block.
However, I think there is a way, using yet another thread, to make this exception-as-interrupt idea work.
Thread A:
At some random time, throw an exception on thread C:
Thread B:
try {
Signal thread C that exceptions may be thrown
//do stuff, without needing to check exit conditions
Signal thread C that exceptions may no longer be thrown
}
catch {
// exception/interrupt occurred handle...
}
finally {
// ...and clean up
CloseResourceOne();
CloseResourceTwo();
}
Thread C:
while(thread-B-wants-exceptions) {
try {
Thread.Sleep(1)
}
catch {
// exception was thrown...
if Thread B still wants to handle exceptions
throw-in-B
}
}
Or is that just silly?
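For what it's worth, the throw-at-the-sleeping-watcher step maps fairly naturally onto Thread.Interrupt, which raises a ThreadInterruptedException only in a thread blocked in Sleep, Wait, or Join. Here is a minimal sketch of just that piece; the throw-in-B step is deliberately left out, since it has exactly the cross-thread problems discussed above:
using System;
using System.Threading;

class InterruptDemo
{
    static void Main()
    {
        Thread watcher = new Thread(WatcherLoop);
        watcher.Start();

        Thread.Sleep(100);   // "at some random time"...
        watcher.Interrupt(); // ...wake the watcher out of its Sleep

        watcher.Join();
    }

    static void WatcherLoop()
    {
        try
        {
            Thread.Sleep(Timeout.Infinite);
        }
        catch (ThreadInterruptedException)
        {
            // The exception can only arrive while we are sleeping, so there
            // is no finally block here for it to corrupt.
            Console.WriteLine("Interrupted; would now signal thread B");
        }
    }
}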
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |