Q: ASP.NET ObjectDataSource Binding Automatically to Repeater - Possible? I have a Question class:
class Question {
public int QuestionNumber { get; set; }
public string Question { get; set; }
public string Answer { get; set; }
}
Now I make an ICollection of these available through an ObjectDataSource, and display them using a Repeater bound to the DataSource. I use <%#Eval("Question")%> to display the Question, and I use a TextBox and <%#Bind("Answer")%> to accept an answer.
If my ObjectDataSource returns three Question objects, then my Repeater displays the three questions with a TextBox following each question for the user to provide an answer.
So far it works great.
Now I want to take the user's response and put it back into the relevant Question classes, which I will then persist.
Surely the framework should take care of all of this for me? I've used the Bind method, I've specified a DataSourceID, I've specified an Update method in my ObjectDataSource class, but there seems no way to actually kickstart the whole thing.
I tried adding a Command button and in the code behind calling MyDataSource.Update(), but it attempts to call my Update method with no parameters, rather than the Question parameter it expects.
Surely there's an easy way to achieve all of this with little or no codebehind?
It seems like all the bits are there, but there's some glue missing to stick them all together.
Help!
Anthony
A: You have to handle the postback event (button click or whatever) then enumerate the repeater items like this:
foreach(RepeaterItem item in rptQuestions.Items)
{
//pull out question
var question = (Question)item.DataItem;
question.Answer = ((TextBox)item.FindControl("txtAnswer")).Text;
question.Save(); // <-- not sure what you want to do with it
}
A: The Bind method really isn't for the Repeater; it's more for the FormView or GridView, where you are editing just one item in the list, not every item in the list.
On both you click an edit button, which then gives you the bound controls (bound using Bind), and then hit the save link, which auto-saves the item back into your data source without any code-behind.
A: Then what's the point in the Bind method (as opposed to the Eval method) if I have to bind everything back up manually on postback?
A: Ben: Having tried it, item.DataItem is always null, and according to the following post, it's not designed to be used that way:
http://www.netnewsgroups.net/group/microsoft.public.dotnet.framework.aspnet/topic4049.aspx
So how on earth do I manually bind it back?
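One way to rebind manually, as a rough sketch: on postback, re-fetch the same questions the ObjectDataSource returned and match them to the repeater items by position. The names GetQuestions, PersistQuestions, rptQuestions and txtAnswer below are illustrative, not from the thread:
protected void btnSubmit_Click(object sender, EventArgs e)
{
    List<Question> questions = GetQuestions(); // hypothetical helper: the same data the ObjectDataSource selected
    for (int i = 0; i < rptQuestions.Items.Count; i++)
    {
        // item.DataItem is null on postback, so read the TextBox and match by index instead
        TextBox txt = (TextBox)rptQuestions.Items[i].FindControl("txtAnswer");
        questions[i].Answer = txt.Text;
    }
    PersistQuestions(questions); // hypothetical: save however you need
}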
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What is the best way to avoid getting "Emacs Pinky"? I just started using GNU Emacs as my text editor and I am concerned about getting afflicted with "Emacs Pinky" by having to constantly press the control key with my pinky finger as is required when using Emacs. How can I avoid potentially getting this type of repetitive strain injury?
A: I totally agree with the remap caps-lock solution, that helps quite a bit.
To go even further, I tried and liked the ErgoEmacs keybindings. The project is being actively developed and supported quite well. I personally don't use it because it's not integrated with Mac OS X (some Emacs keys are integrated into Cocoa), though it seems someone has posted an inputrc file with ErgoEmacs keybindings.
Another trick I've been playing with is enabling StickyKeys. It's supported on many platforms and alleviates some of the problems specific to chording (as opposed to just overuse): it is apparently recommended on the emacswiki: http://www.emacswiki.org/emacs/StickyModifiers
A: I use the excellent key-chord mode for common actions. You can give it functions or key sequences to call.
To make sure I do not obstruct normal typing too much, I generated letter statistics to find good key chords. I recorded them along with my default chords.
In addition to more convenient command-calling, I have chords for inserting stuff like my email signature:
Best wishes,
Arne
I hate writing emails without that ☺
Also I use control lock mode for things like flyspell, where I need control all the time. That’s modal editing with real emacs shortcuts.
The chords I use the most are
; buffer actions
(key-chord-define-global "vg" 'eval-region)
(key-chord-define-global "vb" 'eval-buffer)
(key-chord-define-global "cy" 'yank-pop)
(key-chord-define-global "cg" "\C-c\C-c")
(key-chord-define-global "äü" 'control-lock-toggle)
; frame actions
(key-chord-define-global "xo" 'other-window);
(key-chord-define-global "x1" 'delete-other-windows)
(key-chord-define-global "x0" 'delete-window)
(defun kill-this-buffer-if-not-modified ()
(interactive)
; taken from menu-bar.el
(if (menu-bar-non-minibuffer-window-p)
(kill-buffer-if-not-modified (current-buffer))
(abort-recursive-edit)))
(key-chord-define-global "xk" 'kill-this-buffer-if-not-modified)
; file actions
(key-chord-define-global "bf" 'ido-switch-buffer)
(key-chord-define-global "cf" 'ido-find-file)
(key-chord-define-global "zs" "\C-x\C-s")
(key-chord-define-global "vc" 'vc-next-action)
A: First I'd like to point out that suggesting not to use Emacs because the default keybindings may not be for everyone doesn't make any sense. Emacs is the most configurable "text editor" ever made and so, of course, trivial things like keymappings are fully configurable.
Regarding the "Emacs pinky" issue, I noticed that several people have "anti-Emacs-pinky" keybindings in their .emacs, like user "Paul Nathan" (17.5k rep as I type this) here:
What are good custom keybindings in emacs?
Then it is known for a fact that many people prefer the vi way and use Emacs' viper-mode.
I think that the major issue in Emacs is, by default, the over-reliance on CTRL, and more specifically C-x and C-f / C-b. These three are really terrible because they require, IMHO, painful finger contortions.
So to me the problem first has to be defined: what is the issue? The issue is an over-reliance by default on CTRL, the fact that CTRL is typically badly located on most keyboards, and the fact that most keyboards out there (I'd guesstimate more than 99.9% of all keyboards ever produced) are total pieces of junk.
So what is my solution to this?
* I use a good mechanical keyboard and I do touch-type. People really serious about this will probably shell out $$$ for a very good split & matrix keyboard (like the Kinesis Advantage)... Because split and matrix are the only kinds of keyboards that make sense from an ergonomic point of view (this is not even open for debate). I, sadly, have been typing for three decades and cannot adapt to a matrix layout, so I'm using an old (flawed) staggered keyboard. If you're going to use a staggered keyboard, at least take one that has a good switch (for example buckling springs like in the IBM Model M, or Cherry MX switches, or Topres like in the Happy Hacking Keyboard Pro). Be ready to shell out $500 and more if you hope to find a split + "mechanical" staggered keyboard, like the Cherry MX-5000 (*) or the IBM M15.
So: in short, if you're really serious about this, go for a Kinesis Advantage (they're using Cherry MX switches and you can even choose your specific switch if I'm not mistaken).
If, like me, you sadly cannot adapt to these wonderful keyboards because they're "too different", then go for a good "mechanical" keyboard. Any keyboard allowing not to "bottom out" while typing will save your fingers' joints. Helps after decades of programming.
If you don't want to go the "mechanical" route and think rubber domes are fine keyboards (I consider them junk, but to each their own), then at least choose a good rubber dome. MS' Ergo 4K would be a good choice (once again: it's a rubber dome, so to me it's finger-destroying junk, but it's a matter of taste).
* Once you're using a good keyboard, remap CAPS-LOCK to CTRL. Can be done on any OS. It's trivial and there are plenty of links on the subject.
* Remap Emacs' keys to stop over-relying on CTRL. First, CTRL-x is terrible. It really has to be the worst shortcut ever. But you can remap ctl-x-map to what you want. I do this in my remapping minor mode:
(define-key my-keys-minor-mode-map (kbd "C-,") ctl-x-map)
C-, might not suit you: pick something else...
Then there's the issue of cursor movement. I think it's a big one for a "text editor". Even if I tend to use all the fine Emacs functions to quickly move around text buffers instead of "moving the cursor", I still need to move the cursor "manually" quite often.
C-f / C-b have to be the two most stupid shortcuts to move the cursor ever.
I use M-{i,j,k,l}. Some people prefer {hjkl} instead of {ijkl}, but I like {ijkl} because it reproduces the inverted-T arrow layout. I also like the fact that when, as a touch-typist, you're on your home row, you already have three fingers on {jkl}. No crazy finger motion to reach 'f' or 'b': makes no sense.
Last but not least: when you're not typing, do rest your fingers on your keyboard. For this, of course, you need a keyboard with a good switch which has enough resistance not to activate when you're simply resting your fingers on it.
A: Making caps lock another control key is a good place to start. Invest in an ergonomic keyboard. Some emacs users even go as far as to get foot pedal things for control and meta...
A: The Microsoft Natural Keyboard has been very, very good for me. I use emacs for everything 10+ hrs a day with no problems.
A: My advice would be to try using your thumbs to press modifier keys (control, alt) when they are within a reach. On keyboards which have shorter space-bar it is possible to press Alt (Meta) even without bending your thumb inwards. You can remap e.g. right Alt to Control and this way be able to conveniently access both Control and Alt.
This is also possible on MS Natural Keyboards.
A: Consider a Kinesis Contoured keyboard. It took me about a month to get up to speed with mine and I now consider it to be the ideal Emacs keyboard, even without the foot-pedal.
No joke. I ordered my first one with a foot-pedal, but found I wasn't actually capable of coordinating the timing of my feet and my hands sufficiently to make much use of it for modifier keys. For a while I used it to toggle the integrated number pad, but I gave that up when I realized I wasn't using it because the number row on the Kinesis is so easy to reach.
A: One solution not yet mentioned here is to use both hands for key combinations.
For example, suppose you want to press <CTRL-K>. On QWERTY-keyboards, <K> is on the right, so press <CTRL> with your left hand and <K> with your right hand. Once you get used to the system, it works fine.
A: Configure the space bar to work both as Space and Control: when pressed alone, it's a space; when pressed with other keys, it's Ctrl. So Space + x is treated as Ctrl + x.
You can do it with AutoHotKey on Windows, and with "at-home-modifier" in Xorg on Linux. You can use Karabiner (formerly known as KeyRemap4MacBook) on Mac. (In fact, I am the author of at-home-modifier. =)
You can do more if you have a keyboard with many keys around the space bar, like Japanese keyboards (example image: http://www.owltech.co.jp/products/keyboard/KB86STD/KB86STD_B-320.jpg).
My bottom row is basically Esc, BS, Space, Enter, Tab, but when those keys are used as modifiers, it's Alt, Shift, Ctrl, Shift, Alt. (For example, if you hold down Esc first and then Space, it's Alt+Space; but Space followed by Esc is Ctrl+Esc. If you press Space, Esc, and x, then it's Ctrl+Alt+x.) All can be pressed with the thumbs. You can order a Japanese keyboard from, say, amazon.com. You don't have to speak Japanese.
This is extremely handy. For firefox (sorry, not emacs) for example, focus a link, and press Ctrl + Enter; then it'll be opened in a new tab. (By also pressing Shift, it'll switch to the new tab, rather than staying on the current.)
(The above picture is the one the author of at-home-modifier uses. The maker doesn't sell this model any more, though.)
A: Per @Alasdair, remap Ctrl to "Caps Locks" or elsewhere: instructions for various platforms.
P.S. I'm a bit surprised this can't be done via an elisp function.
A: One more approach: if you want to avoid getting "Emacs pinky", simply do not use your pinky to press the Control key.
If necessary, remap the keys on your keyboard so they go in the following order:
[Ctrl][Alt][ Space ][Alt][Ctrl]
On any standard keyboard (which symmetrically positions modifier keys, e.g. any MS keyboard) now you can press Ctrl key with ring finger and Alt key with middle finger on both hands. These fingers are much stronger than pinky and can endure frequent use.
A great tool for easily remapping keyboard keys on Windows is AutoHotKey.
On Ubuntu I managed to do it using: Keyboard Preferences / Layouts/ Other options
A: Put the modifiers where they were meant to be: on either side of the space-bar, where they can be pressed by the thumb (or other digit of your choice) on the opposite hand from the digit pressing the modified letter (so that C-g is right-thumb on Ctrl and left index on 'g', and C-k is left-thumb on Ctrl and right middle on 'k'). You will note that the correct sequence, from inside out, is Ctrl Meta Super Hyper.
How you do this depends on your OS and your keyboard. For Windows, you might like to start here. In Mac OS X you can look in System Preferences > Keyboard & Mouse > Modifier Keys. For Linux, there are a thousand xmodmap and XKB tutorials.
A: Try viper-mode, which is a vi emulator in emacs. As someone who has switched back and forth between vi/emacs/vim several times in the last 25 years, I'm now finally trying viper-mode in emacs, and I like it. I find the vi commands to be more comfortable, but I can still keep the advanced features of emacs that I like.
A: I use emacs and bash all day every day, and I have capslock as an extra left-control key, like VT100 intended. Nobody's mentioned the best way to do that on X11, yet. (actually, this is specific to the X.org/xfree86 X server, which everything uses these days):
setxkbmap -option ctrl:nocaps
Or edit your xorg.conf to have
Section "InputDevice"
Identifier "Generic Keyboard"
Driver "kbd"
Option "XkbRules" "xorg"
Option "XkbModel" "pc105"
Option "XkbLayout" "us"
Option "XkbOptions" "ctrl:nocaps"
Option "Autorepeat" "200 40"
EndSection
(The XkbOptions and Autorepeat are what I added to the pre-generated one. Then X will start with the right key mapping every time, and you don't have to find where to put setxkbmap to have it executed every time you log in and start your window manager.)
Although gnome does have a keyboard manager thing, as boskom mentioned.
FYI, emacs was originally written for MIT lisp machines with "space cadet" keyboards. X11 has super, hyper, alt and meta modifier keys. Sometimes the "windows keys" on PC keyboards are mapped to Super. They're handy to bind to window-manager stuff (e.g. switch virtual desktops) because almost no apps normally use them.
A: to Chow,
Yesterday I found a solution so that we can have system-wide ErgoEmacs keybindings on the Mac.
The trick is to use Mac OS X's keybinding system, so that you have system-wide ErgoEmacs keybindings with the Control key. Then, in the OS preferences, swap the Control and Cmd keys.
That way, you get ErgoEmacs keybindings system-wide with the modifier beside the space bar. The drawback, of course, is that the normal Mac Cmd+key shortcuts are now at the corner of the keyboard. So it's a trade-off, depending on whether you use most apps for text editing or for the apps' shortcuts.
might give it a shot here:
http://code.google.com/p/ergoemacs/wiki/ErgoEmacs_keys_system_wide
Also, a few years back I tried the Mac's OS-wide custom keybinding; some Cocoa apps still don't support it. See the bottom here:
• How To Create Keybinding In Mac OS X
but perhaps things are better now.
A: Relevant comic: http://www.userfriendly.org/cartoons/archives/07sep/uf010710.gif
@ Xiong Chiamiov
A: I have a MS natural keyboard as well and it's awesome. I've managed to train myself to use the side of my left hand (below the pinky) to hit the Ctrl key.
A: Even after remapping capslock to become control, you still have to use your pinky to press it - at least I do, because my ring finger won't reach it for a command like C-g. Using your pinky at all is not recommended, right?
I'm on a MacBook Pro and I've just remapped the ⌘ on the right side of the keyboard to become control. So that way, for instance, C-g becomes a keystroke I execute with both hands, my right-hand thumb on ⌘ and my left forefinger on g.
We'll see if that helps with my RSI. Anyone else done this?
A: Get a foot pedal! (I have a kinesis.) After you do, unmap control and capslock so you force yourself to use your feet.
(FYI, remapping capslock will help, but after enough emacsing in one day, will not be a total solution.)
A: For the love of God - use another text editor! If it's something that requires a foot pedal to work with it normally, then... well... frankly, I'm speechless. There is a multitude of powerful contemporary text editors out there that don't require you to memorize volumes of arcane keystrokes or buy fancy hardware.
You know, I can understand and accept a lot of things, but a foot pedal for a simple text editor is really where I draw the line.
A: I have always been curious about why such a large community of programmers, writers, geeks, etc. hasn't yet found a super simple and effective solution to this problem. Simply: 1) take a small piece of paper and make a paper ball of it; 2) use scotch tape to stick it onto your left Ctrl key (temporarily removing the key from its place); 3) when writing, use the side of your left palm to press that key: now this key is higher than the others and you can press it easily. That way you don't need to buy uber ergo-keyboards, or remap the Ctrl key to Caps Lock (which you will eventually push with your pinky anyway)...
So much discussions about such a small problem.
A: Remap Left-Ctrl and Caps-Lock so they are where they should be:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,03,00,00,00,3a,00,1d,00,1d,00,3a,00,00,00,00,00
A: One of the first things I do on a new machine is remap Caps Lock to a new Control.
Google around, there are plenty of .reg files available that will do this painlessly for you on Windows.
A: I started using the side/palm of my hands to hit the Control key instead of my pinky fingers. My understanding is that on more ergonomic keyboards the Control key is bigger, which makes it easier to perform that motion.
A: I can use the Control key in either the west or south-west positions without any trouble. Many Emacsers swear that the control key belongs in the west position and the west position only, and that anything else will ruin your pinky. The only thing we know for sure causes RSI from typing is too much typing. Try type-break-mode and see if a few regular breaks help.
A: Buy a Happy Hacking Keyboard which has Ctrl in The Right Place (Caps Lock is moved elsewhere). It has excellent response and is configurable via DIP switches for maximum integration on Mac, Windows, and Linux (for example, you can switch what is Alt and what is the Windows key right from the keyboard, no software required).
It also has a very small footprint, if that suits your fancy.
A: Since this thread is still kinda going, I'll add my two cents:
With or without emacs, the ctrl key is useful for tons of stuff on linux or windows: copy, cut, paste, find, close, quit... I use this stuff constantly. So as others have suggested, I want that near the spacebar so I can use my thumb. And that's how it is by default on a mac, where all that stuff uses the cmd key:
[ctrl] [alt] [cmd] [spacebar]
So, I use a mac keyboard on my linux box, and set up the cmd key as a second ctrl key (In Ubuntu Lucid: Keyboard Prefs > Layouts > Options > Alt/Win Key Behavior > Control is mapped to Win keys (and the usual Ctrl keys))
[ctrl] [alt] [ctrl] [spacebar]
Other benefits:
* When I need to use a mac sometimes, cut/paste/etc are all in the same place I'm used to.
* ctrl+tab (with the real ctrl key) still moves through tabs for browsers and other apps, on both platforms.
The drawback to this plan is that the alt key has moved to the left, so the alt+tab command (which I use for window switching) no longer matches the mac equivalent cmd+tab. But I can still hit it with my thumb, and it's still, to me, a far lesser evil than destroying my pinky. Yeah, I know I could just make ctrl+tab the window switcher, but then the real ctrl key doesn't work for tab switching. Besides, with apps moving into the browser, the window/tab navigational strategies are gonna be in flux for a while -- but the basics like cut/paste aren't going anywhere, so I want them locked down. Under my thumb.
(Of course, if you wanted to use emacs on a mac, I guess you'd be back at square one...)
A: I actually did my own hack to avoid using the Ctrl key. I now use the SPACEBAR key instead.
This small program for X changes the behavior of the space bar, so that when it is used in a combination, it adds the control modifier to it. When used alone, it behaves normally on release.
That way you don't have to use your pinky at all! Worked perfectly for me.
A: I am not a programmer and I also have a hard time explaining ideas. I am on a dell mini laptop. The mouse touchpad is in a spot where my thumbs rest.
My left touchpad button acts as a control key:
With the side of my thumb I press it and edit in emacs as usual.
I was going to map the right touchpad button to Alt, but instead I have done the following: I press the button and the Control key gets pressed (locked). I press it again and Control gets released.
This not only made my pinky feel better, it also made my editing twice as fast (according to my org-mode clocks).
It is hard to explain how nice it works.
In order to do this I used the following two aplications:
xbindkeys
xdotool
My xbindkeys config file:
###########################
# xbindkeys configuration #
###########################
# left mouse button ctrl key
"xdotool keydown ctrl"
b:1
"xdotool keyup ctrl"
control + b:1 + Release
# vi wanna be style editing
"xdotool keydown ctrl"
release+b:3
"xdotool keyup ctrl"
release+control+b:3
# -------------------------
Before experimenting with these ideas, make sure to read the man pages. Do not have anything important open. I had to kill my window manager a few times before getting it correct.
notes:
I use xmodmap to do all the regular stuff, i.e. caps ---> control (not a full solution), swapping alt and control. (On a laptop it is OK, but my thumbs cramp up.)
I use the window manager config file (stumpwm) to create bindings to load the proper key mapping file (depending on my mood for the day).
I am sure all this can be implemented in a different environment.
My pinky pain is gone, my editing is faster.
A: Try the God-Mode plugin; it's not bad.
A: Just to overcome this issue, I remapped all the copy, paste, save, etc. commands onto the numpad. For further ease I bought a separate numpad and placed it behind my laptop keyboard.
You can easily remap the keys using AHK (AutoHotkey). I am using the following key mapping script:
NumpadIns::^s
NumpadEnd::^c
NumpadDown::^v
NumpadPgDn::^x
NumpadLeft::^+v
NumpadClear::Control
NumpadRight::^a
NumpadHome::q
NumpadUp::Tab
NumpadDel::^f
NumpadEnter::Space
1 : copy
2 : paste
0 : save
etc....
A: Assuming the key in the lower left corner of the keyboard is Control (which it is on standard keyboards), it's very easy to just lower your palm and it will press the Ctrl key. No pinky involved. It's so easy and fast, I love it. I only use the pinky for the other keys above the Ctrl key (Shift, Caps, etc.). For Alt, the Windows key, and the Function key (when using my laptop) on the left side I use my left thumb (right thumb for space). I use Ctrl the same way for combinations of hotkeys that include Ctrl.
I only use the left Ctrl key this way. I use Right Ctrl as a "super" modifier (mapped to F23 in AutoHotkey, which is then mapped to super in Emacs).
It may seem odd at first to press Ctrl this way, but after some usage it becomes super easy (it only requires a slight lowering of the hand/palm); to me it's almost as easy as when no hotkey (Ctrl) is pressed. I notice the difference as I'm now learning Emacs and learning to do the same on the right side (with the App key and Right Ctrl key, and Down and End, on my laptop).
Here's my ahk code:
#NoEnv ; Recommended for performance and compatibility with future AutoHotkey releases.
SendMode Input ; Recommended for new scripts due to its superior speed and reliability.
SetWorkingDir %A_ScriptDir% ; Ensures a consistent starting directory.
^#e:: Gosub, start_emacs
^#c:: Gosub, start_capture
#IfWinActive emacs@ ahk_class Emacs
Up::Return ; to not interfere with pressing <Down>
; F21 = Alt
RAlt::F21
; F23 = Super
RCtrl::F23
#IfWinActive
exit
start_emacs:
IfWinExist emacs@ ahk_class Emacs
WinActivate
else
Run c:\bin\emacs\bin\runemacs.exe
WinMaximize
return
start_capture:
Gosub, start_emacs
SendInput {Ctrl down}xf{Ctrl up} {ctrl down}{shift down}{backspace}{ctrl up}{shift up}
SendInput ~/org/capture.org {enter}
SendInput {Alt down}x{Alt up} org-capture {enter}
return
Here's what I have in my .emacs:
; prevent single key press from activating the given key
;; http://emacs.1067599.n5.nabble.com/w32-pass-rwindow-to-system-td144902.html
(setq w32-pass-lwindow-to-system nil
w32-pass-rwindow-to-system nil
w32-pass-apps-to-system nil)
; make sure the given key is not used as a modifier
(setq w32-lwindow-modifier nil
w32-rwindow-modifier nil
w32-apps-modifier nil) ; Menu/App key
; misc
(setq w32-recognize-altgr nil) ; C+M works: http://www.gnu.org/software/emacs/manual/html_node/emacs/Windows-Keyboard.html
; A-alt
(define-key local-function-key-map (kbd "<f21>") 'event-apply-alt-modifier) ; RAlt in ahk
; H-hyper
(define-key local-function-key-map (kbd "<f22>") 'event-apply-hyper-modifier)
(define-key local-function-key-map (kbd "<menu>") 'event-apply-hyper-modifier)
(define-key local-function-key-map (kbd "<apps>") 'event-apply-hyper-modifier)
(define-key local-function-key-map (kbd "<lwindow>") 'event-apply-hyper-modifier)
(define-key local-function-key-map (kbd "<down>") 'event-apply-hyper-modifier)
; s-super
(define-key local-function-key-map (kbd "<f23>") 'event-apply-super-modifier) ; RCtrl in ahk
(define-key local-function-key-map (kbd "<right>") 'event-apply-super-modifier)
(define-key local-function-key-map (kbd "<rwindow>") 'event-apply-super-modifier)
A: My Keyboard
Just make a keyboard with a short space key!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: C++ Template Ambiguity A friend and I were discussing C++ templates. He asked me what this should do:
#include <iostream>
template <bool>
struct A {
A(bool) { std::cout << "bool\n"; }
A(void*) { std::cout << "void*\n"; }
};
int main() {
A<true> *d = 0;
const int b = 2;
const int c = 1;
new A< b > (c) > (d);
}
The last line in main has two reasonable parses. Is 'b' the template argument or is b > (c) the template argument?
Although, it is trivial to compile this, and see what we get, we were wondering what resolves the ambiguity?
A: AFAIK it would be compiled as new A<b>(c) > d. This is the only reasonable way to parse it, IMHO. If the parser couldn't assume under normal circumstances that a > ends a template argument list, that would result in much more ambiguity. If you want it the other way, you should have written:
new A<(b > c)>(d);
A: As stated by Leon & Lee, 14.2/3 (C++ '03) explicitly defines this behaviour.
C++ '0x adds to the fun with a similar rule applying to >>. The basic concept, is that when parsing a template-argument-list a non nested >> will be treated as two distinct > > tokens and not the right shift operator:
template <bool>
struct A {
A(bool);
A(void*);
};
template <typename T>
class C
{
public:
C (int);
};
int main() {
A<true> *d = 0;
const int b = 2;
const int c = 1;
new C <A< b >> (c) > (d); // #1
new C <A< b > > (c) > (d); // #2
}
'#1' and '#2' are equivalent in the above.
This of course fixes that annoyance with having to add spaces in nested specializations:
C<A<false>> c; // Parse error in C++ '98, '03 due to "right shift operator"
A: The C++ standard defines that, for a template name followed by a <, the < is always the beginning of the template argument list, and the first non-nested > is taken as the end of the template argument list.
If you intended that the result of the > operator be the template argument, then you'd need to enclose the expression in parentheses. You don't need parentheses if the argument was part of a static_cast<> or another template expression.
A: The greediness of the lexer is probably the determining factor in the absence of parentheses to make it explicit. I'd guess that the lexer isn't greedy.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Are off-the-cloud desktop applications dead? Although somewhat related to this question, I have what I think is a different take on it.
Is a desktop app that has no connections to the "cloud" dead? I believe that some things are going to continue to be on the machine (operating systems obviously, browsers, some light-weight applications), but more and more things are moving to network-based applications (see Google Docs for office suites, GMail and other web-email clients for email, flickr for photo management, and more).
So other than the lightweight applications, is there anything that, in 5 to 10 years, will continue to be (either out of necessity or just demand) remain on the desktop and off the cloud?
A: 10 years or more ago this would have been, "Are non-internet applications dead?"
There's things the cloud does better than desktop applications, and in those places I'm sure non-cloud applications will become increasingly rare. But there's plenty of applications where you might not want to use the cloud, the benefits don't outweigh the costs, or the complexity just isn't worth it.
It's a new tool, and it's a better tool than desktop applications for many things. However, you don't throw away a hammer when you buy a screwdriver, you simply reserve it for when a nail needs to be driven.
A: Video editing and other resource intensive tasks will probably stay off the cloud for a long time.
A: IDEs will probably be "off the cloud" for a long time, if not forever... powerful customizable editors like Emacs will also probably stay "off the cloud" for a while.
A: If I look at the applications that we're selling and at the applications I've written as a consultant, I must very much agree with you. Most of them are useless if there is no internet connection. Some do work in disconnected mode, some don't, but all of them are pretty useless if you cannot connect to the big supporting system hidden far far away.
On the other hand, I wouldn't want to say that everything will move into the cloud in 5 years. Too much work with porting. There will be desktop applications that will function as a thin and offline-able client (just like, for example, Google Reader does if you install Gears) and there will be fully "clouded" :) applications.
I have no idea what will happen in 10 years. If I put myself 10 years back (and that is very easy to do, as I was writing a lot for a local computer magazine at that time), I totally couldn't have predicted how internet-dependent computing would become by 2008.
A: Gosh, I hope not as that's my job.
The main piece of software I write controls electronic hardware (PXI boards and the like) for testing. Without "real" hardware, there's nothing to test. Even the very nature of the tests themselves prevent simultaneous access (once you set the state of a switch, you don't want someone else moving it).
So as long as you interact with any hardware, you're off-the-cloud.
Oh, and some companies have security issues with being on the Internet; I'd say security would also drive desktop apps with no connections.
A: Many corporations will never move to an online system, simply because of security concerns.
For example, one of the greatest assets of Outlook is the ability to go offline and continue working. Sure, Google Gears has similar functionality, but then you're trusting Google with your corporate security.
A: Such applications have been dead for 15 years, ever since Sun took market leadership with their JavaStation.
No, wait. They did not. And things are not "more and more" moving to network-based applications. Sure, there is webmail, but even GMail is FAR away from the comfort of modern Outlook or Thunderbird clients. Same for office. Google Docs is a nice toy for occasional use, but it's vastly inferior to conventional office suites.
The desktop is not dead and it will not die anytime soon. Internet applications are alternatives in some situations, but they are just starting to get proper functionality and performance. Let's face it: JavaScript performance is still a joke, the IDE support is not there yet, and browsers are too unstable at the moment.
Google Chrome, IE8 and Firefox 3.1 start to go in a better direction, but it will take years for them to be mature enough to create JavaScript applications that actually can fully replace desktop apps. But that would require some proper standardization accross browsers, and we all know that this will not happen before the next millennium or so.
A: About 1% of users actually use Google Docs&Spreadsheets full-time. Almost all of the rest use Microsoft Office. So, no, off-the-cloud applications are not dead simply because a Google office suite exists. And those are, really, the only high-profile true web applications out there that are meant as desktop app replacements.
Webmails are a special case though. It actually makes sense to use those rather than a desktop app, since your email is next-to-useless without a connection anyway. But most applications don't NEED a full-time Internet connection. A word processor certainly doesn't.
What will definitely remain on the desktop:
* Games
* Small apps (calculator, notepad type of stuff)
* Anything that generates data that needs to be secure (I don't imagine tons of people or companies want to trust their accounting details to Google, for example)
* Web browsers (obviously)
* IDEs (Visual Studio via Ajax? Come on...)
* Auxiliary development tools (SVN, etc.), since good security policy would forbid their use through a web browser
* Anything that needs high enough performance that network latency would be an impediment
What will probably remain primarily on the desktop, at least for the next 5 years:
* Office tools (unless web-based limitations can be lifted... which would require much better-performing web browsers than we have now)
* Photoshop and such tools
* Chat clients (web-based equivalents are disappointing so far)
That's not to say that any of the above cannot have an Internet-based component, of course.
A: I personally will never leave my stuff on the web under someone else's control. All of my photos and e-mails I keep on local hard drives that I control.
I prefer to make my own stuff available to me through the web on my own hardware. The only way to have reasonable performance and be productive when offline is to use local apps.
To me the future will be local, but remotely accessible and synchronized. At least for the next 20 years or so.
Not only do I think it's not dead, I think it's the way everyone will want to go once we have a few disastrous failures (i.e., websites disappearing with user content that isn't backed up anywhere, or severe privacy breaches as some large company loses control of access to the data they are protecting).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What does the comma operator , do? What does the , operator do in C?
A: It causes the evaluation of multiple statements, but uses only the last one as a resulting value (rvalue, I think).
So...
int f() { return 7; }
int g() { return 8; }
int x = (printf("assigning x"), f(), g() );
should result in x being set to 8.
A: The comma operator will evaluate the left operand, discard the result and then evaluate the right operand and that will be the result. The idiomatic use as noted in the link is when initializing the variables used in a for loop, and it gives the following example:
void rev(char *s, size_t len)
{
char *first;
for ( first = s, s += len - 1; s >= first; --s)
/*^^^^^^^^^^^^^^^^^^^^^^^*/
putchar(*s);
}
Otherwise there are not many great uses of the comma operator, although it is easy to abuse to generate code that is hard to read and maintain.
From the draft C99 standard the grammar is as follows:
expression:
assignment-expression
expression , assignment-expression
and paragraph 2 says:
The left operand of a comma operator is evaluated as a void expression; there is a sequence point after its evaluation. Then the right operand is evaluated; the result has its type and value. 97) If an attempt is made to modify the result of a comma operator or to access it after the next sequence point, the behavior is undefined.
Footnote 97 says:
A comma operator does not yield an lvalue.
which means you can not assign to the result of the comma operator.
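A quick illustration of that rule (my own example, not from the standard text):
int a = 1, b = 2;
a = (a, b);      /* fine: a becomes 2, the value of the right operand */
/* (a, b) = 5;      compile error: the comma expression is not an lvalue */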
It is important to note that the comma operator has the lowest precedence and therefore there are cases where using () can make a big difference, for example:
#include <stdio.h>
int main()
{
int x, y ;
x = 1, 2 ;
y = (3,4) ;
printf( "%d %d\n", x, y ) ;
}
will have the following output:
1 4
A: The comma operator combines the two expressions either side of it into one, evaluating them both in left-to-right order. The value of the right-hand side is returned as the value of the whole expression.
(expr1, expr2) is like { expr1; expr2; } but you can use the result of expr2 in a function call or assignment.
It is often seen in for loops to initialise or maintain multiple variables like this:
for (low = 0, high = MAXSIZE; low < high; low = newlow, high = newhigh)
{
/* do something with low and high and put new values
in newlow and newhigh */
}
Apart from this, I've only used it "in anger" in one other case, when wrapping up two operations that should always go together in a macro. We had code that copied various binary values into a byte buffer for sending on a network, and a pointer maintained where we had got up to:
unsigned char outbuff[BUFFSIZE];
unsigned char *ptr = outbuff;
*ptr++ = first_byte_value;
*ptr++ = second_byte_value;
send_buff(outbuff, (int)(ptr - outbuff));
Where the values were shorts or ints we did this:
*((short *)ptr)++ = short_value;
*((int *)ptr)++ = int_value;
Later we read that this was not really valid C, because (short *)ptr is no longer an l-value and can't be incremented, although our compiler at the time didn't mind. To fix this, we split the expression in two:
*(short *)ptr = short_value;
ptr += sizeof(short);
However, this approach relied on all developers remembering to put both statements in all the time. We wanted a function where you could pass in the output pointer, the value and the value's type. This being C, not C++ with templates, we couldn't have a function take an arbitrary type, so we settled on a macro:
#define ASSIGN_INCR(p, val, type) ((*(type *)(p) = (val)), (p) += sizeof(type))
By using the comma operator we were able to use this in expressions or as statements as we wished:
if (need_to_output_short)
ASSIGN_INCR(ptr, short_value, short);
latest_pos = ASSIGN_INCR(ptr, int_value, int);
send_buff(outbuff, (int)(ASSIGN_INCR(ptr, last_value, int) - outbuff));
I'm not suggesting any of these examples are good style! Indeed, I seem to remember Steve McConnell's Code Complete advising against even using comma operators in a for loop: for readability and maintainability, the loop should be controlled by only one variable, and the expressions in the for line itself should only contain loop-control code, not other extra bits of initialisation or loop maintenance.
A: As earlier answers have stated it evaluates all statements but uses the last one as the value of the expression. Personally I've only found it useful in loop expressions:
for (tmp=0, i = MAX; i > 0; i--)
A: The only place I've seen it being useful is when you write a funky loop where you want to do multiple things in one of the expressions (probably the init expression or loop expression). Something like:
bool arraysAreMirrored(int a1[], int a2[], size_t size)
{
size_t i1, i2;
for(i1 = 0, i2 = size - 1; i1 < size; i1++, i2--)
{
if(a1[i1] != a2[i2])
{
return false;
}
}
return true;
}
Pardon me if there are any syntax errors or if I mixed in anything that's not strict C. I'm not arguing that the , operator is good form, but that's what you could use it for. In the case above I'd probably use a while loop instead so the multiple expressions on init and loop would be more obvious. (And I'd initialize i1 and i2 inline instead of declaring and then initializing.... blah blah blah.)
A: The expression:
(expression1, expression2)
First expression1 is evaluated, then expression2 is evaluated, and the value of expression2 is returned for the whole expression.
A: I've seen it used most in while loops:
string s;
while(read_string(s), s.len() > 5)
{
//do something
}
It will do the operation, then do a test based on a side-effect. The other way would be to do it like this:
string s;
read_string(s);
while(s.len() > 5)
{
//do something
read_string(s);
}
A: I'm reviving this simply to address questions from @Rajesh and @JeffMercado, which I think are very important since this is one of the top search engine hits.
Take the following snippet of code for example
int i = (5,4,3,2,1);
int j;
j = 5,4,3,2,1;
printf("%d %d\n", i , j);
It will print
1 5
The i case is handled as explained by most answers. All expressions are evaluated in left-to-right order, but only the last one is assigned to i. The result of the ( expression ) is 1.
The j case follows different precedence rules since , has the lowest operator precedence. Because of those rules, the compiler sees assignment-expression, constant, constant .... The expressions are again evaluated in left-to-right order and their side-effects stay visible, therefore, j is 5 as a result of j = 5.
Interestingly, int j = 5,4,3,2,1; is not allowed by the language spec. An initializer expects an assignment-expression, so a direct , operator is not allowed.
Hope this helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "205"
} |
Q: profile-guided optimization (C) Anyone know this compiler feature? It seems GCC supports it. How does it work? What is the potential gain? In which cases is it good? Inner loops?
(this question is specific, not about optimization in general, thanks)
A: PGO gives about a 5% speed boost when compiling x264, the project I work on, and we have a built-in system for it (make fprofiled). It's a nice free speed boost in some cases, and probably helps more in applications that, unlike x264, are less made up of handwritten assembly.
A: Jason's advice is right on. The best speedups you are going to get come from "discovering" that you let an O(n^2) algorithm slip into an inner loop somewhere, or that you can cache certain computations outside of expensive functions.
Compared to the micro-optimizations that PGO can trigger, these are the big winners. Once you've done that level of optimization, PGO might be able to help. We never had much luck with it though - the cost of the instrumentation was such that our application became unusably slow (by several orders of magnitude).
I like using Intel VTune as a profiler primarily because it is non-invasive compared to instrumenting profilers which change behaviour too much.
A: The fun thing about optimization is that speed gains are found in the unlikeliest of places.
It's also the reason you need a profiler, rather than guessing where the speed problems are.
I recommend starting with a profiler (gprof if you're using GCC) and just start poking around the results of running your application through some normal operations.
A: It works by placing extra code to count the number of times each codepath is taken. When you compile a second time the compiler uses the knowledge gained about execution of your program that it could only guess at before. There are a couple things PGO can work toward:
* Deciding which functions should be inlined or not, depending on how often they are called.
* Deciding how to place hints about which branch of an "if" statement should be predicted, based on the percentage of calls going one way or the other.
* Deciding how to optimize loops based on how many iterations get taken each time that loop is called.
You never really know how much these things can help until you test it.
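For reference, a minimal sketch of the typical two-pass GCC workflow (the -fprofile-* flags are GCC's; the file names are made up):
gcc -O2 -fprofile-generate -o prog prog.c    # first build, with instrumentation
./prog typical-input.txt                     # run a representative workload; writes .gcda profile files
gcc -O2 -fprofile-use -o prog prog.c         # rebuild, letting GCC use the collected profile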
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: iPhone browser tag and optimized web site What is the iPhone's browser tag, and how is an iPhone-optimized web site different from a usual mobile web site?
Thanks!
A: Apple has some excellent guidelines for iPhone web page development here:
Safari Web Content Guide for iPhone
From my brief reading of it, here are a few key elements to look out for:
* The way the "viewport" and scrolling work is a bit different due to the small screen size. There are custom META tags that let you adjust this automatically when someone comes to your page (see the example after this list).
* Beware pages that use framesets or other features that require the user to scroll different elements on the page, because the iPhone does not display scrollbars.
* If you expect people to bookmark your page on the iPhone, there's a custom META tag that lets you specify a 53x53 icon that will look nicer than the typical favicon.ico.
* Avoid JavaScript that depends on mouse movement or hover actions to make things happen; it won't work right on the iPhone.
* There are some custom CSS properties that allow you to adjust text size and the highlight color of hyperlinks on the iPhone.
* There are other key HTML/JavaScript features that they tell you to either favor or avoid as well.
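For example, the viewport META tag mentioned in the first point typically looks something like this (the values shown are illustrative, not prescribed by the guide):
<meta name="viewport" content="width=device-width, initial-scale=1.0" />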
A: Nettuts has a great introduction to web development for the iPhone. You can find it here.
This is the specific code you asked for (taken from that article):
<!--#if expr="(${HTTP_USER_AGENT} = /iPhone/)"-->
<!--
place iPhone code in here
-->
<!--#else -->
<!--
place standard code to be used by non iphone browser.
-->
<!--#endif -->
A: Apple defines the user agent here.
This field is transmitted in the HTTP headers under the key "User-Agent"
A: Better solution:
- (NSString *)flattenHTML:(NSString *)html {
    NSScanner *theScanner;
    NSString *text = nil;
    theScanner = [NSScanner scannerWithString:html];
    while ([theScanner isAtEnd] == NO) {
        // find start of tag
        [theScanner scanUpToString:@"<" intoString:NULL];
        // find end of tag
        [theScanner scanUpToString:@">" intoString:&text];
        // replace the found tag with a space
        // (you can filter multi-spaces out later if you wish)
        html = [html stringByReplacingOccurrencesOfString:
                   [NSString stringWithFormat:@"%@>", text]
                                               withString:@" "];
    } // while
    return html;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Input type=text to fill parent container I'm trying to let an <input type="text"> (henceforth referred to as “textbox”) fill a parent container by settings its width to 100%. This works until I give the textbox a padding. This is then added to the content width and the input field overflows. Notice that in Firefox this only happens when rendering the content as standards compliant. In quirks mode, another box model seems to apply.
Here's a minimal code to reproduce the behaviour in all modern browsers.
#x {
background: salmon;
padding: 1em;
}
#y, input {
background: red;
padding: 0 20px;
width: 100%;
}
<div id="x">
<div id="y">x</div>
<input type="text"/>
</div>
My question: How do I get the textbox to fit the container?
Notice: for the <div id="y">, this is straightforward: simply set width: auto. However, if I try to do this for the textbox, the effect is different and the textbox takes its default row count as width (even if I set display: block for the textbox).
EDIT: David's solution would of course work. However, I do not want to modify the HTML – I do especially not want to add dummy elements with no semantic functionality. This is a typical case of divitis that I want to avoid at all cost. This can only be a last-resort hack.
A: Because of the way the box model is defined and implemented, I don't think there is a CSS-only solution to this problem. (Apart from what Matthew described: using percentages for the padding as well, e.g. width: 94%; padding: 0 3%;)
You could, however, write some JavaScript code to calculate the width dynamically on page load... hm, and that value would of course also have to be updated every time the browser window is resized.
Interesting by-product of some testing I've done: Firefox does set the width of an input field to 100% if, in addition to width: 100%, you also set max-width to 100%. This doesn't work in Opera 9.5 or IE 7, though (haven't tested older versions).
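A rough sketch of that JavaScript idea (my own illustration, not part of the original answer; it uses the #y div from the question, which already fills the row correctly, as the reference width):
function fitInput() {
    var input = document.querySelector('#x input');
    var y = document.getElementById('y');
    var cs = window.getComputedStyle(input);
    // subtract the input's own horizontal padding and borders from the target outer width
    var extra = parseFloat(cs.paddingLeft) + parseFloat(cs.paddingRight)
              + parseFloat(cs.borderLeftWidth) + parseFloat(cs.borderRightWidth);
    input.style.width = (y.offsetWidth - extra) + 'px';
}
window.addEventListener('resize', fitInput);
fitInput();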
A: How do I get the textbox to fit the container in 2019?
Just use display: flex;
#x {
background: salmon;
padding: 1em;
display: flex;
flex-wrap: wrap;
}
#y, input {
background: red;
padding: 0 20px;
width: 100%;
}
<div id="x">
<div id="y">x</div>
<input type="text"/>
</div>
A: With CSS3 you can use the box-sizing property on your inputs to standardise their box models.
Something like this would enable you to add padding and have 100% width:
input[type="text"] {
-webkit-box-sizing: border-box; // Safari/Chrome, other WebKit
-moz-box-sizing: border-box; // Firefox, other Gecko
box-sizing: border-box; // Opera/IE 8+
}
Unfortunately this won't work for IE6/7 but the rest are fine (Compatibility List), so if you need to support these browsers your best bet would be Davids solution.
If you'd like to read more check out this brilliant article by Chris Coyier.
Hope this helps!
A: You can surround the textbox with a <div> and give that <div> padding: 0 20px. Your problem is that the 100% width does not include any padding or margin values; these values are added on top of the 100% width, thus the overflow.
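Applied to the question's markup, that looks roughly like this (my own sketch; the input's default border may still need zeroing for an exact fit):
<div id="x">
  <div id="y">x</div>
  <div style="padding: 0 20px;">
    <input type="text" style="width: 100%; padding: 0;"/>
  </div>
</div>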
A: This is unfortunately not possible with pure CSS; HTML or Javascript modifications are necessary for any non-trivial flexible-but-constrained UI behavior. CSS3 columns will help in this regard somewhat, but not in scenarios like yours.
David's solution is the cleanest. It's not really a case of divitis -- you're not adding a bunch of divs unnecessarily, or giving them classnames like "p" and "h1". It's serving a specific purpose, and the nice thing in this case is that it's also an extensible solution -- e.g. you can then add rounded corners at any time without adding anything further. Accessibility also isn't affected, as they're empty divs.
Fwiw, here's how I implement all of my textboxes:
<div class="textbox" id="username">
<div class="before"></div>
<div class="during">
<input type="text" value="" />
</div>
<div class="after"></div>
</div>
You're then free to use CSS to add rounded corners, add padding like in your case, etc., but you also don't have to -- you're free to hide those side divs altogether and have just a regular input textbox.
Other solutions are to use tables, e.g. Amazon uses tables in order to get flexible-but-constrained layout, or to use Javascript to tweak the sizes and update them on window resizes, e.g. Google Docs, Maps, etc. all do this.
Anyway, my two cents: don't let idealism get in the way of practicality in cases like this. :) David's solution works and hardly clutters up HTML at all (and in fact, using semantic classnames like "before" and "after" is still very clean imo).
A: This behavior is caused by the different interpretations of the box model. The correct box model states that the width applies only to the content, with padding and margin added on top of it. So you are getting 100% plus a 20px right and left padding, equaling 100% + 40px as the total width. The original IE box model, also known as quirks mode, includes padding and margin in the width, so the width of your content would be 100% - 40px in this case. This is why you see two different behaviors. As far as I know there is no real solution for this; there is, however, a workaround: set the width to, say, 98% and the padding to 1% on each side.
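In CSS, that workaround looks like this:
input {
  width: 98%;
  padding: 0 1%;
}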
A: @Domenic this does not work. width: auto does nothing more than the default behavior of that element, because the initial value of width is auto (see page 164, Cascading Style Sheets Level 2 Revision 1 Specification). Assigning a display of type block does not work either; this simply tells the browser to use a block box when displaying the element, and does not assign a default behavior of taking as much space as possible like a div does (see page 121, Cascading Style Sheets Level 2 Revision 1 Specification). That behavior is handled by the visual user agent, not the CSS or HTML definition.
A: I believe you can counter the overflow with a negative margin, i.e.
margin: -1em;
A: The default padding and border will prevent your textbox from truly being 100%, so first you have to set them to 0:
input {
background: red;
padding: 0;
width: 100%;
border: 0; /* use 0 instead of "none" for IE7 */
}
Then, put your border and any padding or margin you want in a div around the textbox:
.text-box {
padding: 0 20px;
border: solid 1px #000000;
}
<body>
<div id="x">
<div id="y">x</div>
<div class="text-box"><input type="text"/></div>
</div>
</body>
This should allow your textbox to be expandable and the exact size you want without javascript.
A: To make the input fill up the width of its parent, there are 3 properties to set: width: 100%, margin-left: 0, margin-right: 0.
I just guessed that a zero margin setting could help, and I tried it; however, I don't know why the margins (left and right; of course top and bottom margins don't matter here) have to be zero to make it work. :-)
input {
width: 100%;
margin-left: 0;
margin-right: 0;
}
Note: You may need to set box-sizing to border-box to make sure the padding don't affect the result.
A: I solve this with CSS-only tables. A bit of a long example, but important for anyone who wants to make entry screens with a large number of fields for databases...
/* GH */
/* NO JAVA !!! ;-) */
html {
height: 100%;
}
body {
position: fixed;
margin: 0px;
padding: 0px;
border: 2px solid #FF0000;
width: calc(100% - 4px);
/* Demonstrate how form can fill body */
min-height: calc(100% - 120px);
margin-top: 60px;
margin-bottom: 60px;
}
/* Example how to make a data entry form */
.rx-form {
display: table;
table-layout: fixed;
border: 1px solid #0000FF;
width: 100%;
border-collapse: separate;
border-spacing: 5px;
}
.rx-caption {
display: table-caption;
border: 1px solid #000000;
text-align: center;
padding: 10px;
margin: 10px;
width: calc(100% - 40px);
font-size: 2.5em;
}
.rx-row {
display: table-row;
/* To make frame on rows. Rows have no border... ? */
box-shadow: 0px 0px 0px 1px rgb(0, 0, 0);
}
.rx-cell {
display: table-cell;
margin: 0px;
padding: 4px;
border: 1px solid #FF0000;
}
.rx-cell label {
float: left;
border: 1px solid #00FF00;
width: 110px;
padding: 4px;
font-size: 1em;
text-align: right;
font-family: Arial, Helvetica, sans-serif;
overflow: hidden;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.rx-cell label:after {
content: " :";
}
.rx-cell input[type='text'] {
float: right;
border: 1px solid #FF00FF;
padding: 4px;
background-color: #eee;
border-radius: 0px;
font-family: Arial, Helvetica, sans-serif;
font-size: 1em;
/* Fill the cell - but subtract the label width - and litte more... */
width: calc(100% - 130px);
overflow: hidden;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
input[type='submit'] {
font-size: 1.3em;
}
<html>
<meta charset="UTF-8">
<body>
<!--
G Hasse, gorhas at raditex dot nu
This example have a lot of frames so we
can experiment with padding and margins.
-->
<form>
<div class='rx-form'>
<div class='rx-caption'>
Caption
</div>
<!-- First row of entry -->
<div class='rx-row'>
<div class='rx-cell'>
<label for="input11">Label 1-1</label>
<input type="text" name="input11" id="input11" value="Some latin text here. And if it is very long it will get ellipsis" />
</div>
<div class='rx-cell'>
<label for="input12">Label 1-2</label>
<input type="text" name="input12" id="input12" value="The content of input 2" />
</div>
<div class='rx-cell'>
<label for="input13">Label 1-3</label>
<input type="text" name="input13" id="input13" value="Content 3" />
</div>
<div class='rx-cell'>
<label for="input14">Label 1-4</label>
<input type="text" name="input14" id="input14" value="Content 4" />
</div>
</div>
<!-- Next row of entry -->
<div class='rx-row'>
<div class='rx-cell'>
<label for="input21">Label 2-1</label>
<input type="text" name="input21" id="input21" value="Content 2-1">
</div>
<div class='rx-cell'>
<label for="input22">Label 2-2</label>
<input type="text" name="input22" id="input22" value="Content 2-2">
</div>
<div class='rx-cell'>
<label for="input23">Label 2-3</label>
<input type="text" name="input23" id="input23" value="Content 2-3">
</div>
</div>
<!-- Next row of entry -->
<div class='rx-row'>
<div class='rx-cell'>
<label for="input31">Label 3-1</label>
<input type="text" name="input31" id="input31" value="Content 3-1">
</div>
<div class='rx-cell'>
<label for="input32">Label 3-2</label>
<input type="text" name="input32" id="input32" value="Content 3-2">
</div>
<div class='rx-cell'>
<label for="input33">Label 3-3</label>
<input type="text" name="input33" id="input33" value="Content 3-3">
</div>
</div>
<!-- And some text in cells -->
<div class='rx-row'>
<div class='rx-cell'>
<div>Cell content</div>
</div>
<div class='rx-cell'>
<span>Cell content</span>
</div>
</div>
<!-- And we place the submit buttons in a cell -->
<div class='rx-row'>
<div class='rx-cell'>
<input type="submit" name="submit1" value="submit1" />
<input type="submit" name="submit2" value="submit2" />
</div>
</div>
<!-- End of form -->
</div>
</form>
</body>
</html>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
} |
Q: SQL Server 2000 - Debugging Deadlocks I'm looking for suggestions on how to debug and chase down deadlock issues in an SQL Server 2000 database. I've had it recommended to me to use trace flags 1204 and 3605, which I have found give me the following:
1204 - this trace flag returns the type of locks participating in the deadlock and the current command affected.
3605 - this trace flag sends trace output to the error log.
The specific stored procedures, tables and indexes still need to be uncovered, so the goal is to use these trace flags to do so. And then from there I'll know better what indexes need tuning, locking hints for tuning queries, and potential sproc bugs to fix.
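For reference, a minimal sketch of turning both flags on server-wide (the -1 argument makes them global rather than session-scoped):
-- Enable deadlock tracing; output lands in the SQL Server error log.
DBCC TRACEON (1204, 3605, -1)
-- Verify which flags are active.
DBCC TRACESTATUS (-1)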
Any other suggestions or happy-ending war stories on chasing down this all too common problem?
A: Here is the bible on deadlock troubleshooting: http://blogs.msdn.com/bartd/archive/2006/09/09/Deadlock-Troubleshooting_2C00_-Part-1.aspx
A: Very useful script for analyzing deadlocks: http://www.sommarskog.se/sqlutil/aba_lockinfo.html
A: Once you know the SQL statements involved, a code analysis will surely help. Some rules of thumb: check the order in which the rows are accessed, and check the isolation level used for the SQL statements. A profiler trace can help a lot.
Most of the time, it is because of a reader trying to get a shared lock on a resource locked exclusively by a writer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Best tool for synchronizing MySQL databases I'm on a little quest of merging the structure of two MySql databases. Is there a tool for this with the might of Red-Gate's SQL Compare? Are there any free alternatives?
A: I've been using SQLyog for years now. Recently they released v8.0, which includes an updated interface and two very cool features: the Query Profiler and SQL Formatter. The formatter was the reason I upgraded this year.
http://www.webyog.com/en/sqlyog_feature_list.php
Blog post with video and animated gif of the SQL Formatter in action
http://www.webyog.com/blog/2009/02/02/profile-and-format-mysql-queries-with-the-new-sqlyog-80/
A few years ago I tried Navicat and the mysql manager but decided on SQLyog. I'm sure both products have been updated since but I'm still happy with SQLyog which is updated regularly.
[Update] - Just downloaded and installed Navicat 8 for MySQL and was not impressed
After I added this post I reviewed and downloaded v8 of Navicat for MySQL. Compared with SQLyog's user interface, there is no contest: SQLyog is better and more intuitive, especially with the two latest features mentioned earlier. Immediately I noticed how SQLyog keeps you in one window when viewing a table's data, while Navicat spawns a new window. The report builder and viewer is the only thing that would put Navicat above SQLyog IMHO, and I already have other means to handle reports using ASP.NET. Try both products and I think you'll agree with me.
A: Although I don't have any experience with it, I hear that Navicat would be the tool to use:
https://www.navicat.com/en/products/navicat-for-mysql
Also, Toad for MySQL, I hear, isn't bad:
http://www.quest.com/toad-for-mysql/
A: Toad for MySQL can do both a "Schema Compare" as well as a "Data Compare" (and a lot more).
It used to be freeware, but the latest versions are not. You can still find the old versions though.
A: I've used Navicat for more than 3 years. It provides all the features that I need and saves me tons of time managing and developing MySQL and Oracle.
You can find a visual tutorial about Navicat's data/ structure synchronization at http://support.navicat.com/visual_tutorial/data_sync.html.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: ASP.Net word count with a custom validator A requirement for an ASP.Net 2.0 project I'm working on limits a certain field to a max of 10 words (not characters). I'm currently using a CustomValidator control with the following ServerValidate method:
Protected Sub TenWordsTextBoxValidator_ServerValidate(ByVal source As Object, ByVal args As System.Web.UI.WebControls.ServerValidateEventArgs) Handles TenWordsTextBoxValidator.ServerValidate
'' 10 words
args.IsValid = args.Value.Split(" ").Length <= 10
End Sub
Does anyone have a more thorough/accurate method of getting a word count?
A: You can use one of the builtin validators with a regex that counts the words.
I'm a little rusty with regex so go easy on me:
(\b.*\b){0,10}
A: This regex seems to be working great:
"^(\b\S+\b\s*){0,10}$"
Update: the above had a few flaws so I ended up using this RegEx:
[\s\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\xBF]+
I split() the string on that regex and use the length of the resulting array to get the correct word count.
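A minimal sketch of the final handler, assuming Imports System.Text.RegularExpressions in the code-behind:
Protected Sub TenWordsTextBoxValidator_ServerValidate(ByVal source As Object, ByVal args As System.Web.UI.WebControls.ServerValidateEventArgs) Handles TenWordsTextBoxValidator.ServerValidate
    '' Split on runs of whitespace/punctuation; trim first so trailing
    '' separators don't produce an empty final element.
    Dim words As String() = Regex.Split(args.Value.Trim(), "[\s\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\xBF]+")
    args.IsValid = words.Length <= 10
End Sub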
A: I voted for mharen's answer, and commented on it as well, but since the comments are hidden by default let me explain it again:
The reason you would want to use the regex validator rather than the custom validator is that the regex validator will also automatically validate the regex client-side using javascript, if it's available. If they pass validation it's no big deal, but every time someone fails the client-side validation you save your server from doing a postback.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What does the PDB get me while debugging and how do I know it's working? I have to use a third-party component without source code. I have the release DLL and release PDB file. Let's call it 'CorporateComponent.dll'. My own code creates objects from this DLL and calls methods on these objects.
CorpObject o = new CorpObject();
Int32 result = o.DoSomethingLousy();
While debugging, the method 'DoSomethingLousy' throws an exception. What does the PDB file do for me? If it does something nice, how can I be sure I'm making use of it?
A: To confirm that you're using the provided PDB (CorporateComponent.pdb) while debugging in the Visual Studio IDE, review the Output window and locate the line indicating that CorporateComponent.dll was loaded, followed by the string "Symbols loaded."
To illustrate from a project of mine:
The thread 0x6a0 has exited with code 0 (0x0).
The thread 0x1f78 has exited with code 0 (0x0).
'AvayaConfigurationService.vshost.exe' (Managed): Loaded 'C:\Development\Src\trunk\ntity\AvayaConfigurationService\AvayaConfigurationService\bin\Debug\AvayaConfigurationService.exe', Symbols loaded.
'AvayaConfigurationService.vshost.exe' (Managed): Loaded 'C:\Development\Src\trunk\ntity\AvayaConfigurationService\AvayaConfigurationService\bin\Debug\IPOConfigService.dll', No symbols loaded.
Loaded 'C:\Development\src...\bin\Debug\AvayaConfigurationService.exe', Symbols loaded.
This indicates that the PDB was found and loaded by the IDE debugger.
As indicated by others When examining stack frames within your application you should be able to see the symbols from the CorporateComponent.pdb. If you don't then perhaps the third-party did not include symbol information in the release PDB build.
A: The pdb contains information the debugger needs in order to correctly read the stack. Your stack traces will contain line numbers and symbol names of the stack frames inside of the modules for which you have the pdb.
I'll give two usages examples. The first is the obvious answer. The second explains source-indexed pdb's.
1st usage example...
Depending on calling convention and which optimizations the compiler used, it might not be possible for the debugger to manually unwind the stack through a module for which you do not have a pdb. This can happen with certain third party libraries and even for some parts of the OS.
Consider a scenario in which you encounter an access violation inside of the windows OS. The stack trace does not unwind into your own application because that OS component uses a special calling convention that confuses the debugger. If you configure your symbol path to download the public OS pdb's, then there is a good chance that the stack trace will unwind into your application. That enables you to see exactly what arguments your own code passed into the OS system call. (and similar example for AV inside of a 3rd party library or even inside of your own code)
2nd usage example...
Pdb's have another very useful property - they can integrate with some source control systems using a feature that microsoft calls "source indexing". A source-indexed pdb contains source control commands that specify how to fetch from source control the exact file versions that were used to build the component. Microsoft's debuggers understand how to execute the commands to automatically fetch the files during a debug session. This is a powerful feature that saves the debug egineer from having to manually sync a source tree to the correct label for a given build. It's especially useful for remote debugging sessions and for analyzing crash dumps post-mortem.
The "debugging tools for windows" installation (windbg) contains a document named srcsrv.doc which provides an example demonstrating how to use srctool.exe to determine which source files are source-indexed in a given pdb.
To answer your question "how do I know", the "modules" feature in the debugger can tell you which modules have a corresponding pdb. In windbg use the "lml" command. In visual studio select modules from somewhere in the debug menus. (sorry, I don't have a current version of visual studio handy)
A: The PDB is a database file that maps the instructions to their line numbers in the original code so when you get a stack trace you'll get the line numbers for the code. If it's an unmanaged DLL then the PDB file will also give you the names of the functions in the stack trace, whereas that information is usually only available for managed DLLs without PDBs.
A: The main thing I get from the PDB is line numbers and real method names for stack traces.
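For example, a quick way to see the difference, using the hypothetical types from the question:
try
{
    CorpObject o = new CorpObject();
    Int32 result = o.DoSomethingLousy();
}
catch (Exception ex)
{
    // With CorporateComponent.pdb alongside the DLL, the trace shows source
    // file and line, e.g. "at CorpObject.DoSomethingLousy() in CorpObject.cs:line 42";
    // without it you get the method name at best.
    Console.WriteLine(ex.ToString());
}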
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Using Windows XP as a SQL Server I was wondering if anyone knew of any limitations to using Windows XP as a File and SQL server. I am asking because one of the applications we sell, requires the customer to setup a server for filesharing and as a SQL Server. We already allow them to use SQL Express, but we wanted to see if we can suggest Windows XP as a low cost alternative to Windows Server. The only potential problem that I could see if there were limits on the number of concurrent connections to either the files or the database. We are only thinking of recommending this for smaller sized companies who would have 10-15 users.
A: There is a limit of 10 inbound connections on XP professional, and 5 on XP Home. So it would only be practicable for a very small company.
A: From this MS KB Article:
Note For Windows XP Professional, the
maximum number of other computers that
are permitted to simultaneously
connect over the network is ten. This
limit includes all transports and
resource sharing protocols combined.
For Windows XP Home Edition, the
maximum number of other computers that
are permitted to simultaneously
connect over the network is five. This
limit is the number of simultaneous
sessions from other computers the
system is permitted to host. This
limit does not apply to the use of
administrative tools that attach from
a remote computer.
Per development: The connection limit
refers to the number of
redirector-based connections and is
enforced for any file, print, named
pipe, or mail slot session. The TCP
connection limit is not enforced, but
it may be bound by legal agreement to
not permit more than 10 clients.
I suggest reading the kb article for more information.
A: Actually, you can run SQL Server Standard or Workgroup Edition on Windows XP Pro. It is not limited to the express version ...
A: One cost effective alternative is Windows Small Business Server.
SBS 2003 R2: Features at a Glance
A: This will break the EULA.
Here is the relevant knowledge base article. Note that while TCP connection limits are not enforced for XP, legally they are limited to 10 connections.
Small business server seems like a better fit, and is cost effective if you shop around.
A: The problem with Small Business Server is all the frills it comes with that are unnecessary for a simple file and SQL server, like Exchange Server, SharePoint, etc. I've used Windows XP as a small-business SQL/file server, but as others have pointed out, you are limited to 10 connections, legally speaking.
A: Another issue with Small Business Server is it can't be installed on an existing domain. Your best bet would be to package the SQL Server portion around a normal Windows server installation. If you're looking at 10-15 users, there's no guarantee that they have a domain. But if they don't, likely they are already dealing with the file server problem using accounts with same usernames/passwords on the file server(s) as on their individual workstations.
A: Presumably you mean SQL Express, as you can't run SQL Server on XP; it's a server product.
If the customer can afford your product, they can afford a copy of Server 2003, or whatever, the file sharing's built-in. Admittedly SQL Server's fairly expensive, but if your product needs it, that's the way it goes. If cost were an issue, you shouldn't be using SQL Server as the database platform. There's no point in trying to force a server-based solution into a client OS. You'll end up with all sorts of problems before long.
Doesn't the client have a domain-based infrastructure already?
The upshot being, if the client has 5-10 users of the software, they should be on SBS anyway for a variety of other reasons. You don't get SQL Server with it though.
(Samba would be an option for file-sharing, but doubtless more expensive than simply buying Server 2003 in this instance).
A: The number of connections is not related to the SQL Server edition, but to the operating system. For example, Windows XP allows only 10 concurrent connections and Windows 7 allows 20. For a Windows Server OS (no need to purchase any new server machine), the number of connections is unlimited (though you can limit it using Terminal Services).
The error message shown when the connection limit is reached is something like "the security limit reached...the number of concurrent connect attempts".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Automating DB Object Migrations from Source Control I'm looking for some "Best Practices" for automating the deployment of Stored Procedures/Views/Functions/Table changes from source control. I'm using StarTeam & ANT so the labeling is taken care of; what I am looking for is how some of you have approached automating the pull of these objects from source - not necessarily StarTeam.
I'd like to end up with one script that can then be executed, checked in, and labeled.
I'm NOT asking for anyone to write that - just some ideas or approaches that have (or haven't) worked in the past.
I'm trying to clean up a mess and want to make sure I get this as close to "right" as I can.
We are storing the tables/views/functions etc. in individual files in StarTeam and our DB is SQL 2K5.
A: We use SQL Compare from redgate (http://www.red-gate.com/).
We have a production database, a development database and each developer has their own database.
The development database is synchronised with the changes a developer has made to their database when they check in their changes.
The developer also checks in a synchronisation script and a comparison report generated by SQL Compare.
When we deploy our application we simply synchronise the development database with the production database using SQL Compare.
This works for us because our application is for in-house use only. If this isn't your scenario then I would look at SQL Packager (also from redgate).
A: I prefer to separate views, procedures, and triggers (objects that can be re-created at will) from tables. For views, procedures, and triggers, just write a job that will check them out and re-create the latest.
For tables, I prefer to have a database version table with one row. Use that table to determine which new updates have not been applied. Then each update is applied and the version number is updated. If an update fails, you have only that update to check, and you can re-run it knowing that the earlier updates will not happen again.
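A minimal sketch of that pattern (table and column names are illustrative):
-- One-row table tracking the current schema version.
CREATE TABLE SchemaVersion (Version int NOT NULL)
INSERT INTO SchemaVersion (Version) VALUES (0)

-- Each update script guards itself on the expected prior version.
IF (SELECT Version FROM SchemaVersion) = 4
BEGIN
    ALTER TABLE Customers ADD Region varchar(50) NULL
    UPDATE SchemaVersion SET Version = 5
END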
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: GridView will not update underlying data source So I've been pounding on this problem all day. I've got a LinqDataSource that points to my model and a GridView that consumes it. When I attempt to do an update on the GridView, it does not update the underlying data source. I thought it might have to do with the LinqDataSource, so I added a SqlDataSource and the same thing happens. The aspx is as follows (the code-behind page is empty):
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
ConnectionString="Data Source=devsql32;Initial Catalog=Steam;Persist Security Info=True;"
ProviderName="System.Data.SqlClient"
SelectCommand="SELECT [LangID], [Code], [Name] FROM [Languages]" UpdateCommand="UPDATE [Languages] SET [Code]=@Code WHERE [LangID]=@LangId">
</asp:SqlDataSource>
<asp:GridView ID="_languageGridView" runat="server" AllowPaging="True"
AllowSorting="True" AutoGenerateColumns="False" DataKeyNames="LangId"
DataSourceID="SqlDataSource1">
<Columns>
<asp:CommandField ShowDeleteButton="True" ShowEditButton="True" />
<asp:BoundField DataField="LangId" HeaderText="Id" ReadOnly="True" />
<asp:BoundField DataField="Code" HeaderText="Code" />
<asp:BoundField DataField="Name" HeaderText="Name" />
</Columns>
</asp:GridView>
<asp:LinqDataSource ID="_languageDataSource" ContextTypeName="GeneseeSurvey.SteamDatabaseDataContext" runat="server" TableName="Languages" EnableInsert="True" EnableUpdate="true" EnableDelete="true">
</asp:LinqDataSource>
What in the world am I missing here? This problem is driving me insane.
A: You are missing the <UpdateParameters> sections of your DataSources.
LinqDataSource.UpdateParameters
SqlDataSource.UpdateParameters
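For the SqlDataSource in the question, a sketch of the missing section might look like this (the parameter names must match the @Code and @LangId placeholders in the UpdateCommand):
<asp:SqlDataSource ID="SqlDataSource1" runat="server" ...>
    <UpdateParameters>
        <asp:Parameter Name="Code" Type="String" />
        <asp:Parameter Name="LangId" Type="Int32" />
    </UpdateParameters>
</asp:SqlDataSource>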
A: It turns out that we had a DataBind() call in the Page_Load of the master page of the aspx file that was probably causing the state of the GridView to get tossed out on every page load.
As a note - update parameters for a LINQ query are not required unless you want to give them some non-null default.
A: This is a total shot in the dark since I haven't used ASP at all.
I've been just learning XAML and WPF, which appears to be very similar to what you've posted above and I know that for some UI controls you need to specify the binding mode to two-way in order to get updates in both directions.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Best Practice: Legitimate Cross-Site Scripting While cross-site scripting is generally regarded as negative, I've run into several situations where it's necessary.
I was recently working within the confines of a very limiting content management system. I needed to include database code within the page, but the hosting server didn't have anything usable available. I set up a couple bare-bones scripts on my own server, originally thinking that I could use AJAX to import the contents of my scripts directly into the template of the CMS (thus retaining dynamic images, menu items, CSS, etc.). I was wrong.
Due to the limitations of XMLHttpRequest objects, it's not possible to grab content from a different domain. So I thought iFrame - even though I'm not a fan of frames, I thought that I could create a frame that matched the width and height of the content so that it would appear native. Again, I was blocked by cross-site scripting "protections." While I could indeed load a remote file into the iFrame, I couldn't execute JavaScript to modify its size on either the host page or inside the loaded page.
In this particular scenario, I wasn't able to point a subdomain to my server. I also couldn't create a script on the CMS server that could proxy content from my server, so my last thought was to use a remote JavaScript.
A remote JavaScript works. It breaks when the user has JavaScript disabled, which is a downside; but it works. The "problem" I was having with using a remote JavaScript was that I had to use the JS function document.write() to output any content. Any output that isn't JS causes script errors. In addition to using document.write() for every line, you also have to ensure that the content is escaped - or else you end up with more script errors.
My solution was as follows:
My script received a GET parameter ("page") and then looked for the file ({$page}.php), and read the contents into a variable. However, I had to use awkward buffering techniques in order to actually execute the included scripts (for things like database interaction) then strip the final content of all line break characters (\n) followed by escaping all required characters. The end result is that my original script (which outputs JavaScript) accesses seemingly "standard" scripts on my server and converts their standard output to JavaScript for displaying within the CMS template.
While this solution works, it seems like there may be a better way to accomplish the same thing. What is the best way to make cross-site scripting work specifically for the purpose of including content from a completely different domain?
A: Personally, I would call to that other domain on the server and get and parse the data there for use in your page. That way you avoid any problems and you get the power of a server-side language/platform for getting and parsing the data.
Not sure if that would work for your specific scenario...hard to know even with your verbose description...
A: You've got three choices:
*
*Create a server side proxy script.
*Create a remote script to read in remote dynamic HTML. Use a library like jQuery to make this easier. You can use the load function to inject HTML where needed. EDIT What I originally meant for example # 2 was utilizing JSONP, which requires the server-side script to recognize the "callback=?" param (a sketch follows after this list).
*Use a client-side Flash proxy and set up a crossdomain.xml file on your server's web root.
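A sketch of option #2 using jQuery's JSONP support; the URL and the response field are assumptions for illustration:
// jQuery turns callback=? into a dynamically named JSONP callback, which
// sidesteps the XMLHttpRequest same-origin restriction.
$.getJSON("http://your-server.example/content.php?page=foo&callback=?", function (data) {
    // Assumes the remote script wraps {"html": "..."} in the callback.
    document.getElementById("content").innerHTML = data.html;
});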
A: You could try easyXDM, by including very little code, you can pass data or method calls between documents of different domains.
A: I've come across that YDN server side proxy script before. It says it's built to work with Yahoo's Search APIs.
Will it work with any domain, if you simply trim the Yahoo API code out? Or do you need to replace it with the domain you want it to work with?
A: iframe remote content can be accessed by local javascript.
The remote server just have to set the document.domain of the page.
Eg:
Site A contain an iframe with src='Site B/home.php'
home.php looks like this :
[php stuff]...[/php]
[script type='text/javascript']document.domain='Site A'[/script]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Pretty graphs and charts in Python What are the available libraries for creating pretty charts and graphs in a Python application?
A: CairoPlot
A: I used pychart and thought it was very straightforward.
http://home.gna.org/pychart/
It's all native python and does not have a busload of dependencies. I'm sure matplotlib is lovely but I'd be downloading and installing for days and I just want one measley bar chart!
It doesn't seem to have been updated in a few years but hey it works!
A: I'm the one supporting CairoPlot and I'm very proud it came up here.
Surely matplotlib is great, but I believe CairoPlot is better looking.
So, for presentations and websites, it's a very good choice.
Today I released version 1.1. If interested, check it out at CairoPlot v1.1
EDIT: After a long and cold winter, CairoPlot is being developed again. Check out the new version on GitHub.
A: Have you looked into ChartDirector for Python?
I can't speak about this one, but I've used ChartDirector for PHP and it's pretty good.
A: NodeBox is awesome for raw graphics creation.
A: If you like to use gnuplot for plotting, you should consider Gnuplot.py. It provides an object-oriented interface to gnuplot, and also allows you to pass commands directly to gnuplot. Unfortunately, it is no longer being actively developed.
A: For interactive work, Matplotlib is the mature standard. It provides an OO-style API as well as a Matlab-style interactive API.
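To give a taste of the Matlab-style interface, a minimal matplotlib sketch:
import matplotlib.pyplot as plt

xs = range(10)
plt.plot(xs, [x * x for x in xs], label="x squared")  # simple line plot
plt.legend()
plt.savefig("chart.png")  # or plt.show() for an interactive window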
Chaco is a more modern plotting library from the folks at Enthought. It uses Enthought's Kiva vector drawing library and currently works only with Wx and Qt with OpenGL on the way (Matplotlib has backends for Tk, Qt, Wx, Cocoa, and many image types such as PDF, EPS, PNG, etc.). The main advantages of Chaco are its speed relative to Matplotlib and its integration with Enthought's Traits API for interactive applications.
A: Chaco from enthought is another option
A: You should also consider PyCha
http://www.lorenzogil.com/projects/pycha/
A: I am a fan of PyOFC2: http://btbytes.github.com/pyofc2/
It's just a package that makes it easy to generate the JSON data needed for Open Flash Charts 2, which are very beautiful. Check out the examples on the link above.
A: You can also use pygooglechart, which uses the Google Chart API. This isn't something you'd always want to use, but if you want a small number of good, simple, charts, and are always online, and especially if you're displaying in a browser anyway, it's a good choice.
A: You didn't mention what output format you need but reportlab is good at creating charts both in pdf and bitmap (e.g. png) format.
Here is a simple example of a barchart in png and pdf format:
from reportlab.graphics.shapes import Drawing
from reportlab.graphics.charts.barcharts import VerticalBarChart
d = Drawing(300, 200)
chart = VerticalBarChart()
chart.width = 260
chart.height = 160
chart.x = 20
chart.y = 20
chart.data = [[1,2], [3,4]]
chart.categoryAxis.categoryNames = ['foo', 'bar']
chart.valueAxis.valueMin = 0
d.add(chart)
d.save(fnRoot='test', formats=['png', 'pdf'])
(bar chart image: http://i40.tinypic.com/2j677tl.jpg)
Note: the image has been converted to jpg by the image host.
A: Please look at the Open Flash Chart embedding for WHIFF
http://aaron.oirt.rutgers.edu/myapp/docs/W1100_1600.openFlashCharts
and the amCharts embedding for WHIFF too http://aaron.oirt.rutgers.edu/myapp/amcharts/doc. Thanks.
A: You could also consider google charts.
Not technically a python API, but you can use it from python, it's reasonably fast to code for, and the results tend to look nice. If you happen to be using your plots online, then this would be an even better solution.
A: PLplot is a cross-platform software package for creating scientific plots. They aren't very pretty (eye catching), but they look good enough. Have a look at some examples (both source code and pictures).
The PLplot core library can be used to create standard x-y plots, semi-log plots, log-log plots, contour plots, 3D surface plots, mesh plots, bar charts and pie charts. It runs on Windows (2000, XP and Vista), Linux, Mac OS X, and other Unices.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "130"
} |
Q: How are people unit testing code that uses Linq to SQL How are people unit testing code that uses Linq to SQL?
A: 3 years late, but this is how I do it:
https://github.com/lukesampson/LinqToSQL-test-extensions/
No need to write a wrapper or do lots of plumbing, just drop the T4 template next to your .dbml and you get:
*
*An interface for your data context e.g. IExampleDataContext
*An in-memory mock for your data context e.g. MemoryExampleDataContext
Both will automatically use the mappings you've already configured in your DBML.
So you can do things like
public class ProductRepo {
IExampleDataContext DB { get; set; }
public ProductRepo(IExampleDataContext db) {
DB = db;
}
public List<Product> GetProducts() {
return DB.Products.ToList();
}
}
and you can call that with either
new ProductRepo(new MemoryExampleDataContext()).GetProducts(); // for testing
or
new ProductRepo(new ExampleDataContext()).GetProducts(); // use the real DB
A: Wrap the DataContext, then mock the wrapper. It's the fastest way to get it done, although it requires coding for testing, which some people think smells. But sometimes, when you have dependencies that cannot be (easily) mocked, it's the only way.
A: Normally, you don't need to test the part of the code that uses LINQ to SQL but if you really want to, you can use the same data sets that you're querying against the server and turn them into in-memory objects and run the LINQ queries against that (which would use the Enumerable methods instead of Queryable).
Another option is to use Matt Warren's mockable version of the DataContext.
You can also get the SQL statements that LINQ to SQL uses by getting them via the debugger (from the IQueryable object), check those manually, and then include them in the automated tests.
A: LINQ makes testing much easier. LINQ queries work just as well on Lists as on the LINQ to SQL stuff. You can swap out LINQ to SQL for list objects and test that way.
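A minimal sketch of that swap, assuming using System.Linq and an illustrative Product class with Name and Price properties:
// The same query shape runs against an in-memory list in a test
// (LINQ to Objects) as against a LINQ to SQL table in production.
var products = new List<Product>
{
    new Product { Name = "a", Price = 1m },
    new Product { Name = "b", Price = 5m }
};
var expensive = products.Where(p => p.Price > 2m).ToList();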
A: Mattwar over at The Wayward Web Log had a great article about how to mock up an extensible Linq2Sql data context. Check it out -- MOCKS NIX - AN EXTENSIBLE LINQ TO SQL DATACONTEXT
A: Update:
Fredrik has put an example solution on how to do unit test linq2sql applications over at his blog. You can download it at:
http://web.archive.org/web/20120415022448/http://iridescence.no/post/DataContext-Repository-Pattern-Example-Code.aspx
Not only do I think its great that he posted an example solution, he also managed to extract interfaces for all classes, which makes the design more decoupled.
My old post:
I found these blogs that I think are a good start for making the DataContext wrapper:
Link1
Link2
They cover almost the same topic, except that the first one also implements a means of extracting interfaces for the tables. The second one is more extensive, though, so I included it as well.
A: LINQ to SQL is actually really nice to unit test as it has the ability to create databases on the fly from what is defined in your DBML.
It makes it really nice to test a ORM layer by creating the DB through the DataContext and having it empty to begin with.
I cover it on my blog here: http://web.archive.org/web/20090526231317/http://www.aaron-powell.com/blog/may-2008/unit-testing-linq-to-sql.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58"
} |
Q: ASP.NET MVC quick start - a one-stop tutorial? There are many ASP.NET MVC blog posts, bits and pieces scattered over different web sites, as well as a couple of resource questions here - ASP.NET Model-view-controller (MVC) - where do I start from? and MVC Learning Resources
I wonder if there was a one-stop tutorial posted yet on getting started with ASP.NET MVC?
Thank you!
Edit: I probably need to clarify - a one-stop tutorial that'd help me get started within an hour or two and learn more as I go... Reading books is a non-starter for me personally - takes more time than I can afford and starts with the basics...
A: Scott Guthrie wrote a free complete end to end tutorial of creating a full web application using MVC and it touches on most of the major pieces of MVC:
*
*NerdDinner.com
*Code Walkthrough of how to build NerdDinner.com
A: Don't forget Scott Guthrie's blog. Latest news on MVC. The "official" site is two releases behind.
A: Have you looked at MVC Samples on CodePlex? Rob Conery has some screencasts that go along with the creation of the site at http://blog.wekeroad.com/mvc-storefront/.
A: http://www.asp.net/mvc
Whoops, submitted before I was done. The ASP.NET MVC site has tons of videos/screencast on getting started with ASP.NET MVC. Definitely watch the Scott Hanselman ones first.
Edit
The Rob Conery screencasts that @David provided are provided on the ASP.NET MVC site also, under videos. That would constitute one spot to get those resources and also the ones the ASP.NET MVC team put out.
One note on any resource you use. You could run into functionality that is no longer available in the framework due to it being in development. If you use the resources provided that you already found along with the tutorials, you will find the replacements or how to get around it.
A: Quickstart gives a good overview of all features.
A: Hopefully, as we get closer to release, http://asp.net/mvc will be the one stop shop for ASP.NET MVC related issues.
A: In addition to the above mentioned:
*
*http://weblogs.asp.net/stephenwalther
*Asp.net MVC in Action looks to be a good book.
A: We just recently released the beta version of TheBeerHouse MVC Edition which should give you some great examples. There is also a book written explaining everything, but you will have to wait a little longer for that to come out :D.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Simplest way to reverse the order of strings in a make variable Let's say you have a variable in a makefile fragment like the following:
MY_LIST=a b c d
How do I then reverse the order of that list? I need:
$(warning MY_LIST=${MY_LIST})
to show
MY_LIST=d c b a
Edit: the real problem is that
ld -r some_object.o ${MY_LIST}
produces an a.out with undefined symbols because the items in MY_LIST are actually archives, but in the wrong order. If the order of MY_LIST is reversed, it will link correctly (I think). If you know a smarter way to get the link order right, clue me in.
A: An improvement to the GNU make solution:
reverse = $(if $(wordlist 2,2,$(1)),$(call reverse,$(wordlist 2,$(words $(1)),$(1))) $(firstword $(1)),$(1))
*
*better stopping condition, original uses the empty string wasting a function call
*doesn't add a leading space to the reversed list, unlike the original
A: Doh! I could have just used a shell script-let:
(for d in ${MY_LIST}; do echo $$d; done) | tac
A: You can also define search groups with ld:
ld -r foo.o -( a.a b.a c.a -)
Will iterate through a.a, b.a, and c.a until no new unresolved symbols can be satisfied by any object in the group.
If you're using gnu ld, you can also do:
ld -r -o foo.o --whole-archive bar.a
Which is slightly stronger, in that it will include every object from bar.a regardless of whether it satisfies an unresolved symbol from foo.o.
A: A solution in pure GNU make:
default: all

foo = please reverse me

reverse = $(if $(1),$(call reverse,$(wordlist 2,$(words $(1)),$(1)))) $(firstword $(1))

all:
	@echo $(call reverse,$(foo))
Gives:
$ make
me reverse please
A: Playing off of both Ben Collins' and elmarco's answers, here's a punt to bash which handles whitespace "properly"1
reverse = $(shell printf "%s\n" $(strip $1) | tac)
which does the right thing, thanks to $(shell) automatically cleaning whitespace and printf automatically formatting each word in its arg list:
$(info [ $(call reverse, one two three four ) ] )
yields:
[ four three two one ]
1...according to my limited test case (i.e., the $(info ...) line, above).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Favorite Windows keyboard shortcuts I'm a keyboard junkie. I love having a key sequence to do everything. What are your favorite keyboard shortcuts?
I'll start by naming a couple of mine:
1 - Alt-Space to access the windows menu for the current window
2 - F2 to rename a file in Windows Explorer
A: To maximize a window: Alt+Space, X
To restore a window: Alt+Space, R
To minimize a window: Alt+Space, N
To close a window: Alt+Space, C
A: *
*F4 in windows explorer to access the location bar trivially.
*Menu key (next to the right-hand windows key) + W + F to create a new folder in explorer.
A: I try to stick to my keyboard as well. I frequently use...
*
*Win+L to Lock my system
*Alt+F4 to close a program
*Win+R to launch from the Run Window (Used for frequent programs instead of going through QuickLaunch)
*F2 to rename a file
*Win+D to go to Desktop
*Alt+Tab and Alt+Tab+Shift to cycle through open programs
Visual Studio
*
*Alt, D (debug), P (process), W (webdev process)
*Alt, T (Tools), P (process), W (webdev process) for VS 2008
*Alt, M, O to collapse to definitions
*F5 to launch
*F9, F10, and F11 for stepping through debugger
*Alt+K, D to format a document
*Alt+K, C to comment
*Alt+K, U to uncomment
Browser
*
*Alt+W to close tab
*F6 to focus on the address bar
A: How is this not here?
Win+Pause for System Information. Then the system PATH variable is only 2 clicks away (Advanced system settings, Environment Variables...)
A: *
*Win + E to open a Windows Explorer reference
*Win + R from the Run box
*Ctrl + Esc to open the start menu
And, of course, Alt + F4 to close things.
A: A few basic keyboard shortcuts for clipboard operations, text selection, and navigation that work in most Windows programs:
Clipboard
*
*Ctrl+X - Clipboard Cut
*Ctrl+C - Clipboard Copy
*Ctrl+V - Clipboard Paste
Selecting Text
*
*Ctrl+A - Select All (in the current field or document)
*Shift+[navigate with ▲/▼, Home/End, or Pg Up/Pg Dn] - Select text between the caret's previous and new positions. Continue to hold Shift and navigate to select more text.
Navigation
*
*Ctrl+left arrow / Ctrl+right arrow - Move the caret to the previous/next word
*Ctrl+Home / Ctrl+End - Go to beginning/end of the current field or document
Bonus Tip!
*
*Before submitting a web form where you've entered a lot of text into a text field (for example, an email in a web-based mail client -- or a new question or answer on Stack Overflow!), do a quick Ctrl+A, Ctrl+C on the field. That way, if something goes wrong with the submit (even if the browser crashes), you haven't lost your work -- you have a copy of it sitting on the clipboard.
A: Ctrl+Shift+Esc to go straight to the task manager without any intermediate dialogs.
A: I use the free AutoHotKey, then I define my own shortcuts:
*
*double tap F4 quickly => close the active window (like Alt+F4 but with one finger only)
*double tap Right Alt quickly => Find and Run Robot task manager
*F12 => open Find and Run Robot Locate32 plugin (I use it like a very lightweight desktop search)
*Ctrl+Up / Down in a command window => scroll back / forward command line like the mouse wheel
*Ctrl+w in a command window => close window
etc.
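For a flavour of the syntax, a minimal AutoHotkey v1 sketch of the double-tap-F4 binding above (the 400 ms threshold is illustrative):
~F4::
    ; The ~ prefix lets the first F4 pass through to the app; a second
    ; quick tap closes the active window.
    if (A_PriorHotkey = "~F4" and A_TimeSincePriorHotkey < 400)
        WinClose, A
    return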
A: In calc, F5, F6, F7, F8 cycle between Hex, Dec, Oct, Bin mode.
A: On Windows Vista, if you bring up the Start menu and search for a program, pressing Ctrl+Shift+Enter will run the selected program as Administrator. So to open an Administrator command prompt:
Windows key, type "cmd", Ctrl+Shift+Enter
A: For when you have a window stuck under an appbar and can't get at that window's system menu to move it:
alt-spacebar -> M -> arrow keys -> return
A: Win + Pause/Break to bring up computer information and to access environment variables under the Advanced tab.
Win + R to go straight to the run box (though I barely use this anymore since I started with Launchy).
Of course Alt + Tab but also Alt + Shift + Tab for going backwards.
Oh, and personally, I hate Ctrl + F4 for closing tabs - too much of a pinky stretch.
Oh and try Win + Tab on Windows 7 (with Aero on).
A: Win + 1 .. 9 -- Start quick launch shortcut at that index (Windows Vista).
Ctrl + Scroll Lock, Scroll Lock -- Crash your computer: Windows feature lets you generate a memory dump file by using the keyboard
@gabr -- Win + D is show desktop, Win + M minimizes all windows. Hitting Win + D twice brings everything back as it has only shown the desktop window in front of the other windows.
A: *
*Win+[type name of program] to launch a program in Vista
*Win+E for explorer
*Win+F for find
*Alt+Tab to swap between programs
*Ctrl+Tab to swap between tabs
Not really a 'Windows' shortcut, but the Ctrl+Alt+numpad and Ctrl+Alt+[arrows] to move and resize windows and move them to another monitor using WinSplit Revolution are absolutely great. I would never use large or multiple monitors without them.
A: win+M to minimise all. Useful for quick trips to the desktop.
A: My personal favourite is WinKey, U, Enter - shuts Windows down! ;-)
A: *
*Win+Pause/Break for System Properties
*Win+E: open windows explorer
*Win+F: find
*Win+R: run
*Win+M: minimize all windows
*Win+Shift+M: restore all windows
*Alt+F4: close program
*Alt+Tab: switch between tasks
*Ctrl+Alt+Del: task manager
A: *
*Ctrl + Shift + ESC : Run Task Manager
*Ctrl/Shift + Insert : Copy/Paste
*Shift + Delete : Cut (text)
*Win + L : Lock System
*Win + R : Run
*Ctrl + Pause Break : Break Loop (Programming)
*Ctrl + Tab : Tab Change
A: *
*Alt-F4 to close a program.
*WindowsKey + L to lock my workstation
*Ctr-Shift-Ins to copy text from a textbox
*Alt-Print Screen to capture a shot of just a window
*WindowsKey + R to open the "Run" dialog (XP Pro only- does something else on XP Home)
A: *
*Win-D to minimize all applications
*Ctrl-Shift-Esc to open Task Manager
A: Win-L to lock the computer..
A: Repeat Ctrl + Alt + Del Twice!
A: Ctrl + Shift + Esc -> Open Task Manager
Ctrl + W -> closes windows in MDIs where Ctrl+F4 doesn't work
Those and the Win + Number is Vista are used constantly.
Also a nice trick is Win + Tab -> cycles through program groups on task bar in Windows Xp and Server 2003. (i.e. same as Vista without the previews).
A: Many say that Win-D minimises all applications. Not true. It simply shows the desktop. Use Win-M to minimise all open windows. Use Win-Shift-M to restore them to their previous state.
By the way, did you notice that the Shift key can be combined with most of the usual shortcuts? e.g. Alt+Tab : cycle through applications 1->2->3->4->...1 Add Shift to the shortcut and you will be cycling in the opposite direction 1<-2<-3<-4<- ...1
Control+Tab to switch between Tabs in most Windows applications (sadly not in Eclipse) - you can already guess what Ctr+Shift+Tab will do. Especially handy in Firefox, IE, etc... where you have more than one Tab open and try going to the previous one. Very handy.
And one more tip, this is soooo handy, I love it. Only found out about it a couple of weeks ago:
FireFox users: tired of rightclick->Open Link in New Tab?
Click a link with MIDDLE mouse button and it will open in a new tab (depends on your Tabs settings in Tools->Options but by default would work). The magical thing about this is that it works even for the browser's Back button! Also when you type a search term into the Google box (usually in top right corner) and middle-click the search button, the search results are opened in a new tab. Closing tabs is also much easier with the middle mouse button (of course you can do Ctrl+W but sometimes the mouse is simply in your hand). You don't have to click the tab's red button to close it. Simply middle-click anywhere on the tab and it will be closed.
EDIT
I just tried the middle button in IE 7 and seems to work just like it does in FF, except for the Back-button and Search widget.
A: I don't have favorites among keyboard shortcuts -- they are all utility entities to me...
Except for Win+L, which means another coffee break!
A: Win+D to show the desktop and then Win+D to bring all the windows back again.
A: All those of you that mentioned Alt-Tab and Ctrl-Tab missed out the shift versions too
CTRL-SHIFT-TAB - move one tab back
ALT-SHIFT-TAB - move one window back in task switcher
A: Win+D, Win+R, Win+E, Win+1 (Firefox)
A: The most "important" shortcut is the Secure Attention Sequence, Ctrl+Alt+Del.
On XP you usually have to enable it otherwise it just runs Task Manager.
While logged in, SAS brings up the Windows Security dialog on its own Desktop, which will get you out of almost anything (such as a hung full-screen DirectX app).
The Task Manager shortcut is now (and always was) Ctrl+Shift+Esc.
(this answer applies to NT4, 2000, XP and 2003. I can't speak for Vista)
A: Alt+F4
Alt+Tab
Ctrl+Tab
Win+Tab
Ctrl+X
Ctrl+V
Ctrl+C
Alt+R
Alt+E
Alt+D
Ctrl+Space (VS IDE)
A: It's not a keyboard shortcut, but my favourite trick is to bind the large thumb button on the rat to move window, the smaller thumb button to resize. That way, windows can be moved and resized very easily and naturally. You can probably to that in windows too.
As for keyboard tricks, I use right ctrl+keypad to pick (one of nine) virtual screens. Very quick and natural.
A: I mapped some global hotkeys:
*
*In Winamp I use Ctrl+Alt+Backspace (same as AltGr+Backspace for me) to Pause. If someone wants my attention while I've got headphones on, far easier to press a couple of buttons than click the mouse on a button that's about five pixels wide.
*I use Ctrl+Alt+C to run calc.
A: In any dialog with tabs, Ctrl-Page Up/Down to cycle between the tabs.
A: I am used to setting up shortcut keys to program shortcuts in start menu (standard Windows feature). Once the program is started, pressing such shortcut brings the focus to the window instead of starting another copy.
For example, to pause Winamp I just press Ctrl+Alt+W, C (and I can have it working without tray or taskbar icons).
The only drawback is that some program names start with the same letter so I have to pick up other letters for them. =)
A: Not really an answer, but a hint at a good source to look at: if no one has cited it above, Wikipedia has them all (for the most important OSes). Not the best, but a start.
A: Press the Backspace key in any Windows Explorer window (including the common dialog windows) to go up one level in the folder hierarchy. This is a shortcut for a button next to the folder combo-box. Microsoft removed this functionality in Windows Vista and later in order to make Windows Explorer more like a web browser; now the Backspace key operates as a "Back" button. (If anyone knows of a way to go up one level in Vista and later, please comment!)
A: F5 to execute seems to be the one I use the most.
A: Windows
Windows right click key, next to the right alt can be very useful.
For the noobs,
tab and shift-tab to cycle through inputs
alt-tab and alt-shift-tab to cycle through the windows
ctrl-tab and ctrl-shift-tab to cycle through the tabs
ctrl-printscreen to snapshot the entire screen
and alt-printscreen to snapshot the current window
for some dialog windows ctrl-c will copy the message
Console
alt-space then e,p to paste in windows console
alt-space then e,k to mark in console
tab and shifttab to cycle autocomplete in console
Visual Studio
ctrl-shift-f Search in files
ctrl-f Search page
F12 Goto definition of the current word
F2 Rename selected text
F4 Open properties tab for selected
Highlight section and tab or shifttab Indent a block of text
ctrl-k,d Format Document
ctrl-k,c Comment out highlighted text
ctrl-k,u,c Un-comment highlighted text
ctrl-m,o Collapse to definitions
ctrl-m,m Toggle open and close the current method/function
ctrl-alt,l Open solution pane
ctrl-alt,o Open output pane
and of course ctrl-space for intellisense
A: My favourites are the following (which I have not been able to spot in the responses above):
*
*F12 Save as in Office applications
*Ctrl + Home Scroll to the top of the page in most applications or go to cell A1 in Excel
*Ctrl + Delete Go back to the cursor in a Word document or back to the active cell in Excel
*Ctrl + Shift + End Select a whole table in Excel from its top-left corner. If the table starts at A1, use in conjunction with the above for super speedy one-handed table selecting
It's already been said, but I'm repeating F6 to go directly to the browser address bar because it rocks!
A: I haven't seen Ctrl + Z mentioned yet. This one has saved my butt many times. It's the Undo command, which is really useful if you've just deleted a paragraph of text, or accidentally pasted over the wrong segment of code you just wrote.
Others in that vein:
Ctrl + X - Cut
Ctrl + C - Copy
Ctrl + V - Paste
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: BlackBerry development using IntelliJ IDEA 7.0? I know RIM has their own IDE (BlackBerry JDE) for building BlackBerry apps, but does anyone know how to configure IntelliJ IDEA to build/debug BlackBerry apps?
A: RIM's compiler (the one that builds the COD files) can be easily run from the command line. All you need to do is create a corresponding build step in IDEA.
Also, to make your life easier when editing the code, you may want to add the net_rim_api.jar (the one that comes with RIM JDE) to the JAR files used by your IDEA project.
As for the debugger, RIM's debugger was supposed to support the standard Java debugger interface. I don't remember what minimum version of the JDE is required for that.
A: RE: Chris' question about what is different... Blackberry applications can be standard MIDP apps or CLDC apps that make use of the Blackberry specific APIs. Most developers tend to take the latter approach, and then using Blackberry's tools is required - especially if you are using some of their secured APIs and have to sign your deployment files for them to run on the devices.
A potential answer to the original question would be to use the Blackberry ANT tools to create an ANT script for building the application and reference that from IntelliJ IDEA. Of course, that's only half the battle and to run/debug the application you'll need to connect the debugger to IDEA as noted by Alexander above. Alternatively, you could code in IDEA and run/debug in the JDE, but that seems less than ideal, to say the least.
I use Eclipse with the Blackberry plugin. Also not ideal, since you are forced to use an old (and buggy) version of Eclipse, but at least I'm in one IDE and can step through code running in a simulator.
Blackberry JDE integration would be a great IntelliJ plugin project.
A: Not really an answer, but more asking for clarification what is different for Blackberry dev versus other J2ME devices...
I see it's a MIDP J2ME device, and so the standard IntelliJ J2ME support would seem to give most of what is needed.
I guess the emulator side of things might be different... but maybe you can call the JDE emulator from IDEA...
Regards,
Chris
A: I've been using IntelliJ to develop Blackberry apps...sort of. IntelliJ is really good at indexing code, you just need to point it in the right direction. Its editing abilities are way beyond the JDE and in my opinion it is much more flexible and user friendly than Eclipse (even though RIM has an Eclipse plug-in).
I say sort of though as I just code in IntelliJ and currently still compile and debug through the JDE. Hoping for better integration on that front with IntelliJ down the line, but it is an acceptable working environment for now.
A: Not sure if this will help but here are instructions for setting up Eclipse for blackberry development.
Maybe you can use that information to figure out what changes to need to make in IDEA.
A: It's very easy to integrate IntelliJ with Blackberry development given the above suggestion (using the BB ant tasks), but I've yet to successfully debug the simulator through IntelliJ. It should work, but it doesn't.
Thus, the 'integration' is incomplete.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Building Apps for Motorola Cell Phone I have an L6 phone from Motorola, a USB cable to connect it to my computer, and the Tools for Phones software so I can do things like upload my own custom ringtones or download pictures from the phone's camera.
I have some ideas for programs I'd like to run on the phone, and it supports java, but I don't see anything in the software that I have for uploading them. Even if they did, I wouldn't know where to get started.
Does anyone have any info on how to get started building apps for this or similar phones?
A: I've never used Motorola's SDK, but from my limited work in JME the real hook in the 3rd-party tools is the emulators. Setting up a JME dev environment quickly is something that Sun got surprisingly right. Just get NetBeans with the JME pack and there is a regular emulator right in the IDE, and then you can hook in other proprietary emulators such as those from Motorola.
Not sure what kind of apps you are looking to do, but if you're interested in games I thought Beginning Mobile Phone Game Programming was a great starting point.
A: Perhaps Motorola's own site
link
A: I have not used the new Motorola development studio, because my experience with Motorola's development tools has not been a joyous one. When working with Motorola devices I tend to stick to the standard emulator (or sometimes the Sony Ericsson emulators as those are the best I have worked with by far).
The problem with Motorola's tools is that I always seemed to spend way too much time trying to figure out how to work around them. I would run into emulator specific issues and bugs, and I honestly don't have time to waste trying to figure out why the application runs on the target device but crashes on the emulator. It should be the opposite.
A good emulator is very important for mobile development though as that is where you will do 90% of your development, testing and tweaking, only periodically trying it out on the phone.
Finally, I agree with bpapa...Netbeans is an excellent IDE for J2ME development and here is a book that I recommend (get the original if possible, not the second edition as the second edition focuses way too much on MIDP 2.0 and assumes you know the basics).
http://www.amazon.com/J2ME-Game-Programming-Development/dp/1592001181/ref=pd_bbs_sr_3?ie=UTF8&s=books&qid=1221692983&sr=1-3
A: Yeah, the act of asking the question pointed me in the direction of an answer, and I found this:
https://developer.motorola.com/docstools/motodevstudio/
I could still use some pointers from someone of what to expect if anyone has done this before.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: SharePoint stream file for preview I am looking to stream a file housed in a SharePoint 2003 document library down to the browser. Basically the idea is to open the file as a stream and then to "write" the file stream to the reponse, specifying the content type and content disposition headers. Content disposition is used to preserve the file name, content type of course to clue the browser about what app to open to view the file.
This works all good and fine in a development environment and UAT environment. However, in the production environment, things do not always work as expected,however only with IE6/IE7. FF works great in all environments.
Note that in the production environment SSL is enabled and generally used. (When SSL is not used in the production environment, the file streams, is named as expected, and displays properly.)
Here is a code snippet:
// Read the entire file into memory (see the note below about large files)
System.IO.FileStream fs = new System.IO.FileStream(Server.MapPath(".") + "\\" + "test.doc", System.IO.FileMode.Open);
long byteNum = fs.Length;
byte[] fileBytes = new byte[byteNum];
fs.Read(fileBytes, 0, (int)byteNum);
// Preserve the file name and tell the browser what kind of content is coming
Response.AppendHeader("Content-disposition", "filename=Testme.doc");
Response.CacheControl = "no-cache";
Response.ContentType = "application/msword; charset=utf-8";
Response.Expires = -1;
// Write the bytes straight to the response stream
Response.OutputStream.Write(fileBytes, 0, fileBytes.Length);
Response.Flush();
Response.Close();
fs.Close();
Like I said, this code snippet works fine on the dev machine and in the UAT environment. A dialog box opens and asks to save, view or cancel Testme.doc. But in production, only when using SSL, IE6 & IE7 don't use the name of the attachment. Instead they use the name of the page that is sending the stream, testheader.aspx, and then an error is thrown.
IE does provide an advanced setting "Do not save encrypted pages to disk".
I suspect this is part of the problem, the server tells the browser not to cache the file, while IE has the "Do not save encrypted pages to disk" enabled.
Yes, I am aware that for larger files the code snippet above will be a major drag on memory, and this implementation will be problematic. So the real final solution will not open the entire file into a single byte array, but rather will open the file as a stream and then send the file down to the client in bite-size chunks (e.g. roughly 10K in size).
Anyone else have similar experience "streaming" binary files over ssl? Any suggestions or recommendations?
A: It might be something really simple. Believe it or not, I coded exactly the same thing today. I think the issue might be that the Content-Disposition header doesn't tell the browser it's an attachment, and therefore that it can be saved.
Response.AddHeader("Content-Disposition", "attachment;filename=myfile.doc");
Failing that, I've included my code below, as I know it works over https://
private void ReadFile(string URL)
{
try
{
string uristring = URL;
WebRequest myReq = WebRequest.Create(uristring);
NetworkCredential netCredential = new NetworkCredential(ConfigurationManager.AppSettings["Username"].ToString(),
ConfigurationManager.AppSettings["Password"].ToString(),
ConfigurationManager.AppSettings["Domain"].ToString());
myReq.Credentials = netCredential;
//get the stream of data
string contentType = "";
MemoryStream ms;
// Send a request to download the pdf document and then get the response
using (HttpWebResponse response = (HttpWebResponse)myReq.GetResponse())
{
contentType = response.ContentType;
// Get the stream from the server
using (Stream stream = response.GetResponseStream())
{
// Use a ReadFully helper (reads the response stream fully into a byte array; implementation not shown here):
byte[] data = ReadFully(stream, response.ContentLength);
// Return the memory stream.
ms = new MemoryStream(data);
}
}
Response.Clear();
Response.ContentType = contentType;
Response.AddHeader("Content-Disposition", "attachment;");
// Write the memory stream containing the pdf file directly to the Response object that gets sent to the client
ms.WriteTo(Response.OutputStream);
}
catch (Exception ex)
{
throw new Exception("Error in ReadFile", ex);
}
}
A: Ok, I resolved the problem, several factors at play here.
Firstly this support Microsoft article was beneficial:
Internet Explorer is unable to open Office documents from an SSL Web site.
In order for Internet Explorer to open documents in Office (or any out-of-process, ActiveX document server), Internet Explorer must save the file to the local cache directory and ask the associated application to load the file by using IPersistFile::Load. If the file is not stored to disk, this operation fails.
When Internet Explorer communicates with a secure Web site through SSL, Internet Explorer enforces any no-cache request. If the header or headers are present, Internet Explorer does not cache the file. Consequently, Office cannot open the file.
Secondly, something earlier in the page processing was causing the "no-cache" header to get written. So Response.ClearHeaders needed to be added; this cleared out the no-cache header, and the output of the page needs to allow caching.
Thirdly, for good measure, I also added Response.End, so that no other processing further on in the request lifetime attempts to clear the headers I've set and re-add the no-cache header.
Fourthly, I discovered that content expiration had been enabled in IIS. I've left it enabled at the web site level, but since this one aspx page will serve as a gateway for downloading the files, I disabled it at the download page level.
So here is the code snippet that works (there are a couple other minor changes which I believe are inconsequential):
System.IO.FileStream fs = new System.IO.FileStream(Server.MapPath(".") + "\\" + "TestMe.doc", System.IO.FileMode.Open);
long byteNum = fs.Length;
byte[] fileBytes = new byte[byteNum];
fs.Read(fileBytes, 0, (int)byteNum);
Response.ClearContent();
Response.ClearHeaders();
Response.AppendHeader("Content-disposition", "attachment; filename=Testme.doc");
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.ContentType = "application/octet-stream";
Response.OutputStream.Write(fileBytes, 0, fileBytes.Length);
Response.Flush();
Response.Close();
fs.Close();
Response.End();
Keep in mind too, this is just for illustration. The real production code will include exception handling and likely read the file a chunk at a time (perhaps 10K).
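For illustration, here is a minimal sketch of that chunked variant (the 10K buffer size, the file name, and the omission of exception handling are simplifications, not the actual production code):
// Sketch only: stream the file in roughly 10K chunks instead of one big byte array
const int ChunkSize = 10240;
byte[] buffer = new byte[ChunkSize];
Response.ClearContent();
Response.ClearHeaders();
Response.AppendHeader("Content-disposition", "attachment; filename=TestMe.doc");
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.ContentType = "application/octet-stream";
using (System.IO.FileStream fs = new System.IO.FileStream(Server.MapPath(".") + "\\" + "TestMe.doc", System.IO.FileMode.Open))
{
    int bytesRead;
    // Stop early if the client disconnects; flush pushes each chunk out
    while (Response.IsClientConnected && (bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        Response.OutputStream.Write(buffer, 0, bytesRead);
        Response.Flush();
    }
}
Response.End();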
Mauro, thanks for catching a detail that was missing from the code as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: php is_dir returns true for non-existent folder Has anyone encountered this oddity?
I'm checking for the existence of a number of directories in one of my unit tests. is_dir is reporting true (1) in spite of the folder not existing at the time it is called. The code looks like this (with a few extraneous intermediate vars to ease debugging):
foreach($userfolders as $uf) {
$uf = sprintf($uf, $user_id);
$uf = ltrim($uf,'/');
$path = trim($base . '/' . $uf);
$res = is_dir($path); //returns false except last time returns 1
$this->assertFalse($res, $path);
}
The machine is running Ubuntu Linux 8.04 with PHP version 5.2.4-2ubuntu5.3.
Things I have checked:
- Paths are full paths
- The same thing happens on two separate machines (both running Ubuntu)
- I have stepped through line by line in a debugger
- Paths genuinely don't exist at the point where is_dir is called
- While the code is paused on this line, I can actually drop to a shell and run the interactive PHP interpreter and get the correct result
- The paths are all WELL under 256 chars
- I can't imagine a permissions problem as the folder doesn't exist! The parent folder can't be causing permissions problems as the other folders in the loop are correctly reported as missing.
Comments on the PHP docs point to the odd issue with is_dir but not this particular one.
I'm not posting this as a "please help me fix" but in the hope that somebody encountering the same thing can search here and hopefully find an answer from somebody else who has seen this!
A: I don't think this would cause your problem, but $path does have the trailing slash, correct?
A: For what it's worth, is_readable can be used as a workaround.
A: $path = trim($base . '/' . $uf);
That could be causing it. I'm assuming $base is some sort of root folder you are searching, so if $uf is something like '', '.', or '../' that could return true. We would have to see what values you are using in your foreach to know anything further.
[EDIT]
Doing some more looking, the above code works fine on OpenBSD 4.3 with PHP 5.2.
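To make the earlier point about path components concrete, a quick sketch (the base path is illustrative and assumed to exist):
<?php
$base = '/var/www/app';

var_dump(is_dir($base . '/' . ''));    // true: the trailing slash still names $base
var_dump(is_dir($base . '/' . '.'));   // true: '.' resolves to $base itself
var_dump(is_dir($base . '/' . '../')); // true: '../' resolves to the parent directory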
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I discard unstaged changes in Git? How do I discard changes in my working copy that are not in the index?
A: What follows is really only a solution if you are working with a fork of a repository that you regularly synchronize (e.g. via pull requests) with another repo. Short answer: delete the fork and refork, but read the warnings on GitHub.
I had a similar problem, perhaps not identical, and I'm sad to say my solution is not ideal, but it is ultimately effective.
I would often have git status messages like this (involving at least 2/4 files):
$ git status
# Not currently on any branch.
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: doc/PROJECT/MEDIUM/ATS-constraint/constraint_s2var.dats
# modified: doc/PROJECT/MEDIUM/ATS-constraint/parsing/parsing_s2var.dats
#
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: doc/PROJECT/MEDIUM/ATS-constraint/constraint_s2Var.dats
# modified: doc/PROJECT/MEDIUM/ATS-constraint/parsing/parsing_s2Var.dats
A keen eye will note that these files have doppelgangers that differ by a single letter's case. Somehow, and I have no idea what led me down this path to start with (as I was not working with these files myself from the upstream repo), I had switched these files. Trying the many solutions listed on this page (and other pages) did not seem to help.
I was able to fix the problem by deleting my forked repository and all local repositories, and reforking. This alone was not enough; upstream had to rename the files in question to new filenames. As long as you don't have any uncommitted work, no wikis, and no issues that diverge from the upstream repository, you should be just fine. Upstream may not be very happy with you, to say the least. As for my problem, it is undoubtedly a user error as I'm not that proficient with git, but the fact that it is far from easy to fix points to an issue with git as well.
A: You could create your own alias with a name that describes what it does.
I use the following alias to discard changes.
Discard changes in a (list of) file(s) in working tree
discard = checkout --
Then you can use it as follows to discard all changes:
discard .
Or just a file:
discard filename
Otherwise, if you want to discard all changes and also the untracked files, I use a mix of checkout and clean:
Clean and discard changes and untracked files in working tree
cleanout = !git clean -df && git checkout -- .
So the usage is as simple as:
cleanout
It is now available in the following GitHub repo, which contains a lot of aliases: https://github.com/GitAlias/gitalias
A: If it's almost impossible to rule out modifications of the files, have you considered ignoring them? If this statement is right and you wouldn't touch those files during your development, this command may be useful:
git update-index --assume-unchanged file_to_ignore
A: If you merely wish to remove changes to existing files, use checkout (documented here).
git checkout -- .
* No branch is specified, so it checks out the current branch.
* The double-hyphen (--) tells Git that what follows should be taken as its second argument (a path), and that you skipped specifying a branch.
* The period (.) indicates all paths.
If you want to remove files added since your last commit, use clean (documented here):
git clean -i
* The -i option initiates an interactive clean, to prevent mistaken deletions.
* A handful of other options are available for a quicker execution; see the documentation.
If you wish to move changes to a holding space for later access, use stash (documented here):
git stash
* All changes will be moved to Git's stash, for possible later access.
* A handful of options are available for more nuanced stashing; see the documentation.
A: When you want to transfer a stash to someone else:
# add files
git add .
# diff all the changes to a file
git diff --staged > ~/mijn-fix.diff
# remove local changes
git reset && git checkout .
# (later you can re-apply the diff:)
git apply ~/mijn-fix.diff
[edit] As commented, it is possible to name stashes. Well, use this if you want to share your stash ;)
A: If you are in case of submodule and no other solutions work try:
* To check what the problem is (maybe a "dirty" case), use:
git diff
* To remove the stash, use:
git submodule update
A: git checkout .
This will discard any uncommitted changes to the branch. It won't reset it back if any changes were committed. This is handy when you've done some changes and decide you don't want them for some reason and you have NOT committed those changes. It actually just checks out the branch again and discards any current uncommitted changes.
( must be in the app's root or home dir for this to work )
A: For all unstaged files in current working directory use:
git restore .
For a specific file use:
git restore path/to/file/to/revert
That together with git switch replaces the overloaded git checkout (see here), and thus removes the argument disambiguation.
If a file has both staged and unstaged changes, only the unstaged changes shown in git diff are reverted. Changes shown in git diff --staged stay intact.
Before Git 2.23
For all unstaged files in current working directory:
git checkout -- .
For a specific file:
git checkout -- path/to/file/to/revert
-- here to remove ambiguity (this is known as argument disambiguation).
A: The easiest way to do this is by using this command:
This command is used to discard changes in working directory -
git checkout -- .
https://git-scm.com/docs/git-checkout
In git command, stashing of untracked files is achieved by using:
git stash -u
http://git-scm.com/docs/git-stash
A: I really found this article helpful for explaining when to use what command: http://www.szakmeister.net/blog/2011/oct/12/reverting-changes-git/
There are a couple different cases:
* If you haven't staged the file, then you use git checkout. Checkout "updates files in the working tree to match the version in the index". If the files have not been staged (aka added to the index)... this command will essentially revert the files to what your last commit was.
git checkout -- foo.txt
* If you have staged the file, then use git reset. Reset changes the index to match a commit.
git reset -- foo.txt
I suspect that using git stash is a popular choice since it's a little less dangerous. You can always go back to it if you accidently blow too much away when using git reset. Reset is recursive by default.
Take a look at the article above for further advice.
A: If all the staged files were actually committed, then the branch can simply be reset e.g. from your GUI with about three mouse clicks: Branch, Reset, Yes!
So what I often do in practice to revert unwanted local changes is to commit all the good stuff, and then reset the branch.
If the good stuff is committed in a single commit, then you can use "amend last commit" to bring it back to being staged or unstaged if you'd ultimately like to commit it a little differently.
This might not be the technical solution you are looking for to your problem, but I find it a very practical solution. It allows you to discard unstaged changes selectively, resetting the changes you don't like and keeping the ones you do.
So in summary, I simply do commit, branch reset, and amend last commit.
A: Just as a reminder, newer versions of Git have the restore command, which is also suggested when typing git status when you have changed files:
(use "git add ..." to update what will be committed)
(use "git restore ..." to discard changes in working directory)
So git 'restore' is the modern solution to this. It is always a good idea to read the suggestions from git after typing 'git status' :-)
A: If you aren't interested in keeping the unstaged changes (especially if the staged changes are new files), I found this handy:
git diff | git apply --reverse
A: None of the solutions work if you just changed the permissions of a file (this is on DOS/Windoze)
Mon 23/11/2015-15:16:34.80 C:\...\work\checkout\slf4j+> git status
On branch SLF4J_1.5.3
Changes not staged for commit:
(use "git add ..." to update what will be committed)
(use "git checkout -- ..." to discard changes in working directory)
modified: .gitignore
modified: LICENSE.txt
modified: TODO.txt
modified: codeStyle.xml
modified: pom.xml
modified: version.pl
no changes added to commit (use "git add" and/or "git commit -a")
Mon 23/11/2015-15:16:37.87 C:\...\work\checkout\slf4j+> git diff
diff --git a/.gitignore b/.gitignore
old mode 100644
new mode 100755
diff --git a/LICENSE.txt b/LICENSE.txt
old mode 100644
new mode 100755
diff --git a/TODO.txt b/TODO.txt
old mode 100644
new mode 100755
diff --git a/codeStyle.xml b/codeStyle.xml
old mode 100644
new mode 100755
diff --git a/pom.xml b/pom.xml
old mode 100644
new mode 100755
diff --git a/version.pl b/version.pl
old mode 100644
new mode 100755
Mon 23/11/2015-15:16:45.22 C:\...\work\checkout\slf4j+> git reset --hard HEAD
HEAD is now at 8fa8488 12133-CHIXMISSINGMESSAGES MALCOLMBOEKHOFF 20141223124940 Added .gitignore
Mon 23/11/2015-15:16:47.42 C:\...\work\checkout\slf4j+> git clean -f
Mon 23/11/2015-15:16:53.49 C:\...\work\checkout\slf4j+> git stash save -u
Saved working directory and index state WIP on SLF4J_1.5.3: 8fa8488 12133-CHIXMISSINGMESSAGES MALCOLMBOEKHOFF 20141223124940 Added .gitignore
HEAD is now at 8fa8488 12133-CHIXMISSINGMESSAGES MALCOLMBOEKHOFF 20141223124940 Added .gitignore
Mon 23/11/2015-15:17:00.40 C:\...\work\checkout\slf4j+> git stash drop
Dropped refs/stash@{0} (cb4966e9b1e9c9d8daa79ab94edc0c1442a294dd)
Mon 23/11/2015-15:17:06.75 C:\...\work\checkout\slf4j+> git stash drop
Dropped refs/stash@{0} (e6c49c470f433ce344e305c5b778e810625d0529)
Mon 23/11/2015-15:17:08.90 C:\...\work\checkout\slf4j+> git stash drop
No stash found.
Mon 23/11/2015-15:17:15.21 C:\...\work\checkout\slf4j+> git checkout -- .
Mon 23/11/2015-15:22:00.68 C:\...\work\checkout\slf4j+> git checkout -f -- .
Mon 23/11/2015-15:22:04.53 C:\...\work\checkout\slf4j+> git status
On branch SLF4J_1.5.3
Changes not staged for commit:
(use "git add ..." to update what will be committed)
(use "git checkout -- ..." to discard changes in working directory)
modified: .gitignore
modified: LICENSE.txt
modified: TODO.txt
modified: codeStyle.xml
modified: pom.xml
modified: version.pl
no changes added to commit (use "git add" and/or "git commit -a")
Mon 23/11/2015-15:22:13.06 C:\...\work\checkout\slf4j+> git diff
diff --git a/.gitignore b/.gitignore
old mode 100644
new mode 100755
diff --git a/LICENSE.txt b/LICENSE.txt
old mode 100644
new mode 100755
diff --git a/TODO.txt b/TODO.txt
old mode 100644
new mode 100755
diff --git a/codeStyle.xml b/codeStyle.xml
old mode 100644
new mode 100755
diff --git a/pom.xml b/pom.xml
old mode 100644
new mode 100755
diff --git a/version.pl b/version.pl
old mode 100644
new mode 100755
The only way to fix this is to manually reset the permissions on the changed files:
Mon 23/11/2015-15:25:43.79 C:\...\work\checkout\slf4j+> git status -s | egrep "^ M" | cut -c4- | for /f "usebackq tokens=* delims=" %A in (`more`) do chmod 644 %~A
Mon 23/11/2015-15:25:55.37 C:\...\work\checkout\slf4j+> git status
On branch SLF4J_1.5.3
nothing to commit, working directory clean
Mon 23/11/2015-15:25:59.28 C:\...\work\checkout\slf4j+>
Mon 23/11/2015-15:26:31.12 C:\...\work\checkout\slf4j+> git diff
A: If you want to unstage files (remove them from the index while keeping the changes in your working tree), use
git restore --staged .
A: As you type git status,
(use "git checkout -- ..." to discard changes in working directory)
is shown.
e.g. git checkout -- .
A: You can use git stash - if something goes wrong, you can still revert from the stash.
Similar to some other answer here, but this one also removes all unstaged files and also all unstaged deletes:
git add .
git stash
if you check that everything is OK, throw the stash away:
git stash drop
The answer from Bilal Maqsood with git clean also worked for me, but with the stash I have more control - if I do sth accidentally, I can still get my changes back
UPDATE
I think there is one more change (I don't know why this worked for me before):
git add . -A instead of git add .
Without the -A, the removed files will not be staged.
A: git checkout -f
man git-checkout:
-f, --force
When switching branches, proceed even if the index or the working tree differs from HEAD. This is used to throw away local changes.
When checking out paths from the index, do not fail upon unmerged entries; instead, unmerged entries are ignored.
A: This checks out the current index for the current directory, throwing away all changes in files from the current directory downwards.
git checkout .
or this which checks out all files from the index, overwriting working tree files.
git checkout-index -a -f
A: Instead of discarding changes, I reset my local branch to match the remote origin. Note - this method completely restores your folder to that of the repo.
So I do this to make sure those files don't sit there when I git reset (later - this excludes files gitignored on origin/branchname).
NOTE: If you want to keep files not yet tracked, but not in GITIGNORE you may wish to skip this step, as it will Wipe these untracked files not found on your remote repository (thanks @XtrmJosh).
git add --all
Then I
git fetch --all
Then I reset to origin
git reset --hard origin/branchname
That will put it back to square one. Just like RE-Cloning the branch, WHILE keeping all my gitignored files locally and in place.
Updated per user comment below:
Variation to reset the to whatever current branch the user is on.
git reset --hard @{u}
A: Tried all the solutions above but still couldn't get rid of new, unstaged files.
Use git clean -f to remove those new files - with caution though! Note the force option.
A: To do a permanent discard:
git reset --hard
To save changes for later:
git stash
A: Another quicker way is:
git stash save --keep-index --include-untracked
You don't need to include --include-untracked if you don't want to be thorough about it.
After that, you can drop that stash with a git stash drop command if you like.
A: git clean -df
Cleans the working tree by recursively removing files that are not under version control, starting from the current directory.
-d: Remove untracked directories in addition to untracked files
-f: Force (might be not necessary depending on clean.requireForce setting)
Run git help clean to see the manual
A: Just use:
git stash -u
Done. Easy.
If you really care about your stash stack then you can follow with git stash drop. But at that point you're better off using (from Mariusz Nowak):
git checkout -- .
git clean -df
Nonetheless, I like git stash -u the best because it "discards" all tracked and untracked changes in just one command. Yet git checkout -- . only discards tracked changes, and git clean -df only discards untracked changes... and typing both commands is far too much work :)
A: simply say
git stash
It will remove all your local changes. You can also use them later by saying
git stash apply
or
git stash pop
A: It seems like the complete solution is:
git clean -df
git checkout -- .
WARNING: while it won't delete ignored files mentioned directly in .gitignore, git clean -df may delete ignored files residing in folders.
git clean removes all untracked files and git checkout clears all unstaged changes.
A: You have a very simple git command: git checkout .
A: 2019 update
You can now discard unstaged changes in one tracked file with:
git restore <file>
and in all tracked files in the current directory (recursively) with:
git restore .
If you run the latter from the root of the repository, it will discard unstaged changes in all tracked files in the project.
Notes
* git restore was introduced in July 2019 and released in version 2.23 as part of a split of the git checkout command into git restore for files and git switch for branches.
* git checkout still behaves as it used to and the older answers remain perfectly valid.
* When running git status with unstaged changes in the working tree, this is now what Git suggests to use to discard them (instead of git checkout -- <file> as it used to prior to v2.23).
* As with git checkout -- ., this only discards changes in tracked files. So Mariusz Nowak's answer still applies and if you want to discard all unstaged changes, including untracked files, you could run, as he suggests, an additional git clean -df.
A: No matter what state your repo is in you can always reset to any previous commit:
git reset --hard <commit hash>
This will discard all changes which were made after that commit.
A: This works even in directories that are outside of normal git permissions.
sudo chmod -R 664 ./* && git checkout -- . && git clean -dfx
Happened to me recently
A: cd path_to_project_folder # take you to your project folder/working directory
git checkout . # removes all unstaged changes in working directory
A: In my opinion,
git clean -df
should do the trick. As per Git documentation on git clean
git-clean - Remove untracked files from the working tree
Description
Cleans the working tree by recursively removing files that
are not under version control, starting from the current directory.
Normally, only files unknown to Git are removed, but if the -x option
is specified, ignored files are also removed. This can, for example,
be useful to remove all build products.
If any optional <path>... arguments are given, only those paths are affected.
Options
-d Remove untracked directories in addition to untracked files. If an untracked directory is managed by a different Git repository, it is
not removed by default. Use -f option twice if you really want to
remove such a directory.
-f
--force If the Git configuration variable clean.requireForce is not set to false, git clean will refuse to run unless given -f, -n or -i.
A: My favorite is
git checkout -p
That lets you selectively revert chunks.
See also:
git add -p
A: Since no answer suggests the exact option combination that I use, here it is:
git clean -dxn . # dry-run to inspect the list of files-to-be-removed
git clean -dxf . # REMOVE ignored/untracked files (in the current directory)
git checkout -- . # ERASE changes in tracked files (in the current directory)
This is the online help text for the used git clean options:
-d
Remove untracked directories in addition to untracked files. If an untracked directory is managed by a different Git repository, it is not removed by default. Use -f option twice if you really want to remove such a directory.
-x
Don’t use the standard ignore rules read from .gitignore (per directory) and $GIT_DIR/info/exclude, but do still use the ignore rules given with -e options. This allows removing all untracked files, including build products. This can be used (possibly in conjunction with git reset) to create a pristine working directory to test a clean build.
-n
Don’t actually remove anything, just show what would be done.
-f
If the Git configuration variable clean.requireForce is not set to false, Git clean will refuse to delete files or directories unless given -f, -n, or -i. Git will refuse to delete directories within the .git subdirectory or file, unless a second -f is given.
A: Another way to get rid of new files that is more specific than git clean -df (it will allow you to get rid of some files not necessarily all), is to add the new files to the index first, then stash, then drop the stash.
This technique is useful when, for some reason, you can't easily delete all of the untracked files by some ordinary mechanism (like rm).
A: I had a weird situation where a file was always unstaged; this helped me resolve it.
git rm .gitattributes
git add -A
git reset --hard
A: Just use:
git stash -k -u
This will stash unstaged changes and untracked files (new files) and keep staged files.
It's better than reset/checkout/clean, because you might want them back later (by git stash pop). Keeping them in the stash is better than discarding them.
A: To discard changes in the working directory, use
git checkout -- <file>
Here -- separates the path that follows from other arguments (such as a branch name).
Reference: https://www.baeldung.com/git-discard-unstaged-changes
A: Final working solution
git restore .
git clean -f
git clean -df (if you have folders in your local changes)
A: To delete unstaged changes I tried "git restore ." as I was told by Git, but it just didn't work. A very good way is to use:
git reset --hard
It works perfectly. Its Git help explains the point perfectly:
--hard
Resets the index and working tree. Any changes to tracked files in the working tree since <commit> are discarded. Any untracked files or directories in the way of writing any tracked files are simply deleted.
PS: I am using Git version 2.35.3.windows.1. I think quite a few answers here made the issue over-complicated.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6019"
} |
Q: What causes this SqlException: A transport-level error has occurred when receiving results from the server Here is the full error: SqlException: A transport-level error has occurred when receiving results from the server. (provider: Shared Memory Provider, error: 1 - I/O Error detected in read/write operation)
I've started seeing this message intermittently for a few of the unit tests in my application (there are over 1100 unit & system tests). I'm using the test runner in ReSharper 4.1.
One other thing: my development machine is a VMWare virtual machine.
A: I ran into this many moons ago. Bottom line is you are running out of available ports.
First make sure your calling application has connection pooling on.
If that does then check the number of available ports for the SQL Server.
What is happening is that if pooling is off, then every call takes a port, each port takes by default four minutes to expire, and you are running out of ports.
If pooling is on then you need to profile all the ports of SQL Server and make sure you have enough and expand them if necessary.
When I came across this error, connection pooling was off and it caused this issue whenever a decent load was put on the website. We did not see it in development because the load was 2 or 3 people at max, but once the number grew over 10 we kept seeing this error. We turned pooling on, and it fixed it.
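For illustration, pooling is controlled from the connection string; here is a minimal sketch (the server, database, and pool sizes are placeholder values, not from the original post):
using System.Data.SqlClient;

// Illustrative values only -- tune the pool sizes for your own load.
string connStr =
    "Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;" +
    "Pooling=true;Min Pool Size=5;Max Pool Size=100";

using (SqlConnection conn = new SqlConnection(connStr))
{
    conn.Open();  // drawn from the pool when pooling is on
    // ... execute commands ...
}  // Dispose/Close returns the connection to the pool instead of tearing down the port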
A: I ran into this many moons ago as well. However, not to discount @Longhorn213's explanation, but we had the exact opposite behavior. We received the error in development and testing, but not in production, where obviously the load was much greater. We ended up tolerating the issue in development as it was sporadic and didn't materially slow down progress. I think there could be several reasons for this error, but I was never able to pinpoint the cause myself.
A: We've also run across this error and figured out that we were killing a SQL server connection from the database server. The client application is under the impression that the connection is still active and tries make use of that connection, but fails because it was terminated.
A: We saw this in our environment, and traced part of it down to the "NOLOCK" hint in our queries. We removed the NOLOCK hint and set our servers to use Snapshot Isolation mode, and the frequency of these errors was reduced quite a bit.
A: We have seen this error a few times and tried different resolutions with varying success. One common underlying theme has been that the system giving the error was running low on memory. This is especially true if the server that is hosting SQL Server is running ANY other non-OS process. By default SQL Server will grab any memory that it can, leaving little for other processes/drivers. This can cause erratic behavior and intermittent messages. It is good practice to configure your SQL Server with a maximum memory that leaves some headroom if there are other processes that might need it. Example: Visual Studio on a dev machine that is running a copy of SQL Server Developer Edition on the same machine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: STL vector vs map erase In the STL almost all containers have an erase function. The question I have is: in a vector, the erase function returns an iterator pointing to the next element in the vector. The map container does not do this. Instead it returns void. Anyone know why there is this inconsistency?
A: The inconsistency is due to use. vector is a sequence having an ordering over the elements. While it's true that the elements in a map are also ordered according to some comparison criterion, this ordering is non-evident from the structure. There is no efficient way to get from one element to the next (efficient = constant time). In fact, to iterate over the map is quite expensive; either the creation of the iterator or the iterator itself involves a walk over the complete tree. This cannot be done in O(n), unless a stack is used, in which case the space required is no longer constant.
All in all, there simply is no cheap way of returning the “next” element after erasing. For sequences, there is a way.
Additionally, Rob is right. There's no need for the Map to return an iterator.
A: Just as an aside, the STL shipped with MS Visual Studio C++ (Dinkumware IIRC) provides a map implementation with an erase function returning an iterator to the next element.
They do note it's not standards conforming.
A: See http://www.sgi.com/tech/stl/Map.html
Map has the important property that inserting a new element into a map does not invalidate iterators that point to existing elements. Erasing an element from a map also does not invalidate any iterators, except, of course, for iterators that actually point to the element that is being erased.
The reason for returning an iterator on erase is so that you can iterate over the list erasing elements as you go. If erasing an item doesn't invalidate existing iterators there is no need to do this.
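To make the difference concrete, here is a sketch of the two erase-while-iterating idioms under pre-C++11 semantics (the container contents and predicates are illustrative):
// A sketch of erase-while-iterating, pre-C++11.
#include <map>
#include <string>
#include <vector>

void eraseExamples(std::vector<int>& v, std::map<int, std::string>& m)
{
    // vector: erase returns the iterator to the next element.
    for (std::vector<int>::iterator it = v.begin(); it != v.end(); )
    {
        if (*it % 2 == 0)
            it = v.erase(it);   // sequence containers hand back the successor
        else
            ++it;
    }

    // map (pre-C++11): erase returns void, but other iterators stay valid,
    // so advance a copy past the doomed element before erasing it.
    for (std::map<int, std::string>::iterator it = m.begin(); it != m.end(); )
    {
        if (it->second.empty())
            m.erase(it++);      // post-increment: erase the old position
        else
            ++it;
    }
}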
A: erase returns an iterator in C++11. This is due to defect report 130:
Table 67 (23.1.1) says that container::erase(iterator) returns an iterator. Table 69 (23.1.2) says that in addition to this requirement, associative containers also say that container::erase(iterator) returns void. That's not an addition; it's a change to the requirements, which has the effect of making associative containers fail to meet the requirements for containers.
The standards committee accepted this:
the LWG agrees the return type should be iterator, not void. (Alex Stepanov agrees too.)
(LWG = Library Working Group).
A: I have no idea if this is the answer, but one reason might be the cost of locating the next element. Iterating through a map is inherently "slow".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: What is the most efficient way to paginate my site when querying with SQL? I am trying to paginate the results of an SQL query for use on a web page. The language and the database backend are PHP and SQLite.
The code I'm using works something like this (page numbering starts at 0)
http://example.com/table?page=0
page = request(page)
per = 10 // results per page
offset = page * per
// take one extra record so we know if a next link is needed
resultset = query(select columns from table where conditions limit offset, per + 1)
if(page > 0) show a previous link
if(count(resultset) > per) show a next link
unset(resultset[per])
display results
Are there more efficient ways to do pagination than this?
One problem that I can see with my current method is that I must store all 10 (or however many) results in memory before I start displaying them. I do this because PDO does not guarantee that the row count will be available.
Is it more efficient to issue a COUNT(*) query to learn how many rows exist, then stream the results to the browser?
Is this one of those "it depends on the size of your table, and whether the count(*) query requires a full table scan in the database backend", "do some profiling yourself" kind of questions?
A: I've opted to go with the COUNT(*) two query method, because it allows me to create a link directly to the last page, which the other method does not allow. Performing the count first also allows me to stream the results, and so should work well with higher numbers of records with less memory.
Consistency between pages is not an issue for me. Thank you for your help.
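For reference, a minimal sketch of that two-query approach with PDO (the items table, its columns, and the database file name are placeholders):
<?php
// Sketch of the COUNT(*) + LIMIT approach.
$per  = 10;
$page = isset($_GET['page']) ? max(0, (int)$_GET['page']) : 0;

$pdo = new PDO('sqlite:example.db');

// First query: total row count, so previous/next/last links can be computed up front.
$total = (int)$pdo->query('SELECT COUNT(*) FROM items')->fetchColumn();
$pages = (int)ceil($total / $per);

// Second query: fetch just the current page's rows and stream them out.
$stmt = $pdo->prepare('SELECT id, title FROM items ORDER BY id LIMIT :offset, :per');
$stmt->bindValue(':offset', $page * $per, PDO::PARAM_INT);
$stmt->bindValue(':per', $per, PDO::PARAM_INT);
$stmt->execute();

foreach ($stmt as $row) {
    echo htmlspecialchars($row['title']), "<br>\n"; // rows are streamed, not buffered
}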
A: There are several cases where I have a fairly complex (9-12 table join) query, returning many thousands of rows, which I need to paginate. Obviously to paginate nicely, you need to know the total size of the result. With MySQL databases, using the SQL_CALC_FOUND_ROWS directive in the SELECT can help you achieve this easily, although the jury is out on whether that will be more efficient for you to do.
However, since you are using SQLite, I recommend sticking with the 2 query approach. Here is a very concise thread on the matter.
A: I'd suggest just doing the count first. A count(primary key) is a very efficient query.
A: I doubt that it will be a problem for your users to wait for the backend to return ten rows. (You can make it up to them by specifying image dimensions, making the webserver negotiate compressed data transfers when possible, etc.)
I don't think that it will be very useful for you to do a count(*) initially.
If you are up to some complicated coding: When the user is looking at page x, use ajax-like magic to pre-load page x+1 for improved user experience.
A general note about pagination:
If the data changes while the user browses through your pages, it may be a problem if your solution demands a very high level of consistency. I've written a note about that elsewhere.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Mirroring perforce with SVK? Anyone know of a way to use SVK with perforce? The docs seem to imply it used to be possible, but some mailing list messages claim that is no longer the case?
A: For a private repository, you should try P5
A: Why use SVK to mirror Perforce? Surely using a Perforce Proxy is the best way to make your repository distributed?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Perform token replacements using VS post-build event command? I would like to "post-process" my app.config file and perform some token replacements after the project builds.
Is there an easy way to do this using a VS post-build event command?
(Yeah I know I could probably use NAnt or something, looking for something simple.)
A: Take a look at XmlPreProcess. We use it for producing different config files for our testing and live deployment packages.
We execute it from a nant script as part of a continuous build but, since it's a console app, I see no reason why you coudn't add a call in your project's post-build event instead
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I dynamically create a Video object in AS2 and add it to a MovieClip? I need to dynamically create a Video object in ActionScript 2 and add it to a movie clip. In AS3 I just do this:
var videoViewComp:UIComponent; // created elsewhere
videoView = new Video();
videoView.width = 400;
videoView.height = 400;
this.videoViewComp.addChild(videoView);
Unfortunately, I can't figure out how to accomplish this in AS2. Video isn't a child of MovieClip, so attachMovie() doesn't seem to be getting me anything. I don't see any equivalent to AS3's UIComponent.addChild() method either.
Is there any way to dynamically create a Video object in AS2 that actually shows up on the stage?
I potentially need multiple videos at a time though. Is it possible to duplicate that video object?
I think I have another solution working. It's not optimal, but it fits with some of the things I have to do for other components so it's not too out of place in the project. Once I get it figured out I'll post what I did here.
A: Ok, I've got something working.
First, I created a new Library symbol and called it "VideoWrapper". I then added a single Video object to that with an ID of "video".
Now, any time I need to dynamically add a Video to my state I can use MovieClip.attachMovie() to add a new copy of the Video object.
To make things easier I wrote a VideoWrapper class that exposes basic UI element handling (setPosition(), setSize(), etc). So when dealing with the Video in regular UI layout code I just use those methods so it looks just like all my other UI elements. When dealing with the video I just access the "video" member of the class.
My actual implementation is a bit more complicated, but that's the basics of how I got things working. I have a test app that's playing 2 videos, one from the local camera and one streaming from FMS, and it's working great.
A: To send you the ends of a line which is a tag, I use HTML Symbol Entities from w3schools
An example, taken from a project, is as follows:
< asset path="library\video.swf" />
The line above shows that there is a directory called library which contains the file video.swf
Besides, there is the file video.xml in the directory library. That file contains the lines
<?xml version="1.0" encoding="utf-8" ?>
<movie version="7">
<frame>
<library>
<clip id="VideoDisplay">
<frame>
<video id="VideoSurface" width="160" height="120" />
<place id="VideoSurface" name="video" />
</frame>
</clip>
</library>
</frame>
</movie>
Long ago my son Alex downloaded the code of the VideoDisplay class and the library directory from the Internet.
I have improved the code of the VideoDisplay class by writing two members:
public function pos():Number
{
return ns.time;
}
public function close():Void
{
return ns.close();
}
The program I have created is more than an explorer and presenter of .flv files; it is also an explorer and presenter of the chosen fragments of each .flv file.
Now the code of the VideoDisplay class is:
class util.VideoDisplay
{
//{ PUBLIC MEMBERS
/**
* Create a new video display surface
*/
function VideoDisplay(targetURI:String, parent:MovieClip, name:String, depth:Number, initObj)
{
display = parent.attachMovie("VideoDisplay", name, depth, initObj);
// create video stream
nc = new NetConnection();
nc.connect(targetURI);
ns = new NetStream(nc);
// attach the video stream to the video object
display.video.attachVideo(ns);
}
/**
* Video surface dimensions
*/
function setSize(width:Number, height:Number):Void
{
display.video._width = width;
display.video._height = height;
}
/**
* Video clip position
*/
function setLocation(x:Number, y:Number):Void
{
display._x = x;
display._y = y;
}
/**
* Start streaming
* @param url FLV file
* @param bufferTime Buffer size (optional)
*/
public function play(url:String, bufferTime:Number):Void
{
if (bufferTime != undefined) ns.setBufferTime(bufferTime);
ns.play(url);
}
/**
* Pause streaming
*/
public function pause():Void
{
ns.pause();
}
/**
* Seek position in video
*/
public function seek(offset:Number):Void
{
ns.seek(offset);
}
/**
* Get position in video
*/
public function pos():Number
{
return ns.time;
}
public function close():Void
{
return ns.close();
}
//}
//{ PRIVATE MEMBERS
private var display:MovieClip;
private var nc:NetConnection;
private var ns:NetStream;
//}
}
A: I recommend you create a single instance of the Video object, leave it invisible (i.e., videoview._visible = false), and load the clip when you need it, displaying it at the appropriate time. You can also use swapDepths() if it becomes necessary.
Video handling in AS2 is not the best thing ever. Rest assured you'll run into a lot of little problems (looping without gaps, etc).
A: Your approach is what I usually do, because the other option is to include the UIComponent MediaDisplay in the library and then attach that component using attachMovie. But I found MediaDisplay a little buggy, so I prefer to use the primitive Video instance.
A: I hope that the code below will be very useful for you:
import UTIL.MEDIA.MEDIAInstances
class Main
{
static function main() {
var MEDIAInstancesInstance :MEDIAInstances = new MEDIAInstances ();
_root.Video_Display.play ("IsothermalCompression.flv", 0);
_root.VideoDisplayMC.onPress = function() {
_root.Video_Display.seek (0);
} // _root.displayMC.onPress = function() {
} // static function main()
} // class Main
//
import UTIL.MEDIA.VideoDisplay
class UTIL.MEDIA.MEDIAInstances
{
function MEDIAInstances()
{
// depth
_root.createEmptyMovieClip ("VideoDisplayMC", 500);
//
var Video_Display:VideoDisplay
=
new VideoDisplay(_root.VideoDisplayMC, "Video_Display", 1);
Video_Display.setLocation(400, 0); Video_Display.setSize (320, 240);
//
_root.Video_Display = Video_Display; _root.VideoDisplayMC._alpha = 75;
} // MEDIAInstances()
} // class UTIL.MEDIA.MEDIAInstances
//
class UTIL.MEDIA.VideoDisplay
{
private var display:MovieClip, nc:NetConnection, ns:NetStream;
function VideoDisplay(parent:MovieClip, name:String, depth:Number)
{
display = parent.attachMovie("VideoDisplay", name, depth);
nc = new NetConnection(); nc.connect(null); ns = new NetStream(nc);
display.video.attachVideo(ns);
}
function setSize(width:Number, height:Number):Void
{ display.video._width = width; display.video._height = height; }
function setLocation(x:Number, y:Number):Void { display._x = x; display._y = y;}
public function play(url:String, bufferTime:Number):Void
{
if (bufferTime != undefined) ns.setBufferTime(bufferTime); ns.play(url);
}
//
public function pause():Void { ns.pause();}
//
public function seek(offset:Number):Void { ns.seek(offset); }
} // UTIL.MEDIA.VideoDisplay
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Is there a way to infer what image format a file is, without reading the entire file? Is there a good way to see what format an image is, without having to read the entire file into memory?
Obviously this would vary from format to format (I'm particularly interested in TIFF files) but what sort of procedure would be useful to determine what kind of image format a file is without having to read through the entire file?
BONUS: What if the image is a Base64-encoded string? Any reliable way to infer it before decoding it?
A: Sure there is. Like the others have mentioned, most images start with some sort of 'Magic', which will always translate to some sort of Base64 data. The following are a couple examples:
A Bitmap will start with Qk3
A Jpeg will start with /9j/
A GIF will start with R0l (That's a zero as the second char).
And so on. It's not hard to take the different image types and figure out what they encode to. Just be careful, as some have more than one piece of magic, so you need to account for them in your B64 'translation code'.
A: Most image file formats have unique bytes at the start. The unix file command looks at the start of the file to see what type of data it contains. See the Wikipedia article on Magic numbers in files and magicdb.org.
A: Either file on the *nix command-line or reading the initial bytes of the file. Most files come with a unique header in the first few bytes. For example, TIFF's header looks something like this: 0x00000000: 4949 2a00 0800 0000
For more information on the TIFF file format specifically if you'd like to know what those bytes stand for, go here.
A: TIFFs will begin with either II or MM (Intel byte ordering or Motorolla).
The TIFF 6 specification can be downloaded here and isn't too hard to follow
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Should I derive custom exceptions from Exception or ApplicationException in .NET? What is best practice when creating your exception classes in a .NET solution: To derive from System.Exception or from System.ApplicationException?
A: According to Jeffery Richter in the Framework Design Guidelines book:
System.ApplicationException is a class that should not be part of the .NET framework.
It was intended to have some meaning in that you could potentially catch "all" the application exceptions, but the pattern was not followed and so it has no value.
A: You should derive custom exceptions from System.Exception.
Even MSDN now says to ignore ApplicationException:
If you are designing an application
that needs to create its own
exceptions, you are advised to derive
custom exceptions from the Exception
class. It was originally thought that
custom exceptions should derive from
the ApplicationException class;
however in practice this has not been
found to add significant value. For
more information, see Best Practices for Handling Exceptions.
http://msdn.microsoft.com/en-us/library/system.applicationexception.aspx
A: ApplicationException considered useless is a strong, and critical, argument against ApplicationException.
Upshot: don't use it. Derive from Exception.
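For illustration, a minimal custom exception derived directly from Exception (the class name is a placeholder; the serialization constructor follows the usual pattern):
using System;
using System.Runtime.Serialization;

[Serializable]
public class OrderProcessingException : Exception
{
    public OrderProcessingException() { }

    public OrderProcessingException(string message)
        : base(message) { }

    public OrderProcessingException(string message, Exception innerException)
        : base(message, innerException) { }

    // Required for serialization across AppDomain/remoting boundaries.
    protected OrderProcessingException(SerializationInfo info, StreamingContext context)
        : base(info, context) { }
}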
A: The authors of the framework themselves consider ApplicationException worthless:
https://web.archive.org/web/20190904221653/https://blogs.msdn.microsoft.com/kcwalina/2006/06/23/applicationexception-considered-useless/
with a nice follow-up here:
https://web.archive.org/web/20190828075736/https://blogs.msdn.microsoft.com/kcwalina/2006/07/05/choosing-the-right-type-of-exception-to-throw/
When in doubt, I follow their book Framework Design Guidelines.
http://www.amazon.com/Framework-Design-Guidelines-Conventions-Development/dp/0321246756
The topic of the blog post is further discussed there.
rp
A: I'm used to do:
private void buttonFoo_Click()
{
try
{
foo();
}
catch(ApplicationException ex)
{
Log.UserWarning(ex);
MessageVox.Show(ex.Message);
}
catch(Exception ex)
{
Log.CodeError(ex);
MessageBox.Show("Internal error.");
}
}
It allows me to differentiate between:
* System errors in the C# code that I must repair.
* "Normal" user errors that do not need correction from me.
I know it is not recommended to use ApplicationException, but it works great since there are very few classes that do not respect the ApplicationException pattern.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "75"
} |
Q: What determines the monitor my app runs on? I am using Windows, and I have two monitors.
Some applications will always start on my primary monitor, no matter where they were when I closed them.
Others will always start on the secondary monitor, no matter where they were when I closed them.
Is there a registry setting buried somewhere, which I can manipulate to control which monitor applications launch into by default?
@rp: I have Ultramon, and I agree that it is indispensable, to the point that Microsoft should buy it and incorporate it into their OS. But as you said, it doesn't let you control the default monitor a program launches into.
A: Here's what I've found. If you want an app to open on your secondary monitor by default do the following:
1. Open the application.
2. Re-size the window so that it is not maximized or minimized.
3. Move the window to the monitor you want it to open on by default.
4. Close the application. Do not re-size prior to closing.
5. Open the application.
It should open on the monitor you just moved it to and closed it on.
6. Maximize the window.
The application will now open on this monitor by default. If you want to change it to another monitor, just follow steps 1-6 again.
A: Correctly written Windows apps that want to save their location from run to run will save the results of GetWindowPlacement() before shutting down, then use SetWindowPlacement() on startup to restore their position.
Frequently, apps will store the results of GetWindowPlacement() in the registry as a REG_BINARY for easy use.
The WINDOWPLACEMENT route has many advantages over other methods:
* Handles the case where the screen resolution changed since the last run: SetWindowPlacement() will automatically ensure that the window is not entirely offscreen.
* Saves the state (minimized/maximized) but also saves the restored (normal) size and position.
* Handles desktop metrics correctly, compensating for the taskbar position, etc. (i.e. uses "workspace coordinates" instead of "screen coordinates" -- techniques that rely on saving screen coordinates may suffer from the "walking windows" problem where a window will always appear a little lower each time if the user has a toolbar at the top of the screen).
Finally, programs that handle window restoration properly will take into account the nCmdShow parameter passed in from the shell. This parameter is set in the shortcut that launches the application (Normal, Minimized, Maximize):
if(nCmdShow != SW_SHOWNORMAL)
placement.showCmd = nCmdShow; //allow shortcut to override
For non-Win32 applications, it's important to be sure that the method you're using to save/restore window position eventually uses the same underlying call, otherwise (like Java Swing's setBounds()/getBounds() problem) you'll end up writing a lot of extra code to re-implement functionality that's already there in the WINDOWPLACEMENT functions.
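For illustration, a minimal sketch of the save/restore pattern (the registry location is a placeholder, and error handling is kept to the bare minimum):
#include <windows.h>

// Hypothetical registry location -- adjust for your application.
static const wchar_t* kKey = L"Software\\MyApp";
static const wchar_t* kValue = L"WindowPlacement";

void SavePlacement(HWND hWnd)
{
    WINDOWPLACEMENT wp = { sizeof(WINDOWPLACEMENT) };
    if (GetWindowPlacement(hWnd, &wp))
        RegSetKeyValueW(HKEY_CURRENT_USER, kKey, kValue,
                        REG_BINARY, &wp, sizeof(wp));
}

void RestorePlacement(HWND hWnd, int nCmdShow)
{
    WINDOWPLACEMENT wp = {};
    DWORD cb = sizeof(wp);
    if (RegGetValueW(HKEY_CURRENT_USER, kKey, kValue, RRF_RT_REG_BINARY,
                     NULL, &wp, &cb) == ERROR_SUCCESS && wp.length == sizeof(wp))
    {
        if (nCmdShow != SW_SHOWNORMAL)
            wp.showCmd = nCmdShow;   // allow the shortcut to override
        SetWindowPlacement(hWnd, &wp);
    }
}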
A: I'm fairly sure the primary monitor is the default. If the app was coded decently, when it's closed, it'll remember where it was last at and will reopen there, but -- as you've noticed -- it isn't a default behavior.
EDIT: The way I usually do it is to have the location stored in the app's settings. On load, if there is no value for them, it defaults to the center of the screen. On closing of the form, it records its position. That way, whenever it opens, it's where it was last. I don't know of a simple way to tell it to launch onto the second monitor the first time automatically, however.
-- Kevin Fairchild
A: Important note: If you remember the position of your application and shutdown and then start up again at that position, keep in mind that the user's monitor configuration may have changed while your application was closed.
Laptop users, for example, frequently change their display configuration. When docked there may be a 2nd monitor that disappears when undocked. If the user closes an application that was running on the 2nd monitor and the re-opens the application when the monitor is disconnected, restoring the window to the previous coordinates will leave it completely off-screen.
To figure out how big the display really is, check out GetSystemMetrics.
A: So I had this issue with Adobe Reader 9.0. Somehow the program forgot to open on my right monitor and was consistently opening on my left monitor. Most programs allow you to drag a window over, maximize the screen, and then close it out, and it will remember. Well, with Adobe, I had to drag it over and then close it before maximizing it in order for Windows to remember which screen to open it in next time. Once you set it to the correct monitor, then you can maximize it. I think this is stupid, since almost all Windows programs remember it automatically without having to rig a way for XP to remember.
A: It's not exactly the answer to this question but I dealt with this problem with the Shift + Win + [left,right] arrow keys shortcut. You can move the currently active window to another monitor with it.
A: Get UltraMon. Quickly.
http://realtimesoft.com/ultramon/
It doesn't let you specify what monitor an app starts on, but it lets you move an app to another monitor, and keep its aspect ratio intact, with one mouse click. It is a very handy utility.
Most programs will start where you last left them. So if you have two monitors at work, but only one at home, it's possible to start you laptop at home and not see the apps running on the other monitor (which now isn't there). UltrMon also lets you move those orphan apps back to the main screen quickly and easily.
A: So I agree there are some apps that you can configure to open on one screen by maximizing or right-clicking and moving/sizing the window, then closing and reopening. However, there are others that will only open on the main screen.
What I've done to resolve this: set the monitor you prefer stubborn apps to open on as monitor 1 and your other monitor as 2, then change your monitor 2 to be the primary -- that way your desktop settings and start bar remain. Hope this helps.
A: Do not hold me to this, but I am pretty sure it depends on the application itself. I know many always open on the main monitor, some will reopen on the same monitor they were previously run in, and some you can set. For example, I have shortcuts to open command windows to particular directories, and each has an option in its properties for the location to open the window in. Outlook, meanwhile, just remembers and opens in the last screen it was open in. Then other apps open in whatever window the current focus is in.
So I am not sure there is a way to tell every program where to open. Hope that helps some.
A: I've noticed that if I put a shortcut on my desktop on one screen the launched application may appear on that screen (if that app doesn't reposition itself).
This also applies to running things from Windows Explorer - if Explorer is on one screen the launched application will pick that monitor to use.
Again - I think this is when the launching application specifies the default (windows managed) position. Most applications seem to override this default behavior in some way.
A simple window created like so will do this:
hWnd = CreateWindow(windowClass, windowTitle, WS_VISIBLE | WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, SW_SHOW, CW_USEDEFAULT, 0, NULL, NULL, hInst, NULL);
A: Right click the shortcut and select properties.
Make sure you are on the "Shortcut" Tab.
Select the RUN drop down box and change it to Maximized.
This may assist in launching the program in full screen on the primary monitor.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "78"
} |
Q: Event handler in Qt with multithread Does anyone know how the event handler manages posted events?
In my app I have two threads (guiThread and computationThread). After an exception is thrown I call postEvent(..) targeting an existing dialog. The Qt event handler holds the event back until the dialog is closed.
Sorry, my question is a bit cloudy; I will state it more precisely if I find the time. I found a workaround, but the problem is still interesting to me.
A: As mentioned in the Qt documentation about QCoreApplication::postEvent :
When control returns to the main event loop, all events that are stored in the queue will be sent using the notify() function.
...which explains why the Qt Event Handler holds the event until the dialog is closed.
If I understand correctly what you want to do, I would try using sendEvent.
A: I'm guessing that the dialog you created is modal, which would mean that it is running its own event loop. No events posted to the general guiThread will be processed until all modal event loops are exited.
Alternately, if you need the dialog to both be modal and know about the event, you could post the event directly to the dialog. You'll need to figure out how to handle pointers in a shared manner, but if nothing complicated is going on, you might be able to use the QApplication::activeWindow() function.
A: As others already wrote, I believe this behavior is caused by the fact that the dialog starts its own event loop.
If you use Qt4, you can try using queued signal/slot connections as an alternative to posting events.
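A hedged sketch of that alternative (the worker/dialog names and the signal/slot signatures are illustrative, not from the original app):
// Qt 4: a queued connection delivers the signal as an event in the
// receiver's (GUI) thread, so it is safe to emit from computationThread.
connect(worker, SIGNAL(computationFailed(const QString &)),
        dialog, SLOT(showError(const QString &)),
        Qt::QueuedConnection);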
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you make a build that includes only one of many pending changes? In my current environment, we have a "clean" build machine, which has an exact copy of all committed changes, nothing more, nothing less.
And of course I have my own machine, with dozens of files in an "in-progress" state.
Often I need to build my application with only one change in place. For example, I've finished task ABC, and I want to build an EXE with only that change.
But of course I can't commit the change to the repository until it's tested.
Branching seems like overkill for this. What do you do in your environment to isolate changes for test builds and releases?
@Matt b: So while you wait for feedback on your change, what do you do? Are you always working on exactly one thing?
A: So you are asking how to handle working on multiple "tasks" at once, right? Except branching.
You can have multiple checkouts of the source on the local machine, suffixing the directory name with the name of the ticket you are working on. Just make sure to make changes in the right directory, depending on the task...
Mixing multiple tasks in one working copy / commit can get very confusing, especially if somebody needs to review your work later.
A: I prefer to make and test builds on my local machine/environment before committing or promoting any changes.
For your specific example, I would have checked out a clean copy of the source before starting task ABC, and after implementing ABC, created a build locally with that in it.
A: Something like that: git stash && ./bootstrap.sh && make tests :)
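A slightly expanded sketch of that workflow (the bootstrap/make commands are placeholders for whatever your build uses):
git stash                        # shelve unrelated in-progress edits
./bootstrap.sh && make tests     # build and test with only the finished change
git stash pop                    # restore the shelved work afterwards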
A: I try hard to make each "commit" operation represent a single, cohesive change. Sometimes it's a whole bug fix or whole feature, and sometimes it's a single small refactoring on the way to something bigger. There's no simple way to decide what a unit is here, just by gut feel. I also ask (beg!) my teammates to do the same.
When this is done well, you get a number of benefits:
*
*You can write a high quality, detailed description for the change.
*Reading the first line of the description of each change gives you a sense of the flow of the code.
*The diffs of a change are easy to read & understand.
*If a change introduces a bug / build break / other problem, it's easy to isolate, understand, and back out if necessary.
*If I'm half-way through a change and decide to abort, I don't lose much.
*If I'm not sure how to proceed next, I can spend a few minutes on each of several approaches, and then pick the one I like, discarding the others.
*My coworkers pick up most of my changes sooner, dramatically simplifying the merge problem.
*When I'm feeling stuck about a big problem, I can take a few small steps that I'm confident in, checking them in as I go, thereby making the big problem a little smaller.
Working like this can help reduce the need for small branches, since you take a small, confident step, validate it, and commit it, then repeat. I've talked about how to make the step small & confident, but for this to work, you also need to make the validation phase go quickly. Having a strong battery of fast, fine-grained unit tests + high quality, fast application tests is key.
Teams that I have worked on before required code reviews before checking in; that adds latency, which interferes with my small-step work style. Making code reviews a high-urgency interrupt works; so does switching to pair programming.
Still, my brain seems to like heavy multitasking. To make that work, I still want multiple in-progress changes. I've used multiple branches, multiple local copies, multiple computers, and tools that make backups of pending changes. All of them can work. (And all of them are equivalent, implemented in different ways.) I think that multiple branches is my favorite, although you need a source control system that is good at spinning up new branches quickly & easily, without being a burden on the server. I've heard BitKeeper is good at this, but I haven't had a chance to check it out yet.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I add a div to DOM and pick it up later I think this is specific to IE 6.0 but...
In JavaScript I add a div to the DOM. I assign an id attribute. When I later try to pick up the div by the id all I get is null.
Any suggestions?
Example:
var newDiv = document.createElement("DIV");
newDiv.setAttribute("ID", "obj_1000");
document.appendChild(newDiv);
alert("Added:" + newDiv.getAttribute("ID") + ":" + newDiv.id + ":" + document.getElementById("obj_1000") );
Alert prints "::null"
Seems to work fine in Firefox 2.0+
A: In addition to what the other answers suggest (that you need to actually insert the element into the DOM for it to be found via getElementById()), you also need to use a lower-case attribute name in order for IE6 to recognize it as the id:
var newDiv = document.createElement("DIV");
newDiv.setAttribute("id", "obj_1000");
document.body.appendChild(newDiv);
alert("Added:"
+ newDiv.getAttribute("id")
+ ":" + newDiv.id + ":"
+ document.getElementById("obj_1000") );
...responds as expected:
Added:obj_1000:obj_1000:[object]
According to the MSDN documentation for setAttribute(), up to IE8 there is an optional third parameter that controls whether or not it is case sensitive with regard to the attribute name. Guess what the default is...
A: The div needs to be added to an element for it to be part of the document.
document.body.appendChild(newDiv);
alert( document.getElementById("obj_1000") );
A: You have to add the div to the dom.
// Create the Div
var oDiv = document.createElement('div');
document.body.appendChild(oDiv);
A: newDiv.setAttribute( "ID", "obj_1000" );
should be
newDiv.id = "obj_1000";
A: Hummm, thanks for putting me on the right track, guys... this was odd, but it turns out that if I change the case to lower case, everything starts working just fine...
Finished Result:
var newDiv = document.createElement("DIV");
newDiv.setAttribute("id", "obj_1000");
document.body.appendChild(newDiv);
alert("Added:" +
newDiv.getAttribute("id") + ":" +
newDiv.id + ":" +
document.getElementById("obj_1000"));
ODD...VERY ODD
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Changing the Directory Structure in Subversion How do I create a branch in subversion that is deeper than just the 'branches' directory?
I have the standard trunk, tags and branches structure and I want to create a branch that is several directories deeper than the 'branches' directory.
Using the standard svn move method, it gives me a folder not found error. I also tried copying it into the branches folder, checked it out, and the 'svn move' it into the tree structure I wanted, but also got a 'working copy admin area is missing' error.
What do I need to do to create this?
For the sake of illustration, let us suppose I want to create a branch to go directly into 'branches/version_1/project/subproject' (which does not exist yet)?
A: Since subversion doesn't actually think of branches as anything special other than more directories, you can always just create the directory tree you want (with svn mkdir) then copy the code you want into the tree location.
Or just use the --parents flag @BlairC mentioned.
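A hedged sketch of both variants (the repository URL is illustrative):
# create the intermediate directories explicitly, then copy...
svn mkdir --parents -m "branch scaffolding" http://svn.example.com/repo/branches/version_1/project
svn copy -m "branch subproject" http://svn.example.com/repo/trunk/subproject http://svn.example.com/repo/branches/version_1/project/subproject
# ...or let copy create them in one step
svn copy --parents -m "branch subproject" http://svn.example.com/repo/trunk/subproject http://svn.example.com/repo/branches/version_1/project/subproject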
A: I second the use of TortoiseSVN, simply right-click on the directory and go to TortoiseSVN->Branch/tag... to quickly create a branch at a specified directory. Be sure to fill out the URL to be what you want it to be on the resulting "Copy (Branch / Tag)" dialog window.
A: svn copy --parents http://url/to/subproject http://url/to/repository/branches/version_1/project/subproject
That should create the directory you want to put the subproject in (--parents means "create the intermediate directories for me").
A: If you're using TortoiseSVN, you can use its Repository Explorer to do such things. Makes it all pretty WYSIWYG simple.
A: SVN doesn't really manage your branches. It simply does a wholesale copy. It's up to you how you want to manage it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do I get the path of the assembly the code is in? Is there a way to get the path for the assembly in which the current code resides? I do not want the path of the calling assembly, just the one containing the code.
Basically my unit test needs to read some xml test files which are located relative to the dll. I want the path to always resolve correctly regardless of whether the testing dll is run from TestDriven.NET, the MbUnit GUI or something else.
Edit: People seem to be misunderstanding what I'm asking.
My test library is located in say
C:\projects\myapplication\daotests\bin\Debug\daotests.dll
and I would like to get this path:
C:\projects\myapplication\daotests\bin\Debug\
The three suggestions so far fail me when I run from the MbUnit Gui:
*
*Environment.CurrentDirectory gives c:\Program Files\MbUnit
*System.Reflection.Assembly.GetAssembly(typeof(DaoTests)).Location gives C:\Documents and Settings\george\Local Settings\Temp\....\DaoTests.dll
*System.Reflection.Assembly.GetExecutingAssembly().Location gives the same as the previous.
A: As far as I can tell, most of the other answers have a few problems.
The correct way to do this for a disk-based (as opposed to web-based), non-GACed assembly is to use the currently executing assembly's CodeBase property.
This returns a URL (file://). Instead of messing around with string manipulation or UnescapeDataString, this can be converted with minimal fuss by leveraging the LocalPath property of Uri.
var codeBaseUrl = Assembly.GetExecutingAssembly().CodeBase;
var filePathToCodeBase = new Uri(codeBaseUrl).LocalPath;
var directoryPath = Path.GetDirectoryName(filePathToCodeBase);
A: Same as John's answer, but a slightly less verbose extension method.
public static string GetDirectoryPath(this Assembly assembly)
{
string filePath = new Uri(assembly.CodeBase).LocalPath;
return Path.GetDirectoryName(filePath);
}
Now you can do:
var localDir = Assembly.GetExecutingAssembly().GetDirectoryPath();
or if you prefer:
var localDir = typeof(DaoTests).Assembly.GetDirectoryPath();
A: var assembly = System.Reflection.Assembly.GetExecutingAssembly();
var assemblyPath = assembly.GetFiles()[0].Name;
var assemblyDir = System.IO.Path.GetDirectoryName(assemblyPath);
A: Here is a VB.NET port of John Sibly's code. Visual Basic is not case sensitive, so a couple of his variable names were colliding with type names.
Public Shared ReadOnly Property AssemblyDirectory() As String
Get
Dim codeBase As String = Assembly.GetExecutingAssembly().CodeBase
Dim uriBuilder As New UriBuilder(codeBase)
Dim assemblyPath As String = Uri.UnescapeDataString(uriBuilder.Path)
Return Path.GetDirectoryName(assemblyPath)
End Get
End Property
A: In all these years, nobody has actually mentioned this one. A trick I learned from the awesome ApprovalTests project. The trick is that you use the debugging information in the assembly to find the original directory.
This will not work in RELEASE mode, nor with optimizations enabled, nor on a machine different from the one it was compiled on.
But this will get you paths that are relative to the location of the source code file you call it from
public static class PathUtilities
{
public static string GetAdjacentFile(string relativePath)
{
return GetDirectoryForCaller(1) + relativePath;
}
public static string GetDirectoryForCaller()
{
return GetDirectoryForCaller(1);
}
public static string GetDirectoryForCaller(int callerStackDepth)
{
var stackFrame = new StackTrace(true).GetFrame(callerStackDepth + 1);
return GetDirectoryForStackFrame(stackFrame);
}
public static string GetDirectoryForStackFrame(StackFrame stackFrame)
{
return new FileInfo(stackFrame.GetFileName()).Directory.FullName + Path.DirectorySeparatorChar;
}
}
A: The only solution that worked for me when using CodeBase and UNC Network shares was:
System.IO.Path.GetDirectoryName(new System.Uri(System.Reflection.Assembly.GetExecutingAssembly().CodeBase).LocalPath);
It also works with normal URIs too.
A: I've been using Assembly.CodeBase instead of Location:
Assembly a;
a = Assembly.GetAssembly(typeof(DaoTests));
string s = a.CodeBase.ToUpper(); // file:///c:/path/name.dll
Assert.AreEqual(true, s.StartsWith("FILE://"), "CodeBase is " + s);
s = s.Substring(7, s.LastIndexOf('/') - 7); // 7 = "file://"
while (s.StartsWith("/")) {
s = s.Substring(1, s.Length - 1);
}
s = s.Replace("/", "\\");
It's been working, but I'm no longer sure it is 100% correct. The page at http://blogs.msdn.com/suzcook/archive/2003/06/26/assembly-codebase-vs-assembly-location.aspx says:
"The CodeBase is a URL to the place where the file was found, while the Location is the path where it was actually loaded. For example, if the assembly was downloaded from the internet, its CodeBase may start with "http://", but its Location may start with "C:\". If the file was shadow-copied, the Location would be the path to the copy of the file in the shadow copy dir.
It’s also good to know that the CodeBase is not guaranteed to be set for assemblies in the GAC. Location will always be set for assemblies loaded from disk, however."
You may want to use CodeBase instead of Location.
A: This should work, unless the assembly is shadow copied:
string path = System.Reflection.Assembly.GetExecutingAssembly().Location
A: Does this help?
//get the full location of the assembly with DaoTests in it
string fullPath = System.Reflection.Assembly.GetAssembly(typeof(DaoTests)).Location;
//get the folder that's in
string theDirectory = Path.GetDirectoryName( fullPath );
A: It's as simple as this:
var dir = AppDomain.CurrentDomain.BaseDirectory;
A: The current directory your process is running in:
Environment.CurrentDirectory; // This is the current directory of your application
If you copy the .xml file to the output directory as part of the build, you should find it there.
or
System.Reflection.Assembly assembly = System.Reflection.Assembly.GetAssembly(typeof(SomeObject));
// The location of the Assembly
assembly.Location;
A: You can get the bin path by
AppDomain.CurrentDomain.RelativeSearchPath
A: All of the proposed answers work when the developer can change the code to include the required snippet, but if you wanted to do this without changing any code you could use Process Explorer.
It will list all executing dlls on the system, you may need to determine the process id of your running application, but that is usually not too difficult.
I've written a full description of how do this for a dll inside II - http://nodogmablog.bryanhogan.net/2016/09/locating-and-checking-an-executing-dll-on-a-running-web-server/
A: In a Windows Forms app, you can simply use Application.StartupPath,
but for DLLs and console apps the code is much harder to remember...
string slash = Path.DirectorySeparatorChar.ToString();
string root = Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location);
root += slash;
string settingsIni = root + "settings.ini";
A: I believe this would work for any kind of application:
AppDomain.CurrentDomain.RelativeSearchPath ?? AppDomain.CurrentDomain.BaseDirectory
A: You will get an incorrect directory if the path contains the '#' symbol.
So I use a modification of John Sibly's answer that combines UriBuilder.Path and UriBuilder.Fragment:
public static string AssemblyDirectory
{
get
{
string codeBase = Assembly.GetExecutingAssembly().CodeBase;
UriBuilder uri = new UriBuilder(codeBase);
//modification of the John Sibly answer
string path = Uri.UnescapeDataString(uri.Path.Replace("/", "\\") +
uri.Fragment.Replace("/", "\\"));
return Path.GetDirectoryName(path);
}
}
A: For ASP.NET, it doesn't work. I found a more complete solution at Why AppDomain.CurrentDomain.BaseDirectory not contains "bin" in asp.net app?. It works for both Windows applications and ASP.NET web applications.
public string ApplicationPath
{
get
{
if (String.IsNullOrEmpty(AppDomain.CurrentDomain.RelativeSearchPath))
{
return AppDomain.CurrentDomain.BaseDirectory; //exe folder for WinForms, Consoles, Windows Services
}
else
{
return AppDomain.CurrentDomain.RelativeSearchPath; //bin folder for Web Apps
}
}
}
A: What about this:
System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location);
A: AppDomain.CurrentDomain.BaseDirectory
works with MbUnit GUI.
A: Starting with .net framework 4.6 / .net core 1.0, there is now a AppContext.BaseDirectory, which should give the same result as AppDomain.CurrentDomain.BaseDirectory, except that AppDomains were not part of the .net core 1.x /.net standard 1.x API.
AppContext.BaseDirectory
EDIT: The documentation now even state:
In .NET 5.0 and later versions, for bundled assemblies, the value returned is the containing directory of the host executable.
Indeed, the Assembly.Location doc says:
In .NET 5.0 and later versions, for bundled assemblies, the value returned is an empty string.
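A hedged sketch of a fallback that copes with single-file bundles (reusing the DaoTests type from earlier in this thread):
// Location is empty for .NET 5+ single-file bundles, so fall back
// to AppContext.BaseDirectory in that case.
string location = typeof(DaoTests).Assembly.Location;
string dir = string.IsNullOrEmpty(location)
    ? AppContext.BaseDirectory
    : Path.GetDirectoryName(location);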
A: I suspect that the real issue here is that your test runner is copying your assembly to a different location. There's no way at runtime to tell where the assembly was copied from, but you can probably flip a switch to tell the test runner to run the assembly from where it is and not to copy it to a shadow directory.
Such a switch is likely to be different for each test runner, of course.
Have you considered embedding your XML data as resources inside your test assembly?
A: Note: Assembly.CodeBase is deprecated in .NET Core/.NET 5+: https://learn.microsoft.com/en-us/dotnet/api/system.reflection.assembly.codebase?view=net-5.0
Original answer:
I've defined the following property as we use this often in unit testing.
public static string AssemblyDirectory
{
get
{
string codeBase = Assembly.GetExecutingAssembly().CodeBase;
UriBuilder uri = new UriBuilder(codeBase);
string path = Uri.UnescapeDataString(uri.Path);
return Path.GetDirectoryName(path);
}
}
The Assembly.Location property sometimes gives you some funny results when using NUnit (where assemblies run from a temporary folder), so I prefer to use CodeBase, which gives you the path in URI format; Uri.UnescapeDataString then removes the file:// at the beginning, and GetDirectoryName changes it to the normal Windows format.
A: How about this ...
string ThisdllDirectory = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location);
Then just hack off what you do not need
A: tl;dr
The concept of an assembly and a DLL file are not the same. Depending on how the assembly was loaded the path information gets lost or is not available at all.
Most of the time the provided answers will work, though.
There is one misconception the question and the previous answers have. In most of the cases the provided answers will work just fine but
there are cases where it is not possible to get the correct path of the assembly which the current code resides.
The concept of an assembly - which contains executable code - and a dll file - which contains the assembly - are not tightly coupled. An assembly may
come from a DLL file but it does not have to.
Using the Assembly.Load(Byte[]) (MSDN) method you can load an assembly directly from a byte array in memory.
It does not matter where the byte array comes from. It could be loaded from a file, downloaded from the internet, dynamically generated,...
Here is an example which loads an assembly from a byte array. The path information gets lost after the file is loaded. It is not possible to
get the original file path, and all of the previously described methods do not work.
This method is located in the executing assembly which is located at "D:/Software/DynamicAssemblyLoad/DynamicAssemblyLoad/bin/Debug/Runner.exe"
static void Main(string[] args)
{
var fileContent = File.ReadAllBytes(@"C:\Library.dll");
var assembly = Assembly.Load(fileContent);
// Call the method of the library using reflection
assembly
?.GetType("Library.LibraryClass")
?.GetMethod("PrintPath", BindingFlags.Public | BindingFlags.Static)
?.Invoke(null, null);
Console.WriteLine("Hello from Application:");
Console.WriteLine($"GetViaAssemblyCodeBase: {GetViaAssemblyCodeBase(assembly)}");
Console.WriteLine($"GetViaAssemblyLocation: {assembly.Location}");
Console.WriteLine($"GetViaAppDomain : {AppDomain.CurrentDomain.BaseDirectory}");
Console.ReadLine();
}
This class is located in the Library.dll:
public class LibraryClass
{
public static void PrintPath()
{
var assembly = Assembly.GetAssembly(typeof(LibraryClass));
Console.WriteLine("Hello from Library:");
Console.WriteLine($"GetViaAssemblyCodeBase: {GetViaAssemblyCodeBase(assembly)}");
Console.WriteLine($"GetViaAssemblyLocation: {assembly.Location}");
Console.WriteLine($"GetViaAppDomain : {AppDomain.CurrentDomain.BaseDirectory}");
}
}
For the sake of completeness here is the implementations of GetViaAssemblyCodeBase() which is the same for both assemblies:
private static string GetViaAssemblyCodeBase(Assembly assembly)
{
var codeBase = assembly.CodeBase;
var uri = new UriBuilder(codeBase);
return Uri.UnescapeDataString(uri.Path);
}
The Runner prints the following output:
Hello from Library:
GetViaAssemblyCodeBase: D:/Software/DynamicAssemblyLoad/DynamicAssemblyLoad/bin/Debug/Runner.exe
GetViaAssemblyLocation:
GetViaAppDomain : D:\Software\DynamicAssemblyLoad\DynamicAssemblyLoad\bin\Debug\
Hello from Application:
GetViaAssemblyCodeBase: D:/Software/DynamicAssemblyLoad/DynamicAssemblyLoad/bin/Debug/Runner.exe
GetViaAssemblyLocation:
GetViaAppDomain : D:\Software\DynamicAssemblyLoad\DynamicAssemblyLoad\bin\Debug\
As you can see, neither the code base, location or base directory are correct.
A: string path = Path.GetDirectoryName(typeof(DaoTests).Module.FullyQualifiedName);
A: This should work:
ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap();
Assembly asm = Assembly.GetCallingAssembly();
String path = Path.GetDirectoryName(new Uri(asm.EscapedCodeBase).LocalPath);
string strLog4NetConfigPath = System.IO.Path.Combine(path, "log4net.config");
I am using this to deploy DLL file libraries along with some configuration file (this is to use log4net from within the DLL file).
A: This is what I came up with. Across web projects and unit tests (NUnit and the ReSharper test runner), I found this worked for me.
I have been looking for code to detect what configuration the build is in (Debug/Release/CustomName). Alas, I'm stuck with #if DEBUG -- so if someone can improve on that, please do!
Feel free to edit and improve.
Getting app folder. Useful for web roots, unittests to get the folder of test files.
public static string AppPath
{
get
{
DirectoryInfo appPath = new DirectoryInfo(AppDomain.CurrentDomain.BaseDirectory);
while (appPath.FullName.Contains(@"\bin\", StringComparison.CurrentCultureIgnoreCase)
|| appPath.FullName.EndsWith(@"\bin", StringComparison.CurrentCultureIgnoreCase))
{
appPath = appPath.Parent;
}
return appPath.FullName;
}
}
Getting bin folder: Useful for executing assemblies using reflection. If files are copied there due to build properties.
public static string BinPath
{
get
{
string binPath = AppDomain.CurrentDomain.BaseDirectory;
if (!binPath.Contains(@"\bin\", StringComparison.CurrentCultureIgnoreCase)
&& !binPath.EndsWith(@"\bin", StringComparison.CurrentCultureIgnoreCase))
{
binPath = Path.Combine(binPath, "bin");
//-- Please improve this if there is a better way
//-- Also note that apps like webapps do not have a debug or release folder. So we would just return bin.
#if DEBUG
if (Directory.Exists(Path.Combine(binPath, "Debug")))
binPath = Path.Combine(binPath, "Debug");
#else
if (Directory.Exists(Path.Combine(binPath, "Release")))
binPath = Path.Combine(binPath, "Release");
#endif
}
return binPath;
}
}
A: I find my solution adequate for the retrieval of the location.
var executingAssembly = new FileInfo((Assembly.GetExecutingAssembly().Location)).Directory.FullName;
A: I got the same behaviour in NUnit in the past. By default NUnit copies your assembly into a temp directory. You can change this behaviour in the NUnit settings by disabling shadow copy.
Maybe TestDriven.NET and the MbUnit GUI have the same settings.
A: I use this to get the path to the Bin Directory:
var i = Environment.CurrentDirectory.LastIndexOf(@"\");
var path = Environment.CurrentDirectory.Substring(0,i);
You get this result:
"c:\users\ricooley\documents\visual studio
2010\Projects\Windows_Test_Project\Windows_Test_Project\bin"
A: Web application?
Server.MapPath("~/MyDir/MyFile.ext")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "912"
} |
Q: construct a complex SQL query (or queries) As part of a larger web-app (using CakePHP), I'm putting together a simple blog system. The relationships are exceedingly simple: each User has a Blog, which has many Entries, which have many Comments.
An element I'd like to incorporate is a list of "Popular Entries." Popular Entries have been defined as those with the most Comments in the last month, and ultimately they need to be ordered by the number of recent Comments.
Ideally, I'd like the solution to stay within Cake's Model data-retrieval apparatus (Model->find(), etc.), but I'm not sanguine about this.
Anyone have a clever/elegant solution? I'm steeling myself for some wild SQL hacking to make this work...
A: Heh, I was just about to come back with essentially the same answer (using Cake's Model::find):
$this->loadModel('Comment');
$this->Comment->find( 'all', array(
'fields' => array('COUNT(Comment.id) AS popularCount'),
'conditions' => array(
'Comment.created >' => strtotime('-1 month')
),
'group' => 'Comment.blog_post_id',
'order' => 'popularCount DESC',
'contain' => array(
'Entry' => array(
'fields' => array( 'Entry.title' )
)
)
));
It's not perfect, but it works and can be improved on.
I made an additional improvement, using the Containable behaviour to extract the Entry data instead of the Comment data.
A: Shouldn't be too bad, you just need a GROUP BY (this is off the top of my head, so forgive any syntax errors):
SELECT entry_id, count(id) AS c
FROM comment
WHERE comment.createdate >= DATE_SUB(CURDATE(), INTERVAL 1 MONTH)
GROUP BY entry_id
ORDER BY c DESC
A: If you weren't fussed about the time sensitive nature of the comments, you could make use of CakePHP's counterCache functionality by adding a "comment_count" field to the entries table, configuring the counterCache key of the Comment belongsTo Entry association with this field, then call find() on the Entry model.
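A hedged CakePHP 1.x sketch of that configuration (it assumes an integer comment_count column has been added to the entries table):
// In the Comment model:
var $belongsTo = array(
    'Entry' => array('counterCache' => true)
);
// Popular entries then become a trivial find (no date filter, as noted):
$popular = $this->Entry->find('all', array(
    'order' => 'Entry.comment_count DESC',
    'limit' => 10
));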
A: You probably want a WHERE clause to get just last 30 days comments:
SELECT entry_id, count(id) AS c
FROM comment
WHERE comment_date + 30 >= sysdate
GROUP BY entry_id
ORDER BY c DESC
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: .NET Production Debugging I've had a Windows app in production for a while now, and have it set up to send us error reports when it throws exceptions. Most of these are fairly descriptive and help me find the problem very quickly (I use the MS Application Exception Block).
On a few occasions I have reports that are issues that I can't reproduce, and seem to only happen on a few client machines.
I don't have physical access to these client machines, what are some strategies I can use for debugging? Would it be better to build some tracing into the code, or are there some other alternatives?
Thank you.
Edit: I should have been more clear: The error reports that I get do have the stack trace, but since it's production code, it doesn't indicate the exact line that caused the exception, just the method in which it was thrown.
A: You are on the right track. You need to create a tracking module which logs actions/exceptions locally.
You can then have a button or a menu option that the user can click to either automatically email you this information the moment the issue occurs, or they can have the option to get hold of the file so that they can transfer it to you in any other way.
You can even build-in a diagnostics code to run an integrity check on the system and sends you a report (maybe it runs all your unit tests to see if they work on that system).
A: One option is to generate a (mini-)dump file as close to the point where the exception is thrown as possible. This article talks about how to do this from managed code.
You could then load the dump file into Visual Studio or WinDbg and examine it with the aid of SOS
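A hedged C# sketch of that approach (the P/Invoke declaration below follows dbghelp's documented MiniDumpWriteDump; dump type 0 is MiniDumpNormal):
using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;
static class DumpWriter
{
    [DllImport("dbghelp.dll", SetLastError = true)]
    static extern bool MiniDumpWriteDump(IntPtr hProcess, uint processId,
        SafeFileHandle hFile, int dumpType, IntPtr exceptionParam,
        IntPtr userStreamParam, IntPtr callbackParam);
    // Write a minidump of the current process; call this from an
    // exception handler, as close to the throw site as possible.
    public static void Write(string path)
    {
        using (var fs = new FileStream(path, FileMode.Create))
        {
            Process p = Process.GetCurrentProcess();
            MiniDumpWriteDump(p.Handle, (uint)p.Id, fs.SafeFileHandle,
                0 /* MiniDumpNormal */, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);
        }
    }
}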
A: I always use this module from Jeff for unhandled exceptions, sending me an email with stacktrace etc.
A: Smart Inspect from Gurock Software has come in handy many times for me. It is very easy to put into a .NET application and gives you extremely powerful control when analyzing log files. It has log levels that allow you to turn off certain functionality except in certain cases so you don't lose performance.
They even have server software that your software can connect to in order to save logs when you do not have full access to the machines. For example, you could have a server running at www.yourdomain.com. Your software would have a configuration option to turn on debugging. Smart Inspect would be configured to send the log data to your server (and optionally to a local file) so that you could get live logging no matter where the software is being run.
Smart Inspect is very easy to configure and has many features that you can use to help. I've use it to debug high impact multi-threaded server applications on the fly without taking down the machines. It has all the hooks to keep track of different processes, threads and machines.
A: I'd make use of the event log. Take a look here:
http://support.microsoft.com/kb/307024
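A hedged sketch of what that looks like in code ("MyApp" is an illustrative source name, and ex is the caught exception):
// Requires System.Diagnostics. Creating a source needs admin rights
// and should happen once, e.g. at install time.
if (!EventLog.SourceExists("MyApp"))
    EventLog.CreateEventSource("MyApp", "Application");
EventLog.WriteEntry("MyApp", ex.ToString(), EventLogEntryType.Error);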
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: In LINQ to SQL how can I make it so I can add items to a collection of an entity without saving the entity first? var e1 = new E1();
e1.e2s.Add(new e2()); //e2s is null until e1 is saved, i want to save them all at the same time
context.e1s.insertonsubmit(e1);
context.submitchanges();
A: The sub items will be saved along with the main item, and even identities will be set properly, if you give your DataClasses an association between these classes.
You do this by adding LoadOptions to your O/R-Designer DataClasses like this:
MyDataContext mydc = new MyDataContext();
System.Data.Linq.DataLoadOptions lo = new System.Data.Linq.DataLoadOptions();
lo.LoadWith<E1>(p => p.e2s);
mydc.LoadOptions = lo;
This way LINQ will take care of adding the sub-items; you don't need to InsertOnSubmit each one individually.
A side effect: upon loading the item, the subitems will be retrieved, too.
A: Well - I don't know if your initial code block would work, but I'm guessing you have to mark your new e2 as insert on submit. Thus:
var e1 = new E1();
var e2 = new e2();
e1.e2s.Add(e2); //e2s is null until e1 is saved, i want to save them all at the same time
context.e1s.insertonsubmit(e1);
context.e2s.insertonsubmit(e2);
context.submitchanges();
A: there we go: apparently when you add another constructor, you have to chain to the parameterless constructor (e.g. public E1(...) : this()) so that the collection initialization it performs still runs
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to import a DBF file in SQL Server How can you import a foxpro DBF file in SQL Server?
A: I was able to use the answer from jnovation but since there was something wrong with my fields, I simply selected specific fields instead of all, like:
select * into CERTDATA
from openrowset('VFPOLEDB','C:\SomePath\CERTDATA.DBF';'';
'','SELECT ACTUAL, CERTID FROM CERTDATA')
Very exciting to finally have a workable answer thanks to everyone here!
A: What finally worked for us was to use the FoxPro OLEDB Driver and use the following syntax. In our case we are using SQL 2008.
select * from
openrowset('VFPOLEDB','\\VM-GIS\E\Projects\mymap.dbf';'';
'','SELECT * FROM mymap')
Substitute the \\VM-GIS... with the location of your DBF file, either UNC or drive path. Also, substitute mymap after the FROM with the name of the DBF file without the .dbf extension.
A: http://elphsoft.com/dbfcommander.html can export from DBF to SQL Server and vice versa
A: Use a linked server or use openrowset, example
SELECT * into SomeTable
FROM OPENROWSET('MSDASQL', 'Driver=Microsoft Visual FoxPro Driver;
SourceDB=\\SomeServer\SomePath\;
SourceType=DBF',
'SELECT * FROM SomeDBF')
A: This tool allows you to import to and from SQL Server.
*
*http://www.download3000.com/download_17933.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Cross-Branch merging in TFS? Is it possible to merge to a branch that is not a direct parent or child in TFS? I suspect that the answer is no as this is what I've experienced while using it. However, it seems that at certain times it would be really useful when there are different features being worked on that may have different approval cycles (ie. feature one might be approved before feature two). This becomes exceedingly difficult when we have production branches where we have to merge some feature into a previous branch so we can release before the next full version.
Our current branching strategy is to develop in the trunk (or mainline as we call it), and create a branch to stabilize and release to production. This branch can then be used to create hotfixes and other things while mainline can diverge for upcoming features.
What techniques can be used otherwise to mitigate a scenario such as the one(s) described above?
A: tf.exe merge /recursive /baseless $/TeamProject/SourceBranch $/TeamProject/TargetBranch
*
*MSDN: How To: Perform a Baseless Merge in Visual Studio Team Foundation Server
A: You may want to revisit your branching strategy. How do you get production branches? Are you merging all code from development branches, regression testing and then creating a production branch for fixes? Or are you developing on the trunk and then creating production branches to stabilize and release from? The second way creates problems of the type you're describing. If you are using the first approach -- the trunk is supposed to be only for things that have been built on branches tested and then merged you will run into this much less often. Under that approach if you're still having this problem it may be because your development effort is very large and you may need a relatively complex branching strategy with layers of branching and promotion.
A: I agree with Harpreet that you may want to revisit how you have set up your branching structure. However, if you really want to perform this type of merge you can, through something called a baseless merge. It runs from the TFS command prompt:
tf merge /baseless <<source path>> <<target path>> /recursive
Additional info about baseless merges can be found here
Also I found this document to be invaluable when constructing our tfs branching structure
Microsoft Team Foundation Server Branching Guidance
A: AFAIK you can do this as long as the branches were created off of the same original folder.
*
*trunk/
*branches/
-/feature1 (branched from trunk)
-/feature2 (branched from trunk)
If you do this then you should be able to merge between feature1 and feature2 as well.
Though my branching/merging experience with TFS leaves me wanting more. I wish we just had SVN.
A: Yes, you can do a baseless merge, but only from the command line (tf.exe).
A: TFS will allow you to merge with a branch that is not a parent/child - these are called baseless merges. See these links:
From MSDN
From the TFS Team via CodePlex
We typically do major or destabilizing changes on a development branch. If close to a major release of one of our products nearly all changes will be done on a branch.
A: I am far from a TFS expert, but I think you can merge siblings, and I think it is not a baseless merge.
We branched off our main branch (branch name "main") for a feature (branch name "feature"), then I needed some of the work in a branch that was also branched off the main branch (branch name "dev"). I would consider feature and dev branches to be siblings as they both came from the same parent. I merged feature to dev and all files (14000) were marked as merge, some were marked as merge,edit. I could not cancel (visual studio would just hang), so I accepted the merge. Then I merged dev to main, then I pulled main to feature, and again 14000 files were marked for merge. I was really upset, and afraid this would continue.
At this point we did a test project. We set up main, then branched dev and feature from main. We repeated the above steps with the same results. Once we completed the merge from main to feature, all future merges only showed the edited files.
After our little test I completed the merge from main to feature. And just like the test our merges now only show the edited files. We can go dev to feature, feature to main, main to dev, etc.
I did notice when branching all file dates were modified. Maybe this is an issue?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Inform potential clients about security vulnerabilities? We have a lot of open discussions with potential clients, and they ask frequently about our level of technical expertise, including the scope of work for our current projects. The first thing I do in order to gauge the level of expertise on staff they have now or have previously used is to check for security vulnerabilities like XSS and SQL injection. I have yet to find a potential client who is vulnerable, but I started to wonder, would they actually think this investigation was helpful, or would they think, "um, these guys will trash our site if we don't do business with them." Non-technical folks get scared pretty easily by this stuff, so I'm wondering is this a show of good faith, or a poor business practice?
A: I would say that surprising people by suddenly penetration-testing their software may bother them, if only because they didn't know ahead of time. I would say if you're going to do this (and I believe it's a good thing to do), inform your clients ahead of time that you're going to do it. If they seem a little distraught by this, tell them the benefits of checking for human error from the attacker's point of view in a controlled environment. After all, even the most security-minded make mistakes: the Debian PRNG vulnerability is a good example of this.
A: I think this is a fairly subjective decision and different prospects would react differently if you told them.
I think an idea might be to let them know after they have given business to someone else.
At least this way, the ex-prospect will not think that you are trying to pressure them into giving you the business.
A: I think the problem with this would be that it would be quite hard to do checks on XSS without messing up their site. Also, things like SQL injection could be quite dangerous. If you stick to appending SELECTs, you might not have too much of a problem, but then the question is, how do you even know the injected SQL is executing?
A: From the way you described it, it seems like a poor business practice that could be a beneficial one with some modification.
First off, any vulnerability assessment or penetration test you conduct on a customer should be agreed upon in writing by that customer, period. This covers your actions legally. Without a written agreement, if you inadvertently cause damage (application crash, denial-of-service, data leak, etc) during your inspection, you are liable and could be charged (under US law; other countries have different standards).
Even if you do not cause damage, a clueless or potentially malicious customer could take you to court claiming damages; a clueless judge might just award them.
If you have written authorization to do so, then a free vulnerability assessment to attract potential customers sounds like a show of good faith and demonstrates what you want -- your skills.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Sorting Directory.GetFiles() System.IO.Directory.GetFiles() returns a string[]. What is the default sort order for the returned values? I'm assuming by name, but if so how much does the current culture effect it? Can you change it to something like creation date?
Update: MSDN points out that the sort order is not guaranteed for .Net 3.5, but the 2.0 version of the page doesn't say anything at all and neither page will help you sort by things like creation or modification time. That information is lost once you have the array (it contains only strings). I could build a comparer that would check for each file it gets, but that means accessing the file system repeatedly when presumably the .GetFiles() method already does this. Seems very inefficient.
A:
Dim Files() As String
Files = System.IO.Directory.GetFiles("C:\")
Array.Sort(Files)
A: From msdn:
The order of the returned file names is not guaranteed; use the Sort() method if a specific sort order is required.
The Sort() method is the standard Array.Sort(), which takes in IComparables (among other overloads), so if you sort by creation date, it will handle localization based on the machine settings.
A: You are correct, the default is by name ascending. The only way I have found to change the sort order is to create a DataTable from the FileInfo collection.
You can then use the DefaultView from the DataTable and sort the directory listing with the .Sort method.
This is quite involved and fairly slow, but I'm hoping someone will post a better solution.
A: You can implement custom iComparer to do sorting. Read the file info for files and compare as you like.
IComparer comparer = new YourCustomComparer();
Array.Sort(System.IO.Directory.GetFiles(path), comparer);
msdn info IComparer interface
A: A more succinct VB.Net version...is very nice. Thank you.
To traverse the list in reverse order, add the reverse method...
For Each fi As IO.FileInfo In filePaths.reverse
' Do whatever you wish here
Next
A: The MSDN Documentation states that there is no guarantee of any order on the return values. You have to use the Sort() method.
A: You could write a custom IComparer interface to sort by creation date, and then pass it to Array.Sort. You probably also want to look at StrCmpLogicalW, which is what Explorer uses for its sorting (it sorts numbers embedded in text correctly).
A: If you want to sort by something like creation date you probably need to use DirectoryInfo.GetFiles and then sort the resulting array using a suitable Predicate.
A: In .NET 2.0, you'll need to use Array.Sort to sort the FileSystemInfos.
Additionally, you can use a Comparer delegate to avoid having to declare a class just for the comparison:
DirectoryInfo dir = new DirectoryInfo(path);
FileSystemInfo[] files = dir.GetFileSystemInfos();
// sort them by creation time
Array.Sort<FileSystemInfo>(files, delegate(FileSystemInfo a, FileSystemInfo b)
{
return a.CreationTime.CompareTo(b.CreationTime);
});
A: Here's the VB.Net solution that I've used.
First make a class to compare dates:
Private Class DateComparer
Implements System.Collections.IComparer
Public Function Compare(ByVal info1 As Object, ByVal info2 As Object) As Integer Implements System.Collections.IComparer.Compare
Dim FileInfo1 As System.IO.FileInfo = DirectCast(info1, System.IO.FileInfo)
Dim FileInfo2 As System.IO.FileInfo = DirectCast(info2, System.IO.FileInfo)
Dim Date1 As DateTime = FileInfo1.CreationTime
Dim Date2 As DateTime = FileInfo2.CreationTime
If Date1 > Date2 Then Return 1
If Date1 < Date2 Then Return -1
Return 0
End Function
End Class
Then use the comparer while sorting the array:
Dim DirectoryInfo As New System.IO.DirectoryInfo("C:\")
Dim Files() As System.IO.FileInfo = DirectoryInfo.GetFiles()
Dim comparer As IComparer = New DateComparer()
Array.Sort(Files, comparer)
A: If you're interested in properties of the files such as CreationTime, then it would make more sense to use System.IO.DirectoryInfo.GetFileSystemInfos().
You can then sort these using one of the extension methods in System.Linq, e.g.:
DirectoryInfo di = new DirectoryInfo("C:\\");
FileSystemInfo[] files = di.GetFileSystemInfos();
var orderedFiles = files.OrderBy(f => f.CreationTime);
Edit - sorry, I didn't notice the .NET2.0 tag so ignore the LINQ sorting. The suggestion to use System.IO.DirectoryInfo.GetFileSystemInfos() still holds though.
A: Just an idea. I like to find an easy way out and try to reuse already available resources. If I were to sort files, I would just create a process and make a syscall to "DIR [x:\Folders\SubFolders*.*] /s /b /on" and capture the output.
With system's DIR command you can sort by :
/O          List files in sorted order.
sortorder:  N  By name (alphabetic)        S  By size (smallest first)
            E  By extension (alphabetic)   D  By date/time (oldest first)
            G  Group directories first     -  Prefix to reverse order
The /S switch includes sub folders
I AM NOT SURE IF D = By Date/Time is using LastModifiedDate or FileCreateDate. But if the needed sort order is already built-in in the DIR command, I will get that by calling syscall. And it's FAST. I am just the lazy guy ;)
After a little googling I found the switch to sort by a particular date/time:
/t [[:]TimeField] : Specifies which time field to display or use for sorting. The following list describes each of the values you can use for TimeField.
Value Description
c : Creation
a : Last access
w : Last written
A: A more succinct VB.Net version, if anyone is interested
Dim filePaths As Linq.IOrderedEnumerable(Of IO.FileInfo) = _
New DirectoryInfo("c:\temp").GetFiles() _
.OrderBy(Function(f As FileInfo) f.CreationTime)
For Each fi As IO.FileInfo In filePaths
' Do whatever you wish here
Next
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67"
} |
Q: Hide directories in wxGenericDirCtrl I am using a wxGenericDirCtrl, and I would like to know if there is a way to hide directories, I'd especially like to hide siblings of parent nodes.
For example if my directory structure looks like this:
+-a
|
+-b
| |
| +-whatever
|
+-c
| |
| +-d
| |
| +-e
| |
| +-f
|
+-g
|
+-whatever
If my currently selected directory is /a/c/d is there any way to hide b and g, so that the tree looks like this in my ctrl:
+-a
|
+-c
|
+-[d]
|
+-e
|
+-f
I'm currently working with a directory structure that has lots and lots directories that are irrelevant to most users, so it would be nice to be able to clean it up.
Edit:
If it makes a difference, I am using wxPython, and so far, I have only tested my code on linux using the GTK backend, but I do plan to make it multi-platform and using it on Windows and Mac using the native backends.
A: Listing/walking directories in Python is very easy, so I would recommend trying to "roll your own" using one of the simple tree controls (such as TreeCtrl or CustomTreeCtrl). It should really be quite easy to call the directory listing code when some directory is expanded and return the result.
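A hedged wxPython sketch of that approach (the names are illustrative; hook this up to wx.EVT_TREE_ITEM_EXPANDING for lazy loading):
import os
import wx

def add_children(tree, item, path, hidden=()):
    # List the directory ourselves so unwanted siblings can be filtered out.
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        if os.path.isdir(full) and name not in hidden:
            child = tree.AppendItem(item, name)
            tree.SetItemHasChildren(child, True)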
A: I don't think that's possible.
It would be relatively easy to add this functionality to the underlying C++ wxWidgets control, but since you're using wxPython, you'd then have to rebuild that as well which is a tremendous issue.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: best technique for launching a windbg user-mode remote debugging session What is your favorite technique for launching a windbg user-mode remote debugging session?
Why is do you prefer this technique over other techniques? (pros/cons)
There are at least four different ways to do user-mode remote debug using windbg, as documented in the "Remote Debugging" section of the debugging tools for windows help file.
*
*run app on target then attach to it from the host windbg
*have the host windbg use remote.exe to launch the app on the target
*have the "smart client" host windbg launch the app on the target via a process server that is running on the target
*run a windbg instance on the target machine using the option "-server" to automatically start a server, then connect to the server from a 2nd machine.
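Hedged example command lines for the options above (the machine name TargetBox, port 5005, and myapp.exe are illustrative):
remote.exe /s "cdb myapp.exe" mysession (option 2, on the target; attach from the host with: remote.exe /c TargetBox mysession)
dbgsrv -t tcp:port=5005 (option 3, process server on the target)
windbg -premote tcp:server=TargetBox,port=5005 myapp.exe (option 3, smart-client host launches the app)
windbg -server tcp:port=5005 myapp.exe (option 4, on the target)
windbg -remote tcp:server=TargetBox,port=5005 (option 4, connect from another machine)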
A: Option 1 is my favourite because it is the simplest. I get to launch the app in the normal way without worry about getting WinDbg to set the right working directory, pass any command line arguments, etc.
Fortunately I've not run into any cases where this hasn't worked!
A: I tend to use option 4 (-server) because it is the only one that doesn't "pop" when you break into the kernel debugger long enough for the TCP connection to time out. But this is more complex and not fully satisfying. So I'm looking for "best practices".
A: There is no "the best" solution. Each of the possibilities has advantages and disadvantages and it's good to understand all of them. It depends on several factors like:
*
*where are the symbols located
*which PC has access to the Internet to download the OS symbols
*what amount of data you may copy to the server (clients are often more accepting if it's just a single EXE)
*what's the bandwidth between client and server
*do you need other commands that just CDB/WinDbg, e.g. access to CMD, then consider remote.exe
*who's available on the server side, a debugging expert whom you can easily tell a lot of cryptic commands or a normal user who barely knows how to start a command prompt
*are both sides in a private network, so you need a "man in the middle" server to be able to access each other (or port forwarding as an alternative, which IT guys don't want and it may take days to get it set up)
From those 4 options, don't forget that clients often want to see exactly what you do, so they require an RDP session, Teamviewer or similar. That's something they understand.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: CSS overflow table row positioning I have table inside a div tab. The table has 40 rows in it and the div's height is set to show 10 rows of that table. CSS's overflow:auto lets me scroll through the 40 rows. All is well there.
How can I, with JavaScript cause the table to programatically position to a given row (i.e., programmatically scroll the table up or down by row)?
A: Where superHappyFunDiv is the ID of the container DIV and rows is a 0-based row index:
function scrollTo(row)
{
var container = document.getElementById("superHappyFunDiv");
var rows = container.getElementsByTagName("tr");
row = Math.min(Math.max(row, 0), rows.length-1);
container.scrollTop = rows[row].offsetTop;
}
Will attempt to scroll the requested row to the top of the container.
Tested in IE6 and FF3.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How do you keep the machine awake? I have a piece of server-ish software written in Java to run on Windows and OS X. (It is not running on a server, but just a normal user's PC - something like a torrent client.) I would like the software to signal to the OS to keep the machine awake (prevent it from going into sleep mode) while it is active.
Of course I don't expect there to be a cross platform solution, but I would love to have some very minimal C programs/scripts that my app can spawn to inform the OS to stay awake.
Any ideas?
A: A much cleaner solution is use JNA to tap into the native OS API. Check your platform at runtime, and if it happens to be Windows then the following will work:
import com.sun.jna.Native;
import com.sun.jna.Structure;
import com.sun.jna.Structure.FieldOrder;
import com.sun.jna.platform.win32.WTypes.LPWSTR;
import com.sun.jna.platform.win32.WinBase;
import com.sun.jna.platform.win32.WinDef.DWORD;
import com.sun.jna.platform.win32.WinDef.ULONG;
import com.sun.jna.platform.win32.WinNT.HANDLE;
import com.sun.jna.win32.StdCallLibrary;
/**
* Power management.
*
* @see <a href="https://stackoverflow.com/a/20996135/14731">https://stackoverflow.com/a/20996135/14731</a>
*/
public enum PowerManagement
{
INSTANCE;
@FieldOrder({"version", "flags", "simpleReasonString"})
public static class REASON_CONTEXT extends Structure
{
public static class ByReference extends REASON_CONTEXT implements Structure.ByReference
{
}
public ULONG version;
public DWORD flags;
public LPWSTR simpleReasonString;
}
private interface Kernel32 extends StdCallLibrary
{
HANDLE PowerCreateRequest(REASON_CONTEXT.ByReference context);
/**
* @param powerRequestHandle the handle returned by {@link #PowerCreateRequest(REASON_CONTEXT.ByReference)}
* @param requestType requestType is the ordinal value of {@link PowerRequestType}
* @return true on success
*/
boolean PowerSetRequest(HANDLE powerRequestHandle, int requestType);
/**
* @param powerRequestHandle the handle returned by {@link #PowerCreateRequest(REASON_CONTEXT.ByReference)}
* @param requestType requestType is the ordinal value of {@link PowerRequestType}
* @return true on success
*/
boolean PowerClearRequest(HANDLE powerRequestHandle, int requestType);
enum PowerRequestType
{
PowerRequestDisplayRequired,
PowerRequestSystemRequired,
PowerRequestAwayModeRequired,
PowerRequestMaximum
}
}
private final Kernel32 kernel32;
private HANDLE handle = null;
PowerManagement()
{
// Found in winnt.h
ULONG POWER_REQUEST_CONTEXT_VERSION = new ULONG(0);
DWORD POWER_REQUEST_CONTEXT_SIMPLE_STRING = new DWORD(0x1);
kernel32 = Native.load("kernel32", Kernel32.class);
REASON_CONTEXT.ByReference context = new REASON_CONTEXT.ByReference();
context.version = POWER_REQUEST_CONTEXT_VERSION;
context.flags = POWER_REQUEST_CONTEXT_SIMPLE_STRING;
context.simpleReasonString = new LPWSTR("Your reason for changing the power setting");
handle = kernel32.PowerCreateRequest(context);
if (handle == WinBase.INVALID_HANDLE_VALUE)
throw new AssertionError(Native.getLastError());
}
/**
* Prevent the computer from going to sleep while the application is running.
*/
public void preventSleep()
{
if (!kernel32.PowerSetRequest(handle, Kernel32.PowerRequestType.PowerRequestSystemRequired.ordinal()))
throw new AssertionError("PowerSetRequest() failed");
}
/**
* Allow the computer to go to sleep.
*/
public void allowSleep()
{
if (!kernel32.PowerClearRequest(handle, Kernel32.PowerRequestType.PowerRequestSystemRequired.ordinal()))
throw new AssertionError("PowerClearRequest() failed");
}
}
Then when the user runs powercfg /requests they see:
SYSTEM:
[PROCESS] \Device\HarddiskVolume1\Users\Gili\.jdks\openjdk-15.0.2\bin\java.exe
Your reason for changing the power setting
You should be able to do something similar for macOS and Linux.
A: Adding to scarcher2's code snippet above, this moves the mouse by only 1 pixel. The mouse is moved twice so that some change occurs even if the pointer is at a screen edge:
// "hal" is the java.awt.Robot instance from scarcher2's snippet above
while(true){
hal.delay(1000 * 30);
Point pObj = MouseInfo.getPointerInfo().getLocation();
System.out.println(pObj.toString() + "x>>" + pObj.x + " y>>" + pObj.y);
hal.mouseMove(pObj.x + 1, pObj.y + 1);
hal.mouseMove(pObj.x - 1, pObj.y - 1);
pObj = MouseInfo.getPointerInfo().getLocation();
System.out.println(pObj.toString() + "x>>" + pObj.x + " y>>" + pObj.y);
}
A: I have a very brute-force technique of moving the mouse 1 point in the x direction and then back every 3 minutes.
There may be a more elegant solution but it's a quick fix.
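A minimal sketch of that brute-force idea, assuming a plain java.awt.Robot is acceptable (the 3-minute interval and the 1-pixel nudge are just the values described above):
import java.awt.MouseInfo;
import java.awt.Point;
import java.awt.Robot;

public class MouseJiggler {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        while (true) {
            robot.delay(3 * 60 * 1000); // wait 3 minutes
            Point p = MouseInfo.getPointerInfo().getLocation();
            robot.mouseMove(p.x + 1, p.y); // 1 point in the x direction...
            robot.mouseMove(p.x, p.y);     // ...and straight back
        }
    }
}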
A: Wouldn't all the suggestions moving the mouse back and forth drive the user crazy? I know I'd remove any app that would do that as soon as I can isolate it.
A: I use this code to keep my workstation from locking. It's currently only set to move the mouse once every minute, you could easily adjust it though.
It's a hack, not an elegant solution.
import java.awt.*;
import java.util.*;
public class Hal{
public static void main(String[] args) throws Exception{
Robot hal = new Robot();
Random random = new Random();
while(true){
hal.delay(1000 * 60);
int x = random.nextInt(640); // nextInt(bound) avoids the negative values that nextInt() % 640 can produce
int y = random.nextInt(480);
hal.mouseMove(x,y);
}
}
}
A: Here is a complete batch file that generates the Java code, compiles it, runs it in the background, and cleans up the generated files (a JDK is required on your machine).
Just save and run this as a .bat file (somefilename.bat) ;)
@echo off
setlocal
rem If JAVA_HOME is set, run the :start_app section below; otherwise exit through the :end section.
if not "[%JAVA_HOME%]"=="[]" goto start_app
echo. JAVA_HOME not set. Application will not run!
goto end
:start_app
echo. Using java in %JAVA_HOME%
rem writes below code to Energy.java file.
@echo import java.awt.MouseInfo; > Energy.java
@echo import java.awt.Point; >> Energy.java
@echo import java.awt.Robot; >> Energy.java
@echo //Mouse Movement Simulation >> Energy.java
@echo public class Energy { >> Energy.java
@echo public static void main(String[] args) throws Exception { >> Energy.java
@echo Robot energy = new Robot(); >> Energy.java
@echo while (true) { >> Energy.java
@echo energy.delay(1000 * 60); >> Energy.java
@echo Point pObj = MouseInfo.getPointerInfo().getLocation(); >> Energy.java
@echo Point pObj2 = pObj; >> Energy.java
@echo System.out.println(pObj.toString() + "x>>" + pObj.x + " y>>" + pObj.y); >> Energy.java
@echo energy.mouseMove(pObj.x + 10, pObj.y + 10); >> Energy.java
@echo energy.mouseMove(pObj.x - 10, pObj.y - 10); >> Energy.java
@echo energy.mouseMove(pObj2.x, pObj.y); >> Energy.java
@echo pObj = MouseInfo.getPointerInfo().getLocation(); >> Energy.java
@echo System.out.println(pObj.toString() + "x>>" + pObj.x + " y>>" + pObj.y); >> Energy.java
@echo } >> Energy.java
@echo } >> Energy.java
@echo } >> Energy.java
rem compile java code.
javac Energy.java
rem run java application in background.
start javaw Energy
echo. Your Secret Energy program is running...
goto end
:end
rem clean if files are created.
pause
del "Energy.class"
del "Energy.java"
A: I've been using pmset to control sleep mode on my Mac for a while now, and it's pretty easy to integrate. Here's a rough example of how you could call that program from Java to disable/enable sleep mode. Note that you need root privileges to run pmset, and therefore you'll need them to run this program.
import java.io.BufferedInputStream;
import java.io.IOException;
/**
* Disable sleep mode (record current setting beforehand), and re-enable sleep
* mode. Works with Mac OS X using the "pmset" command.
*/
public class SleepSwitch {
private int sleepTime = -1;
public void disableSleep() throws IOException {
if (sleepTime != -1) {
// sleep time is already recorded, assume sleep is disabled
return;
}
// query pmset for the current setting
Process proc = Runtime.getRuntime().exec("pmset -g");
BufferedInputStream is = new BufferedInputStream(proc.getInputStream());
StringBuffer output = new StringBuffer();
int c;
while ((c = is.read()) != -1) {
output.append((char) c);
}
is.close();
// parse the current setting and store the sleep time
String outString = output.toString();
String setting = outString.substring(outString.indexOf(" sleep\t")).trim();
setting = setting.substring(7, setting.indexOf(" ")).trim();
sleepTime = Integer.parseInt(setting);
// set the sleep time to zero (disable sleep)
Runtime.getRuntime().exec("pmset sleep 0");
}
public void enableSleep() throws IOException {
if (sleepTime == -1) {
// sleep time is not recorded, assume sleep is enabled
return;
}
// set the sleep time to the previously stored value
Runtime.getRuntime().exec("pmset sleep " + sleepTime);
// reset the stored sleep time
sleepTime = -1;
}
}
A: You can use the program Caffeine to keep your workstation awake. You could run the program via the open command in OS X.
A: On OS X, just spawn caffeinate. This will prevent the system from sleeping until caffeinate is terminated.
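One way to drive caffeinate from Java is simply to spawn it and kill it when you're done. A minimal sketch (assumes Java 8+ for Process.isAlive; the -i flag prevents idle system sleep, and the assertion is released as soon as the process exits):
import java.io.IOException;

public class CaffeinateGuard {
    private Process caffeinate;

    public void preventSleep() throws IOException {
        if (caffeinate == null || !caffeinate.isAlive()) {
            caffeinate = new ProcessBuilder("caffeinate", "-i").start(); // -i: prevent idle sleep
        }
    }

    public void allowSleep() {
        if (caffeinate != null) {
            caffeinate.destroy(); // terminating caffeinate releases its power assertion
            caffeinate = null;
        }
    }
}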
A: In Visual Studio create a simple form.
From the toolbar, drag a Timer control onto the form.
In the Init code, set the timer interval to 60 seconds (60000 ms.).
Implement the timer callback with the following code "SendKeys.Send("{F15}");"
Run the new program.
No mouse movement needed.
Edit: At least on my Army workstation, simply programmatically generating mouse and key messages isn't enough to keep my workstation logged in and awake. The early posters using the Java Robot class are on the right track: Java's Robot works at or below the OS's HAL (Hardware Abstraction Layer). However, I recreated and tested the Java/Robot solution and it did not work - until I added a Robot.keyPress(123) to the code.
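A minimal Java sketch of the same keystroke idea, assuming F15 is as harmless on your setup as it usually is (KeyEvent.VK_F15 exists even though most keyboards lack the physical key, and key mapping can vary by platform):
import java.awt.Robot;
import java.awt.event.KeyEvent;

public class KeyPresser {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        while (true) {
            robot.delay(60 * 1000);              // once a minute
            robot.keyPress(KeyEvent.VK_F15);     // F15 is ignored by most applications
            robot.keyRelease(KeyEvent.VK_F15);
        }
    }
}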
A: To go with the solution provided by user Gili for Windows using JNA, here's the JNA solution for MacOS.
First, the JNA library interface:
import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.platform.mac.CoreFoundation;
import com.sun.jna.ptr.IntByReference;
public interface ExampleIOKit extends Library {
ExampleIOKit INSTANCE = Native.load("IOKit", ExampleIOKit.class);
CoreFoundation.CFStringRef kIOPMAssertPreventUserIdleSystemSleep = CoreFoundation.CFStringRef.createCFString("PreventUserIdleSystemSleep");
CoreFoundation.CFStringRef kIOPMAssertPreventUserIdleDisplaySleep = CoreFoundation.CFStringRef.createCFString("PreventUserIdleDisplaySleep");
int kIOReturnSuccess = 0;
int kIOPMAssertionLevelOff = 0;
int kIOPMAssertionLevelOn = 255;
int IOPMAssertionCreateWithName(CoreFoundation.CFStringRef assertionType,
int assertionLevel,
CoreFoundation.CFStringRef reasonForActivity,
IntByReference assertionId);
int IOPMAssertionRelease(int assertionId);
}
Here's an example of invoking the JNA method to turn sleep prevention on or off:
public class Example {
private static final Logger _log = LoggerFactory.getLogger(Example.class);
private int sleepPreventionAssertionId = 0;
public void updateSleepPrevention(final boolean isEnabled) {
if (isEnabled) {
if (sleepPreventionAssertionId == 0) {
final var assertionIdRef = new IntByReference(0);
final var reason = CoreFoundation.CFStringRef.createCFString(
"Example preventing display sleep");
final int result = ExampleIOKit.INSTANCE.IOPMAssertionCreateWithName(
ExampleIOKit.kIOPMAssertPreventUserIdleDisplaySleep,
ExampleIOKit.kIOPMAssertionLevelOn, reason, assertionIdRef);
if (result == ExampleIOKit.kIOReturnSuccess) {
_log.info("Display sleep prevention enabled");
sleepPreventionAssertionId = assertionIdRef.getValue();
}
else {
_log.error("IOPMAssertionCreateWithName returned {}", result);
}
}
}
else {
if (sleepPreventionAssertionId != 0) {
final int result = ExampleIOKit.INSTANCE.IOPMAssertionRelease(sleepPreventionAssertionId);
if (result == ExampleIOKit.kIOReturnSuccess) {
_log.info("Display sleep prevention disabled");
}
else {
_log.error("IOPMAssertionRelease returned {}", result);
}
sleepPreventionAssertionId = 0;
}
}
}
}
A: On Windows, use the SystemParametersInfo function. It's a Swiss army-style function that lets you get/set all sorts of system settings.
To disable the screen shutting off, for instance:
SystemParametersInfo( SPI_SETPOWEROFFACTIVE, 0, NULL, 0 );
Just be sure to set it back when you're done...
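If you are calling this from Java, a hand-rolled JNA mapping might look like the sketch below. The SPI_SETPOWEROFFACTIVE value (0x0056) is taken from winuser.h; treat the constant and this whole mapping as assumptions to verify against your SDK headers:
import com.sun.jna.Native;
import com.sun.jna.Pointer;
import com.sun.jna.win32.StdCallLibrary;

public class PowerOffSwitch {
    // Minimal hand-written mapping; only the one call we need.
    private interface User32 extends StdCallLibrary {
        User32 INSTANCE = Native.load("user32", User32.class);
        boolean SystemParametersInfoW(int uiAction, int uiParam, Pointer pvParam, int fWinIni);
    }

    private static final int SPI_SETPOWEROFFACTIVE = 0x0056; // assumed value from winuser.h

    public static void setPowerOffActive(boolean active) {
        // uiParam = 0 disables the power-off phase, mirroring the C call above;
        // call again with 1 to set it back when you're done
        User32.INSTANCE.SystemParametersInfoW(SPI_SETPOWEROFFACTIVE, active ? 1 : 0, Pointer.NULL, 0);
    }
}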
A: Wouldn't it be easier to disable the power management on the server? It might be argued that servers shouldn't go into powersave mode?
A: This code moves the pointer to the same location where it already is so the user doesn't notice any difference.
Robot rob = new Robot(); // create the Robot once, outside the loop
while (true) {
    Thread.sleep(180000); // this is how long before it moves
    Point mouseLoc = MouseInfo.getPointerInfo().getLocation();
    rob.mouseMove(mouseLoc.x, mouseLoc.y); // same coordinates, so nothing visibly changes
}
A: Run a command inside a timer, like pinging the server.
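A sketch of that timer idea (the host name is a placeholder, and whether network activity alone keeps a given machine awake depends on its power policy, so treat this as an assumption to test):
import java.net.InetAddress;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PingTimer {
    public static void main(String[] args) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            try {
                // hypothetical host; a failed ping is fine, the point is the periodic activity
                InetAddress.getByName("my-server.example.com").isReachable(2000);
            } catch (Exception ignored) {
            }
        }, 0, 1, TimeUnit.MINUTES);
    }
}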
A: I'd just do a function (or download a freebie app) that moves the mouse around. Inelegant, but easy.
A: This will work:
public class Utils {
public static void main(String[] args) throws AWTException {
Robot rob = new Robot();
PointerInfo ptr = null;
while (true) {
rob.delay(4000); // Mouse moves every 4 seconds
ptr = MouseInfo.getPointerInfo();
rob.mouseMove((int) ptr.getLocation().getX() + 1, (int) ptr.getLocation().getY() + 1);
}
}
}
A: One simple way which I use to avoid the Windows desktop auto-lock is to switch NumLock on/off every 6 seconds.
Here is a Java program to switch NumLock on/off.
import java.util.*;
import java.awt.*;
import java.awt.event.*;
public class NumLock extends Thread {
public void run() {
try {
boolean flag = true;
do {
flag = !flag;
Thread.sleep(6000);
Toolkit.getDefaultToolkit().setLockingKeyState(KeyEvent.VK_NUM_LOCK, flag);
}
while(true);
}
catch(Exception e) {}
}
public static void main(String[] args) throws Exception {
new NumLock().start();
}
}
Run this Java program in a separate command prompt; :-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Google Reader API Unread Count Does Google Reader have an API and if so, how can I get the count of the number of unread posts for a specific user knowing their username and password?
A: Here is an update to this answer
import urllib
import urllib2
username = '[email protected]'
password = '******'
# Authenticate to obtain Auth
auth_url = 'https://www.google.com/accounts/ClientLogin'
#auth_req_data = urllib.urlencode({'Email': username,
# 'Passwd': password})
auth_req_data = urllib.urlencode({'Email': username,
'Passwd': password,
'service': 'reader'})
auth_req = urllib2.Request(auth_url, data=auth_req_data)
auth_resp = urllib2.urlopen(auth_req)
auth_resp_content = auth_resp.read()
auth_resp_dict = dict(x.split('=') for x in auth_resp_content.split('\n') if x)
# SID = auth_resp_dict["SID"]
AUTH = auth_resp_dict["Auth"]
# Create an Authorization header using the Auth token
header = {}
#header['Cookie'] = 'Name=SID;SID=%s;Domain=.google.com;Path=/;Expires=160000000000' % SID
header['Authorization'] = 'GoogleLogin auth=%s' % AUTH
reader_base_url = 'http://www.google.com/reader/api/0/unread-count?%s'
reader_req_data = urllib.urlencode({'all': 'true',
'output': 'xml'})
reader_url = reader_base_url % (reader_req_data)
reader_req = urllib2.Request(reader_url, None, header)
reader_resp = urllib2.urlopen(reader_req)
reader_resp_content = reader_resp.read()
print reader_resp_content
Google Reader removed SID auth around June 2010 (I think); using the new Auth from ClientLogin is the new way, and it's a bit simpler (the header is shorter). You will have to add service to the data when requesting Auth; I noticed no Auth is returned if you don't send service=reader.
You can read more about the change of authentication method in this thread.
A: This URL will give you a count of unread posts per feed. You can then iterate over the feeds and sum up the counts.
http://www.google.com/reader/api/0/unread-count?all=true
Here is a minimalist example in Python...parsing the xml/json and summing the counts is left as an exercise for the reader:
import urllib
import urllib2
username = '[email protected]'
password = '******'
# Authenticate to obtain an auth token
auth_url = 'https://www.google.com/accounts/ClientLogin'
auth_req_data = urllib.urlencode({'Email': username,
'Passwd': password,
'service': 'reader'})
auth_req = urllib2.Request(auth_url, data=auth_req_data)
auth_resp = urllib2.urlopen(auth_req)
auth_resp_content = auth_resp.read()
auth_resp_dict = dict(x.split('=') for x in auth_resp_content.split('\n') if x)
auth_token = auth_resp_dict["Auth"]
# Create an Authorization header using the auth token
header = {}
header['Authorization'] = 'GoogleLogin auth=%s' % auth_token
reader_base_url = 'http://www.google.com/reader/api/0/unread-count?%s'
reader_req_data = urllib.urlencode({'all': 'true',
'output': 'xml'})
reader_url = reader_base_url % (reader_req_data)
reader_req = urllib2.Request(reader_url, None, header)
reader_resp = urllib2.urlopen(reader_req)
reader_resp_content = reader_resp.read()
print reader_resp_content
And some additional links on the topic:
*
*http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI
*How do you access an authenticated Google App Engine service from a (non-web) python client?
*http://blog.gpowered.net/2007/08/google-reader-api-functions.html
A: It is there. Still in Beta though.
A: In the API posted in [1], the "token" field should be "T"
[1] http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Graph searching algorithm I'm looking for a graph algorithm with some unusual properties.
Each edge in the graph is either an "up" edge or a "down" edge.
A valid path can go an indefinite number of "up"'s followed by an indefinite number of "down"'s, or vice versa. However it cannot change direction more than once.
E.g., a valid path might be A "up" B "up" C "down" E "down" F
an invalid path might be A "up" B "down" C "up" D
What is a good algorithm for finding the shortest valid path between two nodes? What about finding all of the equal length shortest paths?
A: Maybe you can transform your graph into a normal directed graph and then use existing algorithms.
One way would be to split the graph into two graphs, one with all the up edges and one with all the down edges and with directed edges between all the nodes on graph one and the corresponding node on graph two.
First solve for starting in graph one and ending in graph two and then the other way around, then check the shortest solution.
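A sketch of that transformation in Java, assuming nodes are numbered 0..n-1 and edges are given as (from, to) pairs. Layer 0 holds the up edges, layer 1 the down edges, and a cross edge at every node allows the single switch (give the cross edges weight 0, or subtract one from the result, if you count path length in edges); run it once as below and once with the edge sets swapped for the down-then-up case:
import java.util.ArrayList;
import java.util.List;

public class LayeredGraph {
    // Node v in layer k gets id v + k * n in the combined graph.
    static List<List<Integer>> build(int n, int[][] upEdges, int[][] downEdges) {
        List<List<Integer>> g = new ArrayList<>();
        for (int i = 0; i < 2 * n; i++) g.add(new ArrayList<>());
        for (int[] e : upEdges)   g.get(e[0]).add(e[1]);           // up edges stay in layer 0
        for (int[] e : downEdges) g.get(e[0] + n).add(e[1] + n);   // down edges stay in layer 1
        for (int v = 0; v < n; v++) g.get(v).add(v + n);           // the one allowed switch
        return g; // run any standard BFS/Dijkstra; accept the goal node in either layer
    }
}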
A: One would think your standard BFS should work here. Whenever you add a node to the open list, you can wrap it into a struct that holds which direction it is using (up or down) and a boolean flag indicating whether it has switched directions yet. These can be used to determine which outgoing edges from that node are valid.
To find all shortest paths of equal length, include the number of edges traversed so far in your struct. When you find your first shortest path, make a note of the path length and stop adding nodes to the open list. Keep going through the remaining nodes on the list until you have checked all paths of the current length, then stop.
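A sketch of that state-augmented BFS in Java (records need Java 16+; the Edge type and the adjacency-list representation are assumptions, not anything from the question). The search state is the node, the direction used to reach it, and whether the single allowed direction change has been spent:
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class UpDownBfs {
    record Edge(int to, boolean up) {}                       // assumed edge representation
    record State(int node, int lastDir, boolean switched) {} // lastDir: -1 none yet, 0 down, 1 up

    /** Length of the shortest valid path from start to goal, or -1 if none exists. */
    static int shortest(List<List<Edge>> graph, int start, int goal) {
        Deque<State> queue = new ArrayDeque<>();
        Map<State, Integer> dist = new HashMap<>();
        State init = new State(start, -1, false);
        queue.add(init);
        dist.put(init, 0);
        while (!queue.isEmpty()) {
            State s = queue.poll();
            if (s.node() == goal) return dist.get(s); // BFS on unit edges: first visit is shortest
            for (Edge e : graph.get(s.node())) {
                int dir = e.up() ? 1 : 0;
                boolean switched = s.switched();
                if (s.lastDir() != -1 && dir != s.lastDir()) {
                    if (switched) continue; // a second direction change is invalid
                    switched = true;        // spend the one allowed change
                }
                State next = new State(e.to(), dir, switched);
                if (dist.putIfAbsent(next, dist.get(s) + 1) == null) {
                    queue.add(next);
                }
            }
        }
        return -1;
    }
}
To enumerate all equal-length shortest paths, keep a list of predecessor states per state instead of a single distance and walk the lists back from the goal.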
A: A* with a specially crafted cost (G score) and heuristic (H score) function can handle it.
For the cost you could keep track of the number of direction changes in the path and add infinite cost on the second change (ie. cut off the search for those branches).
The heuristic takes some more thought, especially when you want to keep the heuristic admissible (never overestimates minimum distance to goal) and monotonic. (Only way to guarantee A* finds an optimal solution.)
Maybe there is more information about the domain available to create the heuristic? (ie. x,y coordinates of the nodes in the graph?)
Of course, depending on the size of the graph you want to solve, you could first try simpler algorithms like breadth first search or Dijkstra's algorithm: basically every search algorithm will do, and for every one you will need a cost function (or similar) anyway.
A: Assuming you don't have any heuristics, a variation of dijkstra's algorithm should suffice pretty well. Every time you consider a new edge, store information about its "ancestors". Then, check for the invariant (only one direction change), and backtrack if it is violated.
The ancestors here are all the edges that were traversed to get to the current node, along the shortest path. One good way to store the ancestor information would be as a pair of numbers. If U is up, and D is down, a particular edge's ancestors could be UUUDDDD, which would be the pair 3, 4. You will not need a third number, because of the invariant.
Since we have used dijkstra's algorithm, finding multiple shortest paths is already taken care of.
A: If you have a standard graph search function, say Graph.shortest(from, to) in a library, you can loop and minimize, in C#/pseudocode:
[ (fst.shortest(A, C) + nxt.shortest(C, B))
for C in nodes , (fst, nxt) in [(up, down), (down, up)] ].reduce(min)
If you need to remember the minimum path/paths and it so happens that your standard function returns you the data, you could also pronounce
[ [fst, nxt, C, fst.shortest(A, C), nxt.shortest(C,B)]
for C in nodes , (fst, nxt) in [(up, down), (down, up)] ].reduce(myMin)
where myMin should compare two [fst, nxt, C, AC, CB] tuples and leave the one that has the lower distance, or both, assuming reduce is a smart function.
This has some memory overhead if our graphs are large and otherwise use no memory at all (which is possible if they are generated dynamically), but not really any speed overhead, imho.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: What is the use of the square brackets [] in sql statements? I've noticed that Visual Studio 2008 is placing square brackets around column names in sql. Do the brackets offer any advantage? When I hand code T-SQL I've never bothered with them.
Example:
Visual Studio:
SELECT [column1], [column2] etc...
My own way:
SELECT column1, column2 etc...
A: Regardless of following a naming convention that avoids using reserved words, Microsoft does add new reserved words. Using brackets allows your code to be upgraded to a new SQL Server version, without first needing to edit Microsoft's newly reserved words out of your client code. That editing can be a significant concern. It may cause your project to be prematurely retired....
Brackets can also be useful when you want to Replace All in a script. If your batch contains a variable named @String and a column named [String], you can rename the column to [NewString], without renaming @String to @NewString.
A: They're handy if your columns have the same names as SQL keywords, or have spaces in them.
Example:
create table test ( id int, user varchar(20) )
Oh no! Incorrect syntax near the keyword 'user'.
But this:
create table test ( id int, [user] varchar(20) )
Works fine.
A: Column names can contain characters and reserved words that will confuse the query execution engine, so placing brackets around them at all times prevents this from happening. Easier than checking for an issue and then dealing with it, I guess.
A: The brackets can be used when column names are reserved words.
If you are programatically generating the SQL statement from a collection of column names you don't control, then you can avoid problems by always using the brackets.
A: In addition
Some Sharepoint databases contain hyphens in their names. Using square brackets in SQL Statements allow the names to be parsed correctly.
A: They are useful to identify each elements in SQL.
For example:
CREATE TABLE SchemaName.TableName (
This would actually create a table by the name SchemaName.TableName under default dbo schema even though the intention might be to create the table inside the SchemaName schema.
The correct way would be the following:
CREATE TABLE [SchemaName].[TableName] (
Now it it knows what is the table name and in which schema should it be created in (rightly in the SchemaName schema and not in the default dbo schema)
A: I believe it adds them there for consistency... they're only required when you have a space or special character in the column name, but it's cleaner to just include them all the time when the IDE generates SQL.
A: The brackets are required if you use keywords or special chars in the column names or identifiers. You could name a column [First Name] (with a space) – but then you'd need to use brackets every time you referred to that column.
The newer tools add them everywhere just in case or for consistency.
A: They are useful if you are (for some reason) using column names with certain characters for example.
Select First Name From People
would not work, but putting square brackets around the column name would work
Select [First Name] From People
In short, it's a way of explicitly declaring a object name; column, table, database, user or server.
A: During the dark ages of SQL in the 1990s this was good practice, as the SQL designers were trying to add every word in the dictionary as a keyword for an endless avalanche of new features; they called it the SQL3 draft.
So it keeps forward compatibility.
And I found that it has another nice side effect: it helps a lot when you use grep in code reviews and refactoring.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "217"
} |
Q: How do you delete wild card cookies in Rails? How do you delete a cookie in rails that was set with a wild card domain:
cookies[:foo] = {:value => 'bar', :domain => '.acme.com'}
When, following the docs, you do:
cookies.delete :foo
the logs say
Cookie set: foo=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT
Notice that the domain is missing (it seems to use the default
params for everything). Respecting the RFC, of course the cookie's
still there, Browser -> ctrl/cmd-L ->
javascript:alert(document.cookie);
Voilà!
Q: What's the "correct" way to delete such a cookie?
A: Pass the :domain on delete as well. Here's the source of that method:
# Removes the cookie on the client machine by setting the value to an empty string
# and setting its expiration date into the past. Like []=, you can pass in an options
# hash to delete cookies with extra data such as a +path+.
def delete(name, options = {})
options.stringify_keys!
set_cookie(options.merge("name" => name.to_s, "value" => "", "expires" => Time.at(0)))
end
As you can see, it just sets an empty cookie with the name you gave, set to expire in 1969, and with no contents. But it does merge in any other options you give, so you can do:
cookies.delete :foo, :domain => '.acme.com'
And you're set.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Separating CSS deployment from rest of site Where I work, the design and development departments are totally separated, however we (the design department) are responsible for managing the CSS for our sites. Typically, new CSS needs to be released to the production server far more often than new website code. Because of this, we are deploying the CSS separately, and it lives outside source control.
However, lately, we've run into a few problems with new CSS not being synched up for site releases, and in general the process is a huge headache. I've been pushing to get the CSS under some kind of source control, but having trouble finding a good deployment method that makes everyone happy. Our biggest problem is managing changes that affect current portions of the site, where the CSS changes need to go live before the site changes, but not break anything on the existing site.
I won't go into the finer details of the weird culture between designers and devs here, but I was wondering what experience others have had in managing large amounts of CSS (50+ files, thousands and thousands of lines) that needs to be constantly updated and released independent of site releases.
A: I'll advocate the use of source control here. Especially if the development team uses branching to deal with structured releases. That way, whatever CSS is checked into the production branch is what should be deployed ... and if it is updated mid-stream, it's the responsibility of the person (designer?) that updates it to promote that code using whatever system your company uses to promote changes to production.
A: The fancy name is "Content Delivery Network" (Wikipedia).
We store our CSS files in a database, and then have a separate website that does nothing but serve CSS resources. We implemented this in May 2007 for 1000+ websites in 30+ countries. It has worked flawlessly for the last 15 months.
Static images and even JavaScript files are handled the same way.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Console.WriteLine and generic List I frequently find myself writing code like this:
List<int> list = new List<int> { 1, 3, 5 };
foreach (int i in list) {
Console.Write("{0}\t", i.ToString()); }
Console.WriteLine();
Better would be something like this:
List<int> list = new List<int> { 1, 3, 5 };
Console.WriteLine("{0}\t", list);
I suspect there's some clever way of doing this, but I don't see it. Does anybody have a better solution than the first block?
A: new List<int> { 1, 3, 5 }.ForEach(Console.WriteLine);
A: A different approach, just for kicks:
Console.WriteLine(string.Join("\t", list));
A: If there is a piece of code that you repeat all the time according to Don't Repeat Yourself you should put it in your own library and call that. With that in mind there are 2 aspects to getting the right answer here. The first is clarity and brevity in the code that calls the library function. The second is the performance implications of foreach.
First let's think about the clarity and brevity in the calling code.
You can do foreach in a number of ways:
*
*for loop
*foreach loop
*Collection.ForEach
Out of all the ways to do a foreach List.ForEach with a lamba is the clearest and briefest.
list.ForEach(i => Console.Write("{0}\t", i));
So at this stage it may look like the List.ForEach is the way to go. However what's the performance of this? It's true that in this case the time to write to the console will govern the performance of the code. When we know something about performance of a particular language feature we should certainly at least consider it.
According to Duston Campbell's performance measurements of foreach the fastest way of iterating the list under optimised code is using a for loop without a call to List.Count.
The for loop however is a verbose construct. It's also seen as a very iterative way of doing things which doesn't match with the current trend towards functional idioms.
So can we get brevity, clarity and performance? We can by using an extension method. In an ideal world we would create an extension method on Console that takes a list and writes it with a delimiter. We can't do this because Console is a static class and extension methods only work on instances of classes. Instead we need to put the extension method on the list itself (as per David B's suggestion):
public static void WriteLine(this List<int> theList)
{
foreach (int i in theList)
{
Console.Write("{0}\t", t.ToString());
}
Console.WriteLine();
}
This code is going to used in many places so we should carry out the following improvements:
*
*Instead of using foreach we should use the fastest way of iterating the collection which is a for loop with a cached count.
*Currently only List can be passed as an argument. As a library function we can generalise it through a small amount of effort.
*Using List limits us to just Lists, Using IList allows this code to work with Arrays too.
*Since the extension method will be on an IList we need to change the name to make it clearer what we are writing to:
Here's how the code for the function would look:
public static void WriteToConsole<T>(this IList<T> collection)
{
int count = collection.Count();
for(int i = 0; i < count; ++i)
{
Console.Write("{0}\t", collection[i].ToString(), delimiter);
}
Console.WriteLine();
}
We can improve this even further by allowing the client to pass in the delimiter. We could then provide a second function that writes to console with the standard delimiter like this:
public static void WriteToConsole<T>(this IList<T> collection)
{
WriteToConsole<T>(collection, "\t");
}
public static void WriteToConsole<T>(this IList<T> collection, string delimiter)
{
int count = collection.Count();
for(int i = 0; i < count; ++i)
{
Console.Write("{0}{1}", collection[i].ToString(), delimiter);
}
Console.WriteLine();
}
So now, given that we want a brief, clear, performant way of writing lists to the console, we have one. Here is the entire source code, including a demonstration of using the library function:
using System;
using System.Collections.Generic;
using System.Linq;
namespace ConsoleWritelineTest
{
public static class Extensions
{
public static void WriteToConsole<T>(this IList<T> collection)
{
WriteToConsole<T>(collection, "\t");
}
public static void WriteToConsole<T>(this IList<T> collection, string delimiter)
{
int count = collection.Count();
for(int i = 0; i < count; ++i)
{
Console.Write("{0}{1}", collection[i].ToString(), delimiter);
}
Console.WriteLine();
}
}
internal class Foo
{
override public string ToString()
{
return "FooClass";
}
}
internal class Program
{
static void Main(string[] args)
{
var myIntList = new List<int> {1, 2, 3, 4, 5};
var myDoubleList = new List<double> {1.1, 2.2, 3.3, 4.4};
var myDoubleArray = new Double[] {12.3, 12.4, 12.5, 12.6};
var myFooList = new List<Foo> {new Foo(), new Foo(), new Foo()};
// Using the standard delimiter /t
myIntList.WriteToConsole();
myDoubleList.WriteToConsole();
myDoubleArray.WriteToConsole();
myFooList.WriteToConsole();
// Using our own delimiter ~
myIntList.WriteToConsole("~");
Console.Read();
}
}
}
=======================================================
You might think that this should be the end of the answer. However there is a further piece of generalisation that can be done. It's not clear from fatcat's question if he is always writing to the console. Perhaps something else is to be done in the foreach. In that case Jason Bunting's answer is going to give that generality. Here is his answer again:
list.ForEach(i => Console.Write("{0}\t", i));
That is unless we make one more refinement to our extension methods and add FastForEach as below:
public static void FastForEach<T>(this IList<T> collection, Action<T> actionToPerform)
{
int count = collection.Count();
for (int i = 0; i < count; ++i)
{
actionToPerform(collection[i]);
}
Console.WriteLine();
}
This allows us to execute any arbitrary code against every element in the collection using the fastest possible iteration method.
We can even change the WriteToConsole function to use FastForEach
public static void WriteToConsole<T>(this IList<T> collection, string delimiter)
{
collection.FastForEach(item => Console.Write("{0}{1}", item.ToString(), delimiter));
}
So now the entire source code, including an example usage of FastForEach is:
using System;
using System.Collections.Generic;
using System.Linq;
namespace ConsoleWritelineTest
{
public static class Extensions
{
public static void WriteToConsole<T>(this IList<T> collection)
{
WriteToConsole<T>(collection, "\t");
}
public static void WriteToConsole<T>(this IList<T> collection, string delimiter)
{
collection.FastForEach(item => Console.Write("{0}{1}", item.ToString(), delimiter));
}
public static void FastForEach<T>(this IList<T> collection, Action<T> actionToPerform)
{
int count = collection.Count();
for (int i = 0; i < count; ++i)
{
actionToPerform(collection[i]);
}
Console.WriteLine();
}
}
internal class Foo
{
override public string ToString()
{
return "FooClass";
}
}
internal class Program
{
static void Main(string[] args)
{
var myIntList = new List<int> {1, 2, 3, 4, 5};
var myDoubleList = new List<double> {1.1, 2.2, 3.3, 4.4};
var myDoubleArray = new Double[] {12.3, 12.4, 12.5, 12.6};
var myFooList = new List<Foo> {new Foo(), new Foo(), new Foo()};
// Using the standard delimiter /t
myIntList.WriteToConsole();
myDoubleList.WriteToConsole();
myDoubleArray.WriteToConsole();
myFooList.WriteToConsole();
// Using our own delimiter ~
myIntList.WriteToConsole("~");
// What if we want to write them to separate lines?
myIntList.FastForEach(item => Console.WriteLine(item.ToString()));
Console.Read();
}
}
}
A: List<int> list = new List<int> { 1, 3, 5 };
list.ForEach(x => Console.WriteLine(x));
Edit: Dammit! took too long to open visual studio to test it.
A: List<int> a = new List<int>() { 1, 2, 3, 4, 5 };
a.ForEach(p => Console.WriteLine(p));
edit: ahhh he beat me to it.
A: list.ForEach(x=>Console.WriteLine(x));
A: Do this:
list.ForEach(i => Console.Write("{0}\t", i));
EDIT: To others that have responded - he wants them all on the same line, with tabs between them. :)
A: Also you can do join:
var qwe = new List<int> {5, 2, 3, 8};
Console.WriteLine(string.Join("\t", qwe));
A: public static void WriteLine(this List<int> theList)
{
foreach (int i in theList)
{
Console.Write("{0}\t", t.ToString());
}
Console.WriteLine();
}
Then, later...
list.WriteLine();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54"
} |
Q: Open Source Actionscript 3 or Javascript date utility classes? I was wondering if anyone could point to an Open Source date utility class that is fairly robust. I find myself rolling my own when I want to do a lot of things I take for granted in C# and Java. For instance I did find a decent example of a DateDiff() function that I tore apart and another DatePart() function. Another examples would be parsing different date/time formats. I'm trying to avoid reinventing something if it's already built.
Another possibility may be a nice set of Javascript files that I can convert to ActionScript 3. So far I've found DateJS but I want to get a good idea of what is out there.
A: as3corelib has the DateUtil class and it should be pretty reliable since it's written by some Adobe employees. I haven't encountered any problems with it.
A: There is also DP_DateExtensions, though I believe DateJS is more robust.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I modify a Work Item type to include additional information in TFS? TFS2008. I'd like to track task points on a Task work item, but there isn't anywhere (other than the description) to record this. I'd like to add a dropdown with 0, 1, 2, 3, 5, 8, etc, so these task points can be exported in reports.
A: Use the process template editor, available as part of the Visual Studio Team System 2008 Team Foundation Server Power Tools.
A: I created a web cast awhile ago that demonstrates this tool. it covers a couple of really basic scenarios. It can be accessed here.
Ta.
Steve Porter
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to make git ignore changes in case? I'm not too sure what is going on here, but sometimes a particular file in my repository will change the case of its name. e.g.,:
before: File.h
after: file.h
I don't really care why this is happening, but this causes git to think it is a new file, and then I have to go and change the file name back. Can you just make git ignore case changes?
[edit]
I suspect it is Visual Studio doing something weird with that particular file, because it seems to happen most often when I open and save it after changes. I don't have any way to fix bugs in VS however, but git should be a bit more capable I hope.
A: The situation described in the question is now re-occuring with Mac OS X, git version >= 1.7.4 (I think). The cure is to set your ignorecase=false and rename the lowercased files (that git changed that way, not Visual Studio) back to their UsualCase by hand (i.e. 'mv myname MyName').
More info here.
A: To force git to recognize the change of casing to a file, you can run this command.
*
*Change the File casing however you like
*git mv -f mynewapp.sln MyNewApp.sln
The previous command seems to be deprecated now.
A: Since version 1.5.6 there is an ignorecase option available in the [core] section of .git/config
e.g. add ignorecase = true
To change it for just one repo, from that folder run:
git config core.ignorecase true
To change it globally:
git config --global core.ignorecase true
A: You can force git to rename the file in a case-only way with this command:
git mv --cached name.txt NAME.TXT
Note this doesn't change the case of the file in your checked out copy on a Windows partition, but git records the casing change and you can commit that change. Future checkouts will use the new casing.
A: In git version 1.6.1.9 for Windows I found that "ignorecase=true" in config was already set by default.
A: *
*From the console: git config core.ignorecase true
*Change file name capitalisation
*Commit
*From the console: git config core.ignorecase false
Step 4 fixed problems checking out branches with a different capitalisation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "140"
} |
Q: Detecting if an IDataReader contains a certain field before iteration So I'm using an IDataReader to hydrate some business objects, but I don't know at runtime exactly what fields will be in the reader. Any fields that aren't in the reader would be left null on the resulting object. How do you test if a reader contains a specific field without just wrapping it in a try/catch?
A: This should do the trick:
Public Shared Function ReaderContainsColumn(ByVal reader As IDataReader, ByVal name As String) As Boolean
For i As Integer = 0 To reader.FieldCount - 1
If reader.GetName(i).Equals(name, StringComparison.CurrentCultureIgnoreCase) Then Return True
Next
Return False
End Function
or (in C#)
public static bool ReaderContainsColumn(IDataReader reader, string name)
{
for (int i = 0; i < reader.FieldCount; i++) {
if (reader.GetName(i).Equals(name, StringComparison.CurrentCultureIgnoreCase)) return true;
}
return false;
}
:o)
A: You can also use IDataReader.GetSchemaTable to get a list of all the columns in the reader.
http://support.microsoft.com/kb/310107
A: Enumerable.Range(0, reader.FieldCount).Any(i => reader.GetName(i) == "ColumnName")
A: The best solution I've used is doing it like this:
DataTable dataTable = new DataTable();
dataTable.Load(reader);
foreach (DataRow item in dataTable.Rows) // DataRowCollection is non-generic, so type the row explicitly
{
bool columnExists = item.Table.Columns.Contains("ColumnName");
}
Trying to access it through reader["ColumnName"] and checking for null or DBNull will throw an exception.
A: You can't just test reader["field"] for null or DBNull because a IndexOutOfRangeException is thrown if the column isn't in the reader.
The code I use in my mapping layer for creating domain objects and the stored procedures that use the mapping layer might have different column names is below; you could modify it to not throw an exception if the column isn't found and return default(t) or null.
I understand this isn't the most elegant or optimal solution (and really, if you can avoid it then you should), however, legacy stored procedures or Sql queries might warrant a work-around.
/// <summary>
/// Grabs the value from a specific datareader for a list of column names.
/// </summary>
/// <typeparam name="T">Type of the value.</typeparam>
/// <param name="reader">Reader to grab data off of.</param>
/// <param name="columnNames">Column names that should be interrogated.</param>
/// <returns>Value from the first correct column name or an exception if none of the columns exist.</returns>
public static T GetColumnValue<T>(IDataReader reader, params string[] columnNames)
{
bool foundValue = false;
T value = default(T);
IndexOutOfRangeException lastException = null;
foreach (string columnName in columnNames)
{
try
{
int ordinal = reader.GetOrdinal(columnName);
value = (T)reader.GetValue(ordinal);
foundValue = true;
}
catch (IndexOutOfRangeException ex)
{
lastException = ex;
}
}
if (!foundValue)
{
string message = string.Format("Column(s) {0} could not be not found.",
string.Join(", ", columnNames));
throw new IndexOutOfRangeException(message, lastException);
}
return value;
}
A: While I disagree with this approach (I think when accessing data, you should know the shape before hand), I understand that there are exceptions.
You could always load up a datatable with the reader and then iterate through it. You can then check to see if the column exists. This will be less performant, but you won't need try/catch blocks (so maybe it is more performant for your needs).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How do I get javadoc to link to the Java API using an Ant task? Right now my ant task looks like.
<javadoc sourcepath="${source}" destdir="${doc}">
<link href="http://java.sun.com/j2se/1.5.0/docs/api/" />
</javadoc>
And I'm getting this warning:
javadoc: warning - Error fetching URL: http://java.sun.com/j2se/1.5.0/docs/api/package-list
How do I get the javadoc to properly link to the API? I am behind a proxy.
A: You probably need the http.proxyHost and http.proxyPort system properties set. For example, ANT_OPTS="-Dhttp.proxyHost=proxy.y.com" ant doc
Alternatively, you could set the "offline" flag and provide a package list, but that could be a pain for the Java core.
A: You can also pass the arguments inside the ant task
<arg value="-J-Dhttp.proxyHost=your.proxy.here"/>
<arg value="-J-Dhttp.proxyPort=##"/>
If going the offline link route, download the package list by going to the URL of the Java API (http://java.sun.com/j2se/1.5.0/docs/api/package-list), saving it as a text file, and then using this Ant task.
<javadoc sourcepath="${source}" destdir="${doc}">
<link offline="true" href="http://java.sun.com/j2se/1.5.0/docs/api/" packagelistloc="path-containing-package-list"/>
</javadoc>
A: You can also use the "offline" mode that allows you to build (faster!) without accessing the internet. Please see this answer: https://stackoverflow.com/a/24089805/366749
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: SQL Server Random Sort What is the best way to sort the results of a sql query into a random order within a stored procedure?
A: This is a duplicate of SO# 19412. Here's the answer I gave there:
select top 1 * from mytable order by newid()
In SQL Server 2005 and up, you can use TABLESAMPLE to get a random sample that's repeatable:
SELECT FirstName, LastName FROM Contact TABLESAMPLE (1 ROWS) ;
A: Or use the following query, which returns a better random sample result:
SELECT * FROM a_table WHERE 0.01 >= CAST(CHECKSUM(NEWID(), a_column) & 0x7fffffff AS float) / CAST (0x7fffffff AS int)
0.01 means ~1 percent of total rows.
Quote from SQL 2008 Books Online:
If you really want a random sample of
individual rows, modify your query to
filter out rows randomly, instead of
using TABLESAMPLE.
A: You can't just ORDER BY RAND(), as you know, because it will only generate one value. So use a key for a seed value.
SELECT RAND(object_id), object_id, name
FROM sys.objects
ORDER BY 1
A: select foo from Bar order by newid()
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "57"
} |
Q: Is business logic subjective? I have a team lead who seems to think that business logic is very subjective, to the point that if my stored procedure has a WHERE ID = @ID — he would call this “business logic”
What approach should I take to define “business logic” in a very objective way without offending my team lead?
A: I really think you just need to agree on a clear definition of what you mean when you say "business logic". If you need to be "politically sensitive", you could even craft the definition around your team lead's understanding, then come up with another term ("domain rules"?) that defines what you want to talk about.
Words and terms are relatively subjective -- of course, once you leave that company you will need to 're-learn' industry standards, so it's always better to stick with them if you can, but the main goal is to communicate clearly and get work done.
A: One way to differentiate is that "business logic" is something the customer would care about and that could be explained to a customer without referring to computer-specific words.
A: You could try to argue your point with a timed example, run a sql select against an indexed table and then run a loop to find exactly the same item in the same set but this time in code. The code will be much slower.
Let the database do what it was designed to do, select sets and subsets of data :) I think realistically though, all you can do is get your team together to build a set of standards which you will all code to, democracy rules!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Cannot delete from the database...? So, I have 2 database instances, one is for development in general, another was copied from development for unit tests.
Something changed in the development database that I can't figure out, and I don't know how to see what is different.
When I try to delete from a particular table, with for example:
delete from myschema.mytable where id = 555
I get the following normal response from the unit test DB indicating no row was deleted:
SQL0100W No row was found for FETCH, UPDATE or DELETE; or the result of a query is an empty table. SQLSTATE=02000
However, the development database fails to delete at all with the following error:
DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL0440N No authorized routine named "=" of type "FUNCTION" having compatible arguments was found. SQLSTATE=42884
My best guess is there is some trigger or view that was added or changed that is causing the problem, but I have no idea how to go about finding the problem... has anyone had this problem or know how to figure out what the root of the problem is?
(note that this is a DB2 database)
A: Hmm, applying the great oracle to this question, I came up with:
http://bytes.com/forum/thread830774.html
It seems to suggest that another table has a foreign key pointing at the problematic one, when that FK on the other table is dropped, the delete should work again. (Presumably you can re-create the foreign key as well)
Does that help any?
A: You might have an open transaction on the dev db...that gets me sometimes on SQL Server
A: Is the type of id compatible with 555? Or has it been changed to a non-integer type?
Alternatively, does the 555 argument somehow go missing (e.g. if you are using JDBC and the prepared statement did not get its arguments set before executing the query)?
A: Can you add more to your question? That error sounds like the sql statement parser is very confused about your statement. Can you do a select on that table for the row where id = 555 ?
You could try running a RUNSTATS and REORG TABLE on that table, those are supposed to sort out wonky tables.
A: @castaway
A select with the same "where" condition works just fine, just not delete. Neither runstats nor reorg table have any affect on the problem.
A: @castaway
We actually just solved the problem, and indeed it is just what you said (a coworker found that exact same page too).
The solution was to drop foreign key constraints and re-add them.
Another post on the subject:
http://www.ibm.com/developerworks/forums/thread.jspa?threadID=208277&tstart=-1
Which indicates that the problem is a referential constraint corruption, and is actually, or supposedly anyways, fixed in a later version of db2 V9 (which we are not yet using).
Thanks for the help!
A: Please check:
1. the arguments of your triggers, procedures, functions, etc.
2. the data types of those arguments.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I generate Emma code coverage reports using Ant? How do I setup an Ant task to generate Emma code coverage reports?
A: The User Guide has a good example of how to set up your build script so that you not only separate the instrumented code from the execution, but it's also all contained in the same <target> so that you don't have to run a series of different targets; instead you can just do something like ant emma tests (if ant tests was how you normally ran your unit tests, for example).
Here's their example:
<target name="emma" description="turns on EMMA instrumentation/reporting" >
<property name="emma.enabled" value="true" />
<!-- EMMA instr class output directory: -->
<property name="out.instr.dir" value="${basedir}/outinstr" />
<mkdir dir="${out.instr.dir}" />
</target>
<target name="run" depends="init, compile" description="runs the examples" >
<emma enabled="${emma.enabled}" >
<instr instrpathref="run.classpath"
destdir="${out.instr.dir}"
metadatafile="${coverage.dir}/metadata.emma"
merge="true"
/>
</emma>
<!-- note from matt b: you could just as easily have a <junit> task here! -->
<java classname="Main" fork="true" >
<classpath>
<pathelement location="${out.instr.dir}" />
<path refid="run.classpath" />
<path refid="emma.lib" />
</classpath>
<jvmarg value="-Demma.coverage.out.file=${coverage.dir}/coverage.emma" />
<jvmarg value="-Demma.coverage.out.merge=true" />
</java>
<emma enabled="${emma.enabled}" >
<report sourcepath="${src.dir}" >
<fileset dir="${coverage.dir}" >
<include name="*.emma" />
</fileset>
<txt outfile="${coverage.dir}/coverage.txt" />
<html outfile="${coverage.dir}/coverage.html" />
</report>
</emma>
</target>
A: To answer questions about where the source and instrumented directories are (these can be switched to whatever your standard directory structure is):
<property file="build.properties" />
<property name="source" location="src/main/java" />
<property name="test.source" location="src/test/java" />
<property name="target.dir" location="target" />
<property name="target" location="${target.dir}/classes" />
<property name="test.target" location="${target.dir}/test-classes" />
<property name="instr.target" location="${target.dir}/instr-classes" />
Classpaths:
<path id="compile.classpath">
<fileset dir="lib/main">
<include name="*.jar" />
</fileset>
</path>
<path id="test.compile.classpath">
<path refid="compile.classpath" />
<pathelement location="lib/test/junit-4.6.jar" />
<pathelement location="${target}" />
</path>
<path id="junit.classpath">
<path refid="test.compile.classpath" />
<pathelement location="${test.target}" />
</path>
First you need to setup where Ant can find the Emma libraries:
<path id="emma.lib" >
<pathelement location="${emma.dir}/emma.jar" />
<pathelement location="${emma.dir}/emma_ant.jar" />
</path>
Then import the task:
<taskdef resource="emma_ant.properties" classpathref="emma.lib" />
Then instrument the code:
<target name="coverage.instrumentation">
<mkdir dir="${instr.target}"/>
<mkdir dir="${coverage}"/>
<emma>
<instr instrpath="${target}" destdir="${instr.target}" metadatafile="${coverage}/metadata.emma" mode="copy">
<filter excludes="*Test*"/>
</instr>
</emma>
<!-- Update the classpath that will run the instrumented code -->
<path id="test.classpath">
<pathelement location="${instr.target}"/>
<path refid="junit.classpath"/>
<pathelement location="${emma.dir}/emma.jar"/>
</path>
</target>
Then run a target with the proper VM arguments like:
<jvmarg value="-Demma.coverage.out.file=${coverage}/coverage.emma" />
<jvmarg value="-Demma.coverage.out.merge=true" />
Finally generate your report:
<target name="coverage.report" depends="coverage.instrumentation">
<emma>
<report sourcepath="${source}" depth="method">
<fileset dir="${coverage}" >
<include name="*.emma" />
</fileset>
<html outfile="${coverage}/coverage.html" />
</report>
</emma>
</target>
A: Emma 2.1 introduces another way of obtaining runtime coverage information (the .ec file). One can remotely request the data from a given port of the computer where an instrumented application is running. So there's no need to stop the VM.
To get the file with runtime coverage data you need to insert the following snippet in your Ant script between running of your tests and generating coverage report:
<emma>
<ctl connect="${emma.rt.host}:${emma.rt.port}" >
<command name="coverage.get" args="${emma.ec.file}" />
<command name="coverage.reset" />
</ctl>
</emma>
Other steps are similar to Emma 2.0. They are perfectly described in the previous post.
More information on Emma 2.1 features: http://sourceforge.net/project/shownotes.php?group_id=108932&release_id=336859
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Using generic classes with ObjectDataSource I have a generic Repository<T> class I want to use with an ObjectDataSource. Repository<T> lives in a separate project called DataAccess. According to this post from the MS newsgroups (relevant part copied below):
Internally, the ObjectDataSource is calling Type.GetType(string) to get the
type, so we need to follow the guideline documented in Type.GetType on how
to get type using generics. You can refer to MSDN Library on Type.GetType:
http://msdn2.microsoft.com/en-us/library/w3f99sx1.aspx
From the document, you will learn that you need to use backtick (`) to
denotes the type name which is using generics.
Also, here we must specify the assembly name in the type name string.
So, for your question, the answer is to use type name like follows:
TypeName="TestObjectDataSourceAssembly.MyDataHandler`1[System.String],TestObjectDataSourceAssembly"
Okay, makes sense. When I try it, however, the page throws an exception:
<asp:ObjectDataSource ID="MyDataSource" TypeName="MyProject.Repository`1[MyProject.MessageCategory],DataAccess" />
[InvalidOperationException: The type specified in the TypeName property of ObjectDataSource 'MyDataSource' could not be found.]
The curious thing is that this only happens when I'm viewing the page. When I open the "Configure Data Source" dialog from the VS2008 designer, it properly shows me the methods on my generic Repository class. Passing the TypeName string to Type.GetType() while debugging also returns a valid type. So what gives?
A: Do something like this.
Type type = typeof(Repository<MessageCategory>);
string assemblyQualifiedName = type.AssemblyQualifiedName;
Get the value of assemblyQualifiedName and paste it into the TypeName field. Note that for Type.GetType(string), the value passed in must be
The assembly-qualified name of the type to get. See AssemblyQualifiedName. If the type is in the currently executing assembly or in Mscorlib.dll, it is sufficient to supply the type name qualified by its namespace.
So, it may work by passing in that string in your code, because that class is in the currently executing assembly (where you are calling it), whereas the ObjectDataSource is not.
Most likely the type you are looking for is
MyProject.Repository`1[MyProject.MessageCategory, DataAccess, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], DataAccess, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
A: I know this is an old post but I have recently had this problem myself. Another solution would be to replace the inheritance with object composition, e.g.
[DataObject]
public class DataAccessObject {
private Repository<MessageCategory> _repository;
// ctor omitted for clarity
// ...
[DataObjectMethod(DataObjectMethodType.Select)]
public MessageCategory Get(int key) {
return _repository.Get(key);
}
}
This way the ObjectDataSource doesn't know about the repository because it's hidden within the class. In the project I am working on, I have a class library in my facade layer that is a perfectly reasonable place to put this code.
In addition, if you are using Resharper and interfaces, it's possible to get Resharper to do the refactoring using Resharper's "Implement using field" function.
A: Darren,
Many, many thanks for your post. I've been fighting with this all day. Strangely, in my case, I need to double the square brackets, e.g. for your piece of code:
MyProject.Repository`1[[MyProject.MessageCategory, DataAccess, Version=1.0.0.0, Culture=neutral, PublicKey=null]], DataAccess, Version=1.0.0.0, Culture=neutral, PublicKey=null
Roger
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How do I create tri-state checkboxes with a TreeView control in .NET? I have a treeview control in a Windows Forms project that has checkboxes turned on. Because the treeview control has nested nodes, I need the checkboxes to be able to have some sort of tri-mode selection. I can't find a way to do this (I can only have the checkboxes fully checked or unchecked).
A: If you are talking about Windows Forms, this article should help you build you tri-state TreeView:
http://www.codeproject.com/KB/tree/treeviewex2003.aspx?display=Print
If you need tri-state checkboxes on a treeview on asp.net i think you need to use a third-party component. Take a look a this one, and click "tri-state checkboxes" on the left side:
http://www.aspnetexpert.com/demos/tree/default.aspx
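If a third-party control is not an option in Windows Forms, one do-it-yourself approach (just a sketch, not taken from the articles above) is to turn CheckBoxes off and simulate the three states with the TreeView's StateImageList, cycling StateImageIndex on click:
// stateImages is assumed to be an ImageList holding three
// checkbox glyphs: 0 = unchecked, 1 = checked, 2 = indeterminate
treeView1.CheckBoxes = false;
treeView1.StateImageList = stateImages;
treeView1.NodeMouseClick += delegate(object s, TreeNodeMouseClickEventArgs e)
{
    e.Node.StateImageIndex = (e.Node.StateImageIndex + 1) % 3;
    // propagate the new state down to children and recompute parents here
};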
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Algorithm to decide if digital audio data is clipping? Is there an algorithm or some heuristic to decide whether digital audio data is clipping?
A: If you ever receive values at the maximum or minimum, then you are, by definition, clipping. Those values represent their particular value as well as all values beyond, and so they are best used as outside bounds detectors.
-Adam
A: The simple answer is that if any sample has the maximum or minimum value (-32768 and +32767 respectively for 16 bit samples), you can consider it clipping. This isn't strictly true, since that value may actually be the correct value, but there is no way to tell whether +32767 really should have been +33000.
For a more complicated answer: There is such a thing as sample counting clipping detectors that require x consecutive samples to be at the max/min value for them to be considered clipping (where x may be as high as 7). The theory here is that clipping in just a few samples is not audible.
That said, there is audio equipment that clips quite audibly even at values below the maximum (and above the minimum). Typical advice is to master music to peak at -0.3 dB instead of 0.0 dB for this reason. You might want to consider any sample above that level to be clipping. It all depends on what you need it for.
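A minimal C# sketch of such a counting detector for 16-bit samples (the run-length parameter is an arbitrary assumption):
// Flags clipping only when minRun consecutive samples sit at full scale.
static bool IsClipping(short[] samples, int minRun)
{
    int run = 0;
    foreach (short s in samples)
    {
        run = (s == short.MaxValue || s == short.MinValue) ? run + 1 : 0;
        if (run >= minRun)
            return true;
    }
    return false;
}
Calling it with minRun = 1 gives the simple max/min test; higher values implement the "x consecutive samples" variant described above.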
A: What Adam said. You could also add some logic to detect maximum amplitude values over a period of time and only flag those, but the essence is to determine if/when the signal hits the maximum amplitude.
A: For digital audio data, the term "clipping" doesn't really carry a lot of meaning other than "max amplitude". In the analog world, audio data comes from some hardware which usually contains a "clipping register", which allows you the possibility of a maximum amplitude that isn't clipped.
What might be better suited to digital audio is to set some threshold based on the limitations of your output D/A. If you're doing VOIP, then choose some threshold typical of handsets or cell phones, and call it "clipping" if your digital audio gets above that. If you're outputting to high-end home theater systems, then you probably won't have any "clipping".
A: I just noticed that there even are some nice implementations.
For example in Audacity:
Analyze → Find Clipping…
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What kind of technologies are available for sending text messages? I'm looking into sending regular automated text-messages to a list of subscribed users. Having played with Windows Mobile devices, I could easily implement this using the compact .Net framework + a device hooked up to usb and send the messages through this. I would like to explore other solutions like having a server or something similar to do this. I just have no idea what is involved in such a system.
A: You can usually get an account with an sms service provider and send messages using an API (SOAP, resful http, smpp ....)
A google search for sms service provider yeilds many results with varying costs.
Here is an informative article How to Choose an SMS Service Provider
A: I use AQL who provide gateways to send SMS messages, voice push messages, inbound SMS -> HTTP POST gateways and other stuff.
For Perl there's my SMS::AQL module to interface with them; whipping up something in C# should be pretty easy.
A: It really all depends on how many text messages you intend to send and how critical it is that the message arrives on time (and, actually arrives).
SMS Aggregators
For larger volume and good reliability, you will want to go with an SMS aggregator. These aggregators have web service API's (or SMPP) that you can use to send your message and find out whether your message was delivered over time. Some examples of aggregators with whom I have experience are Air2Web, mBlox, etc.
The nice thing about working with an aggregator is that they can guide you through what it takes to send effective messages. For example, if you want your own, distinct, shortcode they can navigate the process with the carriers to secure that shortcode.
They can also make sure that you are in compliance with any rules regarding using SMS. Carriers will flat shut you off if you don't respect the use of SMS and only use SMS within the bounds of what you agreed to when you started to use the aggregator. If you overstep your bounds, they have the aggregator relationships to prevent any service interruptions.
You'll pay per message and may have a baseline service fee. All if this is determined by your volume.
SMTP to SMS
If you want an unreliable, low-rent solution to a low number of known addresses, you can use an SMTP to SMS solution. In this case you simply find out the mobile provider for the recipient and use their mobile provider's e-mail scheme to send the message. An example of this is [email protected].
In this scenario, you send the message and it is gone and you hope that it gets there. You really don't know if it is making it. Also, some providers limit how messages come in via their SMTP to SMS gateway to limit SMS spam.
But, that scenario is the very easiest to use from virtually any programming language. There are a million C# examples of how to send e-mail and this way would be no different.
This is the most cost-effective solution (i.e. free) until you get a large volume of messages. When you start doing too much of this, the carriers might step in when they find that you are sending a ton of messages through their SMTP to SMS gateway.
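For illustration, a bare-bones C# sketch of the SMTP-to-SMS approach (the gateway domain, addresses, and SMTP server are assumptions; look up the recipient's actual carrier gateway):
using System.Net.Mail;

// number@carrier-gateway is the SMTP-to-SMS convention described above
MailMessage message = new MailMessage(
    "alerts@example.com",
    "3035551212@messaging.example-carrier.com",
    "",                               // subject is often ignored by gateways
    "Your appointment is tomorrow at 9am.");
new SmtpClient("smtp.example.com").Send(message);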
Effective Texting
In many cases you have to make sure that recipients have properly opted-in to your service. This is only a big deal if your texts are going to a really large population.
You'll want to remember that text messages are short (keep it to less than 140 to 160 characters). When you program things you'll want to bake that in or you might accidentally send multipart messages.
Don't forget that you will want to make sure that your recipients realize they might have to pay for the incoming text messages. In a world of unlimited text plans this is less and less of a concern.
A: You could always try a third-party gateway service for this. Somebody like clickatell provide a number of services and APIs to make this work in a variety of countries. This isn't an ad! I only used their services for a technology pilot. There are quite a few of these around.
A: Another option is text2land.com, which uses text-to-speech technology to send SMS messages to landline phones.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Best way to implement 1:1 asynchronous callbacks/events in ActionScript 3 / Flex / AIR? I've been utilizing the command pattern in my Flex projects, with asynchronous callback routes required between:
*
*whoever instantiated a given command object and the command object,
*the command object and the "data access" object (i.e. someone who handles the remote procedure calls over the network to the servers) that the command object calls.
Each of these two callback routes has to be able to be a one-to-one relationship. This is due to the fact that I might have several instances of a given command class running the exact same job at the same time but with slightly different parameters, and I don't want their callbacks getting mixed up. Using events, the default way of handling asynchronicity in AS3, is thus pretty much out since they're inherently based on one-to-many relationships.
Currently I have done this using callback function references with specific kinds of signatures, but I was wondering if someone knew of a better (or an alternative) way?
Here's an example to illustrate my current method:
*
*I might have a view object that spawns a DeleteObjectCommand instance due to some user action, passing references to two of its own private member functions (one for success, one for failure: let's say "deleteObjectSuccessHandler()" and "deleteObjectFailureHandler()" in this example) as callback function references to the command class's constructor.
*Then the command object would repeat this pattern with its connection to the "data access" object.
*When the RPC over the network has successfully been completed (or has failed), the appropriate callback functions are called, first by the "data access" object and then the command object, so that finally the view object that instantiated the operation in the first place gets notified by having its deleteObjectSuccessHandler() or deleteObjectFailureHandler() called.
A: I'll try one more idea:
Have your Data Access Object return their own AsyncTokens (or some other objects that encapsulate a pending call), instead of the AsyncToken that comes from the RPC call. So, in the DAO it would look something like this (this is very sketchy code):
public function deleteThing( id : String ) : DeferredResponse {
var deferredResponse : DeferredResponse = new DeferredResponse();
var asyncToken : AsyncToken = theRemoteObject.deleteThing(id);
var result : Function = function( o : Object ) : void {
deferredResponse.notifyResultListeners(o);
}
var fault : Function = function( o : Object ) : void {
deferredResponse.notifyFaultListeners(o);
}
asyncToken.addResponder(new ClosureResponder(result, fault));
    return deferredResponse;
}
The DeferredResponse and ClosureResponder classes don't exist, of course. Instead of inventing your own you could use AsyncToken instead of DeferredResponse, but the public version of AsyncToken doesn't seem to have any way of triggering the responders, so you would probably have to subclass it anyway. ClosureResponder is just an implementation of IResponder that can call a function on success or failure.
Anyway, the way the code above does its business is that it calls an RPC service, creates an object encapsulating the pending call, returns that object, and then when the RPC returns, one of the closures result or fault gets called, and since they still have references to the scope as it was when the RPC call was made, they can trigger the methods on the pending call/deferred response.
In the command it would look something like this:
public function execute( ) : void {
var deferredResponse : DeferredResponse = dao.deleteThing("3");
deferredResponse.addEventListener(ResultEvent.RESULT, onResult);
deferredResponse.addEventListener(FaultEvent.FAULT, onFault);
}
or, you could repeat the pattern, having the execute method return a deferred response of its own that would get triggered when the deferred response that the command gets from the DAO is triggered.
But. I don't think this is particularly pretty. You could probably do something nicer, less complex and less entangled by using one of the many application frameworks that exist to solve more or less exactly this kind of problem. My suggestion would be Mate.
A: Many of the Flex RPC classes, like RemoteObject, HTTPService, etc. return AsyncTokens when you call them. It sounds like this is what you're after. Basically the AsyncToken encapsulates the pending call, making it possible to register callbacks (in the form of IResponder instances) to a specific call.
In the case of HTTPService, when you call send() an AsyncToken is returned, and you can use this object to track the specific call, unlike the ResultEvent.RESULT, which gets triggered regardless of which call it is (and calls can easily come in in a different order than they were sent).
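In practice that looks something like the following sketch (the service and handler names are assumptions):
var token : AsyncToken = myService.send();
token.addResponder(new mx.rpc.Responder(onThisResult, onThisFault));
// onThisResult / onThisFault will only fire for this particular call
Since each send() returns its own token, you get the one-to-one relationship the question asks for.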
A: The AbstractCollection is the best way to deal with Persistent Objects in Flex / AIR. The GenericDAO provides the answer.
A DAO is the object that performs CRUD and other common operations on a ValueObject (known as a POJO in Java).
GenericDAO is a reusable DAO class that can be applied generically to any ValueObject.
Goal:
With IBM's GenericDAO in Java, adding a new DAO requires only these steps:
Add a valueobject (pojo).
Add a hbm.xml mapping file for the valueobject.
Add the 10-line Spring configuration file for the DAO.
Similarly, in the AS3 project SwizDAO, we want to achieve a similar feat.
Client Side GenericDAO model:
Since we are working in a client-side language, we also need to manage a persistent object collection for every ValueObject.
Source:
http://github.com/nsdevaraj/SwizDAO
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to easily edit SQL XML column in SQL Management Studio I have a table with an XML column. This column is storing some values I keep for configuring my application. I created it to have a more flexible schema.
I can't find a way to update this column directly from the table view in SQL Management Studio. Other (INT or Varchar for example) columns are editable. I know I can write an UPDATE statement or create some code to update it. But I'm looking for something more flexible that will let power users edit the XML directly.
Any ideas?
Reiterating again: Please don't answer
I can write an application. I know
that, And that is exactly what I'm
trying to avoid.
A: I wound up writing a .NET C# UI to deal with the XML data. Using XSL for display and an XML schema helped display the XML nicely and maintain its integrity.
Edit: Also, C# contains the XmlDocument class, which simplifies reading/writing the data.
A: @Jacob's answer works very well, though you should add a REPLACE if your XML contains any ' characters:
select 'update [table name] set [xml field name] = ''' +
REPLACE(convert(varchar(max), [xml field name]), '''', '''''') +
''' where [primary key name] = ' +
convert(varchar(max), [primary key name]) from [table name]
A: This is an old question, but I needed to do this today. The best I can come up with is to write a query that generates SQL code that can be edited in the query editor - it's sort of lame but it saves you copy/pasting stuff.
Note: you may need to go into Tools > Options > Query Results > Results to Text and set the maximum number of characters displayed to a large enough number to fit your XML fields.
e.g.
select 'update [table name] set [xml field name] = ''' +
convert(varchar(max), [xml field name]) +
''' where [primary key name] = ' +
convert(varchar(max), [primary key name]) from [table name]
which produces a lot of queries that look like this (with some sample table/field names):
update thetable set thedata = '<root><name>Bob</name></root>' where thekey = 1
You then copy these queries from the results window back up to the query window, edit the xml strings, and then run the queries.
(Edit: changed 10 to max to avoid error)
A: I have a cheap and nasty workaround, but it's OK. So, do a query of the record, i.e.
SELECT XMLData FROM [YourTable]
WHERE ID = @SomeID
Click on the xml data field, which should be 'hyperlinked'. This will open the XML in a new window. Edit it, then copy and paste the XML back into a new query window:
UPDATE [YourTable] SET XMLData = '<row><somefield1>Somedata</somefield1>
</row>'
WHERE ID = @SomeID
But yes, we desperately need to be able to edit. If you are listening, Mr. Soft, please look at Oracle: you can edit XML in their Management Studio equivalent. Let's chalk it up to an oversight; I am still a HUGE fan of SQL Server.
A: SQL Server Management Studio is missing this feature.
I can see Homer Simpson as the Microsoft project manager
banging his head with the palm of his hand:
"Duh!"
Of course, we want to edit xml columns.
A: Ignoring the "easily" part of the question title, here is a giant hack that is fairly decent, provided you deal with small XML columns.
This is a proof of concept without much thought to optimization. Written against 2008 R2.
--Drop any previously existing objects, so we can run this multiple times.
IF EXISTS (SELECT * FROM sysobjects WHERE Name = 'TableToUpdate')
DROP TABLE TableToUpdate
IF EXISTS (SELECT * FROM sysobjects WHERE Name = 'vw_TableToUpdate')
DROP VIEW vw_TableToUpdate
--Create our table with the XML column.
CREATE TABLE TableToUpdate(
Id INT NOT NULL CONSTRAINT Pk_TableToUpdate PRIMARY KEY CLUSTERED IDENTITY(1,1),
XmlData XML NULL
)
GO
--Create our view updatable view.
CREATE VIEW dbo.vw_TableToUpdate
AS
SELECT
Id,
CONVERT(VARCHAR(MAX), XmlData) AS XmlText,
XmlData
FROM dbo.TableToUpdate
GO
--Create our trigger which takes the data keyed into a VARCHAR column and shims it into an XML format.
CREATE TRIGGER TR_TableToView_Update
ON dbo.vw_TableToUpdate
INSTEAD OF UPDATE
AS
SET NOCOUNT ON
DECLARE
@Id INT,
@XmlText VARCHAR(MAX)
DECLARE c CURSOR LOCAL STATIC FOR
SELECT Id, XmlText FROM inserted
OPEN c
FETCH NEXT FROM c INTO @Id, @XmlText
WHILE @@FETCH_STATUS = 0
BEGIN
/*
Slight limitation here. We can't really do any error handling here because errors aren't really "allowed" in triggers.
Ideally I would have liked to do a TRY/CATCH but meh.
*/
UPDATE TableToUpdate
SET
XmlData = CONVERT(XML, @XmlText)
WHERE
Id = @Id
FETCH NEXT FROM c INTO @Id, @XmlText
END
CLOSE c
DEALLOCATE c
GO
--Quick test before we go to SSMS
INSERT INTO TableToUpdate(XmlData) SELECT '<Node1/>'
UPDATE vw_TableToUpdate SET XmlText = '<Node1a/>'
SELECT * FROM TableToUpdate
If you open vw_TableToUpdate in SSMS, you are allowed to change the "XML", which will then update the "real" XML value.
Again, ugly hack, but it works for what I need it to do.
A: I do not think you can use the Management Studio GUI to update XML-columns without writing the UPDATE-command yourself.
One way you could let users update xml-data is to write a simple .net based program (winforms or asp.net) and then select/update the data from there. This way you can also sanitize the data and easily validate it against any given schema before inserting/updating the information.
A: I'm a bit fuzzy in this area, but could you not use the OPENXML method to shred the XML into relational format, then save it back as XML once the user has finished?
Like others have said I think it might be easier to write a small app to do it!
A: Another non-answer answer. You can use LinqPad. (https://www.linqpad.net/). It has the ability to edit SQL rows, including XML fields. You can also query for the rows you want to edit via SQL if you're not into LINQ.
My particular issue was attempting to edit an empty XML value into a NULL value. In SSMS the value showed as blank; however, in LinqPad it showed as null. So in LinqPad I had to change it to an empty string, then back to null, in order for the change to be saved. Now SSMS shows it as null too.
A: I know this is a really old question but I hope this might help someone.
If you do not wish to write an update statement or an application as the question suggests, then I believe the following will help given that you are a power user.
Alter the XML column to varchar and you will be able to modify this column in the SSMS edit table screen. I could not alter the column using the SSMS table designer. The Following script worked.
ALTER TABLE [tablename]
ALTER COLUMN [columnname] varchar(max);
Once you are done with edits, alter the column back to XML.
ALTER TABLE [tablename]
ALTER COLUMN [columnname] XML;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58"
} |
Q: Large Python Includes I have a file that I want to include in Python but the included file is fairly long and it'd be much neater to be able to split them into several files but then I have to use several include statements.
Is there some way to group together several files and include them all at once?
A: *
*Put files in one folder.
*Add __init__.py file to the folder. Do necessary imports in __init__.py
*Replace multiple imports by one:
import folder_name
See Python Package Management
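For example, with a layout like this (the module names are made up), a single import then pulls everything in:
# folder_name/__init__.py
from folder_name.module_one import *
from folder_name.module_two import *

# elsewhere in your code
import folder_name   # everything exported above is now available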
A: Yes, take a look at the "6.4 Packages" section in http://docs.python.org/tut/node8.html:
Basically, you can place a bunch of files into a directory and add an __init__.py file to the directory. If the directory is in your PYTHONPATH or sys.path, you can do "import directoryname" to import everything in the directory or "import directoryname.some_file_in_directory" to import a specific file that is in the directory.
The __init__.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as "string", from unintentionally hiding valid modules that occur later on the module search path. In the simplest case, __init__.py can just be an empty file, but it can also execute initialization code for the package or set the __all__ variable, described later.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: GUIDs in a SLN file Visual Studio Solution files contain two GUIDs per project entry. I figure one of them is from the AssemblyInfo.cs.
Does anyone know for sure where these come from, and what they are used for?
A: According to MSDN:
[The Project] statement contains the
unique project GUID and the project
type GUID. This information is used by
the environment to find the project
file or files belonging to the
solution, and the VSPackage required
for each project. The project GUID is
passed to IVsProjectFactory to load
the specific VSPackage related to the
project, then the project is loaded by
the VSPackage.
A: Neither GUID is the same GUID as from AssemblyInfo.cs (that is the GUID for the assembly itself, not tied to Visual Studio but the end product of the build).
So, for a typical line in the sln file (open the .sln in notepad or editor-of-choice if you wish to see this):
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ConsoleSandbox", "ConsoleSandbox\ConsoleSandbox.csproj", "{55A1FD06-FB00-4F8A-9153-C432357F5CAC}"
The second GUID is a unique GUID for the project itself. The solution file uses this to map other settings to that project:
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Debug|Any CPU.Build.0 = Debug|Any CPU
{55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Release|Any CPU.ActiveCfg = Release|Any CPU
{55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Release|Any CPU.Build.0 = Release|Any CPU
EndGlobalSection
The first GUID is actually the project type GUID; {FAE04EC0-301F-11D3-BF4B-00C04F79EFBC} identifies a C# project, which is why every C# project entry shares it. If you have a solution with more than one project, you'll actually see something like the following:
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ConsoleSandbox", "ConsoleSandbox\ConsoleSandbox.csproj", "{55A1FD06-FB00-4F8A-9153-C432357F5CAC}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Composite", "..\CompositeWPF\Source\CAL\Composite\Composite.csproj", "{77138947-1D13-4E22-AEE0-5D0DD046CA34}"
EndProject
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: linq to sql stack overflow with self-referencing entity I have an entity that contains many of itself. How do I map this in LINQ to SQL such that it does not do an infinite loop of getting e1, then all its children, then their parents (e1), then its children, and repeat? Now I know I can do this if I set the child property to false, however that is the most important property (I don't really care about the parent). Is there a way to have it do lazy loading? Would that even fix it?
A: In the designer, select a property on an entity in the diagram, and in the Code Generation section of the Properties window, set Delay Loaded to True.
See this blog post.
A: As far as I know LINQ to SQL does lazy loading by default. And even if it doesn't, I'm pretty sure the MS guys used some kind of identity map to ensure every record is loaded only once per DataContext.
Did you experience it getting stuck in a loop in practice?
A: This site is not good for my pre-existing biases; it turns out this one was an ill-configured route, not lazy/eager loading.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you get double-underscores to display in markdown? In python, there are some special variables and filenames that are surrounded by double-underscores. For example, there is the
__file__
variable. I am only able to get them to show up correctly inside of a code block. What do I need to enter to get double underscores in regular text without having them interpreted as an emphasis?
A: You can also put a backslash before the final underscore
__file_\_
gives you
__file__
A: Put a backslash before the first and the second underscore, like:
\_\_main.py__
It will show like this:
__main.py__
One backslash alone is not enough, because it will make your text show up in italics.
By the way, considering they are variables and filenames, I suggest enclosing it in backticks(`):
`__main.py__`
It will show like __main.py__.
A: You just need to write it like this in markdown:
\_\_file\_\_
A: __file__
Put a backslash before the first underscore.
Like this:
\__file__
A: You can use the HTML entity &#95; in place of the leading underscores. Example:
&#95;&#95;file__
which displays as __file__
A: `*` The same holds true for the star character or any other markdown syntax. Backticking works well.
A: _\_file__
Entering this will help you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: How to add a dll to gac in vista When I drag & drop a dll to the assembly folder on vista, I get the error "Access is denied: mydll.dll". How can I bypass the error message and add my dll to gac?
A: My guess would be that you have to do it as an administrator...try either disabling UAC, or using gacutil.exe to add your assembly.
A: Use runas command to run gacutil as a user with local admin rights to register the dll to GAC.
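For example, something along these lines from a command prompt (the account and paths are illustrative):
runas /user:MyMachine\Administrator "gacutil.exe /i C:\build\MyAssembly.dll"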
A: You may not find gacutil.exe in your Windows folder on Vista. It's not included because of Vista's "Logo Program" requirements. Try using Windows Installer to add your assemblies into the GAC. This is the recommended way.
And never forget this traditional proverb: "Bi siktir git cay koy" (roughly: "get lost and go put the tea on").
A: You can do that with gacutil.exe. It is located in:
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727
It is only there though, if you installed the .NET SDK (not just the .Net Redistributible).
But you can copy it from your developer machine.
A: The assembly (dll) also needs to be strongly named if it's going to reside in the GAC.
http://msdn.microsoft.com/en-us/library/wd40t7ad(VS.80).aspx
A: Using the command line, follow these steps:
First open the Visual Studio Command Prompt (for Visual Studio 2008 the path is Programs --> Visual Studio 2008 --> Visual Studio Tools --> Visual Studio 2008 Command Prompt). All the files mentioned in the following steps will be created in the Visual Studio 2008 Command Prompt path; in my case it is C:\Program Files\Microsoft Visual Studio 9.0\VC
*
*Generate a KeyFile
sn -k keyPair.snk
*Get the MSIL for the assembly
ildasm SomeAssembly.dll /out:SomeAssembly.il
*Rename the original assembly, just in case
ren SomeAssembly.dll SomeAssembly.dll.orig
*Build a new assembly from the MSIL output and your KeyFile
ilasm SomeAssembly.il /dll /key=keyPair.snk
*Install the DLL in to the GAC
gacutil -i SomeAssembly.dll
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Embed asp page without iframe I want to embed an .asp page on an html page. I cannot use an iframe. I tried:
<object width="100%" height="1500" type="text/html" data="url.asp">
alt : <a href="url.asp">url</a>
</object>"
works great in ff but not ie7. Any ideas? Is it possible to use the object tag to embed .asp pages for IE or does it only work in ff?
A: You might be able to fake it using javascript. You could either use AJAX to load the page, then insert the HTML, or load "url.asp" in a hidden iframe and copy the HTML from there.
One downside (or maybe this is what you want) is that the pages aren't completely independent, so CSS rules from the outer page will affect the embedded page.
A: I've solved it in the past using Javascript and XMLHttp. It can get a bit hacky depending on the circumstances. In particular, you have to watch out for the inner page failing and how it affects/downgrades the outer one (hopefully you can keep it downgrading elegantly).
Search for XMLHttp (or check this great tutorial) and request the "child" page from the outer one, rendering the HTML you need. Preferably you can get just the specific data you need and process it in Javascript.
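A rough sketch of the XMLHttp approach (the element id is an assumption, and the .asp page must live on the same domain):
var xhr = new XMLHttpRequest(); // IE7+ supports this natively
xhr.onreadystatechange = function () {
    if (xhr.readyState == 4 && xhr.status == 200) {
        document.getElementById("aspHost").innerHTML = xhr.responseText;
    }
};
xhr.open("GET", "url.asp", true);
xhr.send(null);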
A: Well, after searching around and testing, I don't think it is possible. It looks to me like IE does not allow the object tag access to a resource that is not on the same domain as the parent. It would have worked for me if the content I was trying to pull in was on the same domain, but it wasn't. If anyone could confirm my interpretation of this it would be appreciated.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Enterprise Library Application Blocks OR Home Grown Framework? We are currently looking to adopt some type of "standard" developer framework and have looked into using the Enterprise Library. Would you recommend using these blocks as the foundation for software development, or should we do something home grown?
A: Like all good answers to architecture and programming questions, the answer is "it depends".
It depends on how unique your data access and object design needs are. It may also depend on how you plan on supporting your application in the long term. Finally, it greatly depends on the skill level of your developers.
There isn't a one-size-fits-all answer to this question, but generally, if your main focus is on cranking out software that provides some business value, pick out an existing framework and run with it. Don't spend your cycles building something that won't immediately drive business profits (i.e. increases revenues and/or decreases costs).
For example, one of my organization's projects is core to the operations of the company, needs to be developed and deployed as soon as possible, and will have a long life. For these reasons, we picked CSLA with some help from Enterprise Library. We could have picked other frameworks, but the important thing is that we picked a framework that seemed like it would fit well with our application and our developer skillset and we ran with it.
It gave us a good headstart and a community from which we can get support. We immediately started with functionality that provided business value and were not banging our heads against the wall trying to build a framework.
We are also in the position where we can hire people in the future who have most likely had exposure to our framework, giving them a really good headstart. This should reduce long-term support costs.
Are there things we don't use and overhead that we may not need? Perhaps. But, I'll trade that all day long for delivering business value in code early and often.
A: It really depends on what you need to do. Generally speaking, the bigger the niche is that your company is in, the better chance that you'll find a framework to properly support you. For smaller niches, you'll more than likely need to roll your own.
The company I work for has several apps, all geared towards estimating the building materials for given buildings. Since this is a pretty specific thing, and we have about 8 apps that are similar, we decided to roll our own and bring in 3rd party libraries when necessary (no sense re-inventing the wheel for some of the stuff).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: In SQL Server, how do I identify *all* dependencies for a specific table using system tables/views? I am writing a DDL script to drop a number of tables but need to identify all dependencies for those tables first. Those dependencies include foreign key constraints, stored procedures, views, etc. Preferably, I want to programmatically script out dropping those dependencies using the system tables/views before dropping the dependent table.
A: This is extremely messy to write from scratch. Have you considered a 3rd party tool like
Red-Gate SQL Dependency Tracker?
A: sp_depends is not reliable; see: Do you depend on sp_depends (no pun intended)
A: You could always search through the syscomments table... that might take a while, though.
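A rough sketch of that search (it matches on raw definition text, so expect false positives on similar names):
SELECT DISTINCT o.name, o.xtype
FROM syscomments c
JOIN sysobjects o ON o.id = c.id
WHERE c.text LIKE '%MyTable%'
Here xtype tells you whether the dependent object is a procedure (P), a view (V), and so on.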
A: Could you reference sysreferences?
select 'if exists (select name from sysobjects where name = '''+c.name+''') '
+' alter table ' + t.name +' drop constraint '+ c.name
from sysreferences sbr, sysobjects c, sysobjects t, sysobjects r
where c.id = constrid
and t.id = tableid
and reftabid = r.id
and r.name = 'my_table'
That will generate a whole lot of conditional drop constraint calls. Should work.
A: You can use the sp_depends stored procedure to do this:
USE AdventureWorks
GO
EXEC sp_depends @objname = N'Sales.Customer' ;
http://msdn.microsoft.com/en-us/library/ms189487(SQL.90).aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Examples for coding against the PayPal API in .NET 2.0+? Can anyone point me to a good introduction to coding against the paypal API?
A: Found this article by Rick Strahl recently http://www.west-wind.com/presentations/PayPalIntegration/PayPalIntegration.asp.
I have not implemented anything from it yet. Rick has quite a few articles around the web on e-commerce in ASP.NET, and he seems to show up every time I'm searching for it.
A: I would suggest you start by downloading the SDK:
https://www.paypal.com/IntegrationCenter/ic_sdk-resource.html
The SDK includes the following:
*
*Client libraries that call PayPal APIs
*API documentation for SDK components
*Sample code for Website Payments Pro and various administrative APIs
*Testing console that can verify connectivity to PayPal and submit API calls
You may also want to take a look at Encore Systems' .NET Class Library for PayPal SOAP API
A: I don't know what your needs are, but you might want to consider Google Checkout. Joe Audette was having considerable difficulty integrating PayPal.
I've used Google Checkout and have had great success. Note that you can go much, MUCH deeper with Google Checkout than the sample linked above.
EDIT: I didn't see Joe's updates. Look like he did eventually get it working.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: HTML meta keyword/description element, useful or not? Does filling out HTML meta description/keyword tags matter for SEO?
A: This article has some info on it.
A quick summary for keywords is:
Google and Microsoft: No
Yahoo and Ask: Yes
Edit: As noted below, the meta description is used by Google to describe your site to potential visitors (although may not be used for ranking).
A: Google will use the description meta tag to better summarize your site, but it won't help to increase your page rank.
See:
http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=79812
EDIT: @Petr, are you sure that meta tags influence page rank? I am pretty sure that they don't, but if you have some references, I'd love to learn more about this. I have seen this, from the Official Google Webmaster Central Blog, which is what leads me to believe that they don't:
Even though we sometimes use the
description meta tag for the snippets
we show, we still don't use the
description meta tag in our ranking.
A: Keywords: Useless
All major search engines don't use them at all.
Description: Useful!
Replaces the default text in search engines if there isn't anything better. Use this to describe the page properly. Not perhaps useful for SEO, but it makes your results look more useful, and will hopefully increase click through rates by users.
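For reference, the tag itself is just one line in the page head (the content is illustrative):
<meta name="description" content="A short, accurate summary of what this page is about." />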
A: If you want your users to share your content on Facebook, the meta tags actually come in handy, as Facebook will use this information when styling the post.
See Facebook Share Partners for more information.
Edit: whoops, wrong URL. Fixed.
A: If your pages are part of an intranet then both the keywords and description meta tags can be very useful. If you have access to the search engine crawling your pages (and thus you can specifically look for sepcific tags/markup), they can add tremendous value without costing you too much time and are easy to change.
For pages outside of an intranet, you may have less success with keywords for reasons mentioned above.
A: The description meta is important as it is displayed verbatim on Google search results below your site title. In its absence, Google pulls and shows the first few lines of content on the SERP. The description tag allows you to control what search users see as a page summary before clicking, which helps increase your click-through rates from search.
The keyword meta's usefulness is still inconclusive, but SEOers continue to use it. Avoid using more than 5-6 keywords in the tag per page, to keep Google from detecting and penalising you for suspected keyword stuffing.
A: The problem with keyword meta tags is they are a completely unreliable source of information for search engines. The temptation for people to alter search results in their favour with misleading keywords is just too great.
A: Those are two of the things that are used by search engines. The exact weight of each changes frequently, they are generally regarded; however, as being fairly important.
One thing to note, care should be taken when entering values. The more relevant the keywords and description are to the textual content of the site, the more weight may be given to them. Of course there are no guarantees as nobody outside of the search engine companies really know what algorithms are being used.
This post talks a bit more about some aspects.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Can I depend on the values of GetHashCode() to be consistent? Is the return value of GetHashCode() guaranteed to be consistent assuming the same string value is being used? (C#/ASP.NET)
I uploaded my code to a server today and to my surprise I had to reindex some data because my server (win2008 64-bit) was returning different values compared to my desktop computer.
A: The implementation is dependent on the version of the framework, but it also depends on the architecture. The implementation of string.GetHashCode() is different in the x86 and x64 versions of the framework, even if they have the same version number.
A: If I'm not mistaken, GetHashCode is consistent given the same value, but it is NOT guaranteed to be consistent across different versions of the framework.
From the MSDN docs on String.GetHashCode():
The behavior of GetHashCode is dependent on its implementation, which might change from one version of the common language runtime to another. A reason why this might happen is to improve the performance of GetHashCode.
A: I had a similar problem where I filled a database table with information which was dependent on String.GetHashCode (Not the best idea) and when I upgraded the server I was working on to x64 I noticed the values I was getting from String.GetHashCode were inconsistent with what was already in the table. My solution was to use my own version of GetHashCode which returns the same value as String.GetHashCode on a x86 framework.
Here's the code, don't forget to compile with "Allow unsafe code":
/// <summary>
/// Similar to String.GetHashCode but returns the same as the x86 version of String.GetHashCode for x64 and x86 frameworks.
/// </summary>
/// <param name="s"></param>
/// <returns></returns>
public static unsafe int GetHashCode32(string s)
{
fixed (char* str = s.ToCharArray())
{
char* chPtr = str;
int num = 0x15051505;
int num2 = num;
int* numPtr = (int*)chPtr;
for (int i = s.Length; i > 0; i -= 4)
{
num = (((num << 5) + num) + (num >> 0x1b)) ^ numPtr[0];
if (i <= 2)
{
break;
}
num2 = (((num2 << 5) + num2) + (num2 >> 0x1b)) ^ numPtr[1];
numPtr += 2;
}
return (num + (num2 * 0x5d588b65));
}
}
A: I wonder if there are differences between 32-bit and 64-bit operating systems, because I am certain both my server and home computer are running the same version of .NET.
I was always wary of using GetHashCode(); it might be a good idea for me to simply roll my own hash algorithm. Well, at least I ended up writing a quick re-index .aspx page because of it.
A: Not a direct answer to your question, which Jonas has answered well, however this may be of assistance if you are worried about equality testing in hashes
From our tests, depending on what you are requiring with hashcodes, in C#, hashcodes do not need to be unique for Equality operations. As an example, consider the following:
We had a requirement to overload the equals operator, and therefore the GetHashCode function, of our objects, as they had become volatile and stateless, sourcing themselves directly from data. In one place in the application we needed to ensure that an object would be viewed as equal to another object if it was sourced from the same data, not just if it was the same reference. Our unique data identifiers are Guids.
The equals operator was easy to cater for as we just checked on the Guid of the record (after checking for null).
Unfortunately the hash code (being an int) is only 32 bits, and mathematically, when we override the GetHashCode function, it is impossible to generate a unique hash code from a Guid, which is larger than 32 bits (look at it from the converse: how would you translate a 32-bit integer into a Guid?).
We then did some tests where we took the Guid as a string and returned the HashCode of the Guid, which almost always returns a unique identifier in our tests, but not always.
What we did notice, however, is that when an object is in a hashed collection (a Hashtable, a Dictionary, etc.) and two distinct keys share the same hash code, the hash code is only used as a first-pass lookup; the equality operator is then used as a fallback to determine equality.
As I said this may or may not be relevant to your situation, but if it is it's a handy tip.
UPDATE
To demonstrate, we have a Hashtable:
Key:Object A (Hashcode 1), value Object A1
Key:Object B (Hashcode 1), value Object B1
Key:Object C (Hashcode 1), value Object C1
Key:Object D (Hashcode 2), value Object D1
Key:Object E (Hashcode 3), value Object E1
When I call the hashtable for the object with the key of Object A, the object A1 will be returned after 2 steps, a call for hashcode 1, then an equality check on the key object as there is not a unique key with the hashcode 1
When I call the hashtable for the object with the key of Object D, the object D1 will be returned after 1 step, a hash lookup
A:
What we did notice, however, is that when an
object is in a hashed collection (a Hashtable,
a Dictionary, etc.) and two distinct keys share
the same hash code, the hash code is only used
as a first-pass lookup; the equality operator is
then used as a fallback to determine equality.
This is the way hash lookups work, right? Each bucket contains a list of items having the same hash code.
So to find the correct item under these conditions a linear search using value equality comparison takes place.
And if your hashing implementation achieves good distribution, this search is not required, i.e., one item per bucket.
Is my understanding correct?
A: /// <summary>
/// Default implementation of string.GetHashCode is not consistent on different platforms (x32/x64 which is our case) and frameworks.
/// FNV-1a - (Fowler/Noll/Vo) is a fast, consistent, non-cryptographic hash algorithm with good dispersion. (see http://isthe.com/chongo/tech/comp/fnv/#FNV-1a)
/// </summary>
private static int GetFNV1aHashCode(string str)
{
if (str == null)
return 0;
var length = str.Length;
// original FNV-1a has 32 bit offset_basis = 2166136261 but length gives a bit better dispersion (2%) for our case where all the strings are equal length, for example: "3EC0FFFF01ECD9C4001B01E2A707"
int hash = length;
for (int i = 0; i != length; ++i)
hash = (hash ^ str[i]) * 16777619;
return hash;
}
This implementation can be slower than the unsafe one posted before, but it is much simpler and safe.
A: I would have to say you cannot rely on it. For example, if I run file1 through C#'s MD5 hash code and copy and paste the same file to a new directory, the hash code comes out different even though it is the same file. Obviously it's the same .NET version, same everything. The only thing that changed was the path.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Why does Path.Combine not properly concatenate filenames that start with Path.DirectorySeparatorChar? From the Immediate Window in Visual Studio:
> Path.Combine(@"C:\x", "y")
"C:\\x\\y"
> Path.Combine(@"C:\x", @"\y")
"\\y"
It seems that they should both be the same.
The old FileSystemObject.BuildPath() didn't work this way...
A: From MSDN:
If one of the specified paths is a zero-length string, this method returns the other path. If path2 contains an absolute path, this method returns path2.
In your example, path2 is absolute.
A: Following Christian Graus' advice in his "Things I Hate about Microsoft" blog titled "Path.Combine is essentially useless.", here is my solution:
public static class Pathy
{
public static string Combine(string path1, string path2)
{
        if (path1 == null) return path2;
        else if (path2 == null) return path1;
else return path1.Trim().TrimEnd(System.IO.Path.DirectorySeparatorChar)
+ System.IO.Path.DirectorySeparatorChar
+ path2.Trim().TrimStart(System.IO.Path.DirectorySeparatorChar);
}
public static string Combine(string path1, string path2, string path3)
{
return Combine(Combine(path1, path2), path3);
}
}
Some advise that the names should collide; I went with Pathy as a slight variation, to avoid a namespace collision with System.IO.Path.
Edit: Added null parameter checks
A: This code should do the trick:
string strFinalPath = string.Empty;
string normalizedFirstPath = Path1.TrimEnd(new char[] { '\\' });
string normalizedSecondPath = Path2.TrimStart(new char[] { '\\' });
strFinalPath = Path.Combine(normalizedFirstPath, normalizedSecondPath);
return strFinalPath;
A: Reason:
Your second path is considered an absolute path, and the Combine method will only return the last path if the last path is an absolute path.
Solution:
Just remove the leading slash / from your second path (/SecondPath to SecondPath), and it will work as expected.
A: Not knowing the actual details, my guess is that it makes an attempt to join like you might join relative URIs. For example:
urljoin('/some/abs/path', '../other') = '/some/abs/other'
This means that when you join a path with a preceding slash, you are actually joining one base to another, in which case the second gets precedence.
A: I wanted to solve this problem:
string sample1 = "configuration/config.xml";
string sample2 = "/configuration/config.xml";
string sample3 = "\\configuration/config.xml";
string dir1 = "c:\\temp";
string dir2 = "c:\\temp\\";
string dir3 = "c:\\temp/";
string path1 = PathCombine(dir1, sample1);
string path2 = PathCombine(dir1, sample2);
string path3 = PathCombine(dir1, sample3);
string path4 = PathCombine(dir2, sample1);
string path5 = PathCombine(dir2, sample2);
string path6 = PathCombine(dir2, sample3);
string path7 = PathCombine(dir3, sample1);
string path8 = PathCombine(dir3, sample2);
string path9 = PathCombine(dir3, sample3);
Of course, all paths 1-9 should contain an equivalent string in the end. Here is the PathCombine method I came up with:
private string PathCombine(string path1, string path2)
{
if (Path.IsPathRooted(path2))
{
path2 = path2.TrimStart(Path.DirectorySeparatorChar);
path2 = path2.TrimStart(Path.AltDirectorySeparatorChar);
}
return Path.Combine(path1, path2);
}
I also think that it is quite annoying that this string handling has to be done manually, and I'd be interested in the reason behind this.
A: If you want to combine both paths without losing any path you can use this:
?Path.Combine(@"C:\test", @"\test".Substring(0, 1) == @"\" ? @"\test".Substring(1, @"\test".Length - 1) : @"\test");
Or with variables:
string Path1 = @"C:\Test";
string Path2 = @"\test";
string FullPath = Path.Combine(Path1, Path.IsPathRooted(Path2) ? Path2.Substring(1, Path2.Length - 1) : Path2);
Both cases return "C:\test\test".
First, I evaluate if Path2 starts with / and if it is true, return Path2 without the first character. Otherwise, return the full Path2.
A: This actually makes sense, in some way, considering how (relative) paths are treated usually:
string GetFullPath(string path)
{
string baseDir = @"C:\Users\Foo.Bar";
return Path.Combine(baseDir, path);
}
// Get full path for RELATIVE file path
GetFullPath("file.txt"); // = C:\Users\Foo.Bar\file.txt
// Get full path for ROOTED file path
GetFullPath(@"C:\Temp\file.txt"); // = C:\Temp\file.txt
The real question is: Why are paths, which start with "\", considered "rooted"? This was new to me too, but it works that way on Windows:
new FileInfo(@"\windows"); // FullName = C:\Windows, Exists = True
new FileInfo("windows"); // FullName = C:\Users\Foo.Bar\Windows, Exists = False
A: I used aggregate function to force paths combine as below:
public class MyPath
{
public static string ForceCombine(params string[] paths)
{
return paths.Aggregate((x, y) => Path.Combine(x, y.TrimStart('\\')));
}
}
A: This is the disassembled code from .NET Reflector for Path.Combine method. Check IsPathRooted function. If the second path is rooted (starts with a DirectorySeparatorChar), return second path as it is.
public static string Combine(string path1, string path2)
{
if ((path1 == null) || (path2 == null))
{
throw new ArgumentNullException((path1 == null) ? "path1" : "path2");
}
CheckInvalidPathChars(path1);
CheckInvalidPathChars(path2);
if (path2.Length == 0)
{
return path1;
}
if (path1.Length == 0)
{
return path2;
}
if (IsPathRooted(path2))
{
return path2;
}
char ch = path1[path1.Length - 1];
if (((ch != DirectorySeparatorChar) &&
(ch != AltDirectorySeparatorChar)) &&
(ch != VolumeSeparatorChar))
{
return (path1 + DirectorySeparatorChar + path2);
}
return (path1 + path2);
}
public static bool IsPathRooted(string path)
{
if (path != null)
{
CheckInvalidPathChars(path);
int length = path.Length;
if (
(
(length >= 1) &&
(
(path[0] == DirectorySeparatorChar) ||
(path[0] == AltDirectorySeparatorChar)
)
)
||
((length >= 2) &&
(path[1] == VolumeSeparatorChar))
)
{
return true;
}
}
return false;
}
A: This is kind of a philosophical question (which perhaps only Microsoft can truly answer), since it's doing exactly what the documentation says.
System.IO.Path.Combine
"If path2 contains an absolute path, this method returns path2."
Here's the actual Combine method from the .NET source. You can see that it calls CombineNoChecks, which then calls IsPathRooted on path2 and returns that path if so:
public static String Combine(String path1, String path2) {
if (path1==null || path2==null)
throw new ArgumentNullException((path1==null) ? "path1" : "path2");
Contract.EndContractBlock();
CheckInvalidPathChars(path1);
CheckInvalidPathChars(path2);
return CombineNoChecks(path1, path2);
}
internal static string CombineNoChecks(string path1, string path2)
{
if (path2.Length == 0)
return path1;
if (path1.Length == 0)
return path2;
if (IsPathRooted(path2))
return path2;
char ch = path1[path1.Length - 1];
if (ch != DirectorySeparatorChar && ch != AltDirectorySeparatorChar &&
ch != VolumeSeparatorChar)
return path1 + DirectorySeparatorCharAsString + path2;
return path1 + path2;
}
I don't know what the rationale is. I guess the solution is to strip off (or Trim) DirectorySeparatorChar from the beginning of the second path; maybe write your own Combine method that does that and then calls Path.Combine().
A: In my opinion this is a bug. The problem is that there are two different types of "absolute" paths. The path "d:\mydir\myfile.txt" is absolute, the path "\mydir\myfile.txt" is also considered to be "absolute" even though it is missing the drive letter. The correct behavior, in my opinion, would be to prepend the drive letter from the first path when the second path starts with the directory separator (and is not a UNC path). I would recommend writing your own helper wrapper function which has the behavior you desire if you need it.
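A minimal sketch of such a wrapper, using the System.IO.Path APIs (the name CombineKeepDrive is made up; it implements the "prepend the drive letter" behavior described above, leaving UNC paths alone):
static string CombineKeepDrive(string path1, string path2)
{
    // A single leading separator means "rooted on the current drive";
    // reroot it on path1's drive instead. A double separator is a UNC path.
    if (path2.StartsWith(@"\") && !path2.StartsWith(@"\\"))
    {
        string root = Path.GetPathRoot(path1);            // e.g. @"d:\"
        return Path.Combine(root, path2.TrimStart('\\'));
    }
    return Path.Combine(path1, path2);
}
With this, CombineKeepDrive(@"d:\mydir", @"\mydir\myfile.txt") yields "d:\mydir\myfile.txt" rather than "\mydir\myfile.txt".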
A: Remove the starting slash ('\') in the second parameter (path2) of Path.Combine.
A: This \ means "the root directory of the current drive". In your example it means the "test" folder in the current drive's root directory. So, this can be equal to "c:\test".
A: These two methods should save you from accidentally joining two strings that both have the delimiter in them.
public static string Combine(string x, string y, char delimiter) {
return $"{ x.TrimEnd(delimiter) }{ delimiter }{ y.TrimStart(delimiter) }";
}
public static string Combine(string[] xs, char delimiter) {
if (xs.Length < 1) return string.Empty;
if (xs.Length == 1) return xs[0];
var x = Combine(xs[0], xs[1], delimiter);
if (xs.Length == 2) return x;
var ys = new List<string>();
ys.Add(x);
ys.AddRange(xs.Skip(2).ToList());
return Combine(ys.ToArray(), delimiter);
}
A: As mentioned by Ryan, it's doing exactly what the documentation says.
From DOS times, current disk, and current path are distinguished.
\ is the root path, but for the CURRENT DISK.
For every "disk" there is a separate "current path".
If you change the disk using cd D: you do not change the current path to D:\, but to: "D:\whatever\was\the\last\path\accessed\on\this\disk"...
So, in windows, a literal @"\x" means: "CURRENTDISK:\x".
Hence Path.Combine(@"C:\x", @"\y") has as its second parameter a rooted path, not a relative one, though not on a known disk...
And since it is not known which might be the «current disk», Path.Combine returns "\y".
>cd C:
>cd \mydironC\apath
>cd D:
>cd \mydironD\bpath
>cd C:
>cd
>C:\mydironC\apath
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "223"
} |
Q: Is it possible to make a recursive SQL query? I have a table similar to this:
CREATE TABLE example (
id integer primary key,
name char(200),
parentid integer,
value integer);
I can use the parentid field to arrange data into a tree structure.
Now here's the bit I can't work out. Given a parentid, is it possible to write an SQL statement to add up all the value fields under that parentid and recurse down the branch of the tree ?
UPDATE: I'm using PostgreSQL so the fancy MS-SQL features are not available to me. In any case, I'd like this to be treated as a generic SQL question.
BTW, I'm very impressed to have 6 answers within 15 minutes of asking the question! Go stack overflow!
A: use a common table expression.
(Comment from Dale Ragan: you may want to indicate this is SQL Server 2005 or above only.)
Here's an article on recursion by SqlTeam without common table expressions.
A: If you're using SQL Server 2005, there is a really cool way to do this using Common Table Expressions.
It takes all of the gruntwork out of creating a temporary table, and basically allows you to do it all with just a WITH and a UNION.
Here is a good tutorial:
http://searchwindevelopment.techtarget.com/tip/0,289483,sid8_gci1278207,00.html
A: Here is an example script using common table expression:
with recursive sumthis(id, val) as (
select id, value
from example
where id = :selectedid
union all
select C.id, C.value
from sumthis P
inner join example C on P.id = C.parentid
)
select sum(val) from sumthis
The script above creates a 'virtual' table called sumthis that has columns id and val. It is defined as the result of two selects merged with union all.
First select gets the root (where id = :selectedid).
Second select follows the children of the previous results iteratively until there is nothing to return.
The end result can then be processed like a normal table. In this case the val column is summed.
A: Since version 8.4, PostgreSQL has recursive query support for common table expressions using the SQL standard WITH syntax.
A: The following code compiles and it's tested OK.
create or replace function subtree (bigint)
returns setof example as $$
declare
results record;
entry record;
recs record;
begin
select into results * from example where parent = $1;
if found then
for entry in select child from example where parent = $1 and child <> parent loop
for recs in select * from subtree(entry.child) loop
return next recs;
end loop;
end loop;
end if;
return next results;
end;
$$ language 'plpgsql';
The condition "child <> parent" is needed in my case because nodes point to themselves.
Have fun :)
A: If you want a portable solution that will work on any ANSI SQL-92 RDBMS, you will need to add a new column to your table.
Joe Celko is the original author of the Nested Sets approach to storing hierarchies in SQL. You can Google "nested sets" hierarchy to understand more about the background.
Or you can just rename parentid to leftid and add a rightid.
Here is my attempt to summarize Nested Sets, which will fall woefully short because I'm no Joe Celko: SQL is a set-based language, and the adjacency model (storing parent ID) is NOT a set-based representation of a hierarchy. Therefore there is no pure set-based method to query an adjacency schema.
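To make that concrete, here is a sketch of a subtree sum under the nested-sets model, using the leftid/rightid columns suggested above (a node's interval contains the intervals of all of its descendants, so no recursion is needed):
select sum(c.value)
from example p
inner join example c on c.leftid between p.leftid and p.rightid
where p.id = :selectedid;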
However, most of the major platforms have introduced extensions in recent years to deal with this precise problem. So if someone replies with a Postgres-specific solution, use that by all means.
A: There are a few ways to do what you need in PostgreSQL.
*
*If you can install modules, look at the tablefunc contrib. It has a connectby() function that handles traversing trees. http://www.postgresql.org/docs/8.3/interactive/tablefunc.html
*Also check out the ltree contrib, which you could adapt your table to use: http://www.postgresql.org/docs/8.3/interactive/ltree.html
*Or you can traverse the tree yourself with a PL/PGSQL function.
Something like this:
create or replace function example_subtree (integer)
returns setof example as
'declare results record;
child record;
temp record;
begin
select into results * from example where parent_id = $1;
if found then
return next results;
for child in select id from example
where parent_id = $1
loop
for temp in select * from example_subtree(child.id)
loop
return next temp;
end loop;
end loop;
end if;
return null;
end;' language 'plpgsql';
select sum(value) as value_sum
from example_subtree(1234);
A: A standard way to make a recursive query in SQL are recursive CTE. PostgreSQL supports them since 8.4.
In earlier versions, you can write a recursive set-returning function:
CREATE FUNCTION fn_hierarchy (parent INT)
RETURNS SETOF example
AS
$$
SELECT example
FROM example
WHERE id = $1
UNION ALL
SELECT fn_hierarchy(id)
FROM example
WHERE parentid = $1
$$
LANGUAGE 'sql';
SELECT *
FROM fn_hierarchy(1)
See this article:
*
*Hierarchical queries in PostgreSQL
A: Oracle has "START WITH" and "CONNECT BY"
select
lpad(' ',2*(level-1)) || to_char(child) s
from
test_connect_by
start with parent is null
connect by prior child = parent;
http://www.adp-gmbh.ch/ora/sql/connect_by.html
A: Just as a brief aside although the question has been answered very well, it should be noted that if we treat this as a:
generic SQL question
then the SQL implementation is fairly straight-forward, as SQL'99 allows linear recursion in the specification (although I believe no RDBMSs implement the standard fully) through the WITH RECURSIVE statement. So from a theoretical perspective we can do this right now.
A: None of the examples worked OK for me so I've fixed it like this:
declare
results record;
entry record;
recs record;
begin
for results in select * from project where pid = $1 loop
return next results;
for recs in select * from project_subtree(results.id) loop
return next recs;
end loop;
end loop;
return;
end;
A: is this SQL Server? Couldn't you write a TSQL stored procedure that loops through and unions the results together?
I am also interested if there is a SQL-only way of doing this though. From the bits I remember from my geographic databases class, there should be.
A: I think it is easier in SQL 2008 with HierarchyID
A: If you need to store arbitrary graphs, not just hierarchies, you could push Postgres to the side and try a graph database such as AllegroGraph:
Everything in the graph database is stored as a triple (source node, edge, target node) and it gives you first class support for manipulating the graph structure and querying it using a SQL like language.
It doesn't integrate well with something like Hibernate or Django ORM but if you are serious about graph structures (not just hierarchies like the Nested Set model gives you) check it out.
I also believe Oracle has finally added a support for real Graphs in their latest products, but I'm amazed it's taken so long, lots of problems could benefit from this model.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63"
} |
Q: Good Ways to Use Source Control and an IDE for Plugin Code? What are good ways of dealing with the issues surrounding plugin code that interacts with outside system?
To give a concrete and representative example, suppose I would like to use Subversion and Eclipse to develop plugins for WordPress. The main code body of WordPress is installed on the webserver, and the plugin code needs to be available in a subdirectory of that server.
I could see how you could simply checkout a copy of your code directly under the web directory on a development machine, but how would you also then integrate this with the IDE?
I am making the assumption here that all the code for the plugin is located under a single directory.
Do most people just add the plugin as a project in an IDE and then place the working folder for the project wherever the 'main' software system wants it to be? Or do people use some kind of symlinks to their home directory?
A: To me, adding a symlink pointing to your development folder seems like a tidy solution to the problem.
If the main project is on a different machine/webserver, you could use something like sshfs to mount your development directory into the right place on the webserver.
A: Short answer - I do have my development and production servers check out the appropriate directories directly from SVN.
For your example:
Develop on the IDE as you would normally, then, when you're ready to test, check in to your local repository. Your development webserver can then have that directory checked out and you can easily test.
Once you're ready for production, merge the change into the production branch, and do an svn update on the production webserver.
A: Where I work some folks like to use the FileSync Plugin for Eclipse for this purpose, though I have seen some oddities with that plugin where files in the target directory occasionally go missing. The whole structure is:
*
*Ant task to create target directory at desired location (via copy commands, mostly)
*FileSync Plugin configured to keep files in sync between development location and target location as you code (sync the Eclipse output folder to a location in the Web server's classpath, etc.)
Of course, symlinks may work better on systems that have good support for symlinks :-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Java ConnectionPool connection not closing, stuck in 'sleep' I have a webapp that uses JNDI lookups to get a connection to the database.
The connection works fine and returns the query with no problems. The issue is that the connection does not close properly and is stuck in 'sleep' mode (according to MySQL Administrator). This means that the connections become unusable and then I run out of connections.
Can someone give me a few pointers as to what I can do to make the connection return to the pool successfully.
public class DatabaseBean {
private static final Logger logger = Logger.getLogger(DatabaseBean.class);
private Connection conn;
private PreparedStatement prepStmt;
/**
* Zero argument constructor
* Setup generic database connection in here to avoid redundancy
* The connection details are in /META-INF/context.xml
*/
public DatabaseBean() {
try {
InitialContext initContext = new InitialContext();
DataSource ds = (DataSource) initContext.lookup("java:/comp/env/jdbc/mysite");
conn = ds.getConnection();
}
catch (SQLException SQLEx) {
logger.fatal("There was a problem with the database connection.");
logger.fatal(SQLEx);
logger.fatal(SQLEx.getCause());
}
catch (NamingException nameEx) {
logger.fatal("There was a naming exception");
logger.fatal(nameEx);
logger.fatal(nameEx.getCause());
}
}
/**
* Execute a query. Do not use for statements (update delete insert etc).
*
* @return A ResultSet of the executed query. A set of size zero if no results were returned. It is never null.
* @see #executeUpdate() for running update, insert delete etc.
*/
public ResultSet executeQuery() {
ResultSet result = null;
try {
result = prepStmt.executeQuery();
logger.debug(prepStmt.toString());
}
catch (SQLException SQLEx) {
logger.fatal("There was an error running a query");
logger.fatal(SQLEx);
}
return result;
}
SNIP
public void close() {
try {
prepStmt.close();
prepStmt = null;
conn.close();
conn = null;
} catch (SQLException SQLEx) {
logger.warn("There was an error closing the database connection.");
}
}
}
This is inside a javabean that uses the database connection.
public LinkedList<ImportantNoticeBean> getImportantNotices() {
DatabaseBean noticesDBBean = new DatabaseBean();
LinkedList<ImportantNoticeBean> listOfNotices = new LinkedList<ImportantNoticeBean>();
try {
PreparedStatement preStmt = noticesDBBean.getConn().prepareStatement("SELECT pseudonym, message, date_to, date_from " +
"FROM importantnotices, users " +
"WHERE importantnotices.username = users.username " +
"AND NOW() >= date_from AND NOW() <= date_to;");
noticesDBBean.setPrepStmt(preStmt);
ResultSet result = noticesDBBean.executeQuery();
while (result.next()) {
ImportantNoticeBean noticeBean = new ImportantNoticeBean();
noticeBean.setAuthor(result.getString("pseudonym"));
noticeBean.setMessage(result.getString("message"));
noticeBean.setDateTo(result.getDate("date_to"));
noticeBean.setDateFrom(result.getDate("date_from"));
listOfNotices.add(noticeBean);
}
result.close();
} catch (SQLException SQLEx) {
logger.error("There was an error in ImportantNoticesBean.getImportantNotices()");
logger.error(SQLEx);
} finally {
noticesDBBean.close();
}
return listOfNotices;
}
<Context reloadable="true">
<Resource name="jdbc/mysite"
auth="Container"
type="javax.sql.DataSource"
username="user"
password="password"
driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/mysite"
maxActive="10"
maxIdle="5"
maxWait="6000"
removeAbandoned="true"
logAbandoned="false"
removeAbandonedTimeout="20"
/>
</Context>
A: You seem to be closing the connection properly - except for the case where prepStmt.close() throws a SQLException, I can't find a connection leak.
What pool implementation are you using? When you close a connection, the pool need not close the underlying MySQL connection immediately - after all that is the point of a connection pool! So from MySQL side, the connections would look alive, although your app is not using any; they might simply be held by the TC connection pool.
You might want to experiment with the settings of the connection pool. Ask it to shrink the pool when the system is idle. Or, ask it to refresh all connections periodically. Or, have a strict upper bound on the number of concurrent connections it ever gets from MySQL, etc.
One way to check if your code has a connection leak is to force the ds.getConnection() to always open a new physical connection and conn.close() to release the connection (if your connection pool has settings for those). Then if you watch the connections on MySQL side, you might be able to figure out if the code really has a connection leak or not.
A: This is a similar question - Connection Pool Settings for Tomcat
This is my response to that question and it fixed the problem for the other guy. It may help you out too.
Tomcat Documentation
DBCP uses the Jakarta-Commons Database Connection Pool. It relies on number of Jakarta-Commons components:
* Jakarta-Commons DBCP
* Jakarta-Commons Collections
* Jakarta-Commons Pool
I'm using the same connection pooling stuff and I'm setting these properties to prevent the same thing it's just not configured through tomcat.
But if the first thing doesn't work try these.
testWhileIdle=true
timeBetweenEvictionRunsMillis=300000
A: Ok I might have this sorted. I have changed the database config resource to the following:
*SNIP*
maxActive="10"
maxIdle="5"
maxWait="7000"
removeAbandoned="true"
logAbandoned="false"
removeAbandonedTimeout="3"
*SNIP*
This works well enough for now. What is happening, AFAIK, is that once I reach the ten connections, Tomcat checks for abandoned connections (idle time > 3). It does this in a batch job each time that max connections is reached. The potential issue with this is if I need more than 10 queries run at the same time (not unique to me). The important thing is that removeAbandonedTimeout is less than maxWait.
Is this what should be happening? i.e., is this the way that the pool should operate? If it is, it seems, at least to me, that you would wait until something (the connection) is broken before fixing it, rather than not letting it 'break' in the first place. Maybe I am still not getting it.
A:
The issue is that the connection does not close properly and is stuck in 'sleep' mode
This was actually only half right.
The problem I ran into was actually that each app was defining a new connection to the database server. So each time I closed all the connections, App A would make a bunch of new connections as per its WEB.xml config file and run happily. App B would do the same. The problem is that they are independent pools which each try to grab up to the server-defined limit. It is a kind of race condition, I guess. So when App A has finished with the connections, it sits waiting to use them again until the timeout has passed, while App B, which needs the connection now, is denied the resources even though App A has finished with them and they should be back in the pool. Once the timeout has passed, the connection is freed up and B (or C etc.) can get at it again.
e.g. if the limit is 10 (the MySQL profile limit) and each app has been configured to use a max of 10, then there will be 20 attempted connections. Obviously this is a bad situation.
The solution is to RTFM and put the connection details in the right place. This does make shared hosting a pain, but there are ways around it (such as linking to other xml files from the context).
Just to be explicit: I had put the connection details in the WEB.xml for each app, and that was what caused the fight.
A: One thing that @binil missed, you are not closing the result set in the case of an exception. Depending on the driver implementation this may cause the connection to stay open. Move the result.close() call to the finally block.
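Applied to the loop in the question, that pattern looks roughly like this (a sketch, not a drop-in patch):
ResultSet result = null;
try {
    result = noticesDBBean.executeQuery();
    while (result.next()) {
        // ... build each ImportantNoticeBean as before ...
    }
} catch (SQLException SQLEx) {
    logger.error(SQLEx);
} finally {
    if (result != null) {
        try { result.close(); } catch (SQLException ignored) { }
    }
    noticesDBBean.close(); // returns the connection to the pool
}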
A: I am using the same configuration as you are. If the connection in mysql administrator(windows) shows that it is in sleep mode it only means that is pooled but not in use. I checked this running a test program program with multiple threads making random queries to Mysql. if it helps here is my configuration:
defaultAutoCommit="false"
defaultTransactionIsolation="REPEATABLE_READ"
auth="Container"
type="javax.sql.DataSource"
logAbandoned="true"
removeAbandoned="true"
removeAbandonedTimeout="300"
maxActive="-1"
initialSize="15"
maxIdle="10"
maxWait="10000"
username="youruser"
password="youruserpassword"
driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://yourhost/yourdatabase"/>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What registry access can you get without Administrator privileges? I know that we shouldn't be using the registry to store Application Data anymore, but in updating a Legacy application (and wanting to do the fewest changes), what Registry Hives are non-administrators allowed to use?
Can I access all of HKEY_CURRENT_USER (the application currently accesses HKEY_LOCAL_MACHINE) without Administrator privileges?
A: Yes, you should be able to write to any place under HKEY_CURRENT_USER without having Administrator privileges. But this is effectively a private store that no other user on this machine will be able to access, so you can't put any shared configuration there.
A: In general, a non-administrator user has this access to the registry:
Read/Write to:
*
*HKEY_CURRENT_USER
Read Only:
*
*HKEY_LOCAL_MACHINE
*HKEY_CLASSES_ROOT (which is just a link to HKEY_LOCAL_MACHINE\Software\Classes)
It is possible to change some of these permissions on a key-by-key basis, but it's extremely rare. You should not have to worry about that.
For your purposes, your application should be writing settings and configuration to HKEY_CURRENT_USER. The canonical place is anywhere within HKEY_CURRENT_USER\Software\YourCompany\YourProduct\
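For example, from .NET, using the Microsoft.Win32 registry types (a sketch; the key and value names are placeholders):
using (RegistryKey key =
    Registry.CurrentUser.CreateSubKey(@"Software\YourCompany\YourProduct"))
{
    key.SetValue("SomeSetting", "some value");            // write
    string s = (string)key.GetValue("SomeSetting", "");   // read, with a default
}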
You could potentially hold settings that are global (for all users) in HKEY_LOCAL_MACHINE. It is very rare to need to do this, and you should avoid it. The problem is that any user can "read" those, but only an administrator (or by extension, your setup/install program) can "set" them.
Another common source of trouble: your application should not write to anything in the Program Files or Windows directories. If you need to write to files, there are several options at hand; describing all of them would be a longer discussion. All of the options end up writing to one subfolder or another under %USERPROFILE% for the user in question.
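For example, from .NET, the per-user application data folder can be located like this (a sketch; the subfolder name is a placeholder):
string dir = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
    @"YourCompany\YourProduct");
Directory.CreateDirectory(dir); // no-op if it already exists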
Finally, your application should stay out of HKEY_CURRENT_CONFIG. This hive holds hardware configuration, services configurations and other items that 99.9999% of applications should not need to look at (for example, it holds the current plug-and-play device list). If you need anything from there, most of the information is available through supported APIs elsewhere.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67"
} |
Q: How to find all database references In trying to figure out this problem (which is still unsolved and I still have no clue what is going on), I am wondering if maybe an external reference to the table in question is causing the problem. For example, a trigger or view or some other such thing.
Is there an easy way to find all references to a given database table? Including all views, triggers, constraints, or anything at all, preferably from the command line, and also preferably without a 3rd party tool (we are using db2).
A: Wow, I wouldn't have thought it, but there seems to be.. Good ole DB2.
I find the publib db2 docs view very very handy by the way:
http://publib.boulder.ibm.com/infocenter/db2luw/v8//index.jsp
I just found the "SYSCAT.TABDEP" catalog view in it, which seems to contain more or less what you asked for. I suspect for anything not covered there you'll have to trawl through the rest of the syscat tables which are vast. (Unfortunately I can't seem to link you to the exact page on SYSCAT.TABDEP itself, the search facility should lead you to it fairly easily though).
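For example, something along these lines should list the objects the catalog records as depending on a given table (the schema and table names are placeholders; check the exact column names against the docs):
SELECT tabschema, tabname, dtype
FROM syscat.tabdep
WHERE bschema = 'MYSCHEMA'
  AND bname = 'MYTABLE';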
Most databases these days have a set of tables which contain data about the layout of your actual schema tables, quite handy for this sort of thing.
A: You can write a query to search the information schema views (the definition column) to find the table in all views, triggers, procedures, etc. I'm not sure about FKs and indexes though.
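On databases that expose the standard INFORMATION_SCHEMA (DB2 has its own SYSCAT catalog instead), such a search might look like:
SELECT table_schema, table_name
FROM information_schema.views
WHERE view_definition LIKE '%mytable%';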
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What do you use to capture webpages, diagram/pictures and code snippets for later reference? What do you use to capture webpages, diagram/pictures and code snippets for later reference?
A: Evernote http://www.evernote.com and delicious http://www.delicious.com
A: *
*Evernote
*Notepad2's clipboard feature (Notepad2.exe /c as a link in Launchy)
*Windows Clippings or PrintKey
*Firefox extension Page Saver
*Delicious
A: Microsoft OneNote.
A: I find Google Notebook is very good for drive-by code snippeting, and Google Bookmarks, especially when used with the Google Toolbar, for web pages.
The benefit of these tools is that they are available from any PC on the web, though a good use of semantic organisation using labels is recommended.
A: I just have an emacs instance running on my home machine, under screen. Whereever I am (and have network) I can connect to it remotely. I stick all useful urls, birthday present ideas, future dates, code snippets, ideas for docs etcetc in there.
I rarely have doodles/diagrams I need to capture, I tend to draw them in ascii in my file if needed.
I must admit I'm a bit stuck if I have no network/wifi somewhere, but that's rarely the case.
A: Here's my response to a similar question:
The combination of OneNote with a tablet PC is awesome! I was a bit of a skeptic at first. I used the trial version and then forgot about it. A year later I had an unruly collection of files, project related emails, notebooks and scraps of paper all scattered throughout my life. I went back to OneNote and all my problems went away. Some highlights:
*
*Everything is searchable. The character recognition is good enough that my chicken-scratch meeting notes can be searched. Text within images is searchable.
*OneNote syncs with Outlook so finding meeting notes is a breeze.
*I now embed all files into OneNote - pdfs, spreadsheets, word docs, images, web clippings.
*OneNote is constantly saving all changes so, combined with a scheduled automated backup, everything is in one place and is safe.
*There are some built-in collaboration tools I have yet to try but that look useful.
It is SO worth the price. It allows you to get started on a project and avoid all that time spent deciding how to organize things.
A: Zotero, is a nice plugin for Firefox.
A: SnagIt
captures everything you could want, and lets you annotate it.
A: I prefer to use the good old url for delicious
Apart from that I use the Scrapbook extension in Firefox when I want to save something on the disk. It's possible to tag the page, edit it and remove those stupid ads before saving it.
I also have a Wiki on a stick that I carry around on a USB key for code snippets that should go to other clients when I'm travelling around
Mostly, my code snippets are embedded into projects I carry on the same USB key, which allows me to demonstrate some technologies right off to the client and get his advice based on a demonstration, not a listing of code...
A: For screen shots, I use a mix between ScrapBook and ScreenGrab. They are both firefox plugins that are pretty amazing when you need to get a screenshot of a page for editing. Works great for consulting.
https://addons.mozilla.org/en-US/firefox/addon/427
https://addons.mozilla.org/en-US/firefox/addon/1146
A: Delicious Bookmarks extension for Firefox
A: It's a little primitive, but I've been using tiddlywiki (self-contained, single-file wiki) http://www.tiddlywiki.com/ which works good for basic text and markup. I combine it with a plugin to sync it with Outlook's notes (http://syncoutlooknotes.tiddlyspot.com/#SyncOutlookNotes) so that I can then sync it to my blackberry using the standard outlook-blackberry sync mechanism. This has the significant advantage that I can look at my notes and even write new notes when I'm out and about, away from my laptop, or just don't feel like lugging the laptop around to a meeting that I don't really need it for.
I'd prefer using something more advanced like OneNote, but being able to take my notes with me on the little BlackBerry has turned out to be a significant advantage.
A: Google Notebook is very convenient tool. You can clip and save any parts of web pages without leaving your browser tab. The Notebook plug-in automatically saves them as separate notes in your notebooks and keep the links back to the original web pages. You can organize your clippings later by moving them between your notebooks and/or tagging them. Very good for code snippets and references.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Find the highest order bit in C what I'm after is something I can feed a number into and it will return the highest order bit. I'm sure there's a simple way. Below is an example output (left is the input)
1 -> 1
2 -> 2
3 -> 2
4 -> 4
5 -> 4
6 -> 4
7 -> 4
8 -> 8
9 -> 8
...
63 -> 32
A: From Hacker's Delight:
int hibit(unsigned int n) {
n |= (n >> 1);
n |= (n >> 2);
n |= (n >> 4);
n |= (n >> 8);
n |= (n >> 16);
return n - (n >> 1);
}
This version is for 32-bit ints, but the logic can be extended for 64-bits or higher.
A: A little bit late to this party, but the simplest solution I found, given a modern GCC as the compiler, is simply:
static inline int get_msb32 (register unsigned int val)
{
return 32 - __builtin_clz(val);
}
static inline int get_msb64 (register unsigned long long val)
{
return 64 - __builtin_clzll(val);
}
It's even relatively portable (at the very least it will work on any GCC platform).
A: Continually removing the low order bit comes to mind...
int highest_order_bit( int x )
{
int y = x;
do {
x = y;
y = x & (x-1); //remove low order bit
}
while( y != 0 );
return x;
}
A: The linux kernel has a number of handy bitops like this, coded in the most efficient way for a number of architectures. You can find generic versions in include/asm-generic/bitops/fls.h (and friends), but see also include/asm-x86/bitops.h for a definition using inline assembly if speed is of the essence, and portability is not.
A: fls bottoms out to a hardware instruction on many architectures. I suspect this is probably the simplest, fastest way of doing it.
1<<(fls(input)-1)
A: A fast way to do this is via a look-up table. For a 32-bit input and an 8-bit look-up table, it only requires 4 iterations:
int highest_order_bit(int x)
{
static const int msb_lut[256] =
{
0, 0, 1, 1, 2, 2, 2, 2, // 0000_0000 - 0000_0111
3, 3, 3, 3, 3, 3, 3, 3, // 0000_1000 - 0000_1111
4, 4, 4, 4, 4, 4, 4, 4, // 0001_0000 - 0001_0111
4, 4, 4, 4, 4, 4, 4, 4, // 0001_1000 - 0001_1111
5, 5, 5, 5, 5, 5, 5, 5, // 0010_0000 - 0010_0111
5, 5, 5, 5, 5, 5, 5, 5, // 0010_1000 - 0010_1111
5, 5, 5, 5, 5, 5, 5, 5, // 0011_0000 - 0011_0111
5, 5, 5, 5, 5, 5, 5, 5, // 0011_1000 - 0011_1111
6, 6, 6, 6, 6, 6, 6, 6, // 0100_0000 - 0100_0111
6, 6, 6, 6, 6, 6, 6, 6, // 0100_1000 - 0100_1111
6, 6, 6, 6, 6, 6, 6, 6, // 0101_0000 - 0101_0111
6, 6, 6, 6, 6, 6, 6, 6, // 0101_1000 - 0101_1111
6, 6, 6, 6, 6, 6, 6, 6, // 0110_0000 - 0110_0111
6, 6, 6, 6, 6, 6, 6, 6, // 0110_1000 - 0110_1111
6, 6, 6, 6, 6, 6, 6, 6, // 0111_0000 - 0111_0111
6, 6, 6, 6, 6, 6, 6, 6, // 0111_1000 - 0111_1111
7, 7, 7, 7, 7, 7, 7, 7, // 1000_0000 - 1000_0111
7, 7, 7, 7, 7, 7, 7, 7, // 1000_1000 - 1000_1111
7, 7, 7, 7, 7, 7, 7, 7, // 1001_0000 - 1001_0111
7, 7, 7, 7, 7, 7, 7, 7, // 1001_1000 - 1001_1111
7, 7, 7, 7, 7, 7, 7, 7, // 1010_0000 - 1010_0111
7, 7, 7, 7, 7, 7, 7, 7, // 1010_1000 - 1010_1111
7, 7, 7, 7, 7, 7, 7, 7, // 1011_0000 - 1011_0111
7, 7, 7, 7, 7, 7, 7, 7, // 1011_1000 - 1011_1111
7, 7, 7, 7, 7, 7, 7, 7, // 1100_0000 - 1100_0111
7, 7, 7, 7, 7, 7, 7, 7, // 1100_1000 - 1100_1111
7, 7, 7, 7, 7, 7, 7, 7, // 1101_0000 - 1101_0111
7, 7, 7, 7, 7, 7, 7, 7, // 1101_1000 - 1101_1111
7, 7, 7, 7, 7, 7, 7, 7, // 1110_0000 - 1110_0111
7, 7, 7, 7, 7, 7, 7, 7, // 1110_1000 - 1110_1111
7, 7, 7, 7, 7, 7, 7, 7, // 1111_0000 - 1111_0111
7, 7, 7, 7, 7, 7, 7, 7, // 1111_1000 - 1111_1111
};
int byte;
int byte_cnt;
for (byte_cnt = 3; byte_cnt >= 0; byte_cnt--)
{
byte = (x >> (byte_cnt * 8)) & 0xff;
if (byte != 0)
{
return msb_lut[byte] + (byte_cnt * 8);
}
}
return -1;
}
A: This can easily be solved with existing library calls.
int highestBit(int v){
return v ? 1 << (fls(v) - 1) : 0;
}
The Linux man page gives more details on this function and its counterparts for other input types.
A: This should do the trick.
int hob (int num)
{
if (!num)
return 0;
int ret = 1;
while (num >>= 1)
ret <<= 1;
return ret;
}
hob(1234) returns 1024
hob(1024) returns 1024
hob(1023) returns 512
A: If you do not need a portable solution and your code is executing on an x86 compatible CPU you can use the _BitScanReverse() intrinsic function provided by the Microsoft Visual C/C++ compiler. It maps to the BSR CPU instruction, which returns the index of the highest set bit.
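A sketch of that approach (MSVC-specific; the wrapper name is made up):
#include <intrin.h>

unsigned int hibit_msvc(unsigned int x)
{
    unsigned long index;
    if (_BitScanReverse(&index, x))   /* index of the highest set bit */
        return 1u << index;
    return 0;                         /* no bits set */
}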
A: The algorithm I like best is:
unsigned hibit(unsigned n) {
n |= (n >> 1u);
n |= (n >> 2u);
n |= (n >> 4u);
n |= (n >> 8u);
n |= (n >> 16u);
return n - (n >> 1);
}
And it's easily extended for uint64_t like that:
uint64_t hibit(uint64_t n) {
n |= (n >> 1u);
n |= (n >> 2u);
n |= (n >> 4u);
n |= (n >> 8u);
n |= (n >> 16u);
n |= (n >> 32u);
return n - (n >> 1);
}
or even to __int128
__int128 hibit(__int128 n) {
n |= (n >> 1u);
n |= (n >> 2u);
n |= (n >> 4u);
n |= (n >> 8u);
n |= (n >> 16u);
n |= (n >> 32u);
n |= (n >> 64u);
return n - (n >> 1);
}
In addition, it is a cross-platform solution, independent of the compiler.
A: Like obfuscated code? Try this:
1 << (int) log2(x)
A: // Note doesn't cover the case of 0 (0 returns 1)
inline unsigned int hibit( unsigned int x )
{
unsigned int log2Val = 0 ;
while( x>>=1 ) log2Val++; // eg x=63 (111111), log2Val=5
return 1 << log2Val ; // finds 2^5=32
}
A: A nifty solution I came up with is to binary search the bits.
uint64_t highestBit(uint64_t a, uint64_t bit_min, uint64_t bit_max, uint16_t bit_shift){
if(a == 0) return 0;
if(bit_min >= bit_max){
if((a & bit_min) != 0)
return bit_min;
return 0;
}
uint64_t bit_mid = bit_max >> bit_shift;
bit_shift >>= 1;
if((a >= bit_mid) && (a < (bit_mid << 1)))
return bit_mid;
else if(a > bit_mid)
return highestBit(a, bit_mid, bit_max, bit_shift);
else
return highestBit(a, bit_min, bit_mid, bit_shift);
}
Bit max is the highest power of 2, so for a 64 bit number it would be 2^63. Bit shift should be initialized to half the number of bits, so for 64 bits, it would be 32.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46"
} |
Q: How can I do a line break (line continuation) in Python? Given:
e = 'a' + 'b' + 'c' + 'd'
How do I write the above in two lines?
e = 'a' + 'b' +
'c' + 'd'
A: The danger in using a backslash to end a line is that if whitespace is added after the backslash (which, of course, is very hard to see), the backslash is no longer doing what you thought it was.
See Python Idioms and Anti-Idioms (for Python 2 or Python 3) for more.
A: It may not be the Pythonic way, but I generally use a list with the join function for writing a long string, like SQL queries:
query = " ".join([
'SELECT * FROM "TableName"',
'WHERE "SomeColumn1"=VALUE',
'ORDER BY "SomeColumn2"',
'LIMIT 5;'
])
A: Taken from The Hitchhiker's Guide to Python (Line Continuation):
When a logical line of code is longer than the accepted limit, you need to split it over multiple physical lines. The Python interpreter will join consecutive lines if the last character of the line is a backslash. This is helpful in some cases, but should usually be avoided because of its fragility: a white space added to the end of the line, after the backslash, will break the code and may have unexpected results.
A better solution is to use parentheses around your elements. Left with an unclosed parenthesis on an end-of-line the Python interpreter will join the next line until the parentheses are closed. The same behaviour holds for curly and square braces.
However, more often than not, having to split a long logical line is a sign that you are trying to do too many things at the same time, which may hinder readability.
Having said that, here's an example considering multiple imports (when exceeding line limits, as defined in PEP 8), also applicable to strings in general:
from app import (
app, abort, make_response, redirect, render_template, request, session
)
A: Put a \ at the end of your line or enclose the statement in parens ( .. ). From IBM:
b = ((i1 < 20) and
(i2 < 30) and
(i3 < 40))
or
b = (i1 < 20) and \
(i2 < 30) and \
(i3 < 40)
A: You can break lines in between parenthesises and braces. Additionally, you can append the backslash character \ to a line to explicitly break it:
x = (tuples_first_value,
second_value)
y = 1 + \
2
A: From PEP 8 -- Style Guide for Python Code:
The preferred way of wrapping long lines is by using Python's implied line continuation inside parentheses, brackets and braces. Long lines can be broken over multiple lines by wrapping expressions in parentheses. These should be used in preference to using a backslash for line continuation.
Backslashes may still be appropriate at times. For example, long, multiple with-statements cannot use implicit continuation, so backslashes are acceptable:
with open('/path/to/some/file/you/want/to/read') as file_1, \
open('/path/to/some/file/being/written', 'w') as file_2:
file_2.write(file_1.read())
Another such case is with assert statements.
Make sure to indent the continued line appropriately. The preferred place to break around a binary operator is after the operator, not before it. Some examples:
class Rectangle(Blob):
def __init__(self, width, height,
color='black', emphasis=None, highlight=0):
if (width == 0 and height == 0 and
color == 'red' and emphasis == 'strong' or
highlight > 100):
raise ValueError("sorry, you lose")
if width == 0 and height == 0 and (color == 'red' or
emphasis is None):
raise ValueError("I don't think so -- values are %s, %s" %
(width, height))
Blob.__init__(self, width, height,
color, emphasis, highlight)
PEP8 now recommends the opposite convention (for breaking at binary operations) used by mathematicians and their publishers to improve readability.
Donald Knuth's style of breaking before a binary operator aligns operators vertically, thus reducing the eye's workload when determining which items are added and subtracted.
From PEP8: Should a line break before or after a binary operator?:
Donald Knuth explains the traditional rule in his Computers and Typesetting series: "Although formulas within a paragraph always break after binary operations and relations, displayed formulas always break before binary operations"[3].
Following the tradition from mathematics usually results in more readable code:
# Yes: easy to match operators with operands
income = (gross_wages
+ taxable_interest
+ (dividends - qualified_dividends)
- ira_deduction
- student_loan_interest)
In Python code, it is permissible to break before or after a binary operator, as long as the convention is consistent locally. For new code Knuth's style is suggested.
[3]: Donald Knuth's The TeXBook, pages 195 and 196
A:
From the horse's mouth: Explicit line
joining
Two or more physical lines may be
joined into logical lines using
backslash characters (\), as follows:
when a physical line ends in a
backslash that is not part of a string
literal or comment, it is joined with
the following forming a single logical
line, deleting the backslash and the
following end-of-line character. For
example:
if 1900 < year < 2100 and 1 <= month <= 12 \
and 1 <= day <= 31 and 0 <= hour < 24 \
and 0 <= minute < 60 and 0 <= second < 60: # Looks like a valid date
return 1
A line ending in a backslash cannot
carry a comment. A backslash does not
continue a comment. A backslash does
not continue a token except for string
literals (i.e., tokens other than
string literals cannot be split across
physical lines using a backslash). A
backslash is illegal elsewhere on a
line outside a string literal.
A: If you want to break your line because of a long literal string, you can break that string into pieces:
long_string = "a very long string"
print("a very long string")
will be replaced by
long_string = (
"a "
"very "
"long "
"string"
)
print(
"a "
"very "
"long "
"string"
)
Output for both print statements:
a very long string
Notice the parentheses in the assignment.
Notice also that breaking literal strings into pieces allows to use the literal prefix only on parts of the string and mix the delimiters:
s = (
'''2+2='''
f"{2+2}"
)
A: What is the line? You can just have arguments on the next line without any problems:
a = dostuff(blahblah1, blahblah2, blahblah3, blahblah4, blahblah5,
blahblah6, blahblah7)
Otherwise you can do something like this:
if (a == True and
b == False):
or with explicit line break:
if a == True and \
b == False:
Check the style guide for more information.
Using parentheses, your example can be written over multiple lines:
a = ('1' + '2' + '3' +
'4' + '5')
The same effect can be obtained using explicit line break:
a = '1' + '2' + '3' + \
'4' + '5'
Note that the style guide says that using the implicit continuation with parentheses is preferred, but in this particular case just adding parentheses around your expression is probably the wrong way to go.
A: One can also break the call of methods (obj.method()) across multiple lines.
Enclose the command in parentheses "()" and span multiple lines:
res = (some_object
.apply(args)
.filter()
.values)
For instance, I find it useful when chaining calls to Pandas/Holoviews object methods.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1332"
} |
Q: Fixed vs. variable frame rates in games: what is best, and when? After working for a while developing games, I've been exposed to both variable frame rates (where you work out how much time has passed since the last tick and update actor movement accordingly) and fixed frame rates (where you work out how much time has passed and choose either to tick a fixed amount of time or sleep until the next window comes).
Which method works best for specific situations? Please consider:
*
*Catering to different system specifications;
*Ease of development/maintenance;
*Ease of porting;
*Final performance.
A: I lean towards a variable framerate model, but internally some systems are ticked on a fixed timestep. This is quite easy to do by using a time accumulator. Physics is one system which is best run on a fixed timestep, and ticked multiple times per frame if necessary to avoid a loss in stability and keep the simulation smooth.
A bit of code to demonstrate the use of an accumulator:
const float STEP = 1.f / 60.f; // 60 Hz fixed timestep, assuming delta is in seconds
float accumulator = 0.f;
void Update(float delta)
{
accumulator += delta;
while(accumulator > STEP)
{
Simulate(STEP);
accumulator -= STEP;
}
}
This is not perfect by any means but presents the basic idea - there are many ways to improve on this model. Obviously there are issues to be sorted out when the input framerate is obscenely slow. However, the big advantage is that no matter how fast or slow the delta is, the simulation is moving at a smooth rate in "player time" - which is where any problems will be perceived by the user.
Generally I don't get into the graphics & audio side of things, but I don't think they are affected as much as Physics, input and network code.
A: It seems that most 3D developers prefer variable FPS: the Quake, Doom and Unreal engines all scale up and down based on system performance.
*
*At the very least you have to compensate for too fast frame rates (unlike 80's games running in the 90's, way too fast)
*Your main loop should be parameterized by the timestep anyhow, and as long as it's not too long, a decent integrator like RK4 should handle the physics smoothly. Some types of animation (keyframed sprites) could be a pain to parameterize. Network code will need to be smart as well, to keep players with faster machines from shooting too many bullets, for example, but this kind of throttling will need to be done for latency compensation anyhow (the animation parameterization would help hide network lag too)
*The timing code will need to be modified for each platform, but it's a small localized change (though some systems make extremely accurate timing difficult, Windows, Mac, Linux seem ok)
*Variable frame rates allow for maximum performance. Fixed frame rates allow for consistent performance but will never reach max on all systems (that seems to be a show stopper for any serious game)
If you are writing a networked 3D game where performance matters I'd have to say, bite the bullet and implement variable frame rates.
If it's a 2D puzzle game you probably can get away with a fixed frame rate, maybe slightly parameterized for super slow computers and next years models.
A: One option that I, as a user, would like to see more often is dynamically changing the level of detail (in the broad sense, not just the technical sense) when framerates vary outside of a certain envelope. If you are rendering at 5FPS, then turn off bump-mapping. If you are rendering at 90FPS, increase the bells and whistles a bit, and give the user some prettier images to waste their CPU and GPU with.
If done right, the user should get the best experience out of the game without having to go into the settings screen and tweak it themselves, and you should have to worry less, as a level designer, about keeping the polygon count the same across different scenes.
Of course, I say this as a user of games, and not a serious one at that -- I've never attempted to write a nontrivial game.
A: The main problem I've encountered with variable length frame times is floating point precision, and variable frame times can surprise you in how they bite you.
If, for example, you're adding the frame time * velocity to a position, and frame time gets very small, and position is largish, your objects can slow down or stop moving because all your delta was lost due to precision. You can compensate for this using a separate error accumulator, but it's a pain.
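One way to build such an error accumulator is Kahan-style compensated summation; a sketch (the type and its names are illustrative, and aggressive floating-point optimization flags can defeat it):
struct CompensatedFloat
{
    float value;
    float error;
    CompensatedFloat() : value(0.f), error(0.f) {}

    void add(float delta)
    {
        float y = delta - error;     // re-inject last frame's lost bits
        float t = value + y;
        error = (t - value) - y;     // rounding error of this addition
        value = t;
    }
};
// per frame: position.add(velocity * dt);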
Having fixed frame times (or at least a lower bound on frame length) allows you to control how much FP error you need to take into account.
A: My experience is fairly limited to somewhat simple games (developed with SDL and C++) but I have found that it is quite easy just to implement a static frame rate. Are you working with 2d or 3d games? I would assume that more complex 3d environments would benefit more from a variable frame rate and that the difficulty would be greater.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: On Disk Substring index I have a file (fasta file to be specific) that I would like to index, so that I can quickly locate any substring within the file and then find the location within the original fasta file.
This would be easy to do in many cases, using a trie or suffix array; unfortunately, the strings I need to index are 800+ MB, which means that building the index in memory is unacceptable, so I'm looking for a reasonable way to create this index on disk, with minimal memory usage.
(edit for clarification)
I am only interested in the headers of proteins, so for the largest database I'm interested in, this is about 800 MBs of text.
I would like to be able to find an exact substring within O(N) time based on the input string. This must be usable on 32-bit machines, as it will be shipped to random people, who are not expected to have 64-bit machines.
I want to be able to index against any word break within a line, to the end of the line (though lines can be several MBs long).
Hopefully this clarifies what is needed and why the current solutions given are not illuminating.
I should also add that this needs to be done from within Java, and must be done on client computers running various operating systems, so I can't use any OS-specific solution, and it must be a programmatic solution.
A: In some languages programmers have access to "direct byte arrays" or "memory maps", which are provided by the OS. In Java we have java.nio.MappedByteBuffer. This allows one to work with the data as if it were a byte array in memory, when in fact it is on the disk. The size of the file one can work with is only limited by the OS's virtual memory capabilities, and is typically ~<4GB for 32-bit computers. 64-bit? In theory 16 exabytes (17.2 billion GBs), but I think modern CPUs are limited to a 40-bit (1TB) or 48-bit (128TB) address space.
This would let you easily work with the one big file.
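A minimal Java sketch (the file name is illustrative; the imports are java.io.RandomAccessFile, java.nio.MappedByteBuffer and java.nio.channels.FileChannel, and an 800 MB file fits in one mapping, since a single MappedByteBuffer is limited to 2 GB):
RandomAccessFile raf = new RandomAccessFile("proteins.fasta", "r");
FileChannel ch = raf.getChannel();
// Map the whole file read-only; the OS pages it in on demand.
MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
byte b = buf.get(123456); // random access, as if it were a byte[] in memory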
A: The FASTA file format is very sparse. The first thing I would do is generate a compact binary format, and index that - it should be maybe 20-30% the size of your current file, and the process for coding/decoding the data should be fast enough (even with 4GB) that it won't be an issue.
At that point, your file should fit within memory, even on a 32 bit machine. Let the OS page it, or make a ramdisk if you want to be certain it's all in memory.
Keep in mind that memory is only around $30 a GB (and getting cheaper) so if you have a 64 bit OS then you can even deal with the complete file in memory without encoding it into a more compact format.
Good luck!
-Adam
A: I talked to a few co-workers and they just use VIM/Grep to search when they need to. Most of the time I wouldn't expect someone to search for a substring like this though.
But I don't see why MS Desktop search or spotlight or google's equivalent can't help you here.
My recommendation is splitting the file up -- by gene or species, hopefully the input sequences aren't interleaved.
A: I don't imagine that the original poster still has this problem, but anyone needing FASTA file indexing and subsequence extraction should check out fastahack: http://github.com/ekg/fastahack
It uses an index file to count newlines and sequence start offsets. Once the index is generated you can rapidly extract subsequences; the extraction is driven by fseek64.
It will work very, very well in the case that your sequences are as long as the poster's. However, if you have many thousands or millions of sequences in your FASTA file (as is the case with the outputs from short-read sequencing or some de novo assemblies) you will want to use another solution, such as a disk-backed key-value store.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Prompt for Database Connection String I would like to offer a database connection prompt to the user. I can build my own, but it would be nice if I can use something that somebody else has already built (maybe something built into Windows or a free library available on the Internet). Anybody know how to do this in .Net?
EDIT: I found this and thought it was interesting: Showing a Connection String prompt in a WinForm application. This only works for SQL Server connections though.
A: ADO.NET has the handy ConnectionStringBuilder which will construct and validate a connection string. This would at least take the grunt work out of one part, allowing you to create a simple dialog box for the input.
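For example, with SqlConnectionStringBuilder (the TextBox/CheckBox names below are hypothetical fields on such a dialog):
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
builder.DataSource = serverTextBox.Text;          // e.g. @"localhost\SQLEXPRESS"
builder.InitialCatalog = databaseTextBox.Text;
builder.IntegratedSecurity = integratedCheckBox.Checked;
if (!builder.IntegratedSecurity)
{
    builder.UserID = userTextBox.Text;
    builder.Password = passwordTextBox.Text;
}
string connectionString = builder.ConnectionString; // assembled and escaped for you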
A: Microsoft has released the source code for the data connection dialog on Code Gallery.
Here is a blog post from Yaohai with more info, and here is the home of the Data Connection Dialog on Code Gallery.
A: You might want to try using SQL Server Management Objects. This MSDN article has a good sample for prompting and connecting to a SQL server.
A: I combined the PropertyGrid Class with the SqlConnectionStringBuilder Class in a separate dialog and that worked really well for me.
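That combination is only a couple of lines; a sketch (propertyGrid1 is an assumed WinForms control on the dialog):
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
propertyGrid1.SelectedObject = builder;  // user edits Data Source etc. in the grid
// when the dialog is accepted:
string connectionString = builder.ConnectionString;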
A: The only "built in" connection string functionality that I could think of is the one that comes up when you run a CMD script (essentially a batch file) that runs SQL scripts. However I'm not sure if it's something built into Visual Studio.
It's really simple to make one anyway. If you don't want the user to be able to input a straight-out connection string, you can put together one made up of four textboxes and a checkbox:
*
*Server
*Catalog Name
*checkbox for integrated security or SQL Authentication
*Username
*Password
Fairly trivial, IMHO.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: HelpInsight documentation in Delphi 2007 I am using D2007 and am trying to document my source code, using the HelpInsight feature (provided since D2005). I am mainly interested in getting the HelpInsight tool-tips working. From various Web-surfing and experimentation I have found the following:
*
*Using the triple slash (///) comment style works more often than the other documented comment styles. i.e.: {*! comment *} and {! comment }
*The comments must precede the declaration that they are for. For most cases this will mean placing them in the interface section of the code. (The obvious exception is for types and functions that are not accessible from outside the current unit and are therefore declared in the implementation block.)
*The first comment cannot be for a function. (i.e. it must be for a type - or at least it appears the parser must have seen the "type" keyword before the HelpInsight feature works)
Despite following these "rules", sometimes the Help-insight just doesn't find the comments I've written. One file does not produce the correct HelpInsight tool-tips, but if I include this file in a different dummy project, it works properly.
Does anyone have any other pointers / tricks for getting HelpInsight to work?
A: I have discovered another caveat (which in my case was what was "wrong")
It appears that the unit with the HelpInsight comments must be explicitly added to the project. It is not sufficient to simply have the unit in a path that is searched when compiling the project.
In other words, the unit must be included in the Project's .dpr / .dproj file. (Using the Project | "Add to Project" menu option)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How do I automatically destroy child processes in Windows? In a C++ Windows app, I launch several long-running child processes (currently I use CreateProcess(...) to do this).
I want the child processes to be automatically closed if my main process crashes or is closed.
Because of the requirement that this needs to work for a crash of the "parent", I believe this would need to be done using some API/feature of the operating system. So that all the "child" processes are cleaned up.
How do I do this?
A: The Windows API supports objects called "Job Objects". The following code will create a "job" that is configured to shut down all processes when the main application ends (when its handles are cleaned up). This code should only be run once.:
HANDLE ghJob = CreateJobObject( NULL, NULL); // GLOBAL
if( ghJob == NULL)
{
::MessageBox( 0, "Could not create job object", "TEST", MB_OK);
}
else
{
JOBOBJECT_EXTENDED_LIMIT_INFORMATION jeli = { 0 };
// Configure all child processes associated with the job to terminate when the
jeli.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
if( 0 == SetInformationJobObject( ghJob, JobObjectExtendedLimitInformation, &jeli, sizeof(jeli)))
{
::MessageBox( 0, "Could not SetInformationJobObject", "TEST", MB_OK);
}
}
Then when each child process is created, execute the following code to launch each child each process and add it to the job object:
STARTUPINFO info={sizeof(info)};
PROCESS_INFORMATION processInfo;
// Launch child process - example is notepad.exe
if (::CreateProcess( NULL, "notepad.exe", NULL, NULL, TRUE, 0, NULL, NULL, &info, &processInfo))
{
::MessageBox( 0, "CreateProcess succeeded.", "TEST", MB_OK);
if(ghJob)
{
if(0 == AssignProcessToJobObject( ghJob, processInfo.hProcess))
{
::MessageBox( 0, "Could not AssignProcessToObject", "TEST", MB_OK);
}
}
// Can we free handles now? Not sure about this.
//CloseHandle(processInfo.hProcess);
CloseHandle(processInfo.hThread);
}
VISTA NOTE: See AssignProcessToJobObject always return "access denied" on Vista if you encounter access-denied issues with AssignProcessToObject() on vista.
A: One somewhat hackish solution would be for the parent process to attach to each child as a debugger (use DebugActiveProcess). When a debugger terminates all its debuggee processes are terminated as well.
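A sketch of that hack (childProcessId is assumed to be the child's process id; the parent must keep servicing debug events, or the child will stall on them):
if (DebugActiveProcess(childProcessId))
{
    DebugSetProcessKillOnExit(TRUE); // the default, made explicit
    DEBUG_EVENT ev;
    while (WaitForDebugEvent(&ev, INFINITE))
    {
        if (ev.dwDebugEventCode == EXIT_PROCESS_DEBUG_EVENT)
            break;
        // real code should pass DBG_EXCEPTION_NOT_HANDLED for exception events
        ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, DBG_CONTINUE);
    }
}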
A better solution (assuming you wrote the child processes as well) would be to have the child processes monitor the parent and exit if it goes away.
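A minimal sketch of that monitoring approach in the child might look like this (assuming the parent passes its process ID on the command line; the function names are my own):
#include <windows.h>
static DWORD WINAPI WatchParent(LPVOID param)
{
    // Returns as soon as the parent process terminates for any reason.
    ::WaitForSingleObject((HANDLE)param, INFINITE);
    ::ExitProcess(0);
    return 0;
}
void StartParentWatchdog(DWORD parentPid)
{
    // SYNCHRONIZE access is all that is needed to wait on a process handle.
    HANDLE hParent = ::OpenProcess(SYNCHRONIZE, FALSE, parentPid);
    if (hParent)
        ::CreateThread(NULL, 0, WatchParent, hParent, 0, NULL);
}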
A: Windows Job Objects sounds like a good place to start. The name of the Job Object would have to be well-known, or passed to the children (or inherit the handle). The children would need to notice when the parent dies, either through a failed IPC "heartbeat" or just WFMO/WFSO on the parent's process handle. At that point any child process could call TerminateJobObject to bring down the whole group.
A: You can keep a separate watchdog process running. Its only task is watching the current process space to spot situations like you describe. It could even re-launch the original application after a crash or provide different options to the user, collect debug information, etc. Just try to keep it simple enough so that you don't need a second watchdog to watch the first one.
A: You can assign a job to the parent process before creating processes:
static HANDLE hjob_kill_on_job_close = NULL; // CreateJobObject returns NULL on failure, so NULL is the right sentinel
void init(){
hjob_kill_on_job_close = CreateJobObject(NULL, NULL);
if (hjob_kill_on_job_close){
JOBOBJECT_EXTENDED_LIMIT_INFORMATION jobli = { 0 };
jobli.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
SetInformationJobObject(hjob_kill_on_job_close,
JobObjectExtendedLimitInformation,
&jobli, sizeof(jobli));
AssignProcessToJobObject(hjob_kill_on_job_close, GetCurrentProcess());
}
}
void deinit(){
if (hjob_kill_on_job_close) {
CloseHandle(hjob_kill_on_job_close);
}
}
JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE causes all processes associated with the job to terminate when the last handle to the job is closed. By default, all child processes will be assigned to the job automatically, unless you passed CREATE_BREAKAWAY_FROM_JOB when calling CreateProcess. See https://learn.microsoft.com/en-us/windows/win32/procthread/process-creation-flags for more information about CREATE_BREAKAWAY_FROM_JOB.
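For illustration, a child that must outlive the job could be launched with that flag (note this only succeeds if the job was also given the JOB_OBJECT_LIMIT_BREAKAWAY_OK limit; otherwise CreateProcess fails):
STARTUPINFO si = { sizeof(si) };
PROCESS_INFORMATION pi;
// This child breaks away from the parent's job and will survive it.
::CreateProcess(NULL, "notepad.exe", NULL, NULL, FALSE,
                CREATE_BREAKAWAY_FROM_JOB, NULL, NULL, &si, &pi);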
You can use Process Explorer from Sysinternals to make sure all processes are assigned to the job.
A: You'd probably have to keep a list of the processes you start, and kill them off one by one when you exit your program. I'm not sure of the specifics of doing this in C++, but it shouldn't be hard. The difficult part would probably be ensuring that child processes are shut down in the case of an application crash. .NET has the ability to add a function that gets called when an unhandled exception occurs. I'm not sure if C++ offers the same capabilities.
A: You could encapsulate each process in a C++ object and keep a list of them in global scope. The destructors can shut down each process. That will work fine if the program exits normally, but if it crashes, all bets are off.
Here is a rough example:
class myprocess
{
public:
myprocess(HANDLE hProcess)
: _hProcess(hProcess)
{ }
~myprocess()
{
    TerminateProcess(_hProcess, 0);
    CloseHandle(_hProcess); // avoid leaking the process handle
}
private:
HANDLE _hProcess;
};
std::list<myprocess> allprocesses;
Then whenever you launch one, call allprocesses.emplace_back(hProcess); (C++11). Note that the class should also be made non-copyable, or held by pointer, so that the destructor of a destroyed temporary copy doesn't terminate the process prematurely.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "71"
} |
Q: Is there an ASP.NET pagination control (Not MVC)? I've got a search results page that basically consists of a repeater with content in it. What I need is a way to paginate the results. Getting paginated results isn't the problem; what I'm after is a web control that will display a list of the available paged data, preferably by providing the number of results and a page size.
A: Repeaters don't do this by default.
However, GridViews do.
Personally, I hate GridViews, so I wrote a Paging/Sorting Repeater control.
Basic Steps:
*
*Subclass the Repeater Control
*Add a private PagedDataSource to it
*Add a public PageSize property
*Override Control.DataBind
*
*Store the Control.DataSource in the PagedDataSource.
*Bind the Control.DataSource to PagedDataSource
*Override Control.Render
*
*Call Base.Render()
*Render your paging links.
For a walkthrough, you could try this link:
https://web.archive.org/web/20210925054103/http://aspnet.4guysfromrolla.com/articles/081804-1.aspx
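As a rough illustration of those steps, a minimal sketch might look like this (the class name and the postback wiring are my own; treat it as a starting point rather than the original control):
using System.Collections;
using System.Web.UI;
using System.Web.UI.WebControls;
public class PagedRepeater : Repeater, IPostBackEventHandler
{
    private readonly PagedDataSource pagedSource = new PagedDataSource();
    public int PageSize { get; set; }
    public int CurrentPageIndex
    {
        get { return (int)(ViewState["CurrentPageIndex"] ?? 0); }
        set { ViewState["CurrentPageIndex"] = value; }
    }
    public override void DataBind()
    {
        // Wrap whatever DataSource the page assigned in a PagedDataSource.
        pagedSource.DataSource = (IEnumerable)DataSource;
        pagedSource.AllowPaging = true;
        pagedSource.PageSize = PageSize;
        pagedSource.CurrentPageIndex = CurrentPageIndex;
        DataSource = pagedSource;
        base.DataBind();
    }
    protected override void Render(HtmlTextWriter writer)
    {
        base.Render(writer);
        // Render numeric page links that post back with the page index.
        for (int i = 0; i < pagedSource.PageCount; i++)
        {
            writer.Write("<a href=\"" +
                Page.ClientScript.GetPostBackClientHyperlink(this, i.ToString()) +
                "\">" + (i + 1) + "</a> ");
        }
    }
    public void RaisePostBackEvent(string eventArgument)
    {
        // The page must reassign DataSource and call DataBind after this runs.
        CurrentPageIndex = int.Parse(eventArgument);
    }
}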
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Getting international characters from a web page? I want to scrape some information off a football (soccer) web page using simple python regexp's. The problem is that players such as the first chap, ÄÄRITALO, come out as &#196;&#196;RITALO!
That is, html uses escaped markup for the special characters, such as &#196;
Is there a simple way of reading the html into the correct python string? If it were XML/XHTML it would be easy; the parser would do it.
A: I would recommend BeautifulSoup for HTML scraping. You also need to tell it to convert HTML entities to the corresponding Unicode characters, like so:
>>> from BeautifulSoup import BeautifulSoup
>>> html = "<html>ÄÄRITALO!</html>"
>>> soup = BeautifulSoup(html, convertEntities=BeautifulSoup.HTML_ENTITIES)
>>> print soup.contents[0].string
ÄÄRITALO!
(It would be nice if the standard codecs module included a codec for this, such that you could do "some_string".decode('html_entities') but unfortunately it doesn't!)
EDIT:
Another solution:
Python developer Fredrik Lundh (author of elementtree, among other things) has a function to unescape HTML entities on his website, which works with decimal, hex and named entities (BeautifulSoup will not work with the hex ones).
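His function is roughly along these lines (a Python 2 sketch modeled on it; see his page for the canonical version):
import re
import htmlentitydefs
def unescape(text):
    # Replace &#...; character references and named entities in text.
    def fixup(m):
        ref = m.group(0)
        if ref.startswith("&#"):
            try:
                if ref.lower().startswith("&#x"):
                    return unichr(int(ref[3:-1], 16))  # hex reference
                return unichr(int(ref[2:-1]))          # decimal reference
            except ValueError:
                pass
        else:
            try:
                return unichr(htmlentitydefs.name2codepoint[ref[1:-1]])
            except KeyError:
                pass
        return ref  # leave unrecognised references untouched
    return re.sub(r"&#?\w+;", fixup, text)
print unescape(u"&#196;&#196;RITALO!")  # prints ÄÄRITALO!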
A: Try using BeautifulSoup. It should do the trick and give you a nicely formatted DOM to work with as well.
This blog entry seems to have had some success with it.
A: I haven't tried it myself, but have you tried
http://zesty.ca/python/scrape.html ?
It seems to have a method htmldecode(text) which would do what you want.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How do you check whether a python method is bound or not? Given a reference to a method, is there a way to check whether the method is bound to an object or not? Can you also access the instance that it's bound to?
A: def isbound(method):
return method.im_self is not None
def instance(bounded_method):
return bounded_method.im_self
User-defined methods:
When a user-defined method object is created by retrieving a user-defined function object from a class, its im_self attribute is None and the method object is said to be unbound. When one is created by retrieving a user-defined function object from a class via one of its instances, its im_self attribute is the instance, and the method object is said to be bound. In either case, the new method's im_class attribute is the class from which the retrieval takes place, and its im_func attribute is the original function object.
In Python 2.6 and 3.0:
Instance method objects have new attributes for the object and function comprising the method; the new synonym for im_self is __self__, and im_func is also available as __func__. The old names are still supported in Python 2.6, but are gone in 3.0.
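For example, in Python 2:
class C(object):
    def m(self):
        pass
print isbound(C.m)     # False - unbound method, im_self is None
print isbound(C().m)   # True - bound to a C instance
print instance(C().m)  # the C instance the method is bound to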
A: im_self attribute (only Python 2)
A: In Python 3 the __self__ attribute is only set on bound methods. It is not present at all on plain functions (or on unbound methods, which are just plain functions in Python 3).
Use something like this:
def is_bound(m):
return hasattr(m, '__self__')
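For example:
class C:
    def m(self):
        pass
print(is_bound(C.m))    # False - just a plain function in Python 3
print(is_bound(C().m))  # True - bound method with __self__ set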
A: The chosen answer is valid in almost all cases. However, when the check is done inside a decorator using the chosen answer, it will fail. Consider this example decorator and method:
from functools import wraps
def my_decorator(*decorator_args, **decorator_kwargs):
def decorate(f):
print(hasattr(f, '__self__'))
@wraps(f)
def wrap(*args, **kwargs):
return f(*args, **kwargs)
return wrap
return decorate
class test_class(object):
@my_decorator()
def test_method(self, *some_params):
pass
The print statement in the decorator will print False, because at decoration time test_method is still a plain function that has not yet been bound to an instance.
In this case I can't find any other way but to check the function's parameters by name and look for one named self. This is not guaranteed to work flawlessly either, because the first argument of a method is not forced to be named self and can have any other name.
import inspect
def is_bounded(function):
    # At decoration time a method is still a plain function, so the best we
    # can do is look for a parameter named 'self' in its signature.
    params = inspect.signature(function).parameters
    return params.get('self', None) is not None
A: A solution that works for both Python 2 and 3 is tricky.
Using the package six, one solution could be:
import six
def is_bound_method(f):
"""Whether f is a bound method"""
try:
return six.get_method_self(f) is not None
except AttributeError:
return False
In Python 2:
*
*A regular function won't have the im_self attribute so six.get_method_self() will raise an AttributeError and this will return False
*An unbound method will have the im_self attribute set to None so this will return False
*A bound method will have the im_self attribute set to non-None so this will return True
In Python 3:
*
*A regular function won't have the __self__ attribute so six.get_method_self() will raise an AttributeError and this will return False
*An unbound method is the same as a regular function so this will return False
*A bound method will have the __self__ attribute set (to non-None) so this will return True
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
} |