Q: Advice on how to be graphically creative I've always felt that my graphic design skills have lacked, but I do have a desire to improve them. Even though I'm not the world's worst artist, it's discouraging to see the results from a professional designer, who can do an amazing mockup from a simple spec in just a few hours. I always wonder how they came up with their design and more importantly, how they executed it so quickly. I'd like to think that all good artists aren't naturally gifted. I'm guessing that a lot of skill/talent comes from just putting in the time. Is there a recommended path to right brain nirvana for someone starting from scratch, a little later in life? I'd be interested in book recommendations, personal theories, or anything else that may shed some light on the best path to take. I have questions like should I read books about color theory, should I draw any chance I have, should I analyze shapes like an architect, etc... As far as my current skills go, I can make my way around Photoshop enough where I can do simple image manipulation... Thanks for any advice A: Most artistic talent comes from putting in the time. However, as in most skills, practicing bad habits doesn't help you progress. You need to learn basic drawing skills (form, mainly) and practice doing them well and right (which means slowly). As you practice correctly, you'll improve much faster. This is the kind of thing that changes you from a person who says, "It doesn't look right, but I can't tell why - it's just 'off' somehow" to a person who says, "Oops, the arm is a bit long. If I shorten the elbow end it'll change the piece in this way, if I shorten the hand end it'll change the piece this way..." So you've got to study the forms you intend to draw, and recognize their internally related parts (the body height is generally X times the size of the head, the arms and legs are related in size but vary from the torso, etc). Same thing with buildings, physical objects, etc. Another thing that will really help you is understanding light and shadow - humans pick up on shape relationships based on outlines and based on shadows. Color theory is something that will make your designs attractive, or evoke certain responses and emotions, but until you get the form and lighting right the colors are not something you should stress. That's one reason why art books and classes focus so much on monochrome drawings. There are books and classes out there for these subjects - I could recommend some, but what you really need is to look at them yourself and pick the ones that appeal to you. You won't want to learn if you don't like drawing fruit bowls, and that's all your book does. Though you shouldn't avoid what you don't like, given that you're going the self-taught route you should make it easy in the beginning, and then force yourself to draw the uninteresting and bland once you've got a bit of confidence and speed so you can go through those barriers more quickly. Good luck! -Adam A: That's a difficult thing. Usually people think "artistic skills" come from your genes but actually they do not. The best graphic designers I know have some sort of education in the arts. Of course, Photoshop knowledge will allow you to do things but being interested in art (painting especially) will improve your sensitivity and your "good taste". Painting is a pleasure, both doing it and seeing it. Learning to both understand and enjoy it will help and the best way to do it is by going to museums. 
I try to go to as many exhibitions as I can, as well as read what I can on authors and styles (Picasso, Monet, Dali, Magritte, Expressionism, Impressionism, Cubism, etc.) that will give you a general overview that WILL help. On the other hand... you are a programmer so you shouldn't be in charge of actually drawing the icons or designing the enterprise logo. You should however be familiar with user interface design, especially with ease of use, and terms like goal-oriented design. Of course, in a sufficiently large company you won't be in charge of the UI design either, but it will help anyway. I'd recommend the book About Face, which centers on goal-oriented design as well as going through some user interface metaphors and giving some historical background for the matter. A: I'm no artist and I'm colorblind, but I have been able to do fairly well with track creation for Motocross Madness and other games of that type (http://twisteddirt.com & http://dirttwister.com). Besides being familiar with the toolset I believe it helps to bring out your inner artist. I found that the book "Drawing on the Right Side of the Brain" was an amazing eye-opening experience for me. One of the tricks that it uses is for you to draw a fairly complicated picture while looking at the picture upside down. If I had drawn it while looking at it right side up it would have looked horrible. I impressed myself with what I was able to draw by copying it while it was upside down. I did this many years ago. I just looked at their website and I think I will order the updated book and check out their DVD. A: I have a BFA in Graphic Design, although I don't use it much lately. Here's my $.02. Get a copy of "Drawing on the Right Side of the Brain" and go through it. You will become a better artist/drawer as a result and I'm a firm believer that if you can't do it with pencil/paper you won't be successful on the computer. Also go to the bookstore and pick up a copy of How or one of the other publications. I maintain a subscription to How just for inspiration. I'll see if I can dig up some web links tonight for resources (although I'm sure others will provide some). Most importantly, carry a sketch book and use it. Draw. Draw. Draw. A: Drawing is probably what I'd recommend the most. Whenever you have a chance, just start drawing. Keep in mind that what you draw doesn't have to be original; it's a perfectly natural learning tool to try and duplicate someone else's work. You'll learn a lot. If you look at all the great masters, they had understudies who actually did part of their masters' works, so fight that "it must be original" instinct that school has instilled in you, and get duplicating. (Just make sure you either destroy or properly label these attempts as copies--you don't want to accidentally use them later and then be accused of plagiarism.) I have a couple of friends in the animation sector, and one of them told me that while she was going through college, the way she was taught to draw the human body was to go through each body part, and draw it 100 times, each in a completely different pose. This gets you comfortable with the make-up of the object, and helps you get intimately knowledgeable about how it'll look from various positions. (That may not apply directly to what you're doing, but it should give you an indicator as to the amount of discipline that may be involved in getting to the point you seek.) Definitely put together a library of stuff that you can look to for inspiration. 
Value physical media that you can flip through over websites; it's much quicker to flip through a picture book than it is to search your bookmarks online. When it comes to getting your imagination fired up, having to meticulously click and wait repeatedly is going to be counter-productive. A: Inspiration is probably your biggest asset. Like creative writing, and even programming, looking at what people have done and how they have done it will give you tools to put in your toolbox. But in the sense of graphic design (Photoshop, Illustrator, etc.), just like programmers don't enjoy reinventing the wheel, I don't think artwork is any different. Search the web for 'pieces' that you can manipulate (vector graphics: example). Run through tutorials that can easily give you some tricks. Sketch out a very rough idea, and look through web images to find something that has already been created. It's like anything else that you wish to master, or become proficient in. If you want it, you've got to practice it over, and over, and over. A: I, too, was not born with a strong design skillset, in fact quite the opposite. When I started out, my philosophy was that if the page or form just works then my job was done! Over the years though, I've improved. Although I believe I'll never be as good as someone who was born with the skills, sites like CSS Zen Garden among others have helped me a lot. Read into usability too, as I think usability and design for computer applications are inextricably entwined. Books from Don Norman's "The Design of Everyday Things" to Steve Krug's "Don't Make Me Think" have all helped improve my 'design skills'... slightly! ;-) Good luck with it. A: As I mentioned in a thread yesterday, I have found working through tutorials for Adobe Photoshop, Illustrator, InDesign, and After Effects to be very helpful. I use Adobe's Kuler site for help with colors. I think that designers spend a lot of time looking at others' designs. Some of the books out there on web site design might help, even for designing applications. Adobe TV has a lot of short videos on graphic design in general, as well as achieving particular results in one of their tools. I find these videos quite helpful.
{ "language": "en", "url": "https://stackoverflow.com/questions/32493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Visual Studio identical token highlighting I coded a Mancala game in Java for a college class this past spring, and I used the Eclipse IDE to write it. One of the great (and fairly simple) visual aids in Eclipse is if you select a particular token, say a declared variable, then the IDE will automatically highlight all other references to that token on your screen. Notepad++, my preferred Notepad replacement, also does this. Another neat and similar feature in Eclipse was the vertical "error bar" to the right of your code (not sure what to call it). It displays little red boxes for all of the syntax errors in your document, yellow boxes for warnings like "variable declared but not used", and if you select a word, boxes appear in the bar for each occurrence of the word in the document. A screenshot of these features in action: After a half hour of searching, I've determined that Visual Studio cannot do this on its own, so my question is: does anyone know of any add-ins for 2005 or 2008 that can provide either one of the aforementioned features? Being able to highlight the current line your cursor is on would be nice too. I believe the add-in ReSharper can do this, but I'd prefer to use a free add-in rather than purchase one. A: Old question but... Visual Studio 2010 has this feature built-in, at last. A: The highlight functionality is conveniently implemented in VisualAssist. In my opinion, they are both must-haves. 1) Highlight identifier under editing caret: Options -> Advanced -> Refactoring -> Automatically highlight references to symbol under cursor 2) Highlight search result - in all windows. Works for RegExps! Options -> Advanced -> Display -> Highlight find results A: There is a RockScroll alternative called MetalScroll which is essentially the same thing with a few tweaks and improvements. Also there is a small and simple WordLight plug-in that only highlights the identical tokens. Both are open source and support code folding which is nice. Imho, the bar next to the scroll bar in Eclipse is a much more elegant solution than the scroll bar replacement of RockScroll/MetalScroll. Unfortunately I couldn't find any VS plug-ins that do it the Eclipse way, so I just stick with WordLight. A: The automatic highlight is implemented in Visual Assist as the refactoring command "Find References". It highlights all occurrences of a given variable or method, but that's not automatic (bound to a keyboard shortcut on my computer). Here is an example: A: About RockScroll: It doesn't highlight the identifiers. It only highlights the same string in the source code! If there are similar identifiers declared, e.g. _test and test, and test is highlighted, it will highlight the string "test" in the variable _test too! And it will also highlight the same string in a method called "sometesting()". So it isn't exactly like Eclipse and doesn't work for me. A: DevExpress CodeRush does this when you press TAB while the cursor is in an identifier; you can then tab through all the highlighted instances. There's also a DXCore plugin (the foundation upon which CodeRush/Refactor Pro are built) that does current-line highlighting. A: In VS 2017, this can be solved by installing the Match Margin plugin. It appears to be part of the Productivity Power Tools (which might be worth looking at for other features), but surprisingly, installing PPT didn't solve the problem for me, I had to install Match Margin separately. 
A: Check the following add-ins: Productivity Power Tools (displays errors in the scrollbar) and Highlight selected word A: In a different question on SO (link), someone mentioned the VS 2005 / VS 2008 add-in "RockScroll". It seems to provide the "error bar" feature I was inquiring about in my question above. RockScroll EDIT: RockScroll also does the identical token highlighting that I was looking for! Great! A: The "error bar" functionality is provided in JetBrains ReSharper. I'm not sure if it does highlighting of references to the currently selected identifier. A: If you only need the selected-word highlight function, there is also StickyHighlight. StickyHighlight supports Visual Studio 2010 & 2012.
{ "language": "en", "url": "https://stackoverflow.com/questions/32494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68" }
Q: XML Parser Validation Report Most XML parsers will give up after the first error in a document. In fact, IIRC, that's actually part of the 'official' spec for parsers. I'm looking for something that will break that rule. It should take a given schema (assuming a valid schema) and an xml input and attempt to keep going after the first error and either raise an event for each error or return a list when finished, so I can use it to generate some kind of a report of the errors in the document. This requirement comes from above, so let's try to keep the purist "but it wouldn't make sense to keep going" comments to a minimum. I'm looking for something that will evaluate both whether the document is well-formed and whether or not it conforms to the schema. Ideally it would evaluate those as different classes of error. I'd prefer a .Net solution but I could use a standalone .exe as well. If you know of one that uses a different platform go ahead and post it because someone else might find it useful. Update: I expect that most of the documents where I use this will be mostly well-formed. Maybe an & included as data instead of &amp; here and there, or an occasional mis-placed tag. I don't expect the parser to be able to recover from anything, just to make a best-effort to keep going. If a document is too out of whack it should spit out as much as it can followed by some kind of 'fatal, unable to continue' error. Otherwise the schema validation part is pretty easy. A: In fact, IIRC, that's actually part of the 'official' spec for parsers. Official does not need to be quoted :) fatal error [Definition:] An error which a conforming XML processor must detect and report to the application. After encountering a fatal error, the processor may continue processing the data to search for further errors and may report such errors to the application. In order to support correction of errors, the processor may make unprocessed data from the document (with intermingled character data and markup) available to the application. Once a fatal error is detected, however, the processor must not continue normal processing (i.e., it must not continue to pass character data and information about the document's logical structure to the application in the normal way). You could use xmllint with the recover option. A: Sounds like you might want TagSoup. It may not be exactly what you want, but as far as bad-document-handling parsers go it's the gold standard. A: Xerces has a feature you can turn on to try and continue after a fatal error: http://apache.org/xml/features/continue-after-fatal-error True: Attempt to continue parsing after a fatal error. False: Stops parse on first fatal error. Default: false Note: The behavior of the parser when this feature is set to true is undetermined! Therefore use this feature with extreme caution because the parser may get stuck in an infinite loop or worse.
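To make the error-collecting part concrete in Java (a minimal sketch of my own using the standard JAXP validation API, not code from the answers above; "schema.xsd" and "input.xml" are placeholder names): an ErrorHandler whose error() method records the problem without throwing lets the validator keep going past recoverable schema violations, while fatalError() still marks the well-formedness failures that must end the parse, which conveniently yields the two classes of error the question asks for.

import java.util.ArrayList;
import java.util.List;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.ErrorHandler;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;

public class CollectingValidator {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new StreamSource("schema.xsd")); // placeholder name
        Validator validator = schema.newValidator();

        final List<String> report = new ArrayList<String>();
        validator.setErrorHandler(new ErrorHandler() {
            public void warning(SAXParseException e) { log("WARNING", e); }
            // recoverable schema violation: record it and keep validating
            public void error(SAXParseException e) { log("SCHEMA ERROR", e); }
            // well-formedness failure: record it, then the parse must stop
            public void fatalError(SAXParseException e) throws SAXException {
                log("FATAL", e);
                throw e;
            }
            private void log(String level, SAXParseException e) {
                report.add(level + " at line " + e.getLineNumber() + ": " + e.getMessage());
            }
        });

        try {
            validator.validate(new StreamSource("input.xml")); // placeholder name
        } catch (SAXException stopped) {
            // a fatal error ended the parse early; the report still holds everything seen so far
        }
        for (String entry : report) {
            System.out.println(entry);
        }
    }
}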
{ "language": "en", "url": "https://stackoverflow.com/questions/32505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Write files to App_Data under medium trust hack? Is there any way to write files to App_Data under medium trust? I'm sure I've heard about some hack, is that true? A: I don't think you are able to create new files, but you should be able to write to existing files in the App_Data folder. But I have honestly never experienced any problems with Medium Trust and writing to the App_Data folder. Are you sure it has the necessary permissions needed for writing files to the hard drive? A: I don't have access to the server itself, so I can't check that. I can only chmod files and folders from my FTP client. I think my hosting provider needs to grant write permission to the network service account on the App_Data folder.
{ "language": "en", "url": "https://stackoverflow.com/questions/32513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: List in JScrollPane painting outside the viewport I have a list, each item of which has several things in it, including a JProgressBar which can be updated a lot. Each time one of the items updates its JProgressBar, the ListDataListener on the list tries to scroll it to the visible range using /* * This makes the updating content item automatically scroll * into view if it is off the viewport. */ public void contentsChanged(final ListDataEvent evt) { if (!EventQueue.isDispatchThread()) { /** * Make sure the scrolling happens in the graphics "dispatch" thread. */ EventQueue.invokeLater(new Runnable() { public void run() { contentsChanged(evt); } }); } if (playbackInProgress) { int index = evt.getIndex0(); currentContentList.ensureIndexIsVisible(index); } } Note that I'm trying to make sure the scrolling is done in the dispatch thread, since I thought maybe the problem was it being scrolled while it was repainting. And yet, I still have a problem where if things are really active, some of the list items paint outside of the viewport, overwriting what's outside the JScrollPane. Forcing an exposure event will repaint those things, but it's annoying. Is there anything else I need to look out for to stop these things painting outside of their clipping area? A: Have you tried explicitly enabling double-buffering on the JList and/or the components that it is drawing over? (with setDoubleBuffered(boolean aFlag)) Another thought is that you might need to exit the function immediately after delegating to the EDT. The way your code is written, it looks like the update will happen in both threads if contentsChanged is invoked from a non-EDT thread. Logging in the first if (or setting a breakpoint in the if, but not in the runnable) should help determine if that is your problem. e.g.: public void contentsChanged(final ListDataEvent evt) { if (!EventQueue.isDispatchThread()) { log.debug("Delegating contentsChanged(...) to EDT"); EventQueue.invokeLater(new Runnable() { public void run() { contentsChanged(evt); } }); // don't run ensureIndexIsVisible twice: return; } if (playbackInProgress) { int index = evt.getIndex0(); currentContentList.ensureIndexIsVisible(index); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/32519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I restrict JFileChooser to a directory? I want to limit my users to a directory and its subdirectories but the "Parent Directory" button allows them to browse to an arbitrary directory. How should I go about doing that? A: Allain's solution is almost complete. Three problems remain to solve: * *Clicking the "Home"-Button kicks the user out of restrictions *DirectoryRestrictedFileSystemView is not accessible outside the package *Starting point is not Root * *Append @Override to DirectoryRestrictedFileSystemView public TFile getHomeDirectory() { return rootDirectories[0]; } * *set class and constructor public *Change JFileChooser fileChooser = new JFileChooser(fsv); into JFileChooser fileChooser = new JFileChooser(fsv.getHomeDirectory(),fsv); (see the consolidated sketch at the end of these answers) I use it for restricting users to stay in a zip-file via TrueZIP's TFileChooser and with slight modifications to the above code, this works perfectly. Thanks a lot. A: In case anyone else needs this in the future: class DirectoryRestrictedFileSystemView extends FileSystemView { private final File[] rootDirectories; DirectoryRestrictedFileSystemView(File rootDirectory) { this.rootDirectories = new File[] {rootDirectory}; } DirectoryRestrictedFileSystemView(File[] rootDirectories) { this.rootDirectories = rootDirectories; } @Override public File createNewFolder(File containingDir) throws IOException { throw new UnsupportedOperationException("Unable to create directory"); } @Override public File[] getRoots() { return rootDirectories; } @Override public boolean isRoot(File file) { for (File root : rootDirectories) { if (root.equals(file)) { return true; } } return false; } } You'll obviously need to make a better "createNewFolder" method, but this does restrict the user to one or more directories. And use it like this: FileSystemView fsv = new DirectoryRestrictedFileSystemView(new File("X:\\")); JFileChooser fileChooser = new JFileChooser(fsv); or like this: FileSystemView fsv = new DirectoryRestrictedFileSystemView( new File[] { new File("X:\\"), new File("Y:\\") }); JFileChooser fileChooser = new JFileChooser(fsv); A: You can probably do this by setting your own FileSystemView. A: No need to be that complicated. You can easily set the selection mode of a JFileChooser like this JFileChooser fc = new JFileChooser(); fc.setFileSelectionMode(JFileChooser.DIRECTORIES_ONLY); fc.setMultiSelectionEnabled(false); You can read more here: How to Use File Choosers
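To pull the fixes from the first answer together with the original class, here is a rough consolidated sketch (my own assembly, using plain java.io.File where the TrueZIP variant would use TFile): the public class and constructor, the getHomeDirectory() override that closes the "Home" button escape hatch, and the chooser being started at fsv.getHomeDirectory():

import java.io.File;
import java.io.IOException;
import javax.swing.JFileChooser;
import javax.swing.filechooser.FileSystemView;

public class DirectoryRestrictedFileSystemView extends FileSystemView {
    private final File[] rootDirectories;

    public DirectoryRestrictedFileSystemView(File rootDirectory) {
        this.rootDirectories = new File[] { rootDirectory };
    }

    @Override
    public File createNewFolder(File containingDir) throws IOException {
        throw new UnsupportedOperationException("Unable to create directory");
    }

    @Override
    public File[] getRoots() {
        return rootDirectories;
    }

    @Override
    public File getHomeDirectory() {
        // the "Home" button now leads back into the sandbox instead of out of it
        return rootDirectories[0];
    }

    @Override
    public boolean isRoot(File file) {
        for (File root : rootDirectories) {
            if (root.equals(file)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        FileSystemView fsv = new DirectoryRestrictedFileSystemView(new File("X:\\"));
        // start inside the restricted root rather than at the view's default location
        JFileChooser fileChooser = new JFileChooser(fsv.getHomeDirectory(), fsv);
        fileChooser.showOpenDialog(null);
    }
}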
{ "language": "en", "url": "https://stackoverflow.com/questions/32529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: How do you write code that is both 32 bit and 64 bit compatible? What considerations do I need to make if I want my code to run correctly on both 32-bit and 64-bit platforms? EDIT: What kind of areas do I need to take care in, e.g. printing strings/characters or using structures? A: Options: Code it in some language with a Virtual Machine (such as Java) Code it in .NET and don't target any specific architecture. The .NET JIT compiler will compile it for you to the right architecture before running it. A: One solution would be to target a virtual environment that runs on both platforms (I'm thinking Java, or .Net here). Or pick an interpreted language. Do you have other requirements, such as calling existing code or libraries? A: The same things you should have been doing all along to ensure you write portable code :) mozilla guidelines and the C faq are good starting points A: I assume you are still talking about compiling them separately for each individual platform? Running them on both is completely doable by just creating a 32-bit binary. A: The biggest one is making sure you don't put pointers into 32-bit storage locations. But there's no proper 'language-agnostic' answer to this question, really. You couldn't even get a particularly firm answer if you restricted yourself to something like standard 'C' or 'C++' - the size of data storage, pointers, etc, is all terribly implementation-dependent. A: It honestly depends on the language, because managed languages like C# and Java or scripting languages like JavaScript, Python, or PHP are locked in to their current methodology, and to get started and do anything short of the advanced stuff there is not much to worry about. But my guess is that you are asking about languages like C++, C, and other lower level languages. The biggest thing you have to worry about is the size of things, because in the 32-bit world you are limited to the power of 2^32, whereas in the 64-bit world things get bigger: 2^64. With 64-bit you have a larger space for memory and storage in RAM, and you can compute larger numbers. However if you know you are compiling for both 32 and 64, you need to make sure to limit your expectations of the system to the 32-bit world and the limitations of buffers and numbers. A: In C (and maybe C++) always remember to use the sizeof operator when calculating buffer sizes for malloc. This way you will write more portable code anyway, and this will automatically take 64-bit datatypes into account. A: In most cases the only thing you have to do is just compile your code for both platforms. (And that's assuming that you're using a compiled language; if it's not, then you probably don't need to worry about anything.) The only thing I can think of that might cause problems is assuming the size of data types, which is something you probably shouldn't be doing anyway. And of course anything written in assembly is going to cause problems. A: Keep in mind that many compilers choose the size of an integer based on the underlying architecture, given that the "int" should be the fastest number manipulator in the system (according to some theories). This is why so many programmers use typedefs for their most portable programs - if you want your code to work on everything from 8-bit processors up to 64-bit processors you need to recognize that, in C anyway, int is not rigidly defined. 
Pointers are another area to be careful with - don't use a long, or long long, or any specific type if you are fiddling with the numeric value of the pointer - use the proper construct, which, unfortunately, varies from compiler to compiler (which is why you have a separate typedef.h file for each compiler you use). -Adam Davis
{ "language": "en", "url": "https://stackoverflow.com/questions/32533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the best way to use a console when developing? For scripting languages, what is the most effective way to utilize a console when developing? Are there ways to be more productive with a console than a "compile and run" only language? Added clarification: I am thinking more along the lines of Ruby, Python, Boo, etc. Languages that are used for full blown apps, but also have a way to run small snippets of code in a console. A: I am thinking more along the lines of Ruby, ... Well for Ruby the irb interactive prompt is a great tool for "practicing" something simple. Here are the things I'll mention about the irb to give you an idea of effective use: * *Automation. You are allowed a .irbrc file that will be automatically executed when launching irb. That means you can load your favorite libraries or do whatever you want in full Ruby automatically. To see what I mean check out some of the ones at dotfiles.org. *Autocompletion. That even makes writing code easier. Can't remember that string method to remove newlines? "".ch<tab> produces chop and chomp. NOTE: you have to enable autocompletion for irb yourself *Divide and Conquer. irb makes the small things really easy. If you're writing a function to manipulate strings, the ability to test the code interactively right in the prompt saves a lot of time! For instance you can just open up irb and start running functions on an example string and have working and tested code already ready for your library/program. *Learning, Experimenting, and Hacking. Something like this would take a very long time to test in C/C++, even Java. If you tried testing them all at once you might seg-fault and have to start over. Here I'm just learning how the String#[] function works. joe[~]$ irb >> "12341:asdf"[/\d+/] # => "12341" >> "12341:asdf"[/\d*/] # => "12341" >> "12341:asdf"[0..5] # => "12341:" >> "12341:asdf"[0...5] # => "12341" >> "12341:asdf"[0, ':'] TypeError: can't convert String into Integer from (irb):5:in `[]' from (irb):5 >> "12341:asdf"[0, 5] # => "12341" *Testing and Benchmarking. Now they are nice and easy to perform. Here is someone's idea to emulate the Unix time function for quick benchmarking. Just add it to your .irbrc file and it's always there! *Debugging - I haven't used this much myself but there is always the ability to debug code like this. Or pull out some code and run it in the irb to see what it's actually doing. I'm sure I'm missing some things but I hit on my favorite points. You really have zero limitation in shells so you're limited only by what you can think of doing. I almost always have a few shells running. Bash, JavaScript, and Ruby's irb to name a few. I use them for a lot of things! A: I think it depends on the console. The usefulness of a CMD console on Windows pales in comparison to a PowerShell console. A: You didn't say what OS you're using but on Linux I've been using a tabbed window manager (wmii) for a year or so and it has radically changed the way I use applications - consoles or otherwise. I often have four or more consoles and other apps on a virtual desktop and with wmii I don't have to fiddle with resizing windows to line everything up just so. I can trivially rearrange them into vertical columns, stack them up vertically, have them share equal amounts of vertical or horizontal space, and move them between screens. Say you open two consoles on your desktop. 
You'd get this (with apologies for the cronkey artwork):

----------------
|              |
|      1       |
|              |
----------------
----------------
|              |
|      2       |
|              |
----------------

Now I want them side-by-side. I enter SHIFT-ALT-L in window 2 to move it rightwards and create two columns:

------- -------
|     ||     |
|     ||     |
|  1  ||  2  |
|     ||     |
|     ||     |
------- -------

Now I could open another console and get

------- -------
|     ||  2  |
|     ||     |
|     | -------
|  1  | -------
|     ||  3  |
|     ||     |
------- -------

Then I want to temporarily view console 3 full-height, so I hit ALT-s in it and get:

------- -------
|     | -------
|     ||     |
|  1  ||  3  |
|     ||     |
|     ||     |
------- -------

Consoles 2 and 3 are stacked up now. I could also give windows tags. For example, in console 2 I could say ALT-SHIFT-t www+dev and that console would be visible in the 'www' and 'dev' virtual desktops. (The desktops are created if they don't already exist.) Even better, the console can be in a different visual configuration (e.g., stacked and full-screen) on each of those desktops. Anyway, I can't do tabbed window managers justice here. I don't know if it's relevant to your environment but if you get the chance to try this way of working you probably won't look back. A: I've added a shortcut to my Control-Shift-C key combination to bring up my Visual Studio 2008 Console. This alone has saved me countless seconds when needing to register a dll or do any other command. I imagine if you leverage this with another command tool you may have some massive productivity increases. A: Are you kidding? In my Linux environment, the console is my lifeblood. I'm proficient in bash scripting, so to me a console is very much like sitting in a REPL for Python or Lisp. You can quite literally do anything. I actually write tools used by my team in bash, and the console is the perfect place to do that development. I really only need an editor as a backing store for things as I figure them out.
{ "language": "en", "url": "https://stackoverflow.com/questions/32537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Alternative "architectural" approaches to javaScript client code? How is your javaScript code organized? Does it follow patterns like MVC, or something else? I've been working on a side project for some time now, and the further I get, the more my webpage has turned into a full-featured application. Right now, I'm sticking with jQuery, however, the logic on the page is growing to a point where some organization, or dare I say it, "architecture" is needed. My first approach is "MVC-ish": * *The 'model' is a JSON tree that gets extended with helpers *The view is the DOM plus classes that tweak it *The controller is the object where I connect events handling and kick off view or model manipulation I'm very interested, however, in how other people have built more substantial javaScript apps. I'm not interested in GWT, or other server-oriented approaches... just in the approach of "javaScript + <generic web service-y thingy here>" Note: earlier I said javaScript "is not really OO, not really functional". This, I think, distracted everyone. Let's put it this way, because javaScript is unique in many ways, and I'm coming from a strongly-typed background, I don't want to force paradigms I know but were developed in very different languages. A: ..but Javascript has many facets that are OO. Consider this: var Vehicle = jQuery.Class.create({ init: function(name) { this.name = name; } }); var Car = Vehicle.extend({ fillGas: function(){ this.gas = 100; } }); I've used this technique to create page-level javascript classes that have their own state, this helps keep it contained (and I often identify areas that I can reuse and put into other classes). This is also especially useful when you have components/server controls that have their own script to execute, but when you might have multiple instances on the same page. This keeps the state separate. A: JavaScriptMVC is a great choice for organizing and developing a large scale JS application. The architecture design very well thought out. There are 4 things you will ever do with JavaScript: * *Respond to an event *Request Data / Manipulate Services (Ajax) *Add domain specific information to the ajax response. *Update the DOM JMVC splits these into the Model, View, Controller pattern. First, and probably the most important advantage, is the Controller. Controllers use event delegation, so instead of attaching events, you simply create rules for your page. They also use the name of the Controller to limit the scope of what the controller works on. This makes your code deterministic, meaning if you see an event happen in a '#todos' element you know there has to be a todos controller. $.Controller.extend('TodosController',{ 'click' : function(el, ev){ ... }, '.delete mouseover': function(el, ev){ ...} '.drag draginit' : function(el, ev, drag){ ...} }) Next comes the model. JMVC provides a powerful Class and basic model that lets you quickly organize Ajax functionality (#2) and wrap the data with domain specific functionality (#3). When complete, you can use models from your controller like: Todo.findAll({after: new Date()}, myCallbackFunction); Finally, once your todos come back, you have to display them (#4). This is where you use JMVC's view. 
'.show click' : function(el, ev){ Todo.findAll({after: new Date()}, this.callback('list')); }, list : function(todos){ $('#todos').html( this.view(todos)); } In 'views/todos/list.ejs' <% for(var i =0; i < this.length; i++){ %> <label><%= this[i].description %></label> <%}%> JMVC provides a lot more than architecture. It helps you in every part of the development cycle with: * *Code generators *Integrated Browser, Selenium, and Rhino Testing *Documentation *Script compression *Error reporting A: MochiKit is great -- and was my first love, so-to-speak, as far as js libraries go. But I found that while MochiKit has very expressive syntax, it didn't feel nearly as comfortable to me as Prototype/Scriptaculous or jQuery did. I think if you know or like Python, then MochiKit is a good tool for you. A: Thank you all kindly for your answers. After some time, I'd like to post what I've learned so far. So far, I see a very large difference between the approach using something like Ext, and others like jQuery UI, Scriptaculous, MochiKit, etc. With Ext, the HTML is just a single placeholder - UI goes here. From then on, everything is described in JavaScript. DOM interaction is minimized under another (perhaps stronger) API layer. With the other kits, I find myself starting by doing a bit of HTML design, and then extending the DOM directly with snazzy effects, or just replacing the form input here, an addition there. The major differences start to happen as I need to deal with event handling, etc. As modules need to "talk" to each other, I find myself needing to step away from the DOM, abstracting it away in pieces. I note that many of these libraries include some interesting modularization techniques as well. A very clear description is contributed on the Ext website, which includes a fancy way to "protect" your code with modules. A new player I haven't completely evaluated is Sproutcore. It seems like Ext in approach, where the DOM is hidden, and you mostly want to deal with the project's API. A: Tristan, you will find that when you try to architect JavaScript as an MVC application it tends to come up short in one area -- the model. The most difficult area to deal with is the model because the data does not persist throughout the application, and by nature the models seem to change on the client-side pretty consistently. You could standardize how you pass and receive data from the server, but then at that point the model does not really belong to JavaScript -- it belongs to your server-side application. I did see one attempt a while back where someone created a framework for modeling data in JavaScript, much like the way SQLite belongs to the application. It was like Model.select( "Product" ) and Model.update( "Product", "Some data..." ). It was basically an object notation that held a bunch of data to manage the state of the current page. However, the minute you refresh, all that data is lost. I'm probably off on the syntax, but you get the point. If you are using jQuery, then Ben's approach is really the best. Extend the jQuery object with your functions and properties, and then compartmentalize your "controllers". I usually do this by putting them into separate source files, and loading them on a section-by-section basis. For instance, if it were an e-commerce site, I might have a JS file full of controllers that handle functionality for the checkout process. This tends to keep things lightweight and easy to manage. A: Just a quick clarification. 
It is perfectly feasible to write GWT apps that are not server-oriented. I am assuming that by server-oriented you mean GWT RPC that needs a Java-based back-end. I have written GWT apps that are very "MVC-ish" on the client side alone. * *The model was an object graph. Although you code in Java, at runtime the objects are in JavaScript with no need of any JVM on either the client or server side. GWT also supports JSON with complete parsing and manipulation support. You can connect to JSON webservices easily, see 2 for a JSON mashup example. *View was composed of standard GWT widgets (plus some of our own composite widgets) *Controller layer was neatly separated from View via the Observer pattern. If your "strongly-typed" background is with Java or a similar language, I think you should seriously consider GWT for large projects. For small projects I usually prefer jQuery. The upcoming GWTQuery that works with GWT 1.5 may change that, though not in the near future, because of the abundance of plugins for jQuery. A: Not 100% sure what you mean here, but I will say that after doing ASP.NET for the last 6 years, my web pages are now mostly driven by JavaScript once the basic page rendering is done by the server. I use JSON for everything (have been for about 3 years now) and use MochiKit for my client-side needs. By the way, JavaScript is OO, but since it uses prototypical inheritance, people don't give it credit in that way. I would also argue that it is functional as well, it all depends on how you write it. If you are really interested in functional programming styles, check out MochiKit - you may like it; it leans quite a bit towards the functional programming side of JavaScript.
{ "language": "en", "url": "https://stackoverflow.com/questions/32540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: How can you clone a WPF object? Anybody have a good example how to deep clone a WPF object, preserving databindings? The marked answer is the first part. The second part is that you have to create an ExpressionConverter and inject it into the serialization process. Details for this are here: http://www.codeproject.com/KB/WPF/xamlwriterandbinding.aspx?fid=1428301&df=90&mpp=25&noise=3&sort=Position&view=Quick&select=2801571 A: The simplest way that I've done it is to use a XamlWriter to save the WPF object as a string. The Save method will serialize the object and all of its children in the logical tree. Now you can create a new object and load it with a XamlReader. ex: Write the object to xaml (let's say the object was a Grid control): string gridXaml = XamlWriter.Save(myGrid); Load it into a new object: StringReader stringReader = new StringReader(gridXaml); XmlReader xmlReader = XmlReader.Create(stringReader); Grid newGrid = (Grid)XamlReader.Load(xmlReader); A: There are some great answers here. Very helpful. I had tried various approaches for copying Binding information, including the approach outlined in http://pjlcon.wordpress.com/2011/01/14/change-a-wpf-binding-from-sync-to-async-programatically/ but the information here is the best on the Internet! I created a re-usable extension method for dealing with InvalidOperationException “Binding cannot be changed after it has been used.” In my scenario, I was maintaining some code somebody wrote, and after a major DevExpress DXGrid framework upgrade, it no longer worked. The following solved my problem perfectly. The part of the code where I return the object could be nicer, and I will re-factor that later. /// <summary> /// Extension methods for the WPF Binding class. /// </summary> public static class BindingExtensions { public static BindingBase CloneViaXamlSerialization(this BindingBase binding) { var sb = new StringBuilder(); var writer = XmlWriter.Create(sb, new XmlWriterSettings { Indent = true, ConformanceLevel = ConformanceLevel.Fragment, OmitXmlDeclaration = true, NamespaceHandling = NamespaceHandling.OmitDuplicates, }); var mgr = new XamlDesignerSerializationManager(writer); // HERE BE MAGIC!!! mgr.XamlWriterMode = XamlWriterMode.Expression; // THERE WERE MAGIC!!! System.Windows.Markup.XamlWriter.Save(binding, mgr); StringReader stringReader = new StringReader(sb.ToString()); XmlReader xmlReader = XmlReader.Create(stringReader); object newBinding = (object)XamlReader.Load(xmlReader); if (newBinding == null) { throw new ArgumentNullException("Binding could not be cloned via Xaml Serialization Stack."); } if (newBinding is Binding) { return (Binding)newBinding; } else if (newBinding is MultiBinding) { return (MultiBinding)newBinding; } else if (newBinding is PriorityBinding) { return (PriorityBinding)newBinding; } else { throw new InvalidOperationException("Binding could not be cast."); } } } A: In .NET 4.0, the new xaml serialization stack makes this MUCH easier. var sb = new StringBuilder(); var writer = XmlWriter.Create(sb, new XmlWriterSettings { Indent = true, ConformanceLevel = ConformanceLevel.Fragment, OmitXmlDeclaration = true, NamespaceHandling = NamespaceHandling.OmitDuplicates, }); var mgr = new XamlDesignerSerializationManager(writer); // HERE BE MAGIC!!! mgr.XamlWriterMode = XamlWriterMode.Expression; // THERE WERE MAGIC!!! 
System.Windows.Markup.XamlWriter.Save(this, mgr); return sb.ToString(); A: How about: public static T DeepClone<T>(T from) { using (MemoryStream s = new MemoryStream()) { BinaryFormatter f = new BinaryFormatter(); f.Serialize(s, from); s.Position = 0; object clone = f.Deserialize(s); return (T)clone; } } Of course this deep clones any object, and it might not be the fastest solution in town, but it has the least maintenance... :)
{ "language": "en", "url": "https://stackoverflow.com/questions/32541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Does ReadUncommitted imply NoLock When writing a SQL statement in SQL Server 2005, does the READUNCOMMITTED query hint imply NOLOCK or do I have to specify it manually too? So is: With (NoLock, ReadUnCommitted) the same as: With (ReadUnCommitted) A: According to Kalen Delaney... The NOLOCK hint has nothing to do with the index options. The hint tells SQL Server not to request locks when doing SELECT operations, so there will be no conflict with data that is already locked. The index options just tell SQL Server that this level of locking is allowed, when locking is going to occur. For example, if ALLOW_ROW_LOCKS was off, the only possible locks would be page or table locks. The index options don't force locks to be held, they just control the possible size of the locks. In answer to the question in your subject, the NOLOCK hint and the READUNCOMMITTED hint are equivalent. A: Yes, they are one and the same. A: I think you can say that ReadUnCommitted has the abilities of NoLock. However, you cannot say that NoLock has the abilities of ReadUnCommitted.
{ "language": "en", "url": "https://stackoverflow.com/questions/32550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to make 'pretty urls' work in PHP hosted in IIS? Is there some way I can use URLs like: http://www.blog.com/team-spirit/ instead of http://www.blog.com/?p=122 in a Windows-hosted PHP server? A: Isapi Rewrite Filter on CodePlex - actively developed, free ("DonationWare"), open source. A: This is how I did it with WordPress on IIS 6.0 http://www.coderjournal.com/2008/02/url-rewriter-reverse-proxy-iis-wordpress/ However it all depends on what version of IIS you are using. If you are lucky enough to use IIS 7.0 you don't really have to worry about pretty URLs because everything is supported out of the box. However if you are using IIS 6.0 you are going to have to use a rewriter and some rules to force the IISness out of IIS. A: Use the official IIS URL Rewrite A: We use the free version of ISAPI_Rewrite. It uses similar syntax to mod_rewrite, so if you're familiar with that you may have an easier time getting started. There used to be a (syntax-compatible) port of mod_rewrite for IIS, but I can't find it now. A: Step 1 - Setting Up .NET to Process All Requests Set up your frontend server to process everything through the .NET framework. Open IIS and right-click on the website and select Properties. Click the Configuration button under the Application Settings section Click the Insert... button to create a new wildcard mapping Set the executable textbox to the aspnet_isapi.dll file location. For .NET 2.0, 3.0, 3.5: C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll Make sure the checkbox Verify that file exists is not checked. Press OK to confirm and close all the windows. Step 2 - Install PHP/WordPress Just follow this article on IIS.NET for installing PHP/WordPress on IIS 6.0. You may also want to install FastCGI, I recommend this, but it is optional. Step 3 - Setting Up the URL Rewriter and Reverse Proxy Rules The criteria for the requests are put inside the URL Rewriter Rules files. But before the proxy request is made, I must check to make sure the file being requested doesn't already exist on the frontend server. If it does exist on the frontend server I don't want to make a reverse proxy request. The following is the code used to do that. # any file that exists just return it RewriteCond %{REQUEST_FILENAME} -f RewriteRule ^(.*) $1 [L] Then after I check to make sure the file doesn't exist on the frontend server I make the request to the backend using the following rules. https://nickberardi.com/url-rewriter-reverse-proxy-iis-wordpress/
{ "language": "en", "url": "https://stackoverflow.com/questions/32570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to discover a File's creation time with Java? Is there an easy way to discover a File's creation time with Java? The File class only has a method to get the "last modified" time. According to some resources I found on Google, the File class doesn't provide a getCreationTime() method because not all file systems support the idea of a creation time. The only working solution I found involves shelling out to the command line and executing the "dir" command, which looks like it outputs the file's creation time. I guess this works, I only need to support Windows, but it seems very error-prone to me. Are there any third party libraries that provide the info I need? Update: In the end, I don't think it's worth it for me to buy the third party library, but their API does seem pretty good so it's probably a good choice for anyone else that has this problem. A: I wrote a small test class some days ago; I hope it can help you: // Get/Set windows file CreationTime/LastWriteTime/LastAccessTime // Test with jna-3.2.7 // [http://maclife.net/wiki/index.php?title=Java_get_and_set_windows_system_file_creation_time_via_JNA_(Java_Native_Access)][1] import java.io.*; import java.nio.*; import java.util.Date; // Java Native Access library: jna.dev.java.net import com.sun.jna.*; import com.sun.jna.ptr.*; import com.sun.jna.win32.*; import com.sun.jna.platform.win32.*; public class WindowsFileTime { public static final int GENERIC_READ = 0x80000000; //public static final int GENERIC_WRITE = 0x40000000; // defined in com.sun.jna.platform.win32.WinNT public static final int GENERIC_EXECUTE = 0x20000000; public static final int GENERIC_ALL = 0x10000000; // defined in com.sun.jna.platform.win32.WinNT //public static final int CREATE_NEW = 1; //public static final int CREATE_ALWAYS = 2; //public static final int OPEN_EXISTING = 3; //public static final int OPEN_ALWAYS = 4; //public static final int TRUNCATE_EXISTING = 5; public interface MoreKernel32 extends Kernel32 { static final MoreKernel32 instance = (MoreKernel32)Native.loadLibrary ("kernel32", MoreKernel32.class, W32APIOptions.DEFAULT_OPTIONS); boolean GetFileTime (WinNT.HANDLE hFile, WinBase.FILETIME lpCreationTime, WinBase.FILETIME lpLastAccessTime, WinBase.FILETIME lpLastWriteTime); boolean SetFileTime (WinNT.HANDLE hFile, final WinBase.FILETIME lpCreationTime, final WinBase.FILETIME lpLastAccessTime, final WinBase.FILETIME lpLastWriteTime); } static MoreKernel32 win32 = MoreKernel32.instance; //static Kernel32 _win32 = (Kernel32)win32; static WinBase.FILETIME _creationTime = new WinBase.FILETIME (); static WinBase.FILETIME _lastWriteTime = new WinBase.FILETIME (); static WinBase.FILETIME _lastAccessTime = new WinBase.FILETIME (); static boolean GetFileTime (String sFileName, Date creationTime, Date lastWriteTime, Date lastAccessTime) { WinNT.HANDLE hFile = OpenFile (sFileName, GENERIC_READ); // may be WinNT.GENERIC_READ in future jna version. 
if (hFile == WinBase.INVALID_HANDLE_VALUE) return false; boolean rc = win32.GetFileTime (hFile, _creationTime, _lastAccessTime, _lastWriteTime); if (rc) { if (creationTime != null) creationTime.setTime (_creationTime.toLong()); if (lastAccessTime != null) lastAccessTime.setTime (_lastAccessTime.toLong()); if (lastWriteTime != null) lastWriteTime.setTime (_lastWriteTime.toLong()); } else { int iLastError = win32.GetLastError(); System.out.print ("Failed to get file time, error code: " + iLastError + " " + GetWindowsSystemErrorMessage (iLastError)); } win32.CloseHandle (hFile); return rc; } static boolean SetFileTime (String sFileName, final Date creationTime, final Date lastWriteTime, final Date lastAccessTime) { WinNT.HANDLE hFile = OpenFile (sFileName, WinNT.GENERIC_WRITE); if (hFile == WinBase.INVALID_HANDLE_VALUE) return false; ConvertDateToFILETIME (creationTime, _creationTime); ConvertDateToFILETIME (lastWriteTime, _lastWriteTime); ConvertDateToFILETIME (lastAccessTime, _lastAccessTime); //System.out.println ("creationTime: " + creationTime); //System.out.println ("lastWriteTime: " + lastWriteTime); //System.out.println ("lastAccessTime: " + lastAccessTime); //System.out.println ("_creationTime: " + _creationTime); //System.out.println ("_lastWriteTime: " + _lastWriteTime); //System.out.println ("_lastAccessTime: " + _lastAccessTime); boolean rc = win32.SetFileTime (hFile, creationTime==null?null:_creationTime, lastAccessTime==null?null:_lastAccessTime, lastWriteTime==null?null:_lastWriteTime); if (! rc) { int iLastError = win32.GetLastError(); System.out.print ("Failed to set file time, error code: " + iLastError + " " + GetWindowsSystemErrorMessage (iLastError)); } win32.CloseHandle (hFile); return rc; } static void ConvertDateToFILETIME (Date date, WinBase.FILETIME ft) { if (ft != null) { long iFileTime = 0; if (date != null) { iFileTime = WinBase.FILETIME.dateToFileTime (date); ft.dwHighDateTime = (int)((iFileTime >> 32) & 0xFFFFFFFFL); ft.dwLowDateTime = (int)(iFileTime & 0xFFFFFFFFL); } else { ft.dwHighDateTime = 0; ft.dwLowDateTime = 0; } } } static WinNT.HANDLE OpenFile (String sFileName, int dwDesiredAccess) { WinNT.HANDLE hFile = win32.CreateFile ( sFileName, dwDesiredAccess, 0, null, WinNT.OPEN_EXISTING, 0, null ); if (hFile == WinBase.INVALID_HANDLE_VALUE) { int iLastError = win32.GetLastError(); System.out.print (" Failed to open file, error code: " + iLastError + " " + GetWindowsSystemErrorMessage (iLastError)); } return hFile; } static String GetWindowsSystemErrorMessage (int iError) { char[] buf = new char[255]; CharBuffer bb = CharBuffer.wrap (buf); //bb.clear (); //PointerByReference pMsgBuf = new PointerByReference (); int iChar = win32.FormatMessage ( WinBase.FORMAT_MESSAGE_FROM_SYSTEM //| WinBase.FORMAT_MESSAGE_IGNORE_INSERTS //|WinBase.FORMAT_MESSAGE_ALLOCATE_BUFFER , null, iError, 0x0804, bb, buf.length, //pMsgBuf, 0, null ); //for (int i=0; i<iChar; i++) //{ // System.out.print (" "); // System.out.print (String.format("%02X", buf[i]&0xFFFF)); //} bb.limit (iChar); //System.out.print (bb); //System.out.print (pMsgBuf.getValue().getString(0)); //win32.LocalFree (pMsgBuf.getValue()); return bb.toString (); } public static void main (String[] args) throws Exception { if (args.length == 0) { System.out.println ("Get Windows file times (creation time, last write time, last access time)"); System.out.println ("Usage:"); System.out.println (" java -cp .;..;jna.jar;platform.jar WindowsFileTime [file1] [file2]..."); return; } boolean rc; java.sql.Timestamp ct = new java.sql.Timestamp(0); java.sql.Timestamp wt = new java.sql.Timestamp(0); java.sql.Timestamp at = new 
java.sql.Timestamp(0); for (String sFileName : args) { System.out.println ("File " + sFileName); rc = GetFileTime (sFileName, ct, wt, at); if (rc) { System.out.println (" Creation time: " + ct); System.out.println (" Write time: " + wt); System.out.println (" Access time: " + at); } else { //System.out.println ("GetFileTime failed"); } //wt.setTime (System.currentTimeMillis()); wt = java.sql.Timestamp.valueOf("2010-07-23 00:00:00"); rc = SetFileTime (sFileName, null, wt, null); if (rc) { System.out.println ("SetFileTime (last write time) succeeded"); } else { //System.out.println ("SetFileTime failed"); } } } } A: With the release of Java 7 there is a built-in way to do this: Path path = Paths.get("path/to/file"); BasicFileAttributes attributes = Files.readAttributes(path, BasicFileAttributes.class); FileTime creationTime = attributes.creationTime(); It is important to note that not all operating systems provide this information. I believe in those instances this returns the mtime which is the last modified time. Windows does provide creation time. A: I've been investigating this myself, but I need something that will work across Windows/*nix platforms. One SO post includes some links to Posix JNI implementations. * *JNA-POSIX *POSIX for Java In particular, JNA-POSIX implements methods for getting file stats with implementations for Windows, BSD, Solaris, Linux and OSX. All in all it looks very promising, so I'll be trying it out on my own project very soon. A: import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; public class CreateDateInJava { public static void main(String args[]) { try { // get runtime environment and execute child process Runtime systemShell = Runtime.getRuntime(); BufferedReader br1 = new BufferedReader(new InputStreamReader(System.in)); System.out.println("Enter filename: "); String fname = (String) br1.readLine(); Process output = systemShell.exec("cmd /c dir \"" + fname + "\" /tc"); System.out.println(output); // open reader to get output from process BufferedReader br = new BufferedReader(new InputStreamReader(output.getInputStream())); String out = ""; String line = null; int step = 1; while ((line = br.readLine()) != null) { if (step == 6) { out = line; } step++; } // display process output try { out = out.replaceAll(" ", ""); System.out.println("CreationDate: " + out.substring(0, 10)); System.out.println("CreationTime: " + out.substring(10, 16) + "m"); } catch (StringIndexOutOfBoundsException se) { System.out.println("File not found"); } } catch (IOException ioe) { System.err.println(ioe); } catch (Throwable t) { t.printStackTrace(); } } } /** D:\Foldername\Filename.Extension Ex: Enter Filename : D:\Kamal\Test.txt CreationDate: 02/14/2011 CreationTime: 12:59Pm */ A: The javaxt-core library includes a File class that can be used to retrieve file attributes, including the creation time. Example: javaxt.io.File file = new javaxt.io.File("/temp/file.txt"); System.out.println("Created: " + file.getCreationTime()); System.out.println("Accessed: " + file.getLastAccessTime()); System.out.println("Modified: " + file.getLastModifiedTime()); Works with Java 1.5 and up. A: I like the answer on jGuru that lists the option of using JNI to get the answer. This might prove to be faster than shelling out and you may encounter other situations such as this that need to be implemented specifically for Windows. Also, if you ever need to port to a different platform, then you can port your library as well and just have it return -1 for the answer to this question on *ix. 
A: This is a basic example in Java, using the BasicFileAttributes class: Path path = Paths.get("C:\\Users\\jorgesys\\workspaceJava\\myfile.txt"); BasicFileAttributes attr; try { attr = Files.readAttributes(path, BasicFileAttributes.class); System.out.println("File creation time: " + attr.creationTime()); } catch (IOException e) { System.out.println("Oops, an error! " + e.getMessage()); }
Q: Odd behaviour for rowSpan in Flex I am experiencing some oddities when working with a Grid component in Flex. I have the following form that uses a grid to align the fields; as you can see, each GridRow has a border. My problem is that the border is still visible through GridItems that span multiple rows (observe the TextArea that spans 4 rows, the GridRow borders go right through it!) Any ideas of how to fix this? A: I think the problem is that when the Grid is drawn, it draws each row from top to bottom, and within each row the items left to right. So the row-spanned <mx:TextArea> item is drawn first, extending down into the area of the 2 next rows, which get drawn after and on top. The quickest way around I can see would be to draw the row borders on the <mx:GridItem>s instead, skipping the left and right edges based on the item's placement in the row. Something like this: <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute"> <mx:Style> Grid { background-color: white; horizontal-gap: 0; } GridItem { padding-top: 5; padding-left: 5; padding-right: 5; padding-bottom: 5; background-color: #efefef; border-style: solid; border-thickness: 1; border-color: black; } .left { border-sides: top, bottom, left; } .right { border-sides: top, bottom, right; } .center { border-sides: top, bottom; } </mx:Style> <mx:Grid> <mx:GridRow> <mx:GridItem styleName="left"> <mx:Label text="Label"/> </mx:GridItem> <mx:GridItem styleName="center"> <mx:ComboBox/> </mx:GridItem> <mx:GridItem styleName="center"> <mx:Label text="Label"/> </mx:GridItem> <mx:GridItem styleName="right"> <mx:ComboBox/> </mx:GridItem> </mx:GridRow> <mx:GridRow> <mx:GridItem styleName="left"> <mx:Label text="Label"/> </mx:GridItem> <mx:GridItem styleName="center"> <mx:TextInput/> </mx:GridItem> <mx:GridItem colSpan="2" rowSpan="3"> <mx:VBox width="100%" height="100%"> <mx:Label text="Label"/> <mx:TextArea width="100%" height="100%"/> </mx:VBox> </mx:GridItem> </mx:GridRow> <mx:GridRow> <mx:GridItem styleName="left"> <mx:Label text="Label"/> </mx:GridItem> <mx:GridItem styleName="center"> <mx:TextInput/> </mx:GridItem> </mx:GridRow> <mx:GridRow> <mx:GridItem styleName="left"> <mx:Label text="Label"/> </mx:GridItem> <mx:GridItem styleName="center"> <mx:TextInput/> </mx:GridItem> </mx:GridRow> </mx:Grid> </mx:Application>
Q: Installing Team Foundation Server What are the best practices in setting up a new instance of TFS 2008 Workgroup edition? Specifically, the constraints are as follows: * *Must install on an existing Windows Server 2008 64 bit *TFS application layer is 32 bit only Should I install SQL Server 2008, SharePoint and the app layer in a virtual instance of Windows Server 2008 or 2003 (I am already running Hyper-V) or split the layers with a database on the host OS and the app layer in a virtual machine? Edit: Apparently, splitting the layers is not recommended A: This is my recipe for installing TFS 2008 SP1. There is no domain controller in this scenario; we are only a couple of users. If I were to do it again, I would consider changing our environment to use an Active Directory domain. * *Host server running Windows Server 2008 with 8GB RAM and quad processor *Fresh install of Windows Server 2008 32bit in a VM under Hyper-V *Install Application Server role with IIS *Install SQL Server 2008 Standard edition * *Use a user account for Reporting Services and Analysis Services *Create a slipstreamed image of TFS 2008 with SP1 and install TFS *Install VSTS 2008 *Install Team System Explorer *Install VSTS 2008 SP1 *Install the TFS Web Access Power Tool After installing everything, reports were not generated. Found this forum post that helped resolve the problem. * *Open http://localhost:8080/Warehouse/v1.0/warehousecontroller.asmx *Run the web service (see the above link for details); it will take a little while, and the tfsWarehouse will be rebuilt It is very important to do things in order: download the installation guide and follow it to the letter. I forgot to install the Team System Explorer until after installing SP1 and ventured into all sorts of problems. Installing SP1 once more fixed that. A: One critical thing you have to keep in mind about TFS is that it likes to have the machine all to itself. So if you have to create a separate instance on Hyper-V, do it using the proven Windows Server 2003 platform with SQL Server 2005. I am sure Microsoft has done a great job getting it to work under Windows Server 2008 and SQL Server 2008, however you don't get any additional features with this newer install and it is currently unproven in the wild. So my recommendation is to stick with what is known until the next release of TFS comes out. Also, splitting the layers is definitely not recommended, especially in the Workgroup edition where you will only be allowed to have 5 licensed users. Those 5 users will never exceed the server's needs. Also my recommendation is to not update SharePoint if you don't need to. In my environment, we don't really use SharePoint all that much, so I left it alone. SharePoint is usually, in my experience, where most of the problems come from with TFS. A: I just upgraded our team to TFS 2008, from TFS 2005. The hardest part was upgrading SharePoint 2.0 to 3.0, so I would make sure to do that first, if you have not already installed TFS 2008. We had a couple of other difficulties, but they were all either related to the SharePoint upgrade, or to the fact that we were using an aftermarket Policy package - Scrum for TeamSystem. We are on SQL Server 2005, so I cannot address SQL Server 2008. As for splitting the layers, we did not do this either, as we are running on Windows Server 2003 and everything ran under the host OS. A: Splitting the layers is only needed for more than 450 users. I would also recommend having the build server on a completely separate machine.
Building is very file system intensive. SQL Server performs best when it has complete control of a file system - so having the build server and TFS on the same machine may create performance issues while builds are executing. Perhaps this can be alleviated with proper tuning and separate physical drives - but I'd think in the long run it would be a lot simpler to just either use some old hardware - or spin up a small virtual machine on a separate host for your builds.
Q: Any disadvantages in accessing Subversion repositories through file:// for a solo developer? If you have Subversion installed on your development machine and you don't work in a team, is there any reason why you should use the svn protocol instead of file? A: If you are working by yourself on a single machine, then in my experience using the file:// protocol works fine. Even when my team was using Subversion off a remote server, I would set up a local file-based repository for my own personal projects. If you get to the point where you need to access it from a different machine, then I would go to the trouble of setting up a server-based repository. You might also look at a distributed system like Mercurial - we were evaluating it at my last company just before I left - but definitely pick one or the other; mixing svn and hg doesn't work well at all. A: You can always add a Subversion server later, have it point to your file:// repository, and you'll get svn:// access immediately. The protocol doesn't matter - it just allows transport over different kinds of media; it's the repository content that matters. And installing SVNSERVE later is rather easy. However, the flavor of Subversion software you use does matter: for instance, one vendor makes it so the metadata is stored in "_svn" instead of ".svn", so you might want to check for compatibility first. A: I believe as long as the use of the relevant SVN tools is enabled, you should have no problem - like others said, you can always set up a server later on. My tip, then, is to make sure you can use TortoiseSVN and the CollabNet Subversion client. One major tip to ease the setup of an SVN server right now, if you choose to, is to use a Virtual Appliance. That is, a virtual machine that has Subversion pre-installed and (mostly) pre-configured on it - pretty much a plug & play thing. You can try here, here and here, or just try searching Google on "subversion virtual appliance". A: A while back on a project we were using Ant to do builds. Ant would check out the latest code from the SVN repo, do the build, then create a tag in the SVN repo of the code the build was based off of. We found that the Ant automation couldn't work across any protocol except for the svn:// protocol. So, if you want to use Ant to automate any interaction with SVN, you'll need to use the svn:// protocol. A: Not that I know of. It always pays to use source control, so even if file:// is in some way inferior, if it means you actually use Subversion rather than get fed up with the setup and just start coding, then it's OK by my book. A: There is some mention of file-based and server-based repositories here. Please correct me if I am wrong, but my understanding of Subversion is that a repository is either a file system repository or a Berkeley DB repository. The file/server distinction being made is actually just in the way that the repository is accessed. I.e. the same repository may be accessed (checked out, committed to, etc.) either directly on the file system with the file:/// protocol or by proxy with ssh, the svn server and/or Apache with http. A: I use svn:// for personal projects because I often work on multiple machines on the same network and I want to store everything in the repository on my desktop machine. A: None that I know of. It should prove to be at least a little bit faster. A: I have many different machines I work with so it's easier for me to use svn:// for the paths.
In addition to that, I find that the svn path is almost always shorter than my file paths, so it's less to type. A: There are three reasons to choose a server-based protocol over the file protocol. * *Reduced network traffic. When you use the file protocol on a repository that does not reside on your workstation, the entire file is written because, in the absence of a daemon to process them, deltas cannot be used. * *You can expose a server-based protocol over the internet. This leaves you with the question of whether to use the svn or http protocol. Both can be used over the net. Svn has the efficiency advantage of using direct binary rather than base64 encoding. Http is easier to smuggle through corporate firewalls in the face of bureaucratic obstruction. * *Fewer hassles with permissions in cross-domain scenarios such as telecommuting. Chances are your home workstation isn't part of the corporate domain. A Subversion server daemon functions as a proxy. It runs in an authenticated process that has the necessary permissions to perform I/O operations on your behalf. A: The file:// access scheme works fine when you are the only one who is currently accessing the repository. Moreover, there should be no problem putting the repository on a server and making the repositories available through https:// or svn:// should you need to collaborate with others. However, file:// URLs are ugly on Windows, and using an HTTP(S) server helps you use pretty ones. A typical local URL on Windows looks like file:///C:\Repositories\MyRepo. However, I prefer https://svn.example.com/MyRepo. A: Even if working by myself ... my protocol is to always use source control, even for personal projects. It gives you a single point of backup for all of your code work, and allows you to change your mind and/or retrieve older versions.
Q: How do I rollback a TFS check-in? I'd like to roll back a change I made recently in TFS. In Subversion, this was pretty straightforward. However, it seems to be an incredible headache in TFS: Option 1: Get Prior Version * *Manually get prior version of each file *Check out for edit *Fail - the checkout (in VS2008) forces me to get the latest version Option 2: Get TFS Power Tools * *Download Team Foundation Power Tools *Issue rollback command from cmd line *Fail - it won't work if there are any other pending changes Option 3: Manually Undo Changes * *manually undo my changes, then commit a new changeset Question How do I roll back to a previous changeset in TFS? A: * *Download and install Team Foundation Power Tools. *Open up the Visual Studio command prompt. *Navigate to the directory on the file system that TFS is mapped to. If you don't do this you'll get an "Unable to determine the workspace" error when you try to roll back. *Make sure everything else is checked in or shelved. *Run tfpt rollback to bring up the interface. *Choose the changesets you want to roll back. *Check in the new versions of the files you rolled back. The big disadvantage of the tool is that it will want to refresh everything in your workspace before you can merge. I got around this issue by creating a new workspace just for the rollback which mapped directly to the place in the source tree where the affected files were. If you need help to figure out which changesets to roll back, I find the code review tool in the free Team Foundation Sidekicks add-in very helpful. A: Ahh, just found this CodePlex article on using TFPT.exe (power tool) to roll back a changeset. Hope this helps you out. A: Not having a rollback option is actually a feature of TFS ;) To roll back changes: * *Check out whatever specific version of changes you want *Edit->Select All->Copy the text in the file *Checkout whatever version of the file is on the server *Paste over the file and check in. And now all your intermediate changesets before the rollback are saved as well! What a great feature! A: For reference, if you're using TFS 2010, here's the link to the Rollback Command (Team Foundation Version Control) manual. To roll back a particular changeset, go to the Visual Studio Command Prompt (2010), navigate to your TFS workspace directory, and type in the command: tf rollback /changeset:C12345 where 12345 is your changeset number. After this, it will show you the log of what it did and you'll have to sort out merge conflicts. A: Your solution #1 will work: * *Manually get the prior version of each file *Check out for edit *Check in the file and Ignore server changes when prompted. The reason why it failed for you is because you must have the "Get latest version of item on check out" option turned on. Turn this option off by going to Tools...Options...Source Control...Visual Studio Team Foundation Server and unchecking "Get latest version of item on check out". Cheers A: If you did 1 check-in and you just want to undo it, that has a changeset # associated with it. Do a history on the folder in question to see the bad changeset. Open it up to see the details (all files changed, etc). I believe that you can restore or undo a changeset from that screen, but my Visual Studio just crashed when I tried to do this. /sigh -- I definitely share your pain. Where do I downmod TFS on this site? A: I think that the Team Foundation Power Tools is the way to go.
If there are pending changes you can move them to a shelveset, then undo or check in all pending changes before running the rollback command. See http://www.codeplex.com/VSTSGuidance/Wiki/View.aspx?title=How%20to%20undo%20a%20check-in&referringTitle=Source%20Control%20Practices%20at%20a%20Glance for more information. A: Ben Scheirman - the Changeset Details dialog does not have rollback functionality. A: Another option is TFSPlus. This Visual Studio add-in adds (among others) a Get This Version command to the history window. If you have the file checked out, it will replace it with that version. If you do a check-in afterwards you will effectively do a rollback to that version. It works on individual files instead of complete changesets, though. A: Using the TFS Power Tools is the best way: http://rajputyh.blogspot.com/2008/08/change-set-rollback-using-tfs-power.html A: Rollback has been moved from tfpt.exe to tf.exe, the Team Foundation Version Control Tool. TF - Team Foundation Version Control Tool, Version 10.0.30319.1 Copyright (c) Microsoft Corporation. All rights reserved. Rolls back the changes in a single or a range of changesets: tf rollback /changeset:changesetfrom~changesetto [itemspec] [/recursive] [/lock:none|checkin|checkout] [/version:versionspec] [/keepmergehistory] [/noprompt] [/login:username,[password]] tf rollback /toversion:versionspec itemspec [/recursive] [/lock:none|checkin|checkout] [/version:versionspec] [/keepmergehistory] [/noprompt] [/login:username,[password]] A: Another way to make your option 1 work is to reverse the order of the steps: * *Check Out the items *Get Specific Version to the old version *Check in (ignoring the "warning server version is newer" dialog) OR on the conflicts section of the Pending Changes dialog resolve the conflicts by keeping the local version. This will work even if you have Get Latest On Checkout set. A: You have two options for rolling back (reverting) a changeset in TFS 2010 Version Control. The first option is using the user interface (if you have the latest version of the TFS 2010 Power Tools installed). The other option is using the TFS 2010 version control command-line application: tf.exe rollback I have information about both approaches on my blog post available here: http://www.edsquared.com/2010/02/02/Rollback+Or+Undo+A+Changeset+In+TFS+2010+Version+Control.aspx A: None of these solutions quite worked for me. Dave Roberts' solution was the closest to what I actually got working. I do not have Get latest version of item on check out enabled; however, it seems to be a server policy. My solution to this is to check the file out for edit, get the specific version, then when the conflict is detected use the merge tool (and manually merge none of the changes) so that the file is in the condition it was. I was going to go with compare with the specific version and copy the entire file then just paste it over the top of the old one. Still, there should be an easier way to do this! A: * *Get Specific Version *In the Version Type drop down, select Type as Changeset *Use the Changeset ... button to find your changeset, or just type the number in if you know it *After you have the specific changeset, Check Out and Check In A: The solution above is for TFS2008. TFS2010 has a built-in rollback feature. See this article for details. A: Install the latest version of the TFS Power Tools (August 2011), and you can just right-click on a change set and select "Rollback Entire Changeset". It doesn't get much easier than that.
It's available here: http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f It's hinted at under Team Explorer Enhancements on the above page: New in this release is the ability to [..] easily rollback changes in version control.
Q: Best way to manage session in NHibernate? I'm new to NHibernate (my 1st big project with it). I had been using a simple method of data access by creating the ISession object within a using block to grab my Object or list of Objects, and in that way the session was destroyed after exiting the code block. This doesn't work in a situation where lazy-loading is required, however. For example, if I have a Customer object that has a property which is a collection of Orders, then when the lazy-load is attempted, I get a Hibernate exception. Anyone using a different method? A: Session management: http://code.google.com/p/dot-net-reference-app/source/browse/trunk/src/Infrastructure/Impl/HybridSessionBuilder.cs Session per request: http://code.google.com/p/dot-net-reference-app/source/browse/trunk/src/Infrastructure/Impl/NHibernateSessionModule.cs A: check out the SummerOfNHibernate webcasts for a great tutorial... What you're looking for specifically doesn't come until webisode 5 or 6. A: Keep your session open for your entire unit of work. If your session's life is too small, you cannot benefit from the session-level cache (which is significant). Any time you can prevent a roundtrip to the database, you're going to save a lot of time. You also cannot take advantage of lazy loading, which is crucial to understand. If your session lifetime is too big, you can run into other issues. If this is a web app, you'll probably do fine with the session-per-httpRequest pattern. Basically this is an HttpModule that opens the session at the beginning of the request and flushes/closes at the end (see the sketch after these answers). Be sure to store the session in HttpContext.Items NOT A STATIC VARIABLE. <--- leads to all kinds of problems that you don't want to deal with. You might also look at RhinoCommons for a unit of work implementation. A: Since you are developing a Web App (presumably with ASP.NET), check out NHibernate Best Practices with ASP.NET at CodeProject.
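To make the session-per-HttpRequest pattern described above concrete, here is a minimal sketch of such an HttpModule. This is not the module from the linked reference app; it assumes using directives for System.Web and NHibernate, and SessionFactoryHolder is a hypothetical stand-in for however you bootstrap your ISessionFactory:

public static class SessionFactoryHolder
{
    // Build the expensive ISessionFactory exactly once per AppDomain.
    public static readonly ISessionFactory Factory =
        new NHibernate.Cfg.Configuration().Configure().BuildSessionFactory();
}

public class NHibernateSessionModule : IHttpModule
{
    private const string SessionKey = "nhibernate.current_session";

    public void Init(HttpApplication application)
    {
        // One session per request, stored in HttpContext.Items -
        // never in a static field.
        application.BeginRequest += (sender, e) =>
            HttpContext.Current.Items[SessionKey] =
                SessionFactoryHolder.Factory.OpenSession();

        application.EndRequest += (sender, e) =>
        {
            var session = HttpContext.Current.Items[SessionKey] as ISession;
            if (session == null) return;
            session.Flush(); // push pending changes before the request ends
            session.Dispose();
            HttpContext.Current.Items.Remove(SessionKey);
        };
    }

    public void Dispose() { }
}

Register the module in web.config and have your repositories pull the current session out of HttpContext.Current.Items; lazy loads then work for the whole lifetime of the request.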
Q: Is there a way to do "intraWord" text navigation in Visual Studio? On Windows, Ctrl+Right Arrow will move the text cursor from one "word" to the next. While working with Xcode on the Mac, they extended that so that Option+Right Arrow will move the cursor to the beginning of the next subword. For example, if the cursor was at the beginning of the word myCamelCaseVar then hitting Option+Right Arrow will put the cursor at the first C. This was an amazingly useful feature that I haven't found in a Windows editor. Do you know of any way to do this in Visual Studio (perhaps with an Add-In)? I'm currently using pretty old iterations of Visual Studio (Visual Basic 6.0 and Visual C++), although I'm interested to know if the more modern releases can do this, too. A: ReSharper has a "Camel Humps" feature that lets you do this.
Q: How do I find the the exact lat/lng coordinates of a birdseye scene in Virtual Earth? I'm trying to find the latitude and longitude of the corners of my map while in birdseye view. I want to be able to plot pins on the map, but I have hundreds of thousands of addresses that I want to be able to limit to the ones that need to show on the map. In normal view, VEMap.GetMapView().TopLeftLatLong and .BottomRightLatLong return the coordinates I need; but in Birdseye view they return blank (or encrypted values). The SDK recommends using VEBirdseyeScene.GetBoundingRectangle(), but this returns bounds of up to two miles from the center of my scene which in major cities still returns way too many addresses. In previous versions of the VE Control, there was an undocumented VEDecoder object I could use to decrypt the LatLong values for the birdseye scenes, but this object seems to have disappeared (probably been renamed). How can I decode these values in version 6.1? A: It always seems to me that the example solutions for this issue only find the centre of the current map on the screen, as if that is always the place you're going to click! Anyway, I wrote this little function to get the actual pixel location that you clicked on the screen and return a VELatLong for that. So far it seems pretty accurate (even though I see this as one big, horrible hack - but it's not like we have a choice at the moment). It takes a VEPixel as input, which is the x and y coordinates of where you clicked on the map. You can get that easily enough on the mouse event passed to the onclick handler for the map. function getBirdseyeViewLatLong(vePixel) { var be = map.GetBirdseyeScene(); var centrePixel = be.LatLongToPixel(map.GetCenter(), map.GetZoomLevel()); var currentPixelWidth = be.GetWidth(); var currentPixelHeight = be.GetHeight(); var mapDiv = document.getElementById("map"); var mapDivPixelWidth = mapDiv.offsetWidth; var mapDivPixelHeight = mapDiv.offsetHeight; var xScreenPixel = centrePixel.x - (mapDivPixelWidth / 2) + vePixel.x; var yScreenPixel = centrePixel.y - (mapDivPixelHeight / 2) + vePixel.y; var position = be.PixelToLatLong(new VEPixel(xScreenPixel, yScreenPixel), map.GetZoomLevel()) return (new _xy1).Decode(position); } A: From the VEMap.GetCenter Method documentation: This method returns null when the map style is set to VEMapStyle.Birdseye or VEMapStyle.BirdseyeHybrid. Here is what I've found, though: var northWestLL = (new _xy1).Decode(map.GetMapView().TopLeftLatLong); var southEastLL = (new _xy1).Decode(map.GetMapView().BottomRightLatLong); The (new _xy1) seems to work the same as the old undocumented VEDecoder object. A: Here's the code for getting the Center Lat/Long point of the map. This method works in both Road/Aerial and Birdseye/Oblique map styles. 
function GetCenterLatLong() { //Check if in Birdseye or Oblique Map Style if (map.GetMapStyle() == VEMapStyle.Birdseye || map.GetMapStyle() == VEMapStyle.BirdseyeHybrid) { //IN Birdseye or Oblique Map Style //Get the BirdseyeScene being displayed var birdseyeScene = map.GetBirdseyeScene(); //Get approximate center coordinate of the map var x = birdseyeScene.GetWidth() / 2; var y = birdseyeScene.GetHeight() / 2; // Get the Lat/Long var center = birdseyeScene.PixelToLatLong(new VEPixel(x,y), map.GetZoomLevel()); // Convert the BirdseyeScene LatLong to a normal LatLong we can use return (new _xy1).Decode(center); } else { // NOT in Birdseye or Oblique Map Style return map.GetCenter(); } } This code was copied from here: http://pietschsoft.com/post/2008/06/Virtual-Earth-Get-Center-LatLong-When-In-Birdseye-or-Oblique-Map-Style.aspx A: According to http://dev.live.com/virtualearth/sdk/ this should do the trick: function GetInfo() { alert('The latitude,longitude at the center of the map is: '+map.GetCenter()); } A: An interesting point in the Bing Maps Terms of Use.. http://www.microsoft.com/maps/product/terms.html Restriction on use of Bird’s eye aerial imagery: You may not reveal latitude, longitude, altitude or other metadata;
Q: How can I set breakpoints in an external JS script in Firebug I can easily set breakpoints in embedded JS functions, but I don't see any way of accessing external JS scripts via Firebug unless I happen to enter them during a debug session. Is there a way to do this without having to 'explore' my way into the script? @Jason: This is a good point, but in my case I do not have easy access to the script. I am specifically talking about the client scripts which are invoked by the ASP.Net Validators that I would like to debug. I can access them during a debug session through entering the function calls, but I could not find a way to access them directly. A: Putting the "debugger;" line also does the trick for the Chrome debugger. A: Place debugger; in your external script file on the line you want to break on. A: To view and access external JavaScript files (*.js) from within Firebug: * *Click on the 'Script' tab. *Click on the 'all' drop down in the upper left hand corner above the script code content window. *Select 'Show Static Scripts'. *Click on the dropdown button just to the right of what now says 'static' (by default, it should show the name of your current web page). You should now see a list of files associated with the current web page, including any external JS files. *Select the JavaScript file you are interested in and its code will display in the content window. From there, you should be able to set breakpoints as normal. A: Clicking on the line number in the left hand margin should create a break point for you (a red circle should appear). All loaded scripts should be available from the Firebug menu - clicking where it says the name of the current file should show a drop down with all files listed. A: After you place a break point in them, you can also call them by name in the Firebug console, and see the output of (or step through) any intermediate functions. This can help when the main entry point calls many other helper functions, and you are really just concerned with how these helpers are working. That being said, I don't know anything about ASP.Net validators, so it's possible this doesn't apply.
Q: Easiest way to convert a URL to a hyperlink in a C# string? I am consuming the Twitter API and want to convert all URLs to hyperlinks. What is the most effective way you've come up with to do this? from string myString = "This is my tweet check it out http://tinyurl.com/blah"; to This is my tweet check it out <a href="http://tinyurl.com/blah">http://tinyurl.com/blah</a> A: I did this exact same thing with jQuery consuming the JSON API; here is the linkify function: String.prototype.linkify = function() { return this.replace(/[A-Za-z]+:\/\/[A-Za-z0-9-_]+\.[A-Za-z0-9-_:%&\?\/.=]+/, function(m) { return m.link(m); }); }; A: This is actually an ugly problem. URLs can contain (and end with) punctuation, so it can be difficult to determine where a URL actually ends, when it's embedded in normal text. For example: http://example.com/. is a valid URL, but it could just as easily be the end of a sentence: I buy all my witty T-shirts from http://example.com/. You can't simply parse until a space is found, because then you'll keep the period as part of the URL. You also can't simply parse until a period or a space is found, because periods are extremely common in URLs. Yes, regex is your friend here, but constructing the appropriate regex is the hard part. Check out this as well: Expanding URLs with Regex in .NET. A: Regular expressions are probably your friend for this kind of task: Regex r = new Regex(@"(https?://[^\s]+)"); myString = r.Replace(myString, "<a href=\"$1\">$1</a>"); The regular expression for matching URLs might need a bit of work. A: /cheer for RedWolves from: this.replace(/[A-Za-z]+://[A-Za-z0-9-]+.[A-Za-z0-9-:%&\?/.=]+/, function(m){... see: /[A-Za-z]+://[A-Za-z0-9-]+.[A-Za-z0-9-:%&\?/.=]+/ There's the code for the addresses "anyprotocol"://"anysubdomain/domain"."anydomainextension and address", and it's a perfect example for other uses of string manipulation. You can slice and dice at will with .replace and insert proper "a href"s where needed. I used jQuery to change the attributes of these links to "target=_blank" easily in my content-loading logic, even though the .link method doesn't let you customize them. I personally love tacking on a custom method to the string object for on-the-fly string filtering (the String.prototype.linkify declaration), but I'm not sure how that would play out in a large-scale environment where you'd have to organize 10+ custom linkify-like functions. I think you'd definitely have to do something else with your code structure at that point. Maybe a vet will stumble along here and enlighten us. A: You can add some more control on this by using a MatchEvaluator delegate with the regular expression. Suppose I have this string: find more on http://www.stackoverflow.com Now try this code: private void ModifyString() { string input = "find more on http://www.authorcode.com "; Regex regx = new Regex(@"\b((http|https|ftp|mailto)://)?(www.)+[\w-]+(/[\w- ./?%&=]*)?"); string result = regx.Replace(input, new MatchEvaluator(ReplaceURl)); } static string ReplaceURl(Match m) { string x = m.ToString(); x = "<a href=\"" + x + "\">" + x + "</a>"; return x; }
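A hedged footnote to the regex answers above: if the tweet text itself may contain markup characters, one option is to HTML-encode the whole string first and only then wrap the matches. A minimal sketch (the Linkifier name is just for illustration, and it inherits the trailing-punctuation caveat discussed above):

using System.Text.RegularExpressions;
using System.Web;

static class Linkifier
{
    private static readonly Regex UrlPattern = new Regex(@"(https?://[^\s]+)");

    public static string Linkify(string text)
    {
        // Encode first so user-supplied angle brackets cannot inject markup;
        // the encoded match is still usable inside the href attribute.
        return UrlPattern.Replace(HttpUtility.HtmlEncode(text),
            m => string.Format("<a href=\"{0}\">{0}</a>", m.Value));
    }
}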
Q: Mocking Asp.net-mvc Controller Context So the controller context depends on some asp.net internals. What are some ways to cleanly mock these up for unit tests? Seems like its very easy to clog up tests with tons of setup when I only need, for example, Request.HttpMethod to return "GET". I've seen some examples/helpers out on the nets, but some are dated. Figured this would be a good place to keep the latest and greatest. I'm using latest version of rhino mocks A: Using MoQ it looks something like this: var request = new Mock<HttpRequestBase>(); request.Expect(r => r.HttpMethod).Returns("GET"); var mockHttpContext = new Mock<HttpContextBase>(); mockHttpContext.Expect(c => c.Request).Returns(request.Object); var controllerContext = new ControllerContext(mockHttpContext.Object , new RouteData(), new Mock<ControllerBase>().Object); I think the Rhino Mocks syntax is similar. A: Or you can do this with Typemock Isolator with no need to send in a fake controller at all: Isolate.WhenCalled(()=>HttpContext.Request.HttpMethod).WillReturn("Get"); A: Here is a sample unit test class using MsTest and Moq which mocks HttpRequest and HttpResponse objects. (.NET 4.0, ASP.NET MVC 3.0 ) Controller action get value from request and sets http header in response objects. Other http context objects could be mocked up in similar way [TestClass] public class MyControllerTest { protected Mock<HttpContextBase> HttpContextBaseMock; protected Mock<HttpRequestBase> HttpRequestMock; protected Mock<HttpResponseBase> HttpResponseMock; [TestInitialize] public void TestInitialize() { HttpContextBaseMock = new Mock<HttpContextBase>(); HttpRequestMock = new Mock<HttpRequestBase>(); HttpResponseMock = new Mock<HttpResponseBase>(); HttpContextBaseMock.SetupGet(x => x.Request).Returns(HttpRequestMock.Object); HttpContextBaseMock.SetupGet(x => x.Response).Returns(HttpResponseMock.Object); } protected MyController SetupController() { var routes = new RouteCollection(); var controller = new MyController(); controller.ControllerContext = new ControllerContext(HttpContextBaseMock.Object, new RouteData(), controller); controller.Url = new UrlHelper(new RequestContext(HttpContextBaseMock.Object, new RouteData()), routes); return controller; } [TestMethod] public void IndexTest() { HttpRequestMock.Setup(x => x["x"]).Returns("1"); HttpResponseMock.Setup(x => x.AddHeader("name", "value")); var controller = SetupController(); var result = controller.Index(); Assert.AreEqual("1", result.Content); HttpRequestMock.VerifyAll(); HttpResponseMock.VerifyAll(); } } public class MyController : Controller { public ContentResult Index() { var x = Request["x"]; Response.AddHeader("name", "value"); return Content(x); } } A: i've finished with this spec public abstract class Specification <C> where C: Controller { protected C controller; HttpContextBase mockHttpContext; HttpRequestBase mockRequest; protected Exception ExceptionThrown { get; private set; } [SetUp] public void Setup() { mockHttpContext = MockRepository.GenerateMock<HttpContextBase>(); mockRequest = MockRepository.GenerateMock<HttpRequestBase>(); mockHttpContext.Stub(x => x.Request).Return(mockRequest); mockRequest.Stub(x => x.HttpMethod).Return("GET"); EstablishContext(); SetHttpContext(); try { When(); } catch (Exception exc) { ExceptionThrown = exc; } } protected void SetHttpContext() { var context = new ControllerContext(mockHttpContext, new RouteData(), controller); controller.ControllerContext = context; } protected T Mock<T>() where T: class { return 
MockRepository.GenerateMock<T>(); } protected abstract void EstablishContext(); protected abstract void When(); [TearDown] public virtual void TearDown() { } } and the juice is here [TestFixture] public class When_invoking_ManageUsersControllers_Update :Specification <ManageUsersController> { private IUserRepository userRepository; FormCollection form; ActionResult result; User retUser; protected override void EstablishContext() { userRepository = Mock<IUserRepository>(); controller = new ManageUsersController(userRepository); retUser = new User(); userRepository.Expect(x => x.GetById(5)).Return(retUser); userRepository.Expect(x => x.Update(retUser)); form = new FormCollection(); form["IdUser"] = 5.ToString(); form["Name"] = 5.ToString(); form["Surename"] = 5.ToString(); form["Login"] = 5.ToString(); form["Password"] = 5.ToString(); } protected override void When() { result = controller.Edit(5, form); } [Test] public void is_retrieved_before_update_original_user() { userRepository.AssertWasCalled(x => x.GetById(5)); userRepository.AssertWasCalled(x => x.Update(retUser)); } } enjoy A: Here's a snippet from Jason's link. Its the same as Phil's method but uses rhino. Note: mockHttpContext.Request is stubbed to return mockRequest before mockRequest's internals are stubbed out. I believe this order is required. // create a fake web context var mockHttpContext = MockRepository.GenerateMock<HttpContextBase>(); var mockRequest = MockRepository.GenerateMock<HttpRequestBase>(); mockHttpContext.Stub(x => x.Request).Return(mockRequest); // tell the mock to return "GET" when HttpMethod is called mockRequest.Stub(x => x.HttpMethod).Return("GET"); var controller = new AccountController(); // assign the fake context var context = new ControllerContext(mockHttpContext, new RouteData(), controller); controller.ControllerContext = context; // act ... A: The procedure for this seems to have changed slightly in MVC2 (I'm using RC1). Phil Haack's solution doesn't work for me if the action requires a specific method ([HttpPost], [HttpGet]). Spelunking around in Reflector, it looks like the method for verifying these attributes has changed. MVC now checks request.Headers, request.Form, and request.QueryString for a X-HTTP-Method-Override value. If you add mocks for these properties, it works: var request = new Mock<HttpRequestBase>(); request.Setup(r => r.HttpMethod).Returns("POST"); request.Setup(r => r.Headers).Returns(new NameValueCollection()); request.Setup(r => r.Form).Returns(new NameValueCollection()); request.Setup(r => r.QueryString).Returns(new NameValueCollection()); var mockHttpContext = new Mock<HttpContextBase>(); mockHttpContext.Expect(c => c.Request).Returns(request.Object); var controllerContext = new ControllerContext(mockHttpContext.Object, new RouteData(), new Mock<ControllerBase>().Object); A: I find that long mocking procedure to be too much friction. The best way we have found - using ASP.NET MVC on a real project - is to abstract the HttpContext to an IWebContext interface that simply passes through. Then you can mock the IWebContext with no pain. Here is an example
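The linked example is not reproduced above, but the shape of that abstraction is roughly the following sketch; the member list is an assumption, and the idea is to expose only what your controllers actually touch:

using System.Collections.Specialized;
using System.Web;

// Hypothetical pass-through abstraction over the HTTP context.
public interface IWebContext
{
    string HttpMethod { get; }
    NameValueCollection QueryString { get; }
}

// Production implementation delegates straight to the real context.
public class WebContext : IWebContext
{
    private readonly HttpContextBase _context;

    public WebContext(HttpContextBase context) { _context = context; }

    public string HttpMethod
    {
        get { return _context.Request.HttpMethod; }
    }

    public NameValueCollection QueryString
    {
        get { return _context.Request.QueryString; }
    }
}

// In tests, a trivial hand-rolled fake replaces all of the mock setup:
public class FakeWebContext : IWebContext
{
    public string HttpMethod { get; set; }
    public NameValueCollection QueryString { get; set; }
}

A controller that takes an IWebContext in its constructor can then be tested with new FakeWebContext { HttpMethod = "GET" } and no mocking framework at all.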
Q: Beginning ASP.NET MVC with VB.net 2008 Where can I find a good tutorial on learning ASP.NET MVC using VB.net 2008 as the language? Most in-depth tutorials that I found in searching the web were written in C#. A: Have you tried adding the word "VB" to your searches?? http://www.myvbprof.com/2007_Version/MVC_Intro_Tutorial.aspx http://www.asp.net/learn/mvc/tutorial-07-vb.aspx A: http://quickstarts.asp.net/3-5-extensions/mvc/default.aspx Is that relevant?
Q: Automatic image rotation based on a logo We're looking for a package to help identify and automatically rotate faxed TIFF images based on a watermark or logo. We use libtiff for rotation currently, but don't know of any other libraries or packages I can use for detecting this logo and determining how to rotate the images. I have done some basic work with OpenCV but I'm not sure that it is the right tool for this job. I would prefer to use C/C++ but Java, Perl or PHP would be acceptable too. A: You are in the right place using OpenCV; it is an excellent utility. For example, this guy used it for template matching, which is fairly similar to what you need to do. Also, the link Roddy specified looks similar to what you want to do. I feel that OpenCV is the best library out there for this kind of development. @Brian, OpenCV and the Intel IPP are closely linked and very similar (both Intel libs). As far as I know, if OpenCV finds the Intel IPP on your computer it will automatically use it under the hood for improved speed. A: The Intel Performance Primitives (IPP) library has a lot of very efficient algorithms that help with this kind of a task. The library is callable from C/C++ and we have found it to be very fast. I should also note that it's not limited to just Intel hardware. A: That's quite a complex and specialized algorithm that you need. Have a look at http://en.wikipedia.org/wiki/Template_matching. There's also a demo program (but no source) at http://www.lps.usp.br/~hae/software/cirateg/index.html Obviously these require you to know the logo you are looking for in advance...
Q: Data Conflict in LINQ When making changes using SubmitChanges(), LINQ sometimes dies with a ChangeConflictException exception with the error message Row not found or changed, without any indication of either the row that has the conflict or the fields with changes that are in conflict, when another user has changed some data in that row. Is there any way to determine which row has a conflict and which fields they occur in, and also is there a way of getting LINQ to ignore the issue and simply commit the data regardless? Additionally, does anybody know whether this exception occurs when any data in the row has changed, or only when data has been changed in a field that LINQ is attempting to alter? A: I've gotten this error in a circumstance completely unrelated to what the error message describes. What I did was load a LINQ object via one DataContext, and then tried to SubmitChanges() for the object via a different DataContext - gave this exact same error. What I had to do was call DataContext.Table.Attach(myOldObject), and then call SubmitChanges(), worked like a charm. Worth a look, especially if you're of the opinion that there really shouldn't be any conflicts at all. A: Here's a way to see where the conflicts are (this is an MSDN example, so you'll need to heavily customize): try { db.SubmitChanges(ConflictMode.ContinueOnConflict); } catch (ChangeConflictException e) { Console.WriteLine("Optimistic concurrency error."); Console.WriteLine(e.Message); Console.ReadLine(); foreach (ObjectChangeConflict occ in db.ChangeConflicts) { MetaTable metatable = db.Mapping.GetTable(occ.Object.GetType()); Customer entityInConflict = (Customer)occ.Object; Console.WriteLine("Table name: {0}", metatable.TableName); Console.Write("Customer ID: "); Console.WriteLine(entityInConflict.CustomerID); foreach (MemberChangeConflict mcc in occ.MemberConflicts) { object currVal = mcc.CurrentValue; object origVal = mcc.OriginalValue; object databaseVal = mcc.DatabaseValue; MemberInfo mi = mcc.Member; Console.WriteLine("Member: {0}", mi.Name); Console.WriteLine("current value: {0}", currVal); Console.WriteLine("original value: {0}", origVal); Console.WriteLine("database value: {0}", databaseVal); } } } To make it ignore the problem and commit anyway: db.SubmitChanges(ConflictMode.ContinueOnConflict); A: The error "Row not found or changed" also will appear sometimes when the columns or types in the O/R-Designer do not match the columns in the SQL database, especially if one column is NULLable in SQL but not nullable in the O/R-Designer. So check if your table mapping in the O/R-Designer matches your SQL database! A: Thanks to @vzczc. I found the example you gave very helpful but that I needed to call SubmitChanges again after resolving. Here are my modified methods - hope it helps someone. /// <summary> /// Submits changes and, if there are any conflicts, the database changes are auto-merged for /// members that client has not modified (client wins, but database changes are preserved if possible) /// </summary> public void SubmitKeepChanges() { this.Submit(RefreshMode.KeepChanges); } /// <summary> /// Submits changes and, if there are any conflicts, simply overwrites what is in the database (client wins). /// </summary> public void SubmitOverwriteDatabase() { this.Submit(RefreshMode.KeepCurrentValues); } /// <summary> /// Submits changes and, if there are any conflicts, all database values overwrite /// current values (client loses). 
/// </summary> public void SubmitUseDatabase() { this.Submit(RefreshMode.OverwriteCurrentValues); } /// <summary> /// Submits the changes using the specified refresh mode. /// </summary> /// <param name="refreshMode">The refresh mode.</param> private void Submit(RefreshMode refreshMode) { bool moreToSubmit = true; do { try { this.SubmitChanges(ConflictMode.ContinueOnConflict); moreToSubmit = false; } catch (ChangeConflictException) { foreach (ObjectChangeConflict occ in this.ChangeConflicts) { occ.Resolve(refreshMode); } } } while (moreToSubmit); } A: These (which you could add in a partial class to your datacontext) might help you understand how this works: public void SubmitKeepChanges() { try { this.SubmitChanges(ConflictMode.ContinueOnConflict); } catch (ChangeConflictException e) { foreach (ObjectChangeConflict occ in this.ChangeConflicts) { //Keep current values that have changed, //updates other values with database values occ.Resolve(RefreshMode.KeepChanges); } } } public void SubmitOverwrite() { try { this.SubmitChanges(ConflictMode.ContinueOnConflict); } catch (ChangeConflictException e) { foreach (ObjectChangeConflict occ in this.ChangeConflicts) { // All database values overwrite current values with //values from database occ.Resolve(RefreshMode.OverwriteCurrentValues); } } } public void SubmitKeepCurrent() { try { this.SubmitChanges(ConflictMode.ContinueOnConflict); } catch (ChangeConflictException e) { foreach (ObjectChangeConflict occ in this.ChangeConflicts) { //Swap the original values with the values retrieved from the database. No current value is modified occ.Resolve(RefreshMode.KeepCurrentValues); } } } A: "Row not found or changed" is most of the time a concurrency problem. If a different user is changing the same record, those errors pop up because the record has already been changed by that other user. So if you want to eliminate those errors, you should handle concurrency in your application. If you handle concurrency well, you won't get these errors anymore. The above code samples are a way to handle concurrency errors. What's missing is that in case of a concurrency error you should put a refresh variable in those methods: when refresh is true, the data needs to be refreshed on screen after the update so you will also see the update made by the other user. /// <remarks> /// LINQ has optimistic concurrency, so objects can be changed by other users while /// being submitted. Keep the database changes, but make sure this user's changes are also /// submitted and refreshed with the changes already made by other users. /// </remarks> /// <returns>return if a refresh is needed.</returns> public bool SubmitKeepChanges() { // try to submit changes to the database. bool refresh = false; try { base.SubmitChanges(ConflictMode.ContinueOnConflict); } /* * assume a "row not found or changed" exception, if that's the case: * - keep the database changes already made by other users and make sure * - this user's changes are also written to the database */ catch (ChangeConflictException) { // show where the conflicts are in debug mode ShowConflicts(); // get database values and combine with user changes base.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges); // submit those combined changes again to the database. base.SubmitChanges(); // a refresh is needed refresh = true; } // return if a refresh is needed. return refresh; }
You can set the 'Update Check' property on your entity to 'Never' to stop that field being used for optimistic concurrency checking. You can also use: db.SubmitChanges(ConflictMode.ContinueOnConflict)
Q: Is there a constraint that restricts my generic method to numeric types? Can anyone tell me if there is a way with generics to limit a generic type argument T to only: * *Int16 *Int32 *Int64 *UInt16 *UInt32 *UInt64 I'm aware of the where keyword, but can't find an interface for only these types, Something like: static bool IntegerFunction<T>(T value) where T : INumeric A: There's no constraint for this. It's a real issue for anyone wanting to use generics for numeric calculations. I'd go further and say we need static bool GenericFunction<T>(T value) where T : operators( +, -, /, * ) Or even static bool GenericFunction<T>(T value) where T : Add, Subtract Unfortunately you only have interfaces, base classes and the keywords struct (must be value-type), class (must be reference type) and new() (must have default constructor) You could wrap the number in something else (similar to INullable<T>) like here on codeproject. You could apply the restriction at runtime (by reflecting for the operators or checking for types) but that does lose the advantage of having the generic in the first place. A: Workaround using policies: interface INumericPolicy<T> { T Zero(); T Add(T a, T b); // add more functions here, such as multiplication etc. } struct NumericPolicies: INumericPolicy<int>, INumericPolicy<long> // add more INumericPolicy<> for different numeric types. { int INumericPolicy<int>.Zero() { return 0; } long INumericPolicy<long>.Zero() { return 0; } int INumericPolicy<int>.Add(int a, int b) { return a + b; } long INumericPolicy<long>.Add(long a, long b) { return a + b; } // implement all functions from INumericPolicy<> interfaces. public static NumericPolicies Instance = new NumericPolicies(); } Algorithms: static class Algorithms { public static T Sum<P, T>(this P p, params T[] a) where P: INumericPolicy<T> { var r = p.Zero(); foreach(var i in a) { r = p.Add(r, i); } return r; } } Usage: int i = NumericPolicies.Instance.Sum(1, 2, 3, 4, 5); long l = NumericPolicies.Instance.Sum(1L, 2, 3, 4, 5); NumericPolicies.Instance.Sum("www", "") // compile-time error. The solution is compile-time safe. CityLizard Framework provides compiled version for .NET 4.0. The file is lib/NETFramework4.0/CityLizard.Policy.dll. It's also available in Nuget: https://www.nuget.org/packages/CityLizard/. See CityLizard.Policy.I structure. A: There is no way to restrict templates to types, but you can define different actions based on the type. As part of a generic numeric package, I needed a generic class to add two values. class Something<TCell> { internal static TCell Sum(TCell first, TCell second) { if (typeof(TCell) == typeof(int)) return (TCell)((object)(((int)((object)first)) + ((int)((object)second)))); if (typeof(TCell) == typeof(double)) return (TCell)((object)(((double)((object)first)) + ((double)((object)second)))); return second; } } Note that the typeofs are evaluated at compile time, so the if statements would be removed by the compiler. The compiler also removes spurious casts. So Something would resolve in the compiler to internal static int Sum(int first, int second) { return first + second; } A: I created a little library functionality to solve these problems: Instead of: public T DifficultCalculation<T>(T a, T b) { T result = a * b + a; // <== WILL NOT COMPILE! return result; } Console.WriteLine(DifficultCalculation(2, 3)); // Should result in 8. 
You could write: public T DifficultCalculation<T>(Number<T> a, Number<T> b) { Number<T> result = a * b + a; return (T)result; } Console.WriteLine(DifficultCalculation(2, 3)); // Results in 8. You can find the source code here: https://codereview.stackexchange.com/questions/26022/improvement-requested-for-generic-calculator-and-generic-number A: I had a similar situation where I needed to handle numeric types and strings; seems a bit of a bizarre mix but there you go. Again, like many people I looked at constraints and came up with a bunch of interfaces that it had to support. However, a) it wasn't 100% watertight and b) anyone new looking at this long list of constraints would be immediately very confused. So, my approach was to put all my logic into a generic method with no constraints, but to make that generic method private. I then exposed it with public methods, one explicitly handling the type I wanted to handle - to my mind, the code is clean and explicit, e.g. public static string DoSomething(this int input, ...) => DoSomethingHelper(input, ...); public static string DoSomething(this decimal input, ...) => DoSomethingHelper(input, ...); public static string DoSomething(this double input, ...) => DoSomethingHelper(input, ...); public static string DoSomething(this string input, ...) => DoSomethingHelper(input, ...); private static string DoSomethingHelper<T>(this T input, ....) { // complex logic } A: Unfortunately .NET doesn't provide a way to do that natively. To address this issue I created the OSS library Genumerics, which provides most standard numeric operations for the following built-in numeric types and their nullable equivalents, with the ability to add support for other numeric types: sbyte, byte, short, ushort, int, uint, long, ulong, float, double, decimal, and BigInteger. The performance is equivalent to a numeric type specific solution, allowing you to create efficient generic numeric algorithms. Here's an example of the code usage. public static T Sum<T>(T[] items) { T sum = Number.Zero<T>(); foreach (T item in items) { sum = Number.Add(sum, item); } return sum; } public static T SumAlt<T>(T[] items) { // implicit conversion to Number<T> Number<T> sum = Number.Zero<T>(); foreach (T item in items) { // operator support sum += item; } // implicit conversion to T return sum; } A: .NET 6 has this functionality as a preview feature: https://devblogs.microsoft.com/dotnet/preview-features-in-net-6-generic-math/#generic-math An example from the article: static T Add<T>(T left, T right) where T : INumber<T> { return left + right; } INumber is an interface that inherits from other interfaces, such as IAdditionOperators, which allows the generic + usage.
This is possible now because of another preview feature, static abstracts in interfaces, since the + operator overload is a static method: /// <summary>Defines a mechanism for computing the sum of two values.</summary> /// <typeparam name="TSelf">The type that implements this interface.</typeparam> /// <typeparam name="TOther">The type that will be added to <typeparamref name="TSelf" />.</typeparam> /// <typeparam name="TResult">The type that contains the sum of <typeparamref name="TSelf" /> and <typeparamref name="TOther" />.</typeparam> [RequiresPreviewFeatures(Number.PreviewFeatureMessage, Url = Number.PreviewFeatureUrl)] public interface IAdditionOperators<TSelf, TOther, TResult> where TSelf : IAdditionOperators<TSelf, TOther, TResult> { /// <summary>Adds two values together to compute their sum.</summary> /// <param name="left">The value to which <paramref name="right" /> is added.</param> /// <param name="right">The value which is added to <paramref name="left" />.</param> /// <returns>The sum of <paramref name="left" /> and <paramref name="right" />.</returns> static abstract TResult operator +(TSelf left, TOther right); } A: Beginning with C# 7.3, you can use a closer approximation - the unmanaged constraint to specify that a type parameter is a non-pointer, non-nullable unmanaged type. class SomeGeneric<T> where T : unmanaged { //... } The unmanaged constraint implies the struct constraint and can't be combined with either the struct or new() constraints. A type is an unmanaged type if it's any of the following types: * *sbyte, byte, short, ushort, int, uint, long, ulong, char, float, double, decimal, or bool *Any enum type *Any pointer type *Any user-defined struct type that contains fields of unmanaged types only and, in C# 7.3 and earlier, is not a constructed type (a type that includes at least one type argument) To restrict further and eliminate pointer and user-defined types that do not implement IComparable, add IComparable (but enum is still derived from IComparable, so restrict enum by adding IEquatable<T>; you can go further depending on your circumstances and add additional interfaces. unmanaged allows you to keep this list shorter): class SomeGeneric<T> where T : unmanaged, IComparable, IEquatable<T> { //... } But this doesn't prevent DateTime instantiation. A: I was wondering the same as samjudson, why only integers? And if that is the case, you might want to create a helper class or something like that to hold all the types you want. If all you want are integers, don't use a generic, that is not generic; or better yet, reject any other type by checking its type. A: There is no 'good' solution for this yet. However you can narrow the type argument significantly to rule out many misfits for your hypothetical 'INumeric' constraint as Haacked has shown above. static bool IntegerFunction<T>(T value) where T: IComparable, IFormattable, IConvertible, IComparable<T>, IEquatable<T>, struct {... A: If you are using .NET 4.0 and later then you can just use dynamic as the method argument and check at runtime that the passed dynamic argument is of a numeric/integer type. If the type of the passed dynamic is not a numeric/integer type, then throw an exception.
A short code example that implements the idea: using System; public class InvalidArgumentException : Exception { public InvalidArgumentException(string message) : base(message) {} } public class InvalidArgumentTypeException : InvalidArgumentException { public InvalidArgumentTypeException(string message) : base(message) {} } public class ArgumentTypeNotIntegerException : InvalidArgumentTypeException { public ArgumentTypeNotIntegerException(string message) : base(message) {} } public static class Program { private static bool IntegerFunction(dynamic n) { if (n.GetType() != typeof(Int16) && n.GetType() != typeof(Int32) && n.GetType() != typeof(Int64) && n.GetType() != typeof(UInt16) && n.GetType() != typeof(UInt32) && n.GetType() != typeof(UInt64)) throw new ArgumentTypeNotIntegerException("argument type is not integer type"); //code that implements IntegerFunction goes here } private static void Main() { Console.WriteLine("{0}",IntegerFunction(0)); //Compiles, no run time error; the first line of output is either "True" or "False", depending on the code that implements the Program.IntegerFunction static method. Console.WriteLine("{0}",IntegerFunction("string")); //Also compiles, but there is a run time error: an exception of type "ArgumentTypeNotIntegerException" is thrown here. Console.WriteLine("This is the last Console.WriteLine output"); //Never reached, due to the run time error and the exception thrown on the second line of the Program.Main static method. } } Of course, this solution works at run time only, never at compile time. If you want a solution that always works at compile time and never at run time, you will have to wrap the dynamic in a public struct/class whose overloaded public constructors accept arguments of the desired types only, and give the struct/class an appropriate name. The wrapped dynamic should be a private member of the class/struct - its only member - named "value". You will also have to define and implement public methods and/or operators that work with the desired types for the private dynamic member of the class/struct, where necessary. The struct/class can also have a special/unique constructor that accepts dynamic as an argument and initializes its only private dynamic member "value", but the modifier of this constructor must of course be private. Once the class/struct is ready, change the argument type of IntegerFunction to the class/struct that has been defined.
An example long code that implements the idea is something like: using System; public struct Integer { private dynamic value; private Integer(dynamic n) { this.value = n; } public Integer(Int16 n) { this.value = n; } public Integer(Int32 n) { this.value = n; } public Integer(Int64 n) { this.value = n; } public Integer(UInt16 n) { this.value = n; } public Integer(UInt32 n) { this.value = n; } public Integer(UInt64 n) { this.value = n; } public Integer(Integer n) { this.value = n.value; } public static implicit operator Int16(Integer n) { return n.value; } public static implicit operator Int32(Integer n) { return n.value; } public static implicit operator Int64(Integer n) { return n.value; } public static implicit operator UInt16(Integer n) { return n.value; } public static implicit operator UInt32(Integer n) { return n.value; } public static implicit operator UInt64(Integer n) { return n.value; } public static Integer operator +(Integer x, Int16 y) { return new Integer(x.value + y); } public static Integer operator +(Integer x, Int32 y) { return new Integer(x.value + y); } public static Integer operator +(Integer x, Int64 y) { return new Integer(x.value + y); } public static Integer operator +(Integer x, UInt16 y) { return new Integer(x.value + y); } public static Integer operator +(Integer x, UInt32 y) { return new Integer(x.value + y); } public static Integer operator +(Integer x, UInt64 y) { return new Integer(x.value + y); } public static Integer operator -(Integer x, Int16 y) { return new Integer(x.value - y); } public static Integer operator -(Integer x, Int32 y) { return new Integer(x.value - y); } public static Integer operator -(Integer x, Int64 y) { return new Integer(x.value - y); } public static Integer operator -(Integer x, UInt16 y) { return new Integer(x.value - y); } public static Integer operator -(Integer x, UInt32 y) { return new Integer(x.value - y); } public static Integer operator -(Integer x, UInt64 y) { return new Integer(x.value - y); } public static Integer operator *(Integer x, Int16 y) { return new Integer(x.value * y); } public static Integer operator *(Integer x, Int32 y) { return new Integer(x.value * y); } public static Integer operator *(Integer x, Int64 y) { return new Integer(x.value * y); } public static Integer operator *(Integer x, UInt16 y) { return new Integer(x.value * y); } public static Integer operator *(Integer x, UInt32 y) { return new Integer(x.value * y); } public static Integer operator *(Integer x, UInt64 y) { return new Integer(x.value * y); } public static Integer operator /(Integer x, Int16 y) { return new Integer(x.value / y); } public static Integer operator /(Integer x, Int32 y) { return new Integer(x.value / y); } public static Integer operator /(Integer x, Int64 y) { return new Integer(x.value / y); } public static Integer operator /(Integer x, UInt16 y) { return new Integer(x.value / y); } public static Integer operator /(Integer x, UInt32 y) { return new Integer(x.value / y); } public static Integer operator /(Integer x, UInt64 y) { return new Integer(x.value / y); } public static Integer operator %(Integer x, Int16 y) { return new Integer(x.value % y); } public static Integer operator %(Integer x, Int32 y) { return new Integer(x.value % y); } public static Integer operator %(Integer x, Int64 y) { return new Integer(x.value % y); } public static Integer operator %(Integer x, UInt16 y) { return new Integer(x.value % y); } public static Integer operator %(Integer x, UInt32 y) { return new Integer(x.value % y); } public static Integer 
operator %(Integer x, UInt64 y) { return new Integer(x.value % y); } public static Integer operator +(Integer x, Integer y) { return new Integer(x.value + y.value); } public static Integer operator -(Integer x, Integer y) { return new Integer(x.value - y.value); } public static Integer operator *(Integer x, Integer y) { return new Integer(x.value * y.value); } public static Integer operator /(Integer x, Integer y) { return new Integer(x.value / y.value); } public static Integer operator %(Integer x, Integer y) { return new Integer(x.value % y.value); } public static bool operator ==(Integer x, Int16 y) { return x.value == y; } public static bool operator !=(Integer x, Int16 y) { return x.value != y; } public static bool operator ==(Integer x, Int32 y) { return x.value == y; } public static bool operator !=(Integer x, Int32 y) { return x.value != y; } public static bool operator ==(Integer x, Int64 y) { return x.value == y; } public static bool operator !=(Integer x, Int64 y) { return x.value != y; } public static bool operator ==(Integer x, UInt16 y) { return x.value == y; } public static bool operator !=(Integer x, UInt16 y) { return x.value != y; } public static bool operator ==(Integer x, UInt32 y) { return x.value == y; } public static bool operator !=(Integer x, UInt32 y) { return x.value != y; } public static bool operator ==(Integer x, UInt64 y) { return x.value == y; } public static bool operator !=(Integer x, UInt64 y) { return x.value != y; } public static bool operator ==(Integer x, Integer y) { return x.value == y.value; } public static bool operator !=(Integer x, Integer y) { return x.value != y.value; } public override bool Equals(object obj) { return this == (Integer)obj; } public override int GetHashCode() { return this.value.GetHashCode(); } public override string ToString() { return this.value.ToString(); } public static bool operator >(Integer x, Int16 y) { return x.value > y; } public static bool operator <(Integer x, Int16 y) { return x.value < y; } public static bool operator >(Integer x, Int32 y) { return x.value > y; } public static bool operator <(Integer x, Int32 y) { return x.value < y; } public static bool operator >(Integer x, Int64 y) { return x.value > y; } public static bool operator <(Integer x, Int64 y) { return x.value < y; } public static bool operator >(Integer x, UInt16 y) { return x.value > y; } public static bool operator <(Integer x, UInt16 y) { return x.value < y; } public static bool operator >(Integer x, UInt32 y) { return x.value > y; } public static bool operator <(Integer x, UInt32 y) { return x.value < y; } public static bool operator >(Integer x, UInt64 y) { return x.value > y; } public static bool operator <(Integer x, UInt64 y) { return x.value < y; } public static bool operator >(Integer x, Integer y) { return x.value > y.value; } public static bool operator <(Integer x, Integer y) { return x.value < y.value; } public static bool operator >=(Integer x, Int16 y) { return x.value >= y; } public static bool operator <=(Integer x, Int16 y) { return x.value <= y; } public static bool operator >=(Integer x, Int32 y) { return x.value >= y; } public static bool operator <=(Integer x, Int32 y) { return x.value <= y; } public static bool operator >=(Integer x, Int64 y) { return x.value >= y; } public static bool operator <=(Integer x, Int64 y) { return x.value <= y; } public static bool operator >=(Integer x, UInt16 y) { return x.value >= y; } public static bool operator <=(Integer x, UInt16 y) { return x.value <= y; } public static bool operator 
>=(Integer x, UInt32 y) { return x.value >= y; } public static bool operator <=(Integer x, UInt32 y) { return x.value <= y; } public static bool operator >=(Integer x, UInt64 y) { return x.value >= y; } public static bool operator <=(Integer x, UInt64 y) { return x.value <= y; } public static bool operator >=(Integer x, Integer y) { return x.value >= y.value; } public static bool operator <=(Integer x, Integer y) { return x.value <= y.value; } public static Integer operator +(Int16 x, Integer y) { return new Integer(x + y.value); } public static Integer operator +(Int32 x, Integer y) { return new Integer(x + y.value); } public static Integer operator +(Int64 x, Integer y) { return new Integer(x + y.value); } public static Integer operator +(UInt16 x, Integer y) { return new Integer(x + y.value); } public static Integer operator +(UInt32 x, Integer y) { return new Integer(x + y.value); } public static Integer operator +(UInt64 x, Integer y) { return new Integer(x + y.value); } public static Integer operator -(Int16 x, Integer y) { return new Integer(x - y.value); } public static Integer operator -(Int32 x, Integer y) { return new Integer(x - y.value); } public static Integer operator -(Int64 x, Integer y) { return new Integer(x - y.value); } public static Integer operator -(UInt16 x, Integer y) { return new Integer(x - y.value); } public static Integer operator -(UInt32 x, Integer y) { return new Integer(x - y.value); } public static Integer operator -(UInt64 x, Integer y) { return new Integer(x - y.value); } public static Integer operator *(Int16 x, Integer y) { return new Integer(x * y.value); } public static Integer operator *(Int32 x, Integer y) { return new Integer(x * y.value); } public static Integer operator *(Int64 x, Integer y) { return new Integer(x * y.value); } public static Integer operator *(UInt16 x, Integer y) { return new Integer(x * y.value); } public static Integer operator *(UInt32 x, Integer y) { return new Integer(x * y.value); } public static Integer operator *(UInt64 x, Integer y) { return new Integer(x * y.value); } public static Integer operator /(Int16 x, Integer y) { return new Integer(x / y.value); } public static Integer operator /(Int32 x, Integer y) { return new Integer(x / y.value); } public static Integer operator /(Int64 x, Integer y) { return new Integer(x / y.value); } public static Integer operator /(UInt16 x, Integer y) { return new Integer(x / y.value); } public static Integer operator /(UInt32 x, Integer y) { return new Integer(x / y.value); } public static Integer operator /(UInt64 x, Integer y) { return new Integer(x / y.value); } public static Integer operator %(Int16 x, Integer y) { return new Integer(x % y.value); } public static Integer operator %(Int32 x, Integer y) { return new Integer(x % y.value); } public static Integer operator %(Int64 x, Integer y) { return new Integer(x % y.value); } public static Integer operator %(UInt16 x, Integer y) { return new Integer(x % y.value); } public static Integer operator %(UInt32 x, Integer y) { return new Integer(x % y.value); } public static Integer operator %(UInt64 x, Integer y) { return new Integer(x % y.value); } public static bool operator ==(Int16 x, Integer y) { return x == y.value; } public static bool operator !=(Int16 x, Integer y) { return x != y.value; } public static bool operator ==(Int32 x, Integer y) { return x == y.value; } public static bool operator !=(Int32 x, Integer y) { return x != y.value; } public static bool operator ==(Int64 x, Integer y) { return x == y.value; } public 
static bool operator !=(Int64 x, Integer y) { return x != y.value; } public static bool operator ==(UInt16 x, Integer y) { return x == y.value; } public static bool operator !=(UInt16 x, Integer y) { return x != y.value; } public static bool operator ==(UInt32 x, Integer y) { return x == y.value; } public static bool operator !=(UInt32 x, Integer y) { return x != y.value; } public static bool operator ==(UInt64 x, Integer y) { return x == y.value; } public static bool operator !=(UInt64 x, Integer y) { return x != y.value; } public static bool operator >(Int16 x, Integer y) { return x > y.value; } public static bool operator <(Int16 x, Integer y) { return x < y.value; } public static bool operator >(Int32 x, Integer y) { return x > y.value; } public static bool operator <(Int32 x, Integer y) { return x < y.value; } public static bool operator >(Int64 x, Integer y) { return x > y.value; } public static bool operator <(Int64 x, Integer y) { return x < y.value; } public static bool operator >(UInt16 x, Integer y) { return x > y.value; } public static bool operator <(UInt16 x, Integer y) { return x < y.value; } public static bool operator >(UInt32 x, Integer y) { return x > y.value; } public static bool operator <(UInt32 x, Integer y) { return x < y.value; } public static bool operator >(UInt64 x, Integer y) { return x > y.value; } public static bool operator <(UInt64 x, Integer y) { return x < y.value; } public static bool operator >=(Int16 x, Integer y) { return x >= y.value; } public static bool operator <=(Int16 x, Integer y) { return x <= y.value; } public static bool operator >=(Int32 x, Integer y) { return x >= y.value; } public static bool operator <=(Int32 x, Integer y) { return x <= y.value; } public static bool operator >=(Int64 x, Integer y) { return x >= y.value; } public static bool operator <=(Int64 x, Integer y) { return x <= y.value; } public static bool operator >=(UInt16 x, Integer y) { return x >= y.value; } public static bool operator <=(UInt16 x, Integer y) { return x <= y.value; } public static bool operator >=(UInt32 x, Integer y) { return x >= y.value; } public static bool operator <=(UInt32 x, Integer y) { return x <= y.value; } public static bool operator >=(UInt64 x, Integer y) { return x >= y.value; } public static bool operator <=(UInt64 x, Integer y) { return x <= y.value; } } public static class Program { private static bool IntegerFunction(Integer n) { //code that implements IntegerFunction goes here //note that there is NO code that checks the type of n at run time, because it is NOT needed anymore } private static void Main() { Console.WriteLine("{0}",IntegerFunction(0)); //compile error: there is no overloaded METHOD for objects of type "int" and no implicit conversion from any object, including "int", to "Integer" is known. Console.WriteLine("{0}",IntegerFunction(new Integer(0))); //both compiles and runs with no run time error Console.WriteLine("{0}",IntegerFunction("string")); //compile error: there is no overloaded METHOD for objects of type "string" and no implicit conversion from any object, including "string", to "Integer" is known.
Console.WriteLine("{0}",IntegerFunction(new Integer("string"))); //compile error: there is no overloaded CONSTRUCTOR for objects of type "string" } } Note that in order to use dynamic in your code you must Add Reference to Microsoft.CSharp If the version of the .NET framework is below/under/lesser than 4.0 and dynamic is undefined in that version then you will have to use object instead and do casting to the integer type, which is trouble, so I recommend that you use at least .NET 4.0 or newer if you can so you can use dynamic instead of object. A: More than a decade later, this feature finally exists in .NET 7. The most generic interface is INumber<TSelf> rather than INumeric (in the System.Numerics namespace), and it encompasses not just integer types. To accept just integer types, consider using IBinaryInteger<TSelf> instead. To use the example of your prototypical, mystical IntegerFunction: static bool IntegerFunction<T>(T value) where T : IBinaryInteger<T> { return value > T.Zero; } Console.WriteLine(IntegerFunction(5)); // True Console.WriteLine(IntegerFunction((sbyte)-5)); // False Console.WriteLine(IntegerFunction((ulong)5)); // True The (now obsolete) answer below is left as a historical perspective. C# does not support this. Hejlsberg has described the reasons for not implementing the feature in an interview with Bruce Eckel: And it's not clear that the added complexity is worth the small yield that you get. If something you want to do is not directly supported in the constraint system, you can do it with a factory pattern. You could have a Matrix<T>, for example, and in that Matrix you would like to define a dot product method. That of course that means you ultimately need to understand how to multiply two Ts, but you can't say that as a constraint, at least not if T is int, double, or float. But what you could do is have your Matrix take as an argument a Calculator<T>, and in Calculator<T>, have a method called multiply. You go implement that and you pass it to the Matrix. However, this leads to fairly convoluted code, where the user has to supply their own Calculator<T> implementation, for each T that they want to use. As long as it doesn’t have to be extensible, i.e. if you just want to support a fixed number of types, such as int and double, you can get away with a relatively simple interface: var mat = new Matrix<int>(w, h); (Minimal implementation in a GitHub Gist.) However, as soon as you want the user to be able to supply their own, custom types, you need to open up this implementation so that the user can supply their own Calculator instances. For instance, to instantiate a matrix that uses a custom decimal floating point implementation, DFP, you’d have to write this code: var mat = new Matrix<DFP>(DfpCalculator.Instance, w, h); … and implement all the members for DfpCalculator : ICalculator<DFP>. An alternative, which unfortunately shares the same limitations, is to work with policy classes, as discussed in Sergey Shandar’s answer. A: This question is a bit of a FAQ one, so I'm posting this as wiki (since I've posted similar before, but this is an older one); anyway... What version of .NET are you using? If you are using .NET 3.5, then I have a generic operators implementation in MiscUtil (free etc). This has methods like T Add<T>(T x, T y), and other variants for arithmetic on different types (like DateTime + TimeSpan). Additionally, this works for all the inbuilt, lifted and bespoke operators, and caches the delegate for performance. 
Some additional background on why this is tricky is here. You may also want to know that dynamic (4.0) sort-of solves this issue indirectly too - i.e. dynamic x = ..., y = ... dynamic result = x + y; // does what you expect A: Unfortunately you are only able to specify struct in the where clause in this instance. It does seem strange you can't specify Int16, Int32, etc. specifically, but I'm sure there's some deep implementation reason underlying the decision to not permit value types in a where clause. I guess the only solution is to do a runtime check, which unfortunately prevents the problem being picked up at compile time. That'd go something like:- static bool IntegerFunction<T>(T value) where T : struct { if (typeof(T) != typeof(Int16) && typeof(T) != typeof(Int32) && typeof(T) != typeof(Int64) && typeof(T) != typeof(UInt16) && typeof(T) != typeof(UInt32) && typeof(T) != typeof(UInt64)) { throw new ArgumentException( string.Format("Type '{0}' is not valid.", typeof(T).ToString())); } // Rest of code... } It's a little bit ugly, I know, but at least it provides the required constraints. I'd also look into possible performance implications for this implementation; perhaps there's a faster way out there. A: Probably the closest you can do is static bool IntegerFunction<T>(T value) where T: struct Not sure if you could do the following static bool IntegerFunction<T>(T value) where T: struct, IComparable, IFormattable, IConvertible, IComparable<T>, IEquatable<T> For something so specific, why not just have overloads for each type? The list is so short, and it would possibly have a smaller memory footprint. A: This constraint exists in .NET 7. Check out this .NET Blog post and the actual documentation. Starting in .NET 7, you can make use of interfaces such as INumber and IFloatingPoint to create programs such as: using System.Numerics; Console.WriteLine(Sum(1, 2, 3, 4, 5)); Console.WriteLine(Sum(10.541, 2.645)); Console.WriteLine(Sum(1.55f, 5, 9.41f, 7)); static T Sum<T>(params T[] numbers) where T : INumber<T> { T result = T.Zero; foreach (T item in numbers) { result += item; } return result; } INumber is in the System.Numerics namespace. There are also interfaces such as IAdditionOperators and IComparisonOperators so you can make use of specific operators generically. A: Considering the popularity of this question and the interest behind such a function, I am surprised to see that there is no answer involving T4 yet. In this sample code I will demonstrate a very simple example of how you can use the powerful templating engine to do what the compiler pretty much does behind the scenes with generics. Instead of jumping through hoops and sacrificing compile-time certainty, you can simply generate the function you want for every type you like and use that accordingly (at compile time!). In order to do this:
* Create a new Text Template file called GenericNumberMethodTemplate.tt.
* Remove the auto-generated code (you'll keep most of it, but some isn't needed).
* Add the following snippet:
<#@ template language="C#" #> <#@ output extension=".cs" #> <#@ assembly name="System.Core" #> <# Type[] types = new[] { typeof(Int16), typeof(Int32), typeof(Int64), typeof(UInt16), typeof(UInt32), typeof(UInt64) }; #> using System; public static class MaxMath { <# foreach (var type in types) { #> public static <#= type.Name #> Max (<#= type.Name #> val1, <#= type.Name #> val2) { return val1 > val2 ? val1 : val2; } <# } #> } That's it. You're done now.
Saving this file will automatically compile it to this source file: using System; public static class MaxMath { public static Int16 Max (Int16 val1, Int16 val2) { return val1 > val2 ? val1 : val2; } public static Int32 Max (Int32 val1, Int32 val2) { return val1 > val2 ? val1 : val2; } public static Int64 Max (Int64 val1, Int64 val2) { return val1 > val2 ? val1 : val2; } public static UInt16 Max (UInt16 val1, UInt16 val2) { return val1 > val2 ? val1 : val2; } public static UInt32 Max (UInt32 val1, UInt32 val2) { return val1 > val2 ? val1 : val2; } public static UInt64 Max (UInt64 val1, UInt64 val2) { return val1 > val2 ? val1 : val2; } } In your main method you can verify that you have compile-time certainty: namespace TTTTTest { class Program { static void Main(string[] args) { long val1 = 5L; long val2 = 10L; Console.WriteLine(MaxMath.Max(val1, val2)); Console.Read(); } } } I'll get ahead of one remark: no, this is not a violation of the DRY principle. The DRY principle is there to prevent people from duplicating code in multiple places that would cause the application to become hard to maintain. This is not at all the case here: if you want a change then you can just change the template (a single source for all your generation!) and it's done. In order to use it with your own custom definitions, add a namespace declaration (make sure it's the same one as the one where you'll define your own implementation) to your generated code and mark the class as partial. Afterwards, add these lines to your template file so it will be included in the eventual compilation: <#@ import namespace="TheNameSpaceYouWillUse" #> <#@ assembly name="$(TargetPath)" #> Let's be honest: This is pretty cool. Disclaimer: this sample has been heavily influenced by Metaprogramming in .NET by Kevin Hazzard and Jason Bock, Manning Publications. A: The topic is old, but for future readers: this feature is tightly related to Discriminated Unions, which are not implemented in C# so far. I found its issue here: https://github.com/dotnet/csharplang/issues/113 This issue is still open and the feature has been planned for C# 10. So we still have to wait a bit more, but once it is released you could do it this way: static bool IntegerFunction<T>(T value) where T : Int16 | Int32 | Int64 | ... A: What is the point of the exercise? As people have pointed out already, you could have a non-generic function taking the largest item, and the compiler will automatically convert up smaller ints for you. static bool IntegerFunction(Int64 value) { } If your function is on a performance-critical path (very unlikely, IMO), you could provide overloads for all needed functions. static bool IntegerFunction(Int64 value) { } ... static bool IntegerFunction(Int16 value) { } A: I would use a generic one which you could handle externally (requires System.Reflection)... /// <summary> /// Generic object copy of the same type /// </summary> /// <typeparam name="T">The type of object to copy</typeparam> /// <param name="ObjectSource">The source object to copy</param> public T CopyObject<T>(T ObjectSource) { T NewObject = System.Activator.CreateInstance<T>(); foreach (PropertyInfo p in ObjectSource.GetType().GetProperties()) NewObject.GetType().GetProperty(p.Name).SetValue(NewObject, p.GetValue(ObjectSource, null), null); return NewObject; } A: This limitation affected me when I tried to overload operators for generic types; since there is no "INumeric" constraint - and for a bevy of other reasons, which the good people on Stack Overflow are happy to provide - operations cannot be defined on generic types.
I wanted something like public struct Foo<T> { public T Value { get; private set; } public static Foo<T> operator +(Foo<T> LHS, Foo<T> RHS) { return new Foo<T> { Value = LHS.Value + RHS.Value }; } } I have worked around this issue using .NET 4 dynamic runtime typing. public struct Foo<T> { public T Value { get; private set; } public static Foo<T> operator +(Foo<T> LHS, Foo<T> RHS) { return new Foo<T> { Value = LHS.Value + (dynamic)RHS.Value }; } } The two things to note about using dynamic are:
* Performance - all value types get boxed.
* Runtime errors - you "beat" the compiler, but lose type safety. If the generic type doesn't have the operator defined, an exception will be thrown during execution.
A: The .NET numeric primitive types do not share any common interface that would allow them to be used for calculations. It would be possible to define your own interfaces (e.g. ISignedWholeNumber) which would perform such operations, define structures which contain a single Int16, Int32, etc. and implement those interfaces, and then have methods which accept generic types constrained to ISignedWholeNumber, but having to convert numeric values to your structure types would likely be a nuisance. An alternative approach would be to define static class Int64Converter<T> with a static property bool Available {get;}; and static delegates for Int64 GetInt64(T value), T FromInt64(Int64 value), bool TryStoreInt64(Int64 value, ref T dest). The class constructor could be hard-coded to load delegates for known types, and possibly use Reflection to test whether type T implements methods with the proper names and signatures (in case it's something like a struct which contains an Int64 and represents a number, but has a custom ToString() method). This approach would lose the advantages associated with compile-time type-checking, but would still manage to avoid boxing operations, and each type would only have to be "checked" once. After that, operations associated with that type would be reduced to a delegate dispatch. A: If all you want is to use one numeric type, you could consider creating something similar to an alias in C++ with using. So instead of having the very generic T ComputeSomething<T>(T value1, T value2) where T : INumeric { ... } you could have using MyNumType = System.Double; MyNumType ComputeSomething(MyNumType value1, MyNumType value2) { ... } That might allow you to easily go from double to int or others if needed, but you wouldn't be able to use ComputeSomething with double and int in the same program. But why not replace all double with int then? Because your method may want to use a double whether the input is double or int. The alias allows you to know exactly which variables use the aliased type. A: All numeric types are structs which implement IComparable, IComparable<T>, IConvertible, IEquatable<T>, IFormattable. However, so does DateTime. So this generic extension method is possible: public static bool IsNumeric<T>(this T value) where T : struct, IComparable, IComparable<T>, IConvertible, IEquatable<T>, IFormattable => typeof(T) != typeof(DateTime); But it will fail for a struct that implements those interfaces, e.g.: public struct Foo : IComparable, IComparable<Foo>, IConvertible, IEquatable<Foo>, IFormattable { /* ...
*/ } This non-generic alternative is less performant, but guaranteed to work: public static bool IsNumeric(this Type type) => type == typeof(sbyte) || type == typeof(byte) || type == typeof(short) || type == typeof(ushort) || type == typeof(int) || type == typeof(uint) || type == typeof(long) || type == typeof(ulong) || type == typeof(float) || type == typeof(double) || type == typeof(decimal);
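A quick usage sketch for the Type-based check above (the guard shown here is hypothetical, not part of the original answer):

static bool IntegerFunction<T>(T value) where T : struct
{
    // Runtime guard built on the IsNumeric(Type) extension defined above.
    if (!typeof(T).IsNumeric())
        throw new ArgumentException(typeof(T) + " is not a numeric type.", nameof(value));
    // ... actual numeric work goes here ...
    return true;
}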
{ "language": "en", "url": "https://stackoverflow.com/questions/32664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "445" }
Q: How to remember in CSS that margin is outside the border, and padding inside I don't edit CSS very often, and almost every time I need to go and google the CSS box model to check whether padding is inside the border and margin outside, or vice versa. (Just checked again and padding is inside). Does anyone have a good way of remembering this? A little mnemonic, a good explanation as to why the names are that way round ... A: pin - P is in A: You are using a box. If you were putting something in a box you would put some padding inside to make sure it didn't smack against the sides. Margin would then be the other thing. A: Print the diagram from the Box Dimensions section of the specification, and put it on the wall. A: To me, "padding" just sounds more inner than "margin". Perhaps thinking about the printed page would help? Margins are areas on the far outside - generally, you cannot even print to the edge - they are unmarkable. Within those margins, the content could be padded to provide an extra barrier between the content and the margin? Once you work in CSS enough, it'll become second nature to remember this. A: I've just learnt it over time - the box model is fairly simple, but the main reason people find it hard is that body visibly breaks the model. Really, if you give body a margin and a background you should see it surrounded by a white strip. However this isn't the case - body's margin is painted just like its padding. This establishes a few incorrect assumptions about the box model. I usually think about it like this:
* margin = spacing around the box;
* border = the edge of the box;
* padding = space inside the box.
A: Padding is usually used inside - think of the interior of a padded wall or a delivery box. And if padding is inside, then margin is outside. Shouldn't be too hard. A: Use Firebug to help you see. A: Tim Saunders gave some excellent advice - when I first got started with CSS, I made a point of building a good, fully commented base stylesheet. That stylesheet has changed many times and remains a terrific resource. However, when I ran into my own box model problems, I started using 'Mo Pi'. As in, "I'm not fat enough, I need to eat mo pi." Strange, but it worked for me. Of course, I put on twenty pounds while learning CSS...;-) A: When working with CSS finally drives you mad, the padded cell that they will put you in has the padding on the inside of the walls. A: Create yourself a commented base stylesheet which you can use as a template whenever you need to create a new site or edit an existing site. You can add to it as you grow in knowledge and apply it to various different browsers to see how various things behave. You'll also be able to add in comments or examples about other hard-to-remember stuff or stuff that is counter-intuitive. A: Add a border, even just temporarily. As you play with the numbers, you'll see the difference. In fact, putting temporary borders around elements is a helpful way to work, such that you can see why floats are dropping, etc. A: I know this isn't so much an answer to your question as a tip. Whenever I am dealing with margin and padding, I add a border around the part I am working with; that shows me the room I have to work with. When I am all set, I remove the border. A: PAdding is a PArt of an element's PAinting: it extends the element's background. It makes sense to think of a pair element+padding as sharing a common background.
Padding is analogous to the painting's canvas: the bigger the padding, the bigger the canvas and therefore the background. Border (the painting's frame) goes around that pair. And margin separates paintings on the gallery wall. Thinking about the concept of an object's background helps glue the pair object+padding together. The usual explanations of what is inside vs outside do not stick in memory: after a while everybody gets back to the original confusion. Remember also that margins are vertically collapsible and padding is not. A: Instead of googling it again and again, just use the browser's inspector window. In the Styles tab, scroll down to the bottom and you can see the box model diagram. A: Margin: when you want to move the block. Padding: when you want to move the items within a block.
{ "language": "en", "url": "https://stackoverflow.com/questions/32668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: In MS SQL Server 2005, is there a way to export the complete maintenance plan of a database as a SQL script? Currently, if I want to output a SQL script for a table in my database, in Management Studio, I can right click and output a create script. Is there an equivalent to output an SQL script for a database's maintenance plan? Edit The company I work for has 4 servers, 3 servers and no sign of integration, each one running about 500,000 transactions a day. The original maintenance plans were undocumented, so I am trying to create a default template maintenance plan. A: You can't export them as scripts, but if your intention is to migrate them between server instances then you can import and export them as follows: Connect to Integration Services and expand Stored Packages>MSDB>Maintenance Plans. You can then right click on the plan and select import or export. A: I don't think you can do that with Maintenance Plans, because those are DTS packages - well, now they are called SSIS (SQL Server Integration Services). There was a stored procedure with which you could add maintenance plans (sp_add_maintenance_plan), but I think it may be deprecated. I don't have a SQL 2005 to test here. The question is, why would you want to export the complete maintenance plan? :) If it's for importing into another server, then an SSIS package could be useful. I suggest you take a look in that direction, because those you can export/import among servers.
{ "language": "en", "url": "https://stackoverflow.com/questions/32689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: "ypcat" and "ypmatch username passwd" don't agree after change on server I'm trying to use NIS for authentication on a st of machines. I had to change one of the user ID numbers for a user account on the NIS server (I changed the userid for username from 500 to 509 to avoid a conflict with a local user account with id 500 on the clients). The problem is that it has not updated properly on the client. In particular, if I do ypcat passwd | grep username, I get the up-to-date info: username:*hidden*:509:509:User Name:/home/username:/bin/bash But if I do, ypmatch username passwd, it says: username:*hidden*:500:500:User Name:/home/username:/bin/bash This means that when the user logs onto one of the clients, it has the wrong userid, which causes all sorts of problems. I've done "cd /var/yp; make" on the server, and "service ypbind restart" on the client, but that hasn't fixed the problem. Does anybody know what would be causing this and how I can somehow force a refresh on the client? (I'm running Fedora 8 on both client and server). A: John O pointed me in the right direction. He is right. If you set "files: 0" in /etc/ypserv.conf, you can get ypserv to not cache files. If you have to restart ypserv after each make, this is the problem. The real solution is to look in /var/log/messages for this error: ypserv[]: refused connect from 127.0.0.1 to procedure ypproc_clear (,;0) makedbm -c means: send YPPROC_CLEAR to the local ypserv. The error message in the log means that CLEAR message is getting denied. You need to add 127.0.0.1 to /var/yp/securenets. A: Encountered same problem - RHEL 5.5. Change (any) source map, then run make. ypcat shows the changed info, ypmatch does not. Anything that needs to actually --use-- the new map fails. As per last post, restarting ypserv makes all OK. After days of testing, running strace, etc. I found that ypserv has a "file handle cache" controlled by the "file:" entry in /etc/ypserv.conf --- the default value is 30. Change this to 0 and everything works following the make. Shouldn't have to do this --- Per the manpage for ypserv.conf... "There was one big change between ypserv 1.1 and ypserv 1.2. Since version 1.2, the file handles are cached. This means you have to call makedbm always with the -c option if you create new maps. Make sure, you are using the new /var/yp/Makefile from ypserv 1.2 or later, or add the -c flag to makedbm in the Makefile. If you don't do that, ypserv will continue to use the old maps, and not the updated one." The makefile DOES use "makedbm -c", but still ypserv uses the old (cached) map. Answer: Don't cache the file handles, e.g. set "files: 0" in ypserv.conf A: OK, I found the problem, I also had to restart the NIS service on the server to get it to refresh everything ("service ypserv restart") A: hmm, you're not supposed to have to restart the ypserver to have updates take effect; the make in /var/yp ought to do the trick. you might want to check the Makefile in /var/yp to be sure it's triggering on the right conditions (namely, passwd.by* should check the timestamp on /etc/passwd in some fashion, versus its current table. the process used to go through a passwd.time rule on the NIS server i ran, back in the dark ages). killing and restarting your nis server can have funky effects on (particularly non-linux) clients, so be careful doing it willy-nilly. A: it is because of the nscd daemon. set the time to live value to 60 in /etc/nscd.conf for passwd session. It will work
{ "language": "en", "url": "https://stackoverflow.com/questions/32694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Isn't Func and Predicate the same thing after compilation? Haven't fired up Reflector to look at the difference, but would one expect to see the exact same compiled code when comparing Func<T, bool> vs. Predicate<T>? I would imagine there is no difference, as both take a generic parameter and return bool. A: The more flexible Func family only arrived in .NET 3.5, so it will functionally duplicate types that had to be included earlier out of necessity. (Plus the name Predicate communicates the intended usage to readers of the source code.) A: They share the same signature, but they're still different types. A: Robert S. is completely correct; for example:- class A { static void Main() { Func<int, bool> func = i => i > 100; Predicate<int> pred = i => i > 100; Test<int>(pred, 150); Test<int>(func, 150); // Error } static void Test<T>(Predicate<T> pred, T val) { Console.WriteLine(pred(val) ? "true" : "false"); } } A: Even without generics, you can have different delegate types that are identical in signatures and return types. For example: namespace N { // Represents a method that takes in a string and checks to see // if this string has some predicate (i.e. meets some criteria) // or not. internal delegate bool StringPredicate(string stringToTest); // Represents a method that takes in a string representing a // yes/no or true/false value and returns the boolean value which // corresponds to this string internal delegate bool BooleanParser(string stringToConvert); } In the above example, the two non-generic types have the same signature and return type. (And actually also the same as Predicate<string> and Func<string, bool>.) But as I tried to indicate, the "meaning" of the two is different. This is somewhat like if I make two classes, class Car { string Color; decimal Price; } and class Person { string FullName; decimal BodyMassIndex; }, then just because both of them hold a string and a decimal, that doesn't mean they're the "same" type.
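A small sketch illustrating the point: the two delegate types are not implicitly convertible even though their signatures match, but you can wrap one in the other (this example is mine, not from the answers above):

using System;

class Demo
{
    static void Main()
    {
        Func<int, bool> func = i => i > 100;

        // Predicate<int> pred = func;               // compile error: no implicit conversion
        Predicate<int> pred = func.Invoke;            // OK: method-group conversion
        Func<int, bool> back = new Func<int, bool>(pred);  // OK: new delegate over the same target

        Console.WriteLine(pred(150));  // True
        Console.WriteLine(back(50));   // False
    }
}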
{ "language": "en", "url": "https://stackoverflow.com/questions/32709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Is there a way to get images to display with ASP.NET and app_offline.htm? When using the app_offline.htm feature of ASP.NET, it only allows html, but no images. Is there a way to get images to display without having to point them to a different url on another site? A: If you don't support browsers prior to IE 8, you can always embed the images using a data URI. http://css-tricks.com/data-uris/ A: If you're willing to do a little more work, you can easily create a custom page to take the application offline. One possible solution:
* Create DisplayOfflineMessage.aspx: contains a label to display your offline message from Application["OfflineMessage"].
* ManageOfflineStatus.aspx: contains an offline/online checkbox, a textarea for the offline message and an update button. The update button sets two application-level variables, one for the message and a flag that states whether the application is online. (This page should only be accessible to admins.)
Then in Global.asax: public void Application_Start(object sender, EventArgs e) { Application["OfflineMessage"] = "This website is offline."; Application["IsOffline"] = false; } public void Application_OnBeginRequest(object sender, EventArgs e) { bool offline = Convert.ToBoolean(Application["IsOffline"]); if (offline) { // TODO: allow access to DisplayOfflineMessage.aspx and ManageOfflineStatus.aspx // redirect requests to all other pages Response.Redirect("~/DisplayOfflineMessage.aspx"); } } A: I have an idea. You can create a separate application, pointed at the same folder, without ASP.NET enabled. Then access to images through this application will not be affected by the app_offline.htm file. Or, point that application directly at a folder with static content; there will not be any app_offline files there. But, of course, you need to assign a separate DNS name for this application, something like static.somedomain.com. A: Yes, it just can't come from the site that has the app_offline.htm file. The image would have to be hosted elsewhere. A: Another solution is to embed the image inside the app_offline.htm page using a data URI. There is wide support for this these days - see the following for full details - http://en.wikipedia.org/wiki/Data_URI_scheme A: You could just convert your images to base64 and then display them: <html> <body> <h1> Web under maintenance with image in base64 </h1> <img src="data:image/png;base64,iVBORw0K...="> </body> </html> I've created a Fiddle where you can see it in action
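If you go the data URI route, generating the base64 payload is a one-liner in C#. A throwaway sketch (the file names are just placeholders):

using System;
using System.IO;

class DataUriTool
{
    static void Main()
    {
        // Read the image and emit a data URI you can paste into app_offline.htm.
        byte[] bytes = File.ReadAllBytes("logo.png");
        string dataUri = "data:image/png;base64," + Convert.ToBase64String(bytes);
        File.WriteAllText("logo-datauri.txt", dataUri);
    }
}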
{ "language": "en", "url": "https://stackoverflow.com/questions/32715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Out-of-place builds with C# I just finished setting up an out-of-place build system for our existing C++ code using inherited property sheets, a feature that seems to be specific to the Visual C++ product. Building out-of-place requires that many of the project settings be changed, and the inherited property sheets allowed me to change all the necessary settings just by attaching a property sheet to the project. I am migrating our team from C++/MFC for UI to C# and WPF, but I need to provide the same out-of-place build functionality, hopefully with the same convenience. I cannot seem to find a way to do this with C# projects - I first looked to see if I could reference an MsBuild targets file, but could not find a way to do this. I know I could just use MsBuild for the whole thing, but that seems more complicated than necessary. Is there a way I can define a macro for a directory and use it in the output path, for example? A: I'm not quite sure what an "out-of-place" build system is, but if you just need the ability to copy the compiled files (or other resources) to other directories you can do so by tying into the MSBuild build targets. In our projects we move the compiled dlls into lib folders and put the files into the proper locations after a build is complete. To do this we've created a custom build .target file that creates the Target's, Property's, and ItemGroup's that we then use to populate our external output folder. Our custom targets file looks a bit like this: <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <PropertyGroup> <ProjectName>TheProject</ProjectName> <ProjectDepthPath>..\..\</ProjectDepthPath> <ProjectsLibFolder>..\..\lib\</ProjectsLibFolder> <LibFolder>$(ProjectsLibFolder)$(ProjectName)\$(Configuration)\</LibFolder> </PropertyGroup> <Target Name="DeleteLibFiles"> <Delete Files="@(LibFiles-> '$(ProjectDepthPath)$(LibFolder)%(filename)%(extension)')" TreatErrorsAsWarnings="true" /> </Target> <Target Name="CopyLibFiles"> <Copy SourceFiles="@(LibFiles)" DestinationFolder="$(ProjectDepthPath)$(LibFolder)" SkipUnchangedFiles="True" /> </Target> <ItemGroup> <LibFiles Include=" "> <Visible>false</Visible> </LibFiles> </ItemGroup> </Project> The .csproj file in Visual Studio then integrates with this custom target file: <?xml version="1.0" encoding="utf-8"?> <Project ToolsVersion="3.5" ... > ... <Import Project="..\..\..\..\build\OurBuildTargets.targets" /> <ItemGroup> <LibFiles Include="$(OutputPath)$(AssemblyName).dll"> <Visible>false</Visible> </LibFiles> </ItemGroup> <Target Name="BeforeClean" DependsOnTargets="DeleteLibFiles" /> <Target Name="AfterBuild" DependsOnTargets="CopyLibFiles" /> </Project> In a nutshell, this build script first tells MSBuild to load our custom build script, then adds the compiled file to the LibFiles ItemGroup, and lastly ties our custom build targets, DeleteLibFiles and CopyLibFiles, into the build process. We set this up for each project in our solution so only the files that are updated get deleted/copied and each project is responsible for its own files (dlls, images, etc). I hope this helps. I apologize if I misunderstood what you mean by out-of-place build system and this is completely useless to you! A: "Is there a way I can define a macro for a directory and use it in the output path?" Have you looked at the pre-build and post-build events of a project? A: Actually, pre-build and post-build events seem to be solely a place to add batch-file type commands.
This would not help me to set up standard build directories for our projects, unfortunately. And having these events create batch files seems like a very 1980's approach for a modern language like C#, IMO. After digging some more, and experimenting, I have found that you can add an <Import> directive into your .csproj file. When you do this, the IDE pops up a warning dialog that there is an unsafe entry point in your project - but you can ignore this, and you can make it not appear at all by editing a registry entry, evidently. So this would give me a way to get the variables containing the directory paths I need into the .csproj file. Now to get the Output Path to refer to it - unfortunately when you add a string like "$(MySpecialPath)/Debug" to the Output Path field, and save the project, the $ and () chars are converted to hex, and your file gets put in a Debug directory under a directory named "$(MySpecialPath)". Arrgghh. If you edit the .csproj file in a text editor, you can set this correctly however, and it seems to work as long as the <Import> tag appears before the <PropertyGroup> containing the Output Path. So I think the solution for me will be to create a standard OurTeam.targets MsBuild file in a standard location, add an installer for changing the registry so it doesn't flag warnings, and then create custom project templates that <Import> this file, and also set the Output Path to use the properties defined in the OurTeam.targets file. Sadly, this is more work and a less elegant solution than the property sheet inheritance mechanism in C++.
{ "language": "en", "url": "https://stackoverflow.com/questions/32717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Setting time zone remotely in C# How do you set the Windows time zone on the local machine programmatically in C#? Using an interactive tool is not an option because the remote units have no user interface or users. The remote machine is running .NET 2.0 and Windows XP Embedded and a local app that communicates with a central server (via web service) for automated direction of its tasks. We can deliver a command to synch to a certain time/zone combination, but what code can be put in the local app to accomplish the change? The equipment is not imaged for specific locations before installation, so in order to use any equipment at any location, we have to be able to synch this information. A: SetTimeZoneInformation should do what you need. You'll need to use P/Invoke to get at it. Note also that you'll need to possess and enable the SE_TIME_ZONE_NAME privilege. A: This is not working in Windows 7. I have tried it in the following environment: Windows 7, VSTS 2008. It just opens the Change Time Zone window, as if you were changing it manually. A: santosh, you are definitely correct. RunDLL32 shell32.dll,Control_RunDLL %SystemRoot%\system32\TIMEDATE.cpl,,/Z %1 has been deprecated for years and will not run on Windows 2008, R2, Vista, 7, ... A: Try this... First, you need to find, in the registry, the key that represents the zone you want ("Central Standard Time" is an example). Those are located here: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Time Zones So, with that in mind, create a batch file, "SetTimeZone.bat", with the following line in it: RunDLL32 shell32.dll,Control_RunDLL %SystemRoot%\system32\TIMEDATE.cpl,,/Z %1 From C#, call: System.Diagnostics.Process.Start("SetTimeZone.bat", "The key of the time zone you want to set"); A: You will run the risk of having incorrect data if you do not use UTC to transmit dates... If you change time zones on the device... your dates will be even further off. You may want to use UTC and then calculate time in each timezone. A: Instead of just setting the time zone of some systems you should consider using the Windows Time Service. It will not only handle the time zone; it will also take care of setting the date and time itself correctly. Take a look at: How to synchronize the time with the Windows Time service in Windows XP Even if you are going to make all these settings on Vista or Windows 7, take a look at this here.
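For reference, a hedged sketch of the P/Invoke route from the first answer. The struct layout below follows the documented Win32 TIME_ZONE_INFORMATION (verify against winbase.h before relying on it), and populating it correctly for a given zone - e.g. from the registry keys mentioned above - is left out:

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct SYSTEMTIME
{
    public ushort wYear, wMonth, wDayOfWeek, wDay,
                  wHour, wMinute, wSecond, wMilliseconds;
}

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
struct TIME_ZONE_INFORMATION
{
    public int Bias;                       // UTC = local time + Bias, in minutes
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
    public string StandardName;
    public SYSTEMTIME StandardDate;
    public int StandardBias;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
    public string DaylightName;
    public SYSTEMTIME DaylightDate;
    public int DaylightBias;
}

static class NativeMethods
{
    // Requires the SE_TIME_ZONE_NAME privilege, as noted in the first answer.
    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern bool SetTimeZoneInformation(ref TIME_ZONE_INFORMATION lpTimeZoneInformation);
}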
{ "language": "en", "url": "https://stackoverflow.com/questions/32718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: .NET : Double-click event in TabControl I would like to intercept the event in a .NET Windows Forms TabControl when the user has changed tab by double-clicking the tab (instead of just single-clicking it). Do you have any idea of how I can do that? A: The MouseDoubleClick event of the TabControl seems to respond just fine to double-clicking. The only additional step I would do is set a short timer after the TabIndexChanged event to track that a new tab has been selected and ignore any double-clicks that happen outside the timer. This will prevent double-clicking on the selected tab. A: For some reason, MouseDoubleClick, as suggested by Jason Z, only fires when clicking on the tabs - clicking on the tab panel does not do anything - so that's exactly what I was looking for. A: How about subclassing the TabControl class and adding your own DoubleClick event?
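A sketch of the timer idea from the first answer (the class name and the 500 ms window are arbitrary choices of mine):

using System;
using System.Windows.Forms;

class TabDoubleClickWatcher
{
    DateTime lastTabChange = DateTime.MinValue;

    public TabDoubleClickWatcher(TabControl tabs)
    {
        // Remember when the selected tab last changed...
        tabs.SelectedIndexChanged += (s, e) => lastTabChange = DateTime.Now;

        // ...and treat a double-click as "tab changed by double-click" only
        // if it follows a recent tab change; this ignores double-clicks on
        // the already-selected tab.
        tabs.MouseDoubleClick += (s, e) =>
        {
            if ((DateTime.Now - lastTabChange).TotalMilliseconds < 500)
                Console.WriteLine("Tab " + tabs.SelectedIndex + " selected by double-click");
        };
    }
}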
{ "language": "en", "url": "https://stackoverflow.com/questions/32733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What protocols and servers are involved in sending an email, and what are the steps? For the past few weeks, I've been trying to learn about just how email works. I understand the process of a client receiving mail from a server using POP pretty well. I also understand how a client computer can use SMTP to ask an SMTP server to send a message. However, I'm still missing something... The way I understand it, outgoing mail has to make three trips: * *Client (gmail user using Thunderbird) to a server (Gmail) *First server (Gmail) to second server (Hotmail) *Second server (Hotmail) to second client (hotmail user using OS X Mail) As I understand it, step one uses SMTP for the client to communicate. The client authenticates itself somehow (say, with USER and PASS), and then sends a message to the gmail server. However, I don't understand how gmail server transfers the message to the hotmail server. For step three, I'm pretty sure, the hotmail server uses POP to send the message to the hotmail client (using authentication, again). So, the big question is: when I click send Mail sends my message to my gmail server, how does my gmail server forward the message to, say, a hotmail server so my friend can receive it? Thank you so much! ~Jason Thanks, that's been helpful so far. As I understand it, the first client sends the message to the first server using SMTP, often to an address such as smtp.mail.SOMESERVER.com on port 25 (usually). Then, SOMESERVER uses SMTP again to send the message to RECEIVESERVER.com on port 25 (not smtp.mail.RECEIVESERVER.com or anything fancy). Then, when the recipient asks RECEIVESERVER for its mail, using POP, s/he receives the message... right? Thanks again (especially to dr-jan), Jason A: You're looking for the Mail Transfer Agent, Wikipedia has a nice article on the topic. Within Internet message handling services (MHS), a message transfer agent or mail transfer agent (MTA) or mail relay is software that transfers electronic mail messages from one computer to another using a client–server application architecture. An MTA implements both the client (sending) and server (receiving) portions of the Simple Mail Transfer Protocol. The terms mail server, mail exchanger, and MX host may also refer to a computer performing the MTA function. The Domain Name System (DNS) associates a mail server to a domain with mail exchanger (MX) resource records containing the domain name of a host providing MTA services. A: You might also be interested to know why the GMail to HotMail link uses SMTP, just like your Thunderbird client. In other words, since your client can send email via SMTP, and it can use DNS to get the MX record for hotmail.com, why doesn't it just send it there directly, skipping gmail.com altogether? There are a couple of reasons, some historical and some for security. In the original question, it was assumed that your Thunderbird client logs in with a user name and password. This is often not the case. SMTP doesn't actually require a login to send a mail. And SMTP has no way to tell who's really sending the mail. Thus, spam was born! There are, unfortunately, still many SMTP servers out there that allow anyone and everyone to connect and send mail, trusting blindly that the sender is who they claim to be. These servers are called "open relays" and are routinely black-listed by smarter administrators of other mail servers, because of the spam they churn out.
Responsible SMTP server admins set up their server to accept mail for delivery only in special cases:

1) the mail is coming from "its own" network, or
2) the mail is being sent to "its own" network, or
3) the user presents credentials that identify him as a trusted sender.

Case #1 is probably what happens when you send mail from work; your machine is on the trusted network, so you can send mail to anyone. A lot of corporate mail servers still don't require authentication, so you can impersonate anyone in your office. Fun! Case #2 is when someone sends you mail. And case #3 is probably what happens with your GMail example. You're not coming from a trusted network, you're just out on the Internet with the spammers. But by using a password, you can prove to GMail that you are who you say you are.

The historical aspect is that in the old days, the link between gmail and hotmail was likely to be intermittent. By queuing your mail up at a local server, you could wash your hands of it, knowing that when a link was established, the local server could transfer your messages to the remote server, which would hold the message until the recipient's agent picked it up.

A: The first server will look in DNS for an MX record of the Hotmail server. MX is a special record that defines a mail server for a certain domain. Knowing the IP address of the Hotmail server, the GMail server will send the message using the SMTP protocol and will wait for an answer. If the Hotmail server goes down, the GMail server will try to resend the message (it will depend on the server software configuration). If the process completes OK, then OK; if not, the GMail server will notify you that it wasn't able to deliver the message.

A: If you really want to know how email works you could read the SMTP RFC or the POP3 RFC.

A: The SMTP server at Gmail (which accepted the message from Thunderbird) will route the message to the final recipient. It does this by using DNS to find the MX (mail exchanger) record for the domain name part of the destination email address (hotmail.com in this example). The DNS server will return an IP address which the message should be sent to. The server at the destination IP address will hopefully be running SMTP (on the standard port 25) so it can receive the incoming messages. Once the message has been received by the hotmail server, it is stored until the appropriate user logs in and retrieves their messages using POP (or IMAP).

Jason - to answer your follow up...

Then, SOMESERVER uses SMTP again to send the message to RECEIVESERVER.com on port 25 (not smtp.mail.RECEIVESERVER.com or anything fancy).

That's correct - the domain name to send to is taken as everything after the '@' in the email address of the recipient. Often, RECEIVESERVER.com is an alias for something more specific, say something like incoming.RECEIVESERVER.com (or, indeed, smtp.mail.RECEIVESERVER.com). You can use nslookup to query your local DNS servers (this works in Linux and in a Windows cmd window):

    nslookup
    > set type=mx
    > stackoverflow.com
    Server:   158.155.25.16
    Address:  158.155.25.16#53
    Non-authoritative answer:
    stackoverflow.com  mail exchanger = 10 aspmx.l.google.com.
    stackoverflow.com  mail exchanger = 20 alt1.aspmx.l.google.com.
    stackoverflow.com  mail exchanger = 30 alt2.aspmx.l.google.com.
    stackoverflow.com  mail exchanger = 40 aspmx2.googlemail.com.
    stackoverflow.com  mail exchanger = 50 aspmx3.googlemail.com.
    Authoritative answers can be found from:
    aspmx.l.google.com  internet address = 64.233.183.114
    aspmx.l.google.com  internet address = 64.233.183.27
    >

This shows us that email to anyone at stackoverflow.com should be sent to one of the gmail servers shown above. The Wikipedia article mentioned (http://en.wikipedia.org/wiki/Mx_record) discusses the priority numbers shown above (10, 20, ..., 50).

A: Step 2 to 3 (i.e. Gmail to Hotmail) would normally happen through SMTP (or ESMTP - extended SMTP). Hotmail doesn't send anything to a client via POP3. It's important to understand some of the nuances here. The client contacts Hotmail via POP3 and requests its mail (i.e. the client initiates the discussion).

A: All emails are transferred using SMTP (or ESMTP). The important thing to understand is that when you send a message to someguy@hotmail.com, this message's destination is not his PC. The destination is someguy's inbox folder at the hotmail.com server. After the message arrives at its destination, the user can check if he has any new messages on his account at the hotmail server and retrieve them using POP3. Also, it would be possible to send the message without using the gmail server, by sending it directly from your PC to hotmail using SMTP.
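To make the first hop concrete, here is a minimal C# sketch of a client submitting a message to its own provider's server using System.Net.Mail. The host name, port, and credentials are illustrative assumptions rather than verified GMail settings:

    using System.Net;
    using System.Net.Mail;

    class FirstHopExample
    {
        static void Main()
        {
            // The client only talks to its *own* provider's server (trip one);
            // that server then looks up hotmail.com's MX record and relays the
            // message onward via SMTP (trip two).
            // Host, port, and credentials below are placeholder assumptions.
            SmtpClient client = new SmtpClient("smtp.gmail.com", 587);
            client.EnableSsl = true; // submission ports typically require TLS
            client.Credentials = new NetworkCredential("user@gmail.com", "password");
            client.Send("user@gmail.com", "friend@hotmail.com",
                        "Hello", "Delivered via two SMTP hops and one POP download.");
        }
    }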
{ "language": "en", "url": "https://stackoverflow.com/questions/32744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How do I get today's date in C# in mm/dd/yyyy format?

How do I get today's date in C# in mm/dd/yyyy format? I need to set a string variable to today's date (preferably without the year), but there's got to be a better way than building it month-/-day one piece at a time.

BTW: I'm in the US so M/dd would be correct, e.g. September 11th is 9/11.

Note: an answer from kronoz came in that discussed internationalization, and I thought it was awesome enough to mention since I can't make it an 'accepted' answer as well. kronoz's answer

A: If you want it without the year:

    DateTime.Now.ToString("MM/dd");

DateTime.ToString() has a lot of cool format strings: http://msdn.microsoft.com/en-us/library/aa326721.aspx

A: DateTime.Now.Date.ToShortDateString() is culture specific. It is best to stick with:

    DateTime.Now.ToString("d/MM/yyyy");

A: string today = DateTime.Today.ToString("M/d");

A: DateTime.Now.Date.ToShortDateString()

I think this is what you are looking for.

A: Or without the year:

    DateTime.Now.ToString("M/dd")

A: Not to be horribly pedantic, but if you are internationalising the code it might be more useful to have the facility to get the short date for a given culture, e.g.:-

    using System.Globalization;
    using System.Threading;

    ...

    var currentCulture = Thread.CurrentThread.CurrentCulture;
    try
    {
        Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture("en-us");
        string shortDateString = DateTime.Now.ToShortDateString();
        // Do something with shortDateString...
    }
    finally
    {
        Thread.CurrentThread.CurrentCulture = currentCulture;
    }

Though clearly the "m/dd/yyyy" approach is considerably neater!!

A: DateTime.Now.ToString("M/d/yyyy");

http://msdn.microsoft.com/en-us/library/8kb3ddd4.aspx

A: DateTime.Now.ToString("dd/MM/yyyy");
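A related sketch: rather than swapping the thread culture as in kronoz's answer, the ToString overload that takes an IFormatProvider can apply a culture to a single call. The format strings below are just the ones discussed in this question:

    using System;
    using System.Globalization;

    class DateFormatExample
    {
        static void Main()
        {
            // Explicit culture for one call, leaving the thread culture untouched.
            string usShort = DateTime.Now.ToString("M/dd", CultureInfo.CreateSpecificCulture("en-US"));

            // InvariantCulture guarantees "/" is a literal slash; in a plain
            // ToString it would be replaced by the current culture's date separator.
            string fixedForm = DateTime.Now.ToString("MM/dd/yyyy", CultureInfo.InvariantCulture);

            Console.WriteLine(usShort);
            Console.WriteLine(fixedForm);
        }
    }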
{ "language": "en", "url": "https://stackoverflow.com/questions/32747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "114" }
Q: How can I take a byte array of a TIFF image and turn it into a System.Drawing.Image object?

I have a byte[] array, the contents of which represent a TIFF file (as in, if I write out these bytes directly to a file using the BinaryWriter object, it forms a perfectly valid TIFF file) and I'm trying to turn it into a System.Drawing.Image object so that I can use it for later manipulation (feeding into a multipage TIFF object). The problem I'm having is that the commonly accepted code for this task:

    public Image byteArrayToImage(byte[] byteArrayIn)
    {
        MemoryStream ms = new MemoryStream(byteArrayIn);
        Image returnImage = Image.FromStream(ms, true);
        return returnImage;
    }

doesn't work for me. The second line of the above method, where it calls the Image.FromStream method, dies at runtime, saying

Parameter Not Valid

I believe that the method is choking on the fact that this is a TIFF file but I cannot figure out how to make the FromStream method accept this fact. How do I turn a byte array of a TIFF image into an Image object?

Also, like I said, the end goal of this is to have a byte array representing a multipage TIFF file, which contains the TIFF files for which I have byte array objects right now. If there's a much better way to go about doing this, I'm all for it.

A: OK, I found the issue, and it was from a part of the code unrelated to the part of the code I was asking about. The data was being passed as a string, I was converting it to a byte array (this was a test rig so I was trying to simulate the byte array that I get in the main app), then converting that to a MemoryStream, then making an Image from that. What I failed to realize was that the string was Base64 encoded. Calling Convert.FromBase64String() caused it to turn into a byte array which wouldn't kill the Image.FromStream() method.

So basically it boiled down to a stupid mistake on my part. But hey, the code above is still useful and this page will probably serve as a Google result for someone else on how to avoid this mistake.

Also, I found an easy way to construct a multi-page TIFF from my byte arrays here.

A: Edit: The assumption below is not correct. I had a chance to fire up my IDE later and tested with and without Write, and both populated the MemoryStream correctly.

I think you need to write to your MemoryStream first. As if my memory (no pun intended) serves me correctly, this:

    MemoryStream ms = new MemoryStream(byteArrayIn);

creates a memory stream of that size. You then need to write your byte array contents to the memory stream:

    ms.Write(byteArrayIn, 0, byteArrayIn.Length);

See if that fixes it.

A: All these were clues that helped me figure out my problem, which was the same problem as the question asks. So I want to post my solution, which I arrived at because of these helpful clues. Thanks for all the clues posted so far!

As Tim Saunders posted in his answer, that Write method to actually write the bytes to the memory stream is essential. That was my first mistake. Then my data was bad TIFF data too, but in my case, I had an extra character 13 (a carriage return) at the beginning of my image data. Once I removed that, it all worked fine for me.

When I read about some basic TIFF file format specs, I found that TIFF files must begin with II or MM (two bytes with values of either 73 or 77). II means little-endian byte order ('Intel byte ordering') is used. MM means big-endian ('Motorola byte ordering') is used. The next two bytes are a two-byte integer value ( = Int16 in .NET) of 42, binary 101010.

Thus a correct TIFF stream of bytes begins with the decimal byte values of: 73, 73, 42, 0 or 77, 77, 0, 42. I encourage anyone with the same problem that we experienced to inspect your TIFF data byte stream and make sure your data is valid TIFF data!

Thanks Schnapple and Tim Saunders!!
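Building on that header description, here is a small sketch that checks those four signature bytes before handing the buffer to Image.FromStream. The helper class and method names are my own invention, not part of any library:

    using System;
    using System.Drawing;
    using System.IO;

    static class TiffHelper
    {
        // A valid TIFF starts with "II" (73, 73) or "MM" (77, 77),
        // followed by the magic number 42 in the matching byte order.
        public static bool LooksLikeTiff(byte[] data)
        {
            if (data == null || data.Length < 4)
                return false;
            bool intelOrder = data[0] == 73 && data[1] == 73 && data[2] == 42 && data[3] == 0;
            bool motorolaOrder = data[0] == 77 && data[1] == 77 && data[2] == 0 && data[3] == 42;
            return intelOrder || motorolaOrder;
        }

        public static Image ToImage(byte[] data)
        {
            if (!LooksLikeTiff(data))
                throw new ArgumentException("Buffer does not start with a valid TIFF header.");
            return Image.FromStream(new MemoryStream(data), true);
        }
    }

A check like this makes stray leading bytes (such as the carriage return above) fail fast with a clear message instead of the opaque "Parameter Not Valid".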
{ "language": "en", "url": "https://stackoverflow.com/questions/32750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: generation of designer file failed

Every few days VS2008 decides to get mad at me and fails to generate a designer file, claiming it cannot find the file specified and that it's missing an assembly. Here's the scenario: The aspx page has a reference to a custom user control (inheriting UserControl) which references another assembly in the backend. There are many other references to this "missing" assembly in other places in code which don't throw errors. Rebuilding, updating the source to the clean copy, shouting at the computer, punching the screen, etc. all fail to work. Any suggestions? This is quite annoying.

A: We've had similar problems before; unfortunately I don't remember the exact solution.

If you're using a "Web Site" project (no project file) then start by checking that both your page and your control set the ClassName property in the first line of your aspx/ascx file and that you specify the full name of the class including the namespace. Example:

    <%@ Control Language="VB" AutoEventWireup="false" ClassName="YourProjectName.YourUserControl" Inherits="YourProjectName.YourUserControl" CodeFile="YourUserControl.ascx.vb" %>

Many times not setting all of these will still work, but you will get odd compiler errors and behavior in VS.

If you're using a Web Application project, try deleting the designer file manually and then right-clicking on your project and choosing "Convert to Web Application." This should recreate the designer file for you.

My only other suggestion would be to recreate the page and/or the user control from scratch.

A: Jared, you've hit it. Using "Convert to Web Application" to manually generate the designer file solves my problem. I'm glad you posted this before I started reinstalling. Thanks.

A: You might try archiving a template of a new file with its designer equivalent. If VS coughs then you can do an "Add Existing" option with the file you already have. It seems, however, to be an issue with your installation of VS2008, so you might try reinstalling it.

A: I found that when using a custom control, you need to add a reference to its .dll. This fixed it for me after migrating from a web site to a web app.
{ "language": "en", "url": "https://stackoverflow.com/questions/32766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What is the best and most complete implementation of Unix system commands for Windows?

I've found a few (unfortunately, they are bookmarked at home and I'm at work, so no links), but I was wondering if anyone had any opinions about any of them (love it, hate it, whatever) so I could make a good decision. I think I'm going to use Cygwin for my Unix commands on Windows, but I'm not sure how well that's going to work, so I would love for alternatives, and I'm sure there are people out there interested in this who aren't running Cygwin.

A: These work very well for me: http://unxutils.sourceforge.net/. Cygwin is not so good on Vista or 64 bit, so I stopped using it a while back.

A: I use Cygwin, but I have used the Berkley Utilities in the past. They worked well enough, if you are used to DOS and you just want the commands. There are some alternatives listed at TinyApps. Maybe you could also consider running a command line version of Linux in a virtual machine? Colinux is also an option, but it's immature.

A: Powershell is what you are looking for; it contains aliases for a lot of UNIX commands and a lot more besides. John

A: UnxUtils isn't updated as often and isn't as complete as Cygwin, but it runs natively just like any other Windows command line utility. Cygwin acts more like a Linux command line emulator. It does feel pretty clunky, but it is easier to port utilities to it, and it is more complete than UnxUtils. I personally don't like Cygwin. It really does seem to be wanting. Unless it has some specific tool you want that only works in Cygwin, I'd find native ports.

http://www.activestate.com/Products/activeperl/index.mhtml is a nice Perl package for Windows.

http://www.locate32.net/ - I've always liked locate. Much faster than grep for finding files by name.

A: Microsoft distributes a UNIX API compatibility layer for Windows NT-based OSes, as well as many common UNIX command line utilities that run on top of this compatibility layer. Unlike Cygwin, it doesn't sit on top of the Win32 subsystem, but instead interfaces with the NT native APIs directly. It supports features that may be difficult to implement on top of Win32, such as case-sensitive filenames and fork().

The Windows 2K/XP version is called Windows Services for UNIX (SFU). SFU is a free download from Microsoft, and also includes an NFS server and client. The Windows Vista version is called Subsystem for UNIX-based Applications (SUA). SUA is included in the Enterprise and Ultimate editions of Windows Vista, but does not include any NFS support. Neither SFU nor SUA include an X server. It is possible (but possibly ironic?) to use the free Cygwin X server with SFU/SUA applications. The Wikipedia entries on SFU and Interix have more details and history.

A: Linux/BSD :)

A: Why vote down this question? It's obviously meant to be tongue in cheek; is it worth the voter and the receiver losing rep over? Can't you people leave anything at zero and mark up the answers you want to see float rather than mark down the funny one liners?

In answer to the question, I've used Cygwin in the past but always found it clunky and wanting. I don't think it's the tools' problem but mine, but I have bookmarked Eric's suggestion of unxutils for when my new windows machine arrives tomorrow.

A: I use Cygwin a lot. I use it for any mvn commands, find, grep, perl, scp and all the other stuff I got used to over all the years I worked only on FreeBSD desktops and servers. I have my old .vimrc, .zshrc, my .ssh/config and all the other nice stuff.
I use rxvt.exe instead of cmd.exe, which made all the difference for me! Resize, decent buffer, fonts and so on.

A: andLinux is a distribution of coLinux, which runs the entire Linux kernel inside Windows (with better performance than a VM). Then, with the help of Xming (an X windows server for Windows), you can have Linux windows mingle alongside Windows windows. With that, pretty much everything Linux-based will just work. You're not limited to just the tools that have been ported to Cygwin; you can apt-get anything you want.

andLinux also includes a few niceties, such as desktop shortcuts to launch Linux apps, a launcher that lives in your tray, and context menu items (right click a text file and you can open it in Kate).

The downsides of andLinux are:

* Accessing the Linux filesystem is tricky. You have to set up Samba in both directions.
* Connecting to a Linux program from a remote connection is also tricky (but possible)
{ "language": "en", "url": "https://stackoverflow.com/questions/32777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Restrict the server access from LAN only

Recently we got a new server at the office purely for testing purposes. It is set up so that we can access it from any computer. However, today our IP got blocked from one of our other sites saying that our IP has been suspected of having a virus that sends spam emails. We learned this from the CBL (http://cbl.abuseat.org/). So of course we turned the server off to stop this. The problem is the server must be on to continue developing our application and to access the database that is installed on it. Our normal admin is on vacation and is unreachable, and the rest of us are idiots (me included) in this area.

We believe that the best solution is to remove it from connecting to the internet but still access it on the LAN. If that is a valid solution, how would this be done, or is there a better way? Say, blocking specified ports or whatever.

A: I assume that this server is behind a router? You should be able to block WAN connections to the server on the router and still leave it open to accepting LAN connections. Or you could restrict the IPs that can connect to the server to the development machines on the network.
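If the box itself runs Windows Server 2008 or later and you would rather enforce this on the machine than on the router, the built-in firewall can express "LAN only" directly. A sketch, assuming the stock Windows Firewall is in use (the rule name is arbitrary):

    netsh advfirewall set allprofiles firewallpolicy blockinbound,allowoutbound
    netsh advfirewall firewall add rule name="Allow LAN only" dir=in action=allow remoteip=localsubnet

The first command makes "block" the default for all inbound traffic; the second re-allows anything arriving from the local subnet. On older versions of Windows the equivalent would have to be configured through the legacy firewall UI instead.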
{ "language": "en", "url": "https://stackoverflow.com/questions/32780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Alternatives to System.exit(1)

For various reasons calling System.exit is frowned upon when writing Java Applications, so how can I notify the calling process that not everything is going according to plan?

Edit: The 1 is a stand-in for any non-zero exit code.

A: The use of System.exit is frowned upon when the 'application' is really a sub-application (e.g. servlet, applet) of a larger Java application (server): in this case the System.exit could stop the JVM and hence also all other sub-applications. In this situation, throwing an appropriate exception, which could be caught and handled by the application framework/server, is the best option.

If the java application is really meant to be run as a standalone application, there is nothing wrong with using System.exit. In this case, setting an exit value is probably the easiest (and also most used) way of communicating failure or success to the parent process.

A: System.exit() will block, and create a deadlock if the thread that initiated it is used in a shutdown hook.

A: Our company's policy is that it's OK (even preferred) to call System.exit(-1), but only in init() methods. I would definitely think twice before calling it during a program's normal flow.

A: I think throwing an exception is what you should do when something goes wrong. This way, if your application is not running as a stand-alone app, the caller can react to it and has some information about what went wrong. It is also easier for debugging purposes because you get a better idea about what went wrong when you see a stack trace.

One important thing to note is that when the exception reaches the top level and therefore causes the VM to quit, the VM returns a return code of 1, so outside applications that use the return code see that something went wrong.

The only case where I think System.exit() makes sense is when your app is meant to be called by applications which are not Java and therefore have to use return codes to see if your app worked or not, and you want those applications to have a chance to react differently to different things going wrong, i.e. you need different return codes.

A: I agree with the "throw an Exception" crowd. One reason is that calling System.exit makes your code difficult to use if you want other code to be able to use it. For example, if you find out that your class would be useful from a web app, or some kind of message consuming app, it would be nice to allow those containers the opportunity to deal with the failure somehow. A container may want to retry the operation, decide to log and ignore the problem, send an email to an administrator, etc.

An exception to this would be your main() method; this could trap the Exception, and call System.exit() with some value that can be recognized by the calling process or shell script.

A: It can be dangerous / problematic in web servlet environments also. Throwing an Exception is generally considered the other alternative.

A: Throwing exceptions is the best way to send information about a certain error up and out of the app. A number doesn't tell you as much as:

    Exception at thread 'main': FileNotFoundException "The file 'foo' doesn't exist"

(or something close to that)

A: It's frowned upon for normal exits. If "not everything is going according to plan", then System.exit is fine.

Update: I should add that I assume your '1' has meaning that is documented somewhere.

A: I feel impelled to add some salt of mine too. This is a great question that always pops up when I write an application.
As everyone here seems to agree, you should be careful using System.exit(), and if possible, use exceptions. However, System.exit() still seems the only way to return basic information to the system, and is thus required if you want to make your application scriptable. If you don't need that, just throw an exception and be done with it.

But if (and only if) your application is single-threaded, then it's fine to use it -- it's guaranteed that no other stuff is going on, and no resources are open (at least if you consistently use the try-with-resources idiom, which I'd highly recommend as it also makes the code cleaner and more compact).

On the other hand, as soon as your application creates any kind of thread that may write resources, System.exit() is a total "no, no", because it can (and, with time, will) corrupt data. To be able to use a multi-threaded and scripted application and still guarantee the data integrity, my best solution so far is to save any resource-modifying thread you create (for example by consistently using a factory method which adds the thread to a list), and also install a shutdown hook that cleanly ends each thread by interrupting and joining it. As the shutdown hook is also called by System.exit(), this will guarantee (minus programming errors) that no thread is killed in mid-resource writing.

Oh yes, maybe I shouldn't even mention it, but: never, EVER, use that horrible Runtime.halt() method. It just shoots the VM in the head and doesn't call any shutdown hook.
{ "language": "en", "url": "https://stackoverflow.com/questions/32790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Email Delivery Question

This question comes on the heels of the question asked here. The email that comes from our web server comes from an IP address that is different than that for the Exchange server. Is this okay if the SPF and DomainKeys are set up properly?

A: Short answer: Yes

A: It should be just fine. However, some spam filters will do a reverse lookup on the originating IP address and see if it's assigned to the domain name the email claims to be from, and some may check to see if the IP is an actual MX for the domain. So the downside is that some recipients may never get the email, and you may not know about it for a long time. I'd suggest routing your mail through an established MX rather than having a webserver do it directly (there are some security implications there too).
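For reference, SPF is published as a TXT record in the sending domain's DNS. A minimal sketch authorizing both the regular mail servers and the web server's address (the domain and IP below are documentation placeholders) could look like:

    example.com.  IN  TXT  "v=spf1 mx ip4:192.0.2.25 ~all"

Here "mx" authorizes the hosts named in the domain's MX records, "ip4:" adds the web server explicitly, and "~all" soft-fails everything else, which is exactly the situation in the question: two legitimate sending IPs under one domain.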
{ "language": "en", "url": "https://stackoverflow.com/questions/32803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: ASP.NET Validators inside an UpdatePanel

I'm using an older version of ASP.NET AJAX due to runtime limitations. Placing an ASP.NET Validator inside of an UpdatePanel does not work. Is there a trick to make these work, or do I need to use the ValidatorCallOut control that comes with the AJAX toolkit?

A: @Jonathan Holland: What is wrong with using Validators.dll? Since they replace the original classes, you are quietly bypassing any bug and security fixes, enhancements, etc. that Microsoft might release in the future (or might have already released). Unless you look carefully at the web.config, you might never notice that you are skipping patches. Of course, you have to evaluate each situation. If you are absolutely stuck using .NET 2.0 RTM, then Validators.dll is better than nothing.

A: I suspect you are running the original release (RTM) of .NET 2.0. Until early 2007 validator controls were not compatible with UpdatePanels. This was resolved with SP1 of the .NET Framework.

The source of the problem is that UpdatePanel can detect markup changes in your page, but it has no way to track scripts correctly. Validators rely heavily on scripts. During a partial postback, the scripts are either blown away, not updated, or not run when they are meant to. In early betas, MS had the UpdatePanel try to guess what scripts needed to be re-rendered or run. It didn't work very well, and they had to take it out.

To get around the immediate problem, Microsoft released a patched version of the validator classes in a new DLL called Validators.DLL, and gave instructions on how to tell ASP.NET to use those classes instead of the real ones. If you Google for that DLL name, you should find more information. See also this blog post. This was a stop-gap measure and you should avoid it if possible.

The real solution to the problem came shortly after, in .NET 2.0 SP1. Microsoft introduced a new mechanism to register scripts in SP1, and changed the real validator classes to use that mechanism instead of the older one. Let me give you some details on the changes:

Traditionally, you were supposed to register scripts via Page methods such as Page.RegisterStartupScript() and Page.RegisterClientScriptBlock(). The problem is that these methods were not designed for extensibility and UpdatePanel had no way to monitor those calls.

In SP1 there is a new property object on the page called Page.ClientScript. This object has methods to register scripts that are equivalent (and in some ways better) to the original ones. Also, UpdatePanel can monitor these calls, so that it rerenders or calls the methods when appropriate. The older RegisterStartupScript(), etc. methods have been deprecated. They still work, but not inside an UpdatePanel.

There is no reason (other than politics, I suppose) to not update your installations to .NET 2.0 SP1. Service Packs carry important fixes.

Good luck.

A: @jmein Actually the problem is that the Validator client scripts don't work when placed inside of an UpdatePanel (UpdatePanels refresh using .innerHTML, which adds the script nodes as text nodes, not script nodes, so the browser does not run them). The fix was a patch released by Microsoft that fixes this issue. I found it with the help of Google.
http://blogs.msdn.com/mattgi/archive/2007/01/23/asp-net-ajax-validators.aspx

A: If for whatever reason you are unable to use the updated version of the ASP.NET validator controls, it is actually very easy to validate a validation group yourself; all you need to do is call

    Page_ClientValidate("validationGroupName");

Then you can use the PageRequestManager to execute the validation as you need. Definitely using the updated validation controls is the way to go, but I'm quite partial to JavaScript ;)
{ "language": "en", "url": "https://stackoverflow.com/questions/32814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Why does HttpCacheability.Private suppress ETags?

While writing a custom IHttpHandler I came across a behavior that I didn't expect concerning the HttpCachePolicy object.

My handler calculates and sets an entity-tag (using the SetETag method on the HttpCachePolicy associated with the current response object). If I set the cache-control to public using the SetCacheability method, everything works like a charm and the server sends along the e-tag header. If I set it to private, the e-tag header will be suppressed. Maybe I just haven't looked hard enough, but I haven't seen anything in the HTTP/1.1 spec that would justify this behavior. Why wouldn't you want to send ETags to browsers while still prohibiting proxies from storing the data?

    using System;
    using System.Web;

    public class Handler : IHttpHandler
    {
        public void ProcessRequest(HttpContext ctx)
        {
            ctx.Response.Cache.SetCacheability(HttpCacheability.Private);
            ctx.Response.Cache.SetETag("\"static\"");
            ctx.Response.ContentType = "text/plain";
            ctx.Response.Write("Hello World");
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }

Will return

    Cache-Control: private
    Content-Type: text/plain; charset=utf-8
    Content-Length: 11

But if we change it to public it'll return

    Cache-Control: public
    Content-Type: text/plain; charset=utf-8
    Content-Length: 11
    Etag: "static"

I've run this on the ASP.NET development server and IIS6 so far with the same results. Also, I'm unable to explicitly set the ETag using

    Response.AppendHeader("ETag", "static")

Update: It's possible to append the ETag header manually when running in IIS7; I suspect this is caused by the tight integration between ASP.NET and the IIS7 pipeline.

Clarification: It's a long question but the core question is this: why does ASP.NET do this, how can I get around it, and should I?

Update: I'm going to accept Tony's answer since it's essentially correct (go Tony!). I found that if you want to emulate HttpCacheability.Private fully you can set the cacheability to ServerAndPrivate, but you also have to call cache.SetOmitVaryStar(true), otherwise the cache will add the Vary: * header to the output and you don't want that. I'll edit that into the answer when I get edit permissions (or if you see this Tony perhaps you could edit your answer to include that call?)

A: Unfortunately if you look at System.Web.HttpCachePolicy.UpdateCachedHeaders() in .NET Reflector you see that there's an if statement specifically checking that the Cacheability is not Private before doing any ETag stuff. In any case, I've always found that Last-Modified/If-Modified-Since works well for our data and is a bit easier to monitor in Fiddler anyway.

A: I think you need to use HttpCacheability.ServerAndPrivate. That should give you cache-control: private in the headers and let you set an ETag. The documentation on that needs to be a bit better.

Edit: Markus found that you also have to call cache.SetOmitVaryStar(true), otherwise the cache will add the Vary: * header to the output and you don't want that.

A: If like me you're unhappy with the workaround mentioned here of using Cacheability.ServerAndPrivate, and you really want to use Private instead - perhaps because you are customising pages individually for users and it makes no sense to cache on the server - then at least in .NET 3.5 you can set ETag through Response.Headers.Add and this works fine.
N.B. if you do this you have to implement the comparison of the client headers yourself and the HTTP 304 response handling - not sure if .NET takes care of this for you under normal circumstances.
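Putting the accepted workaround together, the handler from the question would change to something like this (same handler as above; only the cache calls differ):

    public void ProcessRequest(HttpContext ctx)
    {
        // ServerAndPrivate still emits Cache-Control: private,
        // but does not suppress the ETag the way Private does.
        ctx.Response.Cache.SetCacheability(HttpCacheability.ServerAndPrivate);
        ctx.Response.Cache.SetOmitVaryStar(true); // avoid the unwanted Vary: * header
        ctx.Response.Cache.SetETag("\"static\"");
        ctx.Response.ContentType = "text/plain";
        ctx.Response.Write("Hello World");
    }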
{ "language": "en", "url": "https://stackoverflow.com/questions/32824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: XNA Unit Testing

So I'm interested in hearing different thoughts about what is the best way to go about unit testing XNA Game/Applications. Astute googlers can probably figure out why I'm asking, but I didn't want to bias the topic :-)

A: I would say that this question is geared more toward the approach of unit testing in game development. I mean, XNA is a framework. Plug in NUnit, and begin writing test cases while you develop. Here is a post on SO about unit testing a game. It'll give you a little insight into how you need to think while progressing.

A: XNA BOOK

This book shows how to code in XNA, but the entire book is based on NUnit testing. So while you are coding the projects in the book, he also shows you how to write the scripts for NUnit to test the XNA code.

A: VS2008 has a nicely integrated unit testing framework. (I assume you're using the XNA 3.0 CTP with your Zune.)

A: The Microsoft testing framework is now available in Visual Studio 2008 Professional and up. If you have this software, you already have all the software that you need to start testing your games. Here are two links that will get you started:

Unit tests Overview - http://msdn.microsoft.com/en-us/library/ms182516.aspx
Creating Unit Tests - http://msdn.microsoft.com/en-us/library/ms182523.aspx

If you only have Visual Studio 2008 Express then you need to use some other testing framework. NUnit is probably the best one; some people even prefer it to MSTest. After you have all the software you need, you can start adding tests for your code. Here I've posted some basic techniques about unit testing games that you might find useful. Have you done unit testing before? If you haven't, I could possibly give you some more hints and resources.

A: You should give Scurvy Test a try. Have not used it myself, but it looks promising.

A: I know this is an old post, but for other people wondering how to best go about testing their XNA Games, there is another option. The built-in testing in Visual Studio is definitely great, but is not well suited for games. Every time a value is needed, you have to pause the game, and then either hover over the variable, go to quick watch, or add a watch. Then you can see the value of the variable at that frame. To see the value again during another frame, you must pause your game again. This can be a big hassle.

Therefore I have created a debugging terminal to run on top of your game. It allows you to see values of variables, invoke methods, and even watch a variable change in real-time, all while your game is running! To find out more, visit: http://www.protohacks.net/xna_debug_terminal/

The project is completely free and open source. If you like it, feel free to tell others using XNA Game Studio about it. Hope this helps out!
{ "language": "en", "url": "https://stackoverflow.com/questions/32835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Creating System Restore Points - Thoughts?

Is it "taboo" to programmatically create system restore points? I would be doing this before I perform a software update. If there is a better method to create a restore point with just my software's files and data, please let me know.

I would like a means by which I can get the user back to a known working state if everything goes kaput during an update (closes/kills the update app, power goes out, user pulls the plug, etc.)

    private void CreateRestorePoint(string description)
    {
        ManagementScope oScope = new ManagementScope("\\\\localhost\\root\\default");
        ManagementPath oPath = new ManagementPath("SystemRestore");
        ObjectGetOptions oGetOp = new ObjectGetOptions();
        ManagementClass oProcess = new ManagementClass(oScope, oPath, oGetOp);

        ManagementBaseObject oInParams = oProcess.GetMethodParameters("CreateRestorePoint");
        oInParams["Description"] = description;
        oInParams["RestorePointType"] = 12; // MODIFY_SETTINGS
        oInParams["EventType"] = 100;

        ManagementBaseObject oOutParams = oProcess.InvokeMethod("CreateRestorePoint", oInParams, null);
    }

A: Whether it's a good idea or not really depends on how much you're doing. A full system restore point is weighty - it takes time to create, disk space to store, and gets added to the interface of restore points, possibly pushing earlier restore points out of storage.

So, if your update is really only changing your application (i.e. the data it stores, the binaries that make it up, the registry entries for it), then it's not really a system level change and I'd vote for no restore point. You can emulate the functionality by just backing up the parts you're changing, and offering a restore-to-backup option.

My opinion is that System Restore should be used to restore the system when global changes are made that might corrupt it (application install, etc). The counter argument that one should just use the system service doesn't hold water for me; I worry that, if you have to issue a number of updates to your application, the set of system restore points might get so large that important, real "system wide" updates might get pushed out, or lost in the noise.

A: Is it "taboo" to programmatically create system restore points? No. That's why the API is there; so that you can have pseudo-atomic updates of the system.

A: No, it's not taboo - in fact, I'd encourage it. The OS manages how much hard drive space it takes, and I'd put money down on Microsoft spending more money & time testing System Restore than the money & time you're putting into testing your setup application.

A: If you are developing an application for Vista you can use Transactional NTFS, which supports a similar feature to what you are looking for. http://en.wikipedia.org/wiki/Transactional_NTFS

Wouldn't installer packages already include this type of rollback support, though? I'm not terribly familiar with most of them so I am not sure.

Finally, Windows will typically automatically create a restore point anytime you run a setup application.

A: Take a look at the following link: http://www.calumgrant.net/atomic/

The author described "Transactional Programming". This is analogous to the transactions in databases. Example:

Start transaction:

* Step 1
* Step 2
* Encounter error during step 2
* Roll back to before transaction started.

This is a new framework, but you can look at it more as a solution rather than using the framework.

By using transactions, you get the "Restore Points" that you're looking for.
A: I don't think a complete system restore would be a good plan. Two reasons that quickly come to mind:

* Wasted disk space
* Unintended consequences from a rollback
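If you do keep the WMI approach from the question, it may be worth inspecting the call's result. A small sketch, assuming the standard WMI ReturnValue out-parameter is present and zero on success (an assumption worth verifying for this particular method):

    ManagementBaseObject oOutParams = oProcess.InvokeMethod("CreateRestorePoint", oInParams, null);
    uint result = (uint)oOutParams["ReturnValue"];
    if (result != 0)
    {
        // WMI could not create the restore point; fall back to backing up
        // just your own files and registry entries before updating.
        throw new ApplicationException("CreateRestorePoint failed, code " + result);
    }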
{ "language": "en", "url": "https://stackoverflow.com/questions/32845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Multicasting, Messaging, ActiveMQ vs. MSMQ?

I'm working on a messaging/notification system for our products. Basic requirements are:

* Fire and forget
* Persistent set of messages, possibly updating, to stay there until the sender says to remove them

The libraries will be written in C#. Spring.NET just released a milestone build with lots of nice messaging abstraction, which is great - I plan on using it extensively. My basic question comes down to the question of message brokers. My architecture will look something like app -> message broker queue -> server app that listens, dispatches all messages to where they need to go, and handles the life cycle of those long-lived messages -> message broker queue or topic -> listening apps.

Finally, the question: Which message broker should I use? I am biased towards ActiveMQ - we used it on our last project and loved it. I can't really think of a single strike against it, except that it's Java, and will require Java to be installed on a server somewhere, and that might be a hard sell to some of the people that will be using this service. The other option I've been looking at is MSMQ. I am biased against it for some unknown reason, and it also doesn't seem to have great multicast support.

Has anyone used MSMQ for something like this? Any pros or cons, stuff that might sway the vote one way or the other?

One last thing, we are using .NET 2.0.

A: Take a look at zeromq. It's one of the fastest message queues around.

A: I suggest you have a look at TIBCO Enterprise Messaging Service - EMS, which is a high performance messaging product that supports multicasting and routing, supports the JMS specification, and provides enterprise-wide features including your requirements such as fire-and-forget and message persistence using file/database with shared state.

As a reference, FEDEX runs on TIBCO EMS as its messaging infrastructure. http://www.tibco.com/software/messaging/enterprise_messaging_service/default.jsp There are lots of other references I could provide; you'd really be surprised.

A: There are so many options in that arena...

Free: MantaRay, a peer-to-peer, fully JMS-compliant system. The interesting part of MantaRay is that you only need to define where the message goes and MantaRay routes it any way that will get your message to its destination - so it is more resistant to failures of individual nodes in your messaging fabric.

Paid: At my day job I administer an IBM WebSphere MQ messaging system with several hundred nodes and have found it to be very good. We also recently purchased Tibco EMS and it seems that it will be pretty nice to use as well.

A: I'm kinda biased as I work on ActiveMQ, but pretty much all of the benefits listed for MSMQ below also apply to ActiveMQ really. Some more benefits of ActiveMQ include:

* great support for cross language client access and multi protocol support
* excellent support for enterprise integration patterns
* a ton of advanced features like exclusive queues and message groups

The main downside you mention is that the ActiveMQ broker is written in Java; but you can run it on IKVM as a .net assembly if you really want - or run it as a windows service, or compile it to a DLL/EXE via GCJ. MSMQ may or may not be written in .NET - but it doesn't really matter much how it's implemented, right?

Irrespective of whether you choose MSMQ or ActiveMQ I'd recommend at least considering using the NMS API, which as you say is integrated great into Spring.NET.
There is an MSMQ implementation of this API as well as implementations for TibCo, ActiveMQ and STOMP, which will support any other JMS provider via StompConnect. So by choosing NMS as your API you will avoid lock-in to any proprietary technology - and you can then easily switch messaging providers at any point in time, rather than locking your code all into a proprietary API.

A: Pros for MSMQ:

* It is built into Windows
* It supports transactions; it also supports queues with no transactions
* It is really easy to set up
* AD Integration
* It is fast, but you would need to compare ActiveMQ and MSMQ for your traffic to know which is faster.
* .NET supports it natively
* Supports fire and forget
* You can peek at the queue, if you have readers that just look. Not sure if you can edit a message in the queue.

Cons:

* 4MB message size limit
* 2GB queue size limit
* Queue items are held on disk
* Not a mainstream MS product; docs are a bit iffy (or were, it has been a few years since I used it)

Here is a good blog for MSMQ
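For a feel of the MSMQ side, here is a minimal System.Messaging sketch for the "fire and forget" requirement. The queue path is a made-up example, and the second argument to Create makes the queue transactional:

    using System.Messaging;

    class NotificationSender
    {
        static void Main()
        {
            const string path = @".\private$\notifications"; // hypothetical queue name
            if (!MessageQueue.Exists(path))
                MessageQueue.Create(path, true); // true = transactional queue

            using (MessageQueue queue = new MessageQueue(path))
            using (MessageQueueTransaction tx = new MessageQueueTransaction())
            {
                tx.Begin();
                // The body can be any serializable object; the label is free text.
                queue.Send("notification payload", "user-notification", tx);
                tx.Commit();
            }
        }
    }

The sender returns as soon as the message is committed to the queue; delivery to the listening server app happens asynchronously, which is the fire-and-forget semantic the question asks for.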
{ "language": "en", "url": "https://stackoverflow.com/questions/32851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: How can I resize a swf during runtime to have the browser create html scrollbars?

I have a swf which loads text into a Sprite that resizes based on the content put into it - I'd like though for the ones that are longer than the page to have the browser use its native scroll bars rather than handle it in actionscript (very much like http://www.nike.com/nikeskateboarding/v3/...). I did have a look at the stuff nike did but just wasn't able to pull it off. Any ideas?

A: The trick is to use some simple JavaScript to resize the Flash DOM node:

    function resizeFlash( h ) {
        // "flash-node-id" is the ID of the embedded Flash movie
        document.getElementById("flash-node-id").style.height = h + "px";
    }

Which you call from within the Flash movie like this:

    ExternalInterface.call("resizeFlash", 400);

You don't actually need to have the JavaScript code externally, you can do it all from Flash if you want to:

    ExternalInterface.call(
        "function( id, h ) { document.getElementById(id).style.height = h + 'px'; }",
        ExternalInterface.objectID,
        400
    );

The anonymous function is just to be able to pass in the ID and height as parameters instead of concatenating them into the JavaScript string. I think that the JavaScript is fairly cross-platform.

If you want to see a live example look at this site: talkoftheweather.com. It may not look as though it does anything, but it automatically resizes the Flash movie size to accommodate all the news items (it does this just after loading the news, which is done so quickly that you don't notice it happening). The resize forces the browser to show a vertical scroll bar.

A: I've never done it that way around but I think swffit might be able to pull it off.

A: I halfway looked at swffit but the height (and width sometimes, but mainly height) would be dynamic - swffit lets you declare a maxHeight but that number would be constantly changing... maybe I could figure out how to set it dynamically. A great place for me to start though - thanks!

A: What I've mostly been using it for is to limit how small you can make a "fullbrowser" flash, and for that it works great. Happy hacking! (and don't forget to post your findings here, I might need that too soon ;))

A: SWFSize

See here for more details.

Intuitsolutions.ca
{ "language": "en", "url": "https://stackoverflow.com/questions/32871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Browsers' default CSS stylesheets

Are there any lists of default CSS stylesheets for different browsers? (browser stylesheets in tabular form) I want to know the default font of text areas across all browsers for future reference.

A: There probably is a list; this is why we use CSS resets, however:

* Eric Meyer's Reset
* Yahoo's Reset

A: Not tabular, but the source CSS may be helpful if you're looking for something specific:

* Firefox default HTML stylesheet
* WebKit default HTML stylesheet

You're on your own with IE and Opera though.

A: CSS class have compiled a list of CSS2.1 User Agent Style Sheet Defaults. Some links at the bottom of that page as well.

A: You cannot possibly know all defaults for all configurations of all browsers into the future. The way people get around this is to start their CSS by resetting everything to known values. Here's an example from one of the main CSS experts: http://meyerweb.com/eric/thoughts/2007/05/01/reset-reloaded/

A: I suspect this is something of a moving target for all the browsers, but there is a default style sheet for HTML 4 as defined by the W3C.

A: There was some discussion and testing done on www-style not too long ago:

http://lists.w3.org/Archives/Public/www-style/2008Jul/0124.html
http://lists.w3.org/Archives/Public/www-style/2008Jul/att-0124/defaultstyles.htm
{ "language": "en", "url": "https://stackoverflow.com/questions/32875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: How to remove "VsDebuggerCausalityData" data from SOAP message? I've got a problem where incoming SOAP messages from one particular client are being marked as invalid and rejected by our XML firewall device. It appears extra payload data is being inserted by Visual Studio; we're thinking the extra data may be causing a problem b/c we're seeing "VsDebuggerCausalityData" in these messages but not in others sent from a different client who is not having a problem. It's a starting point, anyway. The question I have is how can the client remove this extra data and still run from VS? Why is VS putting it in there at all? Thanks. A: Darryl's answer didn't work for me. Each developer has to do ggrocco's answer. I ended up writing a MessageInspector, and adding this code to the BeforeSendRequest method: int limit = request.Headers.Count; for(int i=0; i<limit; ++i) { if (request.Headers[i].Name.Equals("VsDebuggerCausalityData")) { request.Headers.RemoveAt(i); break; } } A: Or use "Start without debugging" in Visual Studio. A: For remove 'VsDebuggerCausalityData' you need stop de Visual Studio Diagnostic for WCF using this command: VS 2008 -> c:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE>vsdiag_regwcf.exe -u VS 2010 -> c:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE>vsdiag_regwcf.exe -u I hope this help you or other people. A: A quick google reveals that this should get rid of it, get them to add it to the web.config or app.config for their application. <configuration> <system.diagnostics> <switches> <add name="Remote.Disable" value="1" /> </switches> </system.diagnostics> </configuration> The information is debug information that the receiving service can use to help trace things back to the client. (maybe, I am guessing a little) * *I have proposed a follow up question to determine were the magic switch actually comes from. A: Based on an answer by @Luiz Felipe I came up with this slightly more robust solution: var vs = client.Endpoint.EndpointBehaviors.FirstOrDefault((i) => i.GetType().Namespace == "Microsoft.VisualStudio.Diagnostics.ServiceModelSink"); if (vs != null) { client.Endpoint.Behaviors.Remove(vs); }
{ "language": "en", "url": "https://stackoverflow.com/questions/32877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Do Java multi-line comments account for strings?

This question would probably apply equally as well to other languages with C-like multi-line comments. Here's the problem I'm encountering. I'm working with Java code in Eclipse, and I wanted to comment out a block of code. However, there is a string that contains the character sequence "*/", and Eclipse thinks that the comment should end there, even though it is inside a string. It gives me tons of errors and fails to build.

    /* ... some Java code ...
       ... "... */ ..." ...
       ... more Java code ...
    */

Does the Java specification match Eclipse's interpretation of my multi-line comment? I would like to think that Java and/or Eclipse would account for this sort of thing.

A: Eclipse is correct. There is no interpretation context inside a comment (no escaping, etc). See JLS §3.7.

A: In Eclipse you can highlight the part of the source code you want to comment out and use Ctrl+/ to single-line comment every line in the highlighted section - this puts a "//" at the beginning of the lines. Or if you really want to block-comment the selection, use the Ctrl+Shift+/ combination. It will detect the block comments in your selection. However, undoing this is harder than with single-line comments.

A: Yes, I am commenting the code out just to do a quick test. I've already tested what I needed to by commenting the code out another way; I was just curious about what appears to be an odd misfeature of Java and/or Eclipse.

A: A simple test shows Eclipse is correct:

    public class Test {
        public static final void main(String[] args) throws Exception {
            String s = "This is the original string.";
            /* This is commented out.
            s = "This is the end of a comment: */ ";
            */
            System.out.println(s);
        }
    }

This fails to compile with:

    Test.java:5: unclosed string literal
            s = "This is the end of a comment: */ ";

A: It may be helpful to just do a "batch" multiline comment so that it comments each line with "//". It is Ctrl+/ in IDEA for commenting and uncommenting the selected lines; Eclipse should have a similar feature.

A: I often use only // for inline comments, and use /* */ only for commenting out large blocks the way you have. A lot of developers will still use /* */ for inline comments, because that's what they're familiar with, but they all run into problems like this one. In C it didn't matter as much because you could #if 0 the stuff away.
{ "language": "en", "url": "https://stackoverflow.com/questions/32897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you generate dynamic (parameterized) unit tests in Python?

I have some kind of test data and want to create a unit test for each item. My first idea was to do it like this:

    import unittest

    l = [["foo", "a", "a",], ["bar", "a", "b"], ["lee", "b", "b"]]

    class TestSequence(unittest.TestCase):
        def testsample(self):
            for name, a, b in l:
                print "test", name
                self.assertEqual(a, b)

    if __name__ == '__main__':
        unittest.main()

The downside of this is that it handles all data in one test. I would like to generate one test for each item on the fly. Any suggestions?

A: This can be solved elegantly using Metaclasses:

    import unittest

    l = [["foo", "a", "a",], ["bar", "a", "b"], ["lee", "b", "b"]]

    class TestSequenceMeta(type):
        def __new__(mcs, name, bases, dict):

            def gen_test(a, b):
                def test(self):
                    self.assertEqual(a, b)
                return test

            for tname, a, b in l:
                test_name = "test_%s" % tname
                dict[test_name] = gen_test(a, b)

            return type.__new__(mcs, name, bases, dict)

    class TestSequence(unittest.TestCase):
        __metaclass__ = TestSequenceMeta

    if __name__ == '__main__':
        unittest.main()

A: As of Python 3.4, subtests have been introduced to unittest for this purpose. See the documentation for details. TestCase.subTest is a context manager which allows one to isolate asserts in a test so that a failure will be reported with parameter information, but it does not stop the test execution. Here's the example from the documentation:

    class NumbersTest(unittest.TestCase):

        def test_even(self):
            """
            Test that numbers between 0 and 5 are all even.
            """
            for i in range(0, 6):
                with self.subTest(i=i):
                    self.assertEqual(i % 2, 0)

The output of a test run would be:

    ======================================================================
    FAIL: test_even (__main__.NumbersTest) (i=1)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "subtests.py", line 32, in test_even
        self.assertEqual(i % 2, 0)
    AssertionError: 1 != 0

    ======================================================================
    FAIL: test_even (__main__.NumbersTest) (i=3)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "subtests.py", line 32, in test_even
        self.assertEqual(i % 2, 0)
    AssertionError: 1 != 0

    ======================================================================
    FAIL: test_even (__main__.NumbersTest) (i=5)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "subtests.py", line 32, in test_even
        self.assertEqual(i % 2, 0)
    AssertionError: 1 != 0

This is also part of unittest2, so it is available for earlier versions of Python.

A: You would benefit from trying the TestScenarios library. testscenarios provides clean dependency injection for python unittest style tests. This can be used for interface testing (testing many implementations via a single test suite) or for classic dependency injection (provide tests with dependencies externally to the test code itself, allowing easy testing in different situations).

A: There's also Hypothesis, which adds fuzz or property based testing. This is a very powerful testing method.
A: This is effectively the same as parameterized as mentioned in a previous answer, but specific to unittest:

    def sub_test(param_list):
        """Decorates a test case to run it as a set of subtests."""

        def decorator(f):

            @functools.wraps(f)
            def wrapped(self):
                for param in param_list:
                    with self.subTest(**param):
                        f(self, **param)

            return wrapped

        return decorator

Example usage:

    class TestStuff(unittest.TestCase):
        @sub_test([
            dict(arg1='a', arg2='b'),
            dict(arg1='x', arg2='y'),
        ])
        def test_stuff(self, arg1, arg2):
            ...

A: load_tests is a little known mechanism introduced in 2.7 to dynamically create a TestSuite. With it, you can easily create parametrized tests. For example:

    import unittest

    class GeneralTestCase(unittest.TestCase):
        def __init__(self, methodName, param1=None, param2=None):
            super(GeneralTestCase, self).__init__(methodName)
            self.param1 = param1
            self.param2 = param2

        def runTest(self):
            pass  # Test that depends on param 1 and 2.

    def load_tests(loader, tests, pattern):
        test_cases = unittest.TestSuite()
        for p1, p2 in [(1, 2), (3, 4)]:
            test_cases.addTest(GeneralTestCase('runTest', p1, p2))
        return test_cases

That code will run all the TestCases in the TestSuite returned by load_tests. No other tests are automatically run by the discovery mechanism. Alternatively, you can also use inheritance as shown in this ticket: http://bugs.python.org/msg151444

A: It can be done by using pytest. Just write the file test_me.py with content:

    import pytest

    @pytest.mark.parametrize('name, left, right', [['foo', 'a', 'a'],
                                                   ['bar', 'a', 'b'],
                                                   ['baz', 'b', 'b']])
    def test_me(name, left, right):
        assert left == right, name

And run your test with the command py.test --tb=short test_me.py. Then the output will look like:

    =========================== test session starts ============================
    platform darwin -- Python 2.7.6 -- py-1.4.23 -- pytest-2.6.1
    collected 3 items

    test_me.py .F.

    ================================= FAILURES =================================
    _____________________________ test_me[bar-a-b] _____________________________
    test_me.py:8: in test_me
        assert left == right, name
    E   AssertionError: bar
    ==================== 1 failed, 2 passed in 0.01 seconds ====================

It is simple! Also pytest has more features like fixtures, mark, assert, etc.

A: You can use the nose-ittr plugin (pip install nose-ittr). It's very easy to integrate with existing tests, and minimal changes (if any) are required. It also supports the nose multiprocessing plugin. Note that you can also have a customized setup function per test.

    @ittr(number=[1, 2, 3, 4])
    def test_even(self):
        assert_equal(self.number % 2, 0)

It is also possible to pass nosetest parameters like with their built-in plugin attrib. This way you can run only a specific test with a specific parameter:

    nosetest -a number=2

A: I use metaclasses and decorators for generating tests. You can check my implementation, python_wrap_cases. This library doesn't require any test frameworks. Your example:

    import unittest
    from python_wrap_cases import wrap_case

    @wrap_case
    class TestSequence(unittest.TestCase):

        @wrap_case("foo", "a", "a")
        @wrap_case("bar", "a", "b")
        @wrap_case("lee", "b", "b")
        def testsample(self, name, a, b):
            print "test", name
            self.assertEqual(a, b)

Console output:

    testsample_u'bar'_u'a'_u'b' (tests.example.test_stackoverflow.TestSequence) ... test bar
    FAIL
    testsample_u'foo'_u'a'_u'a' (tests.example.test_stackoverflow.TestSequence) ... test foo
    ok
    testsample_u'lee'_u'b'_u'b' (tests.example.test_stackoverflow.TestSequence) ... test lee
    ok

Also you may use generators.
For example, this code generates all possible combinations of tests with the arguments a__list and b__list:

import unittest
from python_wrap_cases import wrap_case

@wrap_case
class TestSequence(unittest.TestCase):

    @wrap_case(a__list=["a", "b"], b__list=["a", "b"])
    def testsample(self, a, b):
        self.assertEqual(a, b)

Console output:

testsample_a(u'a')_b(u'a') (tests.example.test_stackoverflow.TestSequence) ... ok
testsample_a(u'a')_b(u'b') (tests.example.test_stackoverflow.TestSequence) ... FAIL
testsample_a(u'b')_b(u'a') (tests.example.test_stackoverflow.TestSequence) ... FAIL
testsample_a(u'b')_b(u'b') (tests.example.test_stackoverflow.TestSequence) ... ok

A: This is called "parametrization". There are several tools that support this approach. E.g.:

* pytest's decorator
* parameterized

The resulting code looks like this:

from parameterized import parameterized

class TestSequence(unittest.TestCase):
    @parameterized.expand([
        ["foo", "a", "a"],
        ["bar", "a", "b"],
        ["lee", "b", "b"],
    ])
    def test_sequence(self, name, a, b):
        self.assertEqual(a, b)

Which will generate the tests:

test_sequence_0_foo (__main__.TestSequence) ... ok
test_sequence_1_bar (__main__.TestSequence) ... FAIL
test_sequence_2_lee (__main__.TestSequence) ... ok

======================================================================
FAIL: test_sequence_1_bar (__main__.TestSequence)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/parameterized/parameterized.py", line 233, in <lambda>
    standalone_func = lambda *a: func(*(a + p.args), **p.kwargs)
  File "x.py", line 12, in test_sequence
    self.assertEqual(a, b)
AssertionError: 'a' != 'b'

For historical reasons, I'll leave the original answer (circa 2008). I use something like this:

import unittest

l = [["foo", "a", "a"], ["bar", "a", "b"], ["lee", "b", "b"]]

class TestSequence(unittest.TestCase):
    pass

def test_generator(a, b):
    def test(self):
        self.assertEqual(a, b)
    return test

if __name__ == '__main__':
    for t in l:
        test_name = 'test_%s' % t[0]
        test = test_generator(t[1], t[2])
        setattr(TestSequence, test_name, test)
    unittest.main()

A: Using unittest (since 3.4)

Since Python 3.4, the standard library unittest package has the subTest context manager. See the documentation:

* 26.4.7. Distinguishing test iterations using subtests
* subTest

Example:

from unittest import TestCase

param_list = [('a', 'a'), ('a', 'b'), ('b', 'b')]

class TestDemonstrateSubtest(TestCase):
    def test_works_as_expected(self):
        for p1, p2 in param_list:
            with self.subTest():
                self.assertEqual(p1, p2)

You can also specify a custom message and parameter values to subTest():

with self.subTest(msg="Checking if p1 equals p2", p1=p1, p2=p2):

Using nose

The nose testing framework supports this. Example (the code below is the entire contents of the file containing the test):

param_list = [('a', 'a'), ('a', 'b'), ('b', 'b')]

def test_generator():
    for params in param_list:
        yield check_em, params[0], params[1]

def check_em(a, b):
    assert a == b

The output of the nosetests command:

> nosetests -v
testgen.test_generator('a', 'a') ... ok
testgen.test_generator('a', 'b') ... FAIL
testgen.test_generator('b', 'b') ... ok

======================================================================
FAIL: testgen.test_generator('a', 'b')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/nose-0.10.1-py2.5.egg/nose/case.py", line 203, in runTest
    self.test(*self.arg)
  File "testgen.py", line 7, in check_em
    assert a == b
AssertionError

----------------------------------------------------------------------
Ran 3 tests in 0.006s

FAILED (failures=1)

A: I came across ParamUnittest the other day when looking at the source code for radon (example usage on the GitHub repository). It should work with other frameworks that extend TestCase (like Nose).

Here is an example:

import unittest
import paramunittest

@paramunittest.parametrized(
    ('1', '2'),
    # (4, 3),    <---- Uncomment to have a failing test
    ('2', '3'),
    (('4', ), {'b': '5'}),
    ((), {'a': 5, 'b': 6}),
    {'a': 5, 'b': 6},
)
class TestBar(unittest.TestCase):  # the original wrote bare TestCase, which is undefined with this import
    def setParameters(self, a, b):
        self.a = a
        self.b = b

    def testLess(self):
        self.assertLess(self.a, self.b)

A:

import unittest

def generator(test_class, a, b):
    def test(self):
        self.assertEqual(a, b)
    return test

def _add_test_methods(test_class):  # renamed from add_test_methods to match the call below
    # The first element of the list is variable "a", then variable "b",
    # then the name of the test case that will be used as a suffix.
    test_list = [[2, 3, 'one'], [5, 5, 'two'], [0, 0, 'three']]
    for case in test_list:
        test = generator(test_class, case[0], case[1])
        setattr(test_class, "test_%s" % case[2], test)

class TestAuto(unittest.TestCase):
    def setUp(self):
        print 'Setup'

    def tearDown(self):
        print 'TearDown'

_add_test_methods(TestAuto)  # It's better to start with an underscore so it is not detected as a test itself

if __name__ == '__main__':
    unittest.main(verbosity=1)

RESULT:

>>>
Setup
FTearDown
Setup
TearDown
.Setup
TearDown
.
======================================================================
FAIL: test_one (__main__.TestAuto)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:/inchowar/Desktop/PyTrash/test_auto_3.py", line 5, in test
    self.assertEqual(a, b)
AssertionError: 2 != 3

----------------------------------------------------------------------
Ran 3 tests in 0.019s

FAILED (failures=1)

A: Use the ddt library. It adds simple decorators for the test methods:

import unittest
from ddt import ddt, data
from mycode import larger_than_two

@ddt
class FooTestCase(unittest.TestCase):
    @data(3, 4, 12, 23)
    def test_larger_than_two(self, value):
        self.assertTrue(larger_than_two(value))

    @data(1, -3, 2, 0)
    def test_not_larger_than_two(self, value):
        self.assertFalse(larger_than_two(value))

This library can be installed with pip. It doesn't require nose, and works excellently with the standard library unittest module.

A: Just use metaclasses, as seen here:

from unittest import TestCase  # import added; needed for ExampleTestCase below

class DocTestMeta(type):
    """
    Test functions are generated in metaclass due to the way some test
    loaders work. For example, setupClass() won't get called unless there
    are other existing test methods, and will also prevent unit test loader
    logic being called before the test methods have been defined.
""" def __init__(self, name, bases, attrs): super(DocTestMeta, self).__init__(name, bases, attrs) def __new__(cls, name, bases, attrs): def func(self): """Inner test method goes here""" self.assertTrue(1) func.__name__ = 'test_sample' attrs[func.__name__] = func return super(DocTestMeta, cls).__new__(cls, name, bases, attrs) class ExampleTestCase(TestCase): """Our example test case, with no methods defined""" __metaclass__ = DocTestMeta Output: test_sample (ExampleTestCase) ... OK A: You can use TestSuite and custom TestCase classes. import unittest class CustomTest(unittest.TestCase): def __init__(self, name, a, b): super().__init__() self.name = name self.a = a self.b = b def runTest(self): print("test", self.name) self.assertEqual(self.a, self.b) if __name__ == '__main__': suite = unittest.TestSuite() suite.addTest(CustomTest("Foo", 1337, 1337)) suite.addTest(CustomTest("Bar", 0xDEAD, 0xC0DE)) unittest.TextTestRunner().run(suite) A: I'd been having trouble with a very particular style of parameterized tests. All our Selenium tests can run locally, but they also should be able to be run remotely against several platforms on SauceLabs. Basically, I wanted to take a large amount of already-written test cases and parameterize them with the fewest changes to code possible. Furthermore, I needed to be able to pass the parameters into the setUp method, something which I haven't seen any solutions for elsewhere. Here's what I've come up with: import inspect import types test_platforms = [ {'browserName': "internet explorer", 'platform': "Windows 7", 'version': "10.0"}, {'browserName': "internet explorer", 'platform': "Windows 7", 'version': "11.0"}, {'browserName': "firefox", 'platform': "Linux", 'version': "43.0"}, ] def sauce_labs(): def wrapper(cls): return test_on_platforms(cls) return wrapper def test_on_platforms(base_class): for name, function in inspect.getmembers(base_class, inspect.isfunction): if name.startswith('test_'): for platform in test_platforms: new_name = '_'.join(list([name, ''.join(platform['browserName'].title().split()), platform['version']])) new_function = types.FunctionType(function.__code__, function.__globals__, new_name, function.__defaults__, function.__closure__) setattr(new_function, 'platform', platform) setattr(base_class, new_name, new_function) delattr(base_class, name) return base_class With this, all I had to do was add a simple decorator @sauce_labs() to each regular old TestCase, and now when running them, they're wrapped up and rewritten, so that all the test methods are parameterized and renamed. LoginTests.test_login(self) runs as LoginTests.test_login_internet_explorer_10.0(self), LoginTests.test_login_internet_explorer_11.0(self), and LoginTests.test_login_firefox_43.0(self), and each one has the parameter self.platform to decide what browser/platform to run against, even in LoginTests.setUp, which is crucial for my task since that's where the connection to SauceLabs is initialized. Anyway, I hope this might be of help to someone looking to do a similar "global" parameterization of their tests! 
A: This solution works with unittest and nose for Python 2 and Python 3:

#!/usr/bin/env python
import unittest

def make_function(description, a, b):
    def ghost(self):
        self.assertEqual(a, b, description)
        print(description)
    ghost.__name__ = 'test_{0}'.format(description)
    return ghost

class TestsContainer(unittest.TestCase):
    pass

testsmap = {
    'foo': [1, 1],
    'bar': [1, 2],
    'baz': [5, 5],
}

def generator():
    # items() rather than iteritems(), so this actually runs on Python 3 as claimed
    for name, params in testsmap.items():
        test_func = make_function(name, params[0], params[1])
        setattr(TestsContainer, 'test_{0}'.format(name), test_func)

generator()

if __name__ == '__main__':
    unittest.main()

A: Meta-programming is fun, but it can get in the way. Most solutions here make it difficult to:

* selectively launch a test
* point back to the code given the test's name

So, my first suggestion is to follow the simple/explicit path (works with any test runner):

import unittest

class TestSequence(unittest.TestCase):
    def _test_complex_property(self, a, b):
        self.assertEqual(a, b)

    def test_foo(self):
        self._test_complex_property("a", "a")

    def test_bar(self):
        self._test_complex_property("a", "b")

    def test_lee(self):
        self._test_complex_property("b", "b")

if __name__ == '__main__':
    unittest.main()

Since we shouldn't repeat ourselves, my second suggestion builds on Javier's answer: embrace property-based testing. The Hypothesis library:

* is "more relentlessly devious about test case generation than us mere humans"
* will provide simple counter-examples
* works with any test runner
* has many more interesting features (statistics, additional test output, ...)

# imports added; these are the standard Hypothesis entry points
from hypothesis import given, example
from hypothesis import strategies as st

class TestSequence(unittest.TestCase):
    @given(st.text(), st.text())
    def test_complex_property(self, a, b):
        self.assertEqual(a, b)

To test your specific examples, just add:

@example("a", "a")
@example("a", "b")
@example("b", "b")

To run only one particular example, you can comment out the other examples (the provided example will be run first). You may want to use @given(st.nothing()). Another option is to replace the whole block by:

@given(st.just("a"), st.just("b"))

OK, you don't have distinct test names. But maybe you just need:

* a descriptive name of the property under test,
* which input leads to failure (falsifying example).

Funnier example

A: I have found that this works well for my purposes, especially if I need to generate tests that do slightly different processing on a collection of data.

import unittest

def rename(newName):
    def renamingFunc(func):
        func.__name__ = newName  # the original had "==" here, a comparison where an assignment is needed
        return func
    return renamingFunc

class TestGenerator(unittest.TestCase):
    TEST_DATA = {}

    @classmethod
    def generateTests(cls):
        # .items() added: iterating the dict directly yields only the keys
        for dataName, dataValue in TestGenerator.TEST_DATA.items():
            for func in cls.getTests(dataName, dataValue):
                setattr(cls, "test_{:s}_{:s}".format(func.__name__, dataName), func)

    @classmethod
    def getTests(cls, dataName, dataValue):
        raise NotImplementedError("This must be implemented")

class TestCluster(TestGenerator):
    TEST_CASES = []

    @staticmethod
    def getTests(dataName, dataValue):
        def makeTest(case):
            @rename("{:s}".format(case["name"]))
            def test(self):
                # Do things with self, case, data
                pass
            return test
        return [makeTest(c) for c in TestCluster.TEST_CASES]

TestCluster.generateTests()

The TestGenerator class can be used to spawn different sets of test cases like TestCluster. TestCluster can be thought of as an implementation of the TestGenerator interface.
A: The metaclass-based answers still work in Python 3, but instead of the __metaclass__ attribute, one has to use the metaclass parameter, as in:

class ExampleTestCase(TestCase, metaclass=DocTestMeta):
    pass

A: I had trouble making these work for setUpClass. Here's a version of Javier's answer that gives setUpClass access to dynamically allocated attributes.

import unittest

class GeneralTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        print ''
        print cls.p1
        print cls.p2

    def runTest1(self):
        self.assertTrue((self.p2 - self.p1) == 1)

    def runTest2(self):
        self.assertFalse((self.p2 - self.p1) == 2)

def load_tests(loader, tests, pattern):
    test_cases = unittest.TestSuite()
    for p1, p2 in [(1, 2), (3, 4)]:
        clsname = 'TestCase_{}_{}'.format(p1, p2)
        dct = {
            'p1': p1,
            'p2': p2,
        }
        cls = type(clsname, (GeneralTestCase,), dct)
        test_cases.addTest(cls('runTest1'))
        test_cases.addTest(cls('runTest2'))
    return test_cases

Outputs:

1
2
..
3
4
..
----------------------------------------------------------------------
Ran 4 tests in 0.000s

OK

A: Besides using setattr, we can use load_tests with Python 3.2 and later.

import os
import unittest

class Test(unittest.TestCase):
    pass

def _test(self, file_name):
    with open(file_name, 'r') as f:  # the original was missing the "with"
        self.assertEqual('test result', f.read())

def _generate_test(file_name):
    def test(self):
        _test(self, file_name)
    return test

def _generate_tests():
    # "files" is assumed to be defined elsewhere in the module
    for file in files:
        file_name = os.path.splitext(os.path.basename(file))[0]
        setattr(Test, 'test_%s' % file_name, _generate_test(file))

test_cases = (Test,)

def load_tests(loader, tests, pattern):
    _generate_tests()
    suite = unittest.TestSuite()
    for test_class in test_cases:
        tests = loader.loadTestsFromTestCase(test_class)
        suite.addTests(tests)
    return suite

if __name__ == '__main__':
    _generate_tests()
    unittest.main()

A: Following is my solution. I find this useful when:

* It should work for unittest.TestCase and unittest discover.
* I have a set of tests to be run for different parameter settings.
* It is very simple, with no dependency on other packages.

import unittest

class BaseClass(unittest.TestCase):
    def setUp(self):
        self.param = 2
        self.base = 2

    def test_me(self):
        self.assertGreaterEqual(5, self.param + self.base)

    def test_me_too(self):
        self.assertLessEqual(3, self.param + self.base)

class Child_One(BaseClass):
    def setUp(self):
        BaseClass.setUp(self)
        self.param = 4

class Child_Two(BaseClass):
    def setUp(self):
        BaseClass.setUp(self)
        self.param = 1

A:

import unittest

def generator(test_class, a, b, c, d, name):
    def test(self):
        print('Testexecution=', name)
        print('a=', a)
        print('b=', b)
        print('c=', c)
        print('d=', d)
    return test

def add_test_methods(test_class):
    test_list = [[3, 3, 5, 6, 'one'], [5, 5, 8, 9, 'two'], [0, 0, 5, 6, 'three'], [0, 0, 2, 3, 'Four']]
    for case in test_list:
        print('case=', case[0], case[1], case[2], case[3], case[4])
        test = generator(test_class, case[0], case[1], case[2], case[3], case[4])
        setattr(test_class, "test_%s" % case[4], test)

class TestAuto(unittest.TestCase):
    def setUp(self):
        print('Setup')

    def tearDown(self):
        print('TearDown')

add_test_methods(TestAuto)

if __name__ == '__main__':
    unittest.main(verbosity=1)
{ "language": "en", "url": "https://stackoverflow.com/questions/32899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "348" }
Q: Is there a way to render svg data in a swf at runtime? I'd like to render svg data in a swf at runtime (not in Flex - not using Degrafa) - how would I go about doing that?

A: If ActionScript 2: Use the com.itechnica.svg (PathToArray) library to load SVGs at SWF runtime and display them (uses XML for SVG parsing): Using SVG Path Data in Flash, Code download button on the right pane.

If ActionScript 3: Use the com.zavoo.svg (SvgPath) library to load SVGs at SWF runtime and display them (uses RegExp for SVG parsing): Source code for SvgLoad and SvgDraw, Code download button on the bottom-left.

A: The Ajaxian blog had a post about this today. http://ajaxian.com/archives/the-state-of-svg-browser-support-using-flash-for-svg-in-internet-explorer
{ "language": "en", "url": "https://stackoverflow.com/questions/32914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is a good dvd burning component for Windows or .Net? I'd like to add dvd burning functionality to my .Net app (running on Windows Server 2003) - are there any good components available? I've used the NeroCOM sdk that used to come with Nero, but they no longer support the sdk in the latest versions of Nero. I learned that Microsoft has created an IMAPI2 upgrade for Windows XP/2003 and there is an example project at CodeProject, but not having used it myself I can't say how easy/reliable it is to use. I'm not really worried about burning audio/video to DVD, as this is for file backup purposes only.

A: At my last job I was tasked with finding a cross-platform and preferably free way to write our application-specific files to cd/dvd. I quickly found that writing CDs wasn't hard on windows, but I couldn't write DVDs easily, and that only worked on windows. I ended up writing a wrapper around cdrecord. cdrecord is an open source project that builds easily with cygwin. I would create a staging directory where I added the files that needed to be written, called mkisofs on that directory to make a cd iso, and then called cdrecord to burn the image (see the sketch at the end of this thread). This may not be the best solution if you have a strictly windows audience, but it was the only thing I could find that did windows, Linux, and OS X. Another option worth checking out is the StarBurn SDK; I downloaded the trial and used it, and it worked well, but in the end it wasn't free, so it was too expensive for my purposes.

A: I've used the code from the codeproject article and it works pretty well. It's a nice wrapper around the IMAPI2, so as long as IMAPI2 supports what you need to do, the .NET wrapper will do it.

A: My cdrecord method did support dvd burning. I just looked over the code, and boy did I forget how much time and effort I put into that class. cdrecord has no problem burning just about any type of media you throw at it, but since it is a stand-alone application, I had to do a lot of parsing to get useful information. I can dig up the flags and different calls I used if you are interested, but unfortunately I cannot share the source, as it was developed for a commercial project. While looking over the code I was also reminded that I switched from cdrecord (cdrtools) to wodim (cdrkit). wodim is a branch of cdrecord made a few years ago by the debian team because cdrecord dropped the GPL license. Like I said before, this was released as part of a commercial application. Our interpretation of the GPL was that you can call external binaries from your program without a problem as long as your program can run without the external binaries (if cdrecord wasn't found we popped up a dialog informing the user that burning capabilities were not available), and we also had to host the source for cdrkit and cygwin and include a copy of the GPL with our distributed program. So basically we would not make "derivative works": we would compile the cdrkit code exactly as it was, and then use the produced binaries. As far as the StarBurn SDK goes, I demoed it, but I didn't use it for a shipped product, so I can't really give a recommendation or say much more than that it does work.

A: Did your cdrecord methodology support dvd burning? And is there an easy way to redistribute/install cygwin with an application?
StarBurn looks pretty good at first glance, although I'm a little hesitant to go with unproven libraries that have to handle something this complicated (especially with the number of types of media out there now) and the StarBurn portfolio page is a bit on the fluffy side.
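To make the staging-directory pipeline from the first answer concrete, here is a minimal Python sketch of wrapping the two tools with subprocess. It assumes a Unix-like environment (e.g. cygwin) with mkisofs and cdrecord on the PATH; the device name is a placeholder you would need to adapt:

import subprocess
import tempfile

def burn_directory(staging_dir, device="/dev/cdrw"):
    """Build an ISO image from staging_dir with mkisofs, then burn it with cdrecord."""
    with tempfile.NamedTemporaryFile(suffix=".iso", delete=False) as iso:
        iso_path = iso.name
    # -R and -J add Rock Ridge / Joliet extensions so long file names survive
    subprocess.check_call(["mkisofs", "-R", "-J", "-o", iso_path, staging_dir])
    # dev= identifies the burner; "/dev/cdrw" here is only an illustrative default
    subprocess.check_call(["cdrecord", "dev=" + device, iso_path])

As the answers note, the hard part in practice is not invoking the tools but parsing their output for progress and error reporting.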
{ "language": "en", "url": "https://stackoverflow.com/questions/32930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Shorthand conditional in C# similar to SQL 'in' keyword In C#, is there a shorthand way to write this:

public static bool IsAllowed(int userID)
{
    return (userID == Personnel.JohnDoe || userID == Personnel.JaneDoe ...);
}

Like:

public static bool IsAllowed(int userID)
{
    return (userID in Personnel.JohnDoe, Personnel.JaneDoe ...);
}

I know I could also use switch, but there are probably 50 or so functions like this I have to write (porting a classic ASP site over to ASP.NET), so I'd like to keep them as short as possible.

A: How about something like this:

public static bool IsAllowed(int userID)
{
    List<int> IDs = new List<int> { 1, 2, 3, 4, 5 };  // the original had "new List<string>", which does not compile
    return IDs.Contains(userID);
}

(You could of course change the static status, initialize the IDs list in some other place, use an IEnumerable<>, etc, based on your needs. The main point is that the closest equivalent to the in operator in SQL is the Collection.Contains() function.)

A: I would encapsulate the list of allowed IDs as data, not code. Then its source can be changed easily later on.

List<int> allowedIDs = ...;

public bool IsAllowed(int userID)
{
    return allowedIDs.Contains(userID);
}

If using .NET 3.5, you can use IEnumerable instead of List thanks to extension methods. (This function shouldn't be static. See this posting: using too much static bad or good?)

A: How about this?

public static class Extensions
{
    // Contains on an array requires "using System.Linq;"
    public static bool In<T>(this T testValue, params T[] values)
    {
        return values.Contains(testValue);
    }
}

Usage:

Personnel userId = Personnel.JohnDoe;
if (userId.In(Personnel.JohnDoe, Personnel.JaneDoe))
{
    // Do something
}

I can't claim credit for this, but I also can't remember where I saw it. So, credit to you, anonymous Internet stranger.

A: Are permissions user-id based? If so, you may end up with a better solution by going to role-based permissions. Or you may end up having to edit that method quite frequently to add additional users to the "allowed users" list. For example:

enum UserRole
{
    User, Administrator, LordEmperor
}

class User
{
    public UserRole Role { get; set; }
    public string Name { get; set; }
    public int UserId { get; set; }
}

public static bool IsAllowed(User user)
{
    return user.Role == UserRole.LordEmperor;
}

A: A nice little trick is to sort of reverse the way you usually use .Contains(), like:

public static bool IsAllowed(int userID)
{
    return new int[] { Personnel.JaneDoe, Personnel.JohnDoe }.Contains(userID);
}

Where you can put as many entries in the array as you like. If Personnel.x is an enum you'd have some casting issues with this (and with the original code you posted), and in that case it'd be easier to use:

public static bool IsAllowed(int userID)
{
    return Enum.IsDefined(typeof(Personnel), userID);
}

A: Here's the closest that I can think of:

using System.Linq;

public static bool IsAllowed(int userID)
{
    return new Personnel[] { Personnel.JohnDoe, Personnel.JaneDoe }.Contains((Personnel)userID);
}

A: Just another syntax idea:

return new [] { Personnel.JohnDoe, Personnel.JaneDoe }.Contains(userID);

A: Can you write an iterator for Personnel?

public static bool IsAllowed(int userID)
{
    return (Personnel.Contains(userID));
}

public bool Contains(int userID) : extends Personnel  // (i think that is how it is written)
{
    foreach (int id in Personnel)
        if (id == userid)
            return true;
    return false;
}
{ "language": "en", "url": "https://stackoverflow.com/questions/32937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: SQL Server 2008 Reporting Services Control Is the SQL Server 2008 control available for download? Does it yet support the 2008 RDL schema?

A: If you are talking about the ReportViewer control, it is available. However, you need Windows XP, Windows Vista or Windows Server 2003 to install it. It is also written that .NET 3.5 is required, but I'm not sure about this one. I managed to install it with .NET 2.0 on an XP.

A: The ReportViewer control should work just fine with SQL Server 2008.

A: The ReportViewer control, http://www.microsoft.com/downloads/details.aspx?FamilyID=cc96c246-61e5-4d9e-bb5f-416d75a1b9ef&DisplayLang=en, supports 2005 RDL using LocalMode (ReportViewer.LocalReport) and 2008 from a Server (ReportViewer.ServerReport).
{ "language": "en", "url": "https://stackoverflow.com/questions/32941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Should a wireless network be open? Obviously there are security reasons to close a wireless network, and it's not fun if someone is stealing your bandwidth. But would that be a serious problem? To address the first concern: does a device on the same wireless network have any special privileges or access that another device on the internet doesn't have? Assumptions: the wireless network is connected to the internet. The second seems like a community issue. If your neighbor is stealing bandwidth, you'd act just as if he were "borrowing" water or electricity: first, talk to him about the problem, and if that doesn't work, go to the authorities or lock stuff up. Am I missing something?

A: I don't think the biggest problem is just someone stealing your bandwidth, but what they do with it. It's one thing if someone uses my wireless network to browse the Internet. It's another thing if they use it for torrenting (I find that slows down the network) or any illegal activities (kiddy porn? not on my network you don't).

A: Yes, you are missing something: your wireless router also doubles as a firewall preventing harmful data from the Internet, and by letting one of your virus-infected neighbors in on your wlan you're essentially letting him bypass that. Now, this shouldn't be a problem in an ideal world, since you'd have a well-configured system with a firewall, but that's certainly not always the case. What about when you have your less security-minded friends over? Not to mention the legal hassle you could get yourself into if one of your neighbors, or someone sitting with a laptop in a car close enough, starts browsing kiddieporn.

A: I feel it all has to do with population density. My parents own a big plot of land; the nearest neighbor is .5 mile away. To me it doesn't make sense to lock a wireless router down. But if I lived in an apartment complex, that thing would be locked down and not broadcasting its ID. Now at my house I just don't broadcast my ID and keep it open. The signal doesn't travel further than my property line, so I am not too worried about people hijacking it.

A: I would actually disagree with Thomas in the sense that I think bandwidth is the biggest problem, as it's unlikely there are many dodgy people in your area who just so happen to connect to your network to misbehave. It's more likely, I think, that you'll have chancers, or even users who don't fully understand wireless, connecting and slowing down your connection. I've experienced horribly laggy connections due to bandwidth stealing; a lot of the problem is with ADSL - it just can't handle big upstream traffic. If a user is using torrents and not restricting the upstream bandwidth, it can basically stall everything.

A: Bruce Schneier is famous for running an open wireless network at home (see here). He does it for two reasons:

* To be neighborly (you'd let your neighbor borrow a cup of sugar, wouldn't you? Why not a few megabits?)
* To keep away from the false sense of security that a firewall gives you. In other words, it forces him to make sure his hosts are secure.

Personally, I would never run an open wireless network for one reason: accountability. If someone does something illegal on my network, I don't want to be held accountable.

A: For most people, the wireless access point is a router that is acting as a hardware firewall to external traffic. If someone's not on your wireless network, the only way they'll get to a service running on your machine is if the router is configured to forward requests.
Once a device is behind the router, you're relying on your computer's firewall for security. From a "paranoid" layered-security standpoint, I'd consider an open wireless network in this scenario to be a reduction in security. I've met a lot of people that leave their networks open on purpose, because they feel it's a kind of community service. I don't subscribe to that theory, but I can understand the logic. They don't see it as their neighbor stealing bandwidth because they feel like they aren't using that bandwidth anyway.

A: Following joshhinman's comment, this is a link to an article where he explains why he has chosen to leave his wireless network setup open: Schneier on Open Wireless. This guy is probably the most famous security expert at the moment, so it's worth having a look at what he has to say.

A: As far as the security aspect goes, it is a non-issue. An open network can allow a determined person to 'listen' to all your unencrypted communication. This will include emails - probably forum posts - things like this. These things should never EVER be considered secure in the first place unless you are applying your own encryption. Passwords / secure logins to servers will be encrypted already, so there is no added benefit from wireless encryption while the packets are in the air. The advantage comes when, as others have mentioned, users perform illegal actions on your access point. IANAL, but it seems some correlations can be drawn to having your car stolen and someone committing a crime with it. You will be investigated and can be determined innocent if you have some alibi or logs showing your machines were not responsible for that traffic. The best solution to the hassle of using a key, for the home user, is to restrict the MAC addresses of the computers that can connect. This solves the problem of having unauthorized users (for all but the most advanced, at which point your PW likely won't help you either) and it keeps you from having to input a long key every time you need to access something.

A: Personally, I would never run an open wireless network for one reason: accountability. If someone does something illegal on my network, I don't want to be held accountable. The flip side of this is deniability. If the government or RIAA come knocking on your door about something done from your IP address, you can always point to your insecure wireless connection and blame someone else.

A: I wish people would stop referring to an open network as 'insecure'. A network is only insecure if it doesn't meet your security requirements - people need to understand that not everyone has the same security requirements. Some people actually want to share their network. An open network is open. As long as you meant that to be the case, that's all it is. If your security policy doesn't include preventing your neighbors from sharing your bandwidth, then it's not a security fault if it allows them to do that; it's faulty if it doesn't. Are you liable for others' use of your 'insecure' network? No. No more so than your ISP is liable for your use of the Internet. Why would you want it to be otherwise? Note, by the way, that pretty much every commercial WiFi hotspot in the world is set up in exactly such an open mode. So, why should a private individual be held liable for doing exactly the same thing, merely because they don't charge for it? Having said that, you do have to lock down your hosts, or firewall off an 'internal' portion of your network, if you want to run fileshares etc internally with such a setup.
Also, another way to deal with 'bandwidth stealing' is to run a proxy that intercepts others' traffic and replaces all images with upside-down images or pictures of the Hof. :-)

A: @kronoz: I guess it depends on where you live. Only two houses are within reach of my wireless network, excluding my own. So I doubt that small a number of people can affect my bandwidth. But if you live in a major metro area, and many people are able to see and get on the network, yeah, it might become a problem.

A: It is so easy to lock a wireless router down now that I think a better question is: why not lock it down? The only reason I can think of is if you had a yard large enough that your neighbors can't get a signal and you frequently have visitors bringing devices into your home (since setting them up can be a chore). Note that I'm saying both of those things would need to be true for me to leave one open.

A: Personally, I would never run an open wireless network for one reason: accountability. If someone does something illegal on my network, I don't want to be held accountable. The flip side of this is deniability. If the government or RIAA come knocking on your door about something done from your IP address you can always point to your insecure wireless connection and blame someone else.

I would argue that anyone who is running a network is responsible for the actions of all people who use it. If you aren't controlling use, then you are failing as a network administrator. But then again, I'm not a lawyer, so...

A: As it turns out, when I switched DSL service, the wireless router the company provided is secured out of the box. So unless I add the old router to my network, it will be secured. On the other hand, it was very convenient to "borrow" a few hours of network time from neighbors while I was waiting for the technician to stop by and install the service. Looks like this might not be an option soon, however.

A: My biggest concern is that there is never too much bandwidth, so a decision to share it is only acceptable if I can somehow guarantee that other people do not use more than, say, 5% of my total bandwidth. Which may or may not render my connection useless to other people, depending on what they mean to do with it.

A: As most wireless standards are very hackable, I can understand the logic behind not securing it, as it removes the false sense of security that wireless security provides. However, in NZ bandwidth is expensive; I cannot afford for randoms to leech that off me. As the vast majority of people don't have a clue regarding hacking wireless connections, having this admittedly pitiful defense wards off most of the lazy. If anyone cares enough they can hack my crappy WEP encryption and get themselves some free Internet and free leech until I care enough to stop them. Then I'll upgrade to something better (white-listed MAC addresses, say) which will be harder for them to hack, and the cycle will begin anew.
{ "language": "en", "url": "https://stackoverflow.com/questions/32964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to program user preferences I'm using Ruby on Rails for an internal site. Different users of the site have access to a wide variety of data and highly disparate perspectives of the data. Within those different classes of users, there need to be levels of access. Within the levels of access I need to be able to add features from other classes of users. In the released "Version 1.0" of the intranet site I have implemented the general classes of users. I now need to implement much finer-grained control of a user's access. The question is how? What is the generally accepted practice for coding up user preferences (display the map (or not); access to this feature, but not this feature) without exploding the database schema and populating the view code with <% if feature_allowed %> tags everywhere.

A: Another totally different approach would be to use the acts_as_authenticated and authorization plugins. The tables will be built by the plugins (i.e. users, roles and roles_users). From the doc, the authorization plugin provides the following:

* A simple way of checking authorization at either the class or instance method level using #permit and #permit?
* Authorization using roles for the entire application, a model class, or an instance of a model (i.e., a particular object).
* Some english-like dynamic methods that draw on the defined roles. You will be able to use methods like "user.is_fan_of angelina" or "angelina.has_fans?", where a 'fan' is only defined in the roles table.
* Pick-and-choose a mixin for your desired level of database complexity. For all the features, you will want to use "object roles table" (see below).

A: populating the view code with <% if feature_allowed %> tags everywhere. I don't think you want to do that. Assuming none of the alternatives suggested are practicable, at the very least you should consider shifting those checks into your controllers, where you can refactor them into a before_filter. See section 11.3 in "Agile Web Development With Rails" (page 158 in my copy of the 2nd edition) where they do exactly that. (A rough sketch of the general idea follows below.)
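The question is Rails-specific, but the pattern both answers point at - centralize permission checks behind one interface instead of scattering feature flags through the views - can be sketched in any language. Here is a rough Python illustration; all the names (Feature, ROLE_FEATURES, allowed) are invented for the example:

import enum

class Feature(enum.Flag):
    VIEW_MAP = enum.auto()
    EXPORT = enum.auto()
    ADMIN = enum.auto()

# Each role bundles a set of feature bits; individual users can be granted extras,
# so the schema needs only a role column plus one grants column per user.
ROLE_FEATURES = {
    "viewer": Feature.VIEW_MAP,
    "analyst": Feature.VIEW_MAP | Feature.EXPORT,
}

def allowed(user_role, extra_grants, feature):
    """Single choke point for checks, instead of if-tags scattered in views."""
    return bool((ROLE_FEATURES.get(user_role, Feature(0)) | extra_grants) & feature)

# Usage: a viewer with an individually granted EXPORT feature
print(allowed("viewer", Feature.EXPORT, Feature.EXPORT))  # True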
{ "language": "en", "url": "https://stackoverflow.com/questions/32966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: NSEnumerator performance vs for loop in Cocoa I know that if you have a loop that modifies the count of the items in the loop, using the NSEnumerator on a set is the best way to make sure your code blows up. However, I would like to understand the performance tradeoffs between the NSEnumerator class and just an old school for loop.

A: After running the test several times, the result is almost the same. Each measure block runs 10 times consecutively. The result in my case, from the fastest to the slowest:

* For..in (testPerformanceExample3) (0.006 sec)
* While (testPerformanceExample4) (0.026 sec)
* For(;;) (testPerformanceExample1) (0.027 sec)
* Enumeration block (testPerformanceExample2) (0.067 sec)

The for and while loops are almost the same. tmp is an NSArray which contains 1 million objects, from 0 to 999999.

- (NSArray *)createArray
{
    self.tmpArray = [NSMutableArray array];
    for (int i = 0; i < 1000000; i++) {
        [self.tmpArray addObject:@(i)];
    }
    return self.tmpArray;
}

The whole code:

ViewController.h

#import <UIKit/UIKit.h>

@interface ViewController : UIViewController

@property (strong, nonatomic) NSMutableArray *tmpArray;

- (NSArray *)createArray;

@end

ViewController.m

#import "ViewController.h"

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    [self createArray];
}

- (NSArray *)createArray
{
    self.tmpArray = [NSMutableArray array];
    for (int i = 0; i < 1000000; i++) {
        [self.tmpArray addObject:@(i)];
    }
    return self.tmpArray;
}

@end

MyTestfile.m

#import <UIKit/UIKit.h>
#import <XCTest/XCTest.h>
#import "ViewController.h"

@interface TestCaseXcodeTests : XCTestCase {
    ViewController *vc;
    NSArray *tmp;
}
@end

@implementation TestCaseXcodeTests

- (void)setUp {
    [super setUp];
    vc = [[ViewController alloc] init];
    tmp = vc.createArray;
}

- (void)testPerformanceExample1 {
    [self measureBlock:^{
        for (int i = 0; i < [tmp count]; i++) {
            [tmp objectAtIndex:i];
        }
    }];
}

- (void)testPerformanceExample2 {
    [self measureBlock:^{
        [tmp enumerateObjectsUsingBlock:^(NSNumber *obj, NSUInteger idx, BOOL *stop) {
            obj;
        }];
    }];
}

- (void)testPerformanceExample3 {
    [self measureBlock:^{
        for (NSNumber *num in tmp) {
            num;
        }
    }];
}

- (void)testPerformanceExample4 {
    [self measureBlock:^{
        int i = 0;
        while (i < [tmp count]) {
            [tmp objectAtIndex:i];
            i++;
        }
    }];
}

@end

For more information visit Apple's "About Testing with Xcode".

A: Using the new for (... in ...) syntax in Objective-C 2.0 is generally the fastest way to iterate over a collection because it can maintain a buffer on the stack and get batches of items into it. Using NSEnumerator is generally the slowest way because it often copies the collection being iterated; for immutable collections this can be cheap (equivalent to -retain) but for mutable collections it can cause an immutable copy to be created. Doing your own iteration — for example, using -[NSArray objectAtIndex:] — will generally fall somewhere in between because while you won't have the potential copying overhead, you also won't be getting batches of objects from the underlying collection. (PS - This question should be tagged as Objective-C, not C, since NSEnumerator is a Cocoa class and the new for (... in ...) syntax is specific to Objective-C.)

A: They are very similar. With Objective-C 2.0 most enumerations now default to NSFastEnumeration, which creates a buffer of the addresses of each object in the collection that it can then deliver. The one step that you save over the classic for loop is not having to call objectAtIndex:i each time inside the loop.
The internals of the collection you are enumerating implement fast enumeration without calling the objectAtIndex:i method. The buffer is part of the reason that you can't mutate a collection as you enumerate: the addresses of the objects would change and the buffer that was built would no longer match. As a bonus, the format in 2.0 looks as nice as the classic for loop:

for ( Type newVariable in expression ) { stmts }

Read the following documentation to go deeper: NSFastEnumeration Protocol Reference
{ "language": "en", "url": "https://stackoverflow.com/questions/32986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Leaving your harddrive shared The leaving your wireless network open question reminded me of this. I typically share the root drive on my machines across my network, and tie login authorization to the machine's NT ID, so there is at least some form of protection. My question: how easy is it to gain access to these drives for ill? Is the authorization enough, or should I lock things down more?

A: If this is a home network with no wifi or secured wifi, it's probably not an issue. Your ISP will almost certainly prevent anyone from trying anything via the larger web. If you have open wifi, then there's a little more cause for concern. If it's properly secured so that some authentication is required, you're probably okay. I mean, a determined hacker could probably break in, but you're not likely to find a determined hacker in wi-fi range. But the risk (if small) is there. You will want to make sure the administrative shares (the \\yourmachine\c$ or \\yourmachine\admin$ mentioned earlier) are disabled if you have open wifi. No sense making it too easy.

A: I can't answer the main question, but do keep in mind that Windows, by default, is always sharing the roots of your drives. Try:

\\yourmachine\c$

(And then try not to freak out.)

A: Windows generally protects shares via two methods - permissions on the share itself, and then NTFS file permissions. Good practice would be to set the share permissions to "Authenticated User" and remove the "Everyone" group. Personally, I would make sure that usernames and passwords match up on each computer, and control permissions like that, rather than using the computer name.
{ "language": "en", "url": "https://stackoverflow.com/questions/32991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do banks remember "your computer"? As many of you probably know, online banks nowadays have a security system whereby you are asked some personal questions before you even enter your password. Once you have answered them, you can choose for the bank to "remember this computer" so that in the future you can log in by only entering your password. How does the "remember this computer" part work? I know it cannot be cookies, because the feature still works despite the fact that I clear all of my cookies. I thought it might be by IP address, but my friend with a dynamic IP claims it works for him, too (but maybe he's wrong). He thought it was MAC address or something, but I strongly doubt that! So, is there a concept of https-only cookies that I don't clear? Finally, the programming part of the question: how can I do something similar myself in, say, PHP?

A: The particular bank I was interested in is Bank of America. I have confirmed that if I only clear my cookies or my LSOs, the site does not require me to re-enter info. If, however, I clear both, I have to go through additional authentication. Thus, that appears to be the answer in my particular case! But thank you all for the heads-up regarding other banks, and possibilities such as including the User-Agent string.

A: This kind of session tracking is very likely to be done using a combination of a cookie with a unique id identifying your current session, and the website pairing that id with the last IP address you used to connect to their server. That way, if the IP changes but you still have the cookie, you're identified and logged in; and if the cookie is absent but you have the same IP address as the one saved on the server, then they set your cookie to the id paired with that IP. (A rough sketch of this scheme appears at the end of this thread.) Really, it's that second possibility that is tricky to get right. If the cookie is missing, and you only have your IP address to show for identification, it's quite unsafe to log someone in based on just that. So servers probably store additional info about you: LSOs seem like a good choice, geo IP too, but the User-Agent not so much, because it doesn't really say anything about you - everybody using the same version of the same browser as you has the same one. As an aside, it has been mentioned above that it could work with MAC addresses. I strongly disagree! Your MAC address never reaches your bank's server, as MAC addresses are only used to identify the two sides of an Ethernet connection, and to connect to your bank you make a bunch of Ethernet connections: from your computer to your home router or your ISP, then from there to the first internet router you go through, then to the second, etc... and each time a new connection is made, each machine on each side provides its very own MAC address. So your MAC address can only be known to the machines directly connected to you through a switch or hub, because anything else that routes your packets will replace your MAC with its own. Only the IP address stays the same all the way. If MAC addresses did go all the way, it would be a privacy nightmare, as all MAC addresses are unique to a single device, and hence to a single person. This is a slightly simplified explanation, because it's not the point of the question, but it seemed useful to clear up what looked like a misunderstanding.

A: In fact they most probably use cookies. An alternative for them would be to use "flash cookies" (officially called "Local Shared Objects").
They are similar to cookies in that they are tied to a website and have an upper size limit, but they are maintained by the Flash player, so they are invisible to any browser tools. To clear them (and test this theory), you can use the instructions provided by Adobe. Another nifty (or maybe worrying, depending on your viewpoint) feature is that the LSO storage is shared by all browsers, so using LSOs you can identify users even if they switch browsers (as long as they are logged in as the same user).

A: It could be a combination of cookies and IP address logging. Edit: I have just checked my bank and cleared the cookies. Now I have to re-enter all of my info.

A: I think it depends on the bank. My bank does use a cookie, since I lose it when I wipe cookies.

A: It is possible for flash files to store a small amount of data on your computer. It's also possible that the bank uses that approach to "remember" your computer, but it's risky to rely on users having (and not having disabled) flash.

A: My bank's site makes me re-authenticate every time a new version of Firefox is out, so there's definitely a user-agent string component in some.

A: Are you using a laptop? Does it remember you, after you delete your cookies, if you access from a different WiFi network? If so, IP/physical location mapping is highly unlikely.

A: Based on all these posts, the conclusions that I'm reaching are (1) it depends on the bank and (2) there's probably more than one piece of data that's involved, but see (1).

A: MAC address is possible. IP-to-physical-location mapping is also a possibility. User agents and other HTTP headers are quite unique to each machine too. I'm thinking about those websites that prevent you from using accelerating download managers. There must be a way.
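The question asks for PHP, but the cookie-plus-server-side-record scheme described in the second answer is easy to sketch in any language. Here is a rough Python illustration of the core idea - a random token in a long-lived cookie, paired on the server with the last known IP. All names (remember_tokens, issue_token, check_remembered) are invented for the example, and a real site would keep the records in a database, not a dict:

import secrets

remember_tokens = {}  # token -> (user_id, last_ip)

def issue_token(user_id, ip):
    """Call after the user passes full authentication; store the returned
    value in a persistent cookie (the bank's "remember this computer")."""
    token = secrets.token_hex(32)
    remember_tokens[token] = (user_id, ip)
    return token

def check_remembered(token, ip):
    """Identify a returning browser; refresh the paired IP on success."""
    record = remember_tokens.get(token)
    if record is None:
        return None  # unknown browser: fall back to the security questions
    user_id, _last_ip = record
    remember_tokens[token] = (user_id, ip)  # dynamic IPs change; the token still matches
    return user_id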
{ "language": "en", "url": "https://stackoverflow.com/questions/33034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: How can I measure CppUnit test coverage (on win32 and Unix)? I have a very large code base that contains extensive unit tests (using CppUnit). I need to work out what percentage of the code is exercised by these tests, and (ideally) generate some sort of report that tells me, on a per-library or per-file basis, how much of the code was exercised. Here's the kicker: this has to run completely unattended (eventually inside a continuous integration build), and has to be cross-platform (well, WIN32 and *nix at least). Can anyone suggest a tool, or set of tools, that can help me do this? I can't change away from CppUnit (nor would I want to - it kicks ass), but otherwise I'm eager to hear any recommendations you might have. Cheers,

A: If you can use GNU GCC as your compiler, then the gcov tool works well. It's very easy to fully automate the whole process.

A: If you are using the GCC toolchain, gcov is going to get you source, functional, and branch coverage statistics. gcov works fine for MinGW and Cygwin. This will allow you to get coverage statistics as well as emitting instrumented source code that allows you to visualize unexecuted code. However, if you really want to hit it out of the park with pretty reports, using gcov in conjunction with lcov is the way to go. lcov will give you bar reports scoped to files and directories, functional coverage statistics, and color-coded source file browsing to show coverage (green means executed, red means not...). lcov is easy on Linux, but may require some perl hacking on Cygwin. I personally have had some problems executing the scripts (lcov is implemented in perl) on Windows. I've gotten a hacked-up version to work, but be forewarned. Another approach is doing the gcov emit on Windows, and doing the lcov post-processing on Linux, where it will surely work out of the box. (A sketch of scripting this flow appears at the end of this thread.)

A: Check out our SD C++ Test Coverage tool. It can be obtained for GCC, and for MSVC6. It has low-overhead probe data collection, a nice display of coverage data overlaid on your code, and complete report generation with rollups on coverage across the method/class/file/directory levels. EDIT: Aug 2015: Now supports GCC5 and various MS dialects through Visual Studio 2015. To use these tools under Linux, you need Wine, but there the tools provide Linux-native sh scripting and a Linux/Java based UI, so the tool feels like a native Linux tool there.

A: Which tool should I use? This article describes another developer's frustrations searching for C++ code coverage tools. The author's final solution was Bullseye Coverage.

Bullseye Coverage features:

* Cross-platform support (win32, unix, and embedded); supports linux gcc compilers and MSVC6.
* Easy to use (up and running in a few hours).
* Provides "best" metrics: Function Coverage and Condition/Decision Coverage.
* Uses source code instrumentation.

As for hooking into your continuous integration, it depends on which CI solution you use, but you can likely hook the instrumentation / coverage measurement steps into the makefile you use for automated testing.

Testing Linux vs Windows? So long as all your tests run correctly in both environments, you should be fine measuring coverage on one or the other. (Though Bullseye appears to support both platforms.) But why aren't you doing continuous integration builds in both environments?? If you deliver to clients in both environments then you need to be testing in both.
For that reason, it sounds like you might need to have two continuous build servers set up, one for a Linux build and one for a Windows build. Perhaps this can be easily accomplished with some virtualization software like VMware or VirtualBox. You may not need to run code coverage metrics on both OSes, but you should definitely be running your unit tests on both.

A: I guess I should have specified the compiler - we're using gcc for Linux, and MSVC 6 (yeah I know, it's old, but it works (mostly) for us) for Win32. For that reason, gcov won't work for our Win32 builds, and Bullseye won't work for our Linux builds. Then again maybe I only need coverage in one OS...
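As a rough illustration of the unattended gcov/lcov flow discussed above, a CI job can simply shell out to the tools after the CppUnit binaries have run. The sketch below assumes a gcc build compiled with --coverage (so .gcda files exist) and lcov/genhtml on the PATH; the directory names are placeholders:

import subprocess

def collect_coverage(build_dir, report_dir):
    """Run after the CppUnit test binaries have executed and written .gcda files."""
    # lcov gathers the raw gcov counters into one tracefile
    subprocess.check_call(["lcov", "--capture", "--directory", build_dir,
                           "--output-file", "coverage.info"])
    # genhtml renders the per-file/per-directory HTML report lcov is known for
    subprocess.check_call(["genhtml", "coverage.info",
                           "--output-directory", report_dir])

collect_coverage("build", "coverage-report")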
{ "language": "en", "url": "https://stackoverflow.com/questions/33042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How would you test observers with rSpec in a Ruby on Rails application? Suppose you have an ActiveRecord::Observer in one of your Ruby on Rails applications - how do you test this observer with rSpec?

A: no_peeping_toms is now a gem and can be found here: https://github.com/patmaddox/no-peeping-toms

A: You are on the right track, but I have run into a number of frustrating unexpected message errors when using rSpec, observers, and mock objects. When I am spec testing my model, I don't want to have to handle observer behavior in my message expectations. In your example, there isn't a really good way to spec "set_status" on the model without knowledge of what the observer is going to do to it. Therefore, I like to use the "No Peeping Toms" plugin. Given your code above and using the No Peeping Toms plugin, I would spec the model like this:

describe Person do
  it "should set status correctly" do
    @p = Person.new(:status => "foo")
    @p.set_status("bar")
    @p.save
    @p.status.should eql("bar")
  end
end

You can spec your model code without having to worry that there is an observer out there that is going to come in and clobber your value. You'd spec that separately in the person_observer_spec like this:

describe PersonObserver do
  it "should clobber the status field" do
    @p = mock_model(Person, :status => "foo")
    @obs = PersonObserver.instance
    @p.should_receive(:set_status).with("aha!")
    @obs.after_save
  end
end

If you REALLY REALLY want to test the coupled Model and Observer class, you can do it like this:

describe Person do
  it "should register a status change with the person observer turned on" do
    Person.with_observers(:person_observer) do
      lambda { @p = Person.new; @p.save }.should change(@p, :status).to("aha!")
    end
  end
end

99% of the time, I'd rather spec test with the observers turned off. It's just easier that way.

A: If you want to test that the observer observes the correct model and receives the notification as expected, here is an example using RR.

your_model.rb:

class YourModel < ActiveRecord::Base
  ...
end

your_model_observer.rb:

class YourModelObserver < ActiveRecord::Observer
  def after_create
    ...
  end

  def custom_notification
    ...
  end
end

your_model_observer_spec.rb:

before do
  @observer = YourModelObserver.instance
  @model = YourModel.new
end

it "acts on the after_create notification" do
  mock(@observer).after_create(@model)
  @model.save!
end

it "acts on the custom notification" do
  mock(@observer).custom_notification(@model)
  @model.send(:notify, :custom_notification)
end

A: Disclaimer: I've never actually done this on a production site, but it looks like a reasonable way would be to use mock objects, should_receive and friends, and invoke methods on the observer directly.

Given the following model and observer:

class Person < ActiveRecord::Base
  def set_status( new_status )
    # do whatever
  end
end

class PersonObserver < ActiveRecord::Observer
  def after_save(person)
    person.set_status("aha!")
  end
end

I would write a spec like this (I ran it, and it passes):

describe PersonObserver do
  before :each do
    @person = stub_model(Person)
    @observer = PersonObserver.instance
  end

  it "should invoke after_save on the observed object" do
    @person.should_receive(:set_status).with("aha!")
    @observer.after_save(@person)
  end
end
{ "language": "en", "url": "https://stackoverflow.com/questions/33048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: SVN repository backup strategies What methods are available for backing up repositories in a Windows environment?

A: Basically it's safe to copy the repository folder if the svn server is stopped. (source: https://groups.google.com/forum/?fromgroups#!topic/visualsvn/i_55khUBrys%5B1-25%5D) So if you're allowed to stop the server, do it and just copy the repository, either with some script or a backup tool. Cobian Backup fits here nicely, as it can stop and start services automatically, and it can do incremental backups so you're only backing up the parts of the repository that have changed recently (useful if the repository is large and you're backing up to a remote location).

Example:

* Install Cobian Backup.
* Add a backup task:
  * Set source to the repository folder (e.g. C:\Repositories\),
  * Add pre-backup event "STOP_SERVICE" VisualSVN,
  * Add post-backup event "START_SERVICE" VisualSVN,
  * Set other options as needed. We've set up incremental backups including removal of old ones, backup schedule, destination, compression incl. archive splitting etc.
* Profit!

A: There are two main methods to back up an svn server. The first is hotcopy, which will create a copy of your repository files; the main problem with this approach is that it saves data about the underlying file system, so you may have some difficulties trying to restore this kind of backup on another kind of svn server or another machine. The second type of backup is called dump; this backup won't save any information about the underlying file system, and it's portable to any kind of SVN server based on tigris.org Subversion. As for the backup tool, you can use the svnadmin tool (it is able to do both hotcopy and dump) from the command prompt; this console resides in the same directory where your svn server lives, or you can google for svn backup tools. My recommendation is that you do both kinds of backups and get them out of the office, to your email account, Amazon S3 service, FTP, or Azure services; that way you will have a secure backup without having to host the svn server somewhere out of your office.

A: Here is a GUI Windows tool for making a dump of local and remote subversion repositories: https://falsinsoft-software.blogspot.com/p/svn-backup-tool.html The tool description says:

This simply tool allow to make a dump backup of a local and remote subversion repository. The software work in the same way of the "svnadmin" but is not a GUI frontend over it. Instead use directly the subversion libraries for allow to create dump in standalone mode without any other additional tool.

Hope this helps...

A: I like to just copy the entire repo directory to my backup location. That way, if something happens, you can just copy the directory back and be ready to go immediately. Just make sure to preserve permissions, if needed. Usually, this is only a concern on Linux machines.

A: If you are using the FSFS repository format (the default), then you can copy the repository itself to make a backup. With the older BerkeleyDB system, the repository is not platform-independent and you would generally want to use svnadmin dump. The svnbook documentation topic for backup recommends the svnadmin hotcopy command, as it will take care of issues like files in use and such.

A: For hosted repositories you can, since svn version 1.7, use svnrdump, which is analogous to svnadmin dump for local repositories.
This article provides a nice walk-through, which essentially boils down to:

svnrdump dump /URL/to/remote/repository > myRepository.dump

After you have downloaded the dump file you can import it locally:

svnadmin load /path/to/local/repository < myRepository.dump

or upload it to the host of your choice.

A: There's a hot-backup.py script available on the Subversion web site that's quite handy for automating backups. http://svn.apache.org/repos/asf/subversion/trunk/tools/backup/hot-backup.py.in

A: Here is a Perl script that will:

* Backup the repo
* Copy it to another server via SCP
* Retrieve the backup
* Create a test repository from the backup
* Do a test checkout
* Email you with any errors (via cron)

The script:

my $svn_repo = "/var/svn";
my $bkup_dir = "/home/backup_user/backups";
my $bkup_file = "my_backup-";
my $tmp_dir = "/home/backup_user/tmp";
my $bkup_svr = "my.backup.com";
my $bkup_svr_login = "backup";
$bkup_file = $bkup_file . `date +%Y%m%d-%H%M`;
chomp $bkup_file;
my $youngest = `svnlook youngest $svn_repo`;
chomp $youngest;
my $dump_command = "svnadmin -q dump $svn_repo > $bkup_dir/$bkup_file ";
print "\nDumping Subversion repo $svn_repo to $bkup_file...\n";
print `$dump_command`;
print "Backing up through revision $youngest... \n";
print "\nCompressing dump file...\n";
print `gzip -9 $bkup_dir/$bkup_file\n`;
chomp $bkup_file;
my $zipped_file = $bkup_dir . "/" . $bkup_file . ".gz";
print "\nCreated $zipped_file\n";
print `scp $zipped_file $bkup_svr_login\@$bkup_svr:/home/backup/`;
print "\n$bkup_file.gz transferred to $bkup_svr\n";
#Test Backup
print "\n---------------------------------------\n";
print "Testing Backup";
print "\n---------------------------------------\n";
print "Downloading $bkup_file.gz from $bkup_svr\n";
print `scp $bkup_svr_login\@$bkup_svr:/home/backup/$bkup_file.gz $tmp_dir/`;
print "Unzipping $bkup_file.gz\n";
print `gunzip $tmp_dir/$bkup_file.gz`;
print "Creating test repository\n";
print `svnadmin create $tmp_dir/test_repo`;
print "Loading repository\n";
print `svnadmin -q load $tmp_dir/test_repo < $tmp_dir/$bkup_file`;
print "Checking out repository\n";
print `svn -q co file://$tmp_dir/test_repo $tmp_dir/test_checkout`;
print "Cleaning up\n";
print `rm -f $tmp_dir/$bkup_file`;
print `rm -rf $tmp_dir/test_checkout`;
print `rm -rf $tmp_dir/test_repo`;

Script source and more details about the rationale for this type of backup.

A: For the daily and full backup solution just use the SVN backup scripts here.

A:

@echo off
set hour=%time:~0,2%
if "%hour:~0,1%"==" " set hour=0%time:~1,1%
set folder=%date:~6,4%%date:~3,2%%date:~0,2%%hour%%time:~3,2%
echo Performing Backup
md "\\HOME\Development\Backups\SubVersion\%folder%"
svnadmin dump "C:\Users\Yakyb\Desktop\MainRepositary\Jake" | "C:\Program Files\7-Zip\7z.exe" a "\\HOME\Development\Backups\SubVersion\%folder%\Jake.7z" -sibackupname.svn

This is the batch file I have running that performs my backups.

A: I use svnsync, which sets up a remote server as a mirror/slave. We had a server go down two weeks ago, and I was able to switch the slave into primary position quite easily (only had to reset the UUID on the slave repository to the original). Another benefit is that the sync can be run by a middle-man, rather than as a task on either server. I've had a client with two VPNs sync a repository between them.
A: You could use something like (Linux):

svnadmin dump repositorypath | gzip > backupname.svn.gz

Since Windows does not ship with gzip, there it is just:

svnadmin dump repositorypath > backupname.svn

A: svnadmin hotcopy

svnadmin hotcopy REPOS_PATH NEW_REPOS_PATH

This subcommand makes a full “hot” backup of your repository, including all hooks, configuration files, and, of course, database files.

A: We use svnadmin hotcopy, e.g.:

svnadmin hotcopy C:\svn\repo D:\backups\svn\repo

As per the book: You can run this command at any time and make a safe copy of the repository, regardless of whether other processes are using the repository. You can of course ZIP (preferably 7-Zip) the backup copy. IMHO it's the most straightforward of the backup options: in case of disaster there's little to do other than unzip it back into position.

A:

* You can create a repository backup (dump) with svnadmin dump.
* You can then import it using svnadmin load.

Detailed reference in the SVNBook: "Repository data migration using svnadmin"

A: svnbackup over at Google Code, a .NET console application.

A: I have compiled the steps I followed for the purpose of taking a backup of the remote SVN repository of my project.

install svk (http://svk.bestpractical.com/view/SVKWin32)
install svn (http://sourceforge.net/projects/win32svn/files/1.6.16/Setup-Subversion-1.6.16.msi/download)
svk mirror //local <remote repository URL>
svk sync //local

This takes time and says that it is fetching the logs from the repository. It creates a set of files inside C:\Documents and Settings\nverma\.svk\local. To update this local repository with the latest set of changes from the remote one, just run the previous command from time to time. Now you can play with your local repository (/home/user/.svk/local in this example) as if it were a normal SVN repository! The only problem with this approach is that the local repository is created with revision numbers incremented by one relative to the actual revisions in the remote repository. As someone wrote: The svk mirror command generates a commit in the just-created repository. So all the commits created by the subsequent sync will have revision numbers incremented by one as compared to the remote public repository. But this was OK for me, as I only wanted a backup of the remote repository from time to time, nothing else. Verification: To verify, use the SVN client with the local repository like this:

svn checkout "file:///C:/Documents and Settings\nverma/.svk/local/" <local-dir-path-to-checkout-onto>

This command then checks out the latest revision from the local repository. At the end it says Checked out revision N. This N was one more than the actual revision found in the remote repository (due to the problem mentioned above). To verify that svk also brought all the history, the SVN checkout was run with various older revisions using -r with 2, 10, 50 etc. Then the files in <local-dir-path-to-checkout-onto> were confirmed to be from that revision. At the end, zip the directory C:/Documents and Settings\nverma/.svk/local/ and store the zip somewhere. Keep doing this regularly.

A: As others have said, hot-backup.py from the Subversion team has some nice features over just plain svnadmin hotcopy. I run a scheduled task on a python script that spiders for all my repositories on the machine, and uses hot-backup.py to keep several days' worth of hotcopies (paranoid of corruption) and an svnadmin dump on a remote machine. Restoration is really easy from that - so far.
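A: To turn the hotcopy answers above into something Windows Task Scheduler can run, here is a minimal batch sketch; the repository path, staging folder, and 7-Zip location are all assumptions to adjust for your setup:

@echo off
rem Hot-copy the repository (safe while the server is running), then archive it.
set REPO=C:\svn\repo
set STAGE=D:\backups\svn\repo-hotcopy
svnadmin hotcopy "%REPO%" "%STAGE%"
"C:\Program Files\7-Zip\7z.exe" a "D:\backups\svn\repo-backup.7z" "%STAGE%\*"
rem Remove the staging copy so the next run starts clean (hotcopy wants a fresh target).
rmdir /s /q "%STAGE%"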
A: 1.1 Create a dump from the SVN (Subversion) repository:

svnadmin dump /path/to/reponame > /path/to/reponame.dump

Real example:

svnadmin dump /var/www/svn/testrepo > /backups/testrepo.dump

1.2 Gzip the created dump:

gzip -9 /path/to/reponame.dump

Real example:

gzip -9 /backups/testrepo.dump

1.3 SVN dump and gzip in a one-liner:

svnadmin dump /path/to/reponame | gzip -9 > /path/to/reponame.dump.gz

Real example:

svnadmin dump /var/www/svn/testrepo | gzip -9 > /backups/testrepo.dump.gz

From: How to Backup (dump) and Restore (load) SVN (Subversion) repository on Linux.
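A: For completeness, restoring one of these gzipped dumps is just the reverse pipe (paths follow the examples above):

# create an empty repository, then stream the dump into it
svnadmin create /var/www/svn/testrepo
gunzip -c /backups/testrepo.dump.gz | svnadmin load /var/www/svn/testrepo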
{ "language": "en", "url": "https://stackoverflow.com/questions/33055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "196" }
Q: Best way to export a QTMovie with a fade-in and fade-out in the audio I want to take a QTMovie that I have and export it with the audio fading in and fading out for a predetermined amount of time. I want to do this within Cocoa as much as possible. The movie will likely only have audio in it. My research has turned up a couple of possibilities:

* Use the newer Audio Context Insert APIs. http://developer.apple.com/DOCUMENTATION/QuickTime/Conceptual/QT7-2_Update_Guide/NewFeaturesChangesEnhancements/chapter_2_section_11.html. This appears to be the most modern way to accomplish this.
* Use the QuickTime audio extraction APIs to pull out the audio track of the movie, process it, and then put the processed audio back into the movie, replacing the original audio.

Am I missing some much easier method?

A: QuickTime has the notion of Tween Tracks. A tween track is a track that allows you to modify the properties of another set of tracks (such as the volume). See Creating a Tween Track in the QuickTime docs for an example of how to do this with a QuickTime audio track's volume. There is also a more complete example called qtsndtween on the Apple Developer website. Of course, all of this code requires using the QuickTime C APIs. If you can live with building a 32-bit-only application, you can get the underlying QuickTime C handles from a QTMovie, QTTrack, or QTMedia object using the "movie", "track", or "media" functions respectively. Hopefully we'll get all the features of the QuickTime C APIs in the next version of QTKit, whenever that may be.
{ "language": "en", "url": "https://stackoverflow.com/questions/33061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Looking for Regex to find quoted newlines in a big string (for C#) I have a big string (let's call it a CSV file, though it isn't actually one, it'll just be easier for now) that I have to parse in C# code. The first step of the parsing process splits the file into individual lines by just using a StreamReader object and calling ReadLine until it's through the file. However, any given line might contain a quoted (in single quotes) literal with embedded newlines. I need to find those newlines and convert them temporarily into some other kind of token or escape sequence until I've split the file into an array of lines... then I can change them back. Example input data:

1,2,10,99,'Some text without a newline', true, false, 90
2,1,11,98,'This text has an embedded newline
and continues here', true, true, 90

I could write all of the C# code needed to do this by using string.IndexOf to find the quoted sections and look within them for newlines, but I'm thinking a Regex might be a better choice (i.e. now I have two problems)

A: Since this isn't a true CSV file, does it have any sort of schema? From your example, it looks like you have: int, int, int, int, string, bool, bool, int, with that making up your record / object. Assuming that your data is well formed (I don't know enough about your source to know how valid this assumption is), you could:

* Read your line.
* Use a state machine to parse your data.
* If your line ends and you're still parsing a string, read the next line... and keep parsing.

I'd avoid using a regex if possible.

A: State machines for doing such a job are made easy using C# 2.0 iterators. Here's hopefully the last CSV parser I'll ever write. The whole file is treated as an enumerable bunch of enumerable strings, i.e. rows/columns. IEnumerable is great because it can then be processed by LINQ operators.

public class CsvParser
{
    public char FieldDelimiter { get; set; }

    public CsvParser() : this(',') { }

    public CsvParser(char fieldDelimiter)
    {
        FieldDelimiter = fieldDelimiter;
    }

    public IEnumerable<IEnumerable<string>> Parse(string text)
    {
        return Parse(new StringReader(text));
    }

    public IEnumerable<IEnumerable<string>> Parse(TextReader reader)
    {
        while (reader.Peek() != -1)
            yield return parseLine(reader);
    }

    IEnumerable<string> parseLine(TextReader reader)
    {
        bool insideQuotes = false;
        StringBuilder item = new StringBuilder();
        while (reader.Peek() != -1)
        {
            char ch = (char)reader.Read();
            char? nextCh = reader.Peek() > -1 ? (char)reader.Peek() : (char?)null;
            if (!insideQuotes && ch == FieldDelimiter)
            {
                yield return item.ToString();
                item.Length = 0;
            }
            else if (!insideQuotes && ch == '\r' && nextCh == '\n') //CRLF
            {
                reader.Read(); // skip LF
                break;
            }
            else if (!insideQuotes && ch == '\n') //LF for *nix-style line endings
                break;
            else if (ch == '"' && nextCh == '"') // escaped quotes ""
            {
                item.Append('"');
                reader.Read(); // skip next "
            }
            else if (ch == '"')
                insideQuotes = !insideQuotes;
            else
                item.Append(ch);
        }
        // last one
        yield return item.ToString();
    }
}

Note that the file is read character by character, with the code deciding when newlines are to be treated as row delimiters or as part of a quoted string.

A: What if you got the whole file into a variable then split that based on non-quoted newlines?

A: EDIT: Sorry, I've misinterpreted your post. If you're looking for a regex, then here is one:

content = Regex.Replace(content, "'([^']*)\n([^']*)'", "'$1TOKEN$2'");

There might be edge cases (and the proverbial two problems), but I think it should be OK most of the time.
What the regex does is find a pair of single quotes that has a \n between them and replace that \n with TOKEN, preserving any text in between. But still, I'd go with a state machine, like what @bryansh explained below.
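A: One caveat with the capture-group regex above: it only handles exactly one newline per quoted literal. If your data can embed several, a MatchEvaluator variant is a reasonable sketch (content is the string from the question; TOKEN is whatever placeholder you pick):

using System.Text.RegularExpressions;

// Rewrite each quoted section as a whole, replacing every newline inside it.
string result = Regex.Replace(content, @"'[^']*'",
    m => m.Value.Replace("\n", "TOKEN"));

Because the evaluator runs once per quoted section, any number of embedded newlines get tokenized, and text outside the quotes is left alone.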
{ "language": "en", "url": "https://stackoverflow.com/questions/33063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Ignore Emacs auto-generated files in a diff How do I make diff ignore temporary files like foo.c~? Is there a configuration file that will make ignoring temporaries the default? More generally: what's the best way to generate a "clean" patch off a tarball? I do this rarely enough (submitting a bug fix to an OSS project by email) that I always struggle with it... EDIT: OK, the short answer is diff -ruN -x *~ ... Is there a better answer? E.g., can this go in a configuration file?

A: This doesn't strictly answer your question, but you can avoid the problem by configuring Emacs to use a specific directory to keep the backup files in. There are different implementations for Emacs and XEmacs.

In GNU Emacs:

(defvar user-temporary-file-directory
  (concat temporary-file-directory user-login-name "/"))
(make-directory user-temporary-file-directory t)
(setq backup-by-copying t)
(setq backup-directory-alist
      `(("." . ,user-temporary-file-directory)
        (,tramp-file-name-regexp nil)))
(setq auto-save-list-file-prefix
      (concat user-temporary-file-directory ".auto-saves-"))
(setq auto-save-file-name-transforms
      `((".*" ,user-temporary-file-directory t)))

In XEmacs:

(require 'auto-save)
(require 'backup-dir)
(defvar user-temporary-file-directory
  (concat (temp-directory) "/" (user-login-name)))
(make-directory user-temporary-file-directory t)
(setq backup-by-copying t)
(setq auto-save-directory user-temporary-file-directory)
(setq auto-save-list-file-prefix
      (concat user-temporary-file-directory ".auto-saves-"))
(setq bkup-backup-directory-info
      `((t ,user-temporary-file-directory full-path)))

You can also remove them all with a simple find command:

find . -name "*~" -delete

Note that the asterisk and tilde are in double quotes to stop the shell expanding them. By the way, these aren't strictly temporary files. They are a backup of the previous version of the file, so you can manually "undo" your last edit at any time in the future.

A: You can create an ignore file, like this:

core.*
*~
*.o
*.a
*.so
<more file patterns you want to skip>

and then run diff with the -X option, like this:

diff -X ignore-file <other diff options you use/need> path1 path2

There used to be a .diffignore file "close" to the Linux kernel (maybe an informal file), but I couldn't find it anymore. Usually you keep using this ignore-file, just adding new patterns you want to ignore.

A: You can create a small function/script for it, like:

#!/bin/bash
olddir="/tmp/old"
newdir="/tmp/new"
pushd $newdir
for file in $(find . -name \*.c)
do
    diff "$olddir/$file" "$newdir/$file"
done
popd

This is only one way to script this. The simple way. But I think you got the idea. Another suggestion is configuring a backup dir in Emacs, so your backup files always go to the same place, outside your work dir!

A: The poster has listed this as the 'short answer': diff -ruN -x *~ ... but feeding this to the shell will cause the * to be globbed before diff is invoked. This is better:

diff -r -x '*~' dir1 dir2

I omit the -u and -N flags as those are matters of taste and not relevant to the question at hand.
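A: Since diff itself has no configuration file, the closest thing to making the exclusions "the default" is a tiny wrapper script on your PATH. The script name and the pattern list below are just examples; add whatever droppings your tools produce:

#!/bin/sh
# cleandiff -- diff with the usual Emacs/build droppings excluded
exec diff -x '*~' -x '#*#' -x '*.o' -x '*.a' "$@"

Invoked as, e.g., cleandiff -ruN old/ new/, it behaves exactly like diff with the -x options pre-applied.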
{ "language": "en", "url": "https://stackoverflow.com/questions/33073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Pattern recognition algorithms In the past I had to develop a program which acted as a rule evaluator. You had an antecedent and some consequents (actions), so if the antecedent evaluated to true the actions were performed. At that time I used a modified version of the RETE algorithm (there are three versions of RETE, only the first being public) for the antecedent pattern matching. We're talking about a big system here, with millions of operations per rule and some operators "repeated" in several rules. It's possible I'll have to implement it all over again in another language and, even though I'm experienced in RETE, does anyone know of other pattern matching algorithms? Any suggestions, or should I keep using RETE?

A: The TREAT algorithm is similar to RETE, but doesn't record partial matches. As a result, it may use less memory than RETE in certain situations. Also, if you modify a significant number of the known facts, then TREAT can be much faster because you don't have to spend time on retractions. There's also RETE*, which balances between RETE and TREAT by saving some join node state depending on how much memory you want to use. So you still save some assertion time, but also get memory and retraction time savings depending on how you tune your system. You may also want to check out LEAPS, which uses a lazy evaluation scheme and incorporates elements of both RETE and TREAT. I only have personal experience with RETE, but it seems like RETE* or LEAPS are the better, more flexible choices.
{ "language": "en", "url": "https://stackoverflow.com/questions/33076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Fighting programmer colors I have a couple of pet projects where I'm the sole designer/programmer and I spend too much time changing the user interface to make it easier to use by real users and avoiding bright yellow and green that is so common on "programmer" designs. Do you have tips to choose a color scheme when you do not have a graphics designer around? How do you avoid creating the typical "programmer" interface? A: Keep in mind that nearly 10% of the male population of the world have some significant form of color blindness. You should always consider this when choosing interface colors (especially if you need capital - guess what? 1 in 10 male investors might not see your red dots on the green background chart showing risk vs return!). MSDN has a reasonable overview of this, and there are several website filters that show you what your site (or any site) looks like given any form of colorblindness. Aside from that, I really like COLOURLovers - not only do they have a great selection of user tagged and defined color schemes, they give them to you in a variety of ways, and you can sign in and track your favorites (or your favorite color scheme producers). Go check out the fall themes! Can't go wrong with Michigan colors when the leaves change... -Adam A: Colour guides like Kuler are a great start if you have no idea about choosing colours. Some basic considerations: Use contrast not colour to differentiate in your design. This is to accommodate colour-blindness and poorly sighted users. Use as limited a colour palette as you can. Pick one colour as your 'theme' and choose shades of that colour, and then maybe one or two contrasting colours to go with it. Get advice - doesn't have to be from a designer, you might still know someone who has a good eye for these things. Also, more broad feedback - ask a few people for their opinion, that can help. A: kuler has a lot of user submitted colour schemes edit: just remembered... also try colorlovers A: Let me tell you a story. I have absolutely no confidence in my ability to make aesthetic choices - you only have to look at the way I dress to realize I'm justified in my lack of confidence. Anyway, years ago I was put in charge of writing the gui for a new product (the "Clip Editor" in Cineon, for anybody who knows that). I sketched out a design, but asked my boss, the head of sales and marketing, and various "application specialists" for help choosing the colours. Nobody responded, so I said "to hell with it", and chose a colour scheme so ugly I'm sure the beta testers would recoil in horror and demand a change. But they didn't - so it shipped with it. And I heard that customers loved the "bold colours". And not only that, but a few years later a competitor added a program that looked like a direct rip-off of my Clip Editor to their product, and they copied my colour scheme! A: For desktop apps, get the colors from the OS. I, personally, want all of my apps to look and feel the same as my OS. For web apps, I'm not really sure. A: Lately I have been using the following website: http://www.nickherman.com/colormatch/ to help me (also a non-designer) pick matching color schemes. I usually find a color that is fairly pleasing, then use the matching colors from this website. If all else fails, I also ask my wife! A: I like using ColourSchemer and EasyRGB. A: Some updated resources I use: Colorotate.org is a nicely designed site with user-contributed palettes of various sizes (unlike kuler that provides only 5 colours per palette). 
It allows you to see how the combinations look to various types of color-impaired people. colorschemedesigner.com generates consistent palettes for you using various algorithms (complement, triad, tetrad...)

A: Adobe's Kuler website has a lot of user-created color schemes uploaded by designers. I normally search for higher-ranked schemes first.

A: There are a lot of "color theme pickers" on the web. If you use these, your colors will at least look like they belong together. The first one I looked at on Google: http://www.yafla.com/dforbes/yaflaColor/ColorRGBHSL.aspx?

A: Aim for pastel colours that are slightly dimmer than their full-blown counterparts, i.e. a pastel red is dimmer than a (255,0,0) red, for example. Try to select colours from the same palette. One cheaty way of determining colour schemes I use is to take a screenshot of an Office 2007 app, usually Excel, and pick out some colours from their co-ordinated palettes using the colour dropper tool in an app like Paint.NET. In fact this cheaty approach can be extended to 'borrowing' colour schemes from applications that are already out there that have colour schemes you admire :-)

A: If you pick a "theme" color for your app, you can use Kuler to help flesh out the palette. Related post:

* Web 2.0 Color Combinations
* I know I've seen more, but can't find them :)

A: I tend to use a lot of grays, along with black and white, keep things simple and avoid any kind of annoying, bright colors. Seems to me like that's what the SO guys did.

A: Vitaly Friedman's Essential Bookmarks for web designers & web developers lists a lot of online color tools; there is also a condensed version. There is also a list of color tools on twiki.org that has some additional sites.

A: I recommend you start by reading up on computational color theory. Wikipedia is actually a fine place to educate yourself on the subject. Try looking up a color by name on Wikipedia. You will find more than you expect. Branch out from there until you've got a synopsis of the field. As you develop your analytic eye, focus on understanding your favorite (or most despised) interface colors and palettes in terms of their various representations in different colorspaces: RGB, HSB, HSL. Keep Photoshop/GIMP open so you can match the subjective experience of palettes to their quantifiable aspects. See how your colors render on lousy monitors. Notice what always remains readable, and which color combos tend to yield uncharismatic or illegible results. Pay attention to the emotional information conveyed by specific palettes. You will quickly see patterns emerge. For example, you will realize that high saturation colors are best avoided in UI components except for special purposes. Eventually, you'll be able to analyze the output of the palette generators recommended here, and you'll develop your own theories about what makes a good match, and how much contrast is needed to play well on most displays. (To avoid potential frustration, you might want to skip over to Pantone's free color perception test. It's best taken on a color calibrated display. If it says you have poor color perception, then numeric analysis is extra important for you.)

A: Why do programmers think they can't have design skills? It's a pet peeve of mine. It's a learnable skill, just like anything else. http://www.hackification.com/2010/05/16/designing-a-vs2010-color-scheme-consistency-consistency-consistency/
{ "language": "en", "url": "https://stackoverflow.com/questions/33079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Setting the height of a DIV dynamically In a web application, I have a page that contains a DIV that has an auto-width depending on the width of the browser window. I need an auto-height for the object. The DIV starts about 300px from the top of the screen, and its height should make it stretch to the bottom of the browser screen. I have a max height for the container DIV, so there would have to be a minimum height for the div. I believe I can just restrict that in CSS, and use JavaScript to handle the resizing of the DIV. My JavaScript isn't nearly as good as it should be. Is there an easy script I could write that would do this for me? Edit: The DIV houses a control that does its own overflow handling (implements its own scroll bar).

A:

document.getElementById('myDiv').style.height = "500px";

This is the very basic JS code required to adjust the height of your object dynamically. I just did this very thing where I had an auto-height property, but when I added some content via XMLHttpRequest I needed to resize my parent div, and the offsetHeight property did the trick in IE6/7 and FF3.

A: Try this simple, specific function:

function resizeElementHeight(element) {
    var height = 0;
    var body = window.document.body;
    if (window.innerHeight) {
        height = window.innerHeight;
    } else if (body.parentElement.clientHeight) {
        height = body.parentElement.clientHeight;
    } else if (body && body.clientHeight) {
        height = body.clientHeight;
    }
    element.style.height = ((height - element.offsetTop) + "px");
}

It does not depend on the current distance from the top of the body being specified (in case your 300px changes). EDIT: By the way, you would want to call this on that div every time the user changed the browser's size, so you would need to wire up the event handler for that, of course.

A: What should happen in the case of overflow? If you want it to just get to the bottom of the window, use absolute positioning:

div {
    position: absolute;
    top: 300px;
    bottom: 0px;
    left: 30px;
    right: 30px;
}

This will put the DIV 30px in from each side, 300px from the top of the screen, and flush with the bottom. Add an overflow:auto; to handle cases where the content is larger than the div. Edit: @Whoever marked this down, an explanation would be nice... Is something wrong with the answer?

A: If I understand what you're asking, this should do the trick:

// the more standards compliant browsers (mozilla/netscape/opera/IE7) use
// window.innerWidth and window.innerHeight
var windowHeight;
if (typeof window.innerWidth != 'undefined') {
    windowHeight = window.innerHeight;
}
// IE6 in standards compliant mode (i.e. with a valid doctype as the first
// line in the document)
else if (typeof document.documentElement != 'undefined'
      && typeof document.documentElement.clientWidth != 'undefined'
      && document.documentElement.clientWidth != 0) {
    windowHeight = document.documentElement.clientHeight;
}
// older versions of IE
else {
    windowHeight = document.getElementsByTagName('body')[0].clientHeight;
}
document.getElementById("yourDiv").style.height = (windowHeight - 300) + "px";

A: With minor corrections:
function rearrange() {
    var windowHeight;
    if (typeof window.innerWidth != 'undefined') {
        windowHeight = window.innerHeight;
    }
    // IE6 in standards compliant mode (i.e. with a valid doctype as the first
    // line in the document)
    else if (typeof document.documentElement != 'undefined'
          && typeof document.documentElement.clientWidth != 'undefined'
          && document.documentElement.clientWidth != 0) {
        windowHeight = document.documentElement.clientHeight;
    }
    // older versions of IE
    else {
        windowHeight = document.getElementsByTagName('body')[0].clientHeight;
    }
    document.getElementById("foobar").style.height =
        (windowHeight - document.getElementById("foobar").offsetTop - 6) + "px";
}

A: Simplest I could come up with...

function resizeResizeableHeight() {
    $('.resizableHeight').each( function() {
        $(this).outerHeight( $(this).parent().height() - ( $(this).offset().top - ( $(this).parent().offset().top + parseInt( $(this).parent().css('padding-top') ) ) ) )
    });
}

Now all you have to do is add the resizableHeight class to everything you want to autosize (to its parent).

A: Inspired by @jason-bunting, the same thing for either height or width:

function resizeElementDimension(element, doHeight) {
    var dim = (doHeight ? 'Height' : 'Width');
    var ref = (doHeight ? 'Top' : 'Left');
    var x = 0;
    var body = window.document.body;
    if (window['inner' + dim])
        x = window['inner' + dim];
    else if (body.parentElement['client' + dim])
        x = body.parentElement['client' + dim];
    else if (body && body['client' + dim])
        x = body['client' + dim];
    element.style[dim.toLowerCase()] = ((x - element['offset' + ref]) + "px");
}
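A: To wire up the resize handling that the resizeElementHeight answer mentions, something along these lines should work (the element id here is just an example, and this plain-JS sketch is not tested against old IE):

window.onload = function () {
    var div = document.getElementById('myDiv');
    resizeElementHeight(div);            // size it once on load
    window.onresize = function () {      // ...and again whenever the window changes
        resizeElementHeight(div);
    };
};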
{ "language": "en", "url": "https://stackoverflow.com/questions/33080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: ensuring uploaded files are safe My boss has come to me and asked how to ensure that a file uploaded through a web page is safe. He wants people to be able to upload pdfs and tiff images (and the like) and his real concern is someone embedding a virus in a pdf that is then viewed/altered (and the virus executed). I just read something on a procedure that could be used to destroy steganographic information embedded in images by altering the least significant bits. Could a similar process be used to ensure that a virus isn't implanted? Does anyone know of any programs that can scrub files? Update: So the team argued about this a little bit, and one developer found a post about letting the file be saved to the file system and having the antivirus software that protects the network check the files there. The poster essentially said that it was too difficult to use the API or the command line for a couple of products. This seems a little kludgy to me, because we are planning on storing the files in the db, but I haven't had to scan files for viruses before. Does anyone have any thoughts or experience with this? http://www.softwarebyrob.com/2008/05/15/virus-scanning-from-code/

A: I'd recommend running your uploaded files through antivirus software such as ClamAV. I don't know about scrubbing files to remove viruses, but this will at least allow you to detect and delete infected files before you view them.

A: Viruses embedded in image files are unlikely to be a major problem for your application. What will be a problem is JAR files. Image files with JAR trailers can be loaded from any page on the Internet as a Java applet, with same-origin bindings (cookies) pointing into your application and your server. The best way to handle image uploads is to crop, scale, and transform them into a different image format. Images should have different sizes, hashes, and checksums before and after transformation. For instance, Gravatar, which provides the "buddy icons" for Stack Overflow, forces you to crop your image, and then translates it to a PNG. Is it possible to construct a malicious PDF or DOC file that will exploit vulnerabilities in Word or Acrobat? Probably. But ClamAV is not going to do a very good job at stopping those attacks; those aren't "viruses", but rather vulnerabilities in viewer software.

A: It depends on your company's budget, but there are hardware devices and software applications that can sit between your web server and the outside world to perform these functions. Some of these are hardware firewalls with anti-virus software built in. Sometimes they are called application gateways or application proxies. Here are links to an open source gateway that uses ClamAV: http://en.wikipedia.org/wiki/Gateway_Anti-Virus http://gatewayav.sourceforge.net/faq.html

A: You'd probably need to chain an actual virus scanner to the upload process (the same way many virus scanners ensure that a file you download in your browser is safe). In order to do this yourself, you'd have to keep it up to date, which means keeping libraries of virus definitions around, which is likely beyond the scope of your application (and may not even be feasible depending on the size of your organization).

A: Yes, ClamAV should scan the file regardless of the extension.

A: Use a reverse proxy setup such as

www <-> HAVP <-> webserver

HAVP (http://www.server-side.de/) is a way to scan http traffic through ClamAV or any other commercial antivirus software. It will prevent users from downloading infected files.
If you need HTTPS or anything else, you can put another reverse proxy or a web server in reverse-proxy mode in front of it to handle the SSL before HAVP. Nevertheless, it does not act at upload time, so it will not prevent the files from being stored on your servers, but it will prevent them from being downloaded and thus propagated. So use it together with regular file scanning (e.g. clamscan).
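A: Following up the "scan on the file system" idea from the question's update: if ClamAV is installed on the server, shelling out to clamscan sidesteps the API headaches that post complained about. A C# sketch, relying on clamscan's documented exit codes (0 = clean, 1 = infected, 2 = error):

using System.Diagnostics;

static bool IsUploadClean(string path)
{
    var psi = new ProcessStartInfo("clamscan", "--no-summary \"" + path + "\"")
    {
        UseShellExecute = false,
        CreateNoWindow = true
    };
    using (var scan = Process.Start(psi))
    {
        scan.WaitForExit();
        return scan.ExitCode == 0; // only accept files clamscan calls clean
    }
}

You would scan the temporary upload before ever inserting the bytes into the database; anything nonzero gets rejected (or quarantined for a human to look at).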
{ "language": "en", "url": "https://stackoverflow.com/questions/33086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do I use ASP.NET Login Controls when my Login.aspx is not at the root of my application? I'm using the ASP.NET Login Controls and Forms Authentication for membership/credentials for an ASP.NET web application. It keeps redirecting to a Login.aspx page at the root of my application that does not exist. My login page is within a folder.

A: Use the LoginUrl property for the forms item?

<authentication mode="Forms">
    <forms defaultUrl="~/Default.aspx" loginUrl="~/login.aspx" timeout="1440"></forms>
</authentication>

A: I found the answer at CoderSource.net. I had to put the correct path into my web.config file.

<?xml version="1.0"?>
<configuration>
  <system.web>
    ...
    <!--
      The <authentication> section enables configuration of the security
      authentication mode used by ASP.NET to identify an incoming user.
    -->
    <authentication mode="Forms">
      <forms loginUrl="~/FolderName/Login.aspx" />
    </authentication>
    ...
  </system.web>
  ...
</configuration>
{ "language": "en", "url": "https://stackoverflow.com/questions/33089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How Do Sites Suppress Pasting Text? I've noticed that some sites (usually banks) suppress the ability to paste text into text fields. How is this done? I know that JavaScript can be used to swallow the keyboard shortcut for paste, but what about the right-click menu item?

A: Even if it is somewhat possible to intercept the paste event in many browsers (but not all, as shown at the link in the previous answer), that is quite unreliable and possibly incomplete (depending on the browser / OS, it may be possible to perform the paste operation in ways that are not trappable by JavaScript code). Here is a collection of notes regarding paste (and copy) in the context of rich text editors that may also apply elsewhere.

A: Probably using the onpaste event, and either returning false from it or using e.preventDefault() on the Event object. Note that onpaste is non-standard; don't rely on it for production sites, because it will not be there forever.

$(document).on("paste", function(e){
    console.log("paste");
    e.preventDefault();
    return false;
});
{ "language": "en", "url": "https://stackoverflow.com/questions/33103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Communicating between websites (using Javascript or ?) Here's my problem - I'd like to communicate between two websites and I'm looking for a clean solution. The current solution uses Javascript but there are nasty workarounds because of (understandable) cross-site scripting restrictions. At the moment, website A opens a modal window containing website B using a jQuery plug-in called jqModal. Website B does some work and returns some results to website A. To return that information we have to work around cross-site scripting restrictions - website B creates an iframe that refers to a page on website A and includes "fragment identifiers" containing the information to be returned. The iframe is polled by website A to detect the returned information. It's a common technique but it's hacky. There are variations such as CrossSite and I could perhaps use an HTTP POST from website B to website A, but I'm trying to avoid page refreshes. Does anyone have any alternatives? EDIT: I'd like to avoid having to save state on website B.

A: My best suggestion would be to create a webservice on each site that the other could call with the information that needs to get passed. If security is necessary, it's easy to add an SSL-like authentication scheme (or actual SSL even, if you like) to this system to ensure that only the two servers are able to talk to their respective web services. This would let you avoid the hackiness that's inherent in any scheme that involves one site opening windows on the other.

A: With jQuery newer than 1.2 you can use JSONP

A: @jmein - you've described how to create a modal popup (which is exactly what jqModal does), however you've missed that the content of the modal window is served from another domain. The two domains involved belong to two separate companies, so they can't be combined in the way you describe.

A: I believe @pat was referring to this: "As of jQuery 1.2, you can load JSON data located on another domain if you specify a JSONP callback," http://docs.jquery.com/Ajax/jQuery.getJSON#urldatacallback
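A: For anyone reading this later: if you control both sites, HTML5 cross-document messaging (window.postMessage, available in newer browsers) replaces the iframe/fragment polling hack entirely and needs no state on website B. A sketch, with placeholder origins:

// on website A, after opening the window for website B:
var popup = window.open("https://site-b.example/work.html");
window.addEventListener("message", function (e) {
    if (e.origin !== "https://site-b.example") return; // only trust site B
    console.log("result from B:", e.data);
}, false);

// on website B, once the work is done:
window.opener.postMessage("the result", "https://site-a.example");

The second argument to postMessage pins the target origin, so the result can only be delivered back to website A.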
{ "language": "en", "url": "https://stackoverflow.com/questions/33104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is there any way to override the drag/drop or copy/paste behavior of an existing app in Windows? I would like to extend some existing applications' drag and drop behavior, and I'm wondering if there is any way to hack on drag and drop support or changes to drag and drop behavior by monitoring the app's message loop and injecting my own messages. It would also work to monitor for when a paste operation is executed, basically to create a custom behavior when a control only supports pasting text and an image is pasted. I'm thinking Detours might be my best bet, but one problem is that I would have to write custom code for each app I wanted to extend. If only Windows were designed with extensibility in mind! On another note, is there any OS that supports extensibility of this nature?

A: Well, think of this from the point of view of the app designer. If you wrote an application, would you want users to be able to inject things into your application (more importantly, would you want to incur the support/revenue headache of clueless users doing this and then blaming you)? Each application's drag and drop infrastructure is written specifically for the application, not to allow you to drop anything you want onto it (potentially causing crashes and all sorts of other nasty behaviour when you drag something onto an app that simply can't handle it). Stuff like this is hard to do for a reason. It is possible to do, but it's a lot of work: you need to acquire the window handle of the thing you want to drop something onto, and then replace that window's message handler with your own. That's fraught with danger, of course, since you either have to replicate all of the existing functionality of that window yourself, or risk the app not working correctly.

A: If you're willing to do in-memory diddling while the application is loaded, you could probably finagle that. But if you're looking for an easy way to just inject code you want into another window's message pump, you're not going to find it. The skills required to accomplish something like this are formidable (unless someone has wrapped all of this up in an application/library that I'm unaware of, but I doubt it). It's like clipboard hooking, writ large: it's frowned upon, there are tons of gotchas, and you're extremely likely to introduce significant instability into your system if you don't really know what you're doing.

A: Hm, that's really too bad. I suppose there are sometimes reasons why apps don't exist yet. Basically what I'm trying to do is simplify the process of sending image links to people using various apps (mainly web browser text forms, but also anytime I'm editing in a terminal window) by hooking the process of pasting an image in a text context, uploading the image in the background, and pasting a URL to where the image was uploaded, all with a single action. Edit: I suppose the easier solution to this is to just create a new keyboard combo that is hooked by my app before it gets to any other app. There's no reason in particular that I need to tie it to copy/paste functionality.
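A: To make the "replace that window's message handler" step concrete: the classic in-process Win32 mechanism is subclassing with SetWindowLongPtr. The sketch below only works on windows in your own process, which is exactly why the other answers bring up DLL injection and Detours for targeting someone else's app:

#include <windows.h>

static WNDPROC g_origProc;

static LRESULT CALLBACK HookProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_PASTE) {
        // Inspect the clipboard here; return 0 instead of falling through
        // if you want to swallow the paste entirely.
    }
    return CallWindowProc(g_origProc, hwnd, msg, wp, lp);
}

static void Subclass(HWND target)
{
    // Swap in our procedure, remembering the original so we can chain to it.
    g_origProc = (WNDPROC)SetWindowLongPtr(target, GWLP_WNDPROC, (LONG_PTR)HookProc);
}

How you obtain the target HWND (FindWindow by class/title, enumeration, etc.) depends entirely on the app you are extending, which is the per-app custom code the question worries about.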
{ "language": "en", "url": "https://stackoverflow.com/questions/33113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Does C# have the notion of private and protected inheritance? Does C# have the notion of private / protected inheritance, and if not, why?

C++:

class Foo : private Bar {
    public:
    ...
};

C#:

public abstract NServlet class : private System.Web.UI.Page {
    // error "type expected"
}

I am implementing a "servlet like" concept in an .aspx page and I don't want the concrete class to have the ability to see the internals of the System.Web.UI.Page base.

A: No it doesn't. What would the benefit be of allowing this type of restriction? Private and protected inheritance is good for encapsulation (information hiding). Protected* inheritance is supported in C++, although it isn't in Java. Here's an example from my project where it would be useful. There is a base class in a 3rd party framework**. It has dozens of settings plus properties and methods for manipulating them. The base class doesn't do a lot of checking when individual settings are assigned, but it will generate an exception later if it encounters an unacceptable combination. I'm making a child class with methods for assigning these settings (e.g., assigning carefully crafted settings from a file). It would be nice to deny the rest of the code (outside my child class) the ability to manipulate individual settings and mess them up. That said, I think in C++ (which, again, supports private and protected inheritance) it's possible to cast the child class up to the parent and get access to the parent's public members. (See also Chris Karcher's post) Still, protected inheritance improves information hiding. If members of a class B1 need to be truly hidden within other classes C1 and C2, it can be arranged by making a protected variable of class B1 within C1 and C2. A protected instance of B1 will be available to children of C1 and C2. Of course, this approach by itself doesn't provide polymorphism between C1 and C2. But polymorphism can be added (if desired) by inheriting C1 and C2 from a common interface I1.

* For brevity I will use "protected" instead of "private and protected".
** National Instruments Measurement Studio in my case.

- Nick

A: You can hide inherited APIs from being publicly visible by declaring that same member in your class as private, and using the new keyword. See Hiding through Inheritance from MSDN.

A: If you want the NServlet class to not know anything about the Page, you should look into using the Adapter pattern. Write a page that will host an instance of the NServlet class. Depending on what exactly you're doing, you could then write a wide array of classes that only know about the base class NServlet without having to pollute your API with asp.net page members.

A: C# allows public inheritance only. C++ allowed all three kinds. Public inheritance implied an "IS-A" type of relationship, and private inheritance implied an "Is-Implemented-In-Terms-Of" kind of relationship. Since layering (or composition) accomplished this in an arguably simpler fashion, private inheritance was only used when protected members or virtual functions absolutely required it - according to Scott Meyers in Effective C++, Item 42. My guess would be that the authors of C# did not feel this additional method of implementing one class in terms of another was necessary.

A: @bdukes: Keep in mind that you aren't truly hiding the member.
E.g.:

class Base {
    public void F() {}
}

class Derived : Base {
    new private void F() {}
}

Base o = new Derived();
o.F(); // works

But this accomplishes the same as private inheritance in C++, which is what the questioner wanted.

A: No, public inheritance only.

A: You probably want a ServletContainer class that gets hooked up with an NServlet implementation. In my book, not allowing private / protected inheritance is not really a big deal and keeps the language less confusing - with LINQ etc. we already have enough stuff to remember.

A: I know this is an old question, but I've run into this issue several times while writing C#, and I want to know... why not just use an interface? When you create your subclass of the 3rd party framework's class, also have it implement a public interface. Then define that interface to include only the methods that you want the client to access. Then, when the client requests an instance of that class, give them an instance of that interface instead. That seems to be the C#-accepted way of doing these sorts of things. The first time I did this was when I realized that the C# standard library didn't have a read-only variant of a dictionary. I wanted to provide access to a dictionary, but didn't want to give the client the ability to change items in the dictionary. So I defined a "class DictionaryEx<K,V,IV> : Dictionary<K,V>, IReadOnlyDictionary<K,IV> where V : IV" where K is the key type, V is the real value type, and IV is an interface to the V type that prevents changes. The implementation of DictionaryEx was mostly straightforward; the only difficult part was creating a ReadOnlyEnumerator class, but even that didn't take very long. The only drawback I can see to this approach is if the client tries to dynamically cast your public interface to the related subclass. To stop this, make your class internal. If your client casts your public interface to the original base class, I think it'd be pretty clear to them that they're taking their life in their own hands. :-)

A: First solution: protected internal acts as public in the same assembly and as protected in other assemblies. You would need to change the access modifier of each member of the class that is not to be exposed through inheritance. It is a bit restrictive, though, that this solution forces the class to be inherited in order to be used by another assembly. Thus the choice of being usable only through inheritance is made by the unknowing parent... normally the children know more about the architecture... Not a perfect solution, but it might be a better alternative than adding an interface to hide methods while still leaving a way to reach the parent methods that should be hidden through the child class, because you might not easily be able to force the use of the interface. Problem: the protected and private access modifiers cannot be used for methods that are implementing interfaces. That means the protected internal solution cannot be used for interface-implemented methods. This is a big restriction. Final solution: I fell back to the interface solution to hide methods. The remaining problem was being able to force the use of the interface, so that members meant to be hidden are ALWAYS hidden, definitely avoiding mistakes. To force using only the interface, just make the constructors protected and add a static method for construction (I named it New). This static New method is in fact a factory function, and it returns the interface. So the rest of the code has to use the interface only!
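A: A sketch of the "Final solution" above (interface, protected constructor, static New factory). All the names here are illustrative, not from any real framework:

public interface INServlet
{
    void ProcessRequest(); // only what clients are allowed to see
}

public class NServlet : System.Web.UI.Page, INServlet
{
    protected NServlet() { } // no direct construction from outside

    // Factory: callers only ever receive the interface, never the concrete type.
    public static INServlet New()
    {
        return new NServlet();
    }

    public void ProcessRequest() { /* ... */ }
}

Since client code holds an INServlet, the Page internals never show up in its surface area; as noted above, a determined caller can still cast, so this hides rather than forbids.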
{ "language": "en", "url": "https://stackoverflow.com/questions/33115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: Building a custom Linux Live CD Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch? I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible. The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD - but those are all designed for showing off Linux on the desktop - which means they take forever to boot. Can anyone fix me up with a current How-To? Update: So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator". My reason for choosing this tool was that it is available for RedHat-based distributions like CentOS, Fedora and RHEL - which are all distributions that my company supports already. In addition, while the project is very poorly documented it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell. The whole job would have only taken an hour or two if there had been a README explaining the configuration file!

A: There are a couple of interesting projects you could look into. But first: does it have to be a CD-ROM? That's probably the slowest possible storage (well, apart from tape, maybe) you could use. What about a fast USB stick, an IEEE 1394 hard-disk, or maybe even an eSATA hard-disk? Okay, there are several Live-CDs that are designed to be very small, in order to e.g. fit on a business-card-sized CD. Some were also designed to be booted from a USB stick, back when that meant 64-128 MiByte: Damn Small Linux is one of the best known ones, however it uses a 2.4 kernel. There is a sister project called Damn Small Linux - Not, which has a 2.6 kernel (although it seems it hasn't been updated in years). Another project worth noting is grml, a Live-CD for system administration tasks. It does not boot into a graphic environment, and is therefore quite fast; however, it still contains about 2 GiByte of software compressed onto a CD-ROM. But it also has a smaller flavor, aptly named grml-small, which only contains about 200 MiByte of software compressed into 60 MiByte. Then there is Morphix, which is a Live-CD builder toolkit based on Knoppix. ("Morphable Knoppix"!) Morphix is basically a tool to build your own special purpose Live-CD. The last thing I want to mention is MachBoot. MachBoot is a super-fast Live-CD. It uses various techniques to massively speed up the boot process. I believe they even trace the order in which blocks are accessed during booting and then remaster the ISO so that those blocks are laid out contiguously on the medium. Their current record is less than 6 seconds to boot into a full graphical desktop environment. However, this also seems to be stale.

A: One key piece of advice I can give is that most LiveCDs use a compressed filesystem called squashfs to cram as much data on the CD as possible. Since you don't need compression, you could run the mksquashfs step (present in most tutorials) with -noDataCompression and -noFragmentCompression to save on decompression time.
You may even be able to drop the squashfs approach entirely, but this would require some restructuring. This may actually be slower depending on your CD-ROM read speed vs. CPU speed, but it's worth looking into. This Ubuntu tutorial was effective enough for me to build a LiveCD based on 8.04. It may be useful for getting the feel of how a LiveCD is composed, but I would probably not recommend using an Ubuntu LiveCD. If at all possible, find a minimal LiveCD and build up with only minimal stripping out, rather than stripping down a huge LiveCD like Ubuntu. There are some situations in which the smaller distros are using smaller/faster alternatives rather than just leaving something out. If you want to get seriously hardcore, you could look at Linux From Scratch, and include only what you want, but that's probably more time than you want to spend.

A: Creating Your Own Custom Ubuntu 7.10 Or Linux Mint 4.0 Live-CD With Remastersys

A: Depends on your distro. Here's a good article you can check out from LWN.net. There is a book I used which covers a lot of distros, though it does not cover creating a flash-bootable image. The book is Live Linux(R) CDs: Building and Customizing Bootables. You can use it with supplemental information from your distro of choice.

A: Debian Live provides the best tools for building a Linux Live CD. Webconverger uses Debian Live for example. It's very easy to use.

sudo apt-get install live-helper # from Debian unstable, which should work fine from Ubuntu
lh_config # edit config/* to your liking
sudo lh_build
{ "language": "en", "url": "https://stackoverflow.com/questions/33117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Windows Mobile 6 Development, alternatives to Visual Studio? I am looking to start writing apps for my Windows Mobile 6.1 professional device (Sprint Mogul/HTC Titan). I use the copy of Visual Studio 2003 that I bought in college for all of my current contracting work (all of my day job work is done on a company laptop). From what I can tell from MSDN, in order to develop using the Windows Mobile 5 or 6 SDK I need to have at least Visual Studio 2005 Standard, and I really don't feel like shelling out that much cash just to be able to develop for my phone. Are there any free tools available to develop for Windows Mobile? Or is there an affordable way to get VS that I am overlooking?

A: Even if you had Visual Studio 2005 you would be limited to the 2.0 Framework. You will need to use Visual Studio 2008 Professional or better to use the 3.5 Framework. But you also have an alternative. I wrote an article on Windows Mobile Development without Visual Studio. The minimum you need is the .Net Compact Framework SDK. It comes with a command line compiler that can generate .Net assemblies that will run on a Windows Mobile phone. Naturally you will not want to use the command line for your compiling. So the instructions I wrote are centered around the (free) SharpDevelop .NET editor. You can find the instructions on CodeProject.com. Here is the URL. http://www.codeproject.com/KB/mobile/WiMoSansVS.aspx

A: You can also use the free IDE SharpDevelop. They target the .NET Compact Framework too...

A: For Native (C++) Device Development you will need:

eMbedded Visual C++ 3.0 (CE 3.0)
eMbedded Visual C++ 4.0 (CE 4.x-5.0) or
Visual Studio 2005 Standard or higher (CE 4.x-6.0) or
Visual Studio 2008 Professional or higher (CE 4.2-)

For Managed Device Development you will need:

Visual Studio 2003 Professional or higher (CE 4.2, CF 1.0) or
Visual Studio 2005 Standard or higher (CE 4.2- CF 1.0 or 2.0) or
Visual Studio 2008 Professional or higher (CE 4.2- CF 2.0 or CF 3.5)

None of the Express editions come with the compilers and libraries required for device development. You can, in theory anyway, use the .NET SDK and the device SDK downloads to patch together the ability to compile managed code written in something like even Notepad, but without the IDE and debugging support, it's really not worth doing. Note that EHaskins above is incorrect with regard to Studio 2005. The Standard SKU is enough - it does not have to be Pro.

A: CASL from Caslsoft is a nice language for mobile development. I have used it for one project on Windows CE, but it should work on Windows Mobile as well. It was easy to get started. (You can use the free version to do the initial development)

A: I looked into more affordable ways to do this back in the VS 2003 days, but couldn't find anything. My guess is that you still need VS to do it. @MartinHN: You CANNOT use a version older than 2005, or less than Pro, for Windows Mobile 5/6 device development.

A: You should also have a look at NS Basic/CE. It's been around since 1998. They have continuously enhanced it, and it is well supported. Very much like Visual Basic, it has a screen designer, a full programming language and lots more. http://www.nsbasic.com/ce

A: This does not work with Visual Studio 2010; it must be 2005-2008.

A: Can't you use the Visual Studio Express Editions for mobile development? http://www.microsoft.com/downloads/details.aspx?FamilyID=94DE806B-E1A1-4282-ABC5-1F7347782553&displaylang=en
{ "language": "en", "url": "https://stackoverflow.com/questions/33144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to pass method name to custom server control in asp.net? I am working on a custom server control that extends another control. There is no problem with attaching to other controls on the form. In VB.NET: Parent.FindControl(TargetControlName) I would like to pass a method to the control in the ASPX markup, for example: <c:MyCustomerControl runat=server InitializeStuffCallback="InitializeStuff"> So, I tried using reflection to access the given method name from the Parent. Something like (in VB): Dim pageType As Type = Page.GetType Dim CallbackMethodInfo As MethodInfo = pageType.GetMethod( "MethodName" ) 'Also tried sender.Parent.GetType.GetMethod("MethodName") sender.Parent.Parent.GetType.GetMethod("MethodName") The method isn't found, because it just isn't a part of the Page. Where should I be looking? I'm fairly sure this is possible, because I've seen other controls do similar things. I forgot to mention: my work-around is to give the control events and attach to them in the code-behind. A: If you want to be able to pass a method in the ASPX markup, you need to use the Browsable attribute on the event in your code. VB.NET <Browsable(True)> Public Event InitializeStuffCallback C# [Browsable(true)] public event EventHandler InitializeStuffCallback; Reference: Design-Time Attributes for Components and BrowsableAttribute Class All the events, properties, or whatever need to be in the code-behind of the control, with the Browsable attribute, to make it so you can change it in the tag code. A: Normally you wouldn't need to get the method via reflection. Inside your user control, define a public event (sorry, I do not know the VB syntax, so this will be in C#): public event EventHandler EventName; Now, inside your aspx page, or whatever container of the user control, define a protected method that matches the EventHandler: protected void MyCustomerControl_MethodName(object sender, EventArgs e) { } Now, inside your markup, you can use <c:MyCustomerControl id="MyCustomerControl" runat=server OnEventName="MyCustomerControl_MethodName"> A: Your workaround is actually the better answer. If you have code that you must run at a certain part of your control's lifecycle, you should expose events to let the container extend the lifecycle with custom functionality. A: Every ASP.NET page is a class of its own inherited from Page, as in: class MyPage : Page Therefore, to find that method via reflection, you must get the correct type, which is the type of the page class that stores the page code. I suppose you need to support multiple pages that this control can be instantiated in. I believe you can find the child type of any instance of Page via reflection, but I do not remember how; you should be able to do it, though. But... like everyone else has said, such a case is what events are for. A: buyutec and Jesse Dearing both have an acceptable answer. [Browsable(true)] lets you see the property in the Properties window. However, the event doesn't show up, which makes no difference to me. The thing I overlooked earlier was the fact that when you reference a control's event from the tag, it prepends On.
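A minimal C# sketch of the event approach the answers describe; the control and event names mirror the question, but the body is illustrative:

using System;
using System.ComponentModel;
using System.Web.UI.WebControls;

public class MyCustomerControl : WebControl
{
    // Bound from markup as OnInitializeStuff="SomePageMethod";
    // the page parser prepends "On" to the event name.
    [Browsable(true)]
    public event EventHandler InitializeStuff;

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        if (InitializeStuff != null)
            InitializeStuff(this, EventArgs.Empty); // let the page hook in
    }
}

In the page markup that would be wired up as <c:MyCustomerControl runat="server" OnInitializeStuff="InitializeStuff" />, with a matching protected void InitializeStuff(object sender, EventArgs e) in the code-behind.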
{ "language": "en", "url": "https://stackoverflow.com/questions/33150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I keep my Login.aspx page's ReturnUrl parameter from overriding my ASP.NET Login control's DestinationPageUrl property? I'm using the ASP.NET Login Controls and Forms Authentication for membership/credentials for an ASP.NET web application. I've got pages such as PasswordRecovery.aspx that are accessible only to anonymous users. When I click my login link from such a page, the login page has a ReturnUrl parameter in the address bar: http://www.example.com/Login.aspx?ReturnUrl=PasswordRecovery.aspx And then after a successful login, users are returned to the PasswordRecovery.aspx page specified in the ReturnUrl parameter, to which they no longer have access. A: I found the answer on Velocity Reviews. I handled the LoggedIn event to force a redirection to the DestinationPageUrl page. Public Partial Class Login Inherits System.Web.UI.Page Protected Sub Login1_LoggedIn(ByVal sender As Object, _ ByVal e As System.EventArgs) Handles Login1.LoggedIn 'overrides ReturnUrl page parameter Response.Redirect(Login1.DestinationPageUrl) End Sub End Class
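For anyone on a C# code-behind, a sketch of the same fix (assuming a Login control named Login1 with OnLoggedIn wired to this handler):

protected void Login1_LoggedIn(object sender, EventArgs e)
{
    // Overrides the ReturnUrl query-string parameter.
    Response.Redirect(Login1.DestinationPageUrl);
}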
{ "language": "en", "url": "https://stackoverflow.com/questions/33166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Accessing Sharepoint from outside the WebUI Is it possible to access the database backend of a SharePoint server? My company uses SharePoint to store data and pictures of various assets. Ideally I would be able to access the data and display it in my application, to allow users both methods of access. Before I go talk to the IT department, I would like to find out if this is even possible. Edit: From Rails on Linux? (Yes, I know I'm crazy.) A: Yikes! :) Look at the web services and the .NET API before going direct to the database. I've used both, and they provide plenty of flexibility (including building your own web services on top of the API if necessary). The API is for on-server clients; web services are for off-server clients. A: Agree with Adam. Querying the SharePoint database is a big no-no, as Microsoft does not guarantee that the schema is in any way stable. Only access the database if there is really no other way. As for SharePoint, usually the Lists.asmx web service is what you want to look at first. http://www.c-sharpcorner.com/UploadFile/mahesh/WSSInNet01302007093018AM/WSSInNet.aspx http://geekswithblogs.net/mcassell/archive/2007/08/22/Accessing-Sharepoint-Data-through-Web-Services.aspx A: Just a small comment. Never, ever go to the database directly. If there is no way to do it via published and supported APIs, then there is no way to do it. End of story. This applies even when you are "just reading data", as that can still cause significant issues. A: Just to support the above: if you ever take a look at the SQL tables that sit behind SharePoint, you'll realise why it's not recommended or supported to access the database directly. MADNESS!
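Since the question asks about Rails on Linux: the Lists.asmx endpoint is plain SOAP, so it is callable from any stack. For illustration, here is roughly what the call looks like from C# with a generated proxy (server URL and list name are placeholders; "ListsService" is the name given when adding a web reference to http://yourserver/_vti_bin/Lists.asmx). The same GetListItems operation can be invoked from Ruby with any SOAP client.

ListsService.Lists lists = new ListsService.Lists();
lists.Url = "http://yourserver/_vti_bin/Lists.asmx";
lists.Credentials = System.Net.CredentialCache.DefaultCredentials;

System.Xml.XmlNode items = lists.GetListItems(
    "Assets",     // list name
    null,         // view name (null = default view)
    null, null,   // CAML query, view fields
    null,         // row limit
    null, null);  // query options, web ID
System.Console.WriteLine(items.OuterXml);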
{ "language": "en", "url": "https://stackoverflow.com/questions/33174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to tell the data types after executing a stored procedure? Is there a way, when executing a stored procedure in Management Studio, to get the data types of the result sets coming back? I'm looking for something like the functionality of sp_help when you pass it a table name. A: You do get to look at the types, though, if you call the stored procedure via ADO, ADO.NET, ODBC or the like: the resulting recordsets have the type information you are looking for. Are you really restricted to Management Studio? A: Your best bet would be to change the stored procedure to a function. But that only works if your environment allows it. A: No easy way comes to mind without parsing syscomments to see what it's querying from where. If you can edit the SP to select XML, you can append XML_INFO to the query to get the schema back. A: Actually, you can do it from within an SP: EXEC ('if exists (select * from sys.tables where name = ''tmp_TableName'') drop table tmp_TableName') EXEC ('select * into tmp_TableName from MyTable') -- Grab the column types from INFORMATION_SCHEMA here EXEC ('if exists (select * from sys.tables where name = ''tmp_TableName'') drop table tmp_TableName') Although I think there must be a better way. A: It's not the most elegant solution, but you could use OPENROWSET to put the stored proc results into a table, then use sp_help to get a description of it. E.g. select * into tmp_Results from openrowset( 'SQLOLEDB.1' , 'Server=your_server_name;Trusted_Connection=yes;' , 'exec your_stored_proc') exec sp_help 'tmp_Results' drop table tmp_Results A: You could always use an actual table whose name is guaranteed to be unique. It's a kludge, but it's an option. This will not work inside a stored proc, though. if exists (select * from sys.tables where name = 'tmp_TableName') drop table tmp_TableName go select * into tmp_TableName from MyTable --do some stuff go if exists (select * from sys.tables where name = 'tmp_TableName') drop table tmp_TableName go
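If you can step outside Management Studio as the first answer suggests, a small ADO.NET sketch gets you the column names and CLR types without materializing any rows (the proc name and connection string are placeholders):

using System;
using System.Data;
using System.Data.SqlClient;

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("dbo.your_stored_proc", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    conn.Open();
    // SchemaOnly asks the server for metadata without returning row data.
    using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SchemaOnly))
    {
        DataTable schema = reader.GetSchemaTable();
        foreach (DataRow col in schema.Rows)
            Console.WriteLine("{0}: {1}", col["ColumnName"], col["DataType"]);
    }
}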
{ "language": "en", "url": "https://stackoverflow.com/questions/33175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What's the "condition" in C interview test? Would it be possible to print Hello twice using a single condition? if "condition" printf ("Hello"); else printf("World"); A: if ( printf("Hello") == 0 ) printf ("Hello"); else printf ("World"); :-) A: #define CONDITION (0) if (0) {} else or some such. If you see such a question on an interview, run away as fast as you can! The team that asks such questions is bound to be unhealthy. Edit - I forgot to clarify - this relies on "else" being matched with the closest open "if", and on the fact that it's written as "if CONDITION" rather than if (CONDITION) - parentheses would make the puzzle unsolvable. A: The if statement executes one or the other of the controlled statements (both printf in your example). No matter what you use for condition, that snippet will either print "Hello", or "World", but never both. Edit: Okay, so it's a trick question and you can put whatever you like in the condition (including a call to an entire other function that does anything you want). But that's hardly interesting. I can't believe I got downmodded for giving a correct answer. A: if ( printf("Hello")==0) - see http://www.coders2020.com/what-does-printf-return (matt corrected my =, thanks, C is far away) A: Without knowing the return value of printf off the top of your head: if (printf("Hello") && 0) printf("Hello"); else printf("World"); A: Comment the "else" ;) if(foo) { bar(); } //else { baz(); } A: The basic answer is that in the ordinary course of events you neither want to execute both the statements in the 'if' block and the 'else' block in a single pass through the code (why bother with the condition if you do), nor can you execute both sets of statements without jumping through grotesque hoops. Some grotesque hoops - evil code! if (condition == true) { ...stuff... goto Else; } else { Else: ...more stuff... } Of course, it is a plain abuse of (any) language because it is equivalent to: if (condition == true) { ...stuff... } ...more stuff... However, it might achieve what the question is asking. If you have to execute both blocks whether the condition is true or false, then things get a bit trickier. done_then = false; if (condition == true) { Then: ...stuff... done_then = true; goto Else; } else { Else: ...more stuff... if (!done_then) goto Then; } A: int main() { runIfElse(true); runIfElse(false); return 0; } void runIfElse(bool p) { if(p) { // do if } else { // do else } } A: "condition" === (printf("Hello"), 0) Really lame: int main() { if (printf("Hello"), 0) printf ("Hello"); else printf("World"); } I prefer the use of the comma operator because you don't have to look up the return value of printf in order to know what the conditional does. This increases readability and maintainability. :-) A: if (true) printf ("Hello"); if (false) printf ("Hello"); else printf("World"); A: No love for exit? if(printf("HelloWorld"), exit(0), "ByeBye") printf ("Hello"); else printf ("World"); A: So... you want to execute the code inside the if block... and the code inside of the else block... of the same if/else statement? Then... you should get rid of the else and stick that code in the if. if something do_this do_that end The else statement is designed to execute only if the if statement is not executed and vice versa; that is the whole point. This is an odd question... A: This sounds to me like some interview puzzle. I hope this is close to what you want. #include <stdio.h> int main() { static int i = 0 ; if( i++==0 ? main(): 1) printf("Hello,"); else printf("World\n"); return 0 ; } prints Hello, World A: Buckle your seatbelts: #include <stdio.h> #include <setjmp.h> int main() { jmp_buf env; if (!setjmp(env)) { printf("if executed\n"); longjmp(env, 1); } else { printf("else executed\n"); } return 0; } Prints: if executed else executed Is this what you mean? I doubt it, but at least it's possible. Using fork you can do it also, but the branches will run in different processes. A: If it is on Unix: if (fork()) printf ("Hello"); else printf("World"); Of course that doesn't guarantee the order of the prints. A: This could work: if (printf("Hello") - strlen("Hello")) printf("Hello"); else printf("World"); This snippet emphasizes the return value of printf: the number of characters printed. A: Solution 1: int main(int argc, char* argv[]) { if( argc == 2 || main( 2, NULL ) ) { printf("Hello "); } else { printf("World\n"); } return 0; } Solution 2 (only for Unix and Linux): int main(int argc, char* argv[]) { if( !fork() ) { printf("Hello "); } else { printf("World\n"); } return 0; } A: Just put the code before or after the if..else block. Alternatively, if you have an "if, else if, else" block where you want to execute code in some (but not all) branches, just put it in a separate function and call that function within each block. A: #include<stdio.h> int main() { if(! printf("Hello")) printf ("Hello"); else printf ("World"); return 0; } Because printf returns the number of characters it has printed successfully. A: if(printf("Hello") == 1) printf("Hello"); else printf("World"); A: if (printf("Hello") < 1) printf("Hello"); else printf("World"); A: Greg wrote: No matter what you use for condition, that snippet will either print "Hello", or "World", but never both. Well, this isn't true, but why you would want it to print both, I can't find a use case for. It's defeating the point of having an if statement. The likely "real" solution is to not use an if at all. Silly interview questions... :) A: Very interesting guys, thanks for the answers. I never would have thought about putting the print statement inside the if condition. Here's the Java equivalent: if ( System.out.printf("Hello").equals("") ) System.out.printf("Hello"); else System.out.printf("World"); A: Don't use an if-else block then. EDIT to comment: It might then mean that the code be in both blocks, or before/after the block if it is required to run in both cases. A: Use a goto, one of the single most underused keywords of our day. A: Cheating with an empty else statement: if (condition) // do if stuff else; // do else stuff If you don't like the fact that else; is actually an empty else statement, try this: for (int ii=0; ii<2; ii++) { if (condition && !ii) // do if stuff else { // do else stuff break; } } A: Two possible solutions without using printf in the condition :- First :- #include <stdio.h> int main(void) { if (!stdin || (stdin = 0, main())) printf("hello"); else printf("world"); return 0; } Second #include<stdio.h> void main() { if (1 #define else if (1) ) { printf("hello"); } else { printf("world"); } } Reference :- Link1 , Link2 A: if (printf("hello") & 0) { printf("hello"); } else { printf("world"); } No need to worry about the return value of printf. A: Abuse of preprocessing - with cleanup at least. #define else if(1) { printf("hello"); } else { printf("world"); } #undef else A: The condition to this question is: if(printf("hello")? 0 : 1) { }
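For anyone who wants to actually compile and run the printf-return trick that most answers rely on, here is a self-contained version; the only fact it depends on is that printf returns the number of characters written:

#include <stdio.h>

int main(void)
{
    /* printf("Hello") prints Hello and returns 5, so the comparison
       below is false and the else branch runs: output is HelloWorld. */
    if (printf("Hello") == 0)
        printf("Hello");
    else
        printf("World");
    return 0;
}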
{ "language": "en", "url": "https://stackoverflow.com/questions/33199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: When to commit changes? Using Oracle 10g, accessed via Perl DBI, I have a table with a few tens of millions of rows that is updated a few times per second while being read from much more frequently by another process. Soon the update frequency will increase by an order of magnitude (maybe two). Someone suggested that committing every N updates instead of after every update will help performance. I have a few questions: * *Will that be faster or slower, or does it depend? (Planning to benchmark both ways as soon as I can get a decent simulation of the new load.) *Why will it help / hinder performance? *If "it depends ...", on what? *If it helps, what's the best value of N? *Why can't my local DBA give a helpful straight answer when I need one? (Actually I know the answer to that one) :-) EDIT: @codeslave : Thanks. Btw, losing uncommitted changes is not a problem; I don't delete the original data used for updating till I am sure everything is fine. And yes, the cleaning lady did unplug the server, TWICE :-) Some googling showed it might help because of issues related to rollback segments, but I still don't know a rule of thumb for N: every few tens? Hundreds? Thousands? @diciu : Great info, I'll definitely look into that. A: A commit results in Oracle writing stuff to the disk - i.e. to the redo log file, so that whatever the committed transaction has done can be recovered in the event of a power failure, etc. Writing to a file is slower than writing to memory, so a commit will be slower if performed for many operations in a row rather than for a set of coalesced updates. In Oracle 10g there's an asynchronous commit that makes it much faster but less reliable: https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-6158695.html PS I know for sure that, in a scenario I've seen in a certain application, changing the number of coalesced updates from 5K to 50K makes it faster by an order of magnitude (10 times faster). A: Reducing the frequency of commits will certainly speed things up; however, as you are reading and writing to this table frequently, there is the potential for locks. Only you can determine the likelihood of the same data being updated at the same time. If the chance of this is low, commit every 50 rows and monitor the situation. Trial and error, I'm afraid :-) A: As well as reducing the commit frequency, you should also consider performing bulk updates instead of individual ones. A: Faster/Slower? It will probably be a little faster. However, you run a greater risk of running into deadlocks, losing uncommitted changes should something catastrophic happen (cleaning lady unplugs the server), FUD, Fire, Brimstone, etc. Why would it help? Obviously fewer commit operations, which in turn means fewer disk writes, etc. DBAs and straight answers? If it was easy, you wouldn't need one. A: If you "don't delete the original data used for updating till [you are] sure everything is fine", then why don't you remove all those incremental commits in between, and roll back if there's a problem? It sounds like you have effectively built a transaction system on top of transactions. A: @CodeSlave: your question is answered by @stevechol - if I remove ALL the incremental commits there will be locks. I guess if nothing better comes along I'll follow his advice: pick a random number, monitor the load, and adjust accordingly, while applying @diciu's tweaks.
PS: The transactions-on-top-of-transactions thing is just accidental. I get the files used for updates by FTP, and instead of deleting them immediately I set a cron job to delete them a week later (if no one using the application has complained). That means that if something goes wrong I have a week to catch the errors.
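Since the question mentions Perl DBI, here is a rough sketch of the commit-every-N pattern being discussed; the table, the bind values and N=1000 are placeholders to be tuned by the benchmarks mentioned above:

use DBI;

my $dbh = DBI->connect('dbi:Oracle:mydb', 'user', 'pass',
                       { AutoCommit => 0, RaiseError => 1 });
my $sth = $dbh->prepare('UPDATE big_table SET val = ? WHERE id = ?');

my $n = 0;
for my $row (@updates) {               # @updates is a placeholder list
    $sth->execute($row->{val}, $row->{id});
    $dbh->commit unless ++$n % 1000;   # commit every 1000 updates
}
$dbh->commit;                          # flush the remainder
$dbh->disconnect;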
{ "language": "en", "url": "https://stackoverflow.com/questions/33204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What is the best way to unit test Objective-C code? What frameworks exist to unit test Objective-C code? I would like a framework that integrates nicely with Apple Xcode. A: Sen:te (the creator of the testing framework included with Xcode) explains how to use OCUnit with an iPhone project: simple-iphone-ipad-unit-test. A: I recommend gh-unit; it has a nice GUI for test results. http://github.com/gabriel/gh-unit/tree/master A: The unit testing support bundled within Xcode (for its simple setup) combined with ocrunner (for some autotest/Growl goodness) is currently my favorite Obj-C unit testing setup. A: Here is a whole list of them: List_of_unit_testing_frameworks in Objective-C A: Matt Gallagher of Cocoa with Love has a very good article on unit testing. A: Check out GHUnit by Gabriel Handford: "The goals of GHUnit are: Runs unit tests within XCode, allowing you to fully utilize the XCode Debugger. A simple GUI to help you visualize your tests. Show stack traces. Be installable as a framework (for Cocoa apps) with a simple (or not) target setup; or easy to package into your iPhone project." A: I would suggest looking into Kiwi, an open source BDD testing framework for iOS: Kiwi Check out the project's WIKI to start, or get Daniel Steinberg's book "Test Driving iOS Development with Kiwi": test-driving-ios-development A: I use SimpleUnitTest; it works with iPhone and iPad libs. http://cbess.blogspot.com/2010/05/simple-iphone-ipad-unit-test.html It comes with a unit test Xcode template to easily add a unit test class. It wraps GTM. You can literally drop it into an active project and start adding unit tests within 3 minutes (or less). A: Xcode includes XCTest, which is similar to OCUnit, an Objective-C unit testing framework, and has full support for running XCTest-based unit tests as part of your project's build process. Xcode's unit testing support is described in the Xcode Overview: Using Unit Tests. Back in the Xcode 2 days, I wrote a series of weblog posts about how to perform some common tasks with Xcode unit testing: * *Unit testing Cocoa frameworks *Debugging Cocoa framework unit tests *Unit testing Cocoa applications *Debugging Cocoa application unit tests Despite using OCUnit rather than XCTest, the concepts are largely the same. Finally, I also wrote a few posts on how to write tests for Cocoa user interfaces; the way Cocoa is structured makes it relatively straightforward, because you don't have to spin an event loop or anything like that in most cases. * *Trust, but verify. *Unit testing Cocoa user interfaces: Target-Action *Unit testing Cocoa user interfaces: Cocoa Bindings This makes it possible to do test-driven development for not just your model-level code but also your controller-level and even view-level code. A: Specta is a modern TDD (Test Driven Development) / BDD (Behavior Driven Development) framework which runs on top of XCTest. It supports unit testing for iOS and Mac OS X projects. A: I started using the Google toolbox testing rig for iPhone, and it's working out great for me. google-toolbox-for-mac A: Check out OCUnit. Apple's developer network has a great introduction. A: Note that the Google Toolbox for Mac (GTM) project simply extends/augments Apple's SenTestingKit framework (which is itself based on OCUnit).
As they say on the project site: GTM has several enhancements to the standard SenTestingKit, allowing you to do UI unit testing, automated binding unit testing, log tracking, and unit testing on the iPhone, as well as tools for doing static and dynamic testing of your code. Note the following comment about user-interface testing: GTM has extensive support for user interface unit tests. It supports testing both the imaging and/or internal state of almost all of the standard Cocoa/UIKit UI objects, and makes it easy for you to extend this support to your own UI objects. See their "Code Verification and Unit Testing" page for instructions on how to use it. A: I came to the conclusion that GHUnit is the most advanced testing framework for Objective-C. I have done a roundup of testing frameworks on my blog. It is the most flexible in terms of deployment (iPhone, simulator or Mac OS native) and assert capabilities. Because it is based on GTM, it inherits all of GTM's advantages over SenTestingKit but also adds a lot more. Another bonus is that it is being maintained very actively. I have put effort into integrating OCMock with GHUnit, and it works great! You can get the code on GitHub. A: I would also recommend using coverage tools to see which parts of the code are covered with unit tests and which are not. Basic line and branch code coverage can be generated with the GCOV tool. If you want to generate nice HTML coverage reports, there are LCOV and ZCOV, which do just that. A: I hope you can use SenTestingKit, with which you can test each and every method.
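Whichever framework you pick, the shape of a test is much the same; here is a minimal XCTest case as a reference point (the Calculator class under test is hypothetical):

// CalculatorTests.m
#import <XCTest/XCTest.h>
#import "Calculator.h"

@interface CalculatorTests : XCTestCase
@end

@implementation CalculatorTests

- (void)testAddition {
    Calculator *calc = [[Calculator alloc] init];
    XCTAssertEqual([calc add:2 to:3], 5, @"2 + 3 should be 5");
}

@end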
{ "language": "en", "url": "https://stackoverflow.com/questions/33207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "332" }
Q: How can you implement trackbacks on a custom-coded blog (written in C#)? How can you implement trackbacks on a custom-coded blog (written in C#)? A: The TrackBack specification was created by Six Apart back in the day for their Movable Type blogging system. After some corporate changes it seems to be no longer available, but here's an archived version: http://web.archive.org/web/20081228043036/http://www.sixapart.com/pronet/docs/trackback_spec A: Personally, I wouldn't. Trackbacks became completely unusable years ago from all the spammers and even Akismet hasn't been enough to drag them back to usable (obviously IMO). The best way I've seen to handle trackbacks any more is to have a function that will turn an article's "referrer" (you are tracking those, right?) into a trackback (probably as a customized comment type). This leverages the meat-space processing that guarantees that no spam gets through and still allows you to easily recognize and enable further discussion. A: If you're custom coding your own blog you have too much time on your hands. Start with something like dasBlog or SubText and customize that to your needs. Then you get trackbacks for free.
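If you do decide to support them, the spec above boils down to one form-encoded POST per ping. A hedged C# sketch (the URLs and field values are placeholders; per the spec, the receiver replies with XML containing <error>0</error> on success):

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

using (WebClient client = new WebClient())
{
    NameValueCollection data = new NameValueCollection();
    data.Add("url", "http://myblog.example/posts/42");   // the linking post
    data.Add("title", "My post title");
    data.Add("excerpt", "The first few sentences of the post...");
    data.Add("blog_name", "My Blog");

    // The trackback ping URL is advertised by the post being linked to.
    byte[] response = client.UploadValues(
        "http://otherblog.example/trackback/123", "POST", data);
    Console.WriteLine(Encoding.UTF8.GetString(response));
}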
{ "language": "en", "url": "https://stackoverflow.com/questions/33217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Compact Framework - Lightweight GUI Framework? WinForms on CF is a bit heavy: initialising a lot of window handles takes serious time and memory. Another issue is the lack of built-in double buffering, and the lack of control you have over the UI rendering means that during processor-intensive operations the UI might leave the user staring at a half-rendered screen. Nice! To alleviate this issue I would seek a lightweight control framework; is there one kicking about already, or would one have to homebrew? By lightweight I mean a control library that enables one to fully control the painting of controls and doesn't use many expensive window handles. NOTE: Please don't suggest that I am running too much on the UI thread. That is not the case. A: I ran across this the other day, which might be helpful at least as a starting point: Fluid - Windows Mobile .NET Touch Controls. The look and feel is nice, but there is no design-time support. I don't know too much about memory footprint, etc., but everything is double buffered and the performance appears to be pretty good. A: Ok, just an idea off the top of my head... How about creating a synchronisation object, e.g. a critical section or single lock, in your application, shared between your worker and GUI threads. Override the paint. When you start painting, block all the other threads, such that you are not left with a half-painted screen while they hog the CPU. (This of course assumes that presenting a pretty picture to your user is the most important thing you require ;) ) A: Actually, you can override the paint event. And the idea is that you offload long-running operations to a separate thread. That's no different from any other event-driven framework, really. Anything that relies on handling a Paint event is going to be susceptible to that. Also, there's no system that lets you determine when the paint event is raised. That kind of event is generally raised by the window manager layer, which is outside of the application (or even the framework). You can handle the event yourself and just do no work some of the time, but I wouldn't recommend it. A: "a bit slow and you can't control the paint event so during processor intensive operations the UI might leave the user staring at a half rendered screen" - It's generally a bad idea to do expensive tasks on the UI thread. To keep your UI responsive, these tasks should be performed by a worker thread.
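Whether you homebrew or not, the core of such a framework is manual double buffering. A minimal .NET CF sketch of one owner-drawn control:

using System;
using System.Drawing;
using System.Windows.Forms;

public class BufferedControl : Control
{
    private Bitmap _buffer;

    protected override void OnResize(EventArgs e)
    {
        base.OnResize(e);
        if (Width > 0 && Height > 0)
            _buffer = new Bitmap(Width, Height);
    }

    // Empty on purpose: suppresses the background erase that causes flicker.
    protected override void OnPaintBackground(PaintEventArgs e) { }

    protected override void OnPaint(PaintEventArgs e)
    {
        if (_buffer == null) return;
        using (Graphics g = Graphics.FromImage(_buffer))
        {
            g.Clear(BackColor);
            // ... draw the control's content into g here ...
        }
        e.Graphics.DrawImage(_buffer, 0, 0); // single blit to the screen
    }
}

A homebrew framework along the question's lines would then draw many lightweight "controls" onto one buffer owned by a single Form, so only one window handle is ever allocated.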
{ "language": "en", "url": "https://stackoverflow.com/questions/33222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is elegant, semantic CSS with ASP.Net still a pipe dream? I know Microsoft has made efforts in the direction of semantic and cross-browser compliant XHTML and CSS, but it still seems like a PitA to pull off elegant markup. I've downloaded and tweaked the CSS Friendly Adapters and all that. But I still find myself frustrated with bloated and unattractive code. Is elegant, semantic CSS with ASP.Net still a pipe dream? Or is it finally possible, I just need more practice? A: See this question for more discussion, including use of MVC. This site uses ASP.NET and the markup is pretty clean. Check out the HTML/CSS on MicrosoftPDC.com (a site I'm working on) - it uses ASP.NET webforms, but we're designing with clean markup as a priority. A: The easiest way to generate elegant HTML and CSS is to use MVC framework, where you have much more control over HTML generation than with Web Forms. A: As long as you use the Visual Studio designer, it's probably a pipe dream. I write all of my ASP.NET code (all markup, and CSS) by hand, simply to avoid the designer. Later versions of Visual Studio have gotten much better at not mangling your .aspx/.ascx files, but they're still far from perfect. A: A better question is: is it really worth it? I write web applications and rarely does the elegance of the resulting HTML/CSS/JavaScript add anything to the end goal. If your end goal is to have people do a "view source" on your stuff and admire it, then maybe this is important and worth all of the effort, but I doubt it. If you need the semantics, use XML for your data. I do believe in the idea of the semantic web, but my applications don't need to have anything to do with it. A: As DannySmurf said, hand building is the way to go. That said, you might look at Expression Web. At least it is pretty accurate in how it renders the pages. A: @JasonBunting - Yes, it's absolutely worth it. Semantic and cross-browser markup means that search engines have an easier (and thus higher rankings) time with your content, that browsers have an easier (and thus less error-prone) time parsing your content for display, and that future developers have an easier time maintaining your code. A: Yes - it's a pipe dream. Since working with a professional web designer on a joint project who HATED the output of ASP.net server side controls I stopped using them. I essentially had to write ASP.net apps like you would write a modern PHP app. If you have a heavy business layer then your page or UI code can be minimal. I've never looked back since. The extra time spent writing everything custom has saved me a great deal of time trying to make Visual Studio / ASP.net play nice with CSS/XHTML. A: i can't believe nobody has mentioned css adapters. many of the common controls used in asp.net (gridview and treeview for example) can be processed through an adapter to change the resulting html that is outputted to the browser. if going the mvc route isn't a viable option, it is possible to write your own adapters for any of the built in asp.net controls. http://www.asp.net/CssAdapters/
{ "language": "en", "url": "https://stackoverflow.com/questions/33223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: In SQL Server 2000, is there a sysobjects query that will retrieve user views and not system views? Assuming such a query exists, I would greatly appreciate the help. I'm trying to develop a permissions script that will grant "select" and "references" permissions on the user tables and views in a database. My hope is that executing the "grant" commands on each element in such a set will make it easier to keep permissions current when new tables and views are added to the database. A: select * from information_schema.tables WHERE OBJECTPROPERTY(OBJECT_ID(table_name),'IsMSShipped') =0 Will exclude dt_properties and system tables add where table_type = 'view' if you just want the view A: SELECT * FROM sysobjects WHERE xtype = 'V' AND type = 'V' AND category = 0 Here is a list of the possible values for xtype: * *C = CHECK constraint *D = Default or DEFAULT constraint *F = FOREIGN KEY constraint *L = Log *P = Stored procedure *PK = PRIMARY KEY constraint (type is K) *RF = Replication filter stored procedure *S = System table *TR = Trigger *U = User table *UQ = UNIQUE constraint (type is K) *V = View *X = Extended stored procedure Here are the possible values for type: * *C = CHECK constraint *D = Default or DEFAULT constraint *F = FOREIGN KEY constraint *FN = Scalar function *IF = Inlined table-function *K = PRIMARY KEY or UNIQUE constraint *L = Log *P = Stored procedure *R = Rule *RF = Replication filter stored procedure *S = System table *TF = Table function *TR = Trigger *U = User table *V = View *X = Extended stored procedure Finally, the category field looks like it groups based on different types of objects. After analyzing the return resultset, the system views look to have a category = 2, whereas all of the user views have a category = 0. Hope this helps. For more information, visit http://msdn.microsoft.com/en-us/library/aa260447(SQL.80).aspx A: select * from information_schema.tables where table_type = 'view'
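Tying this back to the permissions script in the question, the same filter can emit the GRANT statements directly; the role name is a placeholder, and you run the generated output afterwards:

SELECT 'GRANT SELECT, REFERENCES ON [' + name + '] TO [YourRole]'
FROM sysobjects
WHERE xtype IN ('U', 'V')                    -- user tables and views
  AND OBJECTPROPERTY(id, 'IsMSShipped') = 0  -- skip system objects
ORDER BY name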
{ "language": "en", "url": "https://stackoverflow.com/questions/33226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Integrating a custom gui framework with the VS designer Imagine you homebrew a custom GUI framework that doesn't use window handles (Compact Framework, so please don't argue with the "whys"). One of the main disadvantages of developing such a framework is that you lose compatibility with the WinForms designer. So my question, to all of you who know a lot about VS customisation: would there be a clever mechanism by which one could incorporate the GUI framework into the designer and get it to spit out your custom code instead of the standard Windows stuff in the InitializeComponent() method? A: I recently watched a video of these guys who built a WoW AddOn designer for Visual Studio. They overcame the task of getting their completely custom controls to render correctly in the designer. I'm not sure if this is exactly what you need, but it might be worth looking at. It's open source: http://www.codeplex.com/WarcraftAddOnStudio A: I've also since discovered that DXCore from DevExpress is a tool that simplifies plugin development. The default implementation wouldn't let me dock as a document (central), but regardless, one can still easily generate a plugin with it that can compile a file on the fly and render the contents of it, which may well do the job for me. :)
{ "language": "en", "url": "https://stackoverflow.com/questions/33233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I find unused images and CSS styles in a website? Is there a method (other than trial and error) I can use to find unused image files? How about CSS declarations for IDs and classes that don't even exist in the site? It seems like there might be a way to write a script that scans the site, profiles it, and sees which images and styles are never loaded. A: You don't have to pay for any web service or search for an addon; you already have this in Google Chrome, under F12 (Inspector) -> Audits -> Remove unused CSS rules. (Screenshot omitted.) Update: 30 Jun, 2017 Now Chrome 59 provides CSS and JS code coverage. See https://developers.google.com/web/updates/2017/04/devtools-release-notes#coverage A: I seem to recall either Adobe Dreamweaver or Adobe GoLive having a feature to find both orphaned styles and images; can't remember which now. Possibly both, but the features were well-hidden. A: The Chrome canary build has an option in the developer toolbar for "CSS Coverage" as one of the experimental developer features. This option comes up in the Timeline tab and can be used for getting a list of the unused CSS. Please note that you need to enable this feature in the settings of the developer toolbar too. This feature should probably make it to an official Chrome release. A: At a file level: use wget to aggressively spider the site and then process the HTTP server logs to get the list of files accessed; diff this with the files in the site: diff \ <(sed some_rules httpd_log | sort -u) \ <(ls /var/www/whatever | sort -u) \ | grep something A: This little tool gives you a list of the CSS rules in use by some HTML. Here it is on CodePen. Click on Run code snippet, then click on Full page to get into it. Then follow the instructions in the snippet. You can run it full page to see it work with your HTML / CSS. But it's easier just to bookmark my CodePen as a tool. /* CSS CLEANER INSTRUCTIONS 1. Paste your HTML into the HTML window 2. Paste your CSS into the CSS window 3. The web page result now ends with a list of just the CSS used by your HTML! */ function cssRecursive(e) { var cssList = css(e); for (var i = 0; i < e.children.length; ++i) { var childElement = e.children[i]; cssList = union(cssList, cssRecursive(childElement)); } return cssList; } function css(a) { var sheets = document.styleSheets, o = []; a.matches = a.matches || a.webkitMatchesSelector || a.mozMatchesSelector || a.msMatchesSelector || a.oMatchesSelector; for (var i in sheets) { var rules = sheets[i].rules || sheets[i].cssRules; for (var r in rules) { if (a.matches(rules[r].selectorText)) { o.push(rules[r].cssText); } } } return o; } function union(x, y) { return unique(x.concat(y)); }; function unique(x) { return x.filter(function(elem, index) { return x.indexOf(elem) == index; }); }; document.write("<br/><hr/><code style='background-color:white; color:black;'>"); var allCss = cssRecursive(document.body); for (var i = 0; i < allCss.length; ++i) { var cssRule = allCss[i]; document.write(cssRule + "<br/>"); } document.write("</code>");
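For the unused-image half of the question, a crude offline scan gets you most of the way. This Python sketch (the site root is a placeholder) only does static text matching, so images referenced from scripts or built-up URLs will show as false positives and should be checked by hand:

import os, re

SITE_ROOT = "/var/www/mysite"   # placeholder

images, text_files = set(), []
for dirpath, _, files in os.walk(SITE_ROOT):
    for f in files:
        if f.lower().endswith((".png", ".gif", ".jpg", ".jpeg")):
            images.add(f.lower())
        elif f.lower().endswith((".html", ".htm", ".css", ".aspx", ".php")):
            text_files.append(os.path.join(dirpath, f))

referenced = set()
for path in text_files:
    with open(path, errors="ignore") as fh:
        # collect every filename-like token ending in an image extension
        for m in re.findall(r'[\w%.-]+\.(?:png|gif|jpe?g)', fh.read(), re.I):
            referenced.add(os.path.basename(m).lower())

for unused in sorted(images - referenced):
    print(unused)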
{ "language": "en", "url": "https://stackoverflow.com/questions/33242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "125" }
Q: Caching Active Directory Data In one of my applications, I am querying Active Directory to get a list of all users below a given user (using the "Direct Reports" thing). So basically, given the name of the person, it is looked up in AD, then the direct reports are read. But then for every direct report, the tool needs to check the direct reports of the direct reports. Or, more abstractly: the tool will use a person as the root of the tree and then walk down the complete tree to get the names of all the leaves (can be several hundred). Now, my concern is obviously performance, as this needs to be done quite a few times. My idea is to manually cache that (essentially just put all the names in a long string and store that somewhere and update it once a day). But I just wonder if there is a more elegant way to first get the information and then cache it, possibly using something in the System.DirectoryServices namespace? A: In order to take control over the properties that you want to be cached, you can call 'RefreshCache()', passing the properties that you want to hang around: System.DirectoryServices.DirectoryEntry entry = new System.DirectoryServices.DirectoryEntry(); // Push the property values from AD back to cache. entry.RefreshCache(new string[] {"cn", "www" }); A: Active Directory is pretty efficient at storing information, and the retrieval shouldn't be that much of a performance hit. If you are really intent on storing the names, you'll probably want to store them in some sort of a tree structure, so you can see the relationships of all the people. Depending on the number of people, you might as well pull all the information you need daily and then run all the requests against your cached copy. A: AD does that sort of caching for you, so don't worry about it unless performance becomes a problem. I have software doing this sort of thing all day long running on a corporate intranet that takes thousands of hits per hour, and I have never had to tune performance in this area. A: Depends on how up to date you want the information to be. If you must have the very latest data in your report, then querying directly from AD is reasonable. And I agree that AD is quite robust; a typical dedicated AD server is actually very lightly utilised in normal day-to-day operations, but it's best to check with your IT department / support person. An alternative is to have a daily script dump the AD data into a CSV file and/or import it into a SQL database. (Oracle has a SELECT ... CONNECT BY feature that can automatically create multi-level hierarchies within a result set. MSSQL can do a similar thing with a bit of recursion, IIRC.)
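For the retrieval half, here is a sketch of walking the tree with System.DirectoryServices; the attribute names are standard AD, the starting DN is a placeholder, and the resulting list is what you would cache or dump daily as the answers suggest:

using System.Collections.Generic;
using System.DirectoryServices;

static void CollectReports(string dn, List<string> names)
{
    using (DirectoryEntry entry = new DirectoryEntry("LDAP://" + dn))
    {
        foreach (object reportDn in entry.Properties["directReports"])
        {
            using (DirectoryEntry report =
                   new DirectoryEntry("LDAP://" + reportDn))
            {
                names.Add((string)report.Properties["displayName"].Value);
            }
            CollectReports((string)reportDn, names); // recurse into subtree
        }
    }
}

// Usage: CollectReports("CN=Some Manager,OU=Staff,DC=corp,DC=example", list);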
{ "language": "en", "url": "https://stackoverflow.com/questions/33250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Getting files and their version numbers from sharepoint As a temporary stopgap until all the designers are in place we are currently hand-cranking a whole bunch of xml configuration files at work. One of the issues with this is file-versioning because people forget to update version numbers when updating the files (which is to be expected as humans generally suck at perfection). Therefore I figure that as we store the files in Sharepoint I should be able to write a script to pull the files down from Sharepoint, get the version number and automatically enter/update the version number from Sharepoint into the file. This means when someone wants the "latest" files they can run the script and get the latest files with the version numbers correct (there is slightly more to it than this so the reason for using the script isn't just the benefit of auto-versioning). Does anyone know how to get the files + version numbers from Sharepoint? A: I am assuming you are talking about documents in a list or a library, not source files in the 12 hive. If so, each library has built-in versioning. You can access it by clicking on the Form Library Settings available from each library (with appropriate admin privs, of course). From there, select Versioning Settings, and choose a setup that works for your process. As for getting the version number in code, if you pull a SPListItem from the collection, there is a SPListItemVersionCollection named Versions attached to each item. A: There is a way to do it thru web services, but I have done more with implementing custom event handlers. Here is a bit of code that will do what you want. Keep in mind, you can only execute this from the server, so you may want to wrap this up in a web service to allow access from your embedded devices. Also, you will need to reference the Microsoft.SharePoint.dll in this code. using (SPSite site = new SPSite("http://yoursitename/subsite")) { using (SPWeb web = site.OpenWeb()) { SPListItemCollection list = web.Lists["MyDocumentLibrary"].GetItems(new SPQuery()); foreach(SPListItem itm in list) { Stream inStream = itm.File.OpenBinaryStream(); XmlTextReader reader = new XmlTextReader(inStream); XmlDocument xd = new XmlDocument(); xd.Load(reader); //from here you can read whatever XML node that contains your version info reader.Close(); inStream.Close(); } } } The using() statements are to ensure that you do not create a memory leak, as the SPSite and SPWeb are unmanaged objects. Edit: If the version number has been promoted to a library field, you can access it by the following within the for loop above: itm["FieldName"]
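One small addition to the server-side code above: the version history itself hangs off SPListItem.Versions, so the label can be read and stamped into the XML before the file is handed out. A sketch, inside the same foreach loop (my understanding is that index 0 of SPListItemVersionCollection is the current version, but verify against your farm):

string currentLabel = itm.Versions[0].VersionLabel;

foreach (SPListItemVersion v in itm.Versions)
{
    Console.WriteLine("{0} created {1}", v.VersionLabel, v.Created);
}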
{ "language": "en", "url": "https://stackoverflow.com/questions/33252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I load an org.w3c.dom.Document from XML in a string? I have a complete XML document in a string and would like a Document object. Google turns up all sorts of garbage. What is the simplest solution? (In Java 1.5) Solution Thanks to Matt McMinn, I have settled on this implementation. It has the right level of input flexibility and exception granularity for me. (It's good to know if the error came from malformed XML - SAXException - or just bad IO - IOException.) public static org.w3c.dom.Document loadXMLFrom(String xml) throws org.xml.sax.SAXException, java.io.IOException { return loadXMLFrom(new java.io.ByteArrayInputStream(xml.getBytes())); } public static org.w3c.dom.Document loadXMLFrom(java.io.InputStream is) throws org.xml.sax.SAXException, java.io.IOException { javax.xml.parsers.DocumentBuilderFactory factory = javax.xml.parsers.DocumentBuilderFactory.newInstance(); factory.setNamespaceAware(true); javax.xml.parsers.DocumentBuilder builder = null; try { builder = factory.newDocumentBuilder(); } catch (javax.xml.parsers.ParserConfigurationException ex) { } org.w3c.dom.Document doc = builder.parse(is); is.close(); return doc; } A: Just had a similar problem, except i needed a NodeList and not a Document, here's what I came up with. It's mostly the same solution as before, augmented to get the root element down as a NodeList and using erickson's suggestion of using an InputSource instead for character encoding issues. private String DOC_ROOT="root"; String xml=getXmlString(); Document xmlDoc=loadXMLFrom(xml); Element template=xmlDoc.getDocumentElement(); NodeList nodes=xmlDoc.getElementsByTagName(DOC_ROOT); public static Document loadXMLFrom(String xml) throws Exception { InputSource is= new InputSource(new StringReader(xml)); DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); factory.setNamespaceAware(true); DocumentBuilder builder = null; builder = factory.newDocumentBuilder(); Document doc = builder.parse(is); return doc; } A: This works for me in Java 1.5 - I stripped out specific exceptions for readability. import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.parsers.DocumentBuilder; import org.w3c.dom.Document; import java.io.ByteArrayInputStream; public Document loadXMLFromString(String xml) throws Exception { DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); factory.setNamespaceAware(true); DocumentBuilder builder = factory.newDocumentBuilder(); return builder.parse(new ByteArrayInputStream(xml.getBytes())); } A: To manipulate XML in Java, I always tend to use the Transformer API: import javax.xml.transform.Source; import javax.xml.transform.TransformerException; import javax.xml.transform.TransformerFactory; import javax.xml.transform.dom.DOMResult; import javax.xml.transform.stream.StreamSource; public static Document loadXMLFrom(String xml) throws TransformerException { Source source = new StreamSource(new StringReader(xml)); DOMResult result = new DOMResult(); TransformerFactory.newInstance().newTransformer().transform(source , result); return (Document) result.getNode(); } A: Whoa there! There's a potentially serious problem with this code, because it ignores the character encoding specified in the String (which is UTF-8 by default). When you call String.getBytes() the platform default encoding is used to encode Unicode characters to bytes. So, the parser may think it's getting UTF-8 data when in fact it's getting EBCDIC or something… not pretty! 
Instead, use the parse method that takes an InputSource, which can be constructed with a Reader, like this: import java.io.StringReader; import org.xml.sax.InputSource; … return builder.parse(new InputSource(new StringReader(xml))); It may not seem like a big deal, but ignorance of character encoding issues leads to insidious code rot akin to y2k.
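Folding the InputSource advice back into the accepted helper gives a version that is safe for any character data; this is just the two snippets above combined:

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public static Document loadXMLFromString(String xml) throws Exception {
    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    factory.setNamespaceAware(true);
    // A Reader hands the parser characters, not bytes, so the platform
    // default encoding never gets involved.
    return factory.newDocumentBuilder()
                  .parse(new InputSource(new StringReader(xml)));
}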
{ "language": "en", "url": "https://stackoverflow.com/questions/33262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "108" }
Q: How do I best handle role based permissions using Forms Authentication on my ASP.NET web application? I'm using the ASP.NET Login Controls and Forms Authentication for membership/credentials for an ASP.NET web application. I've got two roles: * *Users *Administrators I want pages to be viewable by four different groups: * *Everyone (Default, Help) *Anonymous (CreateUser, Login, PasswordRecovery) *Users (ChangePassword, DataEntry) *Administrators (Report) Expanding on the example in the ASP.NET HOW DO I Video Series: Membership and Roles, I've put those page files into folders matching those groups (screenshot of the folder layout omitted), and I used the ASP.NET Web Site Administration Tool to set up access rules for each folder. It works but seems kludgy to me, and it creates issues when Login.aspx is not at the root and with the ReturnUrl parameter of Login.aspx. Is there a better way to do this? Is there perhaps a simple way I can set permissions at the page level rather than at the folder level? A: A couple of solutions off the top of my head (a sketch of the first is shown after the answers). * *You could set up restrictions for each page in your web.config file. This would allow you to have whatever folder hierarchy you wish to use. However, it will require that you keep the web.config file up to date whenever you add additional pages. The nice part of having the folder structure determine accessibility is that you don't have to think about it when you add in new pages. *Have your pages inherit from custom classes (i.e. EveryonePage, UserPage, AdminPage, etc.) and put a role check in the Page_Load routine. A: One solution I've used in the past is this: * *Create a base page called 'SecurePage' or something to that effect. *Add a property 'AllowedUserRoles' to the base page that is a generic list of user roles (a List<string>, or a List<int> where int is the role ID). *In the Page_Load event of any page extending SecurePage, add each allowed user role to the AllowedUserRoles property. *In the base page, override OnLoad() and check if the current user has one of the roles listed in AllowedUserRoles. This allows each page to be customized without you having to put tons of stuff in your web.config to control each page. A: In the master page I define a public property that toggles security checking, defaulted to true. I also declare a string that is a ;-delimited list of roles needed for that page. In the page load of my master page I do the following: if (_secure) { if (Request.IsAuthenticated) { if (_role.Length > 0) { if (PortalSecurity.IsInRoles(_role)) { return; } else { accessDenied = true; } } else { return; } } } //do whatever you wanna do to people who dont have access.. bump to a login page or whatever Also, you'll have to put an <%@ MasterType %> directive at the top of your pages so you can access the extended properties of your master page.
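A sketch of the web.config alternative from the first answer, using the pages and roles named in the question; one location element per page keeps the files wherever you like:

<location path="Report.aspx">
  <system.web>
    <authorization>
      <allow roles="Administrators" />
      <deny users="*" />
    </authorization>
  </system.web>
</location>
<location path="DataEntry.aspx">
  <system.web>
    <authorization>
      <allow roles="Users,Administrators" />
      <deny users="*" />
    </authorization>
  </system.web>
</location>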
{ "language": "en", "url": "https://stackoverflow.com/questions/33263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What's the false operator in C# good for? There are two weird operators in C#: * *the true operator *the false operator If I understand this right, these operators can be used in types that I want to use in place of a boolean expression, where I don't want to provide an implicit conversion to bool. Let's say I have the following class: public class MyType { public readonly int Value; public MyType(int value) { Value = value; } public static bool operator true (MyType mt) { return mt.Value > 0; } public static bool operator false (MyType mt) { return mt.Value < 0; } } So I can write the following code: MyType mTrue = new MyType(100); MyType mFalse = new MyType(-100); MyType mDontKnow = new MyType(0); if (mTrue) { // Do something. } while (mFalse) { // Do something else. } do { // Another code comes here. } while (mDontKnow) However, for all the examples above, only the true operator is executed. So what's the false operator in C# good for? Note: More examples can be found here, here and here. A: AFAIK, it would be used in a test for false, such as when the && operator comes into play. Remember, && short-circuits, so in the expression if ( mFalse && mTrue) { // ... something } mFalse.false() is called, and upon returning true the expression is reduced to a call to 'mFalse.true()' (which should then return false, or things will get weird). Note that you must implement the & operator in order for that expression to compile, since it's used if mFalse.false() returns false. A: You can use it to override the && and || operators. The && and || operators can't be overridden, but if you override |, &, true and false in exactly the right way, the compiler will call | and & when you write || and &&. For example, look at this code (from http://ayende.com/blog/1574/nhibernate-criteria-api-operator-overloading - where I found out about this trick; archived version by @BiggsTRC): public static AbstractCriterion operator &(AbstractCriterion lhs, AbstractCriterion rhs) { return new AndExpression(lhs, rhs); } public static AbstractCriterion operator |(AbstractCriterion lhs, AbstractCriterion rhs) { return new OrExpression(lhs, rhs); } public static bool operator false(AbstractCriterion criteria) { return false; } public static bool operator true(AbstractCriterion criteria) { return false; } This is obviously a side effect and not the way it's intended to be used, but it is useful. A: It appears from the MSDN article you linked to that it was provided to allow for nullable boolean types prior to the Nullable type (i.e. int?, bool?, etc.) being introduced into the language in C# 2. Thus you would store an internal value indicating whether the value is true, false or null, i.e. in your example >0 for true, <0 for false and ==0 for null, and then you'd get SQL-style null semantics. You would also have to implement an .IsNull method or property in order that nullity could be checked explicitly. Comparing to SQL, imagine a table Table with 3 rows with value Foo set to true, 3 rows with value Foo set to false and 3 rows with value Foo set to null. SELECT COUNT(*) FROM Table WHERE Foo = TRUE OR Foo = FALSE 6 In order to count all rows you'd have to do the following:- SELECT COUNT(*) FROM Table WHERE Foo = TRUE OR Foo = FALSE OR Foo IS NULL 9 This 'IS NULL' syntax would have equivalent code in your class as .IsNull().
LINQ makes the comparison to C# even clearer:- int totalCount = (from s in MyTypeEnumerable where s || !s select s).Count(); Imagining that MyTypeEnumerable has exactly the same contents as the database, i.e. 3 values equal to true, 3 values equal to false and 3 values equal to null, totalCount would evaluate to 6 in this case. However, if we re-wrote the code as:- int totalCount = (from s in MyTypeEnumerable where s || !s || s.IsNull() select s).Count(); Then totalCount would evaluate to 9. The DBNull example given in the linked MSDN article on the false operator demonstrates a class in the BCL which has this exact behaviour. In effect the conclusion is: you shouldn't use this unless you're completely sure you want this type of behaviour; it's better to just use the far simpler nullable syntax! Update: I just noticed you need to manually override the logic operators !, || and && to make this work properly. I believe the false operator feeds into these logical operators, i.e. indicating truth, falsity or 'otherwise'. As noted in another comment, !x won't work off the bat; you have to overload !. Weirdness! A: Shog9 and Nir: thanks for your answers. Those answers pointed me to Steve Eichert's article, and that pointed me to MSDN: The operation x && y is evaluated as T.false(x) ? x : T.&(x, y), where T.false(x) is an invocation of the operator false declared in T, and T.&(x, y) is an invocation of the selected operator &. In other words, x is first evaluated and operator false is invoked on the result to determine if x is definitely false. Then, if x is definitely false, the result of the operation is the value previously computed for x. Otherwise, y is evaluated, and the selected operator & is invoked on the value previously computed for x and the value computed for y to produce the result of the operation. A: The page you link to http://msdn.microsoft.com/en-us/library/6x6y6z4d.aspx says what they were for, which was a way of handling nullable bools before nullable value types were introduced. I'd guess nowadays they're good for the same sort of stuff as ArrayList - i.e. absolutely nothing.
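To make the && mechanics above concrete, here is a minimal sketch using MyType from the question. The & overload is the missing piece the compiler requires; its min-of-values body is my own arbitrary illustrative choice:

public static MyType operator &(MyType a, MyType b)
{
    // Arbitrary illustrative semantics: combine by taking the smaller value.
    return new MyType(a.Value < b.Value ? a.Value : b.Value);
}

// With that in place:
// MyType r1 = mFalse && mTrue;     // operator false(mFalse) returns true,
//                                  // so r1 is mFalse and & is never called.
// MyType r2 = mTrue && mDontKnow;  // operator false(mTrue) returns false,
//                                  // so & runs: r2 is new MyType(0).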
{ "language": "en", "url": "https://stackoverflow.com/questions/33265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "108" }
Q: Automated PDF Creation from URL Is there a PDF library that one can use to automate creating PDFs from URLs? The current approach I use is to "Print" a page and select a PDF plugin like PrimoPDF to generate the PDF document, but I want to automate that.

A: ABCPDF can do it.

A: wkhtmltopdf generates the most accurate PDFs out of web pages that I have ever found. It renders them in WebKit and converts this to PDF, so the PDF will look exactly like the web page, including all styles and other fancy things. Just use it like this:

wkhtmltopdf http://www.whatever.com/page.html page.pdf

A: Depends on what platform you are on:

* Windows - Websupergoo's ABC PDF: http://www.websupergoo.com/
* *nix - Prince XML: http://www.princexml.com/overview/

A: I've also used ABC PDF (from classic ASP) and found it very good.
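Since the question is about automation, here is a hypothetical sketch of driving wkhtmltopdf from .NET by shelling out to it (it assumes the tool is installed and on the PATH; the URL and output name are just the example values from the answer above):

using System;
using System.Diagnostics;

class PdfFromUrl
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "wkhtmltopdf",
            Arguments = "http://www.whatever.com/page.html page.pdf",
            UseShellExecute = false,
            RedirectStandardError = true
        };

        using (var proc = Process.Start(psi))
        {
            // Drain stderr before waiting so a full pipe can't deadlock us.
            string errors = proc.StandardError.ReadToEnd();
            proc.WaitForExit();

            if (proc.ExitCode != 0)
                Console.Error.WriteLine("Conversion failed: " + errors);
        }
    }
}

The same pattern works for any of the command-line converters mentioned here; only the file name and arguments change.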
{ "language": "en", "url": "https://stackoverflow.com/questions/33275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Protecting API Secret Keys in a Thick Client application Within an application, I've got secret keys used to calculate a hash for an API call. In a .NET application it's fairly easy to use a program like Reflector to pull information out of the assembly, including these keys. Is obfuscating the assembly a good way of securing these keys?

A: Probably not. Look into cryptography and Windows' built-in information-hiding mechanisms (DPAPI and storing the keys in an ACL-restricted registry key, for example). That's as good as you're going to get for security you need to keep on the same system as your application. If you are looking for a way to stop someone physically sitting at the machine from getting your information, forget it. If someone is determined, and has unrestricted access to a computer that is not under your control, there is no way to be 100% certain that the data is protected under all circumstances. Someone who is determined will get at it if they want to.

A: I wouldn't think so, as obfuscating (as I understand it at least) will simply mess around with the method names to make it hard (but not impossible) to understand the code. This won't change the data of the actual key (which I'm guessing you have stored in a constant somewhere). If you just want to make it somewhat harder to see, you could run a simple cipher on the plaintext (like ROT-13 or something) so that it's at least not stored in the clear in the code itself. But that's certainly not going to stop any determined hacker from accessing your key. A stronger encryption method won't help, because you'd still need to store the key for THAT in the code, and there's nothing protecting that. The only really secure thing I can think of is to keep the key outside of the application somehow, and then restrict access to the key. For instance, you could keep the key in a separate file and then protect the file with an OS-level user-based restriction; that would probably work. You could do the same with a database connection (again, relying on the user-based access restriction to keep non-authorized users out of the database). I've toyed with the idea of doing this for my apps but I've never implemented it.

A: DannySmurf is correct that you can't hide keys from the person running an application; if the application can get to the keys, so can the person. However, what are you trying to accomplish, exactly? Depending on what it is, there are often ways to accomplish your goal that don't simply rely on keeping a secret "secret" on your user's machine.

A: Late to the game here... The approach of storing the keys in the assembly / assembly config is fundamentally insecure. There is no ironclad way to store them, as a determined user will have access. I don't care if you use the best / most expensive obfuscation product on the planet. I don't care if you use DPAPI to secure the data (although this is better). I don't care if you use a local OS-protected key store (this is even better still). None are ideal, as all suffer from the same core issue: the user has access to the keys, and they are there, unchanging, for days, weeks, possibly even months and years. A far more secure approach is to secure your API calls with tried and true PKI. However, this has obvious performance overhead if your API calls are chatty, but for the vast majority of applications this is a non-issue.
If performance is a concern, you can use Diffie-Hellman over asymmetric PKI to establish a shared secret symmetric key for use with a cipher such as AES. "Shared" in this case means shared between client and server, not all clients / users. There is no hard-coded / baked-in key. Anywhere. The keys are transient, regenerated every time the user runs the program, or, if you are truly paranoid, they could time out and require recreation. The computed shared secret symmetric keys themselves get stored in memory only, in a SecureString. They are hard to extract, and even if you do, they are only good for a very short time, and only for communication between that particular client (i.e. that session). In other words, even if somebody does hack their local keys, they are only good for interfering with local communication. They can't use this knowledge to affect other users, unlike a baked-in key shared by all users via code / config. Furthermore, the keys themselves are never, ever passed over the network. The client Alice and server Bob independently compute them. The values they exchange in order to do this can be intercepted, but a third party Charlie can't derive the key passively; the real danger is an active man in the middle, where Charlie negotiates separate keys with Alice and with Bob. That is why you use the (significantly more costly) asymmetric PKI to authenticate the key generation between Alice and Bob. In these systems, the key generation is quite often coupled with authentication and thus session creation. You "login" and create your "session" over PKI, and after that is complete, both the client and the server independently have a symmetric key which can be used for order-of-magnitude faster encryption for all subsequent communication in that session. For high-scale servers, this is important to save compute cycles on decryption compared to using, say, TLS for everything.

But wait: we're not secure yet. We've only prevented reading the messages. Note that it is still necessary to use a message digest mechanism (in practice a keyed digest such as an HMAC) to prevent man-in-the-middle manipulation. While nobody can read the data being transmitted, without a digest there is nothing preventing them from modifying it. So you hash the message before encryption, then send the hash along with the message. The server then re-hashes the payload upon decryption and verifies that it matches the hash that was part of the message. If the message was modified in transit, the hashes won't match, and the entire message is discarded / ignored.

The final mechanism needed is a guard against replay attacks. At this point, you have prevented people from reading your data, as well as modifying your data, but you haven't prevented them from simply sending it again. If this is a problem for your application, its protocol must carry enough data, and both client and server must keep enough state, to detect a replay. This could be something as simple as a counter that is part of the encrypted payload. Note that if you are using a transport such as UDP, you probably already have a mechanism to deal with duplicated packets, and thus can already deal with replay attacks.

What should be obvious is that getting this right is not easy. Thus, use PKI unless you ABSOLUTELY cannot. Note that this approach is used heavily in the games industry, where it is highly desirable to spend as little compute on each player as possible to achieve higher scalability, while at the same time providing security from hacking / prying eyes. So in conclusion, if this is really something that is a concern, instead of trying to find a way to securely store the API keys, don't.
Instead, change how your app uses this API (assuming you have control of both sides, naturally). Use a PKI, or use a PKI-shared symmetric hybrid if PKI will be too slow (which is RARELY a problem these days). Then you won't have anything stored that is a security concern.
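For what it's worth, here is a minimal sketch of the transient key agreement described above, using the ECDiffieHellman type built into .NET (the names and the console check are illustrative; in a real protocol the public keys would be signed via your PKI to block the active man in the middle):

using System;
using System.Security.Cryptography;

class SessionKeySketch
{
    static void Main()
    {
        using (ECDiffieHellman alice = ECDiffieHellman.Create())  // client
        using (ECDiffieHellman bob = ECDiffieHellman.Create())    // server
        {
            // Only the public keys cross the wire; the secret never does.
            byte[] aliceKey = alice.DeriveKeyMaterial(bob.PublicKey);
            byte[] bobKey = bob.DeriveKeyMaterial(alice.PublicKey);

            // Both sides have independently computed the same session key.
            Console.WriteLine(Convert.ToBase64String(aliceKey) ==
                              Convert.ToBase64String(bobKey)); // True

            // The derived bytes (32 with the default SHA-256 derivation)
            // can key a symmetric cipher for the rest of the session; the
            // key lives only in memory and dies with the session.
            using (Aes aes = Aes.Create())
            {
                aes.Key = aliceKey;
            }
        }
    }
}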
{ "language": "en", "url": "https://stackoverflow.com/questions/33288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Call onresize from ASP.NET content page I have a JavaScript method that I need to run on one of my pages, hooked to the onresize event. However, I don't see how I can set that event from my content page. I wish I could just put it on my master page, but I don't need the method to be called on all pages that use that master page. Any help would be appreciated.

A: Place the following in your content page:

<script type="text/javascript">
    // here is a cross-browser compatible way of connecting
    // handlers to events, in case you don't have one
    function attachEventHandler(element, eventToHandle, eventHandler) {
        if (element.attachEvent) {
            // IE takes the "on"-prefixed name, e.g. "onresize"
            element.attachEvent(eventToHandle, eventHandler);
        } else if (element.addEventListener) {
            // standards-compliant browsers want the name without the prefix
            element.addEventListener(eventToHandle.replace("on", ""), eventHandler, false);
        } else {
            element[eventToHandle] = eventHandler;
        }
    }

    attachEventHandler(window, "onresize", function() {
        // the code you want to run when the browser is resized
    });
</script>

That code should give you the basic idea of what you need to do. Hopefully you are using a library that already has code to help you wire up event handlers and such.

A: How about using code like the following in your content page (C#)?

Page.ClientScript.RegisterStartupScript(this.GetType(), "resizeMyPage", "window.onresize = function() { resizeMyPage(); }", true);

Thus, you could have a resizeMyPage function defined somewhere in the JavaScript and it would be run whenever the browser is resized!

A: I had the same problem and came across this post: IE Resize Bug Revisited. The above code works, but IE has a problem where onresize is also triggered when the body tag changes shape. This blog gives an alternate method which works well.
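Putting the last two answers together, here is a hypothetical sketch of a content-page code-behind that registers a resize handler guarded against the IE quirk just described (the guard simply compares viewport dimensions; resizeMyPage is assumed to be defined elsewhere on the page, and the linked post's own workaround may differ):

protected void Page_Load(object sender, EventArgs e)
{
    const string script = @"
        var lastW = null, lastH = null;
        window.onresize = function () {
            // Only react when the viewport dimensions actually changed,
            // not when IE re-fires onresize for body reflows.
            var w = document.documentElement.clientWidth;
            var h = document.documentElement.clientHeight;
            if (w === lastW && h === lastH) return;
            lastW = w; lastH = h;
            resizeMyPage();
        };";

    Page.ClientScript.RegisterStartupScript(
        this.GetType(), "resizeMyPage", script, true);
}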
{ "language": "en", "url": "https://stackoverflow.com/questions/33301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }