Q: How do I HTML Encode all the output in a web application? I want to prevent XSS attacks in my web application. I found that HTML Encoding the output can really prevent XSS attacks. Now the problem is: how do I HTML encode every single output in my application? Is there a way to automate this? I appreciate answers for JSP, ASP.net and PHP. A: One thing that you shouldn't do is filter the input data as it comes in. People often suggest this, since it's the easiest solution, but it leads to problems. Input data can be sent to multiple places, besides being output as HTML. It might be stored in a database, for example. The rules for filtering data sent to a database are very different from the rules for filtering HTML output. If you HTML-encode everything on input, you'll end up with HTML in your database. (This is also why PHP's "magic quotes" feature is a bad idea.) You can't anticipate all the places your input data will travel. The safe approach is to prepare the data just before it's sent somewhere. If you're sending it to a database, escape the single quotes. If you're outputting HTML, escape the HTML entities. And once it's sent somewhere, if you still need to work with the data, use the original un-escaped version. This is more work, but you can reduce it by using template engines or libraries. A: You don't want to encode all HTML, you only want to HTML-encode any user input that you're outputting. For PHP: htmlentities and htmlspecialchars A: For JSPs, you can have your cake and eat it too, with the c:out tag, which escapes XML by default. This means you can bind to your properties as raw elements: <input name="someName.someProperty" value="<c:out value='${someName.someProperty}' />" /> When bound to a string, someName.someProperty will contain the XML input, but when being output to the page, it will be automatically escaped to provide the XML entities. This is particularly useful for links for page validation. A: A nice way I used to escape all user input is by writing a modifier for Smarty which escapes all variables passed to the template, except for the ones that have |unescape attached to them. That way you only give HTML access to the elements you explicitly give access to. I don't have that modifier any more, but about the same version can be found here: http://www.madcat.nl/martijn/archives/16-Using-smarty-to-prevent-HTML-injection..html In the new Django 1.0 release this works exactly the same way, yay :) A: You could wrap echo / print etc. in your own methods which you can then use to escape output. i.e. instead of echo "blah"; use myecho('blah'); you could even have a second param that turns off escaping if you need it. In one project we had a debug mode in our output functions which made all the output text going through our method invisible. Then we knew that anything left on the screen HADN'T been escaped! Was very useful tracking down those naughty unescaped bits :) A: My personal preference is to diligently encode anything that's coming from the database, business layer or from the user. In ASP.Net this is done by using Server.HtmlEncode(string). The reason to encode everything is that even properties which you might assume to be boolean or numeric could contain malicious code (for example, checkbox values, if they're handled improperly, could come back as strings. If you're not encoding them before sending the output to the user, then you've got a vulnerability).
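As a concrete illustration of the ASP.NET advice above, here is a minimal, hedged sketch of encoding at the point of output in a Web Forms code-behind. The page name, the control names (CommentLabel, CommentLiteral) and the query-string key are invented for the example, not taken from the question.

using System;
using System.Web;
using System.Web.UI.WebControls;

// Sketch only: encode untrusted text where it is written into HTML, not where it is read in.
public partial class CommentsPage : System.Web.UI.Page
{
    // In a real project these controls would be declared in the .aspx markup.
    protected Label CommentLabel;
    protected Literal CommentLiteral;

    protected void Page_Load(object sender, EventArgs e)
    {
        string comment = Request.QueryString["comment"] ?? string.Empty; // untrusted input

        // Encode for element content...
        CommentLabel.Text = Server.HtmlEncode(comment);

        // ...and use the attribute-specific encoder when building attribute values.
        CommentLiteral.Text = "<input type=\"text\" value=\""
                              + HttpUtility.HtmlAttributeEncode(comment) + "\" />";
    }
}

The same rule carries over to the PHP and JSP answers: store the raw value, and apply htmlspecialchars or c:out (or the attribute-specific variant) only at the moment the value is written into the page.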
A: If you do actually HTML encode every single output, the user will see plain text of &lt;html&gt; instead of a functioning web app. EDIT: If you HTML encode every single input, you'll have problem accepting external password containing < etc.. A: The only way to truly protect yourself against this sort of attack is to rigorously filter all of the input that you accept, specifically (although not exclusively) from the public areas of your application. I would recommend that you take a look at Daniel Morris's PHP Filtering Class (a complete solution) and also the Zend_Filter package (a collection of classes you can use to build your own filter). PHP is my language of choice when it comes to web development, so apologies for the bias in my answer. Kieran. A: there was a good essay from Joel on software (making wrong code look wrong I think, I'm on my phone otherwise I'd have a URL for you) that covered the correct use of Hungarian notation. The short version would be something like: Var dsFirstName, uhsFirstName : String; Begin uhsFirstName := request.queryfields.value['firstname']; dsFirstName := dsHtmlToDB(uhsFirstName); Basically prefix your variables with something like "us" for unsafe string, "ds" for database safe, "hs" for HTML safe. You only want to encode and decode where you actually need it, not everything. But by using they prefixes that infer a useful meaning looking at your code you'll see real quick if something isn't right. And you're going to need different encode/decode functions anyways. A: Output encoding is by far the best defense. Validating input is great for many reasons, but not 100% defense. If a database becomes infected with XSS via attack (i.e. ASPROX), mistake, or maliciousness input validation does nothing. Output encoding will still work. A: OWASP has a nice API to encode HTML output, either to use as HTML text (e.g. paragraph or <textarea> content) or as an attribute's value (e.g. for <input> tags after rejecting a form): encodeForHTML($input) // Encode data for use in HTML using HTML entity encoding encodeForHTMLAttribute($input) // Encode data for use in HTML attributes. The project (the PHP version) is hosted under http://code.google.com/p/owasp-esapi-php/ and is also available for some other languages, e.g. .NET. Remember that you should encode everything (not only user input), and as late as possible (not when storing in DB but when outputting the HTTP response).
{ "language": "en", "url": "https://stackoverflow.com/questions/58694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Alternatives to XCopy for copying lots of files? The situation: I have a pieceofcrapuous laptop. One of the things that makes it pieceofcrapuous is that the battery is dead, and the power cable pulls out of the back with little effort. I recently received a non-pieceofcrapuous laptop, and I am in the process of copying everything from old to new. I'm trying to xcopy c:*.* from the old machine to an external hard drive, but because the cord pulls out so frequently, the xcopy is interrupted fairly often. What I need is a switch in XCopy that will copy everything except for files that already exist in the destination folder -- the exact opposite of the behavior of the /U switch. Does anyone know of a way to do this? A: /D may be what you are looking for. I find it works quite fast for backing up, as existing files are not copied. xcopy "O:\*.*" N:\Whatever /C /D /S /H /C Continues copying even if errors occur. /D:m-d-y Copies files changed on or after the specified date. If no date is given, copies only those files whose source time is newer than the destination time. /S Copies directories and subdirectories except empty ones. /H Copies hidden and system files also. More information: http://www.computerhope.com/xcopyhlp.htm A: I'm a big fan of TeraCopy. A: Beyond Compare 3 is the best utility I've seen for things like this. It makes everything really easy to assess, and really easy to manipulate. A: It was not clear if you only wanted a command line tool, but Microsoft's free SyncToy program is great for maintaining a replication between a pair of volumes. It supports pushing changes in either or both directions. That is, it supports several different types of replication modes. A: I find RoboCopy is a good alternative to xcopy. It supports high-latency connections much better and supports resuming a copy. References Wikipedia - robocopy Downloads Edit Robocopy was introduced as a standard feature of Windows Vista and Windows Server 2008. * *Robocopy is shipped as part of the Windows Server 2003 resource kit and can be downloaded from the Microsoft download site. *A very simple GUI has also been released for RoboCopy on TechNet http://technet.microsoft.com/en-us/magazine/cc160891.aspx A: XcopyGUI. A small, standalone GUI front-end for xcopy. Free. http://lorenstuff.weebly.com/ A: robocopy c:\sourceDirectory\*.* d:\destinationDirectory\*.* /R:5 /W:3 /Z /XX /TEE This will work for your alternative to xCopy... best method imho Good luck! A: I would suggest using rsync; several ports are available, but cwrsync seems to work nicely on Windows. A: How about unison?
{ "language": "en", "url": "https://stackoverflow.com/questions/58697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to log in T-SQL I'm using ADO.NET to access SQL Server 2005 and would like to be able to log from inside the T-SQL stored procedures that I'm calling. Is that somehow possible? I'm unable to see output from the 'print'-statement when using ADO.NET and since I want to use logging just for debuging the ideal solution would be to emit messages to DebugView from SysInternals. A: I think writing to a log table would be my preference. Alternatively, as you are using 2005, you could write a simple SQLCLR procedure to wrap around the EventLog. Or you could use xp_logevent if you wanted to write to SQL log A: I solved this by writing a SQLCLR-procedure as Eric Z Beard suggested. The assembly must be signed with a strong name key file. using System; using System.Data; using System.Data.SqlClient; using System.Data.SqlTypes; using Microsoft.SqlServer.Server; public partial class StoredProcedures { [Microsoft.SqlServer.Server.SqlProcedure] public static int Debug(string s) { System.Diagnostics.Debug.WriteLine(s); return 0; } } } Created a key and a login: USE [master] CREATE ASYMMETRIC KEY DebugProcKey FROM EXECUTABLE FILE = 'C:\..\SqlServerProject1\bin\Debug\SqlServerProject1.dll' CREATE LOGIN DebugProcLogin FROM ASYMMETRIC KEY DebugProcKey GRANT UNSAFE ASSEMBLY TO DebugProcLogin Imported it into SQL Server: USE [mydb] CREATE ASSEMBLY SqlServerProject1 FROM 'C:\..\SqlServerProject1\bin\Debug\SqlServerProject1.dll' WITH PERMISSION_SET = unsafe CREATE FUNCTION dbo.Debug( @message as nvarchar(200) ) RETURNS int AS EXTERNAL NAME SqlServerProject1.[StoredProcedures].Debug Then I was able to log in T-SQL procedures using exec Debug @message = 'Hello World' A: You can either log to a table, by simply inserting a new row, or you can implement a CLR stored procedure to write to a file. Be careful with writing to a table, because if the action happens in a transaction and the transaction gets rolled back, your log entry will disappear. A: Logging from inside a SQL sproc would be better done to the database itself. T-SQL can write to files but it's not really designed for it. A: There's the PRINT command, but I prefer logging into a table so you can query it. A: You can write rows to a log table from within a stored procedure. As others have indicated, you could go out of your way to write to some text file or other log with CLR or xp_logevent, but it seems like you need more volume than would be practical for such uses. The tough cases occur (and it's these that you really need your log for) when transactions fail. Since any logging that occurs during these transactions will be rolled back along with the transaction that they are part of, it is best to have a logging API that your clients can use to log errors. This can be a simple DAL that either logs to the same database, or to a shared one. A: For what it's worth, I've found that when I don't assign an InfoMessage handler to my SqlConnection: sqlConnection.InfoMessage += new SqlInfoMessageEventHandler(MySqlConnectionInfoMessageHandler); where the signature of the InfoMessageHandler looks like this: MySqlConnectionInfoMessageHandler(object sender, SqlInfoMessageEventArgs e) then my PRINT statements in my Stored Procs do not appear in DbgView. A: You could use output variables for passing back messages, but that relies on the proc executing without errors. create procedure usp_LoggableProc @log varchar(max) OUTPUT as -- T-SQL statement here ... 
select @log = @log + 'X is foo' And then in your ADO code somehwere: string log = (string)SqlCommand.Parameters["@log"].Value; You could use raiserror to create your own custom errors with the information that you require and that will be available to you through the usual SqlException Errors collection in your ADO code: RAISERROR('X is Foo', 10, 1) Hmmm but yeah, can't help feeling just for debugging and in your situation, just insert varchar messages to an error table like the others have suggested and select * from it when you're debugging. A: You may want to check Log4TSQL. It provides Database-Logging for Stored Procedures and Triggers in SQL Server 2005 - 2008. You have the possibility to set separate, independent log-levels on a per Procedure/Trigger basis. A: Use cmd commands with cmdshell I found this while searching for an answer to this question. https://www.databasejournal.com/features/mssql/article.php/1467601/A-general-logging-t-sql-process-to-write-to-txt-files.htm select @cmdtxt = "echo " + @logEntry + " >> drive:\path\filename.txt" exec master..xp_cmdshell @cmdtxt A: I've been searching for a way to do this, as I am trying to debug some complicated, chained, stored procedures, all that are called by an external API, and which operate in the context of a transaction. I'd been writing diagnostic messages into a logging file, but if the transaction rolls back, the new log entries disappear with the rollback. I found a way! And it works pretty well. And it has already saved me many, many hours of debugging time. * *Create a linked server to the same SQL instance, using the login's security context. In my case, the simplest method was to use the localhost loop address, 127.0.0.1 *Set the linked server to enable RPC, and to NOT "Enable Promotion of Distributed Transactions". This means that calls through that server will take place outside of your transaction context. *In your logging procedure, (I have an example excerpted below) write to the log table using the procedure through loopback linked server if you are in a transaction. You can write to it the usual way if your are not. Writing though the linked server is considerably slower than direct DML. Voila! My in-process logging survives the rollback, and I can find out what's happening internally when things are going south. I can't claim credit for thinking of this--I found the approach after some time with Google, but I'm so pleased with the result I felt like I had to share it. USE TX GO CREATE PROCEDURE dbo.LogError(@errorSource Varchar(32), @msg Varchar(400)) AS BEGIN SET NOCOUNT ON IF @@TRANCOUNT > 0 EXEC [127.0.0.1].TX.dbo.LogError @errorSource, @msg ELSE INSERT INTO TX.dbo.ErrorLog(source_module, message) SELECT @errorSource, @msg END GO
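To close the loop on the ADO.NET side, here is a minimal sketch of subscribing to SqlConnection.InfoMessage so that PRINT and low-severity RAISERROR output from a procedure becomes visible to the calling application. The connection string and the procedure name usp_DoWork are placeholders for illustration only.

using System;
using System.Data;
using System.Data.SqlClient;

// Sketch only: informational messages (PRINT, RAISERROR with severity 10 or lower)
// are delivered through the InfoMessage event instead of being thrown as exceptions.
class ProcMessageDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=mydb;Integrated Security=SSPI"))
        {
            conn.InfoMessage += delegate(object sender, SqlInfoMessageEventArgs e)
            {
                Console.WriteLine("SQL says: " + e.Message);
            };

            conn.Open();
            using (SqlCommand cmd = new SqlCommand("usp_DoWork", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.ExecuteNonQuery();
            }
        }
    }
}

From there you can forward e.Message to a log table, a file or System.Diagnostics.Debug, as the answers above suggest.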
{ "language": "en", "url": "https://stackoverflow.com/questions/58709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How would you design a very "Pythonic" UI framework? I have been playing with the Ruby library "shoes". Basically you can write a GUI application in the following way: Shoes.app do t = para "Not clicked!" button "The Label" do alert "You clicked the button!" # when clicked, make an alert t.replace "Clicked!" # ..and replace the label's text end end This made me think - how would I design a similarly nice-to-use GUI framework in Python? One that doesn't have the usual tyings of basically being wrappers to a C* library (in the case of GTK, Tk, wx, QT etc etc) Shoes takes things from web development (like #f0c2f0 style colour notation, CSS layout techniques, like :margin => 10), and from Ruby (extensively using blocks in sensible ways) Python's lack of "rubyish blocks" makes a (metaphorically) direct port impossible: def Shoeless(Shoes.app): self.t = para("Not clicked!") def on_click_func(self): alert("You clicked the button!") self.t.replace("clicked!") b = button("The label", click=self.on_click_func) Nowhere near as clean, and it wouldn't be nearly as flexible, and I'm not even sure if it would be implementable. Using decorators seems like an interesting way to map blocks of code to a specific action: class BaseControl: def __init__(self): self.func = None def clicked(self, func): self.func = func def __call__(self): if self.func is not None: self.func() class Button(BaseControl): pass class Label(BaseControl): pass # The actual application code (that the end-user would write) class MyApp: ok = Button() la = Label() @ok.clicked def clickeryHappened(): print "OK Clicked!" if __name__ == '__main__': a = MyApp() a.ok() # trigger the clicked action Basically the decorator function stores the function; then, when the action occurs (say, a click), the appropriate function is executed. The scope of various stuff (say, the la label in the above example) could be rather complicated, but it seems doable in a fairly neat manner. A: You could actually pull this off, but it would require using metaclasses, which are deep magic (there be dragons). If you want an intro to metaclasses, there's a series of articles from IBM which manage to introduce the ideas without melting your brain. The source code from an ORM like SQLObject might help, too, since it uses this same kind of declarative syntax. A: I was never satisfied with David Mertz's articles at IBM on metaclasses so I recently wrote my own metaclass article. Enjoy. A: This is extremely contrived and not pythonic at all, but here's my attempt at a semi-literal translation using the new "with" statement. with Shoes(): t = Para("Not clicked!") with Button("The Label"): Alert("You clicked the button!") t.replace("Clicked!") The hardest part is dealing with the fact that Python will not give us anonymous functions with more than one statement in them. To get around that, we could create a list of commands and run through those... 
Anyway, here's the backend code I ran this with: context = None class Nestable(object): def __init__(self,caption=None): self.caption = caption self.things = [] global context if context: context.add(self) def __enter__(self): global context self.parent = context context = self def __exit__(self, type, value, traceback): global context context = self.parent def add(self,thing): self.things.append(thing) print "Adding a %s to %s" % (thing,self) def __str__(self): return "%s(%s)" % (self.__class__.__name__, self.caption) class Shoes(Nestable): pass class Button(Nestable): pass class Alert(Nestable): pass class Para(Nestable): def replace(self,caption): Command(self,"replace",caption) class Command(Nestable): def __init__(self, target, command, caption): self.command = command self.target = target Nestable.__init__(self,caption) def __str__(self): return "Command(%s text of %s with \"%s\")" % (self.command, self.target, self.caption) def execute(self): self.target.caption = self.caption A: ## All you need is this class: class MainWindow(Window): my_button = Button('Click Me') my_paragraph = Text('This is the text you wish to place') my_alert = AlertBox('What what what!!!') @my_button.clicked def my_button_clicked(self, button, event): self.my_paragraph.text.append('And now you clicked on it, the button that is.') @my_paragraph.text.changed def my_paragraph_text_changed(self, text, event): self.button.text = 'No more clicks!' @my_button.text.changed def my_button_text_changed(self, text, event): self.my_alert.show() ## The Style class is automatically gnerated by the framework ## but you can override it by defining it in the class: ## ## class MainWindow(Window): ## class Style: ## my_blah = {'style-info': 'value'} ## ## or like you see below: class Style: my_button = { 'background-color': '#ccc', 'font-size': '14px'} my_paragraph = { 'background-color': '#fff', 'color': '#000', 'font-size': '14px', 'border': '1px solid black', 'border-radius': '3px'} MainWindow.Style = Style ## The layout class is automatically generated ## by the framework but you can override it by defining it ## in the class, same as the Style class above, or by ## defining it like this: class MainLayout(Layout): def __init__(self, style): # It takes the custom or automatically generated style class upon instantiation style.window.pack(HBox().pack(style.my_paragraph, style.my_button)) MainWindow.Layout = MainLayout if __name__ == '__main__': run(App(main=MainWindow)) It would be relatively easy to do in python with a bit of that metaclass python magic know how. Which I have. And a knowledge of PyGTK. Which I also have. Gets ideas? A: With some Metaclass magic to keep the ordering I have the following working. I'm not sure how pythonic it is but it is good fun for creating simple things. class w(Wndw): title='Hello World' class txt(Txt): # either a new class text='Insert name here' lbl=Lbl(text='Hello') # or an instance class greet(Bbt): text='Greet' def click(self): #on_click method self.frame.lbl.text='Hello %s.'%self.frame.txt.text app=w() A: The only attempt to do this that I know of is Hans Nowak's Wax (which is unfortunately dead). A: The closest you can get to rubyish blocks is the with statement from pep343: http://www.python.org/dev/peps/pep-0343/ A: If you use PyGTK with glade and this glade wrapper, then PyGTK actually becomes somewhat pythonic. A little at least. Basically, you create the GUI layout in Glade. You also specify event callbacks in glade. 
Then you write a class for your window like this: class MyWindow(GladeWrapper): GladeWrapper.__init__(self, "my_glade_file.xml", "mainWindow") self.GtkWindow.show() def button_click_event (self, *args): self.button1.set_label("CLICKED") Here, I'm assuming that I have a GTK Button somewhere called button1 and that I specified button_click_event as the clicked callback. The glade wrapper takes a lot of effort out of event mapping. If I were to design a Pythonic GUI library, I would support something similar, to aid rapid development. The only difference is that I would ensure that the widgets have a more pythonic interface too. The current PyGTK classes seem very C to me, except that I use foo.bar(...) instead of bar(foo, ...) though I'm not sure exactly what I'd do differently. Probably allow for a Django models style declarative means of specifying widgets and events in code and allowing you to access data though iterators (where it makes sense, eg widget lists perhaps), though I haven't really thought about it. A: Maybe not as slick as the Ruby version, but how about something like this: from Boots import App, Para, Button, alert def Shoeless(App): t = Para(text = 'Not Clicked') b = Button(label = 'The label') def on_b_clicked(self): alert('You clicked the button!') self.t.text = 'Clicked!' Like Justin said, to implement this you would need to use a custom metaclass on class App, and a bunch of properties on Para and Button. This actually wouldn't be too hard. The problem you run into next is: how do you keep track of the order that things appear in the class definition? In Python 2.x, there is no way to know if t should be above b or the other way around, since you receive the contents of the class definition as a python dict. However, in Python 3.0 metaclasses are being changed in a couple of (minor) ways. One of them is the __prepare__ method, which allows you to supply your own custom dictionary-like object to be used instead -- this means you'll be able to track the order in which items are defined, and position them accordingly in the window. A: This could be an oversimplification, i don't think it would be a good idea to try to make a general purpose ui library this way. On the other hand you could use this approach (metaclasses and friends) to simplify the definition of certain classes of user interfaces for an existing ui library and depending of the application that could actually save you a significant amount of time and code lines. A: I have this same problem. I wan to to create a wrapper around any GUI toolkit for Python that is easy to use, and inspired by Shoes, but needs to be a OOP approach (against ruby blocks). More information in: http://wiki.alcidesfonseca.com/blog/python-universal-gui-revisited Anyone's welcome to join the project. A: If you really want to code UI, you could try to get something similar to django's ORM; sth like this to get a simple help browser: class MyWindow(Window): class VBox: entry = Entry() bigtext = TextView() def on_entry_accepted(text): bigtext.value = eval(text).__doc__ The idea would be to interpret some containers (like windows) as simple classes, some containers (like tables, v/hboxes) recognized by object names, and simple widgets as objects. I dont think one would have to name all containers inside a window, so some shortcuts (like old-style classes being recognized as widgets by names) would be desirable. About the order of elements: in MyWindow above you don't have to track this (window is conceptually a one-slot container). 
In other containers you can try to keep track of the order assuming that each widget constructor have access to some global widget list. This is how it is done in django (AFAIK). Few hacks here, few tweaks there... There are still few things to think of, but I believe it is possible... and usable, as long as you don't build complicated UIs. However I am pretty happy with PyGTK+Glade. UI is just kind of data for me and it should be treated as data. There's just too much parameters to tweak (like spacing in different places) and it is better to manage that using a GUI tool. Therefore I build my UI in glade, save as xml and parse using gtk.glade.XML(). A: Personally, I would try to implement JQuery like API in a GUI framework. class MyWindow(Window): contents = ( para('Hello World!'), button('Click Me', id='ok'), para('Epilog'), ) def __init__(self): self['#ok'].click(self.message) self['para'].hover(self.blend_in, self.blend_out) def message(self): print 'You clicked!' def blend_in(self, object): object.background = '#333333' def blend_out(self, object): object.background = 'WindowBackground' A: Here's an approach that goes about GUI definitions a bit differently using class-based meta-programming rather than inheritance. This is largley Django/SQLAlchemy inspired in that it is heavily based on meta-programming and separates your GUI code from your "code code". I also think it should make heavy use of layout managers like Java does because when you're dropping code, no one wants to constantly tweak pixel alignment. I also think it would be cool if we could have CSS-like properties. Here is a rough brainstormed example that will show a column with a label on top, then a text box, then a button to click on the bottom which shows a message. from happygui.controls import * MAIN_WINDOW = Window(width="500px", height="350px", my_layout=ColumnLayout(padding="10px", my_label=Label(text="What's your name kiddo?", bold=True, align="center"), my_edit=EditBox(placeholder=""), my_btn=Button(text="CLICK ME!", on_click=Handler('module.file.btn_clicked')), ), ) MAIN_WINDOW.show() def btn_clicked(sender): # could easily be in a handlers.py file name = MAIN_WINDOW.my_layout.my_edit.text # same thing: name = sender.parent.my_edit.text # best practice, immune to structure change: MAIN_WINDOW.find('my_edit').text MessageBox("Your name is '%s'" % ()).show(modal=True) One cool thing to notice is the way you can reference the input of my_edit by saying MAIN_WINDOW.my_layout.my_edit.text. In the declaration for the window, I think it's important to be able to arbitrarily name controls in the function kwargs. 
Here is the same app only using absolute positioning (the controls will appear in different places because we're not using a fancy layout manager): from happygui.controls import * MAIN_WINDOW = Window(width="500px", height="350px", my_label=Label(text="What's your name kiddo?", bold=True, align="center", x="10px", y="10px", width="300px", height="100px"), my_edit=EditBox(placeholder="", x="10px", y="110px", width="300px", height="100px"), my_btn=Button(text="CLICK ME!", on_click=Handler('module.file.btn_clicked'), x="10px", y="210px", width="300px", height="100px"), ) MAIN_WINDOW.show() def btn_clicked(sender): # could easily be in a handlers.py file name = MAIN_WINDOW.my_edit.text # same thing: name = sender.parent.my_edit.text # best practice, immune to structure change: MAIN_WINDOW.find('my_edit').text MessageBox("Your name is '%s'" % ()).show(modal=True) I'm not entirely sure yet if this is a super great approach, but I definitely think it's on the right path. I don't have time to explore this idea more, but if someone took this up as a project, I would love them. A: Declarative is not necessarily more (or less) pythonic than functional IMHO. I think a layered approach would be the best (from buttom up): * *A native layer that accepts and returns python data types. *A functional dynamic layer. *One or more declarative/object-oriented layers. Similar to Elixir + SQLAlchemy.
{ "language": "en", "url": "https://stackoverflow.com/questions/58711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Open source PDF library for C/C++ application? I want to be able to generate PDF output from my (native) C++ Windows application. Are there any free/open source libraries available to do this? I looked at the answers to this question, but they mostly relate to .Net. A: jagpdf seems to be one of them. It is written in C++ but provides a C API. A: It depends a bit on your needs. Some toolkits are better at drawing, others are better for writing text. Cairo is pretty good for drawing (it supports a wide range of screen and file types, including PDF), but it may not be ideal for good typography. A: PDF Hummus. See http://pdfhummus.com/ - contains all required features for manipulating PDF files except rendering. A: LibHaru Haru is a free, cross-platform, open-sourced software library for generating PDF written in ANSI C. It can work as both a static library (.a, .lib) and a shared library (.so, .dll). Didn't try it myself, but maybe it can help you A: * *LibHaru seems to be used by many. A non-open-source approach is: PDF Creator Pilot which provides more language options including C++, C#, Delphi, ASP, ASP.NET, VB, VB.NET, VBScript, PHP and Python A: The muPdf library looks very promising: http://mupdf.com/ There is also an open source viewer: http://blog.kowalczyk.info/software/sumatrapdf/free-pdf-reader.html A: I worked on a project that required a PDF report. After searching online I found the PoDoFo library. Seemed very robust. I did not need all the features, so I created a wrapper to abstract away some of the complexity. Wasn't too difficult. You can find the library here: http://podofo.sourceforge.net/ Enjoy! A: Try wkhtmltopdf Software features Cross platform. Open source. Convert any web pages into PDF documents using WebKit. You can add headers and footers. TOC generation. Batch mode conversions. Can run on a Linux server with an X server (the X11 client libs must be installed). Can be directly used by PHP or Python via bindings to libwkhtmltox. A: If you're brave and willing to roll your own, you could start with a PostScript library and augment it to deal with PDF, taking advantage of Adobe's free online PDF reference. A: http://wxcode.sourceforge.net/docs/wxpdfdoc/ Works with the wxWidgets library.
{ "language": "en", "url": "https://stackoverflow.com/questions/58730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "98" }
Q: Interview questions: WPF Developer What should every WPF developer know? Entry Level * *Strong .NET 2.0 Background & willing to learn! *Explain dependency properties? *What's a style? *What's a template? *Binding *Differences between base classes: Visual, UIElement, FrameworkElement, Control *Visual vs Logical tree? *Property Change Notification (INotifyPropertyChange and ObservableCollection) *ResourceDictionary - Added by a7an *UserControls - Added by a7an *difference between bubble and tunnel routing strategies - added by Carlo *Why did Microsoft introduce yet another markup language? *XAML Mid-level * *Routed Events & Commands *Converters - Added by Artur Carvalho *Explain WPF's 2-pass layout engine? *How to implement a panel? *Interoperability (WPF/WinForms) *Blend/Cider - Added by a7an *Animations and Storyboarding *ClickOnce Deployment *Skinning/Themeing *Custom Controls *How can worker threads update the UI? *DataTemplate vs HierarchicalDataTemplate *ItemsControl vs ItemsPresenter vs ContentControl vs ContentPresenter *Different types of Triggers Senior * *Example of attached behavior? *What is PRISM,CAL & CAG? *How can worker threads update the UI? *WPF 3D - Added by a7an *Differences between Silverlight 2 and WPF *MVVM/MVP - Added by a7an *WPF Performance tuning *Pixel Shaders *Purpose of Freezables Any other "trick" questions to ask? Do you expect your WPF developer to know blend? A: A WPF developer should have a firm grasp of separating the XAML from the code-behind, and be able to discuss at length where that line is to be drawn. Being able to set up a model in the language of her choice, and then using XAML to display views on that model through data binding, data templates, control templates, styles, triggers, value converters and UserControls is a fairly basic task for the mid-level programmer. (Though a small amount of leeway should be granted if asking someone to create something like a control template "from heart.") Really, there is a lot in WPF, and if it weren't for the MSDN forums and in-depth books on the subject, it'd be quite the task to "just pick it up." For that reason, I would rate perseverance and the ability to learn from others as a top requirement for any level. For an entry-level WPF programmer, I wouldn't expect any knowledge of WPF per se, but I would demand knowledge of object-oriented principles, separation of UI from business logic, and comfort with a similar event model. Experience laying out UI elements in a style similar to WPF (with DockPanel containers, etc.) is a plus. Edit: Also, what Colin Mackay said. A: What about GUI programming/graphics experience in general and cs knowledge? If it's for a full time jobs, it does not matter IMHO if the guy has to spend the first few months some time with learning WPF, if he (or she) has a strong background. A: Entry Level * *Property Change NOtification (INotifyPropertyChange and ObservableCollection) *ResourceDictionary *UserControls Mid Level * *Blend/Cider *animations and storyboarding *ClickOnce Deployment Senior * *WPF 3D *Differences between Silverlight 2 and WPF *MVVM/MVP *WPF Performance tuning *Pixel Shaders A: * *What is the relationship between threads and Dispatchers? *What is the purpose of Freezables? *What is the difference between properties and Dependency Properties? Why use one or another? A: * *Converters(simple and multi). *Interoperability. I think I would prefer a dev that would know Blend. 
He/She could communicate easily with the designer and also do some basic designer stuff faster than just writing in xaml. The list is interesting, maybe links to the topics would help. Cheers A: I'm surprised no one has mentioned basic knowledge of XAML yet. Knowing what XAML is, and the ability to do some basic editing using XAML rather than a graphical design tool. Mid-level developers should be able to knock up form / graphic prototypes using a tool like XAMLPad. A: Another really basic thing would be the difference between bubble and tunnel routing strategies. A: Personally I would sit them down in front of a standard developer build machine and ask them to complete some task. No questions, just see what their code is like after a couple of hours (or more if the task is longer). I have had a zero failure rate on making a hiring descision based on the results of an actual real life programming test. The task doesn't have to be too difficult. I've used a simple message of the day application in the past with the messages being held in a database or XML file and a simple user interface. Ensure you ask them to structure it well (as the task is sufficiently small that it could all be done in one class if they felt inclinded). Of the questions above I'd say you cannot get a good idea of whether they are really any good or not. A potential candicate could actually just read these and create canned answers that sound great. All this shows is that the candidate can talk-the-talk, but what matters in the job itself is if they can walk-the-walk. A: Entry Level * *Knowledge in UX Design *Knowledge in Declarative Binding for business objects *Command usage Senior * *Resource optimization & Performance tuning *Modularity & Scalability *Asynchronize Programming Model A: I'd put binding and converters at entry level, since that is how you spend a lot of time in WPF. A: Pretty good list in my opinion. However I wouldn't ask tricky questions on interview. Interview gives enough stress itself, trick question can confuse even highly skilled person. A: Mid or maybe Senior: WinForms and WPF InterOp. A: Mid or maybe Senior * *Skinning/Themeing *Custom Controls A: * *DataTemplate vs HierarchicalDataTemplate *ItemsControl vs ItemsPresenter vs ContentControl vs ContentPresenter *Different types of Triggers *How to do Animations through StoryBoards A: Personally, I'd put 'How can worker threads update the UI' right under entry-level. Mid-level, if you really need to. If an entry-level programmer can understand the difference between the logical tree and the visual tree, they should understand how to update the UI from a background thread. At my organization, we do a lot of WPF development without Blend. I don't particularly like Blend, so I'm a bit biased, but Blend skills should be a nice-to-have, I think. A: I think lifecycle of WPF application - from creation to runtime should be included in the Beginner level of questions. Without knowing it, its hard to believe one is a real WPF dev. A: I'd extend the ClickOnce deployment with WPF Deployment in general, since it's good to know the limitations and peculiarities of each model (ClickOnce, XBAP, browser only). Placing it at mid-level seems fair though. A: styles provide a mechanism for you to apply a theme across an application and to override that theme in those specific instances where you want to. Styles are defined like resources; in fact, they are defined within the same section of your XAML file in which resources are defined. 
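Since "how can worker threads update the UI?" shows up on the question list at more than one level, here is a minimal sketch of the usual expected answer: marshal the update through the Dispatcher. StatusText and DoLongRunningWork are made-up names used purely for illustration; the real members would live in the window's XAML and code-behind.

using System;
using System.Threading;
using System.Windows;
using System.Windows.Threading;

// Sketch only: a background thread must not touch UI elements directly.
public partial class MainWindow : Window
{
    private void StartWork()
    {
        ThreadPool.QueueUserWorkItem(delegate
        {
            string result = DoLongRunningWork(); // runs on a thread-pool thread

            // Setting StatusText.Text here would throw; queue the update on the UI thread.
            Dispatcher.BeginInvoke(DispatcherPriority.Normal,
                new Action(delegate { StatusText.Text = result; }));
        });
    }

    private string DoLongRunningWork()
    {
        Thread.Sleep(2000); // stand-in for real work
        return "Finished at " + DateTime.Now;
    }
}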
A: Put in data template selector : http://www.switchonthecode.com/tutorials/wpf-tutorial-how-to-use-a-datatemplateselector Great help with MVVM to swap out templates based on value. A: Knowing about unit testing and the effect it has on how you use WPF is a basic skill that I would put at Entry Level. People can learn the details, but if they have not thought about the basics of software design then you have a problem.
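To illustrate the data template selector answer above, a minimal sketch could look like the following; the template properties and the numeric-versus-text rule are invented for the example and would normally be driven by your view-model types.

using System.Windows;
using System.Windows.Controls;

// Sketch only: pick a DataTemplate based on the bound item.
// NumericTemplate and TextTemplate would be assigned from XAML resources.
public class ValueTemplateSelector : DataTemplateSelector
{
    public DataTemplate NumericTemplate { get; set; }
    public DataTemplate TextTemplate { get; set; }

    public override DataTemplate SelectTemplate(object item, DependencyObject container)
    {
        if (item is double || item is int)
            return NumericTemplate;

        return TextTemplate;
    }
}

An instance of the selector is then referenced from an ItemsControl through its ItemTemplateSelector property, which is exactly the MVVM "swap templates based on value" scenario the answer describes.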
{ "language": "en", "url": "https://stackoverflow.com/questions/58739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "183" }
Q: Databinding an enum property to a ComboBox in WPF As an example take the following code: public enum ExampleEnum { FooBar, BarFoo } public class ExampleClass : INotifyPropertyChanged { private ExampleEnum example; public ExampleEnum ExampleProperty { get { return example; } { /* set and notify */; } } } I want a to databind the property ExampleProperty to a ComboBox, so that it shows the options "FooBar" and "BarFoo" and works in mode TwoWay. Optimally I want my ComboBox definition to look something like this: <ComboBox ItemsSource="What goes here?" SelectedItem="{Binding Path=ExampleProperty}" /> Currently I have handlers for the ComboBox.SelectionChanged and ExampleClass.PropertyChanged events installed in my Window where I do the binding manually. Is there a better or some kind of canonical way? Would you usually use Converters and how would you populate the ComboBox with the right values? I don't even want to get started with i18n right now. Edit So one question was answered: How do I populate the ComboBox with the right values. Retrieve Enum values as a list of strings via an ObjectDataProvider from the static Enum.GetValues method: <Window.Resources> <ObjectDataProvider MethodName="GetValues" ObjectType="{x:Type sys:Enum}" x:Key="ExampleEnumValues"> <ObjectDataProvider.MethodParameters> <x:Type TypeName="ExampleEnum" /> </ObjectDataProvider.MethodParameters> </ObjectDataProvider> </Window.Resources> This I can use as an ItemsSource for my ComboBox: <ComboBox ItemsSource="{Binding Source={StaticResource ExampleEnumValues}}"/> A: My favorite way to do this is with a ValueConverter so that the ItemsSource and SelectedValue both bind to the same property. This requires no additional properties to keep your ViewModel nice and clean. <ComboBox ItemsSource="{Binding Path=ExampleProperty, Converter={x:EnumToCollectionConverter}, Mode=OneTime}" SelectedValuePath="Value" DisplayMemberPath="Description" SelectedValue="{Binding Path=ExampleProperty}" /> And the definition of the Converter: public static class EnumHelper { public static string Description(this Enum e) { return (e.GetType() .GetField(e.ToString()) .GetCustomAttributes(typeof(DescriptionAttribute), false) .FirstOrDefault() as DescriptionAttribute)?.Description ?? e.ToString(); } } [ValueConversion(typeof(Enum), typeof(IEnumerable<ValueDescription>))] public class EnumToCollectionConverter : MarkupExtension, IValueConverter { public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { return Enum.GetValues(value.GetType()) .Cast<Enum>() .Select(e => new ValueDescription() { Value = e, Description = e.Description()}) .ToList(); } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { return null; } public override object ProvideValue(IServiceProvider serviceProvider) { return this; } } This converter will work with any enum. ValueDescription is just a simple class with a Value property and a Description property. You could just as easily use a Tuple with Item1 and Item2, or a KeyValuePair with Key and Value instead of Value and Description or any other class of your choice as long as it has can hold an enum value and string description of that enum value. 
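The ValueDescription type that the converter above refers to is not shown in the answer; a minimal version matching the property names the converter uses would look roughly like this (any value/description pair type would do, as the answer notes):

// Sketch only: the simple value/description pair the EnumToCollectionConverter builds.
public class ValueDescription
{
    public object Value { get; set; }       // the enum member itself
    public string Description { get; set; } // the text from the [Description] attribute
}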
A: you can consider something like that: * *define a style for textblock, or any other control you want to use to display your enum: <Style x:Key="enumStyle" TargetType="{x:Type TextBlock}"> <Setter Property="Text" Value="&lt;NULL&gt;"/> <Style.Triggers> <Trigger Property="Tag"> <Trigger.Value> <proj:YourEnum>Value1<proj:YourEnum> </Trigger.Value> <Setter Property="Text" Value="{DynamicResource yourFriendlyValue1}"/> </Trigger> <!-- add more triggers here to reflect your enum --> </Style.Triggers> </Style> *define your style for ComboBoxItem <Style TargetType="{x:Type ComboBoxItem}"> <Setter Property="ContentTemplate"> <Setter.Value> <DataTemplate> <TextBlock Tag="{Binding}" Style="{StaticResource enumStyle}"/> </DataTemplate> </Setter.Value> </Setter> </Style> *add a combobox and load it with your enum values: <ComboBox SelectedValue="{Binding Path=your property goes here}" SelectedValuePath="Content"> <ComboBox.Items> <ComboBoxItem> <proj:YourEnum>Value1</proj:YourEnum> </ComboBoxItem> </ComboBox.Items> </ComboBox> if your enum is large, you can of course do the same in code, sparing a lot of typing. i like that approach, since it makes localization easy - you define all the templates once, and then, you only update your string resource files. A: Here is a generic solution using a helper method. This can also handle an enum of any underlying type (byte, sbyte, uint, long, etc.) Helper Method: static IEnumerable<object> GetEnum<T>() { var type = typeof(T); var names = Enum.GetNames(type); var values = Enum.GetValues(type); var pairs = Enumerable.Range(0, names.Length) .Select(i => new { Name = names.GetValue(i) , Value = values.GetValue(i) }) .OrderBy(pair => pair.Name); return pairs; }//method View Model: public IEnumerable<object> EnumSearchTypes { get { return GetEnum<SearchTypes>(); } }//property ComboBox: <ComboBox SelectedValue ="{Binding SearchType}" ItemsSource ="{Binding EnumSearchTypes}" DisplayMemberPath ="Name" SelectedValuePath ="Value" /> A: I don't know if it is possible in XAML-only but try the following: Give your ComboBox a name so you can access it in the codebehind: "typesComboBox1" Now try the following typesComboBox1.ItemsSource = Enum.GetValues(typeof(ExampleEnum)); A: Use ObjectDataProvider: <ObjectDataProvider x:Key="enumValues" MethodName="GetValues" ObjectType="{x:Type System:Enum}"> <ObjectDataProvider.MethodParameters> <x:Type TypeName="local:ExampleEnum"/> </ObjectDataProvider.MethodParameters> </ObjectDataProvider> and then bind to static resource: ItemsSource="{Binding Source={StaticResource enumValues}}" Find this solution at this blog A: Based on the accepted but now deleted answer provided by ageektrapped I created a slimmed down version without some of the more advanced features. All the code is included here to allow you to copy-paste it and not get blocked by link-rot. I use the System.ComponentModel.DescriptionAttribute which really is intended for design time descriptions. If you dislike using this attribute you may create your own but I think using this attribute really gets the job done. If you don't use the attribute the name will default to the name of the enum value in code. 
public enum ExampleEnum { [Description("Foo Bar")] FooBar, [Description("Bar Foo")] BarFoo } Here is the class used as the items source: public class EnumItemsSource : Collection<String>, IValueConverter { Type type; IDictionary<Object, Object> valueToNameMap; IDictionary<Object, Object> nameToValueMap; public Type Type { get { return this.type; } set { if (!value.IsEnum) throw new ArgumentException("Type is not an enum.", "value"); this.type = value; Initialize(); } } public Object Convert(Object value, Type targetType, Object parameter, CultureInfo culture) { return this.valueToNameMap[value]; } public Object ConvertBack(Object value, Type targetType, Object parameter, CultureInfo culture) { return this.nameToValueMap[value]; } void Initialize() { this.valueToNameMap = this.type .GetFields(BindingFlags.Static | BindingFlags.Public) .ToDictionary(fi => fi.GetValue(null), GetDescription); this.nameToValueMap = this.valueToNameMap .ToDictionary(kvp => kvp.Value, kvp => kvp.Key); Clear(); foreach (String name in this.nameToValueMap.Keys) Add(name); } static Object GetDescription(FieldInfo fieldInfo) { var descriptionAttribute = (DescriptionAttribute) Attribute.GetCustomAttribute(fieldInfo, typeof(DescriptionAttribute)); return descriptionAttribute != null ? descriptionAttribute.Description : fieldInfo.Name; } } You can use it in XAML like this: <Windows.Resources> <local:EnumItemsSource x:Key="ExampleEnumItemsSource" Type="{x:Type local:ExampleEnum}"/> </Windows.Resources> <ComboBox ItemsSource="{StaticResource ExampleEnumItemsSource}" SelectedValue="{Binding ExampleProperty, Converter={StaticResource ExampleEnumItemsSource}}"/> A: You can create a custom markup extension. Example of usage: enum Status { [Description("Available.")] Available, [Description("Not here right now.")] Away, [Description("I don't have time right now.")] Busy } At the top of your XAML: xmlns:my="clr-namespace:namespace_to_enumeration_extension_class and then... <ComboBox ItemsSource="{Binding Source={my:Enumeration {x:Type my:Status}}}" DisplayMemberPath="Description" SelectedValue="{Binding CurrentStatus}" SelectedValuePath="Value" /> And the implementation... public class EnumerationExtension : MarkupExtension { private Type _enumType; public EnumerationExtension(Type enumType) { if (enumType == null) throw new ArgumentNullException("enumType"); EnumType = enumType; } public Type EnumType { get { return _enumType; } private set { if (_enumType == value) return; var enumType = Nullable.GetUnderlyingType(value) ?? value; if (enumType.IsEnum == false) throw new ArgumentException("Type must be an Enum."); _enumType = value; } } public override object ProvideValue(IServiceProvider serviceProvider) // or IXamlServiceProvider for UWP and WinUI { var enumValues = Enum.GetValues(EnumType); return ( from object enumValue in enumValues select new EnumerationMember{ Value = enumValue, Description = GetDescription(enumValue) }).ToArray(); } private string GetDescription(object enumValue) { var descriptionAttribute = EnumType .GetField(enumValue.ToString()) .GetCustomAttributes(typeof (DescriptionAttribute), false) .FirstOrDefault() as DescriptionAttribute; return descriptionAttribute != null ? 
descriptionAttribute.Description : enumValue.ToString(); } public class EnumerationMember { public string Description { get; set; } public object Value { get; set; } } } A: In the viewmodel you can have: public MyEnumType SelectedMyEnumType { get { return _selectedMyEnumType; } set { _selectedMyEnumType = value; OnPropertyChanged("SelectedMyEnumType"); } } public IEnumerable<MyEnumType> MyEnumTypeValues { get { return Enum.GetValues(typeof(MyEnumType)) .Cast<MyEnumType>(); } } In XAML the ItemSource binds to MyEnumTypeValues and SelectedItem binds to SelectedMyEnumType. <ComboBox SelectedItem="{Binding SelectedMyEnumType}" ItemsSource="{Binding MyEnumTypeValues}"></ComboBox> A: If you are using a MVVM, based on @rudigrobler answer you can do the following: Add the following property to the ViewModel class public Array ExampleEnumValues => Enum.GetValues(typeof(ExampleEnum)); Then in the XAML do the following: <ComboBox ItemsSource="{Binding ExampleEnumValues}" ... /> A: It's a pain to see all to see how certain overly complicated solutions become a "standard (anti-)pattern" for the most trivial problems: the overhead and complexity of implementing a MarkupExtension and especially decorating enum values with attributes should be avoided. Simply implement a data model. Generally, displaying the enumeration value names to the user is a bad idea. Enumerations are not meant to be displayed in the UI. They are constants that are used in a programmatic context. The value names are not meant for display. They are meant to address the engineer, hence the names usually use special semantics and vocabulary, same as scientific vocabulary is not meant to be understood by the public. Don't hesitate to create a dedicated source for the displayed values. The problem becomes more evident when localization gets involved. That's why all posted answers are simply over engeineered. They make a very simple problem look like a critical issue. It's a fact that the most trivial solution is the best. The subject of the original question is most definitely not an exception. I highly recommend against any of the provided answers. Although they may work, they add unnecessary complexity to a trivial problem. Note, that you can always convert an enum to a list of its values or value names by calling the static Enum.GetValues or Enum.GetNames, which both return an IEnumerable that you can directly assign to the ComboBox.ItemsSource property e.g.,via data binding. IEnumerable<ExampleEnum> values = Enum.GetValues<ExampleEnum>(); IEnumerable<string> names = Enum.GetNames<ExampleEnum>(); Usually, when defining an enumeration, you don't have UI in mind. Enumeration value names are not chosen based on UI design rules. Usually, UI labels and text in general are created by people with no developer or programmer background. They usually provide all the required translations to localize the application. There are many good reasons not to mix UI with the application. You would never design a class and name its properties with UI (e.g., DataGrid columns) in mind. You may want your column header to contain whitespaces etc. Same reason why exception messages are directed at developers and not users. You definitely don't want to decorate every property, every exception, enum or whatever data type or member with attributes in order to provide a display name that makes sense to the user in a particular UI context. You don't want to have UI design bleed into your code base and polute your classes. 
Application and its user interface - this are two different problems. Adding this abstract or virtual extra layer of separation allows e.g., to add enum values that should not be displayed. Or more general, modify code without having to break or modify the UI. Instead of using attributes and implementing loads of additional logic to extract their values (using reflection), you should use a simple IValueConverter or a dedicated class that provides those display values as a binding source. Stick to the most common pattern and implement a data model for the ComboBox items, where the class has a property of the enum type as member, that helps you to identify the ComboBox.SelectedItem (in case you need the enum value): ExampleEnum.cs // Define enumeration without minding any UI elements and context public enum ExampleEnum { FooBar = 0, BarFoo } ExampleClass.cs // Define readable enum display values in the UI context. // Display names can come from a localizable resource. public class BindingSource : INotifyPropertyChanged { public BindingSource() { ItemModels = new List<ItemModel> { new ItemModel { Label = "Foo Bar Display", Value = ExampleEnum.FooBar }, new ItemModel { Label = "Bar Foo Display", Value = ExampleEnum.BarFoo } } } public List<ItemModel> ItemModels { get; } private ItemModel selectedItemModel; public ItemModel SelectedItemModel { get => selectedItemModel; => set and notify; } } ItemModel.cs public class ItemModel { public string Label { get; set; } public ExampleEnum Value { get; set; } } MainWindow.xaml <Window> <Window.DataContext> <BindingSource /> </Window.DataContext> <ComboBox ItemsSource="{Binding ItemModels}" DisplayMemberName="DisplayValue" SelectedItem="{Binding SelectedItemModel}" /> </Window> A: I prefer not to use the name of enum in UI. I prefer use different value for user (DisplayMemberPath) and different for value (enum in this case) (SelectedValuePath). Those two values can be packed to KeyValuePair and stored in dictionary. XAML <ComboBox Name="fooBarComboBox" ItemsSource="{Binding Path=ExampleEnumsWithCaptions}" DisplayMemberPath="Value" SelectedValuePath="Key" SelectedValue="{Binding Path=ExampleProperty, Mode=TwoWay}" > C# public Dictionary<ExampleEnum, string> ExampleEnumsWithCaptions { get; } = new Dictionary<ExampleEnum, string>() { {ExampleEnum.FooBar, "Foo Bar"}, {ExampleEnum.BarFoo, "Reversed Foo Bar"}, //{ExampleEnum.None, "Hidden in UI"}, }; private ExampleEnum example; public ExampleEnum ExampleProperty { get { return example; } set { /* set and notify */; } } EDIT: Compatible with the MVVM pattern. A: This is a DevExpress specific answer based on the top-voted answer by Gregor S. (currently it has 128 votes). This means we can keep the styling consistent across the entire application: Unfortunately, the original answer doesn't work with a ComboBoxEdit from DevExpress without some modifications. 
First, the XAML for the ComboBoxEdit: <dxe:ComboBoxEdit ItemsSource="{Binding Source={xamlExtensions:XamlExtensionEnumDropdown {x:myEnum:EnumFilter}}}" SelectedItem="{Binding BrokerOrderBookingFilterSelected, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" DisplayMember="Description" MinWidth="144" Margin="5" HorizontalAlignment="Left" IsTextEditable="False" ValidateOnTextInput="False" AutoComplete="False" IncrementalFiltering="True" FilterCondition="Like" ImmediatePopup="True"/> Needsless to say, you will need to point xamlExtensions at the namespace that contains the XAML extension class (which is defined below): xmlns:xamlExtensions="clr-namespace:XamlExtensions" And we have to point myEnum at the namespace that contains the enum: xmlns:myEnum="clr-namespace:MyNamespace" Then, the enum: namespace MyNamespace { public enum EnumFilter { [Description("Free as a bird")] Free = 0, [Description("I'm Somewhat Busy")] SomewhatBusy = 1, [Description("I'm Really Busy")] ReallyBusy = 2 } } The problem in with the XAML is that we can't use SelectedItemValue, as this throws an error as the setter is unaccessable (bit of an oversight on your part, DevExpress). So we have to modify our ViewModel to obtain the value directly from the object: private EnumFilter _filterSelected = EnumFilter.All; public object FilterSelected { get { return (EnumFilter)_filterSelected; } set { var x = (XamlExtensionEnumDropdown.EnumerationMember)value; if (x != null) { _filterSelected = (EnumFilter)x.Value; } OnPropertyChanged("FilterSelected"); } } For completeness, here is the XAML extension from the original answer (slightly renamed): namespace XamlExtensions { /// <summary> /// Intent: XAML markup extension to add support for enums into any dropdown box, see http://bit.ly/1g70oJy. We can name the items in the /// dropdown box by using the [Description] attribute on the enum values. /// </summary> public class XamlExtensionEnumDropdown : MarkupExtension { private Type _enumType; public XamlExtensionEnumDropdown(Type enumType) { if (enumType == null) { throw new ArgumentNullException("enumType"); } EnumType = enumType; } public Type EnumType { get { return _enumType; } private set { if (_enumType == value) { return; } var enumType = Nullable.GetUnderlyingType(value) ?? value; if (enumType.IsEnum == false) { throw new ArgumentException("Type must be an Enum."); } _enumType = value; } } public override object ProvideValue(IServiceProvider serviceProvider) { var enumValues = Enum.GetValues(EnumType); return ( from object enumValue in enumValues select new EnumerationMember { Value = enumValue, Description = GetDescription(enumValue) }).ToArray(); } private string GetDescription(object enumValue) { var descriptionAttribute = EnumType .GetField(enumValue.ToString()) .GetCustomAttributes(typeof (DescriptionAttribute), false) .FirstOrDefault() as DescriptionAttribute; return descriptionAttribute != null ? descriptionAttribute.Description : enumValue.ToString(); } #region Nested type: EnumerationMember public class EnumerationMember { public string Description { get; set; } public object Value { get; set; } } #endregion } } Disclaimer: I have no affiliation with DevExpress. Telerik is also a great library. A: Try using <ComboBox ItemsSource="{Binding Source={StaticResource ExampleEnumValues}}" SelectedValue="{Binding Path=ExampleProperty}" /> A: I've created an open source CodePlex project that does this. You can download the NuGet package from here. 
<enumComboBox:EnumComboBox EnumType="{x:Type demoApplication:Status}" SelectedValue="{Binding Status}" /> A: Code public enum RULE { [Description( "Any, without restrictions" )] any, [Description( "Any, if it makes three in a row" )] anyThree, [Description( "Adjacent, without restrictions" )] nearAny, [Description( "Adjacent, if it makes three in a row" )] nearThree } class ExtendRULE { public static object Values { get { List<object> list = new List<object>(); foreach( RULE rule in Enum.GetValues( typeof( RULE ) ) ) { string desc = rule.GetType().GetMember( rule.ToString() )[0].GetCustomAttribute<DescriptionAttribute>().Description; list.Add( new { value = rule, desc = desc } ); } return list; } } } XAML <StackPanel> <ListBox ItemsSource="{Binding Source={x:Static model:ExtendRULE.Values}}" DisplayMemberPath="desc" SelectedValuePath="value" SelectedValue="{Binding SelectedRule}"/> <ComboBox ItemsSource="{Binding Source={x:Static model:ExtendRULE.Values}}" DisplayMemberPath="desc" SelectedValuePath="value" SelectedValue="{Binding SelectedRule}"/> </StackPanel>
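As an aside, the plain IValueConverter route mentioned in the first answer can be sketched roughly as below. This is only an illustrative sketch, not an established helper: the class name EnumDescriptionConverter is made up, and it assumes display names come from a [Description] attribute on the enum members, falling back to the member name otherwise.
using System;
using System.ComponentModel;
using System.Globalization;
using System.Windows.Data;

public class EnumDescriptionConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        if (value == null) return null;
        // Enum members are public static fields, so look up the field for this value's name.
        var field = value.GetType().GetField(value.ToString());
        var attribute = field == null
            ? null
            : (DescriptionAttribute)Attribute.GetCustomAttribute(field, typeof(DescriptionAttribute));
        return attribute != null ? attribute.Description : value.ToString();
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // Display-only conversion; the selected item itself stays a raw enum value.
        throw new NotSupportedException();
    }
}
It would typically sit inside an ItemTemplate, with ItemsSource bound to the enum values (for example via an ObjectDataProvider calling Enum.GetValues), so the displayed text comes from the converter while SelectedItem remains the enum value.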
{ "language": "en", "url": "https://stackoverflow.com/questions/58743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "275" }
Q: Copy the entire contents of a directory in C# I want to copy the entire contents of a directory from one location to another in C#. There doesn't appear to be a way to do this using System.IO classes without lots of recursion. There is a method in VB that we can use if we add a reference to Microsoft.VisualBasic: new Microsoft.VisualBasic.Devices.Computer(). FileSystem.CopyDirectory( sourceFolder, outputFolder ); This seems like a rather ugly hack. Is there a better way? A: Much easier private static void CopyFilesRecursively(string sourcePath, string targetPath) { //Now Create all of the directories foreach (string dirPath in Directory.GetDirectories(sourcePath, "*", SearchOption.AllDirectories)) { Directory.CreateDirectory(dirPath.Replace(sourcePath, targetPath)); } //Copy all the files & Replaces any files with the same name foreach (string newPath in Directory.GetFiles(sourcePath, "*.*",SearchOption.AllDirectories)) { File.Copy(newPath, newPath.Replace(sourcePath, targetPath), true); } } A: Or, if you want to go the hard way, add a reference to your project for Microsoft.VisualBasic and then use the following: Microsoft.VisualBasic.FileIO.FileSystem.CopyDirectory(fromDirectory, toDirectory); However, using one of the recursive functions is a better way to go since it won't have to load the VB dll. A: Try this: Process proc = new Process(); proc.StartInfo.UseShellExecute = true; proc.StartInfo.FileName = Path.Combine(Environment.SystemDirectory, "xcopy.exe"); proc.StartInfo.Arguments = @"C:\source C:\destination /E /I"; proc.Start(); Your xcopy arguments may vary but you get the idea. A: Here's a utility class I've used for IO tasks like this. using System; using System.Runtime.InteropServices; namespace MyNameSpace { public class ShellFileOperation { private static String StringArrayToMultiString(String[] stringArray) { String multiString = ""; if (stringArray == null) return ""; for (int i=0 ; i<stringArray.Length ; i++) multiString += stringArray[i] + '\0'; multiString += '\0'; return multiString; } public static bool Copy(string source, string dest) { return Copy(new String[] { source }, new String[] { dest }); } public static bool Copy(String[] source, String[] dest) { Win32.SHFILEOPSTRUCT FileOpStruct = new Win32.SHFILEOPSTRUCT(); FileOpStruct.hwnd = IntPtr.Zero; FileOpStruct.wFunc = (uint)Win32.FO_COPY; String multiSource = StringArrayToMultiString(source); String multiDest = StringArrayToMultiString(dest); FileOpStruct.pFrom = Marshal.StringToHGlobalUni(multiSource); FileOpStruct.pTo = Marshal.StringToHGlobalUni(multiDest); FileOpStruct.fFlags = (ushort)Win32.ShellFileOperationFlags.FOF_NOCONFIRMATION; FileOpStruct.lpszProgressTitle = ""; FileOpStruct.fAnyOperationsAborted = 0; FileOpStruct.hNameMappings = IntPtr.Zero; int retval = Win32.SHFileOperation(ref FileOpStruct); if(retval != 0) return false; return true; } public static bool Move(string source, string dest) { return Move(new String[] { source }, new String[] { dest }); } public static bool Delete(string file) { Win32.SHFILEOPSTRUCT FileOpStruct = new Win32.SHFILEOPSTRUCT(); FileOpStruct.hwnd = IntPtr.Zero; FileOpStruct.wFunc = (uint)Win32.FO_DELETE; String multiSource = StringArrayToMultiString(new string[] { file }); FileOpStruct.pFrom = Marshal.StringToHGlobalUni(multiSource); FileOpStruct.pTo = IntPtr.Zero; FileOpStruct.fFlags = (ushort)Win32.ShellFileOperationFlags.FOF_SILENT | (ushort)Win32.ShellFileOperationFlags.FOF_NOCONFIRMATION | (ushort)Win32.ShellFileOperationFlags.FOF_NOERRORUI | 
(ushort)Win32.ShellFileOperationFlags.FOF_NOCONFIRMMKDIR; FileOpStruct.lpszProgressTitle = ""; FileOpStruct.fAnyOperationsAborted = 0; FileOpStruct.hNameMappings = IntPtr.Zero; int retval = Win32.SHFileOperation(ref FileOpStruct); if(retval != 0) return false; return true; } public static bool Move(String[] source, String[] dest) { Win32.SHFILEOPSTRUCT FileOpStruct = new Win32.SHFILEOPSTRUCT(); FileOpStruct.hwnd = IntPtr.Zero; FileOpStruct.wFunc = (uint)Win32.FO_MOVE; String multiSource = StringArrayToMultiString(source); String multiDest = StringArrayToMultiString(dest); FileOpStruct.pFrom = Marshal.StringToHGlobalUni(multiSource); FileOpStruct.pTo = Marshal.StringToHGlobalUni(multiDest); FileOpStruct.fFlags = (ushort)Win32.ShellFileOperationFlags.FOF_NOCONFIRMATION; FileOpStruct.lpszProgressTitle = ""; FileOpStruct.fAnyOperationsAborted = 0; FileOpStruct.hNameMappings = IntPtr.Zero; int retval = Win32.SHFileOperation(ref FileOpStruct); if(retval != 0) return false; return true; } } } A: This site has always helped me out a lot, and now it's my turn to help others with what I know. I hope my code below is useful for someone. string source_dir = @"E:\"; string destination_dir = @"C:\"; // Substring removes the source_dir absolute path prefix (E:\) so only the relative path is kept. // Create subdirectory structure in destination foreach (string dir in System.IO.Directory.GetDirectories(source_dir, "*", System.IO.SearchOption.AllDirectories)) { System.IO.Directory.CreateDirectory(System.IO.Path.Combine(destination_dir, dir.Substring(source_dir.Length))); // Example: // > C:\sources (and not C:\E:\sources) } foreach (string file_name in System.IO.Directory.GetFiles(source_dir, "*", System.IO.SearchOption.AllDirectories)) { System.IO.File.Copy(file_name, System.IO.Path.Combine(destination_dir, file_name.Substring(source_dir.Length))); } A: tboswell's replace-proof version (which is resilient to repeating patterns in the file path) public static void copyAll(string SourcePath , string DestinationPath ) { //Now Create all of the directories foreach (string dirPath in Directory.GetDirectories(SourcePath, "*", SearchOption.AllDirectories)) Directory.CreateDirectory(Path.Combine(DestinationPath ,dirPath.Remove(0, SourcePath.Length )) ); //Copy all the files & Replaces any files with the same name foreach (string newPath in Directory.GetFiles(SourcePath, "*.*", SearchOption.AllDirectories)) File.Copy(newPath, Path.Combine(DestinationPath , newPath.Remove(0, SourcePath.Length)) , true); } A: My solution is basically a modification of @Termininja's answer, however I have enhanced it a bit and it appears to be more than 5 times faster than the accepted answer. public static void CopyEntireDirectory(string path, string newPath) { Parallel.ForEach(Directory.GetFileSystemEntries(path, "*", SearchOption.AllDirectories) ,(fileName) => { string output = Regex.Replace(fileName, "^" + Regex.Escape(path), newPath); if (File.Exists(fileName)) { Directory.CreateDirectory(Path.GetDirectoryName(output)); File.Copy(fileName, output, true); } else Directory.CreateDirectory(output); }); } EDIT: Modifying @Ahmed Sabry's answer to a full parallel foreach does produce a better result, however that code uses a recursive function and it's not ideal in some situations.
public static void CopyEntireDirectory(DirectoryInfo source, DirectoryInfo target, bool overwriteFiles = true) { if (!source.Exists) return; if (!target.Exists) target.Create(); Parallel.ForEach(source.GetDirectories(), (sourceChildDirectory) => CopyEntireDirectory(sourceChildDirectory, new DirectoryInfo(Path.Combine(target.FullName, sourceChildDirectory.Name)))); Parallel.ForEach(source.GetFiles(), sourceFile => sourceFile.CopyTo(Path.Combine(target.FullName, sourceFile.Name), overwriteFiles)); } A: It may not be performance-aware, but I'm using it for 30MB folders and it works flawlessly. Plus, I didn't like the amount of code and recursion required for such an easy task. var src = @"c:\src"; var dest = @"c:\dest"; var cmp = CompressionLevel.NoCompression; var zip = src + ".zip"; ZipFile.CreateFromDirectory(src, zip, cmp, includeBaseDirectory: false); ZipFile.ExtractToDirectory(zip, dest); File.Delete(zip); Note: ZipFile is available on .NET 4.5+ in the System.IO.Compression namespace A: Here is a concise and efficient solution: namespace System.IO { public static class ExtensionMethods { public static void CopyTo(this DirectoryInfo srcPath, string destPath) { Directory.CreateDirectory(destPath); Parallel.ForEach(srcPath.GetDirectories("*", SearchOption.AllDirectories), srcInfo => Directory.CreateDirectory($"{destPath}{srcInfo.FullName[srcPath.FullName.Length..]}")); Parallel.ForEach(srcPath.GetFiles("*", SearchOption.AllDirectories), srcInfo => File.Copy(srcInfo.FullName, $"{destPath}{srcInfo.FullName[srcPath.FullName.Length..]}", true)); } } } To use: new DirectoryInfo(sourcePath).CopyTo(destinationPath); A: Hmm, I think I misunderstand the question but I'm going to risk it. What's wrong with the following straightforward method? public static void CopyFilesRecursively(DirectoryInfo source, DirectoryInfo target) { foreach (DirectoryInfo dir in source.GetDirectories()) CopyFilesRecursively(dir, target.CreateSubdirectory(dir.Name)); foreach (FileInfo file in source.GetFiles()) file.CopyTo(Path.Combine(target.FullName, file.Name)); } EDIT Since this posting has garnered an impressive number of downvotes for such a simple answer to an equally simple question, let me add an explanation. Please read this before downvoting. First of all, this code is not intended as a drop-in replacement for the code in the question. It is for illustration purposes only. Microsoft.VisualBasic.Devices.Computer.FileSystem.CopyDirectory does some additional correctness tests (e.g. whether the source and target are valid directories, whether the source is a parent of the target etc.) that are missing from this answer. That code is probably also more optimized. That said, the code works well. It has (almost identically) been used in mature software for years. Apart from the inherent fickleness present with all IO handling (e.g. what happens if the user manually unplugs the USB drive while your code is writing to it?), there are no known problems. In particular, I’d like to point out that the use of recursion here is absolutely not a problem. Neither in theory (conceptually, it’s the most elegant solution) nor in practice: this code will not overflow the stack. The stack is large enough to handle even deeply nested file hierarchies. Long before stack space becomes a problem, the folder path length limitation kicks in. Notice that a malicious user might be able to break this assumption by using deeply-nested directories of one letter each. I haven’t tried this.
But just to illustrate the point: in order to make this code overflow on a typical computer, the directories would have to be nested a few thousand times. This is simply not a realistic scenario. A: A minor improvement on d4nt's answer, as you probably want to check for errors and not have to change xcopy paths if you're working on a server and development machine: public void CopyFolder(string source, string destination) { string xcopyPath = Environment.GetEnvironmentVariable("WINDIR") + @"\System32\xcopy.exe"; ProcessStartInfo info = new ProcessStartInfo(xcopyPath); info.UseShellExecute = false; info.RedirectStandardOutput = true; info.Arguments = string.Format("\"{0}\" \"{1}\" /E /I", source, destination); Process process = Process.Start(info); process.WaitForExit(); string result = process.StandardOutput.ReadToEnd(); if (process.ExitCode != 0) { // Or your own custom exception, or just return false if you prefer. throw new InvalidOperationException(string.Format("Failed to copy {0} to {1}: {2}", source, destination, result)); } } A: This is my code hope this help private void KCOPY(string source, string destination) { if (IsFile(source)) { string target = Path.Combine(destination, Path.GetFileName(source)); File.Copy(source, target, true); } else { string fileName = Path.GetFileName(source); string target = System.IO.Path.Combine(destination, fileName); if (!System.IO.Directory.Exists(target)) { System.IO.Directory.CreateDirectory(target); } List<string> files = GetAllFileAndFolder(source); foreach (string file in files) { KCOPY(file, target); } } } private List<string> GetAllFileAndFolder(string path) { List<string> allFile = new List<string>(); foreach (string dir in Directory.GetDirectories(path)) { allFile.Add(dir); } foreach (string file in Directory.GetFiles(path)) { allFile.Add(file); } return allFile; } private bool IsFile(string path) { if ((File.GetAttributes(path) & FileAttributes.Directory) == FileAttributes.Directory) { return false; } return true; } A: If you like Konrad's popular answer, but you want the source itself to be a folder under target, rather than putting it's children under the target folder, here's the code for that. It returns the newly created DirectoryInfo, which is handy: public static DirectoryInfo CopyFilesRecursively(DirectoryInfo source, DirectoryInfo target) { var newDirectoryInfo = target.CreateSubdirectory(source.Name); foreach (var fileInfo in source.GetFiles()) fileInfo.CopyTo(Path.Combine(newDirectoryInfo.FullName, fileInfo.Name)); foreach (var childDirectoryInfo in source.GetDirectories()) CopyFilesRecursively(childDirectoryInfo, newDirectoryInfo); return newDirectoryInfo; } A: You can always use this, taken from Microsofts website. static void Main() { // Copy from the current directory, include subdirectories. DirectoryCopy(".", @".\temp", true); } private static void DirectoryCopy(string sourceDirName, string destDirName, bool copySubDirs) { // Get the subdirectories for the specified directory. DirectoryInfo dir = new DirectoryInfo(sourceDirName); if (!dir.Exists) { throw new DirectoryNotFoundException( "Source directory does not exist or could not be found: " + sourceDirName); } DirectoryInfo[] dirs = dir.GetDirectories(); // If the destination directory doesn't exist, create it. if (!Directory.Exists(destDirName)) { Directory.CreateDirectory(destDirName); } // Get the files in the directory and copy them to the new location. 
FileInfo[] files = dir.GetFiles(); foreach (FileInfo file in files) { string temppath = Path.Combine(destDirName, file.Name); file.CopyTo(temppath, false); } // If copying subdirectories, copy them and their contents to new location. if (copySubDirs) { foreach (DirectoryInfo subdir in dirs) { string temppath = Path.Combine(destDirName, subdir.Name); DirectoryCopy(subdir.FullName, temppath, copySubDirs); } } } A: Copied from MSDN: using System; using System.IO; class CopyDir { public static void Copy(string sourceDirectory, string targetDirectory) { DirectoryInfo diSource = new DirectoryInfo(sourceDirectory); DirectoryInfo diTarget = new DirectoryInfo(targetDirectory); CopyAll(diSource, diTarget); } public static void CopyAll(DirectoryInfo source, DirectoryInfo target) { Directory.CreateDirectory(target.FullName); // Copy each file into the new directory. foreach (FileInfo fi in source.GetFiles()) { Console.WriteLine(@"Copying {0}\{1}", target.FullName, fi.Name); fi.CopyTo(Path.Combine(target.FullName, fi.Name), true); } // Copy each subdirectory using recursion. foreach (DirectoryInfo diSourceSubDir in source.GetDirectories()) { DirectoryInfo nextTargetSubDir = target.CreateSubdirectory(diSourceSubDir.Name); CopyAll(diSourceSubDir, nextTargetSubDir); } } public static void Main() { string sourceDirectory = @"c:\sourceDirectory"; string targetDirectory = @"c:\targetDirectory"; Copy(sourceDirectory, targetDirectory); } // Output will vary based on the contents of the source directory. } A: Copy folder recursively without recursion to avoid stack overflow. public static void CopyDirectory(string source, string target) { var stack = new Stack<Folders>(); stack.Push(new Folders(source, target)); while (stack.Count > 0) { var folders = stack.Pop(); Directory.CreateDirectory(folders.Target); foreach (var file in Directory.GetFiles(folders.Source, "*.*")) { File.Copy(file, Path.Combine(folders.Target, Path.GetFileName(file))); } foreach (var folder in Directory.GetDirectories(folders.Source)) { stack.Push(new Folders(folder, Path.Combine(folders.Target, Path.GetFileName(folder)))); } } } public class Folders { public string Source { get; private set; } public string Target { get; private set; } public Folders(string source, string target) { Source = source; Target = target; } } A: Sorry for the previous code, it still had bugs :( (fell prey to the fastest gun problem) . Here it is tested and working. The key is the SearchOption.AllDirectories, which eliminates the need for explicit recursion. 
string path = "C:\\a"; string[] dirs = Directory.GetDirectories(path, "*.*", SearchOption.AllDirectories); string newpath = "C:\\x"; try { Directory.CreateDirectory(newpath); } catch (IOException ex) { Console.WriteLine(ex.Message); } for (int j = 0; j < dirs.Length; j++) { try { Directory.CreateDirectory(dirs[j].Replace(path, newpath)); } catch (IOException ex) { Console.WriteLine(ex.Message); } } string[] files = Directory.GetFiles(path, "*.*", SearchOption.AllDirectories); for (int j = 0; j < files.Length; j++) { try { File.Copy(files[j], files[j].Replace(path, newpath)); } catch (IOException ex) { Console.WriteLine(ex.Message); } } A: Here is an extension method for DirectoryInfo a la FileInfo.CopyTo (note the overwrite parameter): public static DirectoryInfo CopyTo(this DirectoryInfo sourceDir, string destinationPath, bool overwrite = false) { var sourcePath = sourceDir.FullName; var destination = new DirectoryInfo(destinationPath); destination.Create(); foreach (var sourceSubDirPath in Directory.EnumerateDirectories(sourcePath, "*", SearchOption.AllDirectories)) Directory.CreateDirectory(sourceSubDirPath.Replace(sourcePath, destinationPath)); foreach (var file in Directory.EnumerateFiles(sourcePath, "*", SearchOption.AllDirectories)) File.Copy(file, file.Replace(sourcePath, destinationPath), overwrite); return destination; } A: Use this class. public static class Extensions { public static void CopyTo(this DirectoryInfo source, DirectoryInfo target, bool overwiteFiles = true) { if (!source.Exists) return; if (!target.Exists) target.Create(); Parallel.ForEach(source.GetDirectories(), (sourceChildDirectory) => CopyTo(sourceChildDirectory, new DirectoryInfo(Path.Combine(target.FullName, sourceChildDirectory.Name)))); foreach (var sourceFile in source.GetFiles()) sourceFile.CopyTo(Path.Combine(target.FullName, sourceFile.Name), overwiteFiles); } public static void CopyTo(this DirectoryInfo source, string target, bool overwiteFiles = true) { CopyTo(source, new DirectoryInfo(target), overwiteFiles); } } A: One variant with only one loop for copying of all folders and files: foreach (var f in Directory.GetFileSystemEntries(path, "*", SearchOption.AllDirectories)) { var output = Regex.Replace(f, @"^" + path, newPath); if (File.Exists(f)) File.Copy(f, output, true); else Directory.CreateDirectory(output); } A: Better than any code (extension method to DirectoryInfo with recursion) public static bool CopyTo(this DirectoryInfo source, string destination) { try { foreach (string dirPath in Directory.GetDirectories(source.FullName)) { var newDirPath = dirPath.Replace(source.FullName, destination); Directory.CreateDirectory(newDirPath); new DirectoryInfo(dirPath).CopyTo(newDirPath); } //Copy all the files & Replaces any files with the same name foreach (string filePath in Directory.GetFiles(source.FullName)) { File.Copy(filePath, filePath.Replace(source.FullName,destination), true); } return true; } catch (IOException exp) { return false; } } A: Copy and replace all files of the folder public static void CopyAndReplaceAll(string SourcePath, string DestinationPath, string backupPath) { foreach (string dirPath in Directory.GetDirectories(SourcePath, "*", SearchOption.AllDirectories)) { Directory.CreateDirectory($"{DestinationPath}{dirPath.Remove(0, SourcePath.Length)}"); Directory.CreateDirectory($"{backupPath}{dirPath.Remove(0, SourcePath.Length)}"); } foreach (string newPath in Directory.GetFiles(SourcePath, "*.*", SearchOption.AllDirectories)) { if (!File.Exists($"{ 
DestinationPath}{newPath.Remove(0, SourcePath.Length)}")) File.Copy(newPath, $"{ DestinationPath}{newPath.Remove(0, SourcePath.Length)}"); else File.Replace(newPath , $"{ DestinationPath}{newPath.Remove(0, SourcePath.Length)}" , $"{ backupPath}{newPath.Remove(0, SourcePath.Length)}", false); } } A: The code below is microsoft suggestion how-to-copy-directories and it is shared by dear @iato but it just copies sub directories and files of source folder recursively and doesn't copy the source folder it self (like right click -> copy ). but there is a tricky way below this answer : private static void DirectoryCopy(string sourceDirName, string destDirName, bool copySubDirs = true) { // Get the subdirectories for the specified directory. DirectoryInfo dir = new DirectoryInfo(sourceDirName); if (!dir.Exists) { throw new DirectoryNotFoundException( "Source directory does not exist or could not be found: " + sourceDirName); } DirectoryInfo[] dirs = dir.GetDirectories(); // If the destination directory doesn't exist, create it. if (!Directory.Exists(destDirName)) { Directory.CreateDirectory(destDirName); } // Get the files in the directory and copy them to the new location. FileInfo[] files = dir.GetFiles(); foreach (FileInfo file in files) { string temppath = Path.Combine(destDirName, file.Name); file.CopyTo(temppath, false); } // If copying subdirectories, copy them and their contents to new location. if (copySubDirs) { foreach (DirectoryInfo subdir in dirs) { string temppath = Path.Combine(destDirName, subdir.Name); DirectoryCopy(subdir.FullName, temppath, copySubDirs); } } } if you want to copy contents of source folder and subfolders recursively you can simply use it like this : string source = @"J:\source\"; string dest= @"J:\destination\"; DirectoryCopy(source, dest); but if you want to copy the source directory it self (similar that you have right clicked on source folder and clicked copy then in the destination folder you clicked paste) you should use like this : string source = @"J:\source\"; string dest= @"J:\destination\"; DirectoryCopy(source, Path.Combine(dest, new DirectoryInfo(source).Name)); A: Below code to copy all files from source to destination of given pattern in same folder structure: public static void Copy() { string sourceDir = @"C:\test\source\"; string destination = @"C:\test\destination\"; string[] textFiles = Directory.GetFiles(sourceDir, "*.txt", SearchOption.AllDirectories); foreach (string textFile in textFiles) { string fileName = textFile.Substring(sourceDir.Length); string directoryPath = Path.Combine(destination, Path.GetDirectoryName(fileName)); if (!Directory.Exists(directoryPath)) Directory.CreateDirectory(directoryPath); File.Copy(textFile, Path.Combine(directoryPath, Path.GetFileName(textFile)), true); } } A: Just wanted to add my version. It can handle both directories and files, and can overwrite or skip if destination file exists. public static void Copy( string source, string destination, string pattern = "*", bool includeSubFolders = true, bool overwrite = true, bool overwriteOnlyIfSourceIsNewer = false) { if (File.Exists(source)) { // Source is a file, copy and leave CopyFile(source, destination); return; } if (!Directory.Exists(source)) { throw new DirectoryNotFoundException($"Source directory does not exists: `{source}`"); } var files = Directory.GetFiles( source, pattern, includeSubFolders ? 
SearchOption.AllDirectories : SearchOption.TopDirectoryOnly); foreach (var file in files) { var newFile = file.Replace(source, destination); CopyFile(file, newFile, overwrite, overwriteOnlyIfSourceIsNewer); } } private static void CopyFile( string source, string destination, bool overwrite = true, bool overwriteIfSourceIsNewer = false) { if (!overwrite && File.Exists(destination)) { return; } if (overwriteIfSourceIsNewer && File.Exists(destination)) { var sourceLastModified = File.GetLastWriteTimeUtc(source); var destinationLastModified = File.GetLastWriteTimeUtc(destination); if (sourceLastModified <= destinationLastModified) { return; } CreateDirectory(destination); File.Copy(source, destination, overwrite); return; } CreateDirectory(destination); File.Copy(source, destination, overwrite); } private static void CreateDirectory(string filePath) { var targetDirectory = Path.GetDirectoryName(filePath); if (targetDirectory != null && !Directory.Exists(targetDirectory)) { Directory.CreateDirectory(targetDirectory); } } A: Properties of this code: * *No parallel task, is less performant, but the idea is to treat file by file, so you can log or stop. *Can skip hiddden files *Can skip by modified date *Can break or not (you chose) on a file copy error *Uses Buffer of 64K for SMB and FileShare.ReadWrite to avoid locks *Personalize your Exceptions Message *For Windows Notes ExceptionToString() is a personal extension that tries to get inner exceptions and display stack. Replace it for ex.Message or any other code. log4net.ILog _log I use ==Log4net== You can make your Log in a different way. /// <summary> /// Recursive Directory Copy /// </summary> /// <param name="fromPath"></param> /// <param name="toPath"></param> /// <param name="continueOnException">on error, continue to copy next file</param> /// <param name="skipHiddenFiles">To avoid files like thumbs.db</param> /// <param name="skipByModifiedDate">Does not copy if the destiny file has the same or more recent modified date</param> /// <remarks> /// </remarks> public static void CopyEntireDirectory(string fromPath, string toPath, bool continueOnException = false, bool skipHiddenFiles = true, bool skipByModifiedDate = true) { log4net.ILog _log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType); string nl = Environment.NewLine; string sourcePath = ""; string destPath = ""; string _exMsg = ""; void TreateException(Exception ex) { _log.Warn(_exMsg); if (continueOnException == false) { throw new Exception($"{_exMsg}{nl}----{nl}{ex.ExceptionToString()}"); } } try { foreach (string fileName in Directory.GetFileSystemEntries(fromPath, "*", SearchOption.AllDirectories)) { sourcePath = fileName; destPath = Regex.Replace(fileName, "^" + Regex.Escape(fromPath), toPath); Directory.CreateDirectory(Path.GetDirectoryName(destPath)); _log.Debug(FileCopyStream(sourcePath, destPath,skipHiddenFiles,skipByModifiedDate)); } } // Directory must be less than 148 characters, File must be less than 261 characters catch (PathTooLongException) { throw new Exception($"Both paths must be less than 148 characters:{nl}{sourcePath}{nl}{destPath}"); } // Not enough disk space. 
Cancel further copies catch (IOException ex) when ((ex.HResult & 0xFFFF) == 0x27 || (ex.HResult & 0xFFFF) == 0x70) { throw new Exception($"Not enough disk space:{nl}'{toPath}'"); } // used by another process catch (IOException ex) when ((uint)ex.HResult == 0x80070020) { _exMsg = $"File is being used by another process:{nl}'{destPath}'{nl}{ex.Message}"; TreateException(ex); } catch (UnauthorizedAccessException ex) { _exMsg = $"Unauthorized Access Exception:{nl}from:'{sourcePath}'{nl}to:{destPath}"; TreateException(ex); } catch (Exception ex) { _exMsg = $"from:'{sourcePath}'{nl}to:{destPath}"; TreateException(ex); } } /// <summary> /// File Copy using Stream 64K and trying to avoid locks with fileshare /// </summary> /// <param name="sourcePath"></param> /// <param name="destPath"></param> /// <param name="skipHiddenFiles">To avoid files like thumbs.db</param> /// <param name="skipByModifiedDate">Does not copy if the destiny file has the same or more recent modified date</param> public static string FileCopyStream(string sourcePath, string destPath, bool skipHiddenFiles = true, bool skipByModifiedDate = true) { // Buffer should be 64K = 65536‬ bytes // Increasing the buffer size beyond 64k will not help in any circunstance, // as the underlying SMB protocol does not support buffer lengths beyond 64k." byte[] buffer = new byte[65536]; if (!File.Exists(sourcePath)) return $"is not a file: '{sourcePath}'"; FileInfo sourcefileInfo = new FileInfo(sourcePath); FileInfo destFileInfo = null; if (File.Exists(destPath)) destFileInfo = new FileInfo(destPath); if (skipHiddenFiles) { if (sourcefileInfo.Attributes.HasFlag(FileAttributes.Hidden)) return $"Hidden File Not Copied: '{sourcePath}'"; } using (FileStream input = sourcefileInfo.Open(FileMode.Open, FileAccess.Read, FileShare.ReadWrite)) using (FileStream output = new FileStream(destPath, FileMode.OpenOrCreate, FileAccess.Write, FileShare.ReadWrite, buffer.Length)) { if (skipByModifiedDate && destFileInfo != null) { if (destFileInfo.LastWriteTime < sourcefileInfo.LastWriteTime) { input.CopyTo(output, buffer.Length); destFileInfo.LastWriteTime = sourcefileInfo.LastWriteTime; return $"Replaced: '{sourcePath}'"; } else { return $"NOT replaced (more recent or same file): '{sourcePath}'"; } } else { input.CopyTo(output, buffer.Length); destFileInfo = new FileInfo(destPath); destFileInfo.LastWriteTime = sourcefileInfo.LastWriteTime; return $"New File: '{sourcePath}'"; } } } A: For UWP and Winui 3 (WindowsAppSdk) using Async API: public async Task CopyAsync(StorageFolder source, StorageFolder dest) { foreach (var item in await source.GetItemsAsync()) if (item is StorageFile file) await file.CopyAsync(dest); else if (item is StorageFolder folder) await CopyAsync(folder, await dest.CreateFolderAsync(folder.Name, CreationCollisionOption.OpenIfExists)); } A: public static class Extensions { public static void Copy(this DirectoryInfo self, DirectoryInfo destination, bool recursively) { foreach (var file in self.GetFiles()) { file.CopyTo(Path.Combine(destination.FullName, file.Name)); } if (recursively) { foreach (var directory in self.GetDirectories()) { directory.Copy(destination.CreateSubdirectory(directory.Name), recursively); } } } } Example of use: var sourceDirectory = new DirectoryInfo(@"C:\source"); var destinationDirectory = new DirectoryInfo(@"C:\destination"); if (destinationDirectory.Exists == false) { sourceDirectory.Copy(destinationDirectory, recursively: true); }
{ "language": "en", "url": "https://stackoverflow.com/questions/58744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "610" }
Q: How do I get raw logs from Google Analytics? Is it possible to obtain raw logs from Google Analytic? Is there any tool that can generate the raw logs from GA? A: No. But why don't you just use your webserver's logs? The value of GA is not in the data they collect, but the aggregation/analysis. That's why it's not called Google Raw Data. A: Please have a look on this article which explains a hack to get Google analytics data. http://blogoscoped.com/archive/2008-01-17-n73.html Also If you can wait for sometime then official Google analytics blog says that they are working on data export api but currently it is in Private Beta. http://analytics.blogspot.com/2008/10/more-enterprise-class-features-added-to.html A: Not exactly the same as raw vs aggregated, but it seems that "unsampled" data is only available to Premium accounts: "Unsampled Reports are only available in Premium accounts using the latest version of Google Analytics." http://support.google.com/analytics/bin/answer.py?hl=en&answer=2601061 A: No you can't get the raw logs, but there's nothing stopping you from getting the exact same data logged to your own web server logs. Have a look at the Urchin code and borrow that, changing the following two lines to point to your web server instead. var _ugifpath2="http://www.google-analytics.com/__utm.gif"; if (_udl.protocol=="https:") _ugifpath2="https://ssl.google-analytics.com/__utm.gif"; You'll want to create a __utm.gif file so that they don't show up in the logs as 404s. Obviously you'll need to parse the variables out of the hits into your web server logs. The log line in Apache looks something like this. You'll have lots of "fun" parsing out all the various stuff you want from that, but everything Google Analytics gets from the basic JavaScript tagging comes in like this. 127.0.0.1 - - [02/Oct/2008:10:17:18 +1000] "GET /__utm.gif?utmwv=1.3&utmn=172543292&utmcs=ISO-8859-1&utmsr=1280x1024&utmsc=32-bit&utmul=en-us&utmje=1&utmfl=9.0%20%20r124&utmdt=My%20Web%20Page&utmhn=www.mydomain.com&utmhid=979599568&utmr=-&utmp=/urlgoeshere/&utmac=UA-1715941-2&utmcc=__utma%3D113887236.511203954.1220404968.1222846275.1222906638.33%3B%2B__utmz%3D113887236.1222393496.27.2.utmccn%3D(organic)%7Cutmcsr%3Dgoogle%7Cutmctr%3Dsapphire%2Btechnologies%2Bsite%253Arumble.net%7Cutmcmd%3Dorganic%3B%2B HTTP/1.0" 200 35 "http://www.mydomain.com/urlgoeshere/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/0.2.153.1 Safari/525.19" A: You can get the Analytics data, but it'll take a bit of hacking. In any analytics report, click the 'email' button at the top of the screen. Set up the email to go to your address (or a new address on your server) and change the format to csv or xml. Then, you can use php (or another language) to check the email account, parse the email and import the attachment to your system. There's an article entitled 'Incoming mail and PHP' on evolt.org: http://evolt.org/incoming_mail_and_php A: No, but there are other paid services like Mixpanel and KISSmetrics that have data export APIs. Much easier than trying to build your own analytics service, but costs money.
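Going back to the self-hosted __utm.gif idea above, the parsing is mostly query-string handling once you have isolated the request URL from the log line. Here is a rough C# sketch of pulling a few fields out of such a hit; the utm field names are the ones visible in the sample Apache line above, and everything else (method name, output format) is just illustration.
using System;
using System.Web; // reference System.Web for HttpUtility

static class UtmHitParser
{
    public static void ParseUtmHit(string requestUrl)
    {
        // Everything after the '?' is a normal URL-encoded query string.
        string query = requestUrl.Substring(requestUrl.IndexOf('?') + 1);
        var fields = HttpUtility.ParseQueryString(query);

        Console.WriteLine("Page title : " + fields["utmdt"]);
        Console.WriteLine("Page path  : " + fields["utmp"]);
        Console.WriteLine("Hostname   : " + fields["utmhn"]);
        Console.WriteLine("Referrer   : " + fields["utmr"]);
        // utmcc packs the __utma/__utmz cookie data and needs further splitting on ';' and '.'
        Console.WriteLine("Cookie data: " + fields["utmcc"]);
    }
}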
{ "language": "en", "url": "https://stackoverflow.com/questions/58750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What is the best way to do per-user database connections in Rails What is the best way to do per-user database connections in Rails? I realize this is a poor Rails design practice, but we're gradually replacing an existing web application that uses one database per user. A complete redesign/rewrite is not feasible. A: Put something like this in your application controller. I'm using the subdomain plus "_clientdb" to pick the name of the database. I have all the databases using the same username and password, so I can grab that from the db config file. Hope this helps! class ApplicationController < ActionController::Base before_filter :hijack_db def hijack_db db_name = request.subdomains.first + "_clientdb" # lets manually connect to the proper db ActiveRecord::Base.establish_connection( :adapter => ActiveRecord::Base.configurations[ENV["RAILS_ENV"]]['adapter'], :host => ActiveRecord::Base.configurations[ENV["RAILS_ENV"]]['host'], :username => ActiveRecord::Base.configurations[ENV["RAILS_ENV"]]['username'], :password => ActiveRecord::Base.configurations[ENV["RAILS_ENV"]]['password'], :database => db_name ) end end A: Take a look at ActiveRecord::Base.establish_connection. That's how you connect to a different database server. I can't be of much more help since I don't know how you recognize the user or map it to it's database, but I suppose a master database will have that info (and the connection info should be on the database.yml file). Best of luck.
{ "language": "en", "url": "https://stackoverflow.com/questions/58755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Push or Pull for a near real time automation server? We are currently developing a server whereby a client registers interest in changes to specific data elements, and when that data changes the server pushes the data back to the client. There has been vigorous debate at work about whether or not it would be better for the client to poll for this data. What is considered to be the ideal method, in terms of performance, scalability and network load, of data transfer in a near real time environment? Update: Here's a Link that gives some food for thought with regards to UI updates. A: There's probably no ideal method for every situation, but push is usually better and used more often. It allows you to optimize server caching and data transfers, which helps performance and scalability, and cuts network traffic a bit by avoiding client requests and empty responses. It can be an important advantage for a server to operate at its own pace and supply clients with data when it is ready. Industry standards - such as OPC and GID - support both: the server pushes updates to subscribed clients, but a client can pull some rarely used data without bothering with a subscription. A: As long as the client initiates the connection (to get past firewall and NAT problems) either way is fine. If there are several different types of data you need to send, you might want to have the client specify which type it wants, but this is only needed once per connection. Then you can have the server continue to send updates as it has them. It would be less network traffic to have the server send updates without the client continually asking for updates. A: What do you have on the client's side? Many firewalls allow outgoing requests but block incoming requests. In other words, pull may be your only option if you are crossing the Internet unless you are sending out e-mails.
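To make the push option concrete, here is a very small sketch of the pattern described above: the client opens the connection and states its interest once, and the server writes updates down that same connection as they happen. All names here are made up, client handling is done inline for brevity, and a real implementation would need message framing, unsubscription and proper error handling.
using System.Collections.Concurrent;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class PushServer
{
    // data element name -> writers for the clients subscribed to it
    private readonly ConcurrentDictionary<string, ConcurrentBag<StreamWriter>> subscribers =
        new ConcurrentDictionary<string, ConcurrentBag<StreamWriter>>();

    public async Task AcceptClientsAsync(int port)
    {
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            var stream = client.GetStream();
            var reader = new StreamReader(stream);
            var writer = new StreamWriter(stream) { AutoFlush = true };

            // The only request the client ever makes: which data element it wants to watch.
            string dataElement = await reader.ReadLineAsync();
            subscribers.GetOrAdd(dataElement, _ => new ConcurrentBag<StreamWriter>()).Add(writer);
        }
    }

    // Called by the server whenever a watched data element changes.
    public void PublishChange(string dataElement, string newValue)
    {
        ConcurrentBag<StreamWriter> writers;
        if (!subscribers.TryGetValue(dataElement, out writers)) return;
        foreach (var w in writers)
        {
            try { w.WriteLine(dataElement + "=" + newValue); }
            catch (IOException) { /* client went away; a real server would unsubscribe it here */ }
        }
    }
}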
{ "language": "en", "url": "https://stackoverflow.com/questions/58757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: SDK for writing DVDs I need to add DVD writing functionality to an application I'm working on. However, it needs to be able to write out files that are being grabbed "live" from a camera, over a long period of time. I can't wait until all the files are captured before I start writing them to the DVD; I need to write them out in chunks as I go along. I've looked at IMAPI v2, but the main problem seems to be that you need to point it to all the files you plan to write out to disk before you start the burning process. I know it has the concept of "sessions", which means you can write to the DVD in several parts before you finally "close" it. But I was wondering if there were any other DVD writing SDKs that allow you to be constantly writing files to a DVD, and in particular files that are only in memory. It would be more efficient if I didn't have to write the captured images out to hard disk before they are burned to DVD. The solution needs to work under .NET on Windows XP and Vista. A: The Primo burning engine for .Net works nicely. A: Generally you have to have your data ready before you start writing a session. What you could do is grab the first images before starting the first session, then keep grabbing the rest in the background and write new sessions as the data comes in. Also, vbAccelerator has a great IMAPI Wrapper for .NET A: Format your optical media to a Live File System (Incremental Packet Writing instead of using a mastered disc format with IMAPIv2) and then you will be able to add any file just using, e.g., CopyFile without creating new sessions. This way you will not waste lead-in/lead-out space each time you want to add a new file in a new session... Notice that to ensure compatibility of discs created on Windows Vista, UDF 2.01 or lower should be selected.
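To illustrate the Live File System suggestion: once the disc is formatted with the UDF live file system, the burner is just another drive letter as far as .NET is concerned, so captured data can be written to it directly, even from memory. This is only a sketch under that assumption; the class and method names are made up, the drive letter is an example, and packet-writing performance still depends on the OS and the drive.
using System.IO;

class IncrementalDvdWriter
{
    private readonly string dvdDrive;   // e.g. @"E:\" once the disc is UDF-formatted

    public IncrementalDvdWriter(string dvdDrive)
    {
        this.dvdDrive = dvdDrive;
    }

    // Writes a captured frame that only exists in memory straight onto the disc,
    // without staging it on the hard disk first.
    public void WriteCapturedFrame(string fileName, byte[] frameData)
    {
        File.WriteAllBytes(Path.Combine(dvdDrive, fileName), frameData);
    }
}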
{ "language": "en", "url": "https://stackoverflow.com/questions/58768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you paste multiple tabbed lines into Vi? I want to paste something I have cut from my desktop into a file open in Vi. But when I paste, the tabs stack up and the lines stagger across the page. I think it is some sort of visual mode change but I can't find the command. A: If you're using plain vi: You probably have autoindent on. To turn it off while pasting: <Esc> :set noai <paste all you want> <Esc> :set ai I have in my .exrc the following shortcuts: map ^P :set noai^M map ^N :set ai^M Note that these have to be the actual control characters - insert them using Ctrl-V Ctrl-P and so on. If you're using vim: Use the paste option. In addition to disabling autoindent it will also set other options such as textwidth and wrapmargin to paste-friendly defaults: <Esc> :set paste <paste all you want> <Esc> :set nopaste You can also set a key to toggle the paste mode. My .vimrc has the following line: set pastetoggle=<C-P> " Ctrl-P toggles paste mode A: If you are using VIM, you can use "*p (i.e. double quotes, asterisk, letter p). A: I found that if I copy tabbed lines first into a text editor and then recopy them from there to vim, then the tabs are correct.
{ "language": "en", "url": "https://stackoverflow.com/questions/58774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Copy files on Windows Command Line with Progress I need to copy files using the Windows command line (available on XP Pro or later by default) and show progress during the process. The progress indicator could be in a terminal or a GUI window. It is intended to be used during batch file scripting. A: I used the copy command with the /z switch for copying over network drives. It also works for copying between local drives. Tested on XP Home edition. A: Some interesting timings regarding all these methods. If you have Gigabit connections, you should not use the /z flag or it will kill your connection speed. Robocopy or dism are the only tools that go full speed and show a progress bar. wdsmcast is for multicasting off a WDS server and might be faster if you are imaging 5+ computers. To get the 1:17 timing, I was maxing out the Gigabit connection at 920Mbps, so you won't get that on two connections at once. Also take note that exporting the small wim index out of the larger wim file took longer than just copying the whole thing.
Model  Exe       OS       Switches     Index    Size    Time   Link speed
8760w  dism      Win8     /export-wim  index 1  6.27GB  2:21   1Gbps
8760w  dism      Win8     /export-wim  index 2  7.92GB  1:29   1Gbps
6305   wdsmcast  winpe32  /trans-file  res.RWM  7.92GB  6:54   1Gbps
6305   dism      Winpe32  /export-wim  index 1  6.27GB  2:20   1Gbps
6305   dism      Winpe32  /export-wim  index 2  7.92GB  1:34   1Gbps
6305   copy      Winpe32  /z           Whole    7.92GB  25:48  1Gbps
6305   copy      Winpe32  none         Wim      7.92GB  1:17   1Gbps
6305   xcopy     Winpe32  /z /j        Wim      7.92GB  23:54  1Gbps
6305   xcopy     Winpe32  /j           Wim      7.92GB  1:38   1Gbps
6305   VBS.copy  Winpe32               Wim      7.92GB  1:21   1Gbps
6305   robocopy  Winpe32               Wim      7.92GB  1:17   1Gbps
If you don't have robocopy.exe available, why not run it from the network share you are copying your files from? In my case, I prefer to do that so I don't have to rebuild my WinPE boot.wim file every time I want to make a change and then update dozens of flash drives. A: robocopy: Robocopy, or "Robust File Copy", is a command-line directory and/or file replication command. Robocopy functionally replaces Xcopy, with more options. It has been available as part of the Windows Resource Kit starting with Windows NT 4.0, and was first introduced as a standard feature in Windows Vista and Windows Server 2008. The command is robocopy... A: The Esentutl /y option allows copying (single) files with a progress bar. The command should look like: esentutl /y "FILE.EXT" /d "DEST.EXT" /o The command is available on every Windows machine, but the /y option is only present from Windows Vista on. As it works only with single files, it does not look very useful for small ones. Another limitation is that the command cannot overwrite files. Here's a wrapper script that checks the destination and, if needed, can delete it (help can be seen by passing /h). Another option is to automate the shell.Application object through PowerShell, JScript or VBScript. This will allow you to copy items with the Explorer pop-up showing the progress. Here's an example script and a usage: call shellCopy.bat "D:\Folder\anotherFolder" "C:\Destination" With this you can select a single file, a directory, or use files with wildcards. Though if the items' size is too small the pop-up will disappear too fast. If there are items with the same name in the destination it will create a new one with - Copy at the end (as it is used with right click and copy/paste). Though you can play with the option values using the official documentation and ask for overwriting, for example.
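For completeness, the same shell.Application trick can be driven from .NET: late-bound COM automation of the Windows shell shows the standard Explorer copy dialog with its progress bar. This is only a sketch; the class name is made up, the 16 passed to CopyHere is the usual "respond Yes to All" option flag, and note that CopyHere returns immediately and copies in the background.
using System;

class ShellCopyWithProgress
{
    public static void Copy(string sourcePath, string destinationFolder)
    {
        // Late-bound automation of the Windows shell (same object the scripts above use).
        Type shellType = Type.GetTypeFromProgID("Shell.Application");
        dynamic shell = Activator.CreateInstance(shellType);
        dynamic destination = shell.NameSpace(destinationFolder);
        // CopyHere accepts a single file, a folder or a wildcard pattern, like the script version.
        destination.CopyHere(sourcePath, 16);
    }
}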
A: This technet link has some good info for copying large files. I used an Exchange Server utility mentioned in the article which shows progress and uses non-buffered copy functions internally for faster transfer. In another scenario, I used robocopy. Robocopy GUI makes it easier to get your command line options right. A: Here is the script I use:
@ECHO off
SETLOCAL ENABLEDELAYEDEXPANSION
mode con:cols=210 lines=50
ECHO Starting 1-way backup of MEDIA(M:) to BACKUP(G:)...
robocopy.exe M:\ G:\ *.* /E /PURGE /SEC /NP /NJH /NJS /XD "$RECYCLE.BIN" "System Volume Information" /TEE /R:5 /COPYALL /LOG:from_M_to_G.log
ECHO Finished with backup.
pause
A: If you want to copy files and see the progress, I suggest the batch script below, which I adapted from another script. It shows a progress bar and a percentage while it copies the game files for Nuclear Throne:
@echo off
title NTU Installer
setlocal EnableDelayedExpansion
@echo Starting installation...
if not exist "C:\NTU" ( md "C:\NTU" )
if not exist "C:\NTU\Profile" ( md "C:\NTU\Profile" )
ping -n 5 localhost >nul
for %%f in (*.*) do set/a vb+=1
set "barra="
::bar loop
for /l %%i in (1,1,70) do set "barra=!barra!Û"
rem empty bar to be filled
set "resto="
rem empty bar loop
for /l %%i in (1,1,110) do set "resto=!resto!"
set i=0
rem file copy loop
for %%f in (*.*) do (
>>"log_ntu.css" (
copy "%%f" "C:\NTU">nul
echo Copied:%%f
)
cls
set /a i+=1,percent=i*100/vb,barlen=70*percent/100
for %%a in (!barlen!) do echo !percent!%% / [!barra:~0,%%a!%resto%]
echo Installed:[%%f] / Complete:[!percent!%%/100%]
ping localhost -n 1.9 >nul
)
xcopy /e "Profile" "C:\NTU\Profile">"log_profile.css"
@echo Creating shortcut on the desktop...
copy "NTU.lnk" "C:\Users\%username%\Desktop">nul
ping localhost -n 4 >nul
@echo Files installed!
pause
{ "language": "en", "url": "https://stackoverflow.com/questions/58782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: What tool do you use for counting lines of source code in Visual Studio Projects? I know there are quite a few line count tools around. Is there something simple that's not a part some other big package that you use ? A: Slick Edit Gadgets has a nice report breaking it down by lines of code, whitespace and comments. The plug-in is free and relatively small. A: Sorry if it's not a direct answer but these days I much prefer to use code metric tools or profilers rather than lines of code. Ants profiler and NDepend are two that immediately come to mind. It's just that these tools allow you to get a real grasp on the size/complexity of your software, lines of code is a very primitive metric. A: I use this Python script: import os, sys total_count = 0 for root, dirs, filenames in os.walk(sys.argv[1]): dirs[:] = [ # prune search path dir for dir in dirs if dir.lower() not in ('.svn', 'excludefrombuild')] for filename in filenames: if os.path.splitext(filename)[1].lower() in ('.cpp', '.h'): fullname = os.path.join(root, filename) count = 0 for line in open(fullname): count += 1 total_count += count print count, fullname print total_count A: If you have Visual Studio 2008 Team Developer or Team Suite edition, you can get them directly in Visual Studio using Code Metrics. A: You could use find and wc from this relatively small package, http://unxutils.sourceforge.net/ Like find . -name *.cs -exec wc -l {} \; Or, if you have a linux machine handy you can mount the drive and do it like that, and it'll give you a ballpark figure. You can complexify to remove comments, etc. But given that you just want a ballpark figure, shouldn't be necessary. A: Right click on Project in Solution explorer and select "Calculate Code Metrics". A: not sure if this works in VS08 ... code project A: I have also used this simple C# made tool. http://richnewman.wordpress.com/2007/07/09/c-visual-basic-and-c-net-line-count-utility-version-2/ A: Exact Magic's StodioTools package (free) shows Executable LoC among other metrics. This is a plug-in to VisualStudio 2008. A: Project Line Counter is pretty cool, but you need an updated .reg file for VS 2008 and later. I have a .reg file for Visual Studio 2010 on my website: http://www.onemanmmo.com/index.php?cmd=newsitem&comment=news.1.41.0 There's some instructions in the discussion at CodeProject http://www.codeproject.com/KB/macros/linecount.aspx with info on getting it to run with Visual Studio 2008.
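If you would rather do the ballpark count in code than install a tool, a rough C# equivalent of the Python and wc approaches above only takes a few lines. This is just a sketch of a physical line count over *.cs files; it skips blank lines but makes no attempt to exclude comments or generated code.
using System;
using System.IO;
using System.Linq;

class LineCounter
{
    static void Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : ".";
        int total = 0;
        foreach (string file in Directory.GetFiles(root, "*.cs", SearchOption.AllDirectories))
        {
            // Count non-blank physical lines in each source file.
            int count = File.ReadLines(file).Count(line => line.Trim().Length > 0);
            Console.WriteLine("{0,8}  {1}", count, file);
            total += count;
        }
        Console.WriteLine("{0,8}  total", total);
    }
}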
{ "language": "en", "url": "https://stackoverflow.com/questions/58783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Retrieving the associated shared service provider's name? How do you programmatically retrieve the name of a shared services provider that's associated with a specific Sharepoint web application? I have a custom solution that needs to: * *Enumerate all web applications that it's deployed to *Figure out the Shared Services provider that each of the web applications is associated with *Access a Business Data Catalog installed on the SSP to retrieve some data *Enumerate through all site collections in those web applications *Perform various tasks within the site collections according to the data I got points 1, 3, 4 and 5 figured out, but 2 is somewhat troublesome. I want to avoid hardcoding the SSP name anywhere and not require the farm administrator to manually edit a configuration file. All information I need is in the Sharepoint configuration database, I just need to know how to access it through the object model. A: Unfortunately there is no supported way I know of that this can be done. The relevant class is SharedResourceProvider in the Microsoft.Office.Server.Administration namespace, in the Microsoft.Office.Server DLL. It's marked internal so pre-reflection: SharedResourceProvider sharedResourceProvider = ServerContext.GetContext(SPContext.Current.Site).SharedResourceProvider; string sspName = sharedResourceProvider.Name; Post-reflection: ServerContext sc = ServerContext.GetContext(SPContext.Current.Site); PropertyInfo srpProp = sc.GetType().GetProperty( "SharedResourceProvider", BindingFlags.NonPublic | BindingFlags.Instance); object srp = srpProp.GetValue(sc, null); PropertyInfo srpNameProp = srp.GetType().GetProperty( "Name", BindingFlags.Public | BindingFlags.Instance); string sspName = (string)srpNameProp.GetValue(srp, null); An alternative would be to write a SQL query over the configuration database which isn't recommended.
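Tying that back to steps 1 and 2 of the question, a sketch of applying the reflection trick across every web application in the farm might look like the following. It assumes the code runs on a farm server with sufficient rights; SspResolver and GetSspName are illustrative names, not part of the SharePoint API.
using System;
using Microsoft.Office.Server;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

class SspResolver
{
    static void Main()
    {
        foreach (SPWebApplication webApp in SPWebService.ContentService.WebApplications)
        {
            // Any site collection in the web application can be used to obtain the ServerContext.
            if (webApp.Sites.Count == 0) continue;
            using (SPSite site = webApp.Sites[0])
            {
                ServerContext context = ServerContext.GetContext(site);
                string sspName = GetSspName(context); // reflection helper from the answer above
                Console.WriteLine("{0} -> {1}", webApp.Name, sspName);
            }
        }
    }

    static string GetSspName(ServerContext context)
    {
        var srpProp = context.GetType().GetProperty("SharedResourceProvider",
            System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance);
        object srp = srpProp.GetValue(context, null);
        var nameProp = srp.GetType().GetProperty("Name");
        return (string)nameProp.GetValue(srp, null);
    }
}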
{ "language": "en", "url": "https://stackoverflow.com/questions/58809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Javascript syntax highlighting in vim Has anyone else found VIM's syntax highlighting of Javascript sub-optimal? I'm finding that sometimes I need to scroll around in order to get the syntax highlighting adjusted, as sometimes it mysteriously drops all highlighting. Are there any work-arounds or ways to fix this? I'm using vim 7.1. A: This is a really old post, but I was experiencing the same thing: sometimes syntax highlight would just stop working when looking at the javascript section in an .html file. As the OP mentions, a quick workaround was to scroll up and then magically things would start highlighting again. Today I found the underlying problem and a good solution. In Vim, syntax highlighting uses a context to derive the correct highlight, where context is defined by the previous lines. It is possible to specify how many lines before the current line are used by issuing :syntax sync minlines=200. In this case, it will use up to 200 previous lines as context. It is possible to use the whole file (which can be slow for long files) by running :syntax sync fromstart. Once I found that, I added this line to my .vimrc: autocmd BufEnter *.html :syntax sync fromstart By doing so, .html files will use the whole file as context. Thus, the javascript section will always by highlighted properly, regardless of how long the JS section is. Hope this helps someone else out there! A: You might like to try this improved Javascript syntax highlighter rather than the one that ships with VIMRUNTIME. A: For a quick and dirty fix, sometimes I just scroll up and down and the highlighting readjusts. Ctrl+L for a screen redraw can also fix it. A: Well, I've modified Yi Zhao's Javascript Syntax, and added Ajax Keywords support, also highlight DOM Methods and others. Here it is, it is far from being perfect as I'm still new to Vim, but so far it has work for me. My Javascript Syntax. If you can fix, add features, please do. UPDATE: I forgot these syntax highlights are only shown if you included them in your own colorscheme, as I did in my Nazca colorscheme. I'll test if I could add these line into my modified syntax file. Follow the new version of the javascript syntax file in github, for it is no longer required to modify your current colorscheme. A: Syntax coloring synchronization probably needs adjustment. I've found in certain contexts that I need to change it. Syntax synchronization (":help syn-sync") controls how vim keeps track of and refreshes its parse of the code for coloring, so that it can start drawing anywhere in the file. The defaults don't always work for me, so sometimes I find myself issuing :syn sync fromstart I suggest reading through the documentation under :help syn-sync or just check :help syntax and find the section on synchronization. to make an informed decision among the four available basic options. I maintain mappings to function keys to switch between "fromstart" and "ccomment" modes and for just clearing the sync settings.
{ "language": "en", "url": "https://stackoverflow.com/questions/58825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: ASP.NET Custom Controls and "Dynamic" Event Model OK, I am not sure if the title is completely accurate, open to suggestions! I am in the process of creating an ASP.NET custom control; this is something that is still relatively new to me, so please bear with me. I am thinking about the event model. Since we are not using Web Controls there are no events being fired from buttons; rather, I am manually calling __doPostBack with the appropriate arguments. However this can obviously mean that there are a lot of postbacks occurring when, say, selecting options (which render differently when selected). In time, I will need to make this more Ajax-y and responsive, so I will need to change the event binding to call local Javascript. So, I was thinking I should be able to toggle the "mode" of the control: it can either use postback and handle itself, or you can specify the Javascript function names to call instead of the doPostBack. * *What are your thoughts on this? *Am I approaching the raising of the events from the control in the wrong way? (totally open to suggestions here!) *How would you approach a similar problem? Edit - To Clarify * *I am creating a custom rendered control (i.e. inherits from WebControl). *We are not using existing Web Controls since we want complete control over the rendered output. *AFAIK the only way to get a server side event to occur from a custom rendered control is to call doPostBack from the rendered elements (please correct if wrong!). *ASP.NET MVC is not an option. A: Very odd. You're using ASP.NET server controls and custom controls, but you're not using web controls? And you're calling __doPostBack manually? Do you like to do things the hard way? If I were still using the server control model rather than MVC, I would slap ASP.NET Ajax controls on that sucker and call it a day. What you're doing is like putting a blower on a Model T. It may be fun and interesting, but after you're done with all the hard work, what do you really have? A: I have been doing some more digging on this, and came across how to inject Javascript into the client when required. This will obviously play a huge part in making the controls more responsive and reducing round-trips to the server. For example: RegisterClientScriptBlock. Looking forward to playing with this more; feel free to get involved, people!
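As a sketch of the "toggle the mode" idea (names like ClientClickFunction and OptionSelected are illustrative, not an established pattern): the control implements IPostBackEventHandler so the server-side path still works, and it only emits the __doPostBack reference when no client-side JavaScript function has been specified.
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class SelectableOption : WebControl, IPostBackEventHandler
{
    public string ClientClickFunction { get; set; }   // e.g. "optionSelected"; empty = postback mode
    public string OptionValue { get; set; }

    public event EventHandler<CommandEventArgs> OptionSelected;

    protected override void Render(HtmlTextWriter writer)
    {
        // Either wire the element to __doPostBack or to the user-supplied JavaScript function.
        string onclick = string.IsNullOrEmpty(ClientClickFunction)
            ? Page.ClientScript.GetPostBackEventReference(this, OptionValue)   // server-side mode
            : string.Format("{0}('{1}')", ClientClickFunction, OptionValue);   // client-side mode

        writer.AddAttribute(HtmlTextWriterAttribute.Onclick, onclick);
        writer.RenderBeginTag(HtmlTextWriterTag.Div);
        writer.Write(OptionValue);
        writer.RenderEndTag();
    }

    // Called by the page framework when __doPostBack fires for this control.
    public void RaisePostBackEvent(string eventArgument)
    {
        var handler = OptionSelected;
        if (handler != null)
            handler(this, new CommandEventArgs("Select", eventArgument));
    }
}
RegisterClientScriptBlock (mentioned above) can then be used to emit the client-side function itself when the control is running in JavaScript mode.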
{ "language": "en", "url": "https://stackoverflow.com/questions/58827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Ambiguity in Left joins (oracle only?) My boss found a bug in a query I created, and I don't understand the reasoning behind the bug, although the query results prove he's correct. Here's the query (simplified version) before the fix: select PTNO,PTNM,CATCD from PARTS left join CATEGORIES on (CATEGORIES.CATCD=PARTS.CATCD); and here it is after the fix: select PTNO,PTNM,PARTS.CATCD from PARTS left join CATEGORIES on (CATEGORIES.CATCD=PARTS.CATCD); The bug was, that null values were being shown for column CATCD, i.e. the query results included results from table CATEGORIES instead of PARTS. Here's what I don't understand: if there was ambiguity in the original query, why didn't Oracle throw an error? As far as I understood, in the case of left joins, the "main" table in the query (PARTS) has precedence in ambiguity. Am I wrong, or just not thinking about this problem correctly? Update: Here's a revised example, where the ambiguity error is not thrown: CREATE TABLE PARTS (PTNO NUMBER, CATCD NUMBER, SECCD NUMBER); CREATE TABLE CATEGORIES(CATCD NUMBER); CREATE TABLE SECTIONS(SECCD NUMBER, CATCD NUMBER); select PTNO,CATCD from PARTS left join CATEGORIES on (CATEGORIES.CATCD=PARTS.CATCD) left join SECTIONS on (SECTIONS.SECCD=PARTS.SECCD) ; Anybody have a clue? A: Here's the query (simplified version) I think by simplifying the query you removed the real cause of the bug :-) What oracle version are you using? Oracle 10g ( 10.2.0.1.0 ) gives: create table parts (ptno number , ptnm number , catcd number); create table CATEGORIES (catcd number); select PTNO,PTNM,CATCD from PARTS left join CATEGORIES on (CATEGORIES.CATCD=PARTS.CATCD); I get ORA-00918: column ambiguously defined A: Interesting in SQL server that throws an error (as it should) select id from sysobjects s left join syscolumns c on s.id = c.id Server: Msg 209, Level 16, State 1, Line 1 Ambiguous column name 'id'. select id from sysobjects left join syscolumns on sysobjects.id = syscolumns.id Server: Msg 209, Level 16, State 1, Line 1 Ambiguous column name 'id'. A: From my experience if you create a query like this the data result will pull CATCD from the right side of the join not the left when there is a field overlap like this. So since this join will have all records from PARTS with only some pull through from CATEGORIES you will have NULL in the CATCD field any time there is no data on the right side. By explicitly defining the column as from PARTS (ie left side) you will get a non null value assuming that the field has data in PARTS. Remember that with LEFT JOIN you are only guarantied data in fields from the left table, there may well be empty columns to the right. A: This may be a bug in the Oracle optimizer. I can reproduce the same behavior on the query with 3 tables. Intuitively it does seem that it should produce an error. If I rewrite it in either of the following ways, it does generate an error: (1) Using old-style outer join select ptno, catcd from parts, categories, sections where categories.catcd (+) = parts.catcd and sections.seccd (+) = parts.seccd (2) Explicitly isolating the two joins select ptno, catcd from ( select ptno, seccd, catcd from parts left join categories on (categories.CATCD=parts.CATCD) ) left join sections on (sections.SECCD=parts.SECCD) I used DBMS_XPLAN to get details on the execution of the query, which did show something interesting. The plan is basically to outer join PARTS and CATEGORIES, project that result set, then outer join it to SECTIONS. 
The interesting part is that in the projection of the first outer join, it is only including PTNO and SECCD -- it is NOT including the CATCD from either of the first two tables. Therefore the final result is getting CATCD from the third table. But I don't know whether this is a cause or an effect. A: I am using Oracle 9.2.0.8.0. and it does give the error "ORA-00918: column ambiguously defined". A: I'm afraid I can't tell you why you're not getting an exception, but I can postulate as to why it chose CATEGORIES' version of the column over PARTS' version. As far as I understood, in the case of left joins, the "main" table in the query (PARTS) has precedence in ambiguity It's not clear whether by "main" you mean simply the left table in a left join, or the "driving" table, as you see the query conceptually... But in either case, what you see as the "main" table in the query as you've written it will not necessarily be the "main" table in the actual execution of that query. My guess is that Oracle is simply using the column from the first table it hits in executing the query. And since most individual operations in SQL do not require one table to be hit before the other, the DBMS will decide at parse time which is the most efficient one to scan first. Try getting an execution plan for the query. I suspect it may reveal that it's hitting CATEGORIES first and then PARTS. A: This is a known bug with some Oracle versions when using ANSI-style joins. The correct behavior would be to get an ORA-00918 error. It's always best to specify your table names anyway; that way your queries don't break when you happen to add a new column with a name that is also used in another table. A: It is generally advised to be specific and fully qualify all column names anyway, as it saves the optimizer a little work. Certainly in SQL Server. From what I can gleen from the Oracle docs, it seems it will only throw if you select the column name twice in the select list, or once in the select list and then again elsewhere like an order by clause. Perhaps you have uncovered an 'undocumented feature' :) A: Like HollyStyles, I cannot find anything in the Oracle docs which can explain what you are seeing. PostgreSQL, DB2, MySQL and MSSQL all refuse to run the first query, as it's ambiguous. A: @Pat: I get the same error here for your query. My query is just a little bit more complicated than what I originally posted. I'm working on a reproducible simple example now. A: A bigger question you should be asking yourself is - why do I have a category code in the parts table that doesn't exist in the categories table? A: This is a bug in Oracle 9i. If you join more than 2 tables using ANSI notation, it will not detect ambiguities in column names, and can return the wrong column if an alias isn't used. As has been mentioned already, it is fixed in 10g, so if an alias isn't used, an error will be returned.
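One practical defence, whichever Oracle version you are on, is to alias every table and qualify every column so the statement never relies on the engine's ambiguity detection; applied to the revised example above it would look roughly like this:

select p.ptno,
       p.catcd,
       c.catcd as category_catcd,
       s.catcd as section_catcd
from parts p
left join categories c on c.catcd = p.catcd
left join sections s on s.seccd = p.seccd;

Written this way the bug cannot occur, because each occurrence of CATCD is explicitly tied to its source table.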
{ "language": "en", "url": "https://stackoverflow.com/questions/58831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Precompiled headers with GCC How can I get precompiled headers working with GCC? I have had no luck in my attempts and I haven't seen many good examples for how to set it up. I've tried on Cygwin GCC 3.4.4 and using 4.0 on Ubuntu. A: Call GCC the same way as if you call it for your source file, but with a header file. E.g., g++ $(CPPFLAGS) test.h This generates a file called test.h.gch. Every time GCC searches for test.h, it looks first for test.h.gch and if it finds it it uses it automatically. More information can be found under GCC Precompiled Headers. A: The -x specifier for C++ precompiled headers is -x c++-header, not -x c++. Example usage of PCH follows. pch.h: // Put your common include files here: Boost, STL as well as your project's headers. main.cpp: #include "pch.h" // Use the PCH here. Generate the PCH like this: $ g++ -x c++-header -o pch.h.gch -c pch.h The pch.h.gch must be in the same directory as the pch.h in order to be used, so make sure that you execute the above command from the directory where pch.h is. A: I have managed to get precompiled headers working under gcc once in the past, and I recall having problems then as well. The thing to remember is that gcc will ignore the file (header.h.gch or similar) if certain conditions are not met, a list of which can be found on the gcc precompiled header documentation page. Generally it's safest to have your build system compile the .gch file as a first step, with the same command line options and executable as the rest of your source. This ensures the file is up to date and that there are no subtle differences. It's probably also a good idea to get it working with a contrived example first, just to remove the possibility that your problems are specific to source code in your project. A: I have definitely had success. First, I used the following code: #include <boost/xpressive/xpressive.hpp> #include <iostream> using namespace std; using namespace boost::xpressive; // A simple regular expression test int main() { std::string hello("Hello, World!"); sregex rex = sregex::compile( "(\\w+) (\\w+)!" ); smatch what; if( regex_match( hello, what, rex ) ) { std::cout << what[0] << '\n'; // Whole match std::cout << what[1] << '\n'; // First capture std::cout << what[2] << '\n'; // Second capture } return 0; } This was just a Hello, World! program from Boost Xpressive. First, I compiled with the -H option in GCC. It showed an enormous list of headers that it used. Then, I took a look at the compile flags my IDE (Code::Blocks) was producing and saw something like this: g++ -Wall -fexceptions -g -c main.cpp -o obj/Debug/main.o So I wrote a command to compile the Xpressive.hpp file with the exact same flags: sudo g++ -Wall -fexceptions -g /usr/local/include/boost/xpressive/xpressive.hpp I compiled the original code again with the -H and got this output: g++ -Wall -fexceptions -H -g -c main.cpp -o obj/Debug/main.o ! /usr/local/include/boost/xpressive/xpressive.hpp.gch main.cpp . /usr/include/c++/4.4/iostream .. /usr/include/c++/4.4/x86_64-linux-gnu/bits/c++config.h .. /usr/include/c++/4.4/ostream .. /usr/include/c++/4.4/istream main.cpp The ! means that the compiler was able to use the precompiled header. An x means it was not able to use it. Using the appropriate compiler flags is crucial. I took off the -H and ran some speed tests. The precompiled header had an improvement from 14 seconds to 11 seconds. Not bad, but not great. Note: Here's the example. I couldn't get it to work in the post. 
BTW: I'm using the following g++: g++ (Ubuntu 4.4.3-4ubuntu5) 4.4.3 A: Firstly, see the documentation here. You compile headers just like any other file but you put the output inside a file with a suffix of .gch. So for example if you precompile stdafx.h you will have a precompiled header that will be automatically searched for called stdafx.h.gch anytime you include stdafx.h Example: stdafx.h: #include <string> #include <stdio.h> a.cpp: #include "stdafx.h" int main(int argc, char**argv) { std::string s = "Hi"; return 0; } Then compile as: > g++ -c stdafx.h -o stdafx.h.gch > g++ a.cpp > ./a.out Your compilation will work even if you remove stdafx.h after step 1. A: Make sure to -include your_header.h This is how I precompiled and used bits/stdc++.h collection. Code #include <bits/stdc++.h> Then I located the lib by compiling my file with -H and looking at output g++ sol.cpp -H -O3 -pthread -lm -std=c++14 -o executable where I saw . /usr/include/x86_64-linux-gnu/c++/7/bits/stdc++.h So I made a new directory bits inside of current one and copied stdc++.h from there. Then I ran g++ bits/stdc++.h -O3 -std=c++14 -pthread which generated bits/stdc++.gch Normally I compiled my code via g++ sol.cpp -O3 -pthread -lm -std=c++14 -o executable , but I had to modify that to g++ sol.cpp -include bits/stdc++.h -O3 -pthread -lm -std=c++14 -o executable as it only resolved to .gch file instead of .h with -include bits/stdc++.h That was key for me. Other thing to keep in mind is that you have to compile *.h header file with almost the same parameters as you compile your *.cpp. When I didn't include -O3 or -pthread it ignored the *.gch precompiled header. To check if everything's correct you can measure time difference via comparing result of time g++ sol.cpp ... or run g++ sol.cpp -H -O3 -pthread -lm -std=c++14 -o executable again and look for header paths and if you now get ! before library path, for example ! ./bits/stdc++.h.gch .... A: A subtle tip about the file extension that tripped me up, because I wasn't paying close enough attention: the .gch extension is added to the precompiled file's full name; it doesn't replace .h. If you get it wrong, the compiler won't find it and silently does not work. precomp.h => precomp.h.gch Not: precomp.h => precomp.gch Use GCC's -H to check if it's finding/using it.
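Since several answers stress that the precompiled header must be built with exactly the same compiler and flags as the rest of the sources, here is a minimal Makefile sketch of that idea (file names and flags are placeholders; recipe lines must be indented with tabs):

CXX      := g++
CXXFLAGS := -Wall -O2 -g

# Build the precompiled header first, with exactly the same flags as the .cpp files.
pch.h.gch: pch.h
	$(CXX) $(CXXFLAGS) -x c++-header -c pch.h -o pch.h.gch

# Objects that include pch.h depend on the .gch so it is rebuilt when the header changes.
main.o: main.cpp pch.h.gch
	$(CXX) $(CXXFLAGS) -c main.cpp -o main.o

main: main.o
	$(CXX) $(CXXFLAGS) -o main main.o

Add -H to CXXFLAGS temporarily if you want to confirm (via the leading !) that the .gch file is actually being picked up.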
{ "language": "en", "url": "https://stackoverflow.com/questions/58841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "102" }
Q: Can I set a breakpoint on 'memory access' in GDB? I am running an application through gdb and I want to set a breakpoint for any time a specific variable is accessed / changed. Is there a good method for doing this? I would also be interested in other ways to monitor a variable in C/C++ to see if/when it changes. A: I just tried the following: $ cat gdbtest.c int abc = 43; int main() { abc = 10; } $ gcc -g -o gdbtest gdbtest.c $ gdb gdbtest ... (gdb) watch abc Hardware watchpoint 1: abc (gdb) r Starting program: /home/mweerden/gdbtest ... Old value = 43 New value = 10 main () at gdbtest.c:6 6 } (gdb) quit So it seems possible, but you do appear to need some hardware support. A: What you're looking for is called a watchpoint. Usage (gdb) watch foo: watch the value of variable foo (gdb) watch *(int*)0x12345678: watch the value pointed by an address, casted to whatever type you want (gdb) watch a*b + c/d: watch an arbitrarily complex expression, valid in the program's native language Watchpoints are of three kinds: * *watch: gdb will break when a write occurs *rwatch: gdb will break wnen a read occurs *awatch: gdb will break in both cases You may choose the more appropriate for your needs. For more information, check this out. A: watch only breaks on write, rwatch let you break on read, and awatch let you break on read/write. You can set read watchpoints on memory locations: gdb$ rwatch *0xfeedface Hardware read watchpoint 2: *0xfeedface but one limitation applies to the rwatch and awatch commands; you can't use gdb variables in expressions: gdb$ rwatch $ebx+0xec1a04f Expression cannot be implemented with read/access watchpoint. So you have to expand them yourself: gdb$ print $ebx $13 = 0x135700 gdb$ rwatch *0x135700+0xec1a04f Hardware read watchpoint 3: *0x135700 + 0xec1a04f gdb$ c Hardware read watchpoint 3: *0x135700 + 0xec1a04f Value = 0xec34daf 0x9527d6e7 in objc_msgSend () Edit: Oh, and by the way. You need either hardware or software support. Software is obviously much slower. To find out if your OS supports hardware watchpoints you can see the can-use-hw-watchpoints environment setting. gdb$ show can-use-hw-watchpoints Debugger's willingness to use watchpoint hardware is 1. A: Assuming the first answer is referring to the C-like syntax (char *)(0x135700 +0xec1a04f) then the answer to do rwatch *0x135700+0xec1a04f is incorrect. The correct syntax is rwatch *(0x135700+0xec1a04f). The lack of ()s there caused me a great deal of pain trying to use watchpoints myself. A: Use watch to see when a variable is written to, rwatch when it is read and awatch when it is read/written from/to, as noted above. However, please note that to use this command, you must break the program, and the variable must be in scope when you've broken the program: Use the watch command. The argument to the watch command is an expression that is evaluated. This implies that the variabel you want to set a watchpoint on must be in the current scope. So, to set a watchpoint on a non-global variable, you must have set a breakpoint that will stop your program when the variable is in scope. You set the watchpoint after the program breaks. A: In addition to what has already been answered/commented by asksol and Paolo M I didn't at first read understand, why do we need to cast the results. Though I read this: https://sourceware.org/gdb/onlinedocs/gdb/Set-Watchpoints.html, yet it wasn't intuitive to me.. 
So I did an experiment to make the result clearer: Code: (Let's say that int main() is at Line 3; int i=0 is at Line 5 and other code.. is from Line 10) int main() { int i = 0; int j; i = 3840 // binary 1100 0000 0000 to take into account endianness other code.. } then i started gdb with the executable file in my first attempt, i set the breakpoint on the location of variable without casting, following were the results displayed Thread 1 "testing2" h Breakpoint 2 at 0x10040109b: file testing2.c, line 10. (gdb) s 7 i = 3840; (gdb) p i $1 = 0 (gdb) p &i $2 = (int *) 0xffffcbfc (gdb) watch *0xffffcbfc Hardware watchpoint 3: *0xffffcbfc (gdb) s [New Thread 13168.0xa74] Thread 1 "testing2" hit Breakpoint 2, main () at testing2.c:10 10 b = a; (gdb) p i $3 = 3840 (gdb) p *0xffffcbfc $4 = 3840 (gdb) p/t *0xffffcbfc $5 = 111100000000 as we could see breakpoint was hit for line 10 which was set by me. gdb didn't break because although variable i underwent change yet the location being watched didn't change (due to endianness, since it continued to remain all 0's) in my second attempt, i did the casting on the address of the variable to watch for all the sizeof(int) bytes. this time: (gdb) p &i $6 = (int *) 0xffffcbfc (gdb) p i $7 = 0 (gdb) watch *(int *) 0xffffcbfc Hardware watchpoint 6: *(int *) 0xffffcbfc (gdb) b 10 Breakpoint 7 at 0x10040109b: file testing2.c, line 10. (gdb) i b Num Type Disp Enb Address What 6 hw watchpoint keep y *(int *) 0xffffcbfc 7 breakpoint keep y 0x000000010040109b in main at testing2.c:10 (gdb) n [New Thread 21508.0x3c30] Thread 1 "testing2" hit Hardware watchpoint 6: *(int *) 0xffffcbfc Old value = 0 New value = 3840 Thread 1 "testing2" hit Breakpoint 7, main () at testing2.c:10 10 b = a; gdb break since it detected the value has changed.
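As a small addition: watchpoints can be combined with GDB's commands feature so the program logs changes and keeps running instead of stopping at every hit. For the abc example above, something like this works (plain GDB, see help commands; the watchpoint number 2 is just what GDB happened to assign in this session):

(gdb) watch abc
Hardware watchpoint 2: abc
(gdb) commands 2
> silent
> printf "abc is now %d\n", abc
> continue
> end
(gdb) run

The same technique applies to rwatch and awatch watchpoints.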
{ "language": "en", "url": "https://stackoverflow.com/questions/58851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "266" }
Q: Rectangle functions in emacs I've read in several places that the rectangle functions in emacs are very useful. I've read a bit about them, and I can't quite figure why. I mean, when you want to kill a paragraph, you mark the first row/column and then the last one, and that's actually a rectangle, right? But you can still use the normal kill... So what kind of transformations would you do with them? A: If you have data in columns in a text file with M-x delete-rectangle or M-x kill-rectangle you can delete a single column of data. Similarly, M-x yank-rectangle will paste in a column of text. For example, take the following text: 1. alligator alphorn 2. baboon bugle 3. crocodile cornet 4. dog didgeridoo 5. elephant euphonium 6. fish flugelhorn 7. gopher guitar Select from the a of alligator to the g of guitar. The beginning and end of the selection mark out two opposite corners of the rectangle. Enter M-x kill-rectangle and you immediately have: 1. alphorn 2. bugle 3. cornet 4. didgeridoo 5. euphonium 6. flugelhorn 7. guitar Next put the mark at the end of the top line, add a few spaces if required and enter M-x yank-rectangle and ta-da! You have re-ordered the columns: 1. alphorn alligator 2. bugle baboon 3. cornet crocodile 4. didgeridoo dog 5. euphonium elephant 6. flugelhorn fish 7. guitar gopher A: In emacs24+ there's also function for numbering lines: (rectangle-number-lines START END START-AT &optional FORMAT) Insert numbers in front of the region-rectangle. START-AT, if non-nil, should be a number from which to begin counting. FORMAT, if non-nil, should be a format string to pass to `format' along with the line count. When called interactively with a prefix argument, prompt for START-AT and FORMAT. It is binded to C-x r N by default. A: I like to use rectangle for 2 main purposes, inserting the same text on every line, or killing a column of text (similar to Dave Webb's answer). There are 2 useful shortcuts for these, C-x r k will kill a rectangle, and C-x r t to insert (there are other rectangle commands with a C-x r prefix, but these are the ones I use). So let's say you want to take some code and format it so that you can post it in a Stack Overflow post... you need to prefix with 4 spaces. So, go to the beginning of the first line, C-SPC to mark, then go to the beginning of the last line and C-x r t <SPC> <SPC> <SPC> <SPC> <RET>, and there you have it! Then you can just copy and paste it into Stack Overflow. I have run into more complex situations where this is useful, where you actually have text you want to insert on every line at a particular place. So the other situation like Dave Webb's situation, if you want to kill a rectangle, use C-x r k though, because it's just a lot quicker ;-) Also, according to my reference card that I printed out when I first started, you can do the following: * *C-x r r: copy to a register *C-x r y: yank a rectangle *C-x r o: open a rectangle, shifting text right (whatever that means...) *C-x r c: blank out a rectangle (I assume that means replace it with spaces, but you'd have to try it out to see) *C-x r t: prefix with text (as described above) *C-x r k: killing (as described above)
{ "language": "en", "url": "https://stackoverflow.com/questions/58872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How do I enable line numbers in VS2008? I can't seem to find that option. Surely it's in there? A: Go to Tools - Options - Text Editor - All Languages - General, and check Line numbers to show line numbers for all files. If you just want to see (or not see) the line numbers of a specific file, you can override this global setting by going to the Text Editor - - General page. Did you know... how to show line numbers in the editor? A: That would be Tools > Options Text Editor > All Languages > Line Numbers (at the bottom right) A: Tools -> Options -> Text Editor -> All languages. Near the bottom. A: Main Menu > Tools > Options Text Editor (Tree) > C# > Display group (Line numbers checkbox) that was easy :) A: Don't forget to check "[x] Show all settings" at the bottom of the form, otherwise you won't be able to see "All Languages".
{ "language": "en", "url": "https://stackoverflow.com/questions/58874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Should I use multiple assemblies for an isolated ASP.NET web application? Coming from a corporate IT environment, the standard was always creating a class library project for each layer, Business Logic, Data Access, and sometimes greater isolation of specific types. Now that I am working on my own web application project, I don't see a real need to isolate my code in this fashion. I don't have multiple applications that need to share this logic or service enable it. I also don't see any advantage to deployment scenarios. I am leaning towards putting all artifacts in one web application, logically separated by project folders. I wanted to know what the thoughts are of the community. Let me add more information... I am writing this application using MVC preview 5, so the unit testing piece will be supported by the separation of concerns inherit in the framework. I do like to have tests for everything! A: Start with the simplest thing possible and add complexity if and when required. Sounds as though a single assembly would work just fine for your case. However, do take care not to violate the layers by having layer A access an internal member of layer B. That would make it harder to pull the layers into separate assemblies at a later date. A: I'd say it depends on how serious you are about testing and unit-testing. If you plan to only do user/manual tests, or use basically, only test from the UI downward, then it doesn't really make a difference. On the other hand, if you plan on doing sort of unit-testing, or business rules validation, it definitely makes sense to split up your work into different assemblies. Even for smaller personal projects, I find this approach makes my life easier as the project goes on. I still run everything from the same solution, just with a web project for the UI, library for the business rules / application logic and another library for the DAL. A: You should still separate logically layers into there proper projects. That is a good engineering practice, whether you are just 1 developer or 100. The negative about the code all in one place is that it is going to make you refactor or duplicate code for expansion.
{ "language": "en", "url": "https://stackoverflow.com/questions/58878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Drilling down in VisualVM I just installed Java 1.6_07 so I could try profiling with VisualVM. It tells me that my app is spending 60% of its time in sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run How do I find out what it was doing during that time? How much of the time was it waiting for something to call it, or doing something else? What was calling it and what was it calling? I just can't seem to find any way to drill down to deeper levels like there is in Quantify or the Perl profiler. A: I don't have experience with VisualVM -- but JRockit's profiler does provide this information; you may consider using it instead. Update: a question with a list of java profilers can be found here, for users with sufficient rep to view deleted questions. A: Does your App use RMI over TCP? If not, is it possible that this is a heisenbug, caused by instrumenting the VM? I assume VisualVM must use RMI calls to figure out what's going on in the JVM.... A: I have started using the new VisualVM 1.2. It allows profiling CPU and drilling down using a call graph. Try it out. A: Using 1.3.2 also seeing this being the reported hangup I am hitting. In 1.3.2 if you do a thread dump and look for this call you can see where it lands in the call chain for that thread. Not sure if Yuval F was referring to this or something else. Look up the call chain to see what it's calling and so on, look down to see what it's being called by and so on.
{ "language": "en", "url": "https://stackoverflow.com/questions/58886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Converting SVG to PNG using C# I've been trying to convert SVG images to PNG using C#, without having to write too much code. Can anyone recommend a library or example code for doing this? A: There is a much easier way using the library http://svg.codeplex.com/ (Newer version @GIT, @NuGet). Here is my code var byteArray = Encoding.ASCII.GetBytes(svgFileContents); using (var stream = new MemoryStream(byteArray)) { var svgDocument = SvgDocument.Open(stream); var bitmap = svgDocument.Draw(); bitmap.Save(path, ImageFormat.Png); } A: I'm using Batik for this. Batik is a graphics library written in Java, with a command line interface. This makes that you can Batik from C#, same as the following example in Delphi: procedure ExecNewProcess(ProgramName : String; Wait: Boolean); var StartInfo : TStartupInfo; ProcInfo : TProcessInformation; CreateOK : Boolean; begin FillChar(StartInfo, SizeOf(TStartupInfo), #0); FillChar(ProcInfo, SizeOf(TProcessInformation), #0); StartInfo.cb := SizeOf(TStartupInfo); CreateOK := CreateProcess(nil, PChar(ProgramName), nil, nil, False, CREATE_NEW_PROCESS_GROUP + NORMAL_PRIORITY_CLASS, nil, nil, StartInfo, ProcInfo); if CreateOK then begin //may or may not be needed. Usually wait for child processes if Wait then WaitForSingleObject(ProcInfo.hProcess, INFINITE); end else ShowMessage('Unable to run ' + ProgramName); CloseHandle(ProcInfo.hProcess); CloseHandle(ProcInfo.hThread); end; procedure ConvertSVGtoPNG(aFilename: String); const ExecLine = 'c:\windows\system32\java.exe -jar C:\Apps\batik-1.7\batik-rasterizer.jar '; begin ExecNewProcess(ExecLine + aFilename, True); end; A: You can call the command-line version of inkscape to do this: http://harriyott.com/2008/05/converting-svg-images-to-png-in-c.aspx Also there is a C# SVG rendering engine, primarily designed to allow SVG files to be used on the web on codeplex that might suit your needs if that is your problem: Original Project http://www.codeplex.com/svg Fork with fixes and more activity: (added 7/2013) https://github.com/vvvv/SVG A: To add to the response from @Anish, if you are having issues with not seeing the text when exporting the SVG to an image, you can create a recursive function to loop through the children of the SVGDocument, try to cast it to a SvgText if possible (add your own error checking) and set the font family and style. foreach(var child in svgDocument.Children) { SetFont(child); } public void SetFont(SvgElement element) { foreach(var child in element.Children) { SetFont(child); //Call this function again with the child, this will loop //until the element has no more children } try { var svgText = (SvgText)parent; //try to cast the element as a SvgText //if it succeeds you can modify the font svgText.Font = new Font("Arial", 12.0f); svgText.FontSize = new SvgUnit(12.0f); } catch { } } Let me know if there are questions. A: When I had to rasterize svgs on the server, I ended up using P/Invoke to call librsvg functions (you can get the dlls from a windows version of the GIMP image editing program). 
[DllImport("kernel32.dll", SetLastError = true)] static extern bool SetDllDirectory(string pathname); [DllImport("libgobject-2.0-0.dll", SetLastError = true)] static extern void g_type_init(); [DllImport("librsvg-2-2.dll", SetLastError = true)] static extern IntPtr rsvg_pixbuf_from_file_at_size(string file_name, int width, int height, out IntPtr error); [DllImport("libgdk_pixbuf-2.0-0.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)] static extern bool gdk_pixbuf_save(IntPtr pixbuf, string filename, string type, out IntPtr error, __arglist); public static void RasterizeSvg(string inputFileName, string outputFileName) { bool callSuccessful = SetDllDirectory("C:\\Program Files\\GIMP-2.0\\bin"); if (!callSuccessful) { throw new Exception("Could not set DLL directory"); } g_type_init(); IntPtr error; IntPtr result = rsvg_pixbuf_from_file_at_size(inputFileName, -1, -1, out error); if (error != IntPtr.Zero) { throw new Exception(Marshal.ReadInt32(error).ToString()); } callSuccessful = gdk_pixbuf_save(result, outputFileName, "png", out error, __arglist(null)); if (!callSuccessful) { throw new Exception(error.ToInt32().ToString()); } } A: you can use altsoft xml2pdf lib for this
{ "language": "en", "url": "https://stackoverflow.com/questions/58910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "114" }
Q: Nested SQL Server transaction performing cascade delete Suppose I have a table called Companies that has a DepartmentID column. There's also a Departaments table that has as EmployeeID column. Of course I have an Employee table as well. The problem is that I want to delete a company, so first i have to delete all the employees for every departament and then all the departaments in the company. Cascade Delete is not an option, therefore i wish to use nested transactions. I'm new to SQL so I would appreciate your help. A: I'm not sure why you need nested transactions here. You only need one actual transaction: BEGIN TRAN DELETE FROM Employee FROM Employee INNER JOIN Department ON Employee.DepartmentID = Department.DepartmentID INNER JOIN Company ON Department.CompanyID = Company.CompanyID WHERE Company.CompanyID = @CompanyID DELETE FROM Department FROM Department INNER JOIN Company ON Department.CompanyID = Company.CompanyID WHERE Company.CompanyID = @CompanyID DELETE FROM Company WHERE Company.CompanyID = @CompanyID COMMIT TRAN Note the double FROM, that is not a typo, it's the correct SQL syntax for performing a JOIN in a DELETE. Each statement is atomic, either the entire DELETE will succeed or fail, which isn't that important in this case because the entire batch will either succeed or fail. BTW- I think you had your relationships backwards. The Department would not have an EmployeeID, the Employee would have a DepartmentID. A: I'm not answering your question, but foreign Keys is the way to go, why is it not an option? As for nested transactions they are: BEGIN delete from Employee where departmentId = 1; BEGIN delete from Department where companyId = 2; BEGIN delete from Company where companyId = 2; END END END Programmatically it looks different of course, but that'd depend on the platform you are using
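If you wrap the three deletes in a stored procedure, it is also worth adding error handling so that a failure in any one statement rolls back the whole thing. A sketch for SQL Server 2005 using TRY/CATCH, following the corrected relationships from the first answer (Employee has DepartmentID, Department and Company have CompanyID):

CREATE PROCEDURE DeleteCompany (@CompanyID int)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRAN;

        DELETE e
        FROM Employee e
        INNER JOIN Department d ON e.DepartmentID = d.DepartmentID
        WHERE d.CompanyID = @CompanyID;

        DELETE FROM Department WHERE CompanyID = @CompanyID;
        DELETE FROM Company WHERE CompanyID = @CompanyID;

        COMMIT TRAN;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRAN;
        -- re-raise so the caller sees the original error message
        DECLARE @msg nvarchar(2048);
        SET @msg = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
    END CATCH
END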
{ "language": "en", "url": "https://stackoverflow.com/questions/58916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ASP.NET how to Render a control to HTML? I have any ASP.NET control. I want the HTML string how to do I get the HTML string of the control? A: If your control is a web user control, this is how you can get to the HTML it emits from another page or handler: public void GetHtmlFromMySweetControl(HttpContext context) { HttpRequest httpRequest = context.Request; HttpResponse httpResponse = context.Response; string foo = httpRequest["foo"]; Page pageHolder = new Page(); string path = "~/usercontrols/MySweetControl.ascx"; MySweetControl ctrl = (MySweetControl)pageHolder.LoadControl(path); ctrl.BindProducts(foo); pageHolder.Controls.Add(ctrl); StringWriter sw = new StringWriter(); context.Server.Execute(pageHolder, sw, false); httpResponse.Write(sw.ToString()); } A: This appears to work. public string RenderControlToHtml(Control ControlToRender) { System.Text.StringBuilder sb = new System.Text.StringBuilder(); System.IO.StringWriter stWriter = new System.IO.StringWriter(sb); System.Web.UI.HtmlTextWriter htmlWriter = new System.Web.UI.HtmlTextWriter(stWriter); ControlToRender.RenderControl(htmlWriter); return sb.ToString(); } A: Accepted answer by David Basarab will not work if control is not part of the page. a7drew's answer seems unnecessary complex - no need in Context or Server.Execute. private string RenderControl() { var sb = new System.Text.StringBuilder(); using (var stWriter = new System.IO.StringWriter(sb)) using (var htmlWriter = new HtmlTextWriter(stWriter)) { var p = new Page(); var ctrl = (YourControl)p.LoadControl("~/controls/building blocks/YourControl.ascx"); ctrl.Visible = true; // do your own init logic if needed p.Controls.Add(ctrl); ctrl.RenderControl(htmlWriter); return sb.ToString(); } }
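A typical use of a helper like RenderControlToHtml above is grabbing a user control's markup outside the normal page lifecycle, for example to build an HTML e-mail body. A rough usage sketch (the control path and the OrderId property are hypothetical):

Page pageHolder = new Page();
OrderSummary summary = (OrderSummary)pageHolder.LoadControl("~/Controls/OrderSummary.ascx");
summary.OrderId = 42;
pageHolder.Controls.Add(summary);

string html = RenderControlToHtml(summary);
// html can now be written to the response, e-mailed, cached, etc.

Note that controls which depend on a server-side form tag or on postback state (Button, full lifecycle events and so on) may throw or render incompletely when driven this way.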
{ "language": "en", "url": "https://stackoverflow.com/questions/58925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: Is there a keyboard shortcut for "Build Page" in Visual Studio 2005? "Build Page" is one of the items you can add to your toolbar to compile just the ASPX page or ASCX control you are working on. Is there a keyboard shortcut for it? A: I always use Ctrl + Shift + B, which rebuilds the entire solution. You could also configure your own keyboard shortcut by clicking Tools / Options / Keyboard and scrolling down to the Build options. (There's ones for Build.BuildPage or Build.BuildSelection...)
{ "language": "en", "url": "https://stackoverflow.com/questions/58933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to permanently remove a breakpoint in Visual Studio 2005 /2008 (ASP.NET, C#) Often, when I have a breakpoint on some line in Visual Studio, The program will run and stop there. great. I will then click the red circle (or press F9) to remove it. Obviously I don't want my program to keep stopping there. The problem is that the next time I refresh the page the breakpoint is back! The only way to permanently remove it is to open the breakpoints window and remove it there. Why does this happen and how can I change this behavior? I have noticed that these breakpoints which keep coming back have a little plus next to them in the breakpoints window which when you click on - open up many sub lines of breakpoints. What is the deal with that? Thanks, Adin A: Just clear the breakpoint while the debugger is off. When you clear or add a breakpoint while debugging, the action only lasts for that debugging session. A: The plus in the breakpoints window is there when one user-supplied breakpoint binds in multiple places. This can happen when a single file is loaded multiple times in the same debugging session, for example. The + lets you look at each of the places it bound. @Joel: modifying breakpoints during a debugging session does not make your change temporary, although there are circumstances (like the original question), where the actual behavior can be non-obvious. A: I've post suggestion to MS to fix it: https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=391642 A: Helpful Key combo: to permanently delete all breakpoints, press CTRL + SHIFT + F9. A: It appears since Visual Studio allows multiple breakpoints on a single line, i.e. in separate sub-clauses, architecturally it allows multiple identical breakpoints. The interface does not necessarily reflect this and you will see the removal of a breakpoint as graphically removing it, but not programmatically removing all instances of it. Looking at the Debug > Windows > Breakpoints window shows for a given set of breakpoints on a line, they are stored in a sub-tree under that line item. Removing a breakpoint while watching this list will reveal the behaviour, that only one of a series of identical breakpoints is removed from the list associated with that line. By removing the breakpoint line item and with it all sub items it will completely remove all instances of the breakpoint. A: Wipe the breakpoint out using the Breakpoints Window (Ctrl + Alt + B). While debugging, when you hit the breakpoint, look at the BreakPoint window for the one that is bold. Then, right-click it and choose Delete.
{ "language": "en", "url": "https://stackoverflow.com/questions/58935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do I toggle Caps Lock in VB.NET? Using VB.NET, how do I toggle the state of Caps Lock? A: Try this: Public Class Form1 Private Declare Sub keybd_event Lib "user32" (ByVal bVk As Byte, ByVal bScan As Byte, ByVal dwFlags As Integer, ByVal dwExtraInfo As Integer) Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load Call keybd_event(System.Windows.Forms.Keys.CapsLock, &H14, 1, 0) Call keybd_event(System.Windows.Forms.Keys.CapsLock, &H14, 3, 0) End Sub End Class A: From: http://www.vbforums.com/showthread.php?referrerid=61394&t=537891 Imports System.Runtime.InteropServices Public Class Form2 Private Declare Sub keybd_event Lib "user32" ( _ ByVal bVk As Byte, _ ByVal bScan As Byte, _ ByVal dwFlags As Integer, _ ByVal dwExtraInfo As Integer _ ) Private Const VK_CAPITAL As Integer = &H14 Private Const KEYEVENTF_EXTENDEDKEY As Integer = &H1 Private Const KEYEVENTF_KEYUP As Integer = &H2 Private Sub Button1_Click( _ ByVal sender As System.Object, _ ByVal e As System.EventArgs _ ) Handles Button1.Click ' Toggle CapsLock ' Simulate the Key Press keybd_event(VK_CAPITAL, &H45, KEYEVENTF_EXTENDEDKEY Or 0, 0) ' Simulate the Key Release keybd_event(VK_CAPITAL, &H45, KEYEVENTF_EXTENDEDKEY Or KEYEVENTF_KEYUP, 0) End Sub End Class A: I use this Private Declare Sub keybd_event Lib "user32" (ByVal bVk As Byte, ByVal bScan As Byte, ByVal dwFlags As Integer, ByVal dwExtraInfo As Integer) Private Const VK_CAPITAL As Integer = &H14 Private Const KEYEVENTF_EXTENDEDKEY As Integer = &H1 Private Const KEYEVENTF_KEYUP As Integer = &H2 'put this where you want to toggle Caps Lock keybd_event(VK_CAPITAL, &H45, KEYEVENTF_EXTENDEDKEY Or 0, 0) keybd_event(VK_CAPITAL, &H45, KEYEVENTF_EXTENDEDKEY Or KEYEVENTF_KEYUP, 0)
{ "language": "en", "url": "https://stackoverflow.com/questions/58937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: JComboBox Selection Change Listener? I'm trying to get an event to fire whenever a choice is made from a JComboBox. The problem I'm having is that there is no obvious addSelectionListener() method. I've tried to use actionPerformed(), but it never fires. Short of overriding the model for the JComboBox, I'm out of ideas. How do I get notified of a selection change on a JComboBox?** Edit: I have to apologize. It turns out I was using a misbehaving subclass of JComboBox, but I'll leave the question up since your answer is good. A: You may try these int selectedIndex = myComboBox.getSelectedIndex(); -or- Object selectedObject = myComboBox.getSelectedItem(); -or- String selectedValue = myComboBox.getSelectedItem().toString(); A: I was recently looking for this very same solution and managed to find a simple one without assigning specific variables for the last selected item and the new selected item. And this question, although very helpful, didn't provide the solution I needed. This solved my problem, I hope it solves yours and others. Thanks. How do I get the previous or last item? A: you can do this with jdk >= 8 getComboBox().addItemListener(this::comboBoxitemStateChanged); so public void comboBoxitemStateChanged(ItemEvent e) { if (e.getStateChange() == ItemEvent.SELECTED) { YourObject selectedItem = (YourObject) e.getItem(); //TODO your actions } } A: I use this: cb = new JComboBox<String>(); cb.setBounds(10, 33, 46, 22); panelConfig.add(cb); for(int i = 0; i < 10; ++i) { cb.addItem(Integer.toString(i)); } cb.addItemListener(new ItemListener() { @Override public void itemStateChanged(ItemEvent e) { if(e.getID() == ItemEvent.ITEM_STATE_CHANGED) { if(e.getStateChange() == ItemEvent.SELECTED) { JComboBox<String> cb = (JComboBox<String>) e.getSource(); String newSelection = (String) cb.getSelectedItem(); System.out.println("newSelection: " + newSelection); } } } }); A: I would try the itemStateChanged() method of the ItemListener interface if jodonnell's solution fails. A: It should respond to ActionListeners, like this: combo.addActionListener (new ActionListener () { public void actionPerformed(ActionEvent e) { doSomething(); } }); @John Calsbeek rightly points out that addItemListener() will work, too. You may get 2 ItemEvents, though, one for the deselection of the previously selected item, and another for the selection of the new item. Just don't use both event types! A: Code example of ItemListener implementation class ItemChangeListener implements ItemListener{ @Override public void itemStateChanged(ItemEvent event) { if (event.getStateChange() == ItemEvent.SELECTED) { Object item = event.getItem(); // do something with object } } } Now we will get only selected item. Then just add listener to your JComboBox addItemListener(new ItemChangeListener()); A: Here is creating a ComboBox and adding a listener for item selection change: JComboBox comboBox = new JComboBox(); comboBox.setBounds(84, 45, 150, 20); contentPane.add(comboBox); JComboBox comboBox_1 = new JComboBox(); comboBox_1.setBounds(84, 97, 150, 20); contentPane.add(comboBox_1); comboBox.addItemListener(new ItemListener() { public void itemStateChanged(ItemEvent arg0) { //Do Something } });
{ "language": "en", "url": "https://stackoverflow.com/questions/58939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "167" }
Q: Access to Result sets from within Stored procedures Transact-SQL SQL Server I'm using SQL Server 2005, and I would like to know how to access different result sets from within transact-sql. The following stored procedure returns two result sets, how do I access them from, for example, another stored procedure? CREATE PROCEDURE getOrder (@orderId as numeric) AS BEGIN select order_address, order_number from order_table where order_id = @orderId select item, number_of_items, cost from order_line where order_id = @orderId END I need to be able to iterate through both result sets individually. EDIT: Just to clarify the question, I want to test the stored procedures. I have a set of stored procedures which are used from a VB.NET client, which return multiple result sets. These are not going to be changed to a table valued function, I can't in fact change the procedures at all. Changing the procedure is not an option. The result sets returned by the procedures are not the same data types or number of columns. A: I was easily able to do this by creating a SQL2005 CLR stored procedure which contained an internal dataset. You see, a new SqlDataAdapter will .Fill a multiple-result-set sproc into a multiple-table dataset by default. The data in these tables can in turn be inserted into #Temp tables in the calling sproc you wish to write. dataset.ReadXmlSchema will show you the schema of each result set. Step 1: Begin writing the sproc which will read the data from the multi-result-set sproc a. Create a separate table for each result set according to the schema. CREATE PROCEDURE [dbo].[usp_SF_Read] AS SET NOCOUNT ON; CREATE TABLE #Table01 (Document_ID VARCHAR(100) , Document_status_definition_uid INT , Document_status_Code VARCHAR(100) , Attachment_count INT , PRIMARY KEY (Document_ID)); b. At this point you may need to declare a cursor to repetitively call the CLR sproc you will create here: Step 2: Make the CLR Sproc Partial Public Class StoredProcedures <Microsoft.SqlServer.Server.SqlProcedure()> _ Public Shared Sub usp_SF_ReadSFIntoTables() End Sub End Class a. Connect using New SqlConnection("context connection=true"). b. Set up a command object (cmd) to contain the multiple-result-set sproc. c. Get all the data using the following: Dim dataset As DataSet = New DataSet With New SqlDataAdapter(cmd) .Fill(dataset) ' get all the data. End With 'you can use dataset.ReadXmlSchema at this point... d. Iterate over each table and insert every row into the appropriate temp table (which you created in step one above). Final note: In my experience, you may wish to enforce some relationships between your tables so you know which batch each record came from. That's all there was to it! ~ Shaun, Near Seattle A: There is a kludge that you can do as well. Add an optional parameter N int to your sproc. Default the value of N to -1. If the value of N is -1, then do every one of your selects. Otherwise, do the Nth select and only the Nth select. For example, if (N = -1 or N = 0) select ... if (N = -1 or N = 1) select ... The callers of your sproc who do not specify N will get a result set with more than one tables. If you need to extract one or more of these tables from another sproc, simply call your sproc specifying a value for N. You'll have to call the sproc one time for each table you wish to extract. Inefficient if you need more than one table from the result set, but it does work in pure TSQL. A: Note that there's an extra, undocumented limitation to the INSERT INTO ... EXEC statement: it cannot be nested. 
That is, the stored proc that the EXEC calls (or any that it calls in turn) cannot itself do an INSERT INTO ... EXEC. It appears that there's a single scratchpad per process that accumulates the result, and if they're nested you'll get an error when the caller opens this up, and then the callee tries to open it again. Matthieu, you'd need to maintain separate temp tables for each "type" of result. Also, if you're executing the same one multiple times, you might need to add an extra column to that result to indicate which call it resulted from. A: The short answer is: you can't do it. From T-SQL there is no way to access multiple results of a nested stored procedure call, without changing the stored procedure as others have suggested. To be complete, if the procedure were returning a single result, you could insert it into a temp table or table variable with the following syntax: INSERT INTO #Table (...columns...) EXEC MySproc ...parameters... You can use the same syntax for a procedure that returns multiple results, but it will only process the first result, the rest will be discarded. A: Sadly it is impossible to do this. The problem is, of course, that there is no SQL Syntax to allow it. It happens 'beneath the hood' of course, but you can't get at these other results in TSQL, only from the application via ODBC or whatever. There is a way round it, as with most things. The trick is to use ole automation in TSQL to create an ADODB object which opens each resultset in turn and write the results to the tables you nominate (or do whatever you want with the resultsets). you can also do it in DMO if you enjoy pain. A: There are two ways to do this easily. Either stick the results in a temp table and then reference the temp table from your sproc. The other alternative is to put the results into an XML variable that is used as an OUTPUT variable. There are, however, pros and cons to both of these options. With a temporary table, you'll need to add code to the script that creates the calling procedure to create the temporary table before modifying the procedure. Also, you should clean up the temp table at the end of the procedure. With the XML, it can be memory intensive and slow. A: You could select them into temp tables or write table valued functions to return result sets. Are asking how to iterate through the result sets?
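To make the single-result-set workaround concrete: if the procedure returned only the order header, the calling procedure could capture it like this (a sketch; the column types are guesses based on the question):

CREATE TABLE #OrderHeader
(
    order_address varchar(200),
    order_number  varchar(50)
);

INSERT INTO #OrderHeader (order_address, order_number)
EXEC getOrder @orderId = 1;

SELECT * FROM #OrderHeader;

With the two-result-set version of getOrder shown in the question you cannot capture both sets this way, which is exactly why the answers above resort to CLR, OLE automation, or changing the procedure.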
{ "language": "en", "url": "https://stackoverflow.com/questions/58940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: Best Way to Unit Test a Website With Multiple User Types with PHPUnit I'm starting to learn how to use PHPUnit to test the website I'm working on. The problem I'm running into is that I have five different user types defined and I need to be able to test every class with the different types. I currently have a user class and I would like to pass this to each function but I can't figure out how to pass this or test the different errors that could come back as being correct or not. Edit: I should have said. I have a user class and I want to pass a different instance of this class to each unit test. A: If your various user classes inherit from a parent user class, then I recommend you use the same inheritance structure for your test case classes. Consider the following sample classes: class User { public function commonFunctionality() { return 'Something'; } public function modifiedFunctionality() { return 'One Thing'; } } class SpecialUser extends User { public function specialFunctionality() { return 'Nothing'; } public function modifiedFunctionality() { return 'Another Thing'; } } You could do the following with your test case classes: class Test_User extends PHPUnit_Framework_TestCase { public function create() { return new User(); } public function testCommonFunctionality() { $user = $this->create(); $this->assertEquals('Something', $user->commonFunctionality()); } public function testModifiedFunctionality() { $user = $this->create(); $this->assertEquals('One Thing', $user->modifiedFunctionality()); } } class Test_SpecialUser extends Test_User { public function create() { return new SpecialUser(); } public function testSpecialFunctionality() { $user = $this->create(); $this->assertEquals('Nothing', $user->specialFunctionality()); } public function testModifiedFunctionality() { $user = $this->create(); $this->assertEquals('Another Thing', $user->modifiedFunctionality()); } } Because each test depends on a create method which you can override, and because the test methods are inherited from the parent test class, all tests for the parent class will be run against the child class, unless you override them to change the expected behavior. This has worked great in my limited experience. A: If you're looking to test the actual UI, you could try using something like Selenium (www.openqa.org). It lets you write the code in PHP (which I'm assuming would work with phpUnit) to drive the browser. Another approach would be to have a common method that could be called by each test for your different user type, i.e. something like 'ValidatePage', which you could then call from TestAdminUser or TestRegularUser and have the method simply perform the same basic validation of what you're expecting. A: Just make sure you're not running into an anti-pattern here. Maybe you do too much work in the constructor? Or maybe these should in fact be different classes? Tests often give you clues about the design of code. Listen to them.
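Another option that maps well onto "five user types, same assertions" is a PHPUnit data provider, which runs one test method once per user type. A minimal sketch, where the user-type class names and the PublicPage class are placeholders for your real classes:

class Test_UserPermissions extends PHPUnit_Framework_TestCase
{
    public function userProvider()
    {
        return array(
            array(new GuestUser()),
            array(new MemberUser()),
            array(new EditorUser()),
            array(new AdminUser()),
            array(new SuperAdminUser()),
        );
    }

    /**
     * @dataProvider userProvider
     */
    public function testEveryUserTypeCanViewPublicPages($user)
    {
        $page = new PublicPage();
        $this->assertTrue($page->isAccessibleBy($user));
    }
}

Each row returned by the provider becomes a separate test run, so a failure report tells you exactly which user type broke.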
{ "language": "en", "url": "https://stackoverflow.com/questions/58969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I get the current state of Caps Lock in VB.NET? How do I find out whether or not Caps Lock is activated, using VB.NET? This is a follow-up to my earlier question. A: I'm not an expert in VB.NET so only PInvoke comes to my mind: Declare Function GetKeyState Lib "user32" Alias "GetKeyState" (ByVal nVirtKey As Int32) As Int16 Private Const VK_CAPSLOCK = &H14 If (GetKeyState(VK_CAPSLOCK) And 1) = 1 Then ... A: Create a Timer that is set to 5 milliseconds and is enabled. Then make a label named label1. After that, try the following code (in the timer event handler). Private Sub Timer1_Tick(sender As Object, e As EventArgs) Handles Timer1.Tick If My.Computer.Keyboard.CapsLock = True Then Label1.Text = "Caps Lock Enabled" Else Label1.Text = "Caps Lock Disabled" End If End Sub A: Control.IsKeyLocked(Keys) Method - MSDN Imports System Imports System.Windows.Forms Imports Microsoft.VisualBasic Public Class CapsLockIndicator Public Shared Sub Main() If Control.IsKeyLocked(Keys.CapsLock) Then MessageBox.Show("The Caps Lock key is ON.") Else MessageBox.Show("The Caps Lock key is OFF.") End If End Sub 'Main End Class 'CapsLockIndicator C# version: using System; using System.Windows.Forms; public class CapsLockIndicator { public static void Main() { if (Control.IsKeyLocked(Keys.CapsLock)) { MessageBox.Show("The Caps Lock key is ON."); } else { MessageBox.Show("The Caps Lock key is OFF."); } } } A: The solution posted by .rp works, but conflicts with the Me.KeyDown event handler. I have a sub that calls a sign in function when Enter is pressed (shown below). The My.Computer.Keyboard.CapsLock state works and does not conflict with Me.KeyDown. Private Sub WindowLogin_KeyDown(sender As Object, e As KeyEventArgs) Handles Me.KeyDown If Keyboard.IsKeyDown(Key.Enter) Then Call SignIn() End If End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/58976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What is the reasoning behind the Interface Segregation Principle? The Interface Segregation Principle (ISP) says that many client specific interfaces are better than one general purpose interface. Why is this important? A: It simplifies the interface that any one client will use and removes dependencies that they might otherwise develop on parts of the interface that they don't need. A: One reason is that having many interfaces with a minimal amount of methods for each one makes it easier to implement each interface and to implement them correctly. A large interface can be unruly. Also, using a focused interface in a scenario makes the code more maintanable because you can see which facet of the object is being used (e.g., an IComparable interface lets you know that the object is only being used for comparisons in the given scenario). A: Robert Martin's paper on the subject gives an explanation that's mentioned less often: The backwards force applied by clients upon interfaces. If two classes depend directly on two different methods of a third class, it increases the likelihood that changes to either of the first two classes will affect the other. Suppose we have three classes: Red, Green, and Blue. Red and Green both depend Blue, but each depends on a different method. That means that Red depends on one method of Blue but doesn't use the other method. Likewise, Green depends on Blue, but only uses one method, not the other. The violation of the principle is in Red and Green because each depends on a class - Blue - but doesn't use at least one of its methods. What problem does this potentially create? * *I need to change Red, and I also change Blue to accommodate the needs of Red. *I haven't changed the specific method within Blue that Green depends on, but nonetheless, Green depends on Blue and I have changed Blue, which could still impact Green. *Therefore, my changes to Red have the potential to impact Blue because they've led me to change a class that both depend on. That's the "backwards force." We sometimes change a class because of the needs of its clients. If that class has different clients that use it for different things, we risk impacting them. As stated, the simple definition of the Interface Segregation Principle is: no client should be forced to depend on methods it does not use. Between that and the above point from Robert Martin's paper, it's apparent that many explanations of the ISP are in fact talking about other principles. * *Classes or interfaces with lots of methods are undesirable, but not specifically because of the ISP. They might violate Single Responsibility. But the ISP violation is not in the big interface or big class - it's in the classes that depend on on the big interface if they don't use all of its methods. If they use all of the methods it still sounds messy, but that has nothing to do with the ISP. *Classes that implement an interface but throw exceptions for certain methods are bad, but that's not the ISP, either. The ISP is about classes that depend on interfaces, not classes that implement interfaces. If we Google "interface segregation", most of the top results that include code samples demonstrate classes that don't fully implement interfaces, which is not the point of the ISP. Some even incorrectly restate the principle: The Interface Segregation Principle states that clients should not be forced to implement interfaces they don't use ...but that is not the principle. 
The defining paper mentions such concerns as a side-effect of violating the ISP, but indicates that they are Liskov Substitution violations. Moreover, each time a new interface is added to the base class, that interface must be implemented (or allowed to default) in derived classes. Indeed, an associated practice is to add these interfaces to the base class as nil virtual functions rather than pure virtual functions; specifically so that derived classes are not burdened with the need to implement them. As we learned in the second article of this column, such a practice violates the Liskov Substitution Principle (LSP), leading to maintenance and reusability problems. What's more, to say that a client should not implement methods it does not use doesn't even make sense. The clients of an interface do not implement the methods they use or do not use - they consume its methods. A client of List<E> does not implement the methods and properties of List<E>. It calls the methods and properties of List<E>. I don't mean to pompously cite the paper as if it's holy writ or something. But if we're going to use the name of the principle described in the article (the name of the article itself) then we should also consider the actual definition and explanation contained in that article. A: ISP states that: Clients should not be forced to depend on methods that they do not use. ISP relates to important characteristics - cohesion and coupling. Ideally your components must be highly tailored. It improves code robustness and maintainability. Enforcing ISP gives you the following bonuses: * *High cohesion - better understandability, robustness *Low coupling - better maintainability, high resistance to changes If you want to learn more about software design principles, get a copy of the book Agile Software Development, Principles, Patterns, and Practices. A: This principle primarily serves twin purposes: * *To make the code more readable and manageable. *Promotes single responsibility for classes (high cohesion). Of course, why should a class have a method that has no behavioural impact? Why not just remove it? That's what ISP is about. There are a few questions that a designer must ask with regard to ISP: * *What does one achieve with ISP? *How do I analyse already existing code for any ISP violations? To take this discussion further, I must also add that this principle isn't a 'principle' in the strictest sense, because under certain circumstances, applying ISP to the design, instead of promoting readability, might make the object structure unreadable and cluttered with unnecessary code. You may well observe this in the java.awt.event package. More at my blog: http://design-principle-pattern.blogspot.in/2013/12/interface-segregation-principle.html A: ISP is important. Basic idea of ISP: Client should not be forced to depend on methods it does not use. This principle seems to be more logical. Ideally the client should not implement the methods which are not used by the client. Refer to the SE question below for a code example: Interface Segregation Principle - Program to an interface Advantages: * *Flexibility: In the absence of ISP, you have one generic FAT interface and many classes implementing it. Assume that you had 1 interface and 50 classes. If there is a change in the interface, all 50 classes have to change their implementation. With ISP, you will divide the generic FAT interface into fine-grained small interfaces. If there is a change in a small fine-grained interface, only the classes implementing that interface will be affected.
*Maintainability and Ease of use: Since changes are limited to the fine-grained interface instead of the generic FAT interface, code maintenance is easier. Unrelated code is no longer part of the implementation classes. A: The interface segregation principle is the “I” in the SOLID principles. Before digging too deep into the former, let’s explain what the latter means. SOLID can be considered a set of best practices and recommendations made by experts (meaning they have been proved before) in order to provide a reliable foundation for how we design applications. These practices strive to make our applications easier to maintain, extend, adapt and scale. Why should I care about SOLID programming? First of all, you have to realize you are not going to be forever where you are. If we use standards and well known architectures, we can be sure that our code will be easy to maintain by other developers that come after us, and I’m sure you wouldn’t want to deal with the task of fixing code that didn’t apply any known methodology, as it would be very hard to understand. The interface segregation principle. Now that we know what the SOLID principles are, we can get into more detail about the Interface Segregation Principle. But what exactly does interface segregation say? “Clients should not be forced to implement unnecessary methods which they will not use” This means that sometimes we tend to make interfaces with a lot of methods, which can be good to an extent; however, this can easily be abused, and we can end up with classes that implement empty or useless methods, which of course adds extra code and burden to our apps. Imagine you are declaring a lot of methods in a single interface: a class that implements the interface but really needs only a couple of its methods ends up carrying empty implementations for the rest. On the other hand, if you properly apply interface segregation and split your interface into smaller subsets, you can be sure to implement only those methods that are needed, which is way better! Enforcing this principle will allow you to have low coupling, which aids better maintainability and high resistance to changes. So you can really leverage the usage of interfaces and implement methods only when you really should. Now let’s review a less abstract example, say you declared an interface called Reportable public interface Reportable { void printPDF(); void printWord(); void printExcel(); void printPPT(); void printHTML(); } And you have a client that only needs to export some data in Excel format. You can implement the interface, but would you only have to implement the Excel method? The answer is no, you will have to code the implementation for all the methods even if you are not going to use them, and this can cause a lot of junk code, hence making the code hard to maintain. Remember: keep it simple and don’t repeat yourself, and you will find that you are already using this principle without knowing.
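To make the Reportable discussion above concrete, here is a minimal sketch of how that fat interface could be segregated. This is an illustration only: the interface and class names (ExcelReportable, PdfReportable, SalesReport) are invented for the example and are not taken from the original answer.

// Each export format gets its own narrow interface.
interface ExcelReportable {
    void printExcel();
}

interface PdfReportable {
    void printPDF();
}

// A client that only ever needs Excel export implements just one small
// interface - no empty stubs for Word, PPT or HTML are required.
class SalesReport implements ExcelReportable {
    @Override
    public void printExcel() {
        System.out.println("Exporting sales data to Excel...");
    }
}

Code that consumes reports can now depend on ExcelReportable alone, so changes to the PDF or PowerPoint methods no longer ripple into it.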
{ "language": "en", "url": "https://stackoverflow.com/questions/58988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Enforce SSL in code in an ashx handler I have a site which contains several ashx handlers, and on a couple of the handlers I want to reject non-SSL requests. Is there a way that I can do this in code? A: If you must do it programmatically, a way I've done it in the past is to inspect the URL and look for "https" in it. Redirect if you don't see that. Request.IsSecureConnection should be the preferred method, however. You may have to add additional logic to handle a loopback address. A: I think the proper way is to check the Request.IsSecureConnection property and redirect or throw if it's false. A: Try using System.Web.HttpContext.Current.Request.IsSecureConnection to validate whether they are connecting securely, and then perform whatever denial you would like after that (returning an error message, or whatever your business need is).
{ "language": "en", "url": "https://stackoverflow.com/questions/59000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I use a start commit hook in TortoiseSVN to set up a custom log entry? I'd like to automate TortoiseSVN as part of a commit process. Specifically I'd like to dynamically create a log entry for the commit dialog. I know that I can launch the commit dialog either from the commandline or by right clicking on a folder and selecting svncommit. I'd like to use the start commit hook to set up a log entry. I thought this worked by passing an entry file name in the MESSAGEFILE variable but when I add a hook script it cannot see this variable (hook launched successfully after right clicking and choosing svncommit). When I try using the commandline I use the /logmsgfile parameter but it seems to have no effect. I'm using TortoiseSVN 1.5.3. A: Looks like it was my own misunderstanding of the API that caused my problem. Solution: 1) I've added a start commit hook script to TortoiseSVN using the hooks GUI in the settings area of the right click menu. 2) The script receives 3 pieces of information: PATH MESSAGEFILE CWD For details see: Manual These are passed as command line arguments to the script - for some reason I had thought they were set as temporary environment variables. My script then simply opens the file specified by the second argument and adds in the custom text. When the commit dialog comes up the custom text is there. 3) Best of all, if TortoiseSVN is launched from a script directly into the commit dialog, e.g. [ tortoiseproc /command:commit /path:. /closeonend:1 ], the hooks are still called. A: If you just need a static template, set the tsvn:logtemplate property. For dynamic generation, the /logmsgfile parameter does work, but it seems to need the full path. A batch file that looks like the following might work for you. GenerateLogMsg.exe > tmp.msg "C:\Program Files\TortoiseSVN\bin\TortoiseProc.exe" /command:commit /path:. /logmsgfile:"C:\Documents and Settings\User\My Documents\Visual Studio Projects\Project\tmp.msg"
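As an illustration of point 2 in the accepted answer, here is a minimal start commit hook that pre-fills the commit dialog. It is written in Java purely as a sketch (a batch or script file is the more usual choice for a hook), and the template text is made up; the only assumption taken from the answer is that the message file path arrives as the second command line argument (PATH MESSAGEFILE CWD).

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of a TortoiseSVN start commit hook: whatever is written into the
// message file (argument 2) shows up as the log message in the commit dialog.
public class StartCommitHook {
    public static void main(String[] args) throws Exception {
        if (args.length < 2) {
            System.err.println("Expected arguments: PATH MESSAGEFILE CWD");
            System.exit(1);
        }
        Path messageFile = Paths.get(args[1]);
        String template = "Ticket: \nReviewed by: \nSummary: ";   // hypothetical template text
        Files.write(messageFile, template.getBytes("UTF-8"));
    }
}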
{ "language": "en", "url": "https://stackoverflow.com/questions/59007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to check which locale is a .NET application running under, without having access to its sourcecode? Context: I'm in charge of running a service written in .NET. Proprietary application. It uses a SQL Server database. It ran as a user member of the Administrators group in the local machine. It worked alright before I added the machine to a domain. So, I added the machine to a domain (Win 2003) and changed the user to a member of the Power Users group and now, the Problem: Some of the SQL sentences it tries to execute are "magically" in spanish localization (where , separates floating point numbers instead of .), leading to errors. There are fewer columns in the INSERT statement than values specified in the VALUES clause. The number of values in the VALUES clause must match the number of columns specified in the INSERT statement. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) Operating System and Regional Settings in the machine are in English. I asked the provider of the application and he said: Looks like you have a combination of code running under Spanish locale, and SQL server under English locale. So the SQL expects '15.28' and not '15,28' Which looks wrong to me in various levels (how can SQL Server distinguish between commas to separate arguments and commas belonging to a floating point number?). So, the code seems to be grabbing the spanish locale from somewhere, I don't know if it's the user it runs as, or someplace else (global policy, maybe?). But the question is What are the places where localization is defined on a machine/user/domain basis? I don't know all the places I must search for the culprit, so please help me to find it! A: There are two types of localisation in .NET, both the settings for the cultures can be found in these variables (fire up a .NET command line app on the machine to see what it says): System.Thread.CurrentThread.CurrentCulture & System.Thread.CurrentThread.CurrentUICulture http://msdn.microsoft.com/en-us/library/system.threading.thread_members.aspx They relate to the settings in the control panel (in the regional settings part). Create a .NET command line app, then just call ToString() on the above properties, that should tell you which property to look at. Edit: It turns out the setting for the locales per user are held here: HKEY_CURRENT_USER\Control Panel\International It might be worth inspecting the registry of the user with the spanish locale, and comparing it to one who is set to US or whichever locale you require. A: You can set it in the thread context in which your code is executing. System.Threading.Thread.CurrentThread.CurrentCulture A: Great, I created the console app and indeed, the app is not crazy, CurrentCulture is in spanish, but for THAT User in THAT machine only. If I run the console app as another user it returns english for all cultures. Should I open a new question asking where are user-wise locale settings? A: Well if it's user specific, check out the Regional and Language Options control panel. <rant>On a side note, kick the developer for not being culture aware when using strings.</rant> A: Found out why it happened in that machine only. It was the only one where I actually logged into with that user, then the domain controller set the regional settings as spanish for it.
{ "language": "en", "url": "https://stackoverflow.com/questions/59013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the meaning and reasoning behind the Open/Closed Principle? The Open/Closed Principle states that software entities (classes, modules, etc.) should be open for extension, but closed for modification. What does this mean, and why is it an important principle of good object-oriented design? A: The Open Closed Principle is very important in object-oriented programming and it's one of the SOLID principles. As per this, a class should be open for extension and closed for modification. Let us understand why. class Rectangle { public int width; public int length; } class Circle { public int radius; } class AreaService { public int areaForRectangle(Rectangle rectangle) { return rectangle.width * rectangle.length; } public int areaForCircle(Circle circle) { return (22 / 7) * circle.radius * circle.radius; } } If you look at the above design, we can clearly observe that it's not following the Open/Closed Principle. Whenever there is a new shape (Triangle, Square, etc.), AreaService has to be modified. With the Open/Closed Principle: interface Shape{ int area(); } class Rectangle implements Shape{ public int width; public int length; @Override public int area() { return length * width; } } class Circle implements Shape{ public int radius; @Override public int area() { return (22/7) * radius * radius; } } class AreaService { int area(Shape shape) { return shape.area(); } } Whenever there is a new shape like Triangle, Square, etc., you can easily accommodate the new shapes without modifying existing classes. With this design, we can ensure that existing code isn't impacted. A: Software entities should be open for extension but closed for modification That means any class or module should be written in a way that it can be used as is, can be extended, but never modified. Bad Example in Javascript var juiceTypes = ['Mango','Apple','Lemon']; function juiceMaker(type){ if(juiceTypes.indexOf(type)!=-1) console.log('Here is your juice, Have a nice day'); else console.log('sorry, Error happened'); } exports.makeJuice = juiceMaker; Now if you want to add another juice type, you have to edit the module itself; this way, we are breaking OCP. Good Example in Javascript var juiceTypes = []; function juiceMaker(type){ if(juiceTypes.indexOf(type)!=-1) console.log('Here is your juice, Have a nice day'); else console.log('sorry, Error happened'); } function addType(typeName){ if(juiceTypes.indexOf(typeName)==-1) juiceTypes.push(typeName); } function removeType(typeName){ let index = juiceTypes.indexOf(typeName) if(index!==-1) juiceTypes.splice(index,1); } exports.makeJuice = juiceMaker; exports.addType = addType; exports.removeType = removeType; Now, you can add new juice types from outside the module without editing the same module. A: Let's break down the question in three parts to make it easier to understand the various concepts. Reasoning Behind Open-Closed Principle Consider an example in the code below. Different vehicles are serviced in a different manner. So, we have different classes for Bike and Car because the strategy to service a Bike is different from the strategy to service a Car. The Garage class accepts various kinds of vehicles for servicing.
Problem of Rigidity Observe the code and see how the Garage class shows the signs of rigidity when it comes to introducing a new functionality: class Bike { public void service() { System.out.println("Bike servicing strategy performed."); } } class Car { public void service() { System.out.println("Car servicing strategy performed."); } } class Garage { public void serviceBike(Bike bike) { bike.service(); } public void serviceCar(Car car) { car.service(); } } As you may have noticed, whenever some new vehicle like Truck or Bus is to be serviced, the Garage will need to be modified to define some new methods like serviceTruck() and serviceBus(). That means the Garage class must know every possible vehicle like Bike, Car, Bus, Truck and so on. So, it violates the open-closed principle by being open for modification. Also it's not open for extension because to extend the new functionality, we need to modify the class. Meaning Behind Open-Closed Principle Abstraction To solve the problem of rigidity in the code above we can use the open-closed principle. That means we need to make the Garage class dumb by taking away the implementation details of servicing of every vehicle that it knows. In other words we should abstract the implementation details of the servicing strategy for each concrete type like Bike and Car. To abstract the implementation details of the servicing strategies for various types of vehicles we use an interface called Vehicle and have an abstract method service() in it. Polymorphism At the same time, we also want the Garage class to accept many forms of the vehicle, like Bus, Truck and so on, not just Bike and Car. To do that, the open-closed principle uses polymorphism (many forms). For the Garage class to accept many forms of the Vehicle, we change the signature of its method to service(Vehicle vehicle) { } to accept the interface Vehicle instead of the actual implementation like Bike, Car etc. We also remove the multiple methods from the class as just one method will accept many forms. interface Vehicle { void service(); } class Bike implements Vehicle { @Override public void service() { System.out.println("Bike servicing strategy performed."); } } class Car implements Vehicle { @Override public void service() { System.out.println("Car servicing strategy performed."); } } class Garage { public void service(Vehicle vehicle) { vehicle.service(); } } Importance of Open-Closed Principle Closed for modification As you can see in the code above, now the Garage class has become closed for modification because now it doesn't know about the implementation details of servicing strategies for various types of vehicles and can accept any type of new Vehicle. We just have to extend the new vehicle from the Vehicle interface and send it to the Garage. That's it! We don't need to change any code in the Garage class. Another entity that's closed for modification is our Vehicle interface. We don't have to change the interface to extend the functionality of our software. Open for extension The Garage class now becomes open for extension in the context that it will support the new types of Vehicle, without the need for modifying. Our Vehicle interface is open for extension because to introduce any new vehicle, we can extend from the Vehicle interface and provide a new implementation with a strategy for servicing that particular vehicle. Strategy Design Pattern Did you notice that I used the word strategy multiple times? That's because this is also an example of the Strategy Design Pattern. 
We can implement different strategies for servicing different types of Vehicles by extending it. For example, servicing a Truck has a different strategy from the strategy of servicing a Bus. So we implement these strategies inside the different derived classes. The strategy pattern allows our software to be flexible as the requirements change over time. Whenever the client changes their strategy, just derive a new class for it and provide it to the existing component, no need to change other stuff! The open-closed principle plays an important role in implementing this pattern. That's it! Hope that helps. A: It means that you should put new code in new classes/modules. Existing code should be modified only for bug fixing. New classes can reuse existing code via inheritance. Open/closed principle is intended to mitigate risk when introducing new functionality. Since you don't modify existing code you can be assured that it wouldn't be broken. It reduces maintenance cost and increases product stability. A: It's the answer to the fragile base class problem, which says that seemingly innocent modifications to base classes may have unintended consequences to inheritors that depended on the previous behavior. So you have to be careful to encapsulate what you don't want relied upon so that the derived classes will obey the contracts defined by the base class. And once inheritors exist, you have to be really careful with what you change in the base class. A: Purpose of the Open closed Principle in SOLID Principles is to * *reduce the cost of a business change requirement. *reduce testing of existing code. Open Closed Principle states that we should try not to alter existing code while adding new functionalities. It basically means that existing code should be open for extension and closed for modification(unless there is a bug in existing code). Altering existing code while adding new functionalities requires existing features to be tested again. Let me explain this by taking AppLogger util class. Let's say we have a requirement to log application wide errors to a online tool called Firebase. So we create below class and use it in 1000s of places to log API errors, out of memory errors etc. open class AppLogger { open fun logError(message: String) { // reporting error to Firebase FirebaseAnalytics.logException(message) } } Let's say after sometime, we add Payment Feature to the app and there is a new requirement which states that only for Payment related errors we have to use a new reporting tool called Instabug and also continue reporting errors to Firebase just like before for all features including Payment. Now we can achieve this by putting an if else condition inside our existing method fun logError(message: String, origin: String) { if (origin == "Payment") { //report to both Firebase and Instabug FirebaseAnalytics.logException(message) InstaBug.logException(message) } else { // otherwise report only to Firebase FirebaseAnalytics.logException(message) } } Problem with this approach is that it violates Single Responsibility Principle which states that a method should do only one thing. Another way of putting it is a method should have only one reason to change. With this approach there are two reasons for this method to change (if & else blocks). A better approach would be to create a new Logger class by inheriting the existing Logger class like below. 
class InstaBugLogger : AppLogger() { override fun logError(message: String) { super.logError(message) // This uses AppLogger.logError to report to Firebase. InstaBug.logException(message) //Reporting to Instabug } } Now all we have to do is use InstaBugLogger.logError() in Payment features to log errors to both Instabug and Firebase. This way we reduce/isolate the testing of the new error reporting requirement to only the Payment feature, as code changes are done only in the Payment feature. The rest of the application features need not be tested as there are no code changes done to the existing Logger. A: Specifically, it is about a "Holy Grail" of design in OOP of making an entity extensible enough (through its individual design or through its participation in the architecture) to support future unforeseen changes without rewriting its code (and sometimes even without re-compiling **). Some ways to do this include Polymorphism/Inheritance, Composition, Inversion of Control (a.k.a. DIP), Aspect-Oriented Programming, Patterns such as Strategy, Visitor, Template Method, and many other principles, patterns, and techniques of OOAD. ** See the 6 "package principles", REP, CCP, CRP, ADP, SDP, SAP A: The principle means that it should be easy to add new functionality without having to change existing, stable, and tested functionality, saving both time and money. Often, polymorphism, for instance using interfaces, is a good tool for achieving this. A: An additional rule of thumb for conforming to OCP is to make base classes abstract with respect to functionality provided by derived classes. Or as Scott Meyers says, 'Make Non-leaf classes abstract'. This means having unimplemented methods in the base class and only implementing these methods in classes which themselves have no subclasses. Then the client of the base class cannot rely on a particular implementation in the base class since there is none. A: More specifically than DaveK, it usually means that if you want to add additional functionality, or change the functionality of a class, create a subclass instead of changing the original. This way, anyone using the parent class does not have to worry about it changing later on. Basically, it's all about backwards compatibility. Another really important principle of object-oriented design is loose coupling through a method interface. If the change you want to make does not affect the existing interface, it really is pretty safe to change. For example, to make an algorithm more efficient. Object-oriented principles need to be tempered by common sense too :)
*Code that is Closed for modification does not require much, if any, human intervention to handle new usage scenarios. The need simply does not exist. Thus, code that exhibits Open/Closed behavior (or, if you prefer, fulfills the Open/Closed Principle) requires minimal or no modification in response to usage scenarios beyond what it was originally built for. As far as implementation is concerned? I find the commonly-stated interpretation, "Open/Closed refers to code being polymorphic!", to be at best an incomplete statement. Polymorphism in code is one tool to achieve this sort of behavior; Inheritance, Implementation...really, every object-oriented design principle is necessary to write code that is resilient in the way implied by this principle. A: In the SOLID design principles, the "O" stands for the open/closed principle. The Open Closed Principle is a design principle which says that classes, modules and functions should be open for extension but closed for modification. This principle states that the design and writing of the code should be done in a way that new functionality should be added with minimum changes in the existing code (tested code). The design should be done in a way to allow the adding of new functionality as new classes, keeping as much as possible of the existing code unchanged. Benefits of the Open Closed Design Principle: * *The application will be more robust because we are not changing an already tested class. *Flexible because we can easily accommodate new requirements. *Easy to test and less error prone. My blog post on this: http://javaexplorer03.blogspot.in/2016/12/open-closed-design-principle.html
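Tying this back to the Shape example in the first answer: under the open/closed principle, supporting a new shape is purely additive. Here is a sketch (the Shape interface is repeated from that answer; the triangle uses the usual base*height/2 formula, truncated to int to match the existing int-based area() methods):

interface Shape {
    int area();
}

// New functionality arrives as a new class; the existing Shape interface and
// AreaService stay untouched.
class Triangle implements Shape {
    public int base;
    public int height;

    @Override
    public int area() {
        return (base * height) / 2;   // integer math, like the existing shapes
    }
}

The existing AreaService.area(Shape) accepts a Triangle without being modified, which is exactly the "closed for modification, open for extension" behaviour the answers describe.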
{ "language": "en", "url": "https://stackoverflow.com/questions/59016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "59" }
Q: Replacing plain text password for app We are currently storing plain text passwords for a web app that we have. I keep advocating moving to a password hash but another developer said that this would be less secure -- more passwords could match the hash and a dictionary/hash attack would be faster. Is there any truth to this argument? A: From Wikipedia Some computer systems store user passwords, against which to compare user log on attempts, as cleartext. If an attacker gains access to such an internal password store, all passwords and so all user accounts will be compromised. If some users employ the same password for accounts on different systems, those will be compromised as well. More secure systems store each password in a cryptographically protected form, so access to the actual password will still be difficult for a snooper who gains internal access to the system, while validation of user access attempts remains possible. A common approach stores only a "hashed" form of the plaintext password. When a user types in a password on such a system, the password handling software runs it through a cryptographic hash algorithm, and if the hash value generated from the user's entry matches the hash stored in the password database, the user is permitted access. The hash value is created by applying a cryptographic hash function to a string consisting of the submitted password and, usually, another value known as a salt. The salt prevents attackers from building a list of hash values for common passwords. MD5 and SHA1 are frequently used cryptographic hash functions. There is much more that you can read on the subject on that page. In my opinion, and in everything I've read and worked with, hashing is a better scenario unless you use a very small (< 256 bit) algorithm. A: There is absolutely no excuse for keeping plain text passwords on the web app. Use a standard hashing algorithm (SHA-1, not MD5!) with a salt value, so that rainbow attacks are impossible. A: If you do not salt your password, you're susceptible to Rainbow Table attacks (precompiled dictionaries that have valid inputs for a given hash). The other developer should stop talking about security if you're storing passwords in plaintext and start reading about security. Collisions are possible, but not a big problem for password apps usually (they are mainly a problem in areas where hashes are used as a way to verify the integrity of files). So: salt your passwords (by adding the salt to the right side of the password*) and use a good hashing algorithm like SHA-1 or preferably SHA-256 or SHA-512. PS: A bit more detail about hashes here. *I'm a bit unsure whether the salt should go at the beginning or at the end of the string. The problem is that if you have a collision (two inputs with the same hash), adding the salt to the "wrong" side will not change the resulting hash. Either way, you won't have big problems with Rainbow Tables, only with collisions. A: I don't understand how your other developer thinks 'more passwords could match the hash'. There is an argument that a 'hash attack would be faster', but only if you're not salting the passwords as they're hashed. Normally, hashing functions allow you to provide a salt which makes the use of known hash tables a waste of time. Personally, I'd say 'no'. Based on the above, as well as the fact that if you do somehow get clear-text exposure, a salted, hashed value is of little value to someone trying to get in. Hashing also provides the benefit of making all passwords 'look' the same length.
i.e., if hashing any string always results in a 20 character hash, then if you have only the hash to look at, you can't tell whether the original password was eight characters or sixteen, for example. A: I encountered this exact same issue in my workplace. What I did to convince him that hashing was more secure was to write a SQL injection that returned the list of users and passwords from the public section of our site. It was escalated right away as a major security issue :) To protect against dictionary/hash attacks be sure to hash against a token that's unique to each user and static (username/join date/userguid works well) A: There is an old saying about programmers pretending to be cryptographers :) Jeff Atwood has a good post on the subject: You're Probably Storing Passwords Incorrectly To reply more extensively, I agree with all of the above, the hash makes it easier in theory to get the user's password since multiple passwords match the same hash. However, this is much less likely to happen than someone getting access to your database. A: There is truth in that if you hash something, yes, there will be collisions so it would be possible for two different passwords to unlock the same account. From a practical standpoint though, that's a poor argument - a good hashing function (md5 or sha1 would be fine) can pretty much guarantee that for all meaningful strings, especially short ones, there will be no collisions. Even if there were, having two passwords match for one account isn't a huge problem - if someone is in a position to randomly guess passwords fast enough that they are likely to be able to get in, you've got bigger problems. I would argue that storing the passwords in plain text represents a much greater security risk than hash collisions in the password matching. A: Absolutely none. But it doesn't matter. I've posted a similar response before: It's unfortunate, but people, even programmers, are just too emotional to be easily swayed by argument. Once he's invested in his position (and, if you're posting here, he is) you're not likely to convince him with facts alone. What you need to do is switch the burden of proof. You need to get him out looking for data that he hopes will convince you, and in so doing learn the truth. Unfortunately, he has the benefit of the status quo, so you've got a tough road there. A: I'm not a security expert but I have a feeling that if plain text were more secure, hashing wouldn't exist in the first place. A: In theory, yes. Passwords can be longer (more information) than a hash, so there is a possibility of hash collisions. However, most attacks are dictionary-based, and the probability of collisions is infinitely smaller than a successful direct match. A: It depends on what you're defending against. If it's an attacker pulling down your database (or tricking your application into displaying the database), then plaintext passwords are useless. There are many attacks that rely on convincing the application to disgorge its private data - SQL injection, session hijack, etc. It's often better not to keep the data at all, but to keep the hashed version so bad guys can't easily use it. As your co-worker suggests, this can be trivially defeated by running the same hash algorithm against a dictionary and using rainbow tables to pull the info out.
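Pulling the salting and hashing advice in this thread together, here is a minimal sketch in Java using only the standard library (SHA-256 via MessageDigest, a random salt via SecureRandom). The class and method names are illustrative, and a real system would usually prefer a deliberately slow password-hashing scheme over a bare hash, but the store-salt-plus-hash / recompute-on-login flow is the same.

import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Illustrative only: store the salt and the hash per user, never the plain-text password.
public class PasswordHasher {

    // Generate an unpredictable per-user salt (defeats precomputed rainbow tables).
    public static String newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return Base64.getEncoder().encodeToString(salt);
    }

    // Hash the salt and password together with SHA-256.
    public static String hash(String password, String salt) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] out = digest.digest((salt + password).getBytes("UTF-8"));
        return Base64.getEncoder().encodeToString(out);
    }

    // At login: recompute with the stored salt and compare against the stored hash.
    public static boolean matches(String attempt, String salt, String storedHash) throws Exception {
        return hash(attempt, salt).equals(storedHash);
    }
}

Because the salt is stored next to the hash rather than kept secret, its job is simply to make precomputed dictionary/rainbow tables useless, which is the point several of the answers above make.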
The usual solution is to use a secret salt plus additional user information to make the hashed results unique - something like: String hashedPass=CryptUtils.MD5("alsdl;ksahglhkjfsdkjhkjhkfsdlsdf" + user.getCreateDate().toString() + user.getPassword()); As long as your salt is secret, or your attacker doesn't know the precise creation date of the user's record, a dictionary attack will fail - even in the event that they are able to pull down the password field. A: Nothing is less secure than storing plain-text passwords. If you're using a decent hashing algorithm (at least SHA-256, but even SHA-1 is better than nothing) then yes, collisions are possible, but it doesn't matter because given a hash, it's impossible* to calculate what strings hash to it. If you hash the username WITH the password, then that possibility goes out the window as well. * - technically not impossible, but "computationally infeasible" If the username is "graeme" and the password is "stackoverflow", then create a string "graeme-stackoverflow-1234" where 1234 is a random number, then hash it and store "hashoutput1234" in the database. When it comes to validating a password, take the username, the supplied password and the number from the end of the stored value (the hash has a fixed length so you can always do this) and hash them together, and compare it with the hash part of the stored value. A: more passwords could match the hash and a dictionary/hash attack would be faster. Yes and no. Use a modern hashing algorithm, like an SHA variant, and that argument gets very, very weak. Do you really need to be worried if that brute force attack is going to take only 352 years instead of 467 years? (Anecdotal joke there.) The value to be gained (not having the password stored in plain text on the system) far outstrips your colleague's concern. A: Hope you forgive me for plugging a solution I wrote on this, using client-side JavaScript to hash the password before it's transmitted: http://blog.asgeirnilsen.com/2005/11/password-authentication-without.html
{ "language": "en", "url": "https://stackoverflow.com/questions/59022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I do a simple 'Find and Replace" in MsSQL? Question is pretty self explanitory. I want to do a simple find and replace, like you would in a text editor on the data in a column of my database (which is MsSQL on MS Windows server 2003) A: This pointed me in the right direction, but I have a DB that originated in MSSQL 2000 and is still using the ntext data type for the column I was replacing on. When you try to run REPLACE on that type you get this error: Argument data type ntext is invalid for argument 1 of replace function. The simplest fix, if your column data fits within nvarchar, is to cast the column during replace. Borrowing the code from the accepted answer: UPDATE YourTable SET Column1 = REPLACE(cast(Column1 as nvarchar(max)),'a','b') WHERE Column1 LIKE '%a%' This worked perfectly for me. Thanks to this forum post I found for the fix. Hopefully this helps someone else! A: The following will find and replace a string in every database (excluding system databases) on every table on the instance you are connected to: Simply change 'Search String' to whatever you seek and 'Replace String' with whatever you want to replace it with. --Getting all the databases and making a cursor DECLARE db_cursor CURSOR FOR SELECT name FROM master.dbo.sysdatabases WHERE name NOT IN ('master','model','msdb','tempdb') -- exclude these databases DECLARE @databaseName nvarchar(1000) --opening the cursor to move over the databases in this instance OPEN db_cursor FETCH NEXT FROM db_cursor INTO @databaseName WHILE @@FETCH_STATUS = 0 BEGIN PRINT @databaseName --Setting up temp table for the results of our search DECLARE @Results TABLE(TableName nvarchar(370), RealColumnName nvarchar(370), ColumnName nvarchar(370), ColumnValue nvarchar(3630)) SET NOCOUNT ON DECLARE @SearchStr nvarchar(100), @ReplaceStr nvarchar(100), @SearchStr2 nvarchar(110) SET @SearchStr = 'Search String' SET @ReplaceStr = 'Replace String' SET @SearchStr2 = QUOTENAME('%' + @SearchStr + '%','''') DECLARE @TableName nvarchar(256), @ColumnName nvarchar(128) SET @TableName = '' --Looping over all the tables in the database WHILE @TableName IS NOT NULL BEGIN DECLARE @SQL nvarchar(2000) SET @ColumnName = '' DECLARE @result NVARCHAR(256) SET @SQL = 'USE ' + @databaseName + ' SELECT @result = MIN(QUOTENAME(TABLE_SCHEMA) + ''.'' + QUOTENAME(TABLE_NAME)) FROM [' + @databaseName + '].INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''BASE TABLE'' AND TABLE_CATALOG = ''' + @databaseName + ''' AND QUOTENAME(TABLE_SCHEMA) + ''.'' + QUOTENAME(TABLE_NAME) > ''' + @TableName + ''' AND OBJECTPROPERTY( OBJECT_ID( QUOTENAME(TABLE_SCHEMA) + ''.'' + QUOTENAME(TABLE_NAME) ), ''IsMSShipped'' ) = 0' EXEC master..sp_executesql @SQL, N'@result nvarchar(256) out', @result out SET @TableName = @result PRINT @TableName WHILE (@TableName IS NOT NULL) AND (@ColumnName IS NOT NULL) BEGIN DECLARE @ColumnResult NVARCHAR(256) SET @SQL = ' SELECT @ColumnResult = MIN(QUOTENAME(COLUMN_NAME)) FROM [' + @databaseName + '].INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = PARSENAME(''[' + @databaseName + '].' + @TableName + ''', 2) AND TABLE_NAME = PARSENAME(''[' + @databaseName + '].' 
+ @TableName + ''', 1) AND DATA_TYPE IN (''char'', ''varchar'', ''nchar'', ''nvarchar'') AND TABLE_CATALOG = ''' + @databaseName + ''' AND QUOTENAME(COLUMN_NAME) > ''' + @ColumnName + '''' PRINT @SQL EXEC master..sp_executesql @SQL, N'@ColumnResult nvarchar(256) out', @ColumnResult out SET @ColumnName = @ColumnResult PRINT @ColumnName IF @ColumnName IS NOT NULL BEGIN INSERT INTO @Results EXEC ( 'USE ' + @databaseName + ' SELECT ''' + @TableName + ''',''' + @ColumnName + ''',''' + @TableName + '.' + @ColumnName + ''', LEFT(' + @ColumnName + ', 3630) FROM ' + @TableName + ' (NOLOCK) ' + ' WHERE ' + @ColumnName + ' LIKE ' + @SearchStr2 ) END END END --Declaring another temporary table DECLARE @time_to_update TABLE(TableName nvarchar(370), RealColumnName nvarchar(370)) INSERT INTO @time_to_update SELECT TableName, RealColumnName FROM @Results GROUP BY TableName, RealColumnName DECLARE @MyCursor CURSOR; BEGIN DECLARE @t nvarchar(370) DECLARE @c nvarchar(370) --Looping over the search results SET @MyCursor = CURSOR FOR SELECT TableName, RealColumnName FROM @time_to_update GROUP BY TableName, RealColumnName --Getting my variables from the first item OPEN @MyCursor FETCH NEXT FROM @MyCursor INTO @t, @c WHILE @@FETCH_STATUS = 0 BEGIN -- Updating the old values with the new value DECLARE @sqlCommand varchar(1000) SET @sqlCommand = ' USE ' + @databaseName + ' UPDATE [' + @databaseName + '].' + @t + ' SET ' + @c + ' = REPLACE(' + @c + ', ''' + @SearchStr + ''', ''' + @ReplaceStr + ''') WHERE ' + @c + ' LIKE ''' + @SearchStr2 + '''' PRINT @sqlCommand BEGIN TRY EXEC (@sqlCommand) END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH --Getting next row values FETCH NEXT FROM @MyCursor INTO @t, @c END; CLOSE @MyCursor ; DEALLOCATE @MyCursor; END; DELETE FROM @time_to_update DELETE FROM @Results FETCH NEXT FROM db_cursor INTO @databaseName END CLOSE db_cursor DEALLOCATE db_cursor Note: this isn't ideal, nor is it optimized A: like so: BEGIN TRANSACTION; UPDATE table_name SET column_name=REPLACE(column_name,'text_to_find','replace_with_this'); COMMIT TRANSACTION; Example: Replaces <script... with <a ... to eliminate javascript vulnerabilities BEGIN TRANSACTION; UPDATE testdb SET title=REPLACE(title,'script','a'); COMMIT TRANSACTION; A: The following query replace each and every a character with a b character. UPDATE YourTable SET Column1 = REPLACE(Column1,'a','b') WHERE Column1 LIKE '%a%' This will not work on SQL server 2003. A: If you are working with SQL Server 2005 or later there is also a CLR library available at http://www.sqlsharp.com/ that provides .NET implementations of string and RegEx functions which, depending on your volume and type of data may be easier to use and in some cases the .NET string manipulation functions can be more efficient than T-SQL ones.
{ "language": "en", "url": "https://stackoverflow.com/questions/59044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "90" }
Q: Save each sheet in a workbook to separate CSV files How do I save each sheet in an Excel workbook to separate CSV files with a macro? I have an excel with multiple sheets and I was looking for a macro that will save each sheet to a separate CSV (comma separated file). Excel will not allow you to save all sheets to different CSV files. A: @AlexDuggleby: you don't need to copy the worksheets, you can save them directly. e.g.: Public Sub SaveWorksheetsAsCsv() Dim WS As Excel.Worksheet Dim SaveToDirectory As String SaveToDirectory = "C:\" For Each WS In ThisWorkbook.Worksheets WS.SaveAs SaveToDirectory & WS.Name, xlCSV Next End Sub Only potential problem is that that leaves your workbook saved as the last csv file. If you need to keep the original workbook you will need to SaveAs it. A: Here is one that will give you a visual file chooser to pick the folder you want to save the files to and also lets you choose the CSV delimiter (I use pipes '|' because my fields contain commas and I don't want to deal with quotes): ' ---------------------- Directory Choosing Helper Functions ----------------------- ' Excel and VBA do not provide any convenient directory chooser or file chooser ' dialogs, but these functions will provide a reference to a system DLL ' with the necessary capabilities Private Type BROWSEINFO ' used by the function GetFolderName hOwner As Long pidlRoot As Long pszDisplayName As String lpszTitle As String ulFlags As Long lpfn As Long lParam As Long iImage As Long End Type Private Declare Function SHGetPathFromIDList Lib "shell32.dll" _ Alias "SHGetPathFromIDListA" (ByVal pidl As Long, ByVal pszPath As String) As Long Private Declare Function SHBrowseForFolder Lib "shell32.dll" _ Alias "SHBrowseForFolderA" (lpBrowseInfo As BROWSEINFO) As Long Function GetFolderName(Msg As String) As String ' returns the name of the folder selected by the user Dim bInfo As BROWSEINFO, path As String, r As Long Dim X As Long, pos As Integer bInfo.pidlRoot = 0& ' Root folder = Desktop If IsMissing(Msg) Then bInfo.lpszTitle = "Select a folder." ' the dialog title Else bInfo.lpszTitle = Msg ' the dialog title End If bInfo.ulFlags = &H1 ' Type of directory to return X = SHBrowseForFolder(bInfo) ' display the dialog ' Parse the result path = Space$(512) r = SHGetPathFromIDList(ByVal X, ByVal path) If r Then pos = InStr(path, Chr$(0)) GetFolderName = Left(path, pos - 1) Else GetFolderName = "" End If End Function '---------------------- END Directory Chooser Helper Functions ---------------------- Public Sub DoTheExport() Dim FName As Variant Dim Sep As String Dim wsSheet As Worksheet Dim nFileNum As Integer Dim csvPath As String Sep = InputBox("Enter a single delimiter character (e.g., comma or semi-colon)", _ "Export To Text File") 'csvPath = InputBox("Enter the full path to export CSV files to: ") csvPath = GetFolderName("Choose the folder to export CSV files to:") If csvPath = "" Then MsgBox ("You didn't choose an export directory. 
Nothing will be exported.") Exit Sub End If For Each wsSheet In Worksheets wsSheet.Activate nFileNum = FreeFile Open csvPath & "\" & _ wsSheet.Name & ".csv" For Output As #nFileNum ExportToTextFile CStr(nFileNum), Sep, False Close nFileNum Next wsSheet End Sub Public Sub ExportToTextFile(nFileNum As Integer, _ Sep As String, SelectionOnly As Boolean) Dim WholeLine As String Dim RowNdx As Long Dim ColNdx As Integer Dim StartRow As Long Dim EndRow As Long Dim StartCol As Integer Dim EndCol As Integer Dim CellValue As String Application.ScreenUpdating = False On Error GoTo EndMacro: If SelectionOnly = True Then With Selection StartRow = .Cells(1).Row StartCol = .Cells(1).Column EndRow = .Cells(.Cells.Count).Row EndCol = .Cells(.Cells.Count).Column End With Else With ActiveSheet.UsedRange StartRow = .Cells(1).Row StartCol = .Cells(1).Column EndRow = .Cells(.Cells.Count).Row EndCol = .Cells(.Cells.Count).Column End With End If For RowNdx = StartRow To EndRow WholeLine = "" For ColNdx = StartCol To EndCol If Cells(RowNdx, ColNdx).Value = "" Then CellValue = "" Else CellValue = Cells(RowNdx, ColNdx).Value End If WholeLine = WholeLine & CellValue & Sep Next ColNdx WholeLine = Left(WholeLine, Len(WholeLine) - Len(Sep)) Print #nFileNum, WholeLine Next RowNdx EndMacro: On Error GoTo 0 Application.ScreenUpdating = True End Sub A: A small modification to answer from Alex is turning on and off of auto calculation. Surprisingly the unmodified code was working fine with VLOOKUP but failed with OFFSET. Also turning auto calculation off speeds up the save drastically. Public Sub SaveAllSheetsAsCSV() On Error GoTo Heaven ' each sheet reference Dim Sheet As Worksheet ' path to output to Dim OutputPath As String ' name of each csv Dim OutputFile As String Application.ScreenUpdating = False Application.DisplayAlerts = False Application.EnableEvents = False ' Save the file in current director OutputPath = ThisWorkbook.Path If OutputPath <> "" Then Application.Calculation = xlCalculationManual ' save for each sheet For Each Sheet In Sheets OutputFile = OutputPath & Application.PathSeparator & Sheet.Name & ".csv" ' make a copy to create a new book with this sheet ' otherwise you will always only get the first sheet Sheet.Copy ' this copy will now become active ActiveWorkbook.SaveAs Filename:=OutputFile, FileFormat:=xlCSV, CreateBackup:=False ActiveWorkbook.Close Next Application.Calculation = xlCalculationAutomatic End If Finally: Application.ScreenUpdating = True Application.DisplayAlerts = True Application.EnableEvents = True Exit Sub Heaven: MsgBox "Couldn't save all sheets to CSV." & vbCrLf & _ "Source: " & Err.Source & " " & vbCrLf & _ "Number: " & Err.Number & " " & vbCrLf & _ "Description: " & Err.Description & " " & vbCrLf GoTo Finally End Sub A: For Mac users like me, there are several gotchas: You cannot save to any directory you want. Only few of them can receive your saved files. More info there Here is a working script that you can copy paste in your excel for Mac: Public Sub SaveWorksheetsAsCsv() Dim WS As Excel.Worksheet Dim SaveToDirectory As String SaveToDirectory = "~/Library/Containers/com.microsoft.Excel/Data/" For Each WS In ThisWorkbook.Worksheet WS.SaveAs SaveToDirectory & WS.Name & ".csv", xlCSV Next End Sub A: Use Visual Basic to loop through worksheets and save .csv files. * *Open up .xlsx file in Excel. 
*Press option+F11 *Insert → Module *Insert this into the module code: Public Sub SaveWorksheetsAsCsv() Dim WS As Excel.Worksheet Dim SaveToDirectory As String SaveToDirectory = "./" For Each WS In ThisWorkbook.Worksheets WS.SaveAs SaveToDirectory & WS.Name & ".csv", xlCSV Next End Sub *Run the module. (i.e. Click the play button at the top and then click "Run" on the dialog, if it pops up.) *Find your .csv files in ~/Library/Containers/com.microsoft.Excel/Data. open ~/Library/Containers/com.microsoft.Excel/Data *Close .xlsx file. *Rinse and repeat for other .xlsx files. A: And here's my solution should work with Excel > 2000, but tested only on 2007: Private Sub SaveAllSheetsAsCSV() On Error GoTo Heaven ' each sheet reference Dim Sheet As Worksheet ' path to output to Dim OutputPath As String ' name of each csv Dim OutputFile As String Application.ScreenUpdating = False Application.DisplayAlerts = False Application.EnableEvents = False ' ask the user where to save OutputPath = InputBox("Enter a directory to save to", "Save to directory", Path) If OutputPath <> "" Then ' save for each sheet For Each Sheet In Sheets OutputFile = OutputPath & "\" & Sheet.Name & ".csv" ' make a copy to create a new book with this sheet ' otherwise you will always only get the first sheet Sheet.Copy ' this copy will now become active ActiveWorkbook.SaveAs FileName:=OutputFile, FileFormat:=xlCSV, CreateBackup:=False ActiveWorkbook.Close Next End If Finally: Application.ScreenUpdating = True Application.DisplayAlerts = True Application.EnableEvents = True Exit Sub Heaven: MsgBox "Couldn't save all sheets to CSV." & vbCrLf & _ "Source: " & Err.Source & " " & vbCrLf & _ "Number: " & Err.Number & " " & vbCrLf & _ "Description: " & Err.Description & " " & vbCrLf GoTo Finally End Sub (OT: I wonder if SO will replace some of my minor blogging) A: Building on Graham's answer, the extra code saves the workbook back into it's original location in it's original format. Public Sub SaveWorksheetsAsCsv() Dim WS As Excel.Worksheet Dim SaveToDirectory As String Dim CurrentWorkbook As String Dim CurrentFormat As Long CurrentWorkbook = ThisWorkbook.FullName CurrentFormat = ThisWorkbook.FileFormat ' Store current details for the workbook SaveToDirectory = "C:\" For Each WS In ThisWorkbook.Worksheets WS.SaveAs SaveToDirectory & WS.Name, xlCSV Next Application.DisplayAlerts = False ThisWorkbook.SaveAs Filename:=CurrentWorkbook, FileFormat:=CurrentFormat Application.DisplayAlerts = True ' Temporarily turn alerts off to prevent the user being prompted ' about overwriting the original file. End Sub A: Please look into Von Pookie's answer, all credits to him/her. Sub asdf() Dim ws As Worksheet, newWb As Workbook Application.ScreenUpdating = False For Each ws In Sheets(Array("EID Upload", "Wages with Locals Upload", "Wages without Local Upload")) ws.Copy Set newWb = ActiveWorkbook With newWb .SaveAs ws.Name, xlCSV .Close (False) End With Next ws Application.ScreenUpdating = True End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/59075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85" }
Q: SqlServer Express slow performance I am stress testing a .NET web application. I did this for 2 reasons: I wanted to see what performance was like under real world conditions and also to make sure we hadn't missed any problems during testing. We had 30 concurrent users in the application using it as they would during the normal course of their jobs. Most users had multiple windows of the application open. * *10 Users: Not bad *20 Users: Slowing down *30 Users: Very, very slow but no timeouts It was loaded on the production server. It is a virtual server with a 2.66 GHz Xeon processor and 2 GB of RAM. We are using Win2K3 SP2. We have .NET 1.1 and 2.0 loaded and are using SQLExpress SP1. We rechecked the indexes on all of the tables afterward and they were all as they should be. How can we improve our application's performance? A: * *You may be running into concurrency issues, depending on how your application runs. Try performing your reads with the "nolock" keyword. *Try adding in table aliases for your columns (and avoid the use of SELECT *), this helps out MSSQL, as it doesn't have to "guess" which table the columns come from. *If you aren't already, move to SPROCs, this allows MSSQL to index your data better for a given query's normal result set. *Try following the execution plan of your SPROCs to ensure they are using the indexes you think they are. *Run a trace against your database to see what the incoming requests look like. You may notice a particular SPROC is being run over and over: generally a good sign to cache the responses on the client if possible. (lookup lists, etc.) A: This is just something that I thought of, but check to see how much memory SQL Server is using when you have 20+ users - one of the limitations of the Express version is that it is limited to 1GB of RAM. So it might just be a simple matter of there not being enough memory available to the server due to the limitations of Express. A: Update: Looks like SQL Server Express is not the problem as they were using the same product in the previous version of the application. I think your next step is identifying the bottlenecks. If you are sure it is in the database layer, I would recommend taking a profiler trace and bringing down the execution time of the most expensive queries. This is another link I use for collecting statistics from SQL Server Dynamic Management Views (DMVs) and related Dynamic Management Functions (DMFs). Not sure if we can use it in the Express edition. Uncover Hidden Data to Optimize Application Performance. Are you using SQL Server Express for a web app? As far as I know, it has some limitations for production deployment. SQL Server Express is free and can be redistributed by ISVs (subject to agreement). SQL Server Express is ideal for learning and building desktop and small server applications. This edition is the best choice for independent software vendors, non-professional developers, and hobbyists building client applications. If you need more advanced database features, SQL Server Express can be seamlessly upgraded to more sophisticated versions of SQL Server. A: I would check disk performance on the virtual server. If that's one of the issues, I would recommend putting the database on a separate spindle. Update: Move to a separate spindle or upgrade the SQL Server version as Gulzar aptly suggests. A: Make sure you close connections after retrieving data. A: Run SQL Profiler to see the queries sent to the database.
Look for queries that are: * *returning too much data *constructed poorly *are being executed too many times
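A hedged illustration of the "nolock" suggestion above (the table and column names are made up): the WITH (NOLOCK) hint performs a dirty read, so it only belongs on queries where slightly stale or uncommitted data is acceptable.

-- hypothetical schema, shown only to illustrate the hint
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders WITH (NOLOCK)
WHERE CustomerID = @CustomerID;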
{ "language": "en", "url": "https://stackoverflow.com/questions/59080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is Adobe Flex? Is it just Flash II? Question Alright, I'm confused by all the buzzwords and press release bingo going on. * *What is the relationship between flash and flex: * *Replace flash (not really compatible) *Enhance flash *The next version of flash but still basically compatible *Separate technology altogether *??? *If I'm starting out in Flash now, should I just skip to Flex? Follow up Ok, so what I'm hearing is that there's three different parts to the puzzle: * *Flash * *The graphical editor used to make "Flash Movies", ie it's an IDE that focuses on the visual aspect of "Flash" (Officially Flash CS3?) *The official name for the display plugins (ie, "Download Flash Now!") *A general reference to the entire technology stack *In terms of the editor, it's a linear timeline based editor, best used for animations with complex interactivity. *Actionscript * *The "Flash" programming language *Flex * *An Adobe Flash IDE that focuses on the coding/programming aspect of "Flash" (Flex Builder?) *A Flash library that enhances Flash and makes it easier to program for (Flex SDK?) *Is not bound to a timeline (as the Flash IDE is) and so "standard" applications are more easily accomplished. Is this correct? -Adam A: What is the difference between Flex and Flash? The way I keep it clear in my mind and explain it to others is as follows: Choose the right tool for what you want to create. If you want to write an APPLICATION using Flash technology, use Flex. If you want to create an ANIMATION using Flash technology, use traditional Flash. Flex is optimized for application construction (but you can create primitive animations using states) and it compiles into a SWF. Flash is optimized for animation construction but you can also create applications with some extra work, and it compiles into a SWF. Once you have your SWF you can play it in your Flash player, although Flex requires Flash 9 or higher. In conclusion Application -> Flex -> SWF Animation -> Flash -> SWF Hope this helps. A: Part of the confusion is that "Flash" means multiple things. Flash can mean one of a multitude of applications, OR the general technology behind SWFs. There's the Flash CS3 product that Adobe sells. This is generally targeted at designers and uses a Timeline-based approach to creating SWFs. Previously the Flash product was the only way to create SWFs, and SWFs generally were just used for animations and other visual effects not possible in a browser otherwise. Then there's the Flash Player. This is the application/plugin used to run SWFs. SWFs can also be wrapped in a "Projector" which allows them to run as a standalone app, but that's not as common. At some point (I don't know when) Flash started to be used for more interactive applications. The Flash product wasn't very well suited for this kind of work, as it was designed to create animations. Recognizing this, Macromedia created Flex. Flex is another development environment for creating SWFs, but it is targeted more at developers than designers. The latest version is Flex 3. The Flex SDK is freely available and includes a command line compiler, debugger, and the class libraries. Adobe also provides (for purchase) the Flex Builder app, an Eclipse-based IDE for creating SWFs using Flex. There are some free IDEs for using Flex, most notably FlashDevelop, though I don't know of any that provide a visual designer for MXML, the markup language used by Flex to define UIs, or a visual wrapper for the debugger. 
If you're approaching Flash from a developer's perspective then you're going to want to use Flex. It's probably a lot closer to what you're used to dealing with, and seems to be the direction Adobe is pushing Flash in general. A: Adobe Flex Builder is Adobe's IDE for developing applications that will run in the Flash plugin. The Flex SDK refers to the libraries that developers uses to write the applications. Essentially, the term 'Flex' is the development side and 'Flash' is the run time side of Adobe's technology. Correction: The term 'Flash' is also used to refer to the Flash IDE which designers use. A: Flash is a Runtime Environment - you use the Flash Authoring tool to make some Flash content, and the Flash player processes your content, executes the scripts, and draws the appropriate pictures onto the screen. Flex is a Development Framework - you use the Flex development tools to define how your component-based content should work, and then Flex generates the Flash content for you. You could have made the same thing with the Flash Authoring tools, but using Flex may let you avoid reinventing some wheels. In functional terms, Flash is an extremely versatile runtime; it gives you lots of freedom to do anything you want. But if you're building a loan simulator, you probably don't need the freedom to define precisely how all the pulldown menus and screen transitions work. Probably you'd rather that you could just use premade components for that stuff so you can concentrate on the loan stuff. Flex lets you do that, with the cost that it may turn out to be a lot of work if you decide that you do need a lot of freedom to change all the fine details. A: Here's another simple view based on how you describe an application you want to develop. Do you want it to have the highly granular UI capabilities you expect from a workstation (e.g. Windows) application, with a fairly complete set of controls (listbox, combobox, grid, etc.)? Flex gives you that. Do you want to deliver it to run in most browsers (i.e. anywhere Flash is installed/installable)? Flex gives you that, because it runs in the Flash virtual machine. Do you also want to be able to offer it to run in the OS, not in a browser (with minimal coding changes?) Flex can give you that, by allowing you run it, not in the Flash VM, but instead in the Adobe AIR VM (which has versions for Windows, Mac, and Linux) which provides wrappers for common OS resources like the filesystem, OS dialog-box UI components (e.g. "Open/File/ etc.) Are you OK using javascript as the development language? You need to be, because it's the only language supported, in the form of ActionScript, which is a proper superset of javascript - it accepts javascript code directly. So the partioned pieces are: * *A runtime environment, either the Flash browser plugin, or the AIR runtime (the native OS equivalent). *Actionscript as the development language. *A bunch of libraries of controls and other resources, i.e. the typical Window, Frame, Combobox, Listbox, Image container, TextBox, TextArea, a wysiwyg-y RichTextArea, etc.) These are the province of Flex. With the above resources alone, you can use the free command-line compiler to build applications in the same fashion as you would with any other command-line-compiler language with libraries. Then, if you want a fancy IDE, there's Flex Builder, which is a set of extensions to good-ol' Eclipse (for several hundred $$). Or there are several other IDE's that are more or less equivalent. 
A: Flex is a framework FlexBuilder is an IDE (in an attempt to resolve the confusion between the 2, adobe is renaming FlexBuilder to FlashBuilder) if you do not know what a Framework is: Flex is to Flash as what CodeIgniter is to PHP or like what .net is. I like to think of a framework as extending a language like you would extend a class in OOP. A: Flex is a development framework that compiles xml and actionscript into a SWF, which runs on a client machine accessing a website. It can also run as a desktop application using Adobe AIR. Flash uses a drawing canvas instead of xml. Compiled SWFs can be referred to as Flash, which adds to the confusion. You may find Flex similar to ASP.NET, which takes xml and c# (vb.net, etc) and compiles into a dll. Of course, ASP.NET runs on the web server. If you are choosing between Flex and Flash for an application, consider whether the application will be based around data. If you want to make a pretty spreadsheet application, Flex would be the way to go. If you are creating a video game or some sort of presentation, you would probably want Flash. A: Following up on this, I found a very useful article on the subject: Adobe Flash and Flex—Which Makes the Most Sense for Your Project? The Flash Integrated Development Environment (IDE), otherwise known in its current version as Flash CS3, is an ideal tool for developing character animation or linear animation projects. These are projects that require little coding and can be effectively implemented with the Flash IDE timeline. In other words, Flash CS4 is very tightly bound to the timeline - good for linear sequences or animations. While one can add loops and interactivity in the form of jumping to new points on the timeline, it is limiting for projects which cannot easily be mapped out in terms of progression over time. It is, in simplistic terms, an animation tool with very powerful interactive features - creating and composing new elements on the fly, and jumping around on the timeline allows one to create applications, and the Flash CS4 IDE makes this relatively easy, but it's not a great platform for application style software. Flex was built as an answer to that - while it makes use of the same elements and exposes the code, it is not bound to a timeline, and has more in common with object oriented programming languages than animation languages. The Flex Builder IDE includes both the programming IDE, as well as a GUI designer for quickly building interfaces, and a few other tools for developing applications that run on the flash player. A: The term Flash can include any of the other terms defined below, and I find that saying "Flash" without specifying exactly what you mean can be confusing and misleading. Case in point: I'm not sure what you specifically mean when you say "Flash" in your question. Is it Flash Player? The authoring tool? Or the whole collection of technologies that fall under what Adobe calls the "Flash Platform"? To help clear all this up, let me define the technologies involved in creating Flash and Flex content so that we're all using the same terminology here: Flash Player is a runtime for rich media content in the browser. There is also Flash Lite to run Flash content on older or low-end mobile devices, and Adobe AIR extends Flash Player to allow content authors to publish native desktop and mobile applications that can integrate with things like the file system, windowing systems, and device sensors like the accelerometer and camera. 
Collectively, Adobe refers to these as the Flash runtimes. Flash Professional (often called the Flash authoring tool or the Flash IDE) has traditionally been the primary application used to create content that runs on Flash Player. It is very designer oriented with timelines, layers, and visual drawing tools. Software developers may find this application disorienting and limited compared to other development tools that focus more on code, like Flash Builder (see below). When someone says, "I built something with Flash", they often mean the Flash authoring tool, but that's not always the case. For that reason, it's good to always clarify to avoid any confusion. ActionScript is the primary programming language supported by Adobe to target Flash runtimes. The current version is ActionScript 3 (abbreviated as AS3), which has been supported since Flash Player 9. Content created with older versions of ActionScript can still be run in the latest versions Flash Player, but new features are only supported when using ActionScript 3 to create new content. Flex is a collection of technologies designed to create rich applications that target the Adobe's Flash runtimes. Though saying "Flex" previously had the same ambiguity as "Flash", the Flex brand name is becoming more and more focused on the Flex framework and SDK, described below. The Flex SDK consists of compilers, a command-line debugger, and the Flex framework. The included compilers are: 1. MXMLC, an ActionScript and MXML compiler to output the final SWF file for deployment to Flash Player. 2. COMPC, a static library compiler for ActionScript that outputs SWC files. 3. ASDOC, a documentation generator built on the compiler technology. The Flex framework is a collection of ActionScript classes designed to build Rich Internet Applications. It includes things like user interface controls, web services and other connectivity classes, formatters and validators, drag and drop, modal windowing, and component states. These classes are in the mx.* package. Generally, when developers say "Flex" without any clarifying information, they mean the Flex framework and not the product formerly known as Flex Builder. In 2011, Adobe donated the Flex SDK to the Apache Software Foundation. It is now called Apache Flex and it is fully managed by the community rather than Adobe. However, Adobe employees continue to contribute to the project, and Flash Builder (see below) continues to support new SDKs released by the Apache Flex project. MXML is an XML-based markup language used by the Flex compilers to make layout and placing components into containers easier. The tree-like structure of XML make the containment hierarchy easier to visualize. MXML is actually converted to ActionScript during the compilation process. Flash Builder (formerly known as Flex Builder) is a development environment that allows developers to build different project types to create SWF files that are deployed to Flash runtimes. It is built on the Eclipse platform and is more familiar to software engineers. Flash Builder supports projects built with Flex or pure ActionScript. Flex projects include the Flex framework. ActionScript projects are the most basic you can work with, starting with a single class and an empty canvas, and the Flex framework is not included. Flash Builder does not replace Flash Professional. Some people who have traditionally used Flash Professional may now choose to use Flash Builder instead. 
Often, these are software engineers who appreciate or require the advanced development tools offered by Flash Builder or don't work heavily with assets designed in a visual tool. Some developers may write their code in Flash Builder, while choosing to compile their projects in the Flash authoring tool. Often, these developers are also designers, or they are working with other people who are designers. In this situation, there may be many graphical assets created in the Flash authoring tool, and it could be difficult or simply inappropriate to bring them into another environment. The Flex framework is specifically designed to build applications. It includes many traditional form controls (buttons, lists, datagrids, etc) and much of the code runs on an advanced component framework written in ActionScript. Not everyone is building the sort of content that Flex is designed to create, and Flex does not replace traditional Flash development practices for everyone. It is a better approach for some developers, but may not be right for others. More design-heavy websites, such as those created for movies, music, energy drinks, advertising campaigns, and things like that probably shouldn't use the Flex framework. These types of content might be better suited to Flash Professional or a pure ActionScript project in Flash Builder. Similarly, little widgets you put into the sidebar of your website or on your profile in a social networking website may need to be built with pure ActionScript (without the Flex framework) because they require a smaller file size and they probably don't need a big complex component architecture designed for larger applications. When targeting Flash runtimes, your development environment, frameworks, and workflow should be chosen based on your project's requirements and goals. A: Yeah, I was confused by this for quite a while too. Flex seems to be thier name for the 'Flex Builder' IDE (based on Eclipse), and the general approach of building flash files using mxml and ActionScript rather than the normal flash tools. I think the mxml and ActionScript approach (i.e. Flex) is designed to appeal much more to programmers, where as the Flash side is designed more to appeal to graphic designers. The end result of either approach is a .swf file which can be run in the browser's flash player plugin (although with Flex you can target the Adobe Air runtime instead if you want access to the file system and to run offline etc). My advice would be, if you're coming from a programming background, to start with Flex. A: Flex and Flash have different target audiences. Flex is more geared towards developers where as Flash is more geared towards designers and artists. A: Flashdeveloper has been mentioned as a free tool to develop flex applications. I just want to add a free tool to design applications (create an MXML file using a designer): designview. It's available directly on the adobe website, it's an air application that is basic but that give the possibility to take a look freely and easily to the possibilities of flex. A: Flex is a free and open source framework based on ActionScript to develop SWFs and AIR applications. Flex Builder (now renamed to Flash Builder as of version 4, to avoid the confusion) is a commercial IDE from Adobe to develop SWF/AIR using the flex framework. While flash (CS3) is good for animation related stuff, flex is good for application/ui related stuff. Adobe positions flex as an RIA (Rich Internet Application) framework. 
A: Flex runs on Linux, too, while Flash doesn't. Flex is kinda Flash CS 4 second edition. Flex is less graphical, as it separates compiler and IDE, which allows for command line compilation (makefiles, large projects so to say) which allows for alternative IDEs to Flash. Edit: Flex lacks some classes that Flash CSX has (e.g. fl.controls), while Flash lacks some classes that Flex has (e.g. mx.controls or mx.alert). All in all: You can have your own Flash compiler for free by downloading the Flex 4 SDK and FlashDevelop. But it is no substitute for Flash. Flash produces much smaller files (e.g. Flash compiles a project to 100 kB while Flex compiles the same project to 500 kB). So Flash is for internet multimedia applications, while Flex is for desktop multimedia applications. A: Flex Builder 3 --> Flash Builder 4, even though you use it for Flex. You can also use it for Flash. If you really want to learn about all this stuff, you should just buy a veteran lunch for a day because it will save you MONTHS. Adobe makes some cool products, but is also well-known to be a lazy company, and this leads to extremely poor documentation. Unless you are a fan of "livedocs," which is a term Adobe coined to describe "slow and bloated HTML." A: "Adobe Flex is a collection of technologies released by Adobe Systems for the development and deployment of cross platform rich Internet applications based on the proprietary Adobe Flash platform." Adobe Flex A: Flash is a programming language rather similar to JavaScript but with support for static types. Flex is a flash library that is intended to help people program in Flash on a much higher level. It may be helpful to some to think about this as Flex over Flash being like MFC over C++. A: Flex is basically a language that compiles down to a flash "movie" or "applet", that will run in the Adobe Flash player plugin. A: In very simple terms, Flex technology uses MXML to create applications. MXML is analogous to HTML and Flash components are analogous to something like form elements. MXML basically allows you to specify what Flash components (such as a table, dropdown list, or something custom that you build in Flash) go on an application screen. This is a very simplified answer, but that's how I tend to explain Flex. (Flex Builder is an environment for you to develop Flex apps and Flash apps) A: Flex is not a programming language. Flex is a framework for developing Rich Internet Applications over the Flash runtime and includes ActionScript & MXML as languages. A: Flex is a collection of Technologies, Tools and Frameworks for building cross platform Rich Internet Applications. A: The best answer I've found for "What is Flex" is at this page: http://www.adobe.com/products/flex/faq.html#flex-flash Search for "How is Flex different from Flash?" My interpretation of this is that if your application was generated from Flash Professional, it is a "Flash" application. If it was generated with the Flex SDK (Flash Builder, Flash Develop, or straight code & command line tools) it is a "Flex" application. Both "Flash" applications and "Flex" applications compile into bytecode that can be run by the "Flash Player" or by "Adobe AIR". Both types of applications can include "Actionscript" code.
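To make the MXML point above concrete, here is a minimal, hedged sketch of a Flex 3 application (the component ids and the click handler are arbitrary); it would be compiled to a SWF with the SDK's mxmlc compiler rather than authored on a timeline:

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="vertical">
    <!-- a label and a button laid out by the framework, no timeline involved -->
    <mx:Label id="greeting" text="Hello from Flex"/>
    <mx:Button label="Click me" click="greeting.text = 'Clicked';"/>
</mx:Application>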
{ "language": "en", "url": "https://stackoverflow.com/questions/59083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "82" }
Q: Does Windows Server 2003 SP2 tell the truth about Free System Page Table Entries? We have some Win32 console applications running on Windows Server 2003 Service Pack 2 that regularly fail with this: Error 1450 (ERROR_NO_SYSTEM_RESOURCES): "Insufficient system resources exist to complete the requested service." All the documentation we've found suggests it is linked to the number of Free System Page Table Entries running out. We have 16GB RAM in these machines and use the /3GB Operating System switch to squeeze the Windows kernel into 1GB and allow our processes access to 3GB of address space. This drastically reduces the total number of Free System Page Table Entries, so combined with our heavy use of MapViewOfFile() it is perhaps not surprising that the kernel page table entries are running out. However, when using Performance Monitor to view the Free System Page Table Entries counter, the value is around 36,000 on reboot and doesn't go down when our application starts. I find it hard to believe that our application, which opens many large memory-mapped files, doesn't have any effect on the kernel page table. If we can't believe the counter, it's much more difficult to test the effect of any system changes we make. There is a promising Knowledge Base article, The Performance tool does not accurately show the available Free System Page Table entries in Windows Server 2003, but it says the problem has been fixed in Service Pack 1, and we are already on Service Pack 2. Has anyone else struggled with or solved this issue? Update: I have checked !sysptes in windbg (debugging the kernel) and the value matches the performance counter, around 36,000. I guess this is most likely to mean that there really are that many free page table entries and Windows is telling the truth. It does leave the question of why we're getting 1450 errors though, if the PTEs are not running out. Further update: We never did get to the bottom of why the 1450 errors were occurring. However, instead we upgraded the OS on these servers to 64-bit Windows. This allows the existing 32-bit applications (without recompilation) to access a full 4GB of virtual address space, and lets the kernel memory area with those pesky Page Table Entries be as big as it likes too. I don't think we've had a 1450 error since. A: Can you try the windbg command "!sysptes" to get System PTE Information? I'm not sure if you can do this with live kernel debug, you may have to get a memory dump. A: I'm not sure why you assume that ERROR_NO_SYSTEM_RESOURCES is caused only by running out of free System Page Table Entries? As far as I know, such generic error codes are used for more than one resource type. And in fact, the first Google hit suggests that running out of file cache memory may cause it too. (KB on an XP bug, which tripped this error mode). In your case, I'd be checking the "Handle Count". Another possible problem is address space fragmentation. If you want to create a 1GB file mapping view, you need 1GB of free address space, and it has to be contiguous. If you map a 1GB file, an 800 MB file, and a 1GB file, close the 800MB one and open a 900MB file, the 900MB file may not fit in the hole that's left. A: MS has 2 ways to allow their 32-bit OS to "deal" with hardware that has 4 GB or more of RAM. Option 1 is what you did with the /3GB Switch in the Boot.ini.
Option 1 Pros and Cons: (CONS) This option sucks 1 GB from the normal 2 GB kernel area - hence making the OS struggle to meet the demands of both Paged Pool allocations and kernel stack allocations. So a person might think that using the /3GB Switch will help them, but really this option is screwing the 32-bit Windows OS into a slow death. (CONS) But, This gives my App 3GB.... WRONG (Hence this is a CON) The catch is that ONLY applications that have been recompiled by the vendor to be "/3GB Switch aware" can really use the extra 1 GB. Hence the whole use of the /3GB Switch is a really BAD J.O.K.E on everyone. Read this link for a much better write-up: http://blogs.technet.com/askperf/archive/2007/03/23/memory-management-demystifying-3gb.aspx Option 2: Use the /PAE switch in the Boot.ini. Option 2 Pros and Cons: (PROS) This is really the only option if you have more than 4 GB of RAM. It tricks an application by placing the complete application memory footprint in RAM. Normally, only an application's "Working Set" memory is in RAM and the remaining application memory requirements go into the Windows Pagefile. What are an application's total memory requirements? They're called the "Virtual Size". In my world, I have a big fat Java based IBM Product that I deal with. The server that is running the "application" has 16 GB of RAM. I simply add the /PAE switch and watch (thanks to Sysinternals Process Explorer) application paging requests go from 200 KB per sec to up to 4MB per sec. Question: "Why?" Answer: The whole application is in RAM. Question: "Does the application know that it is completely running in RAM?" Answer: No - it is running the same old way that it was always run, "THINKING" that it has part of itself as the "Working Set" memory living in RAM and the remaining application memory requirements going into the Windows Pagefile. Yes, it is that flipping GOOD. Please Note: Microsoft has done a poor job telling anyone about this great Windows OS option. Duh. Try it and report back to stackoverflow....
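For reference, a hedged sketch of what the Boot.ini entries discussed above typically look like (the ARC path and the description strings are placeholders for whatever is already on the machine; only the trailing switches matter here):

[operating systems]
; /3GB squeezes the kernel into 1 GB and gives /3GB-aware processes 3 GB of address space
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003 /3GB" /fastdetect /3GB
; /PAE lets the 32-bit kernel address more than 4 GB of physical RAM
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003 /PAE" /fastdetect /PAE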
{ "language": "en", "url": "https://stackoverflow.com/questions/59098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the difference between the WPF TextBlock element and Label control? Visually both of the following snippets produce the same UI. So why are there 2 controls? Snippet1 <TextBlock>Name:</TextBlock> <TextBox Name="nameTextBox" /> Snippet2 <Label>Name:</Label> <TextBox Name="nameTextBox" /> (Well I am gonna answer this myself... though this is a useful tidbit I learnt today from Programming WPF) A: The two biggest reasons for the confusion regarding TextBlocks and Labels are Windows Forms and common sense. * *When you wanted to slap a small bit of text on your form in Windows Forms, you used a Label, so it follows (incorrectly) that you would do the same thing with a WPF Label. *Common sense would lead you to believe that a Label is lightweight and a TextBlock isn't, when the opposite is true. Note that you can put a TextBlock inside a Label. A: Label has an important focus handling responsibility. Its purpose is to allow you to place a caption with an access key. It has a Target property, which indicates the target of the access key. Like this... <Label Target="{Binding ElementName=nameTextBox}">_Name:</Label> <TextBox x:Name="nameTextBox" /> In the absence of the Target property, the Label control does nothing useful. You'll just hear a beep if you press the access key indicating 'unable to process request' A: With TextBlock we can easily have multi-line support I guess - using TextWrapping. Using Label in such cases, e.g. displaying a validation message, we need to use <AccessKey> tags, which is less straight-forward than TextBlock. On the other hand, using TextBlock does not allow us to set the BorderBrush property. So, to me, the two controls should be combined into a single full-featured text control. A: Label takes all kinds of data inputs like String, Number etc... TextBlock, as the name suggests, only accepts a Text string. A: The WPF Textblock inherits from FrameworkElement instead of deriving from System.Windows.Control like the Label Control. This means that the Textblock is much more lightweight. The downside of using a textblock is no support for Access/Accelerator Keys and there is no link to other controls as a target. When you want to display text by itself use the TextBlock. The benefit is a light, performant way to display text. When you want to associate text with another control like a TextBox use the Label control. The benefits are access keys and references to the target control. A: Label can be used as an alternative to TextBlock for situations where minimal text support is required such as the label for a control. Using Label can be advantageous because it requires even less resources (lighter weight) than a TextBlock.
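A small hedged XAML sketch pulling the points above together (the control names are arbitrary): the Label exists to carry an access key to a target control, while the TextBlock is the cheap choice for plain, wrappable text.

<StackPanel>
    <!-- pressing Alt+N moves focus to the TextBox -->
    <Label Content="_Name:" Target="{Binding ElementName=nameTextBox}" />
    <TextBox x:Name="nameTextBox" />
    <!-- lightweight, read-only text that wraps over several lines -->
    <TextBlock TextWrapping="Wrap"
               Text="A longer piece of text that does not need focus handling." />
</StackPanel>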
{ "language": "en", "url": "https://stackoverflow.com/questions/59099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "106" }
Q: Best way to write a conversion function Let's say that I'm writing a function to convert between temperature scales. I want to support at least Celsius, Fahrenheit, and Kelvin. Is it better to pass the source scale and target scale as separate parameters of the function, or some sort of combined parameter? Example 1 - separate parameters: function convertTemperature("celsius", "fahrenheit", 22) Example 2 - combined parameter: function convertTemperature("c-f", 22) The code inside the function is probably where it counts. With two parameters, the logic to determine what formula we're going to use is slightly more complicated, but a single parameter doesn't feel right somehow. Thoughts? A: Depends on the language. Generally, I'd use separate arguments with enums. If it's an object oriented language, then I'd recommend a temperature class, with the temperature stored internally however you like and then functions to output it in whatever units are needed: temp.celsius(); // returns the temperature of object temp in celsius A: When writing such designs, I like to think to myself, "If I needed to add an extra unit, what design would make it the easiest to do so?" Doing this, I come to the conclusion that enums would be easiest for the following reasons: 1) Adding new values is easy. 2) I avoid doing string comparison However, how do you write the conversion method? 3P2 is 6. So that means there are 6 different combinations of celsius, Fahrenheit, and kelvin. What if I wanted to add a new temperature format "foo"? That would mean 4P2, which is 12! Two more? 5P2 = 20 combinations. Three more? 6P2 = 30 combinations! You can quickly see how each additional modification requires more and more changes to the code. For this reason I don't do direct conversions! Instead, I do an intermediate conversion. I'd pick one temperature, say Kelvin. And initially, I'd convert to kelvin. I'd then convert kelvin to the desired temperature. Yes, it does result in an extra calculation. However, it makes scaling the code a ton easier. Adding a new temperature unit will always result in only two new modifications to the code. Easy. A: A few things: * *I'd use an enumerated type that a syntax checker or compiler can check rather than a string that can be mistyped. In Pseudo-PHP: define ('kCelsius', 0); define ('kFarenheit', 1); define ('kKelvin', 2); $a = ConvertTemperature(22, kCelsius, kFarenheit); Also, it seems more natural to me to place the thing you operate on, in this case the temperature to be converted, first. It gives a logical ordering to your parameters (convert -- what? from? to?) and thus helps with mnemonics. A: Your function will be much more robust if you use the first approach. If you need to add another scale, that's one more parameter value to handle. In the second approach, adding another scale means adding as many values as you already had scales on the list, times 2. (For example, to add K to C and F, you'd have to add K-C, K-F, C-K, and C-F.) A decent way to structure your program would be to first convert whatever comes in to an arbitrarily chosen intermediate scale, and then convert from that intermediate scale to the outgoing scale. A better way would be to have a little library of slopes and intercepts for the various scales, and just look up the numbers for the incoming and outgoing scales and do the calculation in one generic step. 
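A hedged sketch (C# used purely for illustration, all names invented) of the "convert through one common scale" and "slopes and intercepts" ideas above; adding a new scale then means adding exactly one table entry rather than a new pair of functions:

using System.Collections.Generic;

static class TemperatureTable
{
    // kelvin = value * slope + intercept
    static readonly Dictionary<string, double[]> ToKelvin = new Dictionary<string, double[]>
    {
        { "celsius",    new[] { 1.0,       273.15 } },
        { "fahrenheit", new[] { 5.0 / 9.0, 459.67 * 5.0 / 9.0 } },
        { "kelvin",     new[] { 1.0,       0.0 } }
    };

    public static double Convert(double value, string from, string to)
    {
        double[] f = ToKelvin[from], t = ToKelvin[to];
        double kelvin = value * f[0] + f[1];   // into the common scale
        return (kelvin - t[1]) / t[0];         // back out of it, by inverting the target mapping
    }
}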
A: In C# (and probably Java) it would be best to create a Temperature class that stores temperatures privately as Celcius (or whatever) and which has Celcius, Fahrenheit, and Kelvin properties that do all the conversions for you in their get and set statements? A: Go with the first option, but rather than allow literal strings (which are error prone), take constant values or an enumeration if your language supports it, like this: convertTemperature (TempScale.CELSIUS, TempScale.FAHRENHEIT, 22) A: Depends how many conversions you are going to have. I'd probably choose one parameter, given as an enum: Consider this expanded version of conversion. enum Conversion { CelsiusToFahrenheit, FahrenheitToCelsius, KilosToPounds } Convert(Conversion conversion, X from); You now have sane type safety at point of call - one cannot give correctly typed parameters that give an incorrect runtime result. Consider the alternative. enum Units { Pounds, Kilos, Celcius, Farenheight } Convert(Unit from, Unit to, X fromAmount); I can type safely call Convert(Pounds, Celcius, 5); But the result is meaningless, and you'll have to fail at runtime. Yes, I know you're only dealing with temperature at the moment, but the general concept still holds (I believe). A: I would make an enumeration out of the temperature types and pass in the 2 scale parameters. Something like (in C#): public void ConvertTemperature(TemperatureTypeEnum SourceTemp, TemperatureTypeEnum TargetTemp, decimal Temperature) {} A: I would choose Example 1 - separate parameters: function convertTemperature("celsius", "fahrenheit", 22) Otherwise within your function definition you would have to parse "c-f" into "celsius" and "fahrenheit" anyway to get the required conversion scales, which could get messy. If you're providing something like Google's search box to users, having handy shortcuts like "c-f" is nice for them. Underneath, though, I would convert "c-f" into "celsius" and "fahrenheit" in an outer function before calling convertTemperature() as above. A: In this case a single parameter looks totally obscure; the function converts temperature from one scale to another scale. IMO it's more natural to pass source and target scales as separate parameters. I definitely don't want to try to grasp the format of the first argument. A: I'm always on the lookout for ways to use objects to solve my programming problems. I hope this means that I'm more OO than when I was only using functions to solve problems, but that remains to be seen.
In C#: interface ITemperature { CelciusTemperature ToCelcius(); FarenheitTemperature ToFarenheit(); } struct FarenheitTemperature : ITemperature { public readonly int Value; public FarenheitTemperature(int value) { this.Value = value; } public FarenheitTemperature ToFarenheit() { return this; } public CelciusTemperature ToCelcius() { return new CelciusTemperature((this.Value - 32) * 5 / 9); } } struct CelciusTemperature { public readonly int Value; public CelciusTemperature(int value) { this.Value = value; } public CelciusTemperature ToCelcius() { return this; } public FarenheitTemperature ToFarenheit() { return new FarenheitTemperature(this.Value * 9 / 5 + 32); } } and some tests: // Freezing Debug.Assert(new FarenheitTemperature(32).ToCelcius().Equals(new CelciusTemperature(0))); Debug.Assert(new CelciusTemperature(0).ToFarenheit().Equals(new FarenheitTemperature(32))); // crossover Debug.Assert(new FarenheitTemperature(-40).ToCelcius().Equals(new CelciusTemperature(-40))); Debug.Assert(new CelciusTemperature(-40).ToFarenheit().Equals(new FarenheitTemperature(-40))); and an example of a bug that this approach avoids: CelciusTemperature theOutbackInAMidnightOilSong = new CelciusTemperature(45); FarenheitTemperature x = theOutbackInAMidnightOilSong; // ERROR: Cannot implicitly convert type 'CelciusTemperature' to 'FarenheitTemperature' Adding Kelvin conversions is left as an exercise. A: By the way, it doesn't have to be more work to implement the three-parameter version, as suggested in the question statement. These are all linear functions, so you can implement something like float LinearConvert(float in, float scale, float add, bool invert); where the last bool indicates if you want to do the forward transform or reverse it. Within your conversion technique, you can have a scale/add pair for X -> Kelvin. When you get a request to convert format X to Y, you can first run X -> Kelvin, then Kelvin -> Y by reversing the Y -> Kelvin process (by flipping the last bool to LinearConvert). This technique gives you something like 4 lines of real code in your convert function, and one piece of data for every type you need to convert between. A: Similar to what @Rob @wcm and @David explained... public class Temperature { private double celcius; public static Temperature FromFarenheit(double farenheit) { return new Temperature { Farhenheit = farenheit }; } public static Temperature FromCelcius(double celcius) { return new Temperature { Celcius = celcius }; } public static Temperature FromKelvin(double kelvin) { return new Temperature { Kelvin = kelvin }; } private double kelvinToCelcius(double kelvin) { return 1; // insert formula here } private double celciusToKelvin(double celcius) { return 1; // insert formula here } private double farhenheitToCelcius(double farhenheit) { return 1; // insert formula here } private double celciusToFarenheit(double kelvin) { return 1; // insert formula here } public double Kelvin { get { return celciusToKelvin(celcius); } set { celcius = kelvinToCelcius(value); } } public double Celcius { get { return celcius; } set { celcius = value; } } public double Farhenheit { get { return celciusToFarenheit(celcius); } set { celcius = farhenheitToCelcius(value); } } } A: I think I'd go whole hog one direction or another. 
You could write a mini-language that does any sort of conversion like units does: $ units 'tempF(-40)' tempC -40 Or use individual functions like the recent Convert::Temperature Perl module does: use Convert::Temperature; my $c = new Convert::Temperature(); my $res = $c->from_fahr_to_cel('59'); But that brings up an important point---does the language you are using already have conversion functions? If so, what coding convention do they use? So if the language is C, it would be best to follow the example of the atoi and strtod library functions (untested; note the floating-point constants, since an integer 5/9 would truncate to zero): double fahrtocel(double tempF){ return ((tempF - 32) * (5.0 / 9.0)); } double celtofahr(double tempC){ return ((9.0 / 5.0) * tempC + 32); } In writing this post, I ran across a very interesting post on using emacs to convert dates. The take-away for this topic is that it uses the one function-per-conversion style. Also, conversions can be very obscure. I tend to do date calculations using SQL because it seems unlikely there are many bugs in that code. In the future, I'm going to look into using emacs. A: Here is my take on this (using PHP): function Temperature($value, $input, $output) { $value = floatval($value); if (isset($input, $output) === true) { switch ($input) { case 'K': $value = $value - 273.15; break; // Kelvin case 'F': $value = ($value - 32) * (5 / 9); break; // Fahrenheit case 'R': $value = ($value - 491.67) * (5 / 9); break; // Rankine } switch ($output) { case 'K': $value = $value + 273.15; break; // Kelvin case 'F': $value = $value * (9 / 5) + 32; break; // Fahrenheit case 'R': $value = ($value + 273.15) * (9 / 5); break; // Rankine } } return $value; } Basically the $input value is converted to the standard Celsius scale and then converted back again to the $output scale - one function to rule them all. =) A: My vote is two parameters for conversion types, one for the value (as in your first example). I would use enums instead of string literals, however. A: Use enums, if your language allows it, for the unit specifications. I'd say the code inside would be easier with two. I'd have a table with pre-add, multiply, and post-add, and run the value through the item for one unit, and then through the item for the other unit in reverse. Basically converting the input temperature to a common base value inside, and then out to the other unit. This entire function would be table-driven. A: I wish there was some way to accept multiple answers. Based on everyone's recommendations, I think I will stick with the multiple parameters, changing the strings to enums/constants, and moving the value to be converted to the first position in the parameter list. Inside the function, I'll use Kelvin as a common middle ground. Previously I had written individual functions for each conversion and the overall convertTemperature() function was merely a wrapper with nested switch statements. I'm writing in both classic ASP and PHP, but I wanted to leave the question open to any language.
{ "language": "en", "url": "https://stackoverflow.com/questions/59102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Are you fluent in Unicode yet? Almost 5 years ago Joel Spolsky wrote this article, "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)". Like many, I read it carefully, realizing it was high time I got to grips with this "replacement for ASCII". Unfortunately, 5 years later I feel I have slipped back into a few bad habits in this area. Have you? I don't write many specifically international applications, however I have helped build many ASP.NET internet facing websites, so I guess that's not an excuse. So for my benefit (and I believe many others) can I get some input from people on the following: * *How to "get over" ASCII once and for all *Fundamental guidance when working with Unicode. *Recommended (recent) books and websites on Unicode (for developers). *Current state of Unicode (5 years after Joel's article) *Future directions. I must admit I have a .NET background and so would also be happy for information on Unicode in the .NET framework. Of course this shouldn't stop anyone with a differing background from commenting though. Update: See this related question also asked on StackOverflow previously. A: Since I read the Joel article and some other I18n articles I have always kept a close eye on my character encoding, and it actually works if you do it consistently. If you work in a company where it is standard to use UTF-8 and everybody knows this / does this it will work. Here are some interesting articles (besides Joel's article) on the subject: * *http://www.tbray.org/ongoing/When/200x/2003/04/06/Unicode *http://www.tbray.org/ongoing/When/200x/2003/04/26/UTF A quote from the first article, Tips for using Unicode: * *Embrace Unicode, don't fight it; it's probably the right thing to do, and if it weren't you'd probably have to anyhow. *Inside your software, store text as UTF-8 or UTF-16; that is to say, pick one of the two and stick with it. *Interchange data with the outside world using XML whenever possible; this makes a whole bunch of potential problems go away. *Try to make your application browser-based rather than write your own client; the browsers are getting really quite good at dealing with the texts of the world. *If you're using someone else's library code (and of course you are), assume its Unicode handling is broken until proved to be correct. *If you're doing search, try to hand the linguistic and character-handling problems off to someone who understands them. *Go off to Amazon or somewhere and buy the latest revision of the printed Unicode standard; it contains pretty well everything you need to know. *Spend some time poking around the Unicode web site and learning how the code charts work. *If you're going to have to do any serious work with Asian languages, go buy the O'Reilly book on the subject by Ken Lunde. *If you have a Macintosh, run out and grab Lord Pixel's Unicode Font Inspection tool. Totally cool. *If you're really going to have to get down and dirty with the data, go attend one of the twice-a-year Unicode conferences. All the experts go and if you don't know what you need to know, you'll be able to find someone there who knows. A: I spent a while working with search engine software - You wouldn't believe how many web sites serve up content with HTTP headers or meta tags which lie about the encoding of the pages. Often, you'll even get a document which contains both ISO-8859 characters and UTF-8 characters. 
Once you've battled through a few of those sorts of issues, you start taking the proper character encoding of data you produce really seriously. A: The .NET Framework uses Windows default encoding for storing strings, which turns out to be UTF-16. If you don't specify an encoding when you use most text I/O classes, you will write UTF-8 with no BOM and read by first checking for a BOM then assuming UTF-8 (I know for sure StreamReader and StreamWriter behave this way.) This is pretty safe for "dumb" text editors that won't understand a BOM but kind of cruddy for smarter ones that could display UTF-8 or the situation where you're actually writing characters outside the standard ASCII range. Normally this is invisible, but it can rear its head in interesting ways. Yesterday I was working with someone who was using XML serialization to serialize an object to a string using a StringWriter, and he couldn't figure out why the encoding was always UTF-16. Since a string in memory is going to be UTF-16 and that is enforced by .NET, that's the only thing the XML serialization framework could do. So, when I'm writing something that isn't just a throwaway tool, I specify a UTF-8 encoding with a BOM. Technically in .NET you will always be accidentally Unicode aware, but only if your user knows to detect your encoding as UTF-8. It makes me cry a little every time I see someone ask, "How do I get the bytes of a string?" and the suggested solution uses Encoding.ASCII.GetBytes() :( A: Rule of thumb: if you never munge or look inside a string and instead treat it strictly as a blob of data, you'll be much better off. Even doing something as simple as splitting words or lowercasing strings becomes tough if you want to do it "the Unicode way". And if you want to do it "the Unicode way", you'll need an awfully good library. This stuff is incredibly complex.
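A hedged C# illustration of the .NET points above (the file name is made up): specify UTF-8 with a BOM explicitly when writing, and get bytes via a Unicode-capable encoding rather than Encoding.ASCII.

using System.IO;
using System.Text;

// true => emit the UTF-8 byte order mark so smarter editors detect the encoding
var utf8WithBom = new UTF8Encoding(true);
using (var writer = new StreamWriter("output.txt", false, utf8WithBom))
{
    writer.WriteLine("Text outside the ASCII range survives the round trip");
}

byte[] bytes = Encoding.UTF8.GetBytes("some string"); // not Encoding.ASCII.GetBytes(...)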
{ "language": "en", "url": "https://stackoverflow.com/questions/59105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Can I convert the following code to use generics? I'm converting an application to use Java 1.5 and have found the following method: /** * Compare two Comparables, treat nulls as -infinity. * @param o1 * @param o2 * @return -1 if o1&lt;o2, 0 if o1==o2, 1 if o1&gt;o2 */ protected static int nullCompare(Comparable o1, Comparable o2) { if (o1 == null) { if (o2 == null) { return 0; } else { return -1; } } else if (o2 == null) { return 1; } else { return o1.compareTo(o2); } } Ideally I would like to make the method take two Comparables of the same type, is it possible to convert this and how? I thought the following would do the trick: protected static <T extends Comparable> int nullCompare(T o1, T o2) { but it has failed to get rid of a warning in IntelliJ "Unchecked call to 'compareTo(T)' as a member of raw type 'java.lang.Comparable'" on the line: return o1.compareTo(o2); A: Here's an odd case: static class A { ... } static class B extends A implements Comparable<A> { public int compareTo(A o) { return ...; } } Luckily code like the one above is rare, but nullCompare() will not support comparison of Bs unless it is stated that Comparable may apply to T or any superclass thereof: protected static <T extends Comparable<? super T>> int nullCompare(T o1, T o2) { Even though most people will never benefit from the above tweak, it may come in handy when designing APIs for exported libraries. A: Change it to: protected static <T extends Comparable<T>> int nullCompare(T o1, T o2) { You need that because Comparable is itself a generic type. A: Cannot edit so I have to post my answer. You need to declare nested type parameter since Comparable is generic. protected static <T extends Comparable<? super T>> int nullCompare(T o1, T o2) { Please note that Comparable< ? super T >, which makes more flexible. You will see the same method definition on Collections.sort public static <T extends Comparable<? super T>> void sort(List<T> list) { A: I'm not sure that genericizing this method makes sense. Currently the method works on any kind of Comparable; if you genericize it you will have to implement it (with exactly the same code) multiple times. Sometimes it is possible to compare two objects that don't have a common ancestor, and any generic version won't allow this. By adding generics you won't add any safety to the code; any problems of safety will occur in the call to compareTo. What I would suggest is simply suppressing the warning. It's not really warning you about anything useful. A: To make it even more general, you could even allow it to work for two different types. =P /** * Compare two Comparables, treat nulls as -infinity. * @param o1 * @param o2 * @return -1 if o1&lt;o2, 0 if o1==o2, 1 if o1&gt;o2 */ protected static <T> int nullCompare(Comparable<? super T> o1, T o2) { if (o1 == null) { if (o2 == null) { return 0; } else { return -1; } } else if (o2 == null) { return 1; } else { return o1.compareTo(o2); } }
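A short hedged usage sketch of the generic signature above, just to show that the call sites stay unchanged and nulls sort first:

Integer missing = null;
System.out.println(nullCompare(missing, 42));     // -1: null is treated as -infinity
System.out.println(nullCompare("apple", null));   // 1
System.out.println(nullCompare("same", "same"));  // 0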
{ "language": "en", "url": "https://stackoverflow.com/questions/59107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: VS 2005 Installer Project Version Number I am getting this error now that I hit version number 1.256.0: Error 4 Invalid product version '1.256.0'. Must be of format '##.##.####' The installer was fine with 1.255.0 but there's something about 256 (2^8) that it doesn't like. I found this stated on msdn.com: The Version property must be formatted as N.N.N, where each N represents at least one and no more than four digits. (http://msdn.microsoft.com/en-us/library/d3ywkte8(VS.80).aspx) Which would make me believe there is nothing wrong with 1.256.0 because it meets the rules stated above. Does anyone have any ideas on why this would be failing now? A: The link you reference says "This page is specific to Microsoft Visual Studio 2008/.NET Framework 3.5", but you're talking about VS2005. My guess: a 0-based range of 256 numbers ends at 255, therefore trying to use 256 exceeds that and perhaps they changed it for VS2008 Edit: I looked again and see that that link can be switched to talk about VS2005, and it gives the same answer. I'm still sticking to my 0-255 theory though. Wouldn't be the first time this week I came across something incorrect in MSDN docs. A: This article says there is a major and minor max of 255. http://msdn.microsoft.com/en-us/library/aa370859(VS.85).aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/59120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What GUI should I run with JUnit (similar to NUnit gui) What GUI should I use to run my JUnit tests, and how exactly do I do that? My entire background is in .NET, so I'm used to just firing up my NUnit gui and running my unit tests. If the lights are green, I'm clean. Now, I have to write some Java code and want to run something similar using JUnit. The JUnit documentation is nice and clear about adding the attributes necessary to create tests, but it's pretty lean on how to fire up a runner and see the results of those tests. A: JUnit stopped having graphical runners following the release of JUnit 4. If you do have an earlier version of JUnit you can use a graphical test runner by entering on the command line[1]: java junit.swingui.TestRunner [optional TestClass] With the optional test class the specified tests will run straight away. Without it you can enter the class into the GUI. The benefit of running your tests this way is that you don't have the overhead of an entire IDE (if you're not already running one). However, if you're already working in an IDE such as Eclipse, the integration is excellent and is a lot less hassle to get the test running. If you do have JUnit 4, and really don't want to use an IDE to run the tests, or want textual feedback, you can run the text UI test runner. In a similar vein as earlier, this can be done by entering on the command line[1]: java junit.textui.TestRunner [TestClass] Though in this case the TestClass is not optional, for obvious reasons. [1] assuming you're in the correct working directory and the classpath has been set up, which may be out of scope for this answer A: Eclipse is by far the best I've used. Couple JUnit with a code coverage plug-in and Eclipse will probably be the best unit-tester. A: There's a standalone JUnit runner that has a UI, but I recommend using one of the built-in test runners in the Java IDEs (Eclipse, NetBeans, and IntelliJ all have good ones). They all support JUnit, and most support TestNG as well. A: If you want a standalone test runner (not the built-in IDE one), then for JUnit 3 you can use * *junit.textui.TestRunner %your_class% - command line based runner *junit.swingui.TestRunner [%your_class%] - runner with user interface (swing-powered) For JUnit 4, the UI-powered runners were removed and so far I haven't found a convenient solution to run new JUnit 4 tests on the old swing-powered runner without additional libraries. But you can use JUnit 4 Extensions that provides a workaround to use junit.swingui.TestRunner. More here A: Why do you need a GUI runner? Can't you just run the tests from the IDE itself? In .Net we have TestDriven.net, in Java there must be something equivalent. You can check out IntelliJ IDEA, it has the unit testing support built-in.
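For completeness, a minimal hedged JUnit 4 example (class and method names invented); with junit.jar on the classpath it can be run without any GUI via java org.junit.runner.JUnitCore CalculatorTest, or through the IDE runners mentioned above:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {
    @Test
    public void additionWorks() {
        assertEquals(4, 2 + 2);  // a passing assertion keeps the runner's bar green
    }
}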
{ "language": "en", "url": "https://stackoverflow.com/questions/59128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What platforms JavaFX is/will be supported on? I have read about JavaFX, and like all new technologies I wanted to get my hands "dirty" with it. However, although it talks of multiplatform support, I can't find specifics on this. What platforms support a JavaFX application? All those with Java SE? ME? Does it depend upon the APIs in JavaFX that I use? A: JavaFX has three planned distributions. * *JavaFX Desktop will run on Windows, Mac, Linux, and Solaris at FCS and will require Java SE. Support for Linux and Solaris will be forthcoming. *JavaFX TV and JavaFX Mobile have no announced target platforms. Also unannounced is whether they will run on ME or SE, and, if ME, which profiles. One important platform distinction is that JavaFX Desktop will support Swing components while JavaFX Mobile will not (only scene graph for graphics). JavaFX TV is the least publicly concrete of the three at this time. A: From what I can see JavaFX is a whole new runtime and compiler, so it is not a subset of Java. Sun will support it on mobile phones and on the desktop. OS-wise it is currently released for Windows/Mac but Solaris/Linux are in the works. A: JavaFX is not a new runtime. It is the same JRE but a new language/compiler with a few new APIs to make it all work.... Using NetBeans, you can build applications on any platform. As of today, the APIs are beta. Classfiles produced by the compiler are JRE 6 compatible.
{ "language": "en", "url": "https://stackoverflow.com/questions/59129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What do I need to know to globalize an asp.net application? I'm writing an asp.net application that will need to be localized to several regions other than North America. What do I need to do to prepare for this globalization? What are your top 1 to 2 resources for learning how to write a world ready application. A: Learn about the System.Globalization namespace: System.Globalization Also, a good book is NET Internationalization: The Developer's Guide to Building Global Windows and Web Applications A: A couple of things that I've learned: * *Absolutely and brutally minimize the number of images you have that contain text. Doing so will make your life a billion percent easier since you won't have to get a new set of images for every friggin' language. *Be very wary of css positioning that relies on things always remaining the same size. If those things contain text, they will not remain the same size, and you will then need to go back and fix your designs. *If you use character types in your sql tables, make sure that any of those that might receive international input are unicode (nchar, nvarchar, ntext). For that matter, I would just standardize on using the unicode versions. *If you're building SQL queries dynamically, make sure that you include the N prefix before any quoted text if there's any chance that text might be unicode. If you end up putting garbage in a SQL table, check to see if that's there. *Make sure that all your web pages definitively state that they are in a unicode format. See Joel's article, mentioned above. *You're going to be using resource files a lot for this project. That's good - ASP.NET 2.0 has great support for such. You'll want to look into the App_LocalResources and App_GlobalResources folder as well as GetLocalResourceObject, GetGlobalResourceObject, and the concept of meta:resourceKey. Chapter 30 of Professional ASP.NET 2.0 has some great content regarding that. The 3.5 version of the book may well have good content there as well, but I don't own it. *Think about fonts. Many of the standard fonts you might want to use aren't unicode capable. I've always had luck with Arial Unicode MS, MS Gothic, MS Mincho. I'm not sure about how cross-platform these are, though. Also, note that not all fonts support all of the Unicode character definition. Again, test, test, test. *Start thinking now about how you're going to get translations into this system. Go talk to whoever is your translation vendor about how they want data passed back and forth for translation. Think about the fact that, through your local resource files, you will likely be repeating some commonly used strings through the system. Do you normalize those into global resource files, or do you have some sort of database layer where only one copy of each text used is generated. In our recent project, we used resource files which were generated from a database table that contained all the translations and the original, english version of the resource files. *Test. Generally speaking I will test in German, Polish, and an Asian language (Japanese, Chinese, Korean). German and Polish are wordy and nearly guaranteed to stretch text areas, Asian languages use an entirely different set of characters which tests your unicode support. A: Would be good to refresh a bit on Unicodes if you are targeting other cultures,languages. The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) A: This is a hard problem. 
I live in Canada, so multilingualism is a big issue. In all my years of doing software development, I've never seen a solution that I liked. I've seen a lot of solutions that worked, and got the job done, but they've always felt like a big kludge. I would go with @harriyott, and make sure that none of your strings are actually in code. A resource file works well for desktop applications. However, in ASP.NET, I'd recommend using the database. @John Christensen also has some good pointers. A: Make sure you're compiling with Code Analysis turned on, and pay attention to the Globalization warnings that it gives you. Keep data in an invariant format (CultureInfo.InvariantCulture) until you display it to the user (then use CultureInfo.CurrentCulture). A: I would seriously consider reading the following CodeProject article: Globalization and localization demystified in ASP.NET 2.0 It covers everything from Cultures and Locales, setting the thread's current culture, resource files, encodings, you name it! And of course it's loaded with pretty pictures and examples :-). Good luck! A: I would suggest: * *Put all strings in either the database or resource files. *Allow extra space for translated text, as some languages (e.g. German) are wordier.
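To make the invariant-versus-current-culture advice above concrete, here is a minimal C# sketch (the culture name and resource key are hypothetical, and the resource lookup assumes the App_GlobalResources setup mentioned earlier):

// Store and transmit values in a culture-independent form...
decimal price = 1234.56m;
string stored = price.ToString(System.Globalization.CultureInfo.InvariantCulture); // "1234.56"

// ...and only format them with the user's culture at display time.
System.Threading.Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo("de-DE");
System.Threading.Thread.CurrentThread.CurrentUICulture = new System.Globalization.CultureInfo("de-DE");
string display = price.ToString("C", System.Globalization.CultureInfo.CurrentCulture); // "1.234,56 €"

// Localized text comes from resource files rather than hard-coded literals, e.g. in a page:
// string welcome = (string)GetGlobalResourceObject("Labels", "WelcomeMessage");

The same split applies to dates and to parsing user input: parse with CurrentCulture, store with InvariantCulture.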
{ "language": "en", "url": "https://stackoverflow.com/questions/59130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Bug template in Bugzilla Is there any way to enforce a template in Bugzilla to guide users fill in bugs descriptions ? Actually, i'd like to put some markup texts in the bug description field and avoid the creation of custom fields. I've installed version 3.2rc1. A: Indeed, just check ../enter_bug.cgi?format=guided , which forms an example of the template feature. Half the work is already done for you. A: The mechansism described under 6.2.5 Particular Templates (under the section called bug/create/create.html.tmpl and bug/create/comment.txt.tmpl) works pretty well for us. Even though you say you don't want to create custom fields, adding some arbitrary HTML is easy enough.
{ "language": "en", "url": "https://stackoverflow.com/questions/59133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Subversion Client-Side application Which standalone Windows GUI application do you recommend for use for accessing a Subversion repository? Edit: A lot of people are mentioning Tortoise, however I am looking for an application not a shell extension. Also people are questioning my reasoning behind not wanting to use a shell extension. In our environment we rather interact with the repository (when not using an IDE plugin) through a management application and not navigate the files through Windows Explorer. A: You can try to use SmartSVN - https://www.smartsvn.com/ A: Standalone Clients For total stand alone Synchro SVN is a powerful and cross platform solution. It looks like the most native application on each of the platforms. The Subversion website includes a listing of other standalone SVN Clients (most are cross platform). [Copied list below from http://subversion.tigris.org/links.html#clients] eSvn - cross-platform QT-based GUI frontend to Subversion http://sourceforge.net/projects/esvn FSVS - fast subversion command-line client centered around software deployment http://fsvs.tigris.org/ KDESvn - A Subversion client for KDE http://www.alwins-world.de/wiki/programs/kdesvn QSvn - A cross-platform GUI Subversion client http://ar.oszine.de/projects/qsvn/ RapidSVN - A cross-platform GUI front-end for Subversion http://rapidsvn.tigris.org/ RSVN - Python script which allows multiple repository-side operations in a single, atomic transaction. https://opensvn.csie.org/traccgi/rsvn/trac.cgi/wiki SmartSVN - A cross-platform GUI client for Subversion (Not open source. Available in a free and a commercial version.) https://www.smartsvn.com/ Subcommander - A cross-platform Subversion GUI client including a visual text merge tool. http://subcommander.tigris.org/ SvnX - A Mac OS X Panther GUI client. http://www.lachoseinteractive.net/en/community/subversion/svnx/ Syncro SVN Client - Cross-platform graphical Subversion client. (Not open source. Free trial versions available for Mac OS X, Windows and Linux.) http://www.syncrosvnclient.com WorkBench - Cross platform software development GUI built on Subversion written in Python http://pysvn.tigris.org/ Versions - A GUI Subversion client for Mac OS X. (Not open source; requires commercial license.) http://www.versionsapp.com/ ZigVersion - a Subversion Interface for Mac OS X. Aims to design an interface around the typical workflows of programmers. (Note that this is not open source.) http://zigversion.com/ Integrated Clients TortoiseSVN is the best general use system [An integrated system is not standalone - Thanks Martin Kenny]. It integrates itself into Windows Explorer (You can use it in explorer or any shell dialog) so it works extremely well and gives you the full power of SVN. Ankhsvn is a good solution that integrates into Visual Studios (Except Express Editions). SVN Notifier monitors your repositories and will notify you when anything changes. It integrates with TortoiseSVN to show you diffs and commit logs. Very handy when working in a team environment. A: TortoiseSVN From their website: A Subversion client, implemented as a windows shell extension. TortoiseSVN is a really easy to use Revision control / version control / source control software for Windows. Since it's not an integration for a specific IDE you can use it with whatever development tools you like. TortoiseSVN is free to use. You don't need to get a loan or pay a full years salary to use it. A: Can you explain why TortoiseSVN doesn't work for you? 
That would help us figure out what you really need in an application. Combine TortoiseSVN with Windows Explorer and you've got a great tool, and then pick up VisualSVN if you want something to integrate with Visual Studio. A: As a shell extension, I guess it's not technically a stand-alone application, but +1 for TortoiseSVN, nevertheless. A: I'd recommend TortoiseSVN to get started with (basically, it adds SVN related contextual menus to Explorer), but it can be shockingly memory hungry. I generally use it when I need to, but also make use of the very clean and usable command line tools Subversion comes with and Subclipse as part of Eclipse. A: For a totally standalone client, Synchro SVN ($60) is one of the nicest looking and most full-featured ones. It is cross-platform (Win, Linux, OSX). A: The one and only TortoiseSVN! It is integrated in Windows Explorer, and you invoke it with a right click. All commands are under the TortoiseSVN menu, except for frequently used commands such as update, commit or diff (it's configurable). For some reason, the SVN properties are located in a tab in the Properties menu, not in the TortoiseSVN menu. It makes sense, sort of, but it took some time getting used to it. TortoiseSVN is excellent, but I only realised it was awesome when I moved to a Mac (where Tortoise is not available) and tried to find a decent tool. Nothing comes close. A: SmartSVN is nice if you want a client that doesn't integrate with Explorer and is instead a standalone app. (Although I think later versions offer an Explorer integration as well.) A: If you don't like shell extensions, TortoiseSVN can be used as an application through its handy automation interface: one executable with several command arguments. See the TortoiseSVN manual; a couple of example invocations are shown below. Each command raises a modal dialog for a specific task. A: I use PhpStorm from JetBrains. It can be used on Mac or Windows, and it has a built-in Subversion/Git/Mercurial tool. Though you do have to pay for it ($50), they have a 30-day fully functional trial. A: Memory and disk IO can be a problem with TSVNCache, which manages Tortoise's icon overlays. You can fix it by putting your checkouts in one or two directories and making the cache process only look at those directories, rather than your entire drive. See this link for instructions.
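Returning to the automation interface mentioned above: TortoiseSVN's dialogs are launched from a single executable, TortoiseProc.exe, driven by command-line switches. The working-copy paths below are placeholders, and the full switch list is in the TortoiseSVN manual:

TortoiseProc.exe /command:commit /path:"C:\work\project" /logmsg:"suggested log message"
TortoiseProc.exe /command:update /path:"C:\work\project"
TortoiseProc.exe /command:log /path:"C:\work\project\readme.txt"

Each of these pops the corresponding TortoiseSVN dialog for the given path, so you get the GUI without going through the Explorer context menu.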
{ "language": "en", "url": "https://stackoverflow.com/questions/59148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Set Google Chrome as the debugging browser in Visual Studio When I press F5 in Visual Studio 2008, I want Google Chrome launched as the browser that my ASP.NET app runs in. May I know how this can be done? A: Right click on an .aspx file and click "Browse with..." then select Chrome and click "Set as Default." You can select more than one browser in the list if you want. There's also this really great WoVS Default Browser Switcher Visual Studio extension. A: If you don't see the "Browse With..." option stop debugging first. =) A: * *Go to the visual studio toolbar and click on the dropdown next to CPU (where it says IIS Express in the screenshot). One of the choices should be "Browse With..." *Select a browser, e.g. Google Chrome, then click Set as Default *Click Browse or Cancel. A: To add something to this (cause I found it while searching on this problem, and my solution involved slightly more)... If you don't have a "Browse with..." option for .aspx files (as I didn't in a MVC application), the easiest solution is to add a dummy HTML file, and right-click it to set the option as described in the answer. You can remove the file afterward. The option is actually set in: C:\Documents and Settings[user]\Local Settings\Application Data\Microsoft\VisualStudio[version]\browser.xml However, if you modify the file directly while VS is running, VS will overwrite it with your previous option on next run. Also, if you edit the default in VS you won't have to worry about getting the schema right, so the work-around dummy file is probably the easiest way. A: in visual studio 2012 you can simply select the browser you want to debug with from the dropdown box placed just over the code editor A: For win7 chrome can be found at: C:\Users\[UserName]\AppData\Local\Google\Chrome\Application\chrome.exe For VS2017 click the little down arrow next to the run in debug/release mode button to find the "browse with..." option. A: For MVC developers, * *click on a folder in Solution Explorer (say, Controllers) *Select Browse With... *Select desired browser *(Optionally click ) set as Default A: Click on the arrow near by start button there you will get list of browser. Select the browser you want your application to be run with and click on "Set as Default" Click ok and you are done with this. A: For the new versions of Visual studio, click on the dropdown next to the dropdown of "IIS Express". Now simply choose Browse With and choose Google Chrome, and click Set as default. I hope it helps!! A: In case you are using Visual Studio 2019: To change default browser type defaults in home, then click browser. See this picture: Reference/Source
{ "language": "en", "url": "https://stackoverflow.com/questions/59154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "84" }
Q: How do you declare a Predicate Delegate inline? So I have an object which has some fields, doesn't really matter what. I have a generic list of these objects. List<MyObject> myObjects = new List<MyObject>(); myObjects.Add(myObject1); myObjects.Add(myObject2); myObjects.Add(myObject3); So I want to remove objects from my list based on some criteria. For instance, myObject.X >= 10. I would like to use the RemoveAll(Predicate<T> match) method to do this. I know I can define a delegate which can be passed into RemoveAll, but I would like to know how to define this inline with an anonymous delegate, instead of creating a bunch of delegate functions which are only used in one place. A: There are two options, an explicit delegate or a delegate disguised as a lambda construct: explicit delegate myObjects.RemoveAll(delegate (MyObject m) { return m.X >= 10; }); lambda myObjects.RemoveAll(m => m.X >= 10); Performance-wise both are equal. As a matter of fact, both language constructs generate the same IL when compiled. This is because C# 3.0 is basically an extension on C# 2.0, so it compiles to C# 2.0 constructs. A: The lambda C# 3.0 way: myObjects.RemoveAll(m => m.x >= 10); The anonymous delegate C# 2.0 way: myObjects.RemoveAll(delegate (MyObject m) { return m.x >= 10; }); And, for the VB guys, the VB 9.0 lambda way: myObjects.RemoveAll(Function(m) m.x >= 10) Unfortunately, VB doesn't support an anonymous delegate. A: //C# 2.0 RemoveAll(delegate(Foo o){ return o.X >= 10; }); or //C# 3.0 RemoveAll(o => o.X >= 10); or Predicate<Foo> matches = delegate(Foo o){ return o.X >= 10; }; //or Predicate<Foo> matches = o => o.X >= 10; RemoveAll(matches); A: Predicate is a delegate which takes a param and returns a boolean. We can do the same in the following ways: 1) Using an inline lambda expression RemoveAll(p => p.x > 2); 2) Using an anonymous function RemoveAll(delegate(myObject obj){ return obj.x >= 10; }); 3) Using a Predicate delegate Predicate<myObject> matches = new Predicate<myObject>(IsEmployeeInValid); RemoveAll(matches); Predicate<Foo> matches = delegate(Foo o){ return o.X >= 20; }; RemoveAll(matches); 4) Declaring a delegate explicitly and pointing it to a function public delegate bool IsInValidEmployee (Employee emp); IsInValidEmployee invalidEmployeeDelegate = new IsInValidEmployee(IsEmployeeInValid); myObjects.RemoveAll(myObject => invalidEmployeeDelegate(myObject)); // Actual function public static bool IsEmployeeInValid(Employee emp) { if (emp.Id > 0 ) return true; else return false; }
{ "language": "en", "url": "https://stackoverflow.com/questions/59166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: What's the difference between the inner workings of Java's JVM and .NET's CLR? What's the difference between the inner workings of Java's JVM and .NET's CLR? Perhaps a starting point would be, are they basically the same thing in their respective environments (Java > JVM > Machine code) (C# > CLR > IL). Update: Several people have alluded to the points I was trying to cover: * *Garbage Collection *Boxing/Unboxing *JIT debugging *Generics/Templates *Please feel free to suggest other good topics that differentiate the two. @George Mauer - this sounds very interesting: Already posted this once but here is a series of interviews with C# chief language designer Anders Hejlsberg. A: This should be a great thread. One of the biggest differences between the CLR and JVM is the CLR's native integration of generics. Java instead erases the generic types, so the JVM can only work with objects; with autoboxing of the objects it appears to support generics, but they are really pseudo generics. A: From here. I couldn't have said it better (Well, with the exception of a flame war, this is a flameless place :-) ). Hello, Responding to your question seems fraught with peril by starting a flame war, so I'll proceed cautiously. There are a number of fundamental technical similarities between the Java Runtime and the Common Language Runtime, including garbage collected memory, an intermediate language (Microsoft IL versus Java ByteCode), core system libraries, and support for fairly high level languages, code security, and deployment. However, each of these 'similar' areas also has a number of sizable and small differences, and it's beyond the scope of a simple Forum post to describe most of them. I would suggest asking a more targeted question about any of the various runtime features and component areas (e.g. memory management, compilation, system libraries, security, etc.) and then we can provide a more targeted response (e.g. a blog, a technical article, or some books). A: One essential difference is that the JVM is portable across platforms and runs on Linux, Macintosh, and many cell phones and embedded devices. The CLR runs on Microsoft supported platforms, with the Mono project providing partial support of older versions of the CLR on a few more. Internally this means the JVM's performance will vary on those different platforms based on capabilities provided by the platforms themselves. A: The CLR and JVM have goals and philosophies that differ more than you might think. In general, the JVM aims to optimize more dynamic, higher-level code while the CLR gives you more low-level tools to do these kinds of optimizations yourself. A good example is stack allocation. On the CLR you have explicit stack allocation of custom value types. On the JVM the only custom types are reference types, but the JVM can convert heap allocations to stack allocations in certain circumstances through Escape Analysis. Another example: in Java, methods are virtual by default. In C# at least, they are not. It is much more difficult to optimize virtual method calls because the code that gets executed at a given call site cannot be determined statically. Under the hood, their execution systems are quite different. Most JVMs (in particular, Hotspot) start out with a bytecode interpreter and only JIT-compile parts of the code that are executed heavily, e.g. tight loops. They can also re-compile these over and over each time using execution statistics collected from previous runs to drive optimizations.
This allows more optimization effort to be applied to the parts of the program that need it most. This is called adaptive optimization. The CLR compiles everything up-front only once. It does fewer optimizations, both because it has more code to compile and so has to be fast, and because it doesn't have any statistics of the actual execution paths taken to feed into its optimizations. This approach does have the very significant advantage of allowing you to cache compilation results across processes, which the CLR does but the JVM does not. A large percentage of the Hotspot JVM code is dedicated to these adaptive optimizations and they are what put Java in the same performance ballpark as native code for most general purpose computation in the early 2000s. They are also what makes the JVM a decent target for dynamic languages. I'm excluding here the more recent developments of the Dynamic Languages Runtime and invokedynamic as I don't know enough about the DLR. A: Miguel de Icaza mentions here: Seasoned industry programmers will notice that the above is very much like Java and the Java VM. They are right, the above is just like Java. The CIL has one feature not found in Java though: it is a byte code representation that is powerful enough to be used as a target for many languages: from C++, C, Fortran and Eiffel to Lisp and Haskell including things like Java, C#, JavaScript and Visual Basic in the mix. I wish I had the time to go into more detail, but for the sake of this argument, the above will suffice. The comments go into some details, though, like tail call optimizations. A lot has changed since 2002 though - both the CLR and the JVM now have multiple languages targeting them. But nonetheless worth a read. A: As Vinko said, the full details are way beyond the scope of a forum post. The differences/similarities boil down to this: They are both a runtime environment "sandbox" that includes a "just-in-time" compiler to translate program instructions in an intermediate language (MSIL or ByteCode) to native machine code and provide automatic memory management (garbage collection). Sitting on top of the respective runtime environments are a set of class libraries that provide higher level abstractions to developers to simplify development tasks. The internals of how those runtime environments are actually implemented are, for the most part, proprietary to Microsoft and Sun. The algorithms used by the garbage collection systems, for example, while probably similar in technical functionality, are different in implementation. A: As far as I know, the .NET CLR still has much more flexible and powerful Code Access Security built into the runtime, allowing much finer grained permissions and execution policy. A: There are differences in garbage collection as well. The JVM uses a copying collector and mark-and-sweep; .NET uses a copying collector and mark-and-compact (much harder to implement). Also the type erasure mentioned by Flyswat is important. The JVM doesn't have a clue about generics and everything is an object, with the associated perf. penalty of boxing and unboxing. Also reflection won't give you generic information. The CLR supports generics natively.
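To make two of the points above concrete (the explicit value-type stack allocation and the virtual-by-default difference), here is a small C# sketch, illustrative only and not tied to any particular runtime version:

// A custom value type: instances live on the stack (or inline in their container), no GC involvement.
struct Point
{
    public double X;
    public double Y;
}

class Shape
{
    // Non-virtual by default in C#: the call can be bound statically and inlined by the JIT.
    public double Area() { return 0.0; }

    // Virtual dispatch is opt-in: resolved through the vtable at run time.
    public virtual string Describe() { return "shape"; }
}

class Demo
{
    static void Main()
    {
        Point p;                 // stack allocation of a custom value type
        p.X = 1.0;
        p.Y = 2.0;
        System.Console.WriteLine(p.X + p.Y);

        Shape s = new Shape();   // reference type: heap allocated, garbage collected
        System.Console.WriteLine(s.Area());
        System.Console.WriteLine(s.Describe());
    }
}

The equivalent Java class would have both methods virtual unless declared final, and Point would have to be a reference type unless the JIT's escape analysis happened to keep it off the heap.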
{ "language": "en", "url": "https://stackoverflow.com/questions/59175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: How do I disable validation in Web Data Administrator? I'm trying to run some queries to get rid of XSS in our database using Web Data Administrator but I keep running into this Potentially Dangerous Request crap. How do I disable validation of the query in Web Data Administrator? A: Go into the install directory of web data admin, usually: C:\Program Files\Microsoft SQL Server Tools\Microsoft SQL Web Data Administrator Then in the "Web" folder open the file "QueryDatabase.aspx" and edit the following line: <%@ Page language="c#" Codebehind="QueryDatabase.aspx.cs" AutoEventWireup="false" Inherits="SqlWebAdmin.query" %> Add ValidateRequest="false" to the end of it like so: <%@ Page language="c#" Codebehind="QueryDatabase.aspx.cs" AutoEventWireup="false" Inherits="SqlWebAdmin.query" ValidateRequest="false" %> NOTE: THIS IS POTENTIALLY DANGEROUS!! Be Careful!
{ "language": "en", "url": "https://stackoverflow.com/questions/59180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: WCF Service support file jsdebug fails to load I have a WCF service that gets called from client side JavaScript. The call fails with a Service is null JavaScript error. WebDevelopment helper trace shows that the calls to load the jsdebug support file results in a 404 (file not found) error. Restarting IIS or clearing out the Temp ASP.Net files or setting batch="false" on the compilation tag in web.config does not resolve the problem From the browser https://Myserver/MyApp/Services/MyService.svc displays the service metadata however https://Myserver/MyApp/Services/MyService.svc/jsdebug results in a 404. The issue seems to be with the https protocol. With http /jsdebug downloads the supporting JS file. Any ideas? TIA A: Figured it out! Here is the services configuration section from web.config Look at the bindingConfiguration attribute on the endpoint. The value "webBinding" points to the binding name="webBinding" tag in the bindings and that is what tells the service to use Transport level security it HTTPS. In my case the attribute value was empty causing the webservice request to the /js or /jsdebug file over HTTPS to fail and throw a 404 error. <services> <service name="MyService"> <endpoint address="" behaviorConfiguration="MyServiceAspNetAjaxBehavior" binding="webHttpBinding" bindingConfiguration="webBinding" contract="Services.MyService" /> </service> </services> <bindings> <webHttpBinding> <binding name="webBinding"> <security mode="Transport"> </security> </binding> </webHttpBinding> </bindings> Note that the bindingConfiguration attribute should be empty ("") if the service is accessed via http instead of https (when testing on local machine with no certs) Hope this helps someone. A: If you still get the same error after all your possible work done. Just add a "AJAX Enabled WCF-Service". A: For me the issue was the following; we added MVC to a solution with routing. Our WCF services were not being ignored. I resolved this by adding the following rule (where "WCF" is the folder we keep our services in). routes.IgnoreRoute("WCF/{*pathInfo}"); Hope that saves somebody a few hours.
{ "language": "en", "url": "https://stackoverflow.com/questions/59181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the best way to keep an asp:button from displaying its URL on the status bar? What is the best way to keep an asp:button from displaying its URL on the status bar of the browser? The button is currently defined like this: <asp:button id="btnFind" runat="server" Text="Find Info" onclick="btnFind_Click"> </asp:button> Update: This appears to be specific to IE7; IE6 and FF do not show the URL in the status bar. A: I use FF so never noticed this, but the link does in fact appear in the status bar in IE.. I don't think you can overwrite it :( I initially thought maybe setting the ToolTip (a la "title") property might do it.. Seems it does not.. Looking at the source, what appears is nowhere to be found, so I would say this is a browser issue, I don't think you can do anything in code.. :( Update Yeah, looks like IE always posts whatever the form action is.. Can't see a way to override it, as yet.. Perhaps try explicitly setting it via JS? Update II Done some more Googling. Don't think there really is a "nice" way of doing it.. Unless you remove the form altogether and post data some other way.. Is it really worth that much? Generally this just tends to be the page name? A: I don't see a link, I see this: javascript:__doPostBack('btn',''); EDIT: Sorry, was looking at a LinkButton, not an ASP:Button. The ASP:Button shows the form's ACTION element as stated. But, if you are trying to hide the DoPostBack call, the only way to do that is to directly manipulate window.status with JavaScript. The downside is most browsers don't allow this anymore. To do that, in your Page_Load add: btnFind.Attributes.Add("onmouseover","window.status = '';"); btnFind.Attributes.Add("onmouseout","window.status = '';");
{ "language": "en", "url": "https://stackoverflow.com/questions/59182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SharePoint 2007 with MS Office 2007 footers We had a need for a document management solution and were hoping SharePoint 2007 would satisfy our needs. We felt our needs were relatively simple. We needed to manage versioning, have searching capabilities, and have an approval workflow. SharePoint handled these three aspects great out of the box. However, we also require that the footer on the Office 2007 (Word, Excel, and PowerPoint) documents reflect the document version, last person to modify, and last modification date. These things can be done with Office automation, but we have yet to find a complete solution. We first tried to do it on the checking-in and checked-in events and followed this path for a while; however, the complication we ran into was that after we made the changes to the document we had no way of preventing the save from updating the version number. This resulted in something similar to this: Document checked-in – the document version should be v0.1, however it is v0.2 because we save the document after the footer is replaced. If we look in the document history, there are 2 separate versions: v0.1 does not have the footer; v0.2 has the footer but it says v0.1, as that is the version the document was when it was replaced. This is an unacceptable solution for us as we want the process to be completely handled on the user side so they would have full control to revert back to a version where the footer would be incorrect and not contain the correct data. When we attempted to create a custom approval/check-in workflow we found that the same problem was present. The footer is necessary so that hard-copies can be traced back to their electronic counterpart. Another solution that was proposed to us was to build plugins for Office that would handle the replacement of the footer. This is inadequate for our needs as it requires a client side deployment of our plugins, which is undesirable to our clients. What we are looking for is a clean solution to this problem. A: Here is a blog post which seems to be exactly the solution to your problem. Basically they create a custom field in the document library and use event receivers to keep the current version of the document in this field. The "trick" is that on the client side this custom field shows up as a property of the document, the value of which you can easily embed into the document's contents. I'm not sure why changing the field won't increase the version of the document, but I guess it is because you're only changing metadata, not the actual document. They do use a little VBA script which runs on the client side, but it doesn't require any client side deployment as it is downloaded with the document. However, I'm not sure if any security setting changes on the client side may be needed to allow the script to run. A: Does this information need to be in the footer? A lot of the information is available within the Office 2007 application. If you click on the round button in the upper left, and select "Server", you can view the version history; a lot of the other properties are available by clicking the round button and opening the "Prepare" menu, and selecting Properties. If this information must be displayed in the document footer I would investigate creating a custom Information Management Policy. This may be a good place to start.
{ "language": "en", "url": "https://stackoverflow.com/questions/59186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Do I need to copy the .compiled files to the production server? I'm using a deploy project to deploy my ASP.NET web application. When I build the deploy project, all the .compiled files are re-created. Do I need to FTP them to the production web server? If I make a small change, do I need to copy the whole web site again? A: From my own research, the .compiled files must be copied to the production server, but they do not need to be copied every time. From Rick Strahl's excellent blog: The output from the merge utility can combine all markup and CodeBeside code into a single assembly, but you will still end up with the .compiled files which are required for ASP.NET to associate the page requests with a specific class contained in the assembly. However, because the file names generated are fixed you don’t need to update these files unless you add or remove pages. In effect this means that in most situations you can simply update the single assembly to update your Web. Source A: You can get rid of the .compiled files by using the aspnet_merge tool with the -r option, which removes the .compiled files for the main code assembly (code in the App_Code folder). Do not use this option if your application contains an explicit type reference to the main code assembly. A: There's nothing special about .compiled files: it's just the actual file with a .compiled extension on the end so that nothing happens if you accidentally double-click it. But if you're seeing .compiled files, you're publishing your app in such a way that it expects to be formally installed - it's not enough to just copy things to production. You have to run the installer program too. If this is an app you know is already deployed, that seems a bit unnecessary.
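For reference, a typical aspnet_merge invocation on a precompiled site output folder looks something like the line below; the folder path and assembly name are placeholders, so check the tool's documentation for the complete switch list:

aspnet_merge.exe "C:\Deploy\PrecompiledSite" -o MyWebApp.Merged -r

Here -o names the single merged assembly and -r removes the .compiled files for the main code assembly, as described in the answer above.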
{ "language": "en", "url": "https://stackoverflow.com/questions/59191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Developing drivers with no info How does the open-source/free software community develop drivers for products that offer no documentation? A: How do you reverse engineer something? * *You observe the input and output, and develop a set of rules or models that describe the operation of the object. Example: Let's say you want to develop a USB camera driver. The "black box" is the software driver. * *Develop hooks into the OS and/or driver so you can see the inputs and outputs of the driver *Generate typical inputs, and record the outputs *Analyze the outputs and synthesize a model that describes the relationship between the input and output *Test the model - put it in place of the black box driver, and run your tests *If it does everything you need, you're done, if not rinse and repeat Note that this is just a regular problem solving/scientific process. For instance, weather forecasters do the same thing - they observe the weather, test the current conditions against the model, which predicts what will happen over the next few days, and then compare the model's output to reality. When it doesn't match they go back and adjust the model. This method is slightly safer (legally) than clean room reverse engineering, where someone actually decompiles the code, or disassembles the product, analyzes it thoroughly, and makes a model based on what they saw. Then the model (AND NOTHING ELSE) is passed to the developers replicating the functionality of the product. The engineer who took the original apart, however, cannot participate because he might bring copyrighted portions of the code/design and inadvertently put them in the new code. If you never disassemble or decompile the product, though, you should be in legally safe waters - the only problem left is that of patents. -Adam A: Usually by reverse engineering the code. There might be legal issues in some countries, though. * *Reverse Engineering *Reverse engineering Windows USB device drivers for the purpose of creating compatible device drivers for Linux *Nvidia cracks down on third party driver development A: This is a pretty vague question, but I would say reverse engineering. How they go about that is dependent on what kind of device it is and what is available for it. In many cases the device may have a similar core chipset to another device that can be modified to work.
{ "language": "en", "url": "https://stackoverflow.com/questions/59194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How are Mocks meant to be used? When I originally was introduced to Mocks I felt the primary purpose was to mock up objects that come from external sources of data. This way I did not have to maintain an automated unit testing test database, I could just fake it. But now I am starting to think of it differently. I am wondering if Mocks are more effective used to completely isolate the tested method from anything outside of itself. The image that keeps coming to mind is the backdrop you use when painting. You want to keep the paint from getting all over everything. I am only testing that method, and I only want to know how it reacts to these faked up external factors? It seems incredibly tedious to do it this way but the advantage I am seeing is when the test fails it is because it is screwed up and not 16 layers down. But now I have to have 16 tests to get the same testing coverage because each piece would be tested in isolation. Plus each test becomes more complicated and more deeply tied to the method it is testing. It feels right to me but it also seems brutal so I kind of want to know what others think. A: I recommend you take a look at Martin Fowler's article Mocks Aren't Stubs for a more authoritative treatment of Mocks than I can give you. The purpose of mocks is to unit test your code in isolation of dependencies so you can truly test a piece of code at the "unit" level. The code under test is the real deal, and every other piece of code it relies on (via parameters or dependency injection, etc) is a "Mock" (an empty implementation that always returns expected values when one of its methods is called.) Mocks may seem tedious at first, but they make Unit Testing far easier and more robust once you get the hang of using them. Most languages have Mock libraries which make mocking relatively trivial. If you are using Java, I'll recommend my personal favorite: EasyMock. Let me finish with this thought: you need integration tests too, but having a good volume of unit tests helps you find out which component contains a bug, when one exists. A: Yes, I agree. I see mocking as sometimes painful, but often necessary, for your tests to truly become unit tests, i.e. only the smallest unit that you can make your test concerned with is under test. This allows you to eliminate any other factors that could potentially affect the outcome of the test. You do end up with a lot more small tests, but it becomes so much easier to work out where a problem is with your code. A: Don't go down the dark path Master Luke. :) Don't mock everything. You could but you shouldn't... here's why. * *If you continue to test each method in isolation, you have surprises and work cut out for you when you bring them all together ala the BIG BANG. We build objects so that they can work together to solve a bigger problem.. By themselves they are insignificant. You need to know if all the collaborators are working as expected. *Mocks make tests brittle by introducing duplication - Yes I know that sounds alarming. For every mock expect you setup, there are n places where your method signature exists. The actual code and your mock expectations (in multiple tests). Changing actual code is easier... updating all the mock expectations is tedious. *Your test is now privy to insider implementation information. So your test depends on how you chose to implement the solution... bad. Tests should be a independent spec that can be met by multiple solutions. 
I should have the freedom to just press delete on a block of code and reimplement without having to rewrite the test suite.. coz the requirements still stay the same. To close, I'll say "If it quacks like a duck, walks like a duck, then it probably is a duck" - If it feels wrong.. it probably is. *Use mocks to abstract out problem children like IO operations, databases, third party components and the like.. Like salt, some of it is necessary.. too much and :x * This is the holy war of State based vs Iteraction based testing.. Googling will give you deeper insight. Clarification: I'm hitting some resistance w.r.t. integration tests here :) So to clarify my stand.. * *Mocks do not figure in the 'Acceptance tests'/Integration realm. You'll only find them in the Unit Testing world.. and that is my focus here. *Acceptance tests are different and are very much needed - not belittling them. But Unit tests and Acceptance tests are different and should be kept different. *All collaborators within a component or package do not need to be isolated from each other.. Like micro-optimization that is Overkill. They exist to solve a problem together.. cohesion. A: My philosophy is that you should write testable code to fit the tests, not write tests to fit the code. As for complexity, my opinion is that tests should be simple to write, simply because you write more tests if they are. I might agree that could be a good idea if the classes you're mocking doesn't have a test suite, because if they did have a proper test suite, you would know where the problem is without isolation. Most of them time I've had use for mock objects is when the code I'm writing tests for is so tightly coupled (read: bad design), that I have to write mock objects when classes they depend on is not available. Sure there are valid uses for mock objects, but if your code requires their usage, I would take another look at the design. A: Yes, that is the downside of testing with mocks. There is a lot of work that you need to put in that it feels brutal. But that is the essence of unit testing. How can you test something in isolation if you don't mock external resources? On the other hand, you're mocking away slow functionality (such as databases and i/o operations). If the tests run faster then that will keep programmers happy. There is nothing much more painful than waiting for really slow tests, that take more than 10 seconds to finish running, while you're trying to implement one feature. If every developer in your project spent time writing unit tests, then those 16 layers (of indirection) wouldn't be that much of a problem. Hopefully you should have that test coverage from the beginning, right? :) Also, don't forget to write a function/integration test between objects in collaboration. Or else you might miss something out. These tests won't need to be run often, but are still important. A: On one scale, yes, mocks are meant to be used to simulate external data sources such as a database or a web service. On a more finely grained scale however if you're designing loosely coupled code then you can draw lines throughout your code almost arbitrarily as to what might at any point be an 'outside system'. 
Take a project I'm working on currently: When someone attempts to check in, the CheckInUi sends a CheckInInfo object to a CheckInMediator object which validates it using a CheckInValidator, then if it is ok, it fills a domain object named Transaction with CheckInInfo using CheckInInfoAdapter then passes the Transaction to an instance of ITransactionDao.SaveTransaction() for persistence. I am right now writing some automated integration tests and obviously the CheckInUi and ITransactionDao are windows unto external systems and they're the ones which should be mocked. However, whose to say that at some point CheckInValidator won't be making a call to a web service? That is why when you write unit tests you assume that everything other than the specific functionality of your class is an external system. Therefore in my unit test of CheckInMediator I mock out all the objects that it talks to. EDIT: Gishu is technically correct, not everything needs to be mocked, I don't for example mock CheckInInfo since it is simply a container for data. However anything that you could ever see as an external service (and it is almost anything that transforms data or has side-effects) should be mocked. An analogy that I like is to think of a properly loosely coupled design as a field with people standing around it playing a game of catch. When someone is passed the ball he might throw a completely different ball to the next person, he might even throw a multiple balls in succession to different people or throw a ball and wait to receive it back before throwing it to yet another person. It is a strange game. Now as their coach and manager, you of course want to check how your team works as a whole so you have team practice (integration tests) but you also have each player practice on his own against backstops and ball-pitching machines (unit tests with mocks). The only piece that this picture is missing is mock expectations and so we have our balls smeared with black tar so they stain the backstop when they hit it. Each backstop has a 'target area' that the person is aiming for and if at the end of a practice run there is no black mark within the target area you know that something is wrong and the person needs his technique tuned. Really take the time to learn it properly, the day I understood Mocks was a huge a-ha moment. Combine it with an inversion of control container and I'm never going back. On a side note, one of our IT people just came in and gave me a free laptop! A: As someone said before, if you mock everything to isolate more granular than the class you are testing, you give up enforcing cohesion in you code that is under test. Keep in mind that mocking has a fundamental advantage, behavior verification. This is something that stubs don't provide and is the other reason that makes the test more brittle (but can improve code coverage). A: Mocks were invented in part to answer the question: How would you unit test objects if they had no getters or setters? These days, recommended practice is to mock roles not objects. Use Mocks as a design tool to talk about collaboration and separation of responsibilities, and not as "smart stubs". A: Mock objects are 1) often used as a means to isolate the code under test, BUT 2) as keithb already pointed out, are important to "focus on the relationships between collaborating objects". This article gives some insights and history related to the subject: Responsibility Driven Design with Mock Objects.
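For readers who have not seen this in practice, here is a minimal hand-rolled C# example (no mocking library; all type names are invented for illustration, loosely echoing the check-in scenario above). The class under test depends only on an interface, so the test can substitute a recording fake and verify the interaction without touching a real database.

public interface ITransactionDao
{
    void SaveTransaction(string transaction);
}

public class CheckInMediator
{
    private readonly ITransactionDao _dao;
    public CheckInMediator(ITransactionDao dao) { _dao = dao; }

    public bool CheckIn(string info)
    {
        if (string.IsNullOrEmpty(info)) return false;   // stands in for the validator collaborator
        _dao.SaveTransaction(info.Trim());
        return true;
    }
}

// A hand-written mock that records the interaction instead of hitting a database.
public class MockTransactionDao : ITransactionDao
{
    public string Saved;
    public int CallCount;
    public void SaveTransaction(string transaction) { Saved = transaction; CallCount++; }
}

public class CheckInMediatorTests
{
    public static void Run()
    {
        var mock = new MockTransactionDao();
        var mediator = new CheckInMediator(mock);

        bool ok = mediator.CheckIn(" ticket-42 ");

        // Behaviour verification: the collaborator was called exactly once with the prepared data.
        System.Diagnostics.Debug.Assert(ok);
        System.Diagnostics.Debug.Assert(mock.CallCount == 1);
        System.Diagnostics.Debug.Assert(mock.Saved == "ticket-42");
    }
}

A mocking framework mostly automates writing the MockTransactionDao class and the expectation checks; the principle is the same.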
{ "language": "en", "url": "https://stackoverflow.com/questions/59195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: How can I tab across a ButtonBar component in Flex? I have a button bar in Flex along with several other input controls. I have set the tabIndex property for each control and all goes well until I tab to the ButtonBar. The ButtonBar has 3 buttons, but tabbing to it, only the first button gets focus; tab again and the focus goes back to the top control... How can I make tabbing go through ALL buttons in a Flex ButtonBar? Is there a way to do this or do I need to create individual buttons for this? This seems like a possible bug to me... A: The component is written so the user must press the left/right arrow keys when focus is within the bar to traverse the buttons--this is a fairly standard GUI behavior (you also see this in other places like radio button groups). If you look into the SDK source for ButtonBar, you can see where they've explicitly disabled tab focus for each child button as it's created: override protected function createNavItem( label:String, icon:Class = null):IFlexDisplayObject { var newButton:Button = Button(navItemFactory.newInstance()); // Set tabEnabled to false so individual buttons don't get focus. newButton.focusEnabled = false; ... If you really want to change this behavior, you can make a subclass to do it, something like this: package { import mx.controls.Button; import mx.controls.ButtonBar; import mx.core.IFlexDisplayObject; public class FocusableButtonBar extends ButtonBar { public function FocusableButtonBar() { super(); this.focusEnabled = false; } override protected function createNavItem( label:String, icon:Class=null):IFlexDisplayObject { var btn:Button = Button(super.createNavItem(label, icon)); btn.focusEnabled = true; return btn; } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/59196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Data Encryption A database that stores a lot of credit card information is an inevitable part of the system we have just completed. What I want though is ultimate security of the card numbers whereby we setup a mechanism to encrypt and decrypt but of ourselves cannot decrypt any given number. What I am after is a way to secure this information even down at the database level so no one can go in and produce a file of card numbers. How have others overcome this issue? What is the 'Standard' approach to this? As for usage of the data well the links are all private and secure and no transmission of the card number is performed except when a record is created and that is encrypted so I am not worried about the front end just the back end. Well the database is ORACLE so I have PL/SQL and Java to play with. A: There's no shortage of processors willing to store your CC info and exchange it for a token with which you can bill against the stored number. That gets you out of PCI compliance, but still allows on demand billing. Depending on why you need to store the CC, that may be a better alternative. Most companies refer to this as something like "Customer Profile Management", and are actually pretty reasonable on fees. A few providers I know of (in no particular order): * *Authorize.NET Customer Information Manager *TrustCommerce Citadel *BrainTree A: Unless you are a payment processor you don't really need to store any kind of CC information. Review your requirements, there really is not many cases where you need to store CC information A: Don't store the credit card numbers, store a hash instead. When you need to verify if a new number matches a stored number, take a hash of the new number and compare it to the stored hash. If they match, the number is (in theory) the same. Alternatively, you could encrypt the data by getting the user who enters the card number to enter a pass phrase; you'd use this as an encryption/decryption key. However, anyone with access to your database and sourcecode (ie. you and your team) will find it trivial to decrypt that data (ie. modify the live code so that it emails any decryption keys entered to a disposable Hotmail account, etc). A: If you are storing the credit card information because you don't want the user to have to re-enter it then hashing of any form isn't going to help. When do you need to act on the credit card number? You could store the credit card numbers in a more secure database, and in the main db just store enough information to show to the user, and a reference to the card. The backend system can be much more locked down and use the actual credit card info just for order processing. You could encrypt these numbers by some master password if you like, but the password would have to be known by the code that needs to get the numbers. Yes, you have only moved the problem around somewhat, but a lot of security is more about reducing the attack footprint rather than eliminating it. If you want to eliminate it then don't store the credit card number anywhere! A: If you're using Oracle you might be interested in Transparent Data Encryption. Only available with an Enterprise license though. Oracle also has utilities for encryption - decryption, for example the DBMS_OBFUSCATION_TOOLKIT. As for "Standards", the proper standard you are interested in is the PCI DSS standard which describes which measures need to be taken to protect sensitive credit card information. 
A: For an e-commerce type use case (think Amazon 1-Click), you could encrypt the CC (or key) with the user's existing strong password. Assuming you only store a hash of the password, only the user (or a rainbow table - but it'd have to be run on each user, and would not work unless it came up with the same password, not just one that hashed the same) can decrypt it. You'd have to take some care to re-encrypt the data when a password changes, and the data would be worthless (and need to be re-entered by the user) if they forgot their password - but, if the payments are user-initiated, then it'd work nicely. A: It would be helpful to know the DB server and language/platform types so we could get more specific, but I would be looking into SHA. A: I'd symmetrically encrypt (AES) a secure salted hash (SHA-256 + salt). The salted hash would be enough with a big salt, but the encryption adds a bit extra in case the database (and not the code) leaks and there are rainbow tables for salted hashes by then, or some other means. Store the key in the code, not in the database, of course. It's worth noting that nothing protects you from crooked teammates; they can also store a copy of the data before hashing, for instance. You have to take good care of the code repository and do frequent code reviews for all code in the credit card handling path. Also try to minimize the time between receiving the data and having it encrypted/hashed, manually ensuring the variable where it was stored is cleared from memory.
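As a concrete illustration of the salted-hash part of that last suggestion, here is a short C# sketch (C# is used purely for brevity; the same approach translates to the Java or PL/SQL environment described in the question, and the AES layer and key management are deliberately left out):

using System;
using System.Security.Cryptography;
using System.Text;

static class CardHasher
{
    // Returns a salted SHA-256 hash of the card number for later comparison.
    // Store the salt next to the hash; never store the card number itself.
    public static string Hash(string cardNumber, byte[] salt)
    {
        byte[] digits = Encoding.UTF8.GetBytes(cardNumber);
        byte[] salted = new byte[salt.Length + digits.Length];
        Buffer.BlockCopy(salt, 0, salted, 0, salt.Length);
        Buffer.BlockCopy(digits, 0, salted, salt.Length, digits.Length);
        using (SHA256 sha = SHA256.Create())
        {
            return Convert.ToBase64String(sha.ComputeHash(salted));
        }
    }

    // A fresh, cryptographically strong salt per stored card.
    public static byte[] NewSalt()
    {
        byte[] salt = new byte[32];
        new RNGCryptoServiceProvider().GetBytes(salt);
        return salt;
    }
}

Because card numbers have very little entropy, the salted hash on its own is still brute-forceable, which is exactly why the answer above layers AES encryption (with the key kept in code, not in the database) on top of it.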
{ "language": "en", "url": "https://stackoverflow.com/questions/59204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do you change the displayed order of ActiveScaffold "actions"? I am using ActiveScaffold in a Ruby on Rails app, and have replaced the default "actions" text in the table (ie. "edit", "delete", "show") with icons using CSS. I have also added a couple of custom actions with action_link.add ("move" and "copy"). For clarity, I would like to have the icons displayed in a different order than they are. Specifically, I would like "edit" to be the first icon displayed. I seem to be able to change the order of the action_links by the changing the order of definition in the controller. I have also been able to change the order of the default actions by first config.actions.excluding everything, and then adding them with config.actions.add in a specific order. However, my custom actions always seem to appear before the default actions in the list. Ideally I would like them to display "edit" "copy" "move" "delete" (ie - built-in, custom, custom, built-in). Can anyone suggest how I might do this? One idea I had was to re-define "edit" as a custom action (with the default functionality), but I don't know how to go about this either. A: Caveat: I don't know ActiveScaffold. This answer is based on me reading its source code. It looks like the action_links variable is a custom data structure, called ActionLinks. It's defined in ActiveScaffold::DataStructures. Internally, it has a @set variable, which is not a Set at all, but an Array. ActionLinks has an add, delete, and each methods that serve as gatekeepers of this @set variable. When displaying the links, ActiveScaffold does this (in _list_actions.rhtml): <% active_scaffold_config.action_links.each :record do |link| -%> # Displays the link (code removed for brevity) <% end -%> So, short of extending ActiveScaffold::DataStructures::ActionLinks to add a method to sort the values in @set differently, there doesn't seem to be a way to do it, at least not generally. If I were you, I'd add something called order_by!, where you pass it an array of symbols, with the proper order, and it resorts @set. That way, you can call it after you're done adding your custom actions.
{ "language": "en", "url": "https://stackoverflow.com/questions/59207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Easiest way to add a Header and Footer to a Printing.PrintDocument (.Net 2.0)? What's the easiest way to add a header and footer to a .Net PrintDocument object, either programmatically or at design-time? Specifically I'm trying to print a 3rd party grid control (Infragistics GridEx v4.3), which takes a PrintDocument object and draws itself into it. The resulting page just contains the grid and its contents - however I would like to add a header or title to identify the printed report, and possibly a footer to show who printed it, when, and ideally a page number and total pages. I'm using VB.Net 2.0. Thanks for your help! A: Following booji-boy's answer, here's what I came up with (which I've simplified for example purposes): Private Sub btnPrint_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnPrint.Click Dim oDoc As New Printing.PrintDocument oDoc.DefaultPageSettings.Landscape = True AddHandler oDoc.PrintPage, AddressOf PrintPage oDoc.DocumentName = "Printout" InfragisticsWinGrid.PrintPreview(InfragisticsWinGrid.DisplayLayout, oDoc) End Sub Private Sub PrintPage(ByVal sender As Object, ByVal e As System.Drawing.Printing.PrintPageEventArgs) ' Draw title e.Graphics.DrawString("Report Title", New Font("Arial", 16), Brushes.Black, 95, 70) ' Draw footer e.Graphics.DrawImage(DirectCast(mResources.GetObject("footer_logo"), Drawing.Bitmap), 95, e.PageBounds.Height - 87) Dim drawFont As New Font("Arial", 8.75) e.Graphics.DrawString("Report Title", drawFont, Brushes.Gray, 190, e.PageBounds.Height - 90) e.Graphics.DrawString("Printed", drawFont, Brushes.Gray, 190, e.PageBounds.Height - 76) e.Graphics.DrawString("Printed By", drawFont, Brushes.Gray, 190, e.PageBounds.Height - 62) ' Draw some grid lines to add structure to the footer information e.Graphics.DrawLine(Pens.Gray, 246, e.PageBounds.Height - 90, 246, e.PageBounds.Height - 48) e.Graphics.DrawLine(Pens.Gray, 188, e.PageBounds.Height - 75, 550, e.PageBounds.Height - 75) e.Graphics.DrawLine(Pens.Gray, 188, e.PageBounds.Height - 61, 550, e.PageBounds.Height - 61) e.Graphics.DrawString("Report", drawFont, Brushes.Black, 250, e.PageBounds.Height - 90) e.Graphics.DrawString(Date.Now.ToShortDateString & " " & Date.Now.ToShortTimeString, drawFont, Brushes.Black, 250, e.PageBounds.Height - 76) e.Graphics.DrawString("Andrew", drawFont, Brushes.Black, 250, e.PageBounds.Height - 62) End Sub I had to play with the values of e.PageBounds.Height - x to get the drawn items to line up. Thanks again Booji Boy for the pointer - getting at the ReportPage.Graphics() was exactly what I was after :o) A: The PrintDocument object fires the PrintPage event for each page to be printed. You can draw text/lines/etc into the print queue using the PrintPageEventArgs event parameter: http://msdn.microsoft.com/en-us/library/system.drawing.printing.printdocument.aspx Dim it WithEvents when you pass it to the grid, so you can handle the event.
{ "language": "en", "url": "https://stackoverflow.com/questions/59213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Merging two arrays in .NET Is there a built in function in .NET 2.0 that will take two arrays and merge them into one array? The arrays are both of the same type. I'm getting these arrays from a widely used function within my code base and can't modify the function to return the data in a different format. I'm looking to avoid writing my own function to accomplish this if possible. A: I think you can use Array.Copy for this. It takes a source index and destination index so you should be able to append the one array to the other. If you need to go more complex than just appending one to the other, this may not be the right tool for you. A: Everyone has already had their say but I think this more readable than the "use as Extension method" approach: var arr1 = new[] { 1, 2, 3, 4, 5 }; var arr2 = new[] { 6, 7, 8, 9, 0 }; var arr = Queryable.Concat(arr1, arr2).ToArray(); However it can only be used when bringing together 2 arrays. A: This is what I came up with. Works for a variable number of arrays. public static T[] ConcatArrays<T>(params T[][] args) { if (args == null) throw new ArgumentNullException(); var offset = 0; var newLength = args.Sum(arr => arr.Length); var newArray = new T[newLength]; foreach (var arr in args) { Buffer.BlockCopy(arr, 0, newArray, offset, arr.Length); offset += arr.Length; } return newArray; } ... var header = new byte[] { 0, 1, 2}; var data = new byte[] { 3, 4, 5, 6 }; var checksum = new byte[] {7, 0}; var newArray = ConcatArrays(header, data, checksum); //output byte[9] { 0, 1, 2, 3, 4, 5, 6, 7, 0 } A: If you don't want to remove duplicates, then try this Use LINQ: var arr1 = new[] { 1, 2, 3, 4, 5 }; var arr2 = new[] { 6, 7, 8, 9, 0 }; var arr = arr1.Concat(arr2).ToArray(); A: In C# 3.0 you can use LINQ's Concat method to accomplish this easily: int[] front = { 1, 2, 3, 4 }; int[] back = { 5, 6, 7, 8 }; int[] combined = front.Concat(back).ToArray(); In C# 2.0 you don't have such a direct way, but Array.Copy is probably the best solution: int[] front = { 1, 2, 3, 4 }; int[] back = { 5, 6, 7, 8 }; int[] combined = new int[front.Length + back.Length]; Array.Copy(front, combined, front.Length); Array.Copy(back, 0, combined, front.Length, back.Length); This could easily be used to implement your own version of Concat. A: Assuming the destination array has enough space, Array.Copy() will work. You might also try using a List<T> and its .AddRange() method. A: Personally, I prefer my own Language Extensions, which I add or remove at will for rapid prototyping. Following is an example for strings. //resides in IEnumerableStringExtensions.cs public static class IEnumerableStringExtensions { public static IEnumerable<string> Append(this string[] arrayInitial, string[] arrayToAppend) { string[] ret = new string[arrayInitial.Length + arrayToAppend.Length]; arrayInitial.CopyTo(ret, 0); arrayToAppend.CopyTo(ret, arrayInitial.Length); return ret; } } It is much faster than LINQ and Concat. Faster still, is using a custom IEnumerable Type-wrapper which stores references/pointers of passed arrays and allows looping over the entire collection as if it were a normal array. (Useful in HPC, Graphics Processing, Graphics render...) Your Code: var someStringArray = new[]{"a", "b", "c"}; var someStringArray2 = new[]{"d", "e", "f"}; someStringArray.Append(someStringArray2 ); //contains a,b,c,d,e,f For the entire code and a generics version see: https://gist.github.com/lsauer/7919764 Note: This returns an unextended IEnumerable object. To return an extended object is a bit slower. 
I have been compiling such extensions since 2002, with a lot of credit going to helpful people on CodeProject and Stack Overflow. I will release these shortly and put the link up here. A: Just to have it noted as an option: if the arrays you are working with are of a primitive type – Boolean (bool), Char, SByte, Byte, Int16 (short), UInt16, Int32 (int), UInt32, Int64 (long), UInt64, IntPtr, UIntPtr, Single, or Double – then you could (or should?) try using Buffer.BlockCopy. According to the MSDN page for the Buffer class: This class provides better performance for manipulating primitive types than similar methods in the System.Array class. Using the C# 2.0 example from @OwenP's answer as a starting point, it would work as follows (note that Buffer.BlockCopy measures offsets and counts in bytes, not elements): int[] front = { 1, 2, 3, 4 }; int[] back = { 5, 6, 7, 8 }; int[] combined = new int[front.Length + back.Length]; Buffer.BlockCopy(front, 0, combined, 0, front.Length * sizeof(int)); Buffer.BlockCopy(back, 0, combined, front.Length * sizeof(int), back.Length * sizeof(int)); Apart from the byte-based counts, there is barely any difference in syntax between Buffer.BlockCopy and the Array.Copy that @OwenP used, but this should be faster (even if only slightly). A: I needed a solution to combine an unknown number of arrays. Surprised nobody else provided a solution using SelectMany with params. private static T[] Combine<T>(params IEnumerable<T>[] items) => items.SelectMany(i => i).Distinct().ToArray(); If you don't want distinct items, just remove Distinct(). public string[] Reds = new [] { "Red", "Crimson", "TrafficLightRed" }; public string[] Greens = new [] { "Green", "LimeGreen" }; public string[] Blues = new [] { "Blue", "SkyBlue", "Navy" }; public string[] Colors = Combine(Reds, Greens, Blues); Note: There is definitely no guarantee of ordering when using Distinct(). A: In case someone else is looking for how to merge two image byte arrays: private void LoadImage() { string src = string.Empty; byte[] mergedImageData = new byte[0]; mergedImageData = MergeTwoImageByteArrays(watermarkByteArray, backgroundImageByteArray); src = "data:image/png;base64," + Convert.ToBase64String(mergedImageData); MyImage.ImageUrl = src; } private byte[] MergeTwoImageByteArrays(byte[] imageBytes, byte[] imageBaseBytes) { byte[] mergedImageData = new byte[0]; using (var msBase = new MemoryStream(imageBaseBytes)) { System.Drawing.Image imgBase = System.Drawing.Image.FromStream(msBase); Graphics gBase = Graphics.FromImage(imgBase); using (var msInfo = new MemoryStream(imageBytes)) { System.Drawing.Image imgInfo = System.Drawing.Image.FromStream(msInfo); Graphics gInfo = Graphics.FromImage(imgInfo); gBase.DrawImage(imgInfo, new Point(0, 0)); //imgBase.Save(Server.MapPath("_____testImg.png"), ImageFormat.Png); MemoryStream mergedImageStream = new MemoryStream(); imgBase.Save(mergedImageStream, ImageFormat.Png); mergedImageData = mergedImageStream.ToArray(); mergedImageStream.Close(); } } return mergedImageData; } A: If you have the source arrays in an array itself you can use SelectMany: var arrays = new[]{new[]{1, 2, 3}, new[]{4, 5, 6}}; var combined = arrays.SelectMany(a => a).ToArray(); foreach (var v in combined) Console.WriteLine(v); gives 1 2 3 4 5 6 Probably this is not the fastest method, but it might fit depending on the use case.
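For a variant of the params-style helpers above that works for any element type without the byte-count bookkeeping that Buffer.BlockCopy requires, a small Array.Copy-based sketch looks like this (illustrative only; ArrayUtil and ConcatArrays are example names, not taken from any answer above):

using System;

static class ArrayUtil
{
    // Concatenate any number of arrays of the same element type.
    // Array.Copy counts elements, so this works for reference types
    // and for primitives of any size, unlike Buffer.BlockCopy.
    public static T[] ConcatArrays<T>(params T[][] sources)
    {
        if (sources == null) throw new ArgumentNullException("sources");

        int total = 0;
        foreach (T[] source in sources)
            total += source.Length;

        T[] result = new T[total];
        int offset = 0;
        foreach (T[] source in sources)
        {
            Array.Copy(source, 0, result, offset, source.Length);
            offset += source.Length;
        }
        return result;
    }
}

// Usage: int[] merged = ArrayUtil.ConcatArrays(new[] { 1, 2 }, new[] { 3 }, new[] { 4, 5 });  // { 1, 2, 3, 4, 5 }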
A: If you can manipulate one of the arrays, you can resize it before performing the copy: T[] array1 = getOneArray(); T[] array2 = getAnotherArray(); int array1OriginalLength = array1.Length; Array.Resize<T>(ref array1, array1OriginalLength + array2.Length); Array.Copy(array2, 0, array1, array1OriginalLength, array2.Length); Otherwise, you can make a new array T[] array1 = getOneArray(); T[] array2 = getAnotherArray(); T[] newArray = new T[array1.Length + array2.Length]; Array.Copy(array1, newArray, array1.Length); Array.Copy(array2, 0, newArray, array1.Length, array2.Length); More on available Array methods on MSDN. A: Use LINQ: var arr1 = new[] { 1, 2, 3, 4, 5 }; var arr2 = new[] { 6, 7, 8, 9, 0 }; var arr = arr1.Union(arr2).ToArray(); Keep in mind, this will remove duplicates. If you want to keep duplicates, use Concat. A: First, make sure you ask yourself the question "Should I really be using an Array here"? Unless you're building something where speed is of the utmost importance, a typed List, like List<int>, is probably the way to go. The only time I ever use arrays is for byte arrays when sending stuff over the network. Other than that, I never touch them. A: Easier would just be using LINQ: var array = new string[] { "test" }.ToList(); var array1 = new string[] { "test" }.ToList(); array.AddRange(array1); var result = array.ToArray(); First convert the arrays to lists and merge them... After that just convert the list back to an array :) A: Here is a simple example using Array.CopyTo. I think that it answers your question and gives an example of CopyTo usage - I am always puzzled when I need to use this function because the help is a bit unclear - the index is the position in the destination array where inserting occurs. int[] xSrc1 = new int[3] { 0, 1, 2 }; int[] xSrc2 = new int[5] { 3, 4, 5, 6, 7 }; int[] xAll = new int[xSrc1.Length + xSrc2.Length]; xSrc1.CopyTo(xAll, 0); xSrc2.CopyTo(xAll, xSrc1.Length); I guess you can't get it much simpler. A: I'm assuming you want to do the merge yourself rather than rely on the built-in .NET helpers: public string[] merge(string[] input1, string[] input2) { string[] output = new string[input1.Length + input2.Length]; for(int i = 0; i < output.Length; i++) { if (i >= input1.Length) output[i] = input2[i - input1.Length]; else output[i] = input1[i]; } return output; } Another way of doing this would be using the built in ArrayList class. public ArrayList merge(string[] input1, string[] input2) { ArrayList output = new ArrayList(); foreach(string val in input1) output.Add(val); foreach(string val in input2) output.Add(val); return output; } Both examples are C#. A: int [] SourceArray1 = new int[] {2,1,3}; int [] SourceArray2 = new int[] {4,5,6}; int [] targetArray = new int [SourceArray1.Length + SourceArray2.Length]; SourceArray1.CopyTo(targetArray,0); SourceArray2.CopyTo(targetArray,SourceArray1.Length); foreach (int i in targetArray) Console.WriteLine(i + " "); Using the above code, two arrays can be easily merged.
A: Created an extension method to handle null public static class IEnumerableExtensions { public static IEnumerable<T> UnionIfNotNull<T>(this IEnumerable<T> list1, IEnumerable<T> list2) { if (list1 != null && list2 != null) return list1.Union(list2); else if (list1 != null) return list1; else if (list2 != null) return list2; else return null; } } A: string[] names1 = new string[] { "Ava", "Emma", "Olivia" }; string[] names2 = new string[] { "Olivia", "Sophia", "Emma" }; List<string> arr = new List<string>(names1.Length + names2.Length); arr.AddRange(names1); arr.AddRange(names2); string[] result = arr.Distinct().ToArray(); foreach(string str in result) { Console.WriteLine(str.ToString()); } Console.ReadLine(); A: I wanted to find an approach without using any libraries or functionality beyond arrays themselves. The first two examples are mostly for reading the logic from scratch, but I also wonder if there could be performance variations depending on the situation. The third example is the most practical choice. // Two for-loops private static int[] MergedArrays_1(int[] a, int[] b) { int[] result = new int[a.Length + b.Length]; for (int i = 0; i < a.Length; i++) { result[i] = a[i]; } for (int i = a.Length; i < result.Length; i++) { result[i] = b[i - a.Length]; } return result; } // One for-loop private static int[] MergedArrays_2(int[] a, int[] b) { int[] results = new int[a.Length + b.Length]; for (int i = 0; i < results.Length; i++) { results[i] = (i < a.Length) ? a[i] : b[i - a.Length]; } return results; } // Array Method private static int[] MergedArrays_3(int[] a, int[] b) { int[] results = new int[a.Length + b.Length]; a.CopyTo(results, 0); b.CopyTo(results, a.Length); return results; } Lastly, I made a fourth example that can merge multiple arrays, using the params keyword. int[] result = MultipleMergedArrays(arrayOne, arrayTwo, arrayThree); private static int[] MultipleMergedArrays(params int[][] a) { // Get Length int resultsLength = 0; for (int row = 0; row < a.GetLength(0); row++) { resultsLength += a[row].Length; } // Initialize int[] results = new int[resultsLength]; // Add Items int index = 0; for (int row = 0; row < a.GetLength(0); row++) { a[row].CopyTo(results, index); index += a[row].Length; } return results; } The way it works when using params is that the single-dimension arrays are passed into a jagged array. GetLength(0) returns the number of arrays contained within the jagged array. The code first counts the Length of all the arrays, then it initializes a new array based on that size, and starts adding entire arrays into the new results array by using the CopyTo() method, while adding the Length of each added array to an index counter. PS: Sometimes it is necessary to remove empty items, or certain items, from arrays when merging. private static int[] RemoveEmpty(int[] array) { int count = 0; for (int i = 0; i < array.Length; i++) { if (array[i] == 0) count++; } int[] result = new int[array.Length - count]; count = 0; for (int i = 0; i < array.Length; i++) { if (array[i] == 0) continue; result[count] = array[i]; count++; } return result; } This function can be combined with the ones above. It takes an array, counts the number of items that match zero, and creates a new array of the proper size. Then the counter is recycled and used as an index for where to place the input array's values into the new and smaller result array.
When an item matches zero, it skips the rest of the code in that round of the loop, and continues with the next round, without incrementing the integer counter. A: Since .NET 5, we now have AllocateUninitializedArray, which can possibly add an additional (small) performance improvement for the suggested solutions: public static T[] ConcatArrays<T>(IEnumerable<T[]> arrays) { var result = GC.AllocateUninitializedArray<T>(arrays.Sum(a => a.Length)); var offset = 0; foreach (var a in arrays) { a.CopyTo(result, offset); offset += a.Length; } return result; } A: Try this: ArrayList al = new ArrayList(); al.AddRange(array_1); al.AddRange(array_2); al.AddRange(array_3); array_4 = al.ToArray(); A: This code merges two sorted arrays in place (the merged result ends up in a2): int[] a1 = {3,4,5,6}; int[] a2 = {4,7,9}; int i = a1.Length-1; int j = a2.Length-1; int resultIndex = i+j+1; Array.Resize(ref a2, a1.Length + a2.Length); while(resultIndex >= 0) { if(i >= 0 && j >= 0) { if(a1[i] > a2[j]) { a2[resultIndex--] = a1[i--]; } else { a2[resultIndex--] = a2[j--]; } } else if(i >= 0) { a2[resultIndex--] = a1[i--]; } else { a2[resultIndex--] = a2[j--]; } } A: Simple code to join multiple arrays: string[] arr1 = ... string[] arr2 = ... string[] arr3 = ... List<string> arr = new List<string>(arr1.Length + arr2.Length + arr3.Length); arr.AddRange(arr1); arr.AddRange(arr2); arr.AddRange(arr3); string[] result = arr.ToArray(); A: This is another way to do this :) public static void ArrayPush<T>(ref T[] table, object value) { Array.Resize(ref table, table.Length + 1); // Resize the array to make room for one more element table.SetValue(value, table.Length - 1); // Set the value of the new last element } public static void MergeArrays<T>(ref T[] tableOne, T[] tableTwo) { foreach(var element in tableTwo) { ArrayPush(ref tableOne, element); } } Here is the snippet/example
{ "language": "en", "url": "https://stackoverflow.com/questions/59217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "298" }
Q: How Do I Load an Assembly and All of its Dependencies at Runtime in C# for Reflection? I'm writing a utility for myself, partly as an exercise in learning C# Reflection and partly because I actually want the resulting tool for my own use. What I'm after is basically pointing the application at an assembly and choosing a given class from which to select properties that should be included in an exported HTML form as fields. That form will be then used in my ASP.NET MVC app as the beginning of a View. As I'm using Subsonic objects for the applications where I want to use, this should be reasonable and I figured that, by wanting to include things like differing output HTML depending on data type, Reflection was the way to get this done. What I'm looking for, however, seems to be elusive. I'm trying to take the DLL/EXE that's chosen through the OpenFileDialog as the starting point and load it: String FilePath = Path.GetDirectoryName(FileName); System.Reflection.Assembly o = System.Reflection.Assembly.LoadFile(FileName); That works fine, but because Subsonic-generated objects actually are full of object types that are defined in Subsonic.dll, etc., those dependent objects aren't loaded. Enter: AssemblyName[] ReferencedAssemblies = o.GetReferencedAssemblies(); That, too, contains exactly what I would expect it to. However, what I'm trying to figure out is how to load those assemblies so that my digging into my objects will work properly. I understand that if those assemblies were in the GAC or in the directory of the running executable, I could just load them by their name, but that isn't likely to be the case for this use case and it's my primary use case. So, what it boils down to is how do I load a given assembly and all of its arbitrary assemblies starting with a filename and resulting in a completely Reflection-browsable tree of types, properties, methods, etc. I know that tools like Reflector do this, I just can't find the syntax for getting at it. A: I worked out Kent Boogaart's second option. Essentially I had to: 1.) Implement the ResolveEventHandler in a separate class, inheriting from MarshalByRefObject and adding the Serializable attribute. 2.) Add the current ApplicationBase, essentially where the event handler's dll is, to the AppDomain PrivateBinPath. You can find the code on github. A: Couple of options here: * *Attach to AppDomain.AssemblyResolve and do another LoadFile based on the requested assembly. *Spin up another AppDomain with the directory as its base and load the assemblies in that AppDomain. I'd highly recommend pursuing option 2, since that will likely be cleaner and allow you to unload all those assemblies after. Also, consider loading assemblies in the reflection-only context if you only need to reflect over them (see Assembly.ReflectionOnlyLoad).
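To make the first option concrete, hooking AssemblyResolve and probing the chosen file's folder looks roughly like this (a sketch only — the folder-probing rule and the LoadWithDependencies name are illustrative, not taken from the answers above):

using System;
using System.IO;
using System.Reflection;

static class ReflectionLoader
{
    public static Assembly LoadWithDependencies(string fileName)
    {
        string baseDir = Path.GetDirectoryName(fileName);

        // When the CLR can't resolve a referenced assembly on its own,
        // look for it next to the assembly that was picked in the dialog.
        AppDomain.CurrentDomain.AssemblyResolve += delegate(object sender, ResolveEventArgs args)
        {
            string simpleName = new AssemblyName(args.Name).Name;
            string candidate = Path.Combine(baseDir, simpleName + ".dll");
            return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
        };

        return Assembly.LoadFrom(fileName);
    }
}

// If you only need to reflect over the types, the same idea applies to the
// reflection-only context: handle AppDomain.ReflectionOnlyAssemblyResolve and
// load the dependencies with Assembly.ReflectionOnlyLoadFrom instead.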
{ "language": "en", "url": "https://stackoverflow.com/questions/59220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How can I display simple tooltips on icons in html? I am using ActiveScaffold in a Ruby on Rails app, and to save space in the table I have replaced the default "actions" text in the table (ie. "edit", "delete", "show") with icons using CSS. I have also added a couple of custom actions with action_link.add ("move" and "copy"). For clarity, I would like to have a tooltip pop up with the related action (ie. "edit", "copy") when I hover the mouse over the icon. I thought I could do this by adding a simple "alt" definition to the tag, but that doesn't appear to work. Can somebody point me in the right direction? A: Just a minor point to add to this thread... there is no alt tag or title tag. The alt attribute is for images, but all other elements on a page can have a title attribute, which is the best choice for cross browser compatibility. <span title="Click here to edit the foo"> Edit </span> A: You want a "title" tag. I'm not sure if this is necessary anymore, but I usually add both alt and title tags to make sure all browsers display the tool tip the same. A: The alt attribute is to be used as an alternative to the image, in the case of the image missing, or in a text only browser. IE got it wrong, when they made alt appear as a tooltip. It was never meant to be that. The correct attribute for this is title, which of course doesn't do a tooltip in IE. So, to have a tooltip show up in both IE, and FireFox/Safari/Chrome/Opera, use both an alt attribute and a title attribute. A: Tooltips in HTML are the contents of the alt text for image tags, but if you're setting this using CSS you probably have a background:url(...); style instead of an image. A: Use alt on the images and title on the links. A: The alt property of an img tag works in some browsers, but not all (such as some mozilla-based ones). The "right way" to do this is to use the title property. A: As Prestaul pointed out, the alt tag should work for images and title for links. However, this is also browser dependent...most browsers should implement functionality that displays this metadata as tooltips but they aren't required to do so. A: Realizing, as Joel Coehoorn pointed out, that my icon was actually a background image, I created a transparent.gif image with title and alt attributes over top of the background, and voila - tooltips! A: good tool here http://www.guangmingsoft.net/htmlsnapshot/html2image.htm A: You can just use the abbr tag and the title attribute with your text, e.g. <abbr title="some text"> </abbr>, as in that answer https://stackoverflow.com/a/61601175/9442717
{ "language": "en", "url": "https://stackoverflow.com/questions/59221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I find duplicate values in a table in Oracle? What's the simplest SQL statement that will return the duplicate values for a given column and the count of their occurrences in an Oracle database table? For example: I have a JOBS table with the column JOB_NUMBER. How can I find out if I have any duplicate JOB_NUMBERs, and how many times they're duplicated? A: How about: SELECT <column>, count(*) FROM <table> GROUP BY <column> HAVING COUNT(*) > 1; To answer the example above, it would look like: SELECT job_number, count(*) FROM jobs GROUP BY job_number HAVING COUNT(*) > 1; A: Aggregate the column by COUNT, then use a HAVING clause to find values that appear more than once. SELECT column_name, COUNT(column_name) FROM table_name GROUP BY column_name HAVING COUNT(column_name) > 1; A: Another way: SELECT * FROM TABLE A WHERE EXISTS ( SELECT 1 FROM TABLE WHERE COLUMN_NAME = A.COLUMN_NAME AND ROWID < A.ROWID ) Works fine (quick enough) when there is index on column_name. And it's better way to delete or update duplicate rows. A: In case where multiple columns identify unique row (e.g relations table ) there you can use following Use row id e.g. emp_dept(empid, deptid, startdate, enddate) suppose empid and deptid are unique and identify row in that case select oed.empid, count(oed.empid) from emp_dept oed where exists ( select * from emp_dept ied where oed.rowid <> ied.rowid and ied.empid = oed.empid and ied.deptid = oed.deptid ) group by oed.empid having count(oed.empid) > 1 order by count(oed.empid); and if such table has primary key then use primary key instead of rowid, e.g id is pk then select oed.empid, count(oed.empid) from emp_dept oed where exists ( select * from emp_dept ied where oed.id <> ied.id and ied.empid = oed.empid and ied.deptid = oed.deptid ) group by oed.empid having count(oed.empid) > 1 order by count(oed.empid); A: Doing select count(j1.job_number), j1.job_number, j1.id, j2.id from jobs j1 join jobs j2 on (j1.job_numer = j2.job_number) where j1.id != j2.id group by j1.job_number will give you the duplicated rows' ids. A: SELECT SocialSecurity_Number, Count(*) no_of_rows FROM SocialSecurity GROUP BY SocialSecurity_Number HAVING Count(*) > 1 Order by Count(*) desc A: I usually use Oracle Analytic function ROW_NUMBER(). Say you want to check the duplicates you have regarding a unique index or primary key built on columns (c1, c2, c3). Then you will go this way, bringing up ROWID s of rows where the number of lines brought by ROW_NUMBER() is >1: Select * From Table_With_Duplicates Where Rowid In (Select Rowid From (Select ROW_NUMBER() Over ( Partition By c1, c2, c3 Order By c1, c2, c3 ) nbLines From Table_With_Duplicates) t2 Where nbLines > 1) A: Simplest I can think of: select job_number, count(*) from jobs group by job_number having count(*) > 1; A: I know its an old thread but this may help some one. If you need to print other columns of the table while checking for duplicate use below: select * from table where column_name in (select ing.column_name from table ing group by ing.column_name having count(*) > 1) order by column_name desc; also can add some additional filters in the where clause if needed. A: You don't need to even have the count in the returned columns if you don't need to know the actual number of duplicates. e.g. SELECT column_name FROM table GROUP BY column_name HAVING COUNT(*) > 1 A: Here is an SQL request to do that: select column_name, count(1) from table group by column_name having count (column_name) > 1; A: 1. 
Solution: select * from emp where rowid not in (select max(rowid) from emp group by empno); A: Also, you can try something like this to list all duplicate values in a table, say reqitem: SELECT count(poid) FROM poitem WHERE poid = 50 AND rownum < any (SELECT count(*) FROM poitem WHERE poid = 50) GROUP BY poid MINUS SELECT count(poid) FROM poitem WHERE poid in (50) GROUP BY poid HAVING count(poid) > 1;
{ "language": "en", "url": "https://stackoverflow.com/questions/59232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "305" }
Q: HTML.Button in ASP.NET MVC Starting from ASP.NET MVC Preview 3, HTML.Button ( and other related HTML controls) are no longer supported. The question is, what is the equivalent for them? I've an app that was built using Preview 2, now I have to make it compatible with the latest CTP releases. A: Several of the extension methods got moved to Microsoft.Web.Mvc, which is the MVC Futures DLL. You might want to look there for things that have gone missing. A: Just write <input type="button" ... /> into your html. There's nothing special at all with the html controls. A: I figured it out. It goes something like this: <form method="post" action="<%= Html.AttributeEncode(Url.Action("CastUpVote")) %>"> <input type="submit" value="<%=ViewData.Model.UpVotes%> up votes" /> </form> A: <asp:Button> is the ASP.NET equivalent to the HTML.Button. It will by default generate an <input type="button">. (This is the System.Web.UI.WebControls.Button class)
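If you want to keep helper-style syntax without taking the MVC Futures dependency, one option is a small extension method of your own — a sketch against ASP.NET MVC 1.0 or later (SubmitButton is an illustrative name, not a built-in API):

using System.Web.Mvc;

public static class ButtonExtensions
{
    // Renders <input type="submit" name="..." value="..." />
    public static string SubmitButton(this HtmlHelper helper, string name, string text)
    {
        var tag = new TagBuilder("input");
        tag.MergeAttribute("type", "submit");
        tag.MergeAttribute("name", name);
        tag.MergeAttribute("value", text);
        return tag.ToString(TagRenderMode.SelfClosing);
    }
}

// In a view: <%= Html.SubmitButton("castUpVote", ViewData.Model.UpVotes + " up votes") %>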
{ "language": "en", "url": "https://stackoverflow.com/questions/59267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: What is the best way to rollout web applications? I'm trying to create a standard way of rolling out web applications for our company. Currently we do it with zip files, vbscript/javascript, and manual some steps. For thick client installs we generate MSI installers using Wise/Wix. We don't create installers currently for websites as in general they are just xcopy deploy. However we have some config files that need to be changed, verify that certain handlers are registered in IIS... The list goes on. Do most people use MSI installers for web applications as well, or some other tool/scripting language? A: I recently spent a few days working on automating deployments at my company. We use a combination of CruiseControl, NAnt, MSBuild to generate a release version of the app. Then a separate script uses MSDeploy and XCopy to backup the live site and transfer the new files over. Our solution is briefly described in an answer to this question Automate Deployment for Web Applications? A: Do consider MSDeploy, that is the direction Microsoft will be investing in the future for deployment of web applications... Know more about the future direction at Overview Post for Web Deployment in VS 2010 A: We have been using FinalBuilder (www.finalbuilder.com) for this purpose for long time and for some time also using InstallAce (www.Installace.com) for build deployment on the Web Farm. A: You may want to look at: * *How do I get a deployable output from a build script with ASP.NET *Step by Step ASP.NET Automated Build/Deploy We use MSI to create basic installers for our web projects too, often using the Web Setup Projects in VS and sometimes completely custom installers. You may also want to look at MSDeploy. A: We're moving to an MSI for our installs, so far with mixed results. I'm a control freak so I would personally prefer a series of scripts that I had more direct control over. I've used ANT in the past with good results. A: Have you checked out NAnt and CruiseControl? Combined, they can provide an easy and automated way to build and deploy your web apps. A: I work for a state agency and we do all our deployments using a product called RepliWeb. It works good because as dev's we have no control over the webservers. But we can deploy to a deployment area and run the RepliWeb job to do the deployment. Not sure on pricing though...
{ "language": "en", "url": "https://stackoverflow.com/questions/59270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Programmatically change combobox I need to update a combobox with a new value so it changes the reflected text in it. The cleanest way to do this is after the combobox has been initialised and with a message. So I am trying to craft a postmessage to the hwnd that contains the combobox. So if I want to send a message to it, changing the currently selected item to the nth item, what would the postmessage look like? I am guessing that it would involve ON_CBN_SELCHANGE, but I can't get it to work right. A: You want ComboBox_SetCurSel: ComboBox_SetCurSel(hWndCombo, n); or if it's an MFC CComboBox control you can probably do: m_combo.SetCurSel(2); I would imagine if you're doing it manually you would also want SendMessage rather than PostMessage. CBN_SELCHANGE is the notification that the control sends back to you when the selection is changed. Finally, you might want to add the c++ tag to this question. A: A concise version: const int index = 0; m_comboBox.PostMessage(CBN_SELCHANGE, index); A: What might be going wrong is that the selection is being changed inside the selection change message handler, which results in another selection change message. One way to get around this unwanted feedback loop is to add a sentinel to the selection change message handler as shown below: void onSelectChangeHandler(HWND hwnd) { static bool fInsideSelectChange = false; //-- ignore the change message if this function generated it if (!fInsideSelectChange) { //-- turn on the sentinel fInsideSelectChange = true; //-- make the selection changes as required ..... //-- we are done so turn off the sentinel fInsideSelectChange = false; } } A: If you, for example, want to change the title - the line shown when the combobox is closed - then you can do the following: m_ComboBox.DeleteString(0); // first delete the previous title, if any; 0 = the currently shown string m_ComboBox.AddString(_T("Hello there")); Put this, for example, in OnCloseupCombo - the handler that fires when the drop-down list closes: ON_CBN_CLOSEUP(IDC_COMBO1, OnCloseupCombo) This adds a new string rather than selecting one of the already assigned combobox elements.
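For completeness, the raw message behind those wrappers is CB_SETCURSEL, sent to the combobox window itself; CBN_SELCHANGE is only a notification code that the control sends to its parent inside WM_COMMAND, so posting it to the combo won't move the selection. If you're driving the control by handle (for example from C# automation code), a hedged P/Invoke sketch looks like this:

using System;
using System.Runtime.InteropServices;

static class ComboBoxMessages
{
    const uint CB_SETCURSEL = 0x014E; // from winuser.h

    [DllImport("user32.dll")]
    static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

    // Select the nth item (0-based); pass -1 to clear the selection.
    public static void SetCurSel(IntPtr hwndCombo, int n)
    {
        SendMessage(hwndCombo, CB_SETCURSEL, (IntPtr)n, IntPtr.Zero);
        // Note: setting the selection this way does not make the control raise
        // CBN_SELCHANGE to its parent; notify the parent yourself if it relies on that.
    }
}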
{ "language": "en", "url": "https://stackoverflow.com/questions/59280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: In SQL, what's the difference between count(column) and count(*)? I have the following query: select column_name, count(column_name) from table group by column_name having count(column_name) > 1; What would be the difference if I replaced all calls to count(column_name) to count(*)? This question was inspired by How do I find duplicate values in a table in Oracle?. To clarify the accepted answer (and maybe my question), replacing count(column_name) with count(*) would return an extra row in the result that contains a null and the count of null values in the column. A: Another minor difference, between using * and a specific column, is that in the column case you can add the keyword DISTINCT, and restrict the count to distinct values: select column_a, count(distinct column_b) from table group by column_a having count(distinct column_b) > 1; A: count(*) counts NULLs and count(column) does not [edit] added this code so that people can run it create table #bla(id int,id2 int) insert #bla values(null,null) insert #bla values(1,null) insert #bla values(null,1) insert #bla values(1,null) insert #bla values(null,1) insert #bla values(1,null) insert #bla values(null,null) select count(*),count(id),count(id2) from #bla results 7 3 2 A: * *The COUNT(*) sentence indicates SQL Server to return all the rows from a table, including NULLs. *COUNT(column_name) just retrieves the rows having a non-null value on the rows. Please see following code for test executions SQL Server 2008: -- Variable table DECLARE @Table TABLE ( CustomerId int NULL , Name nvarchar(50) NULL ) -- Insert some records for tests INSERT INTO @Table VALUES( NULL, 'Pedro') INSERT INTO @Table VALUES( 1, 'Juan') INSERT INTO @Table VALUES( 2, 'Pablo') INSERT INTO @Table VALUES( 3, 'Marcelo') INSERT INTO @Table VALUES( NULL, 'Leonardo') INSERT INTO @Table VALUES( 4, 'Ignacio') -- Get all the collumns by indicating * SELECT COUNT(*) AS 'AllRowsCount' FROM @Table -- Get only content columns ( exluce NULLs ) SELECT COUNT(CustomerId) AS 'OnlyNotNullCounts' FROM @Table A: Basically the COUNT(*) function return all the rows from a table whereas COUNT(COLUMN_NAME) does not; that is it excludes null values which everyone here have also answered here. But the most interesting part is to make queries and database optimized it is better to use COUNT(*) unless doing multiple counts or a complex query rather than COUNT(COLUMN_NAME). Otherwise, it will really lower your DB performance while dealing with a huge number of data. A: COUNT(*) – Returns the total number of records in a table (Including NULL valued records). COUNT(Column Name) – Returns the total number of Non-NULL records. It means that, it ignores counting NULL valued records in that particular column. A: A further and perhaps subtle difference is that in some database implementations the count(*) is computed by looking at the indexes on the table in question rather than the actual data rows. Since no specific column is specified, there is no need to bother with the actual rows and their values (as there would be if you counted a specific column). Allowing the database to use the index data can be significantly faster than making it count "real" rows. A: The explanation in the docs, helps to explain this: COUNT(*) returns the number of items in a group, including NULL values and duplicates. COUNT(expression) evaluates expression for each row in a group and returns the number of nonnull values. So count(*) includes nulls, the other method doesn't. 
A: We can use the Stack Exchange Data Explorer to illustrate the difference with a simple query. The Users table in Stack Overflow's database has columns that are often left blank, like the user's Website URL. -- count(column_name) vs. count(*) -- Illustrates the difference between counting a column -- that can hold null values, a 'not null' column, and count(*) select count(WebsiteUrl), count(Id), count(*) from Users If you run the query above in the Data Explorer, you'll see that the count is the same for count(Id) and count(*)because the Id column doesn't allow null values. The WebsiteUrl count is much lower, though, because that column allows null. A: Further elaborating upon the answer given by @SQLMeance and @Brannon making use of GROUP BY clause which has been mentioned by OP but not present in answer by @SQLMenace CREATE TABLE table1 ( id INT ); INSERT INTO table1 VALUES (1), (2), (NULL), (2), (NULL), (3), (1), (4), (NULL), (2); SELECT * FROM table1; +------+ | id | +------+ | 1 | | 2 | | NULL | | 2 | | NULL | | 3 | | 1 | | 4 | | NULL | | 2 | +------+ 10 rows in set (0.00 sec) SELECT id, COUNT(*) FROM table1 GROUP BY id; +------+----------+ | id | COUNT(*) | +------+----------+ | 1 | 2 | | 2 | 3 | | NULL | 3 | | 3 | 1 | | 4 | 1 | +------+----------+ 5 rows in set (0.00 sec) Here, COUNT(*) counts the number of occurrences of each type of id including NULL SELECT id, COUNT(id) FROM table1 GROUP BY id; +------+-----------+ | id | COUNT(id) | +------+-----------+ | 1 | 2 | | 2 | 3 | | NULL | 0 | | 3 | 1 | | 4 | 1 | +------+-----------+ 5 rows in set (0.00 sec) Here, COUNT(id) counts the number of occurrences of each type of id but does not count the number of occurrences of NULL SELECT id, COUNT(DISTINCT id) FROM table1 GROUP BY id; +------+--------------------+ | id | COUNT(DISTINCT id) | +------+--------------------+ | NULL | 0 | | 1 | 1 | | 2 | 1 | | 3 | 1 | | 4 | 1 | +------+--------------------+ 5 rows in set (0.00 sec) Here, COUNT(DISTINCT id) counts the number of occurrences of each type of id only once (does not count duplicates) and also does not count the number of occurrences of NULL A: There is no difference if one column is fix in your table, if you want to use more than one column than you have to specify that how much columns you required to count...... Thanks, A: As mentioned in the previous answers, Count(*) counts even the NULL columns, whereas count(Columnname) counts only if the column has values. It's always best practice to avoid * (Select *, count *, …) A: It is best to use Count(1) in place of column name or * to count the number of rows in a table, it is faster than any format because it never go to check the column name into table exists or not
{ "language": "en", "url": "https://stackoverflow.com/questions/59294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "221" }
Q: How do you stop IIS SMTP Server from sending bounce emails? How do you stop the "Default SMTP Virtual Server" from sending bounce messages for email addresses that you don't have? i.e. I'm using IIS' SMTP server to handle my email and if an email is sent unknown at mydomain.com a bounce email with 'address not known' (or something like that) is sent back to the sender. I want it to silently fail. A: I found this article which has a script you can run to configure a catch-all account on your server. All emails which would generate a NDR will instead be directed to this account. Sorry, I haven't tested it. The article above has been removed here it is via the WayBack Machine Basically the short answer to your question is no. On another note, if you don't want to spend any money, or have no budget, and want a better email system, try something like Smarter Mail which you can use for free up to 10 users. I am sure there are others out there, but I have used Smarter Mail in the past successfully. A: This isn't an IIS failure. The SMTP server receiving the message is looking for a valid email address, and when it doesn't find one, sends an email back to your email address saying that there isn't one there. The only way to have it silently fail is by putting the from address as a bogus email like [email protected], etc. A: From an SMTP point of view, a better way to handle this is to reject the RCPT request at some point during the SMTP transaction. This way, your server isn't responsible for sending any blowback to the alleged sender. I don't know how to configure IIS to do this specifically, but you certainly can with Postfix (which is what I use).
{ "language": "en", "url": "https://stackoverflow.com/questions/59296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: When/Why to use Cascading in SQL Server? When setting up foreign keys in SQL Server, under what circumstances should you have it cascade on delete or update, and what is the reasoning behind it? This probably applies to other databases as well. I'm looking most of all for concrete examples of each scenario, preferably from someone who has used them successfully. A: One example is when you have dependencies between entities... ie: Document -> DocumentItems (when you delete Document, DocumentItems don't have a reason to exist) A: Foreign keys are the best way to ensure referential integrity of a database. Avoiding cascades due to being magic is like writing everything in assembly because you don't trust the magic behind compilers. What is bad is the wrong use of foreign keys, like creating them backwards, for example. Juan Manuel's example is the canonical example, if you use code there are many more chances of leaving spurious DocumentItems in the database that will come and bite you. Cascading updates are useful, for instance, when you have references to the data by something that can change, say a primary key of a users table is the name,lastname combination. Then you want changes in that combination to propagate to wherever they are referenced. @Aidan, That clarity you refer to comes at a high cost, the chance of leaving spurious data in your database, which is not small. To me, it's usually just lack of familiarity with the DB and inability to find which FKs are in place before working with the DB that foster that fear. Either that, or constant misuse of cascade, using it where the entities were not conceptually related, or where you have to preserve history. A: ON Delete Cascade: When you want rows in child table to be deleted If corresponding row is deleted in parent table. If on cascade delete isn't used then an error will be raised for referential integrity. ON Update Cascade: When you want change in primary key to be updated in foreign key A: Use cascade delete where you would want the record with the FK to be removed if its referring PK record was removed. In other words, where the record is meaningless without the referencing record. I find cascade delete useful to ensure that dead references are removed by default rather than cause null exceptions. A: I have heard of DBAs and/or "Company Policy" that prohibit using "On Delete Cascade" (and others) purely because of bad experiences in the past. In one case a guy wrote three triggers which ended up calling one another. Three days to recover resulted in a total ban on triggers, all because of the actions of one idjit. Of course sometimes Triggers are needed instead of "On Delete cascade", like when some child data needs to be preserved. But in other cases, its perfectly valid to use the On Delete cascade method. A key advantage of "On Delete cascade" is that it captures ALL the children; a custom written trigger/store procedure may not if it is not coded correctly. I believe the Developer should be allowed to make the decision based upon what the development is and what the spec says. A carpet ban based on a bad experience should not be the criteria; the "Never use" thought process is draconian at best. A judgement call needs to be made each and every time, and changes made as the business model changes. Isn't this what development is all about? A: One reason to put in a cascade delete (rather than doing it in the code) is to improve performance. 
Case 1: With a cascade delete DELETE FROM table WHERE SomeDate < 7 years ago; Case 2: Without a cascade delete FOR EACH R IN (SELECT FROM table WHERE SomeDate < 7 years ago) LOOP DELETE FROM ChildTable WHERE tableId = R.tableId; DELETE FROM table WHERE tableId = R.tableid; /* More child tables here */ NEXT Secondly, when you add in an extra child table with a cascade delete, the code in Case 1 keeps working. I would only put in a cascade where the semantics of the relationship is "part of". Otherwise some idiot will delete half of your database when you do: DELETE FROM CURRENCY WHERE CurrencyCode = 'USD' A: I try to avoid deletes or updates that I didn't explicitly request in SQL server. Either through cascading or through the use of triggers. They tend to bite you in the ass some time down the line, either when trying to track down a bug or when diagnosing performance problems. Where I would use them is in guaranteeing consistency for not very much effort. To get the same effect you would have to use stored procedures. A: I, like everyone else here, find that cascade deletes are really only marginally helpful (it's really not that much work to delete referenced data in other tables -- if there are lot of tables, you simply automate this with a script) but really annoying when someone accidentally cascade deletes some important data that is difficult to restore. The only case where I'd use is if the data in the table table is highly controlled (e.g., limited permissions) and only updated or deleted from through a controlled process (like a software update) that has been verified. A: I never use cascading deletes. If I want something removed from the database I want to explicitly tell the database what I want taking out. Of course they are a function available in the database and there may be times when it is okay to use them, for example if you have an 'order' table and an 'orderItem' table you may want to clear the items when you delete an order. I like the clarity that I get from doing it in code (or stored procedure) rather than 'magic' happening. For the same reason I am not a fan of triggers either. Something to notice is that if you do delete an 'order' you will get '1 row affected' report back even if the cascaded delete has removed 50 'orderItem's. A: Summary of what I've seen so far: * *Some people don't like cascading at all. Cascade Delete * *Cascade Delete may make sense when the semantics of the relationship can involve an exclusive "is part of" description. For example, an OrderLine record is part of its parent order, and OrderLines will never be shared between multiple orders. If the Order were to vanish, the OrderLine should as well, and a line without an Order would be a problem. *The canonical example for Cascade Delete is SomeObject and SomeObjectItems, where it doesn't make any sense for an items record to ever exist without a corresponding main record. *You should not use Cascade Delete if you are preserving history or using a "soft/logical delete" where you only set a deleted bit column to 1/true. Cascade Update * *Cascade Update may make sense when you use a real key rather than a surrogate key (identity/autoincrement column) across tables. *The canonical example for Cascade Update is when you have a mutable foreign key, like a username that can be changed. *You should not use Cascade Update with keys that are Identity/autoincrement columns. *Cascade Update is best used in conjunction with a unique constraint. 
When To Use Cascading * *You may want to get an extra strong confirmation back from the user before allowing an operation to cascade, but it depends on your application. *Cascading can get you into trouble if you set up your foreign keys wrong. But you should be okay if you do that right. *It's not wise to use cascading before you understand it thoroughly. However, it is a useful feature and therefore worth taking the time to understand. A: I work a lot with cascading deletes. It feels good to know whoever works against the database might never leave any unwanted data. If dependencies grow I just change the constraints in the diagramm in Management Studio and I dont have to tweak sp or dataacces. That said, I have 1 problem with cascading deletes and thats circular references. This often leads to parts of the database that have no cascading deletes. A: I do a lot of database work and rarely find cascade deletes useful. The one time I have used them effectively is in a reporting database that is updated by a nightly job. I make sure that any changed data is imported correctly by deleting any top level records that have changed since the last import, then reimport the modified records and anything that relates to them. It save me from having to write a lot of complicated deletes that look from the bottom to the top of my database. I don't consider cascade deletes to be quite as bad as triggers as they only delete data, triggers can have all kinds of nasty stuff inside. In general I avoid real Deletes altogether and use logical deletes (ie. having a bit column called isDeleted that gets set to true) instead. A: A deletion or update to S that removes a foreign-key value found in some tuples of R can be handled in one of three ways: * *Rejection *Propagation *nullification. Propagation is referred to as cascading. There are two cases: ‣ If a tuple in S was deleted, delete the R tuples that referred to it. ‣ If a tuple in S was updated, update the value in the R tuples that refer to it. A: If you're working on a system with many different modules in different versions, it can be very helpful, if the cascade deleted items are part of / owned by the PK holder. Else, all modules would require immediate patches to clean up their dependent items before deleting the PK owner, or the foreign key relation would be omitted completely, possibly leaving tons of garbage in the system if cleanup is not performed correctly. I just introduced cascade delete for a new intersection table between two already existing tables (the intersection to delete only), after cascade delete had been discouraged from for quite some time. It's also not too bad if data gets lost. It is, however, a bad thing on enum-like list tables: somebody deletes entry 13 - yellow from table "colors", and all yellow items in the database get deleted. Also, these sometimes get updated in a delete-all-insert-all manner, leading to referential integrity totally omitted. Of course it's wrong, but how will you change a complex software which has been running for many years, with introduction of true referential integrity being at risk of unexpected side effects? Another problem is when original foreign key values shall be kept even after the primary key has been deleted. One can create a tombstone column and an ON DELETE SET NULL option for the original FK, but this again requires triggers or specific code to maintain the redundant (except after PK deletion) key value. 
A: Cascade deletes are extremely useful when implementing logical super-type and sub-type entities in a physical database. When separate super-type and sub-type tables are are used to physically implement super-types/sub-types (as opposed to rolling up all sub-type attributes into a single physical super-type table), there is a one-to-one relationship between these tables and the issue then becomes how to keep the primary keys 100% in sync between these tables. Cascade deletes can be a very useful tool to: 1) Make sure that deleting a super-type record also deletes the corresponding single sub-type record. 2) Make sure that any delete of a sub-type record also deletes the super-type record. This is achieved by implementing an "instead-of" delete trigger on the sub-type table that goes and deletes the corresponding super-type record, which, in turn, cascade deletes the sub-type record. Using cascade deletes in this manner ensures that no orphan super-type or sub-type records ever exist, regardless of whether you delete the super-type record first or the sub-type record first. A: I would make a distinction between * *Data integrity *Business logic/rules In my experience it is best to enforce integrity as far as possible in the database using PK, FK, and other constraints. However business rules/logic IMO is best implemented using code for the reason of cohesion (google "coupling and cohesion" to learn more). Is cascade delete/update data integrity or business rules? This could of course be debated but I would say it is usually a logic/rule. For example a business rule may be that if an Order is deleted all OrderItems should be automatically deleted. But it could also be that it should never be possible to delete an Order if it still have OrderItems. So this may be up to the business to decide. How do we know how this rule is currently implemented? If it is all in code we can just look at the code (high cohesion). If the rule is maybe implemented in the code or maybe implemented as cascade in the database then we need to look in multiple places (low cohesion). Of course if you go all-in with putting your business rules only in the database and use triggers, stored proc then cascade may make sense. I usually consider database vendor lock-in before using any stored proc or triggers. A SQL database that just stores data and enforces integrity is IMO easier to port to another vendor. So for that reason I usually don't use stored proc or triggers.
{ "language": "en", "url": "https://stackoverflow.com/questions/59297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "170" }
Q: Hibernate - maxElementsOnDisk from EHCache to TreeCache I'm migrating a Hibernate application's cache from EHCache to JBoss TreeCache. I'm trying to find how to configure the equivalent to maxElementsOnDisk to limit the cache size on disk, but I couldn't find anything similar to configure in a FileCacheLoader with passivation activated. Thanks A: This page seems to imply that the correct configuration element is: <attribute name="MaxCapacity">20000</attribute> However, I've only ever used EHCache myself. A: In the version I am working on (JBossCache 1.4.1), it looks like it is not possible to configure this parameter.
{ "language": "en", "url": "https://stackoverflow.com/questions/59299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Wrap an Oracle schema update in a transaction I've got a program that periodically updates its database schema. Sometimes, one of the DDL statements might fail and if it does, I want to roll back all the changes. I wrap the update in a transaction like so: BEGIN TRAN; CREATE TABLE A (PKey int NOT NULL IDENTITY, NewFieldKey int NULL, CONSTRAINT PK_A PRIMARY KEY (PKey)); CREATE INDEX A_2 ON A (NewFieldKey); CREATE TABLE B (PKey int NOT NULL IDENTITY, CONSTRAINT PK_B PRIMARY KEY (PKey)); ALTER TABLE A ADD CONSTRAINT FK_B_A FOREIGN KEY (NewFieldKey) REFERENCES B (PKey); COMMIT TRAN; As we're executing, if one of the statements fail, I do a ROLLBACK instead of a COMMIT. This works great on SQL Server, but doesn't have the desired effect on Oracle. Oracle seems to do an implicit COMMIT after each DDL statement: * *http://www.orafaq.com/wiki/SQL_FAQ#What_are_the_difference_between_DDL.2C_DML_and_DCL_commands.3F *http://infolab.stanford.edu/~ullman/fcdb/oracle/or-nonstandard.html#transactions Is there any way to turn off this implicit commit? A: You can not turn this off. Fairly easy to work around by designing your scripts to drop tables in the event they already exist etc... You can look at using FLASHBACK database, I believe you can do this at the schema/object level but check the docs to confirm that. You would need to be on 10G for that to work.
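Since the DDL statements themselves can't be transacted, one pragmatic pattern on the application side is to record a compensating statement for each step and run them in reverse if a later step fails — a rough sketch (the DDL, the SchemaUpdater name, and the ADO.NET plumbing are illustrative only, not from the answer above):

using System.Collections.Generic;
using System.Data;

static class SchemaUpdater
{
    public static void ApplySchemaUpdate(IDbConnection connection)
    {
        // Each step pairs a DDL statement with the statement that undoes it.
        var steps = new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>(
                "CREATE TABLE B (PKey NUMBER NOT NULL, CONSTRAINT PK_B PRIMARY KEY (PKey))",
                "DROP TABLE B"),
            new KeyValuePair<string, string>(
                "CREATE TABLE A (PKey NUMBER NOT NULL, NewFieldKey NUMBER NULL, CONSTRAINT PK_A PRIMARY KEY (PKey))",
                "DROP TABLE A"),
            new KeyValuePair<string, string>(
                "ALTER TABLE A ADD CONSTRAINT FK_B_A FOREIGN KEY (NewFieldKey) REFERENCES B (PKey)",
                "ALTER TABLE A DROP CONSTRAINT FK_B_A"),
        };

        var undo = new Stack<string>();
        try
        {
            foreach (var step in steps)
            {
                using (IDbCommand cmd = connection.CreateCommand())
                {
                    cmd.CommandText = step.Key;
                    cmd.ExecuteNonQuery();
                }
                undo.Push(step.Value);
            }
        }
        catch
        {
            // A DDL statement failed: undo what has been applied so far, newest first.
            while (undo.Count > 0)
            {
                using (IDbCommand cmd = connection.CreateCommand())
                {
                    cmd.CommandText = undo.Pop();
                    try { cmd.ExecuteNonQuery(); } catch { /* best effort */ }
                }
            }
            throw;
        }
    }
}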
{ "language": "en", "url": "https://stackoverflow.com/questions/59303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to vertically center content with variable height within a div? What is the best way to vertically center the content of a div when the height of the content is variable. In my particular case, the height of the container div is fixed, but it would be great if there were a solution that would work in cases where the container has a variable height as well. Also, I would love a solution with no, or very little use of CSS hacks and/or non-semantic markup. A: Best result for me so far: div to be centered: position: absolute; top: 50%; transform: translateY(-50%); margin: 0 auto; right: 0; left: 0; A: You can use margin auto. With flex, the div seems to be centered vertically too. body, html { height: 100%; margin: 0; } .site { height: 100%; display: flex; } .site .box { background: #0ff; max-width: 20vw; margin: auto; } <div class="site"> <div class="box"> <h1>blabla</h1> <p>blabla</p> <p>blablabla</p> <p>lbibdfvkdlvfdks</p> </div> </div> A: Just add position: relative; top: 50%; transform: translateY(-50%); to the inner div. What it does is moving the inner div's top border to the half height of the outer div (top: 50%;) and then the inner div up by half its height (transform: translateY(-50%)). This will work with position: absolute or relative. Keep in mind that transform and translate have vendor prefixes which are not included for simplicity. Codepen: http://codepen.io/anon/pen/ZYprdb A: This seems to be the best solution I’ve found to this problem, as long as your browser supports the ::before pseudo element: CSS-Tricks: Centering in the Unknown. It doesn’t require any extra markup and seems to work extremely well. I couldn’t use the display: table method because table elements don’t obey the max-height property. .block { height: 300px; text-align: center; background: #c0c0c0; border: #a0a0a0 solid 1px; margin: 20px; } .block::before { content: ''; display: inline-block; height: 100%; vertical-align: middle; margin-right: -0.25em; /* Adjusts for spacing */ /* For visualization background: #808080; width: 5px; */ } .centered { display: inline-block; vertical-align: middle; width: 300px; padding: 10px 15px; border: #a0a0a0 solid 1px; background: #f5f5f5; } <div class="block"> <div class="centered"> <h1>Some text</h1> <p>But he stole up to us again, and suddenly clapping his hand on my shoulder, said&mdash;"Did ye see anything looking like men going towards that ship a while ago?"</p> </div> </div> A: This is something I have needed to do many times and a consistent solution still requires you add a little non-semantic markup and some browser specific hacks. When we get browser support for css 3 you'll get your vertical centering without sinning. For a better explanation of the technique you can look the article I adapted it from, but basically it involves adding an extra element and applying different styles in IE and browsers that support position:table\table-cell on non-table elements. <div class="valign-outer"> <div class="valign-middle"> <div class="valign-inner"> Excuse me. What did you sleep in your clothes again last night. Really. You're gonna be in the car with her. Hey, not too early I sleep in on Saturday. Oh, McFly, your shoe's untied. Don't be so gullible, McFly. You got the place fixed up nice, McFly. I have you're car towed all the way to your house and all you've got for me is light beer. What are you looking at, butthead. Say hi to your mom for me. 
</div> </div> </div> <style> /* Non-structural styling */ .valign-outer { height: 400px; border: 1px solid red; } .valign-inner { border: 1px solid blue; } </style> <!--[if lte IE 7]> <style> /* For IE7 and earlier */ .valign-outer { position: relative; overflow: hidden; } .valign-middle { position: absolute; top: 50%; } .valign-inner { position: relative; top: -50% } </style> <![endif]--> <!--[if gt IE 7]> --> <style> /* For other browsers */ .valign-outer { position: static; display: table; overflow: hidden; } .valign-middle { position: static; display: table-cell; vertical-align: middle; width: 100%; } </style> There are many ways (hacks) to apply styles in specific sets of browsers. I used conditional comments but look at the article linked above to see two other techniques. Note: There are simple ways to get vertical centering if you know some heights in advance, if you are trying to center a single line of text, or in several other cases. If you have more details then throw them in because there may be a method that doesn't require browser hacks or non-semantic markup. Update: We are beginning to get better browser support for CSS3, bringing both flex-box and transforms as alternative methods for getting vertical centering (among other effects). See this other question for more information about modern methods, but keep in mind that browser support is still sketchy for CSS3. A: you can use flex display such as below code: .example{ background-color:red; height:90px; width:90px; display:flex; align-items:center; /*for vertically center*/ justify-content:center; /*for horizontally center*/ } <div class="example"> <h6>Some text</h6> </div> A: Using the child selector, I've taken Fadi's incredible answer above and boiled it down to just one CSS rule that I can apply. Now all I have to do is add the contentCentered class name to elements I want to center: .contentCentered { text-align: center; } .contentCentered::before { content: ''; display: inline-block; height: 100%; vertical-align: middle; margin-right: -.25em; /* Adjusts for spacing */ } .contentCentered > * { display: inline-block; vertical-align: middle; } <div class="contentCentered"> <div> <h1>Some text</h1> <p>But he stole up to us again, and suddenly clapping his hand on my shoulder, said&mdash;"Did ye see anything looking like men going towards that ship a while ago?"</p> </div> </div> Forked CodePen: http://codepen.io/dougli/pen/Eeysg A: For me the best way to do this is: .container{ position: relative; } .element{ position: absolute; top: 50%; transform: translateY(-50%); } The advantage is not having to make the height explicit A: This is my awesome solution for a div with a dynamic (percentaged) height. CSS .vertical_placer{ background:red; position:absolute; height:43%; width:100%; display: table; } .inner_placer{ display: table-cell; vertical-align: middle; text-align:center; } .inner_placer svg{ position:relative; color:#fff; background:blue; width:30%; min-height:20px; max-height:60px; height:20%; } HTML <div class="footer"> <div class="vertical_placer"> <div class="inner_placer"> <svg> some Text here</svg> </div> </div> </div> Try this by yourself.
{ "language": "en", "url": "https://stackoverflow.com/questions/59309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "172" }
Q: User Initiated Kernel dump in Windows XP I remember watching a webcast from Mark Russinovich showing the sequence of keyboard keys for a user initiated kernel dump. Can somebody refresh my memory on the exact order of the keys. Please note this is for XP. A: http://psacake.com/web/jr.asp contains full instructions, and here's an excerpt: While it may seem odd to think about purposefully causing a Blue Screen Of Death (BSOD), Microsoft includes such a provision in Windows XP. This might come in handy for testing and troubleshooting your Startup And Recovery settings, Event logging, and for demonstration purposes. Here's how to create a BSOD: Launch the Registry Editor (Regedit.exe). Go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters. Go to Edit, select New | DWORD Value and name the new value CrashOnCtrlScroll. Double-click the CrashOnCtrlScroll DWORD Value, type 1 in the Value Data textbox, and click OK. Close the Registry Editor and restart Windows XP. When you want to cause a BSOD, press and hold down the [Ctrl] key on the right side of your keyboard, and then tap the [ScrollLock] key twice. Now you should see the BSOD. If your system reboots instead of displaying the BSOD, you'll have to disable the Automatically Restart setting in the System Properties dialog box. To do so, follow these steps: Press [Windows]-Break. Select the Advanced tab. Click the Settings button in the Startup And Recovery panel. Clear the Automatically Restart check box in the System Failure panel. Click OK twice. Here's how you remove the BSOD configuration: Launch the Registry Editor (Regedit.exe). Go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters. Select the CrashOnCtrlScroll value, pull down the Edit menu, and select the Delete command. Close the Registry Editor and restart Windows XP. Note: Editing the registry is risky, so make sure you have a verified backup before making any changes. And I may be wrong in assuming you want BSOD, so this is a Microsoft Page showing how to capture kernel dumps: https://web.archive.org/web/20151014034039/https://support.microsoft.com/fr-ma/kb/316450 A: As far as I know, the "Create Dump" command was only added to Task Manager in Vista. The only process I know of to do this is using the adplus VBScript that comes with Debugging Tools. Short of hooking into dbghelp and programmatically doing it yourself. A: You can setup the user dump tool from Microsoft with hot keys to dump a process. However, this is a user process dump, not a kernel dump... A: I don't know of any keyboard short cuts, but are you looking for like in task manager, when you right click on a process and select "Create Dump"?
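For reference, the registry steps above can also be scripted; this is a sketch using the stock reg.exe tool (run it from a command prompt, and the same caveat about backing up the registry applies):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f

After a reboot, holding the right Ctrl key and pressing Scroll Lock twice triggers the crash. To undo it:

reg delete "HKLM\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters" /v CrashOnCtrlScroll /f

Note that the i8042prt key applies to PS/2 keyboards; a USB keyboard may need a different driver key, so treat this exact path as an assumption to verify on your hardware.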
{ "language": "en", "url": "https://stackoverflow.com/questions/59313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: In SQL, what’s the difference between count(*) and count('x')? I have the following code: SELECT <column>, count(*) FROM <table> GROUP BY <column> HAVING COUNT(*) > 1; Is there any difference to the results or performance if I replace the COUNT(*) with COUNT('x')? (This question is related to a previous one) A: The major performance difference is that COUNT(*) can be satisfied by examining the primary key on the table. i.e. in the simple case below, the query will return immediately, without needing to examine any rows. select count(*) from table I'm not sure if the query optimizer in SQL Server will do so, but in the example above, if the column you are grouping on has an index the server should be able to satisfy the query without hitting the actual table at all. To clarify: this answer refers specifically to SQL Server. I don't know how other DBMS products handle this. A: This question is slightly different that the other referenced. In the referenced question, it was asked what the difference was when using count(*) and count(SomeColumnName), and SQLMenace's answer was spot on. To address this question, essentially there is no difference in the result. Both count(*) and count('x') and say count(1) will return the same number. The difference is that when using " * " just like in a SELECT all columns are returned, then counted. When a constant is used (e.g. 'x' or 1) then a row with one column is returned and then counted. The performance difference would be seen when " * " returns many columns. Update: The above statement about performance is probably not quite right as discussed in other answers, but does apply to subselect queries when using EXISTS and NOT EXISTS A: To say that SELECT COUNT(*) vs COUNT(1) results in your DBMS returning "columns" is pure bunk. That may have been the case long, long ago but any self-respecting query optimizer will choose some fast method to count the rows in the table - there is NO performance difference between SELECT COUNT(*), COUNT(1), COUNT('this is a silly conversation') Moreover, SELECT(1) vs SELECT(*) will NOT have any difference in INDEX usage -- most DBMS will actually optimize SELECT( n ) into SELECT(*) anyway. See the ASK TOM: Oracle has been optimizing SELECT(n) into SELECT(*) for the better part of a decade, if not longer: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1156151916789 problem is in count(col) to count() conversion **03/23/00 05:46 pm *** one workaround is to set event 10122 to turn off count(col) ->count() optimization. Another work around is to change the count(col) to count(), it means the same, when the col has a NOT NULL constraint. The bug number is 1215372. One thing to note - if you are using COUNT(col) (don't!) and col is marked NULL, then it will actually have to count the number of occurrences in the table (either via index scan, histogram, etc. if they exist, or a full table scan otherwise). Bottom line: if what you want is the count of rows in a table, use COUNT(*) A: MySQL: According to the MySQL website, COUNT(*) is faster for single table queries when using MyISAM: http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_count I'm guessing with a having clause with a count in it may change things.
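To make the "no difference in the result" point concrete, here is a small sketch (the table and column names are made up) that runs on most databases:

SELECT COUNT(*)   FROM messages;          -- counts all rows
SELECT COUNT(1)   FROM messages;          -- same number
SELECT COUNT('x') FROM messages;          -- same number again
SELECT COUNT(messageTime) FROM messages;  -- counts only rows where messageTime IS NOT NULL

The first three always agree, because 1 and 'x' are non-null constants; only the last form can return a smaller number, since COUNT(column) skips NULLs, which is the behaviour the Ask Tom quote above is warning about.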
{ "language": "en", "url": "https://stackoverflow.com/questions/59322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: What online brokers offer APIs? So I'm getting really sick of E*TRADE and, being a developer, would love to find an online broker that offers an API. It would be great to be able to write my own trading tools, and maybe even modify existing ones. Based on my research so far, I've only found one option. Interactive Brokers offers a multi-language API (Java/C++/ActiveX/DDE) and has some fairly decent commission rates to boot. I want to make sure there aren't any other options out there I should be considering. Any ideas? Update: Based on answers so far, here's a quick list... * *Interactive Brokers * *Java *C++ *ActiveX *DDE for Excel *Pinnacle Trading * *C++ *Perl *VB.NET *Excel *MB Trading A: I vote for IB(Interactive Brokers). I've used them in the past as was quite happy. Pinnacle Capital Markets trading also has an API (pcmtrading.com) but I haven't used them. Interactive Brokers: https://www.interactivebrokers.com/en/?f=%2Fen%2Fsoftware%2Fibapi.php Pinnacle Capital Markets: http://www.pcmtrading.com/es/technology/api.html A: I've been using parts of the marketcetera platform. They support all kinds of marketdata sources and brokers and you should easily be able to add more brokers and/or data providers. This is not a direct broker API of course, but that helps you avoid vendor lock-in so that might be a good thing. And of course all the tools they use are open source. A: openecry.com is a broker with plenty of information on an API and instructions on how to do yours. There are also other brokers with the OEC platform and all the bells and whistles a pro could ask for. A: Looks like E*Trade has an API now. For access to historical data, I've found EODData to have reasonable prices for their data dumps. For side projects, I can't afford (rather don't want to afford) a huge subscription fee just for some data to tinker with. A: There are a few. I was looking into MBTrading for a friend. I didn't get too far, as my friend lost interest. Seemed relatively straigt forward with a C# and VB.Net SDK. They had some docs and everything. This was ~6 months ago, so it may be better (or worse) by now. IIRC, you can create a demo account for free. I don't remember all the details, but it let you connect to their test server and pull quotes and make fake trades and such to get your software fine tuned. Don't know much about cost for an actual account or anything. A: Ameritrade also offers an API, as long as you have an Ameritrade account: http://www.tdameritrade.com/tradingtools/partnertools/api_dev.html A: .NET Client Library for TD Ameritrade Trading Platform: TD Ameritrade .NET SDK, also available via NuGet A: Only related with currency trading (Forex), but many Forex brokers are offering MetaTrader which let you code in MQL. The main problem with it (aside that it's limited to Forex) is that you've to code in MQL which might not be your preferred language.
{ "language": "en", "url": "https://stackoverflow.com/questions/59327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "155" }
Q: Visibility of template specialization of C++ function Suppose I have fileA.h which declares a class classA with template function SomeFunc<T>(). This function is implemented directly in the header file (as is usual for template functions). Now I add a specialized implementation of SomeFunc() (like for SomeFunc<int>()) in fileA.C (ie. not in the header file). If I now call SomeFunc<int>() from some other code (maybe also from another library), would it call the generic version, or the specialization? I have this problem right now, where the class and function live in a library which is used by two applications. And one application correctly uses the specialization, while another app uses the generic form (which causes runtime problems later on). Why the difference? Could this be related to linker options etc? This is on Linux, with g++ 4.1.2. A: Have you added a prototype with parameters to your header file? I mean is there somewhere in fileA.h template<> SomeFunc<int>(); If not that's probably the reason. A: I had the same problem with gcc4, here is how i solved it. It was more simple a solution than what i was lead to believe by previous comments. The previous posts ideas were correct but their syntax didn't work for me. ----------header----------------- template < class A > void foobar(A& object) { std::cout << object; } template <> void foobar(int); ---------source------------------ #include "header.hpp" template <> void foobar(int x) { std::cout << "an int"; } A: It is an error to have a specialization for a template which is not visible at the point of call. Unfortunately, compilers are not required to diagnose this error, and can then do what they like with your code (in standardese it is "ill formed, no diagnostic required"). Technically, you need to define the specialization in the header file, but just about every compiler will handle this as you might expect: this is fixed in C++11 with the new "extern template" facility: extern template<> SomeFunc<int>(); This explicitly declares that the particular specialization is defined elsewhere. Many compilers support this already, some with and some without the extern. A: Per the specs, your specialized function template should never be called outside fileA.C, unless you export the template definition, which no compiler (except Comeau) currently supports (or has it planned for the forseeable future). On the other hand, once the function template is instantiated, there is a function visible to the compiler that is no longer a template. GCC may re-use this definition across different compiler units because the standard states that each template shall only be instantiated once for a given set of type arguments [temp.spec]. Still, since the template is not exported, this should be limited to the compilation unit. I believe that GCC may expose a bug here in sharing its list of instantiated templates across compilation units. Normally, this is a reasonable optimization but it should take function specializations into account which it doesn't seem to do correctly. A: In Microsoft C++, I did an experiment with inline functions. I wanted to know what would happen if I defined incompatible versions of a function in different sources. I got different results depending on whether I was using a Debug build or a Release build. In Debug, the compiler refuses to inline anything, and the linker was linking the same version of the function no matter what was in scope in the source. 
In Release, the compiler inlined whichever version had been defined at the time, and you got differing versions of the function. In neither case were there any warnings. I kind of suspected this, which is why I did the experiment. I assume that template functions would behave the same, as would other compilers. A: As Anthony Williams says, the extern template construct is the correct way to do this, but since his sample code is incomplete and has multiple syntax errors, here's a complete solution. fileA.h: namespace myNamespace { class classA { public: template <class T> void SomeFunc() { ... } }; // The following line declares the specialization SomeFunc<int>(). template <> void classA::SomeFunc<int>(); // The following line externalizes the instantiation of the previously // declared specialization SomeFunc<int>(). If the preceding line is omitted, // the following line PREVENTS the specialization of SomeFunc<int>(); // SomeFunc<int>() will not be usable unless it is manually instantiated // separately). When the preceding line is included, all the compilers I // tested this on, including gcc, behave exactly the same (throwing a link // error if the specialization of SomeFunc<int>() is not instantiated // separately), regardless of whether or not the following line is included; // however, my understanding is that nothing in the standard requires that // behavior if the following line is NOT included. extern template void classA::SomeFunc<int>(); } fileA.C: #include "fileA.h" template <> void myNamespace::classA::SomeFunc<int>() { ... } A: Unless the specialized template function is also listed in the header file, the other application will have no knowledge of the specialized version. The solution is the add SomeFunc<int>() to the header as well. A: Brandon: that's what I thought - the specialized function should never be called. Which is true for the second application I mentioned. The first app, however, clearly calls the specialized form even though the specialization is not declared in the header file! I mainly seek enlightenment here :-) because the first app is a unit test, and it's unfortunate to have a bug that doesn't appear in the test but in the real app... (PS: I have fixed this specific bug, indeed by declaring the specialization in the header; but what other similar bugs might still be hidden?) A: @[anthony-williams], are you sure you're not confusing extern template declarations with extern template instantiations? From what I see, extern template may only be used for explicit instantiation, not for specialization (which implies implicit instantiation). [temp.expl.spec] doesn't mention the extern keyword: explicit-specialization:     template < > declaration
{ "language": "en", "url": "https://stackoverflow.com/questions/59331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Copying relational data from database to database Edit: Let me completely rephrase this, because I'm not sure there's an XML way like I was originally describing. Yet another edit: This needs to be a repeatable process, and it has to be able to be set up in a way that it can be called in C# code. In database A, I have a set of tables, related by PKs and FKs. A parent table, with child and grandchild tables, let's say. I want to copy a set of rows from database A to database B, which has identically named tables and fields. For each table, I want to insert into the same table in database B. But I can't be constrained to use the same primary keys. The copy routine must create new PKs for each row in database B, and must propagate those to the child rows. I'm keeping the same relations between the data, in other words, but not the same exact PKs and FKs. How would you solve this? I'm open to suggestions. SSIS isn't completely ruled out, but it doesn't look to me like it'll do this exact thing. I'm also open to a solution in LINQ, or using typed DataSets, or using some XML thing, or just about anything that'll work in SQL Server 2005 and/or C# (.NET 3.5). The best solution wouldn't require SSIS, and wouldn't require writing a lot of code. But I'll concede that this "best" solution may not exist. (I didn't make this task up myself, nor the constraints; this is how it was given to me.) A: I think the SQL Server utility tablediff.exe might be what you are looking for. See also this thread. A: First, let me say that SSIS is your best bet. But, to answer the question you asked... I don't believe you will be able to get away with creating new id's all around, although you could but you would need to take the original IDs to use for lookups. The best you can get is one insert statement for table. Here is an example of the code to do SELECTs to get you the data from your XML Sample: declare @xml xml set @xml='<People Key="1" FirstName="Bob" LastName="Smith"> <PeopleAddresses PeopleKey="1" AddressesKey="1"> <Addresses Key="1" Street="123 Main" City="St Louis" State="MO" ZIP="12345" /> </PeopleAddresses> </People> <People Key="2" FirstName="Harry" LastName="Jones"> <PeopleAddresses PeopleKey="2" AddressesKey="2"> <Addresses Key="2" Street="555 E 5th St" City="Chicago" State="IL" ZIP="23456" /> </PeopleAddresses> </People> <People Key="3" FirstName="Sally" LastName="Smith"> <PeopleAddresses PeopleKey="3" AddressesKey="1"> <Addresses Key="1" Street="123 Main" City="St Louis" State="MO" ZIP="12345" /> </PeopleAddresses> </People> <People Key="4" FirstName="Sara" LastName="Jones"> <PeopleAddresses PeopleKey="4" AddressesKey="2"> <Addresses Key="2" Street="555 E 5th St" City="Chicago" State="IL" ZIP="23456" /> </PeopleAddresses> </People> ' select t.b.value('./@Key', 'int') PeopleKey, t.b.value('./@FirstName', 'nvarchar(50)') FirstName, t.b.value('./@LastName', 'nvarchar(50)') LastName from @xml.nodes('//People') t(b) select t.b.value('../../@Key', 'int') PeopleKey, t.b.value('./@Street', 'nvarchar(50)') Street, t.b.value('./@City', 'nvarchar(50)') City, t.b.value('./@State', 'char(2)') [State], t.b.value('./@Zip', 'char(5)') Zip from @xml.nodes('//Addresses') t(b) What this does is take Nodes from the XML and parse out the data. To get the relational id from people we use ../../ to go up the chain. A: Dump the XML approach and use the import wizard / SSIS. A: By far the easiest way is Red Gate's SQL Data Compare. You can set it up to do just what you described in a minute or two. 
A: I love Red Gate's SQL Compare and Data Compare too but it won't meet his requirements for the changing primary keys as far as I can tell. If cross database queries/linked servers are an option you could do this with a stored procedure that copies the records from parent/child in DB A into temporary tables on DB B and then add a column for the new primary key in the temp child table that you would update after inserting the headers. My question is if the records don't have the same primary key how do you tell if it's a new record? Is there some other candidate key? If these are new tables why can't they have the same primary key? A: I have created the same thing with a set of stored procedures. Database B will have its own primary keys, but store Database A's primary keys, for debuging purposes. It means I can have more than one Database A! Data is copied via a linked server. Not too fast; SSIS is faster. But SSIS is not for beginners, and it is not easy to code something that works with changing source tables. And it is easy to call a stored procedure from C#. A: I'd script it in a Stored Procedure, using Inserts to do the hard work. Your code will take the PKs from Table A (presumably via @@Scope_Identity) - I assume that the PK for Table A is an Identity field? You could use temporary tables, cursors or you might prefer to use the CLR - it might lend itself to this kind of operation. I'd be surprised to find a tool that could do this off the shelf with either a) pre-determined keys, or b) identity fields (clearly Tables B & C don't have them). A: Are you clearing the destination tables each time and then starting again? That will make a big difference to the solution you need to implement. If you are doing a complete re-import each time then you could do something like the following: Create a temporary table or table variable to record the old and new primary keys for the parent table. Insert the parent table data into the destination and use the OUTPUT clause to capture the new ID's and insert them with the old IDs into the temp table. NOTE: Using the output clause is efficient and allows you to do the insert in bulk without cycling through each record to be inserted. Insert the child table data. Join to the temp table to retrieve the new foreign key required. The above process could be done using T-SQL Script, C# code or SSIS. My preference would be for SSIS. A: If you are adding each time then you may need to keep a permanent table to track the relationship between source database primary keys and destination database primary keys (at least for the parent table). If you needed to keep this kind of data out of the destination database, you could get SSIS to store/retrieve it from some kind of logging database or even a flat file. You could probably avoid the above scenario if there is a combination of fields in the parent table that can be used to uniquely identify that record and therefore "find" the primary key for that record in the destination database. A: I think most likely what I'm going to use is typed datasets. It won't be a generalized solution; we'll have to regenerate them if any of the tables change. But based on what I've been told, that's not a problem; the tables aren't expected to change much. Datasets will make it reasonably easy to loop through the data hierarchically and refresh PKs from the database after insert. A: When dealing with similar tasks I simply created a set of stored procedures to do the job. 
As the task that you specified is pretty custom, you are not likely to find "ready to use" solution. Just to give you some hints: * *If the databases are on different servers use linked servers so you can access both source and destination tables simply through TSQL In the stored procedure: * *Identify the parent items that need to be copied - you said that the primary keys are different so you need to use unique constraints instead (you should be able to define them if the tables are normalised) *Identify the child items that need to be copied based on the identified parents, to check if some of them are already in the destination db use the unique constraints approach again *Identify the grandchild items (same logic as with parent-child) *Copy data over starting with the lowest level (grandchildren, children, parents) There is no need for cursors etc, simply store the immediate results in the temporary table (or table variable if working within one stored procedure) That approach worked for me pretty well. You can of course add parameter to the main stored procedure so you can either copy all new records or only ones that you specify. Let me know if that is of any help.
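As a rough T-SQL sketch of the "capture old and new keys, then fix up the children" approach described in the answers above (all table and column names here are invented, and the MERGE...OUTPUT trick requires SQL Server 2008 or later, so on 2005 you would need a natural-key join or a row-by-row loop instead):

DECLARE @KeyMap TABLE (OldId INT, NewId INT);

-- Copy the parent rows, recording the mapping from source PK to new identity value
MERGE INTO dbB.dbo.Parent AS tgt
USING dbA.dbo.Parent AS src
   ON 1 = 0                     -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (Name, CreatedOn) VALUES (src.Name, src.CreatedOn)
OUTPUT src.ParentId, inserted.ParentId INTO @KeyMap (OldId, NewId);

-- Copy the child rows, translating the foreign key through the map
INSERT INTO dbB.dbo.Child (ParentId, Detail)
SELECT m.NewId, c.Detail
FROM dbA.dbo.Child AS c
JOIN @KeyMap AS m ON m.OldId = c.ParentId;

The same pattern repeats one level down for grandchild tables, using a second mapping table keyed on the child IDs.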
{ "language": "en", "url": "https://stackoverflow.com/questions/59357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is there a way to use GflAx to incorporate gradient colours? Ok, narrow question of the day. I'm using GflAx (from xnview) to create some graphic tiles. I would like to put some gradients in as well though. Is there a way I can do this within this product? There is also an SDK which is part of this product but I can't find that info there. A: You cannot do this directly, but you can create the gradient in another program, pull it in with "LoadBitmap", make the modifications you need on top of that 'background', and then save to a new file.
{ "language": "en", "url": "https://stackoverflow.com/questions/59377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Wildcard Subdomain Exceptions I have a wildcard subdomain enabled and dynamically parse the URL by passing it as-is to my index.php (ex. somecity.domain.com). Now, I wish to create a few subdomains that are static where I can install different application and not co-mingle with my current one (ex. blog.domain.com). My .htaccess currently reads: RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] Can I manipulate this .htaccess to achieve what I need? Can it be done through Apache? A: Your .htaccess does nothing useful, as Apache is probably configured with DirectoryIndex index.php. Well, it does move domain.com/a to domain.com/index.php, but I doubt that is what you want. Your wildcard virtualhost works because you probably have ServerAlias *.domain.com in your configuration, or a single virtualhost and DNS pointing to the address of your server. (When you have a single virtualhost, it shows up for any request, and the first listed virtualhost is the default one) You have to create new VirtualHosts for the static domains, leaving the default one as, well, the default one :) Check these tutorials that explain it all. A: You'll have to configure apache for those static sub-domains. The "catch-all" site will be the default site configured, so that one will catch the other ones. A: I'm not sure I understand completely what you need to accomplish, but it might helpful to setup virtual domains within your Apache configuration file. You can map them to folders on the drive with different applications installed. Each virtual domain is treated much like a root directory. I have my development environment setup locally on my Windows machine a lot like this: NameVirtualHost *:80 # Begin virtual host directives. <VirtualHost *:80> # myblog.com virtual host. ServerAdmin [email protected] DocumentRoot "c:/apache_www/myblog.com/www" ServerName myblog.com ServerAlias *.myblog.com ErrorLog "c:/apache_www/myblog.com/logs/log" ScriptAlias /cgi-bin/ "c:/apache_www/myblog.com/cgi-bin/" <Directory "c:/apache_www/myblog.com/www"> Options Indexes FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> If this does not help get you on the right track, then try researching the VirtualHost directive to come up with a solution. I find trying to do all this in an .htaccess to be cumbersome and difficult to manage. A: I don't know if you have cPanel installed on your host, but I was able to do this by adding a new subdomain * and then sending all that traffic to a particular subdomain, for example: *.domain.com -> master.domain.com. Then you can read out which URL you are at in master.domain.com and go from there.
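To sketch the VirtualHost idea from the answers above in Apache terms (hostnames and paths here are placeholders): define each static subdomain as its own name-based virtual host and list it before the wildcard one, since Apache uses the first virtual host whose ServerName or ServerAlias matches the request:

NameVirtualHost *:80

# Static blog application; matched before the catch-all below
<VirtualHost *:80>
    ServerName blog.domain.com
    DocumentRoot /var/www/blog
</VirtualHost>

# Wildcard catch-all that feeds every other subdomain to index.php
<VirtualHost *:80>
    ServerName domain.com
    ServerAlias *.domain.com
    DocumentRoot /var/www/app
</VirtualHost>

The existing .htaccess rewrite then only lives under /var/www/app, so blog.domain.com never passes through it.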
{ "language": "en", "url": "https://stackoverflow.com/questions/59380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: ColdFusion: Is it safe to leave out the variables keyword in a CFC? In a ColdFusion Component (CFC), is it necessary to use fully qualified names for variables-scoped variables? Am I going to get myself into trouble if I change this: <cfcomponent> <cfset variables.foo = "a private instance variable"> <cffunction name = "doSomething"> <cfset var bar = "a function local variable"> <cfreturn "I have #variables.foo# and #bar#."> </cffunction> </cfcomponent> to this? <cfcomponent> <cfset foo = "a private instance variable"> <cffunction name = "doSomething"> <cfset var bar = "a function local variable"> <cfreturn "I have #foo# and #bar#."> </cffunction> </cfcomponent> A: Especially in CFCs, proper scoping is important. The extra 'verbosity' is worth the clarity. Having variables slip out of their indended scope will cause severe problems and very hard to diagnose. Verbosity isn't always a bad thing. We name our functions and methods in descriptive manners like getAuthenticatedUser(), rather than gau(). Database columns and tables are best left descriptive like EmployeePayroll rather than empprl. Thus, being terse might be 'easier' when your short term memory is full of the project details, but being descriptive shows your intent and is helpful during the maintenance phase of an application, long after your short term memory has been filled with other stuff. A: I'll say Yes. Is it explicitly necessary? Nope. Can you get away with not doing it? Sure. Are you asking for trouble? Absolutely. If you have the following inside a cffunction: <cfset foo = "bar" /> That will not place that variable in the function local var scope, it will place it in the CFC's global VARIABLES scope, meaning that it is available to every method of that CFC. There are times when you may want to do this, but most of the time you'd be asking for a race condition. When any variable is being read by the server, if that variable is not explicity declared as part of a scope (REQUEST., SESSION., etc.) then ColdFusion will run ScopeCheck() to determine which scope the variable is in. Not only is this placing unnecessary overhead on your application server, it also introduces the ability for hijacking, whereby your variable is in one scope, but ScopeCheck() has found a variable of the same name higher in the precedence order. Always, always, ALWAYS, scope all variables. No matter how trivial. Even things like query names and looping indexes. Save yourself, and those that come behind you, from the pain. A: The short answer to your question is that no, you will probably not run into trouble attempting to do that. Outside the context of a UDF (even still inside a CFC), an un-scoped set statement implies the variables scope. In addition, in a CFC, the Variables scope is available to all of its functions; it is sort of the global scope within that CFC -- similar to the "this" scope, except variables scope is akin to "private" variables, whereas the this scope is akin to public variables. To test this, create test.cfc: <cfcomponent> <cfset foo = "bar" /> <cffunction name="dumpit" output="true"> <cfdump var="#variables#" label="cfc variables scope"> <cfdump var="#this#" label="cfc this scope"> </cffunction> </cfcomponent> and a page to test it, test.cfm: <cfset createObject("component", "test").dumpit() /> And the results will be: Now, to address another problem I see in your example code... In CF, all User Defined Functions have a special un-named scope commonly referred to as the "var" scope. 
If you do the following inside a UDF: <cfset foo = "bar" /> Then you are telling CF to put that variable into the var scope. To compound things a bit, you can run into problems (variable values changing when you weren't expecting them to) when you are not using the var scope in your inline UDFs. So the rule of thumb is to always, Always, ALWAYS, ALWAYS var-scope your function-internal variables (including query names). There is a tool called varScoper that will assist you in finding variables that need to be var-scoped. Last I checked it wasn't perfect, but it's definitely a start. However, it is a bad idea to reference (display/use) variables without a scope (obviously excepting var-scoped variables, as you can't specify the scope to read from) in CFCs or even on your standard CFM pages. As of CF7, there were 9 scopes that were checked in a specific order when you read a variable without specifying the scope, first match wins. With CF8, there could be more scopes in that list, I haven't checked. When you do this, you run the risk of getting a value from one scope when you are expecting it from another; which is a nightmare to debug... I assure you. ;) So in short: implying a variable's scope (on set) is not a terrible idea (though I usually specify it anyway); but inferring variable's scope (on read) is asking for trouble. A: Not explicitly scoping in the variables scope may work, but it's not a good idea, and honestly the only reason not to is out of laziness IMO. If you explicitly scope everything 1) you avoid potential issues, and 2) it makes the code easier to read because there's no question which scope things are in. To me it doesn't make the code more verbose (and certainly not unnecessarily verbose)--it's actually easier to read, avoids confusion, and avoids weird side effects that may crop up if you don't explicitly scope. A: It won't matter to specify "variables" when you create the variable, because foo will be placed in the variables scope by default; but it will matter when you access the variable. <cfcomponent> <cfset foo = "a private instance variable"> <cffunction name="doSomething"> <cfargument name="foo" required="yes"/> <cfset var bar = "a function local variable"> <cfreturn "I have #foo# and #bar#."> </cffunction> <cffunction name="doAnotherThing"> <cfargument name="foo" required="yes"/> <cfset var bar = "a function local variable"> <cfreturn "I have #variables.foo# and #bar#."> </cffunction> </cfcomponent> doSomething("args") returns "I have args and a function local variable" doAnotherThing("args") returns "I have a private instance of a variable and a function local variable." A: The simple answer to your question is: "NO, it isn't necessary" However, I think best practices would suggest that you do, in fact, use the variables indentifier when accessing those variables. In my opinion anyone who comes upon your code in the future, and is looking in the middle of a function, will instantly know the scoping of the variable without having to scan the top of the function the local functions. In fact, I add a little extra verbosity to my CFC UDFs by creating one local struct: <cfset var local = structNew() /> Then I put all my local vars in that struct and reference them that way so my code will look something like this: <cfset local.foo = variables.bar + 10 /> A: After reading your answers here's what I'm thinking: Yes, it's safe. In general, it's not necessary or useful to explicitly specify the variables scope. It just adds clutter to an already verbose language. 
Granted, there is one minor exception, as Soldarnal pointed out, where qualifying a variables-scoped variable is required. That is if you have a function local variable with the same name. (But you probably shouldn't do that anyway.) A: Best practices aside, i believe it could also depend on how your going to access your cfc's i have not had any problems leaving them out when creating objects and accessing them from coldfusion. However i think it might be needed when accessing and/or mapping them remotely via actionscript in flex/flash. A: Here's a very good CFC scope reference from Raymond Camden. Personally, I prefer to make a 'self' hash to avoid all confusion (notice I don't use the 'variables' scope in the functions): <cfcomponent> <cfset variables.self = structNew()> <cfscript> structInsert(variables.self, <key>, <value>); ... </cfscript> <cffunction name="foo"> self.<key> = <value> <cfreturn self.<key> /> </cffunction> ...
{ "language": "en", "url": "https://stackoverflow.com/questions/59390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Best way to migrate from VSS to Subversion? I'm a single developer looking to get off of Visual Source Safe and move to svn. A quick search brings up several tools, but I don't see a clear winner and I can't afford to spend a lot of time testing different tools. Has anyone done this successfully, and can recommend a method? A: The CodePlex Version of VSStoSVN is one of the best I've found. I had pretty bad results with the PumaCode version, but this one ran smooth. http://vss2svn.codeplex.com/ A: We did this migration recently at work. I strongly suggest: * *Just add the new code from VSS, take the hit that pre-svn history will have to stay in the old VSS repository. *If your VSS repository is still in use after the initial code dump, migrate changes using Vendor Branches. Ie, assume your VSS repository is a vendor and use dated tags to merge the changes into the SVN repository. Slightly more detail here. A: My company has developed a Source Safe to Subversion migration tool: http://www.abstrakti.com/Products/Krepost This tool was developed after having problems with every other tool, when we had to migrate a customer's repository. Let me know if you have any problems, I'll be glad to help you. Eric. A: I recommend just adding your code to a new Subversion repository rather than importing from VSS. VSS has a convoluted version control model that doesn't translate well to many other systems, and just starting fresh is usually the best way to avoid taking that clutter with you. If you need to keep the history around, make your VSS repository read-only. A: The following tool works quite well: http://www.pumacode.org/projects/vss2svn/wiki/RunningTheMigration It takes a bit of work to clean up the imported repository, but if you really want to keep your history it could be worth it. Edit: pumacode.org domain is gone, the code is now hosted on https://github.com/irontoby/vss2svn A: At my current job we just created a subversion repository, setup hook scripts to ignore all vss and generated files, and then just started importing the various projects with tortoiseSVN. Worked out pretty decent, we were up and running within a couple of hours. A: I totally agree with Jon Galloway's answer. I have also tried using vss2svn but found that there were a lot of problems with the imported repository and in the end decided that it was not worth the effort required to clean it up. We just imported a copy of the code into subversion and have gone back to VSS on the rare occasion that an older version of the code needed to be consulted. In my previous company we also used the same approach for migrating from ClearCase to Subversion, and I can't remember any occasion that we ever needed to go back into ClearCase to look at the history. The biggest issue was getting everyone to switch to the new repository at the same time, but as a single developer you shouldn't have any problem there! A: We downloaded and tested several migration tools and I would recommend Polarion SVNImporter. We used it to carry a selective migration of almost a Gb from a VSS6 repository to Subversion. As the source code is available, we were able to patch it and tailor to our specific needs (linked files detection). A: I have used vss2svn with great success. A: I have used some script (I can't remember which one) to assist in a VSS to SVN conversion. It was a bit painful and finicky but ended up working, and kept all history. 
I had to keep all the history for political reasons at the time; if I had my way I probably would have thrown away the history and imported all the code into SVN. Also for political reasons, I wrote some really hacky scripts that kept VSS updated with changes from Subversion. These worked for a while but kept breaking every week or two, until somebody renamed a directory or something and the whole thing fell apart. By that time it was okay to simply continue using Subversion.
{ "language": "en", "url": "https://stackoverflow.com/questions/59392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Rhino Mocks: How can I mock out a method that transforms its input? I have a Data Access Object TransactionDao. When you call TransactionDao.Save(transaction) I would like it to set a transaction.IsSaved=true flag (this is a simplification; the actual thing I'm trying to do is not quite so banal). So when mocking my TransactionDao with RhinoMocks how can I indicate that it should transform its input? Ideally I would like to write something like this: Expect.Call(delegate {dao.Save(transaction);}).Override(x => x.IsSaved=true); Does anyone know how to do this? Though I got a hint on how to do it from the answer specified below, the actual type signature is off; you have to do something like this: Because of what Mark Ingram posted, it seems like the best answer, though nobody's explicitly said it, is to do this: public delegate void FakeSave(Transaction t); ... Expect.Call(delegate {dao.Save(t); }).Do( new FakeSave(delegate(Transaction t2) { t.IsSaved = true; })); A: Gorge, The simplest solution which I found that applies to your question is the following: Expect.Call(() => dao.Save(transaction)) .Do(new Action<Transaction>(x => x.IsSaved = true)); So you don't need to create a special delegate or anything else. Just use Action, which is in the standard .NET 3.5 libraries. Hope this helps. Frantisek A: You can accomplish this using the Do callback: Expect.Call(delegate {dao.Save(transaction);}) .Do(x => x.IsSaved = true); A: You should mock the transaction and make it return true for IsSaved, if you can mock the transaction of course. ITransaction transaction = _mocker.DynamicMock<ITransaction>(); Expect.Call(transaction.IsSaved).IgnoreArguments().Return(true); _mocker.ReplayAll(); dao.Save(transaction);
{ "language": "en", "url": "https://stackoverflow.com/questions/59396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Pex users: what are your Impressions of Pex and Automated Exploratory Testing in general? Those of you who have used Pex, what do you think are its advantages and disadvantages as a tool? Also, what do you think are the advantages and disadvantages of "Automated Exploratory Testing" in general, as a supplement to TDD/Unit Testing? A: If you look for literature on writing Theories (google David Saff) - which are a more general way of writing unit tests - and use Pex as a theory explorer, I've found a step change in productivity from my experience so far. I've just written a blog post detailing my experiences of Pex in TDD, here: http://taumuon-jabuka.blogspot.com/2009/01/theory-driven-development-using_11.html and as I said - I see it as TDD on steroids! It in no way replaces TDD, but enhances the activity. A: I'm really pumped about Pex. It will provide tests for edge cases that you won't ever dream up, especially if your team is small and the person writing the methods is the same as the person writing the tests. It will also provide contractual obligations that your methods will obey. A: Test-first development makes you structure your code for testability. In this respect, Pex finds clever and awkward paths through your code, helping out beyond simple coverage metrics. A major forte of Pex with Moles is that it enables tracking of side effects when doing brownfield development: run Pex once and save the outputs, then apply code changes, and run Pex again to see what got broken. A: Pex lets you write parameterized unit tests. In that sense, it totally fits into the TDD/unit testing flow: write the test, have Pex 'explore' it, find some failing tests, fix the code, and so forth. The big advantage is that you can express your tests for classes of inputs, not just a couple of hard-coded values. This gives more expressiveness for writing tests and also forces you to think about the invariant/expectation that your code should fulfill (i.e. it's harder to write assertions). A: I think Pex as an exploratory testing tool is really intriguing. In that regard, I see it as something I'd want to hand off to QA to use. As a TDD tool, it needs some work, as TDD is a design activity. However, I do like the direction that Peli is heading. There's something to be said for automated assisted design. For example, just because TDD is a design tool, there's no reason I can't have an automated tool point out potential edge cases while I'm designing, right? Build quality in from the start. Check out this post in which Peli uses Pex in a TDD style workflow. http://blog.dotnetwiki.org/TDDingABinaryHeapWithPexPart1.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/59398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Clean up Designer.vb file in Visual Studio 2008 I noticed that my Designer.vb file of one of my forms has a lot of controls that aren't even used or visible on my form. This is probably from copying controls from my other forms. Is there a way to clean up the Designer.vb file and get rid of all the unused controls? **UPDATE: This is for a Windows Forms project. A: The only real solution I see is to select all the controls in the designer and copy them into a new form. That way the unused controls that are never created should not follow you to the new form.
{ "language": "en", "url": "https://stackoverflow.com/questions/59418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is a bool read/write atomic in C# Is accessing a bool field atomic in C#? In particular, do I need to put a lock around: class Foo { private bool _bar; //... in some function on any thread (or many threads) _bar = true; //... same for a read if (_bar) { ... } } A: As stated above, bool is atomic, but you still need to remember that it also depends on what you want to do with it. if(b == false) { //do something } is not an atomic operation, meaning that the value of b could change before the current thread executes the code after the if statement. A: bool accesses are indeed atomic, but that isn't the whole story. You don't have to worry about reading a value that is 'incompletely written' - it isn't clear what that could possibly mean for a bool in any case - but you do have to worry about processor caches, at least if details of timing are an issue. If thread #1 running on core A has your _bar in cache, and _bar gets updated by thread #2 running on another core, thread #1 will not see the change immediately unless you add locking, declare _bar as volatile, or explicitly insert calls to Thread.MemoryBarrier() to invalidate the cached value. A: The approach I have used, and I think is correct, is private volatile bool b = false; private readonly object b_lock = new object(); ... rarely signal an update with a large state change... lock (b_lock) { b = true; // other state changes } ... another thread ... if (b) { lock (b_lock) { if (b) { // pick up the other state b = false; } } } The goal was basically to avoid having to repetitively take a lock on every iteration just to check whether we needed it in order to pick up a large amount of state-change information, which occurs rarely. I think this approach works. And if absolute consistency is required, I think volatile would be appropriate on the b bool. A: Yes. Reads and writes of the following data types are atomic: bool, char, byte, sbyte, short, ushort, uint, int, float, and reference types. as found in C# Language Spec. Edit: It's probably also worthwhile understanding the volatile keyword.
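One more pattern worth mentioning alongside volatile and lock: the Interlocked class has no bool overloads, so a common sketch for an atomic test-and-set uses an int flag instead (the class and field names below are made up for illustration):

using System.Threading;

class Worker
{
    // 0 = idle, 1 = running; int rather than bool because Interlocked.Exchange has no bool overload
    private int _running;

    public void RunOnce()
    {
        // Atomically set the flag to 1 and read back the previous value
        if (Interlocked.Exchange(ref _running, 1) == 1)
            return;   // another thread beat us to it

        try
        {
            // ... do the rarely-needed work here ...
        }
        finally
        {
            Interlocked.Exchange(ref _running, 0);
        }
    }
}

Interlocked operations also act as full memory barriers, which addresses the processor-cache concern raised above without marking anything volatile.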
{ "language": "en", "url": "https://stackoverflow.com/questions/59422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "95" }
Q: I have a link icon next to each link. How do I exclude the link icon from images? I've got the following in my .css file creating a little image next to each link on my site: div.post .text a[href^="http:"] { background: url(../../pics/remote.gif) right top no-repeat; padding-right: 10px; white-space: nowrap; } How do I modify this snippet (or add something new) to exclude the link icon next to images that are links themselves? A: If you set the background color and have a negative right margin on the image, the image will cover the external link image. Example: a[href^="http:"] { background: url(http://en.wikipedia.org/skins-1.5/monobook/external.png) right center no-repeat; padding-right: 14px; white-space: nowrap; } a[href^="http:"] img { margin-right: -14px; border: medium none; background-color: red; } <a href="http://www.google.ca">Google</a> <br/> <a href="http://www.google.ca"> <img src="http://upload.wikimedia.org/wikipedia/en/thumb/4/4a/Commons-logo.svg/50px-Commons-logo.svg.png" /> </a> edit: If you've got a patterned background this isn't going to look great for images that have transparency. Also, your href^= selector won't work on IE7 but you probably knew that already A: It might be worth it to add a class to those <a> tags and then add another declaration to remove the background: div.post .text a.noimage{ background:none; } A: If you have the content of the links as a span, you could do this, otherwise I think you would need to give one scenario a class to differentiate it. a > span { background: url(../../pics/remote.gif) right top no-repeat; padding-right: 10px; white-space: nowrap; } a > img { /* any specific styling for images wrapped in a link (e.g. polaroid like) */ border: 1px solid #cccccc; padding: 4px 4px 25px 4px; } A: You need a class name on either the a elements you want to include or exclude. If you don't want to do this in your server side code or documents, you could add the classes with javascript as the page is loaded. With the selection logic wrapped up elsewhere, your rule could just be: a.external_link { background: url(../../pics/remote.gif) right top no-repeat; padding-right: 10px; white-space: nowrap; } It would be possible with XPath to create a pattern like yours that would also exclude a elements that had img children, however this facility has been repeatedly (2002, 2006, 2007) proposed and rejected for CSS, largely on the grounds it goes against the incremental layout principles. So, while it is possible to do neat conditional content additions as you have with a contextual selector and a prefix match on the href attribute, CSS is considerably weaker than a general purpose programming language. To do more complex things you need to move the logic up a level and write out simpler instructions for the style engine to handle.
{ "language": "en", "url": "https://stackoverflow.com/questions/59423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Creating a mini-site in ASP.NET that works on Blackberry, Windows Mobile, and iPhone I'm working on an ASP.NET website which targets desktop browsers. We want to enable an optional mobile view (e.g. http://m.sample.com) which will offer a few simple pages which will be mostly text. There will be not need for AJAX or even Javascript, and there's no user input - it's really just tables of text with a few links to navigate between the pages. What's the best way to set this up so it will work on Blackberry, Windows Mobile, and iPhone? Should I be looking at ASP.NET Mobile support, or just rolling my own pages? UPDATE: This was for m.microsoftpdc.com. We went with the /Mobile subfolder approach, and used Scott Hanselman's iPhone tips for viewport and other stuff. A: I have done this in the past and the way I did it is by separating the pages by creating a directory for Desktop and creating a directory for Mobile. This gives you better separation of the views, since in reality they are a lot different. In ASP.NET Forms I used the Model View Presenter pattern a lot since it went with the way ASP.NET Forms functioned the best. That way I could reuse some code between the two views. Then in your index.aspx page for the site, you just parse the user-agent string of the request to figure out the browser and redirect accordingly. So, for example a person with an iphone comes to your site, you parse the user-agent string and figure out it is an iphone. Then you redirect to m.sample.com which is pointing to Mobile/Index.aspx page. Otherwise you redirect to Desktop/Index.aspx. I did the parsing of the user-agent string at the page level, but of course you could do this kind of logic in the HttpModule or HttpHandler level also. Edit I just rolled my own pages since we weren't targeting phones that have WML support. That would be the only reason in my opinion to use the ASP.NET Mobile support, is if you want to support WML enabled phones also. A: You have only identified 3 handset 'platforms' as your target. One thing to consider is that there are a LOT more non-Blackberry / Windows Mobile / iPhone handsets out there and perhaps they will be the majority of your audience. (?) From how you describe your application (JUST text), you should be able to hit pretty much any Internet-enabled cell phone out there, which is pretty much every phone sold in the last eight years. Rolling your own will likely give you more control over how the content is displayed and navigated, which your users will appreciate, but you will lose much of the automatic formatting and advanced interaction capability that something like ASP.NET Mobile may give you. It is a trade-off that you might want to consider in light of where you anticipate your user community will go with this in the next 2 years. Is it possible that they may ask for more of the desktop capability on the mobile side? If it is a likely 'yes' (even more so when I think of the 3 platforms you are targeting) then I'd recommend some automated formatting / enablement tool like ASP.NET mobile. If not, just roll your own and leave it simple and easy for your visitors to use. A: I know from personal experience there really isn't much you need to do for the iPhone. I usually rather just browse your regular site with my iPhone. Just my two cents though. A: Different style sheets based on user agent will handle the "pretty". Are you using master pages? You could also set up different masters based on the device using device filters. 
A: At Mix this year (2009) mdbf was announced. See this video or this blog post by Scott Hanselman for examples on using it to identify and redirect mobile browsers as needed.
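To make the user-agent redirect idea from the first answer concrete, here is a minimal C# sketch; the host name m.sample.com comes from the question, but the exact set of substrings checked is an assumption, and it would live in Global.asax so the /Desktop and /Mobile pages stay free of browser sniffing:

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        HttpRequest request = HttpContext.Current.Request;
        string ua = request.UserAgent ?? string.Empty;

        // Request.Browser.IsMobileDevice covers many handsets; the substring
        // checks catch a few that the browser definition files may miss.
        bool isMobile = request.Browser.IsMobileDevice
            || ua.IndexOf("iPhone", StringComparison.OrdinalIgnoreCase) >= 0
            || ua.IndexOf("BlackBerry", StringComparison.OrdinalIgnoreCase) >= 0
            || ua.IndexOf("Windows CE", StringComparison.OrdinalIgnoreCase) >= 0;

        if (isMobile && !request.Url.Host.StartsWith("m.", StringComparison.OrdinalIgnoreCase))
        {
            HttpContext.Current.Response.Redirect("http://m.sample.com" + request.RawUrl);
        }
    }
}

Handling this once per request keeps the rest of the site unaware of which view is being served.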
{ "language": "en", "url": "https://stackoverflow.com/questions/59424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How do I find records added to my database table in the past 24 hours? I'm using MySQL in particular, but I'm hoping for a cross-vendor solution. I'm using the NOW() function to add a timestamp as a column for each record. INSERT INTO messages (typeId, messageTime, stationId, message) VALUES (?, NOW(), ?, ?) A: SELECT * FROM messages WHERE DATE_SUB(CURDATE(),INTERVAL 1 DAY) <= messageTime A: The SQL Server query is: Select * From Messages Where MessageTime > DateAdd(dd, -1, GetDate()) As far as I can tell the (untested!) MySQL equivalent is Select * From Messages Where MessageTime > ADDDATE(NOW(), INTERVAL -1 DAY) A: For Sybase SQL Anywhere: Select * From Messages Where MessageTime > dateadd( day, -1, now() ) A: For Oracle SELECT * FROM messages WHERE messageTime > SYSDATE - 1 (The pseudo variable SYSDATE includes the time, so SYSDATE - 1 will give you the last 24 hrs) A: There is no cross database solution, as most of them have their own date handling (and mainly interval representation) syntax and semantics. In PostgreSQL it would be SELECT * FROM messages WHERE messagetime >= now() - interval '1 day' A: If you are accessing this from an API based client (I'm guessing that is the case because of the '?'s in the query) you can do this from your program rather than through SQL. Note: The rest is JDBC syntax; other APIs/languages will have different syntax, but the idea is conceptually the same. On the insert side do PreparedStatement stmt = connection.prepareStatement( "INSERT INTO messages " + "(typeId, messageTime, stationId, message) VALUES " + "(?, ?, ?, ?)" ); stmt.setInt(1, typeId); stmt.setDate(2, new java.sql.Date(System.currentTimeMillis())); stmt.setInt(3, stationId); stmt.setString(4, message); On the query side do: PreparedStatement stmt = connection.prepareStatement( "SELECT typeId, messageTime, stationId, message " + "FROM messages WHERE messageTime > ?"); long yesterday = System.currentTimeMillis() - 86400000L; // 86400 sec/day * 1000 ms stmt.setDate(1, new java.sql.Date(yesterday)); That should work in a portable manner.
{ "language": "en", "url": "https://stackoverflow.com/questions/59425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Which environment, IDE or interpreter to put in practice Scheme? I've been making my way through The Little Schemer and I was wondering what environment, IDE or interpreter would be best to use in order to test any of the Scheme code I jot down for myself. A: I would highly recommend both Chicken and Gauche for scheme. A: PLT Scheme (DrScheme) is one of the best IDEs out there, especially for Scheme. The package you get when downloading it contains all you need for developing Scheme code - libraries, documentation, examples, and so on. Highly recommended. A: If you just want to test your scheme code, I would recommend PLT Scheme. It offers a very complete environment, with debugger, help, etc., and works on most platforms. But if you also want to get an idea of how the interpreter behind the scenes works, and have Visual Studio, I would recommend Tachy. It is a very lightweight scheme interpreter written in c#. It allows you to debug just your scheme code, or also step through the c# interpreter behind the scenes to see what is going on. A: Racket (formerly Dr Scheme) has a nice editor, several different Scheme dialects, an attempt at visual debugging, lots of libraries, and can run on most platforms. It even has some modes specifically geared around learning the language. A: Just for the record I have to mention IronScheme. IronScheme will aim to be a R6RS conforming Scheme implementation based on the Microsoft DLR. Version 1.0 Beta 1 was just released. I think this should be good implementation for someone that is already using .NET framework. EDIT Current version is 1.0 RC 1 from Oct 23 2009 A: Google for the book's authors (Daniel Friedman and Matthias Felleisen). See whether either of them is involved with a popular, free, existing Scheme implementation. A: It doesn't matter, as long as you subscribe to the mailing list(wiki/irc/online-community-site) for the associated community. It's probably worth taking a look at the list description and archives to be sure you are in the right one. Most of these are friendly and welcoming to newcomers, so don't be afraid to ask. It's also worth searching the archives of their mailing list(or FAQ or whatever they use) when you have a question - just in case it is a frequent question. Good Luck! A: Guile running under Geiser within Emacs provides a nice, lightweight implementation for doing the exercises. Racket will also run under Geiser and Emacs, though I personally prefer Guile and Chez Scheme a bit more. Obviously installation of each will depend on your OS. I would recommend using Emacs version 24 and later since this allows you to use Melpa or Marmalade to install Geiser and other Emacs extensions. The current version of Geiser also works quite nicely with Chicken Scheme, Chez Scheme, MIT Scheme and Chibi Scheme. A: LispMe works on a Palm Pilot, take it anywhere, and scheme on the go. GREAT way to learn scheme. A: I've used PLT as mentioned in some of the other posts and it works quite nicely. One that I have read about but have not used is Allegro Common LISP Express. I read a stellar review about their database app called Allegro Cache and found that they are heavy into LISP. Like I said, I don't know if it's any good, but it might be worth a try. A: I am currently working through the Little Schemer as well and use Emacs as my environment, along Quack, which adds additional support and utilities for scheme-mode within Emacs. If you are planning on experimenting with other Lisps (e.g. 
Common Lisp), Emacs has excellent support for those dialects as well (Emacs itself can be customized with its own dialect of Lisp, appropriately named Emacs Lisp). As far as Scheme implementations go, I am currently using Petit Chez Scheme, which is an interpreted, freely distributable version of Chez Scheme (which uses a compiler and costs money to obtain a license).
{ "language": "en", "url": "https://stackoverflow.com/questions/59428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }