Q: SQL Select Bottom Records I have a query where I wish to retrieve the oldest X records. At present my query is something like the following:
SELECT Id, Title, Comments, CreatedDate
FROM MyTable
WHERE CreatedDate > @OlderThanDate
ORDER BY CreatedDate DESC
I know that normally I would remove the 'DESC' keyword to switch the order of the records, however in this instance I still want to get records ordered with the newest item first.
So I want to know if there is any means of performing this query such that I get the oldest X items sorted such that the newest item is first. I should also add that my database exists on SQL Server 2005.
A: Why not just use a subquery?
SELECT T1.*
FROM
(SELECT TOP X Id, Title, Comments, CreatedDate
FROM MyTable
WHERE CreatedDate > @OlderThanDate
ORDER BY CreatedDate) T1
ORDER BY CreatedDate DESC
A: Embed the query. You take the top x when sorted in ascending order (i.e. the oldest) and then re-sort those in descending order ...
select *
from
(
SELECT top X Id, Title, Comments, CreatedDate
FROM MyTable
WHERE CreatedDate > @OlderThanDate
ORDER BY CreatedDate
) a
order by createddate desc
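For completeness, a sketch of an equivalent approach that SQL Server 2005 also supports, using ROW_NUMBER() instead of a TOP subquery (@X here is a hypothetical variable holding how many of the oldest rows you want):
;WITH Oldest AS
(
    SELECT Id, Title, Comments, CreatedDate,
           ROW_NUMBER() OVER (ORDER BY CreatedDate ASC) AS rn -- rn = 1 is the oldest row
    FROM MyTable
    WHERE CreatedDate > @OlderThanDate
)
SELECT Id, Title, Comments, CreatedDate
FROM Oldest
WHERE rn <= @X            -- keep only the X oldest rows
ORDER BY CreatedDate DESC -- but present them newest first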
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Priority of a query in MS SQL Is there a way to tell MS SQL that a query is not too important and that it can (and should) take its time?
Likewise is there a way to tell MS SQL that it should give higher priority to a query?
A: SQL Server does not have any form of resource governor yet. There is a SET option called QUERY_GOVERNOR_COST_LIMIT, but it's not quite what you're looking for: it prevents queries from executing based on their estimated cost rather than controlling the resources they use.
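For illustration, the option is set per session, and the limit is the optimizer's estimated cost in seconds; a minimal sketch:
SET QUERY_GOVERNOR_COST_LIMIT 300 -- refuse to start queries estimated to run longer than 300 seconds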
A: Not in versions below SQL 2008. In SQL Server 2008 there's the Resource Governor. Using that, you can assign logins to groups based on properties of the login (login name, application name, etc.). The groups can then be assigned to resource pools, and limitations or restrictions in terms of resources can be applied to those resource pools.
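As a sketch of what that looks like in T-SQL on SQL Server 2008+ (the pool, group, and login names are hypothetical):
-- Cap CPU for sessions coming from a low-priority login.
CREATE RESOURCE POOL LowPriorityPool WITH (MAX_CPU_PERCENT = 20);
CREATE WORKLOAD GROUP LowPriorityGroup USING LowPriorityPool;
GO
-- The classifier function (created in master) runs for each new session
-- and returns the name of the workload group that session belongs to.
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'batch_login' -- hypothetical low-priority login
        RETURN N'LowPriorityGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;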
A: I'm not sure if this is what you're asking, but I had a situation where a single UI click added 10,000 records to an email queue (lots of data in the body). The emails went out over the next several days, so it didn't need to be a high priority; in fact it would bog down the server every time it happened.
I split the procedure into 10,000 individual calls, ran the process on the UI in a different thread (set to low priority) and set it to sleep for a second after running the procedure. It took a while, but I had very granular control over exactly what it was doing.
btw, this was NOT spam, so don't flame me thinking it was.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: How do you avoid Technical Debt while still keeping true to Agile, i.e. avoiding violation of YAGNI and avoiding BDUF? Technical Debt via Martin Fowler, via Steve McConnell
YAGNI (You Ain't Gonna Need It) via Wikipedia
BDUF (Big Design Up Front) via Wikipedia
UPDATE: To clarify the question, I think I can also state it this way and keep my meaning:
"In what ways do you, as an Agile practioner, find the right balance between "quick and dirty" (unintentionally risking Technical Debt while attempting to adhere to YAGNI) and over-engineering (BDUF) within each iteration?"
A: It seems that if you stick with the "plan, do, adapt; plan, do, adapt" idea of agile (iterations, iteration reviews) you would avoid those things by default. BDUF is so contrary to the idea of agile estimating and planning that if you are really agile, you will avoid BDUF automatically.
The purpose of release and iteration planning meetings is to make sure you are adding the most valuable features to the project for that iteration. If you keep that in mind, you'll avoid YAGNI violations for free.
I would very strongly recommend the Mike Cohn books on agile planning:
*
*User Stories Applied
*Agile Estimating and Planning
Update: After your clarification about avoiding YAGNI and BDUF within an iteration...
BDUF...If I felt a feature was not clearly defined before I started work on it, I would create a small "feature" or story to account for the design type portion of the work needed. So that maybe the smaller story has a story point estimate of 1 instead of the real feature's 5. That way, the design is time-boxed into the smaller story, and you will be driven to move on to the feature itself.
To avoid violating YAGNI I would work to be very clear about what the customer expects for a feature within an iteration. Only do work that maps to what the customer expects. If you think something extra should be added, create a new feature for it, and add it to the backlog of work to be done. You would then persuade the customer to see the benefit of it; just as the customer would push for a feature being done at a certain point in time.
A: There was an interesting discussion of Technical Debt based on your definition of done on HanselMinutes a couple of weeks ago -- What is Done. The basics of the show were that if you re-define 'Done' to increase perceived velocity, then you will amass Technical Debt. The corollary of this is that if you do not have a proper definition of 'Done' then you most likely are acquiring a list of items that will need to be finished before release irrespective of the design methodology.
A: You seem to say that "YAGNI" implies "quick and dirty". I do not see that.
As an agile programmer, I practice test-driven development, code review and continuous integration.
*
*Test-driven development (TDD), as a process, is a good way to avoid YAGNI. Code that's just there "in case it will be useful" tends to be untested and hard to test.
*TDD also largely removes the compulsion to BDUF: when your process is to start by sitting down and start doing something that actually delivers value, you cannot indulge in BDUF.
*TDD, as a design practice means that the big design will emerge as you gain experience with the problem, and refactor real code.
*Continuous integration means that you design your process so your product is essentially releasable at any time. That means that you have an integrated quality process that tries to prevent the quality of the mainline from dropping.
In my experience, the main forms of technical debt are:
*
*Code not covered by the automated test suite. Do not allow that to happen, except for very localized components that are especially hard to test. Untested code is broken code.
*Ugly code that violates the coding standard. Do not allow that to happen. That is one of the reasons why you need to build code review into the continuous integration process.
*Code and tests that smell and need refactoring to be more easily modified or understood. This is the benign form of technical debt. Use your experience to know when to accumulate it, and when to repay it.
Not sure if that answered your question, but I had fun writing it.
Troy DeMonbreun commented:
No, that wasn't my point... "quick and dirty" = "(unintentionally risking Technical Debt while attempting to adhere to YAGNI)". That does not mean YAGNI is only quick and dirty. The phrase "quick and dirty" is what I used to quote Martin Fowler in his description of Technical Debt
Adhering to YAGNI is another way of saying KISS. Violating YAGNI increases the technical debt. There is no tension between adhering to YAGNI and keeping the technical debt low.
I think I might still be missing the point of your question.
A: I find Robert Martin's Test Driven Development
(TDD) approach helps with these concerns.
Why?
*
*You only have to write enough code to pass the next test.
*I think testable code is cleaner.
*The design has to feed into tests which can help keep the design focused.
*When you do have to change (refactor) you have tests to fall back on
Regardless of when the tests are written (before or after), I find writing
the test helps you make practical decisions. E.g., we picked design A or B because
A is more testable.
A: The 'traditional' XP answer is refactoring combined with automated unit testing.
But it's a very interesting question philosophically. I don't believe you need to avoid technical debt, just keep it at a manageable level. Steve McConnell's article is good on this balance - the reason the analogy works is that it's normal and acceptable to build up financial debt in a company, as long as you accept the costs and risks - and technical debt is fine too.
Maybe the answer itself also lies in the principle of YAGNI. You Ain't Gonna Need the technical debt paid off until you do, and that's when you do the refactor. When you're doing substantial work on an area of the system with technical debt, take a look at how much short-term difference it will make to do the redesign. If it's enough to make it worthwhile, pay it off. McConnell's suggestion of maintaining a debt list will help you to know when to make this consideration.
I don't suppose there is an absolute answer to this - like many things it's a judgment call based on your experience, intuition and your analysis in each particular situation.
A: Just do the simplest thing that works. I agree with Ayende when he says that the key to being agile is to "ship it, often". A regular release cycle like this will mean that there is no time for BDUF and it will also dissuade developers from violating YAGNI.
A: Where I work, I believe the way we avoid the debt is by spinning through the cycle quickly, i.e. showing functionality to the would-be end user and getting either a sign-off that it should be pushed to test, or a rejection saying what was wrong, which gives an updated requirement. By doing this repeatedly within an iteration, a lot can be discovered about what the user wants by trying this and that.
A key point is to try to do exactly what the user wants, as doing more violates YAGNI and brings in BDUF, while the idea of refining the requirements over and over again is at the heart of Agile, to my mind.
A: That's why it's always easier to write nice "academic papers" talking about how Agile development is good, what the "best practices" are, and so on.
That's why you find a lot of "suited engineers" making up new software engineering techniques.
Process is important, keeping best practices is cool, but above any other thing, common sense drives the design process. Software is developed by people, so YAGNI really should be:
I might not need it, but maybe I will, because in my concrete business/company/department this kind of thing does happen; or I will need it but I just don't have the time right now, so a quick and dirty hack will make the cash and keep my job; or I might need it, and refactoring later will be a pain in the ass costing 10 times more than just doing it now from scratch, and I have the time NOW.
So use your common sense, trust it, or trust the common sense of the people working for you. Don't take every academic paper as proven fact; experience is the best teacher, and your company should improve its way of making things with time and its own experience.
Edit: Incidentally, TDD is the opposite of YAGNI you're building test before even knowing if you are gonna need them. Seriously, stop listening to academics!! There's no magical way to produce software.
A: Surely being agile is going to keep your TD down for any given project?
The fact that you are implementing what the customer wants, i.e. with their feedback, is keeping TD to a minimum.
A: The problem may actually be at a higher level: with management and product owners concerned with deadlines as opposed to the developers themselves.
At my last place of employment, I worked on legacy code but was in contact with people developing brand new code using (supposedly) Agile and TDD.
Now TDD is supposed to be Red - Green - Refactor: write failing tests, do the minimum to pass the test, then rework the code to make it more maintainable while making sure it still passes the test.
But… progress was measured by how quickly user stories got implemented, i.e. how quickly new functionality got added. The thing about refactoring is it doesn’t add any new functionality. Yes, it is very important but it doesn’t have the same immediate impact as showing the product owner the new functionality or even the lines of test code to go with it.
As deadlines loomed closer and overtime became standard, there was an incentive to skimp on the refactoring part. The more one does this, the more quickly one gets through the user stories, which seemed to be what management cared about.
Now, this did lead to technical debt because on a number of occasions to get the next user story done it turned out to be necessary to go back and refactor - really rewrite - code for the previous bunch of user stories. Management became irate at this because from their point of view the user story in question did not look to be all that different from the previous ones so why was the estimate for it 10 times longer?
(There was an ugly game theory aspect to things as well. If you or your team were conscientious about refactoring that didn't prevent you from getting stuck cleaning up after another team that wasn't. Of course, that other team looked better because they got more user stories done in the same amount of time.)
My point here is perhaps TDD does not lead to technical debt if done properly. But for it to be done properly there must be genuine buy in from higher up.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: ASP.NET Convert Invalid String to Null In my application I have a TextBox in a FormView bound to a LinqDataSource like so:
<asp:TextBox ID="MyTextBox" runat="server"
Text='<%# Bind("MyValue") %>' AutoPostBack="True"
ontextchanged="MyTextBox_TextChanged" />
protected void MyTextBox_TextChanged(object sender, EventArgs e)
{
MyFormView.UpdateItem(false);
}
This is inside an UpdatePanel so any change to the field is immediately persisted. Also, the value of MyValue is decimal?. This works fine unless I enter any string which cannot be converted to decimal into the field. In that case, the UpdateItem call throws:
LinqDataSourceValidationException -
Failed to set one or more properties on type MyType. asdf is not a valid value for Decimal.
I understand the problem, ASP.NET does not know how to convert from 'asdf' to decimal?. What I would like it to do is convert all these invalid values to null. What is the best way to do this?
A: I think you should handle the Updating event of the LinqDataSource on your page. Do your check for invalid strings (use a TryParse method or something) and then continue with the base class update.
(Edit: My intuition lines up with what's recommended here)
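A minimal sketch of what that handler might look like — the handler name, the MyType cast, and the exact recovery flow are assumptions to verify against your page:
protected void MyLinqDataSource_Updating(object sender, LinqDataSourceUpdateEventArgs e)
{
    // When a bound value can't be converted, the Updating event still fires
    // with the validation exception in e.Exception instead of it being thrown.
    var validationEx = e.Exception as LinqDataSourceValidationException;
    if (validationEx != null && validationEx.InnerExceptions.ContainsKey("MyValue"))
    {
        ((MyType)e.NewObject).MyValue = null; // treat the invalid input as null
        e.ExceptionHandled = true;            // suppress the original exception
        // Depending on the page flow you may need to re-issue the update
        // (e.g. MyFormView.UpdateItem(false)) after normalizing the value.
    }
}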
A: Not familiar with ASP, but in .NET, couldn't you just do something along the lines of
protected void MyTextBox_TextChanged(object sender, EventArgs e)
{
    TextBox tb = (TextBox)sender;
    decimal d;
    // If the text isn't a valid decimal, blank the field before persisting.
    if (!Decimal.TryParse(tb.Text, out d))
    {
        tb.Text = String.Empty;
    }
    MyFormView.UpdateItem(false);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I open a cmd window in a specific location? How can I open a cmd window in a specific location without having to navigate all the way to the directory I want?
A: For Windows 7 or later, inside the target folder's address bar just type cmd. That is it. It will open up a command prompt with the path set to your present directory.
A: From Windows 7 up to some versions of Windows 10, it is very simple to open a command prompt anywhere you wish, without navigating with the cd command.
Try the following:
Hold the Shift key and right-click inside the folder.
It will produce a context menu; simply select the "Open command window here" option.
The latest versions of Windows 10 have replaced this feature with "Open PowerShell here".
A: In File Explorer, press and hold the Shift key, then right click or press and hold on a folder or drive that you want to open the command prompt at that location for, and click/tap on Open Command Prompt Here option.
A: I see that there are multiple answers, some quite complex :), which is strange to see. You just have to open any Windows folder window, navigate to your desired folder, focus on the address bar, type "cmd", and press Enter. You will be presented with a new command prompt window opened directly at the folder you had navigated to.
A: Make a shortcut to cmd.exe with the parameters /S /K pushd "C:\YOUR FOLDER\"
A: For Windows:
Select the folder which you want to open in a command prompt. With it selected,
keep the Shift key pressed, right-click, and choose the option
"Open command window here"
A: In Windows, go to the specific folder, click on the File Explorer path, clear it, type cmd, and press Enter. Your specific folder's path will open in cmd.
A: Try out this "PowerToy" from Microsoft:
Open Command Window Here
This PowerToy adds an "Open Command Window Here" context menu option on file system folders, giving you a quick way to open a command window (cmd.exe) pointing at the selected folder.
EDIT: This software will not work on any version of Windows apart from Windows XP.
A: In Windows Explorer, Shift + right-click on a folder and an "Open command window here" option shows up in the menu (or its equivalent in the language of your Windows version).
A: <===||==========> On Windows 10 <==========||===>
Assuming that in File Explorer you have opened the target directory/folder, do this :
*
*Click on the address bar, or alternatively press Alt + D
*Now when address bar is highlighted, type cmd in the bar.
*Press Enter key
For a powershell window :
*
*Just press Alt + f + s + a
A: On Windows Vista, Windows 7 and Windows 10 simply hold down the Shift key and right-click on a folder.
The context menu will contain an entry titled: "Open command window here"
Update: Type "cmd" in the address bar of Explorer and press enter
Update 2: In windows 10, go to file menu and select "Open Windows PowerShell". There is an option for running as administrator.
Update 3: You can also add a quick access shortcut by going to file menu, right click on "Open windows Powershel" and select "Add to Quick Access Toolbar" and after that with one single click you can access the powershell immediately
A: This might be what you want:
cmd /K "cd C:\Windows\"
Note that in order to change drive letters, you need to use cd /d. For example:
C:\Windows\System32\cmd.exe /K "cd /d H:\Python\"
(documentation)
A: Right-click the desktop, navigate to New, and from the sub-menu select "Shortcut" → browse to the Windows directory (or folder) and then to the System32 directory, and click OK.
Add a \ and "cmd.exe" (without the quotes) to the command string. It should look like this:
C:\WINDOWS\System32\cmd.exe
Click Next and Finish. Right-click the new CMD icon on your desktop, select Properties, and next to the "Start in" option, delete the line and add the path to wherever the directory is that you want it to start in... For example, C:\temp\mp3. Then click OK.
A: There is a simpler way I know. Find cmd.exe in the Start menu and send it to the desktop as a shortcut. Then right-click it and choose Properties. You will see the "Start in" box under "Target". Change that directory to whatever you'd like it set to. Click OK and start the cmd.exe that is on your desktop. In my opinion, it's a very easy and certain solution :)
A: This program always opens cmd.exe in the current path of your Explorer:
https://github.com/jhasse/smart_cmd
You can also pin it to your taskbar and then use WindowsKey+[1-0] as a keyboard shortcut.
A: If you use Total Commander there is a field in the bottom for this. It shows the active directory you are currently in and will run the entered command in that directory.
A: With a just-one-line batch file:
START Desired_Path (put the location that you want cmd to start in, without quotes)
Example (open a text editor, place the code in there, and save the file with a .bat extension):
START cd C:\Users
Then just double-click on it.
Note: if you want Explorer to complete the task, don't put the cd command.
To do the opposite:
To open a particular directory with the explorer.exe application while using cmd, you can use the START command and the absolute path of the folder that you want to display.
A: This method uses cmd.exe and a Send to shortcut so cmd.exe can open a directory directly. This alternative method is for the case where you don't have Open command window here in the right-click menu.
*
*Open 'File Explorer' and enter shell:sendto in location bar to navigate to Send to folder.
*Copy a Command Prompt shortcut or create a new shortcut .lnk file.
*Edit the properties of the shortcut and edit the target to %windir%\system32\cmd.exe /k cd /d and press 'OK' to save the change.
*Right click on a folder and expand Send to menu to use the cmd shortcut.
This shortcut should open a cmd window with directory selected by the right click.
This method should work under Windows 7 and 10 at least. Name the shortcut Command Prompt (cd) to indicate the task of the shortcut.
Possible error messages:
*
*Shows 'The directory name is invalid.' if something other than a folder is selected.
*Shows 'The system cannot find the drive specified.' if the folder does not exist.
*Shows 'The filename, directory name, or volume label syntax is incorrect.' if multiple files are selected.
A little about the shortcut: the directory is automatically appended to the end of the shortcut as a parameter when used under Send to, so the directory does not need to be typed into the shortcut.
A: In my case I VERY SPECIFICALLY wanted an opened CMD window in ADMIN mode in a specific folder. Here's how (works for Windows 7):
In the target folder, create START.BAT that simply contains one line:
start cd c:\MyTargetFolder
Create a shortcut to START.BAT and call it "START AS ADMIN".
Right-click the shortcut and select "Run as Administrator" and "Run Minimized". Also make sure that the "Start In" will cause the same drive to be selected (as CD does not change the drive!).
When you click on that shortcut you will get the UAC prompt and then an open command window in the desired folder. The title bar will show that this CMD window is in ADMINISTRATOR mode.
A: Use the /K switch. For example
cmd /K "cd /d c:\WINDOWS\"
Will create a cmd window at the C:\Windows directory
A: Just write cmd in the address bar, it will open in the current folder.
A: Assuming that in File Explorer you have opened the target directory/folder, do this:
*
*Click on address bar, alternatively press Alt+D
*Now when address bar is highlighted, type cmd in the bar.
*Press Enter key
You will notice that the command prompt opens at that folder.
A: In Windows, go to the folder location in File Explorer, clear the path in the address bar, type cmd, and press Enter. The path will then open in cmd.
A: If you have Windows Vista or later, right-click on the folder icon in Explorer while holding the Shift key, and then click on the "Open command window here" or "Open PowerShell window here" context menu option.
If you're already in the folder you want, you can do one of the following:
*
*[only Win8+] Click the Explorer Ribbon's File button, then click on "Open command window here" or "Open PowerShell window here".
*Shift-right-click on the background of the Explorer window, then click on "Open command window here" or "Open PowerShell window here". (recommended by Kate in the comments)
*[only Vista or Win7] Hold down Shift when opening the Explorer File menu, then click on "Open command window here". If you can't see the menu bar, open the File menu by pressing Alt-Shift-F (Alt-F to open the File menu, plus Shift).
For Windows XP, use the PowerToy mentioned by dF to get the same function.
A: Rather than saving it as a shortcut, this is how I do it, and I find it very useful. There are already answers showing the shortcut approach, but I just wanted to share this; I find it especially useful for Angular projects.
*
*Create a new txt file and write the following code into it:
@ECHO OFF
cd C:\YourProjectPath\FolderPath\
*Save it as a .bat file with a convenient name (I usually save it as "goto-myProjectName.bat").
*Then copy that bat file into your default path (when you run cmd, whatever your default path is, it starts there; for instance, on my machine it is windows/system32).
*Then type your bat file's name without its extension. For instance: goto-myProjectName
It should then take you there.
A: Another easy solution is to install Windows Terminal.
And then you automatically have "Open in Windows Terminal" when you right-click on a folder.
A: Also, here is a shortcut to open a console in any Windows folder:
*
*Open any folder in Windows Explorer.
*Press Alt + D to focus the address bar.
*Type cmd and press Enter.
Very practical shortcut.
A: You can also do this:
[HKEY_CLASSES_ROOT\Directory\shell\cmd]
@="command prompt here"
[HKEY_CLASSES_ROOT\Directory\shell\cmd\command]
@="cmd.exe /c start \"%1\" cmd.exe /k cd /d %1"
[HKEY_CLASSES_ROOT\Drive\shell\cmd]
@="command prompt here"
[HKEY_CLASSES_ROOT\Drive\shell\cmd\command]
@="cmd.exe /c start \"%1\" cmd.exe /k cd /d %1"
Update: for Win10 you need ShowBasedOnVelocityId - see answer above.
A: Despite a few answers for HKCR\Directory\shell under Windows 10 (which did not work) the following worked for me:
SetOpenCmdHere.reg
Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\Folder\shell\cmd]
@="Open CMD here..."
[HKEY_CLASSES_ROOT\Folder\shell\cmd\command]
@="C:\\Windows\\system32\\cmd.exe /k pushd \"%1\""
A: The pushd command sets the current folder, so:
cmd /k "pushd D:\Music"
A: In Windows 8, you can click the address bar and type "cmd" (without quotes) and hit enter. This will open the cmd window in the current path.
A: The easiest way is to go to the address bar of Windows Explorer and type cmd there. It will automatically open the command prompt window for you.
A: If you are starting cmd from taskbar, this is what you need to do:
right-click --> right-click on Command Prompt --> Properties
Then in the properties window change the value of Start in:
This solution doesn't work for opening command prompt as administrator
A: Create a shortcut and edit the "Start In" property of the shortcut to the directory you want the cmd.exe to start in.
A: I just saw this question and can't help but post my AutoHotkey script for cmd on Windows XP. You can spot the hotkeys in the script. The nice thing is that when your current window is Explorer, cmd will open at the path showing in the address bar.
I keep this script in a folder where I store all green tools (including AutoHotkey). For a new machine, I just copy the folder, double-click the script to associate .ahk with AutoHotkey, and create a shortcut in my startup folder. It is faster than installing PowerToys.
; Get working folder
GetWorkingFolder() {
    if WinActive("ahk_class ExploreWClass") or WinActive("ahk_class CabinetWClass") {
        ControlGetText, path, Edit1
        return %path%
    } else if WinActive("FreeCommander") {
        Send, {CTRLDOWN}{ALTDOWN}{INS}{ALTUP}{CTRLUP}
        Sleep, 100
        return clipboard
    } else {
        return "C:\"
    }
}

#IfWinActive,
#c::
    path := GetWorkingFolder()
    Run, %ComSpec%, %path%
    return

; PowerShell
#+C::
    path := GetWorkingFolder()
    Run, %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe, %path%
    return

#^c::
    Run, %ComSpec%, %temp%
    return

#!c::
    path := GetWorkingFolder()
    Run, %comspec% /k "%VS90COMNTOOLS%vsvars32.bat", %path%
    return

; irb
#!b::
    path := GetWorkingFolder()
    Run, c:\cygwin\bin\ruby /usr/bin/irb, %path%
    return

; Bash
#b::
    path := GetWorkingFolder()
    Run, bash --login, %path%
    return

; Paste in console
+INS::
    if WinActive("ahk_class ConsoleWindowClass") {
        WinGetPos, x, y, w, h, A
        MouseGetPos, mx, my
        ;MsgBox x=%x% y=%y% w=%w% h=%h% mx=%mx% my=%my%
        if (mx < 10)
            mx = 10
        else if (mx > w - 30)
            mx := w - 30
        if (my < 40)
            my = 40
        else if (my > h)
            my := h - 10
        MouseClick, right, mx, my
    }
    return
For anyone who is interested, you can find this script at rwin on github
A: This will add entries to the context-menu to launch a command window that is automatically navigated to the directory you clicked.
Usage:
Right-click a folder icon (or the empty background area inside an already open folder)
and click either "Open in Terminal" or "Open in Terminal (Admin)".
You can also right click files to execute them with a command window.
When the file is done running, you are left with a command window that is navigated to the file's directory.
Open_in_Terminal.reg
Windows Registry Editor Version 5.00
; Admin versions.
; Right click on a folder in a directory.
[HKEY_CLASSES_ROOT\Directory\shell\OpenCommandWindowHereAsAdministrator]
@="Open in Terminal (Admin)"
"Icon"="cmd.exe"
"HasLUAShield"=""
"Position"="middle"
[HKEY_CLASSES_ROOT\Directory\shell\OpenCommandWindowHereAsAdministrator\command]
@="cmd.exe /c powershell.exe -Command \"Start-Process cmd -Verb runas -ArgumentList '/k pushd \"%1\"'\""
; Right click on nothing in a directory, i.e. the "background" of the directory.
[HKEY_CLASSES_ROOT\Directory\Background\shell\OpenCommandWindowHereAsAdministrator]
@="Open in Terminal (Admin)"
"Icon"="cmd.exe"
"HasLUAShield"=""
"Position"="middle"
[HKEY_CLASSES_ROOT\Directory\Background\shell\OpenCommandWindowHereAsAdministrator\command]
@="cmd.exe /c powershell.exe -Command \"Start-Process cmd -Verb runas -ArgumentList '/k pushd \"%V\"'\""
; Right click on nothing in a library directory, i.e. the "background" of the library directory.
[HKEY_CLASSES_ROOT\LibraryFolder\Background\shell\OpenCommandWindowHereAsAdministrator]
@="Open in Terminal (Admin)"
"Icon"="cmd.exe"
"HasLUAShield"=""
"Position"="middle"
[HKEY_CLASSES_ROOT\LibraryFolder\Background\shell\OpenCommandWindowHereAsAdministrator\command]
@="cmd.exe /c powershell.exe -Command \"Start-Process cmd -Verb runas -ArgumentList '/k pushd \"%V\"'\""
; Right click on a file in a directory.
[HKEY_CLASSES_ROOT\*\shell\OpenWithCommandWindowAsAdministrator]
@="Open in Terminal (Admin)"
"Icon"="cmd.exe"
"HasLUAShield"=""
"Position"="middle"
[HKEY_CLASSES_ROOT\*\shell\OpenWithCommandWindowAsAdministrator\command]
@="cmd.exe /c powershell.exe -Command \"Start-Process cmd -Verb runas -ArgumentList '/k pushd \\\"%W \\\" && \\\"%1\\\"'\""
; Non-Admin versions.
; Right click on a folder in a directory.
[HKEY_CLASSES_ROOT\Directory\shell\OpenCommandWindowHere]
@="Open in Terminal"
"Icon"="cmd.exe"
"Position"="middle"
[HKEY_CLASSES_ROOT\Directory\shell\OpenCommandWindowHere\command]
@="cmd.exe /k pushd \"%1\""
; Right click on nothing in a directory, i.e. the "background" of the directory.
[HKEY_CLASSES_ROOT\Directory\Background\shell\OpenCommandWindowHere]
@="Open in Terminal"
"Icon"="cmd.exe"
"Position"="middle"
[HKEY_CLASSES_ROOT\Directory\Background\shell\OpenCommandWindowHere\command]
@="cmd.exe /k pushd \"%V\""
; Right click on nothing in a library directory, i.e. the "background" of the library directory.
[HKEY_CLASSES_ROOT\LibraryFolder\Background\shell\OpenCommandWindowHere]
@="Open in Terminal"
"Icon"="cmd.exe"
"Position"="middle"
[HKEY_CLASSES_ROOT\LibraryFolder\Background\shell\OpenCommandWindowHere\command]
@="cmd.exe /k pushd \"%V\""
; Right click on a file in a directory.
[HKEY_CLASSES_ROOT\*\shell\OpenWithCommandWindow]
@="Open in Terminal"
"Icon"="cmd.exe"
"Position"="middle"
[HKEY_CLASSES_ROOT\*\shell\OpenWithCommandWindow\command]
@="cmd.exe /k pushd \"%W\" && \"%1\""
This took a lot of effort to make so if you're feeling generous then feel free to send a paypal donation to help me overcome the PTSD of debugging and testing it :)
An uninstaller if you need one:
Open_in_Terminal_Remover.reg
Windows Registry Editor Version 5.00
[-HKEY_CLASSES_ROOT\Directory\shell\OpenCommandWindowHereAsAdministrator]
[-HKEY_CLASSES_ROOT\Directory\Background\shell\OpenCommandWindowHereAsAdministrator]
[-HKEY_CLASSES_ROOT\LibraryFolder\Background\shell\OpenCommandWindowHereAsAdministrator]
[-HKEY_CLASSES_ROOT\*\shell\OpenWithCommandWindowAsAdministrator]
[-HKEY_CLASSES_ROOT\Directory\shell\OpenCommandWindowHere]
[-HKEY_CLASSES_ROOT\Directory\Background\shell\OpenCommandWindowHere]
[-HKEY_CLASSES_ROOT\LibraryFolder\Background\shell\OpenCommandWindowHere]
[-HKEY_CLASSES_ROOT\*\shell\OpenWithCommandWindow]
A: Update: This is built into Windows now. See this answer.
The XP powertoy is a good option, but I thought I'd post another, in case you'd like to "roll your own". Create a text file, name it anything.reg, paste in the code below, save it, then double-click on it to add it to the registry (or just add the info to the registry manually if you understand what's going on in this .reg file).
Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\Folder\shell\Command_Prompt_Here...]
@="Command Prompt Here..."
[HKEY_CLASSES_ROOT\Folder\shell\Command_Prompt_Here...\command]
@="cmd.exe \"%1\""
Update: After a Windows update, Win10 removed the cmd-here feature. To reactivate it you have to use:
Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\Directory\shell\cmd]
@="@shell32.dll,-8506"
"Extended"=""
"NoWorkingDirectory"=""
"ShowBasedOnVelocityId"=dword:00639bc8
[HKEY_CLASSES_ROOT\Directory\shell\cmd\command]
@="cmd.exe /s /k pushd \"%V\""
The ShowBasedOnVelocityId entry is mandatory.
A: This answer is for windows 10.
Create a command prompt shortcut in the folder wherever you want, then right-click on that shortcut.
A: In Windows 10, you need just one click to get cmd in any folder.
Just Shift + right-click in your desired folder, and cmd will open with your folder's path.
A: Windows 10 File Explorer now has a "Quick Access Toolbar".
Just press "Alt+F" to open the file menu, navigate to the "Open Windows PowerShell" menu, right click and select "Add to Quick Access Toolbar":
Now you will get a little icon that you can click on, which will open PowerShell in the directory you are in.
A: Why that much setup for this simple matter? When you're on the path in cmd, just enter
start .
and press Enter
A: For a better experience using a terminal on a Windows system, cmder may help, used via a shortcut:
*
*Download cmder onto your system
*Make a shortcut
*Set the shortcut target to path_to_cmder /START path_you_wish_to_run
For instance:
TARGET -> C:\Users\<username>\AppData\Roaming\cmder\Cmder.exe /START C:\SOURCE\
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "429"
} |
Q: Changing the font in Aquamacs? I've recently had a need to do a bit of lisp editing and I found the nifty Ready Lisp package for OS X, which is great, except Aquamacs automatically uses a proportional font (which is idiotic, IMHO) and I want to change it to a monospace font. However, I'm not really much of an EMACS user, and the preferences menu in Aquamacs is less than clear on where and how one might make such a change.
A: This is what I have in my .emacs for OS X:
(set-default-font "-apple-bitstream vera sans mono-medium-r-normal--0-0-0-0-m-0-mac-roman")
Now, I'm not sure Bitstream Vera comes standard on OS X, so you may have to either download it or choose a different font. You can search the X font names by running (x-list-fonts "searchterm") in an ELisp buffer (e.g. *scratch* - to run it, type it in and then type C-j on the same line).
A: From the EmacsWiki Aquamacs FAQ:
To change the font used to display the current frame, go to the font panel. You can do this with the keystroke Apple-t, or via the menu: Options → Show/Hide → Font Panel. Once there, select the font you want.
To make the current frame's font the default, go to Options → Frame Appearance Styles. Select "use current style for foo mode", where foo is the mode of the current frame (e.g., foo=text for text mode), to use the current style (including the font, but also any other changes you've made to the frame's style) for all files of this type. Select "use current style as default" to use the current style for all files for whose major mode no special style has been defined.
There are also recommendations for monospaced fonts - Monaco or "Vera Sans Mono".
A: This is the one I use:
-apple-DejaVu_Sans_Mono-medium-normal-normal-*-12-*-*-*-m-0-iso10646-1
You can set it in .emacs file like:
(set-default-font "-apple-DejaVu_Sans_Mono-medium-normal-normal-*-12-*-*-*-m-0-iso10646-1")
You can download it from dejavu-fonts.org
A: In Aquamacs 2.1, you can set the font through Options->Appearance->Font for Text Mode... That brings up the standard font chooser window, choose the font you like. Then, when you exit out of emacs (C-x C-c) you'll be prompted to save options, hit "y".
A: Fast forward a decade, for recent Aquamacs like ver 3.3 please see the nice solution for setting a fixed-width by default at https://emacs.stackexchange.com/questions/45135/change-permanently-font-size-in-aquamacs
Here's the relevant bit for those who are REALLY impatient but please go upvote that answer, user @nega deserves credit here
(when window-system
(setq initial-frame-alist nil) ;; Undo Aquamacs forced defaults
(setq default-frame-alist nil) ;; Undo Aquamacs forced defaults
(aquamacs-autoface-mode -1) ;; Use one face (font) everywhere
(set-frame-font "Menlo-12") ;; Set the default font to Menlo size 12
;;(set-default-font "Menlo-12") ;; This would do the same.
)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: What is the worker process for IIS7? I'm trying to do 'Attach to Process' for debugging in Visual Studio 2008 and I can't figure out what process to attach to. Help.
A: Indeed it is still w3wp.exe - You'll need to check the 'Show processes in all sessions' option to get it to show up though.
(It caught me out for a while too.)
A: Isn't it w3wp.exe?
A: Here's a useful article for identifying w3wp processes, if you're running more than one.
For IIS 6.0
*
*Start > Run > Cmd
*Go To Windows > System32
*Run cscript iisapp.vbs
*You will get the list of Running Worker ProcessID and the Application Pool Name.
From IIS 7.0, you need to run the IIS Command Tool (appcmd).
*
*Start > Run > Cmd
*Go To Windows > System32 > Inetsrv
*Run appcmd list wp
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Does SqlCommand.Dispose close the connection? Can I use this approach efficiently?
using (SqlCommand cmd = new SqlCommand("GetSomething", new SqlConnection(Config.ConnectionString)))
{
cmd.Connection.Open();
// set up parameters and CommandType to StoredProcedure etc. etc.
cmd.ExecuteNonQuery();
}
My concern is : Will the Dispose method of the SqlCommand (which is called when exiting the using block) close the underlying SqlConnection object or not?
A: No, disposing of the SqlCommand will not affect the Connection. A better approach would be to wrap the SqlConnection in a using block as well:
using (SqlConnection conn = new SqlConnection(connstring))
{
conn.Open();
using (SqlCommand cmd = new SqlCommand(cmdstring, conn))
{
cmd.ExecuteNonQuery();
}
}
Otherwise, the Connection is unchanged by the fact that a Command that was using it was disposed (maybe that is what you want?). But keep in mind that a Connection should be disposed of as well, and it is likely more important to dispose of than a Command.
EDIT:
I just tested this:
SqlConnection conn = new SqlConnection(connstring);
conn.Open();
using (SqlCommand cmd = new SqlCommand("select field from table where fieldid = 1", conn))
{
Console.WriteLine(cmd.ExecuteScalar().ToString());
}
using (SqlCommand cmd = new SqlCommand("select field from table where fieldid = 2", conn))
{
Console.WriteLine(cmd.ExecuteScalar().ToString());
}
conn.Dispose();
The first command was disposed when the using block was exited. The connection was still open and good for the second command.
So, disposing of the command definitely does not dispose of the connection it was using.
A: SqlCommand.Dispose will not be sufficient because many SqlCommand(s) can (re)use the same SqlConnection. Center your focus on the SqlConnection.
A: Soooo many places get this wrong, even MS' own documentation. Just remember - in DB world, almost everything is backed by an unmanaged resource, so almost everything implements IDisposable. Assume a class does unless the compiler tells you otherwise.
Wrap your command in a using. Wrap your connection in a using. Create your connection off a DbProvider (get that from DbProviderFactories.GetFactory), and your command off your connection so that if you change your underlying DB, you only need to change the call to DBPF.GetFactory.
So your code should end up looking nice and symmetrical:
var provider = DbProviderFactories.GetFactory("System.Data.SqlClient");// Or MS.Data.SqlClient
using (var connection = provider.CreateConnection())
{
connection.ConnectionString = "...";
using (var command = connection.CreateCommand())
{
command.CommandText = "...";
connection.Open();
using (var reader = command.ExecuteReader())
{
...
}
}
}
A: I use this pattern. I have this private method somewhere in my app:
private void DisposeCommand(SqlCommand cmd)
{
try
{
if (cmd != null)
{
if (cmd.Connection != null)
{
cmd.Connection.Close();
cmd.Connection.Dispose();
}
cmd.Dispose();
}
}
catch { } //don't blow up
}
Then I always create SQL commands and connections in a try block (but without being wrapped in a using block) and always have a finally block as:
finally
{
DisposeCommand(cmd);
}
The connection object being a property of the command object makes a using block awkward in this situation - but this pattern gets the job done without cluttering up your code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "61"
} |
Q: What is the best implementation for DB Audit Trail? A DB Audit Trail captures the User Last Modified, Modified Date, and Created Date.
There are several possible implementations:
*
*SQL Server Triggers
*Add UserModified, ModifiedDate, CreatedDate columns to the database and include logic in Stored Procedures or Insert, Update statements accordingly.
It would be nice if you include implementation (or link to) in your answer.
A: Depending on what you're doing, you might want to move the audit out of the data layer into the data access layer. It gives you more control.
I asked a similar question wrt NHibernate and SqlServer here.
A: I totally second @IainMH (and voted him up).
You want to have it in your DAL and ideally tied to some kind of aspect/interceptor/code injection mechanism.
A: +2 for implementation of when/how to audit in the DAL.
As for where the audit entries themselves should live, it depends on how it will be visible. I'd do a separate table if users can view a separate "audit trail report," but tag existing tables if you want to display last modified-type audits inline.
A: Here is the implementation I use to audit tables:
Pop Rivett's SQL Server FAQ No.5: Pop on the Audit Trail
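For reference, a minimal sketch of the trigger-based approach (option 1 from the question), assuming MyTable has an Id key plus the three audit columns:
CREATE TRIGGER trg_MyTable_Audit
ON MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Stamp who changed the row and when; CreatedDate is set once on INSERT
    -- (e.g. via a DEFAULT GETDATE() constraint) and is left untouched here.
    UPDATE t
    SET t.UserModified = SUSER_SNAME(),
        t.ModifiedDate = GETDATE()
    FROM MyTable t
    INNER JOIN inserted i ON t.Id = i.Id;
END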
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Real life examples of methodologies and lifecycles Choosing the correct lifecycle and methodology isn't as easy as it was before, when there weren't so many methodologies; these days a new one emerges every day.
I've found that most projects require a certain level of evolution and that each project is different from the rest. That way, extreme programming works for a project at a given company with 15 employees but doesn't quite work at a 100-employee company, or doesn't work for a given project type (for example a real-time application, a scientific application, etc).
I'd like to have a list of experiences, mostly stating the project type, the project size (number of people working on it), the project time (real or planned), the project lifecycle and methodology, and whether the project succeeded or failed. Any other data will be appreciated; I think we might find some patterns if there's enough data. Of course, comments are welcome.
*
*PS: Very large, PT: Very long, LC: Incremental-CMMI, PR: Success
*PS: Very large, PT: Very long, LC: Waterfall-CMMI, PR: Success
Edit: I'll be constructing a "summary" with the stats of all answers.
A: My personal experience:
*
*Project size: Very large (150+ persons)
*Project time: Very long (6+ years)
*Project income (estimated): 40 million $ (the military is paying)
*Project life cycle: Incremental lifetime. Main milestones every year.
*Project structure: Traditional at first (system department, development department, etc.), not so good. Process based later (the process establishes a flow of work: requirements, design, implementation, test, feedback, metrics): quite good so far.
*Project result: success (so far)
A: Here you go:
*
*Project size: about 1 million lines of code, 30 people
*Project time: 9 years
*Project life cycle: good old waterfall, due to big customers' requirements, but with staggered delivery to the QA team - it is very difficult to be agile when you have commitments to large clients
*Project structure: we are organized in departments but we use CMMI to keep them in sync - we have stakeholders, work products, deviance procedures, etc.
*Project result: we've really improved with the implementation of CMMI and have delivered our last few releases on time every time
-C.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: The best way to start a project When you are starting a personal programming project, what is your first step? I'm trying to start a project that's just an idea at the moment. I get lots of these, and I dive right into the code, and after a while I just completely lose interest and/or just forget about the project.
When you are starting, what is your first step? Do you plan out the project? Make a diagram? Write some code on paper? How do you start a project in a manner that you know will succeed?
A: I agree with the already given advice of:
*
*Planning a minimal implementation that does something useful as a first complete release.
*Have concrete goals about what you want to achieve to have something to compare your progress with.
I would also recommend beginning with a lightweight design of your overall architecture so you can have a roadmap of how to build your product.
I find it difficult to start building something when I don't have a clear idea about how it should look, at least at a first level of decomposition. Think about what you need besides functionality: high performance? extensibility scenarios? which ones? usability goals? high scalability? ease of deployment and installability? etc. Ask yourself: what components will I have to build in order to achieve those architectural qualities?
And don't get me wrong, I'm a strong proponent of agile software development. You don't need to spend a lot of time designing your architecture (because it surely will have to evolve as you build and get feedback about what works and what doesn't), but having a blueprint of how to build your product based on its architecture should be useful for planning your progress and setting realistic goals.
A: Define the goal for the project. Sounds like you are looking almost exclusively at the solution rather than the problem.
A program isn't useful to you or anyone else unless it addresses some problem. Writing code to get moving is great, but you appear to lose interest and focus after you start -- because you're looking at the code, not the problem.
Spend some time considering what led you to write this code. Ponder how other people might discover the same need, what path might take them to the same frustration you worked to solve.
Then, find some of those people and offer your (partial) solution, and you'll generate interest and suggestions among them all.
THAT will keep you going on your project. The fellow interest, the sharing, even the disagreements -- it's people who need software! Don't create solutions (software) looking for a problem (people). You started with YOU, with your need or desire, but focused on the code, and lost the impetus for the project.
Programming's a lot more fun when you're problem-solving. But you need to keep the problem in front of you. Sharing the problem builds community. That's what this is really all about, isn't it?
A: For my own personal projects I just dive right in. Of course, none of these have yet been sufficiently large as to require any sort of pre-planning. If this is going to be a serious project or a relatively large scale, it is always a good idea to flush out at least what each part of the program needs to do and a high level view of how they will do it.
A: Like the others, my personal projects always have:
*
*A Final Goal
*A Task List
*Small usable units
*Source control
As an additional motivator, I try to use a technology that I have never used before. Learning something new generally becomes the largest motivator for me.
A: Easy - don't start at all projects you're likely to lose interest in. Spend more time to make sure you want to commit yourself to an idea before beginning any work.
A: The only thing that works for me: Create the smallest possible implementation of it that's somehow usable and then use it.
A: From 7 Habits of Highly Effective People, Habit 2: Begin with the End In Mind.
With any project you need a clear goal, a point where you can say "I'm finished". A clear outcome will give you direction. Once you have that, you can start planning how to get there. The size and complexity of the project will determine how much detail your plan needs, but in general you'll want to feel your making progress against your plan quite regularly.
My next step is to sketch out a design of the modules that will be needed and the APIs between each module. If the APIs are clean then the modules are probably right. Then I start implementing the modules, testing as I go.
A: I spend a lot of time thinking about the various aspects of the project before I even touch a keyboard.
I go through what I've learnt from previous projects and write it down in various categories ('technical', 'promotion', etc)
Personal project or not, I always set up source code control. Git, Mercurial, or Bazaar are examples of source code control tools that are not intrusive, because you do not need to set up a master server. Just type a simple command to create the project, check your files in, commit. In the future, when you mess up one of your files, you will be able to 'undo'.
I also set up a lightweight ticket system to keep track of 1. issues and 2. ideas
By "lightweight" I mean that if maintaining two text documents with these lists works for you, that's good enough.
Hope this helps.
A: It depends on the project - how big is it?
If I'm writing the next Notepad clone I might just dive in, if I wanted to roll my own operating system it'd take a lot more non-coding work.
I like to do a lot of diagrams, the tool I use for most development is clean A4 paper and a pencil. Draw out the UI, workflow, basic classes, and how you're going to store any data - then the coding is just a computer readable way of writing what you drew already.
Source control, e.g. SVN, is a couple of keystrokes/clicks, so the overhead is low and the benefit is high; it's handy to try stuff and just revert to an earlier state if it doesn't work.
Then just make the most basic prototype that will work - once something is actually going, it is much easier to get enthused and add more. If it is overwhelming, I'll find I think the problem is solved in my head, and that's enough.
A: First plan out the basic outline of the final application. Most important features, basic GUI, program flow, etc. Then refine that so that you don't take on too much at first, remove unnecessary features, and add what else you want in the first version. Then use that outline to start a task list to create the smallest possible working version of your application. Then it's much easier to add extra features and make it fully functioning.
A: I like Maximillian's answer. To expand a little, my personal projects are developed to solve something I'm working on already. So when I get tired of repeat work, I'll prototype a solution and then use it. If it's similar enough to one of my earlier projects, I'll borrow as much code as I can and try to improve the level of my work, make it more professional.
Fusion's use of source control is important too. It takes 2 minutes to install SVN.
A: If you want to turn it into a public open source project, Producing Open Source Software is supposed to be a good read (available both online and in print).
A: If your personal project is similar to an existing open source project, you should consider contributing to that project instead. A couple of small contributions (bug fixes, etc.) are more valuable than a half-finished project.
A: All of the above, but start to cement the plan in place.....
Go for some tools
SmartSheet - even if you are working on your own you should set out some stages and dates
yEd - and Graphity from www.yworks.com
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: How can I send the stdout of one process to multiple processes using (preferably unnamed) pipes in Unix (or Windows)? I'd like to redirect the stdout of process proc1 to two processes proc2 and proc3:
        proc2 -> stdout
       /
proc1
       \
        proc3 -> stdout
I tried
proc1 | (proc2 & proc3)
but it doesn't seem to work, i.e.
echo 123 | (tr 1 a & tr 1 b)
writes
b23
to stdout instead of
a23
b23
A: Since @dF: mentioned that PowerShell has tee, I thought I'd show a way to do this in PowerShell.
PS > "123" | % {
$_.Replace( "1", "a"),
$_.Replace( "2", "b" )
}
a23
1b3
Note that each object coming out of the first command is processed before the next object is created. This can allow scaling to very large inputs.
A: Like dF said, bash allows one to use the >(…) construct, running a command in place of a filename. (There is also the <(…) construct to substitute the output of another command in place of a filename, but that is irrelevant now; I mention it just for completeness).
If you don't have bash, or are running on a system with an older version of bash, you can do manually what bash does, by making use of FIFO files.
The generic way to achieve what you want, is:
*
*decide how many processes should receive the output of your command, and create as many FIFOs, preferably on a global temporary folder:
subprocesses="a b c d"
mypid=$$
for i in $subprocesses # this way we are compatible with all sh-derived shells
do
mkfifo /tmp/pipe.$mypid.$i
done
*
*start all your subprocesses waiting input from the FIFOs:
for i in $subprocesses
do
tr 1 $i </tmp/pipe.$mypid.$i & # background!
done
*
*execute your command teeing to the FIFOs:
proc1 | tee $(for i in $subprocesses; do echo /tmp/pipe.$mypid.$i; done)
*
*finally, remove the FIFOs:
for i in $subprocesses; do rm /tmp/pipe.$mypid.$i; done
NOTE: for compatibility reasons, I would do the $(…) with backquotes, but I couldn't do it writing this answer (the backquote is used in SO). Normally, the $(…) is old enough to work even in old versions of ksh, but if it doesn't, enclose the … part in backquotes.
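Putting those four steps together for the two-process case from the question, a minimal self-contained sketch in plain sh (no process substitutions required):
#!/bin/sh
# Fan stdin out to two consumers via FIFOs instead of >(...).
mypid=$$
mkfifo /tmp/pipe.$mypid.a /tmp/pipe.$mypid.b
tr 1 a </tmp/pipe.$mypid.a &  # consumer 1, blocks until the writer opens the FIFO
tr 1 b </tmp/pipe.$mypid.b &  # consumer 2
echo 123 | tee /tmp/pipe.$mypid.a >/tmp/pipe.$mypid.b
wait                          # let both consumers drain their pipes
rm /tmp/pipe.$mypid.a /tmp/pipe.$mypid.b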
A: Unix (bash, ksh, zsh)
dF.'s answer contains the seed of an answer based on tee and output process substitutions
(>(...)) that may or may not work, depending on your requirements:
Note that process substitutions are a nonstandard feature that (mostly)
POSIX-features-only shells such as dash (which acts as /bin/sh on Ubuntu,
for instance), do not support. Shell scripts targeting /bin/sh should not rely on them.
echo 123 | tee >(tr 1 a) >(tr 1 b) >/dev/null
The pitfalls of this approach are:
*
*unpredictable, asynchronous output behavior: the output streams from the commands inside the output process substitutions >(...) interleave in unpredictable ways.
*In bash and ksh (as opposed to zsh - but see exception below):
*
*output may arrive after the command has finished.
*subsequent commands may start executing before the commands in the process substitutions have finished - bash and ksh do not wait for the output process substitution-spawned processes to finish, at least by default.
*jmb puts it well in a comment on dF.'s answer:
be aware that the commands started inside >(...) are dissociated from the original shell, and you can't easily determine when they finish; the tee will finish after writing everything, but the substituted processes will still be consuming the data from various buffers in the kernel and file I/O, plus whatever time is taken by their internal handling of data. You can encounter race conditions if your outer shell then goes on to rely on anything produced by the sub-processes.
*
*zsh is the only shell that does by default wait for the processes run in the output process substitutions to finish, except if it is stderr that is redirected to one (2> >(...)).
*ksh (at least as of version 93u+) allows use of argument-less wait to wait for the output process substitution-spawned processes to finish.
Note that in an interactive session that could result in waiting for any pending background jobs too, however.
*bash v4.4+ can wait for the most recently launched output process substitution with wait $!, but argument-less wait does not work, making this unsuitable for a command with multiple output process substitutions.
*However, bash and ksh can be forced to wait by piping the command to | cat (see the sketch after this list), but note that this makes the command run in a subshell. Caveats:
*
*ksh (as of ksh 93u+) doesn't support sending stderr to an output process substitution (2> >(...)); such an attempt is silently ignored.
*While zsh is (commendably) synchronous by default with the (far more common) stdout output process substitutions, even the | cat technique cannot make them synchronous with stderr output process substitutions (2> >(...)).
*However, even if you ensure synchronous execution, the problem of unpredictably interleaved output remains.
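For example, a sketch of the | cat technique: tee's own stdout is discarded, while the substituted processes inherit the pipe to cat, so the pipeline only completes once they have exited:
echo 123 | tee >(tr 1 a) >(tr 1 b) >/dev/null | cat; echo AFTER
Here AFTER is guaranteed to print last, although a23 and b23 can still arrive in either order.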
The following command, when run in bash or ksh, illustrates the problematic behaviors (you may have to run it several times to see both symptoms): The AFTER will typically print before output from the output substitutions, and the output from the latter can be interleaved unpredictably.
printf 'line %s\n' {1..30} | tee >(cat -n) >(cat -n) >/dev/null; echo AFTER
In short:
*
*Guaranteeing a particular per-command output sequence:
*
*Neither bash nor ksh nor zsh support that.
*Synchronous execution:
*
*Doable, except with stderr-sourced output process substitutions:
*
*In zsh, they're invariably asynchronous.
*In ksh, they don't work at all.
If you can live with these limitations, using output process substitutions is a viable option (e.g., if all of them write to separate output files).
Note that tzot's much more cumbersome, but potentially POSIX-compliant solution also exhibits unpredictable output behavior; however, by using wait you can ensure that subsequent commands do not start executing until all background processes have finished.
See bottom for a more robust, synchronous, serialized-output implementation.
The only straightforward bash solution with predictable output behavior is the following, which, however, is prohibitively slow with large input sets, because shell loops are inherently slow.
Also note that this alternates the output lines from the target commands.
while IFS= read -r line; do
tr 1 a <<<"$line"
tr 1 b <<<"$line"
done < <(echo '123')
Unix (using GNU Parallel)
Installing GNU parallel enables a robust solution with serialized (per-command) output that additionally allows parallel execution:
$ echo '123' | parallel --pipe --tee {} ::: 'tr 1 a' 'tr 1 b'
a23
b23
parallel by default ensures that output from the different commands doesn't interleave (this behavior can be modified - see man parallel).
Note: Some Linux distros come with a different parallel utility, which won't work with the command above; use parallel --version to determine which one, if any, you have.
Windows
Jay Bazuzi's helpful answer shows how to do it in PowerShell. That said: his answer is the analog of the looping bash answer above; it will be prohibitively slow with large input sets and also alternates the output lines from the target commands.
bash-based, but otherwise portable Unix solution with synchronous execution and output serialization
The following is a simple, but reasonably robust implementation of the approach presented in tzot's answer that additionally provides:
*
*synchronous execution
*serialized (grouped) output
While not strictly POSIX compliant, because it is a bash script, it should be portable to any Unix platform that has bash.
Note: You can find a more full-fledged implementation released under the MIT license in this Gist.
If you save the code below as a script named fanout, make it executable and put it in your PATH, the command from the question would work as follows:
$ echo 123 | fanout 'tr 1 a' 'tr 1 b'
# tr 1 a
a23
# tr 1 b
b23
fanout script source code:
#!/usr/bin/env bash
# The commands to pipe to, passed as a single string each.
aCmds=( "$@" )
# Create a temp. directory to hold all FIFOs and captured output.
kTHIS_NAME=${0##*/} # this script's name; used to namespace the temp. dir.
tmpDir="${TMPDIR:-/tmp}/$kTHIS_NAME-$$-$(date +%s)-$RANDOM"
mkdir "$tmpDir" || exit
# Set up a trap that automatically removes the temp dir. when this script
# exits.
trap 'rm -rf "$tmpDir"' EXIT
# Determine the number padding for the sequential FIFO / output-capture names,
# so that *alphabetic* sorting, as done by *globbing* is equivalent to
# *numerical* sorting.
maxNdx=$(( $# - 1 ))
fmtString="%0${#maxNdx}d"
# Create the FIFO and output-capture filename arrays
aFifos=() aOutFiles=()
for (( i = 0; i <= maxNdx; ++i )); do
printf -v suffix "$fmtString" $i
aFifos[i]="$tmpDir/fifo-$suffix"
aOutFiles[i]="$tmpDir/out-$suffix"
done
# Create the FIFOs.
mkfifo "${aFifos[@]}" || exit
# Start all commands in the background, each reading from a dedicated FIFO.
for (( i = 0; i <= maxNdx; ++i )); do
fifo=${aFifos[i]}
outFile=${aOutFiles[i]}
cmd=${aCmds[i]}
printf '# %s\n' "$cmd" > "$outFile"
eval "$cmd" < "$fifo" >> "$outFile" &
done
# Now tee stdin to all FIFOs.
tee "${aFifos[@]}" >/dev/null || exit
# Wait for all background processes to finish.
wait
# Print all captured stdout output, grouped by target command, in sequence.
cat "${aOutFiles[@]}"
A: Editor's note:
- >(…) is a process substitution that is a nonstandard shell feature of some POSIX-compatible shells: bash, ksh, zsh.
- This answer accidentally sends the output process substitution's output through the pipeline too: echo 123 | tee >(tr 1 a) | tr 1 b.
- Output from the process substitutions will be unpredictably interleaved, and, except in zsh, the pipeline may terminate before the commands inside >(…) do.
In unix (or on a mac), use the tee command:
$ echo 123 | tee >(tr 1 a) >(tr 1 b) >/dev/null
b23
a23
Usually you would use tee to redirect output to multiple files, but using >(...) you can
redirect to another process. So, in general,
$ proc1 | tee >(proc2) ... >(procN-1) >(procN) >/dev/null
will do what you want.
Under windows, I don't think the built-in shell has an equivalent. Microsoft's Windows PowerShell has a tee command though.
A: You can also save the output in a variable and use that for the other processes:
out=$(proc1); echo "$out" | proc2; echo "$out" | proc3
However, that works only if
*
*proc1 terminates at some point :-)
*proc1 doesn't produce too much output (don't know what the limits are there but it's probably your RAM)
But it is easy to remember and leaves you with more options on the output you get from the processes you spawned there, e. g.:
out=$(proc1); echo $(echo "$out" | proc2) / $(echo "$out" | proc3) | bc
I had difficulties doing something like that with the | tee >(proc2) >(proc3) >/dev/null approach.
A: Another way to do it would be:
eval `echo '&& echo 123 |'{'tr 1 a','tr 1 b'} | sed -n 's/^&&//gp'`
output:
a23
b23
No need to create a subshell here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "87"
} |
Q: What are the CSS secrets to a flexible/fluid HTML form? The below HTML/CSS/Javascript (jQuery) code displays the #makes select box. Selecting an option displays the #models select box with relevant options. The #makes select box sits off-center and the #models select box fills the empty space when it is displayed.
How do you style the form so that the #makes select box is centered when it is the only form element displayed, but when both select boxes are displayed, they are both centered within the container?
var cars = [
{
"makes" : "Honda",
"models" : ['Accord','CRV','Pilot']
},
{
"makes" :"Toyota",
"models" : ['Prius','Camry','Corolla']
}
];
$(function() {
vehicles = [] ;
for(var i = 0; i < cars.length; i++) {
vehicles[cars[i].makes] = cars[i].models ;
}
var options = '';
for (var i = 0; i < cars.length; i++) {
options += '<option value="' + cars[i].makes + '">' + cars[i].makes + '</option>';
}
$("#make").html(options); // populate select box with array
$("#make").bind("click", function() {
$("#model").children().remove() ; // clear select box
var options = '';
for (var i = 0; i < vehicles[this.value].length; i++) {
options += '<option value="' + vehicles[this.value][i] + '">' +
vehicles[this.value][i] +
'</option>';
}
$("#model").html(options); // populate select box with array
$("#models").addClass("show");
}); // bind end
});
.hide {
display: none;
}
.show {
display: inline;
}
fieldset {
border: #206ba4 1px solid;
}
fieldset legend {
margin-top: -.4em;
font-size: 20px;
font-weight: bold;
color: #206ba4;
}
fieldset fieldset {
position: relative;
margin-top: 25px;
padding-top: .75em;
background-color: #ebf4fa;
}
body {
margin: 0;
padding: 0;
font-family: Verdana;
font-size: 12px;
text-align: center;
}
#wrapper {
margin: 40px auto 0;
}
#myFieldset {
width: 213px;
}
#area {
margin: 20px;
}
#area select {
width: 75px;
float: left;
}
#area label {
display: block;
font-size: 1.1em;
font-weight: bold;
color: #000;
}
#area #selection {
display: block;
}
#makes {
margin: 5px;
}
#models {
margin: 5px;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.0/jquery.min.js"></script>
<div id="wrapper">
<fieldset id="myFieldset">
<legend>Cars</legend>
<fieldset id="area">
<label>Select Make:</label>
<div id="selection">
<div id="makes">
<select id="make"size="2"></select>
</div>
<div class="hide" id="models">
<select id="model" size="3"></select>
</div>
</div>
</fieldset>
</fieldset>
</div>
A: It's not entirely clear from your question what layout you're trying to achieve, but judging by that fact that you have applied "float:left" to the select elements, it looks like you want the select elements to appear side by side. If this is the case, you can achieve this by doing the following:
*
*To centrally align elements you need to add "text-align:center" to the containing block level element, in this case #selection.
*The position of elements that are floating is not affected by "text-align" declarations, so remove the "float:left" declaration from the select elements.
*In order for the #makes and #models divs to sit side by side without the use of floats they must be displayed as inline elements, so add "display:inline" to both #makes and #models (note that this will lose the vertical margin on those elements, so you might need to make some other changes to get the exact layout you want).
As select elements are displayed inline by default, an alternative to the last step is to remove the #makes and #models divs and apply the "show" and "hide" classes to the #model select element directly.
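Putting those steps together, the relevant CSS changes might look like this (a sketch; the selectors follow the markup in the question):
#area #selection {
  display: block;
  text-align: center; /* centers the inline children */
}
#area select {
  width: 75px; /* float removed so the centering takes effect */
}
#makes, #models {
  display: inline; /* sit side by side without floats */
  margin: 5px;
}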
A: Floating the select boxes changes their display properties to "block". If you have no reason to float them, simply remove the "float: left" declaration, and add "text-align: center" to #makes and #models.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there a better Windows Console Window? I find working on the command line in Windows frustrating, primarily because the console window is wretched to use compared to terminal applications on linux and OS X such as "rxvt", "xterm", or "Terminal". Major complaints:
*
*No standard copy/paste. You have to turn on "mark" mode and it's only available from a multi-level popup triggered by the (small) left hand corner button. Then copy and paste need to be invoked from the same menu
*You can't arbitrarily resize the window by dragging, you need to set a preference (back to the multi-level popup) each time you want to resize a window
*You can only make the window so big before horizontal scroll bars enter the picture. Horizontal scroll bars suck.
*With the cmd.exe shell, you can't navigate to folders with \\netpath notation (UNC?), you need to map a network drive. This sucks when working on multiple machines that are going to have different drives mapped
Are there any tricks or applications, (paid or otherwise), that address these issue?
A: Console
From documentation:
NOTE: Console is NOT a shell.
Therefore, it does not implement shell
features like command-line completion,
syntax coloring, command history, etc.
Console is simply a nice-looking front
end for a shell of your choice
(cmd.exe, 4NT, bash, etc.) Other
command-line utilities can also be
used as 'shells' by Console.
As a programming shell one can use ipython.
A: I've had these issues too for years on Windows, but I recently found this project:
Console
It still requires "mark mode" for copy/paste, but at least it's available from a right-click contextual menu (so you don't need to move the mouse to the top left and then move it again to the text you want to select)
UNC paths are not supported by cmd.exe but they are supported by PowerShell.
(Console can be configured to use any shell, including cmd.exe and PowerShell)
A: I use Cygwin inside the Poderosa terminal emulator.
A: I personally use Mintty. Therefore I use Cygwin (because that's the only shell it supports, as far as I know).
BTW, there is another question I found: better command for Windows?
A: Take Command. This one has been around for a long time (formerly 4DOS). I used this on Windows NT 3.5 (!) and loved it.
Cygwin lets you run X on Windows, so you can fire up xterm or whatever terminal app you prefer, and also get the benefit of using a UNIX shell.
A: I think you will love PowerCMD, which lets you work with 4 command windows at the same time. Also, you can use many extra commands inside PowerCMD.
PowerCMD
A: Use Gow.exe.
This will make your DOS prompt behave like a Linux terminal.
Otherwise, use ZOC.exe, a trial-period terminal.
Otherwise, install Git; it gives you a bash console from which you can use (some) Unix commands.
A: There is a small program mo.exe on github that solves the first three issues:
https://github.com/boolship/Mo
It runs in normal DOS console window, Git Bash on Windows, etc.
update:
That link is now deprecated, use: https://github.com/boolship/MoDi
A: Sorry for the self-promotion, I'm the author of another Console Emulator, not mentioned here.
ConEmu is opensource console emulator with tabs, which represents multiple consoles and simple GUI applications as one customizable GUI window.
Initially, the program was designed to work with Far Manager (my favorite shell replacement - file and archive management, command history and completion, powerful editor). But ConEmu can be used with any other console application or simple GUI tools (like PuTTY for example). ConEmu is a live project, open to suggestions.
A brief excerpt from the long list of options:
*
*Latest versions of ConEmu may set up itself as default terminal for Windows
*Use any font installed in the system, or copied to a folder of the program (ttf, otf, fon, bdf)
*Run selected tabs as Administrator (Vista+) or as selected user
*Windows 7 Jump lists and Progress on taskbar
*Integration with DosBox (useful in 64bit systems to run DOS applications)
*Smooth resize, maximized and fullscreen window modes
*Scrollbar initially hidden, may be revealed by mouseover or checkbox in settings
*Optional settings (e.g. palette) for selected applications
*User friendly text and block selection (from keyboard or mouse), copy, paste, text search in console
*ANSI X3.64 and Xterm 256 color
Far Manager users will acquire shell-style drag-n-drop, thumbnails and tiles in panels, tabs for editors and viewers, true colors and font styles (italic/bold/underline).
PS. Far Manager supports UNC paths (\\server\share\...)
A: Take a look at Take Command.
Take Command is a comprehensive interactive GUI and command line environment that makes using the Windows command prompt and creating batch files easy and far more powerful.
(Take Command is, however, "not free".)
A: I'm using Terminals for remote connections via Telnet, RDC, SSH, ...
It combines the most-used protocols in one program.
URL: http://www.codeplex.com/Terminals
A: Why not use PuTTY?
A: I use rxvt from cygwin. It behaves very much like an xterm.
A: *
*Turn on quickedit mode (but selection is still rectangular instead of line-wrapped)
*Resizing by dragging works for me
*You can change the buffer size which will impact when scrollbars appear
*pushd \\server\share
Even with those, cmd.exe isn't a great console. See all the other replies and the earlier stackoverflow questions on the same subject. The "Console" project from sourceforge looks pretty good.
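As an aside, quickedit mode (the first item above) can be enabled persistently for all future console windows via the registry; a sketch (the QuickEdit value lives under HKCU\Console):
reg add "HKCU\Console" /v QuickEdit /t REG_DWORD /d 1 /f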
A: Try Console 2.
Console is a Windows console window enhancement. Console features include: multiple tabs, text editor-like text selection, different background types, alpha and color-key transparency, configurable font, different window styles
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "316"
} |
Q: How do I keep track of related windows in X11? Unfortunately, my question is not as simple as keeping track of two windows created by the same process.
Here is what I have:
*
*Two users, Jack and Jim are remotely logged in to the same Unix system and run X servers
*Jack runs an application, 'AwesomeApp', that opens a GUI in a X window
*Jim runs another instance of this application, opening his own GUI window
*Now, Jack runs a supervisor application that will communicate with the process owning the first window (eg 'AwesomeApp') because it's HIS instance of 'AwesomeApp'
*How can his instance of the supervisor find which instance of 'AwesomeApp' window is his own?
Aaaahhhh... looking it up on a per-user basis - yes, that could work.
As long as I tell the users that they cannot log in with the same user account from two different places.
A: You can use pgrep to get the process ID of Jack's instance of AwesomeApp:
pgrep -u Jack AwesomeApp
So if you launch the supervisor application from a shell script, you could do something like the following:
AWESOME_ID=`pgrep -u $USER AwesomeApp 2>/dev/null`
# run the supervisor application and pass the process id as the argument
supervisor $AWESOME_ID
Alternatively, if you don't want to use external programs like pgrep or ps, you could always try looking for the process in /proc directly.
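A rough sketch of that /proc approach (Linux-specific: it assumes the process name appears in /proc/<pid>/comm and that GNU stat is available):
for pid in /proc/[0-9]*; do
  owner=$(stat -c '%U' "$pid") # the user the process runs as
  name=$(cat "$pid/comm" 2>/dev/null) # the executable name
  if [ "$owner" = "$USER" ] && [ "$name" = "AwesomeApp" ]; then
    echo "Found our instance, pid ${pid#/proc/}"
  fi
done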
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you create a database from an EDM? How do you create a database from an Entity Data Model.
So I created a database using the EDM Designer in VisualStudio 2008, and now I want to generate the SQL Server Schema to create storage in SQL Server.
A: From what I understand, you are not just supposed to use EDM as a "pretty" database designer; in fact, EDM does not depend on a specific storage layer. It tries to abstract that part for the developer. There are design schemas (CSDL) and storage schemas (SSDL). Anyway, I don't mean to lecture you. ;)
There is EDM Generator, which you use to create models and class, etc.. For a DDL kind of export, I've never done that but what I did was map my EDM to an existing database, which was easier for me to get started.
There is a great tutorial on MSDN, which details step by step instructions on how to go about using an existing database, but also touches the how to start from scratch approach.
http://msdn.microsoft.com/en-us/magazine/cc163286.aspx
A: The Feature "Generate Database Schema from Model" is scheduled for a future release of Entity Framework. V1 does'nt support schema generatiorn based on EF models.
A: I believe the other answers implied this, but just to be explicit - use SSMS (or whatever the equivalent is if you're a brave soul and not using the SQL Server provider) to design the DB layout and then suck that into EDM - and then apply application changes as necessary to the model.
I spent about an hour trying to do it your way first (leftover habit from some other Java ORM tools) - I eventually gave up and now do it the 'Right Way' (tm)
Eventually it would be nice (as JRoppert indicated) to have the generate database schema from model feature - then you could get your DDLs for various DB flavours automagically.
A: Generating databases from model is a feature planned for vNext of Entity Framework.
Check out this blog post of Entity Framework Design explaining the planned features for database generation from a model.
What you must do right now is either 1) generate the database by hand, or 2) parse the CSDL file and write your own generator. I think option 1) is probably a better option.
A: Avilable in EF 4:
http://blogs.msdn.com/b/efdesign/archive/2008/09/10/model-first.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Resetting detection of source file changes Sometimes I have to work on code that moves the computer clock forward. In this case some .cpp or .h files get their latest modification date set to the future time.
Later on, when my clock is fixed, and I compile my sources, system rebuilds most of the project because some of the latest modification dates are in the future. Each subsequent recompile has the same problem.
Solution that I know are:
a) Find the file that has the future time and re-save it. This method is not ideal because the project is very big and it takes time even for windows advanced search to find the files that are changed.
b) Delete the whole project and re-check it out from svn.
Does anyone know how I can get around this problem?
Is there perhaps a setting in visual studio that will allow me to tell the compiler to use the archive bit instead of the last modification date to detect source file changes?
Or perhaps there is a recursive modification date reset tool that can be used in this situation?
A: I would recommend using a virtual machine where you can mess with the clock to your heart's content and it won't affect your development machine. Two free ones are Virtual PC from Microsoft and VirtualBox from Sun.
A: I don't know if this works in your situation but how about you don't move your clock forward, but wrap your gettime method (or whatever you're using) and make it return the future time that you need?
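A minimal sketch of that idea in C++ (GetAppTime and SetTestClockOffset are hypothetical names; the point is that application code calls the wrapper instead of time() directly, so tests can shift the clock without touching the system time or file timestamps):
#include <ctime>

static std::time_t g_offsetSeconds = 0; // applied on top of the real clock

// Tests call this to move the "application clock" forward.
void SetTestClockOffset(std::time_t seconds) { g_offsetSeconds = seconds; }

// All application code asks this wrapper for the current time.
std::time_t GetAppTime() { return std::time(nullptr) + g_offsetSeconds; }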
A: Install Unix Utils
touch temp
find . -newer temp -exec touch {} ;
rm temp
Make sure to use the full path when calling find or it will probably use Windows' find.exe instead. This is untested in the Windows shell -- you might need to modify the syntax a bit.
A: If this was my problem, I'd look for ways to avoid mucking with the system time. Isolating the code under unit tests, or a virtual machine, or something.
However, because I love PowerShell:
Get-ChildItem -r . |
? { $_.LastWriteTime -gt ([DateTime]::Now) } |
Set-ItemProperty -Name "LastWriteTime" -Value ([DateTime]::Now)
A: I don't use windows - but surely there is something like awk or grep that you can use to find the "future" timestamped files, and then "touch" them so they have the right time - even a perl script.
A: 1) Use a build system that doesn't use timestamps to detect modifications, like scons
2) Use ccache to speed up your build system that does use timestamps (and rebuild all).
In either case it uses MD5 sums, not timestamps, to verify that a file has been modified.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I access the non-IPM_SUBTREE Public Folder Tree with WMI? I'm trying to verify when the OAB (Offline Address Book) root folder for a new OAB is created with powershell. Is there a WMI class that exposes this? I'm using powershell, but any examples or links will do.
A: If your Exchange server is Exchange 2007, then PowerShell (using the Exchange snapin) will be able to access it by running this command:
Get-PublicFolder \NON_IPM_SUBTREE -recurse
If your server is Exchange 2003, then you will need a mixture of ADSI/LDAP to query that. Reply back if it's Exchange 2003.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Development directory Structure I am wondering what directory structure are commonly used in development projects. I mean with the idea of facilitating builds, deploys release, and etc.
I recently used a Maven structure for a java project, but I am not sure it's the best structure for a non-maven driven project.
So, I have two questions: When you guys start new projects, what structure do you use? And: what if you need to integrate two different languages, for example Java classes into a PHP application? PHP files are both source files and web files; do you use /src, /classes, webapps/php? What are your choices in such scenarios?
As a note: I am also wondering what your choices are for directory names. I like 3-letter names (src, lib, bin, web, img, css, xml, cfg), but what are your opinions about descriptive names like libraries, sources or htdocs/public_html?
A: After a couple of years working with different structures I recently found a structure that holds most variations for me:
/project_name (everything goes here)
/web (htdocs)
/img
/css
/app (usually some framework or sensitive code)
/lib (external libs)
/vendor_1
/vendor_2
/tmp
/cache
/sql (sql scripts usually with maybe diagrams)
/scripts
/doc (usually an empty directory)
A: Although we don't use Maven, we use the Maven directory structure.
We've found that it accurately reflects the concepts we need (e.g. separation of deployment code from test code, code from data, installers from code). Also we figure that if someday we switched to Maven, most of our process would remain the same.
A: I just found a interesting document about Directory structures on Zend website:
http://framework.zend.com/wiki/display/ZFDEV/Choosing+Your+Application%27s+Directory+Layout
A: A 2011 update:
http://java.sun.com/blueprints/code/projectconventions.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How can I generate a git diff of what's changed since the last time I pulled? I'd like to script, preferably in rake, the following actions into a single command:
*
*Get the version of my local git repository.
*Git pull the latest code.
*Git diff from the version I extracted in step #1 to what is now in my local repository.
In other words, I want to get the latest code form the central repository and immediately generate a diff of what's changed since the last time I pulled.
A: If you drop this into your bash profile you'll be able to run grin (git remote incoming) and grout (git remote outgoing) to see diffs of commits that are incoming and outgoing for origin master.
function parse_git_branch {
git branch --no-color 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/\1/'
}
function gd2 {
echo branch \($1\) has these commits and \($2\) does not
git log $2..$1 --no-merges --format='%h | Author:%an | Date:%ad | %s' --date=local
}
function grin {
git fetch origin master
gd2 FETCH_HEAD $(parse_git_branch)
}
function grout {
git fetch origin master
gd2 $(parse_git_branch) FETCH_HEAD
}
A: Greg's way should work (not me, other Greg :P). Regarding your comment, origin is a configuration variable that is set by Git when you clone the central repository to your local machine. Essentially, a Git repository remembers where it came from. You can, however, set these variables manually if you need to using git-config.
git config remote.origin.url <url>
where url is the remote path to your central repository.
Here is an example batch file that should work (I haven't tested it).
@ECHO off
:: Retrieve the changes, but don't merge them.
git fetch
:: Look at the new changes
git diff ...origin
:: Ask if you want to merge the new changes into HEAD
set /p PULL=Do you wish to pull the changes? (Y/N)
IF /I %PULL%==Y git pull
A: This is very similar to a question I asked about how to get changes on a branch in git. Note that git diff and git log behave inconsistently when given two dots vs. three dots. But, for your application you can use:
git fetch
git diff ...origin
After that, a git pull will merge the changes into your HEAD.
A: You could do this fairly simply with the reflog.
git pull origin
git diff @{1}..
That will give you a diff of the current branch as it existed before and after the pull. Note that if the pull doesn't actually update the current branch, the diff will give you the wrong results. Another option is to explicitly record the current version:
current=`git rev-parse HEAD`
git pull origin
git diff $current..
I personally use an alias that simply shows me a log, in reverse order (i.e. oldest to newest), sans merges, of all the commits since my last pull. I run this every time my pull updates the branch:
git config --global alias.lcrev 'log --reverse --no-merges --stat @{1}..'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "83"
} |
Q: How do I determine the permission settings for PHP scripts? What are the best file permission settings for PHP scripts? Any suggestions on ways to figure out the minimum required permissions?
A: The minimum permissions necessary for the script to function.
A: WalloWizard is correct that you should only use the minimum permissions necessary for the script to function.
However, let me be more specific, assuming that you're running on a Unix-based system such as Linux or BSD or Mac OSX. Your web server usually runs as an unprivileged user such as "nobody" and your scripts need to be readable by that user, so the best permissions are usually 644, meaning that you can read and write the script, and everyone else can only read it.
In the uncommon case that the script is owned by the same user running the web server, you can set the permissions to 600, so that you can read and write the script and no one else can even read it.
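For example, to apply those permissions (the path is illustrative):
chmod 644 /var/www/mysite/script.php
# or, for a whole tree of scripts:
find /var/www/mysite -name '*.php' -exec chmod 644 {} \;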
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What steps should be necessary to optimize a poorly performing query? I know this is a broad question, but I've inherited several poor performers and need to optimize them badly. I was wondering what are the most common steps involved to optimize. So, what steps do some of you guys take when faced with the same situation?
Related Question:
What generic techniques can be applied to optimize SQL queries?
A: In SQL Server you can look at the Query Plan in Query Analyzer or Management Studio. This will tell you the rough percentage of time spent in each batch of statements. You'll want to look for the following:
*
*Table scans; this means you are completely missing indexes
*Index scans; your query may not be using the correct indexes
*The thickness of the arrows between each step in a query tells you how many rows are being produced by that step, very thick arrows means you are processing a lot of rows, and can indicate that some joins need to be optimized.
Some other general tips:
*
*A large number of conditional statements, such as multiple if-else statements, can cause SQL Server to constantly rebuild the query plan. You can check for this using Profiler.
*Make sure that different queries aren't blocking each other, such as an update statement blocking a select statement. This can be avoided by specifying the (nolock) hint in SQL Server select statements (see the sketch after this list).
*As others have mentioned, try out the Performance Tuning wizard in Management Studio.
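A minimal sketch of that hint (the table is invented for illustration; keep in mind that NOLOCK permits dirty reads, trading consistency for concurrency):
SELECT OrderID, Status
FROM SalesOrders WITH (NOLOCK)
WHERE CustomerID = 42;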
Finally, I would highly recommend creating a set of load tests (using Visual Studio 2008 Test Edition), which you can use to simulate your application's behavior when dealing with a large amount of requests. Some SQL performance bottlenecks only manifest themselves under these circumstances, and being able to reproduce them makes it a lot easier to fix.
A: Indexes may be a good place to start...
The low hanging fruit can be knocked down with the SQL Server Index Tuning Wizard.
A: I'm not sure about other databases, but for SQL Server I recommend the Execution Plan. It very clearly (albeit with lots of vertical and horizontal scrolling, unless you've got a 400" monitor!) shows what steps of your query are sucking up the time.
If you've got one step that takes a crazy 80%, then maybe an index could be added, then after tweaking the index, re-run the Execution Plan to find your next biggest step.
After a couple tweaks you may find that there really are no steps that stand out from the others i.e. they're all 1-2% each. If that is the case, then you might then need to see if there is a way you can cut down the amount of data included in your query, do those four million closed sales orders need to be included in the "Active Sales Orders" query? No, so exclude all those with STATUS='C' ... or something like that.
Another improvement you'll see from the Execution Plan is bookmark lookups: basically it finds a match in the index, but then SQL Server has to quickly trawl through the table to find the record you want. This operation might at times take longer than just scanning the table in the first place would have; if that is the case, do you really need that index?
With indexes, and especially with SQL Server 2005 you should look to the INCLUDE clause, this basically allows you to have a column in an index without really being in the index, so if all the data you need for your query is in your index or is an included columnn then SQL Server doesn't have to even look at the table, a big performance pickup.
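A sketch of such a covering index (table and column names are invented for illustration):
-- Status is the search key; the INCLUDEd columns are stored at the leaf
-- level of the index, so this query never has to touch the base table.
CREATE NONCLUSTERED INDEX IX_SalesOrders_Status
ON SalesOrders (Status)
INCLUDE (OrderDate, CustomerID, TotalDue);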
A: There are a couple of things you can look at to optimize your query performance.
*
*Ensure that you just have the minimum of data. Make sure you select only the columns you need. Reduce field sizes to a minimum.
*Consider de-normalising your database to reduce joins
*Avoid loops (i.e. fetch cursors), stick to set operations.
*Implement the query as a stored procedure as this is pre-compiled and will execute faster.
*Make sure that you have the correct indexes set up. If your database is used mostly for searching then consider more indexes.
*Use the execution plan to see how the processing is done. What you want to avoid is a table scan as this is costly.
*Make sure that the Auto Statistics is set to on. SQL needs this to help decide the optimal execution. See Mike Gunderloy's great post for more info. Basics of Statistics in SQL Server 2005
*Make sure your indexes are not fragmented Reducing SQL Server Index Fragmentation
*Make sure your tables are not fragmented. How to Detect Table Fragmentation in SQL Server 2000 and 2005
A: *
*Look at the execution plan in query analyzer
*See what step costs the most
*Optimize the step!
*Return to step 1 [thx to Vinko]
A: Look at the indexes on the tables used by the query. An index may be needed on particular fields that participate in the where clause. Also look at the fields used in the joins in the query (if joins exist). If indexes already exist, look at the type of index.
Failing that (because there are negatives to using locking hints), look at locking hints and explicitly naming the index to use in the join. Using NOLOCK is more obviously warranted if you're getting a lot of deadlocked transactions.
Do what roman and Andy S mentioned first though.
A: The execution plan is a great start and will help you figure out what part of your query you need to tackle.
Once you figure out the where, it is time to tackle the how and why. Take a look at the type of queries you are trying to perform. Avoid loops at all costs as they are slow. Avoid cursors at all costs because they are slow. Stick to set-based queries whenever possible.
There are ways to give SQL hints on the type of joins to use if you are using joins. Be careful here though: while one hint may speed up your query once, it may slow down your query 10-fold the next time through, depending on the data and parameters.
Finally, make sure your database is well indexed. A good place to start: any field that is contained in a where clause probably should have an index on it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How to check if a value already exists to avoid duplicates? I've got a table of URLs and I don't want any duplicate URLs. How do I check to see if a given URL is already in the table using PHP/MySQL?
A: The simple SQL solutions require a unique field; the logic solutions do not.
You should normalize your urls to ensure there is no duplication. Functions in PHP such as strtolower() and urldecode() or rawurldecode() can help with this.
Assumptions: Your table name is 'websites', the column name for your url is 'url', and the arbitrary data to be associated with the url is in the column 'data'.
Logic Solutions
SELECT COUNT(*) AS UrlResults FROM websites WHERE url='http://www.domain.com'
Test the previous query with if statements in SQL or PHP to ensure that it is 0 before you continue with an INSERT statement.
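A sketch of that check in PHP (using mysqli prepared statements; the connection in $mysqli is assumed):
$stmt = $mysqli->prepare("SELECT COUNT(*) FROM websites WHERE url = ?");
$stmt->bind_param('s', $url);
$stmt->execute();
$stmt->bind_result($count);
$stmt->fetch();
$stmt->close();
if ($count == 0) {
    // Safe to INSERT - though note that another client could insert the
    // same URL between this check and your INSERT, which is why the
    // unique constraint below is the only reliable guard.
}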
Simple SQL Statements
Scenario 1: Your db is a first come first serve table and you have no desire to have duplicate entries in the future.
ALTER TABLE websites ADD UNIQUE (url)
This will prevent any entries from being able to be entered in to the database if the url value already exists in that column.
Scenario 2: You want the most up to date information for each url and don't want to duplicate content. There are two solutions for this scenario. (These solutions also require 'url' to be unique so the solution in Scenario 1 will also need to be carried out.)
REPLACE INTO websites (url, data) VALUES ('http://www.domain.com', 'random data')
This will trigger a DELETE action if a row exists followed by an INSERT in all cases, so be careful with ON DELETE declarations.
INSERT INTO websites (url, data) VALUES ('http://www.domain.com', 'random data')
ON DUPLICATE KEY UPDATE data='random data'
This will trigger an UPDATE action if a row exists and an INSERT if it does not.
A: In considering a solution to this problem, you need to first define what a "duplicate URL" means for your project. This will determine how to canonicalize the URLs before adding them to the database.
There are at least two definitions:
*
*Two URLs are considered duplicates if they represent the same resource knowing nothing about the corresponding web service that generates the corresponding content. Some considerations include:
*
*The scheme and domain name portion of the URLs are case-insensitive, so HTTP://WWW.STACKOVERFLOW.COM/ is the same as http://www.stackoverflow.com/.
*If one URL specifies a port, but it is the conventional port for the scheme and they are otherwise equivalent, then they are the same ( http://www.stackoverflow.com/ and http://www.stackoverflow.com:80/).
*If the parameters in the query string are simple rearrangements and the parameter names are all different, then they are the same; e.g. http://authority/?a=test&b=test and http://authority/?b=test&a=test. Note that http://authority/?a%5B%5D=test1&a%5B%5D=test2 is not the same, by this first definition of sameness, as http://authority/?a%5B%5D=test2&a%5B%5D=test1.
*If the scheme is HTTP or HTTPS, then the hash portions of the URLs can be removed, as this portion of the URL is not sent to the web server.
*A shortened IPv6 address can be expanded.
*Append a trailing forward slash to the authority only if it is missing.
*Unicode canonicalization changes the referenced resource; e.g. you can't conclude that http://google.com/?q=%C3%84 (%C3%84 represents 'Ä' in UTF-8) is the same as http://google.com/?q=A%CC%88 (%CC%88 represents U+0308, COMBINING DIAERESIS).
*If the scheme is HTTP or HTTPS, 'www.' in one URL's authority can not simply be removed if the two URLs are otherwise equivalent, as the text of the domain name is sent as the value of the Host HTTP header, and some web servers use virtual hosts to send back different content based on this header. More generally, even if the domain names resolve to the same IP address, you can not conclude that the referenced resources are the same.
*Apply basic URL canonicalization (e.g. lower case the scheme and domain name, supply the default port, stable sort query parameters by parameter name, remove the hash portion in the case of HTTP and HTTPS, ...), and take into account knowledge of the web service. Maybe you will assume that all web services are smart enough to canonicalize Unicode input (Wikipedia is, for example), so you can apply Unicode Normalization Form Canonical Composition (NFC). You would strip 'www.' from all Stack Overflow URLs. You could use PostRank's postrank-uri code, ported to PHP, to remove all sorts of pieces of the URLs that are unnecessary (e.g. &utm_source=...).
Definition 1 leads to a stable solution (i.e. there is no further canonicalization that can be performed and the canonicalization of a URL will not change). Definition 2, which I think is what a human considers the definition of URL canonicalization, leads to a canonicalization routine that can yield different results at different moments in time.
Whichever definition you choose, I suggest that you use separate columns for the scheme, login, host, port, and path portions. This will allow you to use indexes intelligently. The columns for scheme and host can use a character collation (all character collations are case-insensitive in MySQL), but the columns for the login and path need to use a binary (case-sensitive) collation. Also, if you use Definition 2, you need to preserve the original scheme, authority, and path portions, as certain canonicalization rules might be added or removed from time to time.
EDIT: Here are example table definitions:
CREATE TABLE `urls1` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`scheme` VARCHAR(20) NOT NULL,
`canonical_login` VARCHAR(100) DEFAULT NULL COLLATE 'utf8mb4_bin',
`canonical_host` VARCHAR(100) NOT NULL COLLATE 'utf8mb4_unicode_ci', /* the "ci" stands for case-insensitive. Also, we want 'utf8mb4_unicode_ci'
rather than 'utf8mb4_general_ci' because 'utf8mb4_general_ci' treats accented characters as equivalent. */
`port` INT UNSIGNED,
`canonical_path` VARCHAR(4096) NOT NULL COLLATE 'utf8mb4_bin',
PRIMARY KEY (`id`),
INDEX (`canonical_host`(10), `scheme`)
) ENGINE = 'InnoDB';
CREATE TABLE `urls2` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`canonical_scheme` VARCHAR(20) NOT NULL,
`canonical_login` VARCHAR(100) DEFAULT NULL COLLATE 'utf8mb4_bin',
`canonical_host` VARCHAR(100) NOT NULL COLLATE 'utf8mb4_unicode_ci',
`port` INT UNSIGNED,
`canonical_path` VARCHAR(4096) NOT NULL COLLATE 'utf8mb4_bin',
`orig_scheme` VARCHAR(20) NOT NULL,
`orig_login` VARCHAR(100) DEFAULT NULL COLLATE 'utf8mb4_bin',
`orig_host` VARCHAR(100) NOT NULL COLLATE 'utf8mb4_unicode_ci',
`orig_path` VARCHAR(4096) NOT NULL COLLATE 'utf8mb4_bin',
PRIMARY KEY (`id`),
INDEX (`canonical_host`(10), `canonical_scheme`),
INDEX (`orig_host`(10), `orig_scheme`)
) ENGINE = 'InnoDB';
Table `urls1` is for storing canonical URLs according to definition 1. Table `urls2` is for storing canonical URLs according to definition 2.
Unfortunately you will not be able to specify a UNIQUE constraint on the tuple (`scheme`/`canonical_scheme`, `canonical_login`, `canonical_host`, `port`, `canonical_path`) as MySQL limits the length of InnoDB keys to 767 bytes.
A: If you don't want to have duplicates you can do following:
*
*add uniqueness constraint
*use "REPLACE" or "INSERT ... ON DUPLICATE KEY UPDATE" syntax
If multiple users can insert data to DB, method suggested by @Jeremy Ruten, can lead to an error: after you performed a check someone can insert similar data to the table.
A: To answer your initial question, the easiest way to check whether there is a duplicate is to run an SQL query against what you're trying to add!
For example, were you to want to check for the url http://www.example.com/ in the table links, then your query would look something like
SELECT * FROM links WHERE url = 'http://www.example.com/';
Your PHP code would look something like
$conn = mysql_connect('localhost', 'username', 'password');
if (!$conn)
{
die('Could not connect to database');
}
if(!mysql_select_db('mydb', $conn))
{
die('Could not select database mydb');
}
$result = mysql_query("SELECT * FROM links WHERE url = 'http://www.example.com/'", $conn);
if (!$result)
{
die('There was a problem executing the query');
}
$number_of_rows = mysql_num_rows($result);
if ($number_of_rows > 0)
{
die('This URL already exists in the database');
}
I've written this out longhand here, with all the connecting to the database, etc. It's likely that you'll already have a connection to a database, so you should use that rather than starting a new connection (replace $conn in the mysql_query command and remove the stuff to do with mysql_connect and mysql_select_db)
Of course, there are other ways of connecting to the database, like PDO, or using an ORM, or similar, so if you're already using those, this answer may not be relevant (and it's probably a bit beyond the scope to give answers related to this here!)
However, MySQL provides many ways to prevent this from happening in the first place.
Firstly, you can mark a field as "unique".
Lets say I have a table where I want to just store all the URLs that are linked to from my site, and the last time they were visited.
My definition might look something like this:-
CREATE TABLE links
(
url VARCHAR(255) NOT NULL,
last_visited TIMESTAMP
)
This would allow me to add the same URL over and over again, unless I wrote some PHP code similar to the above to stop this happening.
However, were my definition to change to
CREATE TABLE links
(
url VARCHAR(255) NOT NULL,
last_visited TIMESTAMP,
PRIMARY KEY (url)
)
Then this would make mysql throw an error when I tried to insert the same value twice.
An example in PHP would be
$result = mysql_query("INSERT INTO links (url, last_visited) VALUES ('http://www.example.com/', NOW()", $conn);
if (!$result)
{
die('Could not Insert Row 1');
}
$result2 = mysql_query("INSERT INTO links (url, last_visited) VALUES ('http://www.example.com/', NOW()", $conn);
if (!$result2)
{
die('Could not Insert Row 2');
}
If you ran this, you'd find that on the first attempt, the script would die with the comment Could not Insert Row 2. However, on subsequent runs, it'd die with Could not Insert Row 1.
This is because MySQL knows that the url is the Primary Key of the table. A Primary key is a unique identifier for that row. Most of the time, it's useful to set the unique identifier for a row to be a number. This is because MySQL is quicker at looking up numbers than it is looking up text. Within MySQL, keys (and espescially Primary Keys) are used to define relationships between two tables. For example, if we had a table for users, we could define it as
CREATE TABLE users (
username VARCHAR(255) NOT NULL,
password VARCHAR(40) NOT NULL,
PRIMARY KEY (username)
)
However, when we wanted to store information about a post the user had made, we'd have to store the username with that post to identify that the post belonged to that user.
I've already mentioned that MySQL is faster at looking up numbers than strings, so this would mean we'd be spending time looking up strings when we didn't have to.
To solve this, we can add an extra column, user_id, and make that the primary key (so when looking up the user record based on a post, we can find it quicker)
CREATE TABLE users (
user_id INT(10) NOT NULL AUTO_INCREMENT,
username VARCHAR(255) NOT NULL,
password VARCHAR(40) NOT NULL,
PRIMARY KEY (`user_id`)
)
You'll notice that I've also added something new here - AUTO_INCREMENT. This basically allows us to let that field look after itself. Each time a new row is inserted, it adds 1 to the previous number, and stores that, so we don't have to worry about numbering, and can just let it do this itself.
So, with the above table, we can do something like
INSERT INTO users (username, password) VALUES('Mez', 'd3571ce95af4dc281f142add33384abc5e574671');
and then
INSERT INTO users (username, password) VALUES('User', '988881adc9fc3655077dc2d4d757d480b5ea0e11');
When we select the records from the database, we get the following:-
mysql> SELECT * FROM users;
+---------+----------+------------------------------------------+
| user_id | username | password |
+---------+----------+------------------------------------------+
| 1 | Mez | d3571ce95af4dc281f142add33384abc5e574671 |
| 2 | User | 988881adc9fc3655077dc2d4d757d480b5ea0e11 |
+---------+----------+------------------------------------------+
2 rows in set (0.00 sec)
However, here - we have a problem - we can still add another user with the same username! Obviously, this is something we don't want to do!
mysql> SELECT * FROM users;
+---------+----------+------------------------------------------+
| user_id | username | password |
+---------+----------+------------------------------------------+
| 1 | Mez | d3571ce95af4dc281f142add33384abc5e574671 |
| 2 | User | 988881adc9fc3655077dc2d4d757d480b5ea0e11 |
| 3 | Mez | d3571ce95af4dc281f142add33384abc5e574671 |
+---------+----------+------------------------------------------+
3 rows in set (0.00 sec)
Lets change our table definition!
CREATE TABLE users (
user_id INT(10) NOT NULL AUTO_INCREMENT,
username VARCHAR(255) NOT NULL,
password VARCHAR(40) NOT NULL,
PRIMARY KEY (user_id),
UNIQUE KEY (username)
)
Lets see what happens when we now try and insert the same user twice.
mysql> INSERT INTO users (username, password) VALUES('Mez', 'd3571ce95af4dc281f142add33384abc5e574671');
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO users (username, password) VALUES('Mez', 'd3571ce95af4dc281f142add33384abc5e574671');
ERROR 1062 (23000): Duplicate entry 'Mez' for key 'username'
Huzzah!! We now get an error when we try and insert the username for the second time. Using something like the above, we can detect this in PHP.
Now, lets go back to our links table, but with a new definition.
CREATE TABLE links
(
link_id INT(10) NOT NULL AUTO_INCREMENT,
url VARCHAR(255) NOT NULL,
last_visited TIMESTAMP,
PRIMARY KEY (link_id),
UNIQUE KEY (url)
)
and let's insert "http://www.example.com" into the database.
INSERT INTO links (url, last_visited) VALUES ('http://www.example.com/', NOW());
If we try and insert it again....
ERROR 1062 (23000): Duplicate entry 'http://www.example.com/' for key 'url'
But what happens if we want to update the time it was last visited?
Well, we could do something complex with PHP, like so:-
$result = mysql_query("SELECT * FROM links WHERE url = 'http://www.example.com/'", $conn);
if (!$result)
{
die('There was a problem executing the query');
}
$number_of_rows = mysql_num_rows($result);
if ($number_of_rows > 0)
{
$result = mysql_query("UPDATE links SET last_visited = NOW() WHERE url = 'http://www.example.com/'", $conn);
if (!$result)
{
die('There was a problem updating the links table');
}
}
Or, even grab the id of the row in the database and use that to update it.
$result = mysql_query("SELECT * FROM links WHERE url = 'http://www.example.com/'", $conn);
if (!$result)
{
die('There was a problem executing the query');
}
$number_of_rows = mysql_num_rows($result);
if ($number_of_rows > 0)
{
$row = mysql_fetch_assoc($result);
$result = mysql_query('UPDATE links SET last_visited = NOW() WHERE link_id = ' . intval($row['link_id'], $conn);
if (!$result)
{
die('There was a problem updating the links table');
}
}
But, MySQL has a nice built in feature called REPLACE INTO
Let's see how it works.
mysql> SELECT * FROM links;
+---------+-------------------------+---------------------+
| link_id | url | last_visited |
+---------+-------------------------+---------------------+
| 1 | http://www.example.com/ | 2011-08-19 23:48:03 |
+---------+-------------------------+---------------------+
1 row in set (0.00 sec)
mysql> INSERT INTO links (url, last_visited) VALUES ('http://www.example.com/', NOW());
ERROR 1062 (23000): Duplicate entry 'http://www.example.com/' for key 'url'
mysql> REPLACE INTO links (url, last_visited) VALUES ('http://www.example.com/', NOW());
Query OK, 2 rows affected (0.00 sec)
mysql> SELECT * FROM links;
+---------+-------------------------+---------------------+
| link_id | url | last_visited |
+---------+-------------------------+---------------------+
| 2 | http://www.example.com/ | 2011-08-19 23:55:55 |
+---------+-------------------------+---------------------+
1 row in set (0.00 sec)
Notice that when using REPLACE INTO, it's updated the last_visited time, and not thrown an error!
This is because MySQL detects that you're attempting to replace a row. It knows the row that you want, as you've set url to be unique. MySQL figures out the row to replace by using the bit that you passed in that should be unique (in this case, the url) and updating for that row the other values. It's also updated the link_id - which is a bit unexpected! (In fact, I didn't realise this would happen until I just saw it happen!)
But what if you wanted to add a new URL? Well, REPLACE INTO will happily insert a new row if it can't find a matching unique row!
mysql> REPLACE INTO links (url, last_visited) VALUES ('http://www.stackoverflow.com/', NOW());
Query OK, 1 row affected (0.00 sec)
mysql> SELECT * FROM links;
+---------+-------------------------------+---------------------+
| link_id | url | last_visited |
+---------+-------------------------------+---------------------+
| 2 | http://www.example.com/ | 2011-08-20 00:00:07 |
| 3 | http://www.stackoverflow.com/ | 2011-08-20 00:01:22 |
+---------+-------------------------------+---------------------+
2 rows in set (0.00 sec)
I hope this answers your question, and gives you a bit more information about how MySQL works!
A: I don't know the exact syntax for MySQL, but all you need to do is wrap your INSERT with an IF statement that queries the table to see if a record with the given url EXISTS; if it exists, don't insert a new record.
if MSSQL you can do this:
IF NOT EXISTS (SELECT 1 FROM YOURTABLE WHERE URL = 'URL')
INSERT INTO YOURTABLE (...) VALUES (...)
A: Are you concerned purely about URLs that are the exact same string? If so, there is a lot of good advice in other answers. Or do you also have to worry about canonicalization?
For example: http://google.com and http://go%4fgle.com are the exact same URL, but would be allowed as duplicates by any of the database-only techniques. If this is an issue you should preprocess the URLs to resolve any character escape sequences.
Depending where the URLs are coming from you will also have to worry about parameters and whether they are significant in your application.
A: First, prepare the database.
*
*Domain names aren't case-sensitive, but you have to assume the rest of a URL is. (Not all web servers respect case in URLs, but most do, and you can't easily tell by looking.)
*Assuming you need to store more than a domain name, use a case-sensitive collation.
*If you decide to store the URL in two columns--one for the domain name and one for the resource locator--consider using a case-insensitive collation for the domain name, and a case-sensitive collation for the resource locator. If I were you, I'd test both ways (URL in one column vs. URL in two columns).
*Put a UNIQUE constraint on the URL column. Or on the pair of columns, if you store the domain name and resource locator in separate columns, as UNIQUE (url, resource_locator).
*Use a CHECK() constraint to keep encoded URLs out of the database. This CHECK() constraint is essential to keep bad data from coming in through a bulk copy or through the SQL shell (a sketch follows this list).
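A sketch of such a constraint (table and column names are illustrative; note that MySQL versions before 8.0.16 parse but silently ignore CHECK constraints, so on older servers you would enforce this in a trigger or in the stored procedures instead):
ALTER TABLE urls
  ADD CONSTRAINT chk_url_not_encoded
  CHECK (url NOT REGEXP '%[0-9a-fA-F]{2}'); -- rejects percent-encoded sequences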
Second, prepare the URL.
*
*Domain names aren't case-sensitive. If you store the full URL in one column, lowercase the domain name on all URLs. But be aware that some languages have uppercase letters that have no lowercase equivalent.
*Think about trimming trailing characters. For example, these two URLs from amazon.com point to the same product. You probably want to store the second version, not the first.
http://www.amazon.com/Systemantics-Systems-Work-Especially-They/dp/070450331X/ref=sr_1_1?ie=UTF8&qid=1313583998&sr=8-1
http://www.amazon.com/Systemantics-Systems-Work-Especially-They/dp/070450331X
*Decode encoded URLs. (See php's urldecode() function. Note carefully its shortcomings, as described in that page's comments.) Personally, I'd rather handle these kinds of transformations in the database rather than in client code. That would involve revoking permissions on the tables and views, and allowing inserts and updates only through stored procedures; the stored procedures handle all the string operations that put the URL into a canonical form. But keep an eye on performance when you try that. CHECK() constraints (see above) are your safety net.
Third, if you're inserting only the URL, don't test for its existence first. Instead, try to insert and trap the error that you'll get if the value already exists. Testing and inserting hits the database twice for every new URL. Insert-and-trap just hits the database once. Note carefully that insert-and-trap isn't the same thing as insert-and-ignore-errors. Only one particular error means you violated the unique constraint; other errors mean there are other problems.
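A hedged PDO sketch of insert-and-trap (connection details and table name are illustrative; MySQL reports a unique-key violation as error code 1062):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
try {
    $stmt = $pdo->prepare('INSERT INTO urls (url) VALUES (?)');
    $stmt->execute(array($url));
} catch (PDOException $e) {
    if ($e->errorInfo[1] == 1062) {
        // duplicate URL - expected, ignore it
    } else {
        throw $e; // any other error is a real problem
    }
}
?>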
On the other hand, if you're inserting the URL along with some other data in the same row, you need to decide ahead of time whether you'll handle duplicate urls by
*
*deleting the old row and inserting a new one (See MySQL's REPLACE extension to SQL)
*updating existing values (See ON DUPLICATE KEY UPDATE)
*ignoring the issue
*requiring the user to take further action
REPLACE eliminates the need to trap duplicate key errors, but it might have unfortunate side effects if there are foreign key references.
A: To guarantee uniqueness you need to add a unique constraint. Assuming your table name is "urls" and the column name is "url", you can add the unique constraint with this alter table command:
alter table urls add constraint unique_url unique (url);
The alter table will probably fail (who really knows with MySQL) if you've already got duplicate urls in your table already.
A: If you want to insert urls into the table, but only those that don't exist already you can add a UNIQUE contraint on the column and in your INSERT query add IGNORE so that you don't get an error.
Example: INSERT IGNORE INTO urls SET url = 'url-to-insert'
A: First things first. If you haven't already created the table, or you created a table but it has no data in it yet, then you need to add a unique constraint, or a unique index. More information about choosing between an index and a constraint follows at the end of the post. But they both accomplish the same thing, enforcing that the column only contains unique values.
To create a table with a unique index on this column, you can use.
CREATE TABLE MyURLTable(
ID INTEGER NOT NULL AUTO_INCREMENT
,URL VARCHAR(512)
,PRIMARY KEY(ID)
,UNIQUE INDEX IDX_URL(URL)
);
If you just want a unique constraint, and no index on that table, you can use
CREATE TABLE MyURLTable(
ID INTEGER NOT NULL AUTO_INCREMENT
,URL VARCHAR(512)
,PRIMARY KEY(ID)
,CONSTRAINT UNIQUE UNIQUE_URL(URL)
);
Now, if you already have a table, and there is no data in it, then you can add the index or constraint to the table with one of the following pieces of code.
ALTER TABLE MyURLTable
ADD UNIQUE INDEX IDX_URL(URL);
ALTER TABLE MyURLTable
ADD CONSTRAINT UNIQUE UNIQUE_URL(URL);
Now, you may already have a table with some data in it. In that case, you may already have some duplicate data in it. You can try creating the constraint or index shown above, and it will fail if you already have duplicate data. If you don't have duplicate data, great; if you do, you'll have to remove the duplicates. You can see a list of urls with duplicates using the following query.
SELECT URL,COUNT(*),MIN(ID)
FROM MyURLTable
GROUP BY URL
HAVING COUNT(*) > 1;
To delete rows that are duplicates, and keep one, do the following:
DELETE RemoveRecords
FROM MyURLTable As RemoveRecords
LEFT JOIN
(
SELECT MIN(ID) AS ID
FROM MyURLTable
GROUP BY URL
HAVING COUNT(*) > 1
UNION
SELECT ID
FROM MyURLTable
GROUP BY URL
HAVING COUNT(*) = 1
) AS KeepRecords
ON RemoveRecords.ID = KeepRecords.ID
WHERE KeepRecords.ID IS NULL;
Now that you have deleted all the duplicate records, you can go ahead and create your index or constraint. Now, if you want to insert a value into your database, you should use something like:
INSERT IGNORE INTO MyURLTable(URL)
VALUES('http://www.example.com');
That will attempt to do the insert, and if it finds a duplicate, nothing will happen. Now, let's say you have other columns; you can do something like this.
INSERT INTO MyURLTable(URL,Visits)
VALUES('http://www.example.com',1)
ON DUPLICATE KEY UPDATE Visits=Visits+1;
That will try to insert the value, and if it finds the URL, then it will update the record by incrementing the visits counter. Of course, you can always do a plain old insert, and handle the resulting error in your PHP code. Now, as for whether you should use constraints or indexes, that depends on a lot of factors. Indexes make for faster lookups, so your performance will be better as the table gets bigger, but storing the index will take up extra space. Indexes also usually make inserts and updates take longer, because the index has to be updated as well. However, since the value has to be looked up either way to enforce uniqueness, in this case it may be quicker to just have the index anyway. As for anything performance related, the answer is to try both options and profile the results to see which works best for your situation.
A: If you just want a yes or no answer this syntax should give you the best performance.
select if(exists (select url from urls where url = 'http://asdf.com'), 1, 0) from dual
A: If you just want to make sure there are no duplicates then add an unique index to the url field, that way there is no need to explicitly check if the url exists, just insert as normal, and if it is already there then the insert will fail with a duplicate key error.
A: The answer depends on whether you want to know when an attempt is made to enter a record with a duplicate field. If you don't care then use the "INSERT... ON DUPLICATE KEY" syntax as this will make your attempt quietly succeed without creating a duplicate.
If on the other hand you want to know when such an event happens and prevent it, then you should use a unique key constraint which will cause the attempted insert/update to fail with a meaningful error.
A: $url = "http://www.scroogle.com";
$query = "SELECT `id` FROM `urls` WHERE `url` = '$url' ";
$resultdb = mysql_query($query) or die(mysql_error());
list($idtemp) = mysql_fetch_array($resultdb) ;
if(empty($idtemp)) // if $idtemp is empty the url doesn't exist and we go ahead and insert it into the db.
{
mysql_query("INSERT INTO urls (`url` ) VALUES('$url') ") or die (mysql_error());
}else{
//do something else if the url already exists in the DB
}
A: Make the column the primary key
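For an existing table that could look like this (assuming the table is named urls and doesn't already have a different primary key):
ALTER TABLE urls ADD PRIMARY KEY (url);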
A: You can locate (and remove) using a self-join. Your table has some URL and also some PK (We know that the PK is not the URL because otherwise you would not be allowed to have duplicates)
SELECT
*
FROM
yourTable a
JOIN
yourTable b -- Join the same table
ON b.[URL] = a.[URL] -- where the URL's match
AND a.[PK] <> b.[PK] -- but the PKs are different
This will return all rows which have duplicated URLs.
Say, though, that you wanted to only select the duplicates and exclude the original.... Well you would need to decide what constitutes the original. For the purpose of this answer let's assume that the lowest PK is the "original"
All you need to do is add the following clause to the above query:
WHERE
a.[PK] NOT IN (
SELECT
TOP 1 c.[PK] -- Only grabbing the original!
FROM
yourTable c
WHERE
c.[URL] = a.[URL] -- has the same URL
ORDER BY
c.[PK] ASC) -- sort it by whatever your criterion is for "original"
Now you have a set of all non-original duplicated rows. You could easily execute a DELETE or whatever you like from this result set.
Note that this approach may be inefficient, in part because mySQL doesn't always handle IN well but I understand from the OP that this is sort of "clean up" on the table, not always a check.
If you want to check at INSERT time whether or not a value already exists you can run something like this
SELECT
1
WHERE
EXISTS (SELECT * FROM yourTable WHERE [URL] = 'testValue')
If you get a result then you can conclude the value already exists in your DB at least once.
A: You could do this query:
SELECT url FROM urls WHERE url = 'http://asdf.com' LIMIT 1
Then check if mysql_num_rows() == 1 to see if it exists.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: When did browsers start supporting multiple classes per tag? You can use more than one css class in an HTML tag in current web browsers, e.g.:
<div class="style1 style2 style3">foo bar</div>
This hasn't always worked; with which versions did the major browsers begin correctly supporting this feature?
A: @Wayne Kao - IE6 has no problem reading more than one class name on an element, and applying styles that belong to each class. What the article is referring to is creating new styles based on the combination of class names.
<div class="bold italic">content</div>
.bold {
font-weight: 800;
}
.italic {
font-style: italic;
}
IE6 would apply both bold and italic styles to the div. However, say we wanted all elements that have bold and italic classes to also be purple. In Firefox (or possibly IE7, not sure), we could write something like this:
.bold.italic {
color: purple;
}
That would not work in IE6.
A: I believe Firefox has always supported this, at least since v1.5 anyway. IE only added full support in v7. IE6 does partially support it, but it's pretty buggy, so don't count on it working properly.
A: According to blooberry, IE4 and Netscape 4.x do not support this. HTML 4.0 spec says
class = cdata-list [CS]
This attribute assigns a class name or set of class names to an element. Any number of elements may be assigned the same class name or names. Multiple class names must be separated by white space characters.
A: Apparently IE 6 doesn't handle these correctly if you have CSS selectors that contain multiple class names:
http://www.ryanbrill.com/archives/multiple-classes-in-ie/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do you find out the ProductCode from a .Net Installer class custom action I need to know the application's ProductCode in the Installer.OnCommitted callback. There doesn't seem to be an obvious way of determining this.
A: You can avoid hardcoding your product code, using /productCode=[ProductCode] in your CustomActionData property.
A: I ended up passing the product code as a command line argument to my Installer class using the CustomActionData property in Visual Studio (e.g. /productcode={31E1145F-B833-47c6-8C80-A55F306B8A6C}).
I can then access this from any callback within the Installer class using the Context.Parameters StringDictionary
string productCode = (string)Context.Parameters["productcode"];
A: The MSI function MsiGetProperty can be used to get the name of the ProductCode property. I don't know if that would work in this case, since I've never created a .NET installer.
A: The suggestion from @Chris Tybur seems to work
Here's my C# Code:
public static string GetProductCode(string fileName)
{
IntPtr hInstall = IntPtr.Zero;
try
{
uint num = MsiOpenPackage(fileName, ref hInstall);
if ((ulong)num != 0)
{
throw new Exception("Cannot open database: " + num);
}
int pcchValueBuf = 255;
StringBuilder szValueBuf = new StringBuilder(pcchValueBuf);
num = MsiGetProperty(hInstall, "ProductCode", szValueBuf, ref pcchValueBuf);
if ((ulong)num != 0)
{
throw new Exception("Failed to Get Property ProductCode: " + num);
}
return szValueBuf.ToString();
}
finally
{
if (hInstall != IntPtr.Zero)
{
MsiCloseHandle(hInstall);
}
}
}
[DllImport("msi.dll", CharSet = CharSet.Unicode, EntryPoint = "MsiGetPropertyW", ExactSpelling = true, SetLastError = true)]
private static extern uint MsiGetProperty(IntPtr hInstall, string szName, [Out] StringBuilder szValueBuf, ref int pchValueBuf);
[DllImport("msi.dll", CharSet = CharSet.Unicode, EntryPoint = "MsiOpenPackageW", ExactSpelling = true, SetLastError = true)]
private static extern uint MsiOpenPackage(string szDatabasePath, ref IntPtr hProduct);
[DllImport("msi.dll", CharSet = CharSet.Unicode, ExactSpelling = true, SetLastError = true)]
private static extern int MsiCloseHandle(IntPtr hAny);
FWIW: There is a small blurb on this MSDN Site that might be cause for concern: https://learn.microsoft.com/en-us/windows/win32/msi/obtaining-context-information-for-deferred-execution-custom-actions
MsiGetProperty: Supports a limited set of properties when used with deferred execution custom actions: the CustomActionData property, ProductCode property, and UserSID property. Commit custom actions cannot use the MsiGetProperty function to obtain the ProductCode property. Commit custom actions can use the CustomActionData property to obtain the product code.
Note the call-out that commit custom actions cannot use the MsiGetProperty function to obtain the ProductCode property. So YMMV.
Reviewing How can I find the product GUID of an installed MSI setup? you can use the COM API to gather this (the current version shows a VBScript) which might be worth inspecting as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Loading Assemblies from the Network This is related to the this question and the answer maybe the same but
I'll ask anyways.
I understand that we can start managed executables from the network from .NET
3.5 SP1 but what about assemblies loaded from inside the executable?
Does the same thing apply?
A: You have been able to load Assemblies from the network at least since .NET 2.0. I have used this on a previous project. The only thing to watch is the size of the assembly and the number and size of the dependencies that it is loading.
If you are using a separate AppDomain, then you will need to take special consideration of the dependencies.
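For reference, a minimal sketch (the share path and assembly name are made up):
using System;
using System.Reflection;

class NetworkLoader
{
    static void Main()
    {
        // LoadFrom resolves the assembly (and its dependencies) from the share.
        // On runtimes before .NET 3.5 SP1, CAS policy for the intranet zone
        // may grant it less than full trust.
        Assembly asm = Assembly.LoadFrom(@"\\server\share\MyLib.dll");
        Console.WriteLine(asm.FullName);
    }
}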
A: My understanding is yes, you're trying to load an untrusted module into your local app domain.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I disassemble a VC++ application? I believe the application has some parts that target .NET, and some that don't. I'm particularly interested in looking at the resource files, if there are any.
A: If you want to disassemble a native x86/x64 app use IDA; a .NET exe/dll can be disassembled using Reflector. There are tons of utilities to extract resources. Can you elaborate your question a bit?
A: To add to aku's excellent answer, for English speakers, IDA Pro is available at http://www.hex-rays.com/.
A: Looking at the resource files isn't really "disassembling" (not really) and if that's all you want to do you can just open the .exe or .dll inside Visual Studio or a similar tool and it will give you a resources view.
A: Do not get scared by the prices; the freeware version (available from hex-rays.com) is perfectly sufficient for reversing Win32 x86 code.
A: I too would highly recommend IDA for reverse engineering if you want to see the assembly code and how the binaries have been compiled/linked.
To simply see "inside" binary files (exe, dll, sys, ...) try CFF Explorer, it's free and it's great:
http://www.ntcore.com/exsuite.php
you can examine the binary files structure in great detail including resources.
If CFF Explorer is not enough then try PE Explorer which costs a little bit:
http://www.heaventools.com/
A: PE Explorer is definitely the best resource viewing tool, but you might want to have a look at its "resource-only" version - Resource Tuner.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How can I make my VS2008 x86 installer install x64 assemblies on x64? I'm using the VS2008 installer (plus a custom Orca action) to create an installer for my .NET product.
I just recently found out that one of the third-party assemblies I was using is x86-specific (as it includes some native code); thus, x64 customers were getting crashes on startup with errors about the assembly not being appropriate for their platform.
I sent such a customer a copy of the x64 version of this third-party assembly, and told him to just copy it over the existing x86 one. It worked, sweet! So now I just need to make the installer do this for me.
This actually appears nontrivial :(. Ideally, I just want the installer (which would be x86, since that can run on both platforms) to include both the x86 and x64 versions of this third-party assembly, and install the appropriate one. In other words, I want a single installer that makes my users' lives easy.
I thought I had this worked out, using MSI conditional statements and all that. But apparently no... the VS2008 setup projects won't compile unless you specify "x86" or "x64." If you specify x86, it gives a compilation error saying it can't include the x64 assembly. If you specify x64, then the result cannot be executed on an x86 computer. Damn!
Someone must have had this problem before. Unfortunately Google is unhelpful, so I turn to StackOverflow!
A: When I looked into this a year ago, I came to the conclusion that it was not possible. It's worth noting that many Microsoft-supplied MSI files come in separate x86 and x64 flavors -- and presumably, they'd only deliver a single file if that were possible.
A: If I understand you correctly, you want to copy one file if you're installing on x86 and a different file (with the same name) if you're installing on an x64 platform.
First of all, you cannot create one MSI for 2 different platforms, since an x64 MSI simply will not run on an x86 platform and an x86 MSI will be installed using WOW64 on an x64 platform.
On the other hand, you CAN create one x86 MSI that contains 2 different versions of a file and selectively copy the appropriate file during installation.
The easiest way is using WIX (V3) instead of the build-in VS2008 MSI generator. WIX gives you far greater control over what gets installed on the customer's machine and where, the ability to generate different installers for different platforms and full MSBuild support as an added bonus. (see http://wix.sourceforge.net for more info.)
In case you're worried that WiX is still in beta: the generated MSI files are perfectly OK, and I've never run into a bug yet. (And I develop setup projects for a living.)
Finally, you can check the VersionNT64 property to see if an x86 installer is running on an x64 platform. If that property is present, you're running on x64; otherwise you're running on x86.
Hope this helps.
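For illustration, a minimal WiX v3 sketch of that check (component GUIDs and file names are placeholders; exactly one of the two components installs per platform):
<!-- inside a Directory/ComponentGroup of your .wxs file -->
<Component Id="ThirdPartyX86" Guid="PUT-GUID-HERE">
  <Condition>NOT VersionNT64</Condition>
  <File Id="LibX86" Name="ThirdParty.dll" Source="x86\ThirdParty.dll" />
</Component>
<Component Id="ThirdPartyX64" Guid="PUT-GUID-HERE-TOO">
  <Condition>VersionNT64</Condition>
  <File Id="LibX64" Name="ThirdParty.dll" Source="x64\ThirdParty.dll" />
</Component>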
A: I have had some success by using two features to selectively include the two sets of files (in separate components of course, with their individualized file identifiers!).
The installation must be marked as x32 to install on both x32 and x64. It will always install to the x32 directories and will largely be treated as a 32 bit application running under WOW.
Using the VersionNT64 property you can determine if a Windows 64 installation is present, and conditionally install the files you need.
I'm not sure how much of this functionality is available in VS2008 install projects - I am using some other commercial tools to set up the installer this way. Of course you can use Orca to do it too, though it was non-trivial to get it to work with commercial tools and Orca is much harder.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Empty namespace using Linq Xml I'm trying to create a sitemap using Linq to Xml, but am getting an empty namespace attribute, which I would like to get rid of. e.g.
XNamespace ns = "http://www.sitemaps.org/schemas/sitemap/0.9";
XDocument xdoc = new XDocument(new XDeclaration("1.0", "utf-8", "true"),
new XElement(ns + "urlset",
new XElement("url",
new XElement("loc", "http://www.example.com/page"),
new XElement("lastmod", "2008-09-14"))));
The result is ...
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url xmlns="">
<loc>http://www.example.com/page</loc>
<lastmod>2008-09-14</lastmod>
</url>
</urlset>
I would rather not have the xmlns="" on the url element. I can strip it out using Replace on the final xdoc.ToString(), but is there a more correct way?
A: The "more correct way" would be:
XDocument xdoc = new XDocument(new XDeclaration("1.0", "utf-8", "true"),
new XElement(ns + "urlset",
new XElement(ns + "url",
new XElement(ns + "loc", "http://www.example.com/page"),
new XElement(ns + "lastmod", "2008-09-14"))));
Same as your code, but with the "ns +" before every element name that needs to be in the sitemap namespace. It's smart enough not to put any unnecessary namespace declarations in the resulting XML, so the result is:
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>http://www.example.com/page</loc>
<lastmod>2008-09-14</lastmod>
</url>
</urlset>
which is, if I'm not mistaken, what you want.
A: I stumbled across this post while dealing with a similar problem in VB.NET. I was using XML literals and it took me some searching to figure out how to make this solution work with the XML literal construction and not just the functional construction.
The solution is to import the XML namespace at the top of the file.
Imports <xmlns:ns="x-schema:tsSchema.xml">
And then prefix all of my XML literals in the query expression with the imported namespace. This removes the empty namespace that were appearing on the elements when I saved my output.
Dim output As XDocument = <?xml version="1.0" encoding="utf-8"?>
<XML ID="Microsoft Search Thesaurus">
<thesaurus xmlns="x-schema:tsSchema.xml">
<diacritics_sensitive>0</diacritics_sensitive>
<%= From tg In termGroups _
Select <ns:expansion>
<%= From t In tg _
Select <ns:sub><%= t %></ns:sub> %>
</ns:expansion> %>
</thesaurus>
</XML>
output.Save("C:\thesaurus.xml")
I hope this helps someone. Despite bumps in the road like this, the XLinq API is pretty darn cool.
A: If one element uses a namespace, they all must use one. In case you don't define one on your own, the framework will add an empty namespace, as you have noticed. And, sadly, there is no switch or similar option to suppress this "feature".
So, there seems to be no better method than to strip it out. Using Replace(" xmlns=\"\"", "") could be a little bit faster than executing a RegEx.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: SQLite/PHP read-only? I've been trying to use SQLite with the PDO wrapper in PHP with mixed success. I can read from the database fine, but none of my updates are being committed to the database when I view the page in the browser. Curiously, running the script from my shell does update the database. I suspected file permissions as the culprit, but even with the database providing full access (chmod 777) the problem persists. Should I try changing the file owner? If so, what to?
By the way, my machine is the standard Mac OS X Leopard install with PHP activated.
@Tom Martin
Thank you for your reply. I just ran your code and it looks like PHP runs as user _www. I then tried chowning the database to be owned by _www, but that didn't work either.
I should also note that PDO's errorInfo function doesn't indicate an error took place. Could this be a setting with PDO somehow opening the database for read-only? I've heard that SQLite performs write locks on the entire file. Is it possible that the database is locked by something else preventing the write?
I've decided to include the code in question. This is going to be more or less a port of Grant's script to PHP. So far it's just the Questions section:
<?php
$db = new PDO('sqlite:test.db');
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://stackoverflow.com/users/658/kyle");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_COOKIE, "shhsecret=1293706652");
$page = curl_exec($ch);
preg_match('/summarycount">.*?([,\d]+)<\/div>.*?Reputation/s', $page, $rep);
$rep = preg_replace("/,/", "", $rep[1]);
preg_match('/iv class="summarycount".{10,60} (\d+)<\/d.{10,140}Badges/s', $page, $badge);
$badge = $badge[1];
$qreg = '/question-summary narrow.*?vote-count-post"><strong.*?>(-?\d*).*?\/questions\/(\d*).*?>(.*?)<\/a>/s';
preg_match_all($qreg, $page, $questions, PREG_SET_ORDER);
$areg = '/(answer-summary"><a href="\/questions\/(\d*).*?votes.*?>(-?\d+).*?href.*?>(.*?)<.a)/s';
preg_match_all($areg, $page, $answers, PREG_SET_ORDER);
echo "<h3>Questions:</h3>\n";
echo "<table cellpadding=\"3\">\n";
foreach ($questions as $q)
{
$query = 'SELECT count(id), votes FROM Questions WHERE id = '.$q[2].' AND type=0;';
$dbitem = $db->query($query)->fetch(PDO::FETCH_ASSOC);
if ($dbitem['count(id)'] > 0)
{
$lastQ = $q[1] - $dbitem['votes'];
if ($lastQ == 0)
{
$lastQ = "";
}
$query = "UPDATE Questions SET votes = '$q[1]' WHERE id = '$q[2]'";
$db->exec($query);
}
else
{
$query = "INSERT INTO Questions VALUES('$q[3]', '$q[1]', 0, '$q[2]')";
echo "$query\n";
$db->exec($query);
$lastQ = "(NEW)";
}
echo "<tr><td>$lastQ</td><td align=\"right\">$q[1]</td><td>$q[3]</td></tr>\n";
}
echo "</table>";
?>
A: I found the answer on the PHP manual "The folder that houses the database file must be writeable."
A: Kyle, in order for PDO/SQLite to work you need write permission to the directory where your database resides.
Also, I see you perform multiple selects in a loop. This may be OK if you are building something small and not heavily loaded. Otherwise I'd suggest building a single query that returns multiple rows and processing them in a separate loop.
A: I think PHP commonly runs as the user "nobody". Not sure about on Mac though. If Mac has whoami you could try echo exec('whoami'); to find out.
A: For those who have encountered read-only issues with SQLite on OS X:
1) Determine the Apache httpd user and group the user belongs to:
grep "^User" /private/etc/apache2/httpd.conf
groups _www
2) Create a subdirectory in /Library/WebServer/Documents for your database(s) and change the group to the httpd's group:
sudo chgrp _www /Library/WebServer/Documents/db
A less secure option is to open permissions on /Library/WebServer/Documents:
sudo chmod a+w /Library/WebServer/Documents
A: @Tom
Depends on how the hosting is set up. If the server runs PHP as an Apache module then it's likely 'nobody' (or whatever user Apache is set up as). But if PHP is set up as CGI (such as FastCGI) and the server runs suEXEC, then PHP runs as the same user who owns the files.
Either way, the folder that will contain the database must be writable by the script, either by being owned by the same user, or by having write permission set for the PHP user.
@Michal
That aside, one could use beginTransaction(); perform all the actions needed, then commit(); to actually commit them.
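For instance, a sketch of batching the inserts from the script above into one transaction (the $rows variable is illustrative):
<?php
$db = new PDO('sqlite:test.db');
$db->beginTransaction();
try {
    $stmt = $db->prepare('INSERT INTO Questions VALUES (?, ?, 0, ?)');
    foreach ($rows as $r) {
        $stmt->execute($r);
    }
    $db->commit();
} catch (Exception $e) {
    $db->rollBack(); // releases SQLite's write lock on failure
    throw $e;
}
?>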
A: Well, I had the same problem and figured it out by accident: just put every SQL INSERT inside a try...catch block and it goes through. Doing it that way forces you to do it the right way; otherwise it doesn't work. It works now. Good luck to anyone else with this problem (I used this thread myself to try to solve mine).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Hidden Features of JavaScript? What "Hidden Features" of JavaScript do you think every programmer should know?
After having seen the excellent quality of the answers to the following questions I thought it was time to ask it for JavaScript.
*
*Hidden Features of HTML
*Hidden Features of CSS
*Hidden Features of PHP
*Hidden Features of ASP.NET
*Hidden Features of C#
*Hidden Features of Java
*Hidden Features of Python
Even though JavaScript is arguably the most important Client Side language right now (just ask Google) it's surprising how little most web developers appreciate how powerful it really is.
A: Also mentioned in Crockford's "Javascript: The Good Parts":
parseInt() is dangerous. If you pass it a string without informing it of the proper base it may return unexpected numbers. For example parseInt('010') returns 8, not 10. Passing a base to parseInt makes it work correctly:
parseInt('010') // returns 8! (in FF3)
parseInt('010', 10); // returns 10 because we've informed it which base to work with.
A: Functions are objects and therefore can have properties.
fn = function(x) {
// ...
}
fn.foo = 1;
fn.next = function(y) {
//
}
A: I'd have to say self-executing functions.
(function() { alert("hi there");})();
Because Javascript doesn't have block scope, you can use a self-executing function if you want to define local variables:
(function() {
var myvar = 2;
alert(myvar);
})();
Here, myvar does not interfere with or pollute the global scope, and disappears when the function terminates.
A: The concept of truthy and falsy values. You don't need to do something like
if(someVar === undefined || someVar === null) ...
Simply do:
if(!someVar). (Be aware, though, that 0, "", NaN and false are also falsy, so this is only equivalent when those values can't occur.)
Every value has a corresponding boolean representation.
A: Function statements and function expressions are handled differently.
function blarg(a) {return a;} // statement
var bleep = function(b) {return b;} //expression
All function statements are parsed before code is run - a function at the bottom of a JavaScript file will be available in the first statement. On the other hand, it won't be able to take advantage of certain dynamic context, such as surrounding with statements - the with hasn't been executed when the function is parsed.
Function expressions execute inline, right where they are encountered. They aren't available before that time, but they can take advantage of dynamic context.
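A quick sketch of the difference:
hoisted(); // works: the function statement was parsed before execution
function hoisted() { alert("statement"); }

tooSoon(); // fails: tooSoon is still undefined at this point
var tooSoon = function () { alert("expression"); };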
A: window.name's value persists across page changes, can be read by the parent window if in same domain (if in an iframe, use document.getElementById("your frame's ID").contentWindow.name to access it), and is limited only by available memory.
A: The parentheses are optional when creating new "objects".
function Animal () {
}
var animal = new Animal();
var animal = new Animal;
Same thing.
A: Know how many parameters are expected by a function
function add_nums(num1, num2, num3 ){
return num1 + num2 + num3;
}
add_nums.length // 3 is the number of parameters expected.
Know how many parameters are received by the function
function add_many_nums(){
return arguments.length;
}
add_many_nums(2,1,122,12,21,89); //returns 6
A: Javascript has static variables inside functions:
function someFunction(){
var Static = arguments.callee;
Static.someStaticVariable = (Static.someStaticVariable || 0) + 1;
alert(Static.someStaticVariable);
}
someFunction() //Alerts 1
someFunction() //Alerts 2
someFunction() //Alerts 3
It also has static variables inside Objects:
function Obj(){
this.Static = arguments.callee;
}
a = new Obj();
a.Static.name = "a";
b = new Obj();
alert(b.Static.name); //Alerts b
A: All functions are actually instances of the built-in Function type, which has a constructor that takes a string containing the function definition, so you can actually define functions at run-time by e.g., concatenating strings:
//e.g., createAddFunction("a","b") returns function(a,b) { return a+b; }
function createAddFunction(paramName1, paramName2)
{ return new Function( paramName1, paramName2
,"return "+ paramName1 +" + "+ paramName2 +";");
}
Also, for user-defined functions, Function.toString() returns the function definition as a literal string.
A: You can execute an object's method on any object, regardless of whether it has that method or not. Of course it might not always work (if the method assumes the object has something it doesn't), but it can be extremely useful. For example:
function(){
arguments.push('foo') // This errors, arguments is not a proper array and has no push method
Array.prototype.push.apply(arguments, ['foo']) // Works!
}
A: The == operator has a very special property, that creates this disturbing equality (yes, I know in other dynamic languages like Perl this behavior would be expected, but JavaScript usually does not try to be smart in comparisons):
>>> 1 == true
true
>>> 0 == false
true
>>> 2 == true
false
A: let.
The counterpart to var's lack of block scoping is let, introduced in JavaScript 1.7.
*
*The let statement provides a way to associate values with variables within the scope of a block, without affecting the values of like-named variables outside the block.
*The let expression lets you establish variables scoped only to a single expression.
*The let definition defines variables whose scope is constrained to the block in which they're defined. This syntax is very much like the syntax used for var.
*You can also use let to establish variables that exist only within the context of a for loop.
function varTest() {
var x = 31;
if (true) {
var x = 71; // same variable!
alert(x); // 71
}
alert(x); // 71
}
function letTest() {
let x = 31;
if (true) {
let x = 71; // different variable
alert(x); // 71
}
alert(x); // 31
}
As of 2008, JavaScript 1.7 is supported in FireFox 2.0+ and Safari 3.x.
A: If you blindly eval() a JSON string to deserialize it, you may run into problems:
*
*It's not secure. The string may contain malicious function calls!
*If you don't enclose the JSON string in parentheses, property names can be mistaken as labels, resulting in unexpected behaviour or a syntax error:
eval("{ \"foo\": 42 }"); // syntax error: invalid label
eval("({ \"foo\": 42 })"); // OK
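Where the browser (or a json2.js-style shim) provides it, JSON.parse avoids both problems:
var data = JSON.parse('{ "foo": 42 }'); // no code execution, no label ambiguity
alert(data.foo); // 42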
A: You can turn "any* object with integer properties, and a length property into an array proper, and thus endow it with all array methods such as push, pop, splice, map, filter, reduce, etc.
Array.prototype.slice.call({"0":"foo", "1":"bar", 2:"baz", "length":3 })
// returns ["foo", "bar", "baz"]
This works with jQuery objects, html collections, and Array objects from other frames (as one possible solution to the whole array type thing). I say, if it's got a length property, you can turn it into an array and it doesn't matter. There's lots of non array objects with a length property, beyond the arguments object.
A: If you're attempting to sandbox javascript code, and disable every possible way to evaluate strings into javascript code, be aware that blocking all the obvious eval/document.write/new Function/setTimeout/setInterval/innerHTML and other DOM manipulations isn't enough.
Given any object o, o.constructor.constructor("alert('hi')")() will bring up an alert dialog with the word "hi" in it.
You could rewrite it as
var Z="constructor";
Z[Z][Z]("alert('hi')")();
Fun stuff.
A: Here are some interesting things:
*
*Comparing NaN with anything (even NaN) is always false, that includes ==, < and >.
*NaN stands for Not a Number, but if you ask for its type with typeof it actually returns "number".
*Array.sort can take a comparator function and is called by a quicksort-like driver (depends on implementation).
*Regular expression "constants" can maintain state, like the last thing they matched.
*Some versions of JavaScript allow you to access $0, $1, $2 members on a regex.
*null is unlike anything else. It is neither an object, a boolean, a number, a string, nor undefined. It's a bit like an "alternate" undefined. (Note: typeof null == "object")
*In the outermost context, this yields the otherwise unnameable [Global] object.
*Declaring a variable with var, instead of just relying on automatic declaration of the variable gives the runtime a real chance of optimizing access to that variable
*The with construct will destroy such optimzations
*Variable names can contain Unicode characters.
*JavaScript regular expressions are not actually regular. They are based on Perl's regexs, and it is possible to construct expressions with lookaheads that take a very, very long time to evaluate.
*Blocks can be labeled and used as the targets of break. Loops can be labeled and used as the target of continue.
*Arrays can be sparse. Setting the 1000th element of an otherwise empty array sets its length to 1001; reading the unset indices yields undefined, but whether storage is allocated for them depends on the implementation.
*if (new Boolean(false)) {...} will execute the {...} block
*Javascript's regular expression engine's are implementation specific: e.g. it is possible to write "non-portable" regular expressions.
[updated a little in response to good comments; please see comments]
A: I know I'm late to the party, but I just can't believe the + operator's usefulness hasn't been mentioned beyond "convert anything to a number". Maybe that's how well hidden a feature it is?
// Quick hex to dec conversion:
+"0xFF"; // -> 255
// Get a timestamp for now, the equivalent of `new Date().getTime()`:
+new Date();
// Safer parsing than parseFloat()/parseInt()
parseInt("1,000"); // -> 1, not 1000
+"1,000"; // -> NaN, much better for testing user input
parseInt("010"); // -> 8, because of the octal literal prefix
+"010"; // -> 10, `Number()` doesn't parse octal literals
// A use case for this would be rare, but still useful in cases
// for shortening something like if (someVar === null) someVar = 0;
+null; // -> 0;
// Boolean to integer
+true; // -> 1;
+false; // -> 0;
// Other useful tidbits:
+"1e10"; // -> 10000000000
+"1e-4"; // -> 0.0001
+"-12"; // -> -12
Of course, you can do all this using Number() instead, but the + operator is so much prettier!
You can also define a numeric return value for an object by overriding the prototype's valueOf() method. Any number conversion performed on that object will not result in NaN, but the return value of the valueOf() method:
var rnd = {
"valueOf": function () { return Math.floor(Math.random()*1000); }
};
+rnd; // -> 442;
+rnd; // -> 727;
+rnd; // -> 718;
A: "Extension methods in JavaScript" via the prototype property.
Array.prototype.contains = function(value) {
for (var i = 0; i < this.length; i++) {
if (this[i] == value) return true;
}
return false;
}
This will add a contains method to all Array objects. You can call this method using this syntax
var stringArray = ["foo", "bar", "foobar"];
stringArray.contains("foobar");
A: Function.toString() (implicit):
function x() {
alert("Hello World");
}
eval ("x = " + (x + "").replace(
'Hello World',
'STACK OVERFLOW BWAHAHA"); x("'));
x();
A: Microsoft's gift to JavaScript: AJAX
AJAXCall('http://www.abcd.com/')
function AJAXCall(url) {
var client = new XMLHttpRequest();
client.onreadystatechange = handlerFunc;
client.open("GET", url);
client.send();
}
function handlerFunc() {
if(this.readyState == 4 && this.status == 200) {
if(this.responseXML != null)
document.write(this.responseXML)
}
}
A: The Module Pattern
<script type="text/javascript">
(function() {
function init() {
// ...
}
window.onload = init;
})();
</script>
Variables and functions declared without the var statement or outside of a function will be defined in the global scope. If a variable/function of the same name already exists it will be silently overridden, which can lead to very hard to find errors. A common solution is to wrap the whole code body into an anonymous function and immediately execute it. This way all variables/functions are defined in the scope of the anonymous function and don't leak into the global scope.
To explicitly define a variable/function in the global scope they have to be prefixed with window:
window.GLOBAL_VAR = 12;
window.global_function = function() {};
A: This is a hidden feature of jQuery, not Javascript, but since there will never be a "hidden features of jQuery" question...
You can define your own :something selectors in jQuery:
$.extend($.expr[':'], {
foo: function(node, index, args, stack) {
// decide if selectors matches node, return true or false
}
});
For selections using :foo, such as $('div.block:foo("bar,baz") span'), the function foo will be called for all nodes which match the already processed part of the selector. The meaning of the arguments:
*
*node holds the current node
*index is the index of the node in the node set
*args is an array that is useful if the selector has an argument or multiple names:
*
*args[0] is the whole selector text (e.g. :foo("bar, baz"))
*args[1] is the selector name (e.g. foo)
*args[2] is the quote character used to wrap the argument
(e.g. " for :foo("bar, baz")) or an empty string if there is no quoting
(:foo(bar, baz)) or undefined if there is no argument
*args[3] is the argument, including any quotes, (e.g. "bar, baz")
or undefined if there are no arguments
*stack is the node set (an array holding all nodes which are matched at that point)
The function should return true if the selector matches, false otherwise.
For example, the following code will enable selecting nodes based on a full-text regexp search:
$.extend($.expr[':'], {
matches: function(node, index, args, stack) {
if (!args.re) { // args is a good place for caching
var re = args[3];
if (args[2]) { // get rid of quotes
re = re.slice(1,-1);
}
var separator = re[0];
var pos = re.lastIndexOf(separator);
var modifiers = re.substr(pos+1);
var code = re.substr(1, pos-1);
args.re = new RegExp(code, modifiers);
}
return $(node).text().match(args.re);
}
});
// find the answers on this page which contain /**/-style comments
$('.answer .post-text code:matches(!/\\*[\\s\\S]*\\*/!)');
You could reach a similar effect with the callback version of .filter(), but custom selectors are much more flexible and usually more readable.
A: To properly remove a property from an object, you should delete the property instead of just setting it to undefined:
var obj = { prop1: 42, prop2: 43 };
obj.prop2 = undefined;
for (var key in obj) {
...
The property prop2 will still be part of the iteration. If you want to completely get rid of prop2, you should instead do:
delete obj.prop2;
The property prop2 will no longer make an appearance when you're iterating through the properties.
A: undefined is undefined. So you can do this:
if (obj.field === undefined) /* ... */
A: Visit:
*
*http://images.google.com/images?q=disco
Paste this JavaScript code into your web browser's address bar:
*
*http://amix.dk/upload/awt/spin.txt
*http://amix.dk/upload/awt/disco.txt
Enjoy the JavaScript disco show :-p
A: Generators and Iterators (works only in Firefox 2+ and Safari).
function fib() {
var i = 0, j = 1;
while (true) {
yield i;
var t = i;
i = j;
j += t;
}
}
var g = fib();
for (var i = 0; i < 10; i++) {
document.write(g.next() + "<br>\n");
}
The function containing the yield keyword is a generator. When you call it, its formal parameters are bound to actual arguments, but its body isn't actually evaluated. Instead, a generator-iterator is returned. Each call to the generator-iterator's next() method performs another pass through the iterative algorithm. Each step's value is the value specified by the yield keyword. Think of yield as the generator-iterator version of return, indicating the boundary between each iteration of the algorithm. Each time you call next(), the generator code resumes from the statement following the yield.
In normal usage, iterator objects are "invisible"; you won't need to operate on them explicitly, but will instead use JavaScript's for...in and for each...in statements to loop naturally over the keys and/or values of objects.
var objectWithIterator = getObjectSomehow();
for (var i in objectWithIterator)
{
document.write(objectWithIterator[i] + "<br>\n");
}
A: with.
It's rarely used, and frankly, rarely useful... But, in limited circumstances, it does have its uses.
For instance: object literals are quite handy for quickly setting up properties on a new object. But what if you need to change half of the properties on an existing object?
var user =
{
fname: 'Rocket',
mname: 'Aloysus',
lname: 'Squirrel',
city: 'Fresno',
state: 'California'
};
// ...
with (user)
{
mname = 'J';
city = 'Frostbite Falls';
state = 'Minnesota';
}
Alan Storm points out that this can be somewhat dangerous: if the object used as context doesn't have one of the properties being assigned to, it will be resolved in the outer scope, possibly creating or overwriting a global variable. This is especially dangerous if you're used to writing code to work with objects where properties with default or empty values are left undefined:
var user =
{
fname: "John",
// mname definition skipped - no middle name
lname: "Doe"
};
with (user)
{
mname = "Q"; // creates / modifies global variable "mname"
}
Therefore, it is probably a good idea to avoid the use of the with statement for such assignment.
See also: Are there legitimate uses for JavaScript’s “with” statement?
A: Methods (or functions) can be called on object that are not of the type they were designed to work with. This is great to call native (fast) methods on custom objects.
var listNodes = document.getElementsByTagName('a');
listNodes.sort(function(a, b){ ... });
This code crashes because listNodes is not an Array
Array.prototype.sort.apply(listNodes, [function(a, b){ ... }]);
This code works because listNodes defines enough array-like properties (length, [] operator) to be used by sort().
A: This one is super hidden, and only occasionally useful ;-)
You can use the prototype chain to create an object that delegates to another object without changing the original object.
var o1 = { foo: 1, bar: 'abc' };
function f() {}
f.prototype = o1;
o2 = new f();
assert( o2.foo === 1 );
assert( o2.bar === 'abc' );
o2.foo = 2;
o2.baz = true;
assert( o2.foo === 2 );
// o1 is unchanged by assignment to o2
assert( o1.foo === 1 );
assert( o2.baz );
This only covers 'simple' values on o1. If you modify an array or another object, then the prototype no longer 'protects' the original object. Beware any time you have a {} or [] in a class definition/prototype.
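A sketch of that pitfall:
var a1 = { list: [] };
function g() {}
g.prototype = a1;
var a2 = new g();

a2.list.push(1); // mutates the array shared through the prototype...
assert( a1.list.length === 1 ); // ...so a1 sees the change too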
A: All your "hidden" features are right here on the Mozilla wiki: http://developer.mozilla.org/en/JavaScript.
There's the core JavaScript 1.5 reference, what's new in JavaScript 1.6, what's new in JavaScript 1.7, and also what's new in JavaScript 1.8. Look through all of those for examples that actually work and are not wrong.
A: Namespaces
In larger JavaScript applications or frameworks it can be useful to organize the code in namespaces. JavaScript doesn't have a module or namespace concept built in, but it is easy to emulate one using JavaScript objects. This would create a namespace called ns and attach the function foo to it.
if (!window.ns) {
window.ns = {};
}
window.ns.foo = function() {};
It is common to use the same global namespace prefix throughout a project and use sub namespaces for each JavaScript file. The name of the sub namespace often matches the file's name.
The header of a file called ns/button.js could look like this:
if (!window.ns) {
window.ns = {};
}
if (!window.ns.button) {
window.ns.button = {};
}
// attach methods to the ns.button namespace
window.ns.button.create = function() {};
A: Prototypal inheritance (popularized by Douglas Crockford) completely revolutionizes the way you think about loads of things in Javascript.
Object.beget = (function(Function){
return function(Object){
Function.prototype = Object;
return new Function;
}
})(function(){});
It's a killer! Pity how almost no one uses it.
It allows you to "beget" new instances of any object, extend them, while maintaining a (live) prototypical inheritance link to their other properties. Example:
var A = {
foo : 'greetings'
};
var B = Object.beget(A);
alert(B.foo); // 'greetings'
// changes and additions to A are reflected in B
A.foo = 'hello';
alert(B.foo); // 'hello'
A.bar = 'world';
alert(B.bar); // 'world'
// ...but not the other way around
B.foo = 'wazzap';
alert(A.foo); // 'hello'
B.bar = 'universe';
alert(A.bar); // 'world'
A: Some would call this a matter of taste, but:
aWizz = wizz || "default";
// same as: if (wizz) { aWizz = wizz; } else { aWizz = "default"; }
The trinary operator can be chained to act like Scheme's (cond ...):
(cond (predicate (action ...))
(predicate2 (action2 ...))
(#t default ))
can be written as...
predicate ? action( ... ) :
predicate2 ? action2( ... ) :
default;
This is very "functional", as it branches your code without side effects. So instead of:
if (predicate) {
foo = "one";
} else if (predicate2) {
foo = "two";
} else {
foo = "default";
}
You can write:
foo = predicate ? "one" :
predicate2 ? "two" :
"default";
Works nice with recursion, too :)
A: Numbers are also objects. So you can do cool stuff like:
// convert to base 2
(5).toString(2) // returns "101"
// provide built in iteration
Number.prototype.times = function(funct){
if(typeof funct === 'function') {
for(var i = 0;i < Math.floor(this);i++) {
funct(i);
}
}
return this;
}
var string = "";
(5).times(function(i){
string += i+" ";
});
// string now equals "0 1 2 3 4 "
var x = 1000;
x.times(function(i){
document.body.innerHTML += '<p>paragraph #'+i+'</p>';
});
// adds 1000 parapraphs to the document
A: It's surprising how many people don't realize that it's object oriented as well.
A: jQuery and JavaScript:
Variable-names can contain a number of odd characters. I use the $ character to identify variables containing jQuery objects:
var $links = $("a");
$links.hide();
jQuery's pattern of chaining objects is quite nice, but applying this pattern can get a bit confusing. Fortunately JavaScript allows you to break lines, like so:
$("a")
.hide()
.fadeIn()
.fadeOut()
.hide();
General JavaScript:
I find it useful to emulate scope by using self-executing functions:
function test()
{
// scope of test()
(function()
{
// scope inside the scope of test()
}());
// scope of test()
}
A: Large loops are faster in while-condition and backwards - that is, if the order of the loop doesn't matter to you. In about 50% of my code, it usually doesn't.
i.e.
var i, len = 100000;
for (i = 0; i < len; i++) {
// do stuff
}
Is slower than:
i = len;
while (i--) {
// do stuff
}
A: These are not always a good idea, but you can convert most things with terse expressions. The important point here is that not every value in JavaScript is an object, so these expressions will succeed where member access on non-objects like null and undefined will fail. Particularly, beware that typeof null == "object", but you can't null.toString(), or ("name" in null).
Convert anything to a Number:
+anything
Number(anything)
Convert anything to an unsigned four-byte integer:
anything >>> 0
Convert anything to a String:
'' + anything
String(anything)
Convert anything to a Boolean:
!!anything
Boolean(anything)
Also, using the type name without "new" behaves differently for String, Number, and Boolean, returning a primitive number, string, or boolean value, but with "new" these will return "boxed" object types, which are nearly useless.
A: You don't need to define any parameters for a function. You can just use the function's arguments array-like object.
function sum() {
var retval = 0;
for (var i = 0, len = arguments.length; i < len; ++i) {
retval += arguments[i];
}
return retval;
}
sum(1, 2, 3) // returns 6
A: How about closures in JavaScript (similar to anonymous methods in C# v2.0+). You can create a function that creates a function or "expression".
Example of closures:
//Takes a function that filters numbers and calls the function on
//it to build up a list of numbers that satisfy the function.
function filter(filterFunction, numbers)
{
var filteredNumbers = [];
for (var index = 0; index < numbers.length; index++)
{
if (filterFunction(numbers[index]) == true)
{
filteredNumbers.push(numbers[index]);
}
}
return filteredNumbers;
}
//Creates a function (closure) that will remember the value "lowerBound"
//that gets passed in and keep a copy of it.
function buildGreaterThanFunction(lowerBound)
{
return function (numberToCheck) {
return (numberToCheck > lowerBound) ? true : false;
};
}
var numbers = [1, 15, 20, 4, 11, 9, 77, 102, 6];
var greaterThan7 = buildGreaterThanFunction(7);
var greaterThan15 = buildGreaterThanFunction(15);
numbers = filter(greaterThan7, numbers);
alert('Greater Than 7: ' + numbers);
numbers = filter(greaterThan15, numbers);
alert('Greater Than 15: ' + numbers);
A: You can also extend (inherit) classes and override properties/methods using the prototype chain spoon16 alluded to.
In the following example we create a class Pet and define some properties. We also override the .toString() method inherited from Object.
After this we create a Dog class which extends Pet and overrides the .toString() method again changing it's behavior (polymorphism). In addition we add some other properties to the child class.
After this we check the inheritance chain to show off that Dog is still of type Dog, of type Pet, and of type Object.
// Defines a Pet class constructor
function Pet(name)
{
this.getName = function() { return name; };
this.setName = function(newName) { name = newName; };
}
// Adds the Pet.toString() function for all Pet objects
Pet.prototype.toString = function()
{
return 'This pets name is: ' + this.getName();
};
// end of class Pet
// Define Dog class constructor (Dog : Pet)
function Dog(name, breed)
{
// think Dog : base(name)
Pet.call(this, name);
this.getBreed = function() { return breed; };
}
// this makes Dog.prototype inherit from Pet.prototype
Dog.prototype = new Pet();
// Currently Pet.prototype.constructor
// points to Pet. We want our Dog instances'
// constructor to point to Dog.
Dog.prototype.constructor = Dog;
// Now we override Pet.prototype.toString
Dog.prototype.toString = function()
{
return 'This dogs name is: ' + this.getName() +
', and its breed is: ' + this.getBreed();
};
// end of class Dog
var parrotty = new Pet('Parrotty the Parrot');
var dog = new Dog('Buddy', 'Great Dane');
// test the new toString()
alert(parrotty);
alert(dog);
// Testing instanceof (similar to the `is` operator)
alert('Is dog instance of Dog? ' + (dog instanceof Dog)); //true
alert('Is dog instance of Pet? ' + (dog instanceof Pet)); //true
alert('Is dog instance of Object? ' + (dog instanceof Object)); //true
Both answers to this question were codes modified from a great MSDN article by Ray Djajadinata.
A: Off the top of my head...
Functions
arguments.callee refers to the function that hosts the "arguments" variable, so it can be used to recurse anonymous functions:
var recurse = function() {
if (condition) arguments.callee(); //calls recurse() again
}
That's useful if you want to do something like this:
//do something to all array items within an array recursively
myArray.forEach(function(item) {
if (item instanceof Array) item.forEach(arguments.callee)
else {/*...*/}
})
Objects
An interesting thing about object members: they can have any string as their names:
//these are normal object members
var obj = {
a : function() {},
b : function() {}
}
//but we can do this too
var rules = {
".layout .widget" : function(element) {},
"a[href]" : function(element) {}
}
/*
this snippet searches the page for elements that
match the CSS selectors and applies the respective function to them:
*/
for (var item in rules) {
var elements = document.querySelectorAll(rules[item]);
for (var e, i = 0; e = elements[i++];) rules[item](e);
}
Strings
String.split can take regular expressions as parameters:
"hello world with spaces".split(/\s+/g);
//returns an array: ["hello", "world", "with", "spaces"]
String.replace can take a regular expression as a search parameter and a function as a replacement parameter:
var i = 1;
"foo bar baz ".replace(/\s+/g, function() {return i++});
//returns "foo1bar2baz3"
A: You may catch exceptions depending on their type. Quoted from MDC:
try {
myroutine(); // may throw three exceptions
} catch (e if e instanceof TypeError) {
// statements to handle TypeError exceptions
} catch (e if e instanceof RangeError) {
// statements to handle RangeError exceptions
} catch (e if e instanceof EvalError) {
// statements to handle EvalError exceptions
} catch (e) {
// statements to handle any unspecified exceptions
logMyErrors(e); // pass exception object to error handler
}
NOTE: Conditional catch clauses are a Netscape (and hence Mozilla/Firefox) extension that is not part of the ECMAScript specification and hence cannot be relied upon except on particular browsers.
A: Joose is a nice object system if you would like Class-based OO that feels somewhat like CLOS.
// Create a class called Point
Class("Point", {
has: {
x: {
is: "rw",
init: 0
},
y: {
is: "rw",
init: 0
}
},
methods: {
clear: function () {
this.setX(0);
this.setY(0);
}
}
})
// Use the class
var point = new Point();
point.setX(10)
point.setY(20);
point.clear();
A: Syntactic sugar: in-line for-loop closures
var i;
for (i = 0; i < 10; i++) (function ()
{
// do something with i
}());
Breaks almost all of Douglas Crockford's code conventions, but I think it's quite nice to look at, nevertheless :)
Alternative:
var i;
for (i = 0; i < 10; i++) (function (j)
{
// do something with j
}(i));
A: Existence checks. So often I see stuff like this
var a = [0, 1, 2];
// code that might clear the array.
if (a.length > 0) {
// do something
}
instead for example just do this:
var a = [0, 1, 2];
// code that might clear the array.
if (a.length) { // if length is not equal to 0, this will be true
// do something
}
There are all kinds of existence checks you can do, but this was just a simple example to illustrate the point.
Here's an example of how to use a default value.
function (someArgument) {
someArgument || (someArgument = "This is the default value");
}
That's my two cents. There's other nuggets, but that's it for now.
A: You can iterate over Arrays using "for in"
Mark Cidade pointed out the usefullness of the "for in" loop :
// creating an object (the short way, to use it like a hashmap)
var diner = {
    "fruit": "apple",
    "vegetable": "bean"
};
// looping over its properties
for (meal_name in diner ) {
document.write(meal_name+"<br \n>");
}
Result :
fruit
vegetable
But there is more. Since you can use an object like an associative array, you can process keys and values,
just like a foreach loop :
// looping over its properties and values
for (meal_name in diner ) {
document.write(meal_name+" : "+diner[meal_name]+"<br \n>");
}
Result :
fruit : apple
vegetable : bean
And since arrays are objects too, you can iterate over them the exact same way:
var my_array = ['a', 'b', 'c'];
for (index in my_array ) {
document.write(index+" : "+my_array[index]+"<br \n>");
}
Result :
0 : a
1 : b
2 : c
You can remove easily an known element from an array
var arr = ['a', 'b', 'c', 'd'];
var pos = arr.indexOf('c');
pos > -1 && arr.splice( pos, 1 );
You can shuffle an array easily, though not uniformly (see the Fisher-Yates sketch below):
arr.sort(function() { return Math.random() - 0.5; }); // not a truly random distribution, see comments
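If you need a uniform shuffle, a Fisher-Yates implementation avoids the bias of the sort trick. A minimal sketch:
function shuffle(a) {
    for (var i = a.length - 1; i > 0; i--) {
        var j = Math.floor(Math.random() * (i + 1)); // random index in 0..i
        var tmp = a[i]; a[i] = a[j]; a[j] = tmp;     // swap elements i and j
    }
    return a;
}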
A: JavaScript typeof operator used with arrays or nulls always returns object value which in some cases may not be what programmer would expect.
Here's a function that will return proper values for those items as well. Array recognition was copied from Douglas Crockford's book "JavaScript: The Good Parts".
function typeOf (value) {
var type = typeof value;
if (type === 'object') {
if (value === null) {
type = 'null';
} else if (typeof value.length === 'number' &&
typeof value.splice === 'function' &&
!value.propertyIsEnumerable('length')) {
type = 'array';
}
}
return type;
}
A: You can use objects instead of switches most of the time.
function getInnerText(o){
return o === null? null : {
string: o,
array: o.map(getInnerText).join(""),
object:getInnerText(o["childNodes"])
}[typeis(o)];
}
Update: if you're concerned about the cases evaluating in advance being inefficient (why are you worried about efficiency this early on in the design of the program??) then you can do something like this:
function getInnerText(o){
return o === null? null : {
string: function() { return o;},
array: function() { return o.map(getInnerText).join(""); },
object: function () { return getInnerText(o["childNodes"]); }
}[typeis(o)]();
}
This is more onerous to type (or read) than either a switch or an object, but it preserves the benefits of using an object instead of a switch, detailed in the comments section below. This style also makes it more straightforward to spin this out into a proper "class" once it grows up enough.
update2: with proposed syntax extensions for ES.next, this becomes
let getInnerText = o -> ({
string: o -> o,
array: o -> o.map(getInnerText).join(""),
object: o -> getInnerText(o["childNodes"])
}[ typeis o ] || (->null) )(o);
A: Be sure to use the hasOwnProperty method when iterating through an object's properties:
for (p in anObject) {
if (anObject.hasOwnProperty(p)) {
//Do stuff with p here
}
}
This is done so that you will only access the direct properties of anObject, and not use the properties that are down the prototype chain.
A: Private variables with a Public Interface
It uses a neat little trick with a self-calling function definition.
Everything inside the object which is returned is available in the public interface, while everything else is private.
var test = function () {
//private members
var x = 1;
var y = function () {
return x * 2;
};
//public interface
return {
setx : function (newx) {
x = newx;
},
gety : function () {
return y();
}
}
}();
assert(undefined == test.x);
assert(undefined == test.y);
assert(2 == test.gety());
test.setx(5);
assert(10 == test.gety());
A: Timestamps in JavaScript:
// Usual Way
var d = new Date();
timestamp = d.getTime();
// Shorter Way
timestamp = (new Date()).getTime();
// Shortest Way
timestamp = +new Date();
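ECMAScript 5 also added Date.now(), which skips creating a Date object altogether:
// Even shorter (ES5)
timestamp = Date.now();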
A: I could quote most of Douglas Crockford's excellent book
JavaScript: The Good Parts.
But I'll take just one for you, always use === and !== instead of == and !=
alert('' == '0'); //false
alert(0 == ''); // true
alert(0 =='0'); // true
== is not transitive. If you use === it would give false for
all of these statements as expected.
A: You can assign local variables using [] on the left hand side. Comes in handy if you want to return more than one value from a function without creating a needless array.
function fn(){
var cat = "meow";
var dog = "woof";
return [cat,dog];
};
var [cat,dog] = fn(); // Handy!
alert(cat);
alert(dog);
It's part of core JS but somehow I never realized till this year.
A: As Marius already pointed, you can have public static variables in functions.
I usually use them to create functions that are executed only once, or to cache some complex calculation results.
Here's the example of my old "singleton" approach:
var singleton = function(){
if (typeof arguments.callee.__instance__ == 'undefined') {
arguments.callee.__instance__ = new function(){
//this creates a random private variable.
//this could be a complicated calculation or DOM traversing that takes long
//or anything that needs to be "cached"
var rnd = Math.random();
//just a "public" function showing the private variable value
this.smth = function(){ alert('it is an object with a rand num=' + rnd); };
};
}
return arguments.callee.__instance__;
};
var a = new singleton;
var b = new singleton;
a.smth();
b.smth();
As you may see, in both cases the constructor is run only once.
For example, I used this approach back in 2004 when I had to
create a modal dialog box with a gray background that
covered the whole page (something like Lightbox). Internet
Explorer 5.5 and 6 have the highest stacking context for
<select> or <iframe> elements due to their
"windowed" nature; so if the page contained select elements,
the only way to cover them was to create an iframe and
position it "on top" of the page. So the whole script was
quite complex and a little bit slow (it used filter:
expressions to set opacity for the covering iframe). The
"shim" script had only one ".show()" method, which created
the shim only once and cached it in the static variable :)
A: To convert a floating point number to an integer, you can use one of the following cryptic hacks (please don't):
*
*3.14 >> 0 (via 2.9999999999999999 >> .5?)
*3.14 | 0 (via What is the best method to convert floating point to an integer in JavaScript?)
*3.14 & -1
*3.14 ^ 0
*~~3.14
Basically, applying any bitwise operation that doesn't change the value (an identity operation) ends up converting the float to a 32-bit integer, truncating it toward zero.
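Be aware that this truncation toward zero differs from Math.floor for negative numbers, and that the result wraps around at 32 bits:
~~-3.14;           // -3 (truncates toward zero)
Math.floor(-3.14); // -4 (rounds down)
2147483648 | 0;    // -2147483648 (wraps around at 32 bits)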
A: There is also an almost unknown JavaScript syntax:
var a;
a=alert(5),7;
alert(a); // alerts undefined
a=7,alert(5);
alert(a); // alerts 7
a=(3,6);
alert(a); // alerts 6
More about this here.
A: This seems to only work on Firefox (SpiderMonkey). Inside a function:
*
*arguments[-2] gives the number of arguments (same as arguments.length)
*arguments[-3] gives the function that was called (same as arguments.callee)
A: Maybe one of the lesser-known ones:
arguments.callee.caller + Function#toString()
function called(){
alert("Got called by:\n"+arguments.callee.caller.toString());
}
function iDoTheCall(){
called();
}
iDoTheCall();
Prints out the source code of iDoTheCall --
Deprecated, but can be useful sometimes when alerting is your only option....
A: Hm, I didn't read the whole topic though it's quite interesting for me, but let me make a little donation:
// forget the debug alerts
var alertToFirebugConsole = function() {
if ( window.console && window.console.log ) {
window.alert = console.log;
}
}
A: JavaScript is considered to be very good at exposing all its object so no matter if its window object itself.
So if i would like to override the browser alert with JQuery/YUI div popup which too accepts string as parameter it can be done simply using following snippet.
function divPopup(str)
{
//code to show the divPopup
}
window.alert = divPopup;
With this change all the calls to the alert() will show the good new div based popup instead of the browser specific alert.
A: JavaScript versatility - Overriding default functionality
Here's the code for overriding the window.alert function with jQuery UI's Dialog widget. I did this as a jQuery plug-in. And you can read about it on my blog; altAlert, a jQuery plug-in for personalized alert messages.
jQuery.altAlert = function (options)
{
var defaults = {
title: "Alert",
buttons: {
"Ok": function()
{
jQuery(this).dialog("close");
}
}
};
jQuery.extend(defaults, options);
delete defaults.autoOpen;
window.alert = function ()
{
jQuery("<div />", {
html: arguments[0].replace(/\n/, "<br />")
}).dialog(defaults);
};
};
A: All objects in Javascript are implemented as hashtables, so their properties can be accessed through the indexer and vice-versa. Also, you can enumerate all the properties using the for/in operator:
var x = {a: 0};
x["a"]; //returns 0
x["b"] = 1;
x.b; //returns 1
for (p in x) document.write(p+";"); //writes "a;b;"
A: Functions are first class citizens in JavaScript:
var passFunAndApply = function (fn,x,y,z) { return fn(x,y,z); };
var sum = function(x,y,z) {
return x+y+z;
};
alert( passFunAndApply(sum,3,4,5) ); // 12
Functional programming techniques can be used to write elegant javascript.
Particularly, functions can be passed as parameters, e.g. Array.filter() accepts a callback:
[1, 2, -1].filter(function(element, index, array) { return element > 0 });
// -> [1,2]
You can also declare a "private" function that only exists within the scope of a specific function:
function PrintName() {
var privateFunction = function() { return "Steve"; };
return privateFunction();
}
A: There are several answers in this thread showing how to
extend the Array object via its prototype. This is a BAD
IDEA, because it breaks the for (i in a) statement.
So is it okay if you don't happen to use for (i in a)
anywhere in your code? Well, only if your own code is the
only code that you are running, which is not too likely
inside a browser. I'm afraid that if folks start extending
their Array objects like this, Stack Overflow will start
overflowing with a bunch of mysterious JavaScript bugs.
See helpful details here.
A: When you want to remove an element from an array, one can use the delete operator, as such:
var numbers = [1,2,3,4,5];
delete numbers[3];
//numbers is now [1,2,3,undefined,5]
As you can see, the element was removed, but a hole was left in the array since the element was replaced with an undefined value.
Thus, to work around this problem, instead of using delete, use the splice array method...as such:
var numbers = [1,2,3,4,5];
numbers.splice(3,1);
//numbers is now [1,2,3,5]
The first argument of splice is an ordinal in the array [index], and the second is the number of elements to delete.
A: You can use the in operator to check if a key exists in an object:
var x = 1;
var y = 3;
var list = {0:0, 1:0, 2:0};
x in list; //true
y in list; //false
1 in list; //true
y in {3:0, 4:0, 5:0}; //true
If you find the object literals too ugly you can combine it with the parameterless function tip:
function list()
{ var x = {};
for(var i=0; i < arguments.length; ++i) x[arguments[i]] = 0;
return x
}
5 in list(1,2,3,4,5) //true
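Keep in mind that in also reports inherited properties; combine it with hasOwnProperty when only the object's own keys should count:
'toString' in {};                // true (inherited from Object.prototype)
({}).hasOwnProperty('toString'); // false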
A: In a function, you can return the function itself:
function showSomething(a){
alert(a);
return arguments.callee;
}
// Alerts: 'a', 'b', 'c'
showSomething('a')('b')('c');
// Or what about this:
(function (a){
alert(a);
return arguments.callee;
})('a')('b')('c');
I don't know when it could be useful, anyway, it's pretty weird and fun:
var count = function(counter){
alert(counter);
if(counter < 10){
return arguments.callee(counter+1);
}
return arguments.callee;
};
count(5)(9); // Will alert 5, 6, 7, 8, 9, 10 and 9, 10
Actually, the FAB framework for Node.js seems to have implemented this feature; see this topic for example.
A: Assigning default values to variables
You can use the logical or operator || in an assignment expression to provide a default value:
var a = b || c;
The a variable will get the value of c only if b is falsy (if it is null, false, undefined, 0, an empty string, or NaN), otherwise a will get the value of b.
This is often useful in functions, when you want to give a default value to an argument in case isn't supplied:
function example(arg1) {
arg1 || (arg1 = 'default value');
}
Example IE fallback in event handlers:
function onClick(e) {
e || (e = window.event);
}
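One gotcha: since || tests truthiness, legitimate falsy arguments such as 0 or "" are replaced too. A small sketch (with a hypothetical setVolume function) showing the safer check:
function setVolume(level) { // hypothetical example function
    // level || (level = 5); // WRONG: setVolume(0) would turn 0 into 5
    if (level === undefined) level = 5; // safe for 0, "" and false
}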
The following language features have been with us for a long time, all JavaScript implementations support them, but they weren't part of the specification until ECMAScript 5th Edition:
The debugger statement
Described in: § 12.15 The debugger statement
This statement allows you to put breakpoints programmatically in your code just by:
// ...
debugger;
// ...
If a debugger is present or active, it will cause it to break immediately, right on that line.
Otherwise, if the debugger is not present or active this statement has no observable effect.
Multiline String literals
Described in: § 7.8.4 String Literals
var str = "This is a \
really, really \
long line!";
You have to be careful because the character next to the \ must be a line terminator, if you have a space after the \ for example, the code will look exactly the same, but it will raise a SyntaxError.
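Because of that trailing-whitespace trap, plain concatenation is often the safer way to build long strings:
var str = "This is a " +
          "really, really " +
          "long line!";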
A: The way JavaScript works with Date() just excites me!
function isLeapYear(year) {
return (new Date(year, 1, 29, 0, 0).getMonth() != 2);
}
This is really "hidden feature".
Edit: Removed the "? true : false" condition as suggested in comments, for political correctness.
Was: ... new Date(year, 1, 29, 0, 0).getMonth() != 2 ? true : false ...
Please look at comments for details.
A: JavaScript does not have block scope (but it has closure so let's call it even?).
var x = 1;
{
var x = 2;
}
alert(x); // outputs 2
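Since only functions create scope, an immediately invoked function expression gives you a block-like scope when you need one:
var x = 1;
(function () {
    var x = 2; // local to this function, shadows the outer x
})();
alert(x); // outputs 1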
A: You can access object properties with [] instead of .
This allows you look up a property matching a variable.
obj = {a:"test"};
var propname = "a";
var b = obj[propname]; // "test"
You can also use this to get/set object properties whose name is not a legal identifier.
obj["class"] = "test"; // class is a reserved word; obj.class would be illegal.
obj["two words"] = "test2"; // using dot operator not possible with the space.
Some people don't know this and end up using eval() like this, which is a really bad idea:
var propname = "a";
var a = eval("obj." + propname);
This is harder to read, harder to find errors in (can't use jslint), slower to execute, and can lead to XSS exploits.
A: If you're Googling for a decent JavaScript reference on a given topic, include the "mdc" keyword in your query and your first results will be from the Mozilla Developer Center. I don't carry any offline references or books with me. I always use the "mdc" keyword trick to directly get to what I'm looking for. For example:
Google: javascript array sort mdc
(in most cases you may omit "javascript")
Update: Mozilla Developer Center has been renamed to Mozilla Developer Network. The "mdc" keyword trick still works, but soon enough we may have to start using "mdn" instead.
A: Maybe a little obvious to some...
Install Firebug and use console.log("hello"). So much better than using random alert();'s which I remember doing a lot a few years ago.
A: Here's a couple of shortcuts:
var a = []; // equivalent to new Array()
var o = {}; // equivalent to new Object()
A: My favorite trick is using apply to perform a callback to an object's method and maintain the correct "this" variable.
function MakeCallback(obj, method) {
return function() {
method.apply(obj, arguments);
};
}
var SomeClass = function() {
this.a = 1;
};
SomeClass.prototype.addXToA = function(x) {
this.a = this.a + x;
};
var myObj = new SomeClass();
brokenCallback = myObj.addXToA;
brokenCallback(1); // Won't work, wrong "this" variable
alert(myObj.a); // 1
var myCallback = MakeCallback(myObj, myObj.addXToA);
myCallback(1); // Works as expected because of apply
alert(myObj.a); // 2
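ECMAScript 5 standardizes this pattern as Function.prototype.bind, which fixes the this value for you (older browsers need a shim):
// ES5 equivalent of MakeCallback(myObj, myObj.addXToA)
var boundCallback = myObj.addXToA.bind(myObj);
boundCallback(1); // works as expected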
A: The Zen of Closures
Other people have mentioned closures. But it's surprising how many people know about closures, write code using closures, yet still have the wrong perception of what closures really are. Some people confuse first-class functions with closures. Yet others see it as a kind of static variable.
To me a closure is a kind of 'private' global variable. That is, a kind of variable that some functions see as global but other functions can't see. Now, I know this is playing fast and loose with the description of the underlying mechanism but that is how it feels like and behaves. To illustrate:
// Say you want three functions to share a single variable:
// Use a self-calling function to create scope:
(function(){
var counter = 0; // this is the variable we want to share;
// Declare global functions using function expressions:
increment = function(){
return ++counter;
}
decrement = function(){
return --counter;
}
value = function(){
return counter;
}
})()
now the three function increment, decrement and value share the variable counter without counter being an actual global variable. This is the true nature of closures:
increment();
increment();
decrement();
alert(value()); // will output 1
The above is not a really useful use of closures. In fact, I'd say that using it this way is an anti-pattern. But it is useful in understanding the nature of closures. For example, most people get caught when they try to do something like the following:
for (var i=1;i<=10;i++) {
document.getElementById('span'+i).onclick = function () {
alert('this is span number '+i);
}
}
// ALL spans will generate alert: this span is span number 10
That's because they don't understand the nature of closures. They think that they are passing the value of i into the functions when in fact the functions are sharing a single variable i. Like I said before, a special kind of global variable.
To get around this you need to detach* the closure:
function makeClickHandler (j) {
return function () {alert('this is span number '+j)};
}
for (var i=1;i<=10;i++) {
document.getElementById('span'+i).onclick = makeClickHandler(i);
}
// this works because each call to makeClickHandler
// creates a new scope with its own j, so every handler
// closes over a different variable instead of sharing i
*note: I don't know the correct terminology here.
A: Private Methods
An object can have private methods.
function Person(firstName, lastName) {
this.firstName = firstName;
this.lastName = lastName;
// A private method only visible from within this constructor
function calcFullName() {
return firstName + " " + lastName;
}
// A public method available to everyone
this.sayHello = function () {
alert(calcFullName());
}
}
//Usage:
var person1 = new Person("Bob", "Loblaw");
person1.sayHello();
// This fails since the method is not visible from this scope
alert(person1.calcFullName());
A: You never have to use eval() to assemble global variable names.
That is, if you have several globals (for whatever reason) named spec_grapes, spec_apples, you do not have to access them with eval("spec_" + var).
All globals are members of window[], so you can do window["spec_" + var].
A: JavaScript uses a simple object literal:
var x = { intValue: 5, strValue: "foo" };
This constructs a full-fledged object.
JavaScript uses prototype-based object orientation and provides the ability to extend types at runtime:
String.prototype.doubleLength = function() {
return this.length * 2;
}
alert("foo".doubleLength());
An object delegates all access to attributes that it doesn't contain itself to its "prototype", another object. This can be used to implement inheritance, but is actually more powerful (even if more cumbersome):
/* "Constructor" */
function foo() {
this.intValue = 5;
}
/* Create the prototype that includes everything
* common to all objects created be the foo function.
*/
foo.prototype = {
method: function() {
alert(this.intValue);
}
}
var f = new foo();
f.method();
A: Prevent annoying errors while testing in Internet Explorer when using console.log() for Firebug:
function log(message) {
    // window.console avoids a ReferenceError when no console exists
    (window.console || { log: function(s) { alert(s); } }).log(message);
}
A: One of my favorites is constructor type checking:
function getObjectType( obj ) {
return obj.constructor.name;
}
window.onload = function() {
alert( getObjectType( "Hello World!" ) );
function Cat() {
// some code here...
}
alert( getObjectType( new Cat() ) );
}
So instead of the tired old [Object object] you often get with the typeof keyword, you can actually get real object types based upon the constructor.
Another one is using variable arguments as a way to "overload" functions. All you are doing is using an expression to detect the number of arguments and returning overloaded output:
function myFunction( message, iteration ) {
if ( arguments.length == 2 ) {
for ( i = 0; i < iteration; i++ ) {
alert( message );
}
} else {
alert( message );
}
}
window.onload = function() {
myFunction( "Hello World!", 3 );
}
Finally, I would say assignment operator shorthand. I learned this from the source of the jQuery framework... the old way:
var a, b, c, d;
b = a;
c = b;
d = c;
The new (shorthand) way:
var a, b, c, d;
d = c = b = a;
Good fun :)
A: You can do almost anything between parentheses if you separate statements with commas:
var z = ( x = "can you do crazy things with parenthesis", ( y = x.split(" "), [ y[1], y[0] ].concat( y.slice(2) ) ).join(" ") )
alert(x + "\n" + y + "\n" + z)
Output:
can you do crazy things with parenthesis
can,you,do,crazy,things,with,parenthesis
you can do crazy things with parenthesis
A: The fastest loops in JavaScript are while(i--) ones. In all browsers.
So if it's not that important for order in which elements of your loop get processed you should be using while(i--) form:
var names = new Array(1024), i = names.length;
while(i--)
names[i] = "John" + i;
Also, if you have to use for() loop going forward, remember always to cache .length property:
var birds = new Array(1024);
for(var i = 0, j = birds.length; i < j; i++)
birds[i].fly();
To join large strings use Arrays (it's faster):
var largeString = new Array(1024), i = largeString.length;
while(i--) {
// It's faster than for() loop with largeString.push(), obviously :)
largeString[i] = i.toString(16);
}
largeString = largeString.join("");
It's much faster than largeString += "something" inside a loop.
A: You can redefine large parts of the runtime environment on the fly, such as modifying the Array constructor or defining undefined. Not that you should, but it can be a powerful feature.
A somewhat less dangerous form of this is the addition of helper methods to existing objects. You can make IE6 "natively" support indexOf on arrays, for example.
A: JavaScript tips or the jslibs project.
A: You can bind a JavaScript object as a HTML element attribute.
<div id="jsTest">Click Me</div>
<script type="text/javascript">
var someVariable = 'I was clicked';
var divElement = document.getElementById('jsTest');
// binding function/object or anything as attribute
divElement.controller = function() { someVariable += '*'; alert('You can change instance data:\n' + someVariable ); };
var onclickFunct = new Function( 'this.controller();' ); // Works in Firefox and Internet Explorer.
divElement.onclick = onclickFunct;
</script>
A: The coalescing operator is very cool and makes for some clean, concise code, especially when you chain it together: a || b || c || "default"; The gotcha is that since it tests truthiness rather than just null, values that legitimately evaluate to false will often get overlooked. Not to worry, in these cases just revert to the good ol' ternary operator.
I often see code that has given up and used global instead of static variables, so here's how (in an example of what I suppose you could call a generic singleton factory):
var getInstance = function(objectName) {
if ( !getInstance.instances ) {
getInstance.instances = {};
}
if ( !getInstance.instances[objectName] ) {
getInstance.instances[objectName] = new window[objectName];
}
return getInstance.instances[objectName];
};
Also, note the new window[objectName]; which was the key to generically instantiating objects by name. I just figured that out 2 months ago.
In the same spirit, when working with the DOM, I often bury functioning parameters and/or flags into DOM nodes when I first initialize whatever functionality I'm adding. I'll add an example if someone squawks.
Surprisingly, no one on the first page has mentioned hasOwnProperty, which is a shame. When using for...in iteration, it's good defensive programming to use the hasOwnProperty method on the container being iterated over to make sure that the member names being used are the ones that you expect.
var x = [1,2,3];
for ( i in x ) {
if ( !x.hasOwnProperty(i) ) { continue; }
console.log(i, x[i]);
}
Read here for more on this.
Lastly, with is almost always a bad idea.
A: You can make "classes" that have private (inaccessible outside the "class" definition) static and non-static members, in addition to public members, using closures.
Note that there are two types of public members in the code below. Instance-specific (defined in the constructor) that have access to private instance members, and shared members (defined in the prototype object) that only have access to private static members.
var MyClass = (function () {
// private static
var nextId = 1;
// constructor
var cls = function () {
// private
var id = nextId++;
var name = 'Unknown';
// public (this instance only)
this.get_id = function () { return id; };
this.get_name = function () { return name; };
this.set_name = function (value) {
if (typeof value != 'string')
throw 'Name must be a string';
if (value.length < 2 || value.length > 20)
throw 'Name must be 2-20 characters long.';
name = value;
};
};
// public static
cls.get_nextId = function () {
return nextId;
};
// public (shared across instances)
cls.prototype = {
announce: function () {
alert('Hi there! My id is ' + this.get_id() + ' and my name is "' + this.get_name() + '"!\r\n' +
'The next fellow\'s id will be ' + MyClass.get_nextId() + '!');
}
};
return cls;
})();
To test this code:
var mc1 = new MyClass();
mc1.set_name('Bob');
var mc2 = new MyClass();
mc2.set_name('Anne');
mc1.announce();
mc2.announce();
If you have Firebug you'll find that there is no way to get access to the private members other than to set a breakpoint inside the closure that defines them.
This pattern is very useful when defining classes that need strict validation on values, and complete control of state changes.
To extend this class, you would put MyClass.call(this); at the top of the constructor in the extending class. You would also need to copy the MyClass.prototype object (don't reuse it, as you would change the members of MyClass as well).
If you were to replace the announce method, you would call MyClass.announce from it like so: MyClass.prototype.announce.call(this);
A: Using Function.apply to specify the object that the function will work on:
Suppose you have the class
function myClass(){
this.fun = function(){
// do something
};
}
if later you do:
var a = new myClass();
var b = new myClass();
a.fun.apply(b); // fun lives on each instance, so borrow it from a; this will be like b.fun()
You can even specify an array of call parameters as a second argument.
See: https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Function/apply
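Function.call works the same way but takes the arguments individually instead of as an array:
function greet(greeting, name) { alert(greeting + ", " + name); }
greet.apply(null, ["Hello", "World"]); // arguments packed in an array
greet.call(null, "Hello", "World");    // arguments listed one by one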
A: My first submission is not so much a hidden feature as a rarely used application of the property re-definition feature. Because you can redefine an object's methods, you can cache the result of a method call, which is useful if the calculation is expensive and you want lazy evaluation. This gives the simplest form of memoization.
function Circle(r) {
this.setR(r);
}
Circle.prototype = {
recalcArea: function() {
this.area=function() {
var area = this.r * this.r * Math.PI;
this.area = function() {return area;}
return area;
}
},
setR: function (r) {
this.r = r;
this.invalidateR();
},
invalidateR: function() {
this.recalcArea();
}
}
Refactor the code that caches the result into a method and you get:
Object.prototype.cacheResult = function(name, _get) {
this[name] = function() {
var result = _get.apply(this, arguments);
this[name] = function() {
return result;
}
return result;
};
};
function Circle(r) {
this.setR(r);
}
Circle.prototype = {
recalcArea: function() {
this.cacheResult('area', function() { return this.r * this.r * Math.PI; });
},
setR: function (r) {
this.r = r;
this.invalidateR();
},
invalidateR: function() {
this.recalcArea();
}
}
If you want a memoized function, you can have that instead. Property re-definition isn't involved.
Object.prototype.memoize = function(name, implementation) {
    this[name] = function() {
        var argStr = Array.prototype.join.call(arguments);
        if (typeof(this[name].memo[argStr]) == 'undefined') {
            this[name].memo[argStr] = implementation.apply(this, arguments);
        }
        return this[name].memo[argStr];
    };
    this[name].memo = {}; // the cache must exist before the first call
};
Note that this relies on the standard array toString conversion and often won't work properly. Fixing it is left as an exercise for the reader.
My second submission is getters and setters. I'm surprised they haven't been mentioned yet. Because the official standard differs from the de facto standard (defineProperty vs. define[GS]etter) and Internet Explorer barely supports the official standard, they aren't generally useful. Maybe that's why they weren't mentioned. Note that you can combine getters and result caching rather nicely:
Object.prototype.defineCacher = function(name, _get) {
this.__defineGetter__(name, function() {
var result = _get.call(this);
this.__defineGetter__(name, function() { return result; });
return result;
})
};
function Circle(r) {
this.r = r;
}
Circle.prototype = {
invalidateR: function() {
this.recalcArea();
},
recalcArea: function() {
this.defineCacher('area', function() {return this.r * this.r * Math.PI; });
},
get r() { return this._r; },
set r(r) { this._r = r; this.invalidateR(); }
}
var unit = new Circle(1);
unit.area;
Efficiently combining getters, setters and result caching is a little messier because you have to prevent the invalidation or do without automatic invalidation on set, which is what the following example does. It's mostly an issue if changing one property will invalidate multiple others (imagine there's a "diameter" property in these examples).
Object.prototype.defineRecalcer = function(name, _get, _set) {
    var recalcFunc = 'recalc' + name.charAt(0).toUpperCase() + name.slice(1);
    this[recalcFunc] = function() {
        this.defineCacher(name, _get);
    };
    this[recalcFunc]();
    this.__defineSetter__(name, function(value) {
        _set.call(this, value);
        this.__defineGetter__(name, function() {return value; });
    });
};
function Circle(r) {
    this.defineRecalcer('area',
        function() {return this.r * this.r * Math.PI;},
        function(area) {this._r = Math.sqrt(area / Math.PI);}
    );
    this.r = r;
}
Circle.prototype = {
invalidateR: function() {
this.recalcArea();
},
get r() { return this._r; },
set r(r) { this._r = r; this.invalidateR(); }
}
A: Here's a simple way of thinking about 'this'. 'This' inside a function will refer to future object instances of the function, usually created with operator new. So clearly 'this' of an inner function will never refer to an instance of an outer function.
The above should keep one out of trouble. But there are more complicated things you can do with 'this.'
Example 1:
function DriveIn()
{
this.car = 'Honda';
alert(this.food); //'food' is the attribute of a future object
//and DriveIn does not define it.
}
var A = {food:'chili', q:DriveIn}; //create object A whose q attribute
//is the function DriveIn;
alert(A.car); //displays 'undefined'
A.q(); //displays 'chili' but also defines this.car.
alert(A.car); //displays 'Honda'
The Rule of This:
Whenever a function is called as the attribute of an object, any occurrence of 'this' inside the function (but outside any inner functions) refers to the object.
We need to make clear that "The Rule of This" applies even when operator new is used. Behind the scenes new attaches 'this' to the object through the object's constructor attribute.
Example 2:
function Insect ()
{
this.bug = "bee";
this.bugFood = function()
{
alert("nectar");
}
}
var B = new Insect();
alert(B.constructor); //displays "Insect"; By "The Rule of This" any
//ocurrence of 'this' inside Insect now refers
//to B.
To make this even clearer, we can create an Insect instance without using operator new.
Example 3:
var C = {constructor:Insect}; //Assign the constructor attribute of C,
//the value Insect.
C.constructor(); //Call Insect through the attribute.
//C is now an Insect instance as though it
//were created with operator new. [*]
alert(C.bug); //Displays "bee."
C.bugFood(); //Displays "nectar."
[*] The only actual difference I can discern is that in example 3, 'constructor' is an enumerable attribute. When operator new is used 'constructor' becomes an attribute but is not enumerable. An attribute is enumerable if the for-in operation "for(var name in object)" returns the name of the attribute.
A: Functions can have methods.
I use this pattern for AJAX form submissions.
var fn = (function() {
var ready = true;
function fnX() {
ready = false;
// AJAX return function
function Success() {
ready = true;
}
Success();
return "this is a test";
}
fnX.IsReady = function() {
return ready;
}
return fnX;
})();
if (fn.IsReady()) {
fn();
}
A: Simple self-contained function return value caching:
function isRunningLocally(){
var runningLocally = ....; // Might be an expensive check, check whatever needs to be checked.
return (isRunningLocally = function(){
return runningLocally;
})();
}
The expensive part is only performed on the first call, and after that all the function does is return this value. Of course this is only useful for functions that will always return the same thing.
A: Closures:
function f() {
var a;
function closureGet(){ return a; }
function closureSet(val){ a=val;}
return [closureGet,closureSet];
}
[closureGet,closureSet]=f();
closureSet(5);
alert(closureGet()); // gives 5
closureSet(15);
alert(closureGet()); // gives 15
The closure thing here is not the so-called destructuring assignment ([c,d] = [1,3] is equivalent to c=1; d=3;) but the fact that the occurences of a in closureGet and closureSet still refer to the same variable. Even after closureSet has assigned a a new value!
A: When you write callbacks, you often end up with a lot of code that looks like this:
callback: function(){
stuff(arg1,arg2);
}
You can use the function below, to make it somewhat cleaner.
callback: _(stuff, arg1, arg2)
It uses a less well-known method of JavaScript's Function object, apply.
It also shows another character you can use as a function name: _.
function _(){
var func;
var args = new Array();
for(var i = 0; i < arguments.length; i++){
if( i == 0){
func = arguments[i];
} else {
args.push(arguments[i]);
}
}
return function(){
return func.apply(func, args);
}
}
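For example, this comes in handy with timers, where you can't pass arguments to the callback directly:
// instead of: setTimeout(function() { stuff(arg1, arg2); }, 100);
setTimeout(_(stuff, arg1, arg2), 100);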
A:
function l(f,n){n&&l(f,n-1,f(n));}
l( function( loop ){ alert( loop ); },
5 );
alerts 5, 4, 3, 2, 1
A: Well, it's not much of a feature, but it is very useful:
Shows selectable and formatted alerts:
alert(prompt('',something.innerHTML ));
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "312"
} |
Q: Close and Dispose - which to call? Having read the threads Is SqlCommand.Dispose enough? and Closing and Disposing a WCF Service I am wondering, for classes such as SqlConnection or one of the several classes inheriting from the Stream class, does it matter if I call Dispose rather than Close?
A: This would-be quick advice became a long answer. Sorry.
As tyler pointed out in his nice answer, calling Dispose() is a great programming practice. This is because this method is supposed to "rally together" all the resource-freeing needed so there are no unneeded open resources. If you wrote some text to a file, for example, and failed to close the file (free the resource), it will remain open and no one else will be able to write to it until the GC comes around and does what you should have done.
Now, in some cases there will be "finalizing" methods more specific to the class you're dealing with, like StreamWriter.Close(), which overrides TextWriter.Close(). Indeed they are usually more suited to the situation: a StreamWriter's Close(), for example, flushes the stream and the underlying encoder before Dispose()ing of the object! Cool!
However, browsing MSDN you'll find that even Microsoft is sometimes confused by the multitude of closers and disposers. In this webpage, for instance, in some examples Close() is called before the implicit Dispose() (see using statement if you don't understand why it's implicit), and in one in particular they don't bother to. Why would that be? I too was perplexed.
The reason I figured (and, I stress, this is original research and I surely might lose reputation if I'm wrong) is that Close() might fail, yielding an exception whilst leaving resources open, while Dispose() would surely free them. Which is why a Dispose() should always safeguard a Close() call (sorry for the pun).
MyResource r = new MyResource();
try {
    r.Write(new Whatever());
    r.Close();
} finally {
    r.Dispose();
}
And yes, I guess Microsoft slipped on that one example. Perhaps that timestamp would never get flushed to the file.
I'm fixing my old code tomorrow.
Edit: sorry Brannon, I can't comment on your answer, but are you sure it's a good idea to call Close() in a finally block? I guess an exception from that might ruin the rest of the block, which would likely contain important cleanup code.
Reply to Brannon's: great, just don't forget to call Close() when it is really needed (e.g. when dealing with streams - don't know much about SQL connections in .NET).
A: As usual the answer is: it depends. Different classes implement IDisposable in different ways, and it's up to you to do the necessary research.
As far as SqlClient goes, the recommended practice is to do the following:
using (SqlConnection conn = /* Create new instance using your favorite method */)
{
conn.Open();
using (SqlCommand command = /* Create new instance using your favorite method */)
{
// Do work
}
conn.Close(); // Optional
}
You should be calling Dispose (or Close*) on the connection! Do not wait for the garbage collector to clean up your connection, this will tie up connections in the pool until the next GC cycle (at least). If you call Dispose, it is not necessary to call Close, and since the using construct makes it so easy to handle Dispose correctly, there is really no reason to call Close.
Connections are automatically pooled, and calling Dispose/Close on the connection does not physically close the connection (under normal circumstances). Do not attempt to implement your own pooling. SqlClient performs cleanup on the connection when it's retrieved from the pool (like restoring the database context and connection options).
*if you are calling Close, make sure to do it in an exception-safe way (i.e. in a catch or finally block).
A: I want to clarify this situation.
According to Microsoft guidelines, it's a good practice to provide Close method where suitable. Here is a citation from Framework design guidelines
Consider providing method Close(), in addition to the Dispose(), if close is standard terminology in the area. When doing so, it is important that you make the Close implementation identical to Dispose ...
In most of cases Close and Dispose methods are equivalent. The main difference between Close and Dispose in the case of SqlConnectionObject is:
An application can call Close more
than one time. No exception is
generated.
If you called Dispose method
SqlConnection object state will be
reset. If you try to call any
method on disposed SqlConnection
object, you will receive exception.
That said:
*
*If you use a connection object one
time, use Dispose. A using block will ensure this is called even in the event of an exception.
*If the connection object must be reused,
use the Close method.
A: Typecast to IDisposable, and call Dispose on that. That will invoke whatever method is configured as implementing "IDisposable.Dispose", regardless of what the function is named.
A: For SqlConnection, from the perspective of the connection itself, they are equivalent. According to Reflector, Dispose() calls Close() as well as doing a few additional memory-freeing operations -- mostly by setting members equal to null.
For Stream, they actually are equivalent. Stream.Dispose() simply calls Close().
A: You DO need to call Dispose()!
Dispose() is for the developer to call, the Garbage Collector calls Finalize(). If you don't call Dispose() on your objects any unmanaged resources that they used won't be disposed until the garbage collector comes around and calls finalize on them (and who knows when that will happen).
This scenario is called Non Deterministic Finalization and is a common trap for .net developers. If you're working with objects that implement IDisposable then call Dispose() on them!
http://www.ondotnet.com/pub/a/oreilly/dotnet/news/programmingCsharp_0801.html?page=last
While there may be many instances (like on SqlConnection) where you call Dispose() on some object and it simply calls Close() on its connection or closes a file handle, it's almost always your best bet to call Dispose(), unless you plan on reusing the object in the very near future.
A: Generally we face issues with Close(), Abort() and Dispose(), so let me tell you the difference among them.
1) Abort: I wouldn't suggest using this, because when Abort is called the client drops the connection without telling the server, so the server will wait for some amount of time (approximately 1 minute). If you have bulk requests you can't use Abort(), because it may cause timeouts against your limited connection pool.
2) Close: Close is a very good way of closing the connection, because it calls the server and lets it acknowledge the close on its side too.
Here, one more thing to look at.
In some cases, if an error has occurred, it is not a good idea to call connection.Close() in a finally block, because at that time the CommunicationState will be Faulted.
3) Dispose: It is a kind of close, but after disposing you cannot open the connection again.
So try this way:
private void CloseConnection(Client client)
{
if (client != null && client.State == CommunicationState.Opened)
{
client.Close();
}
else
{
client.Abort();
}
}
A: I just had an issue on a .NET 6 app using TransactionScope: I had multiple consecutively created, opened and disposed connections, but was catching This platform does not support distributed transactions on completing the scope (while clearly there were no distributed transactions).
The issue was resolved by adding a Close method call. The majority of answers say that Dispose and Close are identical, but it seems they aren't: from what I gleaned from the source code, the Close method explicitly returns the connection to the pool, while I could not find identical code for Dispose.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "182"
} |
Q: What languages have higher levels of abstraction and require less manual memory management than C++? I have been learning C++ for a while now, and I find it very powerful. But the problem is that the level of abstraction is not high, and I have to do memory management myself.
What languages can I use that offer a higher level of abstraction?
A: Trying something really foreign like Haskell will allow you to think in different ways. It also helps you to think recursively. C++ has recursion, but it permeates functional languages much more deeply.
A: Java, C#, Ruby, Python and JavaScript are probably the big choices before you.
Java and C# are not hugely different languages. The big difference you'll find from C++ is memory management (i.e. objects are automatically freed when they are no longer referenced). You would choose these if you were interested in desktop-style applications, or keen on static typing (and you'd probably choose between them based on how you feel towards Microsoft and the Windows platform). In both cases you'll find much richer standard libraries than you'll be used to from C++.
Python and Ruby take a step away from static typing, into a world where you can call any method on any object (and fail at runtime if it's not there). That is both a blessing (a lot less boilerplate code) and a curse (the compiler can't catch those errors for you anymore). Once again, you'll find they have richer standard libraries, and are higher-level again than Java / C#. Performance is the main downfall, with Python being somewhat faster than Ruby as I understand it. To choose between them, you'd probably choose Ruby if you're interested in web development because of the Ruby on Rails framework community, and otherwise go with Python.
JavaScript is even more different from C++ in that it does away with classes entirely. Objects are simply cloned from other objects and can have methods and properties added to them at runtime. Very flexible, but also very easy to make into a total mess. JavaScript is the only real choice if you're interested in running applications in a browser, which is really coming into its own as a platform. You'll find the standard libraries available rather limited if you're not doing a lot with the browser, but there are quite a few good frameworks which fill in some of the gaps.
Some other interesting, though more niche choices are
*
*Smalltalk - More or less in the Ruby and Python camp, and significantly faster as I understand it. Be careful though _ I've seen lots of good engineers learn Smalltalk and never come back ;)
*Objective-C - When C went object oriented, C++ went one way (static typing), and Objective-C went the other (dynamic typing). It's quite Smalltalk inspired, and has a good standard library if you're in Mac / iPhone land. In terms of memory management, unlike everything else I've listed, it's not garbage collected (though that's now an option on Mac OS X 10.5), but it does have a reference counting scheme which makes life significantly simpler than managing memory by hand.
*Lisp - I've never learnt it myself beyond what I needed for minor Emacs hacking. As I understand it, the libraries were nice in their day, but though the language remains supremely elegant, they've fallen a little behind the times.
*Haskell - If you wanted a complete break from objects and classes, Haskell and its functional approach is an interesting way to go (or Lisp as above, or F# if you are in .Net land). Basically, you're giving up loops and variables in favour of doing everything recursively. Takes some time to wrap your mind around, and probably isn't practical for most real world applications, but it's a good one to learn.
*Eiffel - I love it - Very clean syntax, and designed for serious engineering type systems. Statically types like C# and Java, and with a weaker standard library, but it will make you really think about language and class library design.
*ActionScript and Flex - The programming interface to Flash, which is based on what seems to be a statically typed version of JavaScript. I've played with it a bit, and it's quite slick if you're interested in developing media based applications. You can also push beyond the browser with Flex and into the Air platform to build real desktop apps.
A: ditto Lisp,.. or scheme
Even if you don't ever use it, it's handy. I only really got template programming after learning it.
Another one is Prolog. It puts you in a non-sequential mindset.
A: If you're comfortable with C++ syntax and style, you might find D to be an interesting language. Or if you want to branch out, any of Python, C#, Java, Ruby would be excellent choices.
A: C# if you're in the Microsoft ecosystem.
Python and Ruby seem to have the most traction in the Linux/Unix/etc space.
ObjectiveC is dominant on the Macintosh and iPhone. The most recent MacOS implements garbage collection for a subset of the frameworks, but to use the rest you'd have to do resource management yourself.
You could learn Java, as it does garbage collection as well, but the number of frameworks you'd need to become familiar with to be a productive Java developer is daunting.
A: I would say that from your question you probably haven't finished learning about C++. If you're still doing your own memory management then you still have a long way to go, my friend!
Check out the auto_ptr and shared_ptr - check out the Boost libraries.
Similarly with abstraction - what are you specifically complaining about? AFAIK there's not much you can't do with C++ that is present in other strongly-typed languages.
I know this doesn't answer your question - you want to move forwards, but C++ is one of those things where you never really stop learning. If you get bored, take a brief foray into templates and template meta-programming...
A: Well if you're looking for a very high level of abstraction and memory management then I'd say lisp would be an ideal candidate. I'm learning it now, slowly, and it's the most fun I've had with a new language.
Having said that Python or Ruby may be a better compromise between expressiveness and popularity. Python's Django framework is one of the better RAD frameworks if you're looking for web application stuff.
A: I'd say it depends on the kind of programming you want to try. If you want to stay on the OOP side, learn Python or Ruby, both languages provide an easy way to create bindings to use your C++ code from a script (for efficiency reasons).
If you need another approach to programming, learn a "functional" language like Lisp or Haskell.
And if you need to include a fast and small scripting language inside your C++ application, try Lua.
Last but not least, if you know Java and hate it, you can try Scala, a language where you can mix your Java classes with your Scala code, very interesting.
A: Scheme.
The Little Schemer and Structure and Interpretation of Computer Program will stretch your mind in strange and wonderful ways.
DrScheme is a good IDE for beginners. The Scheme Programming Language makes a good, free reference.
A: I see a lot of excellent suggestions so far. However, I think there's something missing, assembler.
Why learn assembly language?
*
*It's not as difficult as you may think. Assembly language is a lot smaller in scope than many modern languages, there are a few tricks you need to understand for it to make sense, but it's not that complicated.
*It broadens your knowledge base. Knowing the fundamentals is almost always beneficial, even when working at a high level.
*It can be extremely useful when debugging. Especially debugging native code without the source, the knowledge you gain from learning assembler enhances your ability to debug in these situations by leaps and bounds.
*It gives you more options. When the rare circumstance comes up where assembly code is needed you won't be helpless.
*It's good for your resume. It shows that you learn beyond just the bare minimum needed to keep your current job, it shows a curiosity about fundamentals, and it puts you in a different class of programmers, and that class tends to be more experienced and more capable.
*It's just plain cool.
Some assembly language resources:
*
*Sandpile.org (assembly language / processor architecture reference)
*Gavin's Guide to 80x86 Assembly (a decent online tutorial)
*Assembly Language for Intel-Based Computers (5e) (a decent textbook for x86 assembly)
A: try c# much :)
A: if you want to abstract memory management, Java comes to my mind instantly.
A: I suggest learning database design and a query language such as SQL.
You can start with a desktop tool like Microsoft Access or use the free SQL Server Express or Postgre or MySQL.
A: Well I think there is no predefined route in learning programming languages. You may learn your next lang based on your job needs, academic research, just for fun, etc. There are many options.
If you feel comfortable in C++, you can go down and learn some assembly. It's a dark art, but you'll be glad you know it when you encounter a hard debugging session.
In terms of more abstraction, Smalltalk is extremely fun, OOP-pure and 100% dynamic (debugging is a pleasant thing to do, which is not the case in statically-typed languages). Dolphin Smalltalk is a good implementation for Windows; even the free community edition gives you enough to play with. Among multiplatform Smalltalk VMs, go for Visualworks or Squeak. Visualworks is extremely stable and comes with a lot of documentation.
Python is used today in many, many fields. I don't know anything about Python except for the basic syntax and semantics, but it's required today for many jobs.
Java is, well, Java. It's interesting that Java never caught on with me. You may get interested in Java, though. Ask here for the advantages of using it over C++ or other OOP languages.
For Web development go for Javascript, especially considering the AJAX wave. It's getting interesting these days. We've talked about Smalltalk; all right, Seaside is an amazing framework for web development. It works (at least I tried it) on Squeak/Visualworks... it's beautiful.
Well, there are a lot of more to get your hands on: Scheme, LISP, Ruby, Lua, Bash (!), Perl (ugh), Haskell... Try them all and have fun!
A: Qt
A: Why not learn Qt? Its a great application development framework available on all platforms and even mobile devices!
A: Clojure is well worth exploring as it meets both of your criteria:
*
*It has a strong emphasis on programming with higher level abstractions. see e.g. this video: Clojure: The Art of Abstraction
*It has automatic memory management / garbage collection (via the JVM, which has some of the world's best GC implementations)
I'll give some examples using just one abstraction: in Clojure you can manipulate pretty much any data structure via the sequence abstraction.
;; treat a vector as a sequence and reverse it
(reverse [1 2 3 4 5])
=> (5 4 3 2 1)
;; Take 10 items from a infinite sequence
(take 10 (range))
=> (0 1 2 3 4 5 6 7 8 9)
;; Treat a String as a sequence of characters, calculate the frequencies
(frequencies "abracadabra")
=> {\a 5, \b 2, \r 2, \c 1, \d 1}
;; Define an infinite lazy sequence of fibonacci numbers, take the first 10
(def fibs (concat [0 1] (lazy-seq (map + fibs (rest fibs)))))
(take 10 fibs)
=> (0 1 1 2 3 5 8 13 21 34)
A: Since you are already into C++, the next step would be to learn .Net through managed C++ or managed extensions for C++. This will get you a step into the big world of the .Net framework. Once you understand the framework, it becomes more comfortable to learn other .Net languages like C#, VB.Net etc.
One of the areas that MC++ excels in, and is in fact unique in amongst the .NET languages, is the ability to take an existing unmanaged (C++) application, recompile it with the /clr switch, have it generate MSIL and then run under the CLR. This extraordinary feat is aptly termed "It Just Works (IJW)!" There are some limitations, but for the most part, the application will just run. The C++ code can consist of old-fashioned printf statements, MFC, ATL, or even templates!
A: I recommend python as it's not only a sexy language, but also very widely used and easy to integrate with C++ through Boost.Python.
But as Thomi said, there's lot to explore in C++ and with the help of Boost libraries it's becoming really easy to develop in.
A: Rather than suggest a specific language, I would recommend you pick any language or languages that offer the following 4 features:
*
*Automatic Memory Management
*Reflection/Introspection
*Declarative/Functional constructs (e.g. lambda functions)
*Duck Typing
The idea here is to expand your programming perspective to include concepts that the C++ language does not offer you out of the box.
A: It depends on what you want to do. If you have some specific tasks that you are interested in accomplishing then look at languages that are best for those types of tasks. The best way to learn a language is to actually use it.
A: I'd say get started with Python. It has a higher level of abstraction and it teaches you the importance of indenting and making "pretty" code. Not that "pretty" is very important, but it will make the future maintainer of your code a lot happier :)
There's a lot of example code out there, and if you are into Linux there are various distributions that have all (or most) of their tools based on the language. If you like digging into how managing an operating system works (something most programmers do), it's a good start. Before I get the flames: I said managing, not the actual kernel stuff; for that you mostly need C, and you should have that covered.
On the other hand, it might be nice to dive into the C side of things, ignore the OO stuff and learn procedural programming. If you head down that road I also suggest starting with basic assembly language, as one of the posts above suggested. Maybe HLA (High Level Assembly by Randall Hyde, who wrote a great book called The Art of Assembly Language) is a good start. You'll either learn to love memory management or hate it for the rest of your life. Good to know in case you want to start a career in programming :)
However, if you're looking to make a job out of programming, Java and J2EE make for easy money if you know what you're doing. IMHO it gets boring really quickly though.
A: Personally, I have been programming in Java, Python and C/C++, and my favorite has to be Python. Although C++ can do everything Python can do and more, I wrote a Python program of about 10 lines that would take about 50 in C++. So, moral of the story: use Python.
A: If you haven't already, try out a scripting language. It should change the way you work & think. Hopefully, in a good way :)
A: I've got to put up a separate answer for Perl. While Python is roughly equivalent in functionality and considered more clean and modern, Perl has an elegance all of its own - the elegance of pure pragmatism. It also boasts truly great library support. Take a look at Perl to expand your brain in the direction opposite to Haskell :) (although Perl aficionados claim that it can be used for functional programming).
A: Rust
*
*Syntactically similar to C++
*Designed for performance and safety, especially safe concurrency
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: How to save the output of a console application I need advice on how to have my C# console application display text to the user through the standard output while still being able access it later on. The actual feature I would like to implement is to dump the entire output buffer to a text file at the end of program execution.
The workaround I use, while I haven't found a cleaner approach, is to subclass TextWriter, overriding the writing methods so they both write to a file and call the original stdout writer. Something like this:
public class DirtyWorkaround {
private class DirtyWriter : TextWriter {
private TextWriter stdoutWriter;
private StreamWriter fileWriter;
public DirtyWriter(string path, TextWriter stdoutWriter) {
this.stdoutWriter = stdoutWriter;
this.fileWriter = new StreamWriter(path);
}
override public void Write(string s) {
stdoutWriter.Write(s);
fileWriter.Write(s);
fileWriter.Flush();
}
// Same as above for WriteLine() and WriteLine(string),
// plus whatever methods I need to override to inherit
// from TextWriter (Encoding.Get I guess).
}
public static void Main(string[] args) {
using (DirtyWriter dw = new DirtyWriter("path", Console.Out)) {
Console.SetOut(dw);
// Teh codez
}
}
}
Note that it writes to and flushes the file every time. I'd love to do it only at the end of the execution, but I couldn't find any way to access the output buffer.
Also, excuse inaccuracies with the above code (had to write it ad hoc, sorry ;).
A: The perfect solution for this is to use log4net with a console appender and a file appender. There are many other appenders available as well. It also allows you to turn the different appenders off and on at runtime.
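For illustration, a minimal log4net configuration with both appenders might look like the following sketch (the file name and layout pattern are placeholders, not prescribed values):
<log4net>
  <appender name="Console" type="log4net.Appender.ConsoleAppender">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %-5level %message%newline" />
    </layout>
  </appender>
  <appender name="File" type="log4net.Appender.FileAppender">
    <!-- Placeholder path; point this wherever the dump should go -->
    <file value="output.log" />
    <appendToFile value="true" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %-5level %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="INFO" />
    <appender-ref ref="Console" />
    <appender-ref ref="File" />
  </root>
</log4net>
Every message logged through log4net then goes to both the console and the file, and you can switch either appender off in configuration without touching code.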
A: I don't think there's anything wrong with your approach.
If you wanted reusable code, consider implementing a class called MultiWriter or some such that takes as input two (or N?) TextWriter streams and distributes all writes, flushes, etc. to those streams. Then you can do this file/console thing, but just as easily you can split any output stream. Useful!
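A minimal sketch of such a MultiWriter (the name and details are mine, not a standard class):
using System.IO;
using System.Text;
public class MultiWriter : TextWriter
{
    private readonly TextWriter[] writers;
    public MultiWriter(params TextWriter[] writers)
    {
        this.writers = writers;
    }
    // All the other Write/WriteLine overloads of TextWriter funnel
    // through Write(char) by default, so overriding it is enough
    // to fan out every write to all destinations.
    public override void Write(char value)
    {
        foreach (TextWriter writer in writers)
            writer.Write(value);
    }
    public override void Flush()
    {
        foreach (TextWriter writer in writers)
            writer.Flush();
    }
    public override Encoding Encoding
    {
        get { return Encoding.Default; }
    }
}
Then Console.SetOut(new MultiWriter(Console.Out, new StreamWriter("output.txt"))) splits the console output to as many destinations as you like.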
A: Probably not what you want, but just in case... Apparently, PowerShell implements a version of the venerable tee command. Which is pretty much intended for exactly this purpose. So... smoke 'em if you got 'em.
A: I would say mimic the diagnostics that .NET itself uses (Trace and Debug).
Create a "output" class that can have different classes that adhere to a text output interface. You report to the output class, it automatically sends the output given to the classes you have added (ConsoleOutput, TextFileOutput, WhateverOutput).. And so on.. This also leaves you open to add other "output" types (such as xml/xslt to get a nicely formatted report?).
Check out the Trace Listeners Collection to see what I mean.
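As a quick sketch of what that looks like with the built-in Trace class (the file name is a placeholder):
using System.Diagnostics;
Trace.Listeners.Add(new TextWriterTraceListener(Console.Out));
Trace.Listeners.Add(new TextWriterTraceListener("output.log"));
Trace.AutoFlush = true;
// This line is written to both the console and the log file.
Trace.WriteLine("Teh codez");
Swapping listeners in and out (or configuring them in App.config) then changes where the output goes without touching the reporting code.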
A: Consider refactoring your application to separate the user-interaction portions from the business logic. In my experience, such a separation is quite beneficial to the structure of your program.
For the particular problem you're trying to solve here, it becomes straightforward for the user-interaction part to change its behavior from Console.WriteLine to file I/O.
A: I'm working on implementing a similar feature to capture output sent to the Console and save it to a log while still passing the output in real time to the normal Console so it doesn't break the application (eg. if it's a console application!).
If you're still trying to do this in your own code by saving the console output (as opposed to using a logging system to save just the information you really care about), I think you can avoid the flush after each write, as long as you also override Flush() and make sure it flushes the original stdoutWriter you saved as well as your fileWriter. You want to do this in case the application is trying to flush a partial line to the console for immediate display (such as an input prompt, a progress indicator, etc), to override the normal line-buffering.
If that approach has problems with your console output being buffered too long, you might need to make sure that WriteLine() flushes stdoutWriter (but probably doesn't need to flush fileWriter except when your Flush() override is called). But I would think that the original Console.Out (actually going to the console) would automatically flush its buffer upon a newline, so you shouldn't have to force it.
You might also want to override Close() to (flush and) close your fileWriter (and probably stdoutWriter as well), but I'm not sure if that's really needed or if a Close() in the base TextWriter would issue a Flush() (which you would already override) and you might rely on application exit to close your file. You should probably test that it gets flushed on exit, to be sure. And be aware that an abnormal exit (crash) likely won't flush buffered output. If that's an issue, flushing fileWriter on newline may be desirable, but that's another tricky can of worms to work out.
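For example, adding overrides along these lines to the DirtyWriter from the question would cover the flush-and-close behavior described above (a sketch only):
override public void Flush() {
    stdoutWriter.Flush();
    fileWriter.Flush();
}
override public void Close() {
    Flush();
    fileWriter.Close();
}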
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Recursive lambda expression to traverse a tree in C# Can someone show me how to implement a recursive lambda expression to traverse a tree structure in C#.
A: Ok, I found some free time finally.
Here we go:
class TreeNode
{
public string Value { get; set;}
public List<TreeNode> Nodes { get; set;}
public TreeNode()
{
Nodes = new List<TreeNode>();
}
}
Action<TreeNode> traverse = null;
traverse = (n) => { Console.WriteLine(n.Value); n.Nodes.ForEach(traverse);};
var root = new TreeNode { Value = "Root" };
root.Nodes.Add(new TreeNode { Value = "ChildA"} );
root.Nodes[0].Nodes.Add(new TreeNode { Value = "ChildA1" });
root.Nodes[0].Nodes.Add(new TreeNode { Value = "ChildA2" });
root.Nodes.Add(new TreeNode { Value = "ChildB"} );
root.Nodes[1].Nodes.Add(new TreeNode { Value = "ChildB1" });
root.Nodes[1].Nodes.Add(new TreeNode { Value = "ChildB2" });
traverse(root);
A: A proper solution, and indeed the idiomatic solution in many functional programming languages, would be the use of a fixed-point combinator. In a nutshell: a fixed-point combinator answers the question “how do I define an anonymous function to be recursive?”. But the solution is so nontrivial that whole articles are written to explain them.
A simple, pragmatic alternative is to “go back in time” to the antics of C: declaration before definition. Try the following (the “factorial” function):
Func<int, int> fact = null;
fact = x => (x == 0) ? 1 : x * fact(x - 1);
Works like a charm.
Or, for a pre-order tree traversal on an object of class TreeNode which implements IEnumerable<TreeNode> appropriately to go over its children:
Action<TreeNode, Action<TreeNode>> preorderTraverse = null;
preorderTraverse = (node, action) => {
action(node);
foreach (var child in node) preorderTraverse(child, action);
};
A:
A simple alternative is to “go back in time” to the antics of C and C++: declaration before definition. Try the following:
Func<int, int> fact = null;
fact = x => (x == 0) ? 1 : x * fact(x - 1);
Works like a charm.
Yes, that does work, with one little caveat. C# has mutable references. So make sure you don't accidentally do something like this:
Func<int, int> fact = null;
fact = x => (x == 0) ? 1 : x * fact(x - 1);
// Make a new reference to the factorial function
Func<int, int> myFact = fact;
// Use the new reference to calculate the factorial of 4
myFact(4); // returns 24
// Modify the old reference
fact = x => x;
// Again, use the new reference to calculate
myFact(4); // returns 12
Of course, this example is a bit contrived, but this could happen when using mutable references. If you use the combinators from aku's links, this won't be possible.
A: Assuming a mythical object TreeItem that contains a Children collection to represent your hierarchy.
public void HandleTreeItems(Action<TreeItem> item, TreeItem parent)
{
if (parent.Children.Count > 0)
{
foreach (TreeItem ti in parent.Children)
{
HandleTreeItems(item, ti);
}
}
item(parent);
}
Now to call it, passing in the lambda that handles one item, by printing its name to the console.
HandleTreeItems(item => { Console.WriteLine(item.Name); }, TreeItemRoot);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66"
} |
Q: Mocking Static Blocks in Java My motto for Java is "just because Java has static blocks, it doesn't mean that you should be using them." Jokes aside, there are a lot of tricks in Java that make testing a nightmare. Two of the ones I hate most are Anonymous Classes and Static Blocks. We have a lot of legacy code that makes use of Static Blocks, and these are among the annoying points in our push toward writing unit tests. Our goal is to be able to write unit tests for classes that depend on this static initialization with minimal code changes.
So far my suggestion to my colleagues is to move the body of the static block into a private static method and call it staticInit. This method can then be called from within the static block. For unit testing another class that depends on this class could easily mock staticInit with JMockit to not do anything. Let's see this in example.
public class ClassWithStaticInit {
static {
System.out.println("static initializer.");
}
}
Will be changed to
public class ClassWithStaticInit {
static {
staticInit();
}
private static void staticInit() {
System.out.println("static initialized.");
}
}
So that we can do the following in a JUnit.
public class DependentClassTest {
public static class MockClassWithStaticInit {
public static void staticInit() {
}
}
@BeforeClass
public static void setUpBeforeClass() {
Mockit.redefineMethods(ClassWithStaticInit.class, MockClassWithStaticInit.class);
}
}
However this solution also comes with its own problems. You can't run DependentClassTest and ClassWithStaticInitTest on the same JVM since you actually want the static block to run for ClassWithStaticInitTest.
What would be your way of accomplishing this task? Or any better, non-JMockit based solutions that you think would work cleaner?
A: PowerMock is another mock framework that extends EasyMock and Mockito. With PowerMock you can easily remove unwanted behavior from a class, for example a static initializer. In your example you simply add the following annotations to your JUnit test case:
@RunWith(PowerMockRunner.class)
@SuppressStaticInitializationFor("some.package.ClassWithStaticInit")
PowerMock does not use a Java agent and therefore does not require modification of the JVM startup parameters. You simply add the jar file and the above annotations.
A: Sounds to me like you are treating a symptom: poor design with dependencies on static initialization. Maybe some refactoring is the real solution. It sounds like you've already done a little refactoring with your staticInit() function, but maybe that function needs to be called from the constructor, not from a static initializer. If you can do away with static initializers period, you will be better off. Only you can make this decision (I can't see your codebase) but some refactoring will definitely help.
As for mocking, I use EasyMock, but I have run into the same issue. Side effects of static initializers in legacy code make testing difficult. Our answer was to refactor out the static initializer.
A: When I run into this problem, I usually do the same thing you describe, except I make the static method protected so I can invoke it manually. On top of this, I make sure that the method can be invoked multiple times without problems (otherwise it is no better than the static initializer as far as the tests go).
This works reasonably well, and I can actually test that the static initializer method does what I expect/want it to do. Sometimes it is just easiest to have some static initialization code, and it just isn't worth it to build an overly complex system to replace it.
When I use this mechanism, I make sure to document that the protected method is only exposed for testing purposes, with the hopes that it won't be used by other developers. This of course may not be a viable solution, for example if the class' interface is externally visible (either as a sub-component of some kind for other teams, or as a public framework). It is a simple solution to the problem though, and doesn't require a third party library to set up (which I like).
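A minimal sketch of that approach (the guard flag and its name are mine, added to make repeated calls safe):
public class ClassWithStaticInit {
    private static boolean initialized = false;
    static {
        staticInit();
    }
    // Exposed for testing purposes only; safe to invoke multiple times.
    protected static synchronized void staticInit() {
        if (initialized) {
            return;
        }
        System.out.println("static initialized.");
        initialized = true;
    }
}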
A: You could write your test code in Groovy and easily mock the static method using metaprogramming.
Math.metaClass.'static'.max = { int a, int b ->
a + b
}
Math.max 1, 2
If you can't use Groovy, you will really need to refactor the code (maybe to inject something like an initializer).
Kind Regards
A: Occasionally, I find static initializers in classes that my code depends on. If I cannot refactor the code, I use PowerMock's @SuppressStaticInitializationFor annotation to suppress the static initializer:
@RunWith(PowerMockRunner.class)
@SuppressStaticInitializationFor("com.example.ClassWithStaticInit")
public class ClassWithStaticInitTest {
ClassWithStaticInit tested;
@Before
public void setUp() {
tested = new ClassWithStaticInit();
}
@Test
public void testSuppressStaticInitializer() {
assertNotNull(tested);
}
// more tests...
}
Read more about suppressing unwanted behaviour.
Disclaimer: PowerMock is an open source project developed by two colleagues of mine.
A: This is going to get into more "Advanced" JMockit. It turns out, you can redefine static initialization blocks in JMockit by creating a public void $clinit() method. So, instead of making this change
public class ClassWithStaticInit {
static {
staticInit();
}
private static void staticInit() {
System.out.println("static initialized.");
}
}
we might as well leave ClassWithStaticInit as is and do the following in the MockClassWithStaticInit:
public static class MockClassWithStaticInit {
public void $clinit() {
}
}
This will in fact allow us to not make any changes in the existing classes.
A: I suppose you really want some kind of factory instead of the static initializer.
Some mix of a singleton and an abstract factory would probably get you the same functionality as today, with good testability, but that would add quite a lot of boilerplate code, so it might be better to just try to refactor the static stuff away completely, or at least get away with a less complex solution.
Hard to tell if it's possible without seeing your code though.
A: I'm not super knowledgeable in Mock frameworks so please correct me if I'm wrong but couldn't you possibly have two different Mock objects to cover the situations that you mention? Such as
public static class MockClassWithEmptyStaticInit {
public static void staticInit() {
}
}
and
public static class MockClassWithStaticInit {
public static void staticInit() {
System.out.println("static initialized.");
}
}
Then you can use them in your different test cases
@BeforeClass
public static void setUpBeforeClass() {
Mockit.redefineMethods(ClassWithStaticInit.class,
MockClassWithEmptyStaticInit.class);
}
and
@BeforeClass
public static void setUpBeforeClass() {
Mockit.redefineMethods(ClassWithStaticInit.class,
MockClassWithStaticInit.class);
}
respectively.
A: Not really an answer, but just wondering - isn't there any way to "reverse" the call to Mockit.redefineMethods?
If no such explicit method exists, shouldn't executing it again in the following fashion do the trick?
Mockit.redefineMethods(ClassWithStaticInit.class, ClassWithStaticInit.class);
If such a method exists, you could execute it in the class' @AfterClass method, and test ClassWithStaticInitTest with the "original" static initializer block, as if nothing has changed, from the same JVM.
This is just a hunch though, so I may be missing something.
A: You can use PowerMock to execute the private method call like:
ClassWithStaticInit staticInitClass = new ClassWithStaticInit()
Whitebox.invokeMethod(staticInitClass, "staticInit");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52"
} |
Q: Where do the Python unit tests go? If you're writing a library, or an app, where do the unit test files go?
It's nice to separate the test files from the main app code, but it's awkward to put them into a "tests" subdirectory inside of the app root directory, because it makes it harder to import the modules that you'll be testing.
Is there a best practice here?
A: How I do it...
Folder structure:
project/
    src/
        code.py
    tests/
    setup.py
Setup.py points to src/ as the location containing my project's modules, then I run:
setup.py develop
This adds my project into site-packages, pointing to my working copy. To run my tests I use:
setup.py tests
Using whichever test runner I've configured.
A: A common practice is to put the tests directory in the same parent directory as your module/package. So if your module was called foo.py your directory layout would look like:
parent_dir/
    foo.py
    tests/
Of course there is no one way of doing it. You could also make a tests subdirectory and import the module using absolute import.
Wherever you put your tests, I would recommend you use nose to run them. Nose searches through your directories for tests. This way, you can put tests wherever they make the most sense organizationally.
A: I prefer a top-level tests directory. This does mean imports become a little more difficult. For that I have two solutions:
*
*Use setuptools. Then you can pass test_suite='tests.runalltests.suite' into setup(), and can run the tests simply: python setup.py test
*Set PYTHONPATH when running the tests: PYTHONPATH=. python tests/runalltests.py
Here's how that stuff is supported by code in M2Crypto:
*
*http://svn.osafoundation.org/m2crypto/trunk/setup.py
*http://svn.osafoundation.org/m2crypto/trunk/tests/alltests.py
If you prefer to run tests with nosetests, you might need to do something a little different.
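For reference, a minimal setup.py along the lines of option 1 might look like this (the project name is a placeholder):
from setuptools import setup, find_packages
setup(
    name='myproject',
    packages=find_packages(),
    test_suite='tests.runalltests.suite',
)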
A: I put my tests in the same directory as the code under test (CUT). In projects where I can tweak pytest with my plugin, for foo.py I use foo.pt for the tests which makes editing a particular module and its test together really easy: vi foo.*.
Where I can't do this, I use foo_ut.py or similar. You can still use vi foo* though that will also catch foobar.py and foobar_ut.py if those exist.
In either case I tweak the test discovery process to find these.
This puts the tests right beside the code in a directory listing, making it obvious that tests are there, and makes opening the tests as easy as it can possibly be when they're in a separate file. (For editors started from the command line, as described above; for GUI systems, click on the code file and the adjacent (or very nearly adjacent) test file.)
As others have pointed out, this also makes it easier to refactor and to extract the code for use elsewhere should that ever be necessary.
I really dislike the idea of putting tests in a completely different directory tree; why make it harder than necessary for developers to open up the tests when they're opening the file with the CUT? It's not like the vast majority of developers are so keen on writing or tweaking tests that they'll ignore any barrier to doing that, instead of using the barrier as an excuse. (Quite the opposite, in my experience; even when you make it as easy as possible I know many developers who can't be bothered to write tests.)
A: We had the very same question when writing Pythoscope (https://pypi.org/project/pythoscope/), which generates unit tests for Python programs. We polled people on the testing-in-python list before we chose a directory; there were many different opinions. In the end we chose to put a "tests" directory in the same directory as the source code. In that directory we generate a test file for each module in the parent directory.
A: We use
app/src/code.py
app/testing/code_test.py
app/docs/..
In each test file we insert ../src/ into sys.path. It's not the nicest solution, but it works. I think it would be great if someone came up with something like Maven in Java that gives you standard conventions that just work, no matter what project you work on.
A: I also tend to put my unit tests in the file itself, as Jeremy Cantrell above notes, although I tend to not put the test function in the main body, but rather put everything in an
if __name__ == '__main__':
    # do tests...
block. This ends up adding documentation to the file as 'example code' for how to use the python file you are testing.
I should add, I tend to write very tight modules/classes. If your modules require very large numbers of tests, you can put them in a separate file, but even then, I'd still add:
if __name__ == '__main__':
    import tests.thisModule
    tests.thisModule.runtests()
This lets anybody reading your source code know where to look for the test code.
A: Every once in a while I find myself checking out the topic of test placement, and every time the majority recommends a separate folder structure beside the library code, but I find that every time the arguments are the same and are not that convincing. I end up putting my test modules somewhere beside the core modules.
The main reason for doing this is: refactoring.
When I move things around I do want test modules to move with the code; it's easy to lose tests if they are in a separate tree. Let's be honest, sooner or later you end up with a totally different folder structure, like django, flask and many others. Which is fine if you don't care.
The main question you should ask yourself is this:
Am I writing:
*
*a) reusable library or
*b) building a project than bundles together some semi-separated modules?
If a:
A separate folder and the extra effort to maintain its structure may be better suited. No one will complain about your tests getting deployed to production.
But it's also just as easy to exclude tests from being distributed when they are mixed with the core folders; put this in the setup.py:
find_packages("src", exclude=["*.tests", "*.tests.*", "tests.*", "tests"])
If b:
You may wish — as every one of us does — that you are writing reusable libraries, but most of the time their life is tied to the life of the project. The ability to easily maintain your project should be a priority.
Then if you did a good job and your module is a good fit for another project, it will probably get copied — not forked or made into a separate library — into this new project, and moving tests that lie beside it in the same folder structure is easy in comparison to fishing tests out of the mess that a separate test folder has become. (You may argue that it shouldn't be a mess in the first place, but let's be realistic here.)
So the choice is still yours, but I would argue that with mixed up tests you achieve all the same things as with a separate folder, but with less effort on keeping things tidy.
A: For a file module.py, the unit test should normally be called test_module.py, following Pythonic naming conventions.
There are several commonly accepted places to put test_module.py:
*
*In the same directory as module.py.
*In ../tests/test_module.py (at the same level as the code directory).
*In tests/test_module.py (one level under the code directory).
I prefer #1 for its simplicity of finding the tests and importing them. Whatever build system you're using can easily be configured to run files starting with test_. Actually, the default unittest pattern used for test discovery is test*.py.
A: I use a tests/ directory, and then import the main application modules using relative imports. So in MyApp/tests/foo.py, there might be:
from .. import foo
to import the MyApp.foo module.
A: I don't believe there is an established "best practice".
I put my tests in another directory outside of the app code. I then add the main app directory to sys.path (allowing you to import the modules from anywhere) in my test runner script (which does some other stuff as well) before running all the tests. This way I never have to remove the tests directory from the main code when I release it, saving me time and effort, if an ever so tiny amount.
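A minimal sketch of such a runner script (the file layout and names are mine):
# tests/runtests.py
import os
import sys
import unittest

HERE = os.path.abspath(os.path.dirname(__file__))

# Put the main app directory on sys.path so the tests can
# import its modules from anywhere.
sys.path.insert(0, os.path.dirname(HERE))

if __name__ == '__main__':
    suite = unittest.defaultTestLoader.discover(HERE)
    unittest.TextTestRunner().run(suite)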
A: From my experience in developing testing frameworks in Python, I would suggest putting Python unit tests in a separate directory. Maintain a symmetric directory structure. This is helpful for packaging just the core libraries without packaging the unit tests. This is illustrated in the schematic diagram below.
<Main Package>/
    lib/
        module1.py
        module2.py
        module3.py
        module4.py
        __init__.py
    tests/
        ut_module1.py
        ut_module2.py
        ut_module3.py
        ut_module4.py
In this way, when you package these libraries using an RPM, you can package just the main library modules. This helps maintainability, particularly in an agile environment.
A: I recommend you check some main Python projects on GitHub and get some ideas.
When your code gets larger and you add more libraries it's better to create a test folder in the same directory you have setup.py and mirror your project directory structure for each test type (unittest, integration, ...)
For example if you have a directory structure like:
myPackage/
    myapp/
        moduleA/
            __init__.py
            module_A.py
        moduleB/
            __init__.py
            module_B.py
    setup.py
After adding the test folder you will have a directory structure like:
myPackage/
    myapp/
        moduleA/
            __init__.py
            module_A.py
        moduleB/
            __init__.py
            module_B.py
    test/
        unit/
            myapp/
                moduleA/
                    module_A_test.py
                moduleB/
                    module_B_test.py
        integration/
            myapp/
                moduleA/
                    module_A_test.py
                moduleB/
                    module_B_test.py
    setup.py
Many properly written Python packages use the same structure. A very good example is the Boto package.
Check https://github.com/boto/boto
A: Only 1 test file
If there is only one test file, putting it in the top-level directory is recommended:
module/
    lib/
        __init__.py
        module.py
    test.py
Run the test in CLI
python test.py
Many test files
If there are many test files, put them in a tests folder:
module/
    lib/
        __init__.py
        module.py
    tests/
        test_module.py
        test_module_function.py
# test_module.py
import unittest
from lib import module

class TestModule(unittest.TestCase):
    def test_module(self):
        pass

if __name__ == '__main__':
    unittest.main()
Run the test in CLI
# In top-level /module/ folder
python -m tests.test_module
python -m tests.test_module_function
Use unittest discovery
unittest discovery will find all tests in the package folder.
Create a __init__.py in tests/ folder
module/
    lib/
        __init__.py
        module.py
    tests/
        __init__.py
        test_module.py
        test_module_function.py
Run the test in CLI
# In top-level /module/ folder
# -s, --start-directory (default current directory)
# -p, --pattern (default test*.py)
python -m unittest discover
Reference
*
*pytest Good Practices for test layout
*unittest
Unit test framework
*
*nose
*nose2
*pytest
A: In C#, I've generally separated the tests into a separate assembly.
In Python -- so far -- I've tended to either write doctests, where the test is in the docstring of a function, or put them in the if __name__ == "__main__" block at the bottom of the module.
A: If the tests are simple, simply put them in the docstring -- most of the test frameworks for Python will be able to use that:
>>> import module
>>> module.method('test')
'testresult'
For other more involved tests, I'd put them either in ../tests/test_module.py or in tests/test_module.py.
A: When writing a package called "foo", I will put unit tests into a separate package "foo_test". Modules and subpackages will then have the same name as the SUT package module. E.g. tests for a module foo.x.y are found in foo_test.x.y. The __init__.py files of each testing package then contain an AllTests suite that includes all test suites of the package. setuptools provides a convenient way to specify the main testing package, so that after "python setup.py develop" you can just use "python setup.py test" or "python setup.py test -s foo_test.x.SomeTestSuite" to run just a specific suite.
A: I've recently started to program in Python, so I've not really had a chance to find out best practice yet.
But, I've written a module that goes and finds all the tests and runs them.
So, I have:
app/
    appfile.py
test/
    appfileTest.py
I'll have to see how it goes as I progress to larger projects.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "556"
} |
Q: What is the best way to position a div in CSS? I'm trying to place this menu on the left hand side of the page:
<div class="left-menu" style="left: 123px; top: 355px">
<ul>
<li> Categories </li>
<li> Weapons </li>
<li> Armor </li>
<li> Manuals </li>
<li> Sustenance </li>
<li> Test </li>
</ul>
</div>
The problem is that if I use absolute or fixed values, different screen sizes will render the navigation bar differently. I also have a second div that contains all the main content, which also needs to be moved to the right; so far I'm using relative values, which seem to work no matter the screen size.
A: float is indeed the right property to achieve this. However, the example given by bmatthews68 can be improved. The most important thing about floating boxes is that they must specify an explicit width. This can be rather inconvenient but this is the way CSS works. However, notice that px is a unit of measure that has no place in the world of HTML/CSS, at least not to specify widths.
Always resort to measures that will work with different font sizes, i.e. either use em or %. Now, if the menu is implemented as a floating body, then this means that the main content floats “around” it. If the main content is higher than the menu, this might not be what you want:
float1 http://page.mi.fu-berlin.de/krudolph/stuff/float1.png
<div style="width: 10em; float: left;">Left</div>
<div>Right, spanning<br/> multiple lines</div>
You can correct this behaviour by giving the main content a margin-left equal to the width of the menu:
float2 http://page.mi.fu-berlin.de/krudolph/stuff/float2.png
<div style="width: 10em; float: left;">Left</div>
<div style="margin-left: 10em;">Right, spanning<br/> multiple lines</div>
In most cases you also want to give the main content a padding-left so it doesn't “stick” to the menu too closely.
By the way, it's trivial to change the above so that the menu is on the right side instead of the left: simply change every occurrence of the word “left” to “right”.
Ah, one last thing. If the menu's content is higher than the main content, it will render oddly because float does some odd things. In that case, you will have to clear the box that comes below the floating body, as in bmatthews68's example.
/EDIT: Damn, HTML doesn't work the way the preview showed it. Well, I've included pictures instead.
A: I think you're supposed to use the float property for positioning things like that. You can read about it here.
A: All the answers saying to use floats (with explicit widths) are correct. But to answer the original question, what is the best way to position a <div>? It depends.
CSS is highly contextual, and the flow of a page is dependent on the structure of your HTML. Normal flow is how elements, and their children, will lay out top to bottom (for block elements) and left to right (for inline elements) inside their containing block (usually the parent). This is how the majority of your layout should work. You will tend to rely on width, margin, and padding to define the spacing and layout of the elements relative to the other elements around them (be they <div>, <ul>, <p>, or otherwise; HTML is mostly semantic at this point).
Using styles like float or absolute or relative positioning can help you achieve very specific goals of your layout, but it's important to know how to use them. As has been explained, float is generally used to place block elements next to each other, and it's really good for multi-column layouts.
I won't go into more details here, but you might want to check out the following:
*
*SitePoint CSS References - probably the most straightforward and complete CSS reference I've found online.
*W3C CSS2.1 Visual Formatting Model - Yes, it's a tough read, but it does explain everything.
A: You should use the float and clear CSS properties to get the desired effect.
First I defined styles called left and right for the two columns in my layout, and a style called clear used to reset the page flow.
<style type="text/css">
.left {
float: left;
width: 200px;
}
.right {
float: right;
width: 800px;
}
.clear {
clear: both;
height: 1px;
}
</style>
Then I use them to layout my page.
<div>
<div class="left">
<ul>
<li>Categories</li>
<li>Weapons</li>
<li>Armor</li>
<li>Manuals</li>
<li>Sustenance</li>
<li>Test</li>
</ul>
</div>
<div class="right">
Blah Blah Blah....
</div>
</div>
<div class="clear" />
A: you can use float
<div class="left-menu">
<ul>
<li> Categories </li>
<li> Weapons </li>
<li> Armor </li>
<li> Manuals </li>
<li> Sustenance </li>
<li> Test </li>
</ul>
</div>
in css file
.left-menu{float:left;width:200px;}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Virtualbox host-guest network setup How do I set up a network between the Host and the guest OS in Windows vista?
A: Give the guest two network adapters, one NAT and the other Host-only. The NAT one will allow the guest to see the Internet, and the Host-only one will allow the host to see the guest.
One of them also allows the guest to see the host. I'm not sure which, but I know it works since I've tested web server stuff with it. You just have to choose the right IP address, 10.x.x.x or 192.168.x.x.
Also, you may have to be careful about having File and Printer Sharing running on both adapters at once, since the guest will see its own name and conflict with itself. I ran into this during install.
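If you prefer the command line, something along these lines should configure the two adapters (a sketch; the VM and host-only adapter names are placeholders for whatever your setup uses):
VBoxManage modifyvm "MyGuest" --nic1 nat
VBoxManage modifyvm "MyGuest" --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"
Run these while the VM is powered off, then check the result under the VM's network settings in the GUI.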
A: You can do this on a Linux host. I've documented the steps I took in Ubuntu 9.04 here.
A: I've got a better answer than my first one.
Give the guest a single Host-only network adapter, and enable Internet Connection Sharing (ICS) on the host. I've tried this on a Windows XP host with a Windows XP guest.
The guest can connect to the Internet.
The guest can connect to the host at an address like 192.168.0.1 (chosen by ICS). -- Remember to allow the guest through the host's firewall.
The host can connect to the guest at an address like 192.168.0.22 (assigned by the DHCP service provided by ICS).
A: I don't run Vista, but VirtualBox should do most of the setup for you - all you need to do is assign an IP address, subnet mask, and (optionally) a default gateway to your guest OS, and it should just work.
Don't bother with any of the advanced network settings in the options for the VM - they're useful in some situations, but I've never had to use them, and I've been using virtualbox for some years now.
If you post the specific problem you're having perhaps I can help more. But your question is rather vague...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: Getting mail from GMail into Java application using IMAP I want to access messages in Gmail from a Java application using JavaMail and IMAP. Why am I getting a SocketTimeoutException?
Here is my code:
Properties props = System.getProperties();
props.setProperty("mail.imap.host", "imap.gmail.com");
props.setProperty("mail.imap.port", "993");
props.setProperty("mail.imap.connectiontimeout", "5000");
props.setProperty("mail.imap.timeout", "5000");
try {
Session session = Session.getDefaultInstance(props, new MyAuthenticator());
URLName urlName = new URLName("imap://[email protected]:[email protected]");
Store store = session.getStore(urlName);
if (!store.isConnected()) {
store.connect();
}
} catch (NoSuchProviderException e) {
e.printStackTrace();
System.exit(1);
} catch (MessagingException e) {
e.printStackTrace();
System.exit(2);
}
I have set the timeout values so that it wouldn't take "forever" to timeout. Also, MyAuthenticator also has the username and password, which seems redundant with the URL. Is there another way to specify the protocol? (I didn't see it in the JavaDoc for IMAP.)
A: You need to use the following properties for imaps:
props.setProperty("mail.imaps.host", "imap.gmail.com");
props.setProperty("mail.imaps.port", "993");
props.setProperty("mail.imaps.connectiontimeout", "5000");
props.setProperty("mail.imaps.timeout", "5000");
Notices it's "imaps", not "imap", since the protocol you're using is imaps (IMAP + SSL)
A: Using imaps was a great suggestion. Neither of the answers provided just worked for me, so I googled some more and found something that worked. Here's how my code looks now.
Properties props = System.getProperties();
props.setProperty("mail.store.protocol", "imaps");
try {
Session session = Session.getDefaultInstance(props, null);
Store store = session.getStore("imaps");
store.connect("imap.gmail.com", "<username>@gmail.com", "<password>");
...
} catch (NoSuchProviderException e) {
e.printStackTrace();
System.exit(1);
} catch (MessagingException e) {
e.printStackTrace();
System.exit(2);
}
This is nice because it takes the redundant Authenticator out of the picture. I'm glad this worked because SSLNOTES.txt makes my head spin.
A: In JavaMail, you can use imaps as the URL scheme to use IMAP over SSL. (See SSLNOTES.txt in your JavaMail distribution for more details.) For example, imaps://username%[email protected]/INBOX.
Similarly, use smtps to send emails via Gmail. e.g., smtps://username%[email protected]/. Again, read SSLNOTES.txt for more details. Hope it helps!
A: You have to connect to GMail using SSL only. Setting the following properties will force that for you.
props.setProperty("mail.imap.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
props.setProperty("mail.imap.socketFactory.fallback", "false");
A: Here is what worked for my team and I, given a classic account [email protected] and a business account [email protected] :
final Properties properties = new Properties();
properties.put("mail.imap.ssl.enable", "true");
imapSession = Session.getInstance(properties, null);
imapSession.setDebug(false);
imapStore = imapSession.getStore("imap");
imapStore.connect("imap.gmail.com", USERNAME, "password");
with USERNAME = "nickname" in the classic case, and USERNAME = "[email protected]" in the business account case.
In the classic case, don't forget to lower the account security here : https://www.google.com/settings/security/lesssecureapps
In both cases check in GMail Settings => Forwarding POP / IMAP if IMAP is enabled for the account.
Hope it helps!
To go further :
*
*http://www.oracle.com/technetwork/java/javamail/faq/index.html#gmail
*https://support.google.com/mail/accounts/answer/78754
A: If you'd like more sample code on using JavaMail with Gmail (e.g. converting Gmail labels to IMAP folder names, or using IMAP IDLE), do check out my program GmailAssistant on SourceForge.
A: Check http://g4j.sourceforge.net/. There is a minimal gmail client built using this API.
A: I used the following properties to get the store, and it works well.
"mail.imaps.host" : "imap.gmail.com"
"mail.store.protocol" : "imaps"
"mail.imaps.port" : "993"
A: URLName server = new URLName("imaps://<gmail-user-name>:<gmail-pass>@imap.gmail.com/INBOX");
A: You need to have JSSE installed to use SSL with Java
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "76"
} |
Q: Web in a desktop application: Good web browser controls? I've been utlising a "web browser control" in desktop based applications (in my case Windows Forms .NET) for a number of years. I mostly use it to create a familiar flow-based user interface that also allows a seamless transition to the internet where required.
I'm really tired of the IE browser control because of the poor quality html it generates on output. Also, I guess that it is really just IE7 behind the scenes and so has many of that browser "issues". Despite this, it is quite a powerful control and provides rich interaction with your desktop app.
So, what other alternatives to the IE browser control are there? I looked at a Mosaic equivalent a year ago but was disappointed with the number of unimplemented features, maybe this has improved recently?
A: hmm..Interestingly
*
*Mozilla seems to provide ActiveX control
*K-Melon is another Gecko based browser control
A: Popular layout engines:
*
*Mozilla Gecko
*KHTML
*WebKit (based on KHTML)
Though I'm not sure how easy it is to embed those in a .Net app.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: jTDS - No Suitable Driver Exception when running a Maven built project We have a simple [spring-hibernate] application (console app) wherein we have set the classpath in the manifest file of the executable JAR file. The app connects to the database using the jTDS JDBC driver. Everything works as expected on a Windows machine with JDK 1.6, but on Linux, the app is unable to find the driver.
We are running the program using java -jar MainClassName.
Any suggestions as to why this might be happening are greatly appreciated.
A: This issue occurred because our jdbc.url property ended up with an invalid URL. Maven treats jdbc.url as a special property during profiling, so its own value was used instead of the URL defined in filter.properties. That is the reason for the "No Suitable Driver" exception. The question should have been clearer.
Anyway, to fix it we had to rename the jdbc.url property to jdbc.url.somename. This fixed our issue with Maven profiling. We also had a similar Maven profiling issue with a property called "server.name"; this filter property was also confusing Maven profiling, and we had to rename it as well.
Thanks again Fernando.
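In other words, the fix amounted to a rename along these lines in filter.properties (the property value here is a made-up example, not from our project):
# Before: Maven treats this property name specially during filtering
jdbc.url=jdbc:jtds:sqlserver://dbhost:1433/mydb
# After: renamed so the intended value is substituted
jdbc.url.somename=jdbc:jtds:sqlserver://dbhost:1433/mydb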
A: Honestly, it sounds like a bad CLASSPATH. One thing I suggest to start debugging this problem is copying the jTDS package to the same path as your main packages/classes and seeing if it works. This way you can verify whether the manifest Class-Path is the problem. Spring/Hibernate relies on the lib directory, so it will always be on the classpath as part of the main structure. Use the lib directory for testing as well.
Hope these guidelines help. Also send more information, like paths, the classpath and the manifest files.
A: It is a Maven bug
http://jira.codehaus.org/browse/MNG-3563
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Tools to create maximum velocity in a .NET dev team If you were to self-fund a software project, which tools, frameworks and components would you employ to ensure maximum productivity for the dev team and that the "real" problem is being worked on?
What I'm looking for are low-friction tools which get the job done with a minimum of fuss. Tools I'd characterize as such are SVN/TortoiseSVN, ReSharper, and VS itself. I'm looking for frameworks which solve the problems inherent in all software projects, like ORM, logging, and UI frameworks/components. An example on the UI side would be ASP.NET MVC vs WebForms vs MonoRail.
A: Great tools and frameworks are essential, but the other essential is great project leadership.
A: I would add Resharper to the list and Ndepend. Most likely Rhino mocks too.
A: *
*Versioning. Subversion is the popular choice. If you can afford it, Team Foundation Server offers some benefits. If you want to be super-modern, consider a distributed versioning system, such as git, bazaar or Mercurial. Whatever you do, don't use SourceSafe or other lock-based tools, but rather merge-based ones. Consider installing both a Windows Explorer client (such as TortoiseSVN) as well as a Visual Studio add-in (such as AnkhSVN or VisualSVN).
*Issue tracking. Given that Joel Spolsky is on this site's staff, FogBugz deserves a mention. Trac, Mantis and BugZilla are widespread open-source choices.
*Continuous integration. CruiseControl.NET is a popular and open-source choice. There's also Draco.NET.
*Unit testing. NUnit is the popular open-source choice. Does the job. Consider installing the TestDriven.NET Visual Studio add-in.
That said, you want to look at the answers to Essential Programming Tools and What is your best list of ‘must have’ development tools?; while not .NET-specific, they should apply anyway.
A: I would add one more to what edg says up there. You need people with at least some talent as well.
As David Wheeler, author of the Flawfinder source code checker says:
A fool with a tool is still a fool
A: I'd recommend a Safari Books Online subscription as well.
Oh, and gallons of coffee.
A: I'll add Moq to the list for mocking. Much less syntax than most other mocking frameworks.
A: I'd definitely recommend Coderush+Refactor or Resharper (Coderush being my personal favourite); these tools dramatically reduce the time to go from code in your head to code on the page.
For quick development the UI component sets from the likes of Telerik/DevExpress/Infragistics can be good, but in my experience can cause pain further out in the project when you want to refine things more precisely.
Regarding frameworks etc I think you'd need to be a bit more specific about the project itself to get any meaningful suggestions.
A: Good source control should probably be your number 1 priority. I've mentioned them before, but CVSDude is an excellent managed source control provider. I'm using an SVN package and it's brilliant. It saves a lot of hassle setting up your own server, etc.
A: Microsoft's Enterprise Library can also be helpful.
This release of Enterprise Library includes application blocks for Caching, Cryptography, Data Access, Exception Handling, Logging, Policy Injection, Security and Validation.
A: This is what we use for our team:
Issue Tracking: Redmine - This is an awesome, free issue/project management tool. It is a Ruby on Rails app, however, so you'll need a proper environment to get it up and running.
Source Control: Subversion with TortoiseSVN - Subversion is an awesome source control solution and Tortoise integrates with Explorer very nicely; no need for command-line stuff. It also supports user-side hook scripts, which have come in handy a number of times for my team.
And that's about it really. We don't use a main framework, instead we just roll our own libraries that fit what we need to do with a given project. We do use jquery for a JavaScript library however.
Some other random things would be free coffee, and the best equipment money can buy.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How do I remove local (untracked) files from the current Git working tree? How do I delete untracked local files from the current working tree?
A:
git-clean - Remove untracked files from the working tree
Synopsis
git clean [-d] [-f] [-i] [-n] [-q] [-e <pattern>] [-x | -X] [--] <path>…
Description
Cleans the working tree by recursively removing files that are not under version control, starting from the current directory.
Normally, only files unknown to Git are removed, but if the -x option is specified, ignored files are also removed. This can, for example, be useful to remove all build products.
If any optional <path>... arguments are given, only those paths are affected.
Step 1 is to show what will be deleted by using the -n option:
# Print out the list of files and directories which will be removed (dry run)
git clean -n -d
Clean Step - beware: this will delete files:
# Delete the files from the repository
git clean -f
*
*To remove directories, run git clean -f -d or git clean -fd
*To remove ignored files, run git clean -f -X or git clean -fX
*To remove ignored and non-ignored files, run git clean -f -x or git clean -fx
Note the case difference on the X for the two latter commands.
If clean.requireForce is set to "true" (the default) in your configuration, one needs to specify -f otherwise nothing will actually happen.
Again see the git-clean docs for more information.
Options
-f, --force
If the Git configuration variable clean.requireForce is not set to
false, git clean will refuse to run unless given -f, -n or -i.
-x
Don’t use the standard ignore rules read from .gitignore (per
directory) and $GIT_DIR/info/exclude, but do still use the ignore
rules given with -e options. This allows removing all untracked files,
including build products. This can be used (possibly in conjunction
with git reset) to create a pristine working directory to test a clean
build.
-X
Remove only files ignored by Git. This may be useful to rebuild
everything from scratch, but keep manually created files.
-n, --dry-run
Don’t actually remove anything, just show what would be done.
-d
Remove untracked directories in addition to untracked files. If an
untracked directory is managed by a different Git repository, it is
not removed by default. Use -f option twice if you really want to
remove such a directory.
A: Remove all extra folders and files in this repo + submodules
This gets you in same state as fresh clone.
git clean -ffdx
Remove all extra folders and files in this repo but not its submodules
git clean -fdx
Remove extra folders but not files (ex. build or logs folder)
git clean -fd
Remove extra folders + ignored files (but not newly added files)
If file wasn't ignored and not yet checked-in then it stays. Note the capital X.
git clean -fdX
New interactive mode
git clean
A: We can easily remove local untracked files from the current Git working tree by using the Git commands below.
git reset [--soft | --mixed [-N] | --hard | --merge | --keep] [-q] [<commit>]
Example:
git reset --hard HEAD
Links :
*
*https://git-scm.com/docs/git-reset
*How do I use 'git reset --hard HEAD' to revert to a previous commit?
*Reset local repository branch to be just like remote repository HEAD
*https://jwiegley.github.io/git-from-the-bottom-up/3-Reset/4-doing-a-hard-reset.html
A:
Clean out git repository and all submodules recursively
The following command will clean out
the current git repository and all its submodules recursively:
(git clean -d -x -f && git submodule foreach --recursive git clean -d -x -f)
A: oh-my-zsh with zsh provides those great aliases via the git plugin. They can be used in bash as well.
gclean='git clean -fd'
gpristine='git reset --hard && git clean -dfx'
*
*gclean removes untracked directories in addition to untracked files.
*gpristine hard-resets the local changes and removes untracked directories and untracked files. It doesn't use the standard ignore rules read from .gitignore (per directory) and $GIT_DIR/info/exclude, but does still use the ignore rules given with -e options. This allows removing all untracked files, including build products. This can be used (possibly in conjunction with git reset) to create a pristine working directory to test a clean build.
A: git clean -f
will remove the untracked files from the current Git working tree
git clean -fd
when you want to remove directories as well as files; this will delete only untracked directories and files
A: OK, deleting unwanted untracked files and folders is easy using Git on the command line; just do it like this:
git clean -fd
Double check before doing it, as it will delete files and folders without keeping any history...
Also, in this case -f stands for force and -d stands for directory...
So, if you want to delete files only, you can use -f only:
git clean -f
If you want to delete directories as well as files (untracked ones only), you can do it like this:
git clean -fd
Also, you can use the -x flag to include files that are ignored by Git. This would be helpful if you want to delete everything.
Adding the -i flag makes Git ask you for permission to delete files one by one as it goes.
If you're not sure and want to check things first, add the -n flag.
Use -q if you don't want to see any report after a successful deletion.
I also created the image below to make it more memorable, especially since I have seen many people confuse -f with cleaning folders, or otherwise mix the flags up!
A: git clean -fd removes directories
git clean -fX removes ignored files
git clean -fx removes ignored and un-ignored files
The above options can be combined, e.g.:
git clean -fdx
(note that -x and -X cannot be used together). Check the git manual for more help.
A: I think the safe and easy way is this!
git add .
git stash
For more information
https://www.atlassian.com/git/tutorials/saving-changes/git-stash#stashing-your-work
A: I am surprised nobody mentioned this before:
git clean -i
That stands for interactive and you will get a quick overview of what is going to be deleted offering you the possibility to include/exclude the affected files. Overall, still faster than running the mandatory --dry-run before the real cleaning.
You will have to toss in a -d if you also want to take care of empty folders. At the end, it makes for a nice alias:
git iclean
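Defining that alias is a one-liner; one plausible definition (with the -d tossed in, per the above) would be:
git config --global alias.iclean "clean -i -d"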
That being said, the extra hand holding of interactive commands can be tiring for experienced users. These days I just use the already mentioned git clean -fd
A: To remove untracked changes completely, use git clean -f -d:
$ git clean -f -d
Removing client/app/helpers/base64.js
Removing files/
Removing package.json.bak
where
-f is for force
-d is for directories
A: A better way is to use: git clean
git clean -d -x -f
This removes untracked files, including directories (-d) and files ignored by git (-x).
Also, replace the -f argument with -n to perform a dry-run or -i for interactive mode and it will tell you what will be removed.
A: git-clean - Remove untracked files from the working tree
A: I like to use the git stash command; later you can get the stashed files and changes back.
git clean is also a good option, but it totally depends on your requirements.
Here is the explanation of git stash and git clean: 7.3 Git Tools - Stashing and Cleaning
A: A simple way to remove untracked files:
To remove all untracked files, the simple
way is to add all of them first and reset the repo, as below
git add --all
git reset --hard HEAD
A: User interactive approach:
git clean -i -fd
Remove .classpath [y/N]? N
Remove .gitignore [y/N]? N
Remove .project [y/N]? N
Remove .settings/ [y/N]? N
Remove src/com/arsdumpgenerator/inspector/ [y/N]? y
Remove src/com/arsdumpgenerator/manifest/ [y/N]? y
Remove src/com/arsdumpgenerator/s3/ [y/N]? y
Remove tst/com/arsdumpgenerator/manifest/ [y/N]? y
Remove tst/com/arsdumpgenerator/s3/ [y/N]? y
-i for interactive
-f for force
-d for directory
-x for ignored files (add if required)
Note: Add -n or --dry-run to just check what it will do.
A: Note: First navigate to the directory and checkout the branch you want to clean.
-i interactive mode and it will tell you what will be removed and you can choose an action from the list.
*
*To clean files only [Folders will not be listed and will not be cleaned]:
$ git clean -i
*To clean files and folders:
$ git clean -d -i
-d including directories.
If you choose c (clean) from the list, the untracked files/folders will be deleted, including files/folders that you messed up.
For instance: if you restructure a folder in your remote and pull the changes to your local computer, the files/folders initially created by others can end up both in the old folder and in the new, restructured one.
A: If untracked directory is a git repository of its own (e.g. submodule), you need to use -f twice:
git clean -d -f -f
A: To remove Untracked files :
git add .
git reset --hard HEAD
A: A lifehack for such a situation, which I just invented and tried (it works perfectly):
git add .
git reset --hard HEAD
Beware! Be sure to commit any needed changes (even in non-untracked files) before performing this.
A: For me, only the following worked:
git clean -ffdx
In all other cases, I was getting the message "Skipping Directory" for some subdirectories.
A: git clean -f -d -x $(git rev-parse --show-cdup) applies clean to the root directory, no matter where you call it within a repository directory tree. I use it all the time as it does not force you to leave the folder where you working now and allows to clean & commit right from the place where you are.
Be sure that flags -f, -d, -x match your needs:
-d
Remove untracked directories in addition to untracked files. If an
untracked directory is managed by a different Git repository, it is
not removed by default. Use -f option twice if you really want to
remove such a directory.
-f, --force
If the Git configuration variable clean.requireForce is not set to
false, git clean will refuse to delete files or directories unless
given -f, -n or -i. Git will refuse to delete directories with .git
sub directory or file unless a second -f is given. This affects
also git submodules where the storage area of the removed submodule
under .git/modules/ is not removed until -f is given twice.
-x
Don't use the standard ignore rules read from .gitignore (per
directory) and $GIT_DIR/info/exclude, but do still use the ignore
rules given with -e options. This allows removing all untracked
files, including build products. This can be used (possibly in
conjunction with git reset) to create a pristine working directory
to test a clean build.
There are other flags as well available, just check git clean --help.
A:
I had failed using the most popular answers here - git didn't delete
untracked files from the repository anyway. No idea why. This is my
super-simplified answer without SPECIAL GIT COMMANDS!
Mission: delete untracked files from the git repository:
*
*Move the files and folders elsewhere, out of your local project folder, for a while
*Delete all lines in .gitignore about these files and folders for the commit
*git add .
*git commit -m "Cleaning repository from untracked files"
*git push
All files and folders have been deleted from the repository.
Let's restore them on localhost if you need them:
*
*Move all the files and folders you moved out temporarily back into the local project folder
*Move all the lines about these files and folders back into .gitignore
*git add .
*git commit -m "Checking that the files do not appear in the git repository again"
*git push
You are done!
A: If you just want to delete the files listed as untracked by 'git status'
git stash save -u
git stash drop "stash@{0}"
I prefer this to 'git clean' because 'git clean' with -x will also delete files
ignored by git, so your next build will have to rebuild everything,
and you may lose your IDE settings too.
A: To know what will be deleted before actually deleting:
git clean -d -n
It will output something like:
Would remove sample.txt
To delete everything listed in the output of the previous command:
git clean -d -f
It will output something like:
Removing sample.txt
A: This is what I always use:
git clean -fdx
For a very large project you might want to run it a couple of times.
A: I like git stash push -u because you can undo them all with git stash pop.
EDIT: Also I found a way to show untracked files in a stash (e.g. git show stash@{0}^3) https://stackoverflow.com/a/12681856/338986
EDIT2: git stash save is deprecated in favor of push. Thanks @script-wolf.
A: git add --all, git stash and git stash drop: try these three commands in this order to remove all untracked files. Adding all the untracked files to git and stashing them moves them to the stash list, and dropping the top one, i.e. stash@{0}, removes the stashed changes from the stash list.
A: To remove the untracked files, first use this command to preview the files that will be affected by cleaning:
git clean -fdn
This will show you the list of files that will be deleted. Now to actually delete those files use this command:
git clean -fd
A: The suggested command for removing untracked files in the git docs is git clean.
git clean - Remove untracked files from the working tree
Suggested method: interactive mode, using git clean -i,
so we can have control over it. Let's see the remaining available options.
Available Options:
git clean
-d -f -i -n -q -e -x -X (can use either)
Explanation:
1. -d
Remove untracked directories in addition to untracked files. If an untracked directory is managed by a different Git repository,
it is not removed by default. Use -f option twice if you really want to remove such a directory.
2. -f, --force
If the Git configuration variable clean.requireForce is not set to false, git clean will refuse to run unless given -f, -n or
-i.
3. -i, --interactive
Show what would be done and clean files interactively. See “Interactive mode” for details.
4. -n, --dry-run
Don’t actually remove anything, just show what would be done.
5. -q, --quiet
Be quiet, only report errors, but not the files that are successfully removed.
6. -e , --exclude=
In addition to those found in .gitignore (per directory) and $GIT_DIR/info/exclude, also consider these patterns to be in the
set of the ignore rules in effect.
7. -x
Don’t use the standard ignore rules read from .gitignore (per directory) and $GIT_DIR/info/exclude, but do still use the ignore
rules given with -e options. This allows removing all untracked files, including build products. This can be used (possibly in
conjunction with git reset) to create a pristine working directory to test a clean build.
8. -X
Remove only files ignored by Git. This may be useful to rebuild everything from scratch, but keep manually created files.
A: Use git clean -f -d to make sure that directories are also removed.
*
*Don’t actually remove anything, just show what would be done.
git clean -n
or
git clean --dry-run
*Remove untracked directories in addition to untracked files. If an untracked directory is managed by a different Git repository, it is not removed by default. Use the -f option twice if you really want to remove such a directory.
git clean -fd
You can then check if your files are really gone with git status.
A: git clean -f to remove untracked files from working directory.
I have covered some basics here in my blog, git-intro-basic-commands
A: Be careful while running the `git clean` command.
Always use -n first
Always use -n before running the clean command as it will show you what files would get removed.
-d Normally, when no <pathspec> is specified, git clean will not recurse into untracked directories to avoid removing too much. Specify -d to have it recurse into such directories as well. If any paths are specified, -d is irrelevant; all untracked files matching the specified paths (with exceptions for nested git directories mentioned under --force) will be removed.
-f | --force
If the Git configuration variable clean.requireForce is not set to false, git clean will refuse to delete files or directories unless given -f or -i. Git will refuse to modify untracked nested git repositories (directories with a .git subdirectory) unless a second -f is given.
git clean -n -d
git clean -n -d -f
Now run without -n if output was what you intend to remove.
git clean -d -f
By default, git clean will only remove untracked files that are not ignored. Any file that matches a pattern in your .gitignore or other ignore files will not be removed. If you want to remove those files too, you can add a -x to the clean command.
git clean -f -d -x
There is also interactive mode available -i with the clean command
git clean -x -i
Alternatively
If you are not 100% sure that deleting your uncommitted work is safe, you could use stashing instead
git stash --all
Before you use stash --all note:
If the --all option is used, then the ignored files are stashed and cleaned in addition to the untracked files.
git stash push --keep-index
If the --keep-index option is used, all changes already added to the index are left intact. Your staged changes remain in your workspace, but at the same time, they are also saved into your stash.
Calling git stash without any arguments is equivalent to git stash push.
git stash push -m "name your stash" // before git stash save (deprecated)
Stashing, depending on the flags used, can clear your directory of unstaged/staged files by writing them to the stash storage. It gives flexibility to retrieve the files at any point in time using stash with apply or pop. Then, if you are fine with removing your stashed files, you could run:
git stash drop // or clean
To see full instruction on how to work with stash see this How to name and retrieve a stash by name in git?
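A compact version of that stash-based workflow (the stash message is just a placeholder, and the push syntax needs a reasonably recent git):
git stash push -u -m "wip before cleanup"   # stashes staged, unstaged and untracked files
git stash list                              # find it later, e.g. stash@{0}: On main: wip before cleanup
git stash pop                               # restore everything if the cleanup was a mistake, or:
git stash drop stash@{0}                    # discard it for good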
A: The normal git clean command didn't remove untracked files with my git version 2.9.0.windows.1.
$ git clean -fdx # doesn't remove untracked files
$ git clean -fdx * # Append star then it works!
A: git-clean is what you are looking for. It is used to remove untracked files from the working tree.
A: If you need to remove untracked files from a particular subdirectory:
git clean -f {dir_path}
And a combined way to delete untracked dirs/files and ignored files:
git clean -fxd {dir_path}
After this you will have only modified files left in git status.
A: If nothing else works, to simply remove all the changes listed by the "git status" command one can use the following combo:
git add -A && git commit -m temp && git reset --hard HEAD^
This will first stage all of your changes then create a temporary commit and then discard it.
A: usage: git clean [-d] [-f] [-i] [-n] [-q] [-e ] [-x | -X] [--] ...
-q, --quiet do not print names of files removed
-n, --dry-run dry run
-f, --force force
-i, --interactive interactive cleaning
-d remove whole directories
-e, --exclude <pattern>
add <pattern> to ignore rules
-x remove ignored files, too
-X remove only ignored files
A: use git reset HEAD <file> to unstage a file
A: This can be done using a shell script. I use the script below, which lists what will be removed and then lets me confirm the operation.
This is useful since I sometimes have patches or other files I'd like to check on before wiping everything away.
#!/bin/bash
readarray -t -d '' FILES < <(git ls-files -z --other --directory)
if [ "$FILES" = "" ]; then
echo "Nothing to clean!"
exit 0
fi
echo -e "Dirty files:\n"
printf ' %s\n' "${FILES[@]}"
DO_REMOVE=0
while true; do
echo ""
read -p "Remove ${#FILES[@]} files? [y/n]: " choice
case "$choice" in
y|Y )
DO_REMOVE=1
break ;;
n|N )
echo "Exiting!"
break ;;
* ) echo "Invalid input, expected [Y/y/N/n]"
continue ;;
esac
done
if [ "$DO_REMOVE" -eq 1 ];then
echo "Removing!"
for f in "${FILES[@]}"; do
rm -rfv -- "$f"
done
fi
A: I use this:
*
*git status
*copy the path of the file
*rm <path of file>
My project has a lot of generated files created by a giant ANT build script. Using git clean would create chaos.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7901"
} |
Q: Getting HTML from a page behind a login This question is a follow up to my previous question about getting the HTML from an ASPX page. I decided to try using the webclient object, but the problem is that I get the login page's HTML because login is required. I tried "logging in" using the webclient object:
WebClient ww = new WebClient();
ww.DownloadString("Login.aspx?UserName=&Password=");
string html = ww.DownloadString("Internal.aspx");
But I still get the login page all the time. I know that the username info is not stored in a cookie. I must be doing something wrong or leaving out an important part. Does anyone know what it could be?
A: Try setting the credentials property of the WebClient object
WebClient ww = new WebClient();
ww.Credentials = CredentialCache.DefaultCredentials;
ww.DownloadString("Login.aspx?UserName=&Password=");
string html = ww.DownloadString("Internal.aspx");
A: Well, does opening the page in a browser with "Login.aspx?UserName=&Password=" normally work?
Some pages may not allow login using data provided in the URL; the credentials may have to be entered in the login form on the page and then submitted.
A: The only other reason I can think of then is that the web page is intentionally blocking the login. If you have access to the code, take a look at the login system used to see if there's anything designed to block such logins.
A: Just pass valid login parameters to a given URI. Should help you out.
If you don't have login information you shouldn't be trying to circumvent it.
public static string HttpPost( string URI, string Parameters )
{
System.Net.WebRequest req = System.Net.WebRequest.Create( URI );
req.ContentType = "application/x-www-form-urlencoded";
req.Method = "POST";
byte[] bytes = System.Text.Encoding.ASCII.GetBytes( Parameters );
req.ContentLength = bytes.Length;
System.IO.Stream os = req.GetRequestStream();
os.Write( bytes, 0, bytes.Length );
os.Close();
System.Net.WebResponse resp = req.GetResponse();
if ( resp == null ) return null;
System.IO.StreamReader sr = new System.IO.StreamReader( resp.GetResponseStream() );
return sr.ReadToEnd().Trim();
}
A: Use Fiddler to see the HTTP requests and responses that happen when you do it manually through the browser.
A: @Fire Lancer: I asked myself that same question during my tests, so I checked, and it does work from a browser.
A: As the aspx page I was trying to get was in my own project, I could use the Server.Execute method. More details in my answer to my original question
A: Use Firefox with the LiveHttpHeaders plugin.
This will allow you to login via an actual browser and see EXACTLY what is being sent to the server. My first question would be to verify that it isn't expecting a POST from the form. The example URL you are loading is sending the info via a querystring GET.
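Once you have the exact form fields, the likely culprit is that WebClient does not persist cookies between requests, so even a successful login is forgotten by the time you request Internal.aspx. A minimal sketch with HttpWebRequest and a shared CookieContainer (field names and URLs are guesses; take the real ones from the captured POST, and it assumes using System.Net, System.IO and System.Text):
CookieContainer jar = new CookieContainer();

HttpWebRequest login = (HttpWebRequest)WebRequest.Create("http://example.com/Login.aspx");
login.Method = "POST";
login.ContentType = "application/x-www-form-urlencoded";
login.CookieContainer = jar;
byte[] body = Encoding.ASCII.GetBytes("UserName=foo&Password=bar");
login.ContentLength = body.Length;
using (Stream s = login.GetRequestStream())
    s.Write(body, 0, body.Length);
login.GetResponse().Close();   // the server writes its auth/session cookie into 'jar'

HttpWebRequest page = (HttpWebRequest)WebRequest.Create("http://example.com/Internal.aspx");
page.CookieContainer = jar;    // same cookies, so the request is authenticated
string html;
using (StreamReader r = new StreamReader(page.GetResponse().GetResponseStream()))
    html = r.ReadToEnd();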
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Debug.Assert vs. Specific Thrown Exceptions I've just started skimming 'Debugging MS .Net 2.0 Applications' by John Robbins, and have become confused by his evangelism for Debug.Assert(...).
He points out that well-implemented Asserts store the state, somewhat, of an error condition, e.g.:
Debug.Assert(i > 3, "i > 3", "This means I got a bad parameter");
Now, personally, it seems crazy to me that he so loves restating his test without an actual sensible 'business logic' comment, perhaps "i <= 3 must never happen because of the flobittyjam widgitification process".
So, I think I get Asserts as a kind-of low-level "Let's protect my assumptions" kind of thing... assuming that one feels this is a test one only needs to do in debug - i.e. you are protecting yourself against colleague and future programmers, and hoping that they actually test things.
But what I don't get is, he then goes on to say that you should use assertions in addition to normal error handling; now what I envisage is something like this:
Debug.Assert(i > 3, "i must be greater than 3 because of the flibbity widgit status");
if (i <= 3)
{
throw new ArgumentOutOfRangeException("i", "i must be > 3 because... i=" + i.ToString());
}
What have I gained by the Debug.Assert repetition of the error condition test? I think I'd get it if we were talking about debug-only double-checking of a very important calculation...
double interestAmount = loan.GetInterest();
Debug.Assert(debugInterestDoubleCheck(loan) == interestAmount, "Mismatch on interest calc");
...but I don't get it for parameter tests which are surely worth checking (in both DEBUG and Release builds)... or not. What am I missing?
A: I've thought about this long and hard when it comes to providing guidance on debug vs. assert with respect to testing concerns.
You should be able to test your class with erroneous input, bad state, invalid order of operations and any other conceivable error condition, and an assert should never trip. Each assert checks something that should always be true regardless of the inputs or computations performed.
Good rules of thumb I've arrived at:
*
*Asserts are not a replacement for robust code that functions correctly independent of configuration. They are complementary.
*Asserts should never be tripped during a unit test run, even when feeding in invalid values or testing error conditions. The code should handle these conditions without an assert occurring.
*If an assert trips (either in a unit test or during testing), the class is bugged.
For all other errors -- typically down to environment (network connection lost) or misuse (caller passed a null value) -- it's much nicer and more understandable to use hard checks & exceptions. If an exception occurs, the caller knows it's likely their fault. If an assert occurs, the caller knows it's likely a bug in the code where the assert is located.
Regarding duplication: I agree. I don't see why you would replicate the validation with a Debug.Assert AND an exception check. Not only does it add some noise to the code and muddy the waters regarding who is at fault, but it is also a form of repetition.
A: Assertions are not for parameter checking. Parameter checking should always be done (and precisely according to what pre-conditions are specified in your documentation and/or specification), and the ArgumentOutOfRangeException thrown as necessary.
Assertions are for testing for "impossible" situations, i.e., things that you (in your program logic) assume are true. The assertions are there to tell you if these assumptions are broken for any reason.
Hope this helps!
A: I use explicit checks that throw exceptions on public and protected methods and assertions on private methods.
Usually, the explicit checks guard the private methods from seeing incorrect values anyway. So really, the assert is checking for a condition that should be impossible. If an assert does fire, it tells me that there is a defect in the validation logic contained within one of the public routines on the class.
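As a sketch of that split (the Order queue is hypothetical, and it assumes using System and System.Diagnostics):
public void Enqueue(Order order)
{
    // Public surface: hard check, so callers get a meaningful exception.
    if (order == null)
        throw new ArgumentNullException("order");
    Insert(order);
}

private void Insert(Order order)
{
    // Private surface: the public check should make null impossible here,
    // so a firing assert means the validation above is broken.
    Debug.Assert(order != null, "Insert called with a null order");
    // ...
}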
A: An exception can be caught and swallowed making the error invisible to testing. That can't happen with Debug.Assert.
No one should ever have a catch handler that catches all exceptions, but people do it anyway, and sometimes it is unavoidable. If your code is invoked from COM, the interop layer catches all exceptions and turns them into COM error codes, meaning you won't see your unhandled exceptions. Asserts don't suffer from this.
Also when the exception would be unhandled, a still better practice is to take a mini-dump. One area where VB is more powerful than C# is that you can use an exception filter to snap a mini-dump when the exception is in flight, and leave the rest of the exception handling unchanged. Gregg Miskelly's blog post on exception filter inject provides a useful way to do this from c#.
One other note on asserts ... they interact poorly with unit testing the error conditions in your code. It is worthwhile to have a wrapper to turn off the asserts for your unit tests.
A: IMO it's just a loss of development time. A properly implemented exception gives you a clear picture of what happened. I have seen too many applications showing obscure "Assertion failed: i < 10" errors. I see assertions as a temporary solution. In my opinion, no assertions should be in the final version of a program. In my practice I use assertions for quick and dirty checks. The final version of the code should take erroneous situations into account and behave accordingly. If something bad happens you have 2 choices: handle it or leave it. A function should throw an exception with a meaningful description if wrong parameters are passed in. I see no point in duplicating the validation logic.
A: There is a communication aspect to asserts vs exception throwing.
Let's say we have a User class with a Name property and a ToString method.
If ToString is implemented like this:
public string ToString()
{
Debug.Assert(Name != null);
return Name;
}
It says that Name should never null and there is a bug in the User class if it is.
If ToString is implement like this:
public string ToString()
{
if ( Name == null )
{
throw new InvalidOperationException("Name is null");
}
return Name;
}
It says that the caller is using ToString incorrectly if Name is null and should check that before calling.
The implementation with both
public string ToString()
{
Debug.Assert(Name != null);
if ( Name == null )
{
throw new InvalidOperationException("Name is null");
}
return Name;
}
says that if Name is null there is a bug in the User class, but we want to handle it anyway. (The user doesn't need to check Name before calling.) I think this is the kind of safety Robbins was recommending.
A: Example of a good use of Assert:
Debug.Assert(flibbles.count() < 1000000, "too many flibbles"); // indicate something is awry
log.warning("flibble count reached " + flibbles.count()); // log in production as early warning
I personally think that Assert should only be used when you know something is outside desirable limits, but you can be sure it's reasonably safe to continue. In all other circumstances (feel free point out circumstances I haven't thought of) use exceptions to fail hard and fast.
The key tradeoff for me is whether you want to bring down a live/production system with an Exception to avoid corruption and make troubleshooting easier, or whether you have encountered a situation that should never be allowed to continue unnoticed in test/debug versions but could be allowed to continue in production (logging a warning of course).
cf. http://c2.com/cgi/wiki?FailFast
copied and modified from java question: Exception Vs Assertion
A: Here are my 2 cents.
I think that the best way is to use both assertions and exceptions. The main difference between the two methods, imho, is that Assert statements can be removed easily from the application text (defines, conditional attributes...), while thrown Exceptions depend (typically) on conditional code which is harder to remove (multiline sections with preprocessor conditionals).
Every application exception shall be handled correctly, while assertions shall be satisfied only during the algorithm development and testing.
If you pass a null object reference as a routine parameter, and you use this value, you get a null pointer exception. Indeed: why should you write an assertion? It's a waste of time in this case.
But what about private class members used in class routines? When these values are set somewhere, it is better to check with an assertion whether a null value is being set. That's because when you later use the member, you get a null pointer exception but you don't know how the value was set. That forces a restart of the program, breaking on every entry point used to set the private member.
Exceptions are more useful, but they can be (imho) very heavy to manage, and there is the possibility of using too many exceptions. They also require additional checks, perhaps undesired when optimizing the code.
Personally I use exceptions only whenever the code requires a deep catch control (catch statements are very low in the call stack) or whenever the function parameters are not hardcoded in the code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: C variable and constant value comparison not matching If I have:
signed char * p;
and I do a comparison:
if ( *p == 0xFF )
break;
it will never catch 0XFF, but if I replace it with -1 it will:
if ( *p == (signed char)0xFF )
break;
How can this happen? Is it something with the sign flag? I thought that 0xFF == -1 == 255.
A: Integer literals have signed int type. Since 0xFF is a signed int, the compiler converts *p to a signed int and then does the comparison.
When *p is -1, which is then converted from a signed char to a signed int, it is still -1 which has a representation of 0xFFFFFFFF, which is not equal to 0xFF.
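A small demo of the promotion (assuming a two's-complement machine, where storing 0xFF in a signed char yields -1):
#include <stdio.h>

int main(void)
{
    signed char c = 0xFF;                    /* stores -1 on two's complement */
    printf("%d\n", c == 0xFF);               /* 0: c promotes to int -1, literal is 255 */
    printf("%d\n", c == (signed char)0xFF);  /* 1: both sides promote to int -1 */
    return 0;
}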
A: The value 0xFF is a signed int value. C will promote the *p to an int when doing the comparison, so the first if statement is equivalent to:
if( -1 == 255 ) break;
which is of course false. By using (signed char)0xFF the statement is equivalent to:
if( -1 == -1 ) break;
which works as you expect. The key point here is that the comparison is done with int types instead of signed char types.
A: It casts to an int for the first comparison, since 0xFF is considered an int: your signed char ranges from -128 to 127, while 0xFF is still 255.
In the second case you're telling it that 0xFF is really a signed char, not an int.
A: 0xff will be seen as an integer constant, with the value 255. You should always pay attention to this kind of comparison between different types. If you want to be sure that the compiler will generate the right code, you should use the typecast:
if( *p == (signed char)0xFF ) break;
Anyway, beware that the next statement will not work the same way:
if( (int)*p == 0xFF ) break;
Also, maybe it would be a better idea to avoid signed chars, or, it you must use signed chars, to compare them with signed values such as -1 in this case:
if( *p == -1 ) break;
0xff == -1 holds only if both values are first assigned to some char (or unsigned char) variables:
char a=0xff;
char b=-1;
if(a==b) break;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: The Best Way to shred XML data into SQL Server database columns What is the best way to shred XML data into various database columns? So far I have mainly been using the nodes and value functions like so:
INSERT INTO some_table (column1, column2, column3)
SELECT
Rows.n.value('(@column1)[1]', 'varchar(20)'),
Rows.n.value('(@column2)[1]', 'nvarchar(100)'),
Rows.n.value('(@column3)[1]', 'int')
FROM @xml.nodes('//Rows') Rows(n)
However I find that this is getting very slow for even moderate size xml data.
A: Stumbled across this question whilst having a very similar problem, I'd been running a query processing a 7.5MB XML file (~approx 10,000 nodes) for around 3.5~4 hours before finally giving up.
However, after a little more research I found that having typed the XML using a schema and created an XML Index (I'd bulk inserted into a table) the same query completed in ~ 0.04ms.
How's that for a performance improvement!
Code to create a schema:
IF EXISTS ( SELECT * FROM sys.xml_schema_collections where [name] = 'MyXmlSchema')
DROP XML SCHEMA COLLECTION [MyXmlSchema]
GO
DECLARE @MySchema XML
SET @MySchema =
(
SELECT * FROM OPENROWSET
(
BULK 'C:\Path\To\Schema\MySchema.xsd', SINGLE_CLOB
) AS xmlData
)
CREATE XML SCHEMA COLLECTION [MyXmlSchema] AS @MySchema
GO
Code to create the table with a typed XML column:
CREATE TABLE [dbo].[XmlFiles] (
[Id] [uniqueidentifier] NOT NULL,
-- Data from CV element
[Data] xml(CONTENT dbo.[MyXmlSchema]) NOT NULL,
CONSTRAINT [PK_XmlFiles] PRIMARY KEY NONCLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Code to create Index
CREATE PRIMARY XML INDEX PXML_Data
ON [dbo].[XmlFiles] (Data)
There are a few things to bear in mind though. SQL Server's implementation of Schema doesn't support xsd:include. This means that if you have a schema which references other schema, you'll have to copy all of these into a single schema and add that.
Also I would get an error:
XQuery [dbo.XmlFiles.Data.value()]: Cannot implicitly atomize or apply 'fn:data()' to complex content elements, found type 'xs:anyType' within inferred type 'element({http://www.mynamespace.fake/schemas}:SequenceNumber,xs:anyType) ?'.
if I tried to navigate above the node I had selected with the nodes function. E.g.
SELECT
     C.value('CVElementId[1]', 'INT') AS [CVElementId]
    ,C.value('../SequenceNumber[1]', 'INT') AS [Level]
FROM
[dbo].[XmlFiles]
CROSS APPLY
[Data].nodes('/CVSet/Level/CVElement') AS T(C)
Found that the best way to handle this was to use the OUTER APPLY to in effect perform an "outer join" on the XML.
SELECT
     C.value('CVElementId[1]', 'INT') AS [CVElementId]
    ,B.value('SequenceNumber[1]', 'INT') AS [Level]
FROM
[dbo].[XmlFiles]
CROSS APPLY
[Data].nodes('/CVSet/Level') AS T(B)
OUTER APPLY
B.nodes ('CVElement') AS S(C)
Hope that that helps someone as that's pretty much been my day.
A: In my case I'm running SQL 2005 SP2 (9.0).
The only thing that helped was adding OPTION ( OPTIMIZE FOR ( @your_xml_var = NULL ) ).
Explanation is on the link below.
Example:
INSERT INTO @tbl (Tbl_ID, Name, Value, ParamData)
SELECT 1,
tbl.cols.value('name[1]', 'nvarchar(255)'),
tbl.cols.value('value[1]', 'nvarchar(255)'),
tbl.cols.query('./paramdata[1]')
FROM @xml.nodes('//root') as tbl(cols) OPTION ( OPTIMIZE FOR ( @xml = NULL ) )
https://connect.microsoft.com/SQLServer/feedback/details/562092/an-insert-statement-using-xml-nodes-is-very-very-very-slow-in-sql2008-sp1
A: I'm not sure what is the best method. I used OPENXML construction:
DECLARE @DocHandle int
EXEC sp_xml_preparedocument @DocHandle OUTPUT, @XmlDocument

INSERT INTO Test
SELECT Id, Data
FROM OPENXML (@DocHandle, '/Root/blah', 2)
WITH (Id int '@ID',
      Data varchar(10) '@DATA')

EXEC sp_xml_removedocument @DocHandle
To speed it up, you can create XML indices. You can set index specifically for value function performance optimization. Also you can use typed xml columns, which performs better.
A: We had a similar issue here. Our DBA (SP, you the man) took a look at my code, made a little tweak to the syntax, and we got the speed we had been expecting. It was unusual because my select from XML was plenty fast, but the insert was way slow. So try this syntax instead:
INSERT INTO some_table (column1, column2, column3)
SELECT
Rows.n.value(N'(@column1/text())[1]', 'varchar(20)'),
Rows.n.value(N'(@column2/text())[1]', 'nvarchar(100)'),
Rows.n.value(N'(@column3/text())[1]', 'int')
FROM @xml.nodes('//Rows') Rows(n)
So specifying the text() parameter really seems to make a difference in performance. Took our insert of 2K rows from 'I must have written that wrong - let me stop it' to about 3 seconds. Which was 2x faster than the raw insert statements we had been running through the connection.
A: I wouldn't claim this is the "best" solution, but I've written a generic SQL CLR procedure for this exact purpose - it takes a "tabular" Xml structure (such as that returned by FOR XML RAW) and outputs a resultset.
It does not require any customization / knowledge of the structure of the "table" in the Xml, and turns out to be extremely fast / efficient (although this wasn't a design goal). I just shredded a 25MB (untyped) xml variable in under 20 seconds, returning 25,000 rows of a pretty wide table.
Hope this helps someone:
http://architectshack.com/ClrXmlShredder.ashx
A: This isn't an answer, more an addition to this question - I have just come across the same problem and I can give figures as edg asks for in the comment.
My test has xml which results in 244 records being inserted - so 244 nodes.
The code that I am rewriting takes on average 0.4 seconds to run (10 test runs, spread from 0.344 secs to 0.56 secs). Performance is not the main reason the code is being rewritten, but the new code needs to perform as well or better. The old code loops over the xml nodes, calling an sp to insert once per loop.
The new code is pretty much just a single sp; pass the xml in; shred it.
Tests with the new code switched in show the new sp takes on average 3.7 seconds - almost 10 times slower.
My query is in the form posted in this question;
INSERT INTO some_table (column1, column2, column3)
SELECT
Rows.n.value('(@column1)[1]', 'varchar(20)'),
Rows.n.value('(@column2)[1]', 'nvarchar(100)'),
Rows.n.value('(@column3)[1]', 'int')
FROM @xml.nodes('//Rows') Rows(n)
The execution plan appears to show that for each column, sql server is doing a separate "Table Valued Function [XMLReader]" returning all 244 rows, joining all back up with Nested Loops(Inner Join). So In my case where I am shredding from/ inserting into about 30 columns, this appears to happen separately 30 times.
I am going to have to dump this code, I don't think any optimisation is going to get over this method being inherently slow. I am going to try the sp_xml_preparedocument/OPENXML method and see if the performance is better for that. If anyone comes across this question from a web search (as I did) I would highly advise you to do some performance testing before using this type of shredding in SQL Server
A: There is an XML Bulk load COM object (.NET Example)
From MSDN:
You can insert XML data into a SQL Server database by using an INSERT statement and the OPENXML function; however, the Bulk Load utility provides better performance when you need to insert large amounts of XML data.
A: My current solution for large XML sets (> 500 nodes) is to use SQL Bulk Copy (System.Data.SqlClient.SqlBulkCopy) by using a DataSet to load the XML into memory and then pass the table to SqlBulkCopy (defining a XML schema helps).
Obviously there are pitfalls, such as needlessly using a DataSet and loading the whole document into memory first. I would like to go further in the future and implement my own IDataReader to bypass the DataSet method, but currently the DataSet is "good enough" for the job.
Basically I never found a solution to my original question regarding the slow performance for that type of XML shredding. It could be slow due to the typed xml queries being inherently slow, or something to do with transactions and the SQL Server log. I guess the typed xml functions were never designed for operating on non-trivial node sizes.
XML Bulk Load: I tried this and it was fast but I had trouble getting the COM dll to work under 64bit environments and I generally try to avoid COM dlls that no longer appear to be supported.
sp_xml_preparedocument/OPENXML: I never went down this road so would be interested to see how it performs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: Is there a good, free WYSIWYG editor for creating HTML using a Django template? I'm interested to get a free, WYSIWYG HTML editor that is compatible with Django template. Any ideas?
Thanks LainMH.
But I'm afraid fckeditor is used in a web app, for the purpose of editing
HTML. What I want is an editor that allows me to write HTML that is
Django compatible.
Hope this clarifies the issue.
A: http://www.fckeditor.net/ ?
EDIT: Just found this: http://blog.newt.cz/blog/integration-fckeditor-django/
A: vim has syntax highlighting for Django template tags, works for me ^_^
A: I don't think any of the HTML based editors will work with the django templates, but rather the editable content areas within templates.
The process for creating / editing Django templates is really to create a standard HTML page first (with CSS & images etc), then make that into a base template. Then you create other templates that extend the base one.
The type or program typically used for editing the templates would be an IDE, although I prefer the lighter weight Textmate bundle for editing the templates (and Django python code). If you have an IDE, just google for a Python pluggin for Django.
What will probably help most is having the Django templates page open, or using a Django cheetsheet.
A: According to brief Googling (no personal experience with this), Aptana now supports Python development via Pydev. Pydev again can be configured to work with Django.
Thus I would expect Aptana to be usable with Django templates aswell, though I have no complete guide these links should be helpful :
*
*http://www.aptana.com/python
*http://pydev.blogspot.com/2006/09/configuring-pydev-to-work-with-django.html
Hope this helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: DIV's vs. Tables or CSS vs. Being Stupid I know that tables are for tabular data, but it's so tempting to use them for layout. I can handle DIV's to get a three column layout, but when you've got 4 nested DIV's, it gets tricky.
Is there a tutorial/reference out there to persuade me to use DIV's for layout?
I want to use DIV's, but I refuse to spend an hour to position my DIV/SPAN where I want it.
@GaryF: Blueprint CSS has to be the CSS's best kept secret.
Great tool - Blueprint Grid CSS Generator.
A: Why tables for layout is stupid: problems defined, solutions offered.
A: In my opinion, the bias should be in favour of CSS over IE6 - i.e. unless there's an insanely good reason (e.g. your site is only targetted at people using IE6, which would be weird), it's better to 'alienate' people using IE6 rather than people with poor vision and/or automated user agents. Usage of IE6 is decreasing; the latter group is increasing in number. Even if your site doesn't look perfect in IE6, it will probably be easy for those users to read it than a table-based layout will for those who can't see it.
This is a very general question, so it's difficult to answer with specifics. The two books that are excellent resources are:
*
*Bulletproof Web Design, Dan Cederholm
*CSS Mastery, Andy Budd
If you only have to spend an hour designing your overall site layout, that's not bad going.
A: CSS may not be a religion, but it is how browsers interpret HTML for layout. Like it or not, all modern browsers use (some version) of the W3C box model. To continue to rely on tables is continue to rely on a methodology that is just plain wrong in the eyes of the people who design web rendering technology.
I know CSS can seem awfully complicated at times, but I believe it is a necessity in this day and age (trust me, your clients are going to want it).
If you don't feel comfortable taking the time really learn CSS (so it takes you seconds or minutes to position elements...not an hour), then you need to pass the layout work on to someone who knows really knows the front-end.
Yes, there are a lot of problems with the current browser implementations of CSS, but nothing so drastic that you should ever feel the need to return to table based layout. Just sit down and take the time to learn it, like you would any other language or framework.
The best online reference resource I've found is this one:
http://reference.sitepoint.com/css
But it might not hurt to look at a book like Designing With Web Standards which goes a long way in helping you to understand why this stuff is important.
A: I was also thinking Blueprint was great until I saw YAML (Yet Another Multicolumn Layout). There is an online builder tool which is fantastic. I can get a cool looking multicolumn layout within 5 mins.
A: There's the Yahoo Grid CSS which can do all sorts of things.
But remember: CSS IS NOT A RELIGION. If you save hours by using tables instead of css, do so.
One of the corner cases I could never make my mind up about is forms. I'd love to do it in css, but it's just so much more complicated than tables.
You could even argue that forms are tables, in that they have headers (labels) and data (input fields).
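That said, for simple forms the usual CSS replacement for the two-column table is a float-based label/field pattern; a minimal sketch:
<style type="text/css">
  form.f label { float: left; width: 8em; text-align: right; margin-right: 0.5em; }
  form.f div   { clear: both; margin-bottom: 0.5em; }
</style>
<form class="f" action="#">
  <div><label for="name">Name</label><input id="name" type="text" /></div>
  <div><label for="mail">Email</label><input id="mail" type="text" /></div>
</form>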
A: After a while you don't even think about it. Using divs with CSS seems like the easier option imo. Plus, you have more freedom when using frameworks such as jQuery. I couldn't imagine doing some of the cool jQuery stuff without using css or divs. If you use tables for style and layout I feel like you miss out on a lot of new technologies and stay stuck in the 90's.
A: In the UK and in US there is a legal requirement for favouring CSS layouts over Tables. Both Section 508 (US) and the Disability Discrimination Act (UK) cover accessibility standards for users with limited vision.
In the UK the legislation extends so far as to actually make it illegal to commercially produce a site that impedes a partially sighted user, in the same way that it is now illegal to have a shop with a step at the entrance and no way for a wheelchair user to get in (admittedly, there have been no prosecutions over website accessibility yet). However, I would always go with CSS, as it makes your site design much easier to maintain in the long term.
Investing time in learning CSS (I used W3C schools and .Net Magazine http://www.netmag.co.uk) will pay off.
A: This may be unhelpful, but I somehow don't understand all these problems related to CSS. If a newspaper designer tried to embed a movie in the ad page, everybody would agree that he's a bit crazy. But still those same people pine after three-column layouts in HTML. HTML is just not apt to handle this kind of layout well at the moment. Furthermore, multi-column layouts are generally not well-suited for reading on computer monitors. Aren't there enough viable alternatives?
And by the way, even tables don't offer a good way of implementing a fluent column layout so this is no reason at all to resort to such hacks. Assuming a halfway modern browser (i.e. > MSIE 6), tables don't offer any advantages over clean HTML + CSS that I know of.
A: I would just use the table.
In my experience, using a table for layout will work the same in all browsers and the CSS will not (especially if you're trying to support IE6). It's just not worth the hours and hours of coding to get a layout to work in CSS when it can be done in 10 minutes using a table.
The other advantage to using tables is that your layout can very easily dynamically size itself to content. Trying to get that done with CSS is a huge nightmare.
A: I find there are lots of limitations to CSS that just seem to hint that the specification designers don't make websites for a living.
Use HTML tables if you can't do it easy in CSS.
Having said that, some of the frameworks do help and it always nicer to do in CSS if you can manage it.
A: You might be able to find some inspiration here: http://blog.html.it/layoutgala/
A: A List Apart is a great reference for using semantic HTML, the Holy Grail article is probably one of the best examples. Also, check out CSS Zen Garden for some inspiration on the topic or read Dave Shea's excellent book "The Zen of CSS Design."
A: You use CSS for layout because not only is it semantically correct but because tables have multiple drawbacks.
Tables are horrible for accessibility because they break almost all screen readers, which in turn gives the visually impaired worthless information because of the way the tables are read.
They also render much slower than their CSS counterparts. Tables have to be drawn twice, once for the layout, and again for the content. This can mean that if you have a remote image or two on a server with a slow connection that your ENTIRE LAYOUT will not render.
Would you use an array to store a dictionary when you have a hashmap? No. And you shouldn't use a table when there's something out there which works better.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54"
} |
Q: How to be notified of file/directory change in C/C++, ideally using POSIX The subject says it all - normally easy and cross platform way is to poll, intelligently. But every OS has some means to notify without polling. Is it possible in a reasonably cross platform way? (I only really care about Windows and Linux, but I use mac, so I thought posix may help?)
A: Linux users can use inotify
inotify is a Linux kernel subsystem
that provides file system event
notification.
Some goodies for Windows fellows:
*
*File Change Notification on MSDN
*"When Folders Change" article
*File System Notification on Change
A: The Qt library has a QFileSystemWatcher class which provides cross platform notifications when a file changes. Even if you are not using Qt, because the source is available you could have a look at it as a sample for your own implementation. Qt has separate implementations for Windows, Linux and Mac.
A: There's File System Events API as of Leopard.
A: I don't think POSIX itself has facilities for that. The closest to cross-platform I've seen is FAM, which seems to work for Linux, BSD, and Irix, but I'm not how easy it would be to port it to Windows and MacOS.
A: I've actually built this system before for use in a commercial C++ code base- as long as you don't need every weird thing under the sun, the Windows and POSIX systems have a lot of overlap you can abstract.
POSIX: Use inotify- it is a whole system literally built for this job
Windows: Use "change events". You have to build more of the glue and reporting yourself (all the APIs you need are available, there's just not the 1-stop-shopping inotify gives you).
The common things you can detect in your "notification thread" for forwarding events are:
1) Basically any invasive operation boost::filesystem supports, with the (possible) exception of modifying permissions. These are things like moving, creating, deleting, copying folders and files.
2) Reads and writes to files (esp. writes). Be aware that if you're using async I/O the notifications can show up out-of-order.
3) When a new volume comes in, such as somebody connecting a flash drive.
inotify especially gives you an insane level of fine-grained control, Windows less so. With inotify you can literally monitor everything the filesystem is doing in near-real time if you really want to. I know #3 is possible with both without polling, but be aware that it can be really tricky to get it working correctly- on either system.
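For the POSIX side, a minimal sketch of the inotify approach (Linux-specific, error handling trimmed):
#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
    char buf[4096];
    int fd = inotify_init();  /* or inotify_init1(IN_NONBLOCK) for polling loops */
    /* Watch the current directory for creates, deletes and modifications. */
    inotify_add_watch(fd, ".", IN_CREATE | IN_DELETE | IN_MODIFY);

    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);  /* blocks until events arrive */
        if (len <= 0) break;
        for (char *p = buf; p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *)p;
            printf("mask=0x%x name=%s\n", ev->mask, ev->len ? ev->name : "(dir)");
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
    close(fd);
    return 0;
}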
A: I believe OS X now has appropriate hooks/callbacks because they were needed for Spotlight indexing.
On linux you'll have the additional trouble that there are multiple file systems commonly used. If you need the functionality for only a limited amount of files/directories, I'd try about actively looking for modifications at regular intervals.
A: libevent or libev seem to be what you want, though I haven't used them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: IIS uses proxy for webservice request. How to stop this? I have a problem with a little .Net web application which uses the Amazon webservice. With the integrated Visual Studio web server everything works fine. But after deploying it to the IIS on the same computer i get the following error message:
A connection attempt failed because the connected party did not properly
respond after a period of time, or the established connection failed
because the connected host has failed to respond 192.168.123.254:8080
(The original message is in German; in short, it means "can't connect to
192.168.123.254:8080".)
How do I prevent the IIS from using a proxy?
I think it has something to do with policy settings for the Internet Explorer. An "old" AD user has this setting, but a newly created user does not. I checked all the group policy settings and nowhere is a proxy defined.
The web server is running in the context of the anonymous internet user account on the local computer. Do local users get settings from the AD? If so, how can I change that setting if I can't log in as this user?
What can I do, where else i could check?
A: Proxy use can be configured in the web.config.
The system.net/defaultProxy element will let you specify whether a proxy is used by default or provide a bypass list.
For more info see: http://msdn.microsoft.com/en-us/library/kd3cf2ex.aspx
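For example, on .NET 2.0 and later, a minimal web.config fragment that turns the proxy off for the application looks like this:
<configuration>
  <system.net>
    <!-- Ignore any machine/IE-level proxy for outgoing requests -->
    <defaultProxy enabled="false" />
  </system.net>
</configuration>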
A: Some group policy settings that may be relevant:
Root \ Local computer policy \ Computer configuration \ Administrative templates \ Windows components \ Internet Explorer \ Make proxy settings per-machine -- by default this is disabled, meaning individual users on the server have customised proxy settings.
Root \ Local computer policy \ User configuration \ Windows settings \ Internet Explorer maintenance \ Connection. In "Automatic Browser Configuration" the value "Automatically detect configuration settings" -- you can set this off to prevent the process trying to detect proxy settings automatically.
That said, using the defaultProxy setting as shown in hwiechers' answer would seem to be a better way of doing it, not affecting other processes or users on the machine.
A: IIS is a destination. The configuration issue is in whatever is doing the call (acting like a client). If you are using the built-in .Net communication methods you will need to make the adjustment inside of ... Wait for it ... Internet Explorer.
Yep! That little bugger has bitten me more times than I care to remember. I used to have to switch the proxy server settings in IE 5 or 6 times a day as I switched between internal and external servers. Newer versions of IE have a much better "don't use proxy server" set of rules.
-- Clarification --
As it seems that the user ID used by IIS is using this setting, you'll probably need to search the registry for where the proxy information is stored for each user ID and/or the default.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do you resolve .Net namespace conflicts with the 'using' keyword? Here's the problem, you include multiple assemblies and add 'using namespaceX' at the top of your code file.
Now you want to create a class or use a symbol which is defined in multiple namespaces,
e.g. System.Windows.Controls.Image & System.Drawing.Image
Now unless you use the fully qualified name, there will be a crib/build error due to ambiguity inspite of the right 'using' declarations at the top. What is the way out here?
(Another knowledge base post.. I found the answer after about 10 minutes of searching because I didn't know the right keyword to search for)
A: This page has a very good writeup on namespaces and the using-statement:
http://www.blackwasp.co.uk/Namespaces.aspx
You want to read the part about "Creating Aliases" that will allow you to make an alias for one or both of the name spaces and reference them with that like this:
using ControlImage = System.Windows.Controls;
using System.Drawing;

// The alias names the namespace, so ControlImage.Image resolves unambiguously:
ControlImage.Image myImage = new ControlImage.Image();
myImage.Width = 200;
A: Use alias
using System.Windows.Controls;
using Drawing = System.Drawing;
...
Image img = ... //System.Windows.Controls.Image
Drawing.Image img2 = ... //System.Drawing.Image
C# using directive
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Breakpoints in core .NET runtime? I have a third party library that internally constructs and uses the SqlConnection class. I can inherit from the class, but it has a ton of overloads, and so far I have been unable to find the right one. What I'd like is to tack on a parameter to the connection string being used.
Is there a way for me to put a breakpoint in the .NET library core itself? Specifically in the constructors of the SqlConnection class, so that I can look at the stack trace and see where it is actually being constructed?
Barring that, is there some other way I can do this?
Specifically, what I want to do is to tack on the Application Name parameter, so that our application is more easily identified on the server when looking at connections.
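(For reference, the change itself is trivial once I can intercept the string; a sketch, where originalConnectionString stands for whatever the library builds:)
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(originalConnectionString);
builder.ApplicationName = "MyApplication";   // shows up in sp_who2 / Activity Monitor
SqlConnection conn = new SqlConnection(builder.ConnectionString);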
Edit: Well, it appears I need more help. I think I've enabled everything related to symbol server support, and I've noticed that the directory I configured has filled up with directories that contain .pdb files. Still, I can't get the actual source to the SqlConnection class to become available.
Is there some definite guide to how to do this successfully?
A: You can download .NET source code and set break point right in .NET FW source code.
You can use NetMassDownloader to grab .NET sources quickly.
A: According to this article you can download the source code for the .NET framework and then debug it using visual studio:
http://weblogs.asp.net/scottgu/archive/2007/10/03/releasing-the-source-code
A: I almost forgot to mention Deblector - it's a Reflector plugin, that allows you to debug almost any .net app without source codes :)
A: While source debugging is defintely better, you don't need pdbs or source for the VS debugger to set a bp on the function you want.
Make sure you go to Tools/Options/Debugger and turn off the option called "Just My Code". Since the framework is not 'your code' the debugger unhelpfully prevents you from setting breakpoints there.
Next you need the full name of the method as it appears in the metadata. This includes any namespaces it is nested in. I'd recommend ILDasm or Reflector if you need to find the name.
On the breakpoints window in the upper left corner is a "new bp" menu button. One of the choices is to set a bp on function name. When the dialog comes up uncheck having intellisense check the name since you don't have a project. I hope that helps.
A: And if you can't use source level debugging with the .Net framework source code Microsoft supplied, you could try a different debugger. Like mdbg or even windbg.
edit
This explains getting the released parts of .Net framework and how to set breakpoints in great detail. The NetMassDownloader will give you everything (pdb and source) in one download. But not all source code of the .Net framework is available. If your SqlConnection is not you can always use IL debuggers like the ones I mentioned. And don't forget Lutz's Reflector to give you a look at the source code anyway.
A: OK, if you want definitive guide, here it is:
Configuring Visual Studio to Debug .NET Framework Source Code
If you want some help, go ahead and tell use which steps did you perform?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Quick and dirty way to profile your code What method do you use when you want to get performance data about specific code paths?
A: Well, I have two code snippets. In pseudocode they look like this (it's a simplified version; I'm actually using QueryPerformanceFrequency):
First snippet:
Timer timer = new Timer
timer.Start
Second snippet:
timer.Stop
show elapsed time
A bit of hot-keys kung fu, and I can say how much time this piece of code stole from my CPU.
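Fleshed out, the two snippets look roughly like this (Windows-specific, assuming <windows.h>):
#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   // ticks per second
    QueryPerformanceCounter(&t0);

    // ... code under test ...

    QueryPerformanceCounter(&t1);
    double ms = (t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart;
    printf("elapsed: %.3f ms\n", ms);
    return 0;
}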
A: I do my profiles by creating two classes: cProfile and cProfileManager.
cProfileManager will hold all the data that resulted from cProfile.
cProfile with have the following requirements:
*
*cProfile has a constructor which initializes the current time.
*cProfile has a deconstructor which sends the total time the class was alive to cProfileManager
To use these profile classes, I first make an instance of cProfileManager. Then, I put the code block, which I want to profile, inside curly braces. Inside the curly braces, I create a cProfile instance. When the code block ends, cProfile will send the time it took for the block of code to finish to cProfileManager.
Example Code
Here's an example of the code (simplified):
class cProfile
{
public:
    cProfile()
    {
        TimeStart = GetTime();
    }
    ~cProfile()
    {
        ProfileManager->AddProfile(GetTime() - TimeStart);
    }
private:
    float TimeStart;
};
To use cProfile, I would do something like this:
int main()
{
printf("Start test");
{
cProfile Profile;
Calculate();
}
ProfileManager->OutputData();
}
or this:
void foobar()
{
cProfile ProfileFoobar;
foo();
{
cProfile ProfileBarCheck;
while (bar())
{
cProfile ProfileSpam;
spam();
}
}
}
Technical Note
This code is actually an abuse of the way scoping, constructors and destructors work in C++. cProfile exists only inside the block scope (the code block we want to test). Once the program leaves the block scope, cProfile records the result.
Additional Enhancements
*
*You can add a string parameter to the constructor so you can do something like this:
cProfile Profile("Profile for complicated calculation");
*You can use a macro to make the code look cleaner (be careful not to abuse this. Unlike our other abuses of the language, macros can be dangerous when used).
Example:
#define START_PROFILE cProfile Profile(); {
#define END_PROFILE }
*cProfileManager can check how many times a block of code is called. But you would need an identifier for the block of code. The first enhancement can help identify the block. This can be useful in cases where the code you want to profile is inside a loop (like the second example above). You can also add the average, fastest and longest execution time the code block took.
*Don't forget to add a check to skip profiling if you are in debug mode.
A: Note, the following is all written specifically for Windows.
I also have a timer class that I wrote to do quick-and-dirty profiling that uses QueryPerformanceCounter() to get high-precision timings, but with a slight difference. My timer class doesn't dump the elapsed time when the Timer object falls out of scope. Instead, it accumulates the elapsed times into a collection. I added a static member function, Dump(), which creates a table of elapsed times, sorted by timing category (specified in Timer's constructor as a string) along with some statistical analysis such as mean elapsed time, standard deviation, max and min. I also added a Clear() static member function which clears the collection & lets you start over again.
How to use the Timer class (pseudocode):
int CInsertBuffer::Read(char* pBuf)
{
// TIMER NOTES: Avg Execution Time = ~1 ms
Timer timer("BufferRead");
: :
return -1;
}
Sample output :
Timer Precision = 418.0095 ps
=== Item Trials Ttl Time Avg Time Mean Time StdDev ===
AddTrade 500 7 ms 14 us 12 us 24 us
BufferRead 511 1:19.25 0.16 s 621 ns 2.48 s
BufferWrite 516 511 us 991 ns 482 ns 11 us
ImportPos Loop 1002 18.62 s 19 ms 77 us 0.51 s
ImportPosition 2 18.75 s 9.38 s 16.17 s 13.59 s
Insert 515 4.26 s 8 ms 5 ms 27 ms
recv 101 18.54 s 0.18 s 2603 ns 1.63 s
file Timer.inl :
#include <map>
#include "x:\utils\stlext\stringext.h"
#include <iterator>
#include <set>
#include <vector>
#include <numeric>
#include "x:\utils\stlext\algorithmext.h"
#include <math.h>
class Timer
{
public:
Timer(const char* name)
{
label = std::safe_string(name);
QueryPerformanceCounter(&startTime);
}
virtual ~Timer()
{
QueryPerformanceCounter(&stopTime);
__int64 clocks = stopTime.QuadPart-startTime.QuadPart;
double elapsed = (double)clocks/(double)TimerFreq();
TimeMap().insert(std::make_pair(label,elapsed));
};
static std::string Dump(bool ClipboardAlso=true)
{
static const std::string loc = "Timer::Dump";
if( TimeMap().empty() )
{
return "No trials\r\n";
}
std::string ret = std::formatstr("\r\n\r\nTimer Precision = %s\r\n\r\n", format_elapsed(1.0/(double)TimerFreq()).c_str());
// get a list of keys
typedef std::set<std::string> keyset;
keyset keys;
std::transform(TimeMap().begin(), TimeMap().end(), std::inserter(keys, keys.begin()), extract_key());
size_t maxrows = 0;
typedef std::vector<std::string> strings;
strings lines;
static const size_t tabWidth = 9;
std::string head = std::formatstr("=== %-*.*s %-*.*s %-*.*s %-*.*s %-*.*s %-*.*s ===", tabWidth*2, tabWidth*2, "Item", tabWidth, tabWidth, "Trials", tabWidth, tabWidth, "Ttl Time", tabWidth, tabWidth, "Avg Time", tabWidth, tabWidth, "Mean Time", tabWidth, tabWidth, "StdDev");
ret += std::formatstr("\r\n%s\r\n", head.c_str());
if( ClipboardAlso )
lines.push_back("Item\tTrials\tTtl Time\tAvg Time\tMean Time\tStdDev\r\n");
// dump the values for each key
{for( keyset::iterator key = keys.begin(); keys.end() != key; ++key )
{
time_type ttl = 0;
ttl = std::accumulate(TimeMap().begin(), TimeMap().end(), ttl, accum_key(*key));
size_t num = std::count_if( TimeMap().begin(), TimeMap().end(), match_key(*key));
if( num > maxrows )
maxrows = num;
time_type avg = ttl / num;
// compute mean
std::vector<time_type> sortedTimes;
std::transform_if(TimeMap().begin(), TimeMap().end(), std::inserter(sortedTimes, sortedTimes.begin()), extract_val(), match_key(*key));
std::sort(sortedTimes.begin(), sortedTimes.end());
size_t mid = (size_t)floor((double)num/2.0);
// median: the middle element for an odd count, the average of the two middle elements for an even count
double mean = ( num > 1 && (num % 2) == 0 ) ? (sortedTimes[mid-1]+sortedTimes[mid])/2.0 : sortedTimes[mid];
// compute variance
double sum = 0.0;
if( num > 1 )
{
for( std::vector<time_type>::iterator timeIt = sortedTimes.begin(); sortedTimes.end() != timeIt; ++timeIt )
sum += pow(*timeIt-mean,2.0);
}
// compute std dev
double stddev = num > 1 ? sqrt(sum/((double)num-1.0)) : 0.0;
ret += std::formatstr(" %-*.*s %-*.*s %-*.*s %-*.*s %-*.*s %-*.*s\r\n", tabWidth*2, tabWidth*2, key->c_str(), tabWidth, tabWidth, std::formatstr("%d",num).c_str(), tabWidth, tabWidth, format_elapsed(ttl).c_str(), tabWidth, tabWidth, format_elapsed(avg).c_str(), tabWidth, tabWidth, format_elapsed(mean).c_str(), tabWidth, tabWidth, format_elapsed(stddev).c_str());
if( ClipboardAlso )
lines.push_back(std::formatstr("%s\t%s\t%s\t%s\t%s\t%s\r\n", key->c_str(), std::formatstr("%d",num).c_str(), format_elapsed(ttl).c_str(), format_elapsed(avg).c_str(), format_elapsed(mean).c_str(), format_elapsed(stddev).c_str()));
}
}
ret += std::formatstr("%s\r\n", std::string(head.length(),'=').c_str());
if( ClipboardAlso )
{
// dump header row of data block
lines.push_back("");
{
std::string s;
for( keyset::iterator key = keys.begin(); key != keys.end(); ++key )
{
if( key != keys.begin() )
s.append("\t");
s.append(*key);
}
s.append("\r\n");
lines.push_back(s);
}
// blow out the flat map of time values to a separate vector of times for each key
typedef std::map<std::string, std::vector<time_type> > nodematrix;
nodematrix nodes;
for( Times::iterator time = TimeMap().begin(); time != TimeMap().end(); ++time )
nodes[time->first].push_back(time->second);
// dump each data point
for( size_t row = 0; row < maxrows; ++row )
{
std::string rowDump;
for( keyset::iterator key = keys.begin(); key != keys.end(); ++key )
{
if( key != keys.begin() )
rowDump.append("\t");
if( nodes[*key].size() > row )
rowDump.append(std::formatstr("%f", nodes[*key][row]));
}
rowDump.append("\r\n");
lines.push_back(rowDump);
}
// dump to the clipboard
std::string dump;
for( strings::iterator s = lines.begin(); s != lines.end(); ++s )
{
dump.append(*s);
}
OpenClipboard(0);
EmptyClipboard();
HGLOBAL hg = GlobalAlloc(GMEM_MOVEABLE, dump.length()+1);
if( hg != 0 )
{
char* buf = (char*)GlobalLock(hg);
if( buf != 0 )
{
std::copy(dump.begin(), dump.end(), buf);
buf[dump.length()] = 0;
GlobalUnlock(hg);
SetClipboardData(CF_TEXT, hg);
}
}
CloseClipboard();
}
return ret;
}
static void Reset()
{
TimeMap().clear();
}
static std::string format_elapsed(double d)
{
if( d < 0.00000001 )
{
// show in ps with 4 digits
return std::formatstr("%0.4f ps", d * 1000000000000.0);
}
if( d < 0.00001 )
{
// show in ns
return std::formatstr("%0.0f ns", d * 1000000000.0);
}
if( d < 0.001 )
{
// show in us
return std::formatstr("%0.0f us", d * 1000000.0);
}
if( d < 0.1 )
{
// show in ms
return std::formatstr("%0.0f ms", d * 1000.0);
}
if( d <= 60.0 )
{
// show in seconds
return std::formatstr("%0.2f s", d);
}
if( d < 3600.0 )
{
// show in min:sec
return std::formatstr("%01.0f:%02.2f", floor(d/60.0), fmod(d,60.0));
}
// show in h:min:sec
return std::formatstr("%01.0f:%02.0f:%02.2f", floor(d/3600.0), floor(fmod(d,3600.0)/60.0), fmod(d,60.0));
}
private:
static __int64 TimerFreq()
{
static __int64 freq = 0;
static bool init = false;
if( !init )
{
LARGE_INTEGER li;
QueryPerformanceFrequency(&li);
freq = li.QuadPart;
init = true;
}
return freq;
}
LARGE_INTEGER startTime, stopTime;
std::string label;
typedef std::string key_type;
typedef double time_type;
typedef std::multimap<key_type, time_type> Times;
// static Times times;
static Times& TimeMap()
{
static Times times_;
return times_;
}
struct extract_key : public std::unary_function<Times::value_type, key_type>
{
std::string operator()(Times::value_type const & r) const
{
return r.first;
}
};
struct extract_val : public std::unary_function<Times::value_type, time_type>
{
time_type operator()(Times::value_type const & r) const
{
return r.second;
}
};
struct match_key : public std::unary_function<Times::value_type, bool>
{
match_key(key_type const & key_) : key(key_) {};
bool operator()(Times::value_type const & rhs) const
{
return key == rhs.first;
}
private:
match_key& operator=(match_key&) { return * this; }
const key_type key;
};
struct accum_key : public std::binary_function<time_type, Times::value_type, time_type>
{
accum_key(key_type const & key_) : key(key_), n(0) {};
time_type operator()(time_type const & v, Times::value_type const & rhs) const
{
if( key == rhs.first )
{
++n;
return rhs.second + v;
}
return v;
}
private:
accum_key& operator=(accum_key&) { return * this; }
const Times::key_type key;
mutable size_t n;
};
};
file stringext.h (provides formatstr() function):
namespace std
{
/* ---
Formatted Print
template<class C>
int strprintf(basic_string<C>* pString, const C* pFmt, ...);
template<class C>
int vstrprintf(basic_string<C>* pString, const C* pFmt, va_list args);
Returns :
# characters printed to output
Effects :
Writes formatted data to a string. strprintf() works exactly the same as sprintf(); see your
documentation for sprintf() for details of operation. vstrprintf() also works the same as sprintf(),
but instead of accepting a variable parameter list it accepts a va_list argument.
Requires :
pString is a pointer to a basic_string<>
--- */
template<class char_type> int vprintf_generic(char_type* buffer, size_t bufferSize, const char_type* format, va_list argptr);
template<> inline int vprintf_generic<char>(char* buffer, size_t bufferSize, const char* format, va_list argptr)
{
# ifdef SECURE_VSPRINTF
return _vsnprintf_s(buffer, bufferSize-1, _TRUNCATE, format, argptr);
# else
return _vsnprintf(buffer, bufferSize-1, format, argptr);
# endif
}
template<> inline int vprintf_generic<wchar_t>(wchar_t* buffer, size_t bufferSize, const wchar_t* format, va_list argptr)
{
# ifdef SECURE_VSPRINTF
return _vsnwprintf_s(buffer, bufferSize-1, _TRUNCATE, format, argptr);
# else
return _vsnwprintf(buffer, bufferSize-1, format, argptr);
# endif
}
template<class Type, class Traits>
inline int vstringprintf(basic_string<Type,Traits> & outStr, const Type* format, va_list args)
{
// prologue
static const size_t ChunkSize = 1024;
size_t curBufSize = 0;
outStr.erase();
if( !format )
{
return 0;
}
// keep trying to write the string to an ever-increasing buffer until
// either we get the string written or we run out of memory
while( true )
{
// allocate a local buffer
curBufSize += ChunkSize;
std::ref_ptr<Type> localBuffer = new Type[curBufSize];
if( localBuffer.get() == 0 )
{
// we ran out of memory -- nice goin'!
return -1;
}
// format output to local buffer
int i = vprintf_generic(localBuffer.get(), curBufSize, format, args); // buffer size is a character count, not bytes
if( -1 == i )
{
// the buffer wasn't big enough -- try again
continue;
}
else if( i < 0 )
{
// something weird happened -- bail
return i;
}
// if we get to this point the string was written completely -- stop looping
outStr.assign(localBuffer.get(),i);
return i;
}
// unreachable code
return -1;
};
// provided for backward-compatibility
template<class Type, class Traits>
inline int vstrprintf(basic_string<Type,Traits> * outStr, const Type* format, va_list args)
{
return vstringprintf(*outStr, format, args);
}
template<class Char, class Traits>
inline int stringprintf(std::basic_string<Char, Traits> & outString, const Char* format, ...)
{
va_list args;
va_start(args, format);
int retval = vstringprintf(outString, format, args);
va_end(args);
return retval;
}
// old function provided for backward-compatibility
template<class Char, class Traits>
inline int strprintf(std::basic_string<Char, Traits> * outString, const Char* format, ...)
{
va_list args;
va_start(args, format);
int retval = vstringprintf(*outString, format, args);
va_end(args);
return retval;
}
/* ---
Inline Formatted Print
string strprintf(const char* Format, ...);
Returns :
Formatted string
Effects :
Writes formatted data to a string. formatstr() works the same as sprintf(); see your
documentation for sprintf() for details of operation.
--- */
template<class Char>
inline std::basic_string<Char> formatstr(const Char * format, ...)
{
std::basic_string<Char> outString;
va_list args;
va_start(args, format);
vstringprintf(outString, format, args);
va_end(args);
return outString;
}
};
File algorithmext.h (provides transform_if() function) :
/* ---
Transform
25.2.3
template<class InputIterator, class OutputIterator, class UnaryOperation, class Predicate>
OutputIterator transform_if(InputIterator first, InputIterator last, OutputIterator result, UnaryOperation op, Predicate pred)
template<class InputIterator1, class InputIterator2, class OutputIterator, class BinaryOperation, class Predicate>
OutputIterator transform_if(InputIterator first, InputIterator last, OutputIterator result, BinaryOperation binary_op, Predicate pred)
Requires:
T is of type EqualityComparable (20.1.1)
op and binary_op have no side effects
Effects :
Assigns through every iterator i in the range [result, result + (last1-first1)) a new corresponding value equal to one of:
1: op( *(first1 + (i - result))
2: binary_op( *(first1 + (i - result), *(first2 + (i - result))
Returns :
result + (last1 - first1)
Complexity :
At most last1 - first1 applications of op or binary_op
--- */
template<class InputIterator, class OutputIterator, class UnaryFunction, class Predicate>
OutputIterator transform_if(InputIterator first,
InputIterator last,
OutputIterator result,
UnaryFunction f,
Predicate pred)
{
for (; first != last; ++first)
{
if( pred(*first) )
*result++ = f(*first);
}
return result;
}
template<class InputIterator1, class InputIterator2, class OutputIterator, class BinaryOperation, class Predicate>
OutputIterator transform_if(InputIterator1 first1,
InputIterator1 last1,
InputIterator2 first2,
OutputIterator result,
BinaryOperation binary_op,
Predicate pred)
{
for (; first1 != last1 ; ++first1, ++first2)
{
if( pred(*first1) )
*result++ = binary_op(*first1,*first2);
}
return result;
}
A: This method has several limitations, but I still find it very useful. I'll list the limitations (I know of) up front and let whoever wants to use it do so at their own risk.
*
*The original version I posted over-reported time spent in recursive calls (as pointed out in the comments to the answer).
*It's not thread safe, it wasn't thread safe before I added the code to ignore recursion and it's even less thread safe now.
*Although it's very efficient if it's called many times (millions), it will have a measurable effect on the outcome so that scopes you measure will take longer than those you don't.
I use this class when the problem at hand doesn't justify profiling all my code, or when I get some data from a profiler that I want to verify. Basically it sums up the time you spent in a specific block and at the end of the program outputs it to the debug stream (viewable with DbgView), including how many times the code was executed (and the average time spent, of course).
#pragma once
#include <tchar.h>
#include <windows.h>
#include <sstream>
#include <boost/noncopyable.hpp>
namespace scope_timer {
class time_collector : boost::noncopyable {
__int64 total;
LARGE_INTEGER start;
size_t times;
const TCHAR* name;
double cpu_frequency()
{ // cache the CPU frequency, which doesn't change.
static double ret = 0; // store as double so division later on is floating point and not truncating
if (ret == 0) {
LARGE_INTEGER freq;
QueryPerformanceFrequency(&freq);
ret = static_cast<double>(freq.QuadPart);
}
return ret;
}
bool in_use;
public:
time_collector(const TCHAR* n)
: times(0)
, name(n)
, total(0)
, start(LARGE_INTEGER())
, in_use(false)
{
}
~time_collector()
{
std::basic_ostringstream<TCHAR> msg;
msg << _T("scope_timer> ") << name << _T(" called: ");
double seconds = total / cpu_frequency();
double average = seconds / times;
msg << times << _T(" times total time: ") << seconds << _T(" seconds ")
<< _T(" (avg ") << average <<_T(")\n");
OutputDebugString(msg.str().c_str());
}
void add_time(__int64 ticks)
{
total += ticks;
++times;
in_use = false;
}
bool aquire()
{
if (in_use)
return false;
in_use = true;
return true;
}
};
class one_time : boost::noncopyable {
LARGE_INTEGER start;
time_collector* collector;
public:
one_time(time_collector& tc)
{
if (tc.aquire()) {
collector = &tc;
QueryPerformanceCounter(&start);
}
else
collector = 0;
}
~one_time()
{
if (collector) {
LARGE_INTEGER end;
QueryPerformanceCounter(&end);
collector->add_time(end.QuadPart - start.QuadPart);
}
}
};
}
// Usage TIME_THIS_SCOPE(XX); where XX is a C variable name (can begin with a number)
#define TIME_THIS_SCOPE(name) \
static scope_timer::time_collector st_time_collector_##name(_T(#name)); \
scope_timer::one_time st_one_time_##name(st_time_collector_##name)
A: The article Code profiler and optimizations has lots of information about C++ code profiling and also has a free download link to a program/class that will show you a graphic presentation for different code paths/methods.
A: I have a quick-and-dirty profiling class that can be used for profiling even the tightest inner loops. The emphasis is on extreme light weight and simple code. The class allocates a two-dimensional array of fixed size. I then add "checkpoint" calls all over the place. When checkpoint N is reached immediately after checkpoint M, I add the time elapsed (in microseconds) to the array item [M,N]. Since this is designed to profile tight loops, I also have a "start of iteration" call that resets the "last checkpoint" variable. At the end of the test, the dumpResults() call produces the list of all pairs of checkpoints that followed each other, together with total time accounted for and unaccounted for.
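The class itself wasn't posted, but a rough sketch of the idea might look like this (untested; the names and the fixed size are invented for illustration):
#include <windows.h>
#include <cstdio>
#include <cstring>

const int MAX_CHECKPOINTS = 16;

class CheckpointProfiler
{
public:
    CheckpointProfiler() : last(-1)
    {
        QueryPerformanceFrequency(&freq);
        memset(micros, 0, sizeof(micros));
    }
    // call at the top of each loop iteration to forget the previous checkpoint
    void StartIteration() { last = -1; }
    // time since the previous checkpoint M is accumulated into micros[M][n]
    void Checkpoint(int n)
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        if (last >= 0)
            micros[last][n] += (now.QuadPart - lastTime.QuadPart) * 1000000 / freq.QuadPart;
        last = n;
        lastTime = now;
    }
    void DumpResults() const
    {
        for (int m = 0; m < MAX_CHECKPOINTS; ++m)
            for (int n = 0; n < MAX_CHECKPOINTS; ++n)
                if (micros[m][n] > 0)
                    printf("%d -> %d : %I64d us\n", m, n, micros[m][n]);
    }
private:
    int last;
    LARGE_INTEGER lastTime, freq;
    __int64 micros[MAX_CHECKPOINTS][MAX_CHECKPOINTS];
};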
A: I wrote a simple cross-platform class called nanotimer for this reason. The goal was to be as lightweight as possible, so as not to interfere with actual code performance by adding too many instructions and thereby influencing the instruction cache. It is capable of microsecond accuracy across Windows, Mac and Linux (and probably some Unix variants).
Basic usage:
plf::nanotimer t;
t.start();
// stuff
double elapsed = t.get_elapsed_ns(); // get elapsed time in nanoseconds
start() also restarts the timer when necessary. "Pausing" the timer can be achieved by storing the elapsed time, then restarting the timer when "unpausing" and adding to the stored result the next time you check elapsed time.
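A rough sketch of that pause idiom (untested, assuming the start()/get_elapsed_ns() interface shown above):
plf::nanotimer t;
double banked = 0; // elapsed time accumulated across pauses

t.start();
// ... timed work ...
banked += t.get_elapsed_ns(); // "pause": bank the elapsed time so far

// ... work we don't want to measure ...

t.start(); // "unpause": restart the timer
// ... more timed work ...
double total_ns = banked + t.get_elapsed_ns();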
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How do I locate a Word application window? I have a VB.net test application that clicks a link that opens the Microsoft Word application window and displays the document. How do I locate the Word application window so that I can grab some text from it?
A: I've done something similar with a SourceSafe dialog, which I posted on my blog. Basically, I used either Spy++ or Winspector to find out the window class name, and make Win32 calls to do stuff with the window. I've put the source on my blog: http://harriyott.com/2006/07/sourcesafe-cant-leave-well-alone.aspx
A: Are you trying to activate the word app? If you want full control, you need to automate word from your vb.net app. Check here for some samples: 1, 2
A: You can use the Word COM object to open the Word document and then manipulate it. Make sure to add a reference for Microsoft Word first.
Imports System.Runtime.InteropServices
Imports Microsoft.Office.Interop.Word
Public Class Form1
Inherits System.Windows.Forms.Form
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
Dim strFileName As String
Dim wordapp As New Microsoft.Office.Interop.Word.Application
Dim doc As Microsoft.Office.Interop.Word.Document
Try
doc = wordapp.Documents.Open("c:\testdoc.doc")
doc.Activate()
Catch ex As COMException
MessageBox.Show("Error accessing Word document.")
End Try
End Sub
End Class
The doc object is a handle for the instance of Word you have created, and you can use all the normal options (save, print, etc.). You can do likewise with the wordapp. A trick is to use the macro recorder in Word to record what you want to do. You can then view the result in the macro editor. This gives you a great starting point for your VB code.
Also, be sure to dispose of the Word COM objects at the end.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Learn Silverlight or WPF first? It seems that Silverlight/WPF are the long term future for user interface development with .NET. This is great because I can see the advantage of reusing XAML skills on both the client and web development sides. But looking at WPF/XAML/Silverlight, they seem like very large technologies, so where is the best place to get started?
I would like to hear from anyone who has good knowledge of both and can recommend which is a better starting point and why.
A: I'll go against the grain and say learn WPF first.
Here's my reasoning:
*
*Much more resources are available for WPF than Silverlight, such as books, blogs, and msdn documentation
*
*WPF Books
*You're not dealing with a Beta, moving target
*You don't have to deal with working with only asynchronous calls
*Not limited by lack of features such as Merged Dictionaries, Triggers, TileBrushes, etc.
*You don't have to worry about re-learning to do things correctly because of lacks of features in SL
A: Silverlight is a stripped down version of WPF, so there should be less to learn. On the other hand, the two platforms have different targets (web & rich client), so I guess it depends on what app you're going to build.
If you just want to learn for yourself (no app in the close future) I'd pick Silverlight because it would be less to assimilate. Still, Silverlight is pretty much a moving target, much more than WPF, so you'll have to keep up with some changes from time to time (the joys of being an early adopter :)).
WPF has lots more stuff that you will probably want to use at some point but I would wait for the needs to arise first.
A: Every industry expert I've heard on podcasts, blogs and interviews recommend learning Silverlight first and then gradually moving to WPF which is a huge UI framework.
Silverlight is light and allows you to work on smaller subset of controls and features such that you get your head around this new UI building paradigm based on,
*
*Templating
*DataBinding
*Styles
Update: 07/2011
I hate to mention this, but in recent times Microsoft has put more focus on HTML5, Javascript and CSS by bringing forward powers of IE 9 and IE 10, as well as the upcoming Windows 8.
More and more developers and CTOs are becoming skeptical about Silverlight as a LOB application platform as time passes; we suspect Silverlight will be limited to Windows Phone and niche domain areas like healthcare or graphics-related applications rather than regular LOB apps.
As it seems right now, as of summer 2011, the future might look fragmented with more opportunities for pure web technologies (HTML5, JS and CSS) as opposed to a plugin and OS-specific UI technology.
A: Should you learn ASP.NET or Winforms first? ASP or MFC? HTML or VB? C# or VB?
Set aside the idea that there is a logical progression through what has become a highly complex interwoven set of technologies, and take a step back and ask yourself a series of questions:
*
*What are your goals; how do you want to balance profit against enjoyment
*Are you short term oriented or in for the long haul
*Are you the type of person who likes to get good at something and do it a lot or do you get bored once you fully understand it?
The next and hardest step is to come to accept that any advice you are given is bound to be wrong; and the longer the time horizon the more likely it is to be incorrect. If the advice is for more than six to 12 months, the probability the advice is wildly incorrect approaches 1.
I can only tell you my story, quickly. In 2000 I was happy as a consultant working profitably in C++ on Windows applications, writing about ASP.NET and WinForms. then I saw C# and the world turned upside down. I never went back.
Two years ago I had the same kind of revelation, only an order of magnitude bigger, stronger and with more conviction about Silverlight. Yes, WPF is magnificent, and it may be that I'm all wet about this, but I believe in my gut that Silverlight changes everything. There was no doubt then and there is no doubt today that Silverlight is the most important development platform for Microsoft since .NET (certainly) and possibly since the switch to C++.
In a nutshell, here is why. I don't understand where its limitations are. With most platforms I do: you can do this, but you can't do that. WPF is a pretty good case in point, as was ASP.Net and WinForms and, well really everything until now.
With Silverlight, I don't see the boundaries yet. Silverlight has already leaped off the desktop onto phones, and I don't see any reason for it to stop there. Yes, it is true, it is bound by the browser, but I see that less as a jail cell than as a tank in which Silverlight will be riding over lots of terrain (it must be very late, I should go to bed).
In any case, for now, learning Silverlight is a gas, there is a lot of material on the Silverlight.net site, and what is the very best thing about learning Silverlight is that if you don't see what you need you can holler at me and I'll make sure you get it pretty quickly.
Enjoy, good luck and the dirty little secret is you'll be fine whichever you choose. It's all just software.
-jesse
Jesse Liberty
"Silverlight Geek"
A: I would start by learning XAML, by reading a few tutorials and playing around with XAMLPad. This will give you a feel for the basics before actually building an app.
A: I would start with WPF and do very simple control familiarization samples. Your goal should be to learn XAML and binding, so just creating some basic WPF window apps will bootstrap your learning speed. Then eventually you can move to Silverlight. Yeah, as others mentioned here, Silverlight is a subset of WPF.
A: I'd say go with Silverlight first!
I have programmed with WPF and Silverlight before.
But as Silverlight is a subset of WPF, if you go in too deep and then try to switch to writing Silverlight applications, you'll be scratching your head looking for that "tag" you learned to love in WPF but which is not available in Silverlight.
When you master the basic things in Silverlight first, the extra mechanism/trigger/whatever features in WPF will simply add to most of what you've already known.
Silverlight and WPF differ at the feature level, not just in some missing controls or animations. Take the WPF trigger mechanism, for example: it is not fully available in Silverlight.
So learning the smaller subset first, you can extend that knowledge to the full set later, but if you started at the full set and gets addicted to some of the niceties available, you'll have trouble down the line when someone asks you to port your designed-utilizing-WPF apps to Silverlight.
A: Well, it depends on what you are going to be working on. If you are working on client/server, then I would go with WPF. If you are working in an environment where you can guarantee that .Net is installed on all of the machines, then I would go with WPF as well, because you can use what is called an XBAP, which is a WPF application that is run through the browser.
It's really up to you. However, I would state that silverlight is not RTM yet, and WPF is. WPF has a lot of books out on the subject, where silverlight does not. It may be easier to get the whole Zen of WPF by reading a few of those books, and then dive into which ever one you would like to play with.
Just keep in mind that Silverlight has a subset of the controls of WPF, a pared-down .Net framework, and does not do synchronous calls. As long as you know that up front, you can start learning the core of the whole foundation and tailor your practical experience later on to whichever technology is best for you.
A: Some tips at Getting started with Silverlight Development
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: What are the pros and cons of the SVN plugins for Eclipse, Subclipse and Subversive? SVN in Eclipse is spread into two camps. The SVN people have developed a plugin called Subclipse. The Eclipse people have a plugin called Subversive. Broadly speaking they both do the same things. What are the advantages and disadvantages of each?
A: Certainly both IDE plugins have their issues. But neither precludes the parallel use of other solutions like TortoiseSVN or command-line. I use all three for my projects at work.
The important thing to remember is that all your client SVN software should use the same SVN file format--which differs between versions of SVN--or you are asking for trouble.
Another issue we found is when your client software uses a different SVN file format than the server. (By file format, I mean the way all the information is represented in all those seemingly invisible .svn files that effectively record what SVN needs to know about your project files.) That can wreak havoc. There's a documented bug between 1.5 server and 1.6 clients, but I can't find the link right now.
We had issues running the superior (IMO) Subclipse 1.6 plugin because of incompatibilities with our SVN 1.5.5 server. So we reverted to Subversive. It works fine, albeit slow and somewhat buggy (but improving). We will switch to Subclipse when our server is updated, though. And yes, we check out our projects with TortoiseSVN and import them into Eclipse (it's faster).
We found that, as other posters said here, it would NOT work if we ran newer versions of TortoiseSVN that wrote files in 1.6.x format, but when we reverted to TortoiseSVN 1.5.x, it worked just fine. The same was true of the command-line client (which we leverage with our Ant tasks).
A: If you are using svn+ssh as the protocol to access your repository I strongly suggest you to choose Subclipse: Subversive is not intelligent enough to remember your credentials properly and prompts you for username and private key every single time you update your working copy and also for each svn-external you may have set up.
The "remember credentials" options is broken in this context and has been since the first public release of Subversive.
A: I chose to go with Subclipse since it is most closely associated with the Subversion project and so more likely to better handle the core SVN functionality. If at all it fails to perform any function then I have TortoiseSVN as a backup.
A: Just an update. I recently was reinstalling Eclipse and was faced with choice of Subclipse vs Subversive. I, also, had my share of troubles trying to get Subversive to work so I went for Subclipse.
It installed perfectly on my Linux 64 bit machine and is running just fine. I mapped most common functions like Update, Commit, etc. to shortcuts and it's a blast. The merging is good too, although for bigger merges I still turn to TortoiseSVN. I tried it with both 3.5 and 3.6, and they both work fine. I ended up using 3.5 because for some reason key bindings were not working with 3.6.
A: If you are using one of them in your company and maybe even want to bundle them in own Eclipse-based products, your life is much easier with Subclipse, because it is available under the business-friendly Eclipse Public License.
Subversive on the other hand needs so-called connectors to fully work. And those have separate and different licenses. So you may end up with two or three different licenses just for the Subversive functionality, while all other Eclipse plugins are just under that one EPL. That's also the reason why those connectors are not hosted at eclipse.org.
And that's why they are downloaded dynamically after the Subversive installation (which also means that simply mirroring the eclipse.org update site does not give you a usable Subversive offline installation in your company network).
A: After reading this post, I changed to Subclipse hands down.
http://eclipsezone.com/eclipse/forums/t77149.rhtml#92035407
A: Up until about May 2008 I was using Subclipse, but due to issues with some projects, I've switched over to Subversive and am using that with no issues. If you are doing something fancy like headless Buckminster builds, then Subversive is definitely the one to go with.
A: If you use TortoiseSVN and regularly update the version you may find Eclipse with Subversive losing all SVN information and throwing some scary errors.
The reason being the new version of TortoiseSVN adds new meta data that Eclipse Subversive does not understand unless you also keep your Eclipse SVN connectors up to date as well.
I generally use the SVNKit connector, so TortoiseSVN 1.5.x will work with Eclipse SVNKit connector 1.5.x and TortoiseSVN 1.6.x will work with Eclipse SVNKit connector 1.6.x.
A: +1 Subclipse
-1 Subversive
Subversive gets confused after even minor refactoring and has validation issues as above.
Environment: STS 2.7.2 (based on Galileo)
A: Subversive has more advantages than Subclipse, as listed below. But there is one feature Subversive lacks that is critical when using branches, so we have to use Subclipse.
Subversive advantages:
*
*View and icons are more informative
*After a commit, sync items are refreshed and the committed file is closed.
Subclipse advantage
*
*ability to compare two branches
A: If you do much merging with Subversion then you will probably prefer CollabNet Desktop - Eclipse Edition. You have to register an account with CollabNet to get the download, but it is free. It is essentially Subclipse with a better merge UI.
I am not affiliated with CollabNet.
CollabNet has made their improved merge client available to non-registered users of Subclipse. You get it by selecting the CollabNet Merge Client feature when installing Subclipse from the update site.
A: For me neither is better or worse, but Subversive is the default SVN plugin in Eclipse Ganymede platform, so there's a chance that it's better integrated with Eclipse.
A: As an addition to Brendon's answer:
We have used Subversion since version 1.5.1 and used Subclipse first. But because we greatly depend on the merging feature, we switched to Subversive, which is more convenient and has a separate Reintegrate option in the merging dialog.
One bug that might hinder merging is that if you select revisions explicitly, it doesn't take the last revision listed. E.g. "101-100" doesn't merge r100, and "100" thus doesn't merge anything at all. (version 0.7.5)
And it uses the same indicators as the CVS plugin.
A: While I got both working with Helios, I have a slight preference for Subclipse because of its excellent support for bugtraq properties (details here).
The History view shows a separate column (titled bugtraq:label, displaying BUGIDs), and the context menu has a dedicated action to "Open Bug URL" (linking to bugtraq:url) -- I couldn't figure out how to access any of this info with Subversive.
A: I would say Subclipse, as I couldn't even get Subversive working ;)
A: I've used both, and while Subclipse has been flaky for me, Subversive (at least with a previous version) locked out an account of my coworker when he accidentally put in the wrong credentials (the network login is used to access the subversion repository).
Subclipse tends to get disorganized over time. If Eclipse is not refreshed regularly Subclipse seems to lose its file tracking information. Honestly, though, since I have the Easy Explorer Plugin, I use Subversive (occasionally) for history and change information, but I easy explore and use TortoiseSVN for commits and updates to the projects I know I've changed recently.
A: I've been using Subversive since I upgraded to Ganymede. I use it with Eclipse in Linux (Ubuntu and Fedora Core), Windows XP and Mac OS X.5. Aside from some issues getting Subversion 1.5.1 to use the right security libraries under Mac OS, I haven't had any problems. Given that it has been adopted as an Eclipse technology project, I am inclined to place my bets on it, in terms of long-term hopes.
A: I have not really used it, but it seems Subversive supports "Check Out As", just like the built-in CVS support does.
Like, to take a project from SVN and be able to run it as a web project, one might be able to do so in one go. But to get the same result in Subclipse, I just check out the sources and run:
mvn eclipse:eclipse -Dwtpversion=2.0
A: I have just discovered that I cannot figure out how to view a properties diff with Subclipse. In Subversive you select two revisions in the history view, right-click and select compare properties from the popup. This is enough for me to stick with Subversive.
The reason for trying to switch was Subversive's strange behavior on OS X: Some automatic operation called 'svn cache update' hogged the CPU at abnormal levels after every 'svn update' run, always taking an annoyingly long time to complete.
A: FWIW, we are using an ancient version of SVN server (1.4 something), and I seem to remember that at one point there was an update to Subclipse that broke backward compatibility, and the gist was "nobody should be on such an old version of SVN anyway".
Subversive was the only one that seemed to be able to handle the older version. I can't remember the details, though, sorry.
A: We tried both in our team.
Subclipse (the one from Galileo/Helios) had some trouble authenticating against our SVN server via VAS, while we had no problems elsewhere, i.e. with the TortoiseSVN client and browsers (except Internet Explorer 7).
So we installed Subversive and the problem was resolved.
A: The advantage of Subclipse over Subversive... IT ACTUALLY WORKS!
I used Subclipse a long time ago when developing a collaborative plugin for Eclipse that depended on Subclipse. The Subclipse part of the plugin was never a problem, although the whole Ant thing still confuses me a bit, but the good part is you don't have to understand how the Ant part works to know how to use it.
I am attempting to install PDT today (which is a whole other blog) and then Subversive because, like many, it is portrayed as "The Eclipse SVN Plugin". I was unable to install the four connectors at once, so I had to install them one at a time; I tried them one at a time, and one at a time each failed to authenticate with the SVN server.
I am trying PDT and Subversive because I want to SAVE time, not spend more of it on different issues with a plugin.
I uninstalled Subversive, installed Subclipse, and connected just like that.
Save yourself the time and hassle, go Subclipse from the start.
A: I actually think both of them kind of suck. Using TortoiseSVN is a far better solution in my opinion. It's far more robust and tends to just work better, and I've always had integration issues with Subclipse and Subversive.
A: Both are very similar but Subversive is the "eclipse svn provider". I primarily use Subversive because of a few convenient features:
Grouping of history
When I'm browsing the history of a branch, instead of just seeing a bunch of rows for every commit, it can group commits by today, week, etc.
Mapping of trunk, branches, and tags
Subversive assumes the default svn layout: trunk, branches, tags (which you can change), so whenever you want to tag or branch it is one click and you provide the name of the tag or branch.
Like I said these are minor differences that I just find convenient. Both work great with mylyn, but overall there really isn't a whole lot of differences with these two extensions.
Merging with Subversive is a pain though (I haven't tried Subclipse); I've never been able to successfully merge. The preview of the merge is great, but it would never complete the merge, or it would take way too long. Most of the time I complete merging through the command line without any issues.
A: I will take a crack at answering this. I am a project lead for Subclipse, and I manage all of the releases, etc. for the project. So my biases are obvious.
I am not going to talk too much about Subversive. Clearly, there are users that use it and like it. Functionally the products are very similar as both are mature products.
One thing I do want to comment on is this notion that somehow Subversive is the "official Eclipse" plugin. That is just not true, as there is no such designation. Eclipse is an open-source foundation and any project that wants to follow their rules, process and IP requirements, etc. can host their project with the foundation. That does not make you any more or less official than any other plugin.
I will also note that Subversive has remained in the "Incubation" phase since its inception, and it does not appear to me that it will ever meet the requirements for graduation. As you can see here, there has been only one committer on the project and commit activity has dwindled to very low levels.
Subversive - SVN Team Provider
So why should you use Subclipse? We are actively involved with Subversion itself. I am a Subversion PMC member and help maintain the Java language bindings so that we (and other projects like Subversive) can use the API.
We work directly with Subversion to define and improve the API and make sure necessary features are exposed to clients like Subclipse. We also work closely and collaborate with the Visual Studio integration (AnkhSVN) and TortoiseSVN teams to make sure there is a relatively consistent user experience across clients.
Subclipse is still actively maintained and we maintain support for Eclipse versions 3.2 to 4.2. We are always trying to listen to feedback and incorporate ideas from the community. The recent 1.8.x releases include internal changes that greatly improve performance of Eclipse when working with large projects (that is when you really see it).
Subclipse has led the way in areas like merge tracking support, where we worked closely with the Subversion team in first adding this feature in 1.5 and then evolving it in subsequent releases. We were often the initial consumers of new API and provided the project with the feedback needed to harden the feature. We also introduced a graphical revision graph feature a couple years ago, becoming the first to bring this long asked for feature to Eclipse users.
If there are specific UI features in Subversive that people would like to see made in Subclipse, I would encourage you to visit our community and engage in our discussion forums. Maybe other users share your views and we can improve the UI together.
Forum [Subclipse-users].
Eclipse 4.2 is the latest release at the time of this post, but it is safe to assume that Subclipse will support all future Eclipse releases as they are made.
A: They both have pretty heinous warts, but I couldn't get Subversive to work with a project I had checked out from the command-line, and that was a show-stopper for me.
A: I tried both of them, and both Subclipse and Subversive are awful. Both are challenging to install. If you use Subversive, you cannot use an external SVN client.
However, you need to have an SVN client installed in Eclipse to keep track of changes, and also to avoid corrupting your local repository.
I have Subclipse installed, but use TortoiseSVN to actually do comitting/tagging/branching/merging.
A: Subclipse, because at least it works.
Subversive has been a bucket of fail for me so far. It wouldn't play nice with all of my old projects I had checked out with Subclipse.
A: With every new version of Eclipse, I install Subversive, because it's the standard provided by Eclipse. And every time, it has issues recognizing my pre-existing projects.
So I end up uninstalling Subversive and installing Subclipse instead, which works marvellously. I also frequently use SVN from the command line as well as in Eclipse, and Subclipse has no problems with this.
A: I've also used both. I had the problem that I have around 150 projects on my workspace, and Subversive would take an awful long time when I selected all plugins and said "synchronize repository". The UI would freeze for an extremely long time. I find Subclipse to be more stable.
Anyway, I combine the tools a lot. For some tasks like checking out whole branches I prefer the command line. For others I use TortoiseSVN. I use Subclipse mostly to view history and run comparisons directly on the tool, and occasionally to compare (I prefer Beyond Compare for that, though).
A: I had the same problem as some others even getting Subversive to work, so I can't say if it's better than Subclipse.
Subclipse is really lacking when it comes to integration with Eclipse for tags and branches. You can do them, but it's nowhere near as seamless as it is with CVS.
A: If you are using Zend Studio 9, Zend's implementation of Eclipse, I recommend using Subclipse instead of Subversive, which comes shipped with Zend Studio by default.
I have posted a problem with Subversive and Zend Studio 9 and my solution of using Subclipse instead on the Zend forums.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "639"
} |
Q: Getting user photo from SPUser using WSS Object model I am trying to retrieve a SharePoint user's photo through the WSS 3.0 object model. I have been browsing the web for solutions, but so far I've been unable to find a way to do it. Is it possible, and if so how?
A: Here is a code snippet that should help get the job done for you. You may need to do some additional validation to avoid any exceptions (ensuring the profile actually exists, ensuring the image URL actually exists, etc...):
//get current profile manager
UserProfileManager objUserProfileManager = new UserProfileManager(PortalContext.Current);
//get current users profile
UserProfile profile = objUserProfileManager.GetUserProfile(true);
//get user image URL
string imageUrl = (string)profile[PropertyConstants.PictureUrl];
//do something here with imageUrl
A: If you are strictly talking about WSS 3.0 (and not MOSS), then you really don't have global user profiles per se, but a hidden User Information List in each site collection. That means none of the stuff in the Microsoft.Office.Server namespaces is available to you.
However, you can update the User Information List programmatically as long as you know the URL to a user's picture. As long as you're running with some kind of elevated privileges, you should be able to manipulate this list just like you can with any other SharePoint list. Keep in mind that this list is only good for the scope of a site collection, so users would have to make this same update all over the place to actually have a photo URL. Plus users don't get into the User Information List until someone assigns some kind of permission to them, so not every user in your domain will be in there.
The clean way to handle this is definitely the User Profile mechanism in MOSS, but if that's an option, the question should really be updated to ask about MOSS vs WSS.
A: Ah, You have to use the UserProfileManager class.
More information here: http://msdn.microsoft.com/en-us/library/microsoft.office.server.userprofiles.userprofilemanager.aspx
Example use:
public override void ItemAdded(SPItemEventProperties properties)
{
// Get list item on which the event occurred.
SPListItem item = properties.ListItem;
// Set the Author Image field to the user's PictureURL if it exists.
using (SPWeb web = properties.OpenWeb())
{
// Author: {C32DB804-FF2D-4656-A38A-B0394BA5C931}
SPFieldUserValue authorValue = new SPFieldUserValue(properties.OpenWeb(), item[new Guid("{C32DB804-FF2D-4656-A38A-B0394BA5C931}")].ToString());
UserProfileManager profileManager = new UserProfileManager(ServerContext.GetContext(web.Site));
UserProfile profile = profileManager.GetUserProfile(authorValue.LookupId);
UserProfileValueCollection values = profile[PropertyConstants.PictureUrl];
if (values.Count > 0)
{
// Author Image: {37A5CA4C-7621-44d7-BF3B-583F742CE52F}
SPFieldUrlValue urlValue = new SPFieldUrlValue(values.Value.ToString());
item[new Guid("{37A5CA4C-7621-44d7-BF3B-583F742CE52F}")] = urlValue.Url;
}
}
item.Update();
// News Text: {7F55A8F0-4555-46BC-B24C-222240B862AF}
//
// Author Image: {37A5CA4C-7621-44d7-BF3B-583F742CE52F}
//
// Publish Date: {45E84B8B-E161-46C6-AD51-27A42E4992B5}
//
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Is there a way to insert assembly code into C? I remember back in the day with the old borland DOS compiler you could do something like this:
asm {
mov ax,ex
etc etc...
}
Is there a semi-platform independent way to do this now? I have a need to make a BIOS call, so if there was a way to do this without asm code, that would be equally useful to me.
A: Using GCC
__asm__("movl %edx, %eax\n\t"
"addl $2, %eax\n\t");
Using VC++
__asm {
mov eax, edx
add eax, 2
}
A: For Microsoft compilers, inline assembly is supported only for x86. For other targets you have to define the whole function in a separate assembly source file, pass it to an assembler and link the resulting object module.
You're highly unlikely to be able to call into the BIOS under a protected-mode operating system and should use whatever facilities are available on that system. Even if you're in kernel mode it's probably unsafe - the BIOS may not be correctly synchronized with respect to OS state if you do so.
A: In GCC, there's more to it than that. In the instruction, you have to tell the compiler what changed, so that its optimizer doesn't screw up. I'm no expert, but sometimes it looks something like this:
asm ("lock; xaddl %0,%2" : "=r" (result) : "0" (1), "m" (*atom) : "memory");
It's a good idea to write some sample code in C, then ask GCC to produce an assembly listing, then modify that code.
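For instance, here is a small self-contained example of the extended asm syntax (x86, AT&T syntax; the sections after the colons are the outputs and inputs that tell the optimizer what the asm reads and writes):
#include <stdio.h>

int main(void)
{
    int src = 10;
    int dst;

    /* dst = src + 5. "=r" marks dst as a write-only register output;
       "r" asks for src in any general-purpose register. */
    __asm__ ("movl %1, %0\n\t"
             "addl $5, %0"
             : "=r" (dst)
             : "r" (src));

    printf("%d\n", dst); /* prints 15 */
    return 0;
}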
A: A good start would be reading this article which talk about inline assembly in C/C++:
http://www.codeproject.com/KB/cpp/edujini_inline_asm.aspx
Example from the article:
#include <stdio.h>
int main() {
/* Add 10 and 20 and store result into register %eax */
__asm__ ( "movl $10, %eax;"
"movl $20, %ebx;"
"addl %ebx, %eax;"
);
/* Subtract 20 from 10 and store result into register %eax */
__asm__ ( "movl $10, %eax;"
"movl $20, %ebx;"
"subl %ebx, %eax;"
);
/* Multiply 10 and 20 and store result into register %eax */
__asm__ ( "movl $10, %eax;"
"movl $20, %ebx;"
"imull %ebx, %eax;"
);
return 0 ;
}
A: You can use the asm or __asm__ function (different compilers accept different forms).
Some compilers also let you embed Fortran code with a fortran function:
asm("syscall");
fortran("Print *, J");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
} |
Q: Threads or asynch? How do you make your application multithreaded?
Do you use asynch functions?
Or do you spawn a new thread?
I think that asynch functions already spawn a thread, so if your job is just doing some file reading, being lazy and just spawning your job on a thread would just "waste" resources...
So is there some kind of design guideline for when to use threads vs. asynch functions?
A: If you are talking about .Net, then don't forget the ThreadPool. The thread pool is also what asynch functions often use. Spawning to much threads can actually hurt your performance. A thread pool is designed to spawn just enough threads to do the work the fastest. So do use a thread pool instead of spwaning your own threads, unless the thread pool doesn't meet your needs.
PS: And keep an eye out on the Parallel Extensions from Microsoft
A: Spawning threads is only going to waste resources if you start spawning tons of them; one or two extra threads isn't going to affect the platform's performance. In fact, System currently has over 70 threads for me, and msn is using 32 (I really have no idea how a messenger can use that many threads, especially when it's minimised and not really doing anything...)
Usually a good time to spawn a thread is when something will take a long time, but you need to keep doing something else.
E.g. say a calculation will take 30 seconds. The best thing to do is spawn a new thread for the calculation, so that you can continue to update the screen and handle any user input, because users will hate it if your app freezes until it's finished doing the calculation.
On the other hand, creating threads to do something that can be done almost instantly is nearly pointless, since the overhead of creating (or even just passing work to an existing thread using a thread pool) will be higher than just doing the job in the first place.
Sometimes you can break your app into a couple of separate parts which run in their own threads. For example, in games the updates/physics etc. may be one thread, while graphics are another, sound/music is a third, and networking is another. The problem here is you really have to think about how these parts will interact, or else you may have worse performance, bugs that happen seemingly "randomly", or it may even deadlock.
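As a rough sketch of the long-calculation case above — offloading the work to its own thread so the main loop can keep updating the screen and handling input (C++11; names are invented for illustration):
#include <thread>
#include <atomic>
#include <cstdio>

std::atomic<bool> calculationDone(false);

void LongCalculation()
{
    // ... the ~30 second calculation ...
    calculationDone = true;
}

int main()
{
    std::thread worker(LongCalculation); // calculation runs in the background

    while (!calculationDone)
    {
        // keep updating the screen and handling user input here
    }

    worker.join(); // wait for (and clean up) the worker thread
    printf("calculation finished\n");
    return 0;
}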
A: I'll second Fire Lancer's answer - creating your own threads is an excellent way to process big tasks or to handle a task that would otherwise be "blocking" to the rest of a synchronous app, but you have to have a clear understanding of the problem that you must solve and develop in a way that clearly defines the task of a thread, and limits the scope of what it does.
For an example I recently worked on - a Java console app runs periodically to capture data by essentially screen-scraping urls, parsing the document with DOM, extracting data and storing it in a database.
As a single threaded application, it, as you would expect, took an age, averaging around 1 url a second for a 50kb page. Not too bad, but when you scale out to needing to processes thousands of urls in a batch, it's no good.
Profiling the app showed that most of the time the active thread was idle - it was waiting for I/O operations - opening of a socket to the remote URL, opening a connection to the database etc. It's this sort of situation that can easily be improved with multithreading. Rewriting to be multi-threaded and with just 5 threads instead of one, even on a single core cpu, gave an increase in throughput of over 20 times.
In this example, each "worker" thread was explicitly limited to what it did - open a remote url, parse the data, store it in the db. All the "high level" processing - generating the list of urls to parse, working out which to do next, handling errors - remained under the control of the main thread.
A: The use of threads makes you think more about the way your application needs threading and can in the long run make it easier to improve / control your performance.
Async methods are faster to use but they are a bit magic - a lot of things happen to make them possible - so it's probable that at some point you will need something that they can't give you. Then you can try and roll some custom threading code.
It all depends on your needs.
A: The answer is "it depends".
It depends on what you're trying to achieve. I'm going to assume that you're aiming for more performance.
The simplest solution is to find another way to improve your performance. Run a profiler. Look for hot spots. Reduce unnecessary IO.
The next solution is to break your program into multiple processes, each of which can run in their own address space. This is easiest because there is no chance of the individual processes messing each other up.
The next solution is to use threads. At this point you're opening a major can of worms, so start small, and only multi-thread the critical path of the code.
The next solution is to use asynch IO. Generally only recommended for people writing some sort of very heavily loaded server, and even then I would rather re-use one of the existing frameworks that abstract away the details, e.g. the C++ framework ICE, or an EJB server under Java.
Note that each of these solutions has multiple sub-solutions - there are different breeds of threads and different kinds of asynch IO, each with slightly different performance characteristics, but again, it's generally best to let the framework handle it for you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Can an iPhone App Be Run as Root? I am thinking about the design of an iPhone app I'd like to create. One possible problem is that this application will have to run as root (to access certain network ports). In a typical UNIX app, I'd just get the app to run with setuid, but I'm wondering if that is possible with an iPhone app.
I've read this question in Apple's forum, which is discouraging:
http://discussions.apple.com/thread.jspa?threadID=1664575
I understand that Apple wants to limit what a program can do, but there are plenty of good, legitimate reasons for a user to run a program with elevated privileges. I'm not trying to create a hacker tool here.
I'm sure I could get around this on a jail-broken iPhone, but that's not what I'm after. Is there any way to run an app with elevated privileges on an unbroken iPhone?
(BTW, there is no need to warn me about the NDA.)
A: Section 3.3.4 of the iPhone SDK Agreement suggests that you mustn't work outside your sandbox.
Given that Apple has been somewhat arbitrary on which applications they permit, you should definitely double-check with them before you start developing.
Compared to 2.0.x, the sandbox restrictions have actually increased in 2.1; you can no longer even read from another application's sandbox. So, even if it currently is possible to elevate your app's privileges, it very likely won't be in a future release.
A: The only options you have are
*
*Run the application as root on the iPhone
*Set the application's setuid bit and owner to root.
I can't see any of them being blessed by Apple.
I guess it depends on what you want to do with the privileges; if you're lucky there might be more fine-grained privileges available, but afaik you have to choose a port above 1024.
A: Doesn't matter one bit if you can do this on your normal desktop computer. The iPhone is not a normal desktop computer.
Unlike a desktop computer, the only way to get an application on the iPhone without a jailbreak is to get it from the App Store. The only way to get on the App Store is to follow Apple's rules, and Apple's rules clearly include "no privilege escalation", "no escaping the sandbox", and "no accessing network ports outside the existing, provided APIs".
What you want to do is not possible.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to get entire chain of Exceptions in Application.ThreadException event handler? I was just working on fixing up exception handling in a .NET 2.0 app, and I stumbled onto some weird issue with Application.ThreadException.
What I want is to be able to catch all exceptions from events behind GUI elements (e.g. button_Click, etc.). I then want to filter these exceptions on 'fatality', e.g. with some types of Exceptions the application should keep running and with others it should exit.
In another .NET 2.0 app I learned that, by default, exceptions only actually leave an Application.Run or Application.DoEvents call in debug mode. In release mode this does not happen, and the exceptions have to be 'caught' using the Application.ThreadException event.
Now, however, I noticed that the exception object passed in the ThreadExceptionEventArgs of the Application.ThreadException event is always the innermost exception in the exception chain. For logging/debugging/design purposes I really want the entire chain of exceptions though. It isn't easy to determine what external system failed, for example, when you just get to handle a SocketException: when it's wrapped as e.g. an NpgsqlException, then at least you know it's a database problem.
So, how to get to the entire chain of exceptions from this event? Is it even possible or do I need to design my excepion handling in another way?
Note that I do -sort of- have a workaround using Application.SetUnhandledExceptionMode, but this is far from ideal because I'd have to roll my own message loop.
EDIT: to prevent more mistakes, the GetBaseException() method does NOT do what I want: it just returns the innermost exception, while the only thing I already have is the innermost exception. I want to get at the outermost exception!
A: This question is more usefully phrased and answered here:
Why does the inner exception reach the ThreadException handler and not the actual thrown exception?
A: I tried to reproduce this behaviour (always getting the innermost exception), but I get the exception I expect, with all InnerExceptions intact.
Here is the code I used to test:
Private Shared Sub Test1()
Try
Test2()
Catch ex As Exception
Application.OnThreadException(New ApplicationException("test1", ex))
End Try
End Sub
Private Shared Sub Test2()
Try
Test3()
Catch ex As Exception
Throw New ApplicationException("test2", ex)
End Try
End Sub
Private Shared Sub Test3()
Throw New ApplicationException("blabla")
End Sub
Private Shared Sub HandleAppException(ByVal sender As Object, ByVal e As ThreadExceptionEventArgs)
...
End Sub
Sub HandleAppException handles Application.ThreadException. The Test1() method is called first.
This is the result (e As ThreadExceptionEventArgs) I get in HandleAppException:
(Screenshot of the resulting ThreadException: http://mediasensation.be/dump/?download=ThreadException.jpg)
If you just catch and (re)throw exceptions, no InnerExceptions will show up, but the calls will be appended to the Exception.StackTrace, like this:
at SO.Test3() in Test.vb:line 166
at SO.Test2() in Test.vb:line 159
at SO.Test1() in Test.vb:line 151
A: Normally, you only lose the whole exception chain (keeping just the base exception) in an Application.ThreadException exception handler if the exception happened on another thread.
From the MSDN Library:
This event allows your Windows Forms application to handle otherwise unhandled exceptions that occur in Windows Forms threads. Attach your event handlers to the ThreadException event to deal with these exceptions, which will leave your application in an unknown state. Where possible, exceptions should be handled by a structured exception handling block.
Solution: If you do threading, make sure that all your threads/async calls are in a try/catch block. Or as you said, you can play with Application.SetUnhandledExceptionMode.
A: Just discovered something interesting. Different GUI events will get you different results. An exception thrown from a Form.Shown event handler will result in Application.ThreadException catching the inner-most exception, but the exact same code run in the Form.Load event will result in the outer-most exception getting caught in Application.ThreadException.
A: Based on some of the information in this chain, I used UnhandledExceptionMode.ThrowException as opposed to UnhandledExceptionMode.CatchException. I then catch the exception outside the form's Run() and this gives me the entire chain of exceptions.
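A minimal sketch of that arrangement (the form name is illustrative): with ThrowException set, the outermost exception reaches the catch block, and the full chain can then be walked via InnerException.
[STAThread]
static void Main()
{
    Application.SetUnhandledExceptionMode(UnhandledExceptionMode.ThrowException);
    try
    {
        Application.Run(new MainForm());
    }
    catch (Exception ex)
    {
        // ex is now the outermost exception; walk inwards for the whole chain
        for (Exception e = ex; e != null; e = e.InnerException)
            Console.WriteLine(e.GetType().Name + ": " + e.Message);
    }
}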
A: Have you tried the Exception.GetBaseException method? This returns the exception which created the Application.ThreadException. You could then use the same process to go up the chain to get all exceptions.
exception.getbaseexception Method
A: According to the MSDN documentation:
When overridden in a derived class, returns the Exception that is the root cause of one or more subsequent exceptions.
Public Overridable Function GetBaseException() As Exception
Dim innerException As Exception = Me.InnerException
Dim exception2 As Exception = Me
Do While (Not innerException Is Nothing)
exception2 = innerException
innerException = innerException.InnerException
Loop
Return exception2
End Function
You could use a variation on this to parse the exception chain.
Public Sub LogExceptionChain(ByVal CurrentException As Exception)
Dim innerException As Exception = CurrentException.InnerException
Dim exception2 As Exception = CurrentException
Debug.Print(exception2.Message) 'Log the Exception
Do While (Not innerException Is Nothing)
exception2 = innerException
Debug.Print(exception2.Message) 'Log the Exception
'Move to the next exception
innerException = innerException.InnerException
Loop
End Sub
This would strike me as exactly what you are looking for.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Using CSS how best to display name value pairs? Should I still be using tables anyway?
The table code I'd be replacing is:
<table>
<tr>
<td>Name</td><td>Value</td>
</tr>
...
</table>
From what I've been reading I should have something like
<label class="name">Name</label><label class="value">Value</label><br />
...
Ideas and links to online samples greatly appreciated. I'm a developer way out of my design depth.
EDIT: My need is to be able to both to display the data to a user and edit the values in a separate (but near identical) form.
A: I think tables are best used for tabular data, which it seems you have there.
If you do not want to use tables, the best thing would be to use definition lists (<dl>, <dt> and <dd>). Here is how to style them to look like your old <td> layout.
http://maxdesign.com.au/articles/definition/
A: I think that definition lists are pretty close semantically to name/value pairs.
<dl>
<dt>Name</dt>
<dd>Value</dd>
</dl>
Definition lists - misused or misunderstood?
A: It's perfectly reasonable to use tables for what seems to be tabular data.
A: If I'm writing a form I usually use this:
<form ... class="editing">
<div class="field">
<label>Label</label>
<span class="edit"><input type="text" value="Value" ... /></span>
<span class="view">Value</span>
</div>
...
</form>
Then in my css:
.editing .view, .viewing .edit { display: none }
.editing .edit, .viewing .view { display: inline }
Then with a little JavaScript I can swap the class of the form from editing to viewing.
I mention this approach since you wanted to display and edit the data with nearly the same layout and this is a way of doing it.
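That "little JavaScript" can be as small as the following sketch (the function name is illustrative; no framework is assumed):
function toggleMode(form) {
    // swap between the two mode classes; the CSS rules above do the actual show/hide
    form.className = (form.className === 'editing') ? 'viewing' : 'editing';
}
// e.g. toggleMode(document.forms[0]);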
A: Like macbirdie I'd be inclined to mark data like this up as a definition list unless the content of the existing table could be judged to actually be tabular content.
I'd avoid using the label tag in the way you propose. Take a look at the explanation of the label tag @ https://developer.mozilla.org/en-US/docs/Web/HTML/Element/label - it's really intended to allow you to focus on its associated control. Also avoid using generic divs and spans, as from a semantic point of view they're weak.
If you're display multiple name-value pairs on one screen, but editing only one on an edit screen, I'd use a table on the former screen, and a definition list on the latter.
A: Horizontal definition lists work pretty well, i.e.
<dl class="dl-horizontal">
<dt>ID</dt>
<dd>25</dd>
<dt>Username</dt>
<dd>Bob</dd>
</dl>
The dl-horizontal class is provided by the Bootstrap CSS framework.
From Bootstrap4:
<dl class="row">
<dt class="col">ID</dt>
<dd class="col">25</dd>
</dl>
<dl class="row">
<dt class="col">Username</dt>
<dd class="col">Bob</dd>
</dl>
A: use the float property, e.g.:
css:
.left {
float:left;
padding-right:20px
}
html:
<div class="left">
Name<br/>
AnotherName
</div>
<div>
Value<br />
AnotherValue
</div>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: Rolling your own message loop, any pitfalls? This question is slightly related to this question about exception handling. The workaround I found there consists of rolling my own message loop.
So my Main method now looks basically like this:
[STAThread]
static void Main() {
// this is needed so an exception will actually be thrown by
// Application.Run/Application.DoEvents, instead of the ThreadException
// event being raised.
Application.SetUnhandledExceptionMode(UnhandledExceptionMode.ThrowException);
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Form form = new MainForm();
form.Show();
// the loop is here to keep the app running if a non-fatal exception is caught.
do {
try {
Application.DoEvents();
Thread.Sleep(100);
}
catch (Exception ex) {
ExceptionHandler.ConsumeException(ex);
}
}
while (!form.IsDisposed);
}
What I'm wondering though, is this a safe/decent way to replace the more typical
'Application.Run(new MainForm());', whether it's used for exception handling or for whatever else, or should I always stick to using Application.Run?
On another app that's in testing now a similar approach is used for both loading (splashscreen) and exception handling, and I don't think it has caused any troubles (yet :-))
A: Pitfall 1:
Thread.Sleep(100);
Never. Use WaitMessage().
Otherwise, it is possible to roll your own message loop, but in your scenario it seems somewhat pointless.
You may also want to examine Application.Run() code (with .Net Reflector, for instance).
A: If you want to customize message processing, consider implementing IMessageFilter, then call Application.AddMessageFilter to tell the standard message pump to call your filter function.
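A minimal sketch of such a filter (the message constant and the filtered key are illustrative):
using System.Windows.Forms;

public class F1KeyFilter : IMessageFilter
{
    private const int WM_KEYDOWN = 0x0100;

    public bool PreFilterMessage(ref Message m)
    {
        // return true to swallow the message, false to let the pump process it
        return m.Msg == WM_KEYDOWN && (Keys)m.WParam.ToInt32() == Keys.F1;
    }
}

// register it once, e.g. in Main() before Application.Run():
// Application.AddMessageFilter(new F1KeyFilter());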
A: Yes... I think some components won't work with that code. Some of them require living in a thread that has an Application.Run in it to effectively pick up their messages.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I create a loop in an onClick event? I want to write an onClick event which submits a form several times, iterating through selected items in a multi-select field, submitting once for each.
How do I code the loop?
I'm working in Ruby on Rails and using remote_function() to generate the JavaScript for the ajax call.
A: My quick answer (as I've not coded it yet) would be to create another function that creates a POST using XMLHTTPRequest and the specific parameters for a single call. Then inside your onClick() handler call that function as you loop through your selected items.
I would suggest that you do a Proof of Concept just using a dummy HTML page and javascript and then try to figure out how to get it to work in RoR.
Also, why are you attempting to make the multiple calls from the browser as opposed to handling the looping conditions in the RoR controller?
A: You'd have to manually write some javascript. Rails' generators won't do something this complex for you.
Prototype.js will do almost all of the heavy lifting for you though. Off the top of my head, the code would look like this: (UNTESTED)
<%= javascript_include_tag 'prototype' %>
<form id="my-form">
<input type="text" name="username" />
<select multiple="true" id="select-box">
<option value="1">First</option>
<option value="2">Second</option>
<option value="3">Third</option>
<option value="4">Fourth</option>
</select>
</form>
<script type="text/javascript" language="javascript">
submitFormMultipleTimes = function() {
$F('select-box').each(function(selectedItemValue){
new Ajax.Request('/somewhere?val='+selectedItemValue,
{method: 'POST', postBody: Form.serialize('my-form')});
});
}
</script>
<a href="#" onclick="submitFormMultipleTimes(); return false;">Clicky Clicky</a>
Note:
*
*Using Prototype's $F() method to get the selected item values. It returns an array for multiple-select boxes
*Using Ajax.Request to send the data to the server as a POST.
To the server, this looks exactly the same as just submitting a normal form
*Using Form.serialize to get the data out of the form and stick it in the request's body.
This is the exact same data that would get sent if you submitted the form normally
A: Unless you're modifying the browser DOM, I can't think of a reason that you would want to do this. (But without knowing fully what you're trying to do, I could be wrong in this case =)
You should be able to send back data from multiple objects (even nested complex objects in your form) in just one POST.
Chances are the rails code will be a lot less complex, easier to write (and easier to debug!) than any javascript you come up with.
If you need to update different parts of the page depending on what the user has selected, you can still make multiple updates to the DOM via RJS in your render :update block, so that shouldn't be an issue.
You'll also have the (large) benefit of only one server round-trip instead of the multiple trips you would need using multiple POSTS.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is there a good admin generator for Ruby on Rails? My current project is in Rails. Coming from a Symfony (PHP) and Django (Python) background, they both have excellent admin generators. Seems like this is missing in Rails.
For those who aren't familiar with Symfony or Django, they both allow you to specify some metadata around your models to automatically (dynamically) generate an admin interface to do the common CRUD operations. You can create an entire Intranet with only a few commands or lines of code. They have a good appearance and are extensible enough for 99% of your admin needs.
I've looked for something similar for Rails, but all of the projects either have no activity or they died long ago. Is there anything to generate an intranet/admin site for a rails app other than scaffolding?
A: Here is a roundup of a few options, including more than just ActiveScaffold.
A: Active Admin (http://activeadmin.info/) was released in May of 2011, and looks like it's going to become the best Rails 3 option.
A: ActiveScaffold is available for Rails 2.3.x :)
Just for someone's info who has found this question one year later, like me :)
A: ActiveScaffold is a good solution, but if you want a more configurable and powerful tool, I think Typus is a great solution:
http://github.com/fesplugas/typus
A: rails_admin appears to be the latest-n-greatest free project as of January 2011.
...best of all, there has been a lot of activity in the repository.
A: You have mainly two:
*
*ActiveScaffolding: the most popular but be careful with rails 2.1
*Streamlined
A: ActiveScaffold is by far and away the most configurable/easiest to integrate/most automagic scaffolding around at the moment.
It has built in ajax support, near seamless db introspection and it even plays nicely with legacy Oracle databases (which can be a real pain in Rails).
Try it: http://activescaffold.com/
A: Have a look at Casein (http://www.caseincms.com/), might be what you're looking for.
A: Scaffolding is the normal way to create an admin backend BUT there is a project called ActiveScaffold which may solve your problem.
A: Having also tried typus, caseincms and ActiveScaffold over the weekend, I can't rave enough about admin_data.
It is
*
*super-quick to install (Rails 3 is the gem, Rails 2.3 is a plugin branch, no digging through trees on github),
*unintrusive (all code is in the vendor/admin_data folder or the gem where it belongs),
*requires no set-up and optional configuration is one block in one file in your app,
*correctly (!) gets all model information from your model definitions (primary_key, foreign_key, relationships etc.),
*including multiple databases, SQL Server connections via activerecord-sqlserver-adapter, and even composite primary keys, as everything is abstracted on top of ActiveRecord, if you model works, admin_data will work,
*works great with legacy data for the above reasons,
*uses your existing authentication solution which is called in the most wonderful DRYness in your configuration file.
It may be less flexible or pretty than other solutions, but this plugin does many things right for quick admin panel setup.
A: The most common way to create a CRUD interface is to use Scaffold.
./script/generate scaffold_resource MyModel property:type property2:type2
This command would generate a CRUD interface for the model named MyModel (singular) with two properties. Properties are what's called columns in DB lingo. So you could have name:string age:integer active:boolean etc.
A: I can suggest active_admin, which is the best
Active Admin main site
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
} |
Q: Is knowing blend required? Do you expect your WPF developers to know expression blend?
Any good resources for learning more about Blend?
[UPDATE] Does knowing blend make you more productive?
A: As a WPF developer I surely see the benefit of knowing Expression Blend from many of my previous projects. It helped me to jump-start creating UserControls and custom controls very effectively. If you write the XAML from scratch in the conventional way, it is going to take a very long time.
And also for creating a DataTemplate, ControlTemplate, Style or ItemsPanelTemplate - it is just a click away in Expression Blend.
So I highly recommend Expression Blend for a WPF programmer.
A: I typically work in both Blend and Visual Studio (2005) side by side when doing WPF development. (Although, granted, I typically do both design and C# coding).
The benefit of using Blend is that certain tasks are extremely fast there - things like picking colors/brushes, creating animations and layout fixes such as tweaking margins/paddings.
Another usage is to instantly see how your hand written XAML will look like without actually starting the app.
Blend has a bad habit of producing some weird XAML so I always have to clean it up in the VS text editor afterwards. I still find it to be a net win to use blend though.
So, to answer your question: Is Blend required? no, not really. But it will make your life easier for certain tasks and thus make you more productive.
A: Things like animation and gradient color definitions can really only be done effectively in Blend. Blend is also often extremely useful for generating some non-trivial custom visual elements, just so that you can view the generated Xaml and import a CLEANER version into your production code. Unfortunately, the point-and-click nature of Blend disguises the fact that huge volumes of very messy Xaml is being generated under the hood, and you'll want to REFACTOR that Xaml before using it in your production source. Fortunately, learning Blend is not that hard. The best tutorial I ever found was called the "Fabrikam" tutorial. There may be updated versions available, but one version of that tutorial is still available at the link below.
http://blogs.msdn.com/expression/articles/516589.aspx
Realistically, very few dev. shops have access to qualified "interactive designers" (it's not something a company can just re-task one of its junior Mar-Com people to perform), which means, at most places, developers will need to learn some amount of Blend if marketing wants to add the kind of fancy visuals that provide a lot of the justification for using WPF in the first place.
As a developer, after working intensively with WPF for several months, you will find yourself becoming totally comfortable editing Xaml directly and, unlike with Windows Forms, you'll rarely rely on features in the VStudio designer. Not only is direct editing MUCH faster than scrolling through property lists, but VStudio does not have point-and-click support for many of the features you will use in production WPF applications (they just got around to adding an "event" tab in SP#1). Blend has more support for many of these items (it can generate a DataTemplate, for instance), but I usually only jump into Blend to create a quick animation or other visual effect, cut and paste a carefully-refactored version of the markup into my "official" VStudio project source, and move on.
A: I think at least the designers should start using the Expression Suite.
The developers should be somewhat familiar with the tools but just enough to enable them to communicate better with the designers.
A: Since there are not so many good WPF tools, knowing Blend is a pretty useful skill. However, I wouldn't consider it a requirement. The whole idea of WPF is to distribute work between coders and designers. IMO a developer is not required to know Blend thoroughly, but basic skills are required to understand designers' needs.
A: I found Blend a great way to ease into XAML. Many of the common things you want to do are easy in Blend, especially databinding. Databinding has no intellisense and I found doing things in Blend a great way of discovering how do write the databinding syntax.
I now find myself mostly editing raw XAML buy hand.
The areas where blend is really handy:
*
*Customizing templates.
*Animation
*Breaking the UI down into user controls
A: Video training for expression blend:
*
*Total Training Expression Blend
*http://expression.microsoft.com/en-us/cc136536.aspx
*http://windowsclient.net/learn/videos_wpf.aspx
A: I (as a developer, not a designer - so really not a designer) tried to start learning WPF through Blend. While I could get stuff working, looking back at what I produced makes me shiver.
Now that I know my way around WPF pretty well, I still use Blend and Design every now and then, but my work is based in XAML (not designer view in VS, mind you, but XAML). In other words, I know how to clean it up now.
I'm still wondering how I can get my Adobe-Flash, -Photoshop, -Illustrator design guru to work with me in WPF.
A: It fully depends on what you want to do. To answer your second question, would you really want to try editing an animation storyboard outside of Blend? If you're working with the actual visuals of the application, Blend is best suited for this. If you want to hack around with databinding, validation and other things where you must swap back and forth with code, it obviously makes more sense to work on the XAML in Visual Studio.
A: Lynda.com has some cool expression blend training available online...
Getting Started with Expression Blend by Lee Brimelow
A: Developers don't need to know Expression at all.
What you do need to know is XAML and not hide behind some tool, which would be the worst thing you could do as a WPF developer. Your tool of choice is yours to decide on. I used to use the XML editor in Visual Studio.
The only persons who need to know Blend are the ones in charge of the visual aspect of your WPF application. They have to be able to understand how to skin your application with templates, but other than that, they can keep to Blend exclusively.
A: In general, I think it's more important to for developers to understand XAML, as Blend is just a view on top of it. XAMLPad may be more useful for learning XAML in the first instance.
More specifically to this question though, I think if developers are working alongside designers using Blend, it could be very useful to know at least the basics. As well as allowing better communication (as mentioned by @kokos), it will let the developer perform minor edits (such as alignment etc.) in the same environment, and also understand the limitations and boundaries of the tool with respect to the code generation.
Historically, designer tools have had a few quirks that developers have had to work around, such as re-coding HTML in FrontPage, or generating font tags instead of using styles or classes. I'm sure Blend wouldn't do such things, but it might generate XAML that the developer would prefer to restructure or slim down, so knowing which features generate which styles of code could be very hand for the developer.
A: Brennon Williams new book should also be good!!!
A: Would you require your HTML developers to use DreamWeaver?
All good WPF coders should know XAML by hand and only use tools like Blend for quick mockups, for doing animations or tweening, or for doing complicated gradients, etc.
Coding XAML by hand is a requirement for good WPF developers - Blend is a tool, not a substitute for knowing XAML.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Enhancing the web user experience for the vision impaired I was listening to a recent episode of Hanselminutes where Scott Hanselman was discussing accessibility in web applications and it got me thinking about accessibility in my own applications.
We all understand the importance of semantic markup in our web applications as it relates to accessibility but what about other simple enhancements that can be made to improve the user experience for disabled users?
In the episode, there were a number of times where I slapped my forehead and said "Of course! Why haven't I done that?" In particular, Scott talked about a website that placed a hidden link at the top of a web page that said "skip to main content". The link will only be visible to people using screen readers and it allows their screen reader to jump past menus and other secondary content. It's such an obvious improvement yet it's easy not to think of it.
There is more to accessibility and the overall user experience than simply creating valid XHTML and calling it a day.
What are some of your simple tricks for improving the user experience for the vision impaired?
A: Creating accessible pages is something that is hard to think about if you have never done it. However, once you learn the basic concepts it is very easy to do in 95% of the cases. I will mostly be repeating what others have said, but:
*
*Only use tables for tabular data
*Make sure you use the semantic tools available to you via HTML. This means using TH with a scope attribute. Use <em> instead of <i> and <strong> instead of <b>. Use the acronym and abbr tags. Use definition lists. I can expand on these things if anyone wishes.
*One of the most important things is to use the label tag on input fields. For every input field, radio button, checkbox and textinput you should have:
<label for="username">Username:</label><input name="username" />
*Add a "skip navigation" or "skip to navigation" link, depending on where big chunks of text are. If you are working on a government site this should be second nature - everything you're creating should allow you to skip repetitive information. (A minimal markup sketch follows this list.)
*Do not use colors for emphasis.
*Ensure that all of your text is resizable. This pretty much means don't use "px" in your css.
*I will re-emphasize this: create semantic pages. Use H tags for your titles. Use ul/li for navigation.
*Use the alt attribute on all images. If you have a spacer gif... well.. don't. Otherwise, explain what the picture is of and what its significance is to the content it is associated with. don't use "a chart" as your alt tag. Use "Chart of YTD finances: $5,000 Q1, $4,000 Q2, $8,000 Q3" or something similar.
*Provide closed captioning or transcripts for all audio and video components
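A minimal sketch of the "skip navigation" link mentioned in this list - it sits first in the document, is moved off-screen for sighted users (display: none would hide it from screen readers too), and becomes visible again when reached via the keyboard:
<a class="skip-link" href="#main-content">Skip to main content</a>
<!-- header, navigation, etc. -->
<div id="main-content">
    <!-- main page content -->
</div>

/* in the stylesheet */
.skip-link { position: absolute; left: -9999px; }
.skip-link:focus { left: 0; } /* visible when tabbed to */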
The key here is to provide those with visual, hearing and motor impairments the same experience as those with standard physical capabilities. If you can't tab into a field, a screen reader can't either. If you can't click on the text next to a check box to select it, the screen reader doesn't know the text is related to the check box.
You should frequently view your site without stylesheets (ctrl-shift-s if you have Firefox and the Web Developer Toolbar) to see if the page makes sense. If it doesn't make sense to you as a sighted individual, it won't make sense to someone using a screen reader.
A: Check out Fangs
Fangs is an in-browser tool for Firefox that emulates what a screen reader “sees” when visiting a Web page. Its function is simple: to output a transcript of what a screen reader will read out to a user when a Web page is visited. It’s a helpful tool for quickly analyzing if you’ve structured your content effectively so that it’s understandable and usable by vision-impaired individuals, without forcing you to learn to use (and purchase) a screen-reader application such as JAWS or Windows Eyes.
A: It's been awhile since I've been at a job where we had to adhere to Section 508, but here's what I remember that hasn't been touched on by the other posters...
*
*Only use tables for data. Do not use tables for layout if you can avoid it.
*When using tables for data, your column headers should be nested in TH tags and you should use title and scope attributes. Your table tags should use the summary attribute.
*Images should all have a value for the alt attribute that describes what's going on in the image and if the image serves no purpose (it's a shim image or something similar) then the alt attribute should be set to empty string.
*Try using a text to speech reader and/or navigate only through the keyboard and/or turn off stylesheets. I believe you need to purchase JAWS, but I'm sure there are free screen readers out there. You need to experience a site through a screen reader to truly understand how difficult most web pages are to navigate without the cues that screen readers interpret.
A: "Vision impaired" includes colour-blindness. I used to work with someone who couldn't distinguish red from green too well, so any applications that used a traffic-light style interface was very difficult for him to use. In the industry we were working in, alerts in rows were colour-coded, so another form of display was useful for him, such as an extra column in the row with the text of the alert type ("emergency", "warning" etc).
A: The biggest problem with screen readers is usually tables used to position things on your page. Screen readers can't really handle those. Put stuff in divs in your HTML and put them in a sensible order. Then position the divs on your page with CSS. Use tables to display content that should be in a table.
A: The code for many web pages is structured as:
*
*Header
*Top Navigation
*Left Navigation
*Content
*Footer
When structured this way, then the hidden link for "Skip to Main Content" is beneficial. However, with CSS layout, you may be able to reorder this so that you have:
*
*Content
*Header
*Top Navigation
*Left Navigation
*Footer
You then use CSS positioning and floats to move these different elements around on the screen to make the page look the way you want it to look.
The main advantage to structuring a web page in this way is that if the browser doesn't support the CSS, then the content is first on the page. In addition to screen readers, this is beneficial for mobile devices and search engine spiders.
A: For the partially sighted we need to make sure text is not excessively small and contrasts substantially with the background color. We should also make sure text is resizable by using relative sizing units such as ems rather than absolute units like px (although, in my opinion, this is becoming less of an issue as browsers are increasingly favoring zooming over text resizing).
For users of screen readers, it's helpful to get a good idea of the way screen readers are actually used. The following article presents guidelines based on observations of blind people browsing the web using screen readers; it's a little out of date now, but gives you a good feel for what will help screen reader users, and what won't:
http://redish.net/content/papers/interactions.html
Additionally, the American Foundation for the Blind has a section of its website dedicated to advice for web developers on how to cater for vision-impaired users.
In addition to the visually impaired, we need to consider those with disabilities that prevent them from using a mouse, and also those with neurological disabilities. If anyone can provide resources giving advice on how to cater for those individuals, that would be great.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: What Makes a Good Unit Test? I'm sure most of you are writing lots of automated tests and that you also have run into some common pitfalls when unit testing.
My question is do you follow any rules of conduct for writing tests in order to avoid problems in the future? To be more specific: What are the properties of good unit tests or how do you write your tests?
Language agnostic suggestions are encouraged.
A: Let me begin by plugging sources - Pragmatic Unit Testing in Java with JUnit (There's a version with C#-NUnit too.. but I have this one.. it's agnostic for the most part. Recommended.)
Good Tests should be A TRIP (The acronym isn't sticky enough - I have a printout of the cheatsheet in the book that I had to pull out to make sure I got this right..)
*
*Automatic : Invoking of tests as well as checking results for PASS/FAIL should be automatic
*Thorough: Coverage; Although bugs tend to cluster around certain regions in the code, ensure that you test all key paths and scenarios.. Use tools if you must to know untested regions
*Repeatable: Tests should produce the same results each time.. every time. Tests should not rely on uncontrollable params.
*Independent: Very important.
*
*Tests should test only one thing at a time. Multiple assertions are okay as long as they are all testing one feature/behavior. When a test fails, it should pinpoint the location of the problem.
*Tests should not rely on each other - Isolated. No assumptions about order of test execution. Ensure 'clean slate' before each test by using setup/teardown appropriately
*Professional: In the long run you'll have as much test code as production (if not more), therefore follow the same standard of good-design for your test code. Well factored methods-classes with intention-revealing names, No duplication, tests with good names, etc.
*Good tests also run Fast. any test that takes over half a second to run.. needs to be worked upon. The longer the test suite takes for a run.. the less frequently it will be run. The more changes the dev will try to sneak between runs.. if anything breaks.. it will take longer to figure out which change was the culprit.
Update 2010-08:
*
*Readable : This can be considered part of Professional - however it can't be stressed enough. An acid test would be to find someone who isn't part of your team and asking him/her to figure out the behavior under test within a couple of minutes. Tests need to be maintained just like production code - so make it easy to read even if it takes more effort. Tests should be symmetric (follow a pattern) and concise (test one behavior at a time). Use a consistent naming convention (e.g. the TestDox style). Avoid cluttering the test with "incidental details".. become a minimalist.
Apart from these, most of the others are guidelines that cut down on low-benefit work: e.g. 'Don't test code that you don't own' (e.g. third-party DLLs). Don't go about testing getters and setters. Keep an eye on cost-to-benefit ratio or defect probability.
A: Some properties of great unit tests:
*
*When a test fails, it should be immediately obvious where the problem lies. If you have to use the debugger to track down the problem, then your tests aren't granular enough. Having exactly one assertion per test helps here.
*When you refactor, no tests should fail.
*Tests should run so fast that you never hesitate to run them.
*All tests should pass always; no non-deterministic results.
*Unit tests should be well-factored, just like your production code.
@Alotor: If you're suggesting that a library should only have unit tests at its external API, I disagree. I want unit tests for each class, including classes that I don't expose to external callers. (However, if I feel the need to write tests for private methods, then I need to refactor.)
EDIT: There was a comment about duplication caused by "one assertion per test". Specifically, if you have some code to set up a scenario, and then want to make multiple assertions about it, but only have one assertion per test, you might duplicate the setup across multiple tests.
I don't take that approach. Instead, I use test fixtures per scenario. Here's a rough example:
[TestFixture]
public class StackTests
{
[TestFixture]
public class EmptyTests
{
Stack<int> _stack;
[SetUp]
public void TestSetup()
{
_stack = new Stack<int>();
}
[Test]
[ExpectedException(typeof(InvalidOperationException))] // Stack<int>.Pop() throws this when empty
public void PopFails()
{
_stack.Pop();
}
[Test]
public void IsEmpty()
{
Assert.IsTrue(_stack.IsEmpty());
}
}
[TestFixture]
public class PushedOneTests
{
Stack<int> _stack;
[SetUp]
public void TestSetup()
{
_stack = new Stack<int>();
_stack.Push(7);
}
// Tests for one item on the stack...
}
}
A: Tests should be isolated. One test should not depend on another. Even further, a test should not rely on external systems. In other words, test your code, not the code your code depends on. You can test those interactions as part of your integration or functional tests.
A: What you're after is delineation of the behaviours of the class under test.
*
*Verification of expected behaviours.
*Verification of error cases.
*Coverage of all code paths within the class.
*Exercising all member functions within the class.
The basic intent is to increase your confidence in the behaviour of the class.
This is especially useful when looking at refactoring your code. Martin Fowler has an interesting article regarding testing over at his web site.
HTH.
cheers,
Rob
A: Tests should initially fail. Then you should write the code that makes them pass; otherwise you run the risk of writing a test that is bugged and always passes.
A: I like the Right BICEP acronym from the aforementioned Pragmatic Unit Testing book:
*
*Right: Are the results right?
*B: Are all the boundary conditions correct?
*I: Can we check inverse relationships?
*C: Can we cross-check results using other means?
*E: Can we force error conditions to happen?
*P: Are performance characteristics within bounds?
Personally I feel that you can get pretty far by checking that you get the right results (1+1 should return 2 in an addition function), trying out all the boundary conditions you can think of (such as using two numbers whose sum is greater than the integer max value in the add function) and forcing error conditions such as network failures.
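For instance, a boundary test for the overflow case just mentioned might look like this (Adder is a hypothetical class; NUnit syntax):
[Test]
public void Add_SumExceedsIntMaxValue_ThrowsOverflowException()
{
    // boundary condition: the mathematically correct sum does not fit in an int
    Assert.Throws<OverflowException>(() => Adder.Add(int.MaxValue, 1));
}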
A: Good tests need to be maintainable.
I haven't quite figured out how to do this for complex environments. All the textbooks start to come unglued as your code base starts reaching into the hundreds of thousands or millions of lines of code.
*
*Team interactions explode
*number of test cases explodes
*interactions between components explode
*time to build all the unit tests becomes a significant part of the build time
*an API change can ripple to hundreds of test cases. Even though the production code change was easy.
*the number of events required to sequence processes into the right state increases which in turn increases test execution time.
Good architecture can control some of the interaction explosion, but inevitably as systems become more complex the automated testing system grows with it.
This is where you start having to deal with trade-offs:
*
*only test external API otherwise refactoring internals results in significant test case rework.
*setup and teardown of each test gets more complicated as an encapsulated subsystem retains more state.
*nightly compilation and automated test execution grows to hours.
*increased compilation and execution times mean designers don't or won't run all the tests
*to reduce test execution times you consider sequencing tests to reduce set up and teardown
You also need to decide:
*
*where do you store test cases in your code base?
*how do you document your test cases?
*can test fixtures be re-used to save test case maintenance?
*what happens when a nightly test case execution fails? Who does the triage?
*How do you maintain the mock objects? If you have 20 modules all using their own flavor of a mock logging API, changing the API ripples quickly. Not only do the test cases change but the 20 mock objects change. Those 20 modules were written over several years by many different teams. It's a classic re-use problem.
*individuals and their teams understand the value of automated tests they just don't like how the other team is doing it. :-)
I could go on forever, but my point is that:
Tests need to be maintainable.
A: I covered these principles a while back in this MSDN Magazine article, which I think is important for any developer to read.
The way I define "good" unit tests is if they possess the following three properties:
*
*They are readable (naming, asserts, variables, length, complexity..)
*They are Maintainable (no logic, not over specified, state-based, refactored..)
*They are trust-worthy (test the right thing, isolated, not integration tests..)
A: *
*Don't write ginormous tests. As the 'unit' in 'unit test' suggests, make each one as atomic and isolated as possible. If you must, create preconditions using mock objects, rather than recreating too much of the typical user environment manually.
*Don't test things that obviously work. Avoid testing the classes from a third-party vendor, especially the one supplying the core APIs of the framework you code in. E.g., don't test adding an item to the vendor's Hashtable class.
*Consider using a code coverage tool such as NCover to help discover edge cases you have yet to test.
*Try writing the test before the implementation. Think of the test as more of a specification that your implementation will adhere to. Cf. also behavior-driven development, a more specific branch of test-driven development.
*Be consistent. If you only write tests for some of your code, it's hardly useful. If you work in a team, and some or all of the others don't write tests, it's not very useful either. Convince yourself and everyone else of the importance (and time-saving properties) of testing, or don't bother.
A: Most of the answers here seem to address unit testing best practices in general (when, where, why and what), rather than actually writing the tests themselves (how). Since the question seemed pretty specific on the "how" part, I thought I'd post this, taken from a "brown bag" presentation that I conducted at my company.
Womp's 5 Laws of Writing Tests:
1. Use long, descriptive test method names.
- Map_DefaultConstructorShouldCreateEmptyGisMap()
- ShouldAlwaysDelegateXMLCorrectlyToTheCustomHandlers()
- Dog_Object_Should_Eat_Homework_Object_When_Hungry()
2. Write your tests in an Arrange/Act/Assert style.
*
*While this organizational strategy has been around for a while and called many things, the introduction of the "AAA" acronym recently has been a great way to get this across. Making all your tests consistent with AAA style makes them easy to read and maintain.
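For example, a test in the AAA style (Account is a hypothetical class; this sketch also happens to follow laws 1 and 3):
[Test]
public void Withdraw_SufficientFunds_ReducesBalance()
{
    // Arrange
    var account = new Account(100m);

    // Act
    account.Withdraw(40m);

    // Assert
    Assert.AreEqual(60m, account.Balance, "Balance was not reduced by the withdrawn amount");
}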
3. Always provide a failure message with your Asserts.
Assert.That(x == 2 && y == 2, "An incorrect number of begin/end element processing events was raised by the XElementSerializer");
*
*A simple yet rewarding practice that makes it obvious in your runner application what has failed. If you don't provide a message, you'll usually get something like "Expected true, was false" in your failure output, which makes you have to actually go read the test to find out what's wrong.
4. Comment the reason for the test – what’s the business assumption?
/// A layer cannot be constructed with a null gisLayer, as every function
/// in the Layer class assumes that a valid gisLayer is present.
[Test]
public void ShouldNotAllowConstructionWithANullGisLayer()
{
}
*
*This may seem obvious, but this practice will protect the integrity of your tests from people who don't understand the reason behind the test in the first place. I've seen many tests get removed or modified that were perfectly fine, simply because the person didn't understand the assumptions that the test was verifying.
*If the test is trivial or the method name is sufficiently descriptive, it can be permissible to leave the comment off.
5. Every test must always revert the state of any resource it touches
*
*Use mocks where possible to avoid dealing with real resources.
*Cleanup must be done at the test level. Tests must not have any reliance on order of execution.
A: *
*Unit Testing just tests the external API of your Unit, you shouldn't test internal behaviour.
*Each test of a TestCase should test one (and only one) method inside this API.
*
*Additional test cases should be included for failure cases.
*Test the coverage of your tests: once a unit is tested, 100% of the lines inside this unit should have been executed.
A: Jay Fields has a lot of good advice about writing unit tests, and there is a post where he summarizes the most important points. There you will read that you should think critically about your context and judge whether the advice is worth it to you. You get a ton of amazing answers here, but it is up to you to decide which is best for your context. Try them, and just refactor if it smells bad to you.
Kind Regards
A: Keep these goals in mind (adapted from the book xUnit Test Patterns by Meszaros)
*
*Tests should reduce risk, not introduce it.
*Tests should be easy to run.
*Tests should be easy to maintain as the system evolves around them.
Some things to make this easier:
*
*Tests should only fail because of one reason.
*Tests should only test one thing.
*Minimize test dependencies (no dependencies on databases, files, UI etc.)
Don't forget that you can do integration testing with your xUnit framework too, but keep integration tests and unit tests separate.
A: Never assume that a trivial 2 line method will work. Writing a quick unit test is the only way to prevent the missing null test, misplaced minus sign and/or subtle scoping error from biting you, inevitably when you have even less time to deal with it than now.
A: I second the "A TRIP" answer, except that tests SHOULD rely on each other!!!
Why?
DRY - Don't Repeat Yourself - applies to testing as well! Test dependencies can help to 1) save setup time, 2) save fixture resources, and 3) pinpoint failures. Of course, only given that your testing framework supports first-class dependencies. Otherwise, I admit, they are bad.
Follow up http://www.iam.unibe.ch/~scg/Research/JExample/
A: Often unit tests are based on mock objects or mock data.
I like to write three kinds of unit tests:
*
*"transient" unit tests: they create their own mock objects/data and test their function with it, but destroy everything and leave no trace (like no data in a test database)
*"persistent" unit tests: they test functions within your code creating objects/data that will be needed by more advanced functions later on for their own unit tests (sparing those advanced functions from recreating their own set of mock objects/data every time)
*"persistent-based" unit tests: unit tests using mock objects/data that are already there (because they were created in another unit test session by the persistent unit tests).
The point is to avoid replaying everything in order to be able to test every function.
*
*I run the third kind very often because all mock objects/data are already there.
*I run the second kind whenever my model changes.
*I run the first kind to check the very basic functions once in a while, to check for basic regressions.
A: Think about the 2 types of testing and treat them differently - functional testing and performance testing.
Use different inputs and metrics for each. You may need to use different software for each type of test.
A: I use a consistent test naming convention described by Roy Osherove's Unit Test Naming standards. Each method in a given test case class has the following naming style: MethodUnderTest_Scenario_ExpectedResult.
*The first test name section is the name of the method in the system under test.
*Next is the specific scenario that is being tested.
*Finally is the results of that scenario.
Each section uses Upper Camel Case and is delimited by an underscore.
I have found this useful: when I run the tests, they are grouped by the name of the method under test. And having a convention allows other developers to understand the test's intent.
I also append parameters to the method name if the method under test has been overloaded.
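For example (the Withdraw method and its overloads are hypothetical):
[Test]
public void Withdraw_AmountLargerThanBalance_ThrowsInvalidOperationException() { /* ... */ }

// overloaded method under test: the extra parameter is appended to the method-name section
[Test]
public void Withdraw_WithCurrency_AmountLargerThanBalance_ThrowsInvalidOperationException() { /* ... */ }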
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "97"
} |
Q: Hidden Features of PHP? I know this sounds like a point-whoring question but let me explain where I'm coming from.
Out of college I got a job at a PHP shop. I worked there for a year and a half and thought that I had learned all there was to learn about programming.
Then I got a job as a one-man internal development shop at a sizable corporation where all the work was in C#. In my commitment to the position I started reading a ton of blogs and books and quickly realized how wrong I was to think I knew everything. I learned about unit testing, dependency injection and decorator patterns, the design principle of loose coupling, the composition over inheritance debate, and so on and on and on - I am still very much absorbing it all. Needless to say my programming style has changed entirely in the last year.
Now I find myself picking up a php project doing some coding for a friend's start-up and I feel completely constrained as opposed to programming in C#. It really bothers me that all variables at a class scope have to be referred to by prepending '$this->'. It annoys me that none of the IDEs that I've tried have very good intellisense and that my SimpleTest unit test methods have to start with the word 'test'. It drives me crazy that dynamic typing keeps me from specifying explicitly which parameter type a method expects, and that you have to write a switch statement to do method overloads. I can't stand that you can't have nested namespaces and have to use the :: operator to call the base class's constructor.
Now I have no intention of starting a PHP vs C# debate, rather what I mean to say is that I'm sure there are some PHP features that I either don't know about or know about yet fail to use properly. I am set in my C# universe and having trouble seeing outside the glass bowl.
So I'm asking, what are your favorite features of PHP? What are things you can do in it that you can't or are more difficult in the .Net languages?
A: The standard class is a neat container. I only learned about it recently.
Instead of using an array to hold several attributes
$person = array();
$person['name'] = 'bob';
$person['age'] = 5;
You can use a standard class
$person = new stdClass();
$person->name = 'bob';
$person->age = 5;
This is particularly helpful when accessing these variables in a string
$string = $person['name'] . ' is ' . $person['age'] . ' years old.';
// vs
$string = "$person->name is $person->age years old.";
A: Include files can have a return value you can assign to a variable.
// config.php
return array(
'db' => array(
'host' => 'example.org',
'user' => 'usr',
// ...
),
// ...
);
// index.php
$config = include 'config.php';
echo $config['db']['host']; // example.org
A: You can take advantage of the fact that the or operator has lower precedence than = to do this:
$page = (int) @$_GET['page']
or $page = 1;
If the value of the first assignment evaluates to true, the second assignment is ignored. Another example:
$record = get_record($id)
    or die("..."); // note: `or throw new Exception(...)` only works in PHP 8+, where throw became an expression
A: __autoload() (class-) files aided by set_include_path().
In PHP5 it is now unnecessary to specify long lists of "include_once" statements when doing decent OOP.
Just define a small set of directory in which class-library files are sanely structured, and set the auto include path:
set_include_path(get_include_path() . PATH_SEPARATOR . '../libs/');
Now the __autoload() routine:
function __autoload($classname) {
// every class is stored in a file "libs/classname.class.php"
// note: temporarily alter error_reporting to prevent WARNINGS
// Do not suppress errors with a @ - syntax errors will fail silently!
include_once($classname . '.class.php');
}
Now PHP will automagically include the needed files on-demand, conserving parsing time and memory.
A: You can easily add an element to an array.
$my_array = array();
$my_array[] = 'first element';
$my_array[] = 'second element';
Element may be anything: object, array, scalar...
A: As others have mentioned, the ability to run PHP at the command line level is fantastic. I set PHP scripts as cron jobs for data cleanup and backup purposes all the time. Just start the file with these lines:
#!/usr/bin/php5
<?php
// start coding here
Note that the first line may be different depending on where PHP is installed on your system.
From here, it's easy to implement PHP for more complex system-level processes, like daemons.
A: Shorthand Boolean Chains
<?php
TRUE AND print 'Hello';
FALSE OR print 'World';
// Prints "Hello World";
// Complex example...
User::logged_in() or die('Not allowed');
User::is_admin() AND print 'Admin Area';
Which is really useful if you have PHP files in a web-accessible area. By inserting this little tidbit at the top of each file you can make sure that no-one can access any file but index.php
<?php defined('YOURCONSTANT') or die('Not allowed');
///rest of your code
A: Variable variables and functions without a doubt!
$foo = 'bar';
$bar = 'foobar';
echo $$foo; //This outputs foobar
function bar() {
echo 'Hello world!';
}
function foobar() {
echo 'What a wonderful world!';
}
$foo(); //This outputs Hello world!
$$foo(); //This outputs What a wonderful world!
The same concept applies to object parameters ($some_object->$some_variable);
Very, very nice. Makes coding with loops and patterns very easy, and it's faster and more under control than eval (thanks @Ross & @Joshi Spawnbrood!).
A: Easiness. The greatest feature is how easy it is for new developers to sit down and write "working" scripts and understand the code.
The worst feature is how easy it is for new developers to sit down and write "working" scripts and think they understand the code.
The openness of the community surrounding PHP and the massive amounts of PHP projects available as open-source is a lot less intimidating for someone entering the development world and like you, can be a stepping stone into more mature languages.
I won't debate the technical things as many before me have but if you look at PHP as a community rather than a web language, a community that clearly embraced you when you started developing, the benefits really speak for themselves.
A: Built in filters for parsing variables against specific predefined types - as well as covering the basics (int/float etc), extends to covering emails, urls and even if a variable is a valid regular expression.
http://ch2.php.net/manual/en/book.filter.php
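A quick sketch of those filters in action (filter_var() is bundled as of PHP 5.2):
var_dump(filter_var('bob@example.com', FILTER_VALIDATE_EMAIL)); // the address itself
var_dump(filter_var('not an email', FILTER_VALIDATE_EMAIL));    // bool(false)
var_dump(filter_var('42', FILTER_VALIDATE_INT));                // int(42)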
A: You can use functions with a undefined number of arguments using the func_get_args().
<?php
function test() {
$args = func_get_args();
echo $args[2]; // will print 'd'
echo $args[1]; // will print 3
}
test(1,3,'d',4);
?>
A: I love remote files. For web development, this kind of feature is exceptionally useful.
Need to work with the contents of a web page? A simple
$fp = fopen('http://example.com/', 'r');
and you've got a file handle ready to go, just like any other normal file.
Or how about reading a remote file or web page directly in to a string?
$str = file_get_contents('http://example.com/file');
The usefulness of this particular method is hard to overstate.
Want to analyze a remote image? How about doing it via FTP?
$imageInfo = getimagesize('ftp://user:[email protected]/image/name.jpg');
Almost any PHP function that works with files can work with a remote file. You can even include() or require() code files remotely this way.
A: strtr()
It's extremely fast, so much that you would be amazed. Internally it probably uses some crazy b-tree type structure to arrange your matches by their common prefixes. I use it with over 200 find and replace strings and it still goes through 1MB in less than 100ms. For all but trivially small strings strtr() is even significantly faster than strtolower() at doing the exact same thing, even taking character set into account. You could probably write an entire parser using successive strtr calls and it'd be faster than the usual regular expression match, figure out token type, output this or that, next regular expression kind of thing.
I was writing a text normaliser for splitting text into words, lowercasing, removing punctuation etc and strtr was my Swiss army knife, it beat the pants off regular expressions or even str_replace().
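For example, strtr() with an array of replacement pairs applies them all in a single pass over the string (a small sketch):
$replacements = array(
    'Hi'  => 'Hello',
    'all' => 'world',
);
echo strtr('Hi all', $replacements); // "Hello world"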
A: One not so well known feature of PHP is extract(), a function that unpacks an associative array into the local namespace. This probably exists for the autoglobal abormination but is very useful for templating:
function render_template($template_name, $context, $as_string=false)
{
extract($context);
if ($as_string)
ob_start();
include TEMPLATE_DIR . '/' . $template_name;
if ($as_string)
return ob_get_clean();
}
Now you can use render_template('index.html', array('foo' => 'bar')) and only $foo with the value "bar" appears in the template.
A: Typecasting and the ctype_* functions become important to ensure clean data. I have made extensive use of exceptions lately, which has greatly simplified my error handling code.
I wouldn't say the language has lots of killer features. (At least, I don't find much occasion to seek them out.) I like that the language is unobtrusive.
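A hedged sketch of that combination - the 'id' parameter is just for illustration:
try {
    $raw = isset($_GET['id']) ? $_GET['id'] : '';
    if (!ctype_digit($raw)) {
        throw new InvalidArgumentException('id must be a positive integer');
    }
    $id = (int) $raw; // safe to use from here on
} catch (InvalidArgumentException $e) {
    echo 'Bad input: ' . $e->getMessage();
}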
A: Using array elements or object properties inside strings.
Instead of writing
$newVar = $ar['foo']['bar'];
echo "Array value is $newVar";
$newVar = $obj->foo->bar;
echo "Object value is $newVar";
You can write:
echo "Array value is {$ar['foo']['bar']}";
echo "Object value is {$obj->foo->bar}";
A: The ReflectionClass class provides information about a given class.
$classInfo = new ReflectionClass ('MyClass');
if ($classInfo->hasMethod($methodName))
{
    $cm = $classInfo->getMethod($methodName);
    $methodResult = $cm->invoke(null); // pass an object instead of null for instance methods
}
Among other things, useful to check if a method exists and call it.
A: Range() isn't hidden per se, but I still see a lot of people iterating with:
for ($i=0; $i < $x; $i++) {
// code...
}
when they could be using:
foreach (range(0, 12) as $number) {
// ...
}
And you can do simple things like
foreach (range(date("Y"), date("Y")+20) as $i)
{
print "\t<option value=\"{$i}\">{$i}</option>\n";
}
A: preg_split(), array_intersect(), and array_intersect_key().
A: Just about any file type can be included, from .html to .jpeg. Any byte string found inside bound by PHP open tags will be executed. Yes, an image of goat.se can contain all your usual utility functions. I'm guessing the internal behavior of include is to convert the input file to string, and parse for any php code.
A: PHP enabled webspace is usually less expensive than something with (asp).net.
You might call that a feature ;-)
A: One nice feature of PHP is the CLI. It's not so "promoted" in the documentation but if you need routine scripts / console apps, using cron + php cli is really fast to develop!
A: The static keyword is useful outside of a OOP standpoint. You can quickly and easily implement 'memoization' or function caching with something as simple as:
<?php
function foo($arg1)
{
    static $cache = array();
    if( !isset($cache[md5($arg1)]) )
    {
        // Do the expensive work here, producing $results
        $cache[md5($arg1)] = $results;
}
return $cache[md5($arg1)];
}
?>
The static keyword creates a variable that persists across calls but is visible only within that function's scope. This technique is great for functions that hit the database, like get_all_books_by_id(...) or get_all_categories(...), that you would call more than once during a page load.
Caveat: Make sure you find out the best way to make a key for your hash, in just about every circumstance the md5(...) above is NOT a good decision (speed and output length issues), I used it for illustrative purposes. sprintf('%u', crc32(...)) or spl_object_hash(...) may be much better depending on the context.
A:
specifying explicitly which parameter type a method expects
Actually, this one is partly possible (at least in PHP5) - you can specify the type for array and object parameters for functions and methods, though you are out of luck in case of scalar types.
class Bar
{
public function __construct(array $Parameters, Bar $AnotherBar){}
}
Apart from this one and the magic methods Allain mentioned, I also find the interfaces provided by SPL (Standard PHP library) indispensible - you can implement the necessary methods in your class, for example, I particulary like the ArrayAccess and Iterator interfaces, that allow using an object like an associative array or iterating over it just like any simple array.
A: I'm partial to the other PHP users out there. It's easy to get answers and direction when necessary.
A: I also like the difference between ' and ".
$foo = 'Bob';
echo 'My name is {$foo}'; // Doesn't swap the variable
echo "My name is {$foo}"; // Swaps the variable
Therefore, if your string doesn't need variable swapping, don't use a ", it's a waste of time. I see lots of people declaring strings with " all the time.
Note: I use { } as it makes my variables stand out more.
A: There's lots of gems hidden in the Standard PHP Library. Array access allows you to build an object that works to an array interface but add your own functionality on top.
Also when you create an ArrayAccess object by setting a flag in the constructor you can read and write an object as either an array or an object. Here's an example:
$obj = new ArrayObject(array("name"=>"bob", "email"=>"[email protected]"),2); // 2 == ArrayObject::ARRAY_AS_PROPS
$obj->fullname = "Bob Example";
echo $obj["fullname"];
$obj["fullname"]="Bobby Example";
echo $obj->fullname;
A: The alternative syntax for control structures
There are a lot of people who don't know this syntax. When I use pure PHP for templating, this syntax offers a nice and clean way to mix simple control structures such as if or foreach with your HTML template code, usually combined with the <?= $myVar ?> short style of printing a variable.
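For reference, a small template sketch using that syntax ($loggedIn and $items are assumed to be set elsewhere; the <?= short echo tag needs short_open_tag enabled before PHP 5.4):
<?php if ($loggedIn): ?>
    <ul>
    <?php foreach ($items as $item): ?>
        <li><?= htmlspecialchars($item) ?></li>
    <?php endforeach; ?>
    </ul>
<?php else: ?>
    <p>Please log in.</p>
<?php endif; ?>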
A: I suggest using PHPUnit for unit testing, if you want to have annotations for marking your tests, and data providers, and data driven tests, and so on. Not to mention, it seems to get all the integration love when it comes to things like continuous integration (cruise control, bamboo, hudson, etc...).
PHP 5.3, it's a big jump, and it's thoroughly worth it in terms of language features. It may be rough around the edges, but this is a startup and there will be fixed-up releases by the time you launch.
As far as magic methods go __invoke() alone is a big deal, but it doesn't have the reciprocal method for it, even then, paired with array_map, array_reduce, and array_filter, and some wrappers you can do some amazing functional programming.
__get, __set, and __call are really handy as well, I used these and some interface/class naming convention trickery to implement traits prior to 5.3, but now you have traits, as well.
Also have a look at the addendum library, written by Derick Rethans of ezComponents and Xdebug fame; it allows you to do annotations for PHP 5+. It's not bad, and performance is a non-issue with caching.
For profiling, you can use xdebug + webcachegrind.
The best IDE is probably the free eclipse PDT, if you use type hinting on parameters, and phpdoc comments for parameters and returns it can figure things out from those and provide you code completion. That should give you decent intellisense.
BTW, it's tempting to do all sorts of crazy string concats, variable variables, variable method calls, or variable class creation. Do this in more than one place that's not well documented and easy to search via regex, and you're SCREWED. Forget hard to debug - refactoring becomes a major pain. This is something people rarely consider: PHP has NO automated refactoring tools, and refactoring large code bases is VERY hard to do in PHP.
A few things to caution you, even if you smell the slightest bit of possibility that you might have to deal with multi-byte chars, or 'exotic' character encodings, I strongly urge you to wrap up string handling. In fact, introducing a thin layer of indirection which allows you to shim between or act as seams for testing/injectability between your code and built-ins will make your life easier. Not strictly necessary, but unless you have the benefit of foresight, it's hard to tackle internationalization or such large cross-cutting projects.
autoload, learn it and love it. Run away from hard-coded require/includes, or worse, their *_once variants - they tie your hands in terms of injection. Instead use an autoloader; the simplest thing is to jam all your includes in an array, keyed on the class name, with the file path from some root as the value - it's fast. The wicked thing about this is that it makes testing really easy, as you've implemented a class loader, and so you can do some really neat stuff with it.
PHP 5.3 has name spaces now, jump for joy and use them like a mad man. This alone provides an opportunity to create seams (rare) for testing/injections.
Opcode caches, file accesses are slow, to avoid them, use an opcode cache, it's not just the file access, it's all the parsing, really. If you don't have to parse PER request, it makes a BIG difference. Even doing this for a front controller/interceptor will give you a lot of benefits.
Think different, one of the most troubling things for PHP programmers if they come from Java/.Net is that your application server is spread across PHP/Apache, or whatever web server you're using.
Phing/Ant/PHPMaven early on it seems easy just to jam everything in, but build scripts are still useful in php and they have some great support.
I had trouble with method overloading, and still contend with it. I came up with a pattern to alleviate a certain aspect of it. I often had many things that could fulfill a certain parameter, so when you document it @param mixed(int|array|fooObject) if those were the possibilities, I created a static method called Caster::CastTo($param, $toTypeAsString) that would just run through a case matching the type and trying to convert it to a known type. The rest of the method could then assume that one type, or a failure to convert, and work with that. And since I jammed ALL conversions in one class, it stopped mapping of types from being a cross cutting concern, and since these functions can be individually tested, I could test them once, and rely on them everywhere else.
A: Then "and print" trick
<?php $flag and print "Blah" ?>
Will echo Blah if $flag is true. DOES NOT WORK WITH ECHO.
This is very handy in templates and replaces the ? : construct, which is not really easy to read.
A: You can use minus character in variable names like this:
class style
{
....
function set_bg_colour($c)
{
$this->{'background-color'} = $c;
}
}
Why use it? No idea: maybe for a CSS model? Or some weird JSON you need to output. It's an odd feature :)
A: Probably not many know that it is possible to specify constant "variables" as default values for function parameters:
function myFunc($param1, $param2 = MY_CONST)
{
//code...
}
Strings can be used as if they were arrays:
$str = 'hell o World';
echo $str; //outputs: "hell o World"
$str[0] = 'H';
echo $str; //outputs: "Hell o World"
$str[4] = null;
echo $str; // careful: this does NOT remove the character - old PHP versions leave
           // a NUL byte at that offset and PHP 7.1+ raises an error; use
           // substr_replace($str, '', 4, 1) to actually drop a character
A: HEREDOC syntax is my favourite hidden feature. Always difficult to find as you can't Google for <<< but it stops you having to escape large chunks of HTML and still allows you to drop variables into the stream.
echo <<<EOM
<div id="someblock">
<img src="{$file}" />
</div>
EOM;
A: The single most useful thing about PHP code is that if I don't quite understand a function I see I can look it up by using a browser and typing:
http://php.net/function
Last month I saw the "range" function in some code. It's one of the hundreds of functions I'd managed to never use but turn out to be really useful:
http://php.net/range
That url is an alias for http://us2.php.net/manual/en/function.range.php. That simple idea, of mapping functions and keywords to urls, is awesome.
I wish other languages, frameworks, databases, operating systems has as simple a mechanism for looking up documentation.
A: Documentation. The documentation gets my vote. I haven't encountered a more thorough online documentation for a programming language - everything else I have to piece together from various websites and man pages.
A: Fast block comments
/*
die('You shall not pass!');
//*/
//*
die('You shall not pass!');
//*/
These comments let you toggle whether a code block is commented out by changing a single character.
A: Well, the community comes first for me.
Whatever your problem may be, you'll always find someone who had it before and almost always a solution... and sometimes a completely free exchange of ideas and ways to approach a single problem.
I'm trying to learn Python now (to grow up as a... well... programmer, can that be?) and the most useful thing about Python is the indentation.
I love PHP's notation: the $ mark to sign variables, curly braces for loops and cycles - those smart things keep my code very easy to understand (even if whoever wrote the code was a little... messy... 'spaghetti code', hm?)
Arrays in PHP are pretty simple and powerful.
Databases: MySQL, PostgreSQL, etc.; you can use almost every kind of database, easily.
Quick: performance logically depends on how the code is written, but PHP is usually pretty fast for small/medium applications (though it loses steam in bigger ones).
A: I have started to switch over to Python, and one thing I loved in Python is the live interpreter. It wasn't until working on a PHP project later that I realized PHP has this option too, it's just not widely known. In a command prompt, type php -a and paste in any PHP code you want to test - just remember to start it with <?php
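An illustrative session (the exact prompt depends on whether your PHP build includes readline support):
$ php -a
Interactive shell

php > echo strtoupper('hello');
HELLO
php > echo 40 + 2;
42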
A: I think that their proper respect for the GOTO statement is key.
http://us2.php.net/goto
A: My list.. most of them fall more under the "hidden features" than the "favorite features" (I hope!), and not all are useful, but .. yeah.
// swap values. any number of vars works, obviously
list($a, $b) = array($b, $a);
// nested list() calls "fill" variables from multidim arrays:
$arr = array(
array('aaaa', 'bbb'),
array('cc', 'd')
);
list(list($a, $b), list($c, $d)) = $arr;
echo "$a $b $c $d"; // -> aaaa bbb cc d
// list() values to arrays
while (list($arr1[], $arr2[], $arr3[]) = mysql_fetch_row($res)) { .. }
// or get columns from a matrix
foreach($data as $row) list($col_1[], $col_2[], $col_3[]) = $row;
// abusing the ternary operator to set other variables as a side effect:
$foo = $condition ? 'Yes' . (($bar = 'right') && false) : 'No' . (($bar = 'left') && false);
// boolean False cast to string for concatenation becomes an empty string ''.
// you can also use list() but that's so boring ;-)
list($foo, $bar) = $condition ? array('Yes', 'right') : array('No', 'left');
You can nest ternary operators too, comes in handy sometimes.
// the strings' "Complex syntax" allows for *weird* stuff.
// given $i = 3, if $custom is true, set $foo to $P['size3'], else to $C['size3']:
$foo = ${$custom?'P':'C'}['size'.$i];
$foo = $custom?$P['size'.$i]:$C['size'.$i]; // does the same, but it's too long ;-)
// similarly, splitting an array $all_rows into two arrays $data0 and $data1 based
// on some field 'active' in the sub-arrays:
foreach ($all_rows as $row) ${'data'.($row['active']?1:0)}[] = $row;
// slight adaption from another answer here, I had to try out what else you could
// abuse as variable names.. turns out, way too much...
$string = 'f.> <!-? o+';
${$string} = 'asdfasf';
echo ${$string}; // -> 'asdfasf'
echo $GLOBALS['f.> <!-? o+']; // -> 'asdfasf'
// (don't do this. srsly.)
${''} = 456;
echo ${''}; // -> 456
echo $GLOBALS['']; // -> 456
// I have no idea.
Right, I'll stop for now :-)
Hmm, it's been a while..
// just discovered you can comment the hell out of php:
$q/* snarf */=/* quux */$_GET/* foo */[/* bar */'q'/* bazz */]/* yadda */;
So, just discovered you can pass any string as a method name IF you enclose it with curly brackets. You can't define any string as a method alas, but you can catch them with __call(), and process them further as needed. Hmmm....
class foo {
function __call($func, $args) {
eval ($func);
}
}
$x = new foo;
$x->{'foreach(range(1, 10) as $i) {echo $i."\n";}'}();
Found this little gem in Reddit comments:
$foo = 'abcde';
$strlen = 'strlen';
echo "$foo is {$strlen($foo)} characters long."; // "abcde is 5 characters long."
You can't call functions inside {} directly like this, but you can use variables-holding-the-function-name and call those! (*and* you can use variable variables on it, too)
A: Array manipulation.
Tons of tools for working with and manipulating arrays. It may not be unique to PHP, but I've never worked with a language that made it so easy.
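A few of those tools chained together, as a sketch (create_function() is used here since closures only arrived in 5.3):
$numbers = range(1, 10);
$evens   = array_filter($numbers, create_function('$n', 'return $n % 2 == 0;'));
$squares = array_map(create_function('$n', 'return $n * $n;'), $evens);
echo array_sum($squares); // 220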
A: I'm a bit like you, I've coded PHP for over 8 years. I had to take a .NET/C# course about a year ago and I really enjoyed the C# language (hated ASP.NET) but it made me a better PHP developer.
PHP as a language is pretty poor, but I'm extremely quick with it and the LAMP stack is awesome. The end product far outweighs the sum of the parts.
That said, in answer to your question:
http://uk.php.net/SPL
I love the SPL, the collection class in C# was something that I liked as soon as I started with it. Now I can have my cake and eat it.
Andrew
A: I'm a little surprised no-one has mentioned it yet, but one of my favourite tricks with arrays is using the plus operator. It is a little bit like array_merge() but a little simpler. I've found it's usually what I want. In effect, it takes all the entries in the RHS and makes them appear in a copy of the LHS, overwriting as necessary (i.e. it's non-commutative). Very useful for starting with a "default" array and adding some real values all in one hit, whilst leaving default values in place for values not provided.
Code sample requested:
// Set the normal defaults.
$control_defaults = array( 'type' => 'text', 'size' => 30 );
// ... many lines later ...
$control_5 = $control_defaults + array( 'name' => 'surname', 'size' => 40 );
// This is the same as:
// $control_5 = array( 'type' => 'text', 'name' => 'surname', 'size' => 40 );
A: Here's one, I like how setting default values on function parameters that aren't supplied is much easier:
function MyMethod($VarICareAbout, $VarIDontCareAbout = 'yippie') { }
A: Quick and dirty is the default.
The language is filled with useful shortcuts, This makes PHP the perfect candidate for (small) projects that have a short time-to-market.
Not that clean PHP code is impossible, it just takes some extra effort and experience.
But I love PHP because it lets me express what I want without typing an essay.
PHP:
if (preg_match("/cat/","one cat")) {
// do something
}
JAVA:
import java.util.regex.*;
Pattern p = Pattern.compile("cat");
Matcher m = p.matcher("one cat");
if (m.find()) {
// do something
}
And yes, that includes not typing Int.
A: How extremely easy it is to find PHP-related things - examples, applications, classes, documentation, frameworks, etc...
All over the web. It's the easiest language to learn when going commando (by yourself), and also the one that gives the most value for your time.
After learning PHP you might put up a CMS with Joomla, a blog with WordPress, etc...
A: Let's see...
*
*Ternary operators. They work wonders for processing checkboxes in form results.
$var = ($_POST['my_checkbox']=='checked') ? TRUE : FALSE;
*All of the wonderful string and array processing functions are worth trawling through. strtotime(), strlen(), and strpos() are a few of my favorites.
*The SimpleXML class and json_decode() function. Call a REST API or RSS feed with file_get_contents(), parse it effortlessly with one of those tools, and you're done.
A: The predefined interfaces:
http://php.net/manual/en/reserved.interfaces.php
For example implementing ArrayAccess will make your object appear as an array or Iterator will allow it to be used in a foreach statement.
Unfortunately you can't use "object arrays" with the native functions that take arrays as parameters.
I also found it useful to override the __call function which allows you to dynamically create properties and methods for an object.
In my database abstraction I use this to generate functions that are named by the database column names. For example if there is a column 'name' then you can change values in it by using updateByName("foo").
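A rough sketch of that __call() trick - the class and column names here are made up for illustration:
class Record
{
    private $data = array();

    public function __call($method, $args)
    {
        // updateByName('foo') sets the 'name' column to 'foo'
        if (strpos($method, 'updateBy') === 0) {
            $column = strtolower(substr($method, 8)); // strip 'updateBy'
            $this->data[$column] = $args[0];
            return true;
        }
        throw new BadMethodCallException("Unknown method $method");
    }
}

$r = new Record();
$r->updateByName('foo'); // equivalent to setting the 'name' column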
A: Lambda functions
Example - sort by field in multidimension-array
function sort_by_field($field, & $data) {
$sort_func = create_function('$a,$b', 'if ($a["' . $field . '"] == $b["' . $field . '"]) {return 0;}
return ($a["' . $field . '"] < $b["' . $field . '"]) ? -1 : 1;');
uasort($data, $sort_func);
}
Anonymous functions
Anonymous functions lets you define a function to a variable.
http://www.php.net/manual/en/functions.anonymous.php
A: Arrays. Judging from the answers to this question I don't think people fully appreciate just how easy and useful Arrays in PHP are. PHP Arrays act as lists, maps, stacks and generic data structures all at the same time. Arrays are implemented in the language core and are used all over the place which results in good CPU cache locality. Perl and Python both use separate language constructs for lists and maps resulting in more copying and potentially confusing transformations.
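A sketch of one array wearing all three hats:
$a = array();

// as a list
$a[] = 'first';
$a[] = 'second';

// as a map
$a['color'] = 'red';

// as a stack
array_push($a, 'on top');
echo array_pop($a); // "on top"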
A: Stream Handlers allow you to extend the "FileSystem" with logic that as far as I know is quite difficult to do in most other languages.
For example with the MS-Excel Stream handler you can create a MS Excel file in the following way:
$fp = fopen("xlsfile://tmp/test.xls", "wb");
if (!is_resource($fp)) {
die("Cannot open excel file");
}
$data= array(
array("Name" => "Bob Loblaw", "Age" => 50),
array("Name" => "Popo Jijo", "Age" => 75),
array("Name" => "Tiny Tim", "Age" => 90)
);
fwrite($fp, serialize($data));
fclose($fp);
A: Output buffering via ob_start() is far more useful than most realize. The first hidden feature here is that ob_start accepts a callback:
function twiterize($text) {
    // Replace @somename with a link to the full twitter handle
    return preg_replace('/(\s+)@(\w+)(\s+)/', '$1http://www.twitter.com/$2$3', $text);
}
ob_start('twiterize');
Secondly, you can nest output buffers... Using the previous example:
ob_start('parseTemplate');
// ...
ob_start('twiterize');
// ...
ob_end_flush();
// ...
ob_end_flush();
Help contents, text ads, dictionary/index functionality, linkify, link-redirection for tracking purposes, templating engine, all these things are very easy by using different combinations of these 2 things.
A: You can use break N; to exit nested loops (to compensate for the lack of goto). For example
for ($i=0; $i<100; $i++) {
foreach ($myarr as $item) {
if ($item['name'] == 'abort')
break 2;
}
}
More info here - http://php.net/manual/en/control-structures.break.php
A: Actually, you're not quite right that you cannot specify what types a method expects - it does work as you'd expect.
function foo ( array $param0, stdClass $param1 );
Note: This only works for 'array' and object names.
And so on, and you can even pass in your own classes as expected parameters. Calling the methods/functions with something else will result in a fatal error.
Another hint about a good intellisense in PHP. We use ZendStudio and it will actually work a lot better if you write good PHPDocs for your methods, it will look into those when hinting.
A: Magic Methods are fall-through methods that get called whenever you invoke a method that doesn't exist or assign or read a property that doesn't exist, among other things.
interface AllMagicMethods {
// accessing undefined or invisible (e.g. private) properties
public function __get($fieldName);
public function __set($fieldName, $value);
public function __isset($fieldName);
public function __unset($fieldName);
// calling undefined or invisible (e.g. private) methods
public function __call($funcName, $args);
public static function __callStatic($funcName, $args); // as of PHP 5.3
// on serialize() / unserialize()
public function __sleep();
public function __wakeup();
// conversion to string (e.g. with (string) $obj, echo $obj, strlen($obj), ...)
public function __toString();
// calling the object like a function (e.g. $obj($arg, $arg2))
public function __invoke($arguments, $...);
// called on var_export()
public static function __set_state($array);
}
A C++ developer here might notice, that PHP allows overloading some operators, e.g. () or (string). Actually PHP allows overloading even more, for example the [] operator (ArrayAccess), the foreach language construct (Iterator and IteratorAggregate) and the count function (Countable).
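A small sketch of a class opting into two of those hooks - count() via Countable and string conversion via __toString():
class Basket implements Countable
{
    private $items = array();

    public function add($item) { $this->items[] = $item; }

    public function count() { return count($this->items); }

    public function __toString() { return implode(', ', $this->items); }
}

$b = new Basket();
$b->add('apple');
$b->add('pear');
echo count($b);   // 2
echo (string) $b; // "apple, pear"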
A: a) the manual -- extremely comprehensive, up-to-date and just a huge source for inspiration while problem-solving - stuck? browse/search the manual, it'll come to you
b) arrays - they're plastic, they're associatively indexed, they can be easily nested (!) to make up some wild data structures, and there's a multitude of functions just for array operations alone. Oh, and did I mention treating separate variables as an array of values?
c) eval() and similar constructs (like dynamic variable and function names) which allow for much greater flexibility (and are still relatively safe provided you know what you're doing) - nothing beats a program that basically defines its own process flow (or even specific execution) on the fly
d) most probably the easiest thing to overlook: as almost everything in the ZEND engine is a zVal (which in essence is a collection of pointer references), the ability to return about anything as a function return value
Also, I'd like to point out one great feature, but one which is related more to PHP source than the language (and so - listed separately):
e) the ease of writing C extensions (mostly interfaces for other objects like OpenAL or SDL) - great source code structure and about as many powerful tools on the 'inside' as there are on the 'outside' - if you ever need to expand the functionality just that little bit further.
A: Date functions. I have to handle a lot of time information and date strings all day long, so functions like strftime() and strtotime() are just awesome.
A: Besides instant access to start coding away at anything you need for a website?
Besides magic methods and reflections, some interesting functions are:
*
*serialize / unserialize - state saving goodness via sql, cookies, processes, flatfile. good stuff.
*json_encode / json_decode - instant AJAX fun
*get_class - helpful for those weary loose-typing moments
*call_user_func_array - powerful when you can work with your code as strings (think dynamic; see the sketch after this list)
*method_exists - reflection
*func_num_args / func_get_arg - unknown arguments ftw
*set_error_handler / set_exception_handler - very good debugging capabilities for a scripting language
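For instance, combining a couple of the entries above - a serialize round trip plus call_user_func_array for dynamic dispatch (a sketch; the data is made up):
// serialize / unserialize round trip
$saved    = serialize(array('user' => 'bob', 'visits' => 3));
$restored = unserialize($saved);
echo $restored['visits']; // 3

// call a function whose name and arguments arrive as data
$func = 'str_replace';
$args = array('world', 'PHP', 'hello world');
echo call_user_func_array($func, $args); // "hello PHP"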
A: Ctype functions are faster than preg_match() for basic character validation.
ctype_alnum() — Check for alphanumeric character(s)
ctype_alpha() — Check for alphabetic character(s)
ctype_cntrl() — Check for control character(s)
ctype_digit() — Check for numeric character(s)
...etc...
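For example, a numeric-ID check without firing up the regex engine:
$id = '12345';
if (ctype_digit($id)) {
    echo 'valid id';
} else {
    echo 'digits only, please';
}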
A: Error suppression via the error control operator, @, should almost never be used. It promotes lazy and non-defensive coding practices by simply ignoring errors, creates debugging nightmares since errors of all types--even fatal ones--will be suppressed, and, in some cases, can cause a hit to performance (especially when suppressing large quantities of errors).
A: filter_var function. Not a hidden pearl, but pretty new.
A: Well, I've recently delivered my first GUI application to a paying customer - written in PHP! It gathers data from a barcode reader or from GUI pushbuttons, checkboxes, radio buttons or text fields, stores to SQLite or remote MySQL, launches other Windows apps, sends zipped XML reports as email attachments, encrypts and decrypts stored data and even plays a sound when done.
Did it with miniPHP and Winbinder. Is that hidden enough? I guess not many PHP developers have really tried this out.
A: GOOD:
*
*The wide aceptance of PHP in WebHosting. Nearly every web-hosting service has PHP support.
*Simple things can be solve with simple code. No classes or namespaces are strictly required.
BAD:
*
*There is a ton of functions without any naming convention. It is so hard to remember all these functions well enough to use them effectively.
*Bad coding habits, all over the web :(
A: Definitely the magic and overloading methods. Allain cited __get(), __set(), __call() and __toString(), but I also love __wakeup() and __sleep().
These magic methods are called when the object is serialized (sleep) and deserialized (wakeup). This feature enables things like serializable database wrappers, which I am using in an application:
class Connection {
    private $dsn;
    private $connection;
    ...
    public function __wakeup() {
        $this->connection = ADONewConnection();
    }
}
In this way I can "save" connections in $_SESSION, etc.
A: The json_encode/decode functions in php are pretty useful, though not very hidden.
A: In PHP5.3 you can place PHAR archives inside PHAR archives!
Like WAR/EJB in the java world.
A: My revelations over the years have been more conceptual than language based.
1: Rendering instead of echoing.
function render_title($title){
return "<title>$title</title";
}
It's so much easier to reuse the parts and pass them to templates when you render your output instead of echoing it (in which case you'd have to rely on output buffering).
2: functional programming, or at least as close as I can move towards it: functions without side-effects. Rendering, not using globals, keeping your functions to a local scope, things like that. I thought that object oriented programming was the way to go with PHP for a while there, but the reduction in overhead and syntax complexity that I experienced from dropping down from object oriented methods to functional programming methods in PHP makes functional programming the clear choice for me.
3: Templating systems (e.g. Smarty). It's taken me a long time to realize that you -need- a templating system inside what is already a template scripting language, but the separation of logic from display that it gives you is so, so necessary.
A: A lot has already been said about this.
Just to add that one thing that looks pretty forgotten, if not hidden, is the http://talks.php.net part of http://www.php.net. It collects lots of useful presentations, some really old, but some new and extremely valuable.
A: Stackable unit files
<?
// file unit1.php
$this_code='does something.';
?>
<?
// file unit2.php
$this_code='does something else. it could be a PHP class object!';
?>
<?
// file unit3.php
$this_code='does something else. it could be your master include file';
require_once('unit2.php');
include('unit1.php');
?>
<?
// file main.php
include('unit1.php');
require_once('unit2.php');
require_once('unit3.php');
?>
I purposely used include and require_once interchangeably to show what can be done, because they work differently.
There are multiple ways to construct your code or add files into your code. It is even possible to link HTML, AJAX, CSS, JAVASCRIPT, IMAGES and all sorts of files into your code dynamically.
I especially like it, because there are also no requirements of placing the includes/requires at the beginning, middle or end. This allows for more freedom, depending on the use.
A: This is great:
//file page_specific_funcs.inc
function doOtherThing(){
}
class MyClass{
}
//end file
//file.php
function doSomething(){
include("page_specific_funcs.inc");
$var = new MyClass();
}
//end of file.php
The "page_specific_funcs.inc" file is only included if doSomething gets called. The declaration of classes, funcs, etc., inside methods works perfectly.
A: Another nice feature is copy(). This function makes it possible to get a file from any place (even URLs work) and copy it to a local resource. So grabbing files becomes really easy.
A: Magic method __callStatic.
Really useful to make singletons, like this PDO singleton class
A: Question about the original post: Why do you need a switch statement in order to overload a method in PHP? Maybe you mean something by the term "overload" that doesn't match what I learned from C++.
As for favorite features of PHP, I like the Exception object. I find that having a standard error container makes it much easier to decouple the presentation logic from the business logic, and the throw/catch syntax makes it much easier to write automated tests for each class in isolation.
A: Using cURL to set up a test suite to drive a large, complex web form and its back end application. The tests were exhaustive - at least in terms of executing every combination of acceptable inputs.
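A minimal sketch of driving a form with cURL - the URL, field names, and expected response text are all made up for illustration:
$ch = curl_init('http://example.com/form.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('name' => 'bob', 'age' => '5'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
curl_close($ch);

// assert on $response to verify the back end handled the input
echo strpos($response, 'Thank you') !== false ? 'PASS' : 'FAIL';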
A: As far as I know, you can explicitly type a parameter in the function declaration:
function getInt(int $v)
{
    echo $v;
}
getInt(5); // will work
getInt('hello'); // will fail
(Caveat: scalar type hints like int only exist in PHP 7+; in PHP 5 only class names, array, and - as of 5.4 - callable are allowed here, so under PHP 5 even getInt(5) would fail.)
A: Boolean casting, which is particularly helpful for redwall_hp's first example, above.
Instead of:
$var = ($_POST['my_checkbox']=='checked') ? TRUE : FALSE;
You can type:
$var = !!($_POST['my_checkbox']=='checked');
A: You can run a check against every option when using a switch statement by switching on true (note the break statements - the original trick of switching on the tested string doesn't compare the way you'd expect, and without break every following case falls through):
$check = "HELLO";
switch (true) {
    case (bool) eregi('HI', $check):
        echo "Write HI!";
        break;
    case (bool) eregi('HELLO', $check):
        echo "Write HELLO!";
        break;
    case (bool) eregi('OTHER', $check):
        echo "Write OTHER!";
        break;
}
Bye...
A: the hidden features that I love from PHP:

*easy to learn (also easy to misuse with bad programming habits - e.g. you can type $something = "1"; and then do $something += 3; and suddenly $something becomes an integer, without an error message or freaking exceptions like those in Java)
*lots of libraries - go to phpclasses.org and you can get almost everything from there
*lots of websites use it - love it or hate it, that's a fact! :)
*simple, small and easy to maintain - you just install xampplite + vim (my favourite) on your portable devices
*cheap!!! as cheap as a beer... for example hosting: compared to a Java or .NET host, a PHP host is really cheap, and you can get a free one from some websites (although they will put some banners / hidden things inside your website)
*the documentation for PHP is very good!! that's the main reason I have stuck with PHP for about 6 years (although I did some projects using Groovy/Grails)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "174"
} |
Q: C++ Exception code lookup Knowing an exception code, is there a way to find out more about what the actual exception that was thrown means?
My exception in question:
0x64487347
Exception address: 0x1
The call stack shows no information.
I'm reviewing a .dmp of a crash and not actually debugging in Visual Studio.
A: A true C++ exception thrown from Microsoft's runtime will have an SEH code of 0xe06d7363 (E0 + 'msc'). You have some other exception.
.NET generates SEH exceptions with the code 0xe0434f4d (E0 + 'COM').
NT's status codes are documented in ntstatus.h, and generally start 0x80 (warnings) or 0xC0 (errors). The most famous is 0xC0000005, STATUS_ACCESS_VIOLATION.
A: Because you're reviewing a crash dump I'll assume it came in from a customer and you cannot easily reproduce the fault with more instrumentation.
I don't have much help to offer save to note that the exception code 0x64487347 is ASCII "dHsG", and developers often use the initials of the routine or fault condition when making up magic numbers like this.
A little Googling turned up one hit for dHsg in the proper context, the name of a function in a Google Book search for "Using Visual C++ 6" By Kate Gregory. Unfortunately that alone was not helpful.
A: If you know which block threw the exception, can you put more specific handlers in the catch block to try and isolate it that way?
Are you throwing an exception that you rolled yourself?
Edit: I forgot to point you towards this article on Visual C++ exceptions which I've found to be quite useful.
Rob
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I make a subproject with Qt? I'm about to start on a large Qt application, which is made up of smaller components (groups of classes that work together). For example, there might be a dialog that is used in the project, but should be developed on its own before being integrated into the project. Instead of working on it in another folder somewhere and then copying it into the main project folder, can I create a sub-folder which is dedicated to that dialog, and then somehow incorporate it into the main project?
A: Here is what I would do. Let's say I want the following folder hierarchy :
/MyWholeApp
will contain the files for the whole application.
/MyWholeApp/DummyDlg/
will contain the files for the standalone dialogbox which will be eventually part of the whole application.
I would develop the standalone dialog box and the related classes. I would create a Qt-project file which is going to be included. It will contain only the forms and files which will eventually be part of the whole application.
File DummyDlg.pri, in /MyWholeApp/DummyDlg/ :
# Input
FORMS += dummydlg.ui
HEADERS += dummydlg.h
SOURCES += dummydlg.cpp
The above example is very simple. You could add other classes if needed.
To develop the standalone dialog box, I would then create a Qt project file dedicated to this dialog :
File DummyDlg.pro, in /MyWholeApp/DummyDlg/ :
TEMPLATE = app
DEPENDPATH += .
INCLUDEPATH += .
include(DummyDlg.pri)
# Input
SOURCES += main.cpp
As you can see, this PRO file is including the PRI file created above, and is adding an additional file (main.cpp) which will contain the basic code for running the dialog box as a standalone :
#include <QApplication>
#include "dummydlg.h"
int main(int argc, char* argv[])
{
QApplication MyApp(argc, argv);
DummyDlg MyDlg;
MyDlg.show();
return MyApp.exec();
}
Then, to include this dialog box to the whole application you need to create a Qt-Project file :
file WholeApp.pro, in /MyWholeApp/ :
TEMPLATE = app
DEPENDPATH += . DummyDlg
INCLUDEPATH += . DummyDlg
include(DummyDlg/DummyDlg.pri)
# Input
FORMS += OtherDlg.ui
HEADERS += OtherDlg.h
SOURCES += OtherDlg.cpp WholeApp.cpp
Of course, the Qt-Project file above is very simplistic, but shows how I included the stand-alone dialog box.
A: Yes, you can edit your main project (.pro) file to include your sub project's project file.
See here
A: For Qt on Windows you can create DLLs for every subproject you want. No problem with using them from the main project (exe) after that. You'll have to take care of dependencies but it's not very difficult.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How to disable a warning in Delphi about "return value ... might be undefined"? I have a function that gives me the following warning:
[DCC Warning] filename.pas(6939): W1035 Return value of function 'function' might be undefined
The function, however, is clean, small, and does have a known, expected, return value. The first statement in the function is:
Result := '';
and there is no local variable or parameter called Result either.
Is there any kind of pragma-like directive I can surround this method with to remove this warning? This is Delphi 2007.
Unfortunately, the help system on this Delphi installation is not working, therefore I can't pop up the help for that warning right now.
Anyone know off the top of their head what i can do?
A: Are you sure you have done everything to solve the warning? Maybe you could post the code for us to look at?
You can turn off the warning locally this way:
{$WARN NO_RETVAL OFF}
function func(...): string;
begin
...
end;
{$WARN NO_RETVAL ON}
A: I am not sure that I want to see the code for this unit... after all, the error occurs at line 6939 ... Maybe some internal compiler table has been exceeded?
A: There seems to be some sort of bug in Delphi. Read this post, the last comment links to other bug-reports that may be the one that you have got:
http://qc.codegear.com/wc/qcmain.aspx?d=8144
A: The {$WARN NO_RETVAL OFF} is what you are looking for, but generally I like to find out why stuff like this happens. You might consider formatting it differently and seeing if that helps.
Do you have any flow altering commands like Exit in there? Do you directly raise exceptions, etc? Does your case statement have an else at the end that sets a value on Result?
Might try tweaking those elements and see if that eliminates the warning too.
A: In order to get a good answer for this, you'll have to post the code. In general, the Delphi compiler will give this warning if there is a possible code path that could result in the Result not being defined. Sometimes that code path is less than obvious.
A: There has been a bug in the Delphi compiler since at least Delphi 4: if the total number of a function's parameters (including Self and Result) and local variables exceeds 31, it causes problems. For example, it can emit spurious W1035 warnings (result might be undefined). It can also miss unused variables. Just try this project:
program TestCompilerProblems;
procedure Proc;
var
a01, a02, a03, a04, a05, a06, a07, a08, a09, a10,
a11, a12, a13, a14, a15, a16, a17, a18, a19, a20,
a21, a22, a23, a24, a25, a26, a27, a28, a29, a30,
a31, a32, a33, a34, a35, a36, a37, a38, a39, a40: Integer;
begin
end;
begin
Proc;
end.
It causes 31 hints, not the expected 40.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How do I make a ListBox refresh its item text? I'm making an example for someone who hasn't yet realized that controls like ListBox don't have to contain strings; he had been storing formatted strings and jumping through complicated parsing hoops to get the data back out of the ListBox and I'd like to show him there's a better way.
I noticed that if I have an object stored in the ListBox then update a value that affects ToString, the ListBox does not update itself. I've tried calling Refresh and Update on the control, but neither works. Here's the code of the example I'm using, it requires you to drag a listbox and a button onto the form:
Public Class Form1
Protected Overrides Sub OnLoad(ByVal e As System.EventArgs)
MyBase.OnLoad(e)
For i As Integer = 1 To 3
Dim tempInfo As New NumberInfo()
tempInfo.Count = i
tempInfo.Number = i * 100
ListBox1.Items.Add(tempInfo)
Next
End Sub
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
For Each objItem As Object In ListBox1.Items
Dim info As NumberInfo = DirectCast(objItem, NumberInfo)
info.Count += 1
Next
End Sub
End Class
Public Class NumberInfo
Public Count As Integer
Public Number As Integer
Public Overrides Function ToString() As String
Return String.Format("{0}, {1}", Count, Number)
End Function
End Class
I thought that perhaps the problem was using fields and tried implementing INotifyPropertyChanged, but this had no effect. (The reason I'm using fields is because it's an example and I don't feel like adding a few dozen lines that have nothing to do with the topic I'm demonstrating.)
Honestly I've never tried updating items in place like this before; in the past I've always been adding/removing items, not editing them. So I've never noticed that I don't know how to make this work.
So what am I missing?
A: I use this class when I need to have a list box that updates.
Update the object in the list and then call either of the included methods, depending on if you have the index available or not. If you are updating an object that is contained in the list, but you don't have the index, you will have to call RefreshItems and update all of the items.
public class RefreshingListBox : ListBox
{
public new void RefreshItem(int index)
{
base.RefreshItem(index);
}
public new void RefreshItems()
{
base.RefreshItems();
}
}
A: lstBox.Items[lstBox.SelectedIndex] = lstBox.SelectedItem;
A: BindingList handles updating the bindings by itself.
using System;
using System.ComponentModel;
using System.Windows.Forms;
namespace TestBindingList
{
public class Employee
{
public string Name { get; set; }
public int Id { get; set; }
}
public partial class Form1 : Form
{
private BindingList<Employee> _employees;
private ListBox lstEmployees;
private TextBox txtId;
private TextBox txtName;
private Button btnRemove;
public Form1()
{
InitializeComponent();
FlowLayoutPanel layout = new FlowLayoutPanel();
layout.Dock = DockStyle.Fill;
Controls.Add(layout);
lstEmployees = new ListBox();
layout.Controls.Add(lstEmployees);
txtId = new TextBox();
layout.Controls.Add(txtId);
txtName = new TextBox();
layout.Controls.Add(txtName);
btnRemove = new Button();
btnRemove.Click += btnRemove_Click;
btnRemove.Text = "Remove";
layout.Controls.Add(btnRemove);
Load+=new EventHandler(Form1_Load);
}
private void Form1_Load(object sender, EventArgs e)
{
_employees = new BindingList<Employee>();
for (int i = 0; i < 10; i++)
{
_employees.Add(new Employee() { Id = i, Name = "Employee " + i.ToString() });
}
lstEmployees.DisplayMember = "Name";
lstEmployees.DataSource = _employees;
txtId.DataBindings.Add("Text", _employees, "Id");
txtName.DataBindings.Add("Text", _employees, "Name");
}
private void btnRemove_Click(object sender, EventArgs e)
{
Employee selectedEmployee = (Employee)lstEmployees.SelectedItem;
if (selectedEmployee != null)
{
_employees.Remove(selectedEmployee);
}
}
}
}
A: If you derive from ListBox there is the RefreshItem protected method you can call. Just re-expose this method in your own type.
public class ListBox2 : ListBox {
public void RefreshItem2(int index) {
RefreshItem(index);
}
}
Then change your designer file to use your own type (in this case, ListBox2).
A: typeof(ListBox).InvokeMember("RefreshItems",
BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.InvokeMethod,
null, myListBox, new object[] { });
A: Use the datasource property and a BindingSource object in between the datasource and the datasource property of the listbox. Then refresh that.
update added example.
Like so:
Public Class Form1
Private datasource As New List(Of NumberInfo)
Private bindingSource As New BindingSource
Protected Overrides Sub OnLoad(ByVal e As System.EventArgs)
MyBase.OnLoad(e)
For i As Integer = 1 To 3
Dim tempInfo As New NumberInfo()
tempInfo.Count = i
tempInfo.Number = i * 100
datasource.Add(tempInfo)
Next
bindingSource.DataSource = datasource
ListBox1.DataSource = bindingSource
End Sub
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
For Each objItem As Object In datasource
Dim info As NumberInfo = DirectCast(objItem, NumberInfo)
info.Count += 1
Next
bindingSource.ResetBindings(False)
End Sub
End Class
Public Class NumberInfo
Public Count As Integer
Public Number As Integer
Public Overrides Function ToString() As String
Return String.Format("{0}, {1}", Count, Number)
End Function
End Class
A: It's a little bit unprofessional, but it works.
I just removed and re-added the item (and selected it again).
The list was sorted by the displayed (and changed) property, so this was fine for me. The side effect is that an additional event (index changed) is raised.
if (objLstTypes.SelectedItem != null)
{
PublisherTypeDescriptor objType = (PublisherTypeDescriptor)objLstTypes.SelectedItem;
objLstTypes.Items.Remove(objType);
objLstTypes.Items.Add(objType);
objLstTypes.SelectedItem = objType;
}
A: If you use a draw method like:
private void listBox1_DrawItem(object sender, DrawItemEventArgs e)
{
e.DrawBackground();
e.DrawFocusRectangle();
Sensor toBeDrawn = (listBox1.Items[e.Index] as Sensor);
e.Graphics.FillRectangle(new SolidBrush(toBeDrawn.ItemColor), e.Bounds);
e.Graphics.DrawString(toBeDrawn.sensorName, new Font(FontFamily.GenericSansSerif, 14, FontStyle.Bold), new SolidBrush(Color.White),e.Bounds);
}
Sensor is my class.
So if I change the class Color somewhere, you can simply update it as:
int temp = listBoxName.SelectedIndex;
listBoxName.SelectedIndex = -1;
listBoxName.SelectedIndex = temp;
And the Color will update, just another solution :)
A: If you are doing databinding, try this:
private void CheckBox_Click(object sender, EventArgs e)
{
// some kind of hack to make the ListBox refresh
int currentPosition = bindingSource.Position;
bindingSource.Position += 1;
bindingSource.Position -= 1;
bindingSource.Position = currentPosition;
}
In this case there is a checkbox that updates an item in a data bound ListBox. Toggling the position of the binding source back and forth seems to work for me.
A: Some code that I built in VB.NET to help do this. The class for anObject has a ToString override that shows the object's "title/name".
Dim i = LstBox.SelectedIndex
LstBox.Items(i) = anObject
LstBox.Sorted = True
A: You can also try this fragment of code; it works fine:
Public Class Form1
Dim tempInfo As New NumberInfo()
Private Sub Form1_Load() Handles Me.Load
For i As Integer = 1 To 3
tempInfo.Count = i
tempInfo.Number = i * 100
ListBox1.Items.Add(tempInfo)
Next
End Sub
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
Dim info As NumberInfo = tempInfo
Dim obj As New Object
info.Count += 1
info.Number = info.Count * 100
obj = info
ListBox1.Items.Add(obj)
ListBox1.Items.RemoveAt(0)
End Sub
End Class
Public Class NumberInfo
Public Count As Integer
Public Number As Integer
Public Overrides Function ToString() As String
Return String.Format("{0}, {1}", Count, Number)
End Function
End Class
A: If objLstTypes is your ListBox name
Use
objLstTypes.Items.Refresh();
Hope this works...
A: I don't know much about VB.NET, but in C# (ASP.NET) you would set the DataSource property and then call listBox.DataBind() - that would do the trick.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: What are some viable alternatives to BizTalk Server? In evaluating different systems integration strategies, I've come across some words of encouragement, but also some words of frustration over BizTalk Server.
What are some pros and cons to using BizTalk Server (both from a developer standpoint and a business user), and should companies also consider open source alternatives? What viable alternatives are out there?
EDIT: Jitterbit seems like an interesting choice. Open Source and seems to be nicely engineered. Anyone on here have any experience working with it?
A: We evaluated BizTalk at our company and were really disappointed.
We are using IBM WebSphere Transformation Extender (which has lots of (other) problems, too) and the mapping tool of BizTalk is a joke in comparison to WTX.
The graphical tool is not really usable for complex mappings (we have schemas with a few hundred fields in repeating groups) and if you do more than the usual "concat first name and last name to name" mappings, you will be tired of the graphical approach (for example the arguments of the functoids in the graphical mapper are not labeled and the order in which you connect fields to these arguments is important).
The XSLT-Mapper was usable but not really convincing, and even the microsoft rep told us to use a tool like XMLSpy for XSLT and load the resulting XSL file into BizTalk.
A third approach to mapping is to use C#-Code for the mapping, which was not acceptable for us as a general approach (we don't want to teach everyone C#).
In addition to the mapping tool we did not like the deployment in BizTalk. In order to deploy your process, you need to make lots of settings in different tools and places. We had hoped to find a mechanism like a WAR file for Java Web Applications in BizTalk, so that you can give one archive for your whole process solution to your administrator and he can deploy it.
A: We've been using BizTalk since version 2004, and now have a mix of versions 2006 R2 and 2004 running. I found that the learning curve was quite severe, and development time for solutions is not always quick. Those are definitely shortcomings. Where BizTalk really excels is in its fault tolerance, guaranteed delivery, and performance. You can rest assured that data will not get lost. Retry functionality and fault-tolerance robustness are baked in, so generally speaking if systems are down BizTalk will handle that, and successful delivery will occur once systems come back online. All these issues, such as downtime, that are important in an integration scenario are handled by BizTalk.
Further, BizTalk abstracts the communication protocols and data formats of the native systems by dealing with everything as XML, so when developing solutions you typically don't have to write code specific to those systems - you use the BizTalk XML framework.
In the last year, we've implemented a Java open source engine called Mirth for our HL7 routing. I found that for HL7 purposes, the HL7 adaptor for BizTalk is a challenge to work with. Management dictated that we use Mirth for HL7 routing. Where BizTalk falls down in terms of learning curve, Mirth makes up. It is far easier to develop a solution. The problem with Mirth is that it doesn't really have any guaranteed delivery. Most of the adaptors (except for HL7) have no retry functionality, so if you wanted that you'd have to write your own. Second, Mirth can lose data if it goes down. I would call it very easy to use (although there is no documentation) but I'd be hard pressed to call it an enterprise solution. I'm going to check out Jitterbit, which was mentioned by someone else.
A: We used BizTalk for a couple of years, but gave it up for our own custom framework that allowed more flexibility.
A: There is always Sun's (now Oracle) OpenESB framework. Its generally speaking a smaller, lighter version of Biztalk but with roughly all the same features.
You do get to write more code with it, though.
Its Open Source as well.
A: BizTalk Server's key benefit is that it provides a lot of 'plumbing' around deployment, management, performance, and scalability. Through Visual Studio, it also provides a comprehensive framework for developing solutions, often with relatively little code.
The frustration and steep learning curve that others mention often come from using BizTalk for the wrong purpose and from a misunderstanding about how to work with BizTalk and message-oriented systems in general. The learning curve is not as steep as most people suggest - the essential part of the underlying learning actually focuses on changing one's thinking from a procedural approach to a stateless, message-based approach.
A drawback people often cite is cost. The sticker price can seem to be quite high; however, this is cheap in comparison to the amount you'd spend on developing and supporting features on your own.
Before you consider alternatives, or even consider BizTalk Server, you should consider your organization's approach to integration and its long-term goals. BizTalk Server is great in cases where you want to integrate systems using a hub-and-spoke model where BizTalk orchestrates the activities of many applications.
There are other integration models too - one of the more popular ones is a distributed bus (don't confuse this with the term "Enterprise Service Bus" or ESB). You can also get BizTalk to work as a distributed bus and there are alternative solutions that provide more direct support. One of the alternate solutions is an open source solution called nServiceBus.
When considering whether to use a commercial product like BizTalk, verses something else (open source or developed in house), also consider maintenance and enhancements and the availability of the necessary skill-set in the marketplace.
I wrote some articles that go into more detail about the points I discussed here - here are the links:
*
*Why BizTalk?
*Top 10 BizTalk Mistakes
*Extensibility Features in BizTalk Server
*Open Source Integration with nServiceBus
A: My experience with BizTalk was basically a frustrating waste of time.
There are so many edge cases and weird little business logic tweaks you have to make when you are doing B2B data integration (which is probably the hardest part of any enterprise application) that you just need to roll your own solution.
How hard is it to parse data files and convert them to a different format? Not that hard. Unless you're trying to inject a bloated middleware system like Biztalk into the middle of it.
A: In the OSS space (though I've never used them as a BizTalk replacement personally - this is anecdotal) you can use one of the Java/J2EE Messaging engines such as OpenMQ (which is the Sun enterprise one rebadged and without support). If you need Orchestration / Choreography (i.e. SOA/ESB pieces) on top of this, you could look into something like Apache Mule
A: My experience with BizTalk and doing B2B integrations is that most organizations do not truly do schema-first design, or fully understand XML standards for that matter. Most tend to weave objects and hope they materialize into meaningful schemas. In an enterprise environment, this is backwards.
BizTalk does have a learning curve, but once you get it you are rewarded with durability, performance, true scalability, and extensibility. Like most have said though, it's best to make sure it meets your needs, and to adapt your approach to BizTalk rather than fight it.
In the past I have worked with BizTalk 2004 through 2009, and another product called webMethods.
A: As a BizTalk consultant I have to agree at least partly with Eric Z Beard: there are a lot of edge cases that take up a lot of time. But quite a few scenarios are handled extremely smoothly as well, so it all depends IMO. But when you (Eric) call BizTalk bloated I have to disagree! We've found that the performance and reliability are excellent, it's flexible, and it comes with a lot of good adapters out of the box.
A: BizTalk needs to be used correctly,
I am a BizTalk developer and my experience with BizTalk is quite good.
It's reliable, performant, and scalable; it contains a lot of built-in architectural patterns and components to make integration easy and fast. You get security, retries, secondary transports, validation, transformation, etc., and whatever you don't get built in with BizTalk you can easily customize with .NET code. It's basically a hard-earned integration system, and you get all this in one box.
BUT you need to know how to implement BizTalk correctly. More than once I have come across solutions that were implemented, and often also architected, incorrectly.
But the real benefit of BizTalk is that you can implement small solutions and scale up, whilst most other integration systems from big vendors will only sell a whole integration pack, which can cost much more.
BizTalk is considered the most complicated server from the house of Microsoft.
So anybody saying BizTalk is not good doesn't know BizTalk, period.
A: I have no direct experience with JitterBit, but I have heard very good things from coworkers.
A: I came across Apatar (unable to post url, but Google finds it) while looking for a solution cheaper than BizTalk. I have yet to try this out.
My last company had many problems with BizTalk being too complex and rigid, but I can't help but think this was mainly down to the implementation the consultant did.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: Rollover safe timer (tick) comparisons I have a counter in hardware that I can observe for timing considerations. It counts milliseconds and is stored in a 16-bit unsigned value. How do I safely check if a timer value has passed a certain time and safely handle the inevitable rollover:
//this is a bit contrived, but it illustrates what I'm trying to do
const uint16_t print_interval = 5000; // milliseconds
static uint16_t last_print_time;
if(ms_timer() - last_print_time > print_interval)
{
printf("Fault!\n");
last_print_time = ms_timer();
}
This code will fail when ms_timer overflows to 0.
A: I use this code to illustrate the bug and possible solution using a signed comparison.
/* ========================================================================== */
/* timers.c */
/* */
/* Description: Demonstrate unsigned vs signed timers */
/* ========================================================================== */
#include <stdio.h>
#include <limits.h>
int timer;
int HW_DIGCTL_MICROSECONDS_RD()
{
printf ("timer %x\n", timer);
return timer++;
}
// delay up to UINT_MAX microseconds
// this fails when start is near UINT_MAX
void delay_us (unsigned int us)
{
unsigned int start = HW_DIGCTL_MICROSECONDS_RD();
while (start + us > HW_DIGCTL_MICROSECONDS_RD())
;
}
// works correctly for delay from 0 to INT_MAX
void sdelay_us (int us)
{
int start = HW_DIGCTL_MICROSECONDS_RD();
while (HW_DIGCTL_MICROSECONDS_RD() - start < us)
;
}
int main()
{
printf ("UINT_MAX = %x\n", UINT_MAX);
printf ("INT_MAX = %x\n\n", INT_MAX);
printf ("unsigned, no wrap\n\n");
timer = 0;
delay_us (10);
printf ("\nunsigned, wrap\n\n");
timer = UINT_MAX - 8;
delay_us (10);
printf ("\nsigned, no wrap\n\n");
timer = 0;
sdelay_us (10);
printf ("\nsigned, wrap\n\n");
timer = INT_MAX - 8;
sdelay_us (10);
}
Sample output:
bob@hedgehog:~/work2/test$ ./timers|more
UINT_MAX = ffffffff
INT_MAX = 7fffffff
unsigned, no wrap
timer 0
timer 1
timer 2
timer 3
timer 4
timer 5
timer 6
timer 7
timer 8
timer 9
timer a
unsigned, wrap
timer fffffff7
timer fffffff8
signed, no wrap
timer 0
timer 1
timer 2
timer 3
timer 4
timer 5
timer 6
timer 7
timer 8
timer 9
timer a
signed, wrap
timer 7ffffff7
timer 7ffffff8
timer 7ffffff9
timer 7ffffffa
timer 7ffffffb
timer 7ffffffc
timer 7ffffffd
timer 7ffffffe
timer 7fffffff
timer 80000000
timer 80000001
bob@hedgehog:~/work2/test$
A: I have used code like the following for such cases.
I tested it with test cases and can assure that it works 100%.
In addition, for a 32-bit timer tick, change uint16_t to uint32_t and 0xFFFF to 0xFFFFFFFF in the code below.
uint16_t get_diff_tick(uint16_t test_tick, uint16_t prev_tick)
{
if (test_tick < prev_tick)
{
// time rollover(overflow)
return (0xFFFF - prev_tick) + 1 + test_tick;
}
else
{
return test_tick - prev_tick;
}
}
/* your code will be.. */
uint16_t cur_tick = ms_timer();
if(get_diff_tick(cur_tick, last_print_time) > print_interval)
{
printf("Fault!\n");
last_print_time = cur_tick;
}
A: You don't actually need to do anything here. The original code listed in your question will work fine, assuming ms_timer() returns a value of type uint16_t.
(Also assuming that the timer doesn't overflow twice between checks...)
To convince yourself this is the case, try the following test:
uint16_t t1 = 0xFFF0;
uint16_t t2 = 0x0010;
uint16_t dt = t2 - t1;
dt will equal 0x20.
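In other words, the unsigned subtraction is reduced mod 2^16, so the elapsed time comes out right across the rollover. One subtlety: in C, uint16_t operands are promoted to int before the subtraction, so it is safest to force the result back to uint16_t, as the dt example above does. A minimal sketch of the rollover-safe pattern with an explicit cast:
const uint16_t print_interval = 5000; /* milliseconds */
static uint16_t last_print_time;

uint16_t now = ms_timer();
if ((uint16_t)(now - last_print_time) > print_interval) /* wraps correctly mod 2^16 */
{
    printf("Fault!\n");
    last_print_time = now;
}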
A: Just check if ms_timer < last_print_time, and if so add 2^16, no?
Edit: You also need to move up to a uint32 for this if you can.
A: Probably the safest way to avoid the problem would be to use a signed 32-bit value. To use your example:
const int32 print_interval = 5000;
static int32 last_print_time; // I'm assuming this gets initialized elsewhere
int32 delta = ((int32)ms_timer()) - last_print_time; //allow a negative interval
while(delta < 0) delta += 65536; // move the difference back into range
if(delta > print_interval) // '>' so the fault fires once the interval has elapsed
{
printf("Fault!\n");
last_print_time = ms_timer();
}
A: This seems to work for intervals up to 64k/2, which is suitable for me:
const uint16_t print_interval = 5000; // milliseconds
static uint16_t last_print_time;
uint16_t x = ms_timer(); // current timer value
uint16_t next_print_time = last_print_time + print_interval;
if((int16_t) (x - next_print_time) >= 0)
{
printf("Fault!\n");
last_print_time = x;
}
Makes use of the nature of signed integers (two's complement).
A: I found that using a different timer API works better for me. I created a timer module that has two API calls:
void timer_milliseconds_reset(unsigned index);
bool timer_milliseconds_elapsed(unsigned index, unsigned long value);
The timer indices are also defined in the timer header file:
#define TIMER_PRINT 0
#define TIMER_LED 1
#define MAX_MILLISECOND_TIMERS 2
I use unsigned long int for my timer counters (32-bit) since that is the native sized integer on my hardware platform, and that gives me elapsed times from 1 ms to about 49.7 days. You could have timer counters that are 16-bit which would give you elapsed times from 1 ms to about 65 seconds.
The timer counters are an array, and are incremented by the hardware timer (interrupt, task, or polling of counter value). They can be limited to the maximum value of the datatype in the function that handles the increment for a no-rollover timer.
/* variable counts interrupts */
static volatile unsigned long Millisecond_Counter[MAX_MILLISECOND_TIMERS];
bool timer_milliseconds_elapsed(
unsigned index,
unsigned long value)
{
if (index < MAX_MILLISECOND_TIMERS) {
return (Millisecond_Counter[index] >= value);
}
return false;
}
void timer_milliseconds_reset(
unsigned index)
{
if (index < MAX_MILLISECOND_TIMERS) {
Millisecond_Counter[index] = 0;
}
}
Then your code becomes:
//this is a bit contrived, but it illustrates what I'm trying to do
const uint16_t print_interval = 5000; // milliseconds
if (timer_milliseconds_elapsed(TIMER_PRINT, print_interval))
{
printf("Fault!\n");
timer_milliseconds_reset(TIMER_PRINT);
}
A: Sometimes I do it like this:
#define LIMIT 10 // Any value less then ULONG_MAX
ulong t1 = tick of last event;
ulong t2 = current tick;
// This code needs to execute every tick
if ( t1 > t2 ){
if ((ULONG_MAX-t1+t2+1)>=LIMIT){
do something
}
} else {
if ( t2 - t1 >= LIMIT ){
do something
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: What is the best code template facility for Emacs? Particularly, what is the best snippets package out there?
Features:
*
*easy to define new snippets (plain text, custom input with defaults)
*simple navigation between predefined positions in the snippet
*multiple insertion of the same custom input
*accepts currently selected text as a custom input
*cross-platform (Windows, Linux)
*dynamically evaluated expressions (embedded code) written in a concise programming language (Perl, Python, Ruby are preferred)
*nicely coexists with others packages in Emacs
Example of code template, a simple for loop in C:
for (int i = 0; i < %N%; ++i) {
_
}
It is a lot of typing for such common code. I want to invoke a code template or snippet which inserts
that boilerplate code for me. Additionally, it should stop (on TAB or another keystroke) at %N% (my input replaces it), and the final position of the cursor should be _.
A: Personally, I've been using Dmacro for years (ftp://ftp.sgi.com/other/dmacro/dmacro.tar.gz).
Here's a review of it that also mentions some alternatives: http://linuxgazette.net/issue39/marsden.html
A: The EmacsWiki has a page of template engines.
Of these, I've used tempo in the (distant) past to add table support to html-helper-mode, but don't know how it has progressed in the last 15 years.
A: I'd add my vote for tempo snippets ... easy to set up, powerful (you can run arbitrary elisp in your template - so you can downcase things, look up filenames & classes, count things, etc.), sets the indentation, integrates with abbrevs ... I use it a lot ;)
A: TextMate's snippets are the closest match, but TextMate is not a cross-platform solution and not for Emacs.
The second closest thing is YASnippet (the screencast shows the main capabilities). But it interferes with the hippie-expand package in my setup, and the embedded language is Emacs Lisp, which I'm not comfortable with outside .emacs.
EDIT: Posted my answer here to allow voting on YASnippet.
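For reference, the for-loop template from the question would look roughly like this as a YASnippet snippet (a sketch - check the YASnippet docs for the exact header syntax; ${n:default} marks tab-stop fields with defaults, a repeated $1 mirrors the first field, and $0 is the final cursor position):
# -*- mode: snippet -*-
# name: for loop
# key: for
# --
for (int ${1:i} = 0; $1 < ${2:N}; ++$1) {
    $0
}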
A: I vote for http://cedet.sourceforge.net/srecode.shtml
It has very clean syntax and has access to code environment through Semantic.
Also it is a part of a large well supported CEDET distribution (which was built into Emacs for 24.x version series).
UPDATE: YASnippet is also a powerful template engine. But it uses an ugly file-naming scheme (your file name === template name), so you can't put several templates into a single file, and it has issues with national character sets...
A: You can try a lightweight solution muban.el
It is written completely in Elisp and has a very simple syntax.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: How do I access the host machine from the guest machine? I've just created a new Windows XP VM on my Mac using VMware Fusion. The VM is using NAT to share the host's internet connection.
How do I access a Rails application, which is accessible on the Mac itself using http://localhost:3000?
A: You can use your host Mac's (or any other Mac on the network) 'local' name:
http://macname.local:3000/
where macname is the network name of your host (or other desired) Mac.
A: For Django it's important to do the following:
./manage.py runserver [default-gateway-IP]:8000
because
https://docs.djangoproject.com/en/dev/ref/django-admin/
Note that the default IP address, 127.0.0.1, is not accessible from other machines on your network. To make your development server viewable to other machines on the network, use its own IP address (e.g. 192.168.2.1) or 0.0.0.0 or :: (with IPv6 enabled).
A: I just spent an hour trying to get this to work following the steps on SO but mine ended up being a bit different.
VMWare settings
1.) Set VMWare connection to NAT
2.) run > cmd > ipconfig > copy Default Gateway value
3.) edit hosts file (c:/Windows/System32/drivers/etc/hosts)
*
*add this to your hosts file:
<gateway-ip> yourserver.local
OS X settings
1.) edit Apache config (e.g., sudo vim /etc/apache2/httpd.conf)
*
*add this vhost entry to your httpd.conf file:
NameVirtualHost 127.0.0.1
<VirtualHost 127.0.0.1>
DocumentRoot "/path/to/your/project"
ServerName yourserver.local
<Directory "/path/to/your/project">
AllowOverride All
Options All
</Directory>
</VirtualHost>
*
*save & quit (:wq)
2.) Edit your hosts file (sudo vim /etc/hosts)
*
*add this line to your hosts file
127.0.0.1 yourserver.local
3.) Restart Apache (sudo apachectl restart)
I found that I had to switch the connection setting on VMWare in order to restart the connection before these settings worked for me. I hope this helps.
A: For future visitors: once you've got the IP address figured out, you can add an entry to the Windows hosts file, which is located at C:\Windows\system32\drivers\etc\hosts, to map the IP address to a (virtual) server name. Add a line like this:
192.168.78.1 myrubyapp
Now you can access the site in IE at the address http://myrubyapp:3000
If you use virtual hosts under Apache you'll need this to provide the correct server name.
A: On the XP machine, find your IP address by going to the command prompt and typing ipconfig. Try replacing the last number with 1 or 2. For example, if your IP address is 192.168.78.128, use http://192.168.78.1:3000.
A: *
*On the XP machine, Start -> Connect To -> Show all connections.
*Double click Local Area Connection.
*Click the Support tab.
*Take the Default Gateway IP <gateway-ip> and hit http://<gateway-ip>:3000 in your browser.
Gotcha: You must have http:// in the address or IE will give you "The webpage cannot be displayed".
A: As this question is quite old and referring to XP, here is an alternative for new OSs;
If you're rocking Vista or Windows 7 as the Guest OS, and you have Virtual Hosts setup in the Host via Apache, here's how to setup:
In the Host OS, you need to ensure the network connection is done via NAT;
*
*Right click the network icon in the VM window (bottom-right)
*Select "NAT"
*Select "Connect"
*Wait for the guest OS reconnect to the network
Then, In the Guest OS;
*
*Click Start > Network > Network & Sharing Center
*Click "View Status" next to the network connection
*Click "Details"
*Find "IPv4 Default Gateway"
*Open Wordpad
*Edit C:\Windows\System32\drivers\etc\hosts
*Add a line to the file such as:
[default-gateway-IP] www.example.com
[default-gateway-IP] example.com
*Save
*Try opening http://www.example.com or http://example.com in IE
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "131"
} |
Q: Does Django have HTML helpers? Does Django have any template tags to generate common HTML markup? For example, I know that I can get a url using
{% url mapper.views.foo %}
But that only gives me the URL and not the HTML code to create the link. Does Django have anything similar to Rails' link_to helper? I found django-helpers but since this is a common thing I thought Django would have something built-in.
A: No it doesn't.
James Bennett answered a similar question a while back, regarding Rails' built-in JavaScript helpers.
It's really unlikely that Django will ever have 'helper' functionality built-in. The reason, if I understand correctly, has to do with Django's core philosophy of keeping things loosely coupled. Having that kind of helper functionality built-in leads to coupling Django with a specific JavaScript library or (in your case) html document type.
EG. What happens if/when HTML 5 is finally implemented and Django is generating HTML 4 or XHTML markup?
Having said that, Django's template framework is really flexible, and it wouldn't be terribly difficult to write your own tags/filters that did what you wanted. I'm mostly a designer myself, and I've been able to put together a couple custom tags that worked like a charm.
A: The purpose of helpers is not, as others here imply, to help developers who don't know how to write HTML. The purpose is to encapsulate common functionality -- so you don't need to write the same thing a thousand times -- and to provide a single place to edit common HTML used throughout your app.
It's the same reason templates and SSI are useful -- not because people don't know how to write the HTML in their headers and footers, but sometimes you want to write it just once.
EG. What happens if/when HTML 5 is
finally implemented and Django is
generating HTML 4 or XHTML markup?
Same thing that happens when HTML 5 is implemented and all your templates are written in repetitive HTML, except a lot easier.
The other posts have already answered the question, linking to the docs on custom template tags; you can use tags and filters to build your own, but no, there aren't any built in.
A: It doesn't look like they're built in, but here are a couple of snippets. It looks like it'd be pretty easy to create these helpers:
http://www.djangosnippets.org/snippets/441/
A: Here is a list of all template tags and filters built into Django. Django core doesn't have as many HTML helpers as Rails, because the Django contributors assumed that web developers know HTML very well. As stated by saturdaypalace, it's very unlikely for AJAX helpers to be added to Django, because it would lead to coupling Django with a specific JavaScript library.
It's very easy to write your own template tags in Django (often you just need to define one function, similar to Rails). You could reimplement most of Rails' helpers in Django in a day or two.
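As an illustration, a minimal sketch of a link_to-style tag (the module name and view name here are hypothetical; in modern Django, reverse lives in django.urls, while older versions had it in django.core.urlresolvers):
# yourapp/templatetags/link_tags.py
from django import template
from django.urls import reverse
from django.utils.html import format_html

register = template.Library()

@register.simple_tag
def link_to(view_name, text):
    # format_html escapes the arguments, so user-supplied text is safe
    return format_html('<a href="{}">{}</a>', reverse(view_name), text)
Then in a template: {% load link_tags %} {% link_to 'foo' 'my link' %}.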
A: I bet that if there were any consensus on what "common HTML" is, there would be a helpers module too, just for completeness (or because others have it). ;)
Other than that, the Django template system is made mostly for HTML people, who already know how to write p, img and a tags and do not need any helpers for that. On the other side there are Python developers, who write code and do not care whether the variable they put in the context is enclosed by a div or by a span (a perfect example of the separation-of-concerns paradigm). If you need these two worlds to be joined, you have to do it yourself (or look for others' code).
A: This won't answer directly to the question, but why not using <a href="{% url mapper.views.foo %}">foo</a> in template then?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Accessing Firefox cache from an XPCOM component Does anybody know how to get local path of file cached by Firefox based on its URL from an XPCOM component?
A: To access cached items, a new cache session must be created using the createSession method provided by nsICacheService. This method creates an nsICacheSession
object. Information about a cache item can be obtained using the openCacheEntry method of the session object (the method returns an nsICacheEntryDescriptor). To read the data, the user must open an input stream using the openInputStream method of the cache entry object.
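Roughly, from JavaScript in an XPCOM component this looks like the following (a sketch from memory of the old Mozilla cache API - check the nsICacheService/nsICache IDL for the exact constants and signatures):
var cacheService = Components.classes["@mozilla.org/network/cache-service;1"]
                             .getService(Components.interfaces.nsICacheService);
// "HTTP" is the client ID the HTTP protocol handler uses for its entries
var session = cacheService.createSession("HTTP",
                  Components.interfaces.nsICache.STORE_ANYWHERE, true);
var entry = session.openCacheEntry(url,
                  Components.interfaces.nsICache.ACCESS_READ, false);
var stream = entry.openInputStream(0); // read the cached data from offset 0
Note that memory-cached entries have no local file path at all; for disk entries the descriptor may expose the underlying file (its file attribute), but reading through the input stream is the supported route.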
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: MVC.net jQuery Validation After trying to avoid JavaScript for years, I've started using jQuery for validation in ASP.NET MVC, as there does not seem to be an official way of doing validation. I've been surprised how good jQuery is.
Firstly, is there a way to get IntelliSense working for jQuery and its validation plugin, so that I don't have to learn the API?
Secondly, how do I create a validation summary for this? It currently appends the error to the right of the text box:
<script type="text/javascript">
$().ready(function() {
$("#CreateLog").validate({
rules: {
UserName: {
required: true,
minLength: 2,
}
},
messages: {
UserName: {
required: "Please enter a username",
minLength: "Your username must consist of at least 2 characters",
}
}
});
});
</script>
<form id="CreateLog" action="Create" method="post" />
<label>UserName</label><br />
<%=Html.TextBox("UserName")%>
<br />
<div class="error"> </div>
<input type=submit value=Save />
</form>
I tried adding this to the script:
errorLabelContainer: $("#CreateLog div.error")
and this to the html:
<div class="error"> </div>
But this didn't work.
A: There is a Visual Studio 2008 hotfix for JQuery IntelliSense in VS2008 . This might have been bundled with SP1 as well.
A: Try specifying both a wrapper and a label container in your options. I also added display:none; to the style of error-container to let jQuery decide when to show it.
$().ready(function() {
$("#CreateLog").validate({
errorLabelContainer: $("ul", $('div.error-container')),
wrapper: 'li',
rules: {
UserName: {
required: true,
minLength: 2
}
},
messages: {
UserName: {
required: "Please enter a username",
minLength: "Your username must consist of at least 2 characters"
}
}
});
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div class="error-container">
<ul></ul>
</div>
<form id="CreateLog" action="Create" method="post" />
<label>UserName</label><br />
<%=Html.TextBox("UserName")%>
<br />
<input type=submit value=Save />
</form>
That should work.
A: You might want to check out Karl Seguin's ASP.NET MVC validation approach on CodeBetter.com and his sample application canvas.
Validation - Part 1 - Getting Started
Validation - Part 2 - Client-Side
Validation - Part 3 - Server-Side
A: Regarding IntelliSense for jQuery (and other plugins): in order to have full IntelliSense in your own script files as well, just include the following line at the top of each .js file you want IntelliSense in:
/// <reference path="[insert path to script file here]" />
simple, but very useful =)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How can I access the backing variable of an auto-implemented property? In the past we declared properties like this:
public class MyClass
{
private int _age;
public int Age
{
get{ return _age; }
set{ _age = value; }
}
}
Now we can do:
public class MyClass
{
public int Age {get; set;}
}
My question is, how can I access the private variable that is created automatically using this notation?
I would rather access the private variable and not the public accessor 'Age'. Is there a default notation to access the private variable, or it is just not possible?
A: The aim of the new automatic properties is to reduce the amount of boilerplate code you need to write when you just have a simple property that doesn't need any special logic in the get or the set.
If you want to access the private member that these properties use, that's usually for a few reasons:
*
*You need to more than just a simple get/set - in this case, you should just avoid using automatic properties for this member.
*You want to avoid the performance hit of going through the get or set and just use the member directly - in this case, I'd be surprised if there really was a performance hit. The simple get/set members are very very easy to inline, and in my (admittedly limited) testing I haven't found a difference between using the automatic properties and accessing the member directly.
*You only want to have public read access (i.e. just a 'get') and the class write to the member directly - in this case, you can use a private set in your automatic property. i.e.
public class MyClass
{
public int Age {get; private set;}
}
This usually covers most the reasons for wanting to directly get to the backing field used by the automatic properties.
A: You shouldn't, and it's very unlikely you need to. If you need to access the property, just use the public property (e.g. this.Age). There's nothing special about the private field backing the public property, using it in preference to the property is just superstition.
A: Your usage of automatic properties implies that you do not need any getting/setting logic for the property thus a private backing variable is unneccessary.
Don't use automatic properties if you have any complex logic in your class. Just go private int _age and normal getters/setters as you normally would.
IMO, automatic properties are more suited for quickly implementing throwaway objects or temporary data capsules like:
public class TempMessage {
public int FromID { get; set; }
public int ToID { get; set; }
public string Message { get; set; }
}
Where you don't need much logic.
A: You can't; it's a language feature as opposed to an IDE feature. To be honest, I'd prefer the IDE to add the private variable in for you. I agree that it is slightly weird for the class to internally have to use the public entry point to access its own variables. Hence I don't use this new feature that much myself.
A: This syntax is commonly called "syntax sugar", which means that the compiler takes that syntax and translates it into something else. In your example, the compiler would generate code that looks something like this:
[CompilerGenerated]
private int <Age>k__BackingField;
public int Age
{
    [CompilerGenerated]
    get
    {
        return this.<Age>k__BackingField;
    }
    [CompilerGenerated]
    set
    {
        this.<Age>k__BackingField = value;
    }
}
Even knowing all of that, you could probably access the backing field directly but that sort of defeats the purpose of using automatic properties. I say probably here because you then depend on an implementation detail that could change at any point in a future release of the C# compiler.
A:
Behind the scenes what happens is the injection of a private member variable, prefixed with <>k__AutomaticallyGeneratedPropertyField#
From C# 3.0 Automatic Properties explained
Although it may be possible to use that private member directly, it's very hacky and unnecessary.
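If you really must get at it anyway (say, from a test), reflection will do it - a sketch, with the caveat that the generated name ("<Age>k__BackingField" in current compilers) is an undocumented implementation detail that may change:
var field = typeof(MyClass).GetField("<Age>k__BackingField",
    System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.NonPublic);
int age = (int)field.GetValue(myInstance); // myInstance is a MyClass object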
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "79"
} |
Q: Figure out div that is visible out of four divs I need to figure out what div is visible out of four possible divs using jQuery. Only one of those div's will be visible at any given time.
This is what I have that works so far:
$("#FeatureImage1:visible, #FeatureImage2:visible, #FeatureImage3:visible, #FeatureImage4:visible").attr("id");
Is there a way to refactor this? Is there an easier way to figure this out?
A: Assign the same class to each div then:
$("div.myClass:visible").attr("id");
A: When applicable, it's better to use contextual selectors rather than add spurious classes. For instance, if the <div> elements are the only children of an element with id="foo", then using $("#foo > div:visible").attr("id") would better reflect the purpose of the code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Making portable code With all the fuss about open source projects, how come there is still no strong standard that enables you to make portable code (I mean in C/C++, not Java or C#)?
Everyone is kind of making their own soup.
There are even some third-party libs like the Apache Portable Runtime.
A: Yes, there is no standard but libraries like Qt and boost can make your life much easier when you do cross-platform development.
A: wxwidgets is a great abstraction layer on the native GUI widgets of most window managers.
A: I think the main reason there isn't any single library anyone agrees on is that everyone's requirements are different. When you want to wrap system libraries you'll often need to make some assumptions about what the use cases will be, unless you want to make the wrapper huge and impossible to work with. I think that might be the main reason there's no single, common cross platform runtime.
For GUI, the reason would be that each platform has its own UI conventions, you can't code one GUI that fits all, you'll simply get one that fits just one or even none at all.
A: There are many libraries that make cross-platform development easier on their own, but making a complete wrapper for all platforms ends up being either small and highly customized, or massive and completely ridiculous.
Carried to it's logical conclusion, a complete wrapper for all aspects of an operating system becomes an entire virtual runtime. You might as well make your own programming language.
A: The ADAPTIVE Communication Environment (ACE) is an excellent object oriented framework that provides cross-platform support for all of the low level OS functionality like threading, sockets, mutexes, etc. It runs with a crazy number of compilers and operating systems.
A: C and C++ as languages are standardized languages. If you closely follow their rules when coding (that means not using vendor-specific extensions), your code should be portable and you should be able to compile it with any modern compiler on any OS.
However, C and C++ don't have a GUI library like Java or C# do. There do exist some free and commercial GUI libraries that will allow you to write portable GUI applications.
I think the most popular are Qt (commercial) and wxWidgets (FOSS). According to Wikipedia there are a lot more.
There is also Boost. While not a GUI library, Boost is a really great complement to C++'s STL. In fact some of the Boost libraries will be added in the next C++ standard.
A: If you make sure it compiles cleanly with both GCC and MS VC++, it will be little extra effort to port to somewhere else.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Python dictionary from an object's fields Do you know if there is a built-in function to build a dictionary from an arbitrary object? I'd like to do something like this:
>>> class Foo:
... bar = 'hello'
... baz = 'world'
...
>>> f = Foo()
>>> props(f)
{ 'bar' : 'hello', 'baz' : 'world' }
NOTE: It should not include methods. Only fields.
A: Late answer but provided for completeness and the benefit of googlers:
def props(x):
return dict((key, getattr(x, key)) for key in dir(x) if key not in dir(x.__class__))
This will not show methods defined in the class, but it will still show fields including those assigned to lambdas or those which start with a double underscore.
A: vars() is great, but doesn't work for nested objects of objects
Convert nested object of objects to dict:
def to_dict(self):
return json.loads(json.dumps(self, default=lambda o: o.__dict__))
A: The dir builtin will give you all the object's attributes, including special methods like __str__, __dict__ and a whole bunch of others which you probably don't want. But you can do something like:
>>> class Foo(object):
... bar = 'hello'
... baz = 'world'
...
>>> f = Foo()
>>> [name for name in dir(f) if not name.startswith('__')]
[ 'bar', 'baz' ]
>>> dict((name, getattr(f, name)) for name in dir(f) if not name.startswith('__'))
{ 'bar': 'hello', 'baz': 'world' }
So can extend this to only return data attributes and not methods, by defining your props function like this:
import inspect
def props(obj):
pr = {}
for name in dir(obj):
value = getattr(obj, name)
if not name.startswith('__') and not inspect.ismethod(value):
pr[name] = value
return pr
A: I think the easiest way is to create a __getitem__ method for the class. If you need to write to the object, you can create a custom __setattr__. Here is an example for __getitem__:
class A(object):
def __init__(self):
self.b = 1
self.c = 2
def __getitem__(self, item):
return self.__dict__[item]
# Usage:
a = A()
a.__getitem__('b') # Outputs 1
a.__dict__ # Outputs {'c': 2, 'b': 1}
vars(a) # Outputs {'c': 2, 'b': 1}
__dict__ exposes the object's attributes as a dictionary, and the dictionary object can be used to get the item you need.
A: Note that best practice in Python 2.7 is to use new-style classes (not needed with Python 3), i.e.
class Foo(object):
...
Also, there's a difference between an 'object' and a 'class'. To build a dictionary from an arbitrary object, it's sufficient to use __dict__. Usually, you'll declare your methods at class level and your attributes at instance level, so __dict__ should be fine. For example:
>>> class A(object):
... def __init__(self):
... self.b = 1
... self.c = 2
... def do_nothing(self):
... pass
...
>>> a = A()
>>> a.__dict__
{'c': 2, 'b': 1}
A better approach (suggested by robert in comments) is the builtin vars function:
>>> vars(a)
{'c': 2, 'b': 1}
Alternatively, depending on what you want to do, it might be nice to inherit from dict. Then your class is already a dictionary, and if you want you can override getattr and/or setattr to call through and set the dict. For example:
class Foo(dict):
def __init__(self):
pass
def __getattr__(self, attr):
return self[attr]
# etc...
A: In 2021, and for nested objects/dicts/json use pydantic BaseModel - will convert nested dicts and nested json objects to python objects and JSON and vice versa:
https://pydantic-docs.helpmanual.io/usage/models/
>>> from typing import List
>>> from pydantic import BaseModel
>>> class Foo(BaseModel):
... count: int
... size: float = None
...
>>>
>>> class Bar(BaseModel):
... apple = 'x'
... banana = 'y'
...
>>>
>>> class Spam(BaseModel):
... foo: Foo
... bars: List[Bar]
...
>>>
>>> m = Spam(foo={'count': 4}, bars=[{'apple': 'x1'}, {'apple': 'x2'}])
Object to dict
>>> print(m.dict())
{'foo': {'count': 4, 'size': None}, 'bars': [{'apple': 'x1', 'banana': 'y'}, {'apple': 'x2', 'banana': 'y'}]}
Object to JSON
>>> print(m.json())
{"foo": {"count": 4, "size": null}, "bars": [{"apple": "x1", "banana": "y"}, {"apple": "x2", "banana": "y"}]}
Dict to object
>>> spam = Spam.parse_obj({'foo': {'count': 4, 'size': None}, 'bars': [{'apple': 'x1', 'banana': 'y'}, {'apple': 'x2', 'banana': 'y2'}]})
>>> spam
Spam(foo=Foo(count=4, size=None), bars=[Bar(apple='x1', banana='y'), Bar(apple='x2', banana='y2')])
JSON to object
>>> spam = Spam.parse_raw('{"foo": {"count": 4, "size": null}, "bars": [{"apple": "x1", "banana": "y"}, {"apple": "x2", "banana": "y"}]}')
>>> spam
Spam(foo=Foo(count=4, size=None), bars=[Bar(apple='x1', banana='y'), Bar(apple='x2', banana='y')])
A: Dataclass (from Python 3.7) is another option which can be used for converting class properties to a dict. asdict can be used along with dataclass objects for the conversion.
Example:
from dataclasses import dataclass, asdict

@dataclass
class Point:
    x: int
    y: int

p = Point(10, 20)
asdict(p)  # it returns {'x': 10, 'y': 20}
A: I've settled with a combination of both answers:
dict((key, value) for key, value in f.__dict__.iteritems()
if not callable(value) and not key.startswith('__'))
A: I thought I'd take some time to show you how you can translate an object to dict via dict(obj).
class A(object):
d = '4'
e = '5'
f = '6'
def __init__(self):
self.a = '1'
self.b = '2'
self.c = '3'
def __iter__(self):
# first start by grabbing the Class items
iters = dict((x,y) for x,y in A.__dict__.items() if x[:2] != '__')
# then update the class items with the instance items
iters.update(self.__dict__)
# now 'yield' through the items
for x,y in iters.items():
yield x,y
a = A()
print(dict(a))
# prints "{'a': '1', 'c': '3', 'b': '2', 'e': '5', 'd': '4', 'f': '6'}"
The key section of this code is the __iter__ function.
As the comments explain, the first thing we do is grab the Class items and prevent anything that starts with '__'.
Once you've created that dict, then you can use the update dict function and pass in the instance __dict__.
These will give you a complete class+instance dictionary of members. Now all that's left is to iterate over them and yield the returns.
Also, if you plan on using this a lot, you can create an @iterable class decorator.
def iterable(cls):
def iterfn(self):
iters = dict((x,y) for x,y in cls.__dict__.items() if x[:2] != '__')
iters.update(self.__dict__)
for x,y in iters.items():
yield x,y
cls.__iter__ = iterfn
return cls
@iterable
class B(object):
d = 'd'
e = 'e'
f = 'f'
def __init__(self):
self.a = 'a'
self.b = 'b'
self.c = 'c'
b = B()
print(dict(b))
A: Instead of x.__dict__, it's actually more pythonic to use vars(x).
A: As mentioned in one of the comments above, vars currently isn't universal in that it doesn't work for objects with __slots__ instead of a normal __dict__. Moreover, some objects (e.g., builtins like str or int) have neither a __dict__ nor __slots__.
For now, a more versatile solution could be this:
from typing import Any, Dict

def instance_attributes(obj: Any) -> Dict[str, Any]:
"""Get a name-to-value dictionary of instance attributes of an arbitrary object."""
try:
return vars(obj)
except TypeError:
pass
# object doesn't have __dict__, try with __slots__
try:
slots = obj.__slots__
except AttributeError:
# doesn't have __dict__ nor __slots__, probably a builtin like str or int
return {}
# collect all slots attributes (some might not be present)
attrs = {}
for name in slots:
try:
attrs[name] = getattr(obj, name)
except AttributeError:
continue
return attrs
Example:
class Foo:
class_var = "spam"
class Bar:
class_var = "eggs"
__slots__ = ["a", "b"]
>>> foo = Foo()
>>> foo.a = 1
>>> foo.b = 2
>>> instance_attributes(foo)
{'a': 1, 'b': 2}
>>> bar = Bar()
>>> bar.a = 3
>>> instance_attributes(bar)
{'a': 3}
>>> instance_attributes("baz")
{}
Rant:
It's a pity that this isn't built into vars already. Many builtins in Python promise to be "the" solution to a problem but then there's always several special cases that aren't handled... And one just ends up having to write the code manually in any case.
A: A downside of using __dict__ is that it is shallow; it won't convert nested member objects to dictionaries.
If you're using Python3.5 or higher, you can use jsons:
>>> import jsons
>>> jsons.dump(f)
{'bar': 'hello', 'baz': 'world'}
A:
To build a dictionary from an arbitrary object, it's sufficient to use __dict__.
This misses attributes that the object inherits from its class. For example,
class c(object):
x = 3
a = c()
hasattr(a, 'x') is true, but 'x' does not appear in a.__dict__
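A sketch of a props that also picks up class-level data attributes (filtering out dunders and methods; assumes the object has a normal __dict__):
def props(obj):
    # start with class-level data attributes
    attrs = {k: v for k, v in vars(type(obj)).items()
             if not k.startswith('__') and not callable(v)}
    attrs.update(vars(obj))  # instance attributes take precedence
    return attrs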
A: Python3.x
return dict((key, value) for key, value in f.__dict__.items() if not callable(value) and not key.startswith('__'))
A: If you want to list part of your attributes, override __dict__:
def __dict__(self):
d = {
'attr_1' : self.attr_1,
...
}
return d
# Call __dict__
d = instance.__dict__()
This helps a lot if your instance holds some large block of data and you want to push d to a Redis-like message queue.
A: PYTHON 3:
import json
import decimal
from datetime import datetime
from enum import Enum

class DateTimeDecoder(json.JSONDecoder):
    def __init__(self, *args, **kargs):
        json.JSONDecoder.__init__(self, object_hook=self.dict_to_object,
                                  *args, **kargs)
def dict_to_object(self, d):
if '__type__' not in d:
return d
type = d.pop('__type__')
try:
dateobj = datetime(**d)
return dateobj
except:
d['__type__'] = type
return d
def json_default_format(value):
try:
if isinstance(value, datetime):
return {
'__type__': 'datetime',
'year': value.year,
'month': value.month,
'day': value.day,
'hour': value.hour,
'minute': value.minute,
'second': value.second,
'microsecond': value.microsecond,
}
if isinstance(value, decimal.Decimal):
return float(value)
if isinstance(value, Enum):
return value.name
else:
return vars(value)
except Exception as e:
raise ValueError
Now you can use above code inside your own class :
class Foo():
def toJSON(self):
return json.loads(
json.dumps(self, sort_keys=True, indent=4, separators=(',', ': '), default=json_default_format), cls=DateTimeDecoder)
Foo().toJSON()
A: Try:
from pprint import pformat
a_dict = eval(pformat(an_obj))
A: Python 3.7+ in 2023
You can add the dataclass decorator to your class and define a custom JSON serializer, then json.dumps will work (and you can extend it to work with non-serializable attributes by providing a custom encoder to cls).
f=Foo()
json.dumps(f, cls=CustomJSONEncoder)
{"bar": "hello", "baz": "world", "modified": "2023-02-08T11:49:15.675837"}
A custom JSON serializer can be easily modified to make it compatible with any type that isn't natively JSON serializable.
from datetime import datetime
import dataclasses
import json
@dataclasses.dataclass # <<-- add this decorator
class Foo():
"""An example dataclass."""
bar: str = "hello"
baz: str = "world"
modified: datetime = dataclasses.field(default_factory=datetime.utcnow)  # the original used a SQLAlchemy Column here; a plain dataclass needs a field default
class CustomJSONEncoder(json.JSONEncoder): # <<-- Add this custom encoder
"""Custom JSON encoder for the DB class."""
def default(self, o):
if dataclasses.is_dataclass(o): # this serializes anything dataclass can handle
return dataclasses.asdict(o)
if isinstance(o, datetime): # this adds support for datetime
return o.isoformat()
return super().default(o)
To further extend it for any non-serializable type, add another if statement to the custom encoder class that returns something serializable (e.g. str).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "473"
} |
Q: What are the pros and cons of object databases? There is a lot of information out there on object-relational mappers and how to best avoid impedance mismatch, all of which seem to be moot points if one were to use an object database. My question is why isn't this used more frequently? Is it because of performance reasons or because object databases cause your data to become proprietary to your application or is it due to something else?
A: One objection to object databases is that it creates a tight coupling between the data and your code. For certain apps this may be OK, but not for others. One nice thing that a relational database gives you is the possibility to put many views on your data.
Ted Neward explains this and a lot more about OODBMSs a lot better than this.
A: It has nothing to do with performance. That is to say, basically all applications would perform better with an OODB. But that would also put lots of DBAs out of work, or force them to learn a new technology. Even more people would be out of work correcting errors in the data. That's unlikely to make OODBs popular with established companies. Gavin seems to be totally clueless; a better link would be Kirk.
A: *
*Familiarity. The administrators of databases know relational concepts; object ones, not so much.
*Performance. Relational databases have been proven to scale far better.
*Maturity. SQL is a powerful, long-developed language.
*Vendor support. You can pick between many more first-party (SQL servers) and third-party (administrative interfaces, mappings and other kinds of integration) tools than is the case with OODBMSs.
Naturally, the object-oriented model is more familiar to the developer, and, as you point out, would spare one of ORM. But thus far, the relational model has proven to be the more workable option.
See also the recent question, Object Orientated vs Relational Databases.
A: I've been using db4o which is an OODB and it solves most of the cons listed:
*
*Familiarity - Programmers know their language better then SQL (see Native queries)
*Performance - this one is highly subjective but you can take a look at PolePosition
*Vendor support and maturity - can change over time
*Cannot be used by programs that don't also use the same framework - There are OODB standards and you can use different frameworks
*Versioning is probably a bit of a bitch - Versioning is actually easier!
The pros I'm interested in are:
*
*Native queries - db4o lets you write queries in your statically typed language, so you don't have to worry about mistyping a string and finding data missing at runtime (see the sketch after this list),
*Ease of use - Defining business logic in the domain layer, the persistence layer (mapping), and finally the SQL database is certainly a violation of DRY. With an OODB, you define your domain where it belongs.
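For illustration, the native-query style mentioned above looks roughly like this in db4o's Java API (a sketch based on the db4o tutorials; Pilot is a hypothetical domain class and db an open ObjectContainer):
List<Pilot> pilots = db.query(new Predicate<Pilot>() {
    public boolean match(Pilot pilot) {
        return pilot.getPoints() >= 100; // plain, compiler-checked Java
    }
});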
I agree - OODBs have a long way to go, but they are going. And there are domain problems out there that are better solved by an OODB.
A: Cons:
*
*Cannot be used by programs that don't also use the same framework for accessing the data store, making it more difficult to use across the enterprise.
*Fewer resources available online for non-SQL-based databases
*No compatibility across database types (can't swap to a different db provider without changing all the code)
*Versioning is probably a bit of a bitch. I'd guess adding a new property to an object isn't quite as easy as adding a new column to a table.
A: Sören
All of the reasons you stated are valid, but I see the problem with OODBMS is the logical data model. The object-model (or rather the network model of the 70s) is not as simple as the relational one, and is therefore inferior.
A: jodonnel, I don't see how use of object databases couples application code to the data. You can still abstract your application from the OODB by using a Repository pattern, and replace it with an ORM-backed SQL database later if you design things properly.
For an OO application, an OO database will provide a more natural fit for persisting objects.
What's probably true is that you tie your data to your domain model, but then that's the crux!
Wouldn't it be good to have a single way of looking at both data, business rules and processes using a domain centric view?
So, a big pro is that an OODB matches how most modern, enterprise level object orientated software applications are designed, there is no extra effort to design a data layer using a different (relational) design. Cheaper to build and maintain, and in many cases general higher performance.
Cons: just a general lack of maturity and adoption, I reckon...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Testing network interrupts in software I have a network C++ program in Windows that I'd like to test for network disconnects at various times. What are my options?
Currently I am:
*
*Actually disconnecting the network wire from the back of my computer
*using ipconfig /release
*Using the cports program to close out the socket completely
None of these methods though are ideal for me, and I'd like to emulate network problems more easily.
I would like for sometimes connects to fail, sometimes socket reads to fail, and sometimes socket writes to fail. It would be great if there was some utility I could use to emulate these types of problems.
It would also be nice to be able to build some automated unit tests while this emulated bad network is up.
A: The closest I can think of is doing something similar with VEDekstop from Shunra..
Simulating High Latency and Low Bandwidth in Testing of Database Applications
Shunra VE Desktop Standard is a Windows-based client software solution that simulates a wide area network link so that you can test applications under a variety of current and potential network conditions – directly from your desktop.
A: You might want to abstract the network layer, and then you can have unit tests that inject interesting failure events at appropriate points.
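For instance, a minimal sketch of such an abstraction (the names are illustrative, not from any particular library):
struct ISocket {
    virtual ~ISocket() {}
    virtual int Send(const char* buf, int len) = 0;
    virtual int Recv(char* buf, int len) = 0;
};

// Test double that fails every Nth call to emulate a flaky network.
class FlakySocket : public ISocket {
    ISocket& inner;
    int count;
    int failEvery;
public:
    FlakySocket(ISocket& s, int n) : inner(s), count(0), failEvery(n) {}
    int Send(const char* buf, int len) { return (++count % failEvery == 0) ? -1 : inner.Send(buf, len); }
    int Recv(char* buf, int len)       { return (++count % failEvery == 0) ? -1 : inner.Recv(buf, len); }
};
Production code talks only to ISocket; tests wrap the real implementation in FlakySocket (or a pure mock) to simulate failed connects, reads, and writes deterministically.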
A: You can subclass whatever library class you are using to manage your sockets (presumably CAsyncSocket or CSocket if you are using MFC), override the methods whose failure you want to test, and insert appropriate test code in your overrides.
A: There are some methods you can use; it depends on which level you want to test. At the function level, you can use an xUnit testing framework to mock a response. At the software level, you can use a local proxy server to control the connection.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How To Discover RSS Feeds for a given URL I get a URL from a user. I need to know:
a) is the URL a valid RSS feed?
b) if not is there a valid feed associated with that URL
using PHP/Javascript or something similar
(Ex. http://techcrunch.com fails a), but b) would return their RSS feed)
A: This link will allow you to validate the link against the RSS/Atom specifications using the W3C specs, but does require you to manually enter the url.
There are a number of ways to do this programmatically, depending on your choice of language - in PHP, parsing the file as valid XML is a good way to start, then compare it to the relevant DTD.
For b), if the link itself isn't a feed, you can parse it and look for a specified feed in the <head> section of the page, searching for a link whose type is "application/rss+xml", e.g:
<link rel="alternate" title="RSS Feed"
href="http://www.example.com/rss-feed.xml" type="application/rss+xml" />
This type of link is the one used by most browsers to "auto-discover" feeds (causing the RSS icon to appear in your address bar)
A: a) Retrieve it and try to parse it. If you can parse it, it's valid.
b) Test if it's an HTML document (the server sent a text/html MIME type). If so, run it through an HTML parser and look for <link> elements with RSS feed relations.
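In PHP, a rough sketch of step b) using DOMDocument (error handling omitted; the @ suppresses warnings that real-world markup tends to trigger):
<?php
$doc = new DOMDocument();
@$doc->loadHTML(file_get_contents($url));
$feeds = array();
foreach ($doc->getElementsByTagName('link') as $link) {
    $type = $link->getAttribute('type');
    if ($type == 'application/rss+xml' || $type == 'application/atom+xml') {
        $feeds[] = $link->getAttribute('href');
    }
}
?>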
A: For Perl, there is Feed::Find , which does automate the discovery of syndication feeds from the webpage. The usage is quite simplicistic:
use Feed::Find;
my @feeds = Feed::Find->find('http://example.com/');
It first tries the link tags and then scans the a tags for files named .rss and something like that.
A: Found something that I wanted:
Google's AJAX Feed API has a load feed and lookup feed function (Docs here).
a) Load feed provides the feed (and feed status) in JSON
b) Lookup feed provides the RSS feed for a given URL
Theres also a find feed function that searches for RSS feeds based on a keyword.
Planning to use this with JQuery's $.getJSON
A: Are you doing this in a specific language, or do you just want details about the RSS specification?
In general, look for the XML prolog:
<?xml version="1.0" encoding="UTF-8"?>
followed by an <rss> element, but you might want to validate it as XML, fully validate it against a DTD, or verify that - for example, each URL referred to is valid, etc. More detail would help.
UPDATE: Ah - PHP. I've found this library to be pretty useful: MagpieRSS
A: The Zend_Feed class of the Zend Framework can automatically parse a webpage and list the available feeds.
Example:
$feedArray = Zend_Feed::findFeeds('http://www.example.com/news.html');
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Are there legitimate uses for JavaScript's "with" statement? Alan Storm's comments in response to my answer regarding the with statement got me thinking. I've seldom found a reason to use this particular language feature, and had never given much thought to how it might cause trouble. Now, I'm curious as to how I might make effective use of with, while avoiding its pitfalls.
Where have you found the with statement useful?
A: As my previous comments indicated, I don't think you can use with safely no matter how tempting it might be in any given situation. Since the issue isn't directly covered here, I'll repeat it. Consider the following code
user = {};
someFunctionThatDoesStuffToUser(user);
someOtherFunction(user);
with(user){
name = 'Bob';
age = 20;
}
Without carefully investigating those function calls, there's no way to tell what the state of your program will be after this code runs. If user.name was already set, it will now be Bob. If it wasn't set, the global name will be initialized or changed to Bob and the user object will remain without a name property.
Bugs happen. If you use with you will eventually do this and increase the chances your program will fail. Worse, you may encounter working code that sets a global in the with block, either deliberately or through the author not knowing about this quirk of the construct. It's a lot like encountering fall through on a switch, you have no idea if the author intended this and there's no way to know if "fixing" the code will introduce a regression.
Modern programming languages are chock-full of features. Some features, after years of use, are discovered to be bad, and should be avoided. JavaScript's with is one of them (indeed, ECMAScript 5's strict mode forbids it outright).
A: I think the obvious use is as a shortcut. If you're e.g. initializing an object you simply save typing a lot of "ObjectName." Kind of like lisp's "with-slots" which lets you write
(with-slots (foo bar) objectname
"some code that accesses foo and bar"
which is the same as writing
"some code that accesses (slot-value objectname 'foo) and (slot-value objectname 'bar)""
It's more obvious why this is a shortcut then when your language allows "Objectname.foo" but still.
A: You can use with to introduce the contents of an object as local variables to a block, like it's being done with this small template engine.
A: I actually found the with statement to be incredibly useful recently. This technique never really occurred to me until I started my current project - a command line console written in JavaScript. I was trying to emulate the Firebug/WebKit console APIs where special commands can be entered into the console but they don't override any variables in the global scope. I thought of this when trying to overcome a problem I mentioned in the comments to Shog9's excellent answer.
To achieve this effect, I used two with statements to "layer" a scope behind the global scope:
with (consoleCommands) {
with (window) {
eval(expression);
}
}
The great thing about this technique is that, aside from the performance disadvantages, it doesn't suffer the usual fears of the with statement, because we're evaluating in the global scope anyway - there's no danger of variables outside our pseudo-scope being modified.
I was inspired to post this answer when, to my surprise, I managed to find the same technique used elsewhere - the Chromium source code!
InjectedScript._evaluateOn = function(evalFunction, object, expression) {
InjectedScript._ensureCommandLineAPIInstalled();
// Surround the expression in with statements to inject our command line API so that
// the window object properties still take more precedent than our API functions.
expression = "with (window._inspectorCommandLineAPI) { with (window) { " + expression + " } }";
return evalFunction.call(object, expression);
}
EDIT: Just checked the Firebug source, they chain 4 with statements together for even more layers. Crazy!
const evalScript = "with (__win__.__scope__.vars) { with (__win__.__scope__.api) { with (__win__.__scope__.userVars) { with (__win__) {" +
"try {" +
"__win__.__scope__.callback(eval(__win__.__scope__.expr));" +
"} catch (exc) {" +
"__win__.__scope__.callback(exc, true);" +
"}" +
"}}}}";
A: Using "with" can make your code more DRY.
Consider the following code:
var photo = document.getElementById('photo');
photo.style.position = 'absolute';
photo.style.left = '10px';
photo.style.top = '10px';
You can DRY it up to the following:
with(document.getElementById('photo').style) {
position = 'absolute';
left = '10px';
top = '10px';
}
I guess it depends whether you have a preference for legibility or expressiveness.
The first example is more legible and probably recommended for most code. But most code is pretty tame anyway. The second one is a bit more obscure but uses the expressive nature of the language to cut down on code size and superfluous variables.
I imagine people who like Java or C# would choose the first way (object.member) and those who prefer Ruby or Python would choose the latter.
A: Having experience with Delphi, I would say that using with should be a last-resort size optimization, possibly performed by some kind of javascript minimizer algorithm with access to static code analysis to verify its safety.
The scoping problems you can get into with liberal use of the with statement can be a royal pain in the a** and I wouldn't want anyone to experience a debugging session to figure out what the he.. is going on in your code, only to find out that it captured an object member or the wrong local variable, instead of your global or outer scope variable which you intended.
The VB with statement is better, in that it needs the dots to disambiguate the scoping, but the Delphi with statement is a loaded gun with a hair trigger, and it looks to me as though the javascript one is similar enough to warrant the same warning.
A: Yes, yes and yes. There is a very legitimate use. Watch:
with (document.getElementById("blah").style) {
background = "black";
color = "blue";
border = "1px solid green";
}
Basically any other DOM or CSS hooks are fantastic uses of with. It's not like "CloneNode" will be undefined and go back to the global scope unless you went out of your way and decided to make it possible.
Crockford's speed complaint is that a new context is created by with. Contexts are generally expensive. I agree. But if you just created a div and don't have some framework on hand for setting your CSS and need to set up 15 or so CSS properties by hand, then creating a context will probably be cheaper than variable creation and 15 dereferences:
var element = document.createElement("div"),
elementStyle = element.style;
elementStyle.fontWeight = "bold";
elementStyle.fontSize = "1.5em";
elementStyle.color = "#55d";
elementStyle.marginLeft = "2px";
etc...
A: Another use occurred to me today, so I searched the web excitedly and found an existing mention of it: Defining Variables inside Block Scope.
Background
JavaScript, in spite of its superficial resemblance to C and C++, does not scope variables to the block they are defined in:
var name = "Joe";
if ( true )
{
var name = "Jack";
}
// name now contains "Jack"
Declaring a closure in a loop is a common task where this can lead to errors:
for (var i=0; i<3; ++i)
{
var num = i;
setTimeout(function() { alert(num); }, 10);
}
Because the for loop does not introduce a new scope, the same num - with a value of 2 - will be shared by all three functions.
A new scope: let and with
With the introduction of the let statement in ES6, it becomes easy to introduce a new scope when necessary to avoid these problems:
// variables introduced in this statement
// are scoped to each iteration of the loop
for (let i=0; i<3; ++i)
{
setTimeout(function() { alert(i); }, 10);
}
Or even:
for (var i=0; i<3; ++i)
{
// variables introduced in this statement
// are scoped to the block containing it.
let num = i;
setTimeout(function() { alert(num); }, 10);
}
Until ES6 is universally available, this use remains limited to the newest browsers and developers willing to use transpilers. However, we can easily simulate this behavior using with:
for (var i=0; i<3; ++i)
{
// object members introduced in this statement
// are scoped to the block following it.
with ({num: i})
{
setTimeout(function() { alert(num); }, 10);
}
}
The loop now works as intended, creating three separate variables with values from 0 to 2. Note that variables declared within the block are not scoped to it, unlike the behavior of blocks in C++ (in C, variables must be declared at the start of a block, so in a way it is similar). This behavior is actually quite similar to a let block syntax introduced in earlier versions of Mozilla browsers, but not widely adopted elsewhere.
A: Using with is not recommended, and is forbidden in ECMAScript 5 strict mode. The recommended alternative is to assign the object whose properties you want to access to a temporary variable.
Source: Mozilla.org
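For illustration, a minimal sketch of that recommended alternative (reusing the photo/style example from an earlier answer):
// Instead of: with (document.getElementById("photo").style) { ... }
var s = document.getElementById("photo").style; // short-lived alias
s.position = "absolute";
s.left = "10px";
s.top = "10px";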
A: The with statement can be used to decrease the code size or for private class members, example:
// demo class framework
var Class= function(name, o) {
var c=function(){};
if( o.hasOwnProperty("constructor") ) {
c= o.constructor;
}
delete o["constructor"];
delete o["prototype"];
c.prototype= {};
for( var k in o ) c.prototype[k]= o[k];
c.scope= Class.scope;
c.scope.Class= c;
c.Name= name;
return c;
}
Class.newScope= function() {
Class.scope= {};
Class.scope.Scope= Class.scope;
return Class.scope;
}
// create a new class
with( Class.newScope() ) {
window.Foo= Class("Foo",{
test: function() {
alert( Class.Name );
}
});
}
(new Foo()).test();
The with statement is very useful if you want to modify the scope, which is necessary for having your own global scope that you can manipulate at runtime. You can put constants on it, or certain often-used helper functions like "toUpper", "toLower" or "isNumber", "clipNumber", and so on.
About the bad performance I read about so often: scoping a function won't have any impact on performance; in fact, in my FF a scoped function runs faster than an unscoped one:
var o={x: 5},r, fnRAW= function(a,b){ return a*b; }, fnScoped, s, e, i;
with( o ) {
fnScoped= function(a,b){ return a*b; };
}
s= Date.now();
r= 0;
for( i=0; i < 1000000; i++ ) {
r+= fnRAW(i,i);
}
e= Date.now();
console.log( (e-s)+"ms" );
s= Date.now();
r= 0;
for( i=0; i < 1000000; i++ ) {
r+= fnScoped(i,i);
}
e= Date.now();
console.log( (e-s)+"ms" );
So used in the above-mentioned way, the with statement has no negative effect on performance, but a good one, as it decreases the code size, which impacts memory usage on mobile devices.
A: You can define a small helper function to provide the benefits of with without the ambiguity:
var with_ = function (obj, func) { func (obj); };
with_ (object_name_here, function (_)
{
_.a = "foo";
_.b = "bar";
});
A: Using with also makes your code slower in many implementations, as everything now gets wrapped in an extra scope for lookup. There's no legitimate reason for using with in JavaScript.
A: I think the with-statement can come in handy when converting a template language into JavaScript. For example JST in base2, but I've seen it more often.
I agree one can program this without the with statement, but because it doesn't cause any problems here, it is a legitimate use.
A: It's good for putting code that runs in a relatively complicated environment into a container: I use it to make a local binding for "window" and such to run code meant for a web browser.
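A hedged sketch of that container idea (fakeWindow below is a stand-in for whatever environment object you would actually build):
var log = [];
var fakeWindow = {
    location: { href: "about:blank" },
    alert: function (msg) { log.push(msg); } // capture instead of popping a dialog
};
with ({ window: fakeWindow, alert: fakeWindow.alert }) {
    // Code written against the browser environment resolves these names
    // to the container first:
    alert("hello");                  // pushed onto log, no real dialog
    var href = window.location.href; // "about:blank"
}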
A: I think the object literal use is interesting, like a drop-in replacement for using a closure
for(var i = nodes.length; i--;)
{
// info is namespaced in a closure the click handler can access!
(function(info)
{
nodes[i].onclick = function(){ showStuff(info) };
})(data[i]);
}
or the with statement equivalent of a closure
for(var i = nodes.length; i--;)
{
// info is namespaced in a closure the click handler can access!
with({info: data[i]})
{
nodes[i].onclick = function(){ showStuff(info) };
}
}
I think the real risk is accidentally manipulating variables that are not part of the with statement, which is why I like the object literal being passed into with: you can see exactly what will be in the added context in the code.
A: I created a "merge" function which eliminates some of this ambiguity with the with statement:
if (typeof Object.merge !== 'function') {
Object.merge = function (o1, o2) { // Function to merge all of the properties from one object into another
for(var i in o2) { o1[i] = o2[i]; }
return o1;
};
}
I can use it similarly to with, but I can know it won't affect any scope which I don't intend for it to affect.
Usage:
var eDiv = document.createElement("div");
var eHeader = Object.merge(eDiv.cloneNode(false), {className: "header", onclick: function(){ alert("Click!"); }});
function NewObj() {
Object.merge(this, {size: 4096, initDate: new Date()});
}
A: For some short code pieces, I would like to use the trigonometric functions like sin, cos etc. in degree mode instead of in radian mode. For this purpose, I use an AngularDegree object:
AngularDegree = new function() {
this.CONV = Math.PI / 180;
this.sin = function(x) { return Math.sin( x * this.CONV ) };
this.cos = function(x) { return Math.cos( x * this.CONV ) };
this.tan = function(x) { return Math.tan( x * this.CONV ) };
this.asin = function(x) { return Math.asin( x ) / this.CONV };
this.acos = function(x) { return Math.acos( x ) / this.CONV };
this.atan = function(x) { return Math.atan( x ) / this.CONV };
this.atan2 = function(x,y) { return Math.atan2(x,y) / this.CONV };
};
Then I can use the trigonometric functions in degree mode without further language noise in a with block:
function getAzimut(pol,pos) {
...
var d = pos.lon - pol.lon;
with(AngularDegree) {
var z = atan2( sin(d), cos(pol.lat)*tan(pos.lat) - sin(pol.lat)*cos(d) );
return z;
}
}
This means: I use an object as a collection of functions, which I enable in a limited code region for direct access. I find this useful.
A: Hardly seems worth it since you can do the following:
var o = incrediblyLongObjectNameThatNoOneWouldUse;
o.name = "Bob";
o.age = "50";
A: I just really don't see how using the with is any more readable than just typing object.member. I don't think it's any less readable, but I don't think it's any more readable either.
Like lassevk said, I can definitely see how using with would be more error prone than just using the very explicit "object.member" syntax.
A: I think that the usefulness of with can be dependent on how well your code is written. For example, if you're writing code that appears like this:
var sHeader = object.data.header.toString();
var sContent = object.data.content.toString();
var sFooter = object.data.footer.toString();
then you could argue that with will improve the readability of the code by doing this:
var sHeader = null, sContent = null, sFooter = null;
with(object.data) {
sHeader = header.toString();
sContent = content.toString();
sFooter = footer.toString();
}
Conversely, it could be argued that you're violating the Law of Demeter, but, then again, maybe not. I digress =).
Above all else, know that Douglas Crockford recommends not using with. I urge you to check out his blog post regarding with and its alternatives here.
A: I don't ever use with, don't see a reason to, and don't recommend it.
The problem with with is that it prevents numerous lexical optimizations an ECMAScript implementation can perform. Given the rise of fast JIT-based engines, this issue will probably become even more important in the near future.
It might look like with allows for cleaner constructs (when, say, introducing a new scope instead of a common anonymous function wrapper or replacing verbose aliasing), but it's really not worth it. Besides decreased performance, there's always a danger of assigning to a property of the wrong object (when the property is not found on an object in the injected scope) and perhaps erroneously introducing global variables. IIRC, the latter issue is the one that motivated Crockford to recommend avoiding with.
A: I have been using the with statement as a simple form of scoped import. Let's say you have a markup builder of some sort. Rather than writing:
markupbuilder.div(
markupbuilder.p('Hi! I am a paragraph!',
markupbuilder.span('I am a span inside a paragraph')
)
)
You could instead write:
with(markupbuilder){
div(
p('Hi! I am a paragraph!',
span('I am a span inside a paragraph')
)
)
}
For this use case, I am not doing any assignment, so I don't have the ambiguity problem associated with that.
A: Visual Basic.NET has a similar With statement. One of the more common ways I use it is to quickly set a number of properties. Instead of:
someObject.Foo = ''
someObject.Bar = ''
someObject.Baz = ''
, I can write:
With someObject
.Foo = ''
.Bar = ''
.Baz = ''
End With
This isn't just a matter of laziness. It also makes for much more readable code. And unlike JavaScript, it does not suffer from ambiguity, as you have to prefix everything affected by the statement with a . (dot). So, the following two are clearly distinct:
With someObject
.Foo = ''
End With
vs.
With someObject
Foo = ''
End With
The former is someObject.Foo; the latter is Foo in the scope outside someObject.
I find that JavaScript's lack of distinction makes it far less useful than Visual Basic's variant, as the risk of ambiguity is too high. Other than that, with is still a powerful idea that can make for better readability.
A: You can see the validation of a form in JavaScript at W3Schools http://www.w3schools.com/js/js_form_validation.asp where the form object is "scanned" through to find an input with name 'email'.
But I've modified it to validate, for ANY form, that all the fields are not empty, regardless of the name or number of fields in a form. Well, I've tested only text fields.
But the with() made things simpler. Here's the code:
function validate_required(field)
{
with (field)
{
if (value==null||value=="")
{
alert('All fields are mandatory');return false;
}
else
{
return true;
}
}
}
function validate_form(thisform)
{
with (thisform)
{
for(fiie in elements){
if (validate_required(elements[fiie])==false){
elements[fiie].focus();
elements[fiie].style.border='1px solid red';
return false;
} else {elements[fiie].style.border='1px solid #7F9DB9';}
}
}
return true;
}
A: CoffeeScript's Coco fork has a with keyword, but it simply sets this (also writable as @ in CoffeeScript/Coco) to the target object within the block. This removes ambiguity and achieves ES5 strict mode compliance:
with long.object.reference
@a = 'foo'
bar = @b
A: My
switch(e.type) {
case gapi.drive.realtime.ErrorType.TOKEN_REFRESH_REQUIRED: blah
case gapi.drive.realtime.ErrorType.CLIENT_ERROR: blah
case gapi.drive.realtime.ErrorType.NOT_FOUND: blah
}
boils down to
with(gapi.drive.realtime.ErrorType) {switch(e.type) {
case TOKEN_REFRESH_REQUIRED: blah
case CLIENT_ERROR: blah
case NOT_FOUND: blah
}}
Can you trust such low-quality code? No - we can see that it was made absolutely unreadable. This example undeniably proves that there is no need for the with statement, if I am taking readability right ;)
A: Using the "with" statement with proxy objects
I recently wanted to write a plugin for Babel that enables macros. I wanted a separate variable namespace that keeps my macro variables, in which I can run my macro code. Also, I wanted to detect new variables that are defined in the macro code (because they are new macros).
First, I chose the vm module, but I found that global variables in the vm module, like Array, Object, etc., are different from the main program's, and I can't implement module and require so that they are fully compatible with those global objects (because I can't reconstruct the core modules). In the end, I found the "with" statement.
const runInContext = function(code, context) {
context.global = context;
const proxyOfContext = new Proxy(context, { has: () => true });
let run = new Function(
"proxyOfContext",
`
with(proxyOfContext){
with(global){
${code}
}
}
`
);
return run(proxyOfContext);
};
This proxy object traps the lookup of all variables and says: "yes, I have that variable." If the proxy object doesn't really have that variable, it shows its value as undefined.
In this way, if any variable is defined in the macro code with the var statement, I can find it in the context object (like the vm module). But variables that are defined with let or const are only available at that time and will not be saved in the context object (the vm module saves them but doesn't expose them).
Performance: Performance of this method is better than vm.runInContext.
Safety: If you want to run code in a sandbox, this is not safe in any way, and you must use the vm module. It only provides a new namespace.
A: Here's a good use for with: adding new elements to an Object Literal, based on values stored in that Object. Here's an example that I just used today:
I had a set of possible tiles (with openings facing top, bottom, left, or right) that could be used, and I wanted a quick way of adding a list of tiles which would be always placed and locked at the start of the game. I didn't want to keep typing types.tbr for each type in the list, so I just used with.
Tile.types = (function(t,l,b,r) {
function j(a) { return a.join(' '); }
// all possible types
var types = {
br: j( [b,r]),
lbr: j([l,b,r]),
lb: j([l,b] ),
tbr: j([t,b,r]),
tbl: j([t,b,l]),
tlr: j([t,l,r]),
tr: j([t,r] ),
tl: j([t,l] ),
locked: []
};
// store starting (base/locked) tiles in types.locked
with( types ) { locked = [
br, lbr, lbr, lb,
tbr, tbr, lbr, tbl,
tbr, tlr, tbl, tbl,
tr, tlr, tlr, tl
] }
return types;
})("top","left","bottom","right");
A: You can use with to avoid having to explicitly manage arity when using require.js:
var modules = requirejs.declare([{
'App' : 'app/app'
}]);
require(modules.paths(), function() { with (modules.resolve(arguments)) {
App.run();
}});
Implementation of requirejs.declare:
requirejs.declare = function(dependencyPairs) {
var pair;
var dependencyKeys = [];
var dependencyValues = [];
for (var i=0, n=dependencyPairs.length; i<n; i++) {
pair = dependencyPairs[i];
for (var key in dependencyPairs[i]) {
dependencyKeys.push(key);
dependencyValues.push(pair[key]);
break;
}
};
return {
paths : function() {
return dependencyValues;
},
resolve : function(args) {
var modules = {};
for (var i=0, n=args.length; i<n; i++) {
modules[dependencyKeys[i]] = args[i];
}
return modules;
}
}
}
A: As Andy E pointed out in the comments of Shog9's answer, this potentially-unexpected behavior occurs when using with with an object literal:
for (var i = 0; i < 3; i++) {
function toString() {
return 'a';
}
with ({num: i}) {
setTimeout(function() { console.log(num); }, 10);
console.log(toString()); // prints "[object Object]"
}
}
Not that unexpected behavior wasn't already a hallmark of with.
If you really still want to use this technique, at least use an object with a null prototype.
function scope(o) {
var ret = Object.create(null);
if (typeof o !== 'object') return ret;
Object.keys(o).forEach(function (key) {
ret[key] = o[key];
});
return ret;
}
for (var i = 0; i < 3; i++) {
function toString() {
return 'a';
}
with (scope({num: i})) {
setTimeout(function() { console.log(num); }, 10);
console.log(toString()); // prints "a"
}
}
But this will only work in ES5+. Also don't use with.
A: I am working on a project that will allow users to upload code in order to modify the behavior of parts of the application. In this scenario, I have been using a with clause to keep their code from modifying anything outside of the scope that I want them to mess around with. The (simplified) portion of code I use to do this is:
// this code is only executed once
var localScope = {
build: undefined,
// this is where all of the values I want to hide go; the list is rather long
window: undefined,
console: undefined,
...
};
with(localScope) {
build = function(userCode) {
eval('var builtFunction = function(options) {' + userCode + '}');
return builtFunction;
}
}
var build = localScope.build;
delete localScope.build;
// this is how I use the build method
var userCode = 'return "Hello, World!";';
var userFunction = build(userCode);
This code ensures (somewhat) that the user-defined code neither has access to any globally-scoped objects such as window nor to any of my local variables through a closure.
Just as a word to the wise, I still have to perform static code checks on the user-submitted code to ensure they aren't using other sneaky manners to access global scope. For instance, the following user-defined code grabs direct access to window:
test = function() {
return this.window
};
return test();
A: with is useful coupled with shorthand object notation when you need to transform object structures from flat to hierarchical. So if you have:
var a = {id: 123, name: 'abc', attr1: 'efg', attr2: 'zxvc', attr3: '4321'};
So instead of:
var b = {
id: a.id,
name: a.name,
metadata: {name: a.name, attr1: a.attr1},
extrastuff: {attr2: a.attr2, attr3: a.attr3}
}
You can simply write:
with (a) {
var b = {
id,
name,
metadata: {name, attr1},
extrastuff: {attr2, attr3}
}
}
A: Just wanted to add you can get "with()" functionality with pretty syntax and no ambiguity with your own clever method...
//utility function
function _with(context){
var ctx=context;
this.set=function(obj){
for(var x in obj){
//should add hasOwnProperty(x) here
ctx[x]=obj[x];
}
}
return this.set;
}
//how calling it would look in code...
_with(Hemisphere.Continent.Nation.Language.Dialect.Alphabet)({
a:"letter a",
b:"letter b",
c:"letter c",
d:"letter a",
e:"letter b",
f:"letter c",
// continue through whole alphabet...
});//look how readable I am!!!!
...or if you really want to use "with()" without ambiguity and no custom method, wrap it in an anonymous function and use .call
//imagine a deeply nested object
//Hemisphere.Continent.Nation.Language.Dialect.Alphabet
(function(){
with(Hemisphere.Continent.Nation.Language.Dialect.Alphabet){
this.a="letter a";
this.b="letter b";
this.c="letter c";
this.d="letter a";
this.e="letter b";
this.f="letter c";
// continue through whole alphabet...
}
}).call(Hemisphere.Continent.Nation.Language.Dialect.Alphabet)
However, as others have pointed out, it's somewhat pointless since you can do...
//imagine a deeply nested object Hemisphere.Continent.Nation.Language.Dialect.Alphabet
var ltr=Hemisphere.Continent.Nation.Language.Dialect.Alphabet
ltr.a="letter a";
ltr.b="letter b";
ltr.c="letter c";
ltr.d="letter a";
ltr.e="letter b";
ltr.f="letter c";
// continue through whole alphabet...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "387"
} |
Q: Spell Checking Service with python using mod_python What is the best available method for developing a spell check engine (for example, with aspell_python), that works with apache mod_python?
apache 2.0.59+RHEL4+mod_python+aspell_python seems to crash.
Is there any alternative to using aspell_python?
A: Looks like RHEL4 is the culprit. Works well on Fedora 7 (the version of Apache is newer and there is no crash)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is .NET/Mono or Java the better choice for cross-platform development? How many fewer libraries are there for Mono than for Java?
I lack the overview over both alternatives but I have pretty much freedom of choice for my next project. I'm looking for hard technical facts in the areas of
*
*performance (for example, I'm told Java is good for threading, and I hear the runtime code optimization has become very good recently for .NET)
*real world portability (they're both meant to be portable; what's the Catch-22 for each?)
*tool availability (CI, build automation, debugging, IDE)
I am especially looking for what you actually experienced in your own work rather than the things I could google. My application would be a back-end service processing large amounts of data from time series.
My main target platform would be Linux.
Edit:
To phrase my question more adequately, I am interested in the whole package (3rd party libraries etc.), not just the language. For libraries, that probably boils down to the question "how many fewer libraries are there for Mono than for Java?"
FYI, I have since chosen Java for this project, because it seemed just more battle-worn on the portability side and it's been around for a while on older systems, too. I'm a tiny little bit sad about it, because I'm very curious about C# and I'd love to have done some large project in it, but maybe next time. Thanks for all the advice.
A: Well....Java is actually more portable. Mono isn't implemented everywhere, and it lags behind the Microsoft implementation significantly. The Java SDK seems to stay in better sync across platforms (and it works on more platforms).
I'd also say Java has more tool availability across all those platforms, although there are plenty of tools available for .NET on Windows platforms.
Update for 2014
I still hold this opinion in 2014. However, I'll qualify this by saying I'm just now starting to pay some attention to Mono after a long while of not really caring, so there may be improvements in the Mono runtime (or ecosystem) that I haven't been made aware of. AFAIK, there is still no support for WPF, WCF, WF, or WIF. Mono can run on iOS, but to my knowledge, the Java runtime still runs on far more platforms than Mono. Also, Mono is starting to see some much improved tooling (Xamarin), and Microsoft seems to have a much more cross-platform kind of attitude and willingness to work with partners to make them complementary, rather than competitive (for example, Mono will be a pretty important part of the upcoming OWIN/Helios ASP.NET landscape). I suspect that in the coming years the differences in portability will lessen rapidly, especially after .NET being open-sourced.
Update for 2018
My view on this is starting to go the other way. I think .NET, broadly, particularly with .NET Core, has started to achieve "portability parity" with Java. There are efforts underway to bring WPF to .NET Core for some platforms, and .NET Core itself runs on a great many platforms now. Mono (owned by Xamarin, which is now owned by Microsoft) is a more mature and polished product than ever, and writing applications that work on multiple platforms is no longer the domain of deep gnosis of .NET hackery, but is a relatively straightforward endeavor. There are, of course, libraries and services and applications that are Windows-only or can only target specific platforms - but the same can be said of Java (broadly).
If I were in the OP's shoes at this point, I can think of no reason inherent in the languages or tech stacks themselves that would prevent me from choosing .NET for any application going forward from this point.
A: I've been asking the same question of late and, IMHO, .NET/Mono seems to be a better option simply because Mono has a great track record for cross-platform desktop applications (as opposed to Java) and, of course, Mono is improving by leaps and bounds these days.
A: I'm going to say Java as well. If you look at it in terms of maturity, a lot more time and effort has been expended by Sun (and others) in getting the JVM to work on non-Windows platforms.
In contrast, Mono is definitely a second class citizen in the .NET ecosystem.
Depending on who your target customers are, you may also find there is real pushback against using Mono - does Novell offer the same kind of vendor support for Mono that you would get for Java or .NET on Windows?
If you were primarily targeting hosting your service on Windows, it would make sense to be considering this choice, but since you're targeting Linux primarily, it seems like kind of a no-brainer to me.
A: Java was designed to be cross-platform; C#/.Net wasn't. When in doubt, use the tool that was designed for your purpose.
EDIT: in fairness, .NET was designed to work on embedded/PC/Server environments, so that's SORT of cross-platform. But it wasn't designed for Linux.
A: I think the answer is "it depends." Java runs on just about anything, but .NET/Mono are (IMHO) a better framework for the desktop. So I guess the answer really depends on what platforms you plan on targeting.
A: To add a bit more to the conversation, Java is more portable if you remain about one version behind - Java 5 still has many excellent features so you can wait for Java 6 and still have a lot of range in terms of language and libraries to develop with. The Mac is the primary platform that can take some time to catch up to the latest Java version.
Java also has an excellent standards body that intelligently grows the platform based on input from many different companies. This is an oft overlooked feature but it keeps even new features working well across multiple platforms and provides a lot of range in library support for some esoteric things (as optional extensions).
A: I actually develop in .NET, run all my tests first on Mono, and then on Windows. That way I know my applications are cross platform. I have done this very successfully on both ASP.NET and Winforms applications.
I am not really sure where some people get the impression that Mono is so horrible, but it has certainly done its job in my experience. It is true you will have a bit of lag for the latest and greatest inventions in the .NET world, but so far, .NET 2.0 on Windows and Linux is very solid for me.
Keep in mind there are obviously many quirks to this, but most of them come from making sure you are writing portable code. While the frameworks do a great job of abstracting away what OS you are running on, little things like Linux's case sensitivity in paths and file names take a bit of getting used to, as do things like permissions.
.NET is definitely very cross platform due to Mono based on my experiences so far.
A: I would vote for Java being more portable than C#. Java definitely also has a very rich set of standard libraries. There is also a broad set of open source 3rd party libraries out there such as those provided by the Jakarta project (http://jakarta.apache.org/).
All the usual suspects exist for CI, Unit testing, etc too. Cross platform IDE support is also very good with the likes of Eclipse, Netbeans, IntelliJ IDEA etc.
A: There are other language choices too. I've become quite fond of Python, which works well on Windows, Linux, and Mac, and has a rich set of libraries.
A: While Mono has its share of problems I think it has a better cross-platform compatibility story especially IF you have reliance on native platform invocation.
There are not enough words on Stack Overflow to stress how much smoother it is to get something native called and executed in .NET/Mono on (at least in my experience 3...) multiple platforms vs. the equivalent Java effort.
A: Java actually is as cross-platform as everyone says it is. There's a JVM implementation for just about any mainstream OS out there (even Mac OS X, finally), and they all work really well. And there's tons of open source tools out there that are just as cross platform.
The only catch is that there are certain native operations you can't do in Java without writing some DLLs or SOs. It's very rare that these come up in practice. In all those cases, though, I've been able to get around it by spawning native processes and screen-scraping the results.
A: Gatorhall do you have some data to back that up?
Performance. Java and .NET have similar performance levels due to the virtual machine, but the JVM normally has better performance because of years and years of optimization.
Background: I'm a Windows guy since Windows 3.1 and currently a Linux user (still running Windows 7, great OS, on a VM for Visual Studio 2010 and other tools).
The point: me and a lot of users (Windows, Linux, etc.) I know may disagree with you. Java tends to perform slower even in a Linux desktop application, and ASP.NET performs faster than Java Server Pages much of the time. Some may agree that even non-compiled PHP performs better in several scenarios.
Java is more cross-platform? I have no doubts about this (history backs this up), but faster (not saying .NET is)? Not so certain, and I would like to see some real benchmarks.
A: I think the question is phrased incorrectly. C# vs. Java is much less interesting in terms of cross-platform usage than is (a) which platforms you need to support, and (b) considering the core libraries and available third party libraries. The language is almost the least important part of the decision-making process.
A: Java is a better choice for Cross-Platform development.
*
*Performance. Java and .NET have similar performance levels due to the virtual machine, but the JVM normally has better performance because of years and years of optimization.
*Library. Although this depends on your task, Java has many more open source or third-party libraries available. For server apps: J2EE, Spring, Struts, etc. For GUIs: although .NET provides the Win32 layer API, this causes compatibility issues; Java has Swing, SWT, AWT, etc., which work in most cases.
*Compatibility. This is the key issue that needs to be considered when developing a cross-platform program. Two issues: first, platform compatibility. Java still wins since the JDK is well maintained by a single and original company, Sun. Mono is not maintained by MS, so you have no guarantee yet for update compatibility. Second, backward compatibility. Sun maintains a good reputation on their backward compatibility, although sometimes this seems too rigid and slows the pace.
*Tools. Java has good cross-platform IDEs: NetBeans, Eclipse, etc. Most of them are free. Visual Studio is good but only on Windows, and it isn't free. Both platforms provide good unit testing, debugging, profiling, etc.
Hence I'd suggest that Java is a better choice. As a showcase, there are some famous cross-platform desktop apps developed in Java: Vuze, Limewire, BlogBridge, CrossFTP, not to mention those IDEs. As to .NET, I have limited knowledge of such successful apps.
A: Mono does a better job at targeting the platforms I want to support. Other than that, it is all subjective.
I share C# code across the following platforms:
- iOS (iPhone/iPad)
- Android
- The Web (HTML5)
- Mac (OS X)
- Linux
- Windows
I could share it even more places:
- Windows Phone 7
- Wii
- XBox
- PS3
- etc.
The biggie is iOS since MonoTouch works fantastically. I do not know of any good way to target iOS with Java. You cannot target Windows Phone 7 with Java, so I would say that the days of Java being better for mobile are behind us.
The biggest factor for me though is personal productivity (and happiness). C# as a language is years ahead of Java IMHO and the .NET framework is a joy to use. Most of what is being added in Java 7 and Java 8 has been in C# for years. JVM languages like Scala and Clojure (both available on the CLR) are pretty nice though.
I see Mono as a platform in its own right (a great one) and treat .NET as the Microsoft implementation of Mono on Windows. This means that I develop and test on Mono first. This works wonderfully.
If both Java and .NET (Mono let's say) were Open Source projects without any corporate backing, I would choose Mono over Java every time. I believe it is just a better platform.
Both .NET/Mono and the JVM are great choices, although I would personally use some other language than Java on the JVM.
My take on some of the other comments:
Issue: Performance.
Answer: Both the JVM and the CLR perform better than detractors say they do. I would say that the JVM performs better. Mono is generally slower than .NET (though not always).
I personally would take ASP.NET MVC over J2EE any day both as a developer and an end-user. Support for Google Native Client is pretty cool too. Also, I know that poor GUI performance for desktop Java apps is supposed to be a thing of the past but I keep finding slow ones. Then again, I could say the same for WPF. GTK# is plenty fast though so there is no reason they have to be slow.
Issue: Java has a larger ecosystem of libraries available.
Answer: Probably true, but it is a non-issue in practice.
Practically every Java library (including the JDK) runs just dandy on .NET/Mono thanks to IKVM.NET. This piece of technology is a true marvel. The integration is amazing; you can use a Java library just like it was native. I have only had to use Java libraries in one .NET app though. The .NET/Mono ecosystem generally offers more than I need.
Issue: Java has better (broader) tools support
Answer: Not on Windows. Otherwise I agree. MonoDevelop is nice though.
I want to give a shout-out to MonoDevelop; it is a jewel. MonoDevelop integrates most of the tools I want use including code completion (intellisense), Git/Subversion integration, support for unit tests, SQL integration, debugging, easy refactoring, and assembly browsing with on-the-fly decompilation. It is wonderful to use the same environment for everything from server-side web to mobile apps.
Issue: Compatibility across platforms.
Answer: Mono is a single code-base across all platforms, including Windows.
Develop for Mono first and deploy to .NET on Windows if you like. If you compare .NET from MS to Java though then Java has the edge in terms of consistency across platforms. See next answer...
Issue: Mono lags .NET.
Answer: No it does not. IMHO, this is an often stated but incorrect statement.
The Mono distribution from Xamarin ships with C#, VB.NET, F#, IronPython, IronRuby, and I think maybe Boo out of the box. The Mono C# compiler is completely up to date with MS. The Mono VB.NET compiler does lag the MS version. The other compilers are the same on both platforms (as are other .NET languages like Nemerle, Boo, and Phalanger (PHP) ).
Mono ships with a lot of the actual Microsoft written code including the Dynamic Language Runtime (DLR), Managed Extensibility Framework (MEF), F#, and ASP.NET MVC. Because Razor is not Open Source, Mono currently ships with MVC2 but MVC3 works on Mono just fine.
The core Mono platform has kept pace with .NET or many years and the compatibility is impressive. You can use the full C# 4.0 language and even some C# 5.0 features today. In fact, Mono often leads .NET in many ways.
Mono implements parts of the CLR spec that even Microsoft does not support (like 64 bit arrays). One of the most exciting new pieces of technology in the .NET world is Roslyn. Mono has offered the C# compiler as a service for many years. Some of what Roslyn offers is available via NRefactory as well. An example of where Mono is still ahead would be the SIMD instructions to accelerate gaming performance.
Microsoft does offer a number of products on top of .NET that are not available in Mono, which is where the misconception about Mono lagging comes from. Windows Presentation Foundation (WPF), Entity Framework (EF), and WCF (Windows Communication Foundation) are examples of products which do not work, or are poorly supported, on Mono. The obvious solution is to use cross-platform alternatives like GTK#, NHibernate, and ServiceStack instead.
Issue: Microsoft is evil.
Answer: True. So what.
Many people offer the following reasons to avoid using Mono:
1) You should not use Mono because Microsoft tech should be avoided
2) Mono sucks because it does not let you use every technology that Microsoft offers
To me, it is clear that these statements are incompatible. I reject the first statement but will skip that argument here. The second statement is true of all .NET alternatives.
The JVM is a great platform and the explosion of JVM languages is awesome. Use what makes you happy. For now, that is often .NET/Mono for me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "110"
} |
Q: Code to make a DHTMLEd control replace straight quotes with curly quotes I've got an old, legacy VB6 application that uses the DHTML editing control as an HTML editor. The Microsoft DHTML editing control, a.k.a. DHTMLEd, is probably nothing more than an IE control using IE's own native editing capability internally.
I'd like to modify the app to implement smart quotes like Word. Specifically, " is replaced with “ or ” and ' is replaced with ‘ or ’ as appropriate as it is typed; and if the user presses Ctrl+Z immediately after the replacement, it goes back to being a straight quote.
Does anyone have code that does that?
If you don't have code for DHTML/VB6, but do have JavaScript code that works in a browser with contentEditable regions, I could use that, too
A: Here's the VB6 version:
Private Sub DHTMLEdit1_onkeypress()
Dim e As Object
Set e = DHTMLEdit1.DOM.parentWindow.event
'Perform smart-quote replacement'
Select Case e.keyCode
Case 34: 'Double-Quote'
e.keyCode = 0
If IsAtWordEnd Then
InsertDoubleUndo ChrW$(8221), ChrW$(34)
Else
InsertDoubleUndo ChrW$(8220), ChrW$(34)
End If
Case 39: 'Single-Quote'
e.keyCode = 0
If IsAtWordEnd Then
InsertDoubleUndo ChrW$(8217), ChrW$(39)
Else
InsertDoubleUndo ChrW$(8216), ChrW$(39)
End If
End Select
End Sub
Private Function IsLetter(ByVal character As String) As Boolean
IsLetter = UCase$(character) <> LCase$(character)
End Function
Private Sub InsertDoubleUndo(VisibleText As String, HiddenText As String)
Dim selection As Object
Set selection = DHTMLEdit1.DOM.selection.createRange()
selection.Text = HiddenText
selection.moveStart "character", -Len(HiddenText)
selection.Text = VisibleText
End Sub
Private Function IsAtWordEnd() As Boolean
Dim ch As String
ch = PreviousChar
IsAtWordEnd = (ch <> " ") And (ch <> "")
End Function
Private Function PreviousChar() As String
Dim selection As Object
Set selection = DHTMLEdit1.DOM.selection.createRange()
selection.moveStart "character", -1
PreviousChar = selection.Text
End Function
Note: this solution inserts an additional level in the undo chain. For example, typing "This is a test" gives a chain of “This is a test” -> “This is a test" -> “This is a test -> “ -> " (extra level in bold). To remove this extra level you'd have to implement some sort of PostMessage+subclassing solution that doesn't involve cancelling the native keypress
edit: Don't forget to include the DHTML Editing Control redistributable if you are targeting Windows Vista.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: api documentation and "value limits": do they match? Do you often see in API documentation (as in 'javadoc of public functions' for example) the description of "value limits" as well as the classic documentation?
Note: I am not talking about comments within the code
By "value limits", I mean:
*
*can a parameter support a null value (or an empty String, or...)?
*can a 'return value' be null, or is it guaranteed to never be null (or can it be "empty", or...)?
Sample:
What I often see (without having access to source code) is:
/**
* Get all readers name for this current Report. <br />
* <b>Warning</b>The Report must have been published first.
* @param aReaderNameRegexp filter in order to return only reader matching the regexp
* @return array of reader names
*/
String[] getReaderNames(final String aReaderNameRegexp);
What I like to see would be:
/**
* Get all readers name for this current Report. <br />
* <b>Warning</b>The Report must have been published first.
* @param aReaderNameRegexp filter in order to return only reader matching the regexp
* (can be null or empty)
* @return array of reader names
* (null if Report has not yet been published,
* empty array if no reader match criteria,
* reader names array matching regexp, or all readers if regexp is null or empty)
*/
String[] getReaderNames(final String aReaderNameRegexp);
My point is:
When I use a library with a getReaderNames() function in it, I often do not even need to read the API documentation to guess what it does. But I need to be sure how to use it.
My only concern when I want to use this function is: what should I expect in terms of parameters and return values? That is all I need to know to safely set up my parameters and safely test the return value, yet I almost never see that kind of information in API documentation...
Edit:
This can influence the choice between checked and unchecked exceptions.
What do you think ? value limits and API, do they belong together or not ?
A: I think they can belong together but don't necessarily have to belong together. In your scenario, it seems like it makes sense that the limits are documented in such a way that they appear in the generated API documentation and intellisense (if the language/IDE support it).
I think it does depend on the language as well. For example, Ada has a native data type that is a "restricted integer", where you define an integer variable and explicitly indicate that it will only (and always) be within a certain numeric range. In that case, the datatype itself indicates the restriction. It should still be visible and discoverable through the API documentation and intellisense, but wouldn't be something that a developer has to specify in the comments.
However, languages like Java and C# don't have this type of restricted integer, so the developer would have to specify it in the comments if it were information that should become part of the public documentation.
A: I think those kinds of boundary conditions most definitely belong in the API. However, I would (and often do) go a step further and indicate WHAT those null values mean. Either I indicate it will throw an exception, or I explain what the expected results are when the boundary value is passed in.
It's hard to remember to always do this, but it's a good thing for users of your class. It's also difficult to maintain it if the contract the method presents changes (like null values are changed to no be allowed)... you have to be diligent also to update the docs when you change the semantics of the method.
A: Question 1
Do you often see in API documentation (as in 'javadoc of public functions' for example) the description of "value limits" as well as the classic documentation?
Almost never.
Question 2
My only concern when I want to use this function is: what should I expect in term of parameters and return values ? That is all I need to know to safely setup my parameters and safely test the return value, yet I almost never see that kind of information in API documentation...
If I used a function not properly I would expect a RuntimeException thrown by the method or a RuntimeException in another (sometimes very far) part of the program.
Comments like @param aReaderNameRegexp filter in order to ... (can be null or empty) seem to me a way to implement Design by Contract in a human language inside Javadoc.
Using Javadoc to enforce Design by Contract was done by iContract, now resurrected into JcontractS, which lets you specify invariants, preconditions, and postconditions in a more formalized way compared to human language.
Question 3
This can influence the usage or not for checked or unchecked exceptions.
What do you think ? value limits and API, do they belong together or not ?
The Java language doesn't have a Design by Contract feature, so you might be tempted to use Exceptions, but I agree with you about the fact that you have to be aware of when to choose checked and unchecked exceptions. Probably you might use unchecked IllegalArgumentException, IllegalStateException, or you might use unit testing, but the major problem is how to communicate to other programmers that such code is about Design by Contract and should be considered a contract before changing it too lightly.
A: I think they do, and have always placed comments in the header files (c++) arcordingly.
In addition to valid input/output/return comments, I also note which exceptions are likly to be thrown by the function (since I often want to use the return value for...well returning a value, I prefer exceptions over error codes)
//File:
// Should be a path to the teexture file to load, if it is not a full path (eg "c:\example.png") it will attempt to find the file usign the paths provided by the DataSearchPath list
//Return: The pointer to a Texture instance is returned, in the event of an error, an exception is thrown. When you are finished with the texture you chould call the Free() method.
//Exceptions:
//except::FileNotFound
//except::InvalidFile
//except::InvalidParams
//except::CreationFailed
Texture *GetTexture(const std::string &File);
A: @Fire Lancer: Right! I forgot about exceptions, but I would like to see them mentioned, especially the unchecked 'runtime' exceptions that this public method could throw
@Mike Stone:
you have to be diligent also to update the docs when you change the semantics of the method.
Mmmm I sure hope that the public API documentation is at the very least updated whenever a change -- that affects the contract of the function -- takes place. If not, that API documentation could be dropped altogether.
To add food to your thoughts (and go with @Scott Dorman), I just stumbled upon the future of Java 7 annotations
What does that mean? That certain 'boundary conditions', rather than being in the documentation, should be better off in the API itself, and automatically used, at compilation time, with appropriate 'assert' generated code.
That way, if a '@CheckForNull' is in the API, the writer of the function might get away with not even documenting it! And if the semantics change, its API will reflect that change (like 'no more @CheckForNull' for instance)
That kind of approach suggests that documentation, for 'boundary conditions', is an extra bonus rather than a mandatory practice.
However, that does not cover the special values of the return object of a function. For that, a complete documentation is still needed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Is it pythonic for a function to return multiple values? In python, you can have a function return multiple values. Here's a contrived example:
def divide(x, y):
quotient = x/y
remainder = x % y
return quotient, remainder
(q, r) = divide(22, 7)
This seems very useful, but it looks like it can also be abused ("Well..function X already computes what we need as an intermediate value. Let's have X return that value also").
When should you draw the line and define a different method?
A: Yes, returning multiple values (i.e., a tuple) is definitely pythonic. As others have pointed out, there are plenty of examples in the Python standard library, as well as in well-respected Python projects. Two additional comments:
*
*Returning multiple values is sometimes very, very useful. Take, for example, a method that optionally handles an event (returning some value in doing so) and also returns success or failure. This might arise in a chain of responsibility pattern. In other cases, you want to return multiple, closely linked pieces of data---as in the example given. In this setting, returning multiple values is akin to returning a single instance of an anonymous class with several member variables.
*Python's handling of method arguments necessitates the ability to directly return multiple values. In C++, for example, method arguments can be passed by reference, so you can assign output values to them, in addition to the formal return value. In Python, arguments are passed "by reference" (but in the sense of Java, not C++). You can't assign new values to method arguments and have it reflected outside method scope. For example:
// C++
void test(int& arg)
{
arg = 1;
}
int foo = 0;
test(foo); // foo is now 1!
Compare with:
# Python
def test(arg):
arg = 1
foo = 0
test(foo) # foo is still 0
A: Firstly, note that Python allows for the following (no need for the parenthesis):
q, r = divide(22, 7)
Regarding your question, there's no hard and fast rule either way. For simple (and usually contrived) examples, it may seem that it's always possible for a given function to have a single purpose, resulting in a single value. However, when using Python for real-world applications, you quickly run into many cases where returning multiple values is necessary, and results in cleaner code.
So, I'd say do whatever makes sense, and don't try to conform to an artificial convention. Python supports multiple return values, so use it when appropriate.
A: The example you give is actually a python builtin function, called divmod. So someone, at some point in time, thought that it was pythonic enough to include in the core functionality.
To me, if it makes the code cleaner, it is pythonic. Compare these two code blocks:
seconds = 1234
minutes, seconds = divmod(seconds, 60)
hours, minutes = divmod(minutes, 60)
seconds = 1234
minutes = seconds / 60
seconds = seconds % 60
hours = minutes / 60
minutes = minutes % 60
A: Absolutely (for the example you provided).
Tuples are first class citizens in Python
There is a builtin function divmod() that does exactly that.
q, r = divmod(x, y) # ((x - x%y)/y, x%y) Invariant: div*y + mod == x
There are other examples: zip, enumerate, dict.items.
for i, e in enumerate([1, 3, 3]):
print "index=%d, element=%s" % (i, e)
# reverse keys and values in a dictionary
d = dict((v, k) for k, v in adict.items()) # or
d = dict(zip(adict.values(), adict.keys()))
BTW, parentheses are not necessary most of the time.
Citation from Python Library Reference:
Tuples may be constructed in a number of ways:
*
*Using a pair of parentheses to denote the empty tuple: ()
*Using a trailing comma for a singleton tuple: a, or (a,)
*Separating items with commas: a, b, c or (a, b, c)
*Using the tuple() built-in: tuple() or tuple(iterable)
Functions should serve a single purpose
Therefore they should return a single object. In your case this object is a tuple. Consider the tuple an ad-hoc compound data structure. There are languages where almost every single function returns multiple values (lists in Lisp).
Sometimes it is sufficient to return (x, y) instead of Point(x, y).
Named tuples
With the introduction of named tuples in Python 2.6 it is preferable in many cases to return named tuples instead of plain tuples.
>>> import collections
>>> Point = collections.namedtuple('Point', 'x y')
>>> x, y = Point(0, 1)
>>> p = Point(x, y)
>>> x, y, p
(0, 1, Point(x=0, y=1))
>>> p.x, p.y, p[0], p[1]
(0, 1, 0, 1)
>>> for i in p:
... print(i)
...
0
1
A: It's definitely pythonic. The fact that you can return multiple values from a function saves you the boilerplate you would have in a language like C, where you need to define a struct for every combination of types you return somewhere.
However, if you reach the point where you are returning something crazy like 10 values from a single function, you should seriously consider bundling them in a class because at that point it gets unwieldy.
A: Returning a tuple is cool. Also note the new namedtuple,
which was added in Python 2.6 and may make this more palatable for you:
http://docs.python.org/dev/library/collections.html#collections.namedtuple
A: OT: RSRE's Algol68 has the curious "/:=" operator. eg.
INT quotient:=355, remainder;
remainder := (quotient /:= 113);
Giving a quotient of 3, and a remainder of 16.
Note: typically the value of "(x/:=y)" is discarded as quotient "x" is assigned by reference, but in RSRE's case the returned value is the remainder.
c.f. Integer Arithmetic - Algol68
A: It's fine to return multiple values using a tuple for simple functions such as divmod. If it makes the code readable, it's Pythonic.
If the return value starts to become confusing, check whether the function is doing too much and split it if it is. If a big tuple is being used like an object, make it an object. Also, consider using named tuples, which will be part of the standard library in Python 2.6.
A: I'm fairly new to Python, but the tuple technique seems very pythonic to me. However, I've had another idea that may enhance readability. Using a dictionary allows access to the different values by name rather than position. For example:
def divide(x, y):
    return {'quotient': x / y, 'remainder': x % y}
answer = divide(22, 7)
print answer['quotient']
print answer['remainder']
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "86"
} |
Q: Should you use international identifiers in Java/C#? C# and Java allow almost any character in class names, method names, local variables, etc.. Is it bad practice to use non-ASCII characters, testing the boundaries of poor editors and analysis tools and making it difficult for some people to read, or is American arrogance the only argument against?
A: I used to work in a development team that happily wiped their asses with any naming (and for that matter any other coding) conventions. Believe it or not, having to cope with ä's and ö's in the code was a contributing factor of me resigning. Though I'm Finnish, I prefer writing code with US keyboard settings because curly and square brackets are a pain to write in a Finnish keyboard (try right alt and 7 and 0 for curlies).
So I say stick with the ASCII characters.
A: Here's an example of where I've used non-ASCII identifiers, because I found it more readable than replacing the Greek letters with their English names, even though I don't have θ or φ on my keyboard (I relied on copy-and-paste).
However, these are all local variables. I would keep non-ASCII identifiers out of public interfaces.
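To give a flavour of the idea (a made-up sketch, not the code referenced above — the Greek names are hypothetical stand-ins for angles in some geometry routine):
class Polar {
    // Convert Cartesian coordinates to polar form; θ and ρ are
    // valid Java identifiers because they are Unicode letters.
    static double[] toPolar(double x, double y) {
        double θ = Math.atan2(y, x);          // angle
        double ρ = Math.sqrt(x * x + y * y);  // radius
        return new double[] { ρ, θ };
    }
}
Whether θ reads better than theta is exactly the judgement call under discussion.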
A: I would stick to English, simply because you usually never know who is working on that code, and because some third-party tools used in the build/testing/bugtracking process may have problems. Typing äöüß on a non-German keyboard is simply a PITA, and I simply believe that anyone involved in software development should speak English, but maybe that's just my arrogance as a non-native English speaker.
What you call "American arrogance" is not whether or not your program uses international variable names, it's when your program thinks "Währung" and "Wahrung" are the same words.
A: It depends:
*
*Does your team conform to any existing standards that require your using ASCII?
*Is your code ever going to be feasibly reused or read by someone who doesn't speak your native language?
*Do you envision a scenario where you'll need to ask for help online and will therefore not be able to copy-paste your code sample in as-is?
*Are you certain your entire suite of tools support code encoding?
If you answered 'yes' to any of the above, stay ASCII only. If not, go forward at your own risk.
A: Part of the problem is that the Java/C# languages and their libraries are based on English words like if and toString(). I personally would not like to switch between a non-English language and English while reading code.
However, if your database, UI, business logics (including metaphors) are already in some non-English language, there's no need to translate every method names and variables into English.
A: IF you get past the other prerequisites you then have one extra (IMHO more important) one - How difficult is the symbol to type.
On my regular en-US keyboard, the only way I know of to type the letter ç is to hold Alt and hit 0231 on the numeric keypad, or copy and paste.
This would be a huge roadblock in the way of typing quickly. You don't want to slow your coding down with trivial stuff like this if you aren't forced to. International keyboards may alleviate this, but then what happens if you have to code on your laptop which doesn't have an international keyboard, etc?
A: I'd say it entirely depends on who's working on the codebase.
If you have a small group of developers who all share a common language and you don't ever plan needing anyone who doesn't speak the language to work on the code then go ahead and use whatever characters you want.
If you need to have people of varying cultures and languages working on the code then it's probably best to stick with English since it's the common denominator for just about everyone in the world.
A: If your business users are non-English speakers, and you think Domain-Driven Design has something to it, then there is another aspect: how do we, as developers, use the same domain language as our business without any translation overhead?
That does not only mean translations between languages, say English and Norwegian, but also between different words. We should use the exact same words as our business for our entity classes and services.
I have found it easier to just give in and use my native language. Now that my code use the same words, it's easier to have a conversation with my domain experts. And after a while you get used to it, just like how you got used to code without Hungarian notation.
A: I would stick to ASCII characters, because if anyone in your development team is using an SDK that only supports ASCII, or you wanted to make your code open source, a lot of problems could arise. Personally, I would not do it even if you are not planning on bringing anyone who doesn't speak the language in on the project, because you are running a business, and it seems to me that one running a business would want his business to expand, which in this day and age means transcending national borders. My opinion is that English is the language of the realm, and even if you name your variables in a different language, there is little to no point in using any non-ASCII characters in your programming. Leave it up to the language to deal with it if you are handling data that is UTF-8: my iPhone program (which involves tons of user data going back and forth between the phone and server) has full UTF-8 support, but has no UTF-8 in the source code. It just seems to open such a large can of worms for almost no benefit.
A: There is another hazzard to using non-ASCII characters, though it will probably only bite in obscure cases. The allowed characters are defined in terms of the methods Character.isJavaIdentifierStart(int) and Character.isJavaIdentifierPart(int), which are defined in terms of Unicode. However, the exact version of Unicode used depends on the version Java platform, as specified in the documentation for java.lang.Character.
Since character properties change slightly from one Unicode version to the next, it's possible (but probably very unlikely) you could have identifiers that are valid in one version of Java, but not in the next.
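To make that concrete, here is a small helper (my own sketch, not library code) that classifies a candidate name using exactly those two methods; in theory a borderline identifier could change verdict between Java versions:
public class IdentCheck {
    // Valid iff the first code point may start an identifier and
    // every following code point may continue one.
    static boolean isValidIdentifier(String s) {
        if (s.isEmpty())
            return false;
        int first = s.codePointAt(0);
        if (!Character.isJavaIdentifierStart(first))
            return false;
        for (int i = Character.charCount(first); i < s.length(); ) {
            int cp = s.codePointAt(i);
            if (!Character.isJavaIdentifierPart(cp))
                return false;
            i += Character.charCount(cp);
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidIdentifier("π"));    // true with current Unicode tables
        System.out.println(isValidIdentifier("1abc")); // false: a digit can't start an identifier
    }
}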
A: As already pointed out, unless method names mostly match the language, it is a bit weird to constantly switch languages while reading.
For the Scandinavian languages & German, which I can speak and thus speak for, I would at least recommend using standard substitutions, ie.
ä/æ -> ae, ö/ø -> oe, å -> aa, ü -> ue
etc. just in case as others may find it difficult to type the original letters without keyboard/keymap changes. Think if you suddenly had to work with a codebase where the developers used a third language (for instance including the French ç) and didn't do this.. Switching between more than 2 keymaps to type efficiently would be painful in my experience.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: NHibernate and shared web hosting Has anyone been able to get an NHibernate-based project up and running on a shared web host?
NHibernate does a whole lot of fancy stuff with reflection behind the scenes but the host that I'm using at the moment only allows applications to run in medium trust, which limits what you can do with reflection, and it's throwing up all sorts of security permission errors. This is the case even though I'm only using public properties in my mapping files, though I do have some classes defined as proxies.
Which companies offer decent (and reasonably priced) web hosting that allows NHibernate to run without complaining?
Update: It seems from these answers (and my experimentation -- sorry Ayende, but I still can't get it to work on my web host even after going through the article you linked to) that the answer is to choose your hosting provider wisely and shop around. It seems that WebHost4Life are pretty good in this respect. However, has anyone tried NHibernate with Windows shared hosting with 1and1? I have a Linux account with them already and I'm fairly satisfied on that front, and if I could get NHibernate to work seamlessly with Windows I'd probably stick with them.
A: I ran my own geek site off N2 (which uses NHibernate and Castle Windsor) and 4 pet NHibernate/Fluent projects on dailyrazor.com for a while.
You get a good deal for $5 a month, including unlimited SQL Server databases and subdomains and it runs off Plesk with FTP and remote SQL Server Management Studio access.
A: I have had no issues running NHibernate-based apps on WebHost4Life, although I don't like them.
Getting NHibernate to run on medium trust is possible. A full description on how this can be done is found here:
http://blechie.com/WPierce/archive/2008/02/17/Lazy-Loading-with-nHibernate-Under-Medium-Trust.aspx
A: I'm using a Finnish host called Nebula that happily runs my NHibernate-leveraging applications. I had an issue once with trust levels; the machine.config on the host was configured to deny reflection but I successfully overrode it in the web.config.
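For reference, the override involved looks something like this (a sketch; it only takes effect if the host's machine.config leaves the trust level unlocked, which many shared hosts deliberately don't):
<configuration>
  <system.web>
    <trust level="Full" originUrl="" />
  </system.web>
</configuration>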
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Windows API dialogs without using resource files I'm trying to create a dialog box using C++ and the windows API, but I don't want the dialog defined in a resource file. I can't find anything good on this on the web, and none of the examples I've read seem to define the dialog programmatically.
How can I do this?
A simple example is fine. I'm not doing anything complicated with it yet.
A: Take a look at this toolkit that describes how to create dialogs without resource files.
It's in WTL. However, I'm sure you can pick apart the internals to achieve the same thing using the Win32 API directly.
A: Here you can find how to use Windows API dialogs without using resource files.
The Windows API (only the C Win32 API, no MFC) tutorial:
Windows API tutorial
A: Raymond Chen wrote a few posts about the dialog manager:
*
*The dialog manager, part 1: Warm-ups
*The dialog manager, part 2: Creating the frame window
*The dialog manager, part 3: Creating the controls
*The dialog manager, part 4: The dialog loop
*The dialog manager, part 5: Converting a non-modal dialog box to modal
*The dialog manager, part 6: Subtleties in message loops
*The dialog manager, part 7: More subtleties in message loops
*The dialog manager, part 8: Custom navigation in dialog boxes
*The dialog manager, part 9: Custom accelerators in dialog boxes
A: Try to search MSDN for "dialog templates in memory".
See this for example: Dialog Boxes
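To sketch what a template in memory looks like in practice (this follows the general pattern of the MSDN sample; the coordinates, the 1024-byte buffer and the single OK button are arbitrary choices, and error handling is minimal):
INT_PTR CALLBACK InMemoryDlgProc(HWND hDlg, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_INITDIALOG:
        return TRUE;
    case WM_COMMAND:
        if (LOWORD(wParam) == IDOK || LOWORD(wParam) == IDCANCEL)
        {
            EndDialog(hDlg, LOWORD(wParam));
            return TRUE;
        }
        break;
    }
    return FALSE;
}

// Items in the template must start on DWORD boundaries.
LPWORD AlignToDword(LPWORD p)
{
    return (LPWORD)(((ULONG_PTR)p + 3) & ~(ULONG_PTR)3);
}

void ShowInMemoryDialog(HINSTANCE hInst, HWND hOwner)
{
    HGLOBAL hgbl = GlobalAlloc(GMEM_ZEROINIT, 1024);
    if (hgbl == NULL) return;

    LPDLGTEMPLATE lpdt = (LPDLGTEMPLATE)GlobalLock(hgbl);
    lpdt->style = WS_POPUP | WS_BORDER | WS_SYSMENU | DS_MODALFRAME | WS_CAPTION;
    lpdt->cdit = 1;                        // one control follows
    lpdt->x = 10;  lpdt->y = 10;           // all coordinates are in dialog units
    lpdt->cx = 100; lpdt->cy = 70;

    LPWORD lpw = (LPWORD)(lpdt + 1);
    *lpw++ = 0;                            // no menu
    *lpw++ = 0;                            // predefined dialog box class
    // Title string; strings in dialog templates are always Unicode.
    lpw += MultiByteToWideChar(CP_ACP, 0, "In-memory dialog", -1, (LPWSTR)lpw, 50);

    lpw = AlignToDword(lpw);
    LPDLGITEMTEMPLATE lpdit = (LPDLGITEMTEMPLATE)lpw;
    lpdit->style = WS_CHILD | WS_VISIBLE | BS_DEFPUSHBUTTON;
    lpdit->x = 25; lpdit->y = 45; lpdit->cx = 50; lpdit->cy = 14;
    lpdit->id = IDOK;

    lpw = (LPWORD)(lpdit + 1);
    *lpw++ = 0xFFFF; *lpw++ = 0x0080;      // 0x0080 = predefined Button class
    lpw += MultiByteToWideChar(CP_ACP, 0, "OK", -1, (LPWSTR)lpw, 50);
    *lpw++ = 0;                            // no creation data

    GlobalUnlock(hgbl);
    DialogBoxIndirect(hInst, (LPDLGTEMPLATE)hgbl, hOwner, InMemoryDlgProc);
    GlobalFree(hgbl);
}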
A: If all you want to do is show a window with controls, it's possible to create a window without using resource (.rc) files / scripts.
This isn't the same as a dialog, but it might be easier than creating a dialog programmatically.
First, a few notes about how this is done:
*
*Instead of designing the dialog in the rc file, you could manually use CreateWindow (or CreateWindowEx) to create child windows of a main window. (for .NET Windows Forms programmers, these windows are like Controls).
*This process will not be graphical at all (you will need to manually type in the location and size of each window), but I think this can be a great way to understand how dialogs are created under the hood.
*There are some disadvantages to not using a real dialog, namely that tab will not work when switching between controls.
About the example:
*
*This example features a dialog box with two buttons, an edit box (.NET Windows Forms programmers would think of it as a TextBox), and a check box.
It has been tested under the following conditions:
*
*x86 build
*x64 build
*Unicode build (UNICODE and _UNICODE defined)
*Non-Unicode build (UNICODE and _UNICODE not defined)
*Built with Visual Studio's C compiler
*Built with Visual Studio's C++ compiler
*OS: Windows 10 64 bit
Note: UNICODE
*
*As of the time of writing, UTF-8 is still in beta for Windows 10
*
*If you have not enabled this setting, you should assume that any char* is ACP, not UTF-8, this applies to standard library functions too
*Even though in Linux, that same standard library function would be UTF-8.
*Sadly, some C++ standard library features only work with char* (e.g., exception messages).
*You can still use UTF-8 in Windows without the option set, you will just have to encode it back to UTF-16 before calling winapi functions.
*Here is a reddit thread with a reply from somebody who claims to have worked on UTF-8 on Windows, it has some good information.
*UNICODE in Windows means "UTF-16", not "UTF-8".
*Using Unicode of some kind is strongly recommended for any version of Windows that is not very old.
*
*Be aware that if you don't use Unicode, your program may be utterly unable to open file names containing Unicode characters, handle directories (e.g., usernames) with non-ACP characters, etc.
*Using ACP functions (SendMessageA,etc) without somehow verifying that UTF-8 is enabled (it's disabled by default) is probably a bug.
*For max portability/flexibility, I would recommend using UTF-16 and the W version of all API functions, translating from UTF-8 to UTF-16 at the last minute. Read this page very carefully.
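As a concrete example of that last-minute translation (a minimal sketch; the caller owns the returned buffer, and as elsewhere in this post the malloc cast is only needed when compiling as C++):
// Convert a UTF-8 string to UTF-16 so it can be passed to a W API function.
// Returns NULL on failure.
wchar_t* Utf8ToUtf16(const char* utf8)
{
    int len = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, NULL, 0); // includes the NUL
    if (len == 0)
        return NULL;
    wchar_t* utf16 = (wchar_t*)malloc(len * sizeof(wchar_t));
    if (utf16 != NULL && MultiByteToWideChar(CP_UTF8, 0, utf8, -1, utf16, len) == 0)
    {
        free(utf16);
        return NULL;
    }
    return utf16;
}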
Now for the code:
Note that a large amount of comments have been added to try to document the windows functions, I recommend copy/pasting this into a text editor, for best results.
// This sample will work either with or without UNICODE, it looks like
// it's recommended now to use UNICODE for all new code, but I left
// the ANSI option in there just to get the absolute maximum amount
// of compatibility.
//
// Note that UNICODE and _UNICODE go together, unfortunately part
// of the Windows API uses _UNICODE, and part of it uses UNICODE.
//
// tchar.h, for example, makes heavy use of _UNICODE, and windows.h
// makes heavy use of UNICODE.
#define UNICODE
#define _UNICODE
//#undef UNICODE
//#undef _UNICODE
#include <windows.h>
#include <tchar.h>
// I made this struct to more conveniently store the
// positions / size of each window in the dialog
typedef struct SizeAndPos_s
{
int x, y, width, height;
} SizeAndPos_t;
// Typically these would be #defines, but there
// is no reason to not make them constants
const WORD ID_btnHELLO = 1;
const WORD ID_btnQUIT = 2;
const WORD ID_CheckBox = 3;
const WORD ID_txtEdit = 4;
const WORD ID_btnShow = 5;
// x, y, width, height
const SizeAndPos_t mainWindow = { 150, 150, 300, 300 };
const SizeAndPos_t btnHello = { 20, 50, 80, 25 };
const SizeAndPos_t btnQuit = { 120, 50, 80, 25 };
const SizeAndPos_t chkCheck = { 20, 90, 185, 35 };
const SizeAndPos_t txtEdit = { 20, 150, 150, 20 };
const SizeAndPos_t btnShow = { 180, 150, 80, 25 };
HWND txtEditHandle = NULL;
// hwnd: All window processes are passed the handle of the window
// that they belong to in hwnd.
// msg: Current message (e.g., WM_*) from the OS.
// wParam: First message parameter, note that these are more or less
// integers, but they are really just "data chunks" that
// you are expected to memcpy as raw data to float, etc.
// lParam: Second message parameter, same deal as above.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
switch (msg)
{
case WM_CREATE:
// Create the buttons
//------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
// Note that the "parent window" is the dialog itself. Since we are
// in the dialog's WndProc, the dialog's handle is passed into hwnd.
//
//CreateWindow( lpClassName, lpWindowName, dwStyle, x, y, nWidth, nHeight, hWndParent, hMenu, hInstance, lpParam
//CreateWindow( windowClassName, initial text, style (flags), xPos, yPos, width, height, parentHandle, menuHandle, instanceHandle, param);
CreateWindow( TEXT("Button"), TEXT("Hello"), WS_VISIBLE | WS_CHILD, btnHello.x, btnHello.y, btnHello.width, btnHello.height, hwnd, (HMENU)ID_btnHELLO, NULL, NULL);
CreateWindow( TEXT("Button"), TEXT("Quit"), WS_VISIBLE | WS_CHILD, btnQuit.x, btnQuit.y, btnQuit.width, btnQuit.height, hwnd, (HMENU)ID_btnQUIT, NULL, NULL);
// Create a checkbox
//------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
CreateWindow( TEXT("button"), TEXT("CheckBox"), WS_VISIBLE | WS_CHILD | BS_CHECKBOX, chkCheck.x, chkCheck.y, chkCheck.width, chkCheck.height, hwnd, (HMENU)ID_CheckBox, NULL, NULL);
// Create an edit box (single line text editing), and a button to show the text
//------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
//Handle = CreateWindow(windowClassName, windowName, style, xPos, yPos, width, height, parentHandle, menuHandle, instanceHandle, param);
txtEditHandle = CreateWindow(TEXT("Edit"), TEXT("Initial Text"), WS_CHILD | WS_VISIBLE | WS_BORDER, txtEdit.x, txtEdit.y, txtEdit.width, txtEdit.height, hwnd, (HMENU)ID_txtEdit, NULL, NULL);
//CreateWindow( windowClassName, windowName, style, xPos, yPos, width, height, parentHandle, menuHandle, instanceHandle, param);
CreateWindow( TEXT("Button"), TEXT("Show"), WS_VISIBLE | WS_CHILD, btnShow.x, btnShow.y, btnShow.width, btnShow.height, hwnd, (HMENU)ID_btnShow, NULL, NULL);
break;
// For more information about WM_COMMAND, see
// https://msdn.microsoft.com/en-us/library/windows/desktop/ms647591(v=vs.85).aspx
case WM_COMMAND:
// The LOWORD of wParam identifies which control sent
// the WM_COMMAND message. The WM_COMMAND message is
// sent when the button has been clicked.
if (LOWORD(wParam) == ID_btnHELLO)
{
MessageBox(hwnd, TEXT("Hello!"), TEXT("Hello"), MB_OK);
}
else if (LOWORD(wParam) == ID_btnQUIT)
{
PostQuitMessage(0);
}
else if (LOWORD(wParam) == ID_CheckBox)
{
UINT checked = IsDlgButtonChecked(hwnd, ID_CheckBox);
if (checked)
{
CheckDlgButton(hwnd, ID_CheckBox, BST_UNCHECKED);
MessageBox(hwnd, TEXT("The checkbox has been unchecked."), TEXT("CheckBox Event"), MB_OK);
}
else
{
CheckDlgButton(hwnd, ID_CheckBox, BST_CHECKED);
MessageBox(hwnd, TEXT("The checkbox has been checked."), TEXT("CheckBox Event"), MB_OK);
}
}
else if (LOWORD(wParam) == ID_btnShow)
{
int textLength_WithNUL = GetWindowTextLength(txtEditHandle) + 1;
// WARNING: If you are compiling this for C, please remember to remove the (TCHAR*) cast.
TCHAR* textBoxText = (TCHAR*) malloc(sizeof(TCHAR) * textLength_WithNUL);
GetWindowText(txtEditHandle, textBoxText, textLength_WithNUL);
MessageBox(hwnd, textBoxText, TEXT("Here's what you typed"), MB_OK);
free(textBoxText);
}
break;
case WM_DESTROY:
PostQuitMessage(0);
break;
}
return DefWindowProc(hwnd, msg, wParam, lParam);
}
// hInstance: This handle refers to the running executable
// hPrevInstance: Not used. See https://blogs.msdn.microsoft.com/oldnewthing/20040615-00/?p=38873
// lpCmdLine: Command line arguments.
// nCmdShow: a flag that says whether the main application window
// will be minimized, maximized, or shown normally.
//
// Note that it's necessary to use _tWinMain to make it
// so that command line arguments will work, both
// with and without UNICODE / _UNICODE defined.
int APIENTRY _tWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow)
{
MSG msg;
WNDCLASS mainWindowClass = { 0 };
// You can set the main window name to anything, but
// typically you should prefix custom window classes
// with something that makes it unique.
mainWindowClass.lpszClassName = TEXT("JRH.MainWindow");
mainWindowClass.hInstance = hInstance;
mainWindowClass.hbrBackground = GetSysColorBrush(COLOR_3DFACE);
mainWindowClass.lpfnWndProc = WndProc;
mainWindowClass.hCursor = LoadCursor(0, IDC_ARROW);
RegisterClass(&mainWindowClass);
// Notes:
// - The classname identifies the TYPE of the window. Not a C type.
// This is a (TCHAR*) ID that Windows uses internally.
// - The window name is really just the window text, this is
// commonly used for captions, including the title
// bar of the window itself.
// - parentHandle is considered the "owner" of this
// window. MessageBoxes can use HWND_MESSAGE to
// free them of any window.
// - menuHandle: hMenu specifies the child-window identifier,
// an integer value used by a dialog box
// control to notify its parent about events.
// The application determines the child-window
// identifier; it must be unique for all
// child windows with the same parent window.
//CreateWindow( windowClassName, windowName, style, xPos, yPos, width, height, parentHandle, menuHandle, instanceHandle, param);
CreateWindow( mainWindowClass.lpszClassName, TEXT("Main Window"), WS_OVERLAPPEDWINDOW | WS_VISIBLE, mainWindow.x, mainWindow.y, mainWindow.width, mainWindow.height, NULL, 0, hInstance, NULL);
while (GetMessage(&msg, NULL, 0, 0))
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
return (int)msg.wParam;
}
// This code is based roughly on tutorial code present at http://zetcode.com/gui/winapi/
Further reading
The built-in set of window classes is rather limited, so you might be curious as to how you can define your own window classes ("Controls") using the Windows API; see the articles below:
*
*Custom Controls in Win32 API: The Basics (Code Project)
*The Wine source serves as a good example of how the Windows API could be implemented, and how you can make your own window classes that imitate the behavior of built-in classes.
*Zetcode.com's tutorials
NOTE: I originally intended this post to cover the creation of dialogs programmatically. Due to a mistake on my part I didn't realize that you can't just "show" a window as a dialog. Unfortunately I wasn't able to get the setup mentioned by Raymond Chen working. Even looking at WINE's source, it's not super clear.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: "Beautifying" an OS X disk image When I downloaded Skype, I noticed that, in place of a normal "disk image" icon, there was a custom icon, and when I double clicked on the icon, the window used a colorful image as the background, prompting me to drag the skype icon into the applications folder.
How can I do that with my applications?
Thanks,
Jason
A: If you’re interested in a commercial solution, there are a number of DMG designer apps available, including the following I gleaned from a MacUpdate search:
*
*DMG Packager
*DMG Architect
*DMG Canvas
A: This is a great freeware solution for a custom background with zero hassle:
http://sourceforge.net/projects/dmgcreator/
A: Let me add to the other answers jwz's howto on setting view options on .dmg files. This is the simplest one that I got to work with no problems.
Also, to change the icon from the default .dmg icon:
*
*In the Finder, open an inspector window (⌘I) for the file whose icon you want to use, click on the icon at the top (it will get a blue border) and copy it to the clipboard (⌘C).
*Then open another inspector for your .dmg, click on the icon and press ⌘V to paste
(you can change the icon for any file like this).
For this kind of thing, you can always look at open-source programs and see what they do to get a similar result. For example here are the relevant source files for building pretty dmg files in:
*
*Adium
*Miro
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: DateTime Utility for ASP.net I was wondering if anyone could suggest a utility library that has useful functions for handling dates in ASP.NET easily taking away some of the leg work you normally have to do when handling dates?
Subsonic Sugar has some really nice functions:
http://subsonichelp.com/html/1413bafa-b5aa-99aa-0478-10875abe82ec.htm
http://subsonicproject.googlecode.com/svn/trunk/SubSonic/Sugar/
Is there anything better out there?
I want to work out the start (Monday) and end (Sunday) dates of the last 5 weeks.
I was thinking something like this:
DateTime Now = DateTime.Now;
while (Now.DayOfWeek != DayOfWeek.Monday)
{
    Now = Now.AddDays(-1); // AddDays returns a new value; DateTime is immutable
}
for (int i = 0; i < 5; i++)
{
    AddToDatesList(Now, Now.AddDays(7));
    Now = Now.AddDays(-7);
}
but this seems crappy. Plus this is not exactly what I want, because I need the time of that start date to be 00:00:00 and the time of the end date to be 23:59:59.
A: Is there a specific problem you are trying to handle with dates? If the existing date API in .NET can handle your problem cleanly, I see no reason to consider a 3rd party library to do it. When I was in .NET, we had to deal with dates quite a bit, and the standard libraries provided a fair amount of functionality to us.
A: What exactly do you want to do that System.DateTime and System.Timespan can't handle?
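For instance, the five-week window from the question can be computed with plain DateTime (a sketch; AddToDatesList stands in for the asker's own hypothetical helper):
DateTime monday = DateTime.Today;            // today at 00:00:00
while (monday.DayOfWeek != DayOfWeek.Monday)
    monday = monday.AddDays(-1);

for (int i = 0; i < 5; i++)
{
    // Monday 00:00:00 through Sunday 23:59:59
    AddToDatesList(monday, monday.AddDays(7).AddSeconds(-1));
    monday = monday.AddDays(-7);
}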
A: CSLA has a useful helper class called SmartDate that addresses quite a lot of the problems when using dates in real applications. As far as I can recall it's coupled to the rest of the framework.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |