source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths, 1 to 8) | prompt (stringlengths, 209 to 40.4k) | gold_standard_solution (stringlengths, 0 to 56.7k) | verification_info (stringclasses, 1 value) | metadata (stringlengths, 138 to 225) | problem_id (stringlengths, 9 to 10)
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 609886 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We have the Bohr-van Leeuwen theorem, which tells us that magnetism cannot be explained classically. The proof is simple: it turns out that classically the partition function is independent of the magnetic vector potential, and therefore the free energy is independent of the magnetic field, which proves that there cannot be magnetization. But we have a classical Langevin theory of paramagnetism where the partition function is not independent of $B$, and thus we get a magnetization. So how can we say that magnetism cannot be explained classically? Doesn't the Bohr-van Leeuwen theorem fail here?
Now provide the response and nothing else.
| One way to answer your question is that, citing J. H. Van Vleck, when Langevin assumed that the magnetic moment of the atom or molecule had a fixed value $\mu$ , he was quantizing the system without realizing it. If you do not assume the existence of a permanent magnetic moment, but try to derive it from the motion of electrons inside the atoms, then this is doomed to fail (classically), precisely because of the Bohr-van Leeuwen theorem. In other words, Langevin theory is not a classical theory, but a kind of semi-classical theory. This is discussed, for instance, in this recent review paper . | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/609886', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/164488/']} | jdg_380441 |
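To make the "hidden quantization" concrete, here is a brief sketch of the Langevin calculation (my own addition, not part of the original answer): once a fixed moment magnitude $\mu$ is assumed, the single-moment partition function is $$Z_1=\int_0^{2\pi}\!\!\int_0^{\pi} e^{\beta\mu B\cos\theta}\,\sin\theta\,d\theta\,d\phi=\frac{4\pi\sinh(\beta\mu B)}{\beta\mu B},$$ which depends on $B$ precisely because $|\boldsymbol{\mu}|=\mu$ was fixed by hand; the magnetization follows as $M=N\mu\,L(\beta\mu B)$ with the Langevin function $L(x)=\coth x-\frac{1}{x}$. Had $\boldsymbol{\mu}$ instead been derived from classical electron orbits, the momentum integrals could be shifted to eliminate the vector potential and the $B$-dependence would cancel, which is exactly the Bohr-van Leeuwen statement.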
stackexchange | llm_judgeable_groundtruth_similarity | 55346547 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to use the values in the name column (search key) and look them up in a table with the headers [name, regular, overtime]. The formula I am using is: =ArrayFormula(index(rateTable, match(formName,workers,0),match(formType,rateTypes,0),0)*{formHours}) It works, except that the search key in the MATCH formula does not reference every respective entry in the name column; it only works with one fixed reference to a cell. Am I trying to use this formula beyond its capabilities? I'd like to know if I should stop searching. Thanks. Here is a link to a copy of my sheet: https://docs.google.com/spreadsheets/d/1sovuTB4zSTpl0RUHciYltr-pf8g4KPTgkYVE5lB4Ifk/edit?usp=sharing
Now provide the response and nothing else.
| I was able to figure out and solve the problem. The problem was that the name of the screen I was trying to navigate to and the name of the stack navigator that contained that screen (its name within the parent tab navigator) were the same. And although navigation was working, the parameters were not being passed, as I said in the problem description. Navigation was working because the screen I was trying to navigate to was set as the initial route in the containing stack navigator. Apparently, I was navigating, and passing the parameters, to the containing stack navigator. Once I changed the name of the stack navigator, the problem was solved. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55346547', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8193814/']} | jdg_380442 |
stackexchange | llm_judgeable_groundtruth_similarity | 22586804 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my Clojure project, I changed the dependency from [org.clojure/java.jdbc "0.2.3"] to [org.clojure/java.jdbc "0.3.3"] I have received the following error: clojure.lang.Compiler$CompilerException: java.lang.RuntimeException: No such var: sql/with-connection, compiling:(/Volumes/HD2/env/prj/restore/src/restore/db.clj:80:5) What happened? Is this function deprecated? Background: I needed execute! 0.2.3 didn't have it. 0.3.3 has it, but it lacks with-connection ?!? Please help.
Now provide the response and nothing else.
| with-connection was not only considered harmful for reasons leonardoborges mentioned, it also makes working with connection pools harder. Decoupling function database access from specific connections makes for a much easier model. Forcing queries to use the same connection should be the exception, not the rule. So clojure.core/java.jdbc 0.3.0 was designed specifically to deprecate with-connection . To accommodate that, the whole API needed to be changed. Each database access function now takes the db spec as a parameter. If the db-spec is a connection pool, the function will be executed on one of its connections; otherwise an implicit new connection is made from the db-spec. So all database access functions result in a connection each when connection pools are not used. This also means resultsets can no longer be returned lazily. Formerly, processing lazy sequences could be postponed while still inside the with-connection block. Now they need to be realized during function execution, or their connection could be closed or a new connection could be returned from the pool for the next access function. So processing can now be done within the scope of the functions themselves, through two new named parameters: :row-fn and :result-set-fn . The first transforms each row, the second the collection of rows. If the :result-set-fn returns a lazy sequence, you will get a connection or resultset closed exception when using it later. The default :result-set-fn is doall . When using your own, make sure it is realized. So much for the general case, where access and connections are decoupled. Now for the exception: needing functions to use the same connection. The most common of these is transaction use, which uses scope to indicate the beginning and the end of the transaction. The old transaction only provided this scope. The new with-db-transaction function takes a binding of a new var and the db-spec. This var will be bound to one specific connection from a pool, or, when no connection pools are used, a newly created connection. All db access functions used inside the block should use the var instead of the db-spec parameter.

(def db {..})

(with-db-transaction [c db]
  (let [from 1111
        to 2222
        sum 10
        saldo-from (query c ["select saldo from account where id=?" from]
                          :row-fn :saldo :result-set-fn first)
        saldo-to (query c ["select saldo from account where id=?" to]
                        :row-fn :saldo :result-set-fn first)]
    (update! c :account {:saldo (- saldo-from sum)} ["id=?" from])
    (update! c :account {:saldo (+ saldo-to sum)} ["id=?" to])))

A begin transaction command will be issued at the beginning. All access will use the same connection that's now specifically passed to the functions instead of through dynamic scoping magic. When no exceptions are generated, a commit will be given at the end of the scope. When one specific connection is needed, but no transaction mechanic, there's the with-db-connection function, which has the same semantics. So if you want to execute a command to set a session setting, and do some queries on that connection, you can do the following:

(def db {..})

(with-db-connection [c db]
  (execute! c ["alter session set NLS_SORT='ITALIAN'"])
  (query c ["select * from person where name=?" "Mario"]
         :row-fn (comp concat (juxt :name :surname))))

Connection pools usually have specific on-open and on-close commands that are part of their specs. Using those, all connections from that pool will have the same session settings set, and with-db-connection is not even needed at all.
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22586804', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1919783/']} | jdg_380443 |
stackexchange | llm_judgeable_groundtruth_similarity | 9518554 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a need that is a bit similar to this question , except that it requires a deeper exploration of the source object. Here is a code sample:

public class Target {};

public class Analyzed
{
    public Target EasyOne { get; set; }
    public IList<Target> ABitMoreTricky { get; set; }
    public IList<Tuple<string, Target>> Nightmare { get; set; }
}

From an instance of Analyzed , I want to extract all the Target instances. In order to ease the exploration, we can assume the following: Explore only properties. There is no infinite reference loop. For now, EasyOne is... easy, but I am looking for some strategy to get all the Target instances lost in more tricky structures.
Now provide the response and nothing else.
| How about something along these lines:

public List<T> FindAllInstances<T>(object value) where T : class
{
    HashSet<object> exploredObjects = new HashSet<object>();
    List<T> found = new List<T>();
    FindAllInstances(value, exploredObjects, found);
    return found;
}

private void FindAllInstances<T>(object value, HashSet<object> exploredObjects, List<T> found) where T : class
{
    if (value == null) return;
    if (exploredObjects.Contains(value)) return; // guards against reference cycles
    exploredObjects.Add(value);

    IEnumerable enumerable = value as IEnumerable;
    if (enumerable != null)
    {
        foreach (object item in enumerable)
        {
            FindAllInstances<T>(item, exploredObjects, found);
        }
    }
    else
    {
        T possibleMatch = value as T;
        if (possibleMatch != null)
        {
            found.Add(possibleMatch);
        }

        Type type = value.GetType();
        PropertyInfo[] properties = type.GetProperties(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.GetProperty);
        foreach (PropertyInfo property in properties)
        {
            object propertyValue = property.GetValue(value, null);
            FindAllInstances<T>(propertyValue, exploredObjects, found);
        }
    }
}

private void TestIt()
{
    Analyzed analyzed = new Analyzed()
    {
        EasyOne = new Target(),
        ABitMoreTricky = new List<Target>() { new Target() },
        Nightmare = new List<Tuple<string, Target>>()
        {
            new Tuple<string, Target>("", new Target())
        }
    };
    List<Target> found = FindAllInstances<Target>(analyzed);
    MessageBox.Show(found.Count.ToString());
} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9518554', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/177596/']} | jdg_380444 |
stackexchange | llm_judgeable_groundtruth_similarity | 53199420 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When using ActiveStorage, how do you create a scope for when files are attached? For example:

class Check < ActiveRecord::Base
  has_one_attached :image
end

I want something like Check.has_attached_image to return only records where there is an existing attached image. I know that ActiveStorage provides a with_attached_image scope. But that doesn't seem to be working:

irb(main):009:0> Check.with_attached_image.to_sql
=> "SELECT \"checks\".* FROM \"checks\""
Now provide the response and nothing else.
| The main purpose of the with_attached_image scope is to avoid N+1 queries (it includes the attached blobs in your query). To return only records where there is an existing attached image, you can create a scope in the Check model like this:

scope :has_attached_image, -> { joins(image_attachment: :blob) }

Update from the comments:

scope :has_attached_image, -> { joins(:image_attachment) } | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53199420', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/579664/']} | jdg_380445 |
stackexchange | llm_judgeable_groundtruth_similarity | 260630 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a list of directories and subdirectories that contain large csv files. There are about 500 million lines in these files; each line is a record. I would like to know: how many lines are in each file, how many lines are in each directory, and how many lines there are in total. Most importantly, I need this in 'human readable' format, e.g. 12,345,678 rather than 12345678. It would be nice to learn how to do this in 3 ways: plain vanilla bash tools, awk etc., and perl (or python).
Now provide the response and nothing else.
| How many lines are in each file. Use wc , originally for word count, I believe, but it can do lines, words, characters, bytes, and the longest line length. The -l option tells it to count lines.

wc -l <filename>

This will output the number of lines in the file:

$ wc -l /dir/file.txt
32724 /dir/file.txt

You can also pipe data to wc as well:

$ cat /dir/file.txt | wc -l
32724
$ curl google.com --silent | wc -l
63

How many lines are in each directory. Try:

find . -name '*.pl' | xargs wc -l

Another one-liner:

( find ./ -name '*.pl' -print0 | xargs -0 cat ) | wc -l

BTW, the wc command counts newline characters, not lines: when the last line in the file does not end with a newline, it will not be counted. You may use grep -c ^ instead; full example:

#this example prints line count for all found files
total=0
while read FILE; do
    #we use grep instead of wc here for proper counting
    count=$(grep -c ^ < "$FILE")
    echo "$FILE has $count lines"
    total=$((total+count))
done < <(find /path -type f -name "*.php") #process substitution keeps $total in the current shell
echo TOTAL LINES COUNTED: $total

How many lines in total. Not sure that I understood your request correctly; e.g. this will output results in the following format, showing the number of lines for each file:

# wc -l `find /path/to/directory/ -type f`
 103 /dir/a.php
 378 /dir/b/c.xml
 132 /dir/d/e.xml
 613 total

Alternatively, to output just the total number of newline characters without the file-by-file counts, the following command can prove useful:

# find /path/to/directory/ -type f -exec wc -l {} \; | awk '{total += $1} END{print total}'
613

Most importantly, you wanted 'human readable' output, e.g. 12,345,678 rather than 12345678. Bash has a printf builtin, and with a locale that defines thousands grouping (e.g. en_US.UTF-8) the ' flag inserts the separators:

printf "%'d\n" 12345678   # prints 12,345,678

As always, there are many different methods that could be used to achieve the same results mentioned here. | {} | {'log_upvote_score': 8, 'links': ['https://unix.stackexchange.com/questions/260630', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/136941/']} | jdg_380446 |
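As a small addition (not from the original answer), here is one way to do all three counts in Python with grouped output; the .csv filter and the root path are assumptions taken from the question:

import os

grand_total = 0
for dirpath, _, filenames in os.walk("/path/to/directory"):
    dir_total = 0
    for name in filenames:
        if not name.endswith(".csv"):
            continue
        path = os.path.join(dirpath, name)
        # count newline characters in chunks, like wc -l, without loading whole files
        with open(path, "rb") as f:
            count = sum(chunk.count(b"\n") for chunk in iter(lambda: f.read(1 << 20), b""))
        print(f"{path}: {count:,} lines")
        dir_total += count
    if dir_total:
        print(f"{dirpath}: {dir_total:,} lines in directory")
    grand_total += dir_total
print(f"{grand_total:,} lines in total")

The :, format specifier is what produces the 12,345,678-style grouping.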
stackexchange | llm_judgeable_groundtruth_similarity | 2459620 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I solve this equation $e^{x/n}=x$ for $x$? I have no idea how to solve it. I really appreciate any help!
Now provide the response and nothing else.
| You need to use the Lambert W Function . Read this Wikipedia article, then read my solution:

$$e^{x/n}=x$$
$$1=xe^{-x/n}$$
$$-\frac{1}{n}=-\frac{xe^{-x/n}}{n}$$
$$W\bigg(-\frac{1}{n}\bigg)=-\frac{x}{n}$$
$$x=-nW\bigg(-\frac{1}{n}\bigg)$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2459620', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/339650/']} | jdg_380447 |
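A quick numerical sanity check of the closed form (my addition; requires SciPy). For $n>0$ a real solution exists only when $-1/n\ge -1/e$, i.e. $n\ge e$, and for $n>e$ the $k=0$ and $k=-1$ branches of $W$ give the two real roots:

import numpy as np
from scipy.special import lambertw

n = 4.0
for k in (0, -1):  # the two real branches when n > e
    x = (-n * lambertw(-1.0 / n, k=k)).real
    print(k, x, np.isclose(np.exp(x / n), x))  # True for both roots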
stackexchange | llm_judgeable_groundtruth_similarity | 27453830 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a website where I want to track who has clicked on specific links with GA. Let's say I have this page: /index.php?id=32 On this page I run some query based on the ID variable (in this case: 32), and I get the URL of the item with id 32 from the database to redirect the visitor. I'm using the PHP function header('Location: http://www.example.com');. Before redirecting the user, I want Google to capture the visitor's information and only then redirect to the desired webpage. I have tried to paste the GA code and echo it just before the redirection; however, it did not work. How is it possible to track these kinds of pages with GA?
Now provide the response and nothing else.
| Generally speaking: If your page uses redirects, the redirecting page becomes the landing page's referrer. For example, if you've changed your site so that index.html now redirects to home.html, then index.html becomes the referrer for home.html. If someone reached your site via a Google search that sent them first to index.html, you won't have any data regarding the Google search. For this reason, you should place the Google Analytics tracking code on the redirecting page as well as on the landing page. This way, the redirecting page will capture the actual referrer information for your reports. Note, some browsers may actually redirect before the JavaScript call from the code can be made. (cf. https://support.google.com/analytics/answer/1009614?hl=en )

Your specific case: Since PHP is rendered and executed before any Javascript, the Google Analytics tracker has no chance to send data to its server.

Solutions: Considering that you cannot track a PHP redirection page, there are a number of possible alternatives:

- Javascript redirection: https://stackoverflow.com/a/4745622/1672895
  window.location = "http://www.yoururl.com";
- Meta Refresh: https://stackoverflow.com/a/8692559/1672895
  <meta http-equiv="refresh" content="5; url=http://example.com/">
- Virtual page tracking: https://developers.google.com/analytics/devguides/collection/analyticsjs/pages
  ga('send', 'pageview', '/index.php?id=32');
- Campaign tracking (I wouldn't personally use this method in this specific case.)
  /products.php?utm_source=index&utm_medium=redirection-page&utm_campaign=32

The last two items in the list are implemented on the individual links on the initial page, before you get to the PHP redirection page. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27453830', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2874194/']} | jdg_380448 |
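One further option the answer does not mention, sketched here as my own suggestion: because the redirect is generated server-side, the hit can also be sent server-side before the Location header is issued, using the (Universal Analytics, pre-GA4) Measurement Protocol. A minimal Python sketch; the property ID is a placeholder and the client ID handling is deliberately naive:

import uuid
import requests

payload = {
    "v": "1",                    # Measurement Protocol version
    "tid": "UA-XXXXX-Y",         # placeholder UA property id
    "cid": str(uuid.uuid4()),    # anonymous client id (should be stable per visitor)
    "t": "pageview",             # hit type
    "dp": "/index.php?id=32",    # page being tracked before the redirect
}
requests.post("https://www.google-analytics.com/collect", data=payload, timeout=2)
# ...then emit the redirect to the destination URL.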
stackexchange | llm_judgeable_groundtruth_similarity | 230119 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I think I've noticed this before but never thought about it much; now I'm curious. > ldd /bin/bash linux-vdso.so.1 => (0x00007fff2f781000) libtinfo.so.5 => /lib64/libtinfo.so.5 (0x00007f0fdd9a9000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f0fdd7a5000) libc.so.6 => /lib64/libc.so.6 (0x00007f0fdd3e6000) /lib64/ld-linux-x86-64.so.2 (0x00007f0fddbf6000) Libtinfo is part of ncurses. This is a fedora system, but it is the same on ubuntu, and I notice on raspbian (a debian variant) it also links to libncurses itself. What's the reason for this? I thought everything bash did could be done with libreadline (which curiously, it does not link to). Is this simply a substitute for that?
Now provide the response and nothing else.
| If you run bash as:

LD_DEBUG=bindings bash

on a GNU system, and grep for bash.*tinfo in that output, you'll see something like:

797: binding file bash [0] to /lib/x86_64-linux-gnu/libtinfo.so.5 [0]: normal symbol `UP'
797: binding file bash [0] to /lib/x86_64-linux-gnu/libtinfo.so.5 [0]: normal symbol `PC'
797: binding file bash [0] to /lib/x86_64-linux-gnu/libtinfo.so.5 [0]: normal symbol `BC'
797: binding file bash [0] to /lib/x86_64-linux-gnu/libtinfo.so.5 [0]: normal symbol `tgetent'
797: binding file bash [0] to /lib/x86_64-linux-gnu/libtinfo.so.5 [0]: normal symbol `tgetstr'
797: binding file bash [0] to /lib/x86_64-linux-gnu/libtinfo.so.5 [0]: normal symbol `tgetflag'

You can confirm from the output of nm -D /bin/bash that bash is using those symbols from tinfo. Bringing up the man page for any of those symbols clarifies what they're for:

$ man tgetent
NAME
       PC, UP, BC, ospeed, tgetent, tgetflag, tgetnum, tgetstr, tgoto, tputs - direct curses interface to the terminfo capability database

Basically, bash, or more likely its readline editor (libreadline is statically linked in), uses those to query the terminfo database to find out about terminal capabilities, so it can run its line editor properly (sending the right escape sequences and identifying key presses correctly) on any terminal. As to why readline is statically linked into bash: you have to bear in mind that readline is developed alongside bash by the same person, and it is included in the source of bash. It is possible to build bash to be linked with the system's installed libreadline , but only if that one is of a compatible version, and that's not the default. You need to call the configure script at compilation time with --with-installed-readline . | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/230119', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/25985/']} | jdg_380449 |
stackexchange | llm_judgeable_groundtruth_similarity | 114919 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for paper(s) that talk about "why low $R^2$ value is acceptable in social science or education research". Please point me to the right journal if you know one.
Now provide the response and nothing else.
| A paper by Abelson (1985) titled "A variance explanation paradox: When a little is a lot" , published in Psychological Bulletin , addresses (part of) this issue. In particular, Abelson shows that the proportion of variance shared between a dichotomous and a continuous variable can be surprisingly small, even when intuition would dictate a very large $R^2$ (he uses the example of whether a baseball batter would hit a ball or not, as a function of the batter's batting average--yielding a whopping $R^2 < .001$). Abelson goes on to explain that even such a tiny $R^2$ can be meaningful, as long as the effect under investigation can make itself felt over time. P.S.: I used this paper a few months ago to respond to a reviewer who was unimpressed with our low $R^2$'s, and it hit the mark--our paper is now in press :) Reference: Abelson, R. P. (1985). A variance explanation paradox: When a little is a lot. Psychological Bulletin , 97 , 129-133. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/114919', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/55415/']} | jdg_380450 |
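A back-of-envelope illustration of how such a tiny $R^2$ arises (my own sketch with illustrative numbers, not Abelson's exact figures): treat a single at-bat as Bernoulli with success probability $p_i$ for batter $i$. If $p_i$ has mean $\bar{p}$ and variance $\sigma_p^2$ across batters, the law of total variance gives $\mathrm{Var}(Y)=E[p(1-p)]+\mathrm{Var}(p)=\bar{p}(1-\bar{p})$, so the variance explained by batter skill is $$R^2=\frac{\sigma_p^2}{\bar{p}(1-\bar{p})}.$$ Plugging in plausible values $\bar{p}\approx 0.27$ and $\sigma_p\approx 0.02$ yields $R^2\approx 0.0004/0.197\approx 0.002$: a fraction of one percent, the same order as Abelson's result.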
stackexchange | llm_judgeable_groundtruth_similarity | 22629462 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As the title suggests (and leaving aside the map-reduce framework): if I want to trigger an event to run a consistency check or security operations before a record is inserted, how can I do that with MongoDB?
Now provide the response and nothing else.
| MongoDB does not support triggers, but people have created solutions around them, mostly using the oplog, though this will only help you if you are running with replica sets, as the oplog is a capped collection that keeps track of data changes for the purposes of replication.For a nodejs solution see: https://www.npmjs.org/package/mongo-watch or see an earlier SO thread: How to listen for changes to a MongoDB collection? If you are concerned with consistency, read about write concern in mongoDB. http://docs.mongodb.org/manual/core/write-concern/ You can be as relaxed or as strict as you want by setting insert write concern levels, from fire and hope to getting an acknowledgement from all members of the replica set. So, if you want to run a consistency check before inserting data, you probably will have to move that logic to the client application and set your write concern level to a level that will ensure consistency. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22629462', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3236929/']} | jdg_380451 |
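Since the answer points at oplog-based workarounds, here is a minimal pymongo sketch of tailing the oplog on a replica set (my illustration; the namespace is a placeholder, and note this fires after the insert, not before):

from pymongo import CursorType, MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
oplog = client.local.oplog.rs

# tail insert entries for one namespace; the cursor blocks awaiting new data
cursor = oplog.find(
    {"ns": "mydb.mycollection", "op": "i"},
    cursor_type=CursorType.TAILABLE_AWAIT,
)
for entry in cursor:
    inserted_doc = entry["o"]
    # run consistency checks / security auditing here
    print(inserted_doc)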
stackexchange | llm_judgeable_groundtruth_similarity | 410854 |
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As part of our Outdated Answers initiative , we shipped a temporary data-collection exercise in May. Unfortunately, the results were inconclusive and didn't lead to any big aha moments. After I briefly recap what we did (and didn't) learn, I'll update you on what we're doing next: sorting and labeling answers. Flagging exercise results From May to late June, we showed a prompt on Stack Overflow that allowed users to flag specific answers as being outdated, along with a reason why. We hoped we could use the data to quantify how prevalent the problem is, to inform decision making, and to provide an initial training sample for machine learning, but the data wasn't very compelling. Users marked answers as outdated ~23,000 times. This was a low response rate: roughly 17 to 27 answers flagged out of every 100,000 answers. There were no discernible differences between accepted answers (with the green checkmark), highest voted answers, and remaining answers. Accepted answers had the highest rate of outdatedness, but because it is always the top answer, we couldn't tease out the impact of position on page. In addition, we couldn't rule out that the differences were due to chance. "Newer, more efficient answers exist" was chosen 45% of the time as the reason users marked an answer outdated. This was consistent with our March 2021 survey . We compared answers that scored higher, lower, or equal to the accepted answer and didn't find any definitive differences. We looked at recent upvotes/downvotes and found a weak association with outdatedness for votes cast within the last three months. There was no strong relationship between question age, answer age, and outdatedness. We found a low rate of accepted answers that were edited after they were accepted. Of the 3.7 million accepted answers in the data set, only about 4.9% were edited and only 9% of those edits were by the author. Next steps: sorting and labelling Despite the inconclusiveness of the study, we are making progress on a longstanding Meta request : to change the default sort so that the accepted answer isn't pinned to the top and answers are sorted strictly by score. Our plan is to test this out on a subset of questions where the highest scored answer is different from the accepted answer. We'll compare the upvote rate and the rate of users copying all or part of an answer between the two sorts: accepted answer first vs. highest scored answer first. Assuming there is no negative impact, we then plan to roll out the change. Look for more details when we get closer to launch. In addition to unpinning the accepted answer, we are in the early exploration phase of: developing a Trending sort that prioritizes recent upvotes more than older votes. placing a Trending indicator on answers that have recently received comparatively higher voting activity than other answers. making it possible to label individual answers with technology versions — for example, [python-v2] vs. [python-v3]. We've been getting user feedback on these early concepts by adding a handful of questions to our monthly Site Satisfaction Survey every few weeks. We will post more details on Meta when we are further along in the discovery process and seeking feedback.
Now provide the response and nothing else.
| I think it's important to couch the Outdated Answers Flagging data/results in the context of the scale at which the test was deployed. It was something like a 1% chance on any page load to see the feature, no? How many of the site's top 100 flaggers (who all have 7000+ helpful flags), for example, ever saw the feature? How many times was the feature exposed to users in total? For example, I tried feverishly to please the random number gods and land a page load on a page with an outdated accepted answer and was never able to see the option. So, in regards to: Users marked answers as outdated ~23,000 times. This was a low response rate: roughly 17 to 27 answers flagged out of every 100,000 answers. This low response rate is because you hooked a refrigerator's ice maker hose up to the answers pipeline. I would suggest, if you want better/more conclusive data, that you run a second, wider experiment, this time with a fire hose (or no filter hose at all; turn it on for everyone [or everyone over n reputation], all of the time, for a few weeks). I suspect most of this kind of flag would, over its lifetime, come from a certain subset of users (the kind that happens to spend a lot of time on Meta, perform user moderation, and have high reputation and/or a lot of answers). However, a 1% chance for all users means a lot of your 'Outdated' flags during this experiment probably came from people who will only ever cast one or two such flags in their lifetime, which has a compounding effect on artificially shrinking your results count. | {} | {'log_upvote_score': 6, 'links': ['https://meta.stackoverflow.com/questions/410854', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/7469818/']} | jdg_380452 |
stackexchange | llm_judgeable_groundtruth_similarity | 38416250 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to get the touch x and y with respect to the canvas to check for collisions and things like that after I have moved and scaled the canvas. I already managed to get the touch coordinate whenever I translate the canvas or scale it around the origin (0,0) by using the following code:

private float[] convertToCanvasCoordinate(float touchx, float touchy) {
    float newX = touchx / scale - translatex;
    float newY = touchy / scale - translatey;
    return new float[] { newX, newY }; // return both coordinates
}

But if I scale the canvas around another point, like for example canvas.scale(scale, scale, 50, 50), it doesn't work. I know it shouldn't work, but I just couldn't figure out how to solve it. I already looked at other questions, but none of the answers talks about how to get the coordinate if I scale according to a specific point.
Now provide the response and nothing else.
| Updated, super simple example: The most basic way to properly do a scene in Android is to use a matrix to modify the view and the inverse of that matrix to modify your touches. Here's a simplified answer. Kept very short.

public class SceneView extends View {
    Matrix viewMatrix = new Matrix(), invertMatrix = new Matrix();
    Paint paint = new Paint();
    ArrayList<RectF> rectangles = new ArrayList<>();
    RectF moving = null;

    public SceneView(Context context) {
        super(context);
    }

    public SceneView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public SceneView(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        event.transform(invertMatrix);
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                moving = null;
                for (RectF f : rectangles) {
                    if (f.contains(event.getX(), event.getY())) {
                        moving = f;
                        return true;
                    }
                }
                viewMatrix.postTranslate(50, 50);
                viewMatrix.postScale(.99f, .99f);
                viewMatrix.postRotate(5);
                invertMatrix = new Matrix(viewMatrix);
                invertMatrix.invert(invertMatrix);
                break;
            case MotionEvent.ACTION_MOVE:
                if (moving != null) {
                    moving.set(event.getX() - 50, event.getY() - 50, event.getX() + 50, event.getY() + 50);
                }
                break;
            case MotionEvent.ACTION_UP:
                if (moving == null) {
                    rectangles.add(new RectF(event.getX() - 50, event.getY() - 50, event.getX() + 50, event.getY() + 50));
                }
                break;
        }
        invalidate();
        return true;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.concat(viewMatrix);
        for (RectF f : rectangles) {
            canvas.drawRect(f, paint);
        }
    }
}

This is rather minimalist, but it shows all the relevant aspects: moving the view, touch modification, collision detection. Each time you touch the screen it will move diagonally, zoom out, and rotate (basically moving in a spiral), and create a black rectangle. If you touch the rectangles you can move them around to your heart's content. When you click the background, you get more spiraling of the view and more black rectangles. See: https://youtu.be/-XSjanaAdWA

The line in the other answer here is "Given that we're scaling relative to the origin." Which is to say that our scale is already relative to the origin. Scale is relative to the origin because the matrix code simply multiplies the x and y coords. When things are scaled relative to anything else, they are really translated, scaled, and translated back. That's just how the math has to be. If we apply a scale to the canvas, that scale is already relative to the origin. Sometimes you can scale relative to a point, but that's just matrix math. In most implementations of views as such, we generally perform a zoom on a point by zooming in, then panning the given viewport. This is because the viewports are similar scaled rectangles. So what we do is figure out how much we need to pan to keep the point in the same place, in both the prezoomed and postzoomed views relative to the viewport. That code is:

scalechange = newscale - oldscale;
offsetX = -(zoomPointX * scalechange);
offsetY = -(zoomPointY * scalechange);

Then we do canvas.translate(offsetX, offsetY);

The question here, though, is how to translate that back for the given touch events for Android. And for that, the answer is: we apply all the same operations we applied to the view to the touch positions, in reverse order. Basically, the way matrix math works, you have to apply reverse operations in reverse order to get the inversion. Though this is why we tend to get inverted matrices for our Matrix transformation.
And in Android, we have a lot of stuff done for us. And if you get your head around what's going on, we can solve all of these problems and really don't have to worry about any of this. You can check a well done implementation of this at this project (MIT license, I coded the relevant part): https://github.com/Embroidermodder/MobileViewer

The MotionEvent class can, very importantly, be modified by the Matrix. And the matrices can be inverted. If we understand this, we understand that all the work is done for us. We simply take whatever matrix we made, and apply that to the View. We get the inverse of that matrix, and apply that inverted Matrix to the touch events, as they happen. Now our touch events happen in scene space. We can also, if we want the position of something, call matrix.mapPoints(), which lets us simply convert these back and forth as needed.

The other way of doing this would be to take the scene we want and convert that via the View class rather than in the canvas. This would make the touch events occur in the same space as the screen. But Android will void out touch events that occur outside of the view, so MotionEvents that begin outside of the original clipped part of the view will be discarded. So this is a non-starter. You want to translate the canvas, and counter-translate the MotionEvent.

We'll need a couple of things. We can define a viewport, and use that to build our matrices:

private RectF viewPort;
Matrix viewMatrix;
Matrix invertMatrix;

The viewPort certainly isn't needed, but conceptually it can help a lot. Here we build the matrix from the viewPort. Which is to say, whatever rectangle we set that to, it will be the part of the scene we can view.

public void calculateViewMatrixFromPort() {
    float scale = Math.min(_height / viewPort.height(), _width / viewPort.width());
    viewMatrix = new Matrix();
    if (scale != 0) {
        viewMatrix.postTranslate(-viewPort.left, -viewPort.top);
        viewMatrix.postScale(scale, scale);
    }
    calculateInvertMatrix();
}

If we modify the viewMatrix, we can use that to derive the port, by simply taking the original screen rectangle and using the inverted Matrix to map it back into scene terms.

public void calculateViewPortFromMatrix() {
    float[] positions = new float[] { 0, 0, _width, _height };
    calculateInvertMatrix();
    invertMatrix.mapPoints(positions);
    viewPort.set(positions[0], positions[1], positions[2], positions[3]);
}

This assumes we have the _width and _height of the view we're working with; we can simply pan and scale the viewbox. If you wanted something fancier, like applying a rotation to the screen, you would need to use 4 points, one for each of the corners, and then apply the matrix to the points. But you can basically easily add such things, because we don't deal with the heavy lifting directly but rely heavily on the matrix.

We also need to be able to calculate the inverted matrix, so that we can reverse the MotionEvents:

public void calculateInvertMatrix() {
    invertMatrix = new Matrix(viewMatrix);
    invertMatrix.invert(invertMatrix);
}

And then we apply these matrices to the canvas and the invert matrix to the MotionEvent:

@Override
public boolean onTouchEvent(MotionEvent event) {
    //anything happening with event here is the X Y of the raw screen event.
    event.offsetLocation(event.getRawX() - event.getX(), event.getRawY() - event.getY());
    //converts the event.getX() to event.getRaw() so the title bar doesn't fubar.
    //anything happening with event here is the X Y of the raw screen event, relative to the view.
    if (rawTouch(this, event)) return true;
    if (invertMatrix != null) event.transform(invertMatrix);
    //anything happening with event now deals with the scene space.
    return touch(this, event);
}

One of the notable deficits in the MotionEvent class is that getRawX() and getRawY() (which are the actual raw touches on the screen rather than the touches in the view) only allow you to get a single finger location. Really, that's pretty crippling, but we can simply put in an offset to the MotionEvent so that getX(3) and the various points properly overlap where getRawX(3) would be. This will properly let us deal with title bars etc., as the MotionEvents are technically relative to the view, and we need them relative to the screen (sometimes these are the same, such as in full screen mode).

Now, we're done. So we can apply these matrices and remove them and switch our contexts very easily, without needing to know what they are or what our current view is looking at, and get all the different touch events and various pointerCounts of touch events properly. We can also draw our stuff at different translations, such as if we want an overlay of tools that do not move with the scene, but rather stay relative to the screen.

@Override
public void onDraw(Canvas canvas) {
    //Draw all of our non-translated stuff. (under matrix bit).
    canvas.save();
    if (viewMatrix != null) canvas.setMatrix(viewMatrix);
    //Draw all of our translated stuff.
    canvas.restore();
    //Draw all of our non-translated stuff. (over matrix bit).
}

It's best to save and restore the canvas so that the matrix we apply gets removed, especially if things are going to get complicated by passing the draw event around to different classes. Sometimes these classes might add a matrix to the canvas, which is the reason the View class's source code itself looks a bit like:

int level = canvas.getSaveCount();
canvas.save();
//does the drawing in here, delegates to other draw routines.
canvas.restoreToCount(level);

It saves a count of how many states are stacked up in the canvas. Then, after delegating to who knows what, it restores back to that level in case some class called .save() but didn't call restore(). You might want to do the same.

If we want the full pan and zoom code, we can do that too. There are some tricks with regard to setting the zoom point as the midpoint between the various touches, etc.

float dx1;
float dy1;
float dx2;
float dy2;
float dcx;
float dcy;

@Override
public boolean rawTouch(View drawView, MotionEvent event) {
    //I want to implement the touch events in the screen space rather than scene space.
    //This does pinch to zoom and pan.
    float cx1 = event.getX();
    float cy1 = event.getY();
    float cx2 = Float.NaN, cy2 = Float.NaN;
    float px = cx1;
    float py = cy1;
    if (event.getPointerCount() >= 2) {
        cx2 = event.getX(1);
        cy2 = event.getY(1);
        px = (cx1 + cx2) / 2;
        py = (cy1 + cy2) / 2;
    }
    switch (event.getActionMasked()) {
        case MotionEvent.ACTION_MOVE:
            float deltascale = (float) (distance(cx1, cy1, cx2, cy2) / distance(dx1, dy1, dx2, dy2));
            float dpx = px - dcx;
            float dpy = py - dcy;
            if (!Float.isNaN(dpx)) pan(dpx, dpy);
            if (!Float.isNaN(deltascale)) scale(deltascale, px, py);
            view.invalidate();
            break;
        default:
            cx1 = Float.NaN;
            cy1 = Float.NaN;
            cx2 = Float.NaN;
            cy2 = Float.NaN;
            px = Float.NaN;
            py = Float.NaN;
            break;
    }
    dx1 = cx1;
    dy1 = cy1;
    dx2 = cx2;
    dy2 = cy2;
    dcx = px;
    dcy = py;
    return true;
}

@Override
public boolean touch(View drawView, MotionEvent event) {
    //if I wanted to deal with the touch event in scene space.
    return false;
}

public static double distance(float x0, float y0, float x1, float y1) {
    return Math.sqrt(distanceSq(x0, y0, x1, y1));
}

public static float distanceSq(float x0, float y0, float x1, float y1) {
    float dx = x1 - x0;
    float dy = y1 - y0;
    dx *= dx;
    dy *= dy;
    return dx + dy;
}

public void scale(double deltascale, float x, float y) {
    viewMatrix.postScale((float) deltascale, (float) deltascale, x, y);
    calculateViewPortFromMatrix();
}

public void pan(float dx, float dy) {
    viewMatrix.postTranslate(dx, dy);
    calculateViewPortFromMatrix();
} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38416250', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5105806/']} | jdg_380453 |
stackexchange | llm_judgeable_groundtruth_similarity | 1874870 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In our class today the instructor said that "Let $X$ be an ordered set. If any $Y \subset X$ is bounded above, then $X$ is said to have the lub property if $\sup(Y)$ exists in $X$." The lub property states that every non-empty subset of real numbers that is bounded above has a lub. I cannot relate these two things. Can someone explain this? Thanks
Now provide the response and nothing else.
| Your instructor is wrong. He/she said: (WRONG) Take any total order $X$, and any $Y⊂X$ that is bounded above in $X$. Then we say that $X$ has the lub property iff $Y$ has a supremum in $X$. But as Dave said in a comment, the correct definition of the property is: Take any total order $X$. We say that $X$ has the lub property iff every nonempty $Y⊂X$ that is bounded above in $X$ has a supremum in $X$. $\def\eq{\leftrightarrow}$ Notice the swap, and that your instructor missed out the "non-empty" condition! The first is of the form: $\forall Y ( IsBoundedSubset(Y,X) \to ( LUB(X) \eq HasSupIn(Y,X) ) )$. Whereas the second is of the form: $LUB(X) \eq \forall Y ( IsNonemptyBoundedSubset(Y,X) \to HasSupIn(Y,X) )$. They are not equivalent . As for what it means, we can see how the correct definition applies to examples. The integers $\mathbb{Z}$ with the usual ordering has the LUB property because any non-empty set $S$ of integers with an integer $m$ for an upper bound will have a maximum integer (since $\{ m-x : x \in S \}$ is a set of natural numbers and has a minimum). The rationals $\mathbb{Q}$ with the usual ordering does not have the LUB property because the set $S = \{ x : x \in \mathbb{Q} \land x^2 < 2 \}$ is non-empty and has an upper bound of $2$ in $\mathbb{Q}$ but no supremum (lowest upper bound) in $\mathbb{Q}$. Why? Suppose there is such a supremum $c$ for $S$ in $\mathbb{Q}$. If $c < \sqrt{2}$, then let $d \in \mathbb{Q}$ such that $c < d < \sqrt{2}$ (by density of $\mathbb{Q}$ in $\mathbb{R}$), and so $d \in S$ contradicting the definition of $c$. Similarly if $c > \sqrt{2}$ then let $d \in \mathbb{Q}$ such that $\sqrt{2} < d < c$, and so $d$ is an upper bound for $S$ in $\mathbb{Q}$, contradicting the definition of $c$. Therefore $c = \sqrt{2}$, which is impossible since $c$ is rational. The reals $\mathbb{R}$ with the usual ordering has the LUB property, which is probably the first example you've seen. The interval $(0,1)$ with the usual ordering has the LUB property. Why? Take any non-empty $S \subseteq (0,1)$ that is bounded above in $(0,1)$. Let $m \in (0,1)$ be an upper bound for $S$ in $(0,1)$. Let $c = \sup_\mathbb{R}(S)$, which exists because $\mathbb{R}$ has the LUB property. Then $c \le m$ by definition of $c$. Also $c > 0$ because $S$ is non-empty and so has a positive element. Thus $c \in (0,1)$. Also any upper bound for $S$ in $(0,1)$ is also an upper bound for $S$ in $\mathbb{R}$. Hence $c$ is the lowest upper bound for $S$ in $(0,1)$. Therefore, since this holds for any given such $S$, we conclude that $(0,1)$ has the LUB property. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1874870', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/301858/']} | jdg_380454 |
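A small worked example of why "bounded above in $X$" matters in case 4 (my addition): take $S=\{1-\frac{1}{n} : n\in\mathbb{N}\}\subset(0,1)$. In $\mathbb{R}$ this set is bounded above with $\sup_{\mathbb{R}} S=1\notin(0,1)$, which might look like a counterexample to the LUB property of $(0,1)$. It is not: an upper bound $m$ for $S$ inside $(0,1)$ would have to satisfy $m\ge 1-\frac{1}{n}$ for all $n$, forcing $m\ge 1$, so $S$ is simply not bounded above in $(0,1)$ and the LUB property makes no demand on it.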
stackexchange | llm_judgeable_groundtruth_similarity | 41252442 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dataset:

            367235  419895  992194
1999-01-11       8       5       1
1999-03-23     NaN       4     NaN
1999-04-30     NaN     NaN       1
1999-06-02     NaN       9     NaN
1999-08-08       2     NaN     NaN
1999-08-12     NaN       3     NaN
1999-08-17     NaN     NaN      10
1999-10-22     NaN       3     NaN
1999-12-04     NaN     NaN       4
2000-03-04       2     NaN     NaN
2000-09-29       9     NaN     NaN
2000-09-30       9     NaN     NaN

When I plot it, using plt.plot(df, '-o'), I get this: But what I would like is for the datapoints from each column to be connected in a line, like so: I understand that matplotlib does not connect datapoints that are separated by NaN values. I looked at all the options here for dealing with missing data, but all of them would essentially misrepresent the data in the dataframe. This is because each value within the dataframe represents an incident; if I try to replace the NaNs with scalar values or use the interpolate option, I get a bunch of points that are not actually in my dataset. Here's what interpolate looks like:

df_wanted2 = df.apply(pd.Series.interpolate)

If I try to use dropna I'll lose entire rows/columns from the dataframe, and these rows hold valuable data. Does anyone know a way to connect up my dots? I suspect I need to extract individual arrays from the dataframe and plot them, as is the advice given here , but this seems like a lot of work (and my actual dataframe is much bigger.) Does anyone have a solution?
Now provide the response and nothing else.
| Use the interpolate method with parameter 'index':

df.interpolate('index').plot(marker='o')

Alternative answer: plot after iteritems

for _, c in df.iteritems():
    c.dropna().plot(marker='o')

Extra credit: only interpolate from the first valid index to the last valid index for each column

for _, c in df.iteritems():
    fi, li = c.first_valid_index(), c.last_valid_index()
    c.loc[fi:li].interpolate('index').plot(marker='o') | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41252442', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3411757/']} | jdg_380455 |
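For convenience, a self-contained version of the accepted approach (my assembly of the question's data and the answer's one-liner; assumes pandas and matplotlib):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

idx = pd.to_datetime([
    "1999-01-11", "1999-03-23", "1999-04-30", "1999-06-02", "1999-08-08",
    "1999-08-12", "1999-08-17", "1999-10-22", "1999-12-04", "2000-03-04",
    "2000-09-29", "2000-09-30",
])
df = pd.DataFrame({
    "367235": [8, np.nan, np.nan, np.nan, 2, np.nan, np.nan, np.nan, np.nan, 2, 9, 9],
    "419895": [5, 4, np.nan, 9, np.nan, 3, np.nan, 3, np.nan, np.nan, np.nan, np.nan],
    "992194": [1, np.nan, 1, np.nan, np.nan, np.nan, 10, np.nan, 4, np.nan, np.nan, np.nan],
}, index=idx)

# method='index' interpolates linearly against the time axis; note it also
# draws markers at the filled points, which is why the per-column dropna()
# variant above may be preferable.
df.interpolate("index").plot(marker="o")
plt.show()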
stackexchange | llm_judgeable_groundtruth_similarity | 228544 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I want to check for the null string I would do [ -z $mystr ] but what if I want to check whether the variable has been defined at all? Or is there no distinction in Bash scripting?
Now provide the response and nothing else.
| I think the answer you are after is implied (if not stated) by Vinko 's answer , though it is not spelled out simply. To distinguish whether VAR is set but empty or not set, you can use:

if [ -z "${VAR+xxx}" ]; then echo "VAR is not set at all"; fi
if [ -z "$VAR" ] && [ "${VAR+xxx}" = "xxx" ]; then echo "VAR is set but empty"; fi

You probably can combine the two tests on the second line into one with:

if [ -z "$VAR" -a "${VAR+xxx}" = "xxx" ]; then echo "VAR is set but empty"; fi

However, if you read the documentation for Autoconf, you'll find that they do not recommend combining terms with ' -a ' and do recommend using separate simple tests combined with && . I've not encountered a system where there is a problem; that doesn't mean they didn't used to exist (but they are probably extremely rare these days, even if they weren't as rare in the distant past). You can find the details of these and other related shell parameter expansions, the test or [ command, and conditional expressions in the Bash manual. I was recently asked by email about this answer with the question: You use two tests, and I understand the second one well, but not the first one. More precisely I don't understand the need for variable expansion

if [ -z "${VAR+xxx}" ]; then echo "VAR is not set at all"; fi

Wouldn't this accomplish the same?

if [ -z "${VAR}" ]; then echo "VAR is not set at all"; fi

Fair question - the answer is 'No, your simpler alternative does not do the same thing'. Suppose I write this before your test: VAR= Your test will say "VAR is not set at all", but mine will say (by implication, because it echoes nothing) "VAR is set but its value might be empty". Try this script:

(
unset VAR
if [ -z "${VAR+xxx}" ]; then echo "JL:1 VAR is not set at all"; fi
if [ -z "${VAR}" ]; then echo "MP:1 VAR is not set at all"; fi
VAR=
if [ -z "${VAR+xxx}" ]; then echo "JL:2 VAR is not set at all"; fi
if [ -z "${VAR}" ]; then echo "MP:2 VAR is not set at all"; fi
)

The output is:

JL:1 VAR is not set at all
MP:1 VAR is not set at all
MP:2 VAR is not set at all

In the second pair of tests, the variable is set, but it is set to the empty value. This is the distinction that the ${VAR=value} and ${VAR:=value} notations make. Ditto for ${VAR-value} and ${VAR:-value} , and ${VAR+value} and ${VAR:+value} , and so on. As Gili points out in his answer , if you run bash with the set -o nounset option, then the basic answer above fails with unbound variable . It is easily remedied:

if [ -z "${VAR+xxx}" ]; then echo "VAR is not set at all"; fi
if [ -z "${VAR-}" ] && [ "${VAR+xxx}" = "xxx" ]; then echo "VAR is set but empty"; fi

Or you could cancel the set -o nounset option with set +u ( set -u being equivalent to set -o nounset ). | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/228544', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/30636/']} | jdg_380456 |
stackexchange | llm_judgeable_groundtruth_similarity | 35680 |
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following pole zero plot (in the Z-domain) of a digital filter, where its poles lie far away from the unit circle, with its corresponding frequency response: On the other hand I have this one: Could someone explain to me why exactly the poles give such a change in Q-factor? I expected that the closer a pole is to the unit circle, the lower the Q-factor would be. EDIT: a pole closer to the unit circle will on one hand mean a "stronger" presence of a certain frequency: it won't be decaying exponentially. But on the other hand a pole on the circle would mean a marginally stable system. In other words, this means that having a frequency which is perfectly present results in a more or less unstable system. Isn't that contradictory? Thank you
Now provide the response and nothing else.
| Suppose some $H(q)=\frac{A(q)}{B(q)}$ where q is some complex variable and $A,B$ are functions of $q$. Whether in the s or z planes, to evaluate the magnitude of $H(q)$ at some $q$, you evaluate and multiply together the distances from $q$ to the locations of the zeroes (which gives the magnitude of $A$), and similarly for the poles (the magnitude of $B$), and create the fraction above for that specific $q$. When does $H(q)$ become big? When either its numerator is big... OR its denominator is very small. In fact, if the denominator was ever to become exactly zero, then $H(q)$ would "shoot to infinity". It would acquire a very large value. When does $H(q)$ become small? When either its numerator is small... OR its denominator is very big. The smallest value $H(q)$ can acquire is zero. ...why exactly the poles give such a change in Q-factor? Given the location of the poles in your diagrams, it is useful to see what happens as your $q$ (or a particular $z$, in the case of the z-plane) approaches the bearing of the pole. In other words, as the $z$ changes frequency and walks around the z-plane, it approaches the location of the pole. EXACTLY AT the bearing of the pole, the distance to the zero is maximum. BUT! the distance to the pole is at its minimum. Therefore, the denominator becomes small and the response "shoots to infinity". When the pole is closer to the center of the circle, its distance to the particular $z$ value is large. The closer the pole moves to the edge of the circle, the smaller its distance to a particular $z$ will be, and the higher the output of $H$ will be. To explain the "peakiness" (or "...Q-factor...") you have to look at the rate by which the distance to the pole is diminishing as $z$ approaches the bearing of the pole. ...a pole closer to the unit circle will on one hand mean a "stronger" presence of a certain frequency: it won't be decaying exponentially. But on the other hand a pole on the circle would mean a marginally stable system. In other words, this means that having a frequency which is perfectly present results in a more or less unstable system. Isn't that contradictory? When examining the response of a system via the z-transform, the "presence of a frequency" means that the system resonates at that frequency. At extreme conditions, the system is so marginally stable that if you happen to excite that frequency even by a tiny little amount, it will go into oscillations that might take a very long time to die out. In other words, the system will "ring". A prime example of this are 1-pole resonators. These are bandpass filters with very high Q-factor. If you pass audio through such a resonator, you will hear the filter ringing as it effectively resonates at its own frequency. ...How should I modify my pole zero plot if I'd like to render undershoot instead of overshoot like here? If you are asking how to generate a notch filter instead of a resonator (i.e. a filter that "cuts" one specific frequency instead of boosting it), then you can simply subtract the output of the 1-pole resonator from the rest of the signal, OR you could try to create a "well": a region of the z-plane where the conditions are such, for your $H$, that it dips to zero at a specific bearing to a particular $z$. Have a ponder about how that looks, and if you cannot "see it", have a look at this link . Hope this helps. | {} | {'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/35680', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/24929/']} | jdg_380457 |
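To see the answer's geometric argument numerically, here is a small sketch (my addition) sweeping a conjugate pole pair toward the unit circle; the peak at the pole bearing sharpens as the pole-to-circle distance $1-r$ shrinks:

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import freqz

theta = np.pi / 4                  # bearing of the pole pair
for r in (0.5, 0.9, 0.99):         # pole radius, approaching the unit circle
    # denominator polynomial with poles at r*exp(+/- j*theta)
    a = np.poly([r * np.exp(1j * theta), r * np.exp(-1j * theta)]).real
    w, h = freqz([1.0], a, worN=2048)
    plt.plot(w, 20 * np.log10(np.abs(h)), label=f"r = {r}")
plt.xlabel("frequency (rad/sample)")
plt.ylabel("|H| (dB)")
plt.legend()
plt.show()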
stackexchange | llm_judgeable_groundtruth_similarity | 3256926 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I have an object named "foo" with another object named "bar" as a property. When "foo" deallocates, will it automatically remove all references to "bar" so that "bar" deallocates as well? Or will "foo" deallocate and "bar" float in memory somewhere, even if all of "bar"'s references are defined in "foo"? Thanks in advance.
Now provide the response and nothing else.
| If the foo object has any retains on or copies of (thanks Dave) bar , for example when you declare the property as either one of these:

@property (nonatomic, retain) NSString *bar;
// Or
@property (nonatomic, copy) NSString *bar;

You'll need to release bar when you deallocate foo :

- (void)dealloc
{
    [bar release];
    [super dealloc];
}

The system won't free bar 's memory space for you until you get rid of all references to it (i.e. its reference count goes down to 0), so you'll have to monitor your reference counts and objects yourself. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3256926', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/387985/']} | jdg_380458 |
stackexchange | llm_judgeable_groundtruth_similarity | 8686168 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to reverse a linked list. This is the code I have come up with:

public static void Reverse(ref Node root)
{
    Node tmp = root;
    Node nroot = null;
    Node prev = null;
    while (tmp != null)
    {
        //Make a new node and copy tmp
        nroot = new Node();
        nroot.data = tmp.data;
        nroot.next = prev;
        prev = nroot;
        tmp = tmp.next;
    }
    root = nroot;
}

It is working well. I was wondering if it is possible to avoid creating new nodes. I would like to have suggestions on this.
Now provide the response and nothing else.
| // Iteratively reverse the list in place, re-pointing one link at a time:
Node p = root, n = null;      // p walks the original list, n is the reversed head so far
while (p != null) {
    Node tmp = p.next;        // remember the rest of the list
    p.next = n;               // reverse this node's link
    n = p;                    // the reversed list now starts at p
    p = tmp;                  // step into the rest of the list
}
root = n; | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8686168', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/176418/']} | jdg_380459 |
stackexchange | llm_judgeable_groundtruth_similarity | 33363972 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to convert Results<T> to List<T> or shouldn't I do this? In my case I have method that has List as a parameter. I want to call this method with fetched objects ( Results<T> ) and with computed objects ( List<T> )
Now provide the response and nothing else.
| Results implements the CollectionType protocol, so you could use reduce to convert it:
let results: Results<MyObject> = ...
let converted = results.reduce(List<MyObject>()) { (list, element) -> List<MyObject> in
    list.append(element)
    return list
}
You could put this code in an extension or wherever you like. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33363972', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/997652/']} | jdg_380460 |
stackexchange | llm_judgeable_groundtruth_similarity | 116391 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In our IT Security class we have been told that you need CAs to prevent attacks on a digital signature. Sadly, our lecturer didn't elaborate on how such an attack would be performed - I can only guess that someone would try a MitM: Alice wants to authenticate towards Bob, so she writes him a message and signs it with her private key. She also appends her public key. Eve intercepts the message and verifies the message with Alice's public key, then signs the message with her own private key and forwards it to Bob, appending her own public key instead of Alice's. Bob receives the message assuming it's from Alice, verifies it with the appended public key (from Eve) and is now sure he is communicating with Alice. So if Bob had just looked up Alice's public key at some CA, he would have known that the appended key was wrong. Is that the scenario people are generally referring to when it comes to why you need CAs?
Now provide the response and nothing else.
| A digital signature, like all cryptographic algorithms, does not solve problems, it just moves them around. Take care that signatures are NOT encryption. If someone tried to explain signatures as a kind of encryption, then go find them and hit them in the teeth with a wrench, repeatedly. Tell them that they are unworthy, and I am disappointed with them. This flawed explanation does not work, never worked, and spreads only confusion. In a signature system, there is a message m, a public key k_p, a private key k_s, and a signature s. The signature generation algorithm computes s from m and k_s. The signature verification algorithm takes m, s and k_p, and returns either "true" or "false". When it returns "true", what this means is that whoever owns the public key (i.e. knows the corresponding private key) was involved in the generation of signature s on the specific message m. The important point is in the key ownership: the signature verification algorithm does not tell you "yep, this is signed by Bob"; it only tells you "yep, this is signed by whoever owns that public key". This guarantees that the signer is really Bob only if you have a way to make sure that the public key you see is really Bob's public key. If Bob simply sent his public key along with the signed message, it would be easy to impersonate Bob by simply saying "hello, I am Bob, here is my public key, and here is my signed message". It would prove nothing at all. The attack here is simple, it is called "lying". While signatures are useful (indeed, they reduced the problem of verifying the provenance of several messages to the problem of associating a single public key with its owner), they don't magically guarantee ownership out of thin air. This is where Certification Authorities come into play. The CA is an organization whose job is to make sure that Bob really owns his alleged public key; presumably, the CA does that by meeting Bob in person, or some other mechanism of that kind. When the CA has duly verified Bob's ownership of his key, the CA puts Bob's identity (his name) and his public key in a certificate. The CA then signs the certificate. Alice's machine can then verify the signature on the certificate, thereby making sure that the certificate contents are really from the expected CA. At which point Alice has some guarantee that the public key she sees in the certificate is really Bob's key, and she can use it to verify signatures which have purportedly been computed by Bob. At this point you should say: "But how can Alice verify the signature on the certificate by the CA? She would have to know the CA public key!" And, indeed, the problem has again been moved, to the question of CA key ownership. We can move it yet another time, with another CA. And so on. But it must stop somewhere. That "somewhere" is the root CA. A root CA is a CA whose public key you already know, absolutely. That's the magic part. In practice, your computer comes with an operating system that already includes the public keys of a hundred or so root CAs, which made a deal with Microsoft to the effect that their public keys are inherently known (and trusted) by all Windows systems. | {} | {'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/116391', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/90118/']} | jdg_380461 |
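To make the sign/verify interface above concrete (s computed from m and k_s; verification takes m, s, k_p), here is a minimal sketch using Python's cryptography package with RSA-PSS. This is my illustration, not something from the answer; the key size and padding parameters are arbitrary illustrative choices.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Bob's key pair: k_s stays private, k_p is what a CA would certify.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"hello, I am Bob"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Signature generation: s = Sign(m, k_s)
signature = private_key.sign(message, pss, hashes.SHA256())

# Signature verification: Verify(m, s, k_p); raises InvalidSignature on failure
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("valid: signed by whoever owns this public key")
except InvalidSignature:
    print("invalid signature")

Note that the verification only ties s to the key pair; binding k_p to the person "Bob" is still the CA's job, exactly as described above.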
stackexchange | llm_judgeable_groundtruth_similarity | 62502298 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm building a Table of Contents section on my website using tocbot , but I'm having an issue making the Title I added above it stick to the table of contents when I scroll. Here's what I've got so far: <div id="desktop-toc" class="tiktoc"> <p class="toc-title">Content Navigation</p> <aside class="toc-container js-sticky"> <div class="toc"></div> </aside></div> CSS /* Start Table of Contents styling */ .toc-container { display: -webkit-box; display: flex; -webkit-box-align: right; align-items: left; } /* Extra small devices (phones, 600px and down) */ @media only screen and (max-width: 600px), only screen and (min-width: 600px) /* Medium devices (landscape tablets, 768px and up) */ @media(min-width: 768px) and (orientation:landscape), only screen and (max-width: 768px) { .tiktoc { display: none; } .toc-container { display: none; } } /* Extra large devices (large laptops and desktops, 1200px and up) */ @media only screen and (min-width: 1200px) { .tiktoc { position: absolute; top: 175px; left: 1150px; bottom: 0; width: 350px; margin-bottom: 0; } .toc { font-size: 16px; line-height: 1.6em; padding: 20px; min-width: 250px; max-width: 300px; background-color: #212121; color: #fff; } .toc-title { font-size: 22px; font-weight: bolder; padding-bottom: 10px; position: sticky; text-align: left; color: var(--primary-foreground-color); } .is-active-link::before { background-color: #FE0152; } } I've tried placing the title inside the div with the "js-sticky" class, but it formats them inline, instead of on top of the TOC section.
Now provide the response and nothing else.
| If you have files that large, never use byte[] or MemoryStream in your code. Only operate on streams if you download/upload files. You have a couple of options: If you control both client and server, consider using something like tus . There are both client- and server-implementations for .NET. This would probably be the easiest and most robust option. If you upload large files with the HttpClient, simply use the StreamContent class to send them. Again, don't use a MemoryStream as source, but something else like a FileStream . If you download large files with the HttpClient, it is important to specify the HttpCompletionOption, for example var response = await httpClient.SendAsync(httpRequest, HttpCompletionOption.ResponseHeadersRead) . Otherwise, the HttpClient would buffer the entire response in memory. You can then process the response file as a stream via var stream = await response.Content.ReadAsStreamAsync() . ASP.NET Core specific advice: If you want to receive files via HTTP POST, you need to increase the request size limit: [RequestSizeLimit(10L * 1024L * 1024L * 1024L)] and [RequestFormLimits(MultipartBodyLengthLimit = 10L * 1024L * 1024L * 1024L)] . In addition, you need to disable the form value binding, otherwise the whole request will be buffered into memory:
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class DisableFormValueModelBindingAttribute : Attribute, IResourceFilter
{
    public void OnResourceExecuting(ResourceExecutingContext context)
    {
        var factories = context.ValueProviderFactories;
        factories.RemoveType<FormValueProviderFactory>();
        factories.RemoveType<FormFileValueProviderFactory>();
        factories.RemoveType<JQueryFormValueProviderFactory>();
    }

    public void OnResourceExecuted(ResourceExecutedContext context) { }
}
To return a file from a controller, simply return it via the File method, which accepts a stream: return File(stream, mimeType, fileName); A sample controller would look like this (see https://learn.microsoft.com/en-us/aspnet/core/mvc/models/file-uploads?view=aspnetcore-3.1 for the missing helper classes):
private const long MaxFileSize = 10L * 1024L * 1024L * 1024L; // 10 GB, adjust to your needs

[DisableFormValueModelBinding]
[RequestSizeLimit(MaxFileSize)]
[RequestFormLimits(MultipartBodyLengthLimit = MaxFileSize)]
public async Task ReceiveFile()
{
    if (!MultipartRequestHelper.IsMultipartContentType(Request.ContentType))
        throw new BadRequestException("Not a multipart request");

    var boundary = MultipartRequestHelper.GetBoundary(MediaTypeHeaderValue.Parse(Request.ContentType));
    var reader = new MultipartReader(boundary, Request.Body);

    // note: this is for a single file, you could also process multiple files
    var section = await reader.ReadNextSectionAsync();
    if (section == null)
        throw new BadRequestException("No sections in multipart defined");

    if (!ContentDispositionHeaderValue.TryParse(section.ContentDisposition, out var contentDisposition))
        throw new BadRequestException("No content disposition in multipart defined");

    var fileName = contentDisposition.FileNameStar.ToString();
    if (string.IsNullOrEmpty(fileName))
    {
        fileName = contentDisposition.FileName.ToString();
    }
    if (string.IsNullOrEmpty(fileName))
        throw new BadRequestException("No filename defined.");

    using var fileStream = section.Body;
    await SendFileSomewhere(fileStream);
}

// This should probably not be inside the controller class
private async Task SendFileSomewhere(Stream stream)
{
    using var request = new HttpRequestMessage()
    {
        Method = HttpMethod.Post,
        RequestUri = new
Uri("YOUR_DESTINATION_URI"), Content = new StreamContent(stream), }; using var response = await _httpClient.SendAsync(request); // TODO check response status etc.} In this example, we stream the entire file to another service. In some cases, it would be better to save the file temporarily to the disk. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/62502298', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11225888/']} | jdg_380462 |
stackexchange | llm_judgeable_groundtruth_similarity | 1364360 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to test PayPal subscription IPNs, both the ones received when a subscription is created, and the ones sent later with the next payment (such as monthly if the subscription is $x per month). However, I'd prefer not to wait a month or a day to receive the second IPN. Is there a way to have an IPN sent more quickly, such as hourly, using PayPal or their sandbox? The documentation says you can only specify years, months, days, and weeks as the subscription period.
Now provide the response and nothing else.
| PayPal's developer support and documentation is an embarrassment to them. But this particular limitation isn't as debilitating as it seems at first blush. For testing, define your recurring payment to not have a free trial. When you create a new subscription, your server will receive two IPN messages in quick succession, one to create the subscription and the second to apply a payment. That's basically all you need to test. If you have a free trial, you'll get basically the same pair of messages, just with a trial period between them. :) The first message ("create subscription") will look something like this. Note the 'txn_type' -- that's the key bit of information for disambiguating the two messages: { "txn_type"=>"subscr_signup", "subscr_id"=>"unique_id", "verify_sign"=>"random_gibberish", "item_number"=>"your_subscription_name" "subscr_date"=>"14:32:23 Feb 15, 2010 PST", "btn_id"=>"1111111", "item_name"=>"Your Subscription Description", "recurring"=>"1", "period1"=>"1 M", # This example is from a "free trial" IPN notification-- if you don't have a # free trial defined, there will only be 'period1' fields, and they'll # have the data that appears here in the 'period3' fields. "amount1"=>"0.00", "mc_amount1"=>"0.00", "period3"=>"1 M", "amount3"=>"34.95", "mc_amount3"=>"34.95", "mc_currency"=>"USD", "payer_status"=>"verified", "payer_id"=>"payer_unique_id", "first_name"=>"Test", "last_name"=>"User", "payer_email"=>"[email protected]", "residence_country"=>"US", "business"=>"[email protected]", "receiver_email"=>"[email protected]", "reattempt"=>"1", "charset"=>"windows-1252","notify_version"=>"2.9","test_ipn"=>"1",} The second message is the more interesting one in this case. It will essentially be the exact same message you'll get later when the recurring payment is applied. It looks something like this: { "txn_type"=>"subscr_payment", "subscr_id"=>"unique_id", "verify_sign"=>"random_gibberish", "txn_id"=>"payment_unique_id", "payment_status"=>"Completed", "payment_date"=>"12:45:33 Feb 16, 2010 PST", "item_number"=>"your_subscription_name" "subscr_date"=>"14:32:23 Feb 15, 2010 PST", "custom"=>"data-you-sent-in-a-custom-field", "id"=>"1", "payment_gross"=>"34.95", "mc_currency"=>"USD", "payment_type"=>"instant", "payment_fee"=>"1.31", "payer_status"=>"verified", "mc_fee"=>"1.31", "mc_gross"=>"34.95", "btn_id"=>"1111111", "payer_id"=>"payer_unique_id", "first_name"=>"Test", "last_name"=>"User", "payer_email"=>"[email protected]", "residence_country"=>"US", "receiver_id"=>"your_merchant_id", "business"=>"[email protected]", "receiver_email"=>"[email protected]", "protection_eligibility"=>"Ineligible", "transaction_subject"=>"", "charset"=>"windows-1252","notify_version"=>"2.9","test_ipn"=>"1",} So you can do almost all of your testing without waiting a day. By the time you think you've got it nailed down, you'll be receiving lots of subscription IPN messages the next day. In addition, here is a link to PayPal's documentation for further reference. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1364360', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49153/']} | jdg_380463 |
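For local testing of those two messages, an IPN listener only needs to echo the raw POST back to PayPal with cmd=_notify-validate prepended and check for a VERIFIED response. Below is a rough Flask sketch of that handshake (my own illustration based on PayPal's documented IPN validation protocol, not code from the answer; the sandbox endpoint and handler names are assumptions).

from flask import Flask, request
import requests

app = Flask(__name__)

# Sandbox validation endpoint; swap in the live endpoint for production.
VERIFY_URL = "https://ipnpb.sandbox.paypal.com/cgi-bin/webscr"

@app.route("/ipn", methods=["POST"])
def ipn_listener():
    # Echo the exact message back, prefixed with cmd=_notify-validate.
    payload = "cmd=_notify-validate&" + request.get_data(as_text=True)
    resp = requests.post(
        VERIFY_URL,
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    if resp.text == "VERIFIED":
        txn_type = request.form.get("txn_type")
        if txn_type == "subscr_signup":
            pass  # record the new subscription
        elif txn_type == "subscr_payment":
            pass  # apply the payment
    return "", 200  # always return 200 so PayPal stops retrying this delivery

The txn_type dispatch mirrors the two messages shown above: one subscr_signup on creation, then a subscr_payment for the initial (and every later) payment.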
stackexchange | llm_judgeable_groundtruth_similarity | 13132447 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
select e.last_name, e.hire_datefrom employees e join employees mon (m.last_name = 'Davies')and (e.hire_date > m.hire_date);select e.last_name, e.hire_datefrom employees e join employees mon (m.last_name = 'Davies')where (e.hire_date > m.hire_date);select e.last_name, e.hire_datefrom employees e join employees mon (e.hire_date > m.hire_date)where (m.last_name = 'Davies'); These three statements have the same result. Apart from the fact that where cannot be used exclusively, without using on , is there any particular reason to use where at all in table joins?
Now provide the response and nothing else.
| The main difference appears when you are using different joins. Typically you should see the same result if you were to use inner joins, but once you start using LEFT joins the results will change. Have a look at the following example SQL Fiddle DEMO. And have a look at the following article (very explanatory). EDIT for @ShannonSeverance: Schema and Test data
CREATE TABLE Table1 ( ID INT, Val VARCHAR(20) );
INSERT INTO Table1 VALUES (1,'a');
INSERT INTO Table1 VALUES (2,'a');
CREATE TABLE Table2 ( ID INT, Val VARCHAR(20) );
INSERT INTO Table2 VALUES (1,'a');
and Tests
SELECT t1.ID, t1.Val, t2.ID ID2, t2.Val Val2
FROM Table1 t1
INNER JOIN Table2 t2 ON t1.ID = t2.ID AND t1.Val = t2.Val;

SELECT t1.ID, t1.Val, t2.ID ID2, t2.Val Val2
FROM Table1 t1, Table2 t2
WHERE t1.ID = t2.ID AND t1.Val = t2.Val;

SELECT t1.ID, t1.Val, t2.ID ID2, t2.Val Val2
FROM Table1 t1
LEFT JOIN Table2 t2 ON t1.ID = t2.ID AND t1.Val = t2.Val;

SELECT t1.ID, t1.Val, t2.ID ID2, t2.Val Val2
FROM Table1 t1
LEFT JOIN Table2 t2 ON t1.ID = t2.ID
WHERE t1.Val = t2.Val; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13132447', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/732923/']} | jdg_380464 |
stackexchange | llm_judgeable_groundtruth_similarity | 70493 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Non-linear waves do not superimpose on each other, but why? What characteristics give this property?
Now provide the response and nothing else.
| Let's back up for a second. Before going into the complexities of nonlinear waves, let's ask what a linear wave is. Actually, let's go even further back and ask "what do we mean when we say linear?" "Linear" comes from the study of things like vector spaces. We have objects (call them vectors, or arrows, or whatever) that can be both added together and scaled by a number, with the result being another object of the same type. Any collection of objects that satisfies certain conditions (which basically boil down to "addition and scalar multiplication behave as expected") can be considered a vector space. Now let's talk about waves. But to keep things simple, let's just talk about the effect of some waves at a single point, where the effect can change in time. One wave might have a value $\psi_1(t) = \sin(\omega_1 t)$ at this point. Another might have a different frequency: $\psi_2(t) = \sin(\omega_2 t)$. Suppose we scale the waves by factors of $a$ and $b$, and suppose we have them both affect the point together. If the waves' effects just scale and add in the sensible way, then the value of the combined wave at the point will be $$ \psi_{(a\otimes1)\oplus(b\otimes2)}(t) \equiv a \sin(\omega_1 t) + b \sin(\omega_2 t) = a \psi_1(t) + b \psi_2(t). $$ Here I am using the symbol "$\otimes$" to mean "physically scaled by the preceding factor" and "$\oplus$" to mean "combined physically." In this particular case, $\otimes$ and $\oplus$ reduced to sensible scalar multiplication of the wave value and regular addition of the values of two waves. We call these "linear" waves. One of their characteristics is that you can think of the waves as noninteracting: $\psi_2$ will add its effect to the total in the same way, regardless of how much amplitude $\psi_1$ has already contributed. But I didn't have to have that structure. In some cases, driving a physical displacement with twice the force does not result in twice the displacement, and having two different driving forces work together does not result in a force that gives a displacement that is the sum of the independent displacements. For instance, perhaps the rule is $$ \psi_{(a\otimes1)\oplus(b\otimes2)}(t) \equiv \sqrt{a \sin(\omega_1 t) + b \sin(\omega_2 t)} \neq a \psi_1(t) + b \psi_2(t). $$ This then would be a nonlinear wave. They are defined by having the definition of how disturbances scale ($\otimes$) and combine ($\oplus$) be incompatible with scalar multiplication and regular addition of the waves' values. That is, our physical definitions of $\otimes$ and $\oplus$ did not yield the structure of a vector space - at least not in any obvious way. The physics question remaining then is whether or not this situation is ever actually realized. The above discussion defines nonlinear waves, but it does not prove any such things exist. As it turns out, though, many waves important to physics show nonlinear behavior if you push them far enough. The classic example in optics is when the amplitude of an electromagnetic wave is so great that electrons in nearby atoms (thinking classically here) are pushed and pulled quite far from the "sweet spot" distance they want to have from their nuclei. Then the restoring force that pushes them back to that sweet spot is not simply directly proportional to their displacement, their motion is anharmonic, and the wave becomes nonlinear.
| {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/70493', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/26135/']} | jdg_380465 |
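A quick numerical illustration of the superposition failure in the anharmonic example at the end (my sketch, with arbitrary toy values): take a static restoring force k*x + beta*x**3 and compare the response to two forces applied together against the sum of the individual responses.

from scipy.optimize import brentq

k, beta = 1.0, 0.5  # linear and cubic stiffness; illustrative values only

def displacement(force):
    # Static displacement x solving k*x + beta*x**3 = force (monotonic, so one root).
    return brentq(lambda x: k * x + beta * x**3 - force, -100.0, 100.0)

f1, f2 = 1.0, 2.0
separate = displacement(f1) + displacement(f2)  # naive "superposed" response
combined = displacement(f1 + f2)                # actual combined response

print(separate, combined)  # the two differ whenever beta != 0

Setting beta = 0 makes the two printed numbers agree, recovering the linear case where the effects simply add.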
stackexchange | llm_judgeable_groundtruth_similarity | 12321352 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Working in D3.js, I'd like to select all the elements that match a selector except for the current element. The reason is that I'd like to mouseover a circle, and have all the other circles with the same class turn light blue, but the current circle to stay the same shade. This is what I have currently: vis.selectAll('circle.prospect').on("mouseover", function(d) { console.log(d); d3.selectAll('circle.prospect').transition().style('opacity','0.5'); d3.select(this).attr('opacity','1.0'); }); In jQuery, I could do this using not . Anyone know the D3.js equivalent?
Now provide the response and nothing else.
| If your elements have unique CSS-accessible identifiers, you can use the :not() selector. Some potential examples:
d3.selectAll("circle.prospect:not(#" + this.id + ")");
d3.selectAll("circle.prospect:not(." + someUniqueClassFrom(d) + ")");
d3.selectAll("circle.prospect:not([uniqueAttr=" + this.getAttribute('uniqueAttr') + "])");
The reason d3.selectAll('circle.prospect:not(this)') doesn't work is that it's just literally saying to filter out any <this></this> elements, which is obviously not your intent; and since it's already selecting only <circle></circle> elements, it would have no effect regardless. Even if you don't generally apply some unique DOM attribute, there's no reason you couldn't set one temporarily:
vis.selectAll('circle.prospect').on("mouseover", function(d) {
    // Use an integer suffix: a raw Math.random() value contains a ".",
    // which would break the #id selector below.
    this.id = 'temp-' + Math.floor(Math.random() * 1e9);
    d3.selectAll('circle.prospect:not(#' + this.id + ')').transition().style('opacity','0.5');
    d3.select(this).attr('opacity','1.0');
    this.id = '';
});
That said, however, if your elements don't already have an ID assigned, I think Ian Roberts' solution is probably what I would do instead of this temporary identifier hack. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12321352', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/194000/']} | jdg_380466 |
stackexchange | llm_judgeable_groundtruth_similarity | 30365 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to bypass my regex and execute any JavaScript? <script> function json(a){ if (/^\s*$/.test(a) ? 0 : /^[\],:{}\s\u2028\u2029]*$/.test(a.replace(/\\["\\\/bfnrtu]/g, "@").replace(/"[^"\\\n\r\u2028\u2029\x00-\x08\x0a-\x1f]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, "]").replace(/(?:^|:|,)(?:[\s\u2028\u2029]*\[)+/g, ""))) try{ return eval("(" + a + ")") } catch (b) {} g(Error("Invalid JSON string: " + a)) } json(window.name); </script>
Now provide the response and nothing else.
| My immediate reaction to this was not positive, for a few reasons. Trying to use regex to parse complex language constructs is a bad idea. Regular expressions just aren't suitable for such constructs. Security through blacklisting is a bad idea because you will always be, by definition, one step behind the attackers. You should use a positive security model. There are a huge number of XSS filter evasion techniques that can be used on top of standard vectors. You cannot possibly detect and block them all. Javascript parsing of JSON via eval() is considered a security vulnerability. Modern browsers have support for proper native JSON parsing, via JSON.parse() and JSON.stringify(). If you have to support old browsers, there is a safe JSON library you can use that does not use eval() for decoding. All in all, your regex approach is over-engineered, insecure, misguided, and redundant. You're attempting to solve a problem that has already been solved. Don't be a Dave. Use the proper JSON parsing functions and libraries available to you. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/30365', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/20440/']} | jdg_380467 |
stackexchange | llm_judgeable_groundtruth_similarity | 62044084 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have created a model in OpenModelica that assembles state space matrices using an interpolation table. The matrices are created using a table that accepts one input and 39 outputs (13 eigenvalues, 13 system inputs and 13 system outputs). I can assemble the matrices just fine, but when assigning them to the State Space block I get the following error: [ZapuniInterpolation: 53:52-53:57]: Component A of variability parameter has binding 'ssA' of higher variability continuous. The model is given by the code below: block ZapuniInterpolation// ----------------------------------------------------------------------------- // Modelica.Blocks.Tables.CombiTable1Ds modelDef(table = [180.0, 0.0, 0.125639618, 0.122496368, 0.152595123, 1.30134654, 7.03518772, 10.1638193, 11.777113, 12.5243101, 21.683424, 27.5360508, 34.4655838, 34.1353836, -0.000156808334, 0.000567958435, -6.16513252e-06, 5.07716331e-05, 0.000295116516, -0.00226303029, 1.72715359e-05, -2.76345825e-05, -0.000384722595, -1.60172176e-05, 2.38217144e-05, 8.43528748e-06, 2.26263523e-05, -0.0392020416, -0.0412239838, 0.000444312775, -0.00735736036, 0.083243988, 0.00416324186, 0.00528137589, -0.012360487, 1.24845228e-05, -0.196934738, 6.53376694e-05, -7.1772345e-07, -0.047993618; 182.5, 0.0, 0.12338031, 0.121614441, 0.152516403, 1.28446889, 7.03372765, 10.160368, 11.7592869, 12.5243101, 24.4601326, 27.5334625, 34.4655838, 32.8034935, -0.000155028641, 0.000569061768, -1.06752777e-05, 4.43576736e-05, 0.000292540131, -0.00226327395, 1.87270737e-05, -2.94425335e-05, -0.000384721771, -1.95333481e-05, 2.38186722e-05, 8.43502998e-06, 2.18605156e-05, -0.038757122, -0.0399122047, 0.000745555359, -0.00661621618, 0.080380722, 0.00384176278, 0.00474613094, -0.010696907, 1.03946018e-05, -0.187355469, 0.000304825897, -3.04758026e-07, -0.0856405334; 185.0, 0.0, 0.121165268, 0.120708992, 0.152441833, 1.26705885, 7.03220701, 10.1565781, 11.7394562, 12.5243101, 26.2911625, 27.5308762, 34.4655838, 33.3992195, -0.000153284882, 0.000568685425, -4.0209774e-05, 3.89236832e-05, 0.000290123566, -0.00226349735, 2.0346014e-05, -3.14919395e-05, -0.000384720856, -2.75364189e-05, 2.39696522e-05, 8.43503952e-06, 1.46955872e-05, -0.0383211823, -0.0385091896, 0.00271969485, -0.00597921848, 0.0775424423, 0.00357709074, 0.00436138248, -0.00955755329, 8.97015572e-06, -0.137678406, 0.000970456665, -8.64766174e-07, -0.156308731; 187.5, 0.0, 0.118988342, 0.119786552, 0.152369507, 1.24918771, 7.0306282, 10.1524391, 11.7177658, 12.5243092, 26.049839, 27.5282478, 34.4655838, 37.3313332, -0.000151576111, 0.000570432922, 2.22414017e-05, 3.42602654e-05, 0.000287870667, -0.00226369929, 2.21327553e-05, -3.37703247e-05, -0.000384719849, -3.28333435e-05, 2.40614624e-05, 8.43494415e-06, 6.49287128e-06, -0.0378939934, -0.0372608452, -0.00145591199, -0.00542455482, 0.074719574, 0.00335672474, 0.00408232164, -0.00875357533, 7.95372009e-06, -0.080245285, 0.000683899231, 4.3715094e-07, -0.196699677; 190.0, 0.0, 0.116858894, 0.118837822, 0.152297958, 1.23091626, 7.02899122, 10.1479139, 11.6941319, 12.5243092, 24.9532948, 27.5255089, 34.4655838, 43.7087631, -0.000149901413, 0.000571410095, 8.67566872e-06, 3.02129364e-05, 0.000285783508, -0.00226387715, 2.41047344e-05, -3.62895241e-05, -0.000384718781, -3.62359619e-05, 2.40104408e-05, 8.43495083e-06, 3.14631987e-06, -0.0374753189, -0.0359707489, -0.000549157532, -0.0049361062, 0.0719031982, 0.00317143512, 0.00388165021, -0.00817939949, 7.20596743e-06, -0.0537428932, 0.000341564392, 1.62058945e-07, -0.2088992; 195.0, 0.0, 0.112727142, 0.116876053, 
0.152152176, 1.19337523, 7.02551937, 10.1374197, 11.639926, 12.5243092, 22.4547005, 27.5198441, 34.4655838, 63.5006485, -0.000146650772, 0.000572308289, 3.85471559e-06, 2.35291004e-05, 0.000282106537, -0.00226414824, 2.87340622e-05, -4.21810608e-05, -0.00038471637, -4.2829483e-05, 2.39550076e-05, 8.43493652e-06, 1.15650666e-06, -0.0366626587, -0.0333576317, -0.00022747081, -0.0041120615, 0.0662582474, 0.0028797617, 0.00365292263, -0.00749141979, 6.22078133e-06, -0.0335395622, 0.00014459668, 7.19030685e-08, -0.2181745; 200.0, 0.0, 0.10876281, 0.114828865, 0.151995438, 1.15478432, 7.02172279, 10.1242056, 11.5731678, 12.5243082, 20.1688595, 27.5138836, 34.4655838, 97.7778931, -0.000143526184, 0.000572750549, 2.4388926e-06, 1.82274666e-05, 0.000279082581, -0.00226427436, 3.46423569e-05, -4.95841827e-05, -0.000384713623, -5.03695107e-05, 2.39311123e-05, 8.4349184e-06, 1.23468935e-06, -0.0358815079, -0.030764555, -0.000133495087, -0.00343931985, 0.0605537033, 0.00266271996, 0.00360132694, -0.00721686459, 5.66029549e-06, -0.0255758495, 8.98340149e-05, 4.8260746e-08, -0.220523834]) annotation( Placement(visible = true, transformation(origin = { 0, 40}, extent = {{-10, -10}, {10, 10}}, rotation = 0)));// ----------------------------------------------------------------------------- // Real [13] omega2={modelDef.y[1], modelDef.y[2], modelDef.y[3], modelDef.y[4], modelDef.y[5], modelDef.y[6], modelDef.y[7], modelDef.y[8], modelDef.y[9], modelDef.y[10], modelDef.y[11], modelDef.y[12], modelDef.y[13]}; // ------ Real [13] Ba={modelDef.y[14], modelDef.y[15], modelDef.y[16], modelDef.y[17], modelDef.y[18], modelDef.y[19], modelDef.y[20], modelDef.y[21], modelDef.y[22], modelDef.y[23], modelDef.y[24], modelDef.y[25], modelDef.y[26]}; // ------ Real [13] Ca={modelDef.y[27], modelDef.y[28], modelDef.y[29], modelDef.y[30], modelDef.y[31], modelDef.y[32], modelDef.y[33], modelDef.y[34], modelDef.y[35], modelDef.y[36], modelDef.y[37], modelDef.y[38], modelDef.y[39]}; // ------ Real [13] damping={0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};// ----------------------------------------------------------------------------- // Real [26, 26] ssA = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1; - omega2[1], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, damping[1], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0; 0, - omega2[2], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, damping[2], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0; 0, 0, - omega2[3], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, damping[3], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0; 0, 0, 0, - omega2[4], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, damping[4], 0, 0, 0, 0, 0, 0, 0, 0, 0; 
0, 0, 0, 0, - omega2[5], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, damping[5], 0, 0, 0, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, - omega2[6], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, damping[6], 0, 0, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, - omega2[7], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, damping[7], 0, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, - omega2[8], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, damping[8], 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, - omega2[9], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, damping[9], 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, - omega2[10], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, damping[10], 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - omega2[11], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, damping[11], 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - omega2[12], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, damping[12], 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - omega2[13], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, damping[13] ];// ------ Real [26, 1] ssB = [0; 0; 0; 0; 0; 0; 0; 0; 0; 0; 0; 0; 0; Ba[1]; Ba[2]; Ba[3]; Ba[4]; Ba[5]; Ba[6]; Ba[7]; Ba[8]; Ba[9]; Ba[10]; Ba[11]; Ba[12]; Ba[13]]; // ------ Real [3, 26] ssC = [Ca[1], Ca[2], Ca[3], Ca[4], Ca[5], Ca[6], Ca[7], Ca[8], Ca[9], Ca[10], Ca[11], Ca[12], Ca[13], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0; Ba[1], Ba[2], Ba[3], Ba[4], Ba[5], Ba[6], Ba[7], Ba[8], Ba[9], Ba[10], Ba[11], Ba[12], Ba[13], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, Ba[1], Ba[2], Ba[3], Ba[4], Ba[5], Ba[6], Ba[7], Ba[8], Ba[9], Ba[10], Ba[11], Ba[12], Ba[13] ];// ------ Real [3, 1] ssD = [0; 0; 0];// ----------------------------------------------------------------------------- // Modelica.Blocks.Interfaces.RealInput tableInput annotation( Placement(visible = true, transformation(origin = {-60, 40}, extent = {{-20, -20}, {20, 20}}, rotation = 0), iconTransformation(origin = {-60, 40}, extent = {{-20, -20}, {20, 20}}, rotation = 0))); Modelica.Blocks.Continuous.StateSpace stateSpace(A=ssA, B=ssB, C=ssC, D=ssD, initType = Modelica.Blocks.Types.Init.NoInit) annotation( Placement(visible = true, transformation(origin = {0, 0}, extent = {{-10, -10}, {10, 10}}, rotation = 0))); Modelica.Blocks.Interfaces.RealOutput out1 annotation( Placement(visible = true, transformation(origin = {50, 20}, extent = {{-10, -10}, {10, 10}}, rotation = 0), iconTransformation(origin = {50, 20}, extent = {{-10, -10}, {10, 10}}, rotation = 0))); Modelica.Blocks.Interfaces.RealOutput out2 annotation( Placement(visible = true, transformation(origin = {50, 0}, extent = {{-10, -10}, {10, 10}}, rotation = 0), iconTransformation(origin = {50, 0}, extent = {{-10, -10}, {10, 10}}, rotation = 0))); Modelica.Blocks.Interfaces.RealOutput out3 annotation( Placement(visible = true, transformation(origin = {50, -20}, extent = {{-10, -10}, {10, 10}}, rotation = 0), iconTransformation(origin = {50, -20}, extent = {{-10, -10}, {10, 10}}, rotation = 0))); Modelica.Blocks.Interfaces.RealInput ssInput annotation( Placement(visible = true, transformation(origin = {-60, 0}, extent = {{-20, -20}, {20, 20}}, rotation = 0), iconTransformation(origin = {-60, 0}, extent = {{-20, -20}, {20, 20}}, rotation = 0)));equation// ----------------------------------------------------------------------------- // connect(modelDef.u, tableInput) annotation( Line(points = {{-12, 40}, {-60, 40}}, color = {0, 0, 127})); connect(stateSpace.y[1], out1) annotation( Line(points = {{12, 0}, {20, 0}, {20, 20}, {50, 20}, {50, 20}}, color = {0, 0, 127})); connect(stateSpace.y[2], out2) annotation( Line(points = {{12, 0}, {42, 0}, {42, 0}, {50, 0}}, color = {0, 0, 127})); 
connect(stateSpace.y[3], out3) annotation( Line(points = {{12, 0}, {20, 0}, {20, -20}, {50, -20}, {50, -20}}, color = {0, 0, 127})); connect(stateSpace.u[1], ssInput) annotation( Line(points = {{-12, 0}, {-60, 0}}, color = {0, 0, 127}));// ----------------------------------------------------------------------------- //annotation( uses(Modelica(version = "3.2.3")));end ZapuniInterpolation; Now I can obviously change all of the Real declarations to parameter Real , which solves the problem for the State-Space block. However then I cannot define the arrays omega2, Ba, Ca based on the interpolation table: the following error then occurs [ZapuniInterpolationTest: 6:3-6:229]: Component omega2 of variability parameter has binding '{modelDef.y[1], modelDef.y[2], modelDef.y[3], modelDef.y[4], modelDef.y[5], modelDef.y[6], modelDef.y[7], modelDef.y[8], modelDef.y[9], modelDef.y[10], modelDef.y[11], modelDef.y[12], modelDef.y[13]}' of higher variability continuous. So my question is, what can I do to define the matrices using the interpolation table and use them in the State-Space block?
Now provide the response and nothing else.
| You can cancel a build through the REST API: PATCH https://dev.azure.com/atbagga/atbagga/_apis/build/Builds/120
Request content: {'status': 'Cancelling'}
Here is an example:
steps:
- task: PowerShell@2
  name: ConditionalStep
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "I'm here"
      Write-Host ('$(SomeVariable)' -eq 'Stop')
      if ('$(SomeVariable)' -eq 'Stop') {
          $uri = "https://dev.azure.com/thecodemanual/DevOps Manual/_apis/build/builds/$(Build.BuildId)?api-version=5.1"
          $json = @{status="Cancelling"} | ConvertTo-Json -Compress
          $build = Invoke-RestMethod -Uri $uri -Method Patch -Headers @{Authorization = "Bearer $(System.AccessToken)"} -ContentType "application/json" -Body $json
          Write-Host $build
      }
      Write-Host "And now here!"
    pwsh: true
- pwsh: Start-Sleep -Seconds 60
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      $uri = "https://dev.azure.com/thecodemanual/DevOps Manual/_apis/build/builds/$(Build.BuildId)/timeline?api-version=5.1"
      Write-Host $uri
      # Invoke the REST call
      $build = Invoke-RestMethod -Uri $uri -Method Get -Headers @{Authorization = "Bearer $(System.AccessToken)"} -ContentType "application/json"
      $taskResult = $build.records | Where-Object {$_.name -eq "ConditionalStep" } | Select-Object result
      Write-Host $taskResult.result
    pwsh: true
For that you will get this output: If you get an error like this:
| {"$id":"1","innerException":null,"message":"Access denied. | DevOps Manual Build Service (thecodemanual) needs Stop builds | permissions for vstfs:///Build/Build/1611 in team project | DevOps Manual to perform the | action.","typeName":"Microsoft.TeamFoundation.Build.WebApi.AccessDeniedException, Microsoft.TeamFoundation.Build2.WebApi","typeKey":"AccessDeniedException","errorCode":0,"eventId":3000}
Please make sure that your build account has permission to stop a build: You will find this under this section: Please note: what you can't do is set a build as completed. If you do this, the whole pipeline will still be executed. So if this isn't what you want, you need to add a condition to every step with an output variable set previously in the pipeline and in that way ignore those steps.
steps:
- task: PowerShell@2
  name: ConditionalStep
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "I'm here"
      Write-Host ('$(SomeVariable)' -eq 'Stop')
      if ('$(SomeVariable)' -eq 'Stop') {
          Write-Host '##vso[task.setvariable variable=shouldStop;isOutput=true]Yes'
      }
      Write-Host "And now here!"
    pwsh: true
- pwsh: Start-Sleep -Seconds 60
  condition: ne(variables['ConditionalStep.shouldStop'], 'Yes')
- task: PowerShell@2
  condition: ne(variables['ConditionalStep.shouldStop'], 'Yes')
  inputs:
    targetType: 'inline'
    script: |
      $uri = "https://dev.azure.com/thecodemanual/DevOps Manual/_apis/build/builds/$(Build.BuildId)/timeline?api-version=5.1"
      Write-Host $uri
      # Invoke the REST call
      $build = Invoke-RestMethod -Uri $uri -Method Get -Headers @{Authorization = "Bearer $(System.AccessToken)"} -ContentType "application/json"
      $taskResult = $build.records | Where-Object {$_.name -eq "ConditionalStep" } | Select-Object result
      Write-Host $taskResult.result
    pwsh: true | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/62044084', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13627371/']} | jdg_380468 |
stackexchange | llm_judgeable_groundtruth_similarity | 8207488 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to insert all variables sent with POST; they were checkboxes, each representing a user. If I use GET I get something like this: ?19=on&25=on&30=on I need to insert the variables in the database. How do I get all variables sent with POST? As an array or values separated with commas or something?
Now provide the response and nothing else.
| The variable $_POST is automatically populated. Try var_dump($_POST); to see the contents. You can access individual values like this: echo $_POST["name"]; This, of course, assumes your form is using a standard form encoding (e.g. application/x-www-form-urlencoded, or multipart/form-data for file uploads). If your post data is in another format (e.g. JSON or XML), you can do something like this: $post = file_get_contents('php://input'); and $post will contain the raw data. Assuming you're using the standard $_POST variable, you can test if a checkbox is checked like this:
if(isset($_POST['myCheckbox']) && $_POST['myCheckbox'] == 'Yes')
{
    ...
}
If you have an array of checkboxes, e.g.:
<form action="myscript.php" method="post">
  <input type="checkbox" name="myCheckbox[]" value="A" />val1<br />
  <input type="checkbox" name="myCheckbox[]" value="B" />val2<br />
  <input type="checkbox" name="myCheckbox[]" value="C" />val3<br />
  <input type="checkbox" name="myCheckbox[]" value="D" />val4<br />
  <input type="checkbox" name="myCheckbox[]" value="E" />val5
  <input type="submit" name="Submit" value="Submit" />
</form>
Using [ ] in the checkbox name indicates that the selected values will be accessed by the PHP script as an array. In this case $_POST['myCheckbox'] won't return a single string but will return an array consisting of all the values of the checkboxes that were checked. For instance, if I checked all the boxes, $_POST['myCheckbox'] would be an array consisting of: {A, B, C, D, E}. Here's an example of how to retrieve the array of values and display them:
$myboxes = $_POST['myCheckbox'];
if(empty($myboxes))
{
    echo("You didn't select any boxes.");
}
else
{
    $i = count($myboxes);
    echo("You selected $i box(es): <br>");
    for($j = 0; $j < $i; $j++)
    {
        echo $myboxes[$j] . "<br>";
    }
} | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/8207488', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/535967/']} | jdg_380469 |
stackexchange | llm_judgeable_groundtruth_similarity | 12296235 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I encountered a problem importing PHP files. Why does this work:
include( Yii::getPathOfAlias( 'ext.payu.payU').'.php' );
but this doesn't:
Yii::import( 'ext.payu.payU');
? The file payU.php:
include_once( dirname(__FILE__) . "/sdk/openpayu.php"); // this is a valid path
class payU{ }
Now provide the response and nothing else.
| Try to do Yii::import('ext.payu.payU', true) because default one-parameter Yii::import() function call means: hey, just add this path to the known aliases list and do not require() it now and you have to do new payU() for this to work. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12296235', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1024298/']} | jdg_380470 |
stackexchange | llm_judgeable_groundtruth_similarity | 22236029 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is what I've been doing: Locally - where I've got a brand new postgres database, and two models.py files from two different apps: python manage.py syncdbpython manage.py schemamigration api --initialpython manage.py schemamigration extapi --initialpython manage.py migrate api 0001 --fakepython manage.py migrate extapi 0001 --fake This works swell, and I can add stuff to the database just fine. Then, when pushing to Heroku, where I've already created an empty app: git add .git commit -m "Ready to go to Heroku"git push heroku masterheroku run python manage.py syncdb That outputs this: Running `python manage.py syncdb` attached to terminal... up, run.9548Syncing...Creating tables ...Creating table django_admin_logCreating table auth_permissionCreating table auth_group_permissionsCreating table auth_groupCreating table auth_user_groupsCreating table auth_user_user_permissionsCreating table auth_userCreating table django_content_typeCreating table django_sessionCreating table south_migrationhistory# create superuser prompt...Superuser created successfully.Installing custom SQL ...Installing indexes ...Installed 0 object(s) from 0 fixture(s)Synced: > django.contrib.admin > django.contrib.auth > django.contrib.contenttypes > django.contrib.sessions > django.contrib.messages > django.contrib.staticfiles > south > rest_frameworkNot synced (use migrations): - api - extapi Then I try to migrate those apps with heroku run python manage.py migrate and get this error: Running `python manage.py migrate` attached to terminal... up, run.3724Running migrations for api: - Migrating forwards to 0001_initial. > api:0001_initialFATAL ERROR - The following SQL query failed: ALTER TABLE "api_song" ADD CONSTRAINT "summary_id_refs_id_36bb6e06" FOREIGN KEY ("summary_id") REFERENCES "extapi_summary" ("id") DEFERRABLE INITIALLY DEFERRED;The error was: relation "extapi_summary" does not existError in migration: api:0001_initialTraceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line utility.execute() File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/base.py", line 242, in run_from_argv self.execute(*args, **options.__dict__) File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/base.py", line 285, in execute output = self.handle(*args, **options) File "/app/.heroku/python/lib/python2.7/site-packages/south/management/commands/migrate.py", line 111, in handle ignore_ghosts = ignore_ghosts, File "/app/.heroku/python/lib/python2.7/site-packages/south/migration/__init__.py", line 220, in migrate_app success = migrator.migrate_many(target, workplan, database) File "/app/.heroku/python/lib/python2.7/site-packages/south/migration/migrators.py", line 254, in migrate_many result = migrator.__class__.migrate_many(migrator, target, migrations, database) File "/app/.heroku/python/lib/python2.7/site-packages/south/migration/migrators.py", line 329, in migrate_many result = self.migrate(migration, database) File "/app/.heroku/python/lib/python2.7/site-packages/south/migration/migrators.py", line 133, in migrate result = self.run(migration, database) File "/app/.heroku/python/lib/python2.7/site-packages/south/migration/migrators.py", 
line 114, in run return self.run_migration(migration, database) File "/app/.heroku/python/lib/python2.7/site-packages/south/migration/migrators.py", line 85, in run_migration south.db.db.execute_deferred_sql() File "/app/.heroku/python/lib/python2.7/site-packages/south/db/generic.py", line 318, in execute_deferred_sql self.execute(sql) File "/app/.heroku/python/lib/python2.7/site-packages/south/db/generic.py", line 282, in execute cursor.execute(sql, params) File "/app/.heroku/python/lib/python2.7/site-packages/django/db/backends/util.py", line 53, in execute return self.cursor.execute(sql, params) File "/app/.heroku/python/lib/python2.7/site-packages/django/db/utils.py", line 99, in __exit__ six.reraise(dj_exc_type, dj_exc_value, traceback) File "/app/.heroku/python/lib/python2.7/site-packages/django/db/backends/util.py", line 53, in execute return self.cursor.execute(sql, params)django.db.utils.ProgrammingError: relation "extapi_summary" does not exist To me, it looks like the tables just aren't even being created, but I have no idea why not. When I run heroku run python manage.py sqlall it says everything's been made, but then I look at the database itself (the one heroku makes on s3) there's nothing from app_one and app_two. Again, this all works perfect locally, it's just when it goes up on heroku that things fall apart.
Now provide the response and nothing else.
| There's a circular import that should've been handled by just deferring the creation of api_userprofile, but because of how South handles transactions, it breaks. So! Easiest way to make this work is to get syncdb to make all the tables and just fake the migration: python manage.py syncdb --all That gets us: Synced: > django.contrib.admin > django.contrib.auth > django.contrib.contenttypes > django.contrib.sessions > django.contrib.messages > django.contrib.staticfiles > api > extapi > moodranker > recommender > south > rest_frameworkNot synced (use migrations): - Then fake the migrations: python manage.py migrate --fake | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22236029', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2585031/']} | jdg_380471 |
stackexchange | llm_judgeable_groundtruth_similarity | 1002150 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Until recently, I had a bunch of virtual sites set up like so: <VirtualHost 127.0.0.1:1234> ... This works fine for testing on my local machine, where I use a Linux desktop. In order to test how MS and explorer displays my pages from my Windows laptop, I changed this to <VirtualHost *:1234> ... Which also works fine, calling the site up from http://[mylinuxservername]:1234 on my laptop's IE. However, I want to restrict that wildcard to the local lan. Plugging in any ip, like 192.nnn.nnn.nnn or 192.*.*.* where the wildcard is above results in 403 Forbidden on the windows machine. The local server still works fine on my Linux box: <VirtualHost 127.0.0.1:1234 192.*.*.*:1234> ... or <VirtualHost 127.0.0.1:1234 192.nnn.nnn.nnn:1234> #exact IP of laptop ... Anyway, I don't like that wildcard in the second config example above. Hints anyone?
Now provide the response and nothing else.
| The parameter(s) of VirtualHost are the local addresses you listen to, not the remote ones. In Apache 2.4 and newer, use the Require directive:
Require ip 127.0.0.0/8
Require ip 192.168.0.0/16
(192.168.0.0/16 is the usual private LAN range; the broader 192.0.0.0/8 would also admit public addresses.) If you are using Apache 2.2 or earlier, use the authz_host configuration:
Order Allow,Deny
Allow from 127.0.0.0/8
Allow from 192.168.0.0/16
This may also work on Apache 2.4, but Order and Allow have been deprecated. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1002150', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/105090/']} | jdg_380472 |
stackexchange | llm_judgeable_groundtruth_similarity | 277977 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The clique-coclique bound is said to hold for a simple graph $G$ on $n$ vertices if $\omega(G) \, \alpha(G) \leq n$, letting $\omega(G)$ and $\alpha(G)$ denote its clique and coclique (independent set) numbers respectively. It is known, in particular, that the clique-coclique bound holds for all vertex-transitive graphs and distance-regular graphs - two families of walk-regular graphs. The clique-coclique bound also appears to hold for all of the examples of walk-regular graphs that I know of that are neither vertex-transitive nor distance-regular. It is also apparent that the clique-coclique bound holds for some other families of walk-regular graphs, namely semi-symmetric graphs. Could it be possible that the clique-coclique bound actually holds for all (connected) walk-regular graphs? By informal reasoning in my head, it feels plausible to me that this could be the case. I wonder what might be a good approach to take to try to prove or disprove this?
Now provide the response and nothing else.
| EDIT: The answer is no, see comment below. In every case I know of, the clique-coclique bound can be proven for a class of graphs by proving the stronger fact that $\vartheta(G)\bar{\vartheta}(G) \le |V(G)|$ for all $G$ in that class, where $\vartheta(G)$ is the Lovasz theta number of $G$ and $\bar{\vartheta}(G) := \vartheta(\bar{G})$ is the Lovasz theta number of the complement of $G$. It is well known that $\alpha(G) \le \vartheta(G) \le \chi(\bar{G})$, i.e., that $\omega(G) \le \bar{\vartheta}(G) \le \chi(G)$, for all graphs $G$. Thus if $\vartheta(G)\bar{\vartheta}(G) \le |V(G)|$, then $\alpha(G)\omega(G) \le |V(G)|$ follows immediately. In general, $\vartheta(G)\bar{\vartheta}(G) \ge |V(G)|$ holds for any graph $G$, but in some cases the other inequality holds and so you get equality. So if you could prove that $\vartheta(G)\bar{\vartheta}(G) = |V(G)|$ for any walk-regular graph $G$, then you are done. Unfortunately, we will see that this turns out not to be the case. However, we can prove it for a related, but smaller, class of graphs. First, it is known that $\vartheta(G)\bar{\vartheta}(G) = |V(G)|$ for 1-walk-regular graphs $G$ (actually I do not know if this is written anywhere in the literature, but it is true). A graph is 1-walk-regular if it is walk-regular and additionally the number of walks of length $\ell$ starting from one end of an edge and ending at the other does not depend on the edge. In terms of the adjacency matrix $A$, this says that $A \circ A^\ell$ is a constant times $A$ for all $\ell \in \mathbb{N}$, where $\circ$ denotes the Schur (entrywise) product (the walk-regularity condition is of course equivalent to $I \circ A^\ell$ being a constant times $I$). So for walk-regular graphs that are additionally 1-walk-regular (or their complement is 1-walk-regular), the clique-coclique bound holds. Unfortunately, there are walk-regular graphs that are not 1-walk-regular and their complements are not 1-walk-regular, so this does not prove what you want. But let's take a closer look at why it works for 1-walk-regular graphs. The Lovasz theta number of a graph can be defined by the following semidefinite program: \begin{equation}\label{eq:dual}\begin{array}{lc}\vartheta(G) \ = & \begin{array}[t]{ll}\max & \text{sum}(B) \tag{D} \\\text{s.t.} & B_{ij} = 0 \text{ for } i \sim j \\ & \text{Tr}(B) = 1 \\ & B \succeq 0\end{array}\end{array}\end{equation}where $\text{sum}(B)$ denotes the sum of the entries of $B$, which is also equal to $\text{Tr}(BJ)$ where $J$ is the all ones matrix. One can also define Lovasz theta (of the complement) of $G$ by the following semidefinite program: \begin{equation}\label{eq:primal}\begin{array}{lc}\bar{\vartheta}(G)= & \begin{array}[t]{ll}\min & t \tag{P}\\\text{s.t.} & M_{ii} = t-1 \text{ for } i \in V(G) \\ & M_{ij} = -1 \text{ for } i \sim j \\ & M \succeq 0\end{array}\end{array}\end{equation} In order to prove the inequality $\vartheta(G)\bar{\vartheta}(G) \ge |V(G)|$ for any graph $G$, the usual proof takes an optimal solution $M$ to (\ref{eq:primal}), and uses the matrix $(1/nt)(M+J)$ (where $t = \bar{\vartheta}(G)$ and $n = |V(G)|$) as a feasible solution to (\ref{eq:dual}). This feasible solution will have objective value $$\frac{1}{nt}\text{Tr}((M+J)J) = \frac{1}{nt}(\text{Tr}(MJ) + \text{Tr}(J^2)) \ge \frac{1}{nt}\text{Tr}(J^2) = n/t$$ since $\text{Tr}(MJ) \ge 0$ because $M,J \succeq 0$.
To prove $\vartheta(G)\bar{\vartheta}(G) \le |V(G)|$ for certain classes of graphs, the usual proof takes an optimal solution $B$ to (\ref{eq:dual}) that has constant diagonal and a constant row sum and uses $(n^2/s)B - J$ where $s = \vartheta(G)$ as a feasible solution to (\ref{eq:primal}). This will have objective value equal to $n/s$, thus proving the desired inequality. Though the assumptions on the graph $G$ may vary, I believe that the existence of an optimal solution $B$ to (\ref{eq:dual}) with constant diagonal and constant row sum is the essential ingredient. In fact, you can show that $\vartheta(G)\bar{\vartheta}(G) = |V(G)|$ if and only if such an optimal solution to (\ref{eq:dual}) exists (I don't know if this is in the literature). You can also show that the $i^\text{th}$ row sum is equal to $\vartheta(G)$ times the $i^\text{th}$ diagonal entry for any optimal solution to (\ref{eq:dual}), and so it is necessary and sufficient to find an optimal solution with constant diagonal. For example, if $G$ is vertex transitive, then you can easily find such an optimal solution by "symmetrizing" any optimal solution using the automorphisms of $G$. But there is something more general you can do, and it involves coherent algebras. A coherent algebra is a subalgebra of the $n \times n$ complex matrices that contains the identity and all ones matrices, is closed under conjugate transposition, and is closed under Schur product. It is easy to see that the intersection of two coherent algebras is a coherent algebra, and this allows one to define the coherent algebra of a graph $G$ as the smallest coherent algebra containing the adjacency matrix of $G$. Based on your personal webpage, you already know what these are, so I won't elaborate too much. If we consider the coherent algebra $\mathcal{C}$ of a graph $G$ as a subspace of the vector space of $n \times n$ matrices, we can construct a linear map $\Phi$ that is the orthogonal projection onto $\mathcal{C}$. It turns out that this map has some very nice properties. In particular, if $M$ is positive semidefinite, then so is $\Phi(M)$. Also, $\Phi$ is trace-preserving, maps the identity to itself, and it is "doubly-stochastic" in the sense that it maps (entrywise) nonnegative matrices to nonnegative matrices, preserves the sum of the entries of a matrix, and maps the all ones matrix to itself. These (and a few other) properties show that if $B$ is a feasible solution to (\ref{eq:dual}), then $\Phi(B)$ is a feasible solution with the same objective value (and similarly for (\ref{eq:primal})). Of course, $\Phi(B)$ is contained in the coherent algebra of $G$, and is thus in the span of the unique basis of $01$-matrices of $\mathcal{C}$ (basically, the map $\Phi$ just smooths a matrix out over the entries of each of these 01-matrices). Thus, if $\mathcal{C}$ is homogeneous (meaning every matrix in $\mathcal{C}$ has constant diagonal), then there is always an optimal solution to (\ref{eq:dual}) with constant diagonal and so $\vartheta(G)\bar{\vartheta}(G) = |V(G)|$, and the clique-coclique bound holds. As far as I know, this includes almost all classes of graphs for which it is known that the clique-coclique bound holds. However, this does not suffice for the proof of the case of 1-walk-regular graphs, since their coherent algebras may not be homogeneous (for instance the Hoffman graph). So how do we prove it for 1-walk-regular graphs? 
Well, you can show that if $G$ is the complement of a 1-walk-regular graph, then $A - \tau I$ is (up to a scalar) an optimal solution for (\ref{eq:dual}) where $\tau$ is the minimum eigenvalue of $G$ (see https://arxiv.org/abs/1305.5545 for a proof of this, but there 1-walk-regular is called 1-homogeneous). The matrix $A - \tau I$ obviously has constant diagonal and so we are done. But can we capture what is happening with 1-walk-regular graphs by a more general argument? Yes. Define the partially coherent algebra of a graph $G$ to be the smallest subalgebra of the $|V(G)| \times |V(G)|$ complex matrices that contains the identity matrix, the all ones matrix, the adjacency matrix $A$ of $G$, is closed under conjugate transposition, and is closed under Schur product where one of the two factors involved is $I$ or $A$. This will necessarily be a (possibly equal) subalgebra of the coherent algebra of $G$. We can construct the map $\Phi'$ which is the orthogonal projection onto the partially coherent algebra of $G$. The map $\Phi'$ will not be quite as nice as the map $\Phi$ above, but it will still take feasible solutions to (\ref{eq:dual}) to feasible solutions to (\ref{eq:dual}) with the same objective value (and similarly for (\ref{eq:primal})). Thus, if every matrix in the partially coherent algebra of $G$ has constant diagonal, then $\vartheta(G)\bar{\vartheta}(G) = |V(G)|$. In an upcoming work by Laura Mančinska, Antonios Varvitsiotis, and myself, we show that the partially coherent algebra of a connected 1-walk-regular graph is equal to the adjacency algebra of $G$, i.e., the algebra of polynomials in its adjacency matrix. The matrices in the adjacency algebra will have constant diagonal since $G$ is necessarily walk-regular by assumption. In the case of a 1-walk-regular graph that is not connected, I think you can show that the partially coherent algebra is just the span of the adjacency algebra plus the all ones matrix, which is also the adjacency algebra of the complement. So in this case it still holds that the partially coherent algebra will only contain constant diagonal matrices, and so this shows that $\vartheta(G)\bar{\vartheta}(G) = |V(G)|$ for any 1-walk-regular graph. Unfortunately, this doesn't work for walk-regular graphs. It is possible for the partially coherent algebra of a walk-regular graph to contain matrices that do not have constant diagonal. I suspected this was true but did not previously have a counterexample. But you do, on your webpage. In the post titled "Examples of Walk-Regular Graphs" from December 17, 2016 , you give some examples of walk-regular graphs that are neither vertex transitive nor distance-regular. The first example you give has 12 vertices and has Graph6 string equal to ${\tt KU`OXC`XKpHW}$. If $A$ is its adjacency matrix, then the matrix $A\left(A \circ[A(A \circ A^2)]\right)$ is in the partially coherent algebra of this graph but does not have constant diagonal. Of course, we could hope that there is some other way to prove that $\vartheta(G)\bar{\vartheta}(G) = |V(G)|$ for walk-regular graphs (and thus that the clique-coclique bound holds for these graphs), but this hope is crushed by your second example, with Graph6 string ${\tt KCOfeqkfJkLg}$, also having 12 vertices. This graph has $\bar{\vartheta}(G) = 4$ and $\vartheta(G) \approx 3.3431457$ according to Sage. But, the clique-coclique bound does hold for this graph since we have $\alpha(G) = 3$ and $\omega(G) = 4$. 
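For readers who want to reproduce the Sage computations quoted above, here is a minimal sketch (my own, assuming a Sage session in which the lovasz_theta() backend is available; the numeric values in the comments are the ones reported in this answer):

G = Graph('KCOfeqkfJkLg')               # the second 12-vertex walk-regular example
n = G.order()                           # 12
th = G.lovasz_theta()                   # approximately 3.3431457
thbar = G.complement().lovasz_theta()   # 4, i.e. theta-bar(G)
print(th * thbar > n)                   # True: theta(G)*theta-bar(G) exceeds |V(G)|
print(len(G.independent_set()) * G.clique_number() <= n)  # True: 3 * 4 <= 12, clique-coclique holds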
So we don't have a counterexample to your conjecture, but if your conjecture is true, then walk-regular graphs will be the first "natural" family of graphs (that I am aware of) that satisfy the clique-coclique bound but do not satisfy $\vartheta(G)\bar{\vartheta}(G) = |V(G)|$. So I am not too optimistic that it is true. P.S. Thanks for finding those examples of walk-regular graphs. I have wondered if $\vartheta(G)\bar{\vartheta}(G) = |V(G)|$ held for all walk-regular graphs for some time, but could find no proof. Now I know why. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/277977', 'https://mathoverflow.net', 'https://mathoverflow.net/users/75248/']} | jdg_380473 |
stackexchange | llm_judgeable_groundtruth_similarity | 38131981 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I'm populating new data in my tableView controller, my top cell is being rewritten by the bottom one. Let me explain. I have one image that is loaded asynchronously in the last cell row, at the bottom. All the other cells are loaded statically with an image that ships with the app. When I scroll down to display the bottom cell, which has a different image than the other cells, and then scroll up again, I see that the first shop image has changed to the dynamic one loaded in the bottom cell. Does anyone know why this is happening?

func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    let cell : shopsTableViewCell = tableView.dequeueReusableCellWithIdentifier("shopCell", forIndexPath: indexPath) as! shopsTableViewCell
    cell.index = indexPath.row
    let qualityOfServiceClass = QOS_CLASS_BACKGROUND
    let backgroundQueue = dispatch_get_global_queue(qualityOfServiceClass, 0)
    let apiCall = webApi()
    dispatch_async(backgroundQueue, {
        apiCall.downloadPhotoAsync("http://api-ytyg.wbbdev.com/files/shops/\(shops[indexPath.row].id)/\(shops[indexPath.row].featured_img)") { (image: UIImage?) -> Void in
            dispatch_async(dispatch_get_main_queue()) {
                if (image != nil) {
                    if (cell.index == indexPath.row) {
                        // shops[indexPath.row].photo = image!
                        cell.shopImg?.image = image
                    }
                } else {
                    shops[indexPath.row].photo = UIImage.init(named: "No_Image_Available.jpg")!
                }
            }
        }
    })
    cell.shopName?.text = shops[indexPath.row].name
    cell.shopDescription?.text = shops[indexPath.row].address
    cell.label1?.text = shops[indexPath.row].city + " | " + shops[indexPath.row].distance + "Km"
    cell.becomeFirstResponder()
    return cell
}
Now provide the response and nothing else.
| It is because your cells are being reused:

// this line initializes a cell of type shopsTableViewCell;
// if no cell can be reused, an already-used cell is returned instead
let cell : shopsTableViewCell = tableView.dequeueReusableCellWithIdentifier("shopCell", forIndexPath: indexPath) as! shopsTableViewCell

so the image and other data may still be the last image and data of the dequeued cell. To avoid this, you should implement prepareForReuse in the cell class file to reset the data and image. See Apple's docs for the prepareForReuse func. From Apple docs: If a UITableViewCell object is reusable—that is, it has a reuse identifier—this method is invoked just before the object is returned from the UITableView method dequeueReusableCellWithIdentifier:. For performance reasons, you should only reset attributes of the cell that are not related to content, for example, alpha, editing, and selection state. The table view's delegate in tableView:cellForRowAtIndexPath: should always reset all content when reusing a cell. If the cell object does not have an associated reuse identifier, this method is not called. If you override this method, you must be sure to invoke the superclass implementation.

// Example:
override func prepareForReuse() {
    super.prepareForReuse()
    // Do reset here
    self.shopImg?.image = nil
    self.shopName?.text = ""
    self.shopDescription?.text = ""
} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38131981', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3033437/']} | jdg_380474
stackexchange | llm_judgeable_groundtruth_similarity | 36353 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Since we know that there are some oracle problems which can be solved on a quantum computer, but not on an NP machine with the same oracle, the idea of nondeterministic (i.e. infinitely parallel) machine is not sufficient to describe what is going on in quantum mechanics. The question then arises--- what is? What is the natural classical machine which can simulate a quantum computer efficiently in polynomial time? What is the complexity class of this natural machine?
Now provide the response and nothing else.
| The smallest 'simple' complexity class which is known to contain BQP (and suspected to do so strictly) is the class PP . As PP is contained in PSPACE , this yields a potentially tighter algorithm in your hypothetical machine model. Translating from a more traditional description of PP in terms of nondeterministic Turing machines , a generic computation for solving a PP problem (PP problems are 'yes/no' problems, like those in P and in NP ) looks like some branching program of the sort you're interested in, in which each of the 'threads' submits a vote for whether the answer is 'yes' or 'no'. If the majority (fifty percent plus one) vote 'yes', then the answer which the machine produces is 'yes'; otherwise it produces a 'no' answer. It is straightforward to show that PP contains NP ; and PP was proven to contain BQP by Adleman, DeMarrais, and Huang, Quantum computability , SIAM Journal on Computing 26:1524-1540 (1997) ; however, I find that a simpler approach to the proof is presented by Dawson et al., Quantum computing and polynomial equations over the finite field ℤ2 , Quant. Info. & Comp 5:102-112 (2005) , which, like the traditional proof that BQP is contained in PSPACE , uses an approach in terms of a sum-over-paths; but unlike that approach restricts itself to paths with weights $\pm 2^{-n/2}$. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/36353', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/4864/']} | jdg_380475
stackexchange | llm_judgeable_groundtruth_similarity | 2934741 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Question: Solve: $xy+x+y=23\tag{1}$ $yz+y+z=31\tag{2}$ $zx+z+x=47\tag{3}$ My attempt: By adding all three we get $$\sum xy +2\sum x =101$$ Multiplying $(1)$ by $z$, $(2)$ by $x$, and $(3)$ by $y$ and adding altogether gives $$3xyz+ 2\sum xy =31x+47y+23z$$ Then, from the above two equations, after eliminating the $\sum xy$ term we get $$35x+51y+27z=202+3xyz$$ After that, subtracting $(1)\times 3z$ from the equation just above (to eliminate the $3xyz$ term) gives $$35x +51y-3z(14+x+y)=202\implies (x+y)[35-3z]+16y-42z=202$$ I tried pairwise subtraction of $(1),(2)$ and $(3)$ but it also does not seem to work. Please give me some hint so that I can proceed, or provide the answer.
Now provide the response and nothing else.
| Hint: Put $$X=x+1$$ $$Y=y+1$$ $$Z=z+1$$ Then we have $$XY=24$$ $$YZ=32$$ $$ZX=48$$ Can you take it from there? | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/2934741', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_380476 |
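Following the hint through to the end (this continuation is not part of the original hint): multiplying the three equations gives
$$(XYZ)^2 = 24 \cdot 32 \cdot 48 = 36864 \implies XYZ = \pm 192,$$
so $Z = \frac{XYZ}{XY} = \pm 8$, $X = \frac{XYZ}{YZ} = \pm 6$, $Y = \frac{XYZ}{ZX} = \pm 4$, giving $(x,y,z) = (5,3,7)$ or $(x,y,z) = (-7,-5,-9)$; both triples check against the original system.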
stackexchange | llm_judgeable_groundtruth_similarity | 303549 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose there is a file

foo bar cat dog
foo foo cat bar
bar foo foo foo

How do we grep for lines with a certain number of occurrences of foo, e.g. if the number is 1, only the first line in the sample file should be printed?
Now provide the response and nothing else.
| $ grep 'foo' file | grep -v 'foo.*foo'

First pick out all lines containing foo, then remove all lines with foo followed by another foo somewhere on the line. If all lines contain at least one foo (as in your example), you may skip the first grep. For a general solution to "How do I grep for exactly N occurrences of a string?": grep for lines with at least N matches, then remove lines with N+1 matches (or more). | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/303549', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/184788/']} | jdg_380477
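To make the general recipe concrete, a sketch for exactly N = 2 occurrences (my addition; the same pattern scales to any N):

grep -E '(foo.*){2}' file | grep -Ev '(foo.*){3}'   # at least 2 matches, then drop lines with 3 or more
awk 'gsub(/foo/, "&") == 2' file                    # gsub returns the replacement count, i.e. the number of matches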
stackexchange | llm_judgeable_groundtruth_similarity | 40436895 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've already used this method in Swift 2:

var myDict: NSDictionary?
if let path = NSBundle.mainBundle().pathForResource("Config", ofType: "plist") {
    myDict = NSDictionary(contentsOfFile: path)
}

But I don't know how to read a plist in Swift 3 without using NSDictionary(contentsOfFile: path).
Now provide the response and nothing else.
| The native Swift way is to use PropertyListSerialization:

if let url = Bundle.main.url(forResource: "Config", withExtension: "plist") {
    do {
        let data = try Data(contentsOf: url)
        let swiftDictionary = try PropertyListSerialization.propertyList(from: data, format: nil) as! [String: Any]
        // do something with the dictionary
    } catch {
        print(error)
    }
}

You can also use NSDictionary(contentsOf: with a type cast:

if let url = Bundle.main.url(forResource: "Config", withExtension: "plist"),
   let myDict = NSDictionary(contentsOf: url) as? [String: Any] {
    print(myDict)
}

but you explicitly wrote: without using NSDictionary(contentsOf... Basically, don't use NSDictionary without casting in Swift; you are throwing away the important type information. Meanwhile (Swift 4+) there is also the more comfortable PropertyListDecoder, which is able to decode a plist directly into a model. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40436895', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5309431/']} | jdg_380478
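To illustrate the PropertyListDecoder route mentioned at the end, a minimal sketch (the Config struct and its keys are hypothetical placeholders for whatever your plist actually contains):

struct Config: Decodable {
    let apiBaseURL: String   // hypothetical key
    let timeout: Double      // hypothetical key
}

if let url = Bundle.main.url(forResource: "Config", withExtension: "plist"),
   let data = try? Data(contentsOf: url),
   let config = try? PropertyListDecoder().decode(Config.self, from: data) {
    print(config.apiBaseURL)
}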
stackexchange | llm_judgeable_groundtruth_similarity | 1600248 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I set my view controller to the particular view through Interface Builder, but initWithFrame is not being called, while drawRect is being called (I verified this with a breakpoint). Why?
Now provide the response and nothing else.
| Right, because it's not guaranteed that initWithFrame: will be called when unarchiving the xib; initWithCoder: is used instead. Try using awakeFromNib, initWithCoder:, or the view controller's viewDidLoad. Which one you choose will mainly depend on at what stage during the display process you need to insert your code. Also, check out the answer to this question. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1600248', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/141302/']} | jdg_380479
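A minimal Objective-C sketch of the two standard hooks (my illustration, not from the original answer):

// A view unarchived from a nib goes through initWithCoder:, not initWithFrame:
- (id)initWithCoder:(NSCoder *)aDecoder {
    self = [super initWithCoder:aDecoder];
    if (self) {
        // setup you would otherwise have done in initWithFrame:
    }
    return self;
}

- (void)awakeFromNib {
    [super awakeFromNib];
    // outlets are connected by the time this runs
}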
stackexchange | llm_judgeable_groundtruth_similarity | 26837081 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a C# application with multiple projects referenced. One of the projects references Microsoft.SqlServer.Types (Version 11), because it is using SQLGeometry. When I install my application on an empty computer (only Windows 7 with VC++ 2010) I get an error in my application: "Could not load file or assembly 'Microsoft.SqlServer.Types, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies." Any ideas why it would require Version 10?
Now provide the response and nothing else.
| Please refer to this answer. You need to do one of the following:

Add the Type System Version=SQL Server 2012 keyword to your connection string in app.config:

<configuration>
  <connectionStrings>
    <add name="YourConnectionStringName"
         connectionString="(connection string values);Type System Version=SQL Server 2012" />
  </connectionStrings>
</configuration>

Add a bindingRedirect for Microsoft.SqlServer.Types in app.config:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.SqlServer.Types"
                          publicKeyToken="89845dcd8080cc91" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-11.0.0.0" newVersion="11.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

Either option will ensure that SqlConnection will load version 11.0.0.0 of the Microsoft.SqlServer.Types assembly, instead of version 10.0.0.0. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26837081', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/792324/']} | jdg_380480
stackexchange | llm_judgeable_groundtruth_similarity | 2096155 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have to solve the equation $\sqrt{x-{\sqrt{x-{\sqrt{x-{\sqrt{x-5}}}}}}}=5$. Repeated squaring of both sides of the equation makes it very complex. Is there any substitution or something similar which can simplify the problem? Does the problem have a solution if "generalized" as $\sqrt{x-{\sqrt{x-{\sqrt{x-{\sqrt{x-n}}}}}}}=n$?
Now provide the response and nothing else.
| Let us put together the suggestions given in the comments. We are going to tackle the general case, under the assumption $n\geq 1$. Clearly, $x=\color{red}{n(n+1)}$ is a solution of the given equation, since it is a solution of $\sqrt{x-n}=n$. So it is enough to prove it is the only solution. For simplicity, let$$ f_1(x)=\sqrt{x-n},\qquad f_2(x)=\sqrt{x-f_1(x)},$$ $$f_3(x)=\sqrt{x-f_2(x)},\quad f_4(x)=\sqrt{x-f_3(x)}$$and $I=(n,+\infty)$. $f_1(x)$ and $f_3(x)$ are increasing and positive functions on $I$, due to the fact that$$ \sqrt{y-n}-\sqrt{x-n} = \frac{y-x}{\sqrt{y-n}+\sqrt{x-n}} $$has the same sign as $y-x$. That also implies that $f_3(x)$ is quite close to $\sqrt{x}$. In particular, we cannot state that $f_4(x)$ is increasing on $I$ (as a matter of fact, it is not) but we may state that $f_4(x)$ is increasing on $J=(n+1,+\infty)$. Since every solution of $f_4(x)=n$ has to be greater than $n+1$, it follows that $x=n(n+1)$ is the only solution, as wanted. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2096155', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_380481
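As a quick sanity check of the claimed solution: with $x=n(n+1)$ each nested radical collapses in turn,
$$f_1(x)=\sqrt{n(n+1)-n}=n,\qquad f_2(x)=\sqrt{n(n+1)-n}=n,$$
and likewise $f_3(x)=f_4(x)=n$, so the original equation indeed holds (for $n=5$ this gives $x=30$).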
stackexchange | llm_judgeable_groundtruth_similarity | 747641 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the difference between the JavaScript functions decodeURIComponent and decodeURI?
Now provide the response and nothing else.
| To explain the difference between these two, let me explain the difference between encodeURI and encodeURIComponent. The main difference is this: the encodeURI function is intended for use on the full URI. The encodeURIComponent function is intended to be used on .. well .. URI components, that is, any part that lies between separators (; / ? : @ & = + $ , #). So, in encodeURIComponent these separators are also encoded, because they are regarded as text and not as special characters. Now back to the difference between the decode functions: each function decodes strings generated by its corresponding encode counterpart, taking care of the semantics of the special characters and their handling. | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/747641', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/343/']} | jdg_380482
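A small console sketch of the practical difference (my example):

encodeURI("https://example.com/a b?q=1");
// -> "https://example.com/a%20b?q=1"                 (separators kept)
encodeURIComponent("https://example.com/a b?q=1");
// -> "https%3A%2F%2Fexample.com%2Fa%20b%3Fq%3D1"     (separators escaped too)

decodeURI("a%20b%26c");          // -> "a b%26c"  (%26 is "&", a separator, so it is left alone)
decodeURIComponent("a%20b%26c"); // -> "a b&c"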
stackexchange | llm_judgeable_groundtruth_similarity | 622177 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following circuit, which is an active high-pass filter with the knee frequency \$ f_{-3 dB} = \frac{1}{2\pi RC} = \text{1 kHz}\$. I wanted to concatenate two of those systems in order to make a second-order HPF (circuit schematic omitted; created with CircuitLab). When I did, I noticed that the knee frequency shifted to \$ f_{-3 dB} = \text {~1.4 kHz}\$. I have tried to look up the mathematics behind why that is, but I couldn't find anything on the subject. I'd like to know the relationship between the knee-frequency shift and the concatenation of systems like that. Note: I don't want to build a second-order HPF with one op-amp, because in my circuit one of the legs of the op-amp is connected to ground, the input voltage is as well, and I can't change that. My reasoning was: $$ HPF(s) = \frac{sRC}{sRC + 1} $$ therefore $$ HPF(s)\cdot HPF(s) =\frac{s^2R^2C^2}{(sRC + 1)^2} = \frac{s^2}{s^2 + \frac{2s}{RC} + \frac{1}{R^2C^2}} $$ So as I understood (which is wrong): $$ \omega_0 = \frac{1}{RC} $$ Edit: I ended up solving the equation $$ |HPF(s) \cdot HPF(s) | = \frac{1}{\sqrt{2}} $$ with $$ \omega = 2\pi \cdot 1000 $$ and I found the C values that get a -3 dB frequency at 1000 Hz.
Now provide the response and nothing else.
| Cascaded 1st order filters, when buffered and with the same elements, converge towards a Gaussian bell. It only happens after many such stages, but that's the point of convergence. For your case, as you have correctly shown, the transfer function is: $$H(s)=\dfrac{s}{s+\dfrac{1}{RC}} \tag{1}$$ Cascading \$N\$ such stages means the overall transfer function will be of the form: $$G(s)=H(s)^N \tag{2}$$ Since the denominator has the same form for whatever power of \$N\$, the \$s^0\$ term will be of the form \$1/(RC)^N\$. For a 1st order, the attenuation at \$\omega=1\$ is always -3 dB (\$1/\sqrt2=1/2^{1/2}\$). For two cascaded sections, the transfer function becomes a 2nd order, and solving for the frequency at a specific attenuation is better suited for squared terms (considering \$RC=1\$): $$\begin{align}G(j\omega)^2&=H(j\omega)^4\quad\Rightarrow \\\dfrac{\omega^4}{\omega^4+2\omega^2+1}&=\dfrac12 \tag{3}\end{align}$$ Solving the above will yield 4 roots since it's a 4th order, but two of them will be imaginary and one negative, which leaves the positive one as the real result: $$\begin{cases}\omega^{2\text{nd}}_{1,2}&=\pm\sqrt{1+\sqrt2}=\pm\sqrt{2^\frac02+2^\frac12} \\\omega^{2\text{nd}}_{3,4}&=\pm\sqrt{1-\sqrt2} \tag{4} \\\end{cases}$$ For a 3rd order the results are more complicated, as you would expect, but a pattern starts forming: $$\omega^{3rd}_1=\sqrt{2^\frac03+2^\frac13+2^\frac23} \tag{5}$$ By now you can readily form a general formula that gives you the precise value for the frequency when the attenuation is -3 dB: $$A_{-3\;\text{dB}}^{HP}=\sqrt{\sum_{k=0}^{N-1}{2^\frac{k}{N}}} \tag{6}\label{6}$$ A simple numerical check with wxMaxima confirms it:

H(s) := s/(s + 1)$
n:7$
find_root( cabs( H(%i*w)^n ) = 2^-0.5, w, 1, 100 ); /* numerical; 2^-0.5 is the -3 dB target */
float( sqrt( sum( 2^(k/n), k, 0, n-1 ) ) );         /* analytical */

The results come up as:

3.099534753828498
3.099534753828497

the difference being in the last decimal, due to the numerical nature of find_root() (IIRC it uses Brent's method). And for n=13:

4.273111111613913
4.273111111613912

For the sake of completeness, cascaded 1st order lowpass will have this formula: $$A_{-3\;\text{dB}}^{LP}=\sqrt{2^\frac1N-1} \tag{7}\label{7}$$ I'll leave it to you to prove it. One thing to note is that everything above treats the ideal case, when buffers have infinite input impedance and zero output impedance, thus achieving perfect isolation. In practice this will not happen, so minor deviations will occur. 
| {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/622177', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/308772/']} | jdg_380483 |
stackexchange | llm_judgeable_groundtruth_similarity | 14129 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Liberal political blogger Josh Marshall just wrote a blog post discussing the recent U.N. resolution condemning Israeli settlements, but he starts off by discussing the viability of long-term solutions to the Israel-Palestine problem: There is no longterm solution to the conflict other than some form of partition of the land. This is dictated by an iron grip of demography and ideology. You can either have partition, a binational state or a state in which Jews and a portion of the Arab population (those who are currently citizens of Israel) have political rights and the majority of Arabs (those who now live in the territories) do not. You can call that last option anything you want. But the countries of the world will never and should never accept it. The binational option would be the end of Zionism and either a politically unstable and unworkable state or a slow motion and perhaps fast motion bloodbath. The least bad option for everyone is partition (indeed, it needn't be a bad option at all with creativity and good will, but both are close to non-existent). This is obvious. The alternative ideas one hears from the Zionist right are either ugly or fantastical. From a Israeli and Zionist perspective, whether the Palestinians are nice or pragmatic or want peace or don't is basically beside the point. The trajectory is the same regardless. These questions affect timing and process, not end result. My question is, what are these "alternative ideas" that Marshall is alluding to? Among members of the Israeli Right who do not support a two-state solution, what alternatives have they proposed? I think there are some on the Israeli Left who support a one-state solution dissolving Israel's identity as a Jewish state and making all Palestinians into Israeli citizens. But that doesn't strike me as something that anyone on the Israeli Right would support.
Now provide the response and nothing else.
| First of all, the main option the question is ignoring is that most opponents of the "standard" Arab-imposed 2-state solution idea (return to 1967 lines) are actually in favor of 2-state solution, just in forms different than what Arab states want. Dual-citizenship idea. Quoting from Wikipedia: A number of proposals for the granting of Palestinian citizenship or residential permits to Jewish settlers in return for the removal of Israeli military installations from the West Bank have been fielded by such individuals[47] as Arafat,[48] Ibrahim Sarsur[49] and Ahmed Qurei. Israeli Minister Moshe Ya'alon said in April 2010 that "just as Arabs live in Israel, so, too, should Jews be able to live in Palestine." … "If we are talking about coexistence and peace, why the [Palestinian] insistence that the territory they receive be ethnically cleansed of Jews? ".[50] Populated-Area Exchange Plan (Lieberman plan) a territorial exchange whereby Israel would annex almost all Israeli settlements in the West Bank which are situated in major settlement blocs close to the border, and withdraw from the remaining few deep inside the Palestinian territories. At the same time, it would transfer Arab-Israeli areas to the Palestinian state. Now, as to complete alternatives to two-state solution: Woodrow Wilson School Graduate Policy Workshop to the Two-State Solution In the Palestinian-Israeli Conflict compiled a pretty comprehensive list of solutions. Page 17 of the report lists a summary of main ones (I'm just listing the options themselves and not the workshop's analysis of them): Status Quo: Many Israelis, and some Americans, believe that managing the status quo (meaning the continued absence of a formal settlement) is the most viable and feasible alternative to the two-state solution. One-state solutions 1: Many left-wing Palestinian academics and activists, and some far-left-wing Israelis, view a one-state solution with equal rights for all inhabitants as the only just solution to the conflict. One-state solutions 2: Growing numbers of right-wing Israelis, and Palestinian cholar Sari Nusseibeh, have promoted a one-state model in which Palestinianshave limited or different political rights. Numerous variations of both ideas exist based on unitary, federal, binational and confederal models. Three-state solution (Israel, West Bank and Gaza): There is a growing belief among some that prolonged West Bank-Gaza divide could become permanent, producing a de facto three-state solution. Jordan Option 1: “Jordan is Palestine” idea promoted by far-right Israelis; Jordan Option 2: Jordan-Palestine confederation promoted by some Jordanians. Egypt-Gaza Option: Some far-right Israelis would like Egypt to annex or assume greater responsibility for Gaza. Trusteeship: Interim international administration over the future Palestinian state | {} | {'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/14129', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/1953/']} | jdg_380484 |
stackexchange | llm_judgeable_groundtruth_similarity | 12420317 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a shell script that starts unit tests for modules. I need the name of the module in all lowercase and with the first character uppercase. So far I have been doing it like this: #!/bin/sh -x # z.B. getbrowser strModuleToTest=$1# g strModuleToTestUppercaseFirstletter=${strModuleToTest:0:1}# etbrowser strModuleToTestUppercaseLastletters=${strModuleToTest:1}# g -> G strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/a/A}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/b/B}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/c/C}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/d/D}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/e/E}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/f/F}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/g/G}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/h/H}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/i/I}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/j/J}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/k/K}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/l/L}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/m/M}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/n/N}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/o/O}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/p/P}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/q/Q}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/r/R}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/s/S}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/t/T}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/u/U}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/v/V}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/w/W}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/x/X}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/y/Y}"strModuleToTestUppercaseFirstletter="${strModuleToTestUppercaseFirstletter/z/Z}"# GetbrowserstrModuleToTestUppercase=$strModuleToTestUppercaseFirstletter""$strModuleToTestUppercaseLastletters What would be an easier way to set $strModuleToTestUppercase? I tried strModuleToTestUppercase="${strModuleToTest[@]^}" or strModuleToTestUppercasesed="sed 's/\<./\u&/g' $strModuleToTest" but without success.
Now provide the response and nothing else.
| If: s=somemodule with bash v4+ echo ${s^} This should work with a bit older bash versions (from Glenn): echo $(tr a-z A-Z <<< ${s:0:1})${s:1}") with zsh echo ${(C)s} with ash and coreutils echo $(echo $s | cut -c1 | tr a-z A-Z)$(echo $s | cut -c2-) with GNU sed echo $s | sed 's/./\U&/' with BSD sed echo $s | sed ' h; y/quvwxzdermatoglyphicsbfjkn/QUVWXZDERMATOGLYPHICSBFJKN/; G; s/\(.\)[^\n]*\n.\(.*\)/\1\2/;' with awk echo $s | awk '{ print toupper(substr($0, 1, 1)) substr($0, 2) }' with perl echo $s | perl -nE 'say ucfirst' with python echo $s | python -c 'import sys; print sys.stdin.readline().rstrip().capitalize()' with ruby echo $s | ruby -e 'puts ARGF.read.capitalize' Output in all cases Somemodule | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12420317', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1257145/']} | jdg_380485 |
stackexchange | llm_judgeable_groundtruth_similarity | 16839149 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Upon pressing my CommandButton, I would like to have a pop-up that asks "These changes cannot be undone. It is advised to save a copy before proceeding. Do you wish to proceed?" And I want to have three options: Yes - pop-up window is closed and CommandButton Macro is executed No - This closes the pop-up window and changes nothing Save - closes pop-up window and opens "Save As" (macro is not executed) I don't really know where to start with this. Could you please give me a hand? Thank you very much indeed.
Now provide the response and nothing else.
| You can use a message box, but that is somewhat limited. You can rephrase the question slightly to use the vbYesNoCancel buttons, since Save As is not an optional button on Message Box. Then you can work with the result of the message box button-click: Dim mbResult as IntegermbResult = MsgBox("These changes cannot be undone. Would you like to save a copy before proceeding?", _ vbYesNoCancel)Select Case mbResult Case vbYes 'Modify as needed, this is a simple example with no error handling: With ActiveWorkbook If Not .Saved Then .SaveAs Application.GetSaveAsFilename() End With Case vbNo ' Do nothing and allow the macro to run Case vbCancel ' Do NOT allow the macro to run Exit SubEnd Select | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16839149', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1568517/']} | jdg_380486 |
stackexchange | llm_judgeable_groundtruth_similarity | 828179 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $A \in K^{m\times n}$ and $B \in K^{n \times r}$ Prove that min$\{rk(A),rk(B)\}\geq rk(AB)\geq rk(A)+rk(B)-n$ My attempt at a solution: $(1)$ $AB=(AB_1|...|AB_j|...|AB_r)$ ($B_j$ is the j-th column of $B$), I don't know if the following statement is correct: the columns of $AB$ are a linear combination of the columns of $B$, then $rk(AB) \leq rk(B)$. $(2)$In a similar way, $AB= \begin{bmatrix} —A_1B— \\ \vdots \\ —A_jB— \\ \vdots \\—A_mB— \end{bmatrix}$ ($A_j$ denotes the j-th row of $A$), so the rows of $AB$ are a linear combination of the rows of $A$, from here one deduces $rk(AB)\leq rk(A)$. From $(1)$ and $(2)$ it follows $rk(AB)\leq min\{rk(A),rk(B)\}$. This is what I've done so far. I am having doubts with, for example (1), this statement I've conjectured: the columns of $AB$ are a linear combination of the columns of $B$, then $rk(AB) \leq rk(B)$, but wouldn't this be the case iff $AB=(\alpha_1B_1|...|\alpha_jB_j|...|\alpha_rB_r)$ with $\alpha_1,...,\alpha_n \in K$ instead of $(AB_1|...|AB_j|...|AB_r)$ ? This is a major doubt I have, the same goes for (2). I need help to show the inequality $rk(AB)\geq rk(A)+rk(B)-n$
Now provide the response and nothing else.
| Consider $A \in \mathbb{K^{m\times n}}$. I will use the following notation: column space of $A \equiv \text{col}(A) = \{ x \in \mathbb{K^m}: x=Ay, \text{ for some } y\in \mathbb{K^n} \}$; null space of $A \equiv \text{ker}(A) = \{ y \in \mathbb{K^n}: Ay=0\}$; rank of $A \equiv \text{rk}(A) = \text{dim} \; \text{col}(A)$. Let's start by proving a very useful equality. Theorem 1 If $A \in \mathbb{K^{m\times n}}$ and $B \in \mathbb{K^{n\times r}}$ then $$\text{rk}(AB)=\text{rk}(B) - \text{dim} \; ({ \text{ker}(A) \cap \text{col}(B)}).$$ Proof: Take $S=\{x_1,...,x_s\}$ as a basis for $\text{ker}(A) \cap \text{col}(B)$ and note that $({ \text{ker}(A) \cap \text{col}(B)}) \subseteq \text{col}(B)$. If $\text{dim} \; \text{col}(B) = s+t$, then we can find an extension set $S_e=\{ z_1,\ldots,z_t \}$ such that $U = \{ x_1, \ldots, x_s,z_1, \ldots, z_t\}$ is a basis for $\text{col}(B)$. Then, we just have to prove that $\text{dim}\;\text{col}(AB) = t$, which we can do by showing that $T=\{ Az_1, \ldots, Az_t \}$ is a basis for $\text{col}(AB)$. In fact, we have that: If $b \in \text{col}(AB)$ then $\exists y\in \mathbb{K^r}: b=ABy$. Now, $$By \in \text{col}(B) \Rightarrow By= \sum\limits_{i=1}^{s} {\xi_i x_i} + \sum\limits_{i=1}^{t} {\eta_i z_i}$$so$$b=A \left( {\sum\limits_{i=1}^{s} {\xi_i x_i} + \sum\limits_{i=1}^{t} {\eta_i z_i}} \right) = \sum\limits_{i=1}^{s} {\xi_i \underbrace{Ax_i}_{={\bf{0}}}} + \sum\limits_{i=1}^{t} {\eta_i Az_i} = \sum\limits_{i=1}^{t} {\eta_i Az_i}.$$Hence, $T$ spans $\text{col}(AB)$. If ${\bf{0}} = \sum\limits_{i=1}^{t} {\alpha_i Az_i} = A \left( \sum\limits_{i=1}^{t} {\alpha_i z_i} \right)$, then $\sum\limits_{i=1}^{t} {\alpha_i z_i} \in \text{ker}(A) \cap \text{col}(B)$, so there are scalars $\beta_j$ such that$$\sum\limits_{i=1}^{t} {\alpha_i z_i} = \sum\limits_{j=1}^{s} {\beta_j z_j} \Leftrightarrow \sum\limits_{i=1}^{t} {\alpha_i z_i} - \sum\limits_{j=1}^{s} {\beta_j z_j} = {\bf{0}}.$$Hence, recalling that $U$ is a basis for $\text{col}(B)$, therefore forming a linearly independent set, $\alpha_i=\beta_j=0$, so we conclude that $T$ is also a linearly independent set. Thus $T$ is a basis for $\text{col}(AB)$, so $t= \text{dim} \; \text{col}(AB) = \text{rk}(AB)$, and we finally get$$\text{rk}(B) = \text{dim} \; \text{col}(B) = s + t = \text{dim} \; ({ \text{ker}(A) \cap \text{col}(B)}) + \text{rk}(AB).$$ Q.E.D. i) Now, let's prove that $\text{rk}(AB) \leq \text{min} \{ \text{rk}(A),\text{rk}(B) \}$. Resorting to Theorem 1 , we have$$\tag{1} \text{rk}(AB)=\text{rk}(B) - \text{dim} \; ({ \text{ker}(A) \cap \text{col}(B)}) \leq \text{rk}(B).$$Recalling that transposition does not alter rank, and again using Theorem 1 , we get$$\tag{2} \text{rk}(AB)=\text{rk}(AB)^T = \text{rk}( B^T A^T) = \underbrace{\text{rk}(A^T)}_{=\text{rk}(A)} - \text{dim} \; ({ \text{ker}(B^T) \cap \text{col}(A^T)}) \leq \text{rk}(A).$$From (1) and (2) , we're able to conclude$$\text{rk}(AB) \leq \text{min} \{ \text{rk}(A),\text{rk}(B) \}.$$ ii) To prove $\text{rk}(A) + \text{rk}(B) - n \leq \text{rk}(AB)$, recall that if $X$ and $Y$ are vector spaces such that $X \subseteq Y$ then $\text{dim} \;X \leq \text{dim} \;Y$, and note that $\text{ker}(A) \cap \text{col}(B) \subseteq \text{ker}(A)$. We then have$$\text{dim} \; (\text{ker}(A) \cap \text{col}(B)) \leq \text{dim} \; \text{ker}(A) \mathop{=}^{\text{R-N}} n - \text{rk}(A)$$where we have resorted to the Rank-Nullity Theorem ( R-N ) to get the last equality. 
Plugging the last expression into Theorem 1 , we arrive at$$\text{rk}(AB)=\text{rk}(B) - \text{dim} \; (\text{ker}(A) \cap \text{col}(B)) \geq \text{rk}(B) + \text{rk}(A) - n.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/828179', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/100106/']} | jdg_380487 |
stackexchange | llm_judgeable_groundtruth_similarity | 9934 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Cyclic groups have at most one subgroup of any given finite index. Can we describe the class of all groups having such property? Thank you!
Now provide the response and nothing else.
| Since $G$ has exactly one subgroup of each finite index, and the index of a conjugate of $H$ equals the index of $H$, then every subgroup of finite index is normal. If $G$ is finite, then every subgroup is normal, so the group must be a Dedekind group (also known as Hamiltonian groups). All such groups that are nonabelian are of the form $G = Q_8 \times B \times D$, where $Q_8$ is the quaternion group of $8$ elements, $B$ is a direct sum of copies of the cyclic group of order $2$, and $D$ is an abelian group of odd order. Any of the factors may be missing. Since $Q_8$ contains several subgroups of index $2$ (exactly three, in fact), if a factor of $Q_8$ appears then $G$ would have several subgroups of the same index, hence $G$ must in fact be an abelian group. Since $G$ is finite and abelian, it is isomorphic to a direct sum of cyclic groups, $G = C_{a_1}\oplus\cdots\oplus C_{a_k}$, where $1\lt a_1|a_2|\cdots|a_k$. If $k\gt 1$, then $G$ contains at least two subgroups of order $a_{k-1}$; thus $k=1$ so $G$ is in fact cyclic. So the only finite groups with the desired property are the cyclic groups.If $G$ is infinite, you can have other possibilities. One example is the Prüfer group , Added: but only by vacuity: it has no proper subgroups of finite index. In general, if $H$ if a subgroup of finite index in $G$ then $H$ is normal, as above, and $G/H$ also has the desired property and is finite; thus, $G/H$ is cyclic for every subgroup of finite index by the argument above. I'm sure there's more to be said, but I'll think about it a bit first... | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/9934', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/3427/']} | jdg_380488 |
stackexchange | llm_judgeable_groundtruth_similarity | 8276323 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Rails 3.1.2 with mail gem 2.3.0 I'm trying to use ActionMailer with an ActiveModel based form. I've defined my table less model like so: class StprodApp include ActiveModel::Validations include ActiveModel::Conversion extend ActiveModel::Naming attr_accessor :email,:name attr_reader :errors validates_presence_of :email, :message => "Please enter your email address" def initialize(attributes = {}) attributes.each do |name, value| send("#{name}=", value) end @errors = ActiveModel::Errors.new(self) end def persisted? false endend So far so good, I can post a form based on stprod_app, and validations work So now I want to send an email based off this model, so I do this in my create action: @stprod_app = StprodApp.new(params[:stprod_app])if @stprod_app.valid? EmployeeMailer.stprod_app(@stprod_app).deliverend I get this error undefined method `index' for #<StprodApp:0x007f95325a1ad0> and this stack trace mail (2.3.0) lib/mail/encodings.rb:117:in `value_decode' mail (2.3.0) lib/mail/encodings.rb:101:in `decode_encode'mail (2.3.0) lib/mail/fields/unstructured_field.rb:74:in `do_decode'mail (2.3.0) lib/mail/fields/unstructured_field.rb:56:in `decoded'mail (2.3.0) lib/mail/fields/unstructured_field.rb:107:in `wrapped_value'mail (2.3.0) lib/mail/fields/unstructured_field.rb:70:in `do_encode'mail (2.3.0) lib/mail/fields/unstructured_field.rb:52:in `encoded'mail (2.3.0) lib/mail/field.rb:123:in `method_missing'mail (2.3.0) lib/mail/header.rb:190:in `block in encoded'mail (2.3.0) lib/mail/header.rb:189:in `each'mail (2.3.0) lib/mail/header.rb:189:in `encoded'mail (2.3.0) lib/mail/message.rb:1708:in `encoded'actionmailer (3.1.2) lib/action_mailer/base.rb:451:in `set_payload_for_mail'actionmailer (3.1.2) lib/action_mailer/base.rb:431:in `block in deliver_mail'activesupport (3.1.2) lib/active_support/notifications.rb:53:in `block ininstrument'activesupport (3.1.2) lib/active_support/notifications/instrumenter.rb:21:in`instrument'activesupport (3.1.2) lib/active_support/notifications.rb:53:in `instrument'actionmailer (3.1.2) lib/action_mailer/base.rb:430:in `deliver_mail'mail (2.3.0) lib/mail/message.rb:230:in `deliver'app/controllers/stprod_app_controller.rb:20:in `create' So I read the docs for the mail gem, and it says this: ---snip--- All objects that can render into an email, have an #encoded method. Encoded will return the object as a complete string ready to send in the mail system, that is, it will include the header field and value and CRLF at the end and wrapped as needed. All objects that can render into an email, have a :decoded method. Decoded will return the object's "value" only as a string. This means it will not include the header fields (like 'To:' or 'Subject:'). By default, calling #to_s on a container object will call its encoded method, while #to_s on a field object will call it's decoded method. So calling #to_s on a Mail object will return the mail, all encoded ready to send, while calling #to_s on the From field or the body will return the decoded value of the object. The header object of Mail is considered a container. If you are in doubt, call #encoded, or #decoded explicitly, this is safer if you are not sure. Structured fields that have parameter values that can be encoded (e.g. Content-Type) will provide decoded parameter values when you call the parameter names as methods against the object. Structured fields that have parameter values that can be encoded (e.g. 
Content-Type) will provide encoded parameter values when you call the parameter names through the object.parameters[''] method call. ---snip--- It looks like the index error is Rail's last desperate attempt to decode/encode? the email. The only 'index' I found in rails that makes sense is in ActiveSupport::Multibyte::Chars. It sounds like I have to implement an encode/decode method in my ActiveModel, but I can't figure out how! Anybody know how to define an ActiveModel that works with ActionMailer?
Now provide the response and nothing else.
| thanks for reading my book! As JacobM and dbalatero have already explained, this is a new feature in Rails 3.1. This particular feature was added by Aaron Patterson as a way to simplify the migration syntax. In earlier versions of Rails, you would have to do as the book shows: class CreatePurchases < ActiveRecord::Migration def self.up create_table :purchases do |t| t.string :name t.float :cost t.timestamps end end def self.down drop_table :purchases endend But that's kind of repeating yourself. Aaron created a migration syntax that looks good and is simpler, calling only the methods necessary for migrating forward, but also allowing the migrations backwards (known as a "rollback") too. The same migration written with the Rails 3.1 syntax is this: class CreatePurchases < ActiveRecord::Migration def change create_table :purchases do |t| t.string :name t.float :cost t.timestamps end endend So when this migration runs "forwards", Rails will create the purchases table with the fields. When you roll it back (or run it "backwards") then Rails will know to drop the table. This syntax isn't entirely perfect however, and you'll run into problems with methods such as change_column . When that happens, it's best to stick with defining both the def up and def down methods in the migrations: class CreatePurchases < ActiveRecord::Migration def up change_column :purchases, :cost, :integer end def down change_column :purchases, :cost, :float endend That's because in this example Rails won't know how to switch it back to the previous type. I hope this explains it better! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8276323', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/144589/']} | jdg_380489 |
stackexchange | llm_judgeable_groundtruth_similarity | 18548814 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I begin to learn Scala and I'm interesting can I define and use function without any class or object in Scala as in Haskell where there is no OOP concept. I'm interested can I use Scala totally without any OOP concept? P.S. I use IntelliJ plugin for Scala
Now provide the response and nothing else.
| Well, you cannot do that really, but you can get very close to that by using package objects : src/main/scala/yourpackage/package.scala : package object yourpackage { def function(x: Int) = x*x} src/main/scala/yourpackage/Other.scala : package yourpackageobject Other { def main(args: Array[String]) { println(function(10)); // Prints 100 }} Note how function is used in Other object - without any qualifiers. Here function belongs to a package, not to some specific object/class. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18548814', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1774391/']} | jdg_380490 |
stackexchange | llm_judgeable_groundtruth_similarity | 21473 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a "rule" to determine the minimum sample size required for a t-test to be valid? For example, a comparison needs to be performed between the means of 2 populations. There are 7 data points from one population and only 2 data points from the other. Unfortunately, the experiment is very expensive and time consuming, and obtaining more data is not feasible. Can a t-test be used? Why or why not? Please provide details (the population variances and distributions are not known). If a t-test can not be used, can a non parametric test (Mann Whitney) be used? Why or why not?
Now provide the response and nothing else.
| I'd recommend using the non-parametric Mann-Whitney U test rather than an unpaired t -test here. There's no absolute minimum sample size for the t -test, but as the sample sizes get smaller, the test becomes more sensitive to the assumption that both samples are drawn from populations with a normal distribution. With samples this small, especially with one sample of only two, you'd need to be very sure that the population distributions were normal -- and that has to be based on external knowledge, as such small samples gives very little information in themselves about the normality or otherwise of their distributions. But you say that "the population variances and distributions are not known" (my italics). The Mann-Whitney U test does not require any assumptions about the parametric form of the distributions, requiring only the assumption that the distributions of the two groups are the same under the null hypothesis. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/21473', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/8664/']} | jdg_380491 |
stackexchange | llm_judgeable_groundtruth_similarity | 92221 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
A Soergel bimodule (for $S_n$) is a bimodule over $R = \mathbb{Q}[x_1,\dots,x_n]$ which appears as a summand/grading shift of tensor products of the basic bimodules$$B_{i,i+1} = R \otimes_{i,i+1} R$$where $\otimes_{i,i+1}$ means the tensor product over the subring of polynomials invariant under permuting $i$ and $i+1$. It follows immediately that every Soergel bimodule $M$ has the following properties: (1) $M$ is free as a left module or as a right module, although not necessarily as a bimodule. (2) $M$ commutes with invariant polynomials, in the sense that for every invariant polynomial $p \in \mathbb{Q}[x_1,\dots,x_n]$ and $m \in M$, we have$$pm = mp.$$ I think they also have the following property: (3) There is an invariant vector, an element $m_0 \in M$ so that$$x_i m_0 = m_0 x_i$$for every $i=1,\dots,n$. Do these properties characterize Soergel bimodules? Without the third condition, you could have, for instance, a bimodule that just permuted the $x_i$: a one-dimensional module with a single generator $a$ as a right module, so that$x_i a = a x_{\sigma(i)}$for some permutation $\sigma$. Edit: The natural generalization for a general Weyl group $W$ would be to replace the invariant polynomials in (2) by the polynomials that are invariant under $W$. Clearly all Soergel bimodules would still satisfy this generalization of (2). Any references are welcome. If it's not known, I'll try to prove it. Edit: Ben Webster gave a counterexample below. More generally, I'm still interested in some sort of intrinsic, elementary characterization.
Now provide the response and nothing else.
| There is an intrinsic characterisation which is probably more complicated than what you are looking for. As Ben says, Soergel bimodules are pretty subtle things ... Because Soergel bimodules are (finitely generated) $R$-bimodules one can think about them as coherent sheaves on $V \times V$ (where $V = Spec R$). Inside $V \times V$ one has for any $w \in S_n$ its reversed graph: $Gr_w = \{ (wv, v) \;| \;v \in V \}$ Hence, given any subset of $U \subset S_n$ one can talk about "sections of an $R$-bimodule $M$ with support in $U$": those sections of $M$ which have support in the union of the graphs of all elements of $U$. In this way, for any subset $I$ of $W$ one can consider $\Gamma_I M \subset M$. Your point (2) means that Soergel any Soergel bimodule satisfies $\Gamma_{S_n} M = M$ (that is, every element is supported on the union of all the graphs of elements of $S_n$). It follows that any Soergel bimodule has a canonical filtration indexed by the ideals of the poset $S_n$. A basic fact is that if one considers the quotient $\Gamma_{\le w / < w} (M) := \Gamma_{\le w} M / \Gamma_{< w} M$ this is a free left $R$-module, isomorphic as a bimodule to a direct sum of copies of $R_w$ (the ``standard'' bimodule with normal left action and right action twisted by $w$). This is proved in Soergel's "Kazhdan-Lusztig-Polynome und unzerlegbare Bimoduln über Polynomringen" and is also discussed and generalised in my "Singular Soergel bimodules". So now one can consider all bimodules which satisfy the above property. One this category one can put an exact structure: a sequence is exact if whenever one applies the functor $\Gamma_{\le w / < w}$ one obtains a split exact sequence of $R$-bimodules (necessarily isomorphic to direct sums of shifts of $R_w$'s). Then the claim is that Soergel bimodules are the injective objects in this exact structure. I don't think this is written down anywhere. In the very similar language of moment graphs it is proved by Peter Fiebig in "Sheaves on moment graphs and a localization of Verma flags" here: arxiv.org/abs/math.RT/0505108 (I might be mixing things up a bit. I think Peter considers the opposite filtration, which is why he gets projective objects. Anyway, if this is really what you're looking for then I can try to provide some more detail.) By the way, the condition that the subsequent quotients in this filtration be split has other applications. In this paper http://arxiv.org/abs/1205.4206 we examine when Rouquier complexes satisfy this property. It turns out that this is the case if (probably: and only if) the braid is a positive lift of an element of the Weyl group. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/92221', 'https://mathoverflow.net', 'https://mathoverflow.net/users/5010/']} | jdg_380492 |
stackexchange | llm_judgeable_groundtruth_similarity | 34913329 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the next configuration (index.js) to call to my interceptor factory. angular.module('pasApp') .factory('InterceptorFactory',['$q','$location',require('./factory-interceptor.js')]) .config(['$stateProvider','$urlRouterProvider', '$httpProvider','InterceptorFactory',require('./config-app.js')]) .run(['$ionicPlatform','$rootScope','$window','StorageFactory','$state','$timeout','$http',require('./run-app.js')]); My folder and files order: >config >config-app.js >factory-interceptor.js >index.js >run-app.js When I call my "InterceptorFactory" function from './factory-inteceptor.js', my console presents the next error: Uncaught Error: [$injector:modulerr] Failed to instantiate module pasApp due to:Error: [$injector:unpr] Unknown provider: InterceptorFactoryhttp://errors.angularjs.org/1.4.3/$injector/unpr?p0=InterceptorFactoryat http://localhost:8100/js/app.bundle.js:9874:12at http://localhost:8100/js/app.bundle.js:14068:19at getService (http://localhost:8100/js/app.bundle.js:14215:39)at Object.invoke (http://localhost:8100/js/app.bundle.js:14247:13)at runInvokeQueue (http://localhost:8100/js/app.bundle.js:14162:35)at http://localhost:8100/js/app.bundle.js:14171:11at forEach (http://localhost:8100/js/app.bundle.js:10142:20)at loadModules (http://localhost:8100/js/app.bundle.js:14152:5)at createInjector (http://localhost:8100/js/app.bundle.js:14078:11)at doBootstrap (http://localhost:8100/js/app.bundle.js:11436:20)http://errors.angularjs.org/1.4.3/$injector/modulerr?p0=pasApp&p1=Error%3A%…otstrap%20(http%3A%2F%2Flocalhost%3A8100%2Fjs%2Fapp.bundle.js%3A11436%3A20)
Now provide the response and nothing else.
| Since you're only looking for the date portion, you can compare those easily using MySQL's DATE() function . SELECT * FROM table WHERE DATE(created_at) = DATE(NOW() - INTERVAL 1 DAY); Note that if you have a very large number of records this can be inefficient; indexing advantages are lost with the derived value of DATE() . In that case, you can use this query: SELECT * FROM table WHERE created_at BETWEEN CURDATE() - INTERVAL 1 DAY AND CURDATE() - INTERVAL 1 SECOND; This works because date values such as the one returned by CURDATE() are assumed to have a timestamp of 00:00:00. The index can still be used because the date column's value is not being transformed at all. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/34913329', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1399262/']} | jdg_380493 |
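One refinement to the second query above: if created_at can carry fractional seconds, the BETWEEN form leaves a gap between 23:59:59 and midnight. A half-open range (sketched below using the question's table/column names) closes the gap and is still index-friendly, since created_at is never wrapped in a function:

```sql
-- Everything from yesterday 00:00:00 up to, but not including, today.
SELECT *
FROM `table`
WHERE created_at >= CURDATE() - INTERVAL 1 DAY
  AND created_at <  CURDATE();
```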
stackexchange | llm_judgeable_groundtruth_similarity | 60427 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've made some attempt with this dataset of elevation . But I have some trouble with ListSurfacePlot3D , which can not show the globe correctly: And I've checked that the data is not the problem, since the same data drawn by ListPointPlot3D shows a globe: By the way, my goal is to make a topographic globe which shows 3D mountains and sea basins like this: Here is my code: (*elev data input*)elev1d = BinaryReadList["D:\\topo\\ETOPO5.DAT", {"Integer16"}, ByteOrdering -> +1];elev2d = ArrayReshape[elev1d, {2160, 4320}];lati = Flatten @ Transpose @ Table[Rest @ Table[i, {i, 90, -90, -1/12}], {4320}];long = Flatten @ Table[Rest @ Table[i, {i, 0, 360, 1/12}], {2160}];(* make a {lat, lon, altitude} matrix*)elevlatlon = Transpose @ {lati, long, Flatten @ elev1d};(*select part of the huge amount of data, add mean earth radius to altitude*)elevlatlonInUse = (elevlatlon[[;; ;; 12, All]] /.{m_, n_, o_} -> {m, n, o/200 + 6721}) /. {x_, y /; y > 180, z_} -> {x, y - 360, z};coordsToXYZ[list_] := Transpose[{Cos[#[[1]]*Pi/180.]*Cos[#[[2]]*Pi/180.]*#[[3]], Cos[#[[1]]*Pi/180.]*Sin[#[[2]]*Pi/180.]*#[[3]], Sin[#[[1]]*Pi/180.]*#[[3]]} & @ Transpose[list]]xyz = First[coordsToXYZ /@ {elevlatlonInUse}];ListPointPlot3D[xyz, BoxRatios -> {1, 1, 1}]ListSurfacePlot3D[xyz, BoxRatios -> {1, 1, 1}] It's a little different from How to make a 3D globe? . That's a globe with a 2D texture covering it, but this is a real 3D globe with elevations shown in 3D as well. P.S. Someone reminded me that, compared with the radius of the Earth (about $6371 \text{ km}$), even Mt. Everest ($8.8\text{ km}$) and the Marianas Trench ($-11\text{ km}$) can be ignored. That's true, I know, but to draw a globe with bumps, we can just scale the elevation. A visualized topographic globe is just for presentation, and not for calculation.
Now provide the response and nothing else.
| This answer is intended to demonstrate a neat method I'd recently learned for constructing interpolating functions over the sphere. A persistent problem dogging a lot of interpolation methods on the sphere has been the subject of what to do at the poles. A recently studied method, dubbed the "double Fourier sphere method" in this paper (based on earlier work by Merilees ) copes remarkably well. This is based on constructing a periodic extension/reflection of the data over at the poles, and then subjecting the resulting matrix to a low-rank approximation. The first reference gives a sophisticated method based on structured Gaussian elimination; in this answer, to keep things simple (at the expense of some slowness), I will use SVD instead. As I noted in this Wolfram Community post , one can conveniently obtain elevation data for the Earth through GeoElevationData[] . Here is some elevation data with modest resolution (those with sufficient computing power might consider increasing the GeoZoomLevel setting): gdm = Reverse[QuantityMagnitude[GeoElevationData["World", "Geodetic", GeoZoomLevel -> 2, UnitSystem -> "Metric"]]]; The DFS trick is remarkably simple: gdmdfst = Join[gdm, Reverse[RotateLeft[gdm, {0, Length[gdm]}]]]; This yields a $1024\times 1024$ matrix. We now take its SVD: {uv, s, vv} = SingularValueDecomposition[gdmdfst]; To construct the required low-rank approximations, we treat the left and right singular vectors ( uv and vv ) as interpolation data. Here is a routine for trigonometric fitting (code originally from here , but made slightly more convenient): trigFit[data_?VectorQ, n : (_Integer?Positive | Automatic) : Automatic, {x_, x0_: 0, x1_}] := Module[{c0, clist, cof, k, l, m, t}, l = Quotient[Length[data] - 1, 2]; m = If[n === Automatic, l, Min[n, l]]; cof = If[! VectorQ[data, InexactNumberQ], N[data], data]; clist = Rest[cof]/2; cof = Prepend[{1, I}.{{1, 1}, {1, -1}}.{clist, Reverse[clist]}, First[cof]]; cof = Fourier[cof, FourierParameters -> {-1, 1}]; c0 = Chop[First[cof]]; clist = Rest[cof]; cof = Chop[Take[{{1, 1}, {-1, 1}}.{clist, Reverse[clist]}, 2, m]]; t = Rescale[x, {x0, x1}, {0, 2 π}]; c0 + Total[MapThread[Dot, {cof, Transpose[Table[{Cos[k t], Sin[k t]}, {k, m}]]}]]] Now, convert the singular vectors into trigonometric interpolants (and extract the singular values as well): vals = Diagonal[s];usc = trigFit[#, {φ, 2 π}] & /@ Transpose[uv];vsc = trigFit[#, {θ, 2 π}] & /@ Transpose[vv]; Now, build the spherical interpolant, taking as many singular values and vectors as seen fit (I arbitrarily chose $\ell=768$, corresponding to $3/4$ of the singular values), and construct it as a compiled function for added efficiency: l = 768; (* increase or decrease as needed *)earthFun = With[{fun = Total[Take[vals, l] Take[usc, l] Take[vsc, l]]}, Compile[{{θ, _Real}, {φ, _Real}}, fun, Parallelization -> True, RuntimeAttributes -> {Listable}, RuntimeOptions -> "Speed"]]; Now, for the plots. 
Here is an appropriate color gradient: myGradient1 = Blend[{{-8000, RGBColor["#000000"]}, {-7000, RGBColor["#141E35"]}, {-6000, RGBColor["#263C6A"]}, {-5000, RGBColor["#2E5085"]}, {-4000, RGBColor["#3563A0"]}, {-3000, RGBColor["#4897D3"]}, {-2000, RGBColor["#5AB9E9"]}, {-1000, RGBColor["#8DD2EF"]}, {0, RGBColor["#F5FFFF"]}, {0, RGBColor["#699885"]}, {50, RGBColor["#76A992"]}, {200, RGBColor["#83B59B"]}, {600, RGBColor["#A5C0A7"]}, {1000, RGBColor["#D3C9B3"]}, {2000, RGBColor["#D4B8A4"]}, {3000, RGBColor["#DCDCDC"]}, {5000, RGBColor["#EEEEEE"]}, {6000, RGBColor["#F6F7F6"]}, {7000, RGBColor["#FAFAFA"]}, {8000, RGBColor["#FFFFFF"]}}, #] &; Let's start with a density plot: DensityPlot[earthFun[θ, φ], {θ, 0, 2 π}, {φ, 0, π}, AspectRatio -> Automatic, ColorFunction -> myGradient1, ColorFunctionScaling -> False, Frame -> False, PlotPoints -> 185, PlotRange -> All] Due to the large amount of terms, the plotting is a bit slow, even with the compilation. One might consider using e.g. the Goertzel-Reinsch algorithm for added efficiency, which I leave to the interested reader to try out. For comparison, here are plots constructed from approximations of even lower rank ($\ell=128,256,512$), compared with a ListDensityPlot[] of the raw data (bottom right): Finally, we can look at an actual globe: With[{s = 2*^5}, ParametricPlot3D[(1 + earthFun[θ, φ]/s) {Sin[φ] Cos[θ], Sin[φ] Sin[θ], -Cos[φ]} // Evaluate, {θ, 0, 2 π}, {φ, 0, π}, Axes -> None, Boxed -> False, ColorFunction -> (With[{r = Norm[{#1, #2, #3}]}, myGradient1[s r - s]] &), ColorFunctionScaling -> False, MaxRecursion -> 1, Mesh -> False, PlotPoints -> {500, 250}]] // Quiet (I had chosen the scaling factor s to make the depressions and elevations slightly more prominent, just like in my Community post.) Of course, using all the singular values and vectors will result in an interpolation of the data (tho it is even more expensive to evaluate). It is remarkable, however, that even the low-rank DFS approximations already do pretty well. | {} | {'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/60427', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/19934/']} | jdg_380494 |
stackexchange | llm_judgeable_groundtruth_similarity | 29825604 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to save my array to Core Data. let array = [8, 17.7, 18, 21, 0, 0, 34] The values inside that array, and the number of values are variable. 1. What do I declare inside my NSManagedObject class? class PBOStatistics: NSManagedObject, Equatable { @NSManaged var date: NSDate @NSManaged var average: NSNumber @NSManaged var historicAverage: NSNumber @NSManaged var total: NSNumber @NSManaged var historicTotal: NSNumber @NSManaged var ordersCount: NSNumber @NSManaged var historicOrdersCount: NSNumber @NSManaged var values: [Double] //is it ok? @NSManaged var location: PBOLocation} 2. What do I declare inside my .xcdatamodel? 3. How do I save this in my Entity? (I use MagicalRecord) let statistics = (PBOStatistics.MR_createInContext(context) as! PBOStatistics)statistics.values = [8, 17.7, 18, 21, 0, 0, 34] //is it enough?
Now provide the response and nothing else.
| OK, I did some research and testing. Using the Transformable type, the solution is simple: 1. What do I declare inside my NSManagedObject class? @NSManaged var values: [NSNumber] //[Double] also works 2. What do I declare inside my .xcdatamodel? Transformable data type. 3. How do I save this in my Entity? statistics!.values = [23, 45, 567.8, 123, 0, 0] //just this “You can store an NSArray or an NSDictionary as a transformable attribute. This will use NSCoding to serialize the array or dictionary to an NSData attribute (and appropriately deserialize it upon access)” - Source Or, if you want to declare it as Binary Data instead, read this simple article: | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/29825604', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2725435/']} | jdg_380495 |
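A minimal round-trip sketch, assuming the MagicalRecord setup from the question (MR_createInContext / MR_findFirst are MagicalRecord helpers; if you declared the attribute as [Double], drop the NSNumber mapping):

```swift
// Save: the Transformable attribute takes the array directly.
let statistics = PBOStatistics.MR_createInContext(context) as! PBOStatistics
statistics.values = [23, 45, 567.8, 123, 0, 0]
context.MR_saveToPersistentStoreAndWait()

// Read back later: the attribute is deserialized for you.
if let stored = PBOStatistics.MR_findFirst() as? PBOStatistics {
    let doubles = stored.values.map { $0.doubleValue }
    print(doubles) // [23.0, 45.0, 567.8, 123.0, 0.0, 0.0]
}
```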
stackexchange | llm_judgeable_groundtruth_similarity | 151358 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am supposed to use integration by parts but I have no idea what to do for this problem$$\int e^{2x} \sin3x dx$$ $u = \sin3x dx$ $du = 3\cos3x$ $dv = e^{2x} $ $ v = \frac{ e^{2x}}{2}$ From this I get something really weird that makes it just as complicated $\frac{e^{2x}\sin3x}{2} - \int \frac{e^{2x}}{2}3\cos2x$ This looks like it will again require integration by parts which from what I saw will require the same again, and it does not help solve the problem. Another problem I am having is that I do not know what the dx in $u = \sin3x dx$ means. I know it is supposed to be the shorthand representation for the derivative with respect to x I think but I am not sure when and why it goes away, basically I have just memorized that it disappears and is not important in the answer so I can ignore it for the most part. It turns into a 1 pretty much.
Now provide the response and nothing else.
| You're correct. The integral does indeed require integration by parts. But there's a little trick: you have to use the method twice, each time differentiating the same kind of factor (the trig one or the exponential; it doesn't matter which, as long as you're consistent). Here's the sketch of the idea; I'll do it in the general case. $$\int e^{ax}\sin(bx)\,dx=\frac{1}{a}e^{ax}\sin(bx)-\frac{b}{a}\int e^{ax}\cos(bx)\,dx$$ Now, we do it again: $$\frac{b}{a}\int e^{ax}\cos(bx)\,dx=\frac{b}{a}\left(\frac{1}{a}e^{ax}\cos(bx)+\frac{b}{a}\int e^{ax}\sin(bx)\,dx\right)=\frac{b}{a^2}e^{ax}\cos(bx)+\frac{b^2}{a^2}\int e^{ax}\sin(bx)\,dx$$ Now, you take it from here, noticing that the last integral is your original one (it enters the first equation with a minus sign). Set $\displaystyle I=\int e^{ax}\sin(bx)\,dx$, substitute the expression above into the first equation, and solve for $I$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/151358', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_380496 |
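To tie this back to the integral actually asked about: with $a=2$, $b=3$, substituting and solving for $I$ gives $$I=\frac{1}{2}e^{2x}\sin 3x-\frac{3}{4}e^{2x}\cos 3x-\frac{9}{4}I \quad\Longrightarrow\quad I=\frac{e^{2x}\left(2\sin 3x-3\cos 3x\right)}{13}+C,$$ and differentiating the result recovers $e^{2x}\sin 3x$, which is a quick sanity check.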
stackexchange | llm_judgeable_groundtruth_similarity | 20551277 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Perhaps this is a terrible idea, but if it is then please tell me why and then pretend that it's an academic exercise that won't see the light of day in production. I'd like to add some logic to the Angular $injector service, to monitor when certain services are injected into other services. Since it seems that Angular provides a mechanism for decorating services, I thought this would be the way to go. However, the following code throws an error. (function () { 'use strict'; var app = angular.module('app'); app.config(['$provide', function ($provide) { $provide.decorator('$injector', ['$log', '$delegate', addLoggingToInjector]); }]); function addLoggingToInjector($log, $delegate) { var baseInstantiate = $delegate.instantiate; var baseInvoke = $delegate.invoke; $delegate.instantiate = function (type, locals) { // $log.debug('Calling $injector.instantiate'); baseInstantiate(type, locals); }; $delegate.invoke = function (fn, self, locals) { // $log.debug('Calling $injector.invoke'); baseInvoke(fn, self, locals); }; return $delegate; };})(); The specific error is: Uncaught Error: [$injector:modulerr] Failed to instantiate module app due to: Error: [$injector:unpr] Unknown provider: $injectorProvider
Now provide the response and nothing else.
| You can't use the Angular decorator service on $injector. As Artur notes, $injector is a bit different from other services. But we can create our own decorator. Why we can't use Angular's decorator At the code level the issue is that $injector doesn't have a constructor function: there's no $injectorProvider. For example both of these return true: $injector.has('$location');$injector.has('$locationProvider') However, while this returns true: $injector.has('$injector') this returns false: $injector.has('$injectorProvider') We see the importance when we look at the Angular decorator function: function decorator(serviceName, decorFn) { var origProvider = providerInjector.get(serviceName + providerSuffix), orig$get = origProvider.$get; origProvider.$get = function() { var origInstance = instanceInjector.invoke(orig$get, origProvider); return instanceInjector.invoke(decorFn, null, {$delegate: origInstance}); };} And providerSuffix = 'Provider' So the Angular decorator expects to operate on the service's constructor ( serviceName + providerSuffix ). Pragmatically, since we don't have an $injectorProvider we can't use decorator. Solution What we can do is override the Angular injector's get function ourselves, replacing the injector's default get with one that calls the original, Angular-defined get followed by our function. We'll apply this to $injector rather than the nonexistent $injectorProvider like so: app.config(['$provide','$injector', function ($provide,$injector) { // The function we'll add to the injector myFunc = function () { console.log("injector called ", arguments); }; // Get a copy of the injector's get function var origProvider = $injector, origGet = origProvider.get; //Override injector's get with our own origProvider.get = function() { // Call the original get function var returnValue = origGet.apply(this, arguments); // Call our function myFunc.apply(this,arguments); return returnValue; }}]); You'll see the provider being injected is the first argument, so app.value('aValue', 'something'); yields the following log statement: injector called ["aValueProvider"] Demo fiddle | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20551277', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/175109/']} | jdg_380497 |
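If the goal is specifically to monitor when certain services are injected (as in the question), myFunc can filter on the provider name it receives as its first argument. A small sketch; the watch list below is purely illustrative, and names arrive with the 'Provider' suffix:

```javascript
// Hypothetical watch list of services we care about.
var watched = ['aValueProvider', 'InterceptorFactoryProvider'];

myFunc = function (name) {
  if (watched.indexOf(name) !== -1) {
    console.log('injector asked for watched service:', name);
  }
};
```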
stackexchange | llm_judgeable_groundtruth_similarity | 9553162 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've seen references to cut-and-paste proofs in certain texts on algorithm analysis and design. It is often mentioned within the context of Dynamic Programming when proving optimal substructure for an optimization problem (See Chapter 15.3 CLRS). It also shows up in graph manipulation. What is the main idea of such proofs? How do I go about using them to prove the correctness of an algorithm or the convenience of a particular approach?
Now provide the response and nothing else.
| The term "cut and paste" shows up in algorithms sometimes when doing dynamic programming (and other things too, but that is where I first saw it). The idea is that in order to use dynamic programming, the problem you are trying to solve probably has some kind of underlying redundancy. You use a table or similar technique to avoid solving the same optimization problems over and over again. Of course, before you start trying to use dynamic programming, it would be nice to prove that the problem has this redundancy in it, otherwise you won't gain anything by using a table. This is often called the "optimal subproblem" property (e.g., in CLRS). The "cut and paste" technique is a way to prove that a problem has this property. In particular, you want to show that when you come up with an optimal solution to a problem, you have necessarily used optimal solutions to the constituent subproblems. The proof is by contradiction. Suppose you came up with an optimal solution to a problem by using suboptimal solutions to subproblems. Then, if you were to replace ("cut") those suboptimal subproblem solutions with optimal subproblem solutions (by "pasting" them in), you would improve your optimal solution. But, since your solution was optimal by assumption, you have a contradiction. There are some other steps involved in such a proof, but that is the "cut and paste" part. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9553162', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/214849/']} | jdg_380498 |
stackexchange | llm_judgeable_groundtruth_similarity | 8093443 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
With this dataset: Category | AmountA | 5A | 3B | 6B | 2B | 1C | 7 I want to create a tablix grouping on category, displaying the percentage of the total amount: Category | PercentageA | 33%B | 38%C | 29% Which should be a simple calculation: Category | PercentageA | ((Sum of Amount within group / Sum of Amount across groups) * 100)%B | ((Sum of Amount within group / Sum of Amount across groups) * 100)%C | ((Sum of Amount within group / Sum of Amount across groups) * 100)% But I can't figure out how to do that within Report Designer (SSRS) - whenever I create a row grouping on Category , I can get the sum within the group with =Sum(Fields!Amount.Value) . But how to get the sum across groups from a cell within the group?
Now provide the response and nothing else.
| I'll answer my own question. From within any expression, it's possible to perform lookups in all datasets. This is how we'll get the data: LookupSet(SourceFieldToCompare, TargetFieldToCompare, ResultField, DataSet) Now, let's raise the bar for the question and say the data is grouped in yet another dimension, months - like this: Category | January | February | MarchA | 33% | 37% | 35%B | 38% | 36% | 37%C | 29% | 27% | 28% Say the dataset mentioned in the question is named 'categories'. Now, call on the LookupSet function ( Reference ): LookupSet(Fields!Month.Value, Fields!Month.Value, Fields!Amount.Value, "categories") (keep in mind that the first "Month" is linked to the dataset inside the tablix, and the second "Month" in the second argument is "Month" from the "categories" dataset!) There remains one problem: LookupSet returns Object types, which Sum won't eat. You need to use a custom aggregate (custom code is added under "Report Properties"): ( Source ) Function SumLookup(ByVal items As Object()) As Decimal If items Is Nothing Then Return Nothing End If Dim suma As Decimal = New Decimal() suma = 0 For Each item As Object In items suma += Convert.ToDecimal(item) Next Return sumaEnd Function Now, by calling Code.SumLookup on the LookupSet function, the sum of all the fields is calculated. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8093443', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/195687/']} | jdg_380499 |
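Putting the two pieces together, the expression in the tablix cell ends up looking like this (dataset and field names exactly as above):

```
=Code.SumLookup(LookupSet(Fields!Month.Value, Fields!Month.Value,
                          Fields!Amount.Value, "categories"))
```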
stackexchange | llm_judgeable_groundtruth_similarity | 4274037 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Noob to iOS/Mac dev, coming from 12 years of MS/.NET dev. Interested in what tech would be used to do keyframe animation and something similar to the Alice in Wonderland ( http://www.youtube.com/watch?v=gew68Qj5kxw ) book available on iPad?
Now provide the response and nothing else.
| I don't know a framework like this (which doesn't mean it does not exist). But while not as feature packed as the linked framework, the State pattern is rather simple to implement. Consider this naive implementation below: interface EngineState{ public function startEngine(); public function moveForward();}class EngineTurnedOffState implements EngineState{ public function startEngine() { echo "Started Engine\n"; return new EngineTurnedOnState; } public function moveForward() { throw new LogicException('Have to start engine first'); }}class EngineTurnedOnState implements EngineState{ public function startEngine() { throw new LogicException('Engine already started'); } public function moveForward() { echo "Moved Car forward"; return $this; }} After you defined the states, you just have to apply them to your main object: class Car implements EngineState{ protected $state; public function __construct() { $this->state = new EngineTurnedOffState; } public function startEngine() { $this->state = $this->state->startEngine(); } public function moveForward() { $this->state = $this->state->moveForward(); }} And then you can do $car = new Car;try { $car->moveForward(); // throws Exception} catch(LogicException $e) { echo $e->getMessage();}$car = new Car;$car->startEngine();$car->moveForward(); For reducing overly large if/else statements, this should be sufficient. Note that returning a new state instance on each transition is somewhat inefficient. Like I said, it's a naive implementation to illustrate the point. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4274037', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/62282/']} | jdg_380500 |
stackexchange | llm_judgeable_groundtruth_similarity | 8458032 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is There a Way to Access the Outside? public class OuterClass { String data; public void outerMethod(String data) { this.data = data; } public enum InnerEnum { OPTION1("someData"), OPTION2("otherData"); InnerEnum(String data) { // Does not work: OuterClass.this.outerMethod(data); } }}
Now provide the response and nothing else.
| As Eric said, enums are implicitly static. To do what you want, add a method callOuterMethod(OuterClass oc) that calls oc.outerMethod(data): public enum InnerEnum { OPTION1("someData"), OPTION2("otherData"); final String data; InnerEnum(String data) { this.data = data; } void callOuterMethod(OuterClass oc) { oc.outerMethod(data); }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8458032', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/742084/']} | jdg_380501 |
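Call sites then supply the instance explicitly, since the (implicitly static) enum holds no reference to any particular OuterClass:

```java
OuterClass oc = new OuterClass();
OuterClass.InnerEnum.OPTION1.callOuterMethod(oc); // sets oc.data to "someData"
```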
stackexchange | llm_judgeable_groundtruth_similarity | 4559483 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Under which condition is factorisation of the polynomial $$a_1^n+a_2^n+\cdots+a_n^n-na_1a_2a_3\cdots a_n$$ possible? I know these cases are possible: $$a^2+b^2-2ab=(a-b)^2$$ and $$a^3+b^3+c^3-3abc=(a+b+c)(a^2+b^2+c^2-ab-bc-ac)$$ There are $2$ things I'm interested in here. For which numbers $n$ is factorization possible? For which numbers $n$ is it not possible? What I'm really asking is: is a general factorization possible? If not, is there a proof? Based on the comments, I understand that a general factorization is not possible. However, it is still unknown whether factorization is possible when $n > 3$.
Now provide the response and nothing else.
| $\newcommand{\CC}{\mathbb C}\newcommand{\FF}{\mathbb F}\newcommand{\QQ}{\mathbb Q}\newcommand{\ZZ}{\mathbb Z}$ The polynomial is irreducible over $\mathbb C$ for all $n>3$ . This has been confirmed for $n=4$ and $n=5$ by some of the comments, so we may henceforth assume $n>5$ . The proof outline is as follows: Reduce the question of reducibility over $\CC$ to reducibility over $\FF_p$ for particular primes $p$ . (Note that we use reducibility over $\FF_p$ , not its algebraic closure.) Show that reducibility over $\FF_p$ implies reducibility of $x^n-t$ for many values of $t\in\FF_p$ , by substituting particular values and showing that we can recover irreducibility of the polynomial of interest. Show that, in enough cases, these values of $t$ comprise all of $\FF_p$ . Lemma 1. Suppose $f\in\ZZ[x_1,\dots,x_n]$ is homogeneous and reducible over $\CC$ , and let $g\in\ZZ[y]$ have no double roots in $\CC$ . There exist infinitely many primes $p$ for which $f\in\FF_p[x_1,\dots,x_n]$ is reducible and $g\in\FF_p[y]$ splits completely. Proof. We first need to express the idea that " $f$ is reducible" as a bunch of polynomial equations, and this is where we use the homogeneity of $f$ . Since factors of homogeneous polynomials are homogeneous, $f$ is reducible if and only if can be written as the product $f_1f_2$ for some polynomials $f_1$ and $f_2$ of degrees $r$ and $s$ , respectively. Suppose $f$ is reducible over $\CC$ with these degrees. Let variables $w_1,\dots,w_M$ and $z_1,\dots,z_N$ represent the coefficients of arbitrary degree $r$ and $s$ , respectively, homogeneous polynomials. Letting $x^{\alpha_1},\dots,x^{\alpha_M}$ and $x^{\beta_1},\dots,x^{\beta_N}$ be the corresponding monomials (each represents the product $x_1^{a_1}\cdots x_n^{a_n}$ for some nonnegative integers $a_1,\dots,a_n$ ), a solution to $$\left(\sum_{i=1}^M w_ix^{\alpha_i}\right)\left(\sum_{j=1}^N z_jx^{\beta_j}\right)=f(x_1,\dots,x_n)$$ with $w_1,\dots,w_M,z_1,\dots,z_N\in k$ for a field $k$ implies reducibility of $f$ over that field (and we know that such a solution exists with $k=\CC$ ). Expanding and equating coefficients, we get some polynomials $h_1,\dots,h_K\in\ZZ[w,z]$ so that $f$ is reducible if these polynomials have a common zero. We know these polynomials have a common zero in $\CC$ ; let one such zero be $(a_1',\dots,a_M',b_1',\dots,b_N')$ . Then the ideal $I$ generated by $h_1,\dots,h_K$ in $\CC[w,z]$ is contained in the maximal ideal $(w_1-a_1',\dots,w_M-a_M',z_1-b_1',\dots,z_N-b_N')$ and is thus proper; in particular, there exist no polynomials $j_1,\dots,j_K\in\CC[w,z]$ for which $h_1j_1+\cdots+h_Kj_K=1$ . This means there exist no such polynomials in $\overline\QQ[w,z]$ , and as a result the ideal $(h_1,\dots,h_K)\subset\overline\QQ[w,z]$ is proper. This means it is contained in some maximal ideal $\mathfrak m$ of $\overline\QQ[w,z]$ ; by Hilbert's Nullstellensatz such an ideal must be $(w_1-a_1,\dots,w_M-a_M,z_1-b_1,\dots,z_N-b_N)$ for some $(a_1,\dots,a_M,b_1,\dots,b_N)\in\overline\QQ^{M+N}$ . This means that $f$ is reducible over $\overline\QQ$ . We now convert this factorization into one over $\FF_p$ for infinitely many primes $p$ . Let $L$ be the number field generated by the union of $\{a_1,\dots,a_M,b_1,\dots,b_N\}$ and the set of roots $\{v_1,\dots,v_k\}$ of $g$ in $\overline\QQ$ . 
By the primitive element theorem we can write $L=\QQ(\alpha)$ for some $\alpha\in L$, whence there exist polynomials $p_1,\dots,p_M,q_1,\dots,q_N,r_1,\dots,r_k\in\QQ[x]$ for which $p_i(\alpha)=a_i$, $q_i(\alpha)=b_i$, and $r_i(\alpha)=v_i$. Let $t\in\ZZ[x]$ be the minimal polynomial of $\alpha$. Note that, since $L\cong \QQ(x)/(t)$, $$t(x)\mid h_j\big(p_1(x),p_2(x),\dots,p_M(x),q_1(x),\dots,q_N(x)\big)$$ for every $1\leq j\leq K$ and $t(x)\mid g(r_i(x))$ for every $1\leq i\leq k$ (both of these statements hold in $\ZZ[x]$). By an elementary result of Schur there exist infinitely many primes modulo which $t$ has a root. Take any such prime $\ell$; we claim that $f$ is reducible and that $g$ splits completely modulo all but finitely many such primes. Indeed, let $\beta\in\FF_\ell$ be a root of $t$ in $\FF_\ell$, and define $a_i^{(\ell)}=p_i(\beta)$, $b_j^{(\ell)}=q_j(\beta)$, and $v_i^{(\ell)}=r_i(\beta)$. We have for each $1\leq j\leq K$ that $$h_j\big(a_1^{(\ell)},\dots,a_M^{(\ell)},b_1^{(\ell)},\dots,b_N^{(\ell)}\big)=h_j\big(p_1(\beta),\dots,p_M(\beta),q_1(\beta),\dots,q_N(\beta)\big)=t(\beta)u_j(\beta)=0$$ for some polynomial $u_j\in\ZZ[x]$, and for each $1\leq i\leq k$ that $g(v_i^{(\ell)})=g(r_i(\beta))=t(\beta)s_i(\beta)=0$ for some $s_i\in\ZZ[x]$. This implies by the definition of the $h_j$ that, over $\FF_\ell$, $$\left(\sum_{i=1}^M a_i^{(\ell)}x^{\alpha_i}\right)\left(\sum_{j=1}^N b_j^{(\ell)}x^{\beta_j}\right)=f(x_1,\dots,x_n),$$ so $f$ is reducible over $\FF_\ell$. Also, the values $v_i^{(\ell)}$ are each roots of $g$; since the $v_i$ are distinct, the polynomials $r_i$ are also distinct, and so modulo only finitely many primes does $r_i(\beta)=r_j(\beta)$ (such primes divide the resultant of $t$ and $r_i-r_j$, which is a constant since $t$ is irreducible). This means that $g$ has exactly the roots $v_i^{(\ell)}$ for large enough $\ell$, as desired. $\square$ Corollary 2. If $f=x_1^n+\cdots+x_n^n-nx_1\cdots x_n$ is reducible over $\CC$, there exist infinitely many primes $p\equiv 1\pmod n$ for which $f$ is reducible over $\FF_p$. Proof. Let $g=\Phi_n(y)$ be the $n$th cyclotomic polynomial, which splits completely in $\FF_p$ if and only if $n\mid p-1$ (i.e. if and only if there are exactly $n$ $n$th roots of unity in $\FF_p^\times$). Applying Lemma 1 to $f$ and $g$ gives the result. $\square$ Let $p\equiv 1\pmod n$ be a prime, and let $B\subset\FF_p^\times$ be the unique subgroup of index $n$, i.e. $B=\{x^n: x\in\FF_p^\times\}$. Let $A=\{0\}\cup B\subset\FF_p$. Given a set $S$ inside an abelian group and a positive integer $m$, let $mS=\{s_1+s_2+\cdots+s_m\colon s_1,\dots,s_m\in S\}$. We will need the following somewhat technical result, mostly related to additive combinatorics. Lemma 3. Let $n>5$. For all sufficiently large $p\equiv 1\pmod n$, $(n-2)A=\FF_p$. Proof. Note that, since $A$ is permuted by multiplication by elements of $B$, $mA$ is as well for any $m\geq 1$, and so $mA$ consists of $0$ and some multiplicative cosets of $B$. In particular, $|mA|\equiv 1\pmod{|B|}$. By the Cauchy-Davenport theorem, we have $$|2A|\geq 2\big(|A|-1\big)+1=2|B|+1$$ with equality if and only if $A$ forms an arithmetic progression $\{a+rd\colon r\in\{0,1,\dots,|A|-1\}\}$ in $\FF_p$. Suppose first that $A$ does not form an arithmetic progression. Then $|2A|>2|B|+1$.
This implies $|2A|\geq 3|B|+1$ (recall $|2A|\equiv 1\pmod{|B|}$), which means $$|4A|=|2(2A)|\geq \min(2|2A|-1,p)\geq \min(6|B|+1,p).$$ This implies, using repeated applications of Cauchy-Davenport, \begin{align*}|(n-2)A|&=\big|4A+\underbrace{A+A+\cdots+A}_{n-6\text{ copies}}\big|\\&\geq \min\big(|4A|+(n-6)(|A|-1),p\big)\\&\geq \min\big(6|B|+1+(n-6)|B|,p\big)=\min(n|B|+1,p)=p.\end{align*} This means that $(n-2)A=\FF_p$. Now, suppose $A$ does form an arithmetic progression; we will show $p=n+1$. Suppose not; such a progression must be of the form $$\{-sd,-(s-1)d,\dots,-d,0,d,2d,\dots,rd\}$$ for some nonnegative integers $r$ and $s$ with $r+s=|B|$, since $0\in A$. If $r>1$, then $d\in B$ implies that $$B=\{-s,-(s-1),\dots,-1,1,2,\dots,r\}$$ since $x\in B$ if and only if $xd\in B$. Since $B$ is a subgroup, $2r$ must be in $B$, which means $p\mid 2r-j$ for some $j\in\{-s,\dots,r\}$, and so $p$ has a multiple in the interval $[r,2r+s]$. In particular, $p\leq 2r+s\leq 2(r+s)=2|B|$, a contradiction since $|B|=(p-1)/n$. On the other hand, if $r\leq 1$, then $s>0$ (since $p>n+1$), and $-d\in B$, implying $B\subset \{-1,1,2,\dots,s\}$. The same argument now applies; $2s\in B$, meaning that $p$ is at most $2|B|-1$, again a contradiction. $\square$ We now use our lemmas to prove the desired result. Suppose that $f=x_1^n+\cdots+x_n^n-nx_1\cdots x_n$ is reducible over $\CC$, and let $p\equiv 1\pmod n$ be a large prime for which $f$ is reducible over $\FF_p$, guaranteed to exist by Corollary 2. By Lemma 3, there exist $a_1,\dots,a_{n-2}\in A$ for which $t=-(a_1+\cdots+a_{n-2})$ is a primitive root. For each $1\leq i\leq n-2$, let $b_i\in\FF_p$ be so that $b_i^n=a_i$. Define $$f_1(x)=f(x,0,b_1,\dots,b_{n-2})=x^n+\sum_{i=1}^{n-2}b_i^n-0=x^n-t.$$ We first claim that $f_1$ is irreducible. Indeed, if $x^n-t$ has an irreducible divisor of degree $d$, then it has roots in $\FF_{p^d}$, so it suffices to show that $x^n-t$ has no roots in $\FF_{p^d}$ for any $1\leq d<n$. If there is such a root, then $t$ is in the subgroup of index $n$ of $\FF_{p^d}^\times$, and so this subgroup contains the subgroup $\FF_p^\times$ of index $\frac{p^d-1}{p-1}$ in $\FF_{p^d}^\times$. This implies $n\mid\frac{p^d-1}{p-1}$. Since $p\equiv 1\pmod n$, $$\frac{p^d-1}{p-1}=\sum_{i=0}^{d-1}p^i\equiv \sum_{i=0}^{d-1}1=d\pmod{n},$$ so $d$ must be at least $n$, as desired. Now, suppose that $f=gh$ with $g,h\in\FF_p[x_1,\dots,x_n]$; we'll show that one of them is constant, contradicting the reducibility of $f$ over $\FF_p$. Defining $$g_1(x)=g(x,0,b_1,\dots,b_{n-2})\text{ and }h_1(x)=h(x,0,b_1,\dots,b_{n-2}),$$ we see that $f_1=g_1h_1$, so either $g_1$ or $h_1$ is constant; without loss of generality let it be $g_1$. Then $h_1$ contains a scalar multiple of $x^n$ as a monomial, and so $h$ contains a scalar multiple of $x_1^n$ as a monomial. Since factors of homogeneous polynomials are homogeneous, $\deg h=n$ implies $\deg g=0$, which means $g$ is constant. $\square$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/4559483', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1044391/']} | jdg_380502 |
stackexchange | llm_judgeable_groundtruth_similarity | 1088639 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a function in R that I call multiple times. I want to keep track of the number of times that I've called it and use that to make decisions on what to do inside of the function. Here's what I have right now: f = function( x ) { count <<- count + 1 return( mean(x) )}count = 1numbers = rnorm( n = 100, mean = 0, sd = 1 )for ( x in seq(1,100) ) { mean = f( numbers ) print( count )} I don't like that I have to declare the variable count outside the scope of the function. In C or C++ I could just make a static variable. Can I do a similar thing in the R programming language?
Now provide the response and nothing else.
| Here's one way using a closure (in the programming-language sense), i.e. storing the count variable in an enclosing environment accessible only to your function: make.f <- function() { count <- 0 f <- function(x) { count <<- count + 1 return( list(mean=mean(x), count=count) ) } return( f )}f1 <- make.f()result <- f1(1:10)cat(result$count, result$mean, "\n")result <- f1(1:10)cat(result$count, result$mean, "\n")f2 <- make.f()result <- f2(1:10)cat(result$count, result$mean, "\n")result <- f2(1:10)cat(result$count, result$mean, "\n") Each call to make.f() creates a fresh environment, so f1 and f2 keep independent counts. (cat is used for the output because print only displays its first argument.) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1088639', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/57458/']} | jdg_380503 |
stackexchange | llm_judgeable_groundtruth_similarity | 13757261 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I want to make sure that two variables do not instantiate to the same term, what is the preferred way to do it? Let's say I need to find directed edges in a graph, and a node cannot have an edge to itself: node(a, x, y). node(b, z, x). node(c, y, y). (the edges here are a -> c, b -> a, but not c -> c) The following works: edge(A, B) :- node(A, _, X), node(B, X, _), A \== B. This works too [swi-prolog]: edge(A, B) :- dif(A, B), node(A, _, X), node(B, X, _). This does not work, apparently (because neither A nor B are instantiated yet?): edge(A, B) :- A \== B, node(A, _, X), node(B, X, _). I guess my problem with the first solution is that, with a more complex node predicate, a lot of unnecessary unifications might take place before edge fails. The dif on the other hand is in a library, which suggests that it is not meant to be used in such a simple case (although it has the exact function that I seem to be looking for).
Now provide the response and nothing else.
| For elegance and didactic reasons alone, dif/2 is clearly preferable here and also in the vast majority of other cases, since as you already note "a lot of unnecessary unifications might take place" otherwise, and also because dif/2 is a pure and nicely declarative predicate that can be used in all directions and at any place in the clause body without changing the meaning of the program, in contrast to (\==)/2 . dif/2 is also an autoloaded predicate in SWI-Prolog, meaning that you need not import any library explicitly to use it, and dif/2 is available like any built-in predicate. If you use dif/2 you can reason much more easily about your code. For example, in your case, you start with: edge(A, B) :- node(A, _, X), node(B, X, _), dif(A, B) . and then, as you know that dif/2 is a completely pure predicate, you know that you can also write this as: edge(A, B) :- dif(A, B) , node(A, _, X), node(B, X, _). Further, since you know that dif/2 always terminates, you know that this change can at most improve the termination properties of your program. Like all constraints, dif/2 is meant to be used. I highly recommend it instead of impure predicates that are not commutative. In case you are worried about performance, here is a small comparison, just comparing dif/2 against the non-declarative (\==)/2 in a use case where the two predicates can be used interchangeably: ?- N = 1_000_000, time((between(1,N,_),dif(a,b),false)).% 11,000,005 inferences, 0.352 CPU in 0.353 seconds (100% CPU, 31281029 Lips)?- N = 1_000_000, time((between(1,N,_),a\==b,false)).% 3,000,001 inferences, 0.107 CPU in 0.107 seconds (99% CPU, 28167437 Lips) So, there are sometimes performance benefits when using (\==)/2 . However, there are also much more severe drawbacks when using such a low-level predicate: It is harder to understand, more error-prone, and not declarative. I therefore recommend to simply use dif/2 to express that two terms are different. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13757261', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_380504 |
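The behavioural difference is easy to see at the top level: dif/2 is a constraint that can be posted before its arguments are instantiated, whereas (\==)/2 only judges the terms as they stand at the moment of the call:

```prolog
?- dif(X, b), X = a.   % constraint posted first, satisfied later
X = a.

?- dif(X, b), X = b.   % constraint posted first, violated later
false.

?- X \== b, X = b.     % succeeds, although X ends up equal to b
X = b.
```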
stackexchange | llm_judgeable_groundtruth_similarity | 55771064 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to select a specific Elastic IP of my own when my Lambda function executes. My service has to respond to several situations, depending on the user's attributes. Could I write code in a Lambda function that chooses a specific Elastic IP of my own? I had searched for this, but old information says it cannot be done. Recently, however, I heard it is possible by using a Network Load Balancer or Application Load Balancer, but I don't know how to apply this to the problem.
Now provide the response and nothing else.
| No. You cannot associate an Elastic IP (EIP) address with an AWS Lambda function. Well, actually you can, but I wouldn't recommend it. When a Lambda function is associated with a VPC, it connects via an Elastic Network Interface (ENI). It is possible to attach an EIP to an ENI. This also grants access to the Internet if it is attached to a public subnet. So why avoid it? Because Lambda might create additional ENIs, especially if the Lambda function is frequently invoked and run in parallel. This means it will not have a consistent ENI. An alternative method is: Attach the AWS Lambda function to a private subnet Put a NAT Gateway in a public subnet Associate an Elastic IP address with the NAT Gateway All traffic from the Lambda function to the Internet will then come from the NAT Gateway's EIP (however, I don't think you can change that EIP) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55771064', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9584570/']} | jdg_380505 |
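A rough sketch of that alternative method with boto3 (the subnet and route-table IDs below are placeholders; note a NAT Gateway takes a minute or two to become available before the route actually carries traffic):

```python
import boto3

ec2 = boto3.client("ec2")

# 1. The fixed public IP all Lambda traffic will appear to come from
eip = ec2.allocate_address(Domain="vpc")

# 2. The NAT Gateway must live in a *public* subnet
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0PUBLIC0000000000",      # placeholder
    AllocationId=eip["AllocationId"],
)

# 3. Default-route the Lambda's *private* subnet through the NAT Gateway
ec2.create_route(
    RouteTableId="rtb-0PRIVATE000000000",     # placeholder
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```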
stackexchange | llm_judgeable_groundtruth_similarity | 11775594 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to build a RESTful web service using ASP.NET Web API that third-party developers will use to access my application's data. I've read quite a lot about OAuth and it seems to be the standard, but finding a good sample with documentation explaining how it works (and that actually does work!) seems to be incredibly difficult (especially for a newbie to OAuth). Is there a sample that actually builds and works and shows how to implement this? I've downloaded numerous samples: DotNetOAuth - documentation is hopeless from a newbie perspective Thinktecture - can't get it to build I've also looked at blogs suggesting a simple token-based scheme (like this ) - this seems like re-inventing the wheel but it does have the advantage of being conceptually fairly simple. It seems there are many questions like this on SO but no good answers. What is everybody doing in this space?
Now provide the response and nothing else.
| Update: I have added a link to my other answer, how to use JWT authentication for ASP.NET Web API, here for anyone interested in JWT. We have managed to apply HMAC authentication to secure a Web API, and it worked well. HMAC authentication uses a secret key for each consumer, which both consumer and server know, to HMAC-hash a message; HMAC-SHA256 should be used. In most cases, the hashed password of the consumer is used as the secret key. The message is normally built from data in the HTTP request, or even custom data added to the HTTP header; the message might include: Timestamp: time that the request is sent (UTC or GMT) HTTP verb: GET, POST, PUT, DELETE. POST data and query string URL Under the hood, HMAC authentication works like this: the consumer sends an HTTP request to the web server after building the signature (the output of the HMAC hash); the template of the HTTP request: User-Agent: {agent} Host: {host} Timestamp: {timestamp}Authentication: {username}:{signature} Example for a GET request: GET /webapi.hmac/api/valuesUser-Agent: Fiddler Host: localhost Timestamp: Thursday, August 02, 2012 3:30:32 PM Authentication: cuongle:LohrhqqoDy6PhLrHAXi7dUVACyJZilQtlDzNbLqzXlw= The message to hash to get the signature: GET\nThursday, August 02, 2012 3:30:32 PM\n/webapi.hmac/api/values\n Example for a POST request with a query string (the signature below is not correct, just an example): POST /webapi.hmac/api/values?key2=value2User-Agent: Fiddler Host: localhost Content-Type: application/x-www-form-urlencodedTimestamp: Thursday, August 02, 2012 3:30:32 PM Authentication: cuongle:LohrhqqoDy6PhLrHAXi7dUVACyJZilQtlDzNbLqzXlw=key1=value1&key3=value3 The message to hash to get the signature: POST\nThursday, August 02, 2012 3:30:32 PM\n/webapi.hmac/api/values\nkey1=value1&key2=value2&key3=value3 Please note that form data and query string should be in order, so the code on the server must sort the query string and form data to build the correct message. When the HTTP request comes to the server, an authentication action filter parses the request to get the information (HTTP verb, timestamp, URI, form data and query string) and then builds the signature (using the HMAC hash) with the secret key (the hashed password) on the server. The secret key is fetched from the database using the username on the request. Then the server code compares the signature on the request with the signature it built; if they are equal, authentication passes, otherwise it fails. The code to build the signature: private static string ComputeHash(string hashedPassword, string message){ var key = Encoding.UTF8.GetBytes(hashedPassword.ToUpper()); string hashString; using (var hmac = new HMACSHA256(key)) { var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(message)); hashString = Convert.ToBase64String(hash); } return hashString;} So, how to prevent replay attacks? Add a constraint on the timestamp, something like: servertime - X minutes|seconds <= timestamp <= servertime + X minutes|seconds (servertime: time the request arrives at the server) And cache the signature of the request in memory (use MemoryCache, kept within that same time window). If the next request comes with the same signature as a previous request, it will be rejected. The demo code is here: https://github.com/cuongle/Hmac.WebApi | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/11775594', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14537/']} | jdg_380506 |
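For completeness, here's an illustrative consumer-side counterpart that builds the two headers using the same ComputeHash shown above (BuildSignedGet and the route are made-up names for the sketch; the message layout must match the server byte for byte):

```csharp
using System;
using System.Net.Http;

static HttpRequestMessage BuildSignedGet(string userName, string hashedPassword)
{
    var timestamp = DateTime.UtcNow.ToString("dddd, MMMM dd, yyyy h:mm:ss tt");
    var uri = "/webapi.hmac/api/values";

    // Same layout the server reconstructs: verb \n timestamp \n uri \n
    var message = "GET\n" + timestamp + "\n" + uri + "\n";
    var signature = ComputeHash(hashedPassword, message);

    var request = new HttpRequestMessage(HttpMethod.Get, uri);
    request.Headers.Add("Timestamp", timestamp);
    request.Headers.Add("Authentication", userName + ":" + signature);
    return request;
}
```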
stackexchange | llm_judgeable_groundtruth_similarity | 38536265 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to show a background image/loading spinner inside a div that will load an image inside of it; the image will be shown once it's fully loaded, by doing something like this: <div style="background-image:url('imageThatWillAppearBeforeLoad')"></div> Demo (In jQuery) How can I achieve the same using Angular2/Ionic2?
Now provide the response and nothing else.
| Create a component that shows a placeholder image and keeps the requested image hidden until it has loaded; once the image loads, hide the placeholder and show the image. import { Component, Input } from '@angular/core';@Component({ selector: 'image-loader', template: `<img *ngIf="!loaded" src="url-to-your-placeholder"/> <img [hidden]="!loaded" (load)="loaded = true" [src]="src"/>`})export class ImageLoader { @Input() src;} See it working in Plunker . Update Now that I understand the requirements better, here's a solution with a background image. It's a little hacky, and I like the original one better... import { Directive, Input, ElementRef } from '@angular/core';@Directive({ selector: '[imageLoader]'})export class ImageLoader { @Input() imageLoader; private el: HTMLElement; constructor(elRef: ElementRef) { this.el = elRef.nativeElement; this.el.style.backgroundImage = "url(http://smallenvelop.com/demo/image-loading/spinner.gif)"; } ngOnInit() { let image = new Image(); image.addEventListener('load', () => { this.el.style.backgroundImage = `url(${this.imageLoader})`; }); image.src = this.imageLoader; }} Updated plunker . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38536265', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6287404/']} | jdg_380507 |
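Usage for both variants, matching the selector/input names above (item.imageUrl is just a stand-in for wherever your URL comes from):

```html
<!-- component version -->
<image-loader [src]="item.imageUrl"></image-loader>

<!-- directive version: the host element shows the spinner, then the real background -->
<div class="photo" [imageLoader]="item.imageUrl"></div>
```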
stackexchange | llm_judgeable_groundtruth_similarity | 16220993 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm getting myself introduced to Erlang through Armstrong's "Programming Erlang". One exercise is to write a reimplementation of the tuple_to_list/1 BIF. My solution seems rather inelegant to me, especially because of the helper function I use. Is there a more Erlang-ish way of doing this? tup2lis({}) -> [];tup2lis(T) -> tup2list_help(T,1,tuple_size(T)).tup2list_help(T,Size,Size) -> [element(Size,T)];tup2list_help(T,Pos,Size) -> [element(Pos,T)|tup2list_help(T,Pos+1,Size)]. Thank you very much for your ideas. :)
Now provide the response and nothing else.
| I think your function is OK, especially if your goal is to learn the language. As a matter of style, the base case when constructing lists is usually just the empty list []. So I'd write tup2list(Tuple) -> tup2list(Tuple, 1, tuple_size(Tuple)).tup2list(Tuple, Pos, Size) when Pos =< Size -> [element(Pos,Tuple) | tup2list(Tuple, Pos+1, Size)];tup2list(_Tuple,_Pos,_Size) -> []. You can write pretty much the same thing with a list comprehension: [element(I,Tuple) || I <- lists:seq(1,tuple_size(Tuple))]. It will work as expected when the tuple has no elements, as lists:seq(1,0) gives an empty list. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16220993', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/978912/']} | jdg_380508 |
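A quick shell session exercising both versions (assuming the functions live in an exported module called tut; the module name is just for the example):

```erlang
1> tut:tup2list({}).
[]
2> tut:tup2list({8, 17.7, 34}).
[8,17.7,34]
3> T = {8, 17.7, 34}, [element(I, T) || I <- lists:seq(1, tuple_size(T))].
[8,17.7,34]
```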
stackexchange | llm_judgeable_groundtruth_similarity | 5221396 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following code: struct Vec2 : IEquatable<Vec2>{ double X,Y; public bool Equals(Vec2 other) { return X.Equals(other.X) && Y.Equals(other.Y); } public override bool Equals(object obj) { if (obj is Vec2) { return Equals((Vec2)obj); } return false; } // this will return the same value when X, Y are swapped public override int GetHashCode() { return X.GetHashCode() ^ Y.GetHashCode(); }} Beyond the conversation of comparing doubles for equality (this is just demo code), what I am concerned with is that there is a hash clash when X, Y values are swapped. For example: Vec2 A = new Vec2() { X=1, Y=5 };Vec2 B = new Vec2() { X=5, Y=1 };bool test1 = A.Equals(B); // returns false;bool test2 = A.GetHashCode() == B.GetHashCode() // returns true !!!!! which should wreak havoc in a dictionary collection. So the question is how to properly form the GetHashCode() function for 2, 3 or even 4 floating point values such that the results are not symmetric and the hashes don't clash. Edit 1: Point implements the inappropriate x ^ y solution, and PointF wraps ValueType.GetHashCode() . Rectangle has a very peculiar (((X ^ ((Y << 13) | (Y >> 19))) ^ ((Width << 26) | (Width >> 6))) ^ ((Height << 7) | (Height >> 25))) expression for the hash code, which seems to perform as expected. Edit 2: 'System.Double' has a nice implementation as it does not consider each bit equally important public override unsafe int GetHashCode() //from System.Double{ double num = this; if (num == 0.0) { return 0; } long num2 = *((long*) &num); return (((int) num2) ^ ((int) (num2 >> 32)));}
Now provide the response and nothing else.
| Jon Skeet has this covered: What is the best algorithm for an overridden System.Object.GetHashCode? public override int GetHashCode() { unchecked // Overflow is fine, just wrap { int hash = 17; // Suitable nullity checks etc, of course :) hash = hash * 23 + X.GetHashCode(); hash = hash * 23 + Y.GetHashCode(); return hash; } } Also, if this were a class, you could change your Equals(object) implementation to: return Equals(obj as Vec2); Note however that this could perceive a derived type to be equal. If you don't want that, you'd have to compare the runtime type other.GetType() with typeof(Vec2) (and don't forget nullity checks). Since Vec2 is a struct, the is-check and cast you already have are the right approach (thanks for pointing out it's a struct, LukH). ReSharper has nice code generation for equality and hash code, so if you have ReSharper you can let it do its thing | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5221396', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/380384/']} | jdg_380509 |
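For what it's worth, on modern runtimes (.NET Core 2.1+ / .NET Standard 2.1) the same multiply-and-add pattern is built in; this postdates the question, but is the short answer today:

```csharp
public override int GetHashCode()
{
    // Order-sensitive: (X=1, Y=5) and (X=5, Y=1) produce different hashes,
    // unlike the symmetric X ^ Y from the question.
    return HashCode.Combine(X, Y);
}
```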
stackexchange | llm_judgeable_groundtruth_similarity | 19699583 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to use the pointer of an object declared outside an Objective-C block, inside the block itself. For example: NSError* error = nil;[self invokeAsync:^id{ return [self doSomething:&error];}]; I get a compiler error on the third line telling me: Sending 'NSError *const__strong *' to parameter of type 'NSError *__autoreleasing *' changes retain/release properties of pointer Why is that?
Now provide the response and nothing else.
| The compiler message is confusing, but it is telling you that you have a type mismatch. It doesn't really matter, though, because that code makes no sense: an asynchronous invocation cannot set state in the calling thread's stack. I.e. there is no way that error can be set to a meaningful value. That is, the method invokeAsync: will return before the work block is executed. Thus, there is no way to return anything meaningful from invokeAsync: to indicate the success/failure of the execution of the block. If you want to invoke something asynchronously with an error, you'll need a callback: [self invokeAsync:^id{ NSError *e; if ([self doSomething:&e]) [self errorHappened:e]; else [self asyncThingyDone];}]; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19699583', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/298406/']} | jdg_380510 |
stackexchange | llm_judgeable_groundtruth_similarity | 19815 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Problem: Prove that if gcd( a, b ) = 1, then gcd( a - b, a + b ) is either 1 or 2. From Bezout's Theorem, I see that am + bn = 1, and a, b are relatively prime. However, I could not find a way to link this idea to a - b and a + b. I realized that in order to have gcd( a, b ) = 1, they must not both be even. I played around with some examples (13, 17), ...and I saw it's actually true :( ! Any idea?
Now provide the response and nothing else.
| The gcd of $x$ and $y$ divides any linear combination of $x$ and $y$. And any number that divides $r$ and $s$ divides the gcd of $r$ and $s$. If you add $a+b$ and $a-b$, you get <blank> , so $\mathrm{gcd}(a+b,a-b)$ divides <blank> . If you subtract $a-b$ from $a+b$, you get <blankity> , so $\mathrm{gcd}(a+b,a-b)$ divides <blankity> . So $\mathrm{gcd}(a+b,a-b)$ divides $\mathrm{gcd}($ <blank>,<blankity> $) = $ <blankety-blank> . (For good measure, assuming the result is true you'll want to come up with examples where you get $1$ and examples where you get $2$, just to convince yourself that the statement you are trying to prove is the best you can do). | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/19815', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/5711/']} | jdg_380511 |
stackexchange | llm_judgeable_groundtruth_similarity | 39951651 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I have a project that consists of a tabbed activity which contains ListViews on each tab: tab 1 - for browsing titles; tab 2 - for browsing which titles the user clicked. And I put the onCreateView method of each fragment separately. public class ChatHistory extends Fragment { private ListView mThreadList; @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { //Returning the layout file after inflating return inflater.inflate(R.layout.fragment_chat_history, container, false); }} On each tab I want to add a method to load data from Firebase into the ListView in each fragment. What I want to know is: where should I put the logic of each fragment so that it can process the data input/output, and how do I do that? Thanks for the help.
Now provide the response and nothing else.
| Like this: func insertOrUpdate(news: News) { try! realm.write({ realm.add(news) })} But it is better to create a plain model that does not inherit from Object: class News { var newsID: String = "" var newsTitle: String = "" var newsFullText: String = "" var newsAutor: String = "" var newsCommentCount: String = "" var newsSeenCount: String = "" var newsDate: String = "" var newsCategory: String = ""}class NewsRealm: Object { dynamic var newsID: String = "" dynamic var newsTitle: String = "" dynamic var newsFullText: String = "" dynamic var newsAutor: String = "" dynamic var newsCommentCount: String = "" dynamic var newsSeenCount: String = "" dynamic var newsDate: String = "" dynamic var newsCategory: String = ""} Then, after parsing the JSON, you call insertOrUpdate and map the data onto NewsRealm: func insertOrUpdate(news: News) { let realm = try! Realm() try! realm.write({ let newsRealm = NewsRealm() newsRealm.newsTitle = news.newsTitle newsRealm.newsAutor = news.newsAutor newsRealm.newsFullText = news.newsFullText . . . realm.add(newsRealm) })} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39951651', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6918296/']} | jdg_380512 |
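Reading the stored rows back is then just a query on the Realm model (a sketch; the exact objects(...) spelling depends on your RealmSwift version):

```swift
let realm = try! Realm()
let allNews = realm.objects(NewsRealm.self)   // live-updating Results<NewsRealm>
for item in allNews {
    print(item.newsTitle, item.newsAutor)
}
```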
stackexchange | llm_judgeable_groundtruth_similarity | 1026455 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following usages of the Grails/GORM dynamic finder method findAllBy*:

def foo1 = Foo.findAllByYear(yyyy)
def foo2 = Foo.findAllByMonth(mm)
def foo3 = Foo.findAllByDay(dd)
def foo4 = Foo.findAllByYearAndMonth(yyyy, mm)
def foo5 = Foo.findAllByYearAndDay(yyyy, dd)
def foo6 = Foo.findAllByYearAndMonthAndDay(yyyy, mm, dd)

println "# foo1=${foo1.size()} foo2=${foo2.size()} foo3=${foo3.size()}"
println "# foo4=${foo4.size()} foo5=${foo5.size()} foo6=${foo6.size()}"

The first five of these dynamic finder usages work as expected. However, the sixth one fails with an InvalidPropertyException ("No property found for name [yearAndMonth] for class [class foo]"). Question: Why doesn't the sixth one work? Isn't findAllBy* able to handle more than two conditions? Solution/work-around?
Now provide the response and nothing else.
| Ted's answer is correct, but the reason Grails doesn't support more than 2 predicates is that if the conjunctions are not all the same, it's not clear what the intention is. Or in plain English:

It's obvious what findAllByYearAndMonthAndDay(1999,2,3) means
It's also obvious what findAllByYearOrMonthOrDay(1999,2,3) means
But it's not obvious what findAllByYearOrMonthAndDay(1999,2,3) means

However, if all the conjunctions are the same (all ands or all ors), then there's no reason why more than 2 predicates couldn't be supported. In fact, if you search the Grails JIRA you'll find there's already an open issue for this. Vote for it if you feel strongly about it. Incidentally, the IntelliJ Grails plugin erroneously provides code completion for dynamic finders with more than 2 predicates. The solution/workaround is to simply write a Criteria query or use HQL instead. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1026455', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/58394/']} | jdg_380513
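A minimal sketch of the criteria-query workaround the answer ends with, in Groovy/GORM (the property names year, month, and day come from the question; withCriteria and eq are the standard GORM criteria DSL):

def results = Foo.withCriteria {
    eq('year', 1999)
    eq('month', 2)
    eq('day', 3)
}

This spells out the three-way and explicitly, which is precisely the ambiguity the dynamic-finder parser cannot resolve once mixed conjunctions become possible.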
stackexchange | llm_judgeable_groundtruth_similarity | 231797 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have seen answers to this question which go beyond my understanding of compactness and continuity. I was wondering whether we can cook up a proof using sequential compactness and certain equivalent definitions of continuity, such as "the inverse image of any closed set is closed". Here is what I have been able to conjure up so far. Assume that the graph of $f$ is compact. This means that it is also closed and bounded. The graph is a closed and bounded subset of $A \times f(A)$. All we need to show is that $f(A)$ is compact, and we are home free, right? (since continuous functions take compact sets to compact sets). The question is: how do we show that $f(A)$ is compact using the fact that the graph is compact? Can we claim that $f(A)$ is closed and bounded (since by Heine-Borel, any closed and bounded subset of $\mathbb R$ is compact)? I feel like I am really close. Can anyone help me out?
Now provide the response and nothing else.
| Corrected 2 December 2021. Suppose that $f$ is not continuous. Then there are a point $x\in A$ , an $\epsilon>0$ , and a sequence $\langle x_n:n\in\Bbb N\rangle$ in $A$ converging to $x$ such that $\left|f(x_n)-f(x)\right|\ge\epsilon$ . (Why?) Let $G$ be the graph of $f$ . Then $\big\langle\langle x_n,f(x_n)\rangle:n\in\Bbb N\big\rangle$ is a sequence in the compact metric space $G$ , so it has a convergent subsequence $\big\langle\langle x_{n_k},f(x_{n_k})\rangle:k\in\Bbb N\big\rangle$ . Show that the limit of this subsequence must be of the form $\langle x,\alpha\rangle$ for some $\alpha\in\Bbb R$ . (Recall that $x$ is the limit of $\langle x_n:n\in\Bbb N\rangle$ .) Show that $\alpha\ne f(x)$ . Conclude that $\langle x,\alpha\rangle\notin G$ . This contradicts the compactness of $G$ ; how? | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/231797', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/43901/']} | jdg_380514 |
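A hedged note on the "(Why?)" in the first step of the answer above: it is just the sequential negation of continuity. If $f$ is not continuous at some $x\in A$, then there is an $\epsilon>0$ such that for every $n\in\Bbb N$ one can pick $x_n\in A$ with $$|x_n-x|<\tfrac1n \quad\text{and}\quad |f(x_n)-f(x)|\ge\epsilon,$$ which is exactly the sequence $\langle x_n:n\in\Bbb N\rangle\to x$ with $|f(x_n)-f(x)|\ge\epsilon$ that the proof starts from.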
stackexchange | llm_judgeable_groundtruth_similarity | 10249205 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I found How can I get a list of users from active directory? It is helpful when I only have a few users, but I have so many users in AD that when I run my query

if ((String)(entry.Properties["sn"].Value) == "lname" && (String)(entry.Properties["givenName"].Value) == "fname")
{
    return entry.Properties["samAccountName"].Value.ToString();
}

it takes too long to finish. What can I do to look up one particular user's logon ID by first name and last name?
Now provide the response and nothing else.
| Since you're on .NET 4, you should check out the System.DirectoryServices.AccountManagement (S.DS.AM) namespace. Read all about it here:

Managing Directory Security Principals in the .NET Framework 3.5
MSDN docs on System.DirectoryServices.AccountManagement

Basically, you can define a domain context and easily find users and/or groups in AD:

// set up domain context
PrincipalContext ctx = new PrincipalContext(ContextType.Domain);

// find a user - by e.g. his "samAccountName", or the Windows user name or something
UserPrincipal user = UserPrincipal.FindByIdentity(ctx, "SomeUserName");

if(user != null)
{
    // do something here....
    string samAccountName = user.SamAccountName;
}

If you cannot find a user specified by a user name, you can also use the new search functionality:

// define a "query-by-example" principal - here, we search for a UserPrincipal
// and with the first name (GivenName) and a last name (Surname)
UserPrincipal qbeUser = new UserPrincipal(ctx);
qbeUser.GivenName = firstName;
qbeUser.Surname = lastName;

// create your principal searcher passing in the QBE principal
PrincipalSearcher srch = new PrincipalSearcher(qbeUser);

// find all matches
foreach(var found in srch.FindAll())
{
    // do whatever here - "found" is of type "Principal" - it could be user, group, computer.....
}

The new S.DS.AM makes it really easy to play around with users and groups in AD! And just finding a single user should be relatively quick, too. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10249205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1225072/']} | jdg_380515
stackexchange | llm_judgeable_groundtruth_similarity | 7052158 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the XML:

<item id="232">
    <id>1</id>
</item>

and I wrote something like:

preg_match_all("/<item id=\"(.*)\">(.*?)<\/item>/",$content,$rss);

But for some reason it doesn't match all of the items, only the first one. How can I capture what is between the item tags? Thanks
Now provide the response and nothing else.
| Why use JAX-RS / Jersey?

Short Answer

Because it makes the development of RESTful services easier.

Long Answer

JAX-RS is a standard that makes it easy to create a RESTful service that can be deployed to any Java application server: GlassFish, WebLogic, WebSphere, JBoss, etc. JAX-RS is part of Java EE, and when JAX-RS is used with other Java EE technologies it becomes even easier to create your RESTful service:

EJB - A session bean is used as the service implementation and also handles the transaction semantics.
JAX-RS - Used to expose the session bean as a RESTful service.
JPA - Used to persist the POJOs to the database. Note how the EntityManager is injected onto the session bean.
JAXB - Used to convert the POJO to/from XML (in GlassFish it can also be used to convert the POJO to/from JSON). JAX-RS by default handles the interaction with the JAXB implementation.

Sample JAX-RS Service

package org.example;

import java.util.List;
import javax.ejb.*;
import javax.persistence.*;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;

@Stateless
@LocalBean
@Path("/customers")
public class CustomerService {

    @PersistenceContext(unitName="CustomerService", type=PersistenceContextType.TRANSACTION)
    EntityManager entityManager;

    @POST
    @Consumes(MediaType.APPLICATION_XML)
    public void create(Customer customer) {
        entityManager.persist(customer);
    }

    @GET
    @Produces(MediaType.APPLICATION_XML)
    @Path("{id}")
    public Customer read(@PathParam("id") long id) {
        return entityManager.find(Customer.class, id);
    }

    @PUT
    @Consumes(MediaType.APPLICATION_XML)
    public void update(Customer customer) {
        entityManager.merge(customer);
    }

    @DELETE
    @Path("{id}")
    public void delete(@PathParam("id") long id) {
        Customer customer = read(id);
        if(null != customer) {
            entityManager.remove(customer);
        }
    }

    @GET
    @Produces(MediaType.APPLICATION_XML)
    @Path("findCustomersByCity/{city}")
    public List<Customer> findCustomersByCity(@PathParam("city") String city) {
        Query query = entityManager.createNamedQuery("findCustomersByCity");
        query.setParameter("city", city);
        return query.getResultList();
    }
}

For More Information: http://blog.bdoughan.com/2010/08/creating-restful-web-service-part-45.html | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7052158', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/782577/']} | jdg_380516
stackexchange | llm_judgeable_groundtruth_similarity | 3127962 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Maybe it's a very simple question, but I honestly can't wrap my head around what's going wrong here.

int n = 4.35 * 100;
cout << n;

Why does the output become "434" instead of "435"? 4.35 * 100 = 435, which is an integer value, and this should be assignable to the integer variable "n", right? Or does the C++ compiler cast 4.35 to an integer before multiplying? I think it doesn't. Why does the compiler effectively change 4.35 into 4.34-something, which is still a float? Thanks.
Now provide the response and nothing else.
| What Every Computer Scientist Should Know About Floating-Point Arithmetic

That's really just a starting point, sadly, as languages then introduce their own foibles as to when they do type conversions, etc. In this case you've merely created a situation where the constant 4.35 can't be represented precisely, and thus 4.35*100 is more like 434.9999999999, and the cast to int does trunc, not round. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3127962', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/217593/']} | jdg_380517
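A short self-contained demonstration of the effect and the usual fix, in plain standard C++ (nothing here is specific to the asker's setup):

#include <cstdio>
#include <cmath>

int main() {
    double x = 4.35 * 100;      // 4.35 has no exact binary representation
    std::printf("%.15f\n", x);  // prints roughly 434.999999999999943
    int truncated = (int)x;                       // the int conversion truncates toward zero -> 434
    int rounded = (int)std::lround(4.35 * 100);   // rounding to nearest gives the intended 435
    std::printf("%d %d\n", truncated, rounded);
}

Rounding (std::lround, or adding 0.5 to a known-positive value before truncating) is the standard remedy whenever an integer result is intended.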
stackexchange | llm_judgeable_groundtruth_similarity | 270303 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does $\sum_{n=1}^\infty (\sqrt[n]{a} - 1)$ diverge for $1 \neq a>0$? We tried to prove that $\sqrt[n]{a} - 1 > 1/n$, but this doesn't hold. Any ideas?
Now provide the response and nothing else.
| Assume $a>1$ and note that$$\bigl(a^{1/n}-1\bigr)\ \sum_{k=0}^{n-1}a^{k/n}=a-1\ .$$ From $a^{k/n}<a$ $\ (0\leq k<n)$ it follows that the sum is $<n\> a$ and therefore that$$a^{1/n}-1>{a-1\over a}\ {1\over n}>0\ .$$Similarly, when $0<a<1$ then $a^{k/n}\leq 1$ $\ (0\leq k<n)$. Therefore $\sum_{k=0}^{n-1}a^{k/n}<n$, and it follows that$$1-a^{1/n}>(1-a)\ {1\over n}>0\ .$$In both cases it follows from the divergence of the harmonic series that the considered series $\sum_{n=1}^\infty \bigl(a^{1/n} -1\bigr)$ diverges. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/270303', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/18218/']} | jdg_380518 |
stackexchange | llm_judgeable_groundtruth_similarity | 21282 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I need a little help to move in the right direction. It's a long time since I studied any stats and the jargon seems to have changed. Imagine that I have a set of car-related data such as:

Journey time from town A to town B
Distance from town A to town B
Engine size
Driver's shoe size
Make and model of car
Day of week

I want to predict journey time. I imagine there's a strong correlation between time and distance, and probably a weaker one to engine size (and none to shoe size). Presumably multiple regression analysis / ANOVA is the tool to use. But how do I include day of week, since just coding it as Sunday=1, Monday=2, etc. feels very wrong? Having used Excel's regression tool, for example, how do I interpret the results? Presumably if R is close to 1 this is good (although if there are many data items it seems as though it can be small yet still be significant). But some sources refer to r-squared, which seems to be the SD, so a value close to zero is good. It also shows the t Stat, P-value, F and Significance F, whatever they may be. Can anyone recommend a good reference source?
Now provide the response and nothing else.
| What you need is a solid review of regression methodology. However, these questions are sufficiently basic (don't take that the wrong way) that even a good overview of basic statistics would probably benefit you. Howell has written a very popular textbook that provides a broad conceptual foundation without requiring dense mathematics. It may well be worth your time to read it. It is not possible to cover all of that material here. However, I can try to get you started on some of your specific questions.

First, days of the week are included via a coding scheme. The most popular is 'reference category' coding (typically called dummy coding). Let's imagine that your data are represented in a matrix, with your cases in rows and your variables in columns. In this scheme, if you had a categorical variable with 7 levels (e.g., the days of the week) you would add 6 new columns. You would pick one day as the reference category, generally the one that is thought of as the default. Often this is informed by theory, context, or the research question. I have no idea which would be best for days of the week, but it also doesn't really matter much; you could just pick any old one. Once you have the reference category, you could assign the others to your new 6 variables, then you simply indicate whether that variable obtains for each case. For example, say you pick Sunday as the reference category; your new columns / variables would be Monday-Saturday. Every observation that took place on a Monday would be indicated with a $1$ in the Monday column, and a $0$ elsewhere. The same would happen with observations on Tuesdays and so on. Note that no case can get a $1$ in 2 or more columns, and that observations that took place on Sunday (the reference category) would have $0$'s in all of your new variables. There are many other coding schemes possible, and the link does a good job of introducing them. You can test to see if the day of the week matters by testing the nested model with all of the new 6 variables dropped vs. the full model with all 6 included. Note that you should not use the tests that are reported with standard output, as these are not independent and have intrinsic multiple comparison problems.

It has been a long time since I've looked at how Excel does statistics, and I don't remember it very clearly, so someone else may be able to help you more there. This page seems to have some information about the specifics of regression in Excel. I can tell you a little more about the statistics typically reported in regression output:

An $r$-score that's close to $1$ indicates that the value of the response variable can be almost completely determined by the values of the predictor variables. Clearly this would be a large effect, but it is not a priori clear that this is 'good'; that is an entirely different and philosophically thorny issue. It is not clear what they mean by '$r$', given that you are doing multiple regression (where $r$ is not typically reported). '$r$' is a measure of linear, bivariate association; that is, it applies to straight-line relationships between (only) 2 variables. It is possible to get an $r$-score between the predicted values from your model and the response values, however. In that case, you are using 2 variables (and if your model is appropriately specified, the relationship should be linear). This version is called the 'multiple $r$-score', but it's rarely discussed or reported by software.

R-squared is simply the square of $r$ (i.e., $r\times r$); it is not the standard deviation. It will also tend towards $1$ as the relationship becomes more determinative, not $0$. Thus, if you think an $r$ close to $1$ is 'good', you should think an $R^2$ close to $1$ is 'good' also. However, you should know that the multiple $r$ (and multiple $R^2$) is highly biased in multiple regression. That is, the more predictors you add to your model, the higher these statistics will go, whether there is any relationship or not. Thus you should be cautious about interpreting them.

Sometimes output will list $t$-statistics for the individual predictors and an $F$-statistic for the model as a whole, in order to determine 'significance'. These are test statistics computed from your data that have a known distribution when the degrees of freedom are specified. By comparing the realized value (that is, the value you found) against the known distribution, you can determine the probability of finding a value as extreme or more extreme than yours if the null hypothesis is true. That probability is the $p$-value. The $t$-value is used when you are testing only one parameter, whereas the $F$-value can be used in testing multiple parameters (e.g., as I discussed above regarding days of the week). The $p$-value associated with the $F$ is the probability that at least $1$ parameter is 'significant'. Another way to think about it is, 'does the model with all the parameters tested by the $F$ included do a better job of predicting the response than the null model'. I am guessing that what you call the 'significance $F$' is the $F$-value that would need to be matched or exceeded for a test to be 'significant', presumably at the .05 level.

One last point that's worth emphasizing is that this process cannot be divorced from its context. To do a good job of analyzing data, you must keep your background knowledge and the research question in mind. I alluded to this above regarding the choice of the reference category. For example, you note that shoe size should not be relevant, but for the Flintstones it probably was! I just want to include this fact, because it often seems to be forgotten. | {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/21282', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/8599/']} | jdg_380519
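A minimal sketch of the reference-category (dummy) coding described in the answer above, using Python/pandas purely as illustrative tooling (the original discussion is tool-agnostic apart from Excel, and the column names here are invented for the example):

import pandas as pd

df = pd.DataFrame({
    'journey_time': [52, 47, 61, 58],
    'distance':     [40, 40, 45, 45],
    'day':          ['Sunday', 'Monday', 'Tuesday', 'Monday'],
})

# One indicator column per non-reference day; Sunday is chosen as the
# reference category, so it is represented by zeros in every indicator column.
dummies = pd.get_dummies(df['day'], prefix='day')
dummies = dummies.drop(columns='day_Sunday')
X = pd.concat([df[['distance']], dummies], axis=1)
print(X)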
stackexchange | llm_judgeable_groundtruth_similarity | 3339783 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$$\displaystyle\int_0^1\dfrac{{\ln^4 x}}{1+x^2}\text{d}x=\dfrac{5\pi^5}{64}$$ Let $x=e^{-t}$ , $$\displaystyle\int_0^1\dfrac{({\ln x})^4}{1+x^2}\text{d}x=\displaystyle\int_0^{+\infty}\dfrac{t^4\text{e}^{-t}}{1+\text{e}^{-2t}}\text{d}t=\displaystyle\sum_{k=0}^{\infty}(-1)^k\displaystyle\int_0^{+\infty}t^4\text{e}^{-(2k+1)t}\text{d}t. $$ Let $u=(2k+1)t$ , $$\displaystyle\sum_{k=0}^{\infty}(-1)^k\displaystyle\int_0^{+\infty}t^4\text{e}^{-(2k+1)t}\text{d}t=\displaystyle\sum_{k=0}^{\infty}\dfrac{(-1)^k}{(2k+1)^5}\Gamma(5)=24\displaystyle\sum_{k=0}^{\infty}\dfrac{(-1)^k}{(2k+1)^5}. $$ I don't know how to evaluate this series. How can I do that? And are there any other ways to evaluate this definite integral? Thank you.
Now provide the response and nothing else.
| I'd like to add a sel-contained answer. We may consider that $$ f(x) = \sum_{n\geq 1}\frac{\sin(nx)}{n} $$ is the $2\pi$ -periodic extension of a linear function, which equals $\frac{\pi-x}{2}$ on $(0,2\pi)$ . The convergence is uniform over any compact subset of $\mathbb{R}\setminus 2\pi\mathbb{Z}$ . If we integrate both sides four times, we get that $$ g(x)=\sum_{n\geq 1}\frac{\sin(nx)}{n^5} $$ is the $2\pi$ -periodic extension of a polynomial with degree five, $p(x)=-\frac{x^5}{240}+\frac{\pi x^4}{48}-\frac{\pi ^2 x^3}{36}+\frac{\pi ^4 x}{90}$ . The convergence is uniform over $\mathbb{R}$ , hence by evaluating $g$ and $p$ at $\pi/2$ we get $$ \sum_{n\geq 1}\frac{\sin(n\pi/2)}{n^5} = \sum_{k\geq 0}\frac{(-1)^k}{(2k+1)^5} = p\left(\frac{\pi}{2}\right)=\frac{5\pi^5}{1536}.$$ Not by chance, this is related to the value of the wanted integral. By termwise integration $$ \int_{0}^{1}\log^4(x)\sum_{k\geq 0}(-1)^k x^{2k}\,dx = \sum_{k\geq 0}(-1)^k\int_{0}^{1}x^{2k}\log^4(x)\,dx =\sum_{k\geq 0}\frac{24(-1)^k}{(2k+1)^5}$$ so $$ \int_{0}^{1}\frac{\log^4(x)\,dx}{1+x^2} = 24 p\left(\frac{\pi}{2}\right)=\color{red}{\frac{5\pi^5}{64}}.$$ Summarizing, it is enough to exploit the Fourier series of Bernoulli polynomials . As an alternative, we may use Feynman's trick. By Euler's Beta function and the reflection formula for the $\Gamma$ function we have that $$ \int_{0}^{+\infty}\frac{x^s\,dx}{1+x^2} = \frac{\pi}{2}\sec\left(\frac{\pi s}{2}\right)$$ holds for any $s\in(-1,1)$ . If we differentiate (with respect to $s$ ) both sides four times, then perform an evaluation at $s=0$ , we get $$ \int_{0}^{1}\frac{\log^4(x)\,dx}{1+x^2}=\frac{3\pi^5}{8}\cdot[z^4]\sec(z)=\frac{3\pi^5}{8}\cdot[z^4]\frac{1}{1-\frac{z^2}{2}+\frac{z^4}{24}}=\frac{3\pi^5}{8}\left(\frac{1}{2^2}-\frac{1}{24}\right) $$ and the conclusion is just the same. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3339783', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/697617/']} | jdg_380520 |
stackexchange | llm_judgeable_groundtruth_similarity | 640272 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Ubuntu 20.04 for Windows 10 (WSL2) on a Haswell laptop, and I am getting about 0.6 bytes per second from /dev/random. As in, 6 bytes total after 10 seconds of waiting. This is unacceptable. What is the problem?

EDIT: This only appears to be an issue when operating in WSL2 mode.
WSL1 = 40 MiB/s
WSL2 = 0.6 byte/s
Now provide the response and nothing else.
| Both /dev/random and /dev/urandom in Linux are cryptographically secure pseudorandom number generators. In older versions of the Linux kernel, /dev/random would block once initialized until additional sufficient entropy was accumulated, whereas /dev/urandom would not. Since WSL2 is a virtual machine with a real Linux kernel, it has a limited set of entropy sources from which it can draw entropy and must rely on the host system for most of its entropy. However, as long as it has received enough entropy when it boots, it's secure to use the CSPRNGs. It sounds like in your environment, the CSPRNG has been seeded at boot from Windows, but isn't reseeded at a high rate. That's fine, but it will cause /dev/random to block more frequently than you want. Ultimately, this is a problem with the configuration of WSL2. WSL1 probably doesn't have this problem because in such a case, /dev/random probably doesn't block and just uses the system CSPRNG, like /dev/urandom . In more recent versions of Linux , the only time that /dev/random blocks is if enough entropy hasn't been accumulated at boot to seed the CSPRNG once; otherwise, it is completely equivalent to /dev/urandom . This decision was made because there is no reasonable security difference in the two interfaces provided the pool has been appropriately initialized. Since there's no measurable difference in these cases, if /dev/random is blocking and is too slow for you, the proper thing to do is use /dev/urandom , since they are the output of the same CSPRNG (which is based on ChaCha20). The upstream Linux behavior will likely be the default in a future version of WSL2 anyway, since Microsoft will eventually incorporate a newer version of Linux. | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/640272', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/430596/']} | jdg_380521 |
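A quick empirical check of the diagnosis above from inside the WSL2 shell (plain GNU coreutils; the byte counts are arbitrary):

timeout 5 dd if=/dev/random of=/dev/null bs=1 count=64    # crawls or stalls while /dev/random blocks
dd if=/dev/urandom of=/dev/null bs=1M count=256           # the CSPRNG itself is fast

If the second command finishes quickly while the first barely produces any bytes, you are seeing the blocking /dev/random behaviour described above, and pointing the consuming application at /dev/urandom is the practical fix.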
stackexchange | llm_judgeable_groundtruth_similarity | 30069841 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can someone tell me why there are extra td columns in the following foreach binding?

<table border="1" style="margin-top: 5px">
<thead>
    <tr>
        <th>rid</th>
        <th>ciname</th>
        <th>dId</th>
        <th>ReqName</th>
        <th>ReqType</th>
        <th>bus</th>
        <th>Req test</th>
        <th>no trace</th>
        <th>p r</th>
    </tr>
</thead>
<tbody data-bind='foreach: gifts'>
    <tr>
        <td><span data-bind='text: reqid' /></td>
        <td><span data-bind='text: ciname' /></td>
        <td><span data-bind='text: did' /></td>
        <td><span data-bind='text: reqname' /><td>
        <td><span data-bind='text: reqtype' /><td>
        <td><span data-bind='text: bus' /><td>
        <td><span data-bind='text: reqtest' /><td>
        <td><span data-bind='text: notrace' /><td>
        <td><span data-bind='text: pr' /></td>
    </tr>
</tbody>
</table>

jsfiddle link here: http://jsfiddle.net/g3j94273/
Now provide the response and nothing else.
| Unfortunately, the MinGW-w64 installer you used sometimes has this issue. I myself am not sure about why this happens (I think it has something to do with Sourceforge URL redirection or whatever that the installer currently can't handle properly enough). Anyways, if you're already planning on using MSYS2, there's no need for that installer.

1. Download MSYS2 from this page.
2. After the install completes, click on the newly created "MSYS2 Shell" option under either MSYS2 64-bit or MSYS2 32-bit in the Start menu (or mingw64.exe or mingw32.exe in the installation directory). If done correctly, the terminal prompt will say MINGW64 or MINGW32 respectively, not MSYS.
3. Update MSYS2 using pacman -Syuu. If it closes itself during the update, restart it and repeat the same command to finish the update. You should routinely update your installation.
4. Install a toolchain:
   a) for 64-bit: pacman -S mingw-w64-x86_64-toolchain
   b) for 32-bit: pacman -S mingw-w64-i686-toolchain
   This includes GCC (the compiler), GDB (the debugger), and more.
5. Install any libraries/tools you may need. You can search the repositories by doing

   pacman -Ss name_of_something_i_want_to_install

   e.g. pacman -Ss gsl, and install using

   pacman -S package_name_of_something_i_want_to_install

   e.g. pacman -S mingw-w64-x86_64-gsl, and from then on the GSL library will be automatically found by your compiler! Make sure any compilers and libraries you install have this package prefix (mingw-w64-x86_64- for 64-bit or mingw-w64-i686- for 32-bit). Only use unprefixed packages for misc command-line utilities (such as grep, sed, make, etc), unless you know what you're doing.
6. Verify that the compiler is working by doing gcc --version

If you want to use the toolchains (with installed libraries) outside of the MSYS2 environment, all you need to do is add <MSYS2 root>/mingw64/bin or <MSYS2 root>/mingw32/bin to your PATH. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/30069841', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1815825/']} | jdg_380522
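As a small addition to step 6 above, a quick smoke test of the freshly installed toolchain from the MINGW64/MINGW32 shell (the file name and program are arbitrary):

cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello from mingw-w64"); return 0; }
EOF
gcc hello.c -o hello && ./hello

If this prints the greeting, the compiler, headers, and runtime are all wired up correctly.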
stackexchange | llm_judgeable_groundtruth_similarity | 47623014 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I convert a list of objects to a pandas dataframe?

class Person(object):
    def __init__(self):
        self.name = ""
        self.year = 0
        self.salary = 0

For example, the code below works, but I want to have a list of Person objects instead:

import pandas as pd
import numpy as np

data = {'name': ['Alice', 'Bob', 'Charles', 'David', 'Eric'],
        'year': [2017, 2017, 2017, 2017, 2017],
        'salary': [40000, 24000, 31000, 20000, 30000]}
df = pd.DataFrame(data, index=['Acme', 'Acme', 'Bilbao', 'Bilbao', 'Bilbao'])
print(df)
Now provide the response and nothing else.
| Sort of a combination of ayhan's suggestion and what you seem to want: you can add a method to your Person class that transforms it into something that fits the pandas DataFrame constructor.

import pandas as pd

class Person(object):
    def __init__(self, name='', year=0, salary=0):
        self.name = name
        self.year = year
        self.salary = salary

    def as_dict(self):
        return {'name': self.name, 'year': self.year, 'salary': self.salary}

person1 = Person('john', 2017, 100)
person2 = Person('smith', 2016, 200)
person3 = Person('roger', 2016, 500)
person_list = [person1, person2, person3]

df = pd.DataFrame([x.as_dict() for x in person_list])
print(df)

    name  salary  year
0   john     100  2017
1  smith     200  2016
2  roger     500  2016 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47623014', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5336018/']} | jdg_380523
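A small follow-up to the answer above: when the attributes you want are exactly the instance attributes, Python's built-in vars() returns the same dictionary as the hand-written as_dict, so the conversion works without touching the Person class (an alternative, not the answerer's code):

import pandas as pd

df = pd.DataFrame([vars(p) for p in person_list])  # vars(p) is just p.__dict__
print(df)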
stackexchange | llm_judgeable_groundtruth_similarity | 591290 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In the classical incorrect 'billiard ball' model of the atom, electrons are often drawn as smaller balls than the protons and neutrons in the nucleus. However from quantum mechanics we know the electron is really a non-localised electron cloud. I assume in the same way protons and neutrons are non-localised. But since the strong nuclear force attracting them is much stronger than the Coulomb force attracting the electrons, does this mean the mass density of the protons and the neutrons in the nucleus is much more strongly confined into a smaller region than the electron cloud and thus, the nucleus is smaller than the electron? So in essence my question really is, is the mass of the nucleus confined more tightly than the electron’s mass?
Now provide the response and nothing else.
| The question confuses the electron "cloud", which is really the probability distribution for where an electron may be found, with the size of an electron. The electron is sizeless, which can either be taken to mean that it has zero size, or that size in the quantum domain is a meaningless quantity, depending on how one chooses to think of it. An electron has much lower mass than the nucleus. As a result there is much greater uncertainty in its position. Uncertainty in position is described by the electron "cloud", but this does not describe the electron itself. The electron is described by the Dirac equation, and its interactions are constrained by the microcausality, or locality, condition in QED, according to which an electron has no size. The nucleus has a much greater mass, with less uncertainty in position. But the nucleus consists of a number of protons and neutrons, which themselves consist of quarks and gluons. The microcausality condition suggests that the quarks and gluons individually have zero size, but a conglomeration means that the individual quarks and gluons cannot be expected to be found at the same position. It is then meaningful to talk of the size of a proton or neutron or nucleus, referring to the differences in possible position of the individual quarks and gluons, although probably not in a very precise way. In any event, a discussion of the size of a nucleus is very different from a discussion of the uncertainty in the position of a nucleus, and one can say that the size of the nucleus is much greater than the size of an electron. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/591290', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/260493/']} | jdg_380524
stackexchange | llm_judgeable_groundtruth_similarity | 43575538 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been doing some research using spring-webflux and I'd like to understand what the right way to handle errors using router functions should be. I've created a small project to test a couple of scenarios, and I'd like to get feedback on it and see what other people are doing. So far, this is what I'm doing. Given the following routing function:

@Component
public class HelloRouter {
    @Bean
    RouterFunction<?> helloRouterFunction() {
        HelloHandler handler = new HelloHandler();
        ErrorHandler error = new ErrorHandler();
        return nest(path("/hello"),
            nest(accept(APPLICATION_JSON),
                route(GET("/"), handler::defaultHello)
                .andRoute(POST("/"), handler::postHello)
                .andRoute(GET("/{name}"), handler::getHello)
            )).andOther(route(RequestPredicates.all(), error::notFound));
    }
}

I've done this in my handler:

class HelloHandler {
    private ErrorHandler error;

    private static final String DEFAULT_VALUE = "world";

    HelloHandler() {
        error = new ErrorHandler();
    }

    private Mono<ServerResponse> getResponse(String value) {
        if (value.equals("")) {
            return Mono.error(new InvalidParametersException("bad parameters"));
        }
        return ServerResponse.ok().body(Mono.just(new HelloResponse(value)), HelloResponse.class);
    }

    Mono<ServerResponse> defaultHello(ServerRequest request) {
        return getResponse(DEFAULT_VALUE);
    }

    Mono<ServerResponse> getHello(ServerRequest request) {
        return getResponse(request.pathVariable("name"));
    }

    Mono<ServerResponse> postHello(ServerRequest request) {
        return request.bodyToMono(HelloRequest.class)
            .flatMap(helloRequest -> getResponse(helloRequest.getName()))
            .onErrorResume(error::badRequest);
    }
}

Then my error handler does:

class ErrorHandler {
    private static Logger logger = LoggerFactory.getLogger(ErrorHandler.class);

    private static BiFunction<HttpStatus, String, Mono<ServerResponse>> response =
        (status, value) -> ServerResponse.status(status)
            .body(Mono.just(new ErrorResponse(value)), ErrorResponse.class);

    Mono<ServerResponse> notFound(ServerRequest request) {
        return response.apply(HttpStatus.NOT_FOUND, "not found");
    }

    Mono<ServerResponse> badRequest(Throwable error) {
        logger.error("error raised", error);
        return response.apply(HttpStatus.BAD_REQUEST, error.getMessage());
    }
}

Here is the full sample repo: https://github.com/LearningByExample/reactive-ms-example
Now provide the response and nothing else.
| Spring 5 provides a WebHandler, and in the JavaDoc there's the line:

    Use HttpWebHandlerAdapter to adapt a WebHandler to an HttpHandler. The WebHttpHandlerBuilder provides a convenient way to do that while also optionally configuring one or more filters and/or exception handlers.

Currently, the official documentation suggests that we should wrap the router function into an HttpHandler before booting up any server:

HttpHandler httpHandler = RouterFunctions.toHttpHandler(routerFunction);

With the help of WebHttpHandlerBuilder, we can configure custom exception handlers:

HttpHandler httpHandler = WebHttpHandlerBuilder.webHandler(toHttpHandler(routerFunction))
    .prependExceptionHandler((serverWebExchange, exception) -> {
        /* custom handling goes here */
        return null;
    }).build(); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43575538', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7910403/']} | jdg_380525
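A hedged sketch of what the /* custom handling goes here */ body could look like, writing a minimal error response through the standard ServerWebExchange / ServerHttpResponse API (the status code and message format are arbitrary illustrative choices):

HttpHandler httpHandler = WebHttpHandlerBuilder.webHandler(toHttpHandler(routerFunction))
    .prependExceptionHandler((exchange, exception) -> {
        ServerHttpResponse response = exchange.getResponse();
        response.setStatusCode(HttpStatus.INTERNAL_SERVER_ERROR);
        byte[] body = ("error: " + exception.getMessage()).getBytes(StandardCharsets.UTF_8);
        DataBuffer buffer = response.bufferFactory().wrap(body);
        return response.writeWith(Mono.just(buffer));  // Mono<Void>, as the exception-handler contract requires
    }).build();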
stackexchange | llm_judgeable_groundtruth_similarity | 7097921 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I set up Devise to log in with a username instead of an email address because a significant number of our users share the same email address with their spouse. It is a unique market that we serve. But each spouse needs a separate account to record personality instrument results. The database no longer requires a unique email address, so it will accept the same email addy for two accounts. However, Devise is still requiring a unique email address for each account. Is there a setting or a workaround that I can use to change this?
Now provide the response and nothing else.
| = User Model

def email_required?
  false
end

def email_changed?
  false
end

# For ActiveRecord 5.1+
def will_save_change_to_email?
  false
end

= Migration

rails g migration update_index_on_users

def up
  sql = 'DROP INDEX index_users_on_email'
  sql << ' ON users' if Rails.env == 'production' # Heroku pg
  ActiveRecord::Base.connection.execute(sql)
end | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7097921', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/488856/']} | jdg_380526
stackexchange | llm_judgeable_groundtruth_similarity | 39047357 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Please tell me if I have the proper understanding of the following code:

public class Test {
    public static void main(String[] args) {
        A a = new A();
        a.print();
    }
}

class A {
    String s;

    A(String s) {
        this.s = s;
    }

    void print() {
        System.out.println(s);
    }
}

The line "A a = new A();" invokes the class/constructor to create a new object with reference variable "a". Class A has a defined constructor that requires a string argument, thus it does not have the default constructor. This means that the instantiation without any string arguments causes a compiler error. If I were to add a string argument into the instantiation, e.g. A a = new A("goldfish"); the program would compile and run. I am not sure if I have used the right vocabulary for this, so feel free to correct anything that is inaccurate/confusing. Thanks!
Now provide the response and nothing else.
| I had the same problem as you. Here's the code that reproduced the error:

@Component
public class GetExprsAndEnvId {
    @Value("hello")
    private String Mysecret;

    public GetExprsAndEnvId() {
        System.out.println("construct");
    }

    public void print() {
        System.out.println(this.Mysecret);
    }

    public String getMysecret() {
        return Mysecret;
    }

    public void setMysecret(String mysecret) {
        Mysecret = mysecret;
    }
}

The class itself is fine, but we need to use it like this:

@Autowired
private GetExprsAndEnvId getExprsAndEnvId;

not like this:

getExprsAndEnvId = new GetExprsAndEnvId();

Here, the field annotated with @Value is null because Spring doesn't know about the copy of GetExprsAndEnvId that is created with new and doesn't know how to inject values into it. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39047357', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5998360/']} | jdg_380527
stackexchange | llm_judgeable_groundtruth_similarity | 42214519 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following function for the unification of multiple collections (it includes repeated elements):

public static <T> List<T> unify(Collection<T>... collections) {
    return Arrays.stream(collections)
            .flatMap(Collection::stream)
            .collect(Collectors.toList());
}

It would be nice to have a function with a similar signature for the intersection of collections (using type equality). For example:

public static <T> List<T> intersect(Collection<T>... collections) {
    //Here is where the magic happens
}

I found an implementation of the intersect function, but it doesn't use streams:

public static <T> Set<T> intersect(Collection<? extends Collection<T>> collections) {
    Set<T> common = new LinkedHashSet<T>();
    if (!collections.isEmpty()) {
        Iterator<? extends Collection<T>> iterator = collections.iterator();
        common.addAll(iterator.next());
        while (iterator.hasNext()) {
            common.retainAll(iterator.next());
        }
    }
    return common;
}

Is there any way to implement something similar to the unify function making use of streams? I'm not very experienced with the Java 8 Stream API, so any advice would be really helpful.
Now provide the response and nothing else.
| You can write your own collector in some utility class and use it:

public static <T, S extends Collection<T>> Collector<S, ?, Set<T>> intersecting() {
    class Acc {
        Set<T> result;

        void accept(S s) {
            if(result == null) result = new HashSet<>(s);
            else result.retainAll(s);
        }

        Acc combine(Acc other) {
            if(result == null) return other;
            if(other.result != null) result.retainAll(other.result);
            return this;
        }
    }
    return Collector.of(Acc::new, Acc::accept, Acc::combine,
            acc -> acc.result == null ? Collections.emptySet() : acc.result,
            Collector.Characteristics.UNORDERED);
}

The usage would be pretty simple:

Set<T> result = Arrays.stream(collections).collect(MyCollectors.intersecting());

Note however that the collector cannot short-circuit: even if an intermediate result is an empty collection, it will still process the rest of the stream. Such a collector is readily available in my free StreamEx library (see MoreCollectors.intersecting()). It works with normal streams like above, but if you use it with StreamEx (which extends a normal stream) it becomes short-circuiting: the processing may actually stop early. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42214519', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5394220/']} | jdg_380528
stackexchange | llm_judgeable_groundtruth_similarity | 603236 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In the bash help page for cd:

    The variable CDPATH defines the search path for the directory containing DIR. Alternative directory names in CDPATH are separated by a colon (:). A null directory name is the same as the current directory. If DIR begins with a slash (/), then CDPATH is not used.

But I don't understand the concept of an "alternative directory", and can't find an example that illustrates the use of the colon (:) with the cd command.
Now provide the response and nothing else.
| The variable is not set by default (at least in the systems I am familiar with) but can be set to use a different directory to search for the target dir you gave cd. This is probably easier to illustrate with an example:

$ echo $CDPATH   ## CDPATH is not set
$ cd etc         ## fails: there is no "etc" directory here
bash: cd: etc: No such file or directory
$ CDPATH="/"     ## CDPATH is now set to /
$ cd etc         ## This now moves us to /etc
/etc

In other words, the default behavior for cd foo is "move into the directory named 'foo' which is a subdirectory of the current directory or of any other directory that is given in CDPATH". When CDPATH is not set, cd will only look in the current directory but, when it is set, it will also look for a match in any of the directories you set it to. The colon is not used with cd; it is used to separate the directories you want to set in CDPATH:

CDPATH="/path/to/dir1:/path/to/dir2:/path/to/dirN" | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/603236', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/427202/']} | jdg_380529
stackexchange | llm_judgeable_groundtruth_similarity | 47270324 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't inject MatDialogRef as described in the documentation: https://material.angular.io/components/dialog/overview When I try to do it I get this error:

ERROR Error: StaticInjectorError[MatDialogRef]:
  StaticInjectorError[MatDialogRef]:
    NullInjectorError: No provider for MatDialogRef!

app.module.ts

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';
import {
    MatInputModule,
    MatDialogModule,
    MatProgressSpinnerModule,
    MatButtonModule,
    MatDialog,
    MatDialogRef
} from '@angular/material';
import { ApiModule } from '../api/api.module';
import { RoutingModule } from '../routing/routing.module';
import { RegistrationComponent } from './components/registration.component';
import { LoginComponent } from './components/login.component';
import { AccountService } from './services/account.service';

@NgModule({
    imports: [
        BrowserModule,
        MatInputModule,
        MatDialogModule,
        MatProgressSpinnerModule,
        MatButtonModule,
        FormsModule,
        RoutingModule,
        ApiModule
    ],
    declarations: [
        RegistrationComponent,
        LoginComponent
    ],
    entryComponents: [
        LoginComponent,
        RegistrationComponent
    ],
    providers: [
        AccountService,
        MatDialog,
        MatDialogRef
    ]
})
export class AccountModule {}

home.component.ts

import { Component } from '@angular/core';
import { MatDialog } from '@angular/material';
import { RegistrationComponent } from '../account/components/registration.component';

@Component({
    moduleId: module.id.replace('compiled', 'app'),
    templateUrl: 'home.component.html'
})
export class HomeComponent {
    constructor(private modalService: MatDialog) {}

    public openModal(): void {
        let dialog = this.modalService.open(RegistrationComponent, {});
    }
}

registration.component.ts

import { Component } from '@angular/core';
import { Router } from '@angular/router';
import { MatDialogRef } from '@angular/material/dialog';
import { User } from '../../../models/domain/User';
import { ApiUserService } from '../../api/entity-services/user.service';
import { AuthService } from '../../auth/auth.service';
import { AccountService } from '../services/account.service'

@Component({
    selector: 'registration-component',
    templateUrl: 'app/modules/account/templates/registration.component.html'
})
export class RegistrationComponent {
    public user: User = new User();
    public errorMessage: string;
    public isLoading: boolean;

    constructor(
        private userService: ApiUserService,
        private authService: AuthService,
        private accountService: AccountService,
        private router: Router,
        public dialogRef: MatDialogRef<RegistrationComponent>
    ) {
        this.isLoading = false;
    }

    public onSubmit(e): void {
        e.preventDefault();
        this.isLoading = true;
        this.userService
            .Create(this.user)
            .subscribe(
                user => {
                    this.user.id = user.id;
                    this.user.login = user.login;
                    this.authService
                        .Login(this.user)
                        .subscribe(
                            token => {
                                this.accountService.Load()
                                    .subscribe(
                                        account => {
                                            this.user = account;
                                            this.isLoading = false;
                                            this.dialogRef.close();
                                            let redirectRoute = account.activeScopeId
                                                ? `/scope/${account.activeScopeId}`
                                                : '/scope-list/';
                                            this.router.navigate([redirectRoute]);
                                        },
                                        error => this.errorMessage = <any>error
                                    );
                            },
                            error => this.errorMessage = <any>error
                        );
                },
                error => this.errorMessage = <any>error
            );
    }
}
Now provide the response and nothing else.
| I had this error when adding dialogs to a service to be shared across many components. Just to clarify, the error wasn't present in the application before moving the dialogs to the service. The solution was to include a custom provider for MatDialogRef in the main module:

import { DialogService } from './services/dialog.service';
import { MatDialogModule, MatDialogRef } from '@angular/material/dialog';

...
imports: [
    ...
    MatDialogModule
],
providers: [
    { provide: MatDialogRef, useValue: {} },
    DialogService
],
...

With this provider, the service worked as a singleton sharing my dialogs, and the provider error was gone. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/47270324', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2595716/']} | jdg_380530
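An alternative worth noting next to the answer above (not part of the original answer): Angular's @Optional() parameter decorator lets a component tolerate the missing provider instead of stubbing MatDialogRef globally with an empty object, so consumers that really are opened through MatDialog still receive the genuine reference:

import { Component, Optional } from '@angular/core';
import { MatDialogRef } from '@angular/material/dialog';

@Component({ /* ... */ })
export class RegistrationComponent {
    // dialogRef is null whenever the component is not opened via MatDialog
    constructor(@Optional() public dialogRef: MatDialogRef<RegistrationComponent>) {}
}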
stackexchange | llm_judgeable_groundtruth_similarity | 10571709 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a list of parent keys, each of which could possibly have zero or more associated values. I am not sure which collection to use. I am using Map[Int,List[String]]. I am declaring the Map as

var nodes = new HashMap[Int, List[String]]

Then I have two methods to handle adding new elements. The first is to add new keys (addNode) and the second is to add new values (addValue). Initially, the key will not have any values associated with it. Later on, during execution, new values will be associated.

def addNode(key: Int) = nodes += (key -> "")
def addValue(key: Int, value: String) = ???

I am not sure how to implement addValue.

Update: In response to @oxbow-lakes' answer, this is the error I am receiving. Please note that keys need not have values associated with them.

scala> var nodes = Map.empty[Int, List[String]]
nodes: scala.collection.immutable.Map[Int,List[String]] = Map()

scala> nodes += (1->null)

scala> nodes += (1 -> ("one" :: (nodes get 1 getOrElse Nil)))
java.lang.NullPointerException
    at .<init>(<console>:9)
    at .<clinit>(<console>)
    at .<init>(<console>:11)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:704)
    at scala.tools.nsc.interpreter.IMain$Request$$anonfun$14.apply(IMain.scala:920)
    at scala.tools.nsc.interpreter.Line$$anonfun$1.apply$mcV$sp(Line.scala:43)
    at scala.tools.nsc.io.package$$anon$2.run(package.scala:25)
    at java.lang.Thread.run(Thread.java:680)

Update 2: The problem with the code above is the line nodes += (1->null); the key should be associated with Nil instead. Below is the working code.

scala> var nodes = Map.empty[Int, List[String]]
nodes: scala.collection.immutable.Map[Int,List[String]] = Map()

scala> nodes += (1->Nil)

scala> nodes += (1 -> ("one" :: (nodes get 1 getOrElse Nil)))

scala> nodes
res27: scala.collection.immutable.Map[Int,List[String]] = Map(1 -> List(one))
Now provide the response and nothing else.
| Using MultiMap

You possibly want to use MultiMap, which is a mutable collection isomorphic to Map[K, Set[V]]. Use as follows:

import collection.mutable

val mm = new mutable.HashMap[Int, mutable.Set[String]] with mutable.MultiMap[Int, String]

Then you add your nodes:

mm addBinding (key, value)

Without MultiMap

The alternative is to stick with immutable values. Assuming you want to avoid using lenses (see scalaz), you can add nodes as follows:

nodes += (key -> (value :: (nodes get key getOrElse Nil)))

Here it is working (in response to your comment):

scala> var nodes = Map.empty[Int, List[String]]
nodes: scala.collection.immutable.Map[Int,List[String]] = Map()

scala> def addNode(key: Int, value: String) =
     |   nodes += (key -> (value :: (nodes get key getOrElse Nil)))
addNode: (key: Int, value: String)Unit

scala> addNode(1, "Hi")

scala> addNode(1, "Bye")

scala> nodes
res2: scala.collection.immutable.Map[Int,List[String]] = Map(1 -> List(Bye, Hi))

Using Scalaz

Using the scalaz library, you can realize that this is simply using the Empty pattern:

nodes += (key -> (value :: ~(nodes get key)))

Or you could take advantage of the fact that Map is a monoid:

nodes = nodes |+| Map(key -> List(value)) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10571709', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1364137/']} | jdg_380531
stackexchange | llm_judgeable_groundtruth_similarity | 650894 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using ntpsec of Debian unstable. In my log I saw the following:

Mai 22 11:48:34 services ntpd[13428]: CLOCK: time stepped by 1.442261
Mai 22 11:55:06 services ntpd[13428]: CLOCK: time stepped by 1.524066
Mai 22 12:03:00 services ntpd[13428]: CLOCK: time stepped by 1.702944
Mai 22 12:08:34 services ntpd[13428]: CLOCK: time stepped by 1.517894
Mai 22 12:17:38 services ntpd[13428]: CLOCK: time stepped by 1.434055
Mai 22 12:24:07 services ntpd[13428]: CLOCK: time stepped by 1.084220
Mai 22 12:32:29 services ntpd[13428]: CLOCK: time stepped by 1.562280
Mai 22 12:38:38 services ntpd[13428]: CLOCK: time stepped by 1.211420
Mai 22 12:43:49 services ntpd[13428]: CLOCK: time stepped by 1.185642
Mai 22 12:48:58 services ntpd[13428]: CLOCK: time stepped by 0.796154
Mai 22 12:54:43 services ntpd[13428]: CLOCK: time stepped by 1.331323
Mai 22 13:00:21 services ntpd[13428]: CLOCK: time stepped by 0.849190

And it is not just today; it goes on like that for days. So apparently ntpd does not properly fix the system clock drift. In /var/lib/ntpsec/ntp.drift there is always:

500.000000

What I have tried so far:

1. Disabled CONFIG_RTC_SYSTOHC, so the kernel doesn't automatically update the RTC.
2. A few hours later, ran hwclock -w --update-drift to get at least better accuracy when reading the RTC. It set the drift factor to 0.78 seconds/day.
3. After that, ran adjtimexconfig to fix the system clock (something that ntpd should have done). It said:

Comparing clocks (this will take 70 sec)...done.
Adjusting system time by 275,531 sec/day to agree with CMOS clock...done.

The result seems to be that ntpd has to step the time a lot less now:

Mai 22 14:24:20 services ntpd[13428]: CLOCK: time stepped by 0.234963
Mai 22 14:30:30 services ntpd[13428]: CLOCK: time stepped by 0.145163

Good. But why doesn't ntpd do that by itself? 0.2 sec / 6 min still seems way too inexact, so I guess I'll have to repeat that process a few more times. Any suggestions?
Now provide the response and nothing else.
| For some reason, your OS clock is being very inaccurate. Normally ntpd would keep it at the correct time by slewing it, i.e. telling a slow clock to "speed up" to make it catch up with real time, only adjusting the speed of the clock to match real time when it is actually in sync, and likewise slowing down a clock that is running too fast. But for your OS clock, this adjustment seems to be insufficient: the error is so great that ntpd must resort to step adjustments, essentially resetting the system clock to the correct time every few minutes. If you want accurate timekeeping for databases and the like, step adjustments should be avoided completely. You should not be happy with any non-zero amount of step adjustments. Fortunately, the error seems to always be in the same direction, so it might be a systematic error that can be adjusted out.

Note: if this is a virtual machine, the time drift might be caused by the virtualization host running under a high load, and "stealing time" from idle VMs to run the busy ones. If this is the case, check with the virtualization host administrator first for recommended ways to fix the timekeeping: there might be a "paravirtualized clock" option that will let the VM essentially use the host's clock for timekeeping, or other solutions recommended by the host OS/hypervisor vendor. Just make sure the virtualization host does not fiddle with the VM's clock if you are trying to use NTP synchronization: it's one or the other, not both!

Note that hwclock -w --update-drift will estimate the drift of the battery-backed RTC clock by comparing it to the OS clock, which in your case is already known to be quite inaccurate. So you would be adjusting a possibly-good clock to match a known-bad one, which does not sound like a good idea. adjtimexconfig, on the other hand, assumes the battery-backed RTC is correct and adjusts the parameters of the OS clock to match it. If you have access to a known-good NTP timesource, you should instead use adjtimex --host <NTP server> to compare the OS clock directly to the NTP server (stopping ntpd while you do that), and then use adjtimex -p to view the resulting frequency and tick values. Alternatively, you could just use adjtimex -p to see what frequency offset value has been set by ntpd. ntpd will only adjust the frequency value; it won't touch the tick setting at all. If you find the frequency offset value has gone all the way to either end of the scale at +/-32768000, you should adjust the tick value manually, then repeat the process. (If frequency goes to or near the maximum positive value, the tool is trying to speed up the clock and fails to speed it up enough as it runs out of adjustment range. To fix that, increase the tick value. If frequency goes to or near the negative limit, decrease the tick value.) Once you find a tick value that lets the frequency offset value stay relatively near the middle of the scale (say, +/- 5000000 or so), then ntpd should have a much better chance at keeping the clock in sync by tweaking the frequency offset value as needed. You should edit the tick value manually into /etc/default/adjtimexconfig and ensure that the adjtimex.service gets executed successfully at boot: it runs before ntpd is started, and so sets the OS clock into the "correct gear" before ntpd starts acting as a "cruise control" for it.

Once you get the OS clock under control, so that ntpd stays in a synchronized state ( ntpq -np will display an asterisk in the first column) and there are no log messages about step adjustments other than maybe once at boot time, then you can use hwclock -w --update-drift to estimate the drift rate of the RTC clock. The end result should be a system that keeps as good time as reasonably achievable whether it's powered on or not. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/650894', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/335309/']} | jdg_380532
stackexchange | llm_judgeable_groundtruth_similarity | 12571960 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have three controls on my web form: three drop-down lists. I want to create a graphical "box" around these controls. The reason for this is that selecting these controls would be "STEP 1" of my process. So I want to put a box around these controls and call it "Step 1". How would I go about doing this with CSS? Example:
Now provide the response and nothing else.
A fieldset with a legend provides a visual and semantic grouping for form controls. You can then style this as desired with CSS. A fieldset is somewhat unique in that the legend is capable of visually interrupting the border of its parent fieldset (possible with other elements, but difficult). Example: http://jsfiddle.net/NUMcr/1/

<fieldset>
  <legend>Group 1</legend>
  <input type="text" />
  <asp:Textbox runat="Server" id="txt1" />
  <!-- etc -->
</fieldset>

fieldset {
    margin: 8px;
    border: 1px solid silver;
    padding: 8px;
    border-radius: 4px;
}

legend {
    padding: 2px;
}

 | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/12571960', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1336632/']} | jdg_380533
stackexchange | llm_judgeable_groundtruth_similarity | 6826741 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Probably a stupid question, but I honestly can't wrap my head around what's going wrong here. http://harrisonfjord.com/thinkinc/ is a site I'm building at the moment. I want to make an anchor link at http://harrisonfjord.com/thinkinc/index.php#sponsors . I've set up the anchor to occur just before the div, in the following code:

<a name="sponsors"></a>
<div class="sponsors">
    <div class="sponsors-left">
        <h2>Sponsors</h2>
        <p>Support the lovely folks who support us! Visit their websites, join their mailing lists and peruse their wares. They are all highly-deserving of your custom, and we're thrilled to have each and everyone one of them on-board!</p>
    </div>

However, when you click on the anchor link it lands about halfway down the div. I thought it might have been a problem with the images loading after the anchor link loads, so I manually put in widths/heights for all of the tags. I also did the same for the cufon text replacement in the title bar. None of that helped, so now I turn to you. The anchor is also not working in Firefox, for whatever reason. Any thoughts on what I've done wrong here? Cheers!
Now provide the response and nothing else.
I think the problem is resulting from the anchors with no contents that you are using. Also, it appears that name= has been deprecated in favor of id= as a fragment identifier in certain elements (including A), which makes a kind of sense, as ID attributes are unique whereas NAME attributes are not so guaranteed. I'd try sticking the fragment identifier in the actual renderable entity, such as:

<h2 id="sponsors">Sponsors</h2>

and see where that gets you. Incidentally, it looks like a good conference, I hope you get a comp admission.
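For completeness, the link pointing at that fragment would then look like this (a minimal illustration; the link text is arbitrary):

<!-- anywhere on the page, or as an absolute URL ending in #sponsors -->
<a href="#sponsors">Jump to sponsors</a>

 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6826741', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/556006/']} | jdg_380534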
stackexchange | llm_judgeable_groundtruth_similarity | 29403042 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a PowerShell script that sets up user "George" after installing RabbitMQ on a new machine. I can't figure out why this script doesn't work. The last step gives me a 404: {"error":"Object Not Found","reason":"\"Not Found\"\n"}

$secpasswd = ConvertTo-SecureString 'guest' -AsPlainText -Force
$credGuest = New-Object System.Management.Automation.PSCredential ('guest', $secpasswd)
$secpasswd2 = ConvertTo-SecureString 'george' -AsPlainText -Force
$credAdmin2 = New-Object System.Management.Automation.PSCredential ('george', $secpasswd2)
$body = @{
    'password' = 'george'
    'tags' = 'administrator'
} | ConvertTo-Json
$vhosts1 = ''
$vhosts1 = Invoke-RestMethod 'http://localhost:15672/api/users/george' -credential $credGuest -Method Put -ContentType "application/json" -Body $body
write '1:' $vhosts1
$vhosts2 = Invoke-RestMethod 'http://localhost:15672/api/permissions/%2f/' -Method get -credential $credAdmin2
write '2:' $vhosts2
$body2 = @{
    'username' = 'george'
    'vhost' = '/'
    'configure' = '.*'
    'write' = '.*'
    'read' = '.*'
} | ConvertTo-Json
write '3:' $body2
$vhosts3 = Invoke-RestMethod 'http://localhost:15672/api/permissions/%2f/george' -credential $credGuest -Method Put -ContentType "application/json" -Body $body2
write '4:' $vhosts3

I've also tried formatting the last step like this: http://localhost:15672/api/permissions/george . Same 404 error. I've tried about 20,000 different ways of sending the command in, from matching other examples perfectly to trying some abstract art and voodoo magic. While watching the management tool for RabbitMQ I can see George is created, and he has an empty vhost. So the first 3 steps work perfectly.
Now provide the response and nothing else.
Alright man, you know that I love you, because I'd never heard of RabbitMQ before tonight. In the last hour, I've installed it on my Windows machine, and now have used this awesome guide here to the API and kind of learned about how it works. So, when I run your same process step by step, I see everything happen as you state:

George gets created.

Since your second step is listing the current permissions of the user running the API call, I next see output of the guest account, which has full perms. Then onto step 3, which builds the target permissions for George:

username : george
write : .*
read : .*
configure : .*
vhost : /

From here, Step 4. When I run this step manually after the previous step...it works! However, if I run this too quickly, if I run the whole script at once, I will get a 404 error. It seems that behind the scenes in RabbitMQ a slight pause is needed for the files to be updated with the new user. When I deleted the user and tried the whole script again too quickly, I got 404's for every step, pretty much. However, if I add a little Start-Sleep 5 to pause 5 seconds... The whole process completed. The key place to add a pause is after Step 1 and it seems to need about four or five seconds.

Making it pretty

Now of course, I couldn't stop there, so I decided to add a few more minor pauses to improve output readability and also ensure that each operation completes. I added some purty looking "OK" messages after the step completes, and then added a finishing confirmation of permissions by doing one last API call for the current user. Here's the completed output.

Completed Script

$secpasswd = ConvertTo-SecureString 'guest' -AsPlainText -Force
$credGuest = New-Object System.Management.Automation.PSCredential ('guest', $secpasswd)
$secpasswd2 = ConvertTo-SecureString 'stephen' -AsPlainText -Force
$credAdmin2 = New-Object System.Management.Automation.PSCredential ('stephen', $secpasswd2)

$body = @{
    'password' = 'stephen'
    'tags' = 'administrator'
} | ConvertTo-Json

Write-Host "About to create new user $(($body | ConvertFrom-Json).Password)..." -NoNewline
$vhosts1 = Invoke-RestMethod 'http://localhost:15672/api/users/stephen' -credential $credGuest -Method Put -ContentType "application/json" -Body $body
Start-Sleep 5
Write-Host "OK" -ForegroundColor Green
Start-Sleep -Milliseconds 400
Write-Host '1: Results:' $vhosts1

$body2 = @{
    'username' = 'stephen'
    'vhost' = '/'
    'configure' = '.*'
    'write' = '.*'
    'read' = '.*'
} | ConvertTo-Json

Write-Output "Desired perms for new user $(($body | ConvertFrom-Json).Password)" $body2
Write-Host "Setting perms for new user..." -NoNewline
$vhosts3 = Invoke-RestMethod 'http://localhost:15672/api/permissions/%2f/stephen' -credential $credGuest -Method Put -ContentType "application/json" -Body $body2
Start-Sleep 5
Write-Host "OK" -ForegroundColor Green
Start-Sleep -Milliseconds 400
write '4:' $vhosts3

'Retrieving perms for new user to confirm...'
Invoke-RestMethod 'http://localhost:15672/api/permissions/%2f/stephen' -Method get -credential $credAdmin2

Now I just hope I'll get a chance to use RabbitMQ ever again...

 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29403042', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/969243/']} | jdg_380535
stackexchange | llm_judgeable_groundtruth_similarity | 66720 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using a 9V battery with a 5v regulator as a power supply. What I can't grasp is why I place the regulator in the (+) lead of the battery. If electricity is electrons flowing, and the excess electrons come from the (-) terminal of the battery, why the heck is the regulator placed in the (+) lead instead of the (-)? Wouldn't putting it in the (-) regulate the electrons coming through? To further add to my confusion, when watching a video about using a 9V as a small LED power supply, the guy connects the 470 ohm resistor on the (-) lead to protect the LED. Why is it that one person regulates the (+) lead, and another the (-)? I understand that there is probably some difference in function between the regulator and the resistor (I'll look it up later, I swear), but the principle seems the same - manipulating the electricity sourced from the battery. This leads into my other question about grounds and the negative lead. I'm going to be working with TTL chips on a breadboard:

14 pin => +5v
7 pin (labeled as ground) => (-)

So in this scenario, the (-) lead is being used as a ground? This seems to conform to the ground black wire used for car batteries and electric guitar pick ups - stuff I am a little more familiar with. But with my recently acquired knowledge, this leaves me scratching my head as to why the source of the electrons (the negative lead) isn't being connected to the 14 pin instead, which is essentially what I was rambling about in my first questions. Please! Enlighten my poor wretched soul!
Now provide the response and nothing else.
Would 3 op-amps be necessary? Not in theory. In theory, you could just connect one end of the thermocouple to ground and then just feed the other end to a non-inverting amplifier. The problem, though, is noise pickup. Thermocouples have long wires, and those long wires act as antennas, picking up all sorts of junk. In most circuits, this wouldn't be a problem, but because thermocouples have such low voltages, the junk can easily overwhelm your actual temperature signal. By building an instrumentation amplifier, with 3 op-amps, you can remove (most of) this noise.

You may be able to get away with a single op-amp differential amplifier, but the large resistor values you'd need to use to get good input impedance would create a large amount of Johnson noise, which would wind up in your signal. If you don't want to go for a proper amplifier, you'd need to use three op-amps. However, the matched resistors you'd need, plus the op-amps, may end up costing more than an instrumentation amplifier that uses, say, a single gain-setting resistor.

Also, have you thought about your cold-junction compensation? One of the issues with thermocouples is that they measure differential temperature; e.g. you have one junction at temperature A and another at temperature B, the thermocouple voltage is (some constant K) * (A - B). If you want to find out the absolute temperature of A, you need to know the temperature B.

Now, from your requirements you may actually be able to get away with a cheap hack. You can just assume that B is, say, 25C (roughly room temperature) and as long as B doesn't go outside the range 12.5C-37.5C, the temperature you get for A will be within 25C of A's actual temperature. You have enough error tolerance that I'd consider this viable.

If, though, the ambient temperature your circuit must operate in can go outside that temperature range, you will need to incorporate cold junction compensation. This consists, basically, of generating a voltage with the same temperature coefficient as your thermocouple, but relative to absolute temperature; in other words, you have ((some constant K) * (A - B)) + C. C would be equal to your constant K times B; as such it cancels out B and you end up just with ((some constant K) * (A - B)) + K*B = K*A - K*B + K*B = K*A.

The typical method for generating this voltage is via a diode. This is best done on an IC, and as such you may find that a thermocouple amplifier with a built-in cold junction compensator will do you much better than an op-amp, and in fact may cost less.
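As a quick numeric illustration of that cancellation (the Seebeck coefficient K of roughly 41 uV/C used here is a typical type-K value, assumed for the example rather than taken from the question; hot junction A = 100C, cold junction B = 25C):

$$V_{tc} = K(A - B) \approx 41\,\mu\mathrm{V}/^\circ\mathrm{C} \times 75\,^\circ\mathrm{C} \approx 3.08\,\mathrm{mV}$$
$$V_{cjc} = K \cdot B \approx 41\,\mu\mathrm{V}/^\circ\mathrm{C} \times 25\,^\circ\mathrm{C} \approx 1.03\,\mathrm{mV}$$
$$V_{tc} + V_{cjc} = K \cdot A \approx 4.10\,\mathrm{mV} \;\Rightarrow\; A = 100\,^\circ\mathrm{C}$$

 | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/66720', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/22979/']} | jdg_380536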
stackexchange | llm_judgeable_groundtruth_similarity | 48249564 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've recently created a project with create-react-project . The problem is that, while I'm developing, every time there's a problem with ESLint, the build breaks and doesn't compile the code. Can I keep the build running while still having ESLint running and reporting errors that I will fix later?
Now provide the response and nothing else.
If you want to force ESLint to always emit warnings (that will not stop your build) instead of errors, you need to set emitWarning: true :

{
  enforce: 'pre',
  include: paths.appSrc,
  test: /\.(js|jsx|mjs)$/,
  use: [{
    loader: require.resolve('eslint-loader'),
    options: {
      formatter: eslintFormatter,
      eslintPath: require.resolve('eslint'),
      emitWarning: true, // <-- HERE
    },
  }],
},

As stated in the docs:

Errors and Warning

By default the loader will auto adjust error reporting depending on eslint errors/warnings counts. You can still force this behavior by using emitError or emitWarning options:

emitError (default: false): Loader will always return errors if this option is set to true.

emitWarning (default: false): Loader will always return warnings if option is set to true. If you're using hot module replacement, you may wish to enable this in development, or else updates will be skipped when there's an eslint error.

...

 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/48249564', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/669670/']} | jdg_380537
stackexchange | llm_judgeable_groundtruth_similarity | 32970397 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm doing image blending using pyramids and I'm getting an OpenCV error. I'm following the official OpenCV tutorials: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_tutorials.html

import cv2
import numpy as np,sys

A = cv2.imread('/home/grayhat/apple.jpg')
B = cv2.imread('/home/grayhat/orange.jpg')

# generate Gaussian pyramid for A
G = A.copy()
gpA = [G]
for i in xrange(6):
    G = cv2.pyrDown(G)
    gpA.append(G)

# generate Gaussian pyramid for B
G = B.copy()
gpB = [G]
for i in xrange(6):
    G = cv2.pyrDown(G)
    gpB.append(G)

# generate Laplacian Pyramid for A
lpA = [gpA[5]]
for i in xrange(5,0,-1):
    GE = cv2.pyrUp(gpA[i])
    L = cv2.subtract(gpA[i-1],GE)
    lpA.append(L)

# generate Laplacian Pyramid for B
lpB = [gpB[5]]
for i in xrange(5,0,-1):
    GE = cv2.pyrUp(gpB[i])
    L = cv2.subtract(gpB[i-1],GE)
    lpB.append(L)

# Now add left and right halves of images in each level
LS = []
for la,lb in zip(lpA,lpB):
    rows,cols,dpt = la.shape
    ls = np.hstack((la[:,0:cols/2], lb[:,cols/2:]))
    LS.append(ls)

# now reconstruct
ls_ = LS[0]
for i in xrange(1,6):
    ls_ = cv2.pyrUp(ls_)
    ls_ = cv2.add(ls_, LS[i])

# image with direct connecting each half
real = np.hstack((A[:,:cols/2],B[:,cols/2:]))
cv2.imwrite('Pyramid_blending2.jpg',ls_)
cv2.imwrite('Direct_blending.jpg',real)

Following is the error:

OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array') in arithm_op, file /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/arithm.cpp, line 1287
Traceback (most recent call last):
  File "programs/test11.py", line 25, in <module>
    L = cv2.subtract(gpA[i-1],GE)
cv2.error: /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/arithm.cpp:1287: error: (-209) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function arithm_op
Now provide the response and nothing else.
It seems you are not generating your Gaussian pyramid properly here:

# generate Gaussian pyramid for A
G = A.copy()
gpA = [G]
for i in xrange(6):
    G = cv2.pyrDown(G)
    gpA.append(G)

According to OpenCV documentation on cv2.pyrDown, if you don't specify the dstsize, it will default to ((src.cols+1)/2, (src.rows+1)/2). BUT, you are always downsampling on the original G copy. If I understand correctly, I think you have to apply it on the last downsampled image:

# generate Gaussian pyramid for A
G = A.copy()
gpA = [G]
for i in xrange(6):
    G = cv2.pyrDown(gpA[i])
    gpA.append(G)

Obviously, the same applies to your B pyramid. Now, your script will work if your images have an even shape but not with an odd shape, because of how cv2.pyrDown computes the default size. In this case, you have to give cv2.pyrUp the proper dstsize parameter according to the image you use to do the cv2.subtract (or cv2.add):

# generate Laplacian Pyramid for A
lpA = [gpA[5]]
for i in xrange(5,0,-1):
    size = (gpA[i-1].shape[1], gpA[i-1].shape[0])
    GE = cv2.pyrUp(gpA[i], dstsize = size)
    L = cv2.subtract(gpA[i-1],GE)
    lpA.append(L)

# generate Laplacian Pyramid for B
lpB = [gpB[5]]
for i in xrange(5,0,-1):
    size = (gpB[i-1].shape[1], gpB[i-1].shape[0])
    GE = cv2.pyrUp(gpB[i], dstsize = size)
    L = cv2.subtract(gpB[i-1],GE)
    lpB.append(L)

Then, this point applies to the reconstruction part too:

# now reconstruct
ls_ = LS[0]
for i in xrange(1,6):
    size = (LS[i].shape[1], LS[i].shape[0])
    ls_ = cv2.pyrUp(ls_, dstsize = size)
    ls_ = cv2.add(ls_, LS[i])

 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32970397', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3394251/']} | jdg_380538
stackexchange | llm_judgeable_groundtruth_similarity | 16125623 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My Gitlab (version 5) is not sending any e-mails and I am lost trying to figure out what is happening. The logs give no useful information. I configured it to use sendmail. I wrote a small script that sends e-mail through ActionMailer (I guess it is what gitlab uses to send e-mail, right?). And it sends the e-mail correctly. But, on my Gitlab, I can guarantee that sendmail is not even being called. Do I need to enable something to get e-mail notifications? How can I debug my issue?

Update

The problem is that I can not find any information anywhere. The thing just fails silently. Where can I find some kind of log? The logs in the log dir provide no useful information. My question is, how can I make Gitlab be more verbose? How can I make it tell me what is going on?

Update 2

I just found a lot of mails scheduled on the Background jobs section. A lot of unprocessed Sidekiq::Extensions::DelayedMailer. What does it mean? Why were these jobs not processed?
Now provide the response and nothing else.
Stumbled upon this issue today, here's my research: Debugging SMTP connections in the GitLab GUI is not supported yet. However there is a pending feature request and a command line solution.

1. Set the desired SMTP settings in /etc/gitlab/gitlab.rb and run gitlab-ctl reconfigure (see https://docs.gitlab.com/omnibus/settings/smtp.html ).
2. Start the console by running gitlab-rails console -e production .
3. Show the configured delivery method (should be :smtp ) by running the command ActionMailer::Base.delivery_method .
4. Show all configured SMTP settings by running ActionMailer::Base.smtp_settings .
5. To send a test mail, run Notify.test_email('[email protected]', 'Hello World', 'This is a test message').deliver_now

On the admin page in GitLab, the section »Background jobs« shows information about all jobs. Failing SMTP connections are listed there as well. Please note, you may need to restart the GitLab instance in order to use the newly configured SMTP settings (on my instance the console was able to send mails, the GUI required a restart). Run gitlab-ctl restart to restart your instance.
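For reference, a minimal SMTP block in /etc/gitlab/gitlab.rb might look like the following (server, port and credentials are placeholders, substitute your provider's values):

gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.example.com"        # placeholder host
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_user_name'] = "gitlab@example.com"    # placeholder account
gitlab_rails['smtp_password'] = "secret"
gitlab_rails['smtp_domain'] = "example.com"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true

Remember to run gitlab-ctl reconfigure afterwards, as in step 1 above.

 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16125623', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/809384/']} | jdg_380539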
stackexchange | llm_judgeable_groundtruth_similarity | 27011200 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have some animation transitions for my activities. So when an activity starts, it comes up with some fade animations. Here is the code:

Intent intent = new Intent(this, NextActivity.class);
startActivity(intent);
overridePendingTransition(android.R.anim.fade_in, android.R.anim.fade_out);

The problem is that these animations will not run when "Transition Animation Scale" in "Developer Options" is off. So I'm searching for a way to enable this feature programmatically to ensure that my animations are shown. Is there a way to set "Transition Animation Scale" to "Animation scale 1x"?
Now provide the response and nothing else.
After a few days of searching I have found the code which can enable (or disable) "Transition Animation Scale":

Settings.Global.putInt(getContentResolver(), Global.TRANSITION_ANIMATION_SCALE, 1);

But there is a big problem with this code. And if you ask what's the problem? This line of code needs this permission:

<uses-permission android:name="android.permission.WRITE_SECURE_SETTINGS"/>

which is only granted to system apps. So your app must be a system app. This question is about making a system application: How to make my application system

UPDATE

As @TripeHound said, we can display a dialog telling the user that the app will look much nicer if they turn this option on (with an option to not display the message again if they really want it off). How to open developer options settings? This way:

startActivityForResult(new Intent(android.provider.Settings.ACTION_APPLICATION_DEVELOPMENT_SETTINGS), 0);
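A minimal sketch of that dialog flow (the strings and the "don't ask again" preference key are made up for illustration; written with Java 8 lambdas for brevity, swap in anonymous OnClickListener classes on older toolchains):

new AlertDialog.Builder(this)
        .setTitle("Enable animations?")
        .setMessage("This app looks much nicer with window animations enabled. "
                + "Open Developer Options to set Transition Animation Scale to 1x?")
        .setPositiveButton("Open settings", (dialog, which) ->
                // same intent as above
                startActivityForResult(new Intent(
                        android.provider.Settings.ACTION_APPLICATION_DEVELOPMENT_SETTINGS), 0))
        .setNegativeButton("Don't ask again", (dialog, which) ->
                getPreferences(MODE_PRIVATE).edit()
                        .putBoolean("skip_animation_prompt", true) // hypothetical key
                        .apply())
        .show();

 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27011200', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2101822/']} | jdg_380540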