source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10) |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 8115116 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was asked how can a value of a const variable can be changed. My my obvious answer was "pointers!" but I tried the next piece of code and I'm puzzled... int main(){ const int x = 5; int *ptr = (int *)(&x); // "Cast away" the const-ness.. cout << "Value at " << ptr << ":"<< (*ptr) <<endl; *ptr = 6; cout << "Now the value of "<< ptr << " is: " << (*ptr) <<endl; cout << "But the value of x is still " << x <<endl; return 0;} And the output was: Value at <some address> :5Now the value of <same address> is: 6But the value of x is still 5 Now, I'm not sure exactly what is returned from '&x' but it's definitely not the actual address of x, since the value at x wasn't changed! But on the over hand, ptr did contain the value of x at the beginning!So, what is it exactly? EDIT compiled with VS2010
Now provide the response and nothing else.
| Your program invokes undefined behavior (writing to a const variable through a pointer is undefined behavior), so anything might happen. That being said here's the most likely explanation why you get the behavior you see on your particular implementation: When you do &x , you do get the address of x . When you do *ptr = 6 , you do write 6 to x 's memory location. However when you do cout << x , you don't actually read from x 's memory location because your compiler optimized the code by replacing x with 5 here. Since x is const the compiler is allowed to do that since there is no legal C++ program in which doing so would change the program's behavior. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8115116', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/548360/']} | jdg_72641 |
stackexchange | llm_judgeable_groundtruth_similarity | 8331119 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a string that contains invalid XML characters. How can I escape (or remove) invalid XML characters before I parse the string?
Now provide the response and nothing else.
| As the way to remove invalid XML characters I suggest you to use XmlConvert.IsXmlChar method. It was added since .NET Framework 4 and is presented in Silverlight too. Here is the small sample: void Main() { string content = "\v\f\0"; Console.WriteLine(IsValidXmlString(content)); // False content = RemoveInvalidXmlChars(content); Console.WriteLine(IsValidXmlString(content)); // True}static string RemoveInvalidXmlChars(string text) { var validXmlChars = text.Where(ch => XmlConvert.IsXmlChar(ch)).ToArray(); return new string(validXmlChars);}static bool IsValidXmlString(string text) { try { XmlConvert.VerifyXmlChars(text); return true; } catch { return false; }} And as the way to escape invalid XML characters I suggest you to use XmlConvert.EncodeName method. Here is the small sample: void Main() { const string content = "\v\f\0"; Console.WriteLine(IsValidXmlString(content)); // False string encoded = XmlConvert.EncodeName(content); Console.WriteLine(IsValidXmlString(encoded)); // True string decoded = XmlConvert.DecodeName(encoded); Console.WriteLine(content == decoded); // True}static bool IsValidXmlString(string text) { try { XmlConvert.VerifyXmlChars(text); return true; } catch { return false; }} Update: It should be mentioned that the encoding operation produces a string with a length which is greater or equal than a length of a source string. It might be important when you store a encoded string in a database in a string column with length limitation and validate source string length in your app to fit data column limitation. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/8331119', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/603200/']} | jdg_72642 |
stackexchange | llm_judgeable_groundtruth_similarity | 8432601 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't know what the problem is. WAMP was very slow, so I reformatted my computer and installed WAMP. Still, accessing localhost is very, very slow, and sometimes it doesn't even load at all. I even removed it and replaced it with XAMPP, but I still got the same result. What might possibly be the problem? Here's my current hosts file: 127.0.0.1 localhost127.0.0.1 localhost It was perfectly working fine before, but I do not know what happened and why it has started acting strange lately, since even a reformat didn't fix it.
Now provide the response and nothing else.
| I had the same problem running on Windows 8 running on 64bit. Apache is really slow but when you press F5 many times it goes ok. In the end i after doing many things managed to solve it. Right now it works fast. Try the following tasks to increase the performance: Change apache's listening port Change listening port from 80 to 8080 to avoid conflicts with programs like Skype. Open your httpd.conf file and find the line that starts with Listen (it's around line 62). Change it like the following: Listen 127.0.0.1:8080 Change your powerplan Change your power plan from Balanced to High Performance. You can do this in Control Panel\All Control Panel Items\Power Options Disable IPv6 The credits of this particular task go to Jef where he pointed this out in his blog post . From the Windows 8 desktop, press the Windows Key and the R key at the same time Type regedit in the Run dialog box and click OK Use Registry Editor to expand the registry tree and browse to: \HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TCPIP6\Parameters Right click on Parameters , expand New , and select DWORD (32-bit) Value Enter DisabledComponents into the Name field Double click on the new DisabledComponents value, enter ffffffff into the Value data dialog box, and click the OK button Confirm the new registry value contains the required data. Change your etc/hosts If you use virtual hosts don't add each virtual host on a new line. Instead list them like the following. 127.0.0.1 site-a site-b site-c I also added 127.0.0.1 127.0.0.1 since I heard this somehow improves the lookup as well. (Can't confirm this but it can't hurt putting it there) Your hosts file is located at C:\Windows\System32\Drivers\etc Check how many apache processes are running In my case I had two apache processes running. Be sure you only have one running. You can check this by pressing CTRL+ALT+DEL and press Task Manager Turn off the Base Filtering Engine (BFE) What I find to be working a bit as well was turning off the Base Filtering Engine. Since stopping or disabling the BFE service will significantly reduce the security of the system you should only do this when needed. Go to Control Panel => Administrative Tools => Services => Base Filtering Engine Stop the Base Filtering Engine by clicking on Stop Increase Apache's process priority To to your task manager and change Apache's process priority from Normal to High by right clicking -> Set priority -> High Keep Apache's process busy This is a bit of an ugly method but it does certainly work. It keeps Apache busy and will process your own requests faster. Insert your local web-address in the iframe location and save it in a html file, run it and just leave it there until you're done. <html> <head><script>setTimeout(function(){ window.location.reload(1);}, 2000);</script></head><body><iframe name="iframe" id="iframe" src="http://mywebsite:8080"></iframe> </body></html> Downgrade to Windows 7 Pro As a Windows 8 Pro user you are entitled to have downgrade rights to Windows 7. Read here more about this. For me that was the only solution that really did the job properly. Good luck! | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8432601', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1020215/']} | jdg_72643 |
stackexchange | llm_judgeable_groundtruth_similarity | 222054 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
NOTE: If client devices ( computer B in this example) want to obtain internet through the gateway computer, maybe they still need to configure nameserver resolution. This is not explained here (a gateway does not necessarily serve internet). I am trying to understand the fundamentals of networks routing. So I am experimenting with my LAN (I don't need internet for now, just LAN communications). I know the network configuration matters are a rather complex thing, but I am just trying to make a computer (say A) to act as a gateway for another (say B) (both running Ubuntu Linux). I only need B to be capable to reach the router, that is only reachable for A. This is the case: Router for computer A --> 192.168.0.1Computer A - eth0 --> 192.168.0.2Computer A - eth1 --> 192.168.1.1Computer B - eth0 --> 192.168.1.2 Computer A connects fine to router . Computer A and B connect fine (ping, SSH... etc) between them . Computer B can not reach the router for computer A. I was thinking that just adding on B Computer A as default gateway and activating IP Forwarding on A would make B to be able to reach the router for A: luis@ComputerB:~$ sudo route add default gw 192.168.1.1luis@ComputerB:~$ sudo routeltarget gateway source proto scope dev tbl127.0.0.0 broadcast 127.0.0.1 kernel link lo local127.0.0.0 8 local 127.0.0.1 kernel host lo local127.0.0.1 local 127.0.0.1 kernel host lo local127.255.255.255 broadcast 127.0.0.1 kernel link lo local192.168.1.0 broadcast 192.168.1.2 kernel link eth0 local192.168.1.2 local 192.168.1.2 kernel host eth0 local192.168.1.255 broadcast 192.168.1.2 kernel link eth0 localdefault 192.168.1.1 eth0169.254.0.0 16 link eth0192.168.1.0 24 192.168.1.2 kernel link eth0 And on Computer A (the intermediate gateway): root@ComputerA:~$ echo 1 > /proc/sys/net/ipv4/ip_forward Computer B can still ping computer A, but router for A does not answer: luis@ComputerB:~$ ping 192.168.0.1PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.^C (No ping response) Is this the correct procedure to make a computer running Linux to act as a gateway for another computer in a simple manner?
Now provide the response and nothing else.
| You are almost there you just need to make sure traffic gets back to B. Right now you have forwarded traffic from B to the outside world but A doesn't know how to get traffic back to B. You need A to keep some state about the connections going through it. To do this you will want to enable NAT . You already have step one which is allow forwarding. Then you need to add a few firewall rules using iptables : iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE This says: on the network address translation table, after we have figured out the routing of a packet on output eth0 (the external), replace the return address information with our own so the return packets come to us. Also, remember that we did this (like a lookup table that remembers this connection). iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT Allow packets that want to come from eth1 (the internal interface) to go out eth0 (the external interface). iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT Use that lookup table we had from before to see if the packet arriving on the external interface actually belongs to a connection that was already initiated from the internal. | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/222054', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/57439/']} | jdg_72644 |
stackexchange | llm_judgeable_groundtruth_similarity | 8836384 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I have two entities in my Symfony2 project : Category and Article (a category having many articles). In my CategoryRepository , I have this method: findAllDummy(){ return $this->createQueryBuilder('c') ->leftJoin('c.Articles a') ->getQuery()->getResult();} If I remember well, in Symfony1.4 (and the corresponding version of Doctrine), the returned objects would have their 'articles' attribute filled by the corresponding Article objects.Now, in Symfony2, Proxy objects are returned. So if I loop through a specific category's articles, As many queries as iterations will be executed. foreach($category->getArticles() as $article){ echo $article->getDoctrine() ->getRepository('')getTitle();} I understand this is Doctrine2.1's default lazy loading behavior. Question 1: how is this a better solution?N queries instead of 1. I tried to force eager loading by doing the following: findAllDummy(){ return $this->createQueryBuilder('c') ->leftJoin('c.articles a') ->getQuery() ->setFetchMode('Category', 'articles', 'EAGER') ->getResult();} But the result remains the same. Question 2: how to force eager loading in Doctrine2?
Now provide the response and nothing else.
| You're joining a table but you're not selecting anything from it. Add ->addSelect('a') to your query builder. Consider two following SQL queries to understand the difference: SELECT a.id, a.titleFROM article a JOIN category c ON a.category_id = c.id WHERE a.id = 123;SELECT a.id, a.title, c.id, c.name FROM article a JOIN category c ON a.category_id = c.id WHERE a.id = 123; Eager/lazy joining has nothing to do with DQL queries. It defines what should be loaded when you use $articleRepository->find(123) . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8836384', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/97142/']} | jdg_72645 |
stackexchange | llm_judgeable_groundtruth_similarity | 2403 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The statements of Poincare duality for manifolds and Serre Duality for coherent sheaves on algebraic varieties or analytic spaces look tantalizingly similar. I have heard tangential statements from some people that there is indeed some connection between the two. But I was never able to figure it out on myself. For instance for a naive attempt on a smooth complex manifold, the dimensions don't match. Can somebody help me out?
Now provide the response and nothing else.
| As far as I know, to make a precise connection, one has to invoke Hodge theory. Supposethat $X$ is a compact smooth projective variety of dimension $d$. Then Poincare duality pairs $H^n(X,\mathbb C)$ with $H^{2d-n}(X,\mathbb C),$ for any $n$. Now the Hodge decomposition gives$$H^n(X,\mathbb C) = \oplus_{p+q = n}H^q(X,\Omega^p)$$and $$H^{2d-n}(X,\mathbb C) = \oplus_{p'+q' = 2 d - n} H^{q'}(X,\Omega^{p'})= \oplus_{p + q = n}H^{d-q}(X,\Omega^{d - p}).$$ Now Serre duality gives a duality between$H^q(X,\Omega^p)$ and $H^{d - q}(X,\Omega^{d-p}),$and the compatibility statement is that Poincare duality between $H^n$ and $H^{2 d - n}$is induced by the direct sum of the pairings on the various summands in the Hodge decomposition given by Serre duality. (Perhaps up to signs and powers of $2 \pi i$,which I'm not brave enough to work out right now.) Added: A good case to think about for a newcomer to Hodge theory is the case when $X$ isa compact Riemann surface (or equivalently, an algebraic curve). If the genus of $X$ is $g$,then $H^1(X,\mathbb C)$ is $2g$-dimensional, and is endowed with a symplectic pairing viaPoincare duality. Hodge theory breaks $H^1(X,\mathbb C)$ up into the sum of two $g$-dimensional subspaces, namely $H^0(X,\Omega^1)$ and $H^1(X,\mathcal O)$. These are isotropic under Poincare duality (i.e. the Poincare duality pairing vanishes when restricted to either of them),but the become dual to one another under Poincare duality, and that pairing agrees withthe Serre duality pairing (up to a factor of $2\pi i$, perhaps). The easiest part of this to understand is the inclusion $H^0(X,\Omega^1) \subsetH^1(X,\mathbb C)$: a holomorphic differential gives a cohomology class just via de Rhamtheory (i.e. we integrate the holomorphic one form over 1-cycles); note that holomorphic1-forms are automatically exact, because if you apply the exterior derivative, you get a holomorphic 2-form, which must vanish (because $X$ is a curve, i.e. of complex dimension one). To see why $H^0(X,\Omega^1)$ is isotropic under Poincare duality, note that in the de Rhampicture, the Poincare duality pairing corresponds to wedging forms. But wedging two holomorphic 1-forms againgives a holomorphic 2-form, which must vanish (as we already noted). | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/2403', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_72646 |
stackexchange | llm_judgeable_groundtruth_similarity | 101028 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
On this blogsite I've read a interesting blog about visualising a correlation matrix . So I started with a simple file like: data = {{1, 2, 3, 4, 5, 4, 3, 2, 1}, {2, 2, 14, 16, 2, 3, 4, 5, 1}, {1, 1, 12, 1, 2, 3, 2, 1, 2}, {1, 2, 1, 1, 2, 3, 4, 5, 6}, {1, 6, 5, 1, 4, 3, 1, 2, 1}, {1, 2, 3, 6, 8, 10, 13, 15, 17}, {2, 6, 10, 12, 15, 21, 30, 35, 40}, {2, 6, 10, 8, 7, 6, 5, 4, 3}, {2, 8, 12, 8, 14, 2, 3, 4, 5}};datahead = { "var1", "var2", "var3", "var4", "var5", "var6", "var7", "var8", "var9"}; Then I calculated a correlation-matrix datacor = N[Correlation[data]]; In the same blog I read about this function portfolioMaxtrix[θ_] := ReplacePart[datacor, {i_, i_} -> 0] /. {x_ /; x > θ -> 1, x_ /; x <= θ -> 0}; So I created a graph AdjacencyGraph[portfolioMaxtrix[0.6], VertexLabels -> MapThread[ Rule, {Range[Length[datahead]], Style[#, 7, GrayLevel[.3], FontFamily -> "Verdana"] & /@ datahead}], VertexSize -> Large, GraphLayout -> {"PackingLayout" -> "ClosestPacking"}, ImagePadding -> 20]; Then I calculated two dimensions. numrows = Length[datacor];numvar = Length[datacor[[1]]]; The idea is that I want to vary the thickness of the lines in the network graph, based on the correlation coeffcient. edgestyle = Table[x <-> y -> Thickness@Abs@datacor[[x, y]], {x, 1, numrows}, {y, 1, numvar}] This gives the following result: {{1 <-> 1 -> Thickness[1.], 1 <-> 2 -> Thickness[0.594442], 1 <-> 3 -> Thickness[0.737199], 1 <-> 4 -> Thickness[0.84042], 1 <-> 5 -> Thickness[0.558359], 1 <-> 6 -> Thickness[0.294155], 1 <-> 7 -> Thickness[0.337391], 1 <-> 8 -> Thickness[0.338754], 1 <-> 9 -> Thickness[0.28061]}, {2 <-> 1 -> Thickness[0.594442], 2 <-> 2 -> Thickness[1.], 2 <-> 3 -> Thickness[0.281924], 2 <-> 4 -> Thickness[0.215337], 2 <-> 5 -> Thickness[0.724505],.....}} When I run the next code, it works fine: AdjacencyGraph[portfolioMaxtrix[0.6], VertexLabels -> MapThread[ Rule, {Range[Length[datahead]], Style[#, 7, GrayLevel[.3], FontFamily -> "Verdana"] & /@ datahead}], VertexSize -> Large, GraphLayout -> {"PackingLayout" -> "ClosestPacking"}, EdgeStyle -> {1 <-> 4 -> Thickness[0.05]}, ImagePadding -> 20] Then I tried this one AdjacencyGraph[portfolioMaxtrix[0.6], VertexLabels -> MapThread[ Rule, {Range[Length[datahead]], Style[#, 7, GrayLevel[.3], FontFamily -> "Verdana"] & /@ datahead}], VertexSize -> Large, GraphLayout -> {"PackingLayout" -> "ClosestPacking"}, EdgeStyle -> edgestyle, ImagePadding -> 20] This goes wrong. As far as I understand, because I selected more edges in EdgeStyle then used by the portfolioMaxtrix -function. I tried several options like 'DeleteCases' but all failed. Does anyone have a suggestion how to solve this issue?
Now provide the response and nothing else.
| portfolioMaxtrix[p_] := Sign @ Threshold[ReplacePart[datacor, {i_, i_} -> 0], p]pmat = portfolioMaxtrix[0.6]; Get positions of chosen correlations (these correspond to the edges of the graph) (pos = Position[1] @ UpperTriangularize @ pmat) // MatrixForm Extract the correlation values and rescale cor = Rescale @ Extract[datacor, pos]/100; Build the EdgeStyle - rules tra = Rule @@@ Transpose[{UndirectedEdge @@@ pos, Thickness /@ cor}]; Plot AdjacencyGraph[pmat, VertexLabels -> MapThread[Rule, {Range[Length[datahead]], Style[#, 13, GrayLevel[.3], FontFamily -> "Verdana"] & /@ datahead}], VertexSize -> Large, GraphLayout -> {"PackingLayout" -> "ClosestPacking"}, EdgeStyle -> tra, ImagePadding -> 40] | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/101028', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/9590/']} | jdg_72647 |
stackexchange | llm_judgeable_groundtruth_similarity | 240799 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The functional $$\int_0^1 (y^{\prime 2} + (y + 2y')y'' + kxyy' + y^2) ~dx,$$ $$y(0) = 0, ~y(1) = 1, ~y'(0) = 2, ~y'(1) = 3$$ is path independent if $k$ equals (A) $1$ (B) $2$ (C) $3$ (D) $4$ I have used Euler's formula for extremizing the given functional and get $k=2$ . But I am pretty sure that I have done mistake as I could not get to use the given conditions and also I will be grateful if someone explains what path independent really means and the appropriate formula to be used to tackle the problem.
Now provide the response and nothing else.
| The Jacobi matrix of $F$ at $(x,y)$ is$$DF(x,y)\left(\begin{matrix}p'(x+y) & p'(x-y)\\p'(x+y) & -p'(x-y)\end{matrix}\right)$$and its determinant is given by $-2p'(x+y) p'(x-y)$. So $DF$ is not invertible at $(x,y)$ if and only if $x+y \in Z$ or $x-y \in Z$ where $Z$ is the set of roots of $p'$. Since $p$ is non-constant, $p'$ is not the zero polynomial, so $Z$ is finite. The set of points where $DF$ is not invertible is therefore given by$$\bigcup_{a \in Z} \left(\operatorname{Graph}(y = a+x) \cup \operatorname{Graph}(y = a-x)\right),$$a finite union of lines in $\mathbb R^2$. Clearly, the complement of this is open and dense. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/240799', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33640/']} | jdg_72648 |
stackexchange | llm_judgeable_groundtruth_similarity | 191742 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How does a sine choke exactly function? In most high power inverter systems, the primary side of the output transformer is always driven by a PWM signal. The secondary output which is sent to a load should also come out to be PWM. How does a pure sine wave inverter exactly convert this PWM into a pure sine wave?
Now provide the response and nothing else.
| Here's a PWM signal fed to a transformer and superimposed on this diagram is the sinewave that the PWM represents: - The secondary of the transformer usually has an inductor and a capacitor that form a 2nd order low pass filter thus converting the PWM signal into (more-or-less) a fairly decent sinewave. For instance, if you take the high frequency content of the PWM waveform it looks like a square wave with varying duty cycle and, you can low-pass filter this quite easily to get this: - On the left is the original square wave. In the middle a little bit of filtering has happened and on the right the filtering is far greater. Thus, the high frequency edges of the PWM signal can be greatly reduced leaving the low frequency content that represents the sinewave. In effect you get something that typically looks like this: - You can still see that the waveform has a little bit of the PWM signal but, in the main, it is a sinewave. If your PWM frequency is 60 kHz and your AC is 60 Hz you could position a filter to have a cut-off of 600 Hz and there would be 2 decades between it and the 60 kHz. A 2nd order filter would attenuate the 60 kHz by 80 dB (40 dB per decade): - You might note that I mentioned a filter having a cut-off of 600 Hz and wonder why it is position ten times higher than the AC 60Hz. You might ask why not have it at 60 Hz and this would be a good question. The reason it isn't at 60Hz is two-fold: - There would be a 3dB attenuation of the AC If the filter was extremely resonant it would consume vast amounts of current because, in effect, it is also a series resonant circuit across the line. It has to be positioned as far away from 60 Hz as possible to avoid large circulating currents in the L and the C of the filter BUT you don't want it up close at 60 kHz because it won't filter out the high frequency content very well. Minimum is 100 Hz I would say and it should be at least 1 decade away from the lowest PWM frequency (generalism alert!). | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/191742', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/41069/']} | jdg_72649 |
stackexchange | llm_judgeable_groundtruth_similarity | 3089033 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In terms of $R$ which is the radius of all four circles, what is the area of the intersection region of these four equal circles and the height of the marked arrow in the figure? The marked arrow is along the line CD, also the midpoint of all the circles are points A, B, C and D. Looking for a very short intuitive solution. I have checked similar questions on this site for example this and this .
Now provide the response and nothing else.
| If you repeat this pattern infinitely, you'll find that each circle consists of $12$ of these (American!) football shaped areas, together with $6$ 'triangular' areas in between (see red circle in figure). So, setting the area of the footballs to $A$ , and that of the triangles to $B$ , we have: $\pi=12A +6B$ (I am setting radius to $1$ ... you can easily adjust for $R$ ) OK, now consider the green rectangle formed by four of the points on a circle. We see that the height of such a rectangle is equal to the radius, so that is $1$ , and the width is easily found to be $\sqrt{3}$ , and this area includes $4$ whole footballs, $4$ half footballs, $2$ whole 'triangles', and $4$ half triangles. So: $\sqrt{3}=6A+4B$ Now you can easily solve for $A$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3089033', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/471752/']} | jdg_72650 |
stackexchange | llm_judgeable_groundtruth_similarity | 339170 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a client who had to rebuild his automated build server. He checked out his project folder from my subversion server but is now no longer able to commit - he gets this error: Error: Commit failed (details follow): Error: Cannot write to the prototype revision file of transaction'551-1' because a Error: previous representation is currently being written by anotherprocess Finished!: I have searched Google but although this error has been often reported there is no clear explanation - does anyone on StackOverflow have a solution? UPDATE: Nobody else commits to that repository, so it was not a transaction stuck (at least not from another user). In the end we found that permissions were not set correctly. Not that you would know it from this message, but that fixed the problem.
Now provide the response and nothing else.
| You can clear this error w/out a reboot: Identify (or create) a directory in the root SVN repository that was not a part of the failed commit. Create a new file in the directory from step 1. Commit the new file (and directory if created in step 1). Because the file is new, it doesn't conflict w/the previous commit and can update the revision # on the server. Commit the original files/directories. Delete the new file (and directory if created in step 1). Commit the entire tree. **Steps 5 & 6 aren't required, but I like to keep my repository tidy... I get this error a lot b/c SVN frequently conks out in the middle of committing my large files, which in turn causes subsequent commits to try to write to the revision that was interrupted. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/339170', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9731/']} | jdg_72651 |
stackexchange | llm_judgeable_groundtruth_similarity | 113925 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When searching for string (no regex etc.) in large file less filename and less -b 1 filename quits or prints it cannot allocate memory. top shows less using roughly about 5% of memory before it dies (1 sec interval). What other tool can I use that supports large files? The file has only 100 mb of logs. $ less -Vless 458 (GNU regular expressions)Copyright (C) 1984-2012 Mark Nudelman OS details: $ ulimit -a-t: cpu time (seconds) unlimited-f: file size (blocks) unlimited-d: data seg size (kbytes) unlimited-s: stack size (kbytes) 8192-c: core file size (blocks) 0-m: resident set size (kbytes) unlimited-u: processes 15988-n: file descriptors 1024-l: locked-in-memory size (kbytes) 64-v: address space (kbytes) 2048000-x: file locks unlimited-i: pending signals 15988-q: bytes in POSIX msg queues 819200-e: max nice 0-r: max rt priority 0-N 15: unlimited$ cat /etc/*rele*DISTRIB_ID=UbuntuDISTRIB_RELEASE=13.10
Now provide the response and nothing else.
| This probably has nothing to do with your memory and everything to do with the way less is written and how much space it allocates for its internal variables. Anyway, less is really not designed for this, you should use a tool like grep instead: grep yourQuery file If you want to see the lines around your query (5 for example), run this: grep -C 5 yourQuery file If you have too many matches for that, you can pass the output to less and now you should be able to search through it: grep -C 5 yourQuery file | less Of course, if you want to do this manually, you can open the file in a text editor. Any serious editor like vim or emacs will be able to open and search through a 100MB file. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/113925', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/57402/']} | jdg_72652 |
stackexchange | llm_judgeable_groundtruth_similarity | 9242 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm a mathematician who's currently teaching a course on differential equations. Though I don't know much about chemistry, I like to include examples from chemistry in my course, and I prefer for the details to be accurate. Here is a typical exam problem: When a container of gaseous nitrogen dioxide is heated above 150 °C, the gas begins to decompose into oxygen and nitric oxide: $$\ce{2NO2 -> O2 + 2NO}$$ The rate of this reaction is determined by the equation $$\frac{\mathrm d\left[\ce{NO2}\right]}{\mathrm dt} = -k\left[\ce{NO}_2\right]^2$$ where $k$ is a constant. (a) Find the general solution to the above equation. (b) A large container holds 50.0 moles of $\ce{NO2}$ at a constant temperature of 600 °C. After one hour, only 34.3 moles remain. How much $\ce{NO2}$ will there be after another hour? So my questions are: Is the science in this problem reasonably accurate? Is there anything you would change? (I looked up a suitable value of the rate constant $k$ to make sure that the time in part b was reasonable.) What are some other examples of reactions that are governed by simple rate laws? Ideally, I'd like to have several examples each of reactions governed by the equations$$\frac{\mathrm dy}{\mathrm dt} = -ky,\qquad \frac{\mathrm dy}{\mathrm dt}=-ky^2,\qquad\text{and}\qquad \frac{\mathrm dy}{\mathrm dt}=-ky^3.$$(Is $y^3$ really possible? Are non-integer powers of $y$ possible?)
Now provide the response and nothing else.
| How thoughtful of you to include chemistry in your differential equations course! We appreciate your effort, especially going the extra mile to make it realistic. I hope you've seen the rate equation page on Wikipedia as it contains a good deal of the mathematics in several examples of reactions. You should also be interested in the reaction order page (you can find examples here of reactions with fraction or negative order with respect to some reagents). Let me do a quick recap on basic chemical kinetics theory. Consider the general reaction equation: $$\ce{a\ A + b\ B + c\ C + d\ D + ... \longrightarrow w\ W + x\ X + y\ Y + z\ Z + ...}$$ , where uppercase letters indicate different molecules in a gas or solution, and lowercase letters indicate the stoichiometric coefficients (negative for the reactants, positive for the products). Naively, we can assume the reaction happens because, while the molecules are jostling about, at some point all of the reactants ($a$ molecules of $\ce{A}$, $b$ molecules of $\ce{B}$, $c$ molecules of $\ce{C}$, etc.) will bump into each other all at the same time and undergo reaction. If you think of collisions as independent events with a probability of happening proportional to the concentration of each species, then it isn't too hard to understand that the frequency with which reactions happens is proportional to $[A]^a[B]^b[C]^c[D]^d...$, and so the rate equation for the reaction would be: $$r=k[A]^a[B]^b[C]^c[D]^d...$$ , where r is the reaction rate and k is a proportionality constant. The consumption or production rate for any species $\Gamma$ with stoichiometric coefficient $\gamma$ is trivially related to the reaction rate by: $$\frac{d\Gamma}{dt}=\gamma r$$ However, experimentally things aren't so simple (fortunately for nature, unfortunately for our minds). A reaction that requires the simultaneous collision of many molecules would be highly unlikely, meaning that it would happen very, very slowly. Even so, experimentally we observe many reactions that involve a large amount of molecules. For example, the combustion of cyclohexane in air is formally given by the equation: $$\ce{C6H12 + 9 O2 → 6CO2 + 6 H2O}$$ If the reaction really required ten molecules to bump into eachother at the same time with sufficient energy and in the right geometry, then this reaction probably could not happen in air, as oxygen would be too rarefied to compensate the extremely low proportionality constant k for a single-step reaction. In actuality, a gaseous mix of cyclohexane and air can react so fast as to cause an explosion, converting all the reactants into products in a miniscule fraction of a second. Clearly something is wrong. It turns out the assumption that reactions occur in one step is incorrect in general. Usually, there is more than one step involved, and indeed there generally is more than one path from reactants to products. The speed at which a reaction occurs is really described by considering all paths at once, with all their steps, and adding their contributions (like a start and end point in a complex tree diagram). In general this is pretty difficult and unnecessarily complex. It is often a reasonable approximation to model the reaction by selecting only the fastest route from reactants to products, and the reaction rate is bottlenecked by the slowest step of the fastest route, which becomes responsible for determining the rate equation. 
Now, since the reaction rate depends on selecting the fastest route, you shouldn't expect any of the steps in it to contain some very huge bottleneck, such as requiring six molecules to react simultaneously; there would likely be a slightly different route, perhaps with more steps, but in which each step doesn't involve so many molecules interacting at once, and hence being a faster route. Because of this, it turns out that most of the fastest reaction pathways involve slowest steps that very rarely depend on more than two molecules at once. Most reactions which can be modeled simply like this therefore have rate equations of the type $r=k[A]$, $r=k[A]^2$, $r=k[A][B]$ or even $r=k$. Few reactions rates are of the third order, of the type $r=k[A]^3$, $r=k[A]^2[B]$, $r=k[A][B][C]$ or similar. I don't know of any reaction that goes as $r=k[A]^3$ specifically. Fourth order reaction rates are so rare that they are the focus of research when found . I don't expect anyone to know a fifth order reaction. At the end of the day, kinetic theory is hard enough that reaction kinetics end up being determined simply by parameter fitting from experiments. Even if the underlying reaction mechanism is not completely understood, if a fit of the type $r=k[A]^{3/2}[B]^{-1}$ happens to be good, then so be it. Reactions which have been studied in depth are significantly more complex, such as the misleading simplicity of the reaction between hydrogen and bromine . Edit: I don't know why I didn't search for a fifth order reaction rate! Turns out some exist, such as this one . The reaction rate is of the type $r=k[A][B]^4$, so you could have an example where $\frac{d[B]}{dt}=k[B]^4$. | {} | {'log_upvote_score': 6, 'links': ['https://chemistry.stackexchange.com/questions/9242', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/4853/']} | jdg_72653 |
stackexchange | llm_judgeable_groundtruth_similarity | 2143240 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having trouble distinguishing the practical difference between calling glFlush() and glFinish() . The docs say that glFlush() and glFinish() will push all buffered operations to OpenGL so that one can be assured they will all be executed, the difference being that glFlush() returns immediately where as glFinish() blocks until all the operations are complete. Having read the definitions, I figured that if I were to use glFlush() that I would probably run into the problem of submitting more operations to OpenGL than it could execute. So, just to try, I swapped out my glFinish() for a glFlush() and lo and behold, my program ran (as far as I could tell), the exact same; frame rates, resource usage, everything was the same. So I'm wondering if there's much difference between the two calls, or if my code makes them run no different. Or where one should be used vs. the other.I also figured that OpenGL would have some call like glIsDone() to check whether or not all the buffered commands for a glFlush() are complete or not (so one doesn't send operations to OpenGL faster than they can be executed), but I could find no such function. My code is the typical game loop: while (running) { process_stuff(); render_stuff();}
Now provide the response and nothing else.
| Mind that these commands exist since the early days of OpenGL. glFlush ensures that previous OpenGL commands must complete in finite time ( OpenGL 2.1 specs , page 245). If you draw directly to the front buffer, this shall ensure that the OpenGL drivers starts drawing without too much delay. You could think of a complex scene that appears object after object on the screen, when you call glFlush after each object. However, when using double buffering, glFlush has practically no effect at all, since the changes won't be visible until you swap the buffers. glFinish does not return until all effects from previously issued commands [...] are fully realized . This means that the execution of your program waits here until every last pixel is drawn and OpenGL has nothing more to do. If you render directly to the front buffer, glFinish is the call to make before using the operating system calls to take screenshots. It is far less useful for double buffering, because you don't see the changes you forced to complete. So if you use double buffering, you probably won't need neither glFlush nor glFinish. SwapBuffers implicitly directs the OpenGL calls to the correct buffer, there's no need to call glFlush first . And don't mind stressing the OpenGL driver: glFlush will not choke on too many commands. It is not guaranteed that this call returns immediately (whatever that means), so it can take any time it needs to process your commands. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2143240', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/242641/']} | jdg_72654 |
stackexchange | llm_judgeable_groundtruth_similarity | 30132282 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How should I transform from datetime to string ? My attempt: dates = p.to_datetime(p.Series(['20010101', '20010331']), format = '%Y%m%d')dates.str
Now provide the response and nothing else.
| There is no .str accessor for datetimes and you can't do .astype(str) either. Instead, use .dt.strftime : >>> series = pd.Series(['20010101', '20010331'])>>> dates = pd.to_datetime(series, format='%Y%m%d')>>> dates.dt.strftime('%Y-%m-%d')0 2001-01-011 2001-03-31dtype: object See the docs on customizing date string formats here: strftime() and strptime() Behavior . For old pandas versions <0.17.0 , one can instead can call .apply with the Python standard library's datetime.strftime : >>> dates.apply(lambda x: x.strftime('%Y-%m-%d'))0 2001-01-011 2001-03-31dtype: object | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/30132282', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2892281/']} | jdg_72655 |
stackexchange | llm_judgeable_groundtruth_similarity | 4990 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My professor in class went a little over chaos theory, and basically said that Newtonian determinism no longer applies, since as time goes to infinity, no matter how close together two initial points are, the distance between them will increase greatly. But why isn't this merely a matter of the imprecision of our measuring instruments? If we can somehow know our initial conditions exactly , wouldn't we still be able to calculate what the system will be like at some time t in the future?
Now provide the response and nothing else.
| Well, yes. In a purely mathematical world where you can specify initial conditions exactly, chaotic systems are fully deterministic. It's not like a quantum system with wavefunction collapse, whose evolution can never be specified exactly by the initial conditions. But in practice, we can never specify (or know) the initial conditions exactly. So there will always be some uncertainty in the initial conditions, and it makes sense to characterize the behavior of a system in terms of its response to this uncertainty . Basically, a chaotic system is one in which any uncertainty in the state at time $t=0$ leads to exponentially larger uncertainties in the state as time goes on, and a non-chaotic system is one in which any initial uncertainty in the state decays away or at least stays steady with time. In the former (chaotic) case, given that we can't know the initial conditions to infinite precision, there will always be some time after which predictions of the behavior of the system become essentially meaningless - the uncertainty becomes so large that it fills up most of the state space. This is effectively similar to the behavior of a truly non-deterministic (e.g. quantum) system, in that our ability to make predictions about it is limited, so some people call chaotic systems non-deterministic. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/4990', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/202/']} | jdg_72656 |
stackexchange | llm_judgeable_groundtruth_similarity | 16222748 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a code base which I want to distribute as jar. It also have dependency on external jars, which I want to bundle in the final jar. I heard that this can be done using maven-assembly-plug-in , but I don't understand how. Could someone point me to some examples. Right now, I'm using fat jar to bundle the final jar. I want to achieve the same thing using maven.
Now provide the response and nothing else.
| Note: If you are a spring-boot application, read the end of answer Add following plugin to your pom.xml The latest version can be found at ...<build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-assembly-plugin</artifactId> <version>CHOOSE LATEST VERSION HERE</version> <configuration> <descriptorRefs> <descriptorRef>jar-with-dependencies</descriptorRef> </descriptorRefs> </configuration> <executions> <execution> <id>assemble-all</id> <phase>package</phase> <goals> <goal>single</goal> </goals> </execution> </executions> </plugin> </plugins></build>... After configuring this plug-in, running mvn package will produce two jars: one containing just the project classes, and a second fat jar with all dependencies with the suffix "-jar-with-dependencies". if you want correct classpath setup at runtime then also add following plugin <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <archive> <manifest> <addClasspath>true</addClasspath> <mainClass>fully.qualified.MainClass</mainClass> </manifest> </archive> </configuration></plugin> For spring boot application use just following plugin (choose appropriate version of it) <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <configuration> <fork>true</fork> <mainClass>${start-class}</mainClass> </configuration> <executions> <execution> <goals> <goal>repackage</goal> </goals> </execution> </executions></plugin> | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/16222748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/911359/']} | jdg_72657 |
stackexchange | llm_judgeable_groundtruth_similarity | 28029880 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have this code: <label><input type="checkbox">True?</label> and $("label").click(function () { $(this).toggleClass("active");}); When I click the checkbox class toggles, when I click "True?" nothing happens. Why? Then if I don't wrap checkbox in label everything works fine. <input type="checkbox" id="c1"><label for="c1">True?</label> This is so weird... And it doesn't matter if I put "for" or not. http://jsfiddle.net/zufs6ueh/1/ What is going wrong here?
Now provide the response and nothing else.
| This would be safer to use: $(function() { $('input[type="checkbox"]').bind('change', function (v) { if($(this).is(':checked')) { $(this).parent().addClass('active'); } else { $(this).parent().removeClass('active'); } });}); Using change instead of click allows for people that navigate forms using the keyboard. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28029880', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2230094/']} | jdg_72658 |
stackexchange | llm_judgeable_groundtruth_similarity | 494276 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $A \in \mathcal{M}_{7 \times 7} (\mathbb{R})$ such that $$A^2= \begin{pmatrix} 1&0&0&0&0&0&0\\0&1&0&0&0&0&0\\0&0&2&0&0&0&0 \\ 0&0&0&3&0&0&0 \\ 0&0&0&0&5&0&0 \\ 0&0&0&0&0&8&0 \\ 0&0&0&0&0&0&13\end{pmatrix} $$ How many matrices, which satisfy this condition could you find? My friend told to me, that the correct answer is infinity. But I can find only one: $$A^2= \begin{pmatrix} 1&0&0&0&0&0&0\\0&1&0&0&0&0&0\\0&0&\sqrt{2}&0&0&0&0 \\ 0&0&0&\sqrt{3}&0&0&0 \\ 0&0&0&0&\sqrt{5}&0&0 \\ 0&0&0&0&0&\sqrt{8}&0 \\ 0&0&0&0&0&0&\sqrt{13} \end{pmatrix} $$ I will grateful for hints, how find any other examples. I suppose that $A$ should depend from parametr which don't govern on product $A \cdot A$.
Now provide the response and nothing else.
| Hint: $$\begin{pmatrix} \cos \varphi & \sin \varphi & 0 & \ldots & 0 \\ \sin \varphi & -\cos \varphi & 0 & \ldots & 0 \\ 0 & 0& 0& \ldots & 0\\ \vdots & \vdots& \vdots& \ddots & \vdots\\ 0 & 0& 0& \ldots & 0 \end{pmatrix}^2 = ?$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/494276', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/87658/']} | jdg_72659 |
stackexchange | llm_judgeable_groundtruth_similarity | 31875748 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have my documentation written in markdown and I would like to render those files from my JSX (ES6+CommonJS) code into React components. How can I achieve this? For example I have styles.markdown and I would like to render it into a <p> tag.
Now provide the response and nothing else.
| You can use React-Markdown : const React = require('react')const ReactDOM = require('react-dom')const ReactMarkdown = require('react-markdown')const input = '# This is a header\n\nAnd this is a paragraph'ReactDOM.render(<ReactMarkdown source={input} />, document.getElementById('container')) Or... You can just create a simple React component that wraps a call to a Markdown parser. There are two very good ones for JavaScript: Remarkable Marked Now, you can create a component like this: var MarkdownViewer = React.createClass({ render: function() { // pseudo code here, depends on the parser var markdown = markdown.parse(this.props.markdown); return <div dangerouslySetInnerHTML={{__html:markdown}} />; }}); There used to have one already, but it doesn't seem to be maintained anymore: https://github.com/tcoopman/markdown-react Also, if you need a React Markdown Editor, check out: react-mde . Disclaimer: I am the author. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/31875748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1967395/']} | jdg_72660 |
stackexchange | llm_judgeable_groundtruth_similarity | 13581409 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a class that contains different enums (different types). This class is used as key for a HashMap . The classes hashCode currently is implemented like this: public static class Key implements Comparable<Key> { final int a; final Enum1 enum1; final Enum2 enum2; @Override public int hashCode() { return a ^ enum1.hashCode() ^ enum2.hashCode(); } // ... definition of equals and toString ... } Now if enums hashCode would just return the index of the enum value in the enum's definition, this would not be optimal (too many clashes). The method definition for Enum.hashCode() is this: /** * Returns a hash code for this enum constant. * * @return a hash code for this enum constant. */public final int hashCode() { return super.hashCode();} Assuming this delegates to Object.hashCode() , everything should be fine because for every enum constant there only exists one instance, and Object.hashCode() will in theory be something like an integer derived from the internal address of the object. Am I right? PS: Of course you will have to use something more complex when the same enum is used several times in a key.
Now provide the response and nothing else.
| Yes, you are right in that the hashcode of an enum element will come from the static instance, bound to memory positions, and be unique. On the other hand, there are better ways of generating a hashcode with less collision probability. Check out, for example, the defaults that eclipse can autogenerate for you (right click, Source> Generate hashCode and equals) public int hashCode() { final int prime = 31; int result = 1; result = prime * result + ((enum1 == null) ? 0 : enum1.hashCode()); result = prime * result + ((enum2 == null) ? 0 : enum2.hashCode()); return result;} By throwing prime numbers into the mix (the precise math escapes me) you are supposed to be a little more resistant. Note you can also let eclipse generate an equals method for you! (Even a toString). Not saying you must blindly trust them, but they are usually a very good start. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13581409', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/341291/']} | jdg_72661 |
stackexchange | llm_judgeable_groundtruth_similarity | 42798236 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Update, in short: I am looking for the equivalent of doing something like this but for an Observable rather than a regular array: var i = this.customers.findIndex(customer => customer._id === id);~i && this.customers[i] = newObject. I have 2 components on the screen. A list component on the left and a display component on the right (imagine it to be a PDF that is just rendering the latest 'version' of data) When you click an item in the list, it displays the data for that selected item in the component on the right. The list is an observable array: items$: Observable<Proposal[]>; Each item in the list has a child component. It is possible to click an icon on a single item, which changes the data of that child. The child has an event emitter to tell the parent the data has changed: @Output() proposalDataChanged: EventEmitter<string> = new EventEmitter(); The parent binds to it: <fb-proposal-list-item [proposal]="proposal" (proposalDataChanged)="handleDataChanged(p)"> </fb-proposal-list-item> The problem I have is that in the handleDataChanged method I want to search the Observable for the item that has changed and replace it with the new payload returned from the emitter. I do not want to call the server to refresh the entire list. I need to do this so that the component on the right reflects the new data. I am able to find the item like this: handleDataChanged(data: Proposal){ this.items$.subscribe((items: Proposal[]) => item = items.find(p => p.id == data.id));} but cannot figure out how to update the item in the Observable, rather than just finding the one that changed. I know I can 'trick' the component by navigating elsewhere and then back again to force it to refresh, but that also hits the API (and reloads the page as well). The url looks like this: /pages/proposals/manage/-XHzOJY/document That slug in the url is the id of the currently selected item (which is rendered in the component on the right). So I cannot use params change detection here, because it doesn't change. The user is causing a change to the already selected object, which is one of many inside the observable array. 
UPDATE Here is the full code for the parent component: import { Component, OnInit } from '@angular/core';import { Router } from '@angular/router';import { Observable, Subject } from 'rxjs/Rx';import { Proposal } from '../proposal';import { ProposalService } from '../proposal.service';import { SearchService } from '../../services/search-service';@Component({ selector: 'fb-proposal-list', templateUrl: './proposal-list.component.html', styleUrls: ['./proposal-list.component.css']})export class ProposalListComponent implements OnInit { total$: Observable<number>; items$: Observable<Proposal[]>; term: string = ""; currentPage: number = 1; private pageStream = new Subject<number>(); constructor( private _searchService: SearchService, private _proposalService: ProposalService, private _router: Router) { } ngOnInit() { this.setupSearching(); // let timer = Observable.timer(0, 60000); // timer.subscribe(() => this.goToPage(this.currentPage)); } setupSearching(){ const searchSource = this._searchService.searchTermStream .map(searchTerm => { this.term = searchTerm; return {search: searchTerm, page: 1} }); const pageSource = this.pageStream.map(pageNumber => { this.currentPage = pageNumber; return {search: this.term, page: pageNumber} }); const source = pageSource .merge(searchSource) .startWith({search: this.term, page: this.currentPage}) .switchMap((params: {search: string, page: number}) => { return this._proposalService.getProposalsPaged(params.search, params.page) }) .share(); this.total$ = source.pluck('meta').pluck('total_items'); this.items$ = source.pluck('items'); } goToPage(page: number) { this.pageStream.next(page) } handleDataChanged(id: string){ this.goToPage(this.currentPage); }}
Now provide the response and nothing else.
| I don't think you fully understand how Observables work. What you call here "observable collection" is not what you might think it is, as in "collection of observable elements". What it is, actually, is a stream of emitted collections. So when you have Observable<Model[]> , it's not a collection that's being observed by some observer, it's actually a stream of emitted Model[] collections. In that sense, you can't update the emitted value (obviously, as it already has been emitted), but what you want is it for observable to emit another, updated collection of Model . Before you try that, you'll have to know that you can't emit something from observable that you haven't created yourself. What you need is a Subject, an object that is both Observable and Observer (it inherits from both Observer and Observable interfaces). So, let's construct something like that: subject: Subject<Proposal[]> = new Subject();_proposals: Proposal[] = [];get proposals() { return this.subject.asObservable();} Let's say you get your proposals from some API call: this.http.get("...").map(response => response.json().subscribe(data => { this._proposals= <Proposal[]>data; // save your data this.subject.next(this._proposals); // emit your data}); And now, you want to update your data: updateProposals() { this._proposals = this._proposals.filter(...); // or whatever this.subject.next(Object.assign({}, this._proposals)); // emit completely new value} This may seem like a lot and I would recommend reading more on how Observables work for the sake of future issues that you might have. Cheers. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42798236', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1028679/']} | jdg_72662 |
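As an illustration of the idea in the answer above (emit a fresh snapshot of the collection instead of trying to mutate an already-emitted value), here is a minimal sketch of a Subject-like object in Python; it only mimics the pattern and is not the Angular/RxJS API.

class Subject:
    """Minimal stand-in for an RxJS Subject: keeps subscribers and pushes values to them."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def next(self, value):
        for callback in self._subscribers:
            callback(value)


proposals = []                      # the service's private, mutable state
items = Subject()                   # what components would consume as an "observable"
items.subscribe(lambda snapshot: print("view now shows:", snapshot))

proposals.append({"id": "-XHzOJY", "title": "draft"})
items.next(list(proposals))         # emit a copy after the initial load

proposals[0]["title"] = "updated"   # handleDataChanged: change the private state...
items.next(list(proposals))         # ...then emit a new snapshot so every subscriber refreshes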
stackexchange | llm_judgeable_groundtruth_similarity | 305202 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $f:{\Bbb R}\to{\Bbb R}$. Is there a counterexample to the following equality, or is it always true? $$\lim_{x\to 0}f(x)=\lim_{n\to\infty}f\left(\frac{1}{n}\right)$$ What I think is that one would need a discontinuous function, since the equality always holds for a continuous function. Would $1_{\Bbb Q}$ work? Are there any other counterexamples?
Now provide the response and nothing else.
| Actually, even for a function which is discontinuous at only one point, this need not be true. For example: $$f(x)=\begin{cases}\sin(\pi/x)&:x\neq0\\ 0&:x=0\end{cases}.$$ This equals $0$ for every $n\in\Bbb Z\setminus\{0\}$ as $\pi/(1/n)=n\pi$ but the limit as $x\to0$ does not exist. However, if the function is continuous, then yes, the limits will be equal. Thus, as you can see, the only way to guarantee the limits are equal is if $$\lim_{x\to0}f(x)$$ exists. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/305202', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_72663 |
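Regarding the question's follow-up about $1_{\Bbb Q}$: it gives another counterexample, for the slightly different reason that the right-hand side exists while the left-hand side does not (a short check added for completeness, not part of the answer above): $$1_{\Bbb Q}\!\left(\tfrac{1}{n}\right)=1\ \text{for every }n\in\Bbb N\quad\Longrightarrow\quad \lim_{n\to\infty}1_{\Bbb Q}\!\left(\tfrac{1}{n}\right)=1,$$ while $\lim_{x\to 0}1_{\Bbb Q}(x)$ does not exist, since every interval $(-\delta,\delta)$ contains both rationals (where the value is $1$) and irrationals (where the value is $0$).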
stackexchange | llm_judgeable_groundtruth_similarity | 24 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I think it's clear enough that if you turn your bicycle's steering wheel left, while moving, and you don't lean left, the bike will fall over (to the right) as you turn. I figure this is because the bike's momentum keeps it moving in the direction you were going, and since your wheels have friction against the ground, the top of the bike moves forward relative to the bottom of the wheels. The top of the bike going north while the bottom of the wheels go northwest will understandably cause you to topple. So to counteract this and keep you from falling over, leaning into the turn is necessary. But is there also a direct causal relationship -- that leaning will cause the bike to start to turn? If I start leaning left, I will turn left... but maybe that's because I know that if I don't turn the steering wheel left, the bike will fall over (to the left). I experimented with unruly turns of the steering wheel when I was a kid, and got my scrapes and bruises. Now that I'm a cautious and sedate adult I'm not anxious to experiment that way. :-) (I also want to ask why airplanes bank into a turn... they don't have the same issues as a bike, i.e. the bottom part has no special friction against the ground. But that would probably make the question too broad.)
Now provide the response and nothing else.
| The simple answer is that the angle between the front fork and the vertical causes the force from the ground to create a moment about the axis of rotation that turns the wheel in that direction. This has nothing to do with actually riding the bike, and it will happen even if the bike is stationary. Basically, if you project the axis of (steering) rotation all the way through the wheel, top to bottom, it will not be coincident with the point of contact with the ground. When the bike leans over, the upward (normal) force from the ground is not in the same plane as the axis of rotation, which causes a moment about that axis. When the bike begins to turn, the frictional component of the contact force will cause the force to go back into the same plane as the axis of rotation, which causes the wheel to hold its position steady. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/24', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/70/']} | jdg_72664 |
stackexchange | llm_judgeable_groundtruth_similarity | 18074916 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Simple.Data with SQL Server and I have multiple methods that are independent of each other. On a form I have multiple drop down lists that need to be populated and I think these can be populated asynchronously. I have a repository that returns List of entities. From my Asp.Net website I call the methods on the repository one by one and bind them to the drop down lists here is a sample code private void Initialize() { LoadTechnologies(); LoadInstallationTypes(); LoadProvinces(); LoadYears(); } private void LoadTechnologies() { ddlTechnologies.DataSource = _GizRepository.GetTechnologies(); ddlTechnologies.DataValueField = "Name"; ddlTechnologies.DataTextField = "Name"; ddlTechnologies.Items.Insert(0, new ListItem("All", "-1")); ddlTechnologies.DataBind(); } private void LoadInstallationTypes() { ddlInstallationType.DataSource = _GizRepository.GetInstallationTypes(); ddlInstallationType.DataValueField = "Type"; ddlInstallationType.DataTextField = "Type"; ddlInstallationType.Items.Insert(0, new ListItem("Any", "-1")); ddlInstallationType.DataBind(); } private void LoadProvinces() { ddlProvinces.DataSource = _GizRepository.GetProvinces(); ddlProvinces.DataValueField = "Name"; ddlProvinces.DataTextField = "Name"; ddlProvinces.Items.Insert(0, new ListItem("All", "-1")); ddlProvinces.DataBind(); } private void LoadYears() { ddlYearFrom.DataSource = _GizRepository.GetYears(); ddlYearFrom.DataValueField = "Year"; ddlYearFrom.DataTextField = "Year"; ddlYearFrom.DataBind(); ddlYearTo.DataSource = _GizRepository.GetYears(); ddlYearTo.DataValueField = "Year"; ddlYearTo.DataTextField = "Year"; ddlYearTo.DataBind(); } You can see from the code above that all I am doing is fetching some lists from the repository and bind them to the drop downs. I want to execute these methods asynchronously instead of synchronously, Kindly guide how it can be done?
Now provide the response and nothing else.
| Subscribe is lower level and you should really never have to use it. On provides friendlier overloads that allow for strong typing of arguments. Here's an example: Server public class MyHub{ public void Send(string message, int age) { Clients.All.send(message, age); }} Client Subscribe pattern public void Main(){ var connection = new HubConnection("http://myserver"); var proxy = connection.CreateHubProxy("MyHub"); var subscription = proxy.Subscribe("send"); subscription.Received += arguments => { string name = null; int age; if (arguments.Count > 0) { name = arguments[0].ToObject<string>(); } if (arguments.Count > 1) { age = arguments[1].ToObject<int>(); } Console.WriteLine("Name {0} and age {1}", name, age); };} "On" Pattern public void Main(){ var connection = new HubConnection("http://myserver"); var proxy = connection.CreateHubProxy("MyHub"); proxy.On<string, int>("send", (name, age) => { Console.WriteLine("Name {0} and age {1}", name, age); });} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18074916', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1241992/']} | jdg_72665 |
stackexchange | llm_judgeable_groundtruth_similarity | 20474556 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to specify that if anything is entered in the URL other than the existing routes (defined in routes.php), then a 404 page should be shown. I know about this: App::abort(404); but how can I specify the part that catches everything except the defined routes?
Now provide the response and nothing else.
| You can add this to your filters.php file: App::missing(function($exception){ return Response::view('errors.missing', array(), 404);}); And create the errors.missing view file to show them the error. Also take a look at the Errors & Logging docs EDIT If you need to pass data to that view, the second parameter is an array you can use: App::missing(function($exception){ return Response::view('errors.missing', array('url' => Request::url()), 404);}); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20474556', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2160618/']} | jdg_72666 |
stackexchange | llm_judgeable_groundtruth_similarity | 4471408 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $C_n$ denote the $n$ -th Catalan number defined by $${\displaystyle C_{n}={\frac {1}{n+1}}{2n \choose n}=\prod \limits _{k=2}^{n}{\frac {n+k}{k}}\quad \left(n\geqslant 0\right).}$$ Next, we define the sequence $${\displaystyle A_{n}={\frac {C_{n}^{3}}{2^{6n}}}\sum _{k=0}^{n}(-1)^{k}{n \choose k}{\frac {(n-k)(k-1)}{(2k-1)(2k+1)}}}.$$ I've numerically managed to verify that \begin{equation}\tag{1}\label{pi}{\displaystyle \sum _{n=0}^{\infty }(n+1)A_{n}={\frac {1}{\pi}}}.\end{equation} Is it possible to prove the relation in (\ref{pi})? If yes, then how could we go about it? Also, is this series already known or studied in the literature? If yes, then any references will be highly appreciated. Thanks!
Now provide the response and nothing else.
| $$\sum _{k=0}^{n}(-1)^{k}{n \choose k}{\frac {(n-k)(k-1)}{(2k-1)(2k+1)}}=\frac{\sqrt{\pi } \,\,\Gamma (n+2)}{4 \,\Gamma \left(n+\frac{1}{2}\right)}$$ $$ A_{n}={\frac {C_{n}^{3}}{2^{6n}}}\frac{\sqrt{\pi } \,\,\Gamma (n+2)}{4 \,\Gamma \left(n+\frac{1}{2}\right)}=\frac{\Gamma \left(n+\frac{1}{2}\right)^2}{4 \pi \Gamma (n+2)^2}$$ $$(n+1)A_n=\frac{(n+1) \Gamma \left(n+\frac{1}{2}\right)^2}{4 \pi \Gamma (n+2)^2}$$ $$S_p=\sum_{n=0}^p(n+1)A_n=\frac 1 \pi \frac{(p+1) \Gamma \left(p+\frac{3}{2}\right)^2}{\Gamma (p+2)^2}$$ Now, Stirling approximation $$\log \left(\frac{(p+1) \Gamma \left(p+\frac{3}{2}\right)^2}{\Gamma (p+2)^2}\right)=\log(p+1)+2\log \left(\Gamma \left(p+\frac{3}{2}\right)\right)-2\log \left(\Gamma \left(p+{2}\right)\right)$$ $$\log \left(\frac{(p+1) \Gamma \left(p+\frac{3}{2}\right)^2}{\Gamma (p+2)^2}\right)=-\frac{1}{4 p}+\frac{1}{4 p^2}+O\left(\frac{1}{p^3}\right)$$ $$\frac{(p+1) \Gamma \left(p+\frac{3}{2}\right)^2}{\Gamma (p+2)^2}=1-\frac{1}{4 p}+\frac{9}{32 p^2}+O\left(\frac{1}{p^3}\right)$$ $$S_p=\frac 1 \pi \left(1-\frac{1}{4 p}+\frac{9}{32 p^2}+O\left(\frac{1}{p^3}\right) \right)$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4471408', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/896686/']} | jdg_72667 |
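A quick numerical check of the closed form derived above for $(n+1)A_n$ (this is only a sanity check added here, not part of the original derivation); log-gamma keeps the terms stable, and convergence is slow, of order $1/(4p)$ as shown by the expansion of $S_p$:

import math

def term(n):
    # (n+1) * Gamma(n+1/2)^2 / (4*pi*Gamma(n+2)^2), evaluated via lgamma for numerical stability
    return (n + 1) * math.exp(2 * math.lgamma(n + 0.5) - 2 * math.lgamma(n + 2)) / (4 * math.pi)

partial = sum(term(n) for n in range(100000))
print(partial, 1 / math.pi)   # the partial sums approach 1/pi from below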
stackexchange | llm_judgeable_groundtruth_similarity | 31266947 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have made some ref keyword tests and there is one thing I can't understand: static void Test(ref int a, ref int b){ Console.WriteLine(Int32.ReferenceEquals(a,b));}static void Main(string[] args){ int a = 4; Test(ref a, ref a); Console.ReadLine();} Why does this code display False ? I know that int is a value type but here it should pass references to the same object.
Now provide the response and nothing else.
| Why does this code display False ? Because int a and int b are being boxed when you call object.ReferenceEquals . Each integer is boxed inside an object instance. Thus, you are actually comparing references between two boxed values, which clearly aren't equal. You can easily see this if you look at the generated CIL for the method: Test:IL_0000: nopIL_0001: ldarg.0 Load argument aIL_0002: ldind.i4IL_0003: box System.Int32IL_0008: ldarg.1 Load argument bIL_0009: ldind.i4IL_000A: box System.Int32IL_000F: call System.Object.ReferenceEqualsIL_0014: call System.Console.WriteLineIL_0019: nopIL_001A: ret Checking for storage location equality can be achieved either by using verifiable CIL (such as in @leppie's answer ) or by unsafe code: unsafe static void Main(string[] args){ int a = 4; int b = 5; Console.WriteLine(Test(ref a, ref a)); // True Console.WriteLine(Test(ref a, ref b)); // False;}unsafe static bool Test(ref int a, ref int b){ fixed (int* refA = &a) fixed (int* refB = &b) { return refA == refB; }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31266947', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5089169/']} | jdg_72668 |
stackexchange | llm_judgeable_groundtruth_similarity | 568962 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to create an empty array and append items to it, one at a time. xs = []for item in data: xs.append(item) Can I use this list-style notation with NumPy arrays?
Now provide the response and nothing else.
| That is the wrong mental model for using NumPy efficiently. NumPy arrays are stored in contiguous blocks of memory. To append rows or columns to an existing array, the entire array needs to be copied to a new block of memory, creating gaps for the new elements to be stored. This is very inefficient if done repeatedly. Instead of appending rows, allocate a suitably sized array, and then assign to it row-by-row: >>> import numpy as np>>> a = np.zeros(shape=(3, 2))>>> aarray([[ 0., 0.], [ 0., 0.], [ 0., 0.]])>>> a[0] = [1, 2]>>> a[1] = [3, 4]>>> a[2] = [5, 6]>>> aarray([[ 1., 2.], [ 3., 4.], [ 5., 6.]]) | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/568962', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11522/']} | jdg_72669 |
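A small sketch contrasting the patterns discussed above; the sizes and dtype are arbitrary choices for illustration:

import numpy as np

data = range(1000)                       # stand-in for the items being collected

# Anti-pattern: np.append copies the whole array on every call
slow = np.array([])
for item in data:
    slow = np.append(slow, item)

# Preferred: allocate once, then assign by index
fast = np.empty(len(data))
for i, item in enumerate(data):
    fast[i] = item

# If the final length is unknown, collecting into a list and converting once is also fine
converted = np.array(list(data), dtype=float)

assert np.array_equal(slow, fast) and np.array_equal(fast, converted)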
stackexchange | llm_judgeable_groundtruth_similarity | 201461 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was playing around with my own Sudoku solver and was looking for some pointers to good and fast design when I came across this: def r(a):i=a.find('0');~i or exit(a);[m in[(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3)or a[j]for j in range(81)]or r(a[:i]+m+a[i+1:])for m in'%d'%5**18] from sys import*;r(argv[1]) My own implementation solves Sudokus the same way I solve them in my head but how does this cryptic algorithm work? http://scottkirkwood.blogspot.com/2006/07/shortest-sudoku-solver-in-python.html
Now provide the response and nothing else.
| Well, you can make things a little easier by fixing up the syntax: def r(a): i = a.find('0') ~i or exit(a) [m in[(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3)or a[j]for j in range(81)] or r(a[:i]+m+a[i+1:])for m in'%d'%5**18]from sys import *r(argv[1]) Cleaning up a little: from sys import exit, argvdef r(a): i = a.find('0') if i == -1: exit(a) for m in '%d' % 5**18: m in[(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3) or a[j] for j in range(81)] or r(a[:i]+m+a[i+1:])r(argv[1]) Okay, so this script expects a command-line argument, and calls the function r on it. If there are no zeros in that string, r exits and prints out its argument. (If another type of object is passed, None is equivalent to passing zero, and any other object is printed to sys.stderr and results in an exit code of 1. In particular, sys.exit("some error message") is a quick way to exit a program when an error occurs. See http://www.python.org/doc/2.5.2/lib/module-sys.html ) I guess this means that zeros correspond to open spaces, and a puzzle with no zeros is solved. Then there's that nasty recursive expression. The loop is interesting: for m in'%d'%5**18 Why 5**18? It turns out that '%d'%5**18 evaluates to '3814697265625' . This is a string that has each digit 1-9 at least once, so maybe it's trying to place each of them. In fact, it looks like this is what r(a[:i]+m+a[i+1:]) is doing: recursively calling r, with the first blank filled in by a digit from that string. But this only happens if the earlier expression is false. Let's look at that: m in [(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3) or a[j] for j in range(81)] So the placement is done only if m is not in that monster list. Each element is either a number (if the first expression is nonzero) or a character (if the first expression is zero). m is ruled out as a possible substitution if it appears as a character, which can only happen if the first expression is zero. When is the expression zero? It has three parts that are multiplied: (i-j)%9 which is zero if i and j are a multiple of 9 apart, i.e. the same column. (i/9^j/9) which is zero if i/9 == j/9, i.e. the same row. (i/27^j/27|i%9/3^j%9/3) which is zero if both of these are zero: i/27^j^27 which is zero if i/27 == j/27, i.e. the same block of three rows i%9/3^j%9/3 which is zero if i%9/3 == j%9/3, i.e. the same block of three columns If any of these three parts is zero, the entire expression is zero. In other words, if i and j share a row, column, or 3x3 block, then the value of j can't be used as a candidate for the blank at i. Aha! from sys import exit, argvdef r(a): i = a.find('0') if i == -1: exit(a) for m in '3814697265625': okay = True for j in range(81): if (i-j)%9 == 0 or (i/9 == j/9) or (i/27 == j/27 and i%9/3 == j%9/3): if a[j] == m: okay = False break if okay: # At this point, m is not excluded by any row, column, or block, so let's place it and recurse r(a[:i]+m+a[i+1:])r(argv[1]) Note that if none of the placements work out, r will return and back up to the point where something else can be chosen, so it's a basic depth first algorithm. Not using any heuristics, it's not particularly efficient. 
I took this puzzle from Wikipedia ( http://en.wikipedia.org/wiki/Sudoku ): $ time python sudoku.py 530070000600195000098000060800060003400803001700020006060000280000419005000080079534678912672195348198342567859761423426853791713924856961537284287419635345286179real 0m47.881suser 0m47.223ssys 0m0.137s Addendum: How I would rewrite it as a maintenance programmer (this version has about a 93x speedup :) import sysdef same_row(i,j): return (i/9 == j/9)def same_col(i,j): return (i-j) % 9 == 0def same_block(i,j): return (i/27 == j/27 and i%9/3 == j%9/3)def r(a): i = a.find('0') if i == -1: sys.exit(a) excluded_numbers = set() for j in range(81): if same_row(i,j) or same_col(i,j) or same_block(i,j): excluded_numbers.add(a[j]) for m in '123456789': if m not in excluded_numbers: # At this point, m is not excluded by any row, column, or block, so let's place it and recurse r(a[:i]+m+a[i+1:])if __name__ == '__main__': if len(sys.argv) == 2 and len(sys.argv[1]) == 81: r(sys.argv[1]) else: print 'Usage: python sudoku.py puzzle' print ' where puzzle is an 81 character string representing the puzzle read left-to-right, top-to-bottom, and 0 is a blank' | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/201461', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/27736/']} | jdg_72670 |
stackexchange | llm_judgeable_groundtruth_similarity | 371691 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have PostFix up and running on a CentOS box and would like to send mail from a Windows server on the same network out through the PostFix server. When I try to telnet from the Windows server into port 25 on the PostFix server currently the connection fails. Where do I set this up within PostFix/CentOS? Thanks in advance!
Now provide the response and nothing else.
You will need to configure relaying. However, when Postfix is running you should still be able to connect to port 25; might there be a firewall blocking this connection? In main.cf you need to add this directive: mynetworks = A.B.C.D (for example, any one of these forms):
mynetworks = 127.0.0.0/8 168.100.189.0/28
mynetworks = !192.168.0.1, 192.168.0.0/28
mynetworks = 127.0.0.0/8 168.100.189.0/28 [::1]/128 [2001:240:587::]/64
Do not put 0.0.0.0 or you will become an open relay. | {} | {'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/371691', 'https://serverfault.com', 'https://serverfault.com/users/37860/']} | jdg_72671 |
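Once the Windows host's address is listed in mynetworks and Postfix has been reloaded, relaying can be verified from that host; below is a minimal, hypothetical test script (the IP address and mail addresses are placeholders, not values taken from the question):

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "app@example.lan"          # placeholder sender
msg["To"] = "someone@example.com"        # placeholder recipient
msg["Subject"] = "Relay test"
msg.set_content("Test message relayed through Postfix.")

# 192.168.1.10 stands in for the CentOS/Postfix box; port 25 must also be open in its firewall
with smtplib.SMTP("192.168.1.10", 25, timeout=10) as server:
    server.send_message(msg)
print("Message accepted for relay")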
stackexchange | llm_judgeable_groundtruth_similarity | 444069 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The other day, my friend said that he wants to write an app that uses the NFC chip on a phone to read pet microchips. I told him I was pretty sure that pet microchips don't use NFC, but he said he was pretty sure they did. So I hopped onto Wikipedia and read the article about pet microchips. Sure enough, Wikipedia told me that pet microchips operate on RFID. My friend said that they were the same thing. Is he right, or just talking through his hat?
Now provide the response and nothing else.
| NFC is a type of RFID. Not all RFID devices use NFC. NFC is a bundle of specific technologies and protocols used to communicate with storage devices (often credit cards or passive identifier tags). The NFC specifications cover everything from the radio frequency used (13.56 MHz) and the types of modulation used, to the communications protocol used to read and write data on tags. RFID, by contrast, is a generic term used to refer to any form of radio communications with an identifier tag. There are dozens, if not hundreds, of different systems which fall under this category, and most of them are mutually incompatible. NFC is often considered a type of RFID, but they are not synonymous. Pet microchips do not use NFC. There are a variety of different RFID systems used for these chips; most of them operate at 125, 128, or 134.2 kHz, and they are largely incompatible with each other. (It is not unheard of for an animal to be implanted with multiple brands of ID microchips to increase the odds that they will be identified!) A NFC reader in a cell phone will have no way of communicating with these chips. | {} | {'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/444069', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/224588/']} | jdg_72672 |
stackexchange | llm_judgeable_groundtruth_similarity | 4106035 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $A_1, \dots , A_n$ be commutative unitary Rings and $A = \prod_{i=1}^{n} A_i$ .Then every prime Ideal $\frak{p}$ $\subset A$ is of the form $\pi_i^{-1}(\frak{p}_i)$ where $\pi_i: A \to A_i$ are the canonical projections and $\frak{p_i}$ $\subset A_i$ is prime. I know that every Ideal in $A$ is a direct product of Ideals in the $A_i$ , so we have $\frak{p}$ $= \frak{a}_i \times \dots \times \frak{a}_n$ for some Ideals $\frak{a}_i$ $\subset A_i$ . They are also prime since $\frak{p}$ is prime. Now, since the $\pi_i$ are surjective $\pi_i(\frak{p})$ $\subset A_i$ is prime as well. Then we have $\pi_i^{-1}(\pi_i(\frak{p})) \supset \frak{p}$ . Now I don't know how to finish the proof. Hints and/or improvements are greatly appreciated!
Now provide the response and nothing else.
| You can derive the proof from the following facts: An ideal $\mathfrak{a} \subseteq A$ is prime iff $A/\mathfrak{a}$ is an integral domain. There is an isomorphism of rings $(\prod_i A_i) / (\prod_i \mathfrak{a}_i) \cong \prod_i A_i / \mathfrak{a_i}$ A product $\prod_i A_i$ is an integral domain iff $A_i=0$ for all indices $i$ except for exactly one index $j$ , and for this index $A_j$ is an integral domain. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4106035', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/697395/']} | jdg_72673 |
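For completeness, here is one way the three facts combine to give the statement in the question (a sketch added as an illustration, not part of the original answer). Write $\mathfrak{p}=\mathfrak{a}_1\times\dots\times\mathfrak{a}_n$ as in the question. By fact 2, $$A/\mathfrak{p}\;\cong\;\prod_{i=1}^{n} A_i/\mathfrak{a}_i,$$ and this is an integral domain by fact 1 because $\mathfrak{p}$ is prime. By fact 3 there is exactly one index $j$ with $A_j/\mathfrak{a}_j\neq 0$, and that quotient is an integral domain; hence $\mathfrak{a}_i=A_i$ for $i\neq j$, while $\mathfrak{a}_j=\mathfrak{p}_j$ is prime by fact 1 again. Therefore $$\mathfrak{p}\;=\;A_1\times\dots\times\mathfrak{p}_j\times\dots\times A_n\;=\;\pi_j^{-1}(\mathfrak{p}_j).$$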
stackexchange | llm_judgeable_groundtruth_similarity | 68653 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been working with the Fermat numbers recently, but this problem has really tripped me up. If the Fermat theorem is set as $f_a=2^{2^a}+1$ , then how can we say that for an integer $b<a$ , the $\gcd(f_b,f_a)=1$ ?
Now provide the response and nothing else.
| Claim. $f_n=f_0\cdots f_{n-1}+2$. The result holds for $f_1$: $f_0=2^{2^0}+1 = 2^1+1 = 3$, $f_1=2^{2}+1 = 5 = 3+2$. Assume the result holds for $f_n$. Then$$\begin{align*}f_{n+1} &= 2^{2^{n+1}}+1\\&= (2^{2^n})^2 + 1\\&= (f_n-1)^2 +1\\&= f_n^2 - 2f_n +2\\&= f_n(f_0\cdots f_{n-1} + 2) -2f_n + 2\\&= f_0\cdots f_{n-1}f_n + 2f_n - 2f_n + 2\\&= f_0\cdots f_n + 2,\end{align*}$$which proves the formula by induction. $\Box$ Now, let $d$ be a common factor of $f_b$ and $f_a$. Then $d$ divides $f_0\cdots f_{a-1}$ (because it's a multiple of $f_b$) and divides $f_a$. That means that it divides$$f_a - f_0\cdots f_{a-1} = (f_0\cdots f_{a-1}+2) - f_0\cdots f_{a-1} = 2;$$but $f_a$ and $f_b$ are odd, so $d$ is an odd divisor of $2$. Therefore, $d=\pm 1$. So $\gcd(f_a,f_b)=1$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/68653', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/16895/']} | jdg_72674 |
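A short numerical illustration of the two steps in the proof above (the product formula and the resulting coprimality), using Python's arbitrary-precision integers; this is just a check, not part of the proof:

from math import gcd, prod   # math.prod needs Python 3.8+

f = [2**(2**n) + 1 for n in range(7)]          # F_0, ..., F_6

# Claim used in the proof: F_n = F_0 * F_1 * ... * F_{n-1} + 2
assert all(f[n] == prod(f[:n]) + 2 for n in range(1, 7))

# Consequence: distinct Fermat numbers are pairwise coprime
assert all(gcd(f[a], f[b]) == 1 for a in range(7) for b in range(a))
print("verified for F_0 .. F_6")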
stackexchange | llm_judgeable_groundtruth_similarity | 145563 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can two identical metallic objects have the same but opposite charges (one loses electrons and one gains electrons) when we know that metals have the ability to lose electrons only?
Now provide the response and nothing else.
You can't simultaneously conserve energy and linear momentum. Let the photon have energy $E_{\gamma} = p_{\gamma} c$ and the electron have energy $E_{-}^{2} = p_{e}^{2}c^2 + m_{e}^{2}c^4$, with an analogous expression for the positron. Suppose the electron and positron depart from the interaction site with an angle $2\theta$ between them. Conservation of energy: $$ p_{\gamma} c = \sqrt{p_{e}^{2}c^2 +m_e^{2}c^4} + \sqrt{p_{p}^{2}c^2 +m_e^{2}c^4},$$ but we know that $p_{p} = p_{e}$ from conservation of momentum perpendicular to the original photon direction. So $$ p_{\gamma} = 2\sqrt{p_{e}^2 + m_e^{2}c^2}.$$ Now conserving linear momentum in the original direction of the photon: $$p_{\gamma} = p_e \cos{\theta} + p_p \cos\theta = 2p_e \cos\theta.$$ Equating these two expressions for the photon momentum we have $$p_e \cos{\theta} = \sqrt{p_{e}^2 + m_e^{2}c^2}$$ $$\cos \theta = \sqrt{1 + m_e^{2}c^2/p_e^{2}}.$$ As $\cos \theta$ cannot exceed 1, we see that this is impossible. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/145563', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/63885/']} | jdg_72675 |
stackexchange | llm_judgeable_groundtruth_similarity | 662671 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Before ubuntu desktop was installed sites were normal and everything (virtuals hosts/ sites) was working normally but suddenly it stopped listening to 80 and went on 7080 and 7081 any idea why this could happen. Sites are working on 7080. If I change the port in ports.conf from 7080 to 80, sites stop working and I can see the generic webserver start page (which we can see usually before setting up any site)
Now provide the response and nothing else.
| I found a solution to this problem, that doesn't require additional servers or services.I'm not entirely sure if this doesn't spawn new problems though. For me it seems to work right now. The way I did it, was to create a frontend for each domain that required different ssl settings. I then set the bind option of those frontends to high ports (these are not reachable from public!). I created another frontend listening on port :443 to divide traffic based on SNI, and set the backend servers to 127.0.0.1:high-port. This way, I created sort of a loop in haproxy [incoming]->[haproxy:443]->[haproxy:7000]->[www.intern.lan][incoming]->[haproxy:443]->[haproxy:8000]->[private.intern.lan] Here is the config part. frontend frnd_snipt # Frontend_SNI-PassThrough (snipt) bind *:443 # Do not use bind *:8443 ssl crt etc....! option tcplog mode tcp tcp-request inspect-delay 5s tcp-request content accept if { req_ssl_hello_type 1 } acl subdomain_is_www req_ssl_sni -i www.example.com acl subdomain_is_www req_ssl_sni -i example.com acl subdomain_is_private req_ssl_sni -i private.example.com use_backend bknd_snipt_private if subdomain_is_private use_backend bknd_snipt_www if subdomain_is_wwwbackend bknd_snipt_www mode tcp # tcp mode must match the frontend mode - already set as default in [global] server snipt-www 127.0.0.1:7000 # run without "check", otherwise haproxy checks itself all the time!backend bknd_snipt_private mode tcp server snipt-private 127.0.0.1:8000 # also, don't add "ssl" when in tcp mode. "ssl" is an http mode option (result in "NO-SRV" when set in tcp)##### NORMAL HAPROXY PART #####frontend www_example_com # this frontend can be in tcp or http mode... bind *:7000 ssl crt /etc/mycert.pem no-sslv3 # www. frontend with normal https mode http option httplogfrontend private_example_com bind *:8000 ssl crt /etc/mycert.pem ca-file /etc/myca.pem verify optional no-sslv3 # private. frontend with client certificate request. mode http option httplog ... # whatever you have in your frontend If anyone has thoughts on this, or any idea why this could be a bad idea please let me know. It works, but I'm wondering why use_frontend isn't an option. Maybe because it's something that shouldn't be done for whatever reasons. | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/662671', 'https://serverfault.com', 'https://serverfault.com/users/144769/']} | jdg_72676 |
stackexchange | llm_judgeable_groundtruth_similarity | 4417545 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on an app that uses MKOverlay views to layer my own custom maps on top of the Google base map. I have been using Apple's excellent TileMap sample code (from WWDC 2010) as a guide. My problem - when "overzoomed" to a level of detail deeper than my generated tile set, the code displays nothing because there are no tiles available at the calculated Z level. The behavior I want - when "overzoomed" the app should just keep magnifying the deepest level of tiles. It is a good user experience for the overlay to become blurrier - it is a very bad experience to have the overlay vanish. Here is the code which returns the tiles to draw - I need to figure out how to modify this to cap the Z-depth without breaking the scaling of the frame being calculated for the overlay tile. Any thoughts??? - (NSArray *)tilesInMapRect:(MKMapRect)rect zoomScale:(MKZoomScale)scale{ NSInteger z = zoomScaleToZoomLevel(scale); // PROBLEM: I need to find a way to cap z at my maximum tile directory depth. // Number of tiles wide or high (but not wide * high) NSInteger tilesAtZ = pow(2, z); NSInteger minX = floor((MKMapRectGetMinX(rect) * scale) / TILE_SIZE); NSInteger maxX = floor((MKMapRectGetMaxX(rect) * scale) / TILE_SIZE); NSInteger minY = floor((MKMapRectGetMinY(rect) * scale) / TILE_SIZE); NSInteger maxY = floor((MKMapRectGetMaxY(rect) * scale) / TILE_SIZE); NSMutableArray *tiles = nil; for (NSInteger x = minX; x <= maxX; x++) { for (NSInteger y = minY; y <= maxY; y++) { // As in initWithTilePath, need to flip y index // to match the gdal2tiles.py convention. NSInteger flippedY = abs(y + 1 - tilesAtZ); NSString *tileKey = [[NSString alloc] initWithFormat:@"%d/%d/%d", z, x, flippedY]; if ([tilePaths containsObject:tileKey]) { if (!tiles) { tiles = [NSMutableArray array]; } MKMapRect frame = MKMapRectMake((double)(x * TILE_SIZE) / scale, (double)(y * TILE_SIZE) / scale, TILE_SIZE / scale, TILE_SIZE / scale); NSString *path = [[NSString alloc] initWithFormat:@"%@/%@.png", tileBase, tileKey]; ImageTile *tile = [[ImageTile alloc] initWithFrame:frame path:path]; [path release]; [tiles addObject:tile]; [tile release]; } [tileKey release]; } } return tiles;} FYI, here is the zoomScaleToZoomLevel helper function that someone asked about: // Convert an MKZoomScale to a zoom level where level 0 contains 4 256px square tiles,// which is the convention used by gdal2tiles.py.static NSInteger zoomScaleToZoomLevel(MKZoomScale scale) { double numTilesAt1_0 = MKMapSizeWorld.width / TILE_SIZE; NSInteger zoomLevelAt1_0 = log2(numTilesAt1_0); // add 1 because the convention skips a virtual level with 1 tile. NSInteger zoomLevel = MAX(0, zoomLevelAt1_0 + floor(log2f(scale) + 0.5)); return zoomLevel;}
Now provide the response and nothing else.
| Imagine that the overlay is cloud cover - or in our case, cellular signal coverage. It might not "look good" while zoomed in deep, but the overlay is still conveying essential information to the user. I've worked around the problem by adding an OverZoom mode to enhance Apple's TileMap sample code. Here is the new tilesInMapRect function in TileOverlay.m: - (NSArray *)tilesInMapRect:(MKMapRect)rect zoomScale:(MKZoomScale)scale{ NSInteger z = zoomScaleToZoomLevel(scale); // OverZoom Mode - Detect when we are zoomed beyond the tile set. NSInteger overZoom = 1; NSInteger zoomCap = MAX_ZOOM; // A constant set to the max tile set depth. if (z > zoomCap) { // overZoom progression: 1, 2, 4, 8, etc... overZoom = pow(2, (z - zoomCap)); z = zoomCap; } // When we are zoomed in beyond the tile set, use the tiles // from the maximum z-depth, but render them larger. NSInteger adjustedTileSize = overZoom * TILE_SIZE; // Number of tiles wide or high (but not wide * high) NSInteger tilesAtZ = pow(2, z); NSInteger minX = floor((MKMapRectGetMinX(rect) * scale) / adjustedTileSize); NSInteger maxX = floor((MKMapRectGetMaxX(rect) * scale) / adjustedTileSize); NSInteger minY = floor((MKMapRectGetMinY(rect) * scale) / adjustedTileSize); NSInteger maxY = floor((MKMapRectGetMaxY(rect) * scale) / adjustedTileSize); NSMutableArray *tiles = nil; for (NSInteger x = minX; x <= maxX; x++) { for (NSInteger y = minY; y <= maxY; y++) { // As in initWithTilePath, need to flip y index to match the gdal2tiles.py convention. NSInteger flippedY = abs(y + 1 - tilesAtZ); NSString *tileKey = [[NSString alloc] initWithFormat:@"%d/%d/%d", z, x, flippedY]; if ([tilePaths containsObject:tileKey]) { if (!tiles) { tiles = [NSMutableArray array]; } MKMapRect frame = MKMapRectMake((double)(x * adjustedTileSize) / scale, (double)(y * adjustedTileSize) / scale, adjustedTileSize / scale, adjustedTileSize / scale); NSString *path = [[NSString alloc] initWithFormat:@"%@/%@.png", tileBase, tileKey]; ImageTile *tile = [[ImageTile alloc] initWithFrame:frame path:path]; [path release]; [tiles addObject:tile]; [tile release]; } [tileKey release]; } } return tiles;} And here is the new drawMapRect in TileOverlayView.m: - (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context{ // OverZoom Mode - Detect when we are zoomed beyond the tile set. NSInteger z = zoomScaleToZoomLevel(zoomScale); NSInteger overZoom = 1; NSInteger zoomCap = MAX_ZOOM; if (z > zoomCap) { // overZoom progression: 1, 2, 4, 8, etc... overZoom = pow(2, (z - zoomCap)); } TileOverlay *tileOverlay = (TileOverlay *)self.overlay; // Get the list of tile images from the model object for this mapRect. The // list may be 1 or more images (but not 0 because canDrawMapRect would have // returned NO in that case). NSArray *tilesInRect = [tileOverlay tilesInMapRect:mapRect zoomScale:zoomScale]; CGContextSetAlpha(context, tileAlpha); for (ImageTile *tile in tilesInRect) { // For each image tile, draw it in its corresponding MKMapRect frame CGRect rect = [self rectForMapRect:tile.frame]; UIImage *image = [[UIImage alloc] initWithContentsOfFile:tile.imagePath]; CGContextSaveGState(context); CGContextTranslateCTM(context, CGRectGetMinX(rect), CGRectGetMinY(rect)); // OverZoom mode - 1 when using tiles as is, 2, 4, 8 etc when overzoomed. 
CGContextScaleCTM(context, overZoom/zoomScale, overZoom/zoomScale); CGContextTranslateCTM(context, 0, image.size.height); CGContextScaleCTM(context, 1, -1); CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), [image CGImage]); CGContextRestoreGState(context); // Added release here because "Analyze" was reporting a potential leak. Bug in Apple's sample code? [image release]; }} Seems to be working great now. BTW - I think the TileMap sample code is missing an [image release] and was leaking memory. Note where I added it in the code above. I hope that this helps some others with the same problem. Cheers, Chris | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4417545', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/529774/']} | jdg_72677 |
stackexchange | llm_judgeable_groundtruth_similarity | 78843 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What are the decomposition products? For example: Calcium formate decomposes into formaldehyde and calcium carbonate. $$\ce{Ca(HCOO)2 -> CaCO3 + HCHO}$$ Calcium acetate decomposes into acetone and calcium carbonate. $$\ce{Ca(CH3COO)2 -> CaCO3 + (CH3)2CO}$$ How does calcium propionate decompose?
Now provide the response and nothing else.
| There is a paper (1) where TG - GLC - MS with subsequent XRD analysis of solid products has been utilized for the analysis of evolved gases during the decomposition of calcium propionate at various heating rates. Detected products were $\ce{H2O}$, $\ce{CO2}$, $\ce{C2H5CHO}$, $\ce{C2H5COCH3}$, $\ce{C2H5COC2H5}$, $\ce{CaCO3}$. The Discussion section of the paper suggests the following mechanism of formation: Following the free radical mechanism of Hites and Biemann (2) , proposed for the decomposition of the calcium salts of monocarboxylic acids we suggest that the initial stage in the decomposition of calcium propanoate can be represented as: $$\ce{Ca(C2H5COO)2 -> CaCO3 + C2H^.5 + C2H5C^.=O,}$$ Combination of the two free radicals results in the formation of the major product, 3-pentanone: $$\ce{C2H5C=O + C2H^.5 -> C2H5 - CO - C2H5}$$ The other trace products observed, 2-butanone and propanal, can be accounted for by disproportionation reactions involving the ethyl radical: $$\ce{C2H^.5 + C2H^.5 -> C3H^.7 + CH^.3}$$ and $$\ce{C2H^.5 -> C2H5 + H^.}$$ The methyl and hydrogen radicals so produced may react with the large excess of the original propanoyl radicals to give 2-butanone and propanal respectively. $$\ce{C2H^.5=O + CH^.3 -> C2H5COCH3}$$ and $$\ce{C2H5C=O + H -> C2H5CHO}$$ The $\ce{C3H7}$ may combine with any of the other radicals or may disproportionate. In any event there was no evidence for compounds larger than 3-propanone. However on close examination of the GC curves there were indications of a very small peak before that due to $\ce{CO2}$ which could be attributed to low molecular weight alkanes and alkenes. (1) Barnes, P. A.; Stephenson, G.; Warrington, S. B. Journal of Thermal Analysis 1982, 25 (2), 299–311. DOI: 10.1007/BF01912955 (2) Hites, R. A.; Biemann, K. J. Am. Chem. Soc. 1972, 94 (16), 5772–5777. DOI: 10.1021/ja00771a039 | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/78843', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/47360/']} | jdg_72678 |
stackexchange | llm_judgeable_groundtruth_similarity | 39077 |
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The mass of Venus seems rather complicated to me to determine: Venus doesn't have any satellites, so you can't just apply Kepler's third law (like you would with Jupiter or Saturn, for instance) to determine its mass. The gravitational tug of Venus on the Sun is very small compared to that of Jupiter or Saturn, so it seems like it would be difficult to extract what part of the Sun's motion is caused by Venus. There are few asteroids with orbits close to that of Venus, so not many objects that might have their trajectory modified by Venus. Knowing Venus' radius, and assuming it has the same density as the Earth, you can get a pretty close estimate of its mass (85% of Earth's mass with this assumption, when the actual value is 82%). But that's a pretty strong assumption (the densities of Earth and Venus only happen to be close by chance) and a rather unsatisfactory "guesstimate". Nowadays, there are a few probes that have flown by Venus, so by looking at their trajectories, you can infer what Venus' gravitational field looks like. But those fly-bys are pretty recent. Did we know Venus' mass before those fly-bys? How was the mass of Venus measured for the first time?
Now provide the response and nothing else.
How was the mass of Venus measured for the first time? In the mid-19th century, Urbain Le Verrier predicted the existence of a then unknown planet beyond the orbit of Uranus. He even predicted this planet's orbit. The discovery of Neptune based on his predictions was perhaps his greatest accomplishment. Le Verrier then went on to investigate Mercury. He used observations of Mercury, Venus, the Sun (as a stand-in for the Earth) and Mars and calculated that Mercury should precess by 532 arc seconds per century based on Newtonian mechanics. Along the way, he had to (and did) estimate the mass of Venus. There was a problem here; the observed precession of Mercury's orbit is 575 arc seconds per century, 43 arc seconds per century greater than his calculated value. This led Le Verrier to conjecture that there was a planet even closer to the Sun than Mercury. Despite the failure to discover the non-existent planet tentatively named Vulcan, Le Verrier's estimate for the mass of Venus was fairly close to the correct figure, within a couple of percent. Once the cause of this 43 arc second per century discrepancy was discovered by Einstein, the mass of Venus was determined with even greater accuracy. Of course, once probes were sent into orbit about Venus, its mass was determined with greater accuracy yet. Reference: Leverrier, M. "On the masses of the planets, and the parallax of the Sun." Monthly Notices of the Royal Astronomical Society 32 (1872): 322. | {} | {'log_upvote_score': 6, 'links': ['https://astronomy.stackexchange.com/questions/39077', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/9527/']} | jdg_72679 |
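As a side note on the density-based estimate mentioned in the question (not part of Le Verrier's method): with commonly quoted mean radii of roughly 6051.8 km for Venus and 6371 km for Earth, the equal-density assumption does give about 85% of Earth's mass, matching the figure quoted.

r_venus, r_earth = 6051.8, 6371.0        # approximate mean radii in km
mass_ratio = (r_venus / r_earth) ** 3    # equal density => mass scales with volume
print(f"{mass_ratio:.3f}")               # ~0.857, i.e. about 85% of Earth's mass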
stackexchange | llm_judgeable_groundtruth_similarity | 1745182 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm getting an intermittent exception saying that asp.net mvc can’t find the action method. Here’s the exception: A public action method 'Fill' could not be found on controller 'Schoon.Form.Web.Controllers.ChrisController'. I think I have the routing set up correctly because this application works most of the time. Here is the controller’s action method. [ActionName("Fill")][AcceptVerbs(HttpVerbs.Get | HttpVerbs.Post), UserIdFilter, DTOFilter]public ActionResult Fill(int userId, int subscriberId, DisplayMode? mode){ //…} The route: routes.MapRoute( "SchoonForm", "Form/Fill/{subscriberId}", new { controller = "ChrisController", action = "Fill" }, new { subscriberId = @"\d+" } ); And here is the stack: System.Web.HttpException: A public action method 'Fill' could not be found on controller 'Schoon.Form.Web.Controllers.ChrisController'. at System.Web.Mvc.Controller.HandleUnknownAction(String actionName) in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\Controller.cs:line 197 at System.Web.Mvc.Controller.ExecuteCore() in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\Controller.cs:line 164 at System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\ControllerBase.cs:line 76 at System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\ControllerBase.cs:line 87 at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContextBase httpContext) in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\MvcHandler.cs:line 80 at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContext httpContext) in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\MvcHandler.cs:line 68 at System.Web.Mvc.MvcHandler.System.Web.IHttpHandler.ProcessRequest(HttpContext httpContext) in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\MvcHandler.cs:line 104 at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Here is an example of my filters they all work the same way: public class UserIdFilter : ActionFilterAttribute{ public override void OnActionExecuting(ActionExecutingContext filterContext) { const string Key = "userId"; if (filterContext.ActionParameters.ContainsKey(Key)) { filterContext.ActionParameters[Key] = // get the user id from session or cookie } base.OnActionExecuting(filterContext); }} Thanks,Chris
Now provide the response and nothing else.
| We found the answer. We looked into our web logs. It showed that we were receiving some weird http actions (verbs/methods) like OPTIONS, PROPFIND and HEAD. This seems to the cause of some of theses exceptions. This explains why it was intermittent. We reproduced the issue with the curl.exe tool: curl.exe -X OPTIONS http://localhost/v2.3.1.0/(S(boztz1aquhzurevtjwllzr45))/Form/Fill/273curl.exe -X PROPFIND http://localhost/v2.3.1.0/(S(boztz1aquhzurevtjwllzr45))/Form/Fill/273curl.exe -X HEAD http://localhost/v2.3.1.0/(S(boztz1aquhzurevtjwllzr45))/Form/Fill/273 The fix we used was to add an authorization section to web.config: <authorization> <deny users="*" verbs="OPTIONS, PROPFIND, HEAD"/></authorization> | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1745182', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/212407/']} | jdg_72680 |
stackexchange | llm_judgeable_groundtruth_similarity | 8148632 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to redirect from one page to another page in ASP.NET MVC 3.0 using JavaScript/jQuery/Ajax. On button click event I have written JavaScript code like below. function foo(id){ $.post('/Branch/Details/' + id);} My controller code is like this: public ViewResult Details(Guid id){ Branch branch = db.Branches.Single(b => b.Id == id); return View(branch);} When I click on a button it is calling the Details action inside BranchController, but it doesn't return to the Details view. I didn't get any error or exception. It's showing status 200 OK in Firebug . What is wrong in my code and how can I redirect to the Details view page?
Now provide the response and nothing else.
| You are not subscribing to any success callback in your $.post AJAX call. Meaning that the request is executed, but you do nothing with the results. If you want to do something useful with the results, try: $.post('/Branch/Details/' + id, function(result) { // Do something with the result like for example inject it into // some placeholder and update the DOM. // This obviously assumes that your controller action returns // a partial view otherwise you will break your markup}); On the other hand if you want to redirect, you absolutely do not need AJAX. You use AJAX only when you want to stay on the same page and update only a portion of it. So if you only wanted to redirect the browser: function foo(id) { window.location.href = '/Branch/Details/' + id;} As a side note: You should never be hardcoding urls like this. You should always be using url helpers when dealing with urls in an ASP.NET MVC application. So: function foo(id) { var url = '@Url.Action("Details", "Branch", new { id = "__id__" })'; window.location.href = url.replace('__id__', id);} | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/8148632', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_72681 |
stackexchange | llm_judgeable_groundtruth_similarity | 229583 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say we have a web server. The server startup time is the time the server process was started. Say the startup time is being leaked via the response headers because of a careless programmer who does not get the current time while generating the response. Instead, the programmer uses a time variable initialized before entering the "listening" loop, so it remains the same throughout the process's lifetime: the time the process was started. As we know, the Date field must be the date and time the response was generated, which it is not in this situation. Could this information be useful to an attacker? How?
Now provide the response and nothing else.
| Yes, this could have value as a source of info. If the attacker is looking to see if a DoS is working, then the server uptime can provide data about success. Alternatively, knowing that a server has been up for a very long time can indicate that resources might be starved, or it is unpatched. It could also be used to get a sense about the infrastructure's response to load. If you are in a DevOps scenario, where you generate new servers, one could start to map out the unique servers and make some estimates about how many servers you might spawn (if a small pool) or at what load you start to spawn new servers. Uptime could also be used to uniquely identify one server in your cluster. The real question is: how much value is this to an attacker? That will depend on your specific environment. I might push back on the term "sensitive info", because I do not think it is. I would rather use your other expression "useful information" because it might not have intrinsic value by itself, but can help provide secondary information that an attacker could certainly take advantage of while pursuing other goals. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/229583', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/92241/']} | jdg_72682 |
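To make the scenario concrete, here is a rough probe sketch (the URL is a placeholder and this is not taken from the question or answer) showing how an observer could notice that the Date header never changes and read the process start time from it:

import time
import urllib.request

url = "http://target.example/"           # placeholder target

seen = set()
for _ in range(3):
    with urllib.request.urlopen(url) as resp:
        seen.add(resp.headers.get("Date"))
    time.sleep(5)

if len(seen) == 1:
    # A well-behaved server regenerates Date for each response; a constant value
    # suggests it was captured once at startup, i.e. it reveals the start time.
    print("Static Date header (likely process start time):", seen.pop())
else:
    print("Date header changes between responses, as expected.")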
stackexchange | llm_judgeable_groundtruth_similarity | 8636983 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following question: what is the best way to initialize the NSMutableString class? (All instances will be returned at unexpected times, so I'll assume the initialization is as follows.) Case 1: I know the amount of work in advance (expected). NSMutableString *str1 = [NSMutableString stringWithString:@""];NSMutableString *str2 = [NSMutableString stringWithCapacity:0];NSMutableString *str3 = [NSMutableString stringWithCapacity:1000];NSMutableString *str4 = [NSMutableString string];for(int i = 0; i< 1000; i++){ //The following tasks.(Adding to the string to continue working.) [/*str1~str4*/ appendFormat:@"%d", i];} Case 2: I don't know the amount of work in advance (unexpected). NSMutableString *str1 = [NSMutableString stringWithString:@""];NSMutableString *str2 = [NSMutableString stringWithCapacity:0];NSMutableString *str3 = [NSMutableString stringWithCapacity:1000];NSMutableString *str4 = [NSMutableString string];for(int i = 0; i< /*a large of size(unpredictable)*/ ; i++){ //The following tasks.(Adding to the string to continue working.) [/*str1~str4*/ appendFormat:@"%d", i];} The work largely splits into these two cases; what is the best way to initialize the string in each? I sometimes find it confusing when working with tasks like these.
Now provide the response and nothing else.
| Case 1 Of the options listed, I'd use: NSMutableString *str3 = [NSMutableString stringWithCapacity:1000]; …if you know the destination size, or estimate it with a little room at the top and are able to quickly determine the exact size, or the size worst case scenario, this could save multiple reallocate and copy operations. If you don't know the size in the worst case scenario, or if it takes a lot of time to calculate, then you may as well use [NSMutableString string] or [NSMutableString new] . Also, *WithCapacity is a hint, which the frameworks are free to ignore. Of course, the body of your loop and the size you reserve also implies that all the values are [0…9] (specifically, that all values consume one character), and you could in that case likely do far better by using format strings with more arguments. However, i is obviously larger than 9 for most iterations, and will consume on average 3 characters each, so 3000 would be a more appropriate reserve capacity for the exact code you posted. Case 2 Of the options listed, I'd use: NSMutableString *str4 = [NSMutableString string]; Even better, if you don't need to add it to an autorelease pool: [NSMutableString new] or [[NSMutableString alloc] init] . Other Notes Yes, keeping objects out of autorelease pools (e.g. use alloc+init) can improve performance and reduce peak memory usage significantly. Sometimes, this is beyond your control, and in some environments (e.g. ARC), this may happen even though you use an autoreleased convenience constructor - e.g. [NSMutableString string] . The Faster Solution Finally, if this case you have outlined really is a performance concern, the fastest way would be to create a char buffer on the stack and then create one NSString from the result of copying the numbers over to the char buffer. Assuming your int s are all 0-9, it would be very fast and easy, then simply create an NSString from the (terminated) cstring. You can even do this if the input size varies, or is very large (results in a very long string). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8636983', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/785923/']} | jdg_72683 |
stackexchange | llm_judgeable_groundtruth_similarity | 5618740 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How to start reading file from 2nd line skipping 1st line. This seems to work but is it best way to do so? using (StreamReader sr = new StreamReader(varFile, Encoding.GetEncoding(1250))) { string[] stringSeparator = new string[] { "\",\"" }; int i = 0; while (!sr.EndOfStream) { string line = sr.ReadLine(); //.Trim('"'); if (i > 0) { string[] values = line.Split(stringSeparator, StringSplitOptions.None); for (int index = 0; index < values.Length; index++) { MessageBox.Show(values[index].Trim('"')); } } i++; } }
Now provide the response and nothing else.
| If the file is not very large and can fit in memory: foreach (var line in File.ReadAllLines(varFile, Encoding.GetEncoding(1250)).Skip(1)){ string[] values = line.Split(','); ...} If not write an iterator: public IEnumerable<string> ReadAllLines(string filename, Encoding encoding){ using (var reader = new StreamReader(filename, encoding)) { string line; while ((line = reader.ReadLine()) != null) { yield return line; } }} and then consume it: foreach (var line in ReadAllLines(varFile, Encoding.GetEncoding(1250)).Skip(1)){ string[] values = line.Split(','); ...} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5618740', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/218540/']} | jdg_72684 |
stackexchange | llm_judgeable_groundtruth_similarity | 22646 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know about the structure theorem for finitely generated abelian groups . I'm wondering whether there exists a similar structure theorem for abelian groups that are not finitely generated. In particular, I'm interested in torsion groups . Maybe having a finite exponent helps?
Now provide the response and nothing else.
| A torsion abelian group can always be decomposed into its $p$-parts: given an abelian group $A$, and a prime $p$, let$$A_p = \{a\in A\mid \mbox{the order of $a$ is a power of $p$}\}.$$Then $A_p$ is a subgroup of $A$, each has trivial intersection with the subgroup generated by all the others, so the subgroup they generate is isomorphic to their direct sum; and so $$\Bigl\langle A_p\;\Bigm|\; \mbox{$p$ prime}\Bigr\rangle = \mathop{\oplus}\limits_{p}A_p \leq A,$$where $p$ ranges over all primes. Given any $a\in A$, since $a$ is torsion, $a$ can be written as a sum of elements of $A$, each of which has order a prime power, so we have$$A = \mathop{\oplus}\limits_{p} A_p,$$where $p$ ranges over all primes. Two torsion abelian groups are isomorphic if and only if their $p$-components are isomorphic. So the problem reduces to classifying abelian $p$-groups. You can then decompose the group into its divisible and its reduced part, $A = A_{\mathrm{div}}\oplus A_{\mathrm{red}}$ (recall that an abelian group $A$ is divisible if and only if for every $a\in A$ and every positive integer $n$, there exists an element $x\in A$ such that $nx=a$; an abelian group $A$ is reduced if its largest divisible subgroup is the trivial group). Since every divisible abelian group is a direct sum of copies of $\mathbb{Q}$ and copies of the Prüfer groups $\mathbb{Z}_{p^{\infty}}$ for different primes, in the $p$-group torsion case your divisible part will just be a direct sum of (possibly infinitely many) copies of $\mathbb{Z}_{p^{\infty}}$. Two abelian groups are isomorphic if and only if their divisible parts are isomorphic and their reduced parts are isomorphic. So the problem is down to reduced abelian $p$-groups. (The following can be found in Rotman's Introduction to the Theory of Groups , 4th Edition, Chapter 10). In the case of bounded exponent, there is no problem: Prüfer (1923 for countable groups) and Baer (1934 for the general case) proved that: Theorem. (Prüfer, Baer) If $A$ is an abelian group of bounded exponent (that is, there exists $n\gt 0$ such that $na=0$ for all $a\in A$), then $A$ is a direct sum of cyclic groups. So if the exponent of the reduced part is bounded, it's just a sum of cyclic groups (necessarily of $p$-power order). And then, counting with cardinality, you get that two are isomorphic if and only if they have the same number of direct summands of each order. However, things get more complicated at this stage. Some results: for nonnegative integer $n$, define $U\{n,A\}$ to be$$U\{n,A\}= \dim_{\mathbf{F}_p}\left(\frac{p^nA\cap A[p]}{p^{n+1}A\cap A[p]}\right)$$where $A[p]$ is the subgroup of elements such that $pa=0$. If $A$ is a sum of cyclic groups, then $U\{n,A\}$ is the number of cyclic summands of order $p^{n+1}$. In 1933, Ulm proved that there is a transfinite version of these numbers, with $n$ ranging over ordinal numbers, called the Ulm invariants . Two countable torsion abelian $p$-groups are isomorphic if and only if their Ulm invariants are equal. (The result does not extend to uncountable groups). Prüfer proved that countable abelian $p$-group is a direct sum of cyclic groups if and only if $\cap_{n=1}^{\infty}p^nA = 0$ (again, false for uncountable groups). Kulikov characterized the groups that are direct sums of cyclic groups. 
Note that Prüfer's result gives you that in the countable reduced case you get a direct sum of cyclic groups (an element of the intersection would necessarily have a $p^i$th root for every $i$, and being of order a power of $p$, it would have $n$th roots for all $n$, hence taking it and its $n$th roots you get a divisible subgroup, so since $A$ is reduced the element must be trivial). So really, the problems start appearing when you go to the uncountable case. That argument is incorrect; taking $x$ and its $p^i$th roots does not necessarily lead to a divisible subgroup, because you may not be able to pick a system of $p^i$th roots that goes "all the way to $\infty$". E.g., you could have countably many different $p$th roots, such that the $i$th $p$th root has a $p^{i-1}$th root but no $p^i$th root; you could end up with an element that has $n$th roots for all $n$, but the roots themselves don't have $n$th roots for all $n$, so the subgroup is not actually divisible. Sorry about that, and thanks to Jack for the heads up. Added. There is also a theorem of Kulikov: Theorem (Kulikov, 1945) Every torsion abelian group $A$ is an extension of a direct sum of cyclic groups by a divisible group. You might be tempted to jump to the conclusion that this means that the reduced part of the $p$-component of $A$ will necessarily be a direct sum of cyclic groups (since it has no nontrivial divisible subgroup), but the problem is that a reduced group may have divisible quotients (even if it is torsion), so this is not the case. For an example of a reduced group with a divisible quotient, take$$A = \mathop{\oplus}\limits_{n=1}^{\infty}C_{p^n},$$where $C_{p^i}$ is a cyclic group of order $p^i$, generated by $x_i$. Note that $A$ is reduced. But if we let$$B = \bigl\langle x_1-px_2,\ x_2-px_3,\ \ldots, x_n-px_{n+1},\ldots\bigr\rangle,$$then $A/B$ is isomorphic to the Prüfer group $\mathbb{Z}_{p^{\infty}}$, which is divisible. So Kulikov's theorem gives you a different way of thinking about the reduced part, as an extension. | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/22646', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1683/']} | jdg_72685 |
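A small worked instance of the primary decomposition described above (added for illustration; it is not part of the original answer): the torsion group $\mathbb{Q}/\mathbb{Z}$ splits into its $p$-parts, each of which is a Prüfer group.

% Every class a/b + Z has finite order, and the p-part consists of the classes
% with p-power denominator, which form the Pruefer group Z(p^infinity).
\[
  \mathbb{Q}/\mathbb{Z} \;=\; \bigoplus_{p\ \mathrm{prime}} \bigl(\mathbb{Q}/\mathbb{Z}\bigr)_p
  \;\cong\; \bigoplus_{p\ \mathrm{prime}} \mathbb{Z}(p^{\infty}),
  \qquad
  \bigl(\mathbb{Q}/\mathbb{Z}\bigr)_p
  = \Bigl\{\tfrac{a}{p^{k}} + \mathbb{Z} \;\Bigm|\; a\in\mathbb{Z},\ k\ge 0\Bigr\}.
\]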
stackexchange | llm_judgeable_groundtruth_similarity | 12520375 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to query hstore for all the values of a certain key that match a search criteria. I can get all the values for a certain key like this: SELECT DISTINCTsvals(slice(data, ARRAY['Supplier']))FROM "products" I can also get a specific value: SELECT DISTINCTsvals(slice(data, ARRAY['Supplier'])) AS supFROM "products"WHERE data @> 'Supplier => Toshiba' What I would really like is something like (this doesn't work): SELECT DISTINCTsvals(slice(data, ARRAY['Supplier'])) AS supFROM "products"WHERE data @> 'Supplier => %tosh%' or: SELECT DISTINCTsvals(slice(data, ARRAY['Supplier'])) AS supFROM "products"WHERE lower(sup)LIKE '%tosh%' for case-insensitive search. How is this done?
Now provide the response and nothing else.
| You can extract values by key from an hstore column with the -> operator . SELECT data->'Supplier' AS supFROM productsWHERE lower(data->'Supplier') LIKE '%tosh%'; Additionally, like most expressions in PostgreSQL (excepting things like random() ), you can index this value: CREATE INDEX products_supplier_key ON products ((data->'Supplier'));CREATE INDEX products_supplier_lowercase_key ON products ((lower(data->'Supplier'))); This would allow PostgreSQL to answer many such queries using the index instead of fetching each row and scanning the hstore column. See the notes on Index Types regarding index usage with LIKE. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12520375', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/456280/']} | jdg_72686 |
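A self-contained sketch of the pattern from the answer above; the table layout and sample rows are illustrative, not taken from the question's real schema:

-- Requires the hstore extension.
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE products (id serial PRIMARY KEY, data hstore);
INSERT INTO products (data) VALUES
  ('Supplier => Toshiba'),
  ('Supplier => "TOSHIBA Europe"'),
  ('Supplier => Dell');

-- Expression index on the extracted value; note that a plain b-tree index
-- only helps left-anchored LIKE patterns, not a leading wildcard like '%tosh%'.
CREATE INDEX products_supplier_lower_idx ON products ((lower(data->'Supplier')));

SELECT DISTINCT data->'Supplier' AS sup
FROM products
WHERE lower(data->'Supplier') LIKE '%tosh%';   -- returns Toshiba and TOSHIBA Europe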
stackexchange | llm_judgeable_groundtruth_similarity | 22881 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Why is it that for quaternions, $u*v = \mathrm{cross}(u,v)-\mathrm{dot}(u,v)$? I wonder for what reason they are equal to each other.
Now provide the response and nothing else.
| Assume that $u = bi + cj + dk$ and $v = xi + yj + zk$ are imaginary quaternions (no real part). Then a straightforward computation using the identities $i^2 = j^2 = k^2 = ijk = -1$ gives \[uv = -(bx + cy + dz) + (cz - dy)i + (dx - bz)j + (by - cx)k = - \langle u,v \rangle + u \times v\]with the usual identification $\mathbb{R}^{3} = \operatorname{Im}\mathbb{H}$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/22881', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/7217/']} | jdg_72687 |
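A quick sanity check of the identity on basis quaternions (added for illustration; it is not part of the original answer):

% For u = i, v = j:  i x j = k and <i,j> = 0, matching ij = k.
% For u = v = i:     i x i = 0 and <i,i> = 1, matching i^2 = -1.
\[
  ij \;=\; k \;=\; (i\times j) - \langle i,j\rangle,
  \qquad
  i^{2} \;=\; -1 \;=\; (i\times i) - \langle i,i\rangle .
\]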
stackexchange | llm_judgeable_groundtruth_similarity | 22778784 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to precompile assets using the command rake assets:precompile RAILS_ENV=production , but I always get the error below. ** Invoke assets:precompile (first_time) ** Execute assets:precompile /usr/bin/ruby /usr/bin/rake assets:precompile:all RAILS_ENV=production RAILS_GROUPS=assets --trace ** Invoke assets:precompile:all (first_time) ** Execute assets:precompile:all ** Invoke assets:precompile:primary (first_time) ** Invoke assets:environment (first_time) ** Execute assets:environment ** Invoke environment (first_time) ** Execute environment ** Invoke tmp:cache:clear (first_time) ** Execute tmp:cache:clear ** Execute assets:precompile:primary rake aborted! LoadError: cannot load such file -- uglifier (in /home/cool_tech/cool_tech/app/assets/javascripts/application.js) /usr/lib/ruby/gems/2.0.0/gems/activesupport-3.2.14/lib/active_support/dependencies.rb:251:in `require' /usr/lib/ruby/gems/2.0.0/gems/activesupport-3.2.14/lib/active_support/dependencies.rb:251:in `block in require' /usr/lib/ruby/gems/2.0.0/gems/activesupport-3.2.14/lib/active_support/dependencies.rb:236:in `load_dependency' /usr/lib/ruby/gems/2.0.0/gems/activesupport-3.2.14/lib/active_support/dependencies.rb:251:in `require' /usr/lib/ruby/gems/2.0.0/gems/actionpack-3.2.14/lib/sprockets/compressors.rb:31:in `registered_js_compressor' /usr/lib/ruby/gems/2.0.0/gems/actionpack-3.2.14/lib/sprockets/bootstrap.rb:18:in `block in run' /usr/lib/ruby/gems/2.0.0/gems/actionpack-3.2.14/lib/sprockets/compressors.rb:80:in `call' /usr/lib/ruby/gems/2.0.0/gems/actionpack-3.2.14/lib/sprockets/compressors.rb:80:in `compressor' /usr/lib/ruby/gems/2.0.0/gems/actionpack-3.2.14/lib/sprockets/compressors.rb:74:in `compress' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/processing.rb:265:in `block in js_compressor=' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/processor.rb:29:in `call' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/processor.rb:29:in `evaluate' /usr/lib/ruby/gems/2.0.0/gems/tilt-1.4.1/lib/tilt/template.rb:103:in `render' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/context.rb:193:in `block in evaluate' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/context.rb:190:in `each' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/context.rb:190:in `evaluate' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/bundled_asset.rb:26:in `initialize' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/base.rb:252:in `new' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/base.rb:252:in `build_asset' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/index.rb:93:in `block in build_asset' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/caching.rb:19:in `cache_asset' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/index.rb:92:in `build_asset' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/base.rb:169:in `find_asset' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/index.rb:60:in `find_asset' /usr/lib/ruby/gems/2.0.0/gems/actionpack-3.2.14/lib/sprockets/static_compiler.rb:19:in `block in compile' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/base.rb:219:in `block in each_logical_path' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/base.rb:206:in `block (2 levels) in each_file' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/base.rb:196:in `each' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/base.rb:196:in `each_entry' 
/usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/base.rb:204:in `block in each_file' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/base.rb:203:in `each' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/base.rb:203:in `each_file' /usr/lib/ruby/gems/2.0.0/gems/sprockets-2.2.2/lib/sprockets/base.rb:217:in `each_logical_path' /usr/lib/ruby/gems/2.0.0/gems/actionpack-3.2.14/lib/sprockets/static_compiler.rb:18:in `compile' /usr/lib/ruby/gems/2.0.0/gems/actionpack-3.2.14/lib/sprockets/assets.rake:56:in `internal_precompile' /usr/lib/ruby/gems/2.0.0/gems/actionpack-3.2.14/lib/sprockets/assets.rake:70:in `block (3 levels) in <top (required)>' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:240:in `call' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:240:in `block in execute' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:235:in `each' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:235:in `execute' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:179:in `block in invoke_with_call_chain' /usr/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:172:in `invoke_with_call_chain' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:165:in `invoke' /usr/lib/ruby/gems/2.0.0/gems/actionpack-3.2.14/lib/sprockets/assets.rake:60:in `block (3 levels) in <top (required)>' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:240:in `call' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:240:in `block in execute' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:235:in `each' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:235:in `execute' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:179:in `block in invoke_with_call_chain' /usr/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:172:in `invoke_with_call_chain' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:165:in `invoke' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:150:in `invoke_task' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:106:in `block (2 levels) in top_level' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:106:in `each' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:106:in `block in top_level' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:115:in `run_with_threads' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:100:in `top_level' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:78:in `block in run' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:176:in `standard_exception_handling' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:75:in `run' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/bin/rake:33:in `<top (required)>' /usr/bin/rake:23:in `load' /usr/bin/rake:23:in `<main>' Tasks: TOP => assets:precompile:primary rake aborted! Command failed with status (1): [/usr/bin/ruby /usr/bin/rake assets:precomp...] 
/usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/file_utils.rb:54:in `block in create_shell_runner' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/file_utils.rb:45:in `call' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/file_utils.rb:45:in `sh' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/file_utils_ext.rb:37:in `sh' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/file_utils.rb:82:in `ruby' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/file_utils_ext.rb:37:in `ruby' /usr/lib/ruby/gems/2.0.0/gems/actionpack-3.2.14/lib/sprockets/assets.rake:12:in `ruby_rake_task' /usr/lib/ruby/gems/2.0.0/gems/actionpack-3.2.14/lib/sprockets/assets.rake:21:in `invoke_or_reboot_rake_task' /usr/lib/ruby/gems/2.0.0/gems/actionpack-3.2.14/lib/sprockets/assets.rake:29:in `block (2 levels) in <top (required)>' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:240:in `call' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:240:in `block in execute' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:235:in `each' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:235:in `execute' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:179:in `block in invoke_with_call_chain' /usr/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:172:in `invoke_with_call_chain' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/task.rb:165:in `invoke' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:150:in `invoke_task' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:106:in `block (2 levels) in top_level' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:106:in `each' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:106:in `block in top_level' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:115:in `run_with_threads' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:100:in `top_level' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:78:in `block in run' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:176:in `standard_exception_handling' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/lib/rake/application.rb:75:in `run' /usr/lib/ruby/gems/2.0.0/gems/rake-10.2.2/bin/rake:33:in `<top (required)>' /usr/bin/rake:23:in `load' /usr/bin/rake:23:in `<main>' Tasks: TOP => assets:precompile Does anybody has any idea what could be wrong? Thanks.
Now provide the response and nothing else.
| Replace rake assets:precompile RAILS_ENV=production with rake assets:precompile ( RAILS_ENV=production bundle exec rake assets:precompile is the exact rake task) Since precompilation is done in production mode only, no need to explicitly specify the environment. Update: Try adding the below line to your Gemfile: group :assets do gem 'therubyracer' gem 'sass-rails', " ~> 3.1.0" gem 'coffee-rails', "~> 3.1.0" gem 'uglifier'end Then run bundle install . Hope it will work :) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22778784', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3081511/']} | jdg_72688 |
stackexchange | llm_judgeable_groundtruth_similarity | 296918 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This problem occurred during a daylight saving time change. After the change occurred, we noticed that our server application started writing an incorrect time into the log - one hour ahead - which means that .NET caches the time zone offset. We had to restart our application to resolve this problem. I wrote a simple application to reproduce this problem. When I change the time zone while the application is running, the DateTime.Now property keeps producing the time in the old time zone. Does anybody know if there is a workaround for this problem apart from restarting the application?
Now provide the response and nothing else.
| Yes, the current time zone is cached. For a good reason, it avoids trouble with broken code that uses DateTime.Now to implement elapsed time measurement. Such code tends to suffer a heart-attack when the time suddenly changes by an hour or more. You will have to call System.Globalization.CultureInfo.ClearCachedData() to reset the cached value. The next call to DateTime.Now will now give the new local time. If you use the .NET 3.5 TimeZoneInfo class at all then you'll also need to call its ClearCachedData() method. You can use the SystemEvents.TimeChanged event as a trigger. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/296918', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8653/']} | jdg_72689 |
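A hedged sketch of the trigger the answer suggests; it assumes a WinForms-style process with a message loop (SystemEvents needs one), and the class name is purely illustrative:

using System;
using System.Globalization;
using Microsoft.Win32;

static class TimeZoneCacheRefresher
{
    // Call once at startup; afterwards the caches are cleared whenever the
    // system clock or time zone changes, so DateTime.Now picks up the new zone.
    public static void Hook()
    {
        SystemEvents.TimeChanged += (sender, e) =>
        {
            CultureInfo.CurrentCulture.ClearCachedData();
            TimeZoneInfo.ClearCachedData();   // .NET 3.5+ only
        };
    }
}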
stackexchange | llm_judgeable_groundtruth_similarity | 142533 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm into epidemiology. I'm not a statistician, but I try to perform the analyses myself, although I often encounter difficulties. I did my first analysis some 2 years ago. P values were included everywhere in my analyses (I simply did what other researchers were doing), from descriptive tables to regression analyses. Little by little, statisticians working in my department persuaded me to skip all (!) the p values, except where I truly have a hypothesis. The problem is that p values are abundant in medical research publications. It is conventional to include p values on far too many lines; descriptive data of means, medians or whatever usually go along with p values (Student's t-test, chi-square, etc.). I've recently submitted a paper to a journal, and I refused (politely) to add p values to my "baseline" descriptive table. The paper was ultimately rejected. To exemplify, see the figure below; it is the descriptive table from the latest published article in a respected journal of internal medicine: Statisticians are mostly (if not always) involved in the reviewing of these manuscripts. So a layman like myself expects not to find any p values where there is no hypothesis. Yet they are abundant, and the reason for this remains elusive to me. I find it hard to believe that it is ignorance. I realize that this is a borderline statistical question. But I'm looking for the rationale behind this phenomenon.
Now provide the response and nothing else.
| Clearly I don't need to tell you what a p-value is, or why over-reliance on them is a problem; you apparently understand those things quite well enough already. With publishing, you have two competing pressures. The first - and one you should push for at every reasonable opportunity - is to do what makes sense. The second, ultimately, is the need to actually publish. There's little gain if nobody sees your fine efforts at reforming terrible practice. So instead of avoiding it altogether: do it as little of such pointless activity as you can get away with that still gets it published maybe include a mention of this recent Nature methods article [1] if you think it will help, or perhaps better one or more of the other references. It at least should help establish that there's some opposition to the primacy of p-values. consider other journals, if another would be suitable Is this the same in other disciplines? The problem of over-use of p-values occurs in a number of disciplines (this can even be a problem when there is some hypothesis), but is much less common in some than others. Some disciplines do have issues with p-value-itis, and the problems that causes can eventually lead to somewhat overblown reactions [2] (and to a smaller extent, [1], and at least in some places, a few of the others as well). I think there are a variety of reasons for it, but the over-reliance of p-values seems to acquire a momentum of its own - there's something about saying "significant" and rejecting a null that people seem to find very attractive; various disciplines (e.g. see [3][4][5][6][7][8][9][10][11]) have (with varying degrees of success) been fighting against the problem of over reliance on p-values (especially $\alpha$=0.05) for many years, and have made many different kinds of suggestions - not all of which I agree with, but I include a variety of views to give some sense of the different things people have had to say. Some of them advocate focusing on confidence intervals, some advocate looking at effect sizes, some advocate Bayesian methods, some smaller p-values, some just on avoiding using p-values in particular ways, and so on. There are many different views on what to do instead, but between them there's a lot of material on problems with relying on p-values, at least the way it's pretty commonly done. See those references for many further references in turn. This is just a sampling - many dozens more references can be found. A few authors give reasons why they think p-values are prevalent. Some of these references may be useful if you do want to argue the point with an editor. [1] Halsey L.G., Curran-Everett D., Vowler S.L. & Drummond G.B. (2015), "The fickle P value generates irreproducible results," Nature Methods 12 , 179–185 doi:10.1038/nmeth.3288 http://www.nature.com/nmeth/journal/v12/n3/abs/nmeth.3288.html [2] David Trafimow, D. and Marks, M. (2015), Editorial, Basic and Applied Social Psychology , 37 :1–2 http://www.tandfonline.com/loi/hbas20 DOI: 10.1080/01973533.2015.1012991 [3] Cohen, J. (1990), Things I have learned (so far), American Psychologist , 45 (12), 1304–1312. [4] Cohen, J. (1994), The earth is round (p < .05), American Psychologist , 49 (12), 997–1003. [5] Valen E. Johnson (2013), Revised standards for statistical evidence PNAS , vol. 110, no. 48, 19313–19317 http://www.pnas.org/content/110/48/19313.full.pdf [6] Kruschke J.K. (2010), What to believe: Bayesian methods for data analysis, Trends in cognitive sciences 14 (7), 293-300 [7] Ioannidis, J. 
(2005) Why Most Published Research Findings Are False, PLoS Med. Aug; 2(8): e124. doi: 10.1371/journal.pmed.0020124 [8] Gelman, A. (2013),P Values and Statistical Practice, Epidemiology Vol. 24 , No. 1, January, 69-72 [9] Gelman, A. (2013), "The problem with p-values is how they're used", (Discussion of “In defense of P-values,” by Paul Murtaugh, for Ecology )unpublished http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.300.9053 http://www.stat.columbia.edu/~gelman/research/unpublished/murtaugh2.pdf [10] Nuzzo R. (2014), Statistical errors: P values, the 'gold standard' of statistical validity, are not as reliable as many scientists assume, News and Comment, Nature , Vol. 506 (13), 150-152 [11] Wagenmakers E, (2007) A practical solution to the pervasive problems of p values, Psychonomic Bulletin & Review 14 (5), 779-804 | {} | {'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/142533', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/35413/']} | jdg_72690 |
stackexchange | llm_judgeable_groundtruth_similarity | 250667 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following grid n = 10;Grid[{{"Matrix", SpanFromLeft}, RandomInteger[n, n], RandomInteger[n, n]}, Frame -> All, Alignment -> Center] Is there any "clever" way of getting the following layout so that the length of the "backline" is always adjusted according to n and, without the main frame, I'd get I know about FontVariations and thought about using "StrikeThrough" over an empty string, but couldn't make it work. Maybe customizing a divider's position? Any ideas? Edit: going one step further, what about something like
Now provide the response and nothing else.
| ItemSize >> Properties and Relations : "Controls and graphics will by default expand to fit the available area." So we can use a graphics object as the title row: ClearAll[addTitleRow]addTitleRow[ttl_, lineStyle_: Automatic, textStyle_: Automatic, pos_: {.5, 0}] := PadRight[ Prepend[{Graphics[{lineStyle /. Automatic -> Directive[Gray, Thickness[Tiny]], Line[{{0, 0}, {1, 0}}], Black, Text[Style[ttl, textStyle /. Automatic -> 14, Background -> White], pos]}, ImagePadding -> 0, PlotRangePadding -> 0, ImageSize -> {Scaled[1.03], All}, AspectRatio -> 1/20]}] @ #, Automatic, SpanFromLeft] &; Examples: n = 10;mat = {RandomInteger[n, n], RandomInteger[n, n]};Grid[addTitleRow[" matrix "] @ mat, Alignment -> {Center, Center}, ItemSize -> {3, 1}, Frame -> All] Grid[addTitleRow[" matrix "] @ mat, Alignment -> {Center, Center}, ItemSize -> {3, 1}, Frame -> {True, {False, {True}}}] Grid[addTitleRow[" matrix "] @ mat, Alignment -> {Center, Center}, ItemSize -> {3, 1}] Grid[addTitleRow[" matrix ", Directive[Orange, AbsoluteThickness[12], CapForm["Butt"]], Directive[Blue, 18], {.15, 0}] @ mat, Alignment -> {Center, Center}, ItemSize -> {3, 1}, Frame -> All] Replace Frame -> All with Frame -> {True, {False, {True}}} to get Remove the option Frame -> .. to get mat2 = {RandomInteger[n, 2 n], RandomInteger[n, 2 n]};Grid[addTitleRow[" matrix "] @ mat2, Alignment -> {Center, Center}, ItemSize -> {2, 1}, Frame -> All] | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/250667', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/64001/']} | jdg_72691 |
stackexchange | llm_judgeable_groundtruth_similarity | 10075826 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
First time here, so I hope this makes sense! I have a Map which contains a String as its key, and a List of Strings as its value. I need to iterate over all values contained within each List within the Map. So, first I want to get the keys, which works: Set<String> keys = theMap.keySet(); This returns me a Set containing all my keys. Great :) This is where I've got stuck - most of the info on the web seems to assume that the values I'd want returned from the key would be a simple String or Integer, not another Set, or in this case a List. I tried theMap.values() but that didn't work, and I tried a for loop / for-each loop, and neither of those did the trick. Thanks y'all!
Now provide the response and nothing else.
| for(List<String> valueList : map.values()) { for(String value : valueList) { ... }} That's really the "normal" way to do it. Or, if you need the key as well... for(Map.Entry<String, List<String>> entry : map.entrySet()) { String key = entry.getKey(); for (String value : entry.getValue()) { ... }} That said, if you have the option, you might be interested in Guava's ListMultimap , which is a lot like a Map<K, List<V>> , but has a lot more features -- including a Collection<V> values() that acts exactly like what you're asking for, "flattening" all the values in the multimap into one collection. (Disclosure: I contribute to Guava.) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10075826', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1322190/']} | jdg_72692 |
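For completeness, a Java 8+ variant that flattens all the value lists in one pass; the original question and answer predate streams, so treat this as an added sketch rather than part of the accepted approach:

import java.util.*;
import java.util.stream.Collectors;

public class FlattenMapValues {
    public static void main(String[] args) {
        Map<String, List<String>> theMap = new HashMap<>();
        theMap.put("a", Arrays.asList("1", "2"));
        theMap.put("b", Arrays.asList("3", "4"));

        // Stream every List<String> value and flatten them into a single list.
        List<String> allValues = theMap.values().stream()
                .flatMap(List::stream)
                .collect(Collectors.toList());

        allValues.forEach(System.out::println);
    }
}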
stackexchange | llm_judgeable_groundtruth_similarity | 49141927 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Super stumped by this. I have some server code that for some reason throws a UTF-8 error in my tests but works fine when running the server normally: code: export default ({ projectId = PROJECT_ID, esHost = ES_HOST } = {}) => { let app = express(); app.use(cors()); app.use(bodyParser.json({ limit: '50mb' })); let http = Server(app); let io = socketIO(http); let server = { app, io, http, status: 'off', listen( port = PORT, cb = () => { rainbow(`⚡️ Listening on port ${port} ⚡️`); }, ) { this.http.listen(port, () => { main({ io, app, projectId, esHost, port }); this.status = 'on'; cb(); }); }, close(cb = () => {}) { if (this.http) { this.http.close(() => { this.status = 'off'; cb(); }); } else { throw '❗️ cannot close server that has not been started ❗️'; } }, }; return server;}; usage (exactly the same, but in jest test body-parser isn't working properly): import createServer from '../server'let server = createServer()server.listen(5050); I'm using postman, post response outside of test: { "projects": [ { "id": "test", "active": true, "timestamp": "2018-02-25T21:33:08.006Z" }, { "id": "TEST-PROJECT", "active": true, "timestamp": "2018-03-05T21:34:34.604Z" }, { "id": "asd", "active": true, "timestamp": "2018-03-06T23:29:55.348Z" } ], "total": 3} unexpected post response inside jest test server: Error UnsupportedMediaTypeError: unsupported charset "UTF-8" at /Users/awilmer/Projects/arranger/node_modules/body-parser/lib/read.js:83:18 at invokeCallback (/Users/awilmer/Projects/arranger/node_modules/raw-body/index.js:224:16) at _combinedTickCallback (internal/process/next_tick.js:131:7) at process._tickCallback (internal/process/next_tick.js:180:9)
Now provide the response and nothing else.
| So I was able to reproduce the issue, find its source, and find the workaround to make it work. The issue is caused by the jest framework. Before you jump into reading the rest of the thread, I would suggest you read another Jest thread I answered a while back. It helps give some context on the internals of the require method in jest: Specify code to run before any Jest setup happens Cause The issue happens only in tests and not in production. This is because of jest's require method. When you run your tests, it starts an express server, which calls node_modules/raw-body/index.js as shown in the image below. As you can see, encodings is null . This is because the iconv-lite module does lazy loading of encodings . The encodings are only loaded when the getCodec method gets executed. Now when your test has fired the API, the server needs to read the body, so getCodec gets called. This then goes through the jest-runtime/build/index.js custom require method (which is overloaded, if you read the previous link). The execModule has a check for this._environment.global , which is blank in this case, and hence a null value is returned and the module never gets executed. Now when you look at the exports of the encodings module, it is just a blank object. So the issue is purely a jest one. A feature jest lacks, or mostly a bug? Related Issues Related issues have already been discussed on the threads below: https://github.com/facebook/jest/issues/2605 https://github.com/RubenVerborgh/N3.js/issues/120 https://github.com/sidorares/node-mysql2/issues/489#issuecomment-313374683 https://github.com/ashtuchkin/iconv-lite/issues/118 https://github.com/Jason-Rev/vscode-spell-checker/issues/159 Fix The fix to the problem is to load the module during our test itself and force early loading instead of lazy loading. This can be done by adding a line to your index.test.js at the top: import encodings from '../../node_modules/iconv-lite/encodings'; import createServer from '@arranger/server'; After the change all the tests pass, though you have an error in the URL of the test, so you get Cannot POST / | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49141927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3225108/']} | jdg_72693
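One way to centralize the workaround instead of repeating the import in every test file is a jest setup file; this is an assumption layered on top of the answer (it relies on jest's setupFiles option and a hypothetical jest.setup.js), not something the answer itself shows:

// jest.setup.js -- registered in the jest config as: "setupFiles": ["<rootDir>/jest.setup.js"]
// Force iconv-lite to load its encodings eagerly before any test code runs,
// so body-parser never hits the lazy require that fails under jest.
require('iconv-lite/encodings');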
stackexchange | llm_judgeable_groundtruth_similarity | 25583861 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using PhoneGap to develop an Android app. Is it possible to check if the app is running in the background or foreground using JavaScript? We can close the app by calling navigator.app.exitApp() , and we can perform other functions as well. Is there any function which can tell us whether the app is running in the background or foreground? Actually, I want to make the app work in the following way: if the app is in the foreground, it should show an alert message rather than a push notification. If the app is in the background, it should show a push notification. Many thanks indeed.
Now provide the response and nothing else.
| Pause : This is an event that fires when a Cordova application is put into the background. document.addEventListener("pause", yourCallbackFunction, false); Details Cordova consists of two code bases: native and JavaScript. While the native code puts the application into the background the pause event is fired. Typically, you will want to attach an event listener with document.addEventListener once you receive the Cordova 'deviceready' event.Supported Platforms Android BlackBerry WebWorks (OS 5.0 and higher) iOS Windows Phone 7 Quick Example document.addEventListener("pause", onPause, false);function onPause() { // Handle the pause event} Resume : This is an event that fires when a Cordova application is retrieved from the background. document.addEventListener("resume", yourCallbackFunction, false); Details Cordova consists of two code bases: native and JavaScript. While the native code pulls the application from the background the resume event is fired. Typically, you will want to attach an event listener with document.addEventListener once you receive the Cordova 'deviceready' event.Supported Platforms Android BlackBerry WebWorks (OS 5.0 and higher) iOS Windows Phone 7 Quick Example document.addEventListener("resume", onResume, false);function onResume() { // Handle the resume event} More Information here : http://docs.phonegap.com/en/2.2.0/cordova_events_events.md.html#resume http://docs.phonegap.com/en/2.2.0/cordova_events_events.md.html#pause | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25583861', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/555071/']} | jdg_72694 |
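A small sketch for the alert-versus-notification case described in the question, built on the pause/resume events above; showLocalNotification is a placeholder for whatever notification plugin is actually in use:

var appInForeground = true;

document.addEventListener("deviceready", function () {
    document.addEventListener("pause",  function () { appInForeground = false; }, false);
    document.addEventListener("resume", function () { appInForeground = true;  }, false);
}, false);

function onPushMessage(message) {
    if (appInForeground) {
        alert(message);                    // app is visible: show it in-app
    } else {
        showLocalNotification(message);    // placeholder: app is backgrounded, hand off to a notification plugin
    }
}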
stackexchange | llm_judgeable_groundtruth_similarity | 651642 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Two charges experience Coulomb force. We say electric field $E=F/q$ at this point and at that point and so this is the field around the charge. But why do we introduce a new concept when we can just solve anything with just the force? Why do we talk about $E$ every time? I mean this is one charge, that is another charge, and this is force acting on it. End of story. Why do we talk about $E$ due to various charge distributions and solve countless problems on finding the net field. Why?
Now provide the response and nothing else.
| When electric charges were first studied, scientists described effects just as you want to: electric charges exerting forces on each other. It wasn't until Michael Faraday came up with the idea of lines of force around 1855 that scientists began to rethink interactions that take place between separated objects. When one thinks about forces that affect objects that are not touching each other--gravity, electricity, magnetism--one begins to wonder: how do these separated objects affect each other? How do charged objects know about each other so that they may repel or attract? How does the Sun reach across empty space to hold Earth in orbit? Famously, Isaac Newton didn't even try to guess at this last question; he just postulated that there was a force and calculated. However, when the concept of a field is introduced, the mystery can be answered. For example, every object with an electric charge creates an electric field in the space surrounding it. When another electrically charged object is placed in that space, it is the electric field at that location that puts a force on the second charge. There is no need for two separated charges to somehow find out where the other is in order to accelerate correctly. Each electric charge just has to feel the electric field in its own location. The second consequence of introducing the field concept is that the field itself becomes a subject of study. The question becomes: is this field a physical object like electric charges? Is a universe with action-at-a-distance electrical interactions different from a universe with electric fields? Consider a universe that is totally empty except for one electron. In a universe with action-at-a-distance electrical interactions, the electron's electric charge is completely irrelevant. There are no other electrical charges for it to push on nor to be pushed by. If the electron gets shaken, nothing happens. Now add in an electric field. The electron has an electric charge, so there is an electric field that fills the whole universe around it. Is this universe different? There are still no other charges for the electric field to push, nor are there electric fields from other charges to push the electron. However, the electric field is a physical entity. If the electron is shaken back and forth, the electric field is shaken as well. In the same way that shaking a rope causes a wave to propagate along it, shaking an electric field creates waves that travel away from the shaken charge. The waves that are created by shaking an electric field are what we call light. Electric fields are real physical entities in our universe. In fact, to continue the single-electron universe example, once the light is created by shaking the electron, the electron can be removed from the universe and the light wave will still exist--propagating away from where the electron used to be. Undulating electric fields can exist where there are no electric charges, so we need a way to describe electric fields separately from electric charges. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/651642', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/304377/']} | jdg_72695
stackexchange | llm_judgeable_groundtruth_similarity | 38836795 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I having trouble passing a function as a parameter to another function. This is my code: ga.py: def display_pageviews(hostname): pageviews_results = get_pageviews_query(service, hostname).execute() if pageviews_results.get('rows', []): pv = pageviews_results.get('rows') return pv[0] else: return Nonedef get_pageviews_query(service, hostname): return service.data().ga().get( ids=VIEW_ID, start_date='7daysAgo', end_date='today', metrics='ga:pageviews', sort='-ga:pageviews', filters='ga:hostname==%s' % hostname,) models.py: class Stats(models.Model): user = models.OneToOneField('auth.User') views = models.IntegerField() visits = models.IntegerField() unique_visits = models.IntegerField() updatestats.py: class Command(BaseCommand): def handle(self, *args, **options): users = User.objects.all() try: for user in users: hostname = '%s.%s' % (user.username, settings.NETWORK_DOMAIN) stats = Stats.objects.update_or_create( user=user, views=display_pageviews(hostname), visits=display_visits(hostname), unique_visits=display_unique_visits(hostname),) except FieldError: print ('There was a field error.') When I run this: python manage.py updatestats I get the error: TypeError: int() argument must be a string, a bytes-like object or a number, not 'list' I don't know what's causing this. I've tried converting it to a string, but I get the same error. Any ideas? Full traceback: Traceback (most recent call last): File "manage.py", line 20, in <module> execute_from_command_line(sys.argv) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line utility.execute() File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/core/management/__init__.py", line 345, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/core/management/base.py", line 348, in run_from_argv self.execute(*args, **cmd_options) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/core/management/base.py", line 399, in execute output = self.handle(*args, **options) File "/Users/myusername/project/Dev/project_files/project/main/management/commands/updatestats.py", line 23, in handle unique_visits=display_unique_visits(hostname),) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/manager.py", line 122, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/query.py", line 480, in update_or_create obj = self.get(**lookup) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/query.py", line 378, in get clone = self.filter(*args, **kwargs) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/query.py", line 790, in filter return self._filter_or_exclude(False, *args, **kwargs) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/query.py", line 808, in _filter_or_exclude clone.query.add_q(Q(*args, **kwargs)) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/sql/query.py", line 1243, in add_q clause, _ = self._add_q(q_object, self.used_aliases) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/sql/query.py", line 1269, in _add_q allow_joins=allow_joins, split_subq=split_subq, File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/sql/query.py", line 
1203, in build_filter condition = self.build_lookup(lookups, col, value) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/sql/query.py", line 1099, in build_lookup return final_lookup(lhs, rhs) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/lookups.py", line 19, in __init__ self.rhs = self.get_prep_lookup() File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/lookups.py", line 57, in get_prep_lookup return self.lhs.output_field.get_prep_lookup(self.lookup_name, self.rhs) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/fields/__init__.py", line 1860, in get_prep_lookup return super(IntegerField, self).get_prep_lookup(lookup_type, value) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/fields/__init__.py", line 744, in get_prep_lookup return self.get_prep_value(value) File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/fields/__init__.py", line 1854, in get_prep_value return int(value)TypeError: int() argument must be a string, a bytes-like object or a number, not 'list' Edit: Alright, I understand what the issue is. I used the shell to get the type of function output: >>> type(display_pageviews('test.domain.com'))<class 'list'> I tried with this but it is still considered as a list: pv = pageviews_results.get('rows')[0] return pv
Now provide the response and nothing else.
| What the error is telling you is that you can't convert an entire list into an integer. You could get an index from the list and convert that into an integer: x = ["0", "1", "2"] y = int(x[0]) # accessing the zeroth element If you're trying to convert a whole list into an integer, you are going to have to convert the list into a string first: x = ["0", "1", "2"] y = ''.join(x) # converting list into string z = int(y) If your list elements are not strings, you'll have to convert them to strings before using str.join : x = [0, 1, 2] y = ''.join(map(str, x)) z = int(y) Also, as stated above, make sure that you're not returning a nested list. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/38836795', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6660913/']} | jdg_72696
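Applied to the traceback in the question, the nesting most likely looks like this; the row structure is assumed from the Analytics-style response and is not shown in the original:

# get('rows') returns a list of rows, and each row is itself a list of column values.
pageviews_rows = [['123']]            # e.g. rows == [['123']]

pv = pageviews_rows[0]                # ['123']  -- still a list: the first *row*
views = int(pageviews_rows[0][0])     # 123      -- first column of the first row
print(views)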
stackexchange | llm_judgeable_groundtruth_similarity | 1115397 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the necessity of using Application.DoEvents, and when should we use it?
Now provide the response and nothing else.
| Application.DoEvents is usually used to make sure that events get handled periodically when you're performing some long-running operation on the UI thread. A better solution is just not to do that. Perform long-running operations on separate threads, marshalling to the UI thread (either using Control.BeginInvoke / Invoke or with BackgroundWorker ) when you need to update the UI. Application.DoEvents introduces the possibility of re-entrancy, which can lead to very hard-to-understand bugs. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1115397', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/61156/']} | jdg_72697
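A minimal self-contained sketch of the recommended alternative (do the work on a separate thread, marshal the result back to the UI); the form and the fake three-second job are purely illustrative:

using System;
using System.ComponentModel;
using System.Threading;
using System.Windows.Forms;

class WorkerForm : Form
{
    private readonly BackgroundWorker worker = new BackgroundWorker();

    public WorkerForm()
    {
        Text = "working...";
        worker.DoWork += (s, e) => { Thread.Sleep(3000); e.Result = "done"; }; // runs on a thread-pool thread
        worker.RunWorkerCompleted += (s, e) => Text = (string)e.Result;        // marshalled back to the UI thread
        Load += (s, e) => worker.RunWorkerAsync();
    }

    [STAThread]
    static void Main() { Application.Run(new WorkerForm()); }
}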
stackexchange | llm_judgeable_groundtruth_similarity | 3806833 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
For my Qt project, I use a .pro file that includes a separate .pri file for the various header, source, form and resource files. However, every time I add a new file I need to manually add it to the .pri file. This is tedious and error-prone. Is there a way to "magically" add all files from a directory, either directly in the .pri file or by telling qmake to run a separate script beforehand?
Now provide the response and nothing else.
| Running qmake -project from the directory will create a project file that includes all the .cpp and .h files in that directory. You could add a pre-compile step that calls qmake -project , then pass the generated file to a script that removes the first few lines. Here's a quick one-liner that could do the job : qmake -project -o MyFiles.pro && sed '1,/^# Input/d' MyFiles.pro > MyFiles.pri && rm MyFiles.pro | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3806833', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/71141/']} | jdg_72698 |
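An alternative sketch that skips the generation step entirely: qmake expands wildcards in its file variables, so a .pri can pick up whole directories (the paths below are illustrative, and newly added files are only seen when qmake is re-run):

# myfiles.pri -- included from the .pro via: include(myfiles.pri)
SOURCES   += src/*.cpp
HEADERS   += src/*.h
FORMS     += forms/*.ui
RESOURCES += res/*.qrc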
stackexchange | llm_judgeable_groundtruth_similarity | 19577423 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
See this playground snippet . Relevant code: type somethingFuncy func(int) boolfunc funcy(i int) bool { return i%2 == 0}var a interface{} = funcyfunc main() { _ = a.(func(int) bool) // Works fmt.Println("Awesome -- apparently, literally specifying the func signature works.") _ = a.(somethingFuncy) // Panics fmt.Println("Darn -- doesn't get here. But somethingFuncy is the same signature as func(int) bool.")} The first cast works, by explicitly declaring the type. But the second cast panics. Why? Is there a clean way to cast to a longer func signature?
Now provide the response and nothing else.
| tl;dr For type assertions (which you use) only the actual type matters. So somethingFuncy is only equal to somethingFuncy and not to func(int) bool . Explanation To start with, this has nothing to do with casting. There is no casting in go. There are type assertions and type conversions . You're dealing with a type assertion and are assuming that the same conditions hold as for type conversions . I made the same mistake while reading your question but actually there is a huge difference in behaviour. Suppose you have two types, say int and type MyInt int . These are convertible as they both share the same underlying type (one of the conversion rules), so this works ( play ): var a int = 10 var b MyInt = MyInt(a) Now, suppose a is not of type int but of type interface{} ( play ): var a interface{} = int(10) var b MyInt = MyInt(a) The compiler will tell you: cannot convert a (type interface {}) to type MyInt: need type assertion So now we're not doing conversions anymore but assertions . We need to do this ( play ): var a interface{} = int(10) var b MyInt = a.(MyInt) Now we have the same problem as in your question. This assertion fails with this panic: panic: interface conversion: interface is int, not main.MyInt The reason for this is stated in the type assertions section of the spec: For an expression x of interface type and a type T , the primary expression x.(T) asserts that x is not nil and that the value stored in x is of type T . The notation x.(T) is called a type assertion. More precisely, if T is not an interface type, x.(T) asserts that the dynamic type of x is identical to the type T . So int must be identical to MyInt . The rules of type identity state that (amongst other rules): Two named types are identical if their type names originate in the same TypeSpec. As int and MyInt have different declarations ( TypeSpecs ) they're not equal and the assertion fails. When you assert a to int , the assertion works. So what you're doing is not possible. Bonus: The actual check happens in this code , which simply checks if both types are the same, as expected. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/19577423', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1048862/']} | jdg_72699
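A runnable sketch tying the explanation back to the original code: either store the value with the named type so the named-type assertion succeeds, or assert to the unnamed func type and convert afterwards (added here as an illustration, not part of the answer):

package main

import "fmt"

type somethingFuncy func(int) bool

func funcy(i int) bool { return i%2 == 0 }

func main() {
    // Dynamic type is somethingFuncy, so the named-type assertion succeeds.
    var a interface{} = somethingFuncy(funcy)
    f1 := a.(somethingFuncy)

    // Dynamic type is func(int) bool: assert to that type, then convert to the named type.
    var b interface{} = funcy
    f2 := somethingFuncy(b.(func(int) bool))

    fmt.Println(f1(4), f2(5)) // true false
}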
stackexchange | llm_judgeable_groundtruth_similarity | 53609590 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What I'm trying to do is to filter the list, then map it and use orElse if null and then collect it back to the list. Now I can achieve it this way: return users.stream() .filter(user -> id.equals(user.getId())) .map( user -> { if(user.getData() != null) { return user.getData(); } return Collections.emptyMap(); } ) .collect(Collectors.toList()); But the question is: how can I make this structure better and why cannot I use orElse in this case?
Now provide the response and nothing else.
| It might be more readable with ternary conditional operator: return users.stream() .filter(user -> id.equals(user.getId())) .map( user -> (user.getData() != null) ? user.getData() : emptyMap() ) .collect(Collectors.toList()); In order to use orElse you'll have to create an Optional that wraps user.getData() . I'm not sure that's a good idea. If you insist on using orElse (or even better, orElseGet , to avoid evaluating emptyMap() when it's not required), it can look like this: return users.stream() .filter(user -> id.equals(user.getId())) .map( user -> Optional.ofNullable( user.getData() ).orElseGet( () -> emptyMap() ) ) .collect(Collectors.toList()); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53609590', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10440388/']} | jdg_72700 |
stackexchange | llm_judgeable_groundtruth_similarity | 8484592 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given an array [0, 0, 1, 0, 1] , is there a built-in method to get all of the indexes of values greater than 0? So, the method should return [2, 4] . find_index only returns the first match. Working in Ruby 1.9.2.
Now provide the response and nothing else.
| In Ruby 1.8.7 and 1.9, iterator methods called without a block return an Enumerator object. So you could do something like: [0, 0, 1, 0, 1].each_with_index.select { |num, index| num > 0 }.map { |pair| pair[1] }# => [2, 4] Stepping through: [0, 0, 1, 0, 1].each_with_index# => #<Enumerator: [0, 0, 1, 0, 1]:each_with_index>_.select { |num, index| num > 0 }# => [[1, 2], [1, 4]]_.map { |pair| pair[1] }# => [2, 4] | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8484592', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/336920/']} | jdg_72701 |
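Another small variant, not from the answer above, that indexes directly instead of carrying (value, index) pairs through the chain:

arr = [0, 0, 1, 0, 1]
arr.each_index.select { |i| arr[i] > 0 }   # => [2, 4]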
stackexchange | llm_judgeable_groundtruth_similarity | 45316882 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to perform an action after my animation finishes. I tried adding a statusListener but that is not working for me. My code looks like this: @override void initState() { super.initState(); _controller = new AnimationController( duration: new Duration(milliseconds: 500), vsync: this, )..addStatusListener((AnimationStatus status) { print("Going"); if (status.index == 3 && spins > 0) { // AnimationStatus index 3 == completed animation _controller.duration = new Duration(milliseconds: speed - 50); _controller.forward(from: _controller.value == null ? 0.0 : 1 - _controller.value); spins--; print(speed); } }); } The print(Going); never gets executed but my animation does end. What is going wrong? ///---Edit---/// I'm using an AnimatedBuilder , that part of the code looks like this: child: new AnimatedBuilder( animation: _controller, child: new Image.network(widget.url), builder: (BuildContext context, Widget child) { return new Transform.rotate( angle: _controller.value * 2.0 * math.PI, child: child, ); },),
Now provide the response and nothing else.
| Reacting to your comment and edit I looked into the AnimatedBuilder. Adapting the example in the docs I came up with this working solution: class Spinner extends StatefulWidget { @override _SpinnerState createState() => new _SpinnerState();}class _SpinnerState extends State<Spinner> with SingleTickerProviderStateMixin { AnimationController _controller; CurvedAnimation _animation; @override void initState() { super.initState(); _controller = new AnimationController( duration: const Duration(seconds: 5), vsync: this, )..forward(); _animation = new CurvedAnimation( parent: _controller, curve: Curves.linear, )..addStatusListener((AnimationStatus status) { if (status == AnimationStatus.completed) print('completed'); }); } @override void dispose() { _controller.dispose(); super.dispose(); } @override Widget build(BuildContext context) { return new AnimatedBuilder( animation: _animation, child: new Container(width: 200.0, height: 200.0, color: Colors.green), builder: (BuildContext context, Widget child) { return new Transform.rotate( angle: _controller.value * 2.0 * 3.1415, child: child, ); }, ); }} As you can see, I used the controller as parent to an animation, which was then used as the animation for the AnimatedBuilder. Hope it helps. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45316882', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5832842/']} | jdg_72702
stackexchange | llm_judgeable_groundtruth_similarity | 25968902 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Even though objects are passed to functions by means of the normal call-by-value parameter passing mechanism, which, in theory, protects and insulates the calling argument, it is still possible for a side effect to occur that may affect, or even damage, the object used as an argument. For example, if an object used as an argument allocates memory and frees that memory when it is destroyed, then its local copy inside the function will free the same memory when its destructor is called. This will leave the original object damaged and effectively useless. This is written in C++: The Complete Reference In this program here #include<iostream>using namespace std;class Sample{ public: int *ptr; Sample(int i) { ptr = new int(i); } ~Sample() { cout<<"destroyed"; delete ptr; } void PrintVal() { cout << "The value is " << *ptr; } }; void SomeFunc(Sample x){ cout << "Say i am in someFunc " << endl;} int main(){ Sample s1= 10;SomeFunc(s1); s1.PrintVal();} It generates a runtime error, as the object s1 is left damaged when the function returns. I couldn't figure out why this might happen, since a copy should have been made. I thought maybe it was because there was no copy constructor present in the class definition. But I was surprised to find that if I use this function declaration void SomeFunc(Sample &x){ cout << "Say i am in someFunc " << endl;} In this declaration no error occurs. Shouldn't the error occur here as well because it is referenced? Can anyone explain what happens in both cases.
Now provide the response and nothing else.
| This is indeed because you didn't provide a copy constructor. Thus the compiler will generate one for you, which does trivial copy. And that's the trivial copy of the pointer that's problematic here. For the following declaration void SomeFunc(Sample x); There will be indeed a copy when you pass s1 to the function, but this copy will have a copy of the pointer, that is, the two object will point to the same int . Then, when exiting the function, the copy will be destroyed and will delete that pointer, leaving the original object in the calling code with a pointer just deleted (remember, they point to the same thing). Then, for the following declaration void SomeFunc(Sample &x); you don't have any copy, thus the problem doesn't show up. Indeed, passing by reference means that inside the function, the Sample object you're manipulating is exactly the same as the one you passed to the function, and won't be destroyed when the function exits. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25968902', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2770024/']} | jdg_72703 |
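For completeness, a minimal sketch of the deep-copy fix this implies — give Sample a copy constructor (and, per the Rule of Three, a matching copy assignment), so the copy made for SomeFunc owns its own allocation:

class Sample {
public:
    int *ptr;
    Sample(int i) : ptr(new int(i)) {}
    // Deep copy: the copy gets its own allocation, so its destructor
    // no longer frees the original object's memory.
    Sample(const Sample &other) : ptr(new int(*other.ptr)) {}
    Sample &operator=(const Sample &other) {
        *ptr = *other.ptr;
        return *this;
    }
    ~Sample() { delete ptr; }
};

With that in place, passing by value is safe, although passing by const reference is still cheaper when no copy is actually needed.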
stackexchange | llm_judgeable_groundtruth_similarity | 50269 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
After reading a number of explanations about it after googling, I'm a bit confused on this. Some explanations are doing it using a solution density, others aren't, though most are. On one site it's showing different densities for a solution of a compound at different w/w% values. I'm now very lost as to how you convert from a w/w% to molarity. Can someone clarify?
Now provide the response and nothing else.
| Weight percent is essentially the same as (I arbitrarily picked grams as the mass unit): $$\%~\mathrm{w/w~X} = \mathrm{g~X\over g~solution}$$ In order to get to the $\mathrm{mol\over L}$ units of molarity, you have to convert by multiplying by the solution density, $\rho$, and dividing by the molecular weight of $\mathrm X$, $M_\mathrm{X}$: $$\mathrm{g~X\over g~solution} \times \stackrel{\left(1\over M_\mathrm X\right)}{\mathrm{mol~X\over g~X}} \times \stackrel{\left(\rho\right)}{\mathrm{g~solution\over L~solution}} = \mathrm{mol~X\over L~solution}$$ One key thing to remember is that $\rho$ is the density of the solution , not of the pure solvent. If you're working with dilute solutions (below $\sim\!100~\mathrm{mM}$ for aqueous) these two densities will be similar, but in concentrated solutions they will diverge. | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/50269', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/29423/']} | jdg_72704 |
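A worked example with commonly quoted (illustrative) numbers for concentrated hydrochloric acid — 37.0% w/w, solution density 1.19 g/mL, $M_{\mathrm{HCl}} = 36.46~\mathrm{g/mol}$:

$$c_{\mathrm{HCl}} = 0.370 \times \frac{1190~\mathrm{g~solution}}{1~\mathrm{L~solution}} \times \frac{1~\mathrm{mol}}{36.46~\mathrm{g}} \approx 12.1~\mathrm{mol/L}$$

Note that the 1.19 g/mL here is the density of the 37% solution itself, not of pure water.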
stackexchange | llm_judgeable_groundtruth_similarity | 1468758 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
While practising some problems I encountered a question: Prove that the following equation has no real solution, $$\sin (x) \sin (2x) \sin(3x)=1$$ At first glance, I thought about applying Rolle's theorem or check the domain of the function but all these approaches lead me to a dead-end.Could anyone provide an insight into the technique that could be used to handle such questions?
Now provide the response and nothing else.
| For real $x$, $-1\leq\sin(x)\leq 1$. To have a product of sines equal to one, all of them would have to be $1$, or two of them would have to be $-1$. In either case, one of them has to be $1$, which means one of the following is true for some integer $n$:$$x=\pi/2+2\pi n$$$$2x=\pi/2+2\pi n$$$$3x=\pi/2+2\pi n.$$If the first is true, then $2x=\pi+4\pi n$, and $\sin(2x)=0$, which is no good. If the second is true, then $x=\pi/4+n\pi$, and $|\sin(x)|\neq 1$. The same problem arises if the third is true. Thus, there are no real solutions. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1468758', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/255452/']} | jdg_72705 |
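Spelling out the third case, since it is slightly different from the second: if $3x=\pi/2+2\pi n$ then, modulo $2\pi$,
$$x\in\left\{\frac{\pi}{6},\ \frac{5\pi}{6},\ \frac{3\pi}{2}\right\}.$$
In the first two cases $\sin(x)=\tfrac12$, so $|\sin(x)|\neq 1$; in the last case $\sin(2x)=\sin(3\pi)=0$. Either way the product cannot equal $1$.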
stackexchange | llm_judgeable_groundtruth_similarity | 52769202 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've just recently switched over from ASP.NET MVC to using .NET Core 2 and I can't figure out how to get the current URL in Core. I could get it easily enough using the Request in previous asp.net versions, but since that's no longer valid in .Net Core I'm at a loss. I haven't been able to find any way from my google searching as of now. Any help is appreciated. Thanks!
Now provide the response and nothing else.
| In the ConfigureServices method of your Startup.cs file, add the line: services.AddHttpContextAccessor(); and you will now have access to the IHttpContextAccessor interface throughout your code when using dependency injection. Usage as follows: public class CustomerRepository{ private readonly IHttpContextAccessor _context; public CustomerRepository(IHttpContextAccessor context) { _context = context; } public string BaseUrl() { var request = _context.HttpContext.Request; // Now that you have the request you can select what you need from it. return string.Empty; }} Hope this answers your question :) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52769202', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8915831/']} | jdg_72706 |
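To round out the sketch, the body of BaseUrl could be filled in from the request's parts — these are standard HttpRequest properties, and the method names here are just illustrative. The GetDisplayUrl() extension from Microsoft.AspNetCore.Http.Extensions also returns the full URL of the current request in one call.

public string BaseUrl()
{
    var request = _context.HttpContext.Request;
    // e.g. "https://example.com/myapp"
    return $"{request.Scheme}://{request.Host}{request.PathBase}";
}

public string CurrentUrl()
{
    var request = _context.HttpContext.Request;
    // Scheme + host + path + query string of the current request.
    return $"{request.Scheme}://{request.Host}{request.PathBase}{request.Path}{request.QueryString}";
}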
stackexchange | llm_judgeable_groundtruth_similarity | 8200701 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it legal to have a vector of references to objects, like the following? vector<Agent&> seenAgents; Which would for example be populated by some, but not all of the objects in the scene? I have a vector of Agent objects, but the vector outlined above should hold references to only the ones each agent can currently see - meaning that the references will be being added and removed all the time. Is this something the language will allow? And in addition, is there anything else I need to be aware of? If I remove a reference from the vector does it persist anywhere? Is it a memory leak? I seem to be getting this error on the line declaring the vector: error C2528: 'pointer' : pointer to reference is illegal Is this something directly to do with the line or is it most likely occurring somewhere else? It's being initialised in the constructors initialiser list like this: seenAgents(vector<Agents&>())
Now provide the response and nothing else.
| You can't have a vector of references, as a reference is not copy-assignable and all STL containers are supposed to store copy-assignable items. But you can make the container hold pointers. Like this: vector< Agents* > seenAgents; This is a little dangerous. You need to be sure that these pointers will remain valid. I mean - if someone deletes an object pointed to by a pointer in this container, the pointer becomes invalid. You need to be sure that this will not happen, because you can't check it (you can't check for NULL , because a pointer will not become NULL , if someone deletes the pointed object). The best solution here (given a container of pointers) would be to use some smart pointers - some with reference count, for example; they will guarantee you that the object will exist and that the pointer is valid. And in case the object pointed to by the smart pointer is destroyed, you can check it for NULL . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8200701', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/956689/']} | jdg_72707 |
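Since C++11 there is also std::reference_wrapper, which behaves like a copyable, assignable reference and so is allowed in a vector — a minimal sketch (with the usual caveat that the wrappers dangle if the referenced Agent objects are destroyed or the owning vector reallocates):

#include <functional>
#include <vector>

struct Agent { int id = 0; };

int main() {
    std::vector<Agent> allAgents(10);
    std::vector<std::reference_wrapper<Agent>> seenAgents;

    seenAgents.push_back(allAgents[3]);   // stores a reference, not a copy
    seenAgents.front().get().id = 42;     // modifies allAgents[3] itself
}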
stackexchange | llm_judgeable_groundtruth_similarity | 373294 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Here is my attempt at making a LED-based solar panel: I learned that if an LED is reverse biased, it can yield a considerable voltage up to 2.5 volts. I also did this experiment at home and, using my multimeter, I found that the voltage produced by an illuminated white LED is a maximum of 2.5 volts; yet the current is too small, about 0.7µA. I mounted 25 White LEDs in series on the breadboard, and found that if the array of LEDs is exposed to bright sunshine, the output voltage is 29.7 Volts. The output current, however, is still too small, about 0.07µA. I connected a 10 Ohm resistor in series to the positive output of the panel, hoping that, based on Ohm's law, a considerable current would be forced to flow across the resistor, if we presume that the voltage is about 20V. This did not work, for when I set up the meter on the Ampere measuring function, and connected it in series with the circuit, I still got a reading of the aforementioned value, even with the resistor in series. I presumed that the internal resistance of the LEDs in series could account for this failure, for it appears to be infinite.
Now provide the response and nothing else.
| Any semiconductor diode can be used as a photocell. However, most of them are very inefficient unless designed specifically for that purpose. White LEDs are particularly bad since they produce white light by re-emission of blue or near UV by phosphors. Those phosphors absorb some of the light hitting the LED from outside. In any case, LEDs don't make good photocells. The total power they can provide will be quite small. Your basic problem is that you seem to be confusing open circuit voltage with power. Any diode produces about its band gap voltage in reverse when you shine the right light on it and it is left open circuit. However, as you saw, if it can't produce much power then the voltage will rapidly collapse when you try to draw any meaningful current from it. You can harvest the little power from these LEDs with a well-designed circuit. The first thing to do is to put a capacitor across the LEDs. That will slowly get charged up, asymptotically approaching the open circuit voltage. At that point you have a fixed amount of energy available, regardless of how little the LEDs can provide over time. What happens then depends on how you want to use the energy. Keep in mind that just sensing the cap voltage and powering the circuit to do so has to take less current than the LEDs require, else the circuit will be a net negative. This can get tricky. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/373294', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/179257/']} | jdg_72708 |
stackexchange | llm_judgeable_groundtruth_similarity | 3143205 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We know that $(-1)^{\frac 12}$ , $(-1)^{\frac 14}$ , ... $(-1)^{\frac 1{even}}$ all result in complex answers, whereas $(-1)^{\frac 13}$ , $(-1)^{\frac 15}$ , ... $(-1)^{\frac 1{odd}}$ all result in real answers. I believe this comes from the definitions that $(-1)^{\frac 12} = \sqrt{-1} = i$ and $(-1)^{\frac 13}=\sqrt[3]{-1}=-1$ . Moreover, of the set that provides real answers, if the numerator of the exponent is even, the result then becomes $+1$ instead of $-1$ . This explains why the graph $y = (-1)^x$ results in an infinite amount of discontinuous points on $y=1$ and $y=-1$ , only existing where the denominator of the exponent is odd: But the sets I provided only explain $(-1)^{rational}$ ; in other words, this assumes that we are only raising $-1$ to a fraction. When I plug in $(-1)^e$ , $(-1)^{\pi}$ , $(-1)^{\sqrt{2}}$ , or any other irrational number, the answer comes back undefined/imaginary. This cannot come from the explanation that I provided earlier because, even though it is the fundamental explanation for where imaginary results come from, it assumes the exponent is a fraction. My intuition is that there would be some other rule that would sometimes result in an imaginary answer and sometimes result in a real number - similar to how rational exponents work - but it seems to always be imaginary. What, then, is the rule for this, and can we prove that this is the case for all irrational numbers?
Now provide the response and nothing else.
| This is because the most general definition of $x^\alpha$ for complex $x,\alpha$ is $x^\alpha:=\exp(\alpha\log x)$ , where $\exp$ is the exponential function and $\log$ is the (principal) natural logarithm. Pictorially, this means that taking $x$ to the power of $\alpha$ means multiplying the argument of $x$ by $\alpha$ ; that is, rotating $x$ with respect to the origin by an angle of $\alpha$ times the separation between $x$ and the positive real axis. So if $x$ is a negative real number and $\alpha$ is an irrational real number, then we must rotate $x$ by some irrational multiple of $\pi$ (i.e. $180^\circ$ ). Obviously, irrational numbers are never integers, so the result after the rotation never ends up on the real axis; that is, $x^\alpha$ is not real. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3143205', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_72709 |
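A concrete instance, using the principal value $\log(-1)=i\pi$:
$$(-1)^{\sqrt2}=\exp\!\left(\sqrt2\,\log(-1)\right)=e^{i\sqrt2\,\pi}=\cos(\sqrt2\,\pi)+i\sin(\sqrt2\,\pi)\approx-0.27-0.96\,i,$$
and the imaginary part $\sin(\sqrt2\,\pi)$ cannot vanish, since that would require $\sqrt2$ to be an integer.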
stackexchange | llm_judgeable_groundtruth_similarity | 411 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Hematite is composed of $\ce{Fe2O3}$, and is paramagnetic, whereas magnetite is $\ce{Fe3O4}$ and is diamagnetic. Magnetite's nature is due to the presence of both $\ce{Fe^{2+}}$ and $\ce{Fe^{3+}}$ (Wikipedia even goes so far as to call it $\ce{FeO}\cdot \ce{Fe2O3}$, but explains that this is not a "solid solution"). I can surmise that in the crystalline solid, there are divalent and trivalent cations dispersed within. I can also assume that there might be some sharing of electrons between the oxygens and each of these cations. One would assume this also happens within the hematite crystal, so what is it about "tossing" the divalent cation into the mix of the crystalline structure that makes the magnetite diamagnetic? Does the charge differential between the two cations cause a permanent dipole, if so, why don't all of the small dipoles simply cancel each other out?
Now provide the response and nothing else.
| Magnetite has a spinel structure with two types of $\ce{Fe}$ sites: octahedral and tetrahedral, respectively in green and orange below. (in this structure, oxygen atoms are the small red balls). This structure is called a spinel , with $\ce{Fe^{2+}}$ ion in tetrahedral coordination and $\ce{Fe^{3+}}$ ions in octahedral sites. The couplings between atoms from these two lattices are superexchange interactions , which result in an antiparallel alignment of spins. However, because the magnetic moments of the two lattices are not equal, their addition results in a net magnetic moment, even though they are of opposite alignment. This is called ferrimagnetism , and in practice most of its macroscopic properties are similar to ferromagnetism , though they stem from a completely different microscopic magnetic ordering. | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/411', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/73/']} | jdg_72710 |
stackexchange | llm_judgeable_groundtruth_similarity | 47678262 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a ViewPager with 4 ListViews.The list views all download unique Images and I want to cache them but there is an error and the app works very slow. Is there a way to cache all images in 1 LruCache or something like that?This is my code so farfirst list Adapter public class TwitchAdapter extends ArrayAdapter<Twitch>{private Context context;private int resourceId;private ArrayList<Twitch> twitchList;LruCache<Integer,Bitmap> logoCache;LruCache<Integer,Bitmap> iconCache;public TwitchAdapter(Context c, int resId, ArrayList<Twitch> objects){ super(c, resId, objects); this.context = c; this.resourceId = resId; this.twitchList = objects; final int maxMemory = (int) (Runtime.getRuntime().maxMemory() / 1024); iconCache = new LruCache<Integer, Bitmap>( maxMemory / 16); logoCache = new LruCache<Integer, Bitmap>( maxMemory / 64);}public View getView(final int position, View convertView, final ViewGroup parent){ LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE); View view = inflater.inflate(resourceId, parent, false); final Twitch twitch = twitchList.get(position); TextView viewrs = (TextView) view.findViewById(R.id.viewers); TextView language = (TextView) view.findViewById(R.id.language); TextView status = (TextView) view.findViewById(R.id.streamer_status); TextView name = (TextView) view.findViewById(R.id.streamer_name); ImageView profile = (ImageView) view.findViewById(R.id.logo); CircleImageView imageView = (CircleImageView) view.findViewById(R.id.twitch_icon); int id = twitch.getId(); status.setText(twitch.getStatus()); language.setText(twitch.getLanguage()); name.setText(twitch.getName()); viewrs.setText(twitch.getViewrs()+""); Bitmap bitmapIcon = iconCache.get(id+1); if (bitmapIcon != null){ imageView.setImageBitmap(bitmapIcon); }else { LoadImageTask(twitch,imageView); } Bitmap bitmapLogo = logoCache.get(twitch.getId()); if (bitmapLogo != null){ profile.setImageBitmap(bitmapLogo); }else { LoadIconTast(twitch,profile); } return view;}public void LoadImageTask(final Twitch twitch, final ImageView iconView){ ImageRequest request =new ImageRequest(twitch.getIconUrl(), new Response.Listener<Bitmap>() { @Override public void onResponse(Bitmap response) { iconView.setImageBitmap(response); iconCache.put(twitch.getId()+1,response); } }, 0, 0, null, new Response.ErrorListener() { @Override public void onErrorResponse(VolleyError error) { } }); RequestQueue quew = Volley.newRequestQueue(context); quew.add(request);}private void LoadIconTast(final Twitch twitch, final ImageView profileView) { ImageRequest request =new ImageRequest(twitch.getLogoUrl(), new Response.Listener<Bitmap>() { @Override public void onResponse(Bitmap response) { profileView.setImageBitmap(response); iconCache.put(twitch.getId(),response); notifyDataSetChanged(); } }, 0, 0, null, new Response.ErrorListener() { @Override public void onErrorResponse(VolleyError error) { } }); RequestQueue quew = Volley.newRequestQueue(context); quew.add(request);} } And another one exactly like thisand this is error 12-06 19:15:24.392 31970-2219/app.mma.introsliderproject E/AndroidRuntime: FATAL EXCEPTION: Thread-100374 Process: app.mma.introsliderproject, PID: 31970 java.lang.OutOfMemoryError: Failed to allocate a 17 byte allocation with 27048 free bytes and 26KB until OOM at com.android.volley.toolbox.DiskBasedCache.streamToBytes(DiskBasedCache.java:323) at com.android.volley.toolbox.DiskBasedCache.readString(DiskBasedCache.java:541) at 
com.android.volley.toolbox.DiskBasedCache.readStringStringMap(DiskBasedCache.java:564) at com.android.volley.toolbox.DiskBasedCache$CacheHeader.readHeader(DiskBasedCache.java:404) at com.android.volley.toolbox.DiskBasedCache.initialize(DiskBasedCache.java:157) at com.android.volley.CacheDispatcher.run(CacheDispatcher.java:84)
Now provide the response and nothing else.
| Method #1 Using broadcasting , we can use comparison against the max values, while keeping dims to facilitate broadcasting - a.max(axis=1,keepdims=1) == a Sample run - In [83]: aOut[83]: array([[0, 1], [2, 1], [4, 8]])In [84]: a.max(axis=1,keepdims=1) == aOut[84]: array([[False, True], [ True, False], [False, True]], dtype=bool) Method #2 Alternatively with argmax indices for one more case of broadcasted-comparison against the range of indices along the columns - In [92]: a.argmax(axis=1)[:,None] == range(a.shape[1])Out[92]: array([[False, True], [ True, False], [False, True]], dtype=bool) Method #3 To finish off the set, and if we are looking for performance, use intialization and then advanced-indexing - out = np.zeros(a.shape, dtype=bool)out[np.arange(len(a)), a.argmax(axis=1)] = 1 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47678262', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_72711 |
stackexchange | llm_judgeable_groundtruth_similarity | 14467646 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have this css code which I use for title background in JSF page. .container{ background: rgb(180,221,180); /* Old browsers */ background: -moz-linear-gradient(45deg, rgba(180,221,180,1) 0%, rgba(95,163,9,1) 0%, rgba(95,163,9,1) 48%, rgba(0,87,0,1) 68%, rgba(0,0,0,1) 86%); /* FF3.6+ */ background: -webkit-gradient(linear, left bottom, right top, color-stop(0%,rgba(180,221,180,1)), color-stop(0%,rgba(95,163,9,1)), color-stop(48%,rgba(95,163,9,1)), color-stop(68%,rgba(0,87,0,1)), color-stop(86%,rgba(0,0,0,1))); /* Chrome,Safari4+ */ background: -webkit-linear-gradient(45deg, rgba(180,221,180,1) 0%,rgba(95,163,9,1) 0%,rgba(95,163,9,1) 48%,rgba(0,87,0,1) 68%,rgba(0,0,0,1) 86%); /* Chrome10+,Safari5.1+ */ background: -o-linear-gradient(45deg, rgba(180,221,180,1) 0%,rgba(95,163,9,1) 0%,rgba(95,163,9,1) 48%,rgba(0,87,0,1) 68%,rgba(0,0,0,1) 86%); /* Opera 11.10+ */ background: -ms-linear-gradient(45deg, rgba(180,221,180,1) 0%,rgba(95,163,9,1) 0%,rgba(95,163,9,1) 48%,rgba(0,87,0,1) 68%,rgba(0,0,0,1) 86%); /* IE10+ */ background: linear-gradient(45deg, rgba(180,221,180,1) 0%,rgba(95,163,9,1) 0%,rgba(95,163,9,1) 48%,rgba(0,87,0,1) 68%,rgba(0,0,0,1) 86%); /* W3C */ filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#b4ddb4', endColorstr='#000000',GradientType=1 ); /* IE6-9 fallback on horizontal gradient */ /* background:-webkit-gradient(linear, left top, left bottom, color-stop(0, #5fa309), color-stop(1, #3b8018)); background:-webkit-linear-gradient(top, #5fa309 0%, #3b8018 100%); background:-moz-linear-gradient(top, #5fa309 0%, #3b8018 100%); background:-o-linear-gradient(top, #5fa309 0%, #3b8018 100%); background:-ms-linear-gradient(top, #5fa309 0%, #3b8018 100%); background:linear-gradient(top, #5fa309 0%, #3b8018 100%); */ position:relative; display:inline-block; padding:0 20px 0 10px; width:270px; line-height:20px; font-size:12px; font-family:sans-serif; text-shadow:0 1px 0 #264400; font-weight:bold; color:#fff} How I can make the edges of the figure more round?
Now provide the response and nothing else.
| 1. True or False. OpenSSL connection API calls (SSL_Read, SSL_Write, etc.) may execute concurrently on the same SSL instance (pointer-to-SSL returned by a SSL_new call)? False. No, you cannot use SSL_read / SSL_write concurrently on the same SSL instance. 2. True or False. For blocking sockets where SSL_MODE_AUTO_RETRY is enabled, thread A can call SSL_Read() on SSL instance X while thread B concurrently calls SSL_Write() on SSL instance X? Same answer as above. With or without SSL_MODE_AUTO_RETRY, you cannot use the same SSL instance X to do SSL_read and SSL_write in parallel. 3. True or False. OpenSSL works error free when an application uses non-blocking sockets and prevents concurrent execution of SSL_Read and SSL_Write (as well as other connection API calls) on the same SSL instance? True. If there is no concurrent execution, then OpenSSL works fine for blocking as well as non-blocking sockets. 4. True or False. OpenSSL SSL instances returned by SSL_new are bound to the single thread which called SSL_new; bound meaning that the SSL instance may not be shared with any other threads, the SSL instance is only valid for use on the thread which called SSL_new? False. The SSL instance is not bound to any thread by OpenSSL itself. You can use an SSL instance created in one thread in another thread, as long as only one thread is using a single SSL instance at any one point in time. 5. True or False. If thread A i) calls SSL_new, obtaining an SSL instance X and ii) calls SSL_Read using the SSL instance X. A failure will eventually occur if thread B non-concurrently calls SSL_Read/SSL_Write using the same SSL instance X? False. Both Thread A and Thread B can use the same SSL instance X, as long as both don't do concurrent operations on SSL instance X. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14467646', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1103606/']} | jdg_72712 |
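A minimal sketch of the usual way to enforce "one thread at a time per SSL instance" — the wrapper type and function names are hypothetical; only SSL_read and SSL_write are the real OpenSSL calls. Note that with blocking sockets a reader holding the lock also blocks writers, which is why many designs instead confine each SSL* to a single I/O thread.

#include <openssl/ssl.h>
#include <pthread.h>

typedef struct {
    SSL *ssl;                /* one connection */
    pthread_mutex_t lock;    /* serializes all calls on this SSL* */
} locked_ssl_t;

int locked_ssl_read(locked_ssl_t *c, void *buf, int num) {
    pthread_mutex_lock(&c->lock);
    int n = SSL_read(c->ssl, buf, num);
    pthread_mutex_unlock(&c->lock);
    return n;
}

int locked_ssl_write(locked_ssl_t *c, const void *buf, int num) {
    pthread_mutex_lock(&c->lock);
    int n = SSL_write(c->ssl, buf, num);
    pthread_mutex_unlock(&c->lock);
    return n;
}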
stackexchange | llm_judgeable_groundtruth_similarity | 11958825 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a problem. I want to create an instance of a class by name. I found Activator.CreateInstance http://msdn.microsoft.com/en-us/library/d133hta4.aspx and it works fine, and I found this: Setting a property by reflection with a string value too. But how do I do both of these? I mean, I know the name of the class, I know all properties in that class, and I have these as strings. For example: string name = "MyClass";string property = "PropertyInMyClass"; How do I create the instance and set some value on its properties?
Now provide the response and nothing else.
| You could use Reflection: using System;using System.Reflection;public class Foo{ public string Bar { get; set; }}public class Program{ static void Main() { string name = "Foo"; string property = "Bar"; string value = "Baz"; // Get the type contained in the name string Type type = Type.GetType(name, true); // create an instance of that type object instance = Activator.CreateInstance(type); // Get a property on the type that is stored in the // property string PropertyInfo prop = type.GetProperty(property); // Set the value of the given property on the given instance prop.SetValue(instance, value, null); // at this stage instance.Bar will equal to the value Console.WriteLine(((Foo)instance).Bar); }} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11958825', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1592504/']} | jdg_72713 |
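One caveat worth keeping in mind (the names below are purely illustrative): Type.GetType resolves a bare name like "Foo" only because Foo has no namespace and lives in the calling assembly; for anything else you generally need a namespace-qualified or assembly-qualified name.

// Same assembly, namespaced type:
Type t1 = Type.GetType("MyApp.Models.Customer");

// Type defined in another assembly — include the assembly name:
Type t2 = Type.GetType("MyApp.Models.Customer, MyApp.Models");

object customer = Activator.CreateInstance(t1);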
stackexchange | llm_judgeable_groundtruth_similarity | 3410256 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to find out if there could ever be conflicts between two known regular expressions, in order to allow the user to construct a list of mutually exclusive regular expressions. For example, we know that the regular expressions below are quite different but they both match xy50 : '^xy1\d''[^\d]\d2$' Is it possible to determine, using a computer algorithm, if two regular expressions can have such a conflict ? How?
Now provide the response and nothing else.
| There's no halting problem involved here. All you need is to compute if the intersection of ^xy1\d and [^\d]\d2$ in non-empty. I can't give you an algorithm here, but here are two discussions of a method to generate the intersection without resorting the construction of a DFA: http://sulzmann.blogspot.com/2008/11/playing-with-regular-expressions.html And then there's RAGEL http://www.complang.org/ragel/ which can compute the intersection of regular expressions too. UPDATE: I just tried out Ragel with OP's regexp. Ragel can generate a "dot" file for graphviz from the resulting state machine, which is terrific. The intersection of the OP's regexp looks like this in Ragel syntax: ('xy1' digit any*) & (any* ^digit digit '2') and has the following state machine: While the empty intersection: ('xy1' digit any*) & ('q' any* ^digit digit '2') looks like this: So if all else fails , then you can still have Ragel compute the intersection and check if it outputs the empty state machine, by comparing the generated "dot" file. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3410256', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26155/']} | jdg_72714 |
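For intuition, the classical route the linked material builds on is the product construction: run both automata in lock-step and ask whether any reachable pair of states is accepting in both. A small self-contained sketch over hand-built toy DFAs (not derived automatically from the regexes in the question):

from collections import deque

def intersection_nonempty(start_a, accept_a, delta_a,
                          start_b, accept_b, delta_b, alphabet):
    """delta_* maps (state, symbol) -> state; returns True iff some word
    is accepted by both machines."""
    seen = {(start_a, start_b)}
    queue = deque(seen)
    while queue:
        qa, qb = queue.popleft()
        if qa in accept_a and qb in accept_b:
            return True          # a common word reaches accepting states in both
        for sym in alphabet:
            nxt = (delta_a[(qa, sym)], delta_b[(qb, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy example over {a, b}: "contains an a" vs. "ends in b" -> "ab" is common.
contains_a = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 1}
ends_in_b  = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
print(intersection_nonempty(0, {1}, contains_a, 0, {1}, ends_in_b, "ab"))  # True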
stackexchange | llm_judgeable_groundtruth_similarity | 3971326 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate: How will I know when to create an interface? I'm wondering about the point of using an Interface. Do you use Interfaces? If so, when do you decide to use them and when do you decide NOT to use them? I've currently got interfaces defined for my service layers and my repository layers, but I'm wondering if I'm missing out on other places where they'd be useful. I guess I just don't fully understand their purpose.
Now provide the response and nothing else.
| An interfaces defines a contract. Any class that implements an interface has to fulfil that contract. This means that the class must implement methods defined in the interface. An interface basically says "I'm defining something that all implementers must do. I don't care how you do it, but you must support these operations that I've specified". Another use of interfaces is that you can use them in method signatures or type definitions to specify the most generic type of an object. For example, in Java Map is an interface that is implemented by other classes like HashMap or LinkedHashMap . Both HashMap and LinkedHashMap are essentially of type Map . They implement the same methods, but they do things differently ( LinkedHashMap preserves insertion-order). Consider the situation where you have a method that accepts maps. If you didn't have interfaces, you would need to specify a method for each type of map. Indeed, you could do that via overloaded methods, but that approach is not very good. The better way is to specify the type of the method argument as Map . Then, any class that implements Map can be passed in to the method. This way, you don't have to specify a method for each type of map, and also you are not restricting the person who uses your method, to specific implementations of the map. An interface also guarantees that the specified functionality is present in implementing classes. As such, it also provides a standard way to access that functionality. Interfaces are also useful when you are designing an API (that way you can specify a standard interface to the things you want to expose). Another benefit of interfaces is that it makes refactoring easy. Let's say that you want to switch out an implementation of a some sort of object. The object might be a method argument or it may be a class property. Since you've typed that argument or object as an interface, you can simply create a new class that implements the interface and pass that class instead. Since you used the interface, you didn't make an extra assumptions as to the details of the class. The interface abstracts out the implementation details of the class that you're using. That way you don't end up making assumptions that makes your code too tightly-coupled to a specific implementation. To sum it up, interfaces are about abstraction and contracts . With abstraction you hide the underlying details and expose only the bare minimum that you need to expose. That way the person who uses your class or interface is not burdened with the implementation details. All that information is neatly hidden inside the specific class that implements the interface. The contract ensures standardization across the board; a person who uses the interface is sure that all classes that implement the interface expose the same methods. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3971326', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/124069/']} | jdg_72715 |
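A tiny Java illustration of the Map point above — the method is written against the interface, so either implementation can be passed in without changing it:

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class MapDemo {
    // Depends only on the Map contract, not on a concrete class.
    static void printAll(Map<String, Integer> scores) {
        for (Map.Entry<String, Integer> e : scores.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> plain = new HashMap<>();
        Map<String, Integer> ordered = new LinkedHashMap<>(); // preserves insertion order
        plain.put("alice", 3);
        ordered.put("bob", 5);
        printAll(plain);    // the same call site works for both implementations
        printAll(ordered);
    }
}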
stackexchange | llm_judgeable_groundtruth_similarity | 201743 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently learning about TDD and trying to put it into practice in my personal projects. I have also used version control extensively on many of these projects. I am interested in the interplay of these two tools in a typical work flow, especially when it comes to the maxim to keep commits small. Here are some examples that come to mind: I start a new project and write a simple test to create an as-yet-nonexistent class. Should I commit the test before writing the class even though the test doesn't even compile? Or should I stub out the minimum amount of code that is needed to get the test to compile before committing? I find a bug and write a test to recreate it. Should I commit the failing test or implement the bug fix and then commit? These are the two examples that come immediately to mind. Feel free to provide additional examples in your answer. Edit: I made an assumption in both examples that immediately after writing the test I will write code to make the test pass. Another situation might also arise: I work on a project using TDD for several hours without committing. When I finally make commits, I want to break up my work into small chunks. (Git makes this relatively easy even if you want to want to commit only some of the changes in a single file.) This means that my question is as much about as what to commit as it is about when to commit.
Now provide the response and nothing else.
| Should I commit the test before writing the class even though the test doesn't even compile? Or should I stub out the minimum amount of code that is needed to get the test to compile before committing? Of course not. You should finish both the test and the class. Committing something 1 that doesn't even compile makes no sense, and will certainly make people working on the same project angry if you do it regularly. I find a bug and write a test to recreate it. Should I commit the failing test or implement the bug fix and then commit? No, do not commit a failing test. LeBlanc's Law states : Later equals never. and your test might fail for a long time. It is better to fix the problem as soon as it is detected. Also, TDD development style tells : Test-driven development constantly repeats the steps of adding test cases that fail, passing them, and refactoring. If you check in a failed test, that means you didn't complete the cycle. 1 When I said commit, I meant really commit to the trunk (for git users, push your changes, so other developers would get them). | {} | {'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/201743', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/61509/']} | jdg_72716 |
stackexchange | llm_judgeable_groundtruth_similarity | 206795 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to execute one script which reads through a file containing a list of packages/applications, tests if each entry is already installed and if not, proceed to install? I'm trying to find an approach which I can use in installing applications on a number of virgin servers autonomously.
Now provide the response and nothing else.
| In ksh (both AT&T and pdksh variants) or zsh , you can do: if print -nu3; then echo fd 3 is writeablefi They won't write anything on that fd, but still check if the fd is writable (using fcntl(3, F_GETFL) ) and report an error otherwise: $ ksh -c 'print -nu3' 3< /dev/nullksh: print: -u: 3: fd not open for writing (which you can redirect to /dev/null ). With bash , I think your only option is to check if a dup() succeeds like in your approach, though that won't guarantee that the fd is writable (or call an external utility ( zsh / perl ...) to do the fcntl() ). Note that in bash (like most shells), if you use (...) instead of {...;} , that will fork an extra process. You can use: if { true >&3; } 2> /dev/null instead to avoid the fork (except in the Bourne shell where redirecting compound commands always causes a subshell). Don't use : instead of true as that's a special builtin, so would cause the shell to exit when bash is in POSIX compliance mode. You could however shorten it to: if { >&3; } 2> /dev/null | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/206795', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/16154/']} | jdg_72717 |
stackexchange | llm_judgeable_groundtruth_similarity | 32457657 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my Gemfile I have: gem 'mysql2' My database.yml is as follows: default: &default adapter: mysql2 database: <%= ENV['db_name'] %> username: <%= ENV['db_user'] %> password: <%= ENV['db_pass'] %> host: <%= ENV['db_host'] %> pool: 32 socket: <%= ENV['socket'] %>development: <<: *defaultproduction: <<: *default I've run both bundle update and bundle install and my Gemfile.lock shows mysql2. However when I run rake db:migrate I get this on both my computer and on the staging server: myproject.com(master)$ rake db:migrateWARNING: Use strings for Figaro configuration. 10000012508 was converted to "10000012508".WARNING: Use strings for Figaro configuration. 860526407370038 was converted to "860526407370038".rake aborted!Gem::LoadError: Specified 'mysql2' for database adapter, but the gem is not loaded. Add `gem 'mysql2'` to your Gemfile (and ensure its version is at the minimum required by ActiveRecord)...... Just to make sure there wasn't a bad version of mysql2 or something, I did bundle clean --force and ran bundle install and bundle update again and when I run gem list I see mysql2 (0.4.0) and no other versions. Any ideas would be most appreciated. SOLUTION It's currently an issue with Rails 4.1.x and 4.2.x, per this bug report , it will be fixed in the next release of rails 4.2.x (credit to dcorr in comments for the link). In the meantime you can fix it by downgrading to version 0.3.18 of mysql2, adding this line to your Gemfile: gem 'mysql2', '~> 0.3.18'
Now provide the response and nothing else.
| Just a further update - the solution in the question is correct. The 4th comment is worth taking note of: This isn't a bug with mysql2, it's a problem with the requirement in the ActiveRecord adapter: http://github.com/rails/rails/issues/21544 . This is fixed in rails master: https://github.com/rails/rails/commit/5da5e3772c32593ecf2f27b8865e81dcbe3af692 I was able to tie Rails 4.2.4 to the 4-2-stable branch and get it working with the latest mysql2 : gem 'rails', '~> 4.2.4', git: "git://github.com/rails/rails.git", branch: '4-2-stable'gem 'tzinfo-data', platforms: [:mingw, :mswin, :x64_mingw] #-> Rails 4.1+#DBgem 'mysql2' | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/32457657', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2229277/']} | jdg_72718 |
stackexchange | llm_judgeable_groundtruth_similarity | 54627918 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am running the NUnit tests (project in .Net Framework 4.5), as part of azure devops build pipeline. - task: VSTest@2 inputs: testAssemblyVer2: 'tests/**/*.Tests.dll' pathtoCustomTestAdapters: '$(Build.SourcesDirectory)/packages' codeCoverageEnabled: true displayName: 'NUnit Testing'- task: PublishCodeCoverageResults@1 inputs: codeCoverageTool: JaCoCo summaryFileLocation: '$(Common.TestResultsDirectory)/**/*.xml' displayName: 'Publish Code Coverage' // summaryFileLocation: '$(Common.TestResultsDirectory)/**/*.coverage' But I am not able to see the coverage report, all I see the download link for coverage results... How can I convert the .coverage report to JaCoCo format? OR generate the report directly in JaCoCo format? I have seen some solution for .Net Core ( link ), but none for .Net framework
Now provide the response and nothing else.
| Update: As per the release to Azure Devops for Sprint 150 When publishing code coverage reports, you no longer need to specify HTML files. Therefore, the script in my illustration no longer needs to use the report generator tool directly to create the html report, and when publishing the coverage results, the directory containing those html reports doesn't need to be specified. Edit: The trick I've found for getting the coverage results from a .Net Framework project to show up on the Code Coverage tab is in the same line of thought to your linked article . Don't run tests with the VS Test Task in Azure Install the Report Generator and Coverlet tools directly Use dotnet-vstest command for running tests through Coverlet Publish reports generated with Report Generator and Cobertura format coverage results Don't use the VS Test Task Running this task will allow you to collect coverage with a simple checkbox, but you then surrender your opportunity to provide the content for the Code Coverage Tab Install tools directly Use a Powershell task (or similar) to install the Coverlet and Report Generator tools directly. This allows you to use them on projects that are not .Net Core . "install tools:"&dotnet tool install dotnet-reportgenerator-globaltool --tool-path . --version 4.0.12&dotnet tool install coverlet.console --tool-path . --version 1.4.1 Use dotnet vstest through coverlet It's my understanding that dotnet test doesn't play nice with .Net Framework projects/assemblies. However, we can still use the dotnet command, which we know will be on the agent machine, but we need to use it as a mechanism to get to the vstest.console.exe . The Coverlet tool, as mentioned in the article you linked, will output coverage results in Cobertura format if you tell it to do so. &$coverlet $unitTestFile.FullName --target "dotnet" --targetargs "vstest $($unitTestFile.FullName) --logger:trx" --format "cobertura" Publish results Complete script sample note: this script is pretty rough, so use it as a thought exercise for your individual situation. "install tools:"&dotnet tool install dotnet-reportgenerator-globaltool --tool-path . --version 4.0.12&dotnet tool install coverlet.console --tool-path . --version 1.4.1"`nmake reports dir:"mkdir .\reports"`nrun tests:"$unitTestFile = gci -Recurse | ?{ $_.FullName -like "*bin\*UnitTestProject2.dll" }Write-Host "`$unitTestFile value: $unitTestFile"$coverlet = "$pwd\coverlet.exe""calling $coverlet for $($unitTestFile.FullName)"&$coverlet $unitTestFile.FullName --target "dotnet" --targetargs "vstest $($unitTestFile.FullName) --logger:trx" --format "cobertura""`ngenerate report(s)"gci -Recurse | ?{ $_.Name -eq "coverage.cobertura.xml" } | %{ &"$pwd\reportgenerator.exe" "-reports:$($_.FullName)" "-targetdir:reports" "-reporttypes:HTMLInline;HTMLChart" } If you're struggling to figure out the escaping of quotes and such with the Coverlet command, YOU ARE NOT ALONE . I used the echoargs commandlet from PSCX more times than I care to admit so I could see what was actually getting provided to the .exe calls I was making. The Results!! ...because that's really what matters Original Answer: Because of the way the linked article you mentioned is installing and using the report generator global tool I would think you can still follow those guidelines for creating the HTML inline and chart report types. I'm not sure what is meant or how it works when the article says, The point is the reporttypes: Use HTMLInLine for enabling the output on the Azure DevOps page. 
Azure DevOps Coverage page show index.html on the web. I'm understanding that you can use the tool to create the HTML report from the .xml coverage results, and then publish the coverage results and report together with the Publish Code Coverage task. So it seems all you need is to have an .xml format of the .coverage tool. I didn't get it working in straight powershell, but you could follow the instructions from the Report Generator documentation to write a C# utility to access the Coverage.Analysis library. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/54627918', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1139099/']} | jdg_72719 |
stackexchange | llm_judgeable_groundtruth_similarity | 5040303 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have noticed that some Buttons don't seem as responsive as they could be. This applies equally to my app and to most other apps I've tried. When I press a Button there is a tiny bit of lag (EDIT: I would estimate about 20-50 ms) before the Button lights up in the pressed state. Some apps have managed to remove this bit of lag, for instance RealCalc (available in the Market) whose Buttons switch to the pressed state immediately after you press your finger on them. Most of the time this lag is not noticeable, but in my case where the Buttons are used in a custom number pad, this tiny bit of lag is disruptive to the user. RealCalc feels so much more responsive and polished because this lag has been removed. My question is - how do I remove this lag? I am aware that I could just subclass, override onTouchEvent and proceed from there, but I would really prefer a solution using just the standard controls and options. I suspect the solution may also interfere with scrolling, but I can live with that. EDIT: Specifically, the lag mentioned is the time from you put your finger on a button and hold it there until the button switches to the pressed state. The onClick handler is called when you remove your finger again. Some answers suggested moving the bulk of my onClick handler to a thread. This is not the issue. To make doubly sure, I have removed all click handlers, and the tiny lag is still there.
Now provide the response and nothing else.
| I have dug into the Android source code to see what was going on. It turns out that the android.view.View class (from which Button derives) enters a "PREPRESSED" state before going into the PRESSED state: android.view.View: 1529 /**1530 * Indicates a prepressed state;1531 * the short time between ACTION_DOWN and recognizing1532 * a 'real' press. Prepressed is used to recognize quick taps1533 * even when they are shorter than ViewConfiguration.getTapTimeout().1534 * 1535 * @hide1536 */1537 private static final int PREPRESSED = 0x02000000; android.view.ViewConfiguration.getTapTimeout() is 115 ms on my Nexus One, which is a lot longer than my estimation. android.view.ViewConfiguration: 67 /**68 * Defines the duration in milliseconds we will wait to see if a touch event 69 * is a tap or a scroll. If the user does not move within this interval, it is70 * considered to be a tap. 71 */72 private static final int TAP_TIMEOUT = 115; Anyway, from examining View.onTouchEvent it doesn't look like there's a way to avoid this PREPRESSED state by any standard option. Which is a real shame. The good news is that I have now verified that the way to avoid this lag is to subclass and override onTouchEvent. Thanks for the discussion and answers. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5040303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/619821/']} | jdg_72720 |
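For reference, a minimal sketch of the subclass-and-override approach mentioned at the end (the class name is made up; the idea is simply to show the pressed drawable on ACTION_DOWN instead of waiting out the PREPRESSED delay):

import android.content.Context;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.widget.Button;

public class InstantPressButton extends Button {

    public InstantPressButton(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        boolean handled = super.onTouchEvent(event);   // keep normal click handling
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            setPressed(true);                          // show the pressed state immediately
        }
        return handled;
    }
}

In practice the parent class still clears the pressed state on up/cancel in the usual way, so only the initial visual delay changes.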
stackexchange | llm_judgeable_groundtruth_similarity | 10006889 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have a class A class A attr_accessor :x, :y def initialize(x,y) @x, @y = x, y endend How can I get x and y attributes without knowing how exactly they were named. E.g. a = A.new(5,10)a.attributes # => [5, 10]
Now provide the response and nothing else.
| Use introspection, Luke! class A attr_accessor :x, :y def initialize(*args) @x, @y = args end def attrs instance_variables.map{|ivar| instance_variable_get ivar} endenda = A.new(5,10)a.x # => 5a.y # => 10a.attrs # => [5, 10] | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10006889', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1179942/']} | jdg_72721 |
stackexchange | llm_judgeable_groundtruth_similarity | 4077582 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How Can I format text in Rich Text Box like the following 02/11/2010 - 05:15 PM - Adam: Another test notes added on 2nd November 02/11/2010 - 05:14 PM - Z_kas: Test Notes. STAGE CHANGED TO: N Enq - Send Quote 02/11/2010 - 05:12 PM - user32: Another test notes added on 2nd November Thanks
Now provide the response and nothing else.
| as stated by others there is a possible duplication with an earlier question. However, please see a code snippet below. You don’t have to get the length of the text you append in order to change its formatting, just set the format before you append. This (i think) gives better performance if you have a lot of text in the textbox. This will work as long as there are no selections in the textbox by the user, then strange things will happen that I cannot explain. Perhaps someone else can enlighten us? The same problem appears with the solution proposed in Change color of text within a WinForms RichTextBox . I replaced the “:” you had after the user name just to get my code sample to work more easily with the DateTime thing, this can be easily modified in the “Split”. private void AddText(string text) { string[] str = text.Split(new string[] { ";" }, StringSplitOptions.RemoveEmptyEntries); if (str.Length == 2) { richTextBox1.DeselectAll(); richTextBox1.SelectionFont = new Font(richTextBox1.SelectionFont, FontStyle.Bold); richTextBox1.AppendText(Environment.NewLine + str[0] + ";"); richTextBox1.SelectionFont = new Font(richTextBox1.SelectionFont, FontStyle.Regular); richTextBox1.AppendText(str[1]); } // Else?? Well, do something else.. } And the call: private void button1_Click(object sender, EventArgs e) { AddText(DateTime.Now.ToString() + " - Mike; Did something"); } | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4077582', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/77674/']} | jdg_72722 |
stackexchange | llm_judgeable_groundtruth_similarity | 3428840 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How to prove $$I=\int_0^{\pi/2} x^2\left(\sum_{n=1}^\infty (-1)^{n-1} \cos^n(x)\cos(nx)\right)dx=\frac16\left(\frac{\pi^3}{12}-\pi\operatorname{Li}_2\left(\frac13\right)\right)$$ This problem is proposed by Cornel which can be found here where he suggested that the problem can be solved with and without harmonic series. Here is my approach but I got stuck at the blue integral: Using the common identity $$ \sum_{n=1}^{\infty}p^n \cos(nx)=\frac{p(\cos(x)-p)}{1-2p\cos(x)+p^2}, \ |p|<1$$ Set $p=-\cos(x)$ we get $$ \sum_{n=1}^{\infty}(-1)^n \cos^n(x) \cos(nx)=-\frac{2\cos^2(x)}{1+3\cos^2(x)}=-\frac23+\frac23\frac1{1+3\cos^2(x)}$$ Multiply both sides by $-x^2$ then integrate from $x=0$ to $\pi/2$ we get $$\int_0^{\pi/2} x^2\left(\sum_{n=1}^\infty (-1)^{n-1} \cos^n(x)\cos(nx)\right)dx=\frac23\int_0^{\pi/2} x^2dx-\frac23\color{blue}{\int_0^{\pi/2}\frac{x^2}{1+3\cos^2(x)}dx}\\=\frac{\pi^3}{36}-\frac23\left(\color{blue}{\frac{\pi^3}{48}+\frac{\pi}{4}\operatorname{Li}_2\left(\frac13\right)}\right)=\frac{\pi^3}{72}-\frac{\pi}{6}\operatorname{Li}_2\left(\frac13\right)$$ I have two Questions: 1) Can we evaluate $I$ in a different way? 2) How to finish the blue integral? My try to the blue integral is using integration by parts $$\int\frac{dx}{1+3\cos^2(x)}=\frac12\tan^{-1}\left(\frac{\tan(x)}{2}\right)=-\frac12\tan^{-1}\left(2\cot(x)\right)$$ which gives us $$\int_0^{\pi/2}\frac{x^2}{1+3\cos^2(x)}dx=\frac{\pi^3}{16}-\int_0^{\pi/2}x\tan^{-1}\left(\frac{\tan(x)}{2}\right)dx$$ Or $$\int_0^{\pi/2}\frac{x^2}{1+3\cos^2(x)}dx=\int_0^{\pi/2}x\tan^{-1}\left(2\cot(x)\right)dx$$ I also tried the trick $x\to \pi/2-x$ but got complicated Proof of the identity: \begin{align}\sum_{n=0}^\infty p^ne^{inx}&=\sum_{n=0}^\infty\left(p e^{ix}\right)^n=\frac{1}{1-pe^{ix}},\quad |p|<1\\&=\frac{1}{1-p\cos(x)-ip\sin(x)}=\frac{1-p\cos(x)+ip\sin(x)}{1-2p\cos(x)+p^2}\\&=\frac{1-p\cos(x)}{1-2p\cos(x)+p^2}+i\frac{p\sin(x)}{1-2p\cos(x)+p^2}\end{align} By comparing the real and imaginary parts, we get $$\sum_{n=\color{blue}{0}}^\infty p^n \cos(nx)=\frac{1-p\cos(x)}{1-2p\cos(x)+p^2}\Longrightarrow \sum_{n=\color{blue}{1}}^\infty p^{n-1} \cos(nx)=\frac{\cos(x)-p}{1-2p\cos(x)+p^2}$$ and $$\sum_{n=\color{red}{0}}^\infty p^n \sin(nx)=\frac{p\sin(x)}{1-2p\cos(x)+p^2}\Longrightarrow \sum_{n=\color{red}{1}}^\infty p^n \sin(nx)=\frac{p\sin(x)}{1-2p\cos(x)+p^2}$$
Now provide the response and nothing else.
| We can use the following Fourier series: $$\frac{1}{a+b\cos t}=\frac{1}{\sqrt{a^2-b^2}}+\frac{2}{\sqrt{a^2-b^2}}\sum_{n=1}^{\infty}\left(\frac{\sqrt{a^2-b^2}-a}{b}\right)^n\cos{(nt)},\ a>b$$ Plugging $a=5, b=3$ and $t=2x$ we get: $$\frac{1}{1+3\cos^2 x}=\frac{2}{5+3\cos(2x)}=\frac{1}{2}+\sum_{n=1}^\infty (-1)^n\left(\frac{1}{3}\right)^n\cos(2nx)$$ $$\Rightarrow \int_0^\frac{\pi}{2}\frac{x^2}{1+3\cos^2 x}dx=\frac12\int_0^\frac{\pi}{2} x^2dx+\sum_{n=1}^\infty(-1)^n \left(\frac13\right)^n\int_0^\frac{\pi}{2}x^2 \cos(2nx)dx$$ $$=\frac{\pi^3}{48}+\frac{\pi}4\sum_{n=1}^\infty \left(\frac13\right)^n\frac{1}{n^2}=\frac{\pi^3}{48}+\frac{\pi}{4}\operatorname{Li}_2\left(\frac13\right)$$ Using the series obtained above, we can also conclude that: $$\sum_{n=1}^{\infty}(-1)^n \cos^n(x) \cos(nx)=-\frac13+\frac23\sum_{n=1}^\infty \left(-\frac{1}{3}\right)^n\cos(2nx)$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3428840', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/432085/']} | jdg_72723 |
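The term-by-term integral used in the last step is, integrating by parts twice,
$$\int_0^{\pi/2}x^2\cos(2nx)\,dx=\left[\frac{x^2\sin(2nx)}{2n}+\frac{x\cos(2nx)}{2n^2}-\frac{\sin(2nx)}{4n^3}\right]_0^{\pi/2}=\frac{(-1)^n\pi}{4n^2},$$
so the factor $(-1)^n$ cancels the one in the series and leaves $\frac{\pi}{4}\sum_{n\ge1}\frac{1}{3^n n^2}=\frac{\pi}{4}\operatorname{Li}_2\!\left(\frac13\right)$.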
stackexchange | llm_judgeable_groundtruth_similarity | 148859 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to connect multiple components (same components) to an FPGA output. The FPGA runs at 30 MHz and transmits both a clock and data on lines connected to these similar components. Each component is located at a different distance from the FPGA, where the shortest is 10 mm and the longest is 300 mm. When the difference is 290 mm and the frequency is 30 MHz, do I need to worry about one signal arriving before the other? Is there a rule of thumb or a way to calculate what is the maximum difference in distances that is allowed for a specific signal frequency? Should the trace be thicker or thinner for fast data? Will 8 mil do? PCB is flex 2-layer, no right-angle folds, 1 oz copper, signal can be 3.3V or 5.0V Gilad.
Now provide the response and nothing else.
| When the difference is 290mm and the frequency is 30Mhz, do I need to worry about one signal reaching before the other? Is there a rule of thumb or a way to calculate what is the maximum difference in distances that is allowed for a specific signal frequency? The propagation velocity is given roughly by the dielectric constant of the material of your PCB $$ v \approx \frac{c}{\sqrt{\epsilon_r}}$$ (Actually it will be a bit higher because some of the signal travels in air rather than in the PCB) For FR4, \$\epsilon_r\$ is roughly 4.5 (+/- 0.5). For flex material, you'll need to check the datasheet of your material. The difference in time-of-arrival is just the difference in trace length divided by the velocity. And the rule of thumb in a digital design is that you need to satisfy the set-up and hold times for all the logic you are sending signals to. In some cases, you may also need to keep your clocks in sync between the different load components, but that is a design-specific requirement. Should the trace be thicker or thinner for fast data? will 8mil do? For 30 MHz with a maximum trace length of 300 mm, the trace width is not critical. P.S. For this design, if you don't have any special synchronization requirements, I'd rather minimize the total capacitance of the trace by running a single track that goes near all 7 loads, and make a short stub to each load, rather than have very long stubs leading to the loads from the point of origin. P.P.S. Your wavelength and trace length are in the realm of needing to worry about radiated emissions if you are planning to use or sell this in the US or Europe. Make sure you have a well-defined return path for each signal path, and remember that the critical distances for EMI purposes are more related to the signal's rise and fall times than to the repetition frequency. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/148859', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/5778/']} | jdg_72724 |
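Putting rough numbers on this for the case in the question (assuming \$\epsilon_r \approx 4.5\$ as an FR4-like placeholder; check the datasheet for your actual flex material): $$v \approx \frac{3\times 10^{8}\ \mathrm{m/s}}{\sqrt{4.5}} \approx 1.4\times 10^{8}\ \mathrm{m/s} \approx 140\ \mathrm{mm/ns},$$ so a 290 mm length mismatch corresponds to a skew of roughly \$290/140 \approx 2\ \mathrm{ns}\$, against a 33 ns clock period at 30 MHz. That is small, but it still has to fit inside the setup/hold budget of the receiving parts.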
stackexchange | llm_judgeable_groundtruth_similarity | 28442920 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
so, I've been working with mongoose for some time and I found some really weird stuff going on. It would be great if someone could enlighten me. The thing is, when using the .find() method of mongoose, the object I get as a response is full of properties I don't know where they came from (I'm guessing they are built-in properties, but whatever) and I want to iterate only through the properties I .select(). Got it? No? ok... explaining better: I have my schema and model declared: var mySchema = mongoose.Schema({ name: String, prop1: String, prop2: String, prop3: String})var myModel = DB.model('myDataBase', mySchema) Then I want to find a document with the name, let's say, John and retrieve all but the 'name' field, so I go: myModel.find({name: 'John'}, '-name', function(err, results){ log(results[0])}) and log(results[0]) logs { prop1: 'one', prop2: 'two', prop3: 'three' } So far, so good. But the problem is, now I want to iterate through these properties and check them one by one, and I don't know for sure how many 'props' each result will have, so I wanted to do something like: for(var key in results[0]){ log(key)} So, I'm hoping it will log 'prop1', 'prop2' and 'prop3', but no! Ok, I get props 1, 2 and 3, but I also get lots of other properties and functions like: isNew, error, _maxListeners, _doc, etc. Not only these extra properties, I also get the 'name' property, the one I excluded from the selection (and it was excluded, as shown in the first log). Weird huh? But wait! There's more! I've searched online and found some people saying "Dude, when iterating through object properties use the hasOwnProperty method!". So there I went: for (var key in results[0]){ if (results[0].hasOwnProperty(key)) log(key)} and the log result is a few properties (to be specific: $__, isNew, error, _maxListeners, _doc, _pres, _posts, save, _events) and doesn't include any of the props I wanted in the first place. My question is, how can I iterate through only prop 1, 2 and 3, excluding these, I don't know, built-in properties and the one I explicitly excluded in the parameters? (ps: I was thinking of a solution that doesn't involve having to convert my object into an array, if that's possible) Also, not a question per se, but out of curiosity, where do these properties come from? Why do they appear in the for loop and not when I log the object? Why does the property I excluded ('-name') also appear in the for loop? What the hell is hasOwnProperty for if it doesn't recognize the properties that were just logged? Thanks for your time and help! Bye!
Now provide the response and nothing else.
| As an alternative to Kevin B's answer, you can pass {lean: true} as an option: myModel.find({name: 'John'}, '-name', {lean: true}, function(err, results){ log(results[0])}) In MongoDB, the documents are saved simply as objects. When Mongoose retrieves them, it casts them into Mongoose documents. In doing so it adds all those keys that are being included in your for loop. This is what allows you to use all the document methods. If you won't be using any of these, lean is a great option as it skips that entire process, increasing query speed. Potentially 3x as fast. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28442920', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3280904/']} | jdg_72725 |
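A quick sketch of how this addresses the iteration problem in the question (doc here is just the first lean result): with lean: true the callback receives plain JavaScript objects, so a plain for...in loop or Object.keys only sees the fields that were actually selected.

```js
myModel.find({ name: 'John' }, '-name', { lean: true }, function (err, results) {
  var doc = results[0]; // a plain object, not a Mongoose document
  Object.keys(doc).forEach(function (key) {
    console.log(key, '=', doc[key]); // prop1, prop2, prop3 (plus _id unless you exclude it too)
  });
});
```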
stackexchange | llm_judgeable_groundtruth_similarity | 428927 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $X\in\mathbb{Z}.$ Let $F_n$ be a sequence of positive integers given by $$F_{i+1}=F_i+F_{i-1}$$ $$F_2=X*F_1+F_0$$ I am trying to find an upper bound or (sharp) inequality of $F_i$ in terms of $F_1$ and $F_0$. If $F_{i+1}=F_i+F_{i-1}$ for all $i$, then substituting $F_i=q^i$ gives $F_i=(\frac{1+\sqrt{5}}{2})^i$. In our case,$$F_j= (\frac{1+\sqrt{5}}{2})^j+(XF_1+F_0)-(F_1+F_0)= (\frac{1+\sqrt{5}}{2})^j+(X-1)F_1 \,\,(*)$$ Is the first equality in $(*)$ correct?
Now provide the response and nothing else.
| The general solution of $F_{i+1}=F_i+F_{i-1}$ is $F_j=A\phi^j+B\overline\phi\,^{j}$ where $\phi=(1+\sqrt5)/2$ and $\overline\phi=(1-\sqrt5)/2$, and $A$ and $B$ are determined by the initial conditions. In your case the initial conditions are $F_1=F_1$ and $F_2=XF_1+F_0$. So you can substitute in $j=1$ and $j=2$ to get two linear equations for the two unknowns $A$ and $B$, and then you'll have your equation for $F_j$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/428927', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/17955/']} | jdg_72726 |
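Carrying out that substitution explicitly, the two linear equations are $$A\phi+B\overline\phi=F_1,\qquad A\phi^2+B\overline\phi\,^2=F_2=XF_1+F_0,$$ and since $\phi-\overline\phi=\sqrt5$, solving them gives $$A=\frac{F_2-\overline\phi\,F_1}{\phi\sqrt5},\qquad B=\frac{\phi F_1-F_2}{\overline\phi\,\sqrt5}.$$ (Sanity check: with $F_1=F_2=1$ this reduces to $A=1/\sqrt5$, $B=-1/\sqrt5$, i.e. Binet's formula for the Fibonacci numbers.)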
stackexchange | llm_judgeable_groundtruth_similarity | 46941498 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been through itertools inside and out and I cannot figure out how to do the following. I want to take a list. x = [1,2,3,4,5,6,7,8] and I want to get a new list: y = [[1],[1,2],[1,2,3],.......[2],[2,3],[2,3,4].....[8]] I need a list of all slices, but not combinations or permutations. x = list(zip(x[::2], x[1::2])) is close, but doesn't do exactly what I'm hoping
Now provide the response and nothing else.
| Use combinations not of x , but of the range of possible slice indices (including one past the end, thus len(x)+1 , since slices are exclusive on the end) to make the slice end points, then use them to slice x : from itertools import combinationsy = [x[s:e] for s, e in combinations(range(len(x)+1), 2)] That gets exactly what you're going for as straightforwardly as possible. If you want (possibly) faster map based code, you can rephrase it as ( list wrapper unnecessary on Python 2): from itertools import combinations, starmapy = list(map(x.__getitem__, starmap(slice, combinations(range(len(x)+1), 2)))) which gets the same result, but without any Python bytecode execution per-item, which might run faster (implementation dependent). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46941498', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2843275/']} | jdg_72727 |
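To see the behaviour concretely, here is a quick run of the first version on the list from the question (output shown in comments):

```python
from itertools import combinations

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [x[s:e] for s, e in combinations(range(len(x) + 1), 2)]

print(y[:4])   # [[1], [1, 2], [1, 2, 3], [1, 2, 3, 4]]
print(y[-1])   # [8]
print(len(y))  # 36 == 9 * 8 // 2, one slice per (start, end) pair
```

The slices come out in exactly the order sketched in the question: all slices starting at the first element, then all slices starting at the second, and so on.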
stackexchange | llm_judgeable_groundtruth_similarity | 309495 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently using Win32ShellFolderManager2 and ShellFolder.getLinkLocation to resolve Windows shortcuts in Java. Unfortunately, if the Java program is running as a service under Vista, getLinkLocation does not work. Specifically, I get an exception stating "Could not get shell folder ID list". Searching the web does turn up mentions of this error message, but always in connection with JFileChooser. I'm not using JFileChooser, I just need to resolve a .lnk file to its destination. Does anyone know of a third-party parser for .lnk files written in Java that I could use? I've since found unofficial documentation for the .lnk format here, but I'd rather not have to do the work if anyone has done it before, since the format is rather scary.
Now provide the response and nothing else.
| Added comments (some explanation as well as credit to each contributor so far),additional check on the file magic, a quick test to see if a given file might be a valid link (without reading all of the bytes), a fix to throw a ParseException with appropriate message instead of ArrayIndexOutOfBoundsException if the file is too small, did some general clean-up. Source here (if you have any changes, push them right to the GitHub repo / project . package org.stackoverflowusers.file;import java.io.ByteArrayOutputStream;import java.io.File;import java.io.FileInputStream;import java.io.IOException;import java.io.InputStream;import java.text.ParseException;/** * Represents a Windows shortcut (typically visible to Java only as a '.lnk' file). * * Retrieved 2011-09-23 from http://stackoverflow.com/questions/309495/windows-shortcut-lnk-parser-in-java/672775#672775 * Originally called LnkParser * * Written by: (the stack overflow users, obviously!) * Apache Commons VFS dependency removed by crysxd (why were we using that!?) https://github.com/crysxd * Headerified, refactored and commented by Code Bling http://stackoverflow.com/users/675721/code-bling * Network file support added by Stefan Cordes http://stackoverflow.com/users/81330/stefan-cordes * Adapted by Sam Brightman http://stackoverflow.com/users/2492/sam-brightman * Based on information in 'The Windows Shortcut File Format' by Jesse Hager <[email protected]> * And somewhat based on code from the book 'Swing Hacks: Tips and Tools for Killer GUIs' * by Joshua Marinacci and Chris Adamson * ISBN: 0-596-00907-0 * http://www.oreilly.com/catalog/swinghks/ */public class WindowsShortcut{ private boolean isDirectory; private boolean isLocal; private String real_file; /** * Provides a quick test to see if this could be a valid link ! * If you try to instantiate a new WindowShortcut and the link is not valid, * Exceptions may be thrown and Exceptions are extremely slow to generate, * therefore any code needing to loop through several files should first check this. * * @param file the potential link * @return true if may be a link, false otherwise * @throws IOException if an IOException is thrown while reading from the file */ public static boolean isPotentialValidLink(File file) throws IOException { final int minimum_length = 0x64; InputStream fis = new FileInputStream(file); boolean isPotentiallyValid = false; try { isPotentiallyValid = file.isFile() && file.getName().toLowerCase().endsWith(".lnk") && fis.available() >= minimum_length && isMagicPresent(getBytes(fis, 32)); } finally { fis.close(); } return isPotentiallyValid; } public WindowsShortcut(File file) throws IOException, ParseException { InputStream in = new FileInputStream(file); try { parseLink(getBytes(in)); } finally { in.close(); } } /** * @return the name of the filesystem object pointed to by this shortcut */ public String getRealFilename() { return real_file; } /** * Tests if the shortcut points to a local resource. * @return true if the 'local' bit is set in this shortcut, false otherwise */ public boolean isLocal() { return isLocal; } /** * Tests if the shortcut points to a directory. 
* @return true if the 'directory' bit is set in this shortcut, false otherwise */ public boolean isDirectory() { return isDirectory; } /** * Gets all the bytes from an InputStream * @param in the InputStream from which to read bytes * @return array of all the bytes contained in 'in' * @throws IOException if an IOException is encountered while reading the data from the InputStream */ private static byte[] getBytes(InputStream in) throws IOException { return getBytes(in, null); } /** * Gets up to max bytes from an InputStream * @param in the InputStream from which to read bytes * @param max maximum number of bytes to read * @return array of all the bytes contained in 'in' * @throws IOException if an IOException is encountered while reading the data from the InputStream */ private static byte[] getBytes(InputStream in, Integer max) throws IOException { // read the entire file into a byte buffer ByteArrayOutputStream bout = new ByteArrayOutputStream(); byte[] buff = new byte[256]; while (max == null || max > 0) { int n = in.read(buff); if (n == -1) { break; } bout.write(buff, 0, n); if (max != null) max -= n; } in.close(); return bout.toByteArray(); } private static boolean isMagicPresent(byte[] link) { final int magic = 0x0000004C; final int magic_offset = 0x00; return link.length >= 32 && bytesToDword(link, magic_offset) == magic; } /** * Gobbles up link data by parsing it and storing info in member fields * @param link all the bytes from the .lnk file */ private void parseLink(byte[] link) throws ParseException { try { if (!isMagicPresent(link)) throw new ParseException("Invalid shortcut; magic is missing", 0); // get the flags byte byte flags = link[0x14]; // get the file attributes byte final int file_atts_offset = 0x18; byte file_atts = link[file_atts_offset]; byte is_dir_mask = (byte)0x10; if ((file_atts & is_dir_mask) > 0) { isDirectory = true; } else { isDirectory = false; } // if the shell settings are present, skip them final int shell_offset = 0x4c; final byte has_shell_mask = (byte)0x01; int shell_len = 0; if ((flags & has_shell_mask) > 0) { // the plus 2 accounts for the length marker itself shell_len = bytesToWord(link, shell_offset) + 2; } // get to the file settings int file_start = 0x4c + shell_len; final int file_location_info_flag_offset_offset = 0x08; int file_location_info_flag = link[file_start + file_location_info_flag_offset_offset]; isLocal = (file_location_info_flag & 2) == 0; // get the local volume and local system values //final int localVolumeTable_offset_offset = 0x0C; final int basename_offset_offset = 0x10; final int networkVolumeTable_offset_offset = 0x14; final int finalname_offset_offset = 0x18; int finalname_offset = link[file_start + finalname_offset_offset] + file_start; String finalname = getNullDelimitedString(link, finalname_offset); if (isLocal) { int basename_offset = link[file_start + basename_offset_offset] + file_start; String basename = getNullDelimitedString(link, basename_offset); real_file = basename + finalname; } else { int networkVolumeTable_offset = link[file_start + networkVolumeTable_offset_offset] + file_start; int shareName_offset_offset = 0x08; int shareName_offset = link[networkVolumeTable_offset + shareName_offset_offset] + networkVolumeTable_offset; String shareName = getNullDelimitedString(link, shareName_offset); real_file = shareName + "\\" + finalname; } } catch (ArrayIndexOutOfBoundsException e) { throw new ParseException("Could not be parsed, probably not a valid WindowsShortcut", 0); } } private static String 
getNullDelimitedString(byte[] bytes, int off) { int len = 0; // count bytes until the null character (0) while (true) { if (bytes[off + len] == 0) { break; } len++; } return new String(bytes, off, len); } /* * convert two bytes into a short note, this is little endian because it's * for an Intel only OS. */ private static int bytesToWord(byte[] bytes, int off) { return ((bytes[off + 1] & 0xff) << 8) | (bytes[off] & 0xff); } private static int bytesToDword(byte[] bytes, int off) { return (bytesToWord(bytes, off + 2) << 16) | bytesToWord(bytes, off); }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/309495', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15255/']} | jdg_72728 |
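The class above does not show how it is meant to be called, so here is a minimal usage sketch (the package name comes from the class's own declaration; the .lnk path is made up for illustration):

```java
import org.stackoverflowusers.file.WindowsShortcut;

import java.io.File;

public class ResolveLnk {
    public static void main(String[] args) throws Exception {
        File lnk = new File("C:\\Users\\Public\\Desktop\\Example.lnk"); // hypothetical shortcut
        if (WindowsShortcut.isPotentialValidLink(lnk)) {
            WindowsShortcut shortcut = new WindowsShortcut(lnk);
            System.out.println("Target:    " + shortcut.getRealFilename());
            System.out.println("Directory: " + shortcut.isDirectory());
            System.out.println("Local:     " + shortcut.isLocal());
        }
    }
}
```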
stackexchange | llm_judgeable_groundtruth_similarity | 201469 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
For some reason, I have trouble getting absolute value right. This is of a great importance in the definition of the limit. How do I solve the following inequality for $x$: $$|x -a| < \epsilon$$ I know the solution is $a - \epsilon < x < a + \epsilon$, but I cannot provide the logical deductions. This is how I would start: $|x-a| < \epsilon$ $\iff$ $(x-a) < \epsilon$ and $(x-a) < -\epsilon$ Simplify: $(x-a) < \epsilon$ and $-(x-a) > \epsilon$ This is where I think I go wrong because now I have a negative $x$. Could someone show me the logical deductions step by step so I can see how it turns out?
Now provide the response and nothing else.
| You went astray right at the start: $|x-a|<\epsilon$ is emphatically not equivalent to the statement that $x-a<\epsilon$ and $x-a<-\epsilon$. It is, as M Turgeon said, equivalent to $x-a<\epsilon$ and $x-a>-\epsilon$. One way to see this is directly from the definition of absolute value: $$|x-a|=\begin{cases}x-a,&\text{if }x-a\ge 0\\-(x-a),&\text{if }x-a<0\;.\end{cases}$$ If $x-a\ge 0$, then $|x-a|<\epsilon$ if and only if $x-a<\epsilon$. If $x-a<0$, then $|x-a|<\epsilon$ if and only if $-(x-a)<\epsilon$, which after multiplication by $-1$ becomes $x-a>-\epsilon$. Thus, $|x-a|<\epsilon$ if and only if $x-a<\epsilon$ and $x-a>-\epsilon$, or in one statement if and only if $-\epsilon<x-a<\epsilon$. Adding $a$ to all three ‘sides’ of this then yields the desired $a-\epsilon<x<a+\epsilon$. Another way to see it is perhaps more intuitive. For any real numbers $a$ and $b$, $|a-b|$ is simply the distance between $a$ and $b$. The condition that $|x-a|<\epsilon$ just says that the distance between $x$ and $a$ is less than $\epsilon$. This means that $x$ must be within $\epsilon$ units on either side of $a$, i.e., between $a-\epsilon$ and $a+\epsilon$. In symbols, $a-\epsilon<x<a+\epsilon$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/201469', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/38879/']} | jdg_72729 |
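A concrete numerical instance of the last paragraph (numbers chosen purely for illustration): with $a=3$ and $\epsilon=\tfrac12$, the condition $|x-3|<\tfrac12$ says that $x$ lies within half a unit of $3$ on either side, i.e. $$3-\tfrac12<x<3+\tfrac12,\qquad\text{that is,}\qquad 2.5<x<3.5.$$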
stackexchange | llm_judgeable_groundtruth_similarity | 21760612 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create an Admin User with FOSUserBundle from the command line with the following command: php app/console fos:user:create In my project the Admin User extends another user with a mandatory property. So, when I choose my username, email and password, it tells me: SQLSTATE[23000]: Integrity constraint violation: 1048 Column 'latitude' cannot be null How can I set the value "latitude" in my AdminUser? I also use PUGXMultiUserBundle.
Now provide the response and nothing else.
| The only possible way I can see to do this is to: 1 - override the CLI command of FOSUserBundle located in Command/CreateUserCommand.php 2 - override the user create method of FOSUserBundle located in Util/UserManipulator.php // Command/CreateUserCommand.phpprotected function execute(InputInterface $input, OutputInterface $output){ $username = $input->getArgument('username'); $email = $input->getArgument('email'); $password = $input->getArgument('password'); $inactive = $input->getOption('inactive'); $superadmin = $input->getOption('super-admin'); $latitude = $input->getOption('latitude'); //this is your own addition $manipulator = $this->getContainer()->get('fos_user.util.user_manipulator'); $manipulator->create($username, $password, $email, $latitude, !$inactive, $superadmin); $output->writeln(sprintf('Created user <comment>%s</comment>', $username));} and // Util/UserManipulator.phppublic function create($username, $password, $email, $latitude, $active, $superadmin){ $user = $this->userManager->createUser(); $user->setUsername($username); $user->setEmail($email); $user->setPlainPassword($password); $user->setEnabled((Boolean) $active); $user->setSuperAdmin((Boolean) $superadmin); $user->setLatitude($latitude); $this->userManager->updateUser($user); return $user; } Of course, when I say override I mean ... override :P So you don't have to modify the original FOSUserBundle files (you know, that's dangerous for many reasons); instead you make your own files by making your bundle extend FOSUserBundle. Are you wondering how to make your bundle extend FOSUserBundle? In your bundle's "main file" - the one you use to register your bundle - just add these lines public function getParent(){ return 'FOSUserBundle';} Then you simply recreate the tree structure in which your override files live in the original bundle, inside your custom bundle's Resources/ directory (same position, same file name, same annotations if any) and .... the magic can start :) ( this is valid only for views, please pay attention! ) What does "override" mean? Overriding means that you take an existing function and "shadow" it by redefining it elsewhere (declaring a function with the same name, no matter how many parameters it accepts, no matter the type of the parameters, since PHP doesn't support method overloading [except if you do some "hack"]), and then you can use it instead of the original one. This is a common technique for adding extra functionality to a function or for changing the function itself. Say that we have two classes, A and B, with B a child class of A. Say also that A has a method called myMethod() . In B we can do something like public function myMethod() { parent::myMethod(); //add extra functionality here} In that way we're adding extra functionality: we call the parent ("original") method and then execute some extra code. Whereas if in B we do something like public function myMethod() { //some code here, but not calling parent method} we're redefining the behaviour of myMethod() How does Symfony2 let me override methods? As I said previously in my answer, you have to make your bundle a child of the bundle that contains the function(s) you're trying to override (in this case FOSUserBundle ). Once you've done that, use the Resources directory of your bundle to accomplish what you need: reproduce the "tree-folder-structure" of the original bundle (i.e. the same folder names) until you reach the class that contains the function you need to override. 
Following your real example: you need to override the execute() function contained in Command/CreateUserCommand.php . You have to create, inside your bundle folder, the path PathTo/YourCustomBundle/Command/ and place in it the file CreateUserCommand.php with the content I showed you above. If you don't see where I got that path from, please take a look at the FOSUserBundle code and it will be absolutely clear! Why is it dangerous to modify the FOSUserBundle code directly? Well, there are a lot of reasons and critical points I could list. Picking the main ones (in no particular order of importance): What if you need to update FOSUserBundle? You'll use composer and lose every modification you made to the FOSUserBundle code. What if you have more than one bundle in your project that needs to use FOSUserBundle? Maybe the custom behaviour makes sense for one bundle but not for the other one. Customizing the behaviour at the local bundle level helps you keep the FOSUserBundle logic intact. What if you're developing a bundle that you want to share with other users? You would need to force them to "take" your own customized FOSUserBundle version and warn them about updating it. Finally: I know perfectly well that your entity isn't in FOSUserBundle, but I can bet that it extends the FOSUserBundle base user, so what I said above applies to your case. Hope it's less fuzzy now :) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21760612', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2980075/']} | jdg_72730 |
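One practical detail the answer above leaves implicit: the getOption('latitude') call only returns something if the overriding command also declares that option. A minimal sketch of how that could look in the same overriding class (the option name, mode and description here are my assumptions, not part of FOSUserBundle itself):

```php
// Command/CreateUserCommand.php (the overriding class shown above)
use Symfony\Component\Console\Input\InputOption;

protected function configure()
{
    parent::configure(); // keeps the username/email/password arguments and existing options

    // Hypothetical extra option, matching the getOption('latitude') call in execute() above.
    $this->addOption('latitude', null, InputOption::VALUE_REQUIRED, 'Latitude to store on the created user');
}
```

With that in place the command could be invoked as something like php app/console fos:user:create admin admin@example.com secret --latitude=42 (values made up).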
stackexchange | llm_judgeable_groundtruth_similarity | 112422 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to solve this problem. The algorithm I developed has a complexity of $O(n^2)$. When dealing with large data the program is brought to its knees. Do you have any idea that might be faster than a quadratic algorithm? I summarize the problem briefly: Let A be a one-dimensional array containing both positive and negative values, and suppose we are also given a k. What we are searching for is the size of the biggest subarray whose average is greater than or equal to k. k is an integer and the array contains only integers. For example, if k=0 and A[] = -8 3 -1 -1 -1 -1 -1 2 -11, the output should be 7 (from A[2] to A[8]). Another, simpler example: 1 10 -1 -1 4 -1 7 2 8 1. The output should be 3.
Now provide the response and nothing else.
| If you are not in the mood to go to the library right away, here is a simple description: 1) Subtract $\kappa$ from everything ($N$ operations). 2) Replace $a_k$ by the partial sums $S_k=\sum_{j=1}^k a_j$ ($N$ operations). Now you are looking for the pair $0\le k\le m\le N$ such that $S_k\le S_m$ with the largest difference $m-k$. Note that $S_k$ is necessarily the minimum of $S_1,\dots,S_k$ and $S_m$ is necessarily the maximum of $S_m,S_{m+1},\dots,S_{N-1}$. Don't forget $S_0$ (the empty sum)! 3) Mark all such minima going from the left and all such maxima going from the right ($5N$ operations or so). You will have a decreasing sequence of positive starting positions and a decreasing sequence of positive ending positions. 4) Start with the leftmost starting position and go along the possible ending positions from the left to the right until you get the rightmost that still works. Record the difference. Go to the next starting position and see how far you can move the ending position to fit now. Compare the difference with the previous one and record it if it is larger. 5) Repeat until you reach the end. Note that you go left to right all the time never coming back, so these steps are linear as well. Probably, you can optimize a bit but I'm too lazy to think of how. 6) Once you finish debugging the program and get some free time, follow Gerhard's advice :). Edit in response to comments: I'm not sure what you guys are doing that it doesn't work for you, but here is how it runs on 2 2 -1000 -1000 2 2 -2 2 2 2 2 -2 -2 2 2 -1000 -1000 2 2 2 with average 2: Step 1: remove 2: 0,0,-1002,-1002,0,0,-4,0,0,0,0,-4,-4,0,0,-1002,-1002,0,0,0 Step 2: partial summation left to right: 0,0,0,-1002,-2004,-2004,-2004,-2008,-2008,-2008,-2008,-2008,-2012,-2016,-3018,-4020,-4020,-4020,-4020. Step 3: Min positions (strict!) counting from the left: 0: 0,3: -1002,4: -2004,7: -2008,12: -2012,13: -2016,14: -3018,15: -4020, Max positions (strict) counting from the right: 2: 0,3: -1002,6: -2004,11: -2008,12: -2012,13: -2016,14: -3018,18: -4020, Steps 4,5. Put the marker B (beginning-1) and E (end) at the leftmost possible positions: B=0,E=2. Record the length 2 and the interval [1,2] Try moving E to the right with this B. Impossible. Change B to 3 (value -1002). Try to move E to the right. Impossible B=4 (-2004) Now E can go to the right to 6 (-2004). Same length. No record. B=7 (-2008) E goes (from the previous position, not from the beginning!) to 11 (-2008). Length 4>2, interval [8,11]. Record. B=12 (-2012) E goes to 12. Length 0, no record. B=13 (-2016), E goes to 13. Length 0, no record. B=14 (-3018), E goes to 14. Length 0, no record. B=15 (-4020), E goes to 18. Length 3<4, no record, reached the end, terminate, output [8,11]. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/112422', 'https://mathoverflow.net', 'https://mathoverflow.net/users/28070/']} | jdg_72731 |
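To make the recipe above concrete, here is a short Python sketch of the same idea (my own illustration of the steps, not code from the answer): prefix sums of $a_i-\kappa$, the strictly decreasing prefix minima as candidate left ends, and a right-to-left sweep of the right end with a pointer that only moves left, so everything stays linear.

```python
def longest_subarray_avg_at_least(a, k):
    """Length of the longest contiguous subarray of `a` with average >= k."""
    n = len(a)
    # Steps 1-2: prefix sums of (a[i] - k); S[0] = 0 is the empty prefix.
    S = [0] * (n + 1)
    for i in range(1, n + 1):
        S[i] = S[i - 1] + (a[i - 1] - k)

    # Step 3 (left half): indices whose prefix sum is a new strict minimum.
    cand = [0]
    for i in range(1, n + 1):
        if S[i] < S[cand[-1]]:
            cand.append(i)

    # Steps 4-5: sweep the right end m from right to left; the pointer j only
    # ever moves left, so the sweep is linear.
    best, j = 0, len(cand) - 1
    for m in range(n, 0, -1):
        while j >= 0 and S[cand[j]] <= S[m]:
            best = max(best, m - cand[j])
            j -= 1
    return best

# e.g. longest_subarray_avg_at_least([-8, 3, -1, -1, -1, -1, -1, 2, -11], 0) == 7
```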
stackexchange | llm_judgeable_groundtruth_similarity | 47769641 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use PyCharm and all the initial settings are okay.Simple package installation is working. Then why do I need to reinstall a package for each project? Is there any way to install the packages for all projects from now on?
Now provide the response and nothing else.
| This depends on your project settings, the project interpreter to be specific. The project interpreter can be set to one of the following: an interpreter installed globally on your system an interpreter in a shared virtual environment an interpreter in a virtual environment associated with a project Now, the approach I'd recommend would be to create a shared virtual environment where you install your packages, and use this environment for all your projects. That way, you have the desired result of needing to install your packages only once, but still have an environment isolated from your system environment. To create such an environment, follow these steps: Settings -> Project -> Project Interpreter Click the cogwheel / gear icon to the right of the interpreter dropdown Select "Add Local..." -> Virtualenv Environment Select a path as a root directory for the new environment Select the base interpreter you want to use Tick the checkbox "Make available to all projects" Click the "OK" button to save the new environment | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/47769641', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1756071/']} | jdg_72732 |
stackexchange | llm_judgeable_groundtruth_similarity | 3743837 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am really struggling to work out the limit of the following product: $$ \lim_{n \to \infty} \prod_{1\leq k \leq n} \left (1+\frac{k}{n} \right)^{1/k}.$$ So far, I have spent most of my time looking at the log of the above expression. If we set the desired limit equal to $L$ , I end up with: $$\log L = \lim_{n\to \infty}\log\left(\frac{n+1}{n} \right)+\frac{1}{2}\log\left(\frac{n+2}{n} \right) +\cdots +\frac{1}{n}\log\left(\frac{n+n}{n} \right),$$ which I can simplify to: $$ \log L = \lim_{n\to \infty} \log(n+1)+\frac{1}{2}\log(n+2)+\cdots \frac{1}{n}\log(2n)-\log(n)\left(1+\frac{1}{2}+\cdots\frac{1}{n}\right). $$ I tried to consider the above expression in a different form with an integral, but was unable to arrive at anything useful. I have been stuck on this for quite awhile now, and would appreciate any insight. Thanks
Now provide the response and nothing else.
| Hint, based on Surb: $$\log L = \lim_{n\to\infty} \sum_{k=1}^n\frac{1}{k}\log\left(1+\frac{k}{n}\right)=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\frac{n}{k}\log\left(1+\frac{k}{n}\right)=\int_0^1\frac{\log(1+x)}{x}\;dx$$ by a Riemann sum argument. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3743837', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/678957/']} | jdg_72733 |
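To finish the hint with a standard evaluation: expanding $\frac{\log(1+x)}{x}=\sum_{m\ge1}\frac{(-1)^{m-1}x^{m-1}}{m}$ and integrating term by term gives $$\int_0^1\frac{\log(1+x)}{x}\,dx=\sum_{m=1}^\infty\frac{(-1)^{m-1}}{m^2}=\frac{\pi^2}{12},$$ so $L=e^{\pi^2/12}$.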
stackexchange | llm_judgeable_groundtruth_similarity | 22901903 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to relearn Java after 10 years of not touching it. I want to create a library using JSch from some other apps I am looking to write. I have the connections ironed out but I am using stdin and stdout right now.I am looking to have a method that accepts a single command in a String and returns an ArrayList with the results. Any assistance would be great! //This vars are in the class.private JSch _jsch;private Session _session;private Channel _channel; Here is my connection method public boolean Connect () throws JSchException{ try { _jsch = new JSch(); _session = _jsch.getSession(_user, _hostname, _port); _session.setPassword(_password); _session.setConfig("StrictHostKeyChecking", "no"); _channel = _session.openChannel("shell"); //((ChannelShell)_channel).setPtyType("vt100"); _channel.setInputStream(bais); _channel.setOutputStream(baos); _channel.connect(3000); }//try to connect catch (JSchException ex) { throw ex; } return true;} The goal is once I open the connection I can send this method a command and return the results in an array. Act based on the results and send more commands. I don't want to close the connection each time as they will build on the commands that come before it. I am not sure how to work with the Inputs and Output Streams enough to get the results I am looking for. If you can assist with filling out the following method, I would be grateful. public List<String> ExecuteCommand (String command) { //run command to existing connection //Get returned lines to Array List and return the list} Thanks
Now provide the response and nothing else.
| The answer to this is that fflush(stream) is only formally defined for output streams, so fflush(stdout) is OK, but fflush(stdin) is not. The purpose of fflush(stream) is to make the operating system flush any buffers to the underlying file. For an example of a legitimate use, students often have problems like “my prompt doesn't appear!” if they do something like: printf("Enter a number: "); However, they find that this works just fine: printf("Enter a number:\n"); Of course, they don't want a newline after their prompt, so they have a bit of a problem. The reason for this is that the output to stdout is buffered by the OS and the default behavior is (often) only to actually write the output to the terminal when a newline is encountered. Adding an fflush(stdout) after the printf() solves the problem: printf("Enter a number: ");fflush(stdout); Now, working by analogy, people often think that fflush(stdin) should discard any unused input, but if you think about it a little bit that doesn't make much sense. What does it mean to “flush” an input buffer? Where is it “flushed” to ? If you flush an output buffer, the output is sent to the underlying file or the terminal, where it would eventually wind up anyway, but where would input “eventually end up anyway”? There's no way of knowing! What should the behavior be if the input stream data comes from a file or a pipe or a socket? It isn't at all clear for input streams what the behavior of fflush() should be, but it's very clear for output streams in all cases. Hence, fflush() is only defined for output streams. The reason why the erroneous use of fflush(stdin) became commonplace is that, many years ago, a few operating systems did implement a scheme where it worked as many people expected, discarding unused input. Microsoft DOS is a good example. Surprisingly, modern versions of Linux also implement fflush() for input streams. The right thing to do with “extra” unwanted terminal input is simply to read it and do nothing with it. This is almost as easy as calling fflush(stdin) , works everywhere, and doesn't rely on formally undefined behavior. The C standard says: If stream points to an output stream or an update stream in which the most recent operation was not input, the fflush function causes any unwritten data for that stream to be delivered to the host environment to be written to the file; otherwise, the behavior is undefined . POSIX says (also explicitly defers to C standard): If stream points to an output stream or an update stream in which the most recent operation was not input, fflush() shall cause any unwritten data for that stream to be written to the file, ... But the Linux manpage says: For output streams, fflush() forces a write of all user-space buffered data for the given output or update stream via the stream's underlying write function. For input streams, fflush() discards any buffered data that has been fetched from the underlying file, but has not been consumed by the application . The open status of the stream is unaffected. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22901903', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1544113/']} | jdg_72734 |
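The advice in the last paragraph ("simply read the unwanted input and do nothing with it") is often wrapped in a tiny helper like the following sketch (the function name is mine, not from any standard library):

```c
#include <stdio.h>

/* Read and discard characters up to and including the next newline (or EOF). */
static void discard_rest_of_line(FILE *stream)
{
    int c;
    while ((c = getc(stream)) != '\n' && c != EOF) {
        /* discard */
    }
}

/* Typical use after a partial scanf() read:
 *     int n;
 *     if (scanf("%d", &n) != 1) {
 *         discard_rest_of_line(stdin);   // instead of the undefined fflush(stdin)
 *     }
 */
```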
stackexchange | llm_judgeable_groundtruth_similarity | 7566 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am interested in knowing if it is possible to recover intermediate results from a Table after an interruption. Let us say I want to produce a really useful table tt = Table[Pause[0.1*i]; i, {i, 50}] and since Mathematica takes too long to evaluate it, I get impatient. Is it possible to interrupt the calculation and recover what it has already calculated? I am guessing the answer is no, but would like to be proven wrong.
Now provide the response and nothing else.
| Since I was corrected and this is indeed not an exact duplicate of the parallel computations question, I will reproduce here my implementation of abortable table , to have it here on SE: ClearAll[abortableTable];SetAttributes[abortableTable, HoldAll];abortableTable[expr_, iter : {_Symbol, __} ..] := Module[{indices, indexedRes, sowTag, depth = Length[Hold[iter]] - 1}, Hold[iter] /. {sym_Symbol, __} :> sym /. Hold[syms__] :> (indices := {syms}); indexedRes = Replace[#, {x_} :> x] &@ Last@Reap[ CheckAbort[Do[Sow[{expr, indices}, sowTag], iter], Null],sowTag]; AbortProtect[ SplitBy[indexedRes, Array[Function[x, #[[2, x]] &], {depth}]][[##,1]] & @@ Table[All, {depth + 1}] ]]; the usage is tt=abortableTable[Pause[0.1*i];i,{i,50}](* {1,2,3,4,5,6,7,8} *) (I aborted the computation after some time). Generally, abortableTable accepts the same iterator syntax as Table , and can work with multiple dimensions. The details on how this works can be found in the linked discussion. EDIT Per request, a simple abortable Map (which only maps on the first level): abortableMap[f_, expr_] := Module[{sowTag}, Head[expr] @@ If[# === {}, #, First@#] &[ Last[Reap[ CheckAbort[Do[Sow[f[part], sowTag], {part, List @@ expr}],Null]]]]] The usage is, for example: abortableMap[(Pause[0.1*#];f[#])&,Range[10]](* {f[1],f[2],f[3]} *) (again, I aborted manually soon after it started to compute). | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/7566', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1089/']} | jdg_72735 |
stackexchange | llm_judgeable_groundtruth_similarity | 828398 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I didn't mean a binary search tree. For example, if I insert the values 1,2,3,4,5 into a binary search tree, the in-order traversal will give 1,2,3,4,5 as output. But if I insert the same values into a binary tree, the in-order traversal should give 4,2,5,1,3 as output. A binary tree can be created using dynamic arrays, in which for each element at index n, 2n+1 and 2n+2 represent its left and right children respectively. So representation and level-order traversal are very easy here, but I think in-order, post-order and pre-order are difficult. My question is: how can we create a binary tree like a binary search tree, i.e. have a tree class which contains data, left and right pointers instead of arrays, so that we can recursively do traversal?
Now provide the response and nothing else.
| If I understand you correctly, you want to create a binary tree from an array int[] values = new int[] {1, 2, 3, 4, 5};BinaryTree tree = new BinaryTree(values); this should prepopulate the binary tree with the values 1 - 5 as follows: 1 / \ 2 3 / \4 5 this can be done using the following class: class BinaryTree{ int value; BinaryTree left; BinaryTree right; public BinaryTree(int[] values) : this(values, 0) {} BinaryTree(int[] values, int index) { Load(this, values, index); } void Load(BinaryTree tree, int[] values, int index) { this.value = values[index]; if (index * 2 + 1 < values.Length) { this.left = new BinaryTree(values, index * 2 + 1); } if (index * 2 + 2 < values.Length) { this.right = new BinaryTree(values, index * 2 + 2); } }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/828398', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/52393/']} | jdg_72736 |
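Since the question specifically wanted to "recursively do traversal", here is a small method that could be added to the BinaryTree class above (a sketch; it has to sit inside the class because value, left and right are private there). For the input 1,2,3,4,5 it prints 4 2 5 1 3, the output expected in the question.

```csharp
// Added inside the BinaryTree class from the answer above (requires using System; for Console).
public void PrintInOrder()
{
    if (left != null)
    {
        left.PrintInOrder();
    }
    Console.Write(value + " ");
    if (right != null)
    {
        right.PrintInOrder();
    }
}

// Usage:
//   var tree = new BinaryTree(new[] { 1, 2, 3, 4, 5 });
//   tree.PrintInOrder();   // 4 2 5 1 3
```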
stackexchange | llm_judgeable_groundtruth_similarity | 214176 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Say " $\psi \Rightarrow \varphi$ " is a theorem and $\psi$ is essential in the hypothesis. I don't understand what "essential" means here. Here's what I guess: if $[\psi \Rightarrow \Phi] \bigwedge \neg [\Phi \Rightarrow \psi] \Rightarrow \neg [\Phi \Rightarrow \varphi]$, then we call $\psi$ essential in the hypothesis. Am I correct?
Now provide the response and nothing else.
| There is not a single formal definition of an "essential hypothesis"; it is an informal phrase used in mathematical exposition rather than part of the mathematics itself. It's easier to think of a theorem that has several hypotheses $A$ and $B$ and a conclusion $C$. There are two things people usually mean by an "essential" hypothesis. If $A$ and $B$ imply $C$, but $B$ alone does not imply $C$, some people will say $A$ is an essential hypothesis. It is "essential" in the sense that it cannot simply be omitted from the proof. So for example the hypothesis of second-countability in the theorem that "every second-countable regular space is metrizable" is essential in this sense, because not every regular space is metrizable. A stronger meaning is that the conclusion actually implies the hypothesis. For example, in the theorem that every second-countable regular space is metrizable, we know that any metrizable space is regular, so that hypothesis is not only essential in the sense of the first bullet, it is essential in the sense that if the conclusion holds then the hypothesis holds as well. You have to use context to distinguish between these two meanings. My sense is that the first meaning is more common. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/214176', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/29511/']} | jdg_72737 |
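Stated symbolically in the $A$, $B$, $C$ notation of the answer (this is only a restatement of the two senses above, in the spirit of the formalization attempted in the question): the first, weaker sense is $$\big[(A\wedge B)\Rightarrow C\big]\ \text{ and }\ \neg\big[B\Rightarrow C\big],$$ i.e. $A$ cannot simply be dropped; the second, stronger sense additionally requires $$C\Rightarrow A,$$ i.e. the conclusion recovers the hypothesis.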
stackexchange | llm_judgeable_groundtruth_similarity | 5600297 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a user model like this: class User < ActiveRecord::Base validates :password, :presence => true, :confirmation => true, :length => { :within => 6..40 } . . .end In the User model, I have a billing_id column I want to save into from an OrdersController which looks like this: class OrdersController < ApplicationController . . . def create @order = Order.new(params[:order]) if @order.save if @order.purchase response = GATEWAY.store(credit_card, options) result = response.params['billingid'] @thisuser = User.find(current_user) @thisuser.billing_id = result if @thisuser.save redirect_to(root_url, :notice => 'billing id saved') else redirect_to(root_url, :notice => @thisuser.errors) end end end end Because of validates :password in the User model, @thisuser.save doesn't save. However, once I comment out the validation, @thisuser.save returns true. This is unfamiliar territory for me because I thought this validation only worked when creating a new User. Can someone tell me if validates :password is supposed to kick in each time I try to save the User model? Thanks
Now provide the response and nothing else.
| You need to specify when you want to run your validations otherwise they will be run on every save call. This is easy to limit, though: validates :password, :presence => true, :confirmation => true, :length => { :within => 6..40 }, :on => :create An alternative is to have this validation trigger conditionally: validates :password, :presence => true, :confirmation => true, :length => { :within => 6..40 }, :if => :password_required? You define a method that indicates if a password is required before this model can be considered valid: class User < ActiveRecord::Base def password_required? # Validation required if this is a new record or the password is being # updated. self.new_record? or self.password? endend | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5600297', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/683456/']} | jdg_72738 |
stackexchange | llm_judgeable_groundtruth_similarity | 81469 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Bivariate and multivariate distribution relationship. If we have say 3 variables where any two variables follow a normal bivariate distribution, then does it necessarily follow a multivariate normal distribution?
Now provide the response and nothing else.
| Here's a counter-example: Let $X$, $Y$, $Z$ be independent standard normal, and let $W = |Z|\cdot \text{sign}(XY)$. Then $(W,X)$, $(W,Y)$ and $(X,Y)$ are bivariate normal, but $(W,X,Y)$ is not trivariate normal, since $WXY$ is never negative. What's happening is that the trivariate distribution has been constructed so that probability is only in four of the eight octants, in such a way that each quadrant of the pairwise margins gets an octant with probability and an octant without. To help visualize what's going on, see the following simulation: x=rnorm(1000) y=rnorm(1000) z=rnorm(1000) w=abs(z)*sign(x*y) Here are the pairwise samples: Here's the sample bivariate distribution of $X$ and $Y$ when $W$ is restricted to be positive: (when $W$ is restricted to be negative, the $(X,Y)$ values are in the other two quadrants) And here's a particular projection of the trivariate distribution; you should, for example, be able to make out that there's a low-density "gap" at the bottom. That might seem like a somewhat artificial counterexample, but it's not an issue just with some odd edge-cases. More generally, the trivariate distribution may be quite different from trivariate normal, in any number of smooth or not-smooth ways. The same goes for more than three variables. Copulas give us a way of constructing infinities of such counterexamples with various characteristics. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/81469', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/36964/']} | jdg_72739 |
stackexchange | llm_judgeable_groundtruth_similarity | 1227713 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that any bounded function with a finite number of discontinuities is Riemann integrable over some interval. Does the converse hold, i.e., if a bounded function is Riemann integrable, does it have a finite number of discontinuities? Thanks.
Now provide the response and nothing else.
| No: the classic counterexample is Thomae's function,$$ f(x) = \begin{cases}0 & x \text{ irrational} \\1/q & x=p/q \text{; $p,q$ integers and $(p,q)=1$} \end{cases}, $$which is continuous at every irrational and discontinuous at every rational. You can show using basic properties of the rationals that the upper sums converge to zero (lower sums are obviously all zero), and hence the function is Riemann integrable on, say, $[0,1]$, with integral $0$. See also the Riemann-Lebesgue criterion, which says a Riemann-integrable function is only discontinuous on a set of measure zero. Measure zero is easy to understand: a set has measure zero if you can cover it with a collection of intervals of arbitrarily small total length. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1227713', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/228834/']} | jdg_72740 |
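As a worked instance of that last remark (a standard argument, and it shows why Thomae's function fits the criterion): the set $\mathbb{Q}\cap[0,1]$ of discontinuities is countable, say $r_1,r_2,r_3,\dots$, and for any $\varepsilon>0$ we may cover $r_k$ by an interval of length $\varepsilon/2^k$, so the total length of the cover is at most $$\sum_{k=1}^\infty\frac{\varepsilon}{2^k}=\varepsilon,$$ which can be made as small as we like; hence the set of discontinuities has measure zero, consistent with the function being Riemann integrable.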