source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10) |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 22365470 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm attempting to create an instance in another region, but I get this error:

AWS Error Code: InvalidParameterCombination, AWS Error Message: VPC security groups may not be used for a non-VPC launch

Here is the code I'm executing.

RunInstancesRequest instancereq = new RunInstancesRequest();
instancereq.setInstanceType("m3.medium");
instancereq.setImageId("ami-37b1b45e");
instancereq.setMinCount(1);
instancereq.setMaxCount(1);
ArrayList<String> secgroup = new ArrayList<String>();
instancereq.setKeyName("testkey");
secgroup.add("testdefault");
instancereq.setSecurityGroups(secgroup);
instancereq.setPlacement(getAzPlacement());
RunInstancesResult instanceresult = ec2.runInstances(instancereq);

I've also tried, instead of using the name "testdefault", using the actual groupid (sg-########), but I'll get an error saying that security group doesn't exist (which is wrong, it does). Which, based on the API doc, if using a non-default VPC, you should pass the actual groupid, but I'll get an error like this:

InvalidGroup.NotFound, AWS Error Message: The security group 'sg-########' does not exist

If I use "default" as the setSecurityGroups it will use the default VPC. It just doesn't seem to like the groupid I'm passing, despite it being accurate. Also, if I comment out the setSecurityGroups code and use setSubnetId instead, passing the subnet id, it will create the instance just fine, but it goes into the "default" security group, not "testdefault" like I want. All I'm trying to accomplish is creating an instance and having it use the already existing VPC group.
Now provide the response and nothing else.
| My answer will focus on the below statement:

All I'm trying to accomplish is creating an instance and having it use the already existing VPC group.

So, as I understand it, you want to launch an instance in a non-default VPC and assign an existing VPC security group to it. I am not a Java guy, but I could do what you wanted in Ruby as below.

require 'aws-sdk-core'
Aws.config = {
  :access_key_id => "my_access_key",
  :secret_access_key => "my_secret_key",
  :region => 'us-west-2'
}
ec2 = Aws::EC2.new
ec2.run_instances(
  min_count: 1,
  max_count: 1,
  image_id: 'ami-8635a9b6',
  instance_type: 't1.micro',
  placement: { availability_zone: 'us-west-2a' },
  network_interfaces: [
    {
      subnet_id: 'subnet-e881bd63',
      groups: ['sg-fd53bf5e'],
      device_index: 0,
      associate_public_ip_address: true
    }
  ],
  key_name: 'my-key'
).each do |resp|
  resp.instances.each do |x|
    puts x.instance_id
  end
end

Although this is Ruby code, it is pretty straightforward and should give you some clear hints on how to go about doing it in Java, as all these AWS SDKs are calling the same web service APIs. I guess the things that you should be concentrating on in the above code are:

:region => 'us-west-2'

and

placement: { availability_zone: 'us-west-2a' },
network_interfaces: [
  {
    subnet_id: 'subnet-e881bd63',
    groups: ['sg-fd53bf5e'],
    device_index: 0,
    associate_public_ip_address: true
  }
],

Make sure you explicitly specify the region. Check how I have defined the subnet ID and security group ID. This code will launch my EC2 instance in subnet-e881bd63 of my VPC and will apply VPC security group ID sg-fd53bf5e to its 0th network interface. Besides, it will also assign a public IP address to my instance (by default, it will not assign a public IP address when you launch instances in a VPC). FYI: when you launch instances in a VPC, you must provide the security group ID instead of the security group name. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22365470', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3264628/']} | jdg_380241 |
stackexchange | llm_judgeable_groundtruth_similarity | 41601 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to learning algorithms. I was reading a backtracking algorithm for generating all strings of n bits. If I dry run the program I can see it gives the right result, but I didn't understand the logic. Can anybody please explain? What I mean is: what would be the thought process to arrive at this solution?

void binary(int n)
{
    if (n < 1)
        printf("%s\n", A);   // Assume A is a global variable
    else
    {
        A[n-1] = '0';
        binary(n-1);
        A[n-1] = '1';
        binary(n-1);
    }
}
Now provide the response and nothing else.
| This answer begins with a first section explaining the problem in moregeneral terms. The direct answer to the question for the example givenby the OP is then given in a second section that illustrate thediscussion in the first. You may well want to skip the first section and go direcly to thesecond one, depending on whether you like to start with basicprinciples or with the analysis of an example. The principles : non-deterministic programming My description is not intended for this algorithm only, but is more ageneral way to design such algorithms. The key idea is that backtracking is a technique to implementnon-determinism with depth-first exploration of the non-deterministicspace of possibilities. Non-determinism allows you to separate the logic of the problem fromthe non-deterministic exploration of the solution space. It makesprograms clearer, simplifies analysis and proofs of properties. This ispretty much the same advantage that you get when using non-determinismin Automata Theory, to simplify the design of automata performing agiven calculation. Non-determinism makes your technical life much easier. This advantage is such that various programming experts and languagedesigners have analyzed or experimented with the introduction of astandard non-deterministic functionality in various programminglanguages , not to mention the language Prolog where non-determinism isa central control feature. Another advantage of using non-determinism is that it leave open theimplementation technique, that can also use breadth first exploration,or even dynamic programming (to avoid repeating calculations). And thehandling of non-determinism, such as adding the backtrack control, isdone entirely by the compiler. An interesting side-point is that BNF, the language of context-freegrammars, can be seen as a simple non-deterministic programminglanguage to write parsers. And you can compile it fairly simply into adepth-first parser (some early ones were just that - Ned Irons, 1961),or a breadth-first one, or a dynamic-programming one (CYK, Earley, andsome others). However, backtracking requires mastering the state of the environment.Without going into details, it is much easier to implement it in apurely functional programming language that has no side-effects(for example, no assignment to global variables), at least not in theparts of programs that use non-determinism. This is not in contradiction with the use of the global variable A below, which is never read, except to collect the answers. But it explains why I insist on starting with a recursivenon-deterministic program, rather than with an equivalent iterativevariant. What is described below could well be done automatically by thecompiler of a non-deterministic language. Description of the design for the given example Here is how I would explain the design of this algorithm. You consider an array A of bits where the permutation is stored beforeprinting. It can be a global variable. First you write a non-deterministic algorithm, that will just produceone permutation, a "random" one, chosen by the god of non-determinism:) This is a simple recursive procedure that ask the god's oracle choose for each bit in succession. The choose oracle returnnon-deterministically one of its arguments (here the word oracle isused in its mythological sense, not in the usual sense ofcomputability theory). Using recursion is important, as I shall explain later. 
void binary(int n){ if(n < 0) printf("%s\n",A); // Assume A is a global variable else { A[n-1]= choose('0','1') binary(n-1); }} You should agree that this algorithm will return one of thepermutation, anyone of them. You could organize the recursion differently, it does not really matter. All that matters is that you may follow any of the possible computations. What you have done with this program is to prepare for all choicesthat will make a permutation. Then, instead of making one choice non deterministically, you simplytry them all, one after the other, for each of the calls. Since you have a list of the possible choices (there can be a variablenumber of them), you just try them in succession. You can do that byhaving a segment of code for each possible value. But when thepossible values are in a list of variable length, you can just loopover that list. Here you have just 2 values, so you just write twice the same code,once for each of the values, so that: A[n-1]= choose('0','1') binary(n-1); becomes A[n-1]= '0' binary(n-1); A[n-1]= '1' binary(n-1); The first call to binary will produce all permutations with '0' inposition n and the second all permutations with '1'. Instead of choising arbitrarily, you do one, then the other. There is an invisible trick that helps you. This would be somewhatharder if you had used an iteration rather than a recursion. The reason is that, for each former call to choose , you have tocome back to it after trying all possibilities of later choices, andyou have to remember what are the choices you have not tried yet. This is very easily handled by returns of the recursive call, whereyou find the environment and the execution point where you left it, sothat you simply go on. If you had used an iteration, you would have a beautiful mess managingthe data to recall how far you have dealt with each non-deterministicchoice. You could have started with a very simple loop, that is also anon-deterministic way of computing a single permutation. void binary(int n){ for (i=n-1; i=0; i--) { A[i]= choose('0','1') } printf("%s\n",A);} But transforming this program into a deterministic backtrackingprogram would be a lot harder. Isolating each non-deterministic choice in a function makes your lifeeasy. There is an old classical paper that talks of non-deterministicalgorithms, though I have not looked at it in a very long time, and Ihave no memory of how it handles the problem. People were not usingrecursion as much in those times. You may want to look at it. BobFloyd was one of the pioneers of computer science. Non-deterministic algorithms , Robert W. Floyd, 1966. I do not recall where it was published, but it is easy enough to find. Note: The previous version of the answer was assuming arrays are indexed from 1 to n . I changed that as it may put off users who recognize the code as C code. The site is normally language agnostic. The program given in the question is missing both A[0] and A[n] . | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/41601', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/3139/']} | jdg_380242 |
stackexchange | llm_judgeable_groundtruth_similarity | 501329 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
From running help . or help source:

Execute commands from a file in the current shell. Read and execute commands from FILENAME in the current shell. The entries in $PATH are used to find the directory containing FILENAME.

From my point of view, it seems like the dot command (or the source command) is simply running a shell script in the current shell context (instead of spawning another shell). Question: why doesn't . (or source) require the file to be executable, like when you run a normal script?
Now provide the response and nothing else.
| Lets say I have a shell script ( my-script.sh )starting with: #!/bin/sh If the script has execute permissions set then I can run the script with: ./my-script.sh In this case you are ultimately asking the kernel to run my-script.sh as a program, and the kernel (program loader) will check permissions first, and then use /bin/sh ./my-script.sh to actually execute your script. But the shell ( /bin/sh ) does not care about execute permissions and doesn't check them. So if you call this ... /bin/sh ./my-script.sh ... The kernel is never asked to run my-script.sh as a program. The kernel (program loader) is only asked to run /bin/sh . So the execute permissions will never me checked. That is, you don't need execute permission to run a script like this. To answer your question: The difference between you calling ./my-script.sh and . ./my-script.sh inside another script is exactly the same. In the first, you are asking the kernel to run it as a program, in the second, you are asking your current shell to read commands from the script and the shell doesn't need (or care about) execute permissions to do this. Further reading: Running scripts as programs is surprising behaviour when you think about it. They are not written in machine code. I would read up on why this works; start with reading up on the shebang ( #! ) https://en.wikipedia.org/wiki/Shebang_(Unix) Running scripts with the dot notation is necessary to share variables. All other mechanisms for running start a new shell "context", meaning that any variables set in the called script will not be passed back to the calling script. Bash documentation is a little lite, but it's here: https://www.gnu.org/software/bash/manual/html_node/Bourne-Shell-Builtins.html | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/501329', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/311201/']} | jdg_380243 |
stackexchange | llm_judgeable_groundtruth_similarity | 59904 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let G be a graph with vertices $1,2,...,n$ and $F(x)=\displaystyle{\sum_{ij\in\rm{E(G)}}x_ix_j}$. Let S be the subset of $\mathbb{R}^n$ given by $x_i\ge 0$, $\sum x_i=1$. We're interested in $\displaystyle{\max_{x\in S}}$ $F(x)$. Why is any local maximum of F in the interior of S also a global maximum? I was reading this paper, http://math.ca/cjm/v17/cjm1965v17.0533-0540.pdf . In the middle of the third page, there is a remark saying: Any local maximum of F in the interior of S is also a global maximum. I don't see why this is true. Is this obvious? Or well-known?
Now provide the response and nothing else.
| It's because the Hessian quadratic form of $F$ restricted to the orthogonal complement of the vector of all ones is exactly the Laplacian of the graph $G$ (a good basis for the orthogonal complement is the set of vectors having $1$ in the first coordinates, and $-1$ in the $i>1$-st coordinate). The Laplacian matrix is positive semi-definite. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/59904', 'https://mathoverflow.net', 'https://mathoverflow.net/users/10304/']} | jdg_380244 |
stackexchange | llm_judgeable_groundtruth_similarity | 7865 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Define a function $F(A, B, C)$ as the number of ways you can roll $B$ $C$-sided dice to sum up to $A$, counting different orderings (rolling a $2$, $2$, and $3$ with three dice is different from rolling a $2$, $3$, and $2$). Example: With three $5$-sided dice, the list of $F(A, B, C)$ values in the domain of the possible values of $A$ for $B = 3$ and $C = 5$ is: $$F(3, 3, 5), F(4, 3, 5), F(5, 3, 5), F(6, 3, 5), ... , F(15, 3, 5)$$ is evaluated to:$$1, 3, 6, 10, 15, 18, 19, 18, 15, 10, 6, 3, 1$$Call this list $L_1$. Let $s$ be the number of sides on each die, let $n$ be the number of dice, and let $v$ be the total value to roll from the $n$ dice.Let $L_2$ be the list of ${v - 1}\choose{v - n}$ in the domain of $v$ values for $n = 3$.Then $L_2$ is:$${{3 - 1}\choose{3 - 3}}, {{4 - 1}\choose{4 - 3}}, {{5 - 1}\choose{5 - 3}}, {{6 - 1}\choose{6 - 3}}, ... , {{15 - 1}\choose{15 - 3}}$$Which is evaluated to:$$1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66, 78, 91$$Comparing $L_1$ with $L_2$, we see that only the first $s$ values of the lists are equal:$$1, 3, 6, 10, 15$$I have observed that this property holds with other values of $s$, $v$, and $n$, and $A$, $B$, and $C$. Can someone please explain why $L_1$ and $L_2$ share the first $s$ values?
Now provide the response and nothing else.
| The first $s$ terms of $L_1$ are the compositions of $A$. They stop being the compositions of $A$ at that point because you hit the limit. In your example, you miss the composition of 8 as 6+1+1 because the dice only have 5 sides. The compositions of $n$ into exactly $k$ parts are given by ${{n - 1}\choose{k - 1}}$ as shown here | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/7865', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/2771/']} | jdg_380245 |
stackexchange | llm_judgeable_groundtruth_similarity | 27768303 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a .zip file and need to unpack its entire content using PowerShell. I'm doing this but it doesn't seem to work:

$shell = New-Object -ComObject shell.application
$zip = $shell.NameSpace("C:\a.zip")
MkDir("C:\a")
foreach ($item in $zip.items()) {
    $shell.Namespace("C:\a").CopyHere($item)
}

What's wrong? The directory C:\a is still empty.
Now provide the response and nothing else.
| Here is a simple way using ExtractToDirectory from System.IO.Compression.ZipFile:

Add-Type -AssemblyName System.IO.Compression.FileSystem

function Unzip
{
    param([string]$zipfile, [string]$outpath)

    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $outpath)
}

Unzip "C:\a.zip" "C:\a"

Note that if the target folder doesn't exist, ExtractToDirectory will create it. Other caveats:

- Existing files will not be overwritten and instead trigger an IOException.
- This method requires at least .NET Framework 4.5, available for Windows Vista and newer.
- Relative paths are not resolved based on the current working directory; see "Why don't .NET objects in PowerShell use the current directory?"

See also: How to Compress and Extract files (Microsoft Docs) | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/27768303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4380217/']} | jdg_380246 |
stackexchange | llm_judgeable_groundtruth_similarity | 30460929 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In order to prevent from context switching, I want to create a big loop to serve both the network connections and some routines. Here's the implementation for normal functions: import asyncioimport timedef hello_world(loop): print('Hello World') loop.call_later(1, hello_world, loop)def good_evening(loop): print('Good Evening') loop.call_later(1, good_evening, loop)print('step: asyncio.get_event_loop()')loop = asyncio.get_event_loop()print('step: loop.call_soon(hello_world, loop)')loop.call_soon(hello_world, loop)print('step: loop.call_soon(good_evening, loop)')loop.call_soon(good_evening, loop)try: # Blocking call interrupted by loop.stop() print('step: loop.run_forever()') loop.run_forever()except KeyboardInterrupt: passfinally: print('step: loop.close()') loop.close() Here's the implementation for coroutines: import [email protected] hello_world(): while True: yield from asyncio.sleep(1) print('Hello World')@asyncio.coroutinedef good_evening(): while True: yield from asyncio.sleep(1) print('Good Evening')print('step: asyncio.get_event_loop()')loop = asyncio.get_event_loop()try: print('step: loop.run_until_complete()') loop.run_until_complete(asyncio.wait([ hello_world(), good_evening() ]))except KeyboardInterrupt: passfinally: print('step: loop.close()') loop.close() And the mixed one: import asyncioimport timedef hello_world(loop): print('Hello World') loop.call_later(1, hello_world, loop)def good_evening(loop): print('Good Evening') loop.call_later(1, good_evening, loop)@asyncio.coroutinedef hello_world_coroutine(): while True: yield from asyncio.sleep(1) print('Hello World Coroutine')@asyncio.coroutinedef good_evening_coroutine(): while True: yield from asyncio.sleep(1) print('Good Evening Coroutine')print('step: asyncio.get_event_loop()')loop = asyncio.get_event_loop()print('step: loop.call_soon(hello_world, loop)')loop.call_soon(hello_world, loop)print('step: loop.call_soon(good_evening, loop)')loop.call_soon(good_evening, loop)print('step: asyncio.async(hello_world_coroutine)')asyncio.async(hello_world_coroutine())print('step: asyncio.async(good_evening_coroutine)')asyncio.async(good_evening_coroutine())try: loop.run_forever()except KeyboardInterrupt: passfinally: print('step: loop.close()') loop.close() As you see, each coroutine function has a while loop surrounded. How can I make it like the normal one? I.e. when it is done, call itself after the given delay time, but not just put a loop there.
Now provide the response and nothing else.
| If you really want to eliminate the while-loop from the coroutines (I'm not sure why you feel that's necessary; it's the most natural way to do what you're trying to do), you can use asyncio.async (or asyncio.ensure_future on Python 3.4.4+) to schedule the coroutine to run again on the next event loop iteration:

import asyncio

@asyncio.coroutine
def hello_world():
    yield from asyncio.sleep(1)
    print('Hello World')
    asyncio.async(hello_world())

@asyncio.coroutine
def good_evening():
    yield from asyncio.sleep(1)
    print('Good Evening')
    asyncio.async(good_evening())

print('step: asyncio.get_event_loop()')
loop = asyncio.get_event_loop()

try:
    print('step: loop.run_until_complete()')
    asyncio.async(hello_world())
    asyncio.async(good_evening())
    loop.run_forever()
except KeyboardInterrupt:
    pass
finally:
    print('step: loop.close()')
    loop.close()

Note that you have to switch back to using loop.run_forever() if you do this, since hello_world / good_evening will exit immediately after printing now. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30460929', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1592410/']} | jdg_380247 |
stackexchange | llm_judgeable_groundtruth_similarity | 2670732 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $(S, \mathcal{A}, \mu)$ be a measure space. For $A_1,A_2,\ldots\subseteq S$ define $$\limsup_{n\to\infty} A_n = \bigcap_{k=1}^\infty\bigcup_{n= k}^\infty A_n$$ Exercise: Show that $s\in\limsup\limits_{n\to\infty} A_n$ if and only if there are infinitely many $n\in\mathbb{N}$ such that $s\in A_n$. What I've tried: I think I have prove the statement in the $(\Rightarrow)$ direction. Let $s\in\limsup\limits_{n\to\infty}A_n$. Suppose that there are only a finite number of $n$ for which $s\in A_n$, and let $N$ denote the largest $n\in\mathbb{N}$ such that $s\in A_N$. We have that $$\limsup\limits_{n\to\infty} A_n \subseteq (A_1\cup A_2\cup\ldots)\cap (A_2\cup A_3\cup\ldots)\cap\ldots \cap (A_N\cup A_{N+1}\cup\ldots)\cap(A_{N+1}\cup A_{N+2}\cup\ldots)\subseteq (A_{N+1}\cup A_{N+2}\cup\ldots)$$ Now, since $s\not\in (A_{N+1}\cup A_{N+2}\cup\ldots)$ we have that $s\not\in\limsup\limits_{n\to\infty}A_n$. This is a contradiction, so we must have that there are infinitely many $n\in\mathbb{N}$ such that $s\in A_n$. I have thought about how I should prove the $(\Leftarrow)$ direction, but my results thus far aren't very rigorous. Suppose that there are infinitely many $n\in\mathbb{N}$ such that $s\in A_n$. Then for every $k$ we have that $s\in\bigcup\limits_{n=k}^\infty A_n$, and since $\limsup\limits_{n\to\infty}A_n$ is the intersection of those $\bigcup\limits_{n=k}^\infty A_n$, we have that $s\in\limsup\limits_{n\to\infty}A_n$. Question: How should I show this exercise? Thanks!
Now provide the response and nothing else.
| Right to left direction. Given such an $x$ and $k$, we are to show that $x\in\displaystyle\bigcup_{n\geq k}A_{n}$. Since $x\in A_{i}$ for infinitely many $i$, the set $I:=\{i: x\in A_{i}\}$ is infinite. Choose an $i_{0}\in I$, then $I-\{1,...,i_{0}\}\ne\emptyset$, if not, then $I$ is a finite set. So pick an $i_{1}\in I-\{1,...,i_{0}\}$, then $I-\{1,...,i_{0},...,i_{1}\}\ne\emptyset$, so pick an $i_{2}\in I-\{1,...,i_{0},...,i_{1}\}$. Continue in this fashion, a strictly increasing sequence $\{i_{l}\}$ is such that $x\in A_{i_{l}}$. Since $\{i_{l}\}$ is a sequence of natural numbers, if it converges, then it is eventually constant, so it is not convergent, and hence $i_{l}\rightarrow\infty$, so some $l_{0}$ is such that $i_{l}\geq k$ for all $l\geq l_{0}$, but we have $x\in A_{i_{l}}$, so $x\in\displaystyle\bigcup_{n\geq k}A_{n}$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2670732', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/418202/']} | jdg_380248 |
stackexchange | llm_judgeable_groundtruth_similarity | 9946881 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm developing a Newsstand application and use NSURLRequest to download issue assets.

NSArray *contents = [issue.tableOfContents objectForKey:kSNTableOfContentsContents];
NSHTTPCookie *cookie;
NSHTTPCookieStorage *cookieJar = [NSHTTPCookieStorage sharedHTTPCookieStorage];
NSLog(@"HERE GO MY COOKIES");
for (cookie in [cookieJar cookies]) {
    NSLog(@"%@", cookie);
}

for (NSDictionary *contentItem in contents) {
    NSString *contentURL_string = [contentItem objectForKey:kSNTableOfContentsRemoteURL];
    NSURL *contentURL = [NSURL URLWithString:contentURL_string];
    NSString *fileName = [contentItem objectForKey:kSNTableOfContentsContentsURL];
    NSURLRequest *request = [[NSURLRequest alloc] initWithURL:contentURL];
    NKAssetDownload *asset = [newsstandIssue addAssetWithRequest:request];
    [request release];
    ....
    [asset downloadWithDelegate:self];
    ....
}

When the first for loop is executed my cookies appear to be in NSHTTPCookieStorage, but when the actual requests are sent, there is no cookie information in the headers. I use Charles Proxy to look that up. Could anyone please give some advice on what might be causing this issue?
Now provide the response and nothing else.
| From this thread, the magic incantation appears to be:

NSDictionary *headers = [NSHTTPCookie requestHeaderFieldsWithCookies:[cookieJar cookies]];
[request setAllHTTPHeaderFields:headers];

(Warning: untested code.) This will convert your cookie jar into an array of cookies, then to an NSDictionary of headers, and finally staple those headers to your request. This is comparable to doing it manually, as Adam Shiemke linked in the question errata, but much cleaner in my opinion. As per the documentation, you may also want to check HTTPShouldHandleCookies to see if your default cookie policy is being used properly. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9946881', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/198930/']} | jdg_380249 |
stackexchange | llm_judgeable_groundtruth_similarity | 18684 |
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the formula for the vanilla option (Call/Put) price in the Heston model? I only found the bi-variate system of stochastic differential equations of Heston model but no expression for the option prices.
Now provide the response and nothing else.
| In the Heston Model we have\begin{align}C(t\,,{{S}_{t}},{{v}_{t}},K,T)={{S}_{t}}{{P}_{1}}-K\,{{e}^{-r\tau }}{{P}_{2}}\end{align}where, for $j=1,2$ \begin{align}& {{P}_{j}}({{x}_{t}}\,,\,{{v}_{t}}\,;\,\,{{x}_{T}},\ln K)=\frac{1}{2}+\frac{1}{\pi }\int\limits_{0}^{\infty }{\operatorname{Re}\left( \frac{{{e}^{-i\phi \ln K}}{{f}_{j}}(\phi ;t,x,v)}{i\phi } \right)}\,d\phi \\ & {{f}_{j}}(\phi \,;{{v}_{t}},{{x}_{t}})=\exp [{{C}_{j}}(\tau ,\phi )+{{D}_{j}}(\tau ,\phi ){{v}_{t}}+i\phi {{x}_{t}}] \\ \end{align} and \begin{align} & {{C}_{j}}(\tau ,\phi )=(r-q)i\phi \,\tau +\frac{a}{{{\sigma }^{2}}}{{\left( ({{b}_{j}}-\rho \sigma i\phi +{{d}_{j}})\,\tau -2\ln \frac{1-{{g}_{j}}{{e}^{{{d}_{j}}\tau }}}{1-{{g}_{j}}} \right)}_{_{_{_{{}}}}}} \\ & {{D}_{j}}(\tau ,\phi )=\frac{{{b}_{j}}-\rho \sigma i\phi +{{d}_{j}}}{{{\sigma }^{2}}}\left( \frac{1-{{e}^{{{d}_{j}}\tau }}}{1-{{g}_{j}}{{e}^{{{d}_{j}}\tau }}} \right) \\ \end{align} where\begin{align} & {{g}_{j}}=\frac{{{b}_{j}}-\rho \sigma i\phi +{{d}_{j}}}{{{b}_{j}}-\rho \sigma i\phi -{{d}_{j}}} \\ & {{d}_{j}}=\sqrt{{{({{b}_{j}}-\rho \sigma i\phi )}^{2}}-{{\sigma }^{2}}(2i{{u}_{j}}\phi -{{\phi }^{2}})} \\ & {{u}_{1}}=\frac{1}{2}\,,\,{{u}_{2}}=-\frac{1}{2}\,,\,a=\kappa \theta \,,\,{{b}_{1}}=\kappa +\lambda -\rho \sigma \,,\,{{b}_{2}}=\kappa +\lambda \,,\ {{i}^{2}}=-1 \\ \end{align}Other representations: Carr-Madan (1999) Lewis (2000) Attari (2004) Gatheral (2006) Albercher (2007) | {} | {'log_upvote_score': 4, 'links': ['https://quant.stackexchange.com/questions/18684', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/9336/']} | jdg_380250 |
stackexchange | llm_judgeable_groundtruth_similarity | 217363 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have seen several pictures depicting our galaxy, but I would like to clarify the following: Picture 1: Picture 2: What part of the galaxy is depicted in the first picture, which is taken from the Earth? Does the first picture show the bright part (galactic bulge) on the centre of the second picture, or it show one of the spiral arms of the galaxy located between the Sun and the galactic bulge, or does it show both of them, first the spiral arm and behind this, the galactic bulge? Moreover, what is the shape of the Milky Way? Why do some pictures depict it as an arch while others depict it as an straight line? I would appreciate if you include websites or scientific literature for further reading.
Now provide the response and nothing else.
| The first picture is a view of the center of the galaxy, as observed from Earth. There's quite a lot of dust in between it and Earth, so on many wavelengths (including visible light) we can't see much. The first picture is actually only part of a larger picture, including the Paranal Observatory:

Image courtesy of Wikipedia user Nikthestunned, under the Creative Commons Attribution 3.0 Unported license.

The galactic center corresponds to the big, bright bulge in the second picture. This holds Sagittarius A*, a radio source within the region Sagittarius A that is thought to be a supermassive black hole.

The Milky Way is a spiral galaxy (specifically, a barred spiral). It has four arms, although the discovery of the "New Outer Arm" has cast doubt on whether or not there may be a fifth. We're in the plane of the Milky Way, so we can't view it face-on and see its spiral structure in the same way we can view other galaxies. That's why photos of it - your second image is an artist's impression, not a photo - are typically bar-shaped. However, using a fish-eye lens can cause it to appear quite curved, as in this photograph, also from Paranal:

Image courtesy of Wikipedia user Soerfm, under the Creative Commons Attribution 3.0 Unported license.

As a final note, I must add that I'm somewhat confused by the use of "bulge" in the second picture. In this context, "bulge" refers to the central group of stars in the galaxy. The leader in the second picture, however, appears to be pointing to the end of one of the spiral arms, which is inaccurate. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/217363', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/97925/']} | jdg_380251 |
stackexchange | llm_judgeable_groundtruth_similarity | 430313 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
a) $f(x)>0$ and $f(x)\in C[a,b]$ Prove $$\left(\int_a^bf(x)\sin x\,dx\right)^2 +\left(\int_a^bf(x)\cos x\,dx\right)^2 \le \left(\int_a^bf(x)\,dx\right)^2$$ I have tried Cauchy-Schwarz inequality but failed to prove. b) $f(x)$ is differentiable in $[0,1]$ Prove $$|f(0)|\le \int_0^1|f(x)|\,dx+\int_0^1|f'(x)|dx$$ Any Helps or Tips,Thanks
Now provide the response and nothing else.
| Hint: For part a), use Jensen's inequality with weighted measure $f(x)\,\mathrm{d}x$. Since $f(x)>0$, Jensen says that for a convex function $\phi$$$\phi\left(\frac1{\int_Xf(x)\mathrm{d}x}\int_Xg(x)\,f(x)\mathrm{d}x\right)\le\frac1{\int_Xf(x)\mathrm{d}x}\int_X\phi(g(x))\,f(x)\mathrm{d}x$$ Hint: For part b), note that for $x\in[0,1]$,$$f(0)-f(x)\le\int_0^1|f'(t)|\,\mathrm{d}t$$and integrate over $[0,1]$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/430313', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/71857/']} | jdg_380252 |
stackexchange | llm_judgeable_groundtruth_similarity | 590328 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I read this text on op amp input bias current. It says: The input pins are the base pins of the transistor (or gate pins in the case of FET inputs). In order for the input transistor to operate, the base current (IB) must flow. This base current is the input bias current. Often the input bias current is modeled by this kind of model: The bias current of an op amp is mentioned usually in the datasheet. But why is it constant? If the bias current is connected to the base-emitter junction of a transistor inside the op amp, shouldn't its value be dependent on the input voltage? Why can it be modeled in a circuit using a constant current source?
Now provide the response and nothing else.
| If you think of the simplified input circuit of a bipolar-input op-amp with a differential pair- the transistors are operated at fairly constant and equal collector currents at balance, so the base currents tend to be fairly constant.(image from here ) CMOS and JFET input op-amps are another matter- the input bias current consists of leakages in input transistors and protection networks so you would expect them to vary with input CM voltage. Getting back the bipolar types, a real op-amp such as the LT1013 has characteristics like this: As you can see the bias current varies by about 2:1 with common mode voltage typically (+5V supply), and varies significantly with temperature as well. So they're not really all that constant. Take an older "precision" op-amp design, the LM725 which actually has something close to the simple textbook differential pair with current source as the front end (and no bias current compensation): You can see that the input bias current varies by 4:1 over a wide temperature range. You refer in particular to the LM741- here is the input bias current as a function of temperature for the Fairchild uA741 from the Nov 1970 datasheet: Maybe 8:1 change over temperature. Note also that if you unbalance the op-amp (say the output saturates) then the input bias current of a given input of a bipolar input op-amp will change significantly. Rail-to-rail input bipolar-input op-amps have even more "interesting" characteristics in that the bias current typically changes sign depending on common mode voltage. The shenanigans look like this for the LT1218 : Compare with the much cheaper LM358 where the current changes only about 20% typically with CM voltage, and in a smooth manner. P.S. do not take the "simplified" schematics without a large grain of salt. Even the venerable LM358 has a more sophisticated current source arrangement than shown (3 current paths rather than one), which has a large effect on the bias current change with differential input voltage. | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/590328', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/142915/']} | jdg_380253 |
stackexchange | llm_judgeable_groundtruth_similarity | 3591565 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use WWW::Mechanize::Shell to test stuff. Since I didn't manage to sign in on a web site I want to scrape, I thought I would use the browser cookie (Chrome or Firefox) for that specific website with the 'cookie' command WWW::Mechanize::Shell has. The question is, cookies are usually stored in a single file, which is not good. How do I get a cookie for only this specific site? Thanks.
Now provide the response and nothing else.
| Why isn't storing cookies in a file good? Since WWW::Mechanize is built on top of LWP::UserAgent , you handle cookies just like you do in LWP::UserAgent . You can make the cookie jar a file or an in-memory hash. If you don't want to save the cookies in a file, use an empty hash reference when you construct the mech object: use WWW::Mechanize; my $mech = WWW::Mechanize->new( cookie_jar => {} ); If you want to use a new file, make a new HTTP::Cookies object: use WWW::Mechanize; my $mech = WWW::Mechanize->new( cookie_jar => HTTP::Cookies->new( file => "$ENV{HOME}/.cookies.txt" ) ); If you want to load a browser specific cookies file, use the right module for it: use WWW::Mechanize; my $mech = WWW::Mechanize->new( cookie_jar => HTTP::Cookies::Netscape->new( file => $filename ) ); If you want no cookies at all, use undef explicitly: use WWW::Mechanize; my $mech = WWW::Mechanize->new( cookie_jar => undef ); All of this is in the docs. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3591565', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/427306/']} | jdg_380254 |
stackexchange | llm_judgeable_groundtruth_similarity | 724501 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider I have an Array, int[] i = {1,2,3,4,5}; Here I have assigned values for it. But in my problem I get these values only at runtime.How can I assign them to an array. For example: I get the max size of array from user and the values to them now how do I assign them to the array int []. Or can I use anyother data types like ArrayList etc which I can cast to Int[] at the end?
Now provide the response and nothing else.
| Well, the easiest is to use List<T>:

List<int> list = new List<int>();
list.Add(1);
list.Add(2);
list.Add(3);
list.Add(4);
list.Add(5);
int[] arr = list.ToArray();

Otherwise, you need to allocate an array of suitable size, and set via the indexer.

int[] arr = new int[5];
arr[0] = 1;
arr[1] = 2;
arr[2] = 3;
arr[3] = 4;
arr[4] = 5;

This second approach is not useful if you can't predict the size of the array, as it is expensive to reallocate the array every time you add an item; a List<T> uses a doubling strategy to minimize the reallocations required. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/724501', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/71713/']} | jdg_380255 |
stackexchange | llm_judgeable_groundtruth_similarity | 165650 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to add a tooltip/alt to a "td" element inside of my tables with jquery. Can someone help me out? I tried: var tTip ="Hello world";$(this).attr("onmouseover", tip(tTip)); where I have verified that I am using the "td" as "this". **Edit:**I am able to capture the "td" element through using the "alert" command and it worked. So for some reason the "tip" function doesn't work. Anyone know why this would be?
Now provide the response and nothing else.
| $(this).mouseover(function() {
    tip(tTip);
});

A better way might be to put title attributes in your HTML. That way, if someone has JavaScript turned off, they'll still get a tooltip (albeit not as pretty/flexible as you can do with jQuery).

<table id="myTable">
  <tbody>
    <tr>
      <td title="Tip 1">Cell 1</td>
      <td title="Tip 2">Cell 2</td>
    </tr>
  </tbody>
</table>

and then use this code:

$('#myTable td[title]')
  .hover(function() {
    showTooltip($(this));
  }, function() {
    hideTooltip();
  });

function showTooltip($el) {
  // insert code here to position your tooltip element (which I'll call $tip)
  $tip.html($el.attr('title'));
}

function hideTooltip() {
  $tip.hide();
} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/165650', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7644/']} | jdg_380256 |
stackexchange | llm_judgeable_groundtruth_similarity | 1365683 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to read in a file with names and addresses. It may look like this:

John Doe
123 Main Street
My Town, US 12345

Ralph Kramden
c/o Joe
999 North Lane
YourTown, US 22839

There is always a blank line between records, but I don't know how to tell Perl that the next X lines are all one record (and X can vary). How can that be done?
Now provide the response and nothing else.
| From perldoc perlvar : $/ The input record separator, newline by default. This influences Perl’s idea of what a "line" is. Works like awk’s RS variable, including treating empty lines as a terminator if set to the null string. (An empty line cannot contain any spaces or tabs.) You may set it to a multi‐character string to match a multi‐character terminator, or to "undef" to read through the end of file. Setting it to "\n\n" means something slightly different than setting to "", if the file contains consecutive empty lines. Setting to "" will treat two or more consecutive empty lines as a single empty line. Setting to "\n\n" will blindly assume that the next input character belongs to the next paragraph, even if it’s a newline. (Mnemonic: / delimits line boundaries when quoting poetry.) So try this: { open my $fh, "<", $input_file; local $/ = ""; while(<$fh>) { # each loop, $_ will be a different record # the first will be "John Doe\n123 Main Street\nMy Town, US 12345\n\n" # etc. }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1365683', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/166985/']} | jdg_380257 |
stackexchange | llm_judgeable_groundtruth_similarity | 24979 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am unable to change the frameticks for my list plot. I am trying to plot a growth rate omega function which looks like this: Clear[\[Omega]3, \[Omega], \[Epsilon], h, K1, \[Delta], q, Bi, m]\[Omega]3 = \[Epsilon]/((h + K1)^2) - h^3 q^4 - Bo h^2 q^2 + m q^2 h^2/((h + K1)^2) + \[Delta] h^3 q^2/((Bi h + K1)^3)\[Omega] = \[Omega]3 /. {\[Epsilon] -> 6.01*10^-8, \[Delta] -> 5.19*10^-7, K1 -> 1, Bi -> 1, m -> 0.1092, Bo -> 0} I vary the q via a table and capture the film thickness for omega=0 for ListPlot[ Table[ q = qx; 2.35 FindRoot[\[Omega] == 0, {h, 1}][[1]][[2]], {qx, 0.1, 2, 0.1} ], PlotRange -> {{0, 21}, {0.0, 4}}, AxesLabel -> {"wavenumber", "Growth rate"}, BaseStyle -> {FontSize -> 18}, Frame -> {True, True, False, False}, FrameLabel -> {"Wavenumber, q", "\!\(\*SubscriptBox[\(h\), \(\[Omega] = 0\)]\)[mm]"}, PlotStyle -> Directive[Thick, Black, PointSize[Large]] ] The plot looks like this: The x axis has grid points and not the values of q . Is there any way I could have q values instead of grid points?
Now provide the response and nothing else.
| There are two ways. The first way is to specify your data differently to get pairs of numbers like this: datatoplot = Table[{qx, 2.35 FindRoot[(\[Omega] /. q -> qx) == 0, {h, 1}][[1]][[2]]}, {qx, 0.1, 2, 0.1}]ListPlot[datatoplot, PlotRange -> {{0, 2.5}, {0.0, 4}}, AxesLabel -> {"wavenumber", "Growth rate"}, BaseStyle -> {FontSize -> 18}, Frame -> {True, True, False, False}, FrameLabel -> {"Wavenumber, q", "\!\(\*SubscriptBox[\(h\), \(\[Omega] = 0\)]\)[mm]"}, PlotStyle -> Directive[Thick, Black, PointSize[Large]]] Alternatively, use the FrameTicks option to ListPlot to replace the tick label at each point. ListPlot[datatoplot2, PlotRange -> {{0, 20}, {0.0, 4}}, AxesLabel -> {"wavenumber", "Growth rate"}, BaseStyle -> {FontSize -> 18}, Frame -> {True, True, False, False}, FrameTicks -> {{Automatic, None}, {Transpose[{Range[0, 20, 5], Range[0, 2, 0.5]}], None}}, FrameLabel -> {"Wavenumber, q", "\!\(\*SubscriptBox[\(h\), \(\[Omega] = 0\)]\)[mm]"}, PlotStyle -> Directive[Thick, Black, PointSize[Large]]] You can tweak the presentation according to taste. Have a look at some of the other questions about Ticks for some ideas. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/24979', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/204/']} | jdg_380258 |
stackexchange | llm_judgeable_groundtruth_similarity | 8153061 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my view page, i am using form_tag to create a form which will pass a string of ids from a hidden field to the controller code. In my controller code, i am looping through an array of ids to update each record containing that id in the Expression table. But the code below does not seem to work.I would really appreciate it if somebody could give me some suggestion regarding what is wrong with the code below. def update_expression @emi_ids_array = params[:emi_ids].split(/,/) @sub_id = params[:sub_id] @emi_ids_array.each do |emi_id| @existing_exp = Expression.find(:first, :conditions => [ "EXT_EMI_ID = ? and EXT_SUB_FK = ?", emi_id, @sub_id]) @expression = @existing_exp.update_attributes( :EXT_SUB_FK => @sub_id, :EXT_PRESENCE => "present", :EXT_STRENGTH => "weak", :EXT_EMI_ID => emi_id ) endend
Now provide the response and nothing else.
| So if you want a different limit for each user, you can add things_limit:integer into User and do

class User
  has_many :things
  validates_each :things do |user, attr, value|
    user.errors.add attr, "too much things for user" if user.things.size > user.things_limit
  end
end

class Thing
  belongs_to :user
  validates_associated :user, :message => "You have already too much things."
end

With this code you can't update user.things_limit to a number lower than the number of things the user already has, and of course it restricts the user to creating things up to his user.things_limit. Application example for Rails 4: https://github.com/senayar/user_things_limit | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8153061', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/702154/']} | jdg_380259 |
stackexchange | llm_judgeable_groundtruth_similarity | 144122 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am interested in preparing an anhydrous sample of sodium tungstate (Na 2 WO 4 ), which is normally found as the dihydrate. Finding an example of this in literature is proving difficult, I am interested if someone can suggest a preparation method based on prior experience or difficulties I should encounter.
Now provide the response and nothing else.
| Hoermann [ 1 ] prepared anhydrous sodium tungstate by melting the 1:1 mix of sodium carbonate and tungsten(VI) oxide (and, subsequently, growing single crystals); alternatively, he proposed prolonged drying of a dihydrate at 100 °C: $\ce{Na2MoO3}$ und $\ce{Na2WO3}$ sind wasserfrei durch Zusammenschmelzen von 1 Mol $\ce{Na2CO3}$ mit 1 Mol $\ce{MoO3},$ bzw. $\ce{WO3},$ oder durch volliges Entwassern der Hydrate bei 100° zu erhalten. Die Salze zeigen ansgepragte Polymorphie. Busey and Keller [ 2 ] obtained anhydrous sodium tungstate (reported water content 0.12%) by drying a dihydrate at 200 °C in vacuum. Purity has also been confirmed by Raman spectroscopy and powder x-ray diffraction: The $\ce{Na2WO4}$ was prepared by heating the dihydrate at 200° in a vacuum. References Hoermann, F. Beitrag zur Kenntnis der Molybdate und Wolframate. Die binären Systeme: $\ce{Li3MoO4-MoO3},$ $\ce{Na2MoO4-MoO3},$ $\ce{K3MoO4-MoO3},$ $\ce{Li2WO4-WO3},$ $\ce{Na2WO4-WO3},$ $\ce{K2WO4-WO3},$ $\ce{Li2MoO4-Na2MoO4},$ $\ce{Li2WO4-Na2WO4},$ $\ce{Li2MoO4-K2MoO4}.$ Z. Anorg. Allg. Chem. 1929 , 177 (1), 145–186. DOI: 10.1002/zaac.19291770117 . Busey, R. H.; Keller, O. L. Structure of the Aqueous Pertechnetate Ion by Raman and Infrared Spectroscopy. Raman and Infrared Spectra of Crystalline $\ce{KTcO4},$ $\ce{KReO4},$ $\ce{Na2MoO4},$ $\ce{Na2WO4},$ $\ce{Na2MoO4 · 2 H2O},$ and $\ce{Na2WO4 · 2 H2O}.$ The Journal of Chemical Physics 1964 , 41 (1), 215–225. DOI: 10.1063/1.1725625 . | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/144122', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/37024/']} | jdg_380260 |
stackexchange | llm_judgeable_groundtruth_similarity | 40808 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Directly from Wikipedia, a set of vertices $X \subseteq V(G)$ of a graph $G$ is independent if and only if its complement $V(G) \setminus X$ is a vertex cover. Does this imply that the complement of the independent set problem is the vertex cover problem?
Now provide the response and nothing else.
| Well, strictly speaking it's not the complement; co-VC is co-NP-complete whereas Independent Set is NP-complete. If they were the same, we would know that co-NP was equal to NP, which we do not, and indeed most people believe they are not. But an easy way of seeing that they are not the same is to consider $(K_4, 2)$ (where $K_4$ is the complete graph on four vertices), which is neither a yes-instance of Vertex Cover nor of Independent Set. Similarly, the instance $(K_2, 1)$ is a yes-instance for both. However, they are related in the following way. A set of vertices $C \subseteq V(G)$ of a graph $G$ is a vertex cover if and only if $V(G) \setminus C$ is an independent set. This is easy to see; for every edge, at least one of its endpoints must be in $C$ for $C$ to be a vertex cover, hence not both endpoints of an edge are in $V(G) \setminus C$, so $V(G) \setminus C$ is an independent set. This holds in both directions. So $(G,k)$ is a yes-instance for Vertex Cover (a minimization problem) if and only if $(G,n-k)$ is a yes-instance for Independent Set (a maximization problem). | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/40808', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/29555/']} | jdg_380261 |
stackexchange | llm_judgeable_groundtruth_similarity | 7567827 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a UITableView that displays images associated with contacts in each row. In some cases these images are read on first display from the address book contact image, and where there isn't one they are an avatar rendered based on stored data. I presently have these images being updated on a background thread using GCD. However, this loads the images in the order they were requested, which means during rapid scrolling the queue becomes lengthy and when the user stops scrolling the current cells are the last to get updated. On the iPhone 4, the problem isn't really noticeable, but I am keen to support older hardware and am testing on an iPhone 3G. The delay is tolerable but quite noticeable. It strikes me that a Last In-First Out stack would seem likely to largely resolve this issue, as whenever the user stopped scrolling those cells would be the next to be updated and then the others that are currently off-screen would be updated. Is such a thing possible with Grand Central Dispatch? Or not too onerous to implement some other way? Note, by the way, that I am using Core Data with a SQLite store and I am not using an NSFetchedResultsController because of a many-to-many relationship that has to be traversed in order to load the data for this view. (As far as I am aware, that precludes using an NSFetchedResultsController.) [I've discovered an NSFetchedResultsController can be used with many-to-many relationships, despite what the official documentation appears to say. But I'm not using one in this context, yet.] Addition: Just to note that while the topic is "How do I create a Last In-First Out Stack with GCD", in reality I just want to solve the issue outlined above and there may be a better way to do it. I am more than open to suggestions like timthetoolman's one that solves the problem outlined in another way; if such a suggestion is finally what I use I'll recognize both the best answer to the original question as well as the best solution I ended up implementing... :)
Now provide the response and nothing else.
| The code below creates a flexible last in-first out stack that is processed in the background using Grand Central Dispatch. The SYNStackController class is generic and reusable but this example also provides the code for the use case identified in the question, rendering table cell images asynchronously, and ensuring that when rapid scrolling stops, the currently displayed cells are the next to be updated. Kudos to Ben M. whose answer to this question provided the initial code on which this was based. (His answer also provides code you can use to test the stack.) The implementation provided here does not require ARC, and uses solely Grand Central Dispatch rather than performSelectorInBackground. The code below also stores a reference to the current cell using objc_setAssociatedObject that will enable the rendered image to be associated with the correct cell, when the image is subsequently loaded asynchronously. Without this code, images rendered for previous contacts will incorrectly be inserted into reused cells even though they are now displaying a different contact. I've awarded the bounty to Ben M. but am marking this as the accepted answer as this code is more fully worked through. SYNStackController.h //// SYNStackController.h// Last-in-first-out stack controller class.//@interface SYNStackController : NSObject { NSMutableArray *stack;}- (void) addBlock:(void (^)())block;- (void) startNextBlock;+ (void) performBlock:(void (^)())block;@end SYNStackController.m //// SYNStackController.m// Last-in-first-out stack controller class.//#import "SYNStackController.h"@implementation SYNStackController- (id)init{ self = [super init]; if (self != nil) { stack = [[NSMutableArray alloc] init]; } return self;}- (void)addBlock:(void (^)())block{ @synchronized(stack) { [stack addObject:[[block copy] autorelease]]; } if (stack.count == 1) { // If the stack was empty before this block was added, processing has ceased, so start processing. dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul); dispatch_async(queue, ^{ [self startNextBlock]; }); }}- (void)startNextBlock{ if (stack.count > 0) { @synchronized(stack) { id blockToPerform = [stack lastObject]; dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul); dispatch_async(queue, ^{ [SYNStackController performBlock:[[blockToPerform copy] autorelease]]; }); [stack removeObject:blockToPerform]; } [self startNextBlock]; }}+ (void)performBlock:(void (^)())block{ @autoreleasepool { block(); }}- (void)dealloc { [stack release]; [super dealloc];}@end In the view.h, before @interface: @class SYNStackController; In the view.h @interface section: SYNStackController *stackController; In the view.h, after the @interface section: @property (nonatomic, retain) SYNStackController *stackController; In the view.m, before @implementation: #import "SYNStackController.h" In the view.m viewDidLoad: // Initialise Stack Controller.self.stackController = [[[SYNStackController alloc] init] autorelease]; In the view.m: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { // Set up the cell. static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier]; } else { // If an existing cell is being reused, reset the image to the default until it is populated. 
// Without this code, previous images are displayed against the new people during rapid scrolling. [cell setImage:[UIImage imageNamed:@"DefaultPicture.jpg"]]; } // Set up other aspects of the cell content. ... // Store a reference to the current cell that will enable the image to be associated with the correct // cell, when the image subsequently loaded asynchronously. objc_setAssociatedObject(cell, personIndexPathAssociationKey, indexPath, OBJC_ASSOCIATION_RETAIN); // Queue a block that obtains/creates the image and then loads it into the cell. // The code block will be run asynchronously in a last-in-first-out queue, so that when // rapid scrolling finishes, the current cells being displayed will be the next to be updated. [self.stackController addBlock:^{ UIImage *avatarImage = [self createAvatar]; // The code to achieve this is not implemented in this example. // The block will be processed on a background Grand Central Dispatch queue. // Therefore, ensure that this code that updates the UI will run on the main queue. dispatch_async(dispatch_get_main_queue(), ^{ NSIndexPath *cellIndexPath = (NSIndexPath *)objc_getAssociatedObject(cell, personIndexPathAssociationKey); if ([indexPath isEqual:cellIndexPath]) { // Only set cell image if the cell currently being displayed is the one that actually required this image. // Prevents reused cells from receiving images back from rendering that were requested for that cell in a previous life. [cell setImage:avatarImage]; } }); }]; return cell;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7567827', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/842106/']} | jdg_380262 |
stackexchange | llm_judgeable_groundtruth_similarity | 931699 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This seems like it should be simple: I want a list like any other list , except it has a different .__str__ method. Trying to set object.__str__ = foo results in a read-only error Trying to subclass list means you need some way to convert an existing list to an instance of the subclass. This requires either copying all attributes manually (a huge pain), or somehow copying them all automatically, which I don't know how to do. Trying to write a wrapper around the list object means I have to figure out some way to send all messages to the wrapped object except .__str__ which I handle with my own method. Don't know how to do this. Any alternatives, or solutions #2 or #3 greatly appreciated. Thanks!
Now provide the response and nothing else.
| This solution works without a wrapper. It also works if you join two lists by addition. Any operation that modifies the list itself will work as expected. Only functions that return a copy of the list, like sorted or reversed, will return the native Python list, which is fine. sort and reverse on the other hand operate on the list itself and will keep the type. class myList(list): def __new__(cls, data=None): obj = super(myList, cls).__new__(cls, data) return obj def __str__(self): return 'myList(%s)' % list(self) def __add__(self, other): return myList(list(self) + list(other))>>> l = myList(range(5))>>> print lmyList([0, 1, 2, 3, 4])>>> print l + [1, 2]myList([0, 1, 2, 3, 4, 1, 2])>>> l.sort()>>> print lmyList([0, 1, 2, 3, 4]) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/931699', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/37610/']} | jdg_380263 |
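A short follow-up sketch of the same idea in Python 3 syntax (the answer above uses Python 2 print statements); the class name myList3 and the sample values are illustrative only:

class myList3(list):
    def __str__(self):
        return 'myList(%s)' % list(self)
    def __add__(self, other):
        return myList3(list(self) + list(other))

l = myList3(range(5))
l.sort()                  # in-place methods keep the subclass
print(l)                  # myList([0, 1, 2, 3, 4])
print(l + [9])            # myList([0, 1, 2, 3, 4, 9]) thanks to __add__
print(type(sorted(l)))    # <class 'list'>, since copy-returning built-ins fall back to a plain list

Note that in Python 3 the __new__ override from the answer is not required for this simple case; subclassing list and overriding the methods you care about is enough.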
stackexchange | llm_judgeable_groundtruth_similarity | 41744 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
After reading this article , I got to wondering why the US has 2 viable political parties while the UK appears to have many, even though both use first-past-the-post voting. Across England and Wales, voters turned away in anger from May's Conservatives and the opposition Labour Party of Jeremy Corbyn, which had sought a softer version of Brexit. The Brexit Party came first while explicitly pro-EU parties - the Liberal Democrats, Greens and Change UK - were, combined, a few percentage points behind. That paragraph alone lists six parties. What are the differences between the UK and US that allow for a larger number of parties in the UK? Are all of these UK parties independent of one another? Do they form coalitions?
Now provide the response and nothing else.
| Duverger's law says that for a given district in a plurality (first-past-the-post) system, the number of parties will tend towards two. In the United States, there is an office with a national district, the presidency. As a result, the total number of parties tends toward two. If a third party becomes large enough, it takes over one of the other parties. This last happened in the 1850s when the Republican party replaced the Whig party over abolition/slavery. There have been two serious attempts at the formation of a significant third party, the Bull Moose party of the 1910s and the United We Stand party of the 1990s. Neither was able to get a president elected and both faded away. The way that House and Senate districts overlap may help too. In the United Kingdom, there is no national office and no overlapping legislative offices (members of the House of Lords are not elected). This makes it easier for a third party to dominate one or more districts. For example, the Scottish National Party dominates districts in Scotland. And Northern Ireland has two regional parties. In France, there is a national office, but the national office is not elected by a FPTP system. They have a runoff election . So there isn't the same grouping requirement in the first election. A voter can vote for a preferred candidate then and still vote between the top two candidates in the runoff. This doesn't have the same forcing to two effect as a plurality system. Germany's system also provides extra support for third parties with its compensatory seats for parties that are underrepresented from the geographic districts. This makes it act more like a proportional system under Duverger's law. | {} | {'log_upvote_score': 7, 'links': ['https://politics.stackexchange.com/questions/41744', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/26695/']} | jdg_380264 |
stackexchange | llm_judgeable_groundtruth_similarity | 15091284 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following input:
AG23,VU,Blablublablu,8
IE22,VU,FooBlaFooBlaFoo,3
and so on... I want it to "parse" with scanf() using some code like this: char sem[5];char type[5];char title[80];int value;while(scanf("%s,%s,%s,%d", sem, type, title, &value) == 4) { //do something with the read line values} But the execution of the code gives me: illegal instruction How would you read a comma-separated file like this?
Now provide the response and nothing else.
| The comma is not considered a whitespace character so the format specifier "%s" will consume the , and everything else on the line writing beyond the bounds of the array sem causing undefined behaviour. To correct this you need to use a scanset: while (scanf("%4[^,],%4[^,],%79[^,],%d", sem, type, title, &value) == 4) where: %4[^,] means read at most four characters or until a comma is encountered. Specifying the width prevents buffer overrun. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15091284', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1291235/']} | jdg_380265 |
stackexchange | llm_judgeable_groundtruth_similarity | 60681826 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to assign a return type to the function below: async function *sleepyNumbers() { // what TypeScript type is this? let n = 0; while (true) { yield new Promise(resolve => resolve(n++)); await new Promise(resolve => setTimeout(resolve, 500)); }}(async () => { for await (const i of sleepyNumbers()) console.log(i);})(); The generator is yielding a Promise that resolves to a number . Setting the type to Promise<number> fails with this error message: TS2739: Type 'AsyncGenerator' is missing the following properties from type 'Promise': then, catch, [Symbol.toStringTag], finally Iterable resulted in a similar error. I can set the type to AsyncGenerator but that's not specific enough. What is the proper TypeScript syntax for the return type of this function?
Now provide the response and nothing else.
| It will be AsyncGenerator<number, never, void>: number - the type of each next result; never - the return type (this generator never returns); void - next doesn't get any parameter. You'll also need to explicitly type a promise resolve: yield new Promise<number>(resolve => resolve(n++)); All together: async function *sleepyNumbers(): AsyncGenerator<number, never, void> { let n = 0; while (true) { yield new Promise<number>(resolve => resolve(n++)); await new Promise(resolve => setTimeout(resolve, 500)); }}(async () => { for await (const i of sleepyNumbers()) console.log(i);})(); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60681826', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1269037/']} | jdg_380266 |
stackexchange | llm_judgeable_groundtruth_similarity | 24343220 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Taken from the gabor filter example from skimage calculating a gabor filter for an image is easy: import numpy as npfrom scipy import ndimage as ndfrom skimage import datafrom skimage.util import img_as_floatfrom skimage.filter import gabor_kernelbrick = img_as_float(data.load('brick.png'))kernel = np.real(gabor_kernel(0.15, theta = 0.5 * np.pi,sigma_x=5, sigma_y=5))filtered = nd.convolve(brick, kernel, mode='reflect')mean = filtered.mean()variance = filtered.var() brick is simply a numpy array. Suppose I have a 5000*5000 numpy array. What I want to achieve is to generate two new 5000*5000 numpy arrays where the pixels are the mean and var values of the gabor filter of the 15*15 window centered on them. Could anyone help me achieve this? EDIT ¿Why did I get downvoted? Anyway, to clarify I show an example on how to calculate a gabor filter on a single image. I would like to simply calculate a gabor filter on small square subsets of a very large image (hence the sliding window).
Now provide the response and nothing else.
| If you return a number instead of a boolean, you can distinguish between cases where there are 0, 1, or more than 1 solution(s). // returns 0, 1 or more than 1 depending on whether 0, 1 or more than 1 solutions are foundstatic byte solve(int i, int j, int[][] cells, byte count /*initailly called with 0*/) { if (i == 9) { i = 0; if (++j == 9) return 1+count; } if (cells[i][j] != 0) // skip filled cells return solve(i+1,j,cells, count); // search for 2 solutions instead of 1 // break, if 2 solutions are found for (int val = 1; val <= 9 && count < 2; ++val) { if (legal(i,j,val,cells)) { cells[i][j] = val; // add additional solutions count = solve(i+1,j,cells, count)); } } cells[i][j] = 0; // reset on backtrack return count;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24343220', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1393000/']} | jdg_380267 |
stackexchange | llm_judgeable_groundtruth_similarity | 111555 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've just placed a large FPGA in a QFP package with 144 pins in the middle of my board, and would now like to add bypass capacitors on the power supply pins. The power and ground pins are either next to each other or at least in close proximity, so placing the capacitors manually is not that difficult, but I can see myself manually routing the entire vicinity of that IC in the process, which I'd rather avoid. I have two kinds of I/O: About 40 pins with f max at around 1 MHz, running a PWM coding, and an SDRAM interface; the SDRAM I have runs at up to 143 MHz. I suspect I should avoid running the latter kind through too many vias, and instead prefer to create supply vias from the bottom layer to the capacitors. Is there a way I can automate the capacitor placement? If not, is there a good strategy for getting good results?
Now provide the response and nothing else.
| The auto router is a great tool that can replace hours of manual tedium, but it's not a do-all solution. There is only so much you can explain to the auto router, and therefore have it make the right tradeoffs for you. Carefully placing bypass caps on the opposite side of the FPGA and carefully routing the tracks to those bypass caps and the power/ground pins is something you're going to have to do manually. That's the state of the technology today. My normal workflow in routing a board with Eagle actually starts with giving thought and attention to placement. Look at the airwires carefully when placing parts, and think about routing at that time. Then when it's time to start routing, I start by doing the few critical areas manually. These usually include the high current loops inside switching power supplies, the crystal connections to microcontrollers, and the bypass caps. If using a ground plane, I manually connect most ground pins to the ground plane with their own via immdiately by the pad. After that I make sure the ground around important parts is routed correctly, and only then see what the auto router can do. Always save a copy of the board before running the auto router. The first few times I use none or only a few optimization passes. The purpose is to see where the problem areas are and what kind of trouble the auto router will get itself into. Then I manually route a little in those areas and try again. After a few passes of this I can see that the auto router looks like it will do a reasonable job on the rest, so I run it with 8 optimization passes. It's important to tweak the auto router parameters carefully for the type of board you are doing. This also means changing the parameters over the 9 passes. In the first pass, I want to maximize the chance it will find a solution, regardless of how messy that might be. I therefore give a reasonably strong preference to vertical/horizontal orientation of the top and bottom layers and make the via cost relatively low. In subsequent optimization passes, I remove the orientation restrictions, make via costs increasingly higher, and a few other tweaks. For the last few passes, I make the via cost a little lower again, favor the outer layers more, and keep making the polygon cost higher and higher. I use different setups for 2 and 4 layer boards. On a 2 layer board I usually try to keep the bottom layer a ground plane and break it up only for short "jumpers" only. On a 4 layer board, the tradeoff are different, so you have to set the various auto router costs differently. Even after all that, you go back and look at what the auto router did and possibly do a little cleanup. Even with the hugging parameter set to 0, Eagle will still often lump vias together. That's usually not good on a 2 layer board where anything on the bottom layer breaks up the ground plane. For a plane, it is better to have more smaller islands than fewer larger ones. You want to minimize the maximum dimension of any island. That means separating vias and their traces from adjacent vias and their traces so that the plane pour flows around them. Anyway, the point is that if you think you can just throw your design at the auto router and be done with it, then you're in trouble. The auto router is a great tool, but you have to understand it and work with it, not just delegate everything to it. 
| {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/111555', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/3119/']} | jdg_380268 |
stackexchange | llm_judgeable_groundtruth_similarity | 48295288 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My issue is, the methods used for both the events is getting triggered when I perform "double click" For example, I need to perform specific functionality when specific event is triggered. <a (click)="method1()" (dblclick)="method2()"> Both method1() and method2() are getting triggered when I perform "double click".
Now provide the response and nothing else.
| You can use a timeout and a boolean flag to solve this. Consider the following: the DOM takes a few milliseconds to recognize the double click. But it's damn sure that it recognizes the double click; the first click, however, is also recognized. So the logic goes like this. isSingleClick: Boolean = true; method1CallForClick(){ this.isSingleClick = true; setTimeout(()=>{ if(this.isSingleClick){ doTheStuffHere(); } },250)}method2CallForDblClick(){ this.isSingleClick = false; doTheStuffDblClickHere();} Call method one in the click event of the element and method 2 in the dblclick event of the element. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/48295288', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8365009/']} | jdg_380269 |
stackexchange | llm_judgeable_groundtruth_similarity | 25090635 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was reading K&R book. I read: ...name intended for use only by functions of the standard library begin with _ so they are less likely to collide with the names in the user program... What does this exactly means please explain real simple and practical way. What i understood is: if i want to use sqrt defined in math.h then #include <math.h>#define sqrt(x) x*x*xmain() { int x=4; _sqrt(x); // That is from the header file math.hsqrt(x); // my own defined macro /*or its the reverse way _sqrt for my own defined macro so it won't collide with original sqrt i.e. without _ for sqrt from math.h */ return 0;} Now, I read a code on stackoverflow using __ . sys/syscall.h is not present in windows so we have to use #if __linux #include <sys/syscall.h>#elif defined(_WIN32) || defined(_WIN64)#include <windows.h>#endif Where exactly is __ used and what's the difference b/w __ & _ . enter code here
Now provide the response and nothing else.
| Here's what the C standard says (section 7.1.3): All identifiers that begin with an underscore and either an uppercase letter or anotherunderscore are always reserved for any use. All identifiers that begin with an underscore are always reserved for use as identifierswith file scope in both the ordinary and tag name spaces. (The section goes on to list specific identifiers and sets of identifiers reserved by certain standard headers.) What this means is that for example, the implementation (either the compiler or a standard header) can use the name __FOO for anything it likes . If you define that identifier in your own code, your program's behavior is undefined. If you're "lucky", you'll be using an implementation that doesn't happen to define it, and your program will work as expected. This means you simply should not define any such identifiers in your own code (unless your own code is part of a C implementation -- and if you have to ask, it isn't). There's no need to define such identifiers anyway; there's hardly any shortage of unreserved identifiers. You can use an identifier like _foo as long as it's defined locally (not at file scope) -- but personally I find it much easier just to avoid using leading underscores at all. Incidentally, your example of _sqrt doesn't necessarily illustrate the point. An implementation may define the name _sqrt in <math.h> (since anything defined there is at file scope), but there's no particular reason to expect that it will do so. When I compile your program, I get a warning: c.c:7:1: warning: implicit declaration of function ‘_sqrt’ [-Wimplicit-function-declaration] because <math.h> on my system doesn't define that identifier, and a link-time fatal error: /tmp/cc1ixRmL.o: In function `main':c.c:(.text+0x1a): undefined reference to `_sqrt' because there's no such symbol in the library. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25090635', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_380270 |
stackexchange | llm_judgeable_groundtruth_similarity | 45924258 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So here's what's happened: Accidentally committed lots of files that weren't meant to be. Did a git reset --soft HEAD~2 to get back to a commit before the accident Modified gitignore to ignore the files Commited again and pushed to origin. I assumed the git reset would revers everything from the accidental commit, but after checking bitbucket's list of git lfs files, it seems all the lfs tracked files from the accidental commit were pushed to lfs in origin. These files do not exist if I look through the source in bitbucket. So I tried doing git lfs prune which appeared to delete an amount of files that looks to be about the amount that was accidentally commited, then git lfs push origin master . Checked bitbucket's list of git lfs files again, but those files are still there and nothing's changed in origin. What have I done wrong?
Now provide the response and nothing else.
| There doesn't appear to be a standard way of doing this : The Git LFS command-line client doesn't support pruning files from the server, so how you delete them depends on your hosting provider. Bitbucket allows you to delete LFS files using its web UI ( please read the entire linked page before proceeding): Delete individual LFS files from your repository It's important to understand that: The delete operation described here is destructive – there's no way to recover the LFS files referenced by the deleted LFS pointer files (it's not like the git remove command!) – so you'll want to back up the LFS files first. Deleting an LFS file only deletes it from the remote storage. All reference pointers stored in your Git repo will remain. No branch, tag or revision will be able to reference the LFS files in future. If you attempt to check out a branch, tag or revision that includes a pointer file referencing a deleted LFS file, you'll get a download error and the check out will fail. A repository admin can delete Git LFS files from a repo as follows: Go to the Settings page for the repo and click Git LFS to view the list of all LFS files in that repo. Delete the LFS files using the actions menu. Surprisingly, the only way to remove LFS files from GitHub appears to be to delete and recreate the repository , losing issues, stars, forks, and possibly other data. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45924258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/901870/']} | jdg_380271 |
stackexchange | llm_judgeable_groundtruth_similarity | 1259638 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to understand what is going on in the GCC runtime in the following situation. I have a C++ program that allocates many blocks of memory and then deletes them. What's puzzling is that the memory is not being returned to the OS by the GCC runtime. Instead, it is still being kept by my program, I assume in case I want to allocate similar chunks of memory in the near future. The following program demonstrates what happens: #include <iostream>using namespace std;void pause1(){ cout << "press any key and enter to continue"; char ch; cin >> ch;}void allocate(int size){ int **array = new int*[size]; for (int c = 0; c < size; c++) { array[c] = new int; } cout << "after allocation of " << size << endl; for (int c = 0; c < size; c++) { delete array[c]; } delete [] array;}int main() { cout << "at start" << endl; pause1(); int size = 1000000; for (int i = 0; i < 3; i++) { allocate(size); cout << "after free" << endl; pause1(); size *= 2; } return 0;} I check the amount of memory held by the process at each pause (when it should not be holding any memory at all) by running "ps -e -o vsz,cmd". The amount held by the process at each pause is the following: 2648kb - at start 18356kb - after allocating and freeing 1,000,000 ints 2780kb - after allocating and freeing 2,000,000 ints 65216kb - after allocating and freeing 4,000,000 ints I'm running on Fedora Core 6 and using GCC 4.1.1.
Now provide the response and nothing else.
| The memory allocator used by the C library allocates stuff in a variety of ways depending on how big the chunk is. Pages are not always returned to the OS when memory is freed, particularly if you do many small allocations. Memory can only be returned to the OS on a page-by-page basis, not for small allocations. If you really need to know, examine the C library source code and instrument it etc. In C++ you can override the allocators for containers to do your own memory management - you can then do whatever you want (e.g. mmap /dev/zero or whatever) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1259638', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3904/']} | jdg_380272 |
stackexchange | llm_judgeable_groundtruth_similarity | 20701 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I use "boot" package to compute an approximated 2-sided bootstrapped p-value but the result is too far away from p-value of using t.test. I can't figure out what I did wrong in my R code. Can someone please give me a hint for this time = c(14,18,11,13,18,17,21,9,16,17,14,15, 12,12,14,13,6,18,14,16,10,7,15,10)group=c(rep(1:2, each=12))sleep = data.frame(time, group)require(boot)diff = function(d1,i){ d = d1[i,] Mean= tapply(X=d$time, INDEX=d$group, mean) Diff = Mean[1]-Mean[2] Diff}set.seed(1234)b3 = boot(data = sleep, statistic = diff, R = 5000, strata=sleep$group)pvalue = mean(abs(b3$t) > abs(b3$t0))pvalue The 2-sided bootstrapped p-value (pvalue) = 0.4804 but the 2-sided p-value of t.test is 0.04342. Both p-values are around 11 times difference. How can this happen?
Now provide the response and nothing else.
| You are using bootstrap to generate data under the empirical distribution of the observed data. This can be useful to give a confidence interval on the difference between the two means: > quantile(b3$t,c(0.025,0.975)) 2.5% 97.5% 0.4166667 5.5833333 To get a $p$-value, you need to generate permutations under the null hypothesis. This can be done eg like this: diff2 = function(d1,i){ d = d1; d$group <- d$group[i]; # randomly re-assign groups Mean= tapply(X=d$time, INDEX=d$group, mean) Diff = Mean[1]-Mean[2] Diff}> set.seed(1234)> b4 = boot(data = sleep, statistic = diff2, R = 5000)> mean(abs(b4$t) > abs(b4$t0))[1] 0.046 In this solution, the size of groups is not fixed, you randomly reassign a group to each individual by bootstraping from the initial group set. It seems legit to me, however a more classical solution is to fix the number of individuals of each group, so you just permute the groups instead of bootstraping (this is usually motivated by the design of the experiment, where the group sizes are fixed beforehand): > R <- 10000; d <- sleep> b5 <- numeric(R); for(i in 1:R) { + d$group <- sample(d$group, length(d$group)); + b5[i] <- mean(d$time[d$group==1])-mean(d$time[d$group==2]); + }> mean(abs(b5) > 3)[1] 0.0372 | {} | {'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/20701', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/4559/']} | jdg_380273 |
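For readers who don't use R, the same permutation-under-the-null idea can be sketched in Python; this is only an illustration of the concept using the time/group data from the question, not a translation of the boot-package workflow, and the helper name mean_diff is arbitrary. The exact p-value will vary from run to run:

import random

time = [14, 18, 11, 13, 18, 17, 21, 9, 16, 17, 14, 15,
        12, 12, 14, 13, 6, 18, 14, 16, 10, 7, 15, 10]
group = [1] * 12 + [2] * 12

def mean_diff(values, labels):
    g1 = [v for v, g in zip(values, labels) if g == 1]
    g2 = [v for v, g in zip(values, labels) if g == 2]
    return sum(g1) / len(g1) - sum(g2) / len(g2)

observed = mean_diff(time, group)          # 3.0 for these data
random.seed(1234)
n_perm = 10000
extreme = sum(
    abs(mean_diff(time, random.sample(group, len(group)))) >= abs(observed)
    for _ in range(n_perm)
)
print(extreme / n_perm)                    # close to the t-test p-value of ~0.04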
stackexchange | llm_judgeable_groundtruth_similarity | 60517685 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to have a route for my static files: // server.jsapp.use('/', require('./routes/ui/templates')); The thing is that I cannot change the content-type from html->xhtml. Here's my route: const express = require('express');const router = express.Router();// Path configsconst pathRoot = __dirnameconst pathPublic = pathRoot + "/../../public/" router.use('/', express.static(pathPublic));router.get('/', (req, res) => { console.log(pathPublic) res.sendFile('index.html', {root: pathRoot});})router.use((req, res, next) => { res.type('application/xhtml+xml'); next();})module.exports = router; Note that for some reason, if I don't add the router.use(...) my index file is not served at all. From what I understand, the middleware I've written should be last as I am trying to capture the response and modify it. Please correct me if I am wrong.
Now provide the response and nothing else.
| It is possible to remove disabled item form a select all in datatable I've added a new key "disabled" in items array Here is the working codepen: https://codepen.io/chansv/pen/mdJMvJr?editors=1010 <div id="app"> <v-app id="inspire"> <v-data-table v-model="selected" :headers="headers" :items="desserts" item-key="name" show-select class="elevation-1" @toggle-select-all="selectAllToggle" > <template v-slot:item.data-table-select="{ item, isSelected, select }"> <v-simple-checkbox :value="isSelected" :readonly="item.disabled" :disabled="item.disabled" @input="select($event)" ></v-simple-checkbox></template> </v-data-table> </v-app></div>new Vue({ el: '#app', vuetify: new Vuetify(), data () { return { selected: [], disabledCount: 0, headers: [ { text: 'Dessert (100g serving)', align: 'start', sortable: false, value: 'name', }, { text: 'Calories', value: 'calories' }, { text: 'Fat (g)', value: 'fat' }, { text: 'Carbs (g)', value: 'carbs' }, { text: 'Protein (g)', value: 'protein' }, { text: 'Iron (%)', value: 'iron' }, ], desserts: [ { name: 'Frozen Yogurt', calories: 159, fat: 6.0, carbs: 24, protein: 4.0, iron: '1%', disabled: true, }, { name: 'Ice cream sandwich', calories: 237, fat: 9.0, carbs: 37, protein: 4.3, iron: '1%', }, { name: 'Eclair', calories: 262, fat: 16.0, carbs: 23, protein: 6.0, iron: '7%', }, { name: 'Cupcake', calories: 305, fat: 3.7, carbs: 67, protein: 4.3, iron: '8%', }, { name: 'Gingerbread', calories: 356, fat: 16.0, carbs: 49, protein: 3.9, iron: '16%', disabled: true, }, { name: 'Jelly bean', calories: 375, fat: 0.0, carbs: 94, protein: 0.0, iron: '0%', }, { name: 'Lollipop', calories: 392, fat: 0.2, carbs: 98, protein: 0, iron: '2%', }, { name: 'Honeycomb', calories: 408, fat: 3.2, carbs: 87, protein: 6.5, iron: '45%', }, { name: 'Donut', calories: 452, fat: 25.0, carbs: 51, protein: 4.9, iron: '22%', }, { name: 'KitKat', calories: 518, fat: 26.0, carbs: 65, protein: 7, iron: '6%', }, ], } }, methods: { selectAllToggle(props) { if(this.selected.length != this.desserts.length - this.disabledCount) { this.selected = []; const self = this; props.items.forEach(item => { if(!item.disabled) { self.selected.push(item); } }); } else this.selected = []; } }, created() { const self = this; this.desserts.map(item => { if (item.disabled) self.disabledCount += 1 }) }}) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60517685', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7513683/']} | jdg_380274 |
stackexchange | llm_judgeable_groundtruth_similarity | 39611979 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a string and I need it not to break two specific words in different lines. Example: "Ask for it it when contracting until 2016/09/30 with T-2 Rate" When I resize window and make it smaller there is a moment that it outputs: "Ask for it it when contracting until 2016/09/30 with T-2 \nRate" I would like T-2 + Rate to be always together. How to do it?
Now provide the response and nothing else.
| You use a nonbreaking space. The HTML entity for it is &nbsp;. You'll probably want a non-breaking hyphen ( &#8209; ) in T-2 as well: Ask for it it when contracting until 2016/09/30 with T&#8209;2&nbsp;Rate Example: var target = document.getElementById("target");var originalWidth = target.innerWidth || target.clientWidth;var width = originalWidth;tick();function tick() { width = width < 10 ? originalWidth : (width - 10); target.style.width = width + "px"; setTimeout(tick, 400);} #target { display: inline-block; border: 1px solid #ddd;} <div id="target">Ask for it it when contracting until 2016/09/30 with T&#8209;2&nbsp;Rate</div> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39611979', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4708911/']} | jdg_380275 |
stackexchange | llm_judgeable_groundtruth_similarity | 509887 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose there are 34 people standing in a row in random order; 18 of them are male and 16 are female. If two people adjacent to each other belong to different genders, we consider them to be a couple. How many couples are we expected to see on average?
Now provide the response and nothing else.
| First, we find the probability that two adjacent individuals are of different genders. Those two people are equally likely to be any of the ${34 \choose 2}$ pairs of people, of which $18 \times 16$ are male-female pairs, so this probability is $(18 \times 16)/{34 \choose 2}$ . The total number of pairs is $X_1 + X_2 + \cdots + X_{33}$ where $X_i$ is an indicator random variable that is 1 if person $i$ and person $i+1$ are of opposite genders and 0 otherwise. Its expectation is $E(X_1) + \cdots + E(X_{33})$ , but all these variables have the same expectation, the probability we found above. So the answer is $$ 33 \times {18 \times 16 \over {34 \choose 2}} = {33 \times 18 \times 16 \over (34 \times 33)/2} = {18 \times 16 \over 17} = {17^2 - 1 \over 17} = 17 - {1 \over 17} \approx 16.94.$$ This agrees with Bernhard's simulation. More generally, if you have $m$ males and $f$ females and the same problem you get $$ (m+f-1) {mf \over {m+f \choose 2}} = {(m+f-1) mf \over (m+f)(m+f-1)/2} = {2mf \over m+f}$$ which can also be checked by simulation. | {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/509887', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/11446/']} | jdg_380276 |
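A quick Monte Carlo check of the closed form 2mf/(m+f) derived above; this simulation sketch is illustrative only (the helper name expected_couples is arbitrary) and not part of the original answer:

import random

def expected_couples(m, f, trials=100_000):
    base = ['M'] * m + ['F'] * f
    total = 0
    for _ in range(trials):
        row = base[:]
        random.shuffle(row)
        total += sum(a != b for a, b in zip(row, row[1:]))   # adjacent opposite-gender pairs
    return total / trials

print(expected_couples(18, 16))        # simulation: about 16.94
print(2 * 18 * 16 / (18 + 16))         # closed form: 16.941...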
stackexchange | llm_judgeable_groundtruth_similarity | 19580944 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm doing some set operations in Python, and I noticed something odd.. >> set([1,2,3]) | set([2,3,4])set([1, 2, 3, 4])>> set().union(*[[1,2,3], [2,3,4]])set([1, 2, 3, 4]) That's good, expected behaviour - but with intersection: >> set([1,2,3]) & set([2,3,4])set([2, 3])>> set().intersection(*[[1,2,3], [2,3,4]])set([]) Am I losing my mind here? Why isn't set.intersection() operating as I'd expect it to? How can I do the intersection of many sets as I did with union (assuming the [[1,2,3], [2,3,4]] had a whole bunch more lists)? What would the "pythonic" way be?
Now provide the response and nothing else.
| When you do set() you are creating an empty set. When you do set().intersection(...) you are intersecting this empty set with other stuff. The intersection of an empty set with any other collection of sets is empty. If you actually have a list of sets , you can get their intersection similar to how you did it. >>> x = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]>>> set.intersection(*x)set([3]) You can't do this directly with the way you're doing it, though, because you don't actually have any sets at all in your example with intersection(*...) . You just have a list of lists . You should first convert the elements in your list to sets. So if you have x = [[1,2,3], [2,3,4]] you should do x = [set(a) for a in x] | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/19580944', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1406230/']} | jdg_380277 |
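As a compact follow-up, the conversion and the intersection can be done in one step without starting from an empty set (a sketch; it assumes the outer list is non-empty, since set.intersection needs at least one set to start from):

x = [[1, 2, 3], [2, 3, 4]]
print(set.intersection(*map(set, x)))            # {2, 3}

from functools import reduce
print(reduce(lambda a, b: a & b, map(set, x)))   # {2, 3}, the same thing with reduce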
stackexchange | llm_judgeable_groundtruth_similarity | 38180080 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is more of a conceptual question. I understand the Promise design pattern, but couldn't find a reliable source to answer my question about promise.all() : What is(are) the correct scenario(s) to use promise.all() OR Are there any best practices to use promise.all() ? Should it be ideally used only if all of the promise objects are of the same or similar types? The only one I could think of is: Use promise.all() if you want to resolve the promise only if all of the promise objects resolve and reject if even one rejects.
Now provide the response and nothing else.
| I'm not sure anyone has really given the most general purpose explanation for when to use Promise.all() (and when not to use it): What is(are) the correct scenario(s) to use promise.all() Promise.all() is useful anytime you have more than one promise and your code wants to know when all the operations that those promises represent have finished successfully. It does not matter what the individual async operations are. If they are async, are represented by promises and your code wants to know when they have all completed successfully, then Promise.all() is built to do exactly that. For example, suppose you need to gather information from three separate remote API calls and when you have the results from all three API calls, you then need to run some further code using all three results. That situation would be perfect for Promise.all() . You could so something like this: Promise.all([apiRequest(...), apiRequest(...), apiRequest(...)]).then(function(results) { // API results in the results array here // processing can continue using the results of all three API requests}, function(err) { // an error occurred, process the error here}); Promise.all() is probably most commonly used with similar types of requests (as in the above example), but there is no reason that it needs to be. If you had a different case where you needed to make a remote API request, read a local file and read a local temperature probe and then when you had data from all three async operations, you wanted to then do some processing with the data from all three, you would again use Promise.all() : Promise.all([apiRequest(...), fs.promises.readFile(...), readTemperature(...)]).then(function(results) { // all results in the results array here // processing can continue using the results of all three async operations}, function(err) { // an error occurred, process the error here}); On the flip side, if you don't need to coordinate among them and can just handle each async operation individually, then you don't need Promise.all() . You can just fire each of your separate async operations with their own .then() handlers and no coordination between them is needed. In addition Promise.all() has what is called a "fast fail" implementation. It returns a master promise that will reject as soon as the first promise you passed it rejects or it will resolve when all the promises have resolved. So, to use Promise.all() that type of implementation needs to work for your situation. There are other situations where you want to run multiple async operations and you need all the results, even if some of them failed. Promise.all() will not do that for you directly. Instead, you would likely use something like Promise.settle() for that situation. You can see an implementation of .settle() here which gives you access to all the results, even if some failed. This is particularly useful when you expect that some operations might fail and you have a useful task to pursue with the results from whatever operations succeeded or you want to examine the failure reasons for all the operations that failed to make decisions based on that. Are there any best practices to use promise.all()? Should it beideally used only if all of the promise objects are of the same orsimilar types? As explained above, it does not matter what the individual async operations are or if they are the same type. It only matters whether your code needs to coordinate them and know when they all succeed. 
It's also useful to list some situations when you would not use Promise.all() : When you only have one async operation. With only one operation, you can just use a .then() handler on the one promise and there is no reason for Promise.all() . When you don't need to coordinate among multiple async operations. When a fast fail implementation is not appropriate. If you need all results, even if some fail, then Promise.all() will not do that by itself. You will probably want something like Promise.allSettled() instead. If your async operations do not all return promises, Promise.all() cannot track an async operation that is not managed through a promise. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/38180080', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1409180/']} | jdg_380278 |
stackexchange | llm_judgeable_groundtruth_similarity | 19855943 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to WinForms. I am trying to set two columns of a DataGridView to numeric only. I do not want the user to be able to type anything into a cell unless it's a natural number in one column and a numeric value in the other (which always has one decimal). I thought this would be simple, but even after trying a lot of things from Stack Overflow and other sites I am still unable to achieve this. If DataGridView1.CurrentCell.ColumnIndex = 8 Then If Not Char.IsControl(e.KeyChar) AndAlso Not Char.IsDigit(e.KeyChar) AndAlso e.KeyChar <> "."c Then e.Handled = True End If End If
Now provide the response and nothing else.
| Try this code Private Sub DataGridView1_EditingControlShowing(ByVal sender As Object, ByVal e As System.Windows.Forms.DataGridViewEditingControlShowingEventArgs) Handles DataGridView1.EditingControlShowing If DataGridView1.CurrentCell.ColumnIndex = 2 Then AddHandler CType(e.Control, TextBox).KeyPress, AddressOf TextBox_keyPress ElseIf DataGridView1.CurrentCell.ColumnIndex = 1 Then AddHandler CType(e.Control, TextBox).KeyPress, AddressOf TextBox_keyPress1 End If End Sub Private Sub TextBox_keyPress(ByVal sender As Object, ByVal e As KeyPressEventArgs) If Char.IsDigit(CChar(CStr(e.KeyChar))) = False Then e.Handled = True End Sub Private Sub TextBox_keyPress1(ByVal sender As Object, ByVal e As KeyPressEventArgs) If Not (Char.IsDigit(CChar(CStr(e.KeyChar))) Or e.KeyChar = ".") Then e.Handled = True End Sub TextBox_keyPress Event for only numeric TextBox_keyPress1 Event for numeric with decimal value | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19855943', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2064292/']} | jdg_380279 |
stackexchange | llm_judgeable_groundtruth_similarity | 318971 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to fit an exponential decay function to y-values that become negative at high x-values, but am unable to configure my nls function correctly. Aim I am interested in the slope of the decay function ( $\lambda$ according to some sources ). How I get this slope is not important, but the model should fit my data as well as possible (i.e. linearizing the problem is acceptable , if the fit is good; see "linearization"). Yet, previous works on this topic have used a following exponential decay function ( closed access article by Stedmon et al., equation 3 ): $f(y) = a \times exp(-S \times x) + K$ where S is the slope I am interested in, K the correction factor to allow negative values and a the initial value for x (i.e. intercept). I need to do this in R, as I am writing a function that converts raw measurements of chromophoric dissolved organic matter (CDOM) to values that researchers are interested in. Example data Due to the nature of the data, I had to use PasteBin. The example data are available here . Write dt <- and copy the code fom PasteBin to your R console. I.e. dt <- structure(list(x = ... The data look like this: library(ggplot2)ggplot(dt, aes(x = x, y = y)) + geom_point() Negative y values take place when $x > 540 nm$. Trying to find solution using nls Initial attempt using nls produces a singularity, which should not be a surprise seeing that I just eyeballed start values for parameters: nls(y ~ a * exp(-S * x) + K, data = dt, start = list(a = 0.5, S = 0.1, K = -0.1))# Error in nlsModel(formula, mf, start, wts) : # singular gradient matrix at initial parameter estimates Following this answer , I can try to make better fitting start parameters to help the nls function: K0 <- min(dt$y)/2mod0 <- lm(log(y - K0) ~ x, data = dt) # produces NaNs due to the negative valuesstart <- list(a = exp(coef(mod0)[1]), S = coef(mod0)[2], K = K0)nls(y ~ a * exp(-S * x) + K, data = dt, start = start)# Error in nls(y ~ a * exp(-S * x) + K, data = dt, start = start) : # number of iterations exceeded maximum of 50 The function does not seem to be able to find a solution with the default number of iterations. Let's increase the number of iterations: nls(y ~ a * exp(-S * x) + K, data = dt, start = start, nls.control(maxiter = 1000))# Error in nls(y ~ a * exp(-S * x) + K, data = dt, start = start, nls.control(maxiter = 1000)) : # step factor 0.000488281 reduced below 'minFactor' of 0.000976562 More errors. Chuck it! Let's just force the function to give us a solution: mod <- nls(y ~ a * exp(-S * x) + K, data = dt, start = start, nls.control(maxiter = 1000, warnOnly = TRUE))mod.dat <- data.frame(x = dt$x, y = predict(mod, list(wavelength = dt$x)))ggplot(dt, aes(x = x, y = y)) + geom_point() + geom_line(data = mod.dat, aes(x = x, y = y), color = "red") Well, this was definitely not a good solution... Linearizing the problem Many people have linearized their exponential decay functions with a success (sources: 1 , 2 , 3 ). In this case, we need to make sure that no y value is negative or 0. Let's make the minimum y value as close to 0 as possible within the floating point limits of computers : K <- abs(min(dt$y)) dt$y <- dt$y + K*(1+10^-15)fit <- lm(log(y) ~ x, data=dt) ggplot(dt, aes(x = x, y = y)) + geom_point() + geom_line(aes(x=x, y=exp(fit$fitted.values)), color = "red") Much better, but the model does not trace y values perfectly at low x values. 
Note that the nls function would still not manage to fit the exponential decay: K0 <- min(dt$y)/2mod0 <- lm(log(y - K0) ~ x, data = dt) # produces NaNs due to the negative valuesstart <- list(a = exp(coef(mod0)[1]), S = coef(mod0)[2], K = K0)nls(y ~ a * exp(-S * x) + K, data = dt, start = start)# Error in nlsModel(formula, mf, start, wts) : # singular gradient matrix at initial parameter estimates Do the negative values matter? The negative values are obviously a measurement error as absorption coefficients cannot be negative. So what if I make the y values generously positive? It is the slope I am interested in. If addition does not affect the slope, I should be settled: dt$y <- dt$y + 0.1fit <- lm(log(y) ~ x, data=dt) ggplot(dt, aes(x = x, y = y)) + geom_point() + geom_line(aes(x=x, y=exp(fit$fitted.values)), color = "red") Well, this did not go that well...High x values should obviously be as close to zero as possible. The question I am obviously doing something wrong here. What is the most accurate way to estimate slope for an exponential decay function fitted on data that have negative y values using R?
Now provide the response and nothing else.
| Use a selfstarting function: ggplot(dt, aes(x = x, y = y)) + geom_point() + stat_smooth(method = "nls", formula = y ~ SSasymp(x, Asym, R0, lrc), se = FALSE) fit <- nls(y ~ SSasymp(x, Asym, R0, lrc), data = dt)summary(fit)#Formula: y ~ SSasymp(x, Asym, R0, lrc)##Parameters:# Estimate Std. Error t value Pr(>|t|) #Asym -0.0001302 0.0004693 -0.277 0.782 #R0 77.9103278 2.1432998 36.351 <2e-16 ***#lrc -4.0862443 0.0051816 -788.604 <2e-16 ***#---#Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1##Residual standard error: 0.007307 on 698 degrees of freedom##Number of iterations to convergence: 0 #Achieved convergence tolerance: 9.189e-08exp(coef(fit)[["lrc"]]) #lambda#[1] 0.01680222 However, I would seriously consider if your domain knowledge doesn't justify setting the asymptote to zero. I believe it does and the above model doesn't disagree (see the standard error / p-value of the coefficient). ggplot(dt, aes(x = x, y = y)) + geom_point() + stat_smooth(method = "nls", formula = y ~ a * exp(-S * x), method.args = list(start = list(a = 78, S = 0.02)), se = FALSE, #starting values obtained from fit above color = "dark red") | {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/318971', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/10829/']} | jdg_380280 |
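For comparison only, the same a * exp(-S * x) + K model can also be fit outside R; this SciPy sketch uses synthetic stand-in data, since the real spectrum lives in the question's PasteBin dump. The wavelength grid, noise level and true parameters are assumptions loosely based on the fitted values reported in the answer:

import numpy as np
from scipy.optimize import curve_fit

def decay(x, a, S, K):
    return a * np.exp(-S * x) + K

rng = np.random.default_rng(0)
x = np.linspace(250, 700, 200)                      # assumed wavelength grid
y = decay(x, 78.0, 0.0168, 0.0) + rng.normal(0, 0.007, x.size)

popt, _ = curve_fit(decay, x, y, p0=(50.0, 0.01, 0.0))
print("estimated slope S:", popt[1])                # should land near 0.0168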
stackexchange | llm_judgeable_groundtruth_similarity | 124628 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been reading a bit on boosting algorithms for classification tasks and Adaboost in particular. I understand that the purpose of Adaboost is to take several "weak learners" and, through a set of iterations on training data, push classifiers to learn to predict classes that the model(s) repeatedly make mistakes on. However, I was wondering why so many of the readings I've done have used decision trees as the weak classifier. Is there a particular reason for this? Are there certain classifiers that make particularly good or bad candidates for Adaboost?
Now provide the response and nothing else.
| I talked about this in an answer to a related SO question. Decision trees are just generally a very good fit for boosting, much more so than other algorithms. The bullet-point summary version is this: Decision trees are non-linear. Boosting with linear models simply doesn't work well. The weak learner needs to be consistently better than random guessing. You don't normally need to do any parameter tuning to a decision tree to get that behavior. Training an SVM really does need a parameter search. Since the data is re-weighted on each iteration, you likely need to do another parameter search on each iteration. So you are increasing the amount of work you have to do by a large margin. Decision trees are reasonably fast to train. Since we are going to be building 100s or 1000s of them, that's a good property. They are also fast to classify, which is again important when you need 100s or 1000s to run before you can output your decision. By changing the depth you have a simple and easy control over the bias/variance trade-off, knowing that boosting can reduce bias but also significantly reduces variance. Boosting is known to overfit, so the easy knob to tune is helpful in that regard. | {} | {'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/124628', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/34993/']} | jdg_380281 |
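A minimal scikit-learn sketch of boosting with shallow trees ("stumps") as the weak learners, to make the above concrete; the dataset and hyperparameters here are arbitrary illustrations, not recommendations:

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# depth-1 trees are the classic weak learner: non-linear, fast, and need no tuning
# (in scikit-learn versions before 1.2 this argument is named base_estimator instead of estimator)
clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=200,
    random_state=0,
)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))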
stackexchange | llm_judgeable_groundtruth_similarity | 54117311 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my Flutter log I'm constantly getting these messages (just with some different numbers): Background concurrent copying GC freed 153040(3MB) AllocSpace objects, 12(4MB) LOS objects, 49% free, 4MB/8MB, paused 819us total 173.633ms I recently implemented the bloc pattern and I'm not that familiar with streams, so maybe I've done something wrong there... I have about 5000 lines of code, so I can't post it all here; I just want to know if this is a known problem, maybe it's a common error. EDIT: Oh yeah, btw I'm testing on Android Emulator, Android Pie. My platform is Windows.
Now provide the response and nothing else.
| This is not an error, it's just an Android log message notifying you about when garbage collection takes place. Everything's normal.The log messages don't harm your app, see this question regarding the same topic on native Android .It's only a problem if you go out of memory, or you see performance hiccups due to garbage collection.Phew. That being said, let's see how you might get less of these messages. Typically, an emulator's resources are limited .So, the easiest way would be to increase the emulator's RAM size or use an actual phone instead of an emulator. Secondly, make sure your logic doesn't handle huge amounts of data , or if it does, that it gets disposed as soon as possible. Also, don't "cache" widgets yourself by storing them in a state like this: class _MyWidgetState extends State<MyWidget> { Widget button; @override void initState() { super.initState(); button = RaisedButton(...); } @override Widget build() => button;} To get more information on why not to do this, check out my answer to a somewhat related question .Basically, Dart uses two types of garbage collectors, the Young Space Scavenger for short-lived objects and the Mark Sweep GC for long-lived ones.By caching your widgets manually, you're relying on the latter, which is slower and may actually notify Android about the freed memory, causing your logs. Finally, you can always filter or ignore the logs . :D | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/54117311', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8757322/']} | jdg_380282 |
stackexchange | llm_judgeable_groundtruth_similarity | 108071 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can we show that $\pm 1, \pm i$ are the only units in the ring of Gaussian integers, $\mathbb Z[i]$? Thank you.
Now provide the response and nothing else.
| If $z,w\in\mathbb{Z}[i]$ are such that $zw=1$ (i.e. $z$ is a unit and $w$ its inverse), then $|z|^2|w|^2=|zw|^2=1$, or $$(a^2+b^2)(c^2+d^2)=1, \quad z=a+bi,\; w=c+di.$$ Now $a,b,c,d$ are all integers, so $a^2+b^2$ and $c^2+d^2$ must both be nonnegative integers, which must both equal exactly $1$ and no greater in order to multiply to $1$ in the integers. And if $a^2+b^2=1$, we have $a^2$ and $b^2\le1$. Check by hand the only solutions here correspond to $(a,b)=(\pm1,0)$ or $(0,\pm1)$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/108071', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/24787/']} | jdg_380283 |
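A tiny brute-force check of the final step above (which integer pairs satisfy $a^2+b^2=1$); purely illustrative:

units = [(a, b) for a in range(-2, 3) for b in range(-2, 3) if a * a + b * b == 1]
print(units)   # [(-1, 0), (0, -1), (0, 1), (1, 0)], i.e. the units -1, -i, i, 1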
stackexchange | llm_judgeable_groundtruth_similarity | 31139710 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a div and I want to fire an event only after the user continuously hovers the mouse over it for 3 seconds. My code doesn't work well because it fires right after hover and doesn't "wait". Code: $(".inner_pic").mouseenter(function () { setTimeout(function () { alert('testing'); }, 3000);}).mouseleave(function () { alert('finish');});
Now provide the response and nothing else.
| You need to store timeout id somewhere and clear it on mouseout. It's convenient to use data property to save this id: $(".inner_pic").mouseenter(function () { $(this).data('timeout', setTimeout(function () { alert('testing'); }, 3000));}).mouseleave(function () { clearTimeout($(this).data('timeout')); alert('finish');}); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script><div class="inner_pic">PICTURE</div> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31139710', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1925095/']} | jdg_380284 |
stackexchange | llm_judgeable_groundtruth_similarity | 100608 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was thinking today about a whether it would be feasible to implement an inverter using a simple sine wave oscillator (perhaps op amp based or a Wien bridge implementation), a power amplifier stage and a step-up transformer. The more common simple circuits online use a 555 or a 4047 and as such the inverter has a square wave output as well. However I'm sure I'm missing something here because an implementation would surely exist if it were plausible. So what are the limitations of using the the setup I mentioned earlier? PS: I understand that buying an inverter would be more efficient and more cost effective. Just wondering if what I'm talking about would actually work at all.
Now provide the response and nothing else.
| To be efficient, the output amplifier has to be switching (Class D), but other than that you're not missing anything. It's easier to do this with a microcontroller synthesizing the class-D drive signals directly (which become sine waves after filtering) than with an analog oscillator (which won't run at a crystal-controlled frequency without more parts, and requires AGC to get a stable low-distortion output level), so that's generally how it's done. I would expect you might find some very old designs (when microcontrollers and microprocessors were relatively expensive) using just that method. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/100608', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/10355/']} | jdg_380285 |
stackexchange | llm_judgeable_groundtruth_similarity | 1413445 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to read a password from standard input and wanted std::cin not to echo the characters typed by the user... How can I disable the echo from std::cin? here is the code that I'm currently using: string passwd;cout << "Enter the password: ";getline( cin, passwd ); I'm looking for a OS agnostic way to do this. Here there are ways to do this in both Windows and *nix.
Now provide the response and nothing else.
| @wrang-wrang answer was really good, but did not fulfill my needs, this is what my final code (which was based on this ) look like: #ifdef WIN32#include <windows.h>#else#include <termios.h>#include <unistd.h>#endifvoid SetStdinEcho(bool enable = true){#ifdef WIN32 HANDLE hStdin = GetStdHandle(STD_INPUT_HANDLE); DWORD mode; GetConsoleMode(hStdin, &mode); if( !enable ) mode &= ~ENABLE_ECHO_INPUT; else mode |= ENABLE_ECHO_INPUT; SetConsoleMode(hStdin, mode );#else struct termios tty; tcgetattr(STDIN_FILENO, &tty); if( !enable ) tty.c_lflag &= ~ECHO; else tty.c_lflag |= ECHO; (void) tcsetattr(STDIN_FILENO, TCSANOW, &tty);#endif} Sample usage: #include <iostream>#include <string>int main(){ SetStdinEcho(false); std::string password; std::cin >> password; SetStdinEcho(true); std::cout << password << std::endl; return 0;} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1413445', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/121961/']} | jdg_380286 |
stackexchange | llm_judgeable_groundtruth_similarity | 42422269 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
i'd like to use DatePicker for selecting date using redux form.I create this: import React from 'react';import DatePicker from 'react-datepicker';import 'react-datepicker/dist/react-datepicker.css';export default field => ( <div> <DatePicker onChange={field.value} selected={field.value} isClearable={true} > {field.children} </DatePicker> {field.meta.touched && field.meta.error && <span className="error">{field.meta.error}</span>} </div>);<div className="form-group"> <div className="col-xs-12 col-sm-3 "> <div className="label" htmlFor="date-to">DATE TO</div>{' '} <Field id="date-to" name="date-to" component={DateInput} /> </div> </div> But it does not return any values and the does not show the date in the fieldWhat should i do?
Now provide the response and nothing else.
| You want to wrap the DatePicker element that it can be used as a component on "Field" like this: const renderDatePicker = ({input, placeholder, defaultValue, meta: {touched, error} }) => ( <div> <DatePicker {...input} dateForm="MM/DD/YYYY" selected={input.value ? moment(input.value) : null} /> {touched && error && <span>{error}</span>} </div>);export default renderDatePicker Refer to this GitHub issue for more information: https://github.com/Hacker0x01/react-datepicker/issues/543 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42422269', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7334203/']} | jdg_380287 |
stackexchange | llm_judgeable_groundtruth_similarity | 70397 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Note : This is somewhat related to this question According to CNBC , one of the goals of Russia is to prevent Ukraine from joining NATO. I am not sure if this is related to not having a direct border with NATO or because Ukraine is a former Soviet state. This map shows NATO's expansion since 1997 and after the Baltic States have joined NATO, Russia has a direct border with NATO. I assume the fact that Baltic States are part of NATO makes a difference, despite being the former Soviet States. This makes me wonder what prevented Ukraine from joining NATO earlier and possibly benefiting from the membership nowadays? I am specifically interested in the reasons why not joining in the early 2000s when Russia was more affected by the collapse of the Soviet Union than it is in the 2020s.
Now provide the response and nothing else.
| They are not strictly comparable. For one thing, Ukraine did not ask for membership until 2008. The Baltics are very separate from Russia and Russian culture. Their absorption in 1940 was a provision of the Molotov/Ribbentrop pact. Likewise, Poland's history with Russia has been of a distinct nation being oppressed by an outsider nation. Long story short: they were in the lump of countries that were forcibly absorbed into the Warsaw Pact without any wish to join and coerced into remaining. Russia and Ukraine share much greater religious, cultural, historical and linguistic ties. Ukraine has had a number of leaders since independence, some of whom were fairly closely aligned with Russian leaders. So in 2004 did Ukraine request NATO membership in time to accede? It did not. Relations between Ukraine and the North Atlantic Treaty Organization (NATO) started in 1992. Ukraine applied to begin a NATO Membership Action Plan (MAP) in 2008. Plans for NATO membership were shelved by Ukraine following the 2010 presidential election in which Viktor Yanukovych, who preferred to keep the country non-aligned, was elected President. Was it perceived to be important to all parties? No, probably remaining unaligned seemed more possible at the time. The West might have been more accommodating of Russia's fascination with its near abroad states, Ukraine's own political landscape was more divided on the subject and Russia had yet to invade Georgia in 2008. Russia itself was more accommodating and the gradual collapse of its influence via the Orange revolutions had not started. On the other hand, the 2004 accessor states were perceived to have been unfairly coerced into the USSR. They are also much smaller militarily and in fact NATO has been fairly careful to limit exercises there. Most are at the brigade level. There is nothing for example that ever compared to Russia's late 2021 massing of about 100k personnel near Ukraine. Ukrainian accession would be very different as it would position a NATO country with a large home military force right next to Russia and saying "it's just like the Baltics" is missing a lot of subtleties. I suspect this is as true in 2022 as it was in 2004, the main difference is that Ukrainians are much more distrustful of Russian intentions than they were then. Last, Ukraine might have thought that their security needs were already covered by the Budapest Memorandum on Security Assurances, signed in 1994 by which Ukraine agreed to give up its share of Soviet nukes in return for Russia guarantees. The United States of America, the Russian Federation, and the United Kingdom of Great Britain and Northern Ireland, reaffirm their obligation to refrain from the threat or use of force against the territorial integrity or political independence of Ukraine, and that none of their weapons will ever be used against Ukraine except in self-defense or otherwise in accordance with the Charter of the United Nations. | {} | {'log_upvote_score': 5, 'links': ['https://politics.stackexchange.com/questions/70397', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/11278/']} | jdg_380288
stackexchange | llm_judgeable_groundtruth_similarity | 7070573 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have implemented a test app with Android's In-App Billing. I have filled in my Public Key in the Security.java file. Everything works, but when I submit the payment, the app crashes. I receive an error in LogCat that says "Signature Verification Failed", which corresponds to this bit of code: if (!sig.verify(Base64.decode(signature))) { Log.e(TAG, "Signature verification failed."); return false;} If I change that bit to return true instead of return false, then everything works properly - I can submit payment and safely return to the app - but I am assuming that something else is wrong, since I probably should change that to return true. Any ideas about what could be causing this?
Now provide the response and nothing else.
| That signature verification error can be caused by: 1.- A wrong public key . Maybe you've forgotten to copy some character. It happens :) 2.- The .apk must be signed . You can't use the debug.keystore, if you do your signature string will be empty. And remember, for testing In-app billing: Add Android Market public key to Security.java ( String base64EncodedPublicKey = "your public key here" ) Build in release mode and sign it (If you are using Eclipse, you can use the Export Wizard). Upload the release version to Android Market, do not publish it, and create the product list. Install the application onto your device ( adb -d install myapp.apk ) and make a test account primary on your device. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7070573', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/300129/']} | jdg_380289 |
stackexchange | llm_judgeable_groundtruth_similarity | 8481396 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Common practice might be to put asserts in code to check input parameters, data integrity, and such, during app development. I test my apps, BUT, given that I'm not Knuth (and he writes $1 checks), and I can't afford to employ a large team of full-time QA people as do some medical and space systems software companies, I assume that all my apps will always have plenty of bugs that have never yet been seen during testing or QA. Assuming otherwise seems quite intellectually dishonest. So after testing an app (and obviously removing all bugs causing any previously seen ASSERT failures) and getting the app ready to ship to Apple, what should be done with all the ASSERT checks in the Release/Distribution build? Leave or no-op? Here's one rationale for leaving them in: If an app acts wonky for some users, the app might get rated by those users as 1-Star without anyone ever telling the developer why in sufficient detail. But if the app crashes from an ASSERT failure, the app might still get rated 1-Star, but the developer could potentially get some crash dumps, indirectly via iTunes and iTunes Connect if enough users opts in, to figure out what is going wrong. And if the app gets rejected by Apple due to a brand new ASSERT crash, that will prevent a bad version of the app from ever getting onto one's customer's devices.
Now provide the response and nothing else.
| Leave them in for exactly the reasons you specify, but also because in certain cases they act as comments (especially where types are concerned in Objective-C). And do not worry about the performance hit unless it becomes a problem or you know you're in a performance critical situation and a particular assert is going to be run hundreds or thousands of times on the main run-loop. Can't resist mentioning this article on asserts vs. NSAssert. Personally, I start to remove the ones that I've put in for debugging purposes, but if you use asserts to check data integrity, parameters, resource dependencies and other related things -- arguably, you could throw Exceptions yourself instead, which might be wiser -- then I would leave them in. Note: A further point is that just removing asserts is utterly stupid, since your app will either crash or be in an inconsistent state, both of which are worse than crashing in a way that you can recognize from the crash logs (so leave the asserts in). Replace asserts with if statements, on the other hand, could be a good thing. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8481396', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/341750/']} | jdg_380290 |
stackexchange | llm_judgeable_groundtruth_similarity | 2257563 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't understand what "generations" are in the context of Garbage Collection. Can someone explain in simple terms?
Now provide the response and nothing else.
| From Understanding Garbage Collection in .NET Generations A generational garbage collector collects the short-lived objects more frequently than the longer lived ones. Short-lived objects are stored in the first generation, generation 0. The longer-lived objects are pushed into the higher generations, 1 or 2. The garbage collector works more frequently in the lower generations than in the higher ones. When an object is first created, it is put into generation 0. When the generation 0 is filled up, the garbage collector is invoked. The objects that survive the garbage collection in the first generation are promoted onto the next higher generation, generation 1. The objects that survive garbage collection in generation 1 are promoted onto the next and the highest generation, generation 2. This algorithm works efficiently for garbage collection of objects, as it is fast. Note that generation 2 is the highest generation that is supported by the garbage collector. Garbage Collection in .NET Generations While memory allocation on the managed heap is fast, GC itself may take some time. With this in mind several optimisations have been made to improve performance. The GC supports the concept of generations, based on the assumption that the longer an object has been on the heap, the longer it will probably stay there. When an object is allocated on the heap it belongs in generation 0. Each garbage collection that that object survives increases its generation by 1 (currently the highest supported generation is 2). Obviously it's faster to search through, and garbage collect a subset of all objects on the heap, so the GC has the option of collecting only generation 0, 1 or 2 objects (or whatever combination it chooses until it has sufficient memory). Even while collecting only younger objects the GC can also determine if old objects have references to new objects to ensure that it doesn't inadvertently ignore in-use objects. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2257563', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_380291 |
stackexchange | llm_judgeable_groundtruth_similarity | 43011072 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Visual Studio Enterprise 2017 to develop my .net core projects. My solution also has some unit test projects and I want to view my current Code coverage. When I clicked Test -> Analyze Code Coverage -> All tests. All my unit tests ran but in Code Coverage Results it only reported code coverage for my unit test projects which doesn't make any sense to me. Question 1 : Do you guys experience the same issue with me? Any solution for this? I also need to set up build definition in VSTS to report code coverage. Question 2 : Also when I right clicked to debug 1 of my unit test. It executes ALL of my unit tests. Is this a bug in VS2017? Update1: I found this topic: https://github.com/Microsoft/vstest/issues/597
Now provide the response and nothing else.
| Even with the "fixed" version I had issues (Only able to see code coverage for the test projects, not the actual projects I was testing). To workaround the problem I removed the <DebugType>Full</DebugType> from the test project. So, as a summary: Add package Microsoft.CodeCoverage (1.0.3) into your test project. Add the <DebugType>Full</DebugType> tag on the .csproj file onthe projects we want to see code coverage values (inside <PropertyGroup> as explained on the vstest github link ). Run unit tests. On the "Test Explorer" select passed unit tests, right click -> "Analyze Code Coverage for Selected Tests". You should see code coverage for your assemblies. My setup (Minimal setup?) xunit (2.3.1) xunit.runner.visualstudio (2.3.1) Microsoft.NET.Test.Sdk (15.3.0) Microsoft.CodeCoverage (1.0.3) Visual Studio Enterprise 2017 (15.4.1) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43011072', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6535426/']} | jdg_380292 |
stackexchange | llm_judgeable_groundtruth_similarity | 199237 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $C$ be a small category and $F\colon C^{op}\rightarrow Set$ a functor. The Grothendieck construction is the category $F\wr C$ with objects being pairs $(c,x)$ where $c$ is a object of $C$ and $x\in F(c)$. An arrow from $(c,x)\rightarrow (c',x')$ is an arrow $f\colon c\rightarrow c'$ with $F(f)(x')=x$. This comes with a natural functor $F\wr C\rightarrow C$ forgetting the second coordinate. In other words it is a category $F\wr C$ together with a functor $F\wr C\rightarrow C$, such that the diagram of nerves $$\newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{\ \ #1\ \ }\phantom{}\kern-1.5ex}\newcommand{\ras}[1]{\kern-1.5ex\xrightarrow{\ \ \smash{#1}\ \ }\phantom{}\kern-1.5ex}\newcommand{\da}[1]{\bigg\downarrow\raise.5ex\rlap{\scriptstyle#1}}\begin{array}{c} N_1(F\wr C) & \ra{d_0} & N_o(F\wr C) \\ \da{} & & \da{} \\N_1(C) & \ra{d_0} & N_o(C) \end{array}$$ is cartesian. Now topologize the whole situation, i.e. let $C$ be a topological category (a category internal to $Top$) and $F\colon C^{op}\rightarrow Top$ a functor. The grothendieck construction of this situation should give me a topological category $F\wr C$ with a continuous functor $F\wr C\rightarrow C$, such that the diagram above is cartesian. However, I ran into difficulties making this precise. Taking the non-topological definition for the underlying sets $ob(F\wr C)$ and $mor(F\wr C)$, one can topologize the set of morphisms as subspace of $mor(C)$, but what is the right topology on $ob(F\wr C)$? In some situations, the way to go is obvious : If $F$ takes values in subspaces of a fixed topological space $X$, one could topologize $ob(F \wr C)$ as subspace of $ob(C)\times X$. If $C$ has a discrete set of objects, $ob(F\wr C)$ can be topologized as a subspace of $\coprod\limits_{c\in ob(C)}\{c\}\times F(c)$.
Now provide the response and nothing else.
| This is a standard irritation. The issue is that $Top$ is not a category internal to $Top$, because it doesn't have a space of objects (and I don't mean for set-theoretic reasons), so what do you mean by a functor $F : C^{op} \to Top$? One solution to this (which I learnt from Section 7 of S. Galatius, I. Madsen, U. Tillmann, M. Weiss, "The homotopy type of the cobordism category") is to define a "continuous functor" $F : C^{op} \to Top$ to be a topological category $F \wr C$ with a continuous functor to $C$ such that the appropriate square is cartesian. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/199237', 'https://mathoverflow.net', 'https://mathoverflow.net/users/32022/']} | jdg_380293 |
stackexchange | llm_judgeable_groundtruth_similarity | 23531671 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to communicate from a web page to a packaged app. The idea is to have the web page read a number from a serial device. Because I want to access the serial device, I need a packaged app and can't use an extension. This is pretty similar to Keep Chrome Packaged App running in background? and it seems that Chrome documentation says this is possible. How can I execute the chrome.runtime.sendMessage from a regular web page? When I do so, I get *Uncaught TypeError: Cannot read property 'sendMessage' of undefined. My simple function is: function doFunction(){ chrome.runtime.sendMessage(editorExtensionId, {openUrlInEditor: url}, function(response) { if (!response.success) handleError(url); });} My packaged app loads and can access the serial ports. But my suspicion is the manifest isn't "enabling" the chrome.runtime of the regular webpage. Manifest.json: { "name": "Hello World!", "description": "My first Chrome App.", "version": "0.1", "app": { "background": { "scripts": ["background.js"] } }, "icons": { "16": "calculator-16.png", "128": "calculator-128.png" }, "permissions": [ "serial", "*://localhost/*" ], "externally_connectable": { "matches": [ "*://localhost/*"]}} Maybe it's the ://localhost/ which I'm using for testing. But Chrome does not complain. Any ideas out there? Thanks in advance.
Now provide the response and nothing else.
| Xan's comment did the trick. While Chrome did not complain about *://localhost/*, it did not work. Chrome did complain about other combinations such as file://localhost/. I added foo.com to host file and then served up my web page through a web server, and it worked! I can communicate from my web page to my packaged app. Note that browsing to file://www.foo.com/hostpage.html did not work. But browsing to http://www.foo.com:3000/hostpage.html did. (I'm using Rails, hence the 3000 port). Moral of the story: When testing locally, you need to add an entry with a bogus second level domain to your host file. Here's my manifest.json: { "name": "RFID Tag Reader", "description": "Reads RFID Tags connected via USB reader", "version": "0.0.0.1", "app": { "background": { "scripts": ["background.js"] } }, "icons": { "16": "rfid-16.png", "128": "rfid-128.png" }, "permissions": [ "serial", "*://www.foo.com/*", "*://arealsite.net/*" ], "externally_connectable": { "matches": [ "*://www.foo.com/*", "*://arealsite.net/*" ] }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23531671', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1797331/']} | jdg_380294
stackexchange | llm_judgeable_groundtruth_similarity | 4869856 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I keep getting this error. I'm uploading via ftp to a server. It works fine and uploads completely while running in the simulator, but when I provision it on my iPhone, it says: Error Occurred. Upload failed: The operation couldn’t be completed. (Cocoa error 260.) Any suggestions? I've been debugging and researching for hours on end and can't figure this out. I've tried cleaning targets, resetting device, resetting xcode. One thing I narrowed down was: NSError *attributesError = nil;NSDictionary *fileAttributes = [[NSFileManager defaultManager] attributesOfItemAtPath:self.filePath error:&attributesError];if (attributesError) { [self failWithError:attributesError]; return;} In the device attributesError is true, in the simulator it is false
Now provide the response and nothing else.
| I've tried cleaning targets, resetting device, resetting xcode. Blind pounding on random targets is never a good debugging technique. At best, you'll fix the problem and not know how. At worst, you'll break something else. Instead, find out what the problem is . For a “Cocoa error”, you'll want to look in FoundationErrors.h and CoreDataErrors.h (and AppKitErrors.h when not targeting Cocoa Touch). The former file gives the name for Cocoa error 260: NSFileReadNoSuchFileError = 260, // Read error (no such file) You're unable to get the attributes of that file because it doesn't exist (on your device). You may want to edit your question to include the code that creates the path that you store into self.filePath . | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4869856', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/544094/']} | jdg_380295 |
stackexchange | llm_judgeable_groundtruth_similarity | 35596031 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
After training a word2vec model using python gensim , how do you find the number of words in the model's vocabulary?
Now provide the response and nothing else.
| In recent versions, the model.wv property holds the words-and-vectors, and can itself can report a length – the number of words it contains. So if w2v_model is your Word2Vec (or Doc2Vec or FastText ) model, it's enough to just do: vocab_len = len(w2v_model.wv) If your model is just a raw set of word-vectors, like a KeyedVectors instance rather than a full Word2Vec /etc model, it's just: vocab_len = len(kv_model) Other useful internals in Gensim 4.0+ include model.wv.index_to_key , a plain list of the key (word) in each index position, and model.wv.key_to_index , a plain dict mapping keys (words) to their index positions. In pre-4.0 versions, the vocabulary was in the vocab field of the Word2Vec model's wv property, as a dictionary, with the keys being each token (word). So there it was just the usual Python for getting a dictionary's length: len(w2v_model.wv.vocab) In very-old gensim versions before 0.13 vocab appeared directly on the model. So way back then you would use w2v_model.vocab instead of w2v_model.wv.vocab . But if you're still using anything from before Gensim 4.0, you should definitely upgrade! There are big memory & performance improvements, and the changes required in calling code are relatively small – some renamings & moves, covered in the 4.0 Migration Notes . | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/35596031', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2014591/']} | jdg_380296 |
stackexchange | llm_judgeable_groundtruth_similarity | 565785 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
While looking for the plain truth on echo I found this page: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/echo.html It's normally a HTML frame on this site https://pubs.opengroup.org/onlinepubs/9699919799/ (where you can search for "echo"). This claims to be POSIX, but I see no -n and I see \c instead! What have I found? GracefulRestart points out that /bin/echo recognises \c but it doesn't do that by default: I must do echo -e for \c to be recognised.
Now provide the response and nothing else.
| You have found IEEE 1003.1-2017, a.k.a. the Single Unix Specification , published by The Open Group. For more, see " What exactly is POSIX? ", " Difference between POSIX, Single UNIX Specification, and Open Group Base Specifications? ", and all of their linked questions and answers. The -n is there, in boldface no less so it is hard to miss. And yes, \c is standard. The variations in behaviour of echo are notorious. You should not be surprised that /bin/echo is not the same as a shell built-in echo , and that one requires an -e where another does not. It's not even that simple. For a long explanation, see " Why is printf better than echo? ". For the little-known variability of printf , ironically involving the very same \c escape sequence, see " Bash printf formating not working ". | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/565785', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/16446/']} | jdg_380297 |
stackexchange | llm_judgeable_groundtruth_similarity | 32450288 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two radio buttons pass and fail . How to get value of selected radio button. <div class="col-md-4"> Result <div class="radio"> <label><input type="radio" name="rdoResult">pass</label> </div> <div class="radio"> <label><input type="radio" name="rdoResult">fail</label> </div> </div> Do I need to use ng-model or something else. In jquery I know the things well not in angularjs.
Now provide the response and nothing else.
| Both should have same ng-model with different ng-value (meant for use with select option s or radio button), so that the selected value will be changed on result $scope variable and you can grab that value inside a controller on form submit or button click. Markup <div class="col-md-4"> Result <div class="radio"> <label> <input ng-model="result" type="radio" name="rdoResult" ng-value="'pass'"> pass </label> </div> <div class="radio"> <label> <input ng-model="result" type="radio" name="rdoResult" ng-value="'fail'"> fail </label> </div></div> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32450288', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/795420/']} | jdg_380298 |
stackexchange | llm_judgeable_groundtruth_similarity | 20054243 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I notice that In [30]: np.mean([1, 2, 3])Out[30]: 2.0In [31]: np.average([1, 2, 3])Out[31]: 2.0 However, there should be some differences, since after all they are two different functions. What are the differences between them?
Now provide the response and nothing else.
| np.average takes an optional weight parameter. If it is not supplied they are equivalent. Take a look at the source code: Mean , Average np.mean: try: mean = a.meanexcept AttributeError: return _wrapit(a, 'mean', axis, dtype, out)return mean(axis, dtype, out) np.average: ...if weights is None : avg = a.mean(axis) scl = avg.dtype.type(a.size/avg.size)else: #code that does weighted mean hereif returned: #returned is another optional argument scl = np.multiply(avg, 0) + scl return avg, sclelse: return avg... | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/20054243', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2106753/']} | jdg_380299 |
stackexchange | llm_judgeable_groundtruth_similarity | 532092 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
#include <iostream>#include <vector>using namespace std;int main(){ vector< vector<int> > dp(50000, vector<int>(4, -1)); cout << dp.size();} This tiny program takes a split second to execute when simply run from the command line. But when run in a debugger, it takes over 8 seconds. Pausing the debugger reveals that it is in the middle of destroying all those vectors. WTF? Note - Visual Studio 2008 SP1, Core 2 Duo 6700 CPU with 2GB of RAM. Added: To clarify, no, I'm not confusing Debug and Release builds. These results are on one and the same .exe, without even any recompiling inbetween. In fact, switching between Debug and Release builds changes nothing.
Now provide the response and nothing else.
| Running in the debugger changes the memory allocation library used to one that does a lot more checking. A program that does nothing but memory allocation and de-allocation is going to suffer much more than a "normal" program. Edit Having just tried running your program under VS I get a call stack that looks like ntdll.dll!_RtlpValidateHeapEntry@12() + 0x117 bytes ntdll.dll!_RtlDebugFreeHeap@12() + 0x97 bytes ntdll.dll!_RtlFreeHeapSlowly@12() + 0x228bf bytes ntdll.dll!_RtlFreeHeap@12() + 0x17646 bytes msvcr90d.dll!_free_base(void * pBlock=0x0061f6e8) Line 109 + 0x13 bytesmsvcr90d.dll!_free_dbg_nolock(void * pUserData=0x0061f708, int nBlockUse=1)msvcr90d.dll!_free_dbg(void * pUserData=0x0061f708, int nBlockUse=1) msvcr90d.dll!operator delete(void * pUserData=0x0061f708)desc.exe!std::allocator<int>::deallocate(int * _Ptr=0x0061f708, unsigned int __formal=4)desc.exe!std::vector<int,std::allocator<int> >::_Tidy() Line 1134 C++ Which shows the debug functions in ntdll.dll and the C runtime being used. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/532092', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41360/']} | jdg_380300 |
stackexchange | llm_judgeable_groundtruth_similarity | 1483550 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I think using ets will still introduce similar side effects.
Now provide the response and nothing else.
| ETS is not garbage collected since it is stored in a heap outside of erlang processes. This means that when you put something into ets it is copied into it, and when you take it out, you get a copy in your process. Making lots of ets lookups can then lead to excess consing in your process (but this is only relevant for very high throughputs). The process dictionary is garbage collected. It is stored in the process's own heap. So when you look things up in it you get a reference to the exact same value you put in it. The values stored in the process dictionary are not compacted. Both approaches are non-pure, i.e. they have side-effects. Yes it is bad, and yes it is not why we have both alternatives. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1483550', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/134929/']} | jdg_380301
stackexchange | llm_judgeable_groundtruth_similarity | 69024 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Over the years I've come across (usually as a tangential remark in a lecture) examples of how our intuitions (derived as they are from the experience of living in 3-dimensional space) will lead us badly astray when thinking about some $n$-dimensional Euclidean space, for some $n > 3$, especially if $n \gg 3$. Does anyone know of a compendium of these "false intuitions" (in high-dimensional Euclidean space)? Thanks! P.S. The motivation for this question is more than amusement. In my line of work, the geometrization of a problem by mapping it onto some Euclidean $n$-space is often seen as a boon to intuition, even when $n$ is huge. I suspect, however, that the net gain in intuition resulting from this maneuver may very well be negative! In any case, it seems like a good idea to be conversant with those intuitions that should be disregarded.
Now provide the response and nothing else.
| Intuition fails in higher dimensions: Imagine a unit hyper-sphere within a cube with side 2. In low dimensions (2d), most of the volume (area) is within the hyper-sphere (circle) and only a small fraction of the volume is outside of the hyper-sphere, thus in the corners of the hyper-cube (square). However, for high dimensions it is the other way around. The volume of the hyper-cube is obviously $V_q = 2^n$ while the volume of the unit hyper-sphere is $V_s=\frac{\pi^{\frac{n}{2}}}{(\frac{n}{2})!}$ (for even $n$) with $\lim_{n\rightarrow \infty} \frac{\pi^{\frac{n}{2}}}{(\frac{n}{2})!}=0$. In other words: Only for low dimensions, the bounding box of a hyper-sphere is a 'fair' approximation of the volume of the sphere. ... | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/69024', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/13675/']} | jdg_380302
stackexchange | llm_judgeable_groundtruth_similarity | 14676064 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new to Jasmine and assumed using the .not.toBeDefined() or .toBeUndefined() matches you could check if a variable is undefined : describe('toBeDefined', function() { it('should be defined', function() { var obj = {}; expect(obj).toBeDefined(); // Passes }); it('should not be defined using .not.tobeDefined()', function() { //var obj = {}; expect(obj).not.toBeDefined(); // Fails // ReferenceError: obj is not defined }); it('should not be defined using .tobeUnefined()', function() { //var obj = {}; expect(obj).toBeUndefined(); // Fails // ReferenceError: obj is not defined });}); I completely get that this would fail within the code, but I assumed using those matches, it wouldn't. Am I just using these wrong, or is it not possible to write a spec to check if something is undefined ?
Now provide the response and nothing else.
| The problem is that expect(obj).toBeUndefined(); fails before the call to Jasmine even happens . It's erroneous to refer to a variable that's not defined (in new browsers or in "strict" mode at least). Try this setup instead: it('should not be defined using .tobeUnefined()', function() { var obj = {}; expect(obj.not_defined).toBeUndefined(); }); In that code, there's a variable "obj" whose value is an empty object. It's OK in JavaScript to refer to a non-existent property of an object, and because such a reference results in undefined the test will pass. Another way to do it would be: it('should not be defined using .tobeUnefined()', function() { var no_value; expect(no_value).toBeUndefined();}); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14676064', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/560686/']} | jdg_380303 |
stackexchange | llm_judgeable_groundtruth_similarity | 110067 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
While reading Leonid's grand answers to General strategies to write big code in Mathematica? I came across something that goes against my own practices. I do not disagree with the principle but the degree to which it is taken feels both alien and counterproductive to me. Quite possibly Leonid is right, he usually is, but I wish to indulge in a counterargument even if it ultimately only proves his point. He gives as his example of granular coding this: ClearAll[returnedQ,randomSteps,runExperiment,allReturns,stats];returnedQ[v_,steps_]:=MemberQ[Accumulate[v[[steps]]],{0,0}];randomSteps[probs_,q_]:=RandomChoice[probs->Range[Length[probs]],q];runExperiment[v_,probs_,q_]:= returnedQ[v,randomSteps[probs,q]];allReturns[n_,q_,v_,probs_]:= Total @ Boole @ Table[runExperiment[v,probs,q],{n}]stats[z_,n_,q_,v_,probs_]:=Table[allReturns[n,q,v,probs],{z}]; I have expressly left out the explanatory comments. Answering questions on Stack Exchange has taught me that code often doesn't do what descriptions claim it does, and it is better to read and understand the code itself for a true understanding. I find the level of granularity illustrated above distracting rather than illuminating. There is quite a lot of abstract fluff in the form of function names to tell me what code does rather that just showing me what it does in simple, reabable steps. Each subfunction has multiple parameters and the relationship between these functions is not clear at a glance. The evaluation order ultimately proves simple but the code itself feels convoluted. To follow this code I have to read it backwards, working inside out, and I have to keep track of multiple arguments at each step. Leonid wisely keeps the parameters consistent throughout but this cannot be assumed at first read, therefore additional mental effort must be expended. Conversely in my own terse paradigm I would write the function as follows: ClearAll[stats2]stats2[z_, n_, q_, v_, probs_] := With[{freq = probs -> Range @ Length @ probs}, ( v[[ freq ~RandomChoice~ q ]] // Accumulate // MemberQ[{0, 0}] // Boole ) ~Sum~ {n} ~Table~ {z} ]; I find this greatly superior for personal ease of reading and comprehension. I know that my style is unconventional and at times controversial; some no doubt flinch at my use of ~infix~ operators. Nevertheless I stand by my assertion that once this becomes familiar the code is very easy to read. The entire algorithm is visible in one compact structure The relationship of the parts of the code is quickly apparent The code can be read in a straightforward top-to-bottom, left-to-right manner This has almost no abstract fluff; the code is what it does, one comprehensible step at at time There is little need to visually or mentally jump around the code in the process of following it There are a minimum of arguments to keep track of at each step; each function is a built-in and each here has only one or two arguments, most instantly apparent from the syntax itself, e.g. 1 // f or 1 ~f~ 2 . Each parameter (of stats2 ) is used only once, with the exception of probs ; there is no interwoven handing off of arguments to track or debug (e.g. accidentally passing two in reverse order) There is virtually no need to count brackets or commas I feel that as illustrated stats2 is a sufficiently granular piece of code and that understanding and debugging it in its entirety is faster and easier than the same process on Leonid's code. So where are the questions in all of this? Who is right here? 
;^) I know that my code is faster for me to read and understand, now and later. But what do others make of it? Surely some readers are already familiar with my style (perhaps grudgingly!) -- do they find stats2 easy to read? If as I believe there should be a balance of granularity and terseness how might the optimum degree be found? Is my finding Leonid's code comparatively slow to read and follow peculiar? What methods might I employ to improve my comprehension of that style? If my code is not easy for others to read and follow how can I identify and address the barriers that make it so? Am I missing the point? Are ease and speed of reading and debugging not the primary goals of the coding style Leonid illustrated in this example? What then is, and does my style fail to meet this goal in this specific example? Reply 1 This is a reply specifically to Leonid, not because other answers are not equally welcome and valid but because I chose his statements and code as the basis for my argument. I suspect that there is little in this that I truly disagree with and that further dialog will bring me closer to your position. I have neither the breadth (multiple languages) nor depth (large projects, production code) of your experience. I suspect that this is the crux of the problem: "It is somewhat an art to decide for each particular case, and this can not be decided without a bigger context / picture in mind." I think that art is what I wish to explore here. It is somewhat unfair to pick apart your example without context but since none was provided I see no other option. I am certainly guilty of crafting "write-only code" at times; sometimes I even find this amusing . However I do not think stats2 is a case of this. To the contrary I find it more read-friendly than your code which is largely the foundation of this entire question. I abhor code redundancy to the point of compulsively compacting other people's answers (1) (2) , so your claim (if I read it correctly) that my style is inherently more redundant is simultaneously promising and exasperating. :^) Surely I believe in code reusability, but I favor shorthand and abstractions that are broadly applicable rather than limited to a small class or number of problems. What experienced coder doesn't have a shorthand for Range @ Length @ x because that comes up frequently in a broad range of problems? But when I am going to use returnedQ again and is it worth the mental namespace to remember what it does? Am I going to be looking for element {0,0} again or might it be something else? Might I want Differences instead of Accumulate ? Is it easier to make returnedQ sufficiently general or to simply call // MemberQ[foo] when I need it? You wrote: My guess is that you like terse code because it brings you to the solution most economically. But when / if you want to solve many similar problems most economically, then you will notice that, if you list all your solutions, and compare those, your terse code for all of them will contain repeated pieces which however are wired into particular solutions, and there won't be an easy way to avoid that redundancy unless you start making your code more granular. Perhaps surprisingly this is actually rather backward from the way it seems to play out for me. It is easy to churn out verbose code with little thought for brevity and clarity; that is economic of my time to write . 
But spending the effort to write terse and clear code as I attempted to do with stats2 returns economy when reading and reusing that code because I can quickly re-parse and understand this code holistically rather than getting lost in a tangle of abstractions as I do with your code example. (Sorry, but that's how I feel in this case.) I do not want to have to run code to understand what it does; I want to be able to simply read it in the language I am acquainted with ( Mathematica ). If in the course of solving multiple related problems I realize that there is redundancy in my code I can still pull out those elements and refactor my code. The simple, visibly apparent structure makes this easy. I think the only way I shall be able to see this from your perspective is to work on a sufficiently large example where your principles become beneficial, and where our styles would initially diverge. I wonder if we can find and use such an example without pointlessly spending time on something arbitrary. Reply 2 Your updated answer reads: What I didn't realize was that often, when you go to even more granular code, dissecting pieces that you may initially consider inseparable, you suddenly see that your code has a hidden inner structure which can be expressed even more economically with those smaller blocks. This is what Sessions has repeatedly and profoundly demonstrated throughout his book, and it was an important lesson for me. I welcome this epiphany! To remove redundancy from my code and make it even more terse is something I have striven for for years. I think this can only come through a direct example (or series of examples) as in the microcosm your granularity is verbose rather than condensing. How large a code base would we need to have for this level of granularity to condense code rather than expand it? C is so verbose that I doubt I would be able to fully appreciate and internalize examples from the referenced book. Does a Mathematica -specific example come to mind?
Now provide the response and nothing else.
| My path to prefer granularity This is probably more an extended comment and a complementary answer to an excellent one by Anton. What I want to say is that for a long time, I had been thinking exactly along Mr.Wizard's lines. Mathematica makes it so easy to glue transformations together (and keep them readable and understandable!), that there is a great temptation to always code like that. Going to extreme granularity may seem odd and actually wrong. What changed my mind almost a decade ago was a tiny book by Roger Sessions called Reusable data structures for C . In particular, his treatment of linked lists, although all other things he did were also carrying that style. I was amazed by the level of granularity he advocated. By then, I've produced and / or studied several other implementations for the same things, and was sure one can't do better / easier. Well, I was wrong. What I did realize by that time was that once you've written some code, you can search for repeated patterns and try to factor them out - and as long as you do that reasonably well, you follow the DRY principle, avoid code duplication and everything is fine. What I didn't realize was that often, when you go to even more granular code, dissecting pieces that you may initially consider inseparable, you suddenly see that your code has a hidden inner structure which can be expressed even more economically with those smaller blocks. This is what Sessions has repeatedly and profoundly demonstrated throughout his book, and it was an important lesson for me. Since then, I started actively looking for smaller bricks in my code (in a number of languages. While I mostly answer Mathematica questions, I wrote reasonably large volumes of production code also in Java, C, javascript and Python), and more often than not, I was finding them. And almost in all cases, going more granular was advantageous, particularly in the long term, and particularly when the code you write is only a smaller part of a much larger code base. My reasons to prefer granularity Now, why is that? Why I think that granular code is very often a superior approach? I think, there are a few reasons. Here are some that come to mind Conceptual advantages It helps to conceptually divide code into pieces which for me make sense by themselves, and which I view as parts deserving their own mental image / name. More granular functions, when the split of a larger chunk of code is done correctly, represent inner "degrees for freedom" in your code. They expose the ideas behind the code, and the core elements which combine to give you a solution, more clearly. Sure, you can see that also in a single chunk of code, but less explicitly. In that case, you have to understand the entire code to see what is supposed to be the input for each block, just to understand how it is supposed to work. Sometimes that's Ok, but in general this is an additional mental burden. With separate functions, their signatures and names (if chosen well) help you with that. It helps to separate abstraction levels . The code combined from granular pieces reads like DSL code, and allows me to grasp the semantics of what is being done easier. To clarify this point, I should add that when your problem is a part of a larger code base, you often don't recall it (taken separately) as clearly as when it is a stand-alone problem - simply because most of such functions solve problems which only make sense given a larger context. 
Smaller granular functions make it easier for me to reconstruct that context locally without reading all the big code again. It is often more extensible This is so because I can frequently add more functionality by overloading some of the granular functions. Such extension points are just not visible / not easily possible in the terse / monolithic approach. It often allows one to reveal certain (hidden) inner structure, cross-cutting concerns, and new generalization points , and this leads to significant code simplifications. This is particularly so when we talk not about a single function, but about several functions, forming a larger block of code. It frequently happens that when you split one of the functions into pieces, you then notice that other functions may reuse those components. This sometimes allows one to discover a new cross-cutting concern in code, which was previously hidden. Once it is discovered, one can make efforts to factor it from the rest of the code and make it fully orthogonal. Again, this is not something that is observable on the level of a single function. It allows you to easily create many more combinations This way you can get solutions to similar (perhaps somewhat different) problems, without the need to dissect your entire code and rewrite it all. For example, if I had to change the specific way the random walk in that example was set up, I only had to change one tiny function - which I can do without thinking about the rest. Practical advantages It is easier to understand / recall after a while Granular code, at least for me, is easier to understand, when you come to it after a while, having forgotten the details of it. I may not remember exactly what was the idea behind the solution (well-chosen names help here), as well as which data structures were involved in each transformation (signatures help here). It also helps when you read someone else's code. Again, this is particularly true for larger code bases. More granular functions are easier to test in isolation . You can surely do that with the parts of a single function too, but it is not as straightforward. This is particularly true if your functions live in a package and are parts of a larger code base. I can better protect such code from regression bugs Here I mean the bugs coming from changes not propagated properly through entire code (such as changes of types / number of arguments for some functions), since I can insert argument checks and post-conditions easier. When some wrong / incomplete change is made, the code breaks in a controlled, predictable and easy-to-understand fashion. In many ways, this approach complements unit tests, code basically tests itself. It makes debugging much simpler . This is true because: Functions can throw inner exceptions with the detailed information where the error occurred (see also previous point) I can access them easier in running code, even when they are in packages. This is actually often a big deal, since it is one thing to run and test a tiny function, even private one, and it is another thing to deal with a larger and convoluted function. When you work on the running code, and have no direct access to the source (such that you can easily reload an isolated function), the smaller the function is that you may want to test, the easier it is. It makes creating workarounds, patches, and interactions with other code much easier . This I have experienced myself a lot. Making patches and workarounds . 
It often happens that you don't have access to the source, and have to change the behavior of some block of functionality at runtime. Being able to just simply overload or Block a small function is so much better than having to overload or redefine huge pieces of code, without even knowing what you may break by doing so. Integrating your functionality with code that does not have a public extension API The other, similar, issue is when you want to interact with some code (for example, make some of its functions work with your data types and be overloaded on them). It is good if that other code has an API designed for extensions. But if not, you may for example use UpValues to overload some of those functions. And there, having such granular functions as hooks really saves the day. In such moments, you really feel grateful for the other person who wrote their code in a granular fashion. This happened to me more than once. Implications for larger programs There surely isn't a single "right" way to structure code. And you may notice, that in most of the answers I post here on M SE, I do not follow the granularity principle to the extreme. One important thing to realize here is that the working mode where one solves a very particular problem is very different from the working mode when one is constructing, extending and / or maintaining larger code bases. The whole ability to glue together things insanely fast works against you in the long term, if your code is large. This is a road to writing so-called write-only code, and for software development that is a road to hell. Perl is notorious for that - which was the reason why lots of people switched to Python from Perl despite the unquestionable power of Perl. Mathematica is similar, because it shares with Perl the property that there are typically a large number of ways to solve any given problem. Put another way, the Mathematica language is very reusable, but that doesn't mean that it is very easy to create reusable code with it. It is easy to create the code that solves any particular problem fast , but that's not the same thing. Smaller granularity I view as an idiomatic (in Mathematica) way to improve reusability. What I wanted to stress was that reusability comes from the right separation of concerns, factoring out different pieces. It is obvious for the larger volumes of code, but I think this is no less true also for smaller functions. When we typically solve some problem in Mathematica, we don't have reusability in mind all that much, since our context is usually confined to that particular problem. In such a case, reusability is a foreign concept and gets in the way. My guess is that you like terse code because it brings you to the solution most economically. But when / if you want to solve many similar problems most economically, then you will notice that, if you list all your solutions, and compare those, your terse code for all of them will contain repeated pieces which however are wired into particular solutions, and there won't be an easy way to avoid that redundancy unless you start making your code more granular. My conclusions So, this really boils down to a simple question: do you need to solve some very specific problem, or do you want to construct a set of bricks to solve many similar problems. It is somewhat an art to decide for each particular case, and this can not be decided without a bigger context / picture in mind. 
If you are sure that you just need to solve a particular problem, then going to extreme granularity is probably an overkill. If you anticipate many similar problems, then granularity offers advantages. It so happens that large code bases frequently automate a lot of similar things, rather than solve a single large problem. This is true even for programs like compilers, which do solve a single large problem, but in reality lots of sub-problems will reuse the same core set of data structures. So, I was particularly advocating granularity in the context of development of large programs - and I would agree that for solving some particular very specific problem, making it too granular might result in too much of a mental overhead. Of course, that also greatly depends on personal habits - mine have been heavily influenced in recent years by dealing with larger chunks of code. | {} | {'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/110067', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/121/']} | jdg_380304 |
stackexchange | llm_judgeable_groundtruth_similarity | 513141 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can we prove the following formula? $$\sum_{k=1}^{\infty}\frac{1}{(mk^2-n)^2}=\frac{-2m+\sqrt{mn}\pi\cot\left(\sqrt{\frac{n}{m}}\pi\right)+n\pi^2\csc^2\left(\sqrt{\frac{n}{m}}\pi\right)}{4mn^2}$$ What is the general method for finding sums of the form $\sum\limits_{k=1}^{\infty}\frac{1}{(mk^2-n)^\ell}, \ell\in\mathbb{N}$ ?
Now provide the response and nothing else.
| Start with the infinite product expansion $$\frac{\sin z}{z} = \prod_{k=1}^{\infty}\left(1 - \frac{z^2}{k^2\pi^2}\right)$$ Taking logarithm, substitute $z$ by $\pi\sqrt{x}$ and differentiate with respect to $x$, we find$$\sum_{k=1}^{\infty} \frac{1}{k^2 - x} = -\frac{d}{dx} \left[ \sum_{k=1}^{\infty}\log\left(1 - \frac{x}{k^2}\right)\right]= -\frac{d}{dx} \left[ \log\left(\frac{\sin(\pi \sqrt{x})}{\pi \sqrt{x}}\right) \right]$$Differentiate both sides with respect to $x$ for $\ell - 1 $ more times and then divide by $-(\ell-1)!$, we get in general: $$\sum_{k=1}^{\infty} \frac{1}{(k^2 - x)^\ell} = -\frac{1}{(\ell-1)!}\frac{d^{\ell}}{dx^{\ell}} \left[ \log\left(\frac{\sin(\pi \sqrt{x})}{\pi \sqrt{x}}\right) \right]$$In the case $\ell = 2$, the RHS simplifies to$$-\frac{1}{2x^2} + \frac{\pi}{4x}\left( \frac{1}{\sqrt{x}}\cot(\pi\sqrt{x}) + \pi \csc(\pi\sqrt{x})^2 \right)$$Substitute $x$ by $\frac{n}{m}$ will give you the formula you have for $\ell = 2$. Formula for other $\ell$ can be obtained by taking corresponding number of derivatives. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/513141', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/91500/']} | jdg_380305 |
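A quick check of the last step, which the answer leaves to the reader: writing $$\sum_{k=1}^{\infty}\frac{1}{(mk^2-n)^2}=\frac{1}{m^2}\sum_{k=1}^{\infty}\frac{1}{(k^2-n/m)^2}$$ and substituting $x=\frac{n}{m}$ into the $\ell=2$ expression above gives $$\frac{1}{m^2}\left[-\frac{m^2}{2n^2}+\frac{\pi m}{4n}\left(\sqrt{\frac{m}{n}}\cot\left(\pi\sqrt{\tfrac{n}{m}}\right)+\pi\csc^2\left(\pi\sqrt{\tfrac{n}{m}}\right)\right)\right]=\frac{-2m+\sqrt{mn}\,\pi\cot\left(\sqrt{\frac{n}{m}}\pi\right)+n\pi^2\csc^2\left(\sqrt{\frac{n}{m}}\pi\right)}{4mn^2},$$ which is exactly the identity in the question.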
stackexchange | llm_judgeable_groundtruth_similarity | 43305 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let say that an infinite subsets $A$ of $\mathbb{N}$ is "nice w.r.to ergodic limits", if it can replace $\mathbb{N}$ in the individual ergodic theorem, that is, if it is such that the following statement is true: For any probability space $(X,\Sigma,\mu),$ for any measure-preserving transformation $T$ on $X,$ for any $f\in L^1(X,\mu)$ the ergodic means along $A,$ $$M(f,T,A,t,x):=\big|\{j\in A\, : j\leq t\}\big|^{\,-1}\sum_{j\in A,\, j\leq t}f(T^{j}x)$$ converge for a.e. $x\in X$ to the conditional expectation w.r.to the $T$ invariant $\sigma$-algebra, $\mathbb{E}(f|\Sigma_T),$ as $t \to +\infty.$ So $\mathbb{N}$ itself is nice in this sense, by Birkhoff's theorem; if $A$ is nice and $m$ is a positive integer, the set of translates $A+m$ is nice (the set of convergence with $T$ along $A+m$ coinciding with the $T^{\, m}$ pre-image of the set of convergence with $T$ along $A$). Also, a disjoint union of two nice sets is nice. Is there any other structure on the family of these sets? What about e.g. the union of two of them? (at a glance it seems to me that something more can be said for the analogous cases of other ergodic theorems, e.g. for the $L^p$ convergence. Looking at this very related question , and its answer, make me think that the situation may be non-trivial and interesting enough, so that it should have been studied.
Now provide the response and nothing else.
| These are called good universal sets.Bourgain (1987) proved that sequences of the form $p(n)$, $n \in {\bf N}$, $p$ a non constant polynomial, are good. He also proved (1988) that the set of primes is a good universal set for $L^p$ functions, $p> {(1+\sqrt{3})\over 2}$. This was later improved to $p>1$ by Wierdl, see its article for a short introduction to the problem http://www.springerlink.com/content/e027w4211n7784h1/fulltext.pdf . There is now an extensive litterature on the problem, following Bourgain's articles. In another direction, note that a transformation T is mixing iff the ergodic theorem for T holds for all subsequences (see e.g. the book of Krengel, "ergodic theorems"). | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/43305', 'https://mathoverflow.net', 'https://mathoverflow.net/users/6101/']} | jdg_380306 |
stackexchange | llm_judgeable_groundtruth_similarity | 18266952 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I read that by default, Web API will return JSON Data but for some reason when creating an API, it returns XML instead of JSON. public class CurrencyController : ApiController{ private CompanyDatabaseContext db = new CompanyDatabaseContext(); // GET api/Currency public IEnumerable<Currency> GetCurrencies() { return db.Currencies.AsEnumerable(); }} I haven't modified anything out of the ordinary so I'm stumped
Now provide the response and nothing else.
| if you modify your WebApiConfig as follows you'll get JSON by default. public static class WebApiConfig{ public static void Register(HttpConfiguration config) { config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); var appXmlType = config.Formatters.XmlFormatter.SupportedMediaTypes.FirstOrDefault(t => t.MediaType == "application/xml"); config.Formatters.XmlFormatter.SupportedMediaTypes.Remove(appXmlType); }} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/18266952', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1712587/']} | jdg_380307 |
stackexchange | llm_judgeable_groundtruth_similarity | 16071211 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I currently have this code to create a Web Worker: w = new Worker("webwork.js");w.onmessage = function(event) { alert(event.data); } And then the webwork.js code for the Web Worker: self.onmessage = function(event) { //var ss=r; //Causes error because of undefined var ss=""; for(var currProp in event) { ss+=("event."+currProp+"="+event[currProp]+"\n"); } postMessage(ss);} Now I want to transfer a 128-Megabyte ArrayBuffer with this code: var r = new ArrayBuffer(1048576*128);w.postMessage(0, [r]); Now that I have supposedly transferred the variable r , how do I access it from the Web Worker itself. I have tried event.r , just r , self.r and other things like trying to add a second function argument for the array of ArrayBuffers , but nothing works. How can I access the transferred variable(s) from the Web Worker?
Now provide the response and nothing else.
| postMesage(aMessage, transferList) In transferList , you must specify the transferable objects that are contained in aMessage : const objData = { strText: "coolcmd", objArrayBuffer: new ArrayBuffer(10), objTypedArray: new Int8Array(20)};objWorker.postMessage( objData, [objData.objArrayBuffer, objData.objTypedArray.buffer]); In the worker: self.onmessage = objEvent => { console.log( objEvent.data.strText, objEvent.data.objArrayBuffer, objEvent.data.objTypedArray );}; Or using an object destructuring : self.onmessage = ({data: {strText, objArrayBuffer, objTypedArray}}) => { console.log( strText, objArrayBuffer, objTypedArray );}; List of transferable objects. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/16071211', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/681164/']} | jdg_380308 |
stackexchange | llm_judgeable_groundtruth_similarity | 14063791 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
#include <queue>
using namespace std;

class Test{
    int *myArray;

    public:
    Test(){
        myArray = new int[10];
    }
    ~Test(){
        delete[] myArray;
    }
};

int main(){
    queue<Test> q;
    Test t;
    q.push(t);
}

After I run this, I get a runtime error "double free or corruption". If I get rid of the destructor content (the delete) it works fine. What's wrong?
Now provide the response and nothing else.
 | Let's talk about copying objects in C++. Test t; , calls the default constructor, which allocates a new array of integers. This is fine, and your expected behavior. Trouble comes when you push t into your queue using q.push(t) . If you're familiar with Java, C#, or almost any other object-oriented language, you might expect the object you created earlier to be added to the queue, but C++ doesn't work that way. When we take a look at std::queue::push method , we see that the element that gets added to the queue is "initialized to a copy of x." It's actually a brand new object that uses the copy constructor to duplicate every member of your original Test object to make a new Test . Your C++ compiler generates a copy constructor for you by default! That's pretty handy, but causes problems with pointer members. In your example, remember that int *myArray is just a memory address; when the value of myArray is copied from the old object to the new one, you'll now have two objects pointing to the same array in memory. This isn't intrinsically bad, but the destructor will then try to delete the same array twice, hence the "double free or corruption" runtime error. How do I fix it? The first step is to implement a copy constructor , which can safely copy the data from one object to another. For simplicity, it could look something like this:

Test(const Test& other){
    myArray = new int[10];
    memcpy( myArray, other.myArray, 10 * sizeof(int) );  // memcpy counts bytes, not elements
}

Now when you're copying Test objects, a new array will be allocated for the new object, and the values of the array will be copied as well. We're not completely out of trouble yet, though. There's another method that the compiler generates for you that could lead to similar problems - assignment. The difference is that with assignment, we already have an existing object whose memory needs to be managed appropriately. Here's a basic assignment operator implementation:

Test& operator= (const Test& other){
    if (this != &other)
    {
        memcpy( myArray, other.myArray, 10 * sizeof(int) );
    }
    return *this;
}

The important part here is that we're copying the data from the other array into this object's array, keeping each object's memory separate. We also have a check for self-assignment; otherwise, we'd be copying from ourselves to ourselves, which may throw an error (not sure what it's supposed to do). If we were deleting and allocating more memory, the self-assignment check prevents us from deleting memory from which we need to copy. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14063791', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/547695/']} | jdg_380309 |
stackexchange | llm_judgeable_groundtruth_similarity | 11061621 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have set up a SSH tunnel between two servers A and B . B has MySQL server, and this works: mysql -h localhost -P 3306 -u user -p While this doesn't: mysql -h 127.0.0.1 -P 3306 -u user -p Although my.cnf has these lines: bind-address = 127.0.0.1# Next addr differs slightly, but anywaybind-address = 99.99.99.99 Now about the tunnel. It connects the following: (A) localhost(9989) -> (B) localhost(3306) But when (on A , with ports forwarded) I do mysql -v -h 127.0.0.1 -P 9989 -u user userdb -p I get ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0 And when I do mysql -v -h localhost -P 9989 -u user userdb -p I get ERROR 1045 (28000): Access denied for user 'user'@'localhost' (using password: YES) What might be the reason? What am I doing wrong?
Now provide the response and nothing else.
| There are three issues here. 1 - Forget about the SSH tunnel for now You cannot bind MySQL to more than one specific IP.The first bind-address clause is overridden (therefore, ignored) by the second one. Your server only listens to 99.99.99.99 . The reason why you can connect with -h localhost but not with -h 127.0.0.1 is that in the first form, you do not actually connect through TCP/IP, but through a local socket. Look in your my.cnf for a socket clause. Remove one redundant bind-address clause. You may want to use bind-address=0.0.0.0 , which instructs MySQL daemon to listen to all network interfaces. 2 - Let's setup your SSH tunnel The reason for you error ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0 is not obvious to me. I suspect SSH tunnel is actually established only when it receives a connection request (in your case, when you run the mysql client). Since your server does not listen to 127.0.0.1 (see previous paragraph), the SSH tunnel cannot be established, connection fails, and your client interprets it as a network failure. 3 - Why mysql -v -h localhost -P 9989 -u user userdb -p fails Please post the output of [edit : just added ...OR host LIKE 'localhost' below, as this might be relevant for troubleshooting purposes] mysql > SELECT user, host FROM mysql.user WHERE user LIKE 'user' OR host LIKE 'localhost'; (replace 'user' , after the LIKE clause, with the actual user name if necessary) MySQL access control checks both the username/password ( user ) and the origin of the connection ( host ) to identify a user. You probably did not create a user 'user'@'localhost' . N.B.: mysql.com being unreachable from my location at this time, I cannot link to the relevant manual pages. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11061621', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/822789/']} | jdg_380310 |
stackexchange | llm_judgeable_groundtruth_similarity | 510345 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given the Universe is the set of natural numbers, then $(∀x)(∃y)(x>y)$ is false. But then why is $(∀x)(∃y)(x\geq y)$ true? The first equation and the second equation are the same except for the "=" in the second equation, so I don't get how that affects the statement to this degree. Thank you.
Now provide the response and nothing else.
| Look at the smallest number in the natural numbers $\mathbb N$ (since you mention that the universe is the set of all natural numbers): the smallest number in $\mathbb N$ would be $0$ or $1$, depending on your definition. Let's just go with $0$. Since $0 \in \mathbb N$, the universally quantified "x" means the inequality must hold for every natural number $x$, including $x = 0.\;$ Now, does there exist any $y \in \mathbb N$ such that that $0 > y\;$? On the other hand, if we allow equality too, then we have that it is true that there exists a $\,y \in \mathbb N\,$ such that $\;0 \geq y,\,$ namely, $\,y = 0:$ That is, it is certainly true that $\;0 \geq 0.$ The same logic applies if $x = 1$ instead of $x = 0$, if you are working with a definition of the natural numbers $\{1, 2, 3, \ldots\}$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/510345', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/51252/']} | jdg_380311 |
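To record the argument compactly: for $(∀x)(∃y)(x\geq y)$, given any $x\in\mathbb N$ the choice $y=x$ is a witness, since $x\geq x$ always holds, so the statement is true. For $(∀x)(∃y)(x>y)$, take $x$ to be the least element of $\mathbb N$ ($0$ or $1$, depending on the convention); no $y\in\mathbb N$ satisfies $x>y$ for that $x$, so the statement is false.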
stackexchange | llm_judgeable_groundtruth_similarity | 7245550 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My app is partly written in native app using C/C++. The problem is that whenever C/C++ part crashes for some reason the app dies and then restarts automatically. This causes all kinds of messy problems Now of course, it should not crash in the native part and I'm trying to weed out all reasons why it would happen. However, if it does happen I'd like to: Quit gracefully If it does die, at least not try to restart automatically. I'm curious as to why this behaviour happens. After some search I tried putting the following line in the main activity element of the AndroidManifest.xml: android:finishOnTaskLaunch="true" but the automatic restore still happens. Anyone knows why this is happening and how to change it? UPDATE: I think a more fundamental question is, Is there something similar to a callback if there is a native crash? One of the answers suggested 'handling crash signals'. I'd be grateful for any links on how it can be done at an application or module level. As it stands currently, if there is a crash the app just disappears, there's nothing in logcat, so no debugging is possible.
Now provide the response and nothing else.
| Try to handle crash signals (SIGSEGV etc.) and send kill to yourself in signal handler. This trick helps me. Example: #include <signal.h>#include <unistd.h>static void signal_handler(int signal, siginfo_t *info, void *reserved){ kill(getpid(),SIGKILL);}extern "C" jint JNI_OnLoad(JavaVM* vm, void* /*reserved*/){ struct sigaction handler; memset(&handler, 0, sizeof(handler)); handler.sa_sigaction = signal_handler; handler.sa_flags = SA_SIGINFO; sigaction(SIGILL, &handler, NULL); sigaction(SIGABRT, &handler, NULL); sigaction(SIGBUS, &handler, NULL); sigaction(SIGFPE, &handler, NULL); sigaction(SIGSEGV, &handler, NULL); sigaction(SIGSTKFLT, &handler, NULL); return(JNI_VERSION_1_6);} UPDATE2 if you want to see crashlog in android logcat you should use this signal handler static void signal_handler(int signal, siginfo_t *info, void *reserved){ struct sockaddr_un addr; size_t namelen; socklen_t alen; int s, err; char name[] = "android:debuggerd"; namelen = strlen(name); // Test with length +1 for the *initial* '\0'. if ((namelen + 1) > sizeof(addr.sun_path)) { errno = EINVAL; return; } /* This is used for abstract socket namespace, we need * an initial '\0' at the start of the Unix socket path. * * Note: The path in this case is *not* supposed to be * '\0'-terminated. ("man 7 unix" for the gory details.) */ memset (&addr, 0, sizeof addr); addr.sun_family = AF_LOCAL; addr.sun_path[0] = 0; memcpy(addr.sun_path + 1, name, namelen); alen = namelen + offsetof(struct sockaddr_un, sun_path) + 1; s = socket(AF_LOCAL, SOCK_STREAM, 0); if(s < 0) return; RETRY_ON_EINTR(err,connect(s, (struct sockaddr *) &addr, alen)); if (err < 0) { close(s); s = -1; } pid_t tid = gettid(); if(s>=0) { /* debugger knows our pid from the credentials on the * local socket but we need to tell it our tid. It * is paranoid and will verify that we are giving a tid * that's actually in our process */ int ret; RETRY_ON_EINTR(ret, write(s, &tid, sizeof(unsigned))); if (ret == sizeof(unsigned)) { /* if the write failed, there is no point to read on * the file descriptor. */ RETRY_ON_EINTR(ret, read(s, &tid, 1)); //notify_gdb_of_libraries(); } close(s); } wait(NULL); kill(getpid(),SIGKILL);} I took it from android source (can't insert link because android.git.kernel.org is down), but I am not sure that it will work in future Android releases | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7245550', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/289918/']} | jdg_380312 |
stackexchange | llm_judgeable_groundtruth_similarity | 14875582 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a df: head(hej3) Year variable N Sum Mean sd Variance Median Min Max1 1901 Delägare.män. 85 18089 212.81176 365.39168 133511.083 110.0 5 27712 1901 Delägare.kvinnor. 48 3509 73.10417 97.84209 9573.074 34.5 1 4143 1902 Delägare.män. 92 19783 215.03261 363.63529 132230.625 111.5 2 28274 1902 Delägare.kvinnor. 53 3872 73.05660 94.12827 8860.131 40.0 1 4085 1903 Delägare.män. 99 21123 213.36364 366.58411 134383.907 109.0 2 29946 1903 Delägare.kvinnor. 55 4201 76.38182 93.85006 8807.833 40.0 1 390dput(hej3)structure(list(Year = c(1901L, 1901L, 1902L, 1902L, 1903L, 1903L, 1904L, 1904L, 1905L, 1905L, 1906L, 1906L, 1907L, 1907L, 1908L, 1908L, 1909L, 1909L, 1910L, 1910L, 1911L, 1911L, 1912L, 1912L, 1915L, 1915L, 1921L, 1921L, 1924L, 1924L, 1927L, 1927L, 1930L, 1930L), variable = structure(c(1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L), .Label = c("Delägare.män.", "Delägare.kvinnor."), class = "factor"), N = c(85, 48, 92, 53, 99, 55, 112, 63, 116, 69, 126, 76, 130, 78, 130, 80, 129, 83, 125, 81, 118, 77, 97, 72, 50, 41, 42, 34, 42, 33, 26, 20, 29, 29), Sum = c(18089, 3509, 19783, 3872, 21123, 4201, 23686, 5087, 26751, 5652, 28198, 6493, 31638, 6934, 32611, 7194, 36133, 7025, 30138, 6884, 26935, 6900, 20333, 8021, 18929, 5619, 11157, 4332, 10778, 4437, 7974, 3416, 9270, 3793), Mean = c(212.811764705882, 73.1041666666667, 215.032608695652, 73.0566037735849, 213.363636363636, 76.3818181818182, 211.482142857143, 80.7460317460317, 230.612068965517, 81.9130434782609, 223.793650793651, 85.4342105263158, 243.369230769231, 88.8974358974359, 250.853846153846, 89.925, 280.100775193798, 84.6385542168675, 241.104, 84.9876543209877, 228.262711864407, 89.6103896103896, 209.618556701031, 111.402777777778, 378.58, 137.048780487805, 265.642857142857, 127.411764705882, 256.619047619048, 134.454545454545, 306.692307692308, 170.8, 319.655172413793, 130.793103448276), sd = c(365.391684625249, 97.8420871855394, 363.635291602196, 94.1282707255493, 366.5841066326, 93.8500559223754, 373.650556559185, 106.929577104772, 405.688052605677, 109.41727188241, 421.942750950132, 110.801123403007, 462.951922738037, 115.87931358968, 502.602700547356, 117.741378786224, 642.043293966629, 114.535815924939, 459.068496259615, 114.82317860815, 360.246791665663, 119.967995276389, 293.702281347504, 224.695704072853, 460.551137890511, 162.282154166672, 231.68690199813, 139.634830604701, 226.30617641151, 143.124310240498, 343.058102277823, 181.389548819806, 410.53721563181, 192.111645239046), Variance = c(133511.083193277, 9573.0740248227, 132230.625298614, 8860.13134978229, 134383.907235622, 8807.832996633, 139614.738416988, 11433.9344598054, 164582.796026986, 11972.1393861893, 178035.685079365, 12276.8889473684, 214324.482766846, 13428.0153180153, 252609.474597496, 13863.032278481, 412219.591327519, 13118.4531295915, 210743.884258065, 13184.362345679, 129777.750905403, 14392.3198906357, 86261.0300687285, 50488.159428795, 212107.350612245, 26335.4975609756, 53678.8205574913, 19497.8859180036, 51214.4854819977, 20484.5681818182, 117688.861538462, 32902.1684210526, 168540.805418719, 36906.8842364532), Median = c(110, 34.5, 111.5, 40, 109, 40, 112, 47, 109.5, 34, 111.5, 35, 120.5, 41.5, 124.5, 46.5, 125, 44, 124, 44, 121, 44, 112, 42.5, 251, 85, 199.5, 93.5, 186, 88, 206, 111, 185, 50), Min = c(5, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 19, 1, 19, 1, 19, 1, 49, 5, 0, 0), Max = c(2771, 414, 2827, 408, 2994, 390, 3196, 506, 
3421, 524, 3680, 553, 3952, 570, 4271, 581, 4585, 601, 3898, 602, 2603, 592, 1951, 1699, 2269, 675, 1151, 576, 1122, 565, 1680, 652, 2086, 809)), .Names = c("Year", "variable", "N", "Sum", "Mean", "sd", "Variance", "Median", "Min", "Max"), row.names = c(NA, -34L), class = "data.frame") I made a plot: theplot<- ggplot(subset(hej3,variable=="Delägare.män."), aes(x = Year)) + geom_line(aes(y = Mean, color = "Medelvärde"), size = 1.5, linetype = 1, alpha = 1) + geom_ribbon(aes(ymax = Max, ymin = Min, fill = "Delägare Män Max/Min"), linetype = 3, alpha = 0.4) + geom_ribbon(aes(ymax = Mean+sd, ymin = Mean-sd, fill = "Mean +- sd"), colour = "grey50", linetype = 3, alpha = 0.8)+#geom_line(aes(y = Sum, #color = "Sum Delägare Män"), size = 0.9, linetype = 1, alpha = 1) + geom_line(aes(y = N, color = "Antal Kassor"), size = 0.9, linetype = 2, alpha = 1)+scale_y_continuous(breaks = seq(-500, 4800, by = 100), limits = c(-500, 4800), labels = seq(-500, 4800, by = 100))+ scale_x_continuous(breaks=seq(1901,1930,2))+labs(title = "Manliga Delägare i Yrkeskassor") + scale_color_manual("Variabler", breaks = c("Antal Kassor","Medelvärde"), values = c("Antal Kassor" = "black", "Medelvärde" = "#6E6E6E")) + scale_fill_manual(" Ribbons", breaks = c("Delägare Män Max/Min", "Mean +- sd"), values = c(`Delägare Män Max/Min` = "grey50", `Mean +- sd` = "#4E4E4E")) + theme(legend.direction = "horizontal", legend.position = "bottom", legend.key = element_blank(), legend.background = element_rect(fill = "white", colour = "gray30")) + guides(fill = guide_legend(keywidth = 0.9, keyheight = 1)) I have 2 questions : How can I change the line type in the legend? I would like the "Antal Kassor variable in the legend to be dashed as in the picture (linetype=2). How can I "zoom in" a part of y-axis? I would like to zoom in the area between -300 och 600 at the y-axis. I know there are a lot of ggplot wizards here:) Best Regards!
Now provide the response and nothing else.
| To use your original data frame you should change to lines. In both calls to geom_line() put linetype= inside the aes() and set the type to variable name. + geom_line(aes(y = Mean, color = "Medelvärde",linetype = "Medelvärde"), size = 1.5, alpha = 1) + geom_line(aes(y = N, color = "Antal Kassor",linetype="Antal Kassor"), size = 0.9, alpha = 1) Then you should add scale_linetype_manual() with the same name as for scale_colour_manual() and there set line types you need. +scale_linetype_manual("Variabler",values=c("Antal Kassor"=2,"Medelvärde"=1)) Also guides() should be adjusted for linetype and colours to better show lines in legend. + guides(fill = guide_legend(keywidth = 1, keyheight = 1), linetype=guide_legend(keywidth = 3, keyheight = 1), colour=guide_legend(keywidth = 3, keyheight = 1)) Here is complete code used: theplot<- ggplot(subset(hej3,variable=="Delägare.män."), aes(x = Year)) + geom_line(aes(y = Mean, color = "Medelvärde",linetype = "Medelvärde"), size = 1.5, alpha = 1) + geom_ribbon(aes(ymax = Max, ymin = Min, fill = "Delägare Män Max/Min"), linetype = 3, alpha = 0.4) + geom_ribbon(aes(ymax = Mean+sd, ymin = Mean-sd, fill = "Mean +- sd"), colour = "grey50", linetype = 3, alpha = 0.8)+ #geom_line(aes(y = Sum, #color = "Sum Delägare Män"), size = 0.9, linetype = 1, alpha = 1) + geom_line(aes(y = N, color = "Antal Kassor",linetype="Antal Kassor"), size = 0.9, alpha = 1)+ scale_y_continuous(breaks = seq(-500, 4800, by = 100), limits = c(-500, 4800), labels = seq(-500, 4800, by = 100))+ scale_x_continuous(breaks=seq(1901,1930,2))+ labs(title = "Manliga Delägare i Yrkeskassor") + scale_color_manual("Variabler", breaks = c("Antal Kassor","Medelvärde"), values = c("Antal Kassor" = "black", "Medelvärde" = "#6E6E6E")) + scale_fill_manual(" Ribbons", breaks = c("Delägare Män Max/Min", "Mean +- sd"), values = c(`Delägare Män Max/Min` = "grey50", `Mean +- sd` = "#4E4E4E")) + scale_linetype_manual("Variabler",values=c("Antal Kassor"=2,"Medelvärde"=1))+ theme(legend.direction = "horizontal", legend.position = "bottom", legend.key = element_blank(), legend.background = element_rect(fill = "white", colour = "gray30")) + guides(fill = guide_legend(keywidth = 1, keyheight = 1), linetype=guide_legend(keywidth = 3, keyheight = 1), colour=guide_legend(keywidth = 3, keyheight = 1)) + coord_cartesian(ylim = c(-300, 600)) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14875582', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1665355/']} | jdg_380313 |
stackexchange | llm_judgeable_groundtruth_similarity | 1945618 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there an easy way (aka: not using a proxy) to get access to the raw request/response XML for a webservice published with JAX-WS reference implementation (the one included in JDK 1.5 and better) ?Being able to do that via code is what I need to do.Just having it logged to a file by clever logging configurations would be nice but enough. I know that other more complex and complete frameworks exist that might do that, but I would like to keep it as simple as possible and axis, cxf, etc all add considerable overhead that I want to avoid. Thanks!
Now provide the response and nothing else.
| Here is the solution in raw code (put together thanks to stjohnroe and Shamik): Endpoint ep = Endpoint.create(new WebserviceImpl());List<Handler> handlerChain = ep.getBinding().getHandlerChain();handlerChain.add(new SOAPLoggingHandler());ep.getBinding().setHandlerChain(handlerChain);ep.publish(publishURL); Where SOAPLoggingHandler is (ripped from linked examples): package com.myfirm.util.logging.ws;import java.io.PrintStream;import java.util.Map;import java.util.Set;import javax.xml.namespace.QName;import javax.xml.soap.SOAPMessage;import javax.xml.ws.handler.MessageContext;import javax.xml.ws.handler.soap.SOAPHandler;import javax.xml.ws.handler.soap.SOAPMessageContext;/* * This simple SOAPHandler will output the contents of incoming * and outgoing messages. */public class SOAPLoggingHandler implements SOAPHandler<SOAPMessageContext> { // change this to redirect output if desired private static PrintStream out = System.out; public Set<QName> getHeaders() { return null; } public boolean handleMessage(SOAPMessageContext smc) { logToSystemOut(smc); return true; } public boolean handleFault(SOAPMessageContext smc) { logToSystemOut(smc); return true; } // nothing to clean up public void close(MessageContext messageContext) { } /* * Check the MESSAGE_OUTBOUND_PROPERTY in the context * to see if this is an outgoing or incoming message. * Write a brief message to the print stream and * output the message. The writeTo() method can throw * SOAPException or IOException */ private void logToSystemOut(SOAPMessageContext smc) { Boolean outboundProperty = (Boolean) smc.get (MessageContext.MESSAGE_OUTBOUND_PROPERTY); if (outboundProperty.booleanValue()) { out.println("\nOutbound message:"); } else { out.println("\nInbound message:"); } SOAPMessage message = smc.getMessage(); try { message.writeTo(out); out.println(""); // just to add a newline } catch (Exception e) { out.println("Exception in handler: " + e); } }} | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1945618', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/236773/']} | jdg_380314 |
stackexchange | llm_judgeable_groundtruth_similarity | 4553129 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As far as I know, Python has 3 ways of finding out what operating system is running on: os.name sys.platform platform.system() Knowing this information is often useful in conditional imports, or using functionality that differs between platforms (e.g. time.clock() on Windows v.s. time.time() on UNIX). My question is, why 3 different ways of doing this? When should one way be used and not another? Which way is the 'best' (most future-proof or least likely to accidentally exclude a particular system which your program can actually run on)? It seems like sys.platform is more specific than os.name , allowing you to distinguish win32 from cygwin (as opposed to just nt ), and linux2 from darwin (as opposed to just posix ). But if that's so, that what about the difference between sys.platform and platform.system() ? For example, which is better, this: import sysif sys.platform == 'linux2': # Do Linux-specific stuff or this? : import platformif platform.system() == 'Linux': # Do Linux-specific stuff For now I'll be sticking to sys.platform , so this question isn't particularly urgent, but I would be very grateful for some clarification regarding this.
Now provide the response and nothing else.
| Dived a bit into the source code. The output of sys.platform and os.name are determined at compile time. platform.system() determines the system type at run time. sys.platform is specified as a compiler define during the build configuration. os.name checks whether certain os specific modules are available (e.g. posix , nt , ...) platform.system() actually runs uname and potentially several other functions to determine the system type at run time. My suggestion: Use os.name to check whether it's a posix-compliant system. Use sys.platform to check whether it's a linux, cygwin, darwin, atheos, etc. Use platform.system() if you don't believe the other sources. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4553129', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/526887/']} | jdg_380315 |
stackexchange | llm_judgeable_groundtruth_similarity | 3356229 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a display that needs to be a little more dynamic than what I'm used to and can't seem to quite find the answer I need.

Customer a    Customer b    Customer c (and so on)
savings with product a
savings with product b
(and so on)

I know there will always be a minimum of one in each field. Someone said use a nested repeater or something. I looked around and couldn't find out how to use a nested repeater. I am on a deadline and can't really play with things until I find something that works. What ASP.NET control should I use to do this? An example would be nice but I just need help in the right direction. I am using SQL but getting the data through LINQ. The data ends up in lists. Thank you for your help!
Now provide the response and nothing else.
| Nested Repeaters are pretty easy. Just throw one in your ItemTemplate, and in the OnItemDataBound event of your main repeater do the following if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem){ DataRowView row = (DataRowView)e.Item.DataItem; Repeater nestedRepeater = e.Item.FindControl("NestedRepeater") as Repeater; nestedRepeater.DataSource = getSavingsPerCustomer(row["customerID"]); nestedRepeater.DataBind(); } Where the template of the outer repeater had a customer name and a repeater and the inner one has the different savings probably incorrect syntax but you get the idea <asp:repeater ID="outer"><HeaderTemplate> <div style="float:left"></HeaderTemplate><ItemTemplate> Customer: <%= Eval(customer)%><br/> <asp:repeater ID="NestedRepeater"> <ItemTemplate> Saving: <%= Eval(saving)%><br/> </ItemTemplate> </asp:repeater></ItemTemplate><FooterTemplate> </div></FooterTemplate></asp:repeater> Similar SO question: Repeater in Repeater | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3356229', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/243977/']} | jdg_380316 |
stackexchange | llm_judgeable_groundtruth_similarity | 8381739 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a hello function and it takes n arguments (see below code).

def hello(*args):
    # return values

I want to return multiple values from *args . How to do it? For example:

d, e, f = hello(a, b, c)

SOLUTION:

def hello(*args):
    values = {}  # values
    rst = []     # result
    for arg in args:
        rst.append(values[arg])
    return rst

a, b, c = hello('d', 'e', 'f')
a, b = hello('d', 'f')

Just return list. :) :D
Now provide the response and nothing else.
| So, you want to return a new tuple with the same length as args (i.e. len(args)), and whose values are computed from args[0], args[1], etc.Note that you can't modify 'args' directly, e.g. you can't assign args[0] = xxx, that's illegal and will raise a TypeError: 'tuple' object does not support item assignment.What You need to do then is return a new tuple whose length is the same as len(args).For example, if you want your function to add one to every argument, you can do it like this: def plus_one(*args): return tuple(arg + 1 for arg in args) Or in a more verbose way: def plus_one(*args): result = [] for arg in args: result.append(arg + 1) return tuple(result) Then, doing : d, e, f = plus_one(1, 2, 3) will return a 3-element tuple whose values are 2, 3 and 4. The function works with any number of arguments. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8381739', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/152308/']} | jdg_380317 |
stackexchange | llm_judgeable_groundtruth_similarity | 15975363 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Question regarding combination of Jackson/JPA If there are about 20 entities in current application and I have add Jackson dependency in POM, does it mean all entities are by default ready to convert to JSON object? I saw a sample project seems only class annotated as @JsonIgnored is skipped by JSON . If so, then how can this happen, what is behind such mechanism? how JACKSON handle those entities which don't have any Jackson annotation, by default ignored or not? I've been looking for resources online but not much luck. If only one of the 20 entities need to be mapped to JSON object, does it mean I have to add @JsonIgnore to all other 19 entities? If not, how Jackson differentiate with entity to work on? Thanks.
Now provide the response and nothing else.
| Jackson and JPA don't have anything to do with each other. Jackson is a JSON parsing library and JPA is a persistence framework. Jackson can serialize almost any object - the only requirement being that the object have some kind of recognizable properties ( Javabean type properties , or bare fields annotated with @JsonProperty . There is an additional requirement for deserialization , that the target type have a default (no-arg) constructor. So, for example, this is an object that Jackson can serialize: // Class with a single Javabean property, "name"class Person { private String name; public String getName() { return name ; } public String setName(String name) { this.name = name ; }} And here is another: // Class with a single field annotated with @JsonPropertyclass Account { @JsonProperty("accountNumber") private String accountNumber;} And here is yet another: @Entitypublic class User { @Id private Long id; @Basic private String userName; @Basic @JsonIgnore private String password; @Basic @JsonIgnore private Address address; // Constructors, getters, setters} The last example shows a JPA entity class - as far as Jackson is concerned it can be serialized just like any other type. But , take note of its fields: when this object is serialized into JSON two of the fields will not be included - 'password' and 'address'. This is because they have been annotated with @JsonIgnore . The @JsonIgnore annotation allows a developer to say 'Hey, its ok to serialize this object, but when you do so don't include these fields in the output'. This exclusion only occurs for the fields of this object, so for example, if you included an Address field in another class, but did not mark the field as ignorable, it would be serialized. To prevent serialization of a type in all cases , regardless of context, use the @JsonIgnoreType annotation. When used on a type it basically means 'I dont care where this type is used, never serialize it'. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15975363', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1233359/']} | jdg_380318 |
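To make the "Jackson and JPA don't have anything to do with each other" point concrete, here is a minimal sketch of serializing the User entity above with an ObjectMapper. It assumes Jackson 2.x (the com.fasterxml.jackson.databind package) and that User has the usual getters and setters mentioned in the comment; adjust the package if you are still on Jackson 1.x.

import com.fasterxml.jackson.databind.ObjectMapper;

public class UserJsonDemo {
    public static void main(String[] args) throws Exception {
        // A plain JPA entity; the JPA annotations are simply ignored by Jackson.
        User user = new User();
        user.setId(42L);
        user.setUserName("alice");
        user.setPassword("secret");   // @JsonIgnore -> left out of the JSON

        ObjectMapper mapper = new ObjectMapper();
        String json = mapper.writeValueAsString(user);

        // Prints something like: {"id":42,"userName":"alice"}
        System.out.println(json);
    }
}

Note that nothing is serialized just because the dependency is on the classpath: a class is only turned into JSON when an instance is actually handed to the mapper (or to a framework that calls it), so the other 19 entities don't need @JsonIgnore anywhere.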
stackexchange | llm_judgeable_groundtruth_similarity | 5454249 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following code: public void myMethod(Object... args) { System.out.println("this is myMethod");}public void invokeMyMethod() { Method s = this.getClass().getMethod("myMethod", Object[].class); Object[] ex = new Object[2]; ex[0] = "hi"; ex[1] = "there"; s.invoke(this, ex);} I'm getting the exception java.lang.IllegalArgumentException: wrong number of arguments. What's wrong?
Now provide the response and nothing else.
| You need to call the method like this: s.invoke(this, new Object[]{new Object[]{"hi", "there"}}); (... or use the alternative in @Jon's answer.) The reason your current code fails is to do with the way that varadic methods are implemented in Java. Essentially, T1 xxx(T2... args) is syntactic sugar for T1 xxx(T2[] args) . And when you call the methods, xxx(arg1, arg2, arg3) is syntactic sugar for xxx(new T2[]{arg1, arg2, arg3}) . In this case, you are trying to call a varadic method using another varadic method with the same array basetype, and there are multiple possible interpretations of the code. When there are two possible interpretations of a varadic call, Java assumes that you are trying to use the "unsugared" version of the call instead of the "sugared" version. Or to be more precise, the "sugared" interpretation is used if and only if : the number of actual arguments is not equal to the number of formal parameters, or the last actual argument is NOT assignment compatible with the (array) type of the last formal parameter. If you are interested, this behaviour is specified in the JLS in section 15.12.4.2 . So ... my solution works by forcing the non-varadic interpretation and explicitly constructing the required array. @Jon's solution works by forcing the correct varadic interpretation. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5454249', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/679573/']} | jdg_380319 |
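The alternative referred to above (from the other answer, not quoted here) presumably forces the varargs ("sugared") interpretation instead of building the outer array by hand; a sketch of that variant, using the same s and ex as in the question:

// Casting to Object makes the compiler treat 'ex' as a single argument,
// so invoke() wraps it as new Object[]{ex} and myMethod receives ex
// as its Object... parameter.
s.invoke(this, (Object) ex);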
stackexchange | llm_judgeable_groundtruth_similarity | 3496442 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It seems that the JVM uses some fixed amount of memory. At least I have often seen parameters -Xmx (for the maximum size) and -Xms (for the initial size) which suggest that. I got the feeling that Java applications don't handle memory very well. Some things I have noticed: Even some very small sample demo applications load huge amounts of memory. Maybe this is because of the Java library which is loaded. But why is it needed to load the library for each Java instance? (It seems that way because multiple small applications linearly take more memory. See here for some details where I describe this problem.) Or why is it done that way? Big Java applications like Eclipse often crash with some OutOfMemory exception. This was always strange because there was still plenty of memory available on my system. Often, they consume more and more memory over runtime. I'm not sure if they have some memory leaks or if this is because of fragmentation in the memory pool -- I got the feeling that the latter is the case. The Java library seem to require much more memory than similar powerful libraries like Qt for example. Why is this? (To compare, start some Qt applications and look at their memory usage and start some Java apps.) Why doesn't it use just the underlying system technics like malloc and free ? Or if they don't like the libc implementation, they could use jemalloc (like in FreeBSD and Firefox ) which seems to be quite good. I am quite sure that this would perform better than the JVM memory pool. And not only perform better, also require less memory, esp. for small applications. Addition: Does somebody have tried that already? I would be much interested in a LLVM based JIT-compiler for Java which just uses malloc / free for memory handling. Or maybe this also differs from JVM implementation to implementation? I have used mostly the Sun JVM. (Also note: I'm not directly speaking about the GC here. The GC is only responsible to calculate what objects can be deleted and to initialize the memory freeing but the actual freeing is a different subsystem. Afaik, it is some own memory pool implementation, not just a call to free .) Edit: A very related question: Why does the (Sun) JVM have a fixed upper limit for memory usage? Or to put it differently: Why does JVM handle memory allocations differently than native applications?
Now provide the response and nothing else.
| You need to keep in mind that the Garbage Collector does a lot more than just collecting unreachable objects. It also optimizes the heap space and keeps track of exactly where there is memory available to allocate for the creation of new objects. Knowing immediately where there is free memory makes the allocation of new objects into the young generation efficient, and prevents the need to run back and forth to the underlying OS. The JIT compiler also optimizes such allocations away from the JVM layer, according to Sun's Jon Masamitsu: Fast-path allocation does not call into the JVM to allocate an object. The JIT compilers know how to allocate out of the young generation and code for an allocation is generated in-line for object allocation. The interpreter also knows how to do the allocation without making a call to the VM. Note that the JVM goes to great lengths to try to get large contiguous memory blocks as well, which likely have their own performance benefits (See "The Cost of Missing the Cache"). I imagine calls to malloc (or the alternatives) have a limited likelihood of providing contiguous memory across calls, but maybe I missed something there. Additionally, by maintaining the memory itself, the Garbage Collector can make allocation optimizations based on usage and access patterns. Now, I have no idea to what extent it does this, but given that there's a registered Sun patent for this concept , I imagine they've done something with it. Keeping these memory blocks allocated also provides a safeguard for the Java program. Since the garbage collection is hidden from the programmer, they can't tell the JVM "No, keep that memory; I'm done with these objects, but I'll need the space for new ones." By keeping the memory, the GC doesn't risk giving up memory it won't be able to get back. Naturally, you can always get an OutOfMemoryException either way, but it seems more reasonable not to needlessly give memory back to the operating system every time you're done with an object, since you already went to the trouble to get it for yourself. All of that aside, I'll try to directly address a few of your comments: Often, they consume more and more memory over runtime. Assuming that this isn't just what the program is doing (for whatever reason, maybe it has a leak, maybe it has to keep track of an increasing amount of data), I imagine that it has to do with the free hash space ratio defaults set by the (Sun/Oracle) JVM. The default value for -XX:MinHeapFreeRatio is 40%, while -XX:MaxHeapFreeRatio is 70%. This means that any time there is only 40% of the heap space remaining, the heap will be resized by claiming more memory from the operating system (provided that this won't exceed -Xmx ). Conversely, it will only* free heap memory back to the operating system if the free space exceeds 70%. Consider what happens if I run a memory-intensive operation in Eclipse; profiling, for example. My memory consumption will shoot up, resizing the heap (likely multiple times) along the way. Once I'm done, the memory requirement falls back down, but it likely won't drop so far that 70% of the heap is free. That means that there's now a lot of underutilized space allocated that the JVM has no intention of releasing. This is a major drawback, but you might be able to work around it by customizing the percentages to your situation. To get a better picture of this, you really should profile your application so you can see the utilized versus allocated heap space. 
I personally use YourKit , but there are many good alternatives to choose from. *I don't know if this is actually the only time and how this is observed from the perspective of the OS, but the documentation says it's the "maximum percentage of heap free after GC to avoid shrinking ," which seems to suggest that. Even some very small sample demo applications load huge amounts of memory. I guess this depends on what kind of applications they are. I feel that Java GUI applications run memory-heavy, but I don't have any evidence one way or another. Did you have a specific example that we could look at? But why is it needed to load the library for each Java instance? Well, how would you handle loading multiple Java applications if not creating new JVM processes? The isolation of the processes is a good thing, which means independent loading. I don't think that's so uncommon for processes in general, though. As a final note, the slow start times you asked about in another question likely come from several intial heap reallocations necessary to get to the baseline application memory requirement (due to -Xms and -XX:MinHeapFreeRatio ), depending what the default values are with your JVM. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3496442', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/133374/']} | jdg_380320 |
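For reference, the ratios discussed above are ordinary HotSpot command-line flags, set alongside -Xms/-Xmx. An illustrative invocation (the numbers are placeholders, not recommendations, and the jar name is made up):

java -Xms256m -Xmx1024m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar MyApp.jar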
stackexchange | llm_judgeable_groundtruth_similarity | 9630384 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been looking at a lot of code recently (for my own benefit, as I'm still learning to program), and I've noticed a number of Java projects (from what appear to be well respected programmers) wherein they use some sort of immediate down-casting . I actually have multiple examples, but here's one that I pulled straight from the code: public Set<Coordinates> neighboringCoordinates() { HashSet<Coordinates> neighbors = new HashSet<Coordinates>(); neighbors.add(getNorthWest()); neighbors.add(getNorth()); neighbors.add(getNorthEast()); neighbors.add(getWest()); neighbors.add(getEast()); neighbors.add(getSouthEast()); neighbors.add(getSouth()); neighbors.add(getSouthWest()); return neighbors;} And from the same project, here's another (perhaps more concise) example: private Set<Coordinates> liveCellCoordinates = new HashSet<Coordinates>(); In the first example, you can see that the method has a return type of Set<Coordinates> - however, that specific method will always only return a HashSet - and no other type of Set . In the second example, liveCellCoordinates is initially defined as a Set<Coordinates> , but is immediately turned into a HashSet . And it's not just this single, specific project - I've found this to be the case in multiple projects. I am curious as to what the logic is behind this? Is there some code-conventions that would consider this good practice? Does it make the program faster or more efficient somehow? What benefit would it have?
Now provide the response and nothing else.
| When you are designing a method signature, it is usually better to only pin down what needs to be pinned down. In the first example, by specifying only that the method returns a Set (instead of a HashSet specifically), the implementer is free to change the implementation if it turns out that a HashSet is not the right data structure. If the method had been declared to return a HashSet , then all code that depended on the object being specifically a HashSet instead of the more general Set type would also need to be revised. A realistic example would be if it was decided that neighboringCoordinates() needed to return a thread-safe Set object. As written, this would be very simple to do—replace the last line of the method with: return Collections.synchronizedSet(neighbors); As it turns out, the Set object returned by synchronizedSet() is not assignment-compatible with HashSet . Good thing the method was declared to return a Set ! A similar consideration applies to the second case. Code in the class that uses liveCellCoordinates shouldn't need to know anything more than that it is a Set . (In fact, in the first example, I would have expected to see: Set<Coordinates> neighbors = new HashSet<Coordinates>(); at the top of the method.) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9630384', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/992550/']} | jdg_380321 |
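From the caller's side, the benefit is that nothing downstream has to change when the implementation does. A small sketch, assuming some object cell that exposes the neighboringCoordinates() method from the question:

// Depends only on the Set contract, so it keeps compiling and working
// whether the method builds a HashSet, a synchronizedSet wrapper, etc.
Set<Coordinates> neighbors = cell.neighboringCoordinates();
for (Coordinates c : neighbors) {
    // use c ...
}

// Had the method been declared to return HashSet<Coordinates>, the
// synchronizedSet change described above would break this declaration:
// HashSet<Coordinates> neighbors = cell.neighboringCoordinates();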
stackexchange | llm_judgeable_groundtruth_similarity | 32621403 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to support at least api 10, I want to be able to style my preferences nicely, I want to be able to have headers (or to show PreferenceScreen s). It seems that PreferenceActivity , not fully supported by AppCompat 's coloring, will not fit. So I'm trying to use AppCompatActivity and PreferenceFragmentCompat . public class Prefs extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); if (savedInstanceState == null) getSupportFragmentManager().beginTransaction() .replace(android.R.id.content, new PreferencesFragment()) .commit(); } public static class PreferencesFragment extends PreferenceFragmentCompat { @Override public void onCreate(final Bundle savedInstanceState) { super.onCreate(savedInstanceState); addPreferencesFromResource(R.xml.preferences); } @Override public void onDisplayPreferenceDialog(Preference preference) { // the following call results in a dialogue being shown super.onDisplayPreferenceDialog(preference); } @Override public void onNavigateToScreen(PreferenceScreen preferenceScreen) { // I can probably use this to go to to a nested preference screen // I'm not sure... } }} Now, I want to create a custom preference that will provide the choice of a font. With PreferenceActivity , I could simply do import android.preference.DialogPreference;public class FontPreference extends DialogPreference { public FontPreference(Context context, AttributeSet attrs) {super(context, attrs);} @Override protected void onPrepareDialogBuilder(Builder builder) { super.onPrepareDialogBuilder(builder); // do something with builder and make a nice cute dialogue, for example, like this builder.setSingleChoiceItems(new FontAdapter(), 0, null); }} and use xml such as this to display it <my.app.FontPreference android:title="Choose font" android:summary="Unnecessary summary" /> But now, there is no onPrepareDialogBuilder in android.support.v7.preference.DialogPreference . Instead, it's been moved to PreferenceDialogFragmentCompat . I found little information on how to use that thing, and I'm not sure how to go from xml to displaying it. v14 preference fragment has the following code: public void onDisplayPreferenceDialog(Preference preference) { ... final DialogFragment f; if (preference instanceof EditTextPreference) f = EditTextPreferenceDialogFragment.newInstance(preference.getKey()); ... f.show(getFragmentManager(), DIALOG_FRAGMENT_TAG);} I tried subclassing android.support.v7.preference.DialogPreference and having onDisplayPreferenceDialog use a similar piece of code to instantiate a dummy FontPreferenceFragment but it fails with the following exception. java.lang.IllegalStateException: Target fragment must implement TargetFragment interface At this point I'm already too deep into the mess and don't want to dig further. Google knows nothing about this exception. Anyways, this method seems to be overly complicated. So, what's the best way to create custom preferences using android.support.v7.preference library?
Now provide the response and nothing else.
| Important note: Currently (v23.0.1 of the v7 library) there are still a lot of theme-issues with the 'PreferenceThemeOverlay'(see this issue ). On Lollipop for example, you end up with Holo-styled category headers. After some frustrating hours, I finally succeeded to create a custom v7 Preference. Creating your own Preference appears to be harder than you might think is needed. So make sure to take some time. At first you might be wondering why you will find both a DialogPreference and a PreferenceDialogFragmentCompat for each preference type. As it turns out, the first one is the actual preference, the second is the DialogFragment where the preference would be displayed in. Sadly, you are required to subclass both of them. Don't worry, you won't need to change any piece of code. You only need to relocate some methods: All preference-editing methods (like setTitle() or persist*() ) can be found in the DialogPreference class. All dialog (-editing) methods ( onBindDialogView(View) & onDialogClosed(boolean) ) have been moved to PreferenceDialogFragmentCompat . You might want your existing class to extend the first one, that way you don't have to change to much I think. Autocomplete should help you find missing methods. When you have completed the above steps, it is time to bind these two classes together. In your xml file, you will refer to the preference-part. However, Android doesn't know yet which Fragment it must inflate when your custom preference needs to be. Therefore, you need to override onDisplayPreferenceDialog(Preference) : @Overridepublic void onDisplayPreferenceDialog(Preference preference) { DialogFragment fragment; if (preference instanceof LocationChooserDialog) { fragment = LocationChooserFragmentCompat.newInstance(preference); fragment.setTargetFragment(this, 0); fragment.show(getFragmentManager(), "android.support.v7.preference.PreferenceFragment.DIALOG"); } else super.onDisplayPreferenceDialog(preference);} and also your DialogFragment needs to handle the 'key': public static YourPreferenceDialogFragmentCompat newInstance(Preference preference) { YourPreferenceDialogFragmentCompat fragment = new YourPreferenceDialogFragmentCompat(); Bundle bundle = new Bundle(1); bundle.putString("key", preference.getKey()); fragment.setArguments(bundle); return fragment;} That should do the trick. If you encounter problems, try taking a look at existing subclasses and see how Android solved it (in Android Studio: type a class' name and press Ctrl+b to see the decompiled class). Hope it helps. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32621403', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1449683/']} | jdg_380322 |
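Tying this back to the FontPreference from the question, the split roughly looks like the skeleton below. The class and method names follow the 23.x support-library API as described above, but treat it as a sketch and check the exact signatures against your version; FontAdapter is the adapter from the question.

// Preference side: state, persistence and XML attributes live here.
public class FontPreference extends android.support.v7.preference.DialogPreference {
    public FontPreference(Context context, AttributeSet attrs) {
        super(context, attrs);
    }
    // persistString(...), setSummary(...) etc. as needed
}

// Dialog side: everything that used to be dialog-related in the old
// android.preference.DialogPreference moves into this fragment.
public class FontPreferenceDialogFragmentCompat
        extends android.support.v7.preference.PreferenceDialogFragmentCompat {

    @Override
    protected void onPrepareDialogBuilder(AlertDialog.Builder builder) {
        super.onPrepareDialogBuilder(builder);
        // old onPrepareDialogBuilder body goes here, e.g.
        // builder.setSingleChoiceItems(new FontAdapter(), 0, null);
    }

    @Override
    public void onDialogClosed(boolean positiveResult) {
        // old onDialogClosed body goes here (persist the chosen font, etc.)
    }
}

The newInstance(Preference) factory and the onDisplayPreferenceDialog() override are then exactly as shown in the answer above.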
stackexchange | llm_judgeable_groundtruth_similarity | 498970 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I remove all whitespace from the start and end of the string?
Now provide the response and nothing else.
| All browsers since IE9+ have trim() method for strings: " \n test \n ".trim(); // returns "test" here For those browsers who does not support trim() , you can use this polyfill from MDN : if (!String.prototype.trim) { (function() { // Make sure we trim BOM and NBSP var rtrim = /^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g; String.prototype.trim = function() { return this.replace(rtrim, ''); }; })();} That said, if using jQuery , $.trim(str) is also available and handles undefined/null. See this: String.prototype.trim=function(){return this.replace(/^\s+|\s+$/g, '');};String.prototype.ltrim=function(){return this.replace(/^\s+/,'');};String.prototype.rtrim=function(){return this.replace(/\s+$/,'');};String.prototype.fulltrim=function(){return this.replace(/(?:(?:^|\n)\s+|\s+(?:$|\n))/g,'').replace(/\s+/g,' ');}; | {} | {'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/498970', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20951/']} | jdg_380323 |
stackexchange | llm_judgeable_groundtruth_similarity | 1829925 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does refButton get null in the following JavaScript code? <html><head> <title></title> <script type="text/javascript"> var refButton = document.getElementById("btnButton"); refButton.onclick = function() { alert('I am clicked!'); }; </script></head><body> <form id="form1"> <div> <input id="btnButton" type="button" value="Click me"/> </div> </form></body></html>
Now provide the response and nothing else.
| At the point you are calling your function, the rest of the page has not rendered and so the element is not in existence at that point. Try calling your function on window.onload maybe. Something like this: <html><head> <title></title> <script type="text/javascript"> window.onload = function(){ var refButton = document.getElementById("btnButton"); refButton.onclick = function() { alert('I am clicked!'); } }; </script></head><body> <form id="form1"> <div> <input id="btnButton" type="button" value="Click me"/> </div> </form></body></html> | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1829925', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/159072/']} | jdg_380324 |
stackexchange | llm_judgeable_groundtruth_similarity | 34735580 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to do A SIMPLE using react-router ( version ^1.0.3 ) to redirect to another view. import React from 'react';import {Router, Route, Link, RouteHandler} from 'react-router';class HomeSection extends React.Component { static contextTypes = { router: PropTypes.func.isRequired }; constructor(props, context) { super(props, context); } handleClick = () => { console.log('HERE!', this.contextTypes); // this.context.location.transitionTo('login'); }; render() { return ( <Grid> <Row className="text-center"> <Col md={12} xs={12}> <div className="input-group"> <span className="input-group-btn"> <button onClick={this.handleClick} type="button"> </button> </span> </div> </Col> </Row> </Grid> ); }};HomeSection.contextTypes = { location() { React.PropTypes.func.isRequired }}export default HomeSection; all I need is to send the use to '/login' and that's it. What can I do ? errors in console: Uncaught ReferenceError: PropTypes is not defined file with my routes // LIBRARY/*eslint-disable no-unused-vars*/import React from 'react';/*eslint-enable no-unused-vars*/import {Route, IndexRoute} from 'react-router';// COMPONENTimport Application from './components/App/App';import Contact from './components/ContactSection/Contact';import HomeSection from './components/HomeSection/HomeSection';import NotFoundSection from './components/NotFoundSection/NotFoundSection';import TodoSection from './components/TodoSection/TodoSection';import LoginForm from './components/LoginForm/LoginForm';import SignupForm from './components/SignupForm/SignupForm';export default ( <Route component={Application} path='/'> <IndexRoute component={HomeSection} /> <Route component={HomeSection} path='home' /> <Route component={TodoSection} path='todo' /> <Route component={Contact} path='contact' /> <Route component={LoginForm} path='login' /> <Route component={SignupForm} path='signup' /> <Route component={NotFoundSection} path='*' /> </Route>);
Now provide the response and nothing else.
| 1) react-router > V6 useNavigate hook: If you have React >= 16.8 and functional components you can use the useNavigate hook from react-router . import React from 'react';import { useNavigate } from "react-router-dom";const YourComponent = () => { const navigate = useNavigate(); const handleClick = () => { navigate("/path/to/push"); } return ( <div> <button onClick={handleClick} type="button" /> </div> );}export default YourComponent; 2) react-router > V5 useHistory hook: If you have react-router v5 and functional components you can use the useHistory hook from react-router . import React from 'react';import { useHistory } from 'react-router-dom';const YourComponent = () => { const history = useHistory(); const handleClick = () => { history.push("/path/to/push"); } return ( <div> <button onClick={handleClick} type="button" /> </div> );}export default YourComponent; 3) react-router > V4 withRouter HOC: As @ambar mentioned in the comments, React-router has changed its codebase since its V4. Here is the documentation for withRouter import React, { Component } from 'react';import { withRouter } from "react-router-dom";class YourComponent extends Component { handleClick = () => { this.props.history.push("path/to/push"); } render() { return ( <div> <button onClick={this.handleClick} type="button"> </div> ); };}export default withRouter(YourComponent); 4) React-router < V4 with browserHistory You can achieve this functionality using react-router BrowserHistory . Code below: import React, { Component } from 'react';import { browserHistory } from 'react-router-dom';export default class YourComponent extends Component { handleClick = () => { browserHistory.push('/login'); }; render() { return ( <div> <button onClick={this.handleClick} type="button"> </div> ); };} 5) Redux connected-react-router If you have connected your component with redux, and have configured connected-react-router all you have to do is this.props.history.push("/new/url"); ie, you don't need withRouter HOC to inject history to the component props. // reducers.jsimport { combineReducers } from 'redux';import { connectRouter } from 'connected-react-router';export default (history) => combineReducers({ router: connectRouter(history), ... // rest of your reducers});// configureStore.jsimport { createBrowserHistory } from 'history';import { applyMiddleware, compose, createStore } from 'redux';import { routerMiddleware } from 'connected-react-router';import createRootReducer from './reducers';...export const history = createBrowserHistory();export default function configureStore(preloadedState) { const store = createStore( createRootReducer(history), // root reducer with router state preloadedState, compose( applyMiddleware( routerMiddleware(history), // for dispatching history actions // ... other middlewares ... ), ), ); return store;}// set up other redux requirements like for eg. 
in index.jsimport { Provider } from 'react-redux';import { Route, Switch } from 'react-router-dom';import { ConnectedRouter } from 'connected-react-router';import configureStore, { history } from './configureStore';...const store = configureStore(/* provide initial state if any */)ReactDOM.render( <Provider store={store}> <ConnectedRouter history={history}> <> { /* your usual react-router v4/v5 routing */ } <Switch> <Route exact path="/yourPath" component={YourComponent} /> </Switch> </> </ConnectedRouter> </Provider>, document.getElementById('root'));// YourComponent.jsimport React, { Component } from 'react';import { connect } from 'react-redux';...class YourComponent extends Component { handleClick = () => { this.props.history.push("path/to/push"); } render() { return ( <div> <button onClick={this.handleClick} type="button"> </div> ); } };}export default connect(mapStateToProps = {}, mapDispatchToProps = {})(YourComponent); | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/34735580', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3424817/']} | jdg_380325 |
stackexchange | llm_judgeable_groundtruth_similarity | 56434549 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It seems like Apple's new SwiftUI framework uses a new kind of syntax that effectively builds a tuple, but has another syntax: var body: some View { VStack(alignment: .leading) { Text("Hello, World") // No comma, no separator ?! Text("Hello World!") }} Trying to tackle down what this syntax really is , I found out that the VStack initializer used here takes a closure of the type () -> Content as the second parameter, where Content is a generic param conforming to View that is inferred via the closure. To find out what type Content is inferred to, I changed the code slightly, maintaining its functionality: var body: some View { let test = VStack(alignment: .leading) { Text("Hello, World") Text("Hello World!") } return test} With this, test reveals itself to be of type VStack<TupleView<(Text, Text)>> , meaning that Content is of type TupleView<Text, Text> . Looking up TupleView , I found it's a wrapper type originating from SwiftUI itself that can only be initialized by passing the tuple it should wrap. Question Now I'm wondering how in the world the two Text instances in this example are converted to a TupleView<(Text, Text)> . Is this hacked into SwiftUI and therefore invalid regular Swift syntax? TupleView being a SwiftUI type supports this assumption. Or is this valid Swift syntax? If yes, how can one use it outside SwiftUI ?
Now provide the response and nothing else.
| As Martin says , if you look at the documentation for VStack 's init(alignment:spacing:content:) , you can see that the content: parameter has the attribute @ViewBuilder : init(alignment: HorizontalAlignment = .center, spacing: Length? = nil, @ViewBuilder content: () -> Content) This attribute refers to the ViewBuilder type, which if you look at the generated interface, looks like: @_functionBuilder public struct ViewBuilder { /// Builds an empty view from an block containing no statements, `{ }`. public static func buildBlock() -> EmptyView /// Passes a single view written as a child view (e..g, `{ Text("Hello") }`) /// through unmodified. public static func buildBlock(_ content: Content) -> Content where Content : View} The @_functionBuilder attribute is a part of an unofficial feature called " function builders ", which has been pitched on Swift evolution here , and implemented specially for the version of Swift that ships with Xcode 11, allowing it to be used in SwiftUI. Marking a type @_functionBuilder allows it to be used as a custom attribute on various declarations such as functions, computed properties and, in this case, parameters of function type. Such annotated declarations use the function builder to transform blocks of code: For annotated functions, the block of code that gets transformed is the implementation. For annotated computed properties, the block of code that gets transformed is the getter. For annotated parameters of function type, the block of code that gets transformed is any closure expression that is passed to it (if any). The way in which a function builder transforms code is defined by its implementation of builder methods such as buildBlock , which takes a set of expressions and consolidates them into a single value. For example, ViewBuilder implements buildBlock for 1 to 10 View conforming parameters, consolidating multiple views into a single TupleView : @available(iOS 13.0, OSX 10.15, tvOS 13.0, watchOS 6.0, *)extension ViewBuilder { /// Passes a single view written as a child view (e..g, `{ Text("Hello") }`) /// through unmodified. public static func buildBlock<Content>(_ content: Content) -> Content where Content : View public static func buildBlock<C0, C1>(_ c0: C0, _ c1: C1) -> TupleView<(C0, C1)> where C0 : View, C1 : View public static func buildBlock<C0, C1, C2>(_ c0: C0, _ c1: C1, _ c2: C2) -> TupleView<(C0, C1, C2)> where C0 : View, C1 : View, C2 : View // ...} This allows a set of view expressions within a closure passed to VStack 's initialiser to be transformed into a call to buildBlock that takes the same number of arguments. For example: struct ContentView : View { var body: some View { VStack(alignment: .leading) { Text("Hello, World") Text("Hello World!") } }} gets transformed into a call to buildBlock(_:_:) : struct ContentView : View { var body: some View { VStack(alignment: .leading) { ViewBuilder.buildBlock(Text("Hello, World"), Text("Hello World!")) } }} resulting in the opaque result type some View being satisfied by TupleView<(Text, Text)> . 
You'll note that ViewBuilder only defines buildBlock up to 10 parameters, so if we attempt to define 11 subviews: var body: some View { // error: Static member 'leading' cannot be used on instance of // type 'HorizontalAlignment' VStack(alignment: .leading) { Text("Hello, World") Text("Hello World!") Text("Hello World!") Text("Hello World!") Text("Hello World!") Text("Hello World!") Text("Hello World!") Text("Hello World!") Text("Hello World!") Text("Hello World!") Text("Hello World!") } } we get a compiler error, as there's no builder method to handle this block of code (note that because this feature is still a work-in-progress, the error messages around it won't be that helpful). In reality, I don't believe people will run into this restriction all that often, for example the above example would be better served using the ForEach view instead: var body: some View { VStack(alignment: .leading) { ForEach(0 ..< 20) { i in Text("Hello world \(i)") } } } If however you do need more than 10 statically defined views, you can easily workaround this restriction using the Group view: var body: some View { VStack(alignment: .leading) { Group { Text("Hello world") // ... // up to 10 views } Group { Text("Hello world") // ... // up to 10 more views } // ... } ViewBuilder also implements other function builder methods such: extension ViewBuilder { /// Provides support for "if" statements in multi-statement closures, producing /// ConditionalContent for the "then" branch. public static func buildEither<TrueContent, FalseContent>(first: TrueContent) -> ConditionalContent<TrueContent, FalseContent> where TrueContent : View, FalseContent : View /// Provides support for "if-else" statements in multi-statement closures, /// producing ConditionalContent for the "else" branch. public static func buildEither<TrueContent, FalseContent>(second: FalseContent) -> ConditionalContent<TrueContent, FalseContent> where TrueContent : View, FalseContent : View} This gives it the ability to handle if statements: var body: some View { VStack(alignment: .leading) { if .random() { Text("Hello World!") } else { Text("Goodbye World!") } Text("Something else") } } which gets transformed into: var body: some View { VStack(alignment: .leading) { ViewBuilder.buildBlock( .random() ? ViewBuilder.buildEither(first: Text("Hello World!")) : ViewBuilder.buildEither(second: Text("Goodbye World!")), Text("Something else") ) } } (emitting redundant 1-argument calls to ViewBuilder.buildBlock for clarity). | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/56434549', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4355141/']} | jdg_380326 |
stackexchange | llm_judgeable_groundtruth_similarity | 11544 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
According to the Filesystem Hierarchy Standard , /opt is for "the installation of add-on application software packages". /usr/local is "for use by the system administrator when installing software locally". These use cases seem pretty similar. Software not included with distributions usually is configured by default to install in either /usr/local or /opt with no particular rhyme or reason as to which they chose. Is there some difference I'm missing, or do both do the same thing, but exist for historical reasons?
Now provide the response and nothing else.
| While both are designed to contain files not belonging to the operating system, /opt and /usr/local are not intended to contain the same set of files. /usr/local is a place to install files built by the administrator, typically by using the make command (e.g., ./configure; make; make install ). The idea is to avoid clashes with files that are part of the operating system, which would either be overwritten or overwrite the local ones otherwise (e.g., /usr/bin/foo is part of the OS while /usr/local/bin/foo is a local alternative). All files under /usr are shareable between OS instances, although this is rarely done with Linux. This is a part where the FHS is slightly self-contradictory, as /usr is defined to be read-only, but /usr/local/bin needs to be read-write for local installation of software to succeed. The SVR4 file system standard, which was the FHS' main source of inspiration, is recommending to avoid /usr/local and use /opt/local instead to overcome this issue. /usr/local is a legacy from the original BSD. At that time, the source code of /usr/bin OS commands were in /usr/src/bin and /usr/src/usr.bin , while the source of locally developed commands was in /usr/local/src , and their binaries in /usr/local/bin . There was no notion of packaging (outside tarballs). On the other hand, /opt is a directory for installing unbundled packages (i.e. packages not part of the Operating System distribution, but provided by an independent source), each one in its own subdirectory. They are already built whole packages provided by an independent third party software distributor. Unlike /usr/local stuff, these packages follow the directory conventions (or at least they should). For example, someapp would be installed in /opt/someapp , with one of its command being /opt/someapp/bin/foo , its configuration file would be in /etc/opt/someapp/foo.conf , and its log files in /var/opt/someapp/logs/foo.access . | {} | {'log_upvote_score': 10, 'links': ['https://unix.stackexchange.com/questions/11544', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/5164/']} | jdg_380327 |
stackexchange | llm_judgeable_groundtruth_similarity | 10685152 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
joomla- How to remove unwanted js files from page I have used plugins some pages so many js files are included in all pages
Now provide the response and nothing else.
| There are two ways that I am aware of: 1) get an instance if the document object and remove the js files (you could do that in a plugin) : <?php //get the array containing all the script declarations $document = JFactory::getDocument(); $headData = $document->getHeadData(); $scripts = $headData['scripts']; //remove your script, i.e. mootools unset($scripts['/media/system/js/mootools-core.js']); unset($scripts['/media/system/js/mootools-more.js']); $headData['scripts'] = $scripts; $document->setHeadData($headData);?> 2) remove js files directly from your templates index.php : <?php unset($this->_scripts['/media/system/js/mootools-core.js']); ?> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10685152', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1403339/']} | jdg_380328 |
stackexchange | llm_judgeable_groundtruth_similarity | 27947 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If i had a good knowledge in Java programming and know some information about Microcontroller, Can i begin to learn embedded linux? if not, how can i start in this field?
Now provide the response and nothing else.
| There are some steps before getting on embedded linux. As @KKToronto said, it would be nice if you have a desktop linux experience first. If you don't have any experience I recommend installing Ubuntu in your desktop/notebook in order to get some feeling with the OS. To go from the desktop to the embedded world, at least on linux way, is relatively easy, if you're planing to develop FOR embedded linux. Almost all drivers interfaces, kernel calls, are the same. One program that runs on a desktop linux, can run on a embedded linux with minor alterations. The beagleboard platform is a good example, as @JobyTaffey noted. Some applications I developed can run on both desktop and the beagleboard without code alterations, just need to use the correct compiler. Still on the "develop for embedded linux" topic: one of the main alterations is the compiler, mainly because almost all embedded processors that are powerful enough to run linux are ARM ones. From the C/C++/Java programmer it is almost like migrating from x86 to ia64. If you want to get a deeper "embedded" experience as building device drivers for new hardware, accessing I/O ports on the board, control external equipments using linux, I'd go with Linux Embedded Primer . It is a great book to learn on low level stuff as how the device drivers are made, how to get access to fixed memory locations that represent some peripheral etc. (And it has an amazing lightsaber on the cover =) ) By going on the hardware side a deep microcontroller knowledge is really important, because you're going to be dealing directly with memory for peripheral configuration/access. Some operations may even require assembly knowledge, mainly interruptions or flash writing. It depends a lot on the microcontroller architecture. If you have no experience on this, I'd recommend to start with arduino, as @stevegt noted, to get a hardware/electronics feeling and then proceed to a baremetal programming over any other processor, to learn some tricks related to hardware/software interface, that are somehow hidden on the arduino firmware or linux kernel. As you can see that is much knowledge hidden on "embedded linux" expression. Keep in mind that you don't need to have all of it to build an embedded system. You need to focus on what side you want to learn first and get a pre-made system for the other: arduino for hardware first contact, beagleboard to learn programming for embedded linux, a baremetal processor for hardware/software interface. Specifically for your case I'd recommend the beagle board. As you have some programming and microcontroller background, you can develop some applications in high level to get experience at linux programming and when you fell comfortable enough you could start hardware stuff with the available I/O's on the board. | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/27947', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/8516/']} | jdg_380329 |
stackexchange | llm_judgeable_groundtruth_similarity | 3506020 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Conjecture . For every natural number $n \in \Bbb{N}$ , there exists a finite set of consecutive numbers $C\subset \Bbb{N}$ containing $n$ such that $\sum\limits_{c\in C} c$ is a prime power. A list of the first few numbers in $\Bbb{N}$ has several different covers by such consecutive number sets. One such is: 3 7 5 13 8 19 11 25 29 16 37 41 49 53___ ___ _ ___ _ ____ __ _____ _____ __ __ _____ _____ _____ _____ __1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 ___ ___ _____ _____ __ ________ 11 17 31 43 8159 71 3^5___ __ _____ _________________ 30 31 32 33 34 35 36 37 38 39 40 41 42 43 ..... _____ _____ _____ 61 67 73 Has this been proved already?
Now provide the response and nothing else.
| For any odd prime $p$ , there are $p$ consecutive integers centred on $p$ that sum to $p^2$ . $2+3+4=3^2$ $3+4+5+6+7=5^2$ $4+5+6+7+8+9+10=7^2$ etc. Let $p_n$ be the $n$ -th prime. Then, using Bertrand's postulate in the form $$p_{n+1}<2p_n$$ we know that the above sums for consecutive primes overlap. Finally, we note that $1+2=3$ to complete the proof. I don't know if this has been shown before, but the proof seems straightforward. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/3506020', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/26327/']} | jdg_380330 |
stackexchange | llm_judgeable_groundtruth_similarity | 20169793 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having real trouble trying to get started with FunScript, I really love the idea of F# -> Javascript but constantly frustrated by the amount information/documentation of how to work with such beautiful-looking technologies. Here's the problem: The FunScript Project (No Hosting) template cannot be installed because: "You need Nuget Installed", says VS when I try to download it. // Dah, I do have Nuget and just updated it to latest release. I'm using VS2013 Ultimate. I did the following: I created a new FSharp Library Project and then I ran the Package Maneger command for getting the needed FunScript Libs: PM> Install-Package FunScript that worked nice but then . . . When I look at a FunScript example I see the open FunScript.TypeScript in the program which currently does not exist when I type that as it says TypeScript is not defines, and looking at the examples FunScript.TypeScript gives the ability to use jQuery and interact with the DOM. so without it i can't do anything here I did notice the 3rd-party libs on Nuget for FunScript, the: FuncScript.TypeScript.Bindings.(lib|jquery) but i can't reference those in an "open [namespace]" statement. and at last, there must be a "do Runtime components ..." something at the end of the program. this also is 'not defined' Do you guys have a complete FunScript example that I can start with? (I have already searched the web alot, the programs i saw didn't work with me)
Now provide the response and nothing else.
| Sorry, the project is still a bit thin on documentation. We will try to improve this soon. I have now updated the FunScript Visual Studio Project Template to remove the NuGet Package Manager dependency. From v1.1.0.15 It should work on Visual Studio 2013. Note: you may need to build twice to restore the NuGet packages correctly. You only get the basic F# to JS compiler (and some FSharp.Data type provider mappings) in the main NuGet package. If you want to access JavaScript through TypeScript bindings you must add references to those "Bindings" packages you mentioned. The names of these packages don't actually reflect any namespace within the assemblies. Currently, they import types into the global namespace (i.e., they should just become available in intellisense without opening any namespaces). Unless the TypeScript contains modules, in which case the functions and types are mapped to the corresponding .NET namespaces. E.g., Globals.alert(message) should type-check after you have added the FunScript.TypeScript.Binding.lib NuGet package. Please note, we may move these definitions all up into the FunScript.TypeScript namespace in future to avoid polluting the global namespace. The do Runtime.Run(components) call in the examples is actually just a helper method. We should probably remove this from the examples, as it confuses a lot of people. The only method call you really need is Compiler.Compile(<@ main() @>, noReturn = true) , which compiles the quotation and returns a string of JavaScript. It is then up to the user what they do with this JavaScript (e.g., put it in a hosted page). I'll be adding more documentation around using FunScript when I release another project called HashBang , which makes use of FunScript to build crawlable-dynamic-websites. This will include step-by-step tutorials for building and deploying with HashBang/FunScript. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20169793', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2705530/']} | jdg_380331 |
stackexchange | llm_judgeable_groundtruth_similarity | 4930398 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Right now, I'm storing every XML layout file inside the 'res/layout' folder, so it is feasible and simple to manage small projects, but when there is a case of large and heavy projects, then there should be a hierarchy and sub-folders needed inside the layout folder. for e.g. layout-- layout_personal -- personal_detail.xml -- personal_other.xml--layout_address -- address1.xml -- address2.xml Like the same way, we would like to have sub-folders for the large application, so is there any way to do so inside the Android project? I am able to create layout-personal and layout_address sub-folders inside the layout folder, but when the time comes to access the XML layout file using R.layout._______ , at that time there is no any XML layout pop-up inside the menu.
Now provide the response and nothing else.
| You CAN do this with gradle. I've made a demo project showing how. The trick is to use gradle's ability to merge multiple resource folders , and set the res folder as well as the nested subfolders in the sourceSets block. The quirk is that you can't declare a container resource folder before you declare that folder's child resource folders. Below is the sourceSets block from the build.gradle file from the demo. Notice that the subfolders are declared first. sourceSets { main { res.srcDirs = [ 'src/main/res/layouts/layouts_category2', 'src/main/res/layouts', 'src/main/res' ] }} Also, the direct parent of your actual resource files (pngs, xml layouts, etc..) does still need to correspond with the specification . | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/4930398', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/379693/']} | jdg_380332 |
stackexchange | llm_judgeable_groundtruth_similarity | 27482579 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How is the compiler not complaining when I write the following code? public class MyClass { private int count; public MyClass(int x){ this.count=x; } public void testPrivate(MyClass o){ System.out.println(o.count); } } Even though it is an instance of the same class in which testPrivate is written, shouldn't it give a compilation error at System.out.println(o.count) ? After all, I am trying to access a private variable directly. The code even runs fine.
Now provide the response and nothing else.
| A private member is accessible from any method within the class in which it is declared, regardless of whether that method accesses its own ( this ) instance's private member or some other instance's private member. This is stated in JLS 6.6.1 : ...Otherwise, if the member or constructor is declared private, then access is permitted if and only if it occurs within the body of the top level class (§7.6) that encloses the declaration of the member or constructor. This feature of Java allows you to write methods that accept an instance of the class as an argument (for example - clone(Object other) , compareTo(Object other) ) without relying on the class having non private getters for all the private properties that need to be accessed. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27482579', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1571587/']} | jdg_380333 |
stackexchange | llm_judgeable_groundtruth_similarity | 18063451 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I have a tuple with different element types like std::tuple<T0, T1, T2, ...> And how to get the index of a element type? template<class T, class Tuple>struct Index{ enum {value = ?;}}; Thanks.
Now provide the response and nothing else.
| template <class T, class Tuple>struct Index;template <class T, class... Types>struct Index<T, std::tuple<T, Types...>> { static const std::size_t value = 0;};template <class T, class U, class... Types>struct Index<T, std::tuple<U, Types...>> { static const std::size_t value = 1 + Index<T, std::tuple<Types...>>::value;}; See it live at Coliru . This implementation returns the index of the first occurrence of a given type. Asking for the index of a type that is not in the tuple results in a compile error (and a fairly ugly one at that). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18063451', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1899020/']} | jdg_380334 |
stackexchange | llm_judgeable_groundtruth_similarity | 3434202 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm confused about machine code and native code in the context of .NET languages. What is the difference between them? Are they the same?
Now provide the response and nothing else.
| The terms are indeed a bit confusing, because they are sometimes used inconsistently. Machine code: This is the most well-defined one. It is code that uses the byte-code instructions which your processor (the physical piece of metal that does the actual work) understands and executes directly. All other code must be translated or transformed into machine code before your machine can execute it. Native code: This term is sometimes used in places where machine code (see above) is meant. However, it is also sometimes used to mean unmanaged code (see below). Unmanaged code and managed code: Unmanaged code refers to code written in a programming language such as C or C++, which is compiled directly into machine code . It contrasts with managed code , which is written in C#, VB.NET, Java, or similar, and executed in a virtual environment (such as .NET or the JavaVM) which kind of “simulates” a processor in software. The main difference is that managed code “manages” the resources (mostly the memory allocation) for you by employing garbage collection and by keeping references to objects opaque. Unmanaged code is the kind of code that requires you to manually allocate and de-allocate memory, sometimes causing memory leaks (when you forget to de-allocate) and sometimes segmentation faults (when you de-allocate too soon). Unmanaged also usually implies there are no run-time checks for common errors such as null-pointer dereferencing or array bounds overflow. Strictly speaking, most dynamically-typed languages — such as Perl, Python, PHP and Ruby — are also managed code . However, they are not commonly described as such, which shows that managed code is actually somewhat of a marketing term for the really big, serious, commercial programming environments (.NET and Java). Assembly code: This term generally refers to the kind of source code people write when they really want to write byte-code. An assembler is a program that turns this source code into real byte-code. It is not a compiler because the transformation is 1-to-1. However, the term is ambiguous as to what kind of byte-code is used: it could be managed or unmanaged. If it is unmanaged, the resulting byte-code is machine code . If it is managed, it results in the byte-code used behind-the-scenes by a virtual environment such as .NET. Managed code (e.g. C#, Java) is compiled into this special byte-code language, which in the case of .NET is called Common Intermediate Language (CIL) and in Java is called Java byte-code . There is usually little need for the common programmer to access this code or to write in this language directly, but when people do, they often refer to it as assembly code because they use an assembler to turn it into byte-code. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3434202', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/414321/']} | jdg_380335 |
stackexchange | llm_judgeable_groundtruth_similarity | 1118454 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two functions: def f(a,b,c=g(b)): blabladef g(n): blabla c is an optional argument in function f . If the user does not specify its value, the program should compute g(b) and that would be the value of c . But the code does not compile - it says name 'b' is not defined. How to fix that? Someone suggested: def g(b): blabladef f(a,b,c=None): if c is None: c = g(b) blabla But this doesn't work. Maybe the user intended c to be None and then c will have another value.
Now provide the response and nothing else.
| def f(a,b,c=None): if c is None: c = g(b) If None can be a valid value for c then you do this: sentinel = object()def f(a,b,c=sentinel): if c is sentinel: c = g(b) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1118454', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/115579/']} | jdg_380336 |
stackexchange | llm_judgeable_groundtruth_similarity | 8579 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I remember learning some years ago of a theorem to the effect that if a polynomial $p(x_1, ... x_n)$ with real coefficients is non-negative on $\mathbb{R}^n$, then it is a sum of squares of polynomials in the variables $x_i$. Unfortunately, I'm not sure if I'm remembering correctly. (The context in which I saw this theorem was someone asking whether there was a sum-of-squares proof of the AM-GM inequality in $n$ variables, so I'm not 100% certain if the quoted theorem was specific to that case.) So: does anyone know a reference for the correct statement of this theorem, if in fact something like it is true? (Feel free to retag if you don't think it's appropriate, by the way.)
Now provide the response and nothing else.
| One interpretation of the question is Hilbert's seventeenth problem , to characterize the polynomials on $\mathbb{R}^n$ that take non-negative values. The problem is motivated by the nice result, which is not very hard, that a non-negative polynomial in $\mathbb{R}[x]$ (one variable) is a sum of two squares. What is fun about this result is that it establishes an analogy between $\mathbb{C}[x]$, viewed as a quadratic extension by $i$ of the Euclidean domain $\mathbb{R}[x]$; and $\mathbb{Z}[i]$ (the Gaussian integers), viewed as a quadratic extension by $i$ of the Euclidean domain $\mathbb{Z}$. In this analogy, a real linear polynomial is like a prime that is 3 mod 4 that remains a Gaussian prime, while a quadratic irreducible polynomial is like a prime that is not 3 mod 4, which is then not a Gaussian prime. A non-zero integer $n \in \mathbb{Z}$ is a sum of two squares if and only if it is positive and each prime that is 3 mod 4 occurs evenly. Analogously, a polynomial $p \in \mathbb{R}[x]$ is a sum of two squares if and only if some value is positive and each real linear factor occurs evenly. And that is a way of saying that $p$ takes non-negative values. In dimension 2 and higher, the result does not hold for sums of squares of polynomials. But as the Wikipedia page says, Artin showed that a non-negative polynomial (or rational function) in any number of variables is at least a sum of squares of rational functions. In general, if $R[i]$ and $R$ are both unique factorization domains, then some of the primes in $R$ have two conjugate (or conjugate and associate) factors in $R[i]$, while other primes in $R$ are still primes in $R[i]$. This always leads to a characterization of elements of $R$ that are sums of two squares. This part actually does apply to the multivariate polynomial ring $R = \mathbb{R}[\vec{x}]$. What no longer holds is the inference that if $p \in R$ has non-negative values, then the non-splitting factors occur evenly. For instance, $x^2+y^2+1$ is a positive polynomial that remains irreducible over $\mathbb{C}$. It is a sum of 3 squares rather than 2 squares; of course you have to work harder to find a polynomial that is not a sum of squares at all. | {} | {'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/8579', 'https://mathoverflow.net', 'https://mathoverflow.net/users/290/']} | jdg_380337 |
stackexchange | llm_judgeable_groundtruth_similarity | 1617106 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to come up with an algorithm to compare two strings. It would register a match any words that contain the same letters. For example rent and tern would be equivalent because they both contain the letters r,e,n,t. EDIT I apologize for being so vague. The comparison is going to be made on two sets of a few thousands of words hundreds of times. This is only a small part of the overall code so I don't want it to bog everything down. For those who were asking yes overmatching would be very important for example rent would also match ternicate. EDIT 2 For a match like rent == ternicate, ternicate would not match rent. It is more like does word two contain the letters of word one. So if you have extra letters it would still be a match so long as the word contains all of the letters of the first word.
Now provide the response and nothing else.
| Okay, this is a really bad idea, but it's just so crazy it might work! Create a list of the first 26 prime numbers. primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, ...] For each letter of a word, find the corresponding prime number. A → 2, B → 3, C → 5, etc. Multiply these prime numbers together. You will end up with a (very large) number. Words that have the same letters will have the same number. Words with different letters are guaranteed to have different numbers. Why is that? Because we're multiplying prime numbers we will always end up with unique products for unique combinations of letters. The numbers can be decomposed back into their prime factors, and the factors tell us precisely which letters were in the original word. The order of the letters isn't preserved but which letters were in the word and how many there were is. For instance, take the words "face" and "cafe". FACE = 13 * 2 * 5 * 11 = 1430 CAFE = 5 * 2 * 13 * 11 = 1430 Ha! What could be more efficient than a simple integer comparison? ... Okay, no, maybe not. This is a little too ridiculous to actually use. It's neat though. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1617106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/185114/']} | jdg_380338 |
stackexchange | llm_judgeable_groundtruth_similarity | 14693244 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been customizing my UIBarButtonItems using UIAppearance, but I don't know how to change the style of the text for the back button. How is this done? I know how to set the background image: [[UIBarButtonItem appearance] setBackButtonBackgroundImage:[UIImage imageNamed:@"arrow-button-static.png"] forState:UIControlStateNormal barMetrics:UIBarMetricsDefault]; [[UIBarButtonItem appearance] setBackButtonBackgroundImage:[UIImage imageNamed:@"arrow-button-pressed.png"] forState:UIControlStateHighlighted barMetrics:UIBarMetricsDefault];
Now provide the response and nothing else.
| You can't change the text attributes of only the Back button. (Or at least, you can't do so globally using the appearance proxy.) Apple apparently intends that its text match that of other buttons in the navigation bar, with only its shape (background image) differing. As @DhruvGoel notes, you can set text attributes thusly: NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys: [UIColor redColor],UITextAttributeTextColor, nil];[[UIBarButtonItem appearance] setTitleTextAttributes:attributes forState:UIControlStateNormal]; Or, for only bar buttons in the navigation bar: [[UIBarButtonItem appearanceWhenContainedIn:[UINavigationBar class], nil] setTitleTextAttributes:attributes forState:UIControlStateNormal]; See the UIAppearance protocol docs for more details about how to customize appearance in specific parts of your view / view controller hierarchy. Though as I've said, this won't let you globally make Back button text different from that of other buttons in the same navigation bar. (You could go nuts directly changing each one as your view controllers appear, but that's probably not a great idea.) Oh, and sorry about that impeachment thing. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14693244', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/108512/']} | jdg_380339 |
stackexchange | llm_judgeable_groundtruth_similarity | 28327620 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I thought it would be great to have a comparison between _JAVA_OPTIONS and JAVA_TOOL_OPTIONS .I have been searching a bit for one, but I cannot find anything, so I hope we can find the knowledge here on Stackoverflow. JAVA_OPTS is included for completeness. It is not part of the JVM, but there is a lot of questions about it out in the wild. What I know: So far I have found out that: JAVA_OPTS is not used by the JDK, but by a bunch of other apps (see this post ). JAVA_TOOL_OPTIONS and _JAVA_OPTIONS are ways to specify JVM arguments as an environment variable instead of command line parameters. The are picked up by at least java and javac They have this precedence: _JAVA_OPTIONS (overwrites the others) Command line parameters JAVA_TOOL_OPTIONS (is overwritten by the others) What I would like to know Are there any official documentation comparing JAVA_TOOL_OPTIONS and _JAVA_OPTIONS Are there any other differences between JAVA_TOOL_OPTIONS and _JAVA_OPTIONS (except from precedence). Which executables pick up JAVA_TOOL_OPTIONS and _JAVA_OPTIONS (in addition to java and javac ) Any limitation on what can be included on JAVA_TOOL_OPTIONS and _JAVA_OPTIONS Official Documentation I have not been able to find any documentation about _JAVA_OPTIONS . The documentation for JAVA_TOOL_OPTIONS does not shed much light on the difference: Since the command-line cannot always be accessed or modified, for example in embedded VMs or simply VMs launched deep within scripts, a JAVA_TOOL_OPTIONS variable is provided so that agents may be launched in these cases. ... Example script This is the code I used to figure this out. Console output is included as comments: export JAVA_OPTS=foobarexport JAVA_TOOL_OPTIONS= export _JAVA_OPTIONS="-Xmx512m -Xms64m"java -version # Picked up JAVA_TOOL_OPTIONS: # Picked up _JAVA_OPTIONS: -Xmx512m -Xms64m# java version "1.7.0_40"OpenJDK Runtime Environment (IcedTea 2.4.1) (suse-3.41.1-x86_64)OpenJDK 64-Bit Server VM (build 24.0-b50, mixed mode)javac -version# Picked up JAVA_TOOL_OPTIONS: # Picked up _JAVA_OPTIONS: -Xmx512m -Xms64m# javac 1.7.0_40export JAVA_TOOL_OPTIONS="-Xmx1 -Xms1"export _JAVA_OPTIONS="-Xmx512m -Xms64m"javac -version# Picked up JAVA_TOOL_OPTIONS: -Xmx1 -Xms1# Picked up _JAVA_OPTIONS: -Xmx512m -Xms64m# javac 1.7.0_40export JAVA_TOOL_OPTIONS="-Xmx512m -Xms64m"export _JAVA_OPTIONS="-Xmx1 -Xms1"javac -version# Picked up JAVA_TOOL_OPTIONS: -Xmx512m -Xms64m# Picked up _JAVA_OPTIONS: -Xmx1 -Xms1# Error occurred during initialization of VM# Too small initial heapexport JAVA_TOOL_OPTIONS="-Xmx1 -Xms1"export _JAVA_OPTIONS=java -Xmx512m -Xms64m -version# Picked up JAVA_TOOL_OPTIONS: -Xmx1 -Xms1# Picked up _JAVA_OPTIONS: # java version "1.7.0_40"# OpenJDK Runtime Environment (IcedTea 2.4.1) (suse-3.41.1-x86_64)# OpenJDK 64-Bit Server VM (build 24.0-b50, mixed mode)export JAVA_TOOL_OPTIONS=export _JAVA_OPTIONS="-Xmx1 -Xms1"java -Xmx512m -Xms64m -version# Picked up JAVA_TOOL_OPTIONS: # Picked up _JAVA_OPTIONS: -Xmx1 -Xms1# Error occurred during initialization of VM# Too small initial heap
Now provide the response and nothing else.
| You have pretty much nailed it except that these options are picked up even if you start JVM in-process via a library call. The fact that _JAVA_OPTIONS is not documented suggests that it is not recommended to use this variable, and I've actually seen people abuse it by setting it in their ~/.bashrc . However, if you want to get to the bottom of this problem, you can check the source of Oracle HotSpot VM (e.g. in OpenJDK7 ). You should also remember that there is no guarantee other VMs have or will continue to have support for undocumented variables. UPDATE 2015-08-04: To save five minutes for folks coming from search engines, _JAVA_OPTIONS trumps command-line arguments, which in turn trump JAVA_TOOL_OPTIONS . | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/28327620', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/722929/']} | jdg_380340 |