source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 18822777 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an array of objects with names, addresses, tel. nos, etc. I want to be able to search the array for any occurrence of a term - whether in the name field, the address field, etc. I have something like this in mind : -(void)filterContentForSearchText:(NSString*)searchText scope:(NSString*)scope { // Update the filtered array based on the search text and scope. // Remove all objects from the filtered search array [self.searchResults removeAllObjects]; // Filter the array using NSPredicate NSPredicate *predicate = [NSPredicate predicateWithFormat:@"SELF contains[c] %@",searchText]; searchResults = [NSMutableArray arrayWithArray:[contactArray filteredArrayUsingPredicate:predicate]];} This causes an exception "Can't use in/contains operator with collection". UPDATE. I can now search on up to three fields. When I add a fourth (in any sequence), I get this exception: "Unable to parse the format string ..." The Predicate code is now: NSPredicate *predicate = [NSPredicate predicateWithFormat:@"SELF.narrative contains[c] %@ OR SELF.category contains[c] %@ OR SELF.date contains[c] OR SELF.name contains[c] %@", searchText, searchText, searchText, searchText]; searchResults = [NSMutableArray arrayWithArray:[allDreams filteredArrayUsingPredicate:predicate]]; Is three a limit of predicate search fields? How do I get around this? Thanks again.
Now provide the response and nothing else.
| Just use a predicate string that checks for them all: @"name contains[cd] %@ OR address contains[cd] %@" you can add as many as you want. The only downside is that you'll need to add the same search string for each field you want to test, which can seem a bit ugly. If your objects are dictionaries, then there is a way to truly search all values without necessarily knowing their names at compile time, using a subquery. It works like this: @"subquery(self.@allValues, $av, $av contains %@).@count > 0" It uses the @allValues special key (or method call if you prefer) for dictionary objects and uses that to filter any value that contains your search string. If any is found (i.e., the count is positive), the object is included in the results. Notice that this will examine all values indiscriminately, even those that you don't want to include if you have any in your dictionary. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18822777', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/542019/']} | jdg_380341 |
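The same any-field search is easy to sketch outside Objective-C. Here is a rough Python analogue of the subquery idea above (the record shape and field names are made up for illustration, not taken from the question):

```python
def filter_records(records, term):
    """Keep the dicts in which any value contains term, case-insensitively.

    Mirrors: subquery(self.@allValues, $av, $av contains[c] %@).@count > 0
    """
    needle = term.lower()
    return [r for r in records
            if any(needle in str(v).lower() for v in r.values())]

contacts = [
    {"name": "Alice", "address": "12 Oak St", "tel": "555-0101"},
    {"name": "Bob", "address": "34 Alice Ave", "tel": "555-0102"},
    {"name": "Carol", "address": "56 Pine Rd", "tel": "555-0103"},
]
hits = filter_records(contacts, "alice")
```

As with the predicate version, every value is examined indiscriminately, so strip out any fields you do not want searched before calling it.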
stackexchange | llm_judgeable_groundtruth_similarity | 1186026 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I automatically create a WordPress page (for example, when plugin is activated)?
Now provide the response and nothing else.
| Use wp_insert_post() , which can insert pages as well: http://codex.wordpress.org/Function_Reference/wp_insert_post See post_type below.
$post = array(
 'ID' => [ <post id> ] //Are you updating an existing post?
 'menu_order' => [ <order> ] //If new post is a page, sets the order should it appear in the tabs.
 'page_template' => [ <template file> ] //Sets the template for the page.
 'comment_status' => [ 'closed' | 'open' ] // 'closed' means no comments.
 'ping_status' => [ ? ] //Ping status?
 'pinged' => [ ? ] //?
 'post_author' => [ <user ID> ] //The user ID number of the author.
 'post_category' => [ array(<category id>, <...>) ] //Add some categories.
 'post_content' => [ <the text of the post> ] //The full text of the post.
 'post_date' => [ Y-m-d H:i:s ] //The time post was made.
 'post_date_gmt' => [ Y-m-d H:i:s ] //The time post was made, in GMT.
 'post_excerpt' => [ <an excerpt> ] //For all your post excerpt needs.
 'post_name' => [ <the name> ] // The name (slug) for your post
 'post_parent' => [ <post ID> ] //Sets the parent of the new post.
 'post_password' => [ ? ] //password for post?
 'post_status' => [ 'draft' | 'publish' | 'pending' ] //Set the status of the new post.
 'post_title' => [ <the title> ] //The title of your post.
 'post_type' => [ 'post' | 'page' ] //Sometimes you want to post a page.
 'tags_input' => [ '<tag>, <tag>, <...>' ] //For tags.
 'to_ping' => [ ? ] //?
);
// Insert the post into the database
wp_insert_post( $post ); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1186026', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/134972/']} | jdg_380342 |
stackexchange | llm_judgeable_groundtruth_similarity | 377017 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Python, is there a portable and simple way to test if an executable program exists? By simple I mean something like the which command which would be just perfect. I don't want to search PATH manually or something involving trying to execute it with Popen & al and see if it fails (that's what I'm doing now, but imagine it's launchmissiles )
Now provide the response and nothing else.
| Easiest way I can think of: def which(program): import os def is_exe(fpath): return os.path.isfile(fpath) and os.access(fpath, os.X_OK) fpath, fname = os.path.split(program) if fpath: if is_exe(program): return program else: for path in os.environ["PATH"].split(os.pathsep): exe_file = os.path.join(path, program) if is_exe(exe_file): return exe_file return None Edit : Updated code sample to include logic for handling case where provided argument is already a full path to the executable, i.e. "which /bin/ls". This mimics the behavior of the UNIX 'which' command. Edit : Updated to use os.path.isfile() instead of os.path.exists() per comments. Edit : path.strip('"') seems like the wrong thing to do here. Neither Windows nor POSIX appear to encourage quoted PATH items. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/377017', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/38796/']} | jdg_380343 |
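Worth noting alongside the hand-rolled version: since Python 3.3 the standard library covers this directly with shutil.which . A small self-checking sketch (the tool name is invented, and the explicit path argument keeps the lookup independent of the machine's real PATH):

```python
import os
import shutil
import stat
import tempfile

# Build a throwaway directory holding one executable file.
tmpdir = tempfile.mkdtemp()
exe_path = os.path.join(tmpdir, "mytool")
with open(exe_path, "w") as f:
    f.write("#!/bin/sh\necho hi\n")
os.chmod(exe_path, os.stat(exe_path).st_mode | stat.S_IXUSR)

# shutil.which accepts an explicit search path, so this does not
# depend on whatever happens to be installed on the machine.
found = shutil.which("mytool", path=tmpdir)
missing = shutil.which("no-such-tool", path=tmpdir)
```

Like the which() above, it returns the full path on success and None otherwise.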
stackexchange | llm_judgeable_groundtruth_similarity | 350444 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a picture of a schematic that has these legends on the resistors. What does N/F stand for? And is it also applicable to other electronic components?
Now provide the response and nothing else.
| Not fitted. In this case, specifically, the designer wanted to be able to choose whether IN+_2 and IN-_2 should each be connected to either IIN_2A, IIN_2B, or IIN_2C. That's why they used a combination of zero-ohm resistors (which are like wires) and N/F parts. By selectively mounting zero-ohm resistors to different pads, it allows the designer to reuse the board for different scenarios, or with unknown future configurations. In the default configuration specified by this schematic, IIN+_2 is directly attached to IIN_2A, and IIN-_2 is directly attached to IIN_2C and IIN_2B. Because R64 is connected to ground, I assume there was an optional voltage-divider (or digital pull-down) resistor on the design, too. | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/350444', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/155872/']} | jdg_380344 |
stackexchange | llm_judgeable_groundtruth_similarity | 965220 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
While discussing modular forms associated to different subgroups $\Gamma$ of $SL(2,\mathbb{Z})$, there appeared to be a heuristic relationship between the index $[SL(2,\mathbb{Z}) \colon \Gamma]$ and the area of the Riemann surface $\Gamma/\mathbb{H}$, where $\mathbb{H}$ denote the hyperbolic plane. Specifically, it appears that when the hyperbolic area of the fundamental domain of $\Gamma$ is infinite, then $\Gamma$ has infinite index in $SL(2,\mathbb{Z})$. Are any results that describe such a relationship? And if so, can we say anything for more general subgroups of $SL(2,\mathbb{R})$? (That is, for subgroups not necessarily contained in $SL(2,\mathbb{Z})$.) Specifically, I'm wondering about subgroups $G(\lambda) < SL(2,\mathbb{R})$ generated by the isometries $z \mapsto z +\lambda$ and $z \mapsto -1/z$, for $\lambda \in \mathbb{R}$. Edit : @hunter has answered in the case where $\Gamma$ is a subgroup of $SL(2,\mathbb{Z})$. However, it is still not clear how this may generalize to subgroups of $SL(2,\mathbb{R})$.
Now provide the response and nothing else.
| In general given discrete subgroups $\Gamma_2 < \Gamma_1 < PSL(2,\mathbb{R}) = \text{Isom}(\mathbb{H}^2)$, there is an equation$$\text{Area}(\mathbb{H}^2 / \Gamma_2) = [\Gamma_1:\Gamma_2] \cdot \text{Area}(\mathbb{H}^2 / \Gamma_1)$$So if $\text{Area}(\mathbb{H}^2 / \Gamma_1)$ is finite, as is the case when $\Gamma_1 = PSL(2,\mathbb{Z})$, then $\text{Area}(\mathbb{H}^2 / \Gamma_2)$ is finite if and only if $\Gamma_2$ has finite index in $\Gamma_1$. On the other hand if $\text{Area}(\mathbb{H}^2 / \Gamma_1)$ is infinite then $\text{Area}(\mathbb{H}^2 / \Gamma_2)$ is also infinite. The reason for this equation is that the inclusion $\Gamma_2 < \Gamma_1$ induces an orbifold covering map (a branched covering map) from $\mathbb{H}^2 / \Gamma_2$ to $\mathbb{H}^2 / \Gamma_1$ of degree equal to the index $[\Gamma_1:\Gamma_2]$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/965220', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/67412/']} | jdg_380345 |
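To make the area equation concrete for the modular group itself (a standard fact, not stated in the answer above): Gauss-Bonnet gives the modular surface hyperbolic area $\pi/3$, so for any subgroup of index $n$,

```latex
% Known: Area(H^2 / PSL(2,Z)) = pi/3 by Gauss-Bonnet.
% For Gamma_2 < PSL(2,Z) with [PSL(2,Z) : Gamma_2] = n:
\operatorname{Area}\left(\mathbb{H}^2/\Gamma_2\right)
  = n \cdot \operatorname{Area}\left(\mathbb{H}^2/\mathrm{PSL}(2,\mathbb{Z})\right)
  = \frac{n\pi}{3},
% which is finite if and only if the index n is finite.
```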
stackexchange | llm_judgeable_groundtruth_similarity | 5004148 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When computing the inverse for some square matrix A in MATLAB, using Ai = inv(A)% should be the same as:Ai = A^-1 MATLAB usually notifies me that this is not the most efficient way of inverting.So what's more efficient? If I have an equation system, using the /,\ operators probably is.But sometimes I need the inverse for other computations. What's the most efficient way to invert?
Now provide the response and nothing else.
| I would recommend to use svd (unless you are absolutely sure that your matrix is not ill-conditioned). Then, based on the singular values, you make your decisions on further actions to take. This may sound like an 'overkill' approach, but in the long run it will pay back. Now if your matrix A is actually invertible, then the pseudo inverse of A coincides with inv(A) ; however, if you are close to 'singularity' you'll easily make an appropriate decision on how to proceed to actually compute the pseudo inverse . Naturally these decisions will depend on your application. Added a straightforward example:
> A= randn(3, 2); A= [A A(:, 1)+ A(:, 2)]
A =
  -1.520342 -0.239380 -1.759722
   0.022604  0.381374  0.403978
   0.852420  1.521925  2.374346
> inv(A)
warning: inverse: matrix singular to machine precision, rcond = 0
ans =
   Inf Inf Inf
   Inf Inf Inf
   Inf Inf Inf
> [U, S, V]= svd(A)
U =
  -0.59828 -0.79038  0.13178
   0.13271 -0.25993 -0.95646
   0.79022 -0.55474  0.26040
S =
Diagonal Matrix
   3.6555e+000            0            0
             0  1.0452e+000            0
             0            0  1.4645e-016
V =
   0.433921  0.691650  0.577350
   0.382026 -0.721611  0.577350
   0.815947 -0.029962 -0.577350
> s= diag(S); k= sum(s> 1e-9) % simple thresholding based decision
k = 2
> Ainv= (U(:, 1: k)* diag(1./ s(1: k))* V(:, 1: k)')'
Ainv =
  -0.594055 -0.156258 -0.273302
   0.483170  0.193333  0.465592
  -0.110885  0.037074  0.192290
> A* Ainv
ans =
   0.982633  0.126045 -0.034317
   0.126045  0.085177  0.249068
  -0.034317  0.249068  0.932189
> A* pinv(A)
ans =
   0.982633  0.126045 -0.034317
   0.126045  0.085177  0.249068
  -0.034317  0.249068  0.932189 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5004148', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/281891/']} | jdg_380346 |
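The same thresholded reconstruction carries over almost verbatim to NumPy. A sketch, where the 1e-9 tolerance is the same arbitrary cutoff choice as in the MATLAB session (in practice you would tune it to your application):

```python
import numpy as np

def pinv_via_svd(A, tol=1e-9):
    # Keep only singular values above tol, then invert just those;
    # this mirrors the k = sum(s > 1e-9) thresholding decision.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol
    # V_k * diag(1/s_k) * U_k^T
    return (Vt[keep].T * (1.0 / s[keep])) @ U[:, keep].T

# Rank-deficient 3x3 matrix: the third column is the sum of the first two.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0],
              [7.0, 8.0, 15.0]])
Ainv = pinv_via_svd(A)
```

The result satisfies the defining pseudo-inverse identity A Ainv A = A even though inv(A) would blow up.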
stackexchange | llm_judgeable_groundtruth_similarity | 15854043 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a ParseService, that I would like to mock in order test all the controllers that are using it, I have been reading about jasmine spies but it is still unclear for me. Could anybody give me an example of how to mock a custom service and use it in the Controller test? Right now I have a Controller that uses a Service to insert a book: BookCrossingApp.controller('AddBookCtrl', function ($scope, DataService, $location) { $scope.registerNewBook = function (book) { DataService.registerBook(book, function (isResult, result) { $scope.$apply(function () { $scope.registerResult = isResult ? "Success" : result; }); if (isResult) { //$scope.registerResult = "Success"; $location.path('/main'); } else { $scope.registerResult = "Fail!"; //$location.path('/'); } }); };}); The service is like this: angular.module('DataServices', []) /** * Parse Service * Use Parse.com as a back-end for the application. */ .factory('ParseService', function () { var ParseService = { name: "Parse", registerBook: function registerBook(bookk, callback) { var book = new Book(); book.set("title", bookk.title); book.set("description", bookk.Description); book.set("registrationId", bookk.RegistrationId); var newAcl = new Parse.ACL(Parse.User.current()); newAcl.setPublicReadAccess(true); book.setACL(newAcl); book.save(null, { success: function (book) { // The object was saved successfully. callback(true, null); }, error: function (book, error) { // The save failed. // error is a Parse.Error with an error code and description. 
callback(false, error); } }); } }; return ParseService; }); And my test so far look like this: describe('Controller: AddBookCtrl', function() { // // load the controller's module beforeEach(module('BookCrossingApp')); var AddBookCtrl, scope, book; // Initialize the controller and a mock scope beforeEach(inject(function($controller, $rootScope) { scope = $rootScope; book = {title: "fooTitle13"}; AddBookCtrl = $controller('AddBookCtrl', { $scope: scope }); })); it('should call Parse Service method', function () { //We need to get the injector from angular var $injector = angular.injector([ 'DataServices' ]); //We get the service from the injector that we have called var mockService = $injector.get( 'ParseService' ); mockService.registerBook = jasmine.createSpy("registerBook"); scope.registerNewBook(book); //With this call we SPY the method registerBook of our mockservice //we have to make sure that the register book have been called after the call of our Controller expect(mockService.registerBook).toHaveBeenCalled(); }); it('Dummy test', function () { expect(true).toBe(true); });}); Right now the test is failing: Expected spy registerBook to have been called. Error: Expected spy registerBook to have been called. What I am doing wrong?
Now provide the response and nothing else.
| What I was doing wrong is not injecting the Mocked Service into the controller in the beforeEach: describe('Controller: AddBookCtrl', function() { var scope; var ParseServiceMock; var AddBookCtrl; // load the controller's module beforeEach(module('BookCrossingApp')); // define the mock Parse service beforeEach(function() { ParseServiceMock = { registerBook: function(book) {}, getBookRegistrationId: function() {} }; }); // inject the required services and instantiate the controller beforeEach(inject(function($rootScope, $controller) { scope = $rootScope.$new(); AddBookCtrl = $controller('AddBookCtrl', { $scope: scope, DataService: ParseServiceMock }); })); it('should call registerBook Parse Service method', function () { var book = {title: "fooTitle"} spyOn(ParseServiceMock, 'registerBook').andCallThrough(); //spyOn(ParseServiceMock, 'getBookRegistrationId').andCallThrough(); scope.registerNewBook(book); expect(ParseServiceMock.registerBook).toHaveBeenCalled(); //expect(ParseServiceMock.getBookRegistrationId).toHaveBeenCalled(); });}); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15854043', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/732537/']} | jdg_380347 |
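The inject-a-fake pattern in that fix is not Angular-specific; for comparison, a minimal Python version of the same idea using unittest.mock (class and method names are invented to mirror the controller above):

```python
from unittest import mock

class AddBookController:
    # Toy controller: persistence is delegated to an injected service,
    # precisely so that tests can hand in a fake.
    def __init__(self, data_service):
        self._svc = data_service

    def register_new_book(self, book):
        self._svc.register_book(book)

service = mock.Mock()                      # stands in for ParseService
ctrl = AddBookController(service)          # inject the mock, as in beforeEach
ctrl.register_new_book({"title": "fooTitle"})
service.register_book.assert_called_once_with({"title": "fooTitle"})
```

The failure in the question came from spying on a service instance the controller never received; injecting the same object you spy on is the whole trick.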
stackexchange | llm_judgeable_groundtruth_similarity | 6266649 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Yii, is there a way to render a single action using a different layout than that defined for the controller? I have an action that I would like to format differently from the rest, and it's not clear from the documentation if that's possible.
Now provide the response and nothing else.
| I believe on that action you could just set the $layout variable. public function actionYourAction(){ $this->layout = 'nameOfYourAltLayout';} The instructions in the link below indicate that you will have to set this variable for every action since you can't just set the default public variable and expect the other actions to default back to this. http://www.yiiframework.com/wiki/28/how-to-implement-multiple-page-layouts-in-an-application/ ::Edit:: It seems the best practice here is to define the $layout variable in the view script for the particular action that calls it. For example, if your action calls viewscriptone.php then the viewscriptone view file would contain: $this->layout = 'nameOfYourAltLayout'; It makes more sense to override here rather than in the controller action. However, as LDG said, if the layout is conditional you should probably keep it in the controller. This information can still be found in the link above under the "Using Layouts" section of the page. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6266649', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/117014/']} | jdg_380348 |
stackexchange | llm_judgeable_groundtruth_similarity | 16987362 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to get Vim to highlight non-ASCII characters. Is there an available setting, regex search pattern, or plugin to do so?
Now provide the response and nothing else.
| Using a range in a [] character class in your search, you ought to be able to exclude the ASCII character range (written in hexadecimal), therefore highlighting (assuming you have hlsearch enabled) all other characters lying outside the ASCII range: /[^\x00-\x7F] This will do a negative match (via [^] ) for characters between ASCII 0x00 and ASCII 0x7F (0-127), and appears to work in my simple test. For extended ASCII, of course, extend the range up to \xFF instead of \x7F using /[^\x00-\xFF] . You may also express it in decimal via \d : /[^\d0-\d127] If you need something more specific, like exclusion of non-printable characters, you will need to add those ranges into the character class [] . | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/16987362', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/154688/']} | jdg_380349 |
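The same character class works in most regex engines, so you can sanity-check it outside Vim. A quick Python check (the sample text is arbitrary):

```python
import re

# Same class as the Vim pattern /[^\x00-\x7F]/ : anything outside ASCII.
non_ascii = re.compile(r"[^\x00-\x7F]")

text = "caf\u00e9 con leche \u2013 plain ASCII otherwise"
matches = non_ascii.findall(text)   # picks out the e-acute and the dash
```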
stackexchange | llm_judgeable_groundtruth_similarity | 10874952 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I think Coffeescript is an awesome language! I was looking for some projects / issues / features that add Static Analysis to Coffeescript. However after some searching I found that the Coffeescript faq and this page suggest that static analysis might not be viable. I was wondering that if there is a fundamental issue in implementing static analysis / static type checking in Coffeescript, because of which something of this sort does not already exist in the compiler? Also, is it something that is not possible to do for non trivial checks but might work only for straightforward analysis? When I say straightforward I mean checking for trivial stuff like, whether the user has defined a function twice by the same name (in a class) or on the top level (or perhaps on the top level in a collection of related .coffee files). I would appreciate if anyone could please point out some examples that show why implementing static analysis / type checking is not straightforward / possible / worth spending time on? Thank you very much!
Now provide the response and nothing else.
| This answer is a bit of a brain dump since I'm interested in this also. Hope it helps. I use the Google Closure Compiler to statically analyze the code that CoffeeScript generates. It has a really good static analyzer, and I'm not sure if there's a good reason to reinvent the wheel here. The easy way is to just write the annotations by hand: ###* * @param {number} x * @param {number} y * @return {number}###adder = (x, y) -> x + y It's a bit verbose, but on the other hand you're borrowing the static analysis abilities of the closure compiler which is really powerful and is able to check a lot. I actually write type annotations in a slightly more concise way, then have a script to rewrite the coffee file. My code ends up looking like this: #! {number} x {number} y @return {number}adder = (x, y) -> x + y I'm sure you can see that the rewriter is pretty straightforward. A quick note before I move on. Be sure to compile your code with -b (bare) if you're running it through the closure compiler. The closure compiler is pretty good, but it's not smart enough to do data flow analysis. CoffeeScript wraps your code in an anonymous function by default, which will trip up the compiler. Another option along the same path (this would break compatibility with CoffeeScript, but would be a lot cooler) would be to have the Coffee compiler compile something like this: adder = (number x, number y): number -> x + y into JS like this: /*** * @param {number} x * @param {number} y * @return {number */var adder = function(x, y) { return x + y;}; which could then be fed into the closure compiler on a compile - if there were no errors the compiler could then strip all the comments out. Indeed, this guy appeared to be doing exactly this. Sadly, his work seems to be in an incomplete state. In all of these cases, we defer the hard work - static typechecking - to the closure compiler. 
If you don't want to do this, I'd understand, but it'd be tough to convince me that it's worthwhile to build a whole new static analysis tool from scratch. :) EDIT a year later: I just use typescript these days. :) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10874952', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/615465/']} | jdg_380350 |
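A rewriter like the one described needs only a few lines. Here is a hypothetical Python sketch of the "#!" annotation-to-JSDoc step; the annotation grammar (alternating type/name pairs plus an optional @return) is an assumption for illustration, not the answer author's actual script:

```python
import re

LINE = re.compile(r"^#!\s*(.*)$", re.M)

def rewrite(src):
    """Expand each '#! {type} name ... @return {type}' line into a
    CoffeeScript block comment carrying Closure-style JSDoc tags."""
    def repl(m):
        parts = m.group(1).split()
        out = ["###*"]
        i = 0
        while i < len(parts):
            if parts[i].startswith("@return"):
                out.append(" * @return %s" % parts[i + 1])
            else:
                out.append(" * @param %s %s" % (parts[i], parts[i + 1]))
            i += 2
        out.append("###")
        return "\n".join(out)
    return LINE.sub(repl, src)

src = "#! {number} x {number} y @return {number}\nadder = (x, y) -> x + y"
out = rewrite(src)
```

Running the rewritten .coffee through the CoffeeScript compiler (with -b) and then the Closure Compiler gives you the type checks for free.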
stackexchange | llm_judgeable_groundtruth_similarity | 28217803 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So far as I know, whenever we run any program, the machine instructions of the program is loaded in RAM. Again, there are two regions of memory: stack and heap. My question is: Which region of memory the machine instruction stored in? stack or heap? I learnt that the following program gives a runtime error though there is no variable declared inside the function. The reason behind this is the overflow of stack. Then should I assume that the machines instructions of the function is stored in stack? int func() { return func(); }
Now provide the response and nothing else.
| Neither, as it is not dynamically allocated the way stack and heap are. The executable loader loads the executable (.text) and any static data it contains, like the initial values of global variables (.data / .rodata), into an unused RAM area. It then sets up any zero-initialized memory the executable asked for (.bss). Only then is the stack set up for main() . Stack memory is allocated on the stack if you enter another function, holding the return address, function arguments, and any locally declared variables as well as any memory allocated via alloca() .[1] Memory is released when you return from the function. Heap memory is allocated by malloc() , calloc() , or realloc() . It gets released when you free() or realloc() it. The RAM used for the executable and its static data does not get released until the process terminates. Thus, stack and heap are, basically, under control of the application. The memory of the executable itself is under control of the executable loader / the operating system. In appropriately equipped operating systems, you don't even have write access to that memory. Regarding your edited question, no. (Bad style, editing a question to give it a completely new angle.) The executable code remains where it was loaded. Calling a function does not place machine instructions on the stack. The only thing your func() (a function taking no arguments) places on the stack is the return address , a pointer that indicates where execution should continue once the current function returns. Since none of the calls ever returns, your program keeps adding return addresses on the stack, until that cannot hold any more. This has nothing to do with machine code instructions at all. [1]: Note that none of this is actually part and parcel of the C language standard, but implementation-defined , and may differ -- I presented a simplified version of affairs. For example, function parameters might be passed in CPU registers instead of on the stack. 
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28217803', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4215332/']} | jdg_380351 |
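The runaway recursion from the question is easy to reproduce in a memory-safe runtime, where the guard shows up as an exception instead of a crash. In CPython the interpreter counts frames and raises RecursionError before the machine stack is actually exhausted, but the failure mode is the same unbounded growth of call records (the limit value here is chosen only to keep the demo quick):

```python
import sys

def func():
    # Mirrors int func() { return func(); } from the question: each call
    # pushes a new frame and none is ever popped.
    return func()

sys.setrecursionlimit(1000)
try:
    func()
    overflowed = False
except RecursionError:
    overflowed = True
```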
stackexchange | llm_judgeable_groundtruth_similarity | 29665 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Steven Pinker's book Better Angels of Our Nature , he notes that Probability is a matter of perspective. Viewed at sufficiently close range, individual events have determinate causes. Even a coin flip can be predicted from the starting conditions and the laws of physics, and a skilled magician can exploit those laws to throw heads every time. Yet when we zoom out to take a wide-angle view of a large number of these events, we are seeing the sum of a vast number of causes that sometimes cancel each other out and sometimes align in the same direction. The physicist and philosopher Henri Poincare explained that we see the operation of chance in a deterministic world either when a large number of puny causes add up to a formidable effect, or when a small cause that escapes our notice determines a large effect that we cannot miss. In the case of organized violence, someone may want to start a war; he waits for the opportune moment, which may or may not come; his enemy decides to engage or retreat; bullets fly; bombs burst; people die. Every event may be determined by the laws of neuroscience and physics and physiology. But in the aggregate, the many causes that go into this matrix can sometimes be shuffled into extreme combinations. (p. 209) I am particularly interested in the bolded sentence, but I give the rest for context. My question: are there statistical ways of describing the two processes that Poincare described? Here are my guesses: 1) "A large number of puny causes add up to a formidable effect." The "large number of causes" and "add up" sound to me like the central limit theorem . But in (the classical definition of) the CLT, the causes need to be random variables, not deterministic effects. Is the standard method here to approximate these deterministic effects as some sort of random variable? 2) "A small cause that escapes our notice determines a large effect that we cannot miss." 
It seems to me like you could think of this as some sort of hidden Markov model . But the (unobservable) state transition probabilities in an HMM are just that, probabilities, which is by definition once again not deterministic.
Now provide the response and nothing else.
| Interesting thought (+1). In cases 1) and 2), the problem is the same: we do not have complete information. And probability is a measure of the lack of information. 1) The puny causes may be purely deterministic, but which particular causes operate is impossible to know by a deterministic process. Think of molecules in a gas. The laws of mechanics apply, so what is random here? The information hidden from us: where is which molecule with what speed. So the CLT applies, not because there is randomness in the system, but because there is randomness in our representation of the system . 2) There is a time component in HMM that is not necessarily present in this case. My interpretation is the same as before, the system may be non-random, but our access to its state has some randomness. EDIT : I don't know if Poincare was thinking of a different statistical approach for these two cases. In case 1) we know the variables, but we cannot measure them because there are too many and they are too small. In case 2) we don't know the variables. Both ways, you end up making assumptions and modeling the observable the best we can, and quite often we assume Normality in case 2). But still, if there was one difference, I think it would be emergence . If all systems were determined by sums of puny causes then all random variables of the physical world would be Gaussian. Clearly, this is not the case. Why? Because scale matters. Why? Because new properties emerge from interactions at smaller scale, and these new properties need not be Gaussian. Actually, we have no statistical theory for emergence (as far as I know) but maybe one day we will. Then it will be justified to have different statistical approaches for cases 1) and 2). | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/29665', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/2073/']} | jdg_380352 |
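Point 1) is easy to demonstrate numerically: sums of many puny contributions look Gaussian even though each draw comes from a deterministic PRNG. A small sketch (the sizes and seed are arbitrary):

```python
import random
import statistics

random.seed(42)

def noisy_sum(n_causes=400):
    # One observation = the sum of many small independent contributions,
    # each generated by a deterministic pseudo-random process.
    return sum(random.uniform(-1, 1) for _ in range(n_causes))

samples = [noisy_sum() for _ in range(2000)]
mu = statistics.fmean(samples)
sigma = statistics.stdev(samples)
# CLT prediction: mean 0, sigma = sqrt(n_causes / 3) ~= 11.5
```

A histogram of samples is close to a bell curve, with mean and spread matching the CLT prediction, despite there being no "true" randomness anywhere in the program.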
stackexchange | llm_judgeable_groundtruth_similarity | 318226 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have a custom object, Student : public class Student{ public int _id; public String name; public int age; public float score;} And a class, Window , that is used to show information of a Student : public class Window{ public void showInfo(Student student);} It looks quite normal, but I found Window is not quite easy to test individually, because it needs a real Student object to call the function. So I try to modify showInfo so that it does not accept a Student object directly: public void showInfo(int _id, String name, int age, float score); so that it is easier to test Window individually: showInfo(123, "abc", 45, 6.7); But I found the modified version has another problems: Modify Student (e.g.:add new properties) requires modifying the method-signature of showInfo If Student had many properties, the method-signature of Student would be very long. So, using custom objects as parameter or accept each property in objects as parameter, which one is more maintainable?
Now provide the response and nothing else.
| Using a custom object to group related parameters is actually a recommended pattern. As a refactoring, it is called Introduce Parameter Object . Your problem lies elsewhere. First, generic Window should know nothing about Student. Instead, you should have some kind of StudentWindow that knows about only displaying Students . Second, there is absolutely no problem about creating Student instance to test StudentWindow as long as Student doesn't contain any complex logic that would drastically complicate testing of StudentWindow . If it does have that logic then making Student an interface and mocking it should be preferred. | {} | {'log_upvote_score': 8, 'links': ['https://softwareengineering.stackexchange.com/questions/318226', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/196142/']} | jdg_380353 |
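For comparison, the same shape in Python: a parameter object plus a display class that knows only about Students. The names mirror the question; the formatting string is invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Student:
    id: int
    name: str
    age: int
    score: float

class StudentWindow:
    """Knows only how to display Students, not windows in general."""
    def show_info(self, s: Student) -> str:
        return f"{s.id}: {s.name}, age {s.age}, score {s.score}"

window = StudentWindow()
line = window.show_info(Student(123, "abc", 45, 6.7))
```

Constructing a Student for the test is one line, and adding a property to Student never touches the show_info signature.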
stackexchange | llm_judgeable_groundtruth_similarity | 69526 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My company is working on developing a product that will go into commercial freezers, so my boss asked me to provide operating temperature specifications for the product. I can find "operating range" temperatures listed for everything but the PCB itself, which is just plain old FR-4. Wikipedia helpfully lists "Temperature Index" (whatever that means) as 140 °C, but there are no indications of a minimum temperature. I'm not really worried, as I'm sure that the other components on the board will be the limiting factors, but for the sake of completeness, I would like to have it listed. Does anyone know the minimum operating temperature of FR-4? (And what would the failure mode be?)
Now provide the response and nothing else.
| FR4 PCB is glass-reinforced epoxy laminate. Several research studies have been published on the effect of low temperatures on such material. A specific quote from the paper " Dynamic failure behavior of glass/epoxy composites under low temperature using Charpy impact test method " (Shokrieh et al ): it is found that failure mechanism changes from matrix cracking at room temperature to delamination and fiber breakage at low temperatures. This study applied a temperature of -30 °C, not low enough for spontaneous crystalline brittleness. Another study, " The Influence of High and Low Temperatures on the impact properties of glass-epoxy composites " (Putic et al ), takes the temperature down to -50 °C, and finds the appearance of brittle cracks in the material at such temperatures. These are two key assumptions regarding the device touched upon in the question: the freezer temperature will not reach as low as those used in these studies, and there is limited risk of vibration or impact upon the actual circuit board, hence brittle cracks are unlikely but not ruled out. If either of these assumptions is invalid, the material in question needs to be reconsidered. There do exist special-purpose industrial ceramics / alumina PCB substrates designed specifically for extremely low temperatures, typically used for device deployment in space or in cryogenic equipment. Those materials might be more suitable in this case. The point to note in such environments is the possible fracture of packages, casings and solder junctions for the electronic components on the board, not just the PCB itself. A commonly recommended procedure for deploying circuit boards in extreme temperature conditions is to bring the device to the desired temperature in slow stages , thus avoiding any rapid shrinkage or thermal shock to board or parts. 
| {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/69526', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/23997/']} | jdg_380354 |
stackexchange | llm_judgeable_groundtruth_similarity | 285873 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The topological space $A$ is called homotopy dominated by the space $X$ if there are maps $f:A\longrightarrow X$ and $g:X\longrightarrow A$ so that $g\circ f\simeq id_A$. Question: Suppose that $X_1$ and $X_2$ are two polyhedra. If $A$ is homotopy dominated by $X_1\vee X_2$, then is $A$ of the form $A_1 \vee A_2$ (up to homotopy equivalent) where $A_i$ is homotopy dominated by $X_i$ for $i=1,2$?
Now provide the response and nothing else.
| The answer here is certainly yes under many sets of mild side hypotheses. For example, once upon a time, I wrote a paper with Frank Adams (!) that seems of some relevance: [J.F.Adams and N.J.Kuhn, Atomic spaces and spectra , Proc. Edin. Math. Soc 32 (1989), 473-481]. We show that if $X$ is a space or spectrum that is $p$--complete and of finite type, then the monoid of homotopy classes of based self maps $[X,X]$ is a profinite monoid. In this case, one concludes that, if $X$ has no nontrivial retracts then every self map is either invertible or topologically nilpotent. Let's apply this to the stated question, under the hypotheses $A$ has no nontrivial retracts and is complete of finite type. Consideration of homology shows that at least one of the two maps $A \rightarrow X \rightarrow X_i \rightarrow X \rightarrow A$ is not topologically nilpotent, and so must be an equivalence. Thus $A$ is a retract of either $X_1$ or $X_2$, i.e. the question has an affirmative answer in this case. More generally, you are asking something closely related to a Krull-Schmidt type theorem for spaces: if a space $X$ is written as a wedge of `indecomposable' spaces in two different ways, must the pieces correspond? Issues here include: need a space be written in this way? and also: What is the difference between retracts and wedge summands? If $X_1 \vee X_2$ is a suspension, and suitably complete, then certainly a Krull-Schmidt theorem holds and the answer is yes. In the more algebraic world of spectra, some of us were using these ideas all the time in the early 1980's in papers about stable splittings of classifying spaces. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/285873', 'https://mathoverflow.net', 'https://mathoverflow.net/users/114476/']} | jdg_380355 |
stackexchange | llm_judgeable_groundtruth_similarity | 20647652 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a set of read commands that I have to run in sequence. If any fails, processing stops. readCommands is an array of read functions... async.waterfall(readCommands, function(err) { if (err) { console.log(err); res.send(500, { error : err.toString() }); return; } else { console.log('we determined the state to which we have to rollback to'); }}); at this point, I know what I started with. Now I want to do the write commands async.waterfall(writeCommands, function(err) { if (err) { console.log(err); res.send(500, { error : err.toString() }); return; } else { console.log('we succeeded with all the write commands...'); }}); The entries of the readCommands and writeCommands arrays are radically different, so it is very hard to combine them. But I DO need the first waterfall to finish before I go to the next one. How do I make a "waterfall" out of the existing two?
Now provide the response and nothing else.
| It looks like that you're just lacking Java Heap Space. It's normal as JMeter by default has a very low heap allocation. It's controllable via -Xmx parameter. There is a line in jmeter.bat or jmeter.sh script which recommends launching JMeter as JVM_ARGS="-Xms512m -Xmx512m" jmeter.sh You can try increasing maximum heap until you'll stop receiving these errors. I'd recommend to set it to something like 80% of your hardware RAM. Also consider JMeter Performance and Tuning Tips article to make sure that you disable memory-consuming listeners, don't overscript, use post-processors efficiently, etc. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20647652', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/496136/']} | jdg_380356 |
stackexchange | llm_judgeable_groundtruth_similarity | 41615038 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a simple array of objects and want to get the location attribute of the first element: const companies = [ { name: 'Google', location: 'Mountain View' }, { name: 'Facebook', location: 'Menlo Park' }, { name: 'Uber', location: 'San Francisco' }];const [{ location }] = companies; // expected: Mountain View but got Uncaught SyntaxError: Identifier //'location' has already been declared at <anonymous>:1:1 What am I missing?
Now provide the response and nothing else.
| You cannot redefine the global variable location as a constant. Had you used let [{location}] = companies , you wouldn't have got an error, but you would have had the wrong behavior as it would try to assign a value to window.location and redirect the page. Possible solutions: Change the variable name (along with the property name) Wrap your code in a block (IIFE is not necessary in this case since we are using ES6) You can keep the property name and change the variable name, see the last example const companies = [ {name: 'Google',loc: 'Mountain View'}, {name: 'Facebook', loc: 'Menlo Park'}, {name: 'Uber', loc: 'San Francisco'}];const [{loc}] = companies;console.log(loc);// Block scope{ const companies = [ {name: 'Google', location: 'Mountain View'}, {name: 'Facebook', location: 'Menlo Park'}, {name: 'Uber', location: 'San Francisco'} ]; const [{location}] = companies; console.log(location);}// Variable name doesn't have to be the same as propertyconst [{ location: loca }] = [{location: 'Mountain View'}]console.log(loca); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41615038', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1323481/']} | jdg_380357 |
stackexchange | llm_judgeable_groundtruth_similarity | 42358227 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to set up my router config using a Resolve that returns an Observable from a BehaviorSubject. I've tried this in both angular 4.0.0-beta8 and angular 2.4.8+router 3.4.8 Here's my service: @Injectable()export class MyService { private _data: BehaviorSubject<Array<string>> = new BehaviorSubject(undefined); constructor() {} public getData(): Observable<Array<string>> { this._data.next(['test1', 'test2', 'test3']); let asObservable = this._data.asObservable().delay(1000); asObservable.subscribe((myData) => { console.log([myData, 'this console message DOES show up']); }); // if I return here, my component's constructor and ngOnInit never fire // return asObservable; let fakeObservable = Observable.of(['test1', 'test2', 'test3']).delay(1000); fakeObservable.subscribe((fakeData) => { console.log([fakeData, 'this console message shows up']); }); console.log([asObservable, fakeObservable]); /* console log output Observable { _isScalar: false, operator: DelayOperator, source: Observable { _isScalar: false, source: BehaviorSubject { _isScalar: false, _value: ['test1', 'test2', 'test3'], closed: false, hasError: false, isStopped: false, observers: Array[1], thrownError: null, value: ['test1', 'test2', 'test3'] } } }, Observable { _isScalar: false, operator: DelayOperator, source: ScalarObservable { _isScalar: true, scheduler: null, value: ['test1', 'test2', 'test3'] } } */ return fakeObservable; // this WILL reach my component constructor and ngOnInit }} Here's my resolve @Injectable()export class MyResolver implements Resolve<Array<string>> { constructor(private myService: MyService) {} resolve(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<Array<string>>|undefined { return this.myService.getData(); }} Here's the router RouterModule.forChild([{ path: 'mypath', component: MyComponent, resolve: { data: MyResolver }}]); And here's the component: @Component({ selector: 'my-component', template: '<Span>My Component</span>'})export class 
MyComponent implements OnInit { constructor(private route: ActivatedRoute) { console.log('component constructor'); } ngOnInit(): void { console.log(this.route.snapshot.data['data']); // ['test1', 'test2', 'test3'] }} This is probably not the best way of designing the interaction between the resolve and the service, so I'm very open to suggestions there. However, I might go crazy if I don't figure out why BehaviorSubject.asObservable() doesn't work, but the mocked observable does work.
Now provide the response and nothing else.
| I thought about this one overnight, and realized that I was using the resolve incorrectly. The crux of the problem is that the router expects the resolve result to eventually be completed . The BehaviorSubject, even though it only has one value at a time , will never be done , because the value can always change. I changed this._data.asObservable() to this._data.asObservable().first() , and it started working. It seems so obvious now! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42358227', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1661937/']} | jdg_380358 |
stackexchange | llm_judgeable_groundtruth_similarity | 48601 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create a multiplicative partition function that would generate something like f[12](*{{2, 2, 3}, {4, 3}, {6, 2}}*)f[24](*{{2, 2, 2, 3}, {2, 2, 6}, {4, 2, 3}, {8, 3}, {12, 2}, {4, 6}}*)f[48](*{{2, 2, 2, 2, 3}, {2, 2, 2, 6}, {4, 2, 2, 3}, {2, 2, 12}, {2, 3, 8}, {2, 4, 6}, {4, 4, 3}, {6, 8}, {4, 12}, {3, 16}, {2, 24}}*) I have got as far as n = 30;i = FactorInteger[n];r = Range[Length[i]];Join[{f = Flatten[Map[Table[i[[#]][[1]], {x, 1, i[[#]][[2]]}] &, r]]},Transpose[{d = DeleteDuplicates[f], Map[n/d[[#]] &, Range[Length[d]]]}]] but it is turning out to be more complicated than I thought. Is there a more efficient way of tackling this? Update Have got a little further, but still missing some: n = 48;i = FactorInteger[n];r = Range[Length[i]];f = Flatten[Map[Table[i[[#]][[1]], {x, 1, i[[#]][[2]]}] &, r]];p = Drop[Drop[DeleteDuplicates[Subsets[f]], 1], -1];d = Split[Drop[Drop[Reverse[Divisors[n]], 1], -1]];Drop[Map[Sort[Join[p[[#]], d[[#]]]] &, Range[Length[d]]], -1]
Now provide the response and nothing else.
| A slightly more compact form of belisarius 's answer (differs in that factors are pooped out in descending order of maximum factor in a set): f2[x_] := DeleteDuplicates[Sort /@ Map[Times @@ # &, SetPartitions[Flatten[ConstantArray @@@ FactorInteger[x]]], {2}]] Tiny bit more efficient, but really in the noise - both are probably about as efficient as is reasonable: You'd have to prevent creation of duplicate subsets (for later multiplication) to improve much, and I'd venture doing that would eat more time than just using them and deleting dupes post-hoc... As to your comment question: A001055 ClearAll[c, r, ds, n, a];c[1, r_] := c[1, r] = 1;c[n_, r_] := c[n, r] = Module[{ds, i}, ds = Select[Divisors[n], 1 < # <= r &]; Sum[c[n/ds[[i]], ds[[i]]], {i, 1, Length[ds]}]];a[n_] := c[n, n];(* count *)a[1000](* calculate and get length to verify *)Length@f2[1000](*3131*) Vastly faster for large n than calculating fully... Update: Poking around A162247 g[lst_, p_] := Module[{t, i, j}, Union[Flatten[ Table[t = lst[[i]]; t[[j]] = p*t[[j]]; Sort[t], {i, Length[lst]}, {j, Length[lst[[i]]]}], 1], Table[Sort[Append[lst[[i]], p]], {i, Length[lst]}]]]; f[n_] := Module[{i, j, p, e, lst = {{}}}, {p, e} = Transpose[FactorInteger[n]]; Do[lst = g[lst, p[[i]]], {i, Length[p]}, {j, e[[i]]}]; lst];f[12](* {{12}, {2, 6}, {3, 4}, {2, 2, 3}} *) But much faster on larger numbers than f2 (but still not as fast as count-only...) | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/48601', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/9923/']} | jdg_380359 |
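The A001055 count-only recurrence in the answer above translates almost line-for-line out of Mathematica; a Python transcription (my own, with a naive divisor helper) might look like this:

```python
from functools import lru_cache

def divisors(n):
    # Naive divisor enumeration; fine for illustration.
    return [d for d in range(1, n + 1) if n % d == 0]

@lru_cache(maxsize=None)
def c(n, r):
    # c(n, r): number of factorizations of n into factors d with 1 < d <= r,
    # mirroring the memoized Mathematica definition c[n_, r_] above.
    if n == 1:
        return 1
    return sum(c(n // d, d) for d in divisors(n) if 1 < d <= r)

def a(n):
    return c(n, n)

# a(12) counts the factorizations {12}, {2,6}, {3,4}, {2,2,3}
count_12 = a(12)
```

As in the Mathematica version, counting this way avoids materializing the factorizations themselves, so it stays fast for much larger n.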
stackexchange | llm_judgeable_groundtruth_similarity | 3262863 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In my text book I'm asked to deduce a simpler expression for (p ∨ ¬q) ∧ (¬p ∨ ¬q) Looking at an equivalency table I did, it seems p ∨ ¬q gives the same results as (p ∨ ¬q) ∧ (¬p ∨ ¬q) . However I'm not sure how you would deduce this without the table, as in, if I was outrightly asked to write the above in simpler terms I wouldn't know where to begin. Am I understanding this correctly? My Table:
Now provide the response and nothing else.
| $$(p \lor \lnot q) \land (\lnot p \lor \lnot q) \iff (p \land \lnot p) \lor \lnot q \iff \lnot q.$$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3262863', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/659388/']} | jdg_380360 |
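The simplification (and the question's conjecture that p ∨ ¬q is equivalent) can be brute-force checked over all four truth assignments; a small Python sketch:

```python
from itertools import product

def original(p, q):
    return (p or not q) and (not p or not q)

def simplified(p, q):
    return not q

assignments = list(product([True, False], repeat=2))

# The answer's simplification holds on every assignment...
matches_not_q = all(original(p, q) == simplified(p, q) for p, q in assignments)

# ...while the question's candidate (p or not q) does not (it fails at p=True, q=True).
matches_p_or_not_q = all(original(p, q) == (p or not q) for p, q in assignments)
```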
stackexchange | llm_judgeable_groundtruth_similarity | 18490748 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say i have a menu like this... <ul class="main_menu"> <li class="item-1"> First top level item <ul> <li class="item-2">Item 2</li> <li class="item-3"> Item 3 <ul> <li class="item-4">Item 4</li> <li class="item-5">Item 5</li> </ul> </li> <li class="item-6">Item 6</li> </ul> </li> <li class="item-7"> Second top level item <ul> <li class="item-8">Item 8</li> <li class="item-9">Item 9</li> <li class="item-10">Item 10</li> <li class="item-11">Item 11</li> </ul> </li></ul> ...and in this case the sub menus can be at variable depth, how can i get the top level item if i only know an sub item? For example, i know the class item-5, then i want to get the class "item-1", or if i know the "item-11", i want "item-7". That is, wherever i am, i want the "Top level item X".
Now provide the response and nothing else.
| You can chain parents() into last() : var topListItem = $(".item-5").parents("li").last(); This solution works because parents() returns the ancestor elements ordered from the closest to the outer ones. If you want the original element to be part of the search (i.e. .item-7 returning itself), then you can chain parents() into addBack() , then into first() : var topListItem = $(".item-7").parents("li").addBack().first(); // '.item-7' addBack() adds the original element back into the chain of parents. In this case, we have to apply first() instead of last() because addBack() puts the elements in document order (so the top-level element will be the first one matched instead of the last). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18490748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1062130/']} | jdg_380361 |
stackexchange | llm_judgeable_groundtruth_similarity | 7205 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been reading 13 Things That Don't Make Sense: The Most Intriguing Scientific Mysteries of Our Time (2010) and Chapter 10 concentrates on the subject of Sex. The author argues that we do not know why humans, or animals, have evolved to reproduce sexually rather than asexually. Asexual reproduction is far more efficient according to the author, who backs his claims up by citing Richard Dawkins amongst others. He hasn’t done it yet. In his 2004 book The Ancestor’s Tale, he again admits defeat over the origin of sex. “To do justice to all the theories would take a book—it has already taken several . . . Yet no definitive verdict has emerged.” In the end, he settles for discussing a consequence of sexual reproduction, rather than explaining its origin. The question of what is so good about sex is one that “better scientists than I have spent book after book failing to answer,” Dawkins admits. He suggests we have a Ptolemaic view of sexual reproduction, which has been shaped over the centuries ( quote needed ) Has any progress been made to explain why animals reproduce sexually? Or is it an incorrect assertion in the first place, and we've known for a while?
Now provide the response and nothing else.
| TL;DR : Sexual reproduction, by providing avenue for recombination of genes, assists in mitigating harmful mutations. Neiman, Gery Hehman, Joseph T. Miller, John M. Logsdon, Jr., and Douglas R. Taylor. "Accelerated Mutation Accumulation in Asexual Lineages of a Freshwater Snail". Molecular Biology and Evolution, 2009; DOI: 10.1093/molbev/msp300 Abstract Sexual reproduction is both extremely costly and widespread relative to asexual reproduction, meaning that it must also confer profound advantages in order to persist. One theorized benefit of sex is that it facilitates the clearance of harmful mutations, which would accumulate more rapidly in the absence of recombination. The extent to which ineffective purifying selection and mutation accumulation are direct consequences of asexuality and whether the accelerated buildup of harmful mutations in asexuals can occur rapidly enough to maintain sex within natural populations, however, remain as open questions. We addressed key components of these questions by estimating the rate of mutation accumulation in the mitochondrial genomes of multiple sexual and asexual representatives of Potamopyrgus antipodarum, a New Zealand snail characterized by mixed sexual/asexual populations. We found that increased mutation accumulation is associated with asexuality and occurs rapidly enough to be detected in recently derived asexual lineages of P. antipodarum. Our results demonstrate that increased mutation accumulation in asexuals can differentially affect coexisting and ecologically similar sexual and asexual lineages. The accelerated rate of mutation accumulation observed in asexual P. antipodarum provides some of the most direct evidence to date for a link between asexuality and mutation accumulation and implies that mutational buildup could be rapid enough to contribute to the short-term evolutionary mechanisms that favor sexual reproduction. 
| {} | {'log_upvote_score': 4, 'links': ['https://skeptics.stackexchange.com/questions/7205', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/1033/']} | jdg_380362 |
stackexchange | llm_judgeable_groundtruth_similarity | 24010875 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an h264 file that I extracted from YUV format using SVC software. Now, I want to calculate the size of each GOP in the h264 file. We know that the size of a GOP is the distance between the two nearest I frames. here . Could you suggest to me how to calculate the GOP size of a given h264 file? Ideally it would be implemented in C/C++. Thank you
Now provide the response and nothing else.
| Well, just parsing the bitstream to find each I-frame is a bit tricky; among other things the encode order might (or might not) differ from the display order. One solution is to use http://www.ffmpeg.org/ffprobe.html from the ffmpeg-suite. Example: ffprobe -show_frames input.bin | grep key_frame key_frame=1 key_frame=0 key_frame=0 key_frame=0 key_frame=0 key_frame=0 ... from the output you can easily calculate the GOP-length. Another solution is to patch the reference implementation found at http://iphome.hhi.de/suehring/tml/ Let me know if you need help with this part :-) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24010875', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3677103/']} | jdg_380363 |
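Once the per-frame key_frame flags are extracted from ffprobe's output, the GOP lengths are just the gaps between successive 1s. A Python sketch of that post-processing step (the line format assumed here is the `key_frame=N` lines shown above):

```python
def gop_lengths(key_frames):
    """Distances between successive I-frames, given 0/1 flags in decode order;
    the final (possibly open) GOP runs to the end of the stream."""
    starts = [i for i, k in enumerate(key_frames) if k == 1]
    return [b - a for a, b in zip(starts, starts[1:])] + [len(key_frames) - starts[-1]]

# Flags as they would be scraped from `ffprobe -show_frames ... | grep key_frame`
lines = ["key_frame=1", "key_frame=0", "key_frame=0",
         "key_frame=1", "key_frame=0", "key_frame=0", "key_frame=0"]
flags = [int(line.split("=")[1]) for line in lines]
lengths = gop_lengths(flags)
```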
stackexchange | llm_judgeable_groundtruth_similarity | 25488675 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What exactly happens when Pandas issues this warning? Should I worry about it? In [1]: read_csv(path_to_my_file)/Users/josh/anaconda/envs/py3k/lib/python3.3/site-packages/pandas/io/parsers.py:1139: DtypeWarning: Columns (4,13,29,51,56,57,58,63,87,96) have mixed types. Specify dtype option on import or set low_memory=False. data = self._reader.read(nrows) I assume that this means that Pandas is unable to infer the type from values on those columns. But if that is the case, what type does Pandas end up using for those columns ? Also, can the type always be recovered after the fact? (after getting the warning), or are there cases where I may not be able to recover the original info correctly, and I should pre-specify the type? Finally, how exactly does low_memory=False fix the problem?
Now provide the response and nothing else.
| Revisiting mbatchkarov's link, low_memory is not deprecated . It is now documented : low_memory : boolean, default True Internally process the file in chunks, resulting in lower memory use while parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with the dtype parameter. Note that the entire file is read into a single DataFrame regardless, use the chunksize or iterator parameter to return the data in chunks. (Only valid with C parser) I have asked what resulting in mixed type inference means, and chris-b1 answered: It is deterministic - types are consistently inferred based on what's in the data. That said, the internal chunksize is not a fixed number of rows, but instead bytes, so whether you get a mixed dtype warning or not can feel a bit random. So, what type does Pandas end up using for those columns? This is answered by the following self-contained example: df=pd.read_csv(StringIO('\n'.join([str(x) for x in range(1000000)] + ['a string'])))DtypeWarning: Columns (0) have mixed types. Specify dtype option on import or set low_memory=False.type(df.loc[524287,'0'])Out[50]: int type(df.loc[524288,'0'])Out[51]: str The first part of the csv data was seen as only int, so converted to int, the second part also had a string, so all entries were kept as string. Can the type always be recovered after the fact? (after getting the warning)? I guess re-exporting to csv and re-reading with low_memory=False should do the job. How exactly does low_memory=False fix the problem? It reads all of the file before deciding the type, therefore needing more memory. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25488675', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/283296/']} | jdg_380364 |
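The chunk-wise inference described above can be mimicked in miniature without pandas. The sketch below is an illustration of the idea only, not pandas' actual parser logic:

```python
def infer(value):
    # Toy per-value inference: int if the text parses as one, else str.
    try:
        int(value)
        return int
    except ValueError:
        return str

def chunked_dtypes(values, chunksize):
    # Each chunk is inferred independently, like the low_memory=True path;
    # a chunk that mixes kinds falls back to str.
    dtypes = []
    for start in range(0, len(values), chunksize):
        kinds = {infer(v) for v in values[start:start + chunksize]}
        dtypes.append(int if kinds == {int} else str)
    return dtypes

column = [str(x) for x in range(10)] + ["a string"]
per_chunk = chunked_dtypes(column, chunksize=10)            # mixed: [int, str]
whole_file = chunked_dtypes(column, chunksize=len(column))  # one pass: [str]
```

Reading the whole column in one chunk (the low_memory=False analogue) yields a single consistent type, at the cost of holding everything in memory first.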
stackexchange | llm_judgeable_groundtruth_similarity | 9988204 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been trying for ages to figure this out. When I try to bind my class with an interceptor I'm getting the following exception on the line Kernel.Bind<MyClass>().ToSelf().Intercept().With<ILoggerAspect>(); Error loading Ninject component IAdviceFactory. No such component has been registered in the kernel's component container I've tried with and without LoadExtensions, and with and without using a Module to set up my bindings, and my last attempt looks like this internal class AppConfiguration { internal AppConfiguration( ) { var settings = new NinjectSettings() { LoadExtensions = false }; Kernel = new StandardKernel(settings); Load(); } internal StandardKernel Kernel { get; set; } public static AppConfiguration Instance { get { return _instance ?? (_instance = new AppConfiguration()); } } private static AppConfiguration _instance; private void Load() { Kernel.Bind<ILoggerAspect>().To<Log4NetAspect>().InSingletonScope(); Kernel.Bind<MyClass>().ToSelf().Intercept().With<ILoggerAspect>(); } internal static StandardKernel Resolver() { return Instance.Kernel; }} My Logger Attribute looks like this public class LogAttribute : InterceptAttribute{ public override IInterceptor CreateInterceptor(IProxyRequest request) { return request.Context.Kernel.Get<ILoggerAspect>(); }} And my interceptor like this public class Log4NetAspect : SimpleInterceptor, ILoggerAspect{ protected override void BeforeInvoke(IInvocation invocation) { Debug.WriteLine("Running " + invocation.ReturnValue); base.BeforeInvoke(invocation); } public new void Intercept(IInvocation invocation) { try { base.Intercept(invocation); } catch (Exception e) { Debug.WriteLine("Exception: " + e.Message); } } protected override void AfterInvoke(IInvocation invocation) { Debug.WriteLine("After Method"); base.AfterInvoke(invocation); }}
Now provide the response and nothing else.
| Most likely you didn't deploy Ninject.Extensions.Interception.DynamicProxy or Ninject.Extensions.Interception.Linfu alongside your application [and Ninject.Extensions.Interception ]. You have to pick exactly one of them. With the code as you have it right now ( LoadExtensions=false ) it will fail to pick up the specific interception library - you should remove that and the normal extensions loading should wire the extension into the Kernel on creation for the interception bits to pick it up. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9988204', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/738068/']} | jdg_380365 |
stackexchange | llm_judgeable_groundtruth_similarity | 232930 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What happens when a microprocessor like the 8085 fetches an invalid opcode, i.e. not part of its instruction set, from memory and attempts to decode and run it? I found, in some forum, that invalid opcodes are treated as NOPs, but that's no good since it doesnt inform the programmer that there's a problem. Is this really what happens or is some interrupt (TRAP?) triggered? What if invalid opcodes somehow made their way down to a modern processor? How different is the behaviour, with respect to an 8085's? I'm asking because I'm building an 8085 trainer which will require students to enter opcodes in hex directly into RAM from where they'll be executed, so I'd like to know what would happen if they enter incorrect code. Also, how likely is it that the one wrong opcode will ruin the rest of the program? Thanks.
Now provide the response and nothing else.
| With something like an 8085 processor, the result is probably "undefined behavior". Those 1970s devices had limited logic available for instruction decoding, and they designed the opcodes to require minimal decoding effort. For example, maybe every op-code that had a '1' in the 4th bit would result in an update to the accumulator. These devices wouldn't inform the programmer of anything because they couldn't spare the resources to detect a wrong op-code. It also wouldn't necessarily behave as a NOP, because the fields of the opcode might actually pass through the decode logic and produce some behavior that changed the state of the processor. It would be the job of the programmer or the compiler to not generate invalid opcodes, which isn't really very difficult when you think about it. A compiler is not going to produce any opcode that it hasn't been programmed to produce. It seems likely that newer processors, with vastly more resources, can spare some to detect invalid opcodes and produce some defined behavior, but I'm not familiar enough with them to comment on that. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/232930', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/83143/']} | jdg_380366 |
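The minimal-decoding idea (opcode bit fields routed straight to control logic, with no validity check) can be sketched in miniature. This toy Python model is purely illustrative and does not reflect the 8085's real decode tables; the "a '1' in the 4th bit updates the accumulator" rule is the hypothetical example from the answer above.

```python
def decode(opcode):
    """Toy decoder with no validity check: bit fields select behaviour directly,
    so an undocumented opcode still falls through the same tests."""
    effects = []
    if opcode & 0b00001000:                  # a '1' in the 4th bit touches the accumulator
        effects.append("update_accumulator")
    if opcode & 0b11000000 == 0b01000000:    # high bits select a register move
        effects.append("register_move")
    if not effects:
        effects.append("no_visible_effect")  # accidentally behaves like a NOP
    return effects

documented = decode(0b01001000)    # an opcode the designers intended
undocumented = decode(0b00001000)  # a hole in the map: still mutates state
```

Note that the "invalid" opcode is never flagged; it simply produces whatever side effects its bit pattern happens to select, which is exactly the undefined behaviour described above.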
stackexchange | llm_judgeable_groundtruth_similarity | 5505181 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've read that you can store classes directly into a session variable i.e. Session["var"] = myclass; My question is how the memory management works. Does it automatically serialize this into the session on the client side? Or does it hold the data for the instance of the class in server memory, and just holds a reference in the session object?
Now provide the response and nothing else.
| ASP.Net will store your object in a static nested dictionary in memory on the server. It then sends a cookie to the client with the session ID. Next time the client sends a request, ASP.Net will retrieve the session associated with that ID from the outer dictionary, then give you the inner dictionary containing the objects in that session. (This is the way the default session provider works; other providers can serialize objects to SQL Server, or do something else entirely) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5505181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/299408/']} | jdg_380367 |
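The two-level dictionary arrangement described above can be sketched as a simplified Python model (an illustration of the default in-memory provider's shape, not ASP.Net's actual implementation):

```python
class InMemorySessionStore:
    """Outer dict keyed by the session ID sent in the cookie; the inner dict
    holds the objects stored by code like Session["var"] = myclass."""
    def __init__(self):
        self._sessions = {}  # session_id -> {key -> object}

    def set(self, session_id, key, value):
        # The object itself stays in server memory; only the ID goes to the client.
        self._sessions.setdefault(session_id, {})[key] = value

    def get(self, session_id, key):
        # On the next request, the cookie's ID selects the inner dictionary.
        return self._sessions[session_id][key]

store = InMemorySessionStore()
store.set("abc123", "var", {"name": "myclass instance"})
value = store.get("abc123", "var")
```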
stackexchange | llm_judgeable_groundtruth_similarity | 2617388 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an interesting probability question that I've put to several other people, and all of them gave me different insights. Here it is: The incidence of a suspicious transaction in a bank is 1 in 149. The bank is able to correctly identify a legitimate transaction 92% of the time, and it is also able to correctly pinpoint a suspicious transaction 92% of the time. One day, the bank identifies a transaction as suspicious. What is the exact probability of the transaction actually being legitimate? My own reasoning: since the question asks for the probability of the transaction actually being legitimate, the base rate is 148/149. The bank fails to correctly identify both legitimate and suspicious transactions 8% of the time, so the failure rate should be 8% * 8%, which is 0.08 * 0.08 = 0.0064. Hence the probability of it actually being legitimate is 148/149 * 0.0064 ≈ 0.00636. However, when I asked various people for their opinions, some said the probability should be just 148/149 * 0.08. So what is the correct way to approach problems like this?
Now provide the response and nothing else.
| The incidence of a suspicious transaction in a bank is 1 in 149. They are able to correctly identify a legitimate transaction 92% of the time. However, this bank is also able to correctly pinpoint a suspicious transaction 92% of the time. One day, the bank identify a transaction as suspicious. What is the exact probability of the transaction actually being legitimate? Let $S$ be the event that a transaction is suspicious. Let $T$ be the event that a transaction is identified as suspicious. We are given: $\mathsf P(S)=1/149, \mathsf P(T^\complement\mid S^\complement)=0.92=\mathsf P(T\mid S)$. We seek, by means of Bayes' Rule: $\mathsf P(S^\complement\mid T)~{=\dfrac{\mathsf P(T\mid S^\complement)\mathsf P(S^\complement)}{\mathsf P(T\mid S)\mathsf P(S)+\mathsf P(T\mid S^\complement)\mathsf P(S^\complement)}\\ = \dfrac{0.08\cdot148/149}{0.92\cdot1/149+0.08\cdot148/149} \\ =\dfrac{296}{319} }$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2617388', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/524064/']} | jdg_380368 |
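The posterior derived above is easy to check numerically — a quick sanity check of the Bayes' rule computation using exact rational arithmetic:

```python
# Verify P(legitimate | flagged suspicious) = 296/319 with exact fractions.
from fractions import Fraction

p_s = Fraction(1, 149)                      # P(S): prior that a transaction is suspicious
p_legit = 1 - p_s                           # P(S^c) = 148/149
p_flag_given_s = Fraction(92, 100)          # P(T | S): suspicious correctly flagged
p_flag_given_legit = 1 - Fraction(92, 100)  # P(T | S^c) = 0.08: legitimate wrongly flagged

# Bayes' rule: P(S^c | T) = P(T | S^c) P(S^c) / P(T)
numerator = p_flag_given_legit * p_legit
denominator = p_flag_given_s * p_s + numerator
posterior = numerator / denominator
print(posterior)  # -> 296/319, about 0.928
```

So even after being flagged, a transaction is still overwhelmingly likely to be legitimate — the classic base-rate effect.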
stackexchange | llm_judgeable_groundtruth_similarity | 21124475 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using the following example: http://compscipleslab.wordpress.com/category/java/ But I'm getting an exception and when I log the exceptions it it appears in my logcat stating: InvalidKeyException I'm not sure exactly what I've done wrong. I haven't really modified the methods from the tutorial (perhaps this is the problem) so I'm not sure exactly why I'm getting this error. Source: public class FileChooser extends ListActivity { private File currentDir; private File moveToDir; private FileArrayAdapter adapter; public static String TAG = "DEBUG THIS:"; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); currentDir = new File("/sdcard/"); fill(currentDir); registerForContextMenu(getListView()); // get the attachment's filename SharedPreferences preferences = PreferenceManager .getDefaultSharedPreferences(this); String attachmentFileName = preferences.getString("fileName", ""); // save the attachment try { InputStream attachment = getContentResolver().openInputStream( getIntent().getData()); File savedFile = new File(Environment.getExternalStorageDirectory() .getAbsolutePath(), attachmentFileName); FileOutputStream f = new FileOutputStream(savedFile); byte[] buffer = new byte[1024]; int len1 = 0; while ((len1 = attachment.read(buffer)) > 0) { f.write(buffer); } f.close(); } catch (Exception e) { } } // File Manager Source to view SD Card or Internal Storage Contents private void fill(File f) { File[] dirs = f.listFiles(); this.setTitle("Current Dir: " + f.getName()); List<Item> dir = new ArrayList<Item>(); List<Item> fls = new ArrayList<Item>(); try { for (File ff : dirs) { Date lastModDate = new Date(ff.lastModified()); DateFormat formater = DateFormat.getDateTimeInstance(); String date_modify = formater.format(lastModDate); if (ff.isDirectory()) { File[] fbuf = ff.listFiles(); int buf = 0; if (fbuf != null) { buf = fbuf.length; } else buf = 0; String num_item = String.valueOf(buf); if (buf == 0) num_item = num_item + " 
item"; else num_item = num_item + " items"; // String formated = lastModDate.toString(); dir.add(new Item(ff.getName(), num_item, date_modify, ff .getAbsolutePath(), "directory_icon")); } else { fls.add(new Item(ff.getName(), ff.length() + " Byte", date_modify, ff.getAbsolutePath(), "file_icon")); } } } catch (Exception e) { } Collections.sort(dir); Collections.sort(fls); dir.addAll(fls); if (!f.getName().equalsIgnoreCase("sdcard")) dir.add(0, new Item("..", "Parent Directory", "", f.getParent(), "directory_up")); adapter = new FileArrayAdapter(FileChooser.this, R.layout.file_view, dir); this.setListAdapter(adapter); } // onClick listener to move back one directory @Override protected void onListItemClick(ListView l, View v, int position, long id) { // TODO Auto-generated method stub super.onListItemClick(l, v, position, id); Item o = adapter.getItem(position); if (o.getImage().equalsIgnoreCase("directory_icon") || o.getImage().equalsIgnoreCase("directory_up")) { currentDir = new File(o.getPath()); fill(currentDir); } else { onFileClick(o); } } // open file onClick private void onFileClick(Item o) { Intent intent = new Intent(); intent.putExtra("GetPath", currentDir.toString()); intent.putExtra("GetFileName", o.getName()); setResult(RESULT_OK, intent); finish(); } @Override public void onCreateContextMenu(ContextMenu menu, View v, ContextMenuInfo menuInfo) { super.onCreateContextMenu(menu, v, menuInfo); menu.setHeaderTitle("Context Menu"); menu.add(0, v.getId(), 0, "Copy"); menu.add(0, v.getId(), 0, "Paste"); menu.add(0, v.getId(), 0, "Delete"); menu.add(0, v.getId(), 0, "Encrypt"); menu.add(0, v.getId(), 0, "Decrypt"); } @Override public boolean onContextItemSelected(MenuItem item) { if (item.getTitle() == "Copy") { function1(item.getItemId()); } if (item.getTitle() == "Paste") { function2(item.getItemId()); } if (item.getTitle() == "Delete") { function3(item.getItemId()); } if (item.getTitle() == "Encrypt") { function4(item.getItemId()); } if (item.getTitle() == 
"Decrypt") { function5(item.getItemId()); } else { } return false; } public void function1(int id) { Toast.makeText(this, "Copy", Toast.LENGTH_SHORT).show(); try { File sd = Environment.getExternalStorageDirectory(); currentDir = new File(sd.getAbsolutePath()); moveToDir = new File(sd.getAbsolutePath()); copyDirectoryOneLocationToAnotherLocation(currentDir, currentDir); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } public void function2(int id) { Toast.makeText(this, "Paste", Toast.LENGTH_SHORT).show(); } public void function3(int id) { Toast.makeText(this, "Delete", Toast.LENGTH_SHORT).show(); DeleteRecursive(currentDir); } public void function4(int id) { Toast.makeText(this, "Encrypt", Toast.LENGTH_SHORT).show(); //encrypt(null, null, null); // encrypt(String PlaintextFile, String Key, String CiphertextFile); encrypt("D:\\plaintext.txt", "testkey", "D:\\ciphertext.txt"); } public void function5(int id) { Toast.makeText(this, "Decrypt", Toast.LENGTH_SHORT).show(); decrypt(null, null, null); // decrypt(String PlaintextFile, String Key, String CiphertextFile); } private void DeleteRecursive(File fileOrDirectory) { if (fileOrDirectory.isDirectory()) { File[] children = fileOrDirectory.listFiles(); if (null != children) { for (File child : children) DeleteRecursive(child); } } fileOrDirectory.delete(); } public static void copyDirectoryOneLocationToAnotherLocation( File sourceLocation, File targetLocation) throws IOException { if (sourceLocation.isDirectory()) { if (!targetLocation.exists()) { targetLocation.mkdir(); } String[] children = sourceLocation.list(); for (int i = 0; i < sourceLocation.listFiles().length; i++) { copyDirectoryOneLocationToAnotherLocation(new File( sourceLocation, children[i]), new File(targetLocation, children[i])); } } else { InputStream in = new FileInputStream(Environment .getExternalStorageDirectory().getPath()); OutputStream out = new FileOutputStream(Environment 
.getExternalStorageDirectory().getPath()); Log.d("TEST", "source: " + in); Log.d("TEST", "target: " + out); // Copy the bits from instream to outstream byte[] buf = new byte[1024]; int len; while ((len = in.read(buf)) > 0) { out.write(buf, 0, len); } in.close(); out.close(); } } public static void encrypt(String PlaintextFile, String Key, String CiphertextFile) { try { byte[] KeyData = Key.getBytes(); SecretKeySpec KS = new SecretKeySpec(KeyData, "AES"); Cipher cipher = Cipher.getInstance("AES"); cipher.init(Cipher.ENCRYPT_MODE, KS); FileInputStream fis = new FileInputStream(PlaintextFile); CipherInputStream cis = new CipherInputStream(fis, cipher); FileOutputStream fos = new FileOutputStream(CiphertextFile); byte[] b = new byte[1024]; int i = cis.read(b); while (i != -1) { fos.write(b, 0, i); i = cis.read(b); } fos.flush(); fos.close(); fis.close(); cis.close(); System.out.println("Encryption Successfull !!!"); } catch (NoSuchAlgorithmException exception) { // do something. Thread.currentThread().getStackTrace(); } catch (NoSuchPaddingException exception) { // do something (else?). Thread.currentThread().getStackTrace(); } catch (InvalidKeyException exception) { // do something. Thread.currentThread().getStackTrace(); } catch (IOException exception) { // do something (else?). 
Thread.currentThread().getStackTrace(); } } public static void decrypt(String CiphertextFile, String Key, String DecipheredFile) { try { byte[] KeyData = Key.getBytes(); SecretKeySpec KS = new SecretKeySpec(KeyData, "AES"); Cipher cipher = Cipher.getInstance("AES"); cipher.init(Cipher.DECRYPT_MODE, KS); FileInputStream fis = new FileInputStream(CiphertextFile); FileOutputStream fos = new FileOutputStream(DecipheredFile); CipherOutputStream cos = new CipherOutputStream(fos, cipher); byte[] b = new byte[1024]; int i = fis.read(b); while (i != -1) { cos.write(b, 0, i); i = fis.read(b); } fos.flush(); fos.close(); fis.close(); cos.flush(); cos.close(); System.out.println("Decryption Successfull !!!"); } catch (NoSuchAlgorithmException exception) { // do something. Thread.currentThread().getStackTrace(); } catch (NoSuchPaddingException exception) { // do something (else?). Thread.currentThread().getStackTrace(); } catch (InvalidKeyException exception) { // do something. Thread.currentThread().getStackTrace(); } catch (IOException exception) { // do something (else?). 
Thread.currentThread().getStackTrace(); } } public static void main(String[] args) { encrypt("D:\\plaintext.txt", "testkey", "D:\\ciphertext.txt"); decrypt("D:\\ciphertext.txt", "testkey", "D:\\originaltext.txt"); }} LOGCAT: 01-14 17:53:07.904: I/SurfaceTextureClient(9545): [STC::queueBuffer] (this:0x5f1c6d58) fps:6.11, dur:2780.41, max:2503.10, min:13.2901-14 17:53:07.905: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:07.905: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:07.958: V/Provider/Settings(9545): invalidate [system]: current 6 != cached 001-14 17:53:07.963: V/Provider/Settings(9545): from db cache, name = sound_effects_enabled , value = 001-14 17:53:07.966: D/skia(9545): Flag is not 1001-14 17:53:07.976: W/System.err(9545): java.security.InvalidKeyException: Key length not 128/192/256 bits.01-14 17:53:07.983: W/System.err(9545): at com.android.org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher.engineInit(BaseBlockCipher.java:578)01-14 17:53:07.983: W/System.err(9545): at com.android.org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher.engineInit(BaseBlockCipher.java:625)01-14 17:53:07.983: W/System.err(9545): at javax.crypto.Cipher.init(Cipher.java:519)01-14 17:53:07.983: W/System.err(9545): at javax.crypto.Cipher.init(Cipher.java:479)01-14 17:53:07.983: W/System.err(9545): at com.idg.voiscphone.FileChooser.encrypt(FileChooser.java:276)01-14 17:53:07.984: W/System.err(9545): at com.idg.voiscphone.FileChooser.function4(FileChooser.java:209)01-14 17:53:07.984: W/System.err(9545): at com.idg.voiscphone.FileChooser.onContextItemSelected(FileChooser.java:169)01-14 17:53:07.984: W/System.err(9545): at android.app.Activity.onMenuItemSelected(Activity.java:2597)01-14 17:53:07.984: W/System.err(9545): at com.android.internal.policy.impl.PhoneWindow$DialogMenuCallback.onMenuItemSelected(PhoneWindow.java:3663)01-14 17:53:07.984: W/System.err(9545): at 
com.android.internal.view.menu.MenuBuilder.dispatchMenuItemSelected(MenuBuilder.java:735)01-14 17:53:07.985: W/System.err(9545): at com.android.internal.view.menu.MenuItemImpl.invoke(MenuItemImpl.java:149)01-14 17:53:07.985: W/System.err(9545): at com.android.internal.view.menu.MenuBuilder.performItemAction(MenuBuilder.java:874)01-14 17:53:07.985: W/System.err(9545): at com.android.internal.view.menu.MenuDialogHelper.onClick(MenuDialogHelper.java:167)01-14 17:53:07.985: W/System.err(9545): at com.android.internal.app.AlertController$AlertParams$3.onItemClick(AlertController.java:963)01-14 17:53:07.985: W/System.err(9545): at android.widget.AdapterView.performItemClick(AdapterView.java:298)01-14 17:53:07.985: W/System.err(9545): at android.widget.AbsListView.performItemClick(AbsListView.java:1128)01-14 17:53:07.986: W/System.err(9545): at android.widget.AbsListView$PerformClick.run(AbsListView.java:2812)01-14 17:53:07.986: W/System.err(9545): at android.widget.AbsListView$1.run(AbsListView.java:3571)01-14 17:53:07.986: W/System.err(9545): at android.os.Handler.handleCallback(Handler.java:725)01-14 17:53:07.986: W/System.err(9545): at android.os.Handler.dispatchMessage(Handler.java:92)01-14 17:53:07.986: W/System.err(9545): at android.os.Looper.loop(Looper.java:153)01-14 17:53:07.987: W/System.err(9545): at android.app.ActivityThread.main(ActivityThread.java:5297)01-14 17:53:07.987: W/System.err(9545): at java.lang.reflect.Method.invokeNative(Native Method)01-14 17:53:07.987: W/System.err(9545): at java.lang.reflect.Method.invoke(Method.java:511)01-14 17:53:07.987: W/System.err(9545): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:833)01-14 17:53:07.988: W/System.err(9545): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:600)01-14 17:53:07.988: W/System.err(9545): at dalvik.system.NativeStart.main(Native Method)01-14 17:53:07.989: D/OpenGLRenderer(9545): Flushing caches (mode 0)01-14 17:53:08.013: W/Trace(9545): Unexpected 
value from nativeGetEnabledTags: 001-14 17:53:08.013: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.028: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.028: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.052: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.053: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.054: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.076: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.076: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.076: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.078: V/InputMethodManager(9545): onWindowFocus: android.widget.ListView{4206e2c0 VFED.VCL .F...... 0,0-720,1126 #102000a android:id/list} softInputMode=16 first=false flags=#181010001-14 17:53:08.091: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.091: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.091: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.092: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.092: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.092: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.097: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 001-14 17:53:08.102: W/Trace(9545): Unexpected value from nativeGetEnabledTags: 0
Now provide the response and nothing else.
| Here's an example that uses your existing code to set the title text, but lets you use UITableViewHeaderFooterView to adjust the appearance: - (UIView *)tableView:(UITableView *)tableView viewForHeaderInSection:(NSInteger)section{ static NSString *header = @"customHeader"; UITableViewHeaderFooterView *vHeader; vHeader = [tableView dequeueReusableHeaderFooterViewWithIdentifier:header]; if (!vHeader) { vHeader = [[UITableViewHeaderFooterView alloc] initWithReuseIdentifier:header]; vHeader.textLabel.backgroundColor = [UIColor redColor]; } vHeader.textLabel.text = [self tableView:tableView titleForHeaderInSection:section]; return vHeader;} If you want, you can even subclass UITableViewHeaderFooterView just like you'd subclass UITableViewCell to customize the appearance even further. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21124475', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3183409/']} | jdg_380369 |
stackexchange | llm_judgeable_groundtruth_similarity | 2315862 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do you make a UINavigationBar transparent ? Though I want its bar items to remain visible.
Now provide the response and nothing else.
| If anybody is wondering how to achieve this in iOS 7+, here's a solution (iOS 6 compatible too) In Objective-C [self.navigationBar setBackgroundImage:[UIImage new] forBarMetrics:UIBarMetricsDefault];self.navigationBar.shadowImage = [UIImage new];self.navigationBar.translucent = YES; In swift 3 (iOS 10) self.navigationBar.setBackgroundImage(UIImage(), for: .default)self.navigationBar.shadowImage = UIImage()self.navigationBar.isTranslucent = true In swift 2 self.navigationBar.setBackgroundImage(UIImage(), forBarMetrics: .Default)self.navigationBar.shadowImage = UIImage()self.navigationBar.translucent = true Discussion Setting translucent to YES on the navigation bar does the trick, due to a behavior discussed in the UINavigationBar documentation. I'll report here the relevant fragment: If you set this property to YES on a navigation bar with an opaque custom background image, the navigation bar will apply a system opacity less than 1.0 to the image. | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/2315862', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/128875/']} | jdg_380370 |
stackexchange | llm_judgeable_groundtruth_similarity | 988028 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Disclaimer: This is just meant as a record of a proof. For more details, see: Answer own Question How to prove that the Lebesgue measure has no atoms: $$\lambda:\mathbb{R}^n\to\mathbb{R}_+$$ (Recall the definition of atoms: Measure Atoms: Definition? )
Now provide the response and nothing else.
| Here is one way: Let $B_r$ be the box $\{x\in\mathbb{R}^n: |x_k|\le r\text{ for } k=1,\ldots,n\}$. You can easily prove that $\lambda(B_r\cap A)$ is a continuous function of $r$, for any Lebesgue measurable set $A$. From this, it follows immediately that $A$ is not an atom. Edit: To explain the last step, $\lambda(B_r\cap A)$ is a continuous function of $r$ taking the value $0$ at $r=0$ and $\lambda(A)$ in the limit $r\to\infty$. But if $A$ is an atom, this function can only take two values. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/988028', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/79762/']} | jdg_380371 |
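A concrete one-dimensional illustration of the continuity argument (a numerical sketch under the assumption A = [0, 1] in ℝ): the function r ↦ λ(B_r ∩ A) moves continuously from 0 up to λ(A), so it passes through intermediate values — exactly what an atom would forbid.

```python
# 1-D illustration: A = [0, 1], B_r = [-r, r].  Then
# f(r) = lambda(B_r ∩ A) grows continuously from 0 to lambda(A) = 1.
# If A were an atom, f could take only the two values 0 and lambda(A);
# the intermediate values computed below rule that out.
def measure_intersection(r):
    # length of [-r, r] ∩ [0, 1]
    lo, hi = max(-r, 0.0), min(r, 1.0)
    return max(hi - lo, 0.0)

for r in [0.0, 0.25, 0.5, 1.0, 2.0]:
    print(r, measure_intersection(r))
# f takes the value 0.5 at r = 0.5 -- strictly between 0 and lambda(A) = 1.
```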
stackexchange | llm_judgeable_groundtruth_similarity | 17205 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I am supervising an undergraduate for a project in which he's going to talk about the relationship between Galois representations and modular forms. We decided we'd figure out a few examples of weight 1 modular forms and Galois representations and see them matching up. But I realised when working through some examples that computing the conductor of the Galois representation was giving me problems sometimes at small primes. Here's an explicit question. Set $f=x^4 + 2x^2 - 1$ and let $K$ be the splitting field of $f$ over $\mathbf{Q}$. It's Galois over $\mathbf{Q}$ with group $D_8$. Let $\rho$ be the irreducible 2-dimensional representation of $D_8$. What is the conductor of $\rho$? Note that I don't particularly want to know the answer to this particular question, I want to know how to work these things out in general. In fact I think I could perhaps figure out the conductor of $\rho$ by doing calculations on the modular forms side, but I don't want to do that (somehow the point of the project is seeing that calculations done in 2 different ways match up, rather than using known modularity results to do the calculations). Using pari or magma I see that $K$ is unramified outside 2, and the ideal (2) is an 8th power in the integers of $K$. To compute the conductor of $\rho$ the naive approach is to figure out the higher ramification groups at 2 and then just use the usual formula. 
But the only computer algebra package I know which will compute higher ramification groups is magma, and if I create the splitting field of $f$ over $\mathbf{Q}_2$ (computed using pari's "polcompositum" command) Qx<x>:=PolynomialRing(Rationals());g:=x^8 + 20*x^6 + 146*x^4 + 460*x^2 + 1681;L := LocalField(pAdicField(2, 50),g);DecompositionGroup(L); then I get an instant memory overflow (magma wants 2.5 gigs to do this, apparently), and furthermore the other calculations I would have to do if I were to be following up this idea would be things like RamificationGroup(L, 3); which apparently need 11 gigs of ram to run. Ouch. Note also that if I pull the precision of the $p$-adic field down from 50 then magma complains that the precision isn't large enough to do some arithmetic in $L$ that it wants to do. I think then my question must be: are there any computer algebra resources that will compute higher ramification groups for local fields without needing exorbitant amounts of memory? Or is it a genuinely an "11-gigs" calculation that I want to do?? And perhaps another question is: is there another way of computing the conductor of a (non-abelian finite image) Galois representation without having to compute these higher ramification groups (and without computing any modular forms either)?
Now provide the response and nothing else.
| You can also compute some higher ramification groups in Sage. At the moment it gives lower numbering, not upper numbering, but here it is anyway: sage: Qx.<x> = PolynomialRing(QQ) sage: g=x^8 + 20*x^6 + 146*x^4 + 460*x^2 + 1681 sage: L.<a> = NumberField(g) sage: G = L.galois_group() sage: G.ramification_breaks(L.primes_above(2)[0]) {1, 3, 5} You can also get explicit presentations of G as a permutation group and generators for ramification and decomposition subgroups. The above only takes about half a second on my old laptop -- no 2.5 gigs computations here. (The point is that it is much easier to do computations over a number field, because everything is exact, rather than over a p-adic field which is represented inexactly.) | {} | {'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/17205', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1384/']} | jdg_380372 |
stackexchange | llm_judgeable_groundtruth_similarity | 823956 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I am going to introduce Ansible into my data center, and I'm looking for security best practices on where to locate the control machine and how to manage the SSH keys. Question 1: the control machine We of course need a control machine. The control machine has the SSH private keys saved on it, so if an attacker gains access to the control machine, they potentially have access to the whole data center (or at least to the servers managed by Ansible). So is it better to have a dedicated control machine in the data center, or a remote control machine (like my laptop connected remotely to the data center)? If the best practice is to use my laptop (which could be stolen, of course, but I could have my keys securely saved online in the cloud or offline on a portable encrypted device), what if I need to use a web interface with Ansible, like Ansible Tower, Semaphore, Rundeck or Foreman, which needs to be installed on a centralized machine in the data center? How do I secure it and prevent it from becoming a "single point of attack"? Question 2: the SSH keys Assume that I need to use Ansible for tasks which must be executed by root (like installing software packages or something similar). I think the best practice is not to use the root user on the managed servers, but to add a normal user for Ansible with sudo permissions. But if Ansible needs to perform almost every task, it needs access to almost every command through sudo. So, what is the best choice: let Ansible use the root user (with its public key saved in ~/.ssh/authorized_keys); create an unprivileged user dedicated to Ansible with sudo access; let the Ansible user run every command through sudo with a password (which would need to be known by every sysadmin who uses Ansible on those servers); let the Ansible user run every command through sudo without a password; or any other hints?
Now provide the response and nothing else.
| The bastion host (the Ansible control center) belongs to a separate subnet. It shouldn't be directly accessible from outside, and it shouldn't be directly accessible from the managed servers! Your laptop is the least secure device of all: one stupid mail, one stupid Flash vulnerability, one stupid guest Wi-Fi, and it gets pwned. For servers, don't allow root access via SSH at all; many audits flag this. For Ansible, let every admin use their own personal account on each target server, and let them sudo with passwords. This way no password is shared between two people, and you can check who did what on each server. It's up to you whether personal accounts allow login with a password, with an SSH key only, or require both. To clarify: Ansible doesn't require using a single target login name . Each admin can and should have a personal target login name. A side note: try never to create an account named after some generic word (like "ansible", "admin", "cluster", "management" or "operator") if it has a password. The only good name for an account that has a password is the name of a human being, like "jkowalski". Only a human being can be responsible for the actions done via the account and for improperly securing their password; "ansible" cannot. | {} | {'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/823956', 'https://serverfault.com', 'https://serverfault.com/users/177397/']} | jdg_380373 |
stackexchange | llm_judgeable_groundtruth_similarity | 4450892 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to filter out certain processes in Fiddler? Its very noisy currently, and I don't want it to show just one process.
Now provide the response and nothing else.
| The built-in Show only traffic from option is useful if your process never exits and always has the same PID. In my case, my HTTP client was starting and exiting frequently, so I added this custom FiddlerScript. Go to Rules > Customize Rules... to start editing CustomRules.js . Add this inside the Handlers class class Handlers{ RulesString("&Process filter", true) RulesStringValue(0, "&Chrome", "chrome") RulesStringValue(1, "&Firefox", "firefox") RulesStringValue(2, "&Internet Explorer", "iexplore") RulesStringValue(3, "&Opera", "opera") RulesStringValue(4, "&PhantomJS", "phantomjs") RulesStringValue(5, "&Custom...", "%CUSTOM%") public static var sProcessName: String = null; // leave the rest of the Handlers class as-is} Add this inside the OnBeforeRequest function static function OnBeforeRequest(oSession: Session) { if (null != sProcessName) { var processInfo = oSession["X-PROCESSINFO"]; if(!processInfo || !processInfo.StartsWith(sProcessName + ":")){ oSession["ui-hide"] = "true"; FiddlerObject.StatusText = " Process filter: " + sProcessName; } } // leave the rest of the OnBeforeRequest function as-is} Fiddler will apply your changes as soon as you save the CustomRules.js file. To use, go to Rules > Process Filter and choose a browser, or use Custom and type in your executable's basename (e.g. iexplore ). Filtering applies to requests that start after you choose a process. Previous requests and Fiddler Composer requests are not affected. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4450892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/443602/']} | jdg_380374 |
stackexchange | llm_judgeable_groundtruth_similarity | 35878160 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So let's say you have an interface that has a toolbar, sidebar, and a grid. The toolbar has a drop-down that when a user changes, the content in sidebar and grid changes. Back in Angular 1, I would use a Service to have all of my dynamic data. When something changes in the service, all components that use that service will update as well. Well in Angular 2, it looks like people are using different methods. I wanted to get your input on which is the preferred way. Static Service OnChanges Inputs and Outputs The remaining question that I have is if it's best practice to create a new service for each data item that is shared between components or can we just have one service that has an object that stores all shared data. Original Plunker - Each change will have its own service app.component.ts import {Component} from 'angular2/core';import {NavService} from '../services/NavService';@Component({ selector: 'obs-comp', template: `obs component, item: {{item}}`})export class ObservingComponent { item: number; subscription: any; constructor(private _navService:NavService) {} ngOnInit() { this.item = this._navService.navItem(); this.subscription = this._navService.navChange$.subscribe( item => this.selectedNavItem(item)); } selectedNavItem(item: number) { this.item = item; } ngOnDestroy() { this.subscription.unsubscribe(); }}@Component({ selector: 'my-nav', template:` <div class="nav-item" (click)="selectedNavItem(1)">nav 1 (click me)</div> <div class="nav-item" (click)="selectedNavItem(2)">nav 2 (click me)</div> `,})export class Navigation { item = 1; constructor(private _navService:NavService) {} selectedNavItem(item: number) { console.log('selected nav item ' + item); this._navService.changeNav(item); }}@Component({ selector: 'my-app', template: `{{title}} <p> <my-nav></my-nav> <button (click)="showObsComp = !showObsComp">toggle ObservingComponent</button> <div *ngIf='showObsComp'> <obs-comp></obs-comp> </div> `, directives: [Navigation, ObservingComponent]})export 
class AppComponent { title = "Angular 2 - event delegation"; showObsComp = true; constructor() { console.clear(); }} NavService.ts : import {Observable} from 'rxjs/Observable';import 'rxjs/add/operator/share';export class NavService { private _navItem = 0; navChange$: Observable<number>; private _observer: Observer; constructor() { this.navChange$ = new Observable(observer => this._observer = observer).share(); // share() allows multiple subscribers } changeNav(number) { this._navItem = number; this._observer.next(number); } navItem() { return this._navItem; }} index.html <!DOCTYPE html><html> <head> <title>User Input</title> <link rel="stylesheet" href="styles.css"> <script src="https://code.angularjs.org/2.0.0-beta.11/angular2-polyfills.js"></script> <script src="https://code.angularjs.org/tools/system.js"></script> <script src="https://code.angularjs.org/tools/typescript.js"></script> <script src="https://code.angularjs.org/2.0.0-beta.11/Rx.js"></script> <script src="https://code.angularjs.org/2.0.0-beta.11/angular2.dev.js"></script> <script> System.config({ transpiler: 'typescript', typescriptOptions: { emitDecoratorMetadata: true }, packages: { app: {defaultExtension: 'ts'}, services: {defaultExtension: 'ts'}, } }); System.import('app/boot') .then(null, console.error.bind(console)); </script> </head> <body> <my-app>Loading...</my-app> </body></html> app/boot.ts import {bootstrap} from 'angular2/platform/browser';import {AppComponent} from './app.component';import {NavService} from '../services/NavService';bootstrap(AppComponent, [NavService]); Revised Plunker for example - Only one service which stores all data in object. A type will be passed to each listener to check if it needs to do anything based on that type.
Now provide the response and nothing else.
| You could leverage shared service for this. It could contain both data and observables to subscribe on to be notified when data are updated. Service export class ListService { list1Event: EventEmitter<any> = new EventEmitter(); getLists() { return this.http.get(url).map(res => res.json()) .subscribe( (data) => { this.list1Event.emit(data.list1); } ); }} Component @Component({ selector: 'my-component1', template: ` <ul> <li *ngFor="#item of list">{{item.name}}</li> </ul> `})export class MyComponent1 { constructor(private service:ListService) { this.service.list1Event.subscribe(data => { this.list = data; }); }} bootstrap bootstrap(AppComponent, [ ListService ]); See this question for more details: Delegation: EventEmitter or Observable in Angular Delegation: EventEmitter or Observable in Angular Is possible to have two template for single call service in AngularJS 2 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35878160', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1748538/']} | jdg_380375 |
stackexchange | llm_judgeable_groundtruth_similarity | 7628470 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been trying to get rid of the white spaces in an NSString , but none of the methods I've tried worked. I have "this is a test" and I want to get "thisisatest" . I've used whitespaceCharacterSet , which is supposed to eliminate the white spaces. NSString *search = [searchbar.text stringByTrimmingCharactersInSet: [NSCharacterSet whitespaceCharacterSet]]; but I kept getting the same string with spaces. Any ideas?
Now provide the response and nothing else.
| stringByTrimmingCharactersInSet only removes characters from the beginning and the end of the string, not the ones in the middle. 1) If you need to remove only a given character (say the space character) from your string, use: [yourString stringByReplacingOccurrencesOfString:@" " withString:@""] 2) If you really need to remove a set of characters (namely not only the space character, but any whitespace character like space, tab, unbreakable space, etc), you could split your string using the whitespaceCharacterSet then join the words again in one string: NSArray* words = [yourString componentsSeparatedByCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];NSString* nospacestring = [words componentsJoinedByString:@""]; Note that this last solution has the advantage of handling every whitespace character and not only spaces, but is a bit less efficient than the stringByReplacingOccurrencesOfString:withString: . So if you really only need to remove the space character and are sure you won't have any other whitespace character than the plain space char, use the first method. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/7628470', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/600799/']} | jdg_380376 |
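The same two strategies translate directly to other languages; here is a Python sketch of the comparison (my addition for illustration — not from the original answer):

```python
s = "this is a test"

# Approach 1: remove only the plain space character
# (analogue of stringByReplacingOccurrencesOfString:withString:).
no_spaces = s.replace(" ", "")

# Approach 2: split on any run of whitespace and re-join
# (analogue of componentsSeparatedByCharactersInSet: + componentsJoinedByString:).
no_whitespace = "".join(s.split())

print(no_spaces)      # thisisatest
print(no_whitespace)  # thisisatest
```

As in the Objective-C version, the second form also strips tabs and newlines, not just plain spaces.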
stackexchange | llm_judgeable_groundtruth_similarity | 156246 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The famous Higman embedding theorem says that every recursively presented group embeds in a finitely presented group. This is a convenient tool to construct finitely presented groups with bizarre properties from recursively presented ones, which are usually easier to construct. One cannot hope for an exact analogue of Higman's theorem in the setting of residually finite groups because finitely presented residually finite groups have solvable word problem and hence their finitely generated subgroups do as well. But Meskin constructed finitely generated recursively presented groups with undecidable word problem. I know of no other obstruction to embedding a finitely generated residually finite group into a finitely presented one, so I ask the following question. Does there exist a finitely generated residually finite group with decidable word problem that cannot be embedded in a finitely presented residually finite group?
Now provide the response and nothing else.
| I am going to draw heavily from Github discussion on HoTT book pull request 617 . There are different kinds of equality. Let us say that equality is "intensional" if it distinguishes objects based on how they are defined, and "extensional" if it distinguishes objects based on their "extension" or "observable behavior". In Frege's terminology, intensional equality compares sense and extensional equality compares reference. To use Russell's example, intensionally the morning star and the evening star are clearly not the same (because their definitions are different), but they are extensionally the same because they both denote the same object (planet Venus). A more mathematical example is comparison of $x + y$ and $y + x$. These are extensionally equal, but intensionally differ because (the usual) definition of $+$ treats its arguments asymmetrically. It should be clear that two functions may be extensionally equal (have same behavior) even though they differ intensionally (have different definitions). It is possible for two kinds of equality to coexist. Thus in type theory there are two equalities. The intensional one is called "judgmental" or "definitional equality" $\equiv$ and the extensional one is known as "propositional equality" $=$. Mathematicians are aware of $=$ as a "real thing" while they think of $\equiv$ as "formal manipulation of symbolic expressions" or some such. We may control the two kinds of equality and the relationship between them with additional axioms. For instance, the reflection rule collapses $\equiv$ and $=$ by allowing us to conclude $a \equiv b$ from $a = b$ (the other direction is automatic). There are also varying degrees of extensionality of $=$. Without any extra axioms, $=$ is already somewhat extensional. For instance, we can prove commutativity of $+$ on natural numbers by induction in the form $x + y = y + x$, but we cannot prove $x + y \equiv y + x$. 
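To make the two kinds of equality concrete, here is a small sketch in Lean 4 (my addition; `Nat.add_comm`, `Nat.zero_add`, and `funext` are the standard-library names assumed here):

```lean
-- Propositional equality: provable, e.g. by induction / a library lemma.
example (x y : Nat) : x + y = y + x := Nat.add_comm x y

-- Definitional equality: `rfl` only succeeds when both sides reduce to the
-- same normal form, which `x + y` and `y + x` do not.
-- example (x y : Nat) : x + y = y + x := rfl  -- rejected by the type checker

-- `fun x => 0 + x` and `fun x => x` are not definitionally equal
-- (addition recurses on its second argument), but function extensionality
-- identifies them propositionally.
example : (fun x : Nat => 0 + x) = (fun x : Nat => x) :=
  funext fun x => Nat.zero_add x
```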
Function extensionality is an axiom which describes what constitutes an "observation" on functions: by saying that two functions are equal when they give equal values we are in essence saying that only values matter (but not for example the "running time" or some other aspect of evaluation). Another axiom which makes $=$ "more extensional" is the Univalence axiom. It is hard to do mathematics without function extensionality, but type theorists have their reasons for not including it as an axiom by default. But before I explain the reason, let me mention that there is a standard workaround. We may introduce user-defined equalities on types by equipping types with equivalence relations. This is what Bishop did in his constructive mathematics, and this is what we do in Coq when we use setoids . With such user-defined equalities we of course recover function extensionality by construction. However, setoids are often annoying to work with, and they drag in technical problems which we would prefer to avoid. Incidentally, the setoid model shows that function extensionality does not increase the power of type theory (it is a model validating function extensionality built in type theory without function extensionality). So why don't type theorists adopt function extensionality? If we want to have a type theory with nice properties, and a useful proof assistant, then $\equiv$ should be "easy" to deal with. Technically speaking, we would like a strongly normalizing $\equiv$. By assuming function extensionality we throw into type theory a new constant funext without explaining how it interacts with the process of strong normalization, and things break. Type theorists would say that we failed to explain the computational meaning of funext . Consequently, Coq does not adopt function extensionality because that would lead to a lot of problems. Coq would not be able to handle $\equiv$ automagically anymore, and the whole system would just have worse behavior. 
Type theorists of course recognize that having a good computational explanation of function extensionality, and more generally of the univalence problem, would be extremely desirable. This is why the HoTTest open problem is to give a computational interpretation of the univalence axiom. Once this is accomplished, we ought to have at our disposal type systems and proof assistants which are much more natural from a mathematician's point of view. Until then, you can always assume funext as an axiom and work around the resulting complications. To see how this can be done, have a look at the Funext axiom in the HoTT library. [This P.S. is outdated after the question was edited.] P.S. The title of your question points to a common leap of reasoning from "not accepting function extensionality" to "denying function extensionality". While there are models in which function extensionality has counter-examples, one should be aware of the difference between "not accept" and "deny". (I am complaining because this sort of leap is often made about the law of excluded middle, and there it leads to absurdity.) | {} | {'log_upvote_score': 7, 'links': ['https://mathoverflow.net/questions/156246', 'https://mathoverflow.net', 'https://mathoverflow.net/users/15934/']} | jdg_380377 |
stackexchange | llm_judgeable_groundtruth_similarity | 71468590 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I build/run/archive my app in Xcode (on MacOS 12.3) I encounter this error: env: python: No such file or directoryCommand Ld failed with a nonzero exit code I think I might have changed something with regard to my python environment while working on a school project or messed something up there. However, I can not figure out what is wrong. I tried reinstalling Xcode and python (using brew and pyenv). I also relinked python using brew. But I still encounter the same error. Which python gives the following results: which python3-> /usr/local/bin/python3 And in my ~/.zshrc I have the following line: export PATH=/usr/local/bin:/usr/local/sbin:~/bin:$PATH Any help would be appreciated! If I missed or forgot anything please let me know, I'm quite new to this.
Now provide the response and nothing else.
| Homebrew only installs the binary python3 , just to be safe. Xcode is complaining about a lack of the binary python (note the lack of a 3!). You have a couple of options: When installing python3 , Homebrew also creates a libexec folder with unversioned symlinks, such as python (what you're missing). Note the Caveats printed when installing it: $ brew info python@3.9: stable 3.9.10 (bottled)==> CaveatsPython has been installed as /opt/homebrew/bin/python3Unversioned symlinks `python`, `python-config`, `pip` etc. pointing to `python3`, `python3-config`, `pip3` etc., respectively, have been installed into /opt/homebrew/opt/python@3.9/libexec/bin See: https://docs.brew.sh/Homebrew-and-Python You could add this directory to your $PATH , such that python and pip become available; something like the following might suffice: echo 'export PATH="'"$(brew --prefix)"'/opt/python@3.9/libexec/bin:$PATH"' \ >>~/.bash_profile ... although that will need to be modified according to your precise version of Python3, your shell of choice, etc. Alternatively and more simply, although a little more jankily, you could simply manually create the appropriate symlinks: ln -s "$(brew --prefix)/bin/python"{3,} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/71468590', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8264244/']} | jdg_380378 |
stackexchange | llm_judgeable_groundtruth_similarity | 8990880 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm styling the UITableView in InAppSettingsKit and want to change the color of the header title: The labels Without title and Text Field should be white. How can this be done? Thanks.
Now provide the response and nothing else.
| This is an old question, but I think the answer needs to be updated. This method does not involve defining and creating your own custom view. In iOS 6 and up, you can easily change the background color and the text color by defining the -(void)tableView:(UITableView *)tableView willDisplayHeaderView:(UIView *)view forSection:(NSInteger)section delegate method. For example: - (void)tableView:(UITableView *)tableView willDisplayHeaderView:(UIView *)view forSection:(NSInteger)section{ // Background color view.tintColor = [UIColor blackColor]; // Text Color UITableViewHeaderFooterView *header = (UITableViewHeaderFooterView *)view; [header.textLabel setTextColor:[UIColor whiteColor]]; // Another way to set the background color // Note: does not preserve gradient effect of original header // header.contentView.backgroundColor = [UIColor blackColor];} Taken from my post here: https://happyteamlabs.com/blog/ios-how-to-customize-table-view-header-and-footer-colors/ Swift 5.0: func tableView(_ tableView: UITableView, willDisplayHeaderView view: UIView, forSection section: Int) { if let header = view as? UITableViewHeaderFooterView { header.textLabel?.textColor = .white }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8990880', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/289251/']} | jdg_380379 |
stackexchange | llm_judgeable_groundtruth_similarity | 2526304 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to extract the GPS EXIF tag from pictures using php.I'm using the exif_read_data() that returns a array of all tags + data : GPS.GPSLatitudeRef: NGPS.GPSLatitude:Array ( [0] => 46/1 [1] => 5403/100 [2] => 0/1 ) GPS.GPSLongitudeRef: EGPS.GPSLongitude:Array ( [0] => 7/1 [1] => 880/100 [2] => 0/1 ) GPS.GPSAltitudeRef: GPS.GPSAltitude: 634/1 I don't know how to interpret 46/1 5403/100 and 0/1 ? 46 might be 46° but what about the rest especially 0/1 ? angle/1 5403/100 0/1 What is this structure about ? How to convert them to "standard" ones (like 46°56′48″N 7°26′39″E from wikipedia) ? I would like to pass thoses coordinates to the google maps api to display the pictures positions on a map !
Now provide the response and nothing else.
| This is my modified version. The other ones didn't work for me. It will give you the decimal versions of the GPS coordinates. The code to process the EXIF data: $exif = exif_read_data($filename);$lon = getGps($exif["GPSLongitude"], $exif['GPSLongitudeRef']);$lat = getGps($exif["GPSLatitude"], $exif['GPSLatitudeRef']);var_dump($lat, $lon); Prints out in this format: float(-33.8751666667)float(151.207166667) Here are the functions: function getGps($exifCoord, $hemi) { $degrees = count($exifCoord) > 0 ? gps2Num($exifCoord[0]) : 0; $minutes = count($exifCoord) > 1 ? gps2Num($exifCoord[1]) : 0; $seconds = count($exifCoord) > 2 ? gps2Num($exifCoord[2]) : 0; $flip = ($hemi == 'W' or $hemi == 'S') ? -1 : 1; return $flip * ($degrees + $minutes / 60 + $seconds / 3600);}function gps2Num($coordPart) { $parts = explode('/', $coordPart); if (count($parts) <= 0) return 0; if (count($parts) == 1) return $parts[0]; return floatval($parts[0]) / floatval($parts[1]);} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2526304', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/86240/']} | jdg_380380 |
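The conversion itself is just degrees + minutes/60 + seconds/3600, with a sign taken from the hemisphere letter; here is a Python sketch of the same logic for readers outside PHP (my addition — the function name is made up for illustration):

```python
from fractions import Fraction

def gps_to_decimal(coord, hemisphere):
    """Convert EXIF rational DMS parts like ['46/1', '5403/100', '0/1']
    into signed decimal degrees."""
    degrees, minutes, seconds = (float(Fraction(part)) for part in coord)
    # South and West hemispheres are negative in decimal notation.
    sign = -1 if hemisphere in ("S", "W") else 1
    return sign * (degrees + minutes / 60 + seconds / 3600)

print(gps_to_decimal(["46/1", "5403/100", "0/1"], "N"))  # ≈ 46.9005
```

The result can be passed straight to a mapping API that expects decimal latitude/longitude.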
stackexchange | llm_judgeable_groundtruth_similarity | 2279281 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$\displaystyle F(x)=\int_0^x (t-2)f(t)\; dt$ with $f(0)=1$, $f(1)=0$ has an extremum in $(0,3)$? The title explains a lot. Given$$\displaystyle F(x)=\int_0^x (t-2)f(t)\,dt$$with $f(0)=1$, $f(1)=0$ and $f:\mathbb{R}\to\mathbb{R}$ is a strictly decreasing differentiable function. Then: 1) $F$ is strictly increasing in $[0,3]$ 2) $F$ has a unique maximum but no minimum in $(0,3)$ 3) $F$ has a unique minimum but no maximum in $(0,3)$ 4) $F$ has both maximum and minimum in $(0,3)$ I have done $F''(x)=(x-2)f'(x)+f(x)$ but no idea how to verify the options. Any help is appreciated.
Now provide the response and nothing else.
| $$F'(x)=(x-2)f(x)$$ $f$ is strictly decreasing and differentiable (thus continuous). So $f(x) > 0$ on $[0,1)$, $f(1)=0$, and $f(x) <0$ on $(1,3]$. $F'(x)<0$ on $[0,1)$, $F'(1) =0$, $F'(x) >0$ on $(1,2)$, $F'(2)=0$, and $F'(x)<0$ on $(2,3]$ So $1$ is the minimum point, $2$ is the maximum point. $F(x)$ first decreases, and then reaches local minimum, then increases, then reaches local maximum, and then decreases again. $F''(1) >0$ and $F''(2) < 0$ also confirms this. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2279281', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/335483/']} | jdg_380381 |
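As a numerical sanity check of the sign analysis above (my addition, using one concrete admissible function rather than the general argument), take f(t) = 1 − t, which is strictly decreasing with f(0) = 1 and f(1) = 0:

```python
# F'(x) = (x - 2) * f(x) with the sample choice f(t) = 1 - t.
def f(t):
    return 1 - t

def F_prime(x):
    return (x - 2) * f(x)

# Sign pattern: F decreases on [0,1), has a minimum at 1,
# increases on (1,2), has a maximum at 2, then decreases on (2,3].
assert F_prime(0.5) < 0
assert F_prime(1.0) == 0
assert F_prime(1.5) > 0
assert F_prime(2.0) == 0
assert F_prime(2.5) < 0
print("minimum at x = 1, maximum at x = 2")
```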
stackexchange | llm_judgeable_groundtruth_similarity | 25815162 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't resolve one service while another is being resolved? If that's what the error means... What is the way to wait until it is resolved? @Overridepublic void onServiceFound(NsdServiceInfo service) { Log.d(TAG, "Service found. " + service); if (service.getServiceType().equals(SERVICE_TYPE)) { if (service.getServiceName().contains(mServiceName)) { mNsdManager.resolveService(service, mResolveListener); } }} java.lang.IllegalArgumentException: listener already in use at android.net.nsd.NsdManager.resolveService(NsdManager.java:613) at com.example.miguel.broadcast.LocalService$2.onServiceFound(LocalService.java:145)
Now provide the response and nothing else.
| You don't have to wait! If you look at the javadocs for resolveService(NsdServiceInfo serviceInfo, NsdManager.ResolveListener listener) here you'll notice that for the parameter listener it says "to receive callback upon success or failure. Cannot be null. Cannot be in use for an active service resolution." Therefore in order for this to work just do the following: mNsdManager.resolveService(service, new MyResolveListener()); Where MyResolveListener is: private class MyResolveListener implements NsdManager.ResolveListener { @Override public void onResolveFailed(NsdServiceInfo serviceInfo, int errorCode) { //your code } @Override public void onServiceResolved(NsdServiceInfo serviceInfo) { //your code } } hope this helps :) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25815162', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1020026/']} | jdg_380382 |
stackexchange | llm_judgeable_groundtruth_similarity | 610208 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Git repository and I'd like to see how some files looked a few months ago. I found the revision at that date; it's 27cf8e84bb88e24ae4b4b3df2b77aab91a3735d8 . I need to see what one file looks like, and also save it as a ("new") file. I managed to see the file using gitk , but it doesn't have an option to save it. I tried with command-line tools, the closest I got was: git-show 27cf8e84bb88e24ae4b4b3df2b77aab91a3735d8 my_file.txt However, this command shows a diff, and not the file contents. I know I can later use something like PAGER=cat and redirect output to a file, but I don't know how to get to the actual file content. Basically, I'm looking for something like svn cat .
Now provide the response and nothing else.
| Using git show To complete your own answer, the syntax is indeed git show objectgit show $REV:$FILEgit show somebranch:from/the/root/myfile.txtgit show HEAD^^^:test/test.py The command takes the usual style of revision, meaning you can use any of the following: branch name (as suggested by ash ) HEAD + x number of ^ characters The SHA1 hash of a given revision The first few (maybe 5) characters of a given SHA1 hash Tip It's important to remember that when using " git show ", always specify a path from the root of the repository , not your current directory position. (Although Mike Morearty mentions that, at least with git 1.7.5.4, you can specify a relative path by putting " ./ " at the beginning of the path. For example: git show HEAD^^:./test.py ) Using git restore With Git 2.23+ (August 2019), you can also use git restore which replaces the confusing git checkout command git restore -s <SHA1> -- afilegit restore -s somebranch -- afile That would restore on the working tree only the file as present in the "source" ( -s ) commit SHA1 or branch somebranch . To restore also the index: git restore -s <SHA1> -SW -- afile ( -SW : short for --staged --worktree ) As noted in the comments by starwarswii It lets you pipe the contents into a file, which is great if you want to just quickly compare files from a commit. E.g. you can do: git show 1234:path/to/file.txt > new.txt git show 1234~:path/to/file.txt > old.txt then compare them. 
Using low-level git plumbing commands Before git1.5.x, this was done with some plumbing: git ls-tree <rev> show a list of one or more 'blob' objects within a commit git cat-file blob <file-SHA1> cat a file as it has been committed within a specific revision (similar to svn cat). Use git ls-tree to retrieve the value of a given file-sha1 git cat-file -p $(git-ls-tree $REV $file | cut -d " " -f 3 | cut -f 1):: git-ls-tree lists the object ID for $file in revision $REV , this is cut out of the output and used as an argument to git-cat-file , which should really be called git-cat-object , and simply dumps that object to stdout . Note: since Git 2.11 (Q4 2016), you can apply a content filter to the git cat-file output. See commit 3214594 , commit 7bcf341 (09 Sep 2016), commit 7bcf341 (09 Sep 2016), and commit b9e62f6 , commit 16dcc29 (24 Aug 2016) by Johannes Schindelin ( dscho ) . (Merged by Junio C Hamano -- gitster -- in commit 7889ed2 , 21 Sep 2016) git config diff.txt.textconv "tr A-Za-z N-ZA-Mn-za-m <"git cat-file --textconv --batch Note: " git cat-file --textconv " started segfaulting recently (2017), which has been corrected in Git 2.15 (Q4 2017) See commit cc0ea7c (21 Sep 2017) by Jeff King ( peff ) . (Merged by Junio C Hamano -- gitster -- in commit bfbc2fc , 28 Sep 2017) | {} | {'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/610208', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14690/']} | jdg_380383 |
stackexchange | llm_judgeable_groundtruth_similarity | 299916 |
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I frequent the css tag often and many questions are tagged both with css and css3 . Leaving aside other common mistags ( html5 on a generic html question etc), when is appropriate to retag a css3 question to simply css ? Should css3 be used only for questions regarding problems with functionality added by CSS3 specifically? The list from MDN includes, but is not limited to: opacity & rgba numerous pseudoclasses ( :enabled ; :disabled ; :checked etc) media queries new values to existing properties ( background-attachment: local etc) and many more.
Now provide the response and nothing else.
| I don't see a benefit to the css3 tag at all. My recommendation would be to turn css3 into a synonym of css . There's no way to validate against a specific "version" like there is with HTML: it is either valid or it isn't. If the user has specific requirements regarding CSS, knowing the target browsers (eg. IE8+, Android 2.3, etc.) is more helpful than simply knowing "anything from CSS3 is perfectly fine". No one is going to turn their nose up at a solution that comes from a "version" prior to CSS3 (just look at how long negative text-indent for image replacement has stuck around, it's less verbose than more modern techniques despite having worse performance in certain browsers). Most people don't know what counts as CSS3 anyway (I don't, and I've been doing this for years). We have questions that get tagged css3 but not css (which is rather annoying when it comes to closing duplicates for those of us with the css badge, but not the css3 badge). I can't think of anything that was part of CSS2 that isn't also part of CSS3 (and often we need to provide both old and new techniques to maximize browser support in cases where the new technique offers a better experience: display: inline-block for flex items in browsers that don't support Flexbox) The current practice of adding tags based off of CSS modules (Flexbox, Animations, etc.) makes more sense than keeping around a tag for CSS3. | {} | {'log_upvote_score': 5, 'links': ['https://meta.stackoverflow.com/questions/299916', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/1470607/']} | jdg_380384 |
stackexchange | llm_judgeable_groundtruth_similarity | 49690 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $X$ be a topological space. In elementary algebraic topology, the cup product $\phi \cup \psi$ of cochains $\phi \in H^p(X), \psi \in H^q(X)$ is defined on a chain $\sigma \in C_{p+q}(X)$ by $(\phi \cup \psi)(\sigma) = \phi({}_p\sigma)\,\psi(\sigma_q)$ where ${}_p\sigma$ and $\sigma_q$ denote the restriction of $\sigma$ to the front $p$-face and the back $q$-face, respectively. (More generally, any diagonal approximation $C_{\ast}(X) \to C_{\ast}(X) \otimes C_{\ast}(X)$ could be used; this is the Alexander-Whitney one.) The cup product defined by the Alexander-Whitney diagonal approximation as above is associative for cochains but skew-commutative only up to homotopy (this results from the fact that the two diagonal approximations $C_{\ast}(X) \to C_{\ast}(X) \otimes C_{\ast}(X)$ given by Alexander-Whitney and its "flip" (with the signs introduced to make it a chain map) agree only up to chain homotopy). The commutative cochain problem attempts to fix this: that is, to find a graded-commutative differential graded algebra $C_1^*(X)$ associated functorially to $X$ (which may be restricted to be a simplicial complex) which is chain-equivalent to the usual (noncommutative) algebra $C^{\ast}(X)$ of singular cochains. In Rational homotopy theory and differential forms, Griffiths and Morgan mention briefly that there is no way to make the cup-product skew-commutative on cochains (that is, to solve the commutative cochain problem) with $\mathbb{Z}$-coefficients, and that this is because of the existence of cohomology operations. It is also asserted that these cohomology operations don't exist over $\mathbb{Q}$ (presumably excluding the power operations). Could someone explain what this means?
Now provide the response and nothing else.
| Via the Dold-Kan correspondence, the category of cosimplicial abelian groups is equivalent to the category of nonpositively graded chain complexes of abelian groups (using homological grading conventions). Both of these categories are symmetric monoidal: chain complexes via the usual tensor product of chain complexes, and cosimplicial abelian groups via the "pointwise" tensor product. But the Dold-Kan equivalence is not a symmetric monoidal functor. However, you can make it lax monoidal in either direction. The Alexander-Whitney construction makes the functor (cosimplicial abelian groups -> cochain complexes) into a lax monoidal functor, so that for every cosimplicial ring, the associated chain complex has the structure of a differential graded algebra. However, it is not lax symmetric monoidal, so the differential graded algebra you obtain is generally not commutative even if you started with something commutative. There is another construction (the "shuffle product") which makes the inverse functor (cochain complexes -> cosimplicial abelian groups) into a lax symmetric monoidal functor. In particular, it carries commutative algebras to commutative algebras. So every commutative differential graded algebra (concentrated in nonpositive degrees) determines a cosimplicial commutative ring. One way of phrasing the phenomenon you are asking about is as follows: not every cosimplicial commutative ring arises in this way, even up to homotopy equivalence. For example, if $A$ is a cosimplicial ${\mathbb F}_2$-algebra, then the cohomology groups of $A$ come equipped with some additional structures (Steenrod operations). Most of these operations automatically vanish in the case where $A$ is obtained from a commutative differential graded algebra. 
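For concreteness (my addition, following the usual chain-level conventions from the question): on an $n$-simplex $\sigma$ with vertices $v_0,\dots,v_n$, the Alexander-Whitney diagonal is the sum over front/back faces,

```latex
\mathrm{AW}(\sigma) \;=\; \sum_{p+q=n} {}_{p}\sigma \otimes \sigma_{q},
\qquad
{}_{p}\sigma = \sigma|_{[v_0,\dots,v_p]},
\quad
\sigma_{q} = \sigma|_{[v_p,\dots,v_{n}]},
```

which is manifestly asymmetric in the two tensor factors — the source of the failure of strict commutativity that the lax (non-symmetric) monoidal structure records.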
If $R$ is a commutative ring and $X$ is a topological space, you can obtain a cosimplicial commutative ring by associating to each degree $n$ the ring of $R$-valued cochains on $X$ (the ring structure is given by "pointwise" multiplication). These examples generally don't arise from commutative differential graded algebras unless $R$ is of characteristic zero. For example when $R = {\mathbb F}_2$, the $R$-cohomology of $X$ is acted on by Steenrod operations, and this action is generally nontrivial (and useful to know about). | {} | {'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/49690', 'https://mathoverflow.net', 'https://mathoverflow.net/users/344/']} | jdg_380385 |
stackexchange | llm_judgeable_groundtruth_similarity | 37526165 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to compile and run C# code at runtime in the new .NET Core (better .NET Standard Platform)? I have seen some examples (.NET Framework), but they used NuGet packages that are not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0)
Now provide the response and nothing else.
| Option #1 : Use the full C# compiler to compile an assembly, load it and then execute a method from it. This requires the following packages as dependencies in your project.json: "Microsoft.CodeAnalysis.CSharp": "1.3.0-beta1-20160429-01","System.Runtime.Loader": "4.0.0-rc2-24027", Then you can use code like this: var compilation = CSharpCompilation.Create("a") .WithOptions(new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary)) .AddReferences( MetadataReference.CreateFromFile(typeof(object).GetTypeInfo().Assembly.Location)) .AddSyntaxTrees(CSharpSyntaxTree.ParseText( @"using System;public static class C{ public static void M() { Console.WriteLine(""Hello Roslyn.""); }}"));var fileName = "a.dll";compilation.Emit(fileName);var a = AssemblyLoadContext.Default.LoadFromAssemblyPath(Path.GetFullPath(fileName));a.GetType("C").GetMethod("M").Invoke(null, null); Option #2 : Use Roslyn Scripting. This will result in much simpler code, but it currently requires more setup: Create NuGet.config to get packages from the Roslyn nightly feed: <?xml version="1.0" encoding="utf-8"?> <configuration> <packageSources> <add key="Roslyn Nightly" value="https://www.myget.org/F/roslyn-nightly/api/v3/index.json" /> </packageSources> </configuration> Add the following package as a dependency to project.json (notice that this is a package from today. You will need a different version in the future): "Microsoft.CodeAnalysis.CSharp.Scripting": "1.3.0-beta1-20160530-01", You also need to import dotnet (obsolete "Target Framework Moniker", which is nevertheless still used by Roslyn ): "frameworks": { "netcoreapp1.0": { "imports": "dotnet5.6" } } Now you can finally use Scripting: CSharpScript.EvaluateAsync(@"using System;Console.WriteLine(""Hello Roslyn."");").Wait(); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37526165', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6166647/']} | jdg_380386 |
stackexchange | llm_judgeable_groundtruth_similarity | 52513357 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Based on extensive searching, had thought backtick would do the trick, but no dice. At a lost for words why the code variable won't get passed into another variable. Used backticks, ${variable} and no dice. Ideas? exports.stripeConnect = functions.https.onRequest((req, res) => { var code = req.query.code; const ref = admin.database().ref(`/stripe_advisors/testing`); return ref.update({ code: code }); var request = require('request'); var dataString = `client_secret=sk_test_8gxfc3KHDoPC4pyFvitYcwf1&code=${code}&grant_type=authorization_code`; var options = { url: 'https://connect.stripe.com/oauth/token', method: 'POST', body: dataString }; function callback(error, response, body) { if (!error && response.statusCode == 200) { console.log(body); } } request(options, callback);});
Now provide the response and nothing else.
| Typically the answer is no, and in almost every situation, you should reconfigure your application to listen on 0.0.0.0. Any attempt to avoid changing the app to listen on all interfaces inside the container should be viewed as a hack that is adding technical debt to your project. To expand on my comment, each container by default runs in its own network namespace. The loopback interface inside a container is separate from the loopback interface on the host and in other containers. So if you listen on 127.0.0.1 inside a container, anything outside of that network namespace cannot access the port. It's not unlike listening on loopback on your VM and trying to connect from another VM to that port, Linux doesn't let you connect. There are a few workarounds: You can hack up the iptables to forward connections, but I'd personally avoid this. Docker is heavily based on automated changes to the iptables rules so your risk conflicting with that automation or getting broken the next time the container is recreated. You can setup a proxy inside your container that listens on all interfaces and forwards to the loopback interface. Something like nginx would work. You can get things in the same network namespace. That last one has two ways to implement. Between containers, you can run a container in the network namespace of another container. This is often done for debugging the network, and is also how pods work in kubernetes. Here's an example of running a second container: $ docker run -it --rm --net container:$(docker ps -lq) nicolaka/netshoot /bin/sh/ # ss -lntState Recv-Q Send-Q Local Address:Port Peer Address:PortLISTEN 0 10 127.0.0.1:8888 *:*LISTEN 0 128 127.0.0.11:41469 *:*/ # nc -v -v localhost 8888Connection to localhost 8888 port [tcp/8888] succeeded!TEST/ # Note the --net container:... (I used docker ps -lq to get the last started container id in my lab). This makes the two separate containers run in the same namespace. 
If you needed to access this from outside of docker, you can remove the network namespacing, and attach the container directly to the host network. For a one-off container, this can be done with docker run --net host ... In compose, this would look like: version: '3'services: myservice: network_mode: "host" build: . You can see the docker compose documentation on this option here . This is not supported in swarm mode, and you do not publish ports in this mode since you would be trying to publish the port between the same network namespaces. Side note, expose is not needed for any of this. It is only there for documentation, and some automated tooling, but otherwise does not impact container-to-container networking, nor does it impact the ability to publish a specific port. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52513357', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10328954/']} | jdg_380387 |
stackexchange | llm_judgeable_groundtruth_similarity | 40079 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This article in The Register claims that Pokémon Go may have a bad side effect on road safety. The headline is: Pokémon GO caused hundreds of deaths, increased crashes And the article summarises the research: ...the game caused “a disproportionate increase in vehicular crashes and associated vehicular damage, personal injuries, and fatalities in the vicinity of locations, called PokéStops, where users can play the game while driving." The results appear to be based on observation in a limited area extrapolated to the USA. Are the claimed results and their extrapolation credible?
Now provide the response and nothing else.
| This is the research paper in question . The research paper was just released Nov 18th, 2017. Typically it takes some time for a proper peer review, and after a proper peer review, the next step for notable research papers is generally publication. Compared to many other eye-raising claims, this one has the form of an authentic claim:

- The writers are associated with a prestigious university (Purdue).
- The data covered is specific and detailed (accident reports from Tippecanoe County, Indiana).
- The paper claims a small increase in accidents (25%) correlated to the locality of Pokemon stops.
- The paper contains a respectable number of references.

That said, the paper also contains a number of hints that it might not be a groundbreaking, or even fully meaningful, finding:

- The accidents are weighted by cost, which confounds understanding of incidence.
- The costs are low, with the delta weighing in just under $500,000 USD, with little indication of whether this is a statistically significant number.
- 85% of their baseline data was used (because it could be mapped to an intersection), with the rest of the data discarded. No comment was made on the protections against cherry-picking data, which could influence the outcome.
- They make assumptions about traffic patterns during school breaks, altering their analysis according to those assumptions, which might be in error. They probably should have just discarded that data.
- Their alternate-interpretations sections are victim to "begging the question": rather than assuming that the increase in accidents was due to non-Pokemon activity, they assume that it was due to Pokemon activity of people with parked cars. This assumes that Pokemon activity was present in the alternative interpretations, meaning that alternatives haven't really been explored.
- Their sample size is embarrassingly low.
For example, they do sub-analysis on pedestrian crashes, detailing the 36 pedestrian crashes that were noted, 4 of which occurred during their "with Pokemon" natural testing timeframe. An error of a single person (quite possible with a Poisson distribution) is a 20% to 25% error, which matches too closely to their 25% increase claim. Their data, which is naturally Poisson distributed, is being evaluated with Gaussian statistical tools. They make speculative projections, without a hint of the logic behind the projections. I'm decent at reading research papers, and the real proof won't come until it is properly peer reviewed; but really, it won't even come then. Many papers which are peer reviewed should never have been passed, and sometimes even junk gets published. To me, this seems to be an interesting observation, but one that doesn't survive the slant of self-bias. It might be true, but the paper doesn't seem to prove it. To prove it, I would prefer:

- A paper that detailed accidents at Pokemon stops by modeling a Poisson distribution of the patterns of accident occurrence for that intersection. This is not a trivial thing, because the number of incidents is so low that the error on the model would be quite high.
- A paper that showed the two (before game release, and after game release) accident distributions at those intersections changed in a statistically significant manner, for Poisson distributions. If their claim is that accidents increased, then it would be a supportable argument that the time to the next accident decreased, over the periods they wished to compare.
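To put a number on that Poisson concern (my own back-of-the-envelope sketch, not a computation from the paper): the 1-sigma uncertainty of a raw Poisson count N is sqrt(N), so a 4-event subsample carries roughly 50% relative error, and even a single extra crash is a 25% swing.

```python
import math

def poisson_relative_error(count):
    """Relative 1-sigma uncertainty of a raw Poisson count: sqrt(N) / N."""
    return math.sqrt(count) / count

# Only 4 pedestrian crashes fell in the "with Pokemon" window:
print(poisson_relative_error(4))           # 0.5, i.e. ~50% uncertainty
print(1 / 4)                               # 0.25: one extra crash is a 25% swing
# Even the 36-crash total is uncertain at the ~17% level:
print(round(poisson_relative_error(36), 3))
```

With error bars that wide, a reported 25% increase sits well inside the noise floor of the subsample.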
I do believe they found a difference, but without proper attention to the domain of the sample space, with so few data points being covered, and with the large number of caveats and structured reasonings about data to be included and excluded, I'm not sure that their differences can be attributed to the control variable (which this being a natural experiment, binding it to a control variable is the hard part). The paper probably will be cited by laymen, and rejected as "possibly true, but we can't tell" by the scientific community. | {} | {'log_upvote_score': 4, 'links': ['https://skeptics.stackexchange.com/questions/40079', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/3943/']} | jdg_380388 |
stackexchange | llm_judgeable_groundtruth_similarity | 7843733 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
While researching the use of Table Hints , I came across these two questions: Which lock hints should I use (T-SQL)? What effect does HOLDLOCK have on UPDLOCK? Answers to both questions say that when using (UPDLOCK, HOLDLOCK) , other processes will not be able to read data on that table, but I didn't see this. To test, I created a table and started up two SSMS windows. From the first window, I ran a transaction that selected from the table using various table hints. While the transaction was running, from the second window I ran various statements to see which would be blocked. The test table: CREATE TABLE [dbo].[Test]( [ID] [int] IDENTITY(1,1) NOT NULL, [Value] [nvarchar](50) NULL, CONSTRAINT [PK_Test] PRIMARY KEY CLUSTERED ( [ID] ASC)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]) ON [PRIMARY] From SSMS Window 1: BEGIN TRANSACTIONSELECT * FROM dbo.Test WITH (UPDLOCK, HOLDLOCK)WAITFOR DELAY '00:00:10'COMMIT TRANSACTION From SSMS Window 2 (ran one of the following): SELECT * FROM dbo.TestINSERT dbo.Test(Value) VALUES ('bar')UPDATE dbo.Test SET Value = 'baz' WHERE Value = 'bar'DELETE dbo.Test WHERE Value= 'baz' Effect of different table hints on statements run in Window 2: (UPDLOCK) (HOLDLOCK) (UPDLOCK, HOLDLOCK) (TABLOCKX)---------------------------------------------------------------------------SELECT not blocked not blocked not blocked blockedINSERT not blocked blocked blocked blockedUPDATE blocked blocked blocked blockedDELETE blocked blocked blocked blocked Did I misunderstand the answers given in those questions, or make a mistake in my testing? If not, why would you use (UPDLOCK, HOLDLOCK) vs. (HOLDLOCK) alone? Further explanation of what I am trying to accomplish: I would like to select rows from a table and prevent the data in that table from being modified while I am processing it. I am not modifying that data, and would like to allow reads to occur. 
This answer clearly says that (UPDLOCK, HOLDLOCK) will block reads (not what I want). The comments on this answer imply that it is HOLDLOCK that prevents reads. To try and better understand the effects of the table hints and see if UPDLOCK alone would do what I wanted, I did the above experiment and got results that contradict those answers. Currently, I believe that (HOLDLOCK) is what I should use, but I am concerned that I may have made a mistake or overlooked something that will come back to bite me in the future, hence this question.
Now provide the response and nothing else.
| Why would UPDLOCK block selects? The Lock Compatibility Matrix clearly shows N for the S/U and U/S contention, as in No Conflict. As for the HOLDLOCK hint, the documentation states:

HOLDLOCK: Is equivalent to SERIALIZABLE. For more information, see SERIALIZABLE later in this topic. ... SERIALIZABLE: ... The scan is performed with the same semantics as a transaction running at the SERIALIZABLE isolation level...

and the Transaction Isolation Level topic explains what SERIALIZABLE means:

No other transactions can modify data that has been read by the current transaction until the current transaction completes. Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.

Therefore the behavior you see is perfectly explained by the product documentation:

- UPDLOCK does not block concurrent SELECT nor INSERT, but blocks any UPDATE or DELETE of the rows selected by T1.
- HOLDLOCK means SERIALIZABLE and therefore allows SELECTs, but blocks UPDATEs and DELETEs of the rows selected by T1, as well as any INSERT in the range selected by T1 (which is the entire table, therefore any insert).
- (UPDLOCK, HOLDLOCK): your experiment does not show what would block in addition to the cases above, namely another transaction with UPDLOCK in T2: SELECT * FROM dbo.Test WITH (UPDLOCK) WHERE ...
- TABLOCKX: no need for explanations.

The real question is: what are you trying to achieve? Playing with lock hints without an absolute, complete, 110% understanding of the locking semantics is begging for trouble...

After OP edit: "I would like to select rows from a table and prevent the data in that table from being modified while I am processing it." Then you should use one of the higher transaction isolation levels. REPEATABLE READ will prevent the data you read from being modified. SERIALIZABLE will prevent the data you read from being modified and new data from being inserted.
Using transaction isolation levels is the right approach, as opposed to using query hints. Kendra Little has a nice poster explaining the isolation levels. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7843733', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/409259/']} | jdg_380389 |
stackexchange | llm_judgeable_groundtruth_similarity | 90239 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My question is: can we prove that this change of variables works, using my formula at the bottom? $$\iint_R f(r,\theta) \, dx\,dy = \int_a^b \int_0^{r(\theta)} f(r,\theta)\, r \,dr\, d\theta$$ as $dr$ and $d\theta$ approach $0$. Prove or disprove that $$\frac{((r+\Delta r) \cos(a +\Delta \theta) -r \cos a) \cdot ((r+\Delta r) \sin(a + \Delta \theta) -r \sin a)}{r \,\Delta \theta \, \Delta r}=1$$ as $\Delta r$ and $\Delta \theta$ approach $0$, where the variables are as represented in this graph. This question is inspired by $dx\,dy=r \,dr \,d\theta$.
Now provide the response and nothing else.
| The diagram that you are giving is not what is going on at all. $\mathrm{d}x\;\mathrm{d}y$ is an element of area intended to represent the plane broken up into small rectangles.: $\mathrm{d}r\;\mathrm{d}\theta$ is an element of area in a space whose small squares get mapped to small annular wedges by $x=r\cos(\theta)$, $y=r\sin(\theta)$: The Jacobian is the matrix that locally maps between two coordinate systems.$$\frac{\partial(x,y)}{\partial(u,v)}=\begin{bmatrix}\frac{\partial x}{\partial u}&\frac{\partial y}{\partial u}\\\frac{\partial x}{\partial v}&\frac{\partial y}{\partial v}\end{bmatrix}\tag{1}$$From polar to rectangular coordinates, the Jacobian is$$\frac{\partial(x,y)}{\partial(r,\theta)}=\begin{bmatrix}\cos(\theta)&\sin(\theta)\\-r\sin(\theta)&r\cos(\theta)\end{bmatrix}\tag{2}$$Note that $\begin{vmatrix}\cos(\theta)&\sin(\theta)\\-r\sin(\theta)&r\cos(\theta)\end{vmatrix}=r$. A small area in $\mathrm{d}r\;\mathrm{d}\theta$, the green square, is mapped by the polar coordinate map to the blue annular wedge, which has approximately the same area as the red rectangle. The Jacobian matrix maps the green square to the red rectangle. The ratio of the area of the red rectangle to the green square is the determinant of the Jacobian (this is just linear algebra). Therefore, since a small square in $\mathrm{d}r\;\mathrm{d}\theta$ is mapped by the coordinate transform so that its area is $\left|\frac{\partial(x,y)}{\partial(r,\theta)}\right|\;\mathrm{d}r\;\mathrm{d}\theta$$$\begin{align}\iint f(x,y)\;\mathrm{d}x\;\mathrm{d}y&=\iint f(r,\theta)\left|\frac{\partial(x,y)}{\partial(r,\theta)}\right|\;\mathrm{d}r\;\mathrm{d}\theta\\&=\iint f(r,\theta)\;r\;\mathrm{d}r\;\mathrm{d}\theta\tag{3}\end{align}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/90239', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/12796/']} | jdg_380390 |
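As a quick numerical sanity check of the area element in (3) (my own illustration, not part of the original answer): the exact area of the annular wedge between radii $r$ and $r+\Delta r$ spanning an angle $\Delta\theta$ is $\tfrac12\bigl((r+\Delta r)^2-r^2\bigr)\Delta\theta$, and its ratio to $r\,\Delta r\,\Delta\theta$ tends to 1 as the steps shrink.

```python
def wedge_area(r, dr, dtheta):
    """Exact area of the annular wedge between radii r and r + dr spanning dtheta."""
    return 0.5 * ((r + dr) ** 2 - r ** 2) * dtheta

r = 2.0
for step in (0.1, 0.01, 0.001):
    ratio = wedge_area(r, step, step) / (r * step * step)
    print(step, ratio)   # ratio = 1 + dr/(2r), approaching 1
```

Algebraically the ratio is $1+\Delta r/(2r)$, which is exactly why the determinant of the Jacobian, $r$, is the right correction factor in the limit.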
stackexchange | llm_judgeable_groundtruth_similarity | 12863648 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've started using Objective-C blocks today. I wrote the following code: NSArray *array = @[@25, @"abc", @7.2];void (^print)(NSUInteger index) = ^(NSUInteger index){ NSLog(@"%@", array[index]);};for (int n = 0; n < 3; n++) print(n); Which works properly. I needed to change the array variable after its declaration, though, so I tried using the following code: NSArray *array;void (^print)(NSUInteger index) = ^(NSUInteger index){ NSLog(@"%@", array[index]);};array = @[@25, @"abc", @7.2];for (int n = 0; n < 3; n++) print(n); However, that doesn't work. The console just prints (null) three times. Why is it that this doesn't work, while it did work with my first piece of code?
Now provide the response and nothing else.
| It's because the block captures variables by value and when the block is created (unless you use __block ). What you probably want is: NSArray *array = @[@25, @"abc", @7.2];void (^print)(NSUInteger index) = ^(NSUInteger index){ NSLog(@"%@", array[index]);};for (int n = 0; n < 3; n++) print(n); Example with __block : __block NSArray *array;void (^print)(NSUInteger index) = ^(NSUInteger index){ NSLog(@"%@", array[index]);};array = @[@25, @"abc", @7.2];for (int n = 0; n < 3; n++) print(n); Note that it's a little less efficient to use __block if you don't actually need to modify the variable inside the block and have it reflected outside. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12863648', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1223781/']} | jdg_380391 |
stackexchange | llm_judgeable_groundtruth_similarity | 4735566 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to decode a string I took from file: file = open ("./Downloads/lamp-post.csv", 'r')data = file.readlines()data[0] '\xff\xfeK\x00e\x00y\x00w\x00o\x00r\x00d\x00\t\x00C\x00o\x00m\x00p\x00e\x00t\x00i\x00t\x00i\x00o\x00n\x00\t\x00G\x00l\x00o\x00b\x00a\x00l\x00 \x00M\x00o\x00n\x00t\x00h\x00l\x00y\x00 \x00S\x00e\x00a\x00r\x00c\x00h\x00e\x00s\x00\t\x00D\x00e\x00c\x00 \x002\x000\x001\x000\x00\t\x00N\x00o\x00v\x00 \x002\x000\x001\x000\x00\t\x00O\x00c\x00t\x00 \x002\x000\x001\x000\x00\t\x00S\x00e\x00p\x00 \x002\x000\x001\x000\x00\t\x00A\x00u\x00g\x00 \x002\x000\x001\x000\x00\t\x00J\x00u\x00l\x00 \x002\x000\x001\x000\x00\t\x00J\x00u\x00n\x00 \x002\x000\x001\x000\x00\t\x00M\x00a\x00y\x00 \x002\x000\x001\x000\x00\t\x00A\x00p\x00r\x00 \x002\x000\x001\x000\x00\t\x00M\x00a\x00r\x00 \x002\x000\x001\x000\x00\t\x00F\x00e\x00b\x00 \x002\x000\x001\x000\x00\t\x00J\x00a\x00n\x00 \x002\x000\x001\x000\x00\t\x00A\x00d\x00 \x00s\x00h\x00a\x00r\x00e\x00\t\x00S\x00e\x00a\x00r\x00c\x00h\x00 \x00s\x00h\x00a\x00r\x00e\x00\t\x00E\x00s\x00t\x00i\x00m\x00a\x00t\x00e\x00d\x00 \x00A\x00v\x00g\x00.\x00 \x00C\x00P\x00C\x00\t\x00E\x00x\x00t\x00r\x00a\x00c\x00t\x00e\x00d\x00 \x00F\x00r\x00o\x00m\x00 \x00W\x00e\x00b\x00 \x00P\x00a\x00g\x00e\x00\t\x00L\x00o\x00c\x00a\x00l\x00 \x00M\x00o\x00n\x00t\x00h\x00l\x00y\x00 \x00S\x00e\x00a\x00r\x00c\x00h\x00e\x00s\x00\n' Adding ignore do not really help...: In [69]: data[2] Out[69]: u'\u6700\u6100\u7200\u6400\u6500\u6e00\u2000\u6c00\u6100\u6d00\u7000\u2000\u7000\u6f00\u7300\u7400\u0900\u3000\u2e00\u3900\u3400\u0900\u3800\u3800\u3000\u0900\u2d00\u0900\u3300\u3200\u3000\u0900\u3300\u3900\u3000\u0900\u3300\u3900\u3000\u0900\u3400\u3800\u3000\u0900\u3500\u3900\u3000\u0900\u3500\u3900\u3000\u0900\u3700\u3200\u3000\u0900\u3700\u3200\u3000\u0900\u3300\u3900\u3000\u0900\u3300\u3200\u3000\u0900\u3200\u3600\u3000\u0900\u2d00\u0900\u2d00\u0900\ua300\u3200\u2e00\u3100\u3800\u0900\u2d00\u0900\u3400\u3800\u3000\u0a00' In [70]: data[2].decode("utf-8", "replace") 
---------------------------------------------------------------------------
UnicodeEncodeError                        Traceback (most recent call last)
/Users/oleg/<ipython console> in <module>()
/opt/local/lib/python2.5/encodings/utf_8.py in decode(input, errors)
     14
     15 def decode(input, errors='strict'):
---> 16     return codecs.utf_8_decode(input, errors, True)
     17
     18 class IncrementalEncoder(codecs.IncrementalEncoder):
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-87: ordinal not in range(128)
In [71]:
Now provide the response and nothing else.
| This looks like UTF-16 data. So try data[0].rstrip("\n").decode("utf-16") Edit (for your update): Try to decode the whole file at once, that is data = open(...).read()data.decode("utf-16") The problem is that the line breaks in UTF-16 are "\n\x00", but using readlines() will split at the "\n", leaving the "\x00" character for the next line. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4735566', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/126545/']} | jdg_380392 |
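A small Python 3 sketch of that failure mode (the explicit little-endian encoding mirrors the \xff\xfe BOM visible in the question; the original answer targets Python 2):

```python
raw = "Keyword\tCompetition\nGlobal Monthly Searches".encode("utf-16-le")

# Decoding the whole buffer at once works fine:
assert raw.decode("utf-16-le") == "Keyword\tCompetition\nGlobal Monthly Searches"

# But splitting on the single byte b"\n" (what readlines() effectively does)
# strands the b"\x00" half of the two-byte newline at the start of the
# next chunk, leaving it with an odd byte count:
first, rest = raw.split(b"\n", 1)
assert first.decode("utf-16-le") == "Keyword\tCompetition"  # looks OK...
assert rest.startswith(b"\x00") and len(rest) % 2 == 1      # ...but this is corrupt
```

Reading the whole file and decoding it once, as the answer suggests, avoids ever cutting between the two bytes of a UTF-16 code unit.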
stackexchange | llm_judgeable_groundtruth_similarity | 11661545 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I run collection.find() in MongoDB/Node/Express, I'd like to get a callback when it's finished. What's the correct syntax for this? function (id,callback) { var o_id = new BSON.ObjectID(id); db.open(function(err,db){ db.collection('users',function(err,collection){ collection.find({'_id':o_id},function(err,results){ //What's the correct callback synatax here? db.close(); callback(results); }) //find }) //collection }); //open }
Now provide the response and nothing else.
| That's the correct callback syntax, but what find provides to the callback is a Cursor , not an array of documents. So if you want your callback to provide results as an array of documents, call toArray on the cursor to return them: collection.find({'_id':o_id}, function(err, cursor){ cursor.toArray(callback); db.close();}); Note that your function's callback still needs to provide an err parameter so that the caller knows whether the query worked or not. 2.x Driver Update find now returns the cursor rather than providing it via a callback, so the typical usage can be simplified to: collection.find({'_id': o_id}).toArray(function(err, results) {...}); Or in this case where a single document is expected, it's simpler to use findOne : collection.findOne({'_id': o_id}, function(err, result) {...}); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11661545', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/336781/']} | jdg_380393 |
stackexchange | llm_judgeable_groundtruth_similarity | 2391233 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that constructors have no return type in C++. However, the code below compiles fine. What is returned by the constructor in the code below?

class A
{
public:
    A() {}
};

A a = A(); // what is returned by A() here, and why? Is there any conflict here?
Now provide the response and nothing else.
| Nothing is returned from the constructor. The syntax A() is not a constructor call, it creates a temporary object of type A (and calls the constructor in the process). You can't call a constructor directly, constructors are called as a part of object construction. In your code, during the construction of the temporary the default constructor (the one you defined) is called. Then, during the construction of a , the copy constructor (generated automatically by the compiler) is called with the temporary as an argument. As Greg correctly points out, in some circumstances (including this one), the compiler is allowed to avoid the copy-construction and default-construct a (the copy-constructor must be accessible however). I know of no compiler that wouldn't perform such optimization. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2391233', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/233254/']} | jdg_380394 |
stackexchange | llm_judgeable_groundtruth_similarity | 14101099 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to use ng-switch with ng-include below. The problem is with ng-init and the whole controller block getting re-rendered on any ng-includes change. In the login_form.html, when a user logins, I set the isLoggedIn = true, in the LoginCtrl. However this causes the re-rendering of the full html below, which causes the ng-init again. How do I avoid this cycle? <div ng-controller="LoginCtrl" ng-init="isLoggedIn = false" class="span4 pull-right"> <div ng-switch on="isLoggedIn"> <div ng-switch-when="false" ng-include src="'login_form.html'"></div> <div ng-switch-when="true" ng-include src="'profile_links.html'"></div> </div> </div> Below is the HTML for the login form - <form class="form-inline"> <input type="text" placeholder="Email" ng-model="userEmail" class="input-small"/> <input type="password" placeholder="Password" ng-model="userPassword" class="input-small"/> <button type="submit" ng-click="login(userEmail, userPassword)" class="btn">Sign In</button></form> Below is the controller - angularApp.controller('LoginCtrl', function($scope, currentUser){ $scope.loginStatus = function(){ return currentUser.isLoggedIn(); };/* $scope.$on('login', function(event, args) { $scope.userName = args.name; }); $scope.$on('logout', function(event, args) { $scope.isLoggedIn = false; });*/ $scope.login = function(email, password){ currentUser.login(email, password); }; $scope.logout = function(){ currentUser.logout(); };}); Blow is the service - angularApp.factory('currentUser', function($rootScope) { // Service logic // ... 
// var allUsers = {"[email protected]": {name: "Robert Patterson", role: "Admin", email: "[email protected]", password: "rob"}, "[email protected]":{name: "Steve Sheldon", role: "User", email: "[email protected]", password: "steve"}} var isUserLoggedIn = false; // Public API here return { login: function(email, password){ var user = allUsers[email]; var storedPass = user.password; if(storedPass === password){ isUserLoggedIn = true; return true; } else { return false; } }, logout: function(){ $rootScope.$broadcast('logout'); isUserLoggedIn = false; }, isLoggedIn: function(){ return isUserLoggedIn; } };});
Now provide the response and nothing else.
| I believe your problem is a result of the way prototypal inheritance works. ng-include creates its own child scope. Assigning a primitive value in a child scope creates a new property on that scope that shadows/hides the parent property. I'm guessing that in login_form.html you do something like the following when a user logs in: <a ng-click="isLoggedIn=true">login</a> Before isLoggedIn is set to true, this is what your scopes look like: After isLoggedIn is set to true, this is what your scopes look like: Hopefully the pictures make it clear why this is causing you problems. For more information about why prototypal inheritance works this way with primitives, please see What are the nuances of scope prototypal / prototypical inheritance in AngularJS? As the above link explains, you have three solutions: define an object in the parent for your model, then reference a property of that object in the child: parentObj.isLoggedIn use $parent.isLoggedIn in login_form.html -- this will then reference the primitive in the $parent scope, rather than create a new one. E.g., <a ng-click="$parent.isLoggedIn=true">login</a> define a function on the parent scope, and call it from the child -- e.g., setIsLoggedIn(). This will ensure the parent scope property is being set, not a child scope property. Update : in reviewing your HTML, you may actually have two levels of child scopes, since ng-switch and ng-include each create their own scopes. So, the pictures would need a grandchild scope, but the three solutions are the same... except for #2, where you would need to use $parent.$parent.isLoggedIn -- ugly. So I suggest option 1 or 3. Update2 : @murtaza52 added some code to the question... Remove ng-init="isLoggedIn = false" from your controller (your service is managing the login state via its isUserLoggedIn variable) and switch on loginStatus() in your controller: <div ng-switch on="loginStatus()"> . Here is a working fiddle . 
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14101099', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/780799/']} | jdg_380395 |
stackexchange | llm_judgeable_groundtruth_similarity | 7361802 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I remove the slash in the output of InetAddress.getByName ?

UPDATE: Thanks everyone, I just did it. One of the solutions is:

String ip_old = myInetAddress.toString();
String ip_new = ip_old.substring(1);
Now provide the response and nothing else.
| If you just want the IP, use the host address: String address = InetAddress.getByName("stackoverflow.com").getHostAddress(); If you just want the host name, use String hostname = InetAddress.getByName("stackoverflow.com").getHostName(); Edit The slash you're seeing is probably when you do an implicit toString() on the returned InetAddress as you try to print it out, which prints the host name and address delimited by a slash (e.g. stackoverflow.com/64.34.119.12 ). You could use String address = InetAddress.getByName("stackoverflow.com").toString().split("/")[1];String hostname = InetAddress.getByName("stackoverflow.com").toString().split("/")[0]; But there is no reason at all to go to a String intermediary here. InetAddress keeps the two fields separate intrinsically. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7361802', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/827663/']} | jdg_380396 |
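The string-splitting fallback at the end is just "split on the first slash", a technique that is easy to illustrate in any language; a hypothetical Python sketch over the same example value:

```python
s = "stackoverflow.com/64.34.119.12"   # what the implicit toString() prints
hostname, address = s.split("/")
print(hostname)   # stackoverflow.com
print(address)    # 64.34.119.12
```

As the answer notes, though, there is no reason to go through a string intermediary when getHostName() and getHostAddress() already keep the two fields separate.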
stackexchange | llm_judgeable_groundtruth_similarity | 19947084 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am a mobile application developer working on native apps as well as PhoneGap. I have built an application in PhoneGap that works perfectly on phones but does not work on some tablets. I found this out here, as per the Google Play Store policy. But they do not show a reason: whether it is due to some kind of invalid code, or due to some specific error for which Google does not allow the app to run on specific devices. Query 1: I have a Samsung Galaxy Tab 2 and Tab 3. My app is not working on either tablet, as per the unsupported device availability list, and when users search from those tablets they cannot find the app in the Play Store either. In my app I have set the requirements for all devices and tablets as below:

<?xml version="1.0" encoding="utf-8" ?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="my_pkg"
    android:hardwareAccelerated="true"
    android:versionCode="6"
    android:versionName="1.5"
    android:windowSoftInputMode="adjustPan">
    <supports-screens
        android:anyDensity="true"
        android:largeScreens="true"
        android:normalScreens="true"
        android:resizeable="true"
        android:smallScreens="true"
        android:xlargeScreens="true" />
    <uses-sdk android:minSdkVersion="7" android:targetSdkVersion="18" />
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
    <uses-permission android:name="android.permission.BROADCAST_STICKY" />
    <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
    <uses-permission android:name="android.permission.VIBRATE" />
    <application
        android:debuggable="false"
        android:hardwareAccelerated="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name">
        <receiver android:name="my_pkg.AlarmReceiver">
        </receiver>
        <receiver android:name="my_pkg.AlarmRestoreOnBoot">
            <intent-filter>
                <action android:name="android.intent.action.BOOT_COMPLETED" />
            </intent-filter>
        </receiver>
        <activity
            android:name="my_pkg.MainActivity"
            android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale"
            android:label="@string/app_name"
            android:theme="@android:style/Theme.Black.NoTitleBar"
            android:windowSoftInputMode="adjustPan">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>

So, I am stuck here: what is the issue with my app, and why does the Google Play Store not allow my app to be installed on some devices and tablets? A week ago the total number of unsupported devices & tablets was 1550, and now it is 1153. Query 2: How can I find the mistakes due to which I am not able to run my app on tablets, and how do I resolve them? Please help me resolve this issue, because without knowing the reason how can I solve it?
Now provide the response and nothing else.
| Finally, after spending a long time on R&D, I have found a solution. Actually, I am not the sole author of this answer; I really thank all who helped me solve this issue. Divesh Salian's answer is very close. I tried his instructions, and along with that I altered my config.xml file, which had some unnecessary <features></features> tags. I asked my question on the Samsung Developer forum and also tried to test my app in RTL, but I was not able to log in in the web browser (I don't know why). Finally I applied some other alternatives regarding <uses-feature></uses-feature>:

<uses-feature android:name="android.hardware.telephony" android:required="false" />
<uses-feature android:name="android.hardware.microphone" android:required="false" />
<uses-feature android:name="android.hardware.camera" android:required="false" />
<uses-feature android:name="android.hardware.camera.autofocus" android:required="false" />
<uses-feature android:name="android.hardware.camera.front" android:required="false" />
<uses-feature android:name="android.software.input_methods" android:required="false" />

I use the above features, but I do not want the app to be filtered based on them, so I have set android:required="false" for those specific features.

Check the below things if you find this type of problem:

How does Google Play filter the apps?
Are your uses-feature entries really required?
Is your version code valid?
Is your app compatible with all screens?
Have you added permissions which are not required for your app?

I hope these links will solve your problem definitely. Thanks to all again!!!

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19947084', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1562548/']} | jdg_380397
stackexchange | llm_judgeable_groundtruth_similarity | 407351 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider a matrix X where the columns are the attributes and each row is an example. There is no order of occurrence of the examples. The target variables are continuous valued. Does normalization to 0 mean and 1 standard deviation work by considering the mean and standard deviation of each column separately, or of each row? Do we have a mean and sigma for each column? My confusion is that each feature vector is in a row representing an example, so feature normalization should be done for each example. But I may be wrong.

QUESTION 1) Should each row be normalized, or each column? Is this the correct approach for calculating the mean and standard deviation for normalization (in MATLAB), before then applying them for normalization to zero mean and 1 std? I am calculating the mean and sigma for each attribute across all examples.

%X = data matrix; rows are the examples, columns are the attributes
for i = 1:size(X, 2),
    mu(i) = mean(X(:,i));
    sigma(i) = std(X(:,i));
end

Example:

X = [1 2 4; 1 1 1; 3 2 2; 0 0 0]; %4 examples and 3 attributes
%for each column there is a mu and sigma
mu = 1.2500 1.2500 1.7500
sigma = 1.2583 0.9574 1.7078

QUESTION 2) My dataset is composed of the inputs and targets (continuous valued). Should normalization of the entire dataset, containing both input and target, be done all together, and only then the splitting of the normalized dataset into train, validation and test sets (and train target and test target)? Please correct me where wrong.
Now provide the response and nothing else.
| Steady on there! You have two very small samples there. Statistics is not taught at Hogwarts! No white magic for very small samples. Not rejecting the null on Shapiro-Wilk doesn't allow the description "is normally distributed", but rather a much more circumspect "not enough evidence to be clear that this isn't normally distributed". Let's look at graphs, for data separate (left) and data pooled (right). The graphs would be straight if data were from a normal. I see two things there: Not too bad in terms of (non-)normality for very small samples, but not the same slope, meaning different variability. Checking that, I find the SD for tree is 0.030, and that for knn 0.015: a two-fold difference. The t test should be allowed to follow suit, but what you called copes with unequal variability. Most crucially, no one (competent) promises exactly the same P-values. Different tests focus on different information. For this kind of problem and data, they shouldn't be wildly contradictory, no more, no less. PS: My own view is that the graph is more interesting and more convincing than any formal test, but those who review your work might want to hear the clank of testing machinery and see the wheels turning.

| {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/407351', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/61820/']} | jdg_380398
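On Query 1 of the question above: standardization is conventionally done per column/attribute, exactly as the question's MATLAB loop computes one mu and sigma per column. A minimal Python sketch of the same computation (the helper name is illustrative, not from the thread):

```python
from statistics import mean, stdev

def column_standardize(X):
    """Z-score each column (attribute) with that column's mean and sample std."""
    cols = list(zip(*X))                 # transpose: one tuple per attribute
    mu = [mean(c) for c in cols]
    sigma = [stdev(c) for c in cols]
    Z = [[(x - m) / s for x, m, s in zip(row, mu, sigma)] for row in X]
    return Z, mu, sigma

X = [[1, 2, 4], [1, 1, 1], [3, 2, 2], [0, 0, 0]]  # 4 examples, 3 attributes
Z, mu, sigma = column_standardize(X)
print(mu)                              # [1.25, 1.25, 1.75], matching the MATLAB output
print([round(s, 4) for s in sigma])    # [1.2583, 0.9574, 1.7078]
```

On Query 2, the usual practice is to estimate mu and sigma on the training split only and apply those same values to the validation and test splits, rather than normalizing before splitting.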
stackexchange | llm_judgeable_groundtruth_similarity | 724364 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How realistic is the science behind the Lightyear movie? Buzz Lightyear flies into space, and accelerates up to 70% of the speed of light. When he lands, suddenly four years have passed back home, while it was only about 5-10 minutes for him. Is that realistic?
Now provide the response and nothing else.
| The description of his flight from the movie is here and their explanation of time dilation is here (both YouTube videos). It makes no sense for several reasons. The gamma factor for $0.7c$ is only $1.4$ , so a few minutes for him would just be a few more minutes for those who stayed behind. The flight plan slingshots around the star Alpha Takoni and starts and ends on Takoni Prime, which is a class M planet that presumably orbits Alpha Takoni, making it only a few light minutes away. That means a roundtrip to the star at anything close to $c$ would last only a few minutes for those who stayed behind. As his speed approached $c$ , the $γ$ factor would approach infinity, but that's the factor by which the trip is shorter for him . The only way the times would work out is if the star was actually light years away and his speed was $0.99999...\,c$ . In the second clip, Buzz actually says "the faster I fly, the further into the future I travel", as though repeating the trip at a higher speed would take even longer. That's not how speed works. They make it clear that he can't decelerate to turn around at the star, he has to slingshot. You can't slingshot around a star at a substantial fraction of the speed of light. The amount your path will deflect at that speed is a scarcely measurable fraction of a degree, not a full 180 degrees. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/724364', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/298854/']} | jdg_380399 |
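The gamma-factor claim above (only about 1.4 at 0.7c) is easy to verify numerically. A small sketch (the helper name is illustrative):

```python
import math

def lorentz_gamma(beta):
    """Time-dilation factor for a speed expressed as a fraction beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

print(round(lorentz_gamma(0.7), 2))   # 1.4: minutes for Buzz map to minutes at home
print(lorentz_gamma(0.99999) > 200)   # True: only near-c speeds give huge factors
```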
stackexchange | llm_judgeable_groundtruth_similarity | 24860426 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to add p3p headers to the static resource location on a standard Nodejs & Nginx Elastic Beanstalk. I've created an ebextension script as explained in this question. The script uses set to add an add_header directive under the alias line, which is under the static location directive. It runs on the /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf file. The script not only modifies the file, it also copies it to a "safe" location, i.e. /home/ec2-user. According to /var/log/cfn-init.log, the script runs correctly. As evidence, the copy of the modified file shows the additional header in the right place. But the /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf file does not have this modification. I can only deduce that although my script runs fine, something else later in the deployment process overwrites it. Which is strange, because according to the documentation, container commands are run after the application and web server have been set up, so I don't see what could be doing it. So who/what is overwriting this file, and how can I prevent that?
Now provide the response and nothing else.
| It seems that Elastic Beanstalk has changed and the commonly recommended approach/hack of overwriting #etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf doesn't work any more. Nor does creating any file in /tmp/deployment/config. The solution I found was to overwrite /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf directly, using a container_commands directive, since these commands are executed after the Elastic Beanstalk install creates its version of the nginx config. From http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-container-commands : They [container_commands] run after the application and web server have been set up and the application version file has been extracted, but before the application version is deployed. I did this in three steps within .ebextensions:

1. Create my version of the nginx config file.
2. Create a script to overwrite the standard config file with my own.
3. Run the script.

The first two steps happen earlier in the install process, while the last uses container_commands and so, as described previously, happens late in the install.
Here's the files I used:

File .ebextensions/install_nginx_config_01.config (note that the indenting is important):

#
# STEP 1 - Create the nginx config file
#
files:
  "/tmp/my.nginx.conf" :
    mode: "000755"
    owner: root
    group: root
    content: |
      # This file was overwritten during deployment
      # by .ebextensions/install_nginx_config_03.config
      upstream nodejs {
          server 127.0.0.1:3000;
          keepalive 256;
      }
      server {
          listen 8080;
          location / {
              proxy_pass http://nodejs;
              proxy_set_header Connection "";
              proxy_http_version 1.1;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
          gzip on;
          gzip_comp_level 4;
          gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
      }

File .ebextensions/install_nginx_config_02.config :

#
# STEP 2 - Create a script that will overwrite the Nginx config
#
files:
  "/tmp/install-nginx-config.sh" :
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      cp /tmp/my.nginx.conf /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf

File .ebextensions/install_nginx_config_03.config :

#
# STEP 3 - Run the script to overwrite the nginx config template.
#
container_commands:
  01_runmyshellscript:
    command: "/tmp/install-nginx-config.sh"

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24860426', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/29568/']} | jdg_380400
stackexchange | llm_judgeable_groundtruth_similarity | 4971436 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently came across this:

static enum Response {
    NO_ERROR = 0,
    MISSING_DESCRIPTOR,
    ...
};

It compiles and works under Microsoft VS2005. However, I'm not sure what the 'static' modifier is supposed to do. Is it any different from the following?

enum Response {
    NO_ERROR = 0,
    MISSING_DESCRIPTOR,
    ...
};
Now provide the response and nothing else.
| That exact code, with just the ellipsis removed, is not valid C++. You can't use the static storage class specifier in an enum declaration; it doesn't make any sense there (only objects, functions, and anonymous unions can be declared static). You can, however, declare an enum and a variable all in one declaration:

static enum Response {
    NO_ERROR = 0,
    MISSING_DESCRIPTOR
} x;

The static here applies to x and it is effectively the same as if you said:

enum Response {
    NO_ERROR = 0,
    MISSING_DESCRIPTOR
};
static Response x;

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4971436', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/312209/']} | jdg_380401
stackexchange | llm_judgeable_groundtruth_similarity | 4908545 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
A special type of tree is given where all leaves are marked with L and all other nodes are marked with N. Every node can have 0 up to at most 2 child nodes. The preorder traversal of the tree is given. Give an algorithm to build the tree from this traversal.
Now provide the response and nothing else.
| This is the preorder traversal algorithm:

Preorder(T)
    if (T is not null)
        print T.label
        Preorder(T.left)
        Preorder(T.right)

Let's try to find an algorithm for an input of NNLLNL. Obviously the label of the root is printed first. So you know the root has label N. Now the algorithm recurses on the left subtree. This is also N according to the input. Recurse on the left subtree of that, which is L. Now you have to backtrack, because you've reached a leaf. The next position in the input is also L, so the current node has a right child labeled with L. Backtrack once. Backtrack again, because you've added all the children of the current node (max 2 children). Now you're at the root again. You have to go right, because you already went left. According to the input, this is N. So the right child of the root is N. The left child of that will be L. This is your tree:

    N
   / \
  N   N
 / \  /
L   L L

Note that the solution is not necessarily unique, but this will get you a possible solution. Pseudocode:

k = 0
input = ... get preorder traversal vector from user ...

Reconstruct(T)
    if input[k] == N
        T = new node with label N
        k = k + 1
        Reconstruct(T.left)
        Reconstruct(T.right)
    else
        T = new node with label L
        T.left = T.right = null
        k = k + 1

Call with a null node.

Follow-up question: given both the preorder and the inorder traversal of a binary tree containing distinct node labels, how can you uniquely reconstruct the tree?

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4908545', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/247560/']} | jdg_380402
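The pseudocode in the answer translates almost line for line into a runnable sketch. One guard is added for inputs (like NNLLNL) that end while a right subtree is still expected; in that case the child is simply left absent, matching the tree drawn in the answer. Names here are illustrative:

```python
def reconstruct(s, k=0):
    """Build a tree from a preorder string of 'N'/'L'; returns (node, next_index).
    A node is a (label, left, right) tuple."""
    if k >= len(s):             # input exhausted: no child here
        return None, k
    if s[k] == 'N':
        left, k = reconstruct(s, k + 1)
        right, k = reconstruct(s, k)
        return ('N', left, right), k
    return ('L', None, None), k + 1

def preorder(node):
    if node is None:
        return ''
    label, left, right = node
    return label + preorder(left) + preorder(right)

tree, _ = reconstruct('NNLLNL')
print(preorder(tree))           # NNLLNL: the input traversal is recovered
```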
stackexchange | llm_judgeable_groundtruth_similarity | 30374725 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here is the description of DNX: The .NET Execution Environment (DNX) is a software development kit (SDK) and runtime environment that has everything you need to build and run .NET applications for Windows, Mac and Linux. It provides a host process, CLR hosting logic and managed entry point discovery. DNX was built for running cross-platform ASP.NET Web applications, but it can run other types of .NET applications, too, such as cross-platform console apps. Is DNX an alternative to Mono? If not, then what will be the difference?
Now provide the response and nothing else.
| Is DNX an alternative to Mono? If not, then what will be the difference? Mono is an open sourced development platform. Its implementation is based on the CLI specification, such as the platform that is provided by Microsoft. It includes a C# compiler, a runtime, a BCL and something called MCL (Mono Class Library, which is an extension to the BCL). Mono itself can run on Linux, OS X, BSD and Windows on varying architectures. DNX is an SDK containing all of the bits needed to build and run an application (including custom utilities such as dnu, which is used to build and package the app), including the CLR (currently it deploys with CoreCLR). This CoreCLR can also be switched with Mono, which means it will be consuming all services of the Mono runtime, compiler, etc. Mono, as opposed to DNX, provides the full platform (Runtime, BCL, JIT, etc.). DNX is used at the lowest level as the native process which invokes the CoreCLR. DNX would be used for scenarios such as self-hosting or building and running from the command line. As @xanatos points out, DNX aspires to be able to ship the runtime with the application, so that multiple runtimes will be able to live side by side without interfering with each other. Perhaps this image can clarify: [diagram omitted]. Here is the list of runtimes on top of which DNX can run (x86 shows twice as it is the default):

Active Version            Runtime Architecture Location                        Alias
------ -------            ------- ------------ --------                        -----
  *    1.0.0-beta2-10735  clr     x86          C:\Users\victorhu\.dnx\runtimes default
       1.0.0-dev          clr     x64          C:\Users\victorhu\.dnx\runtimes clr-x64-dev
       1.0.0-dev          clr     x86          C:\Users\victorhu\.dnx\runtimes clr-x86-dev
       1.0.0-dev          coreclr x64          C:\Users\victorhu\.dnx\runtimes coreclr-x64-dev
       1.0.0-dev          coreclr x86          C:\Users\victorhu\.dnx\runtimes coreclr-x86-dev
       1.0.0-dev          mono                 C:\Users\victorhu\.dnx\runtimes mono-dev

There is an extensive wiki page explaining the DNX structure for more. @Will points out the ASP.NET docs page as well.
Update: 25/02/2016 DNX is now retired in favor of .NET CLI Tools . Update: 8/6/2020 .NET CLI Tools are now retired in favor of dotnet sdk | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30374725', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/672532/']} | jdg_380403 |
stackexchange | llm_judgeable_groundtruth_similarity | 9968114 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to send an HTTPS GET request to the Google Shopping API, however nothing is quite working for me. For example, here is what I'm trying at the moment:

try {
    HttpClient client = new DefaultHttpClient();
    HttpGet request = new HttpGet();
    request.setURI(new URI("https://www.googleapis.com/shopping/search/v1/public/products/?key={my_key}&country=&q=t-shirts&alt=json&rankByrelevancy="));
    HttpResponse response = client.execute(request);
} catch (URISyntaxException e) {
    e.printStackTrace();
} catch (ClientProtocolException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
return response;

If anyone has any suggestions on how to improve this or replace it, please let me know. Thanks in advance.
Now provide the response and nothing else.
| You should be getting a compile error. This is the correct version:

HttpResponse response = null;
try {
    HttpClient client = new DefaultHttpClient();
    HttpGet request = new HttpGet();
    request.setURI(new URI("https://www.googleapis.com/shopping/search/v1/public/products/?key={my_key}&country=&q=t-shirts&alt=json&rankByrelevancy="));
    response = client.execute(request);
} catch (URISyntaxException e) {
    e.printStackTrace();
} catch (ClientProtocolException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
return response;

Therefore now if you have an error your response will be returned as null. Once you have the response and checked it for null, you'll want to get the content (i.e. your JSON).

http://developer.android.com/reference/org/apache/http/HttpResponse.html
http://developer.android.com/reference/org/apache/http/HttpEntity.html
http://developer.android.com/reference/java/io/InputStream.html

response.getEntity().getContent();

This gives you an InputStream to work with. If you want to convert this to a string you'd do the below or equivalent:

http://www.mkyong.com/java/how-to-convert-inputstream-to-string-in-java/

public static String convertStreamToString(InputStream inputStream) throws IOException {
    if (inputStream != null) {
        Writer writer = new StringWriter();
        char[] buffer = new char[1024];
        try {
            Reader reader = new BufferedReader(new InputStreamReader(inputStream, "UTF-8"), 1024);
            int n;
            while ((n = reader.read(buffer)) != -1) {
                writer.write(buffer, 0, n);
            }
        } finally {
            inputStream.close();
        }
        return writer.toString();
    } else {
        return "";
    }
}

When you have this string you need to create a JSONObject from it:

http://developer.android.com/reference/org/json/JSONObject.html

JSONObject json = new JSONObject(inputStreamAsString);

Done!
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9968114', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/938401/']} | jdg_380404 |
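The buffered read loop in convertStreamToString above is a language-independent pattern: read fixed-size chunks until the stream is exhausted, then join them. The same loop in Python, run against an in-memory stream so the sketch stays self-contained (the helper name is illustrative):

```python
import io

def stream_to_string(stream, bufsize=1024):
    """Mimic the Java helper: read a text stream in fixed-size chunks and join."""
    pieces = []
    while True:
        chunk = stream.read(bufsize)
        if not chunk:              # read() returns '' at end-of-stream (Java uses -1)
            break
        pieces.append(chunk)
    return ''.join(pieces)

payload = '{"q": "t-shirts"}' * 200          # larger than one buffer
print(stream_to_string(io.StringIO(payload)) == payload)   # True
```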
stackexchange | llm_judgeable_groundtruth_similarity | 109876 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given that n is a positive integer show that $\gcd(n^3 + 1, n^2 + 2) = 1$, $3$, or $9$. I'm thinking that I should be using the property of gcd that says if a and b are integers then gcd(a,b) = gcd(a+cb,b). So I can do things like decide that $\gcd(n^3 + 1, n^2 + 2) = \gcd((n^3+1) - n(n^2+2),n^2+2) = \gcd(1-2n,n^2+2)$ and then using Bezout's theorem I can get $\gcd(1-2n,n^2+2)= r(1-2n) + s(n^2 +2)$ and I can expand this to $r(1-2n) + s(n^2 +2) = r - 2rn + sn^2 + 2s$ However after some time of chasing this path using various substitutions and factorings I've gotten nowhere. Can anybody provide a hint as to how I should be looking at this problem?
Now provide the response and nothing else.
| As you note, $\gcd(n^3+1,n^2+2) = \gcd(1-2n,n^2+2)$. Now, continuing in that manner,$$\begin{align*}\gcd(1-2n, n^2+2) &= \gcd(2n-1,n^2+2)\\ &= \gcd(2n-1, n^2+2+2n-1)\\ &= \gcd(2n-1,n^2+2n+1)\\ &= \gcd(2n-1,(n+1)^2).\end{align*}$$ Consider now $\gcd(2n-1,n+1)$. We have:$$\begin{align*}\gcd(2n-1,n+1) &= \gcd(n-2,n+1) \\&= \gcd(n-2,n+1-(n-2))\\&=\gcd(n-2,3)\\ &= 1\text{ or }3.\end{align*}$$ Therefore, the gcd of $2n-1$ and $(n+1)^2$ is either $1$, $3$, or $9$. Hence the same is true of the original gcd. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/109876', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1582/']} | jdg_380405 |
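The chain of gcd reductions above can be spot-checked by brute force; all three values 1, 3 and 9 really occur (e.g. at n = 1, n = 2 and n = 5 respectively):

```python
from math import gcd

# Every value of gcd(n^3 + 1, n^2 + 2) for small n is 1, 3 or 9.
values = {gcd(n**3 + 1, n**2 + 2) for n in range(1, 2000)}
print(sorted(values))             # [1, 3, 9]
print(gcd(5**3 + 1, 5**2 + 2))    # 9, i.e. gcd(126, 27)
```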
stackexchange | llm_judgeable_groundtruth_similarity | 2313378 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Evaluate the following double integral $$\int_0^1 \int_0^{1-y} \cos \left(\frac{x-y}{x+y}\right)dxdy$$ I tried transforming to \begin{align}x+y &=u\\x-y &=v\end{align} but I think it is getting complicated. Thanks in advance.
Now provide the response and nothing else.
| You are on the right track. By letting $u=x+y$ , $v=x-y$ , we have $$\left|\frac{\partial (x,y)}{\partial (u,v)}\right|=\left|\frac{\partial (u,v)}{\partial (x,y)}\right|^{-1}=\left|\det \begin{pmatrix}1 & 1 \\ 1 & -1\end{pmatrix}\right|^{-1}=|-2|^{-1}=\frac{1}{2}.$$ Moreover the triangle with vertices $(0,0)$ , $(1,0)$ and $(0,1)$ in the $xy$ -plane is transformed into the triangle with vertices $(0,0)$ , $(1,1)$ and $(1,-1)$ in the $uv$ -plane.Therefore $$\int_{y=0}^1\left( \int_{x=0}^{1-y} \cos\left(\frac{x-y}{x+y}\right)dx\right)dy=\frac{1}{2}\int_{u=0}^1\left(\int_{v=-u}^u\cos(v/u)dv\right) du.$$ Can you take it from here? | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2313378', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/405163/']} | jdg_380406 |
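Taking it from there: the inner integral is $\int_{-u}^{u}\cos(v/u)\,dv = 2u\sin(1)$, so the whole expression becomes $\frac{1}{2}\int_0^1 2u\sin(1)\,du = \sin(1)/2 \approx 0.4207$. A midpoint-rule check over the original triangle confirms this (the grid size is an arbitrary choice):

```python
import math

# Numerically integrate cos((x-y)/(x+y)) over the triangle 0 <= x <= 1-y, 0 <= y <= 1
# with a midpoint rule, and compare with the closed form sin(1)/2.
N = 400
total = 0.0
for j in range(N):
    y = (j + 0.5) / N
    width = 1.0 - y                      # x runs from 0 to 1-y
    for i in range(N):
        x = (i + 0.5) * width / N
        total += math.cos((x - y) / (x + y)) * (width / N) / N
print(abs(total - math.sin(1) / 2) < 0.01)   # True
```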
stackexchange | llm_judgeable_groundtruth_similarity | 47407 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Is there any non-trivial ring such that $SL_{3}(R)$ is isomorphic to a subgroup of $SL_{2}(R)$? $SL_{3}(\mathbb{Z})$ is not an amalgam, and has the wrong number of order $2$ elements to be a subgroup of $SL_{2}(\mathbb{Z})$. Is there any non-trivial ring where this occurs? When can this definitely not occur? I am trying to understand if there is any sort of group-theoretically apparent notion of dimension here.
Now provide the response and nothing else.
| (Edit: I edited a bit. Now the answer should be clearer, simpler and even correct. Thank you Andrei Smolensky for repeatedly correcting my embarrassing mistakes here.) I am surprised that this old question was not fully answered yet. The answer is "No", and it is well known in some circles. In fact, a far more general statement holds:

1) Let $R$ and $S$ be rings (commutative with 1). Then any group homomorphism $\text{SL}_3(R)\to\text{SL}_2(S)$ factors via the non-trivial quotient $\text{SK}_1(3,R):=\text{SL}_3(R)/\text{EL}_3(R)$, where $\text{EL}_3(R)$ is the subgroup generated by elementary matrices in $\text{SL}_3(R)$.

Note that $\text{SL}_3(R)$ contains an epimorphic image of $\text{SL}_3(\mathbb{Z})$ (induced by the map $\mathbb{Z}\to R$). It is well known (and easy) that $\mathrm{EL}_3(\mathbb{Z})\simeq \mathrm{SL}_3(\mathbb{Z})$, thus the image of $\text{SL}_3(\mathbb{Z})$ is contained in $\mathrm{EL}_3(R)$, and in fact $\mathrm{EL}_3(R)$ is generated by $\text{SL}_3(\mathbb{Z})$ as a normal subgroup (as you can observe by playing with commutation relations of elementary matrices). Thus (1) is equivalent to:

2) Let $S$ be a ring (commutative with 1). Then any group homomorphism $\text{SL}_3(\mathbb{Z})\to\text{SL}_2(S)$ is trivial.

We now fix a homomorphism as in statement (2) and assume its image is non-trivial. Let $\mathfrak{n}<S$ denote the nilpotent radical. It is easy to see that every finitely generated subgroup of the kernel of $\text{SL}_2(S)\to \text{SL}_2(S/\mathfrak{n})$ is nilpotent.
By the facts that $\text{SL}_3(\mathbb{Z})$ is finitely generated without nilpotent quotients we deduce that it is mapped non-trivially to $\text{SL}_2(S/\mathfrak{n})$.Since $\mathfrak{n}$ is the intersection of all prime ideal we deduce that $\text{SL}_3(\mathbb{Z})$ is mapped non-trivially to $\text{SL}_2(S/\mathfrak{p})$ for some prime ideal $\mathfrak{p}<S$.By letting $k$ be the field of fractions of $S/\mathfrak{p}$ we see that it is enough to provethe following statement: 3) Let $k$ be a field. Then any group homomorphism $\text{SL}_3(\mathbb{Z})\to\text{SL}_2(k)$ is trivial. (here I had before an argument I liked, but I had to replace it by a simpler one.) Here is a nice exercise: 4) Let $k$ be a field. Then for any group homomorphism $\text{H}(\mathbb{Z})\to\text{SL}_2(k)$, where $\text{H}(\mathbb{Z})$ is the integral Heisenberg group, the image of the center (=commutator group) of $\text{H}(\mathbb{Z})$ consists of scalar matrices. Hint: Assume the image of a generator of the center is not a scalar matrix and show that $\text{H}(\mathbb{Z})$ is in the Borel, in which every nilpotent group is abelian (you may assume that $k$ is algebraically closed here). Remark: Actually, the image of the center of $\text{H}(\mathbb{Z})$ will be trivial unless $\text{char}(k)=2$. To finish up with (3), observe that every elementary matrix in $\text{SL}_3(\mathbb{Z})$ is the center of a conjugate of $\text{H}(\mathbb{Z})$, thus the image of $\text{SL}_3(\mathbb{Z})$ consists of scalar matrices. But $\text{SL}_3(\mathbb{Z})$ is perfect, so this image is trivial. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/47407', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4100/']} | jdg_380407 |
stackexchange | llm_judgeable_groundtruth_similarity | 45709054 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create a JSON object like below, but I am not able to add the second item to it, e.g. "CarType": "mercedes", "carID": "merc123", and also the other items. I want to create JSON like this:

{
    cars: [
        { "CarType": "BMW", "carID": "bmw123" },
        { "CarType": "mercedes", "carID": "merc123" },
        { "CarType": "volvo", "carID": "vol123r" },
        { "CarType": "ford", "carID": "ford123" }
    ]
};

I have tried so far:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "cJSON.h"

int main (void)
{
    char field_name[32], value[32], *out;
    cJSON *root, *car;

    root = cJSON_CreateObject();
    car = cJSON_CreateArray();
    cJSON_AddItemToObject(root, "CarType", cJSON_CreateString("BMW"));
    cJSON_AddItemToObject(root, "carID", cJSON_CreateString("bmw123"));
    cJSON_AddItemToArray(car, root);
    out = cJSON_Print(car);
    printf("%s\n", out);
    return 0;
}

My output is something like this (indentation is exactly as shown here):

[{
    "CarType": "BMW",
    "carID": "bmw123"
}]
Now provide the response and nothing else.
| The following code will show you how to use the cJSON functions like cJSON_CreateObject(), cJSON_CreateArray(), cJSON_AddItemToObject() and cJSON_AddItemToArray(). You have to add the cars array to the root object. After that you have to create each car as an object containing items, which are the CarType and carID. Each car object has to be added to the cars array. It is also very well documented with examples here at GitHub. Edit #1: As @JonnySchubert pointed out, it's necessary to free allocated resources. But it's enough to free the root node in this case, because adding an item to an array or object transfers its ownership. In other words: freeing the root node will cause freeing of all nodes under root as well. From the GitHub resource I linked above: For every value type there is a cJSON_Create... function that can be used to create an item of that type. All of these will allocate a cJSON struct that can later be deleted with cJSON_Delete. Note that you have to delete them at some point, otherwise you will get a memory leak. Important: If you have added an item to an array or an object already, you mustn't delete it with cJSON_Delete. Adding it to an array or object transfers its ownership, so that when that array or object is deleted, it gets deleted as well. Edit #2: @lsalamon mentioned that you have to free the return value of cJSON_Print; see here on SO for example, and the documentation.
Code:

#include <stdio.h>
#include <stdlib.h>
#include "cJSON.h"

int main()
{
    char *out;
    cJSON *root, *cars, *car;

    /* create root node and array */
    root = cJSON_CreateObject();
    cars = cJSON_CreateArray();

    /* add cars array to root */
    cJSON_AddItemToObject(root, "cars", cars);

    /* add 1st car to cars array */
    cJSON_AddItemToArray(cars, car = cJSON_CreateObject());
    cJSON_AddItemToObject(car, "CarType", cJSON_CreateString("BMW"));
    cJSON_AddItemToObject(car, "carID", cJSON_CreateString("bmw123"));

    /* add 2nd car to cars array */
    cJSON_AddItemToArray(cars, car = cJSON_CreateObject());
    cJSON_AddItemToObject(car, "CarType", cJSON_CreateString("mercedes"));
    cJSON_AddItemToObject(car, "carID", cJSON_CreateString("mercedes123"));

    /* print everything */
    out = cJSON_Print(root);
    printf("%s\n", out);
    free(out);

    /* free all objects under root and root itself */
    cJSON_Delete(root);

    return 0;
}

Output:

{
    "cars": [{
        "CarType": "BMW",
        "carID": "bmw123"
    }, {
        "CarType": "mercedes",
        "carID": "mercedes123"
    }]
}

This code just adds 2 cars to show the usage. In your real application you should do that with C arrays and a for loop.

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45709054', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4833530/']} | jdg_380408
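The closing remark of the answer suggests building the array with a loop in real code. For comparison, here is the same document produced by a loop in Python's standard json module (not cJSON; this only illustrates the shape such a loop takes):

```python
import json

cars = [("BMW", "bmw123"), ("mercedes", "mercedes123")]
root = {"cars": [{"CarType": car_type, "carID": car_id} for car_type, car_id in cars]}
print(json.dumps(root, indent=4))
```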
stackexchange | llm_judgeable_groundtruth_similarity | 22385189 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to create a new VFAT image and add a few files to it. # Create file of 1MB size:dd if=/dev/zero of=my-image.fat count=1 bs=1M# Format file as VFAT:mkfs.vfat ./my-image.fat Now I want to add the files ./abc, ./def and ./ghi to the image. How do I do that without mount -o loop or fusermount ?I only want to write to a new, empty, pristine VFAT image.I don't need deleting appending or any "complicated" operations. I tried 7z -a because 7zip can read VFAT images, but it does not know how to write to it.
Now provide the response and nothing else.
| I want to do the exact same thing as part of an image build for an embedded system. It's really annoying that the entire build, which takes ~3hrs, could be completely unattended except for the final steps which required a password in order to mount a VFAT image. Fortunately, I found a set of tools which solve the problem. You want mcopy provided by GNU mtools . Mtools is a collection of utilities to access MS-DOS disks from GNU and Unix without mounting them. It also supports disk images such as VFAT image files. As an example, the following command will copy the file hello.txt from your current directory into the subdirectory subdir of the VFAT file system in ~/images/fat_file.img : mcopy -i ~/images/fat_file.img hello.txt ::subdir/hello.txt There are more useful inclusions in mtools, such as mdir and mtype which are great for inspecting your image file without having to mount it. mdir -i ~/images/fat_file.img ::mdir -i ~/images/fat_file.img ::subdirmtype -i ~/images/fat_file.img ::subdir/hello.txt | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/22385189', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/416224/']} | jdg_380409 |
stackexchange | llm_judgeable_groundtruth_similarity | 25631989 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to insert text to the third line in a file using sed, and the syntax I've found on other forums is: sed -i '' "3i\ text to insert" file When I use this however, I get an error: sed: 1: "3i\ text to insert": extra characters after \ at the end of i command I can't seem to figure out what is causing the problem. I'm using OSX, which is why I have an empty ' ' as my extension. Thanks!
Now provide the response and nothing else.
| You should put a newline directly after the \ : sed '3i\text to insert' file This is actually the behaviour defined by the POSIX specification . The fact that GNU sed allows you to specify the text to be inserted on the same line is an extension. If for some reason you need to use double quotes around the sed command, then you must escape the backslash at the end of the first line: sed "3i\\text to insert" file This is because a double-quoted string is processed first by the shell, and \ followed by a newline is removed: $ echo "abc\def"abcdef | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25631989', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3843029/']} | jdg_380410 |
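The insertion itself is easy to mirror in plain Python, which can be handy for sanity-checking what the sed command should produce — `3i\` inserts *before* the addressed line 3 (a sketch for comparison, not a sed replacement):

```python
def insert_before_line(text, lineno, new_line):
    """Insert new_line before 1-based line `lineno`, like sed's `3i\\`."""
    lines = text.splitlines()
    lines.insert(lineno - 1, new_line)
    return "\n".join(lines) + "\n"

sample = "first\nsecond\nthird\nfourth\n"
result = insert_before_line(sample, 3, "text to insert")
print(result)
```

The inserted line lands between "second" and "third", exactly where sed's `3i` would put it.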
stackexchange | llm_judgeable_groundtruth_similarity | 4366837 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
From what I've read it's used to fix bugs in the CPU without modifying the BIOS.From my basic knowledge of Assembly I know that assembly instructions are split into microcodes internally by the CPU and executed accordingly. But intel somehow gives access to make some updates while the system is up and running. Anyone has more info on them? Is there any documentation regarding what can it be done with microcodes and how can they be used? EDIT: I've read the wikipedia article: didn't figure out how can I write some on my own, and what uses it would have.
Now provide the response and nothing else.
| In older times, microcode was heavily used in CPUs: every single instruction was split into microcode. This enabled relatively complex instruction sets in a modest CPU (consider that a Motorola 68000 , with its many operand modes and eight 32-bit registers, fits in 40000 transistors, whereas a single-core modern x86 will have more than a hundred million ). This is not true anymore. For performance reasons, most instructions are now "hardwired": their interpretation is performed by inflexible circuitry, outside of any microcode. In a recent x86, it is plausible that some complex instructions such as fsin (which computes the sine function on a floating point value) are implemented with microcode, but simple instructions (including integer multiplication with imul ) are not. This limits what can be achieved with custom microcode. That being said, microcode format is not only very specific to the specific processor model (e.g. microcode for a Pentium III and a Pentium IV cannot be freely exchanged with each other -- and, of course, using Intel microcode for an AMD processor is out of the question), but it is also a severely protected secret. Intel has published the method by which an operating system or a motherboard BIOS may update the microcode (it must be done after each hard reset; the update is kept in volatile RAM) but the microcode contents are undocumented. The Intel® 64 and IA-32 Architectures Software Developer’s Manual (volume 3a) describes the update procedure (section 9.11 "microcode update facilities") but states that the actual microcode is "encrypted" and chock-full of checksums. The wording is vague enough that just about any kind of cryptographic protection may be hidden, but the bottom-line is that it is not currently possible, for people other than Intel, to write and try some custom microcode. 
If the "encryption" does not include a digital (asymmetric) signature and/or if the people at Intel botched the protection system somehow, then it may be conceivable that some remarkable reverse-engineering effort could potentially enable one to produce such microcode, but, given the probably limited applicability (since most instructions are hardwired), chances are that this would not buy much, as far as programming power is concerned. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4366837', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11301/']} | jdg_380411 |
stackexchange | llm_judgeable_groundtruth_similarity | 352175 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't make out what's in series and what's in parallel in this circuit. The solution I've thought of so far is to consider R1 in series with R4 and R2 in series with R5. Then, (R1+R4), R3 and (R2+R5) would be in parallel and their resultant resistance would be in parallel with R6. I'm not entirely certain this solution is correct though. A nudge in the right direction would be appreciated.
Now provide the response and nothing else.
| Keyword Y-Δ-transformation : You can transform the Y-configuration at R3, R4, R5 into a Δ-configuration Ra, Rb, Rc and then see that there are three parallel configurations (R1 || Ra, R2 || Rb and R6 || Rc) which can be simplified. simulate this circuit – Schematic created using CircuitLab | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/352175', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/175973/']} | jdg_380412 |
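The transformation the answer relies on is mechanical enough to sketch in a few lines. The resistor values below are placeholders (the actual schematic values aren't reproducible here); the point is the formula, with the Y-resistor *opposite* each terminal pair in the denominator:

```python
def y_to_delta(r1, r2, r3):
    """Convert a Y (star) of r1, r2, r3 into the equivalent delta.

    r1, r2, r3 attach to terminals A, B, C respectively; the returned
    (Rab, Rbc, Rca) each divide the common product-sum by the Y resistor
    opposite that terminal pair.
    """
    p = r1 * r2 + r2 * r3 + r3 * r1
    return p / r3, p / r1, p / r2

def parallel(ra, rb):
    """Equivalent resistance of two resistors in parallel."""
    return ra * rb / (ra + rb)

# Sanity check: a symmetric 1-ohm Y becomes a 3-ohm delta.
print(y_to_delta(1.0, 1.0, 1.0))
```

After the transform, each delta branch sits directly in parallel with one of the remaining resistors (R1 || Ra, R2 || Rb, R6 || Rc), so `parallel()` finishes the reduction.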
stackexchange | llm_judgeable_groundtruth_similarity | 14302248 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to add lines to the object account.bank.statement.line through other object But I get following error: "dictionary update sequence element #0 has length 3; 2 is required" Here is my code: def action_account_line_create(self, cr, uid, ids): res = False cash_id = self.pool.get('account.bank.statement.line') for exp in self.browse(cr, uid, ids): company_id = exp.company_id.id #statement_id = exp.statement_id.id lines = [] for l in exp.line_ids: lines.append((0, 0, { 'name': l.name, 'date': l.date, 'amount': l.amount, 'type': l.type, 'statement_id': exp.statement_id.id, 'account_id': l.account_id.id, 'account_analytic_id': l.analytic_account_id.id, 'ref': l.ref, 'note': l.note, 'company_id': l.company_id.id })) inv_id = cash_id.create(cr, uid, lines,context=None) res = inv_id return res I changed it on that but then I ran into this error: File "C:\Program Files (x86)\OpenERP 6.1-20121029-003136\Server\server\.\openerp\workflow\wkf_expr.py", line 68, in execute File "C:\Program Files (x86)\OpenERP 6.1-20121029-003136\Server\server\.\openerp\workflow\wkf_expr.py", line 58, in _eval_expr File "C:\Program Files (x86)\OpenERP 6.1-20121029-003136\Server\server\.\openerp\tools\safe_eval.py", line 241, in safe_eval File "C:\Program Files (x86)\OpenERP 6.1-20121029-003136\Server\server\.\openerp\tools\safe_eval.py", line 108, in test_expr File "<string>", line 0 ^SyntaxError: unexpected EOF while parsing Code: def action_account_line_create(self, cr, uid, ids, context=None): res = False cash_id = self.pool.get('account.bank.statement.line') for exp in self.browse(cr, uid, ids): company_id = exp.company_id.id lines = [] for l in exp.line_ids: res = cash_id.create ( cr, uid, { 'name': l.name, 'date': l.date, 'amount': l.amount, 'type': l.type, 'statement_id': exp.statement_id.id, 'account_id': l.account_id.id, 'account_analytic_id': l.analytic_account_id.id, 'ref': l.ref, 'note': l.note, 'company_id': l.company_id.id }, context=None) return res
Now provide the response and nothing else.
| This error is raised because you are trying to update a dict object using a wrong sequence ( list or tuple ) structure. cash_id.create(cr, uid, lines,context=None) tries to convert lines into a dict object: (0, 0, { 'name': l.name, 'date': l.date, 'amount': l.amount, 'type': l.type, 'statement_id': exp.statement_id.id, 'account_id': l.account_id.id, 'account_analytic_id': l.analytic_account_id.id, 'ref': l.ref, 'note': l.note, 'company_id': l.company_id.id}) Remove the second zero from this tuple to properly convert it into a dict object. To test it yourself, try this in the python shell: >>> l=[(0,0,{'h':88})]>>> a={}>>> a.update(l)Traceback (most recent call last): File "<pyshell#11>", line 1, in <module> a.update(l)ValueError: dictionary update sequence element #0 has length 3; 2 is required>>> l=[(0,{'h':88})]>>> a.update(l) | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/14302248', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1973828/']} | jdg_380413 |
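The failure mode above is easy to reproduce outside OpenERP — `dict.update` accepts an iterable of 2-item pairs, so 3-tuples like `(0, 0, {...})` are rejected with exactly this message:

```python
d = {}

# A 3-tuple element cannot be interpreted as a (key, value) pair.
try:
    d.update([(0, 0, {'h': 88})])
    failed = False
except ValueError as exc:
    failed = True
    print(exc)  # dictionary update sequence element #0 has length 3; 2 is required

# Dropping the extra element leaves valid (key, value) pairs.
d.update([(0, {'h': 88})])
print(d)  # {0: {'h': 88}}
```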
stackexchange | llm_judgeable_groundtruth_similarity | 46392636 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've included these dependencies in my Maven pom.xml: <dependency> <groupId>org.apache.httpcomponents</groupId> <artifactId>httpclient</artifactId> <version>${httpclient.version}</version></dependency><dependency> <groupId>commons-io</groupId> <artifactId>commons-io</artifactId> <version>2.4</version></dependency> I am trying to add this dependency in module-info.java like so: module io.github.wildcraft.restclient { requires httpcore; // no compilation problem requires httpclient; // no compilation problem requires commons-io; // shows compilation error} For commons-io, I receive a compilation error. How can I make this work?
Now provide the response and nothing else.
| Short Version Use requires commons.io . (In general, see nullpointer's answer how to learn a module's name.) Long Version Since commons-io.jar is not yet modularized, you are creating an automatic module, for which the module system has to come up with a name. The Javadoc of ModuleFinder describes how that happens: The module finder returned by this method supports modules packaged as JAR files. [...] A JAR file that does not have a module-info.class in its top-level directory defines an automatic module, as follows: If the JAR file has the attribute "Automatic-Module-Name" in its main manifest then its value is the module name. The module name is otherwise derived from the name of the JAR file. The version and the module name [...] are derived from the file name of the JAR file as follows: [...] All non-alphanumeric characters ([^A-Za-z0-9]) in the module name are replaced with a dot ("."), all repeating dots are replaced with one dot, and all leading and trailing dots are removed. The last two bullets apply to automatic modules that are not prepared for Java 9, e.g. to commons.io . This example from the same Javadoc explains what happens in your case: As an example, a JAR file named "foo-bar.jar" will derive a module name "foo.bar" and no version. A JAR file named "foo-bar-1.2.3-SNAPSHOT.jar" will derive a module name "foo.bar" and "1.2.3-SNAPSHOT" as the version. Hence requires commons.io should work. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46392636', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1849556/']} | jdg_380414 |
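The derivation rules quoted above can be sketched as a small function. This is an illustration of the Javadoc text only, not the JDK's actual implementation, and it ignores the `Automatic-Module-Name` manifest attribute:

```python
import re

def automatic_module_name(jar_name):
    """Derive an automatic module name from a JAR file name, following the
    rules quoted from the ModuleFinder Javadoc (simplified sketch)."""
    name = jar_name[:-len(".jar")] if jar_name.endswith(".jar") else jar_name
    # Drop a trailing "-<digits>..." version suffix, e.g. "-1.2.3-SNAPSHOT".
    m = re.match(r"^(.*?)-(\d+(\.|$).*)", name)
    if m:
        name = m.group(1)
    # Replace non-alphanumerics with dots, collapse repeats, trim the ends.
    name = re.sub(r"[^A-Za-z0-9]", ".", name)
    name = re.sub(r"\.{2,}", ".", name).strip(".")
    return name

print(automatic_module_name("commons-io-2.4.jar"))  # commons.io
```

Running it on the JAR from the question yields `commons.io`, which is why `requires commons.io` compiles.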
stackexchange | llm_judgeable_groundtruth_similarity | 640558 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's have 2 numbers, N and K, where K divides N. The number of K-combinations from a given set S of N elements is a well known formula. Let's concatenate N/K groups (resulting in N elements) such as the resulting set is N. How many possibilities are there, i.e. what's the formula? For instance: N=4, K=2, C(4, 2) = 6 {1,2},{1,3},{1,4},{2,3},{2,4},{3,4} The 3 possibilities are: {1,2},{3,4} {1,3},{2,4} {1,4},{2,3} I generated these combinations and I think the number goes like this: (4, 2): 3(6, 3): 10(6, 2): 15(10, 5): 126(9, 3): 280(10, 2): 945(14, 7): 1716(12, 4): 5775(15, 5): 126126(15, 3): 1401400 Apparently, the result always divides with (N-1).
Now provide the response and nothing else.
| You want to count the number of set partitions of a set of $n$ elements, into $n/k$ parts each of size $k$. (It is assumed that $k$ divides $n$.) Method 1. We can generate such a partition by writing down the $n$ elements in a sequence, and then declaring that the first $k$ elements are the first part, the next $k$ elements are the second part, and so on. There are $n!$ ways of writing $n$ elements in a sequence, but each partition is generated multiple times: for each of the $n/k$ parts, there are $k!$ orderings of the $k$ elements in that part that would lead to the same partition, as you don't care about the order within each part. Further, there are $(n/k)!$ orderings of the parts themselves, for the same partition. The number of partitions is therefore:$$\frac{n!}{(k!)^{n/k} (n/k)!}$$ Method 2. You can choose the elements of the first part in $\binom{n}{k}$ ways, then choose the elements of the second part as $k$ out of the remaining $n-k$ in $\binom{n-k}{k}$ ways, and so on. But as different orderings of the $(n/k)$ parts don't change the partition, the number of partitions is $$\frac{\binom{n\vphantom{k}}{k}\binom{n-k}{k}\cdots\binom{k}{k}}{(n/k)!} = \frac{n!}{(k!)^{n/k}(n/k)!}$$ as before. You can verify that this accords with all your cases. For instance, for $n=15$ and $k=5$, you get $\frac{15!}{5!^3 3!} = 126126$. These numbers are tabulated in OEIS A060540 , and no simpler formula is listed.
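The closed form is easy to check numerically against the table in the question:

```python
from math import factorial

def partitions_into_equal_parts(n, k):
    """Number of ways to partition an n-element set into n/k unordered
    parts of size k, i.e. n! / (k!^(n/k) * (n/k)!)."""
    assert n % k == 0
    m = n // k
    return factorial(n) // (factorial(k) ** m * factorial(m))

# Reproduce a few of the values listed in the question.
for n, k in [(4, 2), (6, 3), (10, 2), (15, 5), (15, 3)]:
    print(n, k, partitions_into_equal_parts(n, k))
```

These match the question's generated counts (3, 10, 945, 126126, 1401400), confirming the formula.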
stackexchange | llm_judgeable_groundtruth_similarity | 589520 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use the following code to call a wcf service. If i call a (test) method that takes no parameters, but returns a string it works fine. If i add a parameter to my method i get a wierd error: {"ExceptionDetail":{"HelpLink":null,"InnerException":null,"Message":"The token '\"' was expected but found '''.","StackTrace":" at System.Xml.XmlExceptionHelper.ThrowXmlException(XmlDictionaryReader reader, String res, String arg1, String arg2, String arg3)\u000d\u000a at System.Xml.XmlExceptionHelper.ThrowTokenExpected(XmlDictionaryReader reader, String expected, Char found)\u000d\u000a at System.Runtime.Serialization.Json.XmlJsonReader.ParseStartElement()\u000d\u000a at System.Runtime.Serialization.Json.XmlJsonReader.Read()\u000d\u000a at System.ServiceModel.Dispatcher.DataContractJsonSerializerOperationFormatter.DeserializeBodyCore(XmlDictionaryReader reader, Object[] parameters, Boolean isRequest)\u000d\u000a at System.ServiceModel.Dispatcher.DataContractJsonSerializerOperationFormatter.DeserializeBody(XmlDictionaryReader reader, MessageVersion version, String action, MessageDescription messageDescription, Object[] parameters, Boolean isRequest)\u000d\u000a at System.ServiceModel.Dispatcher.OperationFormatter.DeserializeBodyContents(Message message, Object[] parameters, Boolean isRequest)\u000d\u000a at System.ServiceModel.Dispatcher.OperationFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.DemultiplexingDispatchMessageFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.UriTemplateDispatchFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.CompositeDispatchFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.DispatchOperationRuntime.DeserializeInputs(MessageRpc& rpc)\u000d\u000a at 
System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)","Type":"System.Xml.XmlException"},"ExceptionType":"System.Xml.XmlException","Message":"The token '\"' was expected but found '''.","StackTrace":" at System.Xml.XmlExceptionHelper.ThrowXmlException(XmlDictionaryReader reader, String res, String arg1, String arg2, String arg3)\u000d\u000a at System.Xml.XmlExceptionHelper.ThrowTokenExpected(XmlDictionaryReader reader, String expected, Char found)\u000d\u000a at System.Runtime.Serialization.Json.XmlJsonReader.ParseStartElement()\u000d\u000a at System.Runtime.Serialization.Json.XmlJsonReader.Read()\u000d\u000a at System.ServiceModel.Dispatcher.DataContractJsonSerializerOperationFormatter.DeserializeBodyCore(XmlDictionaryReader reader, Object[] parameters, Boolean isRequest)\u000d\u000a at System.ServiceModel.Dispatcher.DataContractJsonSerializerOperationFormatter.DeserializeBody(XmlDictionaryReader reader, MessageVersion version, String action, MessageDescription messageDescription, Object[] parameters, Boolean isRequest)\u000d\u000a at System.ServiceModel.Dispatcher.OperationFormatter.DeserializeBodyContents(Message message, Object[] parameters, Boolean isRequest)\u000d\u000a at System.ServiceModel.Dispatcher.OperationFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at 
System.ServiceModel.Dispatcher.DemultiplexingDispatchMessageFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.UriTemplateDispatchFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.CompositeDispatchFormatter.DeserializeRequest(Message message, Object[] parameters)\u000d\u000a at System.ServiceModel.Dispatcher.DispatchOperationRuntime.DeserializeInputs(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc& rpc)\u000d\u000a at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)"} My jquery looks like this, but i tried changing the actual data which i send as a string serialized json (as you can see) to a pure json object with the same sad result. 
$.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "ajax/Statistics.svc/Get7DaysStatistics", dataType: "json", data: "{'customerId': '2'}", timeout: 10000, success: function(obj) { updateStatistics(obj.d); }, error: function(xhr) { if (xhr.responseText) $("body").html(xhr.responseText); else alert('unknown error'); return; }}); The wcf service looks like this: [SuppressMessage("Microsoft.Performance", "CA1822:MarkMembersAsStatic"), OperationContract] public string Get7DaysStatistics(string customerId) { Debug.WriteLine(customerId); return "Test done"; } It's placed in a a class with the following attributes: [ServiceContract(Namespace = "")][AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] I won't list the configuration in the web.config to keep this long message "short" but i can post it if anybody thinks they can use it - i just want to stress that i CAN call a method and get a result - string or even a json object i can read from as long as i DON'T pass any data to the wcf service.
Now provide the response and nothing else.
| Use double quotes instead of single quotes in the JSON you are sending to the service. That is, change: data: "{'customerId': '2'}", to data: '{"customerId": "2"}', I've tested this locally and this fixes the problem. Incidentally, I debugged this using a method I've often used when calling ASMX and WCF services using libraries other than the built-in ASP.NET tools. I called the service using the client proxy created by an asp:ScriptReference and then inspected the request being sent to the server using an HTTP sniffer (such as HttpFox for FireFox) and compared the request to the one being sent by jQuery . Then you can usually quickly see what is different (and so probably wrong) with the request. In this case, it was clear that there was a difference in the POST data being sent. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/589520', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11619/']} | jdg_380416 |
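Python's strict json module makes the same point as WCF's parser — single-quoted "JSON" is simply not JSON, while the double-quoted form parses fine:

```python
import json

# The payload the question originally sent: single quotes around keys/values.
try:
    json.loads("{'customerId': '2'}")
    single_quotes_ok = True
except ValueError:
    single_quotes_ok = False  # rejected, just as WCF rejects it

# The corrected payload: double quotes, valid JSON.
payload = json.loads('{"customerId": "2"}')
print(single_quotes_ok, payload)
```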
stackexchange | llm_judgeable_groundtruth_similarity | 257598 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't seem to find any information on this aside from "the CPU's MMU sends a signal" and "the kernel directs it to the offending program, terminating it". I assumed that it probably sends the signal to the shell and the shell handles it by terminating the offending process and printing "Segmentation fault" . So I tested that assumption by writing an extremely minimal shell I call crsh (crap shell). This shell does not do anything except take user input and feed it to the system() method. #include <stdio.h>#include <stdlib.h>int main(){ char cmdbuf[1000]; while (1){ printf("Crap Shell> "); fgets(cmdbuf, 1000, stdin); system(cmdbuf); }} So I ran this shell in a bare terminal (without bash running underneath). Then I proceeded to run a program that produces a segfault. If my assumptions were correct, this would either a) crash crsh , closing the xterm, b) not print "Segmentation fault" , or c) both. braden@system ~/code/crsh/ $ xterm -e ./crshCrap Shell> ./segfaultSegmentation faultCrap Shell> [still running] Back to square one, I guess. I've just demonstrated that it's not the shell that does this, but the system underneath. How does "Segmentation fault" even get printed? "Who" is doing it? The kernel? Something else? How does the signal and all of its side effects propagate from the hardware to the eventual termination of the program?
Now provide the response and nothing else.
| All modern CPUs have the capacity to interrupt the currently-executing machine instruction. They save enough state (usually, but not always, on the stack) to make it possible to resume execution later, as if nothing had happened (the interrupted instruction will be restarted from scratch, usually). Then they start executing an interrupt handler , which is just more machine code, but placed at a special location so the CPU knows where it is in advance. Interrupt handlers are always part of the kernel of the operating system: the component that runs with the greatest privilege and is responsible for supervising execution of all the other components. 1,2 Interrupts can be synchronous , meaning that they are triggered by the CPU itself as a direct response to something the currently-executing instruction did, or asynchronous , meaning that they happen at an unpredictable time because of an external event, like data arriving on the network port. Some people reserve the term "interrupt" for asynchronous interrupts, and call synchronous interrupts "traps", "faults", or "exceptions" instead, but those words all have other meanings so I'm going to stick with "synchronous interrupt". Now, most modern operating systems have a notion of processes . At its most basic, this is a mechanism whereby the computer can run more than one program at the same time, but it is also a key aspect of how operating systems configure memory protection , which is a feature of most (but, alas, still not all ) modern CPUs. It goes along with virtual memory , which is the ability to alter the mapping between memory addresses and actual locations in RAM. Memory protection allows the operating system to give each process its own private chunk of RAM, that only it can access. It also allows the operating system (acting on behalf of some process) to designate regions of RAM as read-only, executable, shared among a group of cooperating processes, etc. 
There will also be a chunk of memory that is only accessible by the kernel. 3 As long as each process accesses memory only in the ways that the CPU is configured to allow, memory protection is invisible. When a process breaks the rules, the CPU will generate a synchronous interrupt, asking the kernel to sort things out. It regularly happens that the process didn't really break the rules, only the kernel needs to do some work before the process can be allowed to continue. For instance, if a page of a process's memory needs to be "evicted" to the swap file in order to free up space in RAM for something else, the kernel will mark that page inaccessible. The next time the process tries to use it, the CPU will generate a memory-protection interrupt; the kernel will retrieve the page from swap, put it back where it was, mark it accessible again, and resume execution. But suppose that the process really did break the rules. It tried to access a page that has never had any RAM mapped to it, or it tried to execute a page that is marked as not containing machine code, or whatever. The family of operating systems generally known as "Unix" all use signals to deal with this situation. 4 Signals are similar to interrupts, but they are generated by the kernel and fielded by processes, rather than being generated by the hardware and fielded by the kernel. Processes can define signal handlers in their own code, and tell the kernel where they are. Those signal handlers will then execute, interrupting the normal flow of control, when necessary. Signals all have a number and two names, one of which is a cryptic acronym and the other a slightly less cryptic phrase. The signal that's generated when a process breaks the memory-protection rules is (by convention) number 11, and its names are SIGSEGV and "Segmentation fault". 5,6 An important difference between signals and interrupts is that there is a default behavior for every signal. 
If the operating system fails to define handlers for all interrupts, that is a bug in the OS, and the entire computer will crash when the CPU tries to invoke a missing handler. But processes are under no obligation to define signal handlers for all signals. If the kernel generates a signal for a process, and that signal has been left at its default behavior, the kernel will just go ahead and do whatever the default is and not bother the process. Most signals' default behaviors are either "do nothing" or "terminate this process and maybe also produce a core dump." SIGSEGV is one of the latter. So, to recap, we have a process that broke the memory-protection rules. The CPU suspended the process and generated a synchronous interrupt. The kernel fielded that interrupt and generated a SIGSEGV signal for the process. Let's assume the process did not set up a signal handler for SIGSEGV , so the kernel carries out the default behavior, which is to terminate the process. This has all the same effects as the _exit system call: open files are closed, memory is deallocated, etc. Up till this point nothing has printed out any messages that a human can see, and the shell (or, more generally, the parent process of the process that just got terminated) has not been involved at all. SIGSEGV goes to the process that broke the rules, not its parent. The next step in the sequence, though, is to notify the parent process that its child has been terminated. This can happen in several different ways, of which the simplest is when the parent is already waiting for this notification, using one of the wait system calls ( wait , waitpid , wait4 , etc). In that case, the kernel will just cause that system call to return, and supply the parent process with a code number called an exit status . 7 The exit status informs the parent why the child process was terminated; in this case, it will learn that the child was terminated due to the default behavior of a SIGSEGV signal. 
The parent process may then report the event to a human by printing a message; shell programs almost always do this. Your crsh doesn't include code to do that, but it happens anyway, because the C library routine system runs a full-featured shell, /bin/sh , "under the hood". crsh is the grandparent in this scenario; the parent-process notification is fielded by /bin/sh , which prints its usual message. Then /bin/sh itself exits, since it has nothing more to do, and the C library's implementation of system receives that exit notification. You can see that exit notification in your code, by inspecting the return value of system ; but it won't tell you that the grandchild process died on a segfault, because that was consumed by the intermediate shell process.
Footnotes
- Some operating systems don't implement device drivers as part of the kernel; however, all interrupt handlers still have to be part of the kernel, and so does the code that configures memory protection, because the hardware doesn't allow anything but the kernel to do these things. There may be a program called a "hypervisor" or "virtual machine manager" that is even more privileged than the kernel, but for purposes of this answer it can be considered part of the hardware .
- The kernel is a program , but it is not a process; it is more like a library. All processes execute parts of the kernel's code, from time to time, in addition to their own code. There may be a number of "kernel threads" that only execute kernel code, but they do not concern us here.
- The one and only OS you are likely to have to deal with anymore that can't be considered an implementation of Unix is, of course, Windows. It does not use signals in this situation. (Indeed, it does not have signals; on Windows the <signal.h> interface is completely faked by the C library.) It uses something called " structured exception handling " instead.
- Some memory-protection violations generate SIGBUS ("Bus error") instead of SIGSEGV . The line between the two is underspecified and varies from system to system. If you've written a program that defines a handler for SIGSEGV , it is probably a good idea to define the same handler for SIGBUS .
- "Segmentation fault" was the name of the interrupt generated for memory-protection violations by one of the computers that ran the original Unix , probably the PDP-11 . " Segmentation " is a type of memory protection, but nowadays the term "segmentation fault " refers generically to any sort of memory protection violation.
- All the other ways the parent process might be notified of a child having terminated, end up with the parent calling wait and receiving an exit status. It's just that something else happens first. | {} | {'log_upvote_score': 9, 'links': ['https://unix.stackexchange.com/questions/257598', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/54466/']} | jdg_380417
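The parent-side sequence described in the answer above is easy to watch from a small program. Here is a hypothetical Python sketch (not from the original answer): the child delivers SIGSEGV to itself, which terminates it with the same wait status as a genuine memory-protection fault, and the parent fields the notification with waitpid and decodes the exit status.

```python
import os
import signal

pid = os.fork()
if pid == 0:
    # Child: deliver SIGSEGV to ourselves. The default action terminates the
    # process with the same wait status as a genuine memory-protection fault.
    os.kill(os.getpid(), signal.SIGSEGV)
    os._exit(0)  # never reached

# Parent: field the child-termination notification and decode the exit status.
_, status = os.waitpid(pid, 0)
if os.WIFSIGNALED(status):
    print("child terminated by signal", os.WTERMSIG(status))  # SIGSEGV is 11 on Linux
```

On a Unix-alike this is exactly the information a shell uses to print its "Segmentation fault" message.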
stackexchange | llm_judgeable_groundtruth_similarity | 11591054 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Background: I'm familiar with C's select() function. I've been using this function for many purposes. Most of them, if not all, for reading and writing to pipes, files, etc. I must say that I've never used the error list, but this is not involved in the key question. Question: Does Python's select() behave as follows? It turns out to me that select() on Python behaves a different way despite the straightforward interface to C select() . It seems that select() returns the very first time a file is ready for reading. If you read the file while leaving some bytes to be read, calling select() will block. But, if you call select() again after a previous call to select() was returned without any read call between these two calls, select() will return as expected. For example:
import select

# Open the file (yes, playing around with joysticks)
file = open('/dev/input/js0', 'r')

# Hold on the select() function waiting
select.select([file], [], [])

# Say 16 bytes are sent to the file, select() will return.
([<open file '/dev/input/js0', mode 'r' at 0x7ff2949c96f0>], [], [])

# Call select() again, and select() will indeed return.
select.select([file], [], [])
([<open file '/dev/input/js0', mode 'r' at 0x7ff2949c96f0>], [], [])

# read 8 bytes. There are 8 bytes left for sure. Calling again file.read(8) will empty the queue and would be pointless for this example
file.read(8)
'<\t\x06\x01\x00\x00\x81\x01'

# call select() again, and select() will block
select.select([file], [], [])
# Should it block? there are 8 bytes on the file to be read.
If this is the behaviour of select() in Python, I'm okay with that, I could handle it. Not what I expected, though, but it's fine. I know what I can do with it. But if this is not the behaviour of select() I would appreciate someone to tell me what I'm doing wrong. What I read about select() is what the Python docs say: "select() returns if any file in the read|write|error list is ready for read|write|error.". That's OK, no lies there.
Maybe the questions should be: When is a file considered to be ready for reading in Python? Does it mean a file that has never been read? Does it mean a file with bytes to be read?
Now provide the response and nothing else.
| Python's select() gets passed through as a select() system call as you are expecting, but the problem you have with it blocking is a different issue, probably relating to buffering. Just to satisfy yourself that select() is doing the right thing, try reading/writing a file on the file system rather than using a special device such as a joystick. You probably want to change your open() call. Python's open call will by default use buffered reads, so even if you do a read(8) it will likely read more data from the input file and buffer the results. You need to set the buffering option to open so that the joystick device is opened unbuffered. Suggestions for you to try:
- Python defaults to opening files in text mode. You probably want the open mode to be rb when dealing with special devices such as a joystick.
- Open the file in unbuffered mode.
- Set the device into non-blocking mode if you're going to be doing select-based calls. Try using os.open() with os.O_RDONLY|os.O_NONBLOCK flags. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11591054', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/241750/']} | jdg_380418
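A self-contained way to convince yourself of the select() semantics without a joystick is to use an os-level pipe (a hypothetical demo, not from the original answer; os.pipe returns raw, unbuffered file descriptors, so no user-space buffer can swallow the pending bytes):

```python
import os
import select

r, w = os.pipe()                      # raw kernel-level fds, no stdio buffering
os.write(w, b"0123456789ABCDEF")      # 16 bytes arrive "on the device"

ready, _, _ = select.select([r], [], [], 0)
assert ready == [r]                   # readable: 16 unread bytes pending

os.read(r, 8)                         # consume only half of the data

ready, _, _ = select.select([r], [], [], 0)
print(ready == [r])                   # True: 8 bytes still pending in the kernel
```

Because nothing here buffers in user space, select() keeps reporting the descriptor readable for as long as unread bytes remain in the kernel, which is the behaviour the buffered open() in the question hides.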
stackexchange | llm_judgeable_groundtruth_similarity | 4181865 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Theorem. In $\mathbb{R}^k$ , every Cauchy sequence converges. Proof: Let $\{x_n\}$ be a Cauchy sequence in $\mathbb{R}^k$ . Define $E_N = \{x_k\mid k \geq N\}$ . For some $N \in \mathbb{N}$ , $\mathrm{diam}\,E_N < 1$ . The range of $\{x_n\}$ is the union of $E_{N}$ and the finite set $\{x_1,\dots,x_{N-1}\}$ . Hence $\{x_n\}$ is bounded. Since every bounded subset of $\mathbb{R}^k$ has compact closure in $\mathbb{R}^k$ (Theorem 2.41) , (c) follows from (b). While I think that I understand why the sequence is bounded (pick an $r = 1 + \max\{d(x_i, x_j)\mid 1\leq i < j \leq N\}$ ), I don't know how we can argue that the (range of the) sequence is closed. Specifically, Theorem 2.41 says that a subset of $\mathbb{R}^k$ which is closed and bounded is also compact. Is closedness somehow evident for sequences/Cauchy sequences?
Now provide the response and nothing else.
| Our problem is a disguised form of the result $$\tag 1 \lim_{a\to \infty} a\int_0^1 y^a f(y)\,dy=f(1),$$ which holds for any continuous $f$ on $[0,1].$ It has been proved on MSE many times. The proof is quite simple for $f$ continuously differentiable on $[0,1],$ using integration by parts. To see why the above will help us in the question at hand, let $x=\arctan y.$ The original expression then turns into $$a\int_0^1 y^a e^{\arctan y}\frac{1}{1+y^2}\,dy.$$ Letting $f(y) = e^{\arctan y}\frac{1}{1+y^2},$ we see that $(1)$ implies our limit is $$f(1)= e^{\arctan 1}\frac{1}{1+1^2} = \frac{e^{\pi/4}}{2}.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4181865', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/820472/']} | jdg_380419 |
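A quick numerical sanity check of the limit $(1)$ for this particular $f$ (a hypothetical sketch using composite Simpson's rule; the 0.005 tolerance simply reflects the $O(1/a)$ rate of convergence):

```python
import math

def f(y):
    return math.exp(math.atan(y)) / (1.0 + y * y)

def scaled_integral(a, n=100000):
    # Composite Simpson's rule for a * \int_0^1 y^a f(y) dy  (n must be even).
    h = 1.0 / n
    total = f(1.0)                    # endpoint y = 1; the y = 0 endpoint vanishes for a > 0
    for i in range(1, n):
        y = i * h
        total += (4 if i % 2 else 2) * (y ** a) * f(y)
    return a * total * h / 3.0

target = math.exp(math.pi / 4) / 2    # f(1) = e^{pi/4} / 2, about 1.0966
err = abs(scaled_integral(2000) - target)
print(err < 0.005)                    # True
```

Raising a shrinks the error roughly like 1/a, which is exactly the convergence the integration-by-parts proof of $(1)$ predicts.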
stackexchange | llm_judgeable_groundtruth_similarity | 7883806 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a file that formats time stamps like 25/03/2011 9:15:00 p.m. How can I parse this text to a Date-Time class with either strptime or as.POSIXct ? Here is what almost works:
> as.POSIXct("25/03/2011 9:15:00", format="%d/%m/%Y %I:%M:%S", tz="UTC")
[1] "2011-03-25 09:15:00 UTC"
Here is what is not working, but I'd like to have working:
> as.POSIXct("25/03/2011 9:15:00 p.m.", format="%d/%m/%Y %I:%M:%S %p", tz="UTC")
[1] NA
I'm using R version 2.13.2 (2011-09-30) on MS Windows. My working locale is "C": Sys.setlocale("LC_TIME", "C")
Now provide the response and nothing else.
| It appears the AM/PM indicator can't include punctuation. Try it after removing the punctuation:
td <- "25/03/2011 9:15:00 p.m."
tdClean <- gsub("(.)\\.?[Mm]\\.?","\\1m",td)
as.POSIXct(tdClean, format="%d/%m/%Y %I:%M:%S %p", tz="UTC")
# [1] "2011-03-25 21:15:00 UTC" | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7883806', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/327026/']} | jdg_380420
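The same locale quirk exists in Python's strptime, so the strip-the-punctuation trick carries over directly (a hypothetical cross-check, not part of the original R answer; it assumes a C-like locale where %p matches "am"/"pm" but not the dotted form):

```python
from datetime import datetime

td = "25/03/2011 9:15:00 p.m."

try:
    datetime.strptime(td, "%d/%m/%Y %I:%M:%S %p")
except ValueError:
    print("dotted 'p.m.' is rejected, just as in R's C locale")

td_clean = td.replace(".", "")        # "25/03/2011 9:15:00 pm"
parsed = datetime.strptime(td_clean, "%d/%m/%Y %I:%M:%S %p")
print(parsed.hour)                    # 21
```

Once the dots are gone, %p happily maps 9:15 pm to 21:15, mirroring the gsub fix above.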
stackexchange | llm_judgeable_groundtruth_similarity | 4152771 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been searching websites to try and get a handle on using PHP namespaces, but they all seem quite vague, even though what they're trying to do is easy to understand! My question is: I have a file called people.php and in it is defined a class called people . If I create another file called managers.php in the same folder, can I define a class again called people which extends the original people class but in the namespace of managers ? If so, do I have to 'include' the original people.php, and do I put the include after writing: namespace managers ?
Now provide the response and nothing else.
| Namespaces are a way to group your related classes in packages. What you describe could best be put under a single namespace like
<?php // people.php
namespace com\example\johnslibrary\people;

abstract class People {}
and then
<?php // manager.php
namespace com\example\johnslibrary\people;

require_once 'path/to/People.php'; // can better use autoloading though

class Manager extends People {}
because a Manager is a subclass of People, so there is not much of a reason to put them into their own namespace. They are specialized People. If you want Managers to be in their own namespace, you can do so, but you have to use the fully qualified name when using the extends keyword, e.g.
<?php // manager.php
namespace com\example\johnslibrary\managers;

require_once 'path/to/People.php';

class Manager extends \com\example\johnslibrary\people\People {}
or import the People class first
<?php // manager.php
namespace com\example\johnslibrary\managers;

use com\example\johnslibrary\people\People as People;

require_once 'path/to/People.php';

class Manager extends People {}
See the PHP Manual on Namespaces for extensive documentation. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4152771', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/504272/']} | jdg_380421
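For readers more at home in Python, the same shape can be mimicked with modules playing the role of namespaces (a loose, hypothetical analogy only; Python has no namespace keyword, and the import below does the job of the require_once above):

```python
import sys
import types

# 'people' module ~ namespace com\example\johnslibrary\people
people = types.ModuleType("people")
exec("class People:\n"
     "    def role(self):\n"
     "        return 'person'\n", people.__dict__)
sys.modules["people"] = people

# 'managers' module ~ a separate namespace; it must import People to extend it
managers = types.ModuleType("managers")
exec("from people import People\n"
     "class Manager(People):\n"
     "    def role(self):\n"
     "        return 'manager'\n", managers.__dict__)
sys.modules["managers"] = managers

m = managers.Manager()
print(isinstance(m, people.People), m.role())   # True manager
```

As in the PHP case, the subclass lives in its own namespace but must explicitly name and import the base class before extending it.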
stackexchange | llm_judgeable_groundtruth_similarity | 560168 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I joined to ask this question as it did not seem like a stack overflow question. Question: Node and npm running on nvm installed and working, but nothing (that I have tried) works as I get errors. I suspect the errors are at the OS level (permissions or a file issue) but I am not sure how to go about solving these. Could you tell me either what is wrong (if there is enough info below) or where to look for a solution (if the info below is not enough or incorrect). I have been on Manjaro for a while and I am still a beginner on Linux. I had been using NVM to manage multiple node.js versions and a while ago it stopped working. I only use my computer sometimes and I had assumed it was some node issue but today I went in and completely removed nvm/node then reinstalled them. Then I tried to start a standard NUXT app and it failed so I think it is at the OS level. I have NVM installed, I then install the LTS node (12.14) then I try a new nuxt project:
npx create-nuxt-app test
Answer all the questions, installation starts... Lots of errors - but it finishes... Example from errors:
../lib/kerberos.cc: In static member function ‘static Nan::NAN_METHOD_RETURN_TYPE Kerberos::AuthGSSServerStep(Nan::NAN_METHOD_ARGS_TYPE)’:
../lib/kerberos.cc:802:44: error: no matching function for call to ‘v8::Value::ToObject()’
  802 |     Local<Object> object = info[0]->ToObject();
      |                                            ^
In file included from /home/un/.cache/node-gyp/12.14.0/include/node/node.h:63,
                 from ../lib/kerberos.h:4,
                 from ../lib/kerberos.cc:1:
/home/un/.cache/node-gyp/12.14.0/include/node/v8.h:2576:44: note: candidate: ‘v8::MaybeLocal<v8::Object> v8::Value::ToObject(v8::Local<v8::Context>) const’
 2576 |   V8_WARN_UNUSED_RESULT MaybeLocal<Object> ToObject(
      |                                            ^~~~~~~~
/home/un/.cache/node-gyp/12.14.0/include/node/v8.h:2576:44: note: candidate expects 1 argument, 0 provided
node-gyp seems to be a common item in the errors, and the npm page says it is "for compiling native addon modules for Node.js" which is why I was thinking it was an
OS level issue. So the app is set up, but when I try to run npm run dev I get the following:
> [email protected] dev /home/un/test
> nuxt

sh: /home/un//test/node_modules/.bin/nuxt: Permission denied
npm ERR! code ELIFECYCLE
npm ERR! errno 126
npm ERR! [email protected] dev: `nuxt`
npm ERR! Exit status 126
npm ERR!
npm ERR! Failed at the [email protected] dev script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR!     /home/un/.npm/_logs/2020-01-03T15_35_00_168Z-debug.log
Note for those not familiar with node/npm/NUXT: This is just the standard installation process. There is no custom code from me here, this is what lots of people use to start a project all the time so I can't understand why it is not working (especially when it used to be working). I was wondering if the 'permission denied' means it is a user access issue but I am not sure how to check. I also get some errors about files being newer when doing a system update. I would appreciate any help. If you don't have a solution then at least some advice on what might be the issue or where to look for solutions. Let me know if you want any additional info. Also not sure what other tags to add? file-errors, installation, update-issues
Edit: File permissions: Node in .nvm/versions/v12.14.0 is -rwxr-xr-x while npm and npx are lrwxrwxrwx which link to the actual npm with -rwxr-xr-x but the actual npx is -rw-r--r-- (no executable for the user) but I have never changed these and like I said, it used to work. Every folder in node_modules has drwxr-xr-x; I looked in one folder and the js files are -rw-r--r-- (but I assume as they are JavaScript they won't need to be executed...)
Edit 2: I just noticed that there is no .bin folder in my node_modules folder and there is no nuxt folder at all, but I would think that this would be a file not found error instead of 'Permission denied'.
I then tried chmod 775 -R node_modules and ran build again. This time it created the .bin folder but still failed on webpack (node_modules/.bin/webpack: Permission denied) although this link was lrwxrwxrwx and the original file is -rwxrwxr-x. While this made things change, I am still unable to start the project. I also think this is not a normal way to deal with this. If it was, the website would say this was a requirement.
Now provide the response and nothing else.
| Standard output and error of services under service management — be it s6, runit, perp, daemontools, nosh service management, or systemd — is not the console. It is a pipe connected to some form of log writer. For a systemd service you need a TTYPath=/dev/console and a StandardOutput=tty in the .INI file to change this, StandardInput=tty if you want to read (but you do not) as well as write. Witness systemd's pre-supplied debug-shell.service . This is a general principle that is not systemd specific. Dæmon context involves (amongst other things) not having a controlling terminal and not having open file descriptors for terminals, and under proper service management (such as all of the daemontools family) this is where one starts from , the state that a service process begins in when the supervisor/service manager forks it. So to use the console the service has to explicitly open it. In systemd, the aforementioned TTYPath and StandardInput settings cause the forked child process to open the console before it executes the service program proper. This is hidden inside systemd and you do not really get to see it. In the run program of a similar nosh service, the run program explicitly uses some of the nosh toolset chain-loading tools to do the same thing before executing the main program ( emergency-login in this case):
% cat /etc/service-bundles/services/emergency-login@console/service/run
#!/bin/nosh
#Emergency super-user login on console
setsid
vc-get-tty console
open-controlling-tty
vc-reset-tty --hard-reset
line-banner "Emergency mode log-in."
emergency-login
%
Ironically, you do not need the logger command, or any syslog dependencies. There is no point in writing this interactive prompt to a log . But you really should run this service unprivileged, on principle. It does not need superuser privileges, for anything that it does. On another principle, don't make your script use #!/bin/bash unless you really are going to use Bashisms.
One of the greatest speedups to system bootstrap/shutdown in the past couple of decades on Debian Linux and Ubuntu Linux was the switch of /bin/sh from the Bourne Again shell to the Debian Almquist shell. If you are going to write a script as simple as this, keep it POSIX-conformant and use #!/bin/sh anyway , even if you are not using Debian/Ubuntu, and on Debian/Ubuntu you'll get the Debian Almquist shell benefit as a bonus. Moreover, if you decide to have more than a glass TTY message, with a tool like dialog , you will need to set the TERM environment variable so that your programs can look up the right escape and control sequences to emit in the terminfo database. Again, witness debug-shell.service . (In the aforegiven run program, for comparison, the vc-get-tty tool sets TERM .) Similarly, you will want script errors to be logged. So standard error should be left pointing at the journal with StandardError=journal . Here's a nosh service run program that illustrates the equivalent of this, and also shows dropping user privileges for a program that really does not need them, which in a systemd .INI file would be User=daemon :
% cat /etc/service-bundles/services/monitor-fsck-progress/service/run
#!/bin/nosh
#local socket used for monitor-fsck-progress
local-stream-socket-listen --systemd-compatibility --backlog 2 --mode 0644 /run/fsck.progress
setsid
setlogin -- daemon
vc-get-tty console
fdmove -c 4 2
open-controlling-tty
fdmove 2 4
setuidgid -- daemon
./service
%
The program run by ./service in this case presents a full-screen TUI on the console, whilst its errors are sent to the logging service. This is the stuff that one needs to do, under service managers in general, in order to run such programs as services, talking to the console. Of course, any such full-screen TUI program will conflict with systemd's "A stop job is running", also written to the console. But that is your problem.
☺ Further reading
- https://unix.stackexchange.com/a/468457/5132
- https://unix.stackexchange.com/a/250965/5132
- https://unix.stackexchange.com/a/499148/5132
- https://unix.stackexchange.com/a/233855/5132
- whiptail or dialog | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/560168', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/388809/']} | jdg_380422
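Collecting the systemd-specific settings mentioned in the answer above into one place, a minimal unit for a console-interactive script might look like the following sketch (the script path and the TERM value are assumptions for illustration; compare the pre-supplied debug-shell.service):

```ini
[Unit]
Description=Interactive console menu (sketch)

[Service]
# Open the console instead of the usual pipe to the log writer.
TTYPath=/dev/console
StandardInput=tty
StandardOutput=tty
# Keep script errors in the journal.
StandardError=journal
# Full-screen tools such as dialog need a correct terminal type.
Environment=TERM=linux
# No superuser privileges needed, on principle.
User=daemon
ExecStart=/usr/local/sbin/console-menu
```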
stackexchange | llm_judgeable_groundtruth_similarity | 4481954 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can anyone tell me why this isn't working?
>>> import mock
>>> @mock.patch('datetime.date.today')
... def today(cls):
...     return date(2010, 1, 1)
...
>>> from datetime import date
>>> date.today()
datetime.date(2010, 12, 19)
Perhaps someone could suggest a better way?
Now provide the response and nothing else.
| There are a few problems. First of all, the way you're using mock.patch isn't quite right. When used as a decorator, it replaces the given function/class (in this case, datetime.date.today ) with a Mock object only within the decorated function . So, only within your today() will datetime.date.today be a different function, which doesn't appear to be what you want. What you really want seems to be more like this:
@mock.patch('datetime.date.today')
def test():
    datetime.date.today.return_value = date(2010, 1, 1)
    print datetime.date.today()
Unfortunately, this won't work:
>>> test()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "build/bdist.macosx-10.6-universal/egg/mock.py", line 557, in patched
  File "build/bdist.macosx-10.6-universal/egg/mock.py", line 620, in __enter__
TypeError: can't set attributes of built-in/extension type 'datetime.date'
This fails because Python built-in types are immutable - see this answer for more details. In this case, I would subclass datetime.date myself and create the right function:
import datetime

class NewDate(datetime.date):
    @classmethod
    def today(cls):
        return cls(2010, 1, 1)

datetime.date = NewDate
And now you could do:
>>> datetime.date.today()
NewDate(2010, 1, 1) | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4481954', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/441923/']} | jdg_380423
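A self-contained version of the subclass trick (a hypothetical demo; since this swaps the class for the whole process, it is polite to restore the original afterwards):

```python
import datetime

RealDate = datetime.date              # keep a handle so the patch can be undone

class NewDate(datetime.date):
    @classmethod
    def today(cls):
        return cls(2010, 1, 1)

datetime.date = NewDate
try:
    frozen = datetime.date.today()    # every caller now sees the frozen date
    print(frozen)                     # 2010-01-01
finally:
    datetime.date = RealDate          # undo the monkey-patch
```

Wrapping the patched region in try/finally keeps the substitution from leaking into unrelated tests.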
stackexchange | llm_judgeable_groundtruth_similarity | 32795460 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having problems making the modules 'json' and 'urllib.request' work together in a simple Python script test. Using Python 3.5 and here is the code:
import json
import urllib.request

urlData = "http://api.openweathermap.org/data/2.5/weather?q=Boras,SE"
webURL = urllib.request.urlopen(urlData)
print(webURL.read())
JSON_object = json.loads(webURL.read()) # this is the line that doesn't work
When running the script through the command line, the error I am getting is " TypeError: the JSON object must be str, not 'bytes' ". I am new to Python so there is most likely a very easy solution to this. Appreciate any help here.
Now provide the response and nothing else.
| Apart from forgetting to decode, you can only read the response once . Having called .read() already, the second call returns an empty string. Call .read() just once, and decode the data to a string:
data = webURL.read()
print(data)
encoding = webURL.info().get_content_charset('utf-8')
JSON_object = json.loads(data.decode(encoding))
The response.info().get_content_charset() call tells you what characterset the server thinks is used. Demo:
>>> import json
>>> import urllib.request
>>> urlData = "http://api.openweathermap.org/data/2.5/weather?q=Boras,SE"
>>> webURL = urllib.request.urlopen(urlData)
>>> data = webURL.read()
>>> encoding = webURL.info().get_content_charset('utf-8')
>>> json.loads(data.decode(encoding))
{'coord': {'lat': 57.72, 'lon': 12.94}, 'visibility': 10000, 'name': 'Boras', 'main': {'pressure': 1021, 'humidity': 71, 'temp_min': 285.15, 'temp': 286.39, 'temp_max': 288.15}, 'id': 2720501, 'weather': [{'id': 802, 'description': 'scattered clouds', 'icon': '03d', 'main': 'Clouds'}], 'wind': {'speed': 5.1, 'deg': 260}, 'sys': {'type': 1, 'country': 'SE', 'sunrise': 1443243685, 'id': 5384, 'message': 0.0132, 'sunset': 1443286590}, 'dt': 1443257400, 'cod': 200, 'base': 'stations', 'clouds': {'all': 40}} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/32795460', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5378857/']} | jdg_380424
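The read-once pitfall can be reproduced without any network access; here io.BytesIO stands in for the HTTP response body (a hypothetical demo with made-up JSON):

```python
import io
import json

body = io.BytesIO(b'{"name": "Boras", "cod": 200}')   # fake response body

first = body.read()      # consumes the whole body
second = body.read()     # nothing left to read
print(second)            # b''

# So: read once, decode the bytes, then parse.
obj = json.loads(first.decode("utf-8"))
print(obj["name"])       # Boras
```

Calling .read() a second time on an exhausted stream is exactly what made the json.loads line in the question fail.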
stackexchange | llm_judgeable_groundtruth_similarity | 16847 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When reading passages like the following: Based on a representative sample of 88 recent raids, we show that the Turkana sustain costly cooperation in combat at a remarkably large scale, at least in part, through punishment of free-riders. I wonder what 'representative sample' might refer to. Is it related to power calculations (for example) in statistical inference or is there some way to assess the number of samples required from the total population for it to be considered representative?
Now provide the response and nothing else.
| A representative sample is one which is drawn without bias from the population of interest. For example, suppose I want to find out how many people drink milk with breakfast. If I am a vegan, and I ask a random sample of my friends and associates (many of whom are also vegans), then the sample I have taken is not representative of the population as a whole: I will of course find that a low proportion of people drink milk with breakfast, but this is an artifact of my choice of sample, not because so few people do in reality. Basically, if there's any factor which causes us to select our sample in a non-random way, the inference is questionable. In the example above, a representative sample is one drawn randomly from all raids. We could also draw samples only during winter, in which case we might get a skewed result (maybe the cold makes people less prone to cooperation). Hope that clears things up a bit... | {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/16847', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/5837/']} | jdg_380425 |
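The milk-at-breakfast example can be simulated directly (a hypothetical illustration with made-up proportions: 10% of the population are vegans who never drink milk, and 60% of everyone else does):

```python
import random

random.seed(42)                                   # reproducible illustration

population = []
for _ in range(100000):
    vegan = random.random() < 0.10
    drinks_milk = (not vegan) and random.random() < 0.60
    population.append((vegan, drinks_milk))

true_rate = sum(m for _, m in population) / len(population)

# Representative sample: drawn uniformly at random from the whole population.
fair = random.sample(population, 1000)
fair_rate = sum(m for _, m in fair) / len(fair)

# Biased sample: a vegan asking friends -- vegans heavily over-represented.
vegans = [p for p in population if p[0]]
others = [p for p in population if not p[0]]
biased = random.sample(vegans, 800) + random.sample(others, 200)
biased_rate = sum(m for _, m in biased) / len(biased)

print(round(true_rate, 2), round(fair_rate, 2), round(biased_rate, 2))
```

The uniform sample lands close to the true proportion, while the friends-of-a-vegan sample badly underestimates it: the artifact comes from the sampling scheme, not from reality.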
stackexchange | llm_judgeable_groundtruth_similarity | 9704213 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I remove part of a string? For example in ATGAS_1121 I want to remove everything before _ .
Now provide the response and nothing else.
| Use regular expressions. In this case, you can use gsub :
gsub("^.*?_","_","ATGAS_1121")
[1] "_1121"
This regular expression matches the beginning of the string (^), any character (.) repeated zero or more times (*), and an underscore (_). The ? makes the match "lazy" so that it only matches as far as the first underscore. That match is replaced with just an underscore. See ?regex for more details and references | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9704213', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/670781/']} | jdg_380426
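The lazy-versus-greedy distinction is easy to sanity-check; the same pattern works unchanged in Python's re module (shown purely for comparison with the R gsub above):

```python
import re

# Lazy: .*? stops at the FIRST underscore.
print(re.sub(r"^.*?_", "_", "ATGAS_1121"))      # _1121
print(re.sub(r"^.*?_", "_", "ATGAS_A_1121"))    # _A_1121

# Greedy: .* runs to the LAST underscore.
print(re.sub(r"^.*_", "_", "ATGAS_A_1121"))     # _1121
```

With a single underscore the two variants agree, so the ? only matters once the string contains more than one delimiter.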
stackexchange | llm_judgeable_groundtruth_similarity | 9378500 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to convert some code from Python to C++ in an effort to gain a little bit of speed and sharpen my rusty C++ skills. Yesterday I was shocked when a naive implementation of reading lines from stdin was much faster in Python than C++ (see this ). Today, I finally figured out how to split a string in C++ with merging delimiters (similar semantics to python's split()), and am now experiencing deja vu! My C++ code takes much longer to do the work (though not an order of magnitude more, as was the case for yesterday's lesson). Python Code:
#!/usr/bin/env python
from __future__ import print_function
import time
import sys

count = 0
start_time = time.time()
dummy = None

for line in sys.stdin:
    dummy = line.split()
    count += 1

delta_sec = int(time.time() - start_time)
print("Python: Saw {0} lines in {1} seconds. ".format(count, delta_sec), end='')
if delta_sec > 0:
    lps = int(count/delta_sec)
    print(" Crunch Speed: {0}".format(lps))
else:
    print('')
C++ Code:
#include <iostream>
#include <string>
#include <sstream>
#include <time.h>
#include <vector>

using namespace std;

void split1(vector<string> &tokens, const string &str, const string &delimiters = " ") {
    // Skip delimiters at beginning
    string::size_type lastPos = str.find_first_not_of(delimiters, 0);
    // Find first non-delimiter
    string::size_type pos = str.find_first_of(delimiters, lastPos);

    while (string::npos != pos || string::npos != lastPos) {
        // Found a token, add it to the vector
        tokens.push_back(str.substr(lastPos, pos - lastPos));
        // Skip delimiters
        lastPos = str.find_first_not_of(delimiters, pos);
        // Find next non-delimiter
        pos = str.find_first_of(delimiters, lastPos);
    }
}

void split2(vector<string> &tokens, const string &str, char delim=' ') {
    stringstream ss(str); //convert string to stream
    string item;
    while(getline(ss, item, delim)) {
        tokens.push_back(item); //add token to vector
    }
}

int main() {
    string input_line;
    vector<string> spline;
    long count = 0;
    int sec, lps;
    time_t start = time(NULL);

    cin.sync_with_stdio(false); //disable synchronous IO

    while(cin) {
        getline(cin, input_line);
        spline.clear(); //empty the vector for the next line to parse

        //I'm trying one of the two implementations, per compilation, obviously:
        // split1(spline, input_line);
        split2(spline, input_line);

        count++;
    };
    count--; //subtract for final over-read

    sec = (int) time(NULL) - start;
    cerr << "C++ : Saw " << count << " lines in " << sec << " seconds." ;
    if (sec > 0) {
        lps = count / sec;
        cerr << " Crunch speed: " << lps << endl;
    } else
        cerr << endl;
    return 0;
}
//compiled with: g++ -Wall -O3 -o split1 split_1.cpp
Note that I tried two different split implementations. One (split1) uses string methods to search for tokens and is able to merge multiple tokens as well as handle numerous tokens (it comes from here ). The second (split2) uses getline to read the string as a stream, doesn't merge delimiters, and only supports a single delimeter character (that one was posted by several StackOverflow users in answers to string splitting questions). I ran this multiple times in various orders. My test machine is a Macbook Pro (2011, 8GB, Quad Core), not that it matters much. I'm testing with a 20M line text file with three space-separated columns that each look similar to this: "foo.bar 127.0.0.1 home.foo.bar" Results:
$ /usr/bin/time cat test_lines_double | ./split.py
       15.61 real         0.01 user         0.38 sys
Python: Saw 20000000 lines in 15 seconds.  Crunch Speed: 1333333
$ /usr/bin/time cat test_lines_double | ./split1
       23.50 real         0.01 user         0.46 sys
C++ : Saw 20000000 lines in 23 seconds.  Crunch speed: 869565
$ /usr/bin/time cat test_lines_double | ./split2
       44.69 real         0.02 user         0.62 sys
C++ : Saw 20000000 lines in 45 seconds.  Crunch speed: 444444
What am I doing wrong? Is there a better way to do string splitting in C++ that does not rely on external libraries (i.e. no boost), supports merging sequences of delimiters (like python's split), is thread safe (so no strtok), and whose performance is at least on par with python?
Edit 1 / Partial Solution?: I tried making it a more fair comparison by having python reset the dummy list and append to it each time, as C++ does. This still isn't exactly what the C++ code is doing, but it's a bit closer. Basically, the loop is now:
for line in sys.stdin:
    dummy = []
    dummy += line.split()
    count += 1
The performance of python is now about the same as the split1 C++ implementation.
/usr/bin/time cat test_lines_double | ./split5.py
       22.61 real         0.01 user         0.40 sys
Python: Saw 20000000 lines in 22 seconds.  Crunch Speed: 909090
One of the lessons here from this issue and yesterday's stdin line reading issue (linked above) are that one should always benchmark instead of making naive assumptions about languages' relative "default" performance. I appreciate the education. Thanks again to all for your suggestions!
Now provide the response and nothing else.
| As a guess, Python strings are reference counted immutable strings, so that no strings are copied around in the Python code, while C++ std::string is a mutable value type, and is copied at the smallest opportunity. If the goal is fast splitting, then one would use constant time substring operations, which means only referring to parts of the original string, as in Python (and Java, and C#…). The C++ std::string class has one redeeming feature, though: it is standard , so that it can be used to pass strings safely and portably around where efficiency is not a main consideration. But enough chat. Code -- and on my machine this is of course faster than Python, since Python's string handling is implemented in C which is a subset of C++ (he he):
#include <iostream>
#include <string>
#include <sstream>
#include <time.h>
#include <vector>

using namespace std;

class StringRef
{
private:
    char const*     begin_;
    int             size_;

public:
    int size() const { return size_; }
    char const* begin() const { return begin_; }
    char const* end() const { return begin_ + size_; }

    StringRef( char const* const begin, int const size )
        : begin_( begin )
        , size_( size )
    {}
};

vector<StringRef> split3( string const& str, char delimiter = ' ' )
{
    vector<StringRef>   result;

    enum State { inSpace, inToken };

    State state = inSpace;
    char const*     pTokenBegin = 0;    // Init to satisfy compiler.
    for( auto it = str.begin(); it != str.end(); ++it )
    {
        State const newState = (*it == delimiter? inSpace : inToken);
        if( newState != state )
        {
            switch( newState )
            {
            case inSpace:
                result.push_back( StringRef( pTokenBegin, &*it - pTokenBegin ) );
                break;
            case inToken:
                pTokenBegin = &*it;
            }
        }
        state = newState;
    }
    if( state == inToken )
    {
        result.push_back( StringRef( pTokenBegin, &*str.end() - pTokenBegin ) );
    }
    return result;
}

int main() {
    string input_line;
    vector<string> spline;
    long count = 0;
    int sec, lps;
    time_t start = time(NULL);

    cin.sync_with_stdio(false); //disable synchronous IO

    while(cin) {
        getline(cin, input_line);
        //spline.clear(); //empty the vector for the next line to parse

        //I'm trying one of the two implementations, per compilation, obviously:
        // split1(spline, input_line);
        //split2(spline, input_line);

        vector<StringRef> const v = split3( input_line );
        count++;
    };
    count--; //subtract for final over-read

    sec = (int) time(NULL) - start;
    cerr << "C++ : Saw " << count << " lines in " << sec << " seconds." ;
    if (sec > 0) {
        lps = count / sec;
        cerr << " Crunch speed: " << lps << endl;
    } else
        cerr << endl;
    return 0;
}
//compiled with: g++ -Wall -O3 -o split1 split_1.cpp -std=c++0x
Disclaimer: I hope there aren't any bugs. I haven't tested the functionality, but only checked the speed. But I think, even if there is a bug or two, correcting that won't significantly affect the speed. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9378500', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/379037/']} | jdg_380427
stackexchange | llm_judgeable_groundtruth_similarity | 21037833 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here's what I have. I've tried moving my section inside the "brand" and doing a pull-right, outside the brand and outside the collapse and doing a pull-left/right, while also trying to place it before or after the collapse section. When adding it to the brand section it works, but it goes down to a new line. How do I keep it on the same line?

<body>
  <header>
    <nav class="navbar navbar-default navbar-inverse" role="navigation" style="font-size: 18px">
      <div class="container">
        <div class="navbar-header">
          <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-ex1-collapse">
            <span class="sr-only">Toggle navigation</span>
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
          </button>
          <div class="navbar-brand site-title" style="text-decoration: none; font-size: 24px; font-weight:bold">@Html.ActionLink("Manager", "Index", "Player")</div>
        </div>
        <div class="collapse navbar-collapse navbar-ex1-collapse navbar-right">
          <ul class="nav navbar-nav">
            @* <li class="active">@Html.ActionLink("Home", "Index", "Player")</li>
            <li class="active">@Html.ActionLink("Match", "Index", "Match")</li>
            <li class="dropdown">
              <a href="#" class="dropdown-toggle" data-toggle="dropdown">Profile <b class="caret"></b></a>
              <ul class="dropdown-menu">
                <li>@Html.ActionLink("Change Password", "ManagePassword", "Account")</li>
                <li>@Html.ActionLink("Update Profile Info", "UpdateProfile", "Account")</li>
                <li>@Html.ActionLink("Log Off", "LogOff", "Account")</li>
              </ul>
            </li>
          </ul>
        </div>
        <div>
          <!-- I don't want it apart of the collapsible portion -->
          <div class="navbar-right">
            <ul class="nav navbar-nav">
              <li class="active">@Html.ActionLink("Match", "Index", "Match")</li>
            </ul>
          </div>
        </div>
      </div>
    </nav>
  </header>
Now provide the response and nothing else.
| Below is an example that shows how to have just about any kind of 'vanilla bootstrap' NAVBAR configuration you could want. It includes a site title, both collapsing and non-collapsing menu items aligned left or right, and static text. Be sure to read the comments to get a fuller understanding of what you can change. Enjoy!

Fiddle: http://jsfiddle.net/nomis/n9KtL/1/
Fiddle with clearfix and expanded options on left side like normal: http://jsfiddle.net/jgoemat/u1j8o0n3/1/

@import url('https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css');

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="//netdna.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>

<nav role="navigation" class="navbar navbar-default navbar-fixed-top">
  <div class="container">
    <!-- Title -->
    <div class="navbar-header pull-left">
      <a href="/" class="navbar-brand">GNOMIS</a>
    </div>
    <!-- 'Sticky' (non-collapsing) right-side menu item(s) -->
    <div class="navbar-header pull-right">
      <ul class="nav pull-left">
        <!-- This works well for static text, like a username -->
        <li class="navbar-text pull-left">User Name</li>
        <!-- Add any additional bootstrap header items.
             This is a drop-down from an icon -->
        <li class="dropdown pull-right">
          <a href="#" data-toggle="dropdown" style="color:#777; margin-top: 5px;" class="dropdown-toggle"><span class="glyphicon glyphicon-user"></span><b class="caret"></b></a>
          <ul class="dropdown-menu">
            <li>
              <a href="/users/id" title="Profile">Profile</a>
            </li>
            <li>
              <a href="/logout" title="Logout">Logout </a>
            </li>
          </ul>
        </li>
      </ul>
      <!-- Required bootstrap placeholder for the collapsed menu -->
      <button type="button" data-toggle="collapse" data-target=".navbar-collapse" class="navbar-toggle"><span class="sr-only">Toggle navigation</span><span class="icon-bar"></span><span class="icon-bar"></span><span class="icon-bar"></span>
      </button>
    </div>
    <!-- The Collapsing items navbar-left or navbar-right -->
    <div class="collapse navbar-collapse navbar-left">
      <!-- pull-right keeps the drop-down in line -->
      <ul class="nav navbar-nav pull-right">
        <li><a href="/news">News</a></li>
        <li><a href="/Shop">Shop</a></li>
      </ul>
    </div>
    <!-- Additional navbar items -->
    <div class="collapse navbar-collapse navbar-right">
      <!-- pull-right keeps the drop-down in line -->
      <ul class="nav navbar-nav pull-right">
        <li><a href="/locator">Locator</a></li>
        <li><a href="/extras">Extras</a></li>
      </ul>
    </div>
  </div>
</nav>

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21037833', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1072081/']} | jdg_380428 |
stackexchange | llm_judgeable_groundtruth_similarity | 50627493 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given a component that takes custom props as well as html attribute props, how should the interface for such a component be created? Ideally, the interface would also handle react-specific html props such as using className instead of class . This is the usage example for which I am trying to find the right interface: <MyComponent customProp='value' style={{textAlign: 'center'}} />
Now provide the response and nothing else.
| interface IMyComponentProps extends React.HTMLAttributes<HTMLElement> {
  customProp: string;
}

UPD: @ddek mentioned intersections (&). I would like to warn you about the following issue with that approach.

interface A {
  onClick: () => void;
}
interface B {
  onClick: (event: React.MouseEvent<HTMLElement>) => void;
}

// Typescript does not complain. This is not good
type AB = A & B;
const a: AB = {
  onClick: () => {}
};

// TS2320: Interface 'AB2' cannot simultaneously extend types 'A' and 'B'.
// Named property 'onClick' of types 'A' and 'B' are not identical.
interface AB2 extends A, B {}

// TS2430: Interface 'AC' incorrectly extends interface 'A'.
// Types of property 'onClick' are incompatible.
//   Type '(event: MouseEvent<HTMLElement, MouseEvent>) => void' is not
//   assignable to type '() => void'.
interface AC extends A {
  onClick: (event: React.MouseEvent<HTMLElement>) => void;
}

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/50627493', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1494725/']} | jdg_380429 |
stackexchange | llm_judgeable_groundtruth_similarity | 8348545 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
http://jsfiddle.net/JamesKyle/L4b8b/

This may be a futile effort, but I personally think it's possible. I'm not the best at Javascript or jQuery; however, I think I have found a simple way of making a simple prettyprint for html. There are four types of code in this prettyprint:

Plain Text
Elements
Attributes
Values

In order to stylize this I want to wrap elements, attributes and values with spans with their own classes. The first way I have of doing this is to store every single kind of element and attribute (shown below) and then wrap them with the corresponding spans:

$(document).ready(function() {
    $('pre.prettyprint.html').each(function() {
        $(this).css('white-space', 'pre-line');
        var code = $(this).html();
        var html-element = $(code).find('a, abbr, acronym, address, area, article, aside, audio, b, base, bdo, bdi, big, blockquote, body, br, button, canvas, caption, cite, code, col, colgroup, command, datalist, dd, del, details, dfn, div, dl, dt, em, embed, fieldset, figcaption, figure, footer, form, h1, h2, h3, h4, h5, h6, head, header, hgroup, hr, html, i, img, input, ins, kbd, keygen, label, legend, li, link, map, mark, meta, meter, nav, noscript, object, ol, optgroup, option, output, p, param, pre, progress, q, rp, rt, ruby, samp, script, section, select, small, source, span, strong, summary, style, sub, sup, table, tbody, td, textarea, tfoot, th, thead, title, time, tr, track, tt, ul, var, video, wbr');
        var html-attribute = $(code).find('abbr, accept-charset, accept, accesskey, actionm, align, alink, alt, archive, axis, background, bgcolor, border, cellpadding, cellspacing, char, charoff, charset, checked, cite, class, classid, clear, code, codebase, codetype, color, cols, colspan, compact, content, coords, data, datetime, declare, defer, dir, disabled, enctype, face, for, frame, frameborder, headers, height, href, hreflang, hspace, http-equiv, id, ismap, label, lang, language, link, longdesc, marginheight, marginwidth, maxlength, media, method, multiple, name, nohref, noresize, noshade, nowrap, object, onblur, onchange, onclick, ondblclick, onfocus, onkeydown, onkeypress, onkeyup, onload, onmousedown, onmousemove, onmouseout, onmouseover, onmouseup, onreset, onselect, onsubmit, onunload, profile, prompt, readonly, rel, rev, rows, rowspan, rules, scheme, scope, scrolling, selected, shape, size, span, src, standby, start, style, summary, tabindex, target, text, title, type, usemap, valign, value, valuetype, version, vlink, vspace, width');
        var html-value = $(code).find(/* Any instance of text in between two parentheses */);

        $(element).wrap('<span class="element" />');
        $(attribute).wrap('<span class="attribute" />');
        $(value).wrap('<span class="value" />');

        $(code).find('<').replaceWith('<');
        $(code).find('>').replaceWith('>');
    });
});

The second way I thought of was to detect elements as any amount of text surrounded by two < >'s, then detect attributes as text inside of an element that is either surrounded by two spaces or has an = immediately after it.

$(document).ready(function() {
    $('pre.prettyprint.html').each(function() {
        $(this).css('white-space', 'pre-line');
        var code = $(this).html();
        var html-element = $(code).find(/* Any instance of text in between two < > */);
        var html-attribute = $(code).find(/* Any instance of text inside an element that has a = immediately afterwards or has spaces on either side */);
        var html-value = $(code).find(/* Any instance of text in between two parentheses */);

        $(element).wrap('<span class="element" />');
        $(attribute).wrap('<span class="attribute" />');
        $(value).wrap('<span class="value" />');

        $(code).find('<').replaceWith('<');
        $(code).find('>').replaceWith('>');
    });
});

How would either of these be coded, if at all possible? Again, you can see this as a jsfiddle here: http://jsfiddle.net/JamesKyle/L4b8b/
Now provide the response and nothing else.
| Don't be so sure you have gotten all there is to pretty-printing HTML in so few lines. It took me a little more than a year and 2000 lines to really nail this topic. You can just use my code directly or refactor it to fit your needs: https://github.com/prettydiff/prettydiff/blob/master/lib/markuppretty.js (and Github project). You can demo it at http://prettydiff.com/?m=beautify&html

The reason why it takes so much code is that people really don't seem to understand or value the importance of text nodes. If you are adding new and empty text nodes during beautification then you are doing it wrong and are likely corrupting your content. Additionally, it is also really easy to screw it up the other way and remove white space from inside your content. You have to be careful about these or you will completely destroy the integrity of your document.

Also, what if your document contains CSS or JavaScript? Those should be pretty printed as well, but have very different requirements from HTML. Even HTML and XML have different requirements. Please take my word for it that this is not a simple thing to figure out. HTML Tidy has been at this for more than a decade and still screws up a lot of edge cases.

As far as I know my markup_beauty.js application is the most complete pretty-printer ever written for HTML/XML. I know that is a very bold statement, and perhaps arrogant, but so far it's never been challenged. Look at my code and if there is something you need that it is not doing please let me know and I will get around to adding it in. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8348545', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/847126/']} | jdg_380430 |
stackexchange | llm_judgeable_groundtruth_similarity | 970097 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We consider functions defined on an interval $[a,b]$. I have to prove that a product of functions of bounded variation is a function of bounded variation. I have to also show that this isn't true for quotient in general and tell which additional assumption guarantees that quotient IS of bounded variation.
Now provide the response and nothing else.
| Hint: $|(fg)(x)-(fg)(y)|\leq |f(x)||g(x)-g(y)|+|g(y)||f(x)-f(y)|$. Again $f,g$ are bounded so what will you get from here?? | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/970097', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/121568/']} | jdg_380431 |
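A completion of the hint above (my addition, not part of the original answer): summing the displayed estimate over any partition $a = x_0 < \dots < x_n = b$ bounds the total variation of $fg$, using that functions of bounded variation on $[a,b]$ are bounded:

```latex
\sum_{i=1}^{n} \big|(fg)(x_i)-(fg)(x_{i-1})\big|
  \le \|f\|_\infty \sum_{i=1}^{n} |g(x_i)-g(x_{i-1})|
    + \|g\|_\infty \sum_{i=1}^{n} |f(x_i)-f(x_{i-1})|
  \le \|f\|_\infty\, V_a^b(g) + \|g\|_\infty\, V_a^b(f) < \infty.
```

For the quotient, the additional assumption $|g| \ge \varepsilon > 0$ on $[a,b]$ suffices: then $\left|\tfrac{1}{g(x)}-\tfrac{1}{g(y)}\right| = \tfrac{|g(y)-g(x)|}{|g(x)g(y)|} \le \tfrac{|g(y)-g(x)|}{\varepsilon^2}$, so $V_a^b(1/g) \le V_a^b(g)/\varepsilon^2$, and $f/g = f \cdot (1/g)$ has bounded variation by the product result. Without the lower bound this fails: $g(x)=x$ on $(0,1]$ with $g(0)=1$ is of bounded variation, but $1/g$ is unbounded on $[0,1]$ and hence not of bounded variation.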
stackexchange | llm_judgeable_groundtruth_similarity | 48087015 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to implement this css rule on a td const boldText = { fontWeight: 'bold'}<td style={boldText}>Content</td> But it's throwing the following error: [ts]Type '{ style: { fontWeight: string; }; children: string; }' is not assignable to type 'DetailedHTMLProps<TdHTMLAttributes<HTMLTableDataCellElement>, HTMLTableDataCellElement>'. Type '{ style: { fontWeight: string; }; children: string; }' is not assignable to type 'TdHTMLAttributes<HTMLTableDataCellElement>'. Types of property 'style' are incompatible. Type '{ fontWeight: string; }' is not assignable to type 'CSSProperties'. Types of property 'fontWeight' are incompatible. Type 'string' is not assignable to type '"bold" | "initial" | "inherit" | "unset" | "normal" | "bolder" | "lighter" | 100 | 200 | 300 | 40...'.
Now provide the response and nothing else.
| Typescript can be awfully silly sometimes. It's inferring the type of fontWeight as a string, even though it could narrow it to its exact literal. You can just cast it as itself to fix this:

const boldText = {
    fontWeight: 'bold' as 'bold'
};

<td style={boldText}>Content</td>

These days you can also use the new as const declaration at the end of your literals for the same effect:

const boldText = {
    fontWeight: 'bold'
} as const;

<td style={boldText}>Content</td>

And finally, you can always provide an explicit type to the object when you declare it:

const boldText: React.CSSProperties = {
    fontWeight: "bold"
};

<td style={boldText}>Content</td>

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/48087015', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3102993/']} | jdg_380432 |
stackexchange | llm_judgeable_groundtruth_similarity | 193140 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given that $a$, $b$, $c$ are non-negative real numbers such that $a+b+c=3$, how can we prove that: $a^2+b^2+c^2+ab+bc+ca\ge6$
Now provide the response and nothing else.
| By squaring $a+b+c=3$ we get$$(a+b+c)^2=a^2+b^2+c^2+2(ab+ac+bc)=9.$$ From the AM-GM inequality (or from the fact that $(x-y)^2=x^2+y^2-2xy\ge 0$, i.e. $2xy\le x^2+y^2$)we have$$ab+ac+bc \le \frac{a^2+b^2}2+\frac{a^2+c^2}2+\frac{b^2+c^2}2=a^2+b^2+c^2,$$i.e. $\frac12(a^2+b^2+c^2) \ge \frac12(ab+ac+bc)$, which is equivalent to$\frac12(a^2+b^2+c^2) - \frac12(ab+ac+bc) \ge0$. By adding the above equality and inequality together you get$$\frac32(a^2+b^2+c^2+ab+ac+bc)\ge9,$$which is equivalent to$$a^2+b^2+c^2+ab+ac+bc\ge6.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/193140', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/39866/']} | jdg_380433 |
stackexchange | llm_judgeable_groundtruth_similarity | 45063290 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have this formula:

=ROW()-MATCH(LOOKUP(2,1/(D$1:INDIRECT("D" & ROW() -1)<>""),D$1:INDIRECT("D" & ROW() -1)),D$1:INDIRECT("D" & ROW() -1),0)

In order to speed up counting days between events, it simply finds the last non-empty cell above it and subtracts its row value from the row value before the current cell. It was going fine for about a month and then suddenly I started getting random numbers. Example of random data: The 730 was the first error I hit, so I decided to spam the formula and see what my results are, and as you can see they are quite inconsistent. Any help at all would be greatly appreciated. Each cell has the proper column data input into the formula; that is not the issue.
Now provide the response and nothing else.
| Here is a plot of your data: You need to find two slopes (== taking two derivatives). First, find the slope between every two points (using numpy):

import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=np.float)
y = np.array([1, 2, 3, 4, 5, 6, 8, 10, 12, 14], dtype=np.float)
m = np.diff(y)/np.diff(x)
print (m)
# [ 1.  1.  1.  1.  1.  2.  2.  2.  2.]

Clearly, slope changes from 1 to 2 in the sixth interval (between sixth and seventh points). Then take the derivative of this array, which tells you when the slope changes:

print (np.diff(m))
# [ 0.  0.  0.  0.  1.  0.  0.  0.]

To find the index of the non-zero value:

idx = np.nonzero(np.diff(m))[0]
print (idx)
# 4

Since we took one derivative with respect to x, and indices start from zero in Python, idx+2 tells you that the slope is different before and after the sixth point. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45063290', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4955323/']} | jdg_380434 |
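The steps in the answer above can be wrapped into one small helper. This is a sketch added here (not part of the original answer); `find_slope_changes` is a name chosen for illustration, and it returns the zero-based indices of the points where the piecewise slope changes:

```python
import numpy as np

def find_slope_changes(x, y, tol=1e-12):
    """Return zero-based indices of points where the piecewise-linear slope changes."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    m = np.diff(y) / np.diff(x)                     # slope of each interval
    jumps = np.nonzero(np.abs(np.diff(m)) > tol)[0] # interval pairs with different slopes
    return jumps + 1                                # index of the point joining the two intervals

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [1, 2, 3, 4, 5, 6, 8, 10, 12, 14]
print(find_slope_changes(x, y))  # [5]  -> the sixth point (x == 6), zero-based
```

The `tol` parameter guards against floating-point noise producing spurious "slope changes" on real data.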
stackexchange | llm_judgeable_groundtruth_similarity | 9775538 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a packed vector of four 64-bit floating-point values. I would like to get the sum of the vector's elements. With SSE (and using 32-bit floats) I could just do the following:

v_sum = _mm_hadd_ps(v_sum, v_sum);
v_sum = _mm_hadd_ps(v_sum, v_sum);

Unfortunately, even though AVX features a _mm256_hadd_pd instruction, it differs in the result from the SSE version. I believe this is due to the fact that most AVX instructions work as SSE instructions for each low and high 128-bits separately, without ever crossing the 128-bit boundary. Ideally, the solution I am looking for should follow these guidelines:

1) only use AVX/AVX2 instructions. (no SSE)
2) do it in no more than 2-3 instructions.

However, any efficient/elegant way to do it (even without following the above guidelines) is always well accepted. Thanks a lot for any help.

-Luigi Castelli
Now provide the response and nothing else.
| If you have two __m256d vectors x1 and x2 that each contain four double s that you want to horizontally sum, you could do:

__m256d x1, x2;
// calculate 4 two-element horizontal sums:
// lower 64 bits contain x1[0] + x1[1]
// next 64 bits contain x2[0] + x2[1]
// next 64 bits contain x1[2] + x1[3]
// next 64 bits contain x2[2] + x2[3]
__m256d sum = _mm256_hadd_pd(x1, x2);
// extract upper 128 bits of result
__m128d sum_high = _mm256_extractf128_pd(sum, 1);
// add upper 128 bits of sum to its lower 128 bits
__m128d result = _mm_add_pd(sum_high, _mm256_castpd256_pd128(sum));
// lower 64 bits of result contain the sum of x1[0], x1[1], x1[2], x1[3]
// upper 64 bits of result contain the sum of x2[0], x2[1], x2[2], x2[3]

So it looks like 3 instructions will do 2 of the horizontal sums that you need. The above is untested, but you should get the concept. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9775538', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1276110/']} | jdg_380435 |
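To check the lane bookkeeping in the answer above, the within-128-bit-lane semantics of `_mm256_hadd_pd` can be modeled in plain Python (a sketch added here for verification only — it is obviously not SIMD code):

```python
def hadd_pd(a, b):
    # AVX vhaddpd works within each 128-bit lane:
    # result = [a[0]+a[1], b[0]+b[1], a[2]+a[3], b[2]+b[3]]
    return [a[0] + a[1], b[0] + b[1], a[2] + a[3], b[2] + b[3]]

def hsum2(x1, x2):
    s = hadd_pd(x1, x2)
    low, high = s[:2], s[2:]   # _mm256_castpd256_pd128 / _mm256_extractf128_pd(sum, 1)
    return [low[0] + high[0], low[1] + high[1]]  # _mm_add_pd

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [10.0, 20.0, 30.0, 40.0]
print(hsum2(x1, x2))  # [10.0, 100.0]
```

The two outputs are indeed the full horizontal sums of x1 and x2, confirming the claimed lane layout.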
stackexchange | llm_judgeable_groundtruth_similarity | 66050 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am concerned about the use of bcrypt for client-side password generation. I am developing a password generation function to be used client-side, similar to PwdHash and PasswordMaker. Much has been said about the advantage of using bcrypt over faster hash functions because it slows down brute force attacks. I know bcrypt uses Blowfish internally, which is a symmetric encryption algorithm rather than a hash algorithm. So there must be a hard-coded key somewhere to use bcrypt, and since Blowfish is being used, it stands to reason that if the key is discovered, the password derivation can be reversed and the original password discovered. Since client-side code can be decompiled, the key could be easily discovered, making bcrypt unsafe to use client-side. Is my reasoning correct or have I missed something? Also, in a related question, wouldn't the same argument be valid server-side as well. A hash function cannot be reversed, but an encryption function can be if the key is known. Wouldn't it be safer to use a real hash server side, even if it is faster and therefore more susceptible to brute force attack, than to use bcrypt which is reversible? EDIT: user10008 notes below (post has been removed) that only parts of Blowfish are used in bcrypt and gave me a link. When I followed a link I found a function prototype that includes key as the last argument. So I still see the key being used to kick-start the bcrypt algorithm. If the key is required, and bcrypt uses symmetrical encryption instead of hashing, isn't the operation reversible? EDIT: Good answers from both martinstoeckli and user10008. I gave the answer to marginstoeckli because of the last sentence in the response: BCrypt can be seen as encrypting with throwing away of the key. This really cleared it up for me. Basically, we go through 2 phases P -> K ; P,K -> C and then throw away key K, leaving cyphertext C. Because we throw away the key K, we cannot decrypt back to plaintext P. 
Throwing away K effectively makes bcrypt a one-way function. EDIT: From user10008, the steps I gave above are more complex, however the essence is that the key K is used in the final phase and discarded. Thanks user10008.
Now provide the response and nothing else.
| It's just the other way round: BCrypt does not encrypt the password with a secret key, rather it uses the password as the key to encrypt a known text. In the setup where the key is generated, it uses both salt and the password (variable EksBlowfishSetup.key) to generate a key (variable bcrypt.state) used for encryption.

bcrypt(cost, salt, input)
    state ← EksBlowfishSetup(cost, salt, input)
    ctext ← "OrpheanBeholderScryDoubt"  //three 64-bit blocks
    repeat (64)
        ctext ← EncryptECB(state, ctext)  //encrypt using standard Blowfish in ECB mode
    return Concatenate(cost, salt, ctext)

EksBlowfishSetup(cost, salt, key)
    state ← InitState()
    state ← ExpandKey(state, salt, key)
    repeat (2^cost)
        state ← ExpandKey(state, 0, key)
        state ← ExpandKey(state, 0, salt)
    return state

BCrypt can be seen as encrypting with throwing away of the key. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/66050', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/52245/']} | jdg_380436 |
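The "password is the key, plaintext is fixed, key gets thrown away" shape of the argument can be illustrated with a deliberately toy construction (my addition — this is NOT bcrypt and NOT secure, just the structure). Here the expensive key setup is stood in for by PBKDF2 from Python's standard library, and the "encryption" of the fixed text is a simple XOR:

```python
import hashlib

MAGIC = b"OrpheanBeholderScryDoubt"  # bcrypt's fixed, publicly known plaintext

def toy_hash(password: bytes, salt: bytes) -> bytes:
    # derive a key from the password (stands in for EksBlowfishSetup)
    key = hashlib.pbkdf2_hmac("sha256", password, salt, 10_000, dklen=len(MAGIC))
    # "encrypt" the known text with that key, then discard the key
    return bytes(p ^ k for p, k in zip(MAGIC, key))

a = toy_hash(b"hunter2", b"salt")
b = toy_hash(b"hunter2", b"salt")
c = toy_hash(b"hunter3", b"salt")
print(a == b, a == c)  # True False
```

Even though XOR is trivially reversible given the key, the key itself was derived from the password and thrown away — inverting the stored output requires recovering the key, i.e. guessing the password. That is the sense in which "encrypting with throwing away of the key" is one-way.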
stackexchange | llm_judgeable_groundtruth_similarity | 2527561 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for a way to do something similar to a C/C++ #define in Adobe Flex. I'd like to have lots of different paths a project build can take depending on whether or not something was defined. Does such a thing exist in Flex? I know there are ways to set global variables, but that won't really suit my purpose. Being able to have structures with numerous #ifndef s and such is really what I'm in need of. Thanks!
Now provide the response and nothing else.
| Actually MXMLC (the compiler in the Flex SDK) does support some limited preprocessor features. You can use them to pass in constant values, or to simulate #ifdef / #ifndef type functionality. Check out this documentation

Example 1: This code only gets executed if the -define=CONFIG::debugging,true flag is passed to the compiler:

CONFIG::debugging {
    // Execute debugging code here.
}

Example 2: Change the color of the button depending on if you defined 'CONFIG::release' or 'CONFIG::debugging'

// compilers/MyButton.as
package {
    import mx.controls.Button;

    CONFIG::debugging
    public class MyButton extends Button {
        public function MyButton() {
            super();
            // Set the label text to blue.
            setStyle("color", 0x0000FF);
        }
    }

    CONFIG::release
    public class MyButton extends Button {
        public function MyButton() {
            super();
            // Set the label text to red.
            setStyle("color", 0xFF0000);
        }
    }
}

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2527561', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/195585/']} | jdg_380437 |
stackexchange | llm_judgeable_groundtruth_similarity | 299599 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I was reading John Baez’s paper on operads and phylogenetics trees where he formalizes a Jukes–Cantor model of phylogenetics. Because biological questions receive different answers depending on the model used, I was wondering if the operad that represents that model could be compared to an operad representing a different model. i.e., with some distance metric or structural similarity, or perhaps by choosing an appropriate topology on them? So, for example, we might instead adopt a model with more parameters like the Kimura one to better treat the patterns in our data. And it would be interesting to quantify, conceiving of the first model as a predecessor and the second as a recently discovered successor, what was retained in the successor model. I've been hoping to measure structural continuity across theory change (e.g., the preservation of Newtonian mechanics in a limit of GR, and the same for classical mechanics to quantum which can be measured by deformation theory). But I'm having trouble doing this with biological models, specifically phylogenetic ones formalized in a category theoretic way. I've talked to Baez about this a little on twitter, so let me know if attaching that would help answer this.
Now provide the response and nothing else.
| Yes, operads can be compared. There are lots of kinds of operad (enriched in various categories, symmetric or plain or defined with respect to a monad, one-colored or many-colored, and don't even get me started on models of $\infty$-operads...) and I haven't looked to see which flavor Baez is using. But one thing they all have in common (and similar to most mathematical structures, for that matter) is that once you've fixed your notion of "operad", there is a natural notion of morphism of operads , so that operads form a category . An operad $O$ can be compared to an operad $P$ via a morphism $O \to P$. More elaborate sorts of comparison are available in accordance with more elaborate types of operad. For example: If your operads are one-colored and enriched in topological spaces, say, then the set of morphisms $Hom(O,P)$ from $O$ to $P$ has the structure of a topological space. If your operads are many-colored, then $Hom(O,P)$ will actually itself have the structure of a category (so that many-colored operads form a 2-category ). And so forth. Similar comments apply to other sorts of categorical structures. This is really one of the nice things about category theory -- categorical structures naturally organize themselves into categories, so you can re-use category-theoretic concepts at higher levels of abstraction. For example, there is a category of categories (with functors as morphisms), which is even a 2-category (with natural transformations as 2-morphisms). | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/299599', 'https://mathoverflow.net', 'https://mathoverflow.net/users/123650/']} | jdg_380438 |
stackexchange | llm_judgeable_groundtruth_similarity | 3061224 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I tried to solve the integral: $$\int_{0}^{\infty}\frac{x^2}{(x^4+1)^2}dx$$ using $ x = \sqrt{\tan(u)}$ and $dx = \frac{ \sec^2(u)}{2\sqrt{\tan(u)}} du,$ but I ended up with an even worse looking integral $$ \int_{0}^{\frac{\pi}{2}}\frac{\sqrt{\tan(u)}}{\sec^2(u)}du.$$ Wolfram gave an answer of $ \dfrac{\pi}{8\sqrt{2}},$ but how would one get to that answer?
Now provide the response and nothing else.
| Let us start with a step of integration by parts: $$ \int_{0}^{+\infty}\frac{1}{4x}\cdot\frac{4x^3}{(x^4+1)^2}\,dx =\int_{0}^{+\infty}\frac{1}{4x^2}\left(1-\frac{1}{1+x^4}\right)\,dx=\frac{1}{4}\int_{0}^{+\infty}\frac{dx}{x^2+\frac{1}{x^2}}$$ and finish with Glasser's master theorem : $$ \frac{1}{8}\int_{-\infty}^{+\infty}\frac{dx}{\left(x-\frac{1}{x}\right)^2+2}\stackrel{\text{GMT}}{=}\frac{1}{8}\int_{-\infty}^{+\infty}\frac{dx}{x^2+2} = \frac{\pi}{8\sqrt{2}}.$$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3061224', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/579095/']} | jdg_380439 |
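As a quick numerical sanity check of the closed form (my addition, standard library only): substituting $x = t/(1-t)$ maps $[0,\infty)$ onto $[0,1)$, and a midpoint rule then reproduces $\pi/(8\sqrt{2}) \approx 0.27768$:

```python
import math

def f(x):
    return x * x / (x ** 4 + 1) ** 2

# substitute x = t/(1-t), dx = dt/(1-t)^2, turning [0, inf) into [0, 1)
n = 200_000
total = 0.0
for i in range(n):
    t = (i + 0.5) / n            # midpoint rule on [0, 1)
    x = t / (1.0 - t)
    total += f(x) / (1.0 - t) ** 2
total /= n

exact = math.pi / (8 * math.sqrt(2))
print(round(total, 6), round(exact, 6))
```

The transformed integrand is smooth and vanishes at $t \to 1$ (it decays like $x^{-4}$), so the midpoint rule converges quickly here.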
stackexchange | llm_judgeable_groundtruth_similarity | 23197 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Question: What are the connected components of the familiar spaces of functions between two (let's say compact and smooth, for simplicity) manifolds $M$ and $N$? Specifically, I'm thinking of the Hölder spaces $\mathcal{C}^{k,\alpha}(M, N)$ and the Sobolev spaces $\mathcal{W}^{k,p}(M, N)$. Some comments: For a smooth function $f:M\to N$, it seems clear that, at least, all continuous functions homotopic to $f$ will be connected to it. This question is inspired by the discussion of $\mathcal{W}^{k,p}(M, N)$ in McDuff-Salamon's book on $J$-holomorphic curves. There it's stated as an offhand remark that the connected components of $\mathcal{W}^{k,p}(M, N)$ (in the case of $M$ oriented & two-dimensional; I'm not sure if this makes a difference) are the completions of the sets {$f:M\to N \text{ smooth}: f_*[M]=A$}, for $A\in H_{\dim M}(N)$. If the McD-S factoid is true, there should exist sequences of smooth not-all-mutually-homotopic functions which converge in $\mathcal{W}^{k,p}(M, N)$. (This isn't too counterintuitive, since $\mathcal{W}^{k,p}(M, N)$ presumably contains functions which aren't continuous, & so don't themselves have a homotopy class). Can someone give me an example of this phenomenon? Please feel free to re-tag -- I can't think of anything really appropriate.
Now provide the response and nothing else.
| Any continuous map from M to N is homotopic to a smooth map, and if two smooth maps are homotopic, then they are also smoothly homotopic. (More generally, two homotopic functions are homotopic through a homotopy that is smooth except at the endpoints.) The proof involves convolving with Gaussians, and is standard; I think you can find it in Milnor's Topology from the Differentiable Viewpoint, for instance. (It's also appeared on mathoverflow before, but I couldn't find it just now.) The hard issues for smooth vs. continuous functions arise only once you start demanding the maps be injective. I can't say more about McDuff and Salamon without more context for the quote. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/23197', 'https://mathoverflow.net', 'https://mathoverflow.net/users/2819/']} | jdg_380440 |
Subsets and Splits