Dataset schema (column name, type, and value/length statistics):

- source: string (1 distinct value)
- task_type: string (1 distinct value)
- in_source_id: string, length 1 to 8
- prompt: string, length 209 to 40.4k
- gold_standard_solution: string, length 0 to 56.7k
- verification_info: string (1 distinct value)
- metadata: string, length 138 to 225
- problem_id: string, length 9 to 10
stackexchange
llm_judgeable_groundtruth_similarity
264317
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I would like to mark the periods of day versus night on a DateListPlot (and associated plots) by light gray regions for the night. Here's a minimal example that gives the appearance of what I seek: DateListPlot[ {{DateObject[{2022, 1, 1}], 80}, {DateObject[{2022, 1, 5}], 85}, {DateObject[{2022, 1, 10}], 82}}, GridLines -> {Automatic, Automatic}, GridLinesStyle -> {{LightGray, Thickness[0.1]}, Automatic}] (Note that the nights are not properly marked for each day.) There are three problems with this kludgy approach: The Thickness in GridLineStyle must change as I add more dates to the plot. I'd rather it be automatic. The specification Automatic for GridLines sometimes puts grid lines every day (as I need), but other times every other day, or every week . That of course defeats the approach. The harder problem is the following: The beginning and end of nighttime (and hence the duration of night) changes throughout the year, of course... longer in winter, shorter in summer. I'd like that subtlety included as well. This third component problem suggests that basing the shading on GridLines is not the proper approach. Any suggestions? Now provide the response and nothing else.
data = {{DateObject[{2022, 1, 1}], 80}, {DateObject[{2022, 1, 5}], 85}, {DateObject[{2022, 1, 10}], 82}};

sunrisesandsunsets =
  Transpose @ Map[Map[AbsoluteTime]@#["Values"] &]@
    Through[{Sunrise, Sunset}[DateRange @@ MinMax[data[[All, 1]]]]];

{min, max} = MinMax@data[[All, 2]];

DateListPlot[data,
 GridLines -> {Join @@ sunrisesandsunsets, Automatic},
 GridLinesStyle -> {{LightGray, Thin}, Automatic},
 Epilog -> {Opacity[.3, Gray],
   Rectangle[Scaled[{0, -1}, {#, min}], Scaled[{0, 1}, {#2, max}]] & @@@
     Partition[Prepend[.99 AbsoluteTime @ Min @ data[[All, 1]]] @ Flatten @ sunrisesandsunsets, 2]}]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/264317', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/9735/']}
jdg_71041
stackexchange
llm_judgeable_groundtruth_similarity
15962573
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a simple ruby script, hello.rb:

#!/usr/bin/env ruby
puts 'hello'

It runs ok at the command line:

# /usr/local/src/hello/hello.rb
hello

However, if I put it in cron:

* * * * * /usr/local/src/hello/hello.rb >> /usr/local/src/hello/hello.log 2>&1

There are errors in the log file:

/usr/bin/env: ruby: No such file or directory
/usr/bin/env: ruby: No such file or directory
...
/usr/bin/env: ruby: No such file or directory

/usr/bin/env ruby runs ok at command line though:

# /usr/bin/env ruby -v
ruby 1.8.7 (2012-10-12 patchlevel 371) [i686-linux]

How to fix the env error for cron? Now provide the response and nothing else.
The problem is that the environment isn't what you expect. You don't say whether the cron is running as your user, or as root, but, in either case, you can test to see what the environment looks like by adding another cron entry of: * * * * * /usr/bin/env > /path/to/your/home/directory/env.txt Let that run once, then pull it out, and look at the file. Instead of using /usr/bin/env to try to find a Ruby to run your code, define the Ruby explicitly: * * * * * /path/to/the/ruby/you/want /usr/local/src/hello/hello.rb >> /usr/local/src/hello/hello.log 2>&1 You can figure out which Ruby you want by using: which ruby Alternately, instead of relying on /usr/bin/env in your #! line, define your Ruby there. Using /usr/bin/env ruby in your code is a convenience when you're using something like RVM or rbenv, and switching between versions of Ruby. It's not a good choice when you're putting something into "production", whether it's on your machine in your own account, or on a production host running as root. If you are on Linux or Mac OS, try man 5 crontab for more information. Also, " Where can I set environment variables that crontab will use? " should be very useful.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15962573', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/88597/']}
jdg_71042
stackexchange
llm_judgeable_groundtruth_similarity
28931224
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I got stuck on something that feels like it should be relatively easy. The code I bring below is a sample based on a larger project I'm working on. I saw no reason to post all the details, so please accept the data structures I bring as is. Basically, I'm creating a bar chart, and I just can't figure out how to add value labels on the bars (in the center of the bar, or just above it). Been looking at samples around the web but with no success implementing on my own code. I believe the solution is either with 'text' or 'annotate', but I: a) don't know which one to use (and generally speaking, haven't figured out when to use which); b) can't seem to get either to present the value labels. Would appreciate your help, my code below. Thanks in advance!

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pd.set_option('display.mpl_style', 'default')
%matplotlib inline

# Bring some raw data.
frequencies = [6, 16, 75, 160, 244, 260, 145, 73, 16, 4, 1]

# In my original code I create a series and run on that,
# so for consistency I create a series from the list.
freq_series = pd.Series(frequencies)

x_labels = [108300.0, 110540.0, 112780.0, 115020.0, 117260.0, 119500.0,
            121740.0, 123980.0, 126220.0, 128460.0, 130700.0]

# Plot the figure.
plt.figure(figsize=(12, 8))
fig = freq_series.plot(kind='bar')
fig.set_title('Amount Frequency')
fig.set_xlabel('Amount ($)')
fig.set_ylabel('Frequency')
fig.set_xticklabels(x_labels)

Now provide the response and nothing else.
Firstly, freq_series.plot returns an axis, not a figure, so to make my answer a little more clear I've changed your given code to refer to it as ax rather than fig, to be more consistent with other code examples. You can get the list of the bars produced in the plot from the ax.patches member. Then you can use the technique demonstrated in this matplotlib gallery example to add the labels using the ax.text method.

import pandas as pd
import matplotlib.pyplot as plt

# Bring some raw data.
frequencies = [6, 16, 75, 160, 244, 260, 145, 73, 16, 4, 1]

# In my original code I create a series and run on that,
# so for consistency I create a series from the list.
freq_series = pd.Series(frequencies)

x_labels = [
    108300.0, 110540.0, 112780.0, 115020.0, 117260.0, 119500.0,
    121740.0, 123980.0, 126220.0, 128460.0, 130700.0,
]

# Plot the figure.
plt.figure(figsize=(12, 8))
ax = freq_series.plot(kind="bar")
ax.set_title("Amount Frequency")
ax.set_xlabel("Amount ($)")
ax.set_ylabel("Frequency")
ax.set_xticklabels(x_labels)

rects = ax.patches

# Make some labels.
labels = [f"label{i}" for i in range(len(rects))]

for rect, label in zip(rects, labels):
    height = rect.get_height()
    ax.text(
        rect.get_x() + rect.get_width() / 2, height + 5, label,
        ha="center", va="bottom"
    )

plt.show()

This produces a labeled plot that looks like:
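As an aside, on newer matplotlib versions (3.4 and later) the Axes.bar_label helper can attach these labels without the manual loop. A minimal sketch, assuming the same freq_series as above:

import pandas as pd
import matplotlib.pyplot as plt

frequencies = [6, 16, 75, 160, 244, 260, 145, 73, 16, 4, 1]
freq_series = pd.Series(frequencies)

plt.figure(figsize=(12, 8))
ax = freq_series.plot(kind="bar")

# Each bar() call produces a BarContainer, exposed via ax.containers;
# bar_label writes one text label per bar in the container.
for container in ax.containers:
    ax.bar_label(container, fmt="%d", padding=3)

plt.show()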
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28931224', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2594546/']}
jdg_71043
stackexchange
llm_judgeable_groundtruth_similarity
1533852
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Recently I started using WCF, and I was very impressed, so much so , that I am wondering about getting more into WPF, but my initial experiences with XAML were not that great, about 1 year ago. Have things improved, is the delivery truly browser independent (can run on browsers other than IE). Is development productivity increased, better code/ui seperation? Does it require learning silverlight? If you've had real experience with WPF, please state the pro's and cons. Now provide the response and nothing else.
First: WPF is 100% browser independent, because WPF doesn't run in the browser. XBAP (now deprecated) and Silverlight run in the browser, but both are subsets of WPF. Second: The major benefit of WPF is the hierarchical nature of the GUI structures you can create. If you've ever done ASP.NET programming, WPF is a more natural move than WinForms. Databinding in WPF is different, but better, IMO. Bottom line: XAML and it's intricacies are a big learning curve, and WPF and XAML go very much hand in hand. But once you understand the basics and can start constructing basic GUIs with it, you'll never want to use WinForms again. Regarding Pros and Cons, I'd say the only real con to WPF is the large learning curve. It can do everything WinForms can do from a .NET perspective, and is easier to use (once you understand it) than WinForms. The only place it might be "lacking" is in direct GDI+ manipulation, but you'll probably want to avoid that wherever possible. If it becomes necessary in a WPF project, just create a WinForms control and host it in WPF. Not the most elegant solution, but it can get the job done.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1533852', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41543/']}
jdg_71044
stackexchange
llm_judgeable_groundtruth_similarity
19130236
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I don't understand the results of the following code:

#include <stdio.h>
#include <conio.h>

int main()
{
    int a[4] = {1, 3, 5, 6};   // suppose a is stored at location 2010
    printf("%d\n", a + 2);
    printf("%d", a++);
    return 0;
}

Why does the second printf function produce the following error?

error: lvalue required as increment operand

Now provide the response and nothing else.
Part 1: An array name is not a modifiable lvalue: you can add a value to an array name in an expression, but you can't modify the name itself. The expression a + 2 doesn't modify a, but a++ is equivalent to a = a + 1, which tries to modify the array name — hence the lvalue error. The expression a++ in the second printf is wrong; it is an example of a semantic-phase error. Read the following language standard clauses: 6.3.2.1 Lvalues, arrays, and function designators: 724 A modifiable lvalue is an lvalue that does not have array type, does not have an incomplete type, does not have a const-qualified type, and if it is a structure or union, does not have any member (including, recursively, any member or element of all contained aggregates or unions) with a const-qualified type. 729 Except when it is the operand of the sizeof operator or the unary & operator, or is a string literal used to initialize an array, an expression that has type "array of type" is converted to an expression with type "pointer to type" that points to the initial element of the array object and is not an lvalue.

Part 2: Note that in most expressions an array name decays into the address of its first element (read "some exceptions where array name not decaying into a pointer to first element?", ably answered by @H2CO3, for the exceptions). When you write a + 2, the result is the address of the third element (the element at index 2), so a + 2 is the same as &a[2]. It is an address, not the value at that index. To print an address, use %p instead of %d and cast the address to void*, as follows:

printf("address (a + 2) = %p , &a[2] = %p", (void*)(a + 2), (void*)(&a[2]));

To print the value, you need the dereference operator *, as follows:

printf("value *(a + 2) = %d , a[2] = %d", *(a + 2), a[2]);

Part 3: "Suppose a is stored at location 2010 — is the output of the first printf 2012?" No: pointer arithmetic is different from integer arithmetic. Since the array name decays into the address of the first element in most expressions, a + 2 is the address of the third element, the one at index 2. So if int is 4 bytes on your system, a + 2 points to location 2018, given your assumption that a's address is 2010. To understand this, read 10.2 Pointers and Arrays; Pointer Arithmetic, and Pointer Arithmetic.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19130236', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2485742/']}
jdg_71045
stackexchange
llm_judgeable_groundtruth_similarity
255753
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm designing a circuit to power a high-brightness UV LED. I'd like to safely drive it as close as possible to maximum irradiance, and I'm having trouble figuring out whether all the components are properly chosen to avoid damaging/stressing anything (especially the LED, which is relatively expensive ). A Raspberry PI will be used to provide the on/off logic. The PI is powered separately, but shares a common ground with the LED power system. Ideally I'd like the LED to be driven directly by a N-FET, where the gate is hooked up to a raspberry pi GPIO pin: simulate this circuit – Schematic created using CircuitLab The first question I have is: am I reading the UV-LED spec ( here ) correctly? Forward Voltage: 3.45V Maximum Current: 1400mA Electrical Power (Max): 4830mW The above three make sense, because 3.45V * 1400mA = 4830 mW. Then I see some peculiar items: LED Output Power: 1500mW (min), 1700mW (Typical) Test Current for Typical Power: 1400mA How can the "Typical" LED output power be 1700mW when the test current for "typical" power is 1400mA? Essentially, I'm not sure whether my resistor value should be calculated based on the 1400mA figure (5V-3.45V / 1400 mA = 1.11 Ohms) or whether I should be deriving a current from the 1700mW figure (1700mW = 3.45V X I; I = 492mA; R = 3.1 Ohms) and then using a larger resistor. The second question I have is regarding the NFET. The NFET I have is rated for 1.7A and 30V (datasheet here ). But the package is scarily tiny (SOT-23) and my gut feeling says I should ask before proceeding. I will be running this at 1.4A which is pretty close to the 1.7 rating. Will it work, and do I actually need to get a tiny heatsink for this NFET? Or is it better to get a beefier NFET? Other than that, I'm making sure to get a hefty power resistor rated at 1-2 Watts, all wires will be 22 gauge, and all PCB traces will be nice and fat. Any other power-related gotchas I should be aware of? Power source details also listed below. Thanks so much! Datasheet for the NFET (1.7A / 30V): https://www.fairchildsemi.com/datasheets/ND/NDS355AN.pdf Datasheet for the UV LED: https://www.thorlabs.com/drawings/ca01256cc12b40f8-35473EF6-5056-0103-7951A15FE4DCB58B/M405D2-SpecSheet.pdf 2A+ Power Board and battery: https://www.adafruit.com/product/2465 https://www.adafruit.com/products/353 Now provide the response and nothing else.
Copper has got better thermal conductivity. Aluminium - \$ \mathrm{ 200 \frac {W} {m\cdot K} } \$ Copper - \$ \mathrm{ 400 \frac {W} {m\cdot K} } \$ (from here , also here ) But thermal conductivity within the solid material is only a part of the story. The rest of the story depends on where one wants to dump the heat into. Liquid coolant Copper heatsink (one may also call it heat transfer block) will perform better than aluminium. Air with forced convection In other words, there's a fan blowing onto the heatsink. Copper heatsink will perform better than aluminium. Air with natural convection I've saved best for last. It also looks like it's the O.P.'s case too. With natural convection air , the copper heatsink perform only marginally 1 better (in °C/W) than aluminium. This is because the bottleneck isn't in the transfer withing metal. When you have air with natural convection, the bottleneck is in the transfer between metal and air, and it's the same for Al and Cu. 1 I might add that the marginal increase is often not worth the cost of Cu. This curve demonstrates the non-linear relationship between heat transfer and material thermal conductivity. The curve is generic. It applies to any application having both conduction and convection components to the total heat transfer. [Radiation is typically small and is ignored in this calculation.] The shape of the curve is the same regardless of the application. The quantitative values on the axes are not shown because they depend on the power, part size and convective cooling conditions. They become fixed for any given application and set of conditions. It’s obvious from the shape of the curve that heat transfer depends on material thermal conductivity but there is also a point, a knee in the curve, where increasing thermal conductivity produces negligible improvement in the heat transfer . ( source , emphasis mine N.A.) Phil have already linked an ECN article which compares aluminium and copper in air with natural convection. Here's another take at it: what if we compare aluminum to a material with lower thermal conductivity (unlike copper). There is a company that makes thermally conductive plastic. It has got \$ \mathrm{20 \frac {W} {m\cdot K} } \$ conductivity, but that's a lot for a plastic. They have benchmarked it against aluminium in air with natural convection. E2 is the plastic ( source )
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/255753', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/70400/']}
jdg_71046
stackexchange
llm_judgeable_groundtruth_similarity
44604686
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Python 3.5 and Django 1.10 and trying to test my app in tests.py, but an error appeared, it said:

ValueError: Cannot assign "1": "NewsLetter.UserID" must be a "User" instance.

So how to test a FK value here? Here is the code:

class NewsletterModelTest(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Set up non-modified objects used by all test methods
        NewsLetter.objects.create(NewsLetterID=1, Email='[email protected]',
                                  Connected=False, UserID=1)

class NewsLetter(models.Model):
    NewsLetterID = models.AutoField(primary_key=True)
    Email = models.CharField(max_length=255)
    Connected = models.BooleanField(default=False)
    UserID = models.ForeignKey(User, on_delete=models.CASCADE)

    class Meta:
        db_table = 'NewsLetter'

Now provide the response and nothing else.
In your setUpTestData method you have to create a User object, and pass it into the NewsLetter object's create method.

@classmethod
def setUpTestData(cls):
    # Set up non-modified objects used by all test methods
    user = User.objects.create(<fill params here>)
    NewsLetter.objects.create(NewsLetterID=1, Email='[email protected]',
                              Connected=False, UserID=user)
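For illustration only: with Django's default auth User model, the create call could look like the sketch below. The username value and the use of get_user_model are assumptions for the sake of example, not something given in the question.

from django.contrib.auth import get_user_model

@classmethod
def setUpTestData(cls):
    User = get_user_model()
    # Hypothetical parameters -- whatever fields your User model requires go here.
    user = User.objects.create(username="newsletter-tester")
    NewsLetter.objects.create(NewsLetterID=1, Email='[email protected]',
                              Connected=False, UserID=user)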
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44604686', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7959147/']}
jdg_71047
stackexchange
llm_judgeable_groundtruth_similarity
46184197
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is enabling Safe Area Layout Guides compatible to iOS below 11? Now provide the response and nothing else.
I managed to work with the new Safe Area layout guides and maintain backwards compatibility with iOS 9 and iOS 10: (EDIT: as pointed out in the comments by @NickEntin, this implementation will presume there is a status bar present, which won't be true in landscape on the iPhone X. Resulting in to much space to the top (20 points). It will run perfectly fine however. E.g. if you want a view to be 10 points below the status bar (and 10 points below the sensor housing on iPhone X): In your XIB, go to File Inspector and enable the safe are by checking Use Safe Area Layout Guides . Create a constraint from the view's top to the main view's top, with >= (greater than or equal) constraint, constant 30 (30 because we want 10 points spacing to the status bar which is 20 points high) and priority High (750). Create a constraint from the view's top to the Safe Area's top, with = (equal) constraint, constant 10 and priority Low (250). The same can be done for a view at the bottom (and for leading/trailing or left/right to the Safe Area): In your XIB, go to File Inspector and enable the safe are by checking Use Safe Area Layout Guides . Create a constraint from the view's bottom to the main view's bottom, with >= (greater than or equal) constraint, constant 10 and priority High (750). Create a constraint from the view's bottom to the Safe Area's bottom, with = (equal) constraint, constant 10 and priority Low (250).
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46184197', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3040446/']}
jdg_71048
stackexchange
llm_judgeable_groundtruth_similarity
882784
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose we pick a random real number between 0 and 1 and call it $x$. There are $2^{\aleph_0}$ possible values, so the chance of picking any specific number (such as $x$) in that range is 0. But in the end, we did manage to pick $x$, despite its probability of 0. Does this mean that a 0% chance is actually possible, or is there some flaw in this logic? Now provide the response and nothing else.
Answer No, $0$ % chance does not mean impossible. Indeed, this is the canonical example of a non-trivial event with $0$ probability. Math Probability is a measure , and it is quite common for non-empty sets to have $0$ measure (such sets might be dense and uncountable &c). Conversely, an event happens almost surely (sometimes abbreviated as a.s.) if it happens with probability $1$ . Note the qualifier almost ! E.g., if you pick a random number in $[0;1]$ , it will be almost surely an irrational , moreover, a transcendental number (because their complements - rationals and algebraic numbers - are countable and thus have zero measure). This does not mean that you cannot possibly pick $\frac12$ . Philosophy Bayesian If you view probability as a subjective measure of likelihood that a certain event will occur, then, obviously, you cannot believe that one number in $[0;1]$ is more likely than another one; which means that each individual number has to be assigned probability of $0$ . Frequentist If you view probability as the limit of frequency, then a random sequence in $[0;1]$ will probably contain no duplicates, so, as the number of trials goes to $\infty$ , the number of successes (i.e., occurrences of the specific number) will be $0$ or $1$ , so the probability will be $0$ .
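A quick numerical illustration of the same idea — a sketch in Python, with the caveat that floating-point doubles are only a finite stand-in for the real interval $[0,1]$:

import random

random.seed(0)
target = 0.5
draws = (random.random() for _ in range(1_000_000))

# Under a continuous model each specific value has probability 0, yet every
# draw still produces *some* value; hitting a pre-chosen target essentially never happens.
print(any(d == target for d in draws))   # almost certainly False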
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/882784', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/118470/']}
jdg_71049
stackexchange
llm_judgeable_groundtruth_similarity
11711499
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm stuck on a MySQL query and don't know what I'm doing wrong (using MySQL 5.0.88). Here is my Query: SELECT b.some, b.thing, bt.else AS liefdatum_gewuenscht, bt.bar FROM btable AS b LEFT JOIN bschedule AS bt ON bt.key= b.key AND bt.type = b.type WHERE b.entry_from IN (1,2,3,4) AND b.`status` = "2" AND b.date >= 20120718 LIMIT 1,10 The query should return a single record, but the check for b.status causes an empty response, although the record in question has a status of 2 with 2 being a CHAR field. If I comment out the line it returns the correct record, however I would like to know what I'm doing wrong? Can anyone give me hint? Thanks! EDIT After some more testing I think it b.status isn't the problem, because if I use any other column and value from the record in question it also return nothing. E.g. the record will have columns ==== no ===== written_by ===== 1111 john smith so putting in: AND b.no = 1111 OR b.written_by = "john smith" also return an empty resultset. EDIT2 Here is a sample dataset and the query. I left out the LEFT JOIN because it doesn't make any difference: SELECT b.*FROM bk AS bWHERE 1AND b.entry_from IN ( 9900000002985 )AND b.`status` = "2 "AND b.bestelldatum >= 20120718LIMIT 1,10 Table: CREATE TABLE `bk` (`id` INT(11) NOT NULL AUTO_INCREMENT,`entry_from` VARCHAR(13) NULL DEFAULT NULL,`placed_at` VARCHAR(13) NULL DEFAULT NULL,`type` VARCHAR(2) NULL DEFAULT 'SF',`no` VARCHAR(35) NULL DEFAULT NULL,`date` DATE NULL DEFAULT NULL,`written_by` VARCHAR(35) NULL DEFAULT 'Händler',`status` VARCHAR(3) NULL DEFAULT NULL,PRIMARY KEY (`id`))COLLATE='latin1_swedish_ci'ENGINE=InnoDB; hm... how to paste a record... this is the record in question id = 985entry_from = 9900000002985placed_at = 9900000003005type = SFno = 11date = 2012-07-19written_by "Fachhändler"status = 2 Now provide the response and nothing else.
You are right, in C++, objects are created as soon as they're defined. You do not need to use the new keyword. However, unlike in Java, objects can be created with different kinds of duration. Using new creates an object on the heap, with dynamic storage duration: the variable lives until you explicitly delete it. (And new returns a pointer to the created object, so that you can track it) If you simply define an object, as in your first and third lines, then it is created with automatic storage duration: that is, the object exists until it goes out of scope. This means that you can create objects inside a function, and be guaranteed that they'll be destroyed as soon as you leave the function -- regardless of how you leave the function. Whether you return, or throw an exception, all objects with automatic storage duration (created without using new ) are guaranteed to be properly cleaned up. This means that you should always avoid new whenever possible. If you have to use new , you should typically wrap the resulting pointer into a smart pointer class, an object created with automatic storage duration, so that it gets destroyed automatically). The smart pointer will then call delete on the new-allocated object automatically, ensuring, again, that you don't leak memory. This distinction is a very powerful tool, which good C++ programmers need to understand well. It is a key to avoiding memory leaks, or more generally, resource leaks of all kinds, and it is, in some respects, more powerful than Java's garbage collector. For example, say we wish to open a file, and then write some data to it. In C++, we can do it like this: void foo() { std::ofstream file("foo.txt"); doStuff(file); // call a function which does something with the file } And because file was declared without using new , because it has automatic storage duration , we are guaranteed that it will have its destructor invoked when it goes out of scope, and it will be properly cleaned up -- that is, the stream will be flushed, and the file handle will be closed. It doesn't matter if doStuff might throw an exception. No matter how we leave foo , file will be properly destroyed, so we don't need to mess about with try / finally like you would in Java. The class is exception-safe by itself, without requiring any additional effort from the user. Try writing a similar snippet in Java, one which guarantees that even if doStuff throws an exception, the file will be immediately closed. It'll be much longer, and requires more care on the part of the user.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11711499', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/536768/']}
jdg_71050
stackexchange
llm_judgeable_groundtruth_similarity
33072570
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been programming in python for about two years; mostly data stuff (pandas, mpl, numpy), but also automation scripts and small web apps. I'm trying to become a better programmer and increase my python knowledge and one of the things that bothers me is that I have never used a class (outside of copying random flask code for small web apps). I generally understand what they are, but I can't seem to wrap my head around why I would need them over a simple function. To add specificity to my question: I write tons of automated reports which always involve pulling data from multiple data sources (mongo, sql, postgres, apis), performing a lot or a little data munging and formatting, writing the data to csv/excel/html, send it out in an email. The scripts range from ~250 lines to ~600 lines. Would there be any reason for me to use classes to do this and why? Now provide the response and nothing else.
Classes are the pillar of Object Oriented Programming . OOP is highly concerned with code organization, reusability, and encapsulation. First, a disclaimer: OOP is partially in contrast to Functional Programming , which is a different paradigm used a lot in Python. Not everyone who programs in Python (or surely most languages) uses OOP. You can do a lot in Java 8 that isn't very Object Oriented. If you don't want to use OOP, then don't. If you're just writing one-off scripts to process data that you'll never use again, then keep writing the way you are. However, there are a lot of reasons to use OOP. Some reasons: Organization:OOP defines well known and standard ways of describing and defining both data and procedure in code. Both data and procedure can be stored at varying levels of definition (in different classes), and there are standard ways about talking about these definitions. That is, if you use OOP in a standard way, it will help your later self and others understand, edit, and use your code. Also, instead of using a complex, arbitrary data storage mechanism (dicts of dicts or lists or dicts or lists of dicts of sets, or whatever), you can name pieces of data structures and conveniently refer to them. State: OOP helps you define and keep track of state. For instance, in a classic example, if you're creating a program that processes students (for instance, a grade program), you can keep all the info you need about them in one spot (name, age, gender, grade level, courses, grades, teachers, peers, diet, special needs, etc.), and this data is persisted as long as the object is alive, and is easily accessible. In contrast, in pure functional programming, state is never mutated in place. Encapsulation :With encapsulation, procedure and data are stored together. Methods (an OOP term for functions) are defined right alongside the data that they operate on and produce. In a language like Java that allows for access control , or in Python, depending upon how you describe your public API, this means that methods and data can be hidden from the user. What this means is that if you need or want to change code, you can do whatever you want to the implementation of the code, but keep the public APIs the same. Inheritance :Inheritance allows you to define data and procedure in one place (in one class), and then override or extend that functionality later. For instance, in Python, I often see people creating subclasses of the dict class in order to add additional functionality. A common change is overriding the method that throws an exception when a key is requested from a dictionary that doesn't exist to give a default value based on an unknown key. This allows you to extend your own code now or later, allow others to extend your code, and allows you to extend other people's code. Reusability: All of these reasons and others allow for greater reusability of code. Object oriented code allows you to write solid (tested) code once, and then reuse over and over. If you need to tweak something for your specific use case, you can inherit from an existing class and overwrite the existing behavior. If you need to change something, you can change it all while maintaining the existing public method signatures, and no one is the wiser (hopefully). Again, there are several reasons not to use OOP, and you don't need to. But luckily with a language like Python, you can use just a little bit or a lot, it's up to you. 
An example of the student use case (no guarantee on code quality, just an example):

Object Oriented

class Student(object):
    def __init__(self, name, age, gender, level, grades=None):
        self.name = name
        self.age = age
        self.gender = gender
        self.level = level
        self.grades = grades or {}

    def setGrade(self, course, grade):
        self.grades[course] = grade

    def getGrade(self, course):
        return self.grades[course]

    def getGPA(self):
        return sum(self.grades.values()) / len(self.grades)

# Define some students
john = Student("John", 12, "male", 6, {"math": 3.3})
jane = Student("Jane", 12, "female", 6, {"math": 3.5})

# Now we can get to the grades easily
print(john.getGPA())
print(jane.getGPA())

Standard Dict

def calculateGPA(gradeDict):
    return sum(gradeDict.values()) / len(gradeDict)

students = {}

# We can set the keys to variables so we might minimize typos
name, age, gender, level, grades = "name", "age", "gender", "level", "grades"
john, jane = "john", "jane"
math = "math"

students[john] = {}
students[john][age] = 12
students[john][gender] = "male"
students[john][level] = 6
students[john][grades] = {math: 3.3}

students[jane] = {}
students[jane][age] = 12
students[jane][gender] = "female"
students[jane][level] = 6
students[jane][grades] = {math: 3.5}

# At this point, we need to remember who the students are and where the grades
# are stored. Not a huge deal, but avoided by OOP.
print(calculateGPA(students[john][grades]))
print(calculateGPA(students[jane][grades]))
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/33072570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1887261/']}
jdg_71051
stackexchange
llm_judgeable_groundtruth_similarity
515891
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I generated the keypair in Computer 1. And move the public key to the Computer2(server) and make it in authorized_keys. And move the private key to Computer3(client) and use ssh-add to add it. Why I can directly login to server without offering a public key? What's the real workflow of ssh key authorization? Now provide the response and nothing else.
# getcap ./some_bin
./some_bin =ep

That binary has ALL the capabilities permitted (p) and effective (e) from the start. In the textual representation of capabilities, a leading = is equivalent to all=. From the cap_to_text(3) manpage: "In the case that the leading operator is =, and no list of capabilities is provided, the action-list is assumed to refer to all capabilities. For example, the following three clauses are equivalent to each other (and indicate a completely empty capability set): all= ; = ; cap_chown,<every-other-capability>=." Such a binary can do whatever it pleases, limited only by the capability bounding set, which on a typical desktop system includes everything (otherwise setuid binaries like su wouldn't work as expected). Notice that this is just a "gotcha" of the textual representation used by libcap: in the security.capability extended attribute of the file for which getcap will print /file/path =ep, all the meaningful bits are effectively on; for an empty security.capability, /file/path = (with the = not followed by anything) will be printed instead. If someone is still not convinced, here is a small experiment:

# cp /bin/ping /tmp/ping   # will wipe setuid bits and extended attributes
# su user -c '/tmp/ping localhost'
ping: socket: Operation not permitted
# setcap =ep /tmp/ping
# su user -c '/tmp/ping localhost'   # will work because of cap_net_raw
PING localhost(localhost (::1)) 56 data bytes
64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.073 ms
^C
# setcap = /tmp/ping
# su user -c '/tmp/ping localhost'
ping: socket: Operation not permitted

Notice that an empty file capability is also different from a removed capability (setcap -r /file/path): an empty file capability will block the Ambient set from being inherited when the file executes. A subtlety of the =ep file capability is that if the bounding set is not a full one, then the kernel will prevent a program with =ep on it from executing (as described in the "Safety checking for capability-dumb binaries" section of the capabilities(7) manpage).
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/515891', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/350311/']}
jdg_71052
stackexchange
llm_judgeable_groundtruth_similarity
13662645
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a project witch involves sending AT commands to mobiles phones modems and receiving their result codes.I already succeeded on sending the at commands, but what i couldn't figure out is reading the result codes from the commands like when you send "at" it returns "OK" and if you send "at+cbc" it return the battery status.I tried using Read and ReadLine and ReadExisting after sending the command but it didn't work, I also tried creating an event for the received data and using those functions but it didn't work either.This is my source code i hope you would give me some help. using System;using System.Collections.Generic;using System.ComponentModel;using System.Data;using System.Drawing;using System.Linq;using System.Text;using System.Windows.Forms;using System.IO.Ports;namespace WindowsFormsApplication1{ public partial class check : Form { SerialPort sp; public check() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { sp = new SerialPort(); sp.PortName = "COM3"; sp.BaudRate = 9600; sp.Parity = Parity.None; sp.DataBits = 8; sp.StopBits = StopBits.One; sp.ReadTimeout = 3000; sp.WriteTimeout = 3000; //sp.DataReceived += new SerialDataReceivedEventHandler(sp_DataReceived); } private void btn1_Click(object sender, EventArgs e) { try { if(!sp.IsOpen) { sp.Open(); } sp.WriteLine(tbCmd+"\r\n"); System.Threading.Thread.Sleep(2000); tbStatus.Text = sp.ReadLine(); } catch(Exception ex) { tbStatus.Text = "Error!\r\n" + ex.ToString(); } } //void sp_DataReceived(object sender, SerialDataReceivedEventArgs e) //{ // tbStatus.Text = sp.ReadExisting(); //} }} Now provide the response and nothing else.
If your service supports all CRUD operations, it's always advisable to implement a RESTful interface. It's not super-hard to do so. I've outlined some of the basics below. A RESTful service simply does a few things:

- It uses the HTTP request method to communicate the CRUD action
- It uses the HTTP status code to communicate response status, and
- It uses the URI to define your resource (file, database item you're accessing, etc.)
- It is stateless

The idea is to minimize the development of custom communications for these things that are already defined in the HTTP spec.

1 - REQUEST METHOD

The 4 HTTP request methods you're required to support for a RESTful service are POST, GET, PUT, and DELETE, and you may optionally support PATCH and HEAD. You can map these directly to your CRUD actions as follows:

POST = Create
GET = Retrieve
PUT = Update
DELETE = Delete
PATCH = Edit (a partial update, e.g. "change password". PUT becomes "replace")
HEAD = Header only (metadata about the resource)

To do this, route requests properly with a simple request method router as follows:

switch ($_SERVER["REQUEST_METHOD"]) {
    case "POST":
        // Create action
        break;
    case "GET":
        // Retrieve action
        break;
    case "PUT":
        // Update action
        break;
    case "DELETE":
        // Delete action
        break;
}

2 - STATUS CODE

You should further implement HTTP status codes from your service to communicate status back to the client, e.g.:

20x = success
30x = redirection
40x = communication issues
50x = server error

To do this, simply prepend your response with the proper HTTP header output, e.g.:

header("Status: 500 Internal Server Error");

You can reference the full list of implemented HTTP status codes here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

3 - URIs

For URIs, RESTful services usually follow a top-down approach to categorized naming, e.g. /object_type/id.content_type. Examples:

POST /user
PUT /user/1
GET /user/1.json
GET /user/1.html

You can implement a very rudimentary RESTful router for the above convention using Apache with mod_rewrite in an .htaccess file, as follows:

RewriteEngine On
RewriteRule ^([^\/]+)\/([^\.]+)\.(\w+)$ index.php?object_type=$1&object_id=$2&content_type=$3

You would then have index.php look for the appropriate object_type and id to route appropriately, e.g.:

$object = $_GET["object_type"];
$id = (int) $_GET["object_id"];
$content_type = $_GET["content_type"];
// Route from here to a class with the name of the object (e.g. UserController) via __autoload
// or to a file (e.g. user.php) via include, and pass id and content_type as params

4 - STATELESSNESS

Simply stated, the server maintains no "state" for the client. No requirement for storing session or status. Each request represents a complete transaction. I.e. if I GET user/1, the server won't remember that I did that, and future requests won't be dependent upon or affected by previous ones. If you implement these standards, congrats, you've built a RESTful service!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13662645', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1869149/']}
jdg_71053
stackexchange
llm_judgeable_groundtruth_similarity
63077
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What relations and differences are between statistical learning theory and computational learning theory ? Are they about the same topic? Solve the same problems, and use the same methods? For example, the former says it is the theory of prediction (regression, classification,...). Now provide the response and nothing else.
Computational learning theory, more concretely the probably approximately correct (PAC) framework, answers questions like: how many training examples are needed for a learner to learn, with high probability, a good hypothesis? How much computational effort do I need to learn such a hypothesis with high probability? It does not deal with the concrete classifier you are working with; it is about what you can and cannot learn with the samples at hand. In statistical learning theory you rather answer questions of the sort: how many training samples will the classifier misclassify before it has converged to a good hypothesis? I.e., how hard is it to train a classifier, and what guarantees do I have on its performance? Regretfully, I do not know a source where these two areas are described and compared in a unified manner. Still, though it is not much, I hope that helps.
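For a concrete flavor of the PAC-style question "how many samples are enough?", the classic bound for a finite hypothesis class $H$ in the realizable setting (a standard textbook result, not something specific to this thread) is
$$m \;\ge\; \frac{1}{\varepsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right),$$
which guarantees that, with probability at least $1-\delta$, any learner returning a hypothesis consistent with the $m$ training examples has true error at most $\varepsilon$.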
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/63077', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/1005/']}
jdg_71054
stackexchange
llm_judgeable_groundtruth_similarity
75989
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: According to Wikipedia $\pu{100 mL}$ of water dissolve at $\pu{25 ^\circ{}C}$ about $\pu{35 g}$ of $\ce{NaCl}$ ( 1 ), but about $\pu{79 g}$ of $\ce{NaClO3}$ ( 2 ), and around $\pu{210 g}$ for $\ce{NaClO4}$ ( 3 ). This looks to me roughly like its doubling solubility for every single higher level of oxidation. Earlier I was under the impression that the % of $\ce{Cl}$ or % of $\ce{Na}$ in the water would still be the same, and the amount of additional oxygen is what could be causing the increase in solubility.... ie. now each molecule has 3 oxygen atoms as opposed to none in case of $\ce{NaCl}$. But this doesn't hold true. I would also like to know what is the effect of multiple solutes sharing same atoms on each other. For example does dissolving some chlorate cause the solvent to hold more chloride ? As you may have guess Chemistry isn't my strong suite, I know basic chemistry, but now when I am looking at it again.. the whole thing has changed atleast w.r.t what I was taught during my school days. (right from how we determine the number of electrons) Now provide the response and nothing else.
Science is all about discovering hidden regularities, patterns, and laws. In that sense your bold generalization is good, but will it hold if we check just one more data point? Let's see: $\ce{KCl}$ has solubility of 25g/100ml, $\ce{KClO3}$ has 8, and $\ce{KClO4}$ has 1.5. Er, well... What we have here is an interplay between solvatation energy and lattice energy, the latter being determined by the crystal structure, which is a tricky thing, and hence so is solubility. Don't expect it to be predictable from qualitative considerations. You don't know it until you measure it. Chemistry is an experimental science, after all.
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/75989', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/46313/']}
jdg_71055
stackexchange
llm_judgeable_groundtruth_similarity
25962114
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to read a large csv file (approx. 6 GB) in pandas and I am getting a memory error:

MemoryError                               Traceback (most recent call last)
<ipython-input-58-67a72687871b> in <module>()
----> 1 data=pd.read_csv('aphro.csv',sep=';')
...
MemoryError:

Any help on this? Now provide the response and nothing else.
The error shows that the machine does not have enough memory to read the entire CSV into a DataFrame at one time. Assuming you do not need the entire dataset in memory all at one time, one way to avoid the problem would be to process the CSV in chunks (by specifying the chunksize parameter):

chunksize = 10 ** 6
for chunk in pd.read_csv(filename, chunksize=chunksize):
    process(chunk)

The chunksize parameter specifies the number of rows per chunk. (The last chunk may contain fewer than chunksize rows, of course.)

pandas >= 1.2: read_csv with chunksize returns a context manager, to be used like so:

chunksize = 10 ** 6
with pd.read_csv(filename, chunksize=chunksize) as reader:
    for chunk in reader:
        process(chunk)

See GH38225
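As a concrete sketch of what process(chunk) might do, here is one way to compute the mean of a hypothetical numeric column named "value" without ever holding the full file in memory; the column name is an assumption for illustration only:

import pandas as pd

filename = "aphro.csv"      # from the question
chunksize = 10 ** 6

total = 0.0
rows = 0
for chunk in pd.read_csv(filename, sep=";", chunksize=chunksize):
    # Each chunk is an ordinary DataFrame, so normal pandas operations apply.
    total += chunk["value"].sum()
    rows += len(chunk)

print("mean of 'value' column:", total / rows)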
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/25962114', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4064040/']}
jdg_71056
stackexchange
llm_judgeable_groundtruth_similarity
12682631
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a script that uses custom objects. I create them with a pseudo-constructor like this: function New-TestResult { $trProps = @{ name = ""; repo = @{}; vcs = $Skipped; clean = New-StageResult; # This is another pseudo-constructor build = New-StageResult; # for another custom object. test = New-StageResult; # - Micah start = get-date; finish = get-date; } $testResult = New-Object PSObject -Property $trProps return $testResult} These are useful because they can be passed to something like ConvertTo-Csv or ConvertTo-Html (unlike, say, a hashtable, which would otherwise accomplish my goals). They are typed as PSCustomObject objects. This code: $tr = new-testresult$tr.gettype() returns this: IsPublic IsSerial Name BaseType-------- -------- ---- --------True False PSCustomObject System.Object Can I change the Name field returned there from PSCustomObject to something else? Later on when I'm collating test results, I'll pass to another function what will sometimes be an individual result, and sometimes an array of results. I need to be able to do something different depending on which of those I get. Any help is appreciated. Now provide the response and nothing else.
Sure, try this after creating $testResult:

$testResult.psobject.TypeNames.Insert(0, "MyType")

The heart of the PowerShell extended type system is the psobject wrapper (at least in V1 and V2). This wrapper allows you to add properties and methods, modify the type names list, and get at the underlying .NET object, e.g.:

C:\PS> $obj = new-object psobject
C:\PS> $obj.psobject

BaseObject          :
Members             : {string ToString(), bool Equals(System.Object obj), int GetHashCode(), type GetType()}
Properties          : {}
Methods             : {string ToString(), bool Equals(System.Object obj), int GetHashCode(), type GetType()}
ImmediateBaseObject :
TypeNames           : {System.Management.Automation.PSCustomObject, System.Object}

Or try this from the prompt:

C:\PS> $d = [DateTime]::Now
C:\PS> $d.psobject
...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12682631', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/868206/']}
jdg_71057
stackexchange
llm_judgeable_groundtruth_similarity
33528507
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to use XCGLogger 3.0 to do the logging in an iOS9 app written in Swift 2. In the AppDelegate, I define let log = XCGLogger.defaultInstance() and in application:didFinishLaunchingWithOptions I do the setup with: log.setup(.Debug, showThreadName: true, showLogLevel: true, showFileNames: true, showLineNumbers: true, writeToFile: "/tmp/app.log", fileLogLevel: .Debug) When I start the app in the simulator I see the output of XCGLogger in the console of XCode 7. The output is: 2015-11-04 18:28:40.798 [Info] > myapp Version: 1.0 Build: 1 PID: 49243 2015-11-04 18:28:40.798 [Info] > XCGLogger Version: 3.0 - LogLevel: Debug 2015-11-04 18:28:40.801 [Info] > XCGLogger writing to log to: file:///tmp/app.log But when I look at the sandbox filesystem of the correct simulator instance (using SimPholders2), there is no logfile app.log. It's even worse when I start the app on my IPhone 6. The output in the XCode console is: 2015-11-04 18:36:14.692 [Error] > Attempt to open log file for writing failed: The operation couldn’t be completed. (Cocoa error 2.) I also tried different pathes like "/tmp/app.log", "Library/Caches/de.myidentifier.myapp/app.log", "/Library/Caches/de.myidentifier.myapp/app.log" etc. but without success... What am I doing wrong? Now provide the response and nothing else.
On iOS, you can't just write to the /tmp folder. You need to ensure the path is in the application's sandbox. To do that, you need to ask the system for your cache directory. The sample app in XCGLogger includes the code to do that, but I'll include it here as well. Try this:

let cacheDirectory: NSURL = {
    let urls = NSFileManager.defaultManager().URLsForDirectory(.CachesDirectory, inDomains: .UserDomainMask)
    return urls[urls.endIndex - 1]
}()
let logPath: NSURL = cacheDirectory.URLByAppendingPathComponent("app.log")

log.setup(.Debug, showThreadName: true, showLogLevel: true, showFileNames: true, showLineNumbers: true, writeToFile: logPath, fileLogLevel: .Debug)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33528507', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3902308/']}
jdg_71058
stackexchange
llm_judgeable_groundtruth_similarity
51587227
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When i'm running phpMyAdmin and click to Export/Import I always get an error: Error in processing request Error code: 500 Error text: Internal Server Error. OS - Ubuntu 18.04 Now provide the response and nothing else.
I faced this problem too. My PHP version was 7.2. Actually, this error comes from a phpMyAdmin library: /usr/share/phpmyadmin/libraries/sql.lib.php, at line 614. So you need to modify the file from

&& ($analyzed_sql_results['select_expr'][0] == '*')))

to

&& ($analyzed_sql_results['select_expr'][0] == '*'))

or you can replace the full method with the version below:

/**
* Function to check whether to remember the sorting order or not
*
* @param array $analyzed_sql_results the analyzed query and other variables set
*                                    after analyzing the query
*
* @return boolean
*/
function PMA_isRememberSortingOrder($analyzed_sql_results)
{
    return $GLOBALS['cfg']['RememberSorting']
        && ! ($analyzed_sql_results['is_count']
            || $analyzed_sql_results['is_export']
            || $analyzed_sql_results['is_func']
            || $analyzed_sql_results['is_analyse'])
        && $analyzed_sql_results['select_from']
        && ((empty($analyzed_sql_results['select_expr']))
            || (count($analyzed_sql_results['select_expr']) == 1)
                && ($analyzed_sql_results['select_expr'][0] == '*'))
        && count($analyzed_sql_results['select_tables']) == 1;
}

I hope this may help. Thank you.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51587227', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10147537/']}
jdg_71059
stackexchange
llm_judgeable_groundtruth_similarity
623057
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What’s the difference between df -h and df -kh ? I am trying both these commands in my terminal, however I don’t see any visible difference so wanted to understand. Now provide the response and nothing else.
There is effectively no difference. The -h option to df selects "human readable" output, meaning that the sizes of things will be scaled to appropriate amounts to give nice small readable values, such as 2.1G, or 806M. The -k option does something similar, but scales the sizes to kilobytes only, so you'll get e.g. 2165680 and 824550 instead of 2.1G and 806M. Since these options are conflicting with each other (you can't both have the sizes in kilobytes and in "human readable" format), the last of option specified will "win". The combination of these options that you use, -kh (which is the same as -k -h ), means that you'll get the effect of using only -h . There is therefore no difference between df -h and df -kh . Compare this behavior with conflicting options to other utilities, such as the -C , the -1 ("minus one"), and the -l ("minus ell") option to ls , and what happens if you use all in one order or the other.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/623057', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/222504/']}
jdg_71060
stackexchange
llm_judgeable_groundtruth_similarity
582711
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to know what would be the best resistor values for R5,R6,R7 at the PNP and NPN transistors, if I want to use the circuit below. Assume the LEDs Vf =2V at 20mA. Will the circuit values I have below be good enough to make it work? I think maybe using R6=1k, R7=1K and R5=10K will make the circuit better. Now provide the response and nothing else.
Resistor Regulation Your drawn schematic (not your proposed values) is fine enough. The regulation is about like this: $$\begin{align*}\%\,I_{_\text{LED}}&=-\%\,V_{_\text{LED}}\cdot \frac{1}{\frac{V_{_\text{CC}}}{V_{_\text{LED}}}-1}\end{align*}$$ In this case, you have four \$2\:\text{V}\$ LEDs, so \$V_{_\text{LED}}=8\:\text{V}\$ in the above equation. This means that you'd get a -5% change in current for a +10% change in the summed LED voltage. And that's probably adequate. That's why I said it was fine as it was. However, if you wanted to swap in four white \$3.3\:\text{V}\$ LEDs now, then this is a +65% change and you should expect a -33% change in the LED current. So it's not so good, in that regard. You'd have to recalculate \$R_8\$ and replace it. Drawn Schematic and Written Values These work out fine. Your \$R_5\$ and \$R_7\$ wind up being a resistor voltage divider with \$R_{_\text{TH}}\approx 910\:\Omega\$ and \$V_{_\text{TH}}\approx 21.8\:\text{V}\$ . Give the emitter at \$24\:\text{V}\$ and an estimated \$\mid V_{_\text{BE}}\mid\approx 750\:\text{mV}\$ , the base current is \$\frac{24\:\text{V}-21.8\:\text{V}-750\:\text{mV}}{910\:\Omega}\approx 1.6\:\text{mA}\$ . And that's more than fine enough for a collector current of \$20\:\text{mA}\$ . Your written replacements would be \$R_{_\text{TH}}\approx 910\:\Omega\$ and \$V_{_\text{TH}}\approx 2.2\:\text{V}\$ (maybe slightly higher depending on the I/O pin driver BJT saturation voltage.) This would mean a great deal more base current -- far more than needed. And it would require more base drive current compliance from your I/O pin. It can work. But I don't see the need to go there. Current Regulation Approach If you are interested in something that will just work right regardless of the LED type, then try this: simulate this circuit – Schematic created using CircuitLab You can substitute almost any LED type in there, mix or match. Doesn't matter. And you won't need to recalculate and replace a resistor if you change LED types.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/582711', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/293039/']}
jdg_71061
stackexchange
llm_judgeable_groundtruth_similarity
390705
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to calculate $\lim_{x\to 1^+} \log (x)^{\log(x)}$ ?i know that its "1", but why?How can i calculate this? Thank you very very much =) Now provide the response and nothing else.
Substituting $t = \log x$ you obtain$$\lim_{x \to 1^+} \log(x)^{\log(x)} = \lim_{t \to 0^+} t^t = \lim_{t \to 0^+} e^{t \log(t)} = e^{\lim_{t \to 0^+} t \log(t)}.$$To compute $\lim_{t \to 0^+} t \log(t)$ use, for example, de l'Hospital's rule.
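To spell out that last step: by l'Hospital's rule, $$\lim_{t \to 0^+} t \log(t) = \lim_{t \to 0^+} \frac{\log t}{1/t} = \lim_{t \to 0^+} \frac{1/t}{-1/t^2} = \lim_{t \to 0^+} (-t) = 0,$$ and therefore the original limit is $e^0 = 1$.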
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/390705', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/77673/']}
jdg_71062
stackexchange
llm_judgeable_groundtruth_similarity
33885235
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have so far avoided use of log.Fatal , but I recently co-incidentally discovered these questions; code-coverage and tests-using-log-fatal . One of the comments from the 100 code coverage questions says: ... In the vast majority of cases log.Fatal should be only be used in main, or init functions (or possibly some things meant to be called only directly from them)" It go me thinking, so I began to look at the standard library code provided with Go. There are lots of examples where the test code in the library makes use of log.Fatal which seems fine. There are a few examples outside of the test code, such as in net/http , shown below: // net/http/transport.go func (t *Transport) putIdleConn(pconn *persistConn) bool { ... for _, exist := range t.idleConn[key] { if exist == pconn { log.Fatalf("dup idle pconn %p in freelist", pconn) } } ...} If its is best practice to avoid use of log.Fatal , why is it used at all in the standard libraries, I would have expected just return an error. It seems unfair to the user of the library to cause os.Exit to be called and not providing any chance for the application to clean-up. I may be naive, so hence my question as a better practice would seem to be to call log.Panic which can be recovered and my theoretical long running stable application might have a chance of rising from the ashes. So what would best-practise say for Go about when should log.Fatal should be used? Now provide the response and nothing else.
It might be just me, but here is how I use log.Fatal . As per UNIX conventions, a process which encounters an error should fail as early as possible with a non-zero exit code. This led me to the following guidelines to use log.Fatal when… …an error happens in any of my func init() , as these happen when the imports are processed or before the main func is called, respectively. Conversely, I only do stuff not directly affecting the unit of work the library or cmd is supposed to do. For example, I set up logging and check whether we have a sane environment and parameters. No need to run main if we have invalid flags, right? And if we can not give proper feedback, we should tell this early. …an error happens which I know is irrecoverable. Let's assume we have a program which creates a thumbnail of an image file given on the command line. If this file does not exist or is unreadable because of insufficient permissions, there is no reason to continue and this error can not be recovered from. So we adhere to the conventions and fail. …an error occurs during a process which might not be reversible. This is kind of a soft definition, I know. Let me illustrate that. Let's assume we have an implementation of cp , and it was started non-interactively to recursively copy a directory. Now, let's assume we encounter a file in the target directory which has the same name (but different content) as a file to be copied there. Since we can not ask the user to decide what to do and we can not copy this file, we have a problem. Because the user will assume that the source and the target directories are exact copies when we finish with exit code zero, we can not simply skip the file in question. However, we can not simply overwrite it, since this might potentially destroy information. This is a situation we can not recover from without an explicit decision by the user, and so I'd use log.Fatal to explain the situation, hereby obeying the principle to fail as early as possible.
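To make the distinction concrete, here is a minimal, hedged sketch (the file names and messages are made up, not from the original answer): log.Fatal stays confined to main for start-up problems and truly irrecoverable failures, while the actual unit of work returns errors.

package main

import (
	"flag"
	"fmt"
	"log"
	"os"
)

// makeThumbnail stands in for the real unit of work; it returns an error
// instead of calling log.Fatal, so callers stay in control.
func makeThumbnail(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return fmt.Errorf("open %s: %w", path, err)
	}
	defer f.Close()
	// ... real image processing would go here ...
	return nil
}

func main() {
	in := flag.String("in", "", "input image")
	flag.Parse()
	if *in == "" {
		// invalid flags: no point running the real work, fail early
		log.Fatal("usage: thumbnailer -in <image>")
	}
	if err := makeThumbnail(*in); err != nil {
		// irrecoverable for this program, so exit with a non-zero code
		log.Fatal(err)
	}
}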
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/33885235', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/302164/']}
jdg_71063
stackexchange
llm_judgeable_groundtruth_similarity
11835860
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm writing a code for a deck of cards which shuffles the deck of cards. I tested the code but I don't really know if it's actually doing what it's supposed to be doing correctly? What do you think? This is the code for the shuffle method: public void shuffle(){ for( int x = myDeck.size(); x > 0 ; x--) { Random rn = new Random(); int index1 = rn.nextInt(52); Card c = myDeck.remove(index1); myDeck.add(c); } } My output seems shuffled in its numbers but not by the name of the card like spades hearts etc, For example this is my output when I test the code : Deuce of spadesSeven of spadesEight of spadesAce of spadesThree of heartsFive of heartsSix of heartsSeven of heartsNine of heartsTen of heartsQueen of heartsKing of heartsAce of heartsSeven of diamondsEight of diamondsJack of diamondsKing of diamondsThree of clubsSeven of clubsNine of clubsJack of clubsQueen of clubsKing of clubsAce of clubsQueen of spadesDeuce of clubsThree of spadesNine of diamondsFour of spadesFour of clubsDeuce of heartsJack of spadesTen of clubsSix of diamondsJack of heartsSix of clubsFour of diamondsFive of diamondsAce of diamondsFour of heartsNine of spadesTen of spadesFive of spadesThree of diamondsSix of spadesFive of clubsDeuce of diamondsEight of heartsKing of spadesTen of diamondsEight of clubsQueen of diamonds Like there's always repeated names. is it wrong since the point of shuffling is to mix it up? This is the actual question: When playing cards, it is, of course, important to shuffle the deck, that is, toarrange things so that the cards will be dealt in a random order. There areseveral ways to achieve this. One strategy involves repeatedly picking a cardat random out of the deck and moving it to the end. The following code usesthe Random class (which you met on page 8 of the “ArrayLists” section of theonline course) to perform one such “pick and move to the end” operation: Random rn = new Random();int index1 = rn.nextInt( 52 );Card c = myDeck.remove( index1 );myDeck.add( c ); To shuffle the deck effectively, this operation should be repeated many times(say, 500 times). Create a new instance method, shuffle, for the Deck classthat uses a single Random object and a for loop to shuffle myDeck. Aftersuitably modifying the main method, use it to test your new code. So my main question is: am I doing this wrong? Now provide the response and nothing else.
Just change rn.nextInt(52); to rn.nextInt(x) and you have a proper Fisher-Yates shuffle . No need to do more than 52 iterations. Why this works: In the first iteration (when x is 52) you'll select a random card from the full deck and move it last. In the second iteration (when x is 51) you'll select a random card from the remaining cards and move it last. ...and so on. After 52 iterations, the first card selected, will have ended up in the first index. Since this card was selected randomly from the full deck, each card is equally probable. Same applies for second index, third index, ... It follows that each possible permutation of the deck is equally probable. (In production code, just use Collections.shuffle in these situations.)
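For illustration, a sketch of the corrected method (assuming, as in the question, that the Deck class has a List<Card> field called myDeck — adjust to your actual types):

// inside the existing Deck class
public void shuffle()
{
    java.util.Random rn = new java.util.Random(); // one Random instance for the whole shuffle
    for (int x = myDeck.size(); x > 0; x--)
    {
        int index1 = rn.nextInt(x);   // pick only among the cards not yet moved to the end
        Card c = myDeck.remove(index1);
        myDeck.add(c);                // move the chosen card to the end
    }
}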
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11835860', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1500291/']}
jdg_71064
stackexchange
llm_judgeable_groundtruth_similarity
31877029
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The website is http://www.mercuriusresearch.co.uk which runs on Wordpress and based on Bootstrap . The error shows when you resize the screen below 768px, all the post title links stop working. The actual HTML is still showing the href but the post titles just behave like normal text. The error is displaying across the website (i.e. on all post title links) but only on those sections of the page that are taking links from Wordpress . For example, the sidebar links on my homepage work on all screen sizes. I haven't had this problem before. My suspicion, based on the previous paragraph, is that it relates to Wordpress somehow...but I am using the same code that I always use to bring in the titles from Wordpress . So that, and the fact that the error appears to relate to the size of the screen, leads me to think that Bootstrap is somehow related too. Any help would be much appreciated. The only code that isn't on the website is this, which is the Wordpress loop: <article id="post-<?php the_ID(); ?>" <?php post_class(); ?>><div class="post-date"> <?php k99_relative_time(); ?></div> <div class="post-title"> <?php the_title( sprintf( '<h3 class="entry-title"><a href="%s" rel="bookmark">', esc_url( get_permalink() ) ), '</a></h3>' ); ?></div><div class="post-excerpt"> <?php the_excerpt(); ?></div><div class="read-more"> <a href="<?php the_permalink() ?>">Read more &#8594;</a></div></article> As I say though, this is working on larger screen sizes and if there was something wrong with this section, it obviously would occur on all screen sizes. All the other code is there on the website (obviously, I am not sure what part exactly is relevant, or I wouldn't I have this problem). Now provide the response and nothing else.
It turned out the error above resulted from two problems: The service principal in the spring configuration was wrong. It was [email protected], but HTTP/[email protected] is correct. The Kvno in the keytab was not identical with the Kvno stored in the active directory. As stated on https://tomcat.apache.org/tomcat-7.0-doc/windows-auth-howto.html Active Directory raises the Kvno with every execution of ktpass. However I wasn't able to find the value (msDS-KeyVersionNumber) for it in our AD and could only grab it from the request. Summarized, the 'Cannot find key of appropriate type to decrypt ...' error can result from one of the following problems: The service principal in the spring security configuration is not identical with the one from the keytab (param /princ from ktpass). There is no key for the enctype the AD has sent the ticket with (param /crypto from ktpass and set in the krb5.conf/permitted_enctypes+default_tkt_enctypes). The Kvno from the ticket is different from the Kvno in the keytab (param /kvno from ktpass). The path to the keytab is wrong (see answer from Xavier Portebois) The process does not have permissions to read the keytab (See comment from user7610 )
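For reference, the keytab is typically (re)generated with ktpass on the AD side. The following is only an illustrative sketch — the host, realm, service account and kvno are placeholders you must replace with your own values:

ktpass /out http-web.keytab ^
  /princ HTTP/web.example.org@EXAMPLE.ORG ^
  /mapuser svc-tomcat@EXAMPLE.ORG ^
  /crypto AES256-SHA1 ^
  /ptype KRB5_NT_PRINCIPAL ^
  /kvno 3 ^
  /pass *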
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31877029', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3772700/']}
jdg_71065
stackexchange
llm_judgeable_groundtruth_similarity
50060276
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I create a custom radio button group like this in flutter Now provide the response and nothing else.
Here is the full code class CustomRadio extends StatefulWidget { @override createState() { return new CustomRadioState(); }}class CustomRadioState extends State<CustomRadio> { List<RadioModel> sampleData = new List<RadioModel>(); @override void initState() { // TODO: implement initState super.initState(); sampleData.add(new RadioModel(false, 'A', 'April 18')); sampleData.add(new RadioModel(false, 'B', 'April 17')); sampleData.add(new RadioModel(false, 'C', 'April 16')); sampleData.add(new RadioModel(false, 'D', 'April 15')); } @override Widget build(BuildContext context) { return new Scaffold( appBar: new AppBar( title: new Text("ListItem"), ), body: new ListView.builder( itemCount: sampleData.length, itemBuilder: (BuildContext context, int index) { return new InkWell( //highlightColor: Colors.red, splashColor: Colors.blueAccent, onTap: () { setState(() { sampleData.forEach((element) => element.isSelected = false); sampleData[index].isSelected = true; }); }, child: new RadioItem(sampleData[index]), ); }, ), ); }}class RadioItem extends StatelessWidget { final RadioModel _item; RadioItem(this._item); @override Widget build(BuildContext context) { return new Container( margin: new EdgeInsets.all(15.0), child: new Row( mainAxisSize: MainAxisSize.max, children: <Widget>[ new Container( height: 50.0, width: 50.0, child: new Center( child: new Text(_item.buttonText, style: new TextStyle( color: _item.isSelected ? Colors.white : Colors.black, //fontWeight: FontWeight.bold, fontSize: 18.0)), ), decoration: new BoxDecoration( color: _item.isSelected ? Colors.blueAccent : Colors.transparent, border: new Border.all( width: 1.0, color: _item.isSelected ? Colors.blueAccent : Colors.grey), borderRadius: const BorderRadius.all(const Radius.circular(2.0)), ), ), new Container( margin: new EdgeInsets.only(left: 10.0), child: new Text(_item.text), ) ], ), ); }}class RadioModel { bool isSelected; final String buttonText; final String text; RadioModel(this.isSelected, this.buttonText, this.text);} To use : void main() { runApp(new MaterialApp( home: new CustomRadio(), ));} Screenshot :
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/50060276', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2863386/']}
jdg_71066
stackexchange
llm_judgeable_groundtruth_similarity
7555
Below is a question asked on the forum ai.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was trying to implement the breadth-first search (BFS) algorithm for the sliding blocks puzzle (number type). Now, the main thing I noticed is that, if you have a $4 \times 4$ board, the number of states can be as large as $16!$ , so I cannot enumerate all states beforehand. How do I keep track of already visited states? I am using a class board each class instance contains a unique board pattern and is created by enumerating all possible steps from the current step. I searched on the net and, apparently, they do not go back to the just-completed previous step, BUT we can go back to the previous step by another route too and then again re-enumerate all steps which have been previously visited. So, how to keep track of visited states when all the states have not been enumerated already? Comparing already present states to the present step will be costly. Now provide the response and nothing else.
You can use a set (in the mathematical sense of the word, i.e. a collection that cannot contain duplicates) to store states that you have already seen. The operations you'll need to be able to perform on this are: inserting elements testing if elements are already in there Pretty much every programming language should already have support for a data structure that can perform both of these operations in constant ($O(1)$) time. For example: set in Python HashSet in Java At first glance, it may seem like adding all the states you ever see to a set like this will be expensive memory-wise, but it is not too bad in comparison to the memory you already need for your frontier; if your branching factor is $b$, your frontier will grow by $b - 1$ elements per node that you visit (remove $1$ node from frontier to "visit" it, add $b$ new successors/children), whereas your set will only grow by $1$ extra node per visited node. In pseudocode, such a set (let's name it closed_set , to be consistent with the pseudocode on wikipedia could be used in a Breadth-First Search as follows: frontier = First-In-First-Out Queuefrontier.add(initial_state)closed_set = set()while frontier not empty: current = frontier.remove_next() if current == goal_state: return something for each child in current.generate_children() if child not in closed_set: // This operation should be supported in O(1) time regardless of closed_set's current size frontier.add(child) closed_set.add(current) // this should also run in O(1) time (some variations of this pseudocode might work too, and be more or less efficient depending on the situation; for example, you could also take the closed_set to contain all nodes of which you have already added children to the frontier, and then entirely avoid the generate_children() call if current is already in the closed_set .) What I described above would be the standard way to handle this problem. Intuitively, I suspect a different "solution" could be to always randomize the order of a new list of successor states before adding them to the frontier. This way, you do not avoid the problem of occasionally adding states that you've already previousl expanded to the frontier, but I do think it should significantly reduce the risk of getting stuck in infinite cycles. Be careful : I do not know of any formal analysis of this solution that proves that it always avoids infinite cycles though. If I try to "run" this through my head, intuitively, I suspect it should kind of work, and it does not require any extra memory. There may be edge cases that I'm not thinking of right now though, so it also simply might not work, the standard solution described above will be a safer bet (at the cost of more memory).
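As a concrete illustration, here is a small Python sketch (it assumes each board state is encoded as a hashable value, e.g. a flat tuple of tile numbers, and that you have a successors() function — these names are made up, your Board class will differ):

from collections import deque

def bfs(initial_state, goal_state, successors):
    frontier = deque([initial_state])      # First-In-First-Out queue
    closed_set = {initial_state}           # states already added to the frontier
    while frontier:
        current = frontier.popleft()
        if current == goal_state:
            return current
        for child in successors(current):
            if child not in closed_set:    # O(1) membership test
                closed_set.add(child)
                frontier.append(child)
    return None                            # goal not reachable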
{}
{'log_upvote_score': 4, 'links': ['https://ai.stackexchange.com/questions/7555', 'https://ai.stackexchange.com', 'https://ai.stackexchange.com/users/-1/']}
jdg_71067
stackexchange
llm_judgeable_groundtruth_similarity
47371536
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to allow name and message customisation in both my lexer and parsers. The parser/lexer must be able to select a specific message or name, e.g.: messageManager.tokenName.eofToken Here, eofToken would be a String . tokenName would be a set of names for lexer tokens and messageManager would be a structure like { parserMessage, lexerMessage, contextName, tokenName } However, I want the message customisation very directly constructed. I don't want something like: TokenName tokenName = new TokenName();tokenName.eofToken = "end of file";tokenName.identifier = "identifier";tokenName.keyword = "keyword";tokenName.regExpLiteral = "regexp' literal";// much more... I want something like: new TokenName( "end of file", "identifier", "keyword", ...) I know we just need to define parameters and assign them to the corresponding variables, but I don't want a huge constructor like the one I had in my previous parser: public TokenNames( String booleanLiteral, String eofToken, String identifier, String punctuator, String keyword, String numericLiteral, String nullLiteral, String regExpLiteral, String stringLiteral, String xmlName, String xmlMarkup, String xmlPunctuator, String xmlTagCharacters, String xmlText){ this.booleanLiteral = booleanLiteral; this.eofToken = eofToken; this.identifier = identifier; this.punctuator = punctuator; this.keyword = keyword; this.numericLiteral = numericLiteral; this.nullLiteral = nullLiteral; this.regExpLiteral = regExpLiteral; this.stringLiteral = stringLiteral; this.xmlName = xmlName; this.xmlMarkup = xmlMarkup; this.xmlPunctuator = xmlPunctuator; this.xmlTagCharacters = xmlTagCharacters; this.xmlText = xmlText;} I believe it's possible with arrays or varargs (more readable). How to? Now provide the response and nothing else.
The Microsoft.AspNetCore.MVC.Controller class comes with the extension method HttpContext.RequestServices.GetService<T> , which can be used whenever the HttpContext is available in the pipeline (e.g. the HttpContext property will be null if called from the controller's constructor). Try this pattern Note: make sure you include this directive using Microsoft.Extensions.DependencyInjection ; Base Controller public abstract class BaseController<T> : Controller where T: BaseController<T>{ private ILogger<T> _logger; protected ILogger<T> Logger => _logger ?? (_logger = HttpContext.RequestServices.GetService<ILogger<T>>());} Child Controller [Route("api/authors")]public class AuthorsController : BaseController<AuthorsController>{ private readonly IAuthorRepository _authorRepository; public AuthorsController(IAuthorRepository authorRepository) { _authorRepository = authorRepository; } [HttpGet("LogMessage")] public IActionResult LogMessage(string message) { Logger.LogInformation(message); return Ok($"The following message has been logged: '{message}'"); }} Needless to say, remember to register your services in the Startup.cs --> ConfigureServices method
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47371536', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_71068
stackexchange
llm_judgeable_groundtruth_similarity
408436
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a big text file (263 lines) that contains lines something like these: image_name.jpg: *lots of spaces* JPEG image data, JFIF standard 1.01, resolution (DPI), density 96x96, segment length 16, baseline, precision 8, 1024x768, frames 3 \nimage_name.jpg: *lots of spaces* JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, comment: "CREATOR: gd-jpeg v1.0 (using IJG JPEG v62), quality = 70", progressive, precision 8, 960x540, frames 3 \nimage_name.png: *lots of spaces* PNG image data, 752 x 760, 8-bit/color RGBA, non-interlaced \n How can I remove all the text between : and \n at once? Now provide the response and nothing else.
With cut : cut -d: -f1 file With sed : sed -e 's/:.*//' file With awk : awk -F: '{print $1}' file With GNU grep or many BSD grep s (but not POSIX grep ): grep -o '^[^:]*' file cut is the shortest one. If you want to modify the file in-place, your sed may have an option -i that does so - but how exactly that works depends on your platform. Otherwise, > file2 && mv file2 file on the end of any of them will work. Alternatively, with ed , in-place everywhere: printf ',s/:.*/\nw\n' | ed file
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/408436', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/174627/']}
jdg_71069
stackexchange
llm_judgeable_groundtruth_similarity
28421777
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried union Spring 3(MVC) with JSF 2. I have some experience in Spring and JSF, but never tried to join them before. In the end I have 2 files @ManagedBean(name = "userBean")@Scope@Componentpublic class someBean { @Autowired private TestService testService; public void printString() { System.out.println(testService.getString()); }} and @ManagedBean(name = "studentBean")@Scope@Componentpublic class StudentBean { @Autowired private TestService testService; public void printString() { System.out.println(testService.getString()); }} For these file I have right configuration for spring, jsf, and web.xml. And have .xhtml page where I start printString() for 'someBean' and for 'StudentBean'. I have the NPE in first case and 'some string' in the console in second case. The reason is simple - different bean names in the Spring context and JSF. all problems finished after @Component => @Component("userBean") public class someBean { In the debug I saw that private TestService testService;@Autowiredpublic void setTestService(TestService testservice) { this.testService = testService;} When JSF bean is creating testService sets not null, but it is null during JSF lifecycle when public void pringString() { testService.blah();} testService is null. It is what I can't understand. Has someone deep knowledge the Spring and JSF to describe this situation in details? Now provide the response and nothing else.
Both JSF and Spring can act as bean containers. The @ManagedBean annotation instructs the JSF managed bean facility to create a new instance of the class, and manage it under the given name. The @Component annotation instructs the Spring ApplicationContext to create a new instance of the class, and manage it under the given name. That is, both JSF and Spring create an instance of that class, the JSF one is reachable through EL, but the Spring one gets its dependencies injected (because, being a spring annotation, @Autowired is not understood by the JSF managed bean facility). So you have a choice: Use the JSF managed bean facility for everything (which I would not recommend, as it is rather limited), use CDI for everything (which is an option, but does not use Spring), or use Spring for everything (which I usually do), by removing the @ManagedBean annotation, and making Spring beans accessible through EL by registering a SpringBeanFacesELResolver in your faces-config.xml. The Spring reference manual describes this in section 19.3.1 .
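For completeness, the ELResolver registration mentioned above is a snippet along these lines in faces-config.xml (check the exact form against the Spring reference documentation for your version):

<!-- inside faces-config.xml -->
<application>
    <el-resolver>org.springframework.web.jsf.el.SpringBeanFacesELResolver</el-resolver>
</application>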
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28421777', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4320284/']}
jdg_71070
stackexchange
llm_judgeable_groundtruth_similarity
485039
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What's wrong with this query: INSERT INTO Users( weight, desiredWeight ) VALUES ( 160, 145 ) WHERE id = 1; It works without the WHERE clause. I've seemed to have forgot my SQL. Now provide the response and nothing else.
MySQL INSERT Syntax does not support the WHERE clause so your query as it stands will fail. Assuming your id column is unique or primary key: If you're trying to insert a new row with ID 1 you should be using: INSERT INTO Users(id, weight, desiredWeight) VALUES(1, 160, 145); If you're trying to change the weight/desiredWeight values for an existing row with ID 1 you should be using: UPDATE Users SET weight = 160, desiredWeight = 145 WHERE id = 1; If you want you can also use INSERT .. ON DUPLICATE KEY syntax like so: INSERT INTO Users (id, weight, desiredWeight) VALUES(1, 160, 145) ON DUPLICATE KEY UPDATE weight=160, desiredWeight=145 OR even like so: INSERT INTO Users SET id=1, weight=160, desiredWeight=145 ON DUPLICATE KEY UPDATE weight=160, desiredWeight=145 It's also important to note that if your id column is an autoincrement column then you might as well omit it from your INSERT all together and let mysql increment it as normal.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/485039', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_71071
stackexchange
llm_judgeable_groundtruth_similarity
77
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Background: Following is from the book Graph Databases , which covers a performance test mentioned in the book Neo4j in Action : Relationships in a graph naturally form paths. Querying, or traversing, the graph involves following paths. Because of the fundamentally path-oriented nature of the datamodel, the majority of path-based graph database operations are highly aligned with the way in which the data is laid out, making them extremely efficient. In their book Neo4j in Action, Partner and Vukotic perform an experiment using a relational store and Neo4j. The comparison shows that the graph database is substantially quicker for connected data than a relational store.Partner and Vukotic’s experiment seeks to find friends-of-friends in a social network, to a maximum depth of five. Given any two persons chosen at random, is there a path that connects them which is at most five relationships long? For a social network containing 1,000,000 people, each with approximately 50 friends, the results strongly suggest that graph databases are the best choice for connected data, as we see in Table 2-1. Table 2-1. Finding extended friends in a relational database versus efficient finding in Neo4j Depth RDBMS Execution time (s) Neo4j Execution time (s) Records returned2 0.016 0.01 ~2500 3 30.267 0.168 ~110,000 4 1543.505 1.359 ~600,000 5 Unfinished 2.132 ~800,000 At depth two (friends-of-friends) both the relational database and the graph database perform well enough for us to consider using them in an online system. While the Neo4j query runs in two-thirds the time of the relational one, an end-user would barely notice the the difference in milliseconds between the two. By the time we reach depth three (friend-of-friend-of-friend), however, it’s clear that the relational database can no longer deal with the query in a reasonable timeframe: the thirty seconds it takes to complete would be completely unacceptable for an online system. In contrast, Neo4j’s response time remains relatively flat: just a fraction of a second to perform the query—definitely quick enough for an online system. At depth four the relational database exhibits crippling latency, making it practically useless for an online system. Neo4j’s timings have deteriorated a little too, but the latency here is at the periphery of being acceptable for a responsive online system. Finally, at depth five, the relational database simply takes too long to complete the query. Neo4j, in contrast, returns a result in around two seconds. At depth five, it transpires almost the entire network is our friend: for many real-world use cases, we’d likely trim the results, and the timings. Questions are: Is this a reasonable test to emulate what one might except to find in a social network? (Meaning do real social networks normally have nodes with approximately 50 friends for example; seems like the " rich get richer " model would be more natural for social networks, though might be wrong.) Regardless of the naturalness of the emulation, is there any reason to believe the results are off, or unreproducible? Now provide the response and nothing else.
Looking at this document called Anatomy of Facebook I note that the median is 100. Looking at the cumulative function plot I can bet that the average is higher, near 200. So 50 does not seem to be the best number here. However I think that this is not the main issue here. The main issue is the lack of information on how the database was used. It seems reasonable that a data store designed specifically for graph structures would be more efficient than traditional RDBMSs. However, even if RDBMSs are no longer the trendiest choice of data store, these systems have evolved continuously in a race with growing data set sizes. There are various types of possible designs, various ways of indexing data, improvements related to concurrency and so on. To conclude, I think that regarding reproducibility, the study lacks a proper description of how the database schema was designed. I do not expect a relational database to dominate on this kind of query; however, I would expect that with a well-tuned design the differences would not be so massive.
{}
{'log_upvote_score': 4, 'links': ['https://datascience.stackexchange.com/questions/77', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/158/']}
jdg_71072
stackexchange
llm_judgeable_groundtruth_similarity
26586060
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've tried to compile this program on an x64 computer: #include <cstring>int main(int argc, char* argv[]){ return ::std::strcmp(argv[0], "really really really really really really really really really" "really really really really really really really really really" "really really really really really really really really really" "really really really really really really really really really" "really really really really really really really really really" "really really really really really really really really really" "really really really really really really really really really" "really really really really really really really really really" "really really really really really really really long string" );} I compiled it like this: g++ -std=c++11 -msse2 -O3 -g a.cpp -o a But the resulting disassembly is like this: 0x0000000000400480 <+0>: mov (%rsi),%rsi 0x0000000000400483 <+3>: mov $0x400628,%edi 0x0000000000400488 <+8>: mov $0x22d,%ecx 0x000000000040048d <+13>: repz cmpsb %es:(%rdi),%ds:(%rsi) 0x000000000040048f <+15>: seta %al 0x0000000000400492 <+18>: setb %dl 0x0000000000400495 <+21>: sub %edx,%eax 0x0000000000400497 <+23>: movsbl %al,%eax 0x000000000040049a <+26>: retq Why is no SIMD used? I suppose it could be to compare, say, 16 chars at once. Should I write my own SIMD strcmp , or is it a nonsensical idea for some reason? Now provide the response and nothing else.
In an SSE2 implementation, how should the compiler make sure that no memory accesses happen over the end of the string? It has to know the length first, and this requires scanning the string for the terminating zero byte. If you scan for the length of the string you have already accomplished most of the work of a strcmp function. Therefore there is no benefit to using SSE2. However, Intel added instructions for string handling in the SSE4.2 instruction set. These handle the terminating zero byte problem. For a nice write-up on them read this blog-post: http://www.strchr.com/strcmp_and_strlen_using_sse_4.2
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26586060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1095108/']}
jdg_71073
stackexchange
llm_judgeable_groundtruth_similarity
5860816
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: std::vector<int> my_ints;my_ints.push_back(1);my_ints.push_back(2);my_ints.push_back(3);my_ints.push_back(4);std::for_each(my_ints.begin(), my_ints.end(), std::cout.operator <<); Now provide the response and nothing else.
Because that's a member function, and for_each wants a function object that takes a single parameter. You'll have to write your own function: void print_to_stdout(int i){ std::cout << i;}std::for_each(my_ints.begin(), my_ints.end(), print_to_stdout); Another alternative is to mix std::mem_fun and std::bind1st (or any of the better C++0x/boost alternatives) to generate that function. But the best would be to use std::copy with a std::ostream_iterator : std::copy(my_ints.begin(), my_ints.end(), std::ostream_iterator<int>(std::cout));
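If C++11 is available, a lambda avoids both the named function and the old binder helpers (a small sketch):

#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> my_ints{1, 2, 3, 4};
    std::for_each(my_ints.begin(), my_ints.end(),
                  [](int i) { std::cout << i << ' '; });  // prints "1 2 3 4 "
}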
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5860816', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/171180/']}
jdg_71074
stackexchange
llm_judgeable_groundtruth_similarity
2283063
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have this function: $$\int_{L} \frac{dz}{z^{4}+1}$$$$L: x^2+y^{2}=2x $$ The question is how to calulate the integral. The solution uses the Cauchy Teorem: $$f(a)=\frac{1}{2\Pi i} \int \frac{f(z)dz}{z-a}$$ $$\frac{1}{z^4+1}=\frac{1}{(z-\sqrt{i})(z+\sqrt{i})(z^2+1)}$$we set a to be -i$$\frac{f(z)}{z-\sqrt{i}}$$so:$$f(z)=\frac{1}{(z+\sqrt(i))(z^2+1)}$$ and so $$\int_{L} \frac{dz}{1+z^4}=\int_{L} \frac{f(z)}{z-\sqrt{i}}=i2\Pi f(\sqrt{i})= \frac{1}{(-\sqrt{i}+\sqrt{i})(z^2+1)}=1/0 = \infty$$ What have I done wrong, how do you solve these ones ?! Now provide the response and nothing else.
Note that $$\cos 3\theta=4\cos^3\theta-3\cos\theta$$ Let $x=\cos\theta$. Then \begin{align*}8\cos^3\theta-6\cos\theta+1&=0\\2\cos3\theta+1&=0\\\cos3\theta&=\frac{-1}{2}\\3\theta&=2n\pi\pm\frac{2\pi}{3}\\\theta&=\frac{2n\pi}{3}\pm\frac{2\pi}{9}\end{align*} So $x=\cos\frac{2\pi}{9}\approx 0.7660$, $\cos\frac{4\pi}{9}\approx 0.1736$ or $\cos\frac{8\pi}{9}=-0.9397$. By letting $x=\cos\theta$, we can only find roots with $|x|\le1$. But as we already have $3$ roots, we have found all the roots.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2283063', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/380017/']}
jdg_71075
stackexchange
llm_judgeable_groundtruth_similarity
2361040
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $p(x)=a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0,\enspace a_i\in\mathbb{C}$ some polynomial. Suppose that $|a_0|$ is very small (compared to the other coefficients' magnitude). Is there any way the (complex) roots of $p(x)$ could be heavily affected by setting $a_0 = 0$ ? I think the answer could be no because the polynomial is continuous in the constant term and thus small changes in the constant term will affect the function only slightly. But does this hold true for the location of the roots? Why am I asking this? I try to incorporate a not selfwritten custom root finder into my Matlab-program but unfortunately in some rare cases one of its loops doesn't converge if the input vector's constant coefficient is of the magnitude $\approx 10^{-18}$ and then the algorithm crashes. However it does converge if I set the constant term to zero but I worry whether I could get wrong results. EDIT 1: Based on the answer by @Fixed Point, I could find a successor ( NAClab ) of the Matlab root finder that I was using and it doesn't crash anymore. I then went on to quickly investigate the polynomials Fixed Point proposed. Here are the results: Figure1: Up to degree 17 the root finder keeps the results on the real axis.From degree 18 onwards the calculated roots gain an imaginary part which grows linearly with the degree of the polynomial. The ratio between the constant coefficient $a_0$ and the highest order coefficient $a_n$ is in the order of $10^{-16}$ when the roots begin to diverge from the real axis and grows with approximately one order of magnitude per increase of polynomial degree. Figure2: Here the constant coefficient is set to zero, one can see that the roots that are close to each other get perturbed quite significantly. The Matlab code to reproduce the results can be downloaded here . EDIT 2: To address Fixed Point's questions: Since Brent's algorithm is guaranteed to converge (it could be slow but it will converge), I am curious as to why you were having the problem that you said you were. When developing the MIMO extension for ANP (animated nyquist diagram, a leisure project for educational purposes) I came to realize that the program would have to deal with high order polynomials even for small MIMO systems. I then noticed that Matlab's 'roots' would produce very inaccurate results when there were roots with high multiplicity present - even in trivial, obvious cases like $(x+1)^4=0$ (try roots(poly([-1,-1,-1,-1])) ). Even if my program should only be used for entertainment, that wasn't good enough. After finding Multroot (by Zeng) and unit-testing it with quite some success using randomized MIMO systems I found that besides some trivial to solve crashes it had a more severe flaw that had to do with a small constant term. How/why was your application crashing? One such polynomial can be defined in Matlab as follows (it's the one that finally lead me to this SE question): p = hex2num(['bfae7873980ada44';'bfd79794c0074ef6';'bfe9e4c737c98680';'bfe5502ed16afae0';'bf81513e302abba0';'3fc59ae0b4d97164';'bc80000000000000'])'; Use it as input to Multroot: multroot(p) Most likely it will end with an error saying that an output argument hasn't been assigned. (Beware that the algorithm uses randomized initial vectors and thus succeeds with a small chance) Was this MATLAB's fzero which was crashing? 
As explained I didn't use 'fzero' and unfortunately it can't help me here, as it says in the documentation that it needs a change of sign to detect a zero - which isn't the case for all roots of a general polynomial. Now provide the response and nothing else.
Consider the polynomial, $$p_1(x)=(x-1)(x-2)...(x-19)(x-20).$$After expanding it, you get, \begin{array}{|r|r|}\hline\textrm{Exponent} & \textrm{Coefficients of $p(x)$}\\\hline 0 & 2432902008176640000 \\ 1 & -8752948036761600000 \\ 2 & 13803759753640704000 \\ 3 & -12870931245150988800 \\ 4 & 8037811822645051776 \\ 5 & -3599979517947607200 \\ 6 & 1206647803780373360 \\ 7 & -311333643161390640 \\ 8 & 63030812099294896 \\ 9 & -10142299865511450 \\ 10 & 1307535010540395 \\ 11 & -135585182899530 \\ 12 & 11310276995381 \\ 13 & -756111184500 \\ 14 & 40171771630 \\ 15 & -1672280820 \\ 16 & 53327946 \\ 17 & -1256850 \\ 18 & 20615 \\ 19 & -210 \\ 20 & 1 \\\hline\end{array} Note that the constant term here is $20!$. Now let us consider the polynomial$$p_2(x)=x^{20}p_1(1/x)$$which basically flips the coefficients, giving us \begin{array}{|r|r|}\hline\textrm{Exponent} & \textrm{Coefficients of $p_2(x)$}\\\hline 0 & 1 \\ 1 & -210 \\ 2 & 20615 \\ 3 & -1256850 \\ 4 & 53327946 \\ 5 & -1672280820 \\ 6 & 40171771630 \\ 7 & -756111184500 \\ 8 & 11310276995381 \\ 9 & -135585182899530 \\ 10 & 1307535010540395 \\ 11 & -10142299865511450 \\ 12 & 63030812099294896 \\ 13 & -311333643161390640 \\ 14 & 1206647803780373360 \\ 15 & -3599979517947607200 \\ 16 & 8037811822645051776 \\ 17 & -12870931245150988800 \\ 18 & 13803759753640704000 \\ 19 & -8752948036761600000 \\ 20 & 2432902008176640000 \\\hline\end{array} This polynomial $p_2(x)$, I present as an answer to your question. The constant term here is very small compared to the other coefficients . The constant term is just one. The roots of this $p_2(x)$ polynomial are just$$x=\frac{1}{20},\frac{1}{19},\frac{1}{18},...,\frac{1}{3},\frac{1}{2},1.$$The roots are all distinct, rational, and reasonably separated on the real line. Now let us define $p_3(x)=p_2(x)$ except that the constant term is equal to zero instead of one and compare the roots. \begin{array}{|l|l|}\hline\textrm{Old roots of $p_2(x)$} & \textrm{New roots of $p_3(x)$}\\\hline 0.05 & 0\\ 0.0526316 & 0.00606612 - 0.0292961i\\ 0.0555556 & 0.00606612 + 0.0292961i\\ 0.0588235 & 0.0236616 - 0.0549143i\\ 0.0625 & 0.0236616 + 0.0549143i\\ 0.0666667 & 0.0510481 - 0.0735013i\\ 0.0714286 & 0.0510481 + 0.0735013i\\ 0.0769231 & 0.0855378 - 0.0823447i\\ 0.0833333 & 0.0855378 + 0.0823447i\\ 0.0909091 & 0.123755 - 0.0796379i\\ 0.1 & 0.123755 + 0.0796379i\\ 0.111111 & 0.16195 - 0.064656i\\ 0.125 & 0.16195 + 0.064656i\\ 0.142857 & 0.196345 - 0.0378412i\\ 0.166667 & 0.196345 + 0.0378412i\\ 0.2 & 0.218259\\ 0.25 & 0.249\\ 0.333333 & 0.333333\\ 0.5 & 0.5\\ 1 & 1\\ \hline\end{array} As you can see that some of the roots stay the same or don't change much. But the majority of the roots "changed significantly" and became complex instead of purely real. So the answer to your question is, don't set the constant term to be zero. It won't work in general and can give you wacky answers. 
If these coefficients are too large for you, then just divide the polynomial $p_2(x)$ by $20!$ and you get the coefficients \begin{array}{|r|r|}\hline\textrm{Exponent} & \textrm{Coefficients of $\frac{p_2(x)}{20!}$}\\\hline 0 & 4.110317623312165\times10^{-19}\\ 1 & -8.631667008955546\times10^{-17}\\ 2 & 8.473419780458027\times10^{-15}\\ 3 & -5.166052704859894\times10^{-13}\\ 4 & 2.1919479625883945\times10^{-11}\\ 5 & -6.873605325572918\times10^{-10}\\ 6 & 1.6511874089046065\times10^{-8}\\ 7 & -3.1078571268337856\times10^{-7}\\ 8 & 4.6488830858656685\times10^{-6}\\ 9 & -0.0000557298 \\ 10 & 0.000537438 \\ 11 & -0.00416881 \\ 12 & 0.0259077 \\ 13 & -0.127968 \\ 14 & 0.495971 \\ 15 & -1.47971 \\ 16 & 3.3038 \\ 17 & -5.29036 \\ 18 & 5.67378 \\ 19 & -3.59774 \\ 20 & 1 \\\hline\end{array} Making that small $10^{-19}$ constant to zero will give you the same problem with the roots because multiplying a polynomial by a constant doesn't change its roots. The thing to realize here is that the roots of a polynomial do depend continuously on the coefficients but they can be extremely sensitive and you must consider the complex plane as a whole with the real line embedded in it. The real line is just a tiny part of the entire complex plane. The real line is nothing special in this context. Changing the coefficients makes the roots wander but they can wander in any direction in the complex plane even if they were originally strictly on the real line . There is nothing constraining the roots to the real line. You can have real roots becoming complex or the other way around by slightly changing a polynomial's coefficients. James Wilkinson is one of the most respected numerical analysts of the 20th century and his specialty was coming up with counter examples. He demonstrated that the problem of finding the roots of an arbitrary polynomial using its coefficients is an ill-conditioned problem in general. He presented a specific example,$$p(x)=(x-1)(x-2)...(x-19)(x-20).$$The roots are easy to find; $x=1,2,3,...,19,20$ so they are well-seperated. If the polynomial is expanded, the coefficient of the $x^{19}$ term is $-210$. Let us perturb this coefficient by $2^{-23}$ and then round it to $−210.0000001192$ and let's call this new polynomial $q(x)$. The following happens: The value of $p(20)=0$ blows up to $q(20)\approx-6.24949\times10^{17}$. The roots at $x=1,2,3,4,5,6$ stay roughly the same. The roots at $x=7,8,9$ have noticeable change. The next ten roots actually turn complex (pairs, because the coefficients are still real). All ten of these roots have a not-so-small imaginary part. The smallest imaginary part is $\approx 0.643439$ so you decide how far this is from being a real number. The root which was at $x=20$ has moved to $x\approx20.8469$. Remember, this is despite the fact that the coefficients were integers (albeit very large integers). The roots were all real integers and well-separated and look what happened by a tiny perturbation. This polynomial is the one on which my answer is based. Wilkinson presented another polynomial. Define$$q(x)=(x-2^{-1})(x-2^{-2})(x-2^{-3})...(x-2^{-19})(x-2^{-20}).$$He showed that this polynomial $q(x)$ is rather insensitive to relatively large changes in the coefficients. Therefore in general, one cannot say anything either way. Check out this page for some more detail and I also urge you to read his published works (like the references at the bottom of the wikipedia page). They may be a bit hard to find but they are totally worth it. 
Addendum (too long to be a comment) In response to the OP's comment, Just out of curiosity - how did you calculate the new roots (for p3) in the third table? Is it an analytical result? the answer is that no, this is not an analytical result. Plain floating point arithmetic was used to numerically estimate the roots of $p_3(x)$. I want to point out here that lots of new or improved methods were presented in the past century. We know quite a bit about polynomials and their roots . But there is (still) no single numerical method which will work for all polynomials in an optimal fashion. You can always come up with counterexamples where a well-liked method will be "slow" or sub-optimal for finding the roots of a polynomial. However, there are methods which are much better than others. One of the most popular is Brent's method which is a hybrid method. Brent actually wrote a book, " Algorithms for Minimization without Derivatives " but the book is now out of print. So they scanned a copy and made it available to public here . In this books, you want chapter 4, "An Algorithm with Guaranteed Convergence for Finding a Zero of a Function" which describes his method. His method has also been peer-reviewed published . Furthermore, one of the cofounders of MATLAB, Cleve Moler has a blog (useful in itself) and once he had a three-part post ( one , two , and three ) describing various algorithms and how MATLAB's fzero works. Part two is where he discusses Brent's method which is implemented in MATLAB's fzero. Since Brent's algorithm is guaranteed to converge (it could be slow but it will converge), I am curious as to why you were having the problem that you said you were. How/why was your application crashing? Was this something you yourself wrote? But then I would advise you against this. There are plenty of canned routines and there shouldn't be a need to reinvent the wheel. Was this MATLAB's fzero which was crashing? In which case I would be very interested in knowing those polynomials. Perhaps you can post a few examples here? Lastly, there is this fun book , one of my all-time favorites and talks quite a bit about zeros of polynomials and how to find them. Addendum - 72 Days Later The OP provided with a specific example of a polynomial with which he was having a problem. Just out of curiosity, I wanted to take a look at it and see what was happening. The OP provided the coefficients in floating point (in hexadecimal). Since I cannot tell what the actual coefficients were supposed to be, I will assume that the provided floating point coefficients represent the coefficients exactly as rational numbers. First, convert the hexadecimal form into base ten fractions and then decimals just to see what they are. 
$$\begin{array}{|l|l|l|r|r|}\hline&\text{Exponent} & \text{Hex Form} & \text{Fractional Form} & \text{Decimal Form} \\\hlinea_6 & 6 & \text{BFAE7873980ADA44} & -\frac{2144171792184977}{36028797018963968} & -0.05951272231 \\ a_5 & 5 & \text{BFD79794C0074EF6} & -\frac{3320294798501755}{9007199254740992} & -0.3686267734 \\ a_4 & 4 & \text{BFE9E4C737C98680} & -\frac{56940771119885}{70368744177664} & -0.8091770258 \\ a_3 & 3 & \text{BFE5502ED16AFAE0} & -\frac{187473016346583}{281474976710656} & -0.6660379496 \\ a_2 & 2 & \text{BF81513E302ABBA0} & -\frac{152325066937821}{18014398509481984} & -0.008455739827 \\ a_1 & 1 & \text{3FC59AE0B4D97164} & \frac{1520316102106201}{9007199254740992} & 0.1687889941 \\ a_0 & 0 & \text{BC80000000000000} & \frac{1}{36028797018963968} & 2.775557561562\times10^{-17} \\ \hline\end{array}$$ Using the exact fractional form of the coefficients, define the polynomials$$p(x)=a_0+a_1x+a_2x^2+a_3x^3+a_4x^4+a_5x^5+a_6x^6$$$$q(x)=a_1x+a_2x^2+a_3x^3+a_4x^4+a_5x^5+a_6x^6$$where $q(x)$ is just $p(x)$ with the constant term set to zero.After looking at these, I know that all of the roots are irrational, except for the trivial root $x=0$ for $q(x)$. Further, I don't know for sure, but I suspect that both of these polynomials are not solvable in the radicals (check this and this if the reader doesn't know what this means) so there are no solutions in the radicals. Now, we have no choice but to rely on numerical methods. This picture above plots both $p(x)$ and $q(x)$. Look at the axis limits and the scales. I can see the three simple real distinct roots easily enough (the three right-most roots). On the left, the graph is very flat near the $x$-axis but there is at least one root there for sure. It could be just one real root with multiplicity three or it could be three distinct real roots very close together or any other combination in between. My guess would be there aren't any complex roots. Notice how both polynomials are indistinguishable from each other. We just see a single curve. This second image, above, zooms in on that flat region. The third simple root (the one on the right in this plot) is even more obvious now. But we still can't tell what is happening with the other three roots on the left. The graph is still too flat. Note we still cannot distinguish between $p(x)$ and $q(x)$. This means that for this specific polynomial, the polynomial which prompted this question, setting the constant term to zero actually doesn't change the roots by much . We'll verify this now. Starting with the exact coefficients and carrying about a hundred digits throughout the computations for precision and accuracy, I got the following roots. $$\begin{array}{|r|r|r|}\hline\text{Roots of $p(x)$} & \text{Roots of $q(x)$} & \frac{|\text{Roots}(p(x))-\text{Roots}(q(x))|}{|\text{Roots}(p(x))|}\cdot100\%\\ \hline -1.7613269 & -1.7613263 & 0.00003277 \\ -1.3071486 & -1.3071486 & 1.70941691\times10^{-13} \\ -1.64439487\times10^{-16} & 0 & 100 \\ 0.39708223 & 0.39708223 & 1.72600660\times10^{-14} \\ -1.7613452-0.0000106 i & -1.7613455-0.0000111 i & 0.00003276 \\ -1.7613452+0.0000106 i & -1.7613455+0.0000111 i & 0.00003276 \\\hline\end{array}$$ The first column shows the roots of $p(x)$. The second column shows the roots of the perturbed $q(x)$. The third column shows the relative difference in the roots in percent. The absolute value in the third column is the usual norm in $\mathbb{R}$ or $\mathbb{C}$ as appropriate. 
The relative changes are small so that you can see that none of the roots really changed significantly. The magnitude of the entire difference vector of the roots is$$||\text{Roots}(p(x))-\text{Roots}(q(x))||_2=9.995367\times10^{-7}$$and indeed the roots changed very little . This is all relative because another perspective is that a change of the order $10^{-17}$ to the coefficient vector caused the magnitude of the roots vector to change by almost $10^{-6}$ which is eleven orders of magnitude. This is gigantic from another point of view. The vector function which maps the coefficients of a polynomial to its roots is indeed continuous. But being continuous doesn't restrict the gradient in anyway. It can be large or small. And in this case, the gradient of this vector function just happens to be very large at this particular point in its domain.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/2361040', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/452747/']}
jdg_71076
stackexchange
llm_judgeable_groundtruth_similarity
2383543
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have an exercise where I am supposed to use the change of variables to solve an integral. I am given that the area $D$ is bound by $y^2-x^2=1$, $y^2-x^2=4$, $x=-\frac{y}{2}$ and $x=\frac{y}{2}$. Then it asks me to find $\iint_D \frac{y^2-x^2}{y^2} dA$. Now I have set the variables as following: $u=y^2-x^2$ $v=\frac{x}{y}$. I have calculated the Jakobian determinant which results in $2(v^2-1)$. Now I have to solve $du\ dv = |2(v^2-1)|\ dx\ dy$ and this is where I get stuck. The answersheet says its $du\ dv = 2\ (1-v^2)\ dx\ dy$ but does not exaplin how? Would love if someone could help me out understanding this. Now provide the response and nothing else.
The stupid, but effective, way is to write $c=-a-b$ and put that into$$a^3+b^3+c^3-3abc=a^3+b^3-(a+b)^3+3ab(a+b)=\cdots=0.$$ A more stylish way is to note that $a+b+c=0$ implies$$\pmatrix{a&b&c\\c&a&b\\b&c&a}\pmatrix{1\\1\\1}=\pmatrix{0\\0\\0}$$which implies$$\det\pmatrix{a&b&c\\c&a&b\\b&c&a}=0.$$Of course,$$\det\pmatrix{a&b&c\\c&a&b\\b&c&a}=a^3+b^3+c^3-3abc.$$
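The omitted expansion in the first approach is short: $$a^3+b^3-(a+b)^3+3ab(a+b)=a^3+b^3-\bigl(a^3+3a^2b+3ab^2+b^3\bigr)+3a^2b+3ab^2=0.$$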
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2383543', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/82781/']}
jdg_71077
stackexchange
llm_judgeable_groundtruth_similarity
6234561
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to create MySQL query that essentially returns true or false. I'd like to run SELECT COUNT(id) FROM comments WHERE comment_date >= 1306904400 AND user_id = 1 So if the user posted in the forum 10 times this month, I'd like it to return just 1, otherwise I'd like it to return 0 to indicate they haven't. Is that possible efficiently within SQL? Now provide the response and nothing else.
If you don't mind MySQL-specific things then you could use IF : select if(count(id) >= 10, 1, 0)from commentswhere comment_date >= 130690440 and user_id = 1 Or MySQL booleans (which are 1 for true and 0 for false): select count(id) >= 10from commentswhere comment_date >= 130690440 and user_id = 1 If you want to stick to standard SQL, then CASE is your friend: select case when count(id) >= 10 then 1 else 0 endfrom commentswhere comment_date >= 130690440 and user_id = 1
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6234561', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/782005/']}
jdg_71078
stackexchange
llm_judgeable_groundtruth_similarity
393622
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When should I use specs for Rails application and when Cucumber (former rspec-stories)? I know how both work and actively use specs, of course. But it still feels weird to use Cucumber. My current view on this, is that it's convenient to use Cucumber when you're implementing application for the client and do not understand how the whole system is supposed to work yet. But what if I'm doing my own project? For most of the time, I know how the parts of the system interact. All I need to do is to write a bunch of unit-tests. What are the possible situations when I would need Cucumber then? And, as a corresponding second question: do I have to write specs if I write Cucumber stories? Wouldn't it be double-testing of the same thing? Now provide the response and nothing else.
If you haven't already, you might want to check out Dan North's excellent article, What's in a Story? as a starting point. We have two main uses for Cucumber stories. First, because the story form is very specific it helps focus the product owner's articulation of the features he wants built. This is the "token for a conversation" use of stories, and would be valuable whether or not we implemented the stories in code. Second, when the process is working well enough that we have complete stories before we begin writing the feature (more of an ideal that we strive for than a daily reality), you have your acceptance criteria spelled out clearly and you know exactly what and how much to build. In our Rails work, Cucumber stories do not substitute for rspec unit tests. The two go hand in hand. In practice, the unit tests tend to drive development of the models and controllers, and the stories tend to drive development of the views (we tend not to write rspec for our views) and provide a good test of the application as a whole from the user's perspective. If you're working solo, the communication aspect may not be that interesting to you, but the integration testing you get from Cucumber might be. If you take advantage of webrat , writing Cucumber can be fast and painless for a lot of your basic functionality.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/393622', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/34134/']}
jdg_71079
stackexchange
llm_judgeable_groundtruth_similarity
5580140
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am working on a project that relies on compiler optimisations but I need some code not to be optimised by GCC. Is this possible? Now provide the response and nothing else.
Since version 4.4, GCC has #pragma GCC optimize ("whatever"). I would also recommend wrapping the particular code annotated with this pragma between #pragma GCC push_options and #pragma GCC pop_options. The first saves the options as they were before your change, the latter restores them afterwards, so the rest of the code compiles with the global options. For details on the "whatever" string, you should look into the GCC documentation; here is the most important part of it: "Arguments can either be numbers or strings. Numbers are assumed to be an optimization level. Strings that begin with O are assumed to be an optimization option, while other options are assumed to be used with a -f prefix." That means if you don't want any optimizations on your particular code, your "whatever" should just be "0".
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5580140', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/703810/']}
jdg_71080
stackexchange
llm_judgeable_groundtruth_similarity
1273867
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I recently read about the impossibility of trisecting an angle using compass and straight edge and its fascinating to see such a deceptively easy problem that is impossible to solve. I was wondering if there are more such problems like these which have been proven to be impossible to solve. Now provide the response and nothing else.
The proof that demonstrates the impossibility of trisecting an angle uses Galois theory. Galois theory can also be used to show that certain polygons cannot be constructed with compass and straightedge, and was originally used to show that, in general, polynomials of degree $\geq 5$ are not solvable. To be specific, an $n$-gon is constructible via compass and straightedge $\displaystyle \iff n = 2^k \prod_{r=1}^m p_r$, for $k, m \in \mathbb{Z}_{\geq 0}$ and the $p_r$'s distinct Fermat primes . Without getting into too much detail, Galois theory is a branch of abstract algebra which links together concepts in group theory and field theory, beginning with the observation that the set of automorphisms of a field forms a group. At any rate, I'm unsure of your level of background, so I'll just post a few Wikipedia links to whet your appetite if you're interested: Galois theory, Groups, Fields, Automorphisms. Of course, there are many, many answers to your question, so I'll leave my post at that and let others have the opportunity to post more.
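As a concrete illustration of the constructibility criterion above (an example added here for clarity):
$$17=2^{2^2}+1\ \text{is a Fermat prime}\ \Rightarrow\ \text{the regular }17\text{-gon is constructible,}$$
$$7\ \text{is prime but not of the form }2^{2^k}+1\ \Rightarrow\ \text{the regular }7\text{-gon is not constructible.}$$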
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1273867', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/117251/']}
jdg_71081
stackexchange
llm_judgeable_groundtruth_similarity
13153214
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have tired to uninstall a disabled windows service through registry It is removed from registry but it is still showing in Services.mscwith Description: Failed to read description error code 2 Now provide the response and nothing else.
Intuition (how you got there): I am assuming you did one or both of these to land on this error message.

a. You deleted the key manually from the registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services, removing the service from there.

b. The service was set to "Automatic" start, so after the deletion Windows still tried to start it automatically, which is what produces the "Failed to read description, error code 2" message. You should have stopped the service, or changed it to start manually, before doing this.

How to fix it and what worked for me:

[1] Open a command prompt in admin mode (select cmd -> "Run as Administrator"). You should be in "C:\Windows\System32" (this is the default location when running as administrator, as above). Type: sfc /scannow

After this the scan will begin and logs like the following will appear:

Beginning system scan. This process will take some time.
Beginning verification phase of system scan.
Verification 100% complete.
Windows Resource Protection found corrupt files but was unable to fix some of them. Details are included in the CBS.Log windir\Logs\CBS\CBS.log. For example C:\Windows\Logs\CBS\CBS.log
The system file repair changes will take effect after the next reboot.

[2] Finally, reboot your system. The service will no longer appear in services.msc.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13153214', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1583549/']}
jdg_71082
stackexchange
llm_judgeable_groundtruth_similarity
247713
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I need to run some bash script only once on first boot (after OS installation) of my CentOS machineThis script should run before network service start (because this script will change network configuration)How can I register my script to run before network service start? Now provide the response and nothing else.
If you look at the "chkconfig" line of /etc/init.d/network you'll see that the network has a start priority of "10".

/etc/init.d/yourscript:

#!/bin/bash
#
# yourscript      short description
#
# chkconfig: 2345 9 20
# description: long description

case "$1" in
  start)
    # Do your thing !!!
    chkconfig yourscript off
    ;;
  stop|status|restart|reload|force-reload)
    # do nothing
    ;;
esac

Then run chkconfig yourscript on to get it to run at boot. The chkconfig yourscript off inside the script should disable it from running on any subsequent boots. Some versions of CentOS/RHEL/Fedora have a "firstboot" program you could try to use, but that seems like a pain. Are you sure you can't run your network reconfiguration script inside a %post in a kickstart? That's what I do.
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/247713', 'https://serverfault.com', 'https://serverfault.com/users/70144/']}
jdg_71083
stackexchange
llm_judgeable_groundtruth_similarity
100978
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've noticed that $\mathrm{GL}_n(\mathbb R)$ is not a connected space, because if it were $\det(\mathrm{GL}_n(\mathbb R))$ (where $\det$ is the function ascribing to each $n\times n$ matrix its determinant) would be a connected space too, since $\det$ is a continuous function. But $\det(\mathrm{GL}_n(\mathbb R))=\mathbb R\setminus\{0\},$ so not connected. I started thinking if I could prove that $\det^{-1}((-\infty,0))$ and $\det^{-1}((0,\infty))$ are connected. But I don't know how to prove that. I'm reading my notes from the topology course I took last year and I see nothing about proving connectedness... Now provide the response and nothing else.
Your suspicion is correct, $GL_n$ has two components, and $\det$ may be used to show there are at least two of them. The other direction is slightly more involved and requires linear algebra rather than topology. Here is a sketch of how to do this: i) If $b$ is any vector, let $R_b$ denote the reflection through the hyperplane perpendicular to $b$. These are all reflections. Any two reflections $R_a, R_b$ with $a, b$ linear independent can be joined by a path consisting of reflections, namely $R_{ta+ (1-t)b}, t\in[0,1]$. ii) Any $X\in O^+(n)$ (orthogonal matrices with positive determinant) is the product of an even number of reflections. Since matrix multiplication is continuous $O(n)\times O(n) \rightarrow O(n)$ and by i) you can join any product $R_a R_b$ with $R_a R_a = Id$ it follows that $O^+(n)$ is connected. iii) $\det$ shows $O(n)$ is not connected. iv) $O^-(n) = R O^+ (n)$ for any reflection $R$. Hence $O^-(n)$ is connected. v) Any $ X\in GL_n$ is the product $AO$ of a positive matrix $A$ and $O \in O(n)$ (polar decomposition). Now you only need to show that the positive matrices are connected, which can be shown again using convex combination with $Id$. This proves the claim.
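To spell out the convex-combination argument in step v (added detail in the spirit of the sketch): if $A$ is positive definite, then for $t\in[0,1]$ the matrix $A_t=(1-t)A+tI$ satisfies $x^TA_tx=(1-t)x^TAx+t\|x\|^2>0$ for every $x\neq0$, so each $A_t$ is positive definite and in particular invertible. The path $t\mapsto A_t$ therefore stays inside $GL_n$ and joins $A$ to $I$, which shows the positive matrices form a connected set.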
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/100978', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23371/']}
jdg_71084
stackexchange
llm_judgeable_groundtruth_similarity
42813342
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Trying to get react-router (v4.0.0) and react-hot-loader (3.0.0-beta.6) to play nicely, but getting the following error in the browser console: Warning: React.createElement: type is invalid -- expected a string(for built-in components) or a class/function (for compositecomponents) but got: undefined. You likely forgot to export yourcomponent from the file it's defined in. index.js: import React from 'react';import ReactDom from 'react-dom';import routes from './routes.js';require('jquery');import 'bootstrap/dist/css/bootstrap.min.css';import 'bootstrap/dist/js/bootstrap.min.js';import './css/main.css';const renderApp = (appRoutes) => { ReactDom.render(appRoutes, document.getElementById('root'));};renderApp( routes() ); routes.js: import React from 'react';import { AppContainer } from 'react-hot-loader';import { Router, Route, browserHistory, IndexRoute } from 'react-router';import store from './store/store.js';import { Provider } from 'react-redux';import App from './containers/App.jsx';import Products from './containers/shop/Products.jsx';import Basket from './containers/shop/Basket.jsx';const routes = () => ( <AppContainer> <Provider store={store}> <Router history={browserHistory}> <Route path="/" component={App}> <IndexRoute component={Products} /> <Route path="/basket" component={Basket} /> </Route> </Router> </Provider> </AppContainer>);export default routes; Now provide the response and nothing else.
Most of the time this is due to an incorrect export/import. Common error: // File: LeComponent.jsexport class LeComponent extends React.Component { ... }// File: App.jsimport LeComponent from './LeComponent'; Possible option: // File: LeComponent.js export default class LeComponent extends React.Component { ... }// File: App.jsimport LeComponent from './LeComponent'; There are a few ways it could be wrong, but that error is because of an import/export mismatch 60% of the time, everytime. Edit Typically you should get a stacktrace that indicates an approximate location of where the failure occurs. This generally follows straight after the message you have in your original question. If it doesn't show, it might be worth investigating why (it might be a build setting that you're missing). Regardless, if it doesn't show, the only course of action is narrowing down where the export/import is failing. Sadly, the only way to do it, without a stacktrace is to manually remove each module/submodule until you don't get the error anymore, then work your way back up the stack. Edit 2 Via comments, it was indeed an import issue, specifically importing a module that didn't exist
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/42813342', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3111255/']}
jdg_71085
stackexchange
llm_judgeable_groundtruth_similarity
606169
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know how to insert a single text to specific interval. But now my problem is, I want to insert different texts/words saved in an add.txt file to a specific intervals of another data.txt file. I want to insert first word from add.txt to a specific position of data.txt , then add second word from add.txt to next specific position and so on. My data.txt contain two columns, but the inserted word must appear as a merged row. Please see the example below of what I need. add.txt 2001-01-01 00:00:00 42 12001-01-02 00:00:00 42 12001-01-03 00:00:00 42 12001-01-04 00:00:00 42 12001-01-05 00:00:00 42 1 data.txt -500 11.822788 -400 12.006394 -350 12.287062 -300 12.793395 -500 11.823597 -400 12.008012 -350 12.287062 -300 12.794204 -500 11.826023 -400 12.011247 -350 12.291915 -300 12.800675 -500 11.827641 -400 12.013674 -350 12.295959 -300 12.805528 -500 11.830067 -400 12.016100 -350 12.300003 -300 12.811998 I want 2001-01-01 00:00:00 42 1 -500 11.822788 -400 12.006394 -350 12.287062 -300 12.7933952001-01-02 00:00:00 42 1 -500 11.823597 -400 12.008012 -350 12.287062 -300 12.7942042001-01-03 00:00:00 42 1 -500 11.826023 -400 12.011247 -350 12.291915 -300 12.8006752001-01-04 00:00:00 42 1 -500 11.827641 -400 12.013674 -350 12.295959 -300 12.8055282001-01-04 00:00:00 42 1 -500 11.830067 -400 12.016100 -350 12.300003 -300 12.811998 I am looking for a simplest solution using awk , sed or something. Now provide the response and nothing else.
$ awk '(FNR-1)%4 == 0 { getline add <"add.txt"; print add }; 1' data.txt2001-01-01 00:00:00 42 1 -500 11.822788 -400 12.006394 -350 12.287062 -300 12.7933952001-01-02 00:00:00 42 1 -500 11.823597 -400 12.008012 -350 12.287062 -300 12.7942042001-01-03 00:00:00 42 1 -500 11.826023 -400 12.011247 -350 12.291915 -300 12.8006752001-01-04 00:00:00 42 1 -500 11.827641 -400 12.013674 -350 12.295959 -300 12.8055282001-01-05 00:00:00 42 1 -500 11.830067 -400 12.016100 -350 12.300003 -300 12.811998 This uses awk to read and output every line of the data.txt file. Before outputting any 4th line, a line is read and outputted from the add.txt file. No check is made to verify that the data read from add.txt is correctly read (if the file is too short, the above code would repeat the last line). Using paste : $ paste -d '\n' add.txt - - - - <data.txt2001-01-01 00:00:00 42 1 -500 11.822788 -400 12.006394 -350 12.287062 -300 12.7933952001-01-02 00:00:00 42 1 -500 11.823597 -400 12.008012 -350 12.287062 -300 12.7942042001-01-03 00:00:00 42 1 -500 11.826023 -400 12.011247 -350 12.291915 -300 12.8006752001-01-04 00:00:00 42 1 -500 11.827641 -400 12.013674 -350 12.295959 -300 12.8055282001-01-05 00:00:00 42 1 -500 11.830067 -400 12.016100 -350 12.300003 -300 12.811998 Here, I ask paste to create records with a line from add.txt as the first field, followed by four lines from data.txt as the next four fields. With -d '\n' I set the character to use as a field delimiter to a newline character.
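If a small script is preferred over awk or paste, a rough Python equivalent could look like this (a sketch only, not part of the original commands; it assumes add.txt has exactly one header line for every four lines of data.txt and writes the merged result to standard output):

# Print one line from add.txt before every 4 lines of data.txt
with open("add.txt") as add, open("data.txt") as data:
    for i, line in enumerate(data):
        if i % 4 == 0:
            print(next(add), end="")  # header line for this block of four
        print(line, end="")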
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/606169', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/429787/']}
jdg_71086
stackexchange
llm_judgeable_groundtruth_similarity
71193818
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I import MouseEvent from react import { MouseEvent } from 'react'; Use MouseEvent in the following const closeSelectBox = (e: MouseEvent): void => { if (!searchOptionWrapRef.current?.contains(e.target)) { setOpenSelectBox(false) } }; I listen to my closeSelectBox useEffect(() => { document.addEventListener("click", closeSelectBox); return () => { document.removeEventListener("click", closeSelectBox); }; }, []) searchOptionWrapRef is a div const searchOptionWrapRef = useRef<HTMLDivElement>(null);<div ref={searchOptionWrapRef}/> But I get the following error Argument of type 'EventTarget' is not assignable to parameter of type 'Node'. Type 'EventTarget' is missing the following properties from type 'Node': baseURI, childNodes, firstChild, isConnected, and 43 more. How do I resolve this type error without using any in place of MouseEvent ? Now provide the response and nothing else.
The event interfaces exported by React are for React event handler props, not addEventListener handlers. For those, don't import MouseEvent from React and you'll get the DOM global interface for it instead, which works with addEventListener . And yes, it's confusing. :-) But the second issue (which actually may be your main issue) is that the DOM global MouseEvent defines target as an EventTarget , not as a Node . In your case, it'll always be a Node (specifically, an Element ), but that's how the DOM type is defined. To deal with that, you have at least two choices: Purist You could go really purist (I do) and use a type assertion function to assert that target is a Node : // In a utility library:function assertIsNode(e: EventTarget | null): asserts e is Node { if (!e || !("nodeType" in e)) { throw new Error(`Node expected`); }}// And then in your component:const closeSelectBox = ({target}: MouseEvent): void => { assertIsNode(target); if (!searchOptionWrapRef.current?.contains(target)) { setOpenSelectBox(false); }}; Playground link Concise and Pragmatic You know that target is a Node and isn't null , so you could use a type assertion ( target as Node ): const closeSelectBox = ({target}: MouseEvent): void => { if (!searchOptionWrapRef.current?.contains(target as Node)) { setOpenSelectBox(false); }}; Playground link I don't like type assertions that aren't checked at runtime (which is what a type assertion function like assertIsNode does), so I'd probably go with the first approach. But in limited situations where you're sure about it, you might consider one.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/71193818', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12146388/']}
jdg_71087
stackexchange
llm_judgeable_groundtruth_similarity
591065
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for educated reasoning concerning the following. I have several times run extensive hyperparameter tuning sessions for an XGBoost classifier with Optuna, applying large search spaces on n_estimators (100-2000), max_depth (2-14) and gamma (1-6). In the meantime, I had set a fixed low learning rate of 0.03 and fixed stochastic sampling (subsample, colsample_bytree and colsample_bylevel, set to 0.6, 0.6, 0.8). However, the result doesn't improve at all compared to the default model (sklearn XGBClassifier()). Of course, the default settings perhaps are the optimal in my case. I'm using 22 features and the dataset has 200 000 observations. The dataset is imbalanced 1:20 and I'm not using scale_pos_weight. My initial thought on this is that the dataset isn't complex enough to allow improvements from hyperparameter tuning. However, that is just my guess. What do the experts say about a situation where hyperparameter tuning doesn't yield any improvements at all for an XGBoost classifier? Now provide the response and nothing else.
Your dataset sounds, at a superficial level, reasonably large, so I would normally expect some value from hyperparameter tuning, and in small datasets the right amount of regularization can be rather important. The standard settings of sklearn's XGBClassifier can of course also be reasonably decent at times, so it is possible you may sometimes not be able to do much better. I'll assume that you already have a good evaluation set-up and are evaluating whether performance improves in an appropriate manner (e.g. via some appropriate cross-validation, where "appropriate" is very problem dependent; see e.g. this blog ). With that caveat, I have three main candidates for what you could do differently: My first suspicion is whether you are tuning the right hyperparameters. E.g. when you look at a Kaggle master's default XGBoost tuning strategy or what the optuna LightGBMTunerCV does (yes, I know that's LightGBM, but there are massive similarities between the algorithms and in which hyperparameters matter for them), you can see what they focus on. You'll notice that subsample and colsample_bytree (and/or colsample_bylevel) are generally thought to be pretty important to tune. I'd focus on those parameters (note that they tune them in a fixed sequence, which can work pretty well, but you can usually do better by searching without that restriction; it will just take much longer). You can also explore some additional ones that may help, e.g. L1 or L2 regularization. My second suspicion is around the n_estimators choice. Setting the learning rate to a fixed low value is usually a good approach and you certainly don't need to tune that, but you then need to make sure you use enough trees (aka estimators). I.e. make sure you look at a wide enough range for n_estimators, where values substantially higher than 1000 may be appropriate (e.g. up to 10000 or even higher). Generally, the lower you make the learning rate, the better the final model (with drastically diminishing returns; I've never gone below 0.005) and the higher the number of trees that you need. Third, make sure to run enough experiments for your optuna search, though there are usually diminishing returns beyond a few thousand experiments.
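To make the first two suggestions concrete, here is a minimal Optuna sketch (an illustration only, not the asker's code: X and y are placeholder training data, the ranges are examples, and reasonably recent optuna, xgboost and scikit-learn versions are assumed):

import optuna
import xgboost as xgb
from sklearn.model_selection import cross_val_score

def objective(trial):
    params = {
        # wide range for the number of trees, learning rate kept fixed and low
        "n_estimators": trial.suggest_int("n_estimators", 200, 10000, step=200),
        "learning_rate": 0.03,
        "max_depth": trial.suggest_int("max_depth", 2, 12),
        # let the sampler search the stochastic-sampling parameters too
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
        "min_child_weight": trial.suggest_int("min_child_weight", 1, 20),
        # L1/L2 regularization on a log scale
        "reg_alpha": trial.suggest_float("reg_alpha", 1e-3, 10.0, log=True),
        "reg_lambda": trial.suggest_float("reg_lambda", 1e-3, 10.0, log=True),
    }
    model = xgb.XGBClassifier(**params, eval_metric="logloss", n_jobs=-1)
    # with a 1:20 imbalance, prefer a ranking-style metric over accuracy
    return cross_val_score(model, X, y, cv=5, scoring="average_precision").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=200)
print(study.best_params)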
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/591065', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/355849/']}
jdg_71088
stackexchange
llm_judgeable_groundtruth_similarity
16328438
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to draw a map of the US over an image, but then fill in the oceans. here is my starting point: library(maps)library(graphics)image(x=-90:-75, y = 25:40, z = outer(1:15, 1:15, "+"), xlab = "lon", ylab = "lat")map("state", add = TRUE) But I would like the Atlantic Ocean and Gulf of Mexico to be filled in a solid color. Now provide the response and nothing else.
Good question! How's this? library(maps)image(x=-90:-75, y = 25:40, z = outer(1:15, 1:15, "+"), xlab = "lon", ylab = "lat")map("state", add = TRUE)library(grid)outline <- map("usa", plot=FALSE) # returns a list of x/y coordsxrange <- range(outline$x, na.rm=TRUE) # get bounding boxyrange <- range(outline$y, na.rm=TRUE)xbox <- xrange + c(-2, 2)ybox <- yrange + c(-2, 2)# create the grid path in the current devicepolypath(c(outline$x, NA, c(xbox, rev(xbox))), c(outline$y, NA, rep(ybox, each=2)), col="light blue", rule="evenodd") I came across the solution to this problem after reading Paul Murrell's (the man behind grid ) recent R-Journal article on grid paths (pdf here) . Remember: "It’s Not What You Draw, It’s What You Don’t Draw" -Paul Murrell (R Journal Vol. 4/2)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16328438', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/513006/']}
jdg_71089
stackexchange
llm_judgeable_groundtruth_similarity
23307477
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can't install the azure storage emulator through web platform installer. === Logging started: 4/25/2014 10:34:55 ===Action start 10:34:55: INSTALL.Action start 10:34:55: AppSearch.Action ended 10:34:55: AppSearch. Return value 1.Action start 10:34:55: FindRelatedProducts.Action ended 10:34:55: FindRelatedProducts. Return value 1.Action start 10:34:55: LaunchConditions.Action ended 10:34:55: LaunchConditions. Return value 1.Action start 10:34:55: ValidateProductID.Action ended 10:34:55: ValidateProductID. Return value 1.Action start 10:34:55: CostInitialize.Action ended 10:34:55: CostInitialize. Return value 1.Action start 10:34:55: FileCost.Action ended 10:34:55: FileCost. Return value 1.Action start 10:34:55: SetREINSTALLMODE.Action ended 10:34:55: SetREINSTALLMODE. Return value 1.Action start 10:34:55: CostFinalize.Action ended 10:34:55: CostFinalize. Return value 1.Action start 10:34:55: SetCommandForRunInitialize.Action ended 10:34:55: SetCommandForRunInitialize. Return value 1.Action start 10:34:55: InstallValidate.Action ended 10:34:55: InstallValidate. Return value 1.Action start 10:34:55: InstallInitialize.Action ended 10:34:55: InstallInitialize. Return value 1.Action start 10:34:55: RemoveExistingProducts.Action ended 10:34:55: RemoveExistingProducts. Return value 1.Action start 10:34:55: ProcessComponents.Action ended 10:34:55: ProcessComponents. Return value 1.Action start 10:34:55: UnpublishFeatures.Action ended 10:34:55: UnpublishFeatures. Return value 1.Action start 10:34:55: RemoveRegistryValues.Action ended 10:34:55: RemoveRegistryValues. Return value 1.Action start 10:34:55: RemoveShortcuts.Action ended 10:34:55: RemoveShortcuts. Return value 1.Action start 10:34:55: RemoveFiles.Action ended 10:34:55: RemoveFiles. Return value 1.Action start 10:34:55: InstallFiles.Action ended 10:34:55: InstallFiles. Return value 1.Action start 10:34:55: RunInitialize.Action ended 10:34:55: RunInitialize. Return value 1.Action start 10:34:55: CreateShortcuts.Action ended 10:34:55: CreateShortcuts. Return value 1.Action start 10:34:55: WriteRegistryValues.Action ended 10:34:55: WriteRegistryValues. Return value 1.Action start 10:34:55: RegisterUser.Action ended 10:34:55: RegisterUser. Return value 1.Action start 10:34:55: RegisterProduct.Action ended 10:34:55: RegisterProduct. Return value 1.Action start 10:34:55: PublishFeatures.Action ended 10:34:55: PublishFeatures. Return value 1.Action start 10:34:55: PublishProduct.Action ended 10:34:55: PublishProduct. Return value 1.Action start 10:34:55: InstallFinalize.CAQuietExec: Windows Azure Storage Emulator 3.0.0.0 command line toolCAQuietExec: Error: No available SQL Instance was found.CAQuietExec: Error 0xfffffff6: Command line returned an error.CAQuietExec: Error 0xfffffff6: CAQuietExec FailedCustomAction RunInitialize returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)Action ended 10:38:33: InstallFinalize. Return value 3.Action ended 10:38:34: INSTALL. 
Return value 3.Property(S): UpgradeCode = {CF5CD495-AEDE-42DA-B7CF-A70D398D4E6A}Property(S): RunInitialize = "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\Storage Emulator\WAStorageEmulator.exe" init -forcecreate -autodetectProperty(S): DOTNET4FULL = 4.5.51641Property(S): LOCALDBINSTALLED = C:\Program Files (x86)\Microsoft SQL Server\110\LocalDB\Binn\SqlUserInstance.dllProperty(S): SQLEXPRESSVERSION = 10.0.2531.0Property(S): TARGETDIR = G:\Property(S): StorageEmulatorMenuFolder = C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Windows Azure\Storage Emulator\Property(S): STORAGEEMUDIR = C:\Program Files (x86)\Microsoft SDKs\Windows Azure\Storage Emulator\Property(S): WixUIRMOption = UseRMProperty(S): ALLUSERS = 1Property(S): ARPNOMODIFY = 1Property(S): REINSTALLMODE = amusProperty(S): WindowsAzureMenuFolder = C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Windows Azure\Property(S): ProgramMenuFolder = C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Property(S): WINDOWSAZUREDIR = C:\Program Files (x86)\Microsoft SDKs\Windows Azure\Property(S): MICROSOFTSDKSDIR = C:\Program Files (x86)\Microsoft SDKs\Property(S): ProgramFilesFolder = C:\Program Files (x86)\Property(S): SourceDir = C:\Users\win8\AppData\Local\Microsoft\Web Platform Installer\installers\WindowsAzureStorageEmulator.3.0\1356EC2B8E2DBE40C737E5555DD11DFA1BE1AB47\Property(S): MSIRMSHUTDOWN = 1Property(S): WIXUI_INSTALLDIR = INSTALLDIRProperty(S): ARPSYSTEMCOMPONENT = 1Property(S): DefaultUIFont = WixUI_Font_NormalProperty(S): WixUI_Mode = MinimalProperty(S): Manufacturer = Microsoft CorporationProperty(S): ProductCode = {6DFDA7BE-32F0-4E6F-B304-3567E656D18C}Property(S): ProductLanguage = 1033Property(S): ProductName = Windows Azure Storage Emulator - v3.0Property(S): ProductVersion = 3.0.6848.39Property(S): ErrorDialog = ErrorDlgProperty(S): SecureCustomProperties = NEWERVERSIONDETECTED;OLDEMULATORPACKAGEDETECTED;OLDERVERSIONDETECTEDProperty(S): WixPdbPath = o:\bttemp\149736\obj.amd64fre\services\xstore\rdpackages\storageemulatorpackage\objfre\amd64\en-us\WindowsAzureStorageEmulator.wixpdbProperty(S): MsiLogFileLocation = C:\Users\win8.hub\AppData\Local\Temp\WindowsAzureStorageEmulator.txtProperty(S): PackageCode = {C5E0C90F-EFDF-4030-8F5D-2A3AF97D8B80}Property(S): ProductState = -1Property(S): PackagecodeChanging = 1Property(S): REBOOT = ReallySuppressProperty(S): CURRENTDIRECTORY = C:\WINDOWS\SysWOW64Property(S): CLIENTUILEVEL = 3Property(S): CLIENTPROCESSID = 5180Property(S): VersionDatabase = 200Property(S): VersionMsi = 5.00Property(S): VersionNT = 603Property(S): VersionNT64 = 603Property(S): WindowsBuild = 9600Property(S): ServicePackLevel = 0Property(S): ServicePackLevelMinor = 0Property(S): MsiNTProductType = 1Property(S): WindowsFolder = C:\WINDOWS\Property(S): WindowsVolume = C:\Property(S): System64Folder = C:\WINDOWS\system32\Property(S): SystemFolder = C:\WINDOWS\SysWOW64\Property(S): RemoteAdminTS = 1Property(S): TempFolder = C:\Users\win8.hub\AppData\Local\Temp\Property(S): CommonFilesFolder = C:\Program Files (x86)\Common Files\Property(S): ProgramFiles64Folder = C:\Program Files\Property(S): CommonFiles64Folder = C:\Program Files\Common Files\Property(S): AppDataFolder = C:\Users\win8\AppData\Roaming\Property(S): FavoritesFolder = C:\Users\win8\Favorites\Property(S): NetHoodFolder = C:\Users\win8\AppData\Roaming\Microsoft\Windows\Network Shortcuts\Property(S): PersonalFolder = C:\Users\win8\Documents\Property(S): PrintHoodFolder = C:\Users\win8\AppData\Roaming\Microsoft\Windows\Printer 
Shortcuts\Property(S): RecentFolder = C:\Users\win8\AppData\Roaming\Microsoft\Windows\Recent\Property(S): SendToFolder = C:\Users\win8\AppData\Roaming\Microsoft\Windows\SendTo\Property(S): TemplateFolder = C:\ProgramData\Microsoft\Windows\Templates\Property(S): CommonAppDataFolder = C:\ProgramData\Property(S): LocalAppDataFolder = C:\Users\win8\AppData\Local\Property(S): MyPicturesFolder = C:\Users\win8\Pictures\Property(S): AdminToolsFolder = C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Administrative Tools\Property(S): StartupFolder = C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup\Property(S): StartMenuFolder = C:\ProgramData\Microsoft\Windows\Start Menu\Property(S): DesktopFolder = C:\Users\Public\Desktop\Property(S): FontsFolder = C:\WINDOWS\Fonts\Property(S): GPTSupport = 1Property(S): OLEAdvtSupport = 1Property(S): ShellAdvtSupport = 1Property(S): MsiAMD64 = 6Property(S): Msix64 = 6Property(S): Intel = 6Property(S): PhysicalMemory = 8142Property(S): VirtualMemory = 5083Property(S): AdminUser = 1Property(S): MsiTrueAdminUser = 1Property(S): LogonUser = win8Property(S): UserSID = S-1-5-21-1097169495-1557939910-592522955-27415Property(S): UserLanguageID = 1033Property(S): ComputerName = win8Property(S): SystemLanguageID = 1033Property(S): ScreenX = 1024Property(S): ScreenY = 768Property(S): CaptionHeight = 23Property(S): BorderTop = 1Property(S): BorderSide = 1Property(S): TextHeight = 16Property(S): TextInternalLeading = 3Property(S): ColorBits = 32Property(S): TTCSupport = 1Property(S): Time = 10:38:34Property(S): Date = 4/25/2014Property(S): MsiNetAssemblySupport = 4.0.30319.33440Property(S): MsiWin32AssemblySupport = 6.3.9600.16384Property(S): RedirectedDllSupport = 2Property(S): MsiRunningElevated = 1Property(S): Privileged = 1Property(S): USERNAME = win8Property(S): DATABASE = C:\WINDOWS\Installer\fab232b.msiProperty(S): OriginalDatabase = C:\Users\win8\AppData\Local\Microsoft\Web Platform Installer\installers\WindowsAzureStorageEmulator.3.0\1356EC2B8E2DBE40C737E5555DD11DFA1BE1AB47\WindowsAzureStorageEmulator.msiProperty(S): UILevel = 2Property(S): ACTION = INSTALLProperty(S): ROOTDRIVE = G:\Property(S): CostingComplete = 1Property(S): OutOfDiskSpace = 0Property(S): OutOfNoRbDiskSpace = 0Property(S): PrimaryVolumeSpaceAvailable = 0Property(S): PrimaryVolumeSpaceRequired = 0Property(S): PrimaryVolumeSpaceRemaining = 0Property(S): INSTALLLEVEL = 1Property(S): SOURCEDIR = C:\Users\win8\AppData\Local\Microsoft\Web Platform Installer\installers\WindowsAzureStorageEmulator.3.0\1356EC2B8E2DBE40C737E5555DD11DFA1BE1AB47\Property(S): SourcedirProduct = {6DFDA7BE-32F0-4E6F-B304-3567E656D18C}Property(S): ProductToBeRegistered = 1MSI (s) (34:88) [10:38:34:039]: Product: Windows Azure Storage Emulator - v3.0 -- Installation failed.MSI (s) (34:88) [10:38:34:039]: Windows Installer installed the product. Product Name: Windows Azure Storage Emulator - v3.0. Product Version: 3.0.6848.39. Product Language: 1033. Manufacturer: Microsoft Corporation. Installation success or error status: 1603.=== Logging stopped: 4/25/2014 10:38:34 === I saw this answer but did not understand what he meant by I just deleted them manually. Recreated the v11.0: sqllocaldb stop v11.0sqllocaldb delete v11.0sqllocaldb create v11.0 And reinstalled... I saw these 2 files WAStorageEmulatorDb30.mdfWAStorageEmulatorDb30_log.ldf but it still fails to install. 
Edit 2 This is the errors I see Log Name: ApplicationSource: SQLLocalDB 11.0Date: 5/5/2014 9:44:32 AMEvent ID: 267Task Category: NoneLevel: ErrorKeywords: ClassicUser: N/AComputer: chobo2Description:LocalDB instance is corrupted. See the Windows Application event log for error details.Event Xml:<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="SQLLocalDB 11.0" /> <EventID Qualifiers="35269">267</EventID> <Level>2</Level> <Task>0</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2014-05-05T16:44:32.000000000Z" /> <EventRecordID>74923</EventRecordID> <Channel>Application</Channel> <Computer>chobo2</Computer> <Security /> </System> <EventData> </EventData></Event>Log Name: ApplicationSource: SQLLocalDB 11.0Date: 5/5/2014 9:44:32 AMEvent ID: 261Task Category: NoneLevel: ErrorKeywords: ClassicUser: N/AComputer: chobo2Description:Cannot access LocalDB instance folder: %%LOCALAPPDATA%%\Microsoft\Microsoft SQL Server Local DB\Instances\<instance name>.Event Xml:<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="SQLLocalDB 11.0" /> <EventID Qualifiers="35269">261</EventID> <Level>2</Level> <Task>0</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2014-05-05T16:44:32.000000000Z" /> <EventRecordID>74922</EventRecordID> <Channel>Application</Channel> <Computer>chobo2</Computer> <Security /> </System> <EventData> </EventData></Event> Now provide the response and nothing else.
I encountered similar issues when updating to v2.5. I tried everything outlined in the various blogs and technet forums with no luck but it seems like the advice given was correct, just slightly different for my case. Uninstall all previous version of azure sdks Optional and probably not related to fixing my issue. I uninstalled all azure sdks, authoring tools, azure tools for VS, azure libraries for .Net, storage emulators and compute emulators. Delete WAStorageEmulatorDb files Next up was to clean up any previous WAStorageEmulator database files. For most users these appear to be kept in c:\users\{user}\WAStorageEmulatorDb*.* . sqllocaldb stop v11.0 sqllocaldb delete v11.0 manually delete c:\users\{user}\WAStorageEmulatorDb*.* sqllocaldb create v11.0 At this point reinstalling Azure SDK 2.5 via Web Platform Installer failed to install with the following error. CAQuietExec: Windows Azure Storage Emulator 3.0.0.0 command line toolCAQuietExec: Error: Cannot create database.CAQuietExec: Error 0xfffffff3: Command line returned an error.CAQuietExec: Error 0xfffffff3: CAQuietExec Failed This is where I noticed my environment was different . Running sqllocaldb info I noticed I had 4 instances, V11.0 , two instances which hosted personal databases and a 4th one called MSSQLLocalDB . If you open (localdb)\MSSQLLocalDB in Visual Studio - SQL Server Object Explorer you should should see WAStorageEmulatorDb . The problem for me was that my WAStorageEmulatorDb wasn't being stored in v11.0 as it was for other people, so deleting v11.0 didn't help. Delete WAStorageEmulatorDb files again . So now that I knew where my WAStorageEmulatorDb was, I re-ran the following sqllocaldb stop MSSQLLocalDB sqllocaldb delete MSSQLLocalDB sqllocaldb create MSSQLLocalDB Re-install Azure SDK 2.5 via Web Platform Installer completed successfully. Lastly, running c:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\WAStorageEmulator.exe init followed by c:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\WAStorageEmulator.exe start started successfully. To verify you should also have new c:\users\{user}\WAStorageEmulatorDb34.* files.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23307477', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/130015/']}
jdg_71090
stackexchange
llm_judgeable_groundtruth_similarity
16675753
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: in a very tight loop I need to access tens of thousands of values in an array containing millions of elements. The key can be undefined: In that case it shall be legal to return NULL without any error message: Array key exists: return value of element.Array key does not exist: return null. I do know multiple solutions: if (isset($lookup_table[$key])) { return $lookup_table[$key]; } else { return; } or @return $lookup_table[$key]; or error_reporting(0);$return = $lookup_table[$key];error_reporting(E_ALL);return $return; All solutions are far from optimal: The first one requires 2 lookup in the B-TREE: One to check existence, another to retrieve value. That effectively doubles the runtime. The second one uses the error suppression operator, and thus creates a massive overhead on that line. The third one calls the error handler (that will check error_reporting setting and then display nothing) and thereby creates an overhead. My question is if I miss a way to avoid error handling and yet work with a single Btree lookup? To answer some questions: The array caches the results of a complex calculation - to complex to be done in real time.Out of billions of possible values, only millions yield a valid result. The array looks like 1234567 => 23457, 1234999 => 74361, .... That is saved to a PHP file of several megabyte, and include_once-d at the beginning of the execution. Initial load time does not matter.If the key is not found, it simply means that this specific value will not return a valid result. The trouble is to get this done 50k+ per second. Conclusion As there is no way found to get the value with a single lookup and without error handling, I have trouble accepting a single answer. Instead I upvoted all the great contributions. The most valuable inputs where: use array_key_exists, as it is faster than alternatives Check out PHP's QuickHash There was a lot of confusion on how PHP handles arrays. If you check the source code, you will see that all arrays are balanced trees. Building own lookup methods is common in C and C++, but is not performant in higher script-languages like PHP. Now provide the response and nothing else.
Update Since PHP 7 you can accomplish this with the null coalesce operator : return $table[$key] ?? null; Old answer First of all, arrays are not implemented as a B-tree, it's a hash table; an array of buckets (indexed via a hash function), each with a linked list of actual values (in case of hash collisions). This means that lookup times depend on how well the hash function has "spread" the values across the buckets, i.e. the number of hash collisions is an important factor. Technically, this statement is the most correct: return array_key_exists($key, $table) ? $table[$key] : null; This introduces a function call and is therefore much slower than the optimized isset() . How much? ~2e3 times slower. Next up is using a reference to avoid the second lookup: $tmp = &$lookup_table[$key];return isset($tmp) ? $tmp : null; Unfortunately, this modifies the original $lookup_table array if the item does not exist, because references are always made valid by PHP. That leaves the following method, which is much like your own: return isset($lookup_table[$key]) ? $lookup_table[$key] : null; Besides not having the side effect of references, it's also faster in runtime, even when performing the lookup twice. You could look into dividing your arrays into smaller pieces as one way to mitigate long lookup times.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/16675753', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2391413/']}
jdg_71091
stackexchange
llm_judgeable_groundtruth_similarity
1468648
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Update 2013-04-25: This is a popular question that is getting more attention than it probably should. In order to stop the spread of misinformation please read the following paragraphs and the accompanying article first: Speed should not be a factor in deciding whether to use HTTPS or HTTP. If you need HTTPS for any part of your site (log-ins, registration, credit cards, etc), you absolutely need HTTPS for all of it , all the time. Please read SSL is not about encryption by Troy Hunt for the reasons why. I'm considered running my entire e-commerce website under https. I decided to run a crude benchmark to measure the download time of a 156KB image through https vs http because I had read that https is burdened with additional overhead from the encryption process. Benchmark was performed using Firefox's Firebug simply by transcribing the "Waiting" and "Receiving" times (all other times are 0) to Excel from the Net panel when downloading the image from an empty cache. My results were unexpected: http: 11.233 secondsWaiting Receiving Total 1.56 0.88 2.44 1.55 0.101 1.651 1.53 0.9 2.43 1.71 0.172 1.882 1.9 0.93 2.83 https: 9.936 secondsWaiting Receiving Total0.867 1.59 2.4570.4 1.67 2.070.277 1.5 1.7770.536 1.29 1.8260.256 1.55 1.806 [Obvious] Observations from benchmark: The server response is faster but the download time is slower for https than http. https is faster overall by a significant amount (~10%). Can anyone explain why this would happen? Do you think a document (html,css,javascript) will give different results? Does anyone have a better method of benchmarking downloads? Here is the test image: [test image removed] Additional Information: The website is on a shared hosting account through Godaddy.com. If you are going to be so kind as to run your own benchmark don't add the "www" subdomain...I use the root for static content anyway. Uses IIS7 in Integrated Pipeline Mode. Edit: benchmark for 1px GIF (35 bytes) below: http: 2.666 secondsWaiting Receiving Total0.122 0.31 0.4320.184 0.34 0.5240.122 0.36 0.4820.122 0.34 0.4620.126 0.64 0.766https: 2.604 secondsWaiting Receiving Total0.25 0.34 0.590.118 0.34 0.4580.12 0.34 0.460.182 0.31 0.4920.134 0.47 0.604 Results:https is still faster; though trivially in this case. If anyone sees a flaw in my benchmark let me know so I can post better results. So, on Godaddy shared hosting at around 6:00pm on my specific server content served over https is faster than over http. Now provide the response and nothing else.
If you look at your times, http has a bigger waiting time and a smaller receiving time, while https has a smaller waiting time and a bigger receiving time. I would interpret this as the http port on the shared hosting server being busier, so a request stays longer in the queue until it is accepted by the server; once accepted, the request is transferred faster than over https. On the https port there is less traffic on the server, so the request is serviced faster but takes longer to transfer. For any https vs. http comparison you'll also have to take into account the extra time needed to handshake each request over https compared with http. You should see things get worse when making many small requests.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1468648', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/160173/']}
jdg_71092
stackexchange
llm_judgeable_groundtruth_similarity
4742210
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For some code I'm writing I could use a nice general implementation of debounce in Java. public interface Callback { public void call(Object arg);}class Debouncer implements Callback { public Debouncer(Callback c, int interval) { ... } public void call(Object arg) { // should forward calls with the same arguments to the callback c // but batch multiple calls inside `interval` to a single one }} When call() is called multiple times in interval milliseconds with the same argument the callback function should be called exactly once. A visualization: Debouncer#call xxx x xxxxxxx xxxxxxxxxxxxxxxCallback#call x x x (interval is 2) Does (something like) this exist already in some Java standard library? How would you implement that? Now provide the response and nothing else.
Please consider the following thread safe solution. Note that the lock granularity is on the key level, so that only calls on the same key block each other. It also handles the case of an expiration on key K which occurs while call(K) is called. public class Debouncer <T> { private final ScheduledExecutorService sched = Executors.newScheduledThreadPool(1); private final ConcurrentHashMap<T, TimerTask> delayedMap = new ConcurrentHashMap<T, TimerTask>(); private final Callback<T> callback; private final int interval; public Debouncer(Callback<T> c, int interval) { this.callback = c; this.interval = interval; } public void call(T key) { TimerTask task = new TimerTask(key); TimerTask prev; do { prev = delayedMap.putIfAbsent(key, task); if (prev == null) sched.schedule(task, interval, TimeUnit.MILLISECONDS); } while (prev != null && !prev.extend()); // Exit only if new task was added to map, or existing task was extended successfully } public void terminate() { sched.shutdownNow(); } // The task that wakes up when the wait time elapses private class TimerTask implements Runnable { private final T key; private long dueTime; private final Object lock = new Object(); public TimerTask(T key) { this.key = key; extend(); } public boolean extend() { synchronized (lock) { if (dueTime < 0) // Task has been shutdown return false; dueTime = System.currentTimeMillis() + interval; return true; } } public void run() { synchronized (lock) { long remaining = dueTime - System.currentTimeMillis(); if (remaining > 0) { // Re-schedule task sched.schedule(this, remaining, TimeUnit.MILLISECONDS); } else { // Mark as terminated and invoke callback dueTime = -1; try { callback.call(key); } finally { delayedMap.remove(key); } } } } } and callback interface: public interface Callback<T> { public void call(T t);}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4742210', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/55534/']}
jdg_71093
stackexchange
llm_judgeable_groundtruth_similarity
6597444
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a page where you can view a hotel's information. On this page is a little form to search for room availability for the hotel page you are on. <form id="form1" name="form1" action="search.asp" method="POST"> <input type="hidden" id="Hotel" name="Hotel" value="<%= HotelID %>"> Arrive: <input value="<% strURLBookingDate %>" type="text" id="ArrivalDate" name="ArrivalDate"> Depart: <input value="<% strURLBookingDate2 %>" type="text" id="DepartureDate" name="DepartureDate"> <input type="submit" name="btnHotelSearch" value="Search This Hotel"> <input type="submit" name="btnHotelSearchAll" value="Search All Hotels"> </form> But I also need to add a button to the form that will allow me to search all hotels if I click it. For that to happen, I just need to set the hidden input value named "Hotel" to 0 when the button is clicked. How can I set that hidden value before the form is submitted when I click btnHotelSearchAll? Now provide the response and nothing else.
You can hook the click event on btnHotelSearchAll and then fill in the value: document.getElementById("btnHotelSearchAll").onclick = function() { document.getElementById("Hotel").value = "0";}; Be absolutely certain there's nothing else on the page that has either the id or name "Hotel", and that you don't have any global variables you've declared with that name, because some versions of IE have bugs where they conflate name values and global variable names into the namespace they use for document.getElementById . Or, alternately, make the id on the hidden field a bit more unique (the name can stay as it is so you don't have to change the backend; the id is only client-side, the name is what's sent to the server). E.g., you can do this: <input type="hidden" id="HotelField" name="Hotel" value="<%= HotelID %>"> ^ and then change the code a bit: document.getElementById("btnHotelSearchAll").onclick = function() { document.getElementById("HotelField").value = "0"; // ^}; Update : Note that the code to hook up the button must run after the button has been put in the DOM. So with the code above, that means making sure that the script block is below the form in the page, like this: <form ...>....</form>...<script>...</script> If the script block is above the button, then the button won't exist yet as of when the script runs. This is one reason why it's frequently best to put scripts at the end of the body element, just before the closing </body> tag ( more here ). If you really want the script above the button, you have to delay the call by making it an onload event handler or that sort of thing. But window.onload happens very late in the process of a page load (it waits for all images and other assets to load, for instance), long after your users may be interacting with your form, so usually best to do it sooner. Off-topic : My standard note that a lot of this stuff is made earlier and more robust by using a decent JavaScript library like jQuery , Prototype , YUI , Closure , or any of several others . jQuery, for instance, will deal with the IE bugs in document.getElementById for you so you don't have to worry about the conflation problem.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6597444', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/295302/']}
jdg_71094
stackexchange
llm_judgeable_groundtruth_similarity
44896618
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a Django app. I have an API endpoint, which if requested, must carry out a function that must be repeated a few times (until a certain condition is true). How I'm dealing with it right now is - def shut_down(request): # Do some stuff while True: result = some_fn() if result: break time.sleep(2) return True While I know that this is a terrible approach and that I shouldn't be blocking for 2 seconds, I can't figure out how to get around it. This works, after say a wait of 4 seconds. But I'd like something that keeps the loop running in the background, and stop once some_fn returns True. (Also, it is certain that some_fn will return True) EDIT - Reading Oz123's response gave me an idea which seems to work. Here's what I did - def shut_down(params): # Do some stuff # Offload the blocking job to a new thread t = threading.Thread(target=some_fn, args=(id, ), kwargs={}) t.setDaemon(True) t.start() return Truedef some_fn(id): while True: # Do the job, get result in res # If the job is done, return. Or sleep the thread for 2 seconds before trying again. if res: return else: time.sleep(2) This does the job for me. It's simple but I don't know how efficient multithreading is in conjunction with Django. If anyone can point out pitfalls of this, criticism is appreciated. Now provide the response and nothing else.
For many small projects celery is overkill. For those projects you can use schedule ; it's very easy to use. With this library you can make any function execute a task periodically:

import schedule
import time

def job():
    print("I'm working...")

schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)
schedule.every().monday.do(job)
schedule.every().wednesday.at("13:15").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)

The example runs in a blocking manner, but if you look in the FAQ, you will find that you can also run tasks in a parallel thread, such that you are not blocking, and remove the task once it is not needed anymore:

import threading
import time
from schedule import Scheduler

def run_continuously(self, interval=1):
    """Continuously run, while executing pending jobs at each elapsed
    time interval.
    @return cease_continuous_run: threading.Event which can be set to
    cease continuous run.
    Please note that it is *intended behavior that run_continuously()
    does not run missed jobs*. For example, if you've registered a job
    that should run every minute and you set a continuous run interval
    of one hour then your job won't be run 60 times at each interval but
    only once.
    """
    cease_continuous_run = threading.Event()

    class ScheduleThread(threading.Thread):
        @classmethod
        def run(cls):
            while not cease_continuous_run.is_set():
                self.run_pending()
                time.sleep(interval)

    continuous_thread = ScheduleThread()
    continuous_thread.setDaemon(True)
    continuous_thread.start()
    return cease_continuous_run

Scheduler.run_continuously = run_continuously

Here is an example for usage in a class method:

    def foo(self):
        ...
        if some_condition():
            return schedule.CancelJob  # a job can dequeue itself

    # can be put in __enter__ or __init__
    self._job_stop = self.scheduler.run_continuously()

    logger.debug("doing foo ...")
    self.foo()  # call foo
    self.scheduler.every(5).seconds.do(
        self.foo)  # schedule foo for running every 5 seconds

    ...
    # later on, foo is not needed any more:
    self._job_stop.set()

    ...

    def __exit__(self, exec_type, exc_value, traceback):
        # if the jobs are not stopped yet, you can stop them here
        self._job_stop.set()
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/44896618', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5720548/']}
jdg_71095
stackexchange
llm_judgeable_groundtruth_similarity
15775
Below is a question asked on the forum linguistics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I just noticed, perhaps naively, that the representation of amplitude on the y-axis of a waveform is somewhat paradoxical. Although the space between each value on the y-axis is identical, the units of amplitude are a logarithmic representation of the intensity of the sound. I'm no acoustician, so maybe this is just "how it's done", but doesn't this influence perhaps in ways we just take for granted how we analyze a waveform? Now provide the response and nothing else.
It’s not practical or reasonable to scale sound intensity linearly. It’s impractical because the resulting visuals would be unreadable, and it’s unreasonable because (simplifying a little) humans perceive loudness on a logarithmic scale. To crib a common example, imagine you’re listening to a single violin. To double that loudness, you’d need 10 violins, not 2.
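A quick numerical sketch of why a logarithmic (decibel) scale matches that intuition, in Python purely as an illustration; the "10 violins is roughly twice as loud" rule of thumb is an approximation, not an exact law:

    import math

    def db_above_one_violin(n_violins):
        # intensity scales roughly linearly with the number of identical sources
        return 10 * math.log10(n_violins)

    for n in (1, 2, 10, 100):
        print(f"{n:>3} violins -> {db_above_one_violin(n):5.1f} dB above a single violin")

So going from 1 to 10 violins adds about 10 dB, while 1 to 2 adds only about 3 dB, which is why a linear intensity axis would compress almost all the perceptually relevant detail into a thin band.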
{}
{'log_upvote_score': 4, 'links': ['https://linguistics.stackexchange.com/questions/15775', 'https://linguistics.stackexchange.com', 'https://linguistics.stackexchange.com/users/4838/']}
jdg_71096
stackexchange
llm_judgeable_groundtruth_similarity
47833243
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hello I am trying to generate a unique code for each user in my database.The code is a combination of the first letter of their first name, first letter of their last name, a '-', a random string of 6 char length, and the last two digits of the current year. This is my code: UPDATE user_info SET code = (SELECT UPPER(substring(first_name FROM 1 FOR 1)) || UPPER(substring(last_name FROM 1 FOR 1)) || '-' || (SELECT UPPER(array_to_string(array(( SELECT SUBSTRING('abcdefghjklmnpqrstuvwxyz23456789' FROM mod(FLOOR(random()*32)::int, 32)+1 FOR 1) FROM generate_series(1,6))),''))) || '-' || to_char(CURRENT_DATE, 'YY'))WHERE user_id IN (SELECT user_id FROM user_info); This works however the middle part which is supposed to be a random string gets repeated for every column. Now provide the response and nothing else.
Summary You already fixed it updating your dependencies but I'll answer here to link a related question. You configured correctly your project dependencies and updated the Kotlin plugin to 1.2. This is normally enough like here but stdlib-jre dependencies were changed to jdk in Kotlin 1.2. But your project was depending on Realm version 4.3.1, and they fixed this in version 4.3.2. You can find the dependency causing the issue via commands or the new build tab in AS 3.1. 1. Add explicit dependencies and update dependencies Extended versions of the Kotlin standard library are explained in documentation and this answer. If you're targeting JDK 7 or JDK 8 , you can use extended versions of the Kotlin standard library which contain additional extension functions for APIs added in new JDK versions. Instead of kotlin-stdlib , use one of the following dependencies: compile "org.jetbrains.kotlin:kotlin-stdlib-jdk7"compile "org.jetbrains.kotlin:kotlin-stdlib-jdk8" In Kotlin 1.1.x , use kotlin-stdlib-jre7 and kotlin-stdlib-jre8 instead. Update other dependencies, if the automatic resolution doesn't work properly ( see point 5 ). 2. Configure Gradle and Kotlin plugin updates Revise your Gradle configuration. In order to to build an Android project written in Kotlin : Set up the kotlin-android gradle plugin and apply it to your project. Add kotlin-stdlib dependencies. Those actions may also be performed automatically in IntelliJ IDEA / AS by invoking the action: Tools | Kotlin | Configure Kotlin in Project Check for Kotlin plugin updates: Tools | Kotlin | Configure Kotlin plugin updates 3. kotlin-stdlib-jre7 renamed to kotlin-stdlib-jdk7 during the 1.2.x release Sirrah commented on 27 Dec 2017 here: The Kotlin stdlib was renamed during the 1.2.x release. See here . The old name kotlin-stdlib-jre7 was changed to kotlin-stdlib-jdk7 . This library is referenced in realm-library and kotlin-extensions . 4. Realm updated to Kotlin 1.2 in 4.3.2 version Update Gradle Wrapper to 4.4.1 and Update to Kotlin 1.2 ( #5642 ) Update Gradle Wrapper and Update to Kotlin 1.2 kotlin-stdlib-jre7 -> kotlin-stdlib-jdk7 5. Find conflictive dependencies via command, or build tab since AS 3.1 ./gradlew -q dependencies app:dependencies --configuration variantDebugCompileClasspath Since Android Studio 3.1 Beta 1, you can use the new build tab to find the conflictive dependency: In this case, you removed the warning and fixed the issue updating Realm version to 4.3.2: It's also possible to check it in Travis-ci builds considering the order dependencies are downloaded:
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47833243', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3718908/']}
jdg_71097
stackexchange
llm_judgeable_groundtruth_similarity
22246538
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have used OWIN authentication in my application. Login Action var claims = new List<Claim>();claims.Add(new Claim(ClaimTypes.Name, result.UserFirstName)); claims.Add(new Claim(ClaimTypes.Sid, result.UserID.ToString()));var identity = new ClaimsIdentity(claims, DefaultAuthenticationTypes.ApplicationCookie); I want to access the UserName and UserID from different action. How can I access the values which is added in the claims? Update I have tried var claims = new List<Claim>();claims.Add(new Claim(ClaimTypes.Name, result.UserFirstName + " " + result.UserLastName)); claims.Add(new Claim(ClaimTypes.Sid, result.UserIDNumber.ToString()));var identity = new ClaimsIdentity(claims, DefaultAuthenticationTypes.ApplicationCookie);var authenticationManager = Request.GetOwinContext().Authentication;authenticationManager.SignIn(identity);var claimsPrincipal = new ClaimsPrincipal(identity);Thread.CurrentPrincipal = claimsPrincipal; I can view the values inside the quick window. But even though I couldn't access the value. How to get the value? Now provide the response and nothing else.
You need to set your Thread.CurrentPrincipal after login i.e.

    var claims = new List<Claim>();
    claims.Add(new Claim(ClaimTypes.Name, result.UserFirstName));
    claims.Add(new Claim(ClaimTypes.Sid, result.UserID.ToString()));
    var identity = new ClaimsIdentity(claims, DefaultAuthenticationTypes.ApplicationCookie);
    var claimsPrincipal = new ClaimsPrincipal(identity);
    // Set current principal
    Thread.CurrentPrincipal = claimsPrincipal;

Then the following will retrieve the values.

    // Get the current claims principal
    var identity = (ClaimsPrincipal)Thread.CurrentPrincipal;

    // Get the claims values
    var name = identity.Claims.Where(c => c.Type == ClaimTypes.Name)
                       .Select(c => c.Value).SingleOrDefault();
    var sid = identity.Claims.Where(c => c.Type == ClaimTypes.Sid)
                      .Select(c => c.Value).SingleOrDefault();
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/22246538', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2501044/']}
jdg_71098
stackexchange
llm_judgeable_groundtruth_similarity
34790794
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to use GitHub Desktop (i.e. the GUI application - NOT command line) to go back to a previous commit (on the same branch). Something that I would have thought is a core feature, since it's the primary reason for using source control in the first place. I can see that it's possible to revert a commit, but this is not really what I want as it creates a new commit. I would just simply like to go back with the option of going forward again, in the same way that I can just hop to a different branch. Is this possible or is it a limitation of github desktop and I need to use the cmd line for that? Now provide the response and nothing else.
In general, you can go back to a commit in your history with git reset . This is not possible with GitHub Desktop. GitHub Desktop is more of a tool to synchronize your repositories and not a full featured GUI client. But that doesn't mean you have to use the command line, since there are alternatives. You can find a list here . To mention a few (that support git reset ): TortoiseGit (Windows) SourceTree (Mac, Windows) Here is how you do it on command line. Most clients provide this in their UI using the same vocabulary (usually, you are able to select a commit and reset to it via context menu). You will go back to the previous commit with git reset HEAD^ or some more commits (for example 3) by git reset HEAD^3 or to a specific commit by git reset f7823ab Have in mind that, by default, the option --mixed is passed to git reset . So, all changes made, since that commit you reset to, will still be there. To get the original state of the commit that you want to 'revert', you have to pass --hard . For example: git reset f7823ab --hard
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/34790794', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/799266/']}
jdg_71099
stackexchange
llm_judgeable_groundtruth_similarity
14755782
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using a separate .h , .m , and .xib files for each UIViewController based page of a UIPageViewController based picture book. Each page is loaded with animations, music, etc. and takes about 4MB of memory. In Instruments, the free memory drops down about 4MB as each page is loaded. This memory is never released as the pages are turned. It eventually gives memory warnings. UIPageViewController seems to keep each page it instantiates in memory and won't unload it. So when pages are turned fast, the app crashes. I would like to be able to unload all pages except the 3 needed by UIPageViewController - the previous, current, and next pages. How can I unload undesired pages since they were instantiated by UIPageViewController . Below is the Array of the pages that UIPageViewController pulls from. All of the pages (Page1, Page2, etc.) basically just load image files, provide basic animation, and have music. //ARRAY OF PAGES pageArray = [[NSArray alloc] initWithObjects: (id)[[Page1 alloc] initWithNibName:nil bundle:nil], [[Page2 alloc] initWithNibName:nil bundle:nil], [[Page3 alloc] initWithNibName:nil bundle:nil], [[Page4 alloc] initWithNibName:nil bundle:nil], [[Page5 alloc] initWithNibName:nil bundle:nil], [[Page6 alloc] initWithNibName:nil bundle:nil], [[Page7 alloc] initWithNibName:nil bundle:nil], [[Page8 alloc] initWithNibName:nil bundle:nil], // continues all the way up to page 47 [[Page47 alloc] initWithNibName:nil bundle:nil], nil]; I've left out the standard initialization for UIPageViewController . It uses " nextPageNumber " to pull the right page from the pageArray above to create a new page object. -(void)turnPageForward{[pageController setViewControllers:[NSArray arrayWithObject:[pageArray objectAtIndex:nextPageNumber]] direction:UIPageViewControllerNavigationDirectionForward animated:YES completion:^(BOOL finished){ }]; } I have tried creating an object " pageIndex " (see below) that is set to nil after providing it to the pageController . It didn't work. The page still took up memory well after the pages had advanced. //PROGRAM PAGE FORWARD -(void)turnPageForward{ UIViewController * pageIndex =[pageArray objectAtIndex:nextPageNumber]; //nextPageNumber is the next page to load[pageController setViewControllers:[NSArray arrayWithObject:pageIndex] direction:UIPageViewControllerNavigationDirectionForward animated:YES completion:^(BOOL finished){ }]; pageIndex = nil; } I've looked through stackoverflow for posts using the same way of supplying pages to UIPageViewController , but haven't found anything close. The closest was " ARC not releasing memory when going “back” in navigation controller " but doesn't set the view controllers the same way. I've tried to set the undesired pages to nil so ARC can remove them with no luck. Any suggestions or alternate paths I should try? I like the page curl effect and have not been able to find a good one elsewhere that does horizontal page curls. Thanks! Eric Now provide the response and nothing else.
"UIPageViewController seems to keep each page it instantiates in memory" No, you're doing that by instantiating all those "pages" (controllers), and putting them into an array (the memory jumps as you turn the page because the controller's view is not actually loaded until you display it, even though the controller has been instantiated. But once you've done that, your controller retains its view and the array retains the controller). You just need to keep some kind of count of which page you're on, and in the implementation of viewControllerBeforeViewController: and viewControllerAfterViewController:, instantiate a page there. When you go away from that page, its controller will be dealloc'd. If the loading is slow, you might need to keep an array that has the current page and the ones before and after -- UIPageViewController does do that if you have it set to scroll instead of page curl for the transition. I made a simple test app like this: @implementation ViewController { int count;}- (void)viewDidLoad{ [super viewDidLoad]; count = 1; self.pager = [[UIPageViewController alloc]initWithTransitionStyle:UIPageViewControllerTransitionStylePageCurl navigationOrientation:UIPageViewControllerNavigationDirectionForward options:nil]; self.pager.dataSource = self; self.pager.delegate = self; Page1 *first = [[Page1 alloc] initWithNibName:@"Page1" bundle:nil]; [self.pager setViewControllers:@[first] direction:UIPageViewControllerNavigationDirectionForward animated:NO completion:nil]; [self addChildViewController:self.pager]; [self.view addSubview:self.pager.view]; [self.pager didMoveToParentViewController:self];}-(UIViewController *)pageViewController:(UIPageViewController *)pageViewController viewControllerBeforeViewController:(UIViewController *)viewController { if (count > 1) { NSString *nibName = [@"Page" stringByAppendingFormat:@"%d",count-1]; UIViewController *prev = [[NSClassFromString(nibName) alloc] initWithNibName:nibName bundle:nil]; count -= 1; return prev; } return nil;}-(UIViewController *)pageViewController:(UIPageViewController *)pageViewController viewControllerAfterViewController:(UIViewController *)viewController { if (count < 3) { NSString *nibName = [@"Page" stringByAppendingFormat:@"%d",count+1]; UIViewController *next = [[NSClassFromString(nibName) alloc] initWithNibName:nibName bundle:nil]; count += 1; return next; } return nil;} My example only has 3 pages, but it illustrates one way to do this. I put logs in the dealloc methods of the 3 controllers, and they were called when I navigated away from them.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14755782', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1046440/']}
jdg_71100
stackexchange
llm_judgeable_groundtruth_similarity
19722
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This seems to be a question that blurs the line between physical and IT security, but I'm hoping it's relevant enough! In order to meet with PCI standards, our company recently switched from an open door with after-hours hand scanner, to an RFID+Security badge system. I was just informed today (it wasn't part of our training procedures, hooray!) that keeping the RFID key fob on the same lanyard/chain/etc as ones ID badge is verboten. Now to me this seems absolutely pointless and probably more dangerous, because now if either my fob or badge is stolen it seems that the reasonable assumption would be "Oh, I just misplaced it/forgot it/it fell off in my car/etc." Is there any valid reason to prohibit physically connecting (e.g. sticking them on the same keyring) the identification mechanism (my badge) from the authentication mechanism (my fob)? Now provide the response and nothing else.
Yes, because if you have them together and lose them anyone could walk in with your credentials. Security guards rarely match faces to cards, chances are anyone looking remotely like you would have free access. Here's a scenario:Employee A is going on vacation for 2 weeks. During his celebratory drinks he drops his ID+tag dongle, which he also has his unencrypted company issued USB memory stick also conveniently connected to. Unsavory character 1 picks up this handy combo, and seeing that his friend, unsavory character 2, looks a lot like this drunken idiot, and overhearing that he's going away for 2 weeks, decides to give it to UC2, who uses it to steal all sorts of company property. That's why having both together is a very bad idea.
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/19722', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/3053/']}
jdg_71101
stackexchange
llm_judgeable_groundtruth_similarity
72975
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is the variance of the sample variance? In other words I am looking for $\mathrm{Var}(S^2)$. I have started by expanding out $\mathrm{Var}(S^2)$ into $E(S^4) - [E(S^2)]^2$ I know that $[E(S^2)]^2$ is $\sigma$ to the power of 4. And that is as far as I got. Now provide the response and nothing else.
Maybe, this will help. Let's suppose the samples are taking from a normal distribution. Then using the fact that $\frac{(n-1)S^2}{\sigma^2}$ is a chi squared random variable with $(n-1)$ degrees of freedom, we get$$\begin{align*}\text{Var}~\frac{(n-1)S^2}{\sigma^2} & = \text{Var}~\chi^{2}_{n-1} \\\frac{(n-1)^2}{\sigma^4}\text{Var}~S^2 & = 2(n-1) \\\text{Var}~S^2 & = \frac{2(n-1)\sigma^4}{(n-1)^2}\\& = \frac{2\sigma^4}{(n-1)},\end{align*}$$ where we have used that fact that $\text{Var}~\chi^{2}_{n-1}=2(n-1)$. Hope this helps.
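If it helps to sanity-check the formula, here is a small Monte Carlo simulation in Python/NumPy (illustrative only; the sample size, seed, n and sigma are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma = 10, 2.0
    x = rng.normal(0.0, sigma, size=(200_000, n))
    s2 = x.var(axis=1, ddof=1)                     # sample variance S^2 for each simulated sample

    print("simulated Var(S^2):        ", s2.var())
    print("theoretical 2*sigma^4/(n-1):", 2 * sigma**4 / (n - 1))

The two printed numbers should agree closely (about 3.56 here), matching the derivation above.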
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/72975', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/17719/']}
jdg_71102
stackexchange
llm_judgeable_groundtruth_similarity
43112091
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know that when we use properties in C#, the compiler always generate getters and setters for them in CIL (i.e., get_PropertyName and set_PropertyName), for example, consider the following piece of code: class Program { class Test { public string Name { get; set; } } static void Main(string[] args) { //Here I'm using reflection to inspect methods of Test class Type type = typeof(Test); foreach (var item in type.GetMethods()) { Console.WriteLine(item.Name); } } } This program will produce output with the methods of Test , among which there will be get_Name and set_Name - the getter and setters I was talking about.From my understanding then, if getters and setter are created "behind the scenes" then there should be a backing field created as well from which/to which the getters and setter get/set values.So, from the previous example, I can use reflection to inspect fields for Test class, like that: static void Main(string[] args) { Type type = typeof(Test); foreach (var item in type.GetFields()) { Console.WriteLine(item.Name); } } The ouput of this program is empty, I assume this is because the backing field that was created has private access, so we cannot see it. But since I don't know how to check it, can you please let me know if a backing field always gets created (even if we have a simple property with only get; and set; ) ? Now provide the response and nothing else.
If you mean simple properties like: {get;set;} or: {get;} then yes, there is a field; add BindingFlags.NonPublic | BindingFlags.Instance to your GetFields() call and you'll see it: foreach (var item in type.GetFields(BindingFlags.NonPublic | BindingFlags.Instance)){ Console.WriteLine(item.Name);} It usually has an unpronouncable name involving <> - yours is <Name>k__BackingField on my machine - but: this name is a compiler feature (although a lot of serialization etc libraries make use of it, so it is unlikely to change). But: no, properties don't by themselves always involve fields; for example: public int Value => 42; // no field or public int Name { get { return obj.Name; } set { obj.Name = value; } }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43112091', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7111692/']}
jdg_71103
stackexchange
llm_judgeable_groundtruth_similarity
397861
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am beginning my first electronics project, which is a PCB that holds a 3.3V flash, and interfaces it with a 5V system for reading and writing. The model of flash I am attempting to integrate is the S29GL032N, its datasheet is here: http://www.cypress.com/file/202426/download I am designing the schematic, and reading the best practices document provided by Cypress: http://www.cypress.com/file/323161/download However there are two points which are confusing to me. It notes that it wants two of two different types of bypass capacitors, located "near each side of the package". I understand how to place a single bypass cap in a circuit. I also understand how to place two differing types of bypass caps (for example, a 1uF and an 0.1uF cap) to a single Vcc pin. I also understand bypass caps ideally need to have as short a lead as possible to a via which goes to the other side of the PCB into a ground well. But what I don't understand is the application guidelines clearly state to place two bypass caps of the same capacitance onto a single Vcc pin. It then recommends to place another two caps of a different capacitance to the same pin, seemingly recommending four capacitors to a single Vcc pin. All models in the range only have a single Vcc pin (some models have between one and two Vio pins, but not the one I am attempting to integrate). If this recommendation makes sense, how would I place two on each side of the package, when there is only one VCC pin? Is it supposed to look like the picture attached below? Is there a special reason why two capacitors of the same type are needed for power filtering, and not just one? Thanks all for your time and help! Edit: clarity and specifying which chip exactly I am trying to integrate, with a link to its datasheet. Now provide the response and nothing else.
As mentioned in the comments, the recommendation you are referring to appears to be for a number of different devices, not just the one you're trying to implement in your design. Many ICs, whether they be microcontrollers, Flash, FPGAs, etc, often have multiple power pins on different sides of the chip. Instead of putting all the decoupling on one side with one power pin, I wonder if Cypress is just saying that for chips with Vcc on both sides, you need decoupling on both sides as well. In your case I think this statement can be ignored because, as you point out, your device only has one Vcc pin. As for using parallel bypass caps, it's intended to reject noise for different switching frequencies. Here is a video that is very good at explaining the need for and purpose behind parallel decoupling capacitors: https://www.youtube.com/watch?v=BcJ6UdDx1vg Also note that the smaller value capacitor should be placed closest to the Vcc pin, and the larger value capacitor should be positioned just on the other side of it. I have corrected your image above:
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/397861', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/199263/']}
jdg_71104
stackexchange
llm_judgeable_groundtruth_similarity
2023549
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on implementing Bing Cashback. In order to verify an incoming request from Bing as valid they provide a signature. The signature is a 160-bit SHA-1 hash of the url encrypted using RSA. Microsoft provides the RSA "public key", modulus and exponent, with which I'm supposed to decrypt the hash. Is there a way to create the Java key objects needed to decrypt the hash as Microsoft says? Everything I can find creates RSA key pairs automatically since that's how RSA is supposed to work. I'd really like to use the Java objects if at all possible since that's obviously more reliable than a hand coded solution. The example code they've provided is in .NET and uses a .NET library function to verify the hash. Specifically RSACryptoServiceProvider.VerifyHash() Now provide the response and nothing else.
    RSAPublicKeySpec spec = new RSAPublicKeySpec(modulus, exponent);
    KeyFactory factory = KeyFactory.getInstance("RSA");
    PublicKey pub = factory.generatePublic(spec);

    Signature verifier = Signature.getInstance("SHA1withRSA");
    verifier.initVerify(pub);
    verifier.update(url.getBytes("UTF-8")); // Or whatever interface specifies.
    boolean okay = verifier.verify(signature);
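If it is useful to test the same check outside the JVM, a rough Python analogue using the third-party cryptography package would look like the sketch below. The variable names modulus, exponent, url and signature are placeholders for the values Microsoft provides, and it assumes, as in the Java snippet, PKCS#1 v1.5 padding with SHA-1:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    def verify(modulus: int, exponent: int, url: str, signature: bytes) -> bool:
        public_key = rsa.RSAPublicNumbers(exponent, modulus).public_key()
        try:
            public_key.verify(signature, url.encode("utf-8"),
                              padding.PKCS1v15(), hashes.SHA1())
            return True
        except InvalidSignature:
            return False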
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2023549', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/245903/']}
jdg_71105
stackexchange
llm_judgeable_groundtruth_similarity
37333747
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Trying to find, for each point in a SpatialPointsDataFrame , the distance to the closest point in a second SpatialPointsDataFrame (equivalent to the "nearest" tool in ArcGIS for two SpatialPointDataFrames ). I can do the naive implementation by calculating all pairwise distances using gDistance and taking the min ( like answer 1 here ), but I have some huge datasets and was looking for something more efficient. For example, here's a trick with knearneigh for points in same dataset . Cross-posted on r-sig-geo Now provide the response and nothing else.
The SearchTrees package offers one solution. Quoting from its documentation, it "provides an implementation of the QuadTree data structure [which it] uses to implement fast k-Nearest Neighbor [...] lookups in two dimensions." Here's how you could use it to quickly find, for each point in a SpatialPoints object B, the two nearest points in a second SpatialPoints object A:

    library(sp)
    library(SearchTrees)

    ## Example data
    set.seed(1)
    A <- SpatialPoints(cbind(x=rnorm(100), y=rnorm(100)))
    B <- SpatialPoints(cbind(x=c(-1, 0, 1), y=c(1, 0, -1)))

    ## Find indices of the two nearest points in A to each of the points in B
    tree <- createTree(coordinates(A))
    inds <- knnLookup(tree, newdat=coordinates(B), k=2)

    ## Show that it worked
    plot(A, pch=1, cex=1.2)
    points(B, col=c("blue", "red", "green"), pch=17, cex=1.5)

    ## Plot two nearest neighbors
    points(A[inds[1,],], pch=16, col=adjustcolor("blue", alpha=0.7))
    points(A[inds[2,],], pch=16, col=adjustcolor("red", alpha=0.7))
    points(A[inds[3,],], pch=16, col=adjustcolor("green", alpha=0.7))
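For comparison only, the same kind of k-nearest-neighbour lookup can be sketched in Python with SciPy's cKDTree; this is an analogue working on plain coordinate arrays, not on sp objects:

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(1)
    A = rng.normal(size=(100, 2))                          # reference points
    B = np.array([[-1.0, 1.0], [0.0, 0.0], [1.0, -1.0]])   # query points

    tree = cKDTree(A)
    dist, idx = tree.query(B, k=2)   # distances and indices of the 2 nearest A points per B point
    print(idx)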
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37333747', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2302819/']}
jdg_71106
stackexchange
llm_judgeable_groundtruth_similarity
618509
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Please define differentials rigorously such that they give a consistency to their use in the following links. I have read Is $\frac{\textrm{d}y}{\textrm{d}x}$ not a ratio? What is the practical difference between a differential and a derivative? Differential of a function at Wikipedia. If $\frac{dy}{dt}dt$ doesn't cancel, then what do you call it? Leibniz's notation at Wikipedia. Exact differential equation at Wikipedia. Moment of inertia at Wikipedia. Center of mass at Wikipedia, etc. Now provide the response and nothing else.
Differentials nowadays have a canonical definition which is used everyday in differential geometry and differential topology, or in mathematical physics. They are grounded on linear (resp., multilinear) algebra and on the notion of $d$-dimensional real or complex manifold. These differentials have nothing to do with the "infinitesimals" of nonstandard analysis, nor is the latter theory of any help in understanding and using them. Not every time you see a $d$ in a formula a differential is at work. In the sources you quote the $d$ rather tries to convey the intuition of "a little bit of", e.g., $d\,V$ means: "a little bit of volume". So when you see an expression like$$\int\nolimits_B (x^2+y^2)\ dV$$this typographical picture encodes the result of a long thought process, and you should not think of $dV$ as a clear cut mathematical entity. This thought process is the following: You are given a three-dimensional body $B$ (a "top") that is going to be rotated around the $z$-axis. Physical considerations tell you that the "rotational inertia" $\Theta$ of this body can be found by partitioning it into $N\gg1$ very small pieces $B_k$, choosing a point $(\xi_k,\eta_k,\zeta_k)$ in each $B_k$ and forming the sum$$R:=\sum_{k=1}^N(\xi_k^2+\eta_k^2){\rm vol}(B_k)\ .$$The "true" $\Theta$ would then be the limit of such sums, when the diameters ${\rm diam}(B_k)$ go to zero. Similarly, when you have a plane curve $\gamma:\ s\mapsto {\bf z}(s)=\bigl(x(s),y(s)\bigr)$, it's bending energy $J$ is given by the integral$$J:=\int\nolimits_\gamma \kappa^2(s)\ ds\ ,$$where $\kappa$ denotes the curvature. Don't think here of the precise logical meaning of $ds$, but of the intended thought process: The curve is cut up into $N$ pieces of length $\Delta s_k>0$, and the curvature of $\gamma$ is measured at a point ${\bf z}(\sigma_k)$ of each piece. Then one forms the sum$$R:=\sum_{k=1}^N \kappa^2(\sigma_k)\>\Delta s_k\ ;$$and finally the "true" $J$ is the limit of such sums when the $\Delta s_k$ go to zero. Now comes the question of "piece of area" vs. "piece of length". This question teaches us that we have to be careful when dealing with "little bits of something". Consider the following figure: The "area under the curve" $\gamma$ corresponding to a certain $\Delta x>0$ is roughly $f(\xi)\cdot \Delta x$, independently of the exact slope of the curve at $\xi$. Making $\Delta x$ smaller will decrease the relative area error committed here. But the length $\Delta s$ of the short arc corresponding to $\Delta x$ is roughly $={\Delta x\over\cos\phi}$, and making $\Delta x$ smaller does not bring away the factor ${1\over\cos\phi}$. It follows that the final formula for the total length will have to incorporate the values ${1\over\cos\phi}=\sqrt{1+f'(\xi)^2}$.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/618509', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/82336/']}
jdg_71107
stackexchange
llm_judgeable_groundtruth_similarity
257
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: There's been a lot of hype about JIT compilers for languages like Java, Ruby, and Python. How are JIT compilers different from C/C++ compilers, and why are the compilers written for Java, Ruby or Python called JIT compilers, while C/C++ compilers are just called compilers? Now provide the response and nothing else.
A JIT compiler compiles code on the fly, right before it executes or even while it is already executing. This way, the VM where the code is running can check for patterns in the code's execution and apply optimizations that are possible only with run-time information. Further, if the VM decides that the compiled version is not good enough for whatever reason (e.g., too many cache misses, or code frequently throwing a particular exception), it may decide to recompile it in a different way, leading to much smarter compilation. On the other hand, C and C++ compilers are traditionally not JIT compilers. They compile in a single shot, only once, on the developer's machine, and then an executable is produced.
{}
{'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/257', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/5/']}
jdg_71108
stackexchange
llm_judgeable_groundtruth_similarity
14552490
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a UIPopoverController stored in a strong property in my View Controller. When the user rotates the iPad while the popover is visible, I dismiss the popover and set my property to nil. if (self.popover != nil) { [self.popover dismissPopoverAnimated:NO]; self.popover.delegate = nil; self.popover = nil;} When the code gets to self.popover = nil, ARC attempts to dealloc the UIPopoverController, but it crashes because it is supposedly still visible. How am I supposed to dismiss and nil out the popover without it crashing? Now provide the response and nothing else.
First off, it would be advisable to check if the popover is being presented; this will conveniently also check if it is allocated:

    if ([self.popover isPopoverVisible]) {
        [self.popover dismissPopoverAnimated:NO];
    }

Now, the issue is, you don't get the delegate callback - (void)popoverControllerDidDismissPopover:(UIPopoverController *)popoverController if you dismiss the popover programmatically like this, but you need a strong reference to the popover until it is no longer visible. The way to do this is to delay setting the property to nil until you return to the main run loop, as when you get back to the main run loop, all animations will have finished and thus the popover will no longer be visible. You will want to move the code setting the popover to nil into another method:

    - (void)releasePopover {
        self.popover.delegate = nil;
        self.popover = nil;
    }

Then, in your rotation callback, add this method to fire on the main run loop. I like to do this by adding an invocation operation to the main run loop:

    if ([self.popover isPopoverVisible]) {
        [self.popover dismissPopoverAnimated:NO];
        NSInvocationOperation *invocationOperation = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(releasePopover) object:nil];
        [[NSOperationQueue mainQueue] addOperation:invocationOperation];
    }

Finally, for the sake of cleanliness, you will probably want to call -releasePopover from inside your - (void)popoverControllerDidDismissPopover:(UIPopoverController *)popoverController callback. So, putting it all together:

    - (void)releasePopover {
        self.popover.delegate = nil;
        self.popover = nil;
    }

    - (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration {
        if ([self.popover isPopoverVisible]) {
            [self.popover dismissPopoverAnimated:NO];
            NSInvocationOperation *invocationOperation = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(releasePopover) object:nil];
            [[NSOperationQueue mainQueue] addOperation:invocationOperation];
        }
    }

    - (void)popoverControllerDidDismissPopover:(UIPopoverController *)popoverController {
        [self releasePopover];
    }

Having said all that, unless there is a good reason, you may just want to keep the popover around to reuse and only set it to nil when you get low-memory warnings and/or if your view is unloaded, as Chris Loonam's answer mentioned.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14552490', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/416631/']}
jdg_71109
stackexchange
llm_judgeable_groundtruth_similarity
244936
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How do I output a string in the bottom right corner of the terminal? Now provide the response and nothing else.
    string=whatever
    stty size | {
      read y x
      tput sc                                        # save cursor position
      tput cup "$((y - 1))" "$((x - ${#string}))"    # position cursor
      printf %s "$string"
      tput rc                                        # restore cursor.
    }

That assumes all characters in $string are one cell wide (and that $string doesn't contain control characters (like newline, tab...)). If your string may contain zero-width (like combining characters) or double-width ones, you could use ksh93's printf's %Ls format specifier that formats based on character width:

    string='whatéver'
    # aka string=$'\uFF57\uFF48\uFF41\uFF54\uFF45\u0301\uFF56\uFF45\uFF52'
    stty size | {
      read y x
      tput sc                       # save cursor position
      tput cup "$((y - 1))" 0       # position cursor
      printf "%${x}Ls" "$string"
      tput rc                       # restore cursor.
    }

That would erase the leading part of the last line though.
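A comparable sketch in Python, using raw ANSI escape sequences instead of tput; it assumes a VT100-compatible terminal and, like the first shell version, single-cell-wide characters:

    import shutil

    def print_bottom_right(text: str) -> None:
        cols, rows = shutil.get_terminal_size()
        col = max(1, cols - len(text) + 1)
        # ESC 7 saves the cursor, ESC[row;colH moves it, ESC 8 restores it
        print(f"\x1b7\x1b[{rows};{col}H{text}\x1b8", end="", flush=True)

    print_bottom_right("hello")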
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/244936', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/23692/']}
jdg_71110
stackexchange
llm_judgeable_groundtruth_similarity
56594598
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Let's say I have a numpy array where I would like to swap all the 1's to 0 and all the 0's to 1 (the array will have other values, and there is nothing special about the 0's and 1's). Of course, I can loop through the array and change the values one by one. Is there an efficient method you can recommend using? Does the np.where() method have an option for this operation? Now provide the response and nothing else.
Here's one way using np.where , and taking the bitwise XOR of a given value when it is either 0 or 1:

    np.where((a==0)|(a==1), a^1, a)

For example:

    a = np.array([[0,1,2,1], [1,2,0,3]])
    print(a)
    array([[0, 1, 2, 1],
           [1, 2, 0, 3]])

    np.where((a==0)|(a==1), a^1, a)
    array([[1, 0, 2, 0],
           [0, 2, 1, 3]])
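A small in-place variant of the same idea, offered only as an alternative sketch; it mutates the array and assumes an integer dtype:

    import numpy as np

    a = np.array([[0, 1, 2, 1],
                  [1, 2, 0, 3]])
    mask = (a == 0) | (a == 1)
    a[mask] ^= 1          # flips 0 <-> 1 in place, leaves other values alone
    print(a)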
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56594598', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8602199/']}
jdg_71111
stackexchange
llm_judgeable_groundtruth_similarity
10021899
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm familiar with Python generators, however I've just come across the term "generative method" which I am not familiar with and cannot find a satisfactory definition. To put it in context, I found the term in SQLAlchemy's narrative documentation: Full control of the “autocommit” behavior is available using the generative Connection.execution_options() method provided on Connection, Engine, Executable, using the “autocommit” flag which will turn on or off the autocommit for the selected scope. What is a generative method? Trying to iterate the object returned by Connection.execution_options() doesn't work so I'm figuring it's something other than a standard generator. Now provide the response and nothing else.
It doesn't appear to be a common database concept, but SQLAlchemy uses the term generative in the sense "generated by your program iteratively at runtime". (So, no connection to Python generators.) An example from the tutorial: The Query object is fully generative, meaning that most method calls return a new Query object upon which further criteria may be added. For example, to query for users named “ed” with a full name of “Ed Jones”, you can call filter() twice, which joins criteria using AND:

    >>> for user in session.query(User).\
    ...         filter(User.name=='ed').\
    ...         filter(User.fullname=='Ed Jones'):
    ...     print user

This call syntax is more commonly known as "method chaining", and the design that allows it as a "fluent interface". So, in the case of Connection.execution_options() , "generative" means that it returns the modified connection object, so that you can chain the calls as above.
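To illustrate the "generative" idea in isolation, here is a tiny, made-up Python sketch of a fluent, method-chaining object in the same spirit; this is not SQLAlchemy's actual implementation, just the pattern:

    class Query:
        def __init__(self, criteria=()):
            self._criteria = tuple(criteria)

        def filter(self, condition):
            # return a *new* Query carrying the accumulated criteria
            return Query(self._criteria + (condition,))

        def __repr__(self):
            return f"Query(criteria={self._criteria!r})"

    q = Query().filter("name = 'ed'").filter("fullname = 'Ed Jones'")
    print(q)   # Query(criteria=("name = 'ed'", "fullname = 'Ed Jones'"))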
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10021899', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/509271/']}
jdg_71112
stackexchange
llm_judgeable_groundtruth_similarity
9590398
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to extract the content of body of a html page along with the tagNames of its child. I have taken an example html like this: <html><head></head><body><h1>This is H1 tag</h1><h2>This is H2 tag</h2><h3>This is H3 tag</h3></body></html> I have implemented the php code like below and its working fine. $d=new DOMDocument();$d->loadHTMLFile('file.html');$l=$d->childNodes->item(1)->childNodes->item(1)->childNodes;for($i=0;$i<$l->length;$i++){echo "<".$l->item($i)->nodeName.">".$l->item($i)->nodeValue."</".$l->item($i)->nodeName.">";} This code is working perfectly fine, but when I tried to do this using foreach loop instead of for loop, the nodeName property was returning '#text' with every actual nodeName. Here is that code $l=$d->childNodes->item(1)->childNodes->item(1)->childNodes;foreach ($l as $li) { echo $li->childNodes->item(0)->nodeName."<br/>";} Why so? Now provide the response and nothing else.
In DOM, everything is a 'node'. Not just the elements (tags); comments and text between the elements (even if it's just whitespaces or newlines, which seems to be the case in your example) are nodes, too. Since text nodes don't have an actual node name, it's substituted with #text to indicate it's a special kind of node. Apparently, text nodes are left out when manually selecting child nodes with the item method, but included when iterating over the DOMNodeList . I'm not sure why the class behaves like this, someone else will have to answer that. Beside nodeName and nodeValue , a DOMNode also has a nodeType property. By checking this property against certain constants you can determine the type of the node and thus filter out unwanted nodes.
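The same whitespace-text-node behaviour shows up in other DOM implementations too; for instance, a minimal Python illustration with xml.dom.minidom (shown purely to demonstrate the #text nodes, not the PHP API):

    from xml.dom import minidom

    doc = minidom.parseString("<body>\n<h1>This is H1</h1>\n<h2>This is H2</h2>\n</body>")
    for node in doc.documentElement.childNodes:
        print(node.nodeType, node.nodeName)   # the newlines appear as nodeType 3, '#text'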
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9590398', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/912359/']}
jdg_71113
stackexchange
llm_judgeable_groundtruth_similarity
5190966
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use this command: sed -i "10 i \t\t\ttime.sleep(0.1) " /home/test_file to insert at line 10 a line like: <TAB><TAB><TAB>sleep(0.1) But I got t<TAB><TAB>sleep(0.1)... Can you tell me how to get this result?thanks PS. I use this command in an executable bash script. Now provide the response and nothing else.
I believe the problem is with competition between the way that the shell and sed are expanding the meta-characters. I've tried tripling the first backslash character and that seems to work for me: sed -i "i \\\t\t\ttime.sleep(0.1) " tmp.tmp
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5190966', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/644297/']}
jdg_71114
stackexchange
llm_judgeable_groundtruth_similarity
58772
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there any general way to find out the coefficients of a polynomial. Say for e.g. $(x-a)(x-b)$ the constant term is $ab$, coefficient of $x$ is $-(a+b)$ and coefficient of $x^2$ is $1$. I have a polynomial $(x-a)(x-b)(x-c)$.What if the number is extended to n terms.? Now provide the response and nothing else.
Try expanding it to get a feel for it yourself; in general, the coefficient of $x^k$ is the sum of all products of $n-k$ of the $n$ roots, multiplied by $(-1)^{n-k}$. So for $(x-a)(x-b)(x-c)(x-d)$ you'll get that the coefficient of $x^2$ is $ab+ac+ad+bc+bd+cd$. For a precise discussion see http://en.wikipedia.org/wiki/Vi%C3%A8te%27s_formulas
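Written out in general, with $e_m$ denoting the elementary symmetric polynomial in the roots, the statement is: $$\prod_{i=1}^{n}(x-a_i)=\sum_{k=0}^{n}(-1)^{\,n-k}\,e_{n-k}(a_1,\dots,a_n)\,x^{k},\qquad e_m(a_1,\dots,a_n)=\sum_{1\le i_1<\cdots<i_m\le n}a_{i_1}\cdots a_{i_m}.$$ For $n=2$ this recovers exactly the $1$, $-(a+b)$, $ab$ pattern from the question.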
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/58772', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/11333/']}
jdg_71115
stackexchange
llm_judgeable_groundtruth_similarity
18189
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Theresa May has insisted numerous times that she would not call for an early general election. One of the main reasons she cited for this is that the instability of an election could derail the complex Brexit process. Yet, yesterday she announced a plan to call a snap general election, explaining that this would "remove the risk of uncertainty and instability". As a complete outsider to UK politics, I am confused. Why the dramatic change of heart? Now provide the response and nothing else.
There are a few reasons for Theresa May to call for a snap election. 1. Personal mandate As you know, Theresa May wasn't elected by the people in UK; she became Prime Minister after David Cameron resigned and she won the Conservative Party's leadership election . It's difficult to govern without a mandate. Furthermore, it would make it difficult for her to decide the course for UK's exit from the EU. Since she has chosen a "Hard Brexit" , she needs support from the people as the government should represent citizens. She also needs a mandate so that she won't be blamed in the future for negotiating such an important deal without having the popular support from the people. 2. Polling is in her favour According to a recent opinion polling, the Conservatives are heading for a landslide victory. So, it's a good opportunity to call a general election. A ComRes poll for the Sunday Mirror and Independent on Sunday gave the Conservative party a walloping 21% lead over the opposition Labour party, while a poll for the Times of London by British pollster YouGov put the Conservative lead over Labour at 17%. Source: http://edition.cnn.com/2017/04/18/europe/uk-snap-election-explainer/ The Conservatives currently have only a 17-seat majority in the 650-seat House of Commons. If she wins more seats, it would be much easier for her to pass the necessary bills related to Brexit, such as the "Brexit bill" and the "EU divorce bill" . Most importantly, the final Brexit deal requires the Parliament to vote in order to approve it , so having a larger majority reduces the chances of the UK leaving the EU without a deal. 3. To secure a mandate for the Brexit plan This election would allow the public to show their support for Theresa May's Brexit plan which she mentioned in an interview with The Sun: Telling of her hopes from voters on June 8, Mrs May added: “What I hope comes out of the election is support from the public to say we agree with their plan for Brexit, so that when I go into Europe I’ve got that backing of the British people”. Source: https://www.thesun.co.uk/news/3358730/we-need-an-election-now-to-deliver-brexit-as-rebels-are-trying-to-subvert-will-of-the-people-every-step-of-the-way-theresa-may-says-in-exclusive-chat-with-the-sun/ 4. To show the EU that Brexit really means Brexit On this basis, a resounding majority for May would remove any motive for the EU negotiators to make the UK’s Brexit terms as unpalatable as possible in the belief that it would help UK voters realise the terms of Brexit are just too painful to accept . Source: https://www.theguardian.com/politics/2017/apr/18/mays-real-reason-for-calling-election-to-show-eu-that-brexit-really-means-brexit This would give the EU lesser arguments to give the UK a bad deal and at the same time to give Theresa May a stronger argument since she has the popular support. 5. To keep the United Kingdom united As it's well-known that the Scottish National Party (SNP) is trying to push for another independence referendum , it's hoped that the Conservatives would gain more Scottish seats to the House of Commons after this election which would weaken SNP's dominance. Currently, the SNP has 56 Scottish seats to the House of Commons while the Conservative, Labour and Liberal Democrat parties each has one. 
Prime Minister Theresa May has also mentioned a few reasons in her early election speech : So we need a general election and we need one now, because we have at this moment a one-off chance to get this done while the European Union agrees its negotiating position and before the detailed talks begin. [ ... ] Every vote for the Conservatives will make it harder for opposition politicians who want to stop me from getting the job done. Every vote for the Conservatives will make me stronger when I negotiate for Britain with the prime ministers, presidents and chancellors of the European Union . Every vote for the Conservatives means we can stick to our plan for a stronger Britain and take the right long-term decisions for a more secure future . (emphasis mine)
{}
{'log_upvote_score': 5, 'links': ['https://politics.stackexchange.com/questions/18189', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/101/']}
jdg_71116
stackexchange
llm_judgeable_groundtruth_similarity
19268
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to push start a car with automatic transmission? Use case: Let's say the battery is dead or you have a bad starter. I know this can be done on manual cars by putting the car in to gear after the car starts moving. Can you do the same with an automatic transmission? Does the automatic transmission type matter? (traditional torque converter, CVT, DCT, etc.) Now provide the response and nothing else.
I believe you can do it with a traditional torque converter. I've done it. It's not smart. A push by your buddy, no. A push with another vehicle, yes - but as the other Steve M suggests, you'll need the high side of 25-35 mph to maybe make this happen. Without the transmission fluid pump being driven, you won't get proper TC fill, BUT there's usually enough residual fluid left in the TC to attempt this unwise stunt. It's the difference between a really good transmission fluid service and the ones we all actually did... How many of us oldtimers removed the shield (back when you could) and rotated the TC, pulled the plug, and drained the converter? No flame here; I'm as guilty as any. On edit: I should qualify that, as I'm talking old-school - C5 or 727 era, early totally hydraulic systems, where you will always have some reverse power path back to the TC. With modern TCM solenoid valve systems I doubt it, and I simply don't even know. I think I remember working on something that was stuck in an "emergency tow mode" that purposely uncoupled the power flow path to prevent damage (likely dry pump burnout) during flat towing.
{}
{'log_upvote_score': 4, 'links': ['https://mechanics.stackexchange.com/questions/19268', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/7952/']}
jdg_71117
stackexchange
llm_judgeable_groundtruth_similarity
483164
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Some loading routines in my program takes to long to complete. I want a quick small snippet for checking how long a function took to execute. By small I mean "preferably without 3rd party libraries". Maybe something as simple as taking the system time? start = current_system_time()load_something()delta = current_system_time()-startlog_debug("load took "+delta) Edit: Target OS in question is Windows. Now provide the response and nothing else.
Your answer: Yes. Caveat: that WON'T work reliably in multithreaded code or on multiple-core machines; you need a robust wall-clock timer. So I recommend you use OpenMP's wall clock. OpenMP is included with VC, GCC, and most compilers, and it's a standard, so you don't need to worry about it disappearing.

    #include <omp.h>

    // Starting the time measurement
    double start = omp_get_wtime();

    // Computations to be measured
    ...

    // Measuring the elapsed time
    double end = omp_get_wtime();

    // Time calculation (in seconds)
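For comparison, the same wall-clock pattern in Python is a few lines with time.perf_counter (a monotonic, high-resolution timer); load_something here is just a placeholder for the code being measured:

    import time

    def load_something():
        time.sleep(0.25)   # stand-in for the real work being measured

    start = time.perf_counter()
    load_something()
    elapsed = time.perf_counter() - start
    print(f"load took {elapsed:.3f} s")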
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/483164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/56763/']}
jdg_71118
stackexchange
llm_judgeable_groundtruth_similarity
18419155
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible in android to set up a view in a way that it applies some color filter to everything below that's visible in its bounds? Like in this example: Just a simple rectangular view that inverts colors of everything below it. Of course when user scrolls the list it is also reflected in the inverted box. Is there some easy way to do it using color filters, PorterDuff modes, etc? Now provide the response and nothing else.
You're trying to solve this problem using a view hierarchy like this:

    Parent
        ListView
        InverterView

Problem is, in this position, InverterView has no control over how ListView is drawn. But you know who does have control over how ListView is drawn? ListView 's parent layout does. In other words, what you really want is a hierarchy like this:

    Parent
        InverterLayout
            ListView

Now InverterLayout is responsible for drawing ListView , and can apply effects to it.

    class InverterLayout extends FrameLayout {
        // structure to hold our color filter
        private Paint paint = new Paint();
        // the color filter itself
        private ColorFilter cf;
        // the rectangle we want to invert
        private Rect inversion_rect = new Rect(100, 100, 300, 300);

        public InverterLayout(Context context) {
            super(context);
            // construct the inversion color matrix
            float[] mat = new float[] {
                -1,  0,  0, 0, 255,
                 0, -1,  0, 0, 255,
                 0,  0, -1, 0, 255,
                 0,  0,  0, 1, 0
            };
            cf = new ColorMatrixColorFilter(new ColorMatrix(mat));
        }

        @Override
        protected void dispatchDraw(Canvas c) {
            // create a temporary bitmap to draw the child views
            Bitmap b = Bitmap.createBitmap(getWidth(), getHeight(), Config.ARGB_8888);
            Canvas cc = new Canvas(b);
            // draw them to that temporary bitmap
            super.dispatchDraw(cc);

            // copy the temporary bitmap to screen without the inversion filter
            paint.setColorFilter(null);
            c.drawBitmap(b, 0, 0, paint);

            // copy the inverted rectangle
            paint.setColorFilter(cf);
            c.drawBitmap(b, inversion_rect, inversion_rect, paint);
        }
    }

When using this, ensure your child view has its own background . If the view is transparent and the window background shows through, that window background will not be inverted, because the InverterLayout has no control over how the window is drawn.
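As a quick, purely illustrative check of what that 4x5 color matrix does (it maps each RGB channel v to 255 - v and leaves alpha alone), one can multiply it out in Python/NumPy:

    import numpy as np

    M = np.array([[-1,  0,  0, 0, 255],
                  [ 0, -1,  0, 0, 255],
                  [ 0,  0, -1, 0, 255],
                  [ 0,  0,  0, 1,   0]], dtype=float)

    rgba = np.array([10.0, 200.0, 0.0, 255.0])
    print(M @ np.append(rgba, 1.0))   # -> [245.  55. 255. 255.]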
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18419155', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/345048/']}
jdg_71119
stackexchange
llm_judgeable_groundtruth_similarity
1637252
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was trying to calculate the limit of the following function: $$ \lim_{x\to0} \frac{1}{\sin x} \cdot \ln \left(\frac{e^x -1}{x}\right) $$ My first thought was using L'Hopital's rule since $\Large \frac{e^x -1}{x}$ goes to 1 so the whole $\ln$ goes to 0. But then I get another complicated expression, and finally I end up using L'Hopital's rule at least 5 times before getting an actual result. Is there a wiser way for dealing this limit? (I mean, without using this rule?) Thanks. Now provide the response and nothing else.
Notice, $$\lim_{x\to 0}\frac{1}{\sin x}\cdot \ln\left(\frac{e^x-1}{x}\right)$$$$=\lim_{x\to 0}\frac{\ln(e^x-1)-\ln(x)}{\sin x}$$Applying L' Hospital's rule three times for $\frac 00$ form, $$=\lim_{x\to 0}\frac{\frac{e^x}{e^x-1}-\frac {1}{x}}{\cos x}$$$$=\lim_{x\to 0}\frac{xe^x-e^x+1}{x(e^x-1)}\cdot \lim_{x\to 0}\frac{1}{\cos x}$$$$=\lim_{x\to 0}\frac{xe^x-e^x+1}{xe^x-x}$$$$=\lim_{x\to 0}\frac{xe^x+e^x-e^x}{xe^x+e^x-1}$$$$=\lim_{x\to 0}\frac{xe^x}{xe^x+e^x-1}$$$$=\lim_{x\to 0}\frac{xe^x+e^x}{xe^x+e^x+e^x}$$$$=\lim_{x\to 0}\frac{xe^x+e^x}{xe^x+2e^x}=\frac{0+1}{0+2\cdot 1}=\color{red}{\frac 12}$$
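Since the asker explicitly wondered whether the rule could be avoided, it may be worth adding that the same value also drops out of the standard series expansions: $$\frac{e^x-1}{x}=1+\frac{x}{2}+O(x^2)\ \Longrightarrow\ \ln\Big(\frac{e^x-1}{x}\Big)=\frac{x}{2}+O(x^2),$$ $$\frac{1}{\sin x}\,\ln\Big(\frac{e^x-1}{x}\Big)=\frac{\frac{x}{2}+O(x^2)}{x+O(x^3)}\ \longrightarrow\ \frac12\quad(x\to0).$$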
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1637252', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/310080/']}
jdg_71120
stackexchange
llm_judgeable_groundtruth_similarity
30227219
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any way to use a WebClient in a .NET Core application?If I build the application I get the following error: Severity Code Description Project File LineError CS0246 The type or namespace name 'WebClient' could not be found (are you missing a using directive or an assembly reference?) I think WebClient is not a part of .NET Core, but is there any alternative? Now provide the response and nothing else.
As of .Net Standard 2.0, WebClient is now available to any implementations of the standard, including .Net Core. However, the Stack Overflow question " Need help deciding between HttpClient and WebClient " has some fairly good answers as to why you should be using the HttpClient instead. One of the drawbacks mentioned is that there is no built-in progress reporting in the HttpClient . However, because it is using streams, it is possible to write your own. The answers to " How to implement progress reporting for Portable HttpClient " provides an example for reporting the progress of the response stream. If you're targeting prior versions of the standard, you'll need to use HttpClient as WebClient is not available prior to .Net Standard 2.0.
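As a rough illustration of the HttpClient route recommended above (this snippet is an addition, not part of the quoted answers; the URL is a placeholder and error handling is kept minimal):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Example
{
    // HttpClient is designed to be created once and reused for many requests.
    private static readonly HttpClient client = new HttpClient();

    static async Task Main()
    {
        // Fetch a resource and read the body as a string,
        // roughly what WebClient.DownloadString was commonly used for.
        HttpResponseMessage response = await client.GetAsync("https://example.com/data");
        response.EnsureSuccessStatusCode();
        string body = await response.Content.ReadAsStringAsync();
        Console.WriteLine(body);
    }
}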
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30227219', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4045942/']}
jdg_71121
stackexchange
llm_judgeable_groundtruth_similarity
1348199
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the difference between the following maps I create (in another question, people answered using them seemingly interchangeably and I'm wondering if/how they are different): HashMap<String, Object> map = new HashMap<String, Object>();Map<String, Object> map = new HashMap<String, Object>(); Now provide the response and nothing else.
There is no difference between the objects; you have a HashMap<String, Object> in both cases. There is a difference in the interface you have to the object. In the first case, the interface is HashMap<String, Object> , whereas in the second it's Map<String, Object> . But the underlying object is the same. The advantage to using Map<String, Object> is that you can change the underlying object to be a different kind of map without breaking your contract with any code that's using it. If you declare it as HashMap<String, Object> , you have to change your contract if you want to change the underlying implementation. Example: Let's say I write this class: class Foo { private HashMap<String, Object> things; private HashMap<String, Object> moreThings; protected HashMap<String, Object> getThings() { return this.things; } protected HashMap<String, Object> getMoreThings() { return this.moreThings; } public Foo() { this.things = new HashMap<String, Object>(); this.moreThings = new HashMap<String, Object>(); } // ...more...} The class has a couple of internal maps of string->object which it shares (via accessor methods) with subclasses. Let's say I write it with HashMap s to start with because I think that's the appropriate structure to use when writing the class. Later, Mary writes code subclassing it. She has something she needs to do with both things and moreThings , so naturally she puts that in a common method, and she uses the same type I used on getThings / getMoreThings when defining her method: class SpecialFoo extends Foo { private void doSomething(HashMap<String, Object> t) { // ... } public void whatever() { this.doSomething(this.getThings()); this.doSomething(this.getMoreThings()); } // ...more...} Later, I decide that actually, it's better if I use TreeMap instead of HashMap in Foo . I update Foo , changing HashMap to TreeMap . Now, SpecialFoo doesn't compile anymore, because I've broken the contract: Foo used to say it provided HashMap s, but now it's providing TreeMaps instead. So we have to fix SpecialFoo now (and this kind of thing can ripple through a codebase). Unless I had a really good reason for sharing that my implementation was using a HashMap (and that does happen), what I should have done was declare getThings and getMoreThings as just returning Map<String, Object> without being any more specific than that. In fact, barring a good reason to do something else, even within Foo I should probably declare things and moreThings as Map , not HashMap / TreeMap : class Foo { private Map<String, Object> things; // <== Changed private Map<String, Object> moreThings; // <== Changed protected Map<String, Object> getThings() { // <== Changed return this.things; } protected Map<String, Object> getMoreThings() { // <== Changed return this.moreThings; } public Foo() { this.things = new HashMap<String, Object>(); this.moreThings = new HashMap<String, Object>(); } // ...more...} Note how I'm now using Map<String, Object> everywhere I can, only being specific when I create the actual objects. If I had done that, then Mary would have done this: class SpecialFoo extends Foo { private void doSomething(Map<String, Object> t) { // <== Changed // ... } public void whatever() { this.doSomething(this.getThings()); this.doSomething(this.getMoreThings()); }} ...and changing Foo wouldn't have made SpecialFoo stop compiling. Interfaces (and base classes) let us reveal only as much as is necessary , keeping our flexibility under the covers to make changes as appropriate. 
In general, we want to have our references be as basic as possible. If we don't need to know it's a HashMap , just call it a Map . This isn't a blind rule, but in general, coding to the most general interface is going to be less brittle than coding to something more specific. If I'd remembered that, I wouldn't have created a Foo that set Mary up for failure with SpecialFoo . If Mary had remembered that, then even though I messed up Foo , she would have declared her private method with Map instead of HashMap and my changing Foo 's contract wouldn't have impacted her code. Sometimes you can't do that, sometimes you have to be specific. But unless you have a reason to be, err toward the least-specific interface.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/1348199', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/148389/']}
jdg_71122
stackexchange
llm_judgeable_groundtruth_similarity
192000
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know that the OS is the boss of programs (once OS is loaded). Is BIOS/UEFI the boss of the everything before OS is loaded? Can an OS be a sitting duck for any BIOS/UEFI related malware? In other words, can BIOS/UEFI delete or modify OS and other code? Now provide the response and nothing else.
Technically, yes. The answer, in simple terms, is "yes" to all of your questions. The firmware (BIOS or UEFI) loads before the operating system. Typically, it looks for a boot sector on your storage devices (internal HD, CD/DVD, USB drives, etc). Then it loads the bootloader specified in the boot sector into memory and passes control to that boot loader, which will get the rest of the operating system up and running. If the firmware is infected with malicious code, it can read/write anywhere. File permissions are implemented by the OS kernel or the file system driver, so they aren't a concern at all in this situation. This means that you are free to tamper with the OS files as well. But also, no. In modern systems, this is very difficult to accomplish, however. The firmware controls access to the EEPROM, which is where its code is stored. Modern motherboards will usually only accept firmware updates which have been digitally signed by the manufacturer. You would have to defeat this mechanism first if you wanted to tamper with the BIOS/UEFI. Newer operating systems can validate digital signatures on their files. If you tamper with their files, the digital signature will no longer be valid. E.g., if you enable SecureBoot on Windows 10 and change something it will refuse to boot. Obviously, you could edit the OS to strip out the digital signature checks if you control the firmware, but it is very difficult to infect firmware for several reasons. In addition to the built-in protections, the firmware on most motherboards is customized for that individual model or, at most, that particular product line. Writing a firmware hack that applies to a wide range of motherboards is extremely difficult---and may be practically impossible. All things considered... It is theoretically possible to tamper with a system in this fashion. The potential for abuse is well understood, however, so the danger is addressed with reasonable technical measures. People with physical access to your machines could flash the EEPROM chips containing the BIOS code with custom-programmed malware. It takes a lot of resources to orchestrate this, so the average computer user is not at risk. This type of attack is easily within reach of governments and large criminal organizations. Governments and large firms are at risk, and they generally buy from trusted vendors or certified resellers to reduce the risk of acquiring compromised equipment.
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/192000', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/184103/']}
jdg_71123
stackexchange
llm_judgeable_groundtruth_similarity
420318
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to integrate : $$\int \frac{\mathrm dx}{\sin (x)-\sin(a)}$$ Now provide the response and nothing else.
Using the Weierstrass substitution, $$\tan \frac x2=u$$ $$\implies \sin x=\frac{2u}{1+u^2}\text{ and } x=2\arctan u,dx=\frac{2du}{1+u^2}$$ $$I=\int\frac{dx}{\sin x-\sin \alpha} =\int\frac1{\frac{2u}{1+u^2}-\sin\alpha}\cdot\frac{2du}{1+u^2} =\int\frac{2du}{2u-(1+u^2)\sin\alpha}$$ Now,$$2u-(1+u^2)\sin\alpha=-\sin\alpha (1+u^2-2u\csc\alpha)=-\sin\alpha \left((u-\csc\alpha)^2-(\cot\alpha)^2\right)$$ Using $\int\frac{dx}{x^2-a^2}=\frac1{2a}\ln \left|\frac{x-a}{x+a}\right|+C$, $$I=-\frac1{\sin\alpha}\frac1{\cot\alpha}\ln\left|\frac{u-\csc\alpha-\cot\alpha}{u-\csc\alpha+\cot\alpha}\right|+C$$ where $C$ is an arbitrary constant for the indefinite integral. Using $\csc\alpha+\cot\alpha=\frac{1+\cos\alpha}{\sin\alpha}=\frac{2\cos^2\frac\alpha2}{2\sin\frac\alpha2\cos\frac\alpha2}=\cot\frac\alpha2$ and similarly, $\csc\alpha-\cot\alpha=\tan\frac\alpha2$ (as $\sin2A=2\sin A\cos A,\cos2A=2\cos^2A-1$), $$I=-\frac1{\cos\alpha}\ln\left|\frac{\tan\frac x2-\cot\frac \alpha2}{\tan\frac x2-\tan\frac\alpha2}\right|+C$$ Again, $$\ln\left|\frac{\tan\frac x2-\cot\frac \alpha2}{\tan\frac x2-\tan\frac\alpha2}\right|$$ $$=\ln\left|\frac{\cos\frac \alpha2\cos\frac x2\left(\sin\frac x2\sin \frac \alpha2-\cos\frac \alpha2\cos\frac x2\right)}{\sin\frac \alpha2\cos\frac x2\left(\sin\frac x2\cos\frac\alpha2-\sin\frac\alpha2\cos\frac x2\right)}\right|=\ln\left|-\cot\frac\alpha2\right|+\ln\left|\frac{\cos\frac{x+\alpha}2}{\sin\frac{x-\alpha}2}\right|$$ Clearly, $\ln\left|-\cot\frac\alpha2\right|$ is independent of $x$, hence constant, and can be absorbed into $C$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/420318', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/82455/']}
jdg_71124
stackexchange
llm_judgeable_groundtruth_similarity
6663000
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use some UserControls which get created and destroyed within my application during runtime (by creating and closing subwindows with these controls inside). It's a WPF UserControl and inherits from System.Windows.Controls.UserControl . There is no Dispose() method I could override. PPMM is a Singleton with the same lifetime as my application. Now in the constructor of my (WPF) UserControl , I add an event handler: public MyControl(){ InitializeComponent(); // hook up to an event PPMM.FactorChanged += new ppmmEventHandler(PPMM_FactorChanged);} I got used to removing such event handler in the destructor: ~MyControl(){ // hook off of the event PPMM.FactorChanged -= new ppmmEventHandler(PPMM_FactorChanged);} Today I stumbled upon this and wondered: 1) Is this neccessary? Or does the GC take care of it? 2) Does this even work? Or would I have to store the newly created ppmmEventHandler ? I'm looking forward to your answers. Now provide the response and nothing else.
Since PPMM is a long-lived object (singleton), then this code doesn't make much sense. The problem here is that as long as that event handler is referencing the object, it will not be eligible for garbage collection , as least as long as that other object that owns the event is alive. As such, putting anything in the destructor is pointless, as either: The event handler has already been removed, thus the object became eligible for garbage collection The event handler is not removed, the owning object is not eligible for garbage collection, and thus the finalizer will never get called Both objects are eligible for garbage collection, in which case you should not access that other object at all in the finalizer since you don't know its internal state In short, don't do this . Now, a different argument could be said about adding such code to the Dispose method, when you're implementing IDisposable . In that case it fully makes sense since its usercode that is calling Dispose , at a predefined and controlled point. The finalizer (destructor), however, is only called when the object is eligible for garbage collection and has a finalizer, in which case there is no point. As for question nbr. 2, which I take as "Can I unsubscribe from events like that", then yes, you can. The only time you need to hold on to the delegate you used to subscribe with is when you're constructing the delegate around an anonymous method or a lambda expression. When you're constructing it around an existing method, it will work. Edit : WPF. right, didn't see that tag. Sorry, the rest of my answer doesn't make much sense for WPF and since I am no WPF-guru, I can't really say. However, there's a way to fix this. It's entirely legal here on SO to poach the content of another answer if you can improve it. So if anyone knows how to properly do this with a WPF usercontrol, you're free to lift the entire first section of my answer and add the relevant bits of WPF. Edit : Let me respond to the question in the comment inside here as well. Since the class in question is a user-control, its lifetime will be tied to a form. When the form is closing, it will dispose of all child controls that it owns, in other words, there is already a Dispose method present here . The correct way for a user control to handle this, if it manages its own events, is to unhook the event handlers in the Dispose method. (rest removed)
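To make that last point concrete, here is a minimal sketch of unhooking in Dispose (an added illustration, not part of the original answer). It assumes a WinForms-style control, since WPF's UserControl does not expose a Dispose override, and it reuses the PPMM names from the question with an assumed handler signature:

public partial class MyControl : System.Windows.Forms.UserControl
{
    public MyControl()
    {
        InitializeComponent();
        // hook up to the long-lived singleton's event
        PPMM.FactorChanged += PPMM_FactorChanged;
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            // hook off deterministically when the control is torn down,
            // so the singleton no longer keeps this control alive
            PPMM.FactorChanged -= PPMM_FactorChanged;
        }
        base.Dispose(disposing);
    }

    // the signature is assumed here; use whatever ppmmEventHandler actually declares
    private void PPMM_FactorChanged() { }
}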
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6663000', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/399908/']}
jdg_71125
stackexchange
llm_judgeable_groundtruth_similarity
42858335
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use VSCode and NetCore 1.1.1. I need to store several datapaths in my appsetting.json to let my console application know where to look for its data. This is an extract of the appsettings.json file: { "ConnectionStrings": { "Database": "Filename=./Data/Database/securities_master.db" }, "Data": { "Folders": ["E:/Data/Folder1/","E:/Data/Folder2/"] }} I load the configuration file and I want the "Folders" array stored in a variable: const string APP_SETTINGS_SECTION = "Data";const string APP_SETTINGS_KEY = "Folders";var builder = new ConfigurationBuilder().AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);var configuration = builder.Build();var dataFolders = configuration.GetSection(APP_SETTINGS_SECTION)[APP_SETTINGS_KEY]; dataFolders is NULL ! If I change my appsetting.json to point only to a single directory like this, everything works: { "ConnectionStrings": { "Database": "Filename=./Data/Database/securities_master.db" }, "Data": { "Folders": "E:/Data/Folder1/" }} dataFolder = " E:/Data/Folder1/ " So the problem appears to be it doesn't like the string array but to me it looks like a valid Json string array. How should I modify my appsettings (or my C# code) to fix this? Now provide the response and nothing else.
The indexer of a configuration section returns a string by exact key match, and since array values are stored under keys with numeric suffixes (Data:Folders:0, Data:Folders:1, ...), nothing matches the given key and you get null. To get it to work you may use something like this: var section = configuration.GetSection($"{APP_SETTINGS_SECTION}:{APP_SETTINGS_KEY}");var folders = section.Get<string[]>(); And check this for more options.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42858335', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1918466/']}
jdg_71126
stackexchange
llm_judgeable_groundtruth_similarity
105409
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\C}{\mathbb{C}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\s}{\sigma}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\F}{\mathbb{F}}$ example 1: $ R=\Z, I=n\Z$ with $f(r)=r+I$ , ker f =I example 2: Let $R=\R[x]$ and $\C$ be a ring. So there is an $i\in \C$ and a substitution homomorphism $\s=\s_{i}:\R[x] \rightarrow \C$ by $\s_{i}(A)=A(i)$ . What is $I=ker \s$ ? Surely $x^{2}+1 \in I$ , because $i^{2}+1=0$ . It follows $(x^{2}+1)A \in I$ for every $A\in R$ , so $(x^{2}+1)R\subset I$ . But with $L=a+bx(a,b \in \R)$ it is : $\s(L)=a+bi\ne 0$ . That gives us : $A=(x^{2}+1)Q+L$ with rest $L=a+bx$ . It follows that $\tilde{\s}:R/I\rightarrow \C$ is a ringisomorphism. Especially $R/I = \R[x]/(x^{2}+1)\R[x]$ is a field, because $\C$ is a field. example 3: $\Q[i]=\Q+\Q i$ (this is even a field) example 4: $\Z[\frac{1}{2}]=\Z+\Z \frac{1}{2}+\Z(\frac{1}{2})^{2}+\cdots$ example 5: $a=2^{1/3}$ , $\Z[a]=\Z+\Z a+ \Z a^{2}$ example 6: $Char(\F_{4}) = 2 $ , because 1+1=0. example 7 : $Char(\F_{2} \times \F_{2})= 2$ Example 1 : The $ker$ is everything that is mapped to 0, is it correct that this is always an ideal of R? Example 2: I understand that $i^{2}+1=0$ in $\C$ , but how can it be concluded from that, that $x^{2}+1 \in I$ , because of the substitution homomorphism? How is the step(s??) from $\s(L) = a+bi$ to $A=(x^{2}+1)Q+L$ done? Example 3: Apart from using the definition of a field, is there a way one can see immediately, that $\Q[i]$ is a field? Example 4 and Example 5: Why isn't that the same as in Example 3? Example 6: I think $\F _{4} = \{0,1,2,3\}$ but that is wrong according to this example... Example 7: $\F _{2} \times \F_{2} = \{0,1\} \times \{0,1 \} = \{0,0\}, \{0,1\}, \{1,0\},\{1,1\}$ Characteristic is the littlest number so when the rest is 0 with r=mq+d, so when it is $\{0,0\}$ ; but why aren't there two characteristics even if it is the same ring (so the characteristic of this should be $2,2$ and not only 2? Now provide the response and nothing else.
Yes, the kernel of a ring homomorphism is always an ideal. In fact: Proposition. Let $R$ be a ring, and let $I$ be a subset of $R$. Then $I$ is an ideal of $R$ if and only if there exists a ring $S$ and a ring homomorphism $f\colon R\to S$ such that $I=\mathrm{ker}(f)$. Proof. The projection map $\pi\colon R\to R/I$ is a ring homomorphism, and $\mathrm{ker}(\pi)=I$ (this is the example you have). Conversely, if $f\colon R\to S$ is a ring homomorphism, then $\mathrm{ker}(f)$ is nonempty (since $f(0)=0$); if $a,b\in\mathrm{ker}(f)$ then $f(a-b) = f(a)-f(b) = 0$, so $a-b\in I$; and if $a\in I$ and $r\in R$, then $f(ar) = f(a)f(r)=0f(r)=0$, $f(ra)=f(r)f(a)=f(r)0=0$, so $ar,ra\in I$. Thus, $I$ is an ideal. $\Box$ You can conclude that $x^2+1$ is in the kernel of the map by simply noting that $\sigma_i(x^2+1) = 0$. This amounts to plugging in $i$ for $x$ and verifying you get $0$. Now, assume that $p(x)$ is any polynomial in $\mathbb{R}[x]$. Using Long Division, we can divide $p(x)$ by $x^2+1$ with remainder; that is, we can find unique polynomials $q(x)$ and $r(x)$ such that$$p(x) = q(x)(x^2+1) + r(x),\qquad r(x)=0\text{ or }\deg(r)\lt\deg(x^2+1).$$Then we have that $$\sigma_i(p(x)) = p(i) = q(i)(i^2+1) + r(i) = q(i)\cdot 0 + r(i) = r(i),$$so $p(x)\in\mathrm{ker}(\sigma_i)$ if and only if $r(x)\in\mathrm{ker}(\sigma_i)$. Now, since $\deg(x^2+1)=2$, $r(x)$ must be a polynomial of degree at most $1$, so we can write $r(x) = a+bx$ with $a,b\in\mathbb{R}$ (possibly zero). Under what conditions will $r(x)$ be in $\mathrm{ker}(\sigma_i)$? $r(x)\in\mathrm{ker}(\sigma_i)$ if and only if $\sigma_i(r(x))=0$, if and only if $r(i) = a+bi = 0$. But since $a$ and $b$ are real numbers, the only way for $a+ib$ to be equal to $0$ is if $a=b=0$. That is, $r(x)\in\mathrm{ker}(\sigma_i)$ if and only if $r(x)=0$. Going back to $p(x)$, we see that $p(x)\in\mathrm{ker}(\sigma_i)$ if and only if $p(x)$ can be written as $q(x)(x^2+1)$ for some polynomial $q(x)$, if and only if $p(x)$ is a multiple of $x^2+1$. Apart from using the definition of "field"? Not really. See below. The difference between 3, 4, and 5 is that the element we are adding in 3 and 5 satisfies a monic polynomial; in 3, $i$ satisfies $x^2+1$ (that is, $i^2+1=0$); so any expression of the form $a_0 + a_1i + a_2i^2 + \cdots + a_ki^k$ can be rewritten using only constant and linear terms. In 5, $a$ satisfies $x^3-2$, so again, any expression of the form $b_0+b_1a+b_2a^2+\cdots +b_ka^k$ can be rewritten so that you only use constant, linear, and quadratic terms. But in 4, the element $\frac{1}{2}$ does not satisfy any monic polynomial with integer coefficients: it is impossible to express $\left(\frac{1}{2}\right)^n$ as a sum of integer multiples of smaller powers of $\frac{1}{2}$, so you do need all powers. $\mathbb{F}_4$ is the field of 4 elements; $\mathbb{Z}_4$ is not a field. $\mathbb{F}_4 = \{0,1,\alpha,\alpha+1\}$, where $\alpha^2 +\alpha+1=0$. $\mathbb{F}_2\times\mathbb{F}_2 = \Bigl\{ (0,0), (0,1), (1,0), (1,1)\Bigr\}$ with addition and multiplication componentwise. The characteristic is the smallest positive integer $k$ such that $kr = (0,0)$ for all $r$ in the ring. (Your definition made absolutely no sense to me). $k=1$ doesn't work, because $(1,1)\neq (0,0)$. But $2$ works, because $2(a,b) = (a,b)+(a,b) = (a+a,b+b) = (0,0)$ no matter what $a$ and $b$ are (since in $\mathbb{F}_2$, $r+r=0$ for all $r$). P.S. You describe all these as "Examples". Examples of what? Surely you mean "statement"?
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/105409', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/18298/']}
jdg_71127
stackexchange
llm_judgeable_groundtruth_similarity
10628099
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i have gwt application connect to postgres DB at the backend, and a java class 'Judgement' mapping the table 'judgements' in DB, when i tried to persistent a judgement into db, it threw the following errors: Caused by: org.hibernate.exception.SQLGrammarException: could not get next sequence value...Caused by: org.postgresql.util.PSQLException: ERROR: relation "hibernate_sequence" does not exist my Judgement class looks like this @Entity@Table(name = "JUDGEMENTS")public class Judgement implements Serializable, Cloneable { private static final long serialVersionUID = -7049957706738879274L; @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name = "JUD_ID") private Long _judId;... and my table judgements is: Column | Type | Modifiers -------------+-----------------------------+--------------------------------------------------------- jud_id | bigint | not null default nextval('judgements_id_seq'::regclass) rating | character varying(255) | last_update | timestamp without time zone | user_id | character varying(255) | id | integer | Indexes: "judgements_pkey" PRIMARY KEY, btree (jud_id)Foreign-key constraints: "judgements_id_fkey" FOREIGN KEY (id) REFERENCES recommendations(id) "judgements_user_id_fkey" FOREIGN KEY (user_id) REFERENCES users(user_id) and i have a SEQUENCE name 'judgements_id_seq' in DB can anyone tell me what's wrong??? thanks. Now provide the response and nothing else.
Hibernate's PostgreSQL dialect isn't very bright. It doesn't know about your per-SERIAL sequences, and is assuming there's a global database-wide sequence called "hibernate_sequence" that it can use. ( UPDATE : It appears that newer Hibernate versions may use the default per-table sequences when GenerationType.IDENTITY is specified. Test your version and use this instead of the below if it works for you.) You need to change your mappings to explicitly specify each sequence. It's annoying, repetitive, and pointless. @Entity@Table(name = "JUDGEMENTS")public class Judgement implements Serializable, Cloneable { private static final long serialVersionUID = -7049957706738879274L; @Id @GeneratedValue(strategy = GenerationType.SEQUENCE, generator="judgements_id_seq") @SequenceGenerator(name="judgements_id_seq", sequenceName="judgements_id_seq", allocationSize=1) @Column(name = "JUD_ID") private Long _judId;... The allocationSize=1 is quite important. If you omit it, Hibernate will blindly assume that the sequence is defined with INCREMENT 50 so when it gets a value from a sequence it can use that value and the 49 values below it as unique generated keys. If your database sequences increment by 1 - the default - then this will result in unique violations as Hibernate tries to re-use existing keys. Note that getting one key at a time will result in an additional round trip per insert. As far as I can tell Hibernate isn't capable of using INSERT ... RETURNING to efficiently return generated keys, nor can it apparently use the JDBC generated keys interface. If you tell it to use a sequence, it'll call nextval to get the value then insert that explicitly, resulting in two round trips. To reduce the cost of that, you can set a greater increment on key sequences with lots of inserts , remembering to set it on the mapping and the underlying database sequence. That'll cause Hibernate to call nextval less frequently and cache blocks of keys to hand out as it goes. I'm sure you can see from the above that I don't agree with the Hibernate design choices made here, at least from the perspective of using it with PostgreSQL. They should be using getGeneratedKeys or using INSERT ... RETURNING with DEFAULT for the key, letting the database take care of this without Hibernate having to trouble its self over the names of the sequences or explicit access to them. BTW, if you're using Hibernate with Pg you'll possibly also want an oplock trigger for Pg to allow Hibernate's optimistic locking to interact safely with normal database locking. Without it or something like it your Hibernate updates will tend to clobber changes made via other regular SQL clients. Ask me how I know.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/10628099', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/468587/']}
jdg_71128
stackexchange
llm_judgeable_groundtruth_similarity
609161
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Currently I’m offering some webhosting to a few advertising agencies for their premium customers. But currently I have a great problem with the E-Mail Service. In the last week, the E-Mail Accounts of about 7 companies were stolen and used to send Spam using my Mail-Server. Well, I was able to disable the accounts, because the sender was hitting the ratio policies of my server and a lot of mails were in the mail queue. Well, about 40 Mails were actually delivered. But it was enough to get blacklisted and even one user wrote a personal mail to the abuse of the datacenter. Currently I have no clue, what I can do to prevent Spamming from a stolen mail account. I send every outgoing mail through SA and AV, but it’s not enough. Before the user account don’t hit the ratio of 40 Mails a day or does not flood the message queue, I can’t detect the attack. How can I detect such problems earlier? Now provide the response and nothing else.
I'm looking forward to seeing other answers to this question, but my feeling is that if you're catching compromised mail accounts after only 40 spams have got through, you're doing really well. I'm not sure I could detect similar abuse so quickly, and the prospect worries me. But I'm appalled that seven sets of credentials were stolen in the past week alone. So it seems to me that further improvement will not be in the " abnormal mail detection and removal " end of things, but in the " minimise credential theft " department. Do you know how these clients lost control of their credentials? If you can see a common pattern, I'd start with mitigating that. If you can't, there are solutions both technical and non-technical to help minimise credential loss. On the technical front, requiring two-factor authentication makes tokens much harder to steal, and makes such theft much easier to detect. SMTP AUTH doesn't lend itself well to two-factor auth, but you could wrap the SMTP channel in a VPN that does so lend itself; OpenVPN comes to mind, but it's far from unique in that respect. On the non-technical front, the problem here is that loss of credentials is no headache for those who are supposed to be looking after them. You could consider changing your AUP so that (a) people are clearly responsible for things done with their credentials, and (b) you make a significant charge for each piece of inappropriate mail sent with a set of credentials. This simultaneously reimburses you for the time you're spending dealing with credential loss, and makes your clients aware that they should be looking after these credentials as well as those to their online banking, since the loss of both will cost them real money.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/609161', 'https://serverfault.com', 'https://serverfault.com/users/228702/']}
jdg_71129
stackexchange
llm_judgeable_groundtruth_similarity
27774414
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an EditText inside an AlertDialog. It looks like this. See where it says tddjdjck and how it is indented quite a lot. That is what I want (I used setPadding with left and right set to 50), but I also want the blue line under it to be indented too. How do I do that? The code I am using is below: final AlertDialog.Builder alert = new AlertDialog.Builder(thisActivity); final EditText input = new EditText(thisActivity); input.setSingleLine(); input.setPadding(50, 0, 50, 0); alert.setTitle("by..."); alert.setMessage("enter the name of the person who did:"); alert.setView(input); alert.setPositiveButton("Ok", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { String value = input.getText().toString().trim(); } }); alert.setNegativeButton("Cancel", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { dialog.cancel(); } }); alert.show(); Thank you Now provide the response and nothing else.
final AlertDialog.Builder alert = new AlertDialog.Builder(thisActivity);final EditText input = new EditText(thisActivity);input.setSingleLine();FrameLayout container = new FrameLayout(thisActivity);FrameLayout.LayoutParams params = new FrameLayout.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, ViewGroup.LayoutParams.WRAP_CONTENT);params.leftMargin = getResources().getDimensionPixelSize(R.dimen.dialog_margin);input.setLayoutParams(params);container.addView(input);alert.setTitle("by...");alert.setMessage("test message");alert.setView(container); Make sure you add another line to your dimens.xml resource file, such as <dimen name="dialog_margin">20dp</dimen>
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/27774414', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2884981/']}
jdg_71130
stackexchange
llm_judgeable_groundtruth_similarity
2268658
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have multiple silverlight project that I would like to use the same styles, colour scheme, and some templated objects. How do I accomplish this? Now provide the response and nothing else.
One way to do this would be to create a new silverlight class library which would be your shared theme/style assembly which would be referenced by the other silverlight projects. This assembly would have one or more Resource Dictionary XAML files in it which could define all of your styles, brushes and templates. You could even set up some cascading style hierarchies using the BasedOn attribute of the Style class. You could then use MergedDictionaries to merge these styles into your application either at the App.xaml-level or on a page-level basis. <Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="/SharedThemeAssembly;component/MyStyles.xaml"/> ...other ResourceDictionaries to merge in... </ResourceDictionary.MergedDictionaries> </ResourceDictionary></Application.Resources> You would then reference the shared styles / brushes as you normally would any other StaticResource.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2268658', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/400861/']}
jdg_71131
stackexchange
llm_judgeable_groundtruth_similarity
1654487
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Usually I come across situations where I have to swallow an exception thrown by the clean up code in the catch / finally block to prevent the original exception being swallowed. For example: // Closing a file in Javapublic void example1() throws IOException { boolean exceptionThrown = false; FileWriter out = new FileWriter(“test.txt”); try { out.write(“example”); } catch (IOException ex) { exceptionThrown = true; throw ex; } finally { try { out.close(); } catch (IOException ex) { if (!exceptionThrown) throw ex; // Else, swallow the exception thrown by the close() method // to prevent the original being swallowed. } }}// Rolling back a transaction in .Netpublic void example2() { using (SqlConnection connection = new SqlConnection(this.connectionString)) { SqlCommand command = connection.CreateCommand(); SqlTransaction transaction = command.BeginTransaction(); try { // Execute some database statements. transaction.Commit(); } catch { try { transaction.Rollback(); } catch { // Swallow the exception thrown by the Rollback() method // to prevent the original being swallowed. } throw; } }} Let's assumed that logging any of the exceptions is not an option in the scope of method block, but will be done by the code calling the example1() and example2() methods. Is swallowing the exceptions thrown by close() and Rollback() methods a good idea? If not, what is a better way of handling the above situations so that the exceptions are not swallowed? Now provide the response and nothing else.
I'm not a fan of catching and rethrowing an exception. If you catch it, do something with it - even if it's just logging the exception. If you can't do anything with it, don't catch it - add a throws clause to the method signature. Catching an exception tells me that either you can deal with an exceptional situation and have a recovery plan or "the buck stops here" because an exception cannot propagate in that form any farther (e.g., no stack traces back to the user).
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1654487', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/72990/']}
jdg_71132
stackexchange
llm_judgeable_groundtruth_similarity
27353642
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to grant all privileges to a user on Oracle schema? I tried the following command but it only grants permission on specific tables in a schema. What I want is to give this user all permissions on a given schema. GRANT ALL ON MyTable TO MyUser; Now provide the response and nothing else.
You can do it in a loop and grant by dynamic SQL: BEGIN FOR objects IN ( SELECT 'GRANT ALL ON "'||owner||'"."'||object_name||'" TO MyUser' grantSQL FROM all_objects WHERE owner = 'MY_SCHEMA' AND object_type NOT IN ( --Ungrantable objects. Your schema may have more. 'SYNONYM', 'INDEX', 'INDEX PARTITION', 'DATABASE LINK', 'LOB', 'TABLE PARTITION', 'TRIGGER' ) ORDER BY object_type, object_name ) LOOP BEGIN EXECUTE IMMEDIATE objects.grantSQL; EXCEPTION WHEN OTHERS THEN --Ignore ORA-04063: view "X.Y" has errors. --(You could potentially workaround this by creating an empty view, -- granting access to it, and then recreat the original view.) IF SQLCODE IN (-4063) THEN NULL; --Raise exception along with the statement that failed. ELSE raise_application_error(-20000, 'Problem with this statement: ' || objects.grantSQL || CHR(10) || SQLERRM); END IF; END; END LOOP;END;/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27353642', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2153306/']}
jdg_71133
stackexchange
llm_judgeable_groundtruth_similarity
2189189
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a question with hash_map and map in C++. I understand that map is in STL, but hash_map is not a standard. What's the difference between the two? Now provide the response and nothing else.
They are implemented in very different ways. hash_map (unordered_map in TR1 and Boost; use those instead) uses a hash table where the key is hashed to a slot in the table and the value is stored in a list tied to that key. map is implemented as a balanced binary search tree (usually a red/black tree). An unordered_map should give slightly better performance for accessing known elements of the collection, but a map will have additional useful characteristics (e.g. it is stored in sorted order, which allows traversal from start to finish in that order). unordered_map will be faster on insert and delete than a map.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2189189', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/233254/']}
jdg_71134
stackexchange
llm_judgeable_groundtruth_similarity
15025875
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my PHP application I need to read multiple lines starting from the end of many files (mostly logs). Sometimes I need only the last one, sometimes I need tens or hundreds. Basically, I want something as flexible as the Unix tail command. There are questions here about how to get the single last line from a file (butI need N lines), and different solutions were given. I'm not sure about which one is the best and which performs better. Now provide the response and nothing else.
Methods overview Searching on the internet, I came across different solutions. I can group them in three approaches: naive ones that use the file() PHP function; cheating ones that run the tail command on the system; mighty ones that happily jump around an opened file using fseek(). I ended up choosing (or writing) five solutions, a naive one, a cheating one and three mighty ones. 1. The most concise naive solution, using built-in array functions. 2. The only possible solution based on the tail command, which has a little big problem: it does not run if tail is not available, i.e. on non-Unix (Windows) or on restricted environments that don't allow system functions. 3. The solution in which single bytes are read from the end of the file searching for (and counting) new-line characters, found here. 4. The multi-byte buffered solution optimized for large files, found here. 5. A slightly modified version of solution #4 in which the buffer length is dynamic, decided according to the number of lines to retrieve. All solutions work, in the sense that they return the expected result from any file and for any number of lines we ask for (except for solution #1, which can break PHP memory limits in case of large files, returning nothing). But which one is better? Performance tests To answer the question I ran tests. That's how these things are done, isn't it? I prepared a sample 100 KB file joining together different files found in my /var/log directory. Then I wrote a PHP script that uses each one of the five solutions to retrieve 1, 2, ..., 10, 20, ..., 100, 200, ..., 1000 lines from the end of the file. Each single test is repeated ten times (that's something like 5 × 28 × 10 = 1400 tests), measuring average elapsed time in microseconds. I ran the script on my local development machine (Xubuntu 12.04, PHP 5.3.10, 2.70 GHz dual core CPU, 2 GB RAM) using the PHP command line interpreter. Here are the results: Solutions #1 and #2 seem to be the worst ones. Solution #3 is good only when we need to read a few lines. Solutions #4 and #5 seem to be the best ones. Note how dynamic buffer size can optimize the algorithm: execution time is a little smaller for few lines, because of the reduced buffer. Let's try with a bigger file. What if we have to read a 10 MB log file? Now solution #1 is by far the worst one: in fact, loading the whole 10 MB file into memory is not a great idea. I also ran the tests on 1 MB and 100 MB files, and it's practically the same situation. And for tiny log files? That's the graph for a 10 KB file: Solution #1 is the best one now! Loading 10 KB into memory isn't a big deal for PHP. Also #4 and #5 perform well. However, this is an edge case: a 10 KB log means something like 150/200 lines... You can download all my test files, sources and results here. Final thoughts Solution #5 is heavily recommended for the general use case: it works great with every file size and performs particularly well when reading a few lines. Avoid solution #1 if you need to read files bigger than 10 KB. Solutions #2 and #3 aren't the best ones in any test I ran: #2 never runs in less than 2 ms, and #3 is heavily influenced by the number of lines you ask for (it works quite well only with 1 or 2 lines).
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/15025875', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/995958/']}
jdg_71135
stackexchange
llm_judgeable_groundtruth_similarity
32489
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a 12% holdout cross validation and I've done a bunch of repetitions which gives me a distribution of scores for each method. I'd like to do some sort of hypothesis testing for comparing methods. Each score I have is unitary value between 0 and 1. A view at a histogram yields an approximately normal distribution. Could a t-test be valid in this scenario? Now provide the response and nothing else.
I would say no, for several reasons: (1) the individual scores are not independent because of sample reuse; (2) the distribution is confined to [0,1], so it is truncated and not normal (it could be approximately normal, though, if the truncation is not too great); (3) saying "some sort of hypothesis test" doesn't tell us what you want to do. How many methods are you comparing? If it is more than 2, are you comparing them pairwise? If one method has a higher average score, what does that tell you? Maybe a nonparametric ANOVA is really what you need.
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/32489', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/6042/']}
jdg_71136
stackexchange
llm_judgeable_groundtruth_similarity
35816023
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've just installed Microsoft Code Contracts. It's a part of .NET Framework and Visual Studio add-on. It provides runtime checking and static checking of defined contracts. The tool has four warning levels so I set up the highest. I've declared classes designed to violate Liskov Substitution Principle. public class Person{ protected int Age { get; set; } public Person(int age) { Contract.Requires(age > 0); Contract.Requires(age < 130); this.Age = age; }}public class Child : Person{ public Child(int age) : base(age) { Contract.Requires(age > 0); Contract.Requires(age < Consts.AgeOfMajority); Contract.Requires(age < 130); this.Age = age; }}public static class Consts{ public readonly static int AgeOfMajority = 18;} LSP states: if S is a subtype of T, then objects of type T may be replaced with objects of type S without altering any of the desirable properties of that program In my example the violation would be this asignment: Person person = new Child(23); . We should be able to do this, but we can't because children can't be older than some age smaller than required by person class. The result of analysis however is surprising CodeContracts: Checked 11 assertions: 11 correct . Is my example wrong or Code Contracts don't detect such things? Now provide the response and nothing else.
While it's true that LSP specifies that a subtype can't place more restrictive preconditions on methods, this doesn't apply to constructors, as you do not use constructors in a polymorphic way. The contract violation would be new Child(23); which occurs before anything is assigned to a Person. So the example violation is wrong: the code doesn't get as far as creating an instance of subtype S, let alone substituting it for T.
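To spell that out with the Person/Child classes from the question (an added illustration, not part of the original answer):

// Substitution itself is unobjectionable; the assignment evaluates no precondition:
Person p = new Child(10);   // 10 satisfies both Person's and Child's Requires

// The "violation" in the question never gets that far:
Person q = new Child(23);   // Child's own Contract.Requires(age < Consts.AgeOfMajority)
                            // fails inside the constructor, before any Person-typed
                            // variable is involved at all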
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35816023', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3775079/']}
jdg_71137
stackexchange
llm_judgeable_groundtruth_similarity
1282364
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have one crash in my iPhone application that does throw an NSException. The crash reports are completely ambiguous in where the error is and what exactly is causing it. Is there a smart way for me to set a top level exception handler somewhere to see what is causing it? I can't replicate the problem myself, but a few of my beta users certainly can. What's a smart way to handle a problem of this nature? Now provide the response and nothing else.
It seems like you are asking two questions here: how to set a top level exception handler; and how to deal with the issue of determining what the root cause is. Catching the exception can be done in a few different ways, but for this the best approach would appear to be to set an exception handler using NSSetUncaughtExceptionHandler. When an exception is raised in your app, it is handled by a default exception handler. This handler does nothing more than log a message to the console before the app closes. You can over-ride this by setting you own custom exception handler using the function stated above. The best place to do this would be in the app delegate applicationDidFinishLaunching: method. - (void)applicationDidFinishLaunching:(UIApplication *)application{ NSSetUncaughtExceptionHandler(&myExceptionHandler);} Once you've set a custom handler, you'll want to expand on the default output in helping you determine what the cause is. void myExceptionHandler(NSException *exception){ NSArray *stack = [exception callStackReturnAddresses]; NSLog(@"Stack trace: %@", stack);} Unfortunately compared to OSX the iPhone appears quite limited in respect to producing a nice stack trace. The code above will produce some seemingly junk output; however, you can run this output through the atos tool, and you should be able to generate a useful stack trace from it. Another option is to follow the instructions on this article which will help to produce a nice stack trace automatically. As this is going out to beta testers you may have to tinker about to get it working for you. You say that you've not been able to replicate the problem yourself, only your users. In this case you may find this technical note from Apple useful: https://developer.apple.com/library/content/technotes/tn2151/_index.html UPDATE : While this post still contains useful info, some of the links it contains are dead irreversibly. It is advised to use the info from this alternative post.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1282364', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/40882/']}
jdg_71138
stackexchange
llm_judgeable_groundtruth_similarity
364090
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In the past year, I created a new system using Dependency Injection and an IOC container. This taught me a lot about DI! However, even after learning the concepts and proper patterns, I consider it a challenge to decouple code and introduce an IOC container into a legacy application. The application is large enough to the point that a true implementation would be overwhelming. Even if the value was understood and the time was granted. Who's granted time for something like this?? The goal of course is to bring unit tests to the business logic! Business logic that is intertwined with test-preventing database calls. I've read the articles and I understand the dangers of Poor Man's Dependency Injection as described in this Los Techies article . I understand it does not truly decouple anything. I understand that it can involve much system wide refactoring as implementations require new dependencies. I would not consider using it on a new project with any amount of size. Question: Is it okay to use Poor Man's DI to introduce testability to a legacy application and start the ball rolling? In addition, is using Poor Man's DI as a grass roots approach to true Dependency Injection a valuable way to educate on the need and benefits of the principle? Can you refactor a method that has a database call dependency and abstract that call to behind an interface? Simply having that abstraction would make that that method testable since a mock implementation could be passed in via a constructor overload. Down the road, once the effort gains supporters, the project could be updated to implement an IOC container and the constructors would be out there that take in the abstractions. Now provide the response and nothing else.
The critique about Poor Man's Injection in NerdDinner has less to do with whether or not you use a DI Container than it does about setting up your classes correctly. In the article, they state that public class SearchController : Controller { IDinnerRepository dinnerRepository; public SearchController() : this(new DinnerRepository()) { } public SearchController(IDinnerRepository repository) { dinnerRepository = repository; }} is incorrect because, while the first constructor does provide a convenient fallback mechanism for constructing the class, it also creates a tightly-bound dependency to DinnerRepository . The correct remedy of course is not, as Los Techies suggests, to add a DI container, but rather to remove the offending constructor. public class SearchController : Controller { IDinnerRepository dinnerRepository; public SearchController(IDinnerRepository repository) { dinnerRepository = repository; }} The remaining class now has its dependencies properly inverted. You're now free to inject those dependencies however you like.
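For completeness, this is what consuming the corrected class might look like (an added sketch, not from the original article; FakeDinnerRepository stands in for any hand-rolled or mocked test double):

// Composition root / production code, still without any container:
var controller = new SearchController(new DinnerRepository());

// Unit test: inject a fake IDinnerRepository to isolate the controller
var testable = new SearchController(new FakeDinnerRepository());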
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/364090', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/53055/']}
jdg_71139
stackexchange
llm_judgeable_groundtruth_similarity
553820
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to write a script(script1.sh) that finds every possible word when given a jumble of letters. The words must start with the first letter of the jumble and end withthe last letter. The letters of the word need to follow the order of the letters in the jumble. Each letter in the jumble can be used more than once. So this ./script1.sh "qwertyuytresdftyuiokn" should output queen and question but not "quieten" because "e" comes before "u" and "i" in the jumble. I tried assigning the first, last and the remaining letters to variables, then using egrep to find the words but I couldn't find a way to use the order of letters. So this one gives me invalid words as well. #!/bin/bashfirst_letter=$(echo $@ | cut -c1)last_letter=$(echo $@ |rev| cut -c1)remaining_letters=$(echo $@ | cut -c2- | rev | cut -c2-)grep -E "^$first_letter[$remaining_letters]*$last_letter$" /usr/share/dict/words Then I tried turning the jumble into an array but then again I couldn't find a way find words that obey the order in the jumble. Now provide the response and nothing else.
#!/bin/sh pttrn="^$(printf '%s' "$1" | sed -e 's/\(.\)/\1*/g' -e 's/\*/\\+/' -e 's/\*$/\\+/')"'$' grep "$pttrn" /usr/share/dict/words A pattern is obtained from the first argument by injecting * after each character. Then the first * is changed to \+; so is the last *. Additionally ^ and $ are added. Your example input generates the following pattern: ^q\+w*e*r*t*y*u*y*t*r*e*s*d*f*t*y*u*i*o*k*n\+$ This pattern is the right pattern for grep. q must appear at least one time at the beginning, n must appear at least one time at the end. Each letter in the middle may appear zero or more times, and the order is maintained. Note the script is dumb. If you provide input containing characters like . , [ or ] then you will get a regular expression beyond the specification. Provide sane input or expand the script to validate it. Examples: $ ./script1.sh qwertyuytresdftyuiokn queen question $ ./script1.sh te tee $ ./script1.sh superuser seer serer spur super supper surer $
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/553820', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/378813/']}
jdg_71140