source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 51386 |
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new to Kalman filtering and state estimation and I'd like some guidance on EKFs. Currently, I'm trying to use a linear prediction model coupled with nonlinear measurements to estimate the state of an object. My state vector ($\bar{x}$) is: \begin{bmatrix}x\\y\\v_x\\v_y\end{bmatrix} and my state model is (with no process noise ($W_k$) or process noise covariance ($Q_k$)):$$\bar{x}_{k_p} = A\bar{x}_{k-1} \ + B\begin{bmatrix} a_x \\ a_y \end{bmatrix}$$ $$P_{k_p} = AP_{k-1}A^T$$where$$A = \begin{bmatrix} 1 & 0 & \Delta T & 0 \\0 & 1 & 0 & \Delta T \\0 & 0 & 1 & 0 \\0 & 0 & 0 & 1 \end{bmatrix}$$and$$B = \begin{bmatrix} \frac{1}{2}\Delta T^2 & 0 \\0 & \frac{1}{2}\Delta T^2 \\\Delta T & 0 \\0 & \Delta T\end{bmatrix} $$ I have simulated some measurement data with Gaussian noise. The measurements are outputted as $\begin{bmatrix}r\\\theta\end{bmatrix}$, and this is the graph of the generated data: To account for the polar coordinates, I was going to use a function $h(z)$ that transformed from polar to cartesian coordinates in my state update equation $\bar{x}_k = \bar{x}_{k_p} \ + K_k(h(\bar{z}_k)-\bar{x}_{k_p})$? This seemed like the simple thing to do, but this post does say that if I did so, my filter wouldn't work as the polar to cartesian transform isn't a linear transform. To resolve this, I want to use an EKF to linearize the measurements, and after reading a few articles online, my gain calculation and update step are as follows: $$K_k = P_kJ(\bar{z}_k)^T \ (J(\bar{z}_k)P_kJ(\bar{z}_k)^T + R)^{-1}$$ $$\bar{x}_k = \bar{x}_{k_p} \ + K_k(Z_k - H\bar{x}_{k_p})$$ $$P_k = (I - K_k J(\bar{z}_k))P_{k_p}$$ where $K_k$ is the Kalman gain, $\bar{z}_k$ is the measurement vector outputted by the sensor, and $J(\bar{z}_k)$ is a Jacobian to linearize the polar coordinate measurements. These are all at time $k$. In addition, my prediction equations are the same linear equations as shown above. I used $x = r \cos(\theta)$ and $y = r \sin(\theta)$ to derive the following Jacobian:$$J(r,\theta) = \begin{bmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \theta} \\\frac{\partial y}{\partial r} & \frac{\partial y}{\partial \theta} \\\frac{\partial v_x}{\partial r} & \frac{\partial v_x}{\partial \theta} \\\frac{\partial v_y}{\partial r} & \frac{\partial v_y}{\partial \theta} \\\end{bmatrix} =\begin{bmatrix} \cos(\theta) & -r\sin(\theta) \\\sin(\theta) & r\cos(\theta) \\0 & 0 \\0 & 0\end{bmatrix}$$ Would this Jacobian and model be the correct way to go about my problem?Thanks for your help! P.S. I'm super new to this so if some of my lingo is wrong or weird, if I'm missing something or something makes no sense, or if you have some tips to better my understanding, please let me know. Thanks! EDIT: I converted the polar values to the cartesian plane with $x = r\cos(\theta)$ and $y=r\sin(\theta)$, and then graphed the $x$ and 4y$ values. This is the result: Obviously, something seems to be working, but I don't know/understand why, especially if the transform isn't linear. If someone could help my understanding of this as well, I'd truly appreciate it.
Now provide the response and nothing else.
| Update If I understood your model, you have a model of Constant Velocity in 2D (Cartesian Coordinate System). While your measurement are in Polar Coordinate System. Pay attention that your measurement function is: $$ h \left( x, y, {v}_{x}, {v}_{y} \right) = \begin{bmatrix}\sqrt{ {x}^{2} + {y}^{2} } \\ {\tan}^{-1} \left( \frac{y}{x} \right )\end{bmatrix} $$ Hence you Jacobian becomes: $$\begin{aligned}{J}_{h} \left( x, y, {v}_{x}, {v}_{y} \right) & = \begin{bmatrix}\frac{\partial \sqrt{ {x}^{2} + {y}^{2} } }{\partial x} & \frac{\partial \sqrt{ {x}^{2} + {y}^{2} } }{\partial y} & \frac{\partial \sqrt{ {x}^{2} + {y}^{2} } }{\partial {v}_{x}} & \frac{\partial \sqrt{ {x}^{2} + {y}^{2} } }{\partial {v}_{y}} \\ \frac{\partial {\tan}^{-1} \left( \frac{y}{x} \right ) }{\partial x} & \frac{\partial {\tan}^{-1} \left( \frac{y}{x} \right ) }{\partial y} & \frac{\partial {\tan}^{-1} \left( \frac{y}{x} \right ) }{\partial {v}_{x}} & \frac{\partial {\tan}^{-1} \left( \frac{y}{x} \right ) }{\partial {v}_{y}}\end{bmatrix} \\& = \begin{bmatrix}\frac{x}{ \sqrt{ {x}^{2} + {y}^{2} } } & \frac{y}{ \sqrt{ {x}^{2} + {y}^{2} } } & 0 & 0 \\-\frac{y}{{x}^{2} + {y}^{2}} & \frac{x}{{x}^{2} + {y}^{2}} & 0 & 0\end{bmatrix}\end{aligned}$$ The function of the measurement is the one connecting your state vector into the measurement and not the other way around. Let's look on Wikipedia EKF Model: In you case $ F $ is constant is the model is linear. What's in Wikipedia called $ H $ is the $ J $ I derived above. Dimension wise, all is perfectly defined. Implementation I implemented a general Kalman Filter Iteration with support for Extended Kalman Filter (With option for Numeric Calculation of the Jacobian). I also added option to Unsecented Kalman Filter (UKF) Iteration, so you will be able to compare. Here is a result in with the same model as yours: The full code is available on my StackExchange Signal Processing Q51386 GitHub Repository (Look at the SignalProcessing\Q51386 folder). 24/08/2018:I added UKF implementation which is pretty general so you could use it in various models. Original Answer Converting coordinate system is the main reason the Extended Kalman Filter was invented. Let me give you a tip, it doesn't work well in those cases. If you use Non Linear Transformation use something that will both make things easier and better (Yea, usually it doesn't work like that, but in this case it does) - Use the Unscented Kalman Filter (UKF) which is based on the Unscented Transform . Once you utilize that there is no need to derive the Jacobian. All needed is to apply the non linear function $ n $ times (On each Sigma Point). It is easy to see that linearization doesn't work well for propagating the mean and the covariance in many (Most) cases. The UKF directly approximate the calculation of the integration of the non linear function which calculates the mean and covariance. It will make things easier as you'll be able to skip the linearization step and only know the coordinate transformation function. In modern tracking we usually stay away from EKF and utilize methods which better approximate the integrals of the first 2 moments propagation. The most common ones are the UKF and GHKF (Those are called Sigma Points Kalman filters). Their generalization is the Particle Filter which in most cases is over kill. Update Have a look at EKF / UKF Maneuvering Target Tracking using Coordinated Turn Models with Polar/Cartesian Velocity . 
From their conclusion: We have shown a range of coordinated turn (CT) models using either Cartesian or polar velocity and how to use them in a Kalman filtering framework for maneuvering target tracking. The results of the conducted simulation study are in favor of polar velocity. This confirms the results of the previous study [11] and extends it to the case of varying target speed. For polar CT models, the performance in terms of position RMSE of the predicted state appears to be comparable for EKF and UKF. As the UKF does not require the derivation and implementation of Jacobians it might be more straightforward to implement. The RMSE provided by the Cartesian velocity EKF and UKF turned out slightly worse. Interestingly, the sensitivity of the RMSE with respect to the noise parameters was decreased by using EKF2 and UKF in the Cartesian case. This, in addition to the simpler implementation and lower computational cost of UKF over EKF2 results in a recommendation for UKF if Cartesian CT models are preferred. Basically telling you, don't bother with Jacobians, just use the simpler UKF. Another comparison is made at Implementation of the Unscented Kalman Filter and a simple Augmentation System for GNSS SDR receivers with: As can be seen, UKF implementation does not require linearization (state transition function and measurements functions are directly applied to sigma points) and it can also work in presence of discontinuities. The prediction only consists of linear algebra operations. All such advantages are fundamental for minimizing computational load in an SDR implementation. While the classical Kalman Filter implies the propagation of n components for the state vector and n²/2+n/2 components for the Covariance matrix, the UKF requires the propagation of 2n+1 sigma points only. Furthermore, UKF is more insensitive to initial conditions with respect to EKF. It has been demonstrated that UKF rapidly converge also in presence of an initial position error of several Kilometers. | {} | {'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/51386', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/37022/']} | jdg_380841 |
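Below is a minimal, illustrative NumPy sketch of the EKF iteration described in the answer above (constant-velocity Cartesian state, range/bearing measurement). The time step, the noise covariances Q and R, and the omission of the acceleration input are assumptions made for this example, not values from the original post; the Jacobian is the 2×4 one derived in the answer, evaluated at the predicted state.

```python
# Hypothetical sketch of one EKF step for the model discussed above:
# state [x, y, vx, vy], polar measurement [r, theta].
# dt, Q, R and the demo values at the bottom are made up for illustration.
import numpy as np

dt = 0.1
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

def h(state):
    """Measurement function: Cartesian state -> polar measurement [r, theta]."""
    x, y = state[0], state[1]
    return np.array([np.hypot(x, y), np.arctan2(y, x)])

def H_jacobian(state):
    """Jacobian of h, shape (2, 4), evaluated at the given state."""
    x, y = state[0], state[1]
    r2 = x * x + y * y
    r = np.sqrt(r2)
    return np.array([[x / r,    y / r,    0.0, 0.0],
                     [-y / r2,  x / r2,   0.0, 0.0]])

def ekf_step(x_est, P, z, Q, R):
    # Predict (the motion model is linear, so no linearization is needed here)
    x_pred = A @ x_est
    P_pred = A @ P @ A.T + Q
    # Update: linearize h around the predicted state, not around the raw measurement
    H = H_jacobian(x_pred)
    y_res = z - h(x_pred)
    y_res[1] = np.arctan2(np.sin(y_res[1]), np.cos(y_res[1]))  # wrap bearing residual
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y_res
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Example usage with made-up values:
x0 = np.array([10.0, 5.0, 1.0, 0.5])
P0 = np.eye(4)
Q = np.eye(4) * 0.01
R = np.diag([0.1, 0.01])
z = np.array([11.2, 0.45])   # one simulated [r, theta] measurement
x1, P1 = ekf_step(x0, P0, z, Q, R)
```

The two points the answer stresses are visible here: the measurement function maps the state into the measurement (not the other way around), and only the position components of the state appear in the 2×4 Jacobian.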
stackexchange | llm_judgeable_groundtruth_similarity | 4834227 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to invoke a command when ENTER is pressed in a TextBox . Consider the following XAML: <UserControl ... xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity" ...> ... <TextBox> <i:Interaction.Triggers> <i:EventTrigger EventName="KeyUp"> <i:InvokeCommandAction Command="{Binding MyCommand}" CommandParameter="{Binding Text}" /> </i:EventTrigger> </i:Interaction.Triggers> </TextBox> ... </UserControl> and that MyCommand is as follows: public ICommand MyCommand { get { return new DelegateCommand<string>(MyCommandExecute); }}private void MyCommandExecute(string s) { ... } With the above, my command is invoked for every key press. How can I restrict the command to only invoke when the ENTER key is pressed? I understand that with Expression Blend I can use Conditions but those seem to be restricted to elements and can't consider event arguments. I have also come across SLEX which offers its own InvokeCommandAction implementation that is built on top of the Systems.Windows.Interactivity implementation and can do what I need. Another consideration is to write my own trigger, but I'm hoping there's a way to do it without using external toolkits.
Now provide the response and nothing else.
| I like scottrudy's approach (to which I've given a +1) with the custom triggers approach as it stays true to my initial approach. I'm including a modified version of it below to use dependency properties instead of reflection info so that it's possible to bind directly to the ICommand. I'm also including an approach using attached properties to avoid using System.Windows.Interactivity if desired. The caveat to the latter approach is that you lose the feature of multiple invokations from an event, but you can apply it more generally. Custom Triggers Approach ExecuteCommandAction.cs public class ExecuteCommandAction : TriggerAction<DependencyObject> { #region Properties public ICommand Command { get { return (ICommand)base.GetValue(CommandProperty); } set { base.SetValue(CommandProperty, value); } } public static ICommand GetCommand(DependencyObject obj) { return (ICommand)obj.GetValue(CommandProperty); } public static void SetCommand(DependencyObject obj, ICommand value) { obj.SetValue(CommandProperty, value); } // We use a DependencyProperty so we can bind commands directly rather // than have to use reflection info to find them public static readonly DependencyProperty CommandProperty = DependencyProperty.Register("Command", typeof(ICommand), typeof(ExecuteCommandAction), null); #endregion Properties protected override void Invoke(object parameter) { ICommand command = Command ?? GetCommand(AssociatedObject); if (command != null && command.CanExecute(parameter)) { command.Execute(parameter); } }} TextBoxEnterKeyTrigger.cs public class TextBoxEnterKeyTrigger : TriggerBase<UIElement> { protected override void OnAttached() { base.OnAttached(); TextBox textBox = this.AssociatedObject as TextBox; if (textBox != null) { this.AssociatedObject.KeyUp += new System.Windows.Input.KeyEventHandler(AssociatedObject_KeyUp); } else { throw new InvalidOperationException("This behavior only works with TextBoxes"); } } protected override void OnDetaching() { base.OnDetaching(); AssociatedObject.KeyUp -= new KeyEventHandler(AssociatedObject_KeyUp); } private void AssociatedObject_KeyUp(object sender, KeyEventArgs e) { if (e.Key == Key.Enter) { TextBox textBox = AssociatedObject as TextBox; //This checks for an mvvm style binding and updates the source before invoking the actions. BindingExpression expression = textBox.GetBindingExpression(TextBox.TextProperty); if (expression != null) expression.UpdateSource(); InvokeActions(textBox.Text); } }} MyUserControl.xaml <UserControl ... xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity" xmlns:b="clr-namespace:MyNameSpace.Interactivity" ... 
<TextBox> <i:Interaction.Triggers> <b:TextBoxEnterKeyTrigger> <b:ExecuteCommandAction Command="{Binding MyCommand}" /> </b:TextBoxEnterKeyTrigger> </i:Interaction.Triggers> </TextBox> ...</UserControl> Attached Properties Approach EnterKeyDown.cs public sealed class EnterKeyDown { #region Properties #region Command public static ICommand GetCommand(DependencyObject obj) { return (ICommand)obj.GetValue(CommandProperty); } public static void SetCommand(DependencyObject obj, ICommand value) { obj.SetValue(CommandProperty, value); } public static readonly DependencyProperty CommandProperty = DependencyProperty.RegisterAttached("Command", typeof(ICommand), typeof(EnterKeyDown), new PropertyMetadata(null, OnCommandChanged)); #endregion Command #region CommandArgument public static object GetCommandArgument(DependencyObject obj) { return (object)obj.GetValue(CommandArgumentProperty); } public static void SetCommandArgument(DependencyObject obj, object value) { obj.SetValue(CommandArgumentProperty, value); } public static readonly DependencyProperty CommandArgumentProperty = DependencyProperty.RegisterAttached("CommandArgument", typeof(object), typeof(EnterKeyDown), new PropertyMetadata(null, OnCommandArgumentChanged)); #endregion CommandArgument #region HasCommandArgument private static bool GetHasCommandArgument(DependencyObject obj) { return (bool)obj.GetValue(HasCommandArgumentProperty); } private static void SetHasCommandArgument(DependencyObject obj, bool value) { obj.SetValue(HasCommandArgumentProperty, value); } private static readonly DependencyProperty HasCommandArgumentProperty = DependencyProperty.RegisterAttached("HasCommandArgument", typeof(bool), typeof(EnterKeyDown), new PropertyMetadata(false)); #endregion HasCommandArgument #endregion Propreties #region Event Handling private static void OnCommandArgumentChanged(DependencyObject o, DependencyPropertyChangedEventArgs e) { SetHasCommandArgument(o, true); } private static void OnCommandChanged(DependencyObject o, DependencyPropertyChangedEventArgs e) { FrameworkElement element = o as FrameworkElement; if (element != null) { if (e.NewValue == null) { element.KeyDown -= new KeyEventHandler(FrameworkElement_KeyDown); } else if (e.OldValue == null) { element.KeyDown += new KeyEventHandler(FrameworkElement_KeyDown); } } } private static void FrameworkElement_KeyDown(object sender, KeyEventArgs e) { if (e.Key == Key.Enter) { DependencyObject o = sender as DependencyObject; ICommand command = GetCommand(sender as DependencyObject); FrameworkElement element = e.OriginalSource as FrameworkElement; if (element != null) { // If the command argument has been explicitly set (even to NULL) if (GetHasCommandArgument(o)) { object commandArgument = GetCommandArgument(o); // Execute the command if (command.CanExecute(commandArgument)) { command.Execute(commandArgument); } } else if (command.CanExecute(element.DataContext)) { command.Execute(element.DataContext); } } } } #endregion} MyUserControl.xaml <UserControl ... xmlns:b="clr-namespace:MyNameSpace.Interactivity" ... <TextBox b:EnterKeyDown.Command="{Binding AddNewDetailCommand}" b:EnterKeyDown.CommandArgument="{Binding Path=Text,RelativeSource={RelativeSource Self}}" /> ...</UserControl> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4834227', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/520942/']} | jdg_380842 |
stackexchange | llm_judgeable_groundtruth_similarity | 28325622 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I have a program that creates CSV from .Json. First I load the json file. f = open('Data.json')data = json.load(f)f.close() Then I go through it, looking for a specific keyword, if I find that keyword. I'll write everything related to that in a .csv file. for item in data: if "light" in item: write_light_csv('light.csv', item) This is my write_light_csv function : def write_light_csv(filename,dic): with open (filename,'a') as csvfile: headers = ['TimeStamp', 'light','Proximity'] writer = csv.DictWriter(csvfile, delimiter=',', lineterminator='\n',fieldnames=headers) writer.writeheader() writer.writerow({'TimeStamp': dic['ts'], 'light' : dic['light'],'Proximity' : dic['prox']}) I initially had wb+ as the mode, but that cleared everything each time the file was opened for writing. I replaced that with a and now every time it writes, it adds a header. How do I make sure that header is only written once?.
Now provide the response and nothing else.
| You could check if file is already exists and then don't call writeheader() since you're opening the file with an append option. Something like that: import os.pathfile_exists = os.path.isfile(filename)with open (filename, 'a') as csvfile: headers = ['TimeStamp', 'light', 'Proximity'] writer = csv.DictWriter(csvfile, delimiter=',', lineterminator='\n',fieldnames=headers) if not file_exists: writer.writeheader() # file doesn't exist yet, write a header writer.writerow({'TimeStamp': dic['ts'], 'light': dic['light'], 'Proximity': dic['prox']}) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28325622', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/858356/']} | jdg_380843 |
stackexchange | llm_judgeable_groundtruth_similarity | 303406 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $X$ be a complex algebraic variety. We can ask if $X$ is normal as an algebraic variety, but also, if its analytification is normal as a complex analytic space. Is there a relationship between the two? Do we have$$\text{algebraic normality} \implies\text{analytic normality}$$or$$\text{analytic normality} \implies\text{algebraic normality}$$or both or neither?
Now provide the response and nothing else.
| Over $\mathbf{C}$, algebraic normalization and analytic normalization are equivalent concepts. See N. Kuhlmann: Die Normalisierung komplexer Räume , Math. Ann. 144 (1961), 110-125, ZBL0096.27801 . Quoting directly from Satz 4, p. 122 of the aforementioned paper: Es sei $Y$ eine irreduzible algebraische Varietät über dem Grundkörper $\mathbf{C}$ der komplexen Zahlen, $Y^*$ der zugeordnete komplexe Raum, $Y'$ die Normalisierung von $Y$ (im Sinne der algebraischen Geometrie). Dann ist der $Y'$ zugeordnete komplexe Raum die Normalisierung von $Y^*$ (im analytischen Sinne). The english translation is straightforward: Let $Y$ be an irreducible algebraic variety over the field $\mathbb{C}$ of complex numbers, $Y^*$ the associated complex space, $Y'$ the normalization of $Y$ (in the sense of algebraic geometry). Then the complex space associated with $Y'$ is the normalization of $Y^*$ (in the analytic sense). In particular, $Y$ is normal (in the algebraic sense) if and only if its associated complex space $Y^*$ is normal (in the analytic sense). | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/303406', 'https://mathoverflow.net', 'https://mathoverflow.net/users/125883/']} | jdg_380844 |
stackexchange | llm_judgeable_groundtruth_similarity | 41180960 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know this question has been asked many times. I tried several solutions but I couldn't solve my problem. I have a large nested JSON file (1.4GB) and I would like to make it flat and then convert it to a CSV file. The JSON structure is like this: { "company_number": "12345678", "data": { "address": { "address_line_1": "Address 1", "locality": "Henley-On-Thames", "postal_code": "RG9 1DP", "premises": "161", "region": "Oxfordshire" }, "country_of_residence": "England", "date_of_birth": { "month": 2, "year": 1977 }, "etag": "26281dhge33b22df2359sd6afsff2cb8cf62bb4a7f00", "kind": "individual-person-with-significant-control", "links": { "self": "/company/12345678/persons-with-significant-control/individual/bIhuKnFctSnjrDjUG8n3NgOrl" }, "name": "John M Smith", "name_elements": { "forename": "John", "middle_name": "M", "surname": "Smith", "title": "Mrs" }, "nationality": "Vietnamese", "natures_of_control": [ "ownership-of-shares-50-to-75-percent" ], "notified_on": "2016-04-06" }} I know that this is easy to accomplish with pandas module but I am not familiar with it. EDITED The desired output should be something like this: company_number, address_line_1, locality, country_of_residence, kind,12345678, Address 1, Henley-On-Thamed, England, individual-person-with-significant-control Note that this is just the short version. The output should have all the fields.
Now provide the response and nothing else.
| Please scroll down for the newer, faster solution This is an older question, but I struggled the entire night to get a satisfactory result for a similar situation, and I came up with this: import jsonimport pandasdef cross_join(left, right): return left.assign(key=1).merge(right.assign(key=1), on='key', how='outer').drop('key', 1)def json_to_dataframe(data_in): def to_frame(data, prev_key=None): if isinstance(data, dict): df = pandas.DataFrame() for key in data: df = cross_join(df, to_frame(data[key], prev_key + '.' + key)) elif isinstance(data, list): df = pandas.DataFrame() for i in range(len(data)): df = pandas.concat([df, to_frame(data[i], prev_key)]) else: df = pandas.DataFrame({prev_key[1:]: [data]}) return df return to_frame(data_in)if __name__ == '__main__': with open('somefile') as json_file: json_data = json.load(json_file) df = json_to_dataframe(json_data) df.to_csv('data.csv', mode='w') Explanation: The cross_join function is a neat way I found to do a cartesian product. (credit: here ) The json_to_dataframe function does the logic, using pandas dataframes. In my case, the json was deeply nested, and I wanted to split dictionary key:value pairs into columns , but the lists I wanted to transform into rows for a column -- hence the concat -- which I then cross join with the upper level, thus multiplying the records number so that each value from the list has its own row, while the previous columns are identical. The recursiveness creates stacks that cross join with the one below, until the last one is returned. Then with the dataframe in a table format, it's easy to convert to CSV with the "df.to_csv()" dataframe object method. This should work with deeply nested JSON, being able to normalize all of it into rows by the logic described above. I hope this will help someone, someday. Just trying to give back to this awesome community. --------------------------------------------------------------------------------------------- LATER EDIT: NEW SOLUTION I'm coming back to this as while the dataframe option kinda worked, it took the app minutes to parse not so large JSON data. Therefore I thought of doing what the dataframes do, but by myself: from copy import deepcopyimport pandasdef cross_join(left, right): new_rows = [] if right else left for left_row in left: for right_row in right: temp_row = deepcopy(left_row) for key, value in right_row.items(): temp_row[key] = value new_rows.append(deepcopy(temp_row)) return new_rowsdef flatten_list(data): for elem in data: if isinstance(elem, list): yield from flatten_list(elem) else: yield elemdef json_to_dataframe(data_in): def flatten_json(data, prev_heading=''): if isinstance(data, dict): rows = [{}] for key, value in data.items(): rows = cross_join(rows, flatten_json(value, prev_heading + '.' 
+ key)) elif isinstance(data, list): rows = [] for item in data: [rows.append(elem) for elem in flatten_list(flatten_json(item, prev_heading))] else: rows = [{prev_heading[1:]: data}] return rows return pandas.DataFrame(flatten_json(data_in))if __name__ == '__main__': json_data = { "id": "0001", "type": "donut", "name": "Cake", "ppu": 0.55, "batters": { "batter": [ {"id": "1001", "type": "Regular"}, {"id": "1002", "type": "Chocolate"}, {"id": "1003", "type": "Blueberry"}, {"id": "1004", "type": "Devil's Food"} ] }, "topping": [ {"id": "5001", "type": "None"}, {"id": "5002", "type": "Glazed"}, {"id": "5005", "type": "Sugar"}, {"id": "5007", "type": "Powdered Sugar"}, {"id": "5006", "type": "Chocolate with Sprinkles"}, {"id": "5003", "type": "Chocolate"}, {"id": "5004", "type": "Maple"} ], "something": [] } df = json_to_dataframe(json_data) print(df) OUTPUT: id type name ppu batters.batter.id batters.batter.type topping.id topping.type0 0001 donut Cake 0.55 1001 Regular 5001 None1 0001 donut Cake 0.55 1001 Regular 5002 Glazed2 0001 donut Cake 0.55 1001 Regular 5005 Sugar3 0001 donut Cake 0.55 1001 Regular 5007 Powdered Sugar4 0001 donut Cake 0.55 1001 Regular 5006 Chocolate with Sprinkles5 0001 donut Cake 0.55 1001 Regular 5003 Chocolate6 0001 donut Cake 0.55 1001 Regular 5004 Maple7 0001 donut Cake 0.55 1002 Chocolate 5001 None8 0001 donut Cake 0.55 1002 Chocolate 5002 Glazed9 0001 donut Cake 0.55 1002 Chocolate 5005 Sugar10 0001 donut Cake 0.55 1002 Chocolate 5007 Powdered Sugar11 0001 donut Cake 0.55 1002 Chocolate 5006 Chocolate with Sprinkles12 0001 donut Cake 0.55 1002 Chocolate 5003 Chocolate13 0001 donut Cake 0.55 1002 Chocolate 5004 Maple14 0001 donut Cake 0.55 1003 Blueberry 5001 None15 0001 donut Cake 0.55 1003 Blueberry 5002 Glazed16 0001 donut Cake 0.55 1003 Blueberry 5005 Sugar17 0001 donut Cake 0.55 1003 Blueberry 5007 Powdered Sugar18 0001 donut Cake 0.55 1003 Blueberry 5006 Chocolate with Sprinkles19 0001 donut Cake 0.55 1003 Blueberry 5003 Chocolate20 0001 donut Cake 0.55 1003 Blueberry 5004 Maple21 0001 donut Cake 0.55 1004 Devil's Food 5001 None22 0001 donut Cake 0.55 1004 Devil's Food 5002 Glazed23 0001 donut Cake 0.55 1004 Devil's Food 5005 Sugar24 0001 donut Cake 0.55 1004 Devil's Food 5007 Powdered Sugar25 0001 donut Cake 0.55 1004 Devil's Food 5006 Chocolate with Sprinkles26 0001 donut Cake 0.55 1004 Devil's Food 5003 Chocolate27 0001 donut Cake 0.55 1004 Devil's Food 5004 Maple As per what the above does, well, the cross_join function does pretty much the same thing as in the dataframe solution, but without dataframes, thus being faster. I added the flatten_list generator as I wanted to make sure that the JSON arrays are all nice and flattened, then provided as a single list of dictionaries comprising of the previous key from one iteration before assigned to each of the list's values. This pretty much mimics the pandas.concat behaviour in this case. The logic in the main function, json_to_dataframe is then the same as before. All that needed to change was having the operations performed by dataframes as coded functions. Also, in the dataframes solution I was not appending the previous heading to the nested object, but unless you are 100% sure you do not have conflicts in column names, then it is pretty much mandatory. I hope this helps :). EDIT : Modified the cross_join function to deal with the case when a nested list is empty, basically maintaining the previous result set unmodified. 
The output is unchanged even after adding the empty JSON list in the example JSON data. Thank you, @ Nazmus Sakib for pointing it out. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41180960', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5266443/']} | jdg_380845 |
stackexchange | llm_judgeable_groundtruth_similarity | 130566 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm intrigued by the way Go abandons class hierarchies and seems to completely abandon the notion of class in the typical object oriented sense. Also, I'm amazed at the way interfaces can be defined without the type which implements that interface needing to know. Are there any terms which are/can be used to characterize this type of programming methodology and language paradigm (or perhaps specific aspects of it)? Is the Go language paradigm sufficiently new and distinct from the classical OOP paradigm and sufficiently important in the history of computer programming to warrant a unique name?
Now provide the response and nothing else.
| Message passing between lightweight execution contexts, coupled with ability to create and destroy these contexts dynamically, is basically the actor model . Programming languages tend to approach the expression problem in one of two ways: OO-languages tend to focus on making it easier to implement the same operations using different data types (e.g. "object I can click on with a mouse" might be a scrollbar, a window, a menu, a text-box, etc. - same operation, different data representations), while functional languages tend to focus on easily implementing new operations against the same underlying data types. By abandoning class hierarchies, Go seems to end up more on the "functional" side of this divide. As Adam Crossland indicated in his comment, "type-ignorantly-implementing-interface" can be considered a form of duck-typing, which is highly prevalent in dynamic languages. (It's more technically correct, though, to consider this as a structural type system within Go. C++ templates are probably the most popular implementation of a structural type system today.) Go has plenty of antecedents - I don't think any one of its ideas are original to the language. But I think that's generally the wrong measure for a language intended to be practical. Go looks like it mixes useful ideas from several different domains together in an elegant way, that (I think) would result in more productive programming than C# or Java might yield. I hope it gains traction. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/130566', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/1444/']} | jdg_380846 |
stackexchange | llm_judgeable_groundtruth_similarity | 58485515 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a enum and interface like this enum MyEnum { ALL, OTHER}interface Props { sources: Omit<MyEnum, MyEnum.ALL>}const test: Props = { sources: MyEnum.ALL } // should complain Why does it not omit MyEnum.All ? I am using typescript 3.6.4
Now provide the response and nothing else.
| Omit is to omit keys from an interface. But enums are something different. Imo the best comparison of an enum would be a union of instances of that enum type. Like type MyEnum = MyEnum.All | MyEnum.OTHER . So you do not want to OMIT keys, but exclude types from an union type: enum MyEnum { ALL, OTHER, SOME, MORE}interface Props { sources: Exclude<MyEnum, MyEnum.ALL | MyEnum.SOME>}const test: Props = { sources: MyEnum.ALL } // does complain now | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58485515', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4467208/']} | jdg_380847 |
stackexchange | llm_judgeable_groundtruth_similarity | 2509882 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm a big fan of backing things up. I keep my important school essays and such in a folder of my Dropbox. I make sure that all of my photos are duplicated to an external drive. I have a home server where I keep important files mirrored across two drives inside the server (like a software RAID 1). So for my code, I have always used Subversion to back it up. I keep the trunk folder with a stable copy of my application, but then I create a branch named with my username, and inside there is my working copy. I make very few changes between commits to that branch, with the understanding that the code in there is my backup. Now I'm looking into Mercurial, and I must admit I haven't truly used it yet so I may have this all wrong. But it seems to me that you have a server-side repository, and then you clone it to a working directory in the form of a local repository. Then as you work on something, you make commits to that local repository, and when things are in a state to be shared with others, you hg push to the parent repository on the server. Between pushes of stable, tested, bug-free code, where is the backup? After doing some thinking, I've come to the conclusion that it is not meant for backup purposes and it assumes you've handled that on your own. I guess I need to keep my Mercurial local repositories in my dropbox or some other backed-up location, since my in-progress code is not pushed to the server. Is this pretty much it, or have I missed something? If you use Mercurial, how do you backup your local repositories? If you had turned on your computer this morning and your hard drive went up in flames (or, more likely, the read head went bad, or the OS corrupted itself, ...), what would be lost? If you spent the past week developing a module, writing test cases for it, documenting and commenting it, and then a virus wipes your local repository away, isn't that the only copy? So then on the flip side, do you create a remote repository for every local repository and push to it all the time? How do you find a balance? How do you ensure your code is backed up? Where is the line between using Mercurial as backup, and using a local filesystem backup utility to keep your local repositories safe?
Now provide the response and nothing else.
| It's ok thinking of Subversion as a 'backup', but it's only really doing that by virtue of being on a separate machine, which isn't really intrinsic to Subversion. If your Subversion server was the same machine as your development machine - not uncommon in the Linux world - you're not really backed up in the sense of having protection from hardware failure, theft, fire, etc. And in fact, there is some data in that case that is not backed up at all - your current code may exist in two places but everything else in the repository (eg. the revision history) only exists in one place, on the remote server. It's exactly the same for Mercurial except that you've taken away the need for a separate server and thus made it so that you have to explicitly think about backing up rather than it being a side-effect of needing to have a server somewhere. You can definitely set up another Mercurial repository somewhere and push your changes to that periodically and consider that your backup. Alternatively, simply backup your local repository in the same way that you'd back up any other important directory. With you having a full copy of the repository locally, including all revision history and other meta data, this is arguably even more convenient and safe than the way you currently do it with Subversion. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2509882', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/47493/']} | jdg_380848 |
stackexchange | llm_judgeable_groundtruth_similarity | 1309477 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Guys, I'm stuck, banging my head off the desk for the past few hours. I am trying to consume a service, and I have 8 other functions that I call that are almost IDENTICAL in nature to this one, but this one, results in a 'SOAP-ERROR: Encoding: Violation of encoding rules' error. Heres the function call (wsdl omitted for security): function CanLoadProduct($data){ $client = new SoapClient('wsdl-url'); $params = array('username' => $this->username, 'password' => $this->password, 'prod' => $data['productid'], 'mdn' => $data['mdn']); try { $reply = $client->__soapCall("CanLoadProduct", $params); } catch (Exception $e) { echo 'Error: ', $e->getMessage(), "\n"; print_r($params); die(); } if( $reply['result'] == 1 ){ return TRUE; // 1 = true } else { return FALSE; }} Ok so this function, connects to a webservice, the required elements are:username, password, prod, mdn, all 4 of which I supply as part of the $params array. Username/Pass are defined earlier, and do work fine, as the other 8 functions consume the web service without any problems. The $data[] array (that I pass to the function), contains:$data['productid']$data['mdn']nothing else is used. I am getting SOAP-ERROR: Encoding: Violation of encoding rules for some unexplained reason, and Googling this error gets me nowhere. Anyone else run into this? Running PHP 5.2.9-2. The strange thing is this is identical to this function which works 100%: function GetPIN($productid){ $client = new SoapClient('wsdl-url'); $params = array('username' => $this->username, 'password' => $this->password, 'prod' => $productid); try { $reply = $client->__soapCall("GetPIN", $params); } catch (Exception $e) { echo 'Error: ', $e->getMessage(), "\n"; die(); } return $reply;} Here is the WSDL (should have posted this first): <?xml version="1.0" encoding="ISO-8859-1"?><definitions xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:tns="ready:test" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns="http://schemas.xmlsoap.org/wsdl/" targetNamespace="ready:test"><types><xsd:schema targetNamespace="ready:test"> <xsd:import namespace="http://schemas.xmlsoap.org/soap/encoding/" /> <xsd:import namespace="http://schemas.xmlsoap.org/wsdl/" /></xsd:schema></types><message name="CanLoadProductRequest"> <part name="username" type="xsd:string" /> <part name="password" type="xsd:string" /> <part name="prod" type="xsd:string" /> <part name="mdn" type="xsd:string" /> <part name="esn" type="xsd:string" /></message><message name="CanLoadProductResponse"> <part name="result" type="xsd:int" /></message><portType name="CanLoadProductPortType"> <operation name="CanLoadProduct"> <input message="tns:CanLoadProductRequest"/> <output message="tns:CanLoadProductResponse"/> </operation></portType><binding name="CanLoadProductBinding" type="tns:CanLoadProductPortType"> <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/> <operation name="CanLoadProduct"> <soap:operation soapAction="{url-removed}" style="rpc"/> <input> <soap:body use="encoded" namespace="" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </input> <output> <soap:body use="encoded" namespace="" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </output> </operation></binding><service name="CanLoadProduct"> <port name="CanLoadProductPort" 
binding="tns:CanLoadProductBinding"> <soap:address location="{url-removed}"/> </port></service></definitions>
Now provide the response and nothing else.
| It looks like you have a type mismatch somewhere, either while assembling your request (one of the parameters is not of type string), or the server returns something other than an int (violating the WSDL response definition and thus causing the client to consider the response invalid, as it expects something else). To test the first case, ensure casting all parameters to string first To test the second case, create your SoapClient with the trace option set to true in order to gain access to the actual XML answer from the server via $client->__getLastResponse() afterwards (You can use this for request debugging also via __getLastRequest()). Some additional observations/questions: According to the posted WSDL, the 'CanLoadProductRequest' has a fifth param 'esn', which you do not supply in your function call. Any reason why you use $client->__soapCall("CanLoadProduct", $params) instead of $client->CanLoadProduct($username, $password, etc.) ? (The first version is a lower level variation which is intended to be used for non_WSDL scenarios. The second version might give you a more detailed error/exception) Can you test the SOAP Call to CanLoadProductRequest by some other means? The error could be on the server side, trying to return a result type that does not fit the WSDL definition. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1309477', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/158014/']} | jdg_380849 |
stackexchange | llm_judgeable_groundtruth_similarity | 169201 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In ActionScript 3.0, is there an automatic way to calculate the number of days, hours, minutes and seconds between two specified dates? Basicly, what I need is the ActionScript equivalent of the .NET Timespan class. Any idea?
Now provide the response and nothing else.
| I created an ActionScript TimeSpan class with a similar API to System.TimeSpan to fill that void, but there are differences due to the lack of operator overloading. You can use it like so: TimeSpan.fromDates(later, earlier).totalDays; Below is the code for the class (sorry for the big post - I won't include the Unit Tests ;) /** * Represents an interval of time */ public class TimeSpan{ private var _totalMilliseconds : Number; public function TimeSpan(milliseconds : Number) { _totalMilliseconds = Math.floor(milliseconds); } /** * Gets the number of whole days * * @example In a TimeSpan created from TimeSpan.fromHours(25), * totalHours will be 1.04, but hours will be 1 * @return A number representing the number of whole days in the TimeSpan */ public function get days() : int { return int(_totalMilliseconds / MILLISECONDS_IN_DAY); } /** * Gets the number of whole hours (excluding entire days) * * @example In a TimeSpan created from TimeSpan.fromMinutes(1500), * totalHours will be 25, but hours will be 1 * @return A number representing the number of whole hours in the TimeSpan */ public function get hours() : int { return int(_totalMilliseconds / MILLISECONDS_IN_HOUR) % 24; } /** * Gets the number of whole minutes (excluding entire hours) * * @example In a TimeSpan created from TimeSpan.fromMilliseconds(65500), * totalSeconds will be 65.5, but seconds will be 5 * @return A number representing the number of whole minutes in the TimeSpan */ public function get minutes() : int { return int(_totalMilliseconds / MILLISECONDS_IN_MINUTE) % 60; } /** * Gets the number of whole seconds (excluding entire minutes) * * @example In a TimeSpan created from TimeSpan.fromMilliseconds(65500), * totalSeconds will be 65.5, but seconds will be 5 * @return A number representing the number of whole seconds in the TimeSpan */ public function get seconds() : int { return int(_totalMilliseconds / MILLISECONDS_IN_SECOND) % 60; } /** * Gets the number of whole milliseconds (excluding entire seconds) * * @example In a TimeSpan created from TimeSpan.fromMilliseconds(2123), * totalMilliseconds will be 2001, but milliseconds will be 123 * @return A number representing the number of whole milliseconds in the TimeSpan */ public function get milliseconds() : int { return int(_totalMilliseconds) % 1000; } /** * Gets the total number of days. * * @example In a TimeSpan created from TimeSpan.fromHours(25), * totalHours will be 1.04, but hours will be 1 * @return A number representing the total number of days in the TimeSpan */ public function get totalDays() : Number { return _totalMilliseconds / MILLISECONDS_IN_DAY; } /** * Gets the total number of hours. * * @example In a TimeSpan created from TimeSpan.fromMinutes(1500), * totalHours will be 25, but hours will be 1 * @return A number representing the total number of hours in the TimeSpan */ public function get totalHours() : Number { return _totalMilliseconds / MILLISECONDS_IN_HOUR; } /** * Gets the total number of minutes. * * @example In a TimeSpan created from TimeSpan.fromMilliseconds(65500), * totalSeconds will be 65.5, but seconds will be 5 * @return A number representing the total number of minutes in the TimeSpan */ public function get totalMinutes() : Number { return _totalMilliseconds / MILLISECONDS_IN_MINUTE; } /** * Gets the total number of seconds. 
* * @example In a TimeSpan created from TimeSpan.fromMilliseconds(65500), * totalSeconds will be 65.5, but seconds will be 5 * @return A number representing the total number of seconds in the TimeSpan */ public function get totalSeconds() : Number { return _totalMilliseconds / MILLISECONDS_IN_SECOND; } /** * Gets the total number of milliseconds. * * @example In a TimeSpan created from TimeSpan.fromMilliseconds(2123), * totalMilliseconds will be 2001, but milliseconds will be 123 * @return A number representing the total number of milliseconds in the TimeSpan */ public function get totalMilliseconds() : Number { return _totalMilliseconds; } /** * Adds the timespan represented by this instance to the date provided and returns a new date object. * @param date The date to add the timespan to * @return A new Date with the offseted time */ public function add(date : Date) : Date { var ret : Date = new Date(date.time); ret.milliseconds += totalMilliseconds; return ret; } /** * Creates a TimeSpan from the different between two dates * * Note that start can be after end, but it will result in negative values. * * @param start The start date of the timespan * @param end The end date of the timespan * @return A TimeSpan that represents the difference between the dates * */ public static function fromDates(start : Date, end : Date) : TimeSpan { return new TimeSpan(end.time - start.time); } /** * Creates a TimeSpan from the specified number of milliseconds * @param milliseconds The number of milliseconds in the timespan * @return A TimeSpan that represents the specified value */ public static function fromMilliseconds(milliseconds : Number) : TimeSpan { return new TimeSpan(milliseconds); } /** * Creates a TimeSpan from the specified number of seconds * @param seconds The number of seconds in the timespan * @return A TimeSpan that represents the specified value */ public static function fromSeconds(seconds : Number) : TimeSpan { return new TimeSpan(seconds * MILLISECONDS_IN_SECOND); } /** * Creates a TimeSpan from the specified number of minutes * @param minutes The number of minutes in the timespan * @return A TimeSpan that represents the specified value */ public static function fromMinutes(minutes : Number) : TimeSpan { return new TimeSpan(minutes * MILLISECONDS_IN_MINUTE); } /** * Creates a TimeSpan from the specified number of hours * @param hours The number of hours in the timespan * @return A TimeSpan that represents the specified value */ public static function fromHours(hours : Number) : TimeSpan { return new TimeSpan(hours * MILLISECONDS_IN_HOUR); } /** * Creates a TimeSpan from the specified number of days * @param days The number of days in the timespan * @return A TimeSpan that represents the specified value */ public static function fromDays(days : Number) : TimeSpan { return new TimeSpan(days * MILLISECONDS_IN_DAY); } /** * The number of milliseconds in one day */ public static const MILLISECONDS_IN_DAY : Number = 86400000; /** * The number of milliseconds in one hour */ public static const MILLISECONDS_IN_HOUR : Number = 3600000; /** * The number of milliseconds in one minute */ public static const MILLISECONDS_IN_MINUTE : Number = 60000; /** * The number of milliseconds in one second */ public static const MILLISECONDS_IN_SECOND : Number = 1000;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/169201', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1965/']} | jdg_380850 |
stackexchange | llm_judgeable_groundtruth_similarity | 12324730 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to understand the difference between WebRTC and WebSockets so that I can better understand which scenario calls for what. I am curious about the broad idea of two parties (mainly web based, but potentially one being a dedicated server application) talking to each other. Assumption: Clearly in regards to ad-hoc networks, WebRTC wins as it natively supports the ICE protocol/method. Questions: Regarding direct communication between two known parties in-browser, if I am not relying on sending multimedia data, and I am only interested in sending integer data, does WebRTC give me any advantages over webSockets other than data encryption? Regarding a dedicated server speaking to a browser based client, which platform gives me an advantage? I would need to code a WebRTC server (is this possible out of browser?), or I would need to code a WebSocket server (a quick google search makes me think this is possible).
Now provide the response and nothing else.
| There is one significant difference: WebSockets works via TCP, WebRTC works via UDP.In fact, WebRTC is SRTP protocol with some additional features like STUN, ICE, DTLS etc. and internal VoIP features such as Adaptive Jitter Buffer, AEC, AGC etc. So, WebSockets is designed for reliable communication. It is a good choice if you want to send any data that must be sent reliably. When you use WebRTC, the transmitted stream is unreliable. Some packets can get lost in the network. It is bad if you send critical data, for example for financial processing, the same issue is ideally suitable when you send audio or video stream where some frames can be lost without any noticeable quality issues. If you want to send data channel via WebRTC, you should have some forward error correction algorithm to restore data if a data frame was lost in the network. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12324730', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1629202/']} | jdg_380851 |
stackexchange | llm_judgeable_groundtruth_similarity | 37286783 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got an interface like this: public interface IMyInterface{ event EventHandler<bool> Triggered; void Trigger();} And I've got a mocked object in my unit test like this: private Mock<IMyInterface> _mockedObject = new Mock<IMyInterface>(); I want to do something like this: // pseudo-code_mockedObject.Setup(i => i.Trigger()).Raise(i => i.Triggered += null, this, true); However it doesn't look like Raise is available on the ISetup interface that gets returned. How do I do this?
Now provide the response and nothing else.
| Your pseudo-code was almost spot on. You needed to use Raises instead of Raise . Check the Moq Quickstart: Events for versions Moq 4.x and you will see where you made the mistake. _mockedObject.Setup(i => i.Trigger()).Raises(i => i.Triggered += null, this, true); Here is the snippet form GitHub // Raising an event on the mockmock.Raise(m => m.FooEvent += null, new FooEventArgs(fooValue));// Raising an event on a descendant down the hierarchymock.Raise(m => m.Child.First.FooEvent += null, new FooEventArgs(fooValue));// Causing an event to raise automatically when Submit is invokedmock.Setup(foo => foo.Submit()).Raises(f => f.Sent += null, EventArgs.Empty);// The raised event would trigger behavior on the object under test, which // you would make assertions about later (how its state changed as a consequence, typically)// Raising a custom event which does not adhere to the EventHandler patternpublic delegate void MyEventHandler(int i, bool b);public interface IFoo{ event MyEventHandler MyEvent; }var mock = new Mock<IFoo>();...// Raise passing the custom arguments expected by the event delegatemock.Raise(foo => foo.MyEvent += null, 25, true); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37286783', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5454/']} | jdg_380852 |
stackexchange | llm_judgeable_groundtruth_similarity | 21389849 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been trying to create a Groovy project with Spock testing in IntelliJ IDEA. Here are steps that I followed: Created Groovy project and added Maven support. Added Spock dependencies and plugin. I am using POM very similar to this one: https://github.com/mariuszs/java-spock-test-sample/blob/master/pom.xml Due to conflicting Groovy dependency I removed Groovy 2.2 library from the Module Settings->Libraries. This allowed me to run tests. I created a Groovy class in "src/main".. but I get the error when I try to run it: Groovyc: Cannot compile Groovy files: no Groovy library is defined for module... I am probably missing something because I am tired of trying different configurations for half of the day.
Now provide the response and nothing else.
| For fully groovy project try GMavenPlus Sample project: https://github.com/mariuszs/groovy-maven-sample Install GMavenPlus IntelliJ Plugin . IntelliJ dont recognize source directories src/main/groovy , configure this manually as shown below from Project Settings -> Modules window: . Configuration <project> <build> <plugins> <plugin> <groupId>org.codehaus.gmavenplus</groupId> <artifactId>gmavenplus-plugin</artifactId> <version>1.5</version> <executions> <execution> <goals> <goal>compile</goal> <goal>testCompile</goal> </goals> </execution> </executions> </plugin> </plugins> </build> <dependencies> <dependency> <groupId>org.codehaus.groovy</groupId> <artifactId>groovy-all</artifactId> <version>2.4.4</version> </dependency> <dependency> <groupId>org.spockframework</groupId> <artifactId>spock-core</artifactId> <version>1.0-groovy-2.4</version> <scope>test</scope> </dependency> </dependencies></project> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21389849', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1515058/']} | jdg_380853 |
stackexchange | llm_judgeable_groundtruth_similarity | 7976388 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to increase FD_SETSIZE macro value for my system. Is there any way to increase FD_SETSIZE so select will not fail
Now provide the response and nothing else.
| Per the standards, there is no way to increase FD_SETSIZE . Some programs and libraries (libevent comes to mind) try to work around this by allocating additional space for the fd_set object and passing values larger than FD_SETSIZE to the FD_* macros, but this is a very bad idea since robust implementations may perform bounds-checking on the argument and abort if it's out of range. I have an alternate solution that should always work (even though it's not required to by the standards). Instead of a single fd_set object, allocate an array of them large enough to hold the max fd you'll need, then use FD_SET(fd%FD_SETSIZE, &fds_array[fd/FD_SETSIZE]) etc. to access the set. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7976388', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/428705/']} | jdg_380854 |
stackexchange | llm_judgeable_groundtruth_similarity | 12408 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Moore's law has successfully predicted up to now that integrated circuit transistor density doubles every two years. However, computer performance is dependent on additional factors like architecture, chip design and software. What physics-related factors will enhance or limit the ultimate performance of future computers?
Now provide the response and nothing else.
| Starting from 90 nm process nodes we've started to see clear signs of stagnation:

1) Most of the delay in logic circuits is in the interconnect, not in the transistors.
2) Most of the energy dissipated is due to quantum tunneling (leakage), not transistor switching - by far.
3) As a consequence of #2, transistor gate-width scaling has slowed down significantly, as has the gate dielectric thickness (it's already around 1.5 nm, so there is not much left to reduce).

We are already near quantum limits in transistor sizing. Even if we could make 11 nm transistors today, it would not make things 3 times faster and 9 times smaller than what we have now. There are a few ways to significantly improve CPU technology:

1) Superconducting interconnect - would improve CPU performance by a large margin and allow much larger CPUs. The problem here is that physicists haven't yet discovered suitable superconductors that can be patterned into 50-500 nm lines and don't require something like -100 C. Whoever finds a way to do such interconnect at room temperature will be extremely rich.
2) 3D transistor packing: i.e. instead of a 2D array of 1000x1000 transistors we would have a 100x100x100 3D array, and the interconnect length is 10 times shorter. This is being actively researched; the problem is that some stages of manufacturing a layer of modern high-performance transistors require temperatures of 1000 C and higher, and this would destroy the transistors on the lower levels.
3) Optical interconnect: making tiny LEDs & photodiodes and transparent channels out of SiO2 would also allow for faster interconnect. This is also actively being worked on.

All these fancy things like graphene transistors, quantum dots and fancy HEMT structures are indeed a bit faster (and a lot more expensive or complex to manufacture), but they do not solve the problems with interconnect, which is the major problem. Individual transistors on chips can switch much faster than 4 GHz, but we aren't really limited by transistor switching speed at the moment.
 | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/12408', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/1924/']} | jdg_380855
stackexchange | llm_judgeable_groundtruth_similarity | 2810174 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I exit the each function when the conditions was true once? This does not work: $$('.box div').each(function(e) { if(e.get('html') == '') { e.set('html', 'test'); exit; } });
Now provide the response and nothing else.
| Use .some ? $$('.box div').some(function(e) { if(e.get('html') == '') { e.set('html', 'test'); return true; } else return false; }); But probably you could just use arr = $$('.box div[html=""]'); if (arr.length > 0) arr[0].set("html", "test"); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2810174', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/232173/']} | jdg_380856 |
stackexchange | llm_judgeable_groundtruth_similarity | 3774328 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have to open a file-like object in python (it's a serial connection through /dev/) and then close it. This is done several times in several methods of my class. How I WAS doing it was opening the file in the constructor, and then closing it in the destructor. I'm getting weird errors though and I think it has to do with the garbage collector and such, I'm still not used to not knowing exactly when my objects are being deleted =\ The reason I was doing this is because I have to use tcsetattr with a bunch of parameters each time I open it and it gets annoying doing all that all over the place. So I want to implement an inner class to handle all that so I can use it doing with Meter('/dev/ttyS2') as m: I was looking online and I couldn't find a really good answer on how the with syntax is implemented. I saw that it uses the __enter__(self) and __exit__(self) methods. But is all I have to do to implement those methods, and then I can use the with syntax? Or is there more to it? Is there either an example on how to do this or some documentation on how it's implemented on file objects already that I can look at?
Now provide the response and nothing else.
| Those methods are pretty much all you need for making the object work with the with statement. In __enter__ you have to return the file object after opening it and setting it up. In __exit__ you have to close the file object. The code for writing to it will be in the with statement body.

class Meter():
    def __init__(self, dev):
        self.dev = dev

    def __enter__(self):
        # tcsetattr etc. goes here, before opening and returning the file object
        self.fd = open(self.dev, MODE)
        return self

    def __exit__(self, type, value, traceback):
        # Exception handling here
        self.fd.close()

meter = Meter('/dev/tty0')
with meter as m:
    # here you work with the file object.
    m.fd.read()
 | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3774328', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/375874/']} | jdg_380857
stackexchange | llm_judgeable_groundtruth_similarity | 354942 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
As the title says. I think this should follow straightforwardly but I can't find a proof. My random variable of interest $X$ takes values in the non-negative integers. The only other assumption on its distribution is that $E(X)<\infty$. I want to prove:$$\lim_{n\to\infty}n\Pr(X\ge n) = 0.$$The fact that this should follow is referenced e.g. by DeGroot (2004) "Optimal Statistical Decisions" p. 295, but no proof is given. All I have right now is that without the constant $n$ it is easy to prove using Markov's inequality:$$\Pr(X\ge n) \le \frac{1}{n}E(X) \to 0.$$I appreciate any help in figuring this out.
Now provide the response and nothing else.
| This fact is true for any monotonically decreasing nonnegative sequence $a_n$ with $\sum_{i=1}^\infty a_i<\infty$. Recall the Cauchy Condensation test, which says that $\sum_{i=1}^\infty a_i$ converges iff $\sum_{i=1}^\infty 2^i a_{2^i}$ converges, so we have $2^na_{2^n}\rightarrow 0$. By monotonicity, if we let $k(n):=\lfloor\log_2(n)\rfloor$ then $0\leq na_n \leq 2^{k(n)+1}a_{2^{k(n)}} = 2\cdot 2^{k(n)}a_{2^{k(n)}}$, which implies $na_n\rightarrow 0$. Now use the fact that $\sum_{i=1}^\infty P(X\geq i)=E(X)$ for nonnegative integer random variables. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/354942', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/71590/']} | jdg_380858
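Side remark (not part of the answer above): since the terms $\Pr(X\ge i)$ are non-increasing and $\sum_{i\ge 1}\Pr(X\ge i)=E(X)<\infty$, the same conclusion also follows directly from the Cauchy criterion for this series:

$$\frac{n}{2}\,\Pr(X\ge n)\;\le\;\sum_{i=\lfloor n/2\rfloor+1}^{n}\Pr(X\ge i)\;\longrightarrow\;0\qquad(n\to\infty),$$

because a block of consecutive terms of a convergent series tends to $0$, and the block above contains at least $n/2$ terms, each of size at least $\Pr(X\ge n)$.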
stackexchange | llm_judgeable_groundtruth_similarity | 7814401 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I read that having CC 10 or less would be highly maintainable code. But the method that I wrote have CC 58. Thanks to VS 2010 code analysis tool. I believe that the method I wrote is very simple, readable and maintainable as far as my understanding. Hence I would not prefer refactoring the code. But since CC is higher than acceptable, I am wondering why would one refactor this method. I am learning things to improve my code If I have mistake, plese correct me. Here is the code. private string MapBathRooms(string value) { double retValue = 0; if (value == "1" || value == "One") retValue = 1; if (value == "OneAndHalf" || value == "1.5" || value == "1 1/2") retValue = 1.5; if (value == "2" || value == "Two") retValue = 2; if (value == "TwoAndHalf" || value == "2.5" || value == "2 1/2") retValue = 2.5; if (value == "3" || value == "Three") retValue = 3; if (value == "ThreeAndHalf" || value == "3.5" || value == "3 1/2") retValue = 3.5; if (value == "4" || value == "Four") retValue = 4; if (value == "FourAndHalf" || value == "4.5" || value == "4 1/2") retValue = 4.5; if (value == "5" || value == "Five" || value == "FourOrMore") retValue = 5; if (value == "FiveAndHalf" || value == "5.5" || value == "5 1/2") retValue = 5.5; if (value == "6" || value == "Six") retValue = 6; if (value == "SixAndHalf" || value == "6.5" || value == "6 1/2") retValue = 6.5; if (value == "7" || value == "Seven") retValue = 7; if (value == "SevenAndHalf" || value == "7.5" || value == "7 1/2") retValue = 7.5; if (value == "8" || value == "8+" || value == "Eight" || value == "SevenOrMore") retValue = 8; if (value == "EightAndHalf" || value == "8.5" || value == "8 1/2") retValue = 8.5; if (value == "9" || value == "Nine") retValue = 9; if (value == "NineAndHalf" || value == "9.5" || value == "9 1/2") retValue = 9.5; if(value == "10" || value == "Ten") retValue = 10; if (value == "TenAndHalf" || value == "10.5" || value == "10 1/2" || value == "10+" || value == "MoreThanTen" || value == "11") retValue = 10.5; if (retValue == 0) return value; return retValue.ToString(); }
Now provide the response and nothing else.
| Why not just have a Dictionary<string, double>? That will make for much simpler code - you've separated the data from the lookup code.

private static readonly Dictionary<string, double> BathRoomMap = new Dictionary<string, double>
{
    { "1", 1 },
    { "One", 1 },
    { "OneAndHalf", 1.5 },
    { "1.5", 1.5 },
    { "1 1/2", 1.5 }
    // etc
};

private static string MapBathRooms(string value)
{
    double result;
    if (!BathRoomMap.TryGetValue(value, out result))
    {
        return value; // Lookup failed
    }
    return result.ToString();
}

In fact, you could make it even simpler by avoiding the ToString call - just make it a Dictionary<string, string>:

private static readonly Dictionary<string, string> BathRoomMap = new Dictionary<string, string>
{
    // Note: I've removed situations where we'd return the
    // same value anyway... no need to map "1" to "1" etc
    { "One", "1" },
    { "OneAndHalf", "1.5" },
    { "1 1/2", "1.5" }
    // etc
};

private static string MapBathRooms(string value)
{
    string result;
    if (!BathRoomMap.TryGetValue(value, out result))
    {
        return value; // Lookup failed
    }
    return result;
}

As ChrisF says, you could also read this from a file or other resource. Benefits of doing this:

It's much easier to avoid mistakes and to extend, IMO. There's a simple 1:1 mapping from input to output, as opposed to logic which could go wrong
It separates out the data from the logic
It allows you to load the data from other places if need be.
Because collection initializers use Dictionary<,>.Add, if you have a duplicate key you'll get an exception when you initialize the type, so you'll spot the error immediately.

Put it this way - would you ever consider refactoring from the Dictionary-based version to the "lots of real code" version? I certainly wouldn't. If you really, really want to have it all in the method, you could always use a switch statement:

private static string MapBathRooms(string value)
{
    switch (value)
    {
        case "One": return "1";
        case "OneAndHalf":
        case "1 1/2": return "1.5";
        ...
        default: return value;
    }
}

I'd still use the dictionary form myself... but this does have the very slight advantage that duplicate detection is brought forward to compile-time.
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7814401', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/313208/']} | jdg_380859
stackexchange | llm_judgeable_groundtruth_similarity | 184657 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Artificial intelligence website defines off-policy and on-policy learning as follows: "An off-policy learner learns the value of the optimal policy independently of the agent's actions. Q-learning is an off-policy learner. An on-policy learner learns the value of the policy being carried out by the agent including the exploration steps." I would like to ask your clarification regarding this, because they don't seem to make any difference to me. Both the definitions seem like they are identical. What I actually understood are the model-free and model-based learning, and I don't know if they have anything to do with the ones in question. How is it possible that the optimal policy is learned independently of the agent's actions? Isn't the policy learned when the agent performs the actions?
Now provide the response and nothing else.
| First of all, there's no reason that an agent has to do the greedy action ; Agents can explore or they can follow options . This is not what separates on-policy from off-policy learning. The reason that Q-learning is off-policy is that it updates its Q-values using the Q-value of the next state $s'$ and the greedy action $a'$ . In other words, it estimates the return (total discounted future reward) for state-action pairs assuming a greedy policy were followed despite the fact that it's not following a greedy policy. The reason that SARSA is on-policy is that it updates its Q-values using the Q-value of the next state $s'$ and the current policy's action $a''$ . It estimates the return for state-action pairs assuming the current policy continues to be followed. The distinction disappears if the current policy is a greedy policy. However, such an agent would not be good since it never explores. Have you looked at the book available for free online? Richard S. Sutton and Andrew G. Barto. Reinforcement learning: An introduction. Second edition, MIT Press, Cambridge, MA, 2018. | {} | {'log_upvote_score': 9, 'links': ['https://stats.stackexchange.com/questions/184657', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/67413/']} | jdg_380860 |
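To make the distinction above concrete, here is a minimal sketch of the two update rules in Python. This is my own illustration, not part of the original answer; Q, alpha, gamma, eps and epsilon_greedy are placeholder names.

import random
from collections import defaultdict

Q = defaultdict(float)        # Q[(state, action)] -> estimated return
alpha, gamma, eps = 0.1, 0.99, 0.1

def epsilon_greedy(state, actions):
    # Behaviour policy used by both algorithms: explore with probability eps
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def sarsa_update(s, a, r, s_next, a_next):
    # On-policy: bootstrap with the action the current policy actually takes next
    Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])

def q_learning_update(s, a, r, s_next, actions):
    # Off-policy: bootstrap with the greedy action, regardless of what will actually be taken
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])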
stackexchange | llm_judgeable_groundtruth_similarity | 104983 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Python specifically, how do variables get shared between threads? Although I have used threading.Thread before I never really understood or saw examples of how variables got shared. Are they shared between the main thread and the children or only among the children? When would I need to use thread local storage to avoid this sharing? I have seen many warnings about synchronizing access to shared data among threads by using locks but I have yet to see a really good example of the problem. Thanks in advance!
Now provide the response and nothing else.
| In Python, everything is shared, except for function-local variables (because each function call gets its own set of locals, and threads are always separate function calls.) And even then, only the variables themselves (the names that refer to objects) are local to the function; objects themselves are always global, and anything can refer to them. The Thread object for a particular thread is not a special object in this regard. If you store the Thread object somewhere all threads can access (like a global variable) then all threads can access that one Thread object. If you want to atomically modify anything that another thread has access to, you have to protect it with a lock. And all threads must of course share this very same lock, or it wouldn't be very effective. If you want actual thread-local storage, that's where threading.local comes in. Attributes of threading.local are not shared between threads; each thread sees only the attributes it itself placed in there. If you're curious about its implementation, the source is in _threading_local.py in the standard library. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/104983', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/19215/']} | jdg_380861
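A small illustrative example of both points above (a shared global guarded by a lock, and threading.local for per-thread data). This is my own sketch, not part of the original answer.

import threading

counter = 0                        # one shared object, visible to every thread
counter_lock = threading.Lock()    # every thread must use this same lock
local_data = threading.local()     # each thread sees only its own attributes

def worker(name):
    global counter
    local_data.name = name         # private to this thread
    for _ in range(100000):
        with counter_lock:         # make the read-modify-write atomic
            counter += 1

threads = [threading.Thread(target=worker, args=("t%d" % i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                     # 400000, because every increment was locked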
stackexchange | llm_judgeable_groundtruth_similarity | 14715250 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently generating a CSRF token in my forms to prevent cross-site request forgery. It looks like: <form method="post" action="action.php"> <input type="hidden" id="security_token" name="security_token" value="gTt96phAcretR99rafEjepHebrEZadEdezadagaZ3gAS5es33WReJeZaMADU2AWr" /> ...</form> The problem is that I have multiple forms on a single page. Must I create a security token for each form so security_token_1 , security_token_2 , or can I simply instead of generating the security token inside of forms, append it to a property to the entire body tag like: <body data-csrf-token="gTt96phAcretR99rafEjepHebrEZadEdezadagaZ3gAS5es33WReJeZaMADU2AWr">...</body> Is this insecure in any way? It simplifies things quite a bit, since I can simply append the security token to the body element instead of dealing with multiple security tokens. Thanks for the insight and comments.
Now provide the response and nothing else.
| There really isn't any reason you can't have the same generated token for both forms, with each hidden field in each form having the same name attribute. After all, what you are really trying to validate is that a form request is inbound from a user with a valid session, and only one form is going to be actively posted at a time. Thus you are comparing the token posted against a token stored in session for the user. There need not be more than one token value in order to do this. For your case of needing to update the tokens based on AJAX posts, what you would need to do is, as you say, pass the newly created token back in the AJAX response, then update the hidden field values to the new token value. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14715250', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/425964/']} | jdg_380862 |
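To illustrate the one-token-per-session idea from the answer above, here is a minimal, language-agnostic sketch written in Python (the original question is PHP; the session dict and the function names are hypothetical stand-ins for your framework's session handling):

import secrets, hmac

session = {}   # stand-in for the real server-side session store

def get_csrf_token():
    # One token per session; every form on the page embeds this same value
    if "csrf_token" not in session:
        session["csrf_token"] = secrets.token_hex(32)
    return session["csrf_token"]

def is_valid_csrf(posted_token):
    expected = session.get("csrf_token", "")
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(expected, posted_token or "")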
stackexchange | llm_judgeable_groundtruth_similarity | 637183 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm building a product that will use HS 2.0 on USB C for data transfer and charging. But I want to make a dumb cable that just has 3 wires connected to ECG leads to plug into the same USB C port. Can I keep those 3 pins on the USB socket always connected to my A2D, and simply ignore the data coming in on them if it happens to have a real USB cable connected (presumably by looking for incoming power)?
Now provide the response and nothing else.
| No. Please don't use the USB connector for ECG leads, because that's unsafe. Let's do a spot of risk analysis. Imagine the ECG electrodes connected to a patient on one end, and to a USB plug on the other end. Imagine all the places where somebody could accidentally stick the USB plug into: laptops, smartphones, counterfeit Chinese wall chargers. What kind of benefit can outweigh such risk? More generally: exposing common connectors (HDMI, USB, RJ-45, etc.) for other purposes is usually a bad idea. But the opposite, using an uncommon or proprietary connector for common communication interfaces, is fine. | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/637183', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/323177/']} | jdg_380863
stackexchange | llm_judgeable_groundtruth_similarity | 1760270 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The Jacobi theta function $\theta_4$ is defined by $$\displaystyle \theta_4(q)=\sum_{n \in \mathbb{Z}} (-1)^n q^{n^2} \tag{1}$$For this question, set $q=\large e^{-\pi x}$ and $\theta_4 \equiv \theta_4(q)$. Define $\theta_3(q)=\theta_4(-q)$. Using Lambert-Series representation for powers of $\theta_4$ (which I will describe in a moment) and integrating term by term, I have obtained a family of neat identites:$$ \int_0^{\infty} \frac{\theta_4^2}{1+x^2} dx=1 \tag{2}$$$$ \int_0^{\infty} \frac{\theta_4^4}{1+x^2} dx=\frac{4 \ln2}{\pi} \tag{3}$$$$ \int_0^{\infty} \frac{\theta_4^6}{1+x^2} dx=\frac{16 G}{\pi^2}-\frac23 \tag{4}$$$$ \int_0^{\infty} \frac{\theta_4^8}{1+x^2} dx=\frac{20 \zeta(3)}{\pi^3} \tag{5}$$Here, $G$ is Catalan's constant. These identities check out to a lot of digits, according to Mathematica . As I mentioned, in order to drive these identities I used Lambert series representations for powers of $\theta_4$ which I have found online. For instance: $$\theta_4^2 = 1+4\sum_{n=1}^{\infty} \frac{(-1)^n q^n}{1+q^{2n}} \tag{6}$$(See e.g. [1], or use $r_2(n)=4\sum_{d \mid n} \sin(\frac{\pi}{2} d)$ ([2]), and switch order of summation) $$\theta_4^6=1+16\sum_{n=0}^{\infty}\frac{(-1)^n n^2 q^n}{1+q^{2n}}+4\sum_{n=0}^{\infty} \frac{(-1)^n (2n+1)^2 q^{2n+1}}{1+q^{2n+1}} \tag{7}$$(A proof is given in [3], together with proofs for similar formulas for the $4$th power of $\theta_4$, and the $8$th power.) I am interested in a closed form for the integral $$I(n)= \int_0^{\infty} \frac{\theta_4(e^{-\pi x})^n}{1+x^2} dx.\tag{8}$$ 1st Question I have searched the web for a similar Lambert series combination for $\theta_4^{10}$, but all my efforts remain fruitless. Also, I wasn't able to derive one using the approach taken in [3]. Can we find a formula, possibly in the form of $(6)$ or $(7)$, for $\theta_4^{10}(q)$? Such a formula may be used to find the closed form of $I(10)$. 2nd Question Can we find closed forms for $I(n)$ for other values of $n$? In general, can we find a Lambert series representation of $\theta_4^n$ for each even $n$? If not, for which $n$ we can find one and for which we cannot? References [1]: Proving $\left(\sum_{n=-\infty}^\infty q^{n^2} \right)^2 = \sum_{n=-\infty}^\infty \frac{1}{\cos(n \pi \tau)}$ ,(Note that $\theta_4(q)=\theta_3(-q)$) [2]: Eric W. Weisstein, Sum of Squares Function ,Mathworld.(25),(The line below references proofs). Link [3]: George E. Andrews, Richard Lewis and Zhi-Guo Liu, An identity relating a theta function to a sum of Lambert series , (7)-(9). This question has been edited. For more information about how I obtained these results, check the original version of this post. $\displaystyle \large \mathbf{Progress}\,\mathbf{Report}$ Thanks to Paramanand Singh (see his answer below) I've been able to make some progress. 
Again, we set $q=e^{-\pi x}$ and$$\theta_2 \equiv \theta_2(q)= \sum_{n \in \mathbb{Z}} q^{(n+\frac12)^2}\\\theta_3\equiv \theta_3(q)=\sum_{n \in \mathbb{Z}} q^{n^2}\\\theta_4\equiv \theta_4(q)=\sum_{n \in \mathbb{Z}}(-1)^n q^{n^2}$$We also have the relations $$\theta_2^4+\theta_4^4=\theta_3^4\\ \theta_2(e^{-\pi/x})=\frac1{\sqrt{x}}\theta_4(e^{-\pi x})\\ \theta_3(e^{-\pi/x})=\frac1{\sqrt{x}}\theta_3(e^{-\pi x})\tag{9}$$I will omit all the intermediate steps and just state what I've found so far, and also some other related identities which I'venot proven, but match numerically.So, using Singh's formulas (changing $q$ to $-q$ and rewriting in terms of hyperbolic functions), I get that$$\int_0^{\infty} \frac{\theta_4^{10}}{1+x^2}dx=\frac{768 \beta(4)}{5 \pi^4}-\frac{32}{75}-2\int_0^{\infty} \frac{\theta_2^4\theta_4^6}{1+x^2}dx\tag{10}$$$$\int_0^{\infty} \frac{\theta_4^{12}}{1+x^2}dx=\frac{450 \zeta(5)}{\pi^5}-\int_0^{\infty} \frac{\theta_2^4\theta_3^4\theta_4^4}{1+x^2}dx\tag{11}$$Notice that $(10)$ and $(11)$ are a bit different from the expressions I wrote in the comments on Singh's answer.Numerical evidence suggests that:$$\int_0^{\infty} \frac{\theta_2^4\theta_3^4\theta_4^4}{1+x^2}dx=\frac23\int_0^{\infty} \theta_2^4\theta_3^4\theta_4^4 dx\tag{12}$$$$\int_0^{\infty} \frac{\theta_2^4\theta_3^4\theta_4^2}{1+x^2}dx=\frac83\int_0^{\infty} \theta_2^2\theta_3^4\theta_4^4 dx\tag{13}$$$$\int_0^{\infty} \theta_2^4\theta_4^2dx=\int_0^{\infty} \theta_2^2\theta_4^2dx=1\tag{14}$$$$\int_0^{\infty} \frac{\theta_2^4\theta_4^4}{1+x^2}dx=\frac{8\zeta(3)}{\pi^3}\tag{15}$$$$\int_0^{\infty} \frac{\theta_2^4\theta_4^2}{1+x^2}dx=\frac23\tag{16}$$All of these just make me more confident that an ultimate colsed form for both $(10)$ ,$(11)$ and higher power combinations of theta functions exist. As a side note, these integrals identities can be translated into identites about lattice sums, by expanding the theta functions into their series definition.
Now provide the response and nothing else.
| This is a sequel to my comments to the question which was too long to fit in another comment. We have the formulas for $\vartheta_{3}^{10}(q), \vartheta_{3}^{12}(q)$ from Topics in Analytic Number Theory by Rademacher (famous for proving an infinite series formula to calculate the number of partitions of a positive integer) on page 198:\begin{align}\vartheta_{3}^{10}(q) &= 1 + \frac{4}{5}\left\{\sum_{n = 1}^{\infty}\frac{2n^{4}q^{n}}{1 + q^{2n}} + \sum_{n = 1}^{\infty}(-1)^{n - 1}\frac{(2n - 1)^{4}q^{2n - 1}}{1 - q^{2n - 1}}\right\} + \frac{2}{5}\vartheta_{3}^{2}(q)\vartheta_{2}^{4}(q)\vartheta_{4}^{4}(q)\tag{1}\\\vartheta_{3}^{12}(q) &= 1 + 8\sum_{n = 1}^{\infty}\frac{n^{5}q^{n}}{1 - q^{2n}} - 8\sum_{n = 1}^{\infty}(-1)^{n}\frac{n^{5}q^{2n}}{1 - q^{2n}} + \vartheta_{2}^{4}(q)\vartheta_{3}^{4}(q)\vartheta_{4}^{4}(q)\tag{2}\end{align} Finding a general formula for $\vartheta_{3}^{k}(q)$ for even positive integer $k$ is a difficult problem but using the methods given in Rademacher's book it looks like it is possible to obtain such formulas at the cost of heavy symbolic manipulation for a specific $k$. Update : I found one pattern in your formulas by using the substitution $x = K'(k)/K(k)$ so that when $x = 0$ then $k = 1$ and when $x = \infty$ then $k = 0$ and moreover $$\frac{dx}{dk} = -\frac{\pi}{2kk'^{2}K^{2}}$$ so that the integral of $\vartheta_{4}^{2n}(e^{-\pi x})/(1 + x^{2})$ is transformed into $$\int_{0}^{1}\left(\frac{2k'K}{\pi}\right)^{n}\frac{1}{K^{2} + K'^{2}}\frac{\pi}{2kk'^{2}}\,dk = \left(\frac{2}{\pi}\right)^{n - 1}\int_{0}^{1}\frac{k^{-1}k^{'(n - 2)}K^{n}}{K^{2} + K'^{2}}\,dk$$ and that explains (at least to some extent) the occurrence of $\dfrac{1}{\pi^{n - 1}}$ in your formulas. Next it is easy to prove one of the formulas in $(14)$. We have $$\vartheta_{2}^{2}\vartheta_{4}^{2} = kk'(2K/\pi)^{2}$$ and hence $$\int_{0}^{\infty}\vartheta_{2}^{2}\vartheta_{4}^{2}\,dx = \int_{0}^{1}kk'\cdot\frac{4K^{2}}{\pi^{2}}\cdot\frac{\pi}{2kk'^{2}K^{2}}\,dk = \frac{2}{\pi}\int_{0}^{1}\frac{dk}{\sqrt{1 - k^{2}}} = 1$$ I wonder if similar technique can be applied to prove other formulas. If $q = e^{-\pi x}$ then $dx = -\dfrac{dq}{\pi q}$ and interval $(0, \infty)$ changes to $(0, 1)$ and hence we can express the first integral of $(14)$ as $$\frac{1}{\pi}\int_{0}^{1}\vartheta_{2}^{4}(q)\vartheta_{4}^{2}(q)\,\frac{dq}{q} = \frac{16}{\pi}\int_{0}^{1}\psi^{4}(q^{2})\phi^{2}(-q)\,dq$$ Next $$\psi^{4}(q^{2}) = \sum_{n = 0}^{\infty}\frac{(2n + 1)q^{2n}}{1 - q^{4n + 2}}, \phi^{2}(-q) = 1 + 4\sum_{n = 1}^{\infty}\frac{(-1)^{n}q^{n}}{1 + q^{2n}}$$ I wonder if you can utilize the above Lambert series to prove that the desired integral is equal to $1$. It appears that if we express the integrand as a Lambert series then it can also be expressed as the logarithmic derivative of some product of theta functions and the integral can be evaluated. See this paper regarding some integrals related to theta functions (all of it was given by Ramanujan in his lost notebook). | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1760270', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/219995/']} | jdg_380864 |
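Side note (not part of the answer above): identity (6) from the question, which these Lambert-series formulas generalize, is easy to sanity-check numerically. A rough sketch with mpmath, assuming its jtheta(n, z, q) convention in which jtheta(4, 0, q) equals the theta function $\vartheta_4(q)$ used here:

from mpmath import mp, jtheta, nsum, inf

mp.dps = 30
q = mp.mpf(1) / 3   # any 0 < q < 1 will do

lhs = jtheta(4, 0, q) ** 2
rhs = 1 + 4 * nsum(lambda n: (-1) ** int(n) * q ** n / (1 + q ** (2 * n)), [1, inf])
print(lhs - rhs)    # should be ~0 to working precision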
stackexchange | llm_judgeable_groundtruth_similarity | 30782985 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to double a Stream (no DoubleStream ). Meaning I start with a stream and want to get a new stream where each element of the old stream is streamed twice. So 1,2,3,4,4,5 gives us 1,1,2,2,3,3,4,4,4,4,5,5. Is there such a stream operation?
Now provide the response and nothing else.
| Create an inner stream which will contain the current element two times and flatMap this stream.

stream.flatMap(e -> Stream.of(e, e))

If you want to multiply the number of elements by n you can create a utility method like this one:

public static <T> Stream<T> multiplyElements(Stream<T> in, int n) {
    return in.flatMap(e -> IntStream.range(0, n).mapToObj(i -> e));
    // we can also use IntStream.rangeClosed(1, n)
    // but I am used to iterating from 0 to n (where n is excluded)
}

(but try to use a better name for this method, since the current one may be ambiguous)

Usage example:

multiplyElements(Stream.of(1, 2), 3).forEach(System.out::println);

Output:

1
1
1
2
2
2
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30782985', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2706344/']} | jdg_380865
stackexchange | llm_judgeable_groundtruth_similarity | 4954871 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I hide a TemplateField column in a GridView? I tried the following: <asp:TemplateField ShowHeader="False" Visible='<%# MyBoolProperty %>' ><ItemTemplate> <asp:LinkButton ID="attachmentButton" runat="server" ... /></ItemTemplate> but it didn't work and gives the following error: Databinding expressions are only supported on objects that have a DataBinding event. System.Web.UI.WebControls.TemplateField does not have a DataBinding event. I also tried to hide it programmatically, but it seems it's not possible to get a column by name because there is no name for a TemplateField column.
Now provide the response and nothing else.
| protected void OnRowCreated(object sender, GridViewRowEventArgs e){ e.Row.Cells[columnIndex].Visible = false;} If you don't prefer hard-coded index, the only workaround I can suggest is to provide a HeaderText for the GridViewColumn and then find the column using that HeaderText . protected void UsersGrid_RowCreated(object sender, GridViewRowEventArgs e){ ((DataControlField)UsersGrid.Columns .Cast<DataControlField>() .Where(fld => fld.HeaderText == "Email") .SingleOrDefault()).Visible = false;} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4954871', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/322355/']} | jdg_380866 |
stackexchange | llm_judgeable_groundtruth_similarity | 39549469 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to run an array through a for loop to append the items checked onto each array with each one appending the suffix with $n (1-3). Thank you for your suggestions I am one step closer, now I am getting each suffix on one item. I want each one as its own item in the array. Can anyone see the error? I have updated my code and it is one step closer to the solution and underneath it is what I am aiming for. $equip = 'Phone';$ailments_checkvar = explode(', ', 'Cracked, Scratched, Bent, Twisted');foreach ($ailments_checkvar as &$value) { $value = 'directory/'.$equip.'_'.$value.'';}unset($value);$duplicateArray = $ailments_checkvar;foreach ($ailments_checkvar as $key) { $duplicateArray[] = $key;}foreach ($ailments_checkvar as $key) { $duplicateArray[] = $key;}for ($n = 1; $n <= 3; $n++) {foreach ($duplicateArray as &$valueN) { $valueN = $valueN.'_0'.$n.'.pdf';}}unset($valueN);print_r ($duplicateArray); Getting this Array ( [0] => directory/Phone_Cracked_01.pdf_02.pdf_03.pdf [1] => directory/Phone_Scratched_01.pdf_02.pdf_03.pdf [2] => directory/Phone_Bent_01.pdf_02.pdf_03.pdf [3] => directory/Phone_Twisted_01.pdf_02.pdf_03.pdf [4] => directory/Phone_Cracked_01.pdf_02.pdf_03.pdf [5] => directory/Phone_Scratched_01.pdf_02.pdf_03.pdf [6] => directory/Phone_Bent_01.pdf_02.pdf_03.pdf [7] => directory/Phone_Twisted_01.pdf_02.pdf_03.pdf [8] => directory/Phone_Cracked_01.pdf_02.pdf_03.pdf [9] => directory/Phone_Scratched_01.pdf_02.pdf_03.pdf [10] => directory/Phone_Bent_01.pdf_02.pdf_03.pdf [11] => directory/Phone_Twisted_01.pdf_02.pdf_03.pdf ) And want to make this... Array ( [0] => directory/Phone_Cracked_01.pdf [1] => directory/Phone_Cracked_02.pdf [2] => directory/Phone_Cracked_03.pdf [3] => directory/Phone_Scratched_01.pdf [4] => directory/Phone_Scratched_02.pdf[5] => directory/Phone_Scratched_03.pdf[6] => directory/Phone_Bent_01.pdf [7] => directory/Phone_Bent_02.pdf [8] => directory/Phone_Bent_03.pdf [9] => directory/Phone_Twisted_01.pdf[10] => directory/Phone_Twisted_02.pdf[11] => directory/Phone_Twisted_03.pdf )
Now provide the response and nothing else.
| itertools.islice and two arg iter can be used to accomplish this, but it's a little funny:

from itertools import islice

n = 5  # Or whatever chunk size you want
with open(filename, 'rb') as f:
    for n_lines in iter(lambda: tuple(islice(f, n)), ()):
        process(n_lines)

This will keep islice-ing off n lines at a time (using tuple to actually force the whole chunk to be read in) until f is exhausted, at which point it will stop. The final chunk will be less than n lines if the number of lines in the file isn't an even multiple of n. If you want all the lines to be a single string, change the for loop to be:

# The b prefixes are ignored on 2.7, and necessary on 3.x since you opened
# the file in binary mode
for n_lines in iter(lambda: b''.join(islice(f, n)), b''):

Another approach is to use izip_longest for the purpose, which avoids lambda functions:

from future_builtins import map  # Only on Py2
from itertools import izip_longest  # zip_longest on Py3

# gets tuples possibly padded with empty strings at end of file
for n_lines in izip_longest(*[f]*n, fillvalue=b''):

# Or to combine into a single string:
for n_lines in map(b''.join, izip_longest(*[f]*n, fillvalue=b'')):
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39549469', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6591738/']} | jdg_380867
stackexchange | llm_judgeable_groundtruth_similarity | 13891559 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
def foo(a, b, c):
    print a+b+c

i = [1,2,3]

Is there a way to call foo(i) without explicit indexing on i? Trying to avoid foo(i[0], i[1], i[2])
Now provide the response and nothing else.
| Yes, use foo(*i):

>>> foo(*i)
6

You can also use * in the function definition: def foo(*vargs) puts all non-keyword arguments into a tuple called vargs. And the use of **, e.g. def foo(**kargs), will put all keyword arguments into a dictionary called kargs:

>>> def foo(*vargs, **kargs):
...     print vargs
...     print kargs
...
>>> foo(1, 2, 3, a="A", b="B")
(1, 2, 3)
{'a': 'A', 'b': 'B'}
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13891559', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/178511/']} | jdg_380868
stackexchange | llm_judgeable_groundtruth_similarity | 154182 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Are pseudo zeroth-order reactions possible? I have been taught that pseudo first-order reactions take place and they can be considered pseudo zeroth-order reactions because one reactant is in excess of the other(s). What are the necessary conditions to label a reaction pseudo-zeroth order?
Now provide the response and nothing else.
| I think to some extent, you can think about all zero-order reactions as "pseudo-zero-order" reactions. This is because it's not possible for a reaction to be true zero-order. How can it be that the rate of a reaction does not depend on any one of the reactants? So, anything that is zero-order means that there is some artifact in the system, or that something else is present. In general, zero-order reactions happen when the reactant is not able to react at the theoretical maximum rate due to something preventing it. For example, for a catalytic process, the reaction happens at the surface of the catalyst, so if you have a large excess of reactant, all of the surface of the catalyst will be covered up with the reactant, and the rate will become constant. When the concentration of the reactant goes down so that all of the catalyst surface is not in use, the reaction will show a first/second or higher order rate equation. An example of such reaction is the decomposition of nitrous oxide: $$\ce{2N2O\overset{\Delta, Ni}{->}2N2 + O2}$$ When nickel catalyst is used, the rate is limited by the total surface area of the catalyst. Once the catalyst is working at the full capacity, the rate won't increase, even if the concentration of the reactants is increased further. Without the catalyst, the reaction is slower, but it shows the standard second-order rate equation that we would expect by looking at the equation. For reactions that are driven only by light (not just initiated by light!), the rate would be limited by the intensity of the light. Once there are enough species that can absorb all of the light, the reaction rate won't go up even if more reactants are added. Then there are biochemical reactions which are diffusion controlled. So, in summary, all zero-order reactions are zero-order because there is some reactant which is in excess than something else the reaction depends upon (which can be catalyst, light, diffusion etc.). You can say they are probably all pseudo zero-order. | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/154182', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/103220/']} | jdg_380869 |
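A standard worked illustration of the saturation argument above (my own addition, not part of the original answer) is the Michaelis-Menten / Langmuir-type rate law:

$$v=\frac{V_{\max}[\mathrm{S}]}{K_m+[\mathrm{S}]}\;\approx\;\begin{cases}\dfrac{V_{\max}}{K_m}[\mathrm{S}], & [\mathrm{S}]\ll K_m \quad(\text{first order})\\[1ex] V_{\max}, & [\mathrm{S}]\gg K_m \quad(\text{apparent zero order})\end{cases}$$

The "zero order" regime is exactly the limit in which the catalyst surface, enzyme, or light flux is saturated, matching the catalyst and light-driven examples described above.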
stackexchange | llm_judgeable_groundtruth_similarity | 960625 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It seems I can't find the correct syntax to define a nhibernate filter using fluent Nhibernate. I'm trying to follow this ayende's blogpost: http://ayende.com/Blog/archive/2006/12/26/LocalizingNHibernateContextualParameters.aspx I defined the formula on my property with .FormulaIs() method but can't find on google how to translate this definition to fluent nhibernate: < filter-def name='CultureFilter'> < filter-param name='CultureId' type='System.Int32'/> < /filter-def>
Now provide the response and nothing else.
| If you build Fluent from source, there is now support for filters. You use them like this:

First create a class inheriting from FluentNHibernate.Mapping.FilterDefinition:

using FluentNHibernate.Mapping;

namespace PonyApp.FluentFilters
{
    public class PonyConditionFilter : FilterDefinition
    {
        public PonyConditionFilter()
        {
            WithName("PonyConditionFilter")
                .AddParameter("condition", NHibernate.NHibernateUtil.String);
        }
    }
}

In your ClassMap for your class, use the ApplyFilter method:

namespace PonyApp.Entities.Mappings
{
    public class PonyMap : ClassMap<Pony>
    {
        public PonyMap()
        {
            Id(x => x.Id);
            Map(x => x.PonyName);
            Map(x => x.PonyColor);
            Map(x => x.PonyCondition);
            ApplyFilter<PonyConditionFilter>("PonyCondition = :condition");
        }
    }
}

Then add the filter to your fluent config:

Fluently.Configure()
    .Mappings(m => m.FluentMappings.Add(typeof(PonyConditionFilter)))
    //blah blah bunches of other important stuff left out
    .BuildSessionFactory();

Then you can turn it on and off just as you would with vanilla NHibernate:

session.EnableFilter("PonyConditionFilter").SetParameter("condition", "Wonderful");
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/960625', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/78745/']} | jdg_380870
stackexchange | llm_judgeable_groundtruth_similarity | 60482018 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create a Bootstrap sidebar like this picture here. I have looked at all the code on react-bootstrap and Twitter Bootstrap and I am yet to find a how-to code this. Basically, if they are viewing on a desktop, I want the sidebar to be visible, otherwise hidden. The sidebar should stay still while the content on the page scrolls up and down.
Now provide the response and nothing else.
| Ok so for people who want to make a sidebar sadly the news is you gotta make it all yourself. What I have done is the following. See the example at https://github.com/StartBootstrap/startbootstrap-simple-sidebar Create sidebar.js somewhere in your app. import React from "react";import {Nav} from "react-bootstrap";import { withRouter } from "react-router";import '../pages/style/Dashboard.css'const Side = props => { return ( <> <Nav className="col-md-12 d-none d-md-block bg-light sidebar" activeKey="/home" onSelect={selectedKey => alert(`selected ${selectedKey}`)} > <div className="sidebar-sticky"></div> <Nav.Item> <Nav.Link href="/home">Active</Nav.Link> </Nav.Item> <Nav.Item> <Nav.Link eventKey="link-1">Link</Nav.Link> </Nav.Item> <Nav.Item> <Nav.Link eventKey="link-2">Link</Nav.Link> </Nav.Item> <Nav.Item> <Nav.Link eventKey="disabled" disabled> Disabled </Nav.Link> </Nav.Item> </Nav> </> ); }; const Sidebar = withRouter(Side); export default Sidebar My Dashboard.css has the following in it. .sidebar { position: fixed; top: 0; bottom: 0; left: 0; min-height: 100vh !important; z-index: 100; padding: 48px 0 0; box-shadow: inset -1px 0 0 rgba(0, 0, 0, .1); } #sidebar-wrapper{ min-height: 100vh !important; width: 100vw; margin-left: -1rem; -webkit-transition: margin .25s ease-out; -moz-transition: margin .25s ease-out; -o-transition: margin .25s ease-out; transition: margin .25s ease-out; } #sidebar-wrapper .sidebar-heading { padding: 0.875rem 1.25rem; font-size: 1.2rem; } #page-content-wrapper { min-width: 0; width: 100%; } Then final stepIn the file you want it to be show in do the following import React from "react";import {Container, Row, Col, Card, Form, Button } from "react-bootstrap";import { withRouter } from "react-router";import Sidebar from "../moduls/sidebar.js";import './style/Dashboard.css'const Dash = props => { return ( <> <Container fluid> <Row> <Col xs={2} id="sidebar-wrapper"> <Sidebar /> </Col> <Col xs={10} id="page-content-wrapper"> this is a test </Col> </Row> </Container> </> ); }; const Dashboard = withRouter(Dash); export default Dashboard | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/60482018', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/275414/']} | jdg_380871 |
stackexchange | llm_judgeable_groundtruth_similarity | 970696 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't really understand why there are so many universal properties in math or why they all need to be highlighted. For example, I'm studying some Algebra right now. I have found three universal properties that are all basically saying the same thing, although the details are different: Universal property 1 : If $R, S$ are rings and $\theta: R \to S$ is a ring map, then for each $s \in S$ , there is a unique map $\hat{\theta_{s}} : R[x] \to S$ such that if $i: R \to R[x]$ is the inclusion map, we get $\theta = \hat{\theta_{s}} \circ i$ . Universal property 2 : If $D$ is an integral domain and $F$ is a field with $\phi : D \to F$ a one-to-one ring map, then there is a unique map $\hat{\phi} : Q(D) \to F$ such that $\hat{\phi} \circ \pi = \phi$ , where $\pi : D \to Q(D)$ sends $a$ to $\frac{a}{1}$ ( $Q(D)$ the fractional field of $D$ ). Universal property two was used to prove that in a field of characteristic $0$ , the rationals are a subfield, and in a field of characteristic $p$ ( $p$ prime), $\mathbb{Z}_{p}$ is a subfield. Universal property 3 : If $R, S$ are rings, $\phi: R \to S$ is a ring map, and $I$ is an ideal such that $I \subseteq \text{ker}(\phi)$ , then there is a unique map $\overline{\phi} : R/I \to S$ such that $\phi = \overline{\phi} \circ i$ where $i: R \to R/I$ maps $a$ to $\overline{a}$ . It is really hard for me to keep track of all of these universal properties, especially when they are all usually referenced by the single name "universal property". Is there a point to all of these universal properties? Honestly, I don't even know if my question is clear, or how to ask a better question in this regard.
Now provide the response and nothing else.
| A universal property of some object $A$ tells you something about the functor $\hom(A,-)$ (or $\hom(-,A)$, but this is just dual). For example, $\hom(R[x],S) \cong |S| \times \hom(R,S)$ is the universal property of the polynomial ring (where $|S|$ denotes the underlying set of $S$). Conversely, we may consider the functor which takes a commutative ring $S$ to $|S| \times \hom(R,S)$ and say that it is a representable functor , represented by $R[x]$. This can be also interpreted as the statement that $R[x]$ is the free commutative $R$-algebra on one generator, see free object for categorical generalizations. Roughly, representing a functor means to give a universal example of, or to classify, the things which the functor describes. This happens all the time in mathematics. Conversely, whenever you have an object $A$, it is interesting to ask what it classifies, i.e. to look at $\hom(A,-)$ and give a more concise description of it. The Yoneda Lemma tells you that all information of $A$ is already encoded in $\hom(A,-)$. Also, one of the main insights of category theory is that it is very useful to work with morphisms instead of elements. For example, what the quotient ring $R/I$ does for us is not really that we can compute with cosets, but rather that it is the universal solution to the problem to enlarge $R$ somehow to kill (the elements of) $I$. In other words, $\hom(R/I,S) \cong \{f \in \hom(R,S) : f|_I = 0\}$. This makes things like $(R/I)/(J/I) = R/J$ for $I \subseteq J \subseteq R$ really trivial : On the left side, we first kill $I$ and then $J$, which is the same as to kill $J$ directly, which happens on the right hand side. No element calculations are necessary. (On math.stackexchange, I have posted lots of examples for this kind of reasoning.) Quotient rings, quotient vector spaces, quotient spaces etc. are all special cases of colimits . The universal property of the field of fractions states that $\hom(Q(D),F) \cong \hom(D,F)$, where on the right hand side we mean injective homomorphisms. This says that $Q(-)$ is left adjoint to the forgetful functor from fields to integral domains (in each case with injective homomorphisms as morphisms). This is a special case of localizations . Adjunctions are ubiquitous in modern mathematics. They allow us to "approximate" objects of a category by objects of another category. So far I have only mentioned some patterns of universal properties, but not answered the actual "philosophical" question " Why are there so many universal properties in math?" in the title. Well first of all, they are useful, as explained above. Also notice that many objects of interest turn out to be quotients of universal objects. For example, every finitely generated $k$-algebra is a quotient of a polynomial algebra $k[x_1,\dotsc,x_n]$. Thus, if we understand this polynomial algebra and its properties, we may gain some information about all finitely generated $k$-algebras. A specific example of this type is Hilbert's Basis Theorem, which implies that finitely generated algebras over fields are noetherian. Perhaps one can say: Universal objects are there because we have invented them in order to study all objects. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/970696', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/131740/']} | jdg_380872 |
stackexchange | llm_judgeable_groundtruth_similarity | 497584 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Question Is there a curve on plane such that any line on the plane meets it (a non zero ) finite times ? What are the bounds on the number of such intersections. My question was itself inspired by this "Can you draw circles on the plane so that every line intersects at least one of them but no more than 100 of them?"
Now provide the response and nothing else.
| The cubic parabola $$y=x^3$$ has this property. A vertical line $x=c$ meets it exactly once, and any other line $y=ax+b$ meets it where $$x^3=ax+b,$$ which has at least one real solution (a real cubic always has a real root) and, by the Fundamental theorem of algebra, at most 3. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/497584', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/82412/']} | jdg_380873
stackexchange | llm_judgeable_groundtruth_similarity | 219619 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My understanding is that even when following proper cross-validation and model selection procedures, overfitting will happen if one searches for a model hard enough, unless one imposes restrictions on model complexity, period. Moreover, oftentimes people try to learn penalties on model complexity from the data, which undermines the protection they can provide. My question is: How much truth is there to the statement above? I often hear ML practitioners say: "At my company/lab, we always try every model available (e.g. from libraries like caret or scikit-learn) to see which one works best". I often argue that this approach can easily overfit even if they are serious about cross-validation and keep hold-out sets in any way they want. Moreover, the harder they search, the more likely they are to overfit. In other words, over-optimization is a real problem and there are no heuristics that can help you systematically fight against it. Am I wrong to think this way?
Now provide the response and nothing else.
| Not a whole answer, but one thing that people overlook in this discussion is what does Cross-Validation (for example) mean, why do you use it, and what does it cover? The problem I see with searching too hard is that the CV that people are doing is often within a single model. Easy to do by setting a folds= argument of the model fitting procedure. But when you go to multiple models, and even multiple procedures for creating multiple models, you add another layer or two which you haven't wrapped in CV. So they should be using nested CV. And they should also be using "Target Shuffling" (resampling/permutation testing) wrapped around their whole process to see how well their procedure would do if you break the relationship between dependent and independent variables -- i.e. how much better are you doing than random considering your entire process? | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/219619', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/2798/']} | jdg_380874 |
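A minimal sketch of the nested cross-validation idea mentioned above, using scikit-learn (my own illustration; the dataset, estimator and parameter grid are arbitrary placeholders):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score, KFold
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}

inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)   # hyperparameter / model selection
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)   # honest performance estimate

# The whole search is wrapped inside the outer CV, so the "searching hard" step
# is scored on folds it never saw during selection.
search = GridSearchCV(SVC(), param_grid, cv=inner_cv)
nested_scores = cross_val_score(search, X, y, cv=outer_cv)
print(nested_scores.mean())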
stackexchange | llm_judgeable_groundtruth_similarity | 626041 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The problem is $\displaystyle\int\frac{\sqrt{16-x^2}}{x}\mathrm{d}x$. I've attempted to use a trig substitution with $x=4\sin\theta$ and $\mathrm{d}x=4\cos\theta\ \mathrm{d}\theta$. This yields $ \displaystyle 4 \int\frac{\cos^2\theta}{\sin\theta}\mathrm{d}\theta$ and I attempted to substitute $1-\sin^2 \theta$ for the numerator but that did not appear to yield a tractable integral either. (Similar result attempting to substitute a double angle formula.) I attempted to do an integration by parts with $\displaystyle 4\int\frac{\cos\theta}{\sin\theta}\cos\theta\ \mathrm{d}\theta$ and $u=\cos\theta$ and $\displaystyle \mathrm{d}v=\frac{\cos\theta}{\sin\theta}\mathrm{d}\theta$ which gets me $\displaystyle \cos\theta\ln\sin\theta + \int\ln(\sin\theta) \sin\theta\ \mathrm{d} \theta$ and I don't know how to solve that integral either.
Now provide the response and nothing else.
| Write the integral as $$\int{\sqrt{16-x^2}\over x^2}x\,dx$$ then let $u^2=16-x^2$, so that $u\,du=-x\,dx$ and the substitution gives $$-\int{u\over16-u^2}u\,du=\int\left(1-{16\over16-u^2}\right)\,du$$ Partial fractions should finish things off. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/626041', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/118908/']} | jdg_380875 |
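For completeness, the partial-fraction step left to the reader in the answer above works out as follows (my own addition):

$$\int\left(1-\frac{16}{16-u^{2}}\right)du=\int\left(1-\frac{2}{4-u}-\frac{2}{4+u}\right)du=u+2\ln|4-u|-2\ln|4+u|+C,$$

so with $u=\sqrt{16-x^{2}}$,

$$\int\frac{\sqrt{16-x^{2}}}{x}\,dx=\sqrt{16-x^{2}}-2\ln\frac{4+\sqrt{16-x^{2}}}{4-\sqrt{16-x^{2}}}+C=\sqrt{16-x^{2}}-4\ln\left|\frac{4+\sqrt{16-x^{2}}}{x}\right|+C.$$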
stackexchange | llm_judgeable_groundtruth_similarity | 2333054 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Currently we are running a competition which is proceeding very well. Unfortunately we have all those cheaters back in business who are running scripts which automatically vote for their entries. We already spotted some cheaters by looking at the database entries by hand - 5-star ratings from the same browser exactly every 70 minutes, for example. Now as the userbase grows it gets harder and harder to identify them. What we have done until now: We store the IP and the browser and block that combination for a one-hour timeframe. Cookies won't help against these guys. We are also using a Captcha, which has been broken. Does anyone know how we could find patterns in our database with a PHP script or how we could block them more efficiently? Any help would be very appreciated...
Now provide the response and nothing else.
| Direct feedback elimination
This is more of a general strategy that can be combined with many of the other methods. Don't let the spammer know if he succeeds. You can either hide the current results altogether, only show percentages without the absolute number of votes, or delay the display of the votes.
Pro: good against all methods
Con: if the fraud is massive, percentage display and delay won't be effective

Vote flagging
Also a general strategy. If you have some reason to assume that the vote is by a spammer, count their vote but mark it as invalid, and delete the invalid votes at the end.
Pro: good against all detectable spam attacks
Con: skews the vote, harder to set up, false positives

Captcha
Use a CAPTCHA. If your Captcha is broken, use a better one.
Pro: good against all automated scripts.
Con: useless against pharyngulation

IP checking
Limit the number of votes an IP address can cast in a timespan.
Pro: Good against random dudes who constantly hit F5 in their browser
Pro: Easy to implement
Con: Useless against pharyngulation and elaborate scripts which use proxy servers.
Con: An IP address sometimes maps to many different users

Referrer checking
If you assume that one user maps to one IP address, you can limit the number of votes by that IP address. However this assumption usually only holds true for private households.
Pro: Easy to implement
Pro: Good against simple pharyngulation to some extent
Con: Very easy to circumvent by automated scripts

Email Confirmation
Use email confirmation and only allow one vote per email. Check your database manually to see if they are using throwaway emails. Note that you can add +foo to your username in an email address. [email protected] and [email protected] will both deliver the mail to the same account, so remember that when checking if somebody has already voted.
Pro: good against simple spam scripts
Con: harder to implement
Con: Some users won't like it

HTML Form Randomization
Randomize the order of choices. This might take a while for them to find out.
Pro: nice to have anyways
Con: once detected, very easy to circumvent

HTTPS
One method of vote faking is to capture the http request from a valid browser like Firefox and mimic it with a script; this doesn't work as easily when you use encryption.
Pro: nice to have anyway
Pro: good against very simple scripts
Con: more difficult to set up

Proxy checking
If the spammer votes via proxy, you can check for the X-Forwarded-For header.
Pro: good against more advanced scripts that use proxies
Con: some legitimate users can be affected

Cache checking
Try to see if the client loads all the uncached resources. Many spambots don't do this. I never tried this, I just know that this isn't usually checked by voting sites. An example would be embedding <img src="a.gif" /> in your html, with a.gif being some 1x1 pixel image. Then you have to set the http header for the request GET /a.gif with Cache-Control "no-cache, must-revalidate". You can set the http headers in Apache with your .htaccess file like this. (thanks Jacco)
Pro: uncommon method as far as I know
Con: slightly harder to set up

[Edit 2010-09-22] Evercookie
A so-called evercookie can be useful to track browser-based spammers
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2333054', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/200195/']} | jdg_380876
stackexchange | llm_judgeable_groundtruth_similarity | 256621 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Most of the time, when I comment a method, I'll follow this structure. /** * Short description of what the method does * * @param {Type} name. Description of the variable. * * @return {Type}. What the return is. */ Now if I have a method called uploadData , which takes one parameter, data , is it still necessary to give this parameter a description? I realize necessary is kind of subjective in this case, but I'm curious as to what the general consensus is. For the time being I've been giving them all descriptions just to keep the consistency of the comments, but it does seem very unnecessary. If it makes any difference, the languages I use are JavaScript, Java and PHP.
Now provide the response and nothing else.
| The fact that the only parameter to an upload function is to do with the upload data is not noteworthy. What is worth mentioning is e.g. what format this is in (is it a String giving the name of a local file? a string constituting the data itself? a byte stream?), what restrictions it is subject to (unlimited? no more than 10MB? only 10 per day?), whether all of the values technically allowed by the type are okay (is NULL okay? is the empty array okay? does it have to be a valid UTF8 byte sequence?), etc. If all of these issues are obviously clear to any caller of the method (e.g. if the parameter is a specifically constructed upload object which ensures that only valid data can go in), then go ahead and leave the parameter undocumented. But often it turns out there are additional preconditions, and it is almost always a good idea to make these explicit. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/256621', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/139852/']} | jdg_380877 |
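As a small illustration of the kind of @param notes the answer has in mind, here is a sketch (in Python, with made-up limits) showing a description that documents the contract rather than restating the parameter name:

def upload_data(data: bytes) -> str:
    """Upload a blob and return the id of the created resource.

    data: raw, already-encoded file contents; must be non-empty and at
        most 10 MB, otherwise ValueError is raised. (These restrictions
        are invented for illustration - the point is that the @param
        text carries the preconditions, not the obvious fact that
        'data' is the data to upload.)
    """
    if not data:
        raise ValueError("data must be non-empty")
    if len(data) > 10 * 1024 * 1024:
        raise ValueError("data must be at most 10 MB")
    return "resource-id"  # actual transfer omitted in this sketch

The same wording works verbatim as a JSDoc, Javadoc or PHPDoc @param description.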
stackexchange | llm_judgeable_groundtruth_similarity | 62118325 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Tailwind in a Gatsby.js project. My environment is VSCode, using Prettier code formatter. How do I get rid of these linting error alerts?
Now provide the response and nothing else.
| Solution for both .css and .scss At the root level of your project, update or create a dir .vscode with a file settings.json . Add the following to .vscode/settings.json: { "css.validate": false, "less.validate": false, "scss.validate": false} Install the vscode-stylelint extension. Install stylelint-config-standard : npm i stylelint-config-standard -D Create a stylelint.config.js file at the root level and add: module.exports = { extends: ['stylelint-config-recommended'], rules: { "at-rule-no-unknown": [ true, { ignoreAtRules: [ "tailwind", "apply", "variants", "responsive", "screen", ], }, ], "declaration-block-trailing-semicolon": null, "no-descending-specificity": null, },}; Restart VSCode. Results: You get rid of these SASS linting errors when using Tailwind CSS and keep doing CSS validation with stylelint. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/62118325', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8862138/']} | jdg_380878
stackexchange | llm_judgeable_groundtruth_similarity | 12317154 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
First I create a serial queue like this static dispatch_queue_t queue = dispatch_queue_create("myQueue", DISPATCH_QUEUE_SERIAL); then, at some unknown point in time a task gets added to the queue like this dispatch_async(queue, ^{ // do something, which takes some time}); If the first task hasn't finished yet, the new task will wait until the first completes (that's of course what a serial queue is for). But if I add 5 new tasks to the queue, while the original first one is still running, I don't want to execute new task no.1, then no.2, then no.3 and so on, but want to get rid of tasks 1 to 4 and directly start executing task no.5 after the original first task has finished. In other words, I want to pop any waiting task (not the one that is currently running) off the queue, if I add a new one. Is there a build in mechanism for this or do I have to implement this myself? And for the latter, how would I identify single tasks inside a queue and remove them?
Now provide the response and nothing else.
| Once a block has been submitted to a GCD dispatch queue, it will run. There is no way to cancel it. You can, as you know, implement your own mechanism to "abort" the block execution early. An easier way to do this would be to use NSOperationQueue , as it already provides an implementation for canceling pending operations (i.e., those not yet running), and you can easily enqueue a block with the new-ish addOperationWithBlock method. Though NSOperationQueue is implemented using GCD, I find GCD much easier to use in most cases. However, in this case, I would seriously consider using NSOperationQueue because it already handles canceling pending operations. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12317154', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/782862/']} | jdg_380879 |
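Since GCD itself offers no way to drop already-queued blocks, the "implement your own mechanism" route boils down to keeping only the newest pending job and letting stale ones be skipped. The sketch below shows that idea in a language-agnostic way (Python, with invented names); on iOS the answer's recommendation of NSOperationQueue with cancellation of pending operations expresses the same thing.

import threading

class LatestOnlyWorker:
    # Serial worker: a newly submitted job replaces any job that has not
    # started yet, i.e. pending tasks 1-4 are discarded once task 5 arrives.
    def __init__(self):
        self._cond = threading.Condition()
        self._pending = None
        self._stopped = False
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def submit(self, job):
        with self._cond:
            self._pending = job          # overwrite any stale pending job
            self._cond.notify()

    def stop(self):
        with self._cond:
            self._stopped = True
            self._cond.notify()
        self._thread.join()

    def _run(self):
        while True:
            with self._cond:
                while self._pending is None and not self._stopped:
                    self._cond.wait()
                if self._pending is None and self._stopped:
                    return
                job, self._pending = self._pending, None
            job()                        # run outside the lock, one job at a time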
stackexchange | llm_judgeable_groundtruth_similarity | 165865 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Induction is one of the most common tools in mathematics, and everybody knows the ordinary induction and the strong induction. However, in some proofs induction is applied in an unexpected and elegant way. This can happen in two ways: (1) the proof uses a special form of induction; (2) the variable that is inducted on is surprising. To clarify what 1 and 2 mean, let me give an example of both. Cauchy's proof of the arithmetic-geometric inequality $\frac{x_1+\cdots+x_n}{n}\geq \sqrt[n]{x_1\cdots x_n}$ proceeds by showing that the case $n=2^k$ implies the case $n=2^{k+1}$ and that the case of $n=k$ implies the case $n=k-1$. This is an unconventional type of induction that turns out to be suitable for this theorem (though it is not the only way to prove it). A proof of the van der Waerden theorem (given here ) goes as follows: Assume that $W(r,k-1)$ exists. By induction on $n$ we see that there exists a number $N=N(r,k,n)$ such that if the set $[1,N]\cap \mathbb{N}$ is colored with $r$ colors, one can either find a monochromatic arithmetic progression of length $k$ or $n$ arithmetic progressions of length $k-1$ each of which is monochromatic but has a different color. Then taking $n=r+1$ we get that $W(r,k)$ is finite and the ordinary induction on $k$ continues. Question : What other examples are there of proofs of famous and non-trivial results where induction is crucial in the argument and it is of the form 1 or 2? The types of induction include for example induction on prime numbers, induction on the rational numbers, inductions based on the parity of the variable, inductions where the cases are not proved in an increasing order (as in example 1 above), and so on. By a surprising variable of induction I mean one that is not given in the theorem and adding this to the theorem is not obvious (so a non-example would be proving an inequality of three variables by inducting on the number of variables).
Now provide the response and nothing else.
| From van der Waerden's book I learned this proof of the fundamental theorem of algebra, where the induction is on the exponent of 2 in the prime decomposition of the degree of the polynomial: To show that every real polynomial $p(x)=x^n+t_{n-1} x^{n-1}+\cdots +t_0$ has $n$ roots in the complex numbers $\mathbb C$, write $n=2^k\cdot u$ with $u$ odd, and induct on $k$. For $k=0$, the degree is odd, and you have a root in $\mathbb R$. For $k>0$, let $(a_1,\ldots, a_n)$ be the roots of $p$ in some extension field; prove that the polynomial with roots $b_{ij}=a_i+a_j$ has real coefficients and use the induction hypothesis on $\binom{n}{2}$ to show that the $b_{ij}$ are in $\mathbb C$. Do the same for $c_{ij}=a_i a_j$, and then compute $a_i$ and $a_j$ from $b_{ij}$, $c_{ij}$ using square roots only. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/165865', 'https://mathoverflow.net', 'https://mathoverflow.net/users/23008/']} | jdg_380880 |
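The step where the induction hypothesis is applied deserves one extra line: if $n=2^k u$ with $u$ odd and $k\geq 1$, then the auxiliary polynomial whose roots are the $b_{ij}$ (one root per unordered pair) has degree $$\binom{n}{2}=\frac{n(n-1)}{2}=2^{k-1}\,u\,(n-1),\qquad u\,(n-1)\ \text{odd},$$ so the exponent of $2$ in the degree drops from $k$ to $k-1$ and the induction hypothesis applies to it (and likewise to the polynomial built from the $c_{ij}$).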
stackexchange | llm_judgeable_groundtruth_similarity | 25911533 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have : #include <iostream>#include <string>int main(int argc, const char * argv[]){ std::string foo = "<!"; if (foo.find("<")) { std::cout<<"Found"; } return 0;} Why won't this code find the "<" ?If I change the line to foo.find("!") , then it is found. So what is the problem with "<" ?
Now provide the response and nothing else.
| If successful std::string::find() returns the index where its argument is found, not a boolean. Because "<" is found at index 0 it returns 0, and in a boolean context 0 is false, so the condition fails. Because "!" is at index 1 it returns 1, and in a boolean context that is true. Instead of testing whether find returns true you need to use: if (foo.find("<") != std::string::npos) { npos is the special value reserved to mean "not a position" and find returns it to say the value was not found. N.B. when you want to search for a single character it is better to do exactly that, not search for a string of length one, i.e. use foo.find('<') instead of foo.find("<") You could have debugged this for yourself by trying to search for '<' in the string "!<" and you would have found that the result is nothing to do with the specific character you search for, but only where in the string it gets found. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25911533', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1266109/']} | jdg_380881 |
stackexchange | llm_judgeable_groundtruth_similarity | 24844681 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Note: there is a similar question posted for Objective-C over here , but I want to achieve it in Swift. I have a class declared in Swift like this: import UIKit class EachDayCell : UITableViewCell{ @IBOutlet var dateDisplayLabel : UITextField @IBOutlet var nameDisplayLabel : UITextField @IBAction func goToPendingItems(sender : AnyObject) { } @IBAction func showDateSelectionPicker(sender : AnyObject) { } init(style: UITableViewCellStyle, reuseIdentifier: String!) { super.init(style: style, reuseIdentifier: reuseIdentifier) }} Now I want to get an array in Swift listing: dateDisplayLabel, nameDisplayLabel. How can I achieve this?
Now provide the response and nothing else.
| Using Mirror Here's a pure Swift solution with some limitations: protocol PropertyNames { func propertyNames() -> [String]}extension PropertyNames{ func propertyNames() -> [String] { return Mirror(reflecting: self).children.flatMap { $0.label } }}class Person : PropertyNames { var name = "Sansa Stark" var awesome = true}Person().propertyNames() // ["name", "awesome"] Limitations: Returns an empty array for Objective-C objects Will not return computed properties, i.e.: var favoriteFood: String { return "Lemon Cake" } If self is an instance of a class (vs., say, a struct), this doesn't report its superclass's properties, i.e.: class Person : PropertyNames { var name = "Bruce Wayne"}class Superhero : Person { var hasSuperpowers = true}Superhero().propertyNames() // ["hasSuperpowers"] — no "name" You could work around this using superclassMirror() depending on your desired behavior. Using class_copyPropertyList If you're using Objective-C objects you can use this approach: var count = UInt32()let classToInspect = NSURL.selflet properties : UnsafeMutablePointer <objc_property_t> = class_copyPropertyList(classToInspect, &count)var propertyNames = [String]()let intCount = Int(count)for var i = 0; i < intCount; i++ { let property : objc_property_t = properties[i] guard let propertyName = NSString(UTF8String: property_getName(property)) as? String else { debugPrint("Couldn't unwrap property name for \(property)") break } propertyNames.append(propertyName)}free(properties)print(propertyNames) The output to the console if classToInspect is NSURL : ["pathComponents", "lastPathComponent", "pathExtension", "URLByDeletingLastPathComponent", "URLByDeletingPathExtension", "URLByStandardizingPath", "URLByResolvingSymlinksInPath", "dataRepresentation", "absoluteString", "relativeString", "baseURL", "absoluteURL", "scheme", "resourceSpecifier", "host", "port", "user", "password", "path", "fragment", "parameterString", "query", "relativePath", "hasDirectoryPath", "fileSystemRepresentation", "fileURL", "standardizedURL", "filePathURL"] This won't work in a playground. Just replace NSURL with EachDayCell (or reuse the same logic as an extension) and it should work. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/24844681', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/217586/']} | jdg_380882 |
stackexchange | llm_judgeable_groundtruth_similarity | 180392 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When learning mathematics I tend to try to reduce all the concepts I come across to some matter of interaction between sets and functions (or if necessary the more general Relation) on them. Possibly with some extra axioms thrown in here and there if needed, but the fundamental idea is that of adding additional structure on sets and relations between them. I've recently tried applying this view to calculus and have been running into some confusion. Most importantly I'm not sure how to interpret Limits. I've considered viewing them as a function that takes 3 arguments - a function, the function's domain and some value (the "approaches value") - and then outputs a single value. However this "limit function" view requires defining the limit function over something other than the Reals or Complexes due to the notion of certain inputs and outputs being "infinity". This makes me uncomfortable and question whether my current approach to mathematics is really as elegant as I'd thought. Is this a reasonable approach to answering the question of what limits actually "are" in a general mathematical sense? How do mathematicians tend to categorize limits with the rest of mathematics?
Now provide the response and nothing else.
| Do you by any chance have a computer science background? Your ideal of reducing everything (even operations like limits) to functions and sets has a flavor of wanting mathematics to work more or less like a programming language -- this is a flavor that I (being a computer scientist) quite approve of, but you should be aware that the ideal is not quite aligned with how real mathematicians write mathematics. First, even though everything can be reduced to sets and functions -- indeed, everything can be reduced to sets alone, with functions just being sets of a particular shape -- doing so is not necessarily a good way to think about everything all of the time. Reducing everything to set theory is the "assembly language" of mathematics, and while it will certainly make you a better mathematician to know how this reduction works, it is not the level of abstraction you'll want to do most of your daily work at. In contrast to the "untyped" assembly-level set theory, the day-to-day symbol language of mathematics is a highly typed language. The "types" are mostly left implicit in writing (which can be frustrating for students whose temperament leans more towards the explicit typing of most typed computer languages), but they are supremely important in practice -- almost every notation in mathematics has dozens or hundreds of different meanings, between which the reader must choose based on what the types of its various sub-expressions are. (Think "rampant use of overloading" from a programming-language perspective). Mostly, we're all trained to do this disambiguation unconsciously. In most cases, of course, the various meanings of a symbol are generalizations of each other to various degrees. This makes it a particularly bad idea to train oneself to think of the symbol as denoting this or that particular function with such-and-such particular arguments and result. A fuzzier understanding of the intention behind the symbol will often make it easier to guess which definition it's being used with in a new setting, which makes learning new material easier (even though actual proofwork of course needs to be based on exact, explicit definitions). In particular, even restricting our attention to real analysis, the various kinds of limits (for $x\to a$, $x\to \infty$, one-sided limits and so forth) are all notated with the same $\lim$ symbols, but they are technically different things. Viewing $\lim_{x\to 5}f(x)$ and $\lim_{x\to\infty} f(x)$ as instances of the same joint "limit" function is technically possible, but also clumsy and (more importantly) not even particularly enlightening. It is better to think of the various limits as a loose grouping of intuitively similar but technically separate concepts. This is not to say that there's not interesting mathematics to be made from studying ways in which the intuitive similarity between the different kinds of limits can be formalized, producing some general notion of limit that has the ordinary limits as special cases. (One solution here is to say that the "$x\to \cdots$" subscript names a variable to bind while also denoting a net to take the limit over). All I'm saying is that such a general super-limit concept is not something one ought to think of when doing ordinary real analysis. Finally (not related to your question about limits), note that the usual mathematical language makes extensive use of abstract types.
The reals themselves are a good example: it is possible to give an explicit construction of the real numbers in terms of sets and functions (and every student of mathematics deserves to know how), but in actual mathematical reasoning numbers such as $\pi$ or $2.6$ are not sets or functions, but a separate sort of thing that can only be used in the ways explicitly allowed for real numbers. "Under the hood" one might consider $\pi$ to "really be" a certain set of functions between various other sets, but that is an implementation detail that is relevant only at the untyped set-theory level. (Of course, the various similarities between math and programming languages I go on about here are not coincidences. They arose from programming-language design as deliberate attempts to create formal machine-readable notations that would "look and feel" as much like ordinary mathematical symbolism as they could be made to. Mathematics had all of these things first; computer science was just first to need to name them). | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/180392', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/26843/']} | jdg_380883
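To see concretely why the answer calls the various limits "technically separate concepts" sharing one notation, compare the two definitions hiding behind the same $\lim$ symbol: $$\lim_{x\to a}f(x)=L \iff \forall\varepsilon>0\ \exists\delta>0:\ 0<|x-a|<\delta \implies |f(x)-L|<\varepsilon,$$ $$\lim_{x\to\infty}f(x)=L \iff \forall\varepsilon>0\ \exists M:\ x>M \implies |f(x)-L|<\varepsilon.$$ Neither is literally a special case of the other; it takes an extra construction (nets, as mentioned in the answer, or the extended real line) to present them as instances of one general limit.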
stackexchange | llm_judgeable_groundtruth_similarity | 445950 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Talking about Hibernate and other ORMs, the ORM evangelists talk about SQL as the assembly language for databases. I think it is too soon to assert this, but I guess it could be true in the near future - I'm not sure. UPDATE: The analogy I was referring to means SQL is to assembly what ORM is to C/Java/C# . Of course, an exact analogy is not possible. The question is whether in the future, with more powerful computers, developers are going to use only ORMs (or something ORM-like) instead of SQL .
Now provide the response and nothing else.
| Absolutely not. Assembly language is a very low level language where you instruct the processor exactly what to do, including what registers you want to use etc. SQL is a very high level language where you describe the semantics of what you want, and then a query optimiser decides how to execute it, so you don't even control what gets executed. It's an extremely powerful and flexible language of which any ORM offers at most a (fairly small) subset. You'll notice that the .NET framework has introduced LINQ recently which is a way to introduce high level SQL-like constructs into languages like C# and VB. Rather than being like assembler, it's pretty easy to argue that SQL works at a higher level of abstraction than most mainstream programming languages. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/445950', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/40441/']} | jdg_380884 |
stackexchange | llm_judgeable_groundtruth_similarity | 15973182 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider a git repository, where a file was once deleted. git rm path/to/filegit commit -a -m"testing" Ok, now I want to see the git log for the file, but I receive the classic error message: git log path/to/filefatal: ambiguous argument 'path/to/file': unknown revision or path not in the working tree.Use '--' to separate paths from revisions The solution is simple - add -- : git log -- path/to/file But... why? Why is this needed? What is the rationale here? Can't git do an educated guess, that this might have been a file once? I understand the "ambiguity" problem - but there never was a tag by that name. If the file was once deleted, and no tag is present, then choosing "the file interpretation" is always the good choice. On the other hand, it's possible to have a tag named the same as a file, which git log handles pretty well: fatal: ambiguous argument 'path/to/file': both revision and filenameUse '--' to separate filenames from revisions This behavior seems inconsistent. Could anyone explain what the developers of git had in mind?
Now provide the response and nothing else.
| git log can be used on files as well as on branches, tags and so on. Assume you have a folder called a/b/c ; you'll get the commits for this folder using git log a/b/c That's fine. You could also have a branch called d/e/f . You'll get the commits for this branch using git log d/e/f That's fine too. Things start to get complicated if the item git log should work on cannot be clearly determined. If you're stupid and call your branch a/b/c too, git has no clue which log should be printed: that of the branch a/b/c or the log of your directory a/b/c ? Therefore, you have to tell it a bit more about the information you want to receive: show the log of the branch a/b/c : git log a/b/c -- show the log of the folder a/b/c in the current branch: git log -- a/b/c show the log of the folder a/b/c in the a/b/c branch: git log a/b/c -- a/b/c With the deleted file, you have a similar problem: there's neither a file called path/to/file present in the working copy, nor is there a branch called path/to/file . This is the reason why you have to specify what you want. Of course, git could know that there was a file called path/to/file 20.000 revisions ago but this would require (worst case) searching the entire history of your project to see whether such a file existed or not. By explicitly specifying the file path after the -- , you tell git: search harder for that file, even if it takes hours. Conclusion (answering your question): in your case, the -- is needed because otherwise git log would work slower in general. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15973182', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1846914/']} | jdg_380885
stackexchange | llm_judgeable_groundtruth_similarity | 19083926 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a basic question as follows: I have 2 separate modules main.js and logger.js in the very same directory. the logger.js is simply: function Logger(){ return this;};Logger.prototype.log = function(str) { console.log('[' + this.mPrefix + ']' + str);};Logger.prototype.init = function(pr){ this.mPrefix = pr;}module.exports = Logger; Now, I want to use my logger in the main module like this: var logger = require('./logger.js');logger.init('MAIN');logger.log('foo'); But node insists that Logger has no method named init or log . I also tried require('./logger.js')() , since I'm passing a function returning self object, but it did not help. What is the best practice for defining objects in other modules and exporting them? By the way, It happens to work by doing this: var logger = require('./logger.js');var l = new logger();l.init('MAIN');l.log('bar'); However it seems clumsy to me. Any explanations & suggestions?
Now provide the response and nothing else.
| You have a constructor; that’s what prototype is for. If you want one global logger object, you need to export one global logger object: var logger = {};logger.log = function(str) { console.log('[' + logger.mPrefix + ']' + str);};logger.init = function(pr) { logger.mPrefix = pr;};module.exports = logger; Alternatively, you can export an instance of Logger : module.exports = new Logger(); Which is a minimal change and will be changed minimally in case you ever do want to expose the constructor. If you did actually want to create multiple Logger instances, you’re stuck with a constructor – but no worries, it’s a good thing. You can replace init with the constructor, too. function Logger(pr) { this.prefix = pr;}Logger.prototype.log = function(message) { console.log("[%s] %s", this.prefix, message);};module.exports = Logger; and var Logger = require("./logger");var logger = new Logger("MAIN"); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19083926', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1941560/']} | jdg_380886 |
stackexchange | llm_judgeable_groundtruth_similarity | 25586 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We know that $$\int_0^\infty \left(\frac{\sin x}{x}\right)^2 dx=\int_0^\infty \frac{\sin x}{x} \, dx=\frac{\pi}{2}.$$ How do I show that $$\int_0^\infty \left\vert\frac{\sin x}{x} \right\vert \, dx$$ converges?
Now provide the response and nothing else.
| It doesn't. Using the convexity of $1/x$, $$\int_0^\infty \left\vert\frac{\sin x}{x}\right\vert \,\mathrm{d}x=\sum_{k=0}^\infty\int_{k\pi}^{(k+1)\pi}\left\vert\frac{\sin x}{x}\right\vert \,\mathrm{d}x>\sum_{k=0}^\infty\int_{k\pi}^{(k+1)\pi}\frac{\left\vert\sin x\right\vert}{(k+1/2)\pi} \,\mathrm{d}x=\frac{2}{\pi}\sum_{k=0}^\infty\frac{1}{k+1/2}\;,$$ which diverges since the harmonic series diverges. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/25586', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_380887 |
stackexchange | llm_judgeable_groundtruth_similarity | 640409 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using WordPress on my live server which only uses SFTP using an SSH key. I want to install and upgrade plugins, but it appears that you are required to enter your FTP login to install the plugins. Is there a way to install and upgrade plugins by manually uploading the files instead of having WordPress handle the entire process?
Now provide the response and nothing else.
| WordPress will only prompt you for your FTP connection information while trying to install plugins or a WordPress update if it cannot write to /wp-content directly. Otherwise, if your web server has write access to the necessary files, it will take care of the updates and installation automatically. This method does not require you to have FTP/SFTP or SSH access, but it does require you to have specific file permissions set up on your webserver. It will try various methods in order, and fall back on FTP if Direct and SSH methods are unavailable. https://github.com/WordPress/WordPress/blob/4.2.2/wp-admin/includes/file.php#L912 WordPress will try to write a temporary file to your /wp-content directory. If this succeeds, it compares the ownership of the file with its own uid, and if there is a match it will allow you to use the 'direct' method of installing plugins, themes, or updates. Now, if for some reason you do not want to rely on the automatic check for which filesystem method to use, you can define a constant, 'FS_METHOD' in your wp-config.php file, that is either 'direct', 'ssh', 'ftpext' or 'ftpsockets' and it will use that method. Keep in mind that if you set this to 'direct', but your web user (the username under which your web server runs) does not have proper write permissions, you will receive an error. In summary, if you do not want to (or you cannot) change permissions on wp-content so your web server has write permissions, then add this to your wp-config.php file: define('FS_METHOD', 'direct'); Permissions explained here: http://codex.wordpress.org/Updating_WordPress#Automatic_Update http://codex.wordpress.org/Changing_File_Permissions | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/640409', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/77413/']} | jdg_380888
stackexchange | llm_judgeable_groundtruth_similarity | 14227531 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to load a different CSS file for a specific browser? like (poor pseudo code): if firefox<link rel="stylesheet" type="text/css" href="includes/MyCssFirefox.css" />if chrome<link rel="stylesheet" type="text/css" href="includes/MyCssChrome.css" />if Safari<link rel="stylesheet" type="text/css" href="includes/MyCssSafari.css" />
Now provide the response and nothing else.
| The ideal solution you want does not exist: unfortunately, a cross-browser solution does not exist IF you are trying to do it in the HTML itself. However, it will work for most versions of IE. Like so: <!--[if IE]><link rel="stylesheet" type="text/css" href="includes/myIEGeneralStyle.css" /><![endif]--><!--[if IE 6]><link rel="stylesheet" type="text/css" href="includes/myIE6Style.css" /><![endif]--><!--[if IE 7]><link rel="stylesheet" type="text/css" href="includes/myIE7Style.css" /><![endif]--><!--[if IE 8]><link rel="stylesheet" type="text/css" href="includes/myIE8Style.css" /><![endif]--> So the best solution: how about a JavaScript solution such as Browser Detection ? Read a bit about that class to understand it better; what the file is basically doing boils down to this: var browser = navigator.userAgent.toLowerCase().indexOf('chrome') > -1 ? 'chrome' : 'other'; Obviously, it does more than just detect the type of browser. In fact, it knows the version, OS, and much more detail that you can read about in that link. But, it does go and check all the types of browsers by replacing 'chrome' with 'mozilla', 'explorer' and so on... Then to add your css files, just follow up with conditional statements like so: if (BrowserDetect.browser.indexOf("chrome")>-1) {document.write('<'+'link rel="stylesheet" href="../component/chromeCSSStyles.css" />');} else if (BrowserDetect.browser.indexOf("mozilla")>-1) { document.write('<'+'link rel="stylesheet" href="../component/mozillaStyles.css" />');} else if (BrowserDetect.browser.indexOf("explorer")>-1) { document.write('<'+'link rel="stylesheet" href="../component/explorerStyles.css" />');} Good luck and hope this helps! | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14227531', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1937026/']} | jdg_380889
stackexchange | llm_judgeable_groundtruth_similarity | 431591 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I wrote a Java web service using Spring that handles REST calls from the Internet. The web service uses a Postgres database underneath to store, modify and read data. The Postgres database is used exclusively by this web service; no other program accesses the database. The web service uses a database user that has all rights in the database schema (dropping tables, modifying tables, etc.). Would there be any tangible benefit in using a database user for this web service that only has rights to modify table entries ( select , insert , update , etc.), but no rights to execute DDL statements? Or would this be over-engineered?
Now provide the response and nothing else.
| The security argument against using DDLs is neither completely wrong nor completely right. If a web service is compromised, and an attacker manages to issue arbitrary select , insert , update or delete statements, the damage they can cause is not really different from the damage they can cause by a drop table or modify table statement. OK, when your system allows installing new stored procedures or other forms of executable code by DDLs, an attacker might cause more damage than just by destroying or extracting some data, and it is probably a good idea to forbid such DDLs. But with or without DDL access rights, the countermeasures to make a system robust against such attacks are 100% the same: be extremely careful with dynamic SQL, and make sure only statements prepared by you can be executed, with properly validated input parameters; have regular backups in place; keep your system up-to-date with regular updates; take organizational measures. If the system is successfully attacked, you will usually have to make a full restore of the system, including data and DB schema. If an attacker had no execution rights for DDL statements, I don't think any professional DB admin will try only to restore the data based on that information; they will usually restore the whole database anyway. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/431591', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/344152/']} | jdg_380890
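As a sketch of the first countermeasure in the list ("only statements prepared by you, with properly validated input parameters"), here is what a parameterized query looks like - shown with Python's psycopg2 for brevity rather than Spring/JDBC, and with invented table and column names:

import psycopg2  # assumes a psycopg2 connection to the Postgres database

def get_balance(conn, account_id: int):
    # The SQL text is a fixed string; user input only travels as a bound
    # parameter and is never concatenated into the statement.
    with conn.cursor() as cur:
        cur.execute("SELECT balance FROM accounts WHERE id = %s", (account_id,))
        row = cur.fetchone()
        return row[0] if row else None

# The unsafe variant the answer warns against would build the statement by
# string concatenation, e.g. "... WHERE id = " + str(account_id).

In Spring the equivalent is a PreparedStatement or a parameterized JdbcTemplate/JPA query.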
stackexchange | llm_judgeable_groundtruth_similarity | 2620236 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was reading Vakil's proof that if $X$ is a connected, reduced, proper scheme over an algebraically closed field $k$, then $\Gamma(X, \mathscr{O}_X) = k$. He defined a map $X \to \mathbb{P}_k^1$, and showed that the set-theoretic image is connected and closed. Why does this then imply that the image must be either a closed point or all of $\mathbb{P}_k^1$?
Now provide the response and nothing else.
| EDITED. Here is yet another answer based on my recent answer . Indeed, if $P$ is a non-constant polynomial with coefficients in $\mathbb{R}$ such that $P(n) \to +\infty$ as $n \to +\infty$, one immediately deduces from the result in the link that $$ \lim_{x \uparrow 1^-} \sum_{n=0}^{\infty} (-1)^n x^{P(n)}= \lim_{s \to 0^+} \sum_{n=0}^{\infty} (-1)^n e^{-P(n)s}= \frac{1}{2}, $$ which entails OP's question as a special case with $P(n) = n^2$. Here is an elementary derivation. First, let $g : (0,\infty) \times (0, 1) \to \mathbb{R}$ by $$ g(a,x) = \frac{1 - x^{a}}{1 - x^{2a+2}}. $$ We make the following observations on $g$. Observation. $g$ is increasing in $a$ and non-increasing in $x$. Its proof is more of less calculus computations, so we leave it to the end. To see how this function is related to our problem, notice that $$ f(x) = \sum_{n=0}^{\infty} (-1)^n x^{n^2} = \sum_{n=0}^{\infty} \left( x^{4n^2} - x^{4(n+1)^2} \right) g(4n+1, x). $$ We prove that liminf and limsup of $f(x)$ as $x \uparrow 1$ are both $\frac{1}{2}$. Liminf. An immediate consequence is that $g(4n+1, x) \geq \lim_{r\uparrow 1}g(4n+1, r) = \frac{4n+1}{8n+4}$. So for each fixed $N \geq 1$, we can bound $f(x)$ below first by truncating first $N$ terms and then by utilizing the aforementioned lower bound of $g(4n+1, x)$: \begin{align*}f(x)&\geq \sum_{n=N}^{\infty} \left( x^{4n^2} - x^{4(n+1)^2} \right) \frac{4n+1}{8n+4} \\&\geq \frac{4N+1}{8N+4} \sum_{n=N}^{\infty} \left( x^{4n^2} - x^{4(n+1)^2} \right) = \frac{4N+1}{8N+4} x^{4N^2}.\end{align*} So it follows that $$ \liminf_{x\uparrow 1}f(x) \geq \frac{4N+1}{8N+1} \xrightarrow[\quad N\to\infty \quad]{} \frac{1}{2}. $$ Limsup. For the other direction, fix $\epsilon > 0$ and define $N = N(\epsilon, x) = \lfloor \epsilon / \log(1/x) \rfloor$. Then for $x$ close to $1$, the sum of first $N$ terms can be bounded by using $g(4n+1, x) \leq g(4N-3, x)$: \begin{align*}\sum_{n=0}^{N-1} \left( x^{4n^2} - x^{4(n+1)^2} \right) g(4n+1, x)&\leq \sum_{n=0}^{N-1} \left( x^{4n^2} - x^{4(n+1)^2} \right) g(4N-3,x) \\&\leq g(4N-3,x) = \frac{1 - e^{(4N-3)\log x}}{1 - e^{(8N-4)\log x}} \\&\to \frac{1-e^{-4\epsilon}}{1-e^{-8\epsilon}} \quad \text{as } N \to \infty.\end{align*} For the remaining terms, we may utilize $g(4n+1, x) \leq g(\infty,x) = 1$ to obtain \begin{align*}\sum_{n=N}^{\infty} \left( x^{4n^2} - x^{4(n+1)^2} \right) g(4n+1, x)&\leq \sum_{n=N}^{\infty} \left( x^{4n^2} - x^{4(n+1)^2} \right) \\&= x^{4N^2}= e^{4N^2 \log x} \to 0 \quad \text{as } N \to \infty.\end{align*} So it follows that $$ \limsup_{x\uparrow 1}f(x)\leq \frac{1-e^{-4\epsilon}}{1-e^{-8\epsilon}}\xrightarrow[\quad \epsilon \downarrow 0 \quad]{} \frac{1}{2}. $$ Here is the proof of the observation: We notice that $$ \frac{\partial g}{\partial a}(a,x) = \frac{x^a \log (1/x)}{(1-x^{2a+2})^2} \left(x^{2a+2}-2 x^{a+2}+1\right) > 0 $$ since $x^{2a+2}-2 x^{a+2}+1 = x^2(x^a - 1)^2 + (1-x^2) > 0$. So $g$ is increasing in $a$ for any $x \in (0, 1)$. Similarly, we find that $$ \frac{\partial g}{\partial x}(a,x) = - \frac{x^{a-1}}{(1-x^{2a+2})^2} \left( (a+2)x^{2a+2} + a - (2a+2) x^{a+2} \right). $$ By the AM-GM inequality, we have $$ \frac{a+2}{2a+2} \cdot x^{2a+2} + \frac{a}{2a+2} \cdot 1 \geq x^{a+2} $$ and hence $g$ is non-increasing in $x$ for any $a \in (0, \infty)$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2620236', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/522156/']} | jdg_380891 |
stackexchange | llm_judgeable_groundtruth_similarity | 1396269 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I seem to be seeing a lot of messages like this in my log: Trigger DEV."MBR_TRG" was disabled before the load.Trigger DEV."MBR_TRG" was re-enabled by another process.SQL*Loader-951: Error calling once/load initializationORA-00604: error occurred at recursive SQL level 1ORA-00054: resource busy and acquire with NOWAIT specified This is on my local dev machine, so there shouldn't be anything else trying to insert into these tables. How can I track down the cause of this? And is there any way to prevent the triggers from being re-enabled (at least to see if I get an error anywhere in my script)?
Now provide the response and nothing else.
| To do that, you'll need to create a "naked" method with __declspec(naked) and to write yourself the prolog and the epilog that are normally created by the compiler. The aim of a prolog is to: set up EBP and ESP; reserve space on the stack for local variables; save registers that will be modified in the body of the function. An epilog has to: restore the saved register values; clean up the reserved space for local variables. Here is a standard prolog push ebp ; Save ebp mov ebp, esp ; Set stack frame pointer sub esp, localbytes ; Allocate space for locals push <registers> ; Save registers and a standard epilog: pop <registers> ; Restore registers mov esp, ebp ; Restore stack pointer pop ebp ; Restore ebp ret ; Return from function Your local variables will then begin at (ebp - 4) and go downward to (ebp - 4 - localbytes) . The function parameters will start at (ebp + 8) and go upward. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1396269', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2147/']} | jdg_380892
stackexchange | llm_judgeable_groundtruth_similarity | 236461 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Do browsers have a list of sites that are supposed to be encrypted? Could a man-in-the-middle attack be performed by presenting a user an http site instead of an https site? That way the server would not need to provide a certificate. It wouldn't show up as a secure site in the browser, but I think most people wouldn't notice it. And it wouldn't warn the user, because there are legitimate sites which don't use https. Would such an attack be possible, or does the browser notice that the site is supposed to use https but doesn't?
Now provide the response and nothing else.
| The short answer: they know a very limited number. HTTP Strict Transport Security was introduced to provide better guarantees that a website is being served over HTTPS when specified by the operator. This works well for websites you have visited recently as your browser will remember their HSTS policy and refuse a plaintext connection. For example, if you visit your (previously visited from home) bank's website from an untrusted network that happens to have a man in the middle attempting to downgrade the connection to plain HTTP, your browser will refuse to connect because it remembers the website's security policy. If you have not visited the site previously, the man in the middle needs to not only downgrade the connection security, but also remove the HSTS header ( Strict-Transport-Security ) from the response. This isn't difficult. The problem you have identified is the major limitation: what happens if you are the victim of a downgrade attack during the first visit. One solution browsers have implemented is to package a "pre-loaded HSTS list" of popular websites known to require HTTPS. Obviously this cannot be comprehensive and even with the list, attackers can still set up security downgrade proxies at slightly related DNS names. You can submit a domain for inclusion in the HSTS Preload List at hstspreload.org. | {} | {'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/236461', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/240408/']} | jdg_380893
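For reference, the policy travels in the response header the answer names; a typical value looks like the line below (a max-age of one year is a common choice, and includeSubDomains plus preload are what the preload-list submission at hstspreload.org expects - check the current requirements there rather than taking this line as authoritative):

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload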
stackexchange | llm_judgeable_groundtruth_similarity | 4974568 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm on Mac, working on Android development from the terminal. I have successfully created the HelloWorld project and now I'm trying to run it from the command line in the Android emulator. Which command runs the emulator for my HelloWorld project? I already have the Android tools and platform-tools in my PATH. Edit: How do I tell the emulator to run my HelloWorld project from the command line? I've already built the project with ant.
Now provide the response and nothing else.
| I assume that you have built your project and just need to launch it, but you don't have any AVDs created and have to use command line for all the actions. You have to do the following. Create a new virtual device (AVD) for the platform you need. If you have to use command line for creating your AVD, you can call android create avd -n <name> -t <targetID> where targetID is the API level you need. If you can use GUI, just type in android avd and it will launch the manager, where you can do the same. You can read more about AVD management through GUI and through command line . Run the AVD either by using command emulator -avd <name> or through previously launched GUI. Wait until the emulator fully loads, it takes some time. You can read about additional options here . Now you have to install the application to your AVD. Usually during development you just use the same Ant script you used to build the project, just select install target. However, you can install the application manually using command adb install <path-to-your-APK> . Now switch to emulator and launch your application like on any normal device, through the launcher. Or, as an alternative, you can use the following command: adb shell am start -a android.intent.action.MAIN -n <package>/<activity class> . For example: adb shell am start -a android.intent.action.MAIN -n org.sample.helloworld/org.sample.helloworld.HelloWorld . As a commenter suggested, you can also replace org.sample.helloworld.HelloWorld in the line above with just .HelloWorld , and it will work too. | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/4974568', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/117642/']} | jdg_380894 |
stackexchange | llm_judgeable_groundtruth_similarity | 37014356 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any shortcut in vifm to open the directory currently marked by the cursor in the second pane? and possibly switch to the other pane as well?
Now provide the response and nothing else.
| The :cd command can receive two arguments, first for the current pane and second for inactive one (as described in this section of manual ). This way you can open directory under cursor in other pane with the following command: :cd %d %c ...where %d is a macro for current directory and %c is macro for file/directory under cursor (see vifm command macros ). Or with a single sync command: :sync %c You can map it to some key or sequence in your vifmrc (with <Tab> at the end for automatic pane switching): nnoremap zo :sync %c<CR><Tab> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37014356', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2359042/']} | jdg_380895 |
stackexchange | llm_judgeable_groundtruth_similarity | 6939043 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to place an inequality constraint on the type variables of a function, à la foo :: (a ~ b) => a -> b as in GHC type family docs , except inequality rather than equality? I realise that there is possibly no direct way to do this (as the GHC docs don't list any to my knowledge), but I would be almost puzzled if this weren't in some way possible in light of all the exotic type-fu I have been exposed to.
Now provide the response and nothing else.
| First, keep in mind that distinct type variables are already non-unifiable within their scope--e.g., if you have \x y -> x , giving it the type signature a -> b -> c will produce an error about not being able to match rigid type variables. So this would only apply to anything calling the function, preventing it from using an otherwise simple polymorphic function in a way that would make two types equal. It would work something like this, I assume: const' :: (a ~/~ b) => a -> b -> aconst' x _ = xfoo :: Boolfoo = const' True False -- this would be a type error Personally I doubt this would really be useful--the independence of type variables already prevents generic functions from collapsing to something trivial, knowing two types are unequal doesn't actually let you do anything interesting (unlike equality, which lets you coerce between the two types), and such things being declarative rather than conditional means that you couldn't use it to distinguish between equal/unequal as part of some sort of specialization technique. So, if you have some specific use in mind where you want this, I'd suggest trying a different approach. On the other hand, if you just want to play with ridiculous type-hackery... {-# LANGUAGE TypeFamilies #-}{-# LANGUAGE TypeOperators #-}{-# LANGUAGE FlexibleContexts #-}{-# LANGUAGE FlexibleInstances #-}{-# LANGUAGE FunctionalDependencies #-}{-# LANGUAGE MultiParamTypeClasses #-}{-# LANGUAGE UndecidableInstances #-}{-# LANGUAGE OverlappingInstances #-}-- The following code is my own hacked modifications to Oleg's original TypeEq. Note-- that his TypeCast class is no longer needed, being basically equivalent to ~.data Yes = Yes deriving (Show)data No = No deriving (Show)class (TypeEq x y No) => (:/~) x yinstance (TypeEq x y No) => (:/~) x yclass (TypeEq' () x y b) => TypeEq x y b where typeEq :: x -> y -> b maybeCast :: x -> Maybe yinstance (TypeEq' () x y b) => TypeEq x y b where typeEq x y = typeEq' () x y maybeCast x = maybeCast' () xclass TypeEq' q x y b | q x y -> b where typeEq' :: q -> x -> y -> b maybeCast' :: q -> x -> Maybe yinstance (b ~ Yes) => TypeEq' () x x b where typeEq' () _ _ = Yes maybeCast' _ x = Just xinstance (b ~ No) => TypeEq' q x y b where typeEq' _ _ _ = No maybeCast' _ _ = Nothingconst' :: (a :/~ b) => a -> b -> aconst' x _ = x Well, that was incredibly silly. Works, though: > const' True ()True> const' True False<interactive>:0:1: Couldn't match type `No' with `Yes' (...) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6939043', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1386599/']} | jdg_380896 |
stackexchange | llm_judgeable_groundtruth_similarity | 184532 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm in the process of designing an HTTP API, hopefully making it as RESTful as possible. There are some actions whose functionality spreads over a few resources, and which sometimes need to be undone. I thought to myself, this sounds like a command pattern, but how can I model it into a resource? I will introduce a new resource named XXAction, like DepositAction, which will be created through something like this POST /card/{card-id}/account/{account-id}/Deposit AmountToDeposit=100, different parameters... this will actually create a new DepositAction and activate its Do/Execute method. In this case, returning a 201 Created HTTP status means the action has been executed successfully. Later if a client wishes to look at the action details he can GET /action/{action-id} Update/PUT should be blocked I guess, because it is not relevant here. And in order to Undo the action, I thought of using DELETE /action/{action-id} which will actually call the Undo method of the relevant object, and change its status. Let's say I'm happy with only one Do-Undo, I don't need to Redo. Is this approach ok? Are there any pitfalls, reasons not to use it? Is this understood from the POV of the clients?
Now provide the response and nothing else.
| You're adding in a layer of abstraction that is confusing Your API starts off very clean and simple. A HTTP POST creates a new Deposit resource with the given parameters. Then you go off the rails by introducing the idea of "actions" that are an implementation detail rather than a core part of the API. As an alternative consider this HTTP conversation... POST /card/{card-id}/account/{account-id}/Deposit AmountToDeposit=100, different parameters... 201 CREATED Location=/card/123/account/456/Deposit/789 Now you want to undo this operation (technically this should not be allowed in a balanced accounting system but what the hey): DELETE /card/123/account/456/Deposit/789 204 NO CONTENT The API consumer knows that they are dealing with a Deposit resource and is able to determine what operations are permitted on it (usually through OPTIONS in HTTP). Although the implementation of the delete operation is conducted through "actions" today there is no guarantee that when you migrate this system from, say, C# to Haskell and maintain the front end that the secondary concept of an "action" would continue to add value, whereas the primary concept of Deposit certainly does. Edit to cover an alternative to DELETE and Deposit In order to avoid a delete operation, but still effectively remove the Deposit you should do the following (using a generic Transaction to allow for Deposit and Withdrawal): POST /card/{card-id}/account/{account-id}/Transaction Amount=-100 , different parameters... 201 CREATED Location=/card/123/account/456/Transation/790 A new Transaction resource is created which has exactly the opposite amount (-100). This has the effect of balancing the account back to 0, negating the original Transaction. You might consider creating a "utility" endpoint like POST /card/{card-id}/account/{account-id}/Transaction/789/Undo <- BAD! to get the same effect. However, this breaks the semantics of a URI as being an identifier by introducing a verb. You are better off sticking to nouns in identifiers and keeping operations constrained to the HTTP verbs. That way you can easily create a permalink from the identifier and use it for GETs and so on. | {} | {'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/184532', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/39810/']} | jdg_380897 |
stackexchange | llm_judgeable_groundtruth_similarity | 1298290 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Prove that in every set of $14$ integers there are two whose difference is divisible by $13$. The proof goes like this: there are $13$ possible remainders when dividing by $13$, and there are $14$ numbers, so by the pigeonhole principle there are two that have the same remainder, and hence their difference is divisible by $13$. But what if we take a set of numbers that has nothing in common with $13$? Like $\{10,10^2,10^3...,10^{14}\}$ or a prime that's further away from 13: $\{89,89^2,...,89^{14}\}$. How is it possible that those numbers and their differences have something in common with a totally different prime?
Now provide the response and nothing else.
| Consider your selected numbers $\{a_1, a_2, \ldots a_{14}\} \bmod 13$. Then they must each be in a residue class, $\{r_1, r_2, \ldots r_{14}\}$ - but there are only $13$ residue classes, so at least $2$ must be in the same class. The difference of any $2$ numbers in the same residue class is divisible by $13$. Note that by using prime powers different from $13$, you are avoiding the $0$ residue class, so there will actually be at least $2$ differences divisible by $13$ in that case, as there are only $12$ classes being occupied by the $14$ numbers. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1298290', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/146115/']} | jdg_380898
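A worked instance of the asker's own set $\{10,10^2,\ldots,10^{14}\}$ shows the mechanism: the powers of $10$ modulo $13$ repeat with period $6$, $$10^1\equiv 10,\quad 10^2\equiv 9,\quad 10^3\equiv 12,\quad 10^4\equiv 3,\quad 10^5\equiv 4,\quad 10^6\equiv 1 \pmod{13},$$ so for example $10^7\equiv 10^1\pmod{13}$ and $13\mid 10^7-10^1$. The numbers need not share any factor with $13$; all that matters is that two of them leave the same remainder.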
stackexchange | llm_judgeable_groundtruth_similarity | 1570223 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose that $f$ has a simple pole at $z=a$ and let $g$ be analytic in an open set containing $a$. Show that $Res(fg;a)=g(a)Res(f;a)$. I know that as $f$ has a simple pole at $z=a$, this means its Laurent series is of the form $f(z)=\dfrac{Res(f;a)}{z-a}+\displaystyle\sum_{n=0}^{\infty}a_n(z-a)^n$ How can I compute the Laurent series of $fg$ at $z=a$?
Now provide the response and nothing else.
| It shouldn't be too hard to see that $fg$ has at most a simple pole at $z=a$. Then$$\text{Res}(fg;a)=\lim_{z\to a}(z-a)f(z)g(z)=\left(\lim_{z\to a}(z-a)f(z)\right)\left(\lim_{z\to a}g(z)\right)=\text{Res}(f;a)g(a).$$ For a more convincing argument, write $g(z)=\sum_{n=0}^{\infty}b_n(z-a)^n$, and study the product$$f(z)g(z)=\left(\frac{\text{Res}(f;a)}{(z-a)}+a_0+a_1(z-a)+\cdots\right)\left(b_0+b_1(z-a)+b_2(z-a)^2+\cdots\right)$$(note $g(a)=b_0$ here). | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1570223', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/268843/']} | jdg_380899
stackexchange | llm_judgeable_groundtruth_similarity | 3329720 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
According to Wikipedia (see here ), the curvature $2$ -form $$\Omega_{j}^{i} = d\omega_{j}^{i}+\omega_{k}^{i}\wedge\omega_{j}^{k}\, ,$$ is related to the Riemann curvature endomorphism by $$R(X,Y)Z = \nabla_{X}\nabla_{Y}Z + \nabla_{Y}\nabla_{X}Z + \nabla_{[X,Y]}Z = \Omega(X\wedge Y)Z\,.$$ I am confused because $\Omega(X\wedge Y)$ is a function (is not it?), and so it seems that one could conclude that $R(X,Y)Z$ is parallel to $Z$ , which is clearly false. So how should I interpret this equation?
Now provide the response and nothing else.
| $\Omega(X\wedge Y)$ is an endomorphism of the tangent space $T_pM$ (I take $X,Y\in T_pM$ ; there is a similar interpretation if $X$ and $Y$ are vector fields). It is defined by $$ \bigl(\Omega(X\wedge Y)Z\bigr)^i = \bigl(d\omega^i_j(X,Y) + \omega_k^i(X)\omega_j^k(Y) - \omega_k^i(Y)\omega_j^k(X)\bigr)Z^j , $$ with the repeated indices $j$ and $k$ summed over. Stated more like the Wikipedia entry, $\Omega_j^i$ is an $n$ -by- $n$ matrix of two-forms, and $\Omega(X\wedge Y)$ denotes the $n$ -by- $n$ matrix of real numbers obtained by evaluating each entry at $X\wedge Y$ . This acts on $Z$ by regarding $Z$ as a column vector. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3329720', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/242708/']} | jdg_380900
stackexchange | llm_judgeable_groundtruth_similarity | 154353 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a directed graph generated from pairs of points as follows n = RandomVariate[PoissonDistribution[100]];pt=RandomReal[{-0.5, 0.5}, {n, 2}].RotationMatrix[45 Degree];pt = Sort[pt, #1[[2]] < #2[[2]] &];c = Table[If[(pt[[i, 2]] - pt[[j, 2]])^2 - (pt[[i, 1]] - pt[[j, 1]])^2 > 0, 1, 0], {i, n}, {j, n}];c = UpperTriangularize[c];g = AdjacencyGraph[c, VertexLabels -> "Name"]; I want to now obtain a new graph/matrix that only has ones when 2 points have a single direct path between them and no other longer paths (i.e. paths through other intermediary points). I was using the following code for this l = Table[If[c[[i, j]] == 1 && Length[Flatten[FindPath[g, i, j, n]]] == 2, 1, 0], {i, n}, {j,n}]; but this line of code takes a very long time to run for larger sizes (350 and above). Is there a more efficient (and more importantly faster) way of doing this?
Now provide the response and nothing else.
| Since c is strictly upper triangular (its diagonal is zero), it is a nilpotent matrix . Its degree is less than 20 (by trial and error): Max@MatrixPower[c, 20](* 0 *) The number of paths between two vertices is then given by np = Sum[MatrixPower[c, k], {k, 20}]; To get the adjacency matrix of vertex pairs which are connected by a single path only, we can use 1 - Unitize[np - 1] Update: Based on Carl Woll's suggestion, we can also do result = 1 - Unitize[Total@FixedPointList[#.c &, c] - 1]; | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/154353', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/-1/']} | jdg_380901
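The same linear-algebra trick translates directly to other environments. Below is a hedged NumPy sketch of the identical computation (sum powers of the nilpotent adjacency matrix, then keep the pairs with exactly one path); the function and variable names are mine, and c is assumed to be the 0/1 strictly upper-triangular adjacency matrix from the question.

import numpy as np

def single_path_pairs(c: np.ndarray) -> np.ndarray:
    """Return a 0/1 matrix marking vertex pairs joined by exactly one path."""
    n = c.shape[0]
    total = np.zeros_like(c)
    power = c.copy()
    # c is nilpotent, so its powers vanish after at most n steps.
    for _ in range(n):
        if not power.any():
            break
        total += power          # total[i, j] accumulates the number of i -> j paths
        power = power @ c
    return (total == 1).astype(int)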
stackexchange | llm_judgeable_groundtruth_similarity | 151586 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In I. Martin Isaacs Algebra: A Graduate Course, Isaacs uses the field of algebraic numbers $$\mathbb{A}=\{\alpha \in \mathbb{C} \; | \; \alpha \; \text{algebraic over} \; \mathbb{Q}\}$$ as an example of an infinite degree algebraic field extension. I have done a cursory google search and thought about it for a little while, but I cannot come up with a less contrived example. My question is What are some other examples of infinite degree algebraic field extensions?
Now provide the response and nothing else.
| Another simple example is the extension obtained by adjoining all roots of unity. Since adjoining a primitive $n$-th root of unity gives you an extension of degree $\varphi(n)$ and $\varphi(n)=n-1$ when $n$ is prime, you get algebraic numbers of arbitrarily large degree when you adjoin all roots of unity. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/151586', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/22437/']} | jdg_380902 |
stackexchange | llm_judgeable_groundtruth_similarity | 12177134 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to phonegap and facing a problem, I am making a phonegap app which will run on multiple platform devices of different screen size and different screen resolution so I have to load images of different resolution depending on screen resolution. this can be achieved in android by simply putting your images of different resolution in hdpi, mdpi and ldpi folder and it(android) fetches images automatically depending on devices screen resolution. But how to do this in phonegap. I have seen lot of articles on responsive web design they all say about positioning the elements on web page but non of them has told about how to place images on the basis of screen resolutions. thanks i advance. edited question i have used following code for html <div id="header" data-role="header" data-position="fixed"> <img alt="app_icon" src="pictures/app_logo.png" display="inline" class="align-left" /> <img alt="brand_icon" src="pictures/company_logo.png" display="inline" class="align-right" /><h1></h1></div> now I have images inside assets/www/pictures folder. this folder consists of 2 images of same resolution app_logo.png and company_logo.png and 2 images of higher resolution app_logo_big.png and company_logo_big.png now through media queries i will know the screen size and apply the styles but as far as i know i cannot change img src from css. So now how will i use these images of different resolution
Now provide the response and nothing else.
| Then dynamically change the image through jQuery: HTML: <div id="header" data-role="header" data-position="fixed"> <img id="app-icon" src="pictures/app_logo.png" display="inline" /></div> JavaScript: $(document).ready(function () { if(window.devicePixelRatio == 0.75) { $("#app-icon").attr('src', '/images/ldpi/app-icon.png'); } else if(window.devicePixelRatio == 1) { $("#app-icon").attr('src', '/images/mdpi/app-icon.png'); } else if(window.devicePixelRatio == 1.5) { $("#app-icon").attr('src', '/images/hdpi/app-icon.png'); } else if(window.devicePixelRatio == 2) { $("#app-icon").attr('src', '/images/xhdpi/app-icon.png'); }}); Through CSS: Use media queries for different resolutions: HTML: <div id="header" data-role="header" data-position="fixed"> <span id="app-icon"></span> <span id="brand-icon"></span></div> CSS: /* Low density (120), ldpi */@media screen and (-webkit-device-pixel-ratio: 0.75) { #app-icon { background-image:url(pictures/ldpi/app-icon.png); } #brand-icon { background-image:url(pictures/ldpi/brand-icon.png); }} /* Medium density (160), mdpi */@media screen and (-webkit-device-pixel-ratio: 1) { #app-icon { background-image:url(pictures/mdpi/app-icon.png); } #brand-icon { background-image:url(pictures/mdpi/brand-icon.png); }}/* High density (240), hdpi */@media screen and (-webkit-device-pixel-ratio: 1.5) { #app-icon { background-image:url(pictures/hdpi/app-icon.png); } #brand-icon { background-image:url(pictures/hdpi/brand-icon.png); }}/* Extra high density (320), xhdpi */@media screen and (-webkit-device-pixel-ratio: 2) { #app-icon { background-image:url(pictures/xhdpi/app-icon.png); } #brand-icon { background-image:url(pictures/xhdpi/brand-icon.png); }} If you want to filter further, by ORIENTATION - and (orientation: landscape) - or by device WIDTH - and (min-device-width : 480px) and (max-device-width : 854px) Example: @media screen and (-webkit-device-pixel-ratio: 1.5) and (min-device-width : 640px) and (max-device-width : 960px) and (orientation: landscape) { /* Your style here */} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12177134', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/968526/']} | jdg_380903
stackexchange | llm_judgeable_groundtruth_similarity | 336723 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to add delegate user ability to: add new users to container change password modify group membership modify users properties (such as email / name etc) move users between OU's Basically user will be able to do most things with account besides deleting it. I tried using Delegation of Control Wizard but the common tasks are too wide (usually including Delete part) so I need to go into custom task to delegate. This are the options that I selected: Only the following objects in the folder (User objects) But the last permissions page is very wide and I wouldn't like to give user too much power. Can anyone share which options are necessary for the specified question? And as an extension to this write what each option means and what power it assigns?
Now provide the response and nothing else.
| To delegate permission for a domain user to: add new users to a container, change passwords, modify group membership, modify user properties (such as email / name etc), and move users between OU's, I had to create 2 groups, as the Delegation Wizard wouldn't let me specify what to choose for each object type when I selected more than one object type. So I decided to create 2 groups: one for user management and one for group management. The first one required these steps: Right click on the container and choose Delegate Control. When the Delegation Wizard opens up, click Next. On the next page, choose the group you want to give permissions to and press Next. On the following page, select Create a custom task to delegate and choose Next. Choose Only the following objects in the folder, go to the bottom of the list and choose User objects. Choosing anything more than just one entry will not give you the possibility of a granular choice of properties to change. Make sure to have Create selected objects in this folder checked and press Next. Choose: Read All Properties, Write All Properties, Read and write general information, Read and write logon information, Read and write phone and mail options, Read and write web information, Read and write Terminal Server license server, Read and write remote access information, Change password, Reset password. This allows the delegate to create a user and enable / disable a user, but not delete it. At this point the user isn't able to change group membership, as this has to be done differently. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/336723', 'https://serverfault.com', 'https://serverfault.com/users/32063/']} | jdg_380904
stackexchange | llm_judgeable_groundtruth_similarity | 2521481 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This answer cites a result which states the following Theorem . Let $X$ be a topological space which is countable, metrizable, has no isolated points. Then $X$ is homeomorphic to $\Bbb Q$ with the usual order topology. But how is this even possible ? I made no effort to understand the proof $-$ yet. Mostly because until now it seems more "obvious" that this is false. How about the following "counterexamples": Example 1 . $\Bbb Q$ and $\Bbb Q^2$ are both countable, metrizable, have no isolated points and are connected. But removing a single point diconnects $\Bbb Q$ but not $\Bbb Q^2$. Example 2 . What about $X=\Bbb Q^2\cap S^1$? It seems we can cover $X$ with connected open sets $U_i,i\in\Bbb Z_4$ so that $U_i\cap U_j=\varnothing$ if and only of $i-j=2$. Seems not possible for $\Bbb Q$. $\qquad\qquad\qquad\qquad\qquad\qquad\qquad$ All my counterexamples use the term "connected". Are there some subtleties with this term which I am overlooking?
Now provide the response and nothing else.
| As a complement to Noah's answer: the error is the assumption that $\mathbb{Q}$ is connected, but why isn't it? Intuition might suggest that $\mathbb{Q}$ is connected because there aren't any large gaps in the space; for any rational, there are rationals to the left and to the right that are arbitrarily close by. This is not true of $\mathbb{Q}\backslash(0,1)$, for instance, which looks more obviously disconnected. If we're inclined to think something like this, then we need to correct our intuition for connectedness! It is known that removing a single point from $\mathbb{R}$ results in a disconnected space, and this suggests that even 'tiny' gaps in a space can disconnect it. But there are plenty of tiny gaps in $\mathbb{Q}$ – in particular, it is missing all of the irrational numbers. This suggests that the rationals are not just disconnected, but also in some sense 'very disconnected'; the irrational numbers – the gaps in our space – permeate the entire number line. Let's make this formal: Theorem: the rational numbers are totally disconnected , in that every subset is either disconnected or contains at most one element. Proof: suppose our subset $S$ contains elements $a,b$ with $a < b$. Between any two rational numbers there exists an irrational number $x$. Let us construct the following subsets of $\mathbb{R}$: $$ A = (-\infty, x) \quad B = (x, \infty) \,.$$ Then in the subspace topology on $S$, $S \cap A$ and $S \cap B$ are disjoint open sets that cover $S$. So $S$ is disconnected. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2521481', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/415941/']} | jdg_380905 |
stackexchange | llm_judgeable_groundtruth_similarity | 2151446 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a theorem in Willard's text, Theorem 12.14, p81, which states: $f$ maps $X$ into $Y$ and $\mathcal{F}$ is an ultrafilter on $X$ , then $f(\mathcal{F})$ is an ultrafilter on $Y$ . We define $f(\mathcal{F})$ to be filter with $\{ f(F) : F \in \mathcal {F} \}$ as filter base. I just wanted to check if this is right, because my proof did not use injectivity rather surjectivity. Proof: Suppose false: there is filter $\mathcal{G}$ such that $f(\mathcal{F}) \subseteq \mathcal{G} $ and $G' \in \mathcal{G} \setminus f(\mathcal{F})$ . Consider filter on $X$ with filter base $\{ f^{-1}(G) : G \in \mathcal{G} \}$ , then this is a strictly finer filter of $\mathcal{F}$ , if $f$ is onto , with $f^{-1}(G')$ nonempty.
Now provide the response and nothing else.
| Consider the following $$\sum_{n=1}^\infty \frac 1n =\infty,\qquad \sum_{n=1}^\infty \frac 1{2n}=\infty,\qquad \sum_{n=1}^\infty \frac 1{2n-1}=\infty,\qquad \sum_{n=1}^\infty \frac{(-1)^{n-1}}n=\ln(2).$$ In short: the sum of the reciprocals of all numbers diverges (harmonic series). The sum of the reciprocals of the even/odd numbers diverges. But if you sum up the even and odd reciprocals with alternating signs, it converges. So here you have a specific infinite sum that seems to converge:$$1-\frac12+\frac13-\frac14+\cdots =\ln(2).$$ Of course, if you just rearrange finitely many of the summands, you will end up with the same sum. But if you shuffle up all the numbers, you can steer the limit anywhere you like. I think this is done in any proof of Riemann's rearrangement theorem, but let me outline the proof on this example. Let's say you want the rearranged sum to converge to $\pi$ (for fun). Then take some of the positive terms (the odd reciprocals) and add enough of them up until you are just greater than $\pi$:$$1+\frac13+\frac15+\cdots>\pi.$$You can do this, as we know that the sum of the odd reciprocals diverges. In the next step, only take the negative (even) reciprocals and subtract them from your sum until you are just below $\pi$. Again you can do this, because the sum of the even reciprocals diverges. Now again take positive terms, then negatives, then positives, and so on. In "the end" you will have used all the terms of the original sum, but you have rearranged them in a way that makes them converge to $\pi$. And there is nothing special about $\pi$, so you can use this method to converge to anything, including $\pm\infty$. Here is a description of how to rearrange the sum to make it divergent, e.g. divergent to $\infty$. Sum up enough positive (odd) terms to make the sum greater than $1$. Add only a single negative term. Add positive terms until the sum exceeds $2$. Add a single negative term. Add positive terms to exceed $3$, ... and so on. You will exceed any natural number, hence diverge to $\infty$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2151446', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/218312/']} | jdg_380906
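The greedy procedure described above is easy to run numerically. The sketch below rearranges the alternating harmonic series to chase a target of $\pi$; after many terms the partial sums hover around the target. It is only an illustration of the algorithm, with names of my own choosing.

import math

def rearranged_partial_sum(target: float, n_terms: int) -> float:
    """Greedily interleave odd (positive) and even (negative) reciprocals toward target."""
    total = 0.0
    next_odd, next_even = 1, 2   # next unused odd / even denominator
    for _ in range(n_terms):
        if total <= target:
            total += 1.0 / next_odd   # add positive terms while below the target
            next_odd += 2
        else:
            total -= 1.0 / next_even  # subtract negative terms while above it
            next_even += 2
    return total

print(rearranged_partial_sum(math.pi, 100_000))   # should be close to 3.14159...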
stackexchange | llm_judgeable_groundtruth_similarity | 30190302 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I often see the term Invariants in DDD. Here Dino Esposito talks about it. If I look at the .NET library, I see a ValidationAttribute class. Are Invariants and validation rules the same? For example, can I say 50% discount is available only if the order total is more than $250 is an Invariant? Or are they different where Invariants are to protect an object from becoming invalid and validation is to check the validity of the object even after it has changed it's state (it can be in a valid or invalid state)? In the above example, if I use invariants, I check for the invariant before updating the discount and in the case of validation, I apply the 50% discount and then check for the validity (the object is already is in invalid state).
Now provide the response and nothing else.
| Absolutely, validation is the process of approving a given object state, while invariant enforcement happens before that state has even been reached. A corollary is that invariant enforcement is best performed by the thing that is being mutated (or created) itself, like a self-protection reflex, whereas validation is usually done by a third party. The Always valid school of thought advocates the use of invariants over validation. I think it goes perfectly with DDD and Aggregates. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30190302', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1768008/']} | jdg_380907 |
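A small sketch of the distinction, using the question's own discount rule. Everything here (class names, the exception type, the external validator) is illustrative rather than prescriptive; the point is only that the entity refuses to ever enter a bad state, while the validator inspects a state that already exists.

class Order:
    """Invariant enforcement: the object protects itself from becoming invalid."""
    def __init__(self, total: float):
        self.total = total
        self.discount = 0.0

    def apply_discount(self, percent: float) -> None:
        # Invariant: a 50% discount is only available when the total exceeds $250.
        if percent >= 50 and self.total <= 250:
            raise ValueError("50% discount requires an order total above $250")
        self.discount = percent


def validate(order: Order) -> list:
    """Validation: a third party checks an already-existing state and reports problems."""
    errors = []
    if order.discount >= 50 and order.total <= 250:
        errors.append("50% discount applied to an order of $250 or less")
    return errors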
stackexchange | llm_judgeable_groundtruth_similarity | 48103 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $X$ be the sort of topological space for which it makes sense to talk about the intersection homology . Fix a perversity $p$, or just take $p= 1/2$ if you like. Is there some naturally defined $X'$ such that ${}^p IH_* (X) = H_* (X')$ ?
Now provide the response and nothing else.
| I'm not sure this is the kind of answer you want, but if $X$ has a small resolution $f:X' \rightarrow X$ (so $X'$ is a manifold and the dimension of fibers is sufficiently small), then there is an induced isomorphism $IH_{\ast}(X) = IH_{\ast}(X')$, and because $X'$ is smooth, the latter group is $H_{\ast}(X')$. More precisely, a proper (birational) map is small if the set $${x \in X | \dim f^{-1}(x) \geq r }$$ has codimension more than $2r$; such maps induce isomorphisms on IH. Of course, there is nothing natural about $X'$, nor do small resolutions necessarily exist (see the comment below by Mike Skirvin for an easy example). Hopefully someone more knowledgeable about IH will have something to say. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/48103', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4707/']} | jdg_380908 |
stackexchange | llm_judgeable_groundtruth_similarity | 43374244 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying use this code from http://getbootstrap.com/javascript/#popovers-examples <button type="button" class="btn btn-lg btn-danger" data-toggle="popover" title="Popover title" data-content="And here's some amazing content. It's very engaging. Right?"> Click to toggle popover</button> However I am not able to bind to the attribute "data-content", it gives me the below error. Can't bind to 'content' since it isn't a known property of 'span' This is my angular code <span data-toggle="popover" class="fa fa-exclamation-circle fa-lg" [data-content]="getTooltipMsg('lastName')"></span> How to fix this issue ?
Now provide the response and nothing else.
| Angular by default uses property binding, but there is no data-content property on the span element and there also doesn't seem to be an Angular component or directive with a data-content input on this element. Therefore you need to use attribute-binding: <span data-toggle="popover" class="fa fa-exclamation-circle fa-lg" [attr.data-content]="getTooltipMsg('lastName')"></span> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43374244', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5936814/']} | jdg_380909 |
stackexchange | llm_judgeable_groundtruth_similarity | 3605794 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working with datasets which are histograms of numbers.As an example of one 1: 4,989 occurrences2: 1,156 occurrences3: 285 occurrences4: 58 occurrences5: 9 occurrences6: 4 occurrencesTOTAL: 6500 e.g. What is the probability of picking 8 numbers (1-6) with this probability distribution and having it sum to 33 or more? On my 1st pick, I may pick a 1 (with probability of 4989/6500). I may then pick a 6 (with 4/6500 chance), then a 4, then a 1, then 2, then a 1. So that would sum to (1+6+4+1+2+1) 14. How often do I sum to 33 or more?
Now provide the response and nothing else.
| Scale down the equation as $$t + \frac{12}{\pi} \sin \left( \frac{\pi t}{12} \right)=125$$ Note that the RHS is much larger than the sine term. So, the solution is roughly $t=125$ , which can be refined by adding the first-order approximation as follows. Let $f(t) = t -125 + \frac{12}{\pi} \sin \frac{\pi t}{12} $ and the root $t=125 + \Delta t$ . Then, $$0 = f(125) + f’(125) \Delta t$$ Solve for $ \Delta t$ $$ \Delta t = - \frac{f(125)}{f’(125)} = -\frac{12}{\pi} \frac{\sin \frac{5\pi}{12}}{1+\cos \frac{5\pi}{12} }= -\frac{12}{\pi} \tan \frac{5\pi}{24}$$ Thus, the approximate analytic solution is $$t= 125 -\frac{12}{\pi} \tan \frac{5\pi}{24}\approx 122.1$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3605794', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/734477/']} | jdg_380910 |
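As a quick numerical cross-check of the first-order estimate above, one can compare it against a root found by bisection on the same equation. The snippet below is only a verification aid; it assumes the scaled form $t + \frac{12}{\pi}\sin(\pi t/12)=125$ used in the answer.

import math

def f(t: float) -> float:
    return t + (12 / math.pi) * math.sin(math.pi * t / 12) - 125

# First-order (Newton-style) estimate from the answer
approx = 125 - (12 / math.pi) * math.tan(5 * math.pi / 24)

# Simple bisection for the nearby root (f changes sign on this bracket)
lo, hi = 120.0, 125.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(approx, (lo + hi) / 2)   # the two values should agree to within roughly half a unit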
stackexchange | llm_judgeable_groundtruth_similarity | 228421 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
is there a general solution to that? I have seen simple examples for Y+X=Z but I was wondering how this would be with rescaling?
Now provide the response and nothing else.
| If we have a variable $X\sim U(0,1)$ and multiply it by $a$, then $aX\sim U(0,a)$. Assume that we're dealing with independent continuous uniform on $(0,a)$ and $(0,b)$ respectively (with $a<b$) (This assumption is not restrictive since we can obtain the general case from this easily.) Then the joint density is $\frac{1}{ab} I_{(0,a)}\times I_{(0,b)}$. Since the bivariate density is constant where it's non-zero, we can just draw it "looking from above" by marking the boundary of that non-zero region. ... and so by elementary geometric argument (along the lines of (i) recognize that density increases linearly as the sum, $z$ goes from $0$ to $a$, stays constant until $b$ and then decreases linearly to $a+b$, and (ii) that the height in the middle section must be $1/b$ to get unit area, then (iii) the equations of the three non-zero sections follow immediately by inspection ), the density of the convolution is $f(z) = \begin{cases} 0 & z\leq 0\\z/ab & 0<z<a \\1/b & a\leq z<b \\(a+b-z)/ab & b\leq z<a+b \\0 & z\geq a+b\end{cases}$ [While formal integration will obviously work, it's somewhat quicker - for me at least - to proceed by something like the above reasoning, where one simply draws the density and then writes the result down immediately.] The general case: Imagine instead we had independent $U(c,a+c)$ and $U(d,d+b)$. Then the above density would simply be shifted right by $c+d$. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/228421', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/126731/']} | jdg_380911 |
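If you want to see the trapezoidal shape emerge empirically, a short simulation comparing a histogram of $X+Y$ (with $X\sim U(0,a)$, $Y\sim U(0,b)$) against the piecewise density above does the job. The values of $a$ and $b$ below are arbitrary choices for the demonstration.

import numpy as np

a, b = 2.0, 5.0                      # arbitrary scales with a < b
rng = np.random.default_rng(0)
z = rng.uniform(0, a, 1_000_000) + rng.uniform(0, b, 1_000_000)

def density(z, a, b):
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    rising = (z > 0) & (z < a)
    flat = (z >= a) & (z < b)
    falling = (z >= b) & (z < a + b)
    out[rising] = z[rising] / (a * b)
    out[flat] = 1 / b
    out[falling] = (a + b - z[falling]) / (a * b)
    return out

hist, edges = np.histogram(z, bins=50, density=True)
mid = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - density(mid, a, b))))   # should be small (binning and sampling error only)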
stackexchange | llm_judgeable_groundtruth_similarity | 153178 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I wanted to prove that $n(n+1)(2n+1)$ is always divisible by three, for this I used the following approach: $n = 0, 1\pmod2$ for $n = 0$;$$n(n+1)(2n+1) \equiv 0\pmod3$$ for $n = 1$;$$n(n+1)(2n+1) \equiv 0\pmod3$$ But now I am unsure if this the correct way of doing because of the following eg: Let $k = \text{an odd number}\equiv 1\mod3$ for $k = 1$;$2k + 1 \equiv 0\pmod3$, therefore for every odd number $2k + 1$ is divisible by $3$, but this is not true. So my question is the approach I am using to prove $n(n+1)(2n+1)$ divisible by $3$ is wrong or only the second example is wrong?, and why?
Now provide the response and nothing else.
| Your first mistake is at the very beginning, when you split into two cases according as $n$ is even or odd. You’re looking at divisibility by $3$, not by $2$, so it’s unlikely to matter whether $n$ is even or odd; you should be looking at the cases $n\equiv 0\pmod3$, $n\equiv 1\pmod3$, and $n\equiv 2\pmod3$. Here’s most of a table showing the congruence classes mod $3$ of the expressions involved. $$\begin{array}{rcc}n\bmod3:&0&1&2\\(n+1)\bmod3:&1&2&0\\(2n+1)\bmod3:&1\\ \hline\big(n(n+1)(2n+1)\big)\bmod3:&0&&0\end{array}$$ With what I’ve already filled in, you can see that $n(n+1)(2n+1)$ is divisible by $3$ when $n\equiv 0,2\pmod3$; if you finish filling in the table, you can finish the proof. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/153178', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/32217/']} | jdg_380912 |
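If you just want to confirm the statement numerically before finishing the table, a brute-force check does it (this is evidence, not a proof):

assert all((n * (n + 1) * (2 * n + 1)) % 3 == 0 for n in range(10_000))
print("n(n+1)(2n+1) is divisible by 3 for every tested n")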
stackexchange | llm_judgeable_groundtruth_similarity | 4124220 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm outputting fields of a form in a template like this {{ form.first_name }} and I'd like to add a class (eg. blueprint's span-x -class) to it. So I'd like to know if there's a nice readymade solution (template filter) for that, which I could use in the fashion {{ form.first_name|add_class:"span-4" }} ? (I just want to know if Django's developers or anybody has thought of that yet without my knowledge before doing it on my own)
Now provide the response and nothing else.
| To solve this I made my own template filter; you can apply it on any tag, not just input elements!

import re

from django import template
from django.utils.safestring import mark_safe

register = template.Library()

class_re = re.compile(r'(?<=class=["\'])(.*)(?=["\'])')

@register.filter
def add_class(value, css_class):
    # Render the field/tag to a string first.
    string = str(value)
    match = class_re.search(string)
    if match:
        # Append the class only if it is not already present in the class attribute.
        m = re.search(r'^%s$|^%s\s|\s%s\s|\s%s$' % (css_class, css_class, css_class, css_class),
                      match.group(1))
        if not m:
            return mark_safe(class_re.sub(match.group(1) + " " + css_class, string))
    else:
        # No class attribute at all: add one to the tag.
        return mark_safe(string.replace('>', ' class="%s">' % css_class))
    return value
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4124220', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/183910/']} | jdg_380913
stackexchange | llm_judgeable_groundtruth_similarity | 22395116 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a recursive generator in Python, but I'm doing something wrong. Here's a minimal example. I would expect the function f() to return an iterable that would give me all the positive numbers >= n. >>> def f(n):... yield n... if n>0:... f(n-1)... >>> [ i for i in f(30) ][30] Why is the iteration stopping after the first number?
Now provide the response and nothing else.
| f(n-1) is itself just another generator object; nothing inside it runs until something iterates over it via the next protocol, and in your code that new generator is created and immediately discarded. If you are using Python 3.3+, you can use yield from, like this

def f(n):
    yield n
    if n > 0:
        yield from f(n-1)

print(list(f(10)))
# [10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

If you want to fix it without yield from, or you are using a Python version which doesn't have yield from, then you have to manually iterate and yield, like this

def f(n):
    yield n
    if n > 0:
        for item in f(n-1):
            yield item
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22395116', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2092609/']} | jdg_380914
stackexchange | llm_judgeable_groundtruth_similarity | 58285 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is quite embarassing but I've been revising an algebra text and I fail to get past through a supposedly easy detail. Take the symmetric group of permutations $S_{3}$ and its two elements: $(1 2 3)$ and $(1 3 2)$. Now any multiplication table tells you that $(1 2 3) \circ (1 2 3) = (1 3 2)$ and $(1 3 2) \circ (1 3 2) = (1 2 3)$. I just can't figure out why it is true. For example, $(1 2 3) \circ (1 2 3)$: we first send $1$ to $2$, then $2$ to $3$ by the $2^{nd}$ permutation. Secondly, we send $2$ to $3$ and then $3$ to $1$, which leads to $(1 2 3) \circ (1 2 3) = (2 3 1) = (1 2 3)$. What am I doing wrong? Did I misunderstand the notation? Thank you.
Now provide the response and nothing else.
| I think your issue is that you're using the same symbols to notate the elements being shuffled and the order in which you do the shuffling. In this example I'll use ABC as the three elements, and use (123) to mean that whatever is in position 1 goes to position 2, position 2 goes to position 3, and position 3 goes to position 1.

perm    pos1  pos2  pos3
none    A     B     C
(123)   C     A     B
(123)   B     C     A

starting over:

perm    pos1  pos2  pos3
none    A     B     C
(132)   B     C     A

In both cases, you end up with BCA. Therefore they are the same. Hope this helps.
 | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/58285', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8906/']} | jdg_380915
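To avoid library-specific cycle conventions entirely, here is a plain-Python check of the same computation, representing each cycle as an explicit position mapping and composing by applying one after the other. The convention matches the answer above ("whatever is in position p moves to position σ(p)"); other texts compose in the opposite order, which is exactly the trap the question fell into.

def apply_cycle(arrangement, mapping):
    """mapping[p] = q means the element in position p moves to position q (1-indexed)."""
    result = [None] * len(arrangement)
    for p, q in mapping.items():
        result[q - 1] = arrangement[p - 1]
    return result

c123 = {1: 2, 2: 3, 3: 1}   # the cycle (1 2 3) as a position mapping
c132 = {1: 3, 3: 2, 2: 1}   # the cycle (1 3 2)

start = ["A", "B", "C"]
print(apply_cycle(apply_cycle(start, c123), c123))   # ['B', 'C', 'A']
print(apply_cycle(start, c132))                      # ['B', 'C', 'A'] -- the same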
stackexchange | llm_judgeable_groundtruth_similarity | 3342611 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A stone is thrown with a velocity of 20 ms^(-1) and at an angle of 60°.A second stone is thrown at the same time and place,with the same velocity but an angle of 30°. (a) Find which stone finishes its flight first and by how long. (b) Show that both stones land at the same place. I'm having difficulty with projectile motion. Iv looked at numerous examples but to no avail. I need some help with where to start and some working for this question.
Now provide the response and nothing else.
| Number the people around the table $1,\ldots,12$ . As an example, the number of ways that persons $1,\ldots,12$ can engage in handshakes with no arms crossing given that person $1$ is shaking hands with person $6$ is the number of ways that persons $2,\ldots,5$ can handshake with no arms crossing multiplied by the number of ways that persons $7,\ldots,12$ can handshake with no arms crossing. This thought process generalised yields the following recurrence relation, letting $C_n$ be the number of ways that $n$ pairs can handshake, we have \begin{equation}C_{n+1} = \sum_{i=0}^n C_{i}C_{n-i},\end{equation} where $C_0 = 1$ . This is the recurrence relation of the Catalan numbers and has a well known closed form $C_n = \frac{1}{n+1} {{2n}\choose{n}}$ which can be derived by using its generating function. Thus for us $C_6 = 132$ . | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3342611', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/694587/']} | jdg_380916 |
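The recurrence and the closed form are easy to cross-check numerically; the snippet below confirms in particular that $C_6 = 132$.

from math import comb

def catalan_recurrence(n_max):
    c = [1]                                   # C_0 = 1
    for n in range(n_max):
        c.append(sum(c[i] * c[n - i] for i in range(n + 1)))   # C_{n+1}
    return c

recurrence = catalan_recurrence(10)
closed_form = [comb(2 * n, n) // (n + 1) for n in range(11)]
print(recurrence == closed_form, recurrence[6])   # True 132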
stackexchange | llm_judgeable_groundtruth_similarity | 11194927 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I realize that a SO user has formerly asked this question but it was asked in 2009 and I was hoping that more knowledge of HDF5 was available or newer versions had fixed this particular issue. To restate the question here concerning my own problem; I have a gigantic file of nodes and elements from a large geometry and have already retrieved all the useful information I need from it. Therefore, in Python, I am trying to keep the original file, but delete the information I do not need and fill in more information for other sources. For example, I have a dataset of nodes that I don't need. However, I need to keep the neighboring dataset and include information about their indices from an outside file. Is there any way to delete these specific datasets? Or is the old idea of having "placekeepers" in the HDF5 file still holding true, such that no one knows how/bothers with removing info? I'm not too worried about the empty space, as long as it is faster to simply remove and add on information then to create an entirely new file. Note: I'm using H5py's 'r+' to read and write.
Now provide the response and nothing else.
| Removing entire nodes (groups or datasets) from a hdf5 file should be no problem. However if you want to reclaim the space you have to run the h5repack tool. From the hdf5 docs : 5.5.2. Deleting a Dataset from a File and Reclaiming Space HDF5 does not at this time provide an easy mechanism to remove a dataset from a file or to reclaim the storage space occupied by a deleted object. Removing a dataset and reclaiming the space it used can be done with the H5Ldelete function and the h5repack utility program. With the H5Ldelete function, links to a dataset can be removed from the file structure. After all the links have been removed, the dataset becomes inaccessible to any application and is effectively removed from the file. The way to recover the space occupied by an unlinked dataset is to write all of the objects of the file into a new file. Any unlinked object is inaccessible to the application and will not be included in the new file. Writing objects to a new file can be done with a custom program or with the h5repack utility program. Alternatively you can also have a look into PyTables`s ptrepack tool. PyTables should be able to read h5py hdf5 files and the ptrepack tool is similar to the h5repack . If you want to remove records from a datasets, then you probably have to retrieve the records you want to keep and create a new dataset and remove the old one. PyTables supports removing rows, however it's not recommended. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11194927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1467328/']} | jdg_380917 |
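In h5py terms, the two steps the documentation describes look roughly like this; the file and dataset names below are placeholders, not values from the question.

import h5py

# Step 1: unlink the dataset you no longer need (space is NOT reclaimed yet).
with h5py.File("data.h5", "r+") as f:
    del f["nodes_i_dont_need"]

# Step 2: reclaim the space by rewriting the file, e.g. with the h5repack CLI tool:
#   h5repack data.h5 data_repacked.h5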
stackexchange | llm_judgeable_groundtruth_similarity | 47424966 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a property on my service class as so: articles: Observable<Article[]>; It is populated by a getArticles() function using the standard http.get().map() solution. How can I manually push a new article in to this array; One that is not yet persisted and so not part of the http get? My scenario is, you create a new Article, and before it is saved I would like the Article[] array to have this new one pushed to it so it shows up in my list of articles. Further more, This service is shared between 2 components, If component A consumes the service using ng OnInit() and binds the result to a repeating section *ngFor , will updating the service array from component B simultaneously update the results in components A's ngFor section? Or must I update the view manually? Many Thanks,Simon
Now provide the response and nothing else.
| As you said in comments, I'd use a Subject. The advantage of keeping articles observable rather than storing as an array is that http takes time, so you can subscribe and wait for results. Plus both components get any updates.

// Mock http
const http = { get: (url) => Rx.Observable.of(['article1', 'article2']) }

const articles = new Rx.Subject();

const fetch = () => {
  return http.get('myUrl').map(x => x).do(data => articles.next(data))
}

const add = (article) => {
  articles.take(1).subscribe(current => {
    current.push(article);
    articles.next(current);
  })
}

// Subscribe to
articles.subscribe(console.log)

// Action
fetch().subscribe( add('article3'))

<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.5.2/Rx.js"></script>
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47424966', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4210856/']} | jdg_380918
stackexchange | llm_judgeable_groundtruth_similarity | 4262138 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a function $\frac{x-y}{1+x^2+y^2}=f(x,y)$ . And, I want to find the range of it. I analyzed this function by plotting it on a graph and found interesting things. Like if the level curve is $0=f(x,y)$ , then I get $y=x$ which is a linear function. But if the level curve is something not 0, then the level curve becomes a circle. And for big values of level curves, the circle disappears. Is there something I can use to find the range of this function?
Now provide the response and nothing else.
| Take partial derivative of $f(x,y)$ and solve the system of $$\frac {\partial f}{\partial x}=0\implies x^2-y^2-2xy=1$$ $$\frac {\partial f}{\partial y}=0 \implies y^2-x^2-2xy=1$$ to find your local minimum and local maximum. I found $$x=-y=\pm\frac {\sqrt 2 }{2}$$ Note that your function approaches zero for large values of $x$ and $y$ Thus the range is $$[-\sqrt 2/2, \sqrt 2/2]$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4262138', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/973678/']} | jdg_380919 |
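A coarse grid search corroborates the computed extrema of $\pm\frac{\sqrt 2}{2}\approx\pm 0.707$ (this is only a numerical check, not part of the derivation):

import numpy as np

x, y = np.meshgrid(np.linspace(-10, 10, 2001), np.linspace(-10, 10, 2001))
f = (x - y) / (1 + x**2 + y**2)
print(f.max(), f.min())   # both should be close to +/- 0.7071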
stackexchange | llm_judgeable_groundtruth_similarity | 14713468 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I tried to read few articles on n-way merge, but did not understand the concept. I am confused on why would you use n-way merge over 2-way merge? Like why would you divide array in 3 parts, sort them then do 2-way merge of 2 parts and then 2-way merge of 3rd part with this merged 2 parts :) Thanks
Now provide the response and nothing else.
| In a "normal" merge sort, you divide the array by 2, until reaching a depth of log 2 n and then start merging. Each merge of two arrays of size m would also take 2m operations. This gets you to the following formula (in timing analysis): n/2 * 2 + n/4 * 4 + ... 1 * n = n * log 2 n Now if you do a three-way merge, you will divide the array by 3. The difference with the previous method is twofold: The depth of division is now log 3 n . During merge, instead of comparing 2 elements, you need to find the minimum of 3 elements. This means that, in the most basic implementation, you will get such a formula: n/3 * 2*3 + n/9 * 2*9 + ... 1 * 2*n = 2 * n * log 3 n Note that 2 is multiplied because finding the minimum of three elements consists of 2 operations. Asymptotically, these two are both Θ(nlogn) . However, perhaps (I haven't tried) in practice the three-way merge sort would give better performance because of its log 3 n . Nevertheless, since log 2 n for n = 1000000 is a mere 20, and log 3 n for the same number is 12.5, I doubt this optimization would be really effective unless n is quite large. With a clever implementation, a k-way merge may indeed have a nice impact on merge sort. The idea is that once you find the minimum of k elements, you already know the relationship between the rest of the k-1 elements that are not minimum. So once consuming that minimum element from its respective list, you need only compare the new value of that list and find its ordering with respect to the remaining k-1 elements. Using a heap, this would be quite trivial. Be sure to also see Jerry's answer . I agree with him that the true power of multiway merge comes from dealing with multiple disks and parallel processing. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14713468', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/870834/']} | jdg_380920 |
stackexchange | llm_judgeable_groundtruth_similarity | 5060414 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
we got high-load java application which works in clustered mode. I need to add ability to download and upload files for our customers.For storing files i'm going to user gridFs, not sure, it's best choice, but mongo can be clustered and mongo can replicate data between diff nodes.That's exactly what i need. Different group of users should be limited with different bandwidth. Based of some business rules i should restrict download speed for some users.I saw few solutions for this Most of them works same way. Read bunch of bytes Sleep thread Repeat Mongo simply provide me InputStrem and i can read from that stream and write to servlet output stream. I'm not sure it is valid approach. Also I'm afraid, that users can create a lot of concurent threads during download and it can hurt performance. Could it be an issue for servlet container ? If it could be an issue, how can it be avoided ? probably using nio ? I prefer to use pure java solution. Any help will be highly appreciated.
Now provide the response and nothing else.
| Leaky bucket or token bucket algorithms can be used to control the network bandwidth. EDIT: I did some quick prototyping and implemented the algorithm leveraging Servlet 3.0 asynchronous processing. Results are pretty good. Full source code can be found on GitHub . Have fun! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5060414', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/359376/']} | jdg_380921 |
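For reference, the token-bucket idea the answer points to can be sketched in a few lines. This is a simplified, single-threaded illustration of the algorithm (shown here in Python rather than Java), not the linked implementation.

import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec: float, capacity_bytes: float):
        self.rate = rate_bytes_per_sec
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes
        self.last = time.monotonic()

    def consume(self, n_bytes: int) -> None:
        """Block until n_bytes worth of tokens are available, then spend them."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= n_bytes:
                self.tokens -= n_bytes
                return
            time.sleep((n_bytes - self.tokens) / self.rate)

# Usage sketch: call bucket.consume(len(chunk)) before writing each chunk to the response.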
stackexchange | llm_judgeable_groundtruth_similarity | 7510415 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Background: For a C++ AMP overview, see Daniel Moth's recent BUILD talk . Going through the initial walk-throughs here , here , here , and here . Only in that last reference do they make a call to array_view.synchronize() . In these simple examples, is a call to synchronize() not needed? When is it safe to exclude? Can we trust parallel_for_each to behave "synchronously" without it (w/r/t the proceeding code)?
Now provide the response and nothing else.
| Use synchronize() when you want to access the data without going through the array_view interface. If all of your access to the data uses array_view operators and functions, you don't need to use synchronize(). As Daniel mentioned, the destructor of an array_view forces a synchronize as well, and it's better to call synchronize() in that case so you can get any exceptions that might be thrown. The synchronize function forces an update to the buffer within the calling context -- that is if you write data on the GPU and then call synchronize in CPU code, at that point the updated values are copied to CPU memory. This seems obvious from the name, but I mention it because other array_view operations can cause a 'synchronize' as well. C++ AMP array_view tries it best to make copying between the CPU and GPU memory implict -- any operation which reads data through the array view interface will cause a copy as well. std::vector<int> v(10);array_view<int, 1> av(10, v);parallel_for_each(av.grid, [=](index<1> i) restrict(direct3d) { av[i] = 7;}// at this point, data isn't copied backstd::wcout << v[0]; // should print 0// using the array_view to access data will force a copystd::wcout << av[0]; // should print 7// at this point data is copied backstd::wcout << v[0]; // should print 7 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7510415', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/22528/']} | jdg_380922 |
stackexchange | llm_judgeable_groundtruth_similarity | 90094 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $V$ be a finite dimensional vector space over a field of characteristic zero. Let $A$ be the space of maps in $\mathrm{End}(V^{\otimes n})$ which commute with the natural $GL(V)$ action. Clearly, any permutation of the tensor factors is in $A$. I am looking for an elementary proof that these permutations span $A$. If $\dim V \geq n$, there is a very simple proof. Take $e_1$, $e_2$, ..., $e_n$ in $V$ linearly independent and let $\alpha \in A$. Then $\alpha(e_1 \otimes e_2 \otimes \cdots \otimes e_n)$ must be a $t_1 t_2 \cdots t_n$ eigenvector for the action of the matrix $\mathrm{diag}(t_1, t_2, \ldots )$ in $GL(V)$. So $\alpha(e_1 \otimes \cdots \otimes e_n) = \sum_{\sigma \in S_n} c_{\sigma} e_{\sigma(1)} \otimes \cdots \otimes e_{\sigma(n)}$ for some constants $c_{\sigma}$. It is then straightforward to show that $\alpha$ is given by the corresponding linear combination of permutations. I feel like there should be an elementary, if not very well motivated, extension of the above argument for the case where $\dim V < n$, but I'm not finding it. Motivation: I'm planning a course on the combinatorial side of $GL_N$ representation theory -- symmetric polynomials, jdt, RSK and, if I can pull it off, some more modern things like honeycombs and crystals. Since it will be advertised as a combinatorics course, I want to prove a few key results that give the dictionary between combinatorics and representation theory, and then do all the rest on the combinatorial side. Based on the lectures I have outlined so far, I think this will be one of the few key results. The standard proof is to show that the centralizer of $k[S_n]$ is spanned by $GL(V)$, and then apply the double centralizer theorem. Although the double centralizer theorem (at least, over $\mathbb{C}$) doesn't formally involve anything I won't be covering, I think it is pretty hard to present it to people who aren't extremely happy with the representation theory of semi-simple algebras. So I am looking for an alternate route.
Now provide the response and nothing else.
| Let $W$ be a vector space of dimension $n$ containing $V$. Let $\alpha$ be an endomorphism of $V^{\otimes n}$ commuting with the action of ${\rm GL}(V)$. Suppose that $\alpha$ can be extended to an endomorphism $\beta$ of $W^{\otimes n}$ that commutes with the action of ${\rm GL}(W)$. Then, by the argument given by David Speyer in the question, there exist scalars $c_\sigma \in \mathbf{C}$ such that $$ \beta = \sum_{\sigma \in S_n} c_\sigma \sigma $$ and this also expresses $\alpha$ as a linear combination of place permutations of the tensor factors. (As I noted in my comment, this expression is, in general, far from unique.) Any proof that such an extension exists must use thesemisimplicity of $\mathbf{C}S_n$, since otherwise we get an easy proof of general Schur-Weyl duality. If we assume that ${\rm GL}(W)$ acts as the full ring of $S_n$-invariant endomorphisms of $W^{\otimes n}$ then a fairly short proof is possible. I think it is inevitable that it uses many of the same ideas as the double-centralizer theorem. A more direct proof would be very welcome. Let $U$ be a simple $\mathbf{C}S_n$-module appearing in $V^{\otimes n}$. Let $$ X = U_1 \oplus \cdots \oplus U_a \oplus U_{a+1} \oplus \cdots \oplus U_b $$ be the largest submodule of $W^{\otimes n}$ that is a direct sum of simple $\mathbf{C}S_n$-modules isomorphic to $U$. We may choose the decomposition so that $X \cap V^{\otimes n} = U_1 \oplus \cdots \oplus U_a$. Each projection map $W^{\otimes n} \rightarrow U_i$ is $S_n$-invariant, and so is induced by a suitable linear combination of elements of ${\rm GL}(W)$. Hence each $U_i$ for $1 \le i \le a$ is $\alpha$-invariant. Similarly, for each pair $i$, $j$ there is a isomorphism $U_i \cong U_j$ induced by ${\rm GL}(W)$; these isomorphisms are unique up to scalars (by Schur's Lemma). Using these isomorphisms we get a unique ${\rm GL}(W)$-invariant extension of $\alpha$ to $X$. Finally let $W^{\otimes n} = C \oplus D$ where $C$ is the sum of all simple $\mathbf{C}S_n$-submodules of $W^{\otimes n}$ isomorphic to a submodule of $V^{\otimes n}$ and $D$ is a complementary $\mathbf{C}S_n$-submodule. The previous paragraph extends $\alpha$ to a map $\beta$ defined on $C$. The projection map $W^{\otimes n} \rightarrow D$ is $S_n$-invariant and so is induced by ${\rm GL}(W)$. Hence we can set $\beta(D) = 0$ and obtain a ${\rm GL}(W)$-invariant extension $\beta : W^{\otimes n} \rightarrow W^{\otimes n}$ of $\alpha$. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/90094', 'https://mathoverflow.net', 'https://mathoverflow.net/users/297/']} | jdg_380923 |
stackexchange | llm_judgeable_groundtruth_similarity | 1661067 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the area of an equilateral triangle whose inscribed circle has radius $r$? I would like to learn how to deduce the formula. I deduced the circle outside the triangle, so now I tried to do it with the circle inside the triangle, but I haven't arrived to a solution yet.
Now provide the response and nothing else.
| Make a construction like so Here, $OC = r$, $BC = \frac{l}{2}$, $AB = l$. Since $ABC \sim BOC$, taking ratios, we get $AC = \frac{l^2}{4r}$. By the Pythagorean theorem, $AB^2 = AC^2 + BC^2$, Therefore, $$l = \sqrt{\frac{l^2}{4} + \frac{l^4}{16r^2}}$$ Simplifying, we get $l = r\sqrt{12}$ The area would be $\frac{\sqrt{3}}{4}l^2$, which would be $$3\sqrt{3}r^2$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1661067', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/119244/']} | jdg_380924 |
stackexchange | llm_judgeable_groundtruth_similarity | 2556447 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm wondering if there's a super-efficient way of confirming that an Image object references an entirely black image, so every pixel within the bitmap is ARGB(255, 0, 0, 0). What would you recommend? Most of these bitmaps will be 1024 x 6000 pixels (although it's not safe to assume they'll always be that size). I need this because we're having problems with the PrintWindow API. We find that nearly 20% of the time, at least some part of the image will be a black square (a subsequent capture will succeed). My idea to work around this was to call PrintWindow or WM_PRINT with each child window, then piece the whole image of the window back together. If I can find an efficient way of detecting that PrintWindow returned a black image for a particular child window, then I can quickly call PrintWindow again on that capture. It sucks, but PrintWindow is the only method of capturing a window that works on all windows (that I want, anyway) and supports capturing windows that are hidden and/or off-screen. When PrintWindow fails, it doesn't set an error code or return anything that indicates it failed. When it has this black square problem, it's always an entire window or child window that returns black. So by capturing each child window separately, I can be sure that each of my captures will have worked, providing it contains at least one non-black pixel. PrintWindow is better in Vista and above, apparently, but in this case we're limited to Server 2003.
Now provide the response and nothing else.
| I'd recommend you to lock the bitmap in the memory using the LockBits method of the System.Drawing.Bitmap type. This method returns the BitmapData type, from which you can receive a pointer to the locked memory region. Then iterate through the memory, searching for the non-zero bytes (really, faster by scanning for the Int32 or even Int64 values, depending on the platform you use).Code will look like this: // Lock the bitmap's bits. Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);BitmapData bmpData =bmp.LockBits(rect, ImageLockMode.ReadWrite, bmp.PixelFormat);// Get the address of the first line.IntPtr ptr = bmpData.Scan0;// Declare an array to hold the bytes of the bitmap.int bytes = bmpData.Stride * bmp.Height;byte[] rgbValues = new byte[bytes];// Copy the RGB values into the array.Marshal.Copy(ptr, rgbValues, 0, bytes);// Scanning for non-zero bytesbool allBlack = true;for (int index = 0; index < rgbValues.Length; index++) if (rgbValues[index] != 0) { allBlack = false; break; }// Unlock the bits.bmp.UnlockBits(bmpData); Consider using the unsafe code and direct memory access (using pointers) to improve performance. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2556447', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/40050/']} | jdg_380925 |
stackexchange | llm_judgeable_groundtruth_similarity | 13314 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I always thought that traditional file systems, are geared and optimized for non-ssd drive, where, for instance, data locality is important, and fragmentation is problematic. Is there a file system recommended today for SSD drives? Am I better off just using ext4?
Now provide the response and nothing else.
| If the SSD is to be your only disk platform, regardless of the number of devices, then you have a quandary: how to minimize writes while maintaining reliability and performance. More specifically, ext4 (and ext3, for that matter), NILFS, and almost any other modern file system will maintain a journal. Ordinarily this is desirable; however, when dealing with SSD devices it increases the writes performed against the device and thereby reduces its lifespan. One option is to select a conventional IDE, SATA, or other device to which the file system can write its journal. This way one may maintain the benefits of journaling without sacrificing the lifespan of the SSD device(s). In the case of ext4 this can be accomplished as: mke2fs -O journal_dev /dev/external_device then attached to the specific file system as: mkfs.ext4 -J journal=/dev/external_device . More information can be found in the man page . An additional feature of file systems to keep in mind when dealing with SSD devices is atime. Leaving atime enabled on a file system can drastically increase the number of writes to a given device over time. Options for changing this behavior include 'relatime' and 'noatime'. Since we seem to be focusing on ext4, the kernel documentation on the file system, including its available options, is available for reference here . Some other options to consider: noload , as vorbote suggested, and errors=remount-ro . | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/13314', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/1262/']} | jdg_380926
stackexchange | llm_judgeable_groundtruth_similarity | 24850 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am developing an application which has a client-server relationship, and I am having trouble deciding on the algorithm by which the session identifier is determined. My goal is to restrict imposters from acquiring other users' private data. I'm considering two options: Option 1: Generate a random 32-character hex string, store it in a database, and pass it from the server to the client upon successful client login. The client then stores this identifier and uses it in any future request to the server, which would cross-check it with the stored identifier. Option 2: Create a hash from a combination of the session's start time and the client's login username and/or hashed password and use it for all future requests to the server. The session hash would be stored in a database upon the first request, and cross-checked for any future request from the client. Other info: Multiple clients can be connected from the same IP simultaneously, and no two clients should have the same session identifier. Question: Which of these options is a better approach, with regards to my concerns (below) about each? My concern over the first option is that the identifier is completely random and therefore could be replicated by chance (although it's a 1 in a 3.4 * 10 38 chance), and used to "steal" one user's (who would also need to be using the client at the time) private data. My concern over the second option is that it has a security flaw, namely that if a user's hashed password is intercepted somehow, the entire session hash could be duped and the user's private data could be stolen. Thanks for any and all input.
Now provide the response and nothing else.
| The basic concept of a session identifier is that it is a short-lived secret name for the session, a dynamic relationship which is under the control of the server (i.e. under the control of your code). It is up to you to decide when sessions start and stop. The two security characteristics of a successful session identifier generation algorithm are: No two distinct sessions shall have the same identifier, with overwhelming probability. It should not be computationally feasible to "hit" a session identifier when trying random ones, with non-negligible probability. These two properties are achieved with a random session ID of at least, say, 16 bytes (32 characters with hexadecimal representation), provided that the generator is a cryptographically strong PRNG ( /dev/urandom on Unix-like systems, CryptGenRandom() on Windows/Win32, RNGCryptoServiceProvider on .NET...). Since you also store the session ID in a database server side, you could check for duplicates, and indeed your database will probably do it for you (you will want this ID to be an index key), but that's still time wasted because the probability is very low. Consider that every time you get out of your house, you are betting on the idea that you will not get struck by lightning. Getting killed by lightning has a probability of about 3 × 10^-10 per day ( really ). That's a life-threatening risk, your own life, to be precise. And yet you dismiss that risk, without ever thinking about it. What sense does it make, then, to worry about session ID collisions which are millions of times less probable, and would not kill anybody if they occurred? There is little point in throwing an extra hash function in the thing. Properly applied randomness will already give you all the uniqueness you need. Added complexity can only result in added weaknesses. Cryptographic functions are relevant in a scenario where you not only want to have sessions, but you also want to avoid any server-based storage cost; say, you have no database on the server. This kind of state offloading requires a MAC and possibly encryption (see this answer for some details). | {} | {'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/24850', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/16679/']} | jdg_380927 |
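To make option 1 from the answer above concrete, here is a minimal sketch in Python (not part of the original answer); it assumes the standard `secrets` module, which draws on the OS CSPRNGs mentioned above, and the helper name and storage comments are illustrative only.

```python
import secrets

def new_session_id() -> str:
    """Return a 32-character hex session identifier (16 random bytes).

    secrets draws from the operating system's CSPRNG (/dev/urandom on
    Unix-like systems, CryptGenRandom-backed APIs on Windows), which
    gives both properties described above: collisions are overwhelmingly
    unlikely and identifiers are infeasible to guess.
    """
    return secrets.token_hex(16)

# Issue an ID at login, store it server-side (e.g. as a unique, indexed
# column in a sessions table), and compare it on every request.
session_id = new_session_id()
print(session_id, len(session_id))  # e.g. 'f3a1...'  32
```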
stackexchange | llm_judgeable_groundtruth_similarity | 61161285 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I cannot get my .NET Core 3.1 console application to recognize my custom system environment variables. I can pull and print other system variables such as username just fine. But if I set a custom one like 'TestKey' in the example below, it is always null within the application. Here is my code: static void Main() { var config = new ConfigurationBuilder().AddEnvironmentVariables().Build(); var value = config.GetValue<string>("TestKey"); var envValue = Environment.GetEnvironmentVariable("TestKey"); Console.WriteLine($"Config Variable is: {value}"); Console.WriteLine($"Environment Variable: {envValue}"); } The output shows both values as empty, even though the System Environment Variables dialog clearly shows the variable has been set. Likewise, when I do an echo from the command prompt for the TestKey variable, it correctly returns the associated value. This is a Windows 10 Pro computer and it is not part of a domain. This behavior is truly puzzling. I have rebooted the computer and the system variable persists but still will not appear in the application.
Now provide the response and nothing else.
| There are two major points here. First, according to Configuration in ASP.NET Core , environment variables are retrieved only for the current process and are only set in processes launched from the command window they were set in. So, you should set them in the same command window before launching the app, or add them to the launchSettings.json file: "environmentVariables": { "Test": "Test", "ASPNETCORE_ENVIRONMENT": "Development"} and restart Visual Studio for that. Or just change it in the Debug properties of your project. Second, to be able to get a system environment variable with the GetEnvironmentVariable method, you should specify the EnvironmentVariableTarget.Machine parameter: var envValue = Environment.GetEnvironmentVariable("Test", EnvironmentVariableTarget.Machine); By default, the GetEnvironmentVariable(String) method retrieves an environment variable from the environment block of the current process only. Or, if you add the variable to launchSettings.json as in the previous point, setting the target isn't needed. However, if you run your project from the command line via dotnet run , you should be able to access the system environment variable as well, because on Windows systems the environment block of the current process includes: All environment variables that are provided to it by the parent process that created it. For example, a .NET application launched from a console window inherits all of the console window's environment variables. If there is no parent process, per-machine and per-user environment variables are used instead. For example, a new console window has all per-machine and per-user environment variables defined at the time it was launched. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/61161285', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1152681/']} | jdg_380928 |
stackexchange | llm_judgeable_groundtruth_similarity | 937 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that macros are implemented extremely unsafely in C/C++. Can't they be implemented in a safer way? Are the disadvantages of macros really bad enough to outweigh the massive power they provide?
Now provide the response and nothing else.
| I think the main reason is that macros are lexical . This has several consequences: The compiler has no way of checking that a macro is semantically closed, i.e. that it represents a “unit of meaning” like a function does. (Consider #define TWO 1+1 — what does TWO*TWO equal? 3.) Macros are not typed like functions are. The compiler cannot check that the parameters and return type make sense. It can only check the expanded expression that uses the macro. If the code doesn’t compile, the compiler has no way of knowing whether the error is in the macro itself or the place where the macro is used. The compiler will either report the wrong place half of the time, or it has to report both even though one of them is probably fine. (Consider #define min(x,y) (((x)<(y))?(x):(y)) : What should the compiler do if the types of x and y don’t match or don’t implement operator< ?) Automated tools cannot work with them in semantically useful ways. In particular, you can’t have things like IntelliSense for macros that work like functions but expand to an expression. (Again, the min example.) The side-effects of a macro are not as explicit as they are with functions, causing potential confusion for the programmer. (Consider again the min example: in a function call, you know that the expression for x is evaluated only once, but here you can’t know without looking at the macro.) Like I said, these are all consequences of the fact that macros are lexical. When you try to turn them into something more proper, you end up with functions and constants. | {} | {'log_upvote_score': 7, 'links': ['https://softwareengineering.stackexchange.com/questions/937', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/86/']} | jdg_380929 |
stackexchange | llm_judgeable_groundtruth_similarity | 891 |
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've started looking at incidence structures and combinatorial designs (possible motivation: to upper-bound some structures in generalized self-assembly), and the Wikipedia article makes the following interesting-yet-unexplained statement: block designs have application to software testing. (The Wikipedia page with the claim is here ). My google-fu doesn't pick up a good reference for this application. Can anyone give me a lead?
Now provide the response and nothing else.
| Suppose you have some software which includes a list of n variables s_1, s_2, ..., s_n, and suppose each of these variables is able to take on a range of values. Now suppose you want to test the program for 2-way interactions; that is, you want to test that the program runs correctly for every pair of variables s_i and s_j and every possible value that these variables can take. To do this by brute force -- by processing each pair i and j one-by-one, then testing each allowable value of s_i and s_j (while picking the others arbitrarily) -- would require testing an enormous number of cases. But we can test for several 2-way interactions in a single test. For example, if we have three variables s_1, s_2 and s_3, and we test when s_1 = 1, s_2 = 2 and s_3 = 0, then we have tested three possible 2-way interactions simultaneously. We can design a test suite for the program based on a covering array -- the columns represent the variables and the rows represent the specific tests to be performed (so fewer rows is better). In a strength-2 covering array, within any two columns i and j, there exists every possible pair of values of the variables s_i and s_j. There are higher-strength covering arrays which can test for t-way interactions (but usually interactions are a result of only a few components). Sets of mutually orthogonal Latin squares (and various other block designs) form particularly efficient covering arrays. | {} | {'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/891', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/30/']} | jdg_380930 |
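As an illustration of the pairwise idea described in the answer above, here is a small Python sketch (my own addition, not from the answer) that checks whether a candidate test suite is a strength-2 covering array; the variable domains and the four-row suite are made-up examples.

```python
from itertools import combinations, product

def covers_all_pairs(domains, tests):
    """Return True if `tests` exercises every value pair for every pair
    of variables, i.e. if it forms a strength-2 covering array.

    domains: list of lists, domains[i] = allowed values of variable i
    tests:   list of tuples, each assigning one value per variable
    """
    n = len(domains)
    for i, j in combinations(range(n), 2):
        needed = set(product(domains[i], domains[j]))
        seen = {(t[i], t[j]) for t in tests}
        if needed - seen:        # some pair of values was never tested together
            return False
    return True

# Three binary variables: exhaustive testing needs 2**3 = 8 runs, but
# these 4 rows already cover every 2-way interaction.
domains = [[0, 1], [0, 1], [0, 1]]
tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(covers_all_pairs(domains, tests))  # True
```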
stackexchange | llm_judgeable_groundtruth_similarity | 1933110 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the question: "The straight line $L$ passes through the points $A(-6,8)$ and $B(3,2)$. a) Find an equation of the line $L$. b) Show that the point $C(9, -2)$ lies on $L$." For part a) I got the answer $y = -\tfrac{2}{3}x + 4$, but I'm not quite sure how I would do part b). Please could you show your working so I understand this better.
Now provide the response and nothing else.
| This series does not converge because $\sqrt{n^2+1}>n$ and then $$\frac{\sqrt{n^2+1}}{n^2}>\frac{n}{n^2}=\frac{1}{n}$$and we know that $$\sum_{n\geq 1} \frac{1}{n}=+\infty$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1933110', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/369166/']} | jdg_380931 |
stackexchange | llm_judgeable_groundtruth_similarity | 4968590 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
guysI want to input multi-line text into a DataGridTextColumn, I can use "enter" to input a multi-line character. But i want use "shift+enter" like visual studio resource manager, here is my code with "enter" key. <DataGridTextColumn Header="Value" Binding="{Binding Value}" Width="*"> <DataGridTextColumn.ElementStyle> <Style TargetType="TextBlock"> <Setter Property="TextWrapping" Value="Wrap" /> </Style> </DataGridTextColumn.ElementStyle> <DataGridTextColumn.EditingElementStyle> <Style TargetType="TextBox"> <Setter Property="TextWrapping" Value="Wrap" /> <Setter Property="AcceptsReturn" Value="true" /> </Style> </DataGridTextColumn.EditingElementStyle>
Now provide the response and nothing else.
| One way you can do this is by handling the KeyDown event on the TextBox using an EventSetter in your Style. I took your example, removed the AcceptsReturn setter in the style, and added a KeyDown handler to the EditingElementStyle that adds a newline to where the caret was and moves the CaretIndex to the right. Here's the XAML: <DataGridTextColumn Header="Value" Binding="{Binding Value}" Width="*"> <DataGridTextColumn.ElementStyle> <Style TargetType="TextBlock"> <Setter Property="TextWrapping" Value="Wrap" /> </Style> </DataGridTextColumn.ElementStyle> <DataGridTextColumn.EditingElementStyle> <Style TargetType="TextBox"> <Setter Property="TextWrapping" Value="Wrap" /> <EventSetter Event="KeyDown" Handler="OnTextBoxKeyDown"/> </Style> </DataGridTextColumn.EditingElementStyle></DataGridTextColumn> I wrote the example in the Window class from a new Application project template, so here's the C# for the whole Window with the event handling code. Note that I set Handled to true to stop the event from bubbling up anywhere as I don't want the Return key to be handled as a commit to the editing row in this case. I think this is actually one of the downsides of the approach. Stopping the bubbling/tunneling of an event is something that, if you have complex interactions with user input in your application, can easily grow into a logic bomb. But it isn't so bad if you just have one special case like this. So as with everything, use cautiously as the part of your UI using this grows. public partial class MainWindow : Window{ public MainWindow() { InitializeComponent(); this.DataContext = new List<Thing> { new Thing { Value = "Some text" }, new Thing { Value = "More text" + Environment.NewLine + " second line" }, new Thing { Value = "Another value" } }; } private void OnTextBoxKeyDown(object sender, KeyEventArgs e) { if (Key.Return == e.Key && 0 < (ModifierKeys.Shift & e.KeyboardDevice.Modifiers)) { var tb = (TextBox)sender; var caret = tb.CaretIndex; tb.Text = tb.Text.Insert(caret, Environment.NewLine); tb.CaretIndex = caret + 1; e.Handled = true; } }}public class Thing{ public string Value { get; set; }} One other thing to consider is that you may want the behavior to be different if the insert key has been pressed and you are in override input mode. Maybe in this case, the next character should be replaced by the new line. But the resource editor in Visual Studio 2010 doesn't seem to react to the insert key (it also doesn't show the text as multi-line). But I think given this example, you could extend it to work well with the insert key. Hope this helps - good luck! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4968590', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/612844/']} | jdg_380932 |
stackexchange | llm_judgeable_groundtruth_similarity | 569230 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't find the limit of the following: $$\lim_{p\to1} \frac{ p^{1/3} - 1 }{p - 1}$$ Any ideas?
Now provide the response and nothing else.
| Putting $p^{\frac13}=q\implies p=q^3$ as $p\to1, q\to1$ $$\lim_{p\to1}\frac{p^{\frac13}-1}{p-1}$$ $$=\lim_{q\to1}\frac{q-1}{q^3-1}$$ $$=\lim_{q\to1}\frac{(q-1)}{(q-1)(q^2+q+1)}$$ $$=\lim_{q\to1}\frac1{q^2+q+1}\text{ as }q-1\ne0\iff q\ne1\text{ as } q\to1$$ $$=\cdots$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/569230', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/86007/']} | jdg_380933 |
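For completeness (this step is left as the trailing dots in the answer above), evaluating the remaining expression at $q=1$ gives

$$\lim_{q\to1}\frac{1}{q^2+q+1}=\frac{1}{1^2+1+1}=\frac{1}{3},\qquad\text{so}\qquad\lim_{p\to1}\frac{p^{1/3}-1}{p-1}=\frac{1}{3},$$

which is consistent with the derivative of $p^{1/3}$ evaluated at $p=1$.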
stackexchange | llm_judgeable_groundtruth_similarity | 30228818 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I redirect any URL pattern that doesn't exist in urls.py to a custom "404.html" page, rather than being shown the default error by Django?
Now provide the response and nothing else.
| Make a view that renders your 404.html and set it as handler404 in urls.py: handler404 = 'app.views.view_404' (note that a Python view name cannot start with a digit, so use something like view_404 rather than 404_view). Django will render its debug view if DEBUG is enabled; otherwise it will render the 404 page specified by handler404 for any page that doesn't exist. See the Django documentation on Customizing error views . Check this answer for a complete example. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30228818', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4887061/']} | jdg_380934 |
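A minimal sketch of how the pieces fit together, assuming a recent Django project; the app and module names are illustrative, and view_404 simply matches the handler string used in the answer above.

```python
# app/views.py  (the app/module path is an illustrative assumption)
from django.shortcuts import render

def view_404(request, exception=None):
    # Render the custom template while keeping the proper 404 status code.
    return render(request, "404.html", status=404)

# project/urls.py -- alongside the urlpatterns list:
# handler404 = "app.views.view_404"
```

Remember that with DEBUG = True Django shows its own debug page instead, so the custom page only appears once DEBUG is False.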
stackexchange | llm_judgeable_groundtruth_similarity | 48545901 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have test code that is effecting other tests and causing them to fail. When I run test cases in isolation everything passes, but when I run the entire suit there are many failures. If you look at both tests below you can see I override a mocked module within the test to cause an exception to be thrown. HttpService.post = jest.fn(() => { return Promise.reject({ payload: 'rejected' });}); after this line has been run, all tests that need the original HttpService.post mock fail because they aren't reset. How can I properly restore my mock to the imported mock after this test? I have tried jest.resetMock in a beforeEach and about every jest method like it but nothing has worked. I know the answer is probably straight forward but I am confused with all of the differences I read about online around how code is imported (es6 import, commonJs). Thanks! import HttpService from '../../services/httpService';import handleErrors from '../../utilities/handleErrors';jest.mock('../../services/httpService');jest.mock('../../utilities/handleErrors');describe('async actions', () => { beforeEach(() => { store = mockStore({}); }); describe('some describe that wraps both tests', () => { describe('a describe that wraps just the first test', () => { test(`creates ${constants.actions.REQUEST_SAVE_NOTE_FAILURE}`, () => { HttpService.post = jest.fn(() => { return Promise.reject({ payload: 'rejected' }); }); const expectedActions = [ { type: constants.actions.REQUEST_SAVE_NOTE }, { type: constants.actions.REQUEST_SAVE_NOTE_FAILURE, data: { payload: 'rejected' } }, ]; return store.dispatch(actions.saveNote({ id: 1, note: 'note', })).then(() => { expect(store.getActions()).toEqual(expectedActions); }); }); }); describe('a describe that wraps just the second test', () => { test(`creates ${constants.actions.REQUEST_SAVE_NOTE} and ${constants.actions.RECEIVE_SAVE_NOTE}`, () => { params = { body: { prospects: [1], note: 'note', }, }; const expectedActions = [ { type: constants.actions.REQUEST_SAVE_NOTE }, { type: constants.actions.RECEIVE_SAVE_NOTE, data: { payload: 'payload' } }, ]; return store.dispatch(actions.saveNote({ id: 1, note: 'note', })).then(() => { expect(store.getActions()).toEqual(expectedActions); expect(HttpService.post).toBeCalledWith({ ...params, url: '/api/prospect/add-note' }); }); }); }); })});
Now provide the response and nothing else.
| If you import a module that is an object, you need to mock every exported function independently: import HttpService from '../../services/httpService';jest.mock('../../services/httpService', ()=>({ post: jest.fn()})); Later on you can set the behaviour of the mock like this: HttpService.post.mockImplementation(()=>Promise.reject({ payload: 'rejected' })) and reset it with HttpService.post.mockReset() | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48545901', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1939008/']} | jdg_380935 |
stackexchange | llm_judgeable_groundtruth_similarity | 38293030 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In a UML sequence diagram, if a flow should stop when a condition is met midway, how would it best be represented with alternate/optional fragments? That is, if the stop condition is not met, the flow continues for several more steps. Should the alternate fragment cover all of the steps after the stop condition, making all steps past it part of the alternate fragment, or is there a notation to handle the stop inside a small alternate fragment (confined only to the condition)?
Now provide the response and nothing else.
| There are three options for this situation. Each of them I illustrate with a diagram showing how the respective combined fragment should be used. The actual behaviour is hidden with interaction references (normalFlow for a flow that should normally be executed and breakFlow for any flow that should happen in case of a required break). The first solution is the most convenient one - it exactly covers your case and you can also use the positive version of a break guard. However each of them provide you a valid possibility. Break combined fragment When a break combined fragment is met and its guard condition is true, only this fragment is still executed and then the execution of the interaction (flow) stops. If the condition is not met, the combined fragment is omitted and the normal flow continues. This is exactly the case you describe. In this case you would put the messages that shouldn't be executed in case of a break condition after the break combined fragment. Opt combined fragment When an opt combined fragment is met it executes only if a guard condition is true. The rest of a flow continues regardless of the condition.You can put the part of the flow that is continued only if the break condition is not met inside the opt combined fragment. The opt fragment should have a guard that is opposite to the condition at which the flow should stop. If any additional actions should happen in case of a break, they should be put after the opt combined fragment. Alt combined fragment When an alt combined fragment is met its guard conditions are evaluated and only the eventual one fragment which guard evaluates to true is executed. There might be also a situation when none of the guards evaluate to true and no fragment is executed in such case. Whatever flow is after the combined fragment is executed normally anyway.In this case you would preferably put two fragments, one with the correct operation guard and the second one with a condition that should cause a break. Put the normal flow in the first fragment and whatever should happen in case of a break in the second fragment. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38293030', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6571388/']} | jdg_380936 |
stackexchange | llm_judgeable_groundtruth_similarity | 41518351 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have just started using NumPy. What is the difference between resize and reshape for arrays?
Now provide the response and nothing else.
| Reshape doesn't modify the array, as mentioned here . Resize does modify it, as can be seen here . Here are some examples: >>> numpy.random.rand(2,3) array([[ 0.6832785 , 0.23452056, 0.25131171], [ 0.81549186, 0.64789272, 0.48778127]]) >>> ar = numpy.random.rand(2,3) >>> ar.reshape(1,6) array([[ 0.43968751, 0.95057451, 0.54744355, 0.33887095, 0.95809916, 0.88722904]]) >>> ar array([[ 0.43968751, 0.95057451, 0.54744355], [ 0.33887095, 0.95809916, 0.88722904]]) After reshape, the original array didn't change; reshape only returned a new, reshaped array object. >>> ar.resize(1,6) >>> ar array([[ 0.43968751, 0.95057451, 0.54744355, 0.33887095, 0.95809916, 0.88722904]]) After resize, the array itself changed its shape (it was modified in place). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41518351', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7386765/']} | jdg_380937 |
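One extra contrast that may help (my addition, not from the answer above): reshape refuses a shape whose total size differs, while the module-level np.resize function will change the total size, repeating the data as needed.

```python
import numpy as np

a = np.arange(6)            # array([0, 1, 2, 3, 4, 5])

print(a.reshape(2, 3))      # fine: 2 * 3 == 6, returns a reshaped view where possible
try:
    a.reshape(2, 4)         # 2 * 4 != 6 -> reshape refuses
except ValueError as e:
    print("reshape:", e)

# np.resize (the function, as opposed to the ndarray.resize method used
# above) returns a new array of the requested total size, repeating the
# original data to fill it:
print(np.resize(a, (2, 4)))
# [[0 1 2 3]
#  [4 5 0 1]]
```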
stackexchange | llm_judgeable_groundtruth_similarity | 14126035 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to delay events in my method by using a timer; however, I do not necessarily understand how to use a timer to wait. I set up my timer to be 2 seconds, but when I run this code the last call runs without a 2 second delay. Timer timer = new Timer(); timer.Tick += new EventHandler(timer_Tick); // Every time the timer ticks, timer_Tick will be called timer.Interval = (1000) * (2); // Timer will tick every 2 seconds timer.Enabled = true; // Enable the timer void timer_Tick(object sender, EventArgs e){ timer.Stop();} private void button1_Click(object sender, EventArgs e){ label1.Text = "first"; timer.Start(); label1.Text = "second";} So when I click my button, it immediately shows label1 as "second", as opposed to changing to "first", waiting 2 seconds, then changing to "second". I have read lots of threads here about using timers instead of Thread.Sleep, but I cannot seem to find/figure out how to actually implement that.
Now provide the response and nothing else.
| timer.Start() just starts the timer but immediately returns while the timer is running in the background. So between setting the label text to first and to second there is nearly no pause. What you want to do is wait for the timer to tick and only then update the label again: void timer_Tick(object sender, EventArgs e){ timer.Stop(); label1.Text = "second";}private void button1_Click(object sender, EventArgs e){ label1.Text = "first"; timer.Start();} Btw. you should not set timer.Enabled to true, you are already starting the timer using timer.Start() . As mentioned in the comments, you could put the timer creation into a method, like this (note: this is untested): public void Delayed(int delay, Action action){ Timer timer = new Timer(); timer.Interval = delay; timer.Tick += (s, e) => { action(); timer.Stop(); }; timer.Start();} And then you could just use it like this: private void button1_Click(object sender, EventArgs e){ label1.Text = "first"; Delayed(2000, () => label1.Text = "second");} Tergiver’s follow-up Does using Delayed contain a memory leak (reference leak)? Subscribing to an event always creates a two-way reference. In this case timer.Tick gets a reference to an anonymous function (lambda). That function lifts a local variable timer , though it's a reference, not a value, and contains a reference to the passed in Action delegate. That delegate is going to contain a reference to label1 , an instance member of the Form . So is there a circular reference from the Timer to the Form ? I don't know the answer, I'm finding it a bit difficult to reason about. Because I don't know, I would remove the use of the lambda in Delayed , making it a proper method and having it, in addition to stopping the timer (which is the sender parameter of the method), also remove the event. Usually lambdas do not cause problems for the garbage collection. In this case, the timer instance only exists locally and the reference in the lambda does not prevent the garbage collection to collect the instances (see also this question ). I actually tested this again using the .NET Memory Profiler. The timer objects were collected just fine, and no leaking happened. The profiler did give me a warning that there are instances that “[…] have been garbage collected without being properly disposed” though. Removing the event handler in itself (by keeping a reference to it) did not fix that though. Changing the captured timer reference to (Timer)s did not change that either. What did help—obviously—was to call a timer.Dispose() in the event handler after stopping the timer, but I’d argue if that is actually necessary. I don’t think the profiler warning/note is that critical. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14126035', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/916986/']} | jdg_380938 |
stackexchange | llm_judgeable_groundtruth_similarity | 3240222 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Normally when I grab an X509Certificate2 out of my keystore I can call .PrivateKey to retrieve the cert's private key as an AsymmetricAlgorithm . However I have decided to use Bouncy Castle and its instance of X509Certificate only has a getPublicKey(); I cannot see a way to get the private key out of the cert. Any ideas? I get the an X509Certificate2 from my Windows-MY keystore then use: //mycert is an X509Certificate2 retrieved from Windows-MY KeystoreX509CertificateParser certParser = new X509CertificateParser();X509Certificate privateCertBouncy = certParser.ReadCertificate(mycert.GetRawCertData());AsymmetricKeyParameter pubKey = privateCertBouncy.GetPublicKey();//how do i now get the private key to make a keypair? Is there anyway to convert a AsymmetricAlgorithm (C# private key) to a AsymmetricKeyParameter (bouncycastle private key)?
Now provide the response and nothing else.
| Don't know BouncyCastle that much but it seems to me that the simple thing to do is to recreate the key based on the key parameters. public static AsymmetricKeyParameter TransformRSAPrivateKey( AsymmetricAlgorithm privateKey){ RSACryptoServiceProvider prov = privateKey as RSACryptoServiceProvider; RSAParameters parameters = prov.ExportParameters(true); return new RsaPrivateCrtKeyParameters( new BigInteger(1,parameters.Modulus), new BigInteger(1,parameters.Exponent), new BigInteger(1,parameters.D), new BigInteger(1,parameters.P), new BigInteger(1,parameters.Q), new BigInteger(1,parameters.DP), new BigInteger(1,parameters.DQ), new BigInteger(1,parameters.InverseQ));} You can call the code by using AsymmetricKeyParameter bouncyCastlePrivateKey = TransformRSAPrivateKey(mycert.PrivateKey); Obviously this assumes that the certificate includes a RSA Key but the same result can be achieved for DSA with DSACryptoServiceProvider and DSAParameters . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3240222', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/104998/']} | jdg_380939 |
stackexchange | llm_judgeable_groundtruth_similarity | 11503065 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to know how to write a Python function which can flatten a generator that yields other generators or iterables (which can themselves yield more generators/iterables ... possibly nested to arbitrary depth). Here is an example: gen(gen(1,2,3), gen(4,5,6), [7,8,9], [gen(10,11,12), gen(13,14,15)]) Note: gen means a generator object; the content between the parentheses after gen is the data that the generator will yield. The expected result after "flattening": gen(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15) It is necessary for the flatten function to return a generator too! (Because otherwise the preceding usage of generators would be meaningless.) Just to note, I am using Python 3. Thanks!
Now provide the response and nothing else.
| The easiest way is a recursive flattening function. Assuming you want to descend into every iterable except for strings, you could do this: def flatten(it): for x in it: if (isinstance(x, collections.Iterable) and not isinstance(x, str)): for y in flatten(x): yield y else: yield x Starting from Python 3.3, you can also write def flatten(it): for x in it: if (isinstance(x, collections.Iterable) and not isinstance(x, str)): yield from flatten(x) else: yield x | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11503065', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/821113/']} | jdg_380940 |
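For reference, here is a self-contained usage sketch of the same approach, adapted for current Python versions (the Iterable ABC now lives in collections.abc, and the old collections.Iterable alias used in the answer above was removed in Python 3.10); the gen helper just mirrors the notation used in the question.

```python
from collections.abc import Iterable

def flatten(it):
    for x in it:
        # Also excluding bytes, which the original answer did not need to consider.
        if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):
            yield from flatten(x)
        else:
            yield x

def gen(*values):
    # Helper mirroring the question's gen(...) notation.
    yield from values

nested = gen(gen(1, 2, 3), gen(4, 5, 6), [7, 8, 9],
             [gen(10, 11, 12), gen(13, 14, 15)])

print(list(flatten(nested)))
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
```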