source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses 1 value) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10)
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 24834951 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When products A and B each install several MSIs and some of the MSIs are the same, will uninstalling either A or B affect the other? Does install location matter? Also, what happens when common MSI C's version is higher in Product B and B upgrades C on install? Now uninstalling B will remove the common MSI C which breaks Product A. How do you handle this gracefully without using the Permanent flag?
Now provide the response and nothing else.
| The first thing that comes to mind with this question is whether the products in question are decomposed the way they should be. As a general rule, all MSI files think they own whatever they install, and on uninstall they will remove everything attached to a component GUID inside the MSI if the reference count (the number of products using the component) is zero. There are some qualifications to this rule:
- If the component is marked permanent, it is never uninstalled.
- If the file / registry item has no component GUID at all, it is installed but never tracked by Windows Installer, and won't get uninstalled either.
- Finally, the reference counting for MSI allows the same component to be shared between several products, and it will persist on disk during uninstall if it is registered as in use by several other installer packages.

The mechanisms for creating shared components between MSI packages are generally:
- Merge modules allow you to install shared components that are reference counted and that will remain on disk after uninstall of a related product if there are other clients using the GUID on the system. A merge module is merged into other MSI packages at compile time - a form of binary early binding, if you like. It can be merged into any package.
- With the advent of Wix (XML-based installer source files), it is possible to include the same segment of files from several setups via an XML source include file instead of a merge module. This is vastly superior in my opinion because Wix works better for source control (see the Wix link for an explanation). It is crucially important to realize that a "Wix source include file" has the exact same effect as a merge module - its components are reference counted properly for sharing between different installer packages, provided the GUIDs in the source file are hard coded (I recommend not using auto-generated GUIDs for this particular purpose). It is my personal opinion that you should use third-party merge modules for generic runtime files, but only Wix includes for your own shared files. Merge modules are harder to manage than Wix includes imho.

Updating and file replacement: As to update scenarios, the MSI file replacement rules will take care of updating newer files, depending on the overall setting in the special Windows Installer property REINSTALLMODE. In general, higher-version files overwrite lower-version files. Non-versioned files are overwritten if they are unmodified; if they are modified, the created and modified date stamps differ and the file is left alone. Keep in mind that downgrading files is actively discouraged by the overall MSI design. If you need to downgrade files (shared or not), there is something deployment-smelly about your design.

At this point I would thoroughly read these answers:
- Windows Installer and the creation of WiX - for a short Wix history and context
- Change my component GUID in wix? - for component reference counting
- Wix installation, server, client or both - for client / server packaging
- Wix to Install multiple Applications - for changing requirements and setup problems
- WiX tricks and tips - for community Wix tips and tricks
- How to include wxi file into wxs? - for a simple idea of how to deal with Wix include files

Provided you use Wix, or you are willing to use Wix, I would think the best way to deal with your overlapping products is to decompose your installer into Wix segment source files that you include as needed in your main installers.
This will allow the uninstall of one product to leave in place any components used by other applications. With that said, I do not like to create too many overlapping dependencies in my installers, for the reasons listed in this article (also listed above): Wix to Install multiple Applications. For stability it is crucially important that shared components are stable before being used by too many setups, since as a general rule a bug fix will require the recompilation of all setups the shared component is compiled or merged into. The easy way to say it: bundle files together that change together. To counteract this need for massive recompilation, you can choose to deliver a stand-alone supporting setup consisting of some of the shared components. One, or a couple of, such "shared components setups" - likely to contain Wix includes that change together on a similar, slow release schedule - plus separate setups for each product should be able to account for any deployment need whilst maintaining a balance between maintainability and flexibility. The product setup should then be the one that gets recompiled often, and the shared-modules setups should be designed for minimal recompilation. Then wait for changing requirements :-). To me it is all about cohesion and coupling, and the difficulty of balancing sales, marketing and technical needs. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24834951', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1254811/']} | jdg_379641 |
stackexchange | llm_judgeable_groundtruth_similarity | 1114255 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
x ∧ ∼y → ∼z is logically equivalent to x ∧ z → y. I can't figure it out; the negations especially are throwing me off.
Now provide the response and nothing else.
| Unfortunately, there is no more elementary argument than going through some form of AC, because the result actually does depend on some amount of choice. As shown by e.g. C.J. Ash (see this 1973 J. Australian Math. Society paper), an isomorphism between $(\mathbb{R},+)$ and $(\mathbb{C},+)$ implies the existence of a non-measurable set of reals. The paper has the full argument, but the short version is that (assuming that all sets of reals are measurable) one takes an isomorphism $f:\mathbb{R}\oplus\mathbb{R}\to\mathbb{R}$, defines the sets $S_n=f[\mathbb{R}\oplus[n,n+1)]\cap(0,1)$ (that is, the image of $\mathbb{R}\oplus[n,n+1)$ under $f$, intersected with the unit interval), and then shows that (a) the $S_n$ partition $(0,1)$ and (b) they all have the same measure. This is enough to contradict countable additivity. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1114255', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/209620/']} | jdg_379642 |
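To spell out the final step of the answer above (a standard measure-theory fact, not part of the original text): if the countably many sets $S_n$ partition $(0,1)$ and all have the same measure $c$, then countable additivity would give

$$1=\lambda\big((0,1)\big)=\sum_{n}\lambda(S_n)=\sum_{n}c\in\{0,\infty\},$$

which is impossible; hence, under the assumed isomorphism, not every set of reals can be measurable.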
stackexchange | llm_judgeable_groundtruth_similarity | 4783 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We know that $$ 2^n= (1+1)^n = \sum_{k=0}^n {n \choose k}$$ I was asked to solve this limit, $$\lim_{n \to \infty} \ \sum_{k=0}^n {n \choose k}^{-1}=? \quad \text{for} \ n \geq 1$$
Now provide the response and nothing else.
| The terms from $k = 2$ to $k = n-2$ are all at most $\frac{1}{ {n \choose 2} }$ by unimodality, hence their sum is at most $\frac{2}{n-1}$. So the entire sum is at most $2 + \frac{2}{n} + \frac{2}{n-1}$. The limit is $2$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/4783', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_379643 |
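For completeness, the squeeze behind the answer above, valid for $n\ge 4$ (the lower bound of $2$ comes from the $k=0$ and $k=n$ terms alone):

$$2\;\le\;\sum_{k=0}^{n}\binom{n}{k}^{-1}\;=\;\underbrace{1+1}_{k=0,\,n}\;+\;\underbrace{\tfrac{1}{n}+\tfrac{1}{n}}_{k=1,\,n-1}\;+\;\sum_{k=2}^{n-2}\binom{n}{k}^{-1}\;\le\;2+\frac{2}{n}+\frac{2}{n-1},$$

and both bounds tend to $2$ as $n\to\infty$, so the limit is $2$.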
stackexchange | llm_judgeable_groundtruth_similarity | 45187928 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to Swift 4 and trying to figure out how to convert JSON to a Swift object automatically, like Gson does in Java. Is there any plugin I can use which can convert my JSON to an object and vice versa? I have tried the SwiftyJSON library but couldn't understand the syntax for directly mapping the JSON to an object. In Gson the conversion is as follows: String jsonInString = gson.toJson(obj); Staff staff = gson.fromJson(jsonInString, Staff.class); Can you please suggest a really simple example for a beginner like me? Below is my Swift Person class: class Person { let firstName: String let lastName: String init(firstName: String, lastName: String) { self.firstName = firstName self.lastName = lastName }} Below is the method call to fetch the response from the server: let response = Helper.makeHttpCall(url: "http://localhost:8080/HttpServices/GetBasicJson", method: "PUT", param: interestingNumbers) In the response variable I am getting this JSON: { "firstName": "John", "lastName": "doe"}
Now provide the response and nothing else.
| There's no need for external libraries in Swift anymore. As of Swift 4, there are 2 protocols that can achieve what you are looking for: Decodable and Encodable , which are grouped into the Codable typealias, plus the JSONDecoder class. You just need to create an entity that conforms to Codable ( Decodable should be enough in this example). struct Person: Codable { let firstName, lastName: String}// Assuming makeHttpCall has a callback:Helper.makeHttpCall(url: "http://localhost:8080/HttpServices/GetBasicJson", method: "PUT", param: interestingNumbers, callback: { response in // response is a String ? Data ? // Assuming it's Data let person = try! JSONDecoder().decode(Person.self, from: response) // Uncomment if it's a String and comment the line before // let jsonData = response.data(using: .utf8)! // let person = try! JSONDecoder().decode(Person.self, from: jsonData) print(person)}) More info: Apple's sample code: Using JSON with Custom Types SE-0167: Swift Encoders In-depth guide to JSON parsing in Swift 4 JSON with Encoder and Encodable JSON to Swift with Decoder and Decodable | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45187928', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6810881/']} | jdg_379644 |
stackexchange | llm_judgeable_groundtruth_similarity | 503291 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I will start building a relay based on this answer: https://electronics.stackexchange.com/a/464345/56969 Why is T1 needed? Every other component has an important purpose. But does T1 has an important purpose as well? Can I just remove it? What would happen if I remove that transistor? Edit I know Arduino cannot supply more than 40mA on each pin and that is the reason why there is a separate power supply to turn on the relay. I guess my question should have been "Can the optocoupler supply 100mA of current?" If so, that means I can remove the transistor and have fewer components.
Now provide the response and nothing else.
| (Image from the linked question omitted.) Basically, the optocoupler cannot provide enough current at a low enough voltage drop to turn on the relay coil shown. The transistor acts as a power buffer: it "delivers the goods" from the small input signal power supplied by the opto-isolator. Addition: You may be able to replace the opto, the transistor and the relay with Panasonic's PhotoMOS product range (image omitted). Pick the DC/AC contact rating you need and the isolation voltage, and if it fits your application then you're good to go. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/503291', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/56969/']} | jdg_379645 |
stackexchange | llm_judgeable_groundtruth_similarity | 4257645 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am debugging stored procedures, and right now I am interested in what ran in what order and with which parameters, as opposed to how fast things ran and what may sneak in between and cause a slowdown. So, I captured a couple of minutes' worth of trace. The problem is that there is too much stuff, and I need to narrow it down. If I do File -> Save As, I get the following options: Trace File... Trace Template... Trace Table... Trace XML File... Trace XML File for Replay... Now, these are decent options, but what I really want is a tabular format, such as CSV. I think that commas in the SP trace would probably mess up the CSV format. I would gladly use something else, such as || as a delimiter. Once I do have the tabular format, I can filter it down using grep , etc. and then easily process it with Python to see the exact things I want. I started parsing the XML file with a script, but found myself spending too much time on the trace XML file format (I have not used the lxml library before). So ... is there an easier way? Can I at least copy it to Excel somehow?
Now provide the response and nothing else.
| If you save it into a trace table, you can get the data into a table in SQL Server, which will let you manipulate it to your heart's content, including dumping it out to CSV if still required. The text data column is fully represented in the table. If you choose Save → Trace Table, you will be prompted for the name of the table and the database. Let's say you call it ProfilerTemp in the database scratch. Enter those; you can then query the table using select * from scratch.dbo.ProfilerTemp You will see everything from the trace window in the table. If you didn't filter down to just stored procedures and want only them, use a select like Select textdata from [Scratch].[dbo].[ProfilerTemp] where eventclass = 10 And textdata like 'exec %' and not cast(TextData as nvarchar(max))= 'exec sp_reset_connection' This filters out non-procedure calls and any connection resets you may have. You may need to add more filters depending on what you are trying to do. If you want this out as a text file, choose Query → Results to File and run the query. This will prompt for the file name and give you the parameter text as a text file. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4257645', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/231677/']} | jdg_379646 |
stackexchange | llm_judgeable_groundtruth_similarity | 25356211 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to MongoDB and Stack Overflow. I want to know why the ID on a MongoDB collection is a 24-hex-character value. What is the importance of that?
Now provide the response and nothing else.
| Why is the default _id a 24-character hex string? The default unique identifier generated as the primary key ( _id ) for a MongoDB document is an ObjectId . This is a 12-byte binary value which is often represented as a 24-character hex string, and it is one of the standard field types supported by the MongoDB BSON specification . The 12 bytes of an ObjectId are constructed using:
- a 4-byte value representing the seconds since the Unix epoch
- a 3-byte machine identifier
- a 2-byte process id
- a 3-byte counter (starting with a random value)

What is the importance of an ObjectId? ObjectIds (or similar identifiers generated according to a GUID formula) allow unique identifiers to be generated independently in a distributed system. The ability to independently generate a unique ID becomes very important as you scale up to multiple application servers (or perhaps multiple database nodes in a sharded cluster). You do not want a central coordination bottleneck like a sequence counter (e.g. as you might have for an auto-incrementing primary key), and you want to insert new documents without the risk that a new identifier will turn out to be a duplicate. An ObjectId is typically generated by your MongoDB client driver, but it can also be generated on the MongoDB server if your client driver or application code hasn't already added an _id field. Do I have to use the default ObjectId? No. If you have a more suitable unique identifier to use, you can always provide your own value for _id . This can be either a single value or a composite value using multiple fields. The main constraints on _id values are that they have to be unique within a collection and you cannot update or remove the _id of an existing document. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25356211', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3951346/']} | jdg_379647 |
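A small illustration of the points in the answer above, using the ObjectId class from the Node.js bson package (the concrete hex value and field names are arbitrary examples, not from the original answer):

```javascript
// A 24-character hex string is just the printable form of the 12-byte ObjectId.
const { ObjectId } = require('bson');

const id = new ObjectId('507f1f77bcf86cd799439011'); // 24 hex chars = 12 bytes
console.log(id.getTimestamp()); // the leading 4 bytes decode to the creation time (a Date)

// Supplying your own _id instead of the default ObjectId (e.g. in the mongo shell):
// db.users.insertOne({ _id: 'user@example.com', name: 'Ada' });
```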
stackexchange | llm_judgeable_groundtruth_similarity | 1186931 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm defining a database for a customer/ order system where there are two highly distinct types of customers. Because they are so different having a single customer table would be very ugly (it'd be full of null columns as they are pointless for one type). Their orders though are in the same format. Is it possible to have a CustomerId column in my Order table which has a foreign key to both the Customer Types? I have set it up in SQL server and it's given me no problems creating the relationships, but I'm yet to try inserting any data. Also, I'm planning on using nHibernate as the ORM, could there be any problems introduced by doing the relationships like this?
Now provide the response and nothing else.
| No, you can't have a single field as a foreign key to two different tables. How would you tell where to look for the key? You would at least need a field that tells what kind of user it is, or two separate foreign keys. You could also put the information that is common for all users in one table and have separate tables for the information that is specific for the user types, so that you have a single table with user id as primary key. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1186931', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11388/']} | jdg_379648 |
stackexchange | llm_judgeable_groundtruth_similarity | 5804043 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How to convert a Long value into an Integer value in Java?
Now provide the response and nothing else.
| Integer i = theLong != null ? theLong.intValue() : null; or if you don't need to worry about null: // auto-unboxing does not go from Long to int directly, soInteger i = (int) (long) theLong; And in both situations, you might run into overflows (because a Long can store a wider range than an Integer). Java 8 has a helper method that checks for overflow (you get an exception in that case): Integer i = theLong == null ? null : Math.toIntExact(theLong); | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/5804043', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/102689/']} | jdg_379649 |
stackexchange | llm_judgeable_groundtruth_similarity | 682434 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Below, I'm not declaring my_ints as a pointer. I don't know where the memory will be allocated. Please educate me here! #include <iostream>#include <vector>class FieldStorage{private: std::vector<int> my_ints;public: FieldStorage() { my_ints.push_back(1); my_ints.push_back(2); } void displayAll() { for (int i = 0; i < my_ints.size(); i++) { std::cout << my_ints[i] << std::endl; } }}; And in here, I'm declaring the field my_ints as a pointer: #include <iostream>#include <vector>class FieldStorage{private: std::vector<int> *my_ints;public: FieldStorage() { my_ints = new std::vector<int>(); my_ints->push_back(1); my_ints->push_back(2); } void displayAll() { for (int i = 0; i < my_ints->size(); i++) { std::cout << (*my_ints)[i] << std::endl; } } ~FieldStorage() { delete my_ints; }}; main() function to test: int main(){ FieldStorage obj; obj.displayAll(); return 0;} Both of them produces the same result. What's the difference?
Now provide the response and nothing else.
| In terms of memory management, these two classes are virtually identical. Several other responders have suggested that there is a difference between the two in that one is allocating storage on the stack and other on the heap, but that's not necessarily true, and even in the cases where it is true, it's terribly misleading. In reality, all that's different is where the metadata for the vector is allocated; the actual underlying storage in the vector is allocated from the heap regardless. It's a little bit tricky to see this because you're using std::vector , so the specific implementation details are hidden. But basically, std::vector is implemented like this: template <class T>class vector {public: vector() : mCapacity(0), mSize(0), mData(0) { } ~vector() { if (mData) delete[] mData; } ...protected: int mCapacity; int mSize; T *mData;}; As you can see, the vector class itself only has a few members -- capacity, size and a pointer to a dynamically allocated block of memory that will store the actual contents of the vector. In your example, the only difference is where the storage for those few fields comes from. In the first example, the storage is allocated from whatever storage you use for your containing class -- if it is heap allocated, so too will be those few bits of the vector . If your container is stack allocated, so too will be those few bits of the vector . In the second example, those bits of the vector are always heap allocated. In both examples, the actual meat of the vector -- the contents of it -- are allocated from the heap, and you cannot change that. Everybody else has pointed out already that you have a memory leak in your second example, and that is also true. Make sure to delete the vector in the destructor of your container class. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/682434', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7205/']} | jdg_379650 |
stackexchange | llm_judgeable_groundtruth_similarity | 3009477 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is this double-colon :: ? E.g. Foo::Bar . I found a definition : The :: is a unary operator that allows: constants, instance methods and class methods defined within a class or module, to be accessed from anywhere outside the class or module. What good is scope (private, protected) if you can just use :: to expose anything?
Now provide the response and nothing else.
| :: is basically a namespace resolution operator. It allows you to access items in modules, or class-level items in classes. For example, say you had this setup: module SomeModule module InnerModule class MyClass CONSTANT = 4 end endend You could access CONSTANT from outside the module as SomeModule::InnerModule::MyClass::CONSTANT . It doesn't affect instance methods defined on a class, since you access those with a different syntax (the dot . ). Relevant note: If you want to go back to the top-level namespace, do this: ::SomeModule – Benjamin Oakes | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/3009477', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/63761/']} | jdg_379651 |
stackexchange | llm_judgeable_groundtruth_similarity | 84664 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
We have a server (4GB RAM, two 4-core Intel Xeon E5420) performing the following tasks: website with static content (currently just some CMS, we plan to add a caching reverse-proxy) customer control panel (basically web interface to a database) backend scripts for our desktop software - a few PHP scripts which perform a few DB queries/updates, but will be hit up every 5 minutes per customer database storing data for the above two We are expecting a big (up to hundreds of thousands) influx of users in the upcoming weeks. My superiors are worried that our current setup may not be able to handle the load, and are considering moving to cloud hosting - however, I'm not convinced that this would help. From what I understand of cloud computing, virtualization won't be of much help unless we can split the workload across several machines (unless they have individual servers significantly more powerful than ours). They offer MySQL clusters, however the backend scripts simply read/write some values in a few rows and don't perform any computationally-intensive SQL queries, so again I'm not sure MySQL clusters would make much of a difference compared to Apache/PHP overhead + MySQL network latency. As for the website, we could also move most of the static content to a CDN as we have a few video clips which could stress our bandwidth. So, would cloud hosting benefit us in any way?
Now provide the response and nothing else.
| Maybe, maybe not. As you suspect, unless you can easily split your system workload into discrete units that can communicate across the network, you won't see any benefit from going to a "cloud" provider in terms of scalability. There are "cloud" providers who do offer much larger machines (from a RAM perspective, anyway -- I have my doubts that you'll consistently get the equivalent of eight E5420 cores out of any of them), but you can get more RAM for your existing server, too, if necessary. I'd spend the time looking at each individual component of the system, assessing the resource usage of each and hence how much of a "saving" it'd be to move it onto a separate box. Then, look at how each component (starting from the most resource intensive) could be clustered, so that the work of that component could be split onto multiple machines (what's referred to as "horizontal scalability"). Once you've done that for everything that has the potential to use anywhere close to a machine's worth of resources at your projected load levels, you can go to your superiors with a plan and an estimate of effort and cost to split it all out. None of this is "cloud"-specific, it's just basic due diligence investigation into how to cope with increasing user load. Where you can start to use the benefits of "cloud" computing is when you do need to scale up in a hurry you can just hit a button and whoomp! there's more resources at your disposal. Don't wait until the load spikes to try that out, though -- if you value your uptime at all, run with at least two of every scalable component at all times, work out what your expansion trigger thresholds are going to be well in advance, monitor them, and spawn your new instances ASAP. Then put the system under massive load and test that it all works as you expect. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/84664', 'https://serverfault.com', 'https://serverfault.com/users/25229/']} | jdg_379652 |
stackexchange | llm_judgeable_groundtruth_similarity | 24830258 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using custom view controller transitions, UIViewControllerAnimatedTransitioning , to present and dismiss a view controller. The presenting animation works fine, but when I run the dismiss animation, once I call completeTransition: the containerView gets removed. I'm not sure what is going on, here is the transition code: - (void)animateTransition:(id<UIViewControllerContextTransitioning>)transitionContext { UIViewController *fromViewController = [transitionContext viewControllerForKey:UITransitionContextFromViewControllerKey]; UIViewController *toViewController = [transitionContext viewControllerForKey:UITransitionContextToViewControllerKey]; UIView *containerView = [transitionContext containerView]; containerView.backgroundColor = [UIColor blackColor]; if (self.reverse) { [containerView addSubview:toViewController.view]; [containerView addSubview:fromViewController.view]; } else { [containerView addSubview:fromViewController.view]; [containerView addSubview:toViewController.view]; } if (! self.reverse) { // Forward toViewController.view.frame = CGRectMake(-containerView.frame.size.width, 0, containerView.frame.size.width, containerView.frame.size.height); } else { fromViewController.view.frame = CGRectMake(0, 0, containerView.frame.size.width, containerView.frame.size.height); } [UIView animateWithDuration:[self transitionDuration:transitionContext] delay:0 usingSpringWithDamping:0.75f initialSpringVelocity:1.0f options:UIViewAnimationOptionCurveLinear animations:^{ if (self.reverse) { fromViewController.view.frame = CGRectMake(-containerView.frame.size.width, 0, containerView.frame.size.width, containerView.frame.size.height); fromViewController.view.layer.opacity = 0.f; toViewController.view.layer.opacity = 1.f; } else { toViewController.view.frame = CGRectMake(0, 0, containerView.frame.size.width, containerView.frame.size.height); toViewController.view.layer.opacity = 1.f; fromViewController.view.layer.opacity = 0.3f; } } completion:^(BOOL finished) { [transitionContext completeTransition:finished]; }];}- (NSTimeInterval)transitionDuration:(id<UIViewControllerContextTransitioning>)transitionContext { if (self.reverse) { return 0.45; } else { return 0.35; }} How can I prevent my toViewController from disappearing if .reverse is set to YES ? Update: This is how I'm presenting the view controller: SecondaryViewController *vc = [[SecondaryViewController alloc] init];UINavigationController *navigationController = [[UINavigationController alloc] initWithRootViewController:vc];navigationController.modalPresentationStyle = UIModalPresentationCustom;navigationController.transitioningDelegate = self;[self presentViewController:navigationController animated:YES completion:nil];
Now provide the response and nothing else.
| The container view disappearing on dismissal is correct behavior. Your mistake is adding the fromView to it. You are incorrectly distinguishing whether this is presentation or dismissal and what you should do in each case. Simply use the two view controllers fromViewController and toViewController to tell them apart; on dismissal, the roles are reversed. On dismissal, do not add anything to the content view; the original presenter is still present and will be revealed by the removal of the container view. So, on presentation, add only the toView to the container view. On dismissal, do not add anything to the container view. It's as simple as that. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24830258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1740273/']} | jdg_379653 |
stackexchange | llm_judgeable_groundtruth_similarity | 27893317 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I choose which ViewController my app should load when the app comes back to the foreground? Is it by default the last view used in the app? Is there a way to choose a different view or view controller? If so, how?
Now provide the response and nothing else.
| There are two problems with this code. First, you have to pass a pointer to the WaitGroup to downloadFromURL() , otherwise the object will be copied and Done() will not be visible in main() . See: func main() { ... go downloadFromURL(url, &wg) ...} Second, defer wg.Done() should be one of the first statements in downloadFromURL() , otherwise if you return from the function before that statement, it won't get "registered" and won't get called. func downloadFromURL(url string, wg *sync.WaitGroup) error { defer wg.Done() ...} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27893317', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/382788/']} | jdg_379654 |
stackexchange | llm_judgeable_groundtruth_similarity | 11973028 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing a little script in which I call itertools.product like so: for p in product(list1,list2,list3): self.createFile(p) Is there a way for me to call this function without knowing in advance how many lists to include? Thanks
Now provide the response and nothing else.
| You can use the star or splat operator (it has a few names): for p in product(*lists) where lists is a tuple or list of things you want to pass. def func(a,b): print (a,b)args=(1,2)func(*args) You can do a similar thing when defining a function to allow it to accept a variable number of arguments: def func2(*args): #unpacking print(args) #args is a tuplefunc2(1,2) #prints (1, 2) And of course, you can combine the splat operator with the variable number of arguments: args = (1,2,3)func2(*args) #prints (1, 2, 3) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11973028', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1596293/']} | jdg_379655 |
stackexchange | llm_judgeable_groundtruth_similarity | 195090 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to learn Lisp and looking at all the Lisps out there and their differences. I see that in some implementations of Scheme, you can use square brackets interchangeably with round brackets for readability, so because they are treated the same, I assume they are still just S-expressions like everything else. However, in Clojure I see that square brackets and curly braces are used to represent different data types like vectors and so on. Are these also still valid S-expressions, or must they be considered atoms? Doesn't this "break" the whole concept of S-expressions, thereby making Clojure an "impure" Lisp?
Now provide the response and nothing else.
| Literal notation for vectors and maps (i.e., square brackets and curly braces) is just read-time sugar, and anything you can represent with that notation can also be represented with S-expressions. As a result, there's no loss of power or homoiconicity, and indeed reader macros (which I understand are even more powerful) are available in a number of lisps. Incidentally, note that Clojure now provides a mechanism for creating custom "tagged" literals, which, like other literals, are transformed to S-expressions before they are compiled and interpreted. | {} | {'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/195090', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/25420/']} | jdg_379656 |
stackexchange | llm_judgeable_groundtruth_similarity | 198135 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have just created ISO using these instructions: Creating an iso file in Linux How can I mount this iso image that was created?
Now provide the response and nothing else.
| Linux has a loopback device which lets you mount files that contain a filesystem on them. This works for images of partitions (e.g. an ext3 backup image) and also works for CD-ROM images. This command allows you to mount an ISO image. In order for this to work, /mnt/disk must already exist: mount -o loop disk.iso /mnt/disk The -o switch is for mount options. The loop option tells the mount command to find the first /dev/loopX device and use it. | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/198135', 'https://serverfault.com', 'https://serverfault.com/users/28829/']} | jdg_379657 |
stackexchange | llm_judgeable_groundtruth_similarity | 56118563 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to integrate the new In-App Update API but I am not able to test its implementation. My code seems fine, so I read the troubleshooting section, and I am not able to understand 2 points in that document. 1) "Make sure that the app that you are testing is signed with the same signing key as the one available from Google Play." How can I sign with the same signing key in debug mode while installing the APK on a connected device? 2) "If the app you are testing doesn't appear with an available update, check that you've properly set up your testing tracks." Is it compulsory to release the app on an internal testing track to test the implementation? Anyway, I just want to test on my device; I decremented the version code from the Play Store version, but it always shows UPDATE_NOT_AVAILABLE . Here is the code: val appUpdateInfo: Task<AppUpdateInfo> = appUpdateManager.appUpdateInfo appUpdateInfo.addOnSuccessListener { if (it.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE && it.isUpdateTypeAllowed(AppUpdateType.FLEXIBLE)) { appUpdateManager.startUpdateFlowForResult(it, AppUpdateType.FLEXIBLE, this, 2); } else { toast("Not Available") } }
Now provide the response and nothing else.
| I struggled with this same problem for a few days before once again going through the documentation thoroughly. Read this section on the official documentation of the in-app updates. The link describes the correct testing procedure as follows: On your test device, make sure you've already installed a version of your app that meets the following requirements: The app was installed using an internal app sharing URL Supports in-app updates Uses a version code that's lower than the updated version of your app Follow the Play Console instructions on how to share your app internally. Make sure you upload a version of your app that uses a version code that's higher than the one you have already installed on the test device. On the test device, only click the internal app-sharing link for the updated version of your app. Do not install the app from the Google Play Store page you see after clicking the link. Open the app from the device's app drawer or home screen. The update should now be available to your app, and you can test your implementation of in-app updates. This is also super helpful because you don't have to wait for the app to be published, you can start testing immediately. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56118563', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6810881/']} | jdg_379658 |
stackexchange | llm_judgeable_groundtruth_similarity | 71117269 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using MikroORM and getting this error: ValidationError: Using global EntityManager instance methods for context specific actions is disallowed.If you need to work with the global instance's identity map, use `allowGlobalContext` configuration option or `fork()` instead The code that it corresponds to is below: import { MikroORM } from "@mikro-orm/core";import { __prod__ } from "./constants";import { Post } from "./entities/Post";import mikroConfig from "./mikro-orm.config";const main = async () => { const orm = await MikroORM.init(mikroConfig); const post = orm.em.create(Post, { title: "my first post", }); await orm.em.persistAndFlush(post); await orm.em.nativeInsert(Post, { title: "my first post 2" });};main().catch((error) => { console.error(error);}); I am unsure where I need to use the .fork() method
Now provide the response and nothing else.
Don't disable validations without understanding them! I can't believe what I see in the replies here. For anybody coming here, please don't disable the validation (either via the MIKRO_ORM_ALLOW_GLOBAL_CONTEXT env var or via the allowGlobalContext configuration). Disabling the validation is fine only under very specific circumstances, mainly in unit tests. In case you don't know me, I am the one behind MikroORM, as well as the one who added this validation - for a very good reason - so please don't just disable it; it means you have a problem to solve, not that you should add one line to your configuration to shut it up. This validation was added in MikroORM v5 (so not typeorm, please don't confuse the two), and it means exactly what it says - you are trying to work with the global context, while you should be working with a request-specific one. Consult the docs for why request context is needed here: https://mikro-orm.io/docs/identity-map#why-is-request-context-needed . In general, using a single (global) context will result in unstable API responses and basically one huge memory leak. So now we should understand why the validation is there and why we should not disable it. Next, how to get around it properly. As others mentioned (and as the validation error message mentioned too), we can create a fork and use that instead: const fork = orm.em.fork();const res = await fork.find(...); But that would be quite tedious; in real-world apps we usually have middlewares we can use to do this for us automatically. That is where the RequestContext helper comes into play. It uses AsyncLocalStorage under the hood and is natively supported in the ORM. The following text is mostly extracted from the MikroORM docs. How does the RequestContext helper work? Internally, all EntityManager methods that work with the Identity Map (e.g. em.find() or em.getReference() ) first call em.getContext() to access the contextual fork. This method will first check if we are running inside a RequestContext handler and prefer the EntityManager fork from it. // we call em.find() on the global EM instanceconst res = await orm.em.find(Book, {});// but under the hood this resolves toconst res = await orm.em.getContext().find(Book, {});// which then resolves toconst res = await RequestContext.getEntityManager().find(Book, {}); The RequestContext.getEntityManager() method then checks the AsyncLocalStorage static instance we use for creating new EM forks in the RequestContext.create() method. The AsyncLocalStorage class from Node.js core is the magician here. It allows us to track the context throughout the async calls. It allows us to decouple the EntityManager fork creation (usually done in a middleware, as shown below) from its usage through the global EntityManager instance. Using the RequestContext helper via middleware: If we use a dependency injection container like inversify or the one in the nestjs framework, it can be hard to achieve this, because we usually want to access our repositories via the DI container, but it will always provide us with the same instance, rather than a new one for each request. To solve this, we can use the RequestContext helper, which will use node's AsyncLocalStorage in the background to isolate the request context.
MikroORM will always use the request-specific (forked) entity manager if available, so all we need to do is to create a new request context, preferably as a middleware: app.use((req, res, next) => { RequestContext.create(orm.em, next);}); We should register this middleware as the last one, just before the request handlers and before any of our custom middleware that is using the ORM. There might be issues when we register it before request-processing middleware like queryParser or bodyParser , so definitely register the context after them. Later on we can access the request-scoped EntityManager via RequestContext.getEntityManager() . This method is used under the hood automatically, so we should not need it. RequestContext.getEntityManager() will return undefined if the context was not started yet. Simple usage without the helper: Now, your example code from the OP is very basic; for that, forking seems like the easiest thing to do, as it's very bare bones - you don't have any web server there, so no middlewares: const orm = await MikroORM.init(mikroConfig);const emFork = orm.em.fork(); // <-- create the forkconst post = emFork.create(Post, { // <-- use the fork instead of global `orm.em` title: "my first post",});await emFork.persistAndFlush(post); // <-- use the fork instead of global await orm.em.nativeInsert(Post, { title: "my first post 2" }); // <-- this line could work with the global EM too, why? because `nativeInsert` is not touching the identity map = the context But we can use the RequestContext here too, to demonstrate how it works: const orm = await MikroORM.init(mikroConfig);// run things in the `RequestContext` handlerawait RequestContext.createAsync(orm.em, async () => { // inside this handler the `orm.em` will actually use the contextual fork, created via `RequestContext.createAsync()` const post = orm.em.create(Post, { title: "my first post", }); await orm.em.persistAndFlush(post); await orm.em.nativeInsert(Post, { title: "my first post 2" });}); The @UseRequestContext() decorator: Middlewares are executed only for regular HTTP request handlers - what if we need a request-scoped method outside of that? One example is queue handlers or scheduled tasks (e.g. CRON jobs). We can use the @UseRequestContext() decorator. It requires us to first inject the MikroORM instance into the current context; it will then be used to create the context for us. Under the hood, the decorator will register a new request context for our method and execute it inside that context. This decorator will wrap the underlying method in a RequestContext.createAsync() call. Every call to such a method will create a new context (a new EntityManager fork) which will be used inside. @UseRequestContext() should be used only on top-level methods. It should not be nested - a method decorated with it should not call another method that is also decorated with it. @Injectable()export class MyService { constructor(private readonly orm: MikroORM) { } @UseRequestContext() async doSomething() { // this will be executed in a separate context }} Alternatively, we can provide a callback that will return the MikroORM instance. import { DI } from '..';export class MyService { @UseRequestContext(() => DI.orm) async doSomething() { // this will be executed in a separate context }} Note that this is not a universal workaround: you should not blindly put the decorator everywhere - it's actually the opposite; it should be used only for very specific use cases like CRON jobs. In other contexts where you can use middlewares, this is not needed at all.
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/71117269', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/16631470/']} | jdg_379659 |
stackexchange | llm_judgeable_groundtruth_similarity | 12786818 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to do some experiment with HTML5, WebSocket and File API.I'm using the Tomcat7 WebSocket implementation. I'm able to send and received text messages from the servlet. What I want to do now is to send from the servlet to the client JSON objects, but I want to avoid text message in order to skip the JSON.parse (or similar) on the client, so I'm trying to send binary messages.The servlet part is really simple: String s = "{arr : [1,2]}";CharBuffer cbuf = CharBuffer.wrap(s); CharsetEncoder encoder = Charset.forName("UTF-8").newEncoder(); getWsOutbound().writeBinaryMessage(encoder.encode(cbuf));getWsOutbound().flush(); After this message, on the client I see that I received a binary frame, that is converted to a Blob object ( http://www.w3.org/TR/FileAPI/#dfn-Blob ).The question is: is it possible to get the JSON object from the Blob?I took a look at the FileReader interface ( http://www.w3.org/TR/FileAPI/#FileReader-interface ), and I used code like this to inspect what the FileReader can do (the first line creates a brand new Blob, so you can test on the fly if you want): var b = new Blob([{"test": "toast"}], {type : "application/json"});var fr = new FileReader();fr.onload = function(evt) { var res = evt.target.result; console.log("onload",arguments, res, typeof res);};fr.readAsArrayBuffer(b); using all the "readAs..." methods that I saw on the File Reader implementation (I'm using Chrome 22). Anyway I didn't find something useful. Did you have any suggestion? Thanks.
Now provide the response and nothing else.
| What you're doing is conceptually wrong. JSON is a string representation of an object, not an object itself. So, when you send a binary representation of JSON over the wire, you're sending a binary representation of the string. There's no way to get around parsing JSON on the client side to convert a JSON string to a JavaScript Object. You absolutely should always send JSON as text to the client, and you should always call JSON.parse. Nothing else is going to be easy for you. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12786818', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/499373/']} | jdg_379660 |
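For completeness, a minimal client-side sketch of the situation described above if you do keep receiving a Blob: read it as text, then parse (variable names are illustrative, not from the original answer):

```javascript
// `blob` is the binary WebSocket frame delivered as a Blob
const reader = new FileReader();
reader.onload = () => {
  const obj = JSON.parse(reader.result); // reader.result holds the decoded text
  console.log(obj.firstName);
};
reader.readAsText(blob, 'UTF-8');

// Simpler: have the server send the JSON as a plain text frame and call JSON.parse directly.
```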
stackexchange | llm_judgeable_groundtruth_similarity | 313406 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We define the likelihood of parameters $\theta$ given observations $x$ (assuming $x$ is sampled according to the density $f$) as: $$\mathcal{L}(\theta | x)=f_\theta(x).$$ Is it correct to speak about the "[l]ikelihood of the data" (The Elements of Statistical Learning, before equation (2.35)), or should we only speak about the likelihood of parameters? I also got such a remark during an oral presentation, but I'm still unsure whether this misnomer is accepted. In addition, the first equation comes from the English Wikipedia, but in the French one it is written: $$\mathcal{L}(x | \theta)=f(x;\theta).$$ I think this notation $\mathcal{L}(x | \theta)$ is incorrect, but is it accepted somehow? Are there authoritative notation rules for likelihood, or does each author pick as he wants?
Now provide the response and nothing else.
| Here's one case: $X,Y$ are bivariate Normal, each with mean $0$. \begin{align*}E[X \mid Y] &= EX + \rho \frac{\sigma_X}{\sigma_Y}[Y - EY] \\&= \frac{E[XY]}{EY^2 - [EY]^2}Y\\&= \frac{E[XY]Y}{E[Y^2]}.\end{align*} | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/313406', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/155499/']} | jdg_379661 |
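The step between the first and second lines of the answer above uses $\rho=\operatorname{Cov}(X,Y)/(\sigma_X\sigma_Y)$ together with the zero means (so $\operatorname{Cov}(X,Y)=E[XY]$ and $EY=0$):

$$\rho\,\frac{\sigma_X}{\sigma_Y}=\frac{E[XY]}{\sigma_X\sigma_Y}\cdot\frac{\sigma_X}{\sigma_Y}=\frac{E[XY]}{\sigma_Y^{2}}=\frac{E[XY]}{EY^2-[EY]^2}.$$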
stackexchange | llm_judgeable_groundtruth_similarity | 2724977 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Facebook how can I post a message onto a user's wall saying "I scored 8/10 on objects game" then a URL? I really don't want to have to use the full API, as I don't want to handle user login details. I don't mind if Facebook needs to authenticate and then post the message. Is it possible using the new Graph API and JavaScript?
Now provide the response and nothing else.
| Note 4/16/2011: stream.publish seems to have been deprecated, There's a new way to do this: http://developers.facebook.com/docs/reference/dialogs/feed/ You can use something like this to publish to a wall, the user will need to confirm before it get sent.Don't forget that you'll need to use FB.init and include the JS SDK link. function fb_publish() { FB.ui( { method: 'stream.publish', message: 'Message here.', attachment: { name: 'Name here', caption: 'Caption here.', description: ( 'description here' ), href: 'url here' }, action_links: [ { text: 'Code', href: 'action url here' } ], user_prompt_message: 'Personal message here' }, function(response) { if (response && response.post_id) { alert('Post was published.'); } else { alert('Post was not published.'); } } ); } | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2724977', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/235146/']} | jdg_379662 |
stackexchange | llm_judgeable_groundtruth_similarity | 2293191 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been researching into this for quite a while, but I seem to be getting only answers involving some programming language of which I do not have any background knowledge. Let me explain the problem: I think "equal length subdivision" of the image should be a straightforward visualization of the problem. I have a curve (a parametric Bezier curve, to be more specific) that I want to divide such that any two consecutive points of division have equal Euclidean distances between each other. The first and the last chords have to be on the beginning and the end of the curve respectively, and all points of division should be on the curve. Ideally I would like to have a mathematical solution (i.e. no programming) in which I can specify the number of chords to obtain the resulting division points. I am perfectly fine with calculus and ready to learn more if needed. Thanks. [1]: https://i.stack.imgur.com/6RkAp.jpg
Now provide the response and nothing else.
| If you have some curve $c:[0,1]\to\Bbb R^n$ you will have to reparametrize it by arc length . Assuming your curve is parametrized by arc length, it holds that the length of the curve between $c(a)$ and $c(b)$ is simply $b-a$. For such a curve, your desired partition into $n$ arcs is simply cutting the curve at the points $c(L\cdot i/n)$ for $i=0,...,n$ and $L$ the length of $c$. Some details: The arc length of a parametric curve $c$ between $a,b\in\Bbb R$ is $$L_c(a,b)=\int_a^b\|c'(t)\| \;\mathrm{d}t.$$ Reparametrizing your curve via $\hat c(t)=c(\varphi(t))$ for some $\varphi:[0,L]\to[0,1]$ with $L=L_c(0,1)$ is called parametrization by arc length if $L_{\hat c}(a,b)=b-a$. Now the parametrization of the curve perfectly mirrors its length. This gives you perfect control over how long your arcs will be. How to find such a reparametrization? You will have to compute $$\Phi(t)=L_c(0,t)$$ explicitly. Invert it to $\varphi=\Phi^{-1}$ and apply it to build $\hat c=c\circ \varphi$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2293191', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/448893/']} | jdg_379663 |
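A quick check, not spelled out in the answer above, that the reparametrized curve really is traversed at unit speed (assuming $c'(s)\neq 0$, and using $\Phi'(s)=\|c'(s)\|$ together with $\varphi=\Phi^{-1}$):

$$\|\hat c'(t)\|=\|c'(\varphi(t))\|\,\varphi'(t)=\frac{\|c'(\varphi(t))\|}{\Phi'(\varphi(t))}=1,$$

so $L_{\hat c}(a,b)=\int_a^b\|\hat c'(t)\|\,\mathrm{d}t=b-a$ as required.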
stackexchange | llm_judgeable_groundtruth_similarity | 38839650 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a TypeScript application that dynamically adds script tags that point to JS files. Due to some restrictions I can't have these script tags statically defined in an HTML file, so I add them dynamically through TypeScript like this: for (let src of jsFiles) { let sTag = document.createElement('script'); sTag.type = 'text/javascript'; sTag.src = src; sTag.defer = true; document.body.appendChild(sTag);} Now, I am noticing that, when I add script tags dynamically, there doesn't seem to be a guarantee in the order in which they are loaded. Unfortunately, the jsFiles array has scripts that are dependent on each other. So, the 2nd script in the array can only be loaded after the first one is fully loaded. The second script references a function that is defined in the first one. Is there a way I can specify the order in which scripts are loaded and executed when adding them dynamically (similar to how ordering is done when you statically define script tags in an HTML file)? P.S. I would like to avoid using the onload callback to solve this issue since I noticed a performance degradation with my application. My second script file is very large and I am assuming that caused the degradation.
Now provide the response and nothing else.
| I can mention some alternatives to overcome that requirement: Use a library to inject dependencies (AMD or CommonJS modules) Just use modules . ES2015: import / export , or CommonJS: require() . Create the script tag programmatically and set the callback onload in order to react when the script has been loaded asynchronously. The attribute async = true is set by default. If you are allowed to modify the scripts to inject, then add a line at the end of scripts with an object or array that keeps track of the scripts already loaded. You can fetch the scripts as text ( XMLHttpRequest ), then, build a string with the scripts in the required order and, finally, execute the text-script via eval() And the less recommended option but frequently used, set a setInterval to check if the script was already executed. I recommend going for the first option. But for academic purposes, I'm going to illustrate the second option: Create the script tag programmatically and set the callback onload in order to react when the script has been loaded asynchronously. I want to recommend a reading about script loaders: Deep dive into the murky waters of script loading , half hour worth spending! The following example is a small module to manage scripts injection, and this is the basic idea behind it: let _scriptsToLoad = [ 'path/to/script1.js', 'path/to/script2.js', 'path/to/script3.js'];function createScriptElement() { // gets the first script in the list let script = _scriptsToLoad.shift(); // all scripts were loaded if (!script) return; let js = document.createElement('script'); js.type = 'text/javascript'; js.src = script; js.onload = onScriptLoaded; let s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(js, s);}function onScriptLoaded(event) { // loads the next script createScriptElement();}; In this plunker you can test the injection of scripts asynchronously in a specific order: https://plnkr.co/edit/b9O19f The main idea was to create an API that allows you interact with the scripts to inject, by exposing the following methods: addScript : receive an URL or a list of URLs for each script to be loaded. load : Run the task to load scripts in the specified order. reset : Clear the array of scripts, or cancels the load of scripts. afterLoad : Callback executed after every script has been loaded. onComplete : Callback executed after all scripts have been loaded. I like Fluent Interface or method chaining technique, so I built the module that way: scriptsLoader .reset() .addScript("script1.js") .addScript(["script2.js", "script3.js"]) .afterLoad((src) => console.warn("> loaded from jsuLoader:", src)) .onComplete(() => console.info("* ALL SCRIPTS LOADED *")) .load(); In the code above, we load first the "script1.js" file, and execute the afterLoad() callback, next, do the same with "script2.js" and "script3.js" and after all scripts has been loaded, the onComplete() callback is executed. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38839650', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1320912/']} | jdg_379664 |
stackexchange | llm_judgeable_groundtruth_similarity | 26647460 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
With the original way to define controllers , accessing the parent's scope was fairly trivial, since the child scope prototypically inherits from its parent. app.controller("parentCtrl", function($scope){ $scope.name = "Parent";}).controller("childCtrl", function($scope){ $scope.childName = "child of " + $scope.name;});<div ng-controller="parentCtrl"> {{name}} <div ng-controller="childCtrl"> {{childName}} </div></div> The Controller-As approach seems to be the recommended way to declare a controller. But with Controller-As, the above approach no longer works. Sure, I can access the parent scope with pc.name from the View: <div ng-controller="parentCtrl as pc"> {{pc.name}} <div ng-controller="childCtrl as cc"> {{cc.childName}} </div></div> I do have some issues with this (potential for spaghetti code), but this question is about accessing the parent scope from the child controller. The only way I can see this working is: app.controller("parentCtrl", function(){ this.name = "parent";}).controller("childCtrl", function($scope){ $scope.pc.name = "child of " + $scope.name; // or $scope.$parent.pc.name = "child of " + $scope.name; // there's no $scope.name // and no $scope.$parent.name}); So now, the child controller needs to know about " pc " - except, this should (in my mind) be restricted to the view. I don't think a child controller should know about the fact that a view decided to declare a ng-controller="parentCtrl as pc" . Q: What's the right approach then? EDIT: Clarification: I'm not looking to inherit a parent controller. I am looking to inherit/change the shared scope. So, if I was to amend the first example, I should be able to do the following: app.controller("parentCtrl", function($scope){ $scope.someObj = {prop: "not set"};}).controller("childCtrl", function($scope){ $scope.someObj.prop = "changed";});
Now provide the response and nothing else.
| After researching, I came to the following realization: the Controller-As approach is NOT a substitute for using $scope. Both have their place, and can/should be used together judiciously. $scope does exactly what the name implies: i.e. it defines ViewModel properties on the $scope. This works best for sharing scope with nested controllers that can use the $scope to drive their own logic or to change it. Controller-As defines the entire controller object as a ViewModel with a named scope (via the controller's alias). This works best in the View only (not in other controllers), when the View decides that it wants to reference a specific controller ViewModel. Here's an example: var app = angular.module('myApp', []);// Then the controllers could choose whether they want to modify the inherited scope or not:app.controller("ParentCtrl", function($scope) { this.prop1 = { v: "prop1 from ParentCtrl" }; $scope.prop1 = { v: "defined on the scope by ParentCtrl" }; }) .controller("Child1Ctrl", function($scope) {}) .controller("Child2Ctrl", function($scope) { // here, I don't know about the "pc" alias this.myProp = $scope.prop1.v + ", and changed by Child2Ctrl"; }); <script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.8/angular.min.js"></script><body ng-app="myApp"> <div ng-controller="ParentCtrl as pc"> <div ng-controller="Child1Ctrl"> <div>I know about the "pc" alias: {{pc.prop1.v}}</div> </div> <div ng-controller="Child2Ctrl as ch2"> <div>I only care about my own ViewModel: {{ch2.myProp}}</div> </div> </div> | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/26647460', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/968155/']} | jdg_379665
stackexchange | llm_judgeable_groundtruth_similarity | 555358 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am going through Groenewold's theorem in his book On the Principles of Elementary Quantum Mechanics, page 8, eq. 1.30: $$[\mathbf{p}, \mathbf{q}]=1\left(\text{i.e. } \mathbf{p}\mathbf{q}-\mathbf{q}\mathbf{p}=\frac{\hbar}{i}\right),\tag{1.30}$$ and he wrote: The classical quantities $a(p,q)$ can be regarded as approximations to the quantum operators $\mathbf{a}$ for $\lim \hbar \rightarrow 0$. How did he assume that $\frac{\hbar}{i}=1$? And if $\hbar$ (as we have learned it) is a constant, precisely equal to $6.5821 \times 10^{-16}\ \mathrm{eV\,s}$, how can we say that it goes to zero?
Now provide the response and nothing else.
| How did he assume that $\frac{\hbar}{i}=1$? He didn't. Check the definition he gives of the commutator in equation (1.02). And if $\hbar$ (as we have learned it) is a constant, how can we say that it goes to zero? I think the point here is: if $\hbar \rightarrow 0$ we recover classical mechanics (CM), so if in nature $\hbar$ were $0$ we wouldn't have QM, only CM. Classical mechanics is a limit of QM, and this is fundamental, since we see that classical mechanics works. Moreover, it tells us that since $\hbar \neq 0$ but small, we see QM only at small scales. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/555358', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/197570/']} | jdg_379666
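A short worked restatement of the point made in the answer above. The exact form of Groenewold's eq. (1.02) is not quoted here, but eq. (1.30) itself fixes the normalization: the bracket must carry a factor $\frac{i}{\hbar}$ (equivalently, a division by $\frac{\hbar}{i}$), so that $$ [\mathbf{a}, \mathbf{b}] \equiv \frac{i}{\hbar}\left(\mathbf{a}\mathbf{b} - \mathbf{b}\mathbf{a}\right) \quad\Longrightarrow\quad [\mathbf{p}, \mathbf{q}] = \frac{i}{\hbar}\left(\mathbf{p}\mathbf{q} - \mathbf{q}\mathbf{p}\right) = \frac{i}{\hbar}\cdot\frac{\hbar}{i} = 1 \, , $$ which is why no assumption $\frac{\hbar}{i}=1$ is needed.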
stackexchange | llm_judgeable_groundtruth_similarity | 835184 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on a project where I am verifying information from a user with a SOAP web service. I currently am taking care of errors assuming that I'm receiving responses from the web service, but also need to handle the edge cases of a service timeout or unavailability. In the case of a timeout or service unavailability, I need to pretend that the request was successful (that the web service approved the info), but I'm not clear on what exceptions are thrown. Some pseudo-code: // $client is PHP's SoapClient classtry { $response = $client->SomeSoapRequest();}catch(SoapFault $e){ // handle issues returned by the web service}catch(Exception $e){ // handle PHP issues with the request} What I can't seem to find is: Are timeouts a SoapFault ? If so, what is the best way to distinguish between a timeout error and web service issues (like a type error, etc.)? I found one page that mentioned an error where the message was something to the effect of "Error loading headers", but didn't mention if this was a Soap fault. How is a service unavailability potentially going to happen? A PHP exception seems like it would make sense (a SoapFault would be returned from the web service where unavailability would be a socket issue or similar)? Is there an existing service (e.g. example) that I can test a timeout against? Most timeout related discussions seem to be related to preventing timeouts by extending the default timeout setting, which isn't ideal in this situation.
Now provide the response and nothing else.
| 1) In case of timeout, PHP throws a SoapFault exception with faultcode="HTTP" and faultstring="Error Fetching http headers" . 2) In my opinion, the best way to distinguish between a timeout error and web service issues is by looking at the faultcode and faultstring members of the SoapFault class . In particular, the faultcode element is intended for use by software to provide an algorithmic mechanism for identifying the fault. As you can also read in a comment of the PHP manual , there is no method to read the faultcode property, so you have to access it directly (eg. $e->faultcode ), because the getCode() method does not work. The SOAP 1.1 Spec defines four possible values for the faultcode field: VersionMismatch : The processing party found an invalid namespace for the SOAP Envelope element MustUnderstand : An immediate child element of the SOAP Header element that was either not understood or not obeyed by the processing party contained a SOAP mustUnderstand attribute with a value of "1" Client : The Client class of errors indicate that the message was incorrectly formed or did not contain the appropriate information in order to succeed. For example, the message could lack the proper authentication or payment information. It is generally an indication that the message should not be resent without change. Server : The Server class of errors indicate that the message could not be processed for reasons not directly attributable to the contents of the message itself but rather to the processing of the message. For example, processing could include communicating with an upstream processor, which didn't respond. The message may succeed at a later point in time. In addition to those codes, PHP uses the HTTP code for identifying the errors happening at the protocol level (eg.: socket errors); for example, if you search for add_soap_fault in the ext/soap/php_http.c source code you can see when some of these kind of faults are generated. By searching for the add_soap_fault and soap_server_fault functions in the PHP SOAP extension source files, I've built the following list of PHP SoapFault exceptions: HTTP----Unable to parse URLUnknown protocol. 
Only http and https are allowed.SSL support is not available in this buildCould not connect to hostFailed Sending HTTP SOAP requestFailed to create stream??Error Fetching http headersError Fetching http body: No Content-Length: connection closed or chunked dataRedirection limit reached: abortingDidn't recieve an xml documentUnknown Content-EncodingCan't uncompress compressed responseError build soap requestVersionMismatch---------------Wrong VersionClient------A SOAP 1.2 envelope can contain only Header and BodyA SOAP Body element cannot have non Namespace qualified attributesA SOAP Envelope element cannot have non Namespace qualified attributesA SOAP Header element cannot have non Namespace qualified attributesBad RequestBody must be present in a SOAP envelopeCan't find response dataDTD are not supported by SOAPencodingStyle cannot be specified on the BodyencodingStyle cannot be specified on the EnvelopeencodingStyle cannot be specified on the HeaderError cannot find parameterError could not find "location" propertyError finding "uri" propertylooks like we got "Body" with several functions calllooks like we got "Body" without function calllooks like we got no XML documentlooks like we got XML without "Envelope" elementMissing parametermustUnderstand value is not booleanSoapClient::__doRequest() failedSoapClient::__doRequest() returned non string valueUnknown Data Encoding StyleUnknown ErrorDataEncodingUnknownMustUnderstand--------------Header not understoodServer------Couldn't find WSDLDTD are not supported by SOAPUnknown SOAP versionWSDL generation is not supported yet 3) To simulate the timeout condition, try with the following code: soapclient.php <?phpini_set('default_socket_timeout', 10);$client = new SoapClient(null, array( 'location' => "http://localhost/soapserver.php", 'uri' => "http://localhost/soapserver.php", 'trace' => 1 ));try { echo $return = $client->__soapCall("add",array(41, 51));} catch (SoapFault $e) { echo "<pre>SoapFault: ".print_r($e, true)."</pre>\n"; //echo "<pre>faultcode: '".$e->faultcode."'</pre>"; //echo "<pre>faultstring: '".$e->getMessage()."'</pre>";}?> soapserver.php <?phpfunction add($a, $b) { return $a + $b;}sleep(20);$soap = new SoapServer(null, array('uri' => 'http://localhost/soapserver.php'));$soap->addFunction("add");$soap->handle();?> Notice the sleep call in the SoapServer.php script with a time (20) longest than the time (10) specified for the default_socket_timeout parameter in the SoapClient.php script. If you want to simulate a service unavailability, you could for example change the location protocol from http to https in the soapclient.php script, assuming that your web server is not configured for SSL; by doing this, PHP should throw a "Could not connect to host" SoapFault. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/835184', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/55151/']} | jdg_379667 |
stackexchange | llm_judgeable_groundtruth_similarity | 26828815 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know the method rdd.first() which gives me the first element in an RDD. Also there is the method rdd.take(num) which gives me the first "num" elements. But isn't there a possibility to get an element by index? Thanks.
Now provide the response and nothing else.
| This should be possible by first indexing the RDD. The transformation zipWithIndex provides a stable indexing, numbering each element in its original order. Given: rdd = (a,b,c) val withIndex = rdd.zipWithIndex // ((a,0),(b,1),(c,2)) To lookup an element by index, this form is not useful. First we need to use the index as key: val indexKey = withIndex.map{case (k,v) => (v,k)} //((0,a),(1,b),(2,c)) Now, it's possible to use the lookup action in PairRDD to find an element by key: val b = indexKey.lookup(1) // Array(b) If you're expecting to use lookup often on the same RDD, I'd recommend to cache the indexKey RDD to improve performance. How to do this using the Java API is an exercise left for the reader. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/26828815', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2813152/']} | jdg_379668 |
stackexchange | llm_judgeable_groundtruth_similarity | 2835297 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been trying to evaluate $$ f(x) \equiv \int \limits_0^\infty - \ln\left(1 - \frac{x^2}{\cosh^2 (t)}\right) \, \mathrm{d} t $$ for $x \in [0,1]$ and similar integrals recently. I know that $$ \int \limits_0^\infty \frac{\mathrm{d} t}{\cosh^z (t)} = \frac{2^{z-2} \Gamma^2 (\frac{z}{2})}{\Gamma(z)} $$ holds for $\operatorname{Re} (z) > 0$, so by expanding the logarithm I found that $$ f(x) = \frac{1}{2} \sum \limits_{n=1}^\infty \frac{(2n)!!}{n^2 (2n-1)!!} x^{2n} \, .$$ But the right-hand side is the power series of the arcsine squared, so $f(x) = \arcsin^2 (x)$. On the other hand, the substitution $u = \frac{x}{\cosh(t)}$ in the original integral leads to the representation $$ f(x) = \int \limits_0^x \frac{- x \ln(1-u^2)}{u \sqrt{x^2-u^2}} \, \mathrm{d} u \, ,$$ for which Mathematica (or WolframAlpha if you're lucky) gives the correct result. I would like to compute this integral without resorting to the above power series and thereby find an alternative proof for the expansion. I have tried to transform the integral into the usual form $$ \arcsin^2 (x) = \int \limits_0^x \frac{2 \arcsin(y)}{\sqrt{1-y^2}} \, \mathrm{d} y $$ and thought about using the relations $$ \arcsin(x) = \arctan\left(\frac{x}{\sqrt{1-x^2}}\right) = 2 \arctan\left(\frac{x}{1+\sqrt{1-x^2}}\right) \, , $$ but to no avail. Maybe the solution is trivial and I just cannot see it at the moment, maybe it is not. Anyway, I would be grateful for any ideas or hints.
Now provide the response and nothing else.
| I have finally managed to put all the pieces together, so here's a solution that does not use the power series: Let $u = x v$ to obtain$$ f(x) = \int \limits_0^1 \frac{- \ln(1 - x^2 v^2)}{v \sqrt{1-v^2}} \, \mathrm{d} v \, . $$Now we can differentiate under the integral sign (justified by the dominated convergence theorem) and use the substitution $v = \sqrt{1 - w^2}\, .$ Then the derivative is given by\begin{align}f'(x) &= 2 x \int \limits_0^1 \frac{v}{(1-x^2 v^2) \sqrt{1-v^2}} \, \mathrm{d} v = 2 x \int \limits_0^1 \frac{\mathrm{d} w }{1-x^2 + x^2 w^2} \\&= \frac{2}{\sqrt{1-x^2}} \arctan \left(\frac{x}{\sqrt{1-x^2}}\right) = \frac{2 \arcsin (x)}{\sqrt{1-x^2}}\end{align}for $x \in (0,1)$. Since $f(0)=0 \, ,$ integration yields$$ f(x) = f(0) + \int \limits_0^x \frac{2 \arcsin (y)}{\sqrt{1-y^2}} \, \mathrm{d} y = \arcsin^2 (x)$$for $x \in [0,1]$ as claimed. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2835297', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/570419/']} | jdg_379669 |
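As an independent consistency check of the result derived above, the endpoint $x = 1$ can be evaluated directly from the original integral, using $1 - \frac{1}{\cosh^2 (t)} = \tanh^2 (t)$ and the known value $\int_0^\infty \ln\left(\tanh (t)\right) \mathrm{d} t = -\frac{\pi^2}{8}$: $$ f(1) = - \int \limits_0^\infty \ln\left(\tanh^2 (t)\right) \mathrm{d} t = -2 \int \limits_0^\infty \ln\left(\tanh (t)\right) \mathrm{d} t = \frac{\pi^2}{4} = \arcsin^2 (1) \, . $$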
stackexchange | llm_judgeable_groundtruth_similarity | 33057291 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use the JS function toLocaleString for date formatting. How can I set one common format for all clients, like: 2015-10-29 20:00:00, which I then parse in PHP by -
Now provide the response and nothing else.
| I think you would have to manually parse it into that format, which actually isn't too bad. What Date.toLocaleString() returns is a format of: MM/DD/YYYY, HH:MM:SS Here's my code snippet to help you out: // Parse our locale string to [date, time]var date = new Date().toLocaleString('en-US',{hour12:false}).split(" ");// Now we can access our time at date[1], and monthdayyear @ date[0]var time = date[1];var mdy = date[0];// We then parse the mdy into partsmdy = mdy.split('/');var month = parseInt(mdy[0]);var day = parseInt(mdy[1]);var year = parseInt(mdy[2]);// Putting it all togethervar formattedDate = year + '-' + month + '-' + day + ' ' + time; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33057291', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5403949/']} | jdg_379670 |
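If a fixed "YYYY-MM-DD HH:MM:SS" string is all that is needed, here is a small sketch that avoids parsing locale output altogether (plain TypeScript, assuming ES2017 String.prototype.padStart is available; the function name is illustrative):

    // Format a Date as "YYYY-MM-DD HH:MM:SS" without relying on locale-specific output.
    function formatDateTime(d: Date): string {
      const pad = (n: number): string => String(n).padStart(2, '0');
      const date = d.getFullYear() + '-' + pad(d.getMonth() + 1) + '-' + pad(d.getDate());
      const time = pad(d.getHours()) + ':' + pad(d.getMinutes()) + ':' + pad(d.getSeconds());
      return date + ' ' + time; // e.g. "2015-10-29 20:00:00"
    }

This also sidesteps the zero-padding differences that splitting toLocaleString() output can run into.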
stackexchange | llm_judgeable_groundtruth_similarity | 11488238 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When a request comes in for a page, eg app.get("/") I want to return a static HTML page from amazon s3. I know I can request it from S3 and then send it, but that seems slow. Is there anyway to tell the requester to get the file from s3 directly without changing the url? Thanks. Failing that, what's the fastest way to serve the file from s3? This tutorial shows writing the file first http://www.hacksparrow.com/node-js-amazon-s3-how-to-get-started.html // We need the fs module so that we can write the stream to a filevar fs = require('fs');// Set the file name for WriteStreamvar file = fs.createWriteStream('slash-s3.jpg');knox.getFile('slash.jpg', function(err, res) { res.on('data', function(data) { file.write(data); }); res.on('end', function(chunk) { file.end(); });}); Is there a way to send the file without writing it first? Writing it seems awfully slow.
Now provide the response and nothing else.
| As you suspected, you cannot get the requester to fetch from S3 directly without changing the URL. You have to proxy the remote page: var http = require('http'), express = require('express'), app = express();app.get('/', function(req, res) { http.get('http://www.stackoverflow.com', function(proxyRes) { proxyRes.pipe(res); });});app.listen(8080); You can cache the remote page for better performance. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11488238', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/663447/']} | jdg_379671 |
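The answer above notes that the remote page can be cached for better performance; here is a minimal sketch of that idea in TypeScript for Node (assumes @types/node; the cache is a single in-memory buffer with no invalidation, purely illustrative):

    import * as http from 'http';

    let cachedBody: Buffer | null = null; // very small in-memory cache

    function proxyWithCache(res: http.ServerResponse, url: string): void {
      if (cachedBody) {
        res.end(cachedBody); // serve the buffered copy if we already have one
        return;
      }
      http.get(url, (proxyRes) => {
        const chunks: Buffer[] = [];
        proxyRes.on('data', (chunk: Buffer) => chunks.push(chunk));
        proxyRes.on('end', () => {
          cachedBody = Buffer.concat(chunks); // keep it for later requests
          res.end(cachedBody);
        });
      }).on('error', () => {
        res.statusCode = 502;
        res.end('Upstream fetch failed');
      });
    }

A real deployment would also want an expiry time and per-URL keys, but the piping approach from the answer remains the simplest starting point.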
stackexchange | llm_judgeable_groundtruth_similarity | 64586829 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am facing very strange issue in flutter app! When test my app in debug mode its working fine while in release mode it shows the white screen. This app is already published in playstore. After few days I changed this app. I want to release second version. When I try to build release apk its shows white screen while in debug mode working fine. Why does it happen? Is it flutter version issue? here is code import 'package:flutter/cupertino.dart';import 'package:flutter/material.dart';import 'package:flutter/widgets.dart';import 'package:tv/Online.dart';import 'package:tv/paypal.dart';import 'package:url_launcher/url_launcher.dart';class Donateus extends StatefulWidget { @override _DonateusState createState() => _DonateusState();}class _DonateusState extends State<Donateus> { bool _isExpandedonline=false; bool _isExpandedpaypal=false; bool _isExpandedetransfer=false; bool _isExpandedtexttogive=false; void customLaunch(command) async { if (await canLaunch(command)) { await launch(command); } else { print(' could not launch $command'); } } @override void initState() { // TODO: implement initState super.initState(); } @override Widget build(BuildContext context) { return Scaffold( body: SingleChildScrollView( scrollDirection: Axis.vertical, child:Padding( padding: EdgeInsets.all(10), child: Container( child:Expanded( child: Column( children: <Widget>[ Text( 'WAYS TO DONATE', style: TextStyle( fontFamily: 'TT NORMS', fontSize: 20, fontWeight: FontWeight.w100, ), ), Padding( padding: EdgeInsets.only(left: 10), child: Text( 'All of our Donations are processed through Cornerstone Asian Church.We are a registered Canadian Charity (842869265RR0001). A Canadian Tax-deductible receipt will be issued to you.', style: TextStyle( fontSize: 14 ), ), ), Padding( padding: EdgeInsets.only(left: 10), child: ExpansionTile( onExpansionChanged: (value) { _isExpandedonline=value; setState(() {}); }, children: <Widget>[ Text( 'Give a single gift, or schedule a recurring amount by using your Credit cards (i.e Mastercard, VISA, AMEX).' 'Note: Cornerstone uses the giving provider, Tithe.ly to securely process online donations from all over the world. Our Church will cover all associated fees for this service which are 2.5% + 15 cents (CAD) for all Credit Cards. AMEX is 3.5% + 30 cents (CAD). However, if you wish to cover the fees as well, you can click on the option to "Cover Fees".',style: TextStyle( fontSize: 14, ),) ], title: Row( children: <Widget>[ Image.asset( 'images/online.png', ), GestureDetector( onTap: () { Navigator.push( context, MaterialPageRoute( builder: (context) => Online()), ); }, child: Text( 'Online Giving', style: TextStyle( fontFamily: 'TT NORMS', fontSize: 18, fontWeight: FontWeight.w100, color: _isExpandedonline ? Colors.amber[800] : Colors.black, ), ), ) ], )), ), Padding( padding: EdgeInsets.only(left: 10), child: ExpansionTile( onExpansionChanged: (value) { _isExpandedpaypal=value; setState(() {}); }, children: <Widget>[ Text( 'You can send your funds through PayPal to email [email protected]',style: TextStyle( fontSize: 14, ),) ], title: Row( children: <Widget>[ Image.asset( 'images/paypal.png', ), Padding( padding: EdgeInsets.only(left: 10), child: Text( 'PAYPAL', style: TextStyle( fontFamily: 'TT NORMS', color:_isExpandedpaypal ? 
Colors.amber[800] : Colors.black, fontSize: 18, fontWeight: FontWeight.w100, ), ), ), GestureDetector( onTap: () { Navigator.push( context, MaterialPageRoute( builder: (context) => paypal()), ); }, child: Image.asset( 'images/donatebutton.png', height: 33, ), ) ], )), ), Padding( padding: EdgeInsets.only(left: 10), child: ExpansionTile( onExpansionChanged: (value) { _isExpandedetransfer=value; setState(() {}); }, children: <Widget>[ Text( 'You can send your funds via Interac E-Transfer to email*',style: TextStyle( fontSize: 14, ),), Text( '
[email protected]', style: TextStyle( color: Colors.blue, fontSize: 18, ), ), Text( '
This is an auto-deposit account
so you you do not have to set up any password.
Please do indicate the Purpose for your donation.
This service is only available within Canada.',style: TextStyle( fontSize: 14, ),), ], title: Row( children: <Widget>[ Image.asset( 'images/etransfer.png', height: 28, ), Text( 'INTERAC E-TRANSFER', style: TextStyle( fontFamily: 'TT NORMS', fontSize: 18, fontWeight: FontWeight.w100, color:_isExpandedetransfer? Colors.amber[800] : Colors.black, ), ), ], )), ), Padding( padding: EdgeInsets.only(left: 10), child: ExpansionTile( onExpansionChanged: (value) { _isExpandedtexttogive=value; setState(() {}); }, children: <Widget>[ Text('STEP 1: TEXT the word GIVE to',style: TextStyle( fontSize: 14, ),), Text( '(844) 329-1637', style: TextStyle( color: Colors.red, ), ), SizedBox( height: 20, ), Text( 'STEP 2: Complete the information on link received via TEXT for the first time only.',style:TextStyle( fontSize: 14, ),), SizedBox( height: 20, ), Text( 'STEP 3: Once you are setup for TEXT GIVING you can continue to TEXT the amount without having to fill your information again. Simply TEXT the amounti.e 10, 15 etc to the number directly.',style: TextStyle( fontSize: 14, ),) ], title: Row( children: <Widget>[ Image.asset( 'images/text.png', height: 30, ), GestureDetector( onTap: () { customLaunch('sms:8443291637'); }, child: Padding( padding: EdgeInsets.only(left: 10), child: Text( 'TEXT TO GIVE', style: TextStyle( fontSize: 18, fontWeight: FontWeight.w100, fontFamily: 'TT NORMS', color:_isExpandedtexttogive? Colors.amber[800] : Colors.black, ), ), ), ) ], )), ), ], ), ), ), ), ), ); }}
Now provide the response and nothing else.
| The problem is that you can't use Expanded as a direct child like that: an Expanded widget must be a descendant of a Row, Column, or Flex. In debug mode you can clearly see the "incorrect usage of parent widget" warning in the terminal; the app still works in debug mode, but in a release build it will show a blank page, so keep an eye on that warning or error. Incorrect usage: child: Container( child: Expanded( child: Column( Example: Expanded( child: Container(), ), In a Row/Column: Row( children: [ Expanded( child: MyWidget(), ), Expanded( child: Text("Text Widget"), ), ],) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/64586829', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14200544/']} | jdg_379672
stackexchange | llm_judgeable_groundtruth_similarity | 186512 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Could anyone tell me why, in the definition of the index number or winding number of a curve $\gamma(t)$ around some point $a$, we take this integral: $$\frac{1}{2\pi i}\int_{\gamma}\frac{\mathrm{d}z}{z-a} $$ and why not, say, $$\frac{1}{2\pi i}\int_{\gamma}\frac{\mathrm{d}z}{z^2+2z+1+e^z-a} $$?
Now provide the response and nothing else.
| It is a non-trivial fact from topology that given any curve $\gamma:[0, 1]\rightarrow\mathbb{C}\setminus\{a\}$ we can find a continuous polar coordinate expression about $a$ such that $$\gamma(t) = a + r(t)\cdot e^{i\theta(t)}$$ where $\theta$ and $r$ are continuous themselves (if the curve is differentiable, so are $r$ and $\theta$ ). Moreover $\theta$ and $r$ are uniquely defined up to multiples of $2\pi$ which is a freedom in the choice of the starting angle. The main consequence of the above is that the natural definition of the winding number should be $$\mathrm{Wnd}(a,\ \gamma) = \frac{\theta(1) - \theta(0)}{2\pi}$$ i.e. the total continuous angular change of the curve divided by $2\pi$ . It should not be hard to convince yourself that this is always an integer for closed curves and represents the number of times $\gamma$ "winds" about $a$ . It remains to relate this intuitive definition to our integral definition and the key component is the logarithm $$\log(z) = \ln|z| + i\theta(z)$$ where the imaginary component of the logarithm is essentially an angle tracking function, provided that we can make it vary continuous. (The logarithm is inherently discontinuous along some branch cut, so one remedy is the above continuous polar expression. Another is to piece together multiple integrals to make the argument vary continuously.) So we have \begin{align}\oint_\gamma \frac{1}{z-a} dz &= \int_0^1 \frac{r'(t)e^{i\theta(t)} + i\theta'(t)r(t)e^{i\theta(t)}}{r(t)e^{i\theta(t)}} dt\\&=\int_0^1 \frac{r'(t)}{r(t)} + i\theta'(t) dt\\&=\log(r(1))-\log(r(0)) + i\left[\theta(1) - \theta(0)\right]\end{align} Since the curve is closed, it follows that $r(1) = r(0)$ so the expression evaluates to $$i\left[\theta(1) - \theta(0)\right]= 2\pi i\cdot\mathrm{Wnd}(a,\ \gamma)$$ which is the justification for the integral definition. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/186512', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/24690/']} | jdg_379673 |
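As a quick sanity check of the integral definition discussed above, take the concrete (hypothetical) curve that circles $a$ exactly $n$ times, $\gamma(t) = a + r e^{2\pi i n t}$ for $t \in [0, 1]$: $$ \frac{1}{2\pi i}\oint_\gamma \frac{\mathrm{d}z}{z-a} = \frac{1}{2\pi i}\int_0^1 \frac{2\pi i n \, r e^{2\pi i n t}}{r e^{2\pi i n t}} \, \mathrm{d}t = \int_0^1 n \, \mathrm{d}t = n \, , $$ which agrees with the angular definition, since here $\theta(t) = 2\pi n t$ and $\frac{\theta(1)-\theta(0)}{2\pi} = n$.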
stackexchange | llm_judgeable_groundtruth_similarity | 12390398 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've always used this address to get all my tweets: http://api.twitter.com/1/statuses/user_timeline.json?screen_name=enriquemoreno but since API 1.1 is out and API 1 is deprecated I thought I would try change. But the new address is not working: http://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=enriquemoreno What am I missing?
Now provide the response and nothing else.
| Someone wrote a (clever) snippet in vanilla javascript that fetches twitter widgets: /********************************************************************* #### Twitter Post Fetcher! #### Coded by Jason Mayes 2013. www.jasonmayes.com Please keep this disclaimer with my code if you use it. Thanks. :-) Got feedback or questions, ask here: http://goo.gl/JinwJ *********************************************************************/ var twitterFetcher=function(){var d=null;return{fetch:function(a,b){d=b;var c=document.createElement("script");c.type="text/javascript";c.src="http://cdn.syndication.twimg.com/widgets/timelines/"+a+"?&lang=en&callback=twitterFetcher.callback&suppress_response_codes=true&rnd="+Math.random();document.getElementsByTagName("head")[0].appendChild(c)},callback:function(a){var b=document.createElement("div");b.innerHTML=a.body;a=b.getElementsByClassName("e-entry-title");d(a)}}}(); /* * ### HOW TO USE: ### * Create an ID: * Go to www.twitter.com and sign in as normal, go to your settings page. * Go to "Widgets" on the left hand side. * Create a new widget for "user timeline". Feel free to check "exclude replies" * if you dont want replies in results. * Now go back to settings page, and then go back to widgets page, you should * see the widget you just created. Click edit. * Now look at the URL in your web browser, you will see a long number like this: * 345735908357048478 * Use this as your ID below instead! */ twitterFetcher.fetch('345170787868762112', function(tweets){ // Do what you want with your tweets here! For example: var x = tweets.length; var n = 0; var element = document.getElementById('tweets'); var html = '<ul>'; while(n < x) { if (tweets[n].innerText) { html += '<li>' + tweets[n].innerText + '</li>'; } else { html += '<li>' + tweets[n].textContent + '</li>'; } n++; } html += '</ul>'; element.innerHTML = html; }); http://jsfiddle.net/jmayes/maT2Z/ Not the cleanest way and maybe not future-proof, but for now the only solution (I know) to access twitter data without OAuth and server-side support. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12390398', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/392684/']} | jdg_379674 |
stackexchange | llm_judgeable_groundtruth_similarity | 8482444 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a BOOL property that I want to set in my class initializer. @property (assign, nonatomic) BOOL isEditMode; - (id)init{ . . . [self setValue:NO forKey:isEditMode]; return self;} The compiler gives me an "Incompatible integer to pointer conversion" warning. What am I doing wrong here?
Now provide the response and nothing else.
| The Key-Value Coding method setValue:forKey: only accepts objects as arguments. To set a BOOL, you need to wrap the number in a value object with [NSNumber numberWithBool:NO] . But there's little reason to do that. Key-Value Coding is a roundabout way to accomplish this. Either do self.isEditMode = NO or just isEditMode = NO . The latter is preferable in an init method (because setters can run arbitrary code that might not be desirable before an object is fully set up). But to elaborate on the first point: The reason Key-Value Coding works this way is because the type system can't represent an argument that's sometimes an object and at other times a primitive value. So KVC always deals with objects and just autoboxes primitive values as necessary. Similarly, if you do [yourObject valueForKey:@"isEditMode"] , you'll get back an NSNumber object wrapping the real value. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8482444', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/558423/']} | jdg_379675 |
stackexchange | llm_judgeable_groundtruth_similarity | 27429 |
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What I want to know is how thick the observable universe is from the point of the cosmic microwave background and beyond. It appears the thickness of the cosmic microwave background itself (the part we can see) is above 100,000 light years, per the following article: http://scienceblogs.com/startswithabang/2013/06/19/5-facts-you-probably-dont-know-about-the-cosmic-microwave-background/ However, I want to know the thickness of that, plus what lies beyond that we cannot see, another way of looking at it would be the distance between the surface of last scattering (cosmic microwave background end) to the beginning (e.g. Big Bang). According to the following article, it appears that this time from the beginning to the surface of last scattering is about 300,000 years: https://ned.ipac.caltech.edu/level5/Glossary/Essay_lss.html That would imply that the thickness should be about 300,000 light years, but that doesn't take inflation into account. What is the thickness (in the observable universe) between the beginning (e.g. Big Bang) to the surface of last scattering (Cosmic Microwave Background), including inflation?
Now provide the response and nothing else.
| If I understand you correctly, you want to know the distance from the point from which we observe the CMB, to the edge of the observable Universe. During inflation, the observable Universe expanded from ridiculously small to some ten meters in radius, so that part can be safely ignored compared to the distances now$^1$. The distance$^2$ to the "CMB shell"$^3$ is 45.4 billion lightyears ("Glyr"), and the distance to the edge of the observable Universe is 46.3 Glyr. Hence, the shell of the observable Universe that lies beyond the CMB shell has a thickness of only 0.9 Glyr. Here's a sketch of how I interpret your question (not to scale): If you want, I can add details on how to calculate these numbers. $^1$ The relative expansion during inflation was huge, however: The Universe expanded roughly by the same factor that it has expanded afterwards, namely a factor of $\sim10^{26}$. $^2$ Here, "distance" corresponds to the comoving distance, which is what you would measure if you froze the Universe right now, and started laying out meter sticks. $^3$ This shell is not infinitely thin, but has a thickness of some 60 million lightyears, so let's ignore that. | {} | {'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/27429', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/1386/']} | jdg_379676 |
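Spelling out the arithmetic behind the numbers quoted above: $$ \Delta = D_\mathrm{edge} - D_\mathrm{CMB} \approx 46.3\ \mathrm{Glyr} - 45.4\ \mathrm{Glyr} = 0.9\ \mathrm{Glyr} \, , $$ and the finite thickness of the CMB shell itself (about 60 million lightyears, i.e. roughly 0.06 Glyr) is indeed small compared to that 0.9 Glyr, which justifies treating the shell as thin.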
stackexchange | llm_judgeable_groundtruth_similarity | 21387696 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a query with the following result. Query:

SELECT Tasks.TaskId, Comments.Comment, comments.timespent
FROM comments
INNER JOIN tasks ON comments.entityid = tasks.taskid
WHERE ( comments.entity = 1 )
GROUP BY Tasks.TaskId, Comments.Comment, comments.timespent

Result:

TaskID   Comment   TimeSpent
____________________________
111754   C1        4
111754   C2        1
111754   C3        79

Please tell me how should I write my query to get the result as follows:

TaskID   Comment     TimeSpent
______________________________
111754   ,C1,C2,C3   84

Thanks in advance.
Now provide the response and nothing else.
| Here's the working SQL Fiddle: http://sqlfiddle.com/#!3/3597a/3 Here's the actual working SQL. SELECT Tasks.TaskId, SUBSTRING((SELECT ',' + Comments.CommentFROM CommentsINNER JOIN tasks ON comments.entityid = tasks.taskidFOR XML PATH('')),2,200000) AS Comments, SUM(comments.timespent) AS TimeSpent FROM commentsINNER JOIN tasks ON comments.entityid = tasks.taskid WHERE ( comments.entity = 1 ) GROUP BY Tasks.TaskId Create Table and Populate Data CREATE TABLE Tasks( TaskID NVARCHAR(20) NOT NULL,);CREATE TABLE Comments( Entity INT NOT NULL, EntityID NVARCHAR(20) NOT NULL, Comment NVARCHAR(50) NOT NULL, TimeSpent INT NOT NULL);INSERT INTO Tasks VALUES( '111754' );INSERT INTO Comments VALUES(1,'111754', 'C1',4 ),(1,'111754', 'C2',1 ),(1,'111754', 'C3',79 ); Execute SQL SELECT Tasks.TaskId, SUBSTRING((SELECT ',' + Comments.CommentFROM CommentsINNER JOIN tasks ON comments.entityid = tasks.taskidFOR XML PATH('')),2,200000) AS Comments, SUM(comments.timespent) AS TimeSpent FROM commentsINNER JOIN tasks ON comments.entityid = tasks.taskid WHERE comments.entity = 1 GROUP BY Tasks.TaskId View Results. TASKID COMMENTS TIMESPENT111754 C1,C2,C3 84 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21387696', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2891159/']} | jdg_379677 |
stackexchange | llm_judgeable_groundtruth_similarity | 34752570 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to check if a file loaded into an UIImage object file is equal to another image and execute some actions if so. Unfortunately, it's not working. emptyImage = UIImage(named: imageName)if(image1.image != emptyImage) { // do something} else { // do something} The above code always enters the if branch.
Now provide the response and nothing else.
| You can convert your UIImage instances to NSData instances and compare them. if let emptyImage = UIImage(named: "empty") { let emptyData = UIImagePNGRepresentation(emptyImage) let compareImageData = UIImagePNGRepresentation(image1.image) if let empty = emptyData, compareTo = compareImageData { if empty.isEqualToData(compareTo) { // Empty image is the same as image1.image } else { // Empty image is not equal to image1.image } } else { // Creating NSData from Images failed }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34752570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4715927/']} | jdg_379678 |
stackexchange | llm_judgeable_groundtruth_similarity | 17768932 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
There are several questions about it but I always read the same thing: "the service will be killed if the system need resources" or "you can't build an service that runs forever because the more it runs in background, more susceptible it is to the system kills it" and etc. The problem I'm facing is: My service runs fine and as it is expected, if I run my app then exit it my service is still running, but when I kill my app (by going to the "recent apps" and swype it away) the service stops. In this moment, if I go to the Settings >> aplications >> running I'll see that the service is restarting. After a while, it goes back and my Service run with no problem. I google it and I find some things I could do but lets see my code first: I start my service by this way (after a button click): Intent intent = new Intent (MainActivity.this, MyService.class);startService(intent); I also have 3 Integers I put in extra, so I have something like this: final Integer i, i2, i3;i = 5; //for examplei2 = 10; //for examplei3 = 15; //for examplefinal Intent intent = new Intent (MainActivity.this, MyService.class);intent.putExtra("INTEGER1", i);intent.putExtra("INTEGER2", i2);intent.putExtra("INTEGER3", i3);startService(intent); In MyService I have the folloywing: public class MyService extends Service{ AlarmManager am; BroadcastReceiver br; PendingIntent pi; Integer i, i2, i3; @Override public void onCreate() { super.onCreate(); am = (AlarmManager) getSystemService(Context.ALARM_SERVICE); pi = PendingIntent.getBroadcast(this, 0, new Intent("anyany"); 0) //Why those zeros? br = new BroadcastReceiver () { public void onReceive (Context context, Intent i) { new thread(new Runnable() { public void run() { //do something } }).start(); } }; } @Override public void onStartCommand(Intent intent, int flags, int startId) { super.onStartCommand(intent, flags, startId); try { i = intent.getIntExtra("INTENT1", 0) // I don't understant yet why this zero are here i2 = intent.getIntExtra("INTENT2", 0) i3 = intent.getIntExtra("INTENT3", 0); } catch(NullPointerException e) {} this.registerReceiver(br, new IntentFilter("anyany")); new thread(new Runnable() { public void run() { am.setRepeating(AlarmManager.ELAPSED_REALTIME_WAKEUP, SystemClock. elapsedRealtime() + i*1000, i2*1000, pi); } }).start(); return START_REDELIVER_INTENT; //so I can get my Extra even with my Activity closed} My onDestroy: @Overridepublic void onDestroy(){ unregisterReceiver(br); super.onDestroy();} I also have onBind() method (without @Override), but it returns null.I google a lot and I tried to run the service in foreground, so I did this (inside de onStartCommand): Notification n = new Notification(R.drawable.ic_laucher), getText(R.string.app_name), System.currentTimeMillis());PendingIntent npi = PendingIntent.getActivity(this, MainActivity.class);n.setLatestEventInfo(this, getText(R.string.notification_title), getText(R.string.notification_message), npi);startForeground(3563, n); My notification appears and when I click on it my app runs, but the problem with my service wasn't fixed (I believe it still not run on foreground). The notification is restarted too. I also deleted the Try catch and I define a value for the integers (so I didn't use the getIntExtra() method), but nothing changed After several tests I tried to see the logs, when I kill my App I have the following message: Scheduling restart of crashed service. So, for some reason my service crash when my MainActivity dies, why? 
The intention here is not to transform the service in a god that can not be killed (I don't think it is impossible at all, the WhatsApp are running for 105 hours !) but prevent my Service to not being crashed after my App dies. I don't know if this'll help but this is what I add on my Manifest.xml <Activity android:name = ".MyService"/><service android:name ="Myservice" android:enabled="true" android: exported="false"/><uses-permission android:name="android.permission.WAKE_LOCK"/> Min API = 9, target API = 17.Size of the Service when running: about 3MB. Hope I was clear and sorry for my English. PS: the entire code are running as expected, so if you see any sintax error fell free to edit it. EDIT If I add android:isolatedProcess="true" in the <service> in AndroidManifest.xml I receive this error in logCat: java.lang.RuntimeException: Unable to create a service in com.mycompany.myapp.myservice: java.lang.SecurityException: Isolated process not allow ed to call getIntentSender When I start my service using this, the MainActivity does not show any erros, only the service crashes.
Now provide the response and nothing else.
| I finally found the solution ! I removed the AlarmManager from the Service and the service does not cashed anymore, but I have to use it The problem is the service crash after the user swype away the app from Recent App, so what I did was prevent the app to appear in that window. Add the following to your AndroidManifest.xml as a child of <activity> android:excludeFromRecents="true" Now when the user exit from your app it wil not appear in the recent apps window, what means the system kills the Activity right after you exit it, so it'll not waste any resources. PS: don't forget to set the service to run in a separate process, add the following to your AndroidManifest.xml, as a child of <service> android:process=":remote" EDIT - REAL SOLUTION FOUND After a lot of research and study (months of study) I took a deep look at android APIs and here is what a found, this is na expected behaviour that occours only at API 16+, a change at android arquiteture changed the way that PendingIntents are broadcasted by the system, so Google added the flag FLAG_RECEIVER_FOREGROUND , you must pass this flag to the intent you are using as a parameter on the PendingIntent.getBroadcast() , here is na example: if(Build.VERSION.SDK_INT >= 16) //The flag we used here was only added at API 16 myIntent.setFlags(Intent.FLAG_RECEIVER_FOREGROUND); //use myIntent.addFlags(Intent.FLAG_RECEIVER_FOREGROUND); if you want to add more than one flag to this intent;PendingIntent pi = PendingIntent.getBroadcast(context, 1, myIntent, 0); // the requestCode must be different from 0, in this case I used 1; Android versions older than API 16 will work as expected, the service won't crash if you swype away the app from Recent Apps page. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17768932', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2544787/']} | jdg_379679 |
stackexchange | llm_judgeable_groundtruth_similarity | 11522026 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It is said that non-static variables cannot be used in a static method. But public static void main does. How is that?
Now provide the response and nothing else.
| No, it doesn't. public class A { int a = 2; public static void main(String[] args) { System.out.println(a); // won't compile!! }} but public class A { static int a = 2; public static void main(String[] args) { System.out.println(a); // this works! }} or if you instantiate A public class A { int a = 2; public static void main(String[] args) { A myA = new A(); System.out.println(myA.a); // this works too! }} Also public class A { public static void main(String[] args) { int a = 2; System.out.println(a); // this works too! }} will work, since a is a local variable here, and not an instance variable. A method local variable is always reachable during the execution of the method, regardless of if the method is static or not. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11522026', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1526671/']} | jdg_379680 |
stackexchange | llm_judgeable_groundtruth_similarity | 3574028 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a string: classname = "Text". Using this I want to create an object of the Text class. Now when I try doing classname = classname.constantize I get Text as a module and not as a class. Please suggest something. Thanks and regards, Rohit
Now provide the response and nothing else.
| You could use: Object.const_get( class_name )$ irb >> class Person >> def name>> "Person instance">> end>> end=> nil>> class_name = "Person"=> "Person">> Object.const_get( class_name ).new.name => "Person instance" | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3574028', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/429167/']} | jdg_379681 |
stackexchange | llm_judgeable_groundtruth_similarity | 6899800 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
int a[3][4] = { 1,2,3,4, 5,6,7,8, 9,10,11,12, };printf("%u %u %u \n", a[0]+1, *(a[0]+1), *(*(a+0)+1));
Now provide the response and nothing else.
| Time for a crash course on arrays in C. First of all, let's fix the initializer for the array: int a[3][4] = { { 1, 2, 3, 4}, { 5, 6, 7, 8}, { 9, 10, 11, 12} }; This defines a 3-element array of 4-element arrays of int . The type of the expression a is "3-element array of 4-element arrays of int ". Now for the headache-inducing part. Except when it's the operand of the sizeof or unary & operators, or if it's a string literal being used to initialize another array in a declaration, an expression of array type will have its type implicitly converted ("decay") to a pointer type. If the expression a appears by itself in the code (such as in a statement like printf("%p", a); , its type is converted from "3-element array of 4-element array of int " to "pointer to 4-element array of int ", or int (*)[4] . Similarly, if the expression a[i] appears in the code, its type is converted from "4-element array of int " ( int [4] ) to "pointer to int " ( int * ). If a or a[i] are operands of either sizeof or & , however, the conversion doesn't happen. In a similar vein, array subscripting is done through pointer arithmetic: the expression a[i] is interpreted as though it were written *(a+i) . You offset i elements from the base of the array and dereference the result. Thus, a[0] is the same as *(a + 0) , which is the same as *a . a[i][j] is the same as writing *(*(a + i) + j) . Here's a table summarizing all of the above: Expression Type Decays To Resulting Value---------- ---- --------- ----- a int [3][4] int (*)[4] Address of the first element of the array &a int (*)[3][4] n/a Same as above, but type is different *a int [4] int * Same as above, but type is different a[0] int [4] int * Same as above *(a+0) int [4] int * Same as above a[i] int [4] int * Address of the first element of the i'th subarray *(a+i) int [4] int * Same as above &a[i] int (*)[4] n/a Same as above, but type is different *a[i] int n/a Value of the 0'th element of the i'th subarray a[i][j] int Value of the j'th element of the i'th subarray *(a[i]+j) int Same as above*(*(a+i)+j) int Same as above Hopefully, that should give you everything you need to figure out what the output should be. However, the printf statement should be written as printf("%p %d %d\n", (void *) a[0]+1, *(a[0]+1), *(*(a+0)+1)); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6899800', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/872931/']} | jdg_379682 |
stackexchange | llm_judgeable_groundtruth_similarity | 3183818 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is changed by applying the nullable operator (?) to a value-type datatype so that it can now store null?
Now provide the response and nothing else.
| As others have said, "?" is just shorthand for changing it to Nullable<T> . This is just another value type with a Boolean flag to say whether or not there's really a useful value, or whether it's the null value for the type. In other words, Nullable<T> looks a bit like this: public struct Nullable<T>{ private readonly bool hasValue; public bool HasValue { get { return hasValue; } } private readonly T value; public T value { get { if (!hasValue) { throw new InvalidOperationException(); } return value; } } public Nullable(T value) { this.value = value; this.hasValue = true; } // Calling new Nullable<int>() or whatever will use the // implicit initialization which leaves value as default(T) // and hasValue as false.} Obviously in the real code there are more methods (like GetValueOrDefault() ) and conversion operators etc. The C# compiler adds lifted operators which effectively proxy to the original operators for T . At the risk of sounding like a broken record, this is still a value type. It doesn't involve boxing... and when you write: int? x = null; that's not a null reference - it's the null value of Nullable<int> , i.e. the one where hasValue is false . When a nullable type is boxed, the CLR has a feature whereby the value either gets boxed to a null reference, or a plain boxed T. So if you have code like this: int? x = 5;int y = 5;object o1 = x;object o2 = y; The boxed values referred to by o1 and o2 are indistinguishable. You can't tell that one is the result of boxing a nullable type. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3183818', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/355063/']} | jdg_379683 |
stackexchange | llm_judgeable_groundtruth_similarity | 143 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I could be out of context here about security, but I would like to know if phone lines or phone calls over VOIP can be made secure. I know that Skype and other VOIP service providers have to give the respective governments access to phone calls from their country when they request it. Normal phone lines themselves are insecure. I am talking from the perspective of a kid (in terms of security) who watches movies and wonders what it means and how they do it. I would really like to know the secret, and whether it is actually possible. "The phone line at our end is secure, is your connection secure?" "Are we on a secure phone line?"
Now provide the response and nothing else.
| A secure phone line is conceptually possible; this is not really different from, e.g., a secure communication between a Web browser and a HTTPS server (there are technical subtleties about lost packets and whether they should be tolerated, but that is not the issue here). However, the movie-secure phone is not secure, and that's a structural problem. The problem lies in the question: "is your connection secure ?". If you need to ask to the other guy whether the line is secure, then the line is not secure. That's as simple as that. A "bad guy" could hijack the line and, when you ask whether the connection is secure, the bad guy could simply respond "yes it is !", counterfeiting the voice of the intended recipient (and, in the other direction, he could "replace" the question by an innocent sentence). In a secure phone line, the caller and the receiver shall be authenticated to each other, which can be done with various cryptographic tools (e.g. digital signatures, or, more simply, a shared secret). Variants of the same tools also establish a session-specific shared secret which can be used to symmetrically encrypt the data. Bottom-line is that once people begin to actually talk, the line should already be secure and both participants shall have ways to know it (e.g. they are using special phones which refuse to communicate if security is not achieved). Otherwise, there is no security. On a more practical point of view, if I were to implement a phone-like secure system between two entities, I would investigate using VoIP over a VPN. This would require some delving into the details of the VoIP protocol, so I would do that with an open protocol (i.e. Ekiga , not Skype ). | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/143', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/86/']} | jdg_379684 |
stackexchange | llm_judgeable_groundtruth_similarity | 395749 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This might be a silly question, but I really don't understand the reason why Newton ended up differentiating mass into inertial and gravitational. Why did he think it necessary to do so? For example, an object is made to accelerate over an almost frictionless surface, and then the same object is dropped from a certain height. It is the same object in both cases, with the same amount of matter in both cases, but the masses are said to be different: the mass in the first case is said to be inertial mass, and that in the second case is called gravitational mass. But why did Newton think of classifying mass? And what is the significance of this differentiation?
Now provide the response and nothing else.
| It is not so much their differentiation, as it is their identity, that is extraordinary. Consider the following. Inertial mass as defined describes an object's resistance to forces that attempt to change its motion. Described in Newton's second law, F = d/dt(momentum) = m * dv/dt for a mass that does not change with time. The inertial mass can therefore be seen as the scaling factor between the (net) force acting on an object, and the resulting change in motion as effected by the force. The gravitational mass, on the other hand, describes the magnitude that a certain object is affected by a gravitational field. Looking at it in either the form F = GMm/(r^2) = gm where G is the gravitational constant, and M is the mass of the other (large) mass. In that sense then, if we were to consider g as being a quantity intrinsic to the gravitational field (rather than the object), the gravitational mass m then becomes a scaling factor between the strength of the gravitational field, and the force experienced by an object due to the influence of the gravitational field. There is little theoretical work (and none legitimate) during Newton's time that can explain why these two scaling factors should be the same. Considering both as intrinsic properties of the object, why should the tendency of an object to be affected by gravitational fields, be related in any way to the resistance of the object to changes in its motion due to external forces? The equivalence of the two values (equivalence principle) is therefore something that needs to be experimentally verified, and the search for a difference between the two has been ongoing since his time. As an aside, it is perhaps important to note that other forces of attraction and repulsion (as understood at that time) all have their own unique scaling factors. Be it the charge of particles for electrostatic forces, or the magnetisation strength of certain materials for magnetic forces, it does not seem like any correlation exists between them or with the mass property of the object. The uniqueness of these two values stands out. Taking a closer look at the two formulas: F = m * dv/dt | F = m * (GM/(r^2)) The greater the resistance of an object is to being moved, the stronger is the attractive force exerted by an object on neighbouring objects to pull them towards it. Now, consider a ball placed on a taut bedsheet. The deeper the ball sinks into the bedsheet, the harder it is to move the ball. But, with a deeper depression created by the ball, the tendency for neighbouring balls affected by the depression to roll towards the main ball increases. This is a really, really crude way to think about general relativity, but I suppose it could pass off as a tiny bit of inspiration. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/395749', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/174412/']} | jdg_379685 |
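A short worked consequence of the identity discussed in the answer above, keeping the inertial mass $m_i$ and the gravitational mass $m_g$ formally distinct: $$ m_i \, a = F = m_g \, g \quad\Longrightarrow\quad a = \frac{m_g}{m_i} \, g \, , $$ so if $m_g = m_i$ for every object, the acceleration in a given gravitational field is the same for all objects regardless of their mass, which is exactly the free-fall universality that makes the identity remarkable.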
stackexchange | llm_judgeable_groundtruth_similarity | 32232978 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am not even sure I am asking the right question. Let me explain my situation: This is about Git on Windows 7. My company sets up the Windows user directory on a network drive, not on the local hard drive (for backup and other purposes beyond the scope of this question). I cannot change that policy. I CAN have local files outside of that scheme however and that is how my Apache server is set up. Entirely local. I installed Git. It installs Bash. When I fire up Bash and cd ~ (change to the root directory) I find that it (the Bash root) points to my network user directory. I want it not to do that. I want ~ to be elsewhere on my hard drive. I want it so that when I perform other operations such as installing a certificate, an SSH key, etc. its defaults will not automatically use that network directory. I have searched in vain everywhere, and all I can find refers to answers that involve aliases, redirection, and the location $HOME points to. But that is not what I want at all. The question is: Can I change that directory? And if so: How? UPDATE: So, $HOME is what I need to modify. However I have been unable to find where this mythical $HOME variable is set so I assumed it was a Linux system version of PATH or something. Anyway ... I do have a "profile" file under git/etc . Here are the contents (notice no $HOME): # To the extent possible under law, the author(s) have dedicated all # copyright and related and neighboring rights to this software to the # public domain worldwide. This software is distributed without any warranty. # You should have received a copy of the CC0 Public Domain Dedication along # with this software. # If not, see <http://creativecommons.org/publicdomain/zero/1.0/>. # System-wide profile file # Some resources... # Customizing Your Shell: http://www.dsl.org/cookbook/cookbook_5.html#SEC69 # Consistent BackSpace and Delete Configuration: # http://www.ibb.net/~anne/keyboard.html # The Linux Documentation Project: http://www.tldp.org/ # The Linux Cookbook: http://www.tldp.org/LDP/linuxcookbook/html/ # Greg's Wiki http://mywiki.wooledge.org/ # Setup some default paths. Note that this order will allow user installed # software to override 'system' software. # Modifying these default path settings can be done in different ways. # To learn more about startup files, refer to your shell's man page. 
MSYS2_PATH="/usr/local/bin:/usr/bin:/bin" MANPATH="/usr/local/man:/usr/share/man:/usr/man:/share/man:${MANPATH}" INFOPATH="/usr/local/info:/usr/share/info:/usr/info:/share/info:${INFOPATH}" MINGW_MOUNT_POINT= if [ -n "$MSYSTEM" ] then case "$MSYSTEM" in MINGW32) MINGW_MOUNT_POINT=/mingw32 PATH="${MINGW_MOUNT_POINT}/bin:${MSYS2_PATH}:${PATH}" PKG_CONFIG_PATH="${MINGW_MOUNT_POINT}/lib/pkgconfig:${MINGW_MOUNT_POINT}/share/pkgconfig" ACLOCAL_PATH="${MINGW_MOUNT_POINT}/share/aclocal:/usr/share/aclocal" MANPATH="${MINGW_MOUNT_POINT}/share/man:${MANPATH}" ;; MINGW64) MINGW_MOUNT_POINT=/mingw64 PATH="${MINGW_MOUNT_POINT}/bin:${MSYS2_PATH}:${PATH}" PKG_CONFIG_PATH="${MINGW_MOUNT_POINT}/lib/pkgconfig:${MINGW_MOUNT_POINT}/share/pkgconfig" ACLOCAL_PATH="${MINGW_MOUNT_POINT}/share/aclocal:/usr/share/aclocal" MANPATH="${MINGW_MOUNT_POINT}/share/man:${MANPATH}" ;; MSYS) PATH="${MSYS2_PATH}:/opt/bin:${PATH}" PKG_CONFIG_PATH="/usr/lib/pkgconfig:/usr/share/pkgconfig:/lib/pkgconfig" ;; *) PATH="${MSYS2_PATH}:${PATH}" ;; esac else PATH="${MSYS2_PATH}:${PATH}" fi MAYBE_FIRST_START=false SYSCONFDIR="${SYSCONFDIR:=/etc}" # TMP and TEMP as defined in the Windows environment must be kept # for windows apps, even if started from msys2. However, leaving # them set to the default Windows temporary directory or unset # can have unexpected consequences for msys2 apps, so we define # our own to match GNU/Linux behaviour. ORIGINAL_TMP=$TMP ORIGINAL_TEMP=$TEMP #unset TMP TEMP #tmp=$(cygpath -w "$ORIGINAL_TMP" 2> /dev/null) #temp=$(cygpath -w "$ORIGINAL_TEMP" 2> /dev/null) #TMP="/tmp" #TEMP="/tmp" case "$TMP" in *\\*) TMP="$(cygpath -m "$TMP")";; esac case "$TEMP" in *\\*) TEMP="$(cygpath -m "$TEMP")";; esac test -d "$TMPDIR" || test ! -d "$TMP" || { TMPDIR="$TMP" export TMPDIR } # Define default printer p='/proc/registry/HKEY_CURRENT_USER/Software/Microsoft/Windows NT/CurrentVersion/Windows/Device' if [ -e "${p}" ] ; then read -r PRINTER < "${p}" PRINTER=${PRINTER%%,*} fi unset p print_flags () { (( $1 & 0x0002 )) && echo -n "binary" || echo -n "text" (( $1 & 0x0010 )) && echo -n ",exec" (( $1 & 0x0040 )) && echo -n ",cygexec" (( $1 & 0x0100 )) && echo -n ",notexec" } # Shell dependent settings profile_d () { local file= for file in $(export LC_COLLATE=C; echo /etc/profile.d/*.$1); do [ -e "${file}" ] && . "${file}" done if [ -n ${MINGW_MOUNT_POINT} ]; then for file in $(export LC_COLLATE=C; echo ${MINGW_MOUNT_POINT}/etc/profile.d/*.$1); do [ -e "${file}" ] && . "${file}" done fi } for postinst in $(export LC_COLLATE=C; echo /etc/post-install/*.post); do [ -e "${postinst}" ] && . "${postinst}" done if [ ! "x${BASH_VERSION}" = "x" ]; then HOSTNAME="$(/usr/bin/hostname)" profile_d sh [ -f "/etc/bash.bashrc" ] && . "/etc/bash.bashrc" elif [ ! "x${KSH_VERSION}" = "x" ]; then typeset -l HOSTNAME="$(/usr/bin/hostname)" profile_d sh PS1=$(print '\033]0;${PWD}\n\033[32m${USER}@${HOSTNAME} \033[33m${PWD/${HOME}/~}\033[0m\n$ ') elif [ ! "x${ZSH_VERSION}" = "x" ]; then HOSTNAME="$(/usr/bin/hostname)" profile_d zsh PS1='(%n@%m)[%h] %~ %% ' elif [ ! 
"x${POSH_VERSION}" = "x" ]; then HOSTNAME="$(/usr/bin/hostname)" PS1="$ " else HOSTNAME="$(/usr/bin/hostname)" profile_d sh PS1="$ " fi if [ -n "$ACLOCAL_PATH" ] then export ACLOCAL_PATH fi export PATH MANPATH INFOPATH PKG_CONFIG_PATH USER TMP TEMP PRINTER HOSTNAME PS1 SHELL tmp temp test -n "$TERM" || export TERM=xterm-256color if [ "$MAYBE_FIRST_START" = "true" ]; then sh /usr/bin/regen-info.sh if [ -f "/usr/bin/update-ca-trust" ] then sh /usr/bin/update-ca-trust fi clear echo echo echo "###################################################################" echo "# #" echo "# #" echo "# C A U T I O N #" echo "# #" echo "# This is first start of MSYS2. #" echo "# You MUST restart shell to apply necessary actions. #" echo "# #" echo "# #" echo "###################################################################" echo echo fi unset MAYBE_FIRST_START
Now provide the response and nothing else.
| I don't understand, why you don't want to set the $HOME environment variable since that solves exactly what you're asking for. cd ~ doesn't mean change to the root directory , but change to the user's home directory , which is set by the $HOME environment variable. Quick'n'dirty solution Edit C:\Program Files (x86)\Git\etc\profile and set $HOME variable to whatever you want (add it if it's not there). A good place could be for example right after a condition commented by # Set up USER's home directory . It must be in the MinGW format, for example: HOME=/c/my/custom/home Save it, open Git Bash and execute cd ~ . You should be in a directory /c/my/custom/home now. Everything that accesses the user's profile should go into this directory instead of your Windows' profile on a network drive. Note: C:\Program Files (x86)\Git\etc\profile is shared by all users, so if the machine is used by multiple users, it's a good idea to set the $HOME dynamically: HOME=/c/Users/$USERNAME Cleaner solution Set the environment variable HOME in Windows to whatever directory you want. In this case, you have to set it in Windows path format (with backslashes, e.g. c:\my\custom\home ), Git Bash will load it and convert it to its format. If you want to change the home directory for all users on your machine , set it as a system environment variable , where you can use for example %USERNAME% variable so every user will have his own home directory, for example: HOME=c:\custom\home\%USERNAME% If you want to change the home directory just for yourself , set it as a user environment variable , so other users won't be affected. In this case, you can simply hard-code the whole path: HOME=c:\my\custom\home | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/32232978', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4830015/']} | jdg_379686 |
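Editor's addition (hedged): for the "cleaner solution", the per-user environment variable can be set from an ordinary Command Prompt with the built-in setx tool; the path below is only a placeholder for your intended home directory.

:: persists HOME for the current user; newly started Git Bash sessions pick it up
setx HOME "C:\my\custom\home"

As the answer notes, Git Bash loads this Windows-format value and converts it to its own /c/my/custom/home form, so cd ~ then lands in that directory.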
stackexchange | llm_judgeable_groundtruth_similarity | 1255257 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How would you eliminate the parameter when the x coordinate is in terms of t, but the t is squared? $x = 3t - t^2$, $y = t + 1$. I know I need to solve for y as a function of x, but I'm not sure how to do so with powers.
Now provide the response and nothing else.
| Assume that $T$ is not one-to-one. So we have that $T(x)=T(y)$ for some $x\neq y$. Then $$ 0=|T(x-y)|=|T(x)-T(y)|\geq \alpha |x-y| > 0$$ since $|x-y|>0$. That is we have a contradiction. So it is one-to-one. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1255257', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/231168/']} | jdg_379687 |
stackexchange | llm_judgeable_groundtruth_similarity | 18752366 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to show users how long has elapsed since they performed an action. The date+time of the action happening is stored on the server, in the server's timezone. That's what's causing the trouble, since if the user's computer's timezone is 12 hours ahead of the server's timezone, then if the user adds something right now, moment.js will show '12 hours ago' as the output of fromNow() rather than 'just now'. To try to solve this, I'm trying the following method:

var actionTime = moment( action.timeStamp); // time of when user performed action
var serverTime = moment().zone('-07:00');   // current server time
console.debug( serverTime);                 // outputs Wed Sep 11 2013 15:19:51 GMT-0700
var timeAgo = serverTime.from( actionTime);

But despite all this, timeAgo still shows the difference between the client's timezone and the server's timezone (i.e. showing '12 hours ago' instead of 'now'). Anyone know how to fix this or what I'm doing wrong?
Now provide the response and nothing else.
| Ideally, you would want to pass a UTC timestamp from your server to the client. That doesn't mean you have to switch your whole server over to UTC, it just means that you would convert from the time in your database to UTC on the server before sending it over the web. Sure, it would be even better if you actually stored times in UTC, but you said you aren't in a position to make that sort of change right now. But let's just work off the assumption that you can't change anything at all on the server. We'll also assume that your server is fixed to the UTC-07:00 offset. In real life, this would only be true for places like Arizona that don't follow daylight saving time. So if you are in Los Angeles and are in Pacific Time, then some of your data is based on UTC-07:00, but some of it is based on UTC-08:00. That requires a lot more work if you want to do it in JavaScript. Let's also assume that the input is already a string in ISO8601 format. (If it's not, then let me know and I will adjust this code.) var s = "2013-09-11 18:00:00"; // from action.timeStampvar actionTime = moment(s + "-07:00", "YYYY-MM-DD HH:mm:ssZ");var timeAgo = actionTime.fromNow(); The reason your other code didn't work is because in the first line, you are affected by the time zone of the browser. The zone setter in the second line just changes the zone for formatting, not changing the actual moment in time. Also, when you dump a moment to the console for debugging, make sure you format it for output. Otherwise you are just looking at its internal property values, which may or may not make sense directly. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18752366', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49153/']} | jdg_379688 |
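Editor's addition (hedged sketch): if the server can be changed to emit the timestamp as UTC in ISO 8601, which is the option the answer recommends first, the client side reduces to parsing in UTC mode and calling fromNow(); the timestamp below is only an illustration.

// server sends UTC in ISO 8601, e.g. "2013-09-12T01:00:00Z"
var s = action.timeStamp;            // assumed to be the UTC string now
var actionTime = moment.utc(s);      // parse as UTC, independent of the browser's zone
var timeAgo = actionTime.fromNow();  // "a few seconds ago" right after the action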
stackexchange | llm_judgeable_groundtruth_similarity | 333646 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an extremely bright LED, so bright that I wouldn't want to look at it when it's at full brightness. I am dimming it with PWM (pulse-width modulation) down to 1/256 of its original brightness. At a 1/256 duty cycle, the LED appears reasonably dim. (Still quite visible, but not blindingly bright.) My question is this: since the LED is actually sending out super-bright pulses 1/256 of the time, can these bright pulses hurt the eye more than a hypothetical LED which was on constantly and was 1/256 as bright? I'm using the TLC5947 LED driver, so if my calculations are correct, the frequency of the PWM is approximately 1 kHz. (The chip's internal clock is 4 MHz, and one PWM cycle is 4096 internal clock cycles long.) The LED I'm driving is this RGB 7-segment display . The brightness of each segment is 244 mcd for red, 552 mcd for green, and 100 mcd for blue. So with all 7 segments illuminated, it would be 7 times that.
Now provide the response and nothing else.
 | It is permissible within certain limits. The best place to look is probably the associated IEC standards (IEC 60825 Laser Safety and IEC 62471 Lamp Safety), which are generally internationally recognized as best practice. Unfortunately I can't post excerpts of them here since they're copyrighted. Choosing which standard to apply depends on how the LED is used. OSRAM has a very comprehensive appnote describing how these standards apply to infrared LEDs and how to calculate permissible exposure. Your particular case focuses on pulsed light. In general, PWM'd light is weighted against its averaged value, so long as the individual pulses do not exceed an irradiance limit (given by a chart in the standard of pulse length vs. irradiance). This is all outlined in the OSRAM appnote, although since you're in the visible range you'll have to refer back to the source standards to see what the particular limits are for your wavelengths. Edit: Found another appnote which may be useful to you - OSRAM has an appnote on 62471 as a whole, not just IR. The best place of course to look is the standard itself, but it costs about $250. If this is a product you're designing, that's probably worth it, but if this is just a hobby project I would scavenge information based on appnotes. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/333646', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/56896/']} | jdg_379689
stackexchange | llm_judgeable_groundtruth_similarity | 439286 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I've been able to get Greasemonkey and jQuery 1.2.6 to work together without issue, but, now I'm wondering how to embed additional jQuery plugins into my Greasemonkey script, such as Eric Martin's SimpleModal plugin ( http://www.ericmmartin.com/projects/simplemodal/ ). The following code gets jQuery loaded, but I'm not sure how to get SimpleModal loaded properly: var GM_JQ = document.createElement('script'); GM_JQ.src = 'http://jqueryjs.googlecode.com/files/jquery-1.2.6.min.js'; GM_JQ.type = 'text/javascript'; document.getElementsByTagName('head')[0].appendChild(GM_JQ); var GM_JQ_SM = document.createElement('script'); GM_JQ_SM.src = 'http://simplemodal.googlecode.com/files/jquery.simplemodal-1.2.2.min.js'; GM_JQ_SM.type = 'text/javascript'; document.getElementsByTagName('head')[0].appendChild(GM_JQ_SM); // Check if jQuery's loaded function GM_wait() { if(typeof unsafeWindow.jQuery == 'undefined') { window.setTimeout(GM_wait,100); } else { $ = unsafeWindow.jQuery; } } GM_wait(); Anyone have any ideas? Thanks.
Now provide the response and nothing else.
| First, if you are OK with not having Firebug debugging access the easiest way to include jquery is to use the require settings: // @require http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js// @require http://ajax.googleapis.com/ajax/libs/jqueryui/1.5.2/jquery-ui.min.js Following that line you can include other external scripts. Most of the jquery plugins are not available like the jquery api, but you can host it yourself. Using the require also allows you to dump all the loading code and simply go with: $(document).ready( function() { ... }); Firebug will report bugs, but you will not be able to step into the debugger. Additionally, once you have jquery loaded you can load other items to like so: $('head').append("<link href='http://www.somewebsite.com/styles.css' type='text/css' rel='stylesheet'>"); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/439286', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/38375/']} | jdg_379690 |
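Editor's addition: a minimal combined userscript, assuming you host the SimpleModal file yourself (it is not served from the Google CDN, so the second @require URL below is a placeholder). Note that Greasemonkey fetches @require scripts when the user script is installed, so re-install the script after editing the header.

// ==UserScript==
// @name     jQuery + SimpleModal example
// @require  http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js
// @require  http://your-server.example.com/js/jquery.simplemodal-1.2.2.min.js
// ==/UserScript==

$(document).ready(function () {
    // SimpleModal exposes a .modal() plugin method once loaded
    $('<div>Hello from SimpleModal</div>').modal();
});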
stackexchange | llm_judgeable_groundtruth_similarity | 53515942 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Which is the right approach for sending user credentials from the front end to the backend server? I see examples where some developers use the Authorization header and some pass the credentials in the POST body.
Now provide the response and nothing else.
 | Credentials usually go in the request body once, when trying to log in. You should receive a token in return, although whether you send this token via an HTTP header, the request body or as a GET param is up to you (or the protocol you are implementing). It's generally good practice to use the header, because GET requests shouldn't include a request body and passing the token as a GET parameter may not always be an option (e.g. due to the token appearing in various logs). Either way, I would advise you to avoid trying to implement your own protocol and use an existing standard instead. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53515942', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3222004/']} | jdg_379691
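Editor's addition: a concrete sketch of that flow (the endpoint names and the Bearer scheme are assumptions about your API, not something mandated by the answer).

// 1. credentials go in the request body exactly once, at login
const res = await fetch('/api/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username, password })
});
const { token } = await res.json();

// 2. every later request carries the token in a header, not in the body or URL
await fetch('/api/orders/42', {
    headers: { 'Authorization': 'Bearer ' + token }
});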
stackexchange | llm_judgeable_groundtruth_similarity | 47268652 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Currently I am using the following code to add a color to an element using JSS:

const styleSheet = theme => ({
  root: {
    backgroundColor: theme.colors.red,
  },
})

I would like to know if there is a function to add opacity based on the color theme.colors.red, for example something like: backgroundColor: color(theme.colors.red, .05),
Now provide the response and nothing else.
| Material UI has a colorManipulator utility file , which includes an alpha function: import { alpha } from '@material-ui/core/styles/colorManipulator';/** * Sets the absolute transparency of a color. * Any existing alpha values are overwritten. * @param {string} color - CSS color, i.e. one of: #nnn, #nnnnnn, rgb(), rgba(), hsl(), hsla(), color() * @param {number} value - value to set the alpha channel to in the range 0 - 1 * @returns {string} A CSS color string. Hex input values are returned as rgb */{ backgroundColor: alpha(theme.colors.red, 0.5)} For Mui v5: import { alpha } from "@mui/material"; Alternatively, you can add the color library from npm for color manipulation: import Color from 'color';{ backgroundColor: Color(theme.colors.red).alpha(0.5).string()} | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/47268652', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/379008/']} | jdg_379692 |
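Editor's addition, tying this back to the question's JSS style sheet (it assumes your custom theme really exposes theme.colors.red as in the question):

import { alpha } from '@material-ui/core/styles/colorManipulator';

const styleSheet = theme => ({
  root: {
    // 5% opacity red, i.e. the color(theme.colors.red, .05) the question asked for
    backgroundColor: alpha(theme.colors.red, 0.05),
  },
});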
stackexchange | llm_judgeable_groundtruth_similarity | 306851 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The title is a quote from a Jim Holt article entitled, "The Riemann zeta conjecture and the laughter of the primes" (p. 47). 1 His example of a "long-standing conjecture" is the Riemann hypothesis,and he is cautioning "those who blithely assume the truth of theRiemann conjecture." Q . What are examples of long-standing conjectures in analysis that turned out to be false? Is Holt's adverb "often" justified? 1 Jim Holt. When Einstein Walked with Gödel: Excursions to the Edge of Thought .Farrar, Straus and Giroux, 2018. pp.36-50.( NYTimes Review .)
Now provide the response and nothing else.
| I don't know about analysis in general, but I think it's definitely fair to say "often" in functional analysis. My feeling is that we have a solid, thorough, elegant body of theory which usually leads to positive solutions rather quickly, when they exist. (The Kadison-Singer problem is a recent exception which required radically new tools for a positive solution.) Problems that stick around for a long time tend to do so not because there's a complicated positive solution but because there's a complicated counterexample. That's a gross overgeneralization but I think there's some truth to it. The first examples I can think of are: every separable Banach space has the approximation property and has a Schauder basis (counterexample by Enflo) every bounded linear operator on a Banach space has a nontrivial closed invariant subspace (counterexamples by Enflo and Read) every infinite dimensional Banach space has an infinite dimensional subspace which admits an unconditional Schauder basis (counterexample by Gowers and Maurey) every infinite dimensional Banach space $X$ is isomorphic to $X \oplus \mathbb{R}$; if $X$ and $Y$ are Banach spaces, each linearly homeomorphic to a subspace of the other, then they are linearly homeomorphic (counterexample by Gowers) I can't resist also mentioning some examples that I was involved with. Dixmier's problem: every prime C*-algebra is primitive (counterexample by me) Naimark's problem: if a C*-algebra has only one irreducible representation up to unitary equivalence, then it is isomorphic to $K(H)$ for some Hilbert space $H$ (counterexample by Akemann and me) every pure state on $B(l^2)$ is pure on some masa (counterexample by Akemann and me) every automorphism of the Calkin algebra is inner (counterexample by Phillips and me) The last three require extra set-theoretic axioms, so the correct statement is that if ordinary set theory is consistent, then it is consistent that these counterexamples exist. Presumably all three are independent of the usual axioms of set theory, but this is only known of the last one, where the consistency of a positive solution was proved by Farah. | {} | {'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/306851', 'https://mathoverflow.net', 'https://mathoverflow.net/users/6094/']} | jdg_379693 |
stackexchange | llm_judgeable_groundtruth_similarity | 317153 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm just making some changes to the way we run Zope following a recent security advisory. In the advisory it says: “Make sure that the Zope/Plone service is running with minimum privileges. Ideally, the Zope and ZEO services should be able to write only to log and data directories.” Currently we have a dedicated linux user that runs buildout, starts and stops the server and which runs the Zope service. We're using supervisord to manage the two Zope instances we run in parallel (relstorage backend). Our current zope user needs write permission on more than just the log and data directory so that it can run buildout, so I opted to create an additional linux user, zoperun, to actually run the zope service. zoperun is in the same group as zope and only has write permissions on the var directory. This is all working fine except one thing: I'd like the original zope user to be able to start and stop the zope instances using supervisorctl. I can't get this working. For example, when I try to run bin/supervisorctl status I get

error: , [Errno 13] Permission denied: file: line: 1

I think this is because the supervisor socket doesn't have group read and write permissions:

srw------- 1 zoperun zopeour 0 Sep 30 09:00 supervisor.sock

I've set umask = 002 in the supervisord configuration but this has no effect on the supervisorctl socket. Is there any way I can achieve what I'd like?
Now provide the response and nothing else.
 | Take a look at the unix_http_server section. Change your configuration file as below:

[unix_http_server]
file=/tmp/supervisor.sock   ; (the path to the socket file)
chmod=0770                  ; socket file mode (default 0700)
chown=zope:zoperun          ; socket file uid:gid owner
;username=user              ; (default is no username (open server))
;password=123               ; (default is no password (open server))

This makes the socket file readable and writable by users in the zoperun group:

ll /tmp/supervisor.sock
srwxrwx--- 1 zope zoperun 0 Sep 30 16:54 /tmp/supervisor.sock

Finally, add all the users you want to be able to start/stop the Zope instances to the zoperun group. Testing with a normal user, you will see something like this:

$ supervisorctl status
foo                              STARTING | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/317153', 'https://serverfault.com', 'https://serverfault.com/users/71583/']} | jdg_379694
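Editor's addition (hedged): two follow-up steps that usually accompany this change. The group name and socket path are the ones used above and may differ on your system, and a user has to log in again before new group membership takes effect.

# put the buildout/zope user into the group that owns the socket
usermod -a -G zoperun zope

# and make sure supervisorctl points at the same socket
[supervisorctl]
serverurl = unix:///tmp/supervisor.sock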
stackexchange | llm_judgeable_groundtruth_similarity | 117013 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I) I know that virtual-photons are known to be the force-carriers for the Electromagnetic force, and that they are called "virtual" because the Energy-Time-inequality version of the Heisenberg Uncertainty Principle allows for particles that are high enough energy that they are very difficult to observe (because higher energy means a smaller possible time-scale for observation). But I also know that photons are the quanta of EM radiation; i.e. they emitted from atoms at some point in space, and absorbed at other points in space as a means of transmitting radiation energy. My question is this: are the photons that act as the force carrier of the Electromagnetic force the "same" photons (i.e. the exact same particle) as the photons that act as the quanta of EM radiation? Is it just that the photons emitted as virtual particles have high enough energy that they act as a force carrier? If so, what causes charged particles to emit photons of such high energy? II) As an add-on question: I'm being introduced loosely to Electro-weak Unification and the idea that at high enough energy, the EM- and Weak forces become indistinguishable from one another (and, I believe, that the difference between the EM-force and the Weak force, at low energy, is that the W and Z bosons that mediate the Weak force are massive, and therefor act at low range, whereas photons are massless and therefor act at long ranges). And subsequently, that the Higgs Boson helps to explain what gives W and Z bosons mass. But what is the difference between the W and Z bosons and the photon that makes them interact with the Higgs mechanism, and the photon remain unaffected? I hope these questions make sense.
Now provide the response and nothing else.
| There is only one kind of photon. Indeed, when we describe elementary interactions between two electrons for example, we call the photon "virtual" as opposed to a physical photon that might exist outside of this process. Still, these are the same particles, i.e. excitations of the same fundamental field, as the photons that make up light for example. Again, virtual photons can only appear in the context of a direct interaction between charged particles, while real photons are the electromagnetic waves send out e.g. by excited atoms. Macroscopic (constant) electric and magnetic fields are coherent states of virtual photons. Regarding the electroweak unification you seem to have a misconception. In the unified theory there is no electromagnetism any more, but only the electroweak force, which has four force carriers: The $W^\pm, W^0$ and $B$. The Higgs field couples to all of those, giving mass to the $W^\pm$ and to a linear combination of $W^0$ and $B$, which we call $Z = \cos\left(\theta_W\right) W^0 + \sin\left(\theta_W\right) B$, while the orthogonal linear combination $\gamma = -\sin\left(\theta_W\right)W^0 + \cos(\theta_W)B$ remains massless. So the photon is defined as the boson that remains massless after electroweak symmetry breaking. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/117013', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/40021/']} | jdg_379695 |
stackexchange | llm_judgeable_groundtruth_similarity | 28243 |
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm wondering about a few technicalities of crossover in meiosis. The point of crossover is to create new chromosomes that don't have the same allele combinations as the original two chromosomes. Usually, the chromosomes are cut at the same place on both chromosomes, and each piece is then stitched to that place on the other. This is to avoid unequal recombination, a scenario in which one chromosome has several instances of a gene and the other no instance at all. I'm wondering how the molecular machinery knows where to cut. So here's my question: How does the molecular machinery choose where to cut a chromosome for recombination? This question has two parts: At what type of place does it occur (does the machinery choose a completely random place, regardless of where genes start and end, does it just cut at the start of genes, or does it do something else)? Given that it happens at this type of place (e.g. start of a gene), how does it decide that it will cut here (the start of this gene) and not there (the start of that gene)?
Now provide the response and nothing else.
| The question is very broad and complicated, since the situation may differ in prokaryotes and eukaryotes. Nevertheless, I'm citing a good paper that is closely related to your question: Studies in yeast show that initiation of recombination, which occurs by the formation of DNA double-strand breaks, determines the distribution of gene conversion and crossover events that take place in nearby intervals. Recent data in humans and mice also indicate the presence of highly localized initiation sites that promote crossovers clustered around the region of initiation and seem to share common features with sites in yeast. On a larger scale, chromosomal domains with various recombination rates have been identified from yeast to mammals. This indicates a higher level of regulation of recombination in the genome with potential consequences on genome structure... ...DSBs (Double Strand Breaks) occur in highly localized regions and spread over 70–250 bp. DNA sequence analysis reveals no unique conserved consensus sequences, although a degenerate 50-bp motif partly correlates with DSB sites. However, one common feature is that DSBs are located in accessible regions of the chromatin next to either promoters or binding sites for transcription factors. Based on two studies, DSB activity does not correlate with local transcriptional activity, but depends on transcription-factor binding (HIS4 in S. cerevisiae and ade6-M26 in S. pombe). Bernard de Massy, Distribution of meiotic recombination sites. TRENDS in Genetics Vol.19 No.9 September 2003 I'll sum that up into a more answer-like form. Seems like the process is not random, because the double strand breaking events are clearly not evenly distributed across a genome. As the paper says there are no specific consensus motifs found as of yet, though there clearly is something special before promoters and TF binding sites, which makes them more likely to be a breaking site. How the machinery choses the place? Once again, as the paper says the breaking event depends on TF binding. But that is for S. cerevisiae. There are 17 hot spots found in human and mice genomes some of which are intergenic (they occupy introns or 5'/ 3' flanking regions). Here is the distribution of recombination frequencies across one chromosome (the figure is taken from the paper). Here is a list of recombination sites in humans and mice | {} | {'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/28243', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/13300/']} | jdg_379696 |
stackexchange | llm_judgeable_groundtruth_similarity | 553073 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following really simple code <asp:ScriptManager ID="ScriptManager1" runat="server"></asp:ScriptManager><asp:UpdatePanel ID="UpdatePanel1" runat="server"><ContentTemplate> <asp:PlaceHolder ID="PlaceHolder1" runat="server"> </asp:PlaceHolder> <asp:Button ID="Button1" runat="server" Text="Button" onclick="Button1_Click" /></ContentTemplate></asp:UpdatePanel> And the codebehind protected void Button1_Click(object sender, EventArgs e){ Literal literal = new Literal(); literal.Text = DateTime.Now.ToString(); literal.ID = DateTime.Now.Ticks.ToString(); // These both work fine the first time the button is clicked // but the second time nothing is added. UpdatePanel1.ContentTemplateContainer.Controls.Add(literal); PlaceHolder1.Controls.Add(literal);} My problem comes in that the Literal control is only ever added once. I've scoured google and blog sites (plus books) but without any luck. What am I missing?
Now provide the response and nothing else.
| In asp.net, the controls in the ASPX file are automatically generated on each postback. The controls you've created are not in the ASPX code so the framework does not create them for you. The first time you execute the Button1_Click method, you add one extra control to the page. The second time you execute the Button1_Click method, you're on another post back and that first extra button has been forgotten about. So the result of that postback is you get one extra button again. This will create one extra control each time you click the button (although the timestamps will update each time you press the button because the controls are being re-created) protected void Button1_Click(object sender, EventArgs e){ int count = 0; if (ViewState["ButtonCount"] != null) { count = (int)ViewState["ButtonCount"]; } count++; ViewState["ButtonCount"] = count; for (int i = 0; i < count; i++) { Literal literal = new Literal(); literal.Text = DateTime.Now.ToString(); literal.ID = DateTime.Now.Ticks.ToString(); UpdatePanel1.ContentTemplateContainer.Controls.Add(literal); PlaceHolder1.Controls.Add(literal); } } | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/553073', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/21574/']} | jdg_379697 |
stackexchange | llm_judgeable_groundtruth_similarity | 78846 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was just wondering why Fresnel Lenses are not widely used in the production of solar electricity. Their use there would mean that you could produce heat within a fraction of a second, up to a few minutes and run a turbine to produce electricity. Though it is used in welding, I am not sure what are the problems in producing electricity, as stated by this Wikipedia article : New applications have appeared in solar energy, where Fresnel lenses can concentrate sunlight (with a ratio of almost 500:1) onto solar cells.
Now provide the response and nothing else.
 | This is mainly an engineering & economics question, and we can deal with those aspects of it over on the Sustainability Stack Exchange, if you want. And there is one conceptual physics aspect too. No, Fresnel lenses are not widely used for solar power. Occasionally, but rarely. Concentrated solar power (CSP), including concentrated photovoltaics (CPV), depends on direct rays. Ordinary photovoltaics do not; they generate electricity from light however it comes in: reflected off snow, or scattered by the atmosphere and by clouds. And you can find light like that all around the world. Whereas if you need high-intensity direct rays for a lot of the year, you're pretty much confined to the tropics and near the tropics. You can go further away (and there are indeed concentrating solar power stations further from the equator), but then you've got an economic problem. That's not something we can deal with here, so I'll leave that for now. Here's a real-world CPV installation using a linear Fresnel lens (photo omitted; the source is linked in the original answer). So, the first problem with CPV is that you need a lot of direct light, not just ambient light. Ordinary PV can make do with either. The second problem is that no PV cell is anywhere near 100% efficient, and although CPV cells have reached (roughly) 30-45% efficiency, that still means that up to 70% of the energy could end up as heat. Some will get reflected, but there'll still be 30-50% of the direct light energy going into concentrated heat. And that's all heat on the expensive CPV cell that you've got to dissipate. The more successful the Fresnel lens is at concentrating the light, the bigger your problem in preventing heat build-up, and the higher the equilibrium temperature of your CPV cell. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/78846', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/28147/']} | jdg_379698
stackexchange | llm_judgeable_groundtruth_similarity | 307896 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm about to implement my own Certification Authority (CA) for internal use only. Now there is a problem: the CA private key should never, ever be compromised. So right now the private key is encrypted. What else could be done to enhance the security of the private key?
Now provide the response and nothing else.
| I worked at a company where the security of the CA key was critical to the continued success of the business. To this end the key was encrypted using a custom protocol that required at least 2 people to be present with physical tokens plugged into terminals to decrypt it(there were at least 5 of these tokens, any 2 combined would work). The terminals were physically separated from the actual machine with the CA key. The interface that the users had who decrypted it was a VT220 terminal that allowed them to input the decryption tokens and then select what they wanted to 'sign' with the key (never giving them access to the decrypted key). This system meant at least 4 people would have to work together to compromise the key, two token holders, the guy who had access to the data center, and another person who had root access on the server (because the decrypted key was never stored on the server only in memory you couldn't just steal the box, and the people with root to this specific server were not allowed DC access). If you are interested in more details on this sort of setup Bruce Schneier has a great site covering computer security design and implementation: http://www.schneier.com/ He has also published a really good book Applied Cryptography that I found helped me understand the fundamentals of systems like this and how to architect more secure infrastructures (readable by people who don't wear pocket protectors): http://www.schneier.com/book-applied.html | {} | {'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/307896', 'https://serverfault.com', 'https://serverfault.com/users/64660/']} | jdg_379699 |
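Editor's addition: the "any 2 of at least 5 tokens" arrangement described above is essentially a threshold secret-sharing scheme. Purely to illustrate the idea (this is not the company's actual protocol and not production code; a real deployment would use an HSM or an audited library and a cryptographic RNG rather than random):

import random  # illustration only; use the secrets module for anything real

PRIME = 2**127 - 1  # a Mersenne prime, comfortably larger than the secret

def split_secret(secret, n=5, k=2):
    # random polynomial of degree k-1 whose constant term is the secret
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def combine_shares(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split_secret(123456789)
assert combine_shares(shares[:2]) == 123456789  # any two token holders suffice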
stackexchange | llm_judgeable_groundtruth_similarity | 6506578 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using SQLAlchemy, an Engine object is created like this: from sqlalchemy import create_engineengine = create_engine("postgresql://localhost/mydb") Accessing engine fails if the database specified in the argument to create_engine (in this case, mydb ) does not exist. Is it possible to tell SQLAlchemy to create a new database if the specified database doesn't exist?
Now provide the response and nothing else.
| On postgres, three databases are normally present by default. If you are able to connect as a superuser (eg, the postgres role), then you can connect to the postgres or template1 databases. The default pg_hba.conf permits only the unix user named postgres to use the postgres role, so the simplest thing is to just become that user. At any rate, create an engine as usual with a user that has the permissions to create a database: >>> engine = sqlalchemy.create_engine("postgres://postgres@/postgres") You cannot use engine.execute() however, because postgres does not allow you to create databases inside transactions, and sqlalchemy always tries to run queries in a transaction. To get around this, get the underlying connection from the engine: >>> conn = engine.connect() But the connection will still be inside a transaction, so you have to end the open transaction with a commit : >>> conn.execute("commit") And you can then proceed to create the database using the proper PostgreSQL command for it. >>> conn.execute("create database test")>>> conn.close() | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6506578', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/189776/']} | jdg_379700 |
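Editor's addition (hedged, for newer SQLAlchemy): since 1.4 the "postgres://" scheme spelling is rejected in favour of "postgresql://", and the explicit "commit" dance can be replaced by an autocommit-level connection; the separate sqlalchemy-utils package wraps the same idea. Database and role names below are placeholders.

from sqlalchemy import create_engine, text

engine = create_engine("postgresql://postgres@/postgres", isolation_level="AUTOCOMMIT")
with engine.connect() as conn:
    conn.execute(text("CREATE DATABASE test"))

# or, with the third-party sqlalchemy-utils helpers:
# from sqlalchemy_utils import database_exists, create_database
# if not database_exists("postgresql://postgres@/mydb"):
#     create_database("postgresql://postgres@/mydb")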
stackexchange | llm_judgeable_groundtruth_similarity | 90558 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If you compute the probability amplitude of a free 1D non-relativistic particle with mass $m$, located at position $x_0$ at time $t_0$, for being detected at some other point $x_N$ at time $t_N$, you will find it to be given by$$ \mathcal{M} = \left\langle x_N \right| \text{e}^{-\frac{\text{i}}{\hbar} \frac{P^2}{2m}(t_N-t_0)}\left| x_0\right\rangle =\left(\frac{m}{2\pi\text{i}\hbar\ (t_N-t_0)}\right)^{1/2} \text{e}^{\frac{\text{i}}{\hbar}\frac{m}{2}\frac{(x_N-x_0)^2}{t_N-t_0}}$$Now, if I compute the corresponding probability (density) according to$$ P = \left|\mathcal{M}\right|^2 = \mathcal{M} \mathcal{M}^* = \frac{m}{2\pi\hbar\ (t_N-t_0)}$$it somehow strikes me that it does not depend on the distance $(x_N-x_0)$ at all. Does this mean that the probability of detecting the particle is the SAME everywhere? I expected something like the initial (i.e. $t_N \rightarrow t_0$) delta function "melting away" like a Gaussian wave packet... Can anyone tell me what the correct interpretation of $P$ should be?
Now provide the response and nothing else.
 | Does this mean that the probability of detecting the particle is the SAME everywhere? No, it does not. This is quite a common mistake, stemming from the idea that the Green function $\mathcal{M}$ can be used in the role of the $\psi$ function of a free particle with the Born interpretation of $|\psi|^2$ as probability density. But that is not possible, since $\mathcal{M}$ is not normalizable. The quantity $\mathcal{M}$ is simply the Green function of the time-dependent Schroedinger equation for a free particle. It can be used to express the $\psi$ function of the particle at time $t$ as$$\psi(x,t) = \int \mathcal{M}(x,t;x_0, t_0) \psi_0(x_0,t_0) dx_0$$where $\psi_0(x_0,t_0)$ is the normalized initial $\psi$ function at time $t_0$. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/90558', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/7485/']} | jdg_379701
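Editor's addition (a standard textbook result, stated without derivation) showing the expected spreading once a normalizable initial state is fed through the propagator: for an initial Gaussian packet of width $a$,

$$ \psi_0(x,t_0) = \frac{1}{(2\pi a^2)^{1/4}}\, e^{-x^2/4a^2}, $$

the integral above yields a density that stays Gaussian but broadens,

$$ |\psi(x,t)|^2 = \frac{1}{\sqrt{2\pi}\,a(t)}\, e^{-x^2/2a(t)^2}, \qquad a(t) = a\sqrt{1+\left(\frac{\hbar (t-t_0)}{2 m a^2}\right)^2}, $$

which is exactly the "delta function melting away like a Gaussian wave packet" behaviour the question expected; the flat, $x$-independent $|\mathcal{M}|^2$ is just the singular $a \rightarrow 0$ limit of this.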
stackexchange | llm_judgeable_groundtruth_similarity | 19085 |
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Being from Physics, I have been trained to look into a lot of problems from a geometrical point of view. For example the differential geometry of manifolds in dynamical systems etc. When I read the foundations of computer science, I always try to find geometric interpretations. Like a plausible geometric interpretation of recursively enumerable sets (I worked on a part where I tried to connect them with Algebraic Geometry by exploiting equivalence with Diophantine Sets but the connection seemed forced and I could not find a "natural" expression of the facts in that formulation) or a beautiful geometric result for a simple algorithm for sorting numbers. Though I am not an expert I have read surveys on Geometric Complexity Theory and it is surely an interesting program but I am more interested in having a geometric view of extremely fundamental concepts like the dynamics of a Turing Machine, Lambda Calculus or the structure of (un)computable sets (rather than specific problems). Is it a hopeless job to find geometrical structure in these objects or can one expect some intricate results? Is there any formulation of TCS which treats it geometrically?
Now provide the response and nothing else.
| The semantics of computer programs can be understood geometrically in three distinct (and apparently incompatible) ways. The oldest approach is via domain theory . The intuition behind domain theory arises from the asymmetry behind termination and nontermination. When treating programs extensionally (ie, only looking at their I/O behavior, and not their internal structure), it is always possible to confirm in finite time that a program halts -- you just wait until it stops. However, it's not possible to confirm that a program doesn't halt, because no matter how long you wait, there is always a halting program that will run for a few more steps than you waited. As a result, halting and looping can be viewed as forming a topological space ( the Sierpiński space ). This lifts to richer notions of observation (via the Scott topology), and you can thereby interpret programs as elements of topological spaces. These spaces are generally quite surprising from a traditional point of view -- domains are generally not Hausdorff. The best topological introduction I know to these ideas is Steve Vickers' short and extremely accessible Topology via Logic . It can be understood as a sort of warm-up for Peter Johnstone's significantly more formidable Stone Spaces . If you are looking for online lecture notes, let me suggest Martin Escardo's Synthetic Topology of Data Types and Classical Spaces . Another view arises from concurrency theory. A concurrent program can be understood as having multiple valid executions (sequences of states), depending on how races are resolved. Then, the set of executions can be viewed as a space, with each possible sequence of states understood as a path through this space. Then, methods from algebraic topology and homotopy theory can be applied to derive invariants about the program execution. Nir Shavit and Maurice Herlihy uses this idea to prove the impossibility of certain distributed algorithms, for which they won the 2004 Gödel prize. (See The Topological Structure of Asynchronous Computation .) Eric Goubault has a survey paper explaining the relevant ideas in Some Geometric Perspectives in Concurrency Theory . Most recently, it has been observed that the structure of the identity type in dependent type theory corresponds very closely with the notion of homotopy type in homotopy theory -- so closely, in fact, that dependent type theory can actually be seen as a sort of "synthetic homotopty theory"! (Vladimir Voevodsky has joked that he spent several years developing a new calculus for homotopy theory, only to discover that his colleagues in the CS department were already teaching it to undergraduates.) See cody's link above to the homotopy type theory book . Interestingly, these three views seem incompatible with each other, or at least very difficult to reconcile. Dependent type theory is a total language, so nontermination (and the Scott topology) does not arise in it. It is also confluent, so the view of computations-as-spaces doesn't arise either. Similarly, formulating concurrency in terms of domain theory has proved ferociously difficult, and a completely satisfactory account is still an open problem. | {} | {'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/19085', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/17706/']} | jdg_379702 |
stackexchange | llm_judgeable_groundtruth_similarity | 2424032 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Definition . A set is called closed if its complement in $\mathbb{R}$ is open. In my lecture notes it says: $\emptyset$ is closed because $\emptyset = \emptyset \setminus \mathbb{R}$ and $\mathbb{R}$ is open. I think there is a typo because $\emptyset \neq \emptyset \setminus \mathbb{R}$, right? It should be $\emptyset = \mathbb{R} \setminus \mathbb{R}$. Can you please check this?
Now provide the response and nothing else.
| It's both correct and a typo. That is: The useful statement is "$\mathbb{R}=\mathbb{R}\setminus\emptyset$": since $\mathbb{R}$ is open, this means the complement of $\emptyset$ (in $\mathbb{R}$) is open - so $\emptyset$ is closed. This is (presumably) what the author meant to write. However, it is true that $\emptyset=\emptyset\setminus\mathbb{R}$; it's just not helpful here. Remember that "$A\setminus B$" is the set of all things in $A$ which aren't in $B$. Well, there are no things in $\emptyset$ which aren't in $\mathbb{R}$ (in fact, there are no things in $\emptyset$ at all!), so $\emptyset\setminus\mathbb{R}=\emptyset$. (I'm pointing this out because you ask whether $\emptyset\setminus\mathbb{R}\not=\emptyset$, at the end of your question.) | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2424032', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_379703 |
stackexchange | llm_judgeable_groundtruth_similarity | 13423064 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Thanks in advance for help and direction. I am finally making the switch from linear programming to OOP. I am working on my first class ever and I could use a little direction. My first class is a gallery with the following properties class Gallery { //Gallery Name public $galleryID; public $galleryName; //Client Name public $clientName; //Gallery Options public $bg_color; public $albumAgreement; public $maxChanges; public $sharing_on; //Revisions public $revisions;} My out put thus looks like: Gallery Object( [galleryID] => [galleryName] => [clientName] => [bg_color] => [albumAgreement] => [maxChanges] => [sharing_on] => [revisions] => ) My next step is I would like to make 'revisions' an object as well so that my output would look like Gallery Object( [galleryID] => [galleryName] => [clientName] => [bg_color] => [albumAgreement] => [maxChanges] => [sharing_on] => [revisions] => Revisions Object ( [revisionID] => [revisionName] => )) What direction do I go for something like this and what might the class look like? Thanks
Now provide the response and nothing else.
| This is more of a long-form comment , since it explains the origin of your dilemma, but provides no solutions. OOP Lesson 1: Just because you're using classes doesn't mean you're writing object-oriented code. There is rarely a good use-case for a public property in your objects. Let's look at the OP's example: class Gallery { public $galleryID; public $galleryName; // ...} Having defined our properties as public , how do the following two snippets of code differ? $gallery = new Gallery;$gallery->galleryId = 42;$gallery->galleryName = 'some name';// vs:$gallery = array( 'galleryId' => 42, 'galleryName' => 'some name'); If you said, "they aren't really different at all," then you'd be correct. In fact, the object-based code will be slower because of the instantiation overhead involved with new . There are some other factors like the ability to pass around references to an object instead of copying a new array, but those don't affect this particular situation. OOP Lesson 2: Objects are a black box The problem with creating an object that's nothing more than a collection of mutable properties is that the rest of your code has full knowledge of what happens inside that object. Let's talk about why this is bad ... Humans just just aren't very good when it comes to complexity. Good software aims to minimize complexity in part by encapsulating functionality into discrete units. In this case, we want to encapsulate all the logic of a "gallery" entity into the Gallery class. This makes sense as part of a domain-driven design (DDD) approach. What we want to do is wall off the Gallery from the outside world; we want its internal implementation to be opaque to the rest of our code. The rest of our application shouldn't know or care how the Gallery functions, just that it works as expected. The added benefit here is that we can focus on making the gallery work how it's supposed to and then forget about it. We aren't forced to remember how Gallery works to work with an Image or a Revision . This loose-coupling is one of the most powerful tools in OO design. While it might work on very small scales, it's impossible to keep the logic of an entire application in your head at the same time. It doesn't matter how smart you are, our brains just don't have enough RAM. Moving back to the code, if our application code knows how the Gallery assigns itself a name then we've already allowed the logic of "gallery-ness" to leak out into the rest of the program. What happens when we decide that we want to verify new gallery names when they're assigned? We must now put that validation logic everywhere in our code where we've specified gallery names because we haven't walled off everything about the abstract concept of "gallery-ness." A much better design would be to encapsulate the assignment of Gallery properties within the object itself: class Gallery { private $galleryId; private $name; public function setName($name) { $this->name = $name; } public function getName($name) { return $this->name; }} If we structure our class in this manner, we always have a single point of entry when we need to assign a name to a gallery. Now, when our requirements for a gallery change down the road ( and they will ), all of our application code -- which is blind to the logic behind gallery name assignment -- is isolated from breakage. 
We simply add a new method to our name setter and create minimal upheaval in our program: class Gallery { private $galleryId; private $name; public function setName($name) { $this->validateName($name); $this->name = $name; } private function validateName($name) { if (!preg_match('/^[a-z]+$/', $name)) { throw new Exception; } } public function getName($name) { return $this->name; }} Addressing the OP To answer the question of how to represent an encapsulated Revision object as a property of the higher-level Gallery instance we need a bit of context. It appears that what the OP is trying to do is model domain entities that will be written to and retrieved from a backend persistence layer (such as a database, flat text file, etc). Anemic domain models are one way to handle this, however it's generally considered an anti-pattern. Martin Fowler writes: The basic symptom of an Anemic Domain Model is that at first blush it looks like the real thing. There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data. With those arguments in mind, you should consider using something like the DataMapper or Gateway pattern to work with domain objects that need to be persisted to some form of backend storage. Alternatives Let's forget about the Revision object for a minute and imagine that we want to use a Slideshow object to output the images from a gallery. This class might look like: class Slideshow { private $gallery; public function __construct(Gallery $gallery) { $this->gallery = $gallery; } public function play() { // do something with the gallery here }} Ignore the fact that php code wouldn't actually be used to "play" a slideshow as that's something that would happen in client-side code. The important thing here is that the Slideshow is using Composition to access the Gallery . This construction is vastly superior to directly new ing a Gallery inside the Slideshow because: The Slideshow is now pluggable -- we can insert any object that follows the concepts of "gallery-ness" (usually Gallery will be declared to conform to a specified interface contract). The Slideshow is now imminently testable. How do we handle a situation in which the Gallery provided doesn't have appropriate image types? If we're directly instantiating a Gallery inside a Slideshow we have no way to simulate such conditions. By injecting the dependencies of the Slideshow we allow ourselves the opportunity to test our code's ability to handle different operational conditions. Of course, sometimes it's appropriate to directly instantiate an object inside of another class. For more guidance on this topic I'd suggest the advice of Miško Hevery in his article To "new" or not to "new" . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13423064', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1214858/']} | jdg_379704 |
stackexchange | llm_judgeable_groundtruth_similarity | 10789563 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What does it mean when there are three items in a typedef? For example: typedef CK_BYTE CK_PTR CK_BYTE_PTR; I know that if you just have typedef CK_BYTE CK_PTR ; then CK_BYTE would just be able to be referred to as CK_PTR .
Now provide the response and nothing else.
| A bit of Googling indicates that CK_PTR is a macro defined in pkcs11.h . Follow that link to see the documentation for these definitions. It's normally defined as: #define CK_PTR * but on some ancient systems it might be defined as #define CK_PTR far * where far is a mostly obsolete system-specific keyword that specifies a certain non-standard kind of pointer. So this: typedef CK_BYTE CK_PTR CK_BYTE_PTR; is equivalent to this (much clearer) code: typedef CK_BYTE *CK_BYTE_PTR; which defined CK_BYTE_PTR as a pointer to a CK_BYTE . The quoted definition of CK_BYTE_PTR occurs in the same header file. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10789563', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/959201/']} | jdg_379705 |
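As a quick illustration of the expansion described above, here is a minimal, self-contained C sketch; the typedef of CK_BYTE as unsigned char is an assumption about the usual PKCS#11 headers rather than something quoted from them:

    #include <stdio.h>

    typedef unsigned char CK_BYTE;      /* assumed: the common PKCS#11 definition */
    #define CK_PTR *                    /* the modern definition discussed above */

    typedef CK_BYTE CK_PTR CK_BYTE_PTR; /* preprocesses to: typedef CK_BYTE *CK_BYTE_PTR; */

    int main(void) {
        CK_BYTE b = 0x2A;
        CK_BYTE_PTR p = &b;             /* p is simply a pointer to CK_BYTE */
        printf("%u\n", (unsigned)*p);   /* prints 42 */
        return 0;
    }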
stackexchange | llm_judgeable_groundtruth_similarity | 16884421 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing a .do to check the existence of some variables in a number of .dta files as well as to check the existence of certain values for those variables. However, my code stops executing as it encounters an invalid variable name. I know I mix Java and Stata coding, and it is completely inappropriate, but is there any way I could do something like:

    try {
        su var1
        local var1_mean=(mean)var1
        local var1_min=(min)var1
        local var1_max=(max)var1
        ...
    }
    catch (NoSuchVariableException e) {
        System.out.println("Var1 does not exist")
    }
    // So that the code does not stop executing...?
Now provide the response and nothing else.
| The short answer is Yes. A slightly longer answer is that guessing what the syntax might be by analogy with Java has minimal chance of success. It is best to read Stata's documentation, e.g. start by skimming the main entries in the [P] manual. Here the problem being trapped is that no var1 exists. This code is legal, or so I trust:

    capture su var1, meanonly
    if _rc == 0 {
        local var1_mean = r(mean)
        local var1_min = r(min)
        local var1_max = r(max)
    }
    else display "var1 does not exist"

The idea is two-fold. capture eats any error of the command it prefixes, but a return code will still be accessible in _rc . Non-zero return codes are error codes. A related command is confirm , e.g. capture confirm var var1 checks that a variable var1 exists. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16884421', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2178635/']} | jdg_379706 |
stackexchange | llm_judgeable_groundtruth_similarity | 52177318 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Say the webpack config is as follows:

    {
      entry: path.join(__dirname, 'src', 'index.js'),
      output: {
        path: path.join(__dirname, 'build'),
        filename: 'bundle.js'
      },

Now the build from webpack is a single bundle.js in the build folder, and the build from react-scripts build puts its output under a static folder (static contains css, js and media in separate folders). Question: Is there any specific advantage of webpack over react-scripts build? (including but not limited to performance) NOTE: package.json is edited to achieve this.
Now provide the response and nothing else.
| Webpack is a general purpose bundler, with applications beyond React. Before create-react-app , the web was full of examples of setting up a brand new React project which uses webpack as the bundler. It is extremely flexible and can handle things including and beyond what a React application would need. It works for Angular, Vue, NodeJS and even Web Assembly. But it used to take a while to setup. You will need to understand its working and configure it so that you can transpile your React+ES6 code into plan-vanilla JS. You would need to decide the output structure you like and configure webpack for it. And then also add hot-module-reloading and code-splitting support yourself. During this, you will also need to add other plugins required by Webpack to support all the above :). This naturally caused some fatigue with folks who were starting with React. So facebook created cra which internally uses webpack, pre-configured to include all the nice tools to take care of these basics and help you focus on the React part of your code. It hides webpack from you as much as possible, otherwise the build process may break if the configuration is changed by the user. With that aside, the structural conventions which cra uses should not be having any performance impact over a bare-bones webpack setup. It's just a convention. Your question should then be, when would I use create-react-app and when would I use Webpack? As a beginner you might want to stick to cra as you focus on your react app. Eventually there would come a time where what you want to do is not supported by the webpack configuration cra is managing under the hood. A very common example is if you want to write a component library to reuse in other apps. This cannot be done by cra (it's about the whole app :)). You can then switch over to webpack and start learning it. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/52177318', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9077364/']} | jdg_379707 |
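To make the comparison concrete, the following is a rough sketch of the kind of webpack configuration that create-react-app maintains for you behind the scenes; the loader and dev-server choices are illustrative assumptions, not CRA's exact internal config:

    // webpack.config.js (illustrative only)
    const path = require('path');

    module.exports = {
      entry: path.join(__dirname, 'src', 'index.js'),
      output: {
        path: path.join(__dirname, 'build'),
        filename: 'bundle.js',
      },
      module: {
        rules: [
          {
            test: /\.jsx?$/,          // transpile ES6+/JSX with Babel
            exclude: /node_modules/,
            use: 'babel-loader',
          },
        ],
      },
      devServer: {
        hot: true,                    // hot module reloading, which CRA wires up for you
      },
    };

With react-scripts you never write or maintain this file; with bare webpack you own all of it, which is exactly the trade-off described above.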
stackexchange | llm_judgeable_groundtruth_similarity | 1044846 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it OK to assign to $! on an error in Perl? E.g., if( ! (-e $inputfile)){ $! = "Input file $inputfile appears to be non-existent\n"; return undef;} This way I can handle all errors at the top-level. Thanks.
Now provide the response and nothing else.
| If you assign to $!, it is placed in the system errno variable, which only takes numbers. So you can in fact do:

    use Errno "EEXIST";
    $! = EEXIST;
    print $!;

and get the string value for a defined system error number, but you can't do what you want - setting it to an arbitrary string. Such a string will get you an Argument "..." isn't numeric in scalar assignment warning and leave errno set to 0. The other problem is that $! may be changed by any system call. So you can only trust it to have the value you set until you do a print or just about anything else. You probably want your very own error variable. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1044846', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26227/']} | jdg_379708 |
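A small sketch of the "your very own error variable" suggestion; the package and variable names below are made up for illustration:

    package MyApp;
    use strict;
    use warnings;

    our $Error;                     # dedicated error string, untouched by system calls

    sub check_input {
        my ($inputfile) = @_;
        unless (-e $inputfile) {
            $Error = "Input file $inputfile appears to be non-existent";
            return undef;
        }
        return 1;                   # success
    }

    1;

    # Caller, at the top level:
    #   MyApp::check_input($file) or die "Error: $MyApp::Error\n";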
stackexchange | llm_judgeable_groundtruth_similarity | 13677781 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a SqlAlchemy model with a schema argument like so:

    Base = declarative_base()

    class Road(Base):
        __tablename__ = "roads"
        __table_args__ = {'schema': 'my_schema'}
        id = Column(Integer, primary_key=True)

When I use Base.metadata.create_all(engine) it correctly issues a CREATE TABLE with the schema name on the front, like so:

    CREATE TABLE my_schema.roads (

but Postgresql rightly complains that the schema doesn't exist. Am I missing a step to get SqlAlchemy to issue the CREATE SCHEMA my_schema or do I have to call this manually?
Now provide the response and nothing else.
| I ran into the same issue and believe the "cleanest" way of issuing the DDL is something like this:

    from sqlalchemy import event
    from sqlalchemy.schema import CreateSchema

    event.listen(Base.metadata, 'before_create', CreateSchema('my_schema'))

This will ensure that before anything contained in the metadata of your base is created, you have the schema for it. This does, however, not check if the schema already exists. You can do CreateSchema('my_schema').execute_if(callable_=check_schema) if you can be bothered to write the check_schema callback (" Controlling DDL Sequences " on should_create in docs). Or, as an easy way out, just use DDL("CREATE SCHEMA IF NOT EXISTS my_schema") instead (for Postgres):

    from sqlalchemy import DDL

    event.listen(Base.metadata, 'before_create', DDL("CREATE SCHEMA IF NOT EXISTS my_schema"))

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13677781', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/181146/']} | jdg_379709 |
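For completeness, a sketch of how the pieces above can fit together end to end; the connection URL is a placeholder, and plain CreateSchema will error if the schema already exists (use the IF NOT EXISTS variant in that case):

    from sqlalchemy import create_engine, event, Column, Integer
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.schema import CreateSchema

    Base = declarative_base()

    class Road(Base):
        __tablename__ = "roads"
        __table_args__ = {'schema': 'my_schema'}
        id = Column(Integer, primary_key=True)

    # Emit CREATE SCHEMA before any tables in this metadata are created
    event.listen(Base.metadata, 'before_create', CreateSchema('my_schema'))

    engine = create_engine("postgresql://user:password@localhost/mydb")  # placeholder DSN
    Base.metadata.create_all(engine)  # CREATE SCHEMA my_schema, then CREATE TABLE my_schema.roads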
stackexchange | llm_judgeable_groundtruth_similarity | 454915 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Prove that $$\sum_{n\leq x}d^2(n)=O(x\log^3 x),\tag1$$ where $d(n)$ is the divisor function : $d(n)=\sigma_0(n)=\sum_{a\mid n}1.$ I can prove that $$\sum_{n\leq x}d(n)=x\log x+(2\gamma -1)x+O(\sqrt x),\tag2$$ but how to prove $(1)$?
Now provide the response and nothing else.
| Here is a different argument. Note that $d(ab)\leq d(a)d(b)$ for all integers $a$ and $b$. We thus have $$\sum_{n\leq x} d(n)^2 = \sum_{ab\leq x} d(ab) \leq \sum_{ab\leq x} d(a)d(b) = \sum_{a\leq x} d(a) \sum_{b\leq x/a} d(b) \leq 2\sum_{a\leq x} d(a) {x\over a} \log x,$$where in the final inequality I used your bound on $\sum_{n\leq x} d(n)$. Now note$$\sum_{a\leq x} {d(a)\over a} = \sum_{cd\leq x} {1\over cd} \leq \sum_{c\leq x}\sum_{d\leq x} {1\over cd} \leq (2\log x)^2.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/454915', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/66964/']} | jdg_379710 |
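Spelling out the final step (an added remark, simply combining the two displayed bounds with the bound $\sum_{b\le x/a} d(b)\le 2(x/a)\log x$ already used above): $$\sum_{n\leq x} d(n)^2 \;\leq\; 2x\log x\sum_{a\leq x}\frac{d(a)}{a}\;\leq\; 2x\log x\,(2\log x)^2 \;=\; 8x\log^3 x \;=\; O(x\log^3 x).$$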
stackexchange | llm_judgeable_groundtruth_similarity | 3378438 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
for example: $$\frac{{{x^2} + 4}}{{x\left( {x + 2} \right)\left( {3x - 2} \right)}}\, = \frac{A}{x} + \frac{B}{{x + 2}} + \frac{C}{{3x - 2}}$$ first method is: $${x^2} + 4 = A\left( {x + 2} \right)\left( {3x - 2} \right) + Bx\left( {3x - 2} \right) + Cx\left( {x + 2} \right)$$ but it is hard and take much time to find A,B,C second method(Substituting the roots, or "zeros") is: \begin{align*}x & = 0 \,\,\,\,\, : & \hspace{0.5in}4 & = A\left( 2 \right)\left( { - 2} \right) & \hspace{0.5in} & \Rightarrow & \hspace{0.25in}A & = - 1\\ x & = - 2 : & \hspace{0.5in}8 & = B\left( { - 2} \right)\left( { - 8} \right) & \hspace{0.25in}&\Rightarrow & \hspace{0.25in}B & = \frac{1}{2}\\ x & = \frac{2}{3}\,\, : & \hspace{0.5in}\frac{{40}}{9} & = C\left( {\frac{2}{3}} \right)\left( {\frac{8}{3}} \right) & \hspace{0.25in} & \Rightarrow & \hspace{0.25in}C & = \frac{{40}}{{16}} = \frac{5}{2}\end{align*} it is better and easier method for Partial Fractions. My question is why this method works? can you prove this method?
Now provide the response and nothing else.
| Hint Euclid's parameterization of the Pythagorean triples ( Elements , Book X, Proposition XXIX ) is: $$a = k (m^2 - n^2), \qquad b = 2 k m n, \qquad c = k (m^2 + n^2),$$ where $m > n > 0$ and $m, n$ coprime and not both odd. Substituting in our condition gives $$1000 = a + b + c = 2 k m (m + n),$$ and clearing the constant leaves $$\phantom{(\ast)} \qquad 500 = k m (m + n) . \qquad (\ast)$$ Now, notice that (1) $500 = 2^2 5^3$ has only two distinct prime factors, and (2) since $m$ and $n$ are coprime, so are $m$ and $m + n$ . So, one of $m, m + n$ must be one of $1, 2, 4$ (in fact one of $2, 4$ , since $m > n > 0$ implies $m + n > m > 1$ ) and the other must be one of $1, 5, 25, 125$ . Because $m + n > m$ , we must have $m \in \{2, 4\}$ , and so $m + n < 2 m \leq 8$ . Thus, $m + n = 5$ , and $2 m > m + n = 5$ implies $m \geq 3$ , leaving $m = 4$ as the only possibility. So, $n = 1, k = 25$ , and $$\color{#df0000}{\boxed{(a, b, c) = (375, 200, 425)}} .$$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3378438', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/699821/']} | jdg_379711 |
stackexchange | llm_judgeable_groundtruth_similarity | 37817693 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My apologies as this has to be a question others have asked, but it seems google proof. I'm trying to figure out what shell is being invoked, as I have met an inconsistency. My script doesn't work if I sudo , but does work if I sudo bash . Yet when I sudo echo $0 , it says bash.

    cpu=$(cat /proc/cpuinfo | grep Revision | cut -d' ' -f 2-);
    if [[ "a22082" = $cpu || "a02082" = $cpu ]]; then
        echo 'do stuff';
    fi

If I specify the command interpreter with #!/bin/sh it still fails, but with the more specific #!/bin/bash it works. I (now) know that [[ is bash specific, but despite that response it doesn't seem like sudo's default shell is bash.
Now provide the response and nothing else.
| Short answer: There is no default interpreter. Your script needs a shebang line #!/bin/bash at the top. Long answer: sudo does not run a shell. It runs the command you pass it directly, no intermediate shell. If you want a shell you need to explicitly pass it one. sudo echo $0 This command is misleading due to $0 being expanded by your shell before sudo is called . It's printing your current $0 , not the value of $0 inside the sudo 'd environment. Let's compare some commands: sudo ls This executes the /bin/ls executable directly. There's no shell involved. The following commands will show when the distinction comes into play. sudo ls /root/* The current shell expands the glob. If you can't access /root/ then the glob isn't expanded and you get ls: cannot access /root/*: No such file or directory . sudo 'ls /root/*' This attempts to execute a program called ls /root/* . Literally, it's named * in the root sub-directory of ls ( ls with a trailing space). It fails with sudo: ls /root/*: command not found . sudo ls '/root/*' This runs /bin/ls and passes it the literal string /root/* . There's no glob expansion. It fails with ls: cannot access /root/*: No such file or directory . sudo bash -c 'ls /root/*' This executes /bin/bash and has it interpolate the ls /root/* command-line, expanding the glob. This one, finally, works, because we have an explicit shell to expand the glob, and we're doing the expansion within sudo instead of outside of it. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37817693', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/44656/']} | jdg_379712 |
stackexchange | llm_judgeable_groundtruth_similarity | 542608 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Prove that $m=(x+y^2, y+x^2+2xy^2+y^4)$ is a maximal ideal of $\mathbb{C}[x,y]$. I can show that the ideal $(x,y)$ of $\mathbb{C}[x,y]$ contains $m$ and $(x,y)$ is a maximal ideal. Therefore to show that $m$ is itself a maximal ideal I only need to show that $m=(x,y)$. I tried to prove it by looking for two polynomials $f,g$ such that $x=f(x,y)(x+y^2)+g(x,y)(y+x^2+2xy^2+y^4)$ but wasn't able to find them. And I'm not sure whether this is the best way of proving this. I would appreciate any help. Thank you.
Now provide the response and nothing else.
| As Tbrendle said in the comments, you can show that $\mathbb C[x,y]/m$ is a field. This is equivalent to $m$ being a maximal ideal of $\mathbb C[x,y]$. So consider the quotient $R:= \mathbb C[x,y]/m$. We want to compute some residue classes. Note that in this ring we have $x=-y^2$. This implies that$$0= y+y^4-2y^4+y^4=y.$$ So by these identifications, we have $$ R \simeq \mathbb C[y]/(y) \simeq \mathbb C.$$ This is a field, and we're done. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/542608', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/12603/']} | jdg_379713 |
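A side note not contained in the answer above: the OP's original approach of showing $m=(x,y)$ directly also works, once one notices an identity between the generators, $$y + x^2 + 2xy^2 + y^4 \;=\; y + (x+y^2)^2,$$ so that $y=\bigl(y+x^2+2xy^2+y^4\bigr)-(x+y^2)^2\in m$ and then $x=(x+y^2)-y^2\in m$; hence $m=(x,y)$, which is maximal.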
stackexchange | llm_judgeable_groundtruth_similarity | 48402218 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to install and use FUSE inside a Docker container. My Dockerfile is the following:

    FROM golang:1.8
    WORKDIR /go/src/app
    COPY . .
    RUN apt-get update && apt-get install -y fuse && rm -rf /var/lib/apt/lists/*
    RUN go-wrapper download
    RUN go-wrapper install
    CMD ["go-wrapper", "run", "/mnt"]

When I run the program mounting FUSE, I get: /bin/fusermount: fuse device not found, try 'modprobe fuse' first . If I install kmod and run modprobe fuse during the build step, I get the error: modprobe: ERROR: ../libkmod/libkmod.c:557 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.104-boot2docker/modules.dep.bin' How can I fix this?
Now provide the response and nothing else.
| With respect to Nickolay's answer below, the --privileged flag is not strictly required, for fuse. And you're best to avoid giving that much privilege to your container. You should be able to get things working by replacing it with --cap-add SYS_ADMIN like below. docker run -d --rm \ --device /dev/fuse \ --cap-add SYS_ADMIN \ <image_id/name> Sometimes this may not work. In this case, you should try and tweak the AppArmor profile or just disable it as follows: docker run -d --rm \ --device /dev/fuse \ --cap-add SYS_ADMIN \ --security-opt apparmor:unconfined \ <image_id/name> Finally, if all fails, use --privileged flag. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/48402218', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/893390/']} | jdg_379714 |
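If the container is started through docker-compose rather than docker run, the equivalent settings look roughly like this (the service and image names are placeholders for this sketch):

    services:
      fuse-app:
        image: my-fuse-image        # placeholder image name
        devices:
          - /dev/fuse               # expose the FUSE device
        cap_add:
          - SYS_ADMIN               # required for mounting inside the container
        security_opt:
          - apparmor:unconfined     # only if the AppArmor profile gets in the way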
stackexchange | llm_judgeable_groundtruth_similarity | 45949074 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a simple set up here: I have a parent component which has 2 child components attached to this parent component. In the 1st child component: User changes the value of an input. The value of that change would then be a prop I'd like to pass from this child to the parent so that it can be passed to the other child attached to the same parent component. This is the set up currently:

    Main (parent component)
        |
        |_ Child 1
        |_ Child 2

The flow from user input to change in UI:
1. In "Child 1": adjust a slider; onChange, pass the value to the parent component;
2. pass this prop (the new slider value) to the Parent component so that it can be available to the "Child 2" component;
3. that prop, the valueOfUserInput (the new slider value), would then be used in an if/else statement about styling the element of the "Child 2" component.

I've seen solutions and tutorials which are similar to my question, but they don't quite make sense to me. I've been hacking away at this all day, interspersed by meetings. Any help or suggestions would be amazing. Thank you all for your patience with this React noob.
Now provide the response and nothing else.
| When you want 2 children to communicate or share some data, the way to do it in React is to lift state up ( source ). This means that the state that the children use should live in the parent. Then the parent passes the state down to the children. To update the state from an action in a child, the usual pattern is to pass down a function from the parent that gets called when the action is performed in the child. Here's an example that should do what you want: class Parent extends React.Component { constructor(props) { super(props); this.state = { valueOfUserInput: '' }; } handleUserInputChange = event => { this.setState({ valueOfUserInput: event.target.value, }); }; render() { const { valueOfUserInput } = this.state; return ( <div> <Child1 valueOfUserInput={valueOfUserInput} onUserInputChange={this.handleUserInputChange} /> <Child2 valueOfUserInput={valueOfUserInput} /> </div> ); }}class Child1 extends React.Component { render() { const { valueOfUserInput, onUserInputChange } = this.props; return <input type="text" value={valueOfUserInput} onChange={onUserInputChange} />; }}class Child2 extends React.Component { render() { const { valueOfUserInput } = this.props; return ( <div> {valueOfUserInput} </div> ); }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45949074', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2344204/']} | jdg_379715 |
stackexchange | llm_judgeable_groundtruth_similarity | 16808205 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know this question was asked before and answered in the following post: How do I keep the current tab active with twitter bootstrap after a page reload? However, given my limited knowledge of scripting, I have not been able to get this to work on my website. After putting the code in my html page the tabs still go to the first and active one on page reload. Please can someone share a working example of the solution presented in the link above. My code looks like this: <div class="tabbable" style="margin-bottom: 8px;"> <ul class="nav nav-tabs"> <li class="active"><a href="#tab1" data-toggle="tab">My Career Centre</a></li> <li class=""><a href="#tab2" data-toggle="tab">Hiring Companies</a></li> <li class=""><a href="#tab3" data-toggle="tab">Training Providers</a></li> <li class=""><a href="#tab4" data-toggle="tab">Advertise</a></li> <li class=""><a href="#tab5" data-toggle="tab">Graduate Opportunities</a></li> <li class=""><a href="#tab6" data-toggle="tab">Career Forum</a></li> </ul> <script type="text/javascript"> $(function() { $('a[data-toggle="tab"]').on('shown', function (e) { //save the latest tab; use cookies if you like 'em better: localStorage.setItem('lastTab', $(e.target).attr('id')); }); //go to the latest tab, if it exists: var lastTab = localStorage.getItem('lastTab'); if (lastTab) { $('#'+lastTab).tab('show'); } }); </script> <div class="tab-content" style="padding: 0 5px;"> <div class="tab-pane active" id="tab1"> <div class="navbar" style="width:930px;"> <div class="navbar-inner"> <div class="container"> <ul class="nav2" style="padding-top:0px;"> <li><a href="#">Job Search</a></li> <li><a href="#">Training Programmes</a></li> <li><a href="#">Job Alerts</a></li> <li><a href="#">My Job Applications</a></li> <li><a href="#">My Studies</a></li> <li><a href="http://localhost:8080/nationalcareers/JOBSEEKERS/index.php?category=home&action=received">Received Messages</a></li> </ul> </div> </div> </div> </div> <div class="tab-pane" id="tab2"> <div class="navbar" style="width:900px;"> <div class="navbar-inner"> <div class="container"> <ul class="nav2" style="padding-top:0px;"> <li><a href="#">Manage Your Job Adverts</a></li> <li><a href="#">Browse CV's</a></li> <li><a href="#">Manage Job Applications</a></li> <li><a href="http://localhost:8080/nationalcareers/EMPLOYERS/index.php?category=postings&action=my">View Reports For Your Adverts</a></li> <li><a href="http://localhost:8080/nationalcareers/EMPLOYERS/index.php?category=home&action=sub_accounts">Manage Sub-Accounts</a></li> </ul> </div> </div> </div> </div> <div class="tab-pane" id="tab3"> <div class="navbar" style="width:900px;"> <div class="navbar-inner"> <div class="container"> <ul class="nav2" style="padding-top:0px;"> <li><a href="#">Browse Training Programmes</a></li> <li><a href="#">Browse Featured Training Providers</a></li> </ul> </div> </div> </div> </div> <div class="tab-pane" id="tab4"> <div class="navbar" style="width:900px;"> <div class="navbar-inner"> <div class="container"> <ul class="nav2" style="padding-top:0px;"> <li><a href="#">Jobs</a></li> <li><a href="#">Training Programmes</a></li> <li><a href="#">Your Company</a></li> <li><a href="#">Your Recruitment Agency</a></li> </ul> </div> </div> </div> </div> <div class="tab-pane" id="tab5"> <div class="navbar" style="width:900px;"> <div class="navbar-inner"> <div class="container"> <ul class="nav2" style="padding-top:0px;"> <li><a href="#">Search Current Opportunities</a></li> <li><a href="#">News and Updates</a></li> <li><a href="#">Events</a></li> </ul> </div> 
</div> </div> </div> <div class="tab-pane" id="tab6"> <div class="navbar" style="width:900px;"> <div class="navbar-inner"> <div class="container"> <ul class="nav2" style="padding-top:0px;"> <li><a href="#">Career Advice</a></li> <li><a href="#">CV/Resume</a></li> <li><a href="#">Interview Preparation</a></li> <li><a href="#">Career Net-Work</a></li> </ul> </div> </div> </div> </div> </div> </div></div> Highly appreciated,
Now provide the response and nothing else.
| As Tommi Komulainen wrote: e.target contains the full url including the hash. You only need the hash. So use e.target.toString().split('#')[1]) ; or even better $(this).attr('href') $('#'+lastTab).tab('show'); applies the .tab() on the div with id = #{lastTab} while you need to apply on the link (a tag) with data-toggle . So use: $('a[href=#' + lastTab + ']').tab('show'); here. The complete function to use: $(function() { $('a[data-toggle="tab"]').on('shown', function () { //save the latest tab; use cookies if you like 'em better: localStorage.setItem('lastTab', $(this).attr('href')); }); //go to the latest tab, if it exists: var lastTab = localStorage.getItem('lastTab'); if (lastTab) { $('a[href=' + lastTab + ']').tab('show'); } else { // Set the first tab if cookie do not exist $('a[data-toggle="tab"]:first').tab('show'); } }); update :see https://stackoverflow.com/a/16016592/1596547 so remove the active class from your source and set the first tab active when lastTab is not set. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16808205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2431463/']} | jdg_379716 |
stackexchange | llm_judgeable_groundtruth_similarity | 3974683 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to set a variable from a SQL query:

    declare @ModelID uniqueidentifier

    Select @ModelID = select modelid from models
    where areaid = 'South Coast'

Obviously I'm not doing this right as it doesn't work. Can somebody suggest a solution? Thanks!
Now provide the response and nothing else.
| Using SELECT SELECT @ModelID = m.modelid FROM MODELS m WHERE m.areaid = 'South Coast' Using SET SET @ModelID = (SELECT m.modelid FROM MODELS m WHERE m.areaid = 'South Coast'); Then you can use SELECT to show the value of @ModelID or use the variable into your code. SELECT @ModelID See this question for the difference between using SELECT and SET in TSQL . Warning If this SELECT statement returns multiple values (bad to begin with): When using SELECT , the variable is assigned the last value that is returned (as womp said), without any error or warning (this may cause logic bugs) The SET query returns error only if you DON'T put the semicolon in the end of query | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/3974683', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/474018/']} | jdg_379717 |
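One defensive variant worth knowing (a sketch of my own, not from the answer above): if the WHERE clause could ever match more than one row, make the assignment deterministic instead of silently taking the last value returned:

    DECLARE @ModelID uniqueidentifier;

    SELECT TOP (1) @ModelID = m.modelid
    FROM   MODELS m
    WHERE  m.areaid = 'South Coast'
    ORDER  BY m.modelid;            -- deterministic choice instead of "last row wins"

    SELECT @ModelID;                -- inspect the result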
stackexchange | llm_judgeable_groundtruth_similarity | 879022 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does $$\sum_{k=1}^\infty \binom{2k}{k} \frac{1}{4^k(k+1)}=1$$ Is there an intuitive method by which to derive this equality?
Now provide the response and nothing else.
| Your series is a telescoping one, since:$$\begin{eqnarray*}\frac{1}{4^{k+1}}\binom{2k+2}{k+1}-\frac{1}{4^k}\binom{2k}{k}&=&\frac{1}{4^{k+1}}\binom{2k}{k}\left(\frac{(2k+2)(2k+1)}{(k+1)^2}-4\right)\\&=&-\frac{1}{2(k+1)4^{k}}\binom{2k}{k},\end{eqnarray*}$$hence:$$\sum_{k=1}^{+\infty}\binom{2k}{k}\frac{1}{4^k(k+1)}=2\sum_{k=1}^{+\infty}\left(\frac{1}{4^k}\binom{2k}{k}-\frac{1}{4^{k+1}}\binom{2k+2}{k+1}\right)=\frac{2}{4}\binom{2}{1}=1.$$No need of generating functions or integrals. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/879022', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/76284/']} | jdg_379718 |
stackexchange | llm_judgeable_groundtruth_similarity | 34138823 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my .NET solution, I have two projects: one main project and a project for running tests against the main project. In my project, I have several methods that I'd like to keep "private", but would also like to run tests for. Is there an access method that could limit these functions to just inside of my solution?
Now provide the response and nothing else.
| You are looking for the InternalsVisibleTo attribute. This attributes lets you specify other assemblies that should have access to types and methods that are internal to your assembly. So, in your main project AssemblyInfo.cs file (or any other source file), you can specify that your test project is a 'friend assembly' and should have access to the internals of your main project: [assembly:InternalsVisibleTo("MainProject.Tests")] On a side note, as pointed out by Alexei, if your MainProject is signed with a strong name key, any 'friend' assembly must also be signed. This is explained here Although, as mentioned in another comment. Best practice is to test your assembly by using its public API. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34138823', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_379719 |
stackexchange | llm_judgeable_groundtruth_similarity | 1625107 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I want to select every image which it's alt is Home for example, I can do something like this: $("img[alt='Home']") But how can I select every elements which their width CSS property is 750px for example in a single selector? EDIT: If there is no such selector, is there any plugin, or any plans to do it in the next jQuery versions?
Now provide the response and nothing else.
| Not necessarily a great idea, but you could add a new Sizzle selector for it : $.expr[':'].width = function(elem, pos, match) { return $(elem).width() == parseInt(match[3]);} which you could then use like so: $('div:width(970)') That's going to be horrifically slow, though, so you'd want to narrow down on the number of elements you're comparing with something like : $('#navbar>div:width(970)') to only select those divs that are direct descendants of the navbar, which also have a width of 970px. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1625107', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/140937/']} | jdg_379720 |
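If you would rather not register a custom selector at all, a plain .filter() call does the same job; this is an alternative sketch, and note that .width() measures the rendered width, which is not always identical to the declared CSS width:

    var $matches = $('#navbar > div').filter(function () {
      return $(this).width() === 750;
    });

    $matches.addClass('is-750px-wide');   // then do whatever you need with the matched set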
stackexchange | llm_judgeable_groundtruth_similarity | 606469 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am designing a soft start circuit for my bench power supply. The voltage input of the IC that control the mosfets in the circuit can not be under 4.3 V even when the output of the power supply is under 5 V. I've been doing some research on the web and I found a circuit so I decide to simplify it for simulation in order to understand it better. This is the circuit: V1 refers to a 5 V voltage regulator and V2 refers to the output of the power supply that can be between 0.8-20 V. What I can not understand for example is why when V2 is under 5 V, voltage output is 4.36 V and not 8.6 V (4.36 + 3.3 V).It's like depending what voltage has V2, the voltage output depends of V1 or V2.
Now provide the response and nothing else.
| Everyone is saying what it's not, but I'll throw my two cents in to explain what this circuit is: The technique you're using here is called OR diodes. The voltage at the load is not the combined total of the supplies; it is the voltage of the higher-voltage supply (minus any forward voltage drop). This technique is typically used when you have several power sources to a board. For example, suppose you have a battery backup you want to fall back on if your main supply fails. If the main supply is, say, 30V, you could design the backup supply for 25V. That way, when the main supply is working, the diode in series with the battery is reverse biased and the battery is effectively taken out of the circuit, but if the main supply fails, you can be sure that the power to the board will never go under 25V (again, minus the forward voltage of the diode, typically ~0.7V for napkin analysis purposes). | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/606469', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/251597/']} | jdg_379721 |
stackexchange | llm_judgeable_groundtruth_similarity | 41991178 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I used to be able to use a lodash method in Angular by an import statement that looked like the following: import {debounce as _debounce} from 'lodash'; I now get the following error when using that statement: '"{...}/node_modules/@types/lodash/index"' has no exported member 'debounce'. The only thing that will compile without errors is this statement: import * as _ from 'lodash'; In my code, I change _debounce() to _.debounce() . Is that the only (and/or correct) way to do it? Is there a way to only import debounce, or does it not matter due to "treeshaking"? I realize I can write my own debounce function, but I'm mainly interested in the "right" way to do this. p.s. Other variations that I've tried (each has some sort of error associated with it): import {debounce as _debounce } from 'lodash/debounce';import * as _debounce from 'lodash/debounce';import debounce = require('lodash/debounce'); FYI...I'm using the following versions: Angular: 2.4.5 Typescript: 2.1.5 Angular-cli: 1.0.0-beta.26
Now provide the response and nothing else.
| (if you care about tree shaking see update ) I suppose in order to bring lodash in to your project you already done npm install lodash --savenpm install @types/lodash --save-dev If you want to import just required functions you should do: import * as debounce from 'lodash/debounce' or import { debounce } from "lodash"; Use it as: debounce() BTW: You might have to downgrade your typescript version to 2.0.10 as you are using angular 2.x. npm install [email protected] --save-dev UPDATE: Recently I realised that lodash package is just not tree shakable, so if you need tree shaking just use lodash-es instead. npm install lodash-es --savenpm install @types/lodash-es --save-devimport debounce from 'lodash-es/debounce' | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/41991178', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/267829/']} | jdg_379722 |
stackexchange | llm_judgeable_groundtruth_similarity | 106907 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We put all of our unit tests in their own projects. We find that we have to make certain classes public instead of internal just for the unit tests. Is there anyway to avoid having to do this. What are the memory implication by making classes public instead of sealed?
Now provide the response and nothing else.
| If you're using .NET, the InternalsVisibleTo assembly attribute allows you to create "friend" assemblies. These are specific strongly named assemblies that are allowed to access internal classes and members of the other assembly. Note, this should be used with discretion as it tightly couples the involved assemblies. A common use for InternalsVisibleTo is for unit testing projects. It's probably not a good choice for use in your actual application assemblies, for the reason stated above. Example: [assembly: InternalsVisibleTo("NameAssemblyYouWantToPermitAccess")]namespace NameOfYourNameSpace{ | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/106907', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4653/']} | jdg_379723 |
stackexchange | llm_judgeable_groundtruth_similarity | 445381 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Teasing out the causal effect of one variable/treatment on another/outcome by blocking all the Backdoor Paths between treatment and outcome in the corresponding DAG (Directed Acyclic Graph) requires drawing a correct DAG in the first place. But can we ever be sure our DAG is correct?! Can you point to a convincing/rigorous/commonly agreed to be correct causal study which estimated the causal effect by drawing a DAG and blocking all backdoor paths? If you know of such a study, why do you believe the DAG to be correct? I've been intrigued by causal analysis using DAGs and backdoor paths but I do not read any academic journals so it is difficult for me to assess whether this technique is merely an interesting logical/theoretical setup or is actually practical/useful.
Now provide the response and nothing else.
| No, we can never be sure that the DAG is correct. This is one of the fundamental principles of causal inference informed by DAGs. DAGs are a non-parametric abstraction of reality. You will find in much of the DAG literature things like: In causal diagrams, an arrow represents a "direct effect" of the parent on the child, although this effect is direct only relative to a certain level of abstraction, in that the graph omits any variables that might mediate the effect represented by the arrow. Greenland and Pearl, 2017 This is completely unavoidable. Take pharmacological research. There are many, many cases of drugs which reach the market, where the researchers do not know the actual biological mechanism that causes their product to work. They may have theories, and these theories can be encapsulated using DAGs. The resulting analysis is conditional on the DAG being correct (at a level of abstraction). Other researchers may have different theories and consequently different DAGs, and that is completely OK. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/445381', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/198058/']} | jdg_379724 |
stackexchange | llm_judgeable_groundtruth_similarity | 33025719 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a RecyclerView , and in its adapter, I have created something similar to an OnLongClickListener , which I am calling an OnEntryLongClickListener to avoid confusion. I am using an AlertDialog to display a dialog with list items for different actions. However, I am getting the following error: E/AndroidRuntime: android.content.res.Resources$NotFoundException: Resource ID #0x0 at android.content.res.Resources.getValue(Resources.java:2345) at android.content.res.Resources.loadXmlResourceParser(Resources.java:3910) at android.content.res.Resources.getLayout(Resources.java:2161) at android.view.LayoutInflater.inflate(LayoutInflater.java:413) at android.view.LayoutInflater.inflate(LayoutInflater.java:366) at android.support.v7.app.AlertController$AlertParams.createListView(AlertController.java:734) at android.support.v7.app.AlertController$AlertParams.apply(AlertController.java:711) at android.support.v7.app.AlertDialog$Builder.create(AlertDialog.java:883) at com.mycompany.myapp.ThisActivity$2.onEntryLongClick(ThisActivity.java:135) at com.mycompany.myapp.adapter.RVAdapter$RVViewHolder.onLongClick(RVAdapter.java:41) at android.view.View.performLongClick(View.java:5236) Below is the relevant code I am using: adapter.setOnEntryLongClickListener(new RVAdapter.OnEntryLongClickListener() { @Override public void onEntryLongClick(View view, int position) { final MiniEntry thisEntry = entryList.get(position); AlertDialog.Builder builder = new AlertDialog.Builder(getBaseContext()); builder.setTitle(thisEntry.getEntryName());); builder.setItems(R.array.quickActions, new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { // Other code here } }); AlertDialog alert = builder.create(); // The error log points to this line alert.show(); }});mRecyclerView.setAdapter(adapter); As well as the XML I am using for the array: <string-array name="quickActions"> <item>Add to Favourites</item> <item>More information</item></string-array> I'm not sure if it matters, but I am importing the AlertDialog from android.support.v7.app.AlertDialog (from the v7 Support Library). How can I solve this problem?
Now provide the response and nothing else.
| Change getBaseContext() in the AlertDialog.Builder instantiation to the current Activity instance. For example: AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this); An AlertDialog requires certain resources whose values are provided by the themes and styles attached to the Context it uses. The Context returned by getBaseContext() doesn't have those attached, but the Activity does. Indeed, whenever a Context is needed for a UI component - e.g., Dialog s, View s, Adapter s, etc. - the current Activity is usually what you want to use. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33025719', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4230345/']} | jdg_379725 |
stackexchange | llm_judgeable_groundtruth_similarity | 40615641 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This question: How to test os.exit scenarios in Go (and the highest voted answer therein) sets out how to test os.Exit() scenarios within Go. As os.Exit() cannot easily be intercepted, the method used is to reinvoke the binary and check the exit value. This method is described at slide 23 of this presentation by Andrew Gerrand (one of the core members of the Go team); the code is very simple and is reproduced in full below. The relevant test and main files look like this (note that this pair of files alone is an MVCE):

    package foo

    import (
        "fmt"
        "os"
        "os/exec"
        "testing"
    )

    func TestCrasher(t *testing.T) {
        if os.Getenv("BE_CRASHER") == "1" {
            Crasher() // This causes os.Exit(1) to be called
            return
        }
        cmd := exec.Command(os.Args[0], "-test.run=TestCrasher")
        cmd.Env = append(os.Environ(), "BE_CRASHER=1")
        err := cmd.Run()
        if e, ok := err.(*exec.ExitError); ok && !e.Success() {
            fmt.Printf("Error is %v\n", e)
            return
        }
        t.Fatalf("process ran with err %v, want exit status 1", err)
    }

and

    package foo

    import (
        "fmt"
        "os"
    )

    // Coverage testing thinks (incorrectly) that the func below is
    // never being called
    func Crasher() {
        fmt.Println("Going down in flames!")
        os.Exit(1)
    }

However, this method appears to suffer certain limitations:

Coverage testing with goveralls / coveralls.io does not work - see for instance the example here (the same code as above but put into github for your convenience) which produces the coverage test here, i.e. it does not record the test functions being run. NOTE that you don't need those links to answer the question - the above example will work fine - they are just there to show what happens if you put the above into github, and take it all the way through travis to coveralls.io.

Rerunning the test binary appears fragile.

Specifically, as requested, here is a screenshot (rather than a link) for the coverage failure; the red shading indicates that as far as coveralls.io is concerned, Crasher() is not being called. Is there a way around this? Particularly the first point. At a golang level the problem is this:

The Goveralls framework runs go test -cover ... , which invokes the test above.

The test above calls exec.Command / .Run without -cover in the OS arguments.

Unconditionally putting -cover etc. in the argument list is unattractive as it would then run a coverage test (as the subprocess) within a non-coverage test, and parsing the argument list for the presence of -cover etc. seems a heavy duty solution. Even if I put -cover etc. in the argument list, my understanding is that I'd then have two coverage outputs written to the same file, which isn't going to work - these would need merging somehow. The closest I've got to that is this golang issue.

Summary: What I am after is a simple way to run go coverage testing (preferably via travis, goveralls, and coveralls.io), where it is possible to both test cases where the tested routine exits with os.Exit() , and where the coverage of that test is noted. I'd quite like it to use the re-exec method above, if that can be made to work. The solution should show coverage testing of Crasher() . Excluding Crasher() from coverage testing is not an option, as in the real world what I am trying to do is test a more complex function, where somewhere deep within, under certain conditions, it calls e.g. log.Fatalf() ; what I am coverage testing is that the tests for those conditions work properly.
Now provide the response and nothing else.
| With a slight refactoring, you may easily achieve 100% coverage. foo/bar.go :

    package foo

    import (
        "fmt"
        "os"
    )

    var osExit = os.Exit

    func Crasher() {
        fmt.Println("Going down in flames!")
        osExit(1)
    }

And the testing code: foo/bar_test.go :

    package foo

    import "testing"

    func TestCrasher(t *testing.T) {
        // Save current function and restore at the end:
        oldOsExit := osExit
        defer func() { osExit = oldOsExit }()

        var got int
        myExit := func(code int) { got = code }

        osExit = myExit
        Crasher()
        if exp := 1; got != exp {
            t.Errorf("Expected exit code: %d, got: %d", exp, got)
        }
    }

Running go test -cover :

    Going down in flames!
    PASS
    coverage: 100.0% of statements
    ok      foo     0.002s

Yes, you might say this works if os.Exit() is called explicitly, but what if os.Exit() is called by someone else, e.g. log.Fatalf() ? The same technique works there too, you just have to switch log.Fatalf() instead of os.Exit() , e.g.: Relevant part of foo/bar.go :

    var logFatalf = log.Fatalf

    func Crasher() {
        fmt.Println("Going down in flames!")
        logFatalf("Exiting with code: %d", 1)
    }

And the testing code: TestCrasher() in foo/bar_test.go :

    func TestCrasher(t *testing.T) {
        // Save current function and restore at the end:
        oldLogFatalf := logFatalf
        defer func() { logFatalf = oldLogFatalf }()

        var gotFormat string
        var gotV []interface{}
        myFatalf := func(format string, v ...interface{}) { gotFormat, gotV = format, v }

        logFatalf = myFatalf
        Crasher()
        expFormat, expV := "Exiting with code: %d", []interface{}{1}
        if gotFormat != expFormat || !reflect.DeepEqual(gotV, expV) {
            t.Error("Something went wrong")
        }
    }

Running go test -cover :

    Going down in flames!
    PASS
    coverage: 100.0% of statements
    ok      foo     0.002s

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40615641', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3177124/']} | jdg_379726 |
stackexchange | llm_judgeable_groundtruth_similarity | 1431 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is an exercise from Problems from the Book by Andreescu and Dospinescu. When it was posted on AoPS a year ago I spent several hours trying to solve it, but to no avail, so I am hoping someone here can enlighten me. Problem: Prove that the function $f : [0, 1) \to \mathbb{R}$ defined by $\displaystyle f(x) = \log_2 (1 - x) + x + x^2 + x^4 + x^8 + ...$ is bounded. A preliminary observation is that $f$ satisfies $f(x^2) = f(x) + \log_2 (1 + x) - x$. I played around with using this functional equation for awhile, but couldn't quite make it work.
Now provide the response and nothing else.
| OK, a second trick is needed (but it actually finishes the problem). It is nice and simple enough that it's probably what the authors intended by a "Book" solution. Let $f(x) = x \log(2) - \log(1+x)$. We want to show that $S(x) = f(x) + f(x^2) + f(x^4) + \dots$ is bounded. Because $f(0)=f(1)=0$ and $f$ is differentiable, we can find a constant $A$ such that $|f(x)| \leq Ax(1-x) = Ax - Ax^2$. The sum of this bound over the powers $x^{2^k}$ is telescopic. Notice that the role of $\log(2)$ was to ensure that $f(1)=0$. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1431', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/232/']} | jdg_379727 |
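Filling in the two steps left implicit above (an added remark): the telescoping bound and the classical product identity that ties $S(x)$ back to the original function are $$\sum_{k\ge 0}\bigl|f\bigl(x^{2^k}\bigr)\bigr|\;\le\;A\sum_{k\ge 0}\bigl(x^{2^k}-x^{2^{k+1}}\bigr)\;=\;Ax\;\le\;A,\qquad \prod_{k\ge 0}\bigl(1+x^{2^k}\bigr)=\frac{1}{1-x},$$ so $\sum_{k\ge 0}\log_2\bigl(1+x^{2^k}\bigr)=-\log_2(1-x)$ and the function in the question equals $S(x)/\log 2$, hence is bounded by $A/\log 2$ on $[0,1)$.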
stackexchange | llm_judgeable_groundtruth_similarity | 911922 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I need a reference for a complete proof of the below theorem: Let $RP^n$ be $n$-dimensional real projective space. Then $RP^n$ is a compact, connected manifold. (Consider the standard topology over $R$) I'd appreciate it If you refer me to a reference with a non-complicated proof.Thanks in advance.
Now provide the response and nothing else.
| William Boothby, An Introduction to Differentiable Manifolds and Riemannian Geometry Rev 2nd ed . At the beginning of ch3. You can find a detailed proof that $\mathbb RP^n$ admits a differentiable manifold structure over the quotient topology induced by the natural projection $\pi:\mathbb R^{n+1}\to\mathbb RP^n$. To see it is compact and connected, this is really a topological matter. Restrict the domain of $\pi$ to $S^n$. According to the definition of quotient topology, $\pi:\mathbb R^{n+1}\to\mathbb RP^n$ is a surjective continuous map. So $\pi|_{S^n}:S^n\to\mathbb RP^n$ with $S^n$ equipped with subspace topology is also a surjective continuous map. Surjectivity of $\pi|_{S^n}$ comes from the fact that $\forall [x_0,x_1,\dots,x_n]\in\mathbb RP^n$ has in its preimage a point $(x_0,\dots,x_n)/(\sum_{i=0}^nx_i^2)^{1/2}\in S^n$. Therefore. it preserves compactness and connectedness. So $\mathbb RP^n$ is compact and connected since $S^n$ is. Appendix: $S^n$ is compact, since it is closed and bounded in $\mathbb R^{n+1}$ by Heine-Borel theorem. $S^n$ is path connected: any two points can be connected by an arc on a great circle. So it is also connected. (path connectedness implies connectedness). All the topological facts can be picked up in James Munkres's Topology 2nd ed . Or you can find a quick review in Abraham, Marsden, Ratiu, Manifolds, Tensor Analysis and Applications (2007 draft recommended). | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/911922', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/97635/']} | jdg_379728 |
stackexchange | llm_judgeable_groundtruth_similarity | 13512865 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on an OpenCart project, that requires a lot of customization.for my project I have to change something in the cart library (system/library/cart.php). I would have to call a custom function that's defined inside the product model (catalog/model/catalog/product.php). In a controller, loading a Model and using its functions is easy: $this->load->model("catalog/product"); $this->model_catalog_product->customFunction(); But how do you load a model outside a controller?You can't create a new instance of the model, I already tried that: require_once("catalog/model/catalog/product.php"); $a_model = new ModelCatalogProduct(); This obviously doesn't work cause models weren't intended to be used in such a way. I also tried to use the scope resolution operator ( ModelCatalogProduct::customFunction())It doesn't work either. I could pass all the required info as arguments, but I would rather use the model inside the cart library class, cause the changes would be global. Is it even possible to load a model outside a controller in OpenCart?
Now provide the response and nothing else.
| If it's only one method that you need to copy, you would be best adding a method to the Cart class itself. The Cart class will work with the $this->db->query() calls as it already has $db assigned to it, even though it's not a Controller/Model. Edit: Should you wish to do this, you could do something similar to the following:

    public function test() {
        global $loader, $registry;
        $loader->model('catalog/product');
        $model = $registry->get('model_catalog_product');
        $result = $model->getProduct(123);
    }

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13512865', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/430997/']} | jdg_379729 |
stackexchange | llm_judgeable_groundtruth_similarity | 8749191 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a line like this in my code new Date().getFullYear(); and I need it to always return the same year. Is there any way to do this with Jasmine ? Thanks.
Now provide the response and nothing else.
| In JavaScript you can simply overwrite the function in your test setup:

    Date.prototype.getFullYear = function(){ return 2012; };

You could also use a jasmine spy (note that getFullYear lives on Date.prototype, not on the Date constructor itself):

    spyOn(Date.prototype, 'getFullYear').andReturn(2012);

Another way is to pass the date into the function you want to test, which is, by the way, the best way to write testable code. Don't create new instances in your functions, because then you always end up testing not only your code but also those instances. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8749191', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/47579/']} | jdg_379730 |
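In newer Jasmine versions (2.2 and later) there is also a built-in clock that can pin the date for a whole spec, which avoids patching Date yourself; a sketch:

    beforeEach(function () {
      jasmine.clock().install();
      jasmine.clock().mockDate(new Date(2012, 0, 1));  // any date in the year you want
    });

    afterEach(function () {
      jasmine.clock().uninstall();
    });

    it('always sees the same year', function () {
      expect(new Date().getFullYear()).toBe(2012);
    });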
stackexchange | llm_judgeable_groundtruth_similarity | 228184 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Wolfram says it's 800, but how to calculate it? $$\frac{25x}{x^2+1600x+640000}$$
Now provide the response and nothing else.
| It’s straightforward as a calculus problem. To solve it without calculus, note that $$\frac{25x}{(x^2+1600x+640000)}=\frac{25x}{(x+800)^2}\;,\tag{1}$$ so the denominator is always positive, and the function has its maximum at some positive value of $x$. That maximum will occur where $$\frac{(x+800)^2}{25x}=\frac1{25}\left(x+1600+\frac{640000}x\right)=64+\frac1{25}\left(x+\frac{640000}x\right)\tag{2}$$ has its minimum (over the range $x>0$). This in turn occurs where $x+\dfrac{640000}x$ has its minimum. Now $x$ and $\frac{640000}x$ are a pair of numbers whose product is $640000=800^2$; if we set $x=800$, their sum is $1600$. Suppose that we set $x=800+a$ for some $a>0$; then $$\begin{align*}x+\frac{640000}x&=800+a+\frac{640000}{800+a}\\&=\frac{1280000+1600a+a^2}{800+a}\\&=1600+\frac{a^2}{800+a}\\&>1600\;.\end{align*}$$ (The same computation works for $-800<a<0$, since $\frac{a^2}{800+a}>0$ there as well.) Thus, $x=800$ gives us the minimum value of $x+\frac{640000}x$, namely, $1600$, and hence the minimum value of $(2)$ and the maximum value of $(1)$. Substituting $x=800$ into $(1)$, we find that the maximum is $$\frac{25\cdot800}{1600^2}=\frac{25}{3200}=\frac1{128}\;.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/228184', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/45886/']} | jdg_379731 |
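An even shorter route to the same minimum (an added remark) is AM-GM: $$x+\frac{640000}{x}\;\ge\;2\sqrt{x\cdot\frac{640000}{x}}\;=\;2\sqrt{640000}\;=\;1600\qquad(x>0),$$ with equality exactly when $x=\frac{640000}{x}$, i.e. $x=800$, which again gives the maximum value $\frac1{128}$.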
stackexchange | llm_judgeable_groundtruth_similarity | 32675180 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I received an error when trying to export product data in Magento 1.9. It said "no valid data sent" . Based on other posts, I was able to locate my `system.log' (in var/log folder & changed permissions to 777). It looks like there are several errors and i'm not sure when they occurred or which to look at. The last one, which seems relevant, is: Notice: Undefined offset: 331 in includes/src/Mage_ImportExport_Model_Export_Entity_Product.php on line 875. I read that 331 = Product ID . But, when I go to manage products, there is no product ID of 331. I may have added it a few weeks ago when I was testing uploads, but I've since deleted it. So, I'm not sure what to do there. I also tried to locate the file mentioned in the error, but it wasn't in that file path. I ended up going to: app/code/core/Mage/ImportExport/Model/Export/Entity. (I assume that is the right file to review). The code on line 873-5 is: } else { $dataRow[self::COL_STORE] = null;$dataRow += $stockItemRows[$productId]; Any advice on next steps?Your help is greatly appreciated! Please note: I also checked my exceptions.log and it doesn't look as if anything has been added to the file since Aug 11.
Now provide the response and nothing else.
| There is nothing wrong with having a constraint like Eq (f a) . As the error message says, you will need to enable the (harmless) FlexibleContexts GHC extension to do that, so add... {-# LANGUAGE FlexibleContexts #-} ... to the top of your source file. Do note, however, that (Functor f, Eq (f a)) doesn't really reflect what you are doing in your implementation of (==) . Firstly, you don't need the supposition that f is a Functor here, so you can safely drop the Functor f constraint. Secondly, the constraints should match what you need to write the different cases. In the first case, you do x == y . x and y are both of type a , and so you need Eq a . For analogous reasons, the second case requires Eq (f (Free f a)) rather than Eq (f a) . That means you will end up with... (Eq (f (Free f a)), Eq a) => Eq (Free f a) ... which matches reference implementations such as the one in Control.Monad.Free . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32675180', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4180880/']} | jdg_379732 |
stackexchange | llm_judgeable_groundtruth_similarity | 178936 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've heard of this point among others, but never really got what it meant. Wikipedia makes one's head spin. The only thing I picked up is that it occurs between liquid and gas, and displays opalescence at supercriticality. What is it useful for, what characteristic properties does it have compared to normal phases, and what are the critical points for some normal elements or compounds?
Now provide the response and nothing else.
 | Characteristically, a critical point occurs somewhere anytime you have a continuous phase transition, that is, whenever you have two phases of a substance that share the same intrinsic symmetries. The classic example is the critical point associated with the liquid-gas transition, as you note. Liquids are isotropic and homogeneous, gases are isotropic and homogeneous; they only differ in their densities at lower pressures. This suggests that there is some point in phase space where these two phases are smoothly connected. For most materials this corresponds to high pressure. At lower pressures you might have a discontinuous transition from one of these to the other, just like the boiling we observe for the ordinary liquid-gas transition. Why do we care? Well, critical points are rather special. Since they lie right at the edge of the phase coexistence curve, they have peculiar properties. In particular, they are characterized by a diverging correlation length, meaning the system can have very long range correlations, that is, very well separated parts of the system can be highly correlated with one another. This gives rise to the critical opalescence you mentioned. At the liquid-gas critical point you get very large scale fluctuations in the system, which can get large enough to scatter light, just like the large proteins that scatter light in milk, giving the fluid a sort of milky character. Fine, but again, why do we care? Well, critical points are very nice for theory. Since the correlation length diverges, the physics of the system becomes independent of the microscopic details. The physics of the system takes on a life of its own, independent of all of the microscopic particulars. This gives rise to all sorts of power law dependencies for things like susceptibilities, heat capacities and the like. It also gives rise to fractal behavior. Both of these things happen because there is no longer a characteristic length scale in the system: all of the physics must be scale free, a property that both power laws and fractals have. Again, why should we care? Well, since the system's physics takes on this scale free behavior, and a life of its own independent of the microscopic description, we observe a general phenomenon known as universality: very different systems can start to look very much the same. For instance, the liquid-gas transition you mention has the same scale free behavior at its critical point as a very simple model known as the Ising model does at its critical point. The spreading of disease at its critical point looks like percolation. Great, so things look the same; again, why do we care? Well, some of these models are much simpler than others. The Ising model is much simpler than trying to simulate a "realistic" liquid, but if the behavior is the same at the critical point, we can hope to make both computational and theoretical progress by studying simpler models that share the same universal behavior as much more complicated models, and rest assured that we aren't wasting our time. Those simple models, if designed right, will have exactly the same behavior at the critical point. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/178936', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/77786/']} | jdg_379733 |
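To make the last point concrete, here is a minimal Monte Carlo sketch of the 2D Ising model (my own illustration, not part of the original answer; the lattice size, temperatures, and sweep count are arbitrary picks). Well below the critical temperature the magnetization per spin is typically close to 1, well above it is close to 0, and near T_c it fluctuates strongly as the correlation length grows.

import numpy as np

rng = np.random.default_rng(0)

def ising_magnetization(L=16, T=2.269, n_sweeps=100):
    """Single-spin-flip Metropolis for the 2D Ising model (J = k_B = 1).

    T = 2.269 is roughly the critical temperature of the square lattice.
    Returns |magnetization per spin| after n_sweeps sweeps.
    """
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # sum of the four nearest neighbours, periodic boundaries
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nb          # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
    return abs(spins.mean())

for T in (1.5, 2.269, 3.5):
    print(T, ising_magnetization(T=T))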
stackexchange | llm_judgeable_groundtruth_similarity | 518266 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am running Arch Linux on a Raspberry Pi. Suddenly: I am unable to ping any website. I am unable to access any website from the browser. I have two more computers (all running Arch Linux) connected to the Internet, where I can ping and use the Internet. Also, /etc/resolv.conf is identical on the other computers:
nameserver 10.230.252.252
nameserver 203.147.88.2
nameserver 8.8.8.8
search domain.name
I can use VNC. I can also ping 8.8.8.8. When trying to access DuckDuckGo on Chromium I get:
This site can’t be reached
duckduckgo.com’s server IP address could not be found.
DNS_PROBE_FINISHED_NXDOMAIN
I have an active Internet connection. What's wrong?
Now provide the response and nothing else.
 | Although I've never had this problem with my other x86_64 PCs, all running Arch Linux, it still happens frequently with Arch Linux ARM when running NetworkManager. The problem looks like this: you are connected to Wi-Fi, but you can't ping or use the internet, yet you can access all the computers on the local network and even use remote desktop sharing software. There is a high chance that something goes wrong when your ping or your browser tries to resolve the host. I can think of 3 solutions:
Solution 1
I believe this is a problem on thousands of Raspberry Pi systems running Arch Linux ARM and using NetworkManager. In my case /etc/resolv.conf was a broken symlink to ../run/systemd/resolve/stub-resolv.conf. NetworkManager can't populate the symlink, and /etc/resolv.conf stays empty. We have to:
Remove the broken symlink: # rm /etc/resolv.conf
Create an /etc/NetworkManager/conf.d/dns.conf file with the contents:
[main]
dns=none
main.systemd-resolved=false
Restart NetworkManager: sudo systemctl restart NetworkManager
This should fix the issue; if not, follow Solution 2.
Solution 2
In case the above didn't fix the issue for you, you can temporarily populate /etc/resolv.conf with: sudo systemctl restart systemd-resolved && sudo systemctl stop systemd-resolved The reason this works is that something is probably messing up the /etc/resolv.conf file. The above command should overwrite its contents, but again, you should look at what is causing the issue.
Solution 3
If you can't get your /etc/resolv.conf back, just create a new /etc/resolv.conf (delete the old empty file or symbolic link if one exists) and paste in:
search domain.name
nameserver 8.8.8.8
nameserver 1.1.1.1
nameserver 1.0.0.1
Note: as the first nameserver line you can also use your router's IP address (for example, nameserver 192.168.43.1 in my case), which will make other systems on the same network pingable. It's not a good idea to generate resolv.conf by hand like this, but I had a bad time with NetworkManager's auto-generated resolv.conf. systemd-resolved also generates wrong ones, even on my PC. Somewhat arbitrarily, I am using Google's primary DNS and Cloudflare's primary DNS here; you can instead use 8.8.8.8 with 8.8.4.4, or 1.1.1.1 with 1.0.0.1. Although that step works, you may want to stop NetworkManager from overwriting the file whenever it restarts: add this entry to /etc/NetworkManager/NetworkManager.conf
[main]
dns=none
systemd-resolved=false
These steps worked for my installations on a Raspberry Pi 3 Model B. Hope this will work for you, too. | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/518266', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/274717/']} | jdg_379734 |
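After applying any of the solutions, one quick way to confirm that name resolution works again (my own addition, not part of the original answer) is a short Python check:

import socket

# If DNS is still broken, these lookups raise socket.gaierror,
# which is the failure Chromium reports as DNS_PROBE_FINISHED_NXDOMAIN.
for host in ("duckduckgo.com", "archlinux.org"):
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror as exc:
        print(host, "-> lookup failed:", exc)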
stackexchange | llm_judgeable_groundtruth_similarity | 1728284 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In JavaScript, how can I add a row dynamically to a table? On a JavaScript event I want to create a similar row and append it to the table.
Now provide the response and nothing else.
 | If you don't wish to use jQuery, there are a couple of simple functions you could use, like cloneNode(), createElement() and appendChild(). Here is a simple demonstration that appends a row to the end of the table using either the clone or create method. Tested in IE8 and FF3.5.
<html>
<head>
  <script type="text/javascript">
    function cloneRow() {
      var row = document.getElementById("rowToClone"); // find row to copy
      var table = document.getElementById("tableToModify"); // find table to append to
      var clone = row.cloneNode(true); // copy children too
      clone.id = "newID"; // change id or other attributes/contents
      table.appendChild(clone); // add new row to end of table
    }
    function createRow() {
      var row = document.createElement('tr'); // create row node
      var col = document.createElement('td'); // create column node
      var col2 = document.createElement('td'); // create second column node
      row.appendChild(col); // append first column to row
      row.appendChild(col2); // append second column to row
      col.innerHTML = "qwe"; // put data in first column
      col2.innerHTML = "rty"; // put data in second column
      var table = document.getElementById("tableToModify"); // find table to append to
      table.appendChild(row); // append row to table
    }
  </script>
</head>
<body>
  <input type="button" onclick="cloneRow()" value="Clone Row" />
  <input type="button" onclick="createRow()" value="Create Row" />
  <table>
    <tbody id="tableToModify">
      <tr id="rowToClone">
        <td>foo</td>
        <td>bar</td>
      </tr>
    </tbody>
  </table>
</body>
</html> | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1728284', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/40521/']} | jdg_379735 |
stackexchange | llm_judgeable_groundtruth_similarity | 58819858 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dataframe in this shape:
      A     B    C      D       E
  213-1    XL  NaN    NaN     NaN
     21  22.0   12  232.0  101.32
   23-0     L  NaN    NaN     NaN
     12    23   12  232.2     NaN
   31-0    LS  NaN    NaN     NaN
     70    70   23    NaN   21.22
I would like to move every second row up into the row above it, so that only combined rows are left, as in this expected result:
     ID  Name   A     B   C      D       E
  213-1    XL  21  22.0  12  232.0  101.32
   23-0     L  12    23  12  232.2     NaN
   31-0    LS  70    70  23    NaN   21.22
Is it possible to do this with pandas?
Now provide the response and nothing else.
 | I would use concat:
new_df = pd.concat((df.iloc[::2, :2].reset_index(drop=True),
                    df.iloc[1::2].reset_index(drop=True)), axis=1)
# rename
new_df.columns = ['ID', 'Name'] + new_df.columns[2:].to_list()
Output:
      ID Name   A     B     C      D       E
0  213-1   XL  21  22.0  12.0  232.0  101.32
1   23-0    L  12    23  12.0  232.2     NaN
2   31-0   LS  70    70  23.0    NaN   21.22 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58819858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6653602/']} | jdg_379736 |
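For completeness, a self-contained sketch that rebuilds the question's example frame and applies the concat approach above (the sample values are copied from the question; the construction code itself is mine):

import numpy as np
import pandas as pd

# Even rows carry the ID/Name in columns A and B; odd rows carry the numbers.
df = pd.DataFrame(
    [["213-1", "XL", np.nan, np.nan, np.nan],
     [21, 22.0, 12, 232.0, 101.32],
     ["23-0", "L", np.nan, np.nan, np.nan],
     [12, 23, 12, 232.2, np.nan],
     ["31-0", "LS", np.nan, np.nan, np.nan],
     [70, 70, 23, np.nan, 21.22]],
    columns=list("ABCDE"),
)

# Pair each even row's first two cells with the full odd row below it.
new_df = pd.concat(
    (df.iloc[::2, :2].reset_index(drop=True),
     df.iloc[1::2].reset_index(drop=True)),
    axis=1,
)
new_df.columns = ["ID", "Name"] + new_df.columns[2:].to_list()
print(new_df)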
stackexchange | llm_judgeable_groundtruth_similarity | 3000326 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
After learning about Jordan Canonical Form, I began wondering: if you have two matrices $A, B \in M_n(\mathbb{R})$ that each have a Jordan Canonical Form over $\mathbb{R}$, will their product $AB$ also have one? Trivially, if we work in $M_n(\mathbb{C})$, then the characteristic polynomial always has roots in the field. So, from what I understand, this is equivalent to asking whether the characteristic polynomial of $AB$ will have only real roots. I began playing around with the $2 \times 2$ case: if $AB$ has a negative determinant, then its eigenvalues cannot be a non-real complex pair, since they must multiply to the determinant, and complex eigenvalues always come in conjugate pairs, which means their product would be positive. I am not sure how to break down the case where the determinant is positive. Finally, my own intuition says that matrices which have a real Jordan form should be closed under multiplication. This is because matrices with strictly real eigenvalues simply reflect and stretch space, so I don't believe it is possible to find two such matrices whose product is a rotation of the plane. Also, I couldn't find this question asked elsewhere, but if it has an answer, I'd be happy to read it.
Now provide the response and nothing else.
 | This is false already in the $2 \times 2$ case. Consider the matrices $$A = \left[ \begin{array}{cc} -1 & 2 \\ -2 & 3 \end{array} \right], B = \left[ \begin{array}{cc} 3 & 0 \\ 0 & 1 \end{array} \right]$$ $B$ has eigenvalues $3, 1$, and the characteristic polynomial of $A$ is $t^2 - 2t + 1 = (t - 1)^2$, so $A$ has eigenvalues $1, 1$. But $$AB = \left[ \begin{array}{cc} -3 & 2 \\ -6 & 3 \end{array} \right]$$ has characteristic polynomial $t^2 + 3$, and so has eigenvalues $\pm i \sqrt{3}$. I found these matrices by first taking $B$ to be diagonal with diagonal entries $s, 1$ and then looking at the effect multiplication by $B$ had on the characteristic polynomial of a generic $A$. It's possible to choose $s$ so that $AB$ has trace zero, and then we just need its determinant to be positive to guarantee that it has complex eigenvalues. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3000326', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/483863/']} | jdg_379737 |
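A quick numerical confirmation of the counterexample (numpy is my own choice here; the matrices are exactly those from the answer):

import numpy as np

A = np.array([[-1.0, 2.0],
              [-2.0, 3.0]])
B = np.array([[3.0, 0.0],
              [0.0, 1.0]])

print(np.linalg.eigvals(A))      # both (numerically) 1: real, repeated
print(np.linalg.eigvals(B))      # 3 and 1: real
print(np.linalg.eigvals(A @ B))  # roughly +-1.732j, i.e. +-i*sqrt(3): not real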
stackexchange | llm_judgeable_groundtruth_similarity | 52588873 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to use a universal reference with a vector.
#include <iostream>
#include <vector>

template<typename T>
void foo(T&& v) {
    for(typename T::iterator i = v.begin(); i != v.end(); i++) {
        std::cout << *i << std::endl;
    }
}

int main()
{
    std::vector v = {0,5,4,3};
    foo(std::move(v));
    foo(v); //compiler error
    return 0;
}
but when I pass v (without std::move) as the argument to foo, it causes a compiler error. I think that in both cases the universal reference should work. Error:
prog.cc: In instantiation of 'void foo(T&&) [with T = std::vector<int, std::allocator<int> >&]':
prog.cc:25:10:   required from here
prog.cc:16:30: error: 'std::vector<int, std::allocator<int> >&' is not a class, struct, or union type
   16 |     for(typename T::iterator i = v.begin(); i != v.end(); i++)
      |                              ^
prog.cc:16:30: error: 'std::vector<int, std::allocator<int> >&' is not a class, struct, or union type
Now provide the response and nothing else.
| When you pass an lvalue to foo , T is deduced as an lvalue reference due to special perfect forwarding rules: From [temp.deduct.call] (12.9.2.1 par 3): A forwarding reference is an rvalue reference to a cv-unqualified template parameter that does not represent a template parameter of a class template (during class template argument deduction ([over.match.class.deduct])). If P is a forwarding reference and the argument is an lvalue, the type “lvalue reference to A” is used in place of A for type deduction. Therefore typename T::iterator will attempt to access the iterator type alias of an lvalue reference , which does not exist. You can either change it to: typename std::remove_reference_t<T>::iterator Or simply: for(auto i = v.begin(); i != v.end(); i++) { /* ... */ } Or even better: for(const auto& x : v) { std::cout << x << '\n'; } | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52588873', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10440409/']} | jdg_379738 |
stackexchange | llm_judgeable_groundtruth_similarity | 1855115 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How many unique ways are there to arrange the letters in the word HATTER? I can't wrap my head around the math to find the answer. I know that if they were all different letters the answer would be 6!. However, I know that the two T's are going to overlap, so it won't be that. I am trying to give myself examples like AAA: it can only be written one way, but if it were 3 different letters there would be 6 arrangements instead. Somehow I need to get a 6/6, so that it becomes 1. If I try it with AAC, half of the permutations disappear, so it must be divided by 2, I guess: 6/2.
ABC  AAC  1
ACB  ACA  2
BCA  ACA  2
BAC  AAC  1
CAB  CAA  3
CBA  CAA  3
I kind of see a pattern here: take the factorial of the number of letters (as if they were all different) and divide by the factorial of the number of equal letters, but I am still confused. An explanation is appreciated. The answer is 360.
Now provide the response and nothing else.
 | Imagine one of the Ts is red and the other is blue. Then write out all 6!=720 arrangements. You will see that while they are all unique, you can create pairs where the only difference is the position of the red and blue Ts. Since the Ts are identical in the original question, you must divide by the number of ways they can be arranged. In this case, $2!=2$. So your answer is $$\frac{6!}{2!}$$ Generally, you can write the answer as the factorial of the total number of letters divided by the product of the factorials of the counts of each distinct letter. In this case $$\frac{6!}{2!\cdot 1!\cdot1!\cdot1!\cdot1!}$$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1855115', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/267862/']} | jdg_379739 |
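A brute-force check of the formula (my own addition; itertools simply enumerates all arrangements and the set collapses the duplicates caused by the two identical T's):

import math
from itertools import permutations

print(math.factorial(6) // math.factorial(2))   # 360, the formula 6!/2!
print(len(set(permutations("HATTER"))))         # 360, counted directly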
stackexchange | llm_judgeable_groundtruth_similarity | 81226 |
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know the DFT, I agree with the formula and everything, but I don't get the intuition on the link between frequency resolution and number of samples. Like, why would I get a higher frequency resolution by taking more samples in my DFT? The formula is: $$X[k]=\sum_{n=0}^{N-1} x[n]\, e^{-j 2\pi k n / N}$$ Here, we are not summing over a finite number of $k$ discrete frequencies; we are just summing over $N$ samples. Here is a great video from 3Blue1Brown, which helped me get a visual intuition for the Fourier transform: https://www.youtube.com/watch?v=spUNpyF58BY&t=915s In this video, we can see how we evaluate our signal against the exp function for different rotational speeds. So I don't get why we could not evaluate our signal at as many frequency points as we want with the DFT. It's weird, because I understand why we go from discrete time to discrete frequencies, but I don't have the intuition and visual representation in my head. Can somebody please help me? Many thanks in advance, Antoine
Now provide the response and nothing else.
 | It has to do with the fact that two sinusoids with a different, but exact integer, number of periods (within some window or vector length) are orthogonal, or, as sampled data, sum to zero under a vector dot product. So take two sinusoids that are very slightly different in frequency (so slight that you couldn't tell at a quick glance that they were any different). Extend both in length, and at some length that slight difference in frequency will eventually cause one sinusoid to go through one more period than the other. Then the difference will be quite noticeable because, due to the difference in the number of full periods between them, at some point they will be 180 degrees out of phase, or exact opposites in amplitude: obviously not the same. This shows that a longer amount of data (more samples at the same sample rate) provides a greater ability to tell two frequencies apart. The slighter the difference, the more data is required to get a full period of difference, and thus dot product cancellation. Go in the opposite direction, and pretty soon two sinusoids of moderately different frequencies will overlap over some short enough segment with less difference than your pencil mark or chalk line widths (or less difference than your sampler's finite quantization noise), thus looking the same, causing you to mistake the two for just one. If you window, you will need around two integer periods of difference over your number of samples, due to the artifacts the windowing introduces to basis vector orthogonality. | {} | {'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/81226', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/61252/']} | jdg_379740 |
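A small numerical illustration of the argument (the sample rate and tone frequencies are my own picks, not from the answer): with a short record the two tones are nearly indistinguishable, while a record long enough to contain one full period of their frequency difference makes them orthogonal.

import numpy as np

fs = 1000.0             # sample rate, Hz
f1, f2 = 100.0, 101.0   # two tones only 1 Hz apart

def normalized_dot(n_samples):
    t = np.arange(n_samples) / fs
    a = np.cos(2 * np.pi * f1 * t)
    b = np.cos(2 * np.pi * f2 * t)
    return np.dot(a, b) / n_samples

# 50 samples (0.05 s): far less than one period of the 1 Hz difference,
# so the dot product is close to 0.5, the value two identical unit cosines give.
print(normalized_dot(50))

# 1000 samples (1 s): exactly one full period of difference has accumulated,
# and the dot product cancels to (numerically) zero: the tones are orthogonal.
print(normalized_dot(1000))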