source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 2527035 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose $n,k \in \mathbb{Z} $ and $0 \leq k \leq n$. Show that ${{n}\choose{k}} = {{n}\choose{n-k}}$ without using ${{n}\choose{k}} = \frac{n!}{k!(n-k)!}$ It's easy to prove by using that definition, but I'm supposed to use the following fact: "If $n$ and $k$ are integers, then ${{n}\choose{k}}$ denotes the number of subsets that can be made by choosing $k$ elements from a set with $n$ elements." So if $S$ is a set and $|S| = n$, then ${{n}\choose{k}} = |\{X: X \in \wp(S)$ and $|X| = k \}|$ and ${{n}\choose{n-k}} = |\{X: X \in \wp(S)$ and $|X| = n - k \}|$ From there, I'm stuck. Any hints? Thank you.
Now provide the response and nothing else.
| Use the following bijection $$\varphi:\{X\ :\ X\in\wp(S), \vert{X}\vert=k \}\longrightarrow\{Y\ :\ Y\in\wp(S), \vert{Y}\vert=n-k\}$$ such that $X\longmapsto X^{c}$ where $X^{c}$ is the complement of $X$ w.r.t. $S$. This map is bijective because complementation is its own inverse (it is an involution on $\wp(S)$), and it restricts to a pairing of the $k$-element subsets with the $(n-k)$-element subsets. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2527035', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/203687/']} | jdg_380741 |
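A small worked instance may make the pairing above easier to see; this is my own addition (the set and numbers are just an example), written as a LaTeX fragment:

```latex
% Complement bijection for S = {1,2,3,4}, k = 1: each 1-subset maps to a 3-subset.
\[
\{1\}\longmapsto\{2,3,4\},\quad
\{2\}\longmapsto\{1,3,4\},\quad
\{3\}\longmapsto\{1,2,4\},\quad
\{4\}\longmapsto\{1,2,3\},
\]
% so the four 1-element subsets are matched one-to-one with the four
% 3-element subsets, consistent with \binom{4}{1} = \binom{4}{3} = 4.
```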
stackexchange | llm_judgeable_groundtruth_similarity | 5226 |
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am doing some problems on an application of decision tree/random forest. I am trying to fit a problem which has numbers as well as strings (such as country name) as features. Now the library, scikit-learn takes only numbers as parameters, but I want to inject the strings as well as they carry a significant amount of knowledge. How do I handle such a scenario? I can convert a string to numbers by some mechanism such as hashing in Python. But I would like to know the best practice on how strings are handled in decision tree problems.
Now provide the response and nothing else.
| In most of the well-established machine learning systems, categorical variables are handled naturally. For example, in R you would use factors, and in WEKA you would use nominal variables. This is not the case in scikit-learn. The decision trees implemented in scikit-learn use only numerical features, and these features are always interpreted as continuous numeric variables. Thus, simply replacing the strings with a hash code should be avoided, because, being treated as a continuous numerical feature, any coding you use will induce an order which simply does not exist in your data. For example, coding ['red','green','blue'] as [1,2,3] would produce weird things like 'red' is lower than 'blue', and if you average a 'red' and a 'blue' you will get a 'green'. Another more subtle example might happen when you code ['low', 'medium', 'high'] with [1,2,3]. In the latter case the ordering might happen to make sense; however, some subtle inconsistencies can arise when 'medium' is not in the middle of 'low' and 'high'. Finally, the answer to your question lies in coding the categorical feature into multiple binary features. For example, you might code ['red','green','blue'] with 3 columns, one for each category, having 1 when the category matches and 0 otherwise. This is called one-hot encoding, binary encoding, one-of-k encoding or whatever. You can check the documentation here for encoding categorical features and feature extraction - hashing and dicts. Obviously one-hot encoding will expand your space requirements, and sometimes it hurts performance as well. | {} | {'log_upvote_score': 7, 'links': ['https://datascience.stackexchange.com/questions/5226', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/8409/']} | jdg_380742 |
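To make the one-hot advice concrete, here is a minimal Python sketch (my own example; the column names, categories and labels are made up) of expanding a string column into indicator columns before fitting a scikit-learn tree:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Toy data: one numeric feature and one categorical (string) feature.
df = pd.DataFrame({
    "age":     [25, 32, 47, 51],
    "country": ["US", "DE", "US", "IN"],
    "label":   [0, 1, 1, 0],
})

# One-hot encode the string column: each category becomes its own 0/1 column,
# so no artificial ordering ("US" < "DE") is introduced.
X = pd.get_dummies(df[["age", "country"]], columns=["country"])
y = df["label"]

clf = DecisionTreeClassifier().fit(X, y)
print(X.columns.tolist())   # e.g. ['age', 'country_DE', 'country_IN', 'country_US']
```

For a reusable pipeline, `sklearn.preprocessing.OneHotEncoder` serves the same purpose and remembers the category-to-column mapping so the identical encoding can be applied to new data.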
stackexchange | llm_judgeable_groundtruth_similarity | 551566 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm just starting to learn about matrices, and during one exercise I got a question to which I have no answer; Due to the fact that I haven't learned it yet... The question is as follows: Let $A = \left[\begin{matrix}1&x\\2&3\end{matrix}\right]$ and $B = \left[\begin{matrix}1&1\\1&2\end{matrix}\right]$ If $AB = BA$ what is the value of $x$? I have absolutely no clue of how to solve this effectively. One way I managed to get a (correct) result was through series of inequalities, which both looked, and was ugly to handle... If anyone could concisely explain how one would go about solving this, and similar problems, it would be greatly appreciated!
Now provide the response and nothing else.
| Since $a, b$ are zeros, they each satisfy the equation: $$x=a: \quad a^2 + a\cdot a + b = 0\iff 2a^2 + b = 0 \iff b = -2a^2$$ $$x = b:\quad b^2 + ab + b = 0$$ Substituting $b = -2a^2$ into the second equation: $$(-2a^2)^2 + a(-2a^2) + -2a^2 = 0 \iff 4a^4 -2a^3 - 2a^2 = 2a^2(2a^2 -a - 1) = 0$$ The right-hand factor itself factors nicely, giving you 3 solutions to $a$, and then you can find the corresponding value for $b$ by using $b = -2a^2$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/551566', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/73241/']} | jdg_380743 |
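For completeness, the factoring the answer above alludes to can be spelled out (my own continuation, using the answer's relation $b=-2a^{2}$):

```latex
\[
2a^{2}\left(2a^{2}-a-1\right)=2a^{2}(2a+1)(a-1)=0
\;\Longrightarrow\;
a\in\left\{0,\;1,\;-\tfrac{1}{2}\right\},
\]
\[
b=-2a^{2}
\;\Longrightarrow\;
(a,b)\in\left\{(0,0),\;(1,-2),\;\left(-\tfrac{1}{2},-\tfrac{1}{2}\right)\right\}.
\]
```

Each pair can be checked directly in $x^{2}+ax+b$; for instance $(1,-2)$ gives $x^{2}+x-2=(x-1)(x+2)$, whose zeros are indeed $a=1$ and $b=-2$.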
stackexchange | llm_judgeable_groundtruth_similarity | 1986400 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I notice that a number of Java exception classes differ only in the name of the class and do not add any new functionality. Most exceptions for example just seem to override Exception() or Exception(String message) . This goes against the tenets of inheritance ie:- inherit to add new functionality. What are some good reasons to create a new Exception class?
Now provide the response and nothing else.
| Exceptions are a special case. In their case, the inheritance is not to add new functionality, but to add new classes of errors. This lets your code catch particular kinds of errors while ignoring others. Say you are writing a large project. You have a Data component, and you have a Display component. They can both fail in various ways, and you want to throw exceptions for these failures. The Display component doesn't care about exceptions arising from the Data component, though, and vice versa. If all the classes just threw Exception , there'd be no way to figure out where the exception came from. However, if you subclass Exception with DataException and GraphicsException , even though they don't add new functionality, you can now throw and catch those particular types of exceptions, i.e. a graphics component can catch GraphicsException and not have to deal with data exceptions. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1986400', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/162758/']} | jdg_380744 |
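The same idea is easy to demonstrate in any language with exception hierarchies; below is a small Python sketch (my own illustration, with made-up class and function names) of subclasses that add no new behaviour but let a caller catch one subsystem's failures while ignoring the other's:

```python
class DataError(Exception):
    """Raised by the data component."""

class GraphicsError(Exception):
    """Raised by the display component."""

def load_record(record_id):
    # Hypothetical failure in the data component.
    raise DataError(f"record {record_id} not found")

try:
    load_record(42)
except GraphicsError:
    print("display problem - retry rendering")   # not reached for this error
except DataError as exc:
    print(f"data problem - {exc}")                # handled here
```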
stackexchange | llm_judgeable_groundtruth_similarity | 327417 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How would I go about this? I have a mostly static site, which is being hosted on a cheap web-host, which only allows FTP access to the hosting. The site is tracked in git. I am using OS X. I would like to upload a new version of the site by simply doing cap deploy
Now provide the response and nothing else.
| We use capistrano to deploy our site, which is written in PHP. From memory (I'm not at work right now) we overloaded deploy and used rsync to sync over sftp. Something like this: desc "Sync" namespace :deploy do desc "Sync remote by default" task :default do remote.default end namespace :remote do desc "Sync to remote server" task :default do `rsync -avz "/path/to/webapp" "#{remote_host}:#{remote_root}/path/to/webapp"` end end end I'm sure you could replace rsync with whatever ftp program you prefer and it should work fine. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/327417', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/745/']} | jdg_380745 |
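If you only want the "one command pushes the site" behaviour and not Capistrano itself, the same rsync idea can be wrapped in a few lines of Python; this is a rough sketch (host, paths and flags are placeholders, and it assumes the host accepts rsync over SSH/SFTP like the setup above):

```python
import subprocess

REMOTE = "user@your-host.example"      # placeholder SSH/SFTP login
LOCAL_DIR = "public/"                  # local checkout/build of the static site
REMOTE_DIR = "/var/www/mysite/"        # placeholder docroot on the host

def deploy():
    # Mirror the local directory to the server, deleting files removed locally.
    subprocess.run(
        ["rsync", "-avz", "--delete", LOCAL_DIR, f"{REMOTE}:{REMOTE_DIR}"],
        check=True,
    )

if __name__ == "__main__":
    deploy()
```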
stackexchange | llm_judgeable_groundtruth_similarity | 383734 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I need files created by apache2 to have umask 002 , i.e. group rw, by default. I've tried putting umask 002 in /etc/apache2/envvars and although this script does get executed as part of apache start up ( apache2ctl graceful ) the umask has no effect. Presumably somewhere further in the start up process (e.g. when the user is downgraded from root to www-data ) there's somewhere better to put this. I've read posts about Fedora and one suggesting putting umask in /etc/init.d/apache2 but neither of these apply/work in Debian (Squeeze). Can you help?
Now provide the response and nothing else.
| To be sure that the umask setting takes effect please use a simple test and do not use any other web application for this. It might be the case that these application change the rights independently from the umask setting of Apache. Simple test PHP script: <?phpif ($fp = fopen(time() . '.txt', 'w')) { fwrite($fp, 'This is a simple test.'); fclose($fp); echo "done";} else { echo "error - cannot create file";}?> Take care that the user www-data has write access to the folder where you have installed this simple test file. To have the new umask running, check if the file /etc/apache2/envvars will be used within your Apache start file /etc/init.d/apache2 : ...PIDFILE=$(. /etc/apache2/envvars && echo $APACHE_PID_FILE)... Set your umask in /etc/apache2/envvars : ...# umask 002 to create files with 0664 and folders with 0775umask 002 Restart your Apache : service apache2 restart Check the difference : #> ls -l *.txt-rw-rw-r-- 1 www-data www-data 14 2012-05-01 15:56 1335880583.txt-rw-r--r-- 1 www-data www-data 14 2012-05-01 15:55 1335880540.txt | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/383734', 'https://serverfault.com', 'https://serverfault.com/users/96883/']} | jdg_380746 |
stackexchange | llm_judgeable_groundtruth_similarity | 50848892 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I just read the https://laravel.com/docs/5.6/passport documentation and I have some doubts that hopefully someone could help me with: First, some context, I want to use Passport as a way to provide Oauth authentication for my mobile app (first-party app). When I use php artisan passport:client --password I get back a Client ID and a Client Secret. Does this value have to be fixed on my app? for example storing them hardcoded or as a "settings" file? If the values shouldn't be stored then how should it work? To register a user to my app I use: $user->createToken('The-App')->accessToken; I get that the accessToken will be the one used for sending on all my requests as a Header (Authorization => Bearer $accessToken) but what exactly is "The-App" value for? For login the user I'm using the URL: http://example.com/oauth/token and sending as parameters: { "username": "[email protected]", "password": "userpassword", "grant_type": "password", "client_id": 1, // The Client ID that I got from the command (question 1) "client_secret": "Shhh" // The Client Secret that I got from the command (question 1)} When I login the user using the previous endpoint I get back a refresh_token, I read that I could refresh the token through http://example.com/oauth/token/refresh but I try to request the refresh I got Error 419, I removed the url oauth/token/refresh from the csrf verification and now I get back "message": "Unauthenticated." , I'm making the following request: Content-Type: x-www-form-urlencodedgrant_type: refresh_tokenrefresh_token: the-refresh-token // The Refresh Token that I got from the command (question 3) client_id: 1 // The Client ID that I got from the command (question 1) client_secret: Shhh // The Client Secret that I got from the command (question 1)scope: '' Should I use this endpoint? or is not necessary given the app I'm trying to develop. Finally, there are a lot of endpoints that I get from passport that I don't think I will use for example: oauth/clients* , oauth/personal-access-tokens* is there a way to remove them from the endpoints published by passport? Thanks a lot for your help!
Now provide the response and nothing else.
| If you are consuming your own api then you don't need to call http://example.com/oauth/token for user login because then you need to store client_id and client_secret at app side. Better you create an api for login and there you can check the credentials and generate the personal token. public function login(Request $request){ $credentials = $request->only('email', 'password'); if (Auth::attempt($credentials)) { // Authentication passed... $user = Auth::user(); $token = $user->createToken('Token Name')->accessToken; return response()->json($token); }} Finally, there are a lot of endpoints that I get from passport that I don't think I will use for example: oauth/clients*, oauth/personal-access-tokens* is there a way to remove them from the endpoints published by passport? You need to remove Passport::routes(); from AuthServiceProvider and manually put only required passport routes. I think you only need oauth/token route. what exactly is "The-App" value for? if you check oauth_access_tokens table it has name field. $user->createToken('Token Name')->accessToken; here the "Token Name" stored in name field. How to use Laravel Passport with Password Grant Tokens? To generate password grant token you have to store client_id and client_secret at app side (not recommended, check this ) and suppose if you have to reset the client_secret then the old version app stop working, these are the problems. To generate password grant token you have to call this api like you mention in step 3. $http = new GuzzleHttp\Client;$response = $http->post('http://your-app.com/oauth/token', [ 'form_params' => [ 'grant_type' => 'password', 'client_id' => 'client-id', 'client_secret' => 'client-secret', 'username' => '[email protected]', 'password' => 'my-password', 'scope' => '', ],]);return json_decode((string) $response->getBody(), true); Generate token from refresh_token $http = new GuzzleHttp\Client;$response = $http->post('http://your-app.com/oauth/token', [ 'form_params' => [ 'grant_type' => 'refresh_token', 'refresh_token' => 'the-refresh-token', 'client_id' => 'client-id', 'client_secret' => 'client-secret', 'scope' => '', ],]);return json_decode((string) $response->getBody(), true); You can look this https://laravel.com/docs/5.6/passport#implicit-grant-tokens too. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50848892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1214722/']} | jdg_380747 |
stackexchange | llm_judgeable_groundtruth_similarity | 6685249 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've done some jQuery in the past, but I am completely stuck on this. I know about the pros and cons of using synchronous ajax calls, but here it will be required. The remote page is loaded (controlled with firebug), but no return is shown. What should I do different to make my function to return properly? function getRemote() { var remote; $.ajax({ type: "GET", url: remote_url, async: false, success : function(data) { remote = data; } }); return remote;}
Now provide the response and nothing else.
| As you're making a synchronous request, that should be function getRemote() { return $.ajax({ type: "GET", url: remote_url, async: false }).responseText;} Example - http://api.jquery.com/jQuery.ajax/#example-3 PLEASE NOTE: Setting async property to false is deprecated and in the process of being removed ( link ). Many browsers including Firefox and Chrome have already started to print a warning in the console if you use this: Chrome: Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the end user's experience. For more help, check https://xhr.spec.whatwg.org/ . Firefox: Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the end user’s experience. For more help http://xhr.spec.whatwg.org/ | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/6685249', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/198128/']} | jdg_380748 |
stackexchange | llm_judgeable_groundtruth_similarity | 4584609 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Which is bigger $$ \int_0^{\frac{\pi}{2}}\frac{\sin x}{1+x^2}dx$$ or $$ \int_0^{\frac{\pi}{2}}\frac{\cos x}{1+x^2}dx~?$$ I let $x=\frac{\pi}{2}-t$ in the second integral, and I obtain this $$\int_0^{\frac{\pi}{2}}\frac{\sin x}{1+(\frac{\pi}{2}-x)^2}dx$$ But it is still to decide which is the bigger.
Now provide the response and nothing else.
| $g(x) = 1/(1+x^2)$ is strictly decreasing on $[0, \pi/2]$, and therefore $$ \int_0^{\pi/2} (\cos(x)-\sin(x))g(x) \, dx = \int_0^{\pi/4} (\cos(x)-\sin(x))g(x) \, dx + \int_{\pi/4}^{\pi/2} (\cos(x)-\sin(x))g(x) \, dx\\ \underset{(*)}{=} \int_0^{\pi/4} (\cos(x)-\sin(x))g(x) \, dx + \int_{0}^{\pi/4} (\sin(x)-\cos(x))g(\frac \pi 2 - x) \, dx \\= \int_0^{\pi/4} \left( \cos(x)-\sin(x)\right)(g(x) - g(\pi/2-x)) \, dx > 0 \, .$$ In $(*)$ I have substituted $x$ by $\pi/2 - x$ in the second integral, and the integral in the last line is positive since both factors are strictly positive on $[0, \pi/4)$. This shows that $$\int_0^{\pi/2} \cos(x)g(x) \, dx > \int_0^{\pi/2} \sin(x)g(x) \, dx$$ for any strictly decreasing function $g$ on $[0, \pi/2]$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/4584609', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/17980/']} | jdg_380749 |
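A quick numerical sanity check of the conclusion is easy to run; this is my own addition and assumes SciPy is available:

```python
import numpy as np
from scipy.integrate import quad

g = lambda x: 1.0 / (1.0 + x**2)

# Integrate sin(x)*g(x) and cos(x)*g(x) over [0, pi/2].
sin_int, _ = quad(lambda x: np.sin(x) * g(x), 0.0, np.pi / 2)
cos_int, _ = quad(lambda x: np.cos(x) * g(x), 0.0, np.pi / 2)

print(f"integral of sin(x)/(1+x^2): {sin_int:.6f}")
print(f"integral of cos(x)/(1+x^2): {cos_int:.6f}")
print("cos integral is larger:", cos_int > sin_int)   # expected: True
```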
stackexchange | llm_judgeable_groundtruth_similarity | 3653586 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If we have a table like this: Grp ValueGrp1 2Grp1 5Grp1 3Grp2 3Grp2 -5Grp2 -2Grp3 4Grp3 0Grp3 1Grp4 -2Grp4 -4Grp5 7Grp5 NULLGrp6 NULLGrp6 NULLGrp7 -1Grp7 10 How can we group/multiply to get this? GrpID ValueGrp1 30Grp2 30Grp3 0Grp4 8Grp5 7Grp6 NULLGrp7 -10
Now provide the response and nothing else.
| Multiplying row values is the same as adding logarithms of row values The trick is dealing with zeros and nulls. Ok, checked now DECLARE @foo TABLE (GrpID varchar(10), Value float)INSERt @foo (GrpID, Value)SELECT 'Grp1', 2UNION ALL SELECT 'Grp1', 5UNION ALL SELECT 'Grp1', 3UNION ALL SELECT 'Grp2', 3UNION ALL SELECT 'Grp2', -5UNION ALL SELECT 'Grp2', -2UNION ALL SELECT 'Grp3', 4UNION ALL SELECT 'Grp3', 0UNION ALL SELECT 'Grp3', 1UNION ALL SELECT 'Grp4', -2UNION ALL SELECT 'Grp4', -4UNION ALL SELECT 'Grp5', 7UNION ALL SELECT 'Grp5', NULLUNION ALL SELECT 'Grp6', NULLUNION ALL SELECT 'Grp6', NULLUNION ALL SELECT 'Grp7', -1UNION ALL SELECT 'Grp7', 10 SELECT GrpID, CASE WHEN MinVal = 0 THEN 0 WHEN Neg % 2 = 1 THEN -1 * EXP(ABSMult) ELSE EXP(ABSMult) ENDFROM ( SELECT GrpID, --log of +ve row values SUM(LOG(ABS(NULLIF(Value, 0)))) AS ABSMult, --count of -ve values. Even = +ve result. SUM(SIGN(CASE WHEN Value < 0 THEN 1 ELSE 0 END)) AS Neg, --anything * zero = zero MIN(ABS(Value)) AS MinVal FROM @foo GROUP BY GrpID ) foo | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3653586', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/414295/']} | jdg_380750 |
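The sign/zero/NULL bookkeeping may be easier to follow outside SQL; here is a small Python sketch (my own illustration of the same trick, not part of the answer) that mirrors the query's logic:

```python
import math

def signed_product(values):
    """Multiply a group's values the way the SQL above does:
    ignore NULLs, short-circuit on zero, track the sign separately,
    and multiply magnitudes via exp(sum(log(abs(x))))."""
    vals = [v for v in values if v is not None]   # SUM()/MIN() ignore NULLs
    if not vals:
        return None                               # all-NULL group -> NULL
    if any(v == 0 for v in vals):
        return 0.0                                # anything * 0 = 0
    negatives = sum(1 for v in vals if v < 0)     # even count -> positive result
    magnitude = math.exp(sum(math.log(abs(v)) for v in vals))
    return -magnitude if negatives % 2 else magnitude

# Matches the expected output for a few of the groups above (within float rounding):
print(signed_product([2, 5, 3]))      # Grp1 -> about 30
print(signed_product([3, -5, -2]))    # Grp2 -> about 30
print(signed_product([7, None]))      # Grp5 -> about 7
print(signed_product([None, None]))   # Grp6 -> None
```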
stackexchange | llm_judgeable_groundtruth_similarity | 1327064 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to serialize, and I am facing a problem with an abstact class. I googled for an answer, and I found this blogitem .I tried that and that work. Ok, very nice. But check out the comment on the item: This methodology seems to be hiding the true problem and that is an inaccurate implementation of OO design patterns, namely the factory pattern. Having to change the base class to reference any new factory class is self-defeating. With a little after-thought, the code can be changed to where any derived type can be associated with the abstract class (through the miracle of interfaces) and no XmlInclude would be required. I suggest further research into factory patterns which seems to be what you are trying to implement here. What is commenter talking about? He is kinda vague. Can someone explain it more in detail (with an example)? Or is he just talking nonsense? Update (after reading the first answer) Why does the commentor talk about factory pattern and the code can be changed to where any derived type can be associated with the abstract class (through the miracle of interfaces) ? Does he want to make an interface, like this? public interface IWorkaround{ void Method();}public class SomeBase : IWorkaround{ public void Method() { // some logic here }}public class SomeConcrete : SomeBase, IWorkaround{ public new void Method() { base.Method(); }}
Now provide the response and nothing else.
| He is both right and wrong at the same time. With things like BinaryFormatter , this is a non-issue; the serialized stream contains full type metadata, so if you have: [Serializable] abstract class SomeBase {}[Serializable] class SomeConcrete : SomeBase {}...SomeBase obj = new SomeConcrete(); and serialize obj , then it includes "I'm a SomeConcrete " in the stream. This makes life simple, but is verbose, especially when repeated. It is also brittle, as it demands the same implementation when deserializing; bad for either different client/server implementations, or for long-term storage. With XmlSerializer (which I guess the blog is talking about), there is no metadata - but the element names (or the xsi:type attributes) are used to help identify which is used. For this to work, the serializer needs to know in advance what names map to which types. The simplest way to do this is to decorate the base-class with the subclasses we know about. The serializer can then inspect each of these (and any additional xml-specific attributes) to figure out that when it sees a <someConcreteType> element, that maps to a SomeConcrete instance (note that the names don't need to match, so it can't just look for it by name). [XmlInclude(typeof(SomeConcrete))]public abstract class SomeBase {}public class SomeConcrete : SomeBase {}...SomeBase obj = new SomeConcrete();XmlSerializer ser = new XmlSerializer(typeof(SomeBase));ser.Serialize(Console.Out, obj); However, if he is a purist (or the data isn't available), then there is an alternative; you can specify all this data separately via the overloaded constructor to XmlSerializer . For example, you might lookup the set of known subtypes from configuration (or maybe an IoC container), and setup the constructor manually. This isn't very tricky, but it is tricky enough that it isn't worth it unless you actually need it . public abstract class SomeBase { } // no [XmlInclude]public class SomeConcrete : SomeBase { }...SomeBase obj = new SomeConcrete();Type[] extras = {typeof(SomeConcrete)}; // from configXmlSerializer ser = new XmlSerializer(typeof(SomeBase), extras);ser.Serialize(Console.Out, obj); Additionally, with XmlSerializer if you go the custom ctor route, it is important to cache and re-use the XmlSerializer instance; otherwise a new dynamic assembly is loaded per usage - very expensive (they can't be unloaded). If you use the simple constructor it caches and re-uses the model, so only a single model is used. YAGNI dictates that we should choose the simplest option; using [XmlInclude] removes the need for a complex constructor, and removes the need to worry about caching the serializer. The other option is there and is fully supported, though. Re your follow-up questions: By "factory pattern", he's talking about the case where your code doesn't know about SomeConcrete , perhaps due to IoC/DI or similar frameworks; so you might have: SomeBase obj = MyFactory.Create(typeof(SomeBase), someArgsMaybe); Which figures out the appropriate SomeBase concrete implementation, instantiates it and hands it back. Obviously, if our code doesn't know about the concrete types (because they are only specified in a config file), then we can't use XmlInclude ; but we can parse the config data and use the ctor approach (as above). In reality, most times XmlSerializer is used with POCO/DTO entities, so this is an artificial concern. And re interfaces; same thing, but more flexible (an interface doesn't demand a type hierarchy). But XmlSerializer doesn't support this model. 
Frankly, tough; that isn't its job. Its job is to allow you to store and transport data. Not implementation. Any xml-schema generated entities won't have methods. Data is concrete, not abstract. As long as you think "DTO", the interface debate is a non-issue. People who are vexed by not being able to use interfaces on their boundary haven't embraced separation of concerns, i.e. they are trying to do: Client runtime entities <---transport---> Server runtime entities rather than the less restrictive Client runtime entities <---> Client DTO <--- transport---> Server DTO <---> Server runtime entities Now, in many (most?) cases the DTO and entities can be the same; but if you are trying to do something that the transport doesn't like, introduce a DTO; don't fight the serializer. The same logic applies when people are struggling to write their object: class Person { public string AddressLine1 {get;set;} public string AddressLine2 {get;set;}} as xml of the form: <person> <address line1="..." line2="..."/></person> If you want this, intoduce a DTO that corresponds to the transport, and map between your entity and the DTO: // (in a different namespace for the DTO stuff)[XmlType("person"), XmlRoot("person")]public class Person { [XmlElement("address")] public Address Address {get;set;}}public class Address { [XmlAttribute("line1")] public string Line1 {get;set;} [XmlAttribute("line2")] public string Line2 {get;set;}} This also applies to all those other niggles like: why do I need a parameterless constructor? why do I need a setter for my collection properties? why can't I use an immutable type? why must my type be public? how do I handle complex versioning? how do I handle different clients with different data layouts? why can't I use interfaces? etc, etc You don't always have these problems; but if you do - introduce a DTO (or several) and your problems go away. Taking this back to the question about interfaces; the DTO types might not be interface-based, but your runtime/business types can be. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1327064', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/59119/']} | jdg_380751 |
stackexchange | llm_judgeable_groundtruth_similarity | 244706 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In a video lecture a professor is explaining how to prevent an electric arc caused by the switching operation in an RL circuit. Here is the section (at around 18:26) where he explains it: https://youtu.be/FYDLG_A2P1I?t=1106 I know that a flyback diode is used for this purpose: ... yet in his example he uses a very large resistor to create a path for the reverse current. My questions are: If a very large resistor is used, wouldn't it be a problem if a continuous switching is very fast comparing to LR time constant? If a diode is used, would the inductor burn if the back emf current is too high? In almost all examples they just use a diode without mentioning any possibility about this.
Now provide the response and nothing else.
| You need to be aware that the professor in the video is skipping over a few things. Note that at about 22 minutes, he writes the equation for the current through the resistor $$i = I_0 e^{-\frac{R}{L}t}$$ but conveniently fails to evaluate the initial voltage across the resistor, $$ v_0 = I_0 R $$ In other words, for his proposed 10,000 ohm resistor, a 1 amp current will provide a 10 kV voltage spike. Using a resistor instead of a diode is one of those entertaining tradeoffs. A diode will limit the switch voltage to only a bit over the source voltage. This is extremely useful when the switch is a solid-state device such as a MOSFET which absolutely cannot handle kV spikes. On the other hand, the voltage limit restricts dI/dt, so it takes a long time for the diode to bleed off the current. A resistor will allow much higher pulse voltages, but they will typically be much shorter. So: 1-) If a very large resistor is used, wouldn't it be a problem if a continuous switching is very fast comparing to LR time constant? Yup. You don't do it with switching regulators. The professor was referring to very low switch rates, such as applying and removing power from a circuit. Very old school. 2-) If a diode is used, would the inductor burn if the back emf current is too high? In almost all examples they just use a diode without mentioning any possibility about this. Nope. The "back emf current" is simply the value of the current which existed when the switch was opened. The diode needs to be adequately sized, though - it must be able to carry whatever current the inductor was carrying. | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/244706', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/16307/']} | jdg_380752 |
stackexchange | llm_judgeable_groundtruth_similarity | 10632251 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have three files . The contents of main.cpp are #include<iostream>#include<QString>#include "util.h"int main(){ using Util::convert2QString; using namespace std; int n =22; QString tmp = convert2QString<int>(n); return 0;} util.h namespace Util{ template<class T> QString convert2QString(T type , int digits=0);} util.cpp namespace Util{ template<class T> QString convert2QString(T type, int digits=0) { using std::string; string temp = (boost::format("%1%") % type).str(); return QString::fromStdString(temp); }} When I try to compile these files with following command I get undefined reference error vickey@tb:~/work/trash/template$ g++ main.cpp util.cpp -lQtGui -lQtCore -I. -I/usr/local/Trolltech/Qt-4.8.0/include/QtCore -I/usr/local/Trolltech/Qt-4.8.0/include/QtGui -I/usr/local/Trolltech/Qt-4.8.0/include/tmp/cca9oU6Q.o: In function `main':main.cpp:(.text+0x22): undefined reference to `QString Util::convert2QString<int>(int, int)'collect2: ld returned 1 exit status Is there something wrong with the template declaration or implementation ? why M I getting these linking errors :?
Now provide the response and nothing else.
| The implementation of a non-specialized template must be visible to a translation unit that uses it. The compiler must be able to see the implementation in order to generate code for all specializations in your code. This can be achieved in two ways: 1) Move the implementation inside the header. 2) If you want to keep it separate, move it into a different header which you include in your original header (note that the default argument is stated only once, in the declaration, and the Boost.Format placeholder is %1%): util.h namespace Util{ template<class T> QString convert2QString(T type, int digits=0);}#include "util_impl.h" util_impl.h namespace Util{ template<class T> QString convert2QString(T type, int digits) { using std::string; string temp = (boost::format("%1%") % type).str(); return QString::fromStdString(temp); }} | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/10632251', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1203565/']} | jdg_380753 |
stackexchange | llm_judgeable_groundtruth_similarity | 1232794 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Fix an algebraic number $\beta$ and consider a complex number $\alpha$ which admits multiple representations in base $\beta$. If one representation of $\alpha$ is ultimately periodic, must every other representation of $\alpha$ be ultimately periodic ? Bonus question : Does this depend on the choice of the set of digits? Edit : John Bentin showed that this is false in general, even with $\beta \in \Bbb{Q}$. Is this true at least for $\beta$ algebraic integer? Edit 2 : This question came up when trying to solve this problem of mine. All I know so far is that if $\beta$ is a Pisot integer, i.e. if it is a real algebraic integer greater than $1$ with every conjugate lying in the unit circle, then $\alpha \in \Bbb{Q}(\beta)$ if and only if it has an ultimately periodic expansion (which is stronger than my question).
Now provide the response and nothing else.
| It's the action $Sym(S) \times S \to S$ given by $(f,x) \mapsto f(x)$. Every action $G \times S \to S$ defines a homomorphism $G \to Sym(S)$ and vice-versa. The natural action defined above corresponds to the identity homomorphism. This makes it "natural". An "unnatural" group action $Sym(S) \times S \to S$ could for instance correspond to a nontrivial automorphism of $Sym(S)$ induced by renaming the elements of $S$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1232794', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/65389/']} | jdg_380754 |
stackexchange | llm_judgeable_groundtruth_similarity | 12193779 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to write trycatch code to deal with error in downloading from the web. url <- c( "http://stat.ethz.ch/R-manual/R-devel/library/base/html/connections.html", "http://en.wikipedia.org/wiki/Xz")y <- mapply(readLines, con=url) These two statements run successfully. Below, I create a non-exist web address: url <- c("xxxxx", "http://en.wikipedia.org/wiki/Xz") url[1] does not exist. How does one write a trycatch loop (function) so that: When the URL is wrong, the output will be: "web URL is wrong, can't get". When the URL is wrong, the code does not stop, but continues to download until the end of the list of URLs?
Now provide the response and nothing else.
| Well then: welcome to the R world ;-) Here you go Setting up the code urls <- c( "http://stat.ethz.ch/R-manual/R-devel/library/base/html/connections.html", "http://en.wikipedia.org/wiki/Xz", "xxxxx")readUrl <- function(url) { out <- tryCatch( { # Just to highlight: if you want to use more than one # R expression in the "try" part then you'll have to # use curly brackets. # 'tryCatch()' will return the last evaluated expression # in case the "try" part was completed successfully message("This is the 'try' part") readLines(con=url, warn=FALSE) # The return value of `readLines()` is the actual value # that will be returned in case there is no condition # (e.g. warning or error). # You don't need to state the return value via `return()` as code # in the "try" part is not wrapped inside a function (unlike that # for the condition handlers for warnings and error below) }, error=function(cond) { message(paste("URL does not seem to exist:", url)) message("Here's the original error message:") message(cond) # Choose a return value in case of error return(NA) }, warning=function(cond) { message(paste("URL caused a warning:", url)) message("Here's the original warning message:") message(cond) # Choose a return value in case of warning return(NULL) }, finally={ # NOTE: # Here goes everything that should be executed at the end, # regardless of success or error. # If you want more than one expression to be executed, then you # need to wrap them in curly brackets ({...}); otherwise you could # just have written 'finally=<expression>' message(paste("Processed URL:", url)) message("Some other message at the end") } ) return(out)} Applying the code > y <- lapply(urls, readUrl)Processed URL: http://stat.ethz.ch/R-manual/R-devel/library/base/html/connections.htmlSome other message at the endProcessed URL: http://en.wikipedia.org/wiki/XzSome other message at the endURL does not seem to exist: xxxxxHere's the original error message:cannot open the connectionProcessed URL: xxxxxSome other message at the endWarning message:In file(con, "r") : cannot open file 'xxxxx': No such file or directory Investigating the output > head(y[[1]])[1] "<!DOCTYPE html PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">" [2] "<html><head><title>R: Functions to Manipulate Connections</title>" [3] "<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\">"[4] "<link rel=\"stylesheet\" type=\"text/css\" href=\"R.css\">" [5] "</head><body>" [6] "" > length(y)[1] 3> y[[3]][1] NA Additional remarks tryCatch tryCatch returns the value associated to executing expr unless there's an error or a warning. In this case, specific return values (see return(NA) above) can be specified by supplying a respective handler function (see arguments error and warning in ?tryCatch ). These can be functions that already exist, but you can also define them within tryCatch() (as I did above). The implications of choosing specific return values of the handler functions As we've specified that NA should be returned in case of error, the third element in y is NA . If we'd have chosen NULL to be the return value, the length of y would just have been 2 instead of 3 as lapply() will simply "ignore" return values that are NULL . Also note that if you don't specify an explicit return value via return() , the handler functions will return NULL (i.e. in case of an error or a warning condition). 
"Undesired" warning message As warn=FALSE doesn't seem to have any effect, an alternative way to suppress the warning (which in this case isn't really of interest) is to use suppressWarnings(readLines(con=url)) instead of readLines(con=url, warn=FALSE) Multiple expressions Note that you can also place multiple expressions in the "actual expressions part" (argument expr of tryCatch() ) if you wrap them in curly brackets (just like I illustrated in the finally part). | {} | {'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/12193779', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1631694/']} | jdg_380755 |
stackexchange | llm_judgeable_groundtruth_similarity | 782835 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
All, I'm having a bit of a weird issue. I have a legacy Postfix smtp server that just yesterday started acting as a relay for spam. From what I can tell, the issue is that whatever spam bot is doing it, figured out that they could send email using [email protected] to send through my server to anywhere. I cannot, for the life of me, figure out how to prevent this. The mail server, at this point, only exists to either accept mail for a few internal aliases, our hosted mailman lists (also aliases), and act as an SMTP server for a couple users that still have it set as their primary SMTP server. So, really, I should be able to tell the server "only accept SASL auth'd clients, OR only accept mail destined for local accounts", yes? Other info, running Mac OS X Server 10.6.8, Postfix 2.5.14. Dovecot is disabled. Here's my postconf, scrubbed slightly. I don't get what's missing here... but I'm guessing it's something obvious... 2bounce_notice_recipient = postmasteraccess_map_reject_code = 554address_verify_default_transport = $default_transportaddress_verify_local_transport = $local_transportaddress_verify_map = address_verify_negative_cache = yesaddress_verify_negative_expire_time = 3daddress_verify_negative_refresh_time = 3haddress_verify_poll_count = 3address_verify_poll_delay = 3saddress_verify_positive_expire_time = 31daddress_verify_positive_refresh_time = 7daddress_verify_relay_transport = $relay_transportaddress_verify_relayhost = $relayhostaddress_verify_sender = $double_bounce_senderaddress_verify_sender_dependent_relayhost_maps = $sender_dependent_relayhost_mapsaddress_verify_service_name = verifyaddress_verify_transport_maps = $transport_mapsaddress_verify_virtual_transport = $virtual_transportalias_database = hash:/etc/aliasesalias_maps = hash:/etc/aliases,hash:/var/mailman/data/aliasesallow_mail_to_commands = alias, forwardallow_mail_to_files = alias, forwardallow_min_user = noallow_percent_hack = yesallow_untrusted_routing = noalternate_config_directories = always_bcc = anvil_rate_time_unit = 60sanvil_status_update_time = 600sappend_at_myorigin = yesappend_dot_mydomain = yesapplication_event_drain_time = 100sauthorized_flush_users = static:anyoneauthorized_mailq_users = static:anyoneauthorized_submit_users = static:anyonebackwards_bounce_logfile_compatibility = yesberkeley_db_create_buffer_size = 16777216berkeley_db_read_buffer_size = 131072best_mx_transport = biff = nobody_checks = body_checks_size_limit = 51200bounce_notice_recipient = postmasterbounce_queue_lifetime = 5dbounce_service_name = bouncebounce_size_limit = 50000bounce_template_file = broken_sasl_auth_clients = nocanonical_classes = envelope_sender, envelope_recipient, header_sender, header_recipientcanonical_maps = check_for_od_forward = yescleanup_service_name = cleanupcommand_directory = /usr/sbincommand_execution_directory = command_expansion_filter = 1234567890!@%-_=+:,./abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZcommand_time_limit = 1000sconfig_directory = /etc/postfixconnection_cache_protocol_timeout = 5sconnection_cache_service_name = scacheconnection_cache_status_update_time = 600sconnection_cache_ttl_limit = 2scontent_filter = smtp-amavis:[127.0.0.1]:10024cyrus_sasl_config_path = daemon_directory = /usr/libexec/postfixdaemon_timeout = 18000sdata_directory = /var/lib/postfixdebug_peer_level = 2debug_peer_list = default_database_type = hashdefault_delivery_slot_cost = 5default_delivery_slot_discount = 50default_delivery_slot_loan = 3default_destination_concurrency_failed_cohort_limit = 
1default_destination_concurrency_limit = 20default_destination_concurrency_negative_feedback = 1default_destination_concurrency_positive_feedback = 1default_destination_rate_delay = 0sdefault_destination_recipient_limit = 50default_extra_recipient_limit = 1000default_minimum_delivery_slots = 3default_privs = nobodydefault_process_limit = 100default_rbl_reply = $rbl_code Service unavailable; $rbl_class [$rbl_what] blocked using $rbl_domain${rbl_reason?; $rbl_reason}default_recipient_limit = 20000default_recipient_refill_delay = 5sdefault_recipient_refill_limit = 100default_transport = smtpdefault_verp_delimiters = +=defer_code = 450defer_service_name = deferdefer_transports = delay_logging_resolution_limit = 2delay_notice_recipient = postmasterdelay_warning_time = 0hdeliver_lock_attempts = 20deliver_lock_delay = 1sdestination_concurrency_feedback_debug = nodetect_8bit_encoding_header = yesdisable_dns_lookups = nodisable_mime_input_processing = nodisable_mime_output_conversion = nodisable_verp_bounces = nodisable_vrfy_command = yesdont_remove = 0double_bounce_sender = double-bounceduplicate_filter_limit = 1000empty_address_recipient = MAILER-DAEMONempty_address_relayhost_maps_lookup_key = <>enable_original_recipient = yesenable_server_options = yeserror_notice_recipient = postmastererror_service_name = errorexecution_directory_expansion_filter = 1234567890!@%-_=+:,./abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZexpand_owner_alias = noexport_environment = TZ MAIL_CONFIG LANGfallback_transport = fallback_transport_maps = fast_flush_domains = $relay_domainsfast_flush_purge_time = 7dfast_flush_refresh_time = 12hfault_injection_code = 0flush_service_name = flushfork_attempts = 5fork_delay = 1sforward_expansion_filter = 1234567890!@%-_=+:,./abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZforward_path = $home/.forward${recipient_delimiter}${extension}, $home/.forwardfrozen_delivered_to = yeshash_queue_depth = 1hash_queue_names = deferred, deferheader_address_token_limit = 10240header_checks = pcre:/etc/postfix/custom_header_checksheader_size_limit = 102400helpful_warnings = yeshome_mailbox = hopcount_limit = 50html_directory = /usr/share/doc/postfix/htmlignore_mx_lookup_error = noimport_environment = MAIL_CONFIG MAIL_DEBUG MAIL_LOGTAG TZ XAUTHORITY DISPLAY LANG=Cin_flow_delay = 1sinet_interfaces = allinet_protocols = ipv4initial_destination_concurrency = 5internal_mail_filter_classes = invalid_hostname_reject_code = 554ipc_idle = 5sipc_timeout = 3600sipc_ttl = 1000sline_length_limit = 2048lmtp_bind_address = lmtp_bind_address6 = lmtp_body_checks = lmtp_cname_overrides_servername = nolmtp_connect_timeout = 0slmtp_connection_cache_destinations = lmtp_connection_cache_on_demand = yeslmtp_connection_cache_time_limit = 2slmtp_connection_reuse_time_limit = 300slmtp_data_done_timeout = 600slmtp_data_init_timeout = 120slmtp_data_xfer_timeout = 180slmtp_defer_if_no_mx_address_found = nolmtp_destination_concurrency_failed_cohort_limit = $default_destination_concurrency_failed_cohort_limitlmtp_destination_concurrency_limit = $default_destination_concurrency_limitlmtp_destination_concurrency_negative_feedback = $default_destination_concurrency_negative_feedbacklmtp_destination_concurrency_positive_feedback = $default_destination_concurrency_positive_feedbacklmtp_destination_rate_delay = $default_destination_rate_delaylmtp_destination_recipient_limit = $default_destination_recipient_limitlmtp_discard_lhlo_keyword_address_maps = lmtp_discard_lhlo_keywords = lmtp_enforce_tls = nolmtp_generic_maps = 
lmtp_header_checks = lmtp_host_lookup = dnslmtp_initial_destination_concurrency = $initial_destination_concurrencylmtp_lhlo_name = $myhostnamelmtp_lhlo_timeout = 300slmtp_line_length_limit = 990lmtp_mail_timeout = 300slmtp_mime_header_checks = lmtp_mx_address_limit = 5lmtp_mx_session_limit = 2lmtp_nested_header_checks = lmtp_pix_workaround_delay_time = 10slmtp_pix_workaround_maps = lmtp_pix_workaround_threshold_time = 500slmtp_pix_workarounds = disable_esmtp,delay_dotcrlflmtp_quit_timeout = 300slmtp_quote_rfc821_envelope = yeslmtp_randomize_addresses = yeslmtp_rcpt_timeout = 300slmtp_rset_timeout = 20slmtp_sasl_auth_cache_name = lmtp_sasl_auth_cache_time = 90dlmtp_sasl_auth_enable = nolmtp_sasl_auth_soft_bounce = yeslmtp_sasl_mechanism_filter = lmtp_sasl_password_maps = lmtp_sasl_path = lmtp_sasl_security_options = noplaintext, noanonymouslmtp_sasl_tls_security_options = $lmtp_sasl_security_optionslmtp_sasl_tls_verified_security_options = $lmtp_sasl_tls_security_optionslmtp_sasl_type = cyruslmtp_send_xforward_command = nolmtp_sender_dependent_authentication = nolmtp_skip_5xx_greeting = yeslmtp_starttls_timeout = 300slmtp_tcp_port = 24lmtp_tls_CAfile = lmtp_tls_CApath = lmtp_tls_cert_file = lmtp_tls_dcert_file = lmtp_tls_dkey_file = $lmtp_tls_dcert_filelmtp_tls_enforce_peername = yeslmtp_tls_exclude_ciphers = lmtp_tls_fingerprint_cert_match = lmtp_tls_fingerprint_digest = md5lmtp_tls_key_file = $lmtp_tls_cert_filelmtp_tls_loglevel = 0lmtp_tls_mandatory_ciphers = mediumlmtp_tls_mandatory_exclude_ciphers = lmtp_tls_mandatory_protocols = SSLv3, TLSv1lmtp_tls_note_starttls_offer = nolmtp_tls_per_site = lmtp_tls_policy_maps = lmtp_tls_scert_verifydepth = 9lmtp_tls_secure_cert_match = nexthoplmtp_tls_security_level = lmtp_tls_session_cache_database = lmtp_tls_session_cache_timeout = 3600slmtp_tls_verify_cert_match = hostnamelmtp_use_tls = nolmtp_xforward_timeout = 300slocal_command_shell = local_destination_concurrency_failed_cohort_limit = $default_destination_concurrency_failed_cohort_limitlocal_destination_concurrency_limit = 2local_destination_concurrency_negative_feedback = $default_destination_concurrency_negative_feedbacklocal_destination_concurrency_positive_feedback = $default_destination_concurrency_positive_feedbacklocal_destination_rate_delay = $default_destination_rate_delaylocal_destination_recipient_limit = 1local_header_rewrite_clients = permit_inet_interfaceslocal_initial_destination_concurrency = $initial_destination_concurrencylocal_recipient_maps = proxy:unix:passwd.byname $alias_mapslocal_transport = local:$myhostnameluser_relay = mail_name = Postfixmail_owner = _postfixmail_release_date = 20110707mail_spool_directory = /var/mailmail_version = 2.5.14mailbox_command = mailbox_command_maps = mailbox_delivery_lock = flock, dotlockmailbox_size_limit = 0mailbox_transport = dovecotmailbox_transport_maps = mailq_path = /usr/bin/mailqmanpage_directory = /usr/share/manmaps_rbl_domains = maps_rbl_reject_code = 554masquerade_classes = envelope_sender, header_sender, header_recipientmasquerade_domains = masquerade_exceptions = max_idle = 100smax_use = 100maximal_backoff_time = 4000smaximal_queue_lifetime = 5dmessage_reject_characters = message_size_limit = 31457280message_strip_characters = milter_command_timeout = 30smilter_connect_macros = j {daemon_name} vmilter_connect_timeout = 30smilter_content_timeout = 300smilter_data_macros = imilter_default_action = tempfailmilter_end_of_data_macros = imilter_end_of_header_macros = imilter_helo_macros = {tls_version} {cipher} {cipher_bits} 
{cert_subject} {cert_issuer}milter_macro_daemon_name = $myhostnamemilter_macro_v = $mail_name $mail_versionmilter_mail_macros = i {auth_type} {auth_authen} {auth_author} {mail_addr}milter_protocol = 2milter_rcpt_macros = i {rcpt_addr}milter_unknown_command_macros = mime_boundary_length_limit = 2048mime_header_checks = $header_checksmime_nesting_limit = 100minimal_backoff_time = 300smulti_recipient_bounce_reject_code = 554mydestination = $myhostname, localhost.$mydomainmydomain = domain.commydomain_fallback = localhostmyhostname = server.domain.commynetworks = 127.0.0.0/8,123.123.0.0/16mynetworks_style = subnetmyorigin = $myhostnamenested_header_checks = $header_checksnewaliases_path = /usr/bin/newaliasesnon_fqdn_reject_code = 554non_smtpd_milters = notify_classes = resource, softwareowner_request_special = noparent_domain_matches_subdomains = debug_peer_list,fast_flush_domains,mynetworks,permit_mx_backup_networks,qmqpd_authorized_clients,relay_domains,smtpd_access_mapspermit_mx_backup_networks = pickup_service_name = pickupplaintext_reject_code = 450prepend_delivered_header = command, file, forwardprocess_id_directory = pidpropagate_unmatched_extensions = canonical, virtualproxy_interfaces = proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $sender_bcc_maps $recipient_bcc_maps $smtp_generic_maps $lmtp_generic_mapsproxy_write_maps = $smtp_sasl_auth_cache_name $lmtp_sasl_auth_cache_nameqmgr_clog_warn_time = 300sqmgr_fudge_factor = 100qmgr_message_active_limit = 20000qmgr_message_recipient_limit = 20000qmgr_message_recipient_minimum = 10qmqpd_authorized_clients = qmqpd_client_port_logging = noqmqpd_error_delay = 1sqmqpd_timeout = 300squeue_directory = /private/var/spool/postfixqueue_file_attribute_count_limit = 100queue_minfree = 0queue_run_delay = 300squeue_service_name = qmgrrbl_reply_maps = readme_directory = /usr/share/doc/postfixreceive_override_options = recipient_bcc_maps = recipient_canonical_classes = envelope_recipient, header_recipientrecipient_canonical_maps = recipient_delimiter = +reject_code = 554relay_clientcerts = relay_destination_concurrency_failed_cohort_limit = $default_destination_concurrency_failed_cohort_limitrelay_destination_concurrency_limit = $default_destination_concurrency_limitrelay_destination_concurrency_negative_feedback = $default_destination_concurrency_negative_feedbackrelay_destination_concurrency_positive_feedback = $default_destination_concurrency_positive_feedbackrelay_destination_rate_delay = $default_destination_rate_delayrelay_destination_recipient_limit = $default_destination_recipient_limitrelay_domains = $mydestinationrelay_domains_reject_code = 554relay_initial_destination_concurrency = $initial_destination_concurrencyrelay_recipient_maps = relay_transport = relayrelayhost = relocated_maps = remote_header_rewrite_domain = require_home_directory = noresolve_dequoted_address = yesresolve_null_domain = noresolve_numeric_domain = norewrite_service_name = rewritesample_directory = /usr/share/doc/postfix/examplessend_cyrus_sasl_authzid = nosender_bcc_maps = sender_canonical_classes = envelope_sender, header_sendersender_canonical_maps = sender_dependent_relayhost_maps = sendmail_path = /usr/sbin/sendmailservice_throttle_time = 60ssetgid_group = _postdropshow_user_unknown_table_name = yesshowq_service_name = 
showqsmtp_always_send_ehlo = yessmtp_bind_address = smtp_bind_address6 = smtp_body_checks = smtp_cname_overrides_servername = nosmtp_connect_timeout = 30ssmtp_connection_cache_destinations = smtp_connection_cache_on_demand = yessmtp_connection_cache_time_limit = 2ssmtp_connection_reuse_time_limit = 300ssmtp_data_done_timeout = 600ssmtp_data_init_timeout = 120ssmtp_data_xfer_timeout = 180ssmtp_defer_if_no_mx_address_found = nosmtp_destination_concurrency_failed_cohort_limit = $default_destination_concurrency_failed_cohort_limitsmtp_destination_concurrency_limit = $default_destination_concurrency_limitsmtp_destination_concurrency_negative_feedback = $default_destination_concurrency_negative_feedbacksmtp_destination_concurrency_positive_feedback = $default_destination_concurrency_positive_feedbacksmtp_destination_rate_delay = $default_destination_rate_delaysmtp_destination_recipient_limit = $default_destination_recipient_limitsmtp_discard_ehlo_keyword_address_maps = smtp_discard_ehlo_keywords = smtp_enforce_tls = nosmtp_fallback_relay = $fallback_relaysmtp_generic_maps = smtp_header_checks = smtp_helo_name = $myhostnamesmtp_helo_timeout = 300ssmtp_host_lookup = dnssmtp_initial_destination_concurrency = $initial_destination_concurrencysmtp_line_length_limit = 990smtp_mail_timeout = 300ssmtp_mime_header_checks = smtp_mx_address_limit = 5smtp_mx_session_limit = 2smtp_nested_header_checks = smtp_never_send_ehlo = nosmtp_pix_workaround_delay_time = 10ssmtp_pix_workaround_maps = smtp_pix_workaround_threshold_time = 500ssmtp_pix_workarounds = disable_esmtp,delay_dotcrlfsmtp_quit_timeout = 300ssmtp_quote_rfc821_envelope = yessmtp_randomize_addresses = yessmtp_rcpt_timeout = 300ssmtp_rset_timeout = 20ssmtp_sasl_auth_cache_name = smtp_sasl_auth_cache_time = 90dsmtp_sasl_auth_enable = nosmtp_sasl_auth_soft_bounce = yessmtp_sasl_mechanism_filter = smtp_sasl_password_maps = smtp_sasl_path = smtp_sasl_security_options = noplaintext, noanonymoussmtp_sasl_tls_security_options = $smtp_sasl_security_optionssmtp_sasl_tls_verified_security_options = $smtp_sasl_tls_security_optionssmtp_sasl_type = cyrussmtp_send_xforward_command = nosmtp_sender_dependent_authentication = nosmtp_skip_5xx_greeting = yessmtp_skip_quit_response = yessmtp_starttls_timeout = 300ssmtp_tls_CAfile = smtp_tls_CApath = smtp_tls_cert_file = smtp_tls_dcert_file = smtp_tls_dkey_file = $smtp_tls_dcert_filesmtp_tls_enforce_peername = yessmtp_tls_exclude_ciphers = smtp_tls_fingerprint_cert_match = smtp_tls_fingerprint_digest = md5smtp_tls_key_file = $smtp_tls_cert_filesmtp_tls_loglevel = 0smtp_tls_mandatory_ciphers = mediumsmtp_tls_mandatory_exclude_ciphers = smtp_tls_mandatory_protocols = SSLv3, TLSv1smtp_tls_note_starttls_offer = nosmtp_tls_per_site = smtp_tls_policy_maps = smtp_tls_scert_verifydepth = 9smtp_tls_secure_cert_match = nexthop, dot-nexthopsmtp_tls_security_level = smtp_tls_session_cache_database = smtp_tls_session_cache_timeout = 3600ssmtp_tls_verify_cert_match = hostnamesmtp_use_tls = nosmtp_xforward_timeout = 300ssmtpd_authorized_verp_clients = $authorized_verp_clientssmtpd_authorized_xclient_hosts = smtpd_authorized_xforward_hosts = smtpd_banner = $myhostname ESMTP $mail_namesmtpd_client_connection_count_limit = 50smtpd_client_connection_rate_limit = 20smtpd_client_event_limit_exceptions = ${smtpd_client_connection_limit_exceptions:$mynetworks}smtpd_client_message_rate_limit = 0smtpd_client_new_tls_session_rate_limit = 0smtpd_client_port_logging = nosmtpd_client_recipient_rate_limit = 0smtpd_client_restrictions = 
permit_mynetworks permit_sasl_authenticated reject_unauth_destination reject_rbl_client zen.spamhaus.org reject_rbl_client psbl.surriel.com reject_rbl_client multi.uribl.com reject_rbl_client dsn.rfc-ignorant.org reject_rbl_client dul.dnsbl.sorbs.net reject_rbl_client sbl-xbl.spamhaus.org reject_rbl_client bl.spamcop.net reject_rbl_client dnsbl.sorbs.net reject_rbl_client cbl.abuseat.org reject_rbl_client ix.dnsbl.manitu.net reject_rbl_client combined.rbl.msrbl.net reject_rbl_client rabl.nuclearelephant.comsmtpd_data_restrictions = smtpd_delay_open_until_valid_rcpt = yessmtpd_delay_reject = yessmtpd_discard_ehlo_keyword_address_maps = smtpd_discard_ehlo_keywords = smtpd_end_of_data_restrictions = smtpd_enforce_tls = nosmtpd_error_sleep_time = 10ssmtpd_etrn_restrictions = smtpd_expansion_filter = \t\40!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~smtpd_forbidden_commands = CONNECT GET POSTsmtpd_hard_error_limit = 5smtpd_helo_required = yessmtpd_helo_restrictions = permit_mynetworks reject_invalid_helo_hostname reject_non_fqdn_helo_hostnamesmtpd_history_flush_threshold = 100smtpd_junk_command_limit = 100smtpd_milters = smtpd_noop_commands = smtpd_null_access_lookup_key = <>smtpd_peername_lookup = yessmtpd_policy_service_max_idle = 300ssmtpd_policy_service_max_ttl = 1000ssmtpd_policy_service_timeout = 100ssmtpd_proxy_ehlo = $myhostnamesmtpd_proxy_filter = smtpd_proxy_timeout = 100ssmtpd_pw_server_security_options = cram-md5,plain,loginsmtpd_recipient_limit = 3000smtpd_recipient_overshoot_limit = 1000smtpd_recipient_restrictions = reject_invalid_hostname, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, check_sender_access hash:/etc/postfix/sender_access, check_policy_service unix:private/policysmtpd_reject_unlisted_recipient = yessmtpd_reject_unlisted_sender = nosmtpd_restriction_classes = smtpd_sasl_auth_enable = yessmtpd_sasl_authenticated_header = nosmtpd_sasl_exceptions_networks = smtpd_sasl_local_domain = $mydomainsmtpd_sasl_path = smtpdsmtpd_sasl_security_options = noanonymous, noplaintextsmtpd_sasl_tls_security_options = $smtpd_sasl_security_optionssmtpd_sasl_type = cyrussmtpd_sender_login_maps = smtpd_sender_restrictions = reject_unknown_sender_domainsmtpd_soft_error_limit = 3smtpd_starttls_timeout = 300ssmtpd_timeout = 300ssmtpd_tls_CAfile = /etc/certificates/CA.chain.pemsmtpd_tls_CApath = smtpd_tls_always_issue_session_ids = yessmtpd_tls_ask_ccert = nosmtpd_tls_auth_only = nosmtpd_tls_ccert_verifydepth = 9smtpd_tls_cert_file = /etc/certificates/cert.cert.pemsmtpd_tls_dcert_file = smtpd_tls_dh1024_param_file = smtpd_tls_dh512_param_file = smtpd_tls_dkey_file = $smtpd_tls_dcert_filesmtpd_tls_exclude_ciphers = SSLv2, aNULL, ADH, eNULLsmtpd_tls_fingerprint_digest = md5smtpd_tls_key_file = /etc/certificates/key.key.pemsmtpd_tls_loglevel = 0smtpd_tls_mandatory_ciphers = mediumsmtpd_tls_mandatory_exclude_ciphers = smtpd_tls_mandatory_protocols = SSLv3, TLSv1smtpd_tls_received_header = nosmtpd_tls_req_ccert = nosmtpd_tls_security_level = smtpd_tls_session_cache_database = smtpd_tls_session_cache_timeout = 3600ssmtpd_tls_wrappermode = nosmtpd_use_pw_server = yessmtpd_use_tls = nosoft_bounce = nostale_lock_time = 500sstress = strict_7bit_headers = nostrict_8bitmime = nostrict_8bitmime_body = nostrict_mailbox_ownership = yesstrict_mime_encoding_domain = nostrict_rfc821_envelopes = nosun_mailtool_compatibility = noswap_bangpath = yessyslog_facility = 
mailsyslog_name = postfixtls_append_default_CA = notls_daemon_random_bytes = 32tls_export_cipherlist = ALL:+RC4:@STRENGTHtls_high_cipherlist = ALL:!EXPORT:!LOW:!MEDIUM:+RC4:@STRENGTHtls_low_cipherlist = ALL:!EXPORT:+RC4:@STRENGTHtls_medium_cipherlist = ALL:!EXPORT:!LOW:+RC4:@STRENGTHtls_null_cipherlist = eNULL:!aNULLtls_random_bytes = 32tls_random_exchange_name = ${data_directory}/prng_exchtls_random_prng_update_period = 3600stls_random_reseed_period = 3600stls_random_source = dev:/dev/urandomtrace_service_name = tracetransport_maps = transport_retry_time = 60strigger_timeout = 10sundisclosed_recipients_header = To: undisclosed-recipients:;unknown_address_reject_code = 554unknown_client_reject_code = 554unknown_hostname_reject_code = 554unknown_local_recipient_reject_code = 554unknown_relay_recipient_reject_code = 554unknown_virtual_alias_reject_code = 554unknown_virtual_mailbox_reject_code = 554unverified_recipient_reject_code = 554unverified_sender_reject_code = 554use_getpwnam_ext = yesuse_od_delivery_path = noverp_delimiter_filter = -=+virtual_alias_domains = $virtual_alias_mapsvirtual_alias_expansion_limit = 1000virtual_alias_maps = virtual_alias_recursion_limit = 1000virtual_destination_concurrency_failed_cohort_limit = $default_destination_concurrency_failed_cohort_limitvirtual_destination_concurrency_limit = $default_destination_concurrency_limitvirtual_destination_concurrency_negative_feedback = $default_destination_concurrency_negative_feedbackvirtual_destination_concurrency_positive_feedback = $default_destination_concurrency_positive_feedbackvirtual_destination_rate_delay = $default_destination_rate_delayvirtual_destination_recipient_limit = $default_destination_recipient_limitvirtual_gid_maps = virtual_initial_destination_concurrency = $initial_destination_concurrencyvirtual_mailbox_base = virtual_mailbox_domains = $virtual_mailbox_mapsvirtual_mailbox_limit = 51200000virtual_mailbox_lock = fcntl, dotlockvirtual_mailbox_maps = virtual_minimum_uid = 100virtual_transport = virtualvirtual_uid_maps =
Now provide the response and nothing else.
| There are a few recommendations I'd make with your current script. First off, a single large query is almost always going to perform better than many smaller queries. So rather than running get-aduser separately for each target OU, I would combine them into a single call using a higher level common OU as the search base. Obviously, this may end up returning results from OUs you didn't want to include. But it's much faster to filter those out later. You're also calling get-aduser again for each result from the first set of queries just to filter on lastLogonDate. But you could instead combine that filter with the -ldapfilter from your original queries. It's just a matter of converting the -filter version with an equivalent -ldapfilter version. The secret to doing this is knowing that lastLogonDate is just a Powershell converted version of the lastLogonTimestamp attribute. And you can convert a normal Powershell DateTime value into the format that lastLogonTimestamp uses with the ToFileTime() method. The last thing that confused me was the (UserPrincipalName=*) portion of your ldapfilter. In every domain I've ever touched, this attribute will always have a value (just like SamAccountName or DistinguishedName). It may be different than the default value of <SamAccoutnName>@<DomainFQDN> , but it's never empty. The filter isn't hurting anything necessarily. It's just one extra thing for AD to spend CPU cycles evaluating when it doesn't need to. But if you have reason to believe it might be empty in your environment, by all means leave it in. So here's how I'd modify your script if I understand your intentions correctly. # make the comparison value using ToFileTime()$30daysago = (Get-Date).AddDays(-30).ToFileTime()# make the combined ldapfilter value$LdapFilter = "(&(lastLogonTimestamp<=$30daysago)(extensionAttribute9=*)"# make an array of the OU DNs you care about$TargetOUs = @( "OU=Users,OU=US-Location,OU=Americas,DC=Domain,DC=Domain,DC=com" "OU=Users-Remote,OU=US-Location,OU=Americas,DC=Domain,DC=Domain,DC=com" "OU=Contractors,OU=US-Location,OU=Americas,DC=Domain,DC=Domain,DC=com" "OU=Temps,OU=US-Location,OU=Americas,DC=Domain,DC=Domain,DC=com")# define your common search base$base = "OU=US-Location,OU=Americas,DC=Domain,DC=Domain,DC=com"# get your combined results and the additional attributes you care about$OldUsers = get-aduser -ldapfilter $LdapFilter -searchbase $base -pr lastLogonDate# convert the target OU list into a regular expression we can compare each DN against in a single comparison call$regex = ""$TargetOUs | %{ $regex += ".*," + [Regex]::Escape($_) + "$|" }$regex = $regex.Substring(0,$regex.Length-1)# filter the results that aren't in your target OUs# (depending on your OU layout, it might be more efficient to flip this# and define the OUs you explicitly want to leave out)$FilteredUsers = $OldUsers | ?{ $_.DistinguishedName -match $regex }# export your CSV (sorted for good measure)$FilteredUsers | select SamAccountName,LastLogonDate | sort LastLogonDate | export-csv C:/Users/myname/Desktop/Usersover30days.csv P.S. Be wary of treating lastLogonTimestamp (or lastLogonDate ) as 100% accurate. It may be anywhere from 9 to 14 days out of date by design . | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/782835', 'https://serverfault.com', 'https://serverfault.com/users/359660/']} | jdg_380756 |
stackexchange | llm_judgeable_groundtruth_similarity | 11354802 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This works: $sql = "SELECT id FROM `users` WHERE `account_status` = '" . $i . "'"; $query = $this->db->query($sql); var_dump($query->num_rows()); But this doesn't: $sql = "SELECT COUNT(*) FROM `users` WHERE `account_status` = '" . $i . "'"; $query = $this->db->query($sql); var_dump($query->num_rows()); How to do a num_rows on a COUNT(*) query? Also is doing it the 2nd way any better performance wise?
Now provide the response and nothing else.
| Doing a COUNT(*) will only give you a single row containing the number of rows, not the result rows themselves. To access COUNT(*) you would need to do $result = $query->row_array(); $count = $result['COUNT(*)']; The second option performs much better since it does not need to return a dataset to PHP but instead just a count, and is therefore much better optimized. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11354802', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_380757
stackexchange | llm_judgeable_groundtruth_similarity | 20345860 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a list of keywords that i store in a list. To fetch records from a table, am using the following query: sqlBuilder.Append("SELECT name, memberid FROM members WHERE");StringBuilder sqlBuilder = new StringBuilder();foreach (string item in keywords) { sqlBuilder.AppendFormat(" LOWER(Name) LIKE '%{0}%' AND", item); }string sql = sqlBuilder.ToString(); As you might have noticed, my query is vulnerable to sql injection, thus i want to use parameters using SqlCommand(). I have tried the following but still doesn't work: foreach (string item in keywords) { sqlBuilder.AppendFormat(" LOWER(Name) LIKE '%' + @searchitem + '%' AND", item); SqlCommand cmd = new SqlCommand(sqlBuilder.ToString()); cmd.Parameters.AddWithValue("@searchitem",item); } Where could i be making the mistake, or rather, how should i got about it?
Now provide the response and nothing else.
| You are doing a few things wrong here: You give all your parameters the same name @searchitem . That won't work. The parameters need unique names. You create a new SqlCommand for each item. That won't work. Create the SqlCommand once at the beginning of the loop and then set CommandText once you are done creating the SQL. Your SQL ends with AND , which is not valid syntax. Improvement suggestions (not wrong per se, but not best practice either): As Frederik suggested, the usual way is to put the % tokens in the parameter, rather than doing string concatenation inside the SQL. Unless you explicitly use a case-sensitive collation for your database, comparisons should be case-insensitive. Thus, you might not need the LOWER . Code example: SqlCommand cmd = new SqlCommand();StringBuilder sqlBuilder = new StringBuilder();sqlBuilder.Append("SELECT name, memberid FROM members ");var i = 1;foreach (string item in keywords){ sqlBuilder.Append(i == 1 ? " WHERE " : " AND "); var paramName = "@searchitem" + i.ToString(); sqlBuilder.AppendFormat(" Name LIKE {0} ", paramName); cmd.Parameters.AddWithValue(paramName, "%" + item + "%"); i++;}cmd.CommandText = sqlBuilder.ToString(); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20345860', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1537514/']} | jdg_380758 |
stackexchange | llm_judgeable_groundtruth_similarity | 55230896 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to run Rocket but I'm falling at the first hurdle. When trying to cargo run , I get the following error: error: failed to run custom build command for `pear_codegen v0.1.2`Error: Pear requires a nightly or dev version of Rust.Installed version is: 1.33.0 (2019-02-28). Minimum required: 1.31.0-nightly (2018-10-05). I'm new to Rust, but coming from other languages this makes no sense whatsoever. It needs version 1.31.0 as a minimum but I have version 1.33.0 installed. What am I doing wrong?
Now provide the response and nothing else.
| If software requires a nightly build of Rust, no stable version of Rust can be substituted: you are required to use nightly. The nightly channel of Rust is a superset of stable Rust. Features that are not yet complete or simply haven't proven their value are included in nightly builds of Rust. You opt into using a given feature via a crate attribute . These unstable features may completely change or even be removed at any time. Said another way, an unstable feature is never guaranteed to exist in any particular stable Rust version. If it helps, you can think of nightly versions as an "alternate reality" track of development. The version number of a nightly is only a loose indicator of where it exists in time; the compilation date and git commit hash are much more informative. As for the comment "I would have thought the nightly code from 1.31.0 would be pushed into the stable 1.31.0+ versions once tested": that is how the beta channel works — anything in 1.x.y-beta will be in 1.x.y-stable (assuming no major emergency occurs). See also: What is the stabilization process? error[E0554]: #![feature] may not be used on the stable release channel Couldn't install racer using cargo What is a crate attribute and where do I add it? | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55230896', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1005169/']} | jdg_380759
stackexchange | llm_judgeable_groundtruth_similarity | 5087992 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to compare an unsigned int with a signed char like this: int main(){ unsigned int x = 9; signed char y = -1; x < y ? printf("s") : printf("g"); return 0;} I was expecting the o/p to be "g". Instead, its "s". What kind of conversion is done here?
Now provide the response and nothing else.
| Section 6.3.1.8 , Usual arithmetic conversions, of C99 details implicit integer conversions. If both operands have the same type, then no further conversion is needed. That doesn't count since they're different types. Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank. That doesn't count since one is signed, the other unsigned. Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type. Bingo. x has a higher rank than y so y is promoted to unsigned int . That means that it morphs from -1 into UINT_MAX , substantially larger than 9. The rest of the rules don't apply since we have found our match but I'll include them for completeness: Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type. Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type. The ranks relevant to this question are shown below. All ranks are detailed in C99, section 6.3.1.1 , Boolean, character, and integers so you can refer to that for further details. The rank of long long int shall be greater than the rank of long int , which shall be greater than the rank of int , which shall be greater than the rank of short int , which shall be greater than the rank of signed char . The rank of char shall equal the rank of signed char and unsigned char . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5087992', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/438822/']} | jdg_380760 |
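To see the effect of that promotion numerically, here is a quick Python sketch (an illustration only, assuming a 32-bit unsigned int on the target platform); reduction modulo 2**32 models the conversion the standard describes.

    # Models the usual arithmetic conversions for `9U < (signed char)-1`,
    # assuming unsigned int is 32 bits wide.
    UINT_BITS = 32
    x = 9                                  # the unsigned int operand
    y = -1                                 # the signed char operand, promoted to unsigned int
    y_converted = y % (1 << UINT_BITS)     # conversion to unsigned is reduction modulo 2**32
    print(y_converted)                     # 4294967295, i.e. UINT_MAX
    print(x < y_converted)                 # True, which is why the program prints "s"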
stackexchange | llm_judgeable_groundtruth_similarity | 30068920 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click CON = New OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=E:\Transport.accdb") COM = New OleDbCommand(" Update details DName=@DName,Address=@Address,DOB=@DOB,MoNo=@MoNo,LiNo=@LiNo,LiExNo=@LiExNo,Experiance=@Experiance,Note=@Note where DNo=@DNo", CON) CON.Open() COM.Parameters.AddWithValue("@DNo", TextBox9.Text) COM.Parameters.AddWithValue("@DName", TextBox8.Text) COM.Parameters.AddWithValue("@Address", TextBox1.Text) COM.Parameters.AddWithValue("@DOB", TextBox2.Text) COM.Parameters.AddWithValue("@MoNo", TextBox3.Text) COM.Parameters.AddWithValue("@LiNo", TextBox4.Text) COM.Parameters.AddWithValue("@LiExNo", TextBox5.Text) COM.Parameters.AddWithValue("@Experiance", TextBox6.Text) COM.Parameters.AddWithValue("@Note", TextBox7.Text) COM.ExecuteNonQuery() MsgBox("record Updated") CON.Close()
Now provide the response and nothing else.
| The two most valuable points from the link I mentioned: In recent versions of Django, we need to call form.is_valid(). Sending of the email is done upon save(). Here is how I queried the users I needed and sent each of them a password reset link: def find_users_and_send_email(): from django.http import HttpRequest from django.contrib.auth.forms import PasswordResetForm from django.contrib.auth.models import User import logging logger = logging.getLogger(__name__) users = User.objects.filter(date_joined__gt = '2015-04-16') for user in users: try: if user.email: logger.info("Sending email to: %s", user.email) form = PasswordResetForm({'email': user.email}) assert form.is_valid() request = HttpRequest() request.META['SERVER_NAME'] = 'help.mydomain.com' request.META['SERVER_PORT'] = '443' form.save( request= request, use_https=True, from_email="[email protected]", email_template_name='registration/password_reset_email.html') except Exception as e: logger.warning(str(e)) continue return 'done' Usually PasswordResetForm works with a "request" from the front-end, which I didn't have. So I simply created one. When I followed the example in the link, it failed: it couldn't find the server name in the request. (Makes sense, because I instantiated my request out of nowhere.) When I fixed the server name, it asked for the server port. Since I used https, my port is 443; otherwise, use the default port. If you use https, don't forget to indicate it when you save the form: use_https=True | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30068920', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4869195/']} | jdg_380761
stackexchange | llm_judgeable_groundtruth_similarity | 371614 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The standard statement of the Stone-Weierstrass theorem is: Let $X$ be compact Hausdorff topological space, and $\mathcal{A}$ a subalgebra of the continuous functions from $X$ to $\mathbb{R}$ which separates points. Then $\mathcal{A}$ is dense in $C(X, \mathbb{R})$ in sup-norm. Most materials that I can find on the extension of Stone-Weierstrass theorem discuss only the multivariate case, i.e., $X\in \mathbb{R}^d$ . I wonder whether this theorem can be extended to vector-valued continuous functions. Specifically, let $\mathcal{A}$ be a subalgebra of continuous functions $X\to \mathbb{R}^n$ , with the multiplication defined componentwisely, i.e., $\forall f, g\in \mathcal{A}$ , $fg = (f_1g_1, \ldots, f_ng_n)$ . Then shall we claim $\mathcal{A}$ is dense in $C(X, \mathbb{R}^n)$ in sup-norm if $\mathcal{A}$ separates points? Any direct answer or reference would greatly help me! Edit: As Nik Weaver points out, the original conjecture is false since the functions of the form $x\mapsto (f(x), 0, \ldots, 0)$ create a counter-example. I wonder whether there are non-trivial Weierstrass-type theorems on vector-valued functions. For instance, what if we further assume $\mathcal{A}$ is dense on each ` axis'?
Now provide the response and nothing else.
| I think that you want something like this: Let $E\to X$ be a (finite rank) vector bundle over a compact, Hausdorff topological space $X$ , let $\mathcal{A}\subset C(X,\mathbb{R})$ be a subalgebra that separates points, and let $\mathcal{E}\subset C(X,E)$ be an $\mathcal{A}$ -submodule of the $C(X,\mathbb{R})$ -module of continuous section of $E\to X$ . Suppose that, at every point $x\in X$ , the set $\{\,e(x)\ |\ e\in\mathcal{E}\ \}$ spans $E_x$ . Then $\mathcal{E}$ is dense in $C(X,E)$ with respect to the sup-norm defined by any norm on $E$ . Addendum: Here is a sketch of the argument: First, by an easy compactness argument, one can show that $\mathcal{E}$ contains a finite set $e_1,\ldots e_m$ such that $e_1(x),e_2(x),\ldots,e_m(x)$ spans $E_x$ for all $x\in X$ . Then $\mathcal{E}$ contains all the sections of the form $$a_1\, e_1 + \cdots + a_m\,e_m$$ where $a_i\in\mathcal{A}$ , and every section $e\in C(X,E)$ can be written in the form $$e = f_1\, e_1 + \cdots + f_m\,e_m$$ for some functions $f_i\in C(X,\mathbb{R})$ . By the Stone-Weierstrass Theorem, for any given $\delta>0$ , we can choose $a_i\in \mathcal{A}$ so that $\|f_i-a_i\|<\delta$ for all $1\le i\le m$ . Now the equivalence of all norms in finite dimensional vector spaces can be applied (together with the compactness of $X$ ) to conclude that $\mathcal{E}$ is dense in $C(X,E)$ in any sup-norm derived from a norm on the (finite rank) vector bundle $E$ . | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/371614', 'https://mathoverflow.net', 'https://mathoverflow.net/users/161104/']} | jdg_380762 |
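For a fixed choice of norm on the fibres of $E$, the last step can be spelled out explicitly (this display only unpacks the argument above, using the triangle inequality and the homogeneity of the fibre norms):
$$\Big\| e - \sum_{i=1}^m a_i e_i \Big\|_\infty \;=\; \Big\| \sum_{i=1}^m (f_i - a_i)\, e_i \Big\|_\infty \;\le\; \sum_{i=1}^m \| f_i - a_i \|_\infty \, \| e_i \|_\infty \;\le\; \delta \sum_{i=1}^m \| e_i \|_\infty ,$$
which can be made as small as desired because the finitely many sections $e_i$ are fixed and $\delta$ is at our disposal.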
stackexchange | llm_judgeable_groundtruth_similarity | 85303 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any way to calculate x/y without using division? For example, 1/10 = 1 * 0.1. I'm asking because division is slower than multiplication when programming.
Now provide the response and nothing else.
| No. There's no free lunch: the most efficient way to calculate $x/y$ for general $y$ is to ask the hardware to divide $x$ by $y$. Obviously, in analogy to your example, $x/y = x \cdot \frac{1}{y}$ if you happen to have $\frac{1}{y}$ handy. Maybe if you're dividing many, many numbers by the same $y$, inverting $y$ first and then doing a bunch of multiplications will save some cycles. But see my caveat below. Another special case is when $y$ is an integer power of 2: division can then be efficiently performed using bit shifting, if $x$ is also an integer, or by adjusting the exponent of $x$, if $x$ is an IEEE floating point number of the form $a 2^{b}$. CAVEAT : Please do not try to "optimize division" by hand: The compiler you are using is very good at recognizing special cases where division can be optimized / replaced by faster operations and will almost certainly do a better job at it than you. Division is extremely unlikely to be the biggest performance bottleneck in your program, so spending time optimizing it isn't spending your time most effectively. Your future self, and anyone else who needs to maintain your code in the future, will thank you for writing more straightforward, readable code. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/85303', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/12202/']} | jdg_380763 |
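A small Python sketch of the two special cases mentioned above (it only illustrates the arithmetic; whether either trick is actually faster depends on the language, the compiler, and the hardware):

    # Dividing many numbers by the same y: one division up front, only multiplications after.
    values = [3.5, 10.0, 42.0, 99.9]
    y = 7.0
    inv_y = 1.0 / y
    quotients = [v * inv_y for v in values]   # note: may differ from v / y in the last bit

    # Integer division by a power of two can be written as a bit shift.
    x = 1000
    print(x // 8, x >> 3)                     # both print 125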
stackexchange | llm_judgeable_groundtruth_similarity | 7928874 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my application I am using the new Action Bar Compatibility sample from Google (located at <sdk>/samples/android-<version>/ActionBarCompat ) which works great. The only problem I have is applying this to my PreferenceActivity in order to get a screen like the settings in the Android Market (see picture). To fill the ActionBar with icons, each Activity must extend the ActionBarActivity class. The problem is that my Activity already extends PreferenceActivity and in Java classes can not extend more than one class. There must be a way to get the ActionBar together with a PreferenceScreen . I would be glad if anybody could provide a solution for this common issue. P.S.: A solution like in How to add a button to PreferenceScreen does not fit because the ActionBar is actually the title bar and so this is more a Java than a layout thing.
Now provide the response and nothing else.
| Edit: My answer below is rather hacky and it seems like it is now outdated (for pre Android 3.0) Have a look at the other answers for less hacky and more current solutions ~pyko 2014-09-01 I managed to get it working - not sure if this is the nicest/cleanest solution, but it works. Had to make the following changes: Make a copy of ActionBarActivity and have the new class extend PreferenceActivity public abstract class ActionBarPreferenceActivity extends PreferenceActivity { // contents exactly the same as 'ActionBarActivity'} Modify onCreate() in ActionBarHelperBase.java slightly - make a special case for PreferenceActivity classes @Overridepublic void onCreate(Bundle savedInstanceState) { // If the activity is a PreferenceActivity, don't make the request if (!(mActivity instanceof PreferenceActivity)) { mActivity.requestWindowFeature(Window.FEATURE_CUSTOM_TITLE); } Have your PreferenceActivity extend this class and add request for FEATURE_CUSTOM_TITLE before you call super.onCreate() public class MyPreferenceActivity extends ActionBarPreferenceActivity { @Override protected void onCreate(Bundle savedInstanceState) { requestWindowFeature(Window.FEATURE_CUSTOM_TITLE); // add this line super.onCreate(savedInstanceState); addPreferencesFromResource(R.xml.preferences); // etc etc } // etc etc} As far as I can tell, changes 2 and 3 are needed because for PreferenceActivity : "As soon as you call super.onCreate(), the ViewGroup will be set up and so, you are not allowed to change the Window's parameters." ( see Oliver's comment to the answer ) I guess the order of how components in PreferenceActivity activities are created is different to plain Activity activities . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7928874', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1018177/']} | jdg_380764 |
stackexchange | llm_judgeable_groundtruth_similarity | 8007883 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Somehow the default animation fails and I don't know why. My search bar does not shift up as it is supposed to. The view is inside a UIView; I'm wondering if this is the problem. I've included the IB layout.
Now provide the response and nothing else.
| The searchbar doesn't actually move in the 'push-up' animation. Rather it stays in place while the navbar goes up, thus pulling the rest of the view with it. You should be able to move the search bar manually in code by registering as its delegate (UISearchBarDelegate) and responding to these calls. #pragma mark - UISearchBarDelegate Methods- (BOOL)searchBarShouldBeginEditing:(UISearchBar *)searchBar { //move the search bar up to the correct location eg [UIView animateWithDuration:.4 animations:^{ searchBar.frame = CGRectMake(searchBar.frame.origin.x, 0, searchBar.frame.size.width, searchBar.frame.size.height); } completion:^(BOOL finished){ //whatever else you may need to do }];}- (BOOL)searchBarShouldEndEditing:(UISearchBar *)searchBar { //move the search bar down to the correct location eg [UIView animateWithDuration:.4 animations:^{ searchBar.frame = CGRectMake(searchBar.frame.origin.x, /*SearchBar original Y*/, searchBar.frame.size.width, searchBar.frame.size.height); } completion:^(BOOL finished){ //whatever else you may need to do }];} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8007883', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/537422/']} | jdg_380765 |
stackexchange | llm_judgeable_groundtruth_similarity | 27313 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In AdS/CFT, the story of renormalization has an elegant gravity dual. Regularizing the theory is done by putting a cutoff near the conformal boundary of AdS space, and renormalization is done by adding counterterms on that surface. Mathematically this is also interesting, since this utilizes the Lorentzian generalization of the Graham-Fefferman expansion. But, in the spirit of “effective holography”, one ought to be able to do that in spacetimes which do not admit a conformal boundary. I am wondering if anyone has ever seen an attempt to systematically define holographic renormalization in such spaces, for example for p-branes ($p \neq 3$), the NS fivebrane, or the Sakai-Sugimoto model, etc. In such cases one can still take a cutoff surface at the UV of the theory, take the fields to be essentially non-fluctuating, but one does not have a conformal boundary and all the associated machinery.
Now provide the response and nothing else.
| I believe one has to distinguish two kinds of dualities. AdS/CFT, even in the context where it describes an RG flow (so not the pure AdS_5xS^5 case), is an exact duality to a four-dimensional theory, which interpolates between one well-defined conformal field theory in the UV and another conformal field theory in the IR. So holographic renormalization is in one-to-one correspondence with renormalization in the four-dimensional theory (that is to say, one can map the counterterms, and identify diff invariance with the renormalization group invariance of correlation functions). On the other hand, Sakai-Sugimoto is not a true duality, it only reduces in the IR to something like a four-dimensional theory (one would hope). The UV of the full Sakai-Sugimoto setup has nothing to do with the UV of QCD or any other four-dimensional theory. So in my opinion there is no reason that (whatever renormalization means in this context) it would resemble what we expect in QCD or any other RG flow in four dimensions. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/27313', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']} | jdg_380766 |
stackexchange | llm_judgeable_groundtruth_similarity | 29577713 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The data type of the field is String. I would like to fetch the documents where the character length of the field name is greater than 40. I tried these queries but they return errors. 1. db.usercollection.find({$where: "(this.name.length > 40)"}).limit(2); output: error: { "$err" : "TypeError: Cannot read property 'length' of undefined near '40)' ", "code" : 16722} This works in 2.4.9, but my version is 2.6.5.
Now provide the response and nothing else.
| For MongoDB 3.6 and newer: The $expr operator allows the use of aggregation expressions within the query language, so you can leverage the $strLenCP operator to check the length of the string as follows: db.usercollection.find({ name: { $exists: true }, $expr: { $gt: [{ $strLenCP: '$name' }, 40] } }) For MongoDB 3.4 and newer: You can also use the aggregation framework with the $redact pipeline operator, which allows you to process the logical condition with the $cond operator and uses the special operations $$KEEP to "keep" the document where the logical condition is true or $$PRUNE to "remove" the document where the condition was false. This operation is similar to having a $project pipeline that selects the fields in the collection and creates a new field that holds the result from the logical condition query and then a subsequent $match, except that $redact uses a single pipeline stage, which is more efficient. As for the logical condition, there are String Aggregation Operators, and you can use the $strLenCP operator to check the length of the string. If the length is $gt a specified value, then this is a true match and the document is "kept". Otherwise it is "pruned" and discarded. Consider running the following aggregate operation, which demonstrates the above concept: db.usercollection.aggregate([ { $match: { name: { $exists: true } } }, { $redact: { $cond: [ { $gt: [ { $strLenCP: "$name" }, 40] }, "$$KEEP", "$$PRUNE" ] } }, { $limit: 2 }]) If using $where, try your query without the enclosing brackets: db.usercollection.find({ $where: "this.name.length > 40" }).limit(2); A better query would be to check for the field's existence and then check the length: db.usercollection.find({ name: { $type: 2 }, $where: "this.name.length > 40" }).limit(2); or: db.usercollection.find({ name: { $exists: true }, $where: "this.name.length > 40" }).limit(2); MongoDB evaluates non-$where query operations before $where expressions, and non-$where query statements may use an index. For much better performance, store the length of the string as another field that you can index or search on; applying $where will be much slower compared to that. It's recommended to use JavaScript expressions and the $where operator as a last resort, when you can't structure the data in any other way or when you are dealing with a small subset of data. A different and faster approach that avoids the use of the $where operator is the $regex operator. Consider the following pattern, which matches names longer than 40 characters: db.usercollection.find({"name": {"$type": 2, "$regex": /^.{41,}$/}}).limit(2); Note - From the docs: If an index exists for the field, then MongoDB matches the regular expression against the values in the index, which can be faster than a collection scan. Further optimization can occur if the regular expression is a "prefix expression", which means that all potential matches start with the same string. This allows MongoDB to construct a "range" from that prefix and only match against those values from the index that fall within that range. A regular expression is a "prefix expression" if it starts with a caret (^) or a left anchor (\A), followed by a string of simple symbols. For example, the regex /^abc.*/ will be optimized by matching only against the values from the index that start with abc. Additionally, while /^a/, /^a.*/, and /^a.*$/ match equivalent strings, they have different performance characteristics.
All of these expressions use an index if an appropriate index exists; however, /^a.*/ and /^a.*$/ are slower. /^a/ can stop scanning after matching the prefix. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/29577713', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4414504/']} | jdg_380767
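A sketch of the "store the length as another field" suggestion, written with PyMongo; the field name name_length and the database/collection names are illustrative assumptions, the pipeline-style update_many requires MongoDB 4.2 or newer (on older servers you would compute the length in application code instead), and the application must keep name_length in sync whenever name changes.

    from pymongo import MongoClient

    client = MongoClient()                      # assumes a local mongod
    coll = client["test"]["usercollection"]     # hypothetical database/collection names

    # Backfill a name_length field from the existing name values (MongoDB 4.2+).
    coll.update_many({"name": {"$type": "string"}},
                     [{"$set": {"name_length": {"$strLenCP": "$name"}}}])

    # Index it so the range query below does not need a collection scan.
    coll.create_index("name_length")

    # "Name longer than 40 characters" is now an ordinary indexed range query.
    for doc in coll.find({"name_length": {"$gt": 40}}).limit(2):
        print(doc["name"])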
stackexchange | llm_judgeable_groundtruth_similarity | 40680589 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
First off: this is not a Unix/Linux system. I am working on an IBM AS/400 V7R1 (C++ 98) and do not have access to fork(). Nevertheless, I do have spawnp() to start new child processes and the AS/400 supports the notion of process groups. In my system, I have a "head" program that starts X number of children. This head calls accept() on incoming connections and immediately gives the socket away to one of the child process via sendmsg(). The children are all sitting on recvmsg(). For the head program, it goes something like this: rc = socketpair(AF_UNIX, SOCK_DGRAM, 0, pair_sd);if (rc != 0) { perror("socketpair() failed"); close(listen_sd); exit(-1);}server_sd = pair_sd[0];worker_sd = pair_sd[1];// do some other stuff, set up arguments for spawnp()...// ...spawn_fdmap[0] = worker_sd;for (int i = 0; i < numOfChildren; i++) { pid = spawnp(spawn_argv[0], 1, spawn_fdmap, &inherit, spawn_argv, spawn_envp); if (pid < 0) { CERR << "errno=" << errno << ", " << strerror(errno) << endl; CERR << "command line ["; for (int x = 0; spawn_argv[x] != 0; ++x) { cerr << spawn_argv[x] << " "; } cerr << ']' << endl; close(listen_sd); exit(-1); } else { CERR << "Child worker PID = " << pid << endl; child_pids.push_back(pid); }}// Close down the worker side of the socketpair.close(worker_sd); I've got a reason/scheme to start additional child processes after initial program start. I plan to send the head program some signal which would cause the spawnp() call to execute again. The "close(worker_sd)" has me concerned though. Can I call spawnp() again after I've closed the worker socket? It's just a number, after all. Is it OK to keep the worker_sd open?
Now provide the response and nothing else.
| This is a finicky problem, and you're very close to the solution. The issue is that the approach to animating layout changes varies depending on whether you're using auto layout or resizing masks or another approach, and you're currently using a mix in your ProblematicCollectionViewCell class. (The other available approaches would be better addressed in answer to a separate question, but note that Apple generally seems to avoid using auto layout for cells in their own apps.) Here's what you need to do to animate your particular cells: When cells are selected or deselected, tell the collection view layout object that cell sizes have changed, and to animate those changes to the extent it can do so. The simplest way to do that is using performBatchUpdates , which will cause new sizes to be fetched from sizeForItemAt , and will then apply the new layout attributes to the relevant cells within its own animation block: func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath) { self.selectedIndex = indexPath.row collectionView.performBatchUpdates(nil)} Tell your cells to layout their subviews every time the collection view layout object changes their layout attributes (which will occur within the performBatchUpdates animation block): // ProblematicCollectionViewCell.swiftoverride func apply(_ layoutAttributes: UICollectionViewLayoutAttributes) { super.apply(layoutAttributes) layoutIfNeeded()} If you want greater control over your animations, you can nest the call to performBatchUpdates inside a call to one of the UIView.animate block-based animation methods. The default animation duration for collection view cells in iOS 10 is 0.25. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40680589', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/670195/']} | jdg_380768 |
stackexchange | llm_judgeable_groundtruth_similarity | 18325081 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following code. It loops over JSON to produce a nested ul list. I have a click event that executes the function toggleNav() bound to a anchor tag. I don't know why the click event is bound twice to the element. Also I am new to angular, is there a document that explains this concept out there? Thanks! define(['/assets/angularapp/AppDirectives.js','highstock'], function (directives) {directives.directive('collection', function () { return { restrict: "E", //declare by element replace: true, scope: { collection: '=' }, template: "<ul class=\"nav nav-list tree\"><member ng-repeat=\"member in collection\" member=\"member\"></member></ul>" }})directives.directive('member', function ($compile) { return { restrict: "E", replace: true, scope: { member: '=' }, template: "<li ng-show=\"member.open\"><span><input type=\"checkbox\" ng-model=\"member.selected\" class=\"sideChkbox\"><a class=\"tree-toggle\" ng-click=\"toggleNav()\"><i class=\"icon-chevron-right\"></i>{{member.data}}</a></span></li>", controller: function($scope, $element){ $scope.toggleNav = function(){ angular.forEach($scope.member.children,function(child,key){ if(child.open==true){ alert("a") child.open=false; } else { child.open=true; alert("b") } }) } }, link: function (scope, element, attrs) { if (angular.isArray(scope.member.children)) { element.append("<collection collection='member.children'></collection>"); $compile(element.contents())(scope) } } }})
Now provide the response and nothing else.
| It is because you are compiling element.contents(), including the <a> with ng-click, which has already been compiled. When you call a manual compile on it, it gets compiled again. You can fix it like this: ... if (angular.isArray(scope.member.children)) { var newMemEL = angular.element("<collection collection='member.children'></collection>"); element.append(newMemEL); $compile(newMemEL)(scope); } ... It looks like you are trying to create a treeview. In that case, it would be better to use ngInclude instead of creating custom directives; have a look at this plunker, and note that it will not work with Angular 1.2.0rc1 due to this issue. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18325081', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/648869/']} | jdg_380769
stackexchange | llm_judgeable_groundtruth_similarity | 38380 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking to improve my object-oriented design skills and I came across a problem which asked to design classes to represent a mathematical expression such as (a + b) * ( c - d / e) in memory so that it could be evaluated (by adding an evaluate method if need be in the class some time later) The simplest solution I came up with was to store this expression in a stack (push(a), push(+), push(b)....), or may be even in an array (arr[0] = a, arr[1] = +...) I feel like this is bad design and I read online that a binary tree (expression tree) is better to represent such an expression, but I am not sure why it is better. Can someone help me understand this? Does the binary expression tree provide some benefits over storing in a stack?
Now provide the response and nothing else.
| Different representations are useful for different purposes. Think what kinds of things you might want to do with the expression, and think how each of them would be done using the stack representation and using the binary tree representation, and choose for yourself. For fun, you may also want to consider something completely different, e.g. the english language representation: e.g. "the product of the sum of a and b, and the difference between c and d over e", or the representation as X86 machine code which would compute this expression, etc. Things you may want to do with an expression (in any particular program you would probably need only a small subset of these): Evaluate it, given certain values of the variables Just once Evaluate the same expression repeatedly for different values, with very high performance requirements Perform simple algebraic manipulation, e.g. simplifications such as replacing x - x => 0 , x*1 => x , etc. Perform sophisticated algebraic manipulation on it, e.g. factorize the polynomials, compute derivatives or integrals. Understand the expression stored in a variable when you're debugging the program Format it as a string for displaying to the user Render it as a mathematical expression to MathML Draw it as a mathematical expression on a Javascript canvas Compare two expressions for equivalence Compare two expressions for equivalence ignoring variable names, e.g. a * a + b being equivalent to p * p + q Convert a user-supplied string into an expression, checking it for well-formedness ... Honestly speaking, I can find only one item of these for which the stack representation can possibly make things easier than the binary tree representation, and the array representation seems just completely useless. | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/38380', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/27738/']} | jdg_380770 |
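To make the comparison concrete, here is a minimal Python sketch of the binary-tree representation with an evaluate method (the class names Var and BinOp are just illustrative); each of the tasks listed above (evaluating, simplifying, pretty-printing) becomes a short recursive walk over the nodes, which is exactly what a flat stack or array representation does not give you directly.

    import operator

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

    class Var:
        def __init__(self, name):
            self.name = name
        def evaluate(self, env):
            return env[self.name]
        def __str__(self):
            return self.name

    class BinOp:
        def __init__(self, op, left, right):
            self.op, self.left, self.right = op, left, right
        def evaluate(self, env):
            return OPS[self.op](self.left.evaluate(env), self.right.evaluate(env))
        def __str__(self):
            return f"({self.left} {self.op} {self.right})"

    # (a + b) * (c - d / e)
    expr = BinOp("*", BinOp("+", Var("a"), Var("b")),
                      BinOp("-", Var("c"), BinOp("/", Var("d"), Var("e"))))
    print(expr)                                                     # ((a + b) * (c - (d / e)))
    print(expr.evaluate({"a": 1, "b": 2, "c": 9, "d": 6, "e": 3}))  # 21.0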
stackexchange | llm_judgeable_groundtruth_similarity | 44807378 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to decode a protobuff encoded message, so I need to convert the first byte (the key) in the protobuff message into bits, so I can find the field number. How do I convert a UInt8 (the byte) into an array of bits? Pseudo Code private func findFieldNum(from byte: UInt8) -> Int { //Byte is 0001 1010 var fieldNumBits = byte[1] ++ byte[2] ++ byte[3] ++ byte[4] //concatentates bits to get 0011 getFieldNum(from: fieldNumBits) //Converts 0011 to field number, 2^1 + 2^0 = 3} I saw this question , which converts an array of bits into array of bytes.
Now provide the response and nothing else.
| Here's a basic function to get a Bit array from a byte: func bits(fromByte byte: UInt8) -> [Bit] { var byte = byte var bits = [Bit](repeating: .zero, count: 8) for i in 0..<8 { let currentBit = byte & 0x01 if currentBit != 0 { bits[i] = .one } byte >>= 1 } return bits} Here, Bit is a custom enum type that I have defined as follows: enum Bit: UInt8, CustomStringConvertible { case zero, one var description: String { switch self { case .one: return "1" case .zero: return "0" } }} With this setup, the output of the following code: let byte: UInt8 = 0x1fprint(bits(fromByte: byte)) would be: [1, 1, 1, 1, 1, 0, 0, 0] | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44807378', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5696741/']} | jdg_380771 |
stackexchange | llm_judgeable_groundtruth_similarity | 236213 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It seems that Mathematica can solve an initial value problem with a DiracDelta, but not a boundary value problem with a DiracDelta. Is there a workaround? Consider for example a differential equation of this form: f''[x]+x f'[x]== x^2 DiracDelta[x-1] in the range 0<x<10 with the boundary conditions f[0]=0, f[10]=1
Now provide the response and nothing else.
| @MichaelE2 gave the idea to use a shooting method, because NDSolve is only able to handle initial value problems involving DiracDelta. First solve the problem with a parametric slope f'[0]==fs0: F = ParametricNDSolveValue[{f''[x] + x f'[x] == x^2*DiracDelta[x - 1], f[0] == 0, f'[0] == fs0}, f, {x, 0, 10}, fs0] Now choose fs0 to fulfill the second boundary condition f[10]==1: sol = FindRoot[F[fs0][10] == 1, {fs0, 1}] Plot[F[fs0 /. sol][x], {x, 0, 10}, PlotRange -> All] | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/236213', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/8822/']} | jdg_380772
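The same shooting idea can also be sketched outside Mathematica; the following Python/SciPy version is an illustrative translation, not the Mathematica approach itself: integrate the homogeneous equation f'' = -x f' up to x = 1, apply the jump f'(1+) = f'(1-) + 1 obtained by integrating the x^2 DiracDelta[x-1] term across x = 1, continue to x = 10, and root-find on the initial slope.

    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def rhs(x, y):                      # y = [f, f']; away from x = 1, f'' = -x f'
        return [y[1], -x * y[1]]

    def f_at_10(slope0):
        left = solve_ivp(rhs, (0.0, 1.0), [0.0, slope0], rtol=1e-9, atol=1e-12)
        f1, fp1 = left.y[0, -1], left.y[1, -1]
        right = solve_ivp(rhs, (1.0, 10.0), [f1, fp1 + 1.0],   # the delta gives f' a jump of 1^2 = 1
                          rtol=1e-9, atol=1e-12)
        return right.y[0, -1]

    slope0 = brentq(lambda s: f_at_10(s) - 1.0, -10.0, 10.0)   # enforce f(10) == 1
    print(slope0, f_at_10(slope0))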
stackexchange | llm_judgeable_groundtruth_similarity | 49786779 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
After upgrading to Android Studio 3.1, I started to get following error during build. Project uses multidex and DX is enabled by default as you would notice in the error. I tried to check dependency graph to understand what is going on but so far have no clue. Interestingly this only fails on my machine. I cleaned up everything, including reinstall etc but nothing worked. Anyone had the same issue and how did you solve it? Or any direction that I can take a look? AGPBI: { "kind":"error", "text":"Program type already present: android.support.v4.accessibilityservice.AccessibilityServiceInfoCompat", "sources":[{}], "tool":"D8"} This is the task that fails: transformDexArchiveWithExternalLibsDexMergerForDebug I checked similar issues and it seems random things fixes their problem, I'm not sure what is the real cause.
Now provide the response and nothing else.
| I managed to determine the root cause by using the following steps. The exact culprit may differ from project to project, but this is the way to track it down. Go to Android Studio, open Navigate -> Class, check "Include non-project classes", and copy-paste the full class path with the package name: android.support.v4.accessibilityservice.AccessibilityServiceInfoCompat You should be able to see where it is used. Most probably you will need to remove it from one of them. In my case the issue was that the ViewPagerIndicator library was downloading the support library as a jar. Removing it solved the issue. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49786779', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/579671/']} | jdg_380773
stackexchange | llm_judgeable_groundtruth_similarity | 8767706 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Ruby on Rails 3.1 and I would like to know how to correctly handle internationalization related to partial template files. That is, ... ... in my app/views/users/flag.html.erb file I have: <%= t('.test_key1') %><%= render :partial => "/users/flag_form" %> ... in my app/views/users/_flag_form.html.erb file I have: <%= t('.test_key2') %> If in my config/locales/views/users/en.yml file ( note : I am organizing files as stated in the official RoR guide ) I use en: users: flag: test_key1: Test 1 text test_key2: Test 2 text the Test 1 text is displayed in the "main" template ( app/views/users/flag.html.erb ) but the Test 2 text isn't for the partial template ( app/views/users/_flag_form.html.erb ). How could\should I solve this issue so to properly display the Test 2 text ?
Now provide the response and nothing else.
| One way would be to use an explicit scope instead of the "lazy" lookup with the leading full stop. Something like this should work: I18n.t :test_key2, :scope => 'users.flag' or use: I18n.t "users.flag.test_key2" Lastly, you could even pass it to the partial, as in <%= render :partial => "/users/flag_form", :locals => { :test_key => t('.test_key1') } %> You should also check out the appendix section on this website, as it might list something that I am missing: https://web.archive.org/web/20120619002316/http://www.unixgods.org/~tilo/Rails/where_is_Rails_trying_to_lookup_L10N_strings.html | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8767706', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/920796/']} | jdg_380774
stackexchange | llm_judgeable_groundtruth_similarity | 4122099 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have read through the Fortran 95 book by Metcalf, Reid and Cohen, and Numerical Recipes in Fortran 90. They recommend using WHERE, FORALL and SPREAD amongst other things to avoid unnecessary serialisation of your program. However, I stumbled upon this answer which claims that FORALL is good in theory, but pointless in practice - you might as well write loops as they parallelise just as well and you can explicitly parallelise them using OpenMP (or automatic features of some compilers such as Intel). Can anyone verify from experience whether they have generally found these constructs to offer any advantages over explicit loops and if statements in terms of parallel performance? And are there any other parallel features of the language which are good in principal but not worth it in practice? I appreciate that the answers to these questions are somewhat implementation dependant, so I'm most interested in gfortran, Intel CPUs and SMP parallelism.
Now provide the response and nothing else.
| As I said in my answer to the other question, there is a general belief that FORALL has not been as useful as was hoped when it was introduced to the language. As already explained in other answers, it has restrictive requirements and a limited role, and compilers have become quite good at optimizing regular loops. Compilers keep getting better, and capabilities vary from compiler to compiler. Another clue is that Fortran 2008 is trying again... besides adding explicit parallelization to the language (co-arrays, already mentioned), there is also "do concurrent", a new loop form that requires restrictions that should better allow the compiler to perform automatic parallelization optimizations, yet should be sufficiently general to be useful -- see ftp://ftp.nag.co.uk/sc22wg5/N1701-N1750/N1729.pdf . In terms of obtaining speed, mostly I select good algorithms and program for readability & maintainability. Only if the program is too slow do I locate the bottlenecks and recode or implement multi-threading (OpenMP). It will be a rare case where FORALL or WHERE versus an explicit do loop will have a meaningful speed difference -- I'd look more to how clearly they state the intent of the program. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4122099', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/482420/']} | jdg_380775
stackexchange | llm_judgeable_groundtruth_similarity | 31015 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I am a consultant and I want to explain to my client the usefulness of confidence intervals. The client says to me that my intervals are too wide to be useful and he would prefer ones half as wide. How should I respond?
Now provide the response and nothing else.
| It depends on what the client means by "useful". Your client's suggestion that you arbitrarily narrow the intervals seems to reflect a misunderstanding that, by narrowing the intervals you've somehow magically decreased the margin of error. Assuming the data set has already been collected and is fixed (if this isn't the case, @shabbychef's joke in the comments gives you your answer), any response to your client should emphasize and describe why there's no "free lunch" and that you are sacrificing something by narrowing the intervals. Specifically, since the data set is fixed, the only way you can reduce the width of the confidence interval is by decreasing the confidence level. Therefore, you have the choice between a wider interval that you're more confident contains the true parameter value or a narrower interval that you're less confident about. That is, wider confidence intervals are more conservative. Of course, you can never just optimize either width or confidence level mindlessly, since you can vacuously generate a $100 \%$ confidence interval by letting it span the entire parameter space and can get an infinitely narrow confidence interval, although it will have $0 \%$ coverage. Whether or not a less conservative interval is more useful clearly depends both on the context and how the width of the interval varies as a function of the confidence level, but I'm having trouble envisioning an application where using a much lower confidence level to obtain narrower intervals would be preferable. Also, it's worth pointing out that the $95 \%$ confidence interval has become so ubiquitous that it will be hard to justify why you're, for example, using a $60\%$ confidence interval. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/31015', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/12177/']} | jdg_380776 |
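To put numbers on that trade-off, here is a short Python sketch for a normal-based interval (the normal model and known sigma are assumptions for illustration): the half-width is z*sigma/sqrt(n), so with the data fixed, halving the width means halving z, and the loop shows what that does to the coverage.

    from scipy.stats import norm

    # e.g. a 95% interval cut to half its width retains only about 67% confidence
    for level in (0.99, 0.95, 0.90):
        z = norm.ppf(1 - (1 - level) / 2)        # critical value for this confidence level
        level_at_half = 2 * norm.cdf(z / 2) - 1  # confidence left if the width is halved
        print(f"{level:.0%} interval: z = {z:.3f}; "
              f"half as wide gives only {level_at_half:.1%} confidence")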
stackexchange | llm_judgeable_groundtruth_similarity | 52527612 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Update: I changed the collision code again and made a Component for AABB. Now it seems the problem is only with horizontal collision; it doesn't push the object far enough, I think, but it is the same code as for the Y axis so it shouldn't be a problem. (It does detect the horizontal collision; the resolution is the problem.)

void Hermes_Player::Collision(GameObject * other)
{
    if (other->GetTag() == "wall") {
        AABB* myAABB = dynamic_cast<AABB*>(this->GetComponent("aabb"));
        AABB* otherAABB = dynamic_cast<AABB*>(other->GetComponent("aabb"));
        if (abs(myAABB->lastCenter.x - otherAABB->lastCenter.x) < myAABB->halfCenter.x + otherAABB->halfCenter.x) {
            std::cout << "y" << std::endl;
            if (myAABB->center.y < otherAABB->center.y) {
                int distance = (myAABB->halfCenter.y + otherAABB->halfCenter.y) - (otherAABB->center.y - myAABB->center.y);
                this->Position.y -= distance;
                myAABB->center.y = (myAABB->center.y - distance);
            }
            if (myAABB->center.y > otherAABB->center.y) {
                int distance = (myAABB->halfCenter.y + otherAABB->halfCenter.y) - (myAABB->center.y - otherAABB->center.y);
                this->Position.y += distance;
                myAABB->center.y = (myAABB->center.y + distance);
            }
        }
        else {
            std::cout << "x" << std::endl;
            int dist = myAABB->halfCenter.x + otherAABB->halfCenter.x;
            int dif = (this->Size.x + other->Size.x) / 2 - abs(dist);
            if (myAABB->center.x < otherAABB->center.x) {
                int distance = (myAABB->halfCenter.x + otherAABB->halfCenter.x) - (otherAABB->center.x - myAABB->center.x);
                this->Position.x -= distance;
                myAABB->center.x = (myAABB->center.x - distance);
            }
            if (myAABB->center.x > otherAABB->center.x) {
                int distance = (myAABB->halfCenter.x + otherAABB->halfCenter.x) - (myAABB->center.x - otherAABB->center.x);
                this->Position.x += distance;
                myAABB->center.x = (myAABB->center.x + distance);
            }
            std::cout << this->Position.x << std::endl;
        }
    }
}
Now provide the response and nothing else.
| As you have mentioned SparkSQL does support DATEDIFF but for days only. I would also be careful as it seems the parameters are the opposite way round for Spark, ie --SQL ServerDATEDIFF ( datepart , startdate , enddate )--SparkDATEDIFF ( enddate , startdate ) Spark does however support a similar function called months_between which you could use in place of DATEDIFF( month ... . This function also returns a decimal amount so optionally cast it to INT for similar functionality to the SELECT startDate, endDate, DATEDIFF( endDate, startDate ) AS diff_days, CAST( months_between( endDate, startDate ) AS INT ) AS diff_months FROM yourTableORDER BY 1; There are also year and quarter functions for determining the year and quarter of a date respectively. You could simply minus the years but quarters would be more tricky. It may be you have to 'do the math' or end up using a calendar table. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/52527612', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6359070/']} | jdg_380777 |
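For completeness, the same thing through the PySpark DataFrame API (a sketch that assumes an existing SparkSession named spark and the same illustrative table and column names as above):

from pyspark.sql import functions as F

df = spark.table("yourTable")
df = (df
      .withColumn("diff_days", F.datediff(F.col("endDate"), F.col("startDate")))
      .withColumn("diff_months",
                  F.months_between(F.col("endDate"), F.col("startDate")).cast("int")))
df.orderBy("startDate").show()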
stackexchange | llm_judgeable_groundtruth_similarity | 19551754 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've created a multi select listbox on a userform. There are 9 items in the listbox. How do I gather those selected items into a sentence? The listbox contains reasons for returning a check. The items in the listbox are idenfiers or placeholders for a longer string, so the selection "unsigned", creates the returned string, "the check was not signed". The user can select several reasons, so depending on the selections, I need sentence structure that formats as: "x, y, and z" OR "y and z" OR "z". (ex: "the check is not signed, the check is post-dated, and the check is a third-party check.") It seems as though an array needs to be created from the selections, the selections counted, then an "If then" statement to create the sentence, but I'm stumped.I can count the selected items, I can create the sentence if only 1 item is chosen, but the compound sentence stumps me.
Now provide the response and nothing else.
| I have this function which returns an array of selected items from a listbox. I have updated from my original answer to return a delimited string instead of an array of selected items: Public Function GetSelectedItems(lBox As MSForms.ListBox) As String'returns an array of selected items in a ListBoxDim tmpArray() As VariantDim i As IntegerDim selCount As Integer selCount = -1 '## Iterate over each item in the ListBox control: For i = 0 To lBox.ListCount - 1 '## Check to see if this item is selected: If lBox.Selected(i) = True Then '## If this item is selected, then add it to the array selCount = selCount + 1 ReDim Preserve tmpArray(selCount) tmpArray(selCount) = lBox.List(i) End If Next If selCount = -1 Then '## If no items were selected, return an empty string GetSelectedItems = "" ' or "No items selected", etc. Else: '## Otherwise, return the array of items as a string, ' delimited by commas GetSelectedItems = Join(tmpArray, ", ") End IfEnd Function You can call this by assigning to an array: Dim mySentence as StringmySentence = GetSelectedItems(MyUserForm.MyListBox) From that point, you could just replace the last comma with a " and" and you should be all set. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19551754', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2913029/']} | jdg_380778 |
stackexchange | llm_judgeable_groundtruth_similarity | 14004087 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for a way to get the app icon from the app id. Do you know how to do it? Please share the way. Thanks. e.gInstagram, where the id I'm looking for is: id389801252 https://itunes.apple.com/jp/app/instagram/id389801252?mt=8 I want to get this image:
Now provide the response and nothing else.
| (I composed this answer after 2 minutes of googling... It's just the matter of the correct keyword!) This is possible using an undocumented documented API of the iTunes Store. It might change in the future, but it doesn't seem to have changed in the near past, so here you are... NSString *idString = @"id389801252";NSString *numericIDStr = [idString substringFromIndex:2]; // @"389801252"NSString *urlStr = [NSString stringWithFormat:@"http://itunes.apple.com/lookup?id=%@", numericIDStr];NSURL *url = [NSURL URLWithString:urlStr];NSData *json = [NSData dataWithContentsOfURL:url];NSDictionary *dict = [NSJSONSerialization JSONObjectWithData:json options:0 error:NULL];NSArray *results = [dict objectForKey:@"results"];NSDictionary *result = [results objectAtIndex:0];NSString *imageUrlStr = [result objectForKey:@"artworkUrl100"]; // or 512, or 60NSURL *artworkURL = [NSURL URLWithString:imageUrlStr];NSData *imageData = [NSData dataWithContentsOfURL:artworkURL];UIImage *artworkImage = [UIImage imageWithData:imageData]; Note that this performs two synchronous round-trips using the NSURL API, so you better wrap this in a backgorund thread for maximal user experience. Feed this program an ID string ( idString in the code above) and in the end, artworkImage will contain a UIImage with the desired image. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14004087', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1582304/']} | jdg_380779 |
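If you want to eyeball the JSON before writing the Objective-C, the same lookup endpoint can be queried from a quick script (a sketch using Python's requests library; the app id is the same example as above):

import requests

app_id = "389801252"                       # numeric part of "id389801252"
resp = requests.get("https://itunes.apple.com/lookup", params={"id": app_id})
result = resp.json()["results"][0]
print(result["artworkUrl100"])             # artworkUrl60 / artworkUrl512 also exist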
stackexchange | llm_judgeable_groundtruth_similarity | 360888 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can it be proved that any subgroup of $A_5$ has order at most 12? This is [Herstein, Problem 2.10.15], which also gives the hint that I can assume the result of the previous problem that $A_5$ has no normal subgroups $N \ne (e),A_5$. This problem appears in an earlier section of the text than the Sylow theorems. There is a proof given at Subgroups of $A_5$ have order at most $12$? , but it uses the Sylow theorems, and I wonder if a more elementary proof is available. So far, I can prove the following: For $n \ge 3$, the subgroup generated by the 3-cycles is $A_n$; if a normal subgroup of $A_n$ contains even a single 3-cycle it must be all of $A_n$; $A_5$ has no normal subgroups $N \ne (e),A_5$. I showed the latter by repeatedly conjugating a given nontrivial element in $A_5$ by 3-cycles to eventually obtain elements whose product is a 3-cycle.
Now provide the response and nothing else.
| Let $H$ be a proper subgroup of $A_5$ with $|H| > 12$, and let $A_5$ act on the set of left cosets of $H$ by left multiplication. Then, you should be able to see that the kernel of this action must be strictly between $1$ and $A_5$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/360888', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/70056/']} | jdg_380780 |
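Spelling the hint out a little (one possible way to finish; filling in the details is the point of the exercise): the action on the $k=[A_5:H]=60/|H|$ left cosets gives a homomorphism $\varphi\colon A_5\to S_k$ whose kernel is a normal subgroup of $A_5$ contained in $H$, hence not all of $A_5$. If $|H|>12$ then $k\le 4$, so $|S_k|\le 24<60$ and $\varphi$ cannot be injective; the kernel is therefore a normal subgroup strictly between $(e)$ and $A_5$, contradicting the previous problem.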
stackexchange | llm_judgeable_groundtruth_similarity | 50709059 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know this might flag as a duplicate solution but the solution on stack overflow is not working for me. Problem (node:5716) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 message listeners added. Use emitter.setMaxListeners() to increase limit. My codebase is huge and I facing this error sometimes I don't know why it is happening. I tried to increase the listeners limit but unfortunately, it is not working. const EventEmitter = require('events');const emitter = new EventEmitter()emitter.setMaxListeners(50) UPDATE After Some browsing, I run this command to trace warning node --trace-warnings index.babel.js Turns out be my socket.io code is the problem I am using socket.io with Redis. This is the error node:14212) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 message listeners added. Use emitter.setMaxListeners() to increase limit at _addListener (events.js:281:19) at RedisClient.addListener (events.js:298:10) at Namespace.<anonymous> (D:/newProject/services/socket.js:21:17) at emitOne (events.js:115:13) at Namespace.emit (events.js:210:7) at Namespace.emit (D:\newProject\node_modules\socket.io\lib\namespace.js:213:10) at D:\newProject\node_modules\socket.io\lib\namespace.js:181:14 at _combinedTickCallback (internal/process/next_tick.js:131:7) at process._tickCallback (internal/process/next_tick.js:180:9) this is the code (But this code is for more specific tasks it will not execute all the time). const redis = require('redis');const config = require('../config');const sub = redis.createClient(config.REDIS.port, config.REDIS.host);const pub = redis.createClient(config.REDIS.port, config.REDIS.host);sub.subscribe('spread');module.exports = io => { io.on('connection', socket => { /* To find the User Login */ let passport = socket.handshake.session.passport; if (typeof passport !== 'undefined') { socket.on('typing:send', data => { pub.publish('spread', JSON.stringify(data)); }); sub.on('message', (ch, msg) => { // This is the Exact line where I am getting this error io.emit(`${JSON.parse(msg).commonID}:receive`, { ...JSON.parse(msg) }); }); } });};
Now provide the response and nothing else.
| The default limit for Event Emitter is 10. You can increase it with the emitter.setMaxListeners. My suggestion is not to change it unless and until explicitly required, listeners are increased because you didn't unsubscribe. Now to your code. const redis = require('redis');const config = require('../config');const sub = redis.createClient(config.REDIS.port, config.REDIS.host);const pub = redis.createClient(config.REDIS.port, config.REDIS.host);sub.subscribe('spread');module.exports = (io) => { io.on('connection', (socket) => { // this callback will be executed for all the socket connections. let passport = socket.handshake.session.passport; /* To find the User Login */ if (typeof passport !== 'undefined') { socket.on('typing:send', (data) => { pub.publish('spread', JSON.stringify(data)); }); // this is where you are subscribing for each and every socket connected to your server sub.on('message', (ch, msg) => { // this is the Exact line where I am getting this error // whereas you are emitting messages on socket manager, not on the socket. io.emit(`${JSON.parse(msg).commonID}:receive`, { ...JSON.parse(msg) }); }); } });}; Now if we analyze the above code then if you open 20 socket connections to your server it will subscribe 20 times, here it is going wrong.Now if your requirement is to listen for the message published on Redis at the server level and then emit it on io then your code should be like below const redis = require('redis');const config = require('../config');const sub = redis.createClient(config.REDIS.port, config.REDIS.host);const pub = redis.createClient(config.REDIS.port, config.REDIS.host);sub.subscribe('spread');module.exports = (io) => { sub.on('message', (ch, msg) => { // this is the Exact line where I am getting this error io.emit(`${JSON.parse(msg).commonID}:receive`, { ...JSON.parse(msg) }); }); io.on('connection', (socket) => { let passport = socket.handshake.session.passport; /* To find the User Login */ if (typeof passport !== 'undefined') { socket.on('typing:send', (data) => { pub.publish('spread', JSON.stringify(data)); }); } });}; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50709059', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5947166/']} | jdg_380781 |
stackexchange | llm_judgeable_groundtruth_similarity | 389534 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to solve Komal problem 661: Let $K$ be a fixed positive integer. Let $(a_{0},a_{1},\cdots )$ be the sequence of real numbers that satisfies $a_{0}=-1$ and $$\sum_{i_{0},i_{1},\cdots,i_{K}\ge 0,i_{0}+i_{1}+\cdots+i_{K}=n}\dfrac{a_{i_{1}}a_{i_{2}}\cdots a_{i_{K}}}{i_{0}+1}=0$$ for every positive integer $n$. Show that $a_{n}>0$ for $n\ge 1$. Edit: As Iosif Pinelis pointed out, $b_{0}=(-1)^K$; now I see where I went wrong. And as Ira Gessel pointed out, it remains to prove the following. Let $\displaystyle f(x)\triangleq\sum_{i\geq 0} a_i x^i$ and $\displaystyle g(x)\triangleq \sum_{i\geq 0} \cfrac{x^{i}}{i+1}$. Then, we get $$ f(x)^Kg(x) = \sum_{n\geq 0}b_nx^n \text{ with } b_n=\sum_{\substack{i_{0},i_{1},\cdots,i_{K}\ge 0\\i_{0}+i_{1}+\cdots+i_{K}=n}}\dfrac{a_{i_{1}}a_{i_{2}}\cdots a_{i_{K}}}{i_{0}+1}.$$ Since $b_n=0$ for $n\geq 1$, we get \begin{align} f(x)^Kg(x)&=b_0=(-1)^K\\\implies \left(\sum_{i\geq 0} a_i x^i\right)^K&=\frac{(-1)^K}{g(x)}=\frac{-(-1)^K\cdot x}{\sum_{i\geq 1} -\cfrac{x^{i}}{i}}=\frac{-(-1)^K\cdot x}{\ln(1-x)}\\\implies\sum_{i\geq 1} a_i x^i &=-a_{0}+ \left(\frac{-(-1)^Kx}{\ln(1-x)}\right)^{1/K}=1-\left(\dfrac{-x}{\ln{(1-x)}}\right)^{1/K}\end{align} Thus, using the Taylor series expansion, it remains to show that $$\dfrac{d^i}{dx^i}\left(1-\left(-\dfrac{x}{\ln{(1-x)}}\right)^{\frac{1}{K}}\right)|_{x=0}>0\tag{1}$$ But this last step may not be easy; can you help me prove $(1)$? Thanks
Now provide the response and nothing else.
| In the paper [On the sum of the reciprocals of the differences between consecutive primes, Ramanujan J., 47,427–433(2018)] by me, I proved under the Hardy–Littlewood prime-pair conjecture that $$\sum_{n\le X}\frac{1}{p_{n+1}-p_n}\sim \frac{X\log\log X}{\log X},$$ and without the Hardy–Littlewood prime-pair conjecture, one has $$\sum_{n\le X}\frac{1}{p_{n+1}-p_n}\ll \frac{X\log\log X}{\log X}.$$ Therefore, by using Abel’s summation formula, one can prove that the conjecture is true unconditionally. In fact, this problem has been investigated by Erdős and Nathanson [On the sum of the reciprocals of the differences between consecutive primes. In: Chudnovsky, D.V., Chudnovsky, G.V., Nathanson, M.B. (eds.) Number theory: New York Seminar 1991–1995, pp. 97–101. Springer, New York (1996)]. They proved $$\sum_{n\ge 2}\frac{1}{(p_{n+1}-p_n)n(\log\log n)^c}<+\infty,$$ for all $c>2$ . Then by noting that $p_n\sim n\log n$ , one can give an alternative proof. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/389534', 'https://mathoverflow.net', 'https://mathoverflow.net/users/38620/']} | jdg_380782 |
stackexchange | llm_judgeable_groundtruth_similarity | 4562757 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to make a Schedule. It should run every day at 1pm or 2pm... At the moment I can only make it run Every 10Sec or 10min... Intent myIntent = new Intent(AndroidAlarmService.this, MyAlarmService.class);pendingIntent = PendingIntent.getService(AndroidAlarmService.this, 0, myIntent, 0);AlarmManager alarmManager = (AlarmManager) getSystemService(ALARM_SERVICE);Calendar calendar = Calendar.getInstance();calendar.setTimeInMillis(System.currentTimeMillis());calendar.add(Calendar.SECOND, 10);alarmManager.set(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), pendingIntent);Toast.makeText(AndroidAlarmService.this, "Start Alarm", Toast.LENGTH_LONG).show(); Thanks
Now provide the response and nothing else.
| This code will run the Intent each day on 1 PM or 2 PM Calendar calendar = Calendar.getInstance();calendar.set(Calendar.HOUR_OF_DAY, 13); // For 1 PM or 2 PMcalendar.set(Calendar.MINUTE, 0);calendar.set(Calendar.SECOND, 0);PendingIntent pi = PendingIntent.getService(context, 0, new Intent(context, MyClass.class),PendingIntent.FLAG_UPDATE_CURRENT);AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);am.setRepeating(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), AlarmManager.INTERVAL_DAY, pi); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4562757', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/558292/']} | jdg_380783 |
stackexchange | llm_judgeable_groundtruth_similarity | 4070659 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new to web security. Why would I want to use HTTP and then switch to HTTPS for some connections? Why not stick with HTTPS all the way?
Now provide the response and nothing else.
| There are interesting configuration improvements that can make SSL/TLS less expensive, as described in this document (apparently based on work from a team from Google : Adam Langley, Nagendra Modadugu and Wan-Teh Chang): http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html If there's one point that we want to communicate to the world, it's that SSL/TLS is not computationally expensive any more. Ten years ago it might have been true, but it's just not the case any more. You too can afford to enable HTTPS for your users. In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that. If you stop reading now you only need to remember one thing: SSL/TLS is not computationally expensive any more. One false sense of security when using HTTPS only for login pages is that you leave the door open to session hijacking (admittedly, it's better than sending the username/password in clear anyway); this has recently made easier to do (or more popular) using Firesheep for example (although the problem itself has been there for much longer). Another problem that can slow down HTTPS is the fact that some browsers might not cache the content they retrieve over HTTPS, so they would have to download them again (e.g. background images for the sites you visit frequently). This being said, if you don't need the transport security (preventing attackers for seeing or altering the data that's exchanged, either way), plain HTTP is fine. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4070659', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/206446/']} | jdg_380784 |
stackexchange | llm_judgeable_groundtruth_similarity | 4271454 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Before iOS 4, I used to add an observer to each MKAnnotationView added to the map view, listening for its selected method, so I know when the user has tapped on a pin. This worked fine up to iOS 4.2. I've noticed in the release notes that annotation views are actually being reused, and it somehow messes up the observers. So I figure I can use the -mapview:didSelectAnnotationView: method from MKMapViewDelegate for my needs, but that was only added in the iOS 4.0 SDK. So, to maintain compatibility, I'd like to implement this method on my delegate and conditionally check for the presence of this method on the MKMapViewDelegate protocol, so that if it's not present, I will add my observer to the annotation view. How can I do this for a protocol method, similarly to how we check that a class is not nil? UPDATE: As Daniel Dickison pointed out, I can't use respondsToSelector: , because my delegate has -mapview:didSelectAnnotationView: implemented for 4.0+ devices. What I need is to check if the protocol on that device has the optional -mapview:didSelectAnnotationView: method OR if MKMapView will look for that method on its delegate. I ended up doing a test for the current iOS version running. If it's higher than 4.0, MKMapView will look for that method and call it. if ([[[UIDevice currentDevice] systemVersion] doubleValue] < 4.0) [self setupObserver]; This solves the original problem, but it would still be interesting to check the actual protocol for the method somehow.
Now provide the response and nothing else.
| Because there is no object instance you can ask if it responds to a message selector, and you already know the protocol is supported but you are just looking for one method within - you need to use protocol_getMethodDescription , like so (method is class instance and optional) where you check for a nil return value: #import <objc/runtime.h>struct objc_method_description hasMethod = protocol_getMethodDescription(@protocol(MKMapViewDelegate), @selector(mapView:didSelectAnnotationView:), NO, YES);if ( hasMethod.name != NULL ){...} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4271454', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/67397/']} | jdg_380785 |
stackexchange | llm_judgeable_groundtruth_similarity | 25332511 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
With C# there is a string.Replace method. Like this: string oldString = "stackoverflow"; string newString = oldString.Replace("stackover",""); Output: flow Can I do something similar to this with AngularJS? My attempt doesn't work: var oldString = "stackoverflow"; $scope.newString = oldString.Replace("stackover","NO");
Now provide the response and nothing else.
| In Javascript method names are camel case, so it's replace , not Replace : $scope.newString = oldString.replace("stackover","NO"); Note that contrary to how the .NET Replace method works, the Javascript replace method replaces only the first occurrence if you are using a string as first parameter. If you want to replace all occurrences you need to use a regular expression so that you can specify the global (g) flag: $scope.newString = oldString.replace(/stackover/g,"NO"); See this example. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25332511', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3228992/']} | jdg_380786 |
stackexchange | llm_judgeable_groundtruth_similarity | 3389928 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Simple question, simple code. This works: $x = &$_SESSION['foo']; This does not: $x = (isset($_SESSION['foo']))?&$_SESSION['foo']:false; It throws PHP Parse error: syntax error, unexpected '&' . Is it just not possible to pass by reference while using the conditional operator, and why not? Also happens if there's a space between the ? and & .
Now provide the response and nothing else.
| Simple answer: no. You'll have to take the long way around with if/else. It would also be rare and possibly confusing to have a reference one time, and a value the next. I would find this more intuitive, but then again I don't know your code of course: if(!isset($_SESSION['foo'])) $_SESSION['foo'] = false;$x = &$_SESSION['foo']; As to why: no idea, probably it has to with at which point the parser considers something to be an copy of value or creation of a reference, which in this way cannot be determined at the point of parsing. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3389928', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/362520/']} | jdg_380787 |
stackexchange | llm_judgeable_groundtruth_similarity | 2145060 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently going through Harvard's Abstract Algebra using Michael Artin's book, and have no real way of verifying my proofs, and was hoping to make sure that my proof was right. The question reads: Let $V$ be the vector space of functions on the interval $[0, 1]$. Prove that the functions $x^{3}$, $\sin(x)$, and $\cos(x)$ are linearly independent. My proof goes as follows: For these to be linearly dependent there must exist an $a_{i} \neq0$, where $ i = 1, 2, 3$ such that$$a_{1}x^{3} + a_{2}\sin(x) + a_{3}\cos(x) = 0. $$So, we'll do this in 3 cases: Case 1: $x = 0$ In this case, $x^{3} = 0$, $\sin(x) = 0$ but $\cos(x) = 1$.So, we have $$0\times a_{1} + 0\times a_{2} + 1\times a_{3} = 0.$$So, $a_{1}$ and $a_{2}$ could be anything but $a_{3}$ must be 0. Case 2: $x \in (0,1)$ In this case, $x^{3} \neq 0$, $\sin(x) \neq 0$ and $\cos(x) \neq 0$.So, for this to be true, $a_{1}$, $a_{2}$ and $a_{3}$ all must be $0$. Case 3: $x = 1$ In this case, $x^{3} = 1$, $\sin(x) = .8...$ and $\cos(x) = .5...$.So, we have $$1\times a_{1} +.8\times a_{2} + .5\times a_{3} = 0.$$ So, $a_{3}$ could be any value, while $a_{1}$ and $a_{2}$ must be $0$. So, if $a_{1} \neq 0$ then we have a problem in Case 3. If $a_{2} \neq 0$ we have a problem in Case 3. If $a_{3} \neq 0$ we have a problem in Case 1. So, we know that all of the $a$ values must be $0$ and we complete the proof.
Now provide the response and nothing else.
| If you make no assumptions about $a_i$s and then show they must be zero, you have proved linear independence. From the evaluation at $x=0$ (case 1) you correctly conclude that $a_3=0$. At this point, I suggest simplifying your expression to $a_1x^3+a_2\sin(x)$. In your evaluation at $x\in(0,1)$ (case 2) you have a logical error of assuming that a sum of numbers being zero requires each number to be zero. $a+b=0$ does not imply $a$ and $b=0$ There is also an error here in the structure of your proof, because if you had shown that all of the $a_i$s are zero, you would be done already. You have a similar logical error for the evaluation at $x=1$ (case 3). It seems you are thinking of the outcome of the values of $a_i$s as depending on what $x$ is, hence the "cases" that are treated independently. But the $a_i$s are always the same, for all $x$. Your idea of evaluating at particular points is a good one, and they build on each other. The evaluation at $x=0$ shows that $a_3=0$. Evaluation at two other particular values of $x$, say, $x=\frac12$ and $x=1$, would give you two equations in two unknowns for $a_1$ and $a_2$, and you could show the only solution is $a_1=a_2=0$. Alternatively, you could take the derivative of your expression to get $3a_1x^2+a_2\cos(x)=0$, and then evaluation at $x=0$ will give you $a_2=0$ directly, after which you'll be left with $a_1x^3\equiv 0$ from which it isn't hard to show $a_1=0$ either by evaluation at $x=1$ or taking the derivative $3$ times. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2145060', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/284630/']} | jdg_380788 |
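For concreteness, the derivative route from the last paragraph can be written out in a few lines (a sketch of one possible finish): from $a_1x^3+a_2\sin(x)+a_3\cos(x)=0$ for all $x\in[0,1]$, evaluation at $x=0$ gives $a_3=0$; differentiating the remaining identity gives $3a_1x^2+a_2\cos(x)=0$, and $x=0$ gives $a_2=0$; what is left is $a_1x^3=0$ for all $x$, so $x=1$ gives $a_1=0$. Hence the three functions are linearly independent.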
stackexchange | llm_judgeable_groundtruth_similarity | 28866500 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any way to update a record in Laravel using eloquent models just if a change has been made to that record? I don't want any user requesting the database for no good reason over and over, just hitting the button to save changes. I have a javascript function that enables and disables the save button according with whether something has changed in the page, but I would like to know if it's possible to make sure to do this kind of feature on the server side too. I know I can accomplish it by myself (meaning: without appealing to an internal functionality of the framework) just by checking if the record has change, but before doing it that way, I would like to know if Laravel eloquent model already takes care of that, so I don't need to re-invent the wheel. This is the way I use to update a record: $product = Product::find($data["id"]);$product->title = $data["title"];$product->description = $data["description"];$product->price = $data["price"];//etc (string values were previously sanitized for xss attacks)$product->save();
Now provide the response and nothing else.
| You're already doing it! save() will check if something in the model has changed. If it hasn't it won't run a db query. Here's the relevant part of code in Illuminate\Database\Eloquent\Model@performUpdate : protected function performUpdate(Builder $query, array $options = []){ $dirty = $this->getDirty(); if (count($dirty) > 0) { // runs update query } return true;} The getDirty() method simply compares the current attributes with a copy saved in original when the model is created. This is done in the syncOriginal() method: public function __construct(array $attributes = array()){ $this->bootIfNotBooted(); $this->syncOriginal(); $this->fill($attributes);}public function syncOriginal(){ $this->original = $this->attributes; return $this;} If you want to check if the model is dirty just call isDirty() : if($product->isDirty()){ // changes have been made} Or if you want to check a certain attribute: if($product->isDirty('price')){ // price has changed} | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/28866500', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4541929/']} | jdg_380789 |
stackexchange | llm_judgeable_groundtruth_similarity | 21285419 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I've created a temporary file through Tempfile , is there any way aside from copying it to another file that I can make it "permanent"? I'd like to avoid it getting deleted when the associated Tempfile instance is garbage collected or the process is terminated. On a related point, is there a way to leverage the Tempfile mechanism (or use a similar mechanism) to get a "new" filename without having to create a file at that name?
Now provide the response and nothing else.
| Not really. For the question itself, see this: ObjectSpace.undefine_finalizer(tmpfile) The Tempfile library uses Ruby ObjectSpace finalizers to automatically delete itself on garbage collection. By using the above line you can remove the Tempfile's ability to delete itself if you don't delete it. So, for example: $ irb2.0.0p0 :001 > require "tempfile" => true 2.0.0p0 :002 > t = Tempfile.new("test") => #<Tempfile:/tmp/test20140122-6655-80p4b7> 2.0.0p0 :003 > t.write("Hi!") => 3 2.0.0p0 :004 > ObjectSpace.undefine_finalizer(t) => #<Tempfile:/tmp/test20140122-6655-80p4b7> 2.0.0p0 :005 > exit$ cat /tmp/test20140122-6655-80p4b7Hi!$ There's something else to be aware of though. Tempfile will use system temporary file directories like /tmp that the OS automatically cleans out every once in a while (for example on every boot ). Because of this, even if you "persist" the file, you either need to be OK with it disappearing, or move it to a directory that doesn't get cleaned out by default, like /var/tmp (the Linux directory for persistant temporary files). As for your second question, try this code from here : Dir::Tmpname.create('your_application_prefix') { |path| puts path } It requires a require "tmpdir" . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21285419', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1008891/']} | jdg_380790 |
stackexchange | llm_judgeable_groundtruth_similarity | 43704758 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This code works and prints "b": fn main() { let s = "abc"; let ch = s.chars().nth(1).unwrap(); println!("{}", ch);} On the other hand, this code results in a mismatch type error. fn main() { let s = "abc"; let n: u32 = 1; let ch = s.chars().nth(n).unwrap(); println!("{}", ch);} error[E0308]: mismatched types --> src/main.rs:5:28 |5 | let ch = s.chars().nth(n).unwrap(); | ^ expected usize, found u32 For some external reason, I have to use the u32 type for variable n . How can I convert u32 to usize and use it in nth() ?
Now provide the response and nothing else.
| The as operator works for all number types: let ch = s.chars().nth(n as usize).unwrap(); Rust forces you to cast integers to make sure you're aware of signedness or overflows. Integer constants can have a type suffix: let n = 1u32; However, note that negative constants, such as -1i32 is internally - 1i32 . Integer variables declared without an explicit type specification are shown as {integer} and will be properly inferred from one of the method calls. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/43704758', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7526319/']} | jdg_380791 |
stackexchange | llm_judgeable_groundtruth_similarity | 2414396 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$$\left(\frac{1}{3}\right)^{2}+\left(\frac{1\cdot2}{3\cdot5}\right)^{2}+\left(\frac{1\cdot2\cdot3}{3\cdot5\cdot7}\right)^{2}+...$$ I am not able to find a formula for the general term, and that's creating a problem for me as I can't proceed further without it.
Now provide the response and nothing else.
| The generic term of the series is$$a_n=\prod_{k=1}^n \left(\frac{k}{2k+1}\right)^2.$$Hence, as $n\to+\infty$,$$\frac{a_{n+1}}{a_n}= \left(\frac{(n+1)}{2(n+1)+1}\right)^2\to \frac{1}{4}.$$What may we conclude? | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2414396', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/433761/']} | jdg_380792 |
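(To spell out the conclusion the answer is hinting at: since $\lim_{n\to\infty} a_{n+1}/a_n = \tfrac14 < 1$, the ratio test shows that the series converges.)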
stackexchange | llm_judgeable_groundtruth_similarity | 475125 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create a 20 minute timer for the 20-20-20 rule with a bunch of resistors and capacitors I have available. I am using a 10M resistor here in place of R1 and some 10K/20K in place of R2. Since I want a beep of a second or two as the "up" signal, I used a NOT gate at the output and then added a small active buzzer. It seems like it works for small resistor values but not for values over 1M, as a comment I found here suggests. Is this really true? And isn't it possible for the output to be low for a long time and high for a short time, so I can avoid the NOT gate? Any ideas are appreciated. I can definitely change the capacitor value, but I want to know about the resistor limit on these cheap 555 timers.
Now provide the response and nothing else.
| Threshold current parameter on the bipolar NE555 is typical 25nA, maximum 250nA at 25°C. It appears to be recommended to have double that current available at the peak, so with a 5V supply, the maximum value for R1+R2 is 3.4M \$\Omega\$ and 10M with a 15V supply. See Note 1 at the bottom of page 5 of the datasheet. That's assuming the capacitor has negligible leakage. If the capacitor leakage is significant you need to further reduce the maximum resistance. A 20 minute timer is not very practical with the NE555 or even very pleasant with the CMOS versions (which don't have the same limit on resistance value). A digital circuit such as a microcontroller or a CMOS counter chip plus an oscillator is a much better way to go. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/475125', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/111879/']} | jdg_380793 |
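The arithmetic behind those limits, as a quick sketch (the 2x margin is the rule of thumb from the datasheet note mentioned above): just before the 2/3 Vcc threshold the charging current through R1+R2 is (Vcc/3)/(R1+R2), and requiring that to stay at or above twice the 250 nA maximum threshold current gives the quoted figures.

# back-of-envelope check of the maximum (R1 + R2) values quoted above
def r_max(vcc, i_threshold_max=250e-9, margin=2.0):
    # smallest charging current occurs at the 2/3*Vcc threshold: (Vcc/3) / (R1+R2)
    return (vcc / 3.0) / (i_threshold_max * margin)

for vcc in (5.0, 15.0):
    print("Vcc = %4.1f V -> max R1+R2 ~ %.1f Mohm" % (vcc, r_max(vcc) / 1e6))
# about 3.3 Mohm at 5 V and 10 Mohm at 15 V, matching the figures above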
stackexchange | llm_judgeable_groundtruth_similarity | 373184 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I use a FreeNAS server which is built on FreeBSD 8.2-RELEASE-p6. I use the ZFS file system with a 13TB HDD on a box with 8GB of physical RAM installed. It uses almost all of the installed RAM while processing some requests. However, it still uses the same amount of memory when idle, so this sometimes becomes a problem. On my CentOS web server I use the following command in a cronjob to drop cached memory: sync; echo 3 > /proc/sys/vm/drop_caches However, this command does not work on my FreeNAS server. How can I drop cached memory on my FreeNAS box, which is built on FreeBSD 8.2? Thank you
Now provide the response and nothing else.
| To start, CentOS is a Linux Operating System. Linux is completely different from FreeBSD. You really don't need to free up that cached memory, likely you don't need to on your Cent box, but it does depend a bit on what you're running on it. You should only ever mess with memory management when you have a really specific and good reason to do so. The one and only reason you would want to do this on a production box is if you have an application the bases it's own memory usage on the amount of free memory. I do not know of any *nix programs that do this, but it's possible they're out there. Even if this is the case, you should be employing intelligent memory limiting on the system caches, not flushing them manually or periodically. The only other common reason to flush the caches is for benchmarking and testing purposes. If one of the above two do not apply, do not flush your caches. Update: In addition to the comments below, let me hammer the performance difference home with a simple test. dd if=/dev/zero of=/path/to/test.1g bs=1m count=1024 1073741824 bytes transferred in 20.998713 secs (51133697 bytes/sec) dd if=/path/to/test.1g of=/dev/null bs=1m 1073741824 bytes transferred in 4.496601 secs (238789654 bytes/sec) dd if=/path/to/test.1g of=/dev/null bs=1m 1073741824 bytes transferred in 1.071374 secs (1002210138 bytes/sec) The first time reading the test file nothing was cached; the second time it was already in the cache, so the read operation completed four times faster. In a typical server 90% of the reads are to 1% of the files/data on disk. If most of that 1% can stay in cached memory, disks reads will generally be 4x faster (on my server at least). | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/373184', 'https://serverfault.com', 'https://serverfault.com/users/115286/']} | jdg_380794 |
stackexchange | llm_judgeable_groundtruth_similarity | 36348 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $(a_{mn})_{m,n\in\mathbb{N}}$ and $(b_m)$ be sequences of complex numbers. We say that $(a_{mn})$ and $(b_m)$ constitute an infinite system of linear equations in infinitely many variables if we seek a sequence $(x_n)$ of complex numbers such that $\forall m\in\mathbb{N}:$ $\sum_{n=1}^{\infty}a_{mn}x_n=b_m$. Note that in general the order of summation matters. I am an undergraduate student with a focus on number theory and some background in functional analysis (2 semesters of functional analysis, 1 semester of non-linear functional analysis, 1 semester of operator algebras, 2 semesters of PDEs), so I am sort of a budding number theorist with a bias for functional analysis :-) That is also why I am fascinated by the object defined above, as a natural extension of a practical problem from linear algebra. We have never dealt with this type of object, and I wasn't able to find much on Google to get started with, maybe partly because I searched in the wrong way. So my request: could you recommend some introductory literature focused on such infinite systems of linear equations in infinitely many unknowns over $\mathbb{C}$? Thanks in advance!
Now provide the response and nothing else.
| The systems of this kind are fairly common in applications. For example, they naturally appear when solving boundary value problems for linear partial differential equations using the method of separation of variables. Predictably, the problem is not meaningful for any sequences {$a_{nm}$}, {$b_m$}, but only for sufficiently well-behaved ones. If, for example, you were to consider systems of the form$$ x_n+\sum_{m=1}^{\infty}a_{nm}x_m=b_n,\quad\mbox{such that}\quad \sum_n\sum_m a_{nm}^2<\infty \quad\mbox{ and }\quad \sum_nb_n^2<\infty, $$then this system possesses a unique solution in the Hilbert space $l_2$ such that $\sum_n x_n^2<\infty$ (assuming that the problem is not singular, i.e. that $\det(I+A)\ne0$). These requirements are too restrictive for some applications, hence there is a body of literature concerned with various kinds of regularity conditions involving {$a_{nm}$} and {$b_m$}, weaker than above, which ensure the well-posedness of the problem and enable numerical solution of such systems (which is usually done by truncation; see the appropriate accuracy estimates in F. Ursell (1996) "Infinite systems of equations: the effect of truncation", Quarterly Journal of Mechanics and Applied Mathematics , 49(2), 217--233). One good old book that discusses these systems in some detail was written by By L. V. Kantorovich and V. I. Krylov and is called "Approximate methods of higher analysis" (New York: Interscience Publishers, 1958). | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/36348', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1849/']} | jdg_380795 |
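As a concrete illustration of the truncation approach mentioned above (a toy sketch with made-up, fast-decaying coefficients that satisfy the square-summability conditions quoted for the regular case): truncate the system $x_n+\sum_m a_{nm}x_m=b_n$ to its leading $N\times N$ block and watch the leading components stabilise as $N$ grows.

import numpy as np

def truncated_solution(a, b, N):
    # a(n, m) and b(n) are callables giving the coefficients, 0-based indices
    A = np.array([[a(n, m) for m in range(N)] for n in range(N)])
    rhs = np.array([b(n) for n in range(N)])
    return np.linalg.solve(np.eye(N) + A, rhs)   # solves (I + A) x = b

a = lambda n, m: 1.0 / ((n + 2) ** 2 * (m + 2) ** 2)   # sum of squares converges
b = lambda n: 1.0 / (n + 1) ** 2                       # square-summable right-hand side

for N in (5, 10, 20, 40):
    print(N, truncated_solution(a, b, N)[:3])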
stackexchange | llm_judgeable_groundtruth_similarity | 17876 |
Below is a question asked on the forum earthscience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am wondering about something I find very interesting (I am not an expert in this topic). I have read many websites about human history, and every website says that there are certain periods in human history. For example: Paleolithic, Mesolithic, Neolithic, Bronze Age, Iron Age, etc. It is generally estimated that humans evolved on Earth between 200,000 and 300,000 years ago. I know we are Homo sapiens; what I am trying to understand is what the first step for humans was. I have read about Australopithecus and the evolution of the genus Homo on Earth. I know they are not considered human like us, but where and when was the first step on Earth at which life for humans started? (Links from the original post: humans in the earth, homo, homo info, Hominidae.) I am not an expert in this topic; I hope somebody could give me more information about this question.
Now provide the response and nothing else.
| Forty years ago, protein electrophoresis and other molecular types of dating were pointing to a date of 4.5 million years BP for the separation of the hominid line from the pongid (great ape) line, but I didn't believe it at the time because it conflicted with the fossil evidence. Since then, more fossils have been found and the disparity between geological dating techniques and molecular analysis has become greater. We now have Ardipithecus kadabba, dated at 6 million years BP, and Sahelanthropus tchadensis at 7 million years BP. These were bipedal hominids which had already split off from the common ancestor of the pongid line which led to modern chimps and gorillas. These early hominids were australopithecine-type creatures, small brained and still essentially bipedal apes, but having many characteristics similar to humans. Exactly when the australopithecines became brainy enough to be classed as humans is to some extent a value judgement and therefore not absolutely clear, but it is generally held that the first human was Homo habilis from Lake Turkana, Kenya, and lived about 3 million years ago. All the most important hominid fossils were found in Africa, mainly East Africa. The Palaeolithic, Mesolithic and Neolithic periods refer to the type of stone tools used by hominids at the relevant times. Palaeolithic tools were very primitive, and it usually takes an expert eye to identify them as tools rather than random fragments. They date from about the time of Homo habilis. It is entirely possible that some of these tools were made by Australopithecines. Mesolithic tools were a bit more sophisticated. and Neolithic tools were made comparatively recently by Homo sapiens and Homo neanderthalensis. The bronze and iron ages belong to the last few thousand years. The Great Pyramid was built in the copper age, 4.5 thousand years ago, which preceded the bronze age. The 200,000 - 300,000-year dates you refer to are the approximate dates of the earliest Neanderthals and the migration of the first Homo sapiens from Africa into the Middle East and Asia. The first Neanderthals closely resembled the African version of Homo erectus,from which they evolved, so it takes an expert to decide which is which. At the end of your question you seem to be asking when did life start. The generally accepted date is around 3.8 billion years ago, but it might have been very slightly earlier. | {} | {'log_upvote_score': 4, 'links': ['https://earthscience.stackexchange.com/questions/17876', 'https://earthscience.stackexchange.com', 'https://earthscience.stackexchange.com/users/7701/']} | jdg_380796 |
stackexchange | llm_judgeable_groundtruth_similarity | 36656017 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
@Padraic Cunningham Let me know if you want me to delete the question. I am new to Python. I want to skip some iterator values based on a condition. This is easy in C, but in Python I am having a hard time. So please help me understand why the code here loops 100 times instead of 10.

for i in range(100):
    print i
    i = i + 10

Edit: I understand there is an option to change the step size of the for loop, but I am interested in dynamically changing the iterator variable, like we can do in C. Okay, I get it: the for loop is different in Python than in C. The easy way is to use a while loop; I did that in my code and it worked. Thank you, community!
Now provide the response and nothing else.
| The for loop is walking through the iterable range(100). Modifying the current value does not affect what appears next in the iterable (and indeed, you could have any iterable; the next value might not be a number!). Option 1, use a while loop:

i = 0
while i < 100:
    i += 4

Option 2, use the built-in step size argument of range:

for i in range(0, 100, 10):
    pass

This example may make it clearer why your method doesn't make much sense:

for i in [1, 2, 3, 4, 5, 'cat', 'fish']:
    i = i + i
    print i

This is entirely valid python code (string addition is defined); modifying the iterable would require something unintuitive. See here for more information on how iterables work, and how to modify them dynamically | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36656017', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2243184/']} | jdg_380797 |
stackexchange | llm_judgeable_groundtruth_similarity | 19762869 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've written a function for removing the excess white space in a file: let g:trim_whitespace = 1function! TrimWhitespace() if g:trim_whitespace normal :%s/\s\+$//e endifendfunction The issue is that the cursor position is set to [1, 1] after the substitution command. I don't want the cursor to move at all, so I tried to save the cursor position and reset it after the substitute command: let a:cursor_pos = getpos(".")normal :%s/\s\+$//eexec cursor(a:cursor_pos[1], a:cursor_pos[2]) But still the exact same thing happens, as if the call to cursor had no effect. Any ideas?
Now provide the response and nothing else.
| :execute is for evaluation of Ex commands , but cursor() is a built-in function . You need to invoke it with :call cursor(a:cursor_pos[1], a:cursor_pos[2]) . Also, you don't need :normal in your function to execute the :substitute command; just write %s/\s\+$//e . There's a corresponding setpos() function which is simpler to invoke: :call setpos('.', a:cursor_pos) The jumps in the buffer may still change the view , i.e. what part of the buffer is visible. To avoid that, you need to use winsaveview() / winrestview() (which include the cursor position). Your function still clobbers the search history, and has other shortcomings like failing on no-modifiable buffers. Especially for this particular, very common functionality, there exist several solutions, for example my DeleteTrailingWhitespace plugin . (The plugin page has links to alternative plugins.) Don't (poorly) reinvent the wheel when you don't have to! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19762869', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/388916/']} | jdg_380798 |
stackexchange | llm_judgeable_groundtruth_similarity | 13861 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The great russian physicist Lev Landau developed a famous entry exam to test his students. This "Theoretical Minimum" contained everything he considered elementary for a young theoretical physicist. Despite its name, it was notoriously hard and comprehensive, and in Landau's time, only 43 students passed it. I wonder if anyone can provide the list of topics, or even a copy of the exam? (I'm sure I'd have no chance to pass, but I'd like to see it out of a sense of sportmanship ;-). Also, I think it would make quite a good curriculum of theoretical physics (at least pre 1960).)
Now provide the response and nothing else.
| The list of topics can be found here (in Russian, of course). Nowadays students are examined by collaborators of Landau Institute for Theoretical Physics . Each exam, as it was before, consists of problems solving. For every exam there is one or several examiners with whom you are supposed to contact with to inform that you're willing to pass this particular exam (they will make an appointment). Everyone can pass any exam in any order. Today Landau's theoretical minimum (not all 11 exams, but at least 6 of them) is included in the program for students of Department of General and Applied Physics ( Moscow Institute of Physics and Technology ). The program for each exam, as you can see from the link above, corresponds to the contents of volumes in the Course of Theoretical Physics by L&L (usually you have to master almost all paragraphs in the volume to pass the exam). Mathematics I . Integration, ordinary differential equations, vector algebra and tensor analysis. Mechanics . Mechanics, Vol. 1 , except §§ 27, 29, 30, 37, 51 (1988 russian edition) Field theory The Classical Theory of Fields, Vol. 2 , except §§ 50, 54-57, 59-61, 68, 70, 74, 77, 97, 98, 102, 106, 108, 109, 115-119 (1973 russian edition) Mathematics II . The theory of functions of a complex variable, residues, solving equations by means of contour integrals (Laplace's method), the computation of the asymptotics of integrals, special functions (Legendre, Bessel, elliptic, hypergeometric, gamma function) Quantum Mechanics . Quantum Mechanics: Non-Relativistic Theory, Vol. 3 , except §§ 29, 49, 51, 57, 77, 80, 84, 85, 87, 88, 90, 101, 104, 105, 106-110, 114, 138, 152 (1989 russian edition) Quantum electrodynamics . Relativistic Quantum Theory, Vol. 4 , except §§ 9, 14-16, 31, 35, 38-41, 46-48, 51, 52, 55, 57, 66-70, 82, 84, 85, 87, 89 - 91, 95-97, 100, 101, 106-109, 112, 115-144 (1980 russian edition) Statistical Physics I . Statistical Physics, Vol. 5 , except §§ 22, 30, 50, 60, 68, 70, 72, 79, 80, 84, 95, 99, 100, 125-127, 134-141, 150-153 , 155-160 (1976 russian edition) Mechanics of continua . Fluid Mechanics, Vol. 6 , except §§ 11, 13, 14, 21, 23, 25-28, 30-32, 34-48, 53-59, 63, 67-78, 80, 83, 86-88, 90 , 91, 94-141 (1986 russian edition); Theory of Elasticity, Vol. 7 , except §§ 8, 9, 11-21, 25, 27-30, 32-47 (1987 russian edition) Electrodynamics of Continuous Media . Electrodynamics of Continuous Media, Vol. 8 , except §§ 1-5, 9, 15, 16, 18, 25, 28, 34, 35, 42-44, 56, 57, 61-64, 69, 74, 79-81 , 84, 91-112, 123, 126 (1982 russian edition) Statistical Physics II . Statistical Physics, Part 2. Vol. 9 , only §§ 1-5, 7-18, 22-27, 29, 36-40, 43-48, 50, 55-61, 63-65, 69 (1978 russian edition) Physical Kinetics . Physical Kinetics. Vol. 10 , only §§ 1-8, 11, 12, 14, 21, 22, 24, 27-30, 32-34, 41-44, 66-69, 75, 78-82, 86, 101. Some real problems (Quantum Mechanics exam): The electron enters a straight pipe of circular cross section (radius $r$). The tube is bent at a radius $R \gg r$ by the angle $\alpha$ and then is aligned back again. Find the probability that the electron will jump out. A hemisphere lies on an infinite two-dimensional plane. The electron falls on the hemisphere, determine the scattering cross section in the Born approximation. The electron "sits" in the ground state in the cone-shaped "bag" under the influence of gravity. The lower end of the plastic bag is cut with scissors. Find the time for the electron to fall out (in the semi-classical approximation). 
| {} | {'log_upvote_score': 7, 'links': ['https://physics.stackexchange.com/questions/13861', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/825/']} | jdg_380799 |
stackexchange | llm_judgeable_groundtruth_similarity | 31844584 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My app needs to write to a file in \ProgramData that could be protected. This only happens once after installation. Is there an API function that would take ACL info and prompt the user to authorize the app to access the file? In other words the app would ask Windows to prompt the user for confirmation and authorization. This should happen interactively, and allow the app to resume execution after access to the file has been authorized. The app runs as Standard User, does not require Admin privilege. The file gets opened with CreateFile(). Edit: There is a subtle difference between my query and the others said to be duplicates. I am asking for permission to access one specific object, a file. The others are asking to elevate the privileges of the whole process. Anyway, I am grateful for all responses which include a solution.
Now provide the response and nothing else.
| If you don't want to elevate your entire app, you have a few options: spawn a separate elevated process just to access the file. Use ShellExecute/Ex() with the runas verb, or CreateProcessElevated() , to run a second copy of your app, or another helper app, with command-line parameters to tell it what to do. The main process can wait for the second process to exit, if needed. create a COM object to access the file, and then use the COM Elevation Moniker to run the COM object in an elevated state. prompt the user for credentials using CredUIPromptForCredentials() or CredUIPromptForWindowsCredentials() (see Asking the User for Credentials for more details), then logon to the specified account using LogonUser() to get a token, impersonate that token using ImpersonateLoggedOnUser() , access the file as needed, and then stop impersonating using RevertToSelf() and close the token with CloseHandle() . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31844584', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/282901/']} | jdg_380800 |
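For the first option, the same "runas" trick is also easy to drive from a script while prototyping (a sketch via Python's ctypes; the helper script name and file path here are made up for illustration):

import ctypes, sys

def run_elevated(exe, params):
    # ShellExecuteW with the "runas" verb triggers the UAC prompt;
    # the return value is greater than 32 on success
    rc = ctypes.windll.shell32.ShellExecuteW(None, "runas", exe, params, None, 1)
    return rc > 32

if not run_elevated(sys.executable,
                    r'fix_acl_helper.py "C:\ProgramData\MyApp\settings.dat"'):
    print("user declined the UAC prompt or the launch failed")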
stackexchange | llm_judgeable_groundtruth_similarity | 14429703 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am reading various tutorials on the multiprocessing module in Python, and am having trouble understanding why/when to call process.join() . For example, I stumbled across this example:

nums = range(100000)
nprocs = 4

def worker(nums, out_q):
    """ The worker function, invoked in a process. 'nums' is a
        list of numbers to factor. The results are placed in
        a dictionary that's pushed to a queue.
    """
    outdict = {}
    for n in nums:
        outdict[n] = factorize_naive(n)
    out_q.put(outdict)

# Each process will get 'chunksize' nums and a queue to put his out
# dict into
out_q = Queue()
chunksize = int(math.ceil(len(nums) / float(nprocs)))
procs = []

for i in range(nprocs):
    p = multiprocessing.Process(
            target=worker,
            args=(nums[chunksize * i:chunksize * (i + 1)], out_q))
    procs.append(p)
    p.start()

# Collect all results into a single result dict. We know how many dicts
# with results to expect.
resultdict = {}
for i in range(nprocs):
    resultdict.update(out_q.get())

# Wait for all worker processes to finish
for p in procs:
    p.join()

print resultdict

From what I understand, process.join() will block the calling process until the process whose join method was called has completed execution. I also believe that the child processes which have been started in the above code example complete execution upon completing the target function, that is, after they have pushed their results to the out_q . Lastly, I believe that out_q.get() blocks the calling process until there are results to be pulled. Thus, if you consider the code:

resultdict = {}
for i in range(nprocs):
    resultdict.update(out_q.get())

# Wait for all worker processes to finish
for p in procs:
    p.join()

the main process is blocked by the out_q.get() calls until every single worker process has finished pushing its results to the queue. Thus, by the time the main process exits the for loop, each child process should have completed execution, correct? If that is the case, is there any reason for calling the p.join() methods at this point? Haven't all worker processes already finished, so how does that cause the main process to "wait for all worker processes to finish?" I ask mainly because I have seen this in multiple different examples, and I am curious if I have failed to understand something.
Now provide the response and nothing else.
| Try to run this:

import math
import time
from multiprocessing import Queue
import multiprocessing

def factorize_naive(n):
    factors = []
    for div in range(2, int(n**.5)+1):
        while not n % div:
            factors.append(div)
            n //= div
    if n != 1:
        factors.append(n)
    return factors

nums = range(100000)
nprocs = 4

def worker(nums, out_q):
    """ The worker function, invoked in a process. 'nums' is a
        list of numbers to factor. The results are placed in
        a dictionary that's pushed to a queue.
    """
    outdict = {}
    for n in nums:
        outdict[n] = factorize_naive(n)
    out_q.put(outdict)

# Each process will get 'chunksize' nums and a queue to put his out
# dict into
out_q = Queue()
chunksize = int(math.ceil(len(nums) / float(nprocs)))
procs = []

for i in range(nprocs):
    p = multiprocessing.Process(
            target=worker,
            args=(nums[chunksize * i:chunksize * (i + 1)], out_q))
    procs.append(p)
    p.start()

# Collect all results into a single result dict. We know how many dicts
# with results to expect.
resultdict = {}
for i in range(nprocs):
    resultdict.update(out_q.get())

time.sleep(5)

# Wait for all worker processes to finish
for p in procs:
    p.join()

print resultdict

time.sleep(15)

And open the task manager. You should be able to see that the 4 subprocesses go into a zombie state for some seconds before being terminated by the OS (due to the join calls). With more complex situations the child processes could stay in zombie state forever (like the situation you were asking about in another question), and if you create enough child processes you could fill the process table, causing trouble for the OS (which may kill your main process to avoid failures). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14429703', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1356561/']} | jdg_380801 |
stackexchange | llm_judgeable_groundtruth_similarity | 2286805 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Proposed: $$\int_{0}^{\pi/4}{\sqrt{\sin(2x)}\over \cos^2(x)}\mathrm dx=2-\sqrt{2\over \pi}\cdot\Gamma^2\left({3\over 4}\right)\tag1$$ My try: Change $(1)$ to $$\int_{0}^{\pi/4}\sqrt{2\sec^2(x)\tan(x)}\mathrm dx\tag2$$ $$\int_{0}^{\pi/4}\sqrt{2\tan(x)+2\tan^3(x)}\mathrm dx\tag3$$ Not sure what substitution to use How may we prove $(1)?$
Now provide the response and nothing else.
| By substituting $x=\arctan t$ our integral takes the form: $$ I=\int_{0}^{1}\sqrt{\frac{2t}{1+t^2}}\,dt $$and by substituting $\frac{2t}{1+t^2}=u$ we get:$$ I = \int_{0}^{1}\left(-1+\frac{1}{\sqrt{1-u^2}}\right)\frac{du}{u^{3/2}} $$that is straightforward to compute through the substitution $u^2=s$ and Euler's Beta function :$$ I = \frac{1}{2} \left(4+\frac{\sqrt{\pi }\,\Gamma\left(-\frac{1}{4}\right)}{\Gamma\left(\frac{1}{4}\right)}\right).$$The identities $\Gamma(z+1)=z\,\Gamma(z)$ and $\Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin(\pi z)}$ settle OP's $(1)$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2286805', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/343408/']} | jdg_380802 |
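As a quick sanity check on the closed form above, the identity can be verified numerically. The snippet below is only an editorial verification aid (it assumes NumPy and SciPy are available), not part of the derivation.

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Left-hand side: numerical quadrature of the original integral.
lhs, _ = quad(lambda x: np.sqrt(np.sin(2 * x)) / np.cos(x) ** 2, 0, np.pi / 4)

# Right-hand side: the proposed closed form 2 - sqrt(2/pi) * Gamma(3/4)^2.
rhs = 2 - np.sqrt(2 / np.pi) * gamma(0.75) ** 2

print(lhs, rhs)  # both should come out around 0.8019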
stackexchange | llm_judgeable_groundtruth_similarity | 28772143 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Apparently ITuple is internal, disabling a solution such as typeof(ITuple).IsAssignableFrom(type) . By alternative, what is the most effective way to determine Tuple<> till Tuple<,,,,,,,> ? A solution without type name comparison is preferable.
Now provide the response and nothing else.
| Try this:

public static bool IsTupleType(Type type, bool checkBaseTypes = false)
{
    if (type == null)
        throw new ArgumentNullException(nameof(type));

    if (type == typeof(Tuple))
        return true;

    while (type != null)
    {
        if (type.IsGenericType)
        {
            var genType = type.GetGenericTypeDefinition();
            if (genType == typeof(Tuple<>)
                || genType == typeof(Tuple<,>)
                || genType == typeof(Tuple<,,>)
                || genType == typeof(Tuple<,,,>)
                || genType == typeof(Tuple<,,,,>)
                || genType == typeof(Tuple<,,,,,>)
                || genType == typeof(Tuple<,,,,,,>)
                || genType == typeof(Tuple<,,,,,,,>))
                return true;
        }

        if (!checkBaseTypes)
            break;

        type = type.BaseType;
    }

    return false;
}
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28772143', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3422864/']} | jdg_380803 |
stackexchange | llm_judgeable_groundtruth_similarity | 32947807 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to connect to a BLE device (heart rate sensor, Polar H7) on a Raspberry Pi 2. I use the latest version of bluez (5.35) found here: http://www.bluez.org/download/ But when I try to connect using gatttool , I always get a "connection refused" error. Here is what I do:

pi@raspberrypi ~ $ sudo su
root@raspberrypi:/home/pi# hciconfig dev
hci0:   Type: BR/EDR  Bus: USB
        BD Address: 5C:F3:70:69:54:3D  ACL MTU: 1021:8  SCO MTU: 64:1
        DOWN
        RX bytes:616 acl:0 sco:0 events:34 errors:0
        TX bytes:380 acl:0 sco:0 commands:34 errors:0
root@raspberrypi:/home/pi# hciconfig dev up
root@raspberrypi:/home/pi# hcitool lescan
LE Scan ...
00:22:D0:6D:E0:E6 (unknown)
00:22:D0:6D:E0:E6 Polar H7 6DE0E61C
^C
root@raspberrypi:/home/pi# hcitool lecc 00:22:D0:6D:E0:E6
Connection handle 64
root@raspberrypi:/home/pi# gatttool -b 00:22:D0:6D:E0:E6 -I
[00:22:D0:6D:E0:E6][LE]> connect
Attempting to connect to 00:22:D0:6D:E0:E6
Error connect: Connection refused (111)
[00:22:D0:6D:E0:E6][LE]>

I tried to follow this topic: BLE gatttool cannot connect even though device is discoverable with hcitool lescan but it did not work for me.
Now provide the response and nothing else.
| I was able to connect via Bluetooth from the Raspberry Pi to my Android device using gatttool after setting the LE address type to random with the -t random argument, i.e.:

sudo gatttool -t random -b DD:9D:0B:43:A1:77 -I
connect

From the gatttool man page:

-t, --addr-type=[public | random]   Set LE address type. Default: public

USAGE
  gatttool [OPTION...]

Help Options:
  -h, --help                    Show help options
  --help-all                    Show all help options
  --help-gatt                   Show all GATT commands
  --help-params                 Show all Primary Services/Characteristics arguments
  --help-char-read-write        Show all Characteristics Value/Descriptor Read/Write arguments

Application Options:
  -i, --adapter=hciX            Specify local adapter interface
  -b, --device=MAC              Specify remote Bluetooth address
  -t, --addr-type=[public | random]        Set LE address type. Default: public
  -m, --mtu=MTU                 Specify the MTU size
  -p, --psm=PSM                 Specify the PSM for GATT/ATT over BR/EDR
  -l, --sec-level=[low | medium | high]    Set security level. Default: low
  -I, --interactive             Use interactive mode
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32947807', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3910399/']} | jdg_380804 |
stackexchange | llm_judgeable_groundtruth_similarity | 40534 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the difference between SDN (Software Defined Networking) & NFV (Network Function Virtualization)? Wikipedia says NFV relies on standard client-server model. How is this model different than Data plane-Controller plane model? Please explain it with an example if possible.
Now provide the response and nothing else.
| I disagree with the answer provided @Pedro Perez, I think his answer is mixing phrases and ambiguous so I have provided my own interpretation below. SDN: I think it is pretty much what the name says “Software Defined Networking”. This means to me that software is defining the paths that packets take across the network automatically (possibly with some upper bound / lower bound constraints provided by a human operator). It can also readjust paths automatically that are within those predefined constraints. I would consider a deployment that uses an off-box (centralised) control plane such as the OpenDaylight SDN controller with protocols such as Open Flow, BGP-LS or PCEP to be an SDN deployment. The SDN controller is talking to network devices and steering traffic based on live data it receives from the network devices. NFV: Again pretty much what the acronym suggests “Network Function Virtualisation”. This to me is about virtualising network functions – not how devices forward packets and make decisions (which is covered by SDN) but at the device level how one can virtualise and containerise network functions so that they may be rapidly deployed, re-replayed, migrated, decommissioned etc. To clarify a network function here would be say NAT, CG-NAT, ACLs, QoS, tunnelling, DNS or web filtering, load-balancing etc. I would consider a deployment that uses standard computing hardware (such as x86 servers for example) to run virtual machines or containers that are routers, switches, firewalls load balancers etc. to be an NFV deployment. With NFV one can chain multiple virtual functions together to form a service chain. For example running multiples virtual machines on a single x86 box that forms a pipeline and a packet must pass through the entire VM pipeline; the first VM could be a virtual switch and faces an office LAN, if the packet is destine to the Internet it could hand over to a virtual firewall, if the packet passes the firewall rules it can hand over to a virtual router, which can then send the packet out of a physical WAN link. NSO/LSO: “Network Service Orchestration” or “Lifecycle Service Orchestration” have an overlapping definition in my opinion regarding the (usually) automated process of applying changes to network devices, configuration and infrastructure and managing the infrastructure such as deployment, upgrades, decommissioning etc. I am bringing this up because a NFV based network can be completely deployed by hand; you might be using KVM on Linux and you SSH in to each KVM machine and start up virtual router images from the likes of Juniper and Cisco by hand, on the CLI using virsh for example. It could also be VMware using ESXi and again you use the web console to do everything (these are just common examples). You are still virtualising network functions and potentially chaining them together if required, just slowly and potentially erroneously by hand. With SDN one could be using BGP-LS with an off box controller to automatically steer traffic flows around the network which could be built using physical routers (the traditional infrastructure build method), but the controller could also be controlling traffic flows on virtual routers (an NFV based infrastructure), SDN is agnostic of whether the devices are physical or virtual or just two bean cans and some string and an API layer above them. 
NSO/LSO approaches to network operations have existing for years, but not with the formal guidance that they have now (such as through the MEF LSO (Metro Ethernet Forum) standards). In its most basic form, engineers have been writing scripts that telnet or SSH to a device and apply commands via the CLI. If one expands that basic example a bit further to have scripts that automatically apply bits of config to devices vi the CLI, TFTP boot new firmware versions on devices, generate config files from templates, which is all stuff people have been doing for years, then we have an informal NSO/LSO platform. I hope that has made it clear that NSO/LSO is different from NFV and SDN, they are three concepts that can all exist without the other two or be used in any combination with the other two. Side note about network overlays, virtual networking and network virtualisation: @Pedro Perez mentioned: Completely abstract the physical network and use it as a resource pool for virtual (overlay?) networking so to speak. Virtual networking and network virtualisation are two different things in my opinion. Firstly to take a step back, there are some protocols that provide what I would call a network overlay such as VXLAN, NVGRE and STT (which in turn use transport protocols like MPLS, or IP in IP etc.). Those overlay protocols alone though don’t provide a virtualised network, they transport traffic across the network, but we also need support for network functions or services within the overlay network (such as NAT, QoS, ACLs, maybe traffic steering like TE etc.), also some amount of administration and management is required, reporting and quality control and so on. Once we have all of that then I would say we have a virtual network offering (“virtual networking”). We consider virtual servers to be servers that run on top of physical servers (let’s forget about nested virtualisation/nested hypervisors for the moment!). Above I defined what I consider a virtual network which runs over a “real” network which would have traditionally been physical tin. One example could be that as an ISP one can deliver a virtual network to a customer in the form of MPLS L3 VPNs (L3 IP VPNs using VRFs/RTs/RDs as the overlay model, MPLS for the transport protocol, MP-BGPv4 for the control plane and MPLS OAM for some operational management). This would run over a physical network that supports MPLS as a transport protocol. For the definition of network virtualisation, I would refer back to the NFV section above. So to go full circle; in this ISP example, if our customer offing is a virtual networking and if we run our backbone network as an NFV based deployment, then I would say we are offering virtual networking over our network virtualisation deployment. Clear as mud. | {} | {'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/40534', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/14440/']} | jdg_380805 |
stackexchange | llm_judgeable_groundtruth_similarity | 186339 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Apologies for the vague title and soft question. According to Etingof, Igor Frenkel once suggested that there are three "levels" to Lie theory, which I guess could be given the following names: No loops: here we study a simple Lie algebra $\mathfrak{g}$, a Weyl group, a braid group, or a Hecke algebra, all of which have something to do with a Lie group $G$. One loop: here we study an affine Lie algebra $\widehat{\mathfrak{g}}$, a quantum group $U_q(\mathfrak{g})$, an affine Weyl group, an affine braid group, or an affine Hecke algebra, all of which I think have something to do with the loop group $LG$ of $G$. Two loops: here we study a double affine Lie algebra $\widehat{\widehat{\mathfrak{g}}}$, an affine quantum group $U_q(\widehat{\mathfrak{g}})$, an elliptic quantum group ( whatever that means ), a double affine or elliptic Weyl group, a double affine or elliptic braid group, or a double affine or elliptic Hecke algebra, all of which I think have something to do with the double loop group of $G$, or maybe more precisely the space of maps of some sort from an elliptic curve $E$ to $G$. The suggestion is further that this pattern doesn't continue. Why doesn't this pattern continue? I asked around and got an answer that I interpreted as follows. The trichotomy above can be matched up to the trichotomy additive group $\mathbb{C}$ : multiplicative group $\mathbb{C}^{\times}$ : elliptic curve $E$. Here is one story about the match as I understand it, which is not very well. One-dimensional algebraic groups give rise to equivariant cohomology theories. The above theories give rise to equivariant cohomology, equivariant K-theory, and equivariant elliptic cohomology respectively. Roughly, $\text{Spec } H^{\bullet}_G(\text{pt}) \cong \mathfrak{g}/G \cong \text{Bun}_G(\mathbb{C})$, while $\text{Spec } K^{\bullet}_G(\text{pt}) \cong G/G \cong \text{Bun}_G(\mathbb{C}^{\times})$, and $\text{Spec } E^{\bullet}_G(\text{pt}) \cong \text{Bun}_G(E)$, where by $E^{\bullet}_G$ I mean the equivariant elliptic cohomology theory associated to the elliptic curve $E$. There is some yoga in geometric representation theory which I'm not all that familiar with involving building interesting algebras like group algebras of Weyl groups and Hecke algebras by computing the equivariant (co)homology or equivariant K-(co)homology of some varieties of interest, which has something to do with the construction of the affine and double affine Hecke algebras mentioned above. Since we've run out of one-dimensional algebraic groups, that would be some reason to believe that the pattern doesn't continue. But nevertheless I don't have a good sense of what, if anything, prevents us from studying and saying interesting things about "triple affine Lie algebras," "triple affine Weyl groups," "triple affine Hecke algebras," etc. at least insofar as the triple loop group of a group seems perfectly well-defined. On the geometric side it seems like there's nothing stopping us studying $G$-bundles on higher dimensional varieties. On the cohomological side, cohomology, K-theory, and elliptic cohomology should optimistically just be the first three terms of an entire sequence of cohomology theories at higher chromatic levels, or from the perspective of the Stolz-Teichner program, defined in terms of higher-dimensional field theories...
Now provide the response and nothing else.
| To elaborate on Kevin's excellent answer, one can account for the current absence of "higher loop" representation theory using physics. Namely, all of the representation theoretic structures you mention fit in very naturally into the study of gauge theory, specifically 4-dimensional $\mathcal N=2$ gauge theories. These come in two main classes (with some intersection) - the quiver gauge theories, which are the natural homes for algebras like Yangians, quantum loop algebras, and elliptic quantum groups; and the class S theories (reductions of the 6d "theory $\mathfrak X$" -- the (2,0) superconformal field theory labeled by a Dynikin diagram - on Riemann surfaces), which are the natural home for geometric Langlands, double affine Hecke algebras, Khovanov homology etc. (the theory Kevin describes associated to $U_q(\mathfrak g)$ is $\mathcal N=4$ super Yang Mills, which is the case when the Riemann surface is the two-torus). So why should this be relevant? the question of attaching interesting representation theory to maps into Lie groups is very closely linked to the question of finding interesting gauge theories in higher dimensions (the latter is strictly stronger but seems like the most natural framework we have for such questions). Specifically, we want supersymmetric gauge theories, if we want them to have any relation to topological field theory or algebraic geometry etc. However there are no-go theorems for finding gauge theories in higher dimensions. Even at the classical level it is impossible (thansk to Lie theory, namely the structure of spin representations) to have a supersymmetric gauge theory in more than 10 dimensions ---- any SUSY theory in dimensions above ten also includes fields of spin two and above (so physically is a theory of gravity), while above dimension 11 we have to have higher spin fields still (which physicists tell us doesn't make sense -- regardless it won't be a gauge theory). In any case theories with gravity and other stuff are a very far stretch to be called representation theories! At the quantum level (which is what we need for representation theory) it's much harder still -- I believe there are no UV complete quantum gauge theories above dimension 4 (in other words higher dimensional theories have to have "other nonperturbative stuff in them").All of the representation theoretic structures you mention naturally fit into theories that come from six dimensions at best (reduced to 4 dimensions along a plane, cylinder or torus in the quiver gauge theory case to see Yangians, quantum affine algebras and elliptic quantum groups, or along a Riemann surface in the class S case). Studying in particular theory $\mathfrak X$ on various reductions gives a huge amount of structure, and includes things like ``triple affine Hecke algebras" presumably when reduced on a three-torus, but there's a clear upper bound to the complexity you'll get from these considerations. Now of course you might ask what about theories that don't come from supersymmetric gauge theory? the only interesting source I've heard of for higher dimensional topological field theories is (as you hint) chromatic homotopy theory, in particular the fascianting work of Hopkins and Lurie on ambidexterity in the$K(n)$-local category. 
This is a natural place to look for "higher representation theory", which is touched on I believe in lectures of Lurie -- but my naive impression is these theories will have a very different flavor than the representation theory you refer to (in particular a fixed prime will be involved, and these theories certainly don't feel like traditional quantum field theory!). But it's a fascinating future direction. For a hint of what kind of representation theory this leads to we have the theorem of Hopkins-Kuhn-Ravenel describing the $n$-th Morava K-theory of BG in terms of n-tuples of commuting elements in G --- i.e. the kind of characters you might expect for G-actions on $(n-1)$-categories. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/186339', 'https://mathoverflow.net', 'https://mathoverflow.net/users/290/']} | jdg_380806 |
stackexchange | llm_judgeable_groundtruth_similarity | 19067027 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do i get all info windows to close upon clikcing another pin or clicking the map in itself? I'm using http://google-maps-utility-library-v3.googlecode.com/svn/trunk/infobox/docs/reference.html and a kml overlay.heres my JS so far: jQuery(document).ready(function ($) { function initialize() { google.maps.visualRefresh = true; var myLatlng = new google.maps.LatLng(51.201465, -0.30244); var mapOptions = { zoom: 12, center: myLatlng, mapTypeId: google.maps.MapTypeId.ROADMAP }; var map = new google.maps.Map(document.getElementById('map-canvas'), mapOptions); var kmlLayer = new google.maps.KmlLayer({ url: 'http://***.com/new/wp-content/themes/required-starter/CGAGolfAcademies.kml?rand=' + (new Date()).valueOf(), suppressInfoWindows: true, map: map }); google.maps.event.addListener(kmlLayer, 'click', function (kmlEvent) { showInContentWindow(kmlEvent.latLng, kmlEvent.featureData.description); }); function showInContentWindow(position, text) { var content = "<div class='info_win'><p>" + text + "</p></div>"; var infowindow =new InfoBox({ content: content, disableAutoPan: false, maxWidth: 0, position: position, pixelOffset: new google.maps.Size(-140, 0), zIndex: null, boxStyle: { background: "#FBFBFB" ,opacity: 0.90 ,width: "280px" }, closeBoxMargin: "10px 2px 2px 2px", closeBoxURL: "http://www.google.com/intl/en_us/mapfiles/close.gif", infoBoxClearance: new google.maps.Size(1, 1), isHidden: false, pane: "floatPane", enableEventPropagation: false }); infowindow.open(map); } /******AJAX MAP ****/ siteURL = 'http://' + top.location.host.toString(); coachesLinks = jQuery('.info_win a'); coachesLinks.click(function (e) { e.preventDefault(); }); coachesLinks.click(function (e) { alert('FRED'); $el = jQuery(this); URL = $el.attr('href'); shareurl = $el.attr('href'); URL = URL + " .main"; jQuery('#content_two').show('slow').load(URL, function () { scrollToAnchor('content_two'); $('.main').css('overflow', 'visible'); $('#content_two').css('overflow', 'visible'); jQuery('#content_two .back').on('click', function () { jQuery(this).hide('slow'); var contentTwo = jQuery('#content_two'); if (contentTwo.is(':hidden')) { jQuery('#content_two .back').hide(); } else { contentTwo.hide('slow'); jQuery('#content > .main').show('slow'); jQuery('#content > .main').css('overflow', 'visible'); scrollToAnchor('access'); } }); }); $('#content > .main').hide('slow'); }); } google.maps.event.addDomListener(window, 'load', initialize);});
Now provide the response and nothing else.
| As you see in the API docs , an InfoBox has a close() method. Collect all the InfoBoxes you create in an array. Then iterate over this array and call close for each infobox when you need to close them all at once. At the top, add an array to hold all the infoboxes created:

jQuery(document).ready(function ($) {
    var infoWindows = [];

In function showInContentWindow add the following right after var infowindow = new .. , e.g. just before infowindow.open :

//add infowindow to array
infoWindows.push(infowindow);

Add this function:

function closeAllInfoWindows() {
    for (var i = 0; i < infoWindows.length; i++) {
        infoWindows[i].close();
    }
}

Here it is called by a link:

<a href="#" onclick="closeAllInfoWindows();">Close all infowindows</a> | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/19067027', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1754467/']} | jdg_380807 |
stackexchange | llm_judgeable_groundtruth_similarity | 12427097 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Provided items is the result of a LINQ expression:

var items = from item in ItemsSource.RetrieveItems()
            where ...

Suppose generation of each item takes some non-negligible time. Two modes of operation are possible:

Using foreach would allow us to start working with items at the beginning of the collection much sooner than those at the end become available. However, if we wanted to process the same collection again later, we'd have to copy and save it:

var storedItems = new List<Item>();
foreach(var item in items)
{
    Process(item);
    storedItems.Add(item);
}

// Later
foreach(var item in storedItems)
{
    ProcessMore(item);
}

Because if we'd just made foreach(... in items) then ItemsSource.RetrieveItems() would get called again. We could use .ToList() right upfront, but that would force us to wait for the last item to be retrieved before we could start processing the first one. Question : Is there an IEnumerable implementation that would iterate the first time like a regular LINQ query result, but would materialize in the process so that a second foreach would iterate over the stored values?
Now provide the response and nothing else.
| A fun challenge so I have to provide my own solution. So fun in fact that my solution now is in version 3. Version 2 was a simplification I made based on feedback from Servy. I then realized that my solution had a huge drawback: if the first enumeration of the cached enumerable didn't complete, no caching would be done. Many LINQ extensions like First and Take will only enumerate enough of the enumerable to get the job done, and I had to update to version 3 to make this work with caching. The question is about subsequent enumerations of the enumerable, which does not involve concurrent access. Nevertheless I have decided to make my solution thread safe. It adds some complexity and a bit of overhead, but should allow the solution to be used in all scenarios.

public static class EnumerableExtensions {

  public static IEnumerable<T> Cached<T>(this IEnumerable<T> source) {
    if (source == null)
      throw new ArgumentNullException("source");
    return new CachedEnumerable<T>(source);
  }

}

class CachedEnumerable<T> : IEnumerable<T> {

  readonly Object gate = new Object();

  readonly IEnumerable<T> source;

  readonly List<T> cache = new List<T>();

  IEnumerator<T> enumerator;

  bool isCacheComplete;

  public CachedEnumerable(IEnumerable<T> source) {
    this.source = source;
  }

  public IEnumerator<T> GetEnumerator() {
    lock (this.gate) {
      if (this.isCacheComplete)
        return this.cache.GetEnumerator();
      if (this.enumerator == null)
        this.enumerator = source.GetEnumerator();
    }
    return GetCacheBuildingEnumerator();
  }

  public IEnumerator<T> GetCacheBuildingEnumerator() {
    var index = 0;
    T item;
    while (TryGetItem(index, out item)) {
      yield return item;
      index += 1;
    }
  }

  bool TryGetItem(Int32 index, out T item) {
    lock (this.gate) {
      if (!IsItemInCache(index)) {
        // The iteration may have completed while waiting for the lock.
        if (this.isCacheComplete) {
          item = default(T);
          return false;
        }
        if (!this.enumerator.MoveNext()) {
          item = default(T);
          this.isCacheComplete = true;
          this.enumerator.Dispose();
          return false;
        }
        this.cache.Add(this.enumerator.Current);
      }
      item = this.cache[index];
      return true;
    }
  }

  bool IsItemInCache(Int32 index) {
    return index < this.cache.Count;
  }

  IEnumerator IEnumerable.GetEnumerator() {
    return GetEnumerator();
  }

}

The extension is used like this ( sequence is an IEnumerable<T> ):

var cachedSequence = sequence.Cached();

// Pulling 2 items from the sequence.
foreach (var item in cachedSequence.Take(2))
  // ...

// Pulling 2 items from the cache and the rest from the source.
foreach (var item in cachedSequence)
  // ...

// Pulling all items from the cache.
foreach (var item in cachedSequence)
  // ...

There is a slight leak if only part of the enumerable is enumerated (e.g. cachedSequence.Take(2).ToList() ). The enumerator that is used by ToList will be disposed, but the underlying source enumerator is not disposed. This is because the first 2 items are cached and the source enumerator is kept alive should requests for subsequent items be made. In that case the source enumerator is only cleaned up when eligible for garbage collection (which will be the same time as the possibly large cache).
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12427097', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1012739/']} | jdg_380808 |
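For comparison, the same caching-iterator idea is easy to sketch in Python. This is an editorial aside, not part of the original C# answer: the class name CachedIterable is made up for illustration, and unlike the C# version above it is not thread safe.

class CachedIterable:
    """Wraps an iterable; items are pulled lazily on first pass and replayed from a cache afterwards."""

    def __init__(self, source):
        self._source = iter(source)
        self._cache = []

    def __iter__(self):
        index = 0
        while True:
            if index < len(self._cache):
                # Replay an item that was already pulled from the source.
                yield self._cache[index]
            else:
                try:
                    item = next(self._source)
                except StopIteration:
                    return
                self._cache.append(item)
                yield item
            index += 1

items = CachedIterable(x * x for x in range(5))
print(list(items))  # pulls from the generator and fills the cache
print(list(items))  # served entirely from the cache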
stackexchange | llm_judgeable_groundtruth_similarity | 141350 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When would a human outperform a scripting tool when testing for vulnerabilities? For example, would a human ever find something that SQLmap wouldn't? In what cases would it be preferable/are there any examples? For example consider the url: "fakeurl.com/vuln.php?id=4" A human could try: "fakeurl.com/vuln.php?id='" and see if an error message pops up, but in that time he could just run sqlmap -u fakeurl.com/vuln.php?id=4 --batch --dump-all and find everything so much faster. I know a lot of people like to look down on script kiddies, but for practical, professional purposes it's best to just do things fast and properly. It seems to me that working by hand is less efficient and more prone to errors. Can someone give me a case (technical or a historic example) where a human found something a tool wouldn't have?
Now provide the response and nothing else.
| Most access control / authorization flaws would never be found by a (generic) tool, because it does not have the understanding of what is supposed to be accessible and what not. (Having said that, experienced pentesters probably know that quite a lot of applications also don't have this documented...) So that is an example of a whole class of problems. Any logic flaw (for example a user being able to create another user with more privileges by design) would also not be picked up by an automated tool. Any chain of vulnerabilities would not be correlated, like for instance how to use a low risk info leak, together with a medium risk DOM XSS together with another low risk vulnerability to change a user's password to anything the attacker wants (this is an actual example I have seen). Testing DOM XSS with a plain (usual) scanner like most of those commercial tools is quite difficult as they don't have a Javascript runtime, so they will miss most of it. Also for things that are testable, a tool may have several patterns to try, but will probably miss more complex cases. For example what if an application had a blacklist filter for XSS that explicitly blocks alert(1) and all of the attack vectors from the tool had that as the payload? Getting through a blacklist filter is almost always possible, but is very hard for an automated tool. Or consider DoS. How would an automated tool find that? For a final example, what about buffer overflow in an uploaded and processed file, say a user uploads an image which gets resized by a server side library, vulnerable to bof. How could the automated tool know that's the case and how would it create an exploit for it? These are just a bunch of examples, I'm sure others will cite a lot more. In short, there are whole classes of vulnerabilities that cannot be tested for in the generic case even for vulnerabilities that are automatically testable, it is pretty much impossible to write comprehensive tests (a set of tests that finds all instances of the vulnerability). Of course with all of this in mind, I think there is nothing wrong with using tools to make several things quicker. However, any result from a tool needs to be reproduced by the tester, and also he must be aware of the limitations of the tool to be able to augment results with more creative attacks. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/141350', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/129157/']} | jdg_380809 |
stackexchange | llm_judgeable_groundtruth_similarity | 67254 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Alright so I am a little confused on how to proceed with this problem, I am supposed to find the derivative of $y= \sqrt {1+2e^{3x}}$ so I set the problem up like this $y=u^{1/2}$ $ y = 1+2e^{3x}$ but then with that I have to use the chain rule again I believe on the term $2e^{3x}$ so I am not sure how to proceed from here, so I need to use the product rule on 2 and $e^{3x}$ or do I just consider 2e as one term since it is so simple? I can't remember having to use the chain rule on just a single integer like 2 before.
Now provide the response and nothing else.
| Think of applying the Chain Rule as an analogous process to opening a Matryoshka Doll . You cannot open all the dolls at the same time; you open the outermost doll, take out the next one. Then you open the second doll, and take out the third. Then you open the third, and take out the fourth, etc. With
$$y = \sqrt{1+2e^{3x}} = (1+2e^{3x})^{1/2}$$
the "outermost doll" is the square root. To "open it", we compute the derivative. Since $\frac{d}{du}(u^{1/2}) = \frac{1}{2}u^{-1/2}$, we take the derivative and then multiply by "the next doll":
$$y' = \frac{1}{2}(1+2e^{3x})^{-1/2}(1+2e^{3x})'.$$
Then we need to take the derivative of $1+2e^{3x}$. By the sum rule, this is the derivative of $1$ plus the derivative of $2e^{3x}$:
$$y' = \frac{1}{2}(1+2e^{3x})^{-1/2}\left( (1)' + (2e^{3x})'\right).$$
The derivative of the constant $1$ is $0$. The derivative of $2e^{3x}$ is $(2e^{3x})' = 2(e^{3x})'$:
$$y' = \frac{1}{2}(1+2e^{3x})^{-1/2}\left( 0 + 2(e^{3x})'\right).$$
Finally, the derivative of $e^{3x}$. This is itself an application of the Chain Rule. We have
$$\frac{d}{du}e^{u} = e^u,$$
so
$$\frac{d}{dx}e^u = e^u\frac{du}{dx}.$$
That is:
$$y' = \frac{1}{2}(1+2e^{3x})^{-1/2}\left( 0 + 2\left( e^{3x}(3x)'\right)\right).$$
And $(3x)' = 3$, so in the end we have:
$$\begin{align*}
y'&= \frac{1}{2}(1+2e^{3x})^{-1/2}\left( 0 + 2\left(e^{3x}(3)\right)\right) \\
&= \frac{1}{2}(1+2e^{3x})^{-1/2}\left(6e^{3x}\right) \\
&= 3e^{3x}(1+2e^{3x})^{-1/2}\\
&= \frac{3e^{3x}}{\sqrt{1+2e^{3x}}}.
\end{align*}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/67254', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_380810 |
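If you want to double-check the result by machine, a short SymPy computation reproduces it. This is only an editorial verification aid (it assumes SymPy is installed), not part of the hand derivation above.

import sympy as sp

x = sp.symbols('x')
y = sp.sqrt(1 + 2 * sp.exp(3 * x))

# SymPy applies the chain rule automatically; the simplified result is
# 3*exp(3*x)/sqrt(2*exp(3*x) + 1), matching the answer above.
print(sp.simplify(sp.diff(y, x)))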
stackexchange | llm_judgeable_groundtruth_similarity | 686 |
Below is a question asked on the forum engineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a chicken coop I plan on modifying to automatically open and close the door at dawn and dusk. What sort of actuator or mechanism would be appropriate for operating the small, side-hinged door? Constant power is not available (solar) plus it needs enough holding force to keep foxes out.
Now provide the response and nothing else.
| Rod & Nut drive A threaded rod and captive nut, with either rod or nut driven rotationally by a motor is liable to offer a good solution. Because: Power level is set by thread pitch and attachment point to the door. While "the bigger the better" always helps, almost any "sensible" size of motor should be able to be used. I say "sensible" to eliminate utterly tiny motors such as pager vibrator motors. But usually anything in the 100 milliWatt to 100 Watt range COULD be used. Lower wattage requires longer time. The threaded rod provides positive locking at zero power with no prospect of "overhauling". As long as the door will not flex under Foxy assault when pulled solidly into its frame then dinner is off. Level of travel can be set to suit by length of rod and mounting. Any door size up to a full dometic house door could be handled in this manner - so a small coop door is well within capability. A cordless drill motor + gearbox is liable to be an excellent drive unit. These can usually be operated from 12V (9V to 18V units) and usually have a two stage reduction gearbox. They usually have a reversing switch which is not useful in this context if remote operation is required. To use, dismantle the drill, bypass or remove the switch and feed voltage to the two motor wires directly. Reversing the polarity reverses direction. Here is a Barn-door star tracker which illustrates the principle well. Their calculations page is here - overkill in this case but potentially useful. A zillion versions of how you might do this - each image links to a webpage RF Link: As a bonus, a radio link using 2 x Arduinos and 2 RF modules, with a range of 10's of metres to a few kilometres can be constructed for about $10 in components all up. Ask if interested. This also applies to the wiper based system below. Wiper motor & mechanism A possible solution depending on power availability is a wind screen wiper motor and mechanism. These are made to sweep a wiper arm across a sometimes dirty windscreen with considerable drag force. Units made for truck use are substantially more powerful. A typical automotive unit is rated at 50-100 Watts at 12V but will operate at lower voltage with reduced power. I have some Indian made truck wiper units rated at about 300 Watts! A 12V motor can be operated from a very small lead acid battery - say 12 V x 1.2 Ah. These can be charged by solar power. The battery should be maintained at a constant 13.7 Volts. You can obtain dedicated regulators for this purpose - PB137 is similar to the standard LM317 but rated at 13.7 Volts out. PB137 in stock Digikey $1.27 in 1's PB137 Data Sheet Note that a wiper mechanism is liable to have substantial backlash and depending on which way the door swings, may allow entry by pressing on the door. If the door is external and swings closed into a frame then it is likely to be Fox proof (for many values of Fox). An electrically latching mechanism could be added. Wiper mechanisms often have auto-stop points using internal switches. These usually allow stopping only at one end of travel. A similar arrangement can be implemented with microswitches and diodes. A switch is arranged to break current when it is operated during motor operation but the switch has an is opposed diode connected across it. Initially the diode has no effect and the motor stops. When polarity is reversed the diode conducts and moves the mechanism out of the 'dead spot'. 
Be sure that switches used are rated for the desired current in DC motor operation - this is more demanding than AC operation at the same current. I have implemented systems using rod & nut & cordless drills, and also wiper motor & mechanisms. Wiper based is easiest if it meets the needs. | {} | {'log_upvote_score': 4, 'links': ['https://engineering.stackexchange.com/questions/686', 'https://engineering.stackexchange.com', 'https://engineering.stackexchange.com/users/557/']} | jdg_380811 |
stackexchange | llm_judgeable_groundtruth_similarity | 24408068 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there an equivalent syntax or technique for Anonymous class in Swift?Just for clarification Anonymous class in Java example here - http://docs.oracle.com/javase/tutorial/java/javaOO/anonymousclasses.html Thanks!
Now provide the response and nothing else.
| There is no equivalent syntax, as far as I know. Regarding equivalent techniques, theoretically you could use closures and define structs and classes inside them. Sadly, I can't get this to work in a playground or project without making it crash. Most likely this isn't ready to be used in the current beta. Something like...

protocol SomeProtocol {
    func hello()
}

let closure : () -> () = {
    class NotSoAnonymousClass : SomeProtocol {
        func hello() {
            println("Hello")
        }
    }
    let object = NotSoAnonymousClass()
    object.hello()
}

...currently outputs this error:

invalid linkage type for global declaration
%swift.full_heapmetadata* @_TMdCFIv4Test7closureFT_T_iU_FT_T_L_19NotSoAnonymousClass
LLVM ERROR: Broken module found, compilation aborted!
Command /Applications/Xcode6-Beta.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swift failed with exit code 1
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24408068', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1104432/']} | jdg_380812 |
stackexchange | llm_judgeable_groundtruth_similarity | 342374 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am a Python programmer primarily who uses pylint for linting source code. I am able to eliminate all of the warnings except one: Invalid name for a constant. Changing the name to all caps fixes it, but am I really supposed to do that? If I do it, I find that my code looks ugly as most of the variables are constant (according to pylint).
Now provide the response and nothing else.
| You are probably writing code like this:

notes_director = argv[1]
chdir(notes_director)
files = glob('*.txt')
rand_file = choice(files)
with open(rand_file) as notes_file:
    points = notes_file.readlines()
    rand_point = choice(points)

You should move this code into a function:

def main():
    notes_director = argv[1]
    chdir(notes_director)
    files = glob('*.txt')
    rand_file = choice(files)
    with open(rand_file) as notes_file:
        points = notes_file.readlines()
        rand_point = choice(points)

# actually call the main function
main()

Pylint assumes that code that actually does the work will be inside a function. Because you have this code at the top level of your code instead of inside a function, it gets confused. Generally speaking, it is better style to do work inside a function instead of at the top level. This allows you to better organize what you are doing and facilitates reusing it. You should really only have code performing an algorithm outside of a function in a quick and dirty script. | {} | {'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/342374', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/262852/']} | jdg_380813 |
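A common refinement of the pattern in the answer above (my addition, not the original author's) is to guard the call, so the module can also be imported without immediately running the work:

def main():
    # ... same body as in the answer above ...
    pass

if __name__ == "__main__":
    # Only run the work when the file is executed as a script,
    # not when it is imported as a module.
    main()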
stackexchange | llm_judgeable_groundtruth_similarity | 4553071 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Find all functions $f:\mathbb{R}\rightarrow\mathbb{R}$ for two real numbers $x$ and $y$ where $f(xf(y)+f(x)+y)=xy+f(x)+f(y)$ For $x=0$ and $y=-f(0)$ then $f(-f(0))=0$ . So, there is a real root $r_0$ for function $f$ . For $x=r_0$ and $y=r_0$ we have $r_0^2=0$ , so $f(0)=0$ and zero root is unique. Please help me to complete the proof.
Now provide the response and nothing else.
| Introduction I'm not really an expert at these functional equations, but it seems that I'm on a bit of a streak (two of my previous three answers come from this tag), and as everybody knows: being in flow is a wonderful thing. The first thing I did was check Approach0 and the likes for similar questions. I didn't find the exact question anywhere, but I must admit that I found some really similar questions. However, similarity can easily count for nothing, especially with olympiad problems. That's why I'm not going to bother citing those similar problems. I will definitely try and explain my thought process every step of the way so that others can also understand how to attack similar problems. I will also, briefly, step down a blind alley to illustrate a useful-looking method that I believe will not work for this scenario. First thoughts and notation When I hit this problem for the first time, I made some observations that I have here on record. Let $P(x,y)$ be the assertion $$f(xf(y)+f(x)+y) = xy+f(x)+f(y)$$ for $x,y \in \mathbb R$ . I observed that The right-hand side is symmetric in $x,y$ . The left hand side is $f(\cdot)$ for some $\cdot$ . Therefore, every RHS produced by $P(x,y)$ must be in the range of $f$ . On $f(0)$ and the range of $f$ Then I began with the obvious substitutions $$P(x,0) : f(xf(0)+f(x)) = f(x)+f(0)$$ $$P(0,y) : f(f(0)+y) = f(0)+f(y)$$ $$P(x,-f(x)) : f(xf(-f(x))) = -xf(x) + f(x) - f(-f(x))$$ The third one is useful : using $x=0$ , $P(0,-f(0))$ gives $f(-f(0)) = 0$ following cancellations. On the uniqueness of $f(y)=0$ , and the range of $f$ As the OP mentioned, $0$ has the unique preimage $0$ . Indeed, if $f(y)=0$ then $P(y,y)$ gives $y^2=0$ so that $y=0$ . We derived earlier that $f(-f(0)) = 0$ . Therefore, it follows that $-f(0)=0$ , hence $f(0)=0$ and $0$ is the unique such value by the previous paragraph. Once we get this, look back at $P(x,0)$ and observe that $f(0)=0$ to see that $f(f(x))=f(x)$ for all $x$ . The range of $f$ , and a blind alley We saw that $f(f(x)) = f(x)$ is true for all $x$ . We can be tempted to show that $f$ is surjective now. In this direction, I'll explain what I did, because the commenters above, who very eagerly attempted to solve this question, will be happy to know that I followed them and treated them like my gurus. Let $x,y$ be arbitrary. Start with $P(f(x),f(y))$ and use $f(f(x))$ to get $$f(f(x)f(y)+f(x)+f(y)) = f(x)f(y)+f(x)+f(y)$$ That is, we've proven that under the function $g(x,y)=xy+x+y$ , the range of $f$ is closed. This can be used to derive various corollaries from the comments. Another observation can be made by plugging in $P(-1,-1)$ which gives $f(-1)=-1$ . ( THIS will be truly crucial ) One can also try to prove the following : $f(f(x)^2)= f(x)^2$ for all $x$ , and $f(-f(x)) = -f(x)$ for all $x$ . Thus, we've shown that the range is closed under various operations. However, this is actually a blind alley. While I cannot assert it with utmost confidence, observe that One can't attempt substitutions with rational numbers in general: try $f(\frac 12)$ , for example, and see where you get. Or maybe try setting $y = \frac 1x$ , and you'll have a problem. It seems that you can't get out of the integers. Even if you get to the rational numbers, there's no way to place any monotonicity conditions on $f$ , or continuity conditions, which allow you to move from the rational numbers to the real numbers. Therefore, attempts to make this work are likely to be futile, though I absolutely, absolutely invite everyone to try. 
EDIT: It seems that Sil, in the comments, has found a way out using a clever substitution! I'll have a go and change everything I've said above if it works out. Hitting a right approach A right approach , in this case, comes from observing that if we expect, in light of everything we've said before, that $f(x)=x$ for $x \in \mathbb R$ , then the function $g(x) = f(x)-x$ should be identically zero for $x \in \mathbb R$ . However, when you form the functional equation for $g(x)$ , something that one may not observe in the original equation manifests itself, and we see niceties occur. Let's do it : let $g(x) = f(x)-x$ for $x \in \mathbb R$ . Start with $P(x,y)$ and begin to substitute $g$ for $f$ everywhere, knowing that $f(z) = g(z)+z$ . \begin{align}f(xf(y)+f(x)+y) &= xy+f(x)+f(y) \\\implies g(xf(y)+f(x)+y) + xf(y)+f(x)+y &= xy+g(x)+x+g(y)+y \\\implies g(xg(y)+xy+g(x)+x+y)+xg(y)+xy+g(x)+x+y &= xy+g(x)+x+g(y)+y \\\implies g(xg(y)+xy+g(x)+x+y)+xg(y) &= g(y) \\\implies g(xg(y)+xy+g(x)+x+y) &= (1-x)g(y)\end{align} We make a crucial observation about $g$ now. Let's call the last identity in the chain above $P^*(x,y)$ . Claim : if there is a $y_0$ such that $g(y_0) \neq 0$ , then $g$ is surjective . Proof : If $g(y_0) \neq 0$ , then $g(y_0)$ belongs in the range of $g$ obviously. Let $T \in \mathbb R$ be arbitrary and let $x_0 = 1-\frac{T}{g(y_0)}$ . Then, $(1-x_0)g(y_0)=T$ . By observing $P^*(x_0,y_0)$ , one sees that $T$ is in the range of $g$ , as desired. We will show that $g$ cannot be surjective in the next section, creating the contradiction. An ideal choice, and a contradiction to $g$ being surjective From surjectivity, we will now pick the most suitable candidate for the next substitution, leading to a vast simplification. The key idea is the following : look at the left hand side of $P^*$ , and you have $g(xg(y)+g(x)+xy+x+y)$ . We want a value of $x$ or $y$ that can perhaps cancel a great number of terms. One such substitution is easily observed: $x=-1$ . Recall that $f(-1)=-1$ (Yes, this was important!) so $g(-1) = 0$ . Now $P^*(-1,y)$ gives $$g(-g(y)+0-y-1+y) = 2g(y) \implies g(-g(y)-1) = 2g(y)$$ However, we mentioned that $g$ was surjective! Therefore, the above equation actually holds for all $z$ by merely substituting $y = g^{-1}(z)$ above where $y$ is any preimage of $z$ . Thus, we obtain $$g(-z-1) = 2z$$ for all $z$ . This is a contradiction : take $z=-1$ , then $g(0)=-2$ is obtained, which is a contradiction because $f(0)=0$ therefore $g(0)=0$ . Finally, we obtain that $g \equiv 0$ , and $f(x) =x$ everywhere. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/4553071', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1078836/']} | jdg_380814 |
stackexchange | llm_judgeable_groundtruth_similarity | 74370 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Suppose we have an invertible matrix q in a finite subgroup $Q$ of$Gl(n,\mathbb Z)$, the group of all invertible integer matrices. Now I want tofind all $x\; mod\; \mathbb Z^n$ for which $(q+q^2+q^3+...+q^m).x = 0\quad mod\; \mathbb Z^n$ where $m$ is the order of $q$ in the finite subgroup $Q$ of $Gl(n,\mathbb Z)$ so that$q^m=1$. I tried using the Smith normal form so that $(q+q^2+q^3+...+q^m) = U.D.V$ where $U,V$ in $Gl(n,\mathbb Z)$ and $D$ the Smith normal form, so we have to solve $D.V.x=0\quad mod\; \mathbb Z^n$ Since $D.V$ is diagonal, $x$ must have rational components unless the diagonal element is zero. Now my question is, what is the maximal denominator of the components in $x$ ? So what is the maximal absolute value in $D.V$ ?I think this must be $m$, but I can't figure out why. Edit: Let me clarify why I expect x to be rational with an upper bound on the denominator. Suppose G is a subgroup of the Euclidean Group with isometries (t,q) as elements (t: translational part, q: linear part). The subgroup T which contains all isometries in G with trivial linear part is a normal subgroup of G. Suppose now that T can be identified with a $\mathbb Z$-lattice in $\mathbb R^n$, then G/T is isomorph with a finite subgroup Q of $GL(n,\mathbb Z)$. Crystallographers call G a space group and Q a point group. There are only finite many conjugacy classes of finite subgroups in $GL(n,\mathbb Z)$, so there are only finite many point groups up to conjugacy in $GL(n,\mathbb Z)$. Now I want to understand why from this finite number of point groups, a finite number of (non-equivalent) space groups can be deduced. If we write G as the union of cosets of T $G=\bigcup_{i=1}^{|Q|}(t_{qi},q_{i})T$ we see that (composition of two isometries and q belongs to exactly one coset) $t_{q_1.q_2}=t_{q_1}+q_1.t_{q_2} \quad mod\ \mathbb Z^n$ So we know that $t_{q}$ is a real vector $0\leq t_{q}<1$. Using the previous property we also find that (m order of q) $(t_{q},q)^{m}=(q^{1}\cdot t_{q}+\cdots+q^{m}\cdot t_{q},q^m)\in (0,id)T$ $\Leftrightarrow (q^{1}+\cdots+q^{m})\cdot t_{q}=0\quad mod\ \mathbb{Z}^{n}$ If an appropriate origin is chosen in Euclidean space, $t_{q}$ should be rational with maximal denominator $m$. Maybe investigating $(t_{q},q)^{m}$ is not the best way to find bounds on $t_{q}$?
Now provide the response and nothing else.
| Edit: I couldn't resist my predilection for generalizations: Using darij grinberg's simplification, the proof below shows: Let $k$ be a field, $q \in GL_n(k)$ a matrix of finite exponent $m$ with char$(k) \nmid m$ and $M \subseteq k^n$. Futhermore, let $E$ be the eigenspace of $q$ corresponding to the eigenvalue $1$ and let $U \le k^n$ be the space spanned by the columns of $1-q$. Then the following is true for $A := 1+q+\dots + q^{m-1}$: $\lbrace x \in k^n \mid Ax \in M \rbrace = U + \frac{1}{m}(E \cap M)$ $U$ and $(1/m)(E \cap M)$ intersect in $0$ iff $0 \in M$, otherwise the intersection is empty $A$ is diagonizable with diagonal $(m,...,m,0,...,0)$ where the number of m's equals $\dim E$ (Older formulation) Let $E \le \mathbb{C}^n$ be the eigenspace of $1$ of the matrix $q$ and let $U \le \mathbb{C}^n$ be the space spanned by the columns of $1-q$. Set $A := 1+q+\dots + q^{m-1}$ and $X:= \lbrace x \in \mathbb{C}^n \mid A\cdot x \in \mathbb{Z}^n \rbrace$ and $L := E \cap \mathbb{Z}^n$. Then the following holds: $X = U \oplus \frac{1}{m}L$. Proof : Assume $\dim E = d$. Then $\dim U = \text{rank}(1-q) = n-d$. Since each $x \in E$ satisfies $Ax = mx$, $E$ contains eigenvectors from $A$ of the eigenvalue $m$. From $A \cdot (1-q) = 0$ it follows that $U$ consists of eigenvectors of $A$ of the eigenvalue $0$. Hence $E \cap U = 0$ and for dimensional reasons $$\mathbb{C}^n = U \oplus E.$$Since $q$ has integral entries, it's possible to chosse a basis of $E$ in $\mathbb{Q}^n$ and by multiplying with a suitable integer it's also possible to choose a basis in $\mathbb{Z}^n$. Therefore $L = E \cap \mathbb{Z}^n$ is a lattice of rank $d$. Let $\lbrace e_1, \dots, e_d \rbrace$ be a basis of $L$. Let $x \in X$ and write $$x = u + \sum_i \alpha_i e_i \text{ with } \alpha_i \in \mathbb{C}.$$Then $Ax = \sum_i m\alpha_i e_i \in \mathbb{Z}^n$ and $q(Ax) = Ax$. It follows $Ax \in E \cap \mathbb{Z}^n = L = \oplus_i \mathbb{Z}e_i$ and therefore $m\alpha_i \in \mathbb{Z}$. This shows $X \subseteq U \oplus (1/m)L$. The converse inclusion is obvious. qed . Edit: Also note that the image of $A$ is given by $$ Y := \lbrace Ax \mid x \in X \rbrace = L.$$ | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/74370', 'https://mathoverflow.net', 'https://mathoverflow.net/users/17551/']} | jdg_380815 |
stackexchange | llm_judgeable_groundtruth_similarity | 8273033 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on a project whose focus is the use of term rewriting to solve/simplify fixed-size bit-vector arithmetic problems, which is something useful to do as a prior step to some decision procedure such as those based on bit-blasting. The term rewriting may solve the problem outright, or otherwise produce a much simpler equivalent problem, so the combination of both can result in a considerable speed-up. I am aware that many SMT solvers implement this strategy (e.g. Boolector, Beaver, Alt-Ergo or Z3), but it has been hard to find papers/tech-reports/etc. in which these rewriting steps are described in detail. In general, I have only found papers in which the authors describe such simplification steps in a few paragraphs. I would like to find some document explaining in detail the use of term rewriting: providing examples of rules, discussing the convenience of AC rewriting and/or other equational axioms, use of rewriting strategies, etc. So far, I have only found the technical report A Decision Procedure for Fixed-Width Bit-Vectors, which describes normalization/simplification steps performed by CVC Lite, and I would like to find more technical reports like this one! I have also found a poster about Term rewriting in STP, but it is just a brief summary. I have already visited the websites of those SMT solvers and I have searched in their Publications pages... I would appreciate any reference, or any explanation of how term rewriting is being used in current versions of well-known SMT solvers. I am especially interested in Z3 because it appears to have one of the smartest simplification phases. For instance, Z3 3.* introduced a new bit-vector decision procedure but, unfortunately, I was not able to find any paper describing it.
Now provide the response and nothing else.
| I agree with you. It is hard to find papers describing the preprocessing steps used in modern SMT solvers.Most SMT solver developers agree that these preprocessing steps are very important for the Bit-Vector theory.I believe these techniques are not published for several reasons: most of them a little tricks that by themselves are not a significant scientific contribution; most of the techniques only work in the context of a particular system; a technique that may seem to work very well on solver A , does not work on solver B .I believe that having open source SMT solvers is the only way to address this issue. Even if we publish the techniques used in a particular solver A , it would be very hard to reproduce the actual behavior of solver A without seeing its source code. Anyway, here is a summary of the preprocessing performed by Z3, and important details. Several simplification rules, may reduce this size locally, but increase it globally. A simplifier must avoid the memory blowup caused by this kind of simplification. You can find examples at: http://research.microsoft.com/en-us/um/people/leonardo/mit2011.pdf The first simplification step only performs local simplifications that preserve equivalence.Examples: 2*x - x -> x x and x -> x Next, Z3 performs constant propagation. Given an equality t = v where v is a value. It replaces t everywhere with v . t = 0 and F[t] -> t = 0 and F[0] Next, it performs Gaussian elimination for Bit-Vectors. However, only variables that occur at most twice in arithmetical expressions are eliminated.This restriction is used to prevent an increase of the number of adders and multipliers in your formula.For example, suppose we have x = y+z+w and x occurs at p(x+z) , p(x+2*z) , p(x+3*z) and p(x+4*z) . Then, after eliminating x , we would have p(y+2*z+w) , p(y+3*z+w) , p(y+4*z+w) and p(y+5*z+w) . Although we eliminated x , we have more adders than the original formula. Next, it eliminates unconstrained variables. This approach is described by in the PhD thesis of Robert Brummayer and Roberto Brutomesso. Then, another round of simplification is performed. This time, local contextual simplifications are performed. These simplifications are potentially very expensive. So, a threshold on the maximal number of nodes to be visited is used (the default value is 10million).Local context simplification contain rules such as (x != 0 or y = x+1) -> (x != 0 or y = 1) Next, it tries to minimize the number of multipliers using distributivity. Example: a b + a c -> (b+c)*a Then, it tries to minimize the number of adders and multipliers by applying associativity and commutativity.Suppose the formula contains the terms a + (b + c) and a + (b + d) . Then, Z3 will rewrite them to: (a+b)+c and (a+b)+d .Before the transformation, Z3 would have to encode 4 adders. After, only three adders need to be encode since Z3 uses fully shared expressions. If the formula contains only equality, concatenation, extraction and similar operators. Then, Z3 uses a specialized solver based on the union-find and congruence closure. Otherwise, it bit-blasts everything, uses AIGs to minimize the Boolean formula, and invokes its SAT solver. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8273033', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1064567/']} | jdg_380816 |
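To make the flavor of those local rewrites concrete, here is a toy bottom-up rewriter (an illustration only: the term encoding and the three rules are my own assumptions, not Z3's internal representation or code).

# Toy bottom-up rewriter in the spirit of the local simplifications listed above:
# x and x -> x, constant folding, 2*x - x -> x.  Not Z3's implementation.
def simplify(t):
    if not isinstance(t, tuple):                        # leaf: int constant or variable name
        return t
    op, *args = (t[0], *[simplify(a) for a in t[1:]])   # rewrite children first
    if op == "and" and args[0] == args[1]:              # x and x  ->  x
        return args[0]
    if op == "+" and all(isinstance(a, int) for a in args):
        return sum(args)                                 # constant folding
    if op == "-" and args[0] == ("*", 2, args[1]):       # 2*x - x  ->  x
        return args[1]
    return (op, *args)

# Example: (2*x - x) and (2*x - x)  simplifies to just  x
print(simplify(("and", ("-", ("*", 2, "x"), "x"), ("-", ("*", 2, "x"), "x"))))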
stackexchange | llm_judgeable_groundtruth_similarity | 8864627 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a requirement to send an HTTP header in a specific character-case. I am aware that this is against the RFC, but I have a requirement. http.get seems to change the case of the headers dictionary I supply it. How can I preserve the character-case?
Now provide the response and nothing else.
| Based on the Tin Man's answer that the Net::HTTP library is calling #downcase on your custom header key (and all header keys), here are some additional options that don't monkey-patch the whole of Net::HTTP . You could try this: custom_header_key = "X-miXEd-cASe"def custom_header_key.downcase selfend To avoid clearing the method cache, either store the result of the above in a class-level constant: custom_header_key = "X-miXEd-cASe"def custom_header_key.downcase selfendCUSTOM_HEADER_KEY = custom_header_key or subclass String to override that particular behavior: class StringWithIdentityDowncase < String def downcase self endendcustom_header_key = StringWithIdentityDowncase.new("X-miXEd-cASe") | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8864627', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/69636/']} | jdg_380817 |
stackexchange | llm_judgeable_groundtruth_similarity | 33615 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I feel like the three terms are often used as synonyms but actually all mean something different. I did look at the Wikipedia articles but I'm still unsure about the definitions of the three. From my understanding, routing is concerned with finding a path or paths to a destination and deciding where to send an (unknown) packet to/which path to use based on its destination or prefix, and not just based on a simple lookup in a table. This is what happens at IP routers (longest-prefix routing). These simple lookups are what I understand as forwarding : the header/label (cf. MPLS)/etc. is matched against entries in a flow table (or multiple tables). An exact match tells the switch on which port to output the packet without further computation or decisions to be made. If no match is found, a default action is performed or a separate controller is asked (as in SDN). This can be done faster than routing because of less computation and specialized hardware, correct? Switching seems to be fairly similar to forwarding but with an emphasis on what happens in hardware in the switching fabric after the lookup in a table. This is on OSI layer 2 (data link) while routing and forwarding are actions on layer 3 (network)? What's correct/wrong about these intuitions? Did I miss something of importance? Are there differences between IP routing/forwarding (Google returns the same Wikipedia article)? What about the MAC layer?
Now provide the response and nothing else.
| In general, forwarding refers to a device sending a datagram to the next device in the path to the destination, switching refers to moving a datagram from one interface to another within a device, and routing refers to the process a layer-3 device uses to decide on what to do with a layer-3 packet. A host sending data to another host through an ethernet interface will place a frame on the wire. If the next device is a bridge (a switch is a bridge), the bridge will inspect the frame for the source and destination MAC addresses. The bridge will use the source MAC address in its MAC address table so that it knows the host with that MAC address is connected to the interface where the frame entered the bridge. It will also try to look up the destination MAC address in its MAC address table to see to which interface it should switch the frame. If it finds the MAC address in its MAC address table, it will switch the frame to the interface indicated in its MAC address table, otherwise it will flood the frame to all other interfaces, forwarding the frame to the device(s) on the other end(s) of the interface(s) link(s). A router receiving a frame on an interface will strip the frame from the packet. The router will then try to look up the destination IP address in its routing table. If it doesn't find a match, even a default route, it will drop the packet. If it finds one or more matches, it will switch the packet to the interface of the longest match in the routing table. The router will then prepare a new frame for the packet on the new interface (the link on new interface may or may not use MAC addresses), and the router will forward the new frame containing the original packet over the next hop interface toward its destination. The whole process a router uses is called routing , but within the router, it switches packets before forwarding them to the next hop device. | {} | {'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/33615', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/23061/']} | jdg_380818 |
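The two kinds of lookup described above can be sketched in a few lines (a simplified illustration; real switches and routers do this in hardware, and the addresses and interface names here are made up).

import ipaddress

# Switching: exact match of the destination MAC in the MAC address table.
mac_table = {"aa:bb:cc:dd:ee:01": "eth1", "aa:bb:cc:dd:ee:02": "eth2"}
def switch_frame(dst_mac):
    return mac_table.get(dst_mac, "flood")        # unknown destination -> flood

# Routing: longest-prefix match of the destination IP in the routing table.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),  "ge-0/0/1"),
    (ipaddress.ip_network("10.1.0.0/16"), "ge-0/0/2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "ge-0/0/0"),   # default route
]
def route_packet(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    matches = [(net, port) for net, port in routing_table if dst in net]
    if not matches:
        return None                               # no route at all: drop the packet
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(switch_frame("aa:bb:cc:dd:ee:02"))          # eth2
print(route_packet("10.1.2.3"))                   # ge-0/0/2 (/16 beats /8 and /0)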
stackexchange | llm_judgeable_groundtruth_similarity | 169203 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $M$ be a closed Riemannian manifold. I have several questions concerning the set of all minimal submanifolds (or immersion) in $M$. (1): Is there a general local theory for the set of minimal submanifold? What I mean is that, let $N \subset M$ be a minimal submanifold, then it corresponds to the critical point of the area functional. Considering the 2nd variation, one has the Jacobi operator. So the kernel of the Jacobi operator $K$ corresponds to the first order deformation of minimal submanifolds near $N$. Will there be a map $$ K \to \{\text{all minimal submanifolds in }M\}$$ Such that it is locally surjective around $N$? (2) What can we say about the structure of the set of all minimal submanifolds in $M$? If the question is too board, please feel free to impose restrictions (for example, dimension on $M$, $N$ or even some specific $M$) Remark: In some very special case (for example special Lagrangian submanifolds in a Calabi-Yau $M$), $K$ (need to restrict to Lagrangian variation here) actually parametrizes locally all nearby special Lagrangians. But the proof uses the fact that all special Lagrangians are calibrated, so cannot be generalized.
Now provide the response and nothing else.
| In general, asking whether or not all Jacobi fields on a minimal surface can be "integrated" to find a nearby minimal surface is a very difficult problem. For example, see Yau's remark here (page 246): Unfortunately minimal submanifolds are defined by a second-order elliptic system and it is difficult to understand the deformation theory. (Given a Jacobi field on a minimal submanifold, can we find a deformation by a family of minimal submanifolds along the field?) I think that http://arxiv.org/pdf/0709.1417v2.pdf provides an example of a branched minimal $S^2$ in $S^4$ with a non-integrable Jacobi field (see Theorem 4.1 and the subsequent comments). I'm not sure if there is a known example of a codimension one, embedded, minimal surface with non-integrable Jacobi fields. As I'm sure you know, the converse of your question is true: any "nearby" minimal surface corresponds to a Jacobi field. It depends on your exact problem, but you may get some mileage out of the "natural constraint," which allows you to associate a Jacobi field to a nearby surface which is minimal up to a finite dimensional error. This is described nicely in Leon Simon's book "Energy Minimizing Maps," Ch. 3.12 (or in many other places). See also http://www.ugr.es/~aros/icm-ros.pdf , Ch 7, for a discussion of the moduli space of minimal surfaces in $\mathbb{R}^3$ with finite total curvature. I think that it is not known whether or not this moduli space will be smooth (i.e. if non-integrable Jacobi fields exist). It is known, however, that particular minimal surfaces have no non-integrable Jacobi fields. For example the Costa--Hoffman--Meeks surfaces of all genus have no non-integrable Jacobi fields: See http://arxiv.org/pdf/0806.1836.pdf . I'll also remark that your question is also related to the following question (of Yau, I think): Does there exist a $1$-parmeter family of non-isometric minimal surfaces in $\mathbb{S}^3$? An obvious strategy is to try to rule out non-trivial Jacobi fields, but this has not been successfully carried out. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/169203', 'https://mathoverflow.net', 'https://mathoverflow.net/users/41094/']} | jdg_380819 |
stackexchange | llm_judgeable_groundtruth_similarity | 34486603 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I change a dictionary's key for a particular value? I can't just change dict[i] to dict[i+1] because that changes the value for that particular key. And there's no dict.updateKeyForValue() like there is a dict.updateValueForKey() . Because my keys are Int 's and all out of order, I can't modify the entire key-value pair by looping through because I may override a pair that the loop hasn't reached yet. Is there a simpler way? Feel like I'm missing something obvious.
Now provide the response and nothing else.
| Swift 3 func switchKey<T, U>(_ myDict: inout [T:U], fromKey: T, toKey: T) { if let entry = myDict.removeValue(forKey: fromKey) { myDict[toKey] = entry }} var dict = [Int:String]()dict[1] = "World"dict[2] = "Hello"switchKey(&dict, fromKey: 1, toKey: 3)print(dict) /* 2: "Hello" 3: "World" */ Swift 2 func switchKey<T, U>(inout myDict: [T:U], fromKey: T, toKey: T) { if let entry = myDict.removeValueForKey(fromKey) { myDict[toKey] = entry }} var dict = [Int:String]()dict[1] = "World"dict[2] = "Hello"switchKey(&dict, fromKey: 1, toKey: 3)print(dict) /* 2: "Hello" 3: "World" */ | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34486603', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4848310/']} | jdg_380820 |
stackexchange | llm_judgeable_groundtruth_similarity | 49464634 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Trying to figure out the difference between these three types in TypeScript: foo: object and bar: {} and type: Object ? Example: trying to assign an object to the variable that is supposed to handle header information for a request: headers: object; This results in an error: Type 'object' is not assignable to '{ [key: string]: string }`. The same condition passes when using headers: {} , which leads to the conclusion that {} has some slightly less tight requirements.
Now provide the response and nothing else.
| TypeScript has three confusing types: Object , {} and object . You can assign null and undefined to all three types if strictNullChecks compiler option is disabled otherwise the compile error occurs. Object Contains stuff (like toString() , hasOwnProperty() ) that is present in all JavaScript objects. Any value (primitive, non-primitive) can be assigned to Object type. {} {} is an empty object. It is pretty much the same as Object in runtime but different in compile time. In compile time {} doesn't have Object 's members and Object has more strict behavior (see the @golmschenk's comment). object object was introduced in TypeScript 2.2 . It is any non-primitive type . You can't assign to it any primitive type like bool , number , string , symbol . Thus, if you will try this: var strictTypeHeaders: { [key: string]: string } = {}; // non-primitive typevar header: object = {};header = strictTypeHeaders; // its OKstrictTypeHeaders = header; // causes error "Type 'object' is not assignable to type '{ [key: string]: string }`" you will get the compile error on the last line. This happens because { [key: string]: string } type is more specific than object type. There is no any error on header = strictTypeHeaders since both types are non-primitive and object is more common type than { [key: string]: string } | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/49464634', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/865866/']} | jdg_380821 |
stackexchange | llm_judgeable_groundtruth_similarity | 72315 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $\mu_i,\nu_i$ be probability measures on a finite space $\Omega_i$, $i=1,2,\dots,n$. Define $\mu=\prod\limits_{i=1}^{n}\mu_i$ and $\nu=\prod\limits_{i=1}^{n}\nu_i$ on $\Omega=\prod\limits_{i=1}^{n}\Omega_i$, and show that $$\|\mu-\nu\| \le \sum\limits_{i=1}^{n}\|\mu_i-\nu_i\|$$ where $\|\mu-\nu\|$ denotes the total variation distance between $\mu$ and $\nu$. I know how to do this using coupling; is there a way to do it without coupling? I tried to write $$\|\mu-\nu\|={1 \over 2}\sum\limits_{x=(x_1,x_2,\dots,x_n) \in \Omega}|\prod\limits_{i=1}^{n}\mu_i(x_i)-\prod\limits_{i=1}^{n}\nu_i(x_i)|$$ and use the fact that $\prod\limits_{i=1}^{n}\mu_i \le \sum\limits_{i=1}^{n}\mu_i$ and $\prod\limits_{i=1}^{n}\nu_i \le \sum\limits_{i=1}^{n}\nu_i$, but I didn't succeed.
Now provide the response and nothing else.
| This is a direct consequence of the fact that for every nonnegative $a_i$ and $b_i$,$$|(a_1\cdots a_n)-(b_1\cdots b_n)|\leqslant\sum\limits_{i=1}^n|a_i-b_i|\,(a_1\cdots a_{i-1})(b_{i+1}\cdots b_n).$$Hence,$$2\|\mu-\nu\|=\sum\limits_x\left|\mu_1(x_1)\cdots\mu_n(x_n)-\nu_1(x_1)\cdots\nu_n(x_n)\right|\leqslant\sum\limits_{i=1}^n\Delta_i,$$with$$\Delta_i=\sum\limits_{x_i}|\mu_i(x_i)-\nu_i(x_i)|\,\sum\limits_{\widehat x_i}\mu_1(x_1)\cdots\mu_{i-1}(x_{i-1})\nu_{i+1}(x_{i+1})\cdots\nu_n(x_n),$$where $\widehat x_i=(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)$. Each sum over $\widehat x_i$ is a product of masses of probability measures hence$$\Delta_i=\sum\limits_{x_i}|\mu_i(x_i)-\nu_i(x_i)|=2\|\mu_i-\nu_i\|,$$and you are done. Edit The first inequality is a consequence of the triangular inequality between the numbers $(c_i)_{0\leqslant i\leqslant n}$ defined by $c_i=(a_1\cdots a_{i})(b_{i+1}\cdots b_n)$ for $1\leqslant i\leqslant n-1$, $c_0=b_1\cdots b_n$ and $c_n=a_1\cdots a_n$ since, for every $1\leqslant i\leqslant n$,$$c_{i}-c_{i-1}=(a_{i}-b_{i})(a_1\cdots a_{i-1})(b_{i+1}\cdots b_n).$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/72315', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_380822 |
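For readers who want the first inequality unpacked, here is the $n=2$ case of the telescoping trick used above (nothing new, just the argument spelled out): $$a_1a_2-b_1b_2=(a_1-b_1)\,b_2+a_1\,(a_2-b_2),\qquad\text{hence}\qquad |a_1a_2-b_1b_2|\leqslant |a_1-b_1|\,b_2+|a_2-b_2|\,a_1,$$ which is the displayed bound with the empty products read as $1$; the general case follows by writing $c_n-c_0=\sum_{i=1}^n(c_i-c_{i-1})$ for the interpolating products $c_i=(a_1\cdots a_i)(b_{i+1}\cdots b_n)$ and applying the triangle inequality.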
stackexchange | llm_judgeable_groundtruth_similarity | 415537 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a collection of numeric quantitative variables from a series of samples; let’s say, for example, countries from the whole world. My goal is to discern how countries can be distributed, according to these variables, into classes by using k-means. But I am interested only in the results concerning European countries (again, just as an example), that is, how European countries alone distribute into classes. So, my question is: do I perform the k-means analysis taking into account only the European countries, discarding the other countries from the beginning? Or is it better to first perform the k-means analysis including all the world's countries, and afterwards describe the results concerning the European countries (assuming that the European countries do not all end up within the same cluster)? To give some detail, the quantitative variables are gene expression data (4000 genes), and the "European countries"/samples are selected brain regions (400 of them) out of a total of 1000 brain regions.
Now provide the response and nothing else.
| It is a good idea to bootstrap or cross-validate (e.g., 100 repeats of 10-fold cross-validation) indexes that were not optimized. For example, I recommend optimizing on a gold standard such as log-likelihood, penalized log-likelihood, or in a Bayesian model log-likelihood + log-prior. You can report measures such as pseudo $R^2$ that are just transformations of the gold standard objective function, and in addition do resampling validation on helpful indexes such as the $c$ -index (concordance probability = AUROC), Brier score, and most of all, the full calibration curve. I do validation of smooth nonparametric calibration curves by bootstrapping 99 predicted values when using a probability model, i.e., to validate the absolute accuracy of predicted probabilities of 0.01, 0.02, ..., 0.99. Likewise you can show overfitting-corrected estimates of Brier score, calibration slope, mean squared error, and many other quantities. Details are in my RMS book and course notes . | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/415537', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/252450/']} | jdg_380823 |
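As a rough sketch of what "resampling validation of helpful indexes" can look like in code (an illustration with placeholder data and a placeholder model, not the rms package's validate/calibrate implementation):

# Harrell-style bootstrap optimism correction for a non-optimized index (AUROC).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                      # placeholder predictors
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # placeholder binary outcome

def auc_of_fit(Xtr, ytr, Xev, yev):
    m = LogisticRegression().fit(Xtr, ytr)
    return roc_auc_score(yev, m.predict_proba(Xev)[:, 1])

apparent = auc_of_fit(X, y, X, y)                  # c-index on the model's own data
optimism = []
for _ in range(200):                               # 200 bootstrap resamples
    i = rng.integers(0, len(y), len(y))
    optimism.append(auc_of_fit(X[i], y[i], X[i], y[i]) - auc_of_fit(X[i], y[i], X, y))

print(f"apparent c-index {apparent:.3f}, optimism-corrected {apparent - np.mean(optimism):.3f}")

The same loop can be run for the Brier score or any other index that was not optimized during fitting.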
stackexchange | llm_judgeable_groundtruth_similarity | 670058 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there an easy way to show that $x^4+8$ is irreducible over $\mathbb Z$ without trying to write it as a product of polynomials of lower degrees?
Now provide the response and nothing else.
| Since $\ 9^4\!+8\ $ is prime, it is irreducible by Cohn's irreducibility test. Or, with smaller prime, consider $\,f(2x) = 8(2x^4\!+1).\,$ Then $2x^4\!+1$ is irreducible by Cohn, since $\,2\cdot 3^4+1 = 163\,$ is prime (in fact, a very famous prime, the largest Heegner number , which explains why Euler's famous prime producing polynomial $\,n^2-n+41\,$ yields distinct primes for $\,n = 1,2,\ldots,40).$ Remark $\ $ Cohn's criterion can be viewed as an example of the general idea that the factorizations of a polynomial are constrained by the factorizations of the values that it takes. If one pushes this idea to the hilt one obtains a simple algorithm for polynomial factorization using factorization of its integer values and (Lagrange) interpolation. The ideas behind this algorithm are due in part to Bernoulli, Schubert, Kronecker. The algorithm is of more theoretical value than practical, since nowadays much more efficient algorithms are known. There are also other closely related results. In $1918$ Stackel published the following simple: Theorem If $\rm\, f(x)\,$ is a composite integer coefficient polynomial then $\rm\, f(n)\, $ is composite for all $\rm\,|n| > B,\, $ for some bound $\rm\,B.\,$ In fact $\rm\, f(n)\, $ has at most $\rm\, 2d\, $ prime values, where $\rm\, d = {\rm deg}(f)$. The simple proof can be found online in Mott & Rose [3] , p. 8.I highly recommend this delightful and stimulating $27$ page paperwhich discusses prime-producing polynomials and related topics. Contrapositively, $\rm\, f(x)\, $ is prime (irreducible) if it assumes a prime valuefor large enough $\rm\, |x|\, $. As an example, Polya-Szego popularized A. Cohn's irreduciblity test, whichstates that $\rm\, f(x) \in \mathbb Z[x]\,$ is prime if $\rm\, f(b)\, $yields a prime in radix $\rm\,b\,$ representation (so necessarily $\rm\,0 \le f_i < b).$ For example $\rm\,f(x) = x^4 + 6\, x^2 + 1 \pmod p\,$ factors for all primes $\rm\,p,\,$yet $\rm\,f(x)\,$ is prime since $\rm\,f(8) = 10601\rm$ octal $= 4481$ is prime.Cohn's test fails if, in radix $\rm\,b,\,$ negative digits are allowed, e.g.$\rm\,f(x)\, =\, x^3 - 9 x^2 + x-9\, =\, (x-9)\,(x^2 + 1)\,$ but $\rm\,f(10) = 101\,$ is prime. Conversely Bouniakowski conjectured $(1857)$that prime $\rm\, f(x)\, $ assume infinitely many prime values (excludingcases where all the values of $\rm\,f\,$ have fixed common divisors, e.g. $\rm\, 2\: |\: x(x+1)+2\, ).$ However, except for linear polynomials (Dirichlet's theorem), this conjecture has never been proved for any polynomial of degree $> 1.$ Note that a result yielding the existence of one prime value extends to existence of infinitely many prime values, for any class of polynomials closed under shifts, viz. if $\rm\:f(n_1)\:$ is prime, then $\rm\:g(x) = f(x+ n_1\!+1)\:$ is prime for some $\rm\:x = n_2\in\Bbb N,\:$ etc. For further detailed discussion of Bouniakowski's conjecture and related results, including heuristic and probabilistic arguments, see Chapter 6 of Ribenboim's The New Book of Prime Number Records . [1] Bill Dubuque, sci.math 2002-11-12, On prime producing polynomials. [2] Murty, Ram. Prime numbers and irreducible polynomials. Amer. Math. Monthly, Vol. 109 (2002), no. 5, 452-458. [3] Mott, Joe L.; Rose, Kermit. Prime producing cubic polynomials. Ideal theoretic methods in commutative algebra, 281-317. Lecture Notes in Pure and Appl. Math., 220, Dekker, New York, 2001. 
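Spelling out the arithmetic behind the two applications of Cohn's test (just unpacking the claims above): the coefficients of $x^4+8$ are $1,0,0,0,8$, all valid base-$9$ digits, and $$f(9)=9^4+8=6561+8=6569,$$ which is prime, so Cohn's criterion applies directly; likewise $f(2x)=16x^4+8=8(2x^4+1)$ and $$2\cdot 3^4+1=162+1=163$$ is prime, so $2x^4+1$ is irreducible and hence so is $x^4+8$.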
| {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/670058', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/125919/']} | jdg_380824 |
stackexchange | llm_judgeable_groundtruth_similarity | 3345127 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In this question that someone previously asked, they presented the problem: Given that the number 8881 is not a prime number, prove by contradiction that it has a prime factor that is at most 89. One of the answers was this: If all prime factors were superior to 89, they would be at least 97. Counting them with their multiplicity, if there was only one such factor it would be 8881, which contradicts the given fact that 8881 is not prime. If there are at least two (possibly equal) factors a and b, then ab≤8881 but ab≥97∗97>8881, contradiction. I understand it up until "Counting them with their multiplicity, if there was only one such factor it would be 8881". What does it mean to count numbers with their multiplicity, and in this case why would the only factor be 8881? Moreover, another answer states: You're on the right lines. If 8881 is not prime, it must have at least one prime factor not equal to itself. If it has no prime factors less than or equal to 89, then it must have only prime factors greater than or equal to 97, which is the next prime up from 89. You've already found the smallest natural number which has prime factors greater than or equal to 97 (in reference to the proposed solution to the question, where they state that the smallest number composed only of 97s is 97^2). However, wouldn't the smallest natural number which has prime factors greater than or equal to 97 be 97? Thank you and sorry if this seems like a stupid question.
Now provide the response and nothing else.
| When we say that we are counting with multiplicity , we mean that we are counting objects which might "repeat" themselves, and we want to count all of those repetitions as distinct objects. For example, the number $8$ has only one prime factor: $2$ . However, if we count the number of prime factors of $8$ with multiplicity , there are $3$ such factors: $2$ , $2$ , and $2$ (since $8 = 2^3$ ). I imagine that most students are more familiar with this term in the context of roots of polynomials (since this topic is usually taught to students relatively early in their mathematical careers). For example, the polynomial $$ (x-1)^2(x-2) $$ has two distinct roots, but three roots if we count with multiplicity. This is because the root $x=1$ has multiplicity $2$ . This notion is discussed a little further on Wikipedia : In mathematics, the multiplicity of a member of a multiset is the number of times it appears in the multiset... The notion of multiplicity is important to be able to count correctly without specifying exceptions (for example, double roots counted twice). Hence the expression, "counted with multiplicity". If multiplicity is ignored, this may be emphasized by counting the number of distinct elements, as in "the number of distinct roots"... | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3345127', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/701209/']} | jdg_380825 |
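For completeness, here is the arithmetic behind the original 8881 problem (a worked check, not part of the quoted answers): a single prime factor counted with multiplicity would force $8881$ itself to be prime, while two factors each at least $97$ would give $$97\cdot 97=9409>8881,$$ a contradiction; and indeed $$8881=83\cdot 107,$$ with $83\le 89$, confirming the claim.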
stackexchange | llm_judgeable_groundtruth_similarity | 488489 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to calculate the Fourier series of $\sin^3t$ in trigonometric form. In previous exercises I have been able to use trigonometric identities to calculate the coefficients, but here, although I can rewrite the function, I cannot work out what period the function has, and moreover, I can't solve for the $b_n$ coefficients. Am I missing something here? Because rewriting into complex form first, raising that to 3 and then calculating seems unnecessarily complicated. Any ideas?
Now provide the response and nothing else.
| $$8(\sin t)^3=\mathrm i(\mathrm e^{\mathrm it}-\mathrm e^{-\mathrm it})^3=\mathrm i\mathrm e^{3\mathrm it}-3\mathrm i\mathrm e^{\mathrm it}+3\mathrm i\mathrm e^{-\mathrm it}-\mathrm i\mathrm e^{-3\mathrm it}=2\,(3\sin t-\sin 3t)$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/488489', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/42397/']} | jdg_380826 |
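Unpacking that identity into the trigonometric Fourier series the question asks for: $$\sin^3 t=\frac{3}{4}\sin t-\frac{1}{4}\sin 3t,$$ so the series is already finite: the function is odd with period $2\pi$, every $a_n=0$, $b_1=\tfrac34$, $b_3=-\tfrac14$, and all other $b_n$ vanish.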
stackexchange | llm_judgeable_groundtruth_similarity | 19985827 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was playing the other day, trying to see how far could I optimize something. I decided to start from a simple map that just does a linear search to find if an element is there, and then try to optimize the most of it. Also, to compare, I do the same with a std::map and a std::vector using std::find. The results with the map are the expected ones, slower creation and destruction than my map, but much more speed( actually, I have been unable to mesure it, it returns 0 allways). The problem is with std::vector. I expected it to be slower than my implementation, but is not, and I really don't understand how can it be the same or faster, as my implementation is skipping a worst case( the value isn't in the vector) and is using a cache of results. Can anyone shed some light here? I know that the guys behind stl are semi-gods, but still, this doesn't make sense. Benchmark results ( i3, Windows 8.1 Pro 64, Visual Studio 2013 ): std::vector : Build : 85.0042 ms Loop : 37.0011 ms Find : 1.82259 ms -> First : Found, Second : Found, Third : Not Found Release : 0 ms--------------------std::map : Build : 6929.41 ms Loop : 570.032 ms Find : 0 ms -> First : Found, Second : Found, Third : Not Found Release : 1425.08--------------------Linear Map V0: Build : 194.012 ms Loop : 49.0052 ms Find : 1.88915 ms -> First : Found, Second : Found, Third : Not Found Release : 109.004 Here's the code for the map: template<typename T>class LinearMap0{public:LinearMap0(){ _end = _root = new Node; _prebuffer = nullptr; prebufferCapacity = 0; _alive = true; prebufferMarker = 0; _cache = _mm_set1_epi32(-1); for (auto& ptr : _cacheBuffer) ptr = nullptr; MinID = INT32_MAX - 1; MaxID = -1;}void PreAllocate(int Count){ prebufferCapacity = Count; _prebuffer = new Node[Count];}~LinearMap0(){ if (_alive) { Release(); }}void Release(){ Node* marker = _end; while (marker->Prev) { marker = marker->Prev; if (!marker->Next->IsPreAllocated) delete marker->Next; } if (!_root->IsPreAllocated) delete _root; delete[] _prebuffer; _alive = false;}void AddElement(int ID,T element){ Node* tmp = nullptr; if (prebufferMarker < prebufferCapacity) { // Use a pre-allocated object tmp = &_prebuffer[prebufferMarker]; prebufferMarker++; tmp->IsPreAllocated = true; } else { tmp = new Node; } tmp->ID = ID; tmp->Data = element; // Update list _end->Next = tmp; Node* prevEnd = _end; _end = tmp; _end->Prev = prevEnd; bool isMin = ID < MinID; MinID = ID * isMin + (1 - isMin) * MinID; bool isMax = ID > MaxID; MaxID = ID * isMax + (1 - isMax) * MaxID;}void DeleteLast(){ Node* tmp = _end; _end = _end->Prev; _end->Next = nullptr; delete tmp;}template<class Function>void Loop(Function&& f, bool Forward = true){ if (Forward) { Node* marker = _root; while (marker->Next) { marker = marker->Next; f(marker->Data); } } else { Node* marker = _end; while (marker->Prev) { marker = marker->Prev; f(marker->Data); } }}T* Find(int ID){ // Bounds check if (ID < MinID || ID > MaxID) return nullptr; // Check it it's in the cache // Compare the value to every value in the cache __m128i idxSSE = _mm_set1_epi32(ID); __m128i C = _mm_cmpeq_epi32(_cache, idxSSE); // To change form -1 to 1 C = _mm_mul_epi32(C, _mm_set1_epi32(-1)); // Now C holds 1 if true, or 0 if false (in each of its 4 members). 
It should only be ONE set at 1 __m128i tmp = _mm_set1_epi32(1); __m128i S = _mm_sub_epi32(tmp, C); // Now find the index int i = S.m128i_i32[0] * (C.m128i_i32[1] + S.m128i_i32[1] * (2 * C.m128i_i32[2] + S.m128i_i32[2] * (3 * C.m128i_i32[3] + S.m128i_i32[3] * -1))); if (i != -1) return _cacheBuffer[i]; // Traverse the list Node* marker0 = _root; T* obj = nullptr; while (true) { if (marker0->ID == ID) { obj = &marker0->Data; } if (marker0->Next) marker0 = marker0->Next; else break; } // Cache value and return _cache.m128i_i32[cacheMarker] = ID; _cacheBuffer[cacheMarker] = obj; cacheMarker = (cacheMarker + 1) & 3; // x & 3 = x % 4 return obj;}private:struct Node{ Node() { Prev = nullptr; Next = nullptr; IsPreAllocated = false; ID = -1; } T Data; Node* Prev; Node* Next; bool IsPreAllocated; int ID;};Node* _root;Node* _end;Node* _prebuffer;int prebufferCapacity;int prebufferMarker;bool _alive;__m128i _cache;T* _cacheBuffer[4];int cacheMarker;int MinID, MaxID;}; And here's the benchmark: // Initialize seedsconst __int64 ecount = 5 * 1000*1000;vector<__int64> seed(ecount);for (__int64 i = 0; i < ecount; i++){ seed[i] = i;}random_shuffle(seed.begin(), seed.end());///////////// std::vectorvector<__int64> v;cout << "--------------------" << endl;cout << "std::vector :" << endl;cout << " Build : " << time_call([&](){ v.resize(ecount/2); for (__int64 i = 0; i < ecount; i++) { if (i < (ecount / 2)) v[i] = seed[i]; else v.push_back(seed[i]); }}) << " ms" << endl;cout << " Loop : " << time_call([&](){ for (auto& n : v) n /= 2;}) << " ms" << endl;bool found1, found2, found3;cout << " Find : " << (((float)time_call([&](){ for (int i = 0; i < 15; i++) { // Should exist found1 = find(v.begin(), v.end(), seed[5] / 2) != v.end();//find(seed[5]) != m.end(); found2 = find(v.begin(), v.end(), seed[1000] / 2) != v.end(); // Shouldn't exist found3 = find(v.begin(), v.end(), -1234) != v.end(); }})) / 15.0) / 3.0;cout << " ms " << " -> First : " << ((found1) ? "Found" : "Not Found") << ", Second : " << ((found2) ? "Found" : "Not Found") << ", Third : " << ((found3) ? "Found" : "Not Found") << endl;cout << " Release : " << time_call([&](){ v.clear();}) << " ms" << endl;///////////// std::mapmap<__int64, __int64> m;cout << "--------------------" << endl;cout << "std::map :" << endl;cout << " Build : " << time_call([&](){ for (__int64 i = 0; i < ecount; i++) { m[seed[i]] = seed[i]; }}) << " ms" << endl;cout << " Loop : " << time_call([&](){ for (auto& n : m) n.second /= 2;}) << " ms" << endl;cout << " Find : " << (((float)time_call([&](){ for (int i = 0; i < 15; i++) { // Should exist found1 = m.find(seed[5]) != m.end(); found2 = m.find(seed[1000]) != m.end(); // Shouldn't exist found3 = m.find(-1234) != m.end(); }})) / 15.0) / 3.0;cout << " ms " << " -> First : " << ((found1) ? "Found" : "Not Found") << ", Second : " << ((found2) ? "Found" : "Not Found") << ", Third : " << ((found3) ? 
"Found" : "Not Found") << endl;cout << " Release : " << time_call([&](){ m.clear();}) << endl;///////////// Linear Map V0LinearMap0<__int64> c;cout << "--------------------" << endl;cout << "Linear Map V0:" << endl;cout << " Build : " << time_call([&](){ c.PreAllocate(ecount / 2); for (__int64 i = 0; i < ecount; i++) { c.AddElement(seed[i],seed[i]); }}) << " ms" << endl;cout << " Loop : " << time_call([&](){ c.Loop([](__int64& Data) { Data /= 2; });}) << " ms" << endl;cout << " Find : " << (((float)time_call([&](){ for (int i = 0; i < 15; i++) { // Should exist found1 = c.Find(seed[5]); found2 = c.Find(seed[1000]); // Shouldn't exist found3 = c.Find(-1234); }})) / 15.0) / 3.0;cout << " ms -> First : " << ((found1) ? "Found" : "Not Found") << ", Second : " << ((found2) ? "Found" : "Not Found") << ", Third : " << ((found3) ? "Found" : "Not Found") << endl;cout << " Release : " << time_call([&](){ c.Release();}) << endl; EDIT: time_call is: template <class Function>double time_call(Function&& f){ chrono::time_point<chrono::high_resolution_clock> start, end; start = chrono::high_resolution_clock::now(); f(); end = chrono::high_resolution_clock::now(); return ((double)(chrono::duration_cast<chrono::nanoseconds>(end - start).count())) / 1000000.0;}
Now provide the response and nothing else.
| Your container is a linked list, whereas std::vector is a dynamically-sized array. The linked list approach has benefits, such as being able to insert elements without any re-allocations. However the array approach has some significant advantages: a linear search simply scans memory, which is exactly what caches and pre-fetchers are built for. A scan of a linked list will be less efficient because each jump into uncached memory means an expensive cache miss. a linear array scan is easy to vectorize. If you compile with -O3 then the compiler will likely use a vectorized version of std::find . It's impossible to vectorize a linked list scan due to memory dependencies. amount of memory used. Your linked list has to maintain a next pointer which effectively makes your elements larger. Also, each non-preallocated Node has to pay the overhead of the allocator (i.e. accounting data for new and delete ). That means you're hitting memory bandwidth limits sooner, and you can fit fewer elements in cache. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19985827', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1739672/']} | jdg_380827 |
stackexchange | llm_judgeable_groundtruth_similarity | 49623731 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It seems Visual Studio 2017 always saves new files as UTF8-BOM.It also seems this was not the case with earlier versions of Visual Studio, but I could not find any documentation. Also there has been an option "Advanced Save Options\Encoding" which did allow to change the encoding of newly saved files which is missing in VS2017. Questions: Are all file types saved with UTF8-BOM encoding in VS2017? Is it possible to configure the encoding for new files in VS2017? Will VS2017 change the encoding of "old" files which don't have UTF8-BOM? Is there any documentation about this topic?
Now provide the response and nothing else.
Also there has been an option "Advanced Save Options\Encoding" which did allow to change the encoding of newly saved files which is missing in VS2017. This feature already exists! You can save files with a specific character encoding to support bi-directional languages. You can also specify an encoding when opening a file, so that Visual Studio displays the file correctly. To save a file with encoding: From the File menu, choose Save File As , and then click the drop-down button next to the Save button . The Advanced Save Options dialog box is displayed. Under Encoding, select the encoding to use for the file. Optionally, under Line endings, select the format for end-of-line characters. Are all file types saved with UTF8-BOM encoding in VS2017? In my case, VS stores all the files with CodePage 1252 encoding. Is it possible to configure the encoding for new files in VS2017? My Visual Studio version is 15.6.1, and some people had the same problem as yours in previous versions of 2017, but they said " We have fixed this issue and it's available in Visual Studio 2017 15.3 ". If that does not work, for C++ projects take a look at /utf-8 (Set Source and Executable character sets to UTF-8) . Will VS2017 change the encoding of "old" files which don't have UTF8-BOM? By default, Visual Studio detects a byte-order mark to determine if the source file is in an encoded Unicode format, for example, UTF-16 or UTF-8. If no byte-order mark is found, it assumes the source file is encoded using the current user code page, unless you have specified a code page by using /utf-8 or the /source-charset option. Some people encountered a problem which came from the .editorconfig file, as below:
root = true
[*]
indent_style = tab
indent_size = 4
tab_width = 4
trim_trailing_whitespace = true
insert_final_newline = true
charset = utf-8
That final charset line is probably doing it... but I'm not asking for 'utf-8-with-bom'! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49623731', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1155873/']} | jdg_380828
stackexchange | llm_judgeable_groundtruth_similarity | 18041 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My problem is this: I have a physical layout represented as a graph. The nodes represent hooks/ducts where a wire can anchor, and the edges are the possible connections between two nodes along which a wire can run. There are some special nodes, called splitters, where a single wire can be split into 2 or more wires, up to k. The value k can be taken as constant for now, but it varies from node to node. Not all nodes are splitters. There is one source of power from which a wire will emerge; it is the source. The wire has to be taken to n sinks. An edge can take any number of wires traversing through it in either direction. The total wire length has to be minimized. The nature of the graph, planar or Euclidean, is not known. Example: below is a sample network. Nodes are named with numbers and all edges are given equal weights of 1. The source is Node1 and the sinks are Node5, Node9 and Node13. In case 1, Node6 is a splitter node. In case 2, Node6 and Node4 are splitter nodes. The splitter nodes' k=3, i.e., each can take in one wire and split it out into 3 wires. Case 1 . Only one splitter node. It makes sense to split at Node6. Case 2 . Two splitter nodes. It makes sense to split at Node4 instead of Node6. I am looking for different strategies to find a generic solution for this problem. The graph presented here is of a smaller scale compared to the problem at hand. The graph is static and cannot be changed (I mean the solution should not suggest any new edge or propose a new splitter location). Any reference to research papers published on this kind of problem is also welcome. Case 3 . Two splitter nodes. It makes sense to split at Node4 and Node14. Note that this case has edge weights changed for edges 8-12, 6-10 and 10-11. The important thing in this case is the retracing of a wire after being split at Node14.
Now provide the response and nothing else.
| This problem is NP-hard. Assume every vertex is a splitter that can split to any number of degrees, then your problem is precisely the Steiner tree problem on a graph , where the set of source and sink vertices are the required vertices. | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/18041', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/7976/']} | jdg_380829 |
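Since the reduction goes to Steiner tree, one practical strategy is the classic metric-closure/MST 2-approximation, which networkx ships out of the box. The sketch below is illustrative only: the small weighted graph is a made-up placeholder (not the figure from the question), and a plain Steiner tree ignores the per-node splitter limit k, which would need an extra constraint or post-processing.

import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
G.add_weighted_edges_from([
    ("src", "a", 1), ("a", "b", 1), ("b", "sink1", 1),
    ("a", "c", 2), ("c", "sink2", 1), ("b", "sink3", 3), ("c", "sink3", 1),
])
# Terminals = the power source plus all sinks; the returned subgraph is the wiring tree.
T = steiner_tree(G, ["src", "sink1", "sink2", "sink3"], weight="weight")
print(sorted(T.edges()), "total length:",
      sum(d["weight"] for *_, d in T.edges(data=True)))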
stackexchange | llm_judgeable_groundtruth_similarity | 3475345 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How would one go about proving that e.g. the function $f(x)=x^{2}$ is locally Lipschitz-continuous but not globally?
Now provide the response and nothing else.
| This answer assumes: Two different e.g. $5 \times 6$ rectangles at different positions (within the $9 \times 9$ grid) are counted as distinct. You want to exclude e.g. $7 \times 2$ rectangles since such a rectangle is "not greater than" (in the sense it cannot enclose) a $3 \times 3$ square. In other words, you want to count $m \times n$ rectangles where $m \ge 3$ and $n \ge 3$ . Here is a way to count the above without exhaustive listing. The $9\times 9$ grid of $81$ squares has $10$ vertical lines and $10$ horizontal lines, and a rectangle is defined by a pair of vertical lines and a pair of horizontal lines. The number of ways to choose the pair of horizontal lines is ${10 \choose 2} = 45$ if you ignore the size constraint. However, if you require $m \ge 3$ , then you must exclude pairs which are one square apart ( $9$ such pairs) or two squares apart ( $8$ such pairs). So the number of pairs satisfying $m \ge 3$ is $45 - 9 - 8 = 28$ Similarly the number of pairs of vertical lines satisfying $n \ge 3$ is also $28$ . Thus the number of valid rectangles is $28^2 = 784$ . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3475345', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/673547/']} | jdg_380830 |
stackexchange | llm_judgeable_groundtruth_similarity | 9533106 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to detect if the user is using a tablet or a phone? As an example, a person surfing the web using a tablet (any Android tablet with version 3+, or an iPad) would surely like to view the same, not stripped-down, version as a person sitting with a desktop computer. And a phone surfer would surely prefer the stripped-down version of the site because it's quicker to load and might be easier to navigate with your thumb. This could be done by checking the userAgent or screen width as found here: What is the best way to detect a mobile device in jQuery? But the problem comes with a phone like the Google Galaxy Nexus, which has the same resolution as a tablet but only half the screen size. In my opinion it would be nicer to show the mobile version since the screen is small, even though the resolution is high. Is there a way to detect this or do I have to make a compromise?
Now provide the response and nothing else.
| I think you're making a fundamentally arbitrary distinction between tablet, phone, or any other web enabled device here. It seems like the physical dimensions of the screen is the metric you want to use to dictate the content you serve. In this case I would try to implement logic based on values passed by the user agent in the HTTP headers ( [mobiforge.com...] ) and degrade gracefully to prompting the user if information isn't available. Your logic might look a bit like this: If the user agent supplies a physical screen size in HTTP headers physical dimensions = UA value. otherwise, if the user agent supplies a resolution and pixel dimensions physical dimensions = pixel dimensions divided by resolution. (optionally) otherwise, use client side script to detect resolution and PPI otherwise, if the user agent string looks like some kind of mobile device (regex) prompt the user to select. otherwise assume a default option. Update :My answer is now three years old. It's worth noting that support for responsive design has progressed significantly and its now common to deliver the same content to all devices and rely on css media queries to present the site in a way that is most effective for the device it is being viewed on. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9533106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/964887/']} | jdg_380831 |
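Written out as a server-side sketch in Python, the fall-through logic above looks roughly like this (the header name, the 6-inch threshold and the regex are illustrative assumptions, not a standard):

import re, math

def choose_layout(headers, screen=None, ask_user=None):
    if "x-physical-screen-size" in headers:                 # 1. UA-supplied physical size
        diag_inches = float(headers["x-physical-screen-size"])
    elif screen and screen.get("ppi"):                      # 2. derive from resolution / PPI
        diag_inches = math.hypot(screen["width"], screen["height"]) / screen["ppi"]
    elif re.search(r"Mobile|Android|iPhone|iPad", headers.get("user-agent", "")):
        return ask_user() if ask_user else "mobile"         # 3. looks mobile: ask the user
    else:
        return "desktop"                                    # 4. default option
    return "desktop" if diag_inches >= 6 else "mobile"

# A Galaxy-Nexus-like device: high resolution but a small physical screen -> "mobile"
print(choose_layout({"user-agent": "Mozilla/5.0 (Linux; Android) Mobile"},
                    screen={"width": 1280, "height": 720, "ppi": 316}))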
stackexchange | llm_judgeable_groundtruth_similarity | 341557 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Here is a simple statistics question I was given. I'm not really sure I understand it. X = the number of acquired points in an exam (multiple choice, and a right answer is one point). Is X binomially distributed? The professor's answer was: Yes, because there are only right or wrong answers. My answer: No, because each question has a different "success probability" p. As I understand it, a binomial distribution is just a series of Bernoulli experiments, each of which has a simple outcome (success or failure) with a given success probability p (and all are "identical" with regard to p). E.g., flipping a (fair) coin 100 times is 100 Bernoulli experiments, all with p=0.5 . But here the questions have different values of p, right?
Now provide the response and nothing else.
| I would agree with your answer. Usually this kind of data would nowadays be modeled with some kind of Item Response Theory model. For example, if you used the Rasch model , then the binary answer $X_{ni}$ would be modeled as $$\Pr \{X_{ni}=1\} =\frac{e^{{\beta_n} - {\delta_i}}}{1 + e^{{\beta_n} - {\delta_i}}}$$ where $\beta_n$ can be thought as $n$-th persons ability and $\delta_i$ as $i$-th question difficulty. So the model enables you to catch the fact that different persons vary in abilities and questions vary in difficulty, and this is the simplest of the IRT models. Your professors answer assumes that all questions have same probability of "success" and are independent, since binomial is a distribution of a sum of $n$ i.i.d. Bernoulli trials. It ignores the two kinds of dependencies described above. As noticed in the comments, if you looked at the distribution of answers of a particular person (so you don't have to care about between-person variability), or answers of different people on the same item (so there is no between-item variability), then the distribution would be Poisson-binomial, i.e. the distribution of the sum of $n$ non-i.i.d. Bernoulli trials. The distribution could be approximated with binomial, or Poisson, but that's all. Otherwise you're making the i.i.d. assumption. Even under "null" assumption about guessing, this assumes that there is no guessing patterns, so people do not differ in how they guess and items do not differ in how they are guessed--so the guessing is purely random. | {} | {'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/341557', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/205121/']} | jdg_380832 |
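A quick simulation makes the point about non-identical p concrete (my own illustration; the per-question probabilities are made up, and this models only item difficulty, not person ability):

import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.9, 0.8, 0.7, 0.5, 0.5, 0.4, 0.3, 0.2, 0.15, 0.1])   # per-question p
n_students = 100_000

same_p = rng.binomial(len(p), p.mean(), n_students)                  # binomial model
varied = (rng.random((n_students, len(p))) < p).sum(axis=1)          # Poisson-binomial

print("means:", same_p.mean(), varied.mean())   # nearly equal (~4.55)
print("vars :", same_p.var(),  varied.var())    # variance is smaller with per-question p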
stackexchange | llm_judgeable_groundtruth_similarity | 14914200 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the current state of c++11 (say gcc 4.7.2), how should I choose between using a variadic-template or a std::initializer_list when I need a constructor that can take variable arguments?
Now provide the response and nothing else.
| A variadic template allows you providing arguments of different types, while an std::initializer_list is templated with the type of the argument. This means the type of all the elements in the list must be the same (or convertible to the underlying type, but no narrowing conversions are allowed). Depending on whether or not this is desirable for you, you may choose one or the other. Also, a variadic template is usually the default choice if you need perfect forwarding , in that the syntactic form T&& can bind to both lvalue references and rvalue references, while a similar type deduction cannot be performed for initializer_list : struct A{ // Deduces T& for lvalue references, T for rvalue references, and binds to both template<typename... Ts> A(Ts&&...) { } // This is an rvalue reference to an initializer_list. The above type deduction // does not apply here template<typename T> A(initializer_list<T>&&) { }}; Also notice, that a constructor accepting an initializer_list will be invoked by default when you use uniform initialization syntax (i.e. curly braces), even though another viable constructor exists. This may or may not be something you wish to have: struct A{ A(int i) { }};struct B{ B(int) { } B(std::initializer_list<A>) { }};int main(){ B b {1}; // Will invoke the constructor accepting initializer_list} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14914200', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/975129/']} | jdg_380833 |
stackexchange | llm_judgeable_groundtruth_similarity | 36742258 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am starting to learn lambdas and I do not understand why the Java Map has getOrDefault(Object key, V defaultValue) and not (working just the same, but if the value is absent, then the defaultValue would be taken from a supplier): getOrUseSupplier(Object key, Supplier<V> defaultValue) Advantages I currently see of the current solution: defaultValue does not have to be a final/effectively final value; it looks simpler; no need to know lambda syntax. Disadvantages: if while using getOrDefault we put a new object creation there, it will be created and passed to GC immediately (while using a supplier, it would not be created at all). I would like to know if there are any more disadvantages of using & having getOrDefault rather than getOrUseSupplier. Could you also tell me if anywhere in the Java libraries there is a method like this: static <V> V getOrUseSupplier(Map<?, V> map, Object key, Supplier<V> supplier) that tries to take the value from the map, and if it does not exist then takes the value from the Supplier.
Now provide the response and nothing else.
| The closest equivalent of getOrUseSupplier() in Map is named computeIfAbsent() which allows for the value to be computed using the key, giving more flexibility than if it only took a Supplier . It also stores the computed value in the Map , unlike getOrDefault . This is because they have distinct use cases and are not really related. While getOrDefault is usually used to return a "safe" non-null default value (such as returning empty string instead of a null) indicating that something should be in the map, computeIfAbsent() implies that something must be in the map, and if it's not, it needs to be created or otherwise the internal state of the program is not correct. The following example ignores the key and just uses the supplier's value. public static <V,T> V getOrUseSupplier(Map<T, V> map, T key, Supplier<V> supplier) { return map.computeIfAbsent(key, k -> supplier.get());} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36742258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3009344/']} | jdg_380834 |
stackexchange | llm_judgeable_groundtruth_similarity | 64635319 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I spawn 2 threads on a single-core PC, can they ever access, for example, an ArrayList at the same time so that it will throw a ConcurrentModificationException ? My gut tells me that although there are 2 threads, they cannot achieve true parallelism because there is a single core, and the most it can do is jump from one thread to another, but without executing an instruction such as arrayList.add(element) at the same time.
Now provide the response and nothing else.
| TL;DR: Yes List<String> myList = new ArrayList<String>(Arrays.asList("My string")); Iterator<String> myIterator = myList.iterator(); myList.add("Another string"); myIterator.next(); Result: Exception in thread "main" java.util.ConcurrentModificationException at java.base/java.util.ArrayList$Itr.checkForComodification(ArrayList.java:1042) at java.base/java.util.ArrayList$Itr.next(ArrayList.java:996) at com.ajax.YourClass.yourMethod(YourClass.java:134) You shouldn’t modify the collection while iterating over it. In practice the ConcurrentModificationException usually comes (but is not guaranteed) when you call next() on an iterator after having added an element or removed one. And in practice it often happens when you add or remove an element from inside a loop iterating over the collection, as Carciganicate said in the comment. Or as ernest_k put it so well in the comment: "Concurrent" in ConcurrentModificationException is not really about parallelism | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/64635319', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11113108/']} | jdg_380835 |
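To make the single-threaded case concrete, here is an editor's sketch (not from the original answer) of the common loop pitfall the answer mentions, plus the safe alternatives — all on one thread, one core:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class SingleThreadedCme {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));

        // This enhanced for-loop would throw ConcurrentModificationException even on a
        // single thread, because the list is structurally modified mid-iteration:
        // for (String s : list) {
        //     if (s.equals("b")) list.remove(s);
        // }

        // Safe alternative 1: remove through the iterator itself.
        for (Iterator<String> it = list.iterator(); it.hasNext(); ) {
            if (it.next().equals("b")) {
                it.remove();
            }
        }

        // Safe alternative 2: removeIf, which does the same thing internally.
        list.removeIf(s -> s.equals("a"));

        System.out.println(list); // [c]
    }
}
```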
stackexchange | llm_judgeable_groundtruth_similarity | 1811048 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Parrot is the virtual machine originally designed for Perl 6. What technical capabilities does the Parrot VM offer that competing virtual machines such as the Java Virtual Machine (JVM) / Hotspot VM and Common Language Runtime (CLR) lack?
Now provide the response and nothing else.
| The following answer was written in 2009. See also this 2015 update by raiph . To expand on @Reed and point out some highlights, Parrot's opcodes are at a far higher level than most virtual machines. For example, while most machines store integers and floats, the basic registers are integers, numbers, strings and Parrot Magic Cookies (PMCs). Just having strings built in is a step up from the JVM. More interesting is the PMC, sort of like JVM's object type but far more fungible. PMCs are a container for all the other more complicated types you need in a real language like arrays, tables, trees, iterators, I/O etc. The PMC and the wide variety of built-in ops for it mean less work for the language writer. Parrot does not shy away from the messy but necessary bits of implementing a language. My information may be out of date, but I believe opcodes are pluggable: you can ship a Parrot VM that only contains the opcodes your language needs. They were also going to be inheritable: if your language wants its arrays to work a little differently from stock Parrot arrays, you can subclass them. Finally, Parrot can be programmed not just in assembler ( PASM ) but also in a slightly higher-level language, Parrot Intermediate Representation (PIR) . PIR has loops, subroutines, localized variables and some basic math and comparison ops, all the basics people expect in a programming language, without getting too far away from the metal. All in all, Parrot is very friendly to language designers (it is written by and for them) who want to design a language and leave as much of the implementation as possible to somebody else. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1811048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/58394/']} | jdg_380836 |
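For readers who have never seen it, here is roughly what the PIR mentioned above looks like — an editor's sketch of a trivial counted loop written from memory, so exact syntax details may differ between Parrot releases:

```
.sub 'main' :main
    .local int    i
    .local string greeting
    greeting = "hello from PIR"
    i = 0
  loop:
    if i >= 3 goto done   # basic comparison + branch
    say greeting          # 'say' prints its argument with a trailing newline
    inc i
    goto loop
  done:
.end
```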
stackexchange | llm_judgeable_groundtruth_similarity | 5382167 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How could I do this? 1 -> 014 -> 049 -> 0911 -> 1122 -> 22 Thanks!
Now provide the response and nothing else.
| You don't want to set the DataContext on the UserControl. Instead, you want to set it in the scope of the UserControl. Usually you do this in the constructor of the UserControl. I usually add a line like this: this.RootElement.DataContext = myData; Where RootElement is the first sub-element (the Content) of your UserControl (usually a panel like Grid or StackPanel). In your case it would be: this.lsvwOutput.DataContext = FindResource("myDataSource"); And make sure that it's after the InitializeComponent() call. It's just a question of scoping. You set the datacontext on the root panel of the usercontrol. This is a really non-obvious part of WPF. UPDATE: As Markus points out below, in the case of a listview, you want to set an array of data, not just a data point. Take that into consideration when setting the DataContext in your constructor. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5382167', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/669981/']} | jdg_380837 |
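A rough sketch of the pattern the answer describes, as it might look in a UserControl's code-behind. PersonEditor, PersonViewModel and LayoutRoot are illustrative names invented here (not from the original question), and the matching XAML with x:Name="LayoutRoot" on the root panel is assumed rather than shown:

```csharp
using System.Windows.Controls;

public class PersonViewModel
{
    public string Name { get; set; }
}

public partial class PersonEditor : UserControl
{
    public PersonEditor()
    {
        InitializeComponent();

        // Scope the DataContext to the root panel rather than to the UserControl itself,
        // so bindings the host page sets on the control are not overridden.
        LayoutRoot.DataContext = new PersonViewModel { Name = "Ada" };
    }
}
```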
stackexchange | llm_judgeable_groundtruth_similarity | 57803271 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use the library: https://www.npmjs.com/package/react-tooltip . I have a too long text, which is on one line, how can put it on multiple lines? <p data-tip="hello world">Tooltip</p><ReactTooltip className="tooltip"/>.tooltip { width: 100px;}
Now provide the response and nothing else.
| You can use either the html={true} or the multiline={true} property to handle the multi-line scenario. html var tooltiptest = 'this is <br /> a test';<div data-tip={tooltiptest} data-for='path'>Path</div><ReactTooltip id='path' type='light' html={true} /> your example: <p data-for='path' data-tip="hello <br /> world">Tooltip</p><ReactTooltip id='path' className="tooltip" html={true} />.tooltip { width: 100px;} multiline <span data-tip='tooltip<br />multiline'></span><ReactTooltip multiline={true} /> your example <p data-tip="hello <br /> world">Tooltip</p><ReactTooltip className="tooltip" multiline={true} />.tooltip { width: 100px;} source - reference1 reference2 if you need to handle word-wrap for dynamically driven content, please follow the style below. import React from "react";import { withStyles } from "@material-ui/core/styles";import ReactTooltip from "react-tooltip";const styles = theme => ({ overrideMe: { width: "100px", "word-break": "break-all", "overflow-wrap": "break-word", display: "block" }});class Opener extends React.PureComponent { constructor(props) { super(props); this.state = { open: false }; } render() { const { classes } = this.props; return ( <div> <div>test content</div> <div>test content</div> <div> <p data-for="tt" data-tip="hello ccsdcssd csdccdsc ccdc sdcscds world" > Tooltip - hover me </p> <ReactTooltip className={classes["overrideMe"]} data-html={true} insecure={true} multiline={true} id="tt" /> </div> </div> ); }}export default withStyles(styles)(Opener); play with the code - code sandbox | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/57803271', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9974904/']} | jdg_380838 |
stackexchange | llm_judgeable_groundtruth_similarity | 32872465 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
EDIT: The question about why the code in this question works has been answered by the linked question in the duplicate marking. The question about string literal lifetime is answered in the answer to this question. I am trying to understand how and when the string pointed to by const char * gets deallocated. Consider: const char **p = nullptr;{ const char *t = "test"; p = &t;}cout << *p; After leaving the inner scope I would expect p to be a dangling pointer to const char * . However in my tests it is not. That would imply that the value of t actually continues to be valid and accessible even after t gets out of scope. It could be due to prolonging the lifetime of the temporary by binding it to const reference. But I do no such thing and even by saving the reference to t in a member variable and printing the value from different function later still gives me its correct value. class CStringTest{public: void test1() { const char *t = "test"; m_P = &t; test2(); } void test2() { cout << *m_P; }private: const char **m_P = nullptr;}; So what is the lifetime of the t 's value here? I would say I am invoking undefined behaviour by dereferencing a pointer to a value of a variable that went out of scope. But it works every time so I think that is not the case. When trying some other type like QString : QString *p = nullptr;{ QString str = "test"; p = &str;}cout << *p; the code always prints the value correctly too even though it should not. str went out of scope with its value and I have not prolonged its lifetime by binding it to const reference either. Interestingly the class example with QString behaves as I would expect and test2() prints gibberish because the value indeed went out of scope and m_P became dangling pointer. So what is the actual lifetime of const char * 's value?
Now provide the response and nothing else.
| The variables p and t are stack variables that you declared, so they have a lifetime that ends at the end of their enclosing block. But the value of t is the address of the string literal "test" , and that is not a variable you declared, it's not on the stack. It's a string literal, which is a constant defined in the program (similar to the integer literal 99 or the floating point literal 0.99 ). Literals don't go out of scope as you expect, because they are not created or destroyed, they just are . The standard says: Evaluating a string-literal results in a string literal object with static storage duration, initialized from the given characters as specified above. So the object that the compiler creates to represent the literal "test" has static storage duration, which is the same duration as globals and static variables, meaning it doesn't go out of scope like a stack variable. The value of p is the address of t , which does become an invalid pointer when t goes out of scope, but that doesn't mean that the value stored at that address suddenly becomes inaccessible or gets wiped. The expression *p is undefined behaviour, but it appears to work because nothing has reused that memory location yet so *p still contains the address of the string literal. For more details on that see the top answer to Can a local variable's memory be accessed outside its scope? | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32872465', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2194193/']} | jdg_380839 |
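A small sketch (editor's addition, not from the original answer) showing the practical consequence of the static storage duration described above — a pointer to a string literal can safely outlive the function that produced it, unlike a pointer to a local array:

```cpp
#include <iostream>

const char* literal_ptr() {
    const char* p = "static storage";  // the literal object outlives this function
    return p;                          // OK: points at an object with static storage duration
}

const char* local_array_ptr() {
    char buf[] = "automatic storage";  // a local copy of the characters, on the stack
    return buf;                        // compilers warn here: buf dies when the function returns
}

int main() {
    std::cout << literal_ptr() << '\n';  // well-defined: the literal still exists
    // std::cout << local_array_ptr();   // would dereference a dangling pointer: undefined behaviour
}
```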
stackexchange | llm_judgeable_groundtruth_similarity | 35462131 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using jquery.validate v1.14.0 by Jörn Zaefferer <script src="~/lib/jquery-validation/dist/jquery.validate.js"></script><script src="~/lib/jquery-validation-unobtrusive/jquery.validate.unobtrusive.js"></script> I have a project which all the validation worked correctly. When a required fields wasn't populated the client side validation would pick it up before the form was sumitted. However I moved all these forms to boostrap modals and a page's form gets loaded dynamically inside the modal via an ajax call. After this change my client side validation never fires and I also notice the validation plugin doesn't add the tag novalidate="novalidate" to the form like it did before. Here is my code simplified I then use the following jquery as I'd like to catch all form submits and do some additional logic: $(document).on('submit', '.formDetail', function (e) { e.preventDefault(); //added this to see if it would throw error but it doesn't var form = $(".formDetail"); form.validate(); //this is always true if ($(this).valid()) { $.ajax({ processData: false, contentType: false, data: new FormData(this), type: $(this).attr('method'), url: $(this).attr('action'), success: function (data) { //other logic } }); } }); If String1 is left blank and I submit and step through the javascript console '($(this).valid())' is alwasy true . If I change it to ($(".formDetail").valid()) the result is same i.e. always true . It also doesn't show the error message on screen before the HTTP POST like it did before I switched to dynamic ajax modal forms. How do I get valid() to work. There are no javascript error in my console when stepping through the code and valid() is always true which makes me no what causes validation to always be successful. If the jquery validation plugin was not found I would surely get a javascript console error but I'm not. I tried manually adding the novalidate=novalidate tag to the form but this doesn't help.I think the fact that this doesn't get populated tells me something is not right. You can see in my code I did try to force the validate() but this didn't make a difference.
Now provide the response and nothing else.
| Try this $(document).on('submit', '.formDetail', function (e) { e.preventDefault(); $(this).removeData("validator").removeData("unobtrusiveValidation");//remove the form validation $.validator.unobtrusive.parse($(this));//add the form validation if ($(this).valid()) { $.ajax({ processData: false, contentType: false, data: new FormData(this), type: $(this).attr('method'), url: $(this).attr('action'), success: function (data) { //other logic } }); }}); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35462131', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2349045/']} | jdg_380840 |