source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10)
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 322085 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $V$ be a complete and cocomplete cartesian closed category. Feel free to assume more about $V$ if necessary; in my application $V$ is simplicial sets, so it is a presheaf topos and hence has all sorts of nice properties you might want (except that its underlying-set functor $V(1,-) : V \to \rm Set$ is not faithful or conservative). Let $C$ be a complete and cocomplete $V$-enriched category with powers and copowers (a.k.a. cotensors and tensors), hence all $V$-enriched weighted limits. And suppose that the underlying ordinary category $C_0$ is cartesian closed, i.e. we have natural isomorphisms of hom-sets $$ C_0(X\times Y, Z) \cong C_0(X, Z^Y). $$ Is it necessarily the case that $C$ is also cartesian closed in the $V$-enriched sense, i.e. we have $V$-natural isomorphisms of $V$-valued hom-objects $$ C(X\times Y, Z) \cong C(X, Z^Y)? $$ I can't decide whether I think this is likely to be true or not. I used to believe that it was true, and I believe I have implicitly used it (or more precisely its generalization to the locally cartesian closed case, which should follow from the simple one applied to enriched slice categories) in a published paper or two. However, right now I can't see how to prove it, and when stated clearly it sounds unlikely, since usually it is an extra condition on an adjunction between enriched categories to be an enriched adjunction (even when one of the functors is known to be an enriched functor, as is the case here for the cartesian product). On the other hand, it is true when $C=V$ itself: a cartesian closed category, defined by an isomorphism of hom-sets, automatically enhances to an isomorphism of hom-objects $Z^{X\times Y} \cong (Z^X)^Y$.
Now provide the response and nothing else.
| I believe I managed to cook up an actual counterexample where both $C$ and $V$ are presheaf toposes. I'm going to leave my original attempt below since I still think it is instructive. Let $V$ be presheaves over the category of finite sets $\mathsf{Fin}$ and let $C$ be presheaves over the poset $\mathbb{N}$ (i.e., towers of sets). Then $C$ carries a $V$-enrichment given as $C(X, Y)_A = C_0(X, m \mapsto Y_{m |A|})$ for $A \in \mathsf{Fin}$ and $X, Y \in C$. This makes $C$ into a (co)tensored $V$-category, but internal homs in $C$ are not enriched. This does look quite cryptic. Let me explain this as an instance of a general construction. Let's take two small categories $P$ and $J$ with $P$ symmetric monoidal and acting on $J$. Then we set $V$ to be presheaves on $P$ with Day convolution and $C$ presheaves on $J$. We construct a $V$-enrichment as follows. Let $F_p \colon J \to J$ be the action of $p \in P$ on $J$. This extends via a left Kan extension to a functor $L_p \colon C \to C$ which has a right adjoint $R_p \colon C \to C$. We set $C(X, Y)_p = C_0(X, R_p Y)$. This makes $C(X, Y)$ into a $P$-presheaf and the composition operations are induced by the $P$-action on $J$. We also have (co)tensors. The tensor of $X \in C$ by a representable presheaf $P(-,p)$ is $L_p X$ and the cotensor is $R_p X$ (and they extend by (co)limits to all presheaves). $C$ is clearly cartesian closed and I believe there are many cases when the internal homs are not $V$-enriched. My old example below has a variant with $P = \mathbb{Z}/2$ (as a discrete monoidal category) and $J = \Delta$ with action by opposites. In order to make $V$ cartesian, we can just take $P$ cartesian, e.g., $P = \mathsf{Fin}$. Then a natural candidate for $J$ is some category with finite coproducts so that $P$ acts by tensors. I took $J = \mathbb{N}$ to keep things easy, but I don't know if this is really the simplest choice. 
If we compute the functors $R_A$ for $A \in \mathsf{Fin}$, we indeed get $(R_A X)_m = X_{m |A|}$. Now, let $A = \mathbf{2}$ be a two-element set. Then $C(X \times Y, Z)_\mathbf{2} = C_0(X \times Y, R_\mathbf{2} Z) = C_0(X, (R_\mathbf{2} Z)^Y)$ and $C(X, Z^Y)_\mathbf{2} = C_0(X, R_\mathbf{2} (Z^Y))$, but $(R_\mathbf{2} Z)^Y$ and $R_\mathbf{2} (Z^Y)$ are clearly different in general. Original answer This is not an answer, but it is a bit too long for a comment. And perhaps it provides some hint towards an actual solution. Basically, I have a counterexample where the monoidal structure of $V$ is not cartesian, but I don't see if it can be modified to make it cartesian. Let $V$ be the category of $\mathbb{Z}/2$-graded sets. The category $C = \mathsf{Cat}$ has a nice $V$-enrichment where even morphisms are covariant functors and odd morphisms are contravariant functors. It also has (co)tensors: the tensor of a category $X$ by a graded set $A$ is $X \times A_0 \sqcup X^{\mathrm{op}} \times A_1$ and the cotensor is $X^{A_0} \times (X^{\mathrm{op}})^{A_1}$. However, the internal homs in $C$ (with respect to the cartesian monoidal structure) are not enriched. Indeed, the odd elements of $C(X \times Y, Z)$ are functors that are contravariant in both $X$ and $Y$, but the odd elements of $C(X, Z^Y)$ are functors that are contravariant only in $X$. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/322085', 'https://mathoverflow.net', 'https://mathoverflow.net/users/49/']} | jdg_379941 |
stackexchange | llm_judgeable_groundtruth_similarity | 50210 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am puzzled as to why the caret package in R does not allow tuning of the number of trees (ntree) in a random forest (specifically in the randomForest package). I can't imagine this is an oversight on the part of the package author - so there must be a reason for it. Can anyone shed light?
Now provide the response and nothing else.
| In theory, the performance of a RF model should be a monotonic function of ntree that plateaus beyond a certain point once you have 'enough' trees. This makes ntree more of a performance parameter than a Goldilocks parameter that you would want to tune. Caret tends to focus on tuning parameters that perform poorly for high and low values in which you want to find the happy medium. In practice I believe there may have been studies that have found performance does reduce for very large ntree values but even if this is true the effect is subtle and requires very large forests. There are at least 2-3 other parameters to RF that Caret doesn't tune for the same reasons as ntree. | {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/50210', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/2040/']} | jdg_379942 |
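The plateau behaviour the answer describes can be illustrated with a toy simulation (sketched in Python rather than R, since no caret machinery is needed): model each "tree" as an independent classifier that is right with some fixed probability, and watch majority-vote accuracy as the ensemble grows. The success probability, trial count, and seed below are arbitrary choices for the illustration.

```python
import random

def majority_vote_accuracy(n_members, p_correct=0.65, trials=2000, seed=1):
    """Estimate the accuracy of a strict-majority vote over n_members
    independent classifiers, each correct with probability p_correct."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        n_right = sum(rng.random() < p_correct for _ in range(n_members))
        if 2 * n_right > n_members:  # strict majority got it right
            wins += 1
    return wins / trials

# Accuracy rises monotonically with ensemble size, then flattens out:
# past a certain point, adding more members barely helps.
for n in (1, 5, 25, 125):
    print(n, round(majority_vote_accuracy(n), 3))
```

This mirrors the answer's point: ntree behaves like a "more is (weakly) better" resource knob with diminishing returns, not a Goldilocks parameter with a sweet spot in the middle worth searching for.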
stackexchange | llm_judgeable_groundtruth_similarity | 36341485 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a guard equivalent of checking if a variable is nil? If so, how would I translate a statement like this to use guard instead? if post["preview"]! != nil { //do stuff} else { //handle case where the variable is nil}
Now provide the response and nothing else.
| Like some people already answered, you can use let guard let preview = post["preview"] else { /* Handle nil case */ return } If you are not using the variable, you can use an underscore to not declare the variable and avoid the warning. guard let _ = post["preview"] else { /* Handle nil case */ return } You can also do a regular boolean check instead of using let guard post["preview"] != nil else { /* Handle nil case */ return } A more general case for a boolean check on a guard guard conditionYouExpectToBeTrue else { /* Handle nil case */ return } If you want to be able to modify the variable, you can use var instead of let guard var preview = post["preview"] else { /* Handle nil case */ return } Swift 3.0 You can combine var/let with a boolean check by using commas between the statements. guard let preview = post["preview"], preview != "No Preview" else { /* Handle nil case */ return } Swift 2.x You can combine var/let with the boolean check by using where: guard let preview = post["preview"] where preview != "No Preview" else { /* Handle nil case */ return } | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/36341485', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3517395/']} | jdg_379944 |
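The answer above is Swift-specific, but the early-exit shape that guard encodes exists in most languages. As a rough Python analogue (the "preview" key and the "No Preview" sentinel are just the question's example carried over, not a real API):

```python
def preview_length(post):
    """Guard-style early exits: handle each failing precondition
    immediately, keeping the happy path at the top indentation level."""
    preview = post.get("preview")
    if preview is None:            # ~ guard let preview = post["preview"] else { return }
        return None
    if preview == "No Preview":    # ~ guard ..., preview != "No Preview" else { return }
        return None
    return len(preview)

print(preview_length({"preview": "hello"}))       # happy path
print(preview_length({}))                         # guard fails: key missing
print(preview_length({"preview": "No Preview"}))  # guard fails: sentinel value
```

The design idea is the same as Swift's guard: each check that fails exits the scope right away, so the code after the checks can assume all preconditions hold.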
stackexchange | llm_judgeable_groundtruth_similarity | 25611908 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here's my naive first pass code: var httpUrlResponse: NSHTTPURLResponse? // = (...get from server...)let contentType = httpUrlResponse?.allHeaderFields["Content-Type"] I've tried various derivations of this code, but I keep getting compiler warnings/errors related to the basic impedance mismatch between the NSDictionary type of the allHeaderFields property and my desire to just get a String or optional String. Just not sure how to coerce the types.
Now provide the response and nothing else.
| You can do something like the following in Swift 3: if let httpResponse = response as? HTTPURLResponse, let contentType = httpResponse.value(forHTTPHeaderField: "Content-Type") { // use contentType here}task.resume() Obviously, here I'm going from the URLResponse (the response variable) to the HTTPURLResponse . And rather than fetching allHeaderFields , I’m using value(forHTTPHeaderField:) which is typed and uses case-insensitive keys. Hopefully this illustrates the idea. For Swift 2, see previous revision of this answer . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25611908', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/56581/']} | jdg_379944 |
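The key detail in the answer is that value(forHTTPHeaderField:) performs a typed, case-insensitive lookup, which raw access to the allHeaderFields dictionary does not. A sketch of that lookup logic in Python (the header dictionary here is made up for illustration):

```python
def header_value(headers, name):
    """Case-insensitive header lookup, mimicking what a typed accessor
    like value(forHTTPHeaderField:) does over a raw header dictionary."""
    wanted = name.lower()
    for key, value in headers.items():
        if key.lower() == wanted:
            return value
    return None

headers = {"Content-Type": "application/json; charset=utf-8"}
print(header_value(headers, "content-type"))  # found despite the case mismatch
print(header_value(headers, "ETag"))          # missing header -> None
```

This is why the case-insensitive accessor is preferable: servers are free to send `content-type`, `Content-Type`, or any other casing, and an exact-key dictionary lookup would silently miss the mismatched variants.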
stackexchange | llm_judgeable_groundtruth_similarity | 55099253 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My setup is: I have a solution that had different dotnet4.6 applications (services) in it. Now we added a dotnet core project inside this solution. I can build and debug it, but this doesn't create an executable. In Visual Studio I can rightclick -> Publish... it. I created two profiles (x86 and x64) that should create nice binaries in /bin/Publish/x86 or /x64. In VS this works. The application is self-contained and works on different unprepared machines. But now I need to move that process to the build server. I messed around with dotnet publish but in the end I get stuck because other components of the solution are not clean dotnet core and so the build fails. So I need to stick with MSBuild. The current attempt is: "C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\MSBuild.exe" NewProject\NewProject.csproj /p:DeployOnBuild=true /p:UsePublishProfile=true /p:PublishProfile=x64Profile . This says it finished building successfully, but I don't see any results. Also, it doesn't make any difference if I remove all properties and just call msbuild and *.csproj. It just builds the new project in bin/Debug, as a dll, not an exe. I also messed around with p:PublishProfile="NewProject\Properties\PublishProfiles\x64Profile.pubxml" and /p:PublishUrl="NewProject\bin\Publish\x64" but it doesn't change anything. I read a few articles on SO saying that VS doesn't just call msbuild with parameters but does internal API calls. Still, I need a solution. I need the build server to create an executable. Is there a way to trigger msbuild to create that?
Now provide the response and nothing else.
| Oh man, I searched for 2-3 days now. And - as always on StackOverflow - shortly after asking I found a working answer myself. tl;dr: Project.csproj: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFrameworks>netcoreapp2.1</TargetFrameworks> <TargetLatestRuntimePatch>true</TargetLatestRuntimePatch> <GenerateAssemblyInfo>false</GenerateAssemblyInfo> <RootNamespace>Company.Toolset.Exporter</RootNamespace> <AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects> <GenerateBindingRedirectsOutputType>true</GenerateBindingRedirectsOutputType> <RuntimeIdentifiers>win-x86;win-x64</RuntimeIdentifiers> </PropertyGroup> <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v15.0\WebApplications\Microsoft.WebApplication.targets" />... MSBuild command: msbuild Project\Project.csproj -t:restore /t:Build;Publish /p:Configuration=Release /p:Platform=x86 /p:PublishProfile=x86Profile /p:OutputPath=bin/Publish/x86 (and the same for x64) Explanation: I think it was the dotnet build/publish command that wanted me to change TargetFrameworks to TargetFramework . But for MSBuild this is wrong. And dotnet wasn't working here, as the solution is mixing dotnet core and dotnet framework. So that had to be fixed. The <RuntimeIdentifiers>win-x86;win-x64</RuntimeIdentifiers> was needed by the command. I added it to the *.csproj because I know that I build for windows only (at the moment) and that I need both versions. I don't really know why I needed this line <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v15.0\WebApplications\Microsoft.WebApplication.targets" /> but without this publishing and using the PublishProfiles didn't work as expected. Links that helped me to get here: (not sorted) https://github.com/Microsoft/msbuild/issues/1901 https://github.com/aspnet/vsweb-publish/issues/22 How to Publish Web with msbuild? 
ASP.NET Core Application (.NET Framework) for Windows x64 only error in project.assets.json Configure MSBuild output path | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/55099253', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3319147/']} | jdg_379945 |
stackexchange | llm_judgeable_groundtruth_similarity | 31903082 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Simple Injector for testing purposes but am pretty new to OOP. I am trying to create loosely coupled classes. Here is my scenario. I have a User repo and interface like this. public class UserRepository : IUserRepository{ public void Add(Model.User user) { Console.WriteLine("Name:"+user.Name+"\n"+"SurName:"+user.SurName); } public void Delete(int id) { throw new NotImplementedException(); }}public interface IUserRepository{ void Add(User user); void Delete(int id);} My TestInjectedClass class and interface are something like this, which I am planning to use in Program Main. public class TestInjectedClass : ITestInjectedClass{ private readonly IUserRepository _userRepository; public TestInjectedClass(IUserRepository userRepository) { _userRepository = userRepository; } public void UserRepoRun() { var user = new User() {Id = 1,Name = "ada",SurName = "stack"}; _userRepository.Add(user); }}public interface ITestInjectedClass{ void UserRepoRun();} And my console program looks like this: class Program { static ITestInjectedClass _testInjectedClass; private static IUserRepository _userRepository; static void Main(string[] args) { _testInjectedClass= new TestInjectedClass(_userRepository); _testInjectedClass.UserRepoRun(); Console.ReadLine(); } public Program() { Bootstrap.Start(); } } Bootstrap class here: class Bootstrap{ public static void Start() { var container = new Container(); // Register your types, for instance: container.Register<IUserRepository, UserRepository>(Lifestyle.Singleton); container.Register<ITestInjectedClass, TestInjectedClass>(Lifestyle.Singleton); //container.Register<IUserRepository, TestInjectedClass>(Lifestyle.Singleton); //container.Register<IUserContext, WinFormsUserContext>(); container.Register<TestInjectedClass>(); // Optionally verify the container.
container.Verify(); }} My problem: when I run the program, I am getting a value exception on the _userRepository inside TestInjectedClass. How can I properly inject TestInjectedClass and UserRepository into the Main program? Thanks
Now provide the response and nothing else.
| You need to make Bootstrap.container available in Program.Main and then use it to create instances of classes instead of directly calling their constructors directly: _testInjectedClass = Bootstrap.container.GetInstance<ITestInjectedClass>(); Of course you will need to expose it in Bootstrap for that to work: class Bootstrap{ public static Container container; public static void Start() { container = new Container(); // Register your types, for instance: container.Register<IUserRepository, UserRepository>(Lifestyle.Singleton); container.Register<ITestInjectedClass, TestInjectedClass>(Lifestyle.Singleton); //container.Register<IUserRepository, TestInjectedClass>(Lifestyle.Singleton); //container.Register<IUserContext, WinFormsUserContext>(); container.Register<TestInjectedClass>(); // Optionally verify the container. container.Verify(); }} And call Bootstrap.Start from Program.Main : static void Main(string[] args){ Bootstrap.Start(); _testInjectedClass = Bootstrap.container.GetInstance<ITestInjectedClass>(); _testInjectedClass.UserRepoRun(); Console.ReadLine();} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31903082', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4489687/']} | jdg_379946 |
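For intuition about why resolving through the container fixes the exception, here is a minimal sketch of what a Register/GetInstance pair does under the hood (Python rather than C#, so this is not Simple Injector's actual implementation; only the singleton lifestyle is modelled, and string keys stand in for the interface types):

```python
class Container:
    """Toy DI container: map a service key to a factory, cache singletons."""

    def __init__(self):
        self._factories = {}
        self._singletons = {}

    def register(self, key, factory):
        self._factories[key] = factory

    def get_instance(self, key):
        # Singleton lifestyle: build once via the factory, then reuse.
        if key not in self._singletons:
            self._singletons[key] = self._factories[key](self)
        return self._singletons[key]

class UserRepository:
    def add(self, name):
        return f"added {name}"

class TestInjectedClass:
    def __init__(self, repo):
        self.repo = repo  # dependency arrives via the constructor

container = Container()
container.register("IUserRepository", lambda c: UserRepository())
container.register("ITestInjectedClass",
                   lambda c: TestInjectedClass(c.get_instance("IUserRepository")))

tic = container.get_instance("ITestInjectedClass")
print(tic.repo.add("ada"))  # the repository was injected, not new-ed up by hand
```

This is exactly the accepted answer's point: instances must come out of GetInstance, where the container supplies the constructor arguments, rather than out of a direct constructor call passing a field that was never assigned.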
stackexchange | llm_judgeable_groundtruth_similarity | 19835499 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Introduction Since version 5.5, PHP has such a great thing as generators. I will not repeat the official manual page, but they are great for defining iterators concisely. The best-known sample is: function xrange($from, $till, $step){ if ($from>$till || $step<=0) { throw new InvalidArgumentException('Invalid range initializers'); } for ($i = $from; $i < $till; $i += $step) { yield $i; }}//...foreach (xrange(2, 13, 3) as $i){ echo($i.PHP_EOL); // 2,5,8,11} and a generator is actually not a function, but an instance of a concrete class: get_class(xrange(1, 10, 1)); // Generator
Pseudo-code will be: function fibonacci($n){ if (!is_int($n) || $n<0) { throw new InvalidArgumentException('Invalid sequence limit'); } if ($n<2) { yield $n; } yield fibonacci($n-2) + fibonacci($n-1);} But this, obviously, is crap since we can't handle with it like this way because recursion will cause object of class Generator and not int value. Bonus : getting fibonacci sequence is just a sample for more general question: how to use generators with recursion in common case? Of course, I can use standard Iterator for that or re-write my function to avoid recursion. But I want to achieve that with generators. Is this possible? Does this worth efforts to use this such way?
Now provide the response and nothing else.
| So the issue I ran into when attempting to create a recursive generator function, is that once you go past your first depth level each subsequent yield is yielding to its parent call rather than the iteration implementation (the loop). As of php 7 a new feature has been added that allows you to yield from a subsequent generator function. This is the new Generator Delegation feature: https://wiki.php.net/rfc/generator-delegation This allows us to yield from subsequent recursive calls, which means we can now efficiently write recursive functions with the use of generators. $items = ['what', 'this', 'is', ['is', 'a', ['nested', 'array', ['with', 'a', 'bunch', ['of', ['values']]]]]];function processItems($items){ foreach ($items as $value) { if (is_array($value)) { yield from processItems($value); continue; } yield $value; }}foreach (processItems($items) as $item){ echo $item . "\n";} This gives the following output.. whatthisisisanestedarraywithabunchofvalues | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19835499', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2637490/']} | jdg_379947 |
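PHP's generator delegation has a direct counterpart in Python, whose `yield from` (available since 3.3) behaves the same way. The answer's recursive-flatten example translates almost line for line:

```python
def process_items(items):
    """Recursively yield the leaves of a nested list. `yield from`
    delegates to the inner generator, just like PHP's `yield from`."""
    for value in items:
        if isinstance(value, list):
            yield from process_items(value)  # delegate to the recursive call
        else:
            yield value

items = ['what', 'this', 'is', ['is', 'a', ['nested', 'array',
         ['with', 'a', 'bunch', ['of', ['values']]]]]]
print(list(process_items(items)))  # same flat sequence as the PHP output above
```

Without delegation, each inner recursive call would just return an unconsumed generator object; `yield from` is what threads its values out through every level of the recursion to the outer loop.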
stackexchange | llm_judgeable_groundtruth_similarity | 72613 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given a list or string, how do I get a list of all (contiguous) sublists/substrings? The order is not important. Example for lists: list = {1, 2, 3};sublists[list](* {{}, {}, {}, {}, {1}, {2}, {3}, {1, 2}, {2, 3}, {1, 2, 3}} *) Example for strings: string = "abc";substrings[string](* {"", "", "", "", "a", "b", "c", "ab", "bc", "abc"} *)
Now provide the response and nothing else.
| TMTOWTDI applies to both of these problems. Below I present an overview of various approaches I've come across, followed by timing data obtained in 10.4 on Windows 10 (the timing code is available as well, so you can easily rerun the tests on your own machine if you have a different setup). Which solution is best for you depends both on the problem size as well as which ordering of the sublists or substrings you're looking for. One note up front: several implementations use the 10.0 function Catenate . If you're using an older version, this can be replaced with Join @@ which is a bit slower but probably won't change the overall performance ordering of the solutions. Sublists These can be generated very concisely with ReplaceList and an appropriate pattern. sublists1[list_List] := ReplaceList[list, {___, sub___, ___}:>{sub}] To omit empty sublists, simply replace sub___ by sub__ . Alternatively, just generate them explicitly from start and end indices, using Table . You'll need to Catenate the result though, or you'll get the sublists grouped by starting index: sublists2[list_List] := Catenate @ Table[ list[[i ;; j]], {i, Length@list + 1}, {j, i-1, Length@list}] To omit empty sublists, let j start from i instead of i-1 . As of 10.1 there is SequenceCases whose Overlaps option can be used to get all the sublists: sublists3[list_List] := SequenceCases[list, {___}, Overlaps -> All] To omit empty sublists, use {__} instead of {___} . Credits for this approach go to RunnyKine . Instead of using Table we can also construct the index pairs of each sublist from Subsets . However, this only gives one instance of the empty list, so we'll need to add the others manually: sublists4[list_List] := With[{len = Length@list}, Join[ ConstantArray[{}, len], Take[list, #] & /@ Subsets[Range@len, 2] ]] To omit empty sublists, ditch Join and ConstantArray and replace 2 with {1,2} . Credits for this approach go to Kuba. 
As of 10.4 there's even a built-in for this task, but for some reason it also returns only one copy of the empty list: sublists5[list_List] := ConstantArray[{}, Length@list]~Join~Subsequences[list] To omit empty sublists, ditch Join and ConstantArray and give Subsequences an nspec of {1,Infinity} . You can also collect the results of using Partition with overlaps and all possible sublist lengths: sublists6[list_List] := Catenate @ Table[ Partition[list, n, 1], {n, 0, Length@list}] To omit empty sublists, simply remove the 0, . When performance is not a concern, the most important distinguishing feature of these is the order of the returned sublists: list = {1, 2, 3};sublists1@list (* ReplaceList *)sublists2@list (* Table *)sublists3@list (* SequenceCases *)sublists4@list (* Subsets *)sublists5@list (* Subsequences *)sublists6@list (* Partition *)(* {{}, {1}, {1, 2}, {1, 2, 3}, {}, {2}, {2, 3}, {}, {3}, {}} *)(* {{}, {1}, {1, 2}, {1, 2, 3}, {}, {2}, {2, 3}, {}, {3}, {}} *)(* {{1, 2, 3}, {1, 2}, {1}, {}, {2, 3}, {2}, {}, {3}, {}, {}} *)(* {{}, {}, {}, {}, {1}, {2}, {3}, {1, 2}, {1, 2, 3}, {2, 3}} *)(* {{}, {}, {}, {}, {1}, {2}, {3}, {1, 2}, {2, 3}, {1, 2, 3}} *)(* {{}, {}, {}, {}, {1}, {2}, {3}, {1, 2}, {2, 3}, {1, 2, 3}} *) Things to note: ReplaceList orders them by starting index, followed by length. SequenceCases does the same but takes length from longest to shortest. Subsequences and Partitions have them in order of increasing length first. Somewhat weirdly Subset first has the length-0 and length-1 lists, and then lists the remaining ones in the same order as ReplaceList . Note that while the Table approach here uses the same ordering as ReplaceList this one is most easily adapted to different ordering by changing the limits and order of the iterators. Finally, timing results. 
Most notably, SequenceCases is unusably slow for larger lists (so I've omitted it from the longer timing results), but Subsequences also gets much slower than rolling your own implementation: list = RandomInteger[5, 100];RepeatedTiming[sublists1[list];] (*ReplaceList*)RepeatedTiming[sublists2[list];] (*Table*)RepeatedTiming[sublists3[list];] (*SequenceCases*)RepeatedTiming[sublists4[list];] (*Subsets*)RepeatedTiming[sublists5[list];] (*Subsequences*)RepeatedTiming[sublists6[list];] (*Partition*)(* {0.00513, Null} {0.00452, Null} {0.018, Null} <-- nope {0.00378, Null} {0.00208, Null} {0.00187, Null}*) list = RandomInteger[5, 1000];RepeatedTiming[sublists1[list];] (*ReplaceList*)RepeatedTiming[sublists2[list];] (*Table*)RepeatedTiming[sublists4[list];] (*Subsets*)RepeatedTiming[sublists5[list];] (*Subsequences*)RepeatedTiming[sublists6[list];] (*Partition*)(* {2.81, Null} {1.2, Null} {1.2, Null} {2.3, Null} {2.0, Null}*) list = RandomInteger[5, 1200];RepeatedTiming[sublists1[list];] (*ReplaceList*)RepeatedTiming[sublists2[list];] (*Table*)RepeatedTiming[sublists4[list];] (*Subsets*)RepeatedTiming[sublists5[list];] (*Subsequences*)RepeatedTiming[sublists6[list];] (*Partition*)(* {4.6, Null} {2.0, Null} {2.0, Null} {5.1, Null} {6.0, Null}*) In summary, the Table solution is the overall winner in terms of flexibility and performance, but it's good to have some other options for a terse quick-and-dirty solution when you can afford to prioritise readability. Substrings There are several more possibilities for strings (which usually map to an approach above), because you can usually either use specific string functions or convert the string to a list of characters to reduce it to one of the above implementations. (And unfortunately, it's not always the case that the string-based solution is faster.) 
Adapt the above ReplaceList approach by applying Characters to the string first, and then replace {sub} by StringJoin[sub] : substrings1[string_String] := ReplaceList[ Characters @ string, {___, sub___, ___} :> StringJoin[sub]] To omit empty strings, use sub__ . Write a similar version with StringReplaceList and StringExpression for pattern. The catch is to anchor the StringExpression , otherwise you'll get horrible amounts of duplicates. substrings2[string_String] := StringReplaceList[ string, StartOfString ~~ ___ ~~ sub___ ~~ ___ ~~ EndOfString :> sub] To omit empty strings, use sub__ . The StringReplaceList version can be written more compactly with a RegularExpression : substrings3[string_String] := StringReplaceList[ string, RegularExpression["^.*(.*).*$"] :> "$1"] To omit empty strings, use the regex "^.*(.+).*$" . Adapt the above Table approach, similar to 1, by obtaining the characters first: substrings4[string_String] := Module[{chars = Characters@string}, Join @@ Table[ StringJoin @@ chars[[i ;; j]], {i, Length@chars + 1}, {j, i - 1, Length@chars} ]] To omit empty strings, let j start from i . Write a similar version which uses string manipulation functions: substrings5[string_String] := Join @@ Table[ StringTake[string, {i, j}], {i, StringLength@string + 1}, {j, i - 1, StringLength@string} ] To omit empty strings, let j start from i . Adapt the above SequenceCases approach using StringCases . substrings6[string_String] := StringCases[string, ___, Overlaps -> All] To omit empty strings, use __ . Credits for this approach go to SquareOne. Adapt Kuba's Subsets solution: substrings7[string_String] := With[{len = StringLength@string}, Join[ ConstantArray["", len], StringTake[string, #] & /@ Subsets[Range@len, 2] ]] To omit empty strings, ditch Join and ConstantArray and replace 2 with {1,2} . 
Adapt the Partition solution using StringPartition , which was added in 10.1 and updated in 10.4: substrings8[string_String] := Catenate@Table[ StringPartition[string, n, 1], {n, 0, StringLength@string}] To omit empty strings, remove the 0, . There is no string pendant to Subsequences at this point. Let's look at the order of the results: string = "abc";substrings1[string](*ReplaceList*)substrings2[string](*StringReplaceList + StringPattern*)substrings3[string](*StringReplaceList + RegularExpression*)substrings4[string](*Table + Characters*)substrings5[string](*Table + StringTake*)substrings6[string](*StringCases*)substrings7[string](*Subsets*)substrings8[string](*StringPartition*)(* {"", "a", "ab", "abc", "", "b", "bc", "", "c", ""} {"", "c", "", "bc", "b", "", "abc", "ab", "a", ""} {"", "c", "", "bc", "b", "", "abc", "ab", "a", ""} {"", "a", "ab", "abc", "", "b", "bc", "", "c", ""} {"", "a", "ab", "abc", "", "b", "bc", "", "c", ""} {"abc", "ab", "a", "", "bc", "b", "", "c", "", ""} {"", "", "", "", "a", "b", "c", "ab", "abc", "bc"} {"", "", "", "", "a", "b", "c", "ab", "bc", "abc"}*) Interestingly, StringReplaceList yields the opposite order from ReplaceList (which I believe is due to greedy matching of the prefix), which itself orders them by starting index first, length second. StringCases , like SubsequenceCases does the same but with decreasing length. Subsets still has its funny order of index pairs and the StringPartition solution sorts them by length. Again, remember that the Table approaches can easily be adapted to yield almost any order you want. As for performance, and it turns out that the StringReplaceList versions are much slower than the other three. 
Timing them for a 100-character string: string = StringJoin @ ConstantArray["a", 100];Timing[substrings1[string];](*ReplaceList*)Timing[substrings2[string];](*StringReplaceList + StringPattern*)Timing[substrings3[string];](*StringReplaceList + RegularExpression*)Timing[substrings4[string];](*Table + Characters*)Timing[substrings5[string];](*Table + StringTake*)Timing[substrings6[string];](*StringCases*)Timing[substrings7[string];](*Subsets*)Timing[substrings8[string];](*StringPartition*)(* {0., Null} {3.84375, Null} {5.28125, Null} {0.015625, Null} {0., Null} {0.015625, Null} {0., Null} {0., Null}*) Comparing the others, we find that the string-based Table and Subsets approaches easily outperform the others on large inputs: string = StringJoin @ ConstantArray["a", 100];RepeatedTiming[substrings1[string];](*ReplaceList*)RepeatedTiming[substrings4[string];](*Table + Characters*)RepeatedTiming[substrings5[string];](*Table + StringTake*)RepeatedTiming[substrings6[string];](*StringCases*)RepeatedTiming[substrings7[string];](*Subsets*)RepeatedTiming[substrings8[string];](*StringPartition*)(* {0.0096, Null} {0.0121, Null} {0.00421, Null} {0.00573, Null} {0.00450, Null} {0.00612, Null}*) string = StringJoin @ ConstantArray["a", 1000];RepeatedTiming[substrings1[string];](*ReplaceList*)RepeatedTiming[substrings4[string];](*Table + Characters*)RepeatedTiming[substrings5[string];](*Table + StringTake*)RepeatedTiming[substrings6[string];](*StringCases*)RepeatedTiming[substrings7[string];](*Subsets*)RepeatedTiming[substrings8[string];](*StringPartition*)(* {5.74, Null} {4.818, Null} {1.92, Null} {4.59, Null} {2.000, Null} {2.36, Null}*) | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/72613', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/2305/']} | jdg_379948 |
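For comparison outside Mathematica, the winning Table approach is just two nested loops over start and end indices. A Python transcription (slicing xs[i:j], where j == i produces the empty sublists, so the counts and ordering match the Table-based version above):

```python
def sublists(xs):
    """All contiguous sublists of xs, including the repeated empties,
    ordered by starting index and then by length."""
    n = len(xs)
    return [xs[i:j] for i in range(n + 1) for j in range(i, n + 1)]

print(sublists([1, 2, 3]))
# [[], [1], [1, 2], [1, 2, 3], [], [2], [2, 3], [], [3], []]
```

To omit the empty sublists, start the inner range at i + 1 instead of i; the substring variant is identical with string slicing in place of list slicing.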
stackexchange | llm_judgeable_groundtruth_similarity | 52468 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
CONTEXT While no government wants there to be a pandemic , and would certainly prefer (hope) for it to just go away, they can only downplay it for a short time before it overwhelms the population and they must address it from a significantly weaker position than if they had acted decisively at the beginning. In the smartphone era, information propagates instantly through informal networks (Facebook, WeChat, StackExchange, etc.) and is difficult to suppress. QUESTIONS After witnessing the COVID epidemic and failure to contain it in Asia, it spread to Europe and was poised to infect the west. Why did western hemisphere leaders downplay the pandemic’s threat instead of acknowledging the problem and communicating what would be done to reverse the issue? What is the political incentive for deferring acknowledgement and action?
Now provide the response and nothing else.
In hindsight it's clear that many governments should have taken the risk of a Covid19 pandemic much more seriously, but it's not that easy to anticipate the reach and intensity of the epidemic before it happens. There have been other serious threats before for which the containment strategy proved sufficient, and if a government allocates resources (for instance massive purchases of masks and ventilators) which end up wasted they are going to be criticized as well. This article in the Guardian argues that most European countries were not as well prepared as some Asian countries because: There was no recent experience of a serious epidemic in the country, causing leaders to subconsciously underestimate the seriousness of the threat. There is a potential political cost for a government if they invest resources as a precaution and the risk doesn't materialize: "The challenge faced by government is whether and when to act on a health threat. If you act swiftly and the outbreak isn't as bad as feared, then government gets criticised for overreacting. If you adopt a wait-and-see approach and move too slowly, then government gets criticised for underreacting," says Steve Taylor, professor at the University of British Columbia and author of The Psychology of Pandemics. As an example, during the 2009 H1N1 flu pandemic Roselyne Bachelot was Minister of Health in the French government. She purchased 94 million vaccines, but only a small proportion turned out to be actually used. As a result she was widely criticized for wasting public money. I assume that there are other examples of this kind, where a politician loses some political points because they made a safe but costly decision. It seems that when an epidemic threat presents itself a government must make a call between: Taking the threat seriously and therefore mobilizing important resources for an overall unlikely epidemic crisis, with a high risk of appearing to overreact and waste public money.
Taking minimal conservative measures, which is unlikely to be sufficient if the crisis materializes. However this will likely appear as the "normal" choice to make, since nobody could objectively have anticipated the intensity of the crisis. Note: this answer is made of recycled material from a closed question of mine. | {} | {'log_upvote_score': 6, 'links': ['https://politics.stackexchange.com/questions/52468', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/11149/']} | jdg_379949 |
stackexchange | llm_judgeable_groundtruth_similarity | 9754 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What are the units of the damping constant from the following equation by dimensional analysis? $$\zeta = \frac{c}{2\sqrt{mk}}$$ I'm assuming the units have to be s^-1, as the damping constant is present in the exponential equation which plots damping of y=Ae^kt (which plots amplitude vs time). Is that a correct assumption? If somebody could do a quick dimensional analysis to confirm it would be great.
Now provide the response and nothing else.
| I'm assuming the units have to be s^-1, as the damping constant is present in the exponential equation which plots damping of y=Ae^kt (which plots amplitude vs time). Is that a correct assumption? No, the damping ratio $\zeta$ is dimensionless: $$[c] = \frac{[F]}{\left[\frac{dx}{dt}\right]} = \frac{\mathrm{N}}{\mathrm{m}\cdot\mathrm{s}^{-1}} = \frac{\mathrm{kg}\cdot\mathrm{m}\cdot\mathrm{s}^{-2}}{\mathrm{m}\cdot\mathrm{s}^{-1}} = \mathrm{kg}\cdot\mathrm{s}^{-1}$$ $$[\zeta] = \frac{[c]}{\sqrt{[m][k]}} = \frac{\mathrm{kg}\cdot\mathrm{s}^{-1}}{\sqrt{\mathrm{kg}\cdot\mathrm{N}\cdot\mathrm{m}^{-1}}} = \frac{\mathrm{kg}\cdot\mathrm{s}^{-1}}{\sqrt{\mathrm{kg}^2\cdot\mathrm{s}^{-2}}} = \frac{\mathrm{kg}\cdot\mathrm{s}^{-1}}{\mathrm{kg}\cdot\mathrm{s}^{-1}} = 1$$ The solution of the damped harmonic oscillator differential equation (when underdamped) is $$x(t) = A e^{-\zeta \omega_0 t} \ \sin \left( \sqrt{1-\zeta^2} \ \omega_0 t + \varphi \right)$$ so the exponent is dimensionless (as it must be): $$[\zeta \omega_0 t] = 1\cdot\mathrm{s}^{-1}\cdot\mathrm{s} = 1$$ Dimensionless and dimensionful parameters The differential equation for a damped harmonic oscillator is $$m\frac{d^2x}{dt^2} + c\frac{dx}{dt} + kx = 0$$ We can reduce the number of parameters to 2 just by dividing by $m$ $$\frac{d^2x}{dt^2} + \frac{c}{m}\frac{dx}{dt} + \frac{k}{m}x = 0$$ Then we can transform the two remaining parameters to get a dimensionless one, controlling the shape of the solution, and a dimensionful one, setting the timescale. One way of doing that is to define $$\omega_0 = \sqrt{\frac{k}{m}}$$ $$\zeta = \frac{\frac{c}{m}}{\omega_0} = \frac{c\sqrt{m}}{m\sqrt{k}} = \frac{c}{\sqrt{k\,m}}$$ so that the differential equation takes the form: $$\frac{d^2x}{dt^2} + \zeta\omega_0\frac{dx}{dt} + \omega_0^2x = 0$$ The reason to choose $\omega_0$ as the dimensionful parameter is physical: when the system is underdamped , $\omega_0$ is the angular frequency of oscillation. 
More information about this differential equation and its physical interpretation can be seen in Wikipedia. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/9754', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/3511/']} | jdg_379950 |
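Since ζ is dimensionless, its numerical value cannot depend on the unit system. A quick sketch of that check (the sample values are arbitrary):

```python
import math

def damping_ratio(c, m, k):
    # zeta = c / (2 sqrt(m k)); if this is truly dimensionless, its value
    # must be unchanged when the inputs are re-expressed in other units
    return c / (2 * math.sqrt(m * k))

# arbitrary SI values: c in kg/s, m in kg, k in kg/s^2
zeta_si = damping_ratio(c=3.0, m=2.0, k=50.0)

# the same physical system with milliseconds as the time unit:
# c -> c * 1e-3 (kg/ms), k -> k * 1e-6 (kg/ms^2), m unchanged
zeta_ms = damping_ratio(c=3.0e-3, m=2.0, k=50.0e-6)

assert math.isclose(zeta_si, zeta_ms)  # both are 0.15
```

Any quantity with leftover units (say, c alone) would change value under this rescaling; ζ does not.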
stackexchange | llm_judgeable_groundtruth_similarity | 1656361 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to make a short URL service for 2 million assets but I want to use the shortest number of possible characters. What is the math equation that I would need to use to figure it out? I know it has something to do with factorials, right?
Now provide the response and nothing else.
| It's not a factorial problem, but an exponential one. If x is the number of possible characters, you need to solve the following equation for y : x^y = 2000000 If you want to use all numbers and case-sensitive alpha [0-9A-Za-z] , you have 62 possible values. This means you need to solve: 62^y = 2000000y*log(62) = log(2000000) y = log(2000000) / log(62) y = 3.5154313828... Of course, you can't have 3.5 characters in your URL, so you would need 4. If you want to change the character set you are using for your URL's, simply resolve the problem above using the number of values in your set. Note Solving this equation assumes fixed-length URL's. For variable-length URL's, see Rob's answer. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1656361', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/169109/']} | jdg_379951 |
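The arithmetic above is a one-liner to check (a quick sketch; the function name is made up):

```python
import math

def min_code_length(n_assets, alphabet_size):
    """Smallest fixed length y such that alphabet_size**y >= n_assets."""
    return math.ceil(math.log(n_assets) / math.log(alphabet_size))

y = min_code_length(2_000_000, 62)
print(y)  # 4, since 62**3 = 238,328 < 2,000,000 <= 62**4 = 14,776,336
```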
stackexchange | llm_judgeable_groundtruth_similarity | 583355 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I compiled OpenSSH_6.6p1 on one of our servers. I am able to log in via SSH to the upgraded server. But I am not able to connect to other servers running OpenSSH_6.6p1 or OpenSSH_5.8 from it. While connecting, I get the error below. Read from socket failed: Connection reset by peer On the destination server, in the logs, I see it as below. sshd: fatal: Read from socket failed: Connection reset by peer [preauth] I tried specifying the cipher_spec [ ssh -c aes128-ctr destination-server ] as mentioned in ssh 'connection reset by peer' problem since 5.8p1 ( archived version ) and was able to connect. How can I configure ssh to use the cipher by default? Why is the cipher required here?
Now provide the response and nothing else.
The problem sounds like a server-side bug. When the client sends the list of ciphers, the openssh server probably expects to be able to read the list in a single system call. If the list of supported ciphers is longer than can be transmitted in one packet, the server may get fewer bytes in the first call than it expected. The correct behavior on the server would be to perform another call to get the rest of the bytes. But from the problem description it appears that the server instead closes the connection when it does not get the full list of ciphers at once. When the next packet from the client arrives, the server will send a connection reset to the client. Configuring the client to use a shorter list of ciphers would then work around the bug. The openssh client will look for the list of ciphers in the following places: on the command line, using either -c cipher_spec or -o Ciphers=cipher_spec ; in ~/.ssh/config , by specifying Ciphers cipher_spec in the relevant host section or before the first host; in /etc/ssh/ssh_config , using the same format as ~/.ssh/config ; and finally a default list built into the client at compile time. The two configuration files are respectively per-user and system-wide settings. Using Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc like Eric suggested should work fine. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/583355', 'https://serverfault.com', 'https://serverfault.com/users/8888/']} | jdg_379952 |
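If the shorter list is only needed for the one problematic server, a per-host section keeps the workaround contained. A minimal ~/.ssh/config sketch, reusing the cipher list quoted above (destination-server stands in for the real host name):

```
# ~/.ssh/config
Host destination-server
    Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
```

Options listed before the first Host line (or under Host * ) would apply the list to all hosts instead.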
stackexchange | llm_judgeable_groundtruth_similarity | 40859312 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on an app using iOS 10 and using the camera for taking pictures. When the camera view opens, instead of the Cancel button there is the title "API_CANCEL_TITLE". And when I capture the picture, the whole title is shown; instead of this long title I want it to read "Cancel". I have used app localization. I searched a few links but could not find the solution. Here is the screen shot: This is happening only in iOS 10; in iOS 9 it works correctly. Here is the code: - (IBAction)takePicturePressed:(UIButton *)sender{ UIImagePickerController *picker = [[UIImagePickerController alloc] init]; picker.delegate = self; picker.allowsEditing = YES; picker.sourceType = UIImagePickerControllerSourceTypeCamera; [self presentViewController:picker animated:YES completion:nil];} Please suggest...
Now provide the response and nothing else.
| I've approached the same problem using BundleLocalization and I've traced UIImagePickerController keys, that it gets from a bundle. Turns out, it uses 4 "tables" (in NSBundle nomenclature): CameraUI (for camera) PhotoLibraryServices (for PhotoLibrary) PhotoLibrary (for PhotoLibrary) PhotosUI (for PhotoLibrary) In my case, all I had to do, to localize UIImagePickerController interface, it was create in the project a couple of .strings files and localize them. Below content of mentioned files with keys I've seen (with standard english values), they are pretty self explaining CameraUI.strings "PHOTO" = "PHOTO";"AEAF_LOCK_TEXT" = "AE/AF LOCK";"API_CANCEL_TITLE" = "Cancel";"HDR_AUTO" = "Auto";"HDR_ON" = "On";"HDR_OFF" = "Off";"TIMER_OFF_TEXT" = "Off";"USE_PHOTO" = "Use Photo"; PhotoLibraryServices.strings "PHOTOS" = "Photos";"CAMERA_ROLL" = "Camera roll";"ALL_SCREENSHOTS" = "Screenshots"; PhotoLibrary.strings "CANCEL" = "Cancel";"RETAKE" = "Retake";"STREAM_SHARED_BY_ME_SUBTITLE" = "From You";"STREAM_SHARED_BY_SUBTITLE" = "From %@";"ALBUM_IMAGE_COUNT_FORMAT" = "%@ Photos";"ALBUM_VIDEO_COUNT_FORMAT" = "%@ Videos";"1_ALBUM_PHOTO" = "1 Photo";"1_ALBUM_VIDEO" = "1 Video";"ALBUM_TWO_TYPES_LABEL_COMMAS" = "%@, %@"; PhotosUI.strings "ALL_PHOTOS_IN_LIBRARY" = "Moments";"PXUserCollectionsSectionTitle" = "My Albums";"FULL_PHOTOS_GRID_ZOOM_LEVEL_TITLE" = "Moments";"NO_PHOTOS_OR_VIDEOS" = "No Photos or Videos";"EMPTY_ALBUM_LIST_MESSAGE_iPhone" = "You can take photos and videos using camera, or sync photos and videos onto your iPhone using iTunes"; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40859312', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4395882/']} | jdg_379953 |
stackexchange | llm_judgeable_groundtruth_similarity | 7865598 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know how to get the current controller name HttpContext.Current.Request.RequestContext.RouteData.Values["controller"].ToString(); But is there any way to get the current controller instance in some class (not in an action and not in a view)?
Now provide the response and nothing else.
By default you can only access the current Controller inside a controller with ControllerContext.Controller or inside a view with ViewContext.Context . To access it from some other class you need to implement a custom ControllerFactory which stores the controller instance somewhere so that you can retrieve it from there, e.g. in Request.Items : public class MyControllerFactory : DefaultControllerFactory{ public override IController CreateController(RequestContext requestContext, string controllerName) { var controller = base.CreateController(requestContext, controllerName); HttpContext.Current.Items["controllerInstance"] = controller; return controller; }} Then you register it in your Application_Start : ControllerBuilder.Current.SetControllerFactory(new MyControllerFactory()); And you can get the controller instance later: public class SomeClass{ public SomeClass() { var controller = (IController)HttpContext.Current.Items["controllerInstance"]; }} But I would find some other way to pass the controller instance to my class instead of this "hacky" workaround. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7865598', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/468345/']} | jdg_379954 |
stackexchange | llm_judgeable_groundtruth_similarity | 15654676 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to get a String object out of an ActiveSupport::SafeBuffer. The method to_s returns the same type, ActiveSupport::SafeBuffer. Only to_sym.to_s returns String, but this is more of a hack. Here's my console session: irb(main):008:0> s = ActiveSupport::SafeBuffer.new("asdf")# => "asdf"irb(main):009:0> s.class# => ActiveSupport::SafeBufferirb(main):010:0> s.to_s.class# => ActiveSupport::SafeBufferirb(main):011:0> s.to_sym.to_s# => "asdf"irb(main):012:0> s.to_sym.to_s.class# => String
Now provide the response and nothing else.
| There's actually a method for this: String#to_str buf = ActiveSupport::SafeBuffer.new("asdf")str = buf.to_str #=> "asdf"str.class #=> String String#to_str works exactly like String#to_s : it returns the receiver, converting it to String if necessary. But unlike the overridden ActiveSupport::SafeBuffer#to_s there's no ActiveSupport::SafeBuffer#to_str so the original method is called. Note that ActiveSupport::SafeBuffer is a subclass of String : s = ActiveSupport::SafeBuffer.new("asdf")s.is_a? String# => true So there's often no need to convert it at all. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15654676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1457181/']} | jdg_379955 |
stackexchange | llm_judgeable_groundtruth_similarity | 16544891 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it a bug, or is it not possible to release, stop or kill MediaPlayer while it's preparing? I have an instance of MediaPlayer running in a Service ; it stops fine if I stop it, release it and set it to null while the MediaPlayer is in the prepared state . But it doesn't if I stop it, release it and set it to null while it's in the preparing state . onPrepared() is called after the stop, release and setting to null. Is there some workaround for this? I think it's a common use case that a user wants to stop MediaPlayer before it has finished preparing.
Now provide the response and nothing else.
By looking at the MediaPlayer documentation , you're not allowed to call stop() on an uninitialized object, which makes sense because you can't stop what is not running/ready yet. On the other hand, release() seems to do the trick, after looking at the source code of MediaPlayer . But it doesn't hurt to add a boolean flag to indicate that there is no need for the MediaPlayer object anymore, and use that flag to release your object if onPrepared() gets called. Pseudocode would look like this: public void cancel(){ mCancel = true;}public void onPrepared(MediaPlayer player){ if(mCancel){ player.release(); //nullify your MediaPlayer reference mediaPlayer = null }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16544891', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/903644/']} | jdg_379956 |
stackexchange | llm_judgeable_groundtruth_similarity | 441717 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If I open a terminal and execute the w command then it will show: user tty7 :0 12:04 39:56 36.87s 0.06s /sbin/upstart - Now if I open terminator or xterm and execute the w command then it will show its entry in the output of the w command, like user tty7 :0 12:04 39:56 36.87s 0.06s /sbin/upstart -user pts/2 :0.0 12:50 1.00s 0.02s 0.00s w but it will not show a new entry when I open gnome-terminal or xfce4-terminal. Why is it showing a new session for terminator and not for xfce4-terminal?
Now provide the response and nothing else.
| w displays the information stored in utmp ( /var/run/utmp typically on Linux systems). This generally is only updated by “login” sessions, i.e. login (for logins on virtual consoles or serial connections), the display manager (for graphical sessions), the SSH server (for SSH connections), and some (most?) terminal emulators. In the latter case, whether or not they update utmp depends on their built-in support and configuration; for example xterm has the ut flag for this ( -ut disables utmp updates, +ut enables them), and GNOME Terminal no longer updates utmp directly at all . So you’re seeing the entries which have been added to utmp in your case: one added by your display manager (on tty7 ), and others added by some of the terminal emulators you’re using. It should be possible to wrap commands to add utmp logging to anything you like, using for example libutempter , but that is apparently not as straightforward as one might hope . | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/441717', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/255251/']} | jdg_379957 |
stackexchange | llm_judgeable_groundtruth_similarity | 4609 |
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Please tell me where I've gone wrong (if I did in fact make a mistake). I'm pricing a long forward on a stock. The usual setup applies: This has payoff $S(T) - K$ at time $T$. We are at $t$ now. $S(T) = S(t)e^{(r-\frac12 \sigma^2)(T-t)+\sigma(W(T)-W(t))}$. $W(t)$ is a Wiener process. $K \in \mathbb{R}_+$. $Q$ is the risk-neutral measure. $\beta(t) = e^{rt}$ is the domestic savings account, a tradable asset. $r$ is the constant riskless rate. My Attempt : $f(t,S) = E^Q[\frac{\beta(t)}{\beta(T)}(S(T)-K)|\mathscr{F}_t]$ $ = E^Q [\frac{\beta(t)}{\beta(T)}S(T)|\mathscr{F}_t] - E^Q [\frac{\beta(t)}{\beta(T)}K|\mathscr{F}_t]$ $ = E^{P_S}[\frac{\beta(t)}{\beta(T)}S(T) \frac{\beta(T)S(t)}{\beta(t)S(T)}|\mathscr{F}_t] - \frac{\beta(t)}{\beta(T)}K$ $ = S(t) - K\frac{\beta(t)}{\beta(T)}$ $ = S(t) - Ke^{-r(T-t)}$ This isn't graded homework or assignment. (It is ungraded homework)
Now provide the response and nothing else.
To make things really clear, you have an original matrix $X$ of size $300 \times 10$ with all your returns. Now what you do is that you choose the first $k=5$ eigenvectors (i.e. enough to get 80% of the variation given your data) and you form a matrix $U$ of size $10 \times 5$. Each of the columns of $U$ represents a portfolio of the original dataset, and all of them are orthogonal. PCA is a dimensionality-reduction method: you could use it to store your data in a matrix $Z$ of size $300 \times 5$ by doing: $$Z = X U$$ You can then recover an approximation of $X$ which we can call $\hat{X}$ as follows: $$ \hat{X} = Z U^\intercal $$ Note that as your 5 eigenvectors only represent 80% of the variation of X, you will not have $X=\hat{X}$. In practice, for finance applications, I don't see why you would want to perform these reduction operations. In terms of factor analysis, you could sum the absolute values in each row of $U$; the row with the highest score would be a good candidate I think. | {} | {'log_upvote_score': 4, 'links': ['https://quant.stackexchange.com/questions/4609', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/3174/']} | jdg_379958 |
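The Z = XU and X̂ = ZUᵀ round trip is easy to see on a toy example. The sketch below is dependency-free (pure Python, power iteration instead of a full eigendecomposition) and uses zero-mean 2-D data lying exactly on one line, so keeping a single component reconstructs it perfectly:

```python
def top_eigenvector(C, iters=200):
    """Dominant eigenvector of a symmetric 2x2 matrix via power iteration."""
    v = [1.0, 1.0]
    for _ in range(iters):
        w = [C[0][0] * v[0] + C[0][1] * v[1],
             C[1][0] * v[0] + C[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return v

# zero-mean returns for 2 assets; the second is exactly twice the first,
# so all variation lies in one direction and k = 1 captures 100% of it
X = [[1.0, 2.0], [-1.0, -2.0], [2.0, 4.0], [-2.0, -4.0]]

C = [[sum(r[i] * r[j] for r in X) for j in range(2)] for i in range(2)]  # X^T X

u = top_eigenvector(C)                           # the one kept "portfolio" (column of U)
Z = [[r[0] * u[0] + r[1] * u[1]] for r in X]     # scores: Z = X U
X_hat = [[z[0] * u[0], z[0] * u[1]] for z in Z]  # reconstruction: X_hat = Z U^T

# rank-1 data with one component kept => exact reconstruction
assert all(abs(a - b) < 1e-9 for r, rh in zip(X, X_hat) for a, b in zip(r, rh))
```

With real returns the kept components explain only part of the variance, so X̂ ≈ X rather than equality, as noted above.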
stackexchange | llm_judgeable_groundtruth_similarity | 2714829 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have applied for a Ph.D. in Trieste and am preparing for the exams. I am having a problem with Problem 8 here . Here is the text. Let $f\in L^1(\mathbb R)$ and let $F,G:\mathbb R\to\mathbb R$ be the functions defined by: $$F(x)=\int_x^{x+1}f(t)dt,\qquad\text{and}\qquad G(x)=\left|\int_x^{x+1}f(t)dt\right|.$$ (a) Prove that $G$ has a maximum point on $\mathbb R$ . (b) Give an example of $f\in L^1(\mathbb R)$ such that $F$ has no maximum point on $\mathbb R$ . Now, unless I'm much mistaken (proof at question end), we have: $F,G$ continuous on $\mathbb R$ ; $F,G$ tend to 0 as $|x|\to\infty$ . With that, by 2., both $F$ and $G$ are less than their sup-norms on $\mathbb R$ whenever $|x|>M$ for $M$ big enough, and by 1. and the compactness of $[-M,M]$ they must have a maximum on $[-M,M]$ , which is then a global maximum on $\mathbb R$ . So (a) is done, and (b)… is asking me to disprove the maximum of $F$ which I just proved, so it is impossible! Is my reasoning above correct? Are the proofs below correct? Or is there anything I am missing that is wrong in them? Proofs $F,G\to0$ as $|x|\to\infty$ I write $|x|\to\infty$ to say $x\to\infty$ or $x\to-\infty$, so let's do $x\to\infty$, and $x\to-\infty$ is proved the same way, more or less. Now: $$|F(x)|\leq\int_x^{x+1}|f(t)|dt\leq\int_x^{+\infty}|f(t)|dt,$$ which tends to zero for $x\to+\infty$ since $f\in L^1(\mathbb R)$. And of course $G$ is already handled this way. For $x\to-\infty$: $$|F(x)|\leq\int_{-\infty}^{x+1}|f(t)|dt.$$ Continuity We rewrite: $$F(x)=\int_{-\infty}^{+\infty}f(t)1_{[x,x+1]}(t)dt,$$ $1_A$ being the indicator of $A$ . Suppose $x\to x_0$ . If $t<x_0$ , then $t<x$ eventually, so that $f(t)1_{[x,x+1]}(t)=0$ eventually, hence $f(t)1_{[x,x+1]}\to0$ . The same occurs if $t>x_0+1$ , whereas if $t\in[x_0,x_0+1]$ then $t\in[x,x+1]$ eventually so that $f(t)1_{[x,x+1]}(t)=f(t)$ eventually. So for $t\neq x_0$ we have $f(t)1_{[x,x+1]}(t)\to f(t)1_{[x_0,x_0+1]}(t)$ .
Since this leaves out only two points, $x_0$ and $x_0+1$ , the convergence is pointwise almost everywhere. All of these functions have absolute values that is at most $|f(t)|$ , so by dominated convergence we have: $$F(x)=\int_x^{x+1}f(t)dt=\int_{-\infty}^{+\infty}f(t)1_{[x,x+1]}(t)dt\to\int_{-\infty}^{+\infty}f(t)1_{[x_0,x_0+1]}(t)dt=\int_{x_0}^{x_0+1}f(t)dt=F(x_0),$$ as $x\to x_0$ , proving $F$ is continuous. $G=|F|$ is the composition of $h(x)=|x|$ and $F$ , and since $h,F$ are both continuous we have $G=h\circ F$ is also continuous.
Now provide the response and nothing else.
| The first example I think of is any negative Gaussian function, in particular $f(x)=-e^{-x^2}$; The graph: You can see that for for any $x$ the area between $x$ and $x+1$ is negative. You showed that for positive functions there is a maximum, but if $f$ is negative the limit to infinity implies that there is minimum, and $0$ is the $\sup$ of $F$, but not the maximum | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2714829', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/135592/']} | jdg_379959 |
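The answer's claim can be checked numerically: with f(t) = -e^{-t^2}, F(x) is strictly negative for every x yet tends to 0 in the tails, so sup F = 0 is never attained. A quick midpoint-rule sketch:

```python
import math

def F(x, steps=1000):
    """Midpoint-rule approximation of the integral of f(t) = -exp(-t**2)
    over [x, x + 1]."""
    h = 1.0 / steps
    return sum(-math.exp(-(x + (i + 0.5) * h) ** 2) * h for i in range(steps))

# strictly negative everywhere ...
assert all(F(x) < 0 for x in (-10, -2, 0, 2, 10))
# ... most negative near the origin, creeping up toward 0 in the tails,
# so the supremum 0 is approached but never reached
assert F(-0.5) < F(2) < F(5) < 0
```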
stackexchange | llm_judgeable_groundtruth_similarity | 40996 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been working on this problem listed in Herstein's Topics in Algebra (Chapter 2.3, problem 4): If $G$ is a group such that $(ab)^i = a^ib^i$ for three consecutive integers $i$ for all $a, b\in G$, show that $G$ is abelian. I managed to prove it, but I'm not very happy with my result (I think there's a neater way to prove this). Anyway, I'm just looking to see if there's a different approach to this. My approach : Let $j=i+1, k=i+2$ for some $i\in \mathbb{Z}$. Then we have that $(ab)^i = a^ib^i$, $(ab)^j = a^jb^j$ and $(ab)^k = a^kb^k$. If $(ab)^k = a^kb^k$, then $a^jb^jab =a^jab^jb$. We cancel on the left and right and we have $b^ja = ab^j$, that is $b^iba = ab^j$. Multiply both sides by $a^i$ on the left and we get $a^ib^iba = a^jb^j$, so $(ab)^iba = (ab)^j$. But that is $(ab)^iba = (ab)^iab$. Cancelling on the left yields $ab=ba$, which holds for all $a,b \in G$, and therefore, $G$ is abelian. Thanks!
Now provide the response and nothing else.
| A very similar solution, but maybe slightly shorter: again let $i$, $i+1$ and $i+2$ be the three consecutive integers that work for $a$ and $b$. From $a^{i+1} b^{i+1} = (ab)^{i+1} = (ab)(ab)^i = aba^i b^i$ we get $a^i b = b a^i$. The same proof with $i$ replaced by $i+1$ gives$a^{i+1} b = b a^{i+1}$. Now $a b = a b a^i a^{-i} = a a^i b a^{-i} = a^{i+1} b a^{-i} = b a^{i+1} a^{-i} = b a$. | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/40996', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1523/']} | jdg_379960 |
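A brute-force check in the smallest non-abelian group, S₃, is consistent with the theorem: for every starting exponent i, some pair a, b breaks at least one of the three equations (ab)^n = a^n b^n with n in {i, i+1, i+2} — otherwise the result above would force S₃ to be abelian. A small sketch with permutations stored as tuples:

```python
from itertools import permutations

def compose(p, q):
    """(p . q)(x) = p(q(x)) for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def power(p, n):
    r = tuple(range(len(p)))  # identity
    for _ in range(n):
        r = compose(r, p)
    return r

S3 = list(permutations(range(3)))

def holds(a, b, n):
    return power(compose(a, b), n) == compose(power(a, n), power(b, n))

# S3 is non-abelian ...
assert any(compose(a, b) != compose(b, a) for a in S3 for b in S3)

# ... so no three consecutive exponents can work for *all* pairs; S3 has
# exponent 6, so checking i = 0..5 covers every case up to periodicity
for i in range(6):
    assert any(not holds(a, b, n)
               for a in S3 for b in S3
               for n in (i, i + 1, i + 2))
```

For example, with a transposition and a 3-cycle the i = 2 equation already fails, which is exactly why two consecutive exponents (i = 0, 1 always hold) are not enough.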
stackexchange | llm_judgeable_groundtruth_similarity | 23279292 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I edit README.md containing Markdown code in Vim and execute :set filetype? command, I see filetype=markdown . The Markdown syntax is highlighted correctly. But when I edit foo.md containing Markdown code in Vim and execute :set filetype? command, I see filetype=modula2 . The Markdown syntax is not highlighted correctly. What should I add to my ~/.vimrc to make Vim understand that foo.md or any file with extension name as .md is a markdown file and not modula2 file?
Now provide the response and nothing else.
| Cause of the issue To understand which script was setting this filetype, I executed the following command after editing foo.md. :verbose set filetype? I found the following output. filetype=modula2 Last set from /usr/share/vim/vim74/filetype.vim In /usr/share/vim/vim74/filetype.vim, I found the following lines. au BufNewFile,BufRead *.markdown,*.mdown,*.mkd,*.mkdn,*.mdwn,README.md setf markdownau BufNewFile,BufRead *.m2,*.DEF,*.MOD,*.md,*.mi setf modula2 These lines show that when README.md is edited, the filetype is set to markdown but on editing any other file with extension name as .md, the filetype is set to modula2 . In other words, *.md files are recognized as Modula-2 source code but an exception is made for README.md for it to be recognized as Markdown code, perhaps due to the growing popularity of README.md files on GitHub. Solution Add the following statement to ~/.vimrc to set filetype=markdown for all .md files. autocmd BufNewFile,BufRead *.md set filetype=markdown This statement says that when starting to edit a new file that doesn't exist or when starting to edit a new buffer, after reading the file into the buffer, if the file matches the pattern *.md then set filetype=markdown . Update In the updated version of Vim that I have now, I find that this issue no longer exists. $ vim --version | head -n 2VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Mar 31 2015 23:36:07)Included patches: 1-488, 576$ grep -E "markdown|modula2" /usr/share/vim/vim74/filetype.vim au BufNewFile,BufRead *.markdown,*.mdown,*.mkd,*.mkdn,*.mdwn,*.md setf markdownau BufNewFile,BufRead *.m2,*.DEF,*.MOD,*.mi setf modula2 The patch at ftp://ftp.vim.org/pub/vim/patches/7.4/7.4.860 seems to have made this change. However, I am a little confused about how these changes that seem to be available in patch 860 is available in my version of Vim which includes patches 1-448, 576 only. 
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23279292', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/303363/']} | jdg_379961 |
stackexchange | llm_judgeable_groundtruth_similarity | 8664708 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
For instance, if I have a list [1,4,2,3,5,4,5,6,7,8,1,3,4,5,9,10,11] This algorithm should return [1,2,3,4,5,6,7,8,9,10,11]. To clarify, the longest list should run forwards. I was wondering what is an algorithmically efficient way to do this (preferably not O(n^2))? Also, I'm open to a solution not in python since the algorithm is what matters. Thank you.
Now provide the response and nothing else.
| Here is a simple one-pass O(n) solution: s = [1,4,2,3,5,4,5,6,7,8,1,3,4,5,9,10,11,42]maxrun = -1rl = {}for x in s: run = rl[x] = rl.get(x-1, 0) + 1 print x-run+1, 'to', x if run > maxrun: maxend, maxrun = x, runprint range(maxend-maxrun+1, maxend+1) The logic may be a little more self-evident if you think in terms of ranges instead of individual variables for the endpoint and run length: rl = {}best_range = xrange(0)for x in s: run = rl[x] = rl.get(x-1, 0) + 1 r = xrange(x-run+1, x+1) if len(r) > len(best_range): best_range = rprint list(best_range) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8664708', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/795319/']} | jdg_379962 |
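Wrapped as a function and applied to the list from the question, the dictionary approach indeed returns [1, …, 11]; a small sketch restating the same logic:

```python
def longest_consecutive_run(seq):
    """rl[x] = length of the run of consecutive integers ..., x-1, x whose
    members have all appeared so far; track the longest run's endpoint and
    rebuild the run at the end. One pass, O(n) expected time."""
    rl = {}
    maxrun, maxend = -1, None
    for x in seq:
        run = rl[x] = rl.get(x - 1, 0) + 1
        if run > maxrun:
            maxend, maxrun = x, run
    return list(range(maxend - maxrun + 1, maxend + 1))

print(longest_consecutive_run([1, 4, 2, 3, 5, 4, 5, 6, 7, 8, 1, 3, 4, 5, 9, 10, 11]))
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```

As in the answer, the members of the run need not be adjacent in the input — only present before the run's endpoint.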
stackexchange | llm_judgeable_groundtruth_similarity | 34243685 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to update a state every 3 seconds. export default class Calendar extends Component { constructor(props) { super(props); this.state = { timeLineTop: 75, }; }render() { this.state.timeLineTop = setInterval(function () { let d = new Date(); let result = d.getHours() + d.getMinutes() / MINUTES_IN_HOUR; return result; }, 3000); <View style={[ { top: this.state.timeLineTop }, ]}></View> }} Why will this not update my view's position every 3 seconds?
Now provide the response and nothing else.
| ** Updated to implement TimerMixin **

You need to call this.setState to update a state variable, and as specified by @eyal83, use the TimerMixin for setTimeout & setInterval:

var TimerMixin = require('react-timer-mixin');

componentDidMount: function() {
  this.setInterval(() => {
    let d = new Date();
    let result = d.getHours() + d.getMinutes() / MINUTES_IN_HOUR;
    this.setState({
      timeLineTop: result
    })
  }, 500);
}

I've also set up a basic app resetting the state variable with a setInterval here, code is below. https://rnplay.org/apps/9gD-Nw

'use strict';

var React = require('react-native');
var {
  AppRegistry,
  StyleSheet,
  Text,
  View,
} = React;
var TimerMixin = require('react-timer-mixin');

var SampleApp = React.createClass({
  mixins: [TimerMixin],
  getInitialState: function() {
    return { timeLineTop: 75 }
  },
  componentDidMount: function() {
    this.setInterval(() => {
      this.setState({
        timeLineTop: this.state.timeLineTop + 1
      })
    }, 500);
  },
  render: function() {
    return (
      <View style={styles.container}>
        <View style={[
          { marginTop: this.state.timeLineTop },
        ]}><Text>TOP - {this.state.timeLineTop}</Text></View>
      </View>
    );
  }
});

var styles = StyleSheet.create({
  container: {
    flex: 1,
    marginTop: 60,
  },
});

AppRegistry.registerComponent('SampleApp', () => SampleApp);

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34243685', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2059696/']} | jdg_379963
stackexchange | llm_judgeable_groundtruth_similarity | 234371 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to extract the distribution parameters of the sub-distributions which comprise a mixed normal distribution. I'll give my attempts so far. First the simulated data:

MixedGaussiaData = Apply[Join, {RandomVariate[NormalDistribution[0, 2], 300],
    RandomVariate[NormalDistribution[0, 0.7], 500],
    RandomVariate[NormalDistribution[0, 0.4], 500],
    RandomVariate[NormalDistribution[0, 1], 200]}];

Which when plotted looks like this:

So we have four normal distributions with different $\sigma$ values and different numbers of points, but all distributions have a common mean value $\mu = 0$. I define my $n$ mixed-normal distribution as:

NMixedGaussian[n_] := MixtureDistribution[Array[w, n],
   MapThread[NormalDistribution[#1, #2] &, {Array[m, n], Array[s, n]}]]

Then using FindDistributionParameters:

FourMixedNormalMLE = FindDistributionParameters[MixedGaussiaData, NMixedGaussian[4],
   ParameterEstimator -> {"MaximumLikelihood", PrecisionGoal -> 1, AccuracyGoal -> 1}]

If I plot the result, it looks pretty good. However if we take a look at the results, they're not that good when compared to the inputs of the simulation:

mMLE = Array[m, 4] /. FourMixedNormalMLE
sMLE = Array[s, 4] /. FourMixedNormalMLE
wMLE = Array[w, 4] /. FourMixedNormalMLE

{0.0284676, 0.00902554, 0.0930328, -0.470579}
{1.8648, 0.274301, 0.667947, 0.385259}
{0.237727, 0.192302, 0.475281, 0.0946906}

Second attempt: I tried explicitly defining the mixed-Gaussian function with ProbabilityDistribution:

Clear[w, m, s, n];
NMixedGaussian[n_] := MixtureDistribution[Array[w, n],
   MapThread[NormalDistribution[#1, #2] &, {Array[m, n], Array[s, n]}]]
NMixGauss = NMixedGaussian[4];
NMixGaussPDF[z_] = FullSimplify[PDF[NMixGauss, z], DistributionParameterAssumptions[NMixGauss]]
NMixGaussPD = ProbabilityDistribution[NMixGaussPDF[z], {z, -Infinity, Infinity},
   Assumptions -> DistributionParameterAssumptions[NMixGauss]]
FourMixedNormalPDFMLE = FindDistributionParameters[MixedGaussiaData, NMixGaussPD,
   ParameterEstimator -> {"MaximumLikelihood", PrecisionGoal -> 1, AccuracyGoal -> 1}]

This makes it worse. I think the main issue might be initial values and constraints, but I'm not sure how to best implement this. Does anyone have any suggestions? One thing I noticed is that the weights produced by FindDistributionParameters don't seem to make sense. They sum to one, but none seem to correspond to weights defined by $1/\sigma^{2}$ or $1/\sigma_{\rm{SE}}^{2}$.

Addition: What I'm trying to achieve is another way of performing a weighted mean. I could divide the simulated data up into chunks/bins, find the $\mu$ and $\sigma$ for each one and perform a weighted mean. I want to avoid binning if possible, hence this approach. Just to be a bit more explicit in what I am trying to do/expect to see: here I've just performed a simple weighted average and determined the associated standard error:

SeedRandom[1]
Gaussian1 = RandomVariate[NormalDistribution[0, 2], 300];
Gaussian2 = RandomVariate[NormalDistribution[0, 0.7], 500];
Gaussian3 = RandomVariate[NormalDistribution[0, 0.4], 500];
Gaussian4 = RandomVariate[NormalDistribution[0, 1], 200];
M = {Mean[Gaussian1], Mean[Gaussian2], Mean[Gaussian3], Mean[Gaussian4]};
S = {StandardDeviation[Gaussian1], StandardDeviation[Gaussian2],
    StandardDeviation[Gaussian3], StandardDeviation[Gaussian4]};
SE = {StandardDeviation[Gaussian1]/Sqrt[Length[Gaussian1]],
    StandardDeviation[Gaussian2]/Sqrt[Length[Gaussian2]],
    StandardDeviation[Gaussian3]/Sqrt[Length[Gaussian3]],
    StandardDeviation[Gaussian4]/Sqrt[Length[Gaussian4]]};
W = 1/SE^2;
Around[WU = Total[W M]/Total[W],
 WSE = Sqrt[(Total[W M^2]/Total[W] - (Total[W M]/Total[W])^2 Length[W]/(Length[W] - 1))/Length[W]]]

OUT = Around[-0.010808335694884515`, 0.0184845352965455]

mMLE = Array[m, 4] /. FourMixedNormalMLE
sMLE = Array[s, 4] /. FourMixedNormalMLE
wMLE = Array[w, 4] /. FourMixedNormalMLE
Around[WUMLE = Total[wMLE mMLE],
 WSEMLE = Sqrt[(Total[wMLE mMLE^2] - (Total[wMLE mMLE])^2 Length[wMLE]/(Length[wMLE] - 1))/Length[wMLE]]]

OUT = Around[0.008160465141904528, 0.07937540663098669]

If I compare the results from those determined from a weighted mean to those I get from FindDistributionParameters, they are quite different.
Now provide the response and nothing else.
| Clear["Global`*"]
fdk[s_] := 1/2 - 1/(2 s + 1)

To overlay the plots,

Plot[{fdk[s], RiemannXi[s]}, {s, -10, 10},
 AspectRatio -> 1,
 Axes -> False,
 Frame -> True,
 PlotLegends -> {fdk, RiemannXi},
 Epilog -> ({Opacity[0.3, ColorData[97][2 # + 4]],
      Tooltip[Line[{{-10, #}, {10, #}}], #],
      Tooltip[Line[{{#, -1.25}, {#, 3}}], #]} & /@ (Range[-1, 1]/2))]

However, this clutters up the plot. You can use Manipulate to select the origin location.

EDIT: Added Zeta[s] and used independent controls for x and y origins.

Manipulate[
 Plot[{fdk[s], RiemannXi[s], Zeta[s]}, {s, -10, 10},
  AspectRatio -> 1,
  AxesOrigin -> {originx, originy},
  Frame -> True,
  PlotLegends -> {fdk, RiemannXi, Zeta}],
 Row[{Control@{{originx, 0, Style["x origin", 14, Bold]}, {-1/2, 0, 1/2}},
   Spacer[20],
   Control@{{originy, 0, Style["y origin", 14, Bold]}, {-1/2, 0, 1/2}}}]]

| {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/234371', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/27119/']} | jdg_379964
stackexchange | llm_judgeable_groundtruth_similarity | 1516538 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was wondering if there exist two non-empty sets $A,B$ such that $$A\subseteq A\times B.$$ I know that there always exists a subset of $A\times B$ with the same cardinality as $A$, but I'm requesting here that $A$ is a subset of $A\times B$ without using any identification map. At first I thought that this was not possible because $A$ and $B\times A$ are two sets containing different kinds of elements: the second contains pairs like $(a,b)$ with $a\in A, b\in B$; the first just single elements $a\in A$. Moreover, suppose $A\subseteq A\times B$ holds and take $a \in A$. Then $a=(a_1,b_1)$ for some $a_1 \in A, b_1\in B$. For the same reason $a_1=(a_2,b_2)$ and so $a=((a_2,b_2),b_1)$. Following this argument I got some sort of infinite recursive definition for $a$ that made me suspect something is wrong. However, if I take $$A=\mathbb{N}^{\mathbb{N}}; \quad B=\mathbb{N},$$ is it true that $A=A\times B$, or am I missing something? Moreover, if $A\subseteq A\times B$ can be true, are there other examples? Edit: I add another strange example: take $A=\bigcup_{i=1}^{\infty} \mathbb{N}^i$ and $B=\mathbb{N}$; then $A \times B \subset A$. This makes me think that maybe there also exists an example for the other inclusion.
Now provide the response and nothing else.
| The existence of sets $A$ and $B$ with $A\subseteq A\times B$ contradicts the axiom of regularity (or the axiom of foundation.) The precise proof depends on which construction of $A\times B$ you prefer. We will use the most common formulation, in which we represent the ordered pair $(a,b)$ by the set $\{\{a\},\{a,b\}\}$, but the proof easily adapts to other constructions. Suppose $A\subseteq A\times B$. Since the sets are nonempty, we can pick $a_0\in A$. By assumption, we can write $a_0=(a_1,b_1)$ with $a_1\in A$ and $b_1\in B$, and continuing on we can write $a_i=(a_{i+1},b_{i+1})$. Under our formulation of the cartesian product, this means $a_{i+1}\in \{a_{i+1}\}\in a_i$. This gives an infinite descending sequence$$ \cdots \in a_{i+1}\in \{a_{i+1}\}\in a_i \in \{a_i\}\in a_{i-1}\cdots \in a_0,$$contradicting the axiom of foundation. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1516538', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/84266/']} | jdg_379965 |
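The membership chain used in the proof can be modeled concretely, encoding the Kuratowski pair $(a,b)=\{\{a\},\{a,b\}\}$ with Python frozensets. This is only an illustrative sketch (hereditarily finite sets standing in for arbitrary ones; the function name is mine):

```python
def kuratowski_pair(a, b):
    # (a, b) encoded as {{a}, {a, b}}, using hashable frozensets
    return frozenset({frozenset({a}), frozenset({a, b})})

p = kuratowski_pair(1, 2)

# The descending chain from the proof: a ∈ {a} ∈ (a, b)
print(1 in frozenset({1}), frozenset({1}) in p)  # → True True

# Sanity check: the encoding distinguishes order
print(kuratowski_pair(1, 2) == kuratowski_pair(2, 1))  # → False
```

Under this encoding, every element of $A\times B$ has the two-step membership chain shown above, which is what drives the infinite descending $\in$-sequence forbidden by foundation.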
stackexchange | llm_judgeable_groundtruth_similarity | 283646 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have just converted a project from Visual Studio 2003 to 2005 and although most of it 'converted' fine, I have a series of STL errors from the following line:

void SomeFn( std::vector<CSomeObject*>::iterator it,
             std::vector<CSomeObject*>::iterator itBegin = NULL,
             std::vector<CSomeObject*>::iterator itEnd = NULL );

The Visual Studio error is as follows:

c:\<path>\Headerfile.h(20) : error C2440: 'default argument' : cannot convert from 'int' to 'std::_Vector_iterator<_Ty,_Alloc>'
        with
        [
            _Ty=CObject *,
            _Alloc=std::allocator<CObject *>
        ]
        No constructor could take the source type, or constructor overload resolution was ambiguous

I can't see anything wrong with that code and it worked perfectly in VS 2003. Any ideas?
Now provide the response and nothing else.
| Your program is incorrect, as NULL cannot be converted to an iterator. I don't really know what you want these iterators to be initialized as. If you need an iterator guaranteed not to be in a container but still "valid", you can use a default constructor:

typedef std::vector<CSomeObject*> myvector_t;

void SomeFn( myvector_t::iterator it,
             myvector_t::iterator itBegin = myvector_t::iterator(),
             myvector_t::iterator itEnd = myvector_t::iterator() );

Note, however, that if you do so, it, itBegin and itEnd won't be comparable in a meaningful way! Only iterators obtained from a given container are comparable meaningfully. In the end, I would recommend against using default values for itBegin and itEnd. If you really need to not have these, create another function without the arguments and do something meaningful, i.e.:

typedef std::vector<CSomeObject*> myvector_t;

void SomeFn( myvector_t::iterator it, myvector_t::iterator itBegin, myvector_t::iterator itEnd );
void SomeFn( myvector_t::iterator it ); // No begin/end arguments

Another problem with your program is the use of a vector to store pointers. This is really unsafe. Make sure you never erase elements from the vector without deleting the element first. You might also have problems with algorithms copying objects around. It is better to use smart pointers in vectors.

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/283646', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18664/']} | jdg_379966
stackexchange | llm_judgeable_groundtruth_similarity | 53416657 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What's the point of telling the compiler specifically to include the file only once? Wouldn't it make sense by default? Is there even any reason to include a single file multiple times? Why not just assume it? Is it to do with specific hardware?
Now provide the response and nothing else.
| There are multiple related questions here:

Why is #pragma once not automatically enforced?

Because there are situations in which you want to include files more than once.

Why would you want to include a file multiple times?

Several reasons have been given in other answers (Boost.Preprocessor, X-Macros, including data files). I would like to add a particular example of "avoid code duplication": OpenFOAM encourages a style where #include-ing bits and pieces within functions is a common concept. See for example this discussion.

Ok, but why is it not the default with an opt-out?

Because it is not actually specified by the standard. #pragma s are by definition implementation-specific extensions.

Why has #pragma once not become a standardized feature yet (as it is widely supported)?

Because pinning down what is "the same file" in a platform-agnostic way is actually surprisingly hard. See this answer for more information.

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/53416657', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9119391/']} | jdg_379967
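As a contrast to the C preprocessor's textual inclusion, some languages do make "include once, with an explicit opt-out" the default. A small Python sketch of that behavior (an illustrative analogy only, not part of the answer above):

```python
import importlib
import sys

# The first import executes the module body and caches it in sys.modules.
import json
first = sys.modules["json"]

# A second import statement is just a cache lookup: the module body does
# not run again. "Once" is the default, unlike C's textual #include.
import json
assert sys.modules["json"] is first

# The explicit opt-out: force the module body to re-execute.
reloaded = importlib.reload(json)
assert reloaded is first  # same module object, freshly re-executed
```

This works because Python modules are identified by a well-defined name in `sys.modules`, sidestepping exactly the "is this the same file?" question that makes standardizing #pragma once hard.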
stackexchange | llm_judgeable_groundtruth_similarity | 512759 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
First some specs: my computer is an HP EliteBook 8460p. It comes with an integrated Chicony HP HD webcam. My issue is that a lot of applications (well, at least Skype and guvcview) are displaying multiple lines for the same webcam; indeed, if I do ls -l /dev | grep video , I get the following:

crw-rw---- 1 root video 29, 0 Apr 16 08:13 fb0
crw-rw---- 1 root video 243, 0 Apr 16 08:13 media0
crw-rw----+ 1 root video 81, 0 Apr 16 08:13 video0
crw-rw----+ 1 root video 81, 1 Apr 16 08:13 video1

I have 2 /dev/video[n] with only one (integrated) webcam; Skype will work properly with /dev/video0 , but not with /dev/video1 . Same for guvcview. If I plug in another USB webcam, for example a Logitech one, I get the following with dmesg :

[21222.638802] usb 2-2: new high-speed USB device number 20 using xhci_hcd
[21222.970684] usb 2-2: New USB device found, idVendor=046d, idProduct=08c2, bcdDevice= 0.05
[21222.970755] usb 2-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[21222.972518] uvcvideo: Found UVC 1.00 device <unnamed> (046d:08c2)
[21226.044535] uvcvideo 2-2:1.0: Entity type for entity Extension 4 was not initialized!
[21226.044538] uvcvideo 2-2:1.0: Entity type for entity Extension 8 was not initialized!
[21226.044540] uvcvideo 2-2:1.0: Entity type for entity Extension 10 was not initialized!
[21226.044541] uvcvideo 2-2:1.0: Entity type for entity Extension 9 was not initialized!
[21226.044543] uvcvideo 2-2:1.0: Entity type for entity Extension 3 was not initialized!
[21226.044545] uvcvideo 2-2:1.0: Entity type for entity Processing 2 was not initialized!
[21226.044547] uvcvideo 2-2:1.0: Entity type for entity Camera 1 was not initialized!
[21226.044746] input: UVC Camera (046d:08c2) as /devices/pci0000:00/0000:00:1c.7/0000:25:00.0/usb2/2-2/2-2:1.0/input/input35
[21226.137559] usb 2-2: Warning! Unlikely big volume range (=3072), cval->res is probably wrong.
[21226.137569] usb 2-2: [5] FU [Mic Capture Volume] ch = 1, val = 4608/7680/1

And the following with ls -l /dev/ | grep video :

crw-rw---- 1 root video 29, 0 Apr 16 08:13 fb0
crw-rw---- 1 root video 243, 0 Apr 16 08:13 media0
crw-rw---- 1 root video 243, 1 Apr 16 14:06 media1
crw-rw----+ 1 root video 81, 0 Apr 16 08:13 video0
crw-rw----+ 1 root video 81, 1 Apr 16 08:13 video1
crw-rw----+ 1 root video 81, 2 Apr 16 14:06 video2
crw-rw----+ 1 root video 81, 3 Apr 16 14:06 video3

3 new entries: /dev/media1 , /dev/video2 and /dev/video3 . I even found a Sony webcam (CEVCECM) that adds up to 4 new devices. The dmesg logs:

[21927.665747] usb 2-2: new high-speed USB device number 23 using xhci_hcd
[21927.817330] usb 2-2: New USB device found, idVendor=05e3, idProduct=0608, bcdDevice= 9.01
[21927.817339] usb 2-2: New USB device strings: Mfr=0, Product=1, SerialNumber=0
[21927.817343] usb 2-2: Product: USB2.0 Hub
[21927.824119] hub 2-2:1.0: USB hub found
[21927.824814] hub 2-2:1.0: 4 ports detected
[21928.113733] usb 2-2.4: new high-speed USB device number 24 using xhci_hcd
[21928.223184] usb 2-2.4: New USB device found, idVendor=054c, idProduct=097b, bcdDevice=21.12
[21928.223192] usb 2-2.4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[21928.223197] usb 2-2.4: Product: CEVCECM
[21928.223201] usb 2-2.4: Manufacturer: Sony
[21928.223206] usb 2-2.4: SerialNumber: DHZD10412EUHK1
[21928.227506] uvcvideo: Found UVC 1.00 device CEVCECM (054c:097b)
[21928.242592] uvcvideo: Unable to create debugfs 2-24 directory.
[21928.242780] uvcvideo 2-2.4:1.0: Entity type for entity Extension 7 was not initialized!
[21928.242783] uvcvideo 2-2.4:1.0: Entity type for entity Extension 3 was not initialized!
[21928.242785] uvcvideo 2-2.4:1.0: Entity type for entity Processing 2 was not initialized!
[21928.242787] uvcvideo 2-2.4:1.0: Entity type for entity Camera 1 was not initialized!
[21928.242877] input: CEVCECM: CEVCECM as /devices/pci0000:00/0000:00:1c.7/0000:25:00.0/usb2/2-2/2-2.4/2-2.4:1.0/input/input38

And the resulting device files with ls -l /dev | grep video :

crw-rw---- 1 root video 29, 0 Apr 16 08:13 fb0
crw-rw---- 1 root video 243, 0 Apr 16 08:13 media0
crw-rw---- 1 root video 243, 1 Apr 16 14:18 media1
crw-rw----+ 1 root video 81, 0 Apr 16 08:13 video0
crw-rw----+ 1 root video 81, 1 Apr 16 08:13 video1
crw-rw----+ 1 root video 81, 2 Apr 16 14:18 video2
crw-rw----+ 1 root video 81, 3 Apr 16 14:18 video3
crw-rw----+ 1 root video 81, 4 Apr 16 14:18 video4
crw-rw----+ 1 root video 81, 5 Apr 16 14:18 video5

5 new entries: /dev/media1 and /dev/video2 to /dev/video5 . I feel like the correct files to use are the /dev/media[n] ones, but Skype and guvcview somehow fail to do so and fall back to the /dev/video[n] . I don't have this issue with Webcamoid, for example. If anyone has an idea, I'll take it. In the meantime I will continue the investigation...

--- Edited the 2019-05-14 ---

Got some interesting information using v4l2-ctl --device=/dev/video* --all . For the Chicony HP HD webcam, its 2 device files have different device capabilities:

# Device capabilities for /dev/video0
Video Capture
Streaming
Extended Pix Format

# Device capabilities for /dev/video1
Metadata Capture
Streaming
Extended Pix Format

I get similar results for the USB webcams. So after all, maybe what Skype and guvcview fail to do is to only list video devices that support the Video Capture device capability.
Now provide the response and nothing else.
| The second device provides metadata about the video data from the first device. The new devices were introduced by this patch: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=088ead25524583e2200aa99111bea2f66a86545a More information on the V4L metadata interface can be found here: https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/dev-meta.html For run of the mill USB Video Class devices, this mostly just provides more accurate timestamp information . For cameras like Intel's RealSense line, provide a wider range of data about how the image was captured . Presumably this data was split out into a separate device node because it couldn't easily be delivered on the primary device node in a compatible way. It's a bit of a pain though, since (a) applications that don't care about this metadata now need to filter out the extra devices, and (b) applications that do care about the metadata need a way to tie the two devices together. | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/512759', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/247149/']} | jdg_379968 |
stackexchange | llm_judgeable_groundtruth_similarity | 261849 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Running apache on a debian server, I have been starting to monitor logs, and have found some odd occurrences. Does anyone know what this any-request-allowed.com site is trying to do... and how can I block this potentially malicious traffic? Sanitized logs:

[46.161.11.245] - [13/Apr/2011:13:02:09 -0700] | api.goolertech.com "POST http://myinfo.any-request-allowed.com/?strGet=get2566 HTTP/1.1" 404 294 | -
[46.161.11.245] - [15/Apr/2011:13:02:53 -0700] | api.goolertech.com "POST http://myinfo.any-request-allowed.com/?strGet=get8888 HTTP/1.1" 404 294 | -
[46.161.11.245] - [17/Apr/2011:13:05:04 -0700] | api.goolertech.com "POST http://myinfo.any-request-allowed.com/?strGet=get9659 HTTP/1.1" 404 294 | -

Where 404 is the error code and 294 is the size of the response.

EDIT: Did a packet capture, and this was the request I got:

POST http://myinfo.any-request-allowed.com/?strGet=get2566 HTTP/1.1
Host: myinfo.any-request-allowed.com
Pragma: no-cache
Accept: */*
Proxy-Connection: Keep-Alive
Cookie: strCookie=cookie2566
Content-Length: 16
Content-Type: application/x-www-form-urlencoded

strPost=post2566
Now provide the response and nothing else.
| This looks like internet background noise. A dodgy or compromised host is probably port scanning for open web servers and then testing them out to see if they'll relay a POST to a 3rd-party site. Your server is naturally 404ing the request. It's likely not targeted at you, and the chances are it will stop pretty soon. That said, the timing is unusually regular for this sort of traffic, so if it doesn't stop, the source of the request is 46.161.11.245. If you block anything, then block that host at the firewall (via iptables). | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/261849', 'https://serverfault.com', 'https://serverfault.com/users/52633/']} | jdg_379969
stackexchange | llm_judgeable_groundtruth_similarity | 38242744 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an array of values like:

const arr = [1,2,3];

Is there any way I can use destructuring to create the following output? If not, what is the easiest way I can do this in ES6 (or later)?

const obj = {
  one: 1,
  two: 2,
  three: 3
};

I tried this, but I guess it doesn't work as this is the syntax for computed keys:

const arr = [1,2,3];
const obj = {
  [one, two, three] = arr
};
Now provide the response and nothing else.
| I don't believe there's any structuring/destructuring solution to doing that in a single step, no. I wanted something similar in this question. The old := strawman proposal doesn't seem to have legs in the new proposal list, so I don't think there's much activity around this right now. IMHO, this answer is the best one here (much better than this one). Two steps, but concise and simple. But if it's two steps, you could also use a simple object initializer:

const arr = [1,2,3];
const obj = {
  one: arr[0],
  two: arr[1],
  three: arr[2]
};
console.log(obj);

Another option is to do it with several temporary arrays but technically only one statement (I am not advocating this, just noting it):

const arr = [1,2,3];
const obj = Object.fromEntries(
  ["one", "two", "three"].map((name, index) => [name, arr[index]])
);
console.log(obj);

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38242744', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/571194/']} | jdg_379970
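For comparison, the same names-to-values pairing that Object.fromEntries performs is a one-liner in Python with dict and zip (just an analogy in another language, not a JavaScript feature):

```python
names = ["one", "two", "three"]
arr = [1, 2, 3]

# zip pairs each name with the value at the same index; dict consumes the pairs
obj = dict(zip(names, arr))
print(obj)  # → {'one': 1, 'two': 2, 'three': 3}
```

Like the Object.fromEntries version, this builds the key/value pairs positionally, so extra values without a matching name are silently dropped.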
stackexchange | llm_judgeable_groundtruth_similarity | 2695153 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have read about how to prevent CSRF-attacks in the last days. I am going to update the token in every pageload, save the token in the session and make a check when submitting a form. But what if the user has, lets say 3 tabs open with my website, and I just store the last token in the session? This will overwrite the token with another token, and some post-action is going to fail. Do I need to store all tokens in the session, or is there a better solution to get this working?
Now provide the response and nothing else.
| Yes, with the stored-token approach you'd have to keep all generated tokens just in case they came back in at any point. A single stored token fails not just for multiple browser tabs/windows but also for back/forward navigation. You generally want to manage the potential storage explosion by expiring old tokens (by age and/or number of tokens issued since). Another approach that avoids token storage altogether is to issue a signed token generated using a server-side secret. Then when you get the token back you can check the signature, and if it matches you know you signed it. For example:

// Only the server knows this string. Make it up randomly and keep it in deployment-specific
// settings, in an include file safely outside the webroot
//
$secret= 'qw9pDr$wEyq%^ynrUi2cNi3';

...

// Issue a signed token
//
$token= dechex(mt_rand());
$hash= hash_hmac('sha1', $token, $secret);
$signed= $token.'-'.$hash;

<input type="hidden" name="formkey" value="<?php echo htmlspecialchars($signed); ?>">

...

// Check a token was signed by us, on the way back in
//
$isok= FALSE;
$parts= explode('-', $_POST['formkey']);
if (count($parts)===2) {
    list($token, $hash)= $parts;
    if ($hash===hash_hmac('sha1', $token, $secret))
        $isok= TRUE;
}

With this, if you get a token with a matching signature you know you generated it. That's not much help in itself, but then you can put extra things in the token other than the randomness, for example the user id:

$token= dechex($user->id).'.'.dechex(mt_rand());

...

if ($hash===hash_hmac('sha1', $token, $secret)) {
    $userid= hexdec(explode('.', $token)[0]);
    if ($userid===$user->id)
        $isok= TRUE;
}

Now each form submission has to be authorised by the same user who picked up the form, which pretty much defeats CSRF.
Another thing it's a good idea to put in a token is an expiry time, so that a momentary client compromise or MitM attack doesn't leak a token that'll work for that user forever, and a value that changes on password resets, so that changing the password invalidates existing tokens. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2695153', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/267609/']} | jdg_379971
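The same sign-and-verify idea can be sketched in Python with the stdlib hmac module. This is illustrative only, not the answer's PHP: the names make_token/check_token are made up, SHA-256 replaces the answer's SHA-1, and the secret is a placeholder that would really live in server-side config:

```python
import hashlib
import hmac
import secrets

SECRET = b"server-side secret, kept out of the webroot"  # placeholder value

def make_token(user_id: int) -> str:
    # Payload: user id plus randomness, then an HMAC signature over it.
    payload = f"{user_id:x}.{secrets.token_hex(8)}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}-{sig}"

def check_token(token: str, user_id: int) -> bool:
    payload, _, sig = token.rpartition("-")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        return False
    # Signature is ours; now bind the token to the submitting user.
    return int(payload.split(".")[0], 16) == user_id

t = make_token(42)
print(check_token(t, 42), check_token(t, 7), check_token(t + "x", 42))
# → True False False
```

Note the use of hmac.compare_digest rather than ==, which avoids leaking the signature through string-comparison timing.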
stackexchange | llm_judgeable_groundtruth_similarity | 15079948 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Look at LoaderCustomSupport (Use AsyncTaskLoader) and FragmentRetainInstanceSupport (Use Thread, almost equivalent to AsyncTask) Both examples have the following similarities. Both do not block UI thread when loading data The data fetching thread is not destroyed when user performs configuration change like screen rotation. When data fetching thread finished fetching data, it can update to the correct Fragment UI However, there are differences. AsyncTaskLoader Seems like there is no easy way to update intermediate progress to a progress bar dialog AsyncTask Not sure on this. But Android documentation seems to recommend AsyncTaskLoader for async data loading and updating final result to UI ? Is there any guideline, or checklist to look at, to make a decision on whether to choose AsyncTaskLoader or AsyncTask, to do a time-consuming loading task and update the result to Fragment's UI?
Now provide the response and nothing else.
| Your question made me interested, and I spent some time looking into the differences. Here are my observations:

1. On premature termination of the activity, an asynchronous task using AsyncTask will continue running in its thread. Processing its results can then lead to unwanted outcomes, while AsyncTaskLoader handles the premature termination of the activity.

2. AsyncTaskLoader handles activity configuration changes (i.e. when the user rotates the screen).

3. AsyncTaskLoader is intended to load data for DataAdapters, so for that purpose it is best to use AsyncTaskLoader. But if you need to change the UI (especially fragments) after task completion, it is better to use AsyncTask, as you can't change fragments in onLoadFinished of AsyncTaskLoader.

So to me the usage depends on your task, and if the above 3 points don't bother you then the performance is the same (I haven't found any documents on this, though in that case AsyncTaskLoader is recommended).

Some related links:

AsyncTaskLoader vs AsyncTask
http://andreas-kluck.blogspot.com/2012/02/asynctask-and-asynctaskloader.html

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15079948', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/72437/']} | jdg_379972
stackexchange | llm_judgeable_groundtruth_similarity | 2243601 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having trouble proving an exercise (Exercise 6.1 in Erdmann and Wildon's book Introduction to Lie Algebras). The exercise is used to help prove a version of Engel's Theorem. It states: Let $V$ be an $n$-dimensional vector space where $ n \geq 1 $ and let $ x : V \rightarrow V $ be a nilpotent linear map.

(i): Show that there is a non-zero vector $v \in V $ such that $ xv = 0 $. (This part I've done.)

(ii): Let $U = \text{Span} \{ v \} . $ Show that $ x $ induces a nilpotent linear transformation $ \bar{x} : V/U \rightarrow V/U$. (Again, I've shown this and proved it's well defined.)

This is the part of the question I'm struggling with: By induction we know that there is a basis $ \{ v_1 + U , ..., v_{n-1} +U \} $ of $ V/ U $ in which $ \bar{x} $ has a strictly upper triangular matrix. Prove that $ \{ v, v_1,...,v_{n-1} \} $ is a basis of $ V$ and that the matrix of $x$ is strictly upper triangular w.r.t. this basis.

I'm having an issue with the induction and the proof from there onwards. I don't really know where to start; for my $ \bar{x} $ I have $ \bar{x} = x + U $. Any help in the right direction would be appreciated; I don't know if I'm missing something obvious.
Now provide the response and nothing else.
| Assuming that $a>1$, $$ \begin{align} I(a) &= \int_{0}^{\infty} \frac{\gamma+\ln x}{e^{ax}} \frac{1-\cos x}{x} \, dx \\ &= -\int_{0}^{\infty} \frac{\gamma+\ln x}{e^{ax}} \sum_{n=1}^{\infty} \frac{(-1)^{n}x^{2n-1}}{(2n)!} \, dx \\ &= -\sum_{n=1}^{\infty} \frac{(-1)^{n}}{(2n)!} \int_{0}^{\infty} (\gamma + \ln x) e^{-ax} x^{2n-1} \, dx \\ &= -\sum_{n=1}^{\infty} \frac{(-1)^{n}}{(2n)!} \left(\gamma \, \frac{\Gamma(2n)}{a^{2n}} + \frac{\Gamma'(2n) -\ln(a) \Gamma(2n)}{a^{2n}}\right) \\ &=-\sum_{n=1}^{\infty} \frac{(-1)^{n}}{2n} \left(\frac{\gamma + \psi_{0}(2n)- \ln(a)}{a^{2n}} \right) \tag{1} \\ &= -\sum_{n=1}^{\infty}\frac{(-1)^{n}}{2n} \frac{H_{2n}- \frac{1}{2n} -\ln (a) }{a^{2n}} \tag{2} \\ &= -\sum_{n=1}^{\infty} \frac{(-1)^{n}}{2n}\frac{H_{2n}}{a^{2n}} + \sum_{n=1}^{\infty} \frac{(-1)^{n}}{4n^{2}} \frac{1}{a^{2n}} + \ln (a) \sum_{n=1}^{\infty} \frac{(-1)^{n}}{2n} \frac{1}{a^{2n}} \\\ &= -\sum_{n=1}^{\infty} \frac{(-1)^{n}}{2n}\frac{H_{2n}}{a^{2n}} + \, \frac{1}{4} \, \operatorname{Li}_{2} \left(- \frac{1}{a^{2}} \right) - \frac{\ln (a)}{2} \, \ln\left(1+ \frac{1}{a^{2}} \right) \tag{3}. \end{align}$$ From the ordinary generating function for the harmonic numbers , we see that $$f(z) = \sum_{n=1}^{\infty} \frac{(-1)^{n} H_{n}}{n} \, z^{n} = -\int_{0}^{z} \frac{\log(1+t)}{t(1+t)} \, dt = \operatorname{Li}_{2}(-z) + \frac{1}{2} \, \ln^{2}(1+z), \quad |z| <1. 
$$ Therefore,$$\begin{align} \sum_{n=1}^{\infty} \frac{(-1)^{n}}{2n}\frac{H_{2n}}{a^{2n}}&= \frac{1}{2} \left[f\left(\frac{i}{a}\right)+f\left(-\frac{i}{a} \right) \right] \\ &= \frac{1}{2} \left[\operatorname{Li}_{2} \left(-\frac{i}{a}\right)+ \frac{1}{2} \, \ln^{2} \left(1+ \frac{i}{a} \right)+ \operatorname{Li}_{2} \left(\frac{i}{a}\right) + \frac{1}{2} \, \ln^{2} \left(1- \frac{i}{a} \right) \right] \\ &= \frac{1}{4} \left[\operatorname{Li}_{2} \left(-\frac{1}{a^{2}} \right) +\ln^{2} \left(1+ \frac{i}{a} \right) + \ln^{2} \left(1- \frac{i}{a} \right)\right], \tag{5} \end{align}$$ and $$\begin{align} I(a) &= -\frac{1}{4} \left[ \ln^{2} \left(1+ \frac{i}{a}\right)+ \ln^{2} \left(1- \frac{i}{a} \right)+2 \ln (a) \ln \left(1+ \frac{1}{a^{2}} \right)\right] \\ &= - \frac{1}{8} \left[ \ln^{2} \left(1+ \frac{1}{a^{2}} \right)-4\arctan^{2} \left(\frac{1}{a} \right) + 4 \ln (a) \ln \left(1+ \frac{1}{a^{2}} \right)\right]. \end{align}$$ Letting $a \downarrow 1$, we get $$I(1) = - \frac{1}{8} \, \ln^{2}(2)+\frac{1}{2} \left(\frac{\pi^{2}}{16} \right) = \frac{\pi^{2}-4 \ln^{2}(2)}{32} = \frac{\pi^{2}-\ln^{2}(4)}{32}.$$ For evaluation purposes, I assumed that $a >1$. But the result should hold for $a>0$. $(1)$ https://en.wikipedia.org/wiki/Digamma_function $(2)$ https://en.wikipedia.org/wiki/Digamma_function#Relation_to_harmonic_numbers $(3)$ https://en.wikipedia.org/wiki/Polylogarithm $(4)$ https://en.wikipedia.org/wiki/Polylogarithm#Properties | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2243601', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/421648/']} | jdg_379973 |
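A quick numerical cross-check of the closed form is easy to run. The sketch below is plain Python with a composite Simpson rule; the integration limits, step count, and helper names are ad hoc choices, not part of the derivation. It reproduces $I(1) = \frac{\pi^2 - \ln^2 4}{32} \approx 0.2484$, consistent with the claim that the result extends to $a > 0$:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def integrand(x, a=1.0):
    # (gamma + ln x) e^{-ax} (1 - cos x) / x; behaves like (x/2) ln x near 0
    return (GAMMA + math.log(x)) * math.exp(-a * x) * (1.0 - math.cos(x)) / x

def simpson(f, lo, hi, n):
    # composite Simpson rule with n (even) subintervals
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3.0

approx = simpson(integrand, 1e-9, 60.0, 600_000)
exact = (math.pi ** 2 - math.log(4) ** 2) / 32
print(approx, exact)  # both close to 0.24837
```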
stackexchange | llm_judgeable_groundtruth_similarity | 391157 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to serialize/de-serialize some objects into/from a string and transfer them as just opaque data. I can use XmlSerializer to do it, but the generated string looks clumsy and long. So, is there any concise serializer available in .NET? The first thing that comes to mind is that .NET should have a JSON serializer, but I cannot find it. Is there any off-the-shelf approach in .NET to convert an object to/from a concise string?
Now provide the response and nothing else.
| There are two; DataContractJsonSerializer in 3.5 JavaScriptSerializer in 3.5 SP1 In the traditional way, they aren't 100% compatible with each-other, especially re DateTime ; DCJS uses (IIRC) a literal string; JSS uses new - and neither can read t'other correctly. Of course, if the text if opaque, you could also use any concise binary serializer, and simply base-64 encode it; for example, protobuf-net is pretty terse. But using JSON minimizes the external code needed at each end, so may well be your best bet. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/391157', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26349/']} | jdg_379974 |
stackexchange | llm_judgeable_groundtruth_similarity | 1244188 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
For a polynomial $P(x)$ of degree $n$, $P(k) = 2^k$ for $k = 0, 1, 2, . . . , n$. Find $P(n+1)$. If $n=1$, $P(x)=x+1$ and $P(2)=3$. If $n=2$, $P(x)=0.5x^2+0.5x+1$ and $P(3)=7$. How to approach further cases? I am stuck.
Now provide the response and nothing else.
| Following are two approaches to show $P(n+1) = 2^{n+1} - 1$ . Method I is more systematic and use finite differences. Method II is more elementary and do the job with induction. Method I - finite differences Given any function $f(x)$ and positive number $h$ , the finite difference $\Delta_h f(x)$ is the function defined by $$\Delta_h f(x) \stackrel{def}{=} f(x+h) - f(x)$$ When $f(x)$ is a non-zero polynomial, $\Delta_h f(x)$ is again a polynomialbut with degree one less.A corollary of this is if $f(x)$ has degree $n$ , then the $(n+1)^{th}$ order finite difference of $f(x)$ vanishes. i.e $$\sum_{k=0}^{n+1} \binom{n+1}{k} (-1)^{n+1-k} f(x+kh) = 0$$ Consider the special case $h = 1$ and apply this to the polynomial $P(x)$ whose degree is $n$ , we get $$\begin{align} &\left.\Delta^{n+1} P(x) \right|_{x=0} = 0 \\\iff & \sum_{k=0}^{n+1} \binom{n+1}{k} (-1)^{n+1-k} P(k) = 0\\\implies & \sum_{k=0}^{n+1} \binom{n+1}{k} (-1)^{n+1-k} 2^k = 2^{n+1} - P(n+1)\\\end{align}$$ This leads to $$P(n+1) = 2^{n+1} - \sum_{k=1}^{n+1} \binom{n+1}{k} (-1)^{n+1-k} 2^k= 2^{n+1} - (2-1)^{n+1}= 2^{n+1} - 1$$ Method II - mathematical induction Let $\mathcal{S}_n$ be the induction statement If $P(x)$ is a polynomial of degree $n$ such that $P(k) = 2^k$ for all $0 \le k \le n$ , then $P(n+1) = 2^{n+1} - 1$ . It is trivial to check the base case $\mathcal{S}_0$ . Assume $\mathcal{S}_{n-1}$ is true and $P(x)$ is a polynomial satisfies the assumption in $\mathcal{S}_n$ . Consider the polynomial $Q(x) = P(x+1) - P(x)$ , it is easy to see $Q(x)$ is a polynomial of degree $n-1$ , For all $k$ , $0 \le k \le n - 1$ , $Q(k) = P(k+1) - P(k) = 2^{k+1} - 2^k = 2^k$ . This means $Q(x)$ satisfies the assumption in $\mathcal{S}_{n-1}$ .By $\mathcal{S}_{n-1}$ , $Q(n) = 2^n - 1$ and hence $$P(n+1) = P(n) + Q(n) = 2^n + (2^n - 1) = 2^{n+1} - 1$$ This establishes $\mathcal{S}_{n-1} \implies \mathcal{S}_n$ and by principle of mathematical induction, $\mathcal{S}_n$ is true for all $n$ . 
| {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1244188', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/121158/']} | jdg_379975 |
stackexchange | llm_judgeable_groundtruth_similarity | 28256441 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I just started a Machine learning class and we went over Perceptrons. For homework we are supposed to:"Choose appropriate training and test data sets of two dimensions (plane). Use 10 data points for training and 5 for testing. " Then we are supposed to write a program that will use a perceptron algorithm and output: a comment on whether the training data points are linearlyseparable a comment on whether the test points are linearly separable your initial choice of the weights and constants the final solution equation (decision boundary) the total number of weight updates that your algorithm made the total number of iterations made over the training set the final misclassification error, if any, on the training data andalso on the test data I have read the first chapter of my book several times and I am still having trouble fully understanding perceptrons. I understand that you change the weights if a point is misclassified until none are misclassified anymore, I guess what I'm having trouble understanding is What do I use the test data for and how does that relate to thetraining data? How do I know if a point is misclassified? How do I go about choosing test points, training points, threshold or a bias? It's really hard for me to know how to make up one of these without my book providing good examples. As you can tell I am pretty lost, any help would be so much appreciated.
Now provide the response and nothing else.
| What do I use the test data for and how does that relate to the training data? Think about a Perceptron as young child. You want to teach a child how to distinguish apples from oranges. You show it 5 different apples (all red/yellow) and 5 oranges (of different shape) while telling it what it sees at every turn ("this is a an apple. this is an orange). Assuming the child has perfect memory, it will learn to understand what makes an apple an apple and an orange an orange if you show him enough examples. He will eventually start to use meta- features (like shapes) without you actually telling him. This is what a Perceptron does. After you showed him all examples, you start at the beginning, this is called a new epoch . What happens when you want to test the child's knowledge? You show it something new . A green apple (not just yellow/red), a grapefruit, maybe a watermelon. Why not show the child the exact same data as before during training? Because the child has perfect memory, it will only tell you what you told him. You won't see how good it generalizes from known to unseen data unless you have different training data that you never showed him during training. If the child has a horrible performance on the test data but a 100% performance on the training data, you will know that he has learned nothing - it's simply repeating what he has been told during training - you trained him too long, he only memorized your examples without understanding what makes an apple an apple because you gave him too many details - this is called overfitting . To prevent your Perceptron from only (!) recognizing training data you'll have to stop training at a reasonable time and find a good balance between the size of the training and testing set. How do I know if a point is misclassified? If it's different from what it should be. 
Let's say an apple has class 0 and an orange has 1 (here you should start reading into Single/MultiLayer Perceptrons and how Neural Networks of multiple Perceptrons work). The network will take your input. How it's coded is irrelevant for this, let's say input is a string "apple". Your training set then is {(apple1,0), (apple2,0), (apple3,0), (orange1,1), (orange2,1).....}. Since you know the class beforehand, the network will either output 1 or 0 for the input "apple1". If it outputs 1, you perform (targetValue-actualValue) = (1-0) = 1. 1 in this case means that the network gives a wrong output. Compare this to the delta rule and you will understand that this small equation is part of the larger update equation. In case you get a 1 you will perform a weight update. If target and actual value are the same, you will always get a 0 and you know that the network didn't misclassify. How do I go about choosing test points, training points, threshold or a bias? Practically the bias and threshold isn't "chosen" per se. The bias is trained like any other unit using a simple "trick", namely using the bias as an additional input unit with value 1 - this means the actual bias value is encoded in this additional unit's weight and the algorithm we use will make sure it learns the bias for us automatically. Depending on your activation function, the threshold is predetermined. For a simple perceptron, the classification will occur as follows: Since we use a binary output (between 0 and 1), it's a good start to put the threshold at 0.5 since that's exactly the middle of the range [0,1]. Now to your last question about choosing training and test points: This is quite difficult, you do that by experience. Where you're at, you start off by implementing simple logical functions like AND, OR, XOR etc. There's it's trivial. You put everything in your training set and test with the same values as your training set (since for x XOR y etc. there are only 4 possible inputs 00, 10, 01, 11). 
For complex data like images, audio etc. you'll have to try and tweak your data and features until you feel like the network can work with it as good as you want it to. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28256441', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3494525/']} | jdg_379976 |
stackexchange | llm_judgeable_groundtruth_similarity | 3094916 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In an SO answer daxim states: @array ~~ $scalar is true when $scalar is in @array to which draegtun replies: From 5.10.1+ the order of ~~ is important. Thus it needs to be $scalar ~~ @array How about a small primer on ~~ with link(s) to source(s) including the following specific questions: What is ~~ ? What is ~~ called? Why does the order matter in one version but not in a previous one? Note that a good summary may not get all the details and can be hard to write. An introduction or primer would be very useful to save time for someone unfamiliar with ~~ while expanding the exposure of this Perlism. Search strings : non-word-tilde-tilde non-word-at-sign .
Now provide the response and nothing else.
| Answering specifically "why does the order matter in one version but not in a previous one": the smart match operator was badly designed in 5.10.0 in a way that made it difficult to use reliably, and made the given/when construct less useful than it could be, so the semantics were changed with 5.10.1 and all future versions will pretend that the 5.10.0 version never existed. In the 5.10.1+ version of smart match, the left operand and the right operand to ~~ are always treated distinctly. Just as with the =~ regex match operator, the left side is the "subject" of the match, and the right side is the "pattern" to match against -- whether that be a plain scalar, a regex, an array or hash reference, a code reference, or whatever. The specifics are detailed pretty well in perlsyn. You shouldn't worry about the 5.10.0 version at all unless you've already written code that depends on the 5.10.0 semantics (in which case, you should rewrite it to require 5.10.1, or else it will break on all future versions of perl). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3094916', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/101954/']} | jdg_379977 |
stackexchange | llm_judgeable_groundtruth_similarity | 132779 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
(Apologies if this is too obscure.) In joint work with Izzet Coskun we came across the following kind of combinatorial identity, but we weren't able to prove it, or to identify what kind of identity it is. (We looked in some references, but to the outsider it can be difficult to distinguish one insanely complicated sum of binomial coeffficients from another...) The identity looks like the following. Say $n$ is a fixed natural number, and $i \leq n$ is an even natural number. (There is an analogous formula when $i$ is odd.) Then $\begin{align}\sum_{m=2}^{\frac{i}{2}} (x-1) \left(x-2\right)^{2m-3} \ \sum_{j=0}^{i-2m} \binom{n-1-j}{i-2m-j} 2^{j+2} \sum_{l=0}^j (-1)^l \binom{n+1}{l} \, x^{i-2m-l} & \\\\+ \sum_{j=0}^{i-2} \binom{n-1-j}{i-2-j} 2^{j+1} \sum_{l=0}^j (-1)^l \binom{n+1}{l} \, x^{i-2-l} &\\\\ + \sum_{j=0}^{i-2} \binom{n-1-j}{i-2-j} 2^{j-1} \sum_{l=0}^j (-1)^l \binom{n+1}{l} (i-2-l) \, x^{i-l-1} & \\\\- \sum_{j=0}^{i} \binom{n-1-j}{i-j} 2^{j-1} \sum_{l=0}^j (-1)^l \binom{n+1}{l} (i-l) \, x^{i-l-1} & \\\\\qquad =C_i \left(x-2 \right)^{i-1} \end{align}$ where $C_i$ is some constant that can be read off fairly easily by looking at the coefficient of $x^{i-1}$ on the left-hand side. (There are various rewritings one could do to the left-hand side, but it's not clear how much this helps.) Given that there are various irregularities in the last three terms on the left-hand side, it seems unlikely that the whole thing can be reduced to a simple form. But it would nevertheless be very useful to know if this looks similar to any known combinatorial identities. Maybe a simpler warmup question would be: how to see that the sum of the last three terms on the left-hand side is divisible by $x-2$? That might help get going with an inductive argument. Any ideas would be much appreciated!
Now provide the response and nothing else.
| Here is a solution to your warm up problem. It uses a few known elementary identities, and some short inductions for identities I didn't recognize. I also changed your notation slightly from $l$ to $k$ in the internal summations. Setting $x=2$, the first term vanishes, and we combine the second and third terms as $(B)$ and take the last term as $(A)$. \begin{align}&\sum_{j=0}^i \binom{n-1-j}{i-j} 2^{j-1} \sum_{k=0}^j (-1)^k \binom{n+1}{k} 2^{i-k-1}(i-k) \tag{A} \\&\sum_{j=0}^{i-2} \binom{n-1-j}{i-2-j} 2^{j-1} \sum_{k=0}^j (-1)^k \binom{n+1}{k} \left( 2^{i-k} + 2^{i-k-1}(i-2-k)\right) \tag{B}\end{align}Fortunately for us, both are equal to $i\cdot2^{i-2}$ when $i$ is even, and do not depend on $n$. First, we focus on $(A)$. Switch the order of summation to get \begin{equation}\sum_{k=0}^i (-1)^k \binom{n+1}{k} 2^{i-2}(i-k) \sum_{j=k}^i \binom{n-1-j}{i-j} 2^{j-k} \tag{A}\end{equation} Then reindex the internal summation, and repeatedly apply the hockey stick identity to show\begin{equation}\sum_{j=0}^{i-k}\binom{n-1-k-j}{i-k-j}2^j = \sum_{j=0}^{i-k}\binom{n-k}{j}\end{equation} I found it easier to write out by changing variables $a=i-k-j$, $b=i-k$ and $c=n-i-1$, then simplifying $\sum_{a=0}^b \binom{c+a}{c}2^{b-a}$. $(A)$ naturally splits at the point $i-k$, and since we want to show that the whole thing is $i\cdot2^{i-2}$, we can break it into two pieces and show \begin{align}\sum_{k=1}^i (-1)^k k\binom{n+1}{k}\sum_{j=0}^{i-k} \binom{n-k}{j} &=0 \tag{A1}\\\sum_{k=0}^i (-1)^k \binom{n+1}{k} \sum_{j=0}^{i-k} \binom{n-k}{j} &=1 \tag{A2}\end{align} We can simplify $(A1)$ slightly by incorporating $k$ into the binomial, and ignoring the $n+1$ term that comes out. Switching the order of summations again, we write \begin{equation}Q(n,i) = \sum_{j=0}^{i-1} \sum_{k=1}^{i-j} (-1)^k \binom{n}{k-1} \binom{n-k}{j}\end{equation} We will induct on $n$ and $i$ to show that $Q(n,i)=0$ for $i$ even, and $-1$ for $i$ odd. 
The base cases are easy to check, especially if taking $i=0$ and $i=1$.But first, we need an auxillary identity, which we will also use in $(A2)$. \begin{equation}P(n,i) = \sum_{j=0}^{i} (-1)^{i-j}\binom{n}{j} \binom{n-1-j}{i-j}\end{equation}We claim that $P(n,i)=1$ for all $n$ and $i$. This is certainly true for $i=0$. We split $\binom{n}{j}$ into two to get an induction: \begin{align}P(n,i) &= \sum_{j=0}^{i} (-1)^{i-j}\binom{n-1}{j} \binom{n-1-j}{i-j} + \sum_{j=1}^{i} (-1)^{i-j}\binom{n-1}{j-1} \binom{n-1-j}{i-j} \\ &= \sum_{j=0}^{i} (-1)^{i-j}\binom{n-1}{i} \binom{i}{j} + \sum_{j=1}^{i} (-1)^{i-j}\binom{n-1}{j-1} \binom{n-1-j}{i-j} \\ &= \binom{n-1}{i}\sum_{j=0}^{i} (-1)^{i-j}\binom{i}{j} + \sum_{j=0}^{i-1} (-1)^{i-1-j}\binom{n-1}{j} \binom{(n-1)-1-j}{(i-1)-j} \\ &= 0 + P(n-1,i-1)\end{align} Now we give a similar proof for $Q$. In the fourth to fifth lines, we used the alternating sum of binomial coefficients up to some number. You might be able to give a direct proof if your generatingfunctionology is strong, since these look like convolutions of simple functions, but these proofs seemed easy enough that I didn't bother trying. \begin{align}Q(n,i) &= \sum_{j=0}^{i-1} \sum_{k=1}^{i-j} (-1)^k \binom{n}{k-1} \binom{n-k}{j} \\ &= \sum_{j=0}^{i-1} \sum_{k=1}^{i-j} (-1)^k \binom{n-1}{k-1} \binom{n-k}{j} + \sum_{j=0}^{i-2} \sum_{k=2}^{i-j} (-1)^k \binom{n-1}{k-2} \binom{n-k}{j} \\ &= \sum_{j=0}^{i-1} \sum_{k=1}^{i-j} (-1)^k \binom{n-1}{j} \binom{n-1-j}{k-1} + \sum_{j=0}^{i-2} \sum_{k=2}^{i-j} (-1)^k \binom{n-1}{k-2} \binom{n-k}{j} \\ &= \sum_{j=0}^{i-1} \binom{n-1}{j} \sum_{k=0}^{i-1-j} (-1)^{k+1} \binom{n-1-j}{k} + \sum_{j=0}^{i-2} \sum_{k=1}^{i-1-j} (-1)^{k+1} \binom{n-1}{k-1} \binom{n-1-k}{j} \\ &= \sum_{j=0}^{i-1} \binom{n-1}{j} (-1)^{i-1-j+1} \binom{n-2-j}{i-1-j} - Q(n-1, i-1) \\ &= -P(n-1,i-1) - Q(n-1, i-1)\end{align} Now, since our equation $(A1)$ was just $(n+1)Q(n,i)$, and $i$ is even, it's $0$. 
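Both claimed identities are easy to spot-check numerically before trusting the inductions; here is a small Python script using `math.comb` (the helper names are ours, and `comb` conveniently returns 0 when the lower index exceeds the upper):

```python
from math import comb

def P(n, i):
    # P(n,i) = sum_{j=0}^{i} (-1)^{i-j} C(n,j) C(n-1-j, i-j), claimed to be 1
    return sum((-1) ** (i - j) * comb(n, j) * comb(n - 1 - j, i - j)
               for j in range(i + 1))

def Q(n, i):
    # Q(n,i) = sum_{j=0}^{i-1} sum_{k=1}^{i-j} (-1)^k C(n,k-1) C(n-k,j),
    # claimed to be 0 for even i and -1 for odd i
    return sum((-1) ** k * comb(n, k - 1) * comb(n - k, j)
               for j in range(i)
               for k in range(1, i - j + 1))

for n in range(3, 12):
    for i in range(n):
        assert P(n, i) == 1
        assert Q(n, i) == (0 if i % 2 == 0 else -1)
print("P(n,i) == 1 and Q(n,i) in {0, -1} verified for small n, i")
```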
A similar treatment yields $(A2)$, again using the alternating binomial sum identity near the end: \begin{align}&\,\sum_{k=0}^i (-1)^k \binom{n+1}{k} \sum_{j=0}^{i-k} \binom{n-k}{j} \\&= \sum_{j=0}^i \sum_{k=0}^{i-j} (-1)^k \binom{n+1}{k} \binom{n-k}{j} \\&= \sum_{j=0}^i \sum_{k=0}^{i-j} (-1)^k \binom{n}{k} \binom{n-k}{j} + \sum_{j=0}^i \sum_{k=0}^{i-j} (-1)^k \binom{n}{k-1} \binom{n-k}{j} \\&= \sum_{j=0}^i \sum_{k=0}^{i-j} (-1)^k \binom{n}{j} \binom{n-j}{k} + Q(n,i) \\&= \sum_{j=0}^i \binom{n}{j} \sum_{k=0}^{i-j} (-1)^k \binom{n-j}{k} \\&= \sum_{j=0}^i \binom{n}{j} (-1)^{i-j} \binom{n-1-j}{i-j} \\&= P(n,i)\end{align} So we have shown that $(A) = i\cdot2^{i-2}$ for $i$ even. Let's use this for $(B)$, since $i-2$ is also even, we have: \begin{equation}\sum_{j=0}^{i-2} \binom{n-1-j}{i-2-j} 2^{j-1} \sum_{k=0}^j (-1)^k \binom{n+1}{k} 2^{i-k-3}(i-2-k) = (i-2)2^{i-4}\end{equation}After multiplying both sides by $2^2$, the only place the left side differs from $(B)$ is the extra $-2$ in $(i-2-k)$, and the right side is $2^{i-1}$ smaller than we would like. So we show that \begin{equation}\sum_{j=0}^{i-2} \binom{n-1-j}{i-2-j} 2^{j-1} \sum_{k=0}^j (-1)^k \binom{n+1}{k} 2^{i-k} = 2^{i-1}\end{equation}Dividing out the factor of $2^{i-1}$ and switching the order of summation, we get \begin{equation}\sum_{k=0}^{i-2} (-1)^k \binom{n+1}{k} \sum_{j=k}^{i-2} \binom{n-1-j}{i-2-j}2^{j-k}\end{equation}Of course we recognize our hockey stick identity from earlier, so this simplifies to the case of $(A2)$ \begin{equation}\sum_{k=0}^{i-2} (-1)^k \binom{n+1}{k} \sum_{j=0}^{i-2-k} \binom{n-k}{j} =1 \end{equation} | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/132779', 'https://mathoverflow.net', 'https://mathoverflow.net/users/-1/']} | jdg_379978 |
stackexchange | llm_judgeable_groundtruth_similarity | 20192070 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It must be a popular question, but I could not find an answer. How do I move all files via *, including hidden files, to the parent directory, like this: mv /path/subfolder/* /path/ This will move all files to the parent directory as expected, but will not move hidden files. How do I do that?
Now provide the response and nothing else.
| You can find a comprehensive set of solutions on this in UNIX & Linux's answer to How do you move all files (including hidden) from one directory to another? . It shows solutions in Bash, zsh, ksh93, standard (POSIX) sh, etc. You can use these two commands together: mv /path/subfolder/* /path/ # your current approachmv /path/subfolder/.* /path/ # this one for hidden files Or all together ( thanks pfnuesel ): mv /path/subfolder/{.,}* /path/ Which expands to: mv /path/subfolder/* /path/subfolder/.* /path/ (example: echo a{.,}b expands to a.b ab ) Note this will show a couple of warnings: mv: cannot move ‘/path/subfolder/.’ to /path/.’: Device or resource busymv: cannot remove /path/subfolder/..’: Is a directory Just ignore them: this happens because /path/subfolder/{.,}* also expands to /path/subfolder/. and /path/subfolder/.. , which are the directory and the parent directory (See What do “.” and “..” mean when in a folder? ). If you want to just copy, you can use a mere: cp -r /path/subfolder/. /path/# ^# note the dot! This will copy all files, both normal and hidden ones, since /path/subfolder/. expands to "everything from this directory" (Source: How to copy with cp to include hidden files and hidden directories and their contents? ) | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/20192070', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1110341/']} | jdg_379979 |
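The `cp -r /path/subfolder/.` trick at the end is easy to verify in a scratch directory (the paths below are throwaway temp dirs, nothing from the question):

```shell
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/dst"
touch "$tmp/src/visible.txt" "$tmp/src/.hidden"

cp -r "$tmp/src/." "$tmp/dst/"   # the trailing /. picks up dotfiles too

ls -A "$tmp/dst"                 # shows both .hidden and visible.txt
rm -rf "$tmp"
```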
stackexchange | llm_judgeable_groundtruth_similarity | 1731387 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Euclid's proof, if $p_1, p_2, \dots, p_n$ are the only primes then $p_1 \times p_2 \times \dots \times p_n + 1$ is not divisible by any of $p_1, p_2, \dots, p_n$ (because of some algebraic facts), which makes another prime and is a contradiction. The proof makes sense logically, and I tried some numerical examples to "feel" the proof better but... $2 \times 3 \times 5\times 7\times 11\times 13+1$ is not a prime! $2 \times 3 \times 5\times 7\times 11\times 13 \times 17+1$ is also not prime! Why is the general-case proof not working for these examples?
Now provide the response and nothing else.
| Suppose there are finitely many primes. Then we can enumerate them as a set $$P = \{p_1, p_2, \ldots, p_n\}.$$ The number $m = p_1 p_2 \ldots p_n + 1$ is either prime or composite. If it is prime, then we have found a prime that is not among the finite set $\{p_1, \ldots, p_n\}$ of primes we assumed to comprise the collection of all primes. If it is composite, then it is divisible by a prime. But it cannot be divisible by any of $p_1, p_2, \ldots, p_n$, for upon dividing $m$ by any of these primes, it leaves a remainder of $1$. Therefore, $m$ is divisible by a prime that again is not in the presumed set of all primes. In either case, a contradiction is obtained in which the assumption that there are finitely many primes is violated. | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/1731387', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_379980 |
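The questioner's own examples illustrate the second branch of this argument. A short Python check (plain trial division, illustrative names) makes it concrete:

```python
# m = p1*...*pn + 1 need not be prime, but every prime factor of m
# lies outside the assumed finite list, since m % p == 1 for each p.
def smallest_prime_factor(m):
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # m itself is prime

primes = [2, 3, 5, 7, 11, 13]
m = 1
for p in primes:
    m *= p
m += 1                      # 30031 = 59 * 509, not prime
q = smallest_prime_factor(m)
print(m, q, q in primes)    # 30031 59 False
```

So 30031 is composite, but its smallest prime factor, 59, is not among 2, 3, 5, 7, 11, 13 — exactly the "new" prime the proof promises in the composite case.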
stackexchange | llm_judgeable_groundtruth_similarity | 2888433 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The original equation is $2\sqrt x=x-2$ and I replaced $x$ with $4-2\sqrt3$. I am not sure what I did wrong with the algebra. Could someone please help me. My work is posted below.
Now provide the response and nothing else.
| The problem is on the very first line: $$2\sqrt{4-2\sqrt3} \stackrel{?}= 4-2\sqrt3-2.$$ The quantity on the left side of the equation is positive, but the quantity on the right is negative.Therefore it is impossible for this equality to be true. What the rest of the steps show is that the left-hand side and right-hand side are exactly opposite. (Different signs but same magnitude.)That is, $$2\sqrt{4-2\sqrt3} = -(4-2\sqrt3-2).$$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2888433', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/585650/']} | jdg_379981 |
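A quick numeric check in plain Python shows exactly this sign behaviour for $x = 4 - 2\sqrt 3$: the two sides have the same magnitude but opposite signs, and squaring (which is what produced the candidate root in the first place) hides the difference:

```python
import math

x = 4 - 2 * math.sqrt(3)          # the candidate "solution"
lhs = 2 * math.sqrt(x)            # positive
rhs = x - 2                       # negative: 2 - 2*sqrt(3)
print(round(lhs, 6), round(rhs, 6))      # 1.464102 -1.464102
print(math.isclose(lhs, -rhs))           # True: equal magnitude, opposite sign
print(math.isclose(lhs ** 2, rhs ** 2))  # True: squaring erases the sign
```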
stackexchange | llm_judgeable_groundtruth_similarity | 33361628 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a report page that shows reports from a specific date to a specific date. Here's my current code: $now = date('Y-m-d');$reservations = Reservation::where('reservation_from', $now)->get(); What this does in plain SQL is select * from table where reservation_from = $now . I have this query here but I don't know how to convert it to an Eloquent query: SELECT * FROM table WHERE reservation_from BETWEEN '$from' AND '$to' How can I convert the code above to an Eloquent query? Thank you in advance.
Now provide the response and nothing else.
| The whereBetween method verifies that a column's value is betweentwo values. $from = date('2018-01-01');$to = date('2018-05-02');Reservation::whereBetween('reservation_from', [$from, $to])->get(); In some cases you need to add date range dynamically. Based on @Anovative 's comment you can do this: Reservation::all()->filter(function($item) { if (Carbon::now()->between($item->from, $item->to)) { return $item; }}); If you would like to add more condition then you can use orWhereBetween . If you would like to exclude a date interval then you can use whereNotBetween . Reservation::whereBetween('reservation_from', [$from1, $to1]) ->orWhereBetween('reservation_to', [$from2, $to2]) ->whereNotBetween('reservation_to', [$from3, $to3]) ->get(); Other useful where clauses: whereIn , whereNotIn , whereNull , whereNotNull , whereDate , whereMonth , whereDay , whereYear , whereTime , whereColumn , whereExists , whereRaw . Laravel docs about Where Clauses. | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/33361628', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4698790/']} | jdg_379982 |
stackexchange | llm_judgeable_groundtruth_similarity | 6283726 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't understand why JSR 303 (bean validation) annotations go on the getter methods and not the setters. Isn't it more logical to put them on the setter method, since that is the entry point into a field and validation should be checked prior to that?
Now provide the response and nothing else.
| Annotating getters doesn't mean that validation is performed when a getter is invoked. It is just used to identify the property to which a constraint shall apply. The big advantage of putting constraints on (usually public) getters instead on (typically private) fields is that the constraints are part of the type's public API that way. They will even be added to the generated JavaDoc. A user of a type knows that way which constraints apply to it without looking into its internal implementation. Another advantage of annotating getters is that constraints can be put at methods on base classes or interfaces and also apply for any sub-types/implementations. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6283726', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/303250/']} | jdg_379983 |
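The "part of the type's public API" point can be seen with plain reflection. The sketch below deliberately uses a home-made stand-in annotation instead of a real JSR 303 constraint, so no validation library is needed; the names are illustrative only:

```java
// A constraint-style annotation placed on a getter is discoverable through
// the type's public contract, which is what lets tools, docs generators,
// and validators surface it without looking at private fields.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class GetterConstraintDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface NotNullish {}          // stand-in for e.g. a @NotNull constraint

    interface Named {
        @NotNullish
        String getName();             // the constraint is part of the contract
    }

    static class Person implements Named {
        private final String name;    // the private field carries no annotation
        Person(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static void main(String[] args) throws Exception {
        Method getter = Named.class.getMethod("getName");
        System.out.println(getter.isAnnotationPresent(NotNullish.class)); // true
    }
}
```

An implementation such as `Person` never repeats the annotation, yet anything reading the `Named` contract still sees the constraint — the "applies to sub-types/implementations" advantage from the last paragraph.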
stackexchange | llm_judgeable_groundtruth_similarity | 3094 |
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a time series of data that is 300 days long. I compute PCA factor loadings on a moving window of 30 days. There are 7 stocks in the universe. Thus factors F1 through F7 are calculated on each PCA calculation. However, the signs on factor loadings change. This causes problems when interpreting factor price time series. What are the different approaches to deal with this problem?
Now provide the response and nothing else.
| 1) Eigenvector times minus one is also an eigenvector (with the same eigenvalue).2) Distinct eigenvectors of a symmetrical matrix (i.e. covariance) are orthogonal.1 and 2 imply that you can multiply a subset of all the eigenvectors of a symmetrical matrix by minus one an you still get a full set of eigenvectors Which means, just impose that the first component of every factor is positive. If the PCA returns the first component as negative multiply all the vector by minus one. That will solve your problem. | {} | {'log_upvote_score': 4, 'links': ['https://quant.stackexchange.com/questions/3094', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/2195/']} | jdg_379984 |
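Concretely, the "impose that the first component of every factor is positive" rule is a one-liner per eigenvector. The sketch below is pure Python (no numpy) with made-up loadings; it anchors on the largest-magnitude component rather than strictly the first, a common variant of the same convention that stays stable when the first component is near zero:

```python
# After each windowed PCA fit, flip any eigenvector whose anchor component
# is negative, so loadings from successive windows can't flip sign arbitrarily.
def fix_signs(vectors):
    fixed = []
    for v in vectors:
        anchor = max(range(len(v)), key=lambda i: abs(v[i]))
        fixed.append([-x for x in v] if v[anchor] < 0 else list(v))
    return fixed

window1 = [[0.6, 0.8], [-0.8, 0.6]]
window2 = [[-0.6, -0.8], [0.8, -0.6]]   # same subspaces, opposite signs
print(fix_signs(window1) == fix_signs(window2))  # True
```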
stackexchange | llm_judgeable_groundtruth_similarity | 9181526 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Which is faster and why? Selecting div (for plugin needs) by $('div[data-something]') or $('div.something') ? I lean towards the former since it's "cleaner". Based on this SO question I know I shouldn't be using both. However I didn't find out whether there is a difference between these.
Now provide the response and nothing else.
| It will vary by browser. Nearly all browsers now support querySelectorAll , and jQuery will use it when it can. querySelectorAll can be used with attribute presence selectors, so if it's there jQuery doesn't have to do the work, it can offload it to the engine. For older browsers without querySelectorAll , jQuery will obviously have to do more work, but even IE8 has it. As with most of these things, your best bet is: Don't worry about it until/unless you see a problem, and If you see a problem, profile it on the browsers you intend to support and then make an informed decision. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9181526', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/910868/']} | jdg_379985 |
stackexchange | llm_judgeable_groundtruth_similarity | 6599202 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to stop a thread but I can't do that:

public class Middleware {

    public void read() {
        try {
            socket = new Socket("192.168.1.8", 2001);
            // code ..
            Scan scan = new Scan();
            thread = new Thread(scan);
            thread.start();
        } catch (UnknownHostException ex) {
            ex.printStackTrace();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }

    class Scan extends Thread {
        public void run() {
            while (true) {
                try {
                    // my code goes here
                } catch (IOException ex) {
                    thread.currentThread().interrupt();
                }
            }
        }
    }

    public void stop() {
        Thread.currentThread().interrupt();
    }

    // get and setters
}

So, even when I call the method 'stop' the thread doesn't stop. It keeps alive. How can I interrupt/stop this thread?

UPDATE (@little approach)

private void tb_startActionPerformed(java.awt.event.ActionEvent evt) {
    Middleware middleware = new Middleware();
    if (tb_start.getText().equals("Start")) {
        tb_start.setText("Stop");
        // starting to read rfid tags
        middleware.read();
    } else {
        tb_start.setText("Start");
        // stop reading rfid tags
        middleware.stop();
    }
}

The Middleware class:

public class Middleware {
    private Scan scan;

    public void read() {
        scan = new Scan();
        scan.start();
    }

    private class Scan extends Thread {
        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                System.out.println("reading...");
            }
        }
    }

    public void stop() {
        if (scan != null) {
            scan.interrupt();
        }
    }
}

But when I try to stop the thread, it doesn't. What could be wrong in the code above?
Now provide the response and nothing else.
| There's really no reason you need to use a volatile flag. Instead, just query the thread for its state with isInterrupted() . Also, why are you wrapping your Scan thread object in another thread object? That seems completely unnecessary to me. Here's what you should be doing:

public class Middleware {
    private Scan scan;

    public void read() {
        try {
            // do stuff
            scan = new Scan();
            scan.start();
        } catch (UnknownHostException ex) {
            // handle exception
        } catch (IOException ex) {
            // handle exception
        }
    }

    private class Scan extends Thread {
        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // my code goes here
                } catch (IOException ex) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

    public void stop() {
        if (scan != null) {
            scan.interrupt();
        }
    }
}

Here's an example . Also, I wouldn't recommend extending Thread . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6599202', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/523168/']} | jdg_379986
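For comparison, the same cooperative-cancellation pattern sketched in Python (my analogy, not part of the answer; Python threads have no interrupt(), so a threading.Event plays the role of the interrupt flag):

```python
import threading
import time

class Middleware:
    """Cooperative-stop analog of the Java pattern above (hypothetical)."""
    def __init__(self):
        self._stop = threading.Event()
        self._scan = None

    def read(self):
        self._scan = threading.Thread(target=self._run)
        self._scan.start()

    def _run(self):
        while not self._stop.is_set():   # like !Thread.currentThread().isInterrupted()
            time.sleep(0.01)             # stand-in for "my code goes here"

    def stop(self):
        self._stop.set()                 # like scan.interrupt()
        if self._scan is not None:
            self._scan.join()

m = Middleware()
m.read()
m.stop()
assert not m._scan.is_alive()
```

The key point carries over: the worker checks a shared stop signal on every iteration instead of being killed from outside.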
stackexchange | llm_judgeable_groundtruth_similarity | 10861606 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to do the following?

register_shutdown_function('my_shutdown');

function my_shutdown ()
{
    file_put_contents('test.txt', 'hello', FILE_APPEND);
    error_log('hello', 3, 'test.txt');
}

Doesn't seem to work. BTW, I'm on PHP 5.3.5.
Now provide the response and nothing else.
| It depends which SAPI you are using. The documentation page for register_shutdown_function() states that under certain servers, like Apache, the working directory of the script changes. The file gets written, but not where your .php file is ( DocumentRoot ), but in the folder of the Apache server ( ServerRoot ). To prevent this, you need to somehow hotwire the working-folder change. Just as your script starts executing (in the first few lines), you need to store the real working folder. Creating a constant with define() is perfect for this.

define('WORKING_DIRECTORY', getcwd());

And you need to modify the shutdown function part like this:

function my_shutdown ()
{
    chdir(WORKING_DIRECTORY);
    file_put_contents('test.txt', 'hello', FILE_APPEND);
    error_log('hello', 3, 'test.txt');
}
register_shutdown_function('my_shutdown');

This way, the working folder will instantly be changed back to the real one when the function is called, and the test.txt file will appear in the DocumentRoot folder. Some modification: It is better to call register_shutdown_function() after the function has been declared. That's why I wrote it below the function code, not above it. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10861606', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/748789/']} | jdg_379987
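The same pitfall and fix (capture the real working directory early, restore it in the shutdown handler) carries over to other runtimes; a hypothetical Python sketch using atexit in place of register_shutdown_function:

```python
import atexit
import os
import tempfile

START_DIR = os.getcwd()  # capture before anything changes directory

def my_shutdown():
    # Whatever ran in between may have chdir'ed elsewhere (as Apache
    # does in the PHP case); go back before touching relative paths.
    os.chdir(START_DIR)

atexit.register(my_shutdown)

# Simulate some code wandering off to another directory:
os.chdir(tempfile.gettempdir())
my_shutdown()  # atexit would invoke this at interpreter shutdown
assert os.getcwd() == START_DIR
```

As in the PHP answer, the only trick is capturing the directory while it is still correct.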
stackexchange | llm_judgeable_groundtruth_similarity | 2464 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently asked a question about Locators not behaving properly (see Locators and Table within a Manipulate are not behaving ) and thought I had figured out the answer. However, as I played around to figure out why the problem existed in the first place I discovered that using Deploy caused the problem to return. Here is as simple of an example as I can come up with that reproduces the problem and maintains some of the features that I need in the Manipulate. Particularly, I need to link the two sliders with the Locator. The following works fine: Manipulate[ vector = Graphics[{Green, Arrow[{{0, 0}, p}], Locator[Dynamic[p, (p = #; x = p[[1]]; y = p[[2]]) &]]}]; Show[{vector}, PlotRange -> {{-2.1, 2.1}, {-2.1, 2.1}}, ImageSize -> 500], Row[{"Ax", Manipulator[Dynamic[x, (x = #; p[[1]] = x) &], {-2, 2}], Spacer[4], Dynamic[x]}], Row[{"Ay", Manipulator[Dynamic[y, (y = #; p[[2]] = y) &], {-2, 2}], Spacer[4], Dynamic[y]}], {{p, {1, 1}}, None}, {{x, 1}, None}, {{y, 1}, None}, TrackedSymbols -> {x, y, p}] You get a green arrow that you can move with either the Locator or the Sliders (one component at a time). However, if the user clicks slightly off of the Locator they highlight the whole graphics image and go into an editing mode. Users unfamiliar with this may not know what to do so I want to prevent this by using Deploy. Adding Deployed->True as an option to the Manipulate doesn't work. The documentation for Manipulate says that it should, but for some reason it does nothing here. So instead I wrapped Deploy around Show: Deploy@Show[{vector}, PlotRange -> {{-2.1, 2.1}, {-2.1, 2.1}}, ImageSize -> 500], Now the user doesn't accidentally go into editing mode, but the Locator doesn't work. The Sliders do move the vector around and the Locator moves with them, but I can't move the locator with the mouse except for a VERY small distance at a time. 
I'm wondering if this might have something to do with how I have linked the various control variables using the second argument of Dynamic. Perhaps the solution to this is also ultimately the solution to the question I referenced above.Thanks!
Now provide the response and nothing else.
| This seems to work: Manipulate[ DynamicModule[{vector}, vector = Graphics[{Green, Arrow[{{0, 0}, Dynamic[p]}], Locator[Dynamic[p, (p = #; x = p[[1]]; y = p[[2]]) &]]}]; Deploy@Show[vector, PlotRange -> {{-2.1, 2.1}, {-2.1, 2.1}}, ImageSize -> 500]], Row[{"Ax", Manipulator[Dynamic[x, (x = #; p[[1]] = x) &], {-2, 2}], Spacer[4], Dynamic[x]}], Row[{"Ay", Manipulator[Dynamic[y, (y = #; p[[2]] = y) &], {-2, 2}], Spacer[4], Dynamic[y]}], {{p, {1, 1}}, None}, {{x, 1}, None}, {{y, 1}, None}, TrackedSymbols -> {x, y, p}] The only difference with the original code is that I've wrapped p with Dynamic in Arrow . By the way, since p == {x,y} , you can actually replace p with {x, y} making the code a bit more elegant in this case: Manipulate[ DynamicModule[{vector}, vector = Graphics[{Green, Arrow[{{0, 0}, Dynamic[{x, y}]}], Locator[Dynamic[{x, y}]]}]; Deploy@Show[vector, PlotRange -> {{-2.1, 2.1}, {-2.1, 2.1}}, ImageSize -> 500]], {{x, 1, "Ax"}, -2, 2, Appearance -> "Labeled"}, {{y, 1, "Ay"}, -2, 2, Appearance -> "Labeled"}] | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/2464', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/620/']} | jdg_379988 |
stackexchange | llm_judgeable_groundtruth_similarity | 43407814 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It seems like this code:

#include <string>
#include <vector>

struct bla
{
    std::string a;
    int b;
};

int main()
{
    std::vector<bla> v;
    v.emplace_back("string", 42);
}

could be made to work properly in this case, but it doesn't (and I understand why). Giving bla a constructor solves this, but removes the aggregateness of the type, which can have far-reaching consequences. Is this an oversight in the Standard? Or am I missing certain cases where this will blow up in my face, or is it just not as useful as I think?
Now provide the response and nothing else.
| Is this an oversight in the Standard? It is considered a defect in the standard, tracked as LWG #2089 , which was resolved by C++20. There, constructor syntax can perform aggregate initialization on an aggregate type, so long as the expressions provided wouldn't have called the copy/move/default constructors. Since all forms of indirect initialization ( push_back , in_place , make_* , etc) uses constructor syntax explicitly, they can now initialize aggregates. Pre-C++20, a good solution to it was elusive. The fundamental problem comes from the fact that you cannot just use braced-init-lists willy-nilly. List initialization of types with constructors can actually hide constructors, such that certain constructors can be impossible to call through list initialization. This is the vector<int> v{1, 2}; problem. That creates a 2-element vector , not a 1-element vector whose only element is 2. Because of this, you cannot use list initialization in generic contexts like allocator::construct . Which brings us to: I would think there's be a SFINAE trick to do that if possible, else resort to brace init that also works for aggregates. That would require using the is_aggregate type trait from C++17. But there's a problem with that: you would then have to propagate this SFINAE trick into all of the places where indirect initialization is used. This includes any/variant/optional 's in_place constructors and emplacements, make_shared/unique calls, and so forth, none of which use allocator::construct . And that doesn't count user code where such indirect initialization is needed. If users don't do the same initialization that the C++ standard library does, people will be upset. This is a sticky problem to solve in a way that doesn't bifurcate indirect initialization APIs into groups that allow aggregates and groups that don't. There are many possible solutions , and none of them are ideal. The language solution is the best of the bunch. 
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43407814', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/256138/']} | jdg_379989 |
stackexchange | llm_judgeable_groundtruth_similarity | 214652 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Hello Mathematica StackExchange community, I'm hoping that someone can be of help! Let p be a fixed prime. I have a multivariable polynomial g (of degree t). I need a list of all the monomial terms of g which satisfy: Degree of the monomial term is t. Each of the exponents in the monomial term is less than or equal to p-2 I have been playing around with the Mathematica commands ( MonomialList , Coefficient , CoefficientRules , Expand , etc.), but have not been successful in reaching my goal. If anybody can provide some help, it is greatly appreciated! Thank you in advance. Sincerely,Richard M. Low UPDATE:--------------------------------- Due to kglr's help, I have been able to make some progress on my problem. Here is kglr's original Mathematica code: ClearAll[x, y, z, f]t = 4;p = 5;poly = (x + 2 y + z)^t + x^2;f = Select[And[Total@# == t, Max[#] <= p - 2] &@Exponent[#, {x, y, z}] &];selected = f@MonomialList[poly] I'm hoping that kglr (or somebody else) can help me modify it slightly. Since the typical polynomials that I will be working with may have (literally) billions (~ 6^14) of monomial terms, I would like for the output of f to be 1 monomial term (not the entire list) which satisfies the two conditions (in my original post) AND whose coefficient is not equal to 0 (mod p). This will greatly decrease the runtime and memory usage of the Mathematica program. The runtime of the original Mathematica program went past several days and I ended up aborting the calculation. Again, I appreciate any help that the Mathematica StackExchange can provide. I am completely incompetent in Mathematica programming. Thank you in advance. Sincerely,Richard M. Low
Now provide the response and nothing else.
| ClearAll[x, y, z, f]t = 4;p = 5;poly = (x + 2 y + z)^t + x^2;f = Select[And[Total@# == t, Max[#] <= p - 2] &@Exponent[#, {x, y, z}] &];selected = f @ MonomialList[poly] {8 x^3 y, 4 x^3 z, 24 x^2 y^2, 24 x^2 y z, 6 x^2 z^2, 32 x y^3, 48 x y^2 z, 24 x y z^2, 4 x z^3, 32 y^3 z, 24 y^2 z^2, 8 y z^3} Complement[MonomialList @ poly, selected] {x^2, x^4, 16 y^4, z^4} Update: "I would like for the output of f to be 1 monomial term (not the entire list) which satisfies the two conditions." Just replace Select with SelectFirst : f2 = SelectFirst[And[And[Total@# == t, Max[#] <= p - 2] &@Exponent[#, {x, y, z}], Mod[# /. x | y | z -> 1, p] != 0] &]selected2 = f2@MonomialList[poly] 8 x^3 y Update 2: A faster approach: use Expand instead of MonomialList : f2 @ Expand[poly] 8 x^3 y | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/214652', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/42805/']} | jdg_379990 |
stackexchange | llm_judgeable_groundtruth_similarity | 1704005 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I show that the polynomial $x-1$ is irreducible in $\mathbb{Z}_8[x]$? $\mathbb{Z}_8[x]$ is not an integral domain, so we cannot use degree considerations. I have tried reducing the problem mod 2 to conclude that a factorization of $x-1$ must be of the form $(1+a_1x+a_2x^2+...+a_mx^m)(1+b_1x+b_2x^2+...+b_nx^n)$ where $a_i,b_i$ are even, except $b_1$ is odd. I also tried using induction on the degrees $m,n$ but to no avail. Any help is appreciated! EDIT: Upon trial and error I have discovered $(2x^2-x-1)(4x^2-2x+1)=x-1$ in $\mathbb{Z}_8[x]$, however this does not disprove anything because we do not know if the 2 polynomials on the left are units or not...
Now provide the response and nothing else.
| Aha, together we have the necessary pieces! You've remarked (by looking modulo $2$) that any factorization of $x-1$ in $\Bbb Z_8[x]$ must look like$$x-1 = (1-x+2f(x))(1+2g(x)).$$However, $1+2g(x)$ is a unit, since its reciprocal is $1-2g(x)+4g(x)^2$. Therefore $x-1$ is indeed irreducible in $\Bbb Z_8[x]$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1704005', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/223318/']} | jdg_379991 |
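Both identities in this exchange can be checked mechanically. A small Python sketch (representing polynomials as coefficient lists is my choice, not part of the answer):

```python
# Polynomials over Z_8 as coefficient lists (index = degree), arithmetic mod 8.
def mul_mod8(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % 8
    while len(out) > 1 and out[-1] == 0:   # drop trailing zero coefficients
        out.pop()
    return out

# The asker's factorization: (2x^2 - x - 1)(4x^2 - 2x + 1) = x - 1 in Z_8[x].
# Coefficients mod 8: -1 -> 7, -2 -> 6.
assert mul_mod8([7, 7, 2], [1, 6, 4]) == [7, 1]   # i.e. -1 + x

# The key step of the answer: 1 + 2g(x) is a unit with inverse 1 - 2g(x) + 4g(x)^2.
# Check it for g(x) = x: (1 + 2x)(1 - 2x + 4x^2) = 1 + 8x^3 = 1 in Z_8[x].
assert mul_mod8([1, 2], [1, 6, 4]) == [1]
```

This confirms both that the asker's "factorization" exists and that one of its factors is a unit, so it does not contradict irreducibility.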
stackexchange | llm_judgeable_groundtruth_similarity | 22247911 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following Bootstrap navigation bar: Some facts about this navigation bar: The bar itself is exactly 620px broad (page's width) The bar is responsive (collapse breakpoint at 640px) The bar's elements stick together (no space between) The bar has multi-language support (the elements' widths change) How can I stretch the bar's elements to make use of the entire width? The remaining empty space at the right should be distributed among the bar's elements. Since each element has a different width and this width may change from language to language, I cannot work with percentages. I guess that the only solution will be JavaScript. But I cannot find any example... Js Fiddle Demo: http://jsfiddle.net/zBM6D/3/ <body> <div id="page"> <header id="header"> <nav class="navbar navbar-default roboto normal" role="navigation"> <div class="container-fluid"> <div class="navbar-header"> <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#navigation-elements"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> </div> <div class="collapse navbar-collapse" id="navigation-elements"> <ul class="nav navbar-nav"> <li class="active"><a href="#">START</a> </li> <li><a href="#">THESE</a> </li> <li><a href="#">ARE</a> </li> <li><a href="#">SOME</a> </li> <li><a href="#">ELEMENTS</a> </li> <li><a href="#">WITH</a> </li> <li><a href="#">DIFFERENT</a> </li> <li><a href="#">WIDTHS</a> </li> </ul> </div> </div> </nav> <div class="clear"></div> </header> </div></body>
Now provide the response and nothing else.
| You could use CSS Table display layout to distribute your nav items. It's well supported across browsers and simple to implement:

.navbar-nav {
    display: table;
    width: 100%;
    margin: 0;
}
.navbar-nav > li {
    float: none;
    display: table-cell;
    text-align: center;
}

This makes it 100% the width of its parent #header , which in turn is restricted by the width of #page , so if you need it to span 100% of the whole document width you'll need to move it out of #page or make #page wider. http://jsfiddle.net/zBM6D/5/ | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22247911', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1389535/']} | jdg_379992
stackexchange | llm_judgeable_groundtruth_similarity | 38191659 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to use Physics2D.Raycast in order to check if the player is on the ground (I know that there are other approaches to check if the player is on the ground but I think that the raycast is the most reliable). The problem is that in my scenario it returns the player itself as hit and I really don't understand why and what should I do. My code (in PlayerController ) is the following:

public bool IsGrounded () {
    Bounds bounds = this.playerCollider.bounds;
    Vector3 rayOrigin = bounds.center;
    rayOrigin.y -= bounds.extents.y;
    RaycastHit2D hit = Physics2D.Raycast (rayOrigin, Vector2.down, 0.1f);
    if (hit.collider != null) {
        Debug.Log ("Collider is: " + hit.collider.name);
    }
    return hit.collider != null;
}

And I can debug the casted ray using:

Debug.DrawLine (rayOrigin, new Vector3 (rayOrigin.x, rayOrigin.y - 0.1f, rayOrigin.z), Color.magenta);

...and it gets casted as expected, while the Debug.Log always reports "Player" which is itself and I don't know how it's possible. So what's wrong?

ps. I'm using Unity 5.3
Now provide the response and nothing else.
| The problem is occurring because your Player is overlapping at the start of the raycast. There are a few ways to fix this:

1. Disable Queries Start In Colliders. Go to Edit -> Project Settings -> Physics 2D, then make sure that Queries Start In Colliders is NOT checked. Your code ran fine after changing that. Here is a screenshot:

2. Another solution is to use layers: raycasting but ignoring the Player layer. Before you do that, make sure to create a layer called Ground and put your ground GameObject in the Ground layer, then create another layer called Player and put your player in the Player layer. We can now use a bitwise operator to exclude the Player layer from the raycast. Now, let's assume that the Player layer number is 9. The code below should fix your problem.

public int playerLayer = 9;
int layerMask = ~(1 << playerLayer); //Exclude layer 9
RaycastHit2D hit = Physics2D.Raycast(rayOrigin, Vector2.down, 0.1f, layerMask);

That's it. Both of these were able to solve your problem.

For other people reading, below are other ways to easily detect when Player is touching the floor without using Physics2D.Raycast or doing all those things above. Simply attach to the Player.

public class Player : MonoBehaviour
{
    public LayerMask groundLayer;
    Collider2D playerCollider;
    bool grounded;

    void Start()
    {
        playerCollider = gameObject.GetComponent<Collider2D>();
    }

    public bool IsGrounded()
    {
        grounded = Physics2D.OverlapCircle(playerCollider.transform.position, 1, groundLayer);
        return grounded;
    }
}

Or you can use IsTouchingLayers .

public bool IsGrounded()
{
    grounded = playerCollider.IsTouchingLayers(groundLayer.value);
    return grounded;
}
stackexchange | llm_judgeable_groundtruth_similarity | 14423056 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have recently started working as a web developer. I work with ASP .NET MVC 4 and NHibernate. At my work-place, we are strictly made to use viewmodels to transfer data to and fro between a controller and a view. And the viewmodels are not supposed to contain any object of a model.I understand that it is a sort of a tier between the controller and the view. But I find it repetitive and redundant to write a viewmodel class even if we can directly send the model's object to the view (in most cases). For example, if i want to display an order i can do this in the controller's action - return View(Repository.Get<Order>(id)); But instead, I have to write a viewmodel, fill it with the fetched order and then pass it to the view. So, my question is, what purpose does writing viewmodels serve when we can use the model's object as it is?
Now provide the response and nothing else.
| For smaller projects, you're right. I hear your argument and sympathise - however there are good reasons for this drudged and repetitive work, especially in larger and more complicated applications:

It's essential to perform all processing within the Controller's action. However, in the example you've given, the Repository.Get method might return a lazily-evaluated IQueryable object, which would mean the DB wouldn't be hit until the View is evaluated. For a variety of reasons this is bad. (A workaround is to call .ToList while still in the controller.)

"A view should not contain any non-presentational logic" and "You should not trust the View" (because a View could be user-provided). By providing a Model object (potentially still connected to an active DatabaseContext) a view can make malicious changes to your database.

A View's data-to-display does not always map 1:1 with its Model's data, for example consider a User Details page: A User's EF Model object represents its entity in the database, so it probably looks like this: User { UserId, UserName, PasswordHash, PasswordSalt, EmailAddress, CreatedDate } , whereas the fields on a "User details" page are going to be User { UserId, UserName, Password, ConfirmYourPassword, EmailAddress } , do you see the difference? Ergo, you cannot use the EF User model as the view model, you have to use a separate class.

The dangers of model manipulation: if you let ASP.NET MVC (or any other framework) do the model binding to the incoming HTTP POST Request then (taking the User details example above), a user could reset anyone's password by faking the UserId property value. ASP.NET will rewrite that value during binding, and unless you specifically sanitize it (which will be just as drudgeful as making individual ViewModels anyway) then this vulnerability will remain.

In projects with multiple developers working in a team situation, it is important that everything is consistent.
It is not consistent to have some pages using bespoke ViewModels but other pages using EF Models, because the team does not share a conscious mind; things have to be documented and generally make sense. For the same reason a single developer can get away without putting excessive XML documentation in his source code, but in a team situation you'll fall apart if you don't.

There is a slight workaround in your case I'll share with you, but please note the preconditions:

- Your views can be fully trusted
- Your views contain only presentational logic
- Your application is largely CRUD
- Your views correspond 1:1 with each EF entity model (i.e. no JOINs)
- Your views only deal with single Simple models for POST forms, not Complex models (i.e. an object graph)

...then you can do this:

- Put all one-way, non-form-related data into your ViewData collection, or the ViewBag in MVC 4 (or even a generic ViewData<T> if you're hardcore). This is useful for storing HTML page titles and sharing data with Master pages.
- Use your fully-evaluated and loaded EF models as your View<TModel> models.

But use this approach with caution because it can introduce inconsistency. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14423056', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1237117/']} | jdg_379994
stackexchange | llm_judgeable_groundtruth_similarity | 2311 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When ordering a desktop stencil for laying down solder paste, there is an option to specify the stencil's aperture. Rectangular, home plate, trapezoidal...what's the difference between them? Is there one that is preferred for hand-populating surface mount parts?
Now provide the response and nothing else.
| If you choose rectangular, the stencil will come back with exactly the geometry you've specified in your Gerber files. If you specify "home plate" the manufacturer will shrink the pads by 10% or thereabouts, but make the pads 5 sided, which improves soldering success rates in automated board stuffing. I've not heard of trapezoidal pads before, but I assume it's a similar tweak to improve solderability. For hand soldering surface-mount parts, you can safely stick with rectangular. The biggest improvement you can make is to get access to a stereo microscope for soldering. It makes a huge difference. You'll probably find that the limiting factor on your soldering ability is not your hand-eye coordination, but your vision. Obviously, a microscope improves your vision substantially.

Update: I just ran into this today -- "trapezoidal" does not refer to the footprint of the stencil hole. It refers to adding a slope to the cut, so if you were to look at the stencil from the side, you would see a trapezoid. "Laser cutting apertures with trapezoidal walls and also rounding corners will offer better paste release," according to a Texas Instruments datasheet. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/2311', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/1278/']} | jdg_379995
stackexchange | llm_judgeable_groundtruth_similarity | 738363 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I discovered recently that you can match a backend dynamically, based on the request hostheader, like this:

use_backend %[req.hdr(host),lower]

However, does anyone know any way I could use the subdomain of the request hostheader to match the backend? E.g. something along these lines:

backend one
backend two

use_backend %[<SUBDOMAIN OF HOSTHEADER>,lower]

which would match like this:

one.example.com -> backend one
two.example.com -> backend two
Now provide the response and nothing else.
| Adding and removing DNS entries allows you to route subdomains to various backends on the fly, but you still need to define those backends, so there's still a service restart. As such, I'm not entirely sure of the usefulness of this config. In any event, here's how you'd do it.

We know we can find the contents of the host header by using req.hdr ( req.hdr(host) ), but that gives us the FQDN of the request, not the subdomain. Thankfully, there's a regsub converter we should be able to apply to the req.hdr sample to clip off the base domain and TLD.

regsub(<regex>,<subst>[,<flags>])
    Applies a regex-based substitution to the input string. It does the same
    operation as the well-known "sed" utility with "s/<regex>/<subst>/". By
    default it will replace in the input string the first occurrence of the
    largest part matching the regular expression <regex> with the substitution
    string <subst>. It is possible to replace all occurrences instead by adding
    the flag "g" in the third argument <flags>. It is also possible to make the
    regex case insensitive by adding the flag "i" in <flags>. Since <flags> is a
    string, it is made up from the concatenation of all desired flags. Thus if
    both "i" and "g" are desired, using "gi" or "ig" will have the same effect.
    It is important to note that due to the current limitations of the
    configuration parser, some characters such as closing parenthesis or comma
    are not possible to use in the arguments. The first use of this converter
    is to replace certain characters or sequence of characters with other ones.

The emphasis in that quote is mine and aims to show that in this case, where the regex you'd need is ^(.*)(?:\..*){2}$ , it won't work because of the parenthesis. Thus, you'll need to use the field converter.

field(<index>,<delimiters>)
    Extracts the substring at the given index considering given delimiters from
    an input string. Indexes start at 1 and delimiters are a string formatted
    list of chars.
field(1,'.')

If we put the whole sample pipeline together, the use_backend line looks like:

use_backend BE:subs-%[req.hdr(host),lower,field(1,'.')]

Now, this opens up the fact that one.*.* will go to the same backend, and could lead to some very weird situations. It might make some sense to check the base domain and TLD to ensure they're what you expect. Assuming you've only got two ( example.com and foo.com ) of them, you'd use req.hdr_end(host) to check for them, making the ACL look like:

acl is_valid_base_domain req.hdr_end(host) -i example.com foo.com

And if we put it all together, the whole config would look something like this:

frontend FE:subs
    ...
    acl is_valid_base_domain req.hdr_end(host) -i example.com foo.com
    use_backend BE:subs-%[req.hdr(host),lower,field(1,'.')] if is_valid_base_domain
    default_backend BE:subs:default

backend BE:subs-one      # matches one.example.com, one.foo.com
    ...
backend BE:subs-two      # matches two.example.com, two.foo.com
    ...
backend BE:subs-three    # matches three.example.com, three.foo.com
    ...
backend BE:subs:default  # matches *.example.com, *.foo.com
    ...

You can get even fancier if you want by having different "dynamic" backends for each sub-domain, per base domain; you'd just need to use the pieces above to work that out. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/738363', 'https://serverfault.com', 'https://serverfault.com/users/52811/']} | jdg_379996
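As a sanity check, the behaviour of the field(1,'.') converter in that pipeline can be modelled in a few lines of Python (this is my rough model, not HAProxy's implementation):

```python
def haproxy_field(value, index, delimiter):
    """Rough Python model of HAProxy's field() converter:
    1-based index into the delimiter-separated parts of value."""
    parts = value.split(delimiter)
    return parts[index - 1] if 0 < index <= len(parts) else ""

# The pipeline req.hdr(host),lower,field(1,'.') reduces to:
assert haproxy_field("ONE.example.com".lower(), 1, ".") == "one"
assert haproxy_field("two.foo.com".lower(), 1, ".") == "two"
# The caveat from the answer: any base domain yields the same subdomain,
# which is why the is_valid_base_domain ACL is worth adding.
assert haproxy_field("one.anything.org", 1, ".") == "one"
```

This makes it easy to see why `one.*.*` all lands on the same backend without the guard ACL.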
stackexchange | llm_judgeable_groundtruth_similarity | 9029524 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Wondering if this is possible with schema draft 03. I've gotten dependencies working elsewhere; I think there is possibly just some creative use of them required in order to use them for specifying the required property of some field. My current best attempt (which doesn't work) should give you some idea of what I'm after. I want a value required by default, and optional when another field has a particular value.

{
    "description" : "An address...",
    "type" : "object",
    "properties" : {
        "postcode": {
            "type" : "string",
            // postcode should be required by default
            "required" : true,
            // postcode shouldn't be required if the country is new zealand
            "dependencies" : {
                "country" : {
                    "enum" : ["NZ", "NZL", "NEW ZEALAND"]
                },
                "postcode" : {
                    "required" : false
                }
            }
        },
        "country": {
            "type" : "string",
            "enum" : [
                // various country codes and names...
            ],
            "default" : "AUS"
        }
    }
}
Now provide the response and nothing else.
This is definitely possible with version 3 of the draft. Since you have a complete list of allowed countries, you could do something like this:

{
    "type": [
        {
            "title": "New Zealand (no postcode)",
            "type": "object",
            "properties": {
                "country": {"enum": ["NZ", "NZL", "NEW ZEALAND"]}
            }
        },
        {
            "title": "Other countries (require postcode)",
            "type": "object",
            "properties": {
                "country": {"enum": [<all the other countries>]},
                "postcode": {"required": true}
            }
        }
    ],
    "properties": {
        "country": {
            "type" : "string",
            "default" : "AUS"
        },
        "postcode": {
            "type" : "string"
        }
    }
}

So you actually define two sub-types for your schema, one for countries that require a postcode, and one for countries that do not. EDIT - the v4 equivalent is extremely similar. Simply rename the top-level "type" array to "oneOf" . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9029524', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/141881/']} | jdg_379997
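For reference, a sketch of what that v4 form might look like (our sketch, not from the original answer; note that in draft 4 the boolean `"required": true` on a property becomes an object-level array of required property names, and the country list is elided just as in the original):

```json
{
    "oneOf": [
        {
            "title": "New Zealand (no postcode)",
            "properties": {
                "country": {"enum": ["NZ", "NZL", "NEW ZEALAND"]}
            }
        },
        {
            "title": "Other countries (require postcode)",
            "properties": {
                "country": {"enum": [<all the other countries>]}
            },
            "required": ["postcode"]
        }
    ],
    "properties": {
        "country": {"type": "string", "default": "AUS"},
        "postcode": {"type": "string"}
    }
}
```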
stackexchange | llm_judgeable_groundtruth_similarity | 67216 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Linux, do sudo and su belong to Capability-based security ? For example, when editing a system file, we usually need sudo or su to temporarily switch to user root. Does this example belong to capability-based security, or to protection rings ? Does this example need some system call to the kernel, or just the privilege of user root, or both? What are other approaches in Linux that belong to Capability-based security? Are sudo and su the only approach that belongs to Capability-based security?
Now provide the response and nothing else.
In fact, the sudo and su programs are not part of either security model, for several reasons. Protection rings refer to memory/CPU access permissions, and in user space the only access is to files. Capabilities refer to process permissions, and as independent programs sudo and su run as a separate process from the caller and do not inherit or give away any of their capabilities onward. They are independent software programs that neither affect nor are affected by the OS in any way.

Some background may be relevant here:

In Unix and Linux there are two levels of permissions: standard user and superuser (usually called root). The standard user has access only to the files that he has permissions for, by ownership, group membership or ACL. The superuser has permissions to everything (we'll ignore SELinux and the like for this answer) without limits within the userspace.

This presents a major hurdle to overcome: since a standard user doesn't have access to system files, how could something like a password change be performed? For a password change, the user must write to /etc/passwd or the shadow file (its location varies between different Unix flavors).

In order to support this and similar cases, Unix systems have introduced a way to grant special permissions, called the suid bit. This is a bit in the filesystem that, when set on an executable, allows it to gain the privileges of the file owner rather than the calling user. Sudo and su use this mechanism to allow a user to set their effective permissions to those of someone else (usually the superuser, but not always) so they can run normal programs that do not have their suid flag set, as a user other than the one they logged in as. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/67216', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/510/']} | jdg_379998
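As a concrete illustration of the suid bit described above (the file name is our own choice; note that real suid programs such as passwd are compiled binaries, and most kernels ignore the bit on scripts anyway):

```shell
touch demo                 # an ordinary file we own
chmod 0755 demo
ls -l demo | cut -c1-10    # -rwxr-xr-x : normal executable permissions
chmod u+s demo             # set the suid bit (mode becomes 4755)
ls -l demo | cut -c1-10    # -rwsr-xr-x : 's' replaces the owner's 'x'
stat -c '%a' demo          # 4755 : the leading 4 is the suid bit
rm demo
```

When a user runs such a binary, the process's effective user ID becomes that of the file's owner, which is exactly how su and sudo gain root privileges despite being started by a standard user.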
stackexchange | llm_judgeable_groundtruth_similarity | 12828771 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I have Python move to the top of an if statement if no condition is satisfied correctly? I have a basic if/else statement like this:

print "pick a number, 1 or 2"
a = int(raw_input("> "))
if a == 1:
    print "this"
if a == 2:
    print "that"
else:
    print "you have made an invalid choice, try again."

What I want is to prompt the user to make another choice for this if statement without them having to restart the entire program, but I am very new to Python and am having trouble finding the answer online anywhere.
Now provide the response and nothing else.
A fairly common way to do this is to use a while True loop that will run indefinitely, with break statements to exit the loop when the input is valid:

print "pick a number, 1 or 2"
while True:
    a = int(raw_input("> "))
    if a == 1:
        print "this"
        break
    if a == 2:
        print "that"
        break
    print "you have made an invalid choice, try again."

There is also a nice way to restrict the number of retries, for example:

import sys

print "pick a number, 1 or 2"
for retry in range(5):
    a = int(raw_input("> "))
    if a == 1:
        print "this"
        break
    if a == 2:
        print "that"
        break
    print "you have made an invalid choice, try again."
else:
    print "you keep making invalid choices, exiting."
    sys.exit(1)
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12828771', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1322796/']} | jdg_379999
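For readers on Python 3, the same retry pattern might look like the sketch below. The function name, the injectable reader parameter, and the ValueError guard are our own additions (not part of the original answer); the reader parameter just makes the prompt source swappable, e.g. for testing:

```python
def ask_number(reader=input, retries=5):
    """Ask until the reader yields 1 or 2, giving up after `retries` tries."""
    for _ in range(retries):
        try:
            a = int(reader("> "))
        except ValueError:
            print("that was not a number, try again.")
            continue
        if a == 1:
            return "this"
        if a == 2:
            return "that"
        print("you have made an invalid choice, try again.")
    return None  # caller decides what to do after too many bad inputs

# Interactive use would be ask_number(); here we script the "user" instead:
scripted = iter(["7", "oops", "2"])
print(ask_number(lambda prompt: next(scripted)))  # → that
```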
stackexchange | llm_judgeable_groundtruth_similarity | 21303456 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I created a database on my local ubuntu machine. How can I transfer it to my remote server (ec2 Ubuntu)?
Now provide the response and nothing else.
TL;DR

Use mongodump and mongorestore to take (and restore) a full binary backup of your MongoDB database. Compress the backup dump directory to make it faster to copy to your Amazon instance (BSON tends to compress very well).

Best practices

Rather than following ad hoc instructions, I would strongly recommend reading the standard Backup and Restore with MongoDB Tools tutorial in the MongoDB manual. You can also use a Filesystem snapshot, but mongodump and mongorestore only export the data, so your backup will be smaller (i.e. your remote server will not inherit any excessive storage allocation due to preallocation). | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/21303456', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2521614/']} | jdg_380000
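As a sketch of the commands involved (the database name, archive name, and EC2 hostname below are placeholders of ours, so this is not runnable as-is; both machines need the MongoDB tools installed):

```
# On the local Ubuntu machine: dump, compress, copy.
mongodump --db mydb --out ./dump          # writes BSON + metadata to ./dump/mydb
tar czf mydb.dump.tgz dump                # BSON usually compresses well
scp mydb.dump.tgz ubuntu@my-ec2-host:~/   # copy to the EC2 instance

# On the EC2 instance: unpack and restore.
tar xzf mydb.dump.tgz
mongorestore --db mydb dump/mydb
```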
stackexchange | llm_judgeable_groundtruth_similarity | 3218014 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to get html content from a url and parse the html content with regular expressions. But the html content has some multibyte characters, so I met the error described in the title. Could somebody tell me how to resolve this problem?
Now provide the response and nothing else.
You need to edit your question to show:
(1) the code that you used
(2) the full error and traceback
(3) the url that is involved
(4) what is the unicode character that you are trying to encode as gbk

You seem to have somehow obtained unicode characters from the raw bytes in the html content -- how? what encoding is specified in the html content? Then (I guess) you are trying to write the unicode characters to a file, encoding the unicode as gbk. During this process, you got an error something like this:

>>> u'\uffff'.encode('gbk')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'gbk' codec can't encode character u'\uffff' in position 0: illegal multibyte sequence
>>>

If the raw bytes in the html content were not encoded in gbk, then it is quite possible that you have some unicode characters that can't be represented in gbk. In that case you may like to encode your results using the original encoding, or encode them in gb18030 which can take any unicode character. Another possibility is that you have mangled the raw bytes or the unicode somehow. I certainly hope that your regex machinations have been done on the unicode and not on some variable-length-character encoding like gb2312, gbk, etc.

Update: Here is your code snippet:

import sys, urllib.request
url = "http://www.meilishuo.com"
wp = urllib.request.urlopen(url)
content = wp.read()
str_content = content.decode('utf-8')
fp = open("web.txt","w")
fp.write(str_content)
fp.close()

From that I've had to deduce:
(1) You are running Python 3.x
(2) sys.defaultencoding == "gbk" -- otherwise you wouldn't have got the error message some part of which you reported earlier.

As my sys.defaultencoding is NOT 'gbk', I replaced your last 3 lines with gbk_content = str_content.encode('gbk') and ran the amended snippet with Python 3.1.2.
Observations:
(1) website has charset=utf-8, decodes OK with utf-8
(2) Error message: UnicodeEncodeError: 'gbk' codec can't encode character '\u2764' in position 35070: illegal multibyte sequence

\u2764 is a dingbat (HEAVY BLACK HEART). The website is dynamic; in another attempt, the first offending character was \xa9 (COPYRIGHT SIGN). So the web page contains Unicode characters which are not mapped in gbk. Options are:
(1) encode with 'gbk' but use the 'replace' option
(2) encode with 'gbk' but use the 'ignore' option
(3) encode with an encoding that supports ALL Unicode characters (utf-8, gb18030) and for which you have a display mechanism that renders all those characters that aren't in gbk | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3218014', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/351637/']} | jdg_380001
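To make options (1)-(3) concrete, here is a small Python 3 demonstration (the sample string is our own; U+2764 is the HEAVY BLACK HEART from the error message):

```python
s = "heart \u2764"          # contains a character that has no gbk mapping

print(s.encode("gbk", errors="replace"))   # b'heart ?'  (unmappable char becomes '?')
print(s.encode("gbk", errors="ignore"))    # b'heart '   (unmappable char is dropped)
print(s.encode("gb18030"))                 # gb18030 can represent all of Unicode
```

A plain `s.encode("gbk")` on this string raises the same UnicodeEncodeError shown in the observations above.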
stackexchange | llm_judgeable_groundtruth_similarity | 384138 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I started reading the book Clean Code by Robert C. Martin and at the start I found this idea of his interesting, "Leave the code cleaner than you found it", adapted from "Leave the campground cleaner than you found it". Now at my work in our codebase I have this getter function in a class that gets a boolean value but is named getConnectionActive instead of isConnectionActive . I would like to rename it and leave the code a bit better. A colleague that I asked about it pointed me to a rule of the company that conflicts. When we make git commits we are supposed to keep them as small as possible. This should make the commit easier to understand if somebody needs to read it and also, as far as possible, keeps git blame pointing to the original author of some code. As an example they say that changing indentation is not good as it inflates the commit and changes git blame for all the lines. Indentation should be done right from the start. So back to the method in question: if I change it in a commit that fixes another bug I would violate the company's rule, as I would unnecessarily inflate the commit. However I cannot just put it in a commit of its own, as I always require a jira task number for a commit. So I would need to create a jira task only for changing this name. If done more often that would not only pollute the jira task history, it would still conflict with the company's rule about changing git blame, as it would no longer point to the commit that originally added this getConnectionActive for some reason. This situation reminds me of this comic strip . How would you suggest handling this? Is it worth trying to change the company's rules? Or is it better to leave the method name as is? Or maybe even rewrite the original git commit in order to leave as little trace in the history as possible? (I hope this belongs in this stackexchange. I was also thinking about the code review stackexchange but I don't really have code to review.
I was also thinking about the workplace stackexchange but my company's rule doesn't seem so arbitrary that this problem is restricted to just my workplace.)
Now provide the response and nothing else.
"When we make git commits we are supposed to keep them as small as possible. This should make the commit easier understandable if somebody needs to read it and also, as far as possible keeps git blame pointing to the original author of some code."

This policy includes one thing I kind of agree with, and one I disagree with:

You want to keep commits small. More precisely, you want to commit (and push, and merge) often so that everyone can stay up-to-date and merge issues are minimized, and you want your commits to be focused so that issues found in code review (or otherwise) won't prevent you from merging a desired part because of something unrelated. You do not want to commit something that is too small to work - i.e. you don't want a "work in progress" commit that doesn't compile, or fails existing tests - at least not in the final history (see How to commit in-progress refactoring ).

When you make a small fix like get... -> is... , you want that change to propagate to others as soon as possible, so that no one ends up writing code that uses the old version. That change shouldn't have to wait for you to fix the rest of what you're working on. Furthermore, you probably want that change merged even if what you were otherwise working on turns out to be undesirable. Likewise, when your other work is finished just in time for some deadline, you don't want it to be rejected because, unbeknownst to you, the team has a policy of always using get for getters, regardless of the type.

Thus, I think it is a good idea to keep small fixes like this separated from the issue you are "actually" working on, and to get them merged ASAP. I would say that this kind of cleanup should be pushed pre-emptively even if your "real" work will touch the same file(s).

Keeping git blame pointing at the original author may sound attractive, but there are several problems:

- Between team discussions, pair programming, and code review, a single original author doesn't always exist.
- Given that people go on vacation, get sick, leave the company, get different responsibilities, or just forget things, you definitely don't want to encourage situations where the original author is the only one who interacts with a piece of code.
- For trivial lines like this, why would anyone be interested in the author? For non-trivial lines, the author of the last modification had better understand the code well enough, even if they weren't the original author.
- Knowing the original author doesn't really help you, if the code is readable and tested. Preserving knowledge about the author should never get in the way of these more important things!
- You want people to focus on providing the best value for the business. Squabbling over code ownership is not value, so any policy that elevates the original author is bad in that sense.
- The original author can still be found by going back through history.

For large-scale non-functional changes you might want to get funky with git to preserve the original author - maybe something like Automatic formatting (original by XXX) , or perhaps just XXX .

So, in closing, you might need to talk to people about this first, but my preferred way forward would be:

1. Make your fixes in a separate commit, and get it merged. If you need a Jira ID for this, create one. If you cannot, use the one for your current task and call it "cleanup to prepare for X"
2. (Re-)Base your other work on the "fixed" commit

See Reconciling the Boy Scout Rule and Opportunistic Refactoring with code reviews for some tips on doing this in practice. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/384138', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/316332/']} | jdg_380002
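In command form, the two closing steps might look like this sketch (the branch names and the use of git stash are our own assumptions about the workflow, not part of the original answer):

```
git stash                                  # park the unfinished feature work
git checkout -b rename-getter main         # a tiny branch just for the rename
#   ...apply the getConnectionActive -> isConnectionActive rename...
git commit -am "Rename getConnectionActive to isConnectionActive"
#   ...get this branch reviewed and merged as usual, then:
git checkout feature-branch
git rebase main                            # base the feature work on the fixed commit
git stash pop                              # resume the feature work
```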
stackexchange | llm_judgeable_groundtruth_similarity | 83175 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the next fraction: $$\frac{1}{x^3-1}.$$ I want to convert it to a sum of fractions (meaning $1/(a+b)$). So I changed it to: $$\frac{1}{(x-1)(x^2+x+1)}.$$ But now I don't know the next step. Any idea? Thanks.
Now provide the response and nothing else.
| The process here is partial fraction decomposition. The first step, which you've kindly done already, is to factor the denominator completely. Now, note that if we had a sum of the form$$\frac{\text{something}}{x-1} + \frac{\text{something}}{x^2 + x + 1}$$then we could multiply the left fraction by $\frac{x^2 + x + 1}{x^2 + x + 1}$ and the right fraction by $\frac{x-1}{x-1}$ and then the denominators would both match the original one, so they might just add up to our original fraction! Let's try to find such a decomposition. The way we can do this is pretty much to just write the above equation, but a little more specifically. The rule is that the $\text{something}$ that goes over a linear factor (e.g. $x-1$) is a single variable, say $A$; and the $\text{something}$ that goes over a quadratic factor (e.g. $x^2 + x + 1$) is linear, that is it has the form $Bx + C$. So here is our equation:$$\frac{\text{A}}{x-1} + \frac{\text{Bx+C}}{x^2 + x + 1} = \frac{1}{(x-1)(x^2 + x + 1)}$$We can now perform the multiplication suggested above to get the numerator on the left side in terms of $A$, $B$, and $C$, and the denominators equal. The denominators cancel each other then, so we know this numerator must equal $1$, and more clearly it must equal $0x^2 + 0x + 1$ so we can use the coefficients of the terms in the numerator to find a system of equations (the $x^2$ terms must add to zero, the $x$ terms must add to zero, etc.) and solve for $A$, $B$, and $C$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/83175', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/19657/']} | jdg_380003 |
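Carrying the method through for this particular fraction (a worked continuation, not part of the original answer):

```latex
% Multiply both sides by (x-1)(x^2+x+1):
%     A(x^2+x+1) + (Bx+C)(x-1) = 1
% Match coefficients with 0x^2 + 0x + 1:
%     x^2 :  A + B     = 0
%     x   :  A - B + C = 0
%     1   :  A - C     = 1
% Solving gives A = 1/3, B = -1/3, C = -2/3, so
\frac{1}{x^3-1} = \frac{1}{3(x-1)} - \frac{x+2}{3(x^2+x+1)}
```

As a quick check, evaluating both sides at $x = 0$ gives $-1$ on each.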
stackexchange | llm_judgeable_groundtruth_similarity | 630942 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $P,Q$ be real $n\times n$ matrices such that $P^2=P$, $Q^2=Q$ and $I-P-Q$ is an invertible matrix. Prove that $P$ and $Q$ have the same rank. Some help with this please; happy new year and thanks.
Now provide the response and nothing else.
Since $I-P-Q$ is invertible by assumption, and multiplying by an invertible matrix does not change the rank, we have $$rank(P)=rank(P(I-P-Q))\mbox{ and }rank(Q)=rank((I-P-Q)Q).$$ On the other hand, we have $$P(I-P-Q)=P-P^2-PQ=-PQ$$ since $P^2=P$, and $$(I-P-Q)Q=Q-PQ-Q^2=-PQ$$ since $Q^2=Q$. Combining all these, we have $$rank(P)=rank(-PQ)=rank(Q).$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/630942', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/84186/']} | jdg_380004
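A quick numerical sanity check of the statement (the example matrices are our own, chosen so that both hypotheses hold; the rank routine is plain Gaussian elimination over exact rationals):

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over exact rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

P = [[1, 0], [0, 0]]          # idempotent: P^2 = P
Q = [[1, 1], [0, 0]]          # idempotent: Q^2 = Q
M = [[(1 if i == j else 0) - P[i][j] - Q[i][j] for j in range(2)]
     for i in range(2)]       # M = I - P - Q

assert matmul(P, P) == P and matmul(Q, Q) == Q
assert rank(M) == 2           # I - P - Q has full rank, i.e. is invertible
print(rank(P), rank(Q))       # → 1 1
```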
stackexchange | llm_judgeable_groundtruth_similarity | 180472 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
"Enter test: "read testif [[ $test == "a" ]]; then echo "worked"else echo "failed"fi It's a simple illustration of test I'm doing, but if I enter "A", it will fail. Is there anything I can do at the variable stage to change it all to small case, so that the test will match?
Now provide the response and nothing else.
There are several useful ways to achieve this (in bash ):

two checks

echo -n "Enter test: "
read test
if [[ $test == "a" || $test == "A" ]]; then
    echo "worked"
else
    echo "failed"
fi

make the input lower case

echo -n "Enter test: "
read test
test="${test,,}"
if [[ $test == "a" ]]; then
    echo "worked"
else
    echo "failed"
fi

regex for both cases

echo -n "Enter test: "
read test
if [[ $test =~ ^[aA]$ ]]; then
    echo "worked"
else
    echo "failed"
fi

make the shell ignore the case

echo -n "Enter test: "
read test
shopt -s nocasematch
if [[ $test == a ]]; then
    echo "worked"
else
    echo "failed"
fi
 | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/180472', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/98500/']} | jdg_380005
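One portable variant worth adding (our addition, not in the original answer): ${test,,} requires bash 4 or newer, so on older shells you can lower-case the input with tr instead:

```shell
lowered=$(printf '%s' "AbC" | tr '[:upper:]' '[:lower:]')
echo "$lowered"   # abc
```

In the script above you would apply it as test=$(printf '%s' "$test" | tr '[:upper:]' '[:lower:]') before the comparison.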
stackexchange | llm_judgeable_groundtruth_similarity | 5122570 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The square root of 3, as estimated by Wolfram Alpha:

1.7320508075688772935274463415058723669428052538103806280558...

When I do sqrt(3) in C, it evaluates to 0. Why?

EDIT4 : here's how you can reproduce this issue in GDB. Create test.c as follows:

#include <stdio.h>
#include <math.h>

int main()
{
    printf("sqrt(3): %f\n", sqrt(3));
    return 0;
}

Compile: gcc -O0 -g -Wall -pedantic -ansi -lm -o test test.c

Run debugger: gdb test

Enter this at console:

(gdb) break test.c:6
Breakpoint 1 at 0x400578: file test.c, line 6.
(gdb) r
Starting program: /home/pdedecker/Desktop/test

Breakpoint 1, main () at test.c:6
6           printf("sqrt(3): %f\n", sqrt(3));
(gdb) print sqrt(3)
$1 = 0
(gdb) s
sqrt(3): 1.732051

My GDB version is GNU gdb (GDB) SUSE (7.1-3.12) .
Now provide the response and nothing else.
The problem is not the missing function declaration (which isn't missing, since you did include <math.h> ). The problem is missing debug info for the sqrt you are actually using. Without that debug info, GDB has no clue what parameter type to pass to sqrt() , and what it returns. You can get the required debug info on many Linux distributions by installing the libc-debuginfo package. Here is what I see on such a system:

gdb -q ./a.out
Reading symbols from /tmp/a.out...done.
(gdb) b main
Breakpoint 1 at 0x400558: file t.c, line 6.
(gdb) r

Breakpoint 1, main () at t.c:6
6           printf("sqrt(3): %f\n", sqrt(3));
(gdb) p sqrt
$1 = {<text variable, no debug info>} 0x7ffff7b7fb50 <__sqrt>

Note: "no debug info"

(gdb) p sqrt(3)
$2 = 0
(gdb) p sqrt(3.0)
$3 = 0

Note: matches your behavior. What sqrt functions do have debug info?

(gdb) info func sqrt
All functions matching regular expression "sqrt":

File ../sysdeps/x86_64/fpu/e_sqrt.c:
double __ieee754_sqrt(double);
File s_csqrt.c:
complex double __csqrt(complex double);
File ../sysdeps/x86_64/fpu/e_sqrtf.c:
float __ieee754_sqrtf(float);
File w_sqrtf.c:
float __sqrtf(float);
File s_csqrtf.c:
complex float __csqrtf(complex float);
File ../sysdeps/i386/fpu/e_sqrtl.c:
long double __ieee754_sqrtl(long double);
File w_sqrtl.c:
long double __sqrtl(long double);
File s_csqrtl.c:
complex long double __csqrtl(complex long double);
File ../sysdeps/ieee754/dbl-64/mpsqrt.c:
void __mpsqrt(mp_no *, mp_no *, int);
File w_sqrt.c:
double __sqrt(double);

(gdb) p __sqrt
$4 = {double (double)} 0x7ffff7b7fb50 <__sqrt>

Note: __sqrt is at the same address as sqrt , but GDB knows its type!

(gdb) p __sqrt(3)
$5 = 1.7320508075688772
(gdb) p __sqrt(3.0)
$6 = 1.7320508075688772

One can reasonably argue this is a bug in GDB. Feel free to create one in GDB bugzilla . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5122570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/217649/']} | jdg_380006
stackexchange | llm_judgeable_groundtruth_similarity | 263853 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I never fully understood the derivation of the method of variation of parameters. Consider the simple case $$y'' + p(x)y' + q(x)y = f(x)\,.$$ The homogeneous solution is $y_h=c_1y_1+c_2y_2$ and the particular solution that we guess is $y_p =u_1y_1+u_2y_2$ for some functions $u_1(x)$ and $u_2(x)$. Next we take derivatives of the particular solution and substitute those back into the original ODE. The part done next is the part I don't quite see the justification for. We assume that our functions $u_1$ and $u_2$ will satisfy the constraint $$u_1'y_1 + u_2'y_2=0\,,$$ and this particular constraint will yield a cleaner result when looking at $y'_p$: $$u_1'y_1'+u_2'y_2'=f(x)\,.$$ At least in the textbook I'm looking at, the justification for assuming that $u_1$ and $u_2$ satisfy $u_1'y_1 + u_2'y_2=0$ is simply omitted. How do we know that the solutions we're after satisfy that constraint? All other sources I look at just say something like, " okay we are going to impose this constraint. Now moving on ... " Could someone give me a proper explanation as to why we can make this assumption?
Now provide the response and nothing else.
| I quote here an answer I gave to a similar question . The notation is that used here . This is closely tied to the method of osculating parameters . Suppose we wish to represent, with constant coefficients, some arbitrary function $u(x)$ with two linearly independent functions $u_1(x)$ and $u_2(x)$, $$u(x) = A u_1(x) + B u_2(x).$$ In general this can not be done. The best we can do is match the value of the function and its derivative at some point $x_0$, $$\begin{eqnarray*}u(x_0) &=& A u_1(x_0) + B u_2(x_0) \\u'(x_0) &=& A u_1'(x_0) + B u_2'(x_0).\end{eqnarray*}$$ The conditions above determine the osculating parameters, the constants $A$ and $B$. $A$ and $B$ will be different depending on the point $x_0$. In general this fit will be poor at points far from $x_0$. The method of variation of parameters involves finding the osculating parameters $A$ and $B$ at every point. That is, we let $A$ and $B$ be functions of $x$. The condition that they are the osculating parameters is that they satisfy $$\begin{eqnarray*}u_G(x) &=& A(x) u_1(x) + B(x) u_2(x) \\u_G'(x) &=& A(x) u_1'(x) + B(x) u_2'(x),\end{eqnarray*}$$ just as above. For the second equation to hold it must be the case that $$A'(x)u_1(x) + B'(x)u_2(x) = 0.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/263853', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/26728/']} | jdg_380007 |
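To spell out the final step of the quoted argument (differentiating the first osculating equation and comparing it with the second):

```latex
% Differentiating u_G(x) = A(x)u_1(x) + B(x)u_2(x) gives
%     u_G'(x) = A(x)u_1'(x) + B(x)u_2'(x) + A'(x)u_1(x) + B'(x)u_2(x),
% while the second osculating condition requires
%     u_G'(x) = A(x)u_1'(x) + B(x)u_2'(x).
% Subtracting the two expressions leaves exactly the constraint in question:
A'(x)\,u_1(x) + B'(x)\,u_2(x) = 0.
```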
stackexchange | llm_judgeable_groundtruth_similarity | 713836 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Hello there I am trying to calculate$$\int_0^\infty \frac{\ln(1+\sigma x)\ln(1+\omega x^2)}{x^3}dx$$NOT using mathematica, matlab, etc. We are given that $\sigma, \omega$ are complex. Note, the integral should have different values for $|\sigma \omega^{-1/2}| < 1$ and $|\sigma \omega^{-1/2}| > 1.$ I am stuck now and not sure how to approach it.Note this integral is useful since in the limit $\sigma \to \sqrt{\omega}$ and using $Li_2(-1)=-\pi^2/12$ we obtain$$\int_0^\infty \frac{\ln(1+x)\ln(1+x^2)}{x^3}dx=\frac{\pi}{2}.$$We also know that$$\ln(1+x)=-\sum_{n=1}^\infty \frac{(-1)^nx^n}{n}, \ |x|\leq 1.$$Thanks
Now provide the response and nothing else.
One may adopt the approach in Pranav Arora's comment. But this approach involves a double integral whose calculation seems painful. So here is an indirect approach that makes calculation slightly easier (at least to me): Let us consider the following integral: for $\alpha, \beta \in \Bbb{C}\setminus(-\infty, 0]$ and $0 < s < 1$, $$ I = I(s,\alpha,\beta) := \int_{0}^{\infty} \frac{\log(1+\alpha x)\log(1+\beta x)}{x^{2+s}} \, dx. $$ Differentiating w.r.t. $\alpha$ and $\beta$, we have $$ \frac{\partial^{2}I}{\partial\alpha\partial\beta} = \int_{0}^{\infty} \frac{dx}{x^{s}(1+\alpha x)(1+\beta x)}. $$ Using standard complex analysis techniques (you may use a keyhole contour), it follows that $$ \frac{\partial^{2}I}{\partial\alpha\partial\beta} = \frac{\pi}{\sin \pi s} \frac{\beta^{s} - \alpha^{s}}{\beta - \alpha}\quad \Longrightarrow \quad I = \frac{\pi}{\sin \pi s} \int_{0}^{\alpha}\int_{0}^{\beta} \frac{x^{s} - y^{s}}{x - y} \, dxdy. \tag{1} $$ Replace $\beta$ by $i\beta$ (with $\beta > 0$). Then (1) yields $$ 2I(s, \alpha, i\beta) = \frac{2\pi}{\sin \pi s} \int_{0}^{\alpha}\int_{0}^{\beta} \frac{i^{s}x^{s} - y^{s}}{x + iy} \, dxdy. $$ Now assume that $\alpha, \beta > 0$. Taking real parts of the identity above and taking $s \to 1^{-}$, it follows that \begin{align*}\tilde{I}(\alpha, \beta) &:= \int_{0}^{\infty} \frac{\log(1+\alpha x)\log(1+\beta^{2}x^{2})}{x^{3}} \, dx \\&= \int_{0}^{\alpha}\int_{0}^{\beta} \frac{2xy \log(y/x) + \pi x^{2}}{x^{2}+y^{2}} \, dxdy. \tag{2}\end{align*} In particular, when $\beta = \alpha$, by symmetry we retrieve the following formula $$ \tilde{I}(\alpha, \alpha)= \pi \int_{0}^{\alpha}\int_{0}^{\alpha} \frac{x^{2}}{x^{2}+y^{2}} \, dxdy= \frac{\pi}{2} \int_{0}^{\alpha}\int_{0}^{\alpha} dxdy= \frac{\pi}{2}\alpha^{2}, $$ which also follows from the formula in OP's posting.
In general, using polar coordinates shows that we have $$ \tilde{I}(\alpha, \beta) = \beta^{2}J(\alpha/\beta) - \alpha^{2}J(\beta/\alpha) + \frac{\pi \alpha \beta}{2} + \frac{\pi^{2}\beta^{2}}{4} - \frac{\pi(\alpha^{2}+\beta^{2})}{2}\arctan(\beta/\alpha), \tag{3} $$ where $J$ is defined by $$ J(x) = \int_{0}^{x} \frac{t \log t}{1+t^{2}} \, dt. $$ This function can be written in terms of elementary functions and dilogarithm. Remark. Though we have derived this formula for positive $\alpha, \beta$, by the principle of analytic continuation (3) continues to hold on the region containing $(0, \infty)^{2}$ where both sides of (3) are holomorphic. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/713836', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/115030/']} | jdg_380008 |
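For completeness (this evaluation is ours, not part of the original answer; it can be checked by differentiation), the substitution $u = t^{2}$ gives $J$ explicitly in the promised form:

```latex
J(x) = \int_{0}^{x} \frac{t \log t}{1+t^{2}} \, dt
     = \frac{1}{4}\int_{0}^{x^{2}} \frac{\log u}{1+u} \, du
     = \frac{1}{2}\log x \, \log(1+x^{2}) + \frac{1}{4}\,\operatorname{Li}_{2}(-x^{2}).
```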
stackexchange | llm_judgeable_groundtruth_similarity | 938268 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
My current settings are as below, where the /dev/mapper/centos-root partition is almost full. It looks like this partition is on disk /dev/mapper/centos-root, but there is another disk, /dev/vda, which still has enough free space. Are these two disks separate physical disks? How can I increase the /dev/mapper/centos-root partition?

[root@devbox ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   18G   17G  1.4G  93% /
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G   48M  3.8G   2% /dev/shm
tmpfs                    3.9G   74M  3.8G   2% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda1                497M  297M  201M  60% /boot
tmpfs                    783M   48K  783M   1% /run/user/1001

[root@devbox ~]# fdisk -l

Disk /dev/vda: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0001ec6a

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     1026047      512000   83  Linux
/dev/vda2         1026048    41943039    20458496   8e  Linux LVM
/dev/vda3        41943040    52428799     5242880   8e  Linux LVM

Disk /dev/mapper/centos-root: 18.8 GB, 18756927488 bytes, 36634624 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@devbox ~]#
Now provide the response and nothing else.
The first thing is to check if you have free extents in your volume group. To do that, you will use:

vgdisplay

which will return details on the VG; the important line you must check is the one that states Free PE / Size. There you will see the size available to create or extend logical volumes. For instance, in my case I have a server that says:

Free PE / Size       3834 / 14.98 GiB

Given that you have the required free space, you should use:

lvextend /dev/mapper/centos-root -L +2G

In the latter case I am extending the logical volume by adding 2GB. Note the +: if you give only the size, it will go to the specified size. I usually use this syntax because it is more transparent with respect to the space you have available in the volume group.

After you successfully extended the volume (check with lvscan), you have to extend the file system. You can use:

resize2fs /dev/mapper/centos-root

As mentioned by @tinmarino and @y-melo in the comments, the command above will only work for ext2, ext3 and ext4. For xfs you should use:

xfs_growfs /dev/mapper/centos-root

Run df again to check that the available space has changed.

What if there's no space in the VG?

You have to first extend the volume group to be able to extend the logical volumes. For this you have to add a new disk. I am assuming that the CentOS box is a virtual machine because of the size of the disk, but of course this can be done on a physical server too; it is just that you have to physically add a disk.

Once you have the disk on the server, you have to create an LVM physical volume (PV). This can be created on a partition or even on the whole disk. I don't know the pros of doing it on the disk, but in my experience I have found it confusing, as you won't be able to see a partition table, so I would recommend creating a partition first.
To create the PV on disk /dev/vdb, partition 1, you do:

pvcreate /dev/vdb1

Once you have the PV, extend the VG (I don't know the name; I bet it is centos, check your vgdisplay output):

vgextend centos /dev/vdb1

TL;DR For VG vg0, LV lv0 and new disk /dev/sdb, extending 5GB:
1. Check available space on the VG: vgdisplay. If there is enough, go to 4.
2. If you don't have space, add a disk and create a PV: pvcreate /dev/sdb1
3. Extend the VG: vgextend vg0 /dev/sdb1
4. Extend the LV: lvextend /dev/vg0/lv0 -L +5G
5. Check: lvscan
6. Resize the file system: resize2fs /dev/vg0/lv0 (or xfs_growfs /dev/vg0/lv0)
7. Check: df -h | grep lv0 | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/938268', 'https://serverfault.com', 'https://serverfault.com/users/494566/']} | jdg_380009 |
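To make the order of operations in the TL;DR above easy to review before touching a real system, here is a dry-run sketch. The VG/LV/disk names (vg0, lv0, /dev/sdb1) are the same placeholders used in the answer, and the run helper only echoes each command instead of executing it, so nothing here requires root or real LVM devices:

```shell
#!/bin/bash
# Dry-run sketch of the TL;DR steps above (placeholder names: VG vg0, LV lv0,
# new disk /dev/sdb1). Echoes each command instead of executing it, so the
# plan can be reviewed before running anything for real as root.
run() { echo "would run: $*"; }

VG=vg0
LV=lv0
NEW_PV=/dev/sdb1
GROW=+5G

run vgdisplay "$VG"                     # 1. check Free PE / Size
run pvcreate "$NEW_PV"                  # 2. only if the VG has no free space
run vgextend "$VG" "$NEW_PV"            # 3. add the new PV to the VG
run lvextend "/dev/$VG/$LV" -L "$GROW"  # 4. grow the LV
run lvscan                              # 5. verify
run resize2fs "/dev/$VG/$LV"            # 6. grow the fs (xfs_growfs for XFS)
```

Replace each `run` with the bare command once the plan looks right for your volume group names.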
stackexchange | llm_judgeable_groundtruth_similarity | 17923773 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to keep polling for a file until it arrives at a location, for up to 1 hour.

My dir: /home/stage
File name (which I am looking for): abc.txt

I want to keep polling the directory /home/stage for 1 hour, but within that hour, if the abc.txt file arrives, it should stop polling and display the message "file arrived"; otherwise, after 1 hour it should display that the file has not arrived. Is there any way to achieve this in Unix?
Now provide the response and nothing else.
| Another bash method, not relying on trap handlers and signals, in case your larger scope already uses them for other things:

#!/bin/bash
interval=60
((end_time=${SECONDS}+3600))
directory=${HOME}
file=abc.txt

while ((${SECONDS} < ${end_time}))
do
    if [[ -r ${directory}/${file} ]]
    then
        echo "File has arrived."
        exit 0
    fi
    sleep ${interval}
done

echo "File did not arrive."
exit 1 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17923773', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2630193/']} | jdg_380010 |
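The same pattern can be wrapped in a reusable function; the function name and argument order below are my own invention, not part of the original answer:

```shell
#!/bin/bash
# Hypothetical generalization of the loop above:
#   wait_for_file <dir> <file> <timeout_seconds> <poll_interval_seconds>
# Returns 0 and prints "File has arrived." if the file shows up in time,
# otherwise returns 1 and prints "File did not arrive."
wait_for_file() {
    local dir=$1 file=$2 timeout=$3 interval=$4
    local end_time=$((SECONDS + timeout))
    while ((SECONDS < end_time)); do
        if [[ -r "$dir/$file" ]]; then
            echo "File has arrived."
            return 0
        fi
        sleep "$interval"
    done
    echo "File did not arrive."
    return 1
}

# The original question's one-hour poll at 60-second intervals would be:
# wait_for_file /home/stage abc.txt 3600 60
```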
stackexchange | llm_judgeable_groundtruth_similarity | 4995924 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a release branch (RB, starting at C5) and a changeset on trunk (C10) that I now want to merge onto RB. The file has changes at C3 (common to both), one in CS 7 on RB, and one in C9 (trunk) and one in C10). So the history for my changed file looks like this: RB: C5 -> C7Trunk: C3 -> C9 -> C10 When I merge C10 from trunk to RB, I'd expect to see a merge window showing me C10 | C3 | C7 since C3 is the common ancestor revision and C10 and C7 are the tips of my two branches respectively. However, my merge tool shows me C10 | C9 | C7 . My merge tool is configured to show %1(OriginalFile)|%3(BaseFile)|%2(Modified File) , so this tells me TFS chose C9 as the base revision. This is totally unexpected and completely contrary to the way I'm used to merges working in Mercurial or Git. Did I get something wrong or is TFS trying to drive me nuts with merging? Is this the default TFS Merge behavior? If so, can you provide insight into why they chose to implement it this way? I'm using TFS 2008 with VS2010 as a Client.
Now provide the response and nothing else.
| I had some similar initial struggles with TFS branching and merging (we have dev, integration, and main branches). The short version is that you cannot merge directly back from a tip to a common ancestor. So if C3 was branched to C5, then C7 was branched to C9, what the tool is providing makes sense within the context of how TFS works. You essentially need to go from C10/C9 to C7, then from C7 to C3. To put it a different way with a more concrete example, here's how we handle multi-level branching and merging in our projects. We start with trunk/main. We then branch to an integration branch. We then (and this is key) branch from integration into our individual dev branches so we can work on upcoming releases. As changes are complete in a dev branch, we first reverse integrate by merging from integration to our dev branch (so we pick up everyone else's changes). We then forward integrate by going from our individual dev branch to the shared integration branch. Once integration passes QA, we then reverse integrate by merging trunk to integration (just in case there are any hotfixes in main), then forward integrate all of the combined changes in integration down to main. On release day, we do one last branch from main into a new release branch which we then deploy. Basically, in TFS you always have to go up and down the branching/merging tree from trunk to limb to branch to leaf - you cannot at any time bypass any step in the branch hierarchy. Metaphorically, TFS branch and merge is more like a sloth crawling up a tree and slowly down to the end of a branch without ever letting loose its grip vs. a monkey hopping between branches ;) Takes a bit of getting used to, but once done (and especially once you're used to the forward integrate/reverse integrate paradigm) it works a treat, especially when you have several folks all doing development and you need to wrap up everyone's changes without having things stomped over. Hope that helps!
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4995924', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/125407/']} | jdg_380011 |
stackexchange | llm_judgeable_groundtruth_similarity | 8792517 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to have a dark ActionBar but have the rest of the application use the Holo.Light theme. I know there is a Theme.Holo.Light.DarkActionBar Theme in ICS/4.0 but I want this also to work in Honeycomb/3.0+. At the Moment I'm using the dark Holo theme and for the rest of my components I'm using a ContextThemeWrapper. But this is much work and can easily lead to errors. Is this possible?
Now provide the response and nothing else.
| Create a custom style and set the parent style to the Holo Light theme but the ActionBar to normal Holo. An XML file with something like this should do the job (just out of my memory):

<style name="appstyle0" parent="android:style/Theme.Holo.Light">
    <item name="android:actionBarStyle">@android:style/Widget.Holo.ActionBar</item>
</style>

Then set appstyle0 as the style in your AndroidManifest.xml, and all your Activities get the Holo Light theme while the action bar style is Holo dark.

Edit: I checked why my first answer does not work.

<style name="Widget.Holo.Light.ActionBar" parent="Widget.Holo.ActionBar">
    <item name="android:titleTextStyle">@android:style/TextAppearance.Holo.Widget.ActionBar.Title</item>
    <item name="android:subtitleTextStyle">@android:style/TextAppearance.Holo.Widget.ActionBar.Subtitle</item>
    <item name="android:background">@android:drawable/ab_transparent_light_holo</item>
    <item name="android:backgroundStacked">@android:drawable/ab_stacked_transparent_light_holo</item>
    <item name="android:backgroundSplit">@android:drawable/ab_bottom_transparent_light_holo</item>
    <item name="android:homeAsUpIndicator">@android:drawable/ic_ab_back_holo_light</item>
    <item name="android:progressBarStyle">@android:style/Widget.Holo.Light.ProgressBar.Horizontal</item>
    <item name="android:indeterminateProgressStyle">@android:style/Widget.Holo.Light.ProgressBar</item>
</style>

The action bar is defined in styles.xml with attributes that are set by the main theme in general. First of all the BG is transparent, so you should use "Widget.Holo.Light.ActionBar.Solid" as parent. Then you have to set the different items one by one to the dark theme. Let's take titleTextStyle as an example:

<style name="TextAppearance.Holo.Widget.ActionBar.Title.Own" parent="TextAppearance.Holo.Widget.ActionBar.Title">
    <item name="android:textColor">@android:color/primary_text_holo_dark</item>
    <item name="android:textColorHighlight">@android:color/highlighted_text_holo_dark</item>
    <item name="android:textColorHint">@android:color/hint_foreground_holo_dark</item>
    <item name="android:textColorLink">@android:color/holo_blue_light</item>
</style>

Set this now as:

<item name="android:titleTextStyle">@android:style/TextAppearance.Holo.Widget.ActionBar.Title.Own</item>

Proceed like this with the XML attributes above. To find all related attributes, search in styles.xml and themes.xml for the parameters. Sorry to tell, but I guess there is no easy way, according to what I see... | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8792517', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1138674/']} | jdg_380012 |
stackexchange | llm_judgeable_groundtruth_similarity | 38206915 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to filter out my array to contain only unique values. This is my array data:

["X_row7", "X_row4", "X_row6", "X_row10", "X_row8", "X_row9", "X_row11", "X_row7", "X_row4", "X_row6", "X_row10", "X_row8", "X_row9", "X_row11", "X_row7", "X_row4", "X_row6", "X_row10", "X_row8", "X_row9", "X_row11", "X_row7", "X_row4", "X_row6", "X_row10", "X_row8", "X_row9", "X_row11", "X_row7", "X_row4", "X_row6", "X_row10", "X_row8", "X_row9", "X_row11", "X_row7", "X_row4", "X_row6", "X_row10", "X_row8", "X_row9", "X_row11"]

The expected result should be:

["X_row7", "X_row4", "X_row6", "X_row10", "X_row11", "X_row8", "X_row9"]

How should I continue my code to get the proper result?

newArray = [];
for(n in data){
    if(!newArray.indexOf(n)){
        newArray.push(n);
    }
}
console.log(newArray);

If you need any additional information, please let me know and I will provide it. Thank you
Now provide the response and nothing else.
| You can use Array.filter function to filter out elements of an array based on the return value of a callback function. The callback function runs for every element of the original array. The logic for the callback function here is that if the indexOf value for current item is same as the index, it means the element has been encountered first time, so it can be considered unique. If not, it means the element has been encountered already, so should be discarded now. var arr = ["X_row7", "X_row4", "X_row6", "X_row10", "X_row8", "X_row9", "X_row11", "X_row7", "X_row4", "X_row6", "X_row10", "X_row8", "X_row9", "X_row11", "X_row7", "X_row4", "X_row6", "X_row10", "X_row8", "X_row9", "X_row11", "X_row7", "X_row4", "X_row6", "X_row10", "X_row8", "X_row9", "X_row11", "X_row7", "X_row4", "X_row6", "X_row10", "X_row8", "X_row9", "X_row11", "X_row7", "X_row4", "X_row6", "X_row10", "X_row8", "X_row9", "X_row11"];var filteredArray = arr.filter(function(item, pos){ return arr.indexOf(item)== pos; });console.log( filteredArray ); Caveat: As pointed out by rob in the comments, this method should be avoided with very large arrays as it runs in O(N^2) . UPDATE (16 Nov 2017) If you can rely on ES6 features , then you can use Set object and Spread operator to create a unique array from a given array, as already specified in @Travis Heeter's answer below: var uniqueArray = [...new Set(array)] | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/38206915', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2315443/']} | jdg_380013 |
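As a quick sketch of the two approaches described above (the O(N^2) filter and the ES6 Set) on a short sample array:

```javascript
// Sketch of both de-duplication approaches from the answer above.
const arr = ["X_row7", "X_row4", "X_row7", "X_row6", "X_row4"];

// O(N^2): keep an element only when its current position is also its
// first position in the array.
const viaFilter = arr.filter((item, pos) => arr.indexOf(item) === pos);

// ES6: a Set keeps insertion order and silently drops duplicates.
const viaSet = [...new Set(arr)];

console.log(viaFilter); // [ 'X_row7', 'X_row4', 'X_row6' ]
console.log(viaSet);    // [ 'X_row7', 'X_row4', 'X_row6' ]
```

Both produce the same order (first occurrence wins); the Set version is the one to prefer for large arrays.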
stackexchange | llm_judgeable_groundtruth_similarity | 46754569 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the below function:

function helloAfter100ms(){
    setTimeout(function(){
        console.log('hello');
    },100)
}

Test code with mocha:

describe('#helloAfter100ms()',function(){
    it('console logs hello ONLY after 100ms',function(){
        // what should go here
    })
})
Now provide the response and nothing else.
| I think you're trying to test something that you shouldn't. The name of your test suggests you don't trust that the setTimeout function calls the console.log only after the given timeout. Since this is not your code, you should probably not unit test it. Furthermore, setTimeout is probably something you can be sure works properly. So what's left to test? Your code - the code that calls setTimeout. You can make sure that you're calling setTimeout correctly. As to how this is done - there are two sinon features you can use. The first is useFakeTimers, which gives you control of the clock. The second is a spy, which you should use on the console.log to make sure it was called.

describe('#helloAfter100ms()', function(){
    it('console logs hello ONLY after 100ms', function(){
        const clock = sinon.useFakeTimers();
        const logSpy = sinon.spy(console, 'log');
        helloAfter100ms();
        expect(logSpy).to.not.have.been.called;
        clock.tick(100);
        expect(logSpy).to.have.been.calledOnce;
        logSpy.restore();
        clock.restore();
    });
}); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46754569', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4962361/']} | jdg_380014 |
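For intuition about why clock.tick(100) works, here is a toy, hand-rolled stand-in for fake timers. This is not sinon's actual implementation - just an illustrative sketch of the mechanism: setTimeout is temporarily replaced by a queue, and a tick method fires whatever callbacks have come due:

```javascript
// Toy stand-in for sinon.useFakeTimers, to show the mechanics only.
function useToyFakeTimers() {
  const realSetTimeout = setTimeout;
  let now = 0;
  let queue = [];
  // Swap the global setTimeout for one that just records callbacks.
  setTimeout = (fn, delay) => queue.push({ at: now + delay, fn });
  return {
    tick(ms) {
      // Advance the fake clock and fire everything that is now due.
      now += ms;
      const due = queue.filter(t => t.at <= now);
      queue = queue.filter(t => t.at > now);
      due.forEach(t => t.fn());
    },
    restore() { setTimeout = realSetTimeout; },
  };
}

// Same shape as the function under test in the question, but pushing to an
// array instead of console.log so the effect is easy to observe.
const calls = [];
function helloAfter100ms() {
  setTimeout(() => calls.push('hello'), 100);
}

const clock = useToyFakeTimers();
helloAfter100ms();
console.log(calls.length); // 0 - nothing fired yet
clock.tick(99);
console.log(calls.length); // 0 - still before the timeout
clock.tick(1);
console.log(calls.length); // 1 - fired exactly at the 100ms mark
clock.restore();
```

Sinon's real fake timers do much more (intervals, Date, microtasks), but the test-controlled clock is the same idea.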
stackexchange | llm_judgeable_groundtruth_similarity | 51033222 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Edit: I've edited the code below to feature the method that fetches the data along with the widgets that build the train estimates (replacing any API information along the way with "API_URL" and "API_STOP_ID" ). I hope this even better helps us figure out the problem! I really appreciate any information anyone can give -- I've been working very hard on this project! Thank you all again! Original post: I have a ListView of ListTiles that each have a trailing widget which builds train arrival estimates in a new Text widget. These trailing widgets are updated every five seconds (proven by print statements). As a filler for when the app is fetching data from the train's API, it displays a "no data" Text widget which is built by _buildEstimatesNull(). However, the problem is that "no data" is still being shown even when the app has finished fetching data and _isLoading = false (proven by print statements). Still, even if that was solved, the train estimates would become quickly outdated, as the trailing widgets are updating every five seconds on their own but this would not be reflected in the actual app as the widgets were built on page load. Thus, I need a way to rebuild those trailing widgets whenever they fetch new information. Is there a way to have Flutter automatically rebuild the ListTile's trailing widget every five seconds as well (or whenever _buildEstimatesS1 is updated / the internals of the trailing widget is updated)? 
class ShuttleApp extends StatefulWidget {
  @override
  State<StatefulWidget> createState() {
    return new ShuttleState();
  }
}

class ShuttleState extends State<ShuttleApp> {
  @override
  Widget build(BuildContext context) {
    return new MaterialApp(
      home: new HomeScreen(),
    );
  }
}

class HomeScreen extends StatefulWidget {
  @override
  State<StatefulWidget> createState() {
    return new HomeState();
  }
}

class HomeState extends State<HomeScreen> {
  var _isLoading = true;

  void initState() {
    super.initState();
    _fetchData();
    const fiveSec = const Duration(seconds: 5);
    new Timer.periodic(fiveSec, (Timer t) {
      _fetchData();
    });
  }

  var arrivalsList = new List<ArrivalEstimates>();

  _fetchData() async {
    arrivalsList.clear();
    stopsList.clear();
    final url = "API_URL";
    print("Fetching: " + url);
    final response = await http.get(url);
    final busesJson = json.decode(response.body);
    if (busesJson["service_id"] == null) {
      globals.serviceActive = false;
    } else {
      busesJson["ResultSet"]["Result"].forEach((busJson) {
        if (busJson["arrival_estimates"] != null) {
          busJson["arrival_estimates"].forEach((arrivalJson) {
            globals.serviceActive = true;
            final arrivalEstimate = new ArrivalEstimates(
                arrivalJson["route_id"], arrivalJson["arrival_at"], arrivalJson["stop_id"]);
            arrivalsList.add(arrivalEstimate);
          });
        }
      });
    }
    setState(() {
      _isLoading = false;
    });
  }

  Widget _buildEstimateNull() {
    return new Container(
      child: new Center(
        child: new Text("..."),
      ),
    );
  }

  Widget _buildEstimateS1() {
    if (globals.serviceActive == false) {
      print('serviceNotActive');
      _buildEstimateNull();
    } else {
      final String translocStopId = "API_STOP_ID";
      final estimateMatches = new List<String>();
      arrivalsList.forEach((arrival) {
        if (arrival.stopId == translocStopId) {
          estimateMatches.add(arrival.arrivalAt);
        }
      });
      estimateMatches.sort();
      if (estimateMatches.length == 0) {
        print("zero");
        return _buildEstimateNull();
      } else {
        return new Container(
          child: new Center(
            child: new Text(estimateMatches[0]),
          ),
        );
      }
    }
  }

  @override
  Widget build(BuildContext context) {
    return new Scaffold(
      backgroundColor: const Color(0xFF171717),
      appBar: new AppBar(),
      body: new DefaultTextStyle(
        style: new TextStyle(color: const Color(0xFFaaaaaa)),
        child: new ListView(
          children: <Widget>[
            new ListTile(
              title: new Text('S1: Forest Hills', style: new TextStyle(fontWeight: FontWeight.w500, fontSize: 20.0)),
              subtitle: new Text('Orange Line'),
              contentPadding: new EdgeInsets.symmetric(vertical: 16.0, horizontal: 16.0),
              trailing: _isLoading ? _buildEstimateNull() : _buildEstimateS1(),
            ),
          ],
        ),
      ),
    );
  }
}

class ArrivalEstimates {
  final String routeId;
  final String arrivalAt;
  final String stopId;
  ArrivalEstimates(this.routeId, this.arrivalAt, this.stopId);
}

Thank you so much in advance for any help you can give! I really super appreciate it! :)
Now provide the response and nothing else.
| There are a few ways you could tackle this. It is slightly difficult however to tell what's going on without seeing a bit more of your code - specifically how you're getting the data and what you're doing with it. But I think I can give you a sufficient answer anyways.

The simple way of doing this is to either:

1. Have a StatefulWidget which keeps track of the build estimates for all of the items in the list. It should request data from your API, get the results, and then call setState(() => this.listData = data); . The call to setState is what tells the widget that it needs to rebuild.
2. Have a StatefulWidget for each item in the list. They would all each perform an API request every 5 seconds, get the results, and then each would call setState(() => this.itemData = data); . This means multiple calls to the API etc.

The advantage of #1 is that you can batch API calls, whereas the advantage to #2 is that your build would change less overall (although the way flutter works, this would be pretty minimal)... so I would probably go with #1 if possible.

However, there is a better way of doing this! The better way of doing this is to have some sort of API Manager (or whatever you want to call it) which handles the communication with your API. It probably would live higher up in your widget tree and would be started/stopped with whatever logic you want. Depending on how far up the widget tree it is, you could either pass it into each child or, more likely, hold it in an InheritedWidget, which could then be used to retrieve it from each list element or from the overall list. The API manager would provide various streams - either with a bunch of named fields/methods or with a getStream(id) sort of structure depending on your API.

Then, within your various list elements, you would use StreamBuilder widgets to build each of the elements based on the data - by using a StreamBuilder you get a ConnectionState object that lets you know whether the stream has received any data yet, so you can choose to show an isLoading type widget instead of the one that shows data.

By using this more advanced method, you get:

- Maintainability: if your API changes, you only have to change the API manager, and you can write better testing as the API interactions and the UI interactions are separated.
- Extensibility: if you, later on, use push notifications for updates rather than pinging a server every 5 seconds, that can be incorporated into the API manager so that it can simply update the stream without touching the UI.

EDIT: as per OP's comments, they have already implemented more or less the first suggestion. However, there are a few problems with the code. I'll list them below and I've posted the code with a couple of changes.

1. The arrivalsList should be replaced each time a new build is done rather than simply being changed. This is because dart compares the lists and if it finds the same list, it doesn't necessarily compare all of the elements. Also, while changing it in the middle of a function isn't necessarily going to cause problems, it's generally better to use a local variable and then change the value at the end. Note that the member is actually set within setState.
2. If serviceActive == false, the return was missed from return _buildEstimateNull(); .

Here's the code:

class HomeState extends State<HomeScreen> {
  var _isLoading = true;

  void initState() {
    super.initState();
    _fetchData();
    const fiveSec = const Duration(seconds: 5);
    new Timer.periodic(fiveSec, (Timer t) {
      _fetchData();
    });
  }

  var arrivalsList = new List<ArrivalEstimates>();

  _fetchData() async {
    var arrivalsList = new List<ArrivalEstimates>(); // *********** #1
    stopsList.clear();
    final url = "API_URL";
    print("Fetching: " + url);
    final response = await http.get(url);
    final busesJson = json.decode(response.body);
    if (busesJson["service_id"] == null) {
      print("no service id");
      globals.serviceActive = false;
    } else {
      busesJson["ResultSet"]["Result"].forEach((busJson) {
        if (busJson["arrival_estimates"] != null) {
          busJson["arrival_estimates"].forEach((arrivalJson) {
            globals.serviceActive = true;
            final arrivalEstimate = new ArrivalEstimates(
                arrivalJson["route_id"], arrivalJson["arrival_at"], arrivalJson["stop_id"]);
            arrivalsList.add(arrivalEstimate);
          });
        }
      });
    }
    setState(() {
      _isLoading = false;
      this.arrivalsList = arrivalsList; // *********** #1
    });
  }

  Widget _buildEstimateNull() {
    return new Container(
      child: new Center(
        child: new Text("..."),
      ),
    );
  }

  Widget _buildEstimateS1() {
    if (globals.serviceActive == false) {
      print('serviceNotActive');
      return _buildEstimateNull(); // ************ #2
    } else {
      final String translocStopId = "API_STOP_ID";
      final estimateMatches = new List<String>();
      print("arrivalsList length: ${arrivalsList.length}");
      arrivalsList.forEach((arrival) {
        if (arrival.stopId == translocStopId) {
          print("Estimate match found: ${arrival.stopId}");
          estimateMatches.add(arrival.arrivalAt);
        }
      });
      estimateMatches.sort();
      if (estimateMatches.length == 0) {
        print("zero");
        return _buildEstimateNull();
      } else {
        return new Container(
          child: new Center(
            child: new Text(estimateMatches[0]),
          ),
        );
      }
    }
  }

  @override
  Widget build(BuildContext context) {
    return new Scaffold(
      backgroundColor: const Color(0xFF171717),
      appBar: new AppBar(),
      body: new DefaultTextStyle(
        style: new TextStyle(color: const Color(0xFFaaaaaa)),
        child: new ListView(
          children: <Widget>[
            new ListTile(
              title: new Text('S1: Forest Hills', style: new TextStyle(fontWeight: FontWeight.w500, fontSize: 20.0)),
              subtitle: new Text('Orange Line'),
              contentPadding: new EdgeInsets.symmetric(vertical: 16.0, horizontal: 16.0),
              trailing: _isLoading ? _buildEstimateNull() : _buildEstimateS1(),
            ),
          ],
        ),
      ),
    );
  }
} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51033222', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9510263/']} | jdg_380015 |
stackexchange | llm_judgeable_groundtruth_similarity | 419163 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What does this do, and why should one include the if statement? if __name__ == "__main__": print("Hello, World!") If you are trying to close a question where someone should be using this idiom and isn't, consider closing as a duplicate of Why is Python running my module when I import it, and how do I stop it? instead. For questions where someone simply hasn't called any functions, or incorrectly expects a function named main to be used as an entry point automatically, use Why doesn't the main() function run when I start a Python script? Where does the script start running? .
Now provide the response and nothing else.
| Short Answer

It's boilerplate code that protects users from accidentally invoking the script when they didn't intend to. Here are some common problems when the guard is omitted from a script:

- If you import the guardless script in another script (e.g. import my_script_without_a_name_eq_main_guard), then the latter script will trigger the former to run at import time and using the second script's command line arguments. This is almost always a mistake.
- If you have a custom class in the guardless script and save it to a pickle file, then unpickling it in another script will trigger an import of the guardless script, with the same problems outlined in the previous bullet.

Long Answer

To better understand why and how this matters, we need to take a step back to understand how Python initializes scripts and how this interacts with its module import mechanism.

Whenever the Python interpreter reads a source file, it does two things: it sets a few special variables like __name__, and then it executes all of the code found in the file. Let's see how this works and how it relates to your question about the __name__ checks we always see in Python scripts.

Code Sample

Let's use a slightly different code sample to explore how imports and scripts work. Suppose the following is in a file called foo.py.

# Suppose this is foo.py.

print("before import")
import math

print("before function_a")
def function_a():
    print("Function A")

print("before function_b")
def function_b():
    print("Function B {}".format(math.sqrt(100)))

print("before __name__ guard")
if __name__ == '__main__':
    function_a()
    function_b()
print("after __name__ guard")

Special Variables

When the Python interpreter reads a source file, it first defines a few special variables. In this case, we care about the __name__ variable.

When Your Module Is the Main Program

If you are running your module (the source file) as the main program, e.g.

python foo.py

the interpreter will assign the hard-coded string "__main__" to the __name__ variable, i.e.

# It's as if the interpreter inserts this at the top
# of your module when run as the main program.
__name__ = "__main__"

When Your Module Is Imported By Another

On the other hand, suppose some other module is the main program and it imports your module. This means there's a statement like this in the main program, or in some other module the main program imports:

# Suppose this is in some other main program.
import foo

The interpreter will search for your foo.py file (along with searching for a few other variants), and prior to executing that module, it will assign the name "foo" from the import statement to the __name__ variable, i.e.

# It's as if the interpreter inserts this at the top
# of your module when it's imported from another module.
__name__ = "foo"

Executing the Module's Code

After the special variables are set up, the interpreter executes all the code in the module, one statement at a time. You may want to open another window on the side with the code sample so you can follow along with this explanation.

Always

- It prints the string "before import" (without quotes).
- It loads the math module and assigns it to a variable called math. This is equivalent to replacing import math with the following (note that __import__ is a low-level function in Python that takes a string and triggers the actual import):

# Find and load a module given its string name, "math",
# then assign it to a local variable called math.
math = __import__("math")

- It prints the string "before function_a".
- It executes the def block, creating a function object, then assigning that function object to a variable called function_a.
- It prints the string "before function_b".
- It executes the second def block, creating another function object, then assigning it to a variable called function_b.
- It prints the string "before __name__ guard".

Only When Your Module Is the Main Program

- If your module is the main program, then it will see that __name__ was indeed set to "__main__" and it calls the two functions, printing the strings "Function A" and "Function B 10.0".

Only When Your Module Is Imported by Another (instead)

- If your module is not the main program but was imported by another one, then __name__ will be "foo", not "__main__", and it'll skip the body of the if statement.

Always

- It will print the string "after __name__ guard" in both situations.

Summary

In summary, here's what'd be printed in the two cases:

# What gets printed if foo is the main program
before import
before function_a
before function_b
before __name__ guard
Function A
Function B 10.0
after __name__ guard

# What gets printed if foo is imported as a regular module
before import
before function_a
before function_b
before __name__ guard
after __name__ guard

Why Does It Work This Way?

You might naturally wonder why anybody would want this. Well, sometimes you want to write a .py file that can be both used by other programs and/or modules as a module, and can also be run as the main program itself. Examples:

- Your module is a library, but you want to have a script mode where it runs some unit tests or a demo.
- Your module is only used as a main program, but it has some unit tests, and the testing framework works by importing .py files like your script and running special test functions. You don't want it to try running the script just because it's importing the module.
- Your module is mostly used as a main program, but it also provides a programmer-friendly API for advanced users.

Beyond those examples, it's elegant that running a script in Python is just setting up a few magic variables and importing the script. "Running" the script is a side effect of importing the script's module.

Food for Thought

Question: Can I have multiple __name__ checking blocks? Answer: it's strange to do so, but the language won't stop you.

Suppose the following is in foo2.py. What happens if you say python foo2.py on the command-line? Why?

# Suppose this is foo2.py.
import os, sys; sys.path.insert(0, os.path.dirname(__file__)) # needed for some interpreters

def function_a():
    print("a1")
    from foo2 import function_b
    print("a2")
    function_b()
    print("a3")

def function_b():
    print("b")

print("t1")
if __name__ == "__main__":
    print("m1")
    function_a()
    print("m2")
print("t2")

Now, figure out what will happen if you remove the __name__ check in foo3.py:

# Suppose this is foo3.py.
import os, sys; sys.path.insert(0, os.path.dirname(__file__)) # needed for some interpreters

def function_a():
    print("a1")
    from foo3 import function_b
    print("a2")
    function_b()
    print("a3")

def function_b():
    print("b")

print("t1")
print("m1")
function_a()
print("m2")
print("t2")

What will this do when used as a script? When imported as a module?

# Suppose this is in foo4.py
__name__ = "__main__"

def bar():
    print("bar")

print("before __name__ guard")
if __name__ == "__main__":
    bar()
print("after __name__ guard") | {} | {'log_upvote_score': 14, 'links': ['https://Stackoverflow.com/questions/419163', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/51518/']} | jdg_380016 |
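One more hypothetical demonstration (mine, not from the answer above): executing the same source twice with different __name__ values simulates "run as the main program" vs "imported", and shows that the guarded code runs only in the first case:

```python
# Hypothetical demonstration: exec the same source with __name__ set as if it
# were the main program, and then as if it were imported, and observe that
# the guarded branch only runs in the first case.
source = """
def main():
    return "main ran"

if __name__ == "__main__":
    result = main()
else:
    result = "imported; main did not run"
"""

as_script = {"__name__": "__main__"}
exec(source, as_script)

as_module = {"__name__": "foo"}
exec(source, as_module)

print(as_script["result"])  # main ran
print(as_module["result"])  # imported; main did not run
```

This is exactly the switch the interpreter flips for real files: same code, different value of __name__.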
stackexchange | llm_judgeable_groundtruth_similarity | 18561556 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've tried many things and the error always appears. I tried using only one option to see if it passed, and changed the jQuery call, but no luck. I looked in various places on the internet about this error, but could not solve it or understand why it is happening. On my PC using EasyPHP it works perfectly, but when I put it online it does not work.

Syntax Error: unexpected token <

Here's my code:

$(function(){
    $('#salvar').click(function(){
        var key = 'salvar';
        var title = $('#title').val();
        var opcao1 = $('#opcao1').val();
        var opcao2 = $('#opcao2').val();
        var opcao3 = $('#opcao3').val();
        var opcao4 = $('#opcao4').val();
        var opcao5 = $('#opcao5').val();
        var opcao6 = $('#opcao6').val();
        if(title.length > 0){
            if(opcao2.length > 0){
                $('#resposta').removeClass().html('Salvando a enquete...<br clear="all"><br><img src="images/switch-loading.gif" />');
                $.ajax({
                    type : 'POST',
                    url : 'funcoes/enquete_adm.php',
                    dataType : 'json',
                    data: {key:key,title:title,opcao1:opcao1,opcao2:opcao2,opcao3:opcao3,opcao4:opcao4,opcao5:opcao5,opcao6:opcao6},
                    success : function(data){
                        if(data.sql == 'ok'){
                            $('#resposta').addClass('success-box').html('Enquete Salva!').fadeIn(1000);
                            $('#control').fadeOut();
                        }else if(data.sql == 'error'){
                            $('#resposta').addClass('info-box').html('Ops, aconteceu um erro. Por favor, tente novamente').fadeIn(1000);
                        }
                    },
                    error: function (XMLHttpRequest, textStatus, errorThrown) {
                        alert("XMLHttpRequest " + XMLHttpRequest[0]);
                        alert(" errorThrown: " + errorThrown);
                        alert(" textstatus : " + textStatus);
                    }
                });
            }else{
                $('#resposta').addClass('warning-box').html('É necessário no mínimo duas opções');
            }
        }else{
            $('#resposta').addClass('warning-box').html('Coloque a pergunta da enquete');
        }
        return false;
    });
}); // End

If you need any additional information, please let me know and I will provide it. Thank you
| This usually happens when you're including or posting to a file which doesn't exist. The server will return a regular HTML-formatted "404 Not Found" enclosed in '<html></html>' tags. That first chevron < isn't valid JS nor valid JSON, therefore it triggers the "unexpected token" error. What if you try to change 'funcoes/enquete_adm.php' to an absolute URL, just to be sure?

EDIT (several years later)

The root cause might not always come from 404 errors. Sometimes you can make a request to an API and receive HTML-formatted errors. I've stumbled on a couple of cases in which the API endpoint should have returned

{ error: "you must be authenticated to make this request" }

with header 401. And instead I got

<html>You must be authenticated to make this request</html>

with header 200. Given the header is 200 you can't tell beforehand that the request has failed, and you're stuck trying to JSON.parse the response to check if it's valid.
| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/18561556', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2737725/']} | jdg_380017
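A small guard that makes the failure mode from the answer above obvious (a hypothetical helper, not part of jQuery or the original answer): check whether the body starts with a chevron before handing it to JSON.parse.

```javascript
// Hypothetical helper: if the server sent back an HTML error page instead of
// JSON, fail with a descriptive message rather than "unexpected token <".
function parseJsonResponse(body) {
  const trimmed = body.trim();
  if (trimmed.startsWith('<')) {
    throw new Error('Expected JSON but got HTML - check the URL and the HTTP status');
  }
  return JSON.parse(trimmed);
}

console.log(parseJsonResponse('{"sql": "ok"}').sql); // ok
try {
  parseJsonResponse('<html>404 Not Found</html>');
} catch (e) {
  console.log(e.message); // Expected JSON but got HTML - ...
}
```

If the guard fires, the network tab usually shows the culprit: a wrong URL or a non-200 status.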
stackexchange | llm_judgeable_groundtruth_similarity | 20318295 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been building a simple application to learn angular.js. So far I hooked up all the pieces in the MEAN stack and I am able to save and retrieve data from Mongo. The app is essentially a todo list. The user can create a project and inside the project create "cards" with "todos" which can then be moved from state to state ("backlog", "in progress", "complete", etc.). I would like to be able to push notifications to all the people who are connected, to tell their apps that a refresh is needed to get the latest todos. In other words, let's say that user A adds a new card to project A; I would like to send a message out to all users who are currently watching project A so that their application issues a project refresh to get the latest and greatest. Any suggestions on how to proceed? Which technology, if any, do I need to add to the MEAN stack to be able to do something like this? Thanks in advance
Now provide the response and nothing else.
| Since you're on the MEAN stack, the standard recommendation in Node would be to use the Socket.IO API. They provide the following example of two-way messaging (which would facilitate your push messages very easily):

Client

<script src="/socket.io/socket.io.js"></script>
<script>
  var socket = io.connect('http://localhost');
  socket.on('news', function (data) {
    console.log(data);
    socket.emit('my other event', { my: 'data' });
  });
</script>

Server

var app = require('http').createServer(handler)
  , io = require('socket.io').listen(app)
  , fs = require('fs')

app.listen(80);

function handler (req, res) {
  fs.readFile(__dirname + '/index.html', function (err, data) {
    if (err) {
      res.writeHead(500);
      return res.end('Error loading index.html');
    }
    res.writeHead(200);
    res.end(data);
  });
}

io.sockets.on('connection', function (socket) {
  socket.emit('news', { hello: 'world' });
  socket.on('my other event', function (data) {
    console.log(data);
  });
});

It will use websockets where possible, and fall back to AJAX long polling or Flash polling in browsers where there is no websocket support. As for integrating with Angular, here's a good blog post on Socket.IO and Angular:

I'll be writing about how to integrate Socket.IO to add real-time features to an AngularJS application. In this tutorial, I'm going to walk through writing an instant messaging app.
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20318295', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/438078/']} | jdg_380018
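The per-project notification the question asks for maps naturally onto Socket.IO's rooms feature. As a library-free sketch of that idea (all names below are invented for illustration, not part of Socket.IO):

```javascript
// Library-free sketch of the "rooms" idea: watchers register per project id,
// and an update notifies only that project's watchers.
const watchers = new Map(); // projectId -> Set of callbacks

function watch(projectId, callback) {
  if (!watchers.has(projectId)) watchers.set(projectId, new Set());
  watchers.get(projectId).add(callback);
}

function notifyProject(projectId, message) {
  for (const cb of watchers.get(projectId) || []) cb(message);
}

const received = [];
watch('projectA', (msg) => received.push('A got: ' + msg));
watch('projectB', (msg) => received.push('B got: ' + msg));
notifyProject('projectA', 'refresh');
console.log(received); // only projectA's watcher was notified
```

With Socket.IO the Map is replaced by rooms that sockets join, and the callback is an emit to the room.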
stackexchange | llm_judgeable_groundtruth_similarity | 187169 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an input file named: Sample_0000860156-001-out_20150224150524.xml.gz. I want only the 0000860156-001 part of the file name, to append to my target file name.
Now provide the response and nothing else.
| Here are some approaches. You haven't explained what you want to do with the extracted string, so I will just print it.

awk with multiple delimiters

echo Sample_0000860156-001-out_20150224150524.xml.gz | awk -F '[_-]' '{print $2"-"$3}'

GNU grep

echo Sample_0000860156-001-out_20150224150524.xml.gz | grep -oP '_\K.*(?=-)'

sed

echo Sample_0000860156-001-out_20150224150524.xml.gz | sed 's/.*_\([0-9]*-[0-9]*\)-.*/\1/'

Perl

echo Sample_0000860156-001-out_20150224150524.xml.gz | perl -pe 's/.*_(.+?-.+?)-.*/\1/'

Or, with recent perl versions (thanks @ksenoy):

echo Sample_0000860156-001-out_20150224150524.xml.gz | perl -aF/[_-]/ -lne 'print "$F[1]-$F[2]"'

cut

echo Sample_0000860156-001-out_20150224150524.xml.gz | cut -d_ -f2 | cut -d- -f1,2
| {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/187169', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/104740/']} | jdg_380019
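If you'd rather avoid spawning an external tool entirely, plain POSIX parameter expansion also works (same sample filename as in the answer above; this assumes the part you want always ends right before -out):

```shell
# Strip everything up to the first "_", then strip from "-out" onward.
f='Sample_0000860156-001-out_20150224150524.xml.gz'
f=${f#*_}      # -> 0000860156-001-out_20150224150524.xml.gz
f=${f%%-out*}  # -> 0000860156-001
echo "$f"
```

Adjust the patterns if your filenames follow a different convention.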
stackexchange | llm_judgeable_groundtruth_similarity | 1083076 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The question asks to solve the equation by this method. I know how to annihilate $\sin(2x)$ with $(D^2+4)$; however, I don't know what to do in the case of $\sin^2(2x)$. Thanks!
Now provide the response and nothing else.
| Hint. If you know how to annihilate $\sin(2x)$, you surely know how to annihilate $\sin^2(2x)$: you just write $$\sin^2(2x)=\frac{1-\cos(4x)}{2}$$ then you annihilate $\cos(4x)$ as you did for $\sin(2x)$, obtaining $$(D^2+16)(\cos(4x))\equiv 0$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1083076', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/156491/']} | jdg_380020
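Putting the hint together (an added remark, not part of the original answer): the constant $\tfrac{1}{2}$ is annihilated by $D$, so the full annihilator of $\sin^2(2x)$ is the product $D(D^2+16)$:

```latex
D\,(D^2+16)\left[\sin^2(2x)\right]
  = D\,(D^2+16)\left[\tfrac{1}{2}-\tfrac{1}{2}\cos(4x)\right]
  = 0
```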
stackexchange | llm_judgeable_groundtruth_similarity | 415799 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a zip file with a size of 1.5 GB. Its content is one ridiculously large plain-text file (60 GB) and I currently do not have enough space left on my disk to extract it all, nor do I want to extract it all even if I did. As for my use case, it would suffice if I can inspect parts of the content. Hence I want to unzip the file as a stream and access a range of the file (like one can via head and tail on a normal text file), either by byte offset (e.g. extract max 100 kB starting from the 32 GB mark) or by lines (give me the plain-text lines 3700-3900). Is there a way to achieve that?
Now provide the response and nothing else.
| Note that gzip can extract zip files (at least the first entry in the zip file). So if there's only one huge file in that archive, you can do:

gunzip < file.zip | tail -n +3000 | head -n 20

To extract the 20 lines starting with the 3000th one for instance. Or:

gunzip < file.zip | tail -c +3000 | head -c 20

For the same thing with bytes (assuming a head implementation that supports -c). For any arbitrary member in the archive, in a Unixy way:

bsdtar xOf file.zip file-to-extract | tail... | head...

With the head builtin of ksh93 (like when /opt/ast/bin is ahead in $PATH), you can also do:

.... | head -s 2999 -c 20
.... | head --skip=2999 --bytes=20

Note that in any case gzip/bsdtar/unzip will always need to uncompress (and discard here) the entire section of the file that leads to the portion that you want to extract. That's down to how the compression algorithm works.
| {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/415799', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/12471/']} | jdg_380021
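If a scripting language is acceptable, Python's zipfile module can do the same streaming trick. A self-contained sketch (it builds a small test archive in memory, since I can't assume your real file.zip):

```python
import io
import zipfile

# Build a small zip in memory so the example is self-contained
# (substitute your real file.zip on disk).
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:
    zf.writestr('big.txt', ''.join('line %d\n' % i for i in range(1, 10001)))

def lines_from_zip(zip_source, member, start, count):
    """Yield `count` lines of `member`, starting at 1-based line `start`."""
    with zipfile.ZipFile(zip_source) as zf:
        with zf.open(member) as raw:                      # decompressing stream
            text = io.TextIOWrapper(raw, encoding='utf-8')
            for lineno, line in enumerate(text, start=1):
                if lineno < start:
                    continue   # earlier data is decompressed, then discarded
                if lineno >= start + count:
                    break
                yield line.rstrip('\n')

print(list(lines_from_zip(buf, 'big.txt', 3000, 3)))
# ['line 3000', 'line 3001', 'line 3002']
```

As with gunzip, everything before the requested range is still decompressed and discarded; you save disk space, not CPU time.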
stackexchange | llm_judgeable_groundtruth_similarity | 3521311 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have logged into an application which is running on IIS. Now I haven't logged out, but have closed the browser. When I access the application again, it defaults to the login page. How does IIS recognize that it is a new request and redirect the user to the login page? I have another question. Suppose I don't close the browser I used when I logged in, but open a new browser to request a page from the same application. IIS recognizes that it's a new request to the application and redirects the user to the login page. Why does it not use the existing session or cookies which the first browser uses? Please don't get irritated by my continuous questions... I am very confused. We say HTTP is a stateless protocol. Once the page is requested and I have logged in, the HTTP connection between IIS and the browser will be terminated, right? Then I navigate to other pages in that logged-in application. Now IIS recognises the user has logged in on this browser. But when I open a new browser and request that application, how does IIS recognise it is a new request? Since the HTTP connection was closed, how does it work in the first case?
Now provide the response and nothing else.
| As you've correctly said, HTTP itself is stateless, and each request is technically separate from every other. Sessions, as used by web sites, are a workaround for that. What happens, normally, is that the server stores whatever info it cares to maintain between requests (like the logged-in user's username and/or ID, for example), and assigns that information an ID (called a "session ID"). It then tells the browser that session ID, in such a way that the browser can hand the ID back when it's time to make another request. If the browser plays its part and provides the session ID, then the stored information can be retrieved, updated, etc with each request, providing some degree of state even over a stateless protocol. Sessions are usually implemented using cookies. That is, the server hands the browser a cookie with the session ID, and the browser hands back that same cookie with each request until the cookie expires or is otherwise forgotten. Some cookies (so-called "session cookies") aren't saved, and are forgotten when the browser is closed. A freshly opened browser doesn't have any session cookies to pass, so if the server uses session cookies to do sessions (which it should), it will consider the user not yet logged in and bounce them to the login page if they need to be logged in. Session cookies will usually be shared between tabs in the same browser, and will sometimes even be shared by windows opened by "File > New Window" from an already running browser, because both of those cases will typically just be a part of that browser. But if you start the browser from the Start menu, or however your OS lets you start a program, it's a whole other process -- and session cookies are rarely shared between processes. The server typically also only remembers sessions on its end for a limited time (anywhere from seconds to years, depending on the server and/or site settings) after each request that uses the session. 
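The cookie-plus-server-store mechanism described above can be reduced to a toy model (this is an illustration of the concept only, not how IIS/ASP.NET is actually implemented):

```python
import secrets

# Server-side session store: state lives here, keyed by a random id.
# Only the id travels to the browser, as a cookie.
sessions = {}

def login(username):
    sid = secrets.token_hex(16)        # value the browser stores in a cookie
    sessions[sid] = {'user': username}
    return sid

def handle_request(cookie_sid):
    session = sessions.get(cookie_sid)
    if session is None:                # no cookie, or the server forgot it
        return 'redirect to login page'
    return 'hello ' + session['user']

sid = login('alice')
print(handle_request(sid))    # hello alice
print(handle_request(None))   # a fresh browser has no session cookie yet
```

A freshly started browser process corresponds to `handle_request(None)` here: with no cookie to present, the server has no way to find the session, so it bounces to the login page.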
If the browser passes a cookie that corresponds to a session the server no longer remembers, it'll act as if there's no session at all. Which, in cases where you have to log in, will again bounce to the login page. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3521311', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/320009/']} | jdg_380022 |
stackexchange | llm_judgeable_groundtruth_similarity | 7727804 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to make a canvas scrollable. However, once I try to set up scrollbars to work with the canvas, tkinter seems to completely ignore the dimensions I initially set for my canvas. I've tried packing them all in a frame, setting the canvas to fill the frame and then setting the frame size, but that presents the same problem unless I set the frame to fill the window as well, which isn't what I want. Basically, I want a fixed-size canvas with scrollbars on it. My current code looks like this (in python 3.1):

from tkinter import *

root=Tk()
frame=Frame(root,width=300,height=300)
frame.grid(row=0,column=0)
canvas=Canvas(frame,bg='#FFFFFF',width=300,height=300,scrollregion=(0,0,500,500))
hbar=Scrollbar(canvas,orient=HORIZONTAL)
hbar.pack(side=BOTTOM,fill=X)
hbar.config(command=canvas.xview)
vbar=Scrollbar(canvas,orient=VERTICAL)
vbar.pack(side=RIGHT,fill=Y)
vbar.config(command=canvas.yview)
canvas.config(width=300,height=300)
canvas.config(xscrollcommand=hbar.set, yscrollcommand=vbar.set)
canvas.pack(side=LEFT,expand=True,fill=BOTH)
root.mainloop()
Now provide the response and nothing else.
| Your scrollbars need to have the Frame as a parent, not the Canvas:

from tkinter import *

root=Tk()
frame=Frame(root,width=300,height=300)
frame.pack(expand=True, fill=BOTH) #.grid(row=0,column=0)
canvas=Canvas(frame,bg='#FFFFFF',width=300,height=300,scrollregion=(0,0,500,500))
hbar=Scrollbar(frame,orient=HORIZONTAL)
hbar.pack(side=BOTTOM,fill=X)
hbar.config(command=canvas.xview)
vbar=Scrollbar(frame,orient=VERTICAL)
vbar.pack(side=RIGHT,fill=Y)
vbar.config(command=canvas.yview)
canvas.config(width=300,height=300)
canvas.config(xscrollcommand=hbar.set, yscrollcommand=vbar.set)
canvas.pack(side=LEFT,expand=True,fill=BOTH)
root.mainloop()

The reason why this works is due to how pack works. By default it will attempt to shrink (or grow) a container to exactly fit its children. Because the scrollbars are children of the canvas in the original example, the canvas shrinks to fit. If you want the scrollbars to appear inside the canvas, the trick is to use an extra frame. Place the canvas and scrollbars in this inner frame, turn the borders off of the canvas and on for the frame. Set the background of the frame to be the same as the canvas and it will appear that the scrollbars are inside the canvas.
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7727804', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/989677/']} | jdg_380023
stackexchange | llm_judgeable_groundtruth_similarity | 157775 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Directed acyclic graphs (DAGs; e.g., Greenland, et al, 1999) are part of a formalism of causal inference from the counterfactual-interpretation-of-causality camp. In these graphs the presence of an arrow from variable $A$ to variable $B$ asserts that variable $A$ directly causes (some change in risk of) variable $B$, and the absence of such an arrow asserts that variable $A$ does not directly cause (some change in risk of) variable $B$. As an example, the statement "tobacco smoke exposure directly causes a change in risk of mesothelioma" is represented by the black arrow from "tobacco smoke exposure" to "mesothelioma" in the not a DAG causal diagram below. Likewise, the statement "asbestos exposure directly causes a change in risk of mesothelioma" is represented by the black arrow from "asbestos exposure" to "mesothelioma" in the not a DAG causal graph below. I use the term not a DAG to describe the causal graph below because of the red arrow, by which I intend to assert something like "asbestos exposure causes a change in the direct causal effect of tobacco smoke exposure on risk of mesothelioma" (asbestos does physical damage to the cells of the lung that, in addition to directly causing a change in risk of mesothelioma, also renders the cells more susceptible to the carcinogenic harms of tobacco smoke exposure, with the result that exposure to both asbestos and tobacco results in an increase in risk that is more than the sum of the two separate risks), and this does not quite fit with the formal meaning of causal arrows in DAGs I described at the start of my question (i.e. because the red arrow does not terminate in a variable). How does one correctly represent interaction effects within the visual formalism of a DAG?

References
Greenland, S., Pearl, J., and Robins, J. M. (1999). Causal diagrams for epidemiologic research. Epidemiology, 10(1):37–48.
Now provide the response and nothing else.
| Pearl's theory of causality is completely non-parametric. Interactions are not made explicit because of that, neither in the graph nor in the structural equations it represents. However, causal effects can vary (wildly) by assumption. If an effect is identified and you estimate it from data non-parametrically, you obtain a complete distribution of causal effects (instead of, say, a single parameter). Accordingly, you can evaluate the causal effect of tobacco exposure conditional on asbestos exposure non-parametrically to see whether it changes, without committing to any functional form. Let's have a look at the structural equations in your case, which correspond to your "DAG" stripped of the red arrow:

Mesothelioma = $f_{1}$(Tobacco, Asbestos, $\epsilon_{m}$)
Tobacco = $f_{2}$($\epsilon_{t}$)
Asbestos = $f_{3}$($\epsilon_{a}$)

where the $\epsilon$ are assumed to be independent because of missing dashed arrows between them. We have left the respective functions f() and the distributions of the errors unspecified, except for saying that the latter are independent. Nonetheless, we can apply Pearl's theory and immediately state that the causal effects of both tobacco and asbestos exposure on mesothelioma are identified. This means that if we had infinitely many observations from this process, we could exactly measure the effect of setting the exposures to different levels by simply seeing the incidences of mesothelioma in individuals with different levels of exposure. So we could infer causality without doing an actual experiment. This is because there exist no back-door paths from the exposure variables to the outcome variable.
So you would get

P(mesothelioma | do(Tobacco = t)) = P(mesothelioma | Tobacco = t)

The same logic holds for the causal effect of asbestos, which allows you to simply evaluate:

P(mesothelioma | Tobacco = t, Asbestos = a) - P(mesothelioma | Tobacco = t', Asbestos = a)

in comparison to

P(mesothelioma | Tobacco = t, Asbestos = a') - P(mesothelioma | Tobacco = t', Asbestos = a')

for all relevant values of t and a in order to estimate the interaction effects. In your concrete example, let's assume that the outcome variable is a Bernoulli variable - you can either have mesothelioma or not - and that a person has been exposed to a very high asbestos level a. Then, it is very likely that he will suffer from mesothelioma; accordingly, the effect of increasing tobacco exposure will be very low. On the other hand, if asbestos levels a' are very low, increasing tobacco exposure will have a greater effect. This would constitute an interaction between the effects of tobacco and asbestos. Of course, non-parametric estimation can be extremely demanding and noisy with finite data and lots of different t and a values, so you might think about assuming some structure in f(). But basically you can do it without that.
| {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/157775', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/44269/']} | jdg_380024
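To make the last point of the answer above concrete, here is a toy simulation (the data-generating process is invented for illustration): estimating the risk difference for tobacco at two asbestos levels reveals the interaction from conditional frequencies alone, with no functional form assumed for $f_1$.

```python
import random

# Invented data-generating process with an interaction term:
# P(disease) = 0.05 + 0.10*t + 0.15*a + 0.30*t*a
random.seed(0)

def risk_estimate(n, t, a):
    p = 0.05 + 0.10 * t + 0.15 * a + 0.30 * t * a
    return sum(random.random() < p for _ in range(n)) / n

n = 100_000
effect_low_a  = risk_estimate(n, t=1, a=0) - risk_estimate(n, t=0, a=0)  # ~0.10
effect_high_a = risk_estimate(n, t=1, a=1) - risk_estimate(n, t=0, a=1)  # ~0.40
print(effect_low_a, effect_high_a)
# The tobacco effect differs by asbestos level, so there is an interaction,
# detected purely from the conditional probabilities.
```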
stackexchange | llm_judgeable_groundtruth_similarity | 8875376 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Twitter Bootstrap's popover here. Right now, when I hover over the link, a popover appears with just the text from the <a>'s data-content attribute. I was wondering if there is any way to put a <div> inside the popover. Potentially, I would like to use PHP and MySQL in there, but if I could get a div to work I think I can figure out the rest. I tried setting data-content to a div ID, but it didn't work. HTML:
Now provide the response and nothing else.
| First of all, if you want to use HTML inside the content you need to set the HTML option to true:

$('.danger').popover({ html : true});

Then you have two options to set the content for a Popover:

1. Use the data-content attribute. This is the default option.
2. Use a custom JS function which returns the HTML content.

Using data-content: You need to escape the HTML content, something like this:

<a class='danger' data-placement='above' data-content="<div>This is your div content</div>" title="Title" href='#'>Click</a>

You can either escape the HTML manually or use a function. I don't know about PHP but in Rails we use *html_safe*.

Using a JS function: If you do this, you have several options. The easiest I think is to put your div content hidden wherever you want and then write a function to pass its content to popover. Something like this:

$(document).ready(function(){
    $('.danger').popover({
        html : true,
        content: function() {
            return $('#popover_content_wrapper').html();
        }
    });
});

And then your HTML looks like this:

<a class='danger' data-placement='above' title="Popover Title" href='#'>Click</a>
<div id="popover_content_wrapper" style="display: none">
    <div>This is your div content</div>
</div>

Hope it helps!

PS: I've had some troubles when using popover and not setting the title attribute... so, remember to always set the title.
| {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/8875376', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/802370/']} | jdg_380025
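On the "escape the HTML" point from the answer above: if you generate the data-content attribute dynamically, a tiny helper makes the escaping explicit (hypothetical code, not part of Bootstrap or jQuery):

```javascript
// Hypothetical helper: escape HTML so it can sit inside a double-quoted
// attribute like data-content="...". Ampersand must be replaced first.
function escapeAttr(html) {
  return html
    .replace(/&/g, '&amp;')
    .replace(/"/g, '&quot;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

const content = '<div>This is your div content</div>';
console.log('<a data-content="' + escapeAttr(content) + '">Click</a>');
```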
stackexchange | llm_judgeable_groundtruth_similarity | 7403843 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider I have a hierarchy defined as below:

class Strategy
{
public:
    virtual void Run();
};

class StrategyA : public Strategy
{
public:
    virtual void Run();
};

class StrategyB : public Strategy
{
public:
    virtual void Run();
};

I was wondering whether replacing Run() with operator() makes any sense, and if there are any advantages from a design and efficiency perspective:

class Strategy
{
public:
    virtual void operator()();
};

class StrategyA : public Strategy
{
public:
    virtual void operator()();
};

class StrategyB : public Strategy
{
public:
    virtual void operator()();
};

Thanks, CV.
Now provide the response and nothing else.
| Yes, it fully makes sense. Any operator overload is a function, after all. It adds syntactic sugar to the language. Sometimes it's necessary, but often it's just syntactic sugar. Note that you have to invoke it polymorphically (of course, if you want runtime polymorphism), and there are two ways you can do that: using a pointer of the base type, and using a reference of the base type.

Example (demo):

struct A
{
    virtual void operator()() { cout << "A" << endl; }
};

struct B : A
{
    virtual void operator()() { cout << "B" << endl; }
};

int main() {
    B b;

    //using pointer
    A *ptr = &b;
    (*ptr)(); //clumsy! - prints B

    //using reference
    A &ref = b;
    ref(); //better - prints B

    //also correct
    b(); //prints B

    return 0;
}

And if you have a function template written as:

template<typename Functor>
void call(Functor fun)
{
    (*fun)();
}

Then you can use this function for both functors and regular functions:

void regular_function()
{
    cout << "regular_function" << endl;
}

B b;
call(&b); //prints B
call(regular_function); //prints regular_function

Demo: http://ideone.com/B9w16
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7403843', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/358013/']} | jdg_380026
stackexchange | llm_judgeable_groundtruth_similarity | 15558418 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do you save an image from a Three.js canvas? I'm attempting to use Canvas2Image, but it doesn't play well with Three.js, since the canvas isn't defined until it has a div to attach the canvas object to. http://ajaxian.com/archives/canvas2image-save-out-your-canvas-data-to-images
Now provide the response and nothing else.
| Since toDataURL is a method of the canvas HTML element, it will work for a 3D context too. But you have to take care of a couple of things. Make sure that when the 3D context is initialized you set the preserveDrawingBuffer flag to true, like so:

var context = canvas.getContext("experimental-webgl", {preserveDrawingBuffer: true});

Then use canvas.toDataURL() to get the image. In three.js you would have to do the following when the renderer is instantiated:

new THREE.WebGLRenderer({ preserveDrawingBuffer: true });

Also, keep in mind this can have performance implications. (Read: https://github.com/mrdoob/three.js/pull/421#issuecomment-1792008 )

This is only for the WebGL renderer; in the case of the three.js CanvasRenderer, you can simply do renderer.domElement.toDataURL(); directly, no initialization parameter needed. My WebGL experiment: http://jsfiddle.net/TxcTr/3/ press 'p' to screenshot. Props to gaitat, I just followed the link in his comment to get to this answer.
| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15558418', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1628529/']} | jdg_380027
stackexchange | llm_judgeable_groundtruth_similarity | 183077 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
A complex Lie group may have several real forms. Are there any duality/trinity... relations between them? Maybe a trivial question to ask: is $SL(3,\mathbb{C})$ a real form of $SL(3,\mathbb{C})\times SL(3,\mathbb{C})$?
Now provide the response and nothing else.
| According to Djokovic and Maizan , the Specht module $V_{(3, 1, 1)}$ of $S_5$ is monomial. This is a representation of dimension $6$, induced from a representation of dimension $3$ of $A_5$. Since $A_5$ has no subgroup of index $3$ (see here for example), this representation of $A_5$ cannot be monomial. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/183077', 'https://mathoverflow.net', 'https://mathoverflow.net/users/41734/']} | jdg_380028 |
stackexchange | llm_judgeable_groundtruth_similarity | 40430186 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to TensorFlow and machine learning. I am trying to classify two objects, a cup and a pendrive (jpeg images). I have trained and exported a model.ckpt successfully. Now I am trying to restore the saved model.ckpt for prediction. Here is the script:

import tensorflow as tf
import math
import numpy as np
from PIL import Image
from numpy import array

# image parameters
IMAGE_SIZE = 64
IMAGE_CHANNELS = 3
NUM_CLASSES = 2

def main():
    image = np.zeros((64, 64, 3))
    img = Image.open('./IMG_0849.JPG')
    img = img.resize((64, 64))
    image = array(img).reshape(64,64,3)
    k = int(math.ceil(IMAGE_SIZE / 2.0 / 2.0 / 2.0 / 2.0))
    # Store weights for our convolution and fully-connected layers
    with tf.name_scope('weights'):
        weights = {
            # 5x5 conv, 3 input channels, 32 outputs each
            'wc1': tf.Variable(tf.random_normal([5, 5, 1 * IMAGE_CHANNELS, 32])),
            # 5x5 conv, 32 inputs, 64 outputs
            'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
            # 5x5 conv, 64 inputs, 128 outputs
            'wc3': tf.Variable(tf.random_normal([5, 5, 64, 128])),
            # 5x5 conv, 128 inputs, 256 outputs
            'wc4': tf.Variable(tf.random_normal([5, 5, 128, 256])),
            # fully connected, k * k * 256 inputs, 1024 outputs
            'wd1': tf.Variable(tf.random_normal([k * k * 256, 1024])),
            # 1024 inputs, 2 class labels (prediction)
            'out': tf.Variable(tf.random_normal([1024, NUM_CLASSES]))
        }
    # Store biases for our convolution and fully-connected layers
    with tf.name_scope('biases'):
        biases = {
            'bc1': tf.Variable(tf.random_normal([32])),
            'bc2': tf.Variable(tf.random_normal([64])),
            'bc3': tf.Variable(tf.random_normal([128])),
            'bc4': tf.Variable(tf.random_normal([256])),
            'bd1': tf.Variable(tf.random_normal([1024])),
            'out': tf.Variable(tf.random_normal([NUM_CLASSES]))
        }

    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, "./model.ckpt")
        print "...Model Loaded..."
        x_ = tf.placeholder(tf.float32, shape=[None, IMAGE_SIZE, IMAGE_SIZE, IMAGE_CHANNELS])
        y_ = tf.placeholder(tf.float32, shape=[None, NUM_CLASSES])
        keep_prob = tf.placeholder(tf.float32)
        init = tf.initialize_all_variables()
        sess.run(init)
        my_classification = sess.run(tf.argmax(y_, 1), feed_dict={x_:image})
        print 'Neural Network predicted', my_classification[0], "for your image"

if __name__ == '__main__':
    main()

When I run the above script for prediction I get the following error:

ValueError: Cannot feed value of shape (64, 64, 3) for Tensor u'Placeholder:0', which has shape '(?, 64, 64, 3)'

What am I doing wrong? And how do I fix the shape of the numpy array?
Now provide the response and nothing else.
| image has a shape of (64,64,3). Your input placeholder x_ has a shape of (?,64,64,3). The problem is that you're feeding the placeholder a value of a different shape. You have to feed it a value of shape (1,64,64,3) = a batch of 1 image. Just reshape your image value to a batch of size one:

image = array(img).reshape(1,64,64,3)

P.S.: The fact that the input placeholder accepts a batch of images means that you can run predictions for a batch of images in parallel. You can try to read more than 1 image (N images) and then build a batch of N images, using a tensor with shape (N,64,64,3).
| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/40430186', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6438307/']} | jdg_380029
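The shape fix from the answer above can be seen with plain NumPy, independent of TensorFlow:

```python
import numpy as np

# A single image has no batch dimension; the placeholder wants one.
image = np.zeros((64, 64, 3))
batch = image.reshape(1, 64, 64, 3)    # or image[np.newaxis, ...]
print(image.shape, batch.shape)        # (64, 64, 3) (1, 64, 64, 3)

# A batch of N images, for running predictions in parallel:
images = np.stack([image] * 3)
print(images.shape)                    # (3, 64, 64, 3)
```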
stackexchange | llm_judgeable_groundtruth_similarity | 1444311 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $f$ be uniformly continuous on $[0,\infty)$ and assume that $\int_0^\infty f(x)\,dx$ exists. Prove that $$\lim_{x\rightarrow\infty}f(x)=0.$$ This seems obvious, but I couldn't prove it. Edit: Thanks to the hint by Did, I managed to prove the theorem (see the answer below).
Now provide the response and nothing else.
| Suppose $f$ does not converge to $0$. Then for some $\epsilon>0$, we can always find a sequence $x_n\to\infty$ such that $$|f(x_n)|>\epsilon\tag{1}$$ Since $f$ is uniformly continuous, there is a $0<\delta<1$ such that $$|f(x)-f(y)|<\epsilon/2\quad\text{ whenever }\quad |x-y|<\delta$$ Let $I_n=[x_n-\delta/2,x_n+\delta/2]$, and assume without loss of generality (passing to a subsequence and replacing $f$ by $-f$ if necessary) that $f(x_n)>\epsilon$. So for any $x\in I_n$, by $(1)$ we have $$f(x)>f(x_n)-\epsilon/2>\epsilon/2\tag{2}$$ And by $(2)$, we have $$\biggl|\,\int_{I_n} f(x)\, dx\,\biggr|\geqslant \frac{\epsilon}{2}\cdot \delta$$ for each $n$. But then, since $x_n\to\infty$, the Cauchy criterion for improper integrals implies that $\int_0^\infty f(x)\,dx$ diverges, a contradiction. Thus we must have $\lim\limits_{x\to\infty}f(x)= 0$.
| {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1444311', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/211260/']} | jdg_380030
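A remark worth adding (not in the original answer): uniform continuity is essential. A standard counterexample with mere continuity is a train of ever-narrower triangular spikes of height $1$ centered at the integers:

```latex
f(x)=\sum_{n=2}^{\infty}\max\!\bigl(0,\;1-n^{2}\lvert x-n\rvert\bigr),
\qquad
\int_0^\infty f(x)\,dx=\sum_{n=2}^{\infty}\frac{1}{n^{2}}<\infty,
\quad\text{yet } f(n)=1 \text{ for all } n\ge 2 .
```

Each spike has base width $2/n^2$ and area $1/n^2$, so the integral converges while $f$ does not tend to $0$.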
stackexchange | llm_judgeable_groundtruth_similarity | 34414736 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two arrays, for example:

a = [3, 2, 1]
b = [1, 2, 3]

I need to multiply them and create a third array c that will be like this:

c = [3 * 1, 2 * 2, 1 * 3]

Which method is the best by speed? I need to do this for huge arrays, and time is important.
Now provide the response and nothing else.
| a.zip(b).map{|x, y| x * y}
This works because zip combines the two arrays into a single array of two element arrays. i.e.:
a = [3, 2, 1]
b = [1, 2, 3]
a.zip(b)
#=> [[3, 1], [2, 2], [1, 3]]
Then you use map to multiply the elements together. This is done by iterating through each two element array and multiplying one element by the other, with map returning the resulting array.
a.zip(b).map{|x, y| x * y}
#=> [3, 4, 3] | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34414736', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5705175/']} | jdg_380031
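Not from the answer — since the question asks about speed for huge arrays, here is a sketch of an index-based variant that avoids building the intermediate array of pairs that zip creates (whether it is actually faster on your data should be benchmarked, e.g. with the benchmark stdlib):

```ruby
a = [3, 2, 1]
b = [1, 2, 3]

# Same result as a.zip(b).map { |x, y| x * y }, without the intermediate pairs.
c = Array.new(a.size) { |i| a[i] * b[i] }
p c  # => [3, 4, 3]

# Another equivalent form, using each_with_index:
d = a.each_with_index.map { |x, i| x * b[i] }
p d  # => [3, 4, 3]
```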
stackexchange | llm_judgeable_groundtruth_similarity | 2389407 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In the book "Elementary linear algebra with supplemental applications", 11th edition, page 81,task 127, the following task appears: Show that if $A$ is a square matrix such that $A^K=0$ for some positive integer $K$, then the matrix $A$ is invertible and the inverse of the matrix $(I-A)$ is equal to $I + A + A^2 + ... + A^{K-1}$. I have no difficulty in arriving at the identity given that $A^K = 0$, however I think there is something wrong here... Consider the following: Let $A$ be a square matrix not equal to 0 , such that $A^K=0$ for some nonnegative integer $K \ge 2$, and assume for contradiction that $A^{-1}$ exists. Then we can deduce the following: $$A^K=0 \implies A^K A^{-1}=0 \implies A^{K-1} A A^{-1} = 0 \implies A^{K-1} I = 0$$ and we are left with $A^{K-1} = 0$. Thus, $A^K=0 \implies A^{K-1} = 0$, and using the same logic (multiplying by $A^{-1}$), we deduce $A^K=0 \implies A^{K-1} = 0 \implies \cdots \implies A^2 = 0 \implies A = 0$, a contradiction. Thus, the three statements "$A$ does not equal $0$", "$A^{-1}$ exists" and "$A^K=0$ for some nonnegative integer $K$ greater than or equal to $2$" cannot all be true at the same time. And thus the task in the book is inncorrect since it states that $A^{-1}$ exists. Is the book wrong, or is there something wrong with my use of logic here? :)
Now provide the response and nothing else.
| The problem should be to prove $I-A$ is invertible. If $A^k=0$, the determinant of $A^k$ is zero, so the determinant of $A$ is also zero, so $A$ is not invertible. Then the equation to be proved should remind you of the sum of a geometric series. Just multiply out $(I-A)(I+A+A^2+\ldots+A^{K-1})$: the sum telescopes, most of the terms cancel, and you are left with $I-A^{K}=I$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2389407', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_380032
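A quick numeric sanity check of the identity (not part of the original answer; the 2×2 nilpotent matrix is chosen arbitrarily): with A² = 0, the geometric-series inverse reduces to I + A, and (I − A)(I + A) should come out to the identity.

```python
def matmul(X, Y):
    # Plain nested-list matrix product for small square matrices.
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

I = [[1, 0], [0, 1]]
A = [[0, 1], [0, 0]]          # nilpotent: A @ A == 0, so K = 2

assert matmul(A, A) == [[0, 0], [0, 0]]
inv = add(I, A)               # I + A (the series stops at A^{K-1} = A)
assert matmul(sub(I, A), inv) == I   # (I - A)(I + A) = I - A^2 = I
print("(I - A)^{-1} = I + A verified")
```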
stackexchange | llm_judgeable_groundtruth_similarity | 347327 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Consider a convex $n$-gon in $\mathbb{R}^2$ with sides contained in the lines $y=k_ix+b_i, 1\leq i\leq n.$ Then its area equals $$S=\frac{1}{2}\sum_{i=1}^{n} \frac{(b_{i+1}-b_i)^2}{k_{i+1}-k_i}.$$ Of course, it is not hard to prove this formula, but it still looks a bit mysterious to me. Question Has this expression appeared somewhere? It would be wonderful to see a pretty proof of it. Remark Another expression for $S$ can be found here , formula (17). It is a toy analog of a MHV scattering amplitude in N=4 SYM.
Now provide the response and nothing else.
| Denote $A_i=(0,b_i)$ . It is a point on $\ell_i=\{(x,y):y=k_ix+b_i\}$ , and let $$P_i=\ell_i\cap \ell_{i+1}=\left(\frac{b_{i+1}-b_i}{k_i-k_{i+1}},\frac{k_ib_{i+1}-k_{i+1}b_i}{k_i-k_{i+1}}\right)$$ be the vertex of the polygon. The (properly oriented) area of $\triangle A_iP_iA_{i+1}$ equals $\frac{(b_i-b_{i+1})^2}{2(k_{i+1}-k_{i})}$ , and the sum of such oriented areas is the oriented area of the polygon. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/347327', 'https://mathoverflow.net', 'https://mathoverflow.net/users/21620/']} | jdg_380033 |
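A numeric sanity check of the formula (mine, not from the thread; the triangle's three side lines are chosen arbitrarily): the cyclic sum over the side lines y = k_i x + b_i, taken in consecutive order around the boundary, is compared against the shoelace area of the vertices.

```python
# Side lines of the triangle with vertices (2,0), (3,1), (4,0), listed in
# consecutive order around the boundary: y = k_i * x + b_i.
ks = [0, 1, -1]
bs = [0, -2, 4]

n = len(ks)
S = 0.5 * sum((bs[(i + 1) % n] - bs[i]) ** 2 / (ks[(i + 1) % n] - ks[i])
              for i in range(n))

# Shoelace area of the same triangle, for comparison.
verts = [(2, 0), (3, 1), (4, 0)]
shoelace = 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1])))

print(S, shoelace)  # both 1.0
assert abs(S - shoelace) < 1e-12
```

Note the formula needs the sides ordered around the polygon (and no two sides parallel, or a denominator vanishes).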
stackexchange | llm_judgeable_groundtruth_similarity | 1744106 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Please take a look at the following code:
Public Sub Method(Of TEntity As EntityObject)(ByVal entity As TEntity)
    If TypeOf entity Is Contact Then
        Dim contact As Contact = entity 'Compile error'
        Dim contact = DirectCast(entity, Contact) 'Compile error'
        Dim contact = CType(entity, Contact) 'Compile error'
    End If
End Sub
Any ideas?
Now provide the response and nothing else.
| I got this figured out and thought I would post the answer here for future reference for others. So I have fully explained the problem, I will go in to more depth of the real world scenario. We have a moderately large piece of software that requires us to have multiple supporting applications that run on a number of different servers. Our current progression of upgrades makes it moderately difficult to upgrade code in a reliable fashion. Currently we use self extracting exe's to rollout our code to the different servers. The problem arises when we have such a large number of supporting applications that it becomes hard to make sure that the applications got installed correctly with the correct configuration settings, etc. To solve this problem we are looking into the ability to instead of compressing each of the supporting applications, we create a single installer (MSI) that will allow the infrastructure team to install a specific set of supporting applications to each given machine. When we have a major change (for example from 1.0 to 2.0) we will do a full upgrade install (meaning all services/processes will need to be stopped, un-installed, installed, and started.) When we have a minor change, we would like to only have to stop and reinstall the affected services/processes, without touching other applications. Now, enough of me rambling, lets get to the solution: I modified the WIX Product.wxs to remove the shortcuts as we don't really need them in our scenario. 
Here is the updated wxs file: <?xml version="1.0" encoding="UTF-8"?><Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"> <Product Id="13C373D3-5C27-487e-A020-C2C89E4607B1" Name="HelloWorldInstaller" Language="1033" Version="1.0.0.0" Manufacturer="HelloWorldInstaller" UpgradeCode="E7CB3C76-4D51-48a8-BFB4-6D11B2E2E65B"> <Package InstallerVersion="200" Compressed="yes" /> <Media Id="1" Cabinet="product.cab" EmbedCab="yes" /> <FeatureRef Id="HelloWorld1Feature" /> <FeatureRef Id="HelloWorld2Feature" /> <FeatureRef Id="HelloWorld3Feature" /> </Product> <Fragment> <Directory Id="TARGETDIR" Name="SourceDir"> <Directory Id="ProgramFilesFolder"> <Directory Id="INSTALLLOCATION" Name="Hello World" /> </Directory> <Directory Id="DesktopFolder" Name="Desktop"/> </Directory> </Fragment> <Fragment> <DirectoryRef Id="INSTALLLOCATION"> <Directory Id="HelloWorld1Directory" Name="Hello World 1"> <Component Id="HelloWorld1Component" Guid="6D1D9D33-DA17-4db3-8132-C39F32200C3A"> <File Id="HelloWorld1.exe" Name="HelloWorld1.exe" Source="HelloWorld1.exe" DiskId="1" Checksum="yes" /> </Component> </Directory> <Directory Id="HelloWorld2Directory" Name="Hello World 2"> <Component Id="HelloWorld2Component" Guid="B2D51F85-358B-41a7-8C45-B4BB341158F8"> <File Id="HelloWorld2.exe" Name="HelloWorld2.exe" Source="HelloWorld2.exe" DiskId="1" Checksum="yes" /> </Component> </Directory> <Directory Id="HelloWorld3Directory" Name="Hello World 3"> <Component Id="HelloWorld3Component" Guid="A550223E-792F-4169-90A3-574D4240F3C4"> <File Id="HelloWorld3.exe" Name="HelloWorld3.exe" Source="HelloWorld3.exe" DiskId="1" Checksum="yes" /> </Component> </Directory> </DirectoryRef> </Fragment> <Fragment> <Feature Id="HelloWorld1Feature" Title="Hello World 1" Level="1"> <ComponentRef Id="HelloWorld1Component"/> </Feature> </Fragment> <Fragment> <Feature Id="HelloWorld2Feature" Title="Hello World 2" Level="1"> <ComponentRef Id="HelloWorld2Component"/> </Feature> </Fragment> <Fragment> <Feature 
Id="HelloWorld3Feature" Title="Hello World 3" Level="1"> <ComponentRef Id="HelloWorld3Component"/> </Feature> </Fragment></Wix> Now along with this, for our minor upgrades, we will be looking at releasing patches for our components. For example, let's say we have a ProductA, which has three components - 1,2, and 3. These three components must run either as services, or scheduled tasks. The nature of our product, we cannot shut down all of our services to update or fix one of our components. So, if after we've installed version 1.0, we find a bug in component 2, but we don't want 1 or 3 to be affected by the fix being applied to this bug, we will be releasing a patch for component 2, thus only component 2 will be affected. For our quick example above, we are using HelloWorld1, HelloWorld2, and HelloWorld3 as our 3 components in our software application. The thought is that we should be able to install all three with one MSI, but then update each one independently without it affecting any of the other installed components. So, to demonstrate this, I have created the three console applications above that will display "Hello World 1!", "Hello World 2!", and "Hello World 3!". Then after we release the initial MSI, lets say we find a "bug" that requires us to have HelloWorld1 say "Hello World 1! Updated." instead. Here is what we will do to simulate this: Create the Product.wixobj by executing this command at the command prompt: candle.exe Product.wxs Please remember that in order to call the candle.exe or any of the WIX commands, the Wix install directory should be in your PATH variable. ( Short tutorial on updating PATH environment variable ) Also, please perform the commands in the same directory as your Product.wxs file. Create the first version of your product (lets say 1.0): light.exe Product.wixobj -out ProductA-1.0.msi Now find a bug (change the output of HelloWorld1 to say "Hello World 1! Updated.") then update the assembly version and file version . 
This is important as this is how WIX can tell the exe's are different. Run the same command as step one (for good measure): candle.exe Product.wxs Run nearly the same command as step two: light.exe Product.wixobj -out ProductA-1.1.msi Notice that this is version 1.1 instead of 1.0 (this is the msi with our updated code). However, we don't want to just install this, keep reading. Here is the fun part, we get the difference in the two MSIs with the following command: torch.exe -p -xi ProductA-1.0.wixpdb ProductA-1.1.wixpdb -out Diff.WixMst Now we create the patch file from this (Patch.wxs will be explained below): candle.exe Patch.wxs We will now create the WixMsp file with this command: light.exe Patch.wixobj -out Patch.WixMsp And now, the fun part. Create the MSP file with this command: pyro.exe Patch.WixMsp -out Patch.msp -t RTM Diff.Wixmst Now, if everything went according to plan, you should have two msi's and one msp file. If you install the first msi (ProductA-1.0.msi) and run HelloWorld1.exe, you should see the message, "Hello World 1!". Just for fun (and example), run both the other applications and leave them running (I built in a stop to keep them open). Close HelloWorld1.exe as we are now going to apply the update for that exe, but in doing so we will not affect HelloWorld2.exe or HelloWorld3.exe. If you now install the msp (Patch.msp) file, and then run HelloWorld1.exe, you will see the updated message, "Hello World 1! Updated." 
Now, for the magical Patch.wxs file: <?xml version="1.0" encoding="utf-8"?><Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"> <Patch AllowRemoval="yes" Manufacturer="Dynamo Corp" MoreInfoURL="http://www.dynamocorp.com/" DisplayName="Sample Patch" Description="Small Update Patch" Classification="Update" > <Media Id="5000" Cabinet="RTM.cab"> <PatchBaseline Id="RTM"/> </Media> <PatchFamilyRef Id="SamplePatchFamily"/> </Patch> <Fragment> <PatchFamily Id='SamplePatchFamily' Version='1.0.0' Supersede='yes'> <ComponentRef Id="HelloWorld1Component"/> </PatchFamily> </Fragment></Wix> Doesn't look like much, does it? Well, the most interesting parts are these: <PatchBaseline Id="RTM"/> - This if you recall is used in our creation of the patch msi. The "RTM" is referred to in the last step above: -t RTM - These have to match. <ComponentRef Id="HelloWorld1Component"/> - This points the patch to the correct component to update, in our case HelloWorld1Component. If you've been doing any searching around, the above code may seem familiar because it came from Peter Marcu's Blog : WiX: Building a Patch using the new Patch Building System - Part 3 I also relied heavily on Alex Shevchuk's Blog : From MSI to WiX, Part 8 - Major Upgrade If you're wondering, "Wow, that's a lot of steps, why would anyone do this many steps?", please remember that once the hard work (above) is done, you need to move this into your integration routine. Thats right, integrate, integrate, integrate ! How do you do this? Well, thats a bit more research, and maybe a blog post? - Probably. To get you off on the right foot, here is an awesome article on Automate Releases With MSBuild And Windows Installer XML . Wow, I hope you read all of this (all two of you), and learned a lot. I hope this helps someone other than myself. Thank you! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1744106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/75500/']} | jdg_380034 |
stackexchange | llm_judgeable_groundtruth_similarity | 2726649 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
this is my code
ProductController.cs
public ActionResult Details(string id)
{
    product productx = productDB.products.Single(pr => pr.Product1 == id);
    return View(productx);
}
Details.aspx
<td>
    <%-- : Html.ActionLink("Edit", "Edit", new { id=item.Id }) % -->
    <%: Html.ActionLink("Details", "Details", new { id = item.Product1 })%>
</td>
This is what I'm using to list some products from a SQL database; each product has a link to a Details page to show more information about it. What I'm trying to do is put only the product label in that link, so it shows something like www.mysite.com\products\battery (not the id). I imagined this should work, but it throws a "The data types text and nvarchar are incompatible in the equal to operator." error, and (pr => pr.Product1.Equals(id)) doesn't work either. The error is clear, and I'm asking how I should make it work this way. thanks
Now provide the response and nothing else.
| TEXT columns in SQL Server are considered Large Object data and therefore aren't indexable/searchable. They're also deprecated. So, actually, the problem is in your database, not in your application. If you change the column type to a varchar(max) , you can store the same amount of character data but shouldn't have this problem. Then, update your Linq to SQL entity, and you'll no longer get this particular error. Having said that... a column named ID shouldn't be TEXT or varchar(max) , it should be an auto-increment integer ID or a GUID ( uniqueidentifier ), so you might want to revisit your DB design. But assuming you have good reasons for IDs to be string values of arbitrary size, the above change will allow you to filter on the column. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2726649', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/153957/']} | jdg_380035 |
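For concreteness, a hedged sketch of the column change (the table and column names here are guesses based on the question's productDB.products and Product1 ; adjust to the real schema, and note that existing indexes or constraints on the column may need attention first):

```sql
-- Change the deprecated TEXT column to varchar(max) so it can be
-- compared with the = operator (and filtered on in Linq to SQL).
ALTER TABLE products ALTER COLUMN Product1 varchar(max);
```

After the schema change, refresh the Linq to SQL entity so the mapped column type matches.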
stackexchange | llm_judgeable_groundtruth_similarity | 55609339 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using recursive feature elimination with cross validation (rfecv) as a feature selector for a randomforest classifier as follows.
X = df[[my_features]] #all my features
y = df['gold_standard'] #labels
clf = RandomForestClassifier(random_state = 42, class_weight="balanced")
rfecv = RFECV(estimator=clf, step=1, cv=StratifiedKFold(10), scoring='roc_auc')
rfecv.fit(X,y)
print("Optimal number of features : %d" % rfecv.n_features_)
features=list(X.columns[rfecv.support_])
I am also performing GridSearchCV as follows to tune the hyperparameters of RandomForestClassifier.
X = df[[my_features]] #all my features
y = df['gold_standard'] #labels
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)
rfc = RandomForestClassifier(random_state=42, class_weight = 'balanced')
param_grid = {
    'n_estimators': [200, 500],
    'max_features': ['auto', 'sqrt', 'log2'],
    'max_depth' : [4,5,6,7,8],
    'criterion' :['gini', 'entropy']
}
k_fold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
CV_rfc = GridSearchCV(estimator=rfc, param_grid=param_grid, cv= k_fold, scoring = 'roc_auc')
CV_rfc.fit(x_train, y_train)
print(CV_rfc.best_params_)
print(CV_rfc.best_score_)
print(CV_rfc.best_estimator_)
pred = CV_rfc.predict_proba(x_test)[:,1]
print(roc_auc_score(y_test, pred))
However, I am not clear how to merge feature selection ( rfecv ) with GridSearchCV .
EDIT: When I run the answer suggested by @Gambit I got the following error: ValueError: Invalid parameter criterion for estimator RFECV(cv=StratifiedKFold(n_splits=10, random_state=None, shuffle=False), estimator=RandomForestClassifier(bootstrap=True, class_weight='balanced', criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators='warn', n_jobs=None, oob_score=False, random_state=42, verbose=0, warm_start=False), min_features_to_select=1, n_jobs=None, scoring='roc_auc', step=1, verbose=0). Check the list of available parameters with `estimator.get_params().keys()`. I could resolve the above issue by using estimator__ in the param_grid parameter list. My question now is How to use the selected features and parameters in x_test to verify if the model works fine with unseen data. How can I obtain the best features and train it with the optimal hyperparameters ? I am happy to provide more details if needed.
Now provide the response and nothing else.
| Basically you want to fine tune the hyperparameters of your classifier (with cross validation) after feature selection using recursive feature elimination (with cross validation). The Pipeline object is exactly meant for this purpose of assembling the data transformation and applying an estimator. Maybe you could use a different model ( GradientBoostingClassifier , etc. ) for your final classification. It would be possible with the following approach:
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFECV
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

from sklearn.pipeline import Pipeline

# this is the classifier used for feature selection
clf_featr_sele = RandomForestClassifier(n_estimators=30, random_state=42, class_weight="balanced")
rfecv = RFECV(estimator=clf_featr_sele, step=1, cv=5, scoring='roc_auc')

# you can have a different classifier for your final classifier
clf = RandomForestClassifier(n_estimators=10, random_state=42, class_weight="balanced")
CV_rfc = GridSearchCV(clf, param_grid={'max_depth':[2,3]}, cv=5, scoring='roc_auc')

pipeline = Pipeline([('feature_sele', rfecv), ('clf_cv', CV_rfc)])
pipeline.fit(X_train, y_train)
pipeline.predict(X_test)
Now, you can apply this pipeline (including feature selection) to test data. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55609339', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10704050/']} | jdg_380036
stackexchange | llm_judgeable_groundtruth_similarity | 422210 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In The Pragmatic Programmer , the authors write: One of the benefits of detecting problems as soon as you can is that you can crash earlier, and crashing is often the best thing you can do. The alternative may be to continue, writing corrupted data to some vital database or commanding the washing machine into its twentieth consecutive spin cycle. ...when your code discovers that something that was supposed to be impossible just happened, your program is no longer viable. Anything it does from this point forward becomes suspect, so terminate it as soon as possible. To what extent does this principle apply in the context of GUI applications? That is, when faced with an unanticipated exception or an assertion failure, is the best course of action to terminate the GUI program (possibly with an appropriate error message to the user)? What are the trade-offs involved in applying it or not applying it? What about single-page javascript applications? For example, terminating the page (or perhaps prompting to refresh?) when an uncaught promise rejection is detected.
Now provide the response and nothing else.
| Quoting the same passage from the book (emphasis mine): One of the benefits of detecting problems as soon as you can is thatyou can crash earlier, and crashing is often the best thing you cando. The alternative may be to continue, writing corrupted data to somevital database or commanding the washing machine into its twentiethconsecutive spin cycle. ...when your code discovers that something that was supposed to be impossible just happened, your program is no longer viable. Anythingit does from this point forward becomes suspect, so terminate it assoon as possible. When a programmer uses an assertion, they're saying "This should never happen." Normally, terminating the program under these conditions is an appropriate response, especially since the programmer's assertion has been violated for unknown reasons. This is as true of a program with a GUI as it is for a console program or service. For normal exceptions, the question becomes the same as it's always been: can we meaningfully recover from this exception? That depends; did the exception occur during a write to a critical database, or did the user simply give us a file name that does not exist? | {} | {'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/422210', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/69004/']} | jdg_380037 |
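A small illustration of that distinction (mine, not the answer author's; Python chosen for brevity): an expected failure condition is handled and recovered from, while a violated "this should never happen" invariant terminates via an assertion.

```python
def load_config(path):
    """Recoverable: a missing file is an expected condition -> handle it."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None  # caller can prompt the user for another file name

def apply_discount(price, fraction):
    # "This should never happen": a violated invariant means the program
    # state is suspect -> crash early rather than keep computing with it.
    assert 0.0 <= fraction <= 1.0, "impossible discount fraction"
    return price * (1 - fraction)

print(load_config("no/such/file.cfg"))  # None -> recover, ask the user again
print(apply_discount(100.0, 0.25))      # 75.0
try:
    apply_discount(100.0, 5.0)          # invariant violated -> AssertionError
except AssertionError as e:
    print("crashed early:", e)
```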
stackexchange | llm_judgeable_groundtruth_similarity | 68468850 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to know which radio button option is selected. I am looking for a way to loop through the radio button responses and print the one with checked = 'Checked'. Everything I have tried returns all of the options or returns nothing. Here is a snippet of the html:
<tr onmouseover="this.style.backgroundColor='#f46f66';" onmouseout="this.style.backgroundColor='#d3e2e5';" style="background-color: rgb(155, 300, 59);">
<td width="80%">Select a fruit</td>
<td width="20%">
<input type="radio" name="FruitOptions" value="Peach" checked="checked" disabled="disabled">Peach<br>
<input type="radio" name="FruitOptions" value="Apple" disabled="disabled">Apple<br>
<input type="radio" name="FruitOptions" value="Orange" disabled="disabled">Orange<br>
</td></tr>
I am using vba selenium to scrape a website for this information. Here is what I have tried:
Chrome.Wait 100
Set seconds = Chrome.FindElementsByTag("tr")
j = 2
s = 2
Set mysheet = Sheets("Sheet7")
For Each second In seconds
    On Error Resume Next
    mysheet.Cells(j, s).Value = second.FindElementsByTag("td")(2).Text
    Set inputvs = second.FindElementsByTag("input")
    For Each inputv In inputvs
        If inputv.Attribute("checked").Value = "checked" Then
            'This is where my issue is
            'I want to print the value with the checked attribute
            Debug.Print (inputv.Attribute("value"))
            mysheet.Cells(j, s).Value = inputv.getAttribute("value")
        Else
            Debug.Print ("none")
        End If
    Next
    j = j + 1
Next
Now provide the response and nothing else.
| You need:
std::format("{:%Y/%m/%d %H:%M:%S}", tSysTime);
Or you can simplify it to:
std::format("{:%Y/%m/%d %T}", tSysTime);
If your system_clock::time_point::duration isn't microseconds , you can force it to microseconds precision with:
std::format("{:%Y/%m/%d %T}", floor<microseconds>(tSysTime));
The reason std::format("{:%Y/%m/%d %H:%M:%S}", tSysTime.time_since_epoch()); fails is that the .time_since_epoch() turns the time_point into a duration . And duration s know nothing about dates. For example, if you asked me what the current date and time is and I told you 1,616,600,192,123,123µs, you would look at me funny. A duration knows nothing about an epoch. A duration is just a measure of time between two time_point s. A time_point , on the other hand, knows about an epoch. It holds a duration and means: this much time duration beyond (or before) my epoch. std::format understands the distinction between time_point and duration . And so if the format string asks for dates (e.g. %Y/%m/%d ) and you are formatting a duration , it throws an exception indicating insufficient information to do the requested job. But if you give it a time_point , then std::format knows how to get the date information out of that. This is all part of the type-safety design that <chrono> is built on, so that one can catch as many logic errors as possible, as early as possible.
int64_t i = 1'616'600'192'123'123;
microseconds d{i};
sys_time<microseconds> t{d};
cout << "i = " << i << '\n';
cout << "d = " << d << '\n';
cout << "t = " << t << '\n';
i , d and t all hold the same value in memory: 1,616,600,192,123,123, but mean different things. And you can see these different meanings when you print them out:
i = 1616600192123123
d = 1616600192123123µs
t = 2021-03-24 15:36:32.123123
This is no different than:
char x = 'A';
int y{x};
cout << "x = " << x << '\n'; // x = A
cout << "y = " << y << '\n'; // y = 65 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/68468850', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13930556/']} | jdg_380038
stackexchange | llm_judgeable_groundtruth_similarity | 6912922 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to understand better how Gridview works, in particular auto_fit. Here is the XML layout:
<?xml version="1.0" encoding="utf-8"?>
<GridView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/gridview"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:columnWidth="60dp"
    android:numColumns="auto_fit"/>
And it works fine with a series of six thumbnails (48*48 pixels). In portrait mode, it displays one row, six columns. What I don't understand is why the line android:columnWidth="60dp" is necessary, because auto_fit is expected to find the right number of columns. Without the line android:columnWidth="60dp" , it displays a grid of 3 rows and 2 columns. Here is the ImageAdapter class:
package com.examples.HelloGridView;
import android.content.Context;
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.ImageView;
public class ImageAdapter extends BaseAdapter {
    private Context mContext;
    public ImageAdapter(Context c) {
        mContext = c;
    }
    public int getCount() {
        return mThumbIds.length;
    }
    public Object getItem(int position) {
        return null;
    }
    public long getItemId(int position) {
        return 0;
    }
    // create a new ImageView for each item referenced by the Adapter
    public View getView(int position, View convertView, ViewGroup parent) {
        ImageView imageView;
        if (convertView == null) {
            // if it's not recycled, initialize some attributes
            imageView = new ImageView(mContext);
            imageView.setPadding(0, 0, 0, 0);
        } else {
            imageView = (ImageView) convertView;
        }
        imageView.setImageResource(mThumbIds[position]);
        return imageView;
    }
    // references to our images
    private Integer[] mThumbIds = {
        R.drawable.ic_1, R.drawable.ic_2,
        R.drawable.ic_3, R.drawable.ic_4,
        R.drawable.ic_5, R.drawable.ic_6
    };
}
Thank you for your help.
Now provide the response and nothing else.
| Looking at the GridView source, it is clear that setting the padding and the height on your ImageView will not help you at all. When a column width is not specified, it just chooses a preset number of columns (2):
private void determineColumns(int availableSpace) {
    ...
    if (mRequestedNumColumns == AUTO_FIT) {
        if (requestedColumnWidth > 0) {
            // Client told us to pick the number of columns
            mNumColumns = (availableSpace + requestedHorizontalSpacing) /
                    (requestedColumnWidth + requestedHorizontalSpacing);
        } else {
            // Just make up a number if we don't have enough info
            mNumColumns = 2;
        }
    } else {
        // We picked the columns
        mNumColumns = mRequestedNumColumns;
    }
    if (mNumColumns <= 0) {
        mNumColumns = 1;
    }
    ...
The solution is to measure your column size before setting the GridView's column width. Here is a quick way to measure Views offscreen:
public int measureCellWidth( Context context, View cell )
{
    // We need a fake parent
    FrameLayout buffer = new FrameLayout( context );
    android.widget.AbsListView.LayoutParams layoutParams = new android.widget.AbsListView.LayoutParams(LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT);
    buffer.addView( cell, layoutParams);
    cell.forceLayout();
    cell.measure(1000, 1000);
    int width = cell.getMeasuredWidth();
    buffer.removeAllViews();
    return width;
}
And then you just set the GridView's column width:
gridView.setColumnWidth( width ); | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6912922', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/635549/']} | jdg_380039
stackexchange | llm_judgeable_groundtruth_similarity | 11676120 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why don't when-let and if-let support multiple bindings by default? So:
(when-let [a ... b ...]
  (+ a b))
...instead of:
(when-let [a ...]
  (when-let [b ...]
    (+ a b)))
I am aware that I can write my own macro or use a monad (as described here: http://inclojurewetrust.blogspot.com/2010/12/when-let-maybe.html ).
Now provide the response and nothing else.
| Because (for if-let , at least) it's not obvious what to do with the "else" cases. At least, motivated by Better way to nest if-let in clojure I started to write a macro that did this. Given (if-let* [a ... b ...] action other) it would generate (if-let [a ...] (if-let [b ...] action ?)) and it wasn't clear to me how to continue (there are two places for "else"). You can say that there should be a single alternative for any failure, or none for when-let , but if any of the tests mutate state then things are still going to get messy. In short, it's a little more complicated than I expected, and so I guess the current approach avoids having to make a call on what the solution should be. Another way of saying the same thing: you're assuming if-let should nest like let . A better model might be cond , which isn't a "nested if" but more an "alternative if", and so doesn't fit well with scopes... or, yet another way of saying it: if doesn't handle this case any better. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11676120', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/206020/']} | jdg_380040 |
Subsets and Splits